[1] Csurka G. Domain adaptation for visual applications: a comprehensive survey[J]. arXiv:1702.05374, 2017.
[2] Weiss K R, Khoshgoftaar T M, Wang D D. A survey of transfer learning[J]. Journal of Big Data, 2016, 3: 9.
[3] Day O, Khoshgoftaar T M. A survey on heterogeneous transfer learning[J]. Journal of Big Data, 2017, 4: 29.
[4] Moon S, Carbonell J G. Completely heterogeneous transfer learning with attention: what and what not to transfer[C]//Proceedings of the 26th International Joint Conference on Artificial Intelligence, Melbourne, Aug 19-25, 2017: 2508-2514.
[5] Long M S, Wang J M, Ding G G, et al. Transfer joint matching for unsupervised domain adaptation[C]//Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, Jun 21-23, 2014. Washington: IEEE Computer Society, 2014: 1410-1417.
[6] Aljundi R, Emonet R, Muselet D, et al. Landmarks-based kernelized subspace alignment for unsupervised domain adaptation[C]//Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, Jun 7-12, 2015. Washington: IEEE Computer Society, 2015: 56-63.
[7] Long M S, Wang J M, Ding G G, et al. Adaptation regularization: a general framework for transfer learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2014, 26(5): 1076-1089.
[8] Wang J D, Feng W J, Chen Y Q, et al. Visual domain adaptation with manifold embedded distribution alignment[J]. arXiv:1807.07258, 2018.
[9] Sun B C, Feng J S, Saenko K. Return of frustratingly easy domain adaptation[C]//Proceedings of the 30th AAAI Conference on Artificial Intelligence, Phoenix, Feb 12-17, 2016. Menlo Park: AAAI, 2016: 2058-2065.
[10] Yan K, Kou L, Zhang D. Learning domain-invariant subspace using domain features and independence maximization[J]. IEEE Transactions on Systems, Man, and Cybernetics, 2018, 48(1): 288-299.
[11] Zhong E H, Fan W, Peng J, et al. Cross domain distribution adaptation via kernel mapping[C]//Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Paris, Jun 28 - Jul 1, 2009. New York: ACM, 2009: 1027-1036.
[12] Fernando B, Habrard A, Sebban M, et al. Subspace alignment for domain adaptation[J]. arXiv:1409.5241, 2014.
[13] Friedjungová M, Jiřina M. Asymmetric heterogeneous transfer learning: a survey[C]//Proceedings of the 6th International Conference on Data Science, Technology and Applications, Madrid, Jul 24-26, 2017: 17-27.
[14] Xiao M, Guo Y H. Feature space independent semi-supervised domain adaptation via kernel matching[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(1): 54-66.
[15] Duan L X, Tsang I W, Xu D. Domain transfer multiple kernel learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(3): 465-479.
[16] Chen Y M, Song S J, Li S, et al. Domain space transfer extreme learning machine for domain adaptation[J]. IEEE Transactions on Systems, Man, and Cybernetics, 2019, 49(5): 1909-1922.
[17] Deng Z H, Xu P, Xie L X, et al. Transductive joint knowledge transfer TSK FS for recognition of epileptic EEG signals[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2018, 26(8): 1481-1494.
[18] Li J J, Lu K, Huang Z, et al. Transfer independently together: a generalized framework for domain adaptation[J]. IEEE Transactions on Cybernetics, 2019, 49(6): 2144-2155.
[19] Si S, Tao D C, Geng B. Bregman divergence-based regularization for transfer subspace learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(7): 929-942.
[20] Zhuang F Z, Cheng X H, Luo P, et al. Supervised representation learning: transfer learning with deep autoencoders[C]//Proceedings of the 24th International Joint Conference on Artificial Intelligence, Buenos Aires, Jul 25-31, 2015. Menlo Park: AAAI, 2015: 4119-4125.
[21] Shen J, Qu Y R, Zhang W N, et al. Wasserstein distance guided representation learning for domain adaptation[C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence, the 30th Innovative Applications of Artificial Intelligence, and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence, New Orleans, Feb 2-7, 2018. Menlo Park: AAAI, 2018: 4058-4065.
[22] Tahmoresnezhad J, Hashemi S. Visual domain adaptation via transfer feature learning[J]. Knowledge and Information Systems, 2017, 50(2): 585-605.
[23] Wang C, Mahadevan S. Heterogeneous domain adaptation using manifold alignment[C]//Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Jul 16-22, 2011. Menlo Park: AAAI, 2011: 1541-1546.
[24] Yeh Y R, Huang C H, Wang Y C F. Heterogeneous domain adaptation and classification by exploiting the correlation subspace[J]. IEEE Transactions on Image Processing, 2014, 23(5): 2009-2018.
[25] Yan Y G, Li W, Ng M K P, et al. Learning discriminative correlation subspace for heterogeneous domain adaptation[C]//Proceedings of the 26th International Joint Conference on Artificial Intelligence, Melbourne, Aug 19-25, 2017: 3252-3258.
[26] Mehrkanoon S, Suykens J A K. Regularized semipaired kernel CCA for domain adaptation[J]. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29: 3199-3213.
[27] Shi X, Liu Q, Fan W, et al. Transfer learning on heterogeneous feature spaces via spectral transformation[C]//Proceedings of the 10th IEEE International Conference on Data Mining, Sydney, Dec 14-17, 2010: 1049-1054.
[28] Kulis B, Saenko K, Darrell T. What you saw is not what you get: domain adaptation using asymmetric kernel transforms[C]//Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, Jun 20-25, 2011. Washington: IEEE Computer Society, 2011: 1785-1792.
[29] Hoffman J, Rodner E, Donahue J, et al. Efficient learning of domain-invariant image representations[C]//Proceedings of the 1st International Conference on Learning Representations, Scottsdale, May 2-4, 2013: 1-42.
[30] Li W, Duan L X, Xu D, et al. Learning with augmented features for supervised and semi-supervised heterogeneous domain adaptation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(6): 1134-1148.
[31] Zhou J T, Tsang I W, Pan S J, et al. Heterogeneous domain adaptation for multiple classes[C]//Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, Reykjavik, Apr 22-25, 2014: 1095-1103.
[32] Tsai Y H, Yeh Y, Wang Y F. Learning cross-domain landmarks for heterogeneous domain adaptation[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 5081-5090.
[33] Chen W Y, Hsu T H, Tsai Y H, et al. Transfer neural trees for heterogeneous domain adaptation[C]//LNCS 9909: Proceedings of the 2016 European Conference on Computer Vision, Amsterdam, Oct 11-14, 2016. Berlin, Heidelberg: Springer, 2016: 399-414.
[34] Wang X S, Ma Y T, Cheng Y H, et al. Heterogeneous domain adaptation network based on autoencoder[J]. Journal of Parallel and Distributed Computing, 2017, 117: 281-291.
[35] Xu Z J, Sun S L. Multi-view transfer learning with AdaBoost[C]//Proceedings of the 23rd IEEE International Conference on Tools with Artificial Intelligence, Boca Raton, Nov 7-9, 2011: 399-402.
[36] Yang P, Gao W. Multi-view discriminant transfer learning[C]//Proceedings of the 23rd International Joint Conference on Artificial Intelligence, Beijing, Aug 3-9, 2013. Menlo Park: AAAI, 2013: 1848-1854.
[37] Hoffman J, Kulis B, Darrell T, et al. Discovering latent domains for multisource domain adaptation[C]//LNCS 7573: Proceedings of the 12th European Conference on Computer Vision, Florence, Oct 7-13, 2012: 702-715.
[38] Long M S, Wang J M, Ding G G, et al. Transfer feature learning with joint distribution adaptation[C]//Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, Dec 1-8, 2013. Washington: IEEE Computer Society, 2013: 2200-2207.
[39] Hardoon D R, Szedmak S, Shawe-Taylor J. Canonical correlation analysis: an overview with application to learning methods[J]. Neural Computation, 2004, 16: 2639-2664.
[40] He X F, Niyogi P. Locality preserving projections[C]//Proceedings of the 2003 Advances in Neural Information Processing Systems. Cambridge: MIT Press, 2003: 153-160.
[41] Belkin M, Niyogi P. Laplacian eigenmaps and spectral techniques for embedding and clustering[C]//Proceedings of the 2001 Advances in Neural Information Processing Systems, Vancouver, Dec 3-8, 2001. Cambridge: MIT Press, 2001: 585-591.
[42] Sharma A, Paliwal K K. Linear discriminant analysis for the small sample size problem: an overview[J]. International Journal of Machine Learning and Cybernetics, 2015, 6(3): 443-454.