A Survey of Transfer Learning and Categories

Document Type: Research Paper

Authors

Electrical and Computer Engineering Faculty, Semnan University, Semnan, Iran

Abstract

Machine learning and data mining techniques are applied in a wide variety of real-world scenarios. Traditional machine learning frameworks assume that the training and test data are drawn from the same domain, share the same feature space, and follow the same distribution. In many realistic machine learning settings, however, this assumption fails to hold, particularly when gathering training data in the domain of interest is prohibitively costly or impossible. In such cases, high-performance learners must be built from data that is more easily collected in other, related domains. This approach is known as transfer learning: a learning setting inspired by the human ability to carry knowledge acquired in one task over to new tasks and thereby learn them more quickly. Transfer learning aims to provide a framework for applying previously acquired knowledge and skills to tackle new but related problems faster and more effectively. In contrast to traditional machine learning techniques, transfer learning methods use data from auxiliary domains to improve predictive modelling of distinct data patterns in the current domain. Transfer learning thus focuses on improving the performance of target learners on target domains by transferring data or knowledge from different but related source domains, so that the dependence on large amounts of labelled target-domain data for building target learners can be reduced. This survey explains the categories of transfer learning, organized by problem setting and by solution approach, and discusses experimental results, application examples, and perspectives related to transfer learning. It also provides a concise overview of transfer learning processes and methods, which may help readers better understand the current state of research and its underlying ideas.
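As a concrete illustration of the setting described above, the following minimal sketch shows one common transfer learning strategy, parameter transfer by fine-tuning. It is an illustrative example only, not a method taken from the survey: it assumes PyTorch with a recent torchvision (0.13 or later), uses an ImageNet-pretrained ResNet-18 as a stand-in for the source-domain knowledge, and generates random tensors in place of a small labelled target-domain dataset.

```python
# Minimal parameter-transfer (fine-tuning) sketch, assuming PyTorch and
# torchvision >= 0.13. The pretrained weights play the role of the source
# domain; the tiny random dataset stands in for scarce target-domain data.
import torch
import torch.nn as nn
from torchvision import models

# Source-domain knowledge: a ResNet-18 pretrained on ImageNet.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred layers so only the new classifier head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the head for a hypothetical 3-class target task.
backbone.fc = nn.Linear(backbone.fc.in_features, 3)

# Random stand-in for the small labelled target-domain dataset.
x_target = torch.randn(8, 3, 224, 224)
y_target = torch.randint(0, 3, (8,))

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

backbone.train()
for epoch in range(3):  # a few epochs suffice with so few trainable parameters
    optimizer.zero_grad()
    loss = loss_fn(backbone(x_target), y_target)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```

The key point the sketch illustrates is that only the small task-specific head is estimated from the target data, while the bulk of the model's parameters are transferred from the source domain, which is what reduces the amount of target-domain data needed.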

