[1] 汪航, 陈晓, 田晟兆, 等. 基于小样本学习的SAR图像识别[J]. 计算机科学, 2020, 47(5): 124-128.
WANG H, CHEN X, TIAN S Z, et al. SAR image recognition based on few-shot learning[J]. Computer Science, 2020, 47(5): 124-128.
[2] JANKOWSKI N, DUCH W, GRĄBCZEWSKI K. Meta-learning in computational intelligence[M]. Berlin: Springer Science and Business Media, 2011: 97-115.
[3] LAKE B, SALAKHUTDINOV R. One-shot learning by inverting a compositional causal process[C]//Advances in Neural Information Processing Systems 26, Lake Tahoe, Dec 5-8, 2013: 2526-2534.
[4] THRUN S, PRATT L. Learning to learn: introduction and overview[M]//Learning to Learn. Boston: Springer-Verlag, 1998: 3-17.
[5] 李凡长, 刘洋, 吴鹏翔, 等. 元学习研究综述[J]. 计算机学报, 2021, 44(2): 422-446.
LI F C, LIU Y, WU P X, et al. A survey on recent advances in meta-learning[J]. Chinese Journal of Computers, 2021, 44(2): 422-446.
[6] KOCH G, ZEMEL R, SALAKHUTDINOV R. Siamese neural networks for one-shot image recognition[C]//Proceedings of the 32nd International Conference on Machine Learning, Lille, Jul 10-11, 2015: 1-30.
[7] VINYALS O, BLUNDELL C, LILLICRAP T, et al. Matching networks for one shot learning[C]//Advances in Neural Information Processing Systems 29, Barcelona, Dec 5-10, 2016: 3630-3638.
[8] SNELL J, SWERSKY K, ZEMEL R S. Prototypical networks for few-shot learning[C]//Advances in Neural Information Processing Systems 30, Long Beach, Dec 4-9, 2017: 4077-4087.
[9] GAO T Y, HAN X, LIU Z, et al. Hybrid attention-based prototypical networks for noisy few-shot relation classification[C]//Proceedings of the 2019 AAAI Conference on Artificial Intelligence, Hawaii, Jan 27-Feb 1, 2019. Menlo Park: AAAI, 2019: 6407-6414.
[10] XU S L, ZHANG F, WEI X S, et al. Dual attention networks for few-shot fine-grained recognition[C]//Proceedings of the 2022 AAAI Conference on Artificial Intelligence, Dublin, May 22-27, 2022. Menlo Park: AAAI, 2022: 2911-2919.
[11] ZHOU B, KHOSLA A, LAPEDRIZA A, et al. Learning deep features for discriminative localization[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 26-Jul 1, 2016. Washington: IEEE Computer Society, 2016: 2921-2929.
[12] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 3-19.
[13] REN M Y, TRIANTAFILLOU E, RAVI S, et al. Meta-learning for semi-supervised few-shot classification[J]. arXiv:1803.00676, 2018.
[14] WAH C, BRANSON S, WELINDER P, et al. The Caltech-UCSD Birds-200-2011 dataset: CNS-TR-2011-001[R]. California Institute of Technology, 2011.
[15] BERTINETTO L, HENRIQUES J F, TORR P H S, et al. Meta-learning with differentiable closed-form solvers[J]. arXiv:1805.08136, 2018.
[16] ORESHKIN B, RODRÍGUEZ LÓPEZ P, LACOSTE A. TADAM: task dependent adaptive metric for improved few-shot learning[C]//Advances in Neural Information Processing Systems 31, Montréal, Dec 3-8, 2018: 719-729.
[17] CAI Q, PAN Y, YAO T, et al. Memory matching networks for one-shot image recognition[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-21, 2018. Piscataway: IEEE, 2018: 4080-4088.
[18] KRIZHEVSKY A, HINTON G. Learning multiple layers of features from tiny images[R]. Toronto: University of Toronto, 2009.
[19] HE K, FAN H, WU Y, et al. Momentum contrast for unsupervised visual representation learning[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 14-19, 2020. Piscataway: IEEE, 2020: 9729-9738.
[20] LUO X, CHEN Y, WEN L, et al. Boosting few-shot classification with view-learnable contrastive learning[C]//Proceedings of the 2021 IEEE International Conference on Multimedia and Expo, Shenzhen, Jul 5-9, 2021. Piscataway: IEEE, 2021: 1-6.
[21] LI A, LUO T, XIANG T, et al. Few-shot learning with global class representations[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 9715-9724.
[22] LI W, WANG L, XU J, et al. Revisiting local descriptor based image-to-class measure for few-shot learning[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 7260-7268.
[23] HU Z, LI Z, WANG X, et al. Unsupervised descriptor selection based meta-learning networks for few-shot classification[J]. Pattern Recognition, 2022, 122: 108304.
[24] WANG Z, MA P, CHI Z, et al. Multi-attention mutual information distributed framework for few-shot learning[J]. Expert Systems with Applications, 2022, 202: 117062.
[25] MISHRA N, ROHANINEJAD M, CHEN X, et al. A simple neural attentive meta-learner[J]. arXiv:1707.03141, 2017.
[26] MUNKHDALAI T, YUAN X, MEHRI S, et al. Rapid adaptation with conditionally shifted neurons[C]//Proceedings of the 2018 International Conference on Machine Learning, Stockholm, May 26-28, 2018: 3664-3673.
[27] RAVICHANDRAN A, BHOTIKA R, SOATTO S. Few-shot learning with embedded class models and shot-free meta training[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 331-339.
[28] LEE K, MAJI S, RAVICHANDRAN A, et al. Meta-learning with differentiable convex optimization[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 10657-10665.
[29] HOU R, CHANG H, MA B, et al. Cross attention network for few-shot classification[C]//Advances in Neural Information Processing Systems 32, Vancouver, Dec 8-14, 2019: 4005-4016.
[30] CHEN W Y, LIU Y C, KIRA Z, et al. A closer look at few-shot classification[C]//Proceedings of the 2019 International Conference on Learning Representations, New Orleans, May 6-9, 2019: 2-6.
[31] LIU Y, SCHIELE B, SUN Q. An ensemble of epoch-wise empirical Bayes for few-shot learning[C]//Proceedings of the 16th European Conference on Computer Vision, Glasgow, Aug 23-28, 2020. Cham: Springer, 2020: 404-421.
[32] GAO F, CAI L, YANG Z, et al. Multi-distance metric network for few-shot learning[J]. International Journal of Machine Learning and Cybernetics, 2022, 13(9): 2495-2506.
[33] GAO F, LUO X, YANG Z, et al. Label smoothing and task-adaptive loss function based on prototype network for few-shot learning[J]. Neural Networks, 2022, 156: 39-48.
[34] LI H, EIGEN D, DODGE S, et al. Finding task-relevant features for few-shot learning by category traversal[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 1-10.
[35] YANG F, WANG R, CHEN X. SEGA: semantic guided attention on visual prototype for few-shot learning[C]//Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, Jun 5-9, 2022. Piscataway: IEEE, 2022: 1056-1066.
[36] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//Proceedings of the 2017 International Conference on Machine Learning, Sydney, Aug 6-11, 2017: 1126-1135.
[37] SUNG F, YANG Y, ZHANG L, et al. Learning to compare: relation network for few-shot learning[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-21, 2018. Piscataway: IEEE, 2018: 1199-1208.
[38] HUANG H, ZHANG J, YU L, et al. TOAN: target-oriented alignment network for fine-grained image categorization with few labeled samples[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 32(2): 853-866.
[39] YE H J, HU H, ZHAN D C, et al. Learning embedding adaptation for few-shot learning[J]. arXiv:1812.03664, 2018.
[40] ZHANG C, CAI Y, LIN G, et al. DeepEMD: few-shot image classification with differentiable earth mover's distance and structured classifiers[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 14-19, 2020. Piscataway: IEEE, 2020: 12203-12213.
[41] QIN Z, WANG H, MAWULI C B, et al. Multi-instance attention network for few-shot learning[J]. Information Sciences, 2022, 611: 464-475.
[42] HU S X, MORENO P G, XIAO Y, et al. Empirical Bayes transductive meta-learning with synthetic gradients[C]//Proceedings of the 2020 International Conference on Learning Representations, Addis Ababa, Apr 26-30, 2020.
[43] QIAO L, SHI Y, LI J, et al. Transductive episodic-wise adaptive metric for few-shot learning[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 3603-3612.
[44] SIMON C, KONIUSZ P, NOCK R, et al. Adaptive subspaces for few-shot learning[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 14-19, 2020. Piscataway: IEEE, 2020: 4136-4145.
[45] KIM J, KIM H, KIM G. Model-agnostic boundary-adversarial sampling for test-time generalization in few-shot learning[C]//Proceedings of the 16th European Conference on Computer Vision, Glasgow, Aug 23-28, 2020. Cham: Springer, 2020: 599-617.
[46] GIDARIS S, BURSUC A, KOMODAKIS N, et al. Boosting few-shot visual learning with self-supervision[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 8059-8068.
[47] TIAN Y, WANG Y, KRISHNAN D, et al. Rethinking few-shot image classification: a good embedding is all you need?[C]//Proceedings of the 16th European Conference on Computer Vision, Glasgow, Aug 23-28, 2020. Cham: Springer, 2020: 266-282.
[48] OLIVER A, ODENA A, RAFFEL C A, et al. Realistic evaluation of deep semi-supervised learning algorithms[C]//Advances in Neural Information Processing Systems 31, Montreal, Dec 3-8, 2018: 3235-3246.
[49] WANG X, CAI J, JI S, et al. Self-adaptive label augmentation for semi-supervised few-shot classification[J]. arXiv:2206.08150, 2022.
[50] LIU Y, LEE J, PARK M, et al. Learning to propagate labels: transductive propagation network for few-shot learning[J]. arXiv:1805.10002, 2018.