{"id":5430,"date":"2022-08-09T21:35:40","date_gmt":"2022-08-09T13:35:40","guid":{"rendered":"http:\/\/139.9.1.231\/?p=5430"},"modified":"2022-10-24T11:09:55","modified_gmt":"2022-10-24T03:09:55","slug":"few-shot-papers","status":"publish","type":"post","link":"http:\/\/139.9.1.231\/index.php\/2022\/08\/09\/few-shot-papers\/","title":{"rendered":"Few-Shot Papers&#8211;\u5c0f\u6837\u672c\u5b66\u4e60\u8bba\u6587\u6c47\u603b"},"content":{"rendered":"\n<p>\u6765\u81eaGitHub\u4ed3\u5e93\uff1a<a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers\"><em>https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers<\/em><\/a><\/p>\n\n\n\n<p>This repository contains few-shot learning (FSL) papers mentioned in our FSL survey published in ACM Computing Surveys (JCR Q1, CORE A*).<\/p>\n\n\n\n<p>For convenience, we also include public implementations of respective authors.<\/p>\n\n\n\n<p>We will update this paper list to include new FSL papers periodically.<\/p>\n\n\n\n<h2><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#citation\"><\/a>Citation<\/h2>\n\n\n\n<p>Please cite our paper if you find it helpful.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>@article{wang2020generalizing,\n  title={Generalizing from a few examples: A survey on few-shot learning},\n  author={Wang, Yaqing and Yao, Quanming and Kwok, James T and Ni, Lionel M},\n  journal={ACM Computing Surveys},\n  volume={53},\n  number={3},\n  pages={1--34},\n  year={2020},\n  publisher={ACM New York, NY, USA}\n}\n<\/code><\/pre>\n\n\n\n<h2><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#content\"><\/a>Content<\/h2>\n\n\n\n<ol><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Survey\">Survey<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Data\">Data<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Model\">Model<\/a><ol><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Multitask-Learning\">Multitask Learning<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Embedding\/Metric-Learning\">Embedding\/Metric Learning<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Learning-with-External-Memory\">Learning with External Memory<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Generative-Modeling\">Generative Modeling<\/a><\/li><\/ol><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Algorithm\">Algorithm<\/a><ol><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Refining-Existing-Parameters\">Refining Existing Parameters<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Refining-Meta-learned-Parameters\">Refining Meta-learned Parameters<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Learning-Search-Steps\">Learning Search Steps<\/a><\/li><\/ol><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Applications\">Applications<\/a><ol><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Computer-Vision\">Computer Vision<\/a><\/li><li><a 
href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Robotics\">Robotics<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Natural-Language-Processing\">Natural Language Processing<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Acoustic-Signal-Processing\">Acoustic Signal Processing<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Recommendation\">Recommendation<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#others\">Others<\/a><\/li><\/ol><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Theories\">Theories<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Few-shot-Learning-and-Zero-shot-Learning\">Few-shot Learning and Zero-shot Learning<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Variants-of-Few-shot-Learning\">Variants of Few-shot Learning<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Datasets\/Benchmarks\">Datasets\/Benchmarks<\/a><\/li><li><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#Software-Library\">Software Library<\/a><\/li><\/ol>\n\n\n\n<h2><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#survey\"><\/a><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#content\">Survey<\/a><\/h2>\n\n\n\n<ol><li><strong>Generalizing from a few examples: A survey on few-shot learning,<\/strong>&nbsp;CSUR, 2020&nbsp;<em>Y. Wang, Q. Yao, J. T. Kwok, and L. M. Ni.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3386252?cid=99659542534\">paper<\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1904.05046\">arXiv<\/a><\/li><\/ol>\n\n\n\n<h2><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#data\"><\/a><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#content\">Data<\/a><\/h2>\n\n\n\n<ol><li><strong>Learning from one example through shared densities on transforms,<\/strong>&nbsp;in CVPR, 2000.&nbsp;<em>E. G. Miller, N. E. Matsakis, and P. A. Viola.<\/em>&nbsp;<a href=\"https:\/\/people.cs.umass.edu\/~elm\/papers\/Miller_congealing.pdf\">paper<\/a><\/li><li><strong>Domain-adaptive discriminative one-shot learning of gestures,<\/strong>&nbsp;in ECCV, 2014.&nbsp;<em>T. Pfister, J. Charles, and A. Zisserman.<\/em>&nbsp;<a href=\"https:\/\/www.robots.ox.ac.uk\/~vgg\/publications\/2014\/Pfister14\/pfister14.pdf\">paper<\/a><\/li><li><strong>One-shot learning of scene locations via feature trajectory transfer,<\/strong>&nbsp;in CVPR, 2016.&nbsp;<em>R. Kwitt, S. Hegenbart, and M. Niethammer.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_cvpr_2016\/papers\/Kwitt_One-Shot_Learning_of_CVPR_2016_paper.pdf\">paper<\/a><\/li><li><strong>Low-shot visual recognition by shrinking and hallucinating features,<\/strong>&nbsp;in ICCV, 2017.&nbsp;<em>B. Hariharan and R. 
5. **Improving one-shot learning through fusing side information,** arXiv preprint, 2017. *Y. H. Tsai and R. Salakhutdinov.* [paper](https://lld-workshop.github.io/2017/papers/LLD_2017_paper_31.pdf)
6. **Fast parameter adaptation for few-shot image captioning and visual question answering,** in ACM MM, 2018. *X. Dong, L. Zhu, D. Zhang, Y. Yang, and F. Wu.* [paper](https://xuanyidong.com/resources/papers/ACM-MM-18-FPAIT.pdf)
7. **Exploit the unknown gradually: One-shot video-based person re-identification by stepwise learning,** in CVPR, 2018. *Y. Wu, Y. Lin, X. Dong, Y. Yan, W. Ouyang, and Y. Yang.* [paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wu_Exploit_the_Unknown_CVPR_2018_paper.pdf)
8. **Low-shot learning with large-scale diffusion,** in CVPR, 2018. *M. Douze, A. Szlam, B. Hariharan, and H. Jégou.* [paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Douze_Low-Shot_Learning_With_CVPR_2018_paper.pdf)
9. **Diverse few-shot text classification with multiple metrics,** in NAACL-HLT, 2018. *M. Yu, X. Guo, J. Yi, S. Chang, S. Potdar, Y. Cheng, G. Tesauro, H. Wang, and B. Zhou.* [paper](https://www.aclweb.org/anthology/N18-1109.pdf) [code](https://github.com/Gorov/DiverseFewShot_Amazon)
10. **Delta-encoder: An effective sample synthesis method for few-shot object recognition,** in NeurIPS, 2018. *E. Schwartz, L. Karlinsky, J. Shtok, S. Harary, M. Marder, A. Kumar, R. Feris, R. Giryes, and A. Bronstein.* [paper](https://papers.nips.cc/paper/7549-delta-encoder-an-effective-sample-synthesis-method-for-few-shot-object-recognition.pdf)
11. **Low-shot learning via covariance-preserving adversarial augmentation networks,** in NeurIPS, 2018. *H. Gao, Z. Shou, A. Zareian, H. Zhang, and S. Chang.* [paper](https://papers.nips.cc/paper/7376-low-shot-learning-via-covariance-preserving-adversarial-augmentation-networks.pdf)
12. **Learning to self-train for semi-supervised few-shot classification,** in NeurIPS, 2019. *X. Li, Q. Sun, Y. Liu, S. Zheng, Q. Zhou, T.-S. Chua, and B. Schiele.* [paper](https://papers.nips.cc/paper/9216-learning-to-self-train-for-semi-supervised-few-shot-classification.pdf)
13. **Few-shot learning with global class representations,** in ICCV, 2019. *A. Li, T. Luo, T. Xiang, W. Huang, and L. Wang.* [paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Li_Few-Shot_Learning_With_Global_Class_Representations_ICCV_2019_paper.pdf)
14. **AutoAugment: Learning augmentation policies from data,** in CVPR, 2019. *E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le.* [paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Cubuk_AutoAugment_Learning_Augmentation_Strategies_From_Data_CVPR_2019_paper.pdf)
15. **EDA: Easy data augmentation techniques for boosting performance on text classification tasks,** in EMNLP-IJCNLP, 2019. *J. Wei and K. Zou.* [paper](https://www.aclweb.org/anthology/D19-1670.pdf)
16. **LaSO: Label-set operations networks for multi-label few-shot learning,** in CVPR, 2019. *A. Alfassy, L. Karlinsky, A. Aides, J. Shtok, S. Harary, R. Feris, R. Giryes, and A. M. Bronstein.* [paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Alfassy_LaSO_Label-Set_Operations_Networks_for_Multi-Label_Few-Shot_Learning_CVPR_2019_paper.pdf) [code](https://github.com/leokarlin/LaSO)
17. **Image deformation meta-networks for one-shot learning,** in CVPR, 2019. *Z. Chen, Y. Fu, Y.-X. Wang, L. Ma, W. Liu, and M. Hebert.* [paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Chen_Image_Deformation_Meta-Networks_for_One-Shot_Learning_CVPR_2019_paper.pdf) [code](https://github.com/tankche1/IDeMe-Net)
18. **Spot and learn: A maximum-entropy patch sampler for few-shot image classification,** in CVPR, 2019. *W.-H. Chu, Y.-J. Li, J.-C. Chang, and Y.-C. F. Wang.* [paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Chu_Spot_and_Learn_A_Maximum-Entropy_Patch_Sampler_for_Few-Shot_Image_CVPR_2019_paper.pdf)
19. **Data augmentation using learned transformations for one-shot medical image segmentation,** in CVPR, 2019. *A. Zhao, G. Balakrishnan, F. Durand, J. V. Guttag, and A. V. Dalca.* [paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Zhao_Data_Augmentation_Using_Learned_Transformations_for_One-Shot_Medical_Image_Segmentation_CVPR_2019_paper.pdf)
20. **Adversarial feature hallucination networks for few-shot learning,** in CVPR, 2020. *K. Li, Y. Zhang, K. Li, and Y. Fu.* [paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Adversarial_Feature_Hallucination_Networks_for_Few-Shot_Learning_CVPR_2020_paper.pdf)
21. **Instance credibility inference for few-shot learning,** in CVPR, 2020. *Y. Wang, C. Xu, C. Liu, L. Zhang, and Y. Fu.* [paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_Instance_Credibility_Inference_for_Few-Shot_Learning_CVPR_2020_paper.pdf)
22. **Diversity transfer network for few-shot learning,** in AAAI, 2020. *M. Chen, Y. Fang, X. Wang, H. Luo, Y. Geng, X. Zhang, C. Huang, W. Liu, and B. Wang.* [paper](https://aaai.org/ojs/index.php/AAAI/article/view/6628) [code](https://github.com/Yuxin-CV/DTN)
23. **Neural snowball for few-shot relation learning,** in AAAI, 2020. *T. Gao, X. Han, R. Xie, Z. Liu, F. Lin, L. Lin, and M. Sun.* [paper](https://aaai.org/ojs/index.php/AAAI/article/view/6281) [code](https://github.com/thunlp/Neural-Snowball)
24. **Associative alignment for few-shot image classification,** in ECCV, 2020. *A. Afrasiyabi, J. Lalonde, and C. Gagné.* [paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123500018.pdf) [code](https://github.com/ArmanAfrasiyabi/associative-alignment-fs)
25. **Information maximization for few-shot learning,** in NeurIPS, 2020. *M. Boudiaf, I. Ziko, J. Rony, J. Dolz, P. Piantanida, and I. B. Ayed.* [paper](https://proceedings.neurips.cc/paper/2020/file/196f5641aa9dc87067da4ff90fd81e7b-Paper.pdf) [code](https://github.com/mboudiaf/TIM)
26. **Self-training for few-shot transfer across extreme task differences,** in ICLR, 2021. *C. P. Phoo and B. Hariharan.* [paper](https://openreview.net/pdf?id=O3Y56aqpChA)
27. **Free lunch for few-shot learning: Distribution calibration,** in ICLR, 2021. *S. Yang, L. Liu, and M. Xu.* [paper](https://openreview.net/pdf?id=JWOiYxMG92s) [code](https://github.com/ShuoYang-1998/ICLR2021-Oral_Distribution_Calibration)
28. **Parameterless transductive feature re-representation for few-shot learning,** in ICML, 2021. *W. Cui and Y. Guo.* [paper](http://proceedings.mlr.press/v139/cui21a/cui21a.pdf)
29. **Learning intact features by erasing-inpainting for few-shot classification,** in AAAI, 2021. *J. Li, Z. Wang, and X. Hu.* [paper](https://ojs.aaai.org/index.php/AAAI/article/view/17021/16828)
30. **Variational feature disentangling for fine-grained few-shot classification,** in ICCV, 2021. *J. Xu, H. Le, M. Huang, S. Athar, and D. Samaras.* [paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Xu_Variational_Feature_Disentangling_for_Fine-Grained_Few-Shot_Classification_ICCV_2021_paper.pdf)
31. **Coarsely-labeled data for better few-shot transfer,** in ICCV, 2021. *C. P. Phoo and B. Hariharan.* [paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Phoo_Coarsely-Labeled_Data_for_Better_Few-Shot_Transfer_ICCV_2021_paper.pdf)
32. **Pseudo-loss confidence metric for semi-supervised few-shot learning,** in ICCV, 2021. *K. Huang, J. Geng, W. Jiang, X. Deng, and Z. Xu.* [paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Huang_Pseudo-Loss_Confidence_Metric_for_Semi-Supervised_Few-Shot_Learning_ICCV_2021_paper.pdf)
33. **Iterative label cleaning for transductive and semi-supervised few-shot learning,** in ICCV, 2021. *M. Lazarou, T. Stathaki, and Y. Avrithis.* [paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Lazarou_Iterative_Label_Cleaning_for_Transductive_and_Semi-Supervised_Few-Shot_Learning_ICCV_2021_paper.pdf)
34. **Meta two-sample testing: Learning kernels for testing with limited data,** in NeurIPS, 2021. *F. Liu, W. Xu, J. Lu, and D. J. Sutherland.* [paper](https://proceedings.neurips.cc/paper/2021/file/2e6d9c6052e99fcdfa61d9b9da273ca2-Paper.pdf)
35. **Dynamic distillation network for cross-domain few-shot recognition with unlabeled data,** in NeurIPS, 2021. *A. Islam, C.-F. Chen, R. Panda, L. Karlinsky, R. Feris, and R. Radke.* [paper](https://proceedings.neurips.cc/paper/2021/file/1d6408264d31d453d556c60fe7d0459e-Paper.pdf)
36. **Towards better understanding and better generalization of low-shot classification in histology images with contrastive learning,** in ICLR, 2022. *J. Yang, H. Chen, J. Yan, X. Chen, and J. Yao.* [paper](https://openreview.net/pdf?id=kQ2SOflIOVC) [code](https://github.com/TencentAILabHealthcare/Few-shot-WSI)
37. **FlipDA: Effective and robust data augmentation for few-shot learning,** in ACL, 2022. *J. Zhou, Y. Zheng, J. Tang, L. Jian, and Z. Yang.* [paper](https://aclanthology.org/2022.acl-long.592.pdf) [code](https://github.com/zhouj8553/flipda)
38. **PromDA: Prompt-based data augmentation for low-resource NLU tasks,** in ACL, 2022. *Y. Wang, C. Xu, Q. Sun, H. Hu, C. Tao, X. Geng, and D. Jiang.* [paper](https://aclanthology.org/2022.acl-long.292.pdf) [code](https://github.com/garyyufei/promda)
39. **N-shot learning for augmenting task-oriented dialogue state tracking,** in Findings of ACL, 2022. *I. T. Aksu, Z. Liu, M. Kan, and N. F. Chen.* [paper](https://aclanthology.org/2022.findings-acl.131.pdf)
40. **Generating representative samples for few-shot classification,** in CVPR, 2022. *J. Xu and H. Le.* [paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Xu_Generating_Representative_Samples_for_Few-Shot_Classification_CVPR_2022_paper.pdf) [code](https://github.com/cvlab-stonybrook/)
41. **Semi-supervised few-shot learning via multi-factor clustering,** in CVPR, 2022. *J. Ling, L. Liao, M. Yang, and J. Shuai.* [paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Ling_Semi-Supervised_Few-Shot_Learning_via_Multi-Factor_Clustering_CVPR_2022_paper.pdf)
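The papers in this section tackle FSL on the data side: they enlarge the scarce support set by transforming or synthesizing examples before a standard classifier is trained. As a minimal, library-agnostic sketch of the setting (not the method of any particular paper above), the NumPy snippet below samples an N-way K-shot episode from a labeled pool and then inflates its support set with Gaussian feature jitter; `features`, `labels`, and `noise_scale` are hypothetical placeholders.

```python
import numpy as np

def sample_episode(features, labels, n_way=5, k_shot=1, n_query=15, rng=None):
    """Sample an N-way K-shot episode from a labeled pool.

    Assumes every sampled class has at least k_shot + n_query examples;
    `features` is [num_samples, dim] and `labels` is [num_samples].
    """
    rng = rng if rng is not None else np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for episode_label, c in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == c))[: k_shot + n_query]
        support_x.append(features[idx[:k_shot]])
        support_y += [episode_label] * k_shot
        query_x.append(features[idx[k_shot:]])
        query_y += [episode_label] * n_query
    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))

def jitter_support(support_x, support_y, n_copies=4, noise_scale=0.1, rng=None):
    """Enlarge the support set with Gaussian feature noise, the crudest
    stand-in for the learned transformations used by the papers above."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy = [support_x] + [support_x + noise_scale * rng.standard_normal(support_x.shape)
                           for _ in range(n_copies)]
    return np.concatenate(noisy), np.tile(support_y, n_copies + 1)
```

Any few-shot classifier (e.g., a nearest-prototype rule) would then be fit on the augmented support set instead of the original K shots.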
## Model

### Multitask Learning

1. **Multi-task transfer methods to improve one-shot learning for multimedia event detection,** in BMVC, 2015. *W. Yan, J. Yap, and G. Mori.* [paper](http://www.bmva.org/bmvc/2015/papers/paper037/index.html)
2. **Label efficient learning of transferable representations across domains and tasks,** in NeurIPS, 2017. *Z. Luo, Y. Zou, J. Hoffman, and L. Fei-Fei.* [paper](https://papers.nips.cc/paper/6621-label-efficient-learning-of-transferable-representations-acrosss-domains-and-tasks.pdf)
3. **Few-shot adversarial domain adaptation,** in NeurIPS, 2017. *S. Motiian, Q. Jones, S. Iranmanesh, and G. Doretto.* [paper](https://papers.nips.cc/paper/7244-few-shot-adversarial-domain-adaptation)
4. **One-shot unsupervised cross domain translation,** in NeurIPS, 2018. *S. Benaim and L. Wolf.* [paper](https://papers.nips.cc/paper/7480-one-shot-unsupervised-cross-domain-translation.pdf)
5. **Multi-content GAN for few-shot font style transfer,** in CVPR, 2018. *S. Azadi, M. Fisher, V. G. Kim, Z. Wang, E. Shechtman, and T. Darrell.* [paper](http://www.vovakim.com/papers/18_CVPRSpotlight_FontDropper.pdf) [code](https://github.com/azadis/MC-GAN)
6. **Feature space transfer for data augmentation,** in CVPR, 2018. *B. Liu, X. Wang, M. Dixit, R. Kwitt, and N. Vasconcelos.* [paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Feature_Space_Transfer_CVPR_2018_paper.pdf)
7. **Fine-grained visual categorization using meta-learning optimization with sample selection of auxiliary data,** in ECCV, 2018. *Y. Zhang, H. Tang, and K. Jia.* [paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Yabin_Zhang_Fine-Grained_Visual_Categorization_ECCV_2018_paper.pdf)
8. **Few-shot charge prediction with discriminative legal attributes,** in COLING, 2018. *Z. Hu, X. Li, C. Tu, Z. Liu, and M. Sun.* [paper](https://www.aclweb.org/anthology/C18-1041.pdf)
9. **Boosting few-shot visual learning with self-supervision,** in ICCV, 2019. *S. Gidaris, A. Bursuc, N. Komodakis, P. Pérez, and M. Cord.* [paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Gidaris_Boosting_Few-Shot_Visual_Learning_With_Self-Supervision_ICCV_2019_paper.pdf)
10. **When does self-supervision improve few-shot learning?,** in ECCV, 2020. *J. Su, S. Maji, and B. Hariharan.* [paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123520630.pdf)
11. **Pareto self-supervised training for few-shot learning,** in CVPR, 2021. *Z. Chen, J. Ge, H. Zhan, S. Huang, and D. Wang.* [paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_Pareto_Self-Supervised_Training_for_Few-Shot_Learning_CVPR_2021_paper.pdf)
12. **Bridging multi-task learning and meta-learning: Towards efficient training and effective adaptation,** in ICML, 2021. *H. Wang, H. Zhao, and B. Li.* [paper](http://proceedings.mlr.press/v139/wang21ad/wang21ad.pdf) [code](https://github.com/AI-secure/multi-task-learning)
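Multitask methods instead attack FSL on the model side: parameters are shared between a data-rich auxiliary task and the few-shot target task, so the shared layers are trained mostly by the auxiliary supervision. Below is a minimal NumPy sketch of hard parameter sharing, the simplest variant in this family; all shapes and names are illustrative, not taken from any paper above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hard parameter sharing: one encoder shared by a data-rich source task
# and the few-shot target task, each task with its own linear head.
d_in, d_hid, n_src_classes, n_tgt_classes = 64, 32, 20, 5
W_shared = 0.1 * rng.standard_normal((d_in, d_hid))        # shared encoder
W_src = 0.1 * rng.standard_normal((d_hid, n_src_classes))  # source-task head
W_tgt = 0.1 * rng.standard_normal((d_hid, n_tgt_classes))  # few-shot head

def encode(x):
    """Shared ReLU embedding used by both tasks."""
    return np.maximum(x @ W_shared, 0.0)

def task_logits(x, head):
    return encode(x) @ head

# During training, gradients from both task losses flow into W_shared,
# so the few-shot head benefits from the source task's supervision.
x_src = rng.standard_normal((128, d_in))   # plentiful source batch
x_tgt = rng.standard_normal((5, d_in))     # 5-way 1-shot support set
print(task_logits(x_src, W_src).shape, task_logits(x_tgt, W_tgt).shape)
```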
### Embedding/Metric Learning

1. **Object classification from a single example utilizing class relevance metrics,** in NeurIPS, 2005. *M. Fink.* [paper](https://papers.nips.cc/paper/2576-object-classification-from-a-single-example-utilizing-class-relevance-metrics.pdf)
2. **Optimizing one-shot recognition with micro-set learning,** in CVPR, 2010. *K. D. Tang, M. F. Tappen, R. Sukthankar, and C. H. Lampert.* [paper](http://www.cs.ucf.edu/~mtappen/pubs/cvpr10_oneshot.pdf)
3. **Siamese neural networks for one-shot image recognition,** in ICML deep learning workshop, 2015. *G. Koch, R. Zemel, and R. Salakhutdinov.* [paper](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf)
4. **Matching networks for one shot learning,** in NeurIPS, 2016. *O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra et al.* [paper](https://papers.nips.cc/paper/6385-matching-networks-for-one-shot-learning.pdf)
5. **Learning feed-forward one-shot learners,** in NeurIPS, 2016. *L. Bertinetto, J. F. Henriques, J. Valmadre, P. Torr, and A. Vedaldi.* [paper](https://papers.nips.cc/paper/6068-learning-feed-forward-one-shot-learners.pdf)
6. **Few-shot learning through an information retrieval lens,** in NeurIPS, 2017. *E. Triantafillou, R. Zemel, and R. Urtasun.* [paper](https://papers.nips.cc/paper/6820-few-shot-learning-through-an-information-retrieval-lens.pdf)
7. **Prototypical networks for few-shot learning,** in NeurIPS, 2017. *J. Snell, K. Swersky, and R. S. Zemel.* [paper](https://papers.nips.cc/paper/6996-prototypical-networks-for-few-shot-learning.pdf) [code](https://github.com/jakesnell/prototypical-networks)
8. **Attentive recurrent comparators,** in ICML, 2017. *P. Shyam, S. Gupta, and A. Dukkipati.* [paper](http://proceedings.mlr.press/v70/shyam17a/shyam17a.pdf)
9. **Learning algorithms for active learning,** in ICML, 2017. *P. Bachman, A. Sordoni, and A. Trischler.* [paper](http://proceedings.mlr.press/v70/bachman17a.pdf)
10. **Active one-shot learning,** arXiv preprint, 2017. *M. Woodward and C. Finn.* [paper](https://arxiv.org/abs/1702.06559)
11. **Structured set matching networks for one-shot part labeling,** in CVPR, 2018. *J. Choi, J. Krishnamurthy, A. Kembhavi, and A. Farhadi.* [paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Choi_Structured_Set_Matching_CVPR_2018_paper.pdf)
12. **Low-shot learning from imaginary data,** in CVPR, 2018. *Y.-X. Wang, R. Girshick, M. Hebert, and B. Hariharan.* [paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Low-Shot_Learning_From_CVPR_2018_paper.pdf)
13. **Learning to compare: Relation network for few-shot learning,** in CVPR, 2018. *F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales.* [paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sung_Learning_to_Compare_CVPR_2018_paper.pdf) [code](https://github.com/floodsung/LearningToCompare_FSL)
14. **Dynamic conditional networks for few-shot learning,** in ECCV, 2018. *F. Zhao, J. Zhao, S. Yan, and J. Feng.* [paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Fang_Zhao_Dynamic_Conditional_Networks_ECCV_2018_paper.pdf) [code](https://github.com/ZhaoJ9014/Dynamic-Conditional-Networks.PyTorch)
15. **TADAM: Task dependent adaptive metric for improved few-shot learning,** in NeurIPS, 2018. *B. Oreshkin, P. R. López, and A. Lacoste.* [paper](https://papers.nips.cc/paper/7352-tadam-task-dependent-adaptive-metric-for-improved-few-shot-learning.pdf)
16. **Meta-learning for semi-supervised few-shot classification,** in ICLR, 2018. *M. Ren, S. Ravi, E. Triantafillou, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle, and R. S. Zemel.* [paper](https://openreview.net/forum?id=r1n5Osurf) [code](https://github.com/renmengye/few-shot-ssl-public)
17. **Few-shot learning with graph neural networks,** in ICLR, 2018. *V. G. Satorras and J. B. Estrach.* [paper](https://openreview.net/pdf?id=HJcSzz-CZ) [code](https://github.com/vgsatorras/few-shot-gnn)
18. **A simple neural attentive meta-learner,** in ICLR, 2018. *N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel.* [paper](https://openreview.net/forum?id=B1DmUzWAW)
19. **Meta-learning with differentiable closed-form solvers,** in ICLR, 2019. *L. Bertinetto, J. F. Henriques, P. Torr, and A. Vedaldi.* [paper](https://openreview.net/forum?id=HyxnZh0ct7)
20. **Learning to propagate labels: Transductive propagation network for few-shot learning,** in ICLR, 2019. *Y. Liu, J. Lee, M. Park, S. Kim, E. Yang, S. Hwang, and Y. Yang.* [paper](https://openreview.net/forum?id=SyVuRiC5K7) [code](https://github.com/csyanbin/TPN-pytorch)
21. **Multi-level matching and aggregation network for few-shot relation classification,** in ACL, 2019. *Z.-X. Ye and Z.-H. Ling.* [paper](https://www.aclweb.org/anthology/P19-1277.pdf)
22. **Induction networks for few-shot text classification,** in EMNLP-IJCNLP, 2019. *R. Geng, B. Li, Y. Li, X. Zhu, P. Jian, and J. Sun.* [paper](https://www.aclweb.org/anthology/D19-1403.pdf)
23. **Hierarchical attention prototypical networks for few-shot text classification,** in EMNLP-IJCNLP, 2019. *S. Sun, Q. Sun, K. Zhou, and T. Lv.* [paper](https://www.aclweb.org/anthology/D19-1045.pdf)
24. **Cross attention network for few-shot classification,** in NeurIPS, 2019. *R. Hou, H. Chang, B. Ma, S. Shan, and X. Chen.* [paper](https://papers.nips.cc/paper/8655-cross-attention-network-for-few-shot-classification.pdf)
25. **Hybrid attention-based prototypical networks for noisy few-shot relation classification,** in AAAI, 2019. *T. Gao, X. Han, Z. Liu, and M. Sun.* [paper](https://www.aaai.org/ojs/index.php/AAAI/article/view/4604/4482) [code](https://github.com/thunlp/HATT-Proto)
26. **Attention-based multi-context guiding for few-shot semantic segmentation,** in AAAI, 2019. *T. Hu, P. Yang, C. Zhang, G. Yu, Y. Mu, and C. G. M. Snoek.* [paper](https://www.aaai.org/ojs/index.php/AAAI/article/view/4604/4482)
27. **Distribution consistency based covariance metric networks for few-shot learning,** in AAAI, 2019. *W. Li, L. Wang, J. Xu, J. Huo, Y. Gao, and J. Luo.* [paper](https://www.aaai.org/ojs/index.php/AAAI/article/view/4885/4758)
28. **A dual attention network with semantic embedding for few-shot learning,** in AAAI, 2019. *S. Yan, S. Zhang, and X. He.* [paper](https://www.aaai.org/ojs/index.php/AAAI/article/view/4940/4813)
29. **TapNet: Neural network augmented with task-adaptive projection for few-shot learning,** in ICML, 2019. *S. W. Yoon, J. Seo, and J. Moon.* [paper](http://proceedings.mlr.press/v97/yoon19a/yoon19a.pdf)
30. **Prototype propagation networks (PPN) for weakly-supervised few-shot learning on category graph,** in IJCAI, 2019. *L. Liu, T. Zhou, G. Long, J. Jiang, L. Yao, and C. Zhang.* [paper](https://www.ijcai.org/Proceedings/2019/0418.pdf) [code](https://github.com/liulu112601/Prototype-Propagation-Net)
31. **Collect and select: Semantic alignment metric learning for few-shot learning,** in ICCV, 2019. *F. Hao, F. He, J. Cheng, L. Wang, J. Cao, and D. Tao.* [paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Hao_Collect_and_Select_Semantic_Alignment_Metric_Learning_for_Few-Shot_Learning_ICCV_2019_paper.pdf)
32. **Transductive episodic-wise adaptive metric for few-shot learning,** in ICCV, 2019. *L. Qiao, Y. Shi, J. Li, Y. Wang, T. Huang, and Y. Tian.* [paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Qiao_Transductive_Episodic-Wise_Adaptive_Metric_for_Few-Shot_Learning_ICCV_2019_paper.pdf)
33. **Few-shot learning with embedded class models and shot-free meta training,** in ICCV, 2019. *A. Ravichandran, R. Bhotika, and S. Soatto.* [paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Ravichandran_Few-Shot_Learning_With_Embedded_Class_Models_and_Shot-Free_Meta_Training_ICCV_2019_paper.pdf)
34. **PARN: Position-aware relation networks for few-shot learning,** in ICCV, 2019. *Z. Wu, Y. Li, L. Guo, and K. Jia.* [paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Wu_PARN_Position-Aware_Relation_Networks_for_Few-Shot_Learning_ICCV_2019_paper.pdf)
35. **PANet: Few-shot image semantic segmentation with prototype alignment,** in ICCV, 2019. *K. Wang, J. H. Liew, Y. Zou, D. Zhou, and J. Feng.* [paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Wang_PANet_Few-Shot_Image_Semantic_Segmentation_With_Prototype_Alignment_ICCV_2019_paper.pdf) [code](https://github.com/kaixin96/PANet)
36. **RepMet: Representative-based metric learning for classification and few-shot object detection,** in CVPR, 2019. *L. Karlinsky, J. Shtok, S. Harary, E. Schwartz, A. Aides, R. Feris, R. Giryes, and A. M. Bronstein.* [paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Karlinsky_RepMet_Representative-Based_Metric_Learning_for_Classification_and_Few-Shot_Object_Detection_CVPR_2019_paper.pdf) [code](https://github.com/jshtok/RepMet)
37. **Edge-labeling graph neural network for few-shot learning,** in CVPR, 2019. *J. Kim, T. Kim, S. Kim, and C. D. Yoo.* [paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Kim_Edge-Labeling_Graph_Neural_Network_for_Few-Shot_Learning_CVPR_2019_paper.pdf)
38. **Finding task-relevant features for few-shot learning by category traversal,** in CVPR, 2019. *H. Li, D. Eigen, S. Dodge, M. Zeiler, and X. Wang.* [paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Finding_Task-Relevant_Features_for_Few-Shot_Learning_by_Category_Traversal_CVPR_2019_paper.pdf) [code](https://github.com/Clarifai/few-shot-ctm)
39. **Revisiting local descriptor based image-to-class measure for few-shot learning,** in CVPR, 2019. *W. Li, L. Wang, J. Xu, J. Huo, Y. Gao, and J. Luo.* [paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Revisiting_Local_Descriptor_Based_Image-To-Class_Measure_for_Few-Shot_Learning_CVPR_2019_paper.pdf) [code](https://github.com/WenbinLee/DN4)
40. **TAFE-Net: Task-aware feature embeddings for low shot learning,** in CVPR, 2019. *X. Wang, F. Yu, R. Wang, T. Darrell, and J. E. Gonzalez.* [paper](https://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_TAFE-Net_Task-Aware_Feature_Embeddings_for_Low_Shot_Learning_CVPR_2019_paper.pdf) [code](https://github.com/ucbdrive/tafe-net)
41. **Improved few-shot visual classification,** in CVPR, 2020. *P. Bateni, R. Goyal, V. Masrani, F. Wood, and L. Sigal.* [paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Bateni_Improved_Few-Shot_Visual_Classification_CVPR_2020_paper.pdf)
42. **Boosting few-shot learning with adaptive margin loss,** in CVPR, 2020. *A. Li, W. Huang, X. Lan, J. Feng, Z. Li, and L. Wang.* [paper](http://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Boosting_Few-Shot_Learning_With_Adaptive_Margin_Loss_CVPR_2020_paper.pdf)
43. **Adaptive subspaces for few-shot learning,** in CVPR, 2020. *C. Simon, P. Koniusz, R. Nock, and M. Harandi.* [paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Simon_Adaptive_Subspaces_for_Few-Shot_Learning_CVPR_2020_paper.pdf)
44. **DPGN: Distribution propagation graph network for few-shot learning,** in CVPR, 2020. *L. Yang, L. Li, Z. Zhang, X. Zhou, E. Zhou, and Y. Liu.* [paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yang_DPGN_Distribution_Propagation_Graph_Network_for_Few-Shot_Learning_CVPR_2020_paper_check.pdf)
45. **Few-shot learning via embedding adaptation with set-to-set functions,** in CVPR, 2020. *H.-J. Ye, H. Hu, D.-C. Zhan, and F. Sha.* [paper](http://openaccess.thecvf.com/content_CVPR_2020/papers/Ye_Few-Shot_Learning_via_Embedding_Adaptation_With_Set-to-Set_Functions_CVPR_2020_paper.pdf) [code](https://github.com/Sha-Lab/FEAT)
46. **DeepEMD: Few-shot image classification with differentiable earth mover's distance and structured classifiers,** in CVPR, 2020. *C. Zhang, Y. Cai, G. Lin, and C. Shen.* [paper](http://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_DeepEMD_Few-Shot_Image_Classification_With_Differentiable_Earth_Movers_Distance_and_CVPR_2020_paper.pdf) [code](https://github.com/icoz69/DeepEMD)
47. **Few-shot text classification with distributional signatures,** in ICLR, 2020. *Y. Bao, M. Wu, S. Chang, and R. Barzilay.* [paper](https://openreview.net/pdf?id=H1emfT4twB) [code](https://github.com/YujiaBao/Distributional-Signatures)
48. **Learning task-aware local representations for few-shot learning,** in IJCAI, 2020. *C. Dong, W. Li, J. Huo, Z. Gu, and Y. Gao.* [paper](https://www.ijcai.org/Proceedings/2020/0100.pdf)
49. **SimPropNet: Improved similarity propagation for few-shot image segmentation,** in IJCAI, 2020. *S. Gairola, M. Hemani, A. Chopra, and B. Krishnamurthy.* [paper](https://www.ijcai.org/Proceedings/2020/0080.pdf)
50. **Asymmetric distribution measure for few-shot learning,** in IJCAI, 2020. *W. Li, L. Wang, J. Huo, Y. Shi, Y. Gao, and J. Luo.* [paper](https://www.ijcai.org/Proceedings/2020/0409.pdf)
51. **Transductive relation-propagation network for few-shot learning,** in IJCAI, 2020. *Y. Ma, S. Bai, S. An, W. Liu, A. Liu, X. Zhen, and X. Liu.* [paper](https://www.ijcai.org/Proceedings/2020/0112.pdf)
52. **Weakly supervised few-shot object segmentation using co-attention with visual and semantic embeddings,** in IJCAI, 2020. *M. Siam, N. Doraiswamy, B. N. Oreshkin, H. Yao, and M. Jägersand.* [paper](https://www.ijcai.org/Proceedings/2020/0120.pdf)
53. **Few-shot learning on graphs via super-classes based on graph spectral measures,** in ICLR, 2020. *J. Chauhan, D. Nathani, and M. Kaul.* [paper](https://openreview.net/pdf?id=Bkeeca4Kvr)
54. **SGAP-Net: Semantic-guided attentive prototypes network for few-shot human-object interaction recognition,** in AAAI, 2020. *Z. Ji, X. Liu, Y. Pang, and X. Li.* [paper](https://aaai.org/ojs/index.php/AAAI/article/view/6764)
55. **One-shot image classification by learning to restore prototypes,** in AAAI, 2020. *W. Xue and W. Wang.* [paper](https://aaai.org/ojs/index.php/AAAI/article/view/6130)
56. **Negative margin matters: Understanding margin in few-shot classification,** in ECCV, 2020. *B. Liu, Y. Cao, Y. Lin, Q. Li, Z. Zhang, M. Long, and H. Hu.* [paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123490426.pdf) [code](https://github.com/bl0/negative-margin.few-shot)
57. **Prototype rectification for few-shot learning,** in ECCV, 2020. *J. Liu, L. Song, and Y. Qin.* [paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123460715.pdf)
58. **Rethinking few-shot image classification: A good embedding is all you need?,** in ECCV, 2020. *Y. Tian, Y. Wang, D. Krishnan, J. B. Tenenbaum, and P. Isola.* [paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123590256.pdf) [code](https://github.com/WangYueFt/rfs/)
59. **SEN: A novel feature normalization dissimilarity measure for prototypical few-shot learning networks,** in ECCV, 2020. *V. N. Nguyen, S. Løkse, K. Wickstrøm, M. Kampffmeyer, D. Roverso, and R. Jenssen.* [paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123680120.pdf)
60. **TAFSSL: Task-adaptive feature sub-space learning for few-shot classification,** in ECCV, 2020. *M. Lichtenstein, P. Sattigeri, R. Feris, R. Giryes, and L. Karlinsky.* [paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123520511.pdf)
61. **Attentive prototype few-shot learning with capsule network-based embedding,** in ECCV, 2020. *F. Wu, J. S. Smith, W. Lu, C. Pang, and B. Zhang.* [paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123730239.pdf)
62. **Embedding propagation: Smoother manifold for few-shot classification,** in ECCV, 2020. *P. Rodríguez, I. Laradji, A. Drouin, and A. Lacoste.* [paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123710120.pdf) [code](https://github.com/ElementAI/embedding-propagation)
63. **Laplacian regularized few-shot learning,** in ICML, 2020. *I. M. Ziko, J. Dolz, E. Granger, and I. B. Ayed.* [paper](http://proceedings.mlr.press/v119/ziko20a/ziko20a.pdf) [code](https://github.com/imtiazziko/LaplacianShot)
64. **TAdaNet: Task-adaptive network for graph-enriched meta-learning,** in KDD, 2020. *Q. Suo, J. Chou, W. Zhong, and A. Zhang.* [paper](https://dl.acm.org/doi/pdf/10.1145/3394486.3403230)
65. **Concept learners for few-shot learning,** in ICLR, 2021. *K. Cao, M. Brbic, and J. Leskovec.* [paper](https://openreview.net/pdf?id=eJIJF3-LoZO)
66. **Reinforced attention for few-shot learning and beyond,** in CVPR, 2021. *J. Hong, P. Fang, W. Li, T. Zhang, C. Simon, M. Harandi, and L. Petersson.* [paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Hong_Reinforced_Attention_for_Few-Shot_Learning_and_Beyond_CVPR_2021_paper.pdf)
67. **Mutual CRF-GNN for few-shot learning,** in CVPR, 2021. *S. Tang, D. Chen, L. Bai, K. Liu, Y. Ge, and W. Ouyang.* [paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Tang_Mutual_CRF-GNN_for_Few-Shot_Learning_CVPR_2021_paper.pdf)
68. **Few-shot classification with feature map reconstruction networks,** in CVPR, 2021. *D. Wertheimer, L. Tang, and B. Hariharan.* [paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Wertheimer_Few-Shot_Classification_With_Feature_Map_Reconstruction_Networks_CVPR_2021_paper.pdf) [code](https://github.com/Tsingularity/FRN)
69. **ECKPN: Explicit class knowledge propagation network for transductive few-shot learning,** in CVPR, 2021. *C. Chen, X. Yang, C. Xu, X. Huang, and Z. Ma.* [paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Chen_ECKPN_Explicit_Class_Knowledge_Propagation_Network_for_Transductive_Few-Shot_Learning_CVPR_2021_paper.pdf)
70. **Exploring complementary strengths of invariant and equivariant representations for few-shot learning,** in CVPR, 2021. *M. N. Rizve, S. Khan, F. S. Khan, and M. Shah.* [paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Rizve_Exploring_Complementary_Strengths_of_Invariant_and_Equivariant_Representations_for_Few-Shot_CVPR_2021_paper.pdf)
71. **Rethinking class relations: Absolute-relative supervised and unsupervised few-shot learning,** in CVPR, 2021. *H. Zhang, P. Koniusz, S. Jian, H. Li, and P. H. S. Torr.* [paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Rethinking_Class_Relations_Absolute-Relative_Supervised_and_Unsupervised_Few-Shot_Learning_CVPR_2021_paper.pdf)
72. **Unsupervised embedding adaptation via early-stage feature reconstruction for few-shot classification,** in ICML, 2021. *D. H. Lee and S. Chung.* [paper](http://proceedings.mlr.press/v139/lee21d/lee21d.pdf) [code](https://github.com/movinghoon/ESFR)
73. **Learning a few-shot embedding model with contrastive learning,** in AAAI, 2021. *C. Liu, Y. Fu, C. Xu, S. Yang, J. Li, C. Wang, and L. Zhang.* [paper](https://ojs.aaai.org/index.php/AAAI/article/view/17047/16854)
74. **Looking wider for better adaptive representation in few-shot learning,** in AAAI, 2021. *J. Zhao, Y. Yang, X. Lin, J. Yang, and L. He.* [paper](https://ojs.aaai.org/index.php/AAAI/article/view/17311/17118)
75. **Tailoring embedding function to heterogeneous few-shot tasks by global and local feature adaptors,** in AAAI, 2021. *S. Lu, H. Ye, and D.-C. Zhan.* [paper](https://ojs.aaai.org/index.php/AAAI/article/view/17063/16870)
76. **Knowledge guided metric learning for few-shot text classification,** in NAACL-HLT, 2021. *D. Sui, Y. Chen, B. Mao, D. Qiu, K. Liu, and J. Zhao.* [paper](https://aclanthology.org/2021.naacl-main.261.pdf)
77. **Mixture-based feature space learning for few-shot image classification,** in ICCV, 2021. *A. Afrasiyabi, J. Lalonde, and C. Gagné.* [paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Afrasiyabi_Mixture-Based_Feature_Space_Learning_for_Few-Shot_Image_Classification_ICCV_2021_paper.pdf)
78. **Z-score normalization, hubness, and few-shot learning,** in ICCV, 2021. *N. Fei, Y. Gao, Z. Lu, and T. Xiang.* [paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Fei_Z-Score_Normalization_Hubness_and_Few-Shot_Learning_ICCV_2021_paper.pdf)
79. **Relational embedding for few-shot classification,** in ICCV, 2021. *D. Kang, H. Kwon, J. Min, and M. Cho.* [paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Kang_Relational_Embedding_for_Few-Shot_Classification_ICCV_2021_paper.pdf) [code](https://github.com/dahyun-kang/renet)
80. **Transductive few-shot classification on the oblique manifold,** in ICCV, 2021. *G. Qi, H. Yu, Z. Lu, and S. Li.* [paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Qi_Transductive_Few-Shot_Classification_on_the_Oblique_Manifold_ICCV_2021_paper.pdf) [code](https://github.com/GuodongQi/FSL-OM)
81. **Curvature generation in curved spaces for few-shot learning,** in ICCV, 2021. *Z. Gao, Y. Wu, Y. Jia, and M. Harandi.* [paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Gao_Curvature_Generation_in_Curved_Spaces_for_Few-Shot_Learning_ICCV_2021_paper.pdf)
82. **On episodes, prototypical networks, and few-shot learning,** in NeurIPS, 2021. *S. Laenen and L. Bertinetto.* [paper](https://proceedings.neurips.cc/paper/2021/file/cdfa4c42f465a5a66871587c69fcfa34-Paper.pdf)
83. **Few-shot learning as cluster-induced voronoi diagrams: A geometric approach,** in ICLR, 2022. *C. Ma, Z. Huang, M. Gao, and J. Xu.* [paper](https://openreview.net/pdf?id=6kCiVaoQdx9) [code](https://github.com/horsepurve/DeepVoro)
84. **Few-shot learning with siamese networks and label tuning,** in ACL, 2022. *T. Müller, G. Pérez-Torró, and M. Franco-Salvador.* [paper](https://aclanthology.org/2022.acl-long.584.pdf) [code](https://github.com/symanto-research/few-shot-learning-label-tuning)
85. **Learning to affiliate: Mutual centralized learning for few-shot classification,** in CVPR, 2022. *Y. Liu, W. Zhang, C. Xiang, T. Zheng, D. Cai, and X. He.* [paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Liu_Learning_To_Affiliate_Mutual_Centralized_Learning_for_Few-Shot_Classification_CVPR_2022_paper.pdf)
86. **Matching feature sets for few-shot image classification,** in CVPR, 2022. *A. Afrasiyabi, H. Larochelle, J. Lalonde, and C. Gagné.* [paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Afrasiyabi_Matching_Feature_Sets_for_Few-Shot_Image_Classification_CVPR_2022_paper.pdf) [code](https://lvsn.github.io/SetFeat/)
87. **Joint distribution matters: Deep Brownian distance covariance for few-shot classification,** in CVPR, 2022. *J. Xie, F. Long, J. Lv, Q. Wang, and P. Li.* [paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Xie_Joint_Distribution_Matters_Deep_Brownian_Distance_Covariance_for_Few-Shot_Classification_CVPR_2022_paper.pdf)
88. **CAD: Co-adapting discriminative features for improved few-shot classification,** in CVPR, 2022. *P. Chikontwe, S. Kim, and S. H. Park.* [paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Chikontwe_CAD_Co-Adapting_Discriminative_Features_for_Improved_Few-Shot_Classification_CVPR_2022_paper.pdf)
89. **Ranking distance calibration for cross-domain few-shot learning,** in CVPR, 2022. *P. Li, S. Gong, C. Wang, and Y. Fu.* [paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Li_Ranking_Distance_Calibration_for_Cross-Domain_Few-Shot_Learning_CVPR_2022_paper.pdf)
90. **EASE: Unsupervised discriminant subspace learning for transductive few-shot learning,** in CVPR, 2022. *H. Zhu and P. Koniusz.* [paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhu_EASE_Unsupervised_Discriminant_Subspace_Learning_for_Transductive_Few-Shot_Learning_CVPR_2022_paper.pdf) [code](https://github.com/allenhaozhu/EASE)
91. **Cross-domain few-shot learning with task-specific adapters,** in CVPR, 2022. *W. Li, X. Liu, and H. Bilen.* [paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Li_Cross-Domain_Few-Shot_Learning_With_Task-Specific_Adapters_CVPR_2022_paper.pdf) [code](https://github.com/VICO-UoE/URL)
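Most methods in this section follow the same template: embed support and query samples into a space where a simple distance implements classification. The NumPy sketch below shows the prototypical-network style decision rule from Snell et al. (class prototypes are mean support embeddings; queries get a softmax over negative squared Euclidean distances), simplified by letting the identity map stand in for the learned encoder.

```python
import numpy as np

def softmax(z):
    """Row-wise, numerically stable softmax."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def prototypical_predict(support_x, support_y, query_x, n_way):
    """Classify queries by distance to class prototypes (mean support
    embeddings); a real implementation would embed x with a trained encoder."""
    prototypes = np.stack([support_x[support_y == c].mean(axis=0)
                           for c in range(n_way)])
    # Squared Euclidean distance of each query to each prototype: [n_query, n_way].
    d2 = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    probs = softmax(-d2)            # closer prototype -> higher probability
    return probs.argmax(axis=1), probs
```

Paired with the episode sampler sketched after the Data section, this yields a complete, if untrained, nearest-prototype baseline.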
Yang.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Xu_Few-Shot_Object_Recognition_CVPR_2017_paper.pdf\">paper<\/a><\/li><li><strong>Learning to remember rare events,<\/strong>&nbsp;in ICLR, 2017.&nbsp;<em>\u0141. Kaiser, O. Nachum, A. Roy, and S. Bengio.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=SJTQLdqlg\">paper<\/a><\/li><li><strong>Meta networks,<\/strong>&nbsp;in ICML, 2017.&nbsp;<em>T. Munkhdalai and H. Yu.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v70\/munkhdalai17a\/munkhdalai17a.pdf\">paper<\/a><\/li><li><strong>Memory matching networks for one-shot image recognition,<\/strong>&nbsp;in CVPR, 2018.&nbsp;<em>Q. Cai, Y. Pan, T. Yao, C. Yan, and T. Mei.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_cvpr_2018\/papers\/Cai_Memory_Matching_Networks_CVPR_2018_paper.pdf\">paper<\/a><\/li><li><strong>Compound memory networks for few-shot video classification,<\/strong>&nbsp;in ECCV, 2018.&nbsp;<em>L. Zhu and Y. Yang.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/papers\/Linchao_Zhu_Compound_Memory_Networks_ECCV_2018_paper.pdf\">paper<\/a><\/li><li><strong>Memory, show the way: Memory based few shot word representation learning,<\/strong>&nbsp;in EMNLP, 2018.&nbsp;<em>J. Sun, S. Wang, and C. Zong.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/D18-1173.pdf\">paper<\/a><\/li><li><strong>Rapid adaptation with conditionally shifted neurons,<\/strong>&nbsp;in ICML, 2018.&nbsp;<em>T. Munkhdalai, X. Yuan, S. Mehri, and A. Trischler.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v80\/munkhdalai18a\/munkhdalai18a.pdf\">paper<\/a><\/li><li><strong>Adaptive posterior learning: Few-shot learning with a surprise-based memory module,<\/strong>&nbsp;in ICLR, 2019.&nbsp;<em>T. Ramalho and M. Garnelo.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=ByeSdsC9Km\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/cogentlabs\/apl\">code<\/a><\/li><li><strong>Coloring with limited data: Few-shot colorization via memory augmented networks,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>S. Yoo, H. Bahng, S. Chung, J. Lee, J. Chang, and J. Choo.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Yoo_Coloring_With_Limited_Data_Few-Shot_Colorization_via_Memory_Augmented_Networks_CVPR_2019_paper.pdf\">paper<\/a><\/li><li><strong>ACMM: Aligned cross-modal memory for few-shot image and sentence matching,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>Y. Huang, and L. Wang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Huang_ACMM_Aligned_Cross-Modal_Memory_for_Few-Shot_Image_and_Sentence_Matching_ICCV_2019_paper.pdf\">paper<\/a><\/li><li><strong>Dynamic memory induction networks for few-shot text classification,<\/strong>&nbsp;in ACL, 2020.&nbsp;<em>R. Geng, B. Li, Y. Li, J. Sun, and X. Zhu.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.acl-main.102.pdf\">paper<\/a><\/li><li><strong>Few-shot visual learning with contextual memory and fine-grained calibration,<\/strong>&nbsp;in IJCAI, 2020.&nbsp;<em>Y. Ma, W. Liu, S. Bai, Q. Zhang, A. Liu, W. Chen, and X. Liu.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/Proceedings\/2020\/0113.pdf\">paper<\/a><\/li><li><strong>Learn from concepts: Towards the purified memory for few-shot learning,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>X. Liu, X. Tian, S. Lin, Y. Qu, L. Ma, W. Yuan, Z. Zhang, and Y. 
Xie.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0123.pdf\">paper<\/a><\/li><li><strong>Prototype memory and attention mechanisms for few shot image generation,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>T. Li, Z. Li, A. Luo, H. Rockwell, A. B. Farimani, and T. S. Lee.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=lY0-7bj0Vfz\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/Crazy-Jack\/MoCA_release\">code<\/a><\/li><li><strong>Hierarchical variational memory for few-shot learning across domains,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>Y. Du, X. Zhen, L. Shao, and C. G. M. Snoek.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=i3RI65sR7N\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/YDU-uva\/HierMemory\">code<\/a><\/li><li><strong>Remember the difference: Cross-domain few-shot semantic segmentation via meta-memory transfer,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>W. Wang, L. Duan, Y. Wang, Q. En, J. Fan, and Z. Zhang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Wang_Remember_the_Difference_Cross-Domain_Few-Shot_Semantic_Segmentation_via_Meta-Memory_Transfer_CVPR_2022_paper.pdf\">paper<\/a><\/li><\/ol>\n\n\n\n<h3><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#generative-modeling\"><\/a>Generative Modeling<\/h3>\n\n\n\n<ol><li><strong>One-shot learning of object categories,<\/strong>&nbsp;TPAMI, 2006.&nbsp;<em>L. Fei-Fei, R. Fergus, and P. Perona.<\/em>&nbsp;<a href=\"http:\/\/vision.stanford.edu\/documents\/Fei-FeiFergusPerona2006.pdf\">paper<\/a><\/li><li><strong>Learning to learn with compound HD models,<\/strong>&nbsp;in NeurIPS, 2011.&nbsp;<em>A. Torralba, J. B. Tenenbaum, and R. R. Salakhutdinov.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/4474-learning-to-learn-with-compound-hd-models.pdf\">paper<\/a><\/li><li><strong>One-shot learning with a hierarchical nonparametric Bayesian model,<\/strong>&nbsp;in ICML Workshop on Unsupervised and Transfer Learning, 2012.&nbsp;<em>R. Salakhutdinov, J. Tenenbaum, and A. Torralba.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v27\/salakhutdinov12a\/salakhutdinov12a.pdf\">paper<\/a><\/li><li><strong>Human-level concept learning through probabilistic program induction,<\/strong>&nbsp;Science, 2015.&nbsp;<em>B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum.<\/em>&nbsp;<a href=\"https:\/\/web.mit.edu\/cocosci\/Papers\/Science-2015-Lake-1332-8.pdf\">paper<\/a><\/li><li><strong>One-shot generalization in deep generative models,<\/strong>&nbsp;in ICML, 2016.&nbsp;<em>D. Rezende, I. Danihelka, K. Gregor, and D. Wierstra.<\/em>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1603.05106\">paper<\/a><\/li><li><strong>One-shot video object segmentation,<\/strong>&nbsp;in CVPR, 2017.&nbsp;<em>S. Caelles, K.-K. Maninis, J. Pont-Tuset, L. Leal-Taix\u00e9, D. Cremers, and L. Van Gool.<\/em>&nbsp;<a href=\"http:\/\/zpascal.net\/cvpr2017\/Caelles_One-Shot_Video_Object_CVPR_2017_paper.pdf\">paper<\/a><\/li><li><strong>Towards a neural statistician,<\/strong>&nbsp;in ICLR, 2017.&nbsp;<em>H. Edwards and A. Storkey.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=HJDBUF5le\">paper<\/a><\/li><li><strong>Extending a parser to distant domains using a few dozen partially annotated examples,<\/strong>&nbsp;in ACL, 2018.&nbsp;<em>V. Joshi, M. Peters, and M. 
Hopkins.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/P18-1110.pdf\">paper<\/a><\/li><li><strong>MetaGAN: An adversarial approach to few-shot learning,<\/strong>&nbsp;in NeurIPS, 2018.&nbsp;<em>R. Zhang, T. Che, Z. Ghahramani, Y. Bengio, and Y. Song.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/7504-metagan-an-adversarial-approach-to-few-shot-learning.pdf\">paper<\/a><\/li><li><strong>Few-shot autoregressive density estimation: Towards learning to learn distributions,<\/strong>&nbsp;in ICLR, 2018.&nbsp;<em>S. Reed, Y. Chen, T. Paine, A. van den Oord, S. M. A. Eslami, D. Rezende, O. Vinyals, and N. de Freitas.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=r1wEFyWCW\">paper<\/a><\/li><li><strong>The variational homoencoder: Learning to learn high capacity generative models from few examples,<\/strong>&nbsp;in UAI, 2018.&nbsp;<em>L. B. Hewitt, M. I. Nye, A. Gane, T. Jaakkola, and J. B. Tenenbaum.<\/em>&nbsp;<a href=\"http:\/\/auai.org\/uai2018\/proceedings\/papers\/351.pdf\">paper<\/a><\/li><li><strong>Meta-learning probabilistic inference for prediction,<\/strong>&nbsp;in ICLR, 2019.&nbsp;<em>J. Gordon, J. Bronskill, M. Bauer, S. Nowozin, and R. Turner.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=HkxStoC5F7\">paper<\/a><\/li><li><strong>Variational prototyping-encoder: One-shot learning with prototypical images,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>J. Kim, T.-H. Oh, S. Lee, F. Pan, and I. S. Kweon.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Kim_Variational_Prototyping-Encoder_One-Shot_Learning_With_Prototypical_Images_CVPR_2019_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/mibastro\/VPE\">code<\/a><\/li><li><strong>Variational few-shot learning,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>J. Zhang, C. Zhao, B. Ni, M. Xu, and X. Yang.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Zhang_Variational_Few-Shot_Learning_ICCV_2019_paper.pdf\">paper<\/a><\/li><li><strong>Infinite mixture prototypes for few-shot learning,<\/strong>&nbsp;in ICML, 2019.&nbsp;<em>K. Allen, E. Shelhamer, H. Shin, and J. Tenenbaum.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v97\/allen19b\/allen19b.pdf\">paper<\/a><\/li><li><strong>Dual variational generation for low shot heterogeneous face recognition,<\/strong>&nbsp;in NeurIPS, 2019.&nbsp;<em>C. Fu, X. Wu, Y. Hu, H. Huang, and R. He.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/8535-dual-variational-generation-for-low-shot-heterogeneous-face-recognition.pdf\">paper<\/a><\/li><li><strong>Bayesian meta sampling for fast uncertainty adaptation,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>Z. Wang, Y. Zhao, P. Yu, R. Zhang, and C. Chen.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=Bkxv90EKPB\">paper<\/a><\/li><li><strong>Empirical Bayes transductive meta-learning with synthetic gradients,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>S. X. Hu, P. G. Moreno, Y. Xiao, X. Shen, G. Obozinski, N. D. Lawrence, and A. C. Damianou.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=Hkg-xgrYvH\">paper<\/a><\/li><li><strong>Few-shot relation extraction via Bayesian meta-learning on relation graphs,<\/strong>&nbsp;in ICML, 2020.&nbsp;<em>M. Qu, T. Gao, L. A. C. Xhonneux, and J. Tang.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v119\/qu20a\/qu20a.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/DeepGraphLearning\/FewShotRE\">code<\/a><\/li><li><strong>Interventional few-shot learning,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>Z. 
Yue, H. Zhang, Q. Sun, and X. Hua.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/1cc8a8ea51cd0adddf5dab504a285915-Paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/yue-zhongqi\/ifsl\">code<\/a><\/li><li><strong>Bayesian few-shot classification with one-vs-each P\u00f3lya-gamma augmented Gaussian processes,<\/strong>&nbsp;in ICLR, 2021.&nbsp;<em>J. Snell, and R. Zemel.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=lgNx56yZh8a\">paper<\/a><\/li><li><strong>Few-shot Bayesian optimization with deep kernel surrogates,<\/strong>&nbsp;in ICLR, 2021.&nbsp;<em>M. Wistuba, and J. Grabocka.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=bJxgv5C3sYc\">paper<\/a><\/li><li><strong>Modeling the probabilistic distribution of unlabeled data for one-shot medical image segmentation,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>Y. Ding, X. Yu, and Y. Yang.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16212\/16019\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/dyh127\/Modeling-the-Probabilistic-Distribution-of-Unlabeled-Data\">code<\/a><\/li><li><strong>A hierarchical transformation-discriminating generative model for few shot anomaly detection,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>S. Sheynin, S. Benaim, and L. Wolf.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Sheynin_A_Hierarchical_Transformation-Discriminating_Generative_Model_for_Few_Shot_Anomaly_Detection_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Reinforced few-shot acquisition function learning for Bayesian optimization,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>B. Hsieh, P. Hsieh, and X. Liu.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/3fab5890d8113d0b5a4178201dc842ad-Paper.pdf\">paper<\/a><\/li><li><strong>GanOrCon: Are generative models useful for few-shot segmentation?,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>O. Saha, Z. Cheng, and S. Maji.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Saha_GanOrCon_Are_Generative_Models_Useful_for_Few-Shot_Segmentation_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Few shot generative model adaption via relaxed spatial structural alignment,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>J. Xiao, L. Li, C. Wang, Z. Zha, and Q. Huang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Xiao_Few_Shot_Generative_Model_Adaption_via_Relaxed_Spatial_Structural_Alignment_CVPR_2022_paper.pdf\">paper<\/a><\/li><\/ol>\n\n\n\n<h2><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#algorithm\"><\/a><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#content\">Algorithm<\/a><\/h2>\n\n\n\n<h3><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#refining-existing-parameters\"><\/a>Refining Existing Parameters<\/h3>\n\n\n\n<ol><li><strong>Cross-generalization: Learning novel classes from a single example by feature replacement,<\/strong>&nbsp;in CVPR, 2005.&nbsp;<em>E. Bart and S. Ullman.<\/em>&nbsp;<a href=\"http:\/\/www.inf.tu-dresden.de\/content\/institutes\/ki\/is\/HS_SS08_Papers\/BartUllmanCVPR05.pdf\">paper<\/a><\/li><li><strong>One-shot adaptation of supervised deep convolutional models,<\/strong>&nbsp;in ICLR, 2013.&nbsp;<em>J. Hoffman, E. Tzeng, J. Donahue, Y. Jia, K. Saenko, and T. 
Darrell.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=tPCrkaLa9Y5ld\">paper<\/a><\/li><li><strong>Learning to learn: Model regression networks for easy small sample learning,<\/strong>&nbsp;in ECCV, 2016.&nbsp;<em>Y.-X. Wang and M. Hebert.<\/em>&nbsp;<a href=\"https:\/\/ri.cmu.edu\/pub_files\/2016\/10\/yuxiongw_eccv16_learntolearn.pdf\">paper<\/a><\/li><li><strong>Learning from small sample sets by combining unsupervised meta-training with CNNs,<\/strong>&nbsp;in NeurIPS, 2016.&nbsp;<em>Y.-X. Wang and M. Hebert.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/6408-learning-from-small-sample-sets-by-combining-unsupervised-meta-training-with-cnns\">paper<\/a><\/li><li><strong>Efficient k-shot learning with regularized deep networks,<\/strong>&nbsp;in AAAI, 2018.&nbsp;<em>D. Yoo, H. Fan, V. N. Boddeti, and K. M. Kitani.<\/em>&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1710.02277\">paper<\/a><\/li><li><strong>CLEAR: Cumulative learning for one-shot one-class image recognition,<\/strong>&nbsp;in CVPR, 2018.&nbsp;<em>J. Kozerawski and M. Turk.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_cvpr_2018\/papers\/Kozerawski_CLEAR_Cumulative_LEARning_CVPR_2018_paper.pdf\">paper<\/a><\/li><li><strong>Learning structure and strength of CNN filters for small sample size training,<\/strong>&nbsp;in CVPR, 2018.&nbsp;<em>R. Keshari, M. Vatsa, R. Singh, and A. Noore.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_cvpr_2018\/papers\/Keshari_Learning_Structure_and_CVPR_2018_paper.pdf\">paper<\/a><\/li><li><strong>Dynamic few-shot visual learning without forgetting,<\/strong>&nbsp;in CVPR, 2018.&nbsp;<em>S. Gidaris and N. Komodakis.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_cvpr_2018\/papers\/Gidaris_Dynamic_Few-Shot_Visual_CVPR_2018_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/gidariss\/FewShotWithoutForgetting\">code<\/a><\/li><li><strong>Low-shot learning with imprinted weights,<\/strong>&nbsp;in CVPR, 2018.&nbsp;<em>H. Qi, M. Brown, and D. G. Lowe.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_cvpr_2018\/papers\/Qi_Low-Shot_Learning_With_CVPR_2018_paper.pdf\">paper<\/a><\/li><li><strong>Neural voice cloning with a few samples,<\/strong>&nbsp;in NeurIPS, 2018.&nbsp;<em>S. Arik, J. Chen, K. Peng, W. Ping, and Y. Zhou.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/8206-neural-voice-cloning-with-a-few-samples.pdf\">paper<\/a><\/li><li><strong>Text classification with few examples using controlled generalization,<\/strong>&nbsp;in NAACL-HLT, 2019.&nbsp;<em>A. Mahabal, J. Baldridge, B. K. Ayan, V. Perot, and D. Roth.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/N19-1319.pdf\">paper<\/a><\/li><li><strong>Low shot box correction for weakly supervised object detection,<\/strong>&nbsp;in IJCAI, 2019.&nbsp;<em>T. Pan, B. Wang, G. Ding, J. Han, and J. Yong.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/Proceedings\/2019\/0125.pdf\">paper<\/a><\/li><li><strong>Diversity with cooperation: Ensemble methods for few-shot classification,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>N. Dvornik, C. Schmid, and J. Mairal.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Dvornik_Diversity_With_Cooperation_Ensemble_Methods_for_Few-Shot_Classification_ICCV_2019_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot image recognition with knowledge transfer,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>Z. Peng, Z. Li, J. Zhang, Y. Li, G.-J. Qi, and J. 
Tang.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Peng_Few-Shot_Image_Recognition_With_Knowledge_Transfer_ICCV_2019_paper.pdf\">paper<\/a><\/li><li><strong>Generating classification weights with GNN denoising autoencoders for few-shot learning,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>S. Gidaris, and N. Komodakis.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Gidaris_Generating_Classification_Weights_With_GNN_Denoising_Autoencoders_for_Few-Shot_Learning_CVPR_2019_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/gidariss\/wDAE_GNN_FewShot\">code<\/a><\/li><li><strong>Dense classification and implanting for few-shot learning,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>Y. Lifchitz, Y. Avrithis, S. Picard, and A. Bursuc.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Lifchitz_Dense_Classification_and_Implanting_for_Few-Shot_Learning_CVPR_2019_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot adaptive faster R-CNN,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>T. Wang, X. Zhang, L. Yuan, and J. Feng.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Wang_Few-Shot_Adaptive_Faster_R-CNN_CVPR_2019_paper.pdf\">paper<\/a><\/li><li><strong>TransMatch: A transfer-learning scheme for semi-supervised few-shot learning,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>Z. Yu, L. Chen, Z. Cheng, and J. Luo.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Yu_TransMatch_A_Transfer-Learning_Scheme_for_Semi-Supervised_Few-Shot_Learning_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>Learning to select base classes for few-shot classification,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>L. Zhou, P. Cui, X. Jia, S. Yang, and Q. Tian.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Zhou_Learning_to_Select_Base_Classes_for_Few-Shot_Classification_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot NLG with pre-trained language model,<\/strong>&nbsp;in ACL, 2020.&nbsp;<em>Z. Chen, H. Eavani, W. Chen, Y. Liu, and W. Y. Wang.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.acl-main.18.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/czyssrs\/Few-Shot-NLG\">code<\/a><\/li><li><strong>Span-ConveRT: Few-shot span extraction for dialog with pretrained conversational representations,<\/strong>&nbsp;in ACL, 2020.&nbsp;<em>S. Coope, T. Farghly, D. Gerz, I. Vulic, and M. Henderson.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.acl-main.11.pdf\">paper<\/a><\/li><li><strong>Structural supervision improves few-shot learning and syntactic generalization in neural language models,<\/strong>&nbsp;in EMNLP, 2020.&nbsp;<em>E. Wilcox, P. Qian, R. Futrell, R. Kohita, R. Levy, and M. Ballesteros.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.emnlp-main.375.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/wilcoxeg\/fsl_invar\">code<\/a><\/li><li><strong>A baseline for few-shot image classification,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>G. S. Dhillon, P. Chaudhari, A. Ravichandran, and S. Soatto.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=rylXBkrYDS\">paper<\/a><\/li><li><strong>Cross-domain few-shot classification via learned feature-wise transformation,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>H. Tseng, H. Lee, J. Huang, and M. 
Yang.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=SJl5Np4tPr\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/hytseng0509\/CrossDomainFewShot\">code<\/a><\/li><li><strong>Graph few-shot learning via knowledge transfer,<\/strong>&nbsp;in AAAI, 2020.&nbsp;<em>H. Yao, C. Zhang, Y. Wei, M. Jiang, S. Wang, J. Huang, N. V. Chawla, and Z. Li.<\/em>&nbsp;<a href=\"https:\/\/aaai.org\/ojs\/index.php\/AAAI\/article\/view\/6142\">paper<\/a><\/li><li><strong>Knowledge graph transfer network for few-shot recognition,<\/strong>&nbsp;in AAAI, 2020.&nbsp;<em>R. Chen, T. Chen, X. Hui, H. Wu, G. Li, and L. Lin.<\/em>&nbsp;<a href=\"https:\/\/aaai.org\/ojs\/index.php\/AAAI\/article\/view\/6630\">paper<\/a><\/li><li><strong>Context-Transformer: Tackling object confusion for few-shot detection,<\/strong>&nbsp;in AAAI, 2020.&nbsp;<em>Z. Yang, Y. Wang, X. Chen, J. Liu, and Y. Qiao.<\/em>&nbsp;<a href=\"https:\/\/aaai.org\/ojs\/index.php\/AAAI\/article\/view\/6957\">paper<\/a><\/li><li><strong>A broader study of cross-domain few-shot learning,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>Y. Guo, N. C. Codella, L. Karlinsky, J. V. Codella, J. R. Smith, K. Saenko, T. Rosing, and R. Feris.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123720120.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/IBM\/cdfsl-benchmark\">code<\/a><\/li><li><strong>Selecting relevant features from a multi-domain representation for few-shot classification,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>N. Dvornik, C. Schmid, and J. Mairal.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123550766.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/dvornikita\/SUR\">code<\/a><\/li><li><strong>Prototype completion with primitive knowledge for few-shot learning,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>B. Zhang, X. Li, Y. Ye, Z. Huang, and L. Zhang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Zhang_Prototype_Completion_With_Primitive_Knowledge_for_Few-Shot_Learning_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/zhangbq-research\/Prototype_Completion_for_FSL\">code<\/a><\/li><li><strong>Partial is better than all: Revisiting fine-tuning strategy for few-shot learning,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>Z. Shen, Z. Liu, J. Qin, M. Savvides, and K.-T. Cheng.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17155\/16962\">paper<\/a><\/li><li><strong>PTN: A Poisson transfer network for semi-supervised few-shot learning,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>H. Huang, J. Zhang, J. Zhang, Q. Wu, and C. Xu.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16252\/16059\">paper<\/a><\/li><li><strong>A universal representation transformer layer for few-shot image classification,<\/strong>&nbsp;in ICLR, 2021.&nbsp;<em>L. Liu, W. L. Hamilton, G. Long, J. Jiang, and H. Larochelle.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=04cII6MumYV\">paper<\/a><\/li><li><strong>Making pre-trained language models better few-shot learners,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>T. Gao, A. Fisch, and D. Chen.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-long.295.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/princeton-nlp\/LM-BFF\">code<\/a><\/li><li><strong>Self-supervised network evolution for few-shot classification,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>X. Tang, Z. Teng, B. Zhang, and J. 
Fan.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0419.pdf\">paper<\/a><\/li><li><strong>Calibrate before use: Improving few-shot performance of language models,<\/strong>&nbsp;in ICML, 2021.&nbsp;<em>Z. Zhao, E. Wallace, S. Feng, D. Klein, and S. Singh.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v139\/zhao21c\/zhao21c.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/www.github.com\/tonyzhaozh\/few-shot-learning\">code<\/a><\/li><li><strong>Language models are few-shot learners,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf\">paper<\/a><\/li><li><strong>It&#8217;s not just size that matters: Small language models are also few-shot learners,<\/strong>&nbsp;in NAACL-HLT, 2021.&nbsp;<em>T. Schick, and H. Sch\u00fctze.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.naacl-main.185.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/timoschick\/pet\">code<\/a><\/li><li><strong>Self-training improves pre-training for few-shot learning in task-oriented dialog systems,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>F. Mi, W. Zhou, L. Kong, F. Cai, M. Huang, and B. Faltings.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.142.pdf\">paper<\/a><\/li><li><strong>Few-shot intent detection via contrastive pre-training and fine-tuning,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>J. Zhang, T. Bui, S. Yoon, X. Chen, Z. Liu, C. Xia, Q. H. Tran, W. Chang, and P. S. Yu.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.144.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/jianguoz\/Few-Shot-Intent-Detection\">code<\/a><\/li><li><strong>Avoiding inference heuristics in few-shot prompt-based finetuning,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>P. A. Utama, N. S. Moosavi, V. Sanh, and I. Gurevych.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.713.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/ukplab\/emnlp2021-prompt-ft-heuristics\">code<\/a><\/li><li><strong>Constrained language models yield few-shot semantic parsers,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>R. Shin, C. H. Lin, S. Thomson, C. Chen, S. Roy, E. A. Platanios, A. Pauls, D. Klein, J. Eisner, and B. V. Durme.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.608.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/microsoft\/semantic_parsing_with_constrained_lm\">code<\/a><\/li><li><strong>Revisiting self-training for few-shot learning of language model,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>Y. Chen, Y. Zhang, C. Zhang, G. Lee, R. Cheng, and H. Li.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.718.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/matthewcym\/sflm\">code<\/a><\/li><li><strong>Language models are few-shot butlers,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>V. Micheli, and F. 
Fleuret.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.734.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/vmicheli\/lm-butlers\">code<\/a><\/li><li><strong>FewshotQA: A simple framework for few-shot learning of question answering tasks using pre-trained text-to-text models,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>R. Chada, and P. Natarajan.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.491.pdf\">paper<\/a><\/li><li><strong>TransPrompt: Towards an automatic transferable prompting framework for few-shot text classification,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>C. Wang, J. Wang, M. Qiu, J. Huang, and M. Gao.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.221.pdf\">paper<\/a><\/li><li><strong>Meta distant transfer learning for pre-trained language models,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>C. Wang, H. Pan, M. Qiu, J. Huang, F. Yang, and Y. Zhang.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.768.pdf\">paper<\/a><\/li><li><strong>STraTA: Self-training with task augmentation for better few-shot learning,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>T. Vu, M. Luong, Q. V. Le, G. Simon, and M. Iyyer.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.462.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/google-research\/google-research\">code<\/a><\/li><li><strong>Few-shot image classification: Just use a library of pre-trained feature extractors and a simple classifier,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>A. Chowdhury, M. Jiang, S. Chaudhuri, and C. Jermaine.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Chowdhury_Few-Shot_Image_Classification_Just_Use_a_Library_of_Pre-Trained_Feature_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/arjish\/PreTrainedFullLibrary_FewShot\">code<\/a><\/li><li><strong>On the importance of distractors for few-shot classification,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>R. Das, Y. Wang, and J. M. F. Moura.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Das_On_the_Importance_of_Distractors_for_Few-Shot_Classification_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/quantacode\/contrastive-finetuning\">code<\/a><\/li><li><strong>A multi-mode modulator for multi-domain few-shot classification,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>Y. Liu, J. Lee, L. Zhu, L. Chen, H. Shi, and Y. Yang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Liu_A_Multi-Mode_Modulator_for_Multi-Domain_Few-Shot_Classification_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Universal representation learning from multiple domains for few-shot classification,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>W. Li, X. Liu, and H. Bilen.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Li_Universal_Representation_Learning_From_Multiple_Domains_for_Few-Shot_Classification_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/VICO-UoE\/URL\">code<\/a><\/li><li><strong>Boosting the generalization capability in cross-domain few-shot learning via noise-enhanced supervised autoencoder,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>H. Liang, Q. Zhang, P. Dai, and J. 
Lu.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Liang_Boosting_the_Generalization_Capability_in_Cross-Domain_Few-Shot_Learning_via_Noise-Enhanced_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>How fine-tuning allows for effective meta-learning,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>K. Chua, Q. Lei, and J. D. Lee.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/4a533591763dfa743a13affab1a85793-Paper.pdf\">paper<\/a><\/li><li><strong>Multimodal few-shot learning with frozen language models,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>M. Tsimpoukelli, J. Menick, S. Cabi, S. M. A. Eslami, O. Vinyals, and F. Hill.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/01b7575c38dac42f3cfb7d500438b875-Paper.pdf\">paper<\/a><\/li><li><strong>Grad2Task: Improved few-shot text classification using gradients for task representation,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>J. Wang, K. Wang, F. Rudzicz, and M. Brudno.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/33a854e247155d590883b93bca53848a-Paper.pdf\">paper<\/a><\/li><li><strong>True few-shot learning with language models,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>E. Perez, D. Kiela, and K. Cho.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/5c04925674920eb58467fb52ce4ef728-Paper.pdf\">paper<\/a><\/li><li><strong>POODLE: Improving few-shot learning via penalizing out-of-distribution samples,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>D. Le, K. Nguyen, Q. Tran, R. Nguyen, and B. Hua.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/c91591a8d461c2869b9f535ded3e213e-Paper.pdf\">paper<\/a><\/li><li><strong>TOHAN: A one-step approach towards few-shot hypothesis adaptation,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>H. Chi, F. Liu, W. Yang, L. Lan, T. Liu, B. Han, W. Cheung, and J. Kwok.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/af5d5ef24881f3c3049a7b9bfe74d58b-Paper.pdf\">paper<\/a><\/li><li><strong>Task affinity with maximum bipartite matching in few-shot learning,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>C. P. Le, J. Dong, M. Soltani, and V. Tarokh.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=u2GZOiUTbt\">paper<\/a><\/li><li><strong>Differentiable prompt makes pre-trained language models better few-shot learners,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>N. Zhang, L. Li, X. Chen, S. Deng, Z. Bi, C. Tan, F. Huang, and H. Chen.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=ek9a0qIafW\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/zjunlp\/DART\">code<\/a><\/li><li><strong>ConFeSS: A framework for single source cross-domain few-shot learning,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>D. Das, S. Yun, and F. Porikli.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=zRJu6mU2BaE\">paper<\/a><\/li><li><strong>Switch to generalize: Domain-switch learning for cross-domain few-shot classification,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>Z. Hu, Y. Sun, and Y. Yang.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=H-iABMvzIc\">paper<\/a><\/li><li><strong>LM-BFF-MS: Improving few-shot fine-tuning of language models based on multiple soft demonstration memory,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>E. Park, D. H. Jeon, S. Kim, I. Kang, and S. 
Na.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-short.34.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/judepark96\/lm-bff-ms\">code<\/a><\/li><li><strong>Meta-learning via language model in-context tuning,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>Y. Chen, R. Zhong, S. Zha, G. Karypis, and H. He.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.53.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/yandachen\/in-context-tuning\">code<\/a><\/li><li><strong>Few-shot tabular data enrichment using fine-tuned transformer architectures,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>A. Harari, and G. Katz.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.111.pdf\">paper<\/a><\/li><li><strong>Noisy channel language model prompting for few-shot text classification,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>S. Min, M. Lewis, H. Hajishirzi, and L. Zettlemoyer.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.365.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/shmsw25\/Channel-LM-Prompting\">code<\/a><\/li><li><strong>Prompt for extraction? PAIE: Prompting argument interaction for event argument extraction,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>Y. Ma, Z. Wang, Y. Cao, M. Li, M. Chen, K. Wang, and J. Shao.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.466.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/mayubo2333\/paie\">code<\/a><\/li><li><strong>Are prompt-based models clueless?,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>P. Kavumba, R. Takahashi, and Y. Oda.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.166.pdf\">paper<\/a><\/li><li><strong>Prototypical verbalizer for prompt-based few-shot tuning,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>G. Cui, S. Hu, N. Ding, L. Huang, and Z. Liu.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.483.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/thunlp\/OpenPrompt\">code<\/a><\/li><li><strong>Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>Y. Lu, M. Bartolo, A. Moore, S. Riedel, and P. Stenetorp.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.556.pdf\">paper<\/a><\/li><li><strong>PPT: Pre-trained prompt tuning for few-shot learning,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>Y. Gu, X. Han, Z. Liu, and M. Huang.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.576.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/thu-coai\/ppt\">code<\/a><\/li><li><strong>ASCM: An answer space clustered prompting method without answer engineering,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>Z. Wang, Y. Yang, Z. Xi, B. Ma, L. Wang, R. Dong, and A. Anwar.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.193.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/miaomiao1215\/ascm\">code<\/a><\/li><li><strong>Exploiting language model prompts using similarity measures: A case study on the word-in-context task,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>M. Tabasi, K. Rezaee, and M. T. Pilehvar.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-short.36.pdf\">paper<\/a><\/li><li><strong>P-Tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>X. Liu, K. Ji, Y. Fu, W. Tam, Z. Du, Z. Yang, and J. 
Tang.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-short.8.pdf\">paper<\/a><\/li><li><strong>Cutting down on prompts and parameters: Simple few-shot learning with language models,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>R. L. Logan IV, I. Balazevic, E. Wallace, F. Petroni, S. Singh, and S. Riedel.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.222.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/ucinlp\/null-prompts\">code<\/a><\/li><li><strong>Prompt-free and efficient few-shot learning with language models,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>R. K. Mahabadi, L. Zettlemoyer, J. Henderson, L. Mathias, M. Saeidi, V. Stoyanov, and M. Yazdani.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.254.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/facebookresearch\/perfect\">code<\/a><\/li><li><strong>Pre-training to match for unified low-shot relation extraction,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>F. Liu, H. Lin, X. Han, B. Cao, and L. Sun.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.397.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/fc-liu\/mcmn\">code<\/a><\/li><li><strong>Dual context-guided continuous prompt tuning for few-shot learning,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>J. Zhou, L. Tian, H. Yu, Z. Xiao, H. Su, and J. Zhou.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.8.pdf\">paper<\/a><\/li><li><strong>Cluster &amp; tune: Boost cold start performance in text classification,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>E. Shnarch, A. Gera, A. Halfon, L. Dankin, L. Choshen, R. Aharonov, and N. Slonim.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.526.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/ibm\/intermediate-training-using-clustering\">code<\/a><\/li><li><strong>Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>S. X. Hu, D. Li, J. St\u00fchmer, M. Kim, and T. M. Hospedales.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Hu_Pushing_the_Limits_of_Simple_Pipelines_for_Few-Shot_Learning_External_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/hushell.github.io\/pmf\/\">code<\/a><\/li><\/ol>\n\n\n\n<h3><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#refining-meta-learned-parameters\"><\/a>Refining Meta-learned Parameters<\/h3>\n\n\n\n<ol><li><strong>Model-agnostic meta-learning for fast adaptation of deep networks,<\/strong>&nbsp;in ICML, 2017.&nbsp;<em>C. Finn, P. Abbeel, and S. Levine.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v70\/finn17a\/finn17a.pdf\">paper<\/a><\/li><li><strong>Bayesian model-agnostic meta-learning,<\/strong>&nbsp;in NeurIPS, 2018.&nbsp;<em>J. Yoon, T. Kim, O. Dia, S. Kim, Y. Bengio, and S. Ahn.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/7963-bayesian-model-agnostic-meta-learning.pdf\">paper<\/a><\/li><li><strong>Probabilistic model-agnostic meta-learning,<\/strong>&nbsp;in NeurIPS, 2018.&nbsp;<em>C. Finn, K. Xu, and S. Levine.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/8161-probabilistic-model-agnostic-meta-learning.pdf\">paper<\/a><\/li><li><strong>Gradient-based meta-learning with learned layerwise metric and subspace,<\/strong>&nbsp;in ICML, 2018.&nbsp;<em>Y. Lee and S. 
Choi.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v80\/lee18a\/lee18a.pdf\">paper<\/a><\/li><li><strong>Recasting gradient-based meta-learning as hierarchical Bayes,<\/strong>&nbsp;in ICLR, 2018.&nbsp;<em>E. Grant, C. Finn, S. Levine, T. Darrell, and T. Griffiths.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=BJ_UL-k0b\">paper<\/a><\/li><li><strong>Few-shot human motion prediction via meta-learning,<\/strong>&nbsp;in ECCV, 2018.&nbsp;<em>L.-Y. Gui, Y.-X. Wang, D. Ramanan, and J. Moura.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/papers\/Liangyan_Gui_Few-Shot_Human_Motion_ECCV_2018_paper.pdf\">paper<\/a><\/li><li><strong>The effects of negative adaptation in model-agnostic meta-learning,<\/strong>&nbsp;arXiv preprint, 2018.&nbsp;<em>T. Deleu and Y. Bengio.<\/em>&nbsp;<a href=\"http:\/\/metalearning.ml\/2018\/papers\/metalearn2018_paper76.pdf\">paper<\/a><\/li><li><strong>Unsupervised meta-learning for few-shot image classification,<\/strong>&nbsp;in NeurIPS, 2019.&nbsp;<em>S. Khodadadeh, L. B\u00f6l\u00f6ni, and M. Shah.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/9203-unsupervised-meta-learning-for-few-shot-image-classification.pdf\">paper<\/a><\/li><li><strong>Amortized Bayesian meta-learning,<\/strong>&nbsp;in ICLR, 2019.&nbsp;<em>S. Ravi and A. Beatson.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=rkgpy3C5tX\">paper<\/a><\/li><li><strong>Meta-learning with latent embedding optimization,<\/strong>&nbsp;in ICLR, 2019.&nbsp;<em>A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=BJgklhAcK7\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/deepmind\/leo\">code<\/a><\/li><li><strong>Meta relational learning for few-shot link prediction in knowledge graphs,<\/strong>&nbsp;in EMNLP-IJCNLP, 2019.&nbsp;<em>M. Chen, W. Zhang, W. Zhang, Q. Chen, and H. Chen.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/D19-1431.pdf\">paper<\/a><\/li><li><strong>Adapting meta knowledge graph information for multi-hop reasoning over few-shot relations,<\/strong>&nbsp;in EMNLP-IJCNLP, 2019.&nbsp;<em>X. Lv, Y. Gu, X. Han, L. Hou, J. Li, and Z. Liu.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/D19-1334.pdf\">paper<\/a><\/li><li><strong>LGM-Net: Learning to generate matching networks for few-shot learning,<\/strong>&nbsp;in ICML, 2019.&nbsp;<em>H. Li, W. Dong, X. Mei, C. Ma, F. Huang, and B.-G. Hu.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v97\/li19c\/li19c.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/likesiwell\/LGM-Net\/\">code<\/a><\/li><li><strong>Meta R-CNN: Towards general solver for instance-level low-shot learning,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>X. Yan, Z. Chen, A. Xu, X. Wang, X. Liang, and L. Lin.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Yan_Meta_R-CNN_Towards_General_Solver_for_Instance-Level_Low-Shot_Learning_ICCV_2019_paper.pdf\">paper<\/a><\/li><li><strong>Task agnostic meta-learning for few-shot learning,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>M. A. Jamal, and G.-J. Qi.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Jamal_Task_Agnostic_Meta-Learning_for_Few-Shot_Learning_CVPR_2019_paper.pdf\">paper<\/a><\/li><li><strong>Meta-transfer learning for few-shot learning,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>Q. Sun, Y. Liu, T.-S. Chua, and B. 
Schiele.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Sun_Meta-Transfer_Learning_for_Few-Shot_Learning_CVPR_2019_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/yaoyao-liu\/meta-transfer-learning\">code<\/a><\/li><li><strong>Meta-learning of neural architectures for few-shot learning,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>T. Elsken, B. Staffler, J. H. Metzen, and F. Hutter.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Elsken_Meta-Learning_of_Neural_Architectures_for_Few-Shot_Learning_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>Attentive weights generation for few shot learning via information maximization,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>Y. Guo, and N.-M. Cheung.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Guo_Attentive_Weights_Generation_for_Few_Shot_Learning_via_Information_Maximization_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot open-set recognition using meta-learning,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>B. Liu, H. Kang, H. Li, G. Hua, and N. Vasconcelos.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Liu_Few-Shot_Open-Set_Recognition_Using_Meta-Learning_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>Incremental few-shot object detection,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>J.-M. Perez-Rua, X. Zhu, T. M. Hospedales, and T. Xiang.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Perez-Rua_Incremental_Few-Shot_Object_Detection_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>Automated relational meta-learning,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>H. Yao, X. Wu, Z. Tao, Y. Li, B. Ding, R. Li, and Z. Li.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=rklp93EtwH\">paper<\/a><\/li><li><strong>Meta-learning with warped gradient descent,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>S. Flennerhag, A. A. Rusu, R. Pascanu, F. Visin, H. Yin, and R. Hadsell.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=rkeiQlBFPB\">paper<\/a><\/li><li><strong>Meta-learning without memorization,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>M. Yin, G. Tucker, M. Zhou, S. Levine, and C. Finn.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=BklEFpEYwS\">paper<\/a><\/li><li><strong>ES-MAML: Simple Hessian-free meta learning,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>X. Song, W. Gao, Y. Yang, K. Choromanski, A. Pacchiano, and Y. Tang.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=S1exA2NtDB\">paper<\/a><\/li><li><strong>Self-supervised tuning for few-shot segmentation,<\/strong>&nbsp;in IJCAI, 2020.&nbsp;<em>K. Zhu, W. Zhai, and Y. Cao.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/Proceedings\/2020\/0142.pdf\">paper<\/a><\/li><li><strong>Multi-attention meta learning for few-shot fine-grained image recognition,<\/strong>&nbsp;in IJCAI, 2020.&nbsp;<em>Y. Zhu, C. Liu, and S. Jiang.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/Proceedings\/2020\/0152.pdf\">paper<\/a><\/li><li><strong>An ensemble of epoch-wise empirical Bayes for few-shot learning,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>Y. Liu, B. Schiele, and Q. Sun.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123610392.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/gitlab.mpi-klsb.mpg.de\/yaoyaoliu\/e3bm\">code<\/a><\/li><li><strong>Incremental few-shot meta-learning via indirect discriminant alignment,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>Q. Liu, O. Majumder, A. Achille, A. Ravichandran, R. 
Bhotika, and S. Soatto.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123520664.pdf\">paper<\/a><\/li><li><strong>Model-agnostic boundary-adversarial sampling for test-time generalization in few-shot learning,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>J. Kim, H. Kim, and G. Kim.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123460579.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/jaekyeom\/MABAS\">code<\/a><\/li><li><strong>Bayesian meta-learning for the few-shot setting via deep kernels,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>M. Patacchiola, J. Turner, E. J. Crowley, M. O&#8217;Boyle, and A. J. Storkey.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/b9cfe8b6042cf759dc4c0cccb27a6737-Paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/BayesWatch\/deep-kernel-transfer\">code<\/a><\/li><li><strong>OOD-MAML: Meta-learning for few-shot out-of-distribution detection and classification,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>T. Jeong, and H. Kim.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/28e209b61a52482a0ae1cb9f5959c792-Paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/twj-KAIST\/OOD-MAML\">code<\/a><\/li><li><strong>Unraveling meta-learning: Understanding feature representations for few-shot tasks,<\/strong>&nbsp;in ICML, 2020.&nbsp;<em>M. Goldblum, S. Reich, L. Fowl, R. Ni, V. Cherepanova, and T. Goldstein.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v119\/goldblum20a\/goldblum20a.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/goldblum\/FeatureClustering\">code<\/a><\/li><li><strong>Node classification on graphs with few-shot novel labels via meta transformed network embedding,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>L. Lan, P. Wang, X. Du, K. Song, J. Tao, and X. Guan.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/c055dcc749c2632fd4dd806301f05ba6-Paper.pdf\">paper<\/a><\/li><li><strong>Adversarially robust few-shot learning: A meta-learning approach,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>M. Goldblum, L. Fowl, and T. Goldstein.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/cfee398643cbc3dc5eefc89334cacdc1-Paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/goldblum\/AdversarialQuerying\">code<\/a><\/li><li><strong>BOIL: Towards representation change for few-shot learning,<\/strong>&nbsp;in ICLR, 2021.&nbsp;<em>J. Oh, H. Yoo, C. Kim, and S. Yun.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=umIdUL8rMH\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/HJ-Yoo\/BOIL\">code<\/a><\/li><li><strong>Few-shot open-set recognition by transformation consistency,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>M. Jeong, S. Choi, and C. Kim.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Jeong_Few-Shot_Open-Set_Recognition_by_Transformation_Consistency_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Improving generalization in meta-learning via task augmentation,<\/strong>&nbsp;in ICML, 2021.&nbsp;<em>H. Yao, L. Huang, L. Zhang, Y. Wei, L. Tian, J. Zou, J. Huang, and Z. Li.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v139\/yao21b\/yao21b.pdf\">paper<\/a><\/li><li><strong>A representation learning perspective on the importance of train-validation splitting in meta-learning,<\/strong>&nbsp;in ICML, 2021.&nbsp;<em>N. Saunshi, A. Gupta, and W. 
Hu.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v139\/saunshi21a\/saunshi21a.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/nsaunshi\/meta_tr_val_split\">code<\/a><\/li><li><strong>Data augmentation for meta-learning,<\/strong>&nbsp;in ICML, 2021.&nbsp;<em>R. Ni, M. Goldblum, A. Sharaf, K. Kong, and T. Goldstein.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v139\/ni21a\/ni21a.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/RenkunNi\/MetaAug\">code<\/a><\/li><li><strong>Task cooperation for semi-supervised few-shot learning,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>H. Ye, X. Li, and D.-C. Zhan.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17277\/17084\">paper<\/a><\/li><li><strong>Conditional self-supervised learning for few-shot classification,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>Y. An, H. Xue, X. Zhao, and L. Zhang.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0295.pdf\">paper<\/a><\/li><li><strong>Cross-domain few-shot classification via adversarial task augmentation,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>H. Wang, and Z.-H. Deng.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0149.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/Haoqing-Wang\/CDFSL-ATA\">code<\/a><\/li><li><strong>DReCa: A general task augmentation strategy for few-shot natural language inference,<\/strong>&nbsp;in NAACL-HLT, 2021.&nbsp;<em>S. Murty, T. Hashimoto, and C. D. Manning.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.naacl-main.88.pdf\">paper<\/a><\/li><li><strong>MetaXL: Meta representation transformation for low-resource cross-lingual learning,<\/strong>&nbsp;in NAACL-HLT, 2021.&nbsp;<em>M. Xia, G. Zheng, S. Mukherjee, M. Shokouhi, G. Neubig, and A. H. Awadallah.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.naacl-main.42.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/microsoft\/MetaXL\">code<\/a><\/li><li><strong>Meta-learning with task-adaptive loss function for few-shot learning,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>S. Baik, J. Choi, H. Kim, D. Cho, J. Min, and K. M. Lee.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Baik_Meta-Learning_With_Task-Adaptive_Loss_Function_for_Few-Shot_Learning_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/baiksung\/MeTAL\">code<\/a><\/li><li><strong>Meta-Baseline: Exploring simple meta-learning for few-shot learning,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>Y. Chen, Z. Liu, H. Xu, T. Darrell, and X. Wang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Chen_Meta-Baseline_Exploring_Simple_Meta-Learning_for_Few-Shot_Learning_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>A lazy approach to long-horizon gradient-based meta-learning,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>M. A. Jamal, L. Wang, and B. Gong.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Jamal_A_Lazy_Approach_to_Long-Horizon_Gradient-Based_Meta-Learning_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Task-aware part mining network for few-shot learning,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>J. Wu, T. Zhang, Y. Zhang, and F. Wu.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Wu_Task-Aware_Part_Mining_Network_for_Few-Shot_Learning_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Binocular mutual learning for improving few-shot classification,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>Z. Zhou, X. Qiu, J. Xie, J. Wu, and C. 
Zhang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Zhou_Binocular_Mutual_Learning_for_Improving_Few-Shot_Classification_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/zzqzzq\/bml\">code<\/a><\/li><li><strong>Meta-learning with an adaptive task scheduler,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>H. Yao, Y. Wang, Y. Wei, P. Zhao, M. Mahdavi, D. Lian, and C. Finn.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/3dc4876f3f08201c7c76cb71fa1da439-Paper.pdf\">paper<\/a><\/li><li><strong>Memory efficient meta-learning with large images,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>J. Bronskill, D. Massiceti, M. Patacchiola, K. Hofmann, S. Nowozin, and R. Turner.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/cc1aa436277138f61cda703991069eaf-Paper.pdf\">paper<\/a><\/li><li><strong>EvoGrad: Efficient gradient-based meta-learning and hyperparameter optimization,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>O. Bohdal, Y. Yang, and T. Hospedales.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/bac49b876d5dfc9cd169c22ef5178ca7-Paper.pdf\">paper<\/a><\/li><li><strong>Towards enabling meta-learning from target models,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>S. Lu, H. Ye, L. Gan, and D. Zhan.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/43baa6762fa81bb43b39c62553b2970d-Paper.pdf\">paper<\/a><\/li><li><strong>The role of global labels in few-shot classification and how to infer them,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>R. Wang, M. Pontil, and C. Ciliberto.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/e3b6fb0fd4df098162eede3313c54a8d-Paper.pdf\">paper<\/a><\/li><li><strong>How to train your MAML to excel in few-shot classification,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>H. Ye, and W. Chao.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=49h_IkpJtaE\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/Han-Jia\/UNICORN-MAML\">code<\/a><\/li><li><strong>Meta-learning with fewer tasks through task interpolation,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>H. Yao, L. Zhang, and C. Finn.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=ajXWF7bVR8d\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/huaxiuyao\/MLTI\">code<\/a><\/li><li><strong>Continuous-time meta-learning with forward mode differentiation,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>T. Deleu, D. Kanaa, L. Feng, G. Kerg, Y. Bengio, G. Lajoie, and P. Bacon.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=57PipS27Km\">paper<\/a><\/li><li><strong>Bootstrapped meta-learning,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>S. Flennerhag, Y. Schroecker, T. Zahavy, H. v. Hasselt, D. Silver, and S. Singh.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=b-ny3x071E5\">paper<\/a><\/li><li><strong>Learning prototype-oriented set representations for meta-learning,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>D. d. Guo, L. Tian, M. Zhang, M. Zhou, and H. Zha.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=WH6u2SvlLp4\">paper<\/a><\/li><li><strong>Dynamic kernel selection for improved generalization and memory efficiency in meta-learning,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>A. Chavan, R. Tiwari, U. Bamba, and D. K. 
Gupta.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Chavan_Dynamic_Kernel_Selection_for_Improved_Generalization_and_Memory_Efficiency_in_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/transmuteAI\/MetaDOCK\">code<\/a><\/li><li><strong>What matters for meta-learning vision regression tasks?,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>N. Gao, H. Ziesche, N. A. Vien, M. Volpp, and G. Neumann.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Gao_What_Matters_for_Meta-Learning_Vision_Regression_Tasks_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/boschresearch\/what-matters-for-meta-learning\">code<\/a><\/li><li><strong>Multidimensional belief quantification for label-efficient meta-learning,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>D. S. Pandey, and Q. Yu.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Pandey_Multidimensional_Belief_Quantification_for_Label-Efficient_Meta-Learning_CVPR_2022_paper.pdf\">paper<\/a><\/li><\/ol>\n\n\n\n<h3><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#learning-search-steps\"><\/a>Learning Search Steps<\/h3>\n\n\n\n<ol><li><strong>Optimization as a model for few-shot learning,<\/strong>&nbsp;in ICLR, 2017.&nbsp;<em>S. Ravi and H. Larochelle.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=rJY0-Kcll\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/twitter\/meta-learning-lstm\">code<\/a><\/li><li><strong>Meta Navigator: Search for a good adaptation policy for few-shot learning,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>C. Zhang, H. Ding, G. Lin, R. Li, C. Wang, and C. Shen.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Zhang_Meta_Navigator_Search_for_a_Good_Adaptation_Policy_for_Few-Shot_ICCV_2021_paper.pdf\">paper<\/a><\/li><\/ol>\n\n\n\n<h2><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#applications\"><\/a><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#content\">Applications<\/a><\/h2>\n\n\n\n<h3><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#computer-vision\"><\/a>Computer Vision<\/h3>\n\n\n\n<ol><li><strong>Learning robust visual-semantic embeddings,<\/strong>&nbsp;in ICCV, 2017.&nbsp;<em>Y.-H. Tsai, L.-K. Huang, and R. Salakhutdinov.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_ICCV_2017\/papers\/Tsai_Learning_Robust_Visual-Semantic_ICCV_2017_paper.pdf\">paper<\/a><\/li><li><strong>One-shot action localization by learning sequence matching network,<\/strong>&nbsp;in CVPR, 2018.&nbsp;<em>H. Yang, X. He, and F. Porikli.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_cvpr_2018\/papers\/Yang_One-Shot_Action_Localization_CVPR_2018_paper.pdf\">paper<\/a><\/li><li><strong>Incremental few-shot learning for pedestrian attribute recognition,<\/strong>&nbsp;in IJCAI, 2019.&nbsp;<em>L. Xiang, X. Jin, G. Ding, J. Han, and L. Li.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/Proceedings\/2019\/0543.pdf\">paper<\/a><\/li><li><strong>Few-shot video-to-video synthesis,<\/strong>&nbsp;in NeurIPS, 2019.&nbsp;<em>T.-C. Wang, M.-Y. Liu, A. Tao, G. Liu, J. Kautz, and B. 
\n\n\n\n<h2><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#applications\"><\/a><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#content\">Applications<\/a><\/h2>\n\n\n\n<h3><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#computer-vision\"><\/a>Computer Vision<\/h3>\n\n\n\n<ol><li><strong>Learning robust visual-semantic embeddings,<\/strong>&nbsp;in ICCV, 2017.&nbsp;<em>Y.-H. Tsai, L.-K. Huang, and R. Salakhutdinov.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_ICCV_2017\/papers\/Tsai_Learning_Robust_Visual-Semantic_ICCV_2017_paper.pdf\">paper<\/a><\/li><li><strong>One-shot action localization by learning sequence matching network,<\/strong>&nbsp;in CVPR, 2018.&nbsp;<em>H. Yang, X. He, and F. Porikli.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_cvpr_2018\/papers\/Yang_One-Shot_Action_Localization_CVPR_2018_paper.pdf\">paper<\/a><\/li><li><strong>Incremental few-shot learning for pedestrian attribute recognition,<\/strong>&nbsp;in IJCAI, 2019.&nbsp;<em>L. Xiang, X. Jin, G. Ding, J. Han, and L. Li.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/Proceedings\/2019\/0543.pdf\">paper<\/a><\/li><li><strong>Few-shot video-to-video synthesis,<\/strong>&nbsp;in NeurIPS, 2019.&nbsp;<em>T.-C. Wang, M.-Y. Liu, A. Tao, G. Liu, J. Kautz, and B. Catanzaro.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/8746-few-shot-video-to-video-synthesis.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/NVlabs\/few-shot-vid2vid\">code<\/a><\/li><li><strong>Few-shot object detection via feature reweighting,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>B. Kang, Z. Liu, X. Wang, F. Yu, J. Feng, and T. Darrell.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Kang_Few-Shot_Object_Detection_via_Feature_Reweighting_ICCV_2019_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/bingykang\/Fewshot_Detection\">code<\/a><\/li><li><strong>Few-shot unsupervised image-to-image translation,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>M.-Y. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, and J. Kautz.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Liu_Few-Shot_Unsupervised_Image-to-Image_Translation_ICCV_2019_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/NVlabs\/FUNIT\">code<\/a><\/li><li><strong>Feature weighting and boosting for few-shot segmentation,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>K. Nguyen, and S. Todorovic.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Nguyen_Feature_Weighting_and_Boosting_for_Few-Shot_Segmentation_ICCV_2019_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot adaptive gaze estimation,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>S. Park, S. D. Mello, P. Molchanov, U. Iqbal, O. Hilliges, and J. Kautz.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Park_Few-Shot_Adaptive_Gaze_Estimation_ICCV_2019_paper.pdf\">paper<\/a><\/li><li><strong>AMP: Adaptive masked proxies for few-shot segmentation,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>M. Siam, B. N. Oreshkin, and M. Jagersand.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Siam_AMP_Adaptive_Masked_Proxies_for_Few-Shot_Segmentation_ICCV_2019_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/MSiam\/AdaptiveMaskedProxies\">code<\/a><\/li><li><strong>Few-shot generalization for single-image 3D reconstruction via priors,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>B. Wallace, and B. Hariharan.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Wallace_Few-Shot_Generalization_for_Single-Image_3D_Reconstruction_via_Priors_ICCV_2019_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot adversarial learning of realistic neural talking head models,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>E. Zakharov, A. Shysheya, E. Burkov, and V. Lempitsky.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Zakharov_Few-Shot_Adversarial_Learning_of_Realistic_Neural_Talking_Head_Models_ICCV_2019_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/vincent-thevenin\/Realistic-Neural-Talking-Head-Models\">code<\/a><\/li><li><strong>Pyramid graph networks with connection attentions for region-based one-shot semantic segmentation,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>C. Zhang, G. Lin, F. Liu, J. Guo, Q. Wu, and R. Yao.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Zhang_Pyramid_Graph_Networks_With_Connection_Attentions_for_Region-Based_One-Shot_Semantic_ICCV_2019_paper.pdf\">paper<\/a><\/li><li><strong>Time-conditioned action anticipation in one shot,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>Q. Ke, M. Fritz, and B. 
Schiele.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Ke_Time-Conditioned_Action_Anticipation_in_One_Shot_CVPR_2019_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot learning with localization in realistic settings,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>D. Wertheimer, and B. Hariharan.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Wertheimer_Few-Shot_Learning_With_Localization_in_Realistic_Settings_CVPR_2019_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/daviswer\/fewshotlocal\">code<\/a><\/li><li><strong>Improving few-shot user-specific gaze adaptation via gaze redirection synthesis,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>Y. Yu, G. Liu, and J.-M. Odobez.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Yu_Improving_Few-Shot_User-Specific_Gaze_Adaptation_via_Gaze_Redirection_Synthesis_CVPR_2019_paper.pdf\">paper<\/a><\/li><li><strong>CANet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>C. Zhang, G. Lin, F. Liu, R. Yao, and C. Shen.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Zhang_CANet_Class-Agnostic_Segmentation_Networks_With_Iterative_Refinement_and_Attentive_Few-Shot_CVPR_2019_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/icoz69\/CaNet\">code<\/a><\/li><li><strong>Multi-level semantic feature augmentation for one-shot learning,<\/strong>&nbsp;TIP, 2019.&nbsp;<em>Z. Chen, Y. Fu, Y. Zhang, Y.-G. Jiang, X. Xue, and L. Sigal.<\/em>&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1804.05298\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/tankche1\/Semantic-Feature-Augmentation-in-Few-shot-Learning\">code<\/a><\/li><li><strong>Few-shot pill recognition,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>S. Ling, A. Pastor, J. Li, Z. Che, J. Wang, J. Kim, and P. L. Callet.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Ling_Few-Shot_Pill_Recognition_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>LT-Net: Label transfer by learning reversible voxel-wise correspondence for one-shot medical image segmentation,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>S. Wang, S. Cao, D. Wei, R. Wang, K. Ma, L. Wang, D. Meng, and Y. Zheng.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Wang_LT-Net_Label_Transfer_by_Learning_Reversible_Voxel-Wise_Correspondence_for_One-Shot_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>3FabRec: Fast few-shot face alignment by reconstruction,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>B. Browatzki, and C. Wallraven.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Browatzki_3FabRec_Fast_Few-Shot_Face_Alignment_by_Reconstruction_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot video classification via temporal alignment,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>K. Cao, J. Ji, Z. Cao, C.-Y. Chang, and J. C. Niebles.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Cao_Few-Shot_Video_Classification_via_Temporal_Alignment_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>One-shot adversarial attacks on visual tracking with dual attention,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>X. Chen, X. Yan, F. Zheng, Y. Jiang, S.-T. Xia, Y. Zhao, and R. 
Ji.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Chen_One-Shot_Adversarial_Attacks_on_Visual_Tracking_With_Dual_Attention_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>FGN: Fully guided network for few-shot instance segmentation,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>Z. Fan, J.-G. Yu, Z. Liang, J. Ou, C. Gao, G.-S. Xia, and Y. Li.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Fan_FGN_Fully_Guided_Network_for_Few-Shot_Instance_Segmentation_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>CRNet: Cross-reference networks for few-shot segmentation,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>W. Liu, C. Zhang, G. Lin, and F. Liu.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Liu_CRNet_Cross-Reference_Networks_for_Few-Shot_Segmentation_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>Revisiting pose-normalization for fine-grained few-shot recognition,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>L. Tang, D. Wertheimer, and B. Hariharan.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Tang_Revisiting_Pose-Normalization_for_Fine-Grained_Few-Shot_Recognition_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot learning of part-specific probability space for 3D shape segmentation,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>L. Wang, X. Li, and Y. Fang.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Wang_Few-Shot_Learning_of_Part-Specific_Probability_Space_for_3D_Shape_Segmentation_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>Semi-supervised learning for few-shot image-to-image translation,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>Y. Wang, S. Khan, A. Gonzalez-Garcia, J. van de Weijer, and F. S. Khan.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Wang_Semi-Supervised_Learning_for_Few-Shot_Image-to-Image_Translation_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>Multi-domain learning for accurate and few-shot color constancy,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>J. Xiao, S. Gu, and L. Zhang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Xiao_Multi-Domain_Learning_for_Accurate_and_Few-Shot_Color_Constancy_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>One-shot domain adaptation for face generation,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>C. Yang, and S.-N. Lim.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Yang_One-Shot_Domain_Adaptation_for_Face_Generation_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>MetaPix: Few-shot video retargeting,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>J. Lee, D. Ramanan, and R. Girdhar.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=SJx1URNKwH\">paper<\/a><\/li><li><strong>Few-shot human motion prediction via learning novel motion dynamics,<\/strong>&nbsp;in IJCAI, 2020.&nbsp;<em>C. Zang, M. Pei, and Y. Kong.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/Proceedings\/2020\/0118.pdf\">paper<\/a><\/li><li><strong>Shaping visual representations with language for few-shot classification,<\/strong>&nbsp;in ACL, 2020.&nbsp;<em>J. Mu, P. Liang, and N. D. Goodman.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.acl-main.436.pdf\">paper<\/a><\/li><li><strong>MarioNETte: Few-shot face reenactment preserving identity of unseen targets,<\/strong>&nbsp;in AAAI, 2020.&nbsp;<em>S. Ha, M. Kersner, B. Kim, S. Seo, and D. 
Kim.<\/em>&nbsp;<a href=\"https:\/\/aaai.org\/ojs\/index.php\/AAAI\/article\/view\/6721\">paper<\/a><\/li><li><strong>One-shot learning for long-tail visual relation detection,<\/strong>&nbsp;in AAAI, 2020.&nbsp;<em>W. Wang, M. Wang, S. Wang, G. Long, L. Yao, G. Qi, and Y. Chen.<\/em>&nbsp;<a href=\"https:\/\/aaai.org\/ojs\/index.php\/AAAI\/article\/view\/6904\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/Witt-Wang\/oneshot\">code<\/a><\/li><li><strong>Differentiable meta-learning model for few-shot semantic segmentation,<\/strong>&nbsp;in AAAI, 2020.&nbsp;<em>P. Tian, Z. Wu, L. Qi, L. Wang, Y. Shi, and Y. Gao.<\/em>&nbsp;<a href=\"https:\/\/aaai.org\/ojs\/index.php\/AAAI\/article\/view\/6887\">paper<\/a><\/li><li><strong>Part-aware prototype network for few-shot semantic segmentation,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>Y. Liu, X. Zhang, S. Zhang, and X. He.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123540137.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/Xiangyi1996\/PPNet-PyTorch\">code<\/a><\/li><li><strong>Prototype mixture models for few-shot semantic segmentation,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>B. Yang, C. Liu, B. Li, J. Jiao, and Q. Ye.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123530749.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/Yang-Bob\/PMMs\">code<\/a><\/li><li><strong>Self-supervision with superpixels: Training few-shot medical image segmentation without annotation,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>C. Ouyang, C. Biffi, C. Chen, T. Kart, H. Qiu, and D. Rueckert.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123740749.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/cheng-01037\/Self-supervised-Fewshot-Medical-Image-Segmentation\">code<\/a><\/li><li><strong>Few-shot action recognition with permutation-invariant attention,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>H. Zhang, L. Zhang, X. Qi, H. Li, P. H. S. Torr, and P. Koniusz.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123500511.pdf\">paper<\/a><\/li><li><strong>Few-shot compositional font generation with dual memory,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>J. Cha, S. Chun, G. Lee, B. Lee, S. Kim, and H. Lee.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123640715.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/clovaai\/dmfont\">code<\/a><\/li><li><strong>Few-shot object detection and viewpoint estimation for objects in the wild,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>Y. Xiao, and R. Marlet.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123620188.pdf\">paper<\/a><\/li><li><strong>Few-shot scene-adaptive anomaly detection,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>Y. Lu, F. Yu, M. K. K. Reddy, and Y. Wang.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123500120.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/yiweilu3\/Few-shot-Scene-adaptive-Anomaly-Detection\">code<\/a><\/li><li><strong>Few-shot semantic segmentation with democratic attention networks,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>H. Wang, X. Zhang, Y. Hu, Y. Yang, X. Cao, and X. Zhen.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123580715.pdf\">paper<\/a><\/li><li><strong>Few-shot single-view 3-D object reconstruction with compositional priors,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>M. 
Michalkiewicz, S. Parisot, S. Tsogkas, M. Baktashmotlagh, A. Eriksson, and E. Belilovsky.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123700613.pdf\">paper<\/a><\/li><li><strong>COCO-FUNIT: Few-shot unsupervised image translation with a content conditioned style encoder,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>K. Saito, K. Saenko, and M. Liu.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123480392.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/nvlabs.github.io\/COCO-FUNIT\/\">code<\/a><\/li><li><strong>Deep complementary joint model for complex scene registration and few-shot segmentation on medical images,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>Y. He, T. Li, G. Yang, Y. Kong, Y. Chen, H. Shu, J. Coatrieux, J. Dillenseger, and S. Li.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123630749.pdf\">paper<\/a><\/li><li><strong>Multi-scale positive sample refinement for few-shot object detection,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>J. Wu, S. Liu, D. Huang, and Y. Wang.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123610443.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/jiaxi-wu\/MPSR\">code<\/a><\/li><li><strong>Large-scale few-shot learning via multi-modal knowledge discovery,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>S. Wang, J. Yue, J. Liu, Q. Tian, and M. Wang.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123550715.pdf\">paper<\/a><\/li><li><strong>Graph convolutional networks for learning with few clean and many noisy labels,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>A. Iscen, G. Tolias, Y. Avrithis, O. Chum, and C. Schmid.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123550290.pdf\">paper<\/a><\/li><li><strong>Self-supervised few-shot learning on point clouds,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>C. Sharma, and M. Kaul.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/50c1f44e426560f3f2cdcb3e19e39903-Paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/charusharma1991\/SSL_PointClouds\">code<\/a><\/li><li><strong>Restoring negative information in few-shot object detection,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>Y. Yang, F. Wei, M. Shi, and G. Li.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/240ac9371ec2671ae99847c3ae2e6384-Paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/yang-yk\/NP-RepMet\">code<\/a><\/li><li><strong>Few-shot image generation with elastic weight consolidation,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>Y. Li, R. Zhang, J. Lu, and E. Shechtman.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/b6d767d2f8ed5d21a44b0e5886680cb9-Paper.pdf\">paper<\/a><\/li><li><strong>Few-shot visual reasoning with meta-analogical contrastive learning,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>Y. Kim, J. Shin, E. Yang, and S. J. Hwang.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/c39e1a03859f9ee215bc49131d0caf33-Paper.pdf\">paper<\/a><\/li><li><strong>CrossTransformers: spatially-aware few-shot transfer,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>C. Doersch, A. Gupta, and A. Zisserman.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/fa28c6cdf8dd6f41a657c3d7caa5c709-Paper.pdf\">paper<\/a><\/li><li><strong>Make one-shot video object segmentation efficient again,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>T. 
Meinhardt, and L. Leal-Taix\u00e9.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/781397bc0630d47ab531ea850bddcf63-Paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/dvl-tum\/e-osvos\">code<\/a><\/li><li><strong>Frustratingly simple few-shot object detection,<\/strong>&nbsp;in ICML, 2020.&nbsp;<em>X. Wang, T. E. Huang, J. Gonzalez, T. Darrell, and F. Yu.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v119\/wang20j\/wang20j.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/ucbdrive\/few-shot-object-detection\">code<\/a><\/li><li><strong>Adversarial style mining for one-shot unsupervised domain adaptation,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>Y. Luo, P. Liu, T. Guan, J. Yu, and Y. Yang.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/781397bc0630d47ab531ea850bddcf63-Paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/RoyalVane\/ASM\">code<\/a><\/li><li><strong>Disentangling 3D prototypical networks for few-shot concept learning,<\/strong>&nbsp;in ICLR, 2021.&nbsp;<em>M. Prabhudesai, S. Lal, D. Patil, H. Tung, A. W. Harley, and K. Fragkiadaki.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=-Lr-u0b42he\">paper<\/a><\/li><li><strong>Learning normal dynamics in videos with meta prototype network,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>H. Lv, C. Chen, Z. Cui, C. Xu, Y. Li, and J. Yang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Lv_Learning_Normal_Dynamics_in_Videos_With_Meta_Prototype_Network_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/ktr-hubrt\/MPN\/\">code<\/a><\/li><li><strong>Learning dynamic alignment via meta-filter for few-shot learning,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>C. Xu, Y. Fu, C. Liu, C. Wang, J. Li, F. Huang, L. Zhang, and X. Xue.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Xu_Learning_Dynamic_Alignment_via_Meta-Filter_for_Few-Shot_Learning_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Delving deep into many-to-many attention for few-shot video object segmentation,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>H. Chen, H. Wu, N. Zhao, S. Ren, and S. He.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Chen_Delving_Deep_Into_Many-to-Many_Attention_for_Few-Shot_Video_Object_Segmentation_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/scutpaul\/DANet\">code<\/a><\/li><li><strong>Adaptive prototype learning and allocation for few-shot segmentation,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>G. Li, V. Jampani, L. Sevilla-Lara, D. Sun, J. Kim, and J. Kim.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Li_Adaptive_Prototype_Learning_and_Allocation_for_Few-Shot_Segmentation_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/git.io\/ASGNet\">code<\/a><\/li><li><strong>FAPIS: A few-shot anchor-free part-based instance segmenter,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>K. Nguyen, and S. Todorovic.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Nguyen_FAPIS_A_Few-Shot_Anchor-Free_Part-Based_Instance_Segmenter_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>FSCE: Few-shot object detection via contrastive proposal encoding,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>B. Sun, B. Li, S. Cai, Y. Yuan, and C. 
Zhang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Sun_FSCE_Few-Shot_Object_Detection_via_Contrastive_Proposal_Encoding_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/MegviiDetection\/FSCE\">code<\/a><\/li><li><strong>Few-shot 3D point cloud semantic segmentation,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>N. Zhao, T. Chua, and G. H. Lee.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Zhao_Few-Shot_3D_Point_Cloud_Semantic_Segmentation_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/Na-Z\/attMPTI\">code<\/a><\/li><li><strong>Generalized few-shot object detection without forgetting,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>Z. Fan, Y. Ma, Z. Li, and J. Sun.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Fan_Generalized_Few-Shot_Object_Detection_Without_Forgetting_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot human motion transfer by personalized geometry and texture modeling,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>Z. Huang, X. Han, J. Xu, and T. Zhang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Huang_Few-Shot_Human_Motion_Transfer_by_Personalized_Geometry_and_Texture_Modeling_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/HuangZhiChao95\/FewShotMotionTransfer\">code<\/a><\/li><li><strong>Labeled from unlabeled: Exploiting unlabeled data for few-shot deep HDR deghosting,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>K. R. Prabhakar, G. Senthil, S. Agrawal, R. V. Babu, and R. K. S. S. Gorthi.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Prabhakar_Labeled_From_Unlabeled_Exploiting_Unlabeled_Data_for_Few-Shot_Deep_HDR_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot transformation of common actions into time and space,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>P. Yang, P. Mettes, and C. G. M. Snoek.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Yang_Few-Shot_Transformation_of_Common_Actions_Into_Time_and_Space_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/PengWan-Yang\/few-shot-transformer\">code<\/a><\/li><li><strong>Temporal-relational CrossTransformers for few-shot action recognition,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>T. Perrett, A. Masullo, T. Burghardt, M. Mirmehdi, and D. Damen.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Perrett_Temporal-Relational_CrossTransformers_for_Few-Shot_Action_Recognition_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>pixelNeRF: Neural radiance fields from one or few images,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>A. Yu, V. Ye, M. Tancik, and A. Kanazawa.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Yu_pixelNeRF_Neural_Radiance_Fields_From_One_or_Few_Images_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/alexyu.net\/pixelnerf\">code<\/a><\/li><li><strong>Hallucination improves few-shot object detection,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>W. Zhang, and Y. Wang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Zhang_Hallucination_Improves_Few-Shot_Object_Detection_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot object detection via classification refinement and distractor retreatment,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>Y. Li, H. Zhu, Y. Cheng, W. Wang, C. S. Teo, C. Xiang, P. Vadakkepat, and T. H. 
Lee.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Li_Few-Shot_Object_Detection_via_Classification_Refinement_and_Distractor_Retreatment_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Dense relation distillation with context-aware aggregation for few-shot object detection,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>H. Hu, S. Bai, A. Li, J. Cui, and L. Wang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Hu_Dense_Relation_Distillation_With_Context-Aware_Aggregation_for_Few-Shot_Object_Detection_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/hzhupku\/DCNet\">code<\/a><\/li><li><strong>Few-shot segmentation without meta-learning: A good transductive inference is all you need?,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>M. Boudiaf, H. Kervadec, Z. I. Masud, P. Piantanida, I. B. Ayed, and J. Dolz.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Boudiaf_Few-Shot_Segmentation_Without_Meta-Learning_A_Good_Transductive_Inference_Is_All_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/mboudiaf\/RePRI-for-Few-Shot-Segmentation\">code<\/a><\/li><li><strong>Few-shot image generation via cross-domain correspondence,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>U. Ojha, Y. Li, J. Lu, A. A. Efros, Y. J. Lee, E. Shechtman, and R. Zhang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Ojha_Few-Shot_Image_Generation_via_Cross-Domain_Correspondence_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Self-guided and cross-guided learning for few-shot segmentation,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>B. Zhang, J. Xiao, and T. Qin.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Zhang_Self-Guided_and_Cross-Guided_Learning_for_Few-Shot_Segmentation_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/zbf1991\/SCL\">code<\/a><\/li><li><strong>Anti-aliasing semantic reconstruction for few-shot semantic segmentation,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>B. Liu, Y. Ding, J. Jiao, X. Ji, and Q. Ye.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Liu_Anti-Aliasing_Semantic_Reconstruction_for_Few-Shot_Semantic_Segmentation_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Beyond max-margin: Class margin equilibrium for few-shot object detection,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>B. Li, B. Yang, C. Liu, F. Liu, R. Ji, and Q. Ye.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Li_Beyond_Max-Margin_Class_Margin_Equilibrium_for_Few-Shot_Object_Detection_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/Bohao-Lee\/CME\">code<\/a><\/li><li><strong>Incremental few-shot instance segmentation,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>D. A. Ganea, B. Boom, and R. Poppe.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Ganea_Incremental_Few-Shot_Instance_Segmentation_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/danganea\/iMTFA\">code<\/a><\/li><li><strong>Scale-aware graph neural network for few-shot semantic segmentation,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>G. Xie, J. Liu, H. Xiong, and L. 
Shao.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Xie_Scale-Aware_Graph_Neural_Network_for_Few-Shot_Semantic_Segmentation_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Semantic relation reasoning for shot-stable few-shot object detection,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>C. Zhu, F. Chen, U. Ahmed, Z. Shen, and M. Savvides.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Zhu_Semantic_Relation_Reasoning_for_Shot-Stable_Few-Shot_Object_Detection_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Accurate few-shot object detection with support-query mutual guidance and hybrid loss,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>L. Zhang, S. Zhou, J. Guan, and J. Zhang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Zhang_Accurate_Few-Shot_Object_Detection_With_Support-Query_Mutual_Guidance_and_Hybrid_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Transformation invariant few-shot object detection,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>A. Li, and Z. Li.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Li_Transformation_Invariant_Few-Shot_Object_Detection_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>MetaHTR: Towards writer-adaptive handwritten text recognition,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>A. K. Bhunia, S. Ghose, A. Kumar, P. N. Chowdhury, A. Sain, and Y. Song.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Bhunia_MetaHTR_Towards_Writer-Adaptive_Handwritten_Text_Recognition_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>What if we only use real datasets for scene text recognition? Toward scene text recognition with fewer labels,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>J. Baek, Y. Matsui, and K. Aizawa.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Baek_What_if_We_Only_Use_Real_Datasets_for_Scene_Text_CVPR_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/ku21fan\/STR-Fewer-Labels\">code<\/a><\/li><li><strong>Few-shot font generation with localized style representations and factorization,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>S. Park, S. Chun, J. Cha, B. Lee, and H. Shim.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16340\/16147\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/clovaai\/lffont\">code<\/a><\/li><li><strong>Attributes-guided and pure-visual attention alignment for few-shot recognition,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>S. Huang, M. Zhang, Y. Kang, and D. Wang.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16957\/16764\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/bighuang624\/AGAM\">code<\/a><\/li><li><strong>One-shot face reenactment using appearance adaptive normalization,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>G. Yao, Y. Yuan, T. Shao, S. Li, S. Liu, Y. Liu, M. Wang, and K. Zhou.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16427\/16234\">paper<\/a><\/li><li><strong>FL-MSRE: A few-shot learning based approach to multimodal social relation extraction,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>H. Wan, M. Zhang, J. Du, Z. Huang, Y. Yang, and J. Z. Pan.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17639\/17446\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/sysulic\/FL-MSRE\">code<\/a><\/li><li><strong>StarNet: Towards weakly supervised few-shot object detection,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>L. 
Karlinsky, J. Shtok, A. Alfassy, M. Lichtenstein, S. Harary, E. Schwartz, S. Doveh, P. Sattigeri, R. Feris, A. Bronstein, and R. Giryes.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16268\/16075\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/leokarlin\/StarNet\">code<\/a><\/li><li><strong>Progressive one-shot human parsing,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>H. He, J. Zhang, B. Thuraisingham, and D. Tao.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16243\/16050\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/Charleshhy\/One-shot-Human-Parsing\">code<\/a><\/li><li><strong>Knowledge is power: Hierarchical-knowledge embedded meta-learning for visual reasoning in artistic domains,<\/strong>&nbsp;in KDD, 2021.&nbsp;<em>W. Zheng, L. Yan, C. Gou, and F.-Y. Wang.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3447548.3467285\">paper<\/a><\/li><li><strong>MEDA: Meta-learning with data augmentation for few-shot text classification,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>P. Sun, Y. Ouyang, W. Zhang, and X.-Y. Dai.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0541.pdf\">paper<\/a><\/li><li><strong>Learning implicit temporal alignment for few-shot video classification,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>S. Zhang, J. Zhou, and X. He.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0181.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/tonysy\/PyAction\">code<\/a><\/li><li><strong>Few-shot neural human performance rendering from sparse RGBD videos,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>A. Pang, X. Chen, H. Luo, M. Wu, J. Yu, and L. Xu.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0130.pdf\">paper<\/a><\/li><li><strong>Uncertainty-aware few-shot image classification,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>Z. Zhang, C. Lan, W. Zeng, Z. Chen, and S. Chan.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0471.pdf\">paper<\/a><\/li><li><strong>Few-shot learning with part discovery and augmentation from unlabeled images,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>W. Chen, C. Si, W. Wang, L. Wang, Z. Wang, and T. Tan.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0313.pdf\">paper<\/a><\/li><li><strong>Few-shot partial-label learning,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>Y. Zhao, G. Yu, L. Liu, Z. Yan, L. Cui, and C. Domeniconi.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0475.pdf\">paper<\/a><\/li><li><strong>One-shot affordance detection,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>H. Luo, W. Zhai, J. Zhang, Y. Cao, and D. Tao.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0124.pdf\">paper<\/a><\/li><li><strong>DeFRCN: Decoupled faster R-CNN for few-shot object detection,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>L. Qiao, Y. Zhao, Z. Li, X. Qiu, J. Wu, and C. Zhang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Qiao_DeFRCN_Decoupled_Faster_R-CNN_for_Few-Shot_Object_Detection_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Learning meta-class memory for few-shot semantic segmentation,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>Z. Wu, X. Shi, G. Lin, and J. 
Cai.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Wu_Learning_Meta-Class_Memory_for_Few-Shot_Semantic_Segmentation_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>UVStyle-Net: Unsupervised few-shot learning of 3D style similarity measure for B-Reps,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>P. Meltzer, H. Shayani, A. Khasahmadi, P. K. Jayaraman, A. Sanghi, and J. Lambourne.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Meltzer_UVStyle-Net_Unsupervised_Few-Shot_Learning_of_3D_Style_Similarity_Measure_for_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>LoFGAN: Fusing local representations for few-shot image generation,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>Z. Gu, W. Li, J. Huo, L. Wang, and Y. Gao.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Gu_LoFGAN_Fusing_Local_Representations_for_Few-Shot_Image_Generation_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Recurrent mask refinement for few-shot medical image segmentation,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>H. Tang, X. Liu, S. Sun, X. Yan, and X. Xie.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Tang_Recurrent_Mask_Refinement_for_Few-Shot_Medical_Image_Segmentation_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/uci-cbcl\/RP-Net\">code<\/a><\/li><li><strong>H3D-Net: Few-shot high-fidelity 3D head reconstruction,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>E. Ramon, G. Triginer, J. Escur, A. Pumarola, J. Garcia, X. Gir\u00f3-i-Nieto, and F. Moreno-Noguer.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Ramon_H3D-Net_Few-Shot_High-Fidelity_3D_Head_Reconstruction_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Learned spatial representations for few-shot talking-head synthesis,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>M. Meshry, S. Suri, L. S. Davis, and A. Shrivastava.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Meshry_Learned_Spatial_Representations_for_Few-Shot_Talking-Head_Synthesis_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Putting NeRF on a diet: Semantically consistent few-shot view synthesis,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>A. Jain, M. Tancik, and P. Abbeel.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Jain_Putting_NeRF_on_a_Diet_Semantically_Consistent_Few-Shot_View_Synthesis_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Hypercorrelation squeeze for few-shot segmentation,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>J. Min, D. Kang, and M. Cho.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Min_Hypercorrelation_Squeeze_for_Few-Shot_Segmentation_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/juhongm999\/hsnet\">code<\/a><\/li><li><strong>Few-shot semantic segmentation with cyclic memory network,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>G. Xie, H. Xiong, J. Liu, Y. Yao, and L. Shao.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Xie_Few-Shot_Semantic_Segmentation_With_Cyclic_Memory_Network_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Simpler is better: Few-shot semantic segmentation with classifier weight transformer,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>Z. Lu, S. He, X. Zhu, L. Zhang, Y. Song, and T. 
Xiang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Lu_Simpler_Is_Better_Few-Shot_Semantic_Segmentation_With_Classifier_Weight_Transformer_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/zhiheLu\/CWT-for-FSS\">code<\/a><\/li><li><strong>Unsupervised few-shot action recognition via action-appearance aligned meta-adaptation,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>J. Patravali, G. Mittal, Y. Yu, F. Li, and M. Chen.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Patravali_Unsupervised_Few-Shot_Action_Recognition_via_Action-Appearance_Aligned_Meta-Adaptation_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Multiple heads are better than one: Few-shot font generation with multiple localized experts,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>S. Park, S. Chun, J. Cha, B. Lee, and H. Shim.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Park_Multiple_Heads_Are_Better_Than_One_Few-Shot_Font_Generation_With_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/clovaai\/mxfont\">code<\/a><\/li><li><strong>Mining latent classes for few-shot segmentation,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>L. Yang, W. Zhuo, L. Qi, Y. Shi, and Y. Gao.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Yang_Mining_Latent_Classes_for_Few-Shot_Segmentation_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/LiheYoung\/MiningFSS\">code<\/a><\/li><li><strong>Partner-assisted learning for few-shot image classification,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>J. Ma, H. Xie, G. Han, S. Chang, A. Galstyan, and W. Abd-Almageed.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Ma_Partner-Assisted_Learning_for_Few-Shot_Image_Classification_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Hierarchical graph attention network for few-shot visual-semantic learning,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>C. Yin, K. Wu, Z. Che, B. Jiang, Z. Xu, and J. Tang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Yin_Hierarchical_Graph_Attention_Network_for_Few-Shot_Visual-Semantic_Learning_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Video pose distillation for few-shot, fine-grained sports action recognition,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>J. Hong, M. Fisher, M. Gharbi, and K. Fatahalian.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Hong_Video_Pose_Distillation_for_Few-Shot_Fine-Grained_Sports_Action_Recognition_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Universal-prototype enhancing for few-shot object detection,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>A. Wu, Y. Han, L. Zhu, and Y. Yang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Wu_Universal-Prototype_Enhancing_for_Few-Shot_Object_Detection_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/amingwu\/up-fsod\">code<\/a><\/li><li><strong>Query adaptive few-shot object detection with heterogeneous graph convolutional networks,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>G. Han, Y. He, S. Huang, J. Ma, and S. Chang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Han_Query_Adaptive_Few-Shot_Object_Detection_With_Heterogeneous_Graph_Convolutional_Networks_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot visual relationship co-localization,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>R. Teotia, V. 
Mishra, M. Maheshwari, and A. Mishra.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Teotia_Few-Shot_Visual_Relationship_Co-Localization_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/vl2g\/VRC\">code<\/a><\/li><li><strong>Shallow Bayesian meta learning for real-world few-shot recognition,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>X. Zhang, D. Meng, H. Gouk, and T. M. Hospedales.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Zhang_Shallow_Bayesian_Meta_Learning_for_Real-World_Few-Shot_Recognition_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/open-debin\/bayesian_mqda\">code<\/a><\/li><li><strong>Super-resolving cross-domain face miniatures by peeking at one-shot exemplar,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>P. Li, X. Yu, and Y. Yang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Li_Super-Resolving_Cross-Domain_Face_Miniatures_by_Peeking_at_One-Shot_Exemplar_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot segmentation via cycle-consistent transformer,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>G. Zhang, G. Kang, Y. Yang, and Y. Wei.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/b8b12f949378552c21f28deff8ba8eb6-Paper.pdf\">paper<\/a><\/li><li><strong>Generalized and discriminative few-shot object detection via SVD-dictionary enhancement,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>A. Wu, S. Zhao, C. Deng, and W. Liu.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/325995af77a0e8b06d1204a171010b3a-Paper.pdf\">paper<\/a><\/li><li><strong>Re-ranking for image retrieval and transductive few-shot classification,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>X. Shen, Y. Xiao, S. Hu, O. Sbai, and M. Aubry.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/d9fc0cdb67638d50f411432d0d41d0ba-Paper.pdf\">paper<\/a><\/li><li><strong>Neural view synthesis and matching for semi-supervised few-shot learning of 3D pose,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>A. Wang, S. Mei, A. L. Yuille, and A. Kortylewski.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/3a61ed715ee66c48bacf237fa7bb5289-Paper.pdf\">paper<\/a><\/li><li><strong>MetaAvatar: Learning animatable clothed human models from few depth images,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>S. Wang, M. Mihajlovic, Q. Ma, A. Geiger, and S. Tang.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/1680829293f2a8541efa2647a0290f88-Paper.pdf\">paper<\/a><\/li><li><strong>Few-shot object detection via association and discrimination,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>Y. Cao, J. Wang, Y. Jin, T. Wu, K. Chen, Z. Liu, and D. Lin.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/8a1e808b55fde9455cb3d8857ed88389-Paper.pdf\">paper<\/a><\/li><li><strong>Rectifying the shortcut learning of background for few-shot learning,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>X. Luo, L. Wei, L. Wen, J. Yang, L. Xie, Z. Xu, and Q. Tian.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/6cfe0e6127fa25df2a0ef2ae1067d915-Paper.pdf\">paper<\/a><\/li><li><strong>D2C: Diffusion-decoding models for few-shot conditional generation,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>A. Sinha, J. Song, C. Meng, and S. 
Ermon.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/682e0e796084e163c5ca053dd8573b0c-Paper.pdf\">paper<\/a><\/li><li><strong>Few-shot backdoor attacks on visual object tracking,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>Y. Li, H. Zhong, X. Ma, Y. Jiang, and S. Xia.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=qSV5CuSaK_a\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/HXZhong1997\/FSBA\">code<\/a><\/li><li><strong>Temporal alignment prediction for supervised representation learning and few-shot sequence classification,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>B. Su, and J. Wen.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=p3DKPQ7uaAi\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/BingSu12\/TAP\">code<\/a><\/li><li><strong>Learning non-target knowledge for few-shot semantic segmentation,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>Y. Liu, N. Liu, Q. Cao, X. Yao, J. Han, and L. Shao.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Liu_Learning_Non-Target_Knowledge_for_Few-Shot_Semantic_Segmentation_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Learning what not to segment: A new perspective on few-shot segmentation,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>C. Lang, G. Cheng, B. Tu, and J. Han.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Lang_Learning_What_Not_To_Segment_A_New_Perspective_on_Few-Shot_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/chunbolang\/BAM\">code<\/a><\/li><li><strong>Few-shot keypoint detection with uncertainty learning for unseen species,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>C. Lu, and P. Koniusz.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Lu_Few-Shot_Keypoint_Detection_With_Uncertainty_Learning_for_Unseen_Species_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>XMP-Font: Self-supervised cross-modality pre-training for few-shot font generation,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>W. Liu, F. Liu, F. Ding, Q. He, and Z. Yi.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Liu_XMP-Font_Self-Supervised_Cross-Modality_Pre-Training_for_Few-Shot_Font_Generation_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Spatio-temporal relation modeling for few-shot action recognition,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>A. Thatipelli, S. Narayan, S. Khan, R. M. Anwer, F. S. Khan, and B. Ghanem.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Thatipelli_Spatio-Temporal_Relation_Modeling_for_Few-Shot_Action_Recognition_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/Anirudh257\/strm\">code<\/a><\/li><li><strong>Attribute group editing for reliable few-shot image generation,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>G. Ding, X. Han, S. Wang, S. Wu, X. Jin, D. Tu, and Q. Huang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Ding_Attribute_Group_Editing_for_Reliable_Few-Shot_Image_Generation_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/UniBester\/AGE\">code<\/a><\/li><li><strong>Few-shot backdoor defense using Shapley estimation,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>J. Guan, Z. Tu, R. He, and D. 
Tao.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Guan_Few-Shot_Backdoor_Defense_Using_Shapley_Estimation_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Hybrid relation guided set matching for few-shot action recognition,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>X. Wang, S. Zhang, Z. Qing, M. Tang, Z. Zuo, C. Gao, R. Jin, and N. Sang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Wang_Hybrid_Relation_Guided_Set_Matching_for_Few-Shot_Action_Recognition_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/hyrsm-cvpr2022.github.io\/\">code<\/a><\/li><li><strong>Label, verify, correct: A simple few shot object detection method,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>P. Kaul, W. Xie, and A. Zisserman.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Kaul_Label_Verify_Correct_A_Simple_Few_Shot_Object_Detection_Method_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>InfoNeRF: Ray entropy minimization for few-shot neural volume rendering,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>M. Kim, S. Seo, and B. Han.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Kim_InfoNeRF_Ray_Entropy_Minimization_for_Few-Shot_Neural_Volume_Rendering_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>A closer look at few-shot image generation,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>Y. Zhao, H. Ding, H. Huang, and N. Cheung.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Zhao_A_Closer_Look_at_Few-Shot_Image_Generation_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/mseitzer\/pytorch-fid\">code<\/a><\/li><li><strong>Motion-modulated temporal fragment alignment network for few-shot action recognition,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>J. Wu, T. Zhang, Z. Zhang, F. Wu, and Y. Zhang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Wu_Motion-Modulated_Temporal_Fragment_Alignment_Network_for_Few-Shot_Action_Recognition_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Kernelized few-shot object detection with efficient integral aggregation,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>S. Zhang, L. Wang, N. Murray, and P. Koniusz.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Zhang_Kernelized_Few-Shot_Object_Detection_With_Efficient_Integral_Aggregation_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/ZS123-lang\/KFSOD\">code<\/a><\/li><li><strong>FS6D: Few-shot 6D pose estimation of novel objects,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>Y. He, Y. Wang, H. Fan, J. Sun, and Q. Chen.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/He_FS6D_Few-Shot_6D_Pose_Estimation_of_Novel_Objects_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Look closer to supervise better: One-shot font generation via component-based discriminator,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>Y. Kong, C. Luo, W. Ma, Q. Zhu, S. Zhu, N. Yuan, and L. Jin.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Kong_Look_Closer_To_Supervise_Better_One-Shot_Font_Generation_via_Component-Based_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Generalized few-shot semantic segmentation,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>Z. Tian, X. Lai, L. Jiang, S. Liu, M. Shu, H. Zhao, and J. 
Jia.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Tian_Generalized_Few-Shot_Semantic_Segmentation_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/dvlab-research\/GFS-Seg\">code<\/a><\/li><li><strong>Which images to label for few-shot medical landmark detection?,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>Q. Quan, Q. Yao, J. Li, and S. K. Zhou.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Quan_Which_Images_To_Label_for_Few-Shot_Medical_Landmark_Detection_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Dynamic prototype convolution network for few-shot semantic segmentation,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>J. Liu, Y. Bao, G. Xie, H. Xiong, J. Sonke, and E. Gavves.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Liu_Dynamic_Prototype_Convolution_Network_for_Few-Shot_Semantic_Segmentation_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>OSOP: A multi-stage one shot object pose estimation framework,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>I. Shugurov, F. Li, B. Busam, and S. Ilic.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Shugurov_OSOP_A_Multi-Stage_One_Shot_Object_Pose_Estimation_Framework_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Semantic-aligned fusion transformer for one-shot object detection,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>Y. Zhao, X. Guo, and Y. Lu.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Zhao_Semantic-Aligned_Fusion_Transformer_for_One-Shot_Object_Detection_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>OnePose: One-shot object pose estimation without CAD models,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>J. Sun, Z. Wang, S. Zhang, X. He, H. Zhao, G. Zhang, and X. Zhou.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Sun_OnePose_One-Shot_Object_Pose_Estimation_Without_CAD_Models_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/zju3dv.github.io\/onepose\/\">code<\/a><\/li><li><strong>Few-shot object detection with fully cross-transformer,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>G. Han, J. Ma, S. Huang, L. Chen, and S. Chang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Han_Few-Shot_Object_Detection_With_Fully_Cross-Transformer_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Learning to memorize feature hallucination for one-shot image generation,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>Y. Xie, Y. Fu, Y. Tai, Y. Cao, J. Zhu, and C. Wang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Xie_Learning_To_Memorize_Feature_Hallucination_for_One-Shot_Image_Generation_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot font generation by learning fine-grained local styles,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>L. Tang, Y. Cai, J. Liu, Z. Hong, M. Gong, M. Fan, J. Han, J. Liu, E. Ding, and J. Wang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Tang_Few-Shot_Font_Generation_by_Learning_Fine-Grained_Local_Styles_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Balanced and hierarchical relation learning for one-shot object detection,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>H. Yang, S. Cai, H. Sheng, B. Deng, J. Huang, X. Hua, Y. Tang, and Y. 
Zhang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Yang_Balanced_and_Hierarchical_Relation_Learning_for_One-Shot_Object_Detection_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot head swapping in the wild,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>C. Shu, H. Wu, H. Zhou, J. Liu, Z. Hong, C. Ding, J. Han, J. Liu, E. Ding, and J. Wang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Shu_Few-Shot_Head_Swapping_in_the_Wild_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Integrative few-shot learning for classification and segmentation,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>D. Kang, and M. Cho.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Kang_Integrative_Few-Shot_Learning_for_Classification_and_Segmentation_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Attribute surrogates learning and spectral tokens pooling in transformers for few-shot learning,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>Y. He, W. Liang, D. Zhao, H. Zhou, W. Ge, Y. Yu, and W. Zhang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/He_Attribute_Surrogates_Learning_and_Spectral_Tokens_Pooling_in_Transformers_for_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/StomachCold\/HCTransformers\">code<\/a><\/li><li><strong>Task discrepancy maximization for fine-grained few-shot classification,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>S. Lee, W. Moon, and J. Heo.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Lee_Task_Discrepancy_Maximization_for_Fine-Grained_Few-Shot_Classification_CVPR_2022_paper.pdf\">paper<\/a><\/li><\/ol>
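\n\n\n\n<p>Many of the computer vision papers above share the episodic N-way K-shot protocol, often built on prototype-style matching. The sketch below is a minimal PyTorch illustration of that protocol, not the method of any specific paper; episode_accuracy and the random features are hypothetical stand-ins for a real feature extractor and dataset.<\/p>
\n\n\n\n<pre class=\"wp-block-code\"><code>import torch\n\n# Sketch of one N-way K-shot episode: class prototypes are support-set\n# means, and each query is labeled by its nearest prototype. Illustrative\n# only; a real pipeline would embed images with a trained backbone first.\ndef episode_accuracy(support, support_y, query, query_y, n_way):\n    # support: (n_way*k_shot, d) features; query: (q, d) features.\n    protos = torch.stack([support[support_y.eq(c)].mean(0) for c in range(n_way)])\n    dists = torch.cdist(query, protos)   # (q, n_way) Euclidean distances\n    pred = dists.argmin(dim=1)           # nearest prototype wins\n    return pred.eq(query_y).float().mean().item()\n\n# Toy 5-way 5-shot episode on random 64-d stand-in features.\nn, k, d = 5, 5, 64\nsy = torch.arange(n).repeat_interleave(k)\nqy = torch.arange(n).repeat_interleave(3)\nacc = episode_accuracy(torch.randn(n * k, d), sy, torch.randn(n * 3, d), qy, n)\nprint('episode accuracy:', acc)\n<\/code><\/pre>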
Lee.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=HkgsWxrtPB\">paper<\/a><\/li><li><strong>Watch, try, learn: Meta-learning from demonstrations and rewards,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>A. Zhou, E. Jang, D. Kappler, A. Herzog, M. Khansari, P. Wohlhart, Y. Bai, M. Kalakrishnan, S. Levine, and C. Finn.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=SJg5J6NtDr\">paper<\/a><\/li><li><strong>Few-shot Bayesian imitation learning with logical program policies,<\/strong>&nbsp;in AAAI, 2020.&nbsp;<em>T. Silver, K. R. Allen, A. K. Lew, L. P. Kaelbling, and J. Tenenbaum.<\/em>&nbsp;<a href=\"https:\/\/aaai.org\/ojs\/index.php\/AAAI\/article\/view\/6587\">paper<\/a><\/li><li><strong>One solution is not all you need: Few-shot extrapolation via structured MaxEnt RL,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>S. Kumar, A. Kumar, S. Levine, and C. Finn.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/5d151d1059a6281335a10732fc49620e-Paper.pdf\">paper<\/a><\/li><li><strong>Bowtie networks: Generative modeling for joint few-shot recognition and novel-view synthesis,<\/strong>&nbsp;in ICLR, 2021.&nbsp;<em>Z. Bao, Y. Wang, and M. Hebert.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=ESG-DMKQKsD\">paper<\/a><\/li><li><strong>Demonstration-conditioned reinforcement learning for few-shot imitation,<\/strong>&nbsp;in ICML, 2021.&nbsp;<em>C. R. Dance, J. Perez, and T. Cachet.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v139\/dance21a\/dance21a.pdf\">paper<\/a><\/li><li><strong>Hierarchical few-shot imitation with skill transition models,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>K. Hakhamaneshi, R. Zhao, A. Zhan, P. Abbeel, and M. Laskin.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=xKZ4K0lTj_\">paper<\/a><\/li><\/ol>\n\n\n\n<h3><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#natural-language-processing\"><\/a>Natural Language Processing<\/h3>\n\n\n\n<ol><li><strong>High-risk learning: Acquiring new word vectors from tiny data,<\/strong>&nbsp;in EMNLP, 2017.&nbsp;<em>A. Herbelot and M. Baroni.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/D17-1030.pdf\">paper<\/a><\/li><li><strong>MetaEXP: Interactive explanation and exploration of large knowledge graphs,<\/strong>&nbsp;in TheWebConf, 2018.&nbsp;<em>F. Behrens, S. Bischoff, P. Ladenburger, J. R\u00fcckin, L. Seidel, F. Stolp, M. Vaichenker, A. Ziegler, D. Mottin, F. Aghaei, E. M\u00fcller, M. Preusse, N. M\u00fcller, and M. Hunger.<\/em>&nbsp;<a href=\"https:\/\/meta-exp.github.io\/resources\/paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/hpi.de\/en\/mueller\/metaex\">code<\/a><\/li><li><strong>Few-shot representation learning for out-of-vocabulary words,<\/strong>&nbsp;in ACL, 2019.&nbsp;<em>Z. Hu, T. Chen, K.-W. Chang, and Y. Sun.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/P19-1402.pdf\">paper<\/a><\/li><li><strong>Learning to customize model structures for few-shot dialogue generation tasks,<\/strong>&nbsp;in ACL, 2020.&nbsp;<em>Y. Song, Z. Liu, W. Bi, R. Yan, and M. Zhang.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.acl-main.517.pdf\">paper<\/a><\/li><li><strong>Few-shot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network,<\/strong>&nbsp;in ACL, 2020.&nbsp;<em>Y. Hou, W. Che, Y. Lai, Z. Zhou, Y. Liu, H. Liu, and T. 
Liu.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.acl-main.128.pdf\">paper<\/a><\/li><li><strong>Meta-reinforced multi-domain state generator for dialogue systems,<\/strong>&nbsp;in ACL, 2020.&nbsp;<em>Y. Huang, J. Feng, M. Hu, X. Wu, X. Du, and S. Ma.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.acl-main.636.pdf\">paper<\/a><\/li><li><strong>Few-shot knowledge graph completion,<\/strong>&nbsp;in AAAI, 2020.&nbsp;<em>C. Zhang, H. Yao, C. Huang, M. Jiang, Z. Li, and N. V. Chawla.<\/em>&nbsp;<a href=\"https:\/\/aaai.org\/ojs\/index.php\/AAAI\/article\/view\/5698\">paper<\/a><\/li><li><strong>Universal natural language processing with limited annotations: Try few-shot textual entailment as a start,<\/strong>&nbsp;in EMNLP, 2020.&nbsp;<em>W. Yin, N. F. Rajani, D. Radev, R. Socher, and C. Xiong.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.emnlp-main.660.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/salesforce\/UniversalFewShotNLP\">code<\/a><\/li><li><strong>Simple and effective few-shot named entity recognition with structured nearest neighbor learning,<\/strong>&nbsp;in EMNLP, 2020.&nbsp;<em>Y. Yang, and A. Katiyar.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.emnlp-main.516.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/asappresearch\/structshot\">code<\/a><\/li><li><strong>Discriminative nearest neighbor few-shot intent detection by transferring natural language inference,<\/strong>&nbsp;in EMNLP, 2020.&nbsp;<em>J. Zhang, K. Hashimoto, W. Liu, C. Wu, Y. Wan, P. Yu, R. Socher, and C. Xiong.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.emnlp-main.411.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/salesforce\/DNNC-few-shot-intent\">code<\/a><\/li><li><strong>Few-shot learning for opinion summarization,<\/strong>&nbsp;in EMNLP, 2020.&nbsp;<em>A. Bra\u017einskas, M. Lapata, and I. Titov.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.emnlp-main.337.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/abrazinskas\/FewSum\">code<\/a><\/li><li><strong>Adaptive attentional network for few-shot knowledge graph completion,<\/strong>&nbsp;in EMNLP, 2020.&nbsp;<em>J. Sheng, S. Guo, Z. Chen, J. Yue, L. Wang, T. Liu, and H. Xu.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.emnlp-main.131.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/JiaweiSheng\/FAAN\">code<\/a><\/li><li><strong>Few-shot complex knowledge base question answering via meta reinforcement learning,<\/strong>&nbsp;in EMNLP, 2020.&nbsp;<em>Y. Hua, Y. Li, G. Haffari, G. Qi, and T. Wu.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.emnlp-main.469.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/DevinJake\/MRL-CQA\">code<\/a><\/li><li><strong>Self-supervised meta-learning for few-shot natural language classification tasks,<\/strong>&nbsp;in EMNLP, 2020.&nbsp;<em>T. Bansal, R. Jha, T. Munkhdalai, and A. McCallum.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.emnlp-main.38.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/iesl\/metanlp\">code<\/a><\/li><li><strong>Uncertainty-aware self-training for few-shot text classification,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>S. Mukherjee, and A. 
Awadallah.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/f23d125da1e29e34c552f448610ff25f-Paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/microsoft\/UST\">code<\/a><\/li><li><strong>Learning to extrapolate knowledge: Transductive few-shot out-of-graph link prediction,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>J. Baek, D. B. Lee, and S. J. Hwang.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/0663a4ddceacb40b095eda264a85f15c-Paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/JinheonBaek\/GEN\">code<\/a><\/li><li><strong>MetaNER: Named entity recognition with meta-learning,<\/strong>&nbsp;in TheWebConf, 2020.&nbsp;<em>J. Li, S. Shang, and L. Shao.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3366423.3380127\">paper<\/a><\/li><li><strong>Conditionally adaptive multi-task learning: Improving transfer learning in NLP using fewer parameters &amp; less data,<\/strong>&nbsp;in ICLR, 2021.&nbsp;<em>J. Pilault, A. E. Hattami, and C. Pal.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=de11dbHzAMF\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/CAMTL\/CA-MTL\">code<\/a><\/li><li><strong>Revisiting few-sample BERT fine-tuning,<\/strong>&nbsp;in ICLR, 2021.&nbsp;<em>T. Zhang, F. Wu, A. Katiyar, K. Q. Weinberger, and Y. Artzi.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=cO1IH43yUF\">paper<\/a>&nbsp;<a href=\"https:\/\/pytorch.org\/docs\/1.4.0\/_modules\/torch\/optim\/adamw.html\">code<\/a><\/li><li><strong>Few-shot conversational dense retrieval,<\/strong>&nbsp;in SIGIR, 2021.&nbsp;<em>S. Yu, Z. Liu, C. Xiong, T. Feng, and Z. Liu.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3404835.3462856\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/thunlp\/ConvDR\">code<\/a><\/li><li><strong>Relational learning with gated and attentive neighbor aggregator for few-shot knowledge graph completion,<\/strong>&nbsp;in SIGIR, 2021.&nbsp;<em>G. Niu, Y. Li, C. Tang, R. Geng, J. Dai, Q. Liu, H. Wang, J. Sun, F. Huang, and L. Si.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3404835.3462925\">paper<\/a><\/li><li><strong>Few-shot language coordination by modeling theory of mind,<\/strong>&nbsp;in ICML, 2021.&nbsp;<em>H. Zhu, G. Neubig, and Y. Bisk.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v139\/zhu21d\/zhu21d.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/CLAW-Lab\/ToM\">code<\/a><\/li><li><strong>Graph-evolving meta-learning for low-resource medical dialogue generation,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>S. Lin, P. Zhou, X. Liang, J. Tang, R. Zhao, Z. Chen, and L. Lin.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17577\/17384\">paper<\/a><\/li><li><strong>KEML: A knowledge-enriched meta-learning framework for lexical relation classification,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>C. Wang, M. Qiu, J. Huang, and X. He.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17640\/17447\">paper<\/a><\/li><li><strong>Few-shot learning for multi-label intent detection,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>Y. Hou, Y. Lai, Y. Wu, W. Che, and T. Liu.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17541\/17348\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/AtmaHou\/FewShotMultiLabel\">code<\/a><\/li><li><strong>SALNet: Semi-supervised few-shot text classification with attention-based lexicon construction,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>J.-H. Lee, S.-K. Ko, and Y.-S. 
Han.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17558\/17365\">paper<\/a><\/li><li><strong>Learning from my friends: Few-shot personalized conversation systems via social networks,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>Z. Tian, W. Bi, Z. Zhang, D. Lee, Y. Song, and N. L. Zhang.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17638\/17445\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/tianzhiliang\/FewShotPersonaConvData\">code<\/a><\/li><li><strong>Relative and absolute location embedding for few-shot node classification on graph,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>Z. Liu, Y. Fang, C. Liu, and S. C. H. Hoi.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16551\/16358\">paper<\/a><\/li><li><strong>Few-shot question answering by pretraining span selection,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>O. Ram, Y. Kirstain, J. Berant, A. Globerson, and O. Levy.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-long.239.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/oriram\/splinter\">code<\/a><\/li><li><strong>A closer look at few-shot crosslingual transfer: The choice of shots matters,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>M. Zhao, Y. Zhu, E. Shareghi, I. Vulic, R. Reichart, A. Korhonen, and H. Sch\u00fctze.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-long.447.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/fsxlt\">code<\/a><\/li><li><strong>Learning from miscellaneous other-class words for few-shot named entity recognition,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>M. Tong, S. Wang, B. Xu, Y. Cao, M. Liu, L. Hou, and J. Li.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-long.487.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/shuaiwa16\/OtherClassNER.git\">code<\/a><\/li><li><strong>Distinct label representations for few-shot text classification,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>S. Ohashi, J. Takayama, T. Kajiwara, and Y. Arase.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-short.105.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/21335732529sky\/difference_extractor\">code<\/a><\/li><li><strong>Entity concept-enhanced few-shot relation extraction,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>S. Yang, Y. Zhang, G. Niu, Q. Zhao, and S. Pu.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-short.124.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/LittleGuoKe\/ConceptFERE\">code<\/a><\/li><li><strong>On training instance selection for few-shot neural text generation,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>E. Chang, X. Shen, H.-S. Yeh, and V. Demberg.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-short.2.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/gitlab.com\/erniecyc\/few-selector\">code<\/a><\/li><li><strong>Unsupervised neural machine translation for low-resource domains via meta-learning,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>C. Park, Y. Tae, T. Kim, S. Yang, M. A. Khan, L. Park, and J. Choo.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-long.225.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/papago-lab\/MetaGUMT\">code<\/a><\/li><li><strong>Meta-learning with variational semantic memory for word sense disambiguation,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>Y. Du, N. Holla, X. Zhen, C. Snoek, and E. 
Shutova.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-long.409.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/YDU-uva\/VSM_WSD\">code<\/a><\/li><li><strong>Multi-label few-shot learning for aspect category detection,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>M. Hu, S. Zhao, H. Guo, C. Xue, H. Gao, T. Gao, R. Cheng, and Z. Su.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-long.495.pdf\">paper<\/a><\/li><li><strong>TextSETTR: Few-shot text style extraction and tunable targeted restyling,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>P. Riley, N. Constant, M. Guo, G. Kumar, D. Uthus, and Z. Parekh.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-long.293.pdf\">paper<\/a><\/li><li><strong>Few-shot text ranking with meta adapted synthetic weak supervision,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>S. Sun, Y. Qian, Z. Liu, C. Xiong, K. Zhang, J. Bao, Z. Liu, and P. Bennett.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-long.390.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/thunlp\/MetaAdaptRank\">code<\/a><\/li><li><strong>PROTAUGMENT: Intent detection meta-learning through unsupervised diverse paraphrasing,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>T. Dopierre, C. Gravier, and W. Logerais.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-long.191.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/tdopierre\/ProtAugment\">code<\/a><\/li><li><strong>AUGNLG: Few-shot natural language generation using self-trained data augmentation,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>X. Xu, G. Wang, Y.-B. Kim, and S. Lee.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-long.95.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/XinnuoXu\/AugNLG\">code<\/a><\/li><li><strong>Meta self-training for few-shot neural sequence labeling,<\/strong>&nbsp;in KDD, 2021.&nbsp;<em>Y. Wang, S. Mukherjee, H. Chu, Y. Tu, M. Wu, J. Gao, and A. H. Awadallah.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3447548.3467235\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/microsoft\/MetaST\">code<\/a><\/li><li><strong>Knowledge-enhanced domain adaptation in few-shot relation classification,<\/strong>&nbsp;in KDD, 2021.&nbsp;<em>J. Zhang, J. Zhu, Y. Yang, W. Shi, C. Zhang, and H. Wang.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3447548.3467438\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/imJiawen\/KEFDA\">code<\/a><\/li><li><strong>Few-shot text classification with triplet networks, data augmentation, and curriculum learning,<\/strong>&nbsp;in NAACL-HLT, 2021.&nbsp;<em>J. Wei, C. Huang, S. Vosoughi, Y. Cheng, and S. Xu.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.naacl-main.434.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/jasonwei20\/triplet-loss\">code<\/a><\/li><li><strong>Few-shot intent classification and slot filling with retrieved examples,<\/strong>&nbsp;in NAACL-HLT, 2021.&nbsp;<em>D. Yu, L. He, Y. Zhang, X. Du, P. Pasupat, and Q. Li.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.naacl-main.59.pdf\">paper<\/a><\/li><li><strong>Non-parametric few-shot learning for word sense disambiguation,<\/strong>&nbsp;in NAACL-HLT, 2021.&nbsp;<em>H. Chen, M. Xia, and D. Chen.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.naacl-main.142.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/princeton-nlp\/metric-wsd\">code<\/a><\/li><li><strong>Towards few-shot fact-checking via perplexity,<\/strong>&nbsp;in NAACL-HLT, 2021.&nbsp;<em>N. Lee, Y. Bang, A. Madotto, and P. 
Fung.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.naacl-main.158.pdf\">paper<\/a><\/li><li><strong>ConVEx: Data-efficient and few-shot slot labeling,<\/strong>&nbsp;in NAACL-HLT, 2021.&nbsp;<em>M. Henderson, and I. Vulic.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.naacl-main.264.pdf\">paper<\/a><\/li><li><strong>Few-shot text generation with natural language instructions,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>T. Schick, and H. Sch\u00fctze.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.32.pdf\">paper<\/a><\/li><li><strong>Towards realistic few-shot relation extraction,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>S. Brody, S. Wu, and A. Benton.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.433.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/bloomberg\/emnlp21_fewrel\">code<\/a><\/li><li><strong>Few-shot emotion recognition in conversation with sequential prototypical networks,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>G. Guibon, M. Labeau, H. Flamein, L. Lefeuvre, and C. Clavel.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.549.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/gguibon\/protoseq\">code<\/a><\/li><li><strong>Learning prototype representations across few-shot tasks for event detection,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>V. Lai, F. Dernoncourt, and T. H. Nguyen.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.427.pdf\">paper<\/a><\/li><li><strong>Exploring task difficulty for few-shot relation extraction,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>J. Han, B. Cheng, and W. Lu.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.204.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/hanjiale\/hcrp\">code<\/a><\/li><li><strong>Honey or poison? Solving the trigger curse in few-shot event detection via causal intervention,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>J. Chen, H. Lin, X. Han, and L. Sun.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.637.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/chen700564\/causalfsed\">code<\/a><\/li><li><strong>Nearest neighbour few-shot learning for cross-lingual classification,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>M. S. Bari, B. Haider, and S. Mansour.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.131.pdf\">paper<\/a><\/li><li><strong>Knowledge-aware meta-learning for low-resource text classification,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>H. Yao, Y. Wu, M. Al-Shedivat, and E. P. Xing.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.136.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/huaxiuyao\/KGML\">code<\/a><\/li><li><strong>Few-shot named entity recognition: An empirical baseline study,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>J. Huang, C. Li, K. Subudhi, D. Jose, S. Balakrishnan, W. Chen, B. Peng, J. Gao, and J. Han.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.813.pdf\">paper<\/a><\/li><li><strong>MetaTS: Meta teacher-student network for multilingual sequence labeling with minimal supervision,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>Z. Li, D. Zhang, T. Cao, Y. Wei, Y. Song, and B. Yin.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.255.pdf\">paper<\/a><\/li><li><strong>Meta-LMTC: Meta-learning for large-scale multi-label text classification,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>R. Wang, X. Su, S. Long, X. Dai, S. Huang, and J. 
Chen.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.679.pdf\">paper<\/a><\/li><li><strong>Ontology-enhanced prompt-tuning for few-shot learning,<\/strong>&nbsp;in TheWebConf, 2022.&nbsp;<em>H. Ye, N. Zhang, S. Deng, X. Chen, H. Chen, F. Xiong, X. Chen, and H. Chen.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3485447.3511921\">paper<\/a><\/li><li><strong>EICO: Improving few-shot text classification via explicit and implicit consistency regularization,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>L. Zhao, and C. Yao.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.283.pdf\">paper<\/a><\/li><li><strong>Dialogue summaries as dialogue states (DS2), template-guided summarization for few-shot dialogue state tracking,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>J. Shin, H. Yu, H. Moon, A. Madotto, and J. Park.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.302.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/jshin49\/ds2\">code<\/a><\/li><li><strong>A few-shot semantic parser for Wizard-of-Oz dialogues with the precise ThingTalk representation,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>G. Campagna, S. J. Semnani, R. Kearns, L. J. K. Sato, S. Xu, and M. S. Lam.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.317.pdf\">paper<\/a><\/li><li><strong>Multi-stage prompting for knowledgeable dialogue generation,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>Z. Liu, M. Patwary, R. Prenger, S. Prabhumoye, W. Ping, M. Shoeybi, and B. Catanzaro.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.104.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/NVIDIA\/Megatron-LM\">code<\/a><\/li><li><strong>Few-shot named entity recognition with self-describing networks,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>J. Chen, Q. Liu, H. Lin, X. Han, and L. Sun.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.392.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/chen700564\/sdnet\">code<\/a><\/li><li><strong>CLIP models are few-shot learners: Empirical studies on VQA and visual entailment,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>H. Song, L. Dong, W. Zhang, T. Liu, and F. Wei.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.421.pdf\">paper<\/a><\/li><li><strong>CONTaiNER: Few-shot named entity recognition via contrastive learning,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>S. S. S. Das, A. Katiyar, R. J. Passonneau, and R. Zhang.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.439.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/psunlpgroup\/container\">code<\/a><\/li><li><strong>Few-shot controllable style transfer for low-resource multilingual settings,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>K. Krishna, D. Nathani, X. Garcia, B. Samanta, and P. Talukdar.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.514.pdf\">paper<\/a><\/li><li><strong>Label semantic aware pre-training for few-shot text classification,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>A. Mueller, J. Krone, S. Romeo, S. Mansour, E. Mansimov, Y. Zhang, and D. Roth.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.570.pdf\">paper<\/a><\/li><li><strong>Inverse is better! Fast and accurate prompt for few-shot slot tagging,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>Y. Hou, C. Chen, X. Luo, B. Li, and W. 
Che.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.53.pdf\">paper<\/a><\/li><li><strong>Label semantics for few shot named entity recognition,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>J. Ma, M. Ballesteros, S. Doss, R. Anubhai, S. Mallya, Y. Al-Onaizan, and D. Roth.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.155.pdf\">paper<\/a><\/li><li><strong>Hierarchical recurrent aggregative generation for few-shot NLG,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>G. Zhou, G. Lampouras, and I. Iacobacci.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.170.pdf\">paper<\/a><\/li><li><strong>Towards few-shot entity recognition in document images: A label-aware sequence-to-sequence framework,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>Z. Wang, and J. Shang.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.329.pdf\">paper<\/a><\/li><li><strong>A good prompt is worth millions of parameters: Low-resource prompt-based learning for vision-language models,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>W. Jin, Y. Cheng, Y. Shen, W. Chen, and X. Ren.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.197.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/woojeongjin\/fewvlm\">code<\/a><\/li><li><strong>Generated knowledge prompting for commonsense reasoning,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>J. Liu, A. Liu, X. Lu, S. Welleck, P. West, R. L. Bras, Y. Choi, and H. Hajishirzi.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.225.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/liujch1998\/gkp\">code<\/a><\/li><li><strong>End-to-end modeling via information tree for one-shot natural language spatial video grounding,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>M. Li, T. Wang, H. Zhang, S. Zhang, Z. Zhao, J. Miao, W. Zhang, W. Tan, J. Wang, P. Wang, S. Pu, and F. Wu.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.596.pdf\">paper<\/a><\/li><li><strong>Leveraging task transferability to meta-learning for clinical section classification with limited data,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>Z. Chen, J. Kim, R. Bhakta, and M. Y. Sir.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.461.pdf\">paper<\/a><\/li><li><strong>Improving meta-learning for low-resource text classification and generation via memory imitation,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>Y. Zhao, Z. Tian, H. Yao, Y. Zheng, D. Lee, Y. Song, J. Sun, and N. L. Zhang.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.44.pdf\">paper<\/a><\/li><li><strong>A simple yet effective relation information guided approach for few-shot relation extraction,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>Y. Liu, J. Hu, X. Wan, and T. Chang.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.62.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/lylylylylyly\/simplefsre\">code<\/a><\/li><li><strong>Decomposed meta-learning for few-shot named entity recognition,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>T. Ma, H. Jiang, Q. Wu, T. Zhao, and C. Lin.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.124.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/microsoft\/vert-papers\">code<\/a><\/li><li><strong>Meta-learning for fast cross-lingual adaptation in dependency parsing,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>A. Langedijk, V. Dankers, P. Lippe, S. Bos, B. C. Guevara, H. Yannakoudakis, and E. 
Shutova.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.582.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/annaproxy\/udify-metalearning\">code<\/a><\/li><li><strong>Enhancing cross-lingual natural language inference by prompt-learning from cross-lingual templates,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>K. Qi, H. Wan, J. Du, and H. Chen.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.134.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/qikunxun\/pct\">code<\/a><\/li><\/ol>\n\n\n\n<h3><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#acoustic-signal-processing\"><\/a>Acoustic Signal Processing<\/h3>\n\n\n\n<ol><li><strong>One-shot learning of generative speech concepts,<\/strong>&nbsp;in CogSci, 2014.&nbsp;<em>B. Lake, C.-Y. Lee, J. Glass, and J. Tenenbaum.<\/em>&nbsp;<a href=\"https:\/\/groups.csail.mit.edu\/sls\/publications\/2014\/lake-cogsci14.pdf\">paper<\/a><\/li><li><strong>Machine speech chain with one-shot speaker adaptation,<\/strong>&nbsp;in INTERSPEECH, 2018.&nbsp;<em>A. Tjandra, S. Sakti, and S. Nakamura.<\/em>&nbsp;<a href=\"https:\/\/ahcweb01.naist.jp\/papers\/conference\/2018\/201809_Interspeech_andros-tj\/201809_Interspeech_andros-tj_1.paper.pdf\">paper<\/a><\/li><li><strong>Investigation of using disentangled and interpretable representations for one-shot cross-lingual voice conversion,<\/strong>&nbsp;in INTERSPEECH, 2018.&nbsp;<em>S. H. Mohammadi and T. Kim.<\/em>&nbsp;<a href=\"https:\/\/isca-speech.org\/archive\/Interspeech_2018\/pdfs\/2525.pdf\">paper<\/a><\/li><li><strong>Few-shot audio classification with attentional graph neural networks,<\/strong>&nbsp;in INTERSPEECH, 2019.&nbsp;<em>S. Zhang, Y. Qin, K. Sun, and Y. Lin.<\/em>&nbsp;<a href=\"https:\/\/www.isca-speech.org\/archive\/Interspeech_2019\/pdfs\/1532.pdf\">paper<\/a><\/li><li><strong>One-shot voice conversion with disentangled representations by leveraging phonetic posteriorgrams,<\/strong>&nbsp;in INTERSPEECH, 2019.&nbsp;<em>S. H. Mohammadi, and T. Kim.<\/em>&nbsp;<a href=\"https:\/\/www.isca-speech.org\/archive\/Interspeech_2019\/pdfs\/1798.pdf\">paper<\/a><\/li><li><strong>One-shot voice conversion with global speaker embeddings,<\/strong>&nbsp;in INTERSPEECH, 2019.&nbsp;<em>H. Lu, Z. Wu, D. Dai, R. Li, S. Kang, J. Jia, and H. Meng.<\/em>&nbsp;<a href=\"https:\/\/www.isca-speech.org\/archive\/Interspeech_2019\/pdfs\/2365.pdf\">paper<\/a><\/li><li><strong>One-shot voice conversion by separating speaker and content representations with instance normalization,<\/strong>&nbsp;in INTERSPEECH, 2019.&nbsp;<em>J.-C. Chou, and H.-Y. Lee.<\/em>&nbsp;<a href=\"https:\/\/www.isca-speech.org\/archive\/Interspeech_2019\/pdfs\/2663.pdf\">paper<\/a><\/li><li><strong>Audio2Head: Audio-driven one-shot talking-head generation with natural head motion,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>S. Wang, L. Li, Y. Ding, C. Fan, and X. Yu.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0152.pdf\">paper<\/a><\/li><\/ol>\n\n\n\n<h3><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#recommendation\"><\/a>Recommendation<\/h3>\n\n\n\n<ol><li><strong>A meta-learning perspective on cold-start recommendations for items,<\/strong>&nbsp;in NeurIPS, 2017.&nbsp;<em>M. Vartak, A. Thiagarajan, C. Miranda, J. Bratman, and H. 
Larochelle.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/7266-a-meta-learning-perspective-on-cold-start-recommendations-for-items.pdf\">paper<\/a><\/li><li><strong>MeLU: Meta-learned user preference estimator for cold-start recommendation,<\/strong>&nbsp;in KDD, 2019.&nbsp;<em>H. Lee, J. Im, S. Jang, H. Cho, and S. Chung.<\/em>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1908.00413.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/hoyeoplee\/MeLU\">code<\/a><\/li><li><strong>Sequential scenario-specific meta learner for online recommendation,<\/strong>&nbsp;in KDD, 2019.&nbsp;<em>Z. Du, X. Wang, H. Yang, J. Zhou, and J. Tang.<\/em>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1906.00391.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/THUDM\/ScenarioMeta\">code<\/a><\/li><li><strong>Few-shot learning for new user recommendation in location-based social networks,<\/strong>&nbsp;in TheWebConf, 2020.&nbsp;<em>R. Li, X. Wu, X. Chen, and W. Wang.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3366423.3379994\">paper<\/a><\/li><li><strong>MAMO: Memory-augmented meta-optimization for cold-start recommendation,<\/strong>&nbsp;in KDD, 2020.&nbsp;<em>M. Dong, F. Yuan, L. Yao, X. Xu, and L. Zhu.<\/em>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2007.03183.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/dongmanqing\/Code-for-MAMO\">code<\/a><\/li><li><strong>Meta-learning on heterogeneous information networks for cold-start recommendation,<\/strong>&nbsp;in KDD, 2020.&nbsp;<em>Y. Lu, Y. Fang, and C. Shi.<\/em>&nbsp;<a href=\"https:\/\/ink.library.smu.edu.sg\/cgi\/viewcontent.cgi?article=6158&amp;context=sis_research\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/rootlu\/MetaHIN\">code<\/a><\/li><li><strong>MetaSelector: Meta-learning for recommendation with user-level adaptive model selection,<\/strong>&nbsp;in TheWebConf, 2020.&nbsp;<em>M. Luo, F. Chen, P. Cheng, Z. Dong, X. He, J. Feng, and Z. Li.<\/em>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2001.10378v1.pdf\">paper<\/a><\/li><li><strong>Fast adaptation for cold-start collaborative filtering with meta-learning,<\/strong>&nbsp;in ICDM, 2020.&nbsp;<em>T. Wei, Z. Wu, R. Li, Z. Hu, F. Feng, X. He, Y. Sun, and W. Wang.<\/em>&nbsp;<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9338389\">paper<\/a><\/li><li><strong>Preference-adaptive meta-learning for cold-start recommendation,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>L. Wang, B. Jin, Z. Huang, H. Zhao, D. Lian, Q. Liu, and E. Chen.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0222.pdf\">paper<\/a><\/li><li><strong>Meta-learning helps personalized product search,<\/strong>&nbsp;in TheWebConf, 2022.&nbsp;<em>B. Wu, Z. Meng, Q. Zhang, and S. Liang.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3485447.3512036\">paper<\/a><\/li><li><strong>Alleviating cold-start problem in CTR prediction with a variational embedding learning framework,<\/strong>&nbsp;in TheWebConf, 2022.&nbsp;<em>X. Xu, C. Yang, Q. Yu, Z. Fang, J. Wang, C. Fan, Y. He, C. Peng, Z. Lin, and J. Shao.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3485447.3512048\">paper<\/a><\/li><li><strong>PNMTA: A pretrained network modulation and task adaptation approach for user cold-start recommendation,<\/strong>&nbsp;in TheWebConf, 2022.&nbsp;<em>H. Pang, F. Giunchiglia, X. Li, R. Guan, and X. 
Feng.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3485447.3511963\">paper<\/a><\/li><\/ol>\n\n\n\n<h3><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#others\"><\/a>Others<\/h3>\n\n\n\n<ol><li><strong>Low data drug discovery with one-shot learning,<\/strong>&nbsp;ACS Central Science, 2017.&nbsp;<em>H. Altae-Tran, B. Ramsundar, A. S. Pappu, and V. Pande.<\/em>&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1611.03199\">paper<\/a><\/li><li><strong>SMASH: One-shot model architecture search through hypernetworks,<\/strong>&nbsp;in ICLR, 2018.&nbsp;<em>A. Brock, T. Lim, J. Ritchie, and N. Weston.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=rydeCEhs-\">paper<\/a><\/li><li><strong>SPARC: Self-paced network representation for few-shot rare category characterization,<\/strong>&nbsp;in KDD, 2018.&nbsp;<em>D. Zhou, J. He, H. Yang, and W. Fan.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3219819.3219968\">paper<\/a><\/li><li><strong>MetaPred: Meta-learning for clinical risk prediction with limited patient electronic health records,<\/strong>&nbsp;in KDD, 2019.&nbsp;<em>X. S. Zhang, F. Tang, H. H. Dodge, J. Zhou, and F. Wang.<\/em>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1905.03218.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/sheryl-ai\/MetaPred\">code<\/a><\/li><li><strong>AffinityNet: Semi-supervised few-shot learning for disease type prediction,<\/strong>&nbsp;in AAAI, 2019.&nbsp;<em>T. Ma, and A. Zhang.<\/em>&nbsp;<a href=\"https:\/\/www.aaai.org\/ojs\/index.php\/AAAI\/article\/view\/3898\/3776\">paper<\/a><\/li><li><strong>Learning from multiple cities: A meta-learning approach for spatial-temporal prediction,<\/strong>&nbsp;in TheWebConf, 2019.&nbsp;<em>H. Yao, Y. Liu, Y. Wei, X. Tang, and Z. Li.<\/em>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/1901.08518.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/huaxiuyao\/MetaST\">code<\/a><\/li><li><strong>Federated meta-learning for fraudulent credit card detection,<\/strong>&nbsp;in IJCAI, 2020.&nbsp;<em>W. Zheng, L. Yan, C. Gou, and F. Wang.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/Proceedings\/2020\/0642.pdf\">paper<\/a><\/li><li><strong>Differentially private meta-learning,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>J. Li, M. Khodak, S. Caldas, and A. Talwalkar.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=rJgqMRVYvr\">paper<\/a><\/li><li><strong>Towards fast adaptation of neural architectures with meta learning,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>D. Lian, Y. Zheng, Y. Xu, Y. Lu, L. Lin, P. Zhao, J. Huang, and S. Gao.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=r1eowANFvr\">paper<\/a><\/li><li><strong>Using optimal embeddings to learn new intents with few examples: An application in the insurance domain,<\/strong>&nbsp;in KDD, 2020.&nbsp;<em>S. Acharya, and G. Fung.<\/em>&nbsp;<a href=\"http:\/\/ceur-ws.org\/Vol-2666\/KDD_Converse20_paper_10.pdf\">paper<\/a><\/li><li><strong>Meta-learning for query conceptualization at web scale,<\/strong>&nbsp;in KDD, 2020.&nbsp;<em>F. X. Han, D. Niu, H. Chen, W. Guo, S. Yan, and B. Long.<\/em>&nbsp;<a href=\"https:\/\/sites.ualberta.ca\/~dniu\/Homepage\/Publications_files\/fhan-KDD20.pdf\">paper<\/a><\/li><li><strong>Few-sample and adversarial representation learning for continual stream mining,<\/strong>&nbsp;in TheWebConf, 2020.&nbsp;<em>Z. Wang, Y. Wang, Y. Lin, E. Delord, and L. 
Khan.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3366423.3380153\">paper<\/a><\/li><li><strong>Few-shot graph learning for molecular property prediction,<\/strong>&nbsp;in TheWebConf, 2021.&nbsp;<em>Z. Guo, C. Zhang, W. Yu, J. Herr, O. Wiest, M. Jiang, and N. V. Chawla.<\/em>&nbsp;<a href=\"https:\/\/doi.org\/10.1145\/3442381.3450112\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/zhichunguo\/Meta-MGNN\">code<\/a><\/li><li><strong>Taxonomy-aware learning for few-shot event detection,<\/strong>&nbsp;in TheWebConf, 2021.&nbsp;<em>J. Zheng, F. Cai, W. Chen, W. Lei, and H. Chen.<\/em>&nbsp;<a href=\"https:\/\/doi.org\/10.1145\/3442381.344994\">paper<\/a><\/li><li><strong>Learning from graph propagation via ordinal distillation for one-shot automated essay scoring,<\/strong>&nbsp;in TheWebConf, 2021.&nbsp;<em>Z. Jiang, M. Liu, Y. Yin, H. Yu, Z. Cheng, and Q. Gu.<\/em>&nbsp;<a href=\"https:\/\/doi.org\/10.1145\/3442381.3450017\">paper<\/a><\/li><li><strong>Few-shot network anomaly detection via cross-network meta-learning,<\/strong>&nbsp;in TheWebConf, 2021.&nbsp;<em>K. Ding, Q. Zhou, H. Tong, and H. Liu.<\/em>&nbsp;<a href=\"https:\/\/doi.org\/10.1145\/3442381.3449922\">paper<\/a><\/li><li><strong>Few-shot knowledge validation using rules,<\/strong>&nbsp;in TheWebConf, 2021.&nbsp;<em>M. Loster, D. Mottin, P. Papotti, J. Ehm\u00fcller, B. Feldmann, and F. Naumann.<\/em>&nbsp;<a href=\"https:\/\/doi.org\/10.1145\/3442381.3450040\">paper<\/a><\/li><li><strong>Graph learning regularization and transfer learning for few-shot event detection,<\/strong>&nbsp;in SIGIR, 2021.&nbsp;<em>V. D. Lai, M. V. Nguyen, T. H. Nguyen, and F. Dernoncourt.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3404835.3463054\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/laiviet\/ed-fsl\">code<\/a><\/li><li><strong>Progressive network grafting for few-shot knowledge distillation,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>C. Shen, X. Wang, Y. Yin, J. Song, S. Luo, and M. Song.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16356\/16163\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/zju-vipa\/NetGraft\">code<\/a><\/li><li><strong>Curriculum meta-learning for next POI recommendation,<\/strong>&nbsp;in KDD, 2021.&nbsp;<em>Y. Chen, X. Wang, M. Fan, J. Huang, S. Yang, and W. Zhu.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3447548.3467132\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/PaddlePaddle\/Research\/tree\/master\/ST_DM\/KDD2021-CHAML\">code<\/a><\/li><li><strong>MFNP: A meta-optimized model for few-shot next POI recommendation,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>H. Sun, J. Xu, K. Zheng, P. Zhao, P. Chao, and X. Zhou.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0415.pdf\">paper<\/a><\/li><li><strong>Physics-aware spatiotemporal modules with auxiliary tasks for meta-learning,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>S. Seo, C. Meng, S. Rambhatla, and Y. Liu.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0405.pdf\">paper<\/a><\/li><li><strong>Property-aware relation networks for few-shot molecular property prediction,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>Y. Wang, A. Abuduweili, Q. Yao, and D. 
Dou.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/91bc333f6967019ac47b49ca0f2fa757-Paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/tata1661\/PAR-NeurIPS21\">code<\/a><\/li><li><strong>Few-shot data-driven algorithms for low rank approximation,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>P. Indyk, T. Wagner, and D. Woodruff.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/588da7a73a2e919a23cb9a419c4c6d44-Paper.pdf\">paper<\/a><\/li><li><strong>Non-Gaussian Gaussian processes for few-shot regression,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>M. Sendera, J. Tabor, A. Nowak, A. Bedychaj, M. Patacchiola, T. Trzcinski, P. Spurek, and M. Zieba.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/54f3bc04830d762a3b56a789b6ff62df-Paper.pdf\">paper<\/a><\/li><li><strong>HELP: Hardware-adaptive efficient latency prediction for NAS via meta-learning,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>H. Lee, S. Lee, S. Chong, and S. J. Hwang.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/e3251075554389fe91d17a794861d47b-Paper.pdf\">paper<\/a><\/li><li><strong>Learning to learn dense Gaussian processes for few-shot learning,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>Z. Wang, Z. Miao, X. Zhen, and Q. Qiu.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/6e2713a6efee97bacb63e52c54f0ada0-Paper.pdf\">paper<\/a><\/li><li><strong>A meta-learning based stress category detection framework on social media,<\/strong>&nbsp;in TheWebConf, 2022.&nbsp;<em>X. Wang, L. Cao, H. Zhang, L. Feng, Y. Ding, and N. Li.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3485447.3512013\">paper<\/a><\/li><\/ol>\n\n\n\n<h2><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#theories\"><\/a><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#content\">Theories<\/a><\/h2>\n\n\n\n<ol><li><strong>Learning to learn around a common mean,<\/strong>&nbsp;in NeurIPS, 2018.&nbsp;<em>G. Denevi, C. Ciliberto, D. Stamos, and M. Pontil.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/8220-learning-to-learn-around-a-common-mean.pdf\">paper<\/a><\/li><li><strong>Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm,<\/strong>&nbsp;in ICLR, 2018.&nbsp;<em>C. Finn and S. Levine.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=HyjC5yWCW\">paper<\/a><\/li><li><strong>A theoretical analysis of the number of shots in few-shot learning,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>T. Cao, M. T. Law, and S. Fidler.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=HkgB2TNYPS\">paper<\/a><\/li><li><strong>Rapid learning or feature reuse? Towards understanding the effectiveness of MAML,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>A. Raghu, M. Raghu, S. Bengio, and O. Vinyals.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=rkgMkCEtPB\">paper<\/a><\/li><li><strong>Robust meta-learning for mixed linear regression with small batches,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>W. Kong, R. Somani, S. Kakade, and S. Oh.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/3214a6d842cc69597f9edf26df552e43-Paper.pdf\">paper<\/a><\/li><li><strong>One-shot distributed ridge regression in high dimensions,<\/strong>&nbsp;in ICML, 2020.&nbsp;<em>Y. Sheng, and E. 
Dobriban.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v119\/sheng20a\/sheng20a.pdf\">paper<\/a><\/li><li><strong>Bridging the gap between practice and PAC-Bayes theory in few-shot meta-learning,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>N. Ding, X. Chen, T. Levinboim, S. Goodman, and R. Soricut.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/f6b6d2a114a9644419dc8d2315f22401-Paper.pdf\">paper<\/a><\/li><li><strong>Generalization bounds for meta-learning: An information-theoretic analysis,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>Q. Chen, C. Shui, and M. Marchand.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/d9d347f57ae11f34235b4555710547d8-Paper.pdf\">paper<\/a><\/li><li><strong>Generalization bounds for meta-learning via PAC-Bayes and uniform stability,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>A. Farid, and A. Majumdar.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/1102a326d5f7c9e04fc3c89d0ede88c9-Paper.pdf\">paper<\/a><\/li><li><strong>Unraveling model-agnostic meta-learning via the adaptation learning rate,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>Y. Zou, F. Liu, and Q. Li.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=3rULBvOJ8D2\">paper<\/a><\/li><li><strong>On the importance of Firth bias reduction in few-shot classification,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>S. Ghaffari, E. Saleh, D. Forsyth, and Y. Wang.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=DNRADop4ksB\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/ehsansaleh\/firth_bias_reduction\">code<\/a><\/li><li><strong>Global convergence of MAML and theory-inspired neural architecture search for few-shot learning,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>H. Wang, Y. Wang, R. Sun, and B. Li.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Wang_Global_Convergence_of_MAML_and_Theory-Inspired_Neural_Architecture_Search_for_CVPR_2022_paper.pdf\">paper<\/a><\/li><\/ol>\n\n\n\n<h2><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#few-shot-learning-and-zero-shot-learning\"><\/a><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#content\">Few-shot Learning and Zero-shot Learning<\/a><\/h2>\n\n\n\n<ol><li><strong>Label-embedding for attribute-based classification,<\/strong>&nbsp;in CVPR, 2013.&nbsp;<em>Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_cvpr_2013\/papers\/Akata_Label-Embedding_for_Attribute-Based_2013_CVPR_paper.pdf\">paper<\/a><\/li><li><strong>A unified semantic embedding: Relating taxonomies and attributes,<\/strong>&nbsp;in NeurIPS, 2014.&nbsp;<em>S. J. Hwang and L. Sigal.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/5289-a-unified-semantic-embedding-relating-taxonomies-and-attributes.pdf\">paper<\/a><\/li><li><strong>Multi-attention network for one shot learning,<\/strong>&nbsp;in CVPR, 2017.&nbsp;<em>P. Wang, L. Liu, C. Shen, Z. Huang, A. van den Hengel, and H. T. Shen.<\/em>&nbsp;<a href=\"http:\/\/zpascal.net\/cvpr2017\/Wang_Multi-Attention_Network_for_CVPR_2017_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot and zero-shot multi-label learning for structured label spaces,<\/strong>&nbsp;in EMNLP, 2018.&nbsp;<em>A. Rios and R. Kavuluru.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/D18-1352.pdf\">paper<\/a><\/li><li><strong>Learning compositional representations for few-shot recognition,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>P. 
Tokmakov, Y.-X. Wang, and M. Hebert.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Tokmakov_Learning_Compositional_Representations_for_Few-Shot_Recognition_ICCV_2019_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/sites.google.com\/view\/comprepr\/home\">code<\/a><\/li><li><strong>Large-scale few-shot learning: Knowledge transfer with class hierarchy,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>A. Li, T. Luo, Z. Lu, T. Xiang, and L. Wang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Li_Large-Scale_Few-Shot_Learning_Knowledge_Transfer_With_Class_Hierarchy_CVPR_2019_paper.pdf\">paper<\/a><\/li><li><strong>Generalized zero- and few-shot learning via aligned variational autoencoders,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>E. Schonfeld, S. Ebrahimi, S. Sinha, T. Darrell, and Z. Akata.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Schonfeld_Generalized_Zero-_and_Few-Shot_Learning_via_Aligned_Variational_Autoencoders_CVPR_2019_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/edgarschnfld\/CADA-VAE-PyTorch\">code<\/a><\/li><li><strong>F-VAEGAN-D2: A feature generating framework for any-shot learning,<\/strong>&nbsp;in CVPR, 2019.&nbsp;<em>Y. Xian, S. Sharma, B. Schiele, and Z. Akata.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Xian_F-VAEGAN-D2_A_Feature_Generating_Framework_for_Any-Shot_Learning_CVPR_2019_paper.pdf\">paper<\/a><\/li><li><strong>TGG: Transferable graph generation for zero-shot and few-shot learning,<\/strong>&nbsp;in ACM MM, 2019.&nbsp;<em>C. Zhang, X. Lyu, and Z. Tang.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3343031.3351000\">paper<\/a><\/li><li><strong>Adaptive cross-modal few-shot learning,<\/strong>&nbsp;in NeurIPS, 2019.&nbsp;<em>C. Xing, N. Rostamzadeh, B. N. Oreshkin, and P. O. Pinheiro.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/8731-adaptive-cross-modal-few-shot-learning.pdf\">paper<\/a><\/li><li><strong>Learning meta model for zero- and few-shot face anti-spoofing,<\/strong>&nbsp;in AAAI, 2020.&nbsp;<em>Y. Qin, C. Zhao, X. Zhu, Z. Wang, Z. Yu, T. Fu, F. Zhou, J. Shi, and Z. Lei.<\/em>&nbsp;<a href=\"https:\/\/aaai.org\/ojs\/index.php\/AAAI\/article\/view\/6866\">paper<\/a><\/li><li><strong>RD-GAN: Few\/Zero-shot Chinese character style transfer via radical decomposition and rendering,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>Y. Huang, M. He, L. Jin, and Y. Wang.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123510154.pdf\">paper<\/a><\/li><li><strong>An empirical study on large-scale multi-label text classification including few and zero-shot labels,<\/strong>&nbsp;in EMNLP, 2020.&nbsp;<em>I. Chalkidis, M. Fergadiotis, S. Kotitsas, P. Malakasiotis, N. Aletras, and I. Androutsopoulos.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.emnlp-main.607.pdf\">paper<\/a><\/li><li><strong>Multi-label few\/zero-shot learning with knowledge aggregated from multiple label graphs,<\/strong>&nbsp;in EMNLP, 2020.&nbsp;<em>J. Lu, L. Du, M. Liu, and J. Dipnall.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.emnlp-main.235.pdf\">paper<\/a><\/li><li><strong>Emergent complexity and zero-shot transfer via unsupervised environment design,<\/strong>&nbsp;in NeurIPS, 2020.&nbsp;<em>M. Dennis, N. Jaques, E. Vinitsky, A. Bayen, S. Russell, A. Critch, and S. 
Levine.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/985e9a46e10005356bbaf194249f6856-Paper.pdf\">paper<\/a><\/li><li><strong>Learning graphs for knowledge transfer with limited labels,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>P. Ghosh, N. Saini, L. S. Davis, and A. Shrivastava.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Ghosh_Learning_Graphs_for_Knowledge_Transfer_With_Limited_Labels_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Improving zero and few-shot abstractive summarization with intermediate fine-tuning and data augmentation,<\/strong>&nbsp;in NAACL-HLT, 2021.&nbsp;<em>A. R. Fabbri, S. Han, H. Li, H. Li, M. Ghazvininejad, S. R. Joty, D. R. Radev, and Y. Mehdad.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.naacl-main.57.pdf\">paper<\/a><\/li><li><strong>Label verbalization and entailment for effective zero and few-shot relation extraction,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>O. Sainz, O. L. d. Lacalle, G. Labaka, A. Barrena, and E. Agirre.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.92.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/osainz59\/Ask2Transformers\">code<\/a><\/li><li><strong>An empirical investigation of word alignment supervision for zero-shot multilingual neural machine translation,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>A. Raganato, R. V\u00e1zquez, M. Creutz, and J. Tiedemann.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.664.pdf\">paper<\/a><\/li><li><strong>Bridge to target domain by prototypical contrastive learning and label confusion: Re-explore zero-shot learning for slot filling,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>L. Wang, X. Li, J. Liu, K. He, Y. Yan, and W. Xu.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.746.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/w-lw\/pclc\">code<\/a><\/li><li><strong>A label-aware BERT attention network for zero-shot multi-intent detection in spoken language understanding,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>T. Wu, R. Su, and B. Juang.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.399.pdf\">paper<\/a><\/li><li><strong>Zero-shot dialogue disentanglement by self-supervised entangled response selection,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>T. Chi, and A. I. Rudnicky.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.400.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/chijames\/zero_shot_dialogue_disentanglement\">code<\/a><\/li><li><strong>Robust retrieval augmented generation for zero-shot slot filling,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>M. R. Glass, G. Rossiello, M. F. M. Chowdhury, and A. Gliozzo.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.148.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/IBM\/kgi-slot-filling\">code<\/a><\/li><li><strong>Everything is all it takes: A multipronged strategy for zero-shot cross-lingual information extraction,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>M. Yarmohammadi, S. Wu, M. Marone, H. Xu, S. Ebner, G. Qin, Y. Chen, J. Guo, C. Harman, K. Murray, A. S. White, M. Dredze, and B. V. Durme.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.149.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/shijie-wu\/crosslingual-nlp\">code<\/a><\/li><li><strong>An empirical study on multiple information sources for zero-shot fine-grained entity typing,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>Y. Chen, H. Jiang, L. Liu, S. Shi, C. Fan, M. Yang, and R. 
Xu.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.210.pdf\">paper<\/a><\/li><li><strong>Zero-shot dialogue state tracking via cross-task transfer,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>Z. Lin, B. Liu, A. Madotto, S. Moon, Z. Zhou, P. Crook, Z. Wang, Z. Yu, E. Cho, R. Subba, and P. Fung.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.622.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/facebookresearch\/Zero-Shot-DST\">code<\/a><\/li><li><strong>Finetuned language models are zero-shot learners,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>J. Wei, M. Bosma, V. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=gEZrGCozdqR\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/google-research\/flan\">code<\/a><\/li><li><strong>Zero-shot stance detection via contrastive learning,<\/strong>&nbsp;in TheWebConf, 2022.&nbsp;<em>B. Liang, Z. Chen, L. Gui, Y. He, M. Yang, and R. Xu.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3485447.3511994\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/HITSZ-HLT\/PT-HCL\">code<\/a><\/li><li><strong>Reframing instructional prompts to GPTk&#8217;s language,<\/strong>&nbsp;in Findings of ACL, 2022.&nbsp;<em>D. Khashabi, C. Baral, Y. Choi, and H. Hajishirzi.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.findings-acl.50.pdf\">paper<\/a><\/li><li><strong>JointCL: A joint contrastive learning framework for zero-shot stance detection,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>B. Liang, Q. Zhu, X. Li, M. Yang, L. Gui, Y. He, and R. Xu.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.7.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/hitsz-hlt\/jointcl\">code<\/a><\/li><li><strong>Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>S. Hu, N. Ding, H. Wang, Z. Liu, J. Wang, J. Li, W. Wu, and M. Sun.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.158.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/thunlp\/knowledgeableprompttuning\">code<\/a><\/li><li><strong>Uni-Perceiver: Pre-training unified architecture for generic perception for zero-shot and few-shot tasks,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>X. Zhu, J. Zhu, H. Li, X. Wu, H. Li, X. Wang, and J. Dai.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Zhu_Uni-Perceiver_Pre-Training_Unified_Architecture_for_Generic_Perception_for_Zero-Shot_and_CVPR_2022_paper.pdf\">paper<\/a><\/li><\/ol>\n\n\n\n<h2><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#variants-of-few-shot-learning\"><\/a><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#content\">Variants of Few-shot Learning<\/a><\/h2>\n\n\n\n<ol><li><strong>Continuous adaptation via meta-learning in nonstationary and competitive environments,<\/strong>&nbsp;in ICLR, 2018.&nbsp;<em>M. Al-Shedivat, T. Bansal, Y. Burda, I. Sutskever, I. Mordatch, and P. Abbeel.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=Sk2u1g-0-\">paper<\/a><\/li><li><strong>Deep online learning via meta-learning: Continual adaptation for model-based RL,<\/strong>&nbsp;in ICLR, 2018.&nbsp;<em>A. Nagabandi, C. Finn, and S. Levine.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/references\/pdf?id=ryuIpa6S4\">paper<\/a><\/li><li><strong>Incremental few-shot learning with attention attractor networks,<\/strong>&nbsp;in NeurIPS, 2019.&nbsp;<em>M. Ren, R. 
Liao, E. Fetaya, and R. S. Zemel.<\/em>&nbsp;<a href=\"https:\/\/papers.nips.cc\/paper\/8769-incremental-few-shot-learning-with-attention-attractor-networks.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/renmengye\/inc-few-shot-attractor-public\">code<\/a><\/li><li><strong>Bidirectional one-shot unsupervised domain mapping,<\/strong>&nbsp;in ICCV, 2019.&nbsp;<em>T. Cohen, and L. Wolf.<\/em>&nbsp;<a href=\"http:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Cohen_Bidirectional_One-Shot_Unsupervised_Domain_Mapping_ICCV_2019_paper.pdf\">paper<\/a><\/li><li><strong>XtarNet: Learning to extract task-adaptive representation for incremental few-shot learning,<\/strong>&nbsp;in ICML, 2020.&nbsp;<em>S. W. Yoon, D. Kim, J. Seo, and J. Moon.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v119\/yoon20b\/yoon20b.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/EdwinKim3069\/XtarNet\">code<\/a><\/li><li><strong>Few-shot class-incremental learning,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>X. Tao, X. Hong, X. Chang, S. Dong, X. Wei, and Y. Gong.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Tao_Few-Shot_Class-Incremental_Learning_CVPR_2020_paper.pdf\">paper<\/a><\/li><li><strong>Wandering within a world: Online contextualized few-shot learning,<\/strong>&nbsp;in ICLR, 2021.&nbsp;<em>M. Ren, M. L. Iuzzolino, M. C. Mozer, and R. Zemel.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=oZIvHV04XgC\">paper<\/a><\/li><li><strong>Repurposing pretrained models for robust out-of-domain few-shot learning,<\/strong>&nbsp;in ICLR, 2021.&nbsp;<em>N. Kwon, H. Na, G. Huang, and S. Lacoste-Julien.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=qkLMTphG5-h\">paper<\/a>&nbsp;<a href=\"https:\/\/anonymous.4open.science\/r\/08ef52cf-456a-4e36-a408-04e1ad0bc5a9\/\">code<\/a><\/li><li><strong>Prototypical cross-domain self-supervised learning for few-shot unsupervised domain adaptation,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>X. Yue, Z. Zheng, S. Zhang, Y. Gao, T. Darrell, K. Keutzer, and A. S. Vincentelli.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Yue_Prototypical_Cross-Domain_Self-Supervised_Learning_for_Few-Shot_Unsupervised_Domain_Adaptation_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Self-promoted prototype refinement for few-shot class-incremental learning,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>K. Zhu, Y. Cao, W. Zhai, J. Cheng, and Z. Zha.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Zhu_Self-Promoted_Prototype_Refinement_for_Few-Shot_Class-Incremental_Learning_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Semantic-aware knowledge distillation for few-shot class-incremental learning,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>A. Cheraghian, S. Rahman, P. Fang, S. K. Roy, L. Petersson, and M. Harandi.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Cheraghian_Semantic-Aware_Knowledge_Distillation_for_Few-Shot_Class-Incremental_Learning_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot incremental learning with continually evolved classifiers,<\/strong>&nbsp;in CVPR, 2021.&nbsp;<em>C. Zhang, N. Song, G. Lin, Y. Zheng, P. Pan, and Y. 
Xu.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2021\/papers\/Zhang_Few-Shot_Incremental_Learning_With_Continually_Evolved_Classifiers_CVPR_2021_paper.pdf\">paper<\/a><\/li><li><strong>Learning a universal template for few-shot dataset generalization,<\/strong>&nbsp;in ICML, 2021.&nbsp;<em>E. Triantafillou, H. Larochelle, R. Zemel, and V. Dumoulin.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v139\/triantafillou21a\/triantafillou21a.pdf\">paper<\/a><\/li><li><strong>GP-Tree: A gaussian process classifier for few-shot incremental learning,<\/strong>&nbsp;in ICML, 2021.&nbsp;<em>I. Achituve, A. Navon, Y. Yemini, G. Chechik, and E. Fetaya.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v139\/achituve21a\/achituve21a.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/IdanAchituve\/GP-Tree\">code<\/a><\/li><li><strong>Addressing catastrophic forgetting in few-shot problems,<\/strong>&nbsp;in ICML, 2021.&nbsp;<em>P. Yap, H. Ritter, and D. Barber.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v139\/yap21a\/yap21a.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/pauchingyap\/boml\">code<\/a><\/li><li><strong>Few-shot conformal prediction with auxiliary tasks,<\/strong>&nbsp;in ICML, 2021.&nbsp;<em>A. Fisch, T. Schuster, T. Jaakkola, and R. Barzilay.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v139\/fisch21a\/fisch21a.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/ajfisch\/few-shot-cp\">code<\/a><\/li><li><strong>Few-shot lifelong learning,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>P. Mazumder, P. Singh, and P. Rai.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16334\/16141\">paper<\/a><\/li><li><strong>Few-shot class-incremental learning via relation knowledge distillation,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>S. Dong, X. Hong, X. Tao, X. Chang, X. Wei, and Y. Gong.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16213\/16020\">paper<\/a><\/li><li><strong>Few-shot one-class classification via meta-learning,<\/strong>&nbsp;in AAAI, 2021.&nbsp;<em>A. Frikha, D. Krompass, H. Koepken, and V. Tresp.<\/em>&nbsp;<a href=\"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16913\/16720\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/AhmedFrikha\/Few-Shot-One-Class-Classification-via-Meta-Learning\">code<\/a><\/li><li><strong>Practical one-shot federated learning for cross-silo setting,<\/strong>&nbsp;in IJCAI, 2021.&nbsp;<em>Q. Li, B. He, and D. Song.<\/em>&nbsp;<a href=\"https:\/\/www.ijcai.org\/proceedings\/2021\/0205.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/QinbinLi\/FedK\">code<\/a><\/li><li><strong>Incremental few-shot text classification with multi-round new classes: Formulation, dataset and system,<\/strong>&nbsp;in NAACL-HLT, 2021.&nbsp;<em>C. Xia, W. Yin, Y. Feng, and P. S. Yu.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.naacl-main.106.pdf\">paper<\/a><\/li><li><strong>Continual few-shot learning for text classification,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>R. Pasunuru, V. Stoyanov, and M. Bansal.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.460.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/ramakanth-pasunuru\/cfl-benchmark\">code<\/a><\/li><li><strong>Self-training with few-shot rationalization,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>M. M. Bhat, A. Sordoni, and S. 
Mukherjee.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.836.pdf\">paper<\/a><\/li><li><strong>Diverse distributions of self-supervised tasks for meta-learning in NLP,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>T. Bansal, K. P. Gunasekaran, T. Wang, T. Munkhdalai, and A. McCallum.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.469.pdf\">paper<\/a><\/li><li><strong>Generalized and incremental few-shot learning by explicit learning and calibration without forgetting,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>A. Kukleva, H. Kuehne, and B. Schiele.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Kukleva_Generalized_and_Incremental_Few-Shot_Learning_by_Explicit_Learning_and_Calibration_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Meta learning on a sequence of imbalanced domains with difficulty awareness,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>Z. Wang, T. Duan, L. Fang, Q. Suo, and M. Gao.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Wang_Meta_Learning_on_a_Sequence_of_Imbalanced_Domains_With_Difficulty_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/joey-wang123\/imbalancemeta\">code<\/a><\/li><li><strong>Synthesized feature based few-shot class-incremental learning on a mixture of subspaces,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>A. Cheraghian, S. Rahman, S. Ramasinghe, P. Fang, C. Simon, L. Petersson, and M. Harandi.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Cheraghian_Synthesized_Feature_Based_Few-Shot_Class-Incremental_Learning_on_a_Mixture_of_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot and continual learning with attentive independent mechanisms,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>E. Lee, C. Huang, and C. Lee.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Lee_Few-Shot_and_Continual_Learning_With_Attentive_Independent_Mechanisms_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/huang50213\/AIM-Fewshot-Continual\">code<\/a><\/li><li><strong>Low-shot validation: Active importance sampling for estimating classifier performance on rare categories,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>F. Poms, V. Sarukkai, R. T. Mullapudi, N. S. Sohoni, W. R. Mark, D. Ramanan, and K. Fatahalian.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Poms_Low-Shot_Validation_Active_Importance_Sampling_for_Estimating_Classifier_Performance_on_ICCV_2021_paper.pdf\">paper<\/a><\/li><li><strong>Overcoming catastrophic forgetting in incremental few-shot learning by finding flat minima,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>G. Shi, J. Chen, W. Zhang, L. Zhan, and X. Wu.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/357cfba15668cc2e1e73111e09d54383-Paper.pdf\">paper<\/a><\/li><li><strong>Variational continual Bayesian meta-learning,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>Q. Zhang, J. Fang, Z. Meng, S. Liang, and E. Yilmaz.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/cdd0500dc0ef6682fa6ec6d2e6b577c4-Paper.pdf\">paper<\/a><\/li><li><strong>LFPT5: A unified framework for lifelong few-shot language learning based on prompt tuning of T5,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>C. Qin, and S.
Joty.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=HCRVf71PMF\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/qcwthu\/Lifelong-Fewshot-Language-Learning\">code<\/a><\/li><li><strong>Subspace regularizers for few-shot class incremental learning,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>A. F. Aky\u00fcrek, E. Aky\u00fcrek, D. Wijaya, and J. Andreas.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=boJy41J-tnQ\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/feyzaakyurek\/subspace-reg\">code<\/a><\/li><li><strong>Meta discovery: Learning to discover novel classes given very limited data,<\/strong>&nbsp;in ICLR, 2022.&nbsp;<em>H. Chi, F. Liu, W. Yang, L. Lan, T. Liu, B. Han, G. Niu, M. Zhou, and M. Sugiyama.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=MEpKGLsY8f\">paper<\/a><\/li><li><strong>Topological transduction for hybrid few-shot learning,<\/strong>&nbsp;in TheWebConf, 2022.&nbsp;<em>J. Chen, and A. Zhang.<\/em>&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3485447.3512033\">paper<\/a><\/li><li><strong>Continual few-shot relation learning via embedding space regularization and data augmentation,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>C. Qin, and S. Joty.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.198.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/qcwthu\/continual_fewshot_relation_learning\">code<\/a><\/li><li><strong>Few-shot class-incremental learning for named entity recognition,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>R. Wang, T. Yu, H. Zhao, S. Kim, S. Mitra, R. Zhang, and R. Henao.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.43.pdf\">paper<\/a><\/li><li><strong>Task-adaptive negative envision for few-shot open-set recognition,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>S. Huang, J. Ma, G. Han, and S. Chang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Huang_Task-Adaptive_Negative_Envision_for_Few-Shot_Open-Set_Recognition_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/shiyuanh\/TANE\">code<\/a><\/li><li><strong>Forward compatible few-shot class-incremental learning,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>D. Zhou, F. Wang, H. Ye, L. Ma, S. Pu, and D. Zhan.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Zhou_Forward_Compatible_Few-Shot_Class-Incremental_Learning_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/zhoudw-zdw\/CVPR22-Fact\">code<\/a><\/li><li><strong>Sylph: A hypernetwork framework for incremental few-shot object detection,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>L. Yin, J. M. Perez-Rua, and K. J. Liang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Yin_Sylph_A_Hypernetwork_Framework_for_Incremental_Few-Shot_Object_Detection_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Constrained few-shot class-incremental learning,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>M. Hersche, G. Karunaratne, G. Cherubini, L. Benini, A. Sebastian, and A. Rahimi.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Hersche_Constrained_Few-Shot_Class-Incremental_Learning_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>iFS-RCNN: An incremental few-shot instance segmenter,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>K. Nguyen, and S.
Todorovic.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Nguyen_iFS-RCNN_An_Incremental_Few-Shot_Instance_Segmenter_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>MetaFSCIL: A meta-learning approach for few-shot class incremental learning,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>Z. Chi, L. Gu, H. Liu, Y. Wang, Y. Yu, and J. Tang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Chi_MetaFSCIL_A_Meta-Learning_Approach_for_Few-Shot_Class_Incremental_Learning_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot incremental learning for label-to-image translation,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>P. Chen, Y. Zhang, Z. Li, and L. Sun.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Chen_Few-Shot_Incremental_Learning_for_Label-to-Image_Translation_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Revisiting learnable affines for batch norm in few-shot transfer learning,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>M. Yazdanpanah, A. A. Rahman, M. Chaudhary, C. Desrosiers, M. Havaei, E. Belilovsky, and S. E. Kahou.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Yazdanpanah_Revisiting_Learnable_Affines_for_Batch_Norm_in_Few-Shot_Transfer_Learning_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Few-shot learning with noisy labels,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>K. J. Liang, S. B. Rangrej, V. Petrovic, and T. Hassner.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Liang_Few-Shot_Learning_With_Noisy_Labels_CVPR_2022_paper.pdf\">paper<\/a><\/li><li><strong>Improving adversarially robust few-shot image classification with generalizable representations,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>J. Dong, Y. Wang, J. Lai, and X. Xie.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Dong_Improving_Adversarially_Robust_Few-Shot_Image_Classification_With_Generalizable_Representations_CVPR_2022_paper.pdf\">paper<\/a><\/li><\/ol>\n\n\n\n<h2><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#datasetsbenchmarks\"><\/a><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#content\">Datasets\/Benchmarks<\/a><\/h2>\n\n\n\n<ol><li><strong>FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation,<\/strong>&nbsp;in EMNLP, 2018.&nbsp;<em>X. Han, H. Zhu, P. Yu, Z. Wang, Y. Yao, Z. Liu, and M. Sun.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/D18-1514.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/thunlp\/FewRel\">code<\/a><\/li><li><strong>Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning,<\/strong>&nbsp;arXiv preprint, 2019.&nbsp;<em>T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine.<\/em>&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1910.10897\">paper<\/a>&nbsp;<a href=\"https:\/\/meta-world.github.io\/\">code<\/a><\/li><li><strong>The Omniglot challenge: A 3-year progress report,<\/strong>&nbsp;in Current Opinion in Behavioral Sciences, 2019.&nbsp;<em>B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum.<\/em>&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1902.03477\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/brendenlake\/omniglot\">code<\/a><\/li><li><strong>FewRel 2.0: Towards more challenging few-shot relation classification,<\/strong>&nbsp;in EMNLP-IJCNLP, 2019.&nbsp;<em>T. Gao, X. Han, H. Zhu, Z. Liu, P. Li, M. Sun, and J. 
Zhou.<\/em>&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/D19-1649.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/thunlp\/FewRel\">code<\/a><\/li><li><strong>META-DATASET: A dataset of datasets for learning to learn from few examples,<\/strong>&nbsp;in ICLR, 2020.&nbsp;<em>E. Triantafillou, T. Zhu, V. Dumoulin, P. Lamblin, U. Evci, K. Xu, R. Goroshin, C. Gelada, K. Swersky, P. Manzagol, and H. Larochelle.<\/em>&nbsp;<a href=\"https:\/\/openreview.net\/pdf?id=rkgAGAVKPr\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/google-research\/meta-dataset\">code<\/a><\/li><li><strong>Few-shot object detection with attention-RPN and multi-relation detector,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>Q. Fan, W. Zhuo, C.-K. Tang, and Y.-W. Tai.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Fan_Few-Shot_Object_Detection_With_Attention-RPN_and_Multi-Relation_Detector_CVPR_2020_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/fanq15\/Few-Shot-Object-Detection-Dataset\">code<\/a><\/li><li><strong>FSS-1000: A 1000-class dataset for few-shot segmentation,<\/strong>&nbsp;in CVPR, 2020.&nbsp;<em>X. Li, T. Wei, Y. P. Chen, Y.-W. Tai, and C.-K. Tang.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Li_FSS-1000_A_1000-Class_Dataset_for_Few-Shot_Segmentation_CVPR_2020_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/HKUSTCV\/FSS-1000\">code<\/a><\/li><li><strong>Impact of base dataset design on few-shot image classification,<\/strong>&nbsp;in ECCV, 2020.&nbsp;<em>O. Sbai, C. Couprie, and M. Aubry.<\/em>&nbsp;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2020\/papers_ECCV\/papers\/123610579.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/facebookresearch\/fewshotDatasetDesign\">code<\/a><\/li><li><strong>A large-scale benchmark for few-shot program induction and synthesis,<\/strong>&nbsp;in ICML, 2021.&nbsp;<em>F. Alet, J. Lopez-Contreras, J. Koppel, M. Nye, A. Solar-Lezama, T. Lozano-Perez, L. Kaelbling, and J. Tenenbaum.<\/em>&nbsp;<a href=\"http:\/\/proceedings.mlr.press\/v139\/alet21a\/alet21a.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/javierlc2000\/progres\">code<\/a><\/li><li><strong>Few-NERD: A few-shot named entity recognition dataset,<\/strong>&nbsp;in ACL-IJCNLP, 2021.&nbsp;<em>N. Ding, G. Xu, Y. Chen, X. Wang, X. Han, P. Xie, H. Zheng, and Z. Liu.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.acl-long.248.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/ningding97.github.io\/fewnerd\/\">code<\/a><\/li><li><strong>CrossFit: A few-shot learning challenge for cross-task generalization in NLP,<\/strong>&nbsp;in EMNLP, 2021.&nbsp;<em>Q. Ye, B. Y. Lin, and X. Ren.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2021.emnlp-main.572.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/INK-USC\/CrossFit\">code<\/a><\/li><li><strong>ORBIT: A real-world few-shot dataset for teachable object recognition,<\/strong>&nbsp;in ICCV, 2021.&nbsp;<em>D. Massiceti, L. Zintgraf, J. Bronskill, L. Theodorou, M. T. Harris, E. Cutrell, C. Morrison, K. Hofmann, and S. Stumpf.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021\/papers\/Massiceti_ORBIT_A_Real-World_Few-Shot_Dataset_for_Teachable_Object_Recognition_ICCV_2021_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/microsoft\/ORBIT-Dataset\">code<\/a><\/li><li><strong>FLEX: Unifying evaluation for few-shot NLP,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>J. Bragg, A. Cohan, K. Lo, and I.
Beltagy.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/8493eeaccb772c0878f99d60a0bd2bb3-Paper.pdf\">paper<\/a><\/li><li><strong>Two sides of meta-learning evaluation: In vs. out of distribution,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>A. Setlur, O. Li, and V. Smith.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/1e932f24dc0aa4e7a6ac2beec387416d-Paper.pdf\">paper<\/a><\/li><li><strong>Realistic evaluation of transductive few-shot learning,<\/strong>&nbsp;in NeurIPS, 2021.&nbsp;<em>O. Veilleux, M. Boudiaf, P. Piantanida, and I. B. Ayed.<\/em>&nbsp;<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/4d7a968bb636e25818ff2a3941db08c1-Paper.pdf\">paper<\/a><\/li><li><strong>FewNLU: Benchmarking state-of-the-art methods for few-shot natural language understanding,<\/strong>&nbsp;in ACL, 2022.&nbsp;<em>Y. Zheng, J. Zhou, Y. Qian, M. Ding, C. Liao, L. Jian, R. Salakhutdinov, J. Tang, S. Ruder, and Z. Yang.<\/em>&nbsp;<a href=\"https:\/\/aclanthology.org\/2022.acl-long.38.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/THUDM\/FewNLU\">code<\/a><\/li><li><strong>Bongard-HOI: Benchmarking few-shot visual reasoning for human-object interactions,<\/strong>&nbsp;in CVPR, 2022.&nbsp;<em>H. Jiang, X. Ma, W. Nie, Z. Yu, Y. Zhu, and A. Anandkumar.<\/em>&nbsp;<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2022\/papers\/Jiang_Bongard-HOI_Benchmarking_Few-Shot_Visual_Reasoning_for_Human-Object_Interactions_CVPR_2022_paper.pdf\">paper<\/a>&nbsp;<a href=\"https:\/\/github.com\/nvlabs\/Bongard-HOI\">code<\/a><\/li><\/ol>
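\n\n\n\n<p>Most of the benchmarks above share a common evaluation protocol: each test episode samples N unseen classes, K labeled support examples per class, and a disjoint query set on which the adapted model is scored. The snippet below is a minimal, framework-agnostic sketch of that sampling step; the toy dataset and the function name sample_episode are illustrative assumptions, not part of any benchmark listed here.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Minimal sketch of N-way K-shot episode sampling (illustrative only).\nimport random\nfrom collections import defaultdict\n\ndef sample_episode(dataset, n_way=5, k_shot=5, n_query=15):\n    # dataset: list of (example, label) pairs\n    by_class = defaultdict(list)\n    for x, y in dataset:\n        by_class[y].append(x)\n    classes = random.sample(list(by_class), n_way)  # draw N episode classes\n    support, query = [], []\n    for new_label, c in enumerate(classes):  # relabel classes as 0..N-1\n        xs = random.sample(by_class[c], k_shot + n_query)\n        support += [(x, new_label) for x in xs[:k_shot]]\n        query += [(x, new_label) for x in xs[k_shot:]]\n    return support, query\n\n# toy corpus: 20 classes with 30 examples each\ntoy = [('item_%d_%d' % (c, i), c) for c in range(20) for i in range(30)]\nsupport, query = sample_episode(toy)\nprint(len(support), len(query))  # 25 75\n<\/code><\/pre>\n\n\n\n<p>Reported numbers are then typically averaged over many such episodes, which is why the benchmarks above quote mean accuracy over episodes rather than a single fixed split.<\/p>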
\n\n\n\n<h2><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#software-library\"><\/a><a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/FewShotPapers#content\">Software Library<\/a><\/h2>\n\n\n\n<ol><li><strong>PaddleFSL,<\/strong>&nbsp;a library for few-shot learning written in&nbsp;<em>PaddlePaddle<\/em>.&nbsp;<a href=\"https:\/\/github.com\/tata1661\/FSL-Mate\/tree\/master\/PaddleFSL\">link<\/a><\/li><li><strong>Torchmeta,<\/strong>&nbsp;a library for few-shot learning &amp; meta-learning written in&nbsp;<em>PyTorch<\/em>.&nbsp;<a href=\"https:\/\/github.com\/tristandeleu\/pytorch-meta#torchmeta\">link<\/a><\/li><li><strong>learn2learn,<\/strong>&nbsp;a library for meta-learning written in&nbsp;<em>PyTorch<\/em>.&nbsp;<a href=\"https:\/\/github.com\/learnables\/learn2learn\">link<\/a><\/li><li><strong>keras-fsl,<\/strong>&nbsp;a library for few-shot learning written in&nbsp;<em>TensorFlow<\/em>.&nbsp;<a href=\"https:\/\/github.com\/few-shot-learning\/Keras-FewShotLearning\">link<\/a><\/li><\/ol>
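\n\n\n\n<p>To make the episodic training loop behind many papers above concrete, here is a minimal sketch of one MAML-style meta-training step written against the learn2learn library from the list. It is only a sketch under stated assumptions: the toy linear model, the randomly generated 5-way 5-shot tensors, and all hyperparameter values are illustrative placeholders rather than a reference implementation.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Hedged sketch of one MAML meta-training step with learn2learn.\n# The toy model and the random 5-way 5-shot data are placeholders.\nimport torch\nimport learn2learn as l2l\n\nmodel = torch.nn.Linear(16, 5)             # toy backbone: 16-d features, 5 classes\nmaml = l2l.algorithms.MAML(model, lr=0.1)  # inner-loop learning rate\nopt = torch.optim.Adam(maml.parameters(), lr=1e-3)\nloss_fn = torch.nn.CrossEntropyLoss()\n\nfor episode in range(100):                 # each episode is one few-shot task\n    x_s, y_s = torch.randn(25, 16), torch.arange(5).repeat(5)  # support set\n    x_q, y_q = torch.randn(25, 16), torch.arange(5).repeat(5)  # query set\n\n    learner = maml.clone()                 # task-specific copy of the model\n    for _ in range(3):                     # inner loop: adapt on the support set\n        learner.adapt(loss_fn(learner(x_s), y_s))\n\n    query_loss = loss_fn(learner(x_q), y_q)  # outer loop: score on the query set\n    opt.zero_grad()\n    query_loss.backward()                  # backprop through the adaptation\n    opt.step()\n<\/code><\/pre>\n\n\n\n<p>The clone\/adapt split keeps the meta-parameters differentiable through the inner-loop updates, which is the central design choice that MAML-style methods in the lists above rely on; Torchmeta and PaddleFSL organize training around the same episodic structure.<\/p>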