Abstract
Recent years have seen rapid progress in meta-learning methods, which transfer knowledge across tasks and domains to efficiently learn new tasks, optimize the learning process itself, and even generate new learning methods from scratch. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations, classifiers, and policies for acting in environments. In practice, meta-learning has been shown to yield new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems. Moreover, improving one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and neuroscience shows a strong connection between human reward learning and the growing sub-field of meta-reinforcement learning.
Some of the fundamental questions that this workshop aims to address are:
- What are the meta-learning processes in nature (e.g., in humans), and how can we take inspiration from them?
- What is the relationship between meta-learning, continual learning, and transfer learning?
- What interactions exist between meta-learning and large pretrained / foundation models?
- What principles can we learn from meta-learning to help us design the next generation of learning systems?
- What kind of theoretical principles can we develop for meta-learning?
- How can we exploit our domain knowledge to effectively guide the meta-learning process and make it more efficient?
- How can we design better benchmarks for different meta-learning scenarios?
As prospective participants, we primarily target machine learning researchers interested in the questions and foci outlined above. Specific target communities within machine learning include, but are not limited to: meta-learning, AutoML, reinforcement learning, deep learning, optimization, evolutionary computation, and Bayesian optimization. We also invite submissions from researchers who study human learning and neuroscience, to provide a broad and interdisciplinary perspective to the attendees.
Invited Speakers
(in alphabetical order)
- Lucas Beyer (Google Brain) on “Large-scale pre-training and transfer learning”
- Chelsea Finn (Stanford University) on “Meta-Reinforcement Learning: Algorithms and Applications”
- Elena Gribovskaya (DeepMind) on “Retrieval augmentation and in-context learning for fast adaptation to new tasks”
- Percy Liang (Stanford University) on “Understanding In-Context Learning in Simple Models”
- Mengye Ren (New York University) on “Meta-learning within a lifetime”
- Greg Yang (Microsoft Research) on “Transferring insights from small to large models without learning”
For the abstracts of the invited talks, please see this document.
Organizers
(in alphabetical order)
- Fábio Ferreira (University of Freiburg)
- Qi Lei (Princeton University)
- Eleni Triantafillou (Google Brain)
- Joaquin Vanschoren (Eindhoven University of Technology)
- Huaxiu Yao (Stanford University)
Accepted Papers
Papers accepted to the workshop (sorted in ascending order of submission id).
Poster session 1
- HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection; Lukas Fehring, Jonas Hanselle, Alexander Tornede
- Interpolating Compressed Parameter Subspaces; Siddhartha Datta, Nigel Shadbolt
- Multiple Modes for Continual Learning; Siddhartha Datta, Nigel Shadbolt
- Topological Continual Learning with Wasserstein Distance and Barycenter; Tananun Songdechakraiwut, Xiaoshuang Yin, Barry D Van Veen
- FiT: Parameter Efficient Few-shot Transfer Learning; Aliaksandra Shysheya, John F Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, Richard E Turner
- Betty: An Automatic Differentiation Library for Multilevel Optimization; Sang Keun Choe, Willie Neiswanger, Pengtao Xie, Eric Xing
- General-Purpose In-Context Learning by Meta-Learning Transformers; Louis Kirsch, Luke Metz, James Harrison, Jascha Sohl-Dickstein
- Conditional Neural Processes for Molecules; Miguel Garcia Ortegon, Andreas Bender, Sergio Bacallado
- Contextual Squeeze-and-Excitation; Massimiliano Patacchiola, John F Bronskill, Aliaksandra Shysheya, Katja Hofmann, Sebastian Nowozin, Richard E Turner
- Meta-learning of Black-box Solvers Using Deep Reinforcement Learning; Cedric Malherbe, Aladin Virmaux, Ludovic Dos Santos, Sofian Chaybouti
- Efficient Queries Transformer Neural Processes; Leo Feng, Hossein Hajimirsadeghi, Yoshua Bengio, Mohamed Osama Ahmed
- GramML: Exploring Context-Free Grammars with Model-Free Reinforcement Learning; Hernan Ceferino Vazquez, Jorge Sánchez, Rafael Carrascosa
- Debiasing Meta-Gradient Reinforcement Learning by Learning the Outer Value Function; Clément Bonnet, Laurence Illing Midgley, Alexandre Laterre
- MARS: Meta-learning as score matching in the function space; Krunoslav Lehman Pavasovic, Jonas Rothfuss, Andreas Krause
- Meta-Learning via Classifier(-free) Guidance; Elvis Nava, Seijin Kobayashi, Yifei Yin, Robert K. Katzschmann, Benjamin F Grewe
- Neural Architecture for Online Ensemble Continual Learning; Mateusz Andrzej Wójcik, Witold Kościukiewicz, Adam Gonczarek, Tomasz Jan Kajdanowicz
- Meta-RL for Multi-Agent RL: Learning to Adapt to Evolving Agents; Matthias Gerstgrasser, David C. Parkes
- Lightweight Prompt Learning with General Representation for Rehearsal-free Continual Learning; Hyunhee Chung, Kyung Ho Park
- Uncertainty-Aware Meta-Learning for Multimodal Task Distributions; Cesar Almecija, Apoorva Sharma, Young-Jin Park, Navid Azizan
- Unsupervised Meta-learning via Few-shot Pseudo-supervised Contrastive Learning; Huiwon Jang, Hankook Lee, Jinwoo Shin
- Multi-objective Tree-structured Parzen Estimator Meets Meta-learning; Shuhei Watanabe, Noor Awad, Masaki Onishi, Frank Hutter
- Few-Shot Calibration of Set Predictors via Meta-Learned Cross-Validation-Based Conformal Prediction; Sangwoo Park, Kfir M. Cohen, Osvaldo Simeone
- On the Importance of Architectures and Hyperparameters for Fairness in Face Recognition; Samuel Dooley, Rhea Sanjay Sukthanker, John P Dickerson, Colin White, Frank Hutter, Micah Goldblum
Poster session 2
- HyperSound: Generating Implicit Neural Representations of Audio Signals with Hypernetworks; Filip Szatkowski, Karol J. Piczak, Przemysław Spurek, Jacek Tabor, Tomasz Trzcinski
- Towards Discovering Neural Architectures from Scratch; Simon Schrodi, Danny Stoll, Binxin Ru, Rhea Sanjay Sukthanker, Thomas Brox, Frank Hutter
- The Curse of Low Task Diversity: On the Failure of Transfer Learning to Outperform MAML and Their Empirical Equivalence; Brando Miranda, Patrick Yu, Yu-Xiong Wang, Oluwasanmi O Koyejo
- PriorBand: HyperBand + Human Expert Knowledge; Neeratyoy Mallik, Carl Hvarfner, Danny Stoll, Maciej Janowski, Eddie Bergman, Marius Lindauer, Luigi Nardi, Frank Hutter
- Expanding the Deployment Envelope of Behavior Prediction via Adaptive Meta-Learning; Boris Ivanovic, James Harrison, Marco Pavone
- GraViT-E: Gradient-based Vision Transformer Search with Entangled Weights; Rhea Sanjay Sukthanker, Arjun Krishnakumar, Sharat Patil, Frank Hutter
- Learning to Prioritize Planning Updates in Model-based Reinforcement Learning; Bradley Burega, John D Martin, Michael Bowling
- One-Shot Optimal Design for Gaussian Process Analysis of Randomized Experiments; Jelena Markovic-Voronov, Qing Feng, Eytan Bakshy
- Towards Automated Design of Bayesian Optimization via Exploratory Landscape Analysis; Carolin Benjamins, Anja Jankovic, Elena Raponi, Koen van der Blom, Marius Lindauer, Carola Doerr
- Recommendation for New Drugs with Limited Prescription Data; Zhenbang Wu, Huaxiu Yao, Zhe Su, David Liebovitz, Lucas M Glass, James Zou, Chelsea Finn, Jimeng Sun
- Bayesian Optimization with a Neural Network Meta-learned on Synthetic Data Only; Samuel Müller, Sebastian Pineda Arango, Matthias Feurer, Josif Grabocka, Frank Hutter
- PersA-FL: Personalized Asynchronous Federated Learning; M. Taha Toghani, Soomin Lee, Cesar A Uribe
- AutoRL-Bench 1.0; Gresa Shala, Sebastian Pineda Arango, André Biedenkapp, Frank Hutter, Josif Grabocka
- Gray-Box Gaussian Processes for Automated Reinforcement Learning; Gresa Shala, André Biedenkapp, Frank Hutter, Josif Grabocka
- Transfer NAS with Meta-learned Bayesian Surrogates; Gresa Shala, Thomas Elsken, Frank Hutter, Josif Grabocka
- Optimistic Meta-Gradients; Sebastian Flennerhag, Tom Zahavy, Brendan O’Donoghue, Hado van Hasselt, András György, Satinder Singh
- Achieving a Better Stability-Plasticity Trade-off via Auxiliary Networks in Continual Learning; Sanghwan Kim, Lorenzo Noci, Antonio Orvieto, Thomas Hofmann
- Adversarial Cheap Talk; Chris Lu, Timon Willi, Alistair Letcher, Jakob Nicolaus Foerster
- Efficient Bayesian Learning Curve Extrapolation using Prior-Data Fitted Networks; Steven Adriaensen, Herilalaina Rakotoarison, Samuel Müller, Frank Hutter
- Meta-Learning Makes a Better Multimodal Few-shot Learner; Ivona Najdenkoska, Xiantong Zhen, Marcel Worring
- Test-time adaptation with slot-centric models; Mihir Prabhudesai, Sujoy Paul, Sjoerd van Steenkiste, Mehdi S. M. Sajjadi, Anirudh Goyal, Deepak Pathak, Katerina Fragkiadaki, Gaurav Aggarwal, Thomas Kipf
- LOTUS: Learning to learn with Optimal Transport in Unsupervised Scenarios; Prabhant Singh, Joaquin Vanschoren
Program
Schedule for the virtual day (December 9th)
We will hold two additional virtual poster sessions on the virtual day, shown in the schedule below. Please see the Accepted Papers section above for how the papers are divided into these two sessions. The times below are in GMT+1.
Time | Title |
---|---|
17:00-18:00 | Poster session 1 GatherTown link |
18:00-19:00 | Poster session 2 GatherTown link |
Schedule for the in-person day (December 2nd)
Time | Title |
---|---|
09:00 | Opening Remarks |
09:10 | Invited Talk: Mengye Ren |
09:40 | Invited Talk: Lucas Beyer |
10:10 | Contributed Talk 1: Parameter Efficient Few-shot Transfer Learning |
10:25 | Break |
10:40 | Poster Session 1 |
11:40 | Contributed Talk 2: Optimistic Meta-Gradients |
11:55 | Invited Talk: Elena Gribovskaya |
12:25 | Lunch Break |
14:00 | Invited Talk: Chelsea Finn |
14:30 | Invited Talk: Greg Yang |
15:00 | Contributed Talk 3: The Curse of Low Task Diversity: On the Failure of Transfer Learning to Outperform MAML and Their Empirical Equivalence |
15:15 | Poster Session 2 |
16:15 | Contributed Talk 4: HyperSound: Generating Implicit Neural Representations of Audio Signals with Hypernetworks |
16:30 | Invited Talk: Percy Liang |
17:00 | Discussion Panel |
17:50 | Closing Remarks |
Important Dates
- Submission deadline: October 3rd, 2022, 17:00 CEST
- Author Notification: October 20th, 2022, AoE
- SlidesLive recording deadline: November 10th, 2022, AoE
- Camera-ready paper submission deadline: November 20th, 2022, AoE
- Poster submission deadline: November 25th, 2022, AoE
Formatting
We have provided a modified `.sty` file here that appropriately lists the name of the workshop when `\neuripsfinal` is enabled. Please use this style file in conjunction with the corresponding LaTeX `.tex` template from the NeurIPS website to submit the final camera-ready copy. Both the submission and the camera-ready version can be up to 4 pages (excluding acknowledgements, references, and appendices).
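For reference, a camera-ready preamble might look like the minimal sketch below; the package name `neurips_2022` is an assumption here, so substitute the actual name of the modified `.sty` file provided above:

```latex
% Minimal camera-ready preamble -- a sketch, not the official template.
% The package name "neurips_2022" is an assumption; use the name of the
% modified .sty file provided by the workshop.
\documentclass{article}

% The [final] option enables \neuripsfinal, so the style file prints the
% workshop name instead of the anonymous submission header.
\usepackage[final]{neurips_2022}

\title{Your Camera-Ready Title}
\author{First Author \And Second Author}

\begin{document}
\maketitle
% Main text: up to 4 pages, excluding acknowledgements, references,
% and appendices.
\end{document}
```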
Publication
Accepted papers and supplementary material will be made available on the workshop website. However, these do not constitute archival publications, and no formal workshop proceedings will be published; contributors therefore remain free to publish their work in archival journals or conferences.
FAQ
- Can supplementary material be added beyond the 4-page limit for submissions, and are there any restrictions on it?
Yes, you may include additional supplementary material, but you should ensure that the main paper is self-contained, since looking at supplementary material is at the discretion of the reviewers. The supplementary material should also follow the same NeurIPS format as the paper.
- Are references included in the 4-page limit?
No, references will not count towards the page limit.
- Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?
We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.
- Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?
We won’t be accepting such submissions unless they have been adapted to contain significantly new results (where novelty is one of the qualities reviewers will be asked to evaluate).
- Can a paper be submitted to the workshop that is currently under review or will be under review at a conference during the review phase?
From our side, it is perfectly fine to submit a condensed version of a parallel conference submission if it is also fine for the conference in question. Our workshop does not have archival proceedings, and therefore parallel submissions of extended versions to other conferences are acceptable.
Review Process
tl;dr: The review process will be double-blind. Please sign up to be, or recommend, a reviewer via https://forms.gle/EW4icbYv5uA8A13KA.
Important Dates
- Assignments of reviewers to papers (start of review phase): October 4th, 2022
- Final reviewing deadline: October 18th, 2022, AoE
Reviewing Guidelines
We encourage all reviewers to check our reviewing guidelines.
Program Committee
We thank the program committee (senior and junior reviewers) for shaping the excellent technical program; they are (in alphabetical order):
Aaron Klein,
Abhishek Gupta,
Alexander Tornede,
Ana Carolina Lorena,
Andre Carlos Ponce de Leon Ferreira De Carvalho,
Andrei Alex Rusu,
Ang Li,
Aniruddh Raghu,
Ashvin Nair,
Benjamin Eysenbach,
Bingjun Li,
Boris Knyazev,
Bradly C. Stadie,
Chunhui Zhang,
Cuong Quoc Nguyen,
Da Kuang,
Daniel Hernández-Lobato,
Eleni Triantafillou,
Erin Grant,
Haoyu Wang,
Haozhu Wang,
Huaxiu Yao,
Ishita Dasgupta,
Jake Snell,
Jasmin Bogatinovski,
Jiajun Wu,
Jiani Huang,
Jiaqi Wang,
John Willes,
Kate Rakelly,
Lars Kotthoff,
Lazar Atanackovic,
Li Zhong,
Lin Qiu,
Louis Kirsch,
Marc Pickett,
Massimiliano Patacchiola,
Matthias Feurer,
Maximilian Igl,
Mehrtash Harandi,
Mengye Ren,
Micah Goldblum,
Mihai Suteu,
Mikhail Mekhedkin Meskhi,
Minxue Jia,
Muchao Ye,
M. Taha Toghani,
Ondrej Bohdal,
Parminder Bhatia,
Parsa Mahmoudieh,
Philip Fradkin,
Piotr W Mirowski,
Praneet Dutta,
Quentin Bouniot,
Randal S. Olson,
Sharare Zehtabian,
Shengpu Tang,
Shibo Li,
Shixun Wu,
Sihong He,
Sreejan Kumar,
Sungryull Sohn,
Thomas Elsken,
Tian Xia,
Tingfeng Li,
Udayan Khurana,
Weihao Song,
Weiran Lin,
Xueying Ding,
Yao Su,
Yawen Wu,
Yihao Xue,
Ying Wei,
Yue Tan,
Yuhong Li,
Yuhui Zhang,
Yuxin Tang,
Yu-Xiong Wang,
Zhenmei Shi,
Zhepeng Wang and
Zuhui Wang.
Past Workshops
Workshop on Meta-Learning (MetaLearn 2021) @ NeurIPS 2021
Workshop on Meta-Learning (MetaLearn 2020) @ NeurIPS 2020
Workshop on Meta-Learning (MetaLearn 2019) @ NeurIPS 2019
Workshop on Meta-Learning (MetaLearn 2018) @ NeurIPS 2018
Workshop on Meta-Learning (MetaLearn 2017) @ NeurIPS 2017
Contacts
For any further questions, you can contact us at metalearn2022@googlegroups.com.