Live participation

The 2020 Workshop on Meta-Learning will be a series of streamed pre-recorded talks with live question-and-answer (Q&A) periods, plus poster sessions on Gather.Town. You can participate by:

News & Updates

Workshop Overview

Abstract

Recent years have seen rapid progress in meta-learning methods, which transfer knowledge across tasks and domains to learn new tasks more efficiently, optimize the learning process itself, and even generate new learning methods from scratch. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers and policies over hand-crafted features, to learning representations over which classifiers and policies operate, and finally to learning algorithms that themselves acquire representations, classifiers, and policies.

Meta-learning methods are of substantial practical interest. For instance, they have been shown to yield new state-of-the-art automated machine learning algorithms and architectures, and have substantially improved few-shot learning systems. Moreover, the ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in cognitive science and reward learning in neuroscience.

Some of the fundamental questions that this workshop aims to address are:

In this edition of the meta-learning workshop, we want to stimulate discussion on several key underlying and unsolved questions, particularly:

This workshop aims to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning as well as possible solutions.

In terms of prospective participants, our main targets are machine learning researchers interested in understanding and improving current meta-learning algorithms. Specific target communities within machine learning include, but are not limited to, meta-learning, AutoML, reinforcement learning, deep learning, optimization, evolutionary computation, and Bayesian optimization. We also invite submissions from researchers who study human learning and neuroscience, to provide a broad perspective to the attendees.

Virtual Format

In organizing this workshop in a virtual format, we understand that accessibility across time zones is a significant challenge for a virtual meeting. We will thus offer a mix of synchronous and asynchronous formats that allow participation despite this complication. In designing this format, we took inspiration from best practices that emerged from past conferences.

Specifically, we will require both invited speakers and authors of accepted papers to make pre-recorded videos available in advance, so that registered participants can engage with the recordings at any time. These recordings will be accompanied by live offerings such as a panel session and Q&A periods for invited talks and spotlights, where participants can submit and vote on questions on a Q&A platform both in advance of and during the livestream. One of the organizers will serve as moderator for each live session.

For our virtual poster sessions, we plan to use a tool that enables audience participation, such as Gather.Town. Authors may present in one or more of three sessions held several hours apart, so that they can choose the times that best suit their local time zone.

Finally, we will use a dedicated channel on a chat service throughout the event as a means of interaction between workshop participants. The schedule includes three poster sessions spread throughout the day, so that virtual attendees in all time zones will be able to participate in at least one.

More detailed instructions will be given closer to the workshop date.

Invited Speakers

Organizers

Important Dates

Program

Schedule

The workshop schedule runs from 11 AM to 8 PM UTC; please see this converter for conversion to your specific time zone.

Beijing (CST) Berlin (CET) London (UTC) New York (EST) Vancouver (PST) Event
19:00 12:00 11:00 06:00 03:00 Introduction and opening remarks
19:10 12:10 11:10 06:10 03:10 Invited talk 1: Frank Hutter, “Meta-learning neural architectures, initial weights, hyperparameters, and algorithm components”. Q&A
19:40 12:40 11:40 06:40 03:40 Contributed talk 1: Steinar Laenen, “On episodes, Prototypical Networks, and few-shot learning”
20:00 13:00 12:00 07:00 04:00 Poster session 1
21:00 14:00 13:00 08:00 05:00 Invited talk 2: Luisa Zintgraf, “Exploration in meta-reinforcement learning”. Q&A
21:30 14:30 13:30 08:30 05:30 Invited talk 3: Timothy Hospedales, “Meta-learning: Representations and objectives”. Q&A
22:00 15:00 14:00 09:00 06:00 Break
23:00 16:00 15:00 10:00 07:00 Poster session 2
24:00 17:00 16:00 11:00 08:00 Invited talk 4: Louis Kirsch, “General meta-learning”. Q&A
24:30 17:30 16:30 11:30 08:30 Invited talk 5: Fei-Fei Li & Kuan Fang, “Creating diverse tasks to catalyze robot learning”. Q&A
01:00 18:00 17:00 12:00 09:00 Poster session 3
02:00 19:00 18:00 13:00 10:00 Invited talk 6: Kate Rakelly, “An inference perspective on meta-reinforcement learning”. Q&A
02:30 19:30 18:30 13:30 10:30 Contributed talk 2: Niru Maheswaranathan, “Understanding the dynamics of learned optimizers”
02:45 19:45 18:45 13:45 10:45 Contributed talk 3: Louis Tiao, “Bayesian optimization by density ratio estimation”
03:00 20:00 19:00 14:00 11:00 Panel Discussion (Ask questions here)
04:00 21:00 20:00 15:00 12:00 End

Poster sessions

To make the posters easier to find, posters in the Gather.Town rooms are marked with increasing numbers; the lists below are ordered by each poster's position in the room. Paid NeurIPS.cc registration is required to access the Gather.Town poster rooms.

Poster session 1

  1. “Few-Shot Unsupervised Continual Learning through Meta-Examples”
  2. “Similarity of classification tasks”
  3. “On Episodes, Prototypical Networks, and Few-Shot Learning”
  4. “MPLP: Learning a Message Passing Learning Protocol”
  5. “MAster of PuPpets: Model-Agnostic Meta-Learning via Pre-trained Parameters for Natural Language Generation”
  6. “Uniform Priors for Meta-Learning”
  7. “Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms”
  8. “A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings”
  9. “Meta-Learning Backpropagation And Improving It”
  10. “MobileDets: Searching for Object Detection Architecture for Mobile Accelerators”
  11. “Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search”
  12. “NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search”
  13. “Learning to Generate Noise for Multi-Attack Robustness”
  14. “Prior-guided Bayesian Optimization”
  15. “Task Similarity Aware Meta Learning: Theory-inspired Improvement on MAML”
  16. “HyperVAE: Variational Hyper-Encoding Network”
  17. “Towards Meta-Algorithm Selection”
  18. “Meta-Learning via Hypernetworks”
  19. “Flexible Dataset Distillation: Learn Labels Instead of Images”
  20. “Contextual HyperNetworks for Novel Feature Adaptation”
  21. “Bayesian Optimization by Density Ratio Estimation”

Poster session 2

  1. “Continual learning with direction-constrained optimization”
  2. “Pareto-efficient Acquisition Functions for Cost-Aware Bayesian Optimization”
  3. “Hyperparameter Transfer Across Developer Adjustments”
  4. “Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory”
  5. “Defining Benchmarks for Continual Few-Shot Learning”
  6. “Few-shot Sequence Learning with Transformers”
  7. “Learning not to learn: Nature versus nurture in silico”
  8. “Towards Meta-Algorithm Selection”
  9. “Open-Set Incremental Learning via Bayesian Prototypical Embeddings”
  10. “Data Augmentation for Meta-Learning”
  11. “Flexible Few-Shot Learning of Contextual Similarity”
  12. “Contextual HyperNetworks for Novel Feature Adaptation”
  13. “Bayesian Optimization by Density Ratio Estimation”
  14. “Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time”
  15. “Model-Agnostic Graph Regularization for Few-Shot Learning”
  16. “Is Support Set Diversity Necessary for Meta-Learning?”
  17. “Few-Shot Unsupervised Continual Learning through Meta-Examples”
  18. “MAster of PuPpets: Model-Agnostic Meta-Learning via Pre-trained Parameters for Natural Language Generation”
  19. “Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms”
  20. “Meta-Learning Backpropagation And Improving It”
  21. “Meta-Learning via Hypernetworks”
  22. “Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search”
  23. “Continual Model-Based Reinforcement Learning with Hypernetworks”
  24. “Learning to Generate Noise for Multi-Attack Robustness”
  25. “Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training”
  26. “Prior-guided Bayesian Optimization”
  27. “Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads”
  28. “Measuring few-shot extrapolation with program induction”

Poster session 3

  1. “Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment”
  2. “Decoupling Exploration and Exploitation in Meta-Reinforcement Learning without Sacrifices”
  3. “Task Meta-Transfer from Limited Parallel Labels”
  4. “Multi-Objective Multi-Fidelity Hyperparameter Optimization with Application to Fairness”
  5. “Meta-Learning of Compositional Task Distributions in Humans and Machines”
  6. “Reverse engineering learned optimizers reveals known and novel mechanisms”
  7. “Meta-Learning via Hypernetworks”
  8. “Flexible Dataset Distillation: Learn Labels Instead of Images”
  9. “Meta-Learning Initializations for Image Segmentation”
  10. “Prototypical Region Proposal Networks for Few-shot Localization and Classification”
  11. “Continual Model-Based Reinforcement Learning with Hypernetworks”
  12. “Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training”
  13. “How Important is the Train-Validation Split in Meta-Learning?”
  14. “Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift”
  15. “Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads”
  16. “Measuring few-shot extrapolation with program induction”
  17. “Few-Shot Unsupervised Continual Learning through Meta-Examples”
  18. “Pareto-efficient Acquisition Functions for Cost-Aware Bayesian Optimization”
  19. “Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory”
  20. “Defining Benchmarks for Continual Few-Shot Learning”
  21. “Uniform Priors for Meta-Learning”
  22. “Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms”
  23. “A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings”
  24. “Learning not to learn: Nature versus nurture in silico”
  25. “Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search”
  26. “Bayesian Optimization by Density Ratio Estimation”
  27. “Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time”
  28. “Is Support Set Diversity Necessary for Meta-Learning?”
  29. “Meta-Learning Backpropagation And Improving It”
  30. “Prior-guided Bayesian Optimization”
  31. “Training more effective learned optimizers”

Accepted Papers

Submission Instructions

Formatting

The submission window for this workshop is now closed. Decision notifications were sent out October 30th, 2020. Thank you to all who submitted!

We have provided a modified .sty file here that appropriately lists the name of the workshop when \neuripsfinal is enabled. Please use this style file in conjunction with the corresponding LaTeX .tex template from the NeurIPS website to submit your final camera-ready copy. The camera-ready version can be up to 8 pages.
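As a rough illustration (not an official template), a camera-ready preamble might look like the sketch below. It assumes the workshop-modified style file has been saved as neurips_2020.sty next to your .tex file (substitute whatever filename the provided .sty actually uses) and relies on the standard NeurIPS convention that the "final" package option enables \neuripsfinal.

  % Minimal camera-ready skeleton (illustrative sketch only; rename
  % "neurips_2020" below to match the actual style file you downloaded).
  \documentclass{article}
  \usepackage[final]{neurips_2020}  % 'final' sets \neuripsfinal, so the workshop name is printed

  \title{Your Camera-Ready Title}
  \author{Author Name \\ Affiliation \\ \texttt{email@example.com}}

  \begin{document}
  \maketitle

  \begin{abstract}
    One-paragraph abstract of the contribution.
  \end{abstract}

  % Main text (up to 8 pages for the camera-ready), followed by references.

  \end{document}

Compiling this with the modified style should then list the workshop name in place of the usual NeurIPS conference footer, as described above.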

Accepted papers and supplementary material will be made available on the workshop website. However, these do not constitute archival publications and no formal workshop proceedings will be made available, meaning contributors are free to publish their work in archival journals or conferences.

FAQ

  1. Can supplementary material be added beyond the 6-page limit for submissions, and are there any restrictions on it?

    Yes, you may include additional supplementary material, but you should ensure that the main paper is self-contained, since looking at supplementary material is at the discretion of the reviewers. The supplementary material should also follow the same NeurIPS format as the paper and be limited to a reasonable amount (max 10 pages in addition to the main submission).

  2. Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?

    We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.

  3. Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?

    We won’t be accepting such submissions unless they have been adapted to contain significantly new results (where novelty is one of the qualities reviewers will be asked to evaluate).

  4. Can a paper be submitted to the workshop that is currently under review or will be under review at a conference during the review phase?

    MetaLearn 2020 submissions are 6 pages, i.e., much shorter than standard conference submissions. From our side, it is perfectly fine to submit a condensed version of a parallel conference submission, provided this is also fine for the conference in question. Our workshop does not have archival proceedings, so parallel submission of extended versions to other conferences is acceptable.

  5. Is there a page limit constraint for the camera-ready paper?

    The page limit for the camera-ready will be 8 pages for all accepted contributions, so that authors have space to address comments from the program committee.

Review Process

Reviewing Mentorship

This year we are trialing a new reviewer mentorship scheme, which we hope will improve the future pool of expert reviewers in machine learning. Junior reviewers will provide reviews in a guided setting and will be paired with senior reviewers who will give them feedback and advice throughout the reviewing process. We hope this will be a useful learning experience for junior reviewers and will improve the overall quality of reviewing.

To make this feasible, authors of all submissions will be asked to provide two contacts who have agreed to review for the workshop. These volunteers can, of course, be authors of the submission, or people who have agreed to review on behalf of the authors. Depending on their reviewing experience, these contacts will be assigned either a junior or a senior reviewer role. All submissions are guaranteed at least one senior reviewer, since we will still be directly recruiting reviewers as in previous years.

Sign-ups for junior and senior reviewers (for yourself or somebody you would like to recommend, via this form) closed on 2 October 2020; reviews are due on 23 October 2020. For more detailed information on the review process, please see this document.

Program Committee

We thank the program committee for shaping the excellent technical program; they are (in alphabetical order):

Badr AlKhamissi, Alessia Bertugli, Homanga Bharadhwaj, Parminder Bhatia, Surya Bhupatiraju, Jasmin Bogatinovski, Ondrej Bohdal, Quentin Bouniot, Pavel Brazdil, Andrew Brock, Davide Buffelli, Andre Carvalho, Michael Chang, Marco Ciccone, Ignasi Clavera, Ishita Dasgupta, Nikita Dhawan, Rachit Dubey, Praneet Dutta, Thomas Elsken, Dumitru Erhan, Sergio Escalera, Ben Eysenbach, Matthias Feurer, Alexandre Galashov, Rafael Gomes Mantovani, Gauthier Guinet, Abhishek Gupta, Mehrtash Harandi, Leonard Hasenclever, Sean Hendryx, Daniel Hernandez-Lobato, Tin Ho, Kyle Hsu, Yizhou Huang, Frank Hutter, Maximilian Igl, Yiren Jian, Xiang Jiang, Martin Josifoski, Udayan Khurana, Louis Kirsch, Aaron Klein, Lars Kotthoff, Aviral Kumar, Sreejan Kumar, Nicholas Kuo, Angus Lamb, Robert Lange, Hung-yi Lee, Benjamin Letham, Ang Li, Rui Li, Chien-Fu Lin, Marius Lindauer, Evan Liu, Javier Lopez-Contreras, Ana Lorena, Divyam Madaan, Parsa Mahmoudieh, Mikhail Mekhedkin-Meskhi, Piotr Mirowski, Eric Mitchell, Igor Mordatch, Ashvin Nair, Cuong Nguyen, Renkun Ni, Eyvind Niklasson, Mateusz Ochal, Randal Olson, Razvan Pascanu, Massimiliano Patacchiola, Valerio Perrone, Marc Pickett, Vitchyr Pong, Paul Pu Liang, Damir Pulatov, Aniruddh Raghu, Kate Rakelly, Ettore Randazzo, Dushyant Rao, Hootan Rashtian, Ievgen Redko, Mengye Ren, Stephen Roberts, Karsten Roth, Jonas Rothfuss, Andrei Rusu, Horst Samulowitz, Evgeny Saveliev, Alan Savushkin, Robin Schmucker, Brandon Schoenfeld, Ethan Shen, Julien Siems, Devendra Singh Chaplot, Samarth Sinha, Elliott Skomski, Jake Snell, Sungryull Sohn, Artur Souza, Aravind Srinivas, Nishan Srishankar, Bradly Stadie, Valdimar Steinar Ericsson Laenen, Mihai Suteu, Kevin Swersky, Jakub Sygnowski, Yunfei Teng, Louis Tiao, Alexander Tornede, Eleni Triantafillou, Stefano Vincenzi, Johannes von Oswald, Jianan Wang, Haozhu Wang, Orion Weller, John Willes, Jiajun Wu, Kelvin Xu, Sharare Zehtabian, Marvin Zhang, Lucas Zimmer, Luisa Zintgraf, Liu Ziyin

Past Workshops

Workshop on Meta-Learning (MetaLearn 2017) @ NeurIPS 2017

Workshop on Meta-Learning (MetaLearn 2018) @ NeurIPS 2018

Workshop on Meta-Learning (MetaLearn 2019) @ NeurIPS 2019

Contacts

For any further questions, you can contact us at metalearn2020@googlegroups.com.