Recent years have seen rapid progress in meta-learning methods, which use data to learn and optimize learning algorithms, generate new learning methods from scratch, and learn to transfer knowledge across tasks and domains. Meta-learning can be seen as the logical conclusion of the arc that machine learning has followed over the last decade: from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations and classifiers. The ability to improve one's own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience.

Meta-learning methods are also of substantial practical interest: they have been shown to yield new state-of-the-art automated machine learning (AutoML) methods, novel deep learning architectures, and substantially improved one-shot learning systems.

The goal of this workshop is to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning, and to address some of the field's fundamental open questions. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning, as well as possible solutions.

In terms of prospective participants, our main targets are machine learning researchers interested in understanding and improving current meta-learning algorithms. Specific target communities within machine learning include, but are not limited to: meta-learning, AutoML, reinforcement learning, deep learning, optimization, evolutionary computation, and Bayesian optimization. Our invited speakers also include researchers who study human learning, to provide attendees with a broader perspective.

Invited Speakers

Erin Grant, Jeff Clune, Pieter Abbeel, David Abel, Raia Hadsell, and Brenden Lake (see the schedule below for talk times).

Submit challenge questions for the speakers here.

Spotlights

Morning Session

Afternoon Session

Organizers

Important Dates

Schedule

09:00 Introduction and opening remarks
09:10 Erin Grant
09:40 Jeff Clune
10:10 Coffee & posters
10:30 Poster spotlights 1
10:50 Posters
11:30 Pieter Abbeel
12:00 Discussion 1
12:30 Lunch break
14:00 David Abel
14:30 Raia Hadsell
15:00 Poster spotlights 2
15:20 Coffee & posters
16:30 Sebastian Flennerhag: Meta-Learning with Warped Gradient Descent
16:45 Jessica Lee: MetaPix: Few-Shot Video Retargeting
17:00 Brenden Lake
17:30 Discussion 2
17:50 End

FAQ

  1. Can supplementary material be added beyond the 4-page limit and are there any restrictions on it?

    Yes, you may include additional supplementary material, but you should ensure that the main paper is self-contained, since looking at supplementary material is at the discretion of the reviewers. The supplementary material should also follow the same NeurIPS format as the paper and be limited to a reasonable amount (max 10 pages in addition to the main submission).

  2. Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?

    We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.

  3. If a submission is accepted, will all authors of the accepted paper be given a chance to register?

    We cannot confirm this yet, but it is most likely that we will have at most one registration to offer per accepted paper.

  4. Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?

    We won’t be accepting such submissions unless they have been adapted to contain significantly new results (where novelty is one of the qualities reviewers will be asked to evaluate).

  5. Can a paper be submitted to the workshop that is currently under review or will be under review at a conference during the review phase?

    MetaLearn submissions are limited to 4 pages, i.e., much shorter than standard conference submissions. From our side it is perfectly fine to submit a condensed version of a parallel conference submission, provided this is also fine for the conference in question. Our workshop does not have archival proceedings, so parallel submission of extended versions to other conferences is acceptable.

  6. Are there any instructions for poster formatting or for the camera-ready?

Accepted Papers

Program Committee

We thank the program committee for shaping the excellent technical program (in alphabetical order):

Aaron Klein, Abhishek Gupta, Alexander Toshev, Alexandre Galashov, Andre Carvalho, Andrei A. Rusu, Ang Li, Ashvin V. Nair, Avi Singh, Aviral Kumar, Ben Eysenbach, Benjamin Letham, Bradly C. Stadie, Brandon Schoenfeld, Brian Cheung, Carlos Soares, Daniel Hernandez, Deirdre Quillen, Devendra Singh, Dumitru Erhan, Dushyant Rao, Eleni Triantafillou, Erin Grant, Esteban Real, Eytan Bakshy, Frank Hutter, Haoran Tang, Hugo Jair Escalante, Igor Mordatch, Jakub Sygnowski, Jan Humplik, Jan N. van Rijn, Jan Hendrik Metzen, Jiajun Wu, Jonas Rothfuss, Jonathan Schwarz, Jürgen Schmidhuber, Kate Rakelly, Katharina Eggensperger, Kevin Swersky, Kyle Hsu, Lars Kotthoff, Leonard Hasenclever, Lerrel Pinto, Luisa Zintgraf, Marc Pickett, Marta Garnelo, Marvin Zhang, Matthias Seeger, Maximilian Igl, Misha Denil, Parminder Bhatia, Parsa Mahmoudieh, Pavel Brazdil, Pieter Gijsbers, Piotr Mirowski, Rachit Dubey, Rafael Gomes, Razvan Pascanu, Ricardo B. Prudencio, Roger B. Grosse, Rowan McAllister, Sayna Ebrahimi, Sebastien Racaniere, Sergio Escalera, Siddharth Reddy, Stephen Roberts, Sungryull Sohn, Surya Bhupatiraju, Thomas Elsken, Tin K. Ho, Udayan Khurana, Vincent Dumoulin, Vitchyr H. Pong, Zeyu Zheng

Past Workshops

Workshop on Meta-Learning (MetaLearn 2017) @ NeurIPS 2017

Workshop on Meta-Learning (MetaLearn 2018) @ NeurIPS 2018

Sponsors

We are grateful for the support of our sponsors, which enabled us to offer travel grants to several participants.

Facebook, Amazon, DeepMind

Contacts

For any further questions, you can contact us at info@metalearning.ml.