Live participation
The 2020 Workshop on Meta-Learning will consist of streamed pre-recorded talks with live question-and-answer (Q&A) periods, plus poster sessions on Gather.Town. You can participate by:
- Accessing the livestream on our NeurIPS.cc virtual workshop page!
- Joining the Zoom to message questions to the moderator during the Q&As and panel discussion, also from the NeurIPS.cc virtual workshop page.
- Joining the poster sessions on Gather.Town (the list of papers and their placement for each session is given in the Poster Sessions section below): Session 1; Session 2; Session 3.
- Watching the poster videos on the NeurIPS.cc virtual workshop page.
- Chatting with us and other participants on the MetaLearn 2020 Rocket.Chat!
- Asking questions for the panelists ahead of the panel discussion on sli.do!
News & Updates
- Dec. 11, 2020: Papers, supplementary materials, and posters have been added to the accepted papers section!
- Dec. 11, 2020: Workshop day! Please check the “Live participation” section above for participation instructions.
- Nov. 16, 2020: NeurIPS registration funding from the workshop for presenters and junior reviewers has been distributed. The NeurIPS conference also has a funding program here.
- Nov. 14, 2020: The workshop schedule has been finalized! You can also find it on NeurIPS.cc.
- Oct. 30, 2020: Decisions have been sent out. Thank you to everyone who submitted or participated in the reviewing process!
- Oct. 4, 2020: The submission portal has now closed and the reviewing process has begun.
- Sep. 15, 2020: Reviewing mentorship sign-up & recommendation form released!
- Sep. 2, 2020: CMT open for submissions!
Workshop Overview
Abstract
Recent years have seen rapid progress in meta-learning methods, which transfer knowledge across tasks and domains to learn new tasks more efficiently, optimize the learning process itself, and even generate new learning methods from scratch. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers and policies over hand-crafted features, to learning representations over which classifiers and policies operate, and finally to learning algorithms that themselves acquire representations, classifiers, and policies.
Meta-learning methods are of substantial practical interest. For instance, they have been shown to yield new state-of-the-art automated machine learning algorithms and architectures, and have substantially improved few-shot learning systems. Moreover, the ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in cognitive science and reward learning in neuroscience.
Some of the fundamental questions that this workshop aims to address are:
- How can we exploit our domain knowledge to effectively guide the meta-learning process?
- What are the meta-learning processes in nature (e.g., in humans), and how can we take inspiration from them?
- Which machine learning approaches are best suited for meta-learning, in which circumstances, and why?
- What principles can we learn from meta-learning to help us design the next generation of learning systems?
- How can we design more sample-efficient meta-learning methods?
In this edition of the meta-learning workshop, we want to stimulate discussion on several key underlying and unsolved questions, particularly:
- Task distributions: What constitutes a distribution of tasks, domains, or problems? What does it mean to have learned this distribution, and to generalize outside of this distribution?
- Transfer/continual/lifelong learning: What is the relationship between meta-learning and transfer/continual or lifelong learning? How do the notions of “meta-train” and “meta-test” datasets map to continual or lifelong learning?
- Inductive biases: What is the role of inductive biases, be they algorithmic or architectural? How do they manifest in evolution, neural architecture search, etc.? What kinds of useful inductive biases can we learn from neuroscience or cognitive science?
This workshop aims to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning as well as possible solutions.
In terms of prospective participants, our main targets are machine learning researchers interested in understanding and improving current meta-learning algorithms. Specific target communities within machine learning include, but are not limited to, meta-learning, AutoML, reinforcement learning, deep learning, optimization, evolutionary computation, and Bayesian optimization. We also invite submissions from researchers who study human learning and neuroscience, to provide a broad perspective to the attendees.
Virtual Format
We understand that accessibility across time zones is a significant challenge for a virtual meeting. We will therefore offer a mix of synchronous and asynchronous formats that allow participation despite this constraint, taking inspiration from best practices that emerged at past virtual conferences.
Specifically, we will require both invited speakers and authors of accepted papers to make pre-recorded videos available in advance, allowing registered participants to engage with the recordings at any time. These recordings will be complemented by live offerings, such as a panel session and Q&A periods for invited talks and spotlights, during which participants can submit and vote on questions on a Q&A platform, both in advance of and during the livestream. One of the organizers will moderate each live session.
For our virtual poster sessions, we plan to use a tool that enables audience participation, such as Gather.Town. Three poster sessions will be held several hours apart, and authors may present in one or more of them, choosing the most suitable time in their local time zone; this also ensures that virtual attendees in all time zones can participate in at least one session.
Finally, a dedicated channel on a chat service will be available throughout the event as a means of interaction between workshop participants.
More detailed instructions will be given closer to the workshop date.
Invited Speakers
- Timothy Hospedales (University of Edinburgh)
- Frank Hutter (University of Freiburg)
- Louis Kirsch (IDSIA)
- Fei-Fei Li & Kuan Fang (Stanford University)
- Kate Rakelly (UC Berkeley)
- Luisa Zintgraf (University of Oxford)
Organizers
- Roberto Calandra (Facebook AI Research)
- Jeff Clune (OpenAI)
- Erin Grant (UC Berkeley)
- Jonathan Schwarz (University College London, DeepMind)
- Joaquin Vanschoren (Eindhoven University of Technology)
- Francesco Visin (DeepMind)
- Jane Wang (DeepMind)
Important Dates
- Submission deadline: 4 October 2020, 11:59 PM AoE - Extended!
- Notification: 30 October 2020, by 06:00 PM PDT
- Video recording to SlidesLive: 14 November 2020, 11:59 PM PST
- Camera-ready submission (paper + poster) to CMT: 27 November 2020, 11:59 PM AoE
- Workshop: 11 December 2020
Program
Schedule
The workshop schedule runs from 11 AM to 8 PM UTC; please see this converter for conversion to your specific time zone.
Beijing (CST) | Berlin (CET) | London (UTC) | New York (EST) | Vancouver (PST) | Event |
---|---|---|---|---|---|
19:00 | 12:00 | 11:00 | 06:00 | 03:00 | Introduction and opening remarks |
19:10 | 12:10 | 11:10 | 06:10 | 03:10 | Invited talk 1: Frank Hutter, “Meta-learning neural architectures, initial weights, hyperparameters, and algorithm components”. Q&A |
19:40 | 12:40 | 11:40 | 06:40 | 03:40 | Contributed talk 1: Steinar Laenen, “On episodes, Prototypical Networks, and few-shot learning” |
20:00 | 13:00 | 12:00 | 07:00 | 04:00 | Poster session 1 |
21:00 | 14:00 | 13:00 | 08:00 | 05:00 | Invited talk 2: Luisa Zintgraf, “Exploration in meta-reinforcement learning”. Q&A |
21:30 | 14:30 | 13:30 | 08:30 | 05:30 | Invited talk 3: Timothy Hospedales, “Meta-learning: Representations and objectives”. Q&A |
22:00 | 15:00 | 14:00 | 09:00 | 06:00 | Break |
23:00 | 16:00 | 15:00 | 10:00 | 07:00 | Poster session 2 |
24:00 | 17:00 | 16:00 | 11:00 | 08:00 | Invited talk 4: Louis Kirsch, “General meta-learning”. Q&A |
24:30 | 17:30 | 16:30 | 11:30 | 08:30 | Invited talk 5: Fei-Fei Li & Kuan Fang, “Creating diverse tasks to catalyze robot learning”. Q&A |
01:00 | 18:00 | 17:00 | 12:00 | 09:00 | Poster session 3 |
02:00 | 19:00 | 18:00 | 13:00 | 10:00 | Invited talk 6: Kate Rakelly, “An inference perspective on meta-reinforcement learning”. Q&A |
02:30 | 19:30 | 18:30 | 13:30 | 10:30 | Contributed talk 2: Niru Maheswaranathan, “Understanding the dynamics of learned optimizers” |
02:45 | 19:45 | 18:45 | 13:45 | 10:45 | Contributed talk 3: Louis Tiao, “Bayesian optimization by density ratio estimation” |
03:00 | 20:00 | 19:00 | 14:00 | 11:00 | Panel Discussion (Ask questions here) |
04:00 | 21:00 | 20:00 | 15:00 | 12:00 | End |
Poster Sessions
To make it easier to find the posters, the posters in each Gather.Town room are numbered, and the list below for each session is ordered by position in the room. A paid NeurIPS.cc registration is required to access the Gather.Town poster rooms.
Poster Session 1
- “Few-Shot Unsupervised Continual Learning through Meta-Examples”
- “Similarity of classification tasks”
- “On Episodes, Prototypical Networks, and Few-Shot Learning”
- “MPLP: Learning a Message Passing Learning Protocol”
- “MAster of PuPpets: Model-Agnostic Meta-Learning via Pre-trained Parameters for Natural Language Generation”
- “Uniform Priors for Meta-Learning”
- “Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms”
- “A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings”
- “Meta-Learning Backpropagation And Improving It”
- “MobileDets: Searching for Object Detection Architecture for Mobile Accelerators”
- “Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search”
- “NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search”
- “Learning to Generate Noise for Multi-Attack Robustness”
- “Prior-guided Bayesian Optimization”
- “Task Similarity Aware Meta Learning: Theory-inspired Improvement on MAML”
- “HyperVAE: Variational Hyper-Encoding Network”
- “Towards Meta-Algorithm Selection”
- “Meta-Learning via Hypernetworks”
- “Flexible Dataset Distillation: Learn Labels Instead of Images”
- “Contextual HyperNetworks for Novel Feature Adaptation”
- “Bayesian Optimization by Density Ratio Estimation”
Poster Session 2
- “Continual learning with direction-constrained optimization”
- “Pareto-efficient Acquisition Functions for Cost-Aware Bayesian Optimization”
- “Hyperparameter Transfer Across Developer Adjustments”
- “Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory”
- “Defining Benchmarks for Continual Few-Shot Learning”
- “Few-shot Sequence Learning with Transformers”
- “Learning not to learn: Nature versus nurture in silico”
- “Towards Meta-Algorithm Selection”
- “Open-Set Incremental Learning via Bayesian Prototypical Embeddings”
- “Data Augmentation for Meta-Learning”
- “Flexible Few-Shot Learning of Contextual Similarity”
- “Contextual HyperNetworks for Novel Feature Adaptation”
- “Bayesian Optimization by Density Ratio Estimation”
- “Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time”
- “Model-Agnostic Graph Regularization for Few-Shot Learning”
- “Is Support Set Diversity Necessary for Meta-Learning?”
- “Few-Shot Unsupervised Continual Learning through Meta-Examples”
- “MAster of PuPpets: Model-Agnostic Meta-Learning via Pre-trained Parameters for Natural Language Generation”
- “Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms”
- “Meta-Learning Backpropagation And Improving It”
- “Meta-Learning via Hypernetworks”
- “Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search”
- “Continual Model-Based Reinforcement Learning with Hypernetworks”
- “Learning to Generate Noise for Multi-Attack Robustness”
- “Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training”
- “Prior-guided Bayesian Optimization”
- “Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads”
- “Measuring few-shot extrapolation with program induction”
Poster Session 3
- “Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment”
- “Decoupling Exploration and Exploitation in Meta-Reinforcement Learning without Sacrifices”
- “Task Meta-Transfer from Limited Parallel Labels”
- “Multi-Objective Multi-Fidelity Hyperparameter Optimization with Application to Fairness”
- “Meta-Learning of Compositional Task Distributions in Humans and Machines”
- “Reverse engineering learned optimizers reveals known and novel mechanisms”
- “Meta-Learning via Hypernetworks”
- “Flexible Dataset Distillation: Learn Labels Instead of Images”
- “Meta-Learning Initializations for Image Segmentation”
- “Prototypical Region Proposal Networks for Few-shot Localization and Classification”
- “Continual Model-Based Reinforcement Learning with Hypernetworks”
- “Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training”
- “How Important is the Train-Validation Split in Meta-Learning?”
- “Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift”
- “Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads”
- “Measuring few-shot extrapolation with program induction”
- “Few-Shot Unsupervised Continual Learning through Meta-Examples”
- “Pareto-efficient Acquisition Functions for Cost-Aware Bayesian Optimization”
- “Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory”
- “Defining Benchmarks for Continual Few-Shot Learning”
- “Uniform Priors for Meta-Learning”
- “Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms”
- “A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings”
- “Learning not to learn: Nature versus nurture in silico”
- “Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search”
- “Bayesian Optimization by Density Ratio Estimation”
- “Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time”
- “Is Support Set Diversity Necessary for Meta-Learning?”
- “Meta-Learning Backpropagation And Improving It”
- “Prior-guided Bayesian Optimization”
- “Training more effective learned optimizers”
Accepted Papers
- Reverse engineering learned optimizers reveals known and novel mechanisms. Niru Maheswaranathan, David Sussillo, Luke Metz, Ruoxi Sun, Jascha Sohl-Dickstein. [paper] [supplementary] [poster]
- On Episodes, Prototypical Networks, and Few-Shot Learning. Steinar Laenen, Luca Bertinetto. [paper] [supplementary] [poster]
- Bayesian Optimization by Density Ratio Estimation. Louis C Tiao, Aaron Klein, Cedric Archambeau, Edwin V. Bonilla, Matthias Seeger, Fabio Ramos. [paper] [supplementary] [poster]
- Contextual HyperNetworks for Novel Feature Adaptation. Angus Lamb, Evgeny Saveliev, Yingzhen Li, Sebastian Tschiatschek, Camilla Longden, Simon Woodhead, Jose Miguel Hernandez-Lobato, Richard E. Turner, Pashmina Cameron, Cheng Zhang. [paper] [supplementary] [poster]
- Learning to Generate Noise for Multi-Attack Robustness. Divyam Madaan, Jinwoo Shin, Sung Ju Hwang. [paper] [supplementary] [poster]
- Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time. Ferran Alet, Kenji Kawaguchi, Maria Bauza Villalonga, Nurullah Giray Kuru, Tomas Lozano-Perez, Leslie Kaelbling. [paper] [supplementary] [poster]
- Model-Agnostic Graph Regularization for Few-Shot Learning. Ethan Z Shen, Maria Brbic, Nicholas Monath, Jiaqi Zhai, Manzil Zaheer, Jure Leskovec. [paper] [supplementary] [poster]
- Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training. Eleni Triantafillou, Vincent Dumoulin, Hugo Larochelle, Richard Zemel. [paper] [poster]
- Prior-guided Bayesian Optimization. Artur L.F. Souza, Luigi Nardi, Leonardo B. Oliveira, Kunle Olukotun, Marius Lindauer, Frank Hutter. [paper] [supplementary] [poster]
- Is Support Set Diversity Necessary for Meta-Learning? Oscar Li, Amrith Setlur, Virginia Smith. [paper] [poster]
- How Important is the Train-Validation Split in Meta-Learning? Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason Lee, Sham Kakade, Huan Wang, Caiming Xiong. [paper] [supplementary] [poster]
- Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift. Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, Chelsea Finn. [paper] [supplementary] [poster]
- Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads. Suneel Belkhale. [paper] [supplementary] [poster]
- Task Similarity Aware Meta Learning: Theory-inspired Improvement on MAML. Pan Zhou, Yingtian Zou, Xiao-Tong Yuan, Jiashi Feng, Caiming Xiong, Steven Hoi. [paper] [supplementary] [poster]
- HyperVAE: Variational Hyper-Encoding Network. Phuoc Nguyen, Truyen Tran, Sunil Gupta, Santu Rana, Svetha Venkatesh, Hieu-Chi Dam. [paper] [poster]
- Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search. Aditya Rawal, Joel Lehman, Felipe Petroski Such, Jeff Clune, Kenneth O Stanley. [paper] [poster]
- Meta-Learning Initializations for Image Segmentation. Sean M Hendryx, Andrew Leach, Paul Hein, Clayton Morrison. [paper] [poster]
- Data Augmentation for Meta-Learning. Renkun Ni, Micah Goldblum, Amr Sharaf, Kezhi Kong, Tom Goldstein. [paper] [supplementary] [poster]
- Flexible Few-Shot Learning of Contextual Similarity. Mengye Ren, Eleni Triantafillou, Kuan-Chieh Wang, James Lucas, Jake Snell, Xaq Pitkow, Andreas S. Tolias, Richard S. Zemel. [paper] [poster]
- NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search. Julien Siems, Lucas Zimmer, Arber Zela, Jovita Lukasik, Margret Keuper, Frank Hutter. [paper] [supplementary] [poster]
- Prototypical Region Proposal Networks for Few-shot Localization and Classification. Elliott Skomski, Aaron R Tuor, Andrew Avila, Lauren Phillips, Zachary New, Henry Kvinge, Courtney Corley, Nathan Hodas. [paper] [poster]
- Continual Model-Based Reinforcement Learning with Hypernetworks. Yizhou Huang, Kevin Xie, Homanga Bharadhwaj, Florian Shkurti. [paper] [poster]
- MAster of PuPpets: Model-Agnostic Meta-Learning via Pre-trained Parameters for Natural Language Generation. Chien-Fu Lin, Hung-yi Lee. [paper] [supplementary] [poster]
- Few-Shot Unsupervised Continual Learning through Meta-Examples. Alessia Bertugli, Stefano Vincenzi, Simone Calderara, Andrea Passerini. [paper] [supplementary] [poster]
- Similarity of classification tasks. Cuong Cao Nguyen, Thanh-Toan Do, Gustavo Carneiro. [paper] [supplementary] [poster]
- MobileDets: Searching for Object Detection Architecture for Mobile Accelerators. Yunyang Xiong, Hanxiao Liu, Suyog Gupta, Berkin Akin, Gabriel Bender, Yongzhe Wang, Pieter-Jan Kindermans, Mingxing Tan, Vikas Singh, Bo Chen. [paper] [supplementary] [poster]
- Towards Meta-Algorithm Selection. Alexander Tornede, Marcel Wever, Eyke Hüllermeier. [paper] [poster]
- Meta-Learning via Hypernetworks. Dominic Zhao, Seijin Kobayashi, João Sacramento, Johannes von Oswald. [paper] [poster]
- Open-Set Incremental Learning via Bayesian Prototypical Embeddings. John Willes, James Harrison, Chelsea Finn, Marco Pavone, Steven L Waslander. [paper] [supplementary] [poster]
- Multi-Objective Multi-Fidelity Hyperparameter Optimization with Application to Fairness. Robin Schmucker, Michele Donini, Valerio Perrone, Muhammad Bilal Zafar, Cedric Archambeau. [paper] [supplementary] [poster]
- Meta-Learning of Compositional Task Distributions in Humans and Machines. Sreejan Kumar, Ishita Dasgupta, Jonathan Cohen, Nathaniel Daw, Tom Griffiths. [paper] [poster]
- Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms. Quentin Bouniot, Ievgen Redko, Romaric Audigier, Angélique Loesch, Amaury Habrard. [paper] [supplementary] [poster]
- A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings. Davide Buffelli, Fabio Vandin. [paper] [supplementary] [poster]
- Few-shot Sequence Learning with Transformers. Lajanugen Logeswaran, Ann Lee, Myle Ott, Honglak Lee, Marc’Aurelio Ranzato, Arthur Szlam. [paper] [supplementary] [poster]
- Learning not to learn: Nature versus nurture in silico. Robert T Lange, Henning Sprekeler. [paper] [poster]
- Meta-Learning Backpropagation And Improving It. Louis Kirsch, Jürgen Schmidhuber. [paper] [supplementary] [poster]
- Continual learning with direction-constrained optimization. Yunfei Teng, Anna Choromanska, Murray Campbell. [paper] [supplementary] [poster]
- Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment. Paul Pu Liang, Peter Wu, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov. [paper] [supplementary] [poster]
- MPLP: Learning a Message Passing Learning Protocol. Ettore Randazzo, Eyvind Niklasson, Alexander Mordvintsev. [paper] [supplementary] [poster]
- Pareto-efficient Acquisition Functions for Cost-Aware Bayesian Optimization. Gauthier Guinet, Valerio Perrone, Cedric Archambeau. [paper] [poster]
- Hyperparameter Transfer Across Developer Adjustments. Danny Stoll, Jörg K.H. Franke, Diane Wagner, Simon Selg, Frank Hutter. [paper] [supplementary] [poster]
- Decoupling Exploration and Exploitation in Meta-Reinforcement Learning without Sacrifices. Evan Liu, Aditi Raghunathan, Percy Liang, Chelsea Finn. [paper] [poster]
- Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory. Jonas Rothfuss, Martin Josifoski, Andreas Krause. [paper] [poster]
- Task Meta-Transfer from Limited Parallel Labels. Yiren Jian, Karim Ahmed, Lorenzo Torresani. [paper] [poster]
- Uniform Priors for Meta-Learning. Samarth Sinha, Karsten Roth, Anirudh Goyal, Marzyeh Ghassemi, Hugo Larochelle, Animesh Garg. [paper] [poster]
- Defining Benchmarks for Continual Few-Shot Learning. Antreas Antoniou, Massimiliano Patacchiola, Mateusz Ochal, Amos Storkey. [paper] [supplementary] [poster]
- Flexible Dataset Distillation: Learn Labels Instead of Images. Ondrej Bohdal, Yongxin Yang, Timothy Hospedales. [paper] [supplementary] [poster]
- Measuring few-shot extrapolation with program induction. Ferran Alet, Javier Lopez-Contreras, Joshua Tenenbaum, Tomas Lozano-Perez, Leslie Kaelbling. [paper] [poster]
- Training more effective learned optimizers. Luke Metz, Niru Maheswaranathan, Ruoxi Sun, Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein. [paper] [poster]
Submission Instructions
Formatting
The submission window for this workshop is now closed. Decision notifications were sent out October 30th, 2020. Thank you to all who submitted!
We have provided a modified `.sty` file here that appropriately lists the name of the workshop when `\neuripsfinal` is enabled. Please use this style file in conjunction with the corresponding LaTeX `.tex` template from the NeurIPS website to submit the final camera-ready copy. The camera-ready may be up to 8 pages.
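For orientation, a minimal camera-ready skeleton might look like the sketch below. The file name `neurips_2020.sty` and the `final` package option are assumptions carried over from the standard NeurIPS template; adjust them to match the files linked above.

```latex
% Minimal camera-ready skeleton -- a sketch, not the official template.
% Assumes the modified style file is saved as neurips_2020.sty next to
% this .tex file (the actual file name may differ). In the standard
% NeurIPS template, the [final] option enables \neuripsfinal, which
% switches on the camera-ready header.
\documentclass{article}
\usepackage[final]{neurips_2020}

\title{Your Camera-Ready Title}
% \And is defined by the NeurIPS style to separate author blocks.
\author{First Author \And Second Author}

\begin{document}
\maketitle

\begin{abstract}
  Abstract text goes here.
\end{abstract}

% Main text: the camera-ready may be up to 8 pages.
\end{document}
```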
Accepted papers and supplementary material will be made available on the workshop website. However, these do not constitute archival publications and no formal workshop proceedings will be made available, meaning contributors are free to publish their work in archival journals or conferences.
FAQ
- Can supplementary material be added beyond the 6-page limit for submissions, and are there any restrictions on it?
  Yes, you may include additional supplementary material, but the main paper should be self-contained, since reviewers consult supplementary material at their own discretion. The supplementary material should follow the same NeurIPS format as the paper and be limited to a reasonable length (at most 10 pages in addition to the main submission).
- Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?
  We discourage this, as it creates more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.
- Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?
  We won’t accept such submissions unless they have been adapted to contain significantly new results (novelty is one of the qualities reviewers will be asked to evaluate).
- Can a paper be submitted to the workshop that is currently under review or will be under review at a conference during the review phase?
  MetaLearn 2020 submissions are 6 pages, i.e., much shorter than standard conference submissions. From our side, it is perfectly fine to submit a condensed version of a parallel conference submission, provided this is also acceptable to the conference in question. Our workshop has no archival proceedings, so parallel submission of extended versions to other conferences is acceptable.
- Is there a page limit for the camera-ready paper?
  The camera-ready is limited to 8 pages for all accepted contributions, giving authors space to address comments from the programme committee.
Review Process
Reviewing Mentorship
This year we are trialing a new reviewer mentorship scheme, which we hope will improve the future pool of expert reviewers in machine learning. Junior reviewers will be able to provide reviews in a guided setting and will be linked with senior reviewers who will give them feedback and advice throughout the reviewing process. It is our hope that this will be a useful learning experience for reviewers and improve the overall quality of reviewing.
In order for this to be feasible, all submissions are asked to provide two contacts who have agreed to review for the workshop. These volunteers can, of course, be authors of the submission, or people who have agreed to review on behalf of the authors. Depending on their reviewing experience, these contacts will be assigned either a junior or a senior reviewer role. Every submission is guaranteed at least one senior reviewer, since we will still be recruiting reviewers directly, as in previous years.
The sign-up and recommendation form for junior and senior reviewers is now closed (the deadline was 2 October 2020); reviews are due on 23 October 2020. For more detailed information on the review process, please see this document.
Program Committee
We thank the program committee for shaping the excellent technical program; they are (in alphabetical order):
Badr AlKhamissi, Alessia Bertugli, Homanga Bharadhwaj, Parminder Bhatia, Surya Bhupatiraju, Jasmin Bogatinovski, Ondrej Bohdal, Quentin Bouniot, Pavel Brazdil, Andrew Brock, Davide Buffelli, Andre Carvalho, Michael Chang, Marco Ciccone, Ignasi Clavera, Ishita Dasgupta, Nikita Dhawan, Rachit Dubey, Praneet Dutta, Thomas Elsken, Dumitru Erhan, Sergio Escalera, Ben Eysenbach, Matthias Feurer, Alexandre Galashov, Rafael Gomes Mantovani, Gauthier Guinet, Abhishek Gupta, Mehrtash Harandi, Leonard Hasenclever, Sean Hendryx, Daniel Hernandez-Lobato, Tin Ho, Kyle Hsu, Yizhou Huang, Frank Hutter, Maximilian Igl, Yiren Jian, Xiang Jiang, Martin Josifoski, Udayan Khurana, Louis Kirsch, Aaron Klein, Lars Kotthoff, Aviral Kumar, Sreejan Kumar, Nicholas Kuo, Angus Lamb, Robert Lange, Hung-yi Lee, Benjamin Letham, Ang Li, Rui Li, Chien-Fu Lin, Marius Lindauer, Evan Liu, Javier Lopez-Contreras, Ana Lorena, Divyam Madaan, Parsa Mahmoudieh, Mikhail Mekhedkin-Meskhi, Piotr Mirowski, Eric Mitchell, Igor Mordatch, Ashvin Nair, Cuong Nguyen, Renkun Ni, Eyvind Niklasson, Mateusz Ochal, Randal Olson, Razvan Pascanu, Massimiliano Patacchiola, Valerio Perrone, Marc Pickett, Vitchyr Pong, Paul Pu Liang, Damir Pulatov, Aniruddh Raghu, Kate Rakelly, Ettore Randazzo, Dushyant Rao, Hootan Rashtian, Ievgen Redko, Mengye Ren, Stephen Roberts, Karsten Roth, Jonas Rothfuss, Andrei Rusu, Horst Samulowitz, Evgeny Saveliev, Alan Savushkin, Robin Schmucker, Brandon Schoenfeld, Ethan Shen, Julien Siems, Devendra Singh Chaplot, Samarth Sinha, Elliott Skomski, Jake Snell, Sungryull Sohn, Artur Souza, Aravind Srinivas, Nishan Srishankar, Bradly Stadie, Valdimar Steinar Ericsson Laenen, Mihai Suteu, Kevin Swersky, Jakub Sygnowski, Yunfei Teng, Louis Tiao, Alexander Tornede, Eleni Triantafillou, Stefano Vincenzi, Johannes von Oswald, Jianan Wang, Haozhu Wang, Orion Weller, John Willes, Jiajun Wu, Kelvin Xu, Sharare Zehtabian, Marvin Zhang, Lucas Zimmer, Luisa Zintgraf, Liu Ziyin
Past Workshops
Workshop on Meta-Learning (MetaLearn 2017) @ NeurIPS 2017
Workshop on Meta-Learning (MetaLearn 2018) @ NeurIPS 2018
Workshop on Meta-Learning (MetaLearn 2019) @ NeurIPS 2019
Contacts
For any further questions, you can contact us at metalearn2020@googlegroups.com.