Recent years have seen rapid progress in meta-learning methods, which learn from data how to optimize the performance of learning algorithms, generate new learning algorithms from scratch, and transfer knowledge across tasks and domains. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade: from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations and classifiers. The ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience.
Meta-learning methods are also of substantial practical interest: they have been shown to yield new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems.
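To make the "learning to learn" idea concrete, the sketch below shows a first-order, Reptile-style meta-learning loop on toy linear-regression tasks: an inner loop adapts to each task from a shared initialization, and an outer loop updates that initialization so future tasks adapt faster. The task distribution, model, and step sizes are illustrative assumptions, not any specific method presented at the workshop.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(n=20):
    """One toy task: 1-D linear regression y = a*x + b with task-specific (a, b)."""
    a, b = rng.uniform(0.5, 1.5), rng.uniform(-1, 1)
    x = rng.uniform(-5, 5, size=n)
    return x, a * x + b

def mse_grad(w, x, y):
    """Gradient of mean-squared error for the linear model y_hat = w[0]*x + w[1]."""
    err = w[0] * x + w[1] - y
    return np.array([np.mean(2 * err * x), np.mean(2 * err)])

meta_w = np.zeros(2)             # meta-learned initialization, shared across tasks
inner_lr, outer_lr = 0.02, 0.05  # illustrative step sizes

for step in range(1000):
    x, y = sample_task()
    # Inner loop: adapt to the sampled task with a few gradient steps from the shared init.
    w = meta_w.copy()
    for _ in range(5):
        w -= inner_lr * mse_grad(w, x, y)
    # Outer loop (first-order, Reptile-style update): move the initialization
    # toward the adapted parameters, so that new tasks adapt faster.
    meta_w += outer_lr * (w - meta_w)

print("meta-learned initialization:", meta_w)
```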
Some of the fundamental questions that this workshop aims to address are:
- How does the learning “task” of a meta-learner fundamentally differ from that of a traditional “non-meta” learner?
- Is there a practical limit to the number of meta-learning layers (e.g., would a meta-meta-meta-learning algorithm be of practical use)?
- How can we design more sample-efficient meta-learning methods?
- How can we exploit our domain knowledge to effectively guide the meta-learning process?
- What are the meta-learning processes in nature (e.g., in humans), and how can we take inspiration from them?
- Which ML approaches are best suited for meta-learning, in which circumstances, and why?
- What principles can we learn from meta-learning to help us design the next generation of learning systems?
The goal of this workshop is to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning, as well as possible solutions.
Speakers
- Josh Tenenbaum (MIT)
- Jane Wang (DeepMind)
- Jitendra Malik (UC Berkeley)
- Oriol Vinyals (DeepMind)
- Chelsea Finn (UC Berkeley)
- Christophe Giraud-Carrier (Brigham Young University)
Additional Panelists
- Samy Bengio (Google)
Organizers
- Roberto Calandra (UC Berkeley)
- Frank Hutter (University of Freiburg)
- Hugo Larochelle (Google Brain)
- Sergey Levine (UC Berkeley)
Important dates
- Submission deadline: 03 November 2017 (Anywhere on Earth)
- Notification: 24 November 2017
- Camera ready: 04 December 2017
- Workshop: 09 December 2017
Schedule
08:30 | Introduction and opening remarks
08:40 | Jitendra Malik – Learning to optimize with reinforcement learning
09:10 | Christophe Giraud-Carrier – Informing the Use of Hyperparameter Optimization Through Metalearning
09:40 | Poster spotlights
10:00 | Poster session 1 (+ coffee break)
11:00 | Jane Wang – Multiple scales of task and reward-based learning
11:30 | Chelsea Finn – Model-Agnostic Meta-Learning: Universality, Inductive Bias, and Weak Supervision
12:00 | Lunch break
13:30 | Josh Tenenbaum – Learn to learn high-dimensional models from few examples
14:00 | Contributed talk 1: Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start
14:15 | Contributed talk 2: Learning to Model the Tail
14:30 | Poster session 2 (+ coffee break)
15:30 | Oriol Vinyals – Meta Unsupervised Learning
16:00 | Panel discussion
17:00 | End
Accepted Papers
- SMASH: One-Shot Model Architecture Search through HyperNetworks
  Andrew Brock, Theodore Lim, J.M. Ritchie, Nick Weston
- Meta Inverse Reinforcement Learning via Maximum Reward Sharing
  Kun Li, Joel W. Burdick
- Learning to Learn from Weak Supervision by Full Supervision
  Mostafa Dehghani, Aliaksei Severyn, Sascha Rothe, Jaap Kamps
- Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm
  Chelsea Finn, Sergey Levine
- Bayesian model ensembling using meta-trained recurrent neural networks
  Luca Ambrogioni, Julia Berezutskaya, Umut Güçlü, Eva W. P. van den Borne, Yağmur Güçlütürk, Marcel A. J. van Gerven
- Accelerating Neural Architecture Search using Performance Prediction
  Bowen Baker, Otkrist Gupta, Ramesh Raskar, Nikhil Naik
- Meta-Learning for Semi-Supervised Few-Shot Classification [Appendix]
  Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle, Richard S. Zemel
- Connectivity Learning in Multi-Branch Networks
  Karim Ahmed, Lorenzo Torresani
- A Simple Neural Attentive Meta-Learner
  Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, Pieter Abbeel
- Semi-Supervised Few-Shot Learning with Prototypical Networks
  Rinu Boney, Alexander Ilin
- Language Learning as Meta-Learning [Appendix]
  Jacob Andreas, Dan Klein, Sergey Levine
- Hyperparameter Optimization with Hypernets
  Jonathan Lorraine, David Duvenaud
- Few-Shot Learning with Meta Metric Learners
  Yu Cheng, Mo Yu, Xiaoxiao Guo, Bowen Zhou
- Gated Fast Weights for On-The-Fly Neural Program Generation
  Imanol Schlag, Jürgen Schmidhuber
- A bridge between hyperparameter optimization and learning-to-learn
  Luca Franceschi, Paolo Frasconi, Michele Donini, Massimiliano Pontil
- Understanding Short-Horizon Bias in Stochastic Meta-Optimization [Appendix]
  Yuhuai Wu, Mengye Ren, Renjie Liao, Roger B. Grosse
- Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning [Extended version]
  Clemens Rosenbaum, Tim Klinger, Matthew Riemer
- Learning Decision Trees with Reinforcement Learning [Appendix]
  Zheng Xiong, Wenpeng Zhang, Wenwu Zhu
- Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start
  Valerio Perrone, Rodolphe Jenatton, Matthias Seeger, Cédric Archambeau
- Backpropagated plasticity: learning to learn with gradient descent in large plastic networks
  Thomas Miconi, Jeff Clune, Kenneth O. Stanley
- Learning to Learn while Learning
  Daniel Kappler, Stefan Schaal, Franziska Meier
- Meta-Learning for Instance-Level Data Association
  Ronald Clark, John McCormac, Stefan Leutenegger, Andrew J. Davison
- Supervised Learning of Unsupervised Learning Rules
  Luke Metz, Brian Cheung, Jascha Sohl-dickstein
- Learning word embeddings from dictionary definitions only
  Tom Bosc, Pascal Vincent
- Learning to Model the Tail [Extended version]
  Yu-Xiong Wang, Deva Ramanan, Martial Hebert
- Born Again Neural Networks
  Tommaso Furlanello, Zachary C. Lipton, Laurent Itti, Anima Anandkumar
- Hyperactivations for Activation Function Exploration
  Conner Joseph Vercellino, William Yang Wang
- Concept Learning via Meta-Optimization with Energy Models
  Igor Mordatch
- Simple and Efficient Architecture Search for CNNs
  Thomas Elsken, Jan-Hendrik Metzen, Frank Hutter
Program Committee
We thank the program committee for shaping the excellent technical program (in alphabetical order):
Parminder Bhatia, Andrew Brock, Bistra Dilkina, Rocky Duan, David Duvenaud, Thomas Elsken, Dumitru Erhan, Matthias Feurer, Chelsea Finn, Roman Garnett, Christophe Giraud-Carrier, Erin Grant, Klaus Greff, Roger Grosse, Abhishek Gupta, Matt Hoffman, Aaron Klein, Marius Lindauer, Jan-Hendrik Metzen, Igor Mordatch, Randy Olson, Sachin Ravi, Horst Samulowitz, Jürgen Schmidhuber, Matthias Seeger, Jake Snell, Jasper Snoek, Alexander Toshev, Eleni Triantafillou, Jan van Rijn, Joaquin Vanschoren.
Contacts
For any questions, you can contact us at info@metalearning.ml.