HyLAP at ECML-PKDD 2016, Riva del Garda, Italy

Nico and I are attending ECML-PKDD 2016 as well. After attending ECML-PKDD 2015, we both already knew that this conference would be a lot of fun with nice people. We both presented yesterday. Nico had the pleasure of presenting his work "Scalable Hyperparameter Optimization with Products of Gaussian Process Experts", in which he proposes a way to improve the scalability of Gaussian-process-based transfer surrogates, an important topic for large meta-data sets. In my talk "Two-Stage Transfer Surrogate Model for Automatic Hyperparameter Optimization", I presented a new surrogate model for sequential model-based optimization that uses meta-knowledge in a new way.
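To give a rough idea of the scalability trick behind products of Gaussian process experts: instead of fitting one large Gaussian process over all meta-data, each previous data set gets its own small expert, and their Gaussian predictions are multiplied, so precisions add and the mean is precision-weighted. The sketch below shows only this combination step; the function name and numbers are illustrative, and the paper's actual weighting of experts may differ:

```python
import numpy as np

def product_of_experts(means, variances):
    """Combine independent Gaussian expert predictions via a product of
    Gaussians: precisions add, and the combined mean is the
    precision-weighted average of the expert means."""
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    var = 1.0 / precisions.sum()           # combined variance
    mu = var * (precisions * means).sum()  # precision-weighted mean
    return mu, var

# Three experts, e.g. GPs trained on three previous data sets
mu, var = product_of_experts([0.2, 0.4, 0.3], [0.1, 0.4, 0.2])
```

Note that the combined variance is always smaller than that of the most confident expert, which is exactly why aggregating many cheap experts can stand in for one expensive joint model.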

Beautiful view over the former fishing village from Monte Brione.

HyLAP at DSAA 2015, Paris, France

Greetings from Paris!

I just presented our paper "Learning Hyperparameter Optimization Initializations" at the IEEE International Conference on Data Science and Advanced Analytics 2015 in Paris. The paper is about finding hyperparameter configurations that are good across many different data sets and are therefore good starting points for hyperparameter tuning. We propose a meta-loss whose minimization yields these starting points.
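The paper learns the initial configurations by minimizing a differentiable meta-loss; as a rough, hypothetical illustration of what such a meta-loss measures, here is a simplified greedy variant over a discrete set of candidate configurations (all names and data are made up and not from the paper):

```python
import numpy as np

def greedy_initializations(loss_table, k):
    """Pick k configurations that minimize the average best-so-far loss
    across data sets. `loss_table` has shape (n_datasets, n_configs);
    entry [d, j] is the loss of configuration j on data set d.
    Greedy stand-in for the learned meta-loss minimization."""
    n_datasets, _ = loss_table.shape
    best = np.full(n_datasets, np.inf)  # best loss seen so far per data set
    chosen = []
    for _ in range(k):
        # meta-loss of adding config j: mean over data sets of min(best, loss[:, j])
        scores = np.minimum(best[:, None], loss_table).mean(axis=0)
        j = int(np.argmin(scores))
        chosen.append(j)
        best = np.minimum(best, loss_table[:, j])
    return chosen

# Config 0 is best on data set 0, config 1 on data set 1, config 2 on neither
table = np.array([[0.1, 0.9, 0.5],
                  [0.9, 0.1, 0.5]])
inits = greedy_initializations(table, k=2)
```

The greedy choice naturally prefers configurations that complement each other across data sets rather than picking the single best configuration twice.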

Because the conference takes place in Paris, not everything could be about machine learning. Here you can see me in front of Notre-Dame and the Seine.

Notre-Dame

HyLAP at ECML-PKDD 2015, Porto, Portugal

Our visit to ECML-PKDD 2015 is special for two reasons. First, we will travel as a group because our lab will give three presentations, two of them on hyperparameter optimization. More importantly, MetaSel, a workshop on meta-learning and algorithm selection, is co-located with ECML this year, and a meta-learning tutorial will be given. Both are of personal interest to Nico and me.

Both the workshop and the tutorial already took place on Monday, and we were able to talk to many people working on aspects of machine learning very similar to ours. We had the opportunity to exchange a few words with some well-known people in this domain, e.g. Prof. Brazdil and Prof. Bischl.

Overall, visiting ECML-PKDD 2015 has been a great experience, and we are looking forward to ECML-PKDD 2016.

Nico in front of our two posters

Big Surprise for Us

Both of our submissions to the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases were accepted! We are very surprised and happy about this. As you can imagine, this is very important for the project as well as for us personally.

Nico submitted his work "Hyperparameter Optimization with Factorized Multilayer Perceptrons". The rough idea is to accelerate the automatic tuning of hyperparameters of machine learning algorithms. To do so, he makes use of knowledge about hyperparameter performance on other data sets, which he transfers to the data set we are currently interested in. To achieve this, he proposes a specific neural network: a multilayer perceptron with a modified input layer that learns data set similarities in a latent representation.
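A hypothetical sketch of that input-layer idea: each known data set gets a learned latent embedding that is concatenated with the hyperparameter vector before the usual MLP layers, so similar data sets end up close together in the latent space. All dimensions and weights below are made up, and training is omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 5 known data sets, 3 hyperparameters to tune.
n_datasets, emb_dim, n_hyper, hidden = 5, 4, 3, 16

# Per-data-set embeddings stand in for the factorized input layer:
# data set similarity lives in this learned latent space.
dataset_emb = rng.normal(size=(n_datasets, emb_dim))
W1 = rng.normal(size=(emb_dim + n_hyper, hidden))
b1 = np.zeros(hidden)
w2 = rng.normal(size=hidden)

def predict(dataset_id, hyperparams):
    """Predicted performance of `hyperparams` on data set `dataset_id`."""
    x = np.concatenate([dataset_emb[dataset_id], hyperparams])
    h = np.tanh(x @ W1 + b1)  # single hidden layer for the sketch
    return float(h @ w2)

y = predict(2, np.array([0.1, 0.5, 0.9]))
```

Because the data set identity enters only through its embedding, the same network can make predictions for a new data set once an embedding for it is learned.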

Our other submission, with the lengthy title "Hyperparameter Search Space Pruning - A New Component for Sequential Model-Based Hyperparameter Optimization", follows a similar idea. Again, we use knowledge about hyperparameter performance on other data sets to accelerate automatic hyperparameter optimization. In contrast, I propose to add a new, orthogonal component to the state-of-the-art hyperparameter optimization framework. It prunes unpromising regions of the search space, which leads to good hyperparameter configurations in less time.
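As a rough illustration of the pruning idea (not the paper's actual criterion, which is region-based): rank each candidate configuration by its losses on previously seen data sets and discard those with the worst average rank, so the sequential optimizer only evaluates candidates in promising parts of the search space. Names and numbers are invented for the sketch:

```python
import numpy as np

def prune(candidates, meta_losses, keep_frac=0.5):
    """Drop candidate configurations that ranked poorly on previous
    data sets. `meta_losses` has shape (n_prev_datasets, n_candidates).
    Simplified stand-in for the search-space pruning component."""
    # Double argsort turns losses into per-data-set ranks (0 = best).
    ranks = np.argsort(np.argsort(meta_losses, axis=1), axis=1)
    mean_rank = ranks.mean(axis=0)
    keep = mean_rank <= np.quantile(mean_rank, keep_frac)
    return [c for c, k in zip(candidates, keep) if k]

# Configs 2 and 3 were consistently bad on both previous data sets.
losses = np.array([[0.1, 0.2, 0.9, 0.8],
                   [0.2, 0.1, 0.8, 0.9]])
survivors = prune(["c0", "c1", "c2", "c3"], losses)
```

Everything downstream of the pruning step is unchanged, which is what makes the component orthogonal to the rest of the optimization framework.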

At this point we would also like to thank the anonymous reviewers for their useful feedback.

See you in Porto!