Igor Babuschkin, DeepMind, OH 14, E23

Event Date: June 13, 2019 16:15

Generative Models

Generative models are a class of unsupervised learning techniques that attempt to model the distribution of the data points themselves, rather than predicting labels from them. In recent years, deep learning approaches to generative modeling have produced impressive results in areas such as images (BigGAN), audio (WaveNet), and language (Transformer, GPT-2), among others. I'm going to give an overview of the three most popular underlying methods used in deep generative models today: autoregressive models, generative adversarial networks, and variational autoencoders. I will also go over some state-of-the-art models and explain how they work.
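As a minimal illustration of the autoregressive idea mentioned above, the sketch below fits a character-level bigram model to a tiny hypothetical corpus and samples from it. Real autoregressive models like WaveNet and GPT-2 replace the count table with a deep network, but factorize the distribution the same way: p(x) = ∏ₜ p(xₜ | x₁…xₜ₋₁). The corpus, variable names, and smoothing choice here are illustrative assumptions, not part of the talk.

```python
import numpy as np

# Toy autoregressive generative model: a character bigram model.
# Hypothetical corpus for illustration only.
corpus = "abracadabra "
chars = sorted(set(corpus))
idx = {c: i for i, c in enumerate(chars)}

# Count transitions x_t -> x_{t+1}, with add-one smoothing so every
# transition has nonzero probability.
counts = np.ones((len(chars), len(chars)))
for a, b in zip(corpus, corpus[1:]):
    counts[idx[a], idx[b]] += 1
# Normalize each row into a conditional distribution p(x_{t+1} | x_t).
probs = counts / counts.sum(axis=1, keepdims=True)

# Sample a new sequence one character at a time, each step conditioned
# on the previously generated character.
rng = np.random.default_rng(0)
out = ["a"]
for _ in range(10):
    p = probs[idx[out[-1]]]
    out.append(chars[rng.choice(len(chars), p=p)])
print("".join(out))
```

Because generation is sequential, sampling is slow for long sequences; this is the same trade-off that deep autoregressive models such as WaveNet face.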

Igor Babuschkin is a Senior Research Engineer at DeepMind, Google's artificial intelligence division, which has the ambitious goal of building artificial general intelligence. He studied physics at TU Dortmund (2010-2015), where he was involved in experimental particle physics research at the LHCb experiment at CERN. He then switched fields to machine learning and artificial intelligence, joining DeepMind in 2017. Since then he has been working on new types of generative models and approaches to scalable deep reinforcement learning. He is a tech lead of DeepMind's AlphaStar project, which recently produced the first software agent capable of beating a professional player at the game of StarCraft II.
