The London-based AI company DeepMind recently gained considerable attention after developing a single AI agent capable of attaining human-level performance on a wide range of Atari video games – entirely self-taught, using only the raw pixels and game scores as input. In 2016, DeepMind again made headlines when its self-taught AI system AlphaGo beat a world champion at the board game Go, a feat experts had expected to be at least a decade away. What both systems have in common is that they are fundamentally grounded in a technique called Deep Reinforcement Learning. In this talk, we will demystify the mechanisms underlying this increasingly popular Machine Learning approach, which combines the agent-centered paradigm of Reinforcement Learning with state-of-the-art Deep Learning techniques.
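To make the Reinforcement Learning side of this combination concrete, here is a minimal sketch of tabular Q-learning on a toy corridor environment. This is an illustrative assumption on my part, not DeepMind's actual DQN (which replaces the table below with a deep neural network); the environment, constants, and function names are all invented for the example.

```python
import random

# Toy environment (hypothetical): a 5-state corridor, states 0..4.
# The agent moves left (-1) or right (+1) and earns reward 1.0
# only upon reaching the rightmost state.
N_STATES = 5
ACTIONS = [-1, +1]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Tabular Q-function; DQN would approximate this with a neural network.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):  # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: explore with probability EPS.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        target = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# The learned greedy policy: best action in each non-goal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

After training, the greedy policy moves right in every non-goal state. Deep RL keeps exactly this update rule but learns Q from raw pixels instead of a lookup table.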
I’m currently studying Computer Science, Statistics, Media Informatics and Business Administration at LMU. Alongside my studies, I hold research affiliations with Siemens Corporate Technology, working on Cognitive Deep Learning (bridging the gap between Deep Learning and Cognitive Neuroscience), and with LMU’s Chair of Clinical Neuropsychology, investigating the relationship between visual perception, attention and action. In 2015/16, I spent a one-year student exchange at the National University of Singapore, where I conducted research for my B.Sc. thesis on a problem in statistical Machine Learning. On the quest to help build smarter machines.