Ben Flood – Insurance Stuff for DataGeeks

Abstract: Presenting the data story of a highly complex analysis of a financial insurance product, aimed at a non-financial, non-insurance audience. Bio: Statistician and Senior Manager in Financial Services at KPMG (he has also been called a software engineer, mathematician, data scientist, actuary, and quant, depending on the client). He has worked in a diamond factory, a graphic design[…]

Michael Green: Deep probabilistic neural networks – A way forward

Abstract: In a world where deep learning and other massively scalable perception machines are at our disposal, allowing us to build amazing applications, the time is now ripe to move beyond pure perception and into broader Artificial Intelligence (AI). The path towards AI goes through what’s missing in many applications today: inference.[…]
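The abstract's contrast between perception and inference can be made concrete with a minimal probabilistic example (purely illustrative, not from the talk; the coin-flip setup and all numbers below are invented for the sketch): grid-approximating the posterior over a coin's bias after observing some flips.

```python
# Minimal sketch of probabilistic inference (illustrative, not the talk's
# method): grid-approximate the posterior over a coin's bias after
# observing 8 heads in 10 flips, starting from a uniform prior.

GRID = [i / 100 for i in range(101)]   # candidate bias values in [0, 1]
heads, flips = 8, 10

prior = [1.0] * len(GRID)              # uniform prior over the grid
like = [p**heads * (1 - p)**(flips - heads) for p in GRID]
unnorm = [pr * li for pr, li in zip(prior, like)]
Z = sum(unnorm)                        # normalizing constant
posterior = [u / Z for u in unnorm]

# The posterior mean sits near heads/flips, pulled slightly toward 0.5
mean = sum(p * w for p, w in zip(GRID, posterior))
print(round(mean, 3))
```

The same update (prior times likelihood, then normalize) is what deep probabilistic models perform over far larger parameter spaces, where the grid is replaced by sampling or variational approximations.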

Marcel Tilly & Olivia Klose – My Robot can learn – using Reinforcement Learning to teach my Robot

Abstract: A new star is rising on the machine learning horizon: reinforcement learning (RL). The concept entails an agent and an incentive-based training system. The agent learns via incentives and improves its behaviour – a self-learning system using simple rules – leading to[…]
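The incentive-based loop the abstract describes can be sketched with tabular Q-learning on a toy task (an illustrative sketch, not code from the talk; the environment, reward, and parameters below are invented for the example):

```python
import random

# Minimal tabular Q-learning sketch: an agent on a 1-D track of 5 cells
# learns to walk right to reach a reward (the "incentive") at the last cell.

N_STATES = 5            # cells 0..4; the reward sits at cell 4
ACTIONS = [-1, 1]       # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy selection: mostly exploit, sometimes explore
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0   # incentive only at the goal
            # Q-learning update: nudge the estimate toward
            # reward + discounted best next value
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After training, "go right" should dominate in every non-terminal state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent is never told the rule "go right"; the behaviour emerges purely from the reward signal, which is the self-learning property the abstract refers to.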

Markus Ziller – From Pokémon to Donald Trump – Mining and Visualizing weird stuff

Abstract: In his talk, Markus discusses extracting, analyzing, and visualizing data from unusual sources. He will cover two of his projects: first, using a Pokémon Go bot to gather and process data on 250k spawns of Pokémon in Munich[…]

Alexander Hirner – Transfer Learning for Fun and Profit

Abstract: Transfer learning is exciting because it unlocks solutions that weren’t feasible a few years ago. In fact, options for composing pre-trained models for computer vision tasks have become abundant. In this talk, we will explore how to make these choices for image classification and feature extraction. The analysis is[…]
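The pattern the abstract refers to, reusing a pre-trained model as a frozen feature extractor while training only a small head, can be sketched in plain Python without any deep learning framework (illustrative only; the extractor, toy dataset, and head below are stand-ins, not the talk's setup):

```python
import math
import random

# Transfer-learning pattern in miniature: a "pre-trained" feature
# extractor is kept frozen, and only a small classification head
# (logistic regression) is trained on top of its features.

random.seed(0)

def frozen_extractor(x):
    # Stand-in for a pre-trained network: a fixed nonlinear feature map
    # whose weights are never updated during training.
    return [math.tanh(x[0] + x[1]), math.tanh(x[0] - x[1])]

# Toy dataset: label is 1 when x0 + x1 > 0
xs = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
data = [(x, 1 if x[0] + x[1] > 0 else 0) for x in xs]

# Trainable head: logistic regression on the frozen features
w, b = [0.0, 0.0], 0.0
lr = 0.5
for _ in range(100):
    for x, y in data:
        f = frozen_extractor(x)                      # features stay fixed
        z = w[0] * f[0] + w[1] * f[1] + b
        p = 1 / (1 + math.exp(-z))
        g = p - y                                    # gradient of log-loss
        w = [w[0] - lr * g * f[0], w[1] - lr * g * f[1]]   # only the head updates
        b -= lr * g

def predict(x):
    f = frozen_extractor(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

acc = sum(predict(x) == y for x, y in data) / len(data)
print(acc)
```

In real image classification the frozen function would be a network such as a torchvision model with its final layer removed, but the division of labour is the same: expensive general features are reused, and only a cheap task-specific head is fit.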

Daniel Kühn – Make hyperparameters great again

Abstract: While tuning the hyperparameters of machine learning algorithms is computationally expensive, it is also vital for improving their predictive performance. Methods for tuning range from manual search to more complex procedures such as Bayesian optimization. This talk will demonstrate the latest methods for finding good hyperparameter sets within a set period of time for[…]
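One simple budgeted approach in the range the abstract mentions is random search (a minimal sketch, not the talk's method; the response-surface function below is a hypothetical stand-in for real validation performance):

```python
import random

# Minimal random-search sketch for hyperparameter tuning. In real use,
# validation_score would train and evaluate a model; here it is a
# hypothetical response surface with a known best setting.

random.seed(42)

def validation_score(lr, reg):
    # Stand-in objective: best near lr=0.1, reg=0.01
    return -((lr - 0.1) ** 2 + (reg - 0.01) ** 2)

def random_search(budget=50):
    # Spend a fixed evaluation budget on log-uniform random samples,
    # keeping the best configuration seen so far.
    best, best_cfg = float("-inf"), None
    for _ in range(budget):
        cfg = {"lr": 10 ** random.uniform(-3, 0),
               "reg": 10 ** random.uniform(-4, -1)}
        s = validation_score(cfg["lr"], cfg["reg"])
        if s > best:
            best, best_cfg = s, cfg
    return best_cfg

cfg = random_search()
print(cfg)
```

The budget parameter is what makes this a "set period of time" method; Bayesian optimization improves on it by using past evaluations to propose the next configuration instead of sampling blindly.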