07 October 2017
Great Data Science talks, food + drinks and a lot of time for networking
Chief Data Scientist
Abstract: Deep Learning and the Industries
Deep Learning is difficult (?) and there are no hard and fast rules for designing a deep network. There are, however, certain practices one can follow, and these emerge only after dealing with many different datasets and trying out different methods. This talk will give a broad overview of current deep learning applications and their impact on industry. We will cover the most popular deep learning libraries and how to tackle different deep learning problems. The talk will end with a look at how deep learning is being used by current startup companies and whether companies are really selling "deep learning".
Abhishek Thakur is Chief Data Scientist at Boost AI. His focus is on applied rather than theoretical machine learning and deep learning. He completed his Master's in Computer Science at the University of Bonn in early 2014 and has since been working in industry with a research focus on Automatic Machine Learning. He likes taking part in machine learning competitions and has attained a worldwide rank of 3 on the popular website Kaggle. He has also performed well on several other machine learning competition platforms.
Abstract: Word Embeddings - the Good, the Bad, and the Ugly
Word embeddings are the new magic tool for natural language processing. Without cumbersome preprocessing and feature design they are able to capture the semantics of language and texts, simply by being fed with lots of data. So they say.
We applied word embeddings - and, for that matter, also sentence embeddings - to various problem domains such as chatbots, car reviews, news and language learning, all on German domain-specific corpora. We will share our experiences and learnings: how much feature design was still necessary, which alternative approaches are available, and for which applications (recommendations, topic detection, error correction) we were able to make use of word embeddings.
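The core idea behind word embeddings can be shown with a minimal sketch: words become vectors, and semantic closeness becomes cosine similarity between vectors. The vectors and words below are invented for illustration; a real model such as word2vec or fastText learns vectors of 100-300 dimensions from a large corpus.

```python
import math

# Toy "pre-trained" embeddings (hypothetical 4-dimensional vectors).
embeddings = {
    "auto":   [0.9, 0.1, 0.0, 0.3],
    "wagen":  [0.8, 0.2, 0.1, 0.4],
    "banane": [0.0, 0.9, 0.7, 0.1],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Semantically close German words should score higher than unrelated ones.
sim_synonyms = cosine_similarity(embeddings["auto"], embeddings["wagen"])
sim_unrelated = cosine_similarity(embeddings["auto"], embeddings["banane"])
```

No preprocessing or hand-built features are involved: the similarity falls out of the vectors alone, which is exactly the promise (and, per the talk, the caveat) of the approach.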
Bio: Fabian Dill, CEO, DieProduktMacher GmbH
Fabian is Co-founder and CEO of DieProduktMacher GmbH in Munich, Germany. Before founding DieProduktMacher, Fabian served as Head of Business Performance at a subsidiary of Hubert Burda Media. He also co-founded a machine learning startup (KNIME) in 2006. Fabian has many years of experience building online products and seeing them fail and succeed. The rise of conversational interfaces with chatbots and voice user interfaces led him back into the fields of Machine Learning and Natural Language Processing.
This talk is centered on building a real-time, lightning-fast news search engine. All the bits and pieces will be covered, from streaming data processing to system architecture. In particular, for news articles streaming into the system, the challenges and the whole ecosystem from aggregation to scoring algorithms will be introduced. Another focus of this talk is the realisation of a plug-and-play micro-service architecture, so that in the end the audience will see how complex, but also how fun, building a news search engine is. The talk will be accompanied by a live demo showing how to create a focused news feed using Cliqz News Technology.
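One recurring piece of such a pipeline is the scoring step: freshly streamed articles must outrank stale ones of equal textual relevance. The function below is a minimal sketch of that idea, not Cliqz's actual algorithm; the parameter names and weights are hypothetical.

```python
import math

def score_article(term_match, age_hours, source_weight, half_life_hours=6.0):
    """Combine textual relevance with an exponential recency decay.

    term_match    - relevance of the article to the query (e.g. BM25-like), >= 0
    age_hours     - hours since the article entered the stream
    source_weight - trust/quality weight of the publisher, in (0, 1]
    """
    # After one half-life the recency factor drops to 0.5, after two to 0.25, etc.
    recency = math.exp(-math.log(2) * age_hours / half_life_hours)
    return term_match * source_weight * recency

# A fresh article outranks an older one with the same textual relevance.
fresh = score_article(term_match=1.0, age_hours=1, source_weight=0.9)
stale = score_article(term_match=1.0, age_hours=24, source_weight=0.9)
```

In a micro-service setup a scorer like this would sit as its own pluggable service between aggregation and the serving index.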
Heeren Sharma is a Software Engineer in the News Team at Cliqz GmbH. He completed his Master's in Computer Science at TU Munich. While writing his Master's thesis, he developed an interest in building data-centric products, and the real game changer came when he joined Cliqz GmbH as a fresh university graduate. Over the past three years of developing products and services oriented towards streaming data pipelines (especially for News Search), inference engines and fast, responsive systems, Heeren has developed a keen interest in data engineering roles, i.e. taking systems from ideation to POC to production. A problem-solver by nature, he's particularly interested in Information Retrieval and Machine Learning. Most of the time, he likes to munch in Python and use Elasticsearch as Thor's hammer.
Data Visualization Designer
Abstract: How data literate is your audience?
Data in the hands of a few data experts can be powerful, but data at the fingertips of many is what will be truly transformational. It is crucial to increase data access for business experts, but is getting the data out really all that we have to do to make digital transformation happen?
The vast majority of people are increasingly unprepared to navigate today's world of data. But you want your audience to be data literate, that is, to have the ability to read, work with, analyze and argue with data.
But how can you know the recipient’s level of expertise to make sure that your message will be received? Opening your organization’s data to all employees (or even the public) will only make sense if everyone has at least a basic level of data literacy.
From my experience as a data visualization designer, most of the time, the data literacy level of the audience is a black box. This is a huge pain, so together with my friends at Cavorit and the HTW Berlin, I have decided to find ways to easily measure the skill level of an audience. I want to share our learnings about this complex skill, the prerequisites and subskills involved, and how the skill levels are distributed among a population. These insights can help to adjust the complexity of visualizations for each audience, make training programs more effective, raise awareness and hopefully close the data literacy gap.
Evelyn Münster is a Data Visualization Designer at Cavorit and is also working as a freelancer. She is a multidisciplinary thinker on the crossroads of data analysis, data visualization and UX design. Having a background in media art and software development, she has been developing interactive data visualization tools for science and business since 2008. She writes a data viz newsletter in German and is active on Twitter as @dataviz_de.
Catastrophe Analyst and PhD Student
Abstract: Make hyperparameters great again
While tuning the hyperparameters of machine learning algorithms is computationally expensive, it is also vital for improving their predictive performance. Tuning methods range from manual search to more complex procedures like Bayesian optimization. This talk will demonstrate the latest methods for finding good hyperparameter sets within a set period of time for common algorithms like xgboost.
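The simplest baseline on that spectrum, random search, fits in a few lines. The sketch below uses an invented, cheap stand-in for the validation loss so it runs instantly; in a real setup the inner call would train and cross-validate a model such as xgboost, and the search space and optimum here are hypothetical.

```python
import random

random.seed(0)

# Hypothetical stand-in for an expensive cross-validation run: lower is
# better, minimised at learning_rate=0.1, max_depth=6 (values invented
# purely for illustration).
def validation_loss(learning_rate, max_depth):
    return (learning_rate - 0.1) ** 2 + 0.01 * (max_depth - 6) ** 2

search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-3, 0),  # log-uniform draw
    "max_depth": lambda: random.randint(2, 12),
}

def random_search(n_trials):
    """Sample configurations at random and keep the best one seen."""
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = {name: draw() for name, draw in search_space.items()}
        loss = validation_loss(**params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

best_params, best_loss = random_search(n_trials=200)
```

Bayesian optimization improves on this by fitting a surrogate model to past trials and sampling where the surrogate predicts improvement, which matters when each trial costs minutes rather than microseconds.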
Daniel Kühn is a catastrophe analyst at Guy Carpenter, one of the world's largest reinsurance brokers. In his work he uses state-of-the-art probabilistic models to estimate the economic damage caused by large-scale natural disasters. He is also a part-time PhD student at the department for computational statistics at LMU, where he focuses his research on hyperparameter optimization and automatic machine learning.
Abstract: Time series shootout - ARIMA vs. LSTM
Sigrid Keydana is a data scientist with the DACH-based IT consulting company Trivadis.
In the field of data science and machine learning, she focuses on deep learning (concepts and frameworks), statistical learning and statistics, natural language processing and software development using R.
She has a broad background in software development (esp. Java and functional programming languages like Scheme and Haskell), database administration, IT architecture and performance optimization.
She writes a blog (http://recurrentnull.
Title: From Pokémon to Donald Trump - Mining and Visualizing weird stuff
After working as an IT consultant for several years, Markus is now a department head at the German VoD service maxdome. During the day he is responsible for all client- and partner-facing APIs and the services behind them. At night, however, when his secret passion comes out, he searches for yet untapped data to mine and visualize.
Abstract: Transfer Learning for Fun and Profit
Transfer learning is exciting because it unlocks solutions that weren't feasible a few years ago. In fact, the choice of pre-trained models to build on for computer vision tasks has become abundant. In this talk, we will explore how to make these choices for image classification and feature extraction.
The analysis is inspired by practical use cases where human supervision and compute time are often limited. The results are presented for two datasets across PyTorch's model zoo: first, a toy dataset where scale invariance is important; second, a dataset from an object detection pipeline where rotation invariance is important. Lastly, we will cover the human success factors of such a project.
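The feature-extraction recipe behind such work can be sketched without any deep learning framework: freeze a pre-trained extractor and train only a small head on top. Everything below is a toy stand-in (the "frozen" extractor, the data, the labels are all invented); a real pipeline would reuse a torchvision model-zoo network with its weights frozen.

```python
import math

# Toy stand-in for a frozen, pre-trained feature extractor: it maps a
# 2-number "image" to fixed features and is never updated during training.
def frozen_features(x):
    return [x[0] + x[1], x[0] - x[1], 1.0]  # last entry acts as a bias term

def train_linear_head(data, epochs=200, lr=0.1):
    """Logistic-regression head trained on top of the frozen features."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, label in data:
            f = frozen_features(x)
            z = sum(wi * fi for wi, fi in zip(w, f))
            pred = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            grad = pred - label                # dLoss/dz for log loss
            w = [wi - lr * grad * fi for wi, fi in zip(w, f)]
    return w

def predict(w, x):
    f = frozen_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) > 0 else 0

# Tiny labelled dataset: class 1 when the coordinate sum is large.
data = [([2.0, 1.0], 1), ([1.5, 1.5], 1), ([0.1, 0.2], 0), ([0.3, 0.0], 0)]
w = train_linear_head(data)
```

Because only the three head weights are learned, a handful of labelled examples suffices, which is exactly why this setup suits the limited-supervision use cases the talk describes.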
Alexander Hirner is an industrial engineer with the conviction to make humans smarter with computers and vice versa. He developed machine learning solutions for unstructured data while studying in Silicon Valley and Europe. In 2014 he founded Ethereum Vienna, a blockchain 2.0 meetup. His ongoing research interest in frictionless data markets lies at the intersection of those two technologies.
At Autonomous Intelligent Driving GmbH we see autonomous driving as the hardest technical challenge since putting a man on the moon and the most societally impactful disruption since the invention of the automobile itself. For this reason, we are developing the full self-driving software stack with a passionate team of talented hard-core Machine Learning and software engineers.
Audi/VW is our sole investor, and we are headquartered in downtown Munich. Come, join us!
The Microsoft Technology Center is an inspiring think tank for customers, partners and Microsoft. Together we envision how Microsoft technology can empower your business and how to achieve more in a world which is driven by continuous innovations.
TrustYou, the world’s largest guest feedback platform, is on a mission to improve the travel experience, from finding the right hotel to having the perfect stay. Our hotel summaries are integrated on Google Maps, Hotels.com and many others, influencing millions of booking decisions every day. Our review analysis software is used by more than 10,000 hotels to improve their services.
TrustYou has data science (ML, NLP) as well as big data teams (Hadoop, Spark) in its Munich headquarters, and is always looking for motivated, impactful individuals to join our organization.
20 years ago, an average media plan involved around 50 decisions. Today, it is more like 5,000. Without artificial intelligence you can no longer keep up. That's why the future media agency is an A.I.gency, like ours.
Blackwood Seven was founded in Copenhagen in 2013 by a group of former CEOs from marketing and IT, who wanted to revolutionize marketing by fully embracing technology. Since then, we have been growing fast with offices in New York, Munich, Los Angeles, London and Barcelona.
All over the world, we are beginning to replace the traditional media agency model with an end-to-end media planning platform that allows modern advertisers to take back control. With our product they can predict, execute and evaluate in real time and across all media from their own laptops.
Instead of finding new ways to keep an obsolete media agency industry alive, it is time to face the truth. Our work is laying the foundations for a new and transparent industry, based on a simple subscription model and running on A.I. and the principles of blockchain technology.
PwC US helps organizations and individuals create the value they’re looking for. We’re a member of the PwC network of firms in 157 countries with more than 184,000 people. We’re committed to delivering quality in assurance, tax and advisory services.
The Volkswagen Data:Lab is a future oriented, data-driven innovation hotspot for the Volkswagen Group brands, markets and business departments. It is building Use Cases across the whole automotive value chain in the areas of Big Data Analytics, Artificial Intelligence, Machine Learning, Connectivity and Internet of Things. As a technology scout and think tank it showcases what may be possible. The goal is to leverage the value of Big Data, Data Analytics and Machine Learning for Volkswagen Group through innovative prototyping. An experienced internal team of specialized Data Scientists closely collaborates with external partners. Our extensive innovation network consists of Group internal teams, experts from leading technology providers, research facilities and universities as well as carefully selected technology startup companies from all over the world.
Enterprise Data Science – Advanced Analytics for Huge Data
We develop and apply Big Data solutions for international enterprises. Our passion is to help our customers make sense of their data, across world-wide distributed data centers, using modern Big Data and Machine Learning tools, in a productive environment. We set a strong focus on the automotive R&D sector, where technological developments such as autonomous driving and connected car currently generate explosively growing data sets. Our services cover, e.g., Big Data Infrastructure, Data Engineering, and Advanced Analytics, and we love to drive innovation from prototype to operation.
Trivadis - makes IT easier
Trivadis is a leading provider of IT consultancy, system integration, solution engineering and IT services, with a focus on Microsoft and Oracle technologies in the German-speaking countries (Germany, Austria, Switzerland and Denmark). These services are supplied by Trivadis' strategic business units: Business Intelligence, Application Development, Infrastructure Engineering, Training and Operations. Trivadis combines methods developed in-house and tested in the market, and products based on these methods, with quality leadership in its core technologies. The company has more than 800 employees and 14 sites in Switzerland, Germany, Austria and Denmark. The turnover of the Trivadis Group in 2016 was approximately CHF 109 million (EUR 118 million). For more information, visit: www.trivadis.com
Early bird tickets: 15€
(until 10th of August)
Regular tickets: 20€