Bayesian models, such as topic modeling (LDA), have had enormous impact on natural language processing. Although deep neural architectures have improved performance on many tasks, many problems still lend themselves best to a Bayesian treatment. In this talk, we will motivate and develop a Bayesian model for verb sense induction, based on the syntactic structures in which verbs appear. The proposed model shares many elements with topic modeling, but in a simpler overall system. This should provide a friendly introduction to core concepts of Bayesian analysis, and give experienced scientists insight into adapting these models to new domains.
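To make the analogy with topic modeling concrete, here is a minimal sketch (not the talk's actual model): a collapsed Gibbs sampler in which verbs play the role of documents, syntactic frames the role of words, and senses the role of topics. The toy corpus, hyperparameters, and frame inventory below are all illustrative assumptions.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy corpus: each "document" is a verb, each token a syntactic frame
# it was observed with. All data here is hypothetical.
corpus = {
    "break":   ["NP V NP", "NP V", "NP V NP", "NP V PP"],
    "shatter": ["NP V NP", "NP V", "NP V"],
    "give":    ["NP V NP PP", "NP V NP NP", "NP V NP PP"],
    "send":    ["NP V NP PP", "NP V NP NP"],
}

K = 2                   # number of latent senses ("topics")
alpha, beta = 0.5, 0.1  # symmetric Dirichlet hyperparameters
frames = sorted({f for fs in corpus.values() for f in fs})
V = len(frames)

# Random initialization of sense assignments, plus count tables.
assign = {}              # (verb, position) -> sense
n_vk = defaultdict(int)  # verb-sense counts
n_kf = defaultdict(int)  # sense-frame counts
n_k = defaultdict(int)   # sense totals
for verb, fs in corpus.items():
    for i, f in enumerate(fs):
        k = random.randrange(K)
        assign[(verb, i)] = k
        n_vk[(verb, k)] += 1
        n_kf[(k, f)] += 1
        n_k[k] += 1

def gibbs_sweep():
    """One pass of collapsed Gibbs sampling over all tokens."""
    for verb, fs in corpus.items():
        for i, f in enumerate(fs):
            k = assign[(verb, i)]
            # Remove this token's current assignment from the counts.
            n_vk[(verb, k)] -= 1; n_kf[(k, f)] -= 1; n_k[k] -= 1
            # Conditional posterior over senses for this token.
            weights = [
                (n_vk[(verb, j)] + alpha) *
                (n_kf[(j, f)] + beta) / (n_k[j] + V * beta)
                for j in range(K)
            ]
            # Sample a new sense proportional to the weights.
            r = random.random() * sum(weights)
            new_k = 0
            while r > weights[new_k]:
                r -= weights[new_k]
                new_k += 1
            assign[(verb, i)] = new_k
            n_vk[(verb, new_k)] += 1
            n_kf[(new_k, f)] += 1
            n_k[new_k] += 1

for _ in range(200):
    gibbs_sweep()

def dominant_sense(verb):
    """The sense most often assigned to a verb's tokens."""
    return max(range(K), key=lambda j: n_vk[(verb, j)])

for verb in corpus:
    print(verb, dominant_sense(verb))
```

With this toy data, the transitive/intransitive verbs (break, shatter) and the ditransitive verbs (give, send) should end up grouped under different senses, mirroring how LDA groups documents by their word distributions.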
Daniel Peterson is a Senior Data Scientist at TrustYou, and a PhD candidate at the University of Colorado. In his day job, he tries to present accurate, comprehensive representations of millions of hotel reviews to data partners like Google. In his PhD work, he tries to extend VerbNet, a semantic resource built on sound theoretical linguistics.