The tutorial slides are available here.
Probabilistic programming is an emerging subfield of AI that extends traditional programming languages with primitives to support probabilistic inference and learning. It is closely related to statistical relational learning, but focuses on a programming language perspective rather than on a graphical model one.
This tutorial provides a gentle and coherent introduction to the field by introducing a number of core probabilistic programming concepts and their relations. It focuses on probabilistic extensions of logic programming languages, such as CLP(BN), BLPs, ICL, PRISM, ProbLog, LPADs, CP-logic, SLPs and DYNA, but also discusses relations to alternative probabilistic programming languages such as Church, IBAL and BLOG, and, to some extent, to statistical relational learning models such as RBNs, MLNs, and PRMs.
The concepts will be illustrated on a wide variety of models, including Bayesian networks, probabilistic graphs, and stochastic grammars. This should allow participants to start writing their own probabilistic programs. We further provide an overview of the different inference mechanisms developed in the field, and discuss their suitability for the different concepts. We also touch upon approaches to learning the parameters of probabilistic programs, and mention a number of applications in areas such as robotics, vision, natural language processing, web mining, and bioinformatics.
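To give a flavor of the probabilistic graph models mentioned above, the following sketch illustrates the idea underlying ProbLog's distribution semantics: each probabilistic fact (e.g. an edge) holds independently with a given probability, and the probability of a query (e.g. "is there a path?") is the total probability of the possible worlds in which it holds. The graph, its edge probabilities, and the function names are illustrative choices, not taken from the tutorial; real ProbLog systems use far more efficient inference than this brute-force enumeration.

```python
from itertools import product

# Hypothetical probabilistic graph: each edge exists independently with the
# given probability. In ProbLog syntax this would be, e.g., "0.6::edge(a,b)."
edges = {("a", "b"): 0.6, ("b", "c"): 0.7, ("a", "c"): 0.2}

def reachable(present, src, dst):
    """Check whether dst is reachable from src using only the present edges."""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for (u, v) in present:
            if u == node and v not in seen:
                seen.add(v)
                frontier.append(v)
    return False

def query_path(src, dst):
    """Exact inference by enumerating all possible worlds (subsets of edges)
    and summing the probability of each world in which a path exists."""
    edge_list = list(edges)
    total = 0.0
    for world in product([True, False], repeat=len(edge_list)):
        p = 1.0
        present = []
        for e, included in zip(edge_list, world):
            p *= edges[e] if included else 1.0 - edges[e]
            if included:
                present.append(e)
        if reachable(present, src, dst):
            total += p
    return total

print(query_path("a", "c"))  # -> 0.536
```

Here the path a-c exists either via the direct edge (probability 0.2) or via b (0.6 x 0.7), giving 1 - 0.8 x 0.58 = 0.536. Enumeration is exponential in the number of probabilistic facts, which is precisely why the inference mechanisms surveyed in the tutorial matter.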
The tutorial is intended for AI researchers and practitioners, as well as domain experts interested in probabilistic programming and statistical relational learning. Basic knowledge of Prolog, logic programming and/or graphical models at the level of an introductory course in AI will be helpful, but is not a prerequisite.