Tutorial on Constraint Learning at AAAI'18

This tutorial provides a gentle and concise introduction to the diverse area of constraint learning, covering different formalisms and learning methods and highlighting the commonalities and differences between them. It will span learning hard and soft constraints, as well as learning in offline (batch) and interactive settings. We will discuss relevant applications, including programming-by-example and structured prediction.


Material

- part 1: hard constraints

- part 2: soft constraints

- part 3: interactive learning

- companion AAAI'18 senior member track paper here


Intended Audience

This tutorial is intended for Artificial Intelligence researchers and practitioners, as well as domain experts interested in constraint learning, programming, modelling, and satisfaction. The participants will gain an understanding of the core concepts in constraint learning, as well as a conceptual map of the variety of methods available and of the relationships between them. The main goal is to prepare the audience to understand the field of constraint learning and its relation to the broader context of machine learning and constraint satisfaction.

Basic knowledge of constraint satisfaction and machine learning at the level of an introductory course will be useful, but not required. General knowledge of Artificial Intelligence will be assumed.


Outline

  • Overview of areas where constraints appear in Artificial Intelligence, with a focus on the interplay between machine learning and data mining on one side, and constraint satisfaction and programming on the other.
  • Overview of how constraint learning relates to more traditional machine learning settings, namely function learning and concept learning. We will also touch upon the relation between constraint learning (especially of soft theories), structured output prediction, and probabilistic models. The common factor is that in all cases the learned theory or model can be used to complete partial observations or configurations.
  • Brief discussion of target applications of constraint learning, such as recovering formulas and functional dependencies in spreadsheets, programming by example, learning constrained optimization problems in operations research (e.g. timetabling), configuring products for recommender systems, and designing complex artifacts (e.g. cyber-physical systems and software).
  • In-depth review of classic and recent approaches for learning hard constraint theories in an offline (batch) setting. This part will cover classical algorithms for learning concepts in the propositional logic setting (Valiant’s k-CNF learning algorithm), as well as extensions to first-order logic (Inductive Logic Programming) and constraint theories (especially the version-space algorithms of Bessiere et al.). A toy sketch of the k-CNF elimination step appears after this outline.
  • Thorough discussion of approaches for learning soft constraint theories in an offline setting, i.e. theories where each constraint is associated with a weight or strength. This part will cover the maximum margin algorithm of Rossi and Sperduti as well as more recent developments. A minimal weight-learning sketch appears after this outline.
  • Detailed presentation of interactive constraint learning methods, where the system acquires feedback from a (human) annotator. The discussion will examine different user interaction protocols and the resulting feedback types, including strategies based on active learning (e.g. annotation queries) and learning from rankings (ranking and improvement queries), as well as strategies specifically developed for constraint learning, such as generalization queries. A simplified annotation-query loop is sketched after this outline.
  • Interacting with human annotators also raises the problem of managing the user’s cognitive load. We will touch upon mechanisms for keeping the human effort under control, and relate them to the existing literature on preference elicitation. These will include very recent ideas such as exchanging information (labels or preferences) over partial configurations and manipulative interaction.
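To make the batch hard-constraint setting concrete, here is a minimal illustrative sketch (in Python, not part of the tutorial material) of the elimination step behind Valiant’s k-CNF learning algorithm: start from every clause of at most k literals and drop any clause violated by a positive example. The variable encoding and the toy data are assumptions made purely for illustration.

```python
from itertools import combinations

def all_clauses(n_vars, k):
    # A literal is (variable index, polarity); polarity True means the positive literal.
    literals = [(i, pol) for i in range(n_vars) for pol in (True, False)]
    for size in range(1, k + 1):
        for clause in combinations(literals, size):
            yield frozenset(clause)

def satisfies(example, clause):
    # A clause (disjunction) is satisfied if at least one of its literals holds.
    return any(bool(example[i]) == pol for i, pol in clause)

def learn_k_cnf(positives, n_vars, k):
    # Start from every clause of length <= k and drop any clause violated
    # by some positive example (Valiant's elimination step).
    hypothesis = set(all_clauses(n_vars, k))
    for ex in positives:
        hypothesis = {c for c in hypothesis if satisfies(ex, c)}
    return hypothesis

# Toy usage: positives are consistent with the 2-CNF (x0 or x1) and (not x2).
positives = [(1, 0, 0), (0, 1, 0), (1, 1, 0)]
print(len(learn_k_cnf(positives, n_vars=3, k=2)), "clauses retained")
```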
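For the soft-constraint part, the following is a hedged sketch of a perceptron-style weight-learning loop, in the spirit of max-margin approaches but not the specific algorithm presented in the tutorial: each candidate constraint gets a weight, an assignment scores the weighted count of constraints it satisfies, and weights are updated whenever an observed good assignment is outscored. The candidate constraints, the exhaustive argmax, and all names are illustrative assumptions.

```python
from itertools import product

def features(assignment, constraints):
    # 0/1 vector indicating which soft constraints the assignment satisfies.
    return [1 if c(assignment) else 0 for c in constraints]

def score(assignment, weights, constraints):
    return sum(w * f for w, f in zip(weights, features(assignment, constraints)))

def argmax_assignment(n_vars, weights, constraints):
    # Exhaustive search for the highest-scoring assignment (toy-sized problems only).
    return max(product((0, 1), repeat=n_vars),
               key=lambda a: score(a, weights, constraints))

def perceptron_weights(good_assignments, constraints, n_vars, epochs=10, eta=1.0):
    weights = [0.0] * len(constraints)
    for _ in range(epochs):
        for good in good_assignments:
            pred = argmax_assignment(n_vars, weights, constraints)
            if pred != good:
                # Push weights toward constraints satisfied by the good assignment.
                fg, fp = features(good, constraints), features(pred, constraints)
                weights = [w + eta * (g - p) for w, g, p in zip(weights, fg, fp)]
    return weights

# Toy usage: two candidate soft constraints over 3 Boolean variables.
constraints = [lambda a: a[0] != a[1],   # x0 and x1 should differ
               lambda a: a[2] == 1]      # x2 should be true
good = [(1, 0, 1), (0, 1, 1)]
print(perceptron_weights(good, constraints, n_vars=3))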
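Finally, a simplified view of interactive acquisition via annotation (membership) queries, assuming a single target constraint drawn from a small candidate set; this is a toy illustration rather than the version-space algorithm of Bessiere et al. The learner queries an assignment on which the remaining candidates disagree and prunes those inconsistent with the user’s answer. The candidate set, the simulated oracle, and the query-selection strategy are all assumptions.

```python
from itertools import product

def query_loop(candidates, n_vars, oracle, max_queries=20):
    # candidates: dict mapping a name to a Boolean function over assignments.
    candidates = dict(candidates)
    for _ in range(max_queries):
        # Pick an assignment on which the remaining candidates disagree.
        query = next((a for a in product((0, 1), repeat=n_vars)
                      if len({c(a) for c in candidates.values()}) > 1), None)
        if query is None:       # all candidates agree: nothing left to learn
            break
        label = oracle(query)   # ask the (simulated) user: is this a solution?
        # Keep only candidates consistent with the answer.
        candidates = {n: c for n, c in candidates.items() if c(query) == label}
    return candidates

# Toy usage: the hidden target constraint is "x0 <= x1".
target = lambda a: a[0] <= a[1]
candidates = {"x0<=x1": lambda a: a[0] <= a[1],
              "x0==x1": lambda a: a[0] == a[1],
              "x0>=x1": lambda a: a[0] >= a[1]}
print(list(query_loop(candidates, n_vars=2, oracle=target)))
```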


Presenters

Date: 
Monday, January 15, 2018 - 16:45