PhD defense Nima Taghipour

On Tuesday, March 26, 2013, at 17:00, Nima Taghipour will defend his PhD dissertation titled "Lifted Probabilistic Inference by Variable Elimination". The event takes place in the auditorium of the Computer Science Department, KU Leuven, Celestijnenlaan 200A, 3001 Heverlee.

Brief Summary
In artificial intelligence, there is currently considerable interest in probabilistic logical models, i.e., models that combine first-order logic (to deal with complex domains containing many entities) with probability theory (to capture uncertainty). Such models support various reasoning (inference) tasks, allowing us to answer important questions about the domain. However, inference with probabilistic logical models is often computationally inefficient due to the large number of objects and the interactions among them, characteristics that increase the complexity of the model. A recent development that tackles this problem is "lifted probabilistic inference": inference that attempts to exploit the symmetries in the model to achieve efficiency. Lifted inference has attracted a great deal of attention due to the large speedups it can yield, and has been put forward as one of the most promising directions for achieving efficiency in probabilistic logical reasoning.
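
The following is a minimal sketch of the symmetry idea only; the toy model, function names, and code are illustrative assumptions and are not taken from the dissertation. When a model contains interchangeable random variables, inference can reason over counts of assignments instead of enumerating every ground assignment, which is the kind of saving lifted methods aim for.

    # Illustrative sketch (Python): ground vs. lifted computation of
    # P(exactly k of n interchangeable Boolean variables are true),
    # where each variable is independently true with probability p.
    from itertools import product
    from math import comb

    def ground_prob_k_true(n, p, k):
        """Ground inference: sum the probability of every one of the
        2^n joint assignments in which exactly k variables are true."""
        total = 0.0
        for assignment in product([0, 1], repeat=n):
            if sum(assignment) == k:
                prob = 1.0
                for x in assignment:
                    prob *= p if x else (1 - p)
                total += prob
        return total

    def lifted_prob_k_true(n, p, k):
        """Lifted view: the variables are interchangeable, so a single
        counting formula replaces the exponential-size sum."""
        return comb(n, k) * p ** k * (1 - p) ** (n - k)

    if __name__ == "__main__":
        n, p, k = 12, 0.3, 4
        print(ground_prob_k_true(n, p, k))   # enumerates 2^12 assignments
        print(lifted_prob_k_true(n, p, k))   # one closed-form term, same result

Both functions return the same probability; the lifted version does constant work per query instead of work exponential in n, which mirrors the speedups that motivate lifted inference.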

In this dissertation, we make a number of novel contributions to lifted inference, mainly focused on one particular inference method called lifted variable elimination (LVE). First, we bring more insight into lifted inference by defining LVE's operations on the semantic level rather than on the syntactic level, thus making them language independent. Second, we generalize the tools that LVE uses for exploiting symmetries (by introducing new lifted operations), and thus enable it to tackle a broader range of problems. Third, we prove theoretical results that identify important classes of problems for which LVE is complete (always has a lifted solution), and relate it to other lifted methods. Fourth, we provide a tool for finding and analyzing lifted inference solutions symbolically (without performing the computations), which is valuable for selecting among different solutions. Fifth, we present a lifted pre-processing method that speeds up inference by avoiding unnecessary computations, i.e., by restricting the computations to the smallest part of the model required to answer a specific query.
