This page is provided for readers who would like to print the whole newsletter at once: it contains all articles of the newsletter collated into one large HTML file. The article "Abductive logic programming for communication and negotiation amongst agents", by Fariba Sadri and Francesca Toni, can be downloaded here. To print the whole newsletter, simply use your browser's print function on this page. For the sake of ecology, we have omitted the pages corresponding to the bottom menu items.
Now that chaos has given way to silence (in terms of research funding and industrial interest), the time has come to find out what LP can really do for us and, quoting Peter Stuckey,
“What does LP bring to programming/applications that makes it better than OO/FP?”
I know for a fact that LP is great in some applications. I use it now for scheduling and for protocol verification, and there is no model checker that can beat it at those tasks. Likewise, there are hundreds of applications in which LP scores well. Nevertheless, if I walk out of my little research backyard, which deals mainly with specification, analysis and verification, then I probably don't see many of the features that could help make LP a mainstream development language.
My guess is that I am not the only one. Lately, some people in the ALP exec have been exchanging quite a few emails, triggered by Peter’s question. In the discussion, I found the following argument of Gopal Gupta particularly interesting:
“Through my own experience, I am certain that there are applications where LP's got no match (see my paper, "Semantic Filtering: LP's Killer App?" in PADL'02). The way to convince others is through actually developing these applications, and for a while we held the momentum with CLP, but even that thunder is stolen as CLP has become merely CP. The other day I met a bunch of folks from i2 technologies (a big supplier of supply-chain software in Dallas, they are big on optimization, etc.). They all knew of constraint programming, had used ILOG products, but, sadly, never had heard of constraint logic programming. I believe that embedding constraints in LP is much more natural than in OO or any other language, but aside from facilitating the birth of CP, the L in CLP has not amounted to much in the industrial world.”
My inability to fully answer Peter's question, when extrapolated to a large part of the community, can probably be counted among the reasons for the crisis of the LP paradigm. After all, I teach LP at a university, so I should know. But I would like to hear more of what industry has to say about LP-related tools. Initiatives such as PADL are very important for this: perhaps we just need more of them. Does anyone have a better idea?
That’s it, for the moment,
Social Science Informatics (SWI),
University of Amsterdam,
Roetersstraat 15, 1018 WB Amsterdam, The Netherlands,
area editor: Roberto Bagnara
We have explored the consequences of adding native preemptive threads to the originally single-threaded SWI-Prolog system. Unlike classical approaches such as Parlog, SWI-Prolog follows recent initiatives in which multiple completely separated Prolog engines share the same predicates. In this article we describe multi-threading from the Prolog user's perspective as well as from the implementation perspective. It includes metrics comparing the multi-threaded version to the single-threaded version, as well as the speedup that can be achieved using SMP hardware.
Multi-threaded SWI-Prolog is in its late beta-releases and will shortly become the default on all systems that provide the required primitives.
In today's computing world there is an important reason for concurrency: being able to talk to multiple agents in the outside world at the same time. We see programmers tackle this challenge in a variety of ways. The two most popular are embedding a Prolog engine in a concurrent programming environment (C, C++, Java, etc.), which serialises requests on the Prolog engine, and event-driven programming. Neither is very attractive. Embedding is hard to program, requires skills in at least two languages, and the resulting program is often hard to debug and maintain. Event-driven programming, realising multiple state-engines to handle events from multiple input sources, is a non-trivial task at best.
A much cleaner solution can be realised using multiple Prolog threads, especially if native preemptive threading is used, avoiding complications with blocking system calls and exploiting SMP hardware if available.
Initially most of the research aimed at running multiple provers sharing all Prolog data. This approach is difficult to implement and the role of backtracking is complicated at best. Recently a number of systems (CIAO, Qu-Prolog, Jinni) have adopted threads that share only the Prolog database. As access to the database is generally structured and localised in the implementation, the implementation is relatively simple, while programming such systems is very close to programming a classical single-threaded Prolog system.
In this article we describe our API choices and our experience implementing native preemptive threads in SWI-Prolog. In Section 2 we summarise our rationale. Next we describe what constitutes a thread and which primitives are used to make threads communicate and synchronise. Section 5 describes implementation experience, followed by performance analysis in Section 6, open issues and conclusions.
We have chosen the POSIX thread (pthread) library as a basis for its clean design and its availability on many platforms. On Windows we use a mixture of pthread-win32 and the native Win32 thread API.
A Prolog thread is a native (POSIX) thread with a Prolog engine capable of proving a goal. A Prolog engine consists of the runtime stacks and the state required for proving goals independently. In this view a Prolog term is entirely local to an engine and can only be communicated to other engines through the shared (code) area. Threads share all other global data, such as atoms, predicates, modules, records, streams, etc.
The new Prolog engine runs completely independently. If the thread is attached, any thread can wait for its completion using thread_join/2. Otherwise all its resources are reclaimed silently on completion.
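As a minimal illustration, a thread can be created attached, in which case some thread must later join it, or detached; the goals used here are placeholders of our own choosing:

```prolog
% Attached (default): a thread must call thread_join/2 to
% retrieve the exit status and reclaim the engine's resources.
?- thread_create(format('hello from a thread~n'), Id, []),
   thread_join(Id, Status).
% Status = true

% Detached: resources are reclaimed silently on completion.
?- thread_create(sleep(1), _, [detached(true)]).
```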
The most difficult aspect of multi-threaded programming is the need to synchronise the concurrently executing threads: ensuring they use proper protocols to exchange data and maintain the consistency of shared data stored in dynamic predicates.
Our primary synchronisation primitive is a FIFO (first-in-first-out) queue of Prolog terms, an approach found in other programming languages under the name port, if one queue is attached to each thread, or channel, if a queue is a stand-alone object. SWI-Prolog supports both ports and channels. Channels were added because they simplify the implementation of the popular worker-crew model significantly, as illustrated in Section 4. Queues are manipulated through a small set of dedicated predicates.
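In current SWI-Prolog the core of this API consists of message_queue_create/1, thread_send_message/2 and thread_get_message/2 (thread_get_message/1 reads the calling thread's own port); a minimal channel-based exchange looks like this:

```prolog
?- message_queue_create(Q),          % create a stand-alone channel
   thread_send_message(Q, job(42)),  % enqueue a term (does not block)
   thread_get_message(Q, Msg).       % dequeue; blocks until a term arrives
% Msg = job(42)
```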
To make consistent changes to shared data, or to execute code not designed for threading, SWI-Prolog provides mutex objects to serialise access to a code fragment. The principal API is with_mutex/2, described below. Wrapping mutex ownership in a goal allows the mutex to be unlocked on failure or exception without user concern. This is vital, as a mutex held incorrectly by a thread easily deadlocks the application. For convenience, and to simplify adding thread support to existing applications, SWI-Prolog mutex objects are recursive. To facilitate code that can run in both the single- and the multi-threaded version, the single-threaded version provides with_mutex/2 simply calling once/1.
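For example, a counter stored in a dynamic predicate can be updated atomically by serialising the retract/assert pair; the predicate and mutex names here are our own (named mutexes are created on demand):

```prolog
:- dynamic hits/1.
hits(0).

bump_hits :-
    with_mutex(hits_lock,            % serialise the update
               ( retract(hits(N)),
                 N1 is N + 1,
                 assert(hits(N1)) )).
```

Because with_mutex/2 releases the lock on failure or exception as well, a bug inside the critical section cannot leave the mutex held.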
Debugging interaction as well as manager threads are simplified using the last communication primitive, thread_signal/2, which is similar to Qu-Prolog's thread_push_goal/2. Figure 1 shows this predicate in action, starting the command line tracer in another thread, while Figure 2 shows it interacting with a worker of a work-crew.
|Figure 1: Attach a console and start the debugger in another thread.|
|Figure 2: Stopping a worker using thread_signal/2; the worker wraps its job in catch(do_work(Work), stop, cleanup).|
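A sketch of both uses, with stub definitions of do_work/1 and cleanup/0 of our own: thread_signal/2 injects a goal into the target thread, so throwing an exception there unwinds the worker to its catch/3 wrapper.

```prolog
% Figure 1 style: attach a console and start the tracer in thread Id.
debug_thread(Id) :-
    thread_signal(Id, (attach_console, trace)).

% Figure 2 style: a worker that can be stopped from the outside.
worker(Queue) :-
    thread_get_message(Queue, Work),
    catch(do_work(Work), stop, cleanup),
    worker(Queue).

stop_worker(Id) :-
    thread_signal(Id, throw(stop)).

do_work(Work) :- format('working on ~w~n', [Work]).   % stub
cleanup :- format('worker stopped~n').                % stub
```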
All predicates, both static and dynamic, are shared between all threads. Changes to static predicates only affect the test-edit-reload cycle, which is discussed in Section 5. For dynamic predicates we kept the `logical update semantics' as defined by the ISO standard. This implies that a goal uses the predicate with the clause-set found when the goal was started, regardless of whether clauses are asserted or retracted by the calling thread or by another thread. The implementation ensures consistency of the predicate as seen from Prolog's perspective. Consistency as required by the application, such as clause order and consistency with other dynamic predicates, must be ensured using the synchronisation primitives discussed in Section 3.2.
We introduced the notion of thread-local predicates to deal with code that uses dynamic predicates to store intermediate results of a computation. Purists will argue that this is almost invariably bad programming style, but it is commonly found in real-life code written in the Prolog community. Without a new construct, such code must either be rewritten to store its data in Prolog terms, which may require a complete redesign; or the assert-, retract- and query-code must add an extra argument to identify the calling thread; or the code must be fully serialised (i.e. only one thread may run it at any time). Rewriting is often infeasible, an extra argument can seriously harm performance and makes it hard to maintain a single- and a multi-threaded version in parallel, while serialisation defeats the whole point of multi-threading. The declaration below is similar to declaring foo/2 and bar/1 as dynamic and volatile.
:- thread_local foo/2, bar/1.
Thread-local predicates have a number of special properties.
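The practical effect is that each thread sees a private clause-set, initially empty, that is reclaimed when the thread dies. A small sketch, with predicate names of our own:

```prolog
:- thread_local seen/1.

demo :-
    assert(seen(main)),
    thread_create(( \+ seen(main),          % fresh thread: private, empty copy
                    assert(seen(worker))
                  ), Id, []),
    thread_join(Id, true),
    \+ seen(worker),                        % the worker's facts are invisible here
    seen(main).
```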
This section describes the implementation of a simple network service. The SWI-Prolog socket predicates are described at http://www.swi-prolog.org/packages/clib.html. Our service handles a single TCP/IP request per connection, using a specified number of `worker threads' and a single `accept-thread'. The accept-thread executes acceptor/2, accepting connection requests and adding them to the queue for the workers. The workers execute worker/1, getting the accepted socket from the queue, reading the request and executing process/2 to compute a reply and write it to the output stream. After this, the worker returns to the queue for the next request.
make_server(Port, Workers) :-
	create_socket(Port, Socket),
	message_queue_create(Queue),
	forall(between(1, Workers, _),
	       thread_create(worker(Queue), _, [])),
	thread_create(acceptor(Socket, Queue), _, []).

create_socket(Port, Socket) :-
	tcp_socket(Socket),
	tcp_bind(Socket, Port),
	tcp_listen(Socket, 5).

acceptor(Socket, Queue) :-
	tcp_accept(Socket, Client, _Peer),
	thread_send_message(Queue, Client),
	acceptor(Socket, Queue).

worker(Queue) :-
	thread_get_message(Queue, Client),
	tcp_open_socket(Client, In, Out),
	read(In, Command), close(In),
	process(Command, Out), close(Out),
	worker(Queue).

process(hello, Out) :-
	format(Out, 'Hello world!~n', []).

|Figure 3: Implementation of a multi-threaded server.|
The advantages of this implementation over a traditional single-threaded Prolog implementation are evident. Our server exploits SMP hardware and shows much more predictable response times, especially if there is a large spread in the time required by process/2. In addition, we can easily extend it with monitoring components. For example, acceptor/2 could immediately respond with an estimated reply time, or commands could be provided to examine and control the activity of the workers. All this can be achieved without complicating the implementation of process/2 with code to be aware of its environment.
We tried to minimise the changes required to turn the single-engine, single-threaded SWI-Prolog system into a multi-threaded version. As explained by Butenhof, code for multi-threaded usage should be designed starting from the shared data. Changing an existing implementation is often a compromise.
For the first implementation we split all global data into three sets: data that is initialised when Prolog is initialised and never changes afterwards; data that is used for shared data-structures such as atoms, predicates and modules; and finally data that is used only from a single engine, such as the stacks and the state of the virtual machine. For each of these we defined a large nested C-structure. In the single-threaded version we have a single globally defined instance of all three. In the multi-threaded version the local structure is dynamically allocated and associated with the thread using pthread_setspecific().
Updates to shared data were split into a number of independent subsets, each protected by its own mutex object.
In the second phase, the cost of fetching the current engine using pthread_getspecific() was reduced by caching this information inside functions that use it multiple times and by passing it as an extra variable to commonly used small functions, identified using the gprof profiling tool. Mutex contention was analysed and reduced in a number of critical places.
Normal (stack-) garbage collection is not affected by threading and continues completely concurrently as it only refers to the stacks. This is an important observation as it allows for threads monitoring and handling the external world under real-time constraints by writing them such that they do not perform garbage collections, while other threads can perform garbage collection.
Atom garbage collection is more complicated because atoms are shared resources. Atoms referenced from global data such as clauses and records use reference counting, while atoms reachable from the Prolog stacks are marked during the marking phase of the atom garbage collector. With multiple threads this implies that all threads have to mark their atoms before the collector can reclaim unused ones. Marking referenced atoms is executed asynchronously, based on signals on Unix and SuspendThread()/ResumeThread() on Windows. While atom garbage collection is in progress, threads that wish to create atoms or change the reference count of an atom (e.g. by using assert/1 on a term holding atoms) will block.
SWI-Prolog uses a sliding garbage collector. While GC is running it is very hard to mark atoms, so normal GC cannot start during atom-GC; as threads cannot access atoms during atom-GC anyhow, this is not likely to cause much further harm. Because atom-GC is such a disruptive activity, we should also avoid it being blocked by a normal GC. Therefore the system keeps track of the number of normal GC instances active. If atom-GC finds a GC in progress, it records the atom-GC request and aborts. After the last GC finishes, the atom-GC is re-scheduled.
We studied performance relative to the single-threaded version as well as the speedup on SMP systems.
We used the benchmark suite by Fernando Pereira to compare the single-threaded version to the multi-threaded version running one thread. Achieving comparable performance is important to avoid the need for two distributions, which would cause much confusion in the user community. We used a 550 MHz single-CPU Crusoe machine for our tests.
On Windows 2000, compiled with MSVC 5 professional edition, the multi-threaded version performs equally within the measuring accuracy on all tests. On Linux, compiled with gcc 2.95, the overall performance of the multi-threaded version is comparable to Windows 2000, but tests involving synchronisation are up to 30% slower, while other tests are up to 30% faster. Compiled for single-threading, the Linux version is approximately 15% faster on the same hardware. About 8% thereof was attributed to the extra variable required in many functions to keep track of the current Prolog engine.
This section describes the results of multi-threading the Inductive Logic Programming system Aleph. Aleph is an ILP system developed by Ashwin Srinivasan at the Oxford University Computing Laboratory. Inductive Logic Programming (ILP) is a branch of machine learning that synthesises logic programs using other logic programs as input. The program explores a vast search space of candidate programs using a generate-and-test approach. One of the techniques used by Aleph is called randomised local search. In this approach the system selects a number of random start points in the search space and explores some of the space around each of them. As the local searches are independent, this search technique is a good candidate for concurrency.
The Aleph implementation heavily uses dynamic predicates for keeping track of the domain as well as the search process. It uses a largely failure-driven control-structure. We realised concurrency by:
An exploratory study was performed to measure the speedup resulting from using multiple threads on an SMP machine. We realised a work-crew model implementation of randomised local search in Aleph. As the task is completely CPU-bound, we expected optimal results if the number of threads equals the number of utilised processors. The task consisted of 16 random restarts, each making 10 moves, using the carcinogenesis data set. This task was carried out using a work-crew of 1, 2, 4, 8 and 16 workers scheduled on an equal number of CPUs. Figure 4 shows the result: near-optimal speedup up to 8 processors, while using 16 processors hardly improves performance. Due to limited time and access to the hardware we have not been able to analyse the bottleneck.
|Figure 4: Speedup with increasing number of CPUs defined as elapsed time using one CPU divided by elapsed time using N CPUs. The task consisted of 16 restarts of 10 moves. The values are averaged over 30 runs. The study was performed on a Sun Fire 6800 with 24 SUN UltraSPARC III 900 MHz Processors, 48 GBytes of shared memory, utilising exclusively up to 16 processors. Each processor has 2 GBytes of memory.|
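The work-crew used for the independent restarts can be sketched as follows. This is a simplified version of our own; restart/2 stands in for one randomised local search and is stubbed out here:

```prolog
rls_concurrent(Restarts, Workers, Results) :-
    message_queue_create(Jobs),
    message_queue_create(Done),
    forall(between(1, Restarts, I),
           thread_send_message(Jobs, job(I))),
    forall(between(1, Workers, _),
           thread_send_message(Jobs, done)),    % one stop token per worker
    forall(between(1, Workers, _),
           thread_create(rls_worker(Jobs, Done), _, [detached(true)])),
    collect(Restarts, Done, Results).

rls_worker(Jobs, Done) :-
    thread_get_message(Jobs, Msg),
    (   Msg = job(I)
    ->  restart(I, Result),                     % one independent local search
        thread_send_message(Done, Result),
        rls_worker(Jobs, Done)
    ;   true                                    % 'done': exit this worker
    ).

collect(0, _, []) :- !.
collect(N, Done, [R|Rs]) :-
    thread_get_message(Done, R),
    N1 is N - 1,
    collect(N1, Done, Rs).

restart(I, result(I)).                          % stub for the real search
```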
We have described the addition of concurrency, based on native preemptive threads sharing the Prolog database, to an existing Prolog system. The implementation of such a system is relatively straightforward. The last beta-release of multi-threaded SWI-Prolog was published on April 25, 2003, and multi-threading will become the default on all platforms that can support it.
Although we have concentrated on the speedup that can be reached on SMP hardware, the experience with Aleph described in Section 6.2 illustrates that interesting speedups can indeed be achieved. In the current implementation, dynamic predicates, atom handling and meta-calling are the bottlenecks that limit the reachable speedup. We expect that the use of POSIX rw-locks and/or lock-free algorithms can improve concurrent performance.
We believe Prolog threads based on a shared database have a bright future, especially if a standardised set of primitives becomes available.
SWI-Prolog is a Free Software project which, by nature, profits heavily from user feedback and participation. We would like to express our gratitude to Sergey Tikhonov for his courage to test and help debug early versions of the implementation. The work reported regarding the Aleph system was performed jointly with Ashwin Srinivasan and Steve Moyle at the Oxford University Computing Laboratory. We gratefully acknowledge the Oxford Supercomputing Centre for the use of their system, and in particular Fred Youhanaie for his patient guidance and support.
A logical framework is a general meta-language for specifying and implementing deductive systems, given by axioms and inference rules. Recently, one major application of logical frameworks has been in the area of certified code. To provide guarantees about the behavior of mobile code, safety properties are expressed as deductive systems. The code producer verifies the program according to some predetermined safety policy and supplies a binary executable together with its safety proof (the certificate). Before executing the program, the host machine quickly checks the code's safety proof against the binary. The safety policy and the safety proofs can be expressed in the logical framework, thereby providing a general safety infrastructure. Moreover, it is possible to experiment with safety policies and verify that a program is in fact safe by executing the safety policy under a logic programming interpretation.
The Twelf system is an implementation of a logical framework based on the dependently typed lambda-calculus. By assigning an operational interpretation to types, we obtain a higher-order logic programming language. Higher-order logic programming extends traditional first-order logic programming in three ways. First, we have a rich type system based on dependent types, which allows the user to define her own higher-order data-types and supports higher-order abstract syntax. Second, we not only have a static set of program clauses; assumptions may also be introduced dynamically during proof search. Third, we have an explicit notion of proof, i.e. the logic programming interpreter not only returns an answer substitution for the free variables in the query, but also the actual proof as a term in the dependently typed lambda-calculus.
The Twelf system has been used in several projects on certified code [4,5,3]. The code size in the foundational proof-carrying code project at Princeton ranges between 70,000 and 100,000 lines of Twelf code, which includes data-type definitions and proofs. The higher-order logic program used to execute safety policies consists of over 5,000 lines of code and some 600-700 clauses. Such large specifications have put implementations of logical frameworks to the test and exposed several problems. First, performance may be severely hampered by redundant computation, leading to long response times and slow development of formal specifications. Second, many straightforward specifications of formal systems, for example recognizers and parsers for grammars, rewriting systems, and type systems with subtyping or polymorphism, are not executable, thus requiring more complex and sometimes less efficient implementations.
Here we describe three techniques to overcome some of the existing performance barriers of the logical framework and to extend its expressive power. This summarizes some recent work carried out as part of the Twelf project at Carnegie Mellon University. First, we describe the central idea behind optimizing unification in logical frameworks by eliminating unnecessary occurs-checks. Second, we present a novel execution model for higher-order logic programming based on selective memoization, and third we discuss higher-order term indexing techniques to sustain performance in large-scale examples. Experiments carried out in Twelf demonstrate that these optimizations taken together constitute a significant step toward exploring the full potential of logical frameworks in real-world applications. Although the main focus of this work has been the logical framework Twelf, we believe the presented optimizations are applicable to any higher-order reasoning system, such as Lambda Prolog or Isabelle.
The most comprehensive study so far on efficient and robust implementation techniques for higher-order unification has been carried out by Nadathur and colleagues for the simply-typed lambda-calculus in the programming language Lambda Prolog. Higher-order unification is implemented via Huet's algorithm, and special mechanisms are incorporated into the WAM instruction set to support branching and postponing unification problems. To perform an occurs-check only when necessary, the compiler distinguishes between the first occurrence and subsequent occurrences of a variable and compiles them into different WAM instructions. While for the first occurrence of a variable the occurs-check may be omitted, full unification is used for all subsequent occurrences. This approach seems to work well in the simply-typed setting, but it is not clear how to generalize it to dependent types. In addition, it is well known that Huet's algorithm is highly non-deterministic and requires backtracking. An important step toward efficient implementations has been the development of higher-order pattern unification [11,20]. For this fragment, higher-order unification is decidable and deterministic. As has been shown, most programs written in practice fall into this fragment. Unfortunately, the complexity of this algorithm is still at best linear in the sum of the sizes of the terms being unified, which is impractical for any useful programming language or practical framework.
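The cost of the occurs-check can be appreciated even at the first-order level. In SWI-Prolog, for example, ordinary unification omits the check (and here constructs a cyclic term), while unify_with_occurs_check/2 pays for a traversal of both terms:

```prolog
% Plain =/2 omits the occurs-check and builds a cyclic term.
?- X = f(X), cyclic_term(X).
true.

% The sound variant fails, at the price of traversing the terms.
?- unify_with_occurs_check(X, f(X)).
false.
```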
The author and Pfenning present an abstract view of existential variables in the
dependently typed lambda-calculus based on modal type theory. This
allows us to distinguish between existential variables, which are
represented as modal variables, and bound variables, which are
described by ordinary variables. This leads to a simple, clean
framework that allows us to explain a number of features of the
current implementation of higher-order unification in Twelf
 and provides insight
into several optimizations such as lowering and raising. In
particular, it explains one optimization called linearization, which
eliminates many unnecessary occurs-checks. Terms are compiled into
linear higher-order patterns and some additional variable
definitions. Linear higher-order patterns restrict higher-order
patterns in two ways: First, all existential variables occur only
once. Second, we impose some further syntactic restrictions on
existential variables, i.e. they must be applied to all
distinct bound variables. This is in contrast to higher-order
patterns, which only require that existential variables are applied to
some bound variables. Linear higher-order patterns can be
solved with an assignment algorithm that closely resembles first-order
unification (without the occurs-check) and is constant time.
Experimental results show that a large class of programs falls into
the linear higher-order pattern fragment and can be handled with this
algorithm. This leads to significant performance improvement (up to a
factor of 5) in many example applications including those in the area
of proof-carrying code.
The success of memoization in first-order logic programming strongly suggests that it may also be valuable in higher-order logic programming. In fact, Necula and Lee point out that typical safety proofs in the context of certified code commonly have repeated sub-proofs that should be hoisted out and proved only once. Memoization has three potential advantages. First, proof search is faster, substantially reducing the response time to the programmer. Second, the proofs themselves are more compact and smaller. This is especially important in applications to secure mobile code, where a proof is attached to a program: smaller proofs take less time to check and to transmit to another host. Third, substantially more specifications, for example recognizers and parsers for grammars, evaluators based on rewriting, or type systems with subtyping, are executable under the new paradigm, thereby extending the power of the existing system.
Using memoization in higher-order logic programming poses several challenges, since we have to handle type dependencies as well as dynamic assumptions introduced during proof search. This is unlike tabling in XSB, where there are no types and it suffices to memoize atomic goals. Moreover, most descriptions of tabling in the first-order setting are closely tied to the WAM (Warren Abstract Machine), making it hard to transfer tabling techniques and design extensions for other logic programming interpreters.
We have presented a characterization of tabled higher-order logic programming based on uniform proofs. This abstract view provides a high-level description of a tabled logic programming interpreter and separates logical issues from procedural ones, leaving maximum freedom to choose particular control mechanisms. It also facilitates applying memoization to other logic programming environments such as Lambda Prolog, or using memoization in Isabelle.
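As a first-order reference point, tabling as found in XSB (and, more recently, in SWI-Prolog) memoizes answers to atomic goals, which makes left-recursive programs such as transitive closure over a cyclic graph terminate:

```prolog
:- table path/2.

edge(a, b).
edge(b, c).
edge(c, a).                 % a cycle: untabled left recursion would loop

path(X, Y) :- edge(X, Y).
path(X, Y) :- path(X, Z), edge(Z, Y).

% ?- path(a, X).
% enumerates X = a, b and c (in some order) and terminates.
```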
We have implemented the search strategy based on memoization for the Twelf system and have conducted several experiments with parsers and recognizers for grammars, refinement type systems, reduction semantics for functional programs, and rewriting systems. Experimental results demonstrate that memoization not only allows us to execute more specifications, but also to execute them more efficiently. Since computation based on memoization terminates for programs with the bounded-term-size property and is a complete search strategy, we also obtain better and more useful failure behavior. We see this work as an important step toward a practical programming environment for developing and prototyping safety policies and deductive systems in a logical framework.
We have designed and implemented higher-order term indexing techniques based on substitution trees. Substitution tree indexing is a highly successful technique in first-order theorem proving which allows the sharing of common sub-expressions via substitutions. This work extends first-order substitution tree indexing to the higher-order setting.
In a substitution tree, a term such as pred (h (h b)) (g b) b is stored as a chain of substitutions: the root holds a generalization of the indexed terms, and each branch refines it by instantiating the remaining variables. We can obtain the first indexed term by composing all the substitutions along the left-most path.
This poses several challenges. First of all, building a substitution tree relies on computing the most specific generalization of two terms; in the higher-order setting, however, the most specific generalization of two terms may not exist in general. Second, retrieving all terms that unify or match needs to be efficient, but higher-order unification is undecidable in general. Although most specific generalizations exist for higher-order patterns, and higher-order pattern unification is decidable [11,20], these algorithms may not be efficient in practice. It is therefore not obvious that they are suitable for higher-order term indexing. Instead, we use linear higher-order patterns as a basis for higher-order term indexing. This allows us to reduce the problem of computing most specific generalizations for higher-order terms to an algorithm that closely resembles its first-order counterpart. This technique has been implemented to speed up the execution of the tabled higher-order logic programming engine in Twelf. Experimental results demonstrate that higher-order term indexing leads to substantial performance improvements (by up to a factor of 10), illustrating the importance of indexing in general.
We have presented several techniques which considerably improve the performance of higher-order reasoning systems. This is a first step toward exploring the full potential of logical frameworks in practice and applying them to new areas such as security and authentication. However, the presented techniques only begin to narrow the performance gap between higher-order and first-order systems. There are many more optimizations which have already proven successful in the first-order setting and which we may be able to apply to higher-order languages. For example, term indexing may not only be used for managing the memo-tables, but also be applied to indexing program clauses. There are also new considerations due to the dependently typed setting. For example, typing information needs to be carried around at run-time, which leads to redundancies in the representation of terms. A first step toward eliminating this kind of redundancy has been proposed by Necula and Lee for a fragment of the logical framework. Finally, the application of logical frameworks to certified code raises new questions which traditionally have not played a central role in logic programming. The central idea in certified code is not only to verify that a program is safe, but also to efficiently transmit and then check the proof. Necula and Rahul explored the novel use of higher-order logic programming for checking the correctness of a certificate. In their case, the certificate is a bit-string which encodes the non-deterministic choices of a proof and can be used to guide a logic programming interpreter. Hence, a proof can be checked by guiding the higher-order logic programming interpreter with the bit-string and reconstructing the actual proof. Here, memoization can be used to factor out common sub-proofs and eliminate the checking of redundant sub-proofs.
Hybrid Solving for CSP
Area Editor: Eric Monfroy
Constraint Satisfaction Problems (CSP) provide a general framework for modeling many practical applications (planning, scheduling, timetabling, ...). Many examples of such constraint satisfaction problems can be found in the literature. A CSP is usually defined by a set of variables, each associated with a domain of possible values, and by a set of constraints. We only consider here CSPs over finite domains. Constraints can be understood as relations over the variables; therefore, solving a CSP consists in finding an assignment of values to the variables that satisfies all these constraints. Many resolution algorithms have been proposed for this purpose, and we may distinguish at least two classes of general methods:
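As a concrete reading of these definitions, here is a minimal finite-domain solver in Python (a sketch of ours, not from the cited works): constraints are modeled as predicates over partial assignments, and backtracking search extends the assignment one variable at a time.

```python
def solve(variables, domains, constraints, assignment=None):
    """Backtracking search: extend the assignment one variable at a time,
    pruning any value that already violates some constraint."""
    assignment = dict(assignment or {})
    unassigned = [v for v in variables if v not in assignment]
    if not unassigned:
        return assignment                    # every constraint is satisfied
    var = unassigned[0]
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = solve(variables, domains, constraints, assignment)
            if result is not None:
                return result
    return None

# Example: colour three mutually connected regions with three colours.
def neq(a, b):
    # holds unless both variables are assigned and equal
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]

variables = ["x", "y", "z"]
domains = {v: [1, 2, 3] for v in variables}
constraints = [neq("x", "y"), neq("y", "z"), neq("x", "z")]
assert solve(variables, domains, constraints) == {"x": 1, "y": 2, "z": 3}
```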
A common idea for obtaining more efficient and robust algorithms consists in combining several resolution paradigms in order to take advantage of their respective assets. An overview of possible uses of local search in constraint programming can be found in the literature. Many existing works [14,16,23,22,25] propose hybridizations of local-search-based methods and constraint propagation techniques, but they often deal with a specific algorithm dedicated to a particular class of problems. Two main ideas are used to achieve the combination of complete and incomplete methods.
Note that such combinations have also been studied to improve the efficiency of evolutionary methods for CSP .
These different hybrid algorithms are applied to various constraint satisfaction problems (satisfiability in propositional logic, the Traveling Salesman Problem, vehicle routing, ...). Nevertheless, they share a common philosophy concerning the combination of complete and incomplete resolution techniques: one method is chosen as the master search process and the other is used as a kind of improvement heuristic. This hierarchical scheme is essentially fixed, and no real cooperation occurs during the resolution. Therefore, it would be interesting to provide a general framework for hybridization that addresses both the modeling and the implementation points of view. As a first attempt, an evolutionary algorithm has been proposed which includes domain reduction operators and LS strategies and allows a uniform and flexible combination.
Most existing hybrid approaches for solving CSPs are ad-hoc algorithms, which favors the development of systems whose efficiency is strongly tied to a given CSP or class of CSPs. From our point of view, a global model would allow us to abstract away from the usual master-slave scheme and would be better suited to integrating the combination strategies and the management of the different components. Therefore, we present here the main lines of an alternative and more general study:
Strategies are already used in complete constraint solvers to combine the different domain reduction and splitting operators, and in LS algorithms to manage the interaction between intensification and diversification of the search. Moreover, in a context of hybridization, one has to define possible strategies of combination and cooperation. Strategy rule languages seem a suitable framework for strategy control, ensuring the flexibility and real cooperation of the complete and incomplete resolution parts. This interaction could be managed dynamically during the search according to resolution events (e.g., if LS encounters difficulties, then domain reduction or splitting could be applied to prune the neighborhood; LS may help the complete search process when convergence is too slow; ...) and adapted to the structure of the CSP.
From a practical point of view, the previously described methods and heuristics have to share common structures and related functions, but may act on different problem representations corresponding to their particular tasks and purposes (e.g., domain reduction operators work on variable domains, LS works on variable assignments, strategies may depend on configurations already explored by the search process, ...). Therefore, software methods and tools are needed to handle these different views of a given CSP, allowing us to combine rule-based strategy programming, constraint resolution and LS metaheuristics. This involves software engineering aspects (e.g., using UML) in order to provide a generic solver suitable for implementing and testing various combinations of constraint propagation operators and LS metaheuristics with different strategies.
We warmly thank Laurent Granvilliers and Eric Monfroy for their helpful remarks and comments.
Roberto Bagnara, Manuel Carro
We are in the process of producing an enhanced version of the paper "Foreign Language Interfaces for Prolog: A Terse Survey" which was published some time ago by the ALP Newsletter (http://www.cwi.nl/projects/alp/newsletter/). The objectives of the new version include:
1) extending the survey to take into account some important aspects that were previously neglected,
2) having more systems surveyed.
Concerning the first point, we would like, at least, to
- evaluate the interaction between garbage collection and the foreign language;
- analyze the different ways a C program may invoke Prolog;
- deepen the discussion about exception handling from C.
We would like to hear opinions about other features people consider important in a foreign language interface.
Concerning point 2 above, we would like (within our resources) to survey all
the Prolog systems to which we can obtain access (to the documentation, at the
very least). We would also appreciate
corrections concerning the systems and features we have evaluated so far. The current draft of the paper can be accessed at the URI http://www.cs.unipr.it/~bagnara/Papers/Prolog-FLI-survey2-draft
We are looking forward to receiving input from anyone.
All the best
Department of Mathematics
University of Parma
Department of Languages and Systems
Technical University of Madrid
Term Rewriting Systems
Cambridge Tracts in Theoretical Computer Science 55
Cambridge University Press 2003, ISBN 0521391156, Hardback, £90.00
906 pages, 273 line diagrams, 500 bibliographical references, 2500 index entries, 334 exercises
solutions to selected exercises will appear on the book website
Term rewriting systems (TRSs) developed out of mathematical logic and are an
important part of theoretical computer science. They describe sequences of discrete
transformation steps where one term is replaced with another. TRSs have applications
in many areas, from functional programming to automatic theorem proving and
computer algebra. This book starts at an elementary level with the earlier chapters
providing a foundation for the rest of the work. The later chapters contain
advanced material, appearing here for the first time in book form. Subjects treated include confluence, termination, orthogonality, completion, lambda calculus, higher-order rewriting, infinitary rewriting and term graph rewriting. Many exercises are included with selected solutions provided on the web. A chapter is included presenting applications of term rewriting systems, with many pointers to actual
implementations. A comprehensive bibliography makes this book ideal both for teaching and research. The necessary mathematical background has been collected in an appendix.
1. Abstract reduction systems
2. First-order term rewriting systems
3. Examples of TRSs and special rewriting formats
5. Properties of rewriting: decidability and modularity
7. Completion of equational specifications
8. Equivalence of reductions
10. Lambda calculus
11. Higher order rewriting
12. Infinitary rewriting
13. Term graph rewriting
14. Advanced ARS theory
15. Rewriting-based languages and systems
Appendix. Mathematical background.
Knowledge Representation, Reasoning and Declarative Problem Solving
Cambridge University Press, 2003
Morgan Kaufmann, 2003
Principles of Constraint Programming
Krzysztof R. Apt
Cambridge University Press, 2003
Essentials of Constraint Programming
Slim Abdennadher, Thom Fruehwirth
Springer Verlag, 2003
Logic and Learning
John W. Lloyd
Springer Verlag, 2003
TPLP is the sole official journal of the ALP. Please note that individual subscription rates are extremely affordable. Here are the 2003 prices.
$322 to institutions as print+online
$300 to institutions as print-only
$300 to institutions as online-only
$80 to individuals
$60 to ALP members
$57 for single issues.
Another LP baby
Beatrice Bagnara was born on Thursday, the 24th of July, 2003. Beatrice's weight was 3.4 kg and she is doing very well.
Congratulations to Roberto and Rossana!
International Workshop on Software Verification and Validation
In Conjunction with ICLP 2003. Mumbai (India), December 2003
Deadline for submission: 15 September.
For details, see http://www.comp.nus.edu.sg/~abhik/SVV03/index.html
DCG's and Haskell Monads
Subject: DCG's compared to Haskell monads
Monads are a tool to make the state of a computation implicit.
DCG's are a tool for serializing computation, and making the threading of state implicit.
I'm just wondering how close this is to the monad concept.
Alan Baljeu wrote:
>can DCG style be used to implicitly manage side-effecty code in prolog, like
>monads do for Haskell?
Yes. This approach is used in Mercury <http://www.cs.mu.oz.au/mercury/>,
where the state-of-the-world is represented as an implicit DCG parameter.
However, this approach also relies on Mercury's unique mode support
(which is essentially a directional linear type system -- it is similar
to unique types in Clean) to ensure that there is only one reference to
the state-of-the-world at any point in time. It also relies on Mercury's
support for determinism analysis, to ensure that "side-effecty" code won't
fail and can't be backtracked over. Mercury's determinism analysis in
turn also depends on Mercury's type and mode systems. (Other approaches
to determinism are possible, though, some of which do not rely on mode
analysis -- for example see the language TEL, which is described in
Gert Smolka's PhD thesis.)
Fergus has mentioned some constructs in Mercury. Also,
Peter van Roy worked on various variations of the DCG concept a
number of years ago, IIRC. They are the same kind of thing in a
loose sense: a way of getting state into a declarative
computation in a declaratively principled way.
However, the monad concept refers to something very
specific: a data object with some specific operations that do
specific things, which (in a principled functional setting) are
accomplished by making the data object a fairly complex
expression involving lots of lambdas. The only way you could do
that in a declaratively principled way in logic programming is
with a higher order LP language, like Lambda Prolog or Twelf.
The DCG idea is essentially sophisticated syntactic sugar for
extra parameters, which is a different kind of thing.
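The contrast can be sketched outside both languages. In this Python illustration (entirely ours, not from the thread), the DCG style threads an explicit state pair through every call, while the monadic style packages the same threading into a `bind` on a data object:

```python
# DCG style: each "predicate" takes the state in and hands the state out,
# just like the hidden S0/S argument pair the DCG translation inserts.
def emit_dcg(x, s0):
    return s0 + [x]                       # S0 --> S

def greet_dcg(s0):
    s1 = emit_dcg("hello", s0)
    s2 = emit_dcg("world", s1)
    return s2

# Monadic style: a state transformer is a function state -> (result, state),
# and `bind` composes two transformers, hiding the plumbing.
def unit(a):
    return lambda s: (a, s)

def bind(m, f):
    def run(s):
        a, s1 = m(s)
        return f(a)(s1)
    return run

def emit_m(x):
    return lambda s: (None, s + [x])

greet_m = bind(emit_m("hello"), lambda _: emit_m("world"))

assert greet_dcg([]) == ["hello", "world"]
assert greet_m([]) == (None, ["hello", "world"])
```

Both produce the same threaded state; the difference is exactly the one described above: sugar for extra parameters versus a first-class object with composition operators.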
In Googling to see if anyone had actually worked on this
area, I see that our own Paul Tarau (with Yves Bekkers) has
a paper called "Monadic Constructs for Logic Programming",
available on the Web. I haven't looked at the paper closely,
but it does seem to use Lambda Prolog.
Logic Programming Rediscovered
Subject: amazing new facts about (logic) programming and Prolog :-)
Apparently Prolog has been re-discovered. Behold the hot new trend of "rules-based programming".
From: Cesar Rabak
Subject: Re: "rules-based programming"
It seems to me that, au contraire, the article itself shows Prolog as
old-fashioned (even referring to the dust on the '85 copy of the C & M book),
and goes on to describe rule engines in other environments and on other
platforms; in the last paragraph a mention of Prolog seems to relegate it to
toy problems and attributes to "Web services" the shift toward processing of
business processes and objects.
So for the masses, Prolog has not been re-discovered but only mentioned
as a language which is not in its 'heyday'. . .
This is NIN (Not Invented Now) syndrome. The Young Generation of
so-called "software engineers" has the opinion that "everything that
was invented before the Internet Era is garbage". Plus the fact
that the average "information technology worker" reads one technical
book every 3 years...
Whilst I might agree in general (cf. the fuss about Beowulfs, or "on
demand" rented computing power), I read the article in question a bit
differently from others in this thread...
I took the "sorting out this mess" to refer to current "Enterprise
Computing" (ah yes, how do we make "business computing" sound more
interesting? name it after a fictional starship! )
Ie. Perhaps "rule-based programming" will sort out the current
business "mess" and help to extract useful information from all of
those databases. cf. references to XSLT and SQL.
Maybe he's right - perhaps bulk computing power & "useful"
databases have both grown to the point where this might happen.
( as an aside: I'm currently taking a post-grad course in Operations
Management as part of an MBA - I can see how rule-based programming
could be useful, assuming it isn't being used already )
Richard (Winwaed Software Technology) wrote:
> Whilst I might agree in general (cf. the fuss about Beowulfs, or "on
> demand" rented computing power), I read the article in question a bit
> differently from others in this thread...
> I took the "sorting out this mess" to refer to current "Enterprise
> Computing" (ah yes, how do we make "business computing" sound more
> interesting? name it after a fictional starship! )
> Ie. Perhaps "rule-based programming" will sort out the current
> business "mess"
I understand it that way, too, though the article also says that rules-based
programming might "create a different kind of mess". But that's not
the point here, which is rather: the article says that rules-based
programming might only be of value if done with Java, C# and XML. This is,
I believe, what Andrzej calls the NIN syndrome: that Prolog, just because
it's older than Java et al., is not considered an option, even though it
could do the task just as well as the new "tools" (another one of these
damn hype-words) could.
Look at the article:
"Don't go hunting in the attic for your 5-1/4-inch Turbo Prolog floppy,
OK, I agree: there are more modern, 32-bit, ISO-compatible systems out
there. But others might disagree, as some here seem to like Turbo Prolog a
lot, and I certainly do not want to start another discussion about that
here, because this is not the point; it's rather Prolog vs. hype-"technology"
like Java-based stuff.
"Although Jess, a Java-based engine developed at Sandia Labs, has roots in
both Lisp and Prolog, the emerging business-rules engines work with
languages such as Java and C#, read XML-rule declarations, and are packaged
for J2EE and/or .Net deployment."
Now, what's the point here? Why is something so much better just because it
is Java-, C#-, XML- or whatever-the-current-hype-might-be-based than
Prolog? The article does not explain; it just says "Prolog is old, so
Prolog is bad, Java is new, so Java is good".
"In Prolog's heyday, rules engines spent a lot of time solving old
chestnuts such as the Tower of Hanoi puzzle. Business objects and processes
weren't represented in ways that made them accessible to automatic
That's probably because the typical Prolog programmer is too intelligent to
deal with boring business stuff.
"Thanks to the Web services movement, that's changing big time."
Ah, there it is again, the nice hype-of-the-day.
Prolog and the Semantic Web
From: Giovanni Tummarello
I am working on a semantic web project (my PhD thesis)
and one of the
In a poster sent to the WWW conference in Hungary I (among other
Only one reviewer commented about this, saying
"To define Prolog as a query language is quite weird. To exaggerate
I am not sure he can seriously say that; I mean, Prolog is so ideal
Does anyone have any ideas about this issue?
I believe Prolog could have a great role in the semantic web (think
Would the use of constrained Prologs help this? I was looking into SWI
From: Jan Wielemaker
Subject: Re: Prolog as a semantic web query language?
Prolog may not be suitable to reason in extremely large domains, but it
is surely a very nice language to reason about the implications of
reasonable-sized collections of triples. We have been using SWI-Prolog
with the native Prolog database for a little more than a million triples.
Recently I started a C-extension that represents the triples as Prolog
atoms, with indexing targeted at frequent queries. With this
we can load our current set of about 1.7 million triples in less than
10 seconds from a file in internal format, or in a few minutes from RDF/XML
(AMD 1600+). The size of the resulting Prolog process is about 200MB.
Query speed is well over a million lookups per second, regardless of
the size of the triple-set. The low-level engine already handles the
subPropertyOf relation internally.
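A toy version of such a store can convey the design: atoms become interned values, and per-position indexes serve the frequent query patterns (every name below is our own illustration, not SWI-Prolog's actual API):

```python
from collections import defaultdict

class TripleStore:
    """Minimal indexed (subject, predicate, object) store."""

    def __init__(self):
        self.triples = []
        self.by_sp = defaultdict(list)   # (subject, predicate) -> triple ids
        self.by_p = defaultdict(list)    # predicate -> triple ids

    def add(self, s, p, o):
        tid = len(self.triples)
        self.triples.append((s, p, o))
        self.by_sp[(s, p)].append(tid)
        self.by_p[p].append(tid)

    def query(self, s=None, p=None, o=None):
        # pick the most selective available index, then filter the rest
        if s is not None and p is not None:
            ids = self.by_sp[(s, p)]
        elif p is not None:
            ids = self.by_p[p]
        else:
            ids = range(len(self.triples))
        for tid in ids:
            ts, tp, to = self.triples[tid]
            if (s is None or ts == s) and (p is None or tp == p) \
               and (o is None or to == o):
                yield (ts, tp, to)

db = TripleStore()
db.add("amsterdam", "in", "nl")
db.add("paris", "in", "fr")
assert list(db.query(p="in", o="fr")) == [("paris", "in", "fr")]
```

The query speed Jan reports comes from exactly this kind of index selection: a lookup on an indexed pattern never scans the whole triple set.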
Actually it was a nice exercise. The version I wrote 2 years ago was
clearly slow and wasted a lot of memory. A few months ago, equipped with
new experience, I wrote a new triple representation and query layer in
Prolog in 2 days. Some rough estimates showed this was not really
suitable to get our job done on a laptop. So, I decided to redo the
low-level part in C, saving some bits and getting more speed. It turned out
it took well over a week to get where I was with the Prolog version. The
result requires half the memory and works about 30% faster. It is
probably still not bug-free. OK, it's just enough to fit our data on a
laptop, but I expected much bigger gains from moving to C.
All this is still work in progress. If you're interested,
I can provide
you with access to the prototype. Just drop me a mail.
Giovanni Tummarello wrote:
> I believe Prolog could have a great role in the semantic web (think
> "topic maps", "rdf"), especially when it becomes more down to the
> people, but some people that I hear from told me to just forget about it
> since it's generically "too slow for any real graph exploration".
> Would the use of constrained Prologs help this? I was looking into SWI
> Prolog for its very open licence (LGPL) but they do not seem to offer
> support for constraints..
> Not having any in-the-field experience in this, I ask you all: would
> that be a real obstacle in the end, or is it just something that can be
> emulated with clever Prolog programming?
Henk van de Casteele wrote an 8-queens solver in plain Prolog that was
faster than the constraint one at the Cyprus ICLP contest :-) On the
other hand, constraints may be useful in some areas. Possibly tabling
is even more useful, though. It all depends on what you want to do.
From: Paul Tarau
Subject: Re: Prolog as a semantic web query language?
I have looked at Jan's Prolog based RDF parser - and it is definitely
very nice work! Still, accommodating very large RDF tuple spaces (and
very large XML collections of objects in general), given that most of
the time only a small subset needs to be searched/processed, can be
done quite well with a combination of event driven XML parsers (like
Jinni 2003's Java based SAX parser - see
http://www.binnetcorp.com/Jinni ) using an XPath-like query language
and a memory cache based interface to large external Prolog fact
databases. Using an event driven parser avoids keeping large XML trees
in memory and having a cache for external Prolog facts ensures that
other large data is only brought into memory as needed. The
combination of these can keep the size of a Prolog process under
control and it is likely to improve overall performance.
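The event-driven idea can be sketched with Python's built-in SAX parser (the XML shape, tag names and query below are invented for illustration): the handler keeps only the matching triples, and a full document tree is never materialised.

```python
import xml.sax

class TripleHandler(xml.sax.ContentHandler):
    """Streaming handler: retains only triples matching the queried predicate."""

    def __init__(self, wanted_pred):
        super().__init__()
        self.wanted = wanted_pred
        self.matches = []

    def startElement(self, name, attrs):
        # each element is seen once as an event; non-matches are discarded
        if name == "triple" and attrs.get("p") == self.wanted:
            self.matches.append((attrs["s"], attrs["p"], attrs["o"]))

doc = """<doc>
  <triple s="amsterdam" p="in" o="nl"/>
  <triple s="tokyo" p="in" o="jp"/>
  <triple s="tokyo" p="pop" o="13M"/>
</doc>"""

handler = TripleHandler("in")
xml.sax.parseString(doc.encode(), handler)
assert len(handler.matches) == 2
```

Pairing such a streaming pass with a fact cache, as suggested above, keeps only the queried subset of a large XML collection in the Prolog process.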
From: Alexei A. Morozov
Subject: Re: Prolog as a semantic web query language?
I agree that Prolog is the best choice for automating the Semantic Web.
There were some projects on Web logic programming based on Prolog and on
the Datalog ideology (the difference is that Datalog implements bottom-up
inference). From my point of view, the bottom-up approach is not flexible
enough for analyzing semi-structured data on the Web. The details of
our approach can be found in our English paper.
We have some working examples of Web agents as well.
From: Oskar Bartenstein
Subject: Re: Prolog as a semantic web query language?
On the relation of Prolog, Semantic Web, XML you might
want to check some of the papers presented at INAP2001
The papers are all online, linked from the Program page.
Revised versions of selected papers are available in book form:
Springer LNAI 2543
Web Knowledge Management and Decision Support
Special Issue on Answer Set Programming
Guest editors: Chitta Baral, Ale Provetti and Tran Cao Son
The files below are the final versions of the papers submitted by the authors.
The definite, published versions of the papers are available from
the TOCL home page within the ACM Digital Library.
Volume 4, Number 3 (July 2003)
, Erich Grädel, Joseph Y. Halpern, Radha Jaghadeesan and Adolfo Piperno
The huge increase in interconnectivity we have witnessed in the last decade has boosted the development of systems which are often large-scale, distributed, time-critical, and possibly acting in an unreliable or malicious environment. Furthermore, software and hardware components are often mobile, and have to interact with a potentially arbitrary number of other entities.
These systems require solid formal techniques for their specification, analysis and verification.
In this respect, computational logic plays an increasingly important role. Concerning the specification aspect, one can think, for instance, of the specification of a communication protocol in temporal logic. Such a specification offers the advantage that one can reason about it using formal methods, while at the same time it is often easily executable by rewriting it into a logic-based programming language. In addition, computational logic plays a fundamental role by providing formal methods for proving a system's correctness and tools - e.g. using techniques like constraint programming and theorem proving - for verifying its properties.
This special issue is inspired by the homonymous workshops that took place during ICLP 2001 and ICLP 2002. Nevertheless, submission is open to everyone.
The topics of interest include but are not limited to:
Deadline for submission: November 15, 2003
Notification: May 1, 2004
Revised paper due: October 1, 2004
Authors of submitted papers are encouraged to post their articles at CoRR, thereby helping timely distribution of the scientific works.
Theory and Practice of Logic Programming (TPLP, see also http://www.cwi.nl/projects/alp/TPLP/index.html) is published by the Cambridge University Press, and is the sole official journal of the Association for Logic Programming.