Contents
Vol. 19 n. 2, May 2006






Editorial
Enrico Pontelli

Dear Logic Programmers,
Welcome to the May 2006 issue of the ALP Newsletter.
The summer is finally here! This means the end of the semester, students scrambling through their final exams, and more time for research.

ICLP 2006 is coming our way - Sandro and Mirek are working hard to assemble an exciting conference, and I hope as many of you as possible will join us in Seattle (the city of Starbucks, the Sonics, the Sci-fi Museum, Jimi Hendrix...). ICLP, as you probably know, is going to be part of the Federated Logic Conference (FLoC), which includes 6 key events (ICLP, CAV, IJCAR, LICS, RTA, and SAT) along with more than 40 workshops and about 20 plenary, keynote, and conference invited talks (with names such as Dana Scott, Tony Hoare, Monica Lam). And, of course, the chance to meet your fellow LPers!

I would also like to extend my personal invitation to join us for the 2nd Doctoral Consortium on Logic Programming, which will feature 7 outstanding doctoral students. Please come and support the new generation of logic programming researchers.

Regarding this issue of the newsletter, we have two nice contributions related to different aspects of concurrency (by A. Di Pierro, H. Wiklicky, and Jeff Polakow). I am also happy to see that our project of providing some personal and historical perspectives on LP has taken off; after the nice contribution by Dale Miller in the previous issue, we have two outstanding papers by V. Dahl and K. Ueda. I am sure you will enjoy reading these. I have also received commitments to provide contributions of this type from several renowned experts. I would like to thank Dale, Veronica, Kazunori, and all the others who are making all this possible.

Last, but not least, we are welcoming one more addition to our growing list of Area Editors. Axel Polleres has joined our team to provide insights and articles dealing with the use of logic programming technology in the world of the World Wide Web and the Semantic Web. This is an important growth area for LP, and Axel has the knowledge and expertise to guide us in learning about it.

To bring this one to closure, I would like once again to welcome your comments/criticisms/suggestions/... on how you would like to see the ALP Newsletter evolve in the near and not-so-near future.
           ‘till the next one.
                                   Enrico




ALP Executive Committee Report


Editor: Maria Garcia de la Banda




Over the last few months the ALP Executive Committee has discussed a wide range of issues. Perhaps one of the most relevant to the community is the position of the ALP regarding the publication of the proceedings for ICLP workshops. After a lively discussion, the EC agreed to release the following policy:

A note from the EC on ICLP workshops
ICLP workshops are normally held to discuss new and typically unfinished ideas. Thus, it is important to ensure that reworked/extended versions of workshop papers can be published later in conferences. As a result, it does not seem appropriate to have workshop "proceedings" published in formal ways, and especially not by publishers which pose any restrictions on copyright. This is independent of whether the publication method is on-line or via a physical volume.

At the same time, it is important to make these workshop results as easily accessible as possible. On-line repositories which do not put restrictions on copyright seem the ideal solution, since they are more generally accessible than the physical and informal proceedings handed out at workshops. In particular, CoRR is one of the most widely known such repositories, and it is also the one chosen by TPLP for keeping on-line copies of papers published in the journal. Also, CoRR is indexed by DBLP.

Given these considerations, the ALP EC has developed the following guideline on how the proceedings of workshops associated with ICLP may be published:

"ICLP workshops should not publish their proceedings in venues which impose any restrictions on copyright. Simple informal proceedings (no ISBN, etc.) should be used. At the same time, ICLP workshops should make every effort to publish their proceedings in CoRR in order to make them available on-line and indexed by DBLP. This publication policy should be stated clearly in the call for papers. If at some point an ICLP workshop feels that it has reached a level that justifies formal publication, it is then perhaps preferable to change the name to symposium or conference, ideally keeping co-location with ICLP."

Actions:
Items Currently Under Discussion:




From a Concurrent Logical Framework
to
Concurrent Logic Programming

Jeff Polakow
Carnegie Mellon University
USA

Editor: Brigitte Pientka




LolliMon [9] is a new monadic concurrent linear logic programming language which grew out of the analysis of the operational semantics of term construction for the concurrent logical framework (CLF) [16,15,4]. Alternatively, LolliMon can be seen as an extension of the linear logic programming language Lolli [8] with monadically encapsulated synchronous [1] formulas. The use of a monad allows LolliMon to cleanly integrate backward chaining and forward chaining proof search in one language; forward chaining "concurrent" executions are cleanly encapsulated, via the monad, inside backward chaining "serial" executions. This article attempts to elucidate the transition from logical framework to logic programming language; we will also point out some interesting aspects of our implementation and directions for future work. A prototype distribution of LolliMon is available online at: www.cs.cmu.edu/~fp/lollimon.

CLF conservatively extends the linear logical framework (LLF) [3], itself a linear extension of the Edinburgh logical framework (LF) [7], with constructs allowing for the elegant encoding of concurrent computations. The LF family of logical frameworks are all based on dependently typed lambda calculus and rely upon the notion of canonical forms to ensure adequacy of object system1 representations; therefore these logical frameworks only contain types which admit canonical forms.

The existence of canonical forms for the LF family of logical frameworks additionally allows an operational semantics for term construction -- finding a term of a given type -- which corresponds to higher-order logic programming [13]. The type language of LF corresponds to (the freely generated fragment of) the formula language of $\lambda$-Prolog [11], and the operational semantics of term construction and proof search, respectively, for the two systems coincide. Similarly the type language of LLF corresponds to the formula fragment of Lolli, a linear extension of $\lambda$-Prolog, and their operational semantics also coincide.

Although $\lambda$-Prolog and Lolli were developed before LF and LLF, one can view the two logic programming languages as implementations of the term construction algorithms for the two logical frameworks. The notion of uniform proof [12], which is central to Prolog style logic programming and corresponds to canonical forms via the Curry-Howard isomorphism (and a translation from natural deduction to sequent calculus), provides the necessary mechanism for finding canonical terms of a given type. Similarly techniques for linear resource management [2,14,10], which constrain the non-determinism of linear context splitting inherent in backward chaining linear logic proof search, provide the necessary machinery for tractable linear lambda term construction.

A significant property of the type systems for LF and LLF is that the shape of a term, of non-atomic type, is independent of the current variable context; types with this property are called asynchronous [1]. Thus a term of type $\alpha \rightarrow \beta$ will have the form $\lambda x\!:\!\alpha.\,m$, and a term of type $\alpha\mathbin{\&}\beta$ will have the form $\langle m,n\rangle$; this allows proof search in $\lambda$-Prolog and Lolli to be goal directed. However the type language of CLF, which includes synchronous types such as $\otimes$, does not have this property. A sequent derivation of $\alpha \otimes \beta \Longrightarrow \alpha\otimes\top$ must first decompose the hypothesis before the goal:

\begin{displaymath}
\infer{\alpha \otimes \beta \Longrightarrow \alpha\otimes\top}
      {\infer{\alpha, \beta \Longrightarrow \alpha\otimes\top}
             {\alpha \Longrightarrow \alpha & \beta \Longrightarrow \top}}
\end{displaymath}

In general, the derivation of any synchronous formula can require decomposing hypotheses before the synchronous goal. To allow synchronous types and still have a notion of canonical forms, CLF introduces a monadic type constructor, {}, which is used to mark all occurrences of synchronous types; thus CLF disallows terms of type $\alpha\otimes\beta$, but does allow terms of type $\{\alpha\otimes\beta\}$. By restricting synchronous types to occur only within the monadic type constructor, CLF regains a notion of canonical forms where a term of type $\{\alpha\}$ has the shape $\{\textbf{let } \{p_1\} = e_1 \textbf{ in } \ldots \textbf{ let } \{p_n\} = e_n \textbf{ in } e\}$, where each $p_i$ is a variable pattern whose shape is determined by the type of $e_i$, a term representing the decomposition of some monadic hypothesis; $e$ is a term of type $\alpha$; and we have overloaded {} to also be a term constructor.

In other words, canonical terms of monadic synchronous type are a (possibly empty) sequence of operations on hypotheses followed by a term of the appropriate type; this exactly captures the need to decompose hypotheses before the goal when deriving a synchronous formula. The preceding notion of canonical form for CLF places no constraints on the sequence of hypothesis operations executed before returning to the goal. In this way synchronous types resemble atomic types: there is not necessarily a single term for a given atomic type. However, restricting the occurrence of synchronous types to inside the monad effectively encapsulates this uncertainty in an otherwise canonical term.

Operationally, the derivation of a monadic goal entails a switch from backward chaining, goal directed proof search to forward chaining, context directed2 proof search; after enough forward chaining has taken place, search can resume its goal directed nature. Determining when to stop forward chaining is a matter of proof search strategy; there is in general no way of deciding when to stop, short of a global analysis which would be tantamount to finding a complete derivation. One strategy, which we have adopted in the implementation of LolliMon, is to forward chain until the context reaches a fixpoint.
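The saturation strategy described above can be sketched in a few lines of Python (a toy illustration with hypothetical ground rules, not LolliMon code): every rule is retried until a full pass adds nothing new to the context.

```python
# Sketch: forward chaining over ground facts until the context reaches a
# fixpoint -- the "forward chain until the context saturates" strategy.
# Rules are hypothetical pairs (premises, conclusion) over hashable facts.

def saturate(facts, rules):
    """Apply every rule repeatedly until no rule adds a new fact."""
    context = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in context for p in premises) and conclusion not in context:
                context.add(conclusion)
                changed = True
    return context

# Example: transitive closure of an "edge" relation, encoded as tuples.
rules = [
    ((("edge", "a", "b"), ("edge", "b", "c")), ("edge", "a", "c")),
    ((("edge", "a", "c"), ("edge", "c", "d")), ("edge", "a", "d")),
]
ctx = saturate({("edge", "a", "b"), ("edge", "b", "c"), ("edge", "c", "d")}, rules)
```

Note that termination is a property of the rule set, not of the loop: with rules that keep generating fresh facts, saturation is never reached, mirroring the undecidability remark above.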

The monadic formula constructor was intended to encapsulate synchronous formulas; however, its operational significance, i.e. switching from backward chaining to forward chaining, makes it useful even in the absence of synchronous formulas. In particular, the monad can be used, in conjunction with embedded implications, to syntactically separate distinct phases of a forward chaining computation [9].

Although there are no constraints placed on the sequence of forward chaining operations in a monadic term, the order of any two independent operations may be switched without affecting the meaning of the term. Intuitively, the operations on hypotheses all happen concurrently, with an ordering of events forced only by data dependencies and resource contention (when two operations require the same linear hypothesis). Therefore the definition of equality for CLF allows for directly encoding true concurrency; the system cannot differentiate between two terms which differ only in the order of independent operations. Many concrete examples of CLF encodings are provided in the technical report by Cervesato et al. [4].

The faithful operational translation of CLF's concurrent equality would be to treat (monadic headed) hypotheses as concurrent processes, similar to rewriting rules [6] or linear concurrent constraints [5], which are turned on during a forward chaining stage. The current prototype version of LolliMon does not attempt concurrent execution and instead randomizes the order in which it tries to use each monadic hypothesis. Additionally, true concurrency is approximated by making the forward chaining phase of proof search committed choice. While this approach does prevent the system from finding two equivalent proofs, it is too restrictive and leads to incompleteness. We plan to explore truly concurrent execution in the future.

The implementation of LolliMon requires the interleaving of backtracking, goal directed proof search with committed choice, forward chaining proof search. As already stated, a goal formula can have a monadic goal nested inside and cause backchaining search to switch to forward chaining search. Likewise, a backchaining derivation can be spawned in the middle of forward chaining when decomposing an implication hypothesis. The arbitrary nesting of the two different search strategies poses interesting implementation challenges. The current version of LolliMon uses both success and failure continuations to manage control flow. In order to interleave backtracking proof search with committed choice search, the failure continuation values are structured to reflect the nesting of backtracking and non-backtracking phases. Although this approach seems to work reasonably well, it feels too ad hoc and we believe LolliMon's operational semantics and control flow can be much more cleanly specified.
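As a rough illustration of this control-flow structure (a sketch over a deliberately tiny goal language of our own, not LolliMon's actual implementation), the following interpreter threads a success continuation and a failure continuation, and models a committed-choice phase by discarding the failure continuation:

```python
# Sketch (an assumption, not LolliMon's code): proof search with success and
# failure continuations.  A backtracking disjunction pushes the untried
# alternative onto the failure continuation; entering a committed-choice
# region simply drops the failure continuation, so no backtracking escapes it.

def solve(goal, sk, fk):
    """goal: ('fact', bool) | ('or', g1, g2) | ('commit', g)."""
    tag = goal[0]
    if tag == 'fact':                     # leaf: succeed or fail outright
        return sk(fk) if goal[1] else fk()
    if tag == 'or':                       # backtracking disjunction
        return solve(goal[1], sk, lambda: solve(goal[2], sk, fk))
    if tag == 'commit':                   # committed choice: no way back out
        return solve(goal[1], sk, lambda: False)

# The first branch fails, so search backtracks into the committed branch.
result = solve(('or', ('fact', False), ('commit', ('fact', True))),
               lambda fk: True, lambda: False)
```

Structuring the failure-continuation values to record where committed regions begin, as the article describes, amounts to replacing the bare `lambda: False` with a marker that truncates backtracking only up to the enclosing phase boundary.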

In order to check whether or not the context has reached a fixpoint, or has saturated, we need to know whether a given formula is already in the context. To keep the saturation check tractable, the current implementation uses a version of a discrimination tree to store the entire context; thus we can also make use of indexing to solve atomic goals more efficiently. In order to match the depth-first Prolog-style search semantics, we sacrifice some amount of sharing to maintain clause order information when storing backchaining program clauses; however, monadic clauses, and clauses assumed during forward chaining, are always stored with maximal sharing.
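The membership side of the saturation check can be illustrated with a toy trie over flattened terms (a stand-in for a real discrimination tree, which would additionally handle variables and sharing):

```python
# A minimal sketch of the indexing idea: store each formula as a path of
# symbols in a trie, so the "is this formula already in the context?" check
# walks one branch instead of scanning every stored formula.  This is a toy
# stand-in for a discrimination tree; it handles only ground, flattened terms.

class Trie:
    def __init__(self):
        self.children = {}
        self.present = False

    def insert(self, symbols):
        """Insert a flattened term; return True if it was not there before."""
        node = self
        for s in symbols:
            node = node.children.setdefault(s, Trie())
        was_new = not node.present
        node.present = True
        return was_new

index = Trie()
index.insert(("edge", "a", "b"))        # new formula: the context changed
duplicate = index.insert(("edge", "a", "b"))  # False: saturation check hit
```

The shared prefixes in the trie are exactly the "maximal sharing" the article mentions for monadic clauses.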

Although we can efficiently decide whether or not a given formula is already in our context, we do not yet have a good way of reaching context saturation during forward chaining. Currently, our method of reaching saturation is to try every available formula until none produces a change; if any formula produces a change in the context during a pass, we schedule another pass in which every formula will be tried again. This is highly inefficient, since many formulas will be independent of each other. We are currently working on adding information to our discrimination tree structure which will let us turn off a branch of the tree until some specific event triggers the formulas in that branch to come back alive. We hope to be able to leverage standard techniques from data flow analysis for logic programs and spreadsheets.
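One standard remedy, sketched here as an assumption rather than the authors' planned design, is to index rules by the predicates they mention and, after each new fact, re-examine only the rules watching that predicate instead of re-running a full pass:

```python
# Agenda-based saturation (a sketch): rules are indexed by the predicates in
# their premises, so adding a fact wakes up only the rules that mention it.
# Rules are hypothetical (premises, conclusion) pairs over ground tuples.

from collections import defaultdict, deque

def saturate_agenda(facts, rules):
    watching = defaultdict(list)          # predicate symbol -> rules using it
    for rule in rules:
        for premise in rule[0]:
            watching[premise[0]].append(rule)
    context = set()
    agenda = deque(facts)
    while agenda:
        fact = agenda.popleft()
        if fact in context:
            continue
        context.add(fact)
        for premises, conclusion in watching[fact[0]]:
            if all(p in context for p in premises) and conclusion not in context:
                agenda.append(conclusion)
    return context

rules = [
    ((("edge", "a", "b"), ("edge", "b", "c")), ("edge", "a", "c")),
    ((("edge", "a", "c"), ("edge", "c", "d")), ("edge", "a", "d")),
]
ctx = saturate_agenda({("edge", "a", "b"), ("edge", "b", "c"),
                       ("edge", "c", "d")}, rules)
```

This is the same "turn a branch off until an event re-triggers it" intuition as the proposed discrimination-tree extension, expressed with an explicit agenda.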

Finally, we end by noting that the current committed choice forward chaining strategy of continuing until saturation, or a fixed point, is reached is neither the only nor necessarily the best strategy. We chose it because it is relatively simple for both the implementor and the user, and it approximates the CLF semantics. However, the strategy is incomplete due to the side effects caused by linear hypotheses as well as unification. We would like to explore other forward chaining strategies. In particular, a (non-deterministically) complete strategy would allow us to use LolliMon for model checking, since a negative result would then really mean that no proof exists.

Bibliography


1
J.-M. Andreoli.
Logic programming with focusing proofs in linear logic.
Journal of Logic and Computation, 2(3):297-347, 1992.
2
I. Cervesato, J. S. Hodas, and F. Pfenning.
Efficient resource management for linear logic proof search.
Theoretical Computer Science, 232:133-163, 2000.
Revised version of paper in the Proceedings of the 5th International Workshop on Extensions of Logic Programming, Leipzig, Germany, March 1996.
3
I. Cervesato and F. Pfenning.
A linear logical framework.
Information and Computation, 1999.
To appear in the special issue with invited papers from LICS'96, E. Clarke, editor.
4
I. Cervesato, F. Pfenning, D. Walker, and K. Watkins.
A concurrent logical framework II: Examples and applications.
Technical Report CMU-CS-02-102, Department of Computer Science, Carnegie Mellon University, 2002.
Revised May 2003.
5
F. Fages, P. Ruet, and S. Soliman.
Linear concurrent constraint programming: Operational and phase semantics.
Information and Computation, 165(1):14-41, 2001.
6
T. Frühwirth.
Theory and practice of constraint handling rules.
Journal of Logic Programming, 37(1-3):95-138, Oct. 1998.
Special Issue on Constraint Logic Programming.
7
R. Harper, F. Honsell, and G. Plotkin.
A framework for defining logics.
Journal of the Association for Computing Machinery, 40(1):143-184, Jan. 1993.
8
J. S. Hodas and D. Miller.
Logic programming in a fragment of intuitionistic linear logic.
Information and Computation, 110(2):327-365, 1994.
Extended abstract in the Proceedings of the Sixth Annual Symposium on Logic in Computer Science, Amsterdam, July 15-18, 1991.
9
P. López, F. Pfenning, J. Polakow, and K. Watkins.
Monadic concurrent linear logic programming.
In PPDP '05: Proceedings of the 7th ACM SIGPLAN international conference on Principles and practice of declarative programming, pages 35-46, New York, NY, USA, 2005. ACM Press.
10
P. López and J. Polakow.
Implementing efficient resource management for linear logic programming.
In F. Baader and A. Voronkov, editors, Eleventh International Conference on Logic for Programming Artificial Intelligence and Reasoning (LPAR'04), pages 528-543, Montevideo, Uruguay, Mar. 2005. Springer-Verlag LNAI 3452.
11
D. Miller and G. Nadathur.
Higher-order logic programming.
In E. Shapiro, editor, Proceedings of the Third International Logic Programming Conference, pages 448-462, London, June 1986.
12
D. Miller, G. Nadathur, F. Pfenning, and A. Scedrov.
Uniform proofs as a foundation for logic programming.
Annals of Pure and Applied Logic, 51:125-157, 1991.
13
F. Pfenning.
Logic programming in the LF logical framework.
In G. Huet and G. Plotkin, editors, Logical Frameworks, pages 149-181. Cambridge University Press, 1991.
14
J. Polakow.
Linearity constraints as bounded intervals in linear logic programming.
Journal of Logic and Computation, 2006.
To appear. Extended version of paper in Proceedings Workshop on Logics for Resources, Processes, and Programs, Turku, Finland, July 2004.
15
K. Watkins, I. Cervesato, F. Pfenning, and D. Walker.
A concurrent logical framework I: Judgments and properties.
Technical Report CMU-CS-02-101, Department of Computer Science, Carnegie Mellon University, 2002.
Revised May 2003.
16
K. Watkins, I. Cervesato, F. Pfenning, and D. Walker.
A concurrent logical framework: The propositional fragment.
In S. Berardi, M. Coppo, and F. Damiani, editors, Types for Proofs and Programs, pages 355-377. Springer-Verlag LNCS 3085, 2004.
Revised selected papers from the Third International Workshop on Types for Proofs and Programs, Torino, Italy, April 2003.


Footnotes

... system1
The systems we wish to use the logical framework to reason about; this use of "object system" is in contrast to "meta system" (the logical framework) and has nothing to do with object-oriented design.
... directed2
The derivation is driven by the current hypotheses, rather than by the current goal.






Probabilistic Concurrent Constraint
Programming and Quantitative Observables

Alessandra Di Pierro1 and Herbert Wiklicky2
1Università di Pisa, Italy
2Imperial College, UK

Editor: Eric Monfroy




Introduction

The analysis and verification of a system is of fundamental importance for the system's development, and the advantages of formal methods for these tasks are well known. Similarly, in system performance evaluation one can benefit from a formal specification of properties expressing the basic performance measures, along with methods, solidly built on a mathematical basis, for automatically verifying those properties. On the other hand, in the practical realisation of systems formal methods are not always the methods of choice, and very often ad hoc methods are preferred. One reason for this is the absence, in most cases, of a viable semantics to be used as the necessary base on which to build a formal deductive system. The most desirable qualities of such a semantics are rigour and precision on the one hand, and simplicity and clarity on the other. While achieving this is in general not a trivial task, it is relatively simple when the adopted language is a declarative one.

Based on these considerations, and with possible applications to program analysis and verification in mind, we adopted the Concurrent Constraint Programming (CCP) paradigm of [12,11], whose declarativity descends from the clear `logical reading' of programs provided by its denotational semantics [5]. We then extend the classical CCP operational semantics [12,6] so as to include two quantities representing `probability' and `cost' respectively, the latter being a measure of computational complexity. This augmented semantics is formalised in terms of two different notions: one captures the probabilistic I/O behaviour of a program by collecting the results delivered by the program's executions together with a probability distribution expressing, for each result, the likelihood of its being computed; the other also associates with each result an index of the complexity of computing that result. The second notion is a further extension, and turns out to be particularly suitable for defining another kind of observables capturing the average behaviour of a program. This plays a fundamental role in the evaluation and modelling of many relevant performance and reliability indices of probabilistic systems. In fact, the study of system performance commonly refers to the long-run average time required by the system to perform given tasks.

A formal analysis and verification of such systems requires therefore an appropriate specification of this kind of behaviour in terms of probabilistic properties [4]. These average properties can be naturally expressed by means of the augmented operational semantics. The basic idea is that the PCCP mechanism for inferring probabilities can be used for determining the weights in the expected value of a random variable expressing the time, this being the quantity measured as the computational cost. This can be generalised to quantities other than the running time.

We give an example of the construction of the various notions of observables for a simple probabilistic program, and we show how they can be used for expressing the long-run average running time of the program. We point out that, whereas for probabilistic programs the `average' is taken over an infinite period of time [8], for deterministic programs it is calculated over the (possibly infinitely many) different inputs. We also give an example of this latter case, which shows some similarities with the average-case asymptotic analysis of algorithms.


Probabilistic CCP

Probabilistic Concurrent Constraint Programming (PCCP) was introduced in [7] for the purpose of formalising randomised algorithms. In this language randomness is expressed in the form of a probabilistic choice, which replaces the CCP nondeterministic choice and allows a program to make stochastic moves during its execution. This construct also allows us to introduce the element of chance directly at the algorithmic level, without the need for any modification at the data level. Thus, the constraint system ${\cal C}$ underlying the language does not need to be re-structured according to some probabilistic, fuzzy, or belief system, and we can keep the definition as a cylindric algebraic cpo given in [12], to which we refer for more details. The syntax of PCCP agents is given by the grammar:

\begin{displaymath}
A ::= {\mbox{\bf stop}}
\;\mbox{\Large$\vert$}\; {\mbox{\bf tell}}(c)
\;\mbox{\Large$\vert$}\; \Box_{i=1}^{n}\; {\mbox{\bf ask}}(c_i)\rightarrow p_i:A_i
\;\mbox{\Large$\vert$}\; A \parallel A
\;\mbox{\Large$\vert$}\; \exists_x A
\;\mbox{\Large$\vert$}\; p(x)
\end{displaymath}

where c and ci are finite constraints in ${\cal C}$. A PCCP process P is then an object of the form D.A, where D is a set of procedure declarations of the form p(x):- A and A is an agent. PCCP agents are the same as CCP agents except for the non-deterministic choice construct, which is replaced by the probabilistic one, $\Box_{i=1}^{n}\; {\mbox{\bf ask}}(c_i)\rightarrow p_i:A_i$.

Roughly, the probability associated with each alternative expresses how likely it is (repeating the same computation `sufficiently' often) that the computation will continue by actually performing that alternative. This can be seen as restricting the original nondeterminism by imposing some requirements on the frequency of choices. The operational meaning of this construct is as follows. First, check which constraints ci are entailed by the store; this is expressed in Table 1 by $d\vdash c_j$, where $\vdash$ is the partial ordering in the underlying constraint system. Then we normalise the probability distribution by considering only the enabled agents, i.e. the agents whose ci is entailed: each enabled agent receives the normalised transition probability \( \tilde{p_i} = \frac{p_i}{\sum_{d \vdash c_j} p_j}, \) where the sum ranges over all enabled agents. Finally, one of the enabled agents is chosen according to this new probability distribution.
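As a small numeric illustration (with a hypothetical store and guards encoded as sets of tokens, entailment approximated by set inclusion), the normalisation step rescales the probabilities of the enabled branches so that they again sum to one:

```python
# Toy illustration of the normalised transition probability: only enabled
# branches keep their probability mass, rescaled over the enabled total.
# Guards and the store are hypothetical token sets; "g <= store" stands in
# for constraint entailment in this sketch.

def normalise(branches, store):
    """branches: list of (guard, p, agent); enabled iff the store entails guard."""
    enabled = [(g, p, a) for (g, p, a) in branches if g <= store]
    total = sum(p for (_, p, _) in enabled)
    return [(a, p / total) for (_, p, a) in enabled]

# Three branches with probabilities 0.5, 0.3, 0.2; the store entails only the
# first two guards, so their probabilities rescale to 5/8 and 3/8.
store = {"x>0", "y>0"}
branches = [({"x>0"}, 0.5, "A1"), ({"y>0"}, 0.3, "A2"), ({"z>0"}, 0.2, "A3")]
dist = normalise(branches, store)
```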


Augmented Transition System

In [7] the operational semantics of PCCP has a simple definition in terms of a transition system, $(Conf,\longrightarrow _p)$, where Conf is the set of configurations $\left\langle A,d\right\rangle $ representing the state of the system at a certain moment, namely the agent A which has still to be executed and the common store d; and the transition relation $\longrightarrow _p$ is defined in Table 1. Rule R2 was informally described in Section 2. The remaining rules are the same as for CCP and we refer to [5] for a detailed description. In the recursion rule R5 for the procedure call p(y), we assume that before p(y) is replaced by the body of its definition in the program P, the link between the actual parameter y and the formal parameter x has been correctly established. In [12], this link is elegantly expressed by using the hiding operation $\exists_x$ and one only fresh variable.


Table 1: The transition system for PCCP.

R1   $\left\langle {\mbox{\bf tell}}(c),d\right\rangle
\longrightarrow _1
\left\langle {\mbox{\bf stop}},c \sqcup d\right\rangle $

R2   $\left\langle \Box_{i=1}^{n}\;{\mbox{\bf ask}}(c_i)\rightarrow p_i:A_i,\,d\right\rangle
\longrightarrow _{\tilde{p}_j}
\left\langle A_j,d\right\rangle $   $\;\;j\,\in\, [1,n]; \; d\vdash c_j$

R3   $\infer{
\left\langle A\parallel B,c\right\rangle \longrightarrow _p \left\langle A'\parallel B,c'\right\rangle
\quad
\left\langle B\parallel A,c\right\rangle \longrightarrow _p \left\langle B\parallel A',c'\right\rangle
}{
\left\langle A,c\right\rangle \longrightarrow _p \left\langle A',c'\right\rangle
}$

R4   $\infer{
\left\langle \exists^d_x A,c\right\rangle
\longrightarrow _p
\left\langle \exists^{d'}_x B,c \sqcup \exists_x d'\right\rangle
}{
\left\langle A,d \sqcup \exists_x c\right\rangle
\longrightarrow _p
\left\langle B, d'\right\rangle
}$

R5   $\left\langle p(y),c\right\rangle
\longrightarrow _1
\left\langle A, c\right\rangle $   $\;\; p(x):-A \in P$

The transition system in Table 1 supports a notion of I/O observables which associates with each outcome of a computation the likelihood of its being computed. However, if our intention is to measure, for example, the running time of a program execution, then we need additional information which keeps track of the computational steps performed during the computation. This kind of information is not present in the transition system in Table 1. A more appropriate one is shown in Table 2, where the transition relation $\longrightarrow _{p,q}$ expresses a transition between two configurations which takes place with probability p and has a `computational cost' of q. In particular, q can be used as a counter of transition steps to express the running time.


Table 2: Augmented transition system for PCCP.

R1   $\left\langle {\mbox{\bf tell}}(c),d\right\rangle
\longrightarrow _{1,0}
\left\langle {\mbox{\bf stop}},c \sqcup d\right\rangle $

R2   $\left\langle \Box_{i=1}^{n}\;{\mbox{\bf ask}}(c_i)\rightarrow p_i:A_i,\,d\right\rangle
\longrightarrow _{\tilde{p}_j,0}
\left\langle A_j,d\right\rangle $   $\;\;j\,\in\, [1,n]; \; d\vdash c_j$

R3   $\infer{
\left\langle A\parallel B,c\right\rangle \longrightarrow _{p,q} \left\langle A'\parallel B,c'\right\rangle
\quad
\left\langle B\parallel A,c\right\rangle \longrightarrow _{p,q} \left\langle B\parallel A',c'\right\rangle
}{
\left\langle A,c\right\rangle \longrightarrow _{p,q} \left\langle A',c'\right\rangle
}$

R4   $\infer{
\left\langle \exists^d_x A,c\right\rangle
\longrightarrow _{p,q}
\left\langle \exists^{d'}_x B,c \sqcup \exists_x d'\right\rangle
}{
\left\langle A,d \sqcup \exists_x c\right\rangle
\longrightarrow _{p,q}
\left\langle B, d'\right\rangle
}$

R5   $\left\langle p(y),c\right\rangle
\longrightarrow _{1,1}
\left\langle A, c\right\rangle $   $\;\; p(x):-A \in P$

For simplicity, we assume that only procedure calls contribute to the complexity of a computation, whereas all other operations are ``cost free''. Although this is a reasonable assumption for the hiding and the halting agents, one might object that it is not justified for operations such as ${\mbox{\bf tell}}$ and in particular ${\mbox{\bf ask}}$, where most of the actual computational effort is concentrated in practice. However, we take a more abstract viewpoint here and interpret the cost of an execution in terms of the recursion depth of the program, in line with a more algorithmic approach to measuring computational complexity.


Observables

The augmented transition system introduced above can be used to define various notions of observables. In this section we present three different ways of observing the I/O behaviour of a program. One is the standard one where only a qualitative view of the results is considered. The others extend this view with some quantitative features corresponding respectively to probability and cost.

Given a program P, we define the results ${\cal R}$ of an agent A and an initial store d as the set of all triples $\left\langle c,p,q\right\rangle $, where c is the least upper bound of the partial constraints accumulated during a computation starting from d; p is the probability of reaching that result; and q is the associated cost.

\begin{displaymath}
\begin{array}{lll}
{\cal R}(A,d) & = &
\{ \left\langle c,p,q\right\rangle \;\mid\;
\left\langle A,d\right\rangle \longrightarrow^{*}_{p,q}
\left\langle {\mbox{\bf stop}},c\right\rangle \} \\
& \cup &
\{ \left\langle \bigsqcup\nolimits_i d_i, \prod\nolimits_i p_i, \sum\nolimits_i q_i \right\rangle \;\mid\;
\left\langle A,d\right\rangle \longrightarrow_{p_0,q_0}
\left\langle A_1,d_1\right\rangle \longrightarrow_{p_1,q_1} \ldots
\}.
\end{array}\end{displaymath}

The first term describes the finite results, where the least upper bound of the partial stores corresponds to the final store. The second term covers infinite computations.

The symbol $\longrightarrow^{*}_{p,q}$ indicates the transitive closure of the transition relation $\longrightarrow _{p,q}$. The probability of getting from one configuration to another in several steps is the product of the probabilities associated with the single steps, while the quantity q is the sum of the costs associated with each transition.
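A quick worked trace (with hypothetical step values) confirms the two composition laws: probabilities multiply along a path, costs add, and under the cost convention above only the procedure-call steps carry a nonzero q.

```python
# Composing a hypothetical path of four transitions, each carrying (p_i, q_i).
# Only the two procedure calls cost 1; the ask step has probability 0.5 and
# the final call carries probability 0.25 from a nested choice.
steps = [(1.0, 1), (0.5, 0), (1.0, 0), (0.25, 1)]

path_p = 1.0   # product of the step probabilities
path_q = 0     # sum of the step costs (here: the recursion depth)
for p, q in steps:
    path_p *= p
    path_q += q
```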

Note that ${\cal R}$ can be a multi-set, where the same constraint can occur in different tuples with different quantities associated with it. This is because different computational paths can lead to the same result. Thus, in order to capture the true behaviour of a program we have to appropriately collect the accumulated probabilities and costs associated with different interleavings. While for the probability of a constraint computed along several different paths this compactification operation is intuitively recognised to be the sum of the probabilities associated with each path, we have more than one option for how to compactify the cost associated with such a constraint. Since this quantity commonly represents a measure of efficiency or resource consumption, a reasonable choice would be to define the cost of a result computed non-deterministically as the maximum cost among the ones associated with the various computational branches leading to that result. However, depending on the interpretation of the quantity in question, other choices can be equally reasonable, such as the sum or the product. Therefore, in defining the compactification of the results, we indicate the cost compactification by a generic operation $\Omega$, which can be conveniently instantiated case by case. Formally:

\begin{displaymath}
\begin{array}{lll}
{\cal K}(\{ \left\langle c_{ij},p_{ij},q_...
...
Q_{c_i} = \Omega\{q_{ij}\}_{j}\}_i,
%%\end{array}\end{array}\end{displaymath}

where cij denotes the jth occurrence of the constraint ci in ${\cal R}$.

Now we have to normalise the probabilities. This is necessary as the probabilities in each interleaving add up to one so that the overall sum of probabilities associated to a given constraint can be greater than one when the constraint results from two or more different interleavings (in fact, it corresponds exactly to the number of possible interleavings). The re-normalisation operation effectively implies that all interleavings are equally likely. Formally:

\begin{displaymath}
\begin{array}{lll}
{\cal N}(\{ \left\langle c_i,p_i,q_i\righ...
...}{P},q_i\right\rangle \;\mid\; P = \sum_{i}p_i
\}.
\end{array}\end{displaymath}

We are now in a position to define the observables associated to an agent A and an initial store d.

Classical I/O behaviour.

This notion corresponds to the final results of computation, where we abstract from the associated probabilities and costs, and restrict ourselves to a purely qualitative view of those results. We could express this fact by assuming that p=1 and q=0 for all computed constraints. Alternatively, we can consider a kind of projection operation which selects the first component of the results of A in ${\cal R}$, as follows:

\begin{eqnarray*}
{\cal O}_{\tt C}(A,d) & = & \{c\mid \left\langle c,p,q\right\rangle \in {\cal R}(A,d)\}.
\end{eqnarray*}

Probabilistic I/O behaviour.

Here we observe the probability of a final store, whereas the cost is not relevant (q=0 for all results). Formally:
\begin{displaymath}
\begin{array}{lll}
{\cal O_{\tt Pr}}(A,d) & = & \{\left\lang...
...ht\rangle \in
{\cal N}({\cal K}({\cal R}(A,d)))\}.
\end{array}\end{displaymath}

Resource-consumption I/O behaviour.

This notion includes both the probability and the cost of each result and is defined simply by
\begin{displaymath}
\begin{array}{lll}
{\cal O}_{\tt Q}(A,d) & = & {\cal N}({\cal K}({\cal R}(A,d))).
\end{array}\end{displaymath}


Note that the last two notions of observables differs from the classical notion of I/O behaviour in that they are `universal' in contrast with the existential character of ${\cal O}_{\tt C}$. More precisely, in the classical case a constraint c belongs to the observables if at least one path leads from the initial store d to the final result c, whereas in the quantitative case we have to consider all possible paths leading to the same result c and combine the associated quantities.

Observing the Average

As already mentioned in Section 1, the notion of average behaviour of a system is crucial for many basic performance measures considered in the field of performance modelling and evaluation. Therefore, it is important to dispose of a formal base for a correct specification and analysis of such a behaviour. In particular, an appropriate definition of an operational semantics able to model the program properties of interest would make it feasible the adoption of abstract interpretation methods of analysis [9].

Based on the augmented transition system given in Section 3, we now introduce a notion of average observables. This notion could be used for the analysis of average properties, such as the long-run average properties of probabilistic systems in [4], by considering abstract interpretation methods which appropriately extend or modify the classical Cousot's abstract interpretation framework, [3,2], so as to deal with probabilistic or, more in general, quantitative features. We will come back to this point in Section 7, and we develop here the first step of this approach to the analysis of probabilistic properties.

The average behaviour

of an agent A starting from store d is defined by:
\begin{displaymath}
{\tt Average}(A)= \sum_{\left\langle c,p,q\right\rangle \in {\cal O}_{\tt Q}(A,d)} p\cdot q.
\end{displaymath}

Note that the value q in the augmented semantics of an agent A can be seen as the value of a discrete random variable on the constraint system ${\cal C}$, $Q_{A}:{\cal C} \rightarrow \mathbb{R}$, defined by:

\begin{displaymath}
Q_{A}(c)= q \mbox{ iff } \left\langle c,p,q\right\rangle \in {\cal O}_{{\tt Q}}(A,d).
\end{displaymath}

With this, we can calculate ${\tt Average}(A)$ as the expected value or mean of QA:
\begin{displaymath}
E(Q_{A})=\sum_{c\in \cal C} q\cdot Pr(Q_{A}(c)=q),
\end{displaymath}

where Pr(QA(c)=q) is the probability distribution of QA. In fact, this value corresponds exactly to the probability p associated to the constraint c in the observables ${\cal O}_{\tt Q}(A,d)$.

Examples

In this section we illustrate the construction of the previously defined notions of observables for two simple programs, a probabilistic and a deterministic one, and we discuss the possible properties that can be described by those notions. In particular, we show the use of the ${\tt Average}$ observables for representing two distinct situations in probabilistic (average-case) analysis, namely the case in which the program is deterministic and the input data varies according to some probability distribution, and the case in which the program is probabilistic and we evaluate its behaviour in different executions determined by the probability distribution on the same input data. In the algorithms analysis terminology the first analysis is called `average-case' whereas the second is referred to as the `expected running time' [1].

Randomised Counting

Consider the following PCCP program generating all natural numbers.

\begin{displaymath}
\begin{array}{lll}
nat(x) & :- & {\mbox{\bf ask}}({\it true}...
...sts_y ({\mbox{\bf tell}}(x=s(y)) \parallel nat(y)))
\end{array}\end{displaymath}

The standard I/O behaviour of this program abstracts from any consideration about the probabilities and the costs associated to each result computed by nat(x) (i.e. the natural numbers), so it corresponds to the set:

\begin{displaymath}
\mathcal O_{{\tt C}}(nat) = \left\{
x=0, x=s(0), x=s(s(0)) \ldots, x=s^{n}(0),
\ldots \right\}.
\end{displaymath}

We can now be a little bit `more concrete' and consider the information on the transition probability provided by the augmented semantics. We then get the probabilistic observables:

\begin{displaymath}
\mathcal O_{{\tt Pr}}(nat) = \left\{
\left\langle x=0,1/2\...
...eft\langle x=s^{n-1}(0),1/2^n\right\rangle ,
\ldots \right\}.
\end{displaymath}

from which we can also see the likelihood that a given number is actually generated.

A further step will lead us to observe the number of calls to the procedure nat(x) that are necessary to generate a given number. We have then the `most informative' semantics:

\begin{displaymath}
\mathcal O_{{\tt Q}}(nat) = \{\left\langle x=0,1/2,0\right\r...
... \left\langle x=s^{n-1}(0),1/2^n,n-1\right\rangle ,
\ldots\},
\end{displaymath}

where, according to our intuition, we can see that when n tends to infinity the probability vanishes whereas the cost grows bigger and bigger.

Average Running Time.

Based on $\mathcal O_{{\tt Q}}(nat)$ we can estimate the number of calls we can expect on average by executing the program nat(x):
\begin{displaymath}
{\tt Average}(nat) =
\sum_{q=0}^\infty \frac{q}{2^q}
= 2.
\end{displaymath}

List Search

The following is a program which searches a given list for an element in the list.

\begin{displaymath}
\begin{array}{lll}
search(x,list,n) & :- &
{\mbox{\bf ask}...
...ist[n/2]) \rightarrow 1:serach(x,list[n/2+2:n],n/2)
\end{array}\end{displaymath}

As the guards of the three alternatives are mutually exclusive, this program is deterministic, despite its formulation as a (probabilistic) nondeterministic choice. Therefore, the notions of classical and probabilistic observables are quite trivial in this case. In fact, any call to search will always terminate after a finite number of steps, once x is found by establishing i as the ``index'' of x in the list with probability 1.

A slightly more interesting observable is $\mathcal O_{{\tt Q}}(search)$, which for a given input x gives the number of recursive calls needed to find x. This is fixed once x is fixed. Thus the average behaviour of such a program can be estimated by comparing runs of the program on different inputs. As an example, consider n=4. Then we have four possible calls search(x,[1,2,3,4],4) with x=1, x=2, x=3 and x=4. It is easy to verify that they take respectively 2, 1, 2 and 3 steps (or recursive calls).

Average Running Time.

Assuming a uniform distribution on the input values x, we can now compute the average running time of search by
\begin{displaymath}
{\tt Average}(search(x,[1,2,3,4],4))=\frac{2+1+2+3}{4} = 2.
\end{displaymath}

We can obtain the same result by considering the corresponding probabilistic program
\begin{displaymath}
\begin{array}{lll}
searchall(list,n) & :- &
{\mbox{\bf ask...
...sk}}(true) \rightarrow \frac{1}{4}:search(4,list,n)
\end{array}\end{displaymath}

It is easy to check that the observables of searchall,


\begin{eqnarray*}
\mathcal O_{\tt Q}(searchall([1,2,3,4],4)) & = &
\{\left\lang...
... i=3,1/4,2\right\rangle ,\left\langle i=4,1/4,3\right\rangle \},
\end{eqnarray*}

can be obtained from the observables of each call to search as:
\begin{displaymath}
\mathcal O_{\tt Q}(searchall([1,2,3,4],4)) = \frac{1}{4}
\bigcup_{i=1}^4 O_{\tt Q}(search(i,[1,2,3,4],4)),
\end{displaymath}

and its average running time is again:
\begin{displaymath}
{\tt Average}(searchall)= \sum\{ p \cdot q\mid \left\langle c,p,q\right\rangle
\in\mathcal O_{\tt Q}(searchall)\} = 2.
\end{displaymath}

We can generalise this result to n=2k by asserting that the program search has an average running time of $q=k=\log n$. Note that this is the asymptotic complexity resulting from the average-case analysis of binary search algorithms (cf. §6.2.1 of [10]).


Conclusions

This paper has presented an operational semantics modelling the I/O results of a program with respect to their probability and cost, the latter representing (some measure of) the program execution complexity. We also showed how average properties can be naturally expressed by means of this semantics, thus providing a base for the specification and analysis of probabilistic properties such as the ones called in [4] long-run average properties.

Bibliography


1
T. H. Cormen, C. E. Leiserson, and R. L. Rivest.
Introduction to Algorithms.
The MIT Press, Cambridge, Mass., 1990.
2
P. Cousot.
Semantic Foundations of Program Analysis.
In S.S. Muchnick and N.D. Jones, editors, Program Flow Analysis: Theory and Applications, chapter 10. Prentice-Hall, 1981.
3
P. Cousot and R. Cousot.
Abstract Interpretation: A Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints.
In Proceedings of POPL, pages 238-252, 1977.
4
L. de Alfaro.
How to Specify and Verify the Long-Run Average Behavior of Probabilistic Systems.
In Proceedings of LICS'98, pages 174-183. IEEE Computer Society, 1998.
5
F.S. de Boer, M. Gabbrielli, E. Marchiori, and C. Palamidessi.
Proving Concurrent Constraint Programs Correct.
ACM Transactions on Programming Languages and Systems, 1997.
6


Probabilistic Concurrent Constraint
Programming and Quantitative Observables

Alessandra Di Pierro1 and Herbert Wiklicky2
1Università di Pisa, Italy
2Imperial College, UK

Editor: Eric Monfroy




Introduction

The analysis and verification of a system is of fundamental importance for its development, and the advantages of formal methods for these tasks are well known. Similarly, system performance evaluation can benefit from a formal specification of the properties expressing the basic performance measures, along with methods, solidly built on a mathematical basis, for automatically verifying those properties. In the practical realisation of systems, however, formal methods are not always the methods of choice, and ad hoc methods are often preferred. One reason for this is the absence, in most cases, of a viable semantics to serve as the base on which to build a formal deductive system. The most desirable requirements for such a semantics are rigour and precision on the one hand, and simplicity and clarity on the other. While meeting these requirements is in general not a trivial task, it is relatively simple when the adopted language is a declarative one.

Based on these considerations, and with possible applications to program analysis and verification in mind, we adopted the Concurrent Constraint Programming (CCP) paradigm of [12,11], whose declarative nature descends from the clear `logical reading' of programs provided by its denotational semantics [5]. We extend the classical CCP operational semantics [12,6] with two quantities representing `probability' and `cost' respectively, the latter being some measure of computational complexity. This augmented semantics is formalised in terms of two different notions: one captures the probabilistic I/O behaviour of a program by collecting the results delivered by its executions, together with a probability distribution expressing, for each result, the likelihood of it being computed; the other also associates to each result an index of the complexity of computing it. The second notion is a further extension, and turns out to be particularly suitable for defining another kind of observables, capturing the average behaviour of a program. This plays a fundamental role in the evaluation and modelling of many relevant performance and reliability indices of probabilistic systems: the study of system performance commonly refers to the long-run average time required by the system to perform given tasks.

A formal analysis and verification of such systems therefore requires an appropriate specification of this kind of behaviour in terms of probabilistic properties [4]. These average properties can be naturally expressed by means of the augmented operational semantics. The basic idea is that the PCCP mechanism for inferring probabilities can be used to determine the weights in the expected value of a random variable expressing the time, this being the quantity measured as the computational cost. This generalises to quantities other than the running time.

We give an example of the construction of the various notions of observables for a simple probabilistic program, and we show how they can be used to express its long-run average running time. We point out that, whereas for probabilistic programs the `average' is to be understood over an infinite period of time [8], for deterministic programs it is calculated over the (possibly infinitely many) different inputs. We also give an example of this latter case, which shows some similarities with the average-case asymptotic analysis of algorithms.


Probabilistic CCP

Probabilistic Concurrent Constraint Programming (PCCP) was introduced in [7] for the purpose of formalising randomised algorithms. In this language randomness is expressed in the form of a probabilistic choice, which replaces the CCP nondeterministic choice and allows a program to make stochastic moves during its execution. This construct also allows us to introduce the element of chance directly at the algorithmic level, without any modification at the data level. Thus, the constraint system ${\cal C}$ underlying the language does not need to be restructured according to some probabilistic, fuzzy, or belief system, and we can assume the definition as a cylindric algebraic cpo given in [12], to which we refer for more details. The syntax of PCCP agents is given by the grammar:

\begin{displaymath}
A ::= {\mbox{\bf stop}}
\;\mbox{\Large$\vert$}\; {\mbox{\bf tell}}(c)
\;\mbox{\Large$\vert$}\; \Box_{i=1}^{n}\, {\mbox{\bf ask}}(c_i)\rightarrow p_i:A_i
\;\mbox{\Large$\vert$}\; A \parallel A
\;\mbox{\Large$\vert$}\; \exists_x A
\;\mbox{\Large$\vert$}\; p(x)
\end{displaymath}

where c and the $c_i$ are finite constraints in ${\cal C}$. A PCCP process P is then an object of the form D.A, where D is a set of procedure declarations of the form p(x):- A and A is an agent. PCCP agents are the same as CCP agents, except that the nondeterministic choice construct is replaced by the probabilistic choice $\Box_{i=1}^{n}\; {\mbox{\bf ask}}(c_i)\rightarrow p_i:A_i$.

Roughly, the probability associated with each alternative expresses how likely it is (repeating the same computation `sufficiently' often) that the computation will continue by actually performing that alternative. This can be seen as restricting the original nondeterminism by imposing requirements on the frequency of the choices. The operational meaning of this construct is as follows. First, check which guards $c_i$ are entailed by the store; this is expressed in Table 1 by $d\vdash c_j$, where $\vdash$ is the partial ordering of the underlying constraint system. Then normalise the probability distribution over the enabled agents, i.e. the agents whose guard $c_i$ is entailed, by assigning to each enabled agent the normalised transition probability \( \tilde{p_i} = \frac{p_i}{\sum_{d \vdash c_j} p_j}, \) where the sum ranges over all enabled agents. Finally, one of the enabled agents is chosen according to this new probability distribution.
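The steps above can be sketched in Python. The branch representation and the `entails` predicate are hypothetical stand-ins for the constraint-system machinery; this is a minimal sketch, not an implementation of PCCP:

```python
import random

def probabilistic_choice(branches, store, entails):
    """Sketch of the probabilistic choice (rule R2).

    branches: list of (guard, p, agent) triples;
    entails(store, guard) models the entailment check d |- c_i.
    """
    # Keep only the enabled branches, i.e. those whose guard is entailed.
    enabled = [(c, p, a) for (c, p, a) in branches if entails(store, c)]
    if not enabled:
        return None  # no guard entailed: the choice suspends
    # Re-normalise the probabilities over the enabled branches only.
    total = sum(p for (_, p, _) in enabled)
    weights = [p / total for (_, p, _) in enabled]
    # Pick one enabled agent according to the normalised distribution.
    return random.choices([a for (_, _, a) in enabled], weights=weights)[0]
```

For instance, with guards modelled as set membership, a choice whose only enabled branch is the first one deterministically selects that branch's agent.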


Augmented Transition System

In [7] the operational semantics of PCCP is defined in terms of a transition system, $(Conf,\longrightarrow _p)$, where Conf is the set of configurations $\left\langle A,d\right\rangle $ representing the state of the system at a given moment, namely the agent A still to be executed and the common store d, and the transition relation $\longrightarrow _p$ is defined in Table 1. Rule R2 was informally described in Section 2. The remaining rules are the same as for CCP, and we refer to [5] for a detailed description. In the recursion rule R5 for the procedure call p(y), we assume that before p(y) is replaced by the body of its definition in the program P, the link between the actual parameter y and the formal parameter x has been correctly established. In [12], this link is elegantly expressed by means of the hiding operator $\exists_x$ and a single fresh variable.


Table 1: The transition system for PCCP.

R1 $\left\langle {\mbox{\bf tell}}(c),d\right\rangle
\longrightarrow _1
\left\langle {\mbox{\bf stop}},c \sqcup d\right\rangle $

R2 $\left\langle \Box_{i=1}^{n}\, {\mbox{\bf ask}}(c_i)\rightarrow p_i:A_i,\, d\right\rangle
\longrightarrow _{\tilde{p}_j}
\left\langle A_j,d\right\rangle $ \quad $j\,\in\, [1,n]; \; d\vdash c_j$

R3 $\infer{
\begin{array}{l}
\left\langle A\parallel B,c\right\rangle \longrightarrow _p \left\langle A'\parallel B,c'\right\rangle \\
\left\langle B\parallel A,c\right\rangle \longrightarrow _p \left\langle B\parallel A',c'\right\rangle
\end{array}
}{
\left\langle A,c\right\rangle \longrightarrow _p \left\langle A',c'\right\rangle
}$

R4 $\infer{
\left\langle \exists^d_x A,c\right\rangle
\longrightarrow _p
\left\langle \exists^{d'}_x B,\, c \sqcup \exists_x d'\right\rangle
}{
\left\langle A,\, d \sqcup \exists_x c\right\rangle
\longrightarrow _p
\left\langle B, d'\right\rangle
}$

R5 $\left\langle p(y),c\right\rangle
\longrightarrow _1
\left\langle A, c\right\rangle $ \quad $p(x):-A \in P$

The transition system in Table 1 supports a notion of I/O observables which associates to each outcome of a computation the likelihood of its being computed. However, if our intention is to measure, for example, the running time of a program execution, then we need additional information keeping track of the computational steps performed during the computation. This kind of information is not present in the transition system of Table 1. A more appropriate one is shown in Table 2, where the transition relation $\longrightarrow _{p,q}$ expresses a transition between two configurations which takes place with probability p and has a `computational cost' of q. In particular, q can be used as a counter of transition steps, so as to express the running time.


Table 2: Augmented transition system for PCCP.

R1 $\left\langle {\mbox{\bf tell}}(c),d\right\rangle
\longrightarrow _{1,0}
\left\langle {\mbox{\bf stop}},c \sqcup d\right\rangle $

R2 $\left\langle \Box_{i=1}^{n}\, {\mbox{\bf ask}}(c_i)\rightarrow p_i:A_i,\, d\right\rangle
\longrightarrow _{\tilde{p}_j,0}
\left\langle A_j,d\right\rangle $ \quad $j\,\in\, [1,n]; \; d\vdash c_j$

R3 $\infer{
\begin{array}{l}
\left\langle A\parallel B,c\right\rangle \longrightarrow _{p,q} \left\langle A'\parallel B,c'\right\rangle \\
\left\langle B\parallel A,c\right\rangle \longrightarrow _{p,q} \left\langle B\parallel A',c'\right\rangle
\end{array}
}{
\left\langle A,c\right\rangle \longrightarrow _{p,q} \left\langle A',c'\right\rangle
}$

R4 $\infer{
\left\langle \exists^d_x A,c\right\rangle
\longrightarrow _{p,q}
\left\langle \exists^{d'}_x B,\, c \sqcup \exists_x d'\right\rangle
}{
\left\langle A,\, d \sqcup \exists_x c\right\rangle
\longrightarrow _{p,q}
\left\langle B, d'\right\rangle
}$

R5 $\left\langle p(y),c\right\rangle
\longrightarrow _{1,1}
\left\langle A, c\right\rangle $ \quad $p(x):-A \in P$

For simplicity, we assume that only procedure calls contribute to the complexity of a computation, whereas all other operations are ``cost free''. Although this can be a reasonable assumption for the hiding and the halting agents, one might object that it is not justified for operations such as ${\mbox{\bf tell}}$ and, in particular, ${\mbox{\bf ask}}$, where in practice most of the actual computational effort is concentrated. However, we take here a more abstract viewpoint and interpret the cost of an execution as the recursion depth of the program definition, in line with a more algorithmic approach to measuring computational complexity.


Observables

The augmented transition system introduced above can be used to define various notions of observables. In this section we present three different ways of observing the I/O behaviour of a program. One is the standard one where only a qualitative view of the results is considered. The others extend this view with some quantitative features corresponding respectively to probability and cost.

Given a program P, we define the results ${\cal R}$ of an agent A with an initial store d as the set of all triples $\left\langle c,p,q\right\rangle $, where c is the least upper bound of the partial constraints accumulated during a computation starting from d, p is the probability of reaching that result, and q is the associated cost.

\begin{displaymath}
\begin{array}{lll}
{\cal R}(A,d) & = &
\{ \left\langle c,p,q\right\rangle \;\mid\;
\left\langle A,d\right\rangle \longrightarrow^{*}_{p,q}
\left\langle {\mbox{\bf stop}},c\right\rangle \} \;\cup \\
& & \{ \left\langle \textstyle\bigsqcup_{i} d_i, \prod_{i} p_i, \sum_{i} q_i \right\rangle \;\mid\;
\left\langle A,d\right\rangle = \left\langle A_0,d_0\right\rangle \longrightarrow_{p_0,q_0}
\left\langle A_1,d_1\right\rangle \longrightarrow_{p_1,q_1} \ldots
\}.
\end{array}\end{displaymath}

The first term describes the finite results, where the least upper bound of the partial stores corresponds to the final store. The second term covers infinite computations.

The symbol $\longrightarrow^{*}_{p,q}$ denotes the transitive closure of the transition relation $\longrightarrow _{p,q}$: the probability of reaching one configuration from another in several steps is the product of the probabilities of the single steps, and the cost q is the sum of the costs of the single transitions.
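This composition of transition labels can be sketched directly, treating a path as a list of (p, q) pairs:

```python
from functools import reduce

def compose_path(steps):
    """Combine the (p, q) labels of consecutive transitions into the label
    of the derived multi-step transition -->*_{p,q}: probabilities
    multiply, costs add. The empty path has label (1.0, 0)."""
    return reduce(lambda acc, s: (acc[0] * s[0], acc[1] + s[1]), steps, (1.0, 0))
```

For example, the derivation of x=0 from nat(x) consists of a procedure call with label (1, 1), a choice step with label (1/2, 0), and a tell step with label (1, 0), giving the overall label (1/2, 1).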

Note that ${\cal R}$ can be a multiset, in which the same constraint occurs in different tuples with different quantities associated to it. This is because different computational paths can lead to the same result. Thus, in order to capture the true behaviour of a program, we have to appropriately combine the probabilities and costs accumulated along different interleavings. While for the probability of a constraint computed through several different paths this operation of compactification is intuitively recognised to be the sum of the probabilities associated to each path, we have more than one option for compactifying the cost associated to such a constraint. Since this quantity commonly represents a measure of efficiency or resource consumption, a reasonable choice is to define the cost of a result computed nondeterministically as the maximum of the costs associated to the various computational branches leading to that result. However, depending on the interpretation of the quantity in question, other choices can be equally reasonable, such as the sum or the product. Therefore, in defining the compactification of the results, we indicate the cost compactification by a generic operation $\Omega$, which can be conveniently instantiated case by case. Formally:

\begin{displaymath}
\begin{array}{lll}
{\cal K}(\{ \left\langle c_{ij},p_{ij},q_{ij}\right\rangle \}_{ij}) & = &
\{ \left\langle c_i, P_{c_i}, Q_{c_i} \right\rangle \;\mid\;
P_{c_i} = \sum_{j} p_{ij},\;
Q_{c_i} = \Omega\{q_{ij}\}_{j}\}_i,
\end{array}\end{displaymath}

where $c_{ij}$ denotes the $j$th occurrence of the constraint $c_i$ in ${\cal R}$.

We now have to normalise the probabilities. This is necessary because the probabilities within each interleaving add up to one, so the overall sum of the probabilities associated to a given constraint can be greater than one when the constraint results from two or more different interleavings (in fact, the total corresponds exactly to the number of possible interleavings). The re-normalisation operation effectively assumes that all interleavings are equally likely. Formally:

\begin{displaymath}
\begin{array}{lll}
{\cal N}(\{ \left\langle c_i,p_i,q_i\right\rangle \}_{i}) & = &
\{ \left\langle c_i,\frac{p_i}{P},q_i\right\rangle \;\mid\; P = \sum_{i}p_i
\}.
\end{array}\end{displaymath}
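The two operations can be sketched in Python; the representation of results as (c, p, q) triples and the default choice of $\Omega = \max$ are assumptions made for illustration:

```python
from collections import defaultdict

def compactify(results, omega=max):
    """K: merge the tuples <c, p, q> that share the same constraint c.
    Probabilities are summed; costs are combined by the generic
    operation Omega (max by default, as suggested in the text)."""
    probs, costs = defaultdict(float), defaultdict(list)
    for c, p, q in results:
        probs[c] += p
        costs[c].append(q)
    return [(c, probs[c], omega(costs[c])) for c in probs]

def normalise(results):
    """N: rescale all probabilities so that they sum to one."""
    total = sum(p for _, p, _ in results)
    return [(c, p / total, q) for c, p, q in results]
```

For example, if two interleavings each produce c with probability 1/2 and costs 2 and 3, while one interleaving produces d with probability 1 and cost 1, then compactification yields {(c, 1, 3), (d, 1, 1)} and normalisation rescales the probabilities to 1/2 each.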

We are now in a position to define the observables associated to an agent A and an initial store d.

Classical I/O behaviour.

This notion corresponds to the final results of computation, where we abstract from the associated probabilities and costs, and restrict ourselves to a purely qualitative view of those results. We could express this fact by assuming that p=1 and q=0 for all computed constraints. Alternatively, we can consider a kind of projection operation which selects the first component of the results of A in ${\cal R}$, as follows:

\begin{eqnarray*}
{\cal O}_{\tt C}(A,d) & = & \{c\mid \left\langle c,p,q\right\rangle \in {\cal R}(A,d)\}.
\end{eqnarray*}

Probabilistic I/O behaviour.

Here we observe the probability of a final store, whereas the cost is not relevant (q=0 for all results). Formally:
\begin{displaymath}
\begin{array}{lll}
{\cal O}_{\tt Pr}(A,d) & = & \{\left\langle c,p\right\rangle \;\mid\;
\left\langle c,p,q\right\rangle \in
{\cal N}({\cal K}({\cal R}(A,d)))\}.
\end{array}\end{displaymath}

Resource-consumption I/O behaviour.

This notion includes both the probability and the cost of each result and is defined simply by
\begin{displaymath}
\begin{array}{lll}
{\cal O}_{\tt Q}(A,d) & = & {\cal N}({\cal K}({\cal R}(A,d))).
\end{array}\end{displaymath}


Note that the last two notions of observables differ from the classical notion of I/O behaviour in that they are `universal', in contrast with the existential character of ${\cal O}_{\tt C}$. More precisely, in the classical case a constraint c belongs to the observables if at least one path leads from the initial store d to the final result c, whereas in the quantitative case we have to consider all possible paths leading to the same result c and combine the associated quantities.

Observing the Average

As already mentioned in Section 1, the notion of the average behaviour of a system is crucial for many basic performance measures considered in the field of performance modelling and evaluation. It is therefore important to have a formal basis for the correct specification and analysis of such behaviour. In particular, an appropriate definition of an operational semantics able to model the program properties of interest would make feasible the adoption of abstract interpretation methods of analysis [9].

Based on the augmented transition system given in Section 3, we now introduce a notion of average observables. This notion could be used for the analysis of average properties, such as the long-run average properties of probabilistic systems in [4], by considering abstract interpretation methods which appropriately extend or modify the classical abstract interpretation framework of the Cousots [3,2], so as to deal with probabilistic or, more generally, quantitative features. We will come back to this point in Section 7; here we develop the first step of this approach to the analysis of probabilistic properties.

The average behaviour

of an agent A starting from store d is defined by:
\begin{displaymath}
{\tt Average}(A)= \sum_{\left\langle c,p,q\right\rangle \in {\cal O}_{\tt Q}(A,d)} p\cdot q.
\end{displaymath}

Note that the value q in the augmented semantics of an agent A can be seen as the value of a discrete random variable on the constraint system ${\cal C}$, $Q_{A}:{\cal C} \rightarrow \mathbb{R}$, defined by:

\begin{displaymath}
Q_{A}(c)= q \mbox{ iff } \left\langle c,p,q\right\rangle \in {\cal O}_{{\tt Q}}(A,d).
\end{displaymath}

With this, we can calculate ${\tt Average}(A)$ as the expected value or mean of $Q_A$:
\begin{displaymath}
E(Q_{A})=\sum_{c\in \cal C} q\cdot Pr(Q_{A}(c)=q),
\end{displaymath}

where $Pr(Q_{A}(c)=q)$ is the probability distribution of $Q_A$. In fact, this value corresponds exactly to the probability p associated to the constraint c in the observables ${\cal O}_{\tt Q}(A,d)$.
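Representing ${\cal O}_{\tt Q}(A,d)$ as a list of (c, p, q) triples (an assumption made purely for illustration), the average is a one-line fold:

```python
def average(observables):
    """Average(A) = sum of p*q over the quantitative observables
    O_Q(A, d), i.e. the expected value of the cost variable Q_A."""
    return sum(p * q for _, p, _q in [(c, p, q) for c, p, q in observables]
               for q in [_q])

# A clearer equivalent formulation:
def average_simple(observables):
    return sum(p * q for _, p, q in observables)
```

For instance, four equiprobable results with costs 2, 1, 2 and 3 have average cost (2+1+2+3)/4 = 2.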

Examples

In this section we illustrate the construction of the previously defined notions of observables for two simple programs, a probabilistic and a deterministic one, and we discuss the properties that can be described by those notions. In particular, we show the use of the ${\tt Average}$ observables for representing two distinct situations in probabilistic (average-case) analysis: the case in which the program is deterministic and the input data varies according to some probability distribution, and the case in which the program is probabilistic and we evaluate its behaviour over different executions determined by the probability distribution on the same input data. In the terminology of algorithm analysis, the first is called `average-case' analysis, whereas the second is referred to as the `expected running time' [1].

Randomised Counting

Consider the following PCCP program generating all natural numbers.

\begin{displaymath}
\begin{array}{lll}
nat(x) & :- & {\mbox{\bf ask}}({\it true}) \rightarrow \frac{1}{2}: {\mbox{\bf tell}}(x=0)\\
& & \Box\;\; {\mbox{\bf ask}}({\it true}) \rightarrow \frac{1}{2}: \exists_y ({\mbox{\bf tell}}(x=s(y)) \parallel nat(y))
\end{array}\end{displaymath}

The standard I/O behaviour of this program abstracts from any consideration about the probabilities and the costs associated to each result computed by nat(x) (i.e. the natural numbers), so it corresponds to the set:

\begin{displaymath}
\mathcal O_{{\tt C}}(nat) = \left\{
x=0,\; x=s(0),\; x=s(s(0)),\; \ldots,\; x=s^{n}(0),\;
\ldots \right\}.
\end{displaymath}

We can now be a little bit `more concrete' and consider the information on the transition probability provided by the augmented semantics. We then get the probabilistic observables:

\begin{displaymath}
\mathcal O_{{\tt Pr}}(nat) = \left\{
\left\langle x=0,1/2\right\rangle ,
\left\langle x=s(0),1/4\right\rangle ,
\ldots,
\left\langle x=s^{n-1}(0),1/2^n\right\rangle ,
\ldots \right\},
\end{displaymath}

from which we can also see the likelihood that a given number is actually generated.

A further step leads us to observe the number of calls to the procedure nat(x) that are necessary to generate a given number. We then obtain the `most informative' semantics:

\begin{displaymath}
\mathcal O_{{\tt Q}}(nat) = \{\left\langle x=0,1/2,1\right\rangle ,
\left\langle x=s(0),1/4,2\right\rangle ,
\ldots,
\left\langle x=s^{n-1}(0),1/2^n,n\right\rangle ,
\ldots\},
\end{displaymath}

where, in accordance with our intuition, we can see that as n tends to infinity the probability vanishes whereas the cost grows unboundedly.

Average Running Time.

Based on $\mathcal O_{{\tt Q}}(nat)$ we can estimate the number of calls we can expect on average by executing the program nat(x):
\begin{displaymath}
{\tt Average}(nat) =
\sum_{q=0}^\infty \frac{q}{2^q}
= 2.
\end{displaymath}
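This value can be double-checked numerically, both by partial sums of the series and by simulating nat; the sketch below assumes, in line with rule R5, that every call to nat, including the initial one, contributes cost 1:

```python
import random

def run_nat():
    """Simulate one run of nat(x): each call flips a fair coin, recursing
    on the second branch; return the number of calls made up to and
    including the one that takes the tell(x = 0) branch."""
    calls = 1
    while random.random() < 0.5:  # recursive branch taken with prob. 1/2
        calls += 1
    return calls

# Partial sums of sum_{q >= 0} q / 2^q converge to 2 very quickly.
partial = sum(q / 2**q for q in range(60))
```

The tail of the series beyond q = 60 is negligible, and a Monte Carlo estimate of the mean number of calls also clusters around 2.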

List Search

The following program performs a binary search for an element x in a given (ordered) list.

\begin{displaymath}
\begin{array}{lll}
search(x,list,n) & :- &
{\mbox{\bf ask}}(x = list[n/2]) \rightarrow 1:{\mbox{\bf tell}}(i = n/2)\\
& & \Box\;\; {\mbox{\bf ask}}(x < list[n/2]) \rightarrow 1:search(x,list[1:n/2-1],n/2)\\
& & \Box\;\; {\mbox{\bf ask}}(x > list[n/2]) \rightarrow 1:search(x,list[n/2+1:n],n/2)
\end{array}\end{displaymath}

As the guards of the three alternatives are mutually exclusive, this program is deterministic, despite its formulation as a (probabilistic) nondeterministic choice. The notions of classical and probabilistic observables are therefore rather trivial in this case: any call to search always terminates after a finite number of steps, establishing with probability 1 the ``index'' i of x in the list.

A slightly more interesting observable is $\mathcal O_{{\tt Q}}(search)$, which for a given input x gives the number of recursive calls needed to find x. This is fixed once x is fixed. Thus the average behaviour of such a program can be estimated by comparing runs of the program on different inputs. As an example, consider n=4. Then we have four possible calls search(x,[1,2,3,4],4) with x=1, x=2, x=3 and x=4. It is easy to verify that they take respectively 2, 1, 2 and 3 steps (or recursive calls).

Average Running Time.

Assuming a uniform distribution on the input values x, we can now compute the average running time of search by
\begin{displaymath}
{\tt Average}(search(x,[1,2,3,4],4))=\frac{2+1+2+3}{4} = 2.
\end{displaymath}
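These step counts, and the resulting average, can be double-checked with a short Python sketch (`probes` is a hypothetical helper that mimics the binary search above, counting one step per recursive call):

```python
def probes(x, lst):
    """Number of (recursive) calls binary search needs to find x in sorted lst."""
    mid = (len(lst) - 1) // 2                # probe the middle element
    if lst[mid] == x:
        return 1
    if x < lst[mid]:
        return 1 + probes(x, lst[:mid])      # continue in the left half
    return 1 + probes(x, lst[mid + 1:])      # continue in the right half

counts = [probes(x, [1, 2, 3, 4]) for x in (1, 2, 3, 4)]
average = sum(counts) / len(counts)
```

For the list [1,2,3,4] this yields the counts 2, 1, 2, 3 and the average 2, as computed above.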

We can obtain the same result by considering the corresponding probabilistic program
\begin{displaymath}
\begin{array}{lll}
searchall(list,n) & :- &
{\mbox{\bf ask}}(true) \rightarrow \frac{1}{4}:search(1,list,n)\\
& + & {\mbox{\bf ask}}(true) \rightarrow \frac{1}{4}:search(2,list,n)\\
& + & {\mbox{\bf ask}}(true) \rightarrow \frac{1}{4}:search(3,list,n)\\
& + & {\mbox{\bf ask}}(true) \rightarrow \frac{1}{4}:search(4,list,n)
\end{array}\end{displaymath}

It is easy to check that the observables of searchall,


\begin{eqnarray*}
\mathcal O_{\tt Q}(searchall([1,2,3,4],4)) & = &
\{\left\langle i=1,1/4,2\right\rangle ,\left\langle i=2,1/4,1\right\rangle ,\\
& & \left\langle i=3,1/4,2\right\rangle ,\left\langle i=4,1/4,3\right\rangle \},
\end{eqnarray*}

can be obtained from the observables of each call to search as:
\begin{displaymath}
\mathcal O_{\tt Q}(searchall([1,2,3,4],4)) = \frac{1}{4}
\bigcup_{i=1}^4 \mathcal O_{\tt Q}(search(i,[1,2,3,4],4)),
\end{displaymath}

and its average running time is again:
\begin{displaymath}
{\tt Average}(searchall)= \sum\{ p \cdot q\mid \left\langle c,p,q\right\rangle
\in\mathcal O_{\tt Q}(searchall)\} = 2.
\end{displaymath}

We can generalise this result to $n=2^k$ by asserting that the program search has an average running time of $q=k=\log_2 n$. Note that this is the asymptotic complexity resulting from the average-case analysis of binary search algorithms (cf. §6.2.1 of [10]).
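The logarithmic average can also be checked empirically for lists of size n = 2^k; the sketch below (again with a hypothetical `probes` helper counting one step per recursive call) verifies that the uniform-input average stays within one call of log2(n):

```python
from math import log2

def probes(x, lst):
    """Number of recursive calls binary search makes to locate x in sorted lst."""
    mid = (len(lst) - 1) // 2
    if lst[mid] == x:
        return 1
    if x < lst[mid]:
        return 1 + probes(x, lst[:mid])
    return 1 + probes(x, lst[mid + 1:])

averages = {}
for k in (2, 4, 6):
    n = 2 ** k
    lst = list(range(1, n + 1))
    averages[n] = sum(probes(x, lst) for x in lst) / n
# Each average stays within one call of log2(n) = k.
```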


Conclusions

This paper has presented an operational semantics modelling the I/O results of a program together with their probability and cost, the latter representing (some measure of) the complexity of the program execution. We also showed how average properties can be naturally expressed by means of this semantics, thus providing a basis for the specification and analysis of probabilistic properties such as those called long-run average properties in [4].

Bibliography


1
T. H. Cormen, C. E. Leiserson, and R. L. Rivest.
Introduction to algorithms.
The MIT Press, Cambridge, Mass., 1990.
2
P. Cousot.
Semantic Foundations of Program Analysis.
In S.S. Muchnick and N.D. Jones, editors, Program Flow Analysis: Theory and Applications, chapter 10. Prentice-Hall, 1981.
3
P. Cousot and R. Cousot.
Abstract Interpretation: A Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints.
In Proceedings of POPL, pages 238-252, 1977.
4
L. de Alfaro.
How to Specify and Verify the Long-Run Average Behavior of Probabilistic Systems.
In Proceedings of LICS'98, pages 174-183. IEEE Computer Society, 1998.
5
F.S. de Boer, M. Gabbrielli, E. Marchiori, and C. Palamidessi.
Proving Concurrent Constraint Programs Correct.
ACM Transactions on Programming Languages and Systems, 1997.
6
F.S. de Boer and C. Palamidessi.
A fully abstract model for concurrent constraint programming.
In S. Abramsky and T.S.E. Maibaum, editors, Proceedings of TAPSOFT/CAAP, volume 493 of Lecture Notes in Computer Science, pages 296-319. Springer Verlag, 1991.
7
A. Di Pierro and H. Wiklicky.
An Operational Semantics for Probabilistic Concurrent Constraint Programming.
In Y. Choo, P. Iyer, and D. Schmidt, editors, Proceedings of ICCL'98, pages 174-183. IEEE Computer Society and ACM SIGPLAN, 1998.
8
D. Ferrari.
Computer Systems Performance Evaluation.
Prentice-Hall, 1978.
9
R. Giacobazzi.
Optimal Collecting Semantics for Analysis in a Hierarchy of Logic Program Semantics.
In C. Puech, editor, Proceedings of STACS'96, volume 1046 of Lecture Notes in Computer Science, pages 503-514, 1996.
10
D. E. Knuth.
The Art of Computer Programming, Vol.3: Sorting and Searching.
Addison-Wesley, 1998.
2nd edition.
11
V.A. Saraswat and M. Rinard.
Concurrent constraint programming.
In Proceedings of POPL, pages 232-245. ACM, 1990.
12
V.A. Saraswat, M. Rinard, and P. Panangaden.
Semantic foundations of concurrent constraint programming.
In Proceedings of POPL, pages 333-353. ACM, 1991.







Historical and Personal Perspectives on Logic Programming

Enrico Pontelli
New Mexico State University
USA




As announced in the previous issue of the newsletter (Vol. 19 No. 1), we have started a new regular column dedicated to presenting informal perspectives on the field of logic programming. These articles will be written by people who are widely recognized as the founding fathers and mothers of LP, and they will provide insights on the development of the field, historical anecdotes, and other personal views.
The idea for this column came about thanks to a contribution to the newsletter by Peter Szeredi (Vol. 17 No. 4), which offered a refreshing description of the early days of Prolog in Hungary. That article was so well received that it prompted us to extend the idea and invite other researchers to provide similar contributions. In the last issue we had the first of these contributions, from Dale Miller. In this issue you will find two more, one by Veronica Dahl and one by Kazunori Ueda.

We have an exciting schedule prepared, so look out for these great contributions!!





The Early Days of Logic Programming:
a Personal Perspective

Veronica Dahl
Simon Fraser University
Canada

Editor: Enrico Pontelli



When I arrived at Luminy in 1975, Alain Colmerauer had just completed his article “Les grammaires de métamorphose”; Dave H. D. Warren, Luís Moniz Pereira and Fernando Pereira were turning Prolog from its first proof-of-concept implementation by Philippe Roussel into a practical language; and the prof-to-student ratio was unbeatable: just five of us grads were in the DEA programme, of which only one ventured into the newly created Doctorat en Intelligence Artificielle.

Luminy - luminous, luminaries. I had chosen well. I wasn’t one more number in some huge impersonal lab. Luminous indeed it was, from the hope-filled sun rays that enlivened the hospital room where serious illness confined me upon arrival, to the splendid blue spaces of sea and sky that greeted my recovery two and a half weeks later.

My new friends Cristina and Michel van Caneghem had taken me through narrow, thyme-scented paths, tumbling down rock slides, gliding down from tree-tops on Tarzan-style ropes, and past dizzying precipices, until at one breathtaking turn, there they were: the Calanques, their white aching beauty shining against the calm Mediterranean.

As I write these lines, trying to trace the spirit of the early LP days, it strikes me that the sense of freedom and unending peace that the Calanques conveyed must have insensibly permeated all our unconscious minds, and that the LP quest that I characterized as that of truth and beauty, as instigated by Alain Colmerauer and Bob Kowalski [1] is also, and fundamentally, a quest for freedom: from procedural concerns, from the tyranny of time's sequential nature, and more recently, even from the limitations of its own roots in classical logic.

Soon, I was accepted into the group as "one more guy" - a dubious honour! I would for instance arrive at my office to the sight of one of my companions sitting in his underwear while his pants dried out from an unexpected rain.

But to be frank, I had been lying through my teeth when Alain, visiting me at the hospital, had asked me whether I would continue into the doctoral programme. I felt obliged to say yes to be taken seriously, yet I doubted I would have the patience and stamina needed. I was merely taking distance for one year from my country's incipient death squads, which had just killed a dear friend. In impotent counterpoint to the escalating violence in Argentina, as many more friends were murdered or disappeared, and my mother was declared "a real or potential danger to the security of the state" after the military coup, the Calanques became my physical refuge, and LP perhaps a mental one, both symbolizing resilience, life, freedom.

Those early days' obsession with ever higher levels of freedom was sustained by the remarkable stubbornness of the free spirits behind LP, in defiance of the rest of the world's general belief that automated proof had revealed its limitations and could not yield much of practical value. These traits in fact continue to characterize our field, which abounds in survivors of sorts, like Catuscia Palamidessi, who persisted in her studies despite nine years of daily pressure from her family environment to not "waste" her time in academia and to secure a husband instead; or Manuel Hermenegildo, who during his military service daily tested with his life whether a surreptitious bomb had been placed in his commander's vehicle; or Hassan Ait-Kaci, who survived grad studies while single-parenting his baby.

My passion for the Calanques soon converted me into the group's unofficial guide for visitors, adding a more human dimension to my image of scientific "monsters": for instance, Dave H. D. Warren's questions while hiking with a baby in his backpack were somehow less intimidating than when he'd "grilled me" after my talk at Imperial College.

I'd lucked out again: I happened to be at Alain's office when Bob called to invite him to an informal workshop he was organizing. As Alain was regretting that no one in the group could attend, I was jumping up and down volunteering: Moi! Moi! I can go!

So the group sent a student to present at Imperial College her joint DEA work with Roland Sambuc: a French-consultable expert system with dynamic coroutining of the tasks implicit in a user's query, which postponed executing negative goals under negation as failure until all necessary instantiations had been made. It was great to hear Bob's talk on n-ary vs. binary relationships, Keith's on what is now known as Clark's completion theory, and above all, to meet so many excellent researchers in person.

Towards my dream of building a bridge between the humanistic and the formal sciences, I had been studying Linguistics at the Faculty of Philosophy and Letters in Buenos Aires, as I thought it might provide a good starting point, being the most formalized of the human sciences. This dream never quite materialized, but over the years, with LP enlisted into pursuing it, good results nevertheless poured into my lap for language processing and logic grammars, and extended into unexpected fields, e.g. computational molecular biology, where logic grammars were used for helping find the human genome, for the automatic analysis of biological sequences and for RNA secondary structure design; internet technology, with results in guided and long-distance learning, acronym meaning reconstruction and multilingual access to virtual worlds; and constraint-based reasoning, where initial results for language parsing proved also useful for cognitive theory, error detection and correction, and knowledge extraction from text. More recently, I have been focussing on extending Prolog with non-classical lines of reasoning such as abduction and assumptions.

Perhaps one day all these incursions into apparently disparate fields will integrate into realizing my initial dream, and perhaps that will contribute to a less dichotomized, maybe even more peaceful world.


Acknowledgements: As in Claire Bretécher's comic, I could dedicate these memoirs "to him to whom I owe everything", in the certainty that this would make lots of people happy. So if I forget to name anyone, please take that as the default dedication. My greatest debt is to Alain Colmerauer, a superb supervisor and an exceptional human being, and to the GIA members of the early years. Bob Kowalski and the colleagues who attended that fantastic first workshop at Imperial were particularly inspiring, as were the workshop on logic and databases organized by Hervé Gallaire in Toulouse, Peter Szeredi's workshop in Debrecen, Luís Moniz Pereira's at Albufeira, and the symbolically and literally floating workshop aboard the Queen Mary, all leading to the established, more formal but equally fun and productive ICLP conferences. Luís Moniz Pereira and Helder Coelho, David H. D. Warren, Fernando Pereira, and later David Scott Warren helped spread the word about my early Spanish system, which was ported to many languages and generated lots of further research. Alan Robinson, to whom indeed we all owe everything, cheered me up when I needed it with his exceptional human quality, Bonnie Webber rescued my TODS submission from oblivion in an editor's forgotten pile, and the list goes on and on. Special thanks to my many wonderful collaborators from all over the globe: Harvey Abramson, Pablo Accuosto, Dulce Aguilar-Solis, Jamie Andrews, Joao Balsa, Alma Barranco-Mendoza, Maryam Bavarian, Gabriel Bès, Philippe Blache, Michel Boyer, Charles Brown, Joe Calder, Henning Christiansen, Silvia Clerici, Koen DeBosschere, Marcos Elinger, Andrew Fall, Baohua Gu, Eli Hagen, Jia Wei Han, Glendon Holst, Yan-Nong Huang, Alfredo Hurtado, Bharat Jayaraman, Andre Levesque, Renwei Li, Susana Lilliecreutz, Diane Massam, Pierre Massicotte, Michael Mc Cord, Luís Moniz Pereira, Lidia Moreno, Manuel Palomar, T. Pattabhiraman, Jose Gabriel Pereira-Lopes, Fred Popowich, Stephen Rochefort, Michael Rochemont, Roland Sambuc, Patrick Saint-Dizier, Greg Sidebottom, Marius Scurtescu, Paul Tarau, Marie-Claude Thomas, Stephen Tse, Jorg Ueberla, Kimberly Voll, Tom Yeh, Manuel Zahariev, Osmar Zaiane. And to all of you, free spirits of LP: my heartfelt thanks.



[1] ALP Newsletter 15-4, 2002 (http://www.cs.kuleuven.ac.be/%7Edtai/projects/ALP/newsletter/nov02/index.html)





Logic Programming and Concurrency:
a Personal Perspective

Kazunori Ueda
Waseda University
Japan

Editor: Enrico Pontelli




I was asked to contribute a personal, historical perspective on logic programming (and presumably its relation to concurrency). I once wrote my personal perspective for Communications of the ACM in 1993 [1]. The article was also intended to record a detailed, historical account of the design process of Guarded Horn Clauses (GHC) and the kernel language (KL1) of the Japanese Fifth Generation Computer Systems (FGCS) project based on GHC. Readers are invited to read the CACM article, which is full of facts and inside stories. My view of the field remains basically the same after thirteen years.
Another related article appeared in the ``25-Year Perspective'' book on logic programming [2], in which I tried to convey the essence of concurrent logic/constraint programming and its connection to logic programming in general. If your view of concurrent logic/constraint programming is approximately ``an incomplete variant of logic programming,'' I would like to invite you to read [2] and update the view. In a word, it is a simple and powerful formalism of concurrency.

In this article, I'll try to convey the essence of the field to the present audience, highlighting my beliefs, principles and lessons learned. I'll also briefly describe diverse offspring of concurrent logic/constraint programming.

The First Work on Logic Programming

Ever since I was a master's student in the late 1970s, I have been interested in all kinds of programming languages, from practical to pedagogical to exotic. I met Prolog in this period, but it took me a while to be able to appreciate it (as is the case for most of the students we teach today) since it had a number of distinctive features.
My first journal paper (1983, with Hideyuki Nakashima and Satoru Tomura, in Japanese) was on declarative I/O and string manipulation in logic programming. Our motivations and beliefs behind the work were:
  1. Declarative programming languages should not resort to side effects.
  2. I/O is the central purpose of any program and programming language connected to the real world.
These two principles, acquired before I joined the Japanese Fifth Generation Computer Systems (FGCS) project in 1983, formed the baseline of my work on language design for more than 20 years.
The second belief is not necessarily popular in the declarative language community, but I was very encouraged by reading Tony Hoare's paper on CSP (CACM, August 1978 issue), whose abstract started with:

``This paper suggests that input and output are basic primitives of programming.''

Development of Guarded Horn Clauses

The first three years of my FGCS project days were devoted to the design of the Kernel Language version 1 (KL1) of the project, which was about to start with the working hypothesis that KL1 should be based on Concurrent Prolog, a logic programming language with two additional constructs: read-only annotation to express synchronization and the commit operator to express choice (don't-care) nondeterminism. The hypothesis, largely due to Koichi Furukawa and Akikazu Takeuchi, seemed bold to almost everybody inside the project, including myself, because the other form of nondeterminism, search, was absent. Nevertheless, I soon believed this approach was worth exploring. Ehud Shapiro convinced me of
  3. Occam's razor (or minimalism) in programming language design (by 1983, I had learned two clear instances of this principle: CSP/Occam and Concurrent Prolog).
In addition, I felt that
  4. Different concerns (e.g., concurrency and search in our context) should be separated to understand things analytically; only then can they be integrated (if possible at all).
As soon as Concurrent Prolog was adopted as the working hypothesis, we launched an implementation project.
  5. A good way to understand and examine a language definition is to try to implement it; it forces us to consider every detail of the language.
Three Concurrent Prolog interpreters, each differing in the mechanisms for binding environments, were almost complete by August 1984. However, we found several subtleties in the language, and felt that we should re-examine the language specification of Concurrent Prolog in depth before we went any further.
  6. In designing programming languages or calculi, giving a formal (or at least rigorous) definition is a means and not a purpose. The important next step is to scrutinize various properties and consequences of the definition.
The re-examination of Concurrent Prolog focused on the granularity of two primitive operations, namely (1) unification under read-only annotations and (2) committed choice after OR-parallel execution of multiple clauses. Although the way programming languages are defined has changed with the advent of Plotkin's SOS (structural operational semantics, also known as small-step semantics, as opposed to the late Gilles Kahn's natural (big-step) semantics), a lesson learned in this work is still valid:
  7. The small-step semantics of a language construct does not necessarily express the real atomic operations of the construct.
To appreciate the above, it is best to consider the basic operation of logic programming, resolution. A resolution step involves unification of arbitrarily large terms, which can be performed in many smaller steps. You may also recall that, in chemistry and physics, both atoms (which once were considered indivisible) and subatomic particles are important; in computer science the same principle applies to the constructs of programming languages, especially those of parallel or concurrent languages.
The conception of Guarded Horn Clauses (GHC, December 1984) was an outcome of such a subatomic consideration of the parallel execution of logic programs. I suppose such an approach is not common even in the design of various concurrency formalisms, but the strength and the stability of GHC as a formalism come exactly from here.

Logic Programming versus Concurrency

GHC and the other concurrent logic languages were destined to wear two different hats: they were born from logic programming, but their principal connection was to concurrency. At first, I used to describe GHC by comparing it with logic programming. However, it did not take long until I viewed GHC as a simple model of concurrency that featured (or, more precisely, allowed very simple encoding of) messaging, dynamic process creation, and reconfigurable channels.
  8. Concurrent logic programming is a model of concurrency featuring channel mobility in the sense of the pi-calculus (and channel fusion in the sense of the fusion calculus).
And it was not just a model: it was a family of languages implemented and used by a number of research groups around the world. I think this is an important contribution of the logic programming community to the field of concurrency, one that should not be overlooked, least of all by ourselves.
At that time (1985-1986), the idea of constraint logic programming was not available to us, and ``bindings,'' the algebraic counterpart of constraints, was the standard term used to speak of computational aspects of logic programs. Except for the terminology, however, we were already viewing computation along the ask-and-tell setting of concurrent constraint programming:
``...it is quite natural to view a GHC program in terms of binding information and the agents that observe and generate it.'' ``In general, a goal can be viewed as a process that observes input bindings and generates output bindings according to them. Observation and generation of bindings are the basis of computation and communication in our model.'' -- [4]
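The ask-and-tell reading described in the quotation can be caricatured in a few lines of Python (all names here are hypothetical illustrations, not an actual CCP implementation): a tell adds a binding to a shared store, while an ask either reports entailment or signals that the asking agent must suspend.

```python
class Store:
    """A toy constraint store holding variable bindings."""

    def __init__(self):
        self.bindings = {}

    def tell(self, var, value):
        # A tell must be consistent with the current store.
        if self.bindings.get(var, value) != value:
            raise ValueError(f"inconsistent tell: {var} = {value}")
        self.bindings[var] = value

    def ask(self, var, value):
        # True/False: the store decides the constraint;
        # None: undecided, so the asking agent suspends.
        if var not in self.bindings:
            return None
        return self.bindings[var] == value

store = Store()
suspended = store.ask("x", 0)   # None: the guard is undecided, agent suspends
store.tell("x", 0)              # another agent generates the binding
entailed = store.ask("x", 0)    # True: the guard is now entailed
```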
Study of the foundations of concurrent logic languages came after practice, but the time became ripe for concurrent logic programming to embrace ideas from constraint logic programming, leading to an elegant reformulation of the framework by Michael Maher and Vijay Saraswat.
Concurrent constraint programming (CCP) became the more popular term for the framework.
However, terminology needs care and attention. One thing to be noted is:
  9. Concurrent logic languages--or at least some of them including GHC--are (instances of) concurrent constraint languages.
I have found this fact very often forgotten in the literature, perhaps because it may sound like saying that an instance of a class was available before the class was defined. However, everybody will agree that Socrates was a mammal even though the notion of mammals was not available in his time.
I'm still much less confident in the names of the paradigm than in its technical content. Any name would be fine when we talk among ourselves, but we must reach out to a broader community. Concurrent logic programming sounds like a branch of logic programming, which is true, but it is about concurrency as well. Likewise, concurrent constraint programming sounds like a programming paradigm, which is true, but it is a process calculus as well. Such a broad coverage is not easy to express in a single term, but let me propose the term
  10. Constraint-Based Concurrency
to refer to the paradigm [3], as opposed to other process calculi in which names play principal roles and which could be called Name-based Concurrency.

Let me get back to the connection between logic programming and concurrent logic programming. One thing to be stressed is:
  11. Concurrent logic programming may look remote from the rest of logic programming, but from the technical point of view it is not the case.
As mentioned earlier, the design of GHC was an outcome of subatomic analysis of logic programming. We tried to retain the properties of logic programming wherever possible to establish a robust, rule-based formalism for communicating programs. This was achieved by exploiting the beautiful properties of constraints and logical variables.
One instance of the previous claim, which I'd like the readers to recall, is:
  12. Dataflow synchronization is a useful construct in logic programming, with and without choice nondeterminism alike.
Examples supporting this claim include delaying, coroutining, sound negation-as-failure, and the Andorra principle (due to D. H. D. Warren).

In conclusion, I would suggest that concurrent logic programming (and constraint-based concurrency in general) be viewed not as
  13. \begin{displaymath}\begin{split} \textrm{Concurrent LP} &= \textrm{LP} + \textrm{committed choice}\\ &= \textrm{LP} - \textrm{completeness} \end{split}\end{displaymath}

but as [2]:
  14. \begin{displaymath}\begin{split} \textrm{Concurrent LP} &= \textrm{LP} + \ldots\\ &= \textrm{LP} + \textrm{embedded concurrency control}. \end{split}\end{displaymath}

The connection between constraint-based concurrency and name-based concurrency (as represented by the pi-calculus) is not well understood in the community, either. One of the purposes of my invited talk at TACS'01 [3] and the tutorial at ICLP'01 was to relate these formalisms.

Type Systems

The above view naturally motivates the study of program analysis and of type systems that deal with dataflow and directionality. Traditionally, the LP community has used the term ``modes,'' but I found the term ``types'' (interpreted in a broad sense) much more appealing to the rest of the CS community; hence the slogan:
  15. Modes are types in a broad sense.
In other words, mode systems are a kind of non-standard type system whose domains are more than sets of (first-order or higher-order) values.
Static analysis of dataflow may sound contrary to the spirit of constraint programming since constraint propagation is supposed to take place in arbitrary directions. However, constraint-based concurrency turned out to be an interesting place where the power of constraint programming (powerful enough to encode first-class mobile channels) and the power of static type systems meet:
  16. Constraint programming involves complicated dataflow, but there are cases where the complicated dataflow can be statically analyzed.
Thus, during the 1990s, I worked on type systems for constraint-based concurrency that dealt with modes and linearity (the number of readers of each variable) as well as values. Kima, a tool (still available on the web) for type-based program diagnosis and fully automatic debugging, was based on the mode system of Moded Flat GHC. An important general lesson I learned from that work was:
  17. A type system can tell much more than whether a program is well-typed or ill-typed.
This observation is due to the constraint-based (as opposed to proof-theoretic) formulation of our type system. To make the above lesson more specific, I learned:
  18. Constraint-based program analysis, if done in an appropriate setting, can pinpoint program errors and even correct them.

The Paradigm's Diverse Offspring

Constraint-based concurrency has yielded diverse offspring, all with unique features. The most successful offspring so far is Constraint Handling Rules (CHR), which addressed constraint programming, a field in which a high-level programming language was called for. I did not foresee that an extension of concurrent logic programming languages would achieve such success in a quite different field.
Another important offspring is timed and hybrid concurrent constraint programming, which gave a high-level programming language with rich control structure to the field in which (a timed or hybrid version of) automata were the basic means of problem description.

My own recent work is the design and implementation of LMNtal (pronounced ``elemental''), a model and a full-fledged language for hierarchical graph rewriting. This is another field whose programming-language aspects have scarcely been studied. LMNtal was an outcome of the attempt to unify constraint-based concurrency and CHR.
LMNtal is a lean language in the sense that it does not even feature constant symbols and its variables are used only for indicating one-to-one connection between atomic formulae. Nevertheless, it turned out to be a very versatile language related to many formalisms including various process calculi, multiset rewriting, the lambda calculus, and (of course) logic programming. The main feature of LMNtal is its ability to deal with connectivity and hierarchy simultaneously in a simple setting.
  19. Connectivity and hierarchy are the two structuring mechanisms found in many fields ranging from society to biology, not to mention the world of computing.
Another formalism motivated by this observation is Robin Milner's Bigraphical Reactive Systems.
There may be more connections from constraint-based concurrency to emerging technologies, but the above instances should be sufficient to support my final remark, which I hope to share with the readers for the development of the field:
  20. Logic programming today embraces diverse interesting technologies beyond computational logic (in a narrow sense) as well as those within computational logic.

Bibliography


1
Shapiro, E. Y., Warren, D. H. D., Fuchi, K., Kowalski, R. A., Furukawa, K., Ueda, K., Kahn, K. M., Chikayama, T. and Tick, E., The Fifth Generation Project: Personal Perspectives. Comm. ACM, Vol. 36, No. 3 (1993), pp. 46-103.
(This is actually a collection of single-authored articles, and my article (pp. 65-76) was originally titled ``Kernel Language in the Fifth Generation Computer Project.'')
2
Ueda, K., Concurrent Logic/Constraint Programming: The Next 10 Years. In The Logic Programming Paradigm: A 25-Year Perspective, Apt, K. R., Marek, V. W., Truszczynski M., and Warren D. S. (eds.), Springer-Verlag, 1999, pp. 53-71.
(Extensive references to related work and technologies can be found in this paper.)
3
Ueda, K., Resource-Passing Concurrent Programming. In Proc. Fourth Int. Symp. on Theoretical Aspects of Computer Software (TACS'01), Lecture Notes in Computer Science 2215, Springer, 2001, pp.95-126.
4
Ueda, K., Guarded Horn Clauses: A Parallel Logic Programming Language with the Concept of a Guard. ICOT Tech. Report TR-208, ICOT, Tokyo, 1986. Also in Programming of Future Generation Computers, Nivat, M. and Fuchi, K. (eds.), North-Holland, Amsterdam, 1988, pp. 441-456.




Community News




New Releases of the Parma Polyhedra Library

Communicated by Roberto Bagnara

 
We are very happy to announce the availability of PPL 0.9, the latest release of the Parma Polyhedra Library, a modern library for the manipulation of convex polyhedra and other numerical abstractions especially targeted at static analysis and verification of complex software and hardware systems.

Besides improving upon almost all of the functionality available since the release of PPL 0.7 in December 2004, the two recent releases of PPL 0.8 and PPL 0.9 have expanded the usefulness of the library by providing new features that should be of great interest to people working in the research fields mentioned above.  In particular, the latest release includes support for:

The Domain of Relational Rational Grids
The class Grid can represent and manipulate the set of solutions of a finite conjunction of congruence relations.  Elements of this domain can thus be used to encode information about the patterns of distribution of the values that program variables can take. The operations provided include everything that is needed in the field of static analysis and verification, including affine images, preimages and their generalizations, grid-difference, and widening operators. This is the first time that one of the most precise variants of the domain initially proposed by P. Granger has been made freely available to the community.  Moreover, as the PPL provides several polyhedral domains as well as full support for lifting any domain to its powerset, a user of the library can now experiment with the extra precision provided by abstract domain constructions and combinations, including not only Z-polyhedra but also grid-polyhedra, grid-bounded-differences and their powersets.
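To give a flavour of the underlying idea (this is a toy illustration, not the PPL's actual API), here is a minimal single-variable sketch in Python of the congruence domain behind grids: each abstract value is a residue class r (mod m), and the domain's least upper bound is computed with gcd. All names and the (r, m) representation are hypothetical.

```python
from math import gcd

# A single-variable "grid" is the set { r + k*m : k in Z }, written (r, m);
# m == 0 encodes the single concrete value r.
def member(x, c):
    r, m = c
    return x == r if m == 0 else (x - r) % m == 0

def join(c1, c2):
    """Least congruence class containing both arguments (the domain's lub)."""
    (r1, m1), (r2, m2) = c1, c2
    m = gcd(gcd(m1, m2), abs(r1 - r2))
    if m == 0:
        return (r1, 0)          # both arguments describe the same single value
    return (r1 % m, m)
```

For instance, joining 1 (mod 4) with 3 (mod 6) yields 1 (mod 2): the analysis learns that the variable is always odd. The full relational Grid domain generalizes this to conjunctions of congruences over several variables.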

The Domain of Bounded Difference Shapes
This is a subdomain of the domain of convex polyhedra providing coarse but efficient-to-compute approximations for large analysis problems. The domain can be instantiated over a wide range of data types, including exact arithmetic rationals or integers, bounded-precision floating point data types with controlled rounding, and bounded integer data types with systematic yet efficient overflow detection.  The full set of operations needed for static analysis and verification is supported, including the semantic widening operator defined in our SAS 2005 paper.
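Conceptually, a bounded-difference shape is a conjunction of constraints of the form v_j - v_i <= c, commonly stored as a difference-bound matrix and closed by a shortest-path computation. A minimal Python sketch of that closure (hypothetical names and layout, not the PPL's interface):

```python
import math
from itertools import product

INF = math.inf

def close(m):
    """Floyd-Warshall closure of a difference-bound matrix: entry m[i][j]
    bounds v_j - v_i, and closure makes every implied bound explicit."""
    n = len(m)
    for k, i, j in product(range(n), repeat=3):
        m[i][j] = min(m[i][j], m[i][k] + m[k][j])
    return m

# Index 0 acts as a constant-zero variable, so m[0][j] is an upper bound
# on v_j. Constraints below: x <= 3 and y - x <= 2 (x = v1, y = v2).
m = close([[0,   3,   INF],
           [INF, 0,   2],
           [INF, INF, 0]])
```

Closure derives the implied bound y <= 5 (afterwards m[0][2] == 5); real implementations such as the PPL's additionally instantiate the entries over the various data types listed above.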

A Linear Programming Solver Based on Exact Arithmetic
The class LP_Problem can be used to represent linear programming problems, check their satisfiability, and optimize them.  The solver is based on a primal simplex algorithm using exact arithmetic.
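To show why exact arithmetic matters here, the following toy solves a two-variable LP by exact vertex enumeration over Python's Fraction type. This is a stand-in illustration under assumed names, not the PPL's simplex or its API:

```python
from fractions import Fraction as F
from itertools import combinations

def solve_lp(A, b, c):
    """Maximize c.x over {x : A x <= b} in two variables by intersecting
    every pair of constraint boundaries exactly (Cramer's rule) and keeping
    the best feasible vertex. Returns (optimum, vertex), or None if empty."""
    best = None
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if det == 0:
            continue                    # parallel boundaries: no vertex
        x = (b1 * a2[1] - a1[1] * b2) / det
        y = (a1[0] * b2 - b1 * a2[0]) / det
        if all(ai[0] * x + ai[1] * y <= bi for ai, bi in zip(A, b)):
            value = c[0] * x + c[1] * y
            if best is None or value > best[0]:
                best = (value, (x, y))
    return best

# Maximize x + y subject to x >= 0, y >= 0, 3x + 3y <= 1.
A = [(F(-1), F(0)), (F(0), F(-1)), (F(3), F(3))]
b = [F(0), F(0), F(1)]
val, point = solve_lp(A, b, (F(1), F(1)))
```

With Fractions the optimum is exactly 1/3, where a floating-point solver would report 0.3333...; avoiding that kind of rounding is precisely the point of an exact-arithmetic simplex.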

Other enhancements include a new configuration program and Autoconf function making library usage even easier; several new operations useful for static analyzers; new output methods for debugging; improvements to the C and Prolog interfaces; several important portability improvements; and (only) a handful of bug fixes.

For more information, visit the PPL web site at

       http://www.cs.unipr.it/ppl/

The PPL core development team:

      Roberto Bagnara  <bagnara@cs.unipr.it>
      Patricia M. Hill <hill@comp.leeds.ac.uk>
      Enea Zaffanella  <zaffanella@cs.unipr.it>




Add Business Rules to your Excel Spreadsheet

Communicated by Mary Kroening, AMZI

 
Amzi! announces ARulesXL™, a new add-in for Microsoft™ Excel® that integrates decision support with computational analysis. This makes it possible to solve a wider variety of problems with spreadsheets including budget analysis, product configuration, workflow, advice, planning and pricing.

Rules in ARulesXL are easier to code, less costly to maintain, more reliable, and, most importantly, readily audited. Instead of difficult-to-read IF functions, rules are entered directly on the spreadsheet in a natural and easy-to-understand syntax. The rules are queried, much like a database. User inputs can be gathered from cells or forms, and the results can be displayed in reports or tables.

The ARulesXL syntax is designed to model business rules for pricing, configuration, underwriting, approvals, benefits, workflow, advice and similar applications. The rule syntax is compatible with Excel and VBA, providing seamless integration between non-procedural, pattern-matching rules and procedural spreadsheet calculations. Data is easily shared between rules and cells. ARulesXL supports multiple rule sets with objects, properties and inheritance to facilitate the decomposition of complex applications. It also provides facilities for dynamic document assembly, which are ideal for legal and government regulation applications.

ARulesXL makes it easy to write and maintain rules and provides its own menu of operations including a full rule debugger that illustrates the dynamic pattern matching that underlies rule execution. The next release will include the ability to export rules for use in web-based applications.

A major accounting firm is one of the first commercial users of ARulesXL and is using the product to develop equity compensation analysis tools. Visual Data LLC, vendor of Office Practicum™ pediatric office software, is using ARulesXL for the development and testing of rules used in vaccination planning. ARulesXL is ideal for Sarbanes-Oxley-compliant spreadsheets, with easy-to-read, auditable business rules that generate or verify results.

Additional information and a free 30-day evaluation are available from www.arulesxl.com. ARulesXL has an introductory price of $99 per copy, which includes product maintenance and support for one year. Quantity discounts are available for large sites.






Updated Report on Curry

Communicated by Michael Hanus

 
Dear Colleagues,

After our recent discussions on Curry, I have incorporated all of the results into a new update of the Curry report.
You'll find a short description of all changes and the new version via the Curry homepage

http://www.informatik.uni-kiel.de/~curry

or the report directly at

http://www.informatik.uni-kiel.de/~curry/papers/report.pdf

Since there are changes in several places, in particular the redefinition of the evaluation annotations (eval flex/rigid removed, ensureNotFree added, etc.), I have marked this version as preliminary so that I can easily include any corrections I receive in the next few days.

Best regards,

Michael




1st International Summer School on Dependable Adaptive Systems and Mathematical Modeling
DASMOD

Communicated by R. Korn

 
Excellent graduates and PhD students in mathematics and computer science are invited to attend our summer school on various topics in computer science, modeling, and applied mathematics.

Lecturers
Schedule
All lecturers start their lecture series with a basic introduction and then specialize in more advanced topics. During the two weeks, each participant may attend six courses, each consisting of about 10 hours of lectures plus exercises. Credits will be granted for successful participation.

Location
     University of Kaiserslautern, Germany

Application
     Limited to 50 students!
     Please visit:
     http://www.dasmod.de/~summerschool

Travel Expenses
     Participants will receive a full refund of their travel and housing expenses!

Scientific Organization
     Prof. Dr. R. Korn (University of Kaiserslautern, Germany)
     Prof. Dr. A. Poetzsch-Heffter (University of
     Kaiserslautern, Germany)




2 PhD Positions in Multi-agent Systems and Coordination

Communicated by Mehdi Dastani


The Intelligent Systems Group at Utrecht University and the theme SEN3 Coordination Languages at CWI have two positions for PhD students for four years. One PhD student will be employed by the Utrecht University and one by CWI. Both positions are within the research project "Coordination and Composition in Multi-Agent Systems (CoCoMAS)", recently funded by the Dutch national science organization, NWO (http://www.nwo.nl/nwohome.nsf/pages/NWOP_5SME25_Eng).

Job description
The candidates are expected to work on the design and development of compositional executable coordination models for multi-agent systems. The general perspective of this project is to provide the formal description of organizational structures of multi-agent systems in terms of exogenous coordination models. In particular, the aim is to develop and implement integrated programming languages (syntax and semantics) for exogenous coordination of roles, individual agents, and multi-agent systems. The context of this project is given by an integration and implementation of extensions of the existing agent-oriented programming language 3APL and the exogenous coordination language REO. Intended applications include the organizational structures in incident management and business processes. More information on this project can be found at the following webpage: http://www.cs.uu.nl/~mehdi/cocomas/index.html

This research spans two separate groups: Intelligent Systems at Utrecht University and SEN3 Coordination Languages at CWI in Amsterdam. The candidates will work together and will have regular meetings. The activities in CoCoMAS involve both systems-oriented and theoretical work in multi-agent systems and software engineering. The CoCoMAS project complements the theoretical and technological framework of other current projects within these two groups. The candidates should have at least a master's degree in computer science or artificial intelligence, with a background in multi-agent systems, logic, software engineering, concurrency and distributed systems, or practical software development.

Faculty of Science at Utrecht University &
Scientific Cluster of Software Engineering at CWI
The Faculty of Science consists of six departments: Biology, Chemistry, Information and Computing Sciences, Mathematics, Pharmaceutical Sciences and Physics and Astronomy. The Faculty is home to 3500 students and nearly 2000 staff, and is internationally renowned for the quality of its research. The faculty's academic programmes reflect current developments in today's society.

CWI is an internationally renowned research institute in mathematics and computer science, located in Amsterdam, The Netherlands. The theme SEN3 Coordination Languages of the Scientific Cluster of Software Engineering at CWI is a dynamic group of internationally recognized researchers who work on Coordination Models and Languages and Component-Based Software Composition. The activity in SEN3 is a productive, healthy mix of theoretical, foundational, and experimental work in Computer Science, ranging in a spectrum covering mathematical foundations of models of computation, formal methods and semantics, implementation of advanced research software systems, as well as their real-life applications.

Terms of employment
The salary at Utrecht University will be € 1,933 gross per month during the first year, rising to € 2,472 during the fourth year. The salary is supplemented with a holiday bonus of 8% and an end-of-year bonus of 3% per year. In addition, we offer a pension scheme, parental leave, facilities for childcare, and flexible employment conditions in which you may trade vacation days for extra compensation. Conditions are based on the Collective Employment Agreement of the Dutch Universities.

The salary and terms at CWI are in accordance with the Dutch "CAO-onderzoeksinstellingen".  The initial salary for a PhD student is € 1,848 gross per month, growing to € 2,367 in the fourth year. The salary is supplemented with a holiday bonus of 8% and an end-of-year bonus of 4.1%. Moreover, CWI offers very attractive working conditions, including flexibility and help with housing for foreigners.

Further details
More information can be obtained from Dr. Mehdi Dastani, telephone +31-30-253-3599, e-mail mehdi@cs.uu.nl, and Dr. F.S. de Boer, telephone +31-20-592-4139, e-mail F.S.de.Boer@cwi.nl.

How to apply
To apply, please send a statement of your interest, together with a curriculum vitae, letters of reference, and possibly a list of publications, to: Head of Personnel and Organization, Buys Ballotlaboratorium, Princetonplein 5, 3584 CC Utrecht, The Netherlands, and send an e-mail to P&O@phys.uu.nl (with a cc to mehdi@cs.uu.nl and F.S.de.Boer@cwi.nl). Please mention vacancy number 62603, and make sure that you specifically mention the CoCoMAS project.







Theoretical Computer Science
Special Issue on Quantitative Aspects of Programming Languages and Systems

Communicated by Herbert Wiklicky

 
We invite the submission of papers on Quantitative Aspects of Programming Languages and Systems for publication in a special issue of the journal Theoretical Computer Science (TCS). We welcome papers that are revised versions of work submitted to and presented at the QAPL 2006 Workshop (Vienna, Austria, April 1-2), as well as papers not presented at QAPL 2006, provided they fall within the scope of the call and contain a clear and novel contribution to the field.

SCOPE
Quantitative aspects of computation are important and sometimes essential in characterising the behaviour and determining the properties of systems. They relate to the use of physical quantities (storage space, time, bandwidth, etc.) as well as mathematical quantities (e.g. probability, and measures for reliability, risk and trust). Such quantities play a central role in defining both the models of systems (architecture, language design, semantics) and the methodologies and tools for the analysis and verification of system properties. This special issue will be devoted to work that discusses the explicit use of quantitative information, such as time and probabilities, either directly in the model or as a tool for the analysis of systems. In particular, contributions should focus on
TOPICS
Topics include (but are not limited to) probabilistic, timing and general quantitative aspects in

SUBMISSION
Papers should be 20-25 pages long, including appendices, and should be formatted according to Elsevier's elsart document style used for articles in Theoretical Computer Science. Papers should be submitted electronically in PDF format to qapl06 `at' di.unipi.it. Note that for the camera-ready version of a paper, the self-contained LaTeX sources will be needed, as well as a printable PostScript or PDF version.

Important dates:
Guest Editors:

Alessandra Di Pierro
University of Pisa
dipierro `at' di.unipi.it

Herbert Wiklicky
Imperial College London
herbert `at' doc.ic.ac.uk





European Masters Program in Computational Logic

Communicated by Enrico Franconi


http://www.computational-logic.eu

The Faculty of Computer Science at the Free University of Bozen-Bolzano (FUB), in Italy (at the heart of the Dolomites mountains in South Tyrol), is offering the European Masters Program in Computational Logic as part of its Master in Computer Science offer (Laurea Specialistica). The European Masters Program in Computational Logic is an international distributed Master of Science course, in cooperation with the computer science departments of the following universities:
This program, taught completely in English, involves studying one year at the Free University of Bozen-Bolzano and possibly completing the second year with a stay at one of the partner universities. After this, the student will obtain, together with the European degree, two Master of Science degrees: the Laurea Specialistica degree from the Free University of Bozen-Bolzano, with legal value in Italy, and the respective Master of Science degree from the visited university, with legal value in its country.

APPLICATION DEADLINES:

SCHOLARSHIPS & MONEY SUPPORT:
EU citizens, or non-EU citizens resident in Italy, can apply for scholarships, which are granted purely on the basis of the yearly income of the applicant and of his/her parents or spouse. Scholarships amount to up to 6,283 EUR per academic year, including accommodation facilities and total reimbursement of the enrolment fees.

The Free University of Bozen-Bolzano has a tuition-waiver budget for reimbursing the total tuition/enrolment fees of all applicants studying in Bozen-Bolzano who do not already have a scholarship; thus, applicants studying in Bozen-Bolzano will pay no tuition/enrolment fees.

In 2006 the European Masters Program in Computational Logic offers 13 new Erasmus Mundus scholarships for non-European citizens (in 2004 and 2005 almost 50 Erasmus Mundus scholarships were offered). Each scholarship amounts to 21,000 EUR per academic year; this includes 10 monthly grants of 1,600 EUR and a fixed amount of 5,000 EUR for fees, travel expenses, relocation costs, etc. 10 February 2006 was the final deadline for requesting Erasmus Mundus scholarships for non-European students.

Check the web page for detailed info on applications and scholarships: http://www.inf.unibz.it/mcs/emcl/

THE STUDY PROGRAMME:
The European Masters Program in Computational Logic is designed to meet the demands of industry and research in this rapidly growing area. Based on a solid foundation in mathematical logic, theoretical computer science, artificial intelligence and declarative programming, students will acquire the in-depth knowledge necessary to specify, implement and run complex systems, as well as to prove properties of these systems. In particular, the focus of instruction will be on deduction systems, knowledge representation and reasoning, artificial intelligence, formal specification and verification, syntax-directed semantics, logic and automata theory, and logic and computability. This basic knowledge is then applied to areas like logic and natural language processing, logic and the Semantic Web, bioinformatics, information systems and database technology, and software and hardware verification. Students will acquire practical experience and will become familiar with the use of tools within these applications. In addition, students will be prepared for a future PhD: they will come in contact with the international research community and will be integrated into ongoing research projects. They will also develop competence in foreign languages and international relationships, thereby improving their social skills.

Applicants should have a Bachelor degree (Laurea triennale) in Computer Science, Computer Engineering, or another relevant discipline; special cases will be considered. The programme is part of the Master in Computer Science (Laurea Specialistica in Informatica) and has various strengths that make it unique amongst Italian and European universities:
The European Masters Program in Computational Logic is one of the few European Masters courses awarded by the European Union's Erasmus Mundus programme from its first year of existence in 2004. The Erasmus Mundus programme is a co-operation and mobility programme in the field of higher education which promotes the European Union as a centre of excellence in learning around the world. It supports European top-quality Masters courses and enhances the visibility and attractiveness of European higher education in third countries. It also provides EU-funded scholarships for third-country nationals participating in these Masters courses, as well as scholarships for EU nationals studying in third countries.

The European Masters Program in Computational Logic is sponsored scientifically by the European Network of Excellence on Computational Logic (CoLogNET), the European Association of Logic, Language and Information (FoLLI), the European Coordinating Committee for Artificial Intelligence (ECCAI), the Italian Association for Artificial Intelligence (AI*IA), the Italian Association for Informatics (AICA, member of the Council of European Professional Informatics Societies), the Italian Association for Logic and its Applications (AILA), and the Portuguese Association for Artificial Intelligence (APPIA).

THE FREE UNIVERSITY OF BOZEN-BOLZANO:
The Free University of Bozen-Bolzano, founded in 1997, boasts modern premises in the centre of Bozen-Bolzano. The environment is multilingual, South Tyrol being a region where three languages are spoken: German, Italian and Ladin. Studying in a multilingual area has been shown to give our students the cutting edge needed in the international business world. Many of our teaching staff hail from abroad. Normal lectures are complemented with seminars, work placements and laboratory work, which give our students a vocational as well as theoretical training, preparing them for their subsequent professional careers. Studying at the Free University of Bozen-Bolzano means, first and foremost, being guided all the way through the student's educational career.
Bozen-Bolzano, due to its enviable geographical position in the centre of the Dolomites, also offers our students a multitude of opportunities for spending their free time. The city unites the traditional with the modern. Young people and fashionable shops throng the city centre, where ancient mercantile buildings are an attractive backdrop to a city that is in continual growth. To the south there is the industrial and manufacturing area, with prosperous small and medium-sized businesses active in every economic sector. Back in the 17th century Bozen-Bolzano was already a flourishing mercantile city that, thanks to its particular geographic position, functioned as a kind of bridge between northern and southern Europe. As a multilingual town and a cultural centre, Bozen-Bolzano still has a lot to offer today. Its plethora of theatres, concerts with special programmes, cinemas and museums, combined with a series of trendy night spots that create local colour, make Bozen-Bolzano a city that is beginning to cater for its increasingly demanding student population. And if you fancy a very special experience, go and visit the city's favourite and most famous resident - "Oetzi", the Ice Man of Similaun, housed in his very own refrigerated room in the recently opened archaeological museum.
Bozen-Bolzano and its surroundings are an El Dorado for sports lovers: jogging on the grass alongside the River Talfer-Talvera, walks to Jenesien-S.Genesio and on the nearby Schlern-Sciliar plateau, excursions and mountain climbing in the Dolomites, swimming in the numerous nearby lakes and, last but not least, skiing and snowboarding in the surrounding ski areas.

FURTHER INFORMATION:

Prof. Enrico Franconi at franconi@inf.unibz.it
European Masters Program in Computational Logic
part of the MSc in Computer Science (Laurea Specialistica).
Faculty of Computer Science
Free University of Bozen-Bolzano
Piazza Domenicani, 3
I-39100 Bozen-Bolzano BZ, Italy

Phone: +39 0471 016 000
Fax: +39 0471 016 009
Web site: http://www.computational-logic.eu




Book Announcement
Parameterized Complexity Theory

Communicated by Kreutzer+Schweikardt

 

Parameterized Complexity Theory
By J. Flum and M. Grohe
Springer Verlag 2006, 493 pages
ISBN: 3540299521

Parameterized complexity theory is a recent branch of computational complexity theory that provides a framework for a refined analysis of hard algorithmic problems. The central notion of the theory, fixed-parameter tractability, has led to the development of various new algorithmic techniques and a whole new theory of intractability.
This book is a state-of-the-art introduction to both algorithmic techniques for fixed-parameter tractability and the structural theory of parameterized complexity classes, and it presents detailed proofs of recent advanced results that have not appeared in book form before. Separate chapters are devoted to intractability, algorithmic techniques for designing fixed-parameter tractable algorithms, bounded fixed-parameter tractability, and subexponential time complexity. The treatment is comprehensive, and the reader is supported with exercises, notes, a detailed index, and some background on complexity theory and logic.

Further information can be found at
http://www2.informatik.hu-berlin.de/~grohe/pub/pkbuch.html




Postdoctoral Research Positions in Flanders

Communicated by Tom Schrijvers

 
The Flemish research network "Declarative Methods in Informatics" is looking for applicants for postdoctoral positions. The network is supported by the FWO Research Foundation of Flanders.

The network can submit candidacies for Visiting Postdoctoral Fellowships to the FWO.  Junior postdoctoral fellowships are for candidates who completed their PhD less than 5 years ago; senior postdoctoral fellowships are for candidates who obtained their PhD between 5 and 10 years ago. Junior fellows receive a monthly grant of 1,570 EUR; senior fellows a monthly grant of 2,070 EUR. Grants are free of taxes; social security is covered by the FWO.

The fellowships are for periods from 3 to 12 months. At most three months elapse between submission of the application and the decision.

Successful candidates have a strong publication list and a good project proposal with clear added value for the hosting research group.  If interested, get in touch with one of the contact persons.

The partners of the network are:

  1. Declarative Languages and Artificial Intelligence of the Katholieke Universiteit Leuven. http://www.cs.kuleuven.ac.be/~dtai/
    Staff members: Hendrik Blockeel, Maurice Bruynooghe, Bart Demoen, Marc Denecker, Danny De Schreye, Luc De Raedt and Gerda Janssens
    Main research areas: Machine Learning; Knowledge Representation and Reasoning; and the Design, Analysis and Implementation of Declarative Programming Languages
    Contact: Maurice.Bruynooghe@cs.kuleuven.be
  2. Advanced Database Research and Modelling of the Universiteit Antwerpen. http://www.adrem.ua.ac.be/
    Staff member: Jan Paredaens
    Main research areas: XML Databases and Data Mining
    Contact: Jan.Paredaens@ua.ac.be
  3. Theoretical Computer Science of the Universiteit Hasselt. http://www.uhasselt.be/theocomp/
    Staff members: Marc Gyssens, Bart Kuijpers, Frank Neven and Jan Van den Bussche
    Research topics: Constraint Satisfaction Problems, Data Mining, Data on the Web, Database Theory, Finite Model Theory, Geometrical Data, Object-Oriented Databases, and Temporal Data.
    Contact: jan.vandenbussche@uhasselt.be
  4. Web & Information System Engineering of the Vrije Universiteit Brussel. http://wise.vub.ac.be/
    Staff members: Olga De Troyer and Geert-Jan Houben.
    Research area: Web query and transformation languages, Web data integration and distribution, adaptation and personalization in Web applications, (Semantic) Web information system architecture.
    Contact: gjhouben@vub.ac.be



EPSRC Vacancies in Automated Reasoning, Web Ontologies, and Multi-agent Systems

Communicated by Renate Schmidt

 
VACANCIES:
1 Post-Doc Research Associate
1 PhD Research Studentship

CLOSING DATE: 31 May 2006
START DATE: Beginning of September 2006

Applications are invited for the position of a research associate (Manchester) and the position of a research student (Liverpool) to work on the EPSRC funded project `Practical Reasoning Approaches for Web Ontologies and Multi-Agent Systems'. This is a three year collaborative research project between the School of Computer Science at the University of Manchester and the Department of Computer Science at the University of Liverpool.

The positions are part of a research programme which aims to develop a powerful and versatile logic engineering platform comprising various tools to support both users and developers of logic theories, formal specification frameworks, and automated reasoning formalisms in carrying out logical reasoning. The project will develop, study and implement practical resolution-based approaches for reasoning about expressive web ontology languages and expressive agent logics. The research at Manchester will focus on reasoning for the Semantic Web, ontologies, and description logics, while the research at Liverpool will focus on reasoning for multi-agent systems.

RESEARCH ASSOCIATE POSITION
Based at Manchester, the appointment will be on a fixed-term contract for 36 months. The post is available from 1 September 2006, or a later date by negotiation.
Salary: Up to 28,009 Pounds p.a.

Full details:
http://www.cs.man.ac.uk/~schmidt/womas/

Informal enquiries:
Dr. Renate Schmidt, email: schmidt@cs.man.ac.uk

RESEARCH STUDENTSHIP
Based in Liverpool, the studentship's EPSRC funding includes PhD registration fees at the EU/UK level as well as a stipend towards the cost of living, both for the three-year duration of the project.

Full details:
http://www.csc.liv.ac.uk/~ullrich/epsrc_phd_2006.html

Informal enquiries:
Dr Ullrich Hustadt, email: U.Hustadt@csc.liv.ac.uk




European Agent Systems Summer School
EASSS 2006

Communicated by Marc-Philippe Huget

 
8th European Agent Systems Summer School  (EASSS 2006)
Annecy, France, 17 - 21 July 2006

http://www.esia.univ-savoie.fr/easss06

EASSS 2006 consists of a mixture of introductory and advanced courses delivered by internationally leading experts in the multi-agent systems field, and it covers the full range of theoretical and practical aspects of multi-agent systems. The final programme is available on the EASSS'06 website.

This summer school is open to anyone from research or industry. A modest registration fee will be charged to cover costs such as proceedings, lunches, the social dinner, and the welcome reception. A limited number of cheap accommodations (25 euro) are available and will be allocated on a first-come, first-served basis. There are several cheap connections from European cities to Annecy. Please see the EASSS'06 website for further information about registration, travel and accommodation.

Since we anticipate popular demand for this edition of EASSS, and several events (e.g., a street festival) are organised in the same week in Annecy, attendees are advised to book their local accommodation and travel as soon as possible.




Postdoctoral Research Position in Agent Verification

Communicated by Michael Fisher

 
Applications are invited for the position of post-doctoral researcher working on an EPSRC-funded research project entitled "Model Checking Agent Programming Languages". The project will be carried out in an internationally recognised research department, aims to develop sophisticated agent verification techniques, and will involve collaboration with the University of Durham, UK.

Candidates should have, or be soon to receive, a PhD in logical aspects of Computer Science or Artificial Intelligence. A research background in agent-based systems (ideally agent programming languages) or formal verification (ideally model checking) is desirable.

The post is available for 30 months. The salary is £26,401 per annum.

Informal enquiries to Prof. Michael Fisher [ M.Fisher@csc.liv.ac.uk ]. Also see http://www.csc.liv.ac.uk/~michael/mcapl06.html for further information.

For full details, or to request an application pack, visit http://www.liv.ac.uk/university/jobs.html or e-mail jobs@liv.ac.uk. Alternatively, tel. (+44) 151 794 2210 (24 hr answerphone).

Please quote Ref: B/727 in all enquiries. Closing Date: 19 May 2006.




International School on Rewriting

Communicated by Laurent Vigneron

 
Overview
Rewriting is a fundamental concept and tool in computer science, logic and mathematics. It models the notion of transition or elementary transformation of abstract entities as well as of common data structures such as terms, strings and graphs. Rewriting is central to computation as well as deduction, and is a crucial concept in the semantics of programming languages as well as in proof theory. This results from a long tradition of cross-fertilization with the lambda-calculus and automated reasoning research communities.
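As a small taste of the concept (a toy illustration, not school material): a string-rewriting system repeatedly replaces an occurrence of some rule's left-hand side until no rule applies, and the result is a normal form. A minimal Python sketch, with hypothetical names:

```python
def normalize(s, rules, max_steps=1000):
    """Apply string-rewriting rules (lhs -> rhs), one leftmost step at a
    time, until no rule matches; the result is a normal form of s."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)   # one rewrite step
                break
        else:
            return s                         # no rule applies: normal form
    raise RuntimeError("step limit reached; system may be non-terminating")
```

For example, the single rule "ab" -> "ba" sorts a string of a's and b's, moving every b in front of every a: normalize("abab", [("ab", "ba")]) yields "bbaa". Questions such as whether every input reaches a normal form (termination) and whether the normal form is unique (confluence) are central themes of the school.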

This first International School on Rewriting is organized for Master and PhD students, researchers and practitioners interested in the study of rewriting concepts and applications.

This school is supported by the IFIP Working Group on Term Rewriting. The lectures will be given by some of the best experts on rewriting (termination, higher-order systems, strategies, ...) and applications (security, theorem proving, program analysis and proofs,...).

Invited Speakers
Franz Baader, Dresden University, Germany
Hubert Comon, ENS Cachan, France
Gilles Dowek, Ecole Polytechnique & INRIA, France
Juergen Giesl, Aachen University, Germany
Christopher Lynch, Clarkson University, USA
Claude Marche, University of Paris XI & INRIA, France
Pierre-Etienne Moreau, INRIA & LORIA, France
Vincent van Oostrom, Utrecht University, Netherlands
Detlef Plump, University of York, UK
Femke van Raamsdonk, Free University Amsterdam, Netherlands
Michael Rusinowitch, INRIA & LORIA, France

Committees
Scientific Committee:
* Juergen Giesl (Aachen)
* Claude Kirchner (Nancy)
* Pierre Lescanne (Lyon)
* Christopher Lynch (Potsdam)
* Aart Middeldorp (Innsbruck)
* Femke van Raamsdonk (Amsterdam)
* Yoshihito Toyama (Sendai)

Local Organization Committee:
* Anne-Lise Charbonnier
* Pierre-Etienne Moreau
* Anderson Santana de Oliveira
* Laurent Vigneron (chair)

Important dates
Deadline for early registration: May 31st, 2006.
Deadline for late registration: June 23rd, 2006.

Further Information
For up-to-date details on the school organization, visit the official web page:

http://isr2006.loria.fr/

or contact the organizers by e-mail: isr2006(at)loria(dot)fr




Answer Set Programming for the Semantic Web

Communicated by Roman Schindlauer

 
I would like to announce a side event at ESWC 06, which will be held in Budva, Montenegro, next June. There, we will have a full-day tutorial on

  "Answer Set Programming for the Semantic Web"

on June 11th, where we will present ASP, a declarative programming paradigm with its roots in Knowledge Representation and Logic Programming, in the context of Semantic Web applications. Attendees will practice through an online interface using one of the state-of-the-art ASP solvers and some of its extensions.
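For readers who have not met the paradigm before, the following small Python sketch (purely illustrative, not tutorial material, with a hypothetical two-rule program hard-coded) brute-forces the answer sets of { p :- not q.  q :- not p. } via the Gelfond-Lifschitz reduct:

```python
from itertools import combinations

# A rule is (head, positive_body, negative_body); atoms are strings.
# Hard-coded example program:  p :- not q.   q :- not p.
rules = [("p", [], ["q"]),
         ("q", [], ["p"])]
atoms = {"p", "q"}

def least_model(definite_rules):
    """Fixpoint of the immediate-consequence operator of a definite program."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in definite_rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def is_answer_set(candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body meets the
    # candidate; the surviving rules keep only their positive bodies.
    reduct = [(h, pb) for h, pb, nb in rules if not set(nb) & candidate]
    return least_model(reduct) == candidate

answer_sets = [set(c) for r in range(len(atoms) + 1)
               for c in combinations(sorted(atoms), r) if is_answer_set(set(c))]
print(answer_sets)   # [{'p'}, {'q'}]: the program's two answer sets
```

Real ASP solvers of course use far cleverer search than this exponential enumeration; the tutorial's online interface lets attendees run such a solver directly.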

The tutorial will be held by:

For details and registration, see
http://www.eswc2006.org/tutorials.html#tutorial1

with best regards,
Roman



Special Issue, ACM Transactions on Autonomous and Adaptive Systems
The Agentlink Perspective on Autonomous and Adaptive Systems

Communicated by Andrea Omicini

 
AIMS & SCOPE
AgentLink has just recently closed after eight years of fruitful and successful activity as a series of European Commission (EC)-sponsored projects. In particular, AgentLink III has worked over the last two years to put Europe at the leading edge of international competitiveness in the increasingly important area of agent technologies. To this end, and in order to support co-ordination and collaboration of European research efforts, AgentLink III established a series of research meetings, called the AgentLink III Technical Forums (TFs).

Each of these Technical Forums included a number of parallel workshops, called Technical Forum Groups (TFGs), on specific hot research topics suggested in response to a call for proposals issued before each Technical Forum meeting. Three TF events were held under AgentLink III: in Rome, Italy; in Ljubljana, Slovenia; and in Budapest, Hungary. Participants came mostly from European and neighbouring countries, but also from Australia, Japan and the USA. The outcomes of the TFG activity were outstanding, in terms of both the communities that developed around the TFGs and the scientific results that emerged from the discussions at the TFG meetings.

AgentLink now aims to collect, select and publish the most important advances produced by the AgentLink Technical Forum activity that fall in the area of Autonomous and Adaptive Systems, within the prestigious ACM Transactions on Autonomous and Adaptive Systems (TAAS). We are therefore soliciting submissions that summarise and expand on the TF activities, and encourage potential authors to visit the web pages of TAAS (http://esoa.unige.ch/taas/) to understand the aims and scope of this journal.


AUTHORS & THEMES
Potential authors for this special issue should be researchers who have actively participated in the AgentLink Technical Forum activities in the last two years, along with their colleagues and usual collaborators. Each submission should be accompanied by a message (mail or letter) explicitly stating that at least one of the authors has actively participated in the activity of one or more of the AgentLink III TFGs, and that the article submitted develops along the lines elaborated in the TFG discussion. All papers should present original research and innovative results.

Themes of the submitted articles should first of all fall within one  or more of the areas that were the subject of the scientific work of  the 14 AgentLink III TFGs:
Submitted papers should also explicitly indicate which (one or more)  TFG activity was the origin of the paper contribution.
For more precise information, please consult the AgentLink III TFG  pages at (http://www.agentlink.org/activities/tfg/).

Naturally, papers submitted to the special issue should also focus on themes which are of interest to the hosting journal: potential authors should carefully take into account the TAAS aims and scope as specified in the journal web pages at (http://esoa.unige.ch/taas/indexr.html).


SUBMISSIONS
Submissions should fall in any of the areas of interest of TAAS, be  related to at least one of the AgentLink III TFGs, and should include  at least one author who actively participated in one or more of the  AgentLink III TFG activities over the past two years. To this end,  submissions should be accompanied by a message stating the  involvement of the authors (at least one) in the AgentLink III TFG  activities, and indicating the reference TFGs (at least one, possibly  more than one) for the paper.

Papers should be formatted according to the "Instruction for  Submission" at (http://esoa.unige.ch/taas/authors.html) and be sent  in PDF format to

    Andrea Omicini <andrea.omicini@unibo.it>
    
*** BEFORE July 15th, 2006 ***

Submitted papers will be reviewed by at least three different expert  reviewers, coming from both the AgentLink community and the TAAS  editorial board.

The review process will be driven by the Guest Editors of this  Special Issue (Peter McBurney, Andrea Omicini, Terry Payne and Paolo  Petta) and by the Editor-in-chief, Giovanna Di Marzo Serugendo.

Final versions of accepted papers should be ready for publication by  the end of 2006.

For any clarification please contact Andrea Omicini at the address  above.






Ciao Prolog
Patch 7 Version 1.10

Communicated by Ciao Prolog Team

 
We have put on the Ciao webpage (http://www.clip.dia.fi.upm.es/Software/Ciao/) a new patch (patch 7) to Ciao 1.10. This corrects the behavior of Ciao on some modern Linux kernels, where memory management has changed somewhat, affecting Ciao (and also other applications). Some users may have experienced "segmentation violation" messages on startup due to this. We hope these problems are solved with the update. Please let us know of any problems you may find.

We would like to take this opportunity to let you all know that, although we have chosen to keep the 1.10 distribution stable for a while, publishing only bug fixes, in the meantime there has been continued and very intense activity on many parts of the system, adding a long list of features and libraries which we hope will be of interest, as well as a number of bug fixes.

We will be releasing a beta version (probably a beta for 1.13) so that those interested can try things out ahead of the next stable release.




Horizons of Truth
Logics, Foundations of Mathematics, and the Quest for Understanding the
Nature of Knowledge

Goedel Centenary 2006

Communicated by Vesna Sabljakovic-Fritz

 
An International Symposium Celebrating the 100th Birthday of Kurt Gödel, April 27-29, 2006, Festsaal of the University of Vienna
 Organized by the Kurt Gödel Society
 
The Symposium will take place April 27-29 in the Celebration Hall of the University of Vienna, famous for its architectural beauty and Gustav Klimt’s murals. More than 20 lectures by eminent scientists in the fields of logic, mathematics, philosophy, physics, cosmology, and theology will provide new insights into the life and work of Kurt Gödel and their implications for future generations.
A young researchers’ competition will allow 10 young researchers to present their projects to an eminent audience, with the awards being presented at Belvedere Palace on Friday, April 28, Kurt Gödel's 100th birthday. The first prize, in the amount of 20,000 EUR, will be awarded to the best project, followed by two additional prizes of 5,000 EUR each. Call for project submissions: http://www.logic.at/goedel2006/index.php?students

A poster session, containing presentations of selected posters, will be held in the Small Celebration Hall and the Senate Hall of the University of Vienna. The poster volume will be published by the Kurt Gödel Society after the Symposium and sent to all authors.
Call for posters: http://www.logic.at/goedel2006/index.php?posters
An exhibition on Gödel’s life and work will be held in the old library of the University of Vienna throughout the Symposium.
 
Honorary Chair:
Gaisi Takeuti, University of Tsukuba, President of the Kurt Gödel Society
Invited Speakers:
John D. Barrow, University of Cambridge
Gregory J. Chaitin, IBM Thomas J. Watson Research Center, New York
Paul Cohen, Stanford University
Jack Copeland, University of Canterbury, New Zealand
George Ellis, University of Cape Town
Solomon Feferman, Stanford University
Harvey Friedman, Ohio State University
Ivor Grattan-Guinness, Middlesex University
Petr Hajek, Academy of Sciences of the Czech Republic
Juliette Kennedy, University of Helsinki
Ulrich Kohlenbach, Darmstadt University of Technology
Georg Kreisel, Royal Society
Angus John Macintyre, Royal Society
Piergiorgio Odifreddi, University of Turin
Christos H. Papadimitriou, University of California, Berkeley
Roger Penrose, University of Oxford
Hilary Putnam, Harvard University
Wolfgang Rindler, University of Texas, Dallas
Dana Scott, Carnegie Mellon University
Avi Wigderson, Institute for Advanced Study, Princeton
Hugh Woodin, University of California, Berkeley
Invited Presentation: Andrei Voronkov, University of Manchester and Microsoft
 
Web and registration: http://www.logic.at/goedel2006/





European Master's Program in Computational Logic

Communicated by Int. Center for Computational Logic

 
The Free University of Bozen-Bolzano in Italy, the Technische Universität Dresden in Germany, the Universidade Nova de Lisboa in Portugal, the Universidad Politecnica de Madrid in Spain and the Technische Universität Wien in Austria are jointly offering a two-year European Master's program in Computational Logic. The program is supported by the European Union within the Erasmus Mundus initiative, which means that we can support non-EU students with a grant of EUR 42,000. Students select two of the five partner universities and study one year at each of the selected sites. If successful, they receive a double master's degree. The course language is English, and our students come from all over the world.

If you are a student in Computer Science or a related subject, are about to finish your Bachelor's or an equivalent degree, and have a strong interest in logic-related subjects, then you should consider joining the program.

If you are not a student then please forward this email to interested students.  Many thanks.

For details please check out our web pages under
          http://european.computational-logic.org/
Please note that the deadline for applications for the Erasmus Mundus grant is Feb. 10, 2006.

The EU has given us additional scholarships for students from India, Thailand, Malaysia, China and all other Asian countries. So please apply. You will find all relevant information on our above-mentioned web pages.





EPSRC-Funded Project - PhD and MS Positions

Communicated by Marcus Roggenbach

 
In the EPSRC-funded project `Data and Processes' (see
   http://www.cs.swan.ac.uk/~csmarkus/EPSRC/dataAndprocesses.html)
I have funding available for
    1 PhD student and 1 MPhil student.
I would be happy if you could point this project out to interested students.

With kind regards,
   Markus Roggenbach

P.S.: The positions will shortly be advertised at
    http://www.jobs.ac.uk/





ICCL Summer School on Knowledge Structures

Communicated by Steffen Holldobler

 
Dates: June 24 - July 8, 2006
Location: Technische Universität Dresden
http://www.computational-logic.org/iccl-ss-2006 

TOPIC
The topic of this year's summer school is
KNOWLEDGE STRUCTURES
It is common wisdom that the still-growing power of digital data processing greatly enhances the wealth of human knowledge and will continue to do so. A precondition for this is, however, that knowledge is encoded and represented in a computer-accessible manner, such that it can be algorithmically processed. This requires, in turn, the use of appropriate formal structures for knowledge representation and knowledge processing. Such structures, called `Knowledge Structures', will be the topic of this year's ICCL summer school. There are many approaches to this topic, ranging from formal logics to mathematical and data-mining methods. The summer school's focus is on the following three areas:
The basic ideas of these areas will be introduced and discussed, with the aim of providing a broad methodological repertoire for future research and applications.

REGISTRATION 
If you want to attend the summer school, we would prefer that you register by March 18, 2006 (see the online registration on the web page mentioned above). For all who want to apply for a grant, this deadline is obligatory. After March 18, 2006, registration will remain possible as long as there are vacant places. (Since we intend to restrict participation to about 60 people, in case of excessive demand we will have to close registration to the summer school.) People registering by March 18, 2006, and applying for a grant will be informed about the decisions on grants by the end of March 2006.

FEES 
We ask for a participation fee of 150 EUR.

GRANTS
A limited number of grants may be available; please indicate in your application if the only possibility for you to participate is via a grant. Applications for grants must include an estimate of travel costs and should be sent together with the registration.

INTEGRATED WORKSHOP
It will be possible for some participants to present their research work during a small workshop integrated in the summer school. If you would like to do so, please register by means of the online workshop registration form on the web page mentioned above.

COURSE PROGRAM 
Finger Exercises in Formal Concept Analysis. Bernhard Ganter (Technische Universität Dresden)
Knowledge, Reasoning, and the Semantic Web. Pascal Hitzler (AIFB Universität Karlsruhe)
Text clustering and Semantic Web mining. Andreas Hotho (Universität Kassel)
Inductive Logic Programming. Stefan Kramer (Technische Universität München)
Cluster Methodology. Sabine Krolak-Schwerdt (Universität Saarbrücken)
Machine Learning and Formal Concept Analysis. Sergei Kuznetsov (VINITI, Moscow, Russia)
Reasoning in Description Logics. Franz Baader (Technische Universität Dresden)

PEOPLE INVOLVED
Chairs of the ICCL Summer School 2006
Organizing Committee



Doctoral/Post-Doctoral Position in
Constraint Solving and Programming
University of Ulm, Germany

Communicated by Thom Fruehwirth

 
A research position is available for Ph.D. students or postdocs at the Faculty of Computer Science, University of Ulm, Department of Software Engineering and Compiler Construction in the area of Constraint Handling Rules (CHR).

The successful applicant will be responsible for a research project funded by the German Research Foundation (DFG). Its topic is the specification, generation, implementation and analysis of rule-based algorithms for solving global constraints in CHR. The position thus requires knowledge of and interest in one or more of the aforementioned areas.
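To give a rough flavour of rule-based constraint solving (an illustrative sketch only, not the project's actual CHR code), the classic leq example propagates order constraints with the transitivity rule leq(X,Y), leq(Y,Z) ==> leq(X,Z) and reads off equalities via antisymmetry; in Python:

```python
# Hypothetical CHR-flavoured propagation for leq/2 constraints, stored as a
# set of (x, y) pairs meaning "x <= y".

def propagate(constraints):
    """Apply the transitivity rule to a fixpoint, then collect the equalities
    implied by antisymmetry (leq(X,Y) and leq(Y,X) force X = Y)."""
    store = set(constraints)
    changed = True
    while changed:
        changed = False
        for x, y in list(store):
            for y2, z in list(store):
                if y == y2 and (x, z) not in store:   # leq(X,Y), leq(Y,Z) ==> leq(X,Z)
                    store.add((x, z))
                    changed = True
    equalities = {frozenset((x, y)) for x, y in store
                  if x != y and (y, x) in store}
    return store, equalities

store, eqs = propagate([("a", "b"), ("b", "c"), ("c", "a")])
print(sorted(map(sorted, eqs)))   # [['a', 'b'], ['a', 'c'], ['b', 'c']]
```

The advertised project goes well beyond such hand-written loops: it concerns systematically generating and analysing rule-based algorithms of this kind for global constraints.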

At least a master's degree or equivalent in computer science or a strongly related area is required. The initial appointment will be for two years, beginning as soon as possible, with a possible renewal for another two years depending on the evaluation of the research foundation.

Gross salary is 56,400 Euro a year according to assistant position BAT IIa. It includes health and social insurance. Additional money for conference travel and a visit to the advanced summer school on global constraints will be provided.

Ulm is a pleasant city of 150,000 on the Danube, with nearby rocks for free climbing, close to Stuttgart, Munich, the Alps, and Lake Constance. Everyday life requires a basic command of German.

Prospective applicants should email their resume, with three references and a link to homepage and publications if available, to our secretary Claudia Hammer at
hammer@informatik.uni-ulm.de
by February 28th, 2006, or until the position is filled.


Prof. Thom Fruehwirth
c/o Claudia Hammer
University of Ulm
Faculty of Computer Science
Department PM
D-89069 Ulm, Germany

Email: hammer@informatik.uni-ulm.de
WWW:   http://www.informatik.uni-ulm.de/pm/mitarbeiter/fruehwirth/
CHR:   http://www.cs.kuleuven.be/~dtai/projects/CHR/

The University of Ulm aims to increase the representation of women in research and teaching and therefore expressly encourages female scientists to apply. Disabled applicants with relevant qualifications will be given priority.




Post-Doctoral Position in
Weighted Constraint Processing
Toulouse, INRA

Communicated by Thomas Schiex

 
A two-year postdoc position is available in our lab in the Applied Mathematics and Computer Science department at INRA, within the framework of a project funded by the ANR (the French NSF). The project is aimed at the development, implementation and experimental evaluation of algorithms for solving weighted constraint networks by mixing local-consistency enforcing and dynamic-programming-like approaches. It involves three teams with a strong research record in graph decomposition, weighted constraint processing, and the use of decomposition for solving weighted constraint networks, with active relations with other teams in France, Italy, Spain...

The project offers a nice spectrum of research, from graph theory and the development of new algorithms for weighted constraint processing to their implementation in a new platform and application to problems in biology (pedigree analysis, RNA gene finding...), using recent hardware with a lot of memory.

The ideal applicant will have a good knowledge of constraint processing and a taste for algorithmic development from theory to practice. Programming skills are essential. Additional knowledge of weighted constraint processing, graph decomposition and related methods is welcome. General knowledge of biology and genetics is welcome but not essential. Speaking French or English is needed.

The contract should start around June 2006 (with some flexibility) and is renewable for a total duration of roughly two years.

Prospective applicants should contact Thomas Schiex (http://www.inra.fr/mia/T/schiex) by phone or e-mail before March 26th, 2006. Use [ANR POSTDOC] for the e-mail subject.






Logic Programming Doctoral Dissertations

Editor: Enrico Pontelli


Introduction
The objective of this column is to advertise completed (or almost completed) doctoral dissertations that have a connection to the realm of logic programming. If you are a student and you are about to defend your dissertation, please send me your name, affiliation, and a short abstract. If you are a faculty member and you have a student that is about to graduate, please encourage her/him to send me this information.
I count on your help to make this column a success!!



Alessandro Dal Palu'
Constraint Programming Approaches to the Protein Structure Prediction Problem

Universita' degli Studi di Udine, Italy,  2006

The problem of Protein Structure Prediction is one of the most challenging in Bioinformatics. In this work the problem is tackled by means of Constraint Logic Programming techniques, to provide an efficient, approximate yet realistic, parallel solution. Particular care is devoted to the formalization and design of the constraints and of the constraint solvers developed.



Papers to appear in TPLP and TOCL


Regular papers to appear in Theory and Practice of Logic Programming
http://www.cs.kuleuven.ac.be/~dtai/projects/ALP/TPLP/index.html


Volume 6, Issue 1 & 2. January, March 2006

Regular Papers

Programming Pearl


 

 

Special Issues (to appear)

Special Issue on Reactive Systems Special Issue on Multiparadigm Languages and Constraint Programming

 

 

Accepted Regular Papers

Accepted Technical Note


Accepted Programming Pearl


Accepted Book Review




ACM Transactions on Computational Logic (TOCL)
http://www.acm.org/tocl
Accepted papers

The files below are the final versions of the papers submitted by the authors. The definitive, published versions of the papers are available from the TOCL home page within the ACM Digital Library.


Volume 7, Number 1 (January 2006) (tentative)
Volume 7, Number 2 (April 2006) (tentative)
Future Issues (the order of the papers can change)





Accepted Conference Papers


List of Events:




ICLP 2006
International Conference on Logic Programming

Seattle, WA, August 17-20, 2006
http://www.cs.uky.edu/iclp06/


Accepted Papers

Sergio Antoy and Michael Hanus
Overlapping Rules and Logic Variables in Functional Logic Programs

Enrico Pontelli and Tran Son
Justifications for Logic Programs under Answer Set Semantics

Martin Gebser and Torsten Schaub
Tableaux Calculi for Answer Set Programming

Diptikalyan Saha and C.R. Ramakrishnan
A Local Algorithm for Incremental Evaluation of Tabled Logic Programs

Maarten van Emden
Compositional Semantics for the Procedural Interpretation of Logic

Luke Evans Simon, Ajay Mallya and Gopal Gupta
Coinductive Logic Programming

Andy King, Lunjin Lu and Samir Genaim
Detecting Determinacy in Logic Programs

Xuan Li, Andy King and Lunjin Lu
Collapsing Closures

Katsumi Inoue and Chiaki Sakama
Generality Relations in Answer Set Programming

Sebastian Brand and Roland H.C. Yap
Propagation = Logic + Control

Gregory James Duck, Sebastian Brand and Peter James Stuckey
ACD Term Rewriting

Jon Sneyers, Tom Schrijvers and Bart Demoen
Memory reuse for CHR

Pedro Cabalar, Sergei Odintsov, David Pearce and Agustin Valverde
Analysing and Extending Well-Founded and Partial Stable Semantics using Partial Equilibrium Logic

Davy Van Nieuwenborgh, Stijn Heymans and Dirk Vermeir
Cooperating Answer Set Programming

Elvira Albert, Puri Arenas, German Puebla and Manuel Hermenegildo
Reduced Certificates for Abstraction-Carrying Code

Johan Wittocx, Joost Vennekens, Maarten Marien, Marc Denecker and Maurice Bruynooghe
Predicate Introduction under Stable and Well-founded Semantics

Luciano Caroprese, Sergio Greco, Cristina Sirangelo and Ester Zumpano
Declarative Semantics of Production Rules for Integrity Maintenance

Remy Haemmerle and Francois Fages
Modules for Prolog  Revisited

James Cheney
The Semantics of Nominal Logic Programs

Alberto Pettorossi, Maurizio Proietti and Valerio Senni
Proving Properties of Constraint Logic Programs by Eliminating Existential Variables

APPLICATION PAPERS

Martin Brain, Tom Crick, Marina DeVos and John Fitch
TOAST: Applying Answer Set Programming to Superoptimisation

Susanne Grell, Torsten Schaub and Joachim Selbig
Modelling biological networks by action languages via answer set programming

Petra Schwaiger and Burkhard Freitag
Using Answer Set Programming for the Automatic Compilation of Assessment Tests

C.R. Ramakrishnan, I.V. Ramakrishnan and David Warren
Deductive Spreadsheets using Tabled Logic Programming

Chitta Baral, Saadat Anwar, Juraj Dzifcak and Hiro Takahashi
Macros, macro calls and use of ensembles in modular Answer Set Programming

Luis Quintano and Irene Rodrigues
Using a Logic Programming framework to Control Database Query Dialogues in Natural Language

SPECIAL INTEREST PAPER

Peter Szabo and Peter Szeredi
Improving the ISO Prolog standard by analysing compliance test results

POSTERS


Salvador Abreu and Vitor Nogueira
Towards Structured Contexts and Modules

Vitor Nogueira and Salvador Abreu
Temporal Contextual Logic Programming

Hugo Lopez, Catuscia Palamidessi, Jorge A. Perez, Camilo Rueda and Frank D. Valencia
A Declarative Framework for Security: Secure Concurrent Constraint Programming

Claudio Silva, Ricardo Rocha and Ricardo Lopes
An External Module For Implementing Linear Tabling in Prolog

Juan Fernendez Ortiz and Jurgen Villadsen
Natural Language Processing Using Lexical and Logical Combinators

Paulo Jorge Lopes de Moura and Vincent Marchetti
Logtalk processing of STEP Part 21 files

Veronica Dahl and Baohua Gu
Semantic Property Grammars for Knowledge Extraction from Biomedical Text

Hai-Feng Guo, Miao Liu and Bharat Jayaraman
Relaxation on Optimization Predicates

Dulce Aguilar-Solis
Learning Semantic Parsers: a Constraint Handling Rule Approach

Kazunori Ueda, Norio Kato, Koji Hara and Ken Mizuno
LMNtal as a Unifying Declarative Language

Ricardo Rocha
Handling Incomplete and Complete Tables in Tabled Logic Programs

Rafael Caballero Roldan, Mario Rodriguez Artalejo and Rafael del Vado Virseda
Declarative Diagnosis of Wrong Answers in Constraint Functional-Logic Programming

Andrei Mantsivoda, Anton Malykh and Vladimir Lipovchenko
Logic programming in Knowledge Domains

Thi-Bich-Hanh Dao and Khalil Djelloul
Solving First-Order Constraints in the Theory of the Evaluated Trees

Quan Phan and Gerda Janssens
An Algorithm for Region-based Memory Management for Deterministic Mercury Programs

Edna Ruckhaus, Vladimir Kolovski, Bijan Parsia and Bernardo Cuenca
Integrating Datalog with OWL: Exploring the AL-log Approach

Edison Mera, Pedro Lopez-Garcia, German Puebla, Manuel Carro and Manuel Hermenegildo
Combining Static Analysis and Profiling for Estimating Execution Times in Logic Programs




PPDP 2006
Principles and Practice of Declarative Programming

Venice, Italy, July 10-12, 2006
http://www.dsi.unive.it/ppdp2006/


ACCEPTED PAPERS:

Nic Volanschi.
Condate: A Proto-language at the Confluence Between Checking and Compiling

Huu-Duc Nguyen and Atsushi Ohori.
Compiling ML Polymorphism with Explicit Layout Bitmap

Murdoch Gabbay and Aad Mathijssen.
One-and-a-halfth-order logic

Yosihiro Yuse and Atsushi Igarashi.
A Modal Type System for Multi-Level Generating Extensions with Persistent Code

Michael Hanus.
Type-Oriented Construction of Web User Interfaces

Yutaka Matsuno and Atsushi Ohori.
A Type System Equivalent to Static Single Assignment

Tom Schrijvers and Maurice Bruynooghe.
Polymorphic Algebraic Data Type Reconstruction

Coen De Roover, Johan Brichau and Theo D'Hondt.
Combining Fuzzy Logic and Behavioral Similarity for Non-Strict Program Validation


Martin Sulzmann.
Extracting Programs from Type Class Proofs

Daniele Gorla, Matthew Hennessy and Vladimiro Sassone.
Inferring Dynamic Credentials for Role-based Trust Management

Robin Abraham and Martin Erwig.
Type Inference for Spreadsheets

Andres Loeh and Ralf Hinze.
Open data types and open functions

Dale Miller.
Collection analysis for Horn clause programs

German Puebla and Claudio Ochoa.
Poly-Controlled Partial Deduction

Rachid Echahed, Ricardo Caferra and Nicolas Peltier.
Rewriting Term-Graphs with Priority

Dario Colazzo and Carlo Sartiani.
An Efficient Algorithm for XML Type Projection

Josep Silva and Olaf Chitil.
Combining Algorithmic Debugging and Program Slicing

Marco Alberti, Federico Chesani, Marco Gavanelli, Evelina Lamma, Paola Mello and Marco Montali.
An Abductive Framework for A-Priori Verification of Web Services

Tobias Lindahl and Konstantinos Sagonas.
Practical Subtype Inference Based on Success Typings

Maribel Fernandez and Fabien Fleutot.
A Historic Functional and Object-Oriented Calculus

Isabelle Gnaedig and Helene Kirchner.
Computing constructor forms with non terminating rewrite programs

Adriana Compagnoni, Mariangiola Dezani-Ciancaglini and Pablo Garralda.
BASS: Boxed Ambients with Safe Sessions





IJCAR 2006
International Joint Conference on Automated Reasoning

Seattle, WA, August 17-20, 2006
http://ijcar06.uni-koblenz.de/index.html


System descriptions

Research papers





FM 2006
Formal Methods Symposium
 
Hamilton, Canada, August 21-27, 2006
http://fm06.mcmaster.ca/


Accepted Papers:



DL 2006
International Workshop on Description Logics

The Lake District, UK, May 30-June 1, 2006
http://www.mindswap.org/2006/dl/

Accepted Papers 





PPSWR 2006
Principles and Practice of Semantic Web Reasoning

Budva, Montenegro, June 10-11, 2006
http://rewerse.net/PPSWR06/


Preliminary Program

Saturday, June 10, 2006

    * 9:00-9:15 Opening.
    * 9:15-10:30 Invited Talk: Harold Boley, University of New Brunswick, Canada: The RuleML Family of Web Rule Languages

    * 11:00-12:30 Reasoning I
          o Fabian Suchanek and Peter Baumgartner. Automated Reasoning Support for First-Order Ontologies.
          o Uwe Assmann, Jakob Henriksson and Jan Maluszynski. Combining safe rules and ontologies by interfacing of reasoners.
          o Guilin Qi, Jeff Pan, Weiru Liu and David A. Bell. A stratification-based approach for inconsistency handling in Description Logics.
    * 14:00-15:30 Applications
          o François Bry, Michael Eckert, Paula-Lavinia Patranjan and Inna Romanenko. Realizing Business Processes with ECA Rules: Benefits, Challenges, Limits.
          o Matteo Baldoni, Cristina Baroglio, Alberto Martelli, Viviana Patti and Claudio Schifanella. Interaction Protocols and Capabilities: a preliminary report.
          o Liviu Badea. Semantic Web Reasoning for Analyzing Gene Expression Profiles.

Sunday, June 11, 2006

    * 9:00-10:30 Querying
          o François Bry, Tim Furche and Benedikt Linse. Data Model and Query Constructs for Versatile Web Query Languages: State-of-the-Art and Challenges for Xcerpt.
          o François Bry, Tim Furche and Benedikt Linse. AMaXoS---Abstract Machine for Xcerpt: Architecture and Principles.
          o Wlodzimierz Drabent. Towards more precise typing rules for Xcerpt.
    * 11:00-12:30 Reasoning II
          o Wolfgang May, Franz Schenk and Elke von Lienen. Extending an OWL Web Node with Reactive Behavior.
          o Carlos Viegas Damásio, Anastasia Analyti, Grigoris Antoniou and Gerd Wagner. Open and Closed World Reasoning in the Semantic Web.
          o Carlos A. Hurtado and Alejandro Vaisman. Reasoning with Temporal Constraints in RDF.
    * 14:00-15:30 System Demonstrations
          o Sacha Berger, François Bry, Tim Furche, Benedikt Linse, Andreas Schroeder. Effective and Efficient Data Access in the Versatile Web Query Language Xcerpt.
          o François Bry and Christoph Wieser. Web Queries with Style: Rendering Xcerpt Programs with CSS-NG.
          o Thomas Hornung, Kai Simon and Georg Lausen. Information Gathering in a Dynamic World.
          o Francesca Alessandra Lisi. Practice of Inductive Reasoning on the Semantic Web: A System for Semantic Web Mining.
          o Hans Jürgen Ohlbach. Fuzzy Time Intervals - System Description of the FuTI-Library.
          o Artur Wilk and Wlodzimierz Drabent. A Prototype of a Descriptive Type System for Xcerpt.

    * 16:00-17:30 Reasoning III
          o Kaarel Kaljurand and Norbert E. Fuchs. Bidirectional mapping between OWL DL and Attempto Controlled English.
          o Hans Eric Svensson and Artur Wilk. XML Querying Using Ontological Information.
          o Craig McKenzie, Alun Preece and Peter Gray. Semantic Web Reasoning using a Blackboard System.
    * 17:30 Closing




PLAS 2006
Workshop on Programming Languages and Analysis for Security

Ottawa, Canada, June 10, 2006
http://www.cis.upenn.edu/~stevez/plas06.html


Preliminary Program

  8:30 -   9:00 Registration
  9:00 - 10:00 Invited Talk
David Wagner University of California, Berkeley
10:00 - 11:00 Session I: Authorization and Monitoring

Applying Flow-Sensitive CQUAL to Verify MINIX Authorization Check Placement
Timothy Fraser, Nick L. Petroni Jr., William A. Arbaugh

Certified In-lined Reference Monitoring on .NET
Kevin W. Hamlen, Greg Morrisett, Fred B. Schneider

11:00 - 11:30 Break
11:30 - 12:30 Session II: Finding Security Flaws

Combining Type-Based Analysis and Model Checking for Finding Counterexamples against Non-Interference
Hiroshi Unno, Naoki Kobayashi, Akinori Yonezawa

Precise Alias Analysis for Static Detection of Web Application Vulnerabilities
Nenad Jovanovic, Christopher Kruegel, Engin Kirda

12:30 -   1:30 Lunch
  1:30 -   3:00 Session III: Structuring Secure Systems

Specifying Distributed Trust Management in LolliMon
Jeff Polakow, Christian Skalka

A Microkernel Virtual Machine: Building Security with Clear Interfaces
Xiaoqi Lu, Scott F. Smith

Empirical Relation between Coupling and Attackability in Software Systems: A Case Study on DOS
Michael Yanguo Liu, Issa Traore

  3:00 -   3:30 Break
  3:30 -   5:00 Session IV: Secure Information Flow

Trusted Declassification
Boniface Hicks, Dave King, Patrick McDaniel, Michael Hicks

Refactoring Programs to Secure Information Flows
Scott F. Smith, Mark Thober

Efficient Type Inference for Secure Information Flow
Katia Hristova, Tom Rothamel, Yanhong A. Liu, Scott D. Stoller

  5:00 -   5:15 Break
  5:15 -   6:00 Madness Session







KR 2006
Principles of Knowledge Representation and Reasoning
Lake District, UK, June 2-5, 2006
http://www.kr.org/KR2006/


Accepted Papers

Semantics for Dynamic Syntactic Epistemic Logics by Thomas Agotnes, Natasha Alechina
Non-deterministic semantics for first-order paraconsistent logics
by Arnon Avron, Anna Zamansky
On planning with programs that sense
by Jorge Baier, Sheila McIlraith
Lack of simple characterizations for the distance-based revision
by Jonathan Ben-Naim
A Theory of Vague Adjectives Grounded in Relevant Observables
by Brandon Bennett
Knowledgebase compilation for efficient logical argumentation
by Philippe Besnard, Anthony Hunter
Qualitative Temporal Preferences in the Situation Calculus
by Meghyn Bienvenu, Christian Fritz, Sheila McIlraith
Expressive Non-Monotonic Description Logics Based on Circumscription
by Piero Bonatti, Carsten Lutz, Frank Wolter
Heuristics for Planning with Simple Preferences using Compiled Knowledge
by Blai Bonet, Hector Geffner
A Bad Day Surfing is Better than a Good Day Working: How to Revise a Total Preorder
by Richard Booth, Thomas Meyer, Ka-Shu Wong
Logical Foundations of Well-Founded Semantics
by Pedro Cabalar, Sergei Odintsov, David Pearce
Data Complexity of Query Answering in Description Logics
by Diego Calvanese, Giuseppe De Giacomo, Domenico Lembo, Maurizio Lenzerini, Riccardo Rosati
Expressive Power of Weighted Propositional Formulas for Cardinal Preference Modelling
by Yann Chevaleyre, Ulle Endriss, Jerome Lang
Constrained Argumentation Frameworks
by Sylvie Coste-Marquis, Caroline Devred, Pierre Marquis
Representing Policies for Quantified Boolean Formulas
by Sylvie Coste-Marquis, Helene Fargier, Jerome Lang, Daniel Le Berre, Pierre Marquis
A tree decomposition algorithm for Conceptual Graph projection
by Madalina Croitoru, Ernesto Compatangelo
Modularity and Web Ontologies
by Bernardo Cuenca Grau, Bijan Parsia, Evren Sirin, Aditya Kalyanpur
Iterated revision as prioritized merging
by James Delgrande, Didier Dubois, Jerome Lang
Qualitative decision making with bipolar information
by Didier Dubois, Helene Fargier
Possibilistic handling of uncertain default rules
by Florence Dupin de Saint-Cyr, Henri Prade
Replacements in Non-Ground Answer-Set Programming
by Thomas Eiter, Michael Fink, Hans Tompits, Patrick Traxler, Stefan Woltran
Actions as Special Cases
by Selim Erdoğan, Vladimir Lifschitz
Planning with Prioritized Goals
by R. Feldmann, G. Brewka
Compiling Qualitative Preferences into Decision-Theoretic Golog Programs
by Christian Fritz, Sheila McIlraith
Grounding and the expression of belief
by Benoit Gaudou, Andreas Herzig, Dominique Longin
Did I Damage My Ontology? A Case for Conservative Extensions in Description Logics
by Silvio Ghilardi, Carsten Lutz, Frank Wolter
Redoing the Foundations of Decision Theory
by Joseph Halpern, Larry Blume, David Easley
Shapley Inconsistency Values
by Anthony Hunter, Sébastien Konieczny
Semantical considerations for a logic of actions: an imperative manifesto
by Victor Jauregui
Probabilistic abduction without priors
by Gabriele Kern-Isberner, Didier Dubois, Angelo Gilio
The Even More Irresistible SROIQ
by Oliver Kutz, Ian Horrocks, Ulrike Sattler
Foundations for Knowledge-Based Programs using ES
by Gerhard Lakemeyer, Jens Claßen
First-Order Loop Formulas for Normal Logic Programs
by Fangzhen Lin, Yin Chen, Yisong Wang, Mingyi Zhang
Updating Description Logic ABoxes
by Hongkai Liu, Carsten Lutz, Maja Milicic, Frank Wolter
Variable-Strength Conditional Preferences for Matchmaking in Description Logics
by Thomas Lukasiewicz, Joerg Schellhase
Definitorially Complete Description Logics
by Maarten Marx, Willem Conradi, Balder Ten Cate
Temporalizing Cardinal Directions: From Constraint Satisfaction to Planning
by Marco Ragni, Stefan Wölfl
Abductive Visual Perception with Feature Clouds
by D. Randell, M. Witkowski
Reasoning About Knowledge of Unawareness
by Leandro Rego, Joseph Halpern
DL+log: Tight Integration of Description Logics and Disjunctive Datalog
by Riccardo Rosati
An Ordered Theory Resolution Calculus for Hybrid Reasoning in First-order Extensions of DLs
by Scott Sanner, Sheila McIlraith
Action, Belief, and Minimal Change
by Richard Scherl
On the Limits of Planning over Belief States under Strict Uncertainty
by Sebastian Sardina, Hector Levesque, Giuseppe De Giacomo, Yves Lesperance
From Wine to Water: Optimizing Description Logic Reasoning for Nominals
by Evren Sirin, Bernardo Cuenca Grau, Bijan Parsia
Query Answering under the Any-World Assumption for Normal Logic Programs
by Umberto Straccia
The Features-and-Fluents Semantics for the Fluent Calculus
by Michael Thielscher, Thomas Witkowski
On the Completeness of Approximation Based Reasoning and Planning
by Son Tran, Tu Phan
Strong and uniform equivalence of nonmonotonic theories - an algebraic approach
by Mirek Truszczynski
Propositional DAGs: a New Graph-Based Language for Representing Boolean Functions
by Michael Wachter, Rolf Haenni
Computational Properties of Epistemic Logic Programs
by Yan Zhang




ICAPS 2006
International Conference on Automated Planning and Scheduling
Lake District, UK, June 6-10, 2006


Accepted Papers







Call for Papers


List of Events:




Colloquium on Implementation of Constraint Logic Programming Systems
CICLOPS 2006
Seattle, USA, August 21 or 22, 2006
http://www.cs.nmsu.edu/lldap/CICLOPS06


Workshop Description

The workshop aims at discussing and exchanging experience on the design, implementation, and optimization of logic and constraint (logic) programming systems, or systems intimately related to logic as a means to express computations. Preference will be given to the analysis and description of implemented systems (or systems under implementation) and their associated techniques, the problems found in their development or design, and the steps taken towards solutions.

The workshop topics include, but are not limited to:
Workshop Motivation
Recent years have witnessed continuous progress in the technology available in both academic and commercial computing environments. Examples include higher processor performance, increased memory capacity and bandwidth, faster networking technology, and operating system support for cluster computing. These improvements, combined with recent advances in compilation and implementation technologies, are causing high-level languages to be regarded as good candidates for programming complex, real-world applications. Techniques aiming at achieving flexibility in language design make powerful extensions easier to implement; on the other hand, implementations which achieve good performance in terms of speed and memory consumption make declarative languages and systems amenable to the development of non-trivial applications.

Logic Programming and Constraint Programming, in particular, seem to offer one of the best options, as they couple a high level of abstraction and a declarative nature with an extreme flexibility in the design of their implementations and extensions and of their execution model. This adaptability is key to, for example, the implicit exploitation of alternative execution strategies tailored for different applications (e.g., for domain-specific languages) without unnecessarily jeopardizing efficiency.

This workshop continues a tradition of successful workshops on Implementations of Logic Programming Systems, previously held in Budapest (1993) and Ithaca (1994), the Compulog Net workshops on Parallelism and Implementation Technologies held in Madrid (1993 and 1994), Utrecht (1995) and Bonn (1996), the Workshop on Parallelism and Implementation Technology for (Constraint) Logic Programming Languages (ParImp) held in Port Jefferson (1997), Manchester (1998), Las Cruces (1999), and London (2000), and more recently the Colloquium on Implementation of Constraint and LOgic Programming Systems (CICLOPS) in Paphos (Cyprus, 2001), Copenhagen (2002), Mumbai (2003), Saint-Malo (France, 2004), and Sitges (Spain, 2005), and the CoLogNet Workshops on Implementation Technology for Computational Logic Systems held in Madrid (2002), Pisa (2003) and Saint-Malo (France, 2004).

Important Dates
Submission Deadline:       June 1st, 2006 (strict)
Notifications to Authors:  July 1st, 2006
Final Version Deadline:    July 20th, 2006
CICLOPS 2006 Workshop:     August 21st or 22nd, 2006

Submission Guidelines
Participants should submit a paper (maximum 15 pages, PDF format) describing their work on topics relevant to the workshop. Accepted papers will be presented during the workshop. At least one author of an accepted contribution is expected to register for the workshop and present the paper.
All submissions should include the authors' names, affiliations, complete mailing address, and email address.

Authors are requested to prepare their submissions following the LNCS/LNAI Springer format. Please see http://www.springer.de/comp/lncs/authors.html for further details.

Papers should be submitted through the electronic submission site, accessible via the workshop web page. The deadline for receipt of submissions is June 1, 2006; papers received after this date may not be reviewed. Eligible papers will be peer-reviewed by at least three members of the Program Committee. Authors will be notified of the results via email by July 1, 2006. Authors of accepted papers are expected to improve their papers based on the reviewers' comments and to send a camera-ready version of their manuscripts by July 20, 2006.

Accepted papers will be included in the workshop proceedings, which will be distributed to the participants.

Questions about submissions may be directed to haifengguo <AT> mail <DOT> unomaha <DOT> edu

Organizing Committee
Hai-Feng Guo        (University of Nebraska at Omaha)
Enrico Pontelli     (New Mexico State University)

Program Committee
Manuel Carro        (Polytechnic University of Madrid)
Bart Demoen         (KUL Leuven)
Michel Ferreira     (University of Porto)
Hai-Feng Guo        (University of Nebraska at Omaha)
Gopal Gupta         (University of Texas at Dallas)
Enrico Pontelli     (New Mexico State University)
Vitor Santos Costa  (Federal University of Rio de Janeiro)
Tom Schrijvers      (KUL Leuven)
Christian Schulte   (KTH - Royal Institute of Technology)
Neng-Fa Zhou        (City University of New York)

Contact Information
Hai-Feng Guo
University of Nebraska at Omaha
Department of Computer Science
6001 Dodge Street
Omaha, NE 68182, USA
haifengguo <AT> mail <DOT> unomaha <DOT> edu

Enrico Pontelli
New Mexico State University
Department of Computer Science
Box 30001, MSC CS
Las Cruces, NM 88003, USA
epontell <AT> cs <DOT> nmsu <DOT> edu





Workshop on Non-Theorems, Non-Validity, Non-Provability
DISPROVING 2006
Seattle, WA, August 16, 2006
http://www.cs.chalmers.se/~ahrendt/FLoC06-ws-disproving/


Background
Automated Reasoning (AR) has traditionally focused on proving theorems. Because of this, AR methods and tools have mostly been applied to formulae already known to be true. If, on the other hand, a formula is not a theorem, most traditional AR methods and tools cannot handle it properly (i.e., they will fail, run out of resources, or simply not terminate). The opposite of proving, which we call disproving, aims at identifying non-theorems, i.e., showing non-validity or non-provability, and providing some kind of proof of non-validity (non-provability). Such a proof could be, for example, a counter model, or an instantiation making the formula false.

Scope
In the scope of the workshop is every method that is able to discover non-theorems and, ideally, provides an explanation of why the formula is not a theorem. Possible subjects are decision procedures, model generation methods, reduction to SAT, formula simplification methods, abstraction-based methods, and failed-proof analysis.

Topics of relevance to the workshop therefore include
  * disproving conjectures in general,
  * extending standard proving methods with disproving capabilities,
  * approximate methods for identifying non-theorems,
  * counterexample generation,
  * counter model generation,
  * finite model generation,
  * decision procedures,
  * failure analysis,
  * reparation of non-theorems,
  * heuristics that help in identifying non-theorems,
  * applications and system descriptions.

Workshop Goal
The DISPROVING workshops are intended as a platform for the exchange of ideas between researchers concerned with disproving in the broad sense. By discussing approaches across the different communities, the workshop can identify common problems and solutions. Another goal is to elaborate known, and discover unknown, connections between other areas and disproving. Also, the meeting can enable an exchange of interesting examples for non-theorems. A long term goal is that the workshop series contributes to forming a disproving community within AR, and gives the work on disproving a greater visibility.

Audience
Non-theorems are an issue wherever one tries to prove statements that are not known to be valid in advance. Therefore, we aim at researchers from all areas of automated reasoning. The topic of the workshop is particularly relevant for all logics, calculi, and proving paradigms where non-validity is not covered by (plain versions of) the standard methods. This includes (but is not restricted to) first-order logic proving, inductive theorem proving, rewriting-based reasoning, higher-order logic proving, logical frameworks, and special-purpose logics such as program logics. We also target the model generation community.

Besides mature work, we also solicit preliminary work and work in progress.

Technical Programme
The technical program will include presentations of the accepted papers, discussions about the state and future of the field, and an invited talk.

Programme Committee
  * Wolfgang Ahrendt (Organizer)
  * Peter Baumgartner (Organizer)
  * Johan Bos
  * Simon Colton
  * Christian Fermüller
  * Bernhard Gramlich
  * Bill McCune
  * Hans de Nivelle (Organizer)
  * Michael Norrish
  * Renate Schmidt
  * Carsten Schürmann
  * John Slaney
  * Graham Steel
  * Cesare Tinelli
  * Calogero Zarba

Submission
Submissions should not exceed 10 pages.
Please use the electronic submission page at:
www.easychair.org/DISPROVING06/
The submission procedure is electronic only, and only PDF files are acceptable.

The deadline for submission is 26th of May 2006.

Publication
The final versions of the selected contributions will be collected in a volume to be distributed at the workshop and accessible on the web.

The organizers aim for properly published post-workshop proceedings, probably in the form of a journal special issue. These will be based on extended versions of selected workshop papers, but will also be open to non-participants, in all cases with fresh reviewing. The decision on whether to do so will be taken after the workshop. (The corresponding post-proceedings of the 2004 DISPROVING workshop appeared in ENTCS.)

Workshop Venue
The workshop will be held on August 16 as part of FLoC 2006 (Federated Logic Conference), at the Seattle Sheraton Hotel and Towers, Seattle, Washington, August 10-22, 2006.

Workshop Organizers

    Wolfgang Ahrendt
    Chalmers University of Technology, Göteborg, Sweden
    Email: ahrendt AT cs.chalmers.se

    Peter Baumgartner
    National ICT Australia, Logic and Computation Program, Canberra, Australia
    Email: Peter DOT Baumgartner AT nicta.com.au

    Hans de Nivelle
    MPI für Informatik, Saarbrücken, Germany
    Email: nivelle AT mpi-sb.mpg.de

Important Dates
    May 26: paper submissions deadline
    June 26: notification of acceptance
    July 16: final version due
    Wednesday, August 16: workshop date

Links
  * Workshop web page: www.cs.chalmers.se/~ahrendt/FLoC06-ws-disproving/
  * FLoC 2006 home page:  research.microsoft.com/floc06/
  * last year's workshop web page: www.cs.chalmers.se/~ahrendt/cade20-ws-disproving/

For further information on the workshop, please contact any of the organizers.





Workshop on Logical Frameworks and Meta-Languages
LFMTP 2006
Seattle, WA, August 16, 2006
http://www.cs.mcgill.ca/~bpientka/lfmtp06/index.html


Important Dates:
Abstract Submission:   15 May 2006
Full Paper Submission: 22 May 2006
Author Notification:   15 June 2006
Final Version:         15 July 2006

LFMTP'06 merges the International Workshop on Logical Frameworks and Meta-languages (LFM) and the MERLIN workshop on MEchanized Reasoning about Languages with variable bINding.

Logical frameworks and meta-languages form a common substrate for representing, implementing, and reasoning about a wide variety of deductive systems of interest in logic and computer science. Their design and implementation, on the one hand, and their applications in, for example, proof-carrying code, on the other, have been the focus of considerable research over the last two decades. This workshop will bring together designers, implementors, and practitioners to discuss all aspects of logical frameworks.

The broad subject areas of LFMTP'06 are
Topics include, but are not limited to

Invited Speaker: Gordon Plotkin (University of Edinburgh)

Program Committee:
Andrew Appel        Princeton University
Thierry Coquand     Göteborg University
Martin Hofmann      LMU Munich
Furio Honsell       University of Udine
Dale Miller         INRIA Futurs
Brigitte Pientka    McGill University
Andrew Pitts        Cambridge University
Kevin Watkins       Carnegie Mellon University

Paper Submissions:
Three categories of papers are solicited:

Submission is electronic, in PostScript or PDF format. Submitted papers must conform to the ENTCS style, preferably using LaTeX2e. For further information and submission instructions, see the LFMTP web page:
http://www.cs.mcgill.ca/~bpientka/lfmtp06/index.html

Proceedings are to be published as a volume in the Electronic Notes in Theoretical Computer Science (ENTCS) series and will be available to participants at the workshop.

Organizers:

Brigitte Pientka             
bpientka@cs.mcgill.ca        
School of Computer Science   
McGill University            

Alberto Momigliano
amomigl1@inf.ed.ac.uk
School of Informatics
University of Edinburgh






Workshop on Constraint Based Methods for Bioinformatics
WCB 2006
Nantes, France, September 25, 2006
http://www.dimi.uniud.it/dovier/WCB06/


Description
Bioinformatics is a challenging and fast-growing area of research, which is of utmost importance for our understanding of life. Major contributions to this discipline can have thousands of positive effects in medicine, agriculture, or industry. To pick out only a few examples, Bioinformatics tackles problems related to:
All these problems can be naturally formalized using constraints over finite domains or intervals of reals. Biology is a source of extremely interesting and challenging problems that can be encoded by exploiting the application of recent and more general techniques of constraint programming. In this framework, some problems that have been successfully tackled are:
The main aim of this workshop is twofold: on the one hand, to share recent results in this area (new constraint solvers, new prediction and simulation programs); on the other hand, to present new challenging problems formalized and/or solved with constraint-based methods.

Important Dates
Submission Deadline:            June 30th, 2006
Notification to Authors:        July 22nd, 2006
Camera-ready copy due:          August 16th, 2006
Workshop:                       September 25th, 2006

Program Committee
Rolf Backofen, Freiburg Univ., Germany
Pedro Barahona, Univ. Nova de Lisboa, Portugal
Mats Carlsson, SICS, Uppsala, Sweden
Alessandro Dal Palu' (co-chair), Parma Univ., Italy
Simon de Givry, INRA Toulouse, France
Agostino Dovier (co-chair), Udine Univ., Italy
Francois Fages, INRIA Rocquencourt, France
Tony Kusalik, Saskatchewan Univ., Canada
Enrico Pontelli, NMSU, USA
Sebastian Will (co-chair), Freiburg Univ., Germany


Submission Details
Submitted papers can be on any topic concerning bioinformatics and constraints. The submission style is the standard llncs style. The page limit is 6 pages. Please send the PDF or PS file to

                      wcb06@dimi.uniud.it

We plan to publish a selection of revised papers from WCB05 and WCB06 as a special issue of a major journal.

Participation
At least one author of each accepted submission must attend the workshop. All workshop participants must pay the workshop fee according to CP rules.




First International Day on Constraint Programming Tools
CP-TOOLS 2006
Nantes, France, September 24, 2006


Description
Much of the success and recognition of constraint programming stems from constraint programming systems and their successful use for practical applications. By providing rich languages and sophisticated implementations, constraint programming tools model complex applications, shorten development time, and produce code that is easier to maintain and extend.

The goal of the CP-TOOLS series is to foster the development of constraint programming tools, to showcase their applications, to educate young researchers in the use of CP tools, and to encourage cross-fertilization in tool development. CP-TOOLS-06 is the first event in this series, which will take place yearly in conjunction with a major conference.

The theme of the first day in the series is to expose young researchers to a broad set of constraint-programming tools, to the system-oriented research process, and to research themes in the area. The CP-Tools program will consist of two invited talks exploring these issues from industrial and academic standpoints, presentations of the participating systems, and two panels. The system presentations will collectively present the spectrum of capabilities available in modern CP tools. The first panel will focus on promoting system research and will discuss what makes it so unique. The second panel will approach research themes and trends that participants are exploring or are poised to explore. Participating systems may be commercial or academic and are expected to be available.

Submission of Abstracts
Abstracts of system descriptions are solicited from all areas of constraint programming, provided that the described system is available. We expect authors to address at least the following aspects in their abstract: system goals and area (research, industrial, education), technical contributions, distinguishing aspects, problematic aspects, relation to other systems, availability, applications, and future development plans. Based on the submitted abstracts, systems will be selected for presentation.

Important Dates
August 6, 2006 Deadline for submissions
August 20, 2006 Notification
Abstracts should be sent to the three organizers.

Organization
Laurent Michel (ldm@engr.uconn.edu)
Christian Schulte (schulte@imit.kth.se)
Pascal Van Hentenryck (pvh@cs.brown.edu)

Program Committee
Mats Carlsson, SICS, Sweden
Yves Colombani, Dash Optimization, UK
Yan Georget, Koalog, France
Laurent Granvilliers, U Nantes, France
Narendra Jussien, EMN, LINA, France
François Laburthe, Bouygues, France
Laurent Michel, U of Connecticut, USA
Jean-François Puget, ILOG, France
Jean-Charles Régin, ILOG, France
Christian Schulte, KTH, Sweden
Marc van Dongen, 4C, Ireland
Pascal Van Hentenryck, Brown U, USA
Mark Wallace, Monash U, Australia
Roland Yap, NUS, Singapore





Multi-valued Logic Programming and Applications
MVLPA 2006
Seattle, WA, August 21, 2006
http://utdallas.edu/~axm011500/mvlpa06.html


Multi-valued logics provide powerful mechanisms for reasoning about domains that are incomplete and inconsistent, such as databases, knowledge representation, model checking, and asynchronous electronic circuits.

It is interesting to study the various semantics of multi-valued logics in general and, in particular, logic programming from the perspective of multi-valued logics. The classical semantic formulations of logic programming, such as the minimal Herbrand model semantics, the well-founded semantics, and the answer set semantics, need to be reinterpreted in the multi-valued setting. Given a solid semantic foundation for a multi-valued logic programming framework, it can then be used as an elegant declarative specification language for the above application domains.

Research in this area spans theoretical issues regarding the semantics and the role of negation, to implementation strategies, to practical tools for solving problems in various application domains.

The workshop is meant to provide a channel for interaction between researchers working in these areas, by presenting their results and fostering discussion. This will engender new directions for researchers to pursue and showcase the considerable amount of research that has already been performed in the area.

Authors are invited to submit original research, survey or tutorial papers in the areas of Multiple-valued Logic and Multi-valued Logic Programming, including, but not restricted to:
  • Algebraic and formal aspects
  • Implementation techniques for Multi-Valued Logic Programming Languages
  • Logic synthesis and Optimization
  • Circuit/ Device Implementation
  • Multi-Valued Model Checking
  • Switching functions
  • Machine Learning/ Data Mining
  • Biocomputing
  • Theorem Proving in Multi-Valued Logics
  • Fault Detection and diagnosis
  • Reliability
  • Information retrieval
  • Knowledge Representation/ Discovery
  • Automated Reasoning

Important Dates:

Submission deadline:      May 31, 2006
Notification to authors:  June 10, 2006
Camera-ready copy due:    June 30, 2006

Workshop Organizers:


Fourth ASIAN Symposium on Programming Languages and Systems
APLAS 2006
Sydney, Australia, November 8-10, 2006
http://www.kb.ecei.tohoku.ac.jp/aplas2006/


APLAS aims at stimulating programming language research by providing a forum for the presentation of recent results and the exchange of ideas and experience on topics concerned with programming languages and systems. APLAS is based in Asia, but is an international forum that serves the worldwide programming languages community.

The APLAS series is sponsored by the Asian Association for Foundation of Software (AAFS), which was recently founded by Asian researchers in cooperation with many researchers from Europe and the USA. The past formal APLAS symposia were successfully held in Tsukuba (2005, Japan), Taipei (2004, Taiwan), and Beijing (2003, China), after three informal workshops held in Shanghai (2002, China), Daejeon (2001, Korea), and Singapore (2000).

TOPICS
The symposium is devoted to both foundational and practical issues in programming languages and systems. Papers are solicited on, but not limited to, the following topics:
Original results that bear on these and related topics are solicited. Papers investigating novel uses and applications of language systems are especially encouraged. Authors concerned about the appropriateness of a topic are welcome to consult with the program chair (koba@ecei.tohoku.ac.jp) prior to submission.

GENERAL CO-CHAIRS
Manuel Chakravarty (University of New South Wales, Australia)
Gabriele Keller (University of New South Wales, Australia)

PROGRAM CHAIR
Naoki Kobayashi (Tohoku University)

PROGRAM COMMITTEE
Kung Chen (National Chengchi University, Taiwan)
Wei-Ngan Chin (National University of Singapore, Singapore)
Patrick Cousot (ENS, France)
Masahito Hasegawa (Kyoto University, Japan)
Jifeng He (United Nations University, Macau)
Haruo Hosoya (University of Tokyo, Japan)
Bo Huang (Intel China Software Center, China)
Naoki Kobayashi (chair) (Tohoku University, Japan)
Oege de Moor (Oxford University, UK)
George Necula (University of California at Berkeley, USA)
Martin Odersky (EPFL, Switzerland)
Tamiya Onodera (IBM Research, Tokyo Research Laboratory, Japan)
Yunheung Paek (Seoul National University, Korea)
Sriram Rajamani (Microsoft Research, USA)
Andrei Sabelfeld (Chalmers University of Technology, Sweden)
Zhong Shao (Yale University, USA)
Harald Sondergaard (University of Melbourne, Australia)
Nobuko Yoshida (Imperial College London, UK)

SUBMISSIONS INFORMATION
Papers should be submitted electronically online via the conference submission webpage. Acceptable formats are PostScript or PDF, viewable by Ghostview or Acrobat Reader. Submissions should not exceed 16 pages in LNCS format, including bibliography and figures. Submitted papers will be judged on the basis of significance, relevance, correctness, originality, and clarity. They should clearly identify what has been accomplished and why it is significant. Submitted papers must be unpublished and not submitted for publication elsewhere. The proceedings of the symposium are planned to be published as a volume in Springer-Verlag's Lecture Notes in Computer Science series.

IMPORTANT DATES
Submission deadline: June 2, 2006
Author notification: August 5, 2006
Camera Ready: August 25, 2006
Conference: November 8-10, 2006






First International Workshop on Preferences and Their Applications in Logic Programming Systems
PREFS'06
Seattle, USA, August 21 or 22, 2006
http://www.cs.nmsu.edu/lldap/Prefs06


    Workshop Description
    This workshop attempts to address all aspects of describing, modelling, computationally handling, and application of preferences, within the context of logic programming. In particular, we seek contributions that create cross-fertilization between different approaches to preferences and different flavors of logic programming (e.g., constraint logic programming, answer set programming) -- hopefully leading to new, more general, approaches for handling preferences in logic programming.

    The workshop topics include, but are not limited to:
    The purpose of this workshop is to bring together researchers interested in modeling and implementing preferences in logic programming. The objective is to promote exchange of ideas and possible integration between the different approaches proposed so far.

    Workshop Motivation
    The concept of preference has played an important role in various aspects of computer science. For example, preferences play a key role in the design of practical and efficient reasoning systems dealing with real-world knowledge. The concept of preference has been investigated by many researchers  in different fields, both within Computer Science (e.g., Artificial  Intelligence, Optimizations, Scheduling) and outside of Computer Science (e.g., Economics, Decision Theory).

    In recent years we have witnessed growing interest in integrating preferences into logic-based and logic programming systems. These directions of research are of great importance, given that preferences are regarded as a vital component of reasoning with real-world knowledge, and logic programming is one of the most widely used paradigms for knowledge representation and reasoning.

    Workshop Format
    This workshop is open to all members of the CP and ICLP communities. The workshop will emphasize discussion and cross-fertilization, so presentations will be balanced with discussion time. To that end, the workshop seeks high-quality papers that address cutting-edge research in this field and that can contribute to the discussion. The agenda will include paper presentations and possibly an invited speaker. At least one author of each accepted submission must attend the workshop, and all participants must pay the workshop fee (which covers both the CP'06 and ICLP'06 workshops).

    Important Dates
       Submission Deadline: June 1
       Notifications to Authors: July 1
       Final Version Deadline: July 20
   Workshop: August 21-22

    Submission Guidelines
    Participants should submit a paper (maximum 15 pages, PDF format) describing their work on topics relevant to the workshop. Accepted papers will be presented during the workshop. At least one author of an accepted contribution is expected to register for the workshop and present the paper.
    All submissions should include the authors' names, affiliations, complete mailing addresses, and email addresses.

    Authors are requested to prepare their submissions, following the LNCS/LNAI Springer format. Please see:
           http://www.springer.de/comp/lncs/authors.html
    for further details.

    The submission should be sent in PDF format, using the submission web site:
         http://www.easychair.org/PREFS2006/submit/

    WORKSHOP ORGANIZERS  
    Enrico Pontelli and Tran Cao Son
    Department of Computer Science
    New Mexico State University
    Email: epontell aT cs DOT nmsu DOT edu, tson aT cs DOT nmsu DOT edu

    PROGRAM COMMITTEE
    Marcello Balduccini, Texas Tech University, USA
    Gerhard Brewka, University of Leipzig, Germany
    Ulrich Junker, ILOG, France
    Enrico Pontelli, New Mexico State University, USA
    Torsten Schaub, University of Potsdam, Germany
    Tran Cao Son, New Mexico State University, USA
    Mirek Truszczynski, University of Kentucky, USA





    International Conference on Rules and Rule Markup Languages for the Semantic Web
    RuleML 2006
    Athens, Georgia, November 9-10, 2006
    http://2006.ruleml.org


    Semantic Web technologies have matured to the point where they are being adopted by many organizations for applications as diverse as data integration, optimized search, and decision support. The increasing use of the technology has resulted from mainstream commercial software vendors providing solutions that support Semantic Web technologies, and W3C making RDF and OWL standard recommendations.

    It is widely recognized that rules are the next layer of focus within the Semantic Web technology stack, and consequently interest and activity in this area has grown rapidly over recent years. Semantic Web rules would allow the integration, transformation and derivation of data from numerous sources in a distributed, scalable, and transparent manner.

    The rules landscape features theoretical areas such as knowledge representation (KR) and algorithms; design aspects of rule markup; engineering of engines, translators, and other tools; standardization efforts, such as the recent Rules Interchange Format activity at W3C (http://www.w3.org/2005/rules/wg.html); and applications.

    Rules complement and extend ontologies on the Semantic Web. They can be used in combination with ontologies, or as a means to specify ontologies. In these settings, rules can be used either in conjunction with or as an alternative to description logics. Rules are also frequently applied over ontologies, to draw inferences, express constraints, specify policies, react to events, discover new knowledge, transform data, etc. Rule markup languages enrich Web ontologies by supporting publishing rules on the Web, exchange rules between different systems and tools, share guidelines and policies, merge and maintain rulebases, and more.

    Meeting Format
    The RuleML-2006 Conference aims to be the premier scientific forum for exchanging ideas on all aspects of rules for the Semantic Web. It follows on the heels of three successful workshops on the topic, in 2002, 2003, and 2004, and of the first RuleML conference, held in 2005. Like its precursors, RuleML-2006 will be held in collaboration with the International Semantic Web Conference (http://iswc2006.semanticweb.org/) and will be co-located with it. The RuleML-2006 Conference will consist of both an academic track and an industry track.
    Topics of interest
    We encourage submissions on all topics related to rules and rule markup languages for the Semantic Web, including (but not limited to) the following:
    Submission
    We invite articles of no more than 15 pages in length (the exact format will be announced later) describing original completed work, work in progress, or interesting problems or use cases. The page limit includes the title, abstract, figures, and references. Submitted papers will be fully refereed based on the originality and significance of the ideas presented, as well as on technical aspects. Publication of the proceedings by a major publisher is planned.

    Submissions should be made online in PDF format on the conference submission site (to be announced) by 22 May 2006, midnight Hawaii time. Multiple submission is not allowed. Submissions should clearly indicate the appropriate track (academic or industry).

    Important Dates



    International Workshop on Unification
    UNIF 2006
    Seattle, WA, August 11, 2006
    http://www.iiia.csic.es/~levy/unif06/


    UNIF is the main international meeting on unification. Unification is concerned with the problem of identifying given terms, either syntactically or modulo a given logical theory. Syntactic unification is the basic operation of most automated reasoning systems, and unification modulo theories can be used, for instance, to build special equational theories into theorem provers.
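    As a small illustration of syntactic unification, here is a sketch of Robinson-style unification with occurs check, written in Python for concreteness. The term representation (capitalized strings for variables, tuples for compound terms, lowercase strings for constants) is an assumption made for brevity and is not tied to any particular system.

```python
def is_var(t):
    """A term is a variable iff it is a string starting with an uppercase letter."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings in the substitution until a non-bound term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v occur inside term t (under subst)?"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1:])
    return False

def bind(v, t, subst):
    """Extend the substitution with v -> t, rejecting cyclic bindings like X = f(X)."""
    if occurs(v, t, subst):
        return None
    s = dict(subst)
    s[v] = t
    return s

def unify(t1, t2, subst=None):
    """Return a most general unifier of t1 and t2, or None if none exists.
    Compound terms are tuples (functor, arg1, ..., argn)."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return bind(t1, t2, subst)
    if is_var(t2):
        return bind(t2, t1, subst)
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None
```

    For instance, unifying f(X, b) with f(a, Y) yields the substitution {X -> a, Y -> b}, while unifying X with f(X) fails the occurs check.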

    The aim of UNIF 2006, like that of the previous meetings, is to bring together people interested in unification, to present recent (even unfinished) work, and to discuss new ideas and trends in unification and related fields. This includes scientific presentations, but also descriptions of applications and software that use unification as a key component.

    In 2006, UNIF is organized as part of the Federated Logic Conference (FLoC 2006).

    This workshop is the twentieth in the series: UNIF'87 (Val D'Ajol, France), UNIF'88 (Val D'Ajol, France), UNIF'89 (Lambrecht, Germany), UNIF'90 (Leeds, England), UNIF'91 (Barbizon, France), UNIF'92 (Dagstuhl, Germany), UNIF'93 (Boston, USA), UNIF'94 (Val D'Ajol, France), UNIF'95 (Sitges, Spain), UNIF'96 (Herrsching, Germany), UNIF'97 (Orléans, France), UNIF'98 (Rome, Italy), UNIF'99 (Frankfurt, Germany), UNIF'00 (Pittsburgh, USA), UNIF'01 (Siena, Italy), UNIF'02 (Copenhagen, Denmark), UNIF'03 (Valencia, Spain), UNIF'04 (Cork, Ireland), and UNIF'05 (Nara, Japan).


    Topics of Interest

    The meeting will include invited talks, contributed talks, and social time to discuss current topics of interest, which include (but are not limited to):



    Important Dates


    Program Committee

    Temur Kutsia Research Institute for Symbolic Computation
    Johannes Kepler University of Linz, Linz, Austria
    Jordi Levy  (chair)   Institut d'Investigació en Intel.ligència Artificial (IIIA)
    Consejo Superior de Investigaciones Científicas (CSIC), Barcelona, Spain
    Christian Urban Mathematisches Institut der LMU
    Munchen, Germany
    Mateu Villaret Dept. Informàtica i Matemàtica Aplicada (IMA)
    Universitat de Girona (UdG), Girona, Spain

    Submission

    Authors are invited to submit an abstract (1-5 pages), a paper (no longer than 15 pages), or a system description (no more than 5 pages), in PostScript or PDF format, via the UNIF'06 submission page, handled by the EasyChair conference system.

    Authors are encouraged to use LaTeX2e and the Springer llncs class files.


    Publication

    Informal proceedings of accepted contributions will be available on-line. A hard copy will be distributed at the workshop to registered participants.







    International Joint Conference on Artificial Intelligence
    IJCAI 2007
    Hyderabad, India, January 6-12, 2007
    http://www.ijcai-07.org


    Theme: Artificial Intelligence and Its Benefits to Society
    The IJCAI-07 program will include workshops, tutorials, and oral and poster presentations of technical papers. The theme of the conference, Artificial Intelligence and Its Benefits to Society, has been chosen to promote understanding of the existing and potential contributions of AI research and development to the quality of life. The program will include panels and invited talks on the theme.

    CALL FOR TECHNICAL PAPERS, details at www.ijcai07.org/
    Submissions are invited on significant, original, and previously unpublished research on all aspects of artificial intelligence. Technical papers, not to exceed six pages, will be accepted for oral or poster presentation in one of twelve major areas, namely: constraint satisfaction, control learning, learning, knowledge representation and reasoning, multi-agent systems, natural language processing, planning and scheduling, robotics, search, uncertainty, web/information retrieval/data mining, and other. Authors are fully responsible for following the complete submission instructions in terms of format, anonymity, and originality.

    Important Dates:
    June 23, 2006 - Electronic abstract submission deadline
    June 30, 2006 - Electronic paper submission deadline
    September 18, 2006 - Paper notifications sent
    October 16, 2006 - Camera-ready copy due
    January 9-12, 2007 - IJCAI-07 Technical program

    CALL FOR TUTORIAL PROPOSALS, details at www.ijcai07.org/call_tutorial_proposal.html
    Tutorial proposals are invited, serving one or more of the following objectives: (i) introduce novices to major topics of AI; (ii) introduce expert non specialists to an AI subarea; (iii) motivate and explain a topic of emerging importance for AI; (iv) survey a mature area of AI research and/or practice; (v) provide instruction in established but specialized AI methodologies; (vi) present a novel synthesis combining distinct lines of AI work. Recommendations of topics or presenters are welcome to the Tutorial Chair, tutorials07@ijcai.org.

    Important Dates:
    April 7, 2006 - Proposal submission deadline
    May 5, 2006 - Acceptance notification
    October 27, 2006 - Syllabus and course handouts
    January 6-8, 2007 - IJCAI-07 Tutorials

    CALL FOR WORKSHOP PROPOSALS, details at www.ijcai07.org/call_workshop.html
    Workshops that focus on new and emerging topics, or on applications, are particularly encouraged. Proposals for workshops should be at most five pages in length and, in addition to complete contact information for the organizers, should contain: (i) a technical description of the workshop, including the workshop goals; (ii) the reasons the topic is of interest; (iii) a draft workshop agenda and an organizing schedule; (iv) if available, a list of tentatively confirmed attendees; (v) a list of related workshops held within the last three years, if any, and their relation to the proposed workshop.

    Important Dates:
    May 2, 2006 - Proposal submission deadline
    June 2, 2006 - Acceptance notification
    September 25, 2006 - Submission of papers to workshops
    October 23, 2006 - Workshop paper acceptance notification
    November 15, 2006 - Final complete workshop notes deadline
    January 6-8, 2007 - IJCAI-07 Workshops

    Conference Chair
    Ramon Lopez de Mantaras
    Artificial Intelligence Research Institute
    Spanish Council for Scientific Research
    confchair07@ijcai.org

    Program Chair
    Manuela M. Veloso
    Carnegie Mellon University
    pcchair07@ijcai.org

    Workshops Chair
    Carles Sierra
    Artificial Intelligence Research Institute
    Spanish Council for Scientific Research
    workshops07@ijcai.org

    Tutorials Chair
    Cynthia Lynn Breazeal
    Massachusetts Institute of Technology
    tutorials07@ijcai.org

    Local Arrangements Committee-07 (LAC)
    Secretariat Vijaya Sekhar K S,
    Asst. Manager (Outreach Division)
    International Institute for Information Technology
    vijay@iiit.ac.in


    Search and Logic: Answer Set Programming and SAT
    Seattle, WA, August 16, 2006
    http://www.cs.sfu.ca/~SearchAndLogic/


    The workshop will bring together researchers from the SAT and ASP areas to exchange ideas on the current state of the art in techniques, results, and methodologies; to discuss problems exhibited in both areas; and to formulate challenges and opportunities ahead.


    Overview

    In many applications of Computer Science, especially in combinatorial optimization, hardware design, and software correctness checking, complex problems are pervasive. Those problems may be represented in a variety of ways. One technique, used increasingly since the pioneering work of Kautz and Selman on SAT-based planning, is to represent a problem in some logic-based formalism (for instance propositional logic, quantified Boolean formulas, or logic programming under the stable model semantics) in such a way that intended models correspond to solutions. The user then applies a solver (a dedicated piece of software) to the representation of the problem, computes the model(s), and reads off the solutions.
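    To make the recipe concrete, here is a toy sketch in Python, with naive enumeration standing in for a real SAT solver. The 2-colouring example and its clause encoding are illustrative assumptions, not drawn from any system discussed at the workshop: the point is only that models of the formula correspond exactly to solutions of the original problem.

```python
from itertools import product

def solve_cnf(clauses, n_vars):
    """Naive SAT solver: enumerate all assignments of n_vars Boolean
    variables and return the first one satisfying every clause.
    Clauses use DIMACS-style literals: variable i is i, its negation -i."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

# Example problem: 2-colour the path a-b-c (vertices 1, 2, 3).
# Variable i true  <=>  vertex i is coloured red (otherwise blue).
# Adjacent vertices must differ: (x or y) and (not x or not y).
edges = [(1, 2), (2, 3)]
clauses = []
for x, y in edges:
    clauses += [[x, y], [-x, -y]]
model = solve_cnf(clauses, 3)
```

    Any model found assigns adjacent vertices different colours; a real solver replaces the exhaustive enumeration with clever search, but the "encode, solve, read off" pipeline is the same.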

    Analogous techniques are used based on a variety of formalisms, such as constraint satisfaction and integer programming, but SAT and ASP are distinguished by being based on logic. While the SAT and ASP research communities apply a similar, logic-based, approach to problem solving, the roots of the two communities are different, and they stress different aspects of the approach. For example, in SAT the emphasis is more toward efficient computation; in ASP more emphasis is placed on knowledge representation. Moreover, there are few researchers who are real experts in both areas.


    SAT

    The effectiveness of the declarative approach has perhaps been demonstrated most clearly by SAT. SAT stems from classical investigations by logicians of propositional satisfiability and has over 90 years of history, with contributions from researchers such as Wittgenstein, Tarski, Quine, Davis, Putnam, and other great mathematicians, computer scientists, and philosophers.

    SAT solver design is facilitated by the fact that the propositional formulas used have a very simple syntax and semantics. A great deal of work has gone into engineering very good solvers, which have now become standard and widely used tools in the Electronic Design Automation industry, for hardware verification and other related tasks. However, SAT has severe limitations as a modeling language.


    ASP

    ASP is a younger field, with approximately 15 years of history, but shows considerable promise on some problems that are poorly handled by SAT. ASP stems from Logic Programming, but changes the paradigm by computing models rather than unifying substitutions. Due to its Logic Programming roots, ASP input languages invariably include schemata, which, together with a built-in way to express recursion, provide considerable modeling convenience. The built-in recursion mechanism appears particularly useful when encoding problems involving sequences of events, such as hardware and software verification and planning. In contrast to other approaches, ASP appears to be the only area that has taken modeling methodology seriously from the outset.
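    As a rough illustration of the recursion ASP inherits from Logic Programming, the following Python sketch evaluates a transitive-closure program bottom-up, iterating rules to a least fixpoint. Encoding rules as Python functions over a fact set is an assumption made for brevity; real ASP systems are far more general and compute stable models, not just least models of definite programs.

```python
def least_model(facts, rules):
    """Bottom-up evaluation of a Datalog-style program: repeatedly apply
    the rules until no new facts are derived (a least-fixpoint computation).
    Each rule is a function mapping the current fact set to derived facts."""
    model = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(model)
        if new <= model:          # nothing new derived: fixpoint reached
            return model
        model |= new

# The program, in logic-programming notation:
#   reach(X, Y) :- edge(X, Y).
#   reach(X, Z) :- reach(X, Y), edge(Y, Z).
edge = {("a", "b"), ("b", "c"), ("c", "d")}
facts = {("reach", x, y) for x, y in edge}   # first rule, applied once

def transitive(model):
    """Second rule: extend reach through one more edge."""
    return {("reach", x, z)
            for (p, x, y) in model if p == "reach"
            for (y2, z) in edge if y2 == y}

reach = least_model(facts, [transitive])
```

    The loop stops as soon as an iteration derives nothing new, mirroring how recursive predicates such as reachability are handled natively by ASP systems, whereas a plain SAT encoding must unfold the recursion by hand.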


    SAT and ASP

    There are a number of important interactions between these areas. For example, some similar implementation techniques are used in both areas, and SAT solvers are even used as the engines in some ASP solvers. However, in many respects the areas proceed as largely independent disciplines, each one providing its own unique contribution, and with the bulk of attention on rather different particular problems or questions.

    Modeling practice plays an important role in both areas, but in rather idiomatic forms. In SAT, tractable restrictions are well studied, but this subject has not yet been developed in ASP. Input/modeling languages that include schemata (a fragment of predicate logic) are standard in ASP, but work of this sort in SAT has been mostly ad hoc.


    Objectives

    The expectation is that the techniques developed in each area can be profitably used in the other.

    Topics of interest include:

    PAPER SUBMISSION
    Authors should submit papers of at most 15 pages in Springer LNCS format, PDF or Postscript, online at http://www.easychair.org/LaSh06/.

    IMPORTANT DATES
    Paper Submission: May 22, 2006
    Author Notification: June 27, 2006
    Workshop: August 16, 2006



    Workshop on Hybrid Logics
    Seattle, WA, August 11, 2006
    http://hylomol.ruc.dk/HyLo2006


    Theme.
    Hybrid logic is a branch of modal logic where it is possible to directly refer to worlds/times/states or whatever the elements of the model are meant to represent. It is easy to justify interest in hybrid logic because of the usefulness of the additional expressive power; moreover, hybridization often improves the behavior of the underlying modal formalism.

    The workshop HyLo 2006 is relevant to a wide range of people, including those interested in description logic, feature logic, applied modal logics, temporal logic, and labelled deduction.
     
    Submission.
    Details will be announced at the workshop web page. The proceedings have been accepted for publication in ENTCS.

    Submission deadline: May 26, 2006 (tentative)

    Program committee.
    Carlos Areces (INRIA Lorraine, France),
    Patrick Blackburn (INRIA Lorraine, France),
    Thomas Bolander (Technical University of Denmark),
    Torben Braüner (Roskilde University, Denmark) --- Chair,
    Valeria de Paiva (PARC, USA),
    Melvin Fitting (Lehman College, New York, USA),
    Balder ten Cate (University of Amsterdam, The Netherlands),
    Jørgen Villadsen (Roskilde University, Denmark)



    Structural Operational Semantics
    Seattle, WA, August 26, 2006
    http://www.cse.unsw.edu.au/~rvg/SOS2006


    Theme:
    Structural Operational Semantics. Specific topics of interest include (but are not limited to): programming languages, process algebras, higher-order formalisms, rule formats for operational specifications, meaning of operational specifications, comparisons between denotational, axiomatic, and structural operational semantics, compositionality of modal logics with respect to operational specifications, congruence with respect to behavioural equivalences, conservative extensions, derivation of proof rules from operational specifications, and software tools that automate, or are based on, SOS.

    All submissions must be done electronically, via the above webpage.

    Submission Deadline: Friday May 26, 2006

    Preliminary proceedings at the meeting; final proceedings in ENTCS.

    Program committee:
    Rocco De Nicola (Florence, IT),
    Wan Fokkink  (Amsterdam, NL),
    Rob van Glabbeek (NICTA, AU, co-chair),
    Reiko Heckel (Leicester, UK),
    Matthew Hennessy (Sussex, UK),
    Ugo Montanari (Pisa, IT),
    Peter Mosses (Swansea, UK, co-chair),
    MohammadReza Mousavi (Eindhoven, NL),
    David Sands (Chalmers, SE),
    Irek Ulidowski (Leicester, UK),
    Shoji Yuen (Nagoya, JP)

     Invited speakers:
    Robin Milner (Cambridge, UK) 
    Bartek Klin (Sussex, UK)

    Workshop organisers:
    Rob van Glabbeek (National ICT Australia)
    Peter D. Mosses (Swansea University)

    Email: sos2006@cs.stanford.edu





    Workshop on Logic and Computational Complexity
    LCC 2006
    Seattle,  WA, August 10-11, 2006
    http://www.cis.syr.edu/~royer/lcc/LCC06/


    Topics of interest include:
    All submissions must be done electronically.  See the workshop webpage for details.
    Submission Deadline : June 12, 2006
    Program committee:
      Georg Gottlob (University of Oxford)
      Neil Immerman (University of Massachusetts, Amherst) co-chair
      Russell Impagliazzo (University of California, San Diego)
      Neil Jones (University of Copenhagen)
      Bruce Kapron (University of Victoria) co-chair
      Harry Mairson (Brandeis University)
      Karl-Heinz Niggl (University of Technology, Ilmenau)
      Toniann Pitassi (University of Toronto)
      Thomas Schwentick (University of Dortmund)
      Colin Stirling (University of Edinburgh)




    Workshop on Strategies in Automated Deduction
    STRATEGIES 2006
    Seattle,  WA, August 10-11, 2006
    http://research.nianet.org/strategies06


    This workshop is a successor to both the series of STRATEGIES workshops associated with CADE and IJCAR and to the STRATA 2003 workshop associated with TPHOLs.  Papers and participation are invited from both the fully automatic and interactive theorem proving communities.

    For the full Call for Papers see: http://research.nianet.org/strategies06/cfp.html




    3rd Workshop on Constraint Handling Rules
    CHR 2006
    Venice, Italy, July 9, 2006
    http://www.cs.kuleuven.be/~toms/CHR2006/


    Introduction
    The Constraint Handling Rules (CHR) language has become a major declarative specification and implementation language for constraint reasoning algorithms and applications. Algorithms are often specified using inference rules, rewrite rules, sequents, proof rules or logical axioms that can be directly written in CHR. Its clean semantics facilitates program design, analysis and transformation. See the CHR website (http://www.cs.kuleuven.be/~dtai/projects/CHR/) for more information.

    Previous Workshops on Constraint Handling Rules were organized in May 2004 in Ulm (Germany) and in October 2005 in Sitges (Spain).

    Topics of Interest
    The workshop calls for full papers and short papers describing ongoing work, on all aspects of CHR, including topics such as:
    Awards
    The best paper receives the CHR Best Paper Award. It is chosen among all submissions for its outstanding quality in both presentation and scientific contribution and for its impact on the field of CHR.
    Submission Information
    All papers must be written in English and not exceed 15 pages in Springer LNCS format. The authors are encouraged, although not obliged, to submit their papers already in Springer LNCS format. General information about the Springer LNCS series and the LNCS authors' instructions are available at the Springer LNCS/LNAI home page.

    Submissions should be sent to chrworkshop@gmail.com and mention 'CHR 2006 Submission' in the subject. Every submission should include the names and e-mail addresses of the authors (with the corresponding author indicated), the paper abstract in ASCII format and the actual paper in postscript or PDF format. The submission should also indicate whether it is a full paper or a short paper.
    Accepted papers will be published in a technical report.

    Important dates
    Organization
    Program Committee:
    Alessandra Raffaeta, Universita Ca' Foscari di Venezia
    François Fages, INRIA Rocquencourt
    Gregory Duck, NICTA Victoria Laboratory
    Henning Christiansen, Roskilde University
    Jacques Robin, Universidade Federal de Pernambuco
    Martin Sulzmann, National University of Singapore
    Maurizio Gabbrielli, Universita di Bologna
    Thom Fruehwirth, Universitaet Ulm
    Tom Schrijvers, Katholieke Universiteit Leuven
      Workshop Coordinators:  chrworkshop@gmail.com

    Tom Schrijvers (contact person)
    Department of Computer Science
    K.U.Leuven
    http://www.cs.kuleuven.be/~toms/


    Thom Fruehwirth
    Fakultaet fuer Informatik
    Universitaet Ulm
    http://www.informatik.uni-ulm.de/pm/fileadmin/pm/home/fruehwirth/



    International Workshop on Automated Specification and Verification of Web Systems
    WWV 2006
    Cyprus, November 15-19, 2006
    http://www.dsic.upv.es/workshops/wwv06


    SCOPE
    The increased complexity of Web sites and the explosive growth of Web-based applications have turned their design and construction into a challenging problem. Nowadays, many companies have turned their Web sites into interactive, completely automated, Web-based applications (such as Amazon, on-line banking, or travel agencies) with a high complexity that requires appropriate specification and verification techniques and tools. Systematic, formal approaches to analysis and verification can address the problems of this particular domain with automated and reliable tools that also incorporate semantic aspects.

    We solicit papers on formal methods and techniques applied to Web sites, Web services, or Web-based applications, such as:
    The WWV series provides a forum for researchers from the communities of rule-based programming, automated software engineering, and Web-oriented research, to facilitate cross-fertilization and the advancement of hybrid methods combining the three areas.

    LOCATION
    WWV'06 will be held in November in Cyprus as a Special Track of the 2006 International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA 2006).

    HISTORY
    The first edition, WWV'05, was held in Valencia, Spain, during March 14-15, 2005. The workshop was attended by 42 participants from universities, research institutes, and companies from 11 countries: Austria, Canada, France, Germany, India, Italy, Japan, Mexico, Spain, UK, and USA. A selection of the papers appeared as a volume of the Elsevier series Electronic Notes in Theoretical Computer Science (ENTCS).

    PUBLICATION
    Accepted papers will be published in a preliminary proceedings volume, which will be available during the workshop. Publication of the workshop post-proceedings in the Springer series Lecture Notes in Computer Science (LNCS) is envisaged.

    PROGRAM CO-CHAIRS
    Maria Alpuente (Technical University of Valencia, Spain)
    Moreno Falaschi (University of Siena, Italy)

    WORKSHOP CHAIR
    Santiago Escobar (Technical University of Valencia, Spain)

    PROGRAM COMMITTEE
    Jose Julio Alferes (Universidade Nova de Lisboa, Portugal)
    Maria Alpuente (Technical University of Valencia, Spain)
    Demis Ballis (University of Udine, Italy)
    Francois Bry (University of Munich, Germany)
    Santiago Escobar (Technical University of Valencia, Spain)
    Francois Fages (INRIA Rocquencourt, France)
    Moreno Falaschi (University of Siena, Italy)
    Gopal Gupta (University of Texas at Dallas, USA)
    Shriram Krishnamurthi (Brown University, USA)
    Tiziana Margaria (University of Potsdam, Germany)
    I.V. Ramakrishnan (State University of New York at Stony Brook, USA)
    Leon van der Torre (University of Luxembourg, Luxembourg)

    IMPORTANT DATES
    Abstract Submission     July 3, 2006
    Full Paper Submission   July 16, 2006
    Acceptance Notification September 17, 2006
    Camera Ready            October 15, 2006
    Workshop                November 15-19, 2006 (one day)



    International Workshop on Proof-Carrying Code
    PCC 2006
    Seattle, WA, August 11, 2006
    http://www.cs.stevens.edu/~abc/PCC-Workshop.html


    Affiliated with LOGIC IN COMPUTER SCIENCE (LICS 2006) and part of the Federated Logic Conference (FLoC 2006)

    IMPORTANT DATES
    Poster Submission 31 May 2006
    Notification 11 June 2006

    KEYNOTE SPEAKERS
    Andrew Appel (Princeton University)
    Ian Stark (University of Edinburgh)

    INVITED SPEAKERS
    Amal Ahmed (Harvard University)
    Gilles Barthe (INRIA Sophia-Antipolis)
    Ricardo Medel (Stevens Institute of Technology)
    Zhong Shao (Yale University)
    Dachuan Yu (DoCoMo Labs)

    WEB SITES:
    PCC 2006: http://www.cs.stevens.edu/~abc/PCC-Workshop.html
    LICS 2006: http://www.informatik.hu-berlin.de/lics/lics06/
    FLoC 2006: http://research.microsoft.com/floc06/

    DESCRIPTION
    As pioneered by Necula and Lee, Proof-Carrying Code (PCC) is a technique that allows the safe execution of untrusted code. In the PCC framework, the code receiver defines a safety policy that guarantees the safe behavior of programs, and the code producer creates a proof that its code abides by that safety policy. Safety policies can give end users protection from a wide range of flaws in binary executables, including type errors, memory management errors, violations of resource bounds, access control, and information flow.

    PCC relies on the same formal methods as does program verification, but it has the significant advantage that safety properties are much easier to prove than program correctness. The producer's formal proof will not, in general, prove that the code yields a correct or meaningful result, so this technique cannot replace other methods of program assurance, but it guarantees that execution of the code can do no harm. The proofs can be mechanically checked by the host; the producer need not be trusted at all, since a valid proof is incontrovertible evidence of safety.

    PCC has sparked interest throughout the world, from academia to industry, and has motivated a large body of research in typed assembly languages, types in compilation, and formal verification of safety properties, stimulating new interest in formal methods and programming languages technology.

    The aim of the workshop is to bring together people from academia and industry and to promote the collaboration between those adapting PCC ideas to new industrial applications and experts in logic, type theory, programming languages, static analysis, and compilers.

    PROGRAM
    The meeting will have two keynote speakers representing ongoing research in Europe and the USA, and invited speakers from academia and industry. There will also be an open poster session to offer the possibility to showcase a broader spectrum of research in the area. Although poster submission is open to everybody actively working in areas related to the meeting, we particularly encourage submissions by students.

    POSTER SUBMISSIONS
    Posters provide a forum for presenting work in an informal and interactive setting. They are ideal for discussing current work not yet ready for publication, for PhD thesis summaries and research project overviews. The length of a poster submission is 2 pages in the Springer LNCS format. Accepted posters will be presented at a poster session during the workshop. An extended abstract (2 pages) of each accepted poster will be published in the informal proceedings. Posters must be submitted electronically to
    pcc2006@cs.stevens.edu.

    PUBLICATION
    There will be informal proceedings with extended abstracts of the presentations and posters published as a Stevens Institute of Technology Tech-Report available at the meeting. We invite speakers and registered participants to submit a paper to a post meeting special issue of MSCS.

    PROGRAM COMMITTEE:
    Adriana Compagnoni, Chair (Stevens Institute of Technology)
    Amy Felty (University of Ottawa)

    CONTACT: pcc2006@cs.stevens.edu



    Indian International Conference on Artificial Intelligence
    IICAI 2007
    Pune, India, December 17-19, 2007
    http://www.iiconference.org


    The 3rd Indian International Conference on Artificial Intelligence (IICAI-07) will be held in Pune, India, during December 17-19, 2007. IICAI-07 is one of the major AI events in the world. The conference focuses on all areas of AI and related fields. We invite paper submissions; please visit the conference website for more details.


    Bhanu Prasad
    IICAI-07 Chair 
    Department of Computer and Information Sciences
    Florida A&M University

    Tallahassee, FL 32307
    USA 
    Email: bhanu.prasad@famu.edu
    Phone: 850-412-7350



    Symposium on Logical Foundations of Computer Science
    LFCS 2007
    New York City, US, June 4-7, 2007
    http://www.cs.gc.cuny.edu/~sartemov/lfcs07


    Purpose. 
    The LFCS series provides an outlet for the fast-growing body of work in the logical foundations of computer science, i.e., those areas of fundamental theoretical logic related to computer science.

    Theme.
    Constructive mathematics and type theory; logical foundations of programming; logical aspects of computational complexity; logic programming and constraints; automated deduction and interactive theorem proving; logical methods in protocol and program verification; logical methods in program specification and extraction; domain theory logics; logical foundations of database theory; equational logic and term rewriting; lambda and combinatory calculi; categorical logic and topological semantics; linear logic; epistemic and temporal logics; intelligent and multiple agent system logics; logics of proof and justification; nonmonotonic reasoning; logic in game theory and social software; logic of hybrid systems; distributed system logics; system design logics; other logics in computer science.

    All submissions must be made electronically (up to 15 pages, in LNCS format) to lfcs07@gmail.com.

    Submission Deadline.
    December 18, 2006.

    Steering Committee.
    Anil Nerode (Cornell, General Chair);
    Stephen Cook (Toronto);
    Dirk van Dalen (Utrecht);
    Yuri Matiyasevich (St. Petersburg);
    John McCarthy (Stanford);
    J. Alan Robinson (Syracuse);
    Gerald Sacks (Harvard);
    Dana Scott (Carnegie Mellon).

    Program Committee.
    Samson Abramsky (Oxford);
    Sergei Artemov (New York City, PC Chair);
    Matthias Baaz (Vienna);
    Lev Beklemishev (Moscow);
    Andreas Blass (Ann Arbor);
    Lenore Blum (CMU);
    Samuel Buss (San Diego);
    Thierry Coquand (Göteborg);
    Ruy de Queiroz (Recife, Brazil);
    Denis Hirschfeldt (Chicago);
    Bakhadyr Khoussainov (Auckland);
    Yves Lafont (Marseille);
    Joachim Lambek (McGill);
    Daniel Leivant (Indiana);
    Victor Marek (Kentucky);
    Anil Nerode (Cornell, General LFCS Chair);
    Philip Scott (Ottawa);
    Anatol Slissenko (Paris);
    Alex Simpson (Edinburgh);
    V.S. Subrahmanian (Maryland);
    Michael Rathjen (Columbus);
    Alasdair Urquhart (Toronto).


    Workshop on the Integration of SAT and CP Techniques

    Nantes, France, September 25, 2006


    Workshop Description
    SAT and CP are two problem-solving technologies that share many similarities, and there is considerable interest in cross-fertilising these two areas. The techniques used in SAT (propagation, activity-based heuristics, conflict analysis, restarts...) constitute a very successful combination which makes modern DPLL solvers robust enough to solve large real-life instances without the heavy tuning usually required by CP tools. Whether such techniques can help the CP community develop more robust and easier-to-use tools is an exciting question. One limitation of SAT, on the other hand, is that not all problems can be expressed effectively in a Boolean format. This makes CP an appealing candidate for many applications, such as software verification, where SAT is traditionally used but where more expressive types of constraints would be more natural. The goal of this workshop is to boost discussion between the SAT and CP communities by encouraging submissions at the border of these two areas.

    We welcome the submission of papers related to any aspect of SAT or CP in which experience from one area benefits the other area, as well as papers which, more generally, contribute to the integration between SAT and CP. Typical topics include, but are not restricted to:

     

    Submission procedure

    We ask authors to submit technical papers in either Postscript or PDF using the CP conference style. Submissions are limited to 15 pages. Please submit papers to youssefh_at_microsoft_dot_com.

    Attendance

    At least one author of each accepted submission must attend the workshop.

    Format

    Half-day workshop; presentation format to be defined.

    Important dates

    Submission deadline: June 18th
    Notification of acceptance: July 30th
    Camera-ready copy deadline: August 7th
    CP early registration deadline: tba
    CP late registration deadline: tba
    CP workshops: September 25th, 2006
    CP conference: September 25-29, 2006
     

    Contact information

    Youssef Hamadi
    Microsoft Research, Cambridge
    7 J J Thomson Avenue,
    CB3 0FB Cambridge, UK.
    youssefh_at_microsoft_dot_com

     
    Lucas Bordeaux
    Microsoft Research, Cambridge

    7 J J Thomson Avenue,
    CB3 0FB Cambridge, UK
    lucasb_at_microsoft_dot_com

     

    Program Committee




    International Congress on Tools for Teaching Logic
    SICTTL 2006
    Salamanca, Spain, September 26-30, 2006
    http://logicae.usal.es/SICTTL


    AIMS AND SCOPE:
    Most of us share the feeling that the teaching of an interdisciplinary field spanning logic, linguistics and computer science should be organized in a way that facilitates further interdisciplinary research. Nevertheless, we are aware that the needs differ across those fields of study which are already established. The overall concern is the teaching of logic, with special regard to addressing innovations and the systematization of educational activity.
    We believe that the role of logic in shaping the epistemology of this new century should be crucial; information technology is rapidly changing the world we live in, and logic helps us to produce, distribute and process information, as well as to understand how coded information can modify people's state of knowledge. The First International Congress on Tools for Teaching Logic took place at the University of Salamanca in June 2000. A number of logicians from different countries in Europe, the US and South America gathered there to focus on education at the interfaces between philosophy, linguistics, mathematics, computer science and related disciplines.

    The organizing committee invites you to take part in the Second International Congress on Tools for Teaching Logic, which will be held in Salamanca on September 26-30, 2006. There will be lectures, discussion sessions, round tables and software demonstrations. You are kindly invited to take an active part in these sessions and to exhibit your teaching or professional software. We invite submission of papers on all aspects of teaching logic, including the following main topics:
    1. Issues, Means and Objectives in Teaching Logic.
    2. Teaching the Role of Logic in Science and Humanities.
    3. Teaching Logic Research in a Postgraduate Programme.
    4. Software for Teaching Logic and Reasoning.
    5. e-Learning Logic: Resources and Challenges.
    PROGRAM
    We have already arranged the following plenary sessions:
    Dick de Jongh, Carlos Martín Vide, Felip Manyà, Raymundo Morado, Ángel Nepomuceno, Huberto Marraud, Diógenes Rosales, Concepción Martínez, Enrique Caorsi, Francisco Salguero, Carlos Oller, Tulio Olmos, Antonia Huertas, Enrique Alonso, Carmen Cadena and others are also planning to give talks.



    ADVISING COMMITTEE
    Carlos Areces (Loria, Nancy, France)
    Carlos Enrique Caorsi (Facultad de Humanidades y Ciencias de la Educación, Uruguay)
    Hans van Ditmarsch (University of Otago, New Zealand)
    Lucila González Pazos (Departamento de Lógica, Universidad Complutense de Madrid, Spain)
    Antonia Huertas (OUC, Barcelona, Spain)
    Concepción Martínez (Departamento de Lóxica e Filosofía Moral, Universidade de Santiago de Compostela, Spain)
    Huberto Marraud (Universidad Autónoma de Madrid, Spain)
    Ángel Nepomuceno (ULLI, Universidad de Sevilla, Spain)
    Francisco José Salguero (Universidad de Sevilla, Spain)
    Carlos Oller (Universidad de La Plata, Argentina)
    Tulio Olmos (Universidad Central de Venezuela, Venezuela)
    Gladys Palau (Universidad de Buenos Aires, Argentina)
    Ruy de Queiroz (Universidade Federal de Pernambuco, Brazil)
    Diógenes Rosales (Universidad Pontificia, Lima)

    ORGANIZING COMMITTEE
    Enrique Alonso (Universidad Autónoma de Madrid, Spain) (enr.alonso@gmail.com)
    Dick de Jongh (ILLC, Universiteit van Amsterdam, The Netherlands) (dickdj@science.uva.nl)
    María Manzano, Universidad de Salamanca (mara@usal.es)
    Felip Manyà (IIIA, CSIC, Bellaterra, Spain) (felip@eps.udl.es)
    Raymundo Morado (IIF, Universidad Nacional Autónoma de México, México) (morado@minerva.filosoficas.unam.mx)

    LOCAL ORGANIZERS
    María Manzano (mara@usal.es)
    Belén Pérez Lancho (lancho@usal.es)
    Gustavo Santos García (santos@usal.es)
    Ana Gil (abg@usal.es)



    International Workshop on Applications of Logic Programming in the Semantic Web and Semantic Web Services
    ALPSWS 2006
    Seattle, WA, August 16, 2006
    http://www.deri.at/events/workshops/alpsws2006


    Description of the Workshop Topic:
    The advent of the Semantic Web promises machine-readable semantics and a machine-processable next generation of the Web. The first step in this direction is the annotation of static data on the Web with machine-processable information about knowledge and its structure by means of ontologies. The next step is the annotation of dynamic applications and services invocable over the Web, in order to facilitate the automated discovery, selection and composition of semantically described services and data sources on the Web by intelligent methods; this is called Semantic Web Services. Many workshops and conferences have been dedicated to these promising areas, mostly with generic topics, bringing together people from a wide variety of research fields with different understandings of the topic. The plethora of these workshops and conferences makes it hard to keep track of the various approaches to a particular technology, such as, in our case, declarative logic programming.

    In this workshop we want to advance the applications of Logic Programming as a paradigm for declarative knowledge representation and reasoning for the Web. The idea is to bring together the impressive body of work related to applications of LP to Semantic Web and Semantic Web Services. This brings the following major benefits:
    Topics
    Possible topics include (but are not limited to):
    Important Dates
    * 22 May 2006, Submission of papers
    * 19 June 2006, Notification of acceptance
    * 30 June 2006, Camera-ready versions due
    * 16 August 2006, Workshop

    Submission Format
    Authors are invited to submit original research contributions in English, following the LNCS format. Papers must not exceed 16 pages.

    Besides full papers, we also encourage short paper submissions of up to 6 pages which present preliminary results, system descriptions or practical real-world use cases and implementations.

    Submitted papers will be reviewed by at least two members of the program committee.

    Papers must be submitted electronically in either PDF (preferred) or postscript format. For paper submission we use the EasyChair conference management system:

    http://www.easychair.org/ALPSWS2006/

    For accepted submissions, at least one author must register for the workshop in order for the paper to appear in the proceedings and to be scheduled in the workshop program. The best papers in the workshop will be published in a dedicated section of a special issue on 'Logic Programming and the Web' in the journal Theory and Practice of Logic Programming (TPLP), see

    http://www.dsi.unive.it/~tplp/

    More information on the venue, registration, hotels, and related events will be available at the conference web site.

    Workshop Organization
    The workshop addresses the area at the intersection of the Semantic Web and Logic Programming. The workshop will be organized in part around talks presenting research results in this area, selected from the accepted submissions. Another important part of the workshop will be to identify challenges and open problems in this area and to provide an opportunity for new ideas and initiatives to emerge and get the attention they deserve.

    In order to facilitate and enable a rich interaction environment, part of the workshop will be dedicated to discussion using the open-space methodology, which facilitates and enables effective on-site agenda building and execution.

    The co-organizers of the workshop have successfully deployed open-space agenda building in scientific workshops in the past, e.g., the Semantics in P2P and Grid Computing Workshops at the World Wide Web Conference 2004 and the Friend of a Friend Workshop in Galway, 2004.

    In those workshops, building the agenda in interaction with the audience enabled new initiatives to develop and had an impact on the future of these fields. We hope to achieve the same within this workshop and to reconnect the Semantic Web and Logic Programming fields.

    We further plan to set up an online platform to exchange information and to keep the discussion alive even after the workshop.

    Organizing Committee
    * Axel Polleres, Universidad Rey Juan Carlos, Madrid
    * Stefan Decker, DERI, National University of Ireland, Galway
    * Gopal Gupta, University of Texas at Dallas
    * Jos de Bruijn, DERI, Leopold-Franzens Universitaet Innsbruck

    Program Committee
    * Jürgen Angele, Ontoprise GmbH
    * Chitta Baral, Arizona State University
    * Robert Baumgartner, Lixto Software GmbH
    * Leopoldo Bertossi, Carleton University
    * Francois Bry, Ludwig Maximilians University
    * Thomas Eiter, Vienna University of Technology
    * Pascal Hitzler, University of Karlsruhe
    * Giovambattista Ianni, University of Calabria
    * Sheila McIlraith, University of Toronto
    * David Pearce, Universidad Rey Juan Carlos
    * Hans Tompits, Vienna University of Technology
    * Sebastian Schaffert, Salzburg Research Forschungsgesellschaft
    * Andy Seaborne, HP
    * Gerd Wagner, Brandenburg University of Technology
    * Adrian Walker, Reengineering
    * Guizhen Yang, SRI