
Prolog Parser in Prolog

Appeared in Volume 10/1, February 1997

Keywords: compilers.


griffith@sfs.nphil.uni-tuebingen.de
John Griffith
3rd November 1996

I'm looking for a Prolog parser written in Prolog, possibly as a DCG. I've searched all the relevant FAQs, Web sites, etc. I could find, but no luck. I have seen one written in C, but I really want one in Prolog. (I know it will be slower).

In case anyone is interested, I want to add some meta syntax to Prolog which will be transformed by a preprocessor into valid Prolog code. For example, the 2nd order predicate notation: P(X,Y) where 'P' would get instantiated to some predicate name at compile time. This kind of notation, of course, makes read_term/2 scream since it is not valid Prolog syntax. So, modifying a Prolog parser seems to be the easiest solution.
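
To make the intended transformation concrete, here is a minimal sketch (the predicate names are purely illustrative, not taken from any existing system): assuming a modified parser turns P(X,Y) into, say, meta_apply(P, [X,Y]), a simple expander can replace that goal with a direct call whenever P is known at compile time.

% Sketch only: rewrite meta_apply(P, Args) into a direct goal when P is
% already bound to a predicate name; leave all other goals untouched.
meta_expand_goal(meta_apply(P, Args), Goal) :-
    atom(P),
    !,
    Goal =.. [P|Args].
meta_expand_goal(Goal, Goal).

For example, meta_expand_goal(meta_apply(member, [X,L]), G) binds G to member(X,L).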


fjh@cs.mu.oz.au
Fergus Henderson
3rd November 1996

The Mercury system includes a parser for Prolog-style terms in its standard library. The parser is written in Mercury, using DCGs, but it should be pretty easy to convert it to Prolog.

The Mercury system is available (in both source and binary form) from several FTP sites. See http://www.cs.mu.oz.au/mercury for details on how to obtain it.

The next release of Mercury will support this sort of higher-order syntax.


ost@comnets.rwth-aachen.de
Martin Ostermann
4th November 1996

Turbo Prolog (now PDC) used to include a "Prolog Inference Engine (PIE)" demo, which was basically a Prolog interpreter written in Prolog. It included a parser. As far as I recall, it was in the public domain.

Update (December 19th): Unfortunately, it's not currently available on their Web or FTP sites.


thomasl@csd.uu.se
Thomas Lindgren
6th November 1996

Take a look at HiLog. It implements the syntax you are after.
ftp://ftp.cs.sunysb.edu/pub/hilog/hilog.tar.Z

XSB may also be of use:
http://www.cs.sunysb.edu/~sbprolog/


ok@goanna.cs.rmit.edu.au
Richard A. O'Keefe
7th November 1996

The public domain Prolog parser used to be mentioned in the FAQ.

If you have seen a Prolog parser written in C, you have seen one that got it wrong. (Well, it's possible to do it right in C, but since arbitrary lookahead is required, I've never seen it done.)

John Griffith writes:
I know it will be slower.

Are you sure? The measurements I did a couple of years ago showed that tokenising was slower in Prolog than in C (typically because Prolog implementors don't put the effort into making get0/1 go as screamingly fast as it could go), but there was very little difference in parsing if you had a good Prolog compiler.
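
(To see why get0/1 matters so much: the inner loop of a Prolog-in-Prolog tokeniser is essentially one get0/1 call per character. A minimal sketch, with illustrative predicate names, threading the one-character lookahead:)

% Collect a run of lower-case letters; Next is the pending lookahead.
read_name(C0, [C0|Cs], Next) :-
    lower_case(C0),
    !,
    get0(C1),          % one get0/1 per character read
    read_name(C1, Cs, Next).
read_name(C0, [], C0).

lower_case(C) :- C >= 0'a, C =< 0'z.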

John Griffith continues:
I want to add some meta syntax to Prolog...

(a) It takes only a couple of lines of code to make the public-domain parser turn <variable>(<term1>,...,<termn>) into call(<variable>, <term1>, ..., <termn>).

(b) I am distressed to see people, at this late date, thinking only of predicate names and not of closures (which is what call/N supports).
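
(For instance, with call/N the closure can carry data as well as a predicate name. A small illustration, not part of the parser:)

add(X, Y, Sum) :- Sum is X + Y.

% call/3 appends its extra arguments to the closure add(2), so the
% query below runs add(2, 5, Sum) and binds Sum to 7:
% ?- call(add(2), 5, Sum).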

He continues:
This kind of notation, of course, makes read_term/2 scream since it is not valid Prolog syntax.

You do not say which Prolog you have. In Quintus Prolog, there is full source for a tokeniser and parser in the library. If you have that, you will notice:

parse(var(Variable,_), ['('|S1], Precedence, Answer, S) :- !,
    parse(S1, 999, Arg1, S2),	% look for "," or ")"
    read_args(S2, RestArgs, S3),
    !,
    Term =.. [call,Variable,Arg1|RestArgs],
    exprtl0(S3, Term, Precedence, Answer, S).

is already present in library(read). So:
?- portable_read(Term).

or
?- portable_read(Term, Dictionary).

will already accept what you want.


griffith@sfs.nphil.uni-tuebingen.de
John Griffith
7th November 1996

RAO (Richard A O'Keefe) writes:
The public domain Prolog parser used to be mentioned in the FAQ.

Unfortunately, it is mentioned in the "Prolog Resource Guide" [1], but only in a comparative sense:

"A parser for standard Prolog text written in C is available by anonymous FTP from:
ftp://trappist.elis.rug.ac.be/pub/prolog/

It consists of only three modules (tokenizer, parser, and display routine) and an interface module to integrate it with an existing Prolog system. It is completely deterministic and about 10 times faster than O'Keefe's public domain parser written in Prolog. For more information, write to Koen De Bosschere <kdb@elis.rug.ac.be>."


The claim is that the C parser is "about 10 times faster". If inaccurate, this description should be amended. I think pointers to your public domain parser would also be appropriate somewhere in [1].

RAO writes:
(b) I am distressed to see people, at this late date, thinking only of predicate names and not of closures (which is what call/N supports).

I should expand the description of the kind of meta syntax I would like. I'm thinking about something along the lines of the Enhanced-Schema Language (ESL) described by Vasconcelos and Fuchs (see [2]).

In this language, there are not only second order-like functor symbols, but also meta variables over vectors of arguments, and the possibility to index positions within such vectors. (Their syntax includes many other expressive mechanisms, see [2] for details.) I think that ESL syntax is at least as expressive as call/N.

Moreover, I want to use an ESL-like syntax as a kind of macro language, which would be expanded into real (non-meta) Prolog code at compile time. I'm not sure of the use of call/N you had in mind, but (at least according to tests I performed) using call/1 at run-time is significantly slower than actually calling the argument directly. One could, of course, compile out instances of call/N when the arguments are known at compile time, therefore avoiding the run-time overhead.
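
As a rough illustration of what compiling out a known closure buys (the predicate names here are mine, not from any library):

% Generic meta-predicate: pays for call/2 on every list element.
all_satisfy([], _P).
all_satisfy([X|Xs], P) :- call(P, X), all_satisfy(Xs, P).

% A concrete one-argument test predicate:
is_atom_a(a).

% What compile-time expansion of all_satisfy(L, is_atom_a) could
% produce, with the run-time call/2 gone:
all_atom_a([]).
all_atom_a([X|Xs]) :- is_atom_a(X), all_atom_a(Xs).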

[1] Mark Kantrowitz, "Prolog Resource Guide", comp.lang.prolog, May 1996, ftp://ftp.cs.cmu.edu:/user/ai/pubs/faqs/prolog/prg_1.faq. Email: mkant+prg@cs.cmu.edu.

[2] Wamberto W. Vasconcelos and Norbert E. Fuchs, "An Opportunistic Approach for Logic Program Analysis and Optimisation using Enhanced Schema-Based Transformations", Proc. LoPSTr'95, 5th Int. Workshop on Logic Program Synthesis and Transformation, Utrecht, Netherlands, LNCS 1048, Springer Verlag, pp.174-188, 1996. ftp://ftp.ifi.unizh.ch/pub/techreports/other_docs/lopstr95.ps.gz


jensk@bbn.hp.com
Jens Kilian
7th November 1996

Richard A. O'Keefe wrote:
If you have seen a Prolog parser written in C, you have seen one that got it wrong. (Well, it's possible to do it right in C, but since arbitrary lookahead is required, I've never seen it done.)

In ISO Prolog, there is no need for arbitrary look-ahead.

As for the PD compiler, the only version I ever found was one for an extended syntax (multi-word operators and similar weirdness), and it was kludged beyond understanding. If you know of a simpler version which actually parses Prolog, please let me know.


lee@cs.mu.oz.au
Lee Naish
10th November 1996

John Griffith writes:
I think that ESL syntax is at least as expressive as call/N.

If people still think that call/N or, heaven forbid, even less expressive primitives are the right way to do higher order logic programming, please take a look at http://www.cs.mu.oz.au/~lee/papers/ho.


ok@goanna.cs.rmit.edu.au
Richard A. O'Keefe
22nd November 1996

Jens Kilian writes:
As for the PD compiler, the only version I ever found was one for an extended syntax...

Sounds like DISTFI.PL. Distfix operators are provided in CGOL, and are used in several functional programming languages and in specification languages such as Larch and OBJ3.

He continues:
If you know of a simpler version which actually parses Prolog

READ.PL, available from the same place as DISTFI.PL. If you have Quintus Prolog, it is in library(read).


ok@goanna.cs.rmit.edu.au
Richard A. O'Keefe
22nd November 1996

John Griffith writes:
The claim is that the C parser is "about 10 times faster". If inaccurate, this description should be amended.

Inaccurate, shminaccurate, the statement is totally meaningless.

It doesn't say: what CPU; what C system; what compiler options for the C system; what Prolog system; what options for the Prolog system.

And it doesn't point out that the two programs don't parse the same language.

The PD Prolog code parses "Edinburgh Prolog", which is a much harder language to parse than ISO Prolog, which is what the C version does. Basically, the C bigots whinged in the ISO committee until everything whose implementation in C was less than completely obvious was pruned away. Of course it looks fast, because it's parsing a simplified language that was designed to be easy to parse.

For what it's worth, if you combine the tokeniser in Quintus's "ppl.c" with the parser in C Prolog, and fiddle just a little bit, you get a Prolog-in-C parser that is:

Unfortunately, the C Prolog parser is copyright EdCAAD.

He continues:
I'm not sure of the use of call/N you had in mind, but (at least according to tests I performed) using call/1 at run-time is significantly slower than actually calling the argument directly.

call/1 is not call/N. You do not say which Prolog system you tried it in. For example, here's one possible implementation of call/1:

call(Query) :-
   assert(('$call'(Query) :- !, retractall('$call'(_)), Query)),
   '$call'(Query).

Obviously this can be slow (assert/1 might compile, for example). Something tricky has to be done for call(X) if the predicate of X is (,)/2, (;)/2, (->)/2, or (\+)/1. But for the usual case, call/N can be implemented so that the cost is one dispatch more than a direct call. There will also be indirect effects (some variables that could have gone on the local stack will go on the global stack). But call/N can be very fast. That doesn't guarantee that it is, in any particular Prolog.


gwait@bnr.co.uk
Graham Thwaites
25th November 1996

Richard A. O'Keefe wrote:
the C bigots whinged...

It is true that the ISO Committee rationalised the syntax of Prolog but this particular decision was independent of any degree of C bigotry - either for or against. I don't have my original proposal to hand but my recollection is that there were two main aspects:

Taken together these measures prevent implementation dependent ambiguities in the resulting parse and also restrict the degree of lookahead required to a single token plus one character.

While these are gains for the compiler, the far more important benefit is that the understanding of the code by a human reader is enhanced.


rss@cise.npl.co.uk
Roger Scowen
27th November 1996

Richard wrote:
the C bigots whinged...

Proponents of "Edinburgh Prolog" should remember that very early in the standardization process, Chris Moss showed in a number of examples and tests that systems claiming to support "Edinburgh syntax" varied widely. There were no "C bigots" in the working group, just many experts trying to reach a sensible acceptable compromise.

In fact, ISO Standard Prolog defines its syntax so that almost all "Edinburgh Prolog" source files conform to the standard; those that do not would probably not have been portable anyway.

The standard syntax, unlike the programming manuals, fully defines for any actual source text its abstract syntax, and thus meaning. It also allows sensible extensions, e.g. for ISO 8859 (Latin-1) and multi-byte character sets.

P.S. The working group and I were very grateful to Koen De Bosschere for checking and demonstrating that the standard syntax was sound and could be efficiently parsed.


ok@goanna.cs.rmit.edu.au
Richard A. O'Keefe
3rd December 1996

Roger Scowen writes:
It also allows sensible extensions, e.g. for ISO 8859 (Latin-1) and multi-byte character sets.

My "draft proposed standard for Prolog", disseminated before there was a BSI Prolog committee, explained the need for supporting wide character sets. Quintus Prolog, while remaining fully compatible with Edinburgh Prolog, has supported ISO Latin-1 since it was a draft (we knew it as MNCS, and the standard actually changed slightly, so we supported that) and has supported Kanji, since before I even joined Quintus.

The statement that the standard allows Latin-1 and multi-byte as extensions is damning. The Prolog standard should, like the Ada 95 standard, have been defined in terms of ISO 10646. Handling the current international standard character set should not be an extension!

He continues:
P.S. The working group and I were very grateful to Koen De Bosschere for checking and demonstrating that the standard syntax was sound and could be efficiently parsed.

As I have pointed out, and written up in detail elsewhere, he failed to demonstrate that the standard syntax could be efficiently parsed. It is possible to write a substantially faster parser that breaks less old code.

I was able to use de Bosschere's parser to demonstrate that the new syntax does break a lot of old code.

None of the Prolog systems currently available to me conforms to the ISO standard, or at any rate, none of them claims to, and only one of them claims to come even close to it. Several of these systems are available in source form, and it would be easier for me to port one of these systems than to port my code to ISO, assuming I could find a useful number of ISO Prolog systems to port to. The reason for this is, of course, that the systems I use nowadays have important useful features (e.g. coroutining, constraint handling, tabling) that are not covered by the standard.


griffith@sfs.nphil.uni-tuebingen.de
John Griffith
24th November 1996

RAO (Richard A O'Keefe) writes:
For what it's worth, if you combine the tokeniser in Quintus's "ppl.c" with the parser in C Prolog, and fiddle just a little bit, you get a Prolog-in-C parser

I have never seen ppl.c; is this part of Quintus' source code? I have always assumed that we only received binaries, and that the source was unavailable.

RAO continues:
call/1 is not call/N. You do not say which Prolog system you tried it in.

I used Quintus Prolog Release 3.2 (Sun 4, SunOS 5.4). I have no idea how call/1 is implemented there. I said call/1 because the only call/N that I am aware of is (I believe) built in terms of call/1. So call/N can be no faster than call/1. For instance, in the Quintus library call.pl (written by you) call/2 is defined as:

call(Term, Y1) :-
  strip_module_prefix(Term, user, Form, Module),
  append_term(Form, 1, N, Full),
  !,
  N1 is N+1, arg(N1, Full, Y1),
  Module:Full.
call(Term, Y1) :-
  call_error(Term, call(Term,Y1)).

Here 'Module:Full' is the same as 'call(Module:Full)', isn't it? In any case, it seems as if some compilation is going on. To make this more concrete, I have included the testing code I used and some results below. It uses the time/3 predicate from Quintus library benchmark.pl.

The tests run down a list and check that every member is the atom "a", so the base predicate (is_a/1) is very simple. I compared four implementations. The first one is a specialized predicate (is_a_list_1/1) that checks if all members of a list are the atom "a". The second (is_a_list_2/1) does the same, but instead of calling is_a/1 directly, uses call/1. The third (map_list_1/1) is a meta-predicate that passes the predicate name and constructs call/2 from the predicate name and the list argument. The last predicate (map_list_2/1) is a pseudo meta-predicate that doesn't use call/1, but instead requires the existence of a properly defined "hook".

When I first performed these tests, I really wanted to see how the "hook" method compared. The reason is that I like the expressiveness of meta-predicates, but I don't like the overhead. However, I usually use meta-predicates only as a sort of shorthand. That is, I know at compile time what will be called. So the hook idea seemed like a nice compromise between expressiveness and speed. The results were positive: the hook method is around 20% slower than the specialized predicate, while the true meta-predicates with call are over an order of magnitude slower.

Of course, if compiling the call/1 argument is the bottleneck (which it probably is), then the overhead of the true meta-predicate method should become less significant as the base predicate time becomes larger. The same should be true for the hook method, I suppose.

However, I'm now interested in just compiling out the meta-predicate and ending up with the specialized predicate. This is why I was originally looking for a Prolog parser. I want to define something like 2nd order macros that would get expanded out at compile time. Of course, one nice advantage of meta-predicates is the smaller code space. If the overhead of call can be significantly reduced then there doesn't seem to be any reason not to use real meta-predicates.

% Speed tests for meta predicates.

% Simple test predicate 
is_a(a).

% Non-meta predicate 
is_a_list_1([]).
is_a_list_1([F|R]) :-
   is_a(F), is_a_list_1(R).

% Non-meta predicate with call
is_a_list_2([]).
is_a_list_2([F|R]) :-
   call(is_a(F)), is_a_list_2(R).

% Meta predicate with call
map_list_1([],_G).
map_list_1([F|R],G) :- % Simulate call/2.
   functor(Term,G,1), arg(1,Term,F),
   call(Term), map_list_1(R,G).

% Meta predicate with hook
map_list_2([],_G).
map_list_2([F|R],G) :-
   map_list_hook(G,F), map_list_2(R,G).

map_list_hook(is_a,X) :- is_a(X).

% Generate a list of atoms
gen_a_list(0,[]) :- !.
gen_a_list(N,[a|R]) :-
   N1 is N - 1, gen_a_list(N1,R).

% Test suite
% For an alternative, see the definition of cpu_time/3 in "The Craft
% of Prolog", by Richard A. O'Keefe
:- use_module(library(benchmark),[time/3]).

test_is_a_list_1(N) :- gen_a_list(N,L), is_a_list_1(L).
test_is_a_list_2(N) :- gen_a_list(N,L), is_a_list_2(L).
test_map_list_1(N) :- gen_a_list(N,L), map_list_1(L,is_a).
test_map_list_2(N) :- gen_a_list(N,L), map_list_2(L,is_a).

test_all(ListSize,Count,InternalCount) :-
   write('Non-meta predicate:'),
   time(test_is_a_list_1(ListSize),Count,InternalCount),nl,
   %
   write('Non-meta predicate with call:'),
   time(test_is_a_list_2(ListSize),Count,InternalCount),nl,
   %
   write('Meta-predicate with call:'),
   time(test_map_list_1(ListSize),Count,InternalCount),nl,
   %
   write('Meta-predicate with hook:'),
   time(test_map_list_2(ListSize),Count,InternalCount).

test1 :- test_all(1000,10,10).
test2 :- test_all(10000,10,10).

These results were obtained on a Sun UltraSPARC 1 with Quintus Prolog Release 3.2 (Sun 4, SunOS 5.4).

[Ed. Only total times are shown due to space limitations.]

Test 1

Non-meta predicate:
Time: 220 msec total (220 compute + 0 overhead + 0 gc + 0 shift)

Non-meta predicate with call:
Time: 3670 msec total (3490 compute + 0 overhead + 180 gc + 0 shift)

Meta-predicate with call:
Time: 3890 msec total (3780 compute + 0 overhead + 110 gc + 0 shift)

Meta-predicate with hook:
Time: 330 msec total (280 compute + 0 overhead + 50 gc + 0 shift)

Test 2

Non-meta predicate:
Time: 2870 msec total (2300 compute + 0 overhead + 570 gc + 0 shift)

Non-meta predicate with call:
Time: 36550 msec total (34850 compute + 0 overhead + 1700 gc + 0 shift)

Meta-predicate with call:
Time: 39250 msec total (37530 compute + 0 overhead + 1720 gc + 0 shift)

Meta-predicate with hook:
Time: 3230 msec total (2810 compute + 0 overhead + 420 gc + 0 shift)


ok@goanna.cs.rmit.edu.au
Richard A. O'Keefe
28th November 1996

John Griffith writes:
I have never seen ppl.c, is this part of Quintus' source code?

It is a separate utility, for "pretty-printing" other people's code. It is not part of the Prolog system itself. The guts of the tokeniser can be found (with some changes) in XSB.

He continues:
I assumed that we only received binaries, and that the source was unavailable.

Much of a QP distribution is sources of one kind or another. You certainly get full library sources.

He continues:
I used Quintus Prolog Release 3.2 (Sun 4, SunOS 5.4). I have no idea how call/1 is implemented there. I said call/1 because the only call/N that I am aware of is (I believe) built in terms of call/1. So call/N can be no faster than call/1.

You are confusing an interface with an implementation. The QP3.2 implementation of call/N used call/1, but that is not the only possible implementation.

I repeat: it is possible to implement call/N so that the overhead is:

Clearly this cannot be as fast as a direct call, and this is true in any language. It can, however, be a lot faster than an implementation layered on top of call/1.

He continues:
The tests...

are completely irrelevant because they measure one, old, implementation which was known to be inefficient at the time. Other implementations are possible (NU Prolog uses one, I've benchmarked another).

He writes:
When I first performed these tests, I really wanted to see how the "hook" method compared.

The hook method you describe should be slightly slower than an efficient implementation of call/N.


fjh@cs.mu.oz.au
Fergus Henderson
25th November 1996

John Griffith writes:
The results were positive: the hook method is around 20% slower than the specialized predicate while the true meta-predicates with call are over an order of magnitude slower.

With Mercury I get quite different results. With full optimization enabled, all four methods compiled down to exactly the same code for the inner loop, and so the speed was of course the same for all of them, modulo timing noise.

He continues:
However, I'm now interested in just compiling out the meta-predicate and ending up with the specialized predicate.

That's what the Mercury compiler does, whenever it can.

He continues:
Of course one nice advantage of meta-predicates is the smaller code space. If the overhead of call can be significantly reduced then there doesn't seem to be any reason not to use real meta-predicates.

Mercury's type/mode/determinism systems allow call/N to be compiled quite efficiently. If I disable inlining and specialization of higher-order predicates, then the version using call/2 is only about 33% slower than the version using a direct call to is_a/1.


lee@cs.mu.oz.au
Lee Naish
27th November 1996

In NU-Prolog, call/N is not built in terms of call/1, and is faster.

Adding to your benchmarks:

is_a_list_3([]).
is_a_list_3([F|R]) :-
   call(is_a, F), is_a_list_3(R).

Times were (approximately):

Non-meta predicate: 2 secs
Non-meta predicate with call/1: 11 secs
Non-meta predicate with call/2: 2.4 secs

Why is call/N faster? It doesn't handle conjunctions and other constructs which may contain cut, amongst other things.
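
(A small illustration of the difference, with made-up predicates:)

small(1). small(2). small(3).

% call/1 must cope with an arbitrary body, e.g. a conjunction containing
% a cut, so it has to act as a little interpreter:
% ?- call(( small(X), X > 1, ! )).
%
% call/2 only appends an argument to the closure and then runs the plain
% goal small(2):
% ?- call(small, 2).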


griffith@sfs.nphil.uni-tuebingen.de
John Griffith
28th November 1996

So does this mean that call/1 is not an instance of call/N, i.e. N>1? Maybe you should implement call/1 in terms of call/N? :-)


ok@goanna.cs.rmit.edu.au
Richard A. O'Keefe
3rd December 1996

Actually, there is no reason why call/1 should not be an instance of call/N. However, call/1 can be given forms that contain cuts and other oddities, so it needs to provide a full interpreter. Quintus Prolog, for example, has a call/1 that looks roughly like this (grossly simplified):

call(Term) :-
   if_error(E,
      check_and_normalise(Term, Debuggable_Form),
      report_error_and_fail(E)
   ),
   interpret(Debuggable_Form).

where check_and_normalise/2 has a lot in common with the first pass of the compiler. There could be a distinction between the data structures used for "fast" interpretation and the data structures used for "debugging" interpretation, but it happens that there isn't, so any call/1 pays the price for debugging.

For simple calls, like call(X is E), most of the time is overhead. It is possible to introduce this overhead in a controlled way by having special definitions for the control structures:

','(A, B) :- full_call((A , B)).

';'(A, B) :- full_call((A ; B)).

'->'(A, B) :- full_call((A -> B)).

'\+'(A) :- full_call(\+(A)).

and letting call/N invoke these predicates. This would put the cost of a call/1 involving a control structure up slightly, while putting the cost of all other call/1s down quite a bit.

The Quintus 'meta_predicate' declarations, which are really weak type declarations, provide information that the compiler can use to support call/N very efficiently. (Which is not, alas, to say that the QP compiler does do this, but if AIL would like me to do that for them, I would be delighted.)
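
(For example, in the Quintus convention a declaration for the map_list_1/2 predicate from the benchmark earlier in this thread would look like the following; the ':' marks the argument that names a goal and is therefore module-sensitive. Whether a particular compiler actually specialises on such declarations is, as noted, another matter.)

:- meta_predicate map_list_1(+, :).

% With this declaration a compiler knows that the second argument of
% map_list_1/2 is (ultimately) called as a goal, so in principle it can
% generate a direct call whenever that argument is known at compile time.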


griffith@sfs.nphil.uni-tuebingen.de
John Griffith
28th November 1996

I wrote:
However, I'm now interested in just compiling out the meta-predicate and ending up with the specialized predicate.

Fergus Henderson writes:
That's what the Mercury compiler does, whenever it can.

Does it create a large increase in code size? Is there any way to share intermediate code between specializations? Suppose you have N specializations of some system of predicates which traverse some complicated data structure, and the call() is M levels of predicates deep. For instance, p1/1 through pm/1 are the M predicates and q1/1 through qn/1 are the N specializations:

p1(M) :- p2(M).
p2(M) :- p3(M).
...
pm(M) :- call(M).

q1 :- p1(foo1).
q2 :- p1(foo2).
...
qn :- p1(foon).

Do you get N*M predicates resulting from the original N+M? Or is there some way to avoid this?


fjh@cs.mu.oz.au
Fergus Henderson
29th November 1996

John Griffith writes:
Does it create a large increase in code size?

No, generally not.

Anyway, increase compared to what? With a system that didn't optimize higher-order code the way that Mercury does, usually the programmer would have written lots of different first-order versions. So in fact the presence of this optimization usually has almost no effect on the efficiency of the object code - what it affects is how people write their source code.

He continues:
Is there any way to share intermediate code between specializations?

For the rare cases in which specialization is not desired, you can disable the specialization using the '--no-optimize-higher-order' option to the Mercury compiler. But in practice I haven't found any need to use this.


griffith@sfs.nphil.uni-tuebingen.de
John Griffith
29th November 1996

I wrote:
Does it create a large increase in code size?

FH (Fergus Henderson) writes:
No, generally not.

I suspect that most programs are not of the "exponential explosion" type, but what about such programs?

FH continues:
Anyway, increase compared to what? With a system that didn't optimize higher-order code the way that Mercury does, usually the programmer would have written lots of different first-order versions.

Agreed. I meant in comparison with non-specialized meta-predicates that really use something like call/N or apply/3 underneath. There must be a trade-off.


griffith@sfs.nphil.uni-tuebingen.de
John Griffith
28th November 1996

Richard A O'Keefe wrote:
The tests are completely irrelevant...

The tests are relevant to me since I am currently working with QP3.2.

I would prefer to use Mercury, but we have lots of legacy code and several other developers whom I do not think I can convince to use Mercury.

I'll just have to fantasize about strong typing, etc. for now :-)


ok@goanna.cs.rmit.edu.au
Richard A. O'Keefe
3rd December 1996

John Griffith writes:
I'll just have to fantasize...

Why fantasize? Why not use the DEC-10 Prolog type checker? Before they were taken over by Intergraph, Quintus were planning to beef it up a bit and make it understand modules.


Gregor.Meyer@FernUni-Hagen.de
Gregor Meyer
4th December 1996

Richard A. O'Keefe wrote:
Why fantasize?

I like Richard's type checker, but it lacks subtypes. In fact, most type systems, as far as they are implemented, do not allow subtyping. My own work on type checking Prolog resulted in a system called 'Typical for Annotated Prolog'. It allows parametric polymorphism and subtypes. One of the design goals was to type check full Prolog without restricting the programmer to a (clean) subset of Prolog (or some new language). Further information on the prototype implementation is available at:
http://www.fernuni-hagen.de/pi8/typical/

I plan to support modules in Typical in the future.
