ICL has several parameters that can be changed by the user, by means of the so-called settings of ICL. Each setting has a default value, which the user can override for an application in the settings file S.
Each setting (parameter) has a name and a value. By putting the fact 'name(value)' in the settings file S, one gives the setting 'name' the new value 'value'. The S file is loaded whenever a new configuration is initialised (at startup, and after the command new_config). One can also explicitly reload the settings file with the interactive command load_settings.
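For illustration, a settings file S could contain facts such as the following. The setting names are real (they are discussed below); the chosen values are only an example:

```prolog
% Example settings file S: each fact name(value) overrides the
% default value of the setting 'name'.
classes([pos, neg]).       % two classes, tested by the queries pos and neg
language(dnf).             % learn a theory in DNF
heuristic(m_estimate(2)).  % m-estimate heuristic with M = 2
beam_size(10).             % keep at most 10 rules in the beam
talking(3).                % fairly verbose output during learning
```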
The settings can be reset to their system defaults with the command set_default_settings at the ICL prompt. The current settings can be inspected interactively with the command show_settings. A similar command is show_info, which gives information on the user, the date, the files, the settings, etc.
The settings are split into several groups: knowledge, language, heuristics, search, misc and advanced. We discuss them below (see also the file default_settings.pro in the source files). Each entry in the following table lists the setting, its possible values, its default, and a description:

setting | possible values | default | description |
classes | list of classes (>=1) | classes([pos, neg]). | each class is a test, deciding whether a model/example belongs to that class; a test can be any query, e.g. pos, father(luc, X),...; each example should belong to exactly one class |
leave_out | a test | leave_out(false). | if the test succeeds in an example, it is not considered a training example; examples left out during training (learning) can be used as test examples |
language | cnf or dnf | language(dnf). | type of language |
bias | dlab | bias(dlab). | type of specification (declarative bias) |
maxhead | N>=0 | maxhead(10). | dnf: not used; cnf: max. number of literals in the head of a clause |
maxbody | N>=0 | maxbody(10). | dnf: max. number of literals (both positive and negative); cnf: max. number of literals in the body of a clause |
types | on / off | types(off). | |
modes | on / off | modes(off). | |
simplify | on / off | simplify(on). | on: simplify rules for testing during the learning process (useful for attribute-value problems, rules with several non-linked parts, ...); off: no simplification of rules (use when rules are very relational) |
multi_prune | on / off | multi_prune(on). | prune the rules for separate classes when merging them into a multi-class theory |
multi_test | bayes / cn2 | multi_test(bayes). | how to test a multi-theory: cn2: same procedure as in CN2 (adding absolute values); bayes: apply naive Bayes for classification |
heuristic | laplace / m_estimate / m_estimate(M) | heuristic(m_estimate). | the heuristic used to guide the search; if M is omitted in m_estimate, M = number of classes |
significance_level | 0.995 / 0.99 / 0.98 / 0.95 / 0.90 / 0.80 / 0.0 | significance_level(0.90). | specifies the confidence level (as a percentage) for the significance test; a higher percentage will prune more rules |
min_coverage | N >= 1 | min_coverage(1). | dnf: minimum number of positive examples that the rules must cover; cnf: minimum number of negative examples that the rules must cover |
min_accuracy | 0.0 =< N =< 1.0 | min_accuracy(0.0). | minimal accuracy for each individual rule |
search | beam | search(beam). | |
beam_size | N > 0 | beam_size(5). | the maximum number of rules to be kept in the beam |
max_real_time | 0 / Time > 0 | max_real_time(0). | set an alarm (when the value is 0, no alarm is set) |
talking | 0 / 1 / 2 / 3 / 4 | talking(2). | 0: prints almost no info to the screen; 4: prints all available information during learning |
calc_stats | 1 / 2 / 3 | calc_stats(2). | for which theories to calculate statistics (also used for cross-validation!): 1: only multi-theories; 2: multi-theories + class theories if there are only 2 classes; 3: all theories |
cv_sets | N>0 / a list of tests (one per set) / a list of lists of model identifiers | cv_sets(10). | specifies the sets for cross-validation |
talking_rule | 0 / 1 / 2 | talking_rule(1). | 0: nothing; 1: only the rule string; 2: string and rule info |
talking_info | 0 / 1 / 2 / 3 / 4 / 5 | talking_info(3). | 0: nothing; 1: heuristic value; 2: 1 + consumed CPU time; 3: 2 + local/total info; 4: 3 + array (internal data); 5: 4 + list of models |
talking_pruning | 0 / 1 | talking_pruning(1). | 0: print everything; 1: do not print language pruning |
fair | yes/no | fair(yes). | |
stats_level | 1 / 2 | stats_level(1). | 1: no complete list of examples; 2: complete list (can be huge!) |
beam_pruning | 1 / 2 / 3 | beam_pruning(1). | prune duplicate rules which: 1: are syntactically identical; 2: cover the same examples; 3: cover the same number of examples |
sign_test | lg / ll / gg | sign_test(gg). | how to compute the significance of a rule: lg: local (on reduced set) to global (on all examples); ll: local to local; gg: global to global |
cv_seed | N (integer) | cv_seed(-231429171). | seed used for randomly splitting the sets for cross-validation |
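Putting some of the settings above together, cross-validation could for instance be configured in the settings file like this (the values are chosen purely for illustration):

```prolog
% Example cross-validation setup in the settings file S.
cv_sets(10).     % 10-fold cross-validation
cv_seed(12345).  % fixed seed, so the random split is reproducible
calc_stats(2).   % statistics for multi-theories (and class theories if only 2 classes)
```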
Copyright 1998, Katholieke Universiteit Leuven, dept. Computerwetenschappen. Information provider: KULeuven dept. Computerwetenschappen. Comments for the authors: Wim Van Laer. Page design: Wim Van Laer. URL: http://www.cs.kuleuven.ac.be/settings.html