*** Inductive Classification Logic ***

ICL: Overview of the settings

Status: public -- Last revision: February 10, 1998

ICL has several parameters that can be changed by the user, by means of the so-called settings of ICL. Each setting has a default value; the user can change this value for an application in the settings file S.

Each setting (parameter) has a name and a value. By putting the fact 'name(value)' in the settings file S, one can give the setting 'name' the new value 'value'. The S file will be loaded whenever a new configuration is initialised (this is at startup, and after the command new_config). One can also explicitly ask for reloading the settings file with the interactive command load_settings.
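As an illustration, a settings file S could contain facts like the following (the setting names and value syntax are taken from the tables below; the particular values are made up for the example):

```prolog
% example settings file S: each fact 'name(value)' gives setting 'name' the value 'value'
classes([pos, neg]).        % the classes to learn
language(dnf).              % learn a DNF theory
heuristic(m_estimate).      % guide the search with the m-estimate
beam_size(5).               % keep at most 5 rules in the beam
talking(2).                 % moderate amount of screen output
```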

The settings can be set to their system defaults with the command set_default_settings at the ICL prompt. One can interactively ask for the current settings with the command show_settings. A similar command is show_info, which gives information on the user, the date, the files, the settings,...

The settings are split into several groups: knowledge, language, heuristics, search, misc and advanced. We discuss each group below. (See also the file default_settings.pro in the source files.)

Knowledge

classes
    values:  a list of classes (>= 1)
    default: classes([pos, neg]).
    - each class is a test, used to decide whether a model/example belongs to that class
    - a test can be any query, like pos, father(luc, X), ...
    - each example should belong to exactly one class

leave_out
    values:  a test
    default: leave_out(false).
    - if the test succeeds in an example, it is not considered a training example
    - examples left out during training (learning) can be used as test examples

Language

language
    values:  cnf / dnf
    default: language(dnf).
    type of language

bias
    values:  dlab
    default: bias(dlab).
    type of specification (declarative bias)

maxhead
    values:  N >= 0
    default: maxhead(10).
    - dnf: not used
    - cnf: maximum number of literals in the head of a clause

maxbody
    values:  N >= 0
    default: maxbody(10).
    - dnf: maximum number of literals (both positive and negative)
    - cnf: maximum number of literals in the body of a clause

types
    values:  on / off
    default: types(off).

modes
    values:  on / off
    default: modes(off).

simplify
    values:  on / off
    default: simplify(on).
    - on: simplification of rules for testing during the learning process; useful for attribute-value (AV) problems, rules with several non-linked parts, ...
    - off: no simplification of rules (if rules are very relational)

multi_prune
    values:  on / off
    default: multi_prune(on).
    prune the rules for the separate classes when merging them into a multi-class theory

multi_test
    values:  bayes / cn2
    default: multi_test(bayes).
    how to test a multi-class theory:
    - cn2: same procedure as in CN2 (adding absolute values)
    - bayes: applying naive Bayes for classification
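The two multi-class testing schemes can be pictured with the following sketch. This is an illustrative reconstruction, not ICL's actual code: it assumes each rule stores its per-class training coverage, and that "adding absolute values" means summing those counts across all rules that fire on an example.

```python
def classify_cn2(covering_rules, num_classes):
    """CN2-style vote: sum the per-class training-coverage counts of all
    rules that fire on the example; predict the class with the largest sum."""
    totals = [0] * num_classes
    for dist in covering_rules:           # dist[c] = covered training examples of class c
        for c, count in enumerate(dist):
            totals[c] += count
    return max(range(num_classes), key=totals.__getitem__)

def classify_bayes(covering_rules, class_totals):
    """Naive-Bayes-style: treat each firing rule as an independent event and
    multiply the class prior by P(rule fires | class), estimated from coverage."""
    total = sum(class_totals)
    scores = []
    for c, cls_total in enumerate(class_totals):
        score = cls_total / total         # class prior
        for dist in covering_rules:
            score *= dist[c] / cls_total  # estimated P(rule fires | class c)
        scores.append(score)
    return max(range(len(class_totals)), key=scores.__getitem__)
```

With two firing rules covering [5 pos, 1 neg] and [2 pos, 4 neg] respectively, both schemes predict class 0 here, but they can disagree when rule coverages are skewed.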

Heuristics

heuristic
    values:  laplace / m_estimate / m_estimate(M)
    default: heuristic(m_estimate).
    - heuristic used to guide the search
    - if M is omitted in m_estimate, M = number of classes

significance_level
    values:  0.995 / 0.99 / 0.98 / 0.95 / 0.90 / 0.80 / 0.0
    default: significance_level(0.90).
    - specifies the confidence level (as a percentage) for the significance test
    - a higher percentage will prune more rules

min_coverage
    values:  N >= 1
    default: min_coverage(1).
    - dnf: number of positive examples that the rules must cover
    - cnf: number of negative examples that the rules must cover

min_accuracy
    values:  0.0 =< N =< 1.0
    default: min_accuracy(0.0).
    minimal accuracy for each individual rule
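For illustration, the Laplace and m-estimate heuristics are commonly defined as follows in rule learning. This is a sketch, not ICL's source: p and n are the numbers of positive and negative examples a rule covers, prior is the prior probability of the target class, and the default for M follows the table above.

```python
def laplace(p, n, num_classes=2):
    """Laplace estimate: (p + 1) / (p + n + C) for C classes."""
    return (p + 1) / (p + n + num_classes)

def m_estimate(p, n, prior, m=None, num_classes=2):
    """m-estimate: (p + m * prior) / (p + n + m).
    If M is omitted, it defaults to the number of classes,
    matching the behaviour of heuristic(m_estimate)."""
    if m is None:
        m = num_classes
    return (p + m * prior) / (p + n + m)
```

Both estimates shrink the raw accuracy p / (p + n) towards the prior, which penalises rules that cover very few examples.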

Search

search
    values:  beam
    default: search(beam).

beam_size
    values:  N > 0
    default: beam_size(5).
    the maximum number of rules to be kept in the beam
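The search strategy can be pictured with a generic beam-search sketch (illustrative only; the refine and score functions stand in for ICL's rule refinement operator and heuristic):

```python
def beam_search(initial, refine, score, beam_size=5, max_steps=10):
    """Generic beam search: at each step, refine every candidate in the beam
    and keep only the beam_size best refinements; remember the best candidate
    seen overall."""
    beam = [initial]
    best = initial
    for _ in range(max_steps):
        candidates = [c for r in beam for c in refine(r)]
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_size]
        if score(beam[0]) > score(best):
            best = beam[0]
    return best
```

A larger beam_size makes the search less greedy at the cost of more work per step; beam_size(1) degenerates to hill climbing.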

Misc

max_real_time
    values:  0 / Time > 0
    default: max_real_time(0).
    set an alarm (when the value is 0, no alarm is set)

talking
    values:  0 / 1 / 2 / 3 / 4
    default: talking(2).
    - 0 prints almost no info to the screen
    - 4 prints all available information during learning

calc_stats
    values:  1 / 2 / 3
    default: calc_stats(2).
    for which theories to calculate statistics (also used for cross-validation!):
    - 1: only multi-theories
    - 2: multi-theories, plus class-theories if there are only 2 classes
    - 3: all theories

cv_sets
    values:  N > 0, or a list of tests (one for each set), or a list of lists of model identifiers
    default: cv_sets(10).
    specifies the sets for cross-validation
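When cv_sets is given a number N, the examples have to be split randomly into N sets. A minimal sketch of such a seeded split (a hypothetical helper, not ICL's actual code; see also cv_seed under the advanced settings):

```python
import random

def cv_split(model_ids, n_sets, seed):
    """Shuffle the model identifiers with a fixed seed, then deal them
    round-robin into n_sets folds, so fold sizes differ by at most one.
    The same seed always yields the same split."""
    rng = random.Random(seed)
    ids = list(model_ids)
    rng.shuffle(ids)
    return [ids[i::n_sets] for i in range(n_sets)]
```

Fixing the seed makes a cross-validation run reproducible, which matters when comparing settings across runs.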

Advanced

talking_rule
    values:  0 / 1 / 2
    default: talking_rule(1).
    - 0: nothing
    - 1: only the rule string
    - 2: string and rule info

talking_info
    values:  0 / 1 / 2 / 3 / 4 / 5
    default: talking_info(3).
    - 0: nothing
    - 1: heuristic value
    - 2: 1 + consumed cpu-time
    - 3: 2 + local/total info
    - 4: 3 + array (internal data)
    - 5: 4 + list of models

talking_pruning
    values:  0 / 1
    default: talking_pruning(1).
    - 0: everything
    - 1: everything except the pruning of the language

fair
    values:  yes / no
    default: fair(yes).

stats_level
    values:  1 / 2
    default: stats_level(1).
    - 1: without the complete list of examples
    - 2: with the complete list (can be huge!)

beam_pruning
    values:  1 / 2 / 3
    default: beam_pruning(1).
    prune duplicate rules which:
    - 1: are syntactically the same
    - 2: cover the same examples
    - 3: cover the same number of examples

sign_test
    values:  lg / ll / gg
    default: sign_test(gg).
    how to compute the significance of a rule:
    - lg: local (on the reduced set) to global (on all examples)
    - ll: local to local
    - gg: global to global

cv_seed
    values:  N (integer)
    default: cv_seed(-231429171).
    seed used for randomly splitting the sets for cross-validation
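The significance test controlled by significance_level and sign_test is, in rule learners of the CN2 family, usually a likelihood-ratio statistic; the following is a sketch under that assumption, not ICL's actual code:

```python
import math

def likelihood_ratio(covered, class_totals):
    """Likelihood-ratio statistic 2 * sum_i f_i * ln(f_i / e_i), where f_i is
    the number of covered examples of class i and e_i the number expected if
    the rule picked examples at random. The result is compared against a
    chi-square threshold derived from the confidence level: a higher level
    means a higher threshold, so more rules are pruned as insignificant."""
    total = sum(class_totals)
    covered_total = sum(covered)
    lrs = 0.0
    for f, cls_total in zip(covered, class_totals):
        e = covered_total * cls_total / total   # expected coverage of this class
        if f > 0:
            lrs += f * math.log(f / e)
    return 2.0 * lrs
```

The lg/ll/gg variants would then differ only in which example sets the observed and expected counts are computed on (the reduced local set or all examples).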


KULeuven - Computerwetenschappen Copyright 1998, Katholieke Universiteit Leuven, dept. Computerwetenschappen
Information provider: KULeuven dept. Computerwetenschappen
Comments for the authors: Wim Van Laer
Page design: Wim Van Laer
URL: http://www.cs.kuleuven.ac.be/settings.html