
Competition Phase

The competition phase begins on 15th May 2009. It is subdivided into two stages:

Calibration

The competition begins with a short calibration period in which we determine the benchmark instances to use in the competition. Instances should be neither too difficult nor too easy: neither extreme is useful for ranking the solvers, and overly difficult instances will lead to lengthy timeouts on all solvers, considerably slowing down the execution of the competition.
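
To illustrate the idea, such a selection criterion could look roughly like the following Python sketch. The thresholds, the data layout, and the function names are assumptions made for illustration only, not part of the actual competition software:

    # Hypothetical calibration sketch: keep only instances whose solving
    # times fall between an "easy" cutoff and the competition timeout.
    EASY_CUTOFF = 5.0    # seconds; assumed value for illustration
    TIMEOUT = 600.0      # seconds; assumed value for illustration

    def calibrate(instances, solve_times):
        """solve_times maps an instance name to a list of per-solver runtimes."""
        selected = []
        for instance in instances:
            times = solve_times[instance]
            # Too easy: every solver finishes almost immediately.
            if max(times) < EASY_CUTOFF:
                continue
            # Too hard: every solver runs into the timeout.
            if min(times) >= TIMEOUT:
                continue
            selected.append(instance)
        return selected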

Bugs or errors discovered during this phase will be reported to the competitors, who will then get a brief period to fix them.

Execution of scripts

The competition progresses instance by instance. For each instance of a benchmark, the competition software calls the corresponding benchmark script of each competitor, in a random order. Each instance is executed exactly once. Tests are run on identical test machines that are isolated from the network. For each test, the participant's directory is copied from a master computer to a clean test machine, the script is called with the instance, the output and timing are recorded, and the test machine is cleaned afterwards to prevent pollution of the test computers.
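
A minimal sketch of one such test cycle is given below, in Python. The directory paths, the benchmark script name, and the timeout value are assumptions for illustration; they do not describe the actual competition software:

    import shutil
    import subprocess
    import time

    MASTER_DIR = "/master/participant"  # hypothetical master copy of the participant's directory
    TEST_DIR = "/test/participant"      # hypothetical working directory on the test machine

    def run_test(instance_path, timeout=600):
        # Copy the participant's directory from the master to a clean test machine.
        shutil.copytree(MASTER_DIR, TEST_DIR)
        try:
            start = time.time()
            # Call the benchmark script with the instance; record output and timing.
            result = subprocess.run(
                [TEST_DIR + "/benchmark.sh", instance_path],
                capture_output=True, timeout=timeout)
            elapsed = time.time() - start
            return result.stdout, elapsed
        finally:
            # Clean the test machine afterwards to prevent pollution.
            shutil.rmtree(TEST_DIR, ignore_errors=True)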

For repeatability reasons, your solutions are expected to be deterministic, i.e., re-executing a problem should result in the same execution and take a similar amount of time. This means that for stochastic solvers, the random seed for each benchmark should be fixed (e.g., hardcoded in the benchmark scripts).
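
For example, a benchmark script for a stochastic solver might hardcode the seed when invoking the solver, along the lines of this Python sketch (the solver name and its --seed flag are hypothetical):

    import subprocess
    import sys

    SEED = 42  # fixed, hardcoded seed so every re-execution behaves identically

    def main():
        instance = sys.argv[1]
        # Invoke the (hypothetical) stochastic solver with the fixed seed.
        subprocess.run(["mysolver", "--seed", str(SEED), instance])

    if __name__ == "__main__":
        main()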

After this process runs to completion, the results will be collected and analysed for publication. They will be made available on the webpage as soon as possible.