Ranking System

Ranking of decision problems

Decision problems come in quite different types, which favour solvers with quite different features. Since the goal of the competition is to be as informative as possible, it is useful to split the benchmarks into subcategories.

At this point, we classify the decision problems into three classes:

  1. P-class problems: problems in this class can be solved by a deterministic polynomial-time algorithm. Although they are simple, the challenge may lie in the size of the instances; therefore, solvers with a good grounder are favoured.
  2. NP-class problems: includes all "real" NP search problems, i.e. those not in the P class. Many teams specialize in this class. StrategicCompany is the only real search problem that does not belong to it, as it lies at the next level of the polynomial hierarchy.
  3. NP^NP-class: includes all decision problems (including those in the P class and StrategicCompany). The goal of this category is to rank solvers by the breadth of their applicability, which is clearly a desirable feature and deserves to be measured.

For each of these categories, there will be a separate ranking. The philosophy of the ranking is that the solver that solves the most problems wins.

For the decision problems, the ranking system is similar to that used for the first ASP competition. For each problem instance of each benchmark, the contestants are given a time bound of 600 seconds to solve the instance once. Each participant is awarded points according to the total number of instances solved within the time bound. Ties are resolved by comparing actual running times, where timeouts and UNKNOWN answers are counted as the full time bound.

Details

For each benchmark, a solver is awarded points as follows:

  • A solver gets zero points if it makes an error on any of the instances of the benchmark.
  • Otherwise, the solver is awarded 20·S/N points, where S is the number of instances it solves and N is the total number of instances of the benchmark that were solved by at least one solver.
This boils down to a weighted sum of the number of solved instances. The weights are introduced to prevent benchmarks with many instances from dominating the competition.
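
As a concrete illustration, the following Python sketch applies this rule to one benchmark, including the tie-breaking by running time described above. The data model (one list of (status, seconds) pairs per solver) and all names are illustrative assumptions, not the official competition scripts.

    # Illustrative sketch of the decision-problem scoring rule; the data
    # model is hypothetical, not the official competition scripts.

    TIME_BOUND = 600  # seconds per instance

    def benchmark_scores(results):
        """results: {solver: [(status, seconds), ...]}, one pair per
        instance; status is 'SOLVED', 'UNKNOWN', 'TIMEOUT' or 'ERROR'."""
        num_instances = len(next(iter(results.values())))
        # N = number of instances solved by at least one solver.
        n = sum(1 for i in range(num_instances)
                if any(runs[i][0] == 'SOLVED' for runs in results.values()))
        scores = {}
        for solver, runs in results.items():
            if any(status == 'ERROR' for status, _ in runs):
                scores[solver] = 0.0  # an error forfeits the whole benchmark
            else:
                s = sum(1 for status, _ in runs if status == 'SOLVED')
                scores[solver] = 20.0 * s / n if n else 0.0
        return scores

    def tie_break_time(runs):
        # Timeouts and UNKNOWN answers count as the full time bound.
        return sum(sec if status == 'SOLVED' else TIME_BOUND
                   for status, sec in runs)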

Ranking of optimization problems

The ranking of solvers on the optimization problems depends on the quality of the returned solution. Solvers are awarded points proportional to the quality of the returned solution relative to the best solution found. Ties are resolved according to actual running time. Extra points are given to solvers that correctly report OPTIMUM FOUND.

Details

A solver can earn up to 100 points per instance of a benchmark, awarded as follows:

  • Zero points for all instances if the solver makes an error on any instance of the benchmark.
  • If the solver correctly produces UNSATISFIABLE, it gets 100 points.
  • If the solver produces a correct witness, it gets 25 points.
  • The next 50 points are given for the quality of the solution. Let M be the quality of the best answer produced by any solver for this instance. If the quality of the solution produced by a solver is Q, this solver is awarded 50·(Q/M) points.
  • The final 25 points can be gained by correctly outputting OPTIMUM FOUND.
The ranking is made by a weighted sum of the earned points. The weight per instance is 1/N, where N is the number of solved instances of the benchmark, so that benchmarks with many instances do not dominate the competition.
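
The per-instance point scheme above can be summarised in a short sketch. The names and data model are again illustrative assumptions; in particular, the sketch assumes that larger quality values are better, so that Q/M never exceeds 1.

    # Illustrative sketch of the optimization scoring rule (hypothetical
    # names; assumes larger quality values are better, so Q/M <= 1).

    def instance_points(status, quality, best_quality, optimum_claimed):
        """status: 'UNSAT', 'WITNESS' or 'NONE'; answers are assumed to
        have been verified upstream."""
        if status == 'UNSAT':
            return 100.0                 # correct UNSATISFIABLE answer
        if status != 'WITNESS':
            return 0.0                   # no correct answer produced
        points = 25.0                    # correct witness
        points += 50.0 * quality / best_quality  # quality fraction Q/M
        if optimum_claimed:              # verified OPTIMUM FOUND claim
            points += 25.0
        return points

    def benchmark_score(points_per_instance, n_solved):
        # Weight 1/N per instance, with N the number of solved instances,
        # so that large benchmarks do not dominate.
        return sum(points_per_instance) / n_solved if n_solved else 0.0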

Global ranking

The rankings of the two subcategories of benchmarks are combined into the global ranking. This is done with a weighted sum of the points obtained in the two subcategories; the weights are chosen so that both categories of the competition carry equal importance.
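
The text fixes no concrete weights, but one simple way to give both categories equal importance is to normalise each subcategory score by the maximum score attainable in that subcategory and average the two, as in this illustrative sketch:

    # Hypothetical equal-importance combination of the two subcategories.
    def global_score(dec_pts, max_dec_pts, opt_pts, max_opt_pts):
        return 0.5 * dec_pts / max_dec_pts + 0.5 * opt_pts / max_opt_pts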

Policy in case of errors

A solver that returns an incorrect answer for an instance of a particular benchmark (e.g. answering UNSATISFIABLE for a satisfiable problem instance, returning an erroneous witness, or answering OPTIMUM FOUND when the generated witness is not optimal) receives zero points for all instances of that benchmark. It is not disqualified from the competition.

It should be noted that some sorts of answers are intractable to verify. Pragmatically, if a solver returns UNSATISFIABLE for an instance and no solver finds a witness for it, the answer is considered correct. Similarly, if a solver returns OPTIMUM FOUND for an optimization instance and no solver finds a better witness, the answer is considered correct.
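
These pragmatic rules amount to cross-checking every claim against the answers of all other solvers. A minimal sketch, with hypothetical helper names (and the same larger-is-better quality assumption as above):

    def unsat_claim_accepted(witnesses):
        # UNSATISFIABLE is accepted if no solver found a witness
        # for the instance.
        return len(witnesses) == 0

    def optimum_claim_accepted(claimed_quality, all_qualities):
        # OPTIMUM FOUND is accepted if no solver found a strictly
        # better witness.
        return all(q <= claimed_quality for q in all_qualities)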