SCIP comes with a set of useful tools that allow you to perform automated tests. The following is a step-by-step guide, from setting up the test environment to the evaluation and customization of test runs.
Setting up the test environment
First, you should create a file listing all problem instances that should be part of the test. This file has to be located in the directory scip/check/testset/ and has to have the file extension .test, e.g., testrun.test, in order to be found by the scip/check/check.sh script.
All test problems can be listed in the test-file by a relative path, e.g., ../../problems/instance1.lp, or an absolute path, e.g., /home/problems/instance2.mps. Only one problem should be listed on each line (since the command cat is used to parse this file). Note that these problems have to be readable by SCIP in order to solve them. However, you can use different file formats.
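A minimal sketch of how testrun.test could look, using the two example paths from above (one instance per line, nothing else):

../../problems/instance1.lp
/home/problems/instance2.mps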
Optionally, you can provide a solution file in the scip/check/testset/ directory containing known information about the feasibility and the best known objective values for the test instances. SCIP can use these values to verify its results. The file has to have the same basename as the .test-file, i.e., in our case testrun.solu. Each line can only contain information about one test instance. A line has to start with the type of information given:
- =opt= stating that a problem name with an optimal objective value follows
- =best= stating that a problem name with a best known objective value follows
- =inf= stating that a problem name follows which is infeasible
With these information types, you can encode the following information for an instance named instance1.lp (the corresponding solu-file lines are sketched below):
- The instance has a known optimal (objective) value of 10.
- The instance has a best known solution with objective value 15.
- The instance is feasible (but has no objective function or we don't know a solution value)
- The instance is infeasible.
If you don't know whether the instance is feasible or not (so the status is unknown), you can omit the instance in the solu-file or mark its status explicitly as unknown.
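A sketch of the corresponding lines in testrun.solu; each line shows one of the alternative encodings listed above, and the =feas= and =unkn= markers for the feasible and unknown cases are an assumption about the solu-file format that is not covered by the type list above:

=opt=  instance1 10
=best= instance1 15
=feas= instance1
=inf=  instance1
=unkn= instance1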
Note that in all lines the file extension of the file name is omitted.
See the files scip/check/testset/short.test and scip/check/testset/short.solu for an example of a test-file and its corresponding solu-file.
Starting a test run
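To solve all instances of our test set with default settings, we invoke the test target and pass the basename of the test-file via the TEST variable (a minimal sketch, analogous to the SETTINGS variant shown later in this guide); that is, we call

make TEST=testrun test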
in the SCIP root directory. Note that testrun is exactly the basename of our test-file (testrun.test). This will cause SCIP to solve our test instances one after another and to create various output files (see Evaluating a test run).
Evaluating a test run
During computation, SCIP automatically creates the directory scip/check/results/
(if it does not already exist) and stores the following output files there.
- *.out - output of stdout
- *.err - output of stderr
- *.set - copy of the used settings file
- *.res - ASCII table containing a summary of the computational results
- *.tex - TeX table containing a summary of the computational results
- *.pav - PAVER output
The last three files in the above list, i.e., the files containing a summary of the computational results, can also be generated manually. To do so, the user has to call the evalcheck.sh script in the check directory with the corresponding out-file as argument. For example, this may be useful if the user stopped the test before it was finished, in which case the last three files will not be automatically generated by SCIP.
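A sketch of such a call, using the hypothetical out-file name that is constructed later in this section:

cd check
./evalcheck.sh results/check.testrun.scip-1.1.0.linux.x86.gnu.opt.spx.mycomputer.default.out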
The last column of the ASCII summary table contains the solver status. We distinguish the following statuses (in order of priority):
- abort: solver broke before returning solution
- fail: solver cut off a known feasible solution (the value in the solu-file is beyond the dual bound; especially if the problem is claimed to be solved but the solution is not the optimal solution)
- ok: solver solved problem with the value in solu-file
- solved: solver solved problem which has no (optimal) value in solu-file (since we here cannot detect the direction of optimization, it is possible that a solver claims an optimal solution which contradicts a known feasible solution)
- better: solver found solution better than known best solution (or no solution was noted in the solu-file so far)
- gaplimit, sollimit: solver reached gaplimit or limit of number of solutions (at present: only in SCIP)
- timeout: solver reached any other limit (like time or nodes)
- unknown: otherwise
Additionally, the evalcheck.sh script can generate a solu-file by calling
./evalcheck.sh writesolufile=1 NEWSOLUFILE=<solu-file> <out-file>
where <solu-file> denotes the name of the new file to which the solutions shall be written and <out-file> denotes the output (.out) file(s) to evaluate.
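For example, to collect the solutions of our test run into a new file testrun_new.solu (the file names are illustrative):

./evalcheck.sh writesolufile=1 NEWSOLUFILE=testrun_new.solu results/check.testrun.scip-1.1.0.linux.x86.gnu.opt.spx.mycomputer.default.out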
Another feature can be enabled by calling:
./evalcheck.sh printsoltimes=1 ...
The output has two additional columns containing the solving time until the first and the best solution was found.
Note: The basename of all these files is the same and has the following structure which allows us to reconstruct the test run:
check.<test name>.<binary>.<machine name>.<setting name>
- <test name> indicates the name of the test file, e.g., testrun
- <binary> defines the used binary, e.g., scip-1.1.0.linux.x86.gnu.opt.spx
- <machine name> tells the name of the machine, e.g., mycomputer
- <setting name> denotes the name of the used settings, e.g., default means the (SCIP) default settings were used
Using the examples from the previous listing, the six files would be named:
check.testrun.scip-1.1.0.linux.x86.gnu.opt.spx.mycomputer.default.<out,err,set,res,tex,pav>
Using customized setting files
It is possible to use customized settings files for the test run instead of testing SCIP with default settings. These have to be placed in the directory scip/settings/.
Note: Accessing setting files in subfolders of the settings
directory is currently not supported.
To run SCIP with a custom settings file, say for example fast.set, we call
make TEST=testrun SETTINGS=fast test
in the SCIP root directory.
Advanced options
We can further customize the test run by specifying the following options in the make call (an example combining several of them follows the list):
- TIME - time limit for each test instance in seconds [default: 3600]
- NODES - node limit [default: 2100000000]
- MEM - memory limit in MB [default: 1536]
- DISPFREQ - display frequency of the output [default: 10000]
- FEASTOL - LP feasibility tolerance for constraints [default: "default"]
- LOCK - should the test run be locked to prevent other machines from performing the same test run [default: "false"]
- CONTINUE - continue the test run if it was previously aborted [default: "false"]
- VALGRIND - run valgrind on the SCIP binary; errors and memory leaks found by valgrind are reported as fails [default: "false"]
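For instance, a sketch of a call that tightens several of these limits at once (the values are illustrative):

make TEST=testrun SETTINGS=fast TIME=60 NODES=100000 MEM=2048 test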
Comparing test runs for different settings
Often test runs are performed on the basis of different settings. In this case, it is useful to have a performance comparison. For this purpose, we can use the allcmpres.sh script in the check directory.
Suppose we performed our test run with two different settings, say fast.set and slow.set. Assuming that all other parameters (including the SCIP binary) were the same, we may have the following res-files in the directory scip/check/results/:
check.testrun.scip-1.1.0.linux.x86.gnu.opt.spx.mycomputer.fast.res
check.testrun.scip-1.1.0.linux.x86.gnu.opt.spx.mycomputer.slow.res
For a comparison of both computations, we simply call
allcmpres.sh results/check.testrun.scip-1.1.0.linux.x86.gnu.opt.spx.mycomputer.fast.res \
results/check.testrun.scip-1.1.0.linux.x86.gnu.opt.spx.mycomputer.slow.res
in the check directory. This produces an ASCII table on the console that provides a detailed performance comparison of both test runs. Note that the first res-file serves as the reference computation. The following list explains the output. (The term "solver" can be considered as the combination of SCIP with a specific setting file.)
- Nodes - Number of processed branch-and-bound nodes.
- Time - Computation time in seconds.
- F - If no feasible solution was found, then '#', empty otherwise.
- NodQ - Equals Nodes(i) / Nodes(0), where 'i' denotes the current solver and '0' stands for the reference solver.
- TimQ - Equals Time(i) / Time(0).
- bounds check - Status of the primal and dual bound check.
- proc - Number of instances processed.
- eval - Number of instances evaluated (bounds check = "ok", i.e., solved to optimality within the time and memory limit and result is correct). Only these instances are used in the calculation of the mean values.
- fail - Number of instances with bounds check = "fail".
- time - Number of instances with timeout.
- solv - Number of instances correctly solved within the time limit.
- wins - Number of instances on which the solver won (i.e., the solver was at most 10% slower than the fastest solver OR had the best primal bound in case the instance was not solved by any solver within the time limit).
- bett - Number of instances on which the solver was better than the reference solver (i.e., more than 10% faster).
- wors - Number of instances on which the solver was worse than the reference solver (i.e., more than 10% slower).
- bobj - Number of instances on which the solver had a better primal bound than the reference solver (i.e., a difference larger than 10%).
- wobj - Number of instances on which the solver had a worse primal bound than the reference solver (i.e., a difference larger than 10%).
- feas - Number of instances for which a feasible solution was found.
- gnodes - Geometric mean of the processed nodes over all evaluated instances.
- shnodes - Shifted geometric mean of the processed nodes over all evaluated instances.
- gnodesQ - Equals gnodes(i) / gnodes(0), where 'i' denotes the current solver and '0' stands for the reference solver.
- shnodesQ - Equals shnodes(i) / shnodes(0).
- gtime - Geometric mean of the computation time over all evaluated instances.
- shtime - Shifted geometric mean of the computation time over all evaluated instances.
- gtimeQ - Equals gtime(i) / gtime(0).
- shtimeQ - Equals shtime(i) / shtime(0).
- score - N/A
- all - All solvers.
- optimal auto settings - Theoretical result for a solver that performed 'best of all' for every instance.
- diff - Solvers with instances that differ from the reference solver in the number of processed nodes or in the total number of simplex iterations.
- equal - Solvers with instances whose number of processed nodes and total number of simplex iterations is equal to the reference solver (including a 10% tolerance) and where no timeout occurred.
- all optimal - Solvers with instances that could be solved to optimality by all solvers; in particular, no timeout occurred.
Since this large amount of information is not always needed, one can generate a narrower table by calling:
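A sketch of the call; the short=1 option is an assumption about the script's interface (analogous to the printsoltimes=1 flag below) and should be checked against your version of allcmpres.sh:

allcmpres.sh short=1 ...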
where NodQ, TimQ and the additional comparison tables are omitted.
If the res-files were generated with the parameter printsoltimes=1, we can enable the same feature here as well by calling:
allcmpres.sh printsoltimes=1 ...
As in the evaluation, the output contains the two additional columns of the solving time until the first and the best solution was found.
Testing and Evaluating for other solvers
Analogously to the target test, there are further targets to run automated tests with other MIP solvers (a sketch of the presumed target names follows this list). These are:
- for cplex
- for gurobi
- for cbc
- for mosek
- for glpk
- for symphony
- for blis
- for gams
make testgams GAMSSOLVER=xyz
For this target, the option GAMSSOLVER has to be given to specify the name of a GAMS solver to run, e.g., GAMSSOLVER=SCIP. Additional advanced options specific to this target are:
- GAMS - the GAMS executable [default: gams]
- GAP - a gap limit [default: 0.0]
- CLIENTTMPDIR - directory where GAMS should put its scratch files [default: /tmp]
- CONVERTSCIP - a SCIP binary which can be used to convert non-GAMS files into GAMS format [default: bin/scip, if existing; set to "no" to disable conversion]
The following options are NOT supported (and ignored): DISPFREQ, FEASTOL, LOCK. A memory limit (MEM option) is only passed as a workspace option to GAMS, but not enforced via ulimit (it is up to the solver to regard and obey the limit).
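The make targets for the remaining solvers in the list are not spelled out here; judging from the testgams target above, they presumably follow the same test<solver> naming scheme (an assumption worth verifying against your SCIP Makefile):

make testcplex
make testgurobi
make testcbc
make testmosek
make testglpk
make testsymphony
make testblis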
Note: This works only if the referred programs are installed globally on your machine.
The above options like TIME are also available for the other solvers.
For cbc, cplex, gams, and gurobi another advanced option is available:
- THREADS - number of threads used in the solution process
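For example, a hypothetical CPLEX test run using four threads (relying on the testcplex target sketched above):

make TEST=testrun THREADS=4 testcplex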
After the test run there should be an .out, an .err and a .res file with the same basename as described above.
Furthermore, you can also use the script allcmpres.sh for comparing results of different solvers.