SCIP is a solver for Mixed Integer Linear and Nonlinear Problems that allows for an easy integration of arbitrary constraints. It can be used as a framework for branch-cut-and-price and contains all necessary plugins to serve as a standalone solver for MIP and MINLP.
This FAQ contains separate sections covering each of these usages of SCIP. It further considers specific questions for some features.
SCIP is the right choice if you are either looking for a fast non-commercial MIP/MINLP solver or for a branch-cut-and-price framework in which you can directly implement your own methods and retain full control of the solving process.
As long as you use it for academic, non-commercial purposes: No. This will not change. For the other cases, check the explanation of the ZIB academic license and always feel free to ask us. If you want to use SCIP commercially, please write an e-mail to koch@zib.de.
An easy way is to use the SCIP-binaries and call SCIP from a shell, see here for a tutorial.
For that, you just have to download one of the precompiled binaries from the
download section, or the zipped source code and compile it
with your favorite settings. This is described in detail in the INSTALL
file in the SCIP main directory.
Another way is to use SCIP as a solver integrated into your own program source code. See the directories "examples/MIPsolver/" and "examples/Queens/" for simple examples, and see also this point.
A third way is to implement your own plugins into SCIP. This is explained in the HowTos for all plugin types, which you can find in the doxygen documentation. See also How to start a new project.
Unless you want to use SCIP as a pure CP-Solver (see here),
you need an underlying LP-Solver installed and linked to the libraries
(see the INSTALL
file in the SCIP root directory).
LP-solvers currently supported by SCIP are:
We also provide some precompiled binaries. Besides that, you might need a modeling language like ZIMPL to generate *.mps or *.lp files. ZIMPL files can also directly be read by SCIP. You can download a package which includes SCIP, SoPlex and ZIMPL here.
If you want to use SCIP for mixed integer nonlinear programming, you might want to use an underlying NLP solver (e.g., Ipopt). SCIP already comes with the expression interpreter CppAD as part of the source code.
Read the INSTALL
file in the SCIP root directory. It contains hints on how to get around problems. You can also try
the binaries available on the SCIP page.
If you want to use SoPlex as the underlying LP-solver, you can try the
following:
First, download the SCIP Optimization Suite. Then, extract the file, change into the scipoptsuite directory, and enter 'make'. As long as you have all the necessary libraries installed on your system, it should generate a SCIP binary linked to ZIMPL and SoPlex. The necessary system libraries are:
If you do not have all of these libraries, read the INSTALL
file
in the SCIP Optimization Suite directory.
In summary, the call make ZIMPL=false ZLIB=false READLINE=false should work on most systems.
If you have any problems while using an LP-solver different from SoPlex,
please read the SCIP INSTALL
file first.
If you encounter compilation problems dealing with the explicit keyword, you can either try a newer compiler or set the flags LEGACY=true for SoPlex and SPX_LEGACY=true for SCIP.
Maybe the parameters of a function in SCIP changed. Relevant changes between versions are listed below.
Compile SCIP in debug mode: make OPT=dbg. Run the binary in a debugger, e.g., gdb, and let it run again. If you get an impression of which component is causing the trouble, set #define SCIP_DEBUG as the first line of the corresponding *.c file, recompile, and let it run again. This will print debug messages from that piece of code.
Find a short debugging tutorial here.
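For illustration, a minimal sketch of enabling such debug output in a single source file; the function inspectProgress() and the printed message are purely illustrative:

/* SCIP_DEBUG must be defined before any SCIP header is included in this file */
#define SCIP_DEBUG
#include "scip/scip.h"

static
SCIP_RETCODE inspectProgress(SCIP* scip)
{
   /* SCIPdebugMessage() prints only if SCIP_DEBUG was defined when compiling this file */
   SCIPdebugMessage("processed %" SCIP_LONGINT_FORMAT " nodes so far\n", SCIPgetNNodes(scip));
   return SCIP_OKAY;
}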
See above. Often, the asserts that show up in debug mode already help to clarify misunderstandings and suggest fixes, if you were calling SCIP functions in an unexpected manner. For sending bug reports, please see our Contact information from which you can directly access an online form for reporting bugs.
For an explanation of the naming see the
coding style guidelines.
The Public C-API of SCIP is separated
into a Core API provided by the header scip.h and a default plugin API provided
by scipdefplugins.h. The large API is structured into topics for a better overview.
Yes. SCIP can be used as a pure CP/SAT solver by typing set emphasis cpsolver in the shell or by using the function SCIPsetEmphasis().
Furthermore, you can compile SCIP without any LP solver by make LPS=none.
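In C code, the same emphasis can be set as in the following sketch; the surrounding setup and the function name useCpEmphasis() are illustrative:

#include "scip/scip.h"
#include "scip/scipdefplugins.h"

static
SCIP_RETCODE useCpEmphasis(void)
{
   SCIP* scip = NULL;

   SCIP_CALL( SCIPcreate(&scip) );
   SCIP_CALL( SCIPincludeDefaultPlugins(scip) );

   /* switch all parameters to the CP solver emphasis; TRUE keeps the parameter changes quiet */
   SCIP_CALL( SCIPsetEmphasis(scip, SCIP_PARAMEMPHASIS_CPSOLVER, TRUE) );

   /* ... build and solve the problem ... */

   SCIP_CALL( SCIPfree(&scip) );
   return SCIP_OKAY;
}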
See here for more on changing the behavior of SCIP.
Since LPs are only special types of MIPs and CIPs, the principal answer is yes. If you feed a pure LP to SCIP, it
will first apply presolving and then hand this presolved problem to the underlying LP solver. If the LP is solved to
optimality, you can query the optimal solution values as always. You can also access the values of an optimal dual
solution by using display dualsolution.
However, there are certain limitations to this: Reduced costs are not accessible. If the LP turns out to be infeasible, you cannot currently obtain a Farkas proof. And recall that this approach is only meaningful if the problem is an LP (no integer variables, only linear constraints).
Hence, if you need more, "LP specific", information than the primal solution, you are better off using an LP-Solver directly. If you are using the SCIP Optimization Suite, you could, e.g., use the included LP solver SoPlex. If you want to solve an LP not from the command line, but within your C/C++ program, you could also use SCIP's LP-Interface, see also here.
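For completeness, a minimal sketch of reading and solving an LP file with SCIP from C; the file name "model.lp" and the function name solveLpFile() are placeholders:

#include "scip/scip.h"
#include "scip/scipdefplugins.h"

static
SCIP_RETCODE solveLpFile(void)
{
   SCIP* scip = NULL;

   SCIP_CALL( SCIPcreate(&scip) );
   SCIP_CALL( SCIPincludeDefaultPlugins(scip) );

   /* read the (placeholder) problem file, solve it, and print the best primal solution */
   SCIP_CALL( SCIPreadProb(scip, "model.lp", NULL) );
   SCIP_CALL( SCIPsolve(scip) );
   SCIP_CALL( SCIPprintBestSol(scip, NULL, FALSE) );

   SCIP_CALL( SCIPfree(&scip) );
   return SCIP_OKAY;
}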
SCIP supports nonlinear constraints of the form lhs ≤ f(x) ≤ rhs, where the function f(x) is an algebraic expression that can be represented as an expression tree. Such an expression tree has constants and variables as terminal nodes and operators as non-terminal nodes. Expression operators supported by SCIP include addition, subtraction, multiplication, division, exponentiation, and logarithm. Trigonometric functions are not yet supported by SCIP.
Nonlinear objective functions are not supported by SCIP and must be modeled by moving the nonlinear function into a constraint. Note that the support for non-quadratic nonlinear constraints is not yet as robust as the rest of SCIP. Missing bounds on nonlinear variables and tiny or huge coefficients can easily lead to numerical problems, which can be avoided by careful modeling.
See Makefiles.
In fact, there is a slight difference: SCIPvarGetSol() is also able to return pseudo solution values. If you do not know what pseudo solutions are, SCIPgetVarSol() should be just fine. This should be the only case of 'duplicate methods'. If you find another one, however, please contact us.
Yes, currently there are two tree visualizations supported, the vbctool, and the Branch-and-Bound Analysis Kit. The first comes with a viewer that has an option to uncover the nodes one-by-one (each time you hit the space key). Additional node information such as its lower bound, depth, and number are accessed through a context menu. The BAK is a command line tool written in the Python programming language. It offers several noninteractive visualizations of the tree at a user-defined frequency.
To use one of these tools, SCIP lets you define file names via set visual vbcfilename somefilename.vbc and set visual bakfilename somefilename.dat.
For those who want to use the step-by-step functionality of vbctool, it is necessary to use a time-step counter for the visualization instead of the real time; the corresponding parameter is changed via set visual realtime FALSE.
In the interactive shell, you can set the width of the output with the command set display width followed by an appropriate number. See also the next question.
Type display display in the interactive shell to get an explanation of them. By the way: if a letter appears in front of a display row, it indicates which heuristic found the new primal bound, with a star representing an integral LP relaxation.
Typing display statistics after finishing or interrupting the solving process gives you plenty of extra information about the solving process. (Typing display heuristics gives you a list of the heuristics including their letters.)
SCIP comes with default settings that are automatically active when you start the interactive shell.
However, you can save customized settings via the set save and set diffsave commands. Both commands will prompt you to enter a file name and save either all parameters or only those that differ from their defaults to the specified file.
A user parameter file that you save as "scip.set" has a special meaning: whenever you invoke SCIP from a directory containing a file named "scip.set",
the settings therein overwrite the default settings. For more information about customized settings, see the
Tutorial on the interactive shell.
Settings files can become incompatible with later releases if we decide to rename/delete a parameter.
Information about this can be found in the CHANGELOG for every release, see also this related question.
You can switch the settings for all presolving, heuristics, and separation plugins to three different modes via the set {presolving, heuristics, separation} emphasis parameters in the interactive shell. off turns off the respective type of plugins, fast chooses settings that lead to less time spent in this type of plugins, decreasing their impact, and aggressive increases the impact of this type of plugins. You can combine these general settings for cuts, presolving, and heuristics arbitrarily.
display parameters shows you which settings currently differ from their default, and set default resets them all.
Furthermore, there are complete settings that can be set by set emphasis, i.e., settings for pure feasibility problems, solution counting, and CP-like search.
You can look at the statistics (type display statistics in the interactive shell or call SCIPprintStatistics() when using SCIP as a library). This way you can see which of the presolvers, propagators, or constraint handlers performed the reductions.
Then, add a #define SCIP_DEBUG as the first line of the corresponding *.c file in src/scip (e.g., cons_linear.c or presol_probing.c), recompile, and run again. You will get heaps of information now.
Looking into the code and documentation of the corresponding plugin and resources from the literature helps with the investigation.
SCIP> set branching <name of a branching rule> priority 9999999
SCIP> set nodeselectors <name of a node selector> priority 9999999
SCIP> display branching
SCIP> display nodeselectors
SCIP> set heuristics <name of a heuristic> freq -1
SCIP> set separators <name of a separator> freq -1
SCIP> set constraints <name of a constraint handler> sepafreq -1
SCIP> set presolvers <name of a presolver> maxrounds 0
SCIP> set heuristics <name of a heuristic> freq <some value>
SCIP> set heuristics <name of a diving heuristic> maxlpiterquot <some value>
SCIP> set heuristics <name of an LNS heuristic> nodesquot <some value>
SCIP> set separators <name of a separator> maxroundsroot <some value>
SCIP> set separators <name of a separator> maxrounds <some value>
SCIP> set separators <name of a separator> freq <some value>
SCIP> set separators <name of a separator> maxsepacuts <some value>
If you want to keep the interactive shell functionality, you could add a dialog handler that introduces a new SCIP shell command.
Search for SCIPdialogExecOptimize in src/scip/dialog_default.c to see how the functionality of the "optimize" command is invoked.
Also, in src/scip/cons_countsols.c, you can see an example of a dialog handler
being added to SCIP.
If this is the way you go, please check the How to add dialogs section
of the doxygen documentation.
Please consult this overview on the problem classes supported by SCIP and the recommendations and links for MINLPs therein.
For starters, SCIP comes with complete examples in source code that illustrate the problem creation process. Please refer to the examples of the Callable Library section in the Example Documentation of SCIP.
First you have to create a SCIP object via SCIPcreate(), then you start to build the problem via SCIPcreateProb(). Then you create variables via SCIPcreateVar() and add them to the problem via SCIPaddVar().
The same has to be done for the constraints. For example, if you want to fill in the rows of a general MIP, you have to call SCIPcreateConsLinear(), SCIPaddCons(), and additionally SCIPreleaseCons() after finishing. If all variables and constraints are present, you can initiate the solution process via SCIPsolve().
Make sure to also call SCIPreleaseVar() if you do not need the variable pointer anymore. For an explanation of creating and releasing objects, please see the notes on releasing objects.
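Putting these calls together, a minimal sketch could look as follows; it uses the convenience variants SCIPcreateProbBasic(), SCIPcreateVarBasic(), and SCIPcreateConsBasicLinear(), and the toy model (minimize x + 2y subject to x + y ≥ 1 with x, y binary) is purely illustrative:

#include "scip/scip.h"
#include "scip/scipdefplugins.h"

static
SCIP_RETCODE solveToyMip(void)
{
   SCIP* scip = NULL;
   SCIP_VAR* x = NULL;
   SCIP_VAR* y = NULL;
   SCIP_CONS* cons = NULL;
   SCIP_VAR* vars[2];
   SCIP_Real vals[2] = { 1.0, 1.0 };

   SCIP_CALL( SCIPcreate(&scip) );
   SCIP_CALL( SCIPincludeDefaultPlugins(scip) );
   SCIP_CALL( SCIPcreateProbBasic(scip, "toy") );

   /* two binary variables with objective coefficients 1 and 2 */
   SCIP_CALL( SCIPcreateVarBasic(scip, &x, "x", 0.0, 1.0, 1.0, SCIP_VARTYPE_BINARY) );
   SCIP_CALL( SCIPcreateVarBasic(scip, &y, "y", 0.0, 1.0, 2.0, SCIP_VARTYPE_BINARY) );
   SCIP_CALL( SCIPaddVar(scip, x) );
   SCIP_CALL( SCIPaddVar(scip, y) );

   /* linear constraint x + y >= 1 */
   vars[0] = x;
   vars[1] = y;
   SCIP_CALL( SCIPcreateConsBasicLinear(scip, &cons, "cover", 2, vars, vals, 1.0, SCIPinfinity(scip)) );
   SCIP_CALL( SCIPaddCons(scip, cons) );
   SCIP_CALL( SCIPreleaseCons(scip, &cons) );

   SCIP_CALL( SCIPsolve(scip) );
   SCIP_CALL( SCIPprintBestSol(scip, NULL, FALSE) );

   /* release the variable pointers and free SCIP */
   SCIP_CALL( SCIPreleaseVar(scip, &x) );
   SCIP_CALL( SCIPreleaseVar(scip, &y) );
   SCIP_CALL( SCIPfree(&scip) );
   return SCIP_OKAY;
}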
First you have to build your problem (at least all variables have to exist), then there are several different ways:
If your solution is available in a file in a solution format that SCIP can read, you can use SCIPreadSol() to pass that solution to SCIP.
Alternatively, you can create a new solution object via SCIPcreateSol() and set all nonzero values by calling SCIPsetSolVal(). After that, you add this solution by calling SCIPaddSol() (the variable stored should be TRUE afterwards if your solution was added to the solution candidate store) and then release it by calling SCIPfreeSol(). Instead of adding and releasing sequentially, you can use SCIPaddSolFree(), which tries to add the solution to the candidate store and frees the solution afterwards.
A third option is to create a partial solution via SCIPcreatePartialSol(). A solution is partial if not all solution values are known before the solve. After creation, all solution values are unknown unless explicitly given via SCIPsetSolVal(). In contrast, solutions created via SCIPcreateSol() implicitly assume a solution value of zero. A typical example for a problem involving integer and continuous variables is a tentative assignment for all integer variables, so that values for the continuous variables should be determined by the solver. After starting the solving process, SCIP will try to heuristically complete all partial solutions that were added during problem creation.
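A minimal sketch of the second option; it assumes x and y are variable pointers kept from problem creation, and the solution values as well as the function name addKnownSolution() are illustrative:

static
SCIP_RETCODE addKnownSolution(SCIP* scip, SCIP_VAR* x, SCIP_VAR* y)
{
   SCIP_SOL* sol;
   SCIP_Bool stored;

   /* create an empty solution; NULL means it is not attributed to a heuristic */
   SCIP_CALL( SCIPcreateSol(scip, &sol, NULL) );

   /* set the nonzero values (illustrative) */
   SCIP_CALL( SCIPsetSolVal(scip, sol, x, 1.0) );
   SCIP_CALL( SCIPsetSolVal(scip, sol, y, 0.0) );

   /* try to add the solution to the candidate store and free it in one step */
   SCIP_CALL( SCIPaddSolFree(scip, &sol, &stored) );
   if( !stored )
      SCIPinfoMessage(scip, NULL, "solution was not added to the candidate store\n");

   return SCIP_OKAY;
}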
There are fourteen different stages during a run of SCIP.
There are some methods which cannot be called in all stages; consider, for example, SCIPtrySol() (see the previous question).
Before the solving process starts, the original problem is copied.
This copy is called "transformed problem", and all modifications during the presolving
and solving process are only applied to the transformed problem.
This has two main advantages: first, the user can also modify the problem after partially solving it.
All modifications done by SCIP (presolving, cuts, variable fixings) during the partial solving process will
be deleted together with the transformed problem; the user can then modify the original problem and restart solving.
Second, the feasibility of solutions is always tested on the original problem!
There can be several reasons for this. In particular, names of binary variables can get different prefixes and suffixes. Each transformed variable and constraint (see here) gets a "t_" as prefix. Apart from that, the meaning of original and transformed variables and constraints is identical.
General integers with bounds that differ just by 1 will be aggregated to binary variables which get the same name with the suffix "_bin". E.g., an integer variable t_x with lower bound 4 and upper bound 5 will be aggregated to a binary variable t_x_bin = t_x - 4.
Variables can have negated counterparts, e.g., for a binary t_x its (also binary) negated counterpart would be t_x_neg = 1 - t_x.
The knapsack constraint handler is able to disaggregate its constraints into cliques, which are set packing constraints, and creates names that consist of the knapsack's name and a suffix "_clq_<int>". E.g., a knapsack constraint knap: x_1 + x_2 + 2 x_3 ≤ 2 could be disaggregated into the set packing constraints knap_clq_1: x_1 + x_3 ≤ 1 and knap_clq_2: x_2 + x_3 ≤ 1.
Yes, you do. SCIP_CALL() is a global define that handles the return codes of all methods returning a SCIP_RETCODE and should therefore wrap each such method call. SCIP_OKAY is the code which is returned if everything worked well; there are 17 different error codes, see type_retcode.h. Each method that calls methods which return a SCIP_RETCODE should itself return a SCIP_RETCODE. If this is not possible, use SCIP_CALL_ABORT() to catch the return codes of the methods. If you do not want to use this either, you have to do the exception handling (i.e., the case that the return code is not SCIP_OKAY) on your own.
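A short sketch of the usual pattern; the function setupProblem() and the chosen parameter are illustrative:

#include "scip/scip.h"

static
SCIP_RETCODE setupProblem(SCIP* scip)
{
   /* SCIP_CALL() evaluates the wrapped call; any return code other than SCIP_OKAY is
    * propagated out of setupProblem(), which therefore returns a SCIP_RETCODE itself */
   SCIP_CALL( SCIPcreateProbBasic(scip, "example") );
   SCIP_CALL( SCIPsetRealParam(scip, "limits/time", 3600.0) );
   return SCIP_OKAY;
}

int main(void)
{
   SCIP* scip = NULL;

   /* main() cannot return a SCIP_RETCODE, so SCIP_CALL_ABORT() is used here instead */
   SCIP_CALL_ABORT( SCIPcreate(&scip) );
   SCIP_CALL_ABORT( setupProblem(scip) );
   SCIP_CALL_ABORT( SCIPfree(&scip) );
   return 0;
}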
Limits are given by parameters in SCIP, for example limits/time for a time limit or limits/nodes for a node limit. If you want to set a limit, you have to change these parameters. For example, for setting the time limit to one hour, you have to call SCIP_CALL( SCIPsetRealParam(scip, "limits/time", 3600) ). In the interactive shell, you just enter set limits time 3600.
For more examples, please have a look into heur_rens.c.
See the doxygen documentation for a list of plugin types. There is a HowTo for each of them.
This depends on whether you want to add constraints or only cutting planes. The main difference is that constraints can be "model constraints", while cutting planes are only additional LP rows that strengthen the LP relaxation. A model constraint is a constraint that is important for the feasibility of the integral solutions. If you delete a model constraint, some infeasible integral vectors would suddenly become feasible in the reduced model. A cutting plane is redundant w.r.t. integral solutions. The set of feasible integral vectors does not change if a cutting plane is removed. You can, however, relax this condition slightly and add cutting planes that do cut off feasible solutions, as long as at least one of the optimal solutions remains feasible.
You want to use a constraint handler in the following cases:
You want to use a cutting plane separator in the following cases:
Note that a constraint handler is defined by the type of constraints that it manages. For constraint handlers, always think in terms of constraint programming. For example, the "nosubtour" constraint handler in the TSP example (see "ConshdlrSubtour.cpp" in the directory "scip/examples/TSP/src/") manages "nosubtour" constraints, which demand that in a given graph no feasible solution can contain a tour that does not contain all cities. In the usual TSP problem, there is only one "nosubtour" constraint, because there is only one graph for which subtours have to be ruled out. The "nosubtour" constraint handler has various ways of enforcing the "nosubtour" property of the solutions. A simple way is to just check each integral solution candidate (in the CONSCHECK, CONSENFOLP, and CONSENFOPS callback methods) for subtours. If there is a subtour, the solution is rejected. A more elaborate way includes the generation of "subtour elimination cuts" in the CONSSEPALP callback method of the constraint handler. Additionally, the constraint handler may want to separate other types of cutting planes like comb inequalities in its CONSSEPALP callback.
Setting the status of a display column to 0 turns it off. E.g., type set display memused status 0 in the interactive shell to disable the memory information column, or include the line SCIPsetIntParam(scip, "display/memused/status", 0) into your source code. Adding your own display column can be done by calling the SCIPincludeDisp() method, see the doxygen documentation.
The statistic display, which is shown by display statistics and SCIPprintStatistics(), respectively, cannot be changed.
Each row is of the form lhs ≤ Σ(val[j]·col[j]) + const ≤ rhs. For now, val[j]·col[j] can be interpreted as a_ij·x_j (for the difference between columns and variables, see here). The constant is essentially needed for collecting the influence of presolving reductions like variable fixings and aggregations.
The lhs and rhs may take infinite values: a less-than inequality would have lhs = -∞, and a greater-than inequality would have rhs = +∞. For equations, lhs is equal to rhs. An infinite left hand side can be recognized by SCIPisInfinity(scip, -lhs), an infinite right hand side can be recognized by SCIPisInfinity(scip, rhs).
You can get all rows in the current LP relaxation by calling SCIPgetLPRowsData(). The methods SCIProwGetConstant(), SCIProwGetLhs(), SCIProwGetRhs(), SCIProwGetVals(), SCIProwGetNNonz(), and SCIProwGetCols() then give you information about each row, see the previous question.
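For illustration, a sketch that prints basic data for each row of the current LP relaxation (only meaningful while the LP of the current node is constructed, e.g., inside a separator callback); the helper printLPRows() is illustrative:

static
SCIP_RETCODE printLPRows(SCIP* scip)
{
   SCIP_ROW** rows;
   int nrows;
   int i;

   SCIP_CALL( SCIPgetLPRowsData(scip, &rows, &nrows) );

   for( i = 0; i < nrows; ++i )
   {
      SCIP_Real lhs = SCIProwGetLhs(rows[i]);
      SCIP_Real rhs = SCIProwGetRhs(rows[i]);

      SCIPinfoMessage(scip, NULL, "row <%s>: %d nonzeros, constant %g, lhs %s, rhs %s\n",
         SCIProwGetName(rows[i]), SCIProwGetNNonz(rows[i]), SCIProwGetConstant(rows[i]),
         SCIPisInfinity(scip, -lhs) ? "-infinity" : "finite",
         SCIPisInfinity(scip, rhs) ? "+infinity" : "finite");
   }
   return SCIP_OKAY;
}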
You get a columnwise representation by calling SCIPgetLPColsData(). The methods SCIPcolGetLb() and SCIPcolGetUb() give you the locally valid bounds of a column in the LP relaxation of the current branch-and-bound node. If you are interested in global information, you have to call SCIPcolGetVar() to get the variable associated with a column (see the next question), which you can ask for global bounds via SCIPvarGetLbGlobal() and SCIPvarGetUbGlobal() as well as the type of the variable (binary, general integer, implicit integer, or continuous) by calling SCIPvarGetType(). For more information, also see this question.
The terms columns and rows always refer to the representation in the current LP-relaxation, variables and
constraints to your global Constraint Integer Program.
Each column has an associated variable, which it
represents, but not every variable must be part of the current LP-relaxation. E.g., it could be already fixed,
aggregated to another variable, or be priced out if a column generation approach was implemented.
Each row has either been added to the LP by a constraint handler or by a cutting plane separator. A constraint handler is able to, but does not need to, add one or more rows to the LP as a linear relaxation of each of its constraints. E.g., in the usual case (i.e. without using dynamic rows) the linear constraint handler adds one row to the LP for each linear constraint.
The variable array which you get by SCIPgetVars() is internally sorted by variable types. The ordering is binary, integer, implicit integer, and continuous variables, i.e., the binary variables are stored at positions [0,...,nbinvars-1], the general integers at [nbinvars,...,nbinvars+nintvars-1], and so on. It holds that nvars = nbinvars + nintvars + nimplvars + ncontvars. There is no further sorting within these sections, and there is no sorting for the rows either. But each column and each row has a unique index, which can be obtained by SCIPcolGetIndex() and SCIProwGetIndex(), respectively.
A variable v is implicit integer if it is guaranteed to take an integer solution value in every optimal solution to every remaining subproblem after fixing all integer variables of the problem. Implicit integer variables are represented by the variable type SCIP_VARTYPE_IMPLINT.
Note that continuous as well as integer variables can be declared
implicit integer.
The solver benefits from the presence of implicit integrality in several ways: implicit integer variables can be treated like continuous variables for branching, because it is not necessary to enforce integrality, and they can be treated like integer variables to yield stronger propagations, better coefficients in cuts, etc.
Another advantage of this definition is that it allows heuristics to know
that if a feasible point has integer values for all integer variables,
then either all implicit integer variables have integer values or the
value of an implicit integer variable can be assigned to an integer
without worsening the objective value.
Currently, SCIP identifies implicit integer variables via feasibility
reasoning and optimality reasoning during presolving.
As an example of feasibility reasoning, consider the constraint
x + y = 1
where both variables are integer.
From this constraint, either one of them could be declared implicit
integer.
This is because fixing one to an integer value forces the other to be
integer as well.
Note that you must not declare both variables implicit integer, since the reasoning depends on the other variable still being of integer type.
As an example of optimality reasoning, consider the problem of maximizing x subject to x + y <= 10, where y has to be integer. After fixing y to any integer value, the optimal value of x will be 10 - y, which is an integer. Hence, x can be declared implicit integer. Note, however, that if the constraint is 2*x + y <= 10, then we cannot conclude that x is implicit integer.
There are various numerical comparison functions available, each of them using a different
epsilon in its comparisons. Let's take the equality comparison as an example. There are
the following methods available: SCIPisEQ(), SCIPisSumEQ(), SCIPisFeasEQ(), SCIPisRelEQ(), and SCIPisSumRelEQ().
SCIPisEQ()
should be used to compare two single values that are either results of a simple
calculation or are input data. The comparison is done w.r.t. the "numerics/epsilon" parameter, which is
1e-9 in the default settings.
SCIPisSumEQ()
should be used to compare the results of two scalar products or other "long"
sums of values. In these sums, numerical inaccuracy can occur due to cancellation of digits in the addition of
values with opposite sign. Therefore, SCIPisSumEQ()
uses a relaxed equality tolerance of
"numerics/sumepsilon", which is 1e-6 in the default settings.
SCIPisFeasEQ()
should be used to check the feasibility of some result, for example after you have
calculated the activity of a constraint and compare it with the left and right hand sides. The feasibility is
checked w.r.t. the "numerics/feastol" parameter, and equality is defined in a relative fashion in
contrast to absolute differences. That means, two values are considered to be equal if their difference divided by
the larger of their absolute values is smaller than "numerics/feastol". This parameter is 1e-6 in the
default settings.
SCIPisRelEQ()
can be used to check the relative difference between two values, just like what
SCIPisFeasEQ()
is doing. In contrast to SCIPisFeasEQ()
it uses
"numerics/epsilon" as tolerance.
SCIPisSumRelEQ()
is the same as SCIPisRelEQ()
but uses "numerics/sumepsilon"
as tolerance. It should be used to compare two results of scalar products or other "long" sums.
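As an illustration of how these tolerances are typically used when checking a computed activity against a constraint's sides, here is a small sketch; the helper activityIsFeasible() is illustrative, and SCIPisFeasGE()/SCIPisFeasLE() use "numerics/feastol" just like SCIPisFeasEQ():

static
SCIP_Bool activityIsFeasible(SCIP* scip, SCIP_Real activity, SCIP_Real lhs, SCIP_Real rhs)
{
   /* the activity is usually a long sum, so the (relative) feasibility tolerances are appropriate here */
   if( !SCIPisInfinity(scip, -lhs) && !SCIPisFeasGE(scip, activity, lhs) )
      return FALSE;
   if( !SCIPisInfinity(scip, rhs) && !SCIPisFeasLE(scip, activity, rhs) )
      return FALSE;
   return TRUE;
}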
If the LP is only a slightly modified version of the LP relaxation (changed variable bounds or objective coefficients), then you can use SCIP's diving mode: methods SCIPstartDive(), SCIPchgVarLbDive(), SCIPsolveDiveLP(), etc.
Alternatively, SCIP's probing mode allows for a tentative depth-first search in the tree and can solve the LP relaxations at each node: methods SCIPstartProbing(), SCIPnewProbingNode(), SCIPfixVarProbing(), etc. However, you cannot change objective coefficients or enlarge variable bounds in probing mode.
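A sketch of the probing pattern, tentatively fixing one variable and solving the probing LP; the fixing value and the helper probeFixing() are illustrative, and the exact signatures assume a recent SCIP version:

static
SCIP_RETCODE probeFixing(SCIP* scip, SCIP_VAR* var)
{
   SCIP_Bool cutoff;
   SCIP_Bool lperror;

   SCIP_CALL( SCIPstartProbing(scip) );
   SCIP_CALL( SCIPnewProbingNode(scip) );

   /* tentatively fix the given variable to 1.0, propagate, and solve the probing LP */
   SCIP_CALL( SCIPfixVarProbing(scip, var, 1.0) );
   SCIP_CALL( SCIPpropagateProbing(scip, -1, &cutoff, NULL) );
   if( !cutoff )
   {
      SCIP_CALL( SCIPsolveProbingLP(scip, -1, &lperror, &cutoff) );
      if( !lperror && !cutoff )
         SCIPinfoMessage(scip, NULL, "probing LP objective: %g\n", SCIPgetLPObjval(scip));
   }

   /* undo all probing changes */
   SCIP_CALL( SCIPendProbing(scip) );
   return SCIP_OKAY;
}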
If you need to solve a separate LP, creating a sub-SCIP is not recommended because of the overhead involved
and because dual information is not accessible (compare here). Instead you can use SCIP's LP
interface. For this you should include lpi/lpi.h
and call the methods provided therein.
Note that the LPI can be used independently from SCIP.
If you want to use SCIP as a branch-and-price framework, you normally need to implement a reader to read in your problem data and build the problem, a pricer to generate new columns, and a branching rule to do the branching (see also this question to see how to store branching decisions, if needed). SCIP takes care of everything else, for example the branch-and-bound tree management and LP solving including storage of warmstart bases. Moreover, many of SCIP's primal heuristics will be used and can help improve your primal bound. However, this also comes with a few restrictions: You are not allowed to change the objective function coefficients of variables during the solving process, because that means that previously computed dual bounds might have to be updated. This prevents the use of dual variable stabilization techniques based on a (more or less strict) bounding box in the dual. We are working on making this possible and recommend using a weighted sum stabilization approach until then. Another thing that SCIP does for you is the dynamic removal of columns from the LP due to aging (see also the next two questions). However, due to the way simplex bases are stored in SCIP, columns can only be removed at the same node where they were created.
With SCIPgetLPColsData()
you can obtain the columns of the current LP relaxation. It is correct that
not all variables are necessarily part of the current LP relaxation. In particular, in branch-and-price the
variables generated at one node in the tree are not necessarily included in the LP relaxation of a different node
(e.g., if the other node is not a descendant of the first node). But even if you are still at the same node or at
a descendant node, SCIP can remove columns from the LP, if they are 0 in the LP relaxation. This dynamic column
deletion can be avoided by setting the "removable" flag to FALSE in the SCIPcreateVar()
call.
As described in the previous question, it may happen, that some variables are not in the current LP relaxation. Nevertheless, these variables still exist, and SCIP can calculate their reduced costs and add them to the LP again, if necessary. This is the job of the variable pricer. It is called before all other pricers.
This is a very common problem in branch-and-price, which you can handle nicely using SCIP. There are basically three different options. The first one is to add binary variables to the problem that encode the branching decisions. Then constraints should be added that enforce the corresponding branching decisions in the subtrees.
If you have complex pricer data like a graph and need to update it after each branching decision, you should introduce "marker constraints" that are added to the branching nodes and store all the information needed (see the next question).
The third way is to use an event handler, which is described here.
This can be done by creating a new constraint handler with constraint data that can store the information and do/undo changes in the pricer's data structures.
Once you have such a constraint handler, just create constraints of this type and add them to the child nodes of
your branching by SCIPaddConsNode(). Make sure to set the "stickingatnode" flag to TRUE in order to prevent SCIP from moving the constraint around in the tree.
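A sketch of such a branching step; it assumes the two constraints were already created by your constraint handler with the "stickingatnode" flag set to TRUE and encode the two branching decisions, and the helper branchWithConstraints() is illustrative:

static
SCIP_RETCODE branchWithConstraints(SCIP* scip, SCIP_CONS* consleft, SCIP_CONS* consright)
{
   SCIP_NODE* leftchild;
   SCIP_NODE* rightchild;

   /* create two children with equal selection priority and the current node's estimate */
   SCIP_CALL( SCIPcreateChild(scip, &leftchild, 0.0, SCIPgetLocalTransEstimate(scip)) );
   SCIP_CALL( SCIPcreateChild(scip, &rightchild, 0.0, SCIPgetLocalTransEstimate(scip)) );

   /* attach one locally valid constraint to each child */
   SCIP_CALL( SCIPaddConsNode(scip, leftchild, consleft, NULL) );
   SCIP_CALL( SCIPaddConsNode(scip, rightchild, consright, NULL) );

   /* release the constraints after handing them to SCIP */
   SCIP_CALL( SCIPreleaseCons(scip, &consleft) );
   SCIP_CALL( SCIPreleaseCons(scip, &consright) );
   return SCIP_OKAY;
}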
The CONSACTIVE method is always called when a node is entered on which the constraint has been added. Here, you need to apply the changes to your pricing data structures. The CONSDEACTIVE method will be called if the node is left again. Since the CONSACTIVE and CONSDEACTIVE methods of different constraints are always called in a stack-like fashion, this should be exactly what you need.
All data of a constraint need to be freed by implementing an appropriate CONSDELETE callback.
If you need to fix variables for enforcing your branching decision, this can be done in the propagation callback
of the constraint handler. Since, in general, each node is only propagated once, in this case you will have to check
in your CONSACTIVE method whether new variables were added after your last propagation of this node. If this is
the case, you will have to mark this node for repropagation by SCIPrepropagateNode().
You can look into the constraint handler of the coloring problem (examples/Coloring/src/cons_storeGraph.c) to get an example of a constraint handler that does all these things.
An event handler can watch for events like local bound changes on variables. So, if your pricer wants to be informed whenever a local bound of a certain variable changes, add an event handler, catch the corresponding events of the variable, and in the event handler's execution method adjust the data structures of your pricer accordingly.
Variables in SCIP are always added globally. If you want to add them locally, because they are forbidden in another part of the branch-and-bound-tree, you should ensure that they are locally fixed to 0 in all subtrees where they are not valid. A description of how this can be done is given here.
First check whether your pricing is correct. Are there upper bounds on variables that you have forgotten to take into account? If your pricer cannot cope with variable bounds other than 0 and infinity, you have to mark all constraints containing priced variables as modifiable, and you may have to disable reduced cost strengthening by setting propagating/rootredcost/freq to -1.
If your pricer works correctly and makes sure that the same column is added at most once in one pricing round, this behavior is probably caused by the PRICER_DELAY property of your pricer.
If it is set to FALSE, the following may have happened: The variable pricer (see this question) found a variable with negative dual feasibility that was not part of the current LP relaxation and added it to the LP. In the same pricing round, your own pricer found the same column and created a new variable for it. This might happen, since your pricer uses the same dual values as the variable pricer. To avoid this behavior, set PRICER_DELAY to TRUE, so that the LP is reoptimized after the variable pricer added variables to the LP. You can find some more information about the PRICER_DELAY property at How to add variable pricers .
In most cases you should deactivate separators since cutting planes that are added to your master problem may
destroy your pricing problem. Additionally, it may be necessary to deactivate some presolvers, mainly the dual
fixing presolver. This can be done by not including these plugins into SCIP, namely by not calling
SCIPincludeSepaXyz()
and SCIPincludePresolXyz()
in your own plugins-including
files. Alternatively, you can set the parameters maxrounds and maxroundsroot to zero for all separators and
maxrounds to zero for the presolvers.
In many Branch-and-Price applications, you have binary variables, but you do not want to impose upper bounds on these variables in the LP relaxation, because the upper bound is already implicitly enforced by the problem constraints and the objective. If the upper bounds are explicitly added to the LP, they lead to further dual variables, which may be hard to take into account in the pricing problem.
There are two possibilities for how to solve this problem. First, you could change the binary variables to general integer variables, if this does not change the problem. However, if you use special linear constraints like set partitioning/packing/covering, you can only add binary variables to these constraints.
In order to still allow the usage of these types of constraints in a branch-and-price approach, the concept of lazy bounds was introduced in SCIP 2.0. For each variable, you can define lazy upper and lower bounds, i.e., bounds that are implicitly enforced by constraints and the objective. SCIP adds variable bounds to the LP only if the bound is tighter than the corresponding lazy bound. Note that lazy bounds are explicitly put into and removed from the LP when starting and ending diving mode, respectively. This is needed because changing the objective in diving might reverse the implicitly enforced bounds.
For instance, if you have set partitioning constraints in your problem, you can define variables contained in these constraints as binary and set the lazy upper bound to 1, which allows you to use the better propagation methods of the setppc constraint handler compared to the linear constraint handler without taking care about upper bounds on variables in the master.
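A sketch of creating such a master variable with a lazy upper bound of 1; the variable name, the objective coefficient cost, and the helper addMasterVariable() are illustrative:

static
SCIP_RETCODE addMasterVariable(SCIP* scip, SCIP_Real cost)
{
   SCIP_VAR* var;

   /* binary master variable with objective coefficient 'cost' */
   SCIP_CALL( SCIPcreateVarBasic(scip, &var, "lambda", 0.0, 1.0, cost, SCIP_VARTYPE_BINARY) );
   SCIP_CALL( SCIPaddVar(scip, var) );

   /* the upper bound of 1 is already implied by the set partitioning constraints,
    * so declare it lazy: it is then not added to the LP unless a tighter bound appears */
   SCIP_CALL( SCIPchgVarUbLazy(scip, var, 1.0) );

   SCIP_CALL( SCIPreleaseVar(scip, &var) );
   return SCIP_OKAY;
}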
In a column generation approach, you usually have to solve the master problem to optimality; otherwise, its objective function value is not a valid dual bound. However, there is a way in SCIP to stop the pricing process earlier, called "early branching".
The reduced cost pricing method of a pricer has a result pointer that should be set each time the method is called.
In the usual case that the pricer either adds a new variable or ensures that there are no further
variables with negative dual feasibility, the result pointer should be set to SCIP_SUCCESS.
If the pricer aborts pricing without creating a new variable, but there might exist additional
variables with negative dual feasibility, the result pointer should be set to SCIP_DIDNOTRUN.
In this case, the LP solution will not be used as a lower bound.
Typically, early branching goes along with the computation of a Lagrangian bound in each pricing iteration. The pricer should store this valid lower bound in the lowerbound pointer in order to update the lower bound of the current node.
Since SCIP 3.1, it is even possible to state that pricing should be stopped early even though new variables were created
in the last pricing round. For this, the pricer has to set the stopearly
pointer to TRUE.
SCIP tries to detect whether the objective function values of all solutions must be integral or the problem can be scaled such that the former holds. If this is the case, solving will be stopped as soon as the absolute gap is below 1.0 (scaled).
However, the detection does not work in case of branch-and-price, because SCIP cannot know whether any of the newly
created variables would violate this property. For this case, there is the possibility to inform SCIP that
all newly created variables will be integer and have an integer objective coefficient by calling
SCIPsetObjIntegral(). This knowledge will then be exploited by SCIP for bounding.
SCIP features the functionality to delete variables from the problem when performing branch-and-price.
This feature is still in a beta status and can be activated by switching the parameters
pricing/delvars
and pricing/delvarsroot
to TRUE in order to allow deletion
of variables at the root node and at all other nodes, respectively.
Furthermore, variables have to be marked to be deletable by SCIPvarMarkDeletable()
, which has to be done
before adding the variable to the problem. Then, after a node of the branch-and-bound-tree is processed,
SCIP automatically deletes variables from the problem that were created at the current node and whose corresponding
columns were already removed from the LP. Note that due to the way SCIP stores basis information, it is not possible to
completely delete a variable that was created at another node than the current node.
You might want to change the parameters lp/colagelimit
, lp/cleanupcols
, and lp/cleanupcolsroot
,
which have an impact on when and how fast columns are removed from the LP.
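A sketch of how a pricer might create such a deletable column; the bounds, the objective coefficient objcoef, the score passed to SCIPaddPricedVar(), and the helper addDeletableColumn() are illustrative:

static
SCIP_RETCODE addDeletableColumn(SCIP* scip, SCIP_Real objcoef, SCIP_Real redcost)
{
   SCIP_VAR* var;

   /* new column generated by the pricing problem */
   SCIP_CALL( SCIPcreateVarBasic(scip, &var, "col", 0.0, SCIPinfinity(scip), objcoef, SCIP_VARTYPE_CONTINUOUS) );

   /* must be called before the variable is added to the problem */
   SCIPvarMarkDeletable(var);

   /* add the column as a priced variable; here the score is its negative reduced cost */
   SCIP_CALL( SCIPaddPricedVar(scip, var, -redcost) );
   SCIP_CALL( SCIPreleaseVar(scip, &var) );
   return SCIP_OKAY;
}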
Constraint handlers support a new callback function that deletes variables from constraints in which they were marked to be deleted. Thus, when using automatic variable deletion, you should make sure that all used constraint handlers implement this callback. By now, the linear, the set partitioning/packing/covering and the knapsack constraint handler support this callback, which should be sufficient for most branch-and-price applications. Note that set covering constraints can be used instead of logicor constraints.
Instead of deleting a variable completely, you can also remove it from the problem by either fixing the variable to zero
using SCIPfixVar(), which fixes the variable globally, or using SCIPchgVarUbNode() and SCIPchgVarLbNode(), which change the bounds only for the current subtree.
Constraint-based branching is rather straightforward to implement in SCIP. You have to add a new branching rule that uses the methods SCIPcreateChild() and SCIPaddConsNode() in its branching callbacks. A very good example for this is the Ryan/Foster branching rule that has been implemented in the binpacking example from the examples section.
Sometimes it might be more appropriate to implement a constraint handler instead of a branching rule. This is the case if, e.g., the added constraints alone do NOT ensure integrality of the integer variables, or if you still want to use the available branching rules. In the ENFOLP callback of your constraint handler, the branching really happens. The integrality constraint handler calls the branching rules within the ENFOLP callback. Give your constraint handler a positive enforcement priority to trigger your constraint branching before the integrality constraint handler and perform the constraint branching.
The functionality of copying a SCIP model was added in SCIP version 2.0.0. It gives the possibility to generate a copy of the current SCIP model. This functionality is of interest, for example, in large neighborhood heuristics (such as heur_rens.c). They can now easily copy the complete problem and fix a certain set of variables to work on a reasonable copy of the original problem.
Since SCIP version 4.0.0, the additional copying method SCIPcopyConsCompression()
is available,
which expects as additional argument a list of variables that should be fixed in the problem copy. These will be
fixed right away at creation, so that all constraints may treat those variables as constants to potentially
reduce the memory required to store the problem copy.
This, of course, depends on the problem copy's intended use. The large neighborhood search heuristics such as, e.g.,
heur_rens.c, usually create a problem copy in which they fix a number of variables and solve the remaining, smaller
subproblem only once. In this case, it makes sense to use SCIPcopyConsCompression()
that treats
fixed variables as constants at constraint creation time to save memory.
For a more general use of the problem copy such as resolving with different objective functions or multiple solves
for different sets of fixed variables, you should clearly use SCIPcopy()
because this is beyond the
scope of a compressed copy.
For the variables and constraints there are the methods SCIPgetVarCopy()
and
SCIPgetConsCopy()
which provide a copy for a variable or a constraint, respectively.
SCIP would like to know if the copied problem is a valid copy. A problem copy is called valid if it is valid in both the primal and the dual sense, i.e., if
A constraint handler may choose to not copy a constraint and still declare the resulting copy as valid. Therefore, it must ensure the feasibility of any solution to the problem copy in the original (source) space.