General Questions about SCIP
 What is SCIP?
 When should I use SCIP?
 I heard something about licenses. Do I have to pay for using SCIP?
 How do I get started?
 Do I need any extra software?
 How can I build/compile SCIP?
 What are the main differences between the CMake and the Makefile system?
 I have installation problems. What can I do?
 I changed to a new version of SCIP and now compiling breaks with some error messages which I don't understand. Do you have a general hint on that?
 How can I debug in SCIP?
 SCIP prints error messages, aborts, produces segmentation faults, or just behaves strangely. What should I do?
 I would like to check whether some functionality is implemented in SCIP. How does the naming of the methods work? Where do I find the most common methods?
 Can I use SCIP as a pure CP/SAT Solver?
 Can I use SCIP as a pure LP solver?
 Which kind of MINLPs are supported by SCIP?

 What is this business with .a and .so libraries in the directory lib/? Can I compile SCIP as a shared library?
 The methods SCIPgetVarSol() and SCIPvarGetSol() seem to have the same functionality. Which one should I use?
 Is there a way to visualize the branch and bound tree?
 SCIP has found decent primal and dual bounds but still reports the gap as "Infinity". Why?
 SCIP crashes during symmetry detection with bliss. What can I do?
 How can I run SCIP in parallel?
 What are implicit integer variables?
Using SCIP as a standalone solver
 The output is too wide for my terminal window. What can I do?
 What do the cryptic abbreviations for the columns mean which are displayed during the solving process of SCIP?
 Why does SCIP claim that it could not find the user parameters "scip.set"? Where can I get such a file?
 How do I change the behavior of SCIP?
 How can I learn more about/from the presolve reasoning SCIP applies to my combinatorial optimization problem?
 I recognized that one special plugin works very poorly / very well for my problem and I want to disable it / weaken its influence / intensify its influence. How do I do this?
 How can I use my own functions in the interactive shell/extend the set of available interactive shell commands?
 How can I input a MINLP into SCIP?
 Does SCIP handle symmetries inherent to my problem?
 Which symmetry handling method should I use?
 SCIP's incumbent solution is infeasible in the original space. What can I do?
 What is displayed in the column "compl." of the SCIP output?
 What is the difference between SCIP's search tree and the "Estimation Tree" in the SCIP statistics?
Using SCIP included in another source code
 How do I construct a problem instance in SCIP?
 I already know a solution in advance, which I want to pass to SCIP. How do I do this?
 What operational stages of SCIP are there and are they important for me?
 What is the difference between the original and the transformed problem?
 Why do the names, e.g., in debug messages often differ from the ones I defined?
 What is SCIP_CALL()? Do I need this?
 I want to stop the solving process after a certain time. How can I do this?
 Is it possible to avoid that the SCIP library overrides my signal handler for interruptions?
Using SCIP as a Branch-Cut-and-Price framework
 How do I start a project?
 What types of plugins can I add and how do I do this?
 When should I implement a constraint handler, when should I implement a separator?
 Can I remove unnecessary display columns or—even better—add my own ones? Can I change the statistics displayed at the end of solving?
 What do LP rows look like in SCIP?
 How do I get the data of the current LP relaxation?
 What is the difference between columns and variables, rows and constraints?
 Are the variables and rows sorted in any particular order?
 When should I use which of the numerical comparison functions?
 How do I solve an LP inside my SCIP plugin?
 Can I write my own symmetry handling plugin for SCIP?
 What is the difference between sepastore and cutpool, and when should one add a cut to one or the other?
Specific questions about Column Generation and Branch-and-Price with SCIP
 What can I expect when using SCIP as a branch-cut-and-price framework?
 Why are not all variables in the LP?
 I only implemented one pricer, why is there a second one, called variable pricer?
 How can I store branching decisions?
 I want to store some information at the nodes and update my pricer's data structures when entering a new node. How can I do that?
 How can an event handler help me with my branching?
 How can I add locally valid variables to the problem in my branch-and-price code?
 My pricer generates the same column twice. How can I solve this problem?
 Which default plugins should be deactivated in order to get a working branch-and-price code?
 What are the lazy bounds for variables in SCIP and what do I need them for?
 Can I stop the pricing process before the master problem is solved to optimality?
 SCIP does not stop although my gap is below 1.0 and all variables are binary and have objective coefficient 1. What can I do?
 How can I delete variables?
 How do I branch on constraints?
Specific questions about the copy functionality in SCIP

What is SCIPcopy()?
 When should I use SCIPcopy() instead of SCIPcopyConsCompression()?
 How do I get a copy of a variable or a constraint?
 What does the valid pointer in the copy callback of the constraint handler and variable pricer mean?
General Questions about SCIP

What is SCIP?
SCIP is a solver for Mixed Integer Linear and Nonlinear Problems that allows for an easy integration of arbitrary constraints. It can be used as a framework for branch-cut-and-price and contains all necessary plugins to serve as a standalone solver for MIP and MINLP.
 You can use the precompiled binaries to solve MIPs and MINLPs. Learn more about the file formats supported by SCIP, and get an overview of the supported problem classes and additional recommendations for solving them.
 You can use SCIP as a subroutine for solving MINLPs and more general constraint integer programs from your own source code.
 You can use SCIP as a framework in which you implement your own plugins.
 You can use SCIP in any combination of the three purposes above.
This FAQ contains separate sections covering each of these usages of SCIP. It further considers specific questions for some features.

When should I use SCIP?
If you are either looking for a fast noncommercial MIP/MINLP solver or for a branch-cut-and-price framework into which you can directly implement your own methods. SCIP allows full control of the solving process through parametrization and user callbacks.

I heard something about licenses. Do I have to pay for using SCIP?
Since November 4, 2022, and SCIP version 8.0.3, SCIP is distributed under a permissive open source license, see the SCIP homepage.

How do I get started?
An easy way is to use the SCIP binaries and call SCIP from a shell, see here for a tutorial. For that, you just have to use one of the installers from the download section, or the zipped source code and compile it with your favorite settings. Note that the coding examples are only contained in the source code bundle, but not in the precompiled packages. The compilation from source is described in detail in the INSTALL file in the SCIP main directory.
Another way is to use SCIP as a solver integrated into your own program source code. See the directories "examples/MIPsolver/" and "examples/Queens/" for simple examples and this point.
A third way is to implement your own plugins into SCIP. This is explained in the HowTos for all plugin types, which you can find in the "How to add" section. See also How to start a new project.

Do I need any extra software?
Unless you want to use SCIP as a pure CP solver (see here), you need an underlying LP solver installed and linked to the libraries (see the INSTALL file in the SCIP root directory). LP solvers currently supported by SCIP are:
 SoPlex
 IBM ILOG CPLEX
 FICO Xpress
 Gurobi (version at least 7.0.2 required)
 CLP (interface currently sometimes produces wrong results)
 Glop, Google OR tools (experimental, LPI contained in OR tools)
 Mosek
 QSopt (experimental)
If you want to use SCIP for mixed integer nonlinear programming, you might want to use an underlying NLP solver (e.g., Ipopt). SCIP already comes with the CppAD expression interpreter as part of the source code.
Besides that, you might need a modeling language like ZIMPL to generate *.mps or *.lp files. ZIMPL files (.zpl extension) can also be read directly into SCIP. Downloading the SCIP Optimization Suite includes SCIP, SoPlex and ZIMPL, as well as GCG and UG. Find installers for different platforms and the source code bundle of all versions here.

How can I build/compile SCIP?
SCIP can be compiled using either Makefiles or CMake. We strongly recommend the new CMake build system to both new and long-time users of SCIP. Consult the CMake documentation for further information about the changes introduced in the new system (like Linux-conform naming conventions for libraries). We still support the traditional Makefile system for backwards compatibility, but it might be discontinued at some point.

What are the main differences between the CMake and the Makefile system?
SCIP and the SCIP Optimization Suite provide two ways of compilation: using the cross-platform configuration system CMake, or using the manually maintained Makefile system. Find instructions on how to use each of them in the INSTALL file in the SCIP root directory. The most notable difference is that the CMake system creates a single (complete) library "libscip" to link against, whereas the Makefile system only generates a partial library "libscip" and requires linking of further libraries for a successful compilation of a SCIP project.
The following list highlights some of the advantages of the CMake system, which we recommend for new users.
 Configuration step detects the required system libraries such as readline. Missing libraries turn off the corresponding options
 Use of our Python and Java Interfaces depends on a library "libscip" built with CMake
 Generation of Visual Studio project files under Windows
 Support of build systems other than make, such as ninja
The following list highlights some of the advantages of the Makefile system.
 Backwards compatibility when switching to older SCIP versions (CMake support introduced with SCIP version 4.0.1)
 Situations (old operating systems) where CMake is unavailable
 Compilation of the UG framework is not (yet) supported by CMake, only possible using make

I have installation problems. What can I do?
Read the INSTALL file in the SCIP root directory. It contains hints on how to get around problems. You can also try the precompiled binaries by choosing one of the available installers from the SCIP page for your operating system.
If you encounter compilation problems dealing with the explicit keyword, you should try a newer compiler.
I changed to a new version of SCIP and now compiling breaks with some error messages which I don't understand. Do you have a general hint on that?
Maybe the parameters of a function in SCIP changed. Please have a look at the Changelog.

How can I debug in SCIP?
Compile SCIP in debug mode ("cmake -DCMAKE_BUILD_TYPE=Debug" with CMake or "make OPT=dbg" with the Makefile system). Put the binary into a debugger, e.g., gdb, and let it run again. If you get an impression which component is causing the trouble, set "#define SCIP_DEBUG" as the first line of the corresponding *.c file, recompile, and let it run again. This will print debug messages from that piece of code. Find a short debugging tutorial here.
SCIP prints error messages, aborts, produces segmentation faults, or just behaves strangely. What should I do?
See above. Often, the asserts that show up in debug mode already help to clarify misunderstandings and suggest fixes, if you were calling SCIP functions in an unexpected manner. For sending bug reports, please see our Contact information from which you can directly access an online form for reporting bugs.

I would like to check whether some functionality is implemented in SCIP. How does the naming of the methods work? Where do I find the most common methods?
For an explanation of the naming see the coding style guidelines.
The public C API of SCIP is separated into a core API provided by the header scip.h and a default plugin API provided by scipdefplugins.h. The large API is structured into topics for a better overview.
Can I use SCIP as a pure CP/SAT Solver?
Yes. SCIP can be used as a pure CP/SAT solver by typing "set emphasis cpsolver" in the shell or by using the function SCIPsetEmphasis(). Furthermore, you can compile SCIP without any LP solver by "cmake -DLPS=none" ("make LPS=none" in the Makefile system). See here for more on changing the behavior of SCIP.
 Can I use SCIP as a pure LP solver?
Since LPs are only special types of MIPs and CIPs, the principal answer is yes. If you feed a pure LP to SCIP, it will first apply presolving and then hand this presolved problem to the underlying LP solver. If the LP is solved to optimality, you can query the optimal solution values as always. You can also access the values of an optimal dual solution by using "display dualsolution".
However, there are certain limitations to this: Reduced costs are not accessible. If the LP turns out to be infeasible, you cannot currently obtain a Farkas proof. And recall that this approach is only meaningful if the problem is an LP (no integer variables, only linear constraints).
Hence, if you need more "LP-specific" information than the primal solution, you are better off using an LP solver directly. If you are using the SCIP Optimization Suite, you could, e.g., use the included LP solver SoPlex. If you want to solve an LP not from the command line, but within your C/C++ program, you could also use SCIP's LP interface, see also here.

Which kind of MINLPs are supported by SCIP?
SCIP supports nonlinear constraints of the form lhs ≤ f(x) ≤ rhs, where the function f(x) is an algebraic expression that can be represented as an expression tree. Such an expression tree has constants and variables as terminal nodes and operands as nonterminal nodes. Expression operands supported by SCIP include addition, subtraction, multiplication, division, exponentiation, and logarithm. Trigonometric functions are not yet supported by SCIP.
Nonlinear objective functions are not supported by SCIP and must be modeled as a constraint function. Note that the support for non-quadratic nonlinear constraints is not yet as robust as the rest of SCIP. Missing bounds on nonlinear variables and tiny or huge coefficients can easily lead to numerical problems, which can be avoided by careful modeling.

The methods SCIPgetVarSol() and SCIPvarGetSol() seem to have the same functionality. Which one should I use?
In fact, there is a slight difference: SCIPvarGetSol() is also able to return pseudo solution values. If you do not have an idea what pseudo solutions are, SCIPgetVarSol() should be just fine. This should be the only case of 'duplicate methods'. If you find, however, another one, please contact us.
Is there a way to visualize the branch and bound tree?
Here is a list of external tools that can be used to create interactive and noninteractive visualizations in various formats.
 HyDraw can display a live visualization of the tree using JavaView.
 vbctool comes with a viewer that has an option to uncover the nodes one by one (each time you hit the space key). Additional node information such as its lower bound, depth, and number are accessed through a context menu.
 Vbc2dot is a script that generates a visualization of SCIP's branch-and-bound tree in ps and pdf format. It is written in Ruby and requires dot (graphviz).
 GrUMPy is a Python tool that allows to create noninteractive visualizations of the tree in various formats.
For using one of these tools, SCIP lets you define file names via "set visual vbcfilename somefilename.vbc" and "set visual bakfilename somefilename.dat". GrUMPy uses BAK files while the other tools parse vbc output.
For those who want to use the step-by-step functionality of vbctool, it is necessary to use a time step counter for the visualization instead of the real time. The corresponding parameter is changed via "set visual realtime FALSE".
For users of the callable library, the corresponding parameters are called "visual/bakfilename", "visual/vbcfilename", and "visual/realtime".
SCIP has found decent primal and dual bounds but still reports the gap as "Infinity". Why?
By default, the SCIP output contains the display column "gap", which is computed as follows: If primal and dual bound have opposite signs, the gap is "Infinity". If primal and dual bound have the same sign, the gap is |primalbound - dualbound| / min(|primalbound|, |dualbound|) (see SCIPgetGap()). This definition has the advantage that the gap decreases monotonously during the solving process.
An alternative definition of the gap is (dual bound - primal bound) / abs(primal bound). As in the previous gap definition, if primal and dual bound have opposite signs, the gap is "Infinity". In SCIP, this definition can be included in the output by enabling the display column "primalgap": "display/primalgap/active = 2".
SCIP crashes during symmetry detection with bliss. What can I do?
The bliss library can be compiled with or without GMP support. If bliss is compiled with GMP, the macro definition BLISS_USE_GMP must be added, otherwise the headers do not match the library, which produces the crash. If bliss is compiled as a shared library, the CMake system should be able to detect this automatically. In case you do not need symmetry handling, the easiest way to resolve the problem is to just disable symmetry handling during compilation (see Makefiles or CMake) or with the SCIP parameter "misc/usesymmetry = 0". Otherwise, it can be resolved by either adding the macro definition with the compile flag -DBLISS_USE_GMP or by compiling bliss differently, i.e., without GMP support.
How can I run SCIP in parallel?
SCIP itself is mostly a sequential solver and does not feature internal parallelism, e.g., to process branch-and-bound nodes in parallel. However, there are two ways to make use of parallel hardware:
 First, using the command "concurrentopt" in the interactive shell or the method SCIPsolveConcurrent(), one can spawn multiple solves in parallel, which exchange bound changes and primal solutions and will terminate as soon as the fastest solver terminates. For this to work, SCIP must be built with the TPI option.
 Second, one can use the external UG framework for shared-memory and distributed-memory parallelization of the tree search. See the UG web page for further details.
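From the C API, the first approach is a drop-in replacement for SCIPsolve(); a hedged sketch, assuming a TPI-enabled build (the parameter name "parallel/maxnthreads" and the helper name are assumptions to be checked against your SCIP version):

```c
#include <scip/scip.h>

/* Sketch: solve a fully constructed problem with concurrent solvers.
 * SCIPsolveConcurrent() behaves like SCIPsolve() but spawns parallel
 * solves that share bound changes and primal solutions. */
static SCIP_RETCODE solve_in_parallel(SCIP* scip)
{
   /* limit the number of concurrent solvers (assumed parameter name) */
   SCIP_CALL( SCIPsetIntParam(scip, "parallel/maxnthreads", 8) );
   SCIP_CALL( SCIPsolveConcurrent(scip) );
   return SCIP_OKAY;
}
```

Without a TPI, SCIPsolveConcurrent() cannot spawn additional solvers and the call degrades to a sequential solve.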

 What are implicit integer variables?
Originally, variable type SCIP_VARTYPE_IMPLINT was used for continuous variables that are known to take an integral value in any feasible solution.
For example, for a linear constraint Σ_{i} a_{i} x_{i} + y = b, variable y could be declared to be implicit integer if the coefficients a_{i} and b were integral for all i and variables x_{i} were of type integer (or binary) for all i. Algorithms in SCIP could make use of the knowledge that y can take only integral values in a feasible solution. For example, if some domain propagation algorithm deduces y ≥ 0.5, then the lower bound can automatically be tightened to 1. Further, cuts may be tightened when integrality of implicit integer variables can be assumed.
Of course, it would also be possible to change the type of variable y to "integer" (SCIP_VARTYPE_INTEGER), but this would mean that SCIP may try to actively enforce integrality of y, e.g., by branching on y if it takes a fractional value in a relaxation's solution. Thus, to express that a variable will eventually take an integral value but does not need to be branched on, variable type SCIP_VARTYPE_IMPLINT is used. In fact, for a linear constraint Σ_{i} a_{i} x_{i} + y = b with all coefficients being integral and all variables integer, SCIP may decide to change the type of variable y to implicit integer.
At a later time, the meaning of variable type SCIP_VARTYPE_IMPLINT was changed to apply to continuous variables that are known to take an integral value in at least one optimal solution. In addition, a check had been added to exclude solutions with a fractional value in an implicit integer variable.
This less restrictive definition allowed the variable type to be used in additional situations. For example, for a linear constraint Σ_{i} a_{i} x_{i} + y ≤ b where again the coefficients a_{i} and b were integral for all i, and variables x_{i} are of type integer (or binary) for all i, variable y could be declared to be implicit integer if it does not appear in any other constraint and its bounds are integral, since for any optimal solution where y takes a fractional value, another optimal solution with y rounded up or down exists. (Generalizations for the case where y appears in additional constraints exist, see cons_linear.c:fullDualPresolve().)
Again, changing the type of a variable from continuous to implicit integer has the advantage that algorithms can take advantage of this information, e.g., for deriving tighter bounds in domain propagation.
The check that excluded solutions with a fractional value in an implicit integer variable had the disadvantage that it forbids solutions which would have been feasible before a continuous variable was changed to an implicit integer one. In addition, it required that primal heuristics that construct solutions had to take care of ensuring integral values for implicit integer variables. This led to confusion when a heuristic decided that, after having constructed integral values for all integer variables, it would be sufficient to check all constraints other than cons_integral to ensure feasibility.
As a consequence, the check that implicit integer variables need to have an integral value in a solution has been dropped with SCIP 8.0.5. Since it is possible to use variable type SCIP_VARTYPE_IMPLINT also in the original problem formulation, this also required updating the definition of this variable type again. The current definition reads: a continuous variable with an optional integrality restriction.
Therefore, setting the type of a variable to SCIP_VARTYPE_IMPLINT means that SCIP does not ensure that the variable will take an integral value in a solution. However, it may apply reductions that prevent the variable from taking any fractional value. Integrality therefore serves as a kind of "soft constraint" here.
The typical use case for this variable type are variables which are known to take an integral value in all feasible or at least all optimal solutions (and where one does not want to rely on SCIP to detect this). For this reason, and to avoid major API changes, the variable type has not been renamed.
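Declaring such a variable at problem creation time can be sketched with the convenience function SCIPcreateVarBasic(); a minimal sketch where the variable name, bounds, and objective coefficient are illustrative:

```c
#include <scip/scip.h>

/* Sketch: create a variable y that SCIP may treat as integral without
 * ever branching on it, by giving it type SCIP_VARTYPE_IMPLINT. */
static SCIP_RETCODE add_implint_var(SCIP* scip, SCIP_VAR** y)
{
   /* bounds [0, 10], objective coefficient 1.0 (all illustrative) */
   SCIP_CALL( SCIPcreateVarBasic(scip, y, "y", 0.0, 10.0, 1.0,
         SCIP_VARTYPE_IMPLINT) );
   SCIP_CALL( SCIPaddVar(scip, *y) );
   return SCIP_OKAY;
}
```

The caller still owns the returned variable pointer and must release it with SCIPreleaseVar() once it is no longer needed.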
Using SCIP as a standalone solver

The output is too wide for my terminal window. What can I do?
In the interactive shell you can set the width of the output with the command "set display width" followed by an appropriate number. See also the next question.
What do the cryptic abbreviations for the columns mean which are displayed during the solving process of SCIP?
Type "display display" in the interactive shell to get an explanation of them.
By the way: if a letter appears in front of a display row, it indicates which heuristic found the new primal bound, a star representing an integral LP relaxation.
Typing "display statistics" after finishing or interrupting the solving process gives you plenty of extra information about the solving process.
(Typing "display heuristics" gives you a list of the heuristics including their letters.)
Why does SCIP claim that it could not find the user parameters "scip.set"? Where can I get such a file?
SCIP comes with default settings that are automatically active when you start the interactive shell. However, you have the possibility to save customized settings via the "set save" and "set diffsave" commands. Both commands will prompt you to enter a file name and save either all or customized parameters only to the specified file. A user parameter file that you save as "scip.set" has a special meaning: whenever you invoke SCIP from a directory containing a file named "scip.set", the settings therein overwrite the default settings. For more information about customized settings, see the Tutorial on the interactive shell. Settings files can become incompatible with later releases if we decide to rename/delete a parameter. Information about this can be found in the CHANGELOG for every release, see also this related question.
How do I change the behavior of SCIP?
You can switch the settings for all presolving, heuristics, and separation plugins to three different modes via the "set {presolving, heuristics, separation} emphasis" parameters in the interactive shell. "off" turns off the respective type of plugins, "fast" chooses settings that lead to less time spent in this type of plugins, decreasing their impact, and "aggressive" increases the impact of this type of plugins. You can combine these general settings for cuts, presolving, and heuristics arbitrarily.
"display parameters" shows you which settings currently differ from their default, "set default" resets them all. Furthermore, there are complete settings that can be set by "set emphasis", i.e., settings for pure feasibility problems, solution counting, and CP-like search.
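When using SCIP as a library, the same emphasis switches are available as C functions; a minimal sketch assuming an existing SCIP instance (the combination of modes is arbitrary, as noted above):

```c
#include <scip/scip.h>

/* Sketch: apply the shell's "set {presolving,heuristics,separating}
 * emphasis" switches programmatically; TRUE = quiet output. */
static SCIP_RETCODE set_emphases(SCIP* scip)
{
   SCIP_CALL( SCIPsetHeuristics(scip, SCIP_PARAMSETTING_AGGRESSIVE, TRUE) );
   SCIP_CALL( SCIPsetPresolving(scip, SCIP_PARAMSETTING_FAST, TRUE) );
   SCIP_CALL( SCIPsetSeparating(scip, SCIP_PARAMSETTING_OFF, TRUE) );
   return SCIP_OKAY;
}
```

SCIP_PARAMSETTING_DEFAULT restores the default mode for a plugin type.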
How can I learn more about/from the presolve reasoning SCIP applies to my combinatorial optimization problem?
You can look at the statistics (type "display statistics" in the interactive shell or call SCIPprintStatistics() when using SCIP as a library). This way you can see which of the presolvers, propagators, or constraint handlers performed the reductions. Then, add a "#define SCIP_DEBUG" as the first line of the corresponding *.c file in src/scip (e.g., cons_linear.c or prop_probing.c), recompile, and run again. You will get heaps of information now. Looking into the code and documentation of the corresponding plugin and resources from the literature helps with the investigation.
I recognized that one special plugin works very poorly / very well for my problem and I want to disable it / weaken its influence / intensify its influence. How do I do this?

For using a non-default branching rule or node selection strategy as standard, you just have to give it the highest priority, using
SCIP> set branching <name of a branching rule> priority 9999999
SCIP> set nodeselectors <name of a node selector> priority 9999999
SCIP> display branching
SCIP> display nodeselectors

If you want to completely disable a heuristic or a separator, you have to set its frequency to -1 (and the sepafreq to -1 for separation by constraint handlers, respectively). The commands look like this:
SCIP> set heuristics <name of a heuristic> freq -1
SCIP> set separators <name of a separator> freq -1
SCIP> set constraints <name of a constraint handler> sepafreq -1

For disabling a presolver, you have to set its maxrounds parameter to 0.
SCIP> set presolvers <name of a presolver> maxrounds 0

If you want to intensify the usage of a heuristic,
you can reduce its frequency to some smaller, positive value, and/or raise the quotient and
offset values (maxlpiterquot for diving heuristics, nodes for LNS heuristics).
SCIP> set heuristics <name of a heuristic> freq <some value>
SCIP> set heuristics <name of a diving heuristic> maxlpiterquot <some value>
SCIP> set heuristics <name of a LNS heuristic> nodesquot <some value>

For intensifying the usage of a separator,
you can raise its maxroundsroot and maxsepacutsroot values.
SCIP> set separators <name of a separator> maxroundsroot <some value>
SCIP> set separators <name of a separator> maxrounds <some value>
SCIP> set separators <name of a separator> freq <some value>
SCIP> set separators <name of a separator> maxsepacuts <some value>
 For weakening, you should just do the opposite operation, i.e., reducing the values you would raise for intensification and vice versa.

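All of these shell parameters can also be set from C code through the generic parameter API; a sketch where the plugin names "rens", "gomory", and "dualagg" are merely examples of a heuristic, a separator, and a presolver:

```c
#include <scip/scip.h>

/* Sketch: tune plugins programmatically instead of via the shell. */
static SCIP_RETCODE tune_plugins(SCIP* scip)
{
   /* disable the RENS heuristic (freq = -1 turns a heuristic off) */
   SCIP_CALL( SCIPsetIntParam(scip, "heuristics/rens/freq", -1) );
   /* intensify Gomory cuts: allow more separation rounds in the root */
   SCIP_CALL( SCIPsetIntParam(scip, "separating/gomory/maxroundsroot", 15) );
   /* disable a presolver by limiting its rounds to 0 */
   SCIP_CALL( SCIPsetIntParam(scip, "presolving/dualagg/maxrounds", 0) );
   return SCIP_OKAY;
}
```

The full list of parameter names is printed by "set save" or "display parameters" in the shell.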

How can I use my own functions in the interactive shell/extend the set of available interactive shell commands?
If you want to keep the interactive shell functionality, you could add a dialog handler that introduces a new SCIP shell command that
 solves the problem and calls your function afterwards or
 checks whether the stage is SOLVED and only calls your function.
Search for SCIPdialogExecOptimize in src/scip/dialog_default.c to see how the functionality of the "optimize" command is invoked. Also, in src/scip/cons_countsols.c, you can see an example of a dialog handler being added to SCIP. If this is the way you go, please check the How to add dialogs section of the doxygen documentation.
How can I input a MINLP into SCIP?
Please consult this overview on the problem classes supported by SCIP and the recommendations and links for MINLPs therein.

Does SCIP handle symmetries inherent to my problem?
SCIP provides two different mechanisms to handle symmetries on binary variables in mixed-integer programs. The first approach is based on symmetry handling inequalities as well as the propagation of these constraints, whereas the second approach is purely propagation based, so-called orbital fixing. To select which symmetry handling approach is used, you can use
SCIP> set misc usesymmetry 1
to use symmetry handling inequalities,
SCIP> set misc usesymmetry 2
to use orbital fixing, and
SCIP> set misc usesymmetry 0
to deactivate symmetry handling.
Symmetries of general integer or continuous variables in mixedinteger programs or symmetries in nonlinear programs, however, are currently not handled in SCIP.

Which symmetry handling method should I use?
There is no clear answer to this question. The advantage of orbital fixing is that all symmetry handling reductions are based on propagation decisions. Thus, symmetry handling does not increase the size of your LP relaxations. Depending on the problem, however, additional symmetry handling inequalities might interact with other cutting plane separators such that stronger cuts can be found. So, if you are dealing with a class of optimization problems, you should test which of the two methods performs better.

SCIP's incumbent solution is infeasible in the original space. What can I do?
SCIP uses a relative feasibility tolerance to check if a solution satisfies a constraint. Thus, the acceptable violation of a constraint (in absolute terms) is higher for large left/right hand sides. Certain computations that change the lhs/rhs of constraints can lead to an increased tolerance in the transformed space while solving. In such cases, SCIP may find solutions that are infeasible in the original problem space, since the feasibility tolerance in the original space is smaller. Most of the time, this is due to (multi)aggregations of variables, because SCIP cannot centrally guarantee that a (multi)aggregation requested by one constraint handler does not lead to numerical instability of other constraints.
The emphasis/numerics setting disables and enables different settings in SCIP to prevent violations in the original problem space. This numerical stability comes at the cost of performance. You can enable it in one of the following ways:
 In the interactive shell: SCIP> set emphasis numerics
 Through the C API: SCIP_CALL( SCIPsetEmphasis(scip, SCIP_PARAMEMPHASIS_NUMERICS, quiet) );

What is displayed in the column "compl." of the SCIP output?
This display column displays an approximate percentage of search completion. Ideally, the displayed percentage is a close approximation of the fraction of completed search nodes (with respect to the end of the search). The column makes use of different search tree statistics to compute this approximation. The concrete method that is used can be changed via the "estimation/completiontype" user parameter. Several methods can be trained to perform well on instances of interest, see also this tutorial.
What is the difference between SCIP's search tree and the "Estimation Tree" in the SCIP statistics?
The size of the search tree that is reported as "nodes" in the SCIP statistics at termination does not necessarily include all nodes that have been created by branching. This statistic only reports the number of nodes that were processed during the search. In particular, open nodes that are pruned from the search tree when a new and better incumbent solution becomes available are unaccounted for. Several of the estimation methods also count these pruned nodes, such that the estimation tree is in fact a binary tree at all steps: every pruned node is counted as a solved node by the estimation tree. Because of this discrepancy in how nodes are counted, the estimation tree may contain more nodes than the actual search tree. Note that the estimation tree is not stored explicitly in memory, but only represented by a constant number of node counters and its tree weight.
Using SCIP included in another source code

How do I construct a problem instance in SCIP?
For starters, SCIP comes with complete examples in source code that illustrate the problem creation process. Please refer to the examples of the Callable Library section in the Example Documentation of SCIP.
First you have to create a SCIP object via SCIPcreate(), then you start to build the problem via SCIPcreateProb(). Then you create variables via SCIPcreateVar() and add them to the problem via SCIPaddVar().
The same has to be done for the constraints. For example, if you want to fill in the rows of a general MIP, you have to call SCIPcreateConsLinear(), SCIPaddCons() and additionally SCIPreleaseCons() after finishing. If all variables and constraints are present, you can initiate the solution process via SCIPsolve().
Make sure to also call SCIPreleaseVar() if you do not need the variable pointer anymore. For an explanation of creating and releasing objects, please see the notes on releasing objects. 
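The steps above might look as follows in C (a minimal sketch using the convenience *Basic creation methods; the problem name, variable, bounds, and constraint data are made up for illustration):

```c
#include <scip/scip.h>
#include <scip/scipdefplugins.h>

static SCIP_RETCODE buildAndSolve(void)
{
   SCIP* scip;
   SCIP_VAR* x;
   SCIP_CONS* cons;

   SCIP_CALL( SCIPcreate(&scip) );
   SCIP_CALL( SCIPincludeDefaultPlugins(scip) );
   SCIP_CALL( SCIPcreateProbBasic(scip, "example") );

   /* create an integer variable 0 <= x <= 10 with objective coefficient 1.0 */
   SCIP_CALL( SCIPcreateVarBasic(scip, &x, "x", 0.0, 10.0, 1.0, SCIP_VARTYPE_INTEGER) );
   SCIP_CALL( SCIPaddVar(scip, x) );

   /* create a linear constraint 2 x <= 9, add it, and release the handle */
   SCIP_CALL( SCIPcreateConsBasicLinear(scip, &cons, "c1", 0, NULL, NULL,
         -SCIPinfinity(scip), 9.0) );
   SCIP_CALL( SCIPaddCoefLinear(scip, cons, x, 2.0) );
   SCIP_CALL( SCIPaddCons(scip, cons) );
   SCIP_CALL( SCIPreleaseCons(scip, &cons) );

   SCIP_CALL( SCIPsolve(scip) );

   /* release the variable pointer and free SCIP */
   SCIP_CALL( SCIPreleaseVar(scip, &x) );
   SCIP_CALL( SCIPfree(&scip) );

   return SCIP_OKAY;
}
```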
I already know a solution in advance, which I want to pass to SCIP. How do I do this?
First you have to build your problem (at least all variables have to exist), then there are several different ways:

You have the solution in a file which fits the solution format of SCIP; then you can use SCIPreadSol() to pass that solution to SCIP. 
You create a new SCIP primal solution candidate by calling SCIPcreateSol() and set all nonzero values by calling SCIPsetSolVal(). After that, you add this solution by calling SCIPaddSol() (the variable stored should be TRUE afterwards, if your solution was added to the solution candidate store) and then release it by calling SCIPfreeSol(). Instead of adding and releasing sequentially, you can use SCIPaddSolFree(), which tries to add the solution to the candidate store and frees the solution afterwards. 
Since SCIP 4.0.0, there is the possibility to create partial solutions via SCIPcreatePartialSol(). A solution is partial if not all solution values are known before the solve. After creation, all solution values are unknown unless explicitly given via SCIPsetSolVal(). In contrast, solutions created via SCIPcreateSol() implicitly assume a solution value of zero. A typical example for a problem involving integer and continuous variables is a tentative assignment for all integer variables, so that values for the continuous variables should be determined by the solver. After starting the solving process, SCIP will try to heuristically complete all partial solutions that were added during problem creation.

What operational stages of SCIP are there and are they important for me?
There are fourteen different stages during a run of SCIP. There are some methods which cannot be called in all stages, consider for example
SCIPtrySol()
(see previous question). 
What is the difference between the original and the transformed problem?
Before the solving process starts, the original problem is copied. This copy is called "transformed problem", and all modifications during the presolving and solving process are only applied to the transformed problem.
This has two main advantages: first, the user can also modify the problem after partially solving it. All modifications done by SCIP (presolving, cuts, variable fixings) during the partial solving process will be deleted together with the transformed problem, the user can modify the original problem and restart solving. Second, the feasibility of solutions is always tested on the original problem! 
Why do the names, e.g., in debug messages often differ from the ones I defined?
This can have several reasons. Especially names of binary variables can get different prefixes and suffixes. Each transformed variable and constraint (see here) gets a "t_" as prefix. Apart from that, the meaning of original and transformed variables and constraints is identical.
General integers with bounds that differ just by 1 will be aggregated to binary variables which get the same name with the suffix "_bin". E.g., an integer variable t_x with lower bound 4 and upper bound 5 will be aggregated to a binary variable t_x_bin = t_x - 4.
Variables can have negated counterparts, e.g., for a binary t_x its (also binary) negated variable would be t_x_neg = 1 - t_x.
The knapsack constraint handler is able to disaggregate its constraints to cliques, which are set packing constraints, and creates names that consist of the knapsack's name and a suffix "_clq_<int>". E.g., a knapsack constraint knap: x_1 + x_2 + 2 x_3 ≤ 2 could be disaggregated to the set packing constraints knap_clq_1: x_1 + x_3 ≤ 1 and knap_clq_2: x_2 + x_3 ≤ 1. 
What is SCIP_CALL()? Do I need this?
Yes, you do. SCIP_CALL() is a global define which handles the return codes of all methods that return a SCIP_RETCODE and should therefore wrap each call to such a method. SCIP_OKAY is the code which is returned if everything worked well; there are 17 different error codes, see type_retcode.h. Each method that calls methods returning a SCIP_RETCODE should itself return a SCIP_RETCODE. If this is not possible, use SCIP_CALL_ABORT() to catch the return codes of the methods. If you do not want to use this either, you have to do the exception handling (i.e., the case that the return code is not SCIP_OKAY) on your own.
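A minimal sketch of this convention (the function names are made up for illustration):

```c
/* a method that calls SCIP functions should itself return a SCIP_RETCODE,
 * so that errors propagate upwards via SCIP_CALL() */
static SCIP_RETCODE setMyTimeLimit(SCIP* scip, SCIP_Real seconds)
{
   SCIP_CALL( SCIPsetRealParam(scip, "limits/time", seconds) );
   return SCIP_OKAY;
}

/* in a main() that returns int, SCIP_CALL() cannot be used directly;
 * catch the return codes with SCIP_CALL_ABORT() instead */
int main(void)
{
   SCIP* scip = NULL;
   SCIP_CALL_ABORT( SCIPcreate(&scip) );
   SCIP_CALL_ABORT( setMyTimeLimit(scip, 3600.0) );
   SCIP_CALL_ABORT( SCIPfree(&scip) );
   return 0;
}
```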

I want to stop the solving process after a certain time. How can I do this?
Limits are given by parameters in SCIP, for example limits/time for a time limit or limits/nodes for a node limit. If you want to set a limit, you have to change these parameters. For example, for setting the time limit to one hour, you have to call SCIP_CALL( SCIPsetRealParam(scip, "limits/time", 3600) );. In the interactive shell, you just enter set limits time 3600. For more examples, please have a look into heur_rens.c. 
Is it possible to avoid that the SCIP library overrides my signal handler for interruptions?
By default, the SCIP library has an internal handler for the SIGINT signal, which is usually emitted when a user presses CTRL-C during the solution process. In cases where this overrides the signal handling of a surrounding application, this may be undesirable. As a remedy, the parameter misc/catchctrlc can be set to FALSE, or the SCIP library can be compiled with the special C compiler flag "-DNO_SIGACTION"; both disable SCIP's internal signal handling.
Using SCIP as a Branch-Cut-And-Price Framework

How do I start a project?

What types of plugins can I add and how do I do this?
See the doxygen documentation for a list of plugin types. There is a HowTo for each of them.

When should I implement a constraint handler, when should I implement a separator?
This depends on whether you want to add constraints or only cutting planes. The main difference is that constraints can be "model constraints", while cutting planes are only additional LP rows that strengthen the LP relaxation. A model constraint is a constraint that is important for the feasibility of the integral solutions. If you delete a model constraint, some infeasible integral vectors would suddenly become feasible in the reduced model. A cutting plane is redundant w.r.t. integral solutions. The set of feasible integral vectors does not change if a cutting plane is removed. You can, however, relax this condition slightly and add cutting planes that do cut off feasible solutions, as long as at least one of the optimal solutions remains feasible.
You want to use a constraint handler in the following cases:
 Some of your feasibility conditions cannot be expressed by existing constraint types (e.g., linear constraints), or you would need too many of them. For example, the "nosubtour" constraint in the TSP is equivalent to exponentially many linear constraints. Therefore, it is better to implement a "nosubtour" constraint handler that can inspect solutions for subtours and generate subtour elimination cuts and other cuts (e.g., comb inequalities) to strengthen the LP relaxation.
 Although you can express your feasibility condition by a reasonable number of existing constraint types, you can represent and process the condition in a more efficient way. For example, it may be that you can, due to your structural knowledge, implement a stronger or faster domain propagation or find tighter cutting planes than what one could do with the sum of the individual "simple" constraints that model the feasibility condition.
You want to use a cutting plane separator in the following cases:
 You have a general purpose cutting plane procedure that can be applied to any MIP. It does not use problem specific knowledge. It only looks at the LP, the integrality conditions, and other deduced information like the implication graph.
 You can describe your feasibility condition by a set C of constraints of existing type (e.g., linear constraints). The cuts you want to separate are model specific, but apart from these cuts, there is nothing you can gain by substituting the set C of constraints with a special purpose constraint. For example, the preprocessing and the domain propagation methods for the special purpose constraint would do basically the same as what the existing constraint handler does with the set C of constraints. In this case, you don't need to implement the more complex constraint handler. You add constraints of existing type to your problem instance in order to produce a valid model, and you enrich the model by your problem specific cutting plane separator to make the solving process faster. You can easily evaluate the performance impact of your cutting planes by enabling and disabling the separator.
Note that a constraint handler is defined by the type of constraints that it manages. For constraint handlers, always think in terms of constraint programming. For example, the "nosubtour" constraint handler in the TSP example (see "ConshdlrSubtour.cpp" in the directory "scip/examples/TSP/src/") manages "nosubtour" constraints, which demand that in a given graph no feasible solution can contain a tour that does not contain all cities. In the usual TSP problem, there is only one "nosubtour" constraint, because there is only one graph for which subtours have to be ruled out. The "nosubtour" constraint handler has various ways of enforcing the "nosubtour" property of the solutions. A simple way is to just check each integral solution candidate (in the CONSCHECK, CONSENFOLP, and CONSENFOPS callback methods) for subtours. If there is a subtour, the solution is rejected. A more elaborate way includes the generation of "subtour elimination cuts" in the CONSSEPALP callback method of the constraint handler. Additionally, the constraint handler may want to separate other types of cutting planes like comb inequalities in its CONSSEPALP callback.

Can I remove unnecessary display columns or—even better—add my own ones? Can I change the statistics displayed at the end of solving?
Setting the status of a display column to 0 turns it off. E.g., type
set display memused status 0
in the interactive shell to disable the memory information column, or include the lineSCIPsetIntParam(scip, "display/memused/status", 0)
into your source code. Adding your own display column can be done by calling theSCIPincludeDisp()
method, see the doxygen documentation.
The statistic display, which is shown bydisplay statistics
andSCIPprintStatistics()
, respectively, cannot be changed. 
What do LP rows look like in SCIP?
Each row is of the form lhs ≤ Σ(val[j]·col[j]) + const ≤ rhs. For now, val[j]·col[j] can be interpreted as a_{ij}·x_{j} (for the difference between columns and variables see here). The constant is essentially needed for collecting the influence of presolving reductions like variable fixings and aggregations.
The lhs and rhs may take infinite values: a less-than inequality would have lhs = -∞, and a greater-than inequality would have rhs = +∞. For equations lhs is equal to rhs. An infinite left hand side can be recognized by SCIPisInfinity(scip, -lhs), an infinite right hand side can be recognized by SCIPisInfinity(scip, rhs). 
How do I get the data of the current LP relaxation?
You can get all rows in the current LP relaxation by calling SCIPgetLPRowsData(). The methods SCIProwGetConstant(), SCIProwGetLhs(), SCIProwGetRhs(), SCIProwGetVals(), SCIProwGetNNonz(), SCIProwGetCols() then give you information about each row, see the previous question.
You get a column-wise representation by calling SCIPgetLPColsData(). The methods SCIPcolGetLb() and SCIPcolGetUb() give you the locally valid bounds of a column in the LP relaxation of the current branch-and-bound node.
If you are interested in global information, you have to call SCIPcolGetVar() to get the variable associated to a column (see next question), which you can ask for global bounds via SCIPvarGetLbGlobal() and SCIPvarGetUbGlobal() as well as the type of the variable (binary, general integer, implicit integer, or continuous) by calling SCIPvarGetType(). For more information, also see this question. 
What is the difference between columns and variables, rows and constraints?
The terms columns and rows always refer to the representation in the current LP relaxation, variables and constraints to your global Constraint Integer Program.
Each column has an associated variable, which it represents, but not every variable must be part of the current LP relaxation. E.g., it could be already fixed, aggregated to another variable, or be priced out if a column generation approach was implemented.
Each row has either been added to the LP by a constraint handler or by a cutting plane separator. A constraint handler is able to, but does not need to, add one or more rows to the LP as a linear relaxation of each of its constraints. E.g., in the usual case (i.e., without using dynamic rows) the linear constraint handler adds one row to the LP for each linear constraint.

Are the variables and rows sorted in any particular order?
The variable array which you get by SCIPgetVars() is internally sorted by variable types. The ordering is binary, integer, implicit integer, and continuous variables, i.e., the binary variables are stored at positions [0,...,nbinvars-1], the general integers at [nbinvars,...,nbinvars+nintvars-1], and so on. It holds that nvars = nbinvars + nintvars + nimplvars + ncontvars. There is no further sorting within these sections, and there is no sorting for the rows. But each column and each row has a unique index, which can be obtained by SCIPcolGetIndex() and SCIProwGetIndex(), respectively. 
When should I use which of the numerical comparison functions?
There are various numerical comparison functions available, each of them using a different epsilon in its comparisons. Let's take the equality comparison as an example. There are the following methods available:
SCIPisEQ(), SCIPisSumEQ(), SCIPisFeasEQ(), SCIPisRelEQ(), SCIPisSumRelEQ().
SCIPisEQ() should be used to compare two single values that are either results of a simple calculation or are input data. The comparison is done w.r.t. the "numerics/epsilon" parameter, which is 1e-9 in the default settings. 
SCIPisSumEQ() should be used to compare the results of two scalar products or other "long" sums of values. In these sums, numerical inaccuracy can occur due to cancellation of digits in the addition of values with opposite sign. Therefore, SCIPisSumEQ() uses a relaxed equality tolerance of "numerics/sumepsilon", which is 1e-6 in the default settings. 
SCIPisFeasEQ() should be used to check the feasibility of some result, for example after you have calculated the activity of a constraint and compare it with the left and right hand sides. The feasibility is checked w.r.t. the "numerics/feastol" parameter, and equality is defined in a relative fashion in contrast to absolute differences. That means, two values are considered to be equal if their difference divided by the larger of their absolute values is smaller than "numerics/feastol". This parameter is 1e-6 in the default settings. 
SCIPisRelEQ() can be used to check the relative difference between two values, just like what SCIPisFeasEQ() is doing. In contrast to SCIPisFeasEQ(), it uses "numerics/epsilon" as tolerance. 
SCIPisSumRelEQ() is the same as SCIPisRelEQ() but uses "numerics/sumepsilon" as tolerance. It should be used to compare two results of scalar products or other "long" sums.


How do I solve an LP inside my SCIP plugin?
If the LP is only a slightly modified version of the LP relaxation (changed variable bounds or objective coefficients), then you can use SCIP's diving mode: methods SCIPstartDive(), SCIPchgVarLbDive(), SCIPsolveDiveLP(), etc.
Alternatively, SCIP's probing mode allows for a tentative depth first search in the tree and can solve the LP relaxations at each node: methods SCIPstartProbing(), SCIPnewProbingNode(), SCIPfixVarProbing(), etc. However, you cannot change objective coefficients or enlarge variable bounds in probing mode.
If you need to solve a separate LP, creating a sub-SCIP is not recommended because of the overhead involved and because dual information is not accessible (compare here). Instead you can use SCIP's LP interface. For this you should include lpi/lpi.h and call the methods provided therein. Note that the LPI can be used independently from SCIP. 
Can I write my own symmetry handling plugin for SCIP?
Yes, you can write your own symmetry handling plugin. To avoid conflicts with the internal symmetry handling methods of SCIP, you should deactivate SCIP's symmetry handling routines by calling SCIP_CALL( SCIPsetIntParam(scip, "misc/usesymmetry", 0) );. 
What is the difference between sepastore and cutpool, and when should one add a cut to one or the other?
Both, a sepastore and a cutpool, are data structures that can hold cuts. However, their main difference is how they are used by SCIP.
In each separation round, cutting planes are added to SCIP's internal separation storage, that is, SCIP's sepastore. The cuts are added to the sepastore with SCIPaddRow(). Before the cuts from the sepastore are added to the current relaxation, they are filtered through the cut selectors. Note that by adding cuts with SCIPaddRow() one has the option to force the cut. This means that the cut is not going to be filtered out by the cut selectors. Moreover, if in a separation round all cutting planes are forced, then the cut selectors are not called. After each separation round, all cutting planes in the sepastore are removed.
Cutting planes are mainly generated by constraint handlers and separators. In addition to adding cuts to SCIP with SCIPaddRow(), they can also add cuts with SCIPaddPoolCut() (although only when they are globally valid). The difference is that a cut added with SCIPaddPoolCut() does not go to the sepastore, but to SCIP's global cutpool. In each separation round, the global cutpool can then also add cuts to SCIP's sepastore.
If a cut in the global cutpool is not added to the sepastore (e.g., because it is not efficacious enough), its age is increased. If its age reaches separating/cutagelimit, then the cut is removed from the cutpool. As soon as a cut from the cutpool is added to the sepastore, its age counter goes down to 0, and its age remains 0 as long as the cut is in the LP relaxation. In particular, this means that cuts from the global cutpool added to the sepastore are not removed from the global cutpool. This might make sense for the following reasons. Even if the cut makes it to the sepastore, it does not mean it is going to enter the LP relaxation, because it might be filtered out by a cut selector. Furthermore, if the cut enters the LP, it can still age out of the LP (e.g., because it is not active in enough consecutive LP relaxations), but the cut might still be interesting in other parts of the search tree. Thus, it makes sense to add cuts to the global cutpool that are known to be strong or expensive to generate.
Another difference between sepastore and cutpool is that any plugin can create its own cutpool, while only the SCIP core can create a sepastore.
In summary, the purpose of the global cutpool is to store globally valid cuts that can be reused in later separation rounds, while the purpose of the sepastore is to collect the cuts that are to be added in a single separation round. In each separation round, constraint handlers and separators can add cuts to the global cutpool, and constraint handlers, separators, and the global cutpool can add cuts directly to the sepastore. Furthermore, locally valid cuts can only be added to the sepastore, and expensive globally valid cuts might be better suited for the global cutpool.
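A sketch of how a separator might use the two calls (row creation is omitted; `row` is an assumed SCIP_ROW* generated by the separator):

```c
/* add the cut to the sepastore for this round; keep globally valid cuts
 * additionally in the global cutpool for later separation rounds */
SCIP_Bool infeasible;

if( SCIProwIsLocal(row) )
{
   /* locally valid cuts can only go to the sepastore */
   SCIP_CALL( SCIPaddRow(scip, row, FALSE, &infeasible) );  /* FALSE: not forced */
}
else
{
   SCIP_CALL( SCIPaddRow(scip, row, FALSE, &infeasible) );
   SCIP_CALL( SCIPaddPoolCut(scip, row) );  /* reusable in later rounds */
}
```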
Specific questions about Column Generation and Branch-And-Price with SCIP

What can I expect when using SCIP as a Branch-Cut-and-Price framework?
If you want to use SCIP as a branch-and-price framework, you normally need to implement a reader to read in your problem data and build the problem, a pricer to generate new columns, and a branching rule to do the branching (see also this question on how to store branching decisions, if needed). SCIP takes care of everything else, for example the branch-and-bound tree management and LP solving, including storage of warm-start bases. Moreover, many of SCIP's primal heuristics will be used and can help to improve your primal bound. However, this also comes with a few restrictions: You are not allowed to change the objective function coefficients of variables during the solving process, because that would mean that previously computed dual bounds might have to be updated. This prevents the use of dual variable stabilization techniques based on a (more or less strict) bounding box in the dual. We are working on making this possible and recommend using a weighted sum stabilization approach until then. Another thing that SCIP does for you is the dynamic removal of columns from the LP due to aging (see also the next two questions). However, due to the way simplex bases are stored in SCIP, columns can only be removed at the same node where they were created.

Why are not all variables in the LP?
With SCIPgetLPColsData() you can obtain the columns of the current LP relaxation. It is correct that not all variables are necessarily part of the current LP relaxation. In particular, in branch-and-price the variables generated at one node in the tree are not necessarily included in the LP relaxation of a different node (e.g., if the other node is not a descendant of the first node). But even if you are still at the same node or at a descendant node, SCIP can remove columns from the LP if they are 0 in the LP relaxation. This dynamic column deletion can be avoided by setting the "removable" flag to FALSE in the SCIPcreateVar() call. 
I only implemented one pricer, why is there a second one, called variable pricer?
As described in the previous question, it may happen that some variables are not in the current LP relaxation. Nevertheless, these variables still exist, and SCIP can calculate their reduced costs and add them to the LP again, if necessary. This is the job of the variable pricer. It is called before all other pricers.

How can I store branching decisions?
This is a very common problem in Branch-and-Price, which you can deal with nicely using SCIP. There are basically three different options. The first one is to add binary variables to the problem that encode branching decisions. Then constraints should be added that enforce the corresponding branching decisions in the subtrees.
If you have complex pricer data like a graph and need to update it after each branching decision, you should introduce "marker constraints" that are added to the branching nodes and store all the information needed (see the next question).
The third way is to use an event handler, which is described here.

I want to store some information at the nodes and update my pricer's data structures when entering a new node. How can I do that?
This can be done by creating a new constraint handler with constraint data that can store the information and do/undo changes in the pricer's data structures.
Once you have such a constraint handler, just create constraints of this type and add them to the child nodes of your branching by SCIPaddConsNode(). Make sure to set the "stickingatnode" flag to TRUE in order to prevent SCIP from moving the constraint around in the tree.
In general, all methods of the constraint handler (check, enforcing, separation, ...) should be empty (which means that they always return the status SCIP_FEASIBLE for the fundamental callbacks), just as if all constraints of this type were always feasible. The important callbacks are the CONSACTIVE and CONSDEACTIVE methods for communicating the constraints along the active path to your pricer, and the CONSDELETE callback for deleting data of constraints at nodes which became obsolete.
The CONSACTIVE method is always called when a node is entered on which the constraint has been added. Here, you need to apply the changes to your pricing data structures. The CONSDEACTIVE method will be called if the node is left again. Since the CONSACTIVE and CONSDEACTIVE methods of different constraints are always called in a stack-like fashion, this should be exactly what you need.
All data of a constraint need to be freed by implementing an appropriate CONSDELETE callback.
If you need to fix variables for enforcing your branching decision, this can be done in the propagation callback of the constraint handler. Since, in general, each node is only propagated once, in this case you will have to check in your CONSACTIVE method whether new variables were added after your last propagation of this node. If this is the case, you will have to mark the node for repropagation with SCIPrepropagateNode().
You can look into the constraint handler of the coloring problem (examples/Coloring/src/cons_storeGraph.c) to get an example of a constraint handler that does all these things.
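A sketch of adding such a marker constraint to a branching child (the constraint handler "samediff" and its creation method `createConsSamediff()` are hypothetical placeholders for your own handler):

```c
/* create a child node and attach a marker constraint to it */
SCIP_NODE* childnode;
SCIP_CONS* cons;

SCIP_CALL( SCIPcreateChild(scip, &childnode, 0.0, SCIPgetLocalTransEstimate(scip)) );

/* create the marker constraint via your handler's creation method; its
 * "stickingatnode" argument must be TRUE (details depend on your handler) */
SCIP_CALL( createConsSamediff(scip, &cons, "same", childnode) );  /* hypothetical */
SCIP_CALL( SCIPaddConsNode(scip, childnode, cons, NULL) );
SCIP_CALL( SCIPreleaseCons(scip, &cons) );
```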

How can an event handler help me with my branching?
An event handler can watch for events like local bound changes on variables. So, if your pricer wants to be informed whenever a local bound of a certain variable changes, add an event handler, catch the corresponding events of the variable, and in the event handler's execution method adjust the data structures of your pricer accordingly.

How can I add locally valid variables to the problem in my branch-and-price code?
Variables in SCIP are always added globally. If you want to add them locally, because they are forbidden in another part of the branch-and-bound tree, you should ensure that they are locally fixed to 0 in all subtrees where they are not valid. A description of how this can be done is given here.

My pricer generates the same column twice. How can I solve this problem?
First check whether your pricing is correct. Are there upper bounds on variables that you have forgotten to take into account? If your pricer cannot cope with variable bounds other than 0 and infinity, you have to mark all constraints containing priced variables as modifiable, and you may have to disable reduced cost strengthening by setting propagating/rootredcost/freq to -1.
If your pricer works correctly and makes sure that the same column is added at most once in one pricing round, this behavior is probably caused by the PRICER_DELAY property of your pricer.
If it is set to FALSE, the following may have happened: The variable pricer (see this question) found a variable with negative dual feasibility that was not part of the current LP relaxation and added it to the LP. In the same pricing round, your own pricer found the same column and created a new variable for it. This might happen, since your pricer uses the same dual values as the variable pricer. To avoid this behavior, set PRICER_DELAY to TRUE, so that the LP is reoptimized after the variable pricer added variables to the LP. You can find some more information about the PRICER_DELAY property at "How to add variable pricers".

Which default plugins should be deactivated in order to get a working branchandprice code?
In most cases you should deactivate separators, since cutting planes that are added to your master problem may destroy your pricing problem. Additionally, it may be necessary to deactivate some presolvers, mainly the dual fixing presolver. This can be done by not including these plugins into SCIP, namely by not calling SCIPincludeSepaXyz() and SCIPincludePresolXyz() in the file where you include your plugins. Alternatively, you can set the parameters maxrounds and maxroundsroot to zero for all separators and maxrounds to zero for the presolvers. 
What are the lazy bounds for variables in SCIP and what do I need them for?
In many Branch-and-Price applications, you have binary variables, but you do not want to impose upper bounds on these variables in the LP relaxation, because the upper bound is already implicitly enforced by the problem constraints and the objective. If the upper bounds are explicitly added to the LP, they lead to further dual variables, which may be hard to take into account in the pricing problem.
There are two possibilities for how to solve this problem. First, you could change the binary variables to general integer variables, if this does not change the problem. However, if you use special linear constraints like set partitioning/packing/covering, you can only add binary variables to these constraints.
In order to still allow the usage of these types of constraints in a branch-and-price approach, the concept of lazy bounds was introduced in SCIP 2.0. For each variable, you can define lazy upper and lower bounds, i.e., bounds that are implicitly enforced by constraints and the objective. SCIP adds a variable bound to the LP only if it is tighter than the corresponding lazy bound. Note that lazy bounds are explicitly put into and removed from the LP when starting and ending diving mode, respectively. This is needed because changing the objective in diving might reverse the implicitly enforced bounds.
For instance, if you have set partitioning constraints in your problem, you can define variables contained in these constraints as binary and set the lazy upper bound to 1, which allows you to use the better propagation methods of the setppc constraint handler compared to the linear constraint handler without taking care about upper bounds on variables in the master.
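This setup might be sketched as follows (the variable name and the objective coefficient `cost` are illustrative):

```c
/* create a binary master variable whose upper bound of 1 is already implied
 * by the set partitioning constraints, and declare that bound as lazy */
SCIP_VAR* var;

SCIP_CALL( SCIPcreateVarBasic(scip, &var, "lambda", 0.0, 1.0, cost, SCIP_VARTYPE_BINARY) );
SCIP_CALL( SCIPchgVarUbLazy(scip, var, 1.0) );  /* ub is enforced by the constraints */
SCIP_CALL( SCIPaddVar(scip, var) );
```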

Can I stop the pricing process before the master problem is solved to optimality?
In a column generation approach, you usually have to solve the master problem to optimality; otherwise, its objective function value is not a valid dual bound. However, there is a way in SCIP to stop the pricing process earlier, called "early branching".
The reduced cost pricing method of a pricer has a result pointer that should be set each time the method is called. In the usual case that the pricer either adds a new variable or ensures that there are no further variables with negative reduced cost, the result pointer should be set to SCIP_SUCCESS. If the pricer aborts pricing without creating a new variable, although additional variables with negative reduced cost might exist, the result pointer should be set to SCIP_DIDNOTRUN. In this case, the LP solution will not be used as a lower bound. Typically, early branching goes along with the computation of a Lagrangian bound in each pricing iteration. The pricer should store this valid lower bound in the lowerbound pointer in order to update the lower bound of the current node. Since SCIP 3.1, it is even possible to state that pricing should be stopped early even though new variables were created in the last pricing round. For this, the pricer has to set the stopearly pointer to TRUE.
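The following sketch shows how the three pointers interact in a PRICERREDCOST callback; solvePricingProblem() and shouldStopEarly() are hypothetical application functions, not SCIP API:

```c
#include <scip/scip.h>

/* Sketch of a reduced cost pricing callback using early branching. */
static
SCIP_DECL_PRICERREDCOST(pricerRedcostMypricer)
{
   SCIP_Real lagrangianbound;
   SCIP_Bool aborted;

   /* solve the pricing problem, add improving variables, and compute a
    * Lagrangian bound (hypothetical application code) */
   SCIP_CALL( solvePricingProblem(scip, pricer, &aborted, &lagrangianbound) );

   /* report the Lagrangian bound so SCIP can update the node's lower bound */
   *lowerbound = lagrangianbound;

   if( aborted )
   {
      /* pricing stopped before optimality: improving variables may remain,
       * so the LP value must not be used as a dual bound */
      *result = SCIP_DIDNOTRUN;
   }
   else
   {
      *result = SCIP_SUCCESS;

      /* since SCIP 3.1: stop pricing although new variables were created,
       * based on some hypothetical termination criterion */
      if( shouldStopEarly(scip) )
         *stopearly = TRUE;
   }

   return SCIP_OKAY;
}
```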
SCIP does not stop although my gap is below 1.0 and all variables are binary and have objective coefficient 1. What can I do?
SCIP tries to detect whether the objective function values of all solutions must be integral or the problem can be scaled such that the former holds. If this is the case, solving will be stopped as soon as the absolute gap is below 1.0 (scaled).
However, the detection does not work in the case of branch-and-price, because SCIP cannot know whether any of the newly created variables would violate this property. For this case, there is the possibility to inform SCIP that all newly created variables will be integer and have an integer objective coefficient by calling SCIPsetObjIntegral(). This knowledge will then be exploited by SCIP for bounding.
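In a typical setup this is a single call made while building the master problem; the problem name and surrounding setup are illustrative:

```c
#include <scip/scip.h>

/* After creating the master problem, declare that every variable that will
 * ever be added has an integral objective coefficient, so SCIP may stop as
 * soon as the absolute gap drops below 1.0. */
SCIP_CALL( SCIPcreateProbBasic(scip, "master") );

/* ... add initial variables and constraints, all with integral objective ... */

SCIP_CALL( SCIPsetObjIntegral(scip) );
```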
How can I delete variables?
SCIP features the functionality to delete variables from the problem when performing branch-and-price. This feature is still in a beta status and can be activated by switching the parameters pricing/delvars and pricing/delvarsroot to TRUE in order to allow deletion of variables at all nodes except the root and at the root node, respectively. Furthermore, variables have to be marked as deletable by SCIPvarMarkDeletable(), which has to be done before adding the variable to the problem. Then, after a node of the branch-and-bound tree is processed, SCIP automatically deletes variables from the problem that were created at the current node and whose corresponding columns were already removed from the LP. Note that due to the way SCIP stores basis information, it is not possible to completely delete a variable that was created at a node other than the current one. You might want to change the parameters lp/colagelimit, lp/cleanupcols, and lp/cleanupcolsroot, which have an impact on when and how fast columns are removed from the LP.
Constraint handlers support a callback function that deletes variables from constraints in which they were marked to be deleted. Thus, when using automatic variable deletion, you should make sure that all used constraint handlers implement this callback. By now, the linear, the set partitioning/packing/covering, and the knapsack constraint handlers support this callback, which should be sufficient for most branch-and-price applications. Note that set covering constraints can be used instead of logicor constraints.
Instead of deleting a variable completely, you can also remove it from the problem by either fixing the variable to zero using SCIPfixVar(), which fixes the variable globally, or using SCIPchgVarUbNode() and SCIPchgVarLbNode(), which change the bounds only for the current subtree.
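Both alternatives in code; note that SCIPfixVar() is only applicable in certain solving stages, and 'node' is assumed to be the current subtree's root:

```c
#include <scip/scip.h>

/* Remove a variable from the problem without deleting it. */
SCIP_Bool infeasible;
SCIP_Bool fixed;

/* alternative 1: fix the variable to zero globally
 * (only allowed in certain stages, e.g. problem creation or presolving) */
SCIP_CALL( SCIPfixVar(scip, var, 0.0, &infeasible, &fixed) );

/* alternative 2: fix it to zero only in the subtree rooted at 'node',
 * by setting its local upper bound to zero */
SCIP_CALL( SCIPchgVarUbNode(scip, node, var, 0.0) );
```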
How do I branch on constraints?
Constraint-based branching is rather straightforward to implement in SCIP. You have to add a new branching rule that uses the methods SCIPcreateChild() and SCIPaddConsNode() in its branching callbacks. A very good example for this is the Ryan/Foster branching rule implemented in the binpacking example from the examples section.
Sometimes it might be more appropriate to implement a constraint handler instead of a branching rule. This is the case if, e.g., the added constraints alone do NOT ensure integrality of the integer variables, or if you still want to use the available branching rules. The actual branching then happens in the ENFOLP callback of your constraint handler; note that the integrality constraint handler calls the branching rules from within its own ENFOLP callback. Give your constraint handler a positive enforcement priority so that your constraint branching is triggered before the integrality constraint handler, and perform the constraint branching there.
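The core of such a branching callback, in the spirit of the Ryan/Foster rule, might be sketched as follows; createSameCons() and createDifferCons() are hypothetical application functions that build the two branching constraints:

```c
#include <scip/scip.h>

/* Sketch: constraint-based branching inside a branching rule's EXECLP
 * callback. Create two children and attach one constraint to each. */
SCIP_NODE* childsame;
SCIP_NODE* childdiffer;
SCIP_CONS* conssame;
SCIP_CONS* consdiffer;

/* create the two child nodes with equal selection priority */
SCIP_CALL( SCIPcreateChild(scip, &childsame, 0.0, SCIPgetLocalTransEstimate(scip)) );
SCIP_CALL( SCIPcreateChild(scip, &childdiffer, 0.0, SCIPgetLocalTransEstimate(scip)) );

/* build the branching constraints (hypothetical application code) */
SCIP_CALL( createSameCons(scip, &conssame) );
SCIP_CALL( createDifferCons(scip, &consdiffer) );

/* attach each constraint locally to its child node */
SCIP_CALL( SCIPaddConsNode(scip, childsame, conssame, NULL) );
SCIP_CALL( SCIPaddConsNode(scip, childdiffer, consdiffer, NULL) );

SCIP_CALL( SCIPreleaseCons(scip, &conssame) );
SCIP_CALL( SCIPreleaseCons(scip, &consdiffer) );

*result = SCIP_BRANCHED;
```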
Specific questions about the copy functionality in SCIP

What is SCIPcopy()?
The functionality of copying a SCIP model was added in SCIP version 2.0.0. It gives the possibility to generate a copy of the current SCIP model. This functionality is of interest, for example, in large neighborhood search heuristics (such as heur_rens.c). They can now easily copy the complete problem and fix a certain set of variables to work on a reasonable copy of the original problem.
Since SCIP version 4.0.0, the additional copying method SCIPcopyConsCompression() is available, which expects as an additional argument a list of variables that should be fixed in the problem copy. These are fixed right away at creation, so that all constraints may treat those variables as constants, potentially reducing the memory required to store the problem copy.
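A sketch of a plain SCIPcopy() call, as an LNS heuristic might use it; the SCIPcopy() argument list shown matches recent SCIP releases but has changed between versions, so check the scip.h of your installation:

```c
#include <scip/scip.h>

/* Sketch: copy the problem in 'scip' into a fresh sub-SCIP. */
SCIP* subscip;
SCIP_HASHMAP* varmap;
SCIP_Bool valid;

SCIP_CALL( SCIPcreate(&subscip) );

/* map from original variables to their copies */
SCIP_CALL( SCIPhashmapCreate(&varmap, SCIPblkmem(subscip), SCIPgetNVars(scip)) );

/* copy plugins, problem, and parameters; argument list is version-dependent */
SCIP_CALL( SCIPcopy(scip, subscip, varmap, NULL, "_copy",
      TRUE, FALSE, FALSE, TRUE, &valid) );

/* ... solve subscip and translate solutions back via varmap ... */

SCIPhashmapFree(&varmap);
SCIP_CALL( SCIPfree(&subscip) );
```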
When should I use SCIPcopy() instead of SCIPcopyConsCompression()?
This, of course, depends on the problem copy's intended use. Large neighborhood search heuristics such as heur_rens.c usually create a problem copy in which they fix a number of variables and solve the remaining, smaller subproblem only once. In this case, it makes sense to use SCIPcopyConsCompression(), which treats fixed variables as constants at constraint creation time to save memory.
For a more general use of the problem copy, such as resolving with different objective functions or multiple solves for different sets of fixed variables, you should clearly use SCIPcopy(), because this is beyond the scope of a compressed copy.
How do I get a copy of a variable or a constraint?
For variables and constraints, there are the methods SCIPgetVarCopy() and SCIPgetConsCopy(), which provide a copy of a variable or a constraint, respectively.
What does the valid pointer in the copy callback of the constraint handler and variable pricer mean?
SCIP would like to know whether the copied problem is a valid copy. A problem copy is called valid if it is valid in both the primal and the dual sense, i.e., if
- it is a relaxation of the source problem, and
- it does not enlarge the feasible region.
A constraint handler may choose not to copy a constraint and still declare the resulting copy as valid. In that case, it must ensure that every solution of the problem copy is feasible in the original (source) space.