
prop_nlobbt.h File Reference

Detailed Description

nonlinear OBBT propagator

Author
Benjamin Mueller

In Nonlinear Optimization-Based Bound Tightening (NLOBBT), we solve auxiliary NLPs of the form

\begin{align*}
\min / \max \quad & x_i \\
\text{s.t.} \quad & g_j(x) \le 0 \quad \forall j = 1, \ldots, m \\
& c'x \le \mathcal{U} \\
& x \in [\ell, u]
\end{align*}

where each g_j is a convex function and \mathcal{U} is the objective value of the current incumbent. Clearly, the optimal objective value of this nonlinear program provides a valid lower/upper bound on variable x_i .
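As a small illustration (this instance is not from the original text), consider a single convex constraint, with the incumbent constraint c'x \le \mathcal{U} omitted for simplicity:

\min / \max \; x_1 \quad \text{s.t.} \quad x_1^2 + x_2^2 - 1 \le 0, \quad x \in [-2,2]^2

Minimizing yields -1 and maximizing yields 1, so the domain of x_1 can be tightened from [-2,2] to [-1,1].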

The propagator sorts all variables with respect to their occurrences in convex nonlinear constraints and solves the resulting convex NLPs sequentially. Variables that could be tightened successfully by the propagator are prioritized in the next call, at a new node of the branch-and-bound tree (see the sketch below). By default, the propagator requires at least one nonconvex constraint in order to be executed; for purely convex problems, the benefit of having tighter bounds is negligible.
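The following sketch illustrates the sorting and prioritization idea with hypothetical data structures; it is not SCIP's actual implementation (which lives in prop_nlobbt.c), only a minimal model of the policy described above:

#include <stdio.h>
#include <stdlib.h>

/* hypothetical per-variable bookkeeping, for illustration only */
typedef struct
{
   const char* name;      /* variable name */
   int         noccur;    /* occurrences in convex nonlinear constraints */
   int         tightened; /* 1 if tightened in the previous propagator call */
} VarInfo;

/* sort by descending score; the bonus for previously tightened variables
 * is an arbitrary illustrative choice */
static int compareScore(const void* a, const void* b)
{
   const VarInfo* x = (const VarInfo*)a;
   const VarInfo* y = (const VarInfo*)b;
   int scorex = x->noccur + (x->tightened ? 1000 : 0);
   int scorey = y->noccur + (y->tightened ? 1000 : 0);
   return scorey - scorex;
}

int main(void)
{
   VarInfo vars[] = { {"x1", 3, 0}, {"x2", 1, 1}, {"x3", 2, 0} };
   size_t nvars = sizeof(vars) / sizeof(vars[0]);
   size_t i;

   qsort(vars, nvars, sizeof(VarInfo), compareScore);

   /* solve the min/max auxiliary NLPs in this order */
   for( i = 0; i < nvars; ++i )
      printf("solve auxiliary NLPs for %s\n", vars[i].name);
   return 0;
}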

By default, NLOBBT is only applied to non-binary variables. Variables that do not appear nonlinearly in the nonlinear constraints will not be considered, even though they might lead to additional tightenings.

After solving the NLP to optimize x_i, we try to exploit the dual information to generate a globally valid inequality, called a Generalized Variable Bound (see prop_genvbounds.h). Let \lambda_j, \mu, \alpha, and \beta be the dual multipliers for the constraints of the NLP, where \alpha and \beta correspond to the variable bound constraints. Because of the convexity of g_j we know that

g_j(x) \ge g_j(x^*) + \nabla g_j(x^*)(x-x^*)

holds for every x^* \in [\ell,u] . Let x^* be the optimal solution after solving the NLP for the case of minimizing x_i (the case of maximizing x_i is similar). Since the NLP is convex, we know that the KKT conditions

e_i' + \lambda' \nabla g(x^*) + \mu c' + \alpha' - \beta' = 0

\lambda_j g_j(x^*) = 0 \quad \forall j = 1, \ldots, m, \qquad \mu \, (c'x^* - \mathcal{U}) = 0

hold. Aggregating the trivial inequality x_i \ge x_i with the constraints g_j(x) \le 0 , weighted by the multipliers \lambda_j \ge 0 , leads to the inequality

x_i \ge x_i + \sum_{j} \lambda_j g_j(x)

Instead of calling the (expensive) propagator during the tree search, we can use this inequality to derive further reductions on x_i . Multiplying the first KKT condition by (x - x^*) and using the fact that each g_j is convex, we can rewrite the previous inequality as

x_i \ge (\beta - \alpha)'x - \mu c'x + (e_i + \alpha - \beta)'x^* + \mu \, \mathcal{U},

which is passed to the genvbounds propagator. Note that if \alpha_i \neq \beta_i , then a bound of x_i itself proves optimality, and thus no useful genvbound can be found.
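For completeness, the rewriting step can be carried out as follows. This chain of estimates is a sketch added here (it is not spelled out in the original text); it uses the KKT conditions above together with \lambda_j \ge 0 and the feasibility of x:

\begin{align*}
x_i &\ge x_i + \sum_{j} \lambda_j g_j(x) && (\lambda_j \ge 0, \; g_j(x) \le 0) \\
&\ge x_i + \lambda' \nabla g(x^*)(x - x^*) && (\text{convexity}, \; \lambda_j g_j(x^*) = 0) \\
&= x_i - (e_i' + \mu c' + \alpha' - \beta')(x - x^*) && (\text{stationarity}) \\
&= (\beta - \alpha)'x - \mu c'x + (e_i + \alpha - \beta)'x^* + \mu c'x^* \\
&= (\beta - \alpha)'x - \mu c'x + (e_i + \alpha - \beta)'x^* + \mu \, \mathcal{U} && (\mu (c'x^* - \mathcal{U}) = 0)
\end{align*}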

Definition in file prop_nlobbt.h.

#include "scip/def.h"
#include "scip/type_retcode.h"
#include "scip/type_scip.h"


Functions

SCIP_EXPORT SCIP_RETCODE SCIPincludePropNlobbt (SCIP *scip)
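SCIPincludePropNlobbt() registers the propagator with a SCIP instance. A minimal usage sketch follows (assuming a standard SCIP installation; note that SCIPincludeDefaultPlugins() from scip/scipdefplugins.h already registers this propagator together with all other default plugins):

#include "scip/scip.h"
#include "scip/prop_nlobbt.h"

/* create a SCIP instance with only the NLOBBT propagator registered */
static SCIP_RETCODE runExample(void)
{
   SCIP* scip = NULL;

   SCIP_CALL( SCIPcreate(&scip) );
   SCIP_CALL( SCIPincludePropNlobbt(scip) );

   /* ... create or read a problem and call SCIPsolve(scip) here ... */

   SCIP_CALL( SCIPfree(&scip) );
   return SCIP_OKAY;
}

int main(void)
{
   return runExample() == SCIP_OKAY ? 0 : 1;
}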