# Introduction to Mixed Integer Programming

Mixed-integer programming is used to solve optimization problems with discrete decision variables. Because the feasible region is a set of disconnected integer points, gradient-based algorithms cannot be applied directly.

### Motivation

Mixed-Integer programs (MIP) are used in several disciplines including:

- Electric power production
- Control engineering
- Mechanical engineering
- Operations research (production planning and management)
- Machine learning (e.g., checking the robustness of deep learning models, logistic regression, and ridge regression)

### MIP Introduction

A MIP consists of an objective function and a set of constraints on its decision variables. Solving a MIP is typically based on the branch-and-bound algorithm, which works with linear-programming relaxations of the problem.
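To make the discussion concrete, a MIP can be stated in a generic standard form (this formulation is a standard textbook one, not tied to any particular solver):

```latex
\begin{aligned}
\min_{x} \quad & c^{\top} x \\
\text{s.t.} \quad & A x \le b, \\
& x_j \in \mathbb{Z} \quad \forall j \in I \subseteq \{1, \dots, n\}
\end{aligned}
```

When $I = \{1, \dots, n\}$ the problem is a pure integer program; when $I$ is a proper subset, some variables remain continuous and the problem is "mixed".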

#### Branch and Bound algorithm

The basic branch-and-bound steps are:

- Create a relaxed linear program (LP) by dropping the integrality constraints (treating the decision variables as continuous).
- Solve the relaxed LP.
- Check whether the LP solution satisfies the integrality constraints; if it does, we have an optimal solution to the original problem.
- If it does not, pick a variable that is fractional in the LP solution; this is called the branching variable.
- Impose restrictions that exclude the fractional value of the branching variable, splitting the original MIP into two sub-MIPs.
- Solve the sub-MIPs by applying the same procedure recursively, branching further when necessary, then compare their results to select the optimal solution.
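The steps above can be sketched in plain Python. This is a minimal illustration only, using `scipy.optimize.linprog` for the LP relaxations; the function name and the tiny example problem are our own, not part of any solver library:

```python
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    """Minimize c @ x subject to A_ub @ x <= b_ub with all variables integer."""
    best_val, best_x = math.inf, None
    stack = [list(bounds)]            # each node = per-variable (lo, hi) bounds
    while stack:
        node = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=node)  # LP relaxation
        if not res.success or res.fun >= best_val:
            continue                  # infeasible node, or pruned by incumbent
        # First variable whose LP value is fractional (the branching variable)
        frac = next((i for i, v in enumerate(res.x)
                     if abs(v - round(v)) > 1e-6), None)
        if frac is None:              # integral solution: new incumbent
            best_val, best_x = res.fun, [round(v) for v in res.x]
        else:                         # branch: exclude the fractional value
            v, (lo, hi) = res.x[frac], node[frac]
            left, right = list(node), list(node)
            left[frac] = (lo, math.floor(v))    # sub-MIP with x_frac <= floor(v)
            right[frac] = (math.ceil(v), hi)    # sub-MIP with x_frac >= ceil(v)
            stack += [left, right]
    return best_val, best_x

# maximize x0 + x1 (minimize its negation) s.t. 2*x0 + x1 <= 4, x0 + 2*x1 <= 4
val, x = branch_and_bound([-1, -1], [[2, 1], [1, 2]], [4, 4], [(0, 4), (0, 4)])
```

The LP relaxation of this example has the fractional optimum (4/3, 4/3), so the sketch must branch before it finds an integral solution with objective value -2.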

Branching in this way builds a search tree whose nodes are the MIPs generated by the search procedure. The leaves of the tree are the MIPs that have not yet been branched. Once a node yields an optimal (integral) solution, it is designated a leaf, and the set of all such leaves defines the space of feasible solutions.

#### GAP

The optimization process consists of exploring the feasible space generated by solving the LP relaxations at the search-tree nodes and narrowing down the set of feasible solutions based on the objective function.

During the search, the leaf node with the best objective value is called the best bound. The difference between a node's objective value and the best bound's objective value is called the **gap**. A gap of zero proves that the solution is optimal.
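As a concrete illustration, the relative gap that solvers typically report can be computed along these lines (the exact formula varies slightly between solvers; this helper is our own sketch):

```python
def mip_gap(incumbent, best_bound):
    """Relative gap between the incumbent objective value and the best bound.
    A gap of 0.0 proves that the incumbent solution is optimal."""
    if incumbent == best_bound:
        return 0.0
    # Guard against division by zero when the incumbent objective is 0
    return abs(incumbent - best_bound) / max(abs(incumbent), 1e-10)

# Incumbent -2.0, best bound -2.5: the optimum can be at most 25% better
print(mip_gap(-2.0, -2.5))  # 0.25
```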

#### Presolve

Before applying the branch-and-bound algorithm, a MIP solver usually performs a presolve step to reduce the problem. These reductions shrink the size of the problem and tighten the formulation for faster solve times. Presolve is a critical step in any MIP solver because it reduces the LP feasible space that must be searched.

Ralph E. Gomory introduced cutting planes, which tighten the formulation by removing undesired fractional solutions during the solution process.
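For intuition, Gomory's fractional cut takes a simplex-tableau row whose right-hand side is fractional and keeps only the fractional parts of its coefficients; the resulting inequality cuts off the current fractional LP solution without removing any integer point. A small sketch (our own helper, not a solver API):

```python
import math

def gomory_fractional_cut(row, rhs):
    """Given a tableau row x_B + sum(row[j] * x_j) = rhs with fractional rhs,
    return the cut sum(frac(row[j]) * x_j) >= frac(rhs),
    where frac(v) = v - floor(v).
    The current LP solution (all nonbasic x_j = 0) violates this cut."""
    frac = lambda v: v - math.floor(v)
    return [frac(a) for a in row], frac(rhs)

# Row: x_B + 0.5*x1 - 0.25*x2 = 3.75  ->  cut: 0.5*x1 + 0.75*x2 >= 0.75
coeffs, cut_rhs = gomory_fractional_cut([0.5, -0.25], 3.75)
```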

### Example using CVXPY

Now to the practical part: we will use CVXPY, a library that wraps multiple solvers, both commercial and free, making it easy to switch between them.

#### Available Solvers for cvxpy

| Solver | LP | QP | SOCP | SDP | EXP | MIP |
|---|---|---|---|---|---|---|
| CBC | X | | | | | X |
| GLPK | X | | | | | |
| GLPK_MI | X | | | | | X |
| OSQP | X | X | | | | |
| CPLEX | X | X | X | | | X |
| ECOS | X | X | X | | X | |
| ECOS_BB | X | X | X | | X | X |
| GUROBI | X | X | X | | | X |
| MOSEK | X | X | X | X | X | X |
| CVXOPT | X | X | X | X | | |
| SCS | X | X | X | X | X | |

#### Problem statement

In the upcoming example, we create a simple quadratic MIP where x is the optimization variable and A, b are problem parameters.

#### Code

We start by importing the CVXPY and NumPy libraries.

```
import cvxpy as cp
import numpy as np
```

Now we generate random problem data using NumPy.

```
np.random.seed(0)
m, n = 40, 25
A = np.random.rand(m, n)
b = np.random.randn(m)
```

Then we create the CVXPY decision variable with the integer flag set to true, denoting that it takes integer values.

```
x = cp.Variable(n, integer=True)
```

Now we define our objective

```
objective = cp.Minimize(cp.sum_squares(A @ x - b))
```

After that we define the problem, which takes the following parameters:

- *objective*: The problem's objective, minimize or maximize.
- *constraints*: List of constraints for the problem, e.g. `[x > 0]` (optional).

```
prob = cp.Problem(objective)
```

The `solve` function takes the following optional parameters:

- *solver*: solver name, e.g. `cp.ECOS_BB`.
- Additional keyword arguments: solver-specific options passed through to the chosen solver.

```
prob.solve()
```

We can inspect the solution status and values using the following commands:

```
print("Status: ", prob.status)
print("The optimal value is", prob.value)
print("A solution x is")
print(x.value)
```

#### Side Note

I recommend checking the references below for more details about MIPs and CVXPY.

### References

- M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, NY, USA, 1979. ISBN 0716710447.
- Ralph E Gomory. Some polyhedra related to combinatorial problems. Linear algebra and its applications, 2(4):451–558, 1969.
- Akshay Agrawal, Robin Verschueren, Steven Diamond, and Stephen Boyd. A rewriting system for convex optimization problems. Journal of Control and Decision, 5(1):42–60, 2018.
- Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83):1–5, 2016.
- Gurobi’s introduction to MIP
- CVXPY MIP example