Gurobi halts while optimizing without an error message
Apologies in advance if this is the wrong board.
I currently have a recurring issue with Gurobi where, while optimizing, it will stop running, seemingly at random. There is no indication in the engine log why this happened, and as far as I can tell, there is no error log anywhere else either. This is a sample of what the engine log looks like:
Changed value of parameter TimeLimit to 1200.0
Prev: inf Min: 0.0 Max: inf Default: inf
Parameter MIPGap unchanged
Value: 0.0001 Min: 0.0 Max: inf Default: 0.0001
Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (win64)
Thread count: 8 physical cores, 16 logical processors, using up to 16 threads
Optimize a model with 10 rows, 250 columns and 2500 nonzeros
Model fingerprint: 0x8c57b796
Variable types: 0 continuous, 250 integer (250 binary)
Coefficient statistics:
Matrix range [2e+00, 1e+03]
Objective range [3e+02, 1e+03]
Bounds range [1e+00, 1e+00]
RHS range [3e+04, 3e+04]
Found heuristic solution: objective 45928.000000
Presolve time: 0.00s
Presolved: 10 rows, 250 columns, 2500 nonzeros
Variable types: 0 continuous, 250 integer (250 binary)
Root relaxation: objective 5.902430e+04, 35 iterations, 0.00 seconds
Nodes  Current Node  Objective Bounds  Work
Expl Unexpl  Obj Depth IntInf  Incumbent BestBd Gap  It/Node Time
0 0 59024.3016 0 10 45928.0000 59024.3016 28.5%  0s
H 0 0 57815.000000 59024.3016 2.09%  0s
H 0 0 58314.000000 59024.3016 1.22%  0s
H 0 0 58472.000000 59024.3016 0.94%  0s
0 0 59002.9550 0 15 58472.0000 59002.9550 0.91%  0s
0 0 59002.9550 0 15 58472.0000 59002.9550 0.91%  0s
H 0 0 58553.000000 59002.9550 0.77%  0s
0 2 59002.9550 0 15 58553.0000 59002.9550 0.77%  0s
H 118 120 58569.000000 59001.7035 0.74% 4.5 0s
H 1712 1431 58616.000000 58987.5540 0.63% 3.9 0s
H 8287 5688 58693.000000 58969.9241 0.47% 4.1 1s
125547 62125 58853.9026 60 10 58693.0000 58935.6836 0.41% 4.4 5s
H475805 302195 58700.000000 58904.4665 0.35% 4.6 9s
550452 352866 58750.9397 63 10 58700.0000 58901.4330 0.34% 4.6 10s
H567889 362179 58705.000000 58900.8595 0.33% 4.6 10s
1004655 651196 58834.9897 53 10 58705.0000 58889.1825 0.31% 4.7 15s
1453710 936899 58851.4276 55 10 58705.0000 58881.6689 0.30% 4.7 20s
1920839 1223813 58770.4729 62 10 58705.0000 58876.0254 0.29% 4.7 25s
2381628 1501220 58793.8243 61 10 58705.0000 58871.6174 0.28% 4.8 30s
H2421581 1519385 58708.000000 58871.2641 0.28% 4.8 30s
H2684527 991951 58781.000000 58869.1129 0.15% 4.8 33s
2790692 1022202 58798.6745 65 10 58781.0000 58867.5714 0.15% 4.8 35s
3221857 1131026 cutoff 61 58781.0000 58862.2706 0.14% 4.8 40s
3704427 1231476 58798.4852 90 9 58781.0000 58857.1157 0.13% 4.9 45s
4188157 1311111 58783.8231 58 10 58781.0000 58852.7228 0.12% 4.9 50s
There is no further text or indication for why it stopped. This is a recurring problem for me, and isn't specific to the problem I'm currently trying to solve. But just in case, here is the script I'm currently trying to run:
import gurobipy as gp
from gurobipy import GRB
import openpyxl as xl
from math import ceil

RESULTFILE = "MKP_Gurobi_Results.xlsx"
PREFIX = "mknapcb"
# Maximum numbers per line in each data file
NUMSPERLINE = 7.0

def parse(name):
    with open(name + ".txt") as data:
        # First line: number of test problems (K)
        numproblems = int(data.readline())
        problems = [numproblems] + [[[], [], []] for i in range(numproblems)]
        for prob in range(1, numproblems + 1):
            # Next line: number of variables (n), number of constraints (m), optimal
            metadata = data.readline().strip().split(' ')
            n = int(metadata[0])
            m = int(metadata[1])
            # Next lines: the coefficients p(j); j=1,...,n
            x = []
            for j in range(ceil(n / NUMSPERLINE)):
                x.extend(data.readline().strip().split(' '))
            problems[prob][0] = [int(x[j]) for j in range(len(x))]
            # Next lines: for each constraint i (i=1,...,m): the coefficients r(i,j); j=1,...,n
            x = []
            for i in range(m):
                x.append([])
                for j in range(ceil(n / NUMSPERLINE)):
                    x[i].extend(data.readline().strip().split(' '))
                x[i] = [int(x[i][j]) for j in range(n)]
            problems[prob][1] = [x[i] for i in range(m)]
            # Next lines: the constraint righthand sides b(i); i=1,...,m
            x = []
            for i in range(ceil(m / NUMSPERLINE)):
                x.extend(data.readline().strip().split(' '))
            problems[prob][2] = [int(x[i]) for i in range(len(x))]
    return problems

# Set number of rows as i
# Set number of columns as j
# Create Boolean objective vector as X[j]
# Create constraint matrix as A[i][j]
# Create profit vector as PROFIT[j]
# Create constraint righthand sides vector as B[i]
# Set maximum run time
# Set initial tolerance
# Run MIP program with
#   OBJECTIVE FUNCTION: maximize sum(j in cols) PROFIT[j] * X[j]
#   CONSTRAINTS: subject to { forall(i in rows) sum(j in cols) A[i][j] * X[j] ≤ B[i] }
def process(name):
    print("\n\nProcessing batch " + name)
    results = xl.load_workbook(RESULTFILE)
    sheet = results[name]
    problems = parse(name)
    for prob in range(1, problems[0] + 1):
        modelname = "{}_{}".format(name, prob)
        print("\nProcessing " + modelname)
        data = problems[prob]
        model = gp.Model(modelname)
        rows = len(data[2])
        cols = len(data[0])
        X = model.addVars(cols, vtype=GRB.BINARY, name='X')
        PROFIT = data[0]
        A = data[1]
        B = data[2]
        model.setObjective(X.prod(PROFIT), GRB.MAXIMIZE)
        for i in range(rows):
            model.addConstr(gp.quicksum(A[i][j] * X[j] for j in range(cols)) <= B[i])
        model.Params.TimeLimit = 1200
        model.Params.MIPGap = 0.0001
        model.optimize()
        sheet.cell(column=1, row=prob+1, value=model.ObjVal)
        sheet.cell(column=2, row=prob+1, value=model.MIPGap)
        sheet.cell(column=3, row=prob+1, value=model.Runtime)
        results.save(RESULTFILE)

if __name__ == "__main__":
    for i in range(5, 10):
        process(PREFIX + str(i))
This script solves the Multidimensional Knapsack Problem using the datasets from P.C. Chu and J.E. Beasley, "A genetic algorithm for the multidimensional knapsack problem", Journal of Heuristics, vol. 4, 1998, pp. 63-86. The data files are located at http://people.brunel.ac.uk/~mastjjb/jeb/orlib/files/ (mknapcb1-9)
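For anyone trying to reproduce this: the parse() function above assumes the ORLIB layout (a problem count, then per problem a "n m optimal" line, the n profits, the m*n constraint coefficients, and the m right-hand sides, wrapped at a fixed number of values per line). A minimal sketch of reading that same layout from an in-memory string, using a small made-up instance rather than real Beasley data:

```python
from io import StringIO

# Tiny made-up instance in the ORLIB layout: K=1 problem with
# n=3 variables, m=2 constraints, "optimal" field unused (0).
SAMPLE = """1
3 2 0
10 20 30
1 2 3
4 5 6
7 8
"""

def parse_stream(data):
    numproblems = int(data.readline())
    problems = []
    for _ in range(numproblems):
        n, m = [int(v) for v in data.readline().split()[:2]]
        profits = []                      # the n coefficients p(j)
        while len(profits) < n:
            profits.extend(int(v) for v in data.readline().split())
        weights = []                      # m rows of coefficients r(i,j)
        for _ in range(m):
            row = []
            while len(row) < n:
                row.extend(int(v) for v in data.readline().split())
            weights.append(row)
        rhs = []                          # the m right-hand sides b(i)
        while len(rhs) < m:
            rhs.extend(int(v) for v in data.readline().split())
        problems.append((profits, weights, rhs))
    return problems

probs = parse_stream(StringIO(SAMPLE))
# probs[0] -> ([10, 20, 30], [[1, 2, 3], [4, 5, 6]], [7, 8])
```

Using a bare split() rather than split(' ') makes the reader tolerant of runs of spaces in the data files; otherwise the logic mirrors the script's parse().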

Hi Emre,
Does your Python script stop entirely at this point (i.e. without reaching the openpyxl commands that follow the optimize() call)? If so, the likely cause is that the optimization run is consuming a lot of memory, and your operating system has killed the process to prevent it from affecting other tasks on your machine.
This would appear to be the case from the logs, which indicate that there are over 1 million open nodes in the tree at this point:
4188157 1311111 58783.8231 58 10 58781.0000 58852.7228 0.12% 4.9 50s
You could verify whether this is the case by watching the system's memory use in Task Manager (or Activity Monitor) during the run.
There are some suggestions in this post regarding how to avoid this. I suggest trying points 2 and 3 on that list first. It may slow down the solve a little, but should prevent the solver from using so much memory.

I just tried points 2-4 in that thread, but they made no difference. I'm not sure memory is the issue, since Task Manager doesn't indicate it, and it's very inconsistent as to when it happens. It can complete one problem with 84 million nodes and then stop during the next problem at 4 million nodes.
You are right that the program stops before the sheet.cell lines, since nothing gets written to the spreadsheet.

Hi Emre,
This is strange. I'll test this on my end to see if I can find the same behaviour. Just to confirm: you are running Gurobi 9.1.2 on Windows, with which version of Python?
You mentioned that the problem is inconsistent. So, if you were solving the same problem instance, would it crash at the same or a similar point in the solve each time? If that's the case, could you please also try changing the value of the Seed parameter (try, say, Seed=1) for the model that is failing? Sometimes in these inconsistent cases, changing the random seed can be enough to change the solve path and avoid whatever is causing the problem.