CHOMP Handles Optimization of Many Problems

A modular nonlinear programming solver that chomps through tough constrained problems using SQP, Interior-Point, and DFO-L1 Exact Penalty methods, plus an Auto strategy selector, powered by automatic differentiation, sparse linear algebra, and robust regularization.

Quick Install (Python)

Editable install for development:

git clone https://github.com/lseman/chomp
cd chomp
pip install -e .

Requires: Python ≥ 3.9, NumPy, SciPy. Native extensions build automatically if your toolchain is available.
Compile C++ kernels and Python bindings:
git clone https://github.com/lseman/chomp
cd chomp
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release \
-DAD_ENABLE_OPENMP=ON \
-DAD_ENABLE_QDLDL=ON -DAD_ENABLE_OSQP=ON -DAD_ENABLE_PIQP=ON -DAD_ENABLE_AMDQG=ON -DAD_ENABLE_L1CORE=ON
cmake --build . -j
Build options:
- AD_ENABLE_OPENMP (default OFF)
- AD_ENABLE_QDLDL, AD_ENABLE_OSQP, AD_ENABLE_PIQP, AD_ENABLE_AMDQG, AD_ENABLE_L1CORE (default ON)
Platform: GCC ≥ 11 / Clang ≥ 14 (Linux/macOS), MSVC 2022 (Windows). Use Release builds for speed.
At iterate \(x_k\) we solve a quadratic model with linearized constraints and a trust region:

\[
\min_{p}\; g_k^\top p + \tfrac{1}{2}\, p^\top H_k p
\quad \text{s.t.} \quad
J_E\, p + c_E(x_k) = 0, \quad
J_I\, p + c_I(x_k) \le 0, \quad
\|p\| \le \Delta_k
\]
We use a Byrd–Omojokun composite step \(p = p_n + p_t\): a normal step reduces infeasibility in the equality space, then a tangential step optimizes the Lagrangian in the nullspace, both within the trust region (Euclidean or ellipsoidal), as sketched below.

- Normal step: solve \(J_E p_n \approx -c_E(x_k)\), clipped to \(\Delta_n\) (and to the box constraints).
- Tangential step: minimize the reduced quadratic in the nullspace \(\mathcal N(J_E)\) with Steihaug–Toint PCG; an ellipsoidal metric is handled by a change of variables.
- Active-set loop: add violated \(J_I\) rows as temporary equalities until the step is feasible.
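A dense NumPy sketch of this composite step, for illustration only: the function name is hypothetical, and CHOMP's actual implementation is sparse, runs the active-set loop above, and uses Steihaug–Toint PCG rather than an explicit nullspace basis.

import numpy as np

def composite_step(g, H, JE, cE, delta, zeta=0.8):
    # Normal step: least-squares solution of JE @ p = -cE, clipped to zeta*delta.
    pn = np.linalg.lstsq(JE, -cE, rcond=None)[0]
    nrm = np.linalg.norm(pn)
    if nrm > zeta * delta:
        pn *= zeta * delta / nrm
    # Tangential step: minimize the quadratic model over null(JE)
    # (assumes the reduced Hessian is positive definite).
    Z = np.linalg.svd(JE)[2][JE.shape[0]:].T  # columns span null(JE)
    pt = Z @ np.linalg.solve(Z.T @ H @ Z, -Z.T @ (g + H @ pn))
    # Scale the tangential step so the combined step stays in the trust region.
    p = pn + pt
    if np.linalg.norm(p) > delta:
        a, b, c = pt @ pt, 2.0 * (pn @ pt), pn @ pn - delta**2
        t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        p = pn + t * pt
    return p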
We solve a slack-augmented, optionally shifted barrier problem with variable bounds:

\[
\min_{x,\,s}\; f(x) \;-\; \mu \sum_i \log s_i
\quad \text{s.t.} \quad
c_E(x) = 0, \quad c_I(x) + s = 0, \quad \ell \le x \le u
\]
Bounds use shifted slacks \(s_L = x-\ell,\; s_U = u-x\) with duals \(z_L, z_U\) (and an optional bound shift \(\beta\)). The perturbed complementarity conditions are:

\[
(s_L + \beta) \circ z_L = \mu e, \qquad (s_U + \beta) \circ z_U = \mu e
\]

with \(\beta = 0\) recovering the standard conditions.
We form a regularized augmented system (HYKKT by default; LDLᵀ fallback) with diagonal barriers and solve predictor/corrector steps efficiently.
Stop when \(\|r_d\|_\infty,\|c_E\|_\infty,\|c_I\|_\infty\) and complementarity residuals are below tolerance and \(\mu\) is small.
\(\mu\) decreases via a safeguarded schedule based on predictor quality, feasibility, complementarity, and Hessian conditioning.
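The fraction-to-boundary safeguard that keeps slacks strictly positive during the step-length selection (the ftb option in IPConfig) admits a compact sketch; the helper name below is hypothetical, assuming the standard rule:

import numpy as np

def fraction_to_boundary(s, ds, tau=0.995):
    # Largest alpha in (0, 1] with s + alpha*ds >= (1 - tau)*s, elementwise.
    neg = ds < 0
    if not np.any(neg):
        return 1.0
    return min(1.0, float(np.min(-tau * s[neg] / ds[neg])))

The same rule is applied analogously to the bound slacks \(s_L, s_U\) and the duals \(z_L, z_U\).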
The Auto strategy chooses SQP vs IP vs DFO based on problem structure (active inequalities, conditioning, progress). It warm-starts duals/slacks, reuses KKT factors when possible, and flips modes if a funnel/filter rejects steps repeatedly.
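Purely as an illustration of this kind of heuristic (the function below is hypothetical, not CHOMP's API):

def choose_mode(n_active_ineq, cond_estimate, recent_rejections):
    # Hypothetical sketch of a structure-based mode selector.
    if recent_rejections > 3:       # funnel/filter keeps rejecting: change strategy
        return "dfo"
    if n_active_ineq > 0 or cond_estimate > 1e8:
        return "ip"                 # barrier methods handle many active inequalities
    return "sqp"                    # smooth and well-conditioned: SQP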
CHOMP’s AD core is a C++23 engine (nanobind bindings) with lane-parallel evaluation (“HPV with lanes”). Nodes carry compact tapes; kernels are fused and vectorized (e.g., AVX2) to evaluate multiple seeds at once.
- L seeds in parallel (typical L = 8–16) for fast JVPs/HVPs and batched gradients.
- pow, sqrt, abs (and friends), with safe branches.
- val, grad for lists of constraints (one pass).
- _safe_div, tiny Tikhonov on normal equations, barrier-aware logs.
- ModelC::eval_all(...) and getters (get_f/get_g/get_JE/get_JI).

import numpy as np
from chomp import ModelC
# assume f, c_ineq, c_eq compiled; model exposes eval_all + getters
model: ModelC = make_model()
x = np.random.randn(model.n())
v = np.random.randn(model.n())
# One lane-block computes H(x) @ v efficiently
model.eval_all(x, ["g", "hessvec"]) # engine fills internal buffers
Hv = model.get_hessvec(v).value()
print("vᵀHv =", float(v @ Hv))
- SIMD-friendly tapes, lane-parallel JVP/HVP, optional OpenMP.
- Specialized rules for abs, sqrt, pow, trigs, reductions.
- Directly feeds HYKKT/LDLᵀ with sparse \(J_E, J_I\), exact or quasi-Newton \(H\).
We solve constrained problems without derivatives:
Use an exact penalty with parameter \(\mu > 0\):

\[
p(x) \;=\; f(x) \;+\; \mu \Big( \sum_{i \in \mathcal E} |c_i(x)| \;+\; \sum_{i \in \mathcal I} \max\big(0,\, c_i(x)\big) \Big)
\]

Under suitable conditions and for sufficiently large \(\mu\), minimizing \(p(x)\) yields the constrained optimum directly. CHOMP builds local polynomial models of \(f\) and \(c_i\) in a trust region, handles non-smoothness near active constraints, and accepts infeasible iterates when beneficial.
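A minimal NumPy evaluation of the penalty \(p(x)\) above, using the sign convention \(c_I(x) \le 0\) from the examples below (the helper name is hypothetical):

import numpy as np

def l1_exact_penalty(f, c_eq, c_ineq, x, mu):
    # p(x) = f(x) + mu * (sum |c_E(x)| + sum max(0, c_I(x)))
    viol = sum(np.abs(np.atleast_1d(c(x))).sum() for c in c_eq)
    viol += sum(np.maximum(0.0, np.atleast_1d(c(x))).sum() for c in c_ineq)
    return float(f(x)) + mu * float(viol)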
import numpy as np
from chomp import NLPSolver, SQPConfig
# Rosenbrock + circle inequality
f = lambda x: (1-x[0])**2 + 100*(x[1]-x[0]**2)**2
c_ineq = [lambda x: x[0]**2 + x[1]**2 - 25]
x0 = np.array([3.0, 2.0])
solver = NLPSolver(f, c_ineq=c_ineq, x0=x0, config=SQPConfig())
x_opt, lam, info = solver.solve(mode="auto", max_iter=200, tol=1e-8, verbose=True)
print(x_opt, info.status)
Build with operator-overloaded expressions; AD-aware trigs and reductions.
import math
import numpy as np
from nlp.wrapper import Model, cos # AD-aware functions
from chomp import NLPSolver, SQPConfig
# Branin with circular inequality
m = Model("branin")
x = m.add_var("x", shape=2) # decision vars
x1, x2 = x[0], x[1]
a, b, c, r, s, t = 1.0, 5.1/(4*math.pi**2), 5.0/math.pi, 6.0, 10.0, 1/(8*math.pi)
f = a*(x2 - b*(x1**2) + c*x1 - r)**2 + s*(1 - t)*cos(x1) + s
m.minimize(f)
m.add_constr(x1**2 + x2**2 <= 60.0)
# Compile to callables and solve with SQP
f_fun, cI, cE, x0 = m.build()
solver = NLPSolver(f_fun, c_ineq=cI, c_eq=cE, x0=np.array([-3.0, 12.0]),
config=SQPConfig())
x_star, lam, info = solver.solve(mode="auto", max_iter=150, tol=1e-8, verbose=True)
print("x* =", x_star, info.status)
mode="sqp"
, "ip"
, "dfo"
, or "auto"
.
Trust-region / filter options, barrier parameters, tolerances.
Use info
/history
for residuals, step types, and timings.
API overview:
- NLPSolver(f, c_ineq, c_eq, x0, config)
- SQPStepper, InteriorPointStepper, DFOStepper
- Regularizer (inertia-aware spectral fixes)
- KKT (assembly/factorization, Schur)
- SQPConfig(tr_radius, filter, ls_armijo, dogleg, ...)
- IPConfig(mu0, sigma, ftb, corrector, ...)
- DFOConfig(tr_radius, mu, eps_active, ...)
- Model.add_var(name, shape, lb, ub, x0)
- Model.add_constr(lhs <= rhs), lhs == rhs
- Model.minimize(expr) / maximize(expr)
- Model.build() → (f, c_ineq, c_eq, x0)
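A small usage sketch of this modeling API, with illustrative values and the signatures listed above:

import numpy as np
from nlp.wrapper import Model
from chomp import NLPSolver, SQPConfig

m = Model("toy")
x = m.add_var("x", shape=2, lb=0.0, ub=5.0, x0=np.array([1.0, 1.0]))
m.add_constr(x[0] + x[1] <= 4.0)                  # inequality constraint
m.minimize((x[0] - 2.0)**2 + (x[1] - 3.0)**2)     # convex quadratic objective
f_fun, cI, cE, x0 = m.build()
solver = NLPSolver(f_fun, c_ineq=cI, c_eq=cE, x0=x0, config=SQPConfig())
x_star, lam, info = solver.solve(mode="auto")
print(x_star, info.status)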
Filter globalization + trust region with active-set changes.
import numpy as np
from chomp import NLPSolver, SQPConfig
Q = np.diag([1.0, 10.0]); q = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0]) # equality: x1+x2=1
f = lambda x: 0.5*x@Q@x + q@x
c_eq = [lambda x: A@x - b]
solver = NLPSolver(f, c_eq=c_eq, x0=np.zeros(2), config=SQPConfig())
x, lam, info = solver.solve(mode="sqp", verbose=True)
print(x, info.status)
Log-barrier with Mehrotra predictor–corrector and fraction-to-boundary.
from chomp import NLPSolver, IPConfig
f = lambda x: (x[0]-1)**2 + (x[1]+2)**2
c_ineq = [lambda x: x[0] + x[1] - 1] # <= 0
solver = NLPSolver(f, c_ineq=c_ineq, x0=[3.0, -1.0], config=IPConfig())
x, lam, info = solver.solve(mode="ip", verbose=True)
print(x, info.status)