Ask HN: Do you use an optimization solver? Which one? Do you like it?
283 points by ryan-nextmv on April 20, 2022 | 153 comments
We’re trying to bring new ideas to the optimization space. We’ve used a bunch of technologies over time (CP, MIP, LP, heuristics, etc.). We’ve built our own solver (for https://www.nextmv.io) and learned a lot along the way.

Solvers are amazing. MIP solvers, for example, are some of the fastest and most mature software packages that exist. They’re also wickedly challenging to use effectively.

Do you use optimization solvers? Which ones? Why? Do you like them? Why or why not?




The history of state-of-the-art MIP solvers is fascinating. There are very few people in the world who can develop them, and there is a strong history of developers jumping ship from one company to another, tilting performance accordingly.

Initially, CPLEX and Xpress were founded in the eighties. In the nineties, CPLEX was acquired by ILOG (a French CP company), which in turn was purchased by IBM around 2009. Around the same time, the original technical co-founder of CPLEX, along with the two latest head developers, left CPLEX to found Gurobi. Since then, there has been a slow trickle of developers leaving CPLEX for Gurobi... until 2020, when CPLEX suddenly lost its 7 remaining devs over 6 months (because of catastrophic mismanagement at IBM, from what I heard). Unsurprisingly, those devs ended up mostly at Gurobi, resulting in the CPLEX team from 20 years ago being essentially Gurobi now. Other CPLEX devs also ended up at XPRESS, which had been purchased around 2008 by FICO (the credit rating company).

Meanwhile, there is also a smaller Danish company, Mosek, that does its own thing (they have a MIP solver, but their focus seems to be on their amazing conic optimization code). And SAS (the analytics giant) has a small MIP team too.

Then over the last 2 years, three new solvers appeared out of China: COPT (by Cardinal Operations, a startup by Stanford graduates), MindOpt (Alibaba Research) and OptVerse (Huawei). They mostly have LP solvers for now, but for newcomers, the performance is extremely impressive. This is only partially out of nowhere, though: COPT in particular has hired several devs from the incumbents.

On the academic side, ZIB (a PhD-granting research institute in Berlin) maintains a source-available family of solvers, and has been a steady provider of talent for commercial solvers. The dev behind SoPlex (LP solver) went to CPLEX after his PhD and now Gurobi. The main dev behind SCIP did the same, and is now VP of R&D at Gurobi. Many more XPRESS and Gurobi people did their PhDs at ZIB.

The Coin-OR open-source codes for LP (clp) and MIP (cbc) were written decades ago by a founding father of computational optimization, John Forrest, now retired from IBM research. Their source code is difficult to read, and Coin-OR aims to eventually replace them with a new code, HiGHS. The dev who wrote the simplex code of HiGHS as his PhD thesis went on to XPRESS and now COPT. The dev who writes the MIP code of HiGHS comes from ZIB.

As you can see, everyone is very inter-connected. Hence the throwaway :-).


If you're interested in the performance state-of-the-art for optimization solvers, the reference is Hans Mittelmann's regularly updated benchmarks [1]. They sometimes get criticized by solver makers (because one could easily game the benchmarks), but:

1. At least they exist! They give a good rough idea of the relative performance of the various solvers. Also, I think it provides a sense of competition that is great for innovation. Some of the newcomers push press releases about having "won a prize" when they leapfrog the competition in one of the (sometimes weekly) benchmarks.

2. For the MIP benchmarks [2], a lot of thought has gone into doing them properly [3].

Don't be scared by the "End of a benchmarking era" banner. In 2018, Gurobi cherry-picked numbers that made them look good in some promotional material, while suggesting endorsement by Hans Mittelmann. There was a whole fuss about it and they had to publicly apologize [4]. This was a very dumb move since they were the fastest solver anyways, just not by as much as they claimed. CPLEX and Xpress took this as an excuse to take their toys and go home (they have since refused to participate in benchmarking). Gurobi was banned from the benchmarks for a while, but now they're back.

[1] http://plato.asu.edu/bench.html

[2] http://plato.asu.edu/ftp/milp.html

[3] https://link.springer.com/article/10.1007/s12532-020-00194-3

[4] https://web.archive.org/web/20181224083542/http://www.gurobi...


Thanks! I didn't know that the benchmarking had resumed. Good news. I hope CPLEX and Xpress return someday.


you could also write a similar story about the predatory licensing of cplex and gurobi. when gurobi started their model was "we're the opposite of cplex, and we don't count cores". but they realized that wasn't making them enough money.


After several years of exploring, my current gotos are:

* Minion for IP for scheduling + edge crossing minimization.

* cvxpy (wrapping ECOS, OSQP, and SCS by default) for convex optimization for making nice geometry.

* Z3 and STP for SAT/SMT for program analysis.

All are FLOSS, which is my main criterion in many situations.

Beyond that, I like minion for its focus on only providing efficiently implementable primitives, cvxpy for the awesome lectures and documentation the folks behind it have produced, and z3 + stp for their use in tools I like, such as klee.


I'm curious what kind of person has 3 favorite constraint optimizers. What do you do that has you making use of so many different ones?


Not OP, but I work as a data scientist consultant, so applications across various industries require different types of solvers; e.g. mining operations optimization, branch location optimization, etc. would often each require a different solver depending on the problem space.


Not OP either, but there are many different aspects to solvers that make them useful in different situations.

For example, I would perhaps not use the same solver for a best-effort problem where I can program strong heuristics, vs a problem that must be optimized and no natural heuristics exist. Sometimes problems have a natural type of formulation, and it is best to use a solver for that type of problem (linear programming, mixed integer programming, SAT, pseudo boolean, constraint programming, ...).

Choosing a solver also very much depends on the deployment situation; how should a model be used and what operational concerns are there.


Doing all sorts of things will introduce you to all sorts of problems; from there, it’s just a matter of habitually looking around to try to see who/what has already worked on whatever problem is currently in front of you.

For example, I first found minion when I was learning about scheduling problems, initially to help a friend who was a Chief Resident on a pediatrics ward and later, to automate the administrative hassle of scheduling incident responders. (I wrote about that a little bit many years ago here if you’re curious: https://mstone.info/posts/scheduling/)

Concurrently with that, while I was working with and supervising security researchers, I spent quite a bit of time implementing various program analyses to answer questions about what certain programs of interest might do, and to find inputs that would make them do interesting things. For this purpose, though, minion is much less immediately relevant than Z3 since the interfaces and research interests of the relevant communities are so different.

Finally, this year, I am focusing on problems that have a more conventionally geometric flavor as opposed to the more discrete search spaces mentioned above. Here, convex optimization is a starting point that cvxpy makes incredibly accessible, especially in combination with Stephen Boyd’s Stanford SEE EE364A lectures (or similar): https://see.stanford.edu/Course/EE364A


Decade+ CISO of Akamai, math bent. http://www.mstone.info/resume/resume.pdf


Yep, that'll do it. :) Thanks!


I'd love to see an answer to this, too. Just to learn a little bit more about the field!


What are your thoughts on GLPK?


I would love to use one or more but the process to convert business logic to solver is painful so I ended up having to write a simulated annealing algo in Rust instead. I tried solver.com, Google OR-Tools, and a few other utilities.

It was much easier to build a score-calculator for min/max based on user-tweaked parameters, then, jiggle the data, re-calculate score, and keep doing it until there was significant improvement (again, standard SA). I would absolutely love to convert the entire production plan logic with material availability, lead-times, customer demand, quality control windows etc. to something like nextmv.io but looking at your docs, I have no idea where to begin.

Cost is a big factor too because 3 years ago I bought 4 old 24-core Xeons off eBay and they've been chugging non-stop simulating billion+ flops per hour with electricity being the only cost. I don't mind paying $50-100/day for cloud if the results are great and code is easy to manage. We have the same chicken-egg problem everyone in supply chain currently has - we don't have enough materials to make everything, don't know when we'll get everything, and so don't know the best order to buy/make everything in. I would love to write a solver for this using our dataset but I kind of don't want to re-invent the wheel.

As it stands, every solver I find is one layer of abstraction away from what I want to code in. I can explain the problem in length if you want but it's honestly nothing unique - just the standard MRP/ERP planning with ton of BOM items, PO delays, labor/machine capacity constraints etc.

Your existing tutorials explain how I can use your API/SDK to perform OR operations. That's great and a necessity. However, it's not sufficient for me because my questions are: How do I represent my production calendar in the JSON blob for your algo? How do I put a constraint of 100hrs/week/machine but also 168hrs/week/room full of specific machines. In other words, while each machine can run 100hrs/week, if there are 4 of them in the same room, only one can run at a time, and so combined machines in a given room cannot be over 168hrs/week. Maybe a tutorial or a higher-level library to help people like me convert business rules into JSON format for your APIs. Because even if I might be capable of using your API as-is, I unfortunately don't have the time to implement these things myself. Hope this makes sense and gives you some insight into at least one of your target use-cases.


Thanks for looking at our docs in light of your problem domain. That's way beyond what I anticipated, and much appreciated!

We haven't put a lot more examples into our SDK docs lately since we've been working more on our routing engine and cloud offering. Now we're getting back to more foundational questions like "what should our solver be?" and "how should model formulation work?"

Hop started off as a decision diagram solver, but even internally we strap a lot of different techniques onto it. My hope is to support those patterns better, which is really why I posed this question.

I'd be interested to know: what made the process of converting business logic into solver-speak painful?


> what made the process of converting business logic into solver-speak painful?

The fact that every solver doc I found shows me two near identical variables and 4 identical constraints. I can solve that using Excel Add-In. My problem is mind-numbingly complex, like everyone else doing supply-chain optimization. So I need examples for each type of constraint I have but instead, the tools expect me to figure out everything based on very generic examples. Dates are a big deal in what I do. I have no idea how to convert "each day the customer order is delayed is bad, but I also can't make things 3 months before they want it" into solver-speak. Cleaning a machine takes time, so it is often better to make similar things in a row (campaign-mode). No idea how to convert that to solver-speak.

The good thing is that once I understand how to convert my data into solver-speak, I can keep it updated on the fly since business logic doesn't change daily, just the data-set.

Feel free to contact me directly and/or we can Zoom etc. if you wish to discuss further. Wish you the best of luck either way!


> I need examples for each type of constraint I have but instead, the tools expect me to figure out everything based on very generic examples

if your problems can be attacked by LP / MIP type stuff, there's a book "model building in mathematical programming" by Williams that has a couple dozen different optimisation problems for a range of applications, with worked examples of how to formulate a model to solve each of them. each problem will be much less complex than your real-world optimisation puzzle but if you browse through the problem descriptions you might be able to match some against parts of your situation & get ideas for how Williams would model and optimise it.


In a previous life, when I worked with solvers a lot, this was exactly my problem too: the examples in the documentation are always way too simple.

What I saw often, and this was in finance, was that one or two developers gained proficiency in converting problems to a specific solver tool, and then we could never move to a different library because it would have been so much effort to relearn.


"My problem is mind-numbingly complex, like everyone else doing supply-chain optimization" brings me joy. I'll definitely reach out - would love to get your reaction to some of our new ideas.


> "each day the customer order is delayed is bad, but I also can't make things 3 months before they want it

This would be something in the objective function, I think? Assign some daily cost to having stock in inventory and a penalty per day that the "supply delivery to customer" task completes after the deadline.

IIRC ILOG solver used to have a lot of constraints that made implementing that kind of thing easier.
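
For what it's worth, here's a minimal sketch of that idea using python-mip (any MIP interface would do); the order names, due dates, the 90-day earliness cap, and the penalty weight are made-up placeholders:

    from mip import Model, xsum, minimize, INTEGER

    orders = ["A", "B", "C"]
    due = {"A": 30, "B": 45, "C": 60}           # due dates in days (hypothetical)
    LATE_COST, EARLY_LIMIT = 100, 90            # cost per late day; max days early

    m = Model()
    finish = {o: m.add_var(var_type=INTEGER, lb=0) for o in orders}
    late = {o: m.add_var(lb=0) for o in orders}
    for o in orders:
        m += late[o] >= finish[o] - due[o]      # days late (0 if on time, since we minimize)
        m += finish[o] >= due[o] - EARLY_LIMIT  # can't build too far ahead of demand
    m.objective = minimize(xsum(LATE_COST * late[o] for o in orders))
    # ...plus the capacity/sequencing constraints that make the real problem hard
    m.optimize()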


Reading this from the perspective of a naive outside observer, I'm wondering (just thinking out loud) about the potential value of a hybrid approach of a) providing customers who know what they want a self-service API, b) supporting customers who just want their problems solved with consulting, and a+b) offering joint R&D scenarios in select circumstances that would progressively add support for novel workloads over time (like what's described here) in the pursuit of differentiation. Obviously this would be aligned with use cases that don't require immediate solutions/consulting. The resulting data might be worth it. And this all obviously depends on where ease-of-use sits, given that's basically NP-hard to resolve fully in this domain.


What you are describing is a scheduling problem. It is not that difficult to solve using standard CP techniques. Create an Nx168 matrix of 0/1 variables for each room containing N machines, where each row represents one machine's activity over the 168 hours of the week. Constrain the sum of each column to <= 1 (at most one machine in the room active in any hour). Constrain the sum of each row to <= 100 (each machine runs at most 100 hours per week). CP is all about "tricks" like these and takes a while to get used to. I can recommend the Python interface to OR-tools which I found very easy to work with.
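
A rough version of that trick with the OR-Tools CP-SAT Python interface (hour-level 0/1 variables; the maximize-total-runtime objective is just a placeholder):

    from ortools.sat.python import cp_model

    N_MACHINES, HOURS = 4, 168
    model = cp_model.CpModel()
    # run[m][h] == 1 iff machine m is active during hour h of the week
    run = [[model.NewBoolVar(f"run_{m}_{h}") for h in range(HOURS)]
           for m in range(N_MACHINES)]

    for h in range(HOURS):                      # at most one machine per room per hour
        model.Add(sum(run[m][h] for m in range(N_MACHINES)) <= 1)
    for m in range(N_MACHINES):                 # each machine at most 100 h/week
        model.Add(sum(run[m][h] for h in range(HOURS)) <= 100)

    model.Maximize(sum(run[m][h] for m in range(N_MACHINES) for h in range(HOURS)))
    solver = cp_model.CpSolver()
    solver.Solve(model)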


Yes, translating the problems is still the hardest part. Once you can write down the MIP, it's just syntax to get it into any old solver. It took me two to three days to write the I/O to get data in, formatted properly, and so on, and about half a day to write the MILP to solve ship-design optimization in Highfleet.

https://github.com/jodavaho/highfleet-ship-opt


You raise a very good point in that the "formulate the problem the way the solver wants" step is legitimately difficult and full of pitfalls. Simply figuring out the translation can be hard, and even then there are many ways to formulate a problem which are mathematically equivalent but have drastically different performance when fed to the solver.

It really feels like a tools or language problem. Heck, we used to have to manually work out derivatives for continuous optimization problems, but nowadays programming languages with performant built-in autodiff often make this trivial. Removing the manual derivation hassle let loose a flood of cool ideas and applications, even though there was no technical hurdle preventing them in the first place.
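
As a toy illustration of that point (PyTorch here, but JAX or similar works the same way): you write only the objective and the gradients come for free:

    import torch

    def objective(x):                  # any differentiable expression you can write down
        return torch.sum((x - 3.0) ** 2) + torch.sin(x).sum()

    x = torch.randn(5, requires_grad=True)
    opt = torch.optim.LBFGS([x])

    def closure():
        opt.zero_grad()
        loss = objective(x)
        loss.backward()                # autodiff: no hand-derived gradient anywhere
        return loss

    for _ in range(20):
        opt.step(closure)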

Alternate problem specification is a well-explored area (what is Prolog if not a way of describing problems for a constraint satisfier?), but I wonder how many other neat things are dammed up behind usability problems.


> so I ended up having to write a simulated annealing algo

I think there are much better algorithms in metaheuristic search than just simulated annealing.


Certainly, but I didn't and still don't have the resources to write a solving algo on top of the business logic that goes into the algo. I would much rather use my 20+ years of manufacturing ERP experience to set up the problem specific to our use-case than read research papers and implement generic CS algos.


> don't have the resources to write solving algo on top of the business logic that goes into the algo

You already did that once, for your simulated annealing code.


Doesn't have to be the best, just has to be good enough.


would you be able to offer some examples and some discussion of how to choose?



> Handbook of Metaheuristics

There is a third edition, 9 years later: https://link.springer.com/book/10.1007/978-3-319-91086-4


Great coverage — simulated annealing, genetic algorithms, ant colony optimization.


Genetic algorithms are known to produce quasi-optimal results in a short time, if set up accordingly. They can be also applied to problems where constraints are very complex to explicitly formulate. Designing one is no picnic, though, and always problem-specific.


Interesting.

What do you mean by quasi optimal?

Genetic algorithms produce quasi-optimal results but simulated annealing will not?


Quasi-optimal means you can't prove that the algorithm will always find the optimal solution, though for all practical purposes, the algorithm will find it more than 99% of the time.


cplex comes with a great heuristic now (odh), and gurobi too.


Honestly that constraint is pretty straightforward to formulate as a MILP. Four machines, each has a starting and ending time as decision variables. Total duration by machine <= 100 hours, and add non-overlapping constraints between the machines. Each machine's start time >= 0, each machine's end time <= 168 hours.

I'm doing almost exactly this right now on a client project (I consult in supply chain optimization)
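
For concreteness, here is a sketch of that formulation with python-mip, treating each machine's weekly usage as one contiguous block (a simplification) and using big-M constraints for the non-overlap; the objective is just a placeholder:

    from mip import Model, xsum, maximize, BINARY

    N, CAP, WEEK = 4, 100, 168               # machines, hours/machine, hours/room
    m = Model()
    start = [m.add_var(lb=0, ub=WEEK) for _ in range(N)]
    dur = [m.add_var(lb=0, ub=CAP) for _ in range(N)]
    end = [m.add_var(lb=0, ub=WEEK) for _ in range(N)]
    for i in range(N):
        m += end[i] == start[i] + dur[i]
    for i in range(N):                       # machines in the same room never overlap
        for j in range(i + 1, N):
            i_first = m.add_var(var_type=BINARY)
            m += end[i] <= start[j] + WEEK * (1 - i_first)
            m += end[j] <= start[i] + WEEK * i_first
    m.objective = maximize(xsum(dur))        # placeholder: run as much as possible
    m.optimize()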


Curious to know: how do you solve this today? What is your abstraction/solver-speak today? I have also faced this (though I'm not a power user like you), so I'm curious what you ended up using.

I have a niggling feeling somewhere that JSON is the wrong language for this. Something like cue lang may be the right thing. (https://www.sobyte.net/post/2022-04/cue/)


I don't 'solve' it using linear optimization today. I use https://en.wikipedia.org/wiki/Simulated_annealing which basically amounts to (1) calculate a score based on the dataset (2) randomly change some data that affects the score (3) discard the change if the score is much worse, keep it if slightly better or worse (4) repeat until the new score is much better than the original score. I allow it to get a little worse over time but if it is much worse, I basically restart from the last-best score.

The nice thing about scoring with SA is that I don't need to think like an operations research student. I just think in code with if/then/else and the number of dimensions doesn't matter. The code is working fine for now, but the problem is a new business requirement to make it MUCH more complex and factor in a whole new set of constraints. SA will straight up not work with it because the scoring itself can take tens of seconds now. So I need to come up with a new 'proper' way, hence my renewed interest in solvers.
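
For readers following along, the loop described above is roughly this (a generic minimization sketch with textbook Metropolis-style acceptance rather than the "restart from last best" variant; the score/jiggle functions and cooling schedule are whatever fits your problem):

    import math, random

    def simulated_annealing(initial, score, jiggle, steps=100_000, t0=1.0, cooling=0.9995):
        current = best = initial
        cur_score = best_score = score(initial)
        t = t0
        for _ in range(steps):
            candidate = jiggle(current)           # randomly change some data
            cand_score = score(candidate)
            # keep improvements; accept worse moves with a probability that shrinks over time
            if cand_score <= cur_score or random.random() < math.exp((cur_score - cand_score) / t):
                current, cur_score = candidate, cand_score
                if cur_score < best_score:
                    best, best_score = current, cur_score
            t *= cooling
        return best, best_score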


Sounds like you have a good solution for your problem. Whatever solves the problem and is already done is naturally a good solution!

From my experience, the most important thing to get right in any local search algorithm is the state representation and the moves generated. If those work well, then pretty much any combination of metaheuristics will do: simulated annealing, tabu search, population-based search, restarts, parallel exploration, portfolio methods, and so on. Quite often the literature focuses only on the metaheuristic, but not so much on the representation, which IMHO can be an issue.

As an example, for some problems that I have solved with local search, the biggest impact in improving performance of the algorithms has been in improving the base: making moves and score updates fast, and making the generated moves meaningful in the context.


This is super interesting - what is a "score"? I'm trying to model your problem as a neural net problem in my mind. How do you transform a classic linear optimization problem (which is a bunch of equations) into a score?

Pretty sure there are heuristics at play... but how would you do it? Take your own example of

>constraint of 100hrs/week/machine but also 168hrs/week/room full of specific machines. In other words, while each machine can run 100hrs/week, if there are 4 of them in the same room, only one can run at a time, and so combined machines in a given room cannot be over 168hrs/week


Now this sounds like a proper fun problem! :)


I loved the JuMP package in Julia for being able to write models once, then swap in different solvers.

Most open-source solvers don't handle parallelization well, and they lack the latest research on techniques like branch-and-cut and heuristics that can speed things up significantly.

In my experience, Gurobi is still the leader for linear and MIP solving. But it's really expensive and the licensing terms seem anachronistic.

SimpleRose.com was a startup working on a new solver, too - I'm curious if anybody has tried it yet?


I love JuMP too. I am very amateur when it comes to optimization, but I found it to be a very efficient way to describe a matching algorithm for a post card exchange I ran (think of it as a Secret Santa, but generalized to each person sending multiple gifts, and discouraging mutual exchanges.) Here's the code: https://twitter.com/paulgb/status/1462483698427781120

To OP's question, besides JuMP, the other use case I've had for optimization is for optimizing the drawing order of pen plots, for which I used or-tools. A write-up on it is here: https://nb.paulbutler.org/optimizing-plots-with-tsp-solver/


> here’s the code

> link to twitter

visible_confusion.png.jpg.exe


I hadn't before realised that you can circumvent the Twitter character limit by turning your message into an image and attaching it. Clever!


It’s funny, despite the code being on GitHub, Twitter search is the fastest way for me to find it. Searching my own tweets as a bookmarking tool is its hidden killer feature.

Images aren’t ideal for sharing code, of course, but it’s not like this example works as a standalone program.


I haven't looked at JuMP in a while - the last time I tried it was when I was still doing personal blogging (https://ryanjoneil.github.io/posts/2014-07-18-are-we-getting...).

I remember liking JuMP, but Julia itself didn't feel ready yet. Some of the packages had weird behaviors. For example, Gadfly took several minutes to render some of my charts. IIRC when I looked at the source, it was solving a MIP to compute the best bounding box for display purposes.

I should probably check it out again.

Also: agreed regarding Gurobi licensing.


Julia (and the Gadfly package, which it's worth noting is not a standard library or anything like that) have come a long way since 2014. JuMP too. Julia was indeed "not ready" in 2014; it wasn't anywhere near the 1.0 release. We're now on the 1.7.x releases: stable and guaranteed backward compatible since 1.0, and a lot of quality of life improvements between 1.0 and 1.7, including much faster compilation.

If I'm solving an optimization problem for personal curiosity, a blog post, etc., JuMP is the tool I reach for. :)


Wow, 2014 and Julia v0.2! That's pretty amazing really haha. That's the "before times". That's so long ago that most of the things tracking Julia's growth start tracking after that time, and the vast majority of users today hadn't even heard of Julia at that time. I would definitely open up Julia again some time treating it like essentially a wholly different language. Things like precompilation, package compiler, Plots.jl, etc. only came into being much later, so definitely OP should scratch the old workflow and try it fresh.


HiGHS is now the default open-source LP/MIP solver in JuMP documentation. Performance-wise, for MIP HiGHS is way ahead of Cbc


At Zoba we use CLP, CBC, NLOpt, and OR-tools. Used to use Gurobi.

* CLP/CBC: open source makes deployment and devops easy, which is great. Linear models are nice in that you "know what you're getting." Performance is at times a pain point.

* Gurobi: super fast, but the licensing was just impossible. Partly that was due to high cost, but ultimately we could have borne the cost; the inability to do something like have autoscaling containers using Gurobi was ultimately the dealbreaker for us. As Zoba grew, we had to turn to alternatives.

* NLOpt: absolutely a blessing, but the variety of algorithms and tuning parameters is really opaque, even for a team with some OR experience/background.

* OR-tools: powerful but the documentation is remarkably terrible, and the vehicle routing problem solver doesn't natively support all the constraints we'd like, so we have to do some hacks.

Overall my feeling for all these tools is roughly gratitude: solvers are complex, and rolling our own would be absolutely impractical at our size. But also there's some pain in between "I've formulated my problem as a nice mathematical model" and "I can reliably get optimal results on time."


Gurobi is so frustrating. I had the same experience: blistering performance on my problems, much better than OR-Tools, but just couldn't make it work at all with the licensing. It's like they've never heard of the cloud, or had any concept that anyone would use their software in any way other than big "batch" jobs on an in-house machine. I feel like someone could make a killing just buying Gurobi and making it work in a modern way.

NextMV was practically the opposite (at the time, I'm sure it has improved now, especially since they used to be far more insistent on decision diagrams): rather bad/terrible performance, but excellent in terms of licensing and deploying the code, and they had great support too. The modern deployment made sense given they were a new/modern company. One silver lining to the terrible performance, and why I used to stick up for them at the time was that you could get somewhat acceptable results fast: if you stopped after one second, CBC might be absolutely nowhere, but NextMV's solver would at least give you something. This meant you could do things that made use of extremely fast results, like trying a configuration and checking the (approximate) solution, then trying a bunch more, all very quickly.

In the end I mostly settled on OR-Tools.


Disclaimer: I work for Gurobi, but these views are my own.

---

I'm sorry to hear that you find our licensing frustrating. It is true that in the past we focused a lot on local deployments holding on to the notion of a physical machine. However, we have added a few licensing options that you might find interesting, specifically the Web License Service (https://www.gurobi.com/web-license-service/), where you can get a short-lived JSON Web Token inside of a Docker container which renews automatically. You can find our Docker Hub images here (https://hub.docker.com/orgs/gurobi/repositories).

There are also other ways like the Instant Cloud (https://www.gurobi.com/products/gurobi-instant-cloud/) which give you a very scalable SaaS approach.

Feel free to reach out to me (<lastname>@gurobi.com) to discuss more!


Do consider HiGHS. MIP performance is way ahead of Cbc now. For LP, our simplex solver is comparable with Clp, and our interior point solver is well ahead of any open-source solver.


I noticed that your comment contains no complaint about Gurobi's performance. For comparable problems, was Gurobi superior in solution convergence speed or the quality of the solution itself?

Gurobi, AFAIK, is considered the leading edge of the field, and other tools, especially open-source ones, do not perform as well.

I cannot speak specifically for your firm and use cases, but in my experience, if the use cases are structured and specific enough, rolling out an in-house optimizer is not such a bad idea.


Gurobi had excellent speed for us, but we didn't notice a difference between Gurobi and open source tools on final solution quality; all the linear solvers we've tried get to certifiable optimality on our problems, some just faster than others.


Disclaimer: I work for Gurobi but views are my own

---

Sorry to hear you found our licensing problematic! You might find interesting that we now have developed a Web License Service (https://www.gurobi.com/web-license-service/), where you can retrieve a short-lived JSON Web Token inside of your container to run Gurobi; our Docker Hub images are here: https://hub.docker.com/orgs/gurobi/repositories.

From what you are saying ("autoscaling containers"), this may be a good fit for you. What do you think?


The last time I tried to optimise something I ended up with a column generation formulation. I.e. I was wanting to rapidly iterate between LP solves of the (restricted) master problem & solves of auxiliary problems through hand written problem-specific algorithms taking shadow prices from the master problem dual solution as inputs. Then the auxiliary solution would contribute new variables into the master problem & we'd iterate until hitting a fixed point.

I needed shadow prices defined using the master problem dual solution. In my problem instances I would very often run into scenarios where the primal (and hence also dual) master LP problem had a unique objective value but the dual solutions at which that optimum was attained were non-unique. I learned that it only makes sense to talk about shadow prices in some allowable range for each dual decision variable, but LP solvers generally don't give you an API to extract much information about this from the current internal state of the simplex algorithm [0]. I read a bunch of LP solver documentation and the best one I found discussing this kind of stuff was the section in MOSEK's manual about sensitivity analysis [1]. This was for a hobby project, not business, so I didn't try out MOSEK, even though it looks very modestly priced compared to other commercial packages.

What I did find, however, was that some time during the last few years, scipy.optimize grew an LP solver interface, and even better, Matt Haberland contributed a python implementation of an interior-point LP solver straight into scipy [2][3]. I found that Haberland & co's open source interior point LP solver produced dual solutions that tended to be more stable and more useful for shadow prices in my problems than a bunch of the other open source LP solvers I tried (including the various other LP backends exposed by scipy.optimize.linprog).

[0] a paper I found very helpful in understanding what was going on was Jansen, de Jong, Roos, Terlaky (1997) "Sensitivity analysis in linear programming: just be careful!". In 1997 they got 5 commercial LP solvers to perform a sensitivity analysis on an illustrative toy problem, and although the solvers all agreed on the optimal objective value, none of them agreed with each other on the sensitivity analysis.

[1] https://docs.mosek.com/9.2/pythonapi/sensitivity-shared.html

[2] https://github.com/scipy/scipy/pull/7123

[3] https://docs.scipy.org/doc/scipy/reference/generated/scipy.o...
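
In case it's useful to anyone heading down the same column-generation path, here's a minimal restricted-master / pricing loop in that spirit, using scipy.optimize.linprog on a cutting-stock style toy (hypothetical data; assumes SciPy is new enough that the HiGHS backend exposes duals via res.eqlin.marginals, and dual sign conventions are always worth double-checking against your solver):

    import numpy as np
    from scipy.optimize import linprog

    ROLL = 100
    sizes = np.array([45, 36, 31, 14])
    demand = np.array([97, 610, 395, 211])

    # start with one trivial cutting pattern per piece size
    patterns = [np.eye(len(sizes))[i] * (ROLL // sizes[i]) for i in range(len(sizes))]

    for _ in range(50):
        A = np.column_stack(patterns)
        res = linprog(np.ones(A.shape[1]), A_eq=A, b_eq=demand, method="highs")
        duals = res.eqlin.marginals           # shadow prices of the demand rows

        # pricing: integer knapsack over the roll length, maximizing dual value
        value = np.zeros(ROLL + 1)
        best = np.zeros((ROLL + 1, len(sizes)), dtype=int)
        for cap in range(1, ROLL + 1):
            for i, s in enumerate(sizes):
                if s <= cap and value[cap - s] + duals[i] > value[cap]:
                    value[cap] = value[cap - s] + duals[i]
                    best[cap] = best[cap - s]
                    best[cap, i] += 1
        if value[ROLL] <= 1 + 1e-9:           # no negative-reduced-cost column left
            break
        patterns.append(best[ROLL].astype(float))

    print("LP bound on number of rolls:", res.fun)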


I've used a custom-made implementation in Java (proprietary, based on the owner's thesis) to optimize train crossings for big mining companies like Vale, BHP and Rio Tinto. It was stressful but super fun work!

Lots of money to be made on it too if you guys are interested, but it's super niche and hard to get into. There is huge resistance from train controllers and other workers. I actually understand it because of the job loss involved, but it was super cool being in a NASA-like control center surrounded by panels and monitors and seeing the trains moving based on code I and others wrote!

It was a just-in-time local optimization with lots of heuristics and business rules embedded into it. Basically impossible to reuse between companies, or sometimes even between railroads. The controller would then solve all the more complex crossings that involved either some lose-lose choice or a pre-defined business decision.

The train controllers are amazing at their jobs; it's super stressful and a single mistake can kill people or make the whole thing stop for weeks. With the software running it was a lot less risky: one dude could control, with minor interventions, an area that needed 7 or more people without it.


That's awesome.

A comment on a recent thread about a train IT failure went on a bit of an interesting tangent about ahead-of-time network scheduling in (IIUC) the Netherlands' TURNI system - https://news.ycombinator.com/item?id=30902585

(The whole thread was a bit of a wide-spectrum ramble, as one might expect for a downtime event.)

So you're saying you were actually doing JIT routing as opposed to AOT? The linked system apparently precomputed the trip<->driver/conductor schedules overnight. I wonder if they're still using that approach today. It does feel like a JIT approach is much more amenable to handling the unpredictable non-spherical real world (eg electricity issues, track breakage, crashes at crossings, train malfunctions that block tracks (right on junctions >:D), etc).

This sorta thing is definitely beyond my own mental level :) but for reference, how would someone interested get their foot in the door in this area?


So we thankfully didn't have to worry about conductor schedules; another system our company built did that, and we just worked on the assumption that the cow was spherical. This alone made a world of difference in segmenting whose head was exploding at which time.

All the rest was on us: failures, breakages, crashes (we never had one!!!). Blocks were super interesting, btw!

So I got on it by chance. I just did interviews as a sport when I was in my early 30s, and at some point I came across this position: pay was awesome, lots of travel involved, etc... So I joined and learned it all there. It was a small mom-and-genius software house that specialized in all types of JIT planning for railroads.

JIT was actually "THE" solution to it, because the controllers could just try it during the day. So let's say they knew 18:00 was the worst time and there was a huge train coming in at that time with priority: the controller would sometimes just throw shit at the system to see how to save the most time. Of course he knew how to do it by heart, but was it the "best way"? We could help him answer that super fast.

P.S.: I am not Dutch but I lived there for a while. I remember thinking about how much of a bad day it would have been if it was me; I knew exactly what the NS people were feeling :D


For MIP and LP I have used CPLEX, Gurobi and to a lesser extent Cbc. I used those three via JuMP (the Julia package for mathematical programming) and Gurobi via pulp and pyomo. Of the three, I think Gurobi has very accessible documentation (note that I am not saying better or more complete, which would go to CPLEX), and the integration with Python straight out of the box is very useful. Cbc is a lifesaver when we couldn't access the academic licenses of the other two. Overall, I think CPLEX/Gurobi are my favorites with a slight edge to Gurobi. I have tried formulating problems using .lp files and GAMS, but JuMP is so much more ergonomic even if it's strictly tied to Julia (which I find to be a good thing).


I used Gurobi for logistics routing problems years ago and loved it. Especially loved that I could get a fully functional trial that was limited by total variables to develop my problem, and use the licensed version for the production problems.


For open-source, the HiGHS MIP solver vastly outperforms Cbc now, and is easily called from JuMP.


Matlab toolboxes: Lots of algorithms. Good documentation. Converting LPs/QPs into standard forms is kind of a fun puzzle but very error-prone. Manual gradient/Jacobian for general nonconvex/nonlinear problems can be painful. Not open-source, so I stopped using it.

SciPy.optimize: Similar pros/cons as Matlab other than open-source.

CVXPY (& its default backends): Modeling languages are great. First thing I will try for a new convex problem (a small sketch follows this list).

CVXGEN: Amazing, but infuriating that it can only be used through a web app.

PyTorch: Only supports unconstrained first-order methods. Automatic differentiation of arbitrarily complex functions is huge. Somebody should implement interior-point and SQP on the GPU for PyTorch.
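
Since CVXPY came up a few times in this thread, here's a tiny sketch of why the modeling layer is so pleasant: a constrained least-squares problem with made-up data, where the solver backend is whatever CVXPY picks by default.

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)

    x = cp.Variable(10)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),
                      [x >= 0, cp.sum(x) == 1])
    prob.solve()                       # default backend (e.g. OSQP/ECOS/SCS)
    print(prob.value, x.value)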

---

As a researcher, my first impression is that your product is designed for people who want to deploy optimization in some service or business process, not for me.


Optaplanner fascinates me, but I have no idea what I could be using it for, and the learning curve seems quite heavy.

https://www.optaplanner.org/


You can use it for Vehicle Route Problems. I did so once as an interview project but never in production. If you start from some example problem, the learning curve isn't so steep.

https://en.wikipedia.org/wiki/Vehicle_routing_problem


We've done a lot lately to reduce the learning curve for OptaPlanner (open source).

Have you seen our quickstarts repo? https://github.com/kiegroup/optaplanner-quickstarts It features various use cases across various technologies.


Prolog and CLP is an interesting and powerful combination as documented here: https://www.metalevel.at/prolog/optimization. In particular the CLPZ library is very powerful: https://github.com/triska/clpz.


I use JuMP as a modeling language. For MILP I am usually using Gurobi or SCIP. For ILP problems I have been looking into the exact solver https://gitlab.com/JoD/exact which seems quite promising.

For NLP I usually go with either https://worhp.de/ or just Ipopt.


<3 SCIP. About a dozen years ago I used to write the python-zibopt library (now defunct and rendered moot by the far superior PySCIPOpt).

Hadn't heard of JoD - that looks really interesting. What are your experiences so far?


JoD is Jakob Nordström. He gave a talk on the predecessor of Exact here: https://www.youtube.com/watch?v=yvXy7Nzn-IA

My experience so far: Quite good at finding feasible solutions when constraints are complex. Really young code with a lot of potential. Not that good at finding optima.


At my current job we use Optaplanner in a project, to move around seat reservations (in blocks of several hundred) whilst keeping the new seats as similar as possible to the old ones (each seat having 'attributes' with various weights). I mostly chose it due to being JVM (although it needed a little Java shim to make it usable from Scala)

In grad school I used MiniZinc to find exact optimal subsets for a certain problem, which took AGES and was only practical up to sets of size 11, but I could use its output as a benchmark for my own approximate solver.


I’m curious to know how your experience with Optaplanner has been. As I understand it, you model your problems quite differently, and it is more of a search for a good-enough solution using metaheuristics?


I think Optaplanner has its own DSL, but we use the Java/JVM API: a 'solution' is just a normal JVM object, which references other objects in the usual way. Certain fields are annotated, which tells Optaplanner that it can mutate/alter them.

Implementing the domain objects is pretty natural. The only part that feels weird is that solving is completely based around mutation; whilst that's pretty idiomatic for Java, it feels a bit alien in Scala (I previously worked mostly in Haskell, and haven't written Java in over a decade).

There's a bunch of configuration boilerplate; including the following, which is the only Java code in the whole project (and almost looks like a parody of "Enterprise Java"!):

    class Config {
        public static ScoreDirectorFactoryConfig config =
            (new ScoreDirectorFactoryConfig())
            .withEasyScoreCalculatorClass(ScoreCalculator.class);
    }
IIRC, this had to be Java due to some weird self-referential interface that makes Scala's type-checker explode.

We can give each solution both a "hard" score and a "soft" score, where the soft score will be sacrificed if it improves the hard score. We use the hard score to count errors which make a solution invalid (e.g. trying to assign two people to the same seat); and the soft score as a weighted sum of mismatched seat attributes (e.g. it would be nice to stay facing the same direction; but not crucial).

Once the problem has been defined, it can be sent to a bunch of solvers; we use simulated annealing with a timeout. Since it's an anytime-algorithm, we can adjust the timeout to however long we're prepared to wait.

To avoid "obvious" problems in Optaplanner's results, we send them through a simple hill-climbing loop to ensure we're at a local maximum.


I've been using CBC via python-mip (https://github.com/coin-or/python-mip). It's great because it's got a super clean interface (milp variables/expressions/constraints), the code is quite accessible, and it's low overhead which makes it good for solving many very small problems.
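
For anyone who hasn't seen it, the interface really is minimal; roughly the kind of tiny knapsack toy its examples show (hypothetical data):

    from mip import Model, xsum, maximize, BINARY

    profit = [10, 13, 18, 31, 7, 15]
    weight = [11, 15, 20, 35, 10, 33]
    capacity = 47

    m = Model()
    x = [m.add_var(var_type=BINARY) for _ in profit]
    m.objective = maximize(xsum(profit[i] * x[i] for i in range(len(profit))))
    m += xsum(weight[i] * x[i] for i in range(len(weight))) <= capacity
    m.optimize()

    chosen = [i for i, v in enumerate(x) if v.x and v.x >= 0.99]
    print(chosen)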

Community sentiment seems to be beginning to shift toward favouring the HiGHS solver (https://github.com/ERGO-Code/HiGHS) over CBC. Something I'm keeping a close eye on.

nextmv seems to pitch itself as a generic solving ("decision automation") platform or something (unclear). But it seems that the only fleshed out product offering is for vehicle routing, based on the docs. Are there plans to offer, for instance, a solver binary that can be used to solve generic problems?

Also all the github repos under https://github.com/nextmv-io are private, so links from docs are 404.


I use different solvers for different things, depending on the type of problem to solve and the goal of solving the problem.

* MiniZinc is my favourite tool for prototyping models. Looking into the feasibility of using it in a production environment as well. Typically models are small to medium sized. Also using it for recreational problem solving (e.g., Advent of Code).

* Gecode is my go-to solver for writing applications where I need more control over the solving process or I want to write a custom propagator or heuristic. Used it for scheduling, planning, and configuration.

* When in the JVM ecosystem, I've used Choco.

* I've used OR-tools some, but would like to be better at it. Mainly because OR-tools has a nice set-up with a lazy clause generation solver, good automatic heuristics, and a nice portfolio solver for parallel work.

* Quite often, a custom optimization heuristic is also the right tool for the job.

I've tried to use Gurobi sometimes, but for the problems I've tried, it has either been too hard to model effectively or was not a good fit. The licensing cost is a limiting factor as well.


Optimisation solvers are complex and incredibly powerful pieces of software. However, there are lots of different options and choosing which to use can be a daunting task.

There are some open source options ([COIN](https://www.coin-or.org/), [OR-tools](https://developers.google.com/optimization), [Minion](https://constraintmodelling.org/minion/), [CVXPY](https://www.cvxpy.org/)) and other commercial offerings ([gurobi](https://www.gurobi.com/), [Mosek](https://www.mosek.com/)); other people write their own for narrower, more specific purposes. Some [benchmarks](http://plato.asu.edu/bench.html) are maintained by Hans Mittelmann.

Personally, I have used OR-tools, maintained by a team at Google, for vehicle routing optimisation and found it powerful but poorly documented with an inconsistent API. I've also used R's [optim](https://www.rdocumentation.org/packages/stats/versions/3.6.2...) function and [lpsolve](https://cran.r-project.org/web/packages/lpSolve/index.html) for linear and integer problems.


As a humble programmer with only occasional LP needs, not a data scientist or analyst, I have used Google OrTools with the C# binding. I do not understand the solution techniques, but the interface is simple enough that I have been able to produce useful optimization results without needing to ask an actual data person to help me. The problems I've encountered have just been difficulties in expressing my problem in LP terms. Once I've done so, using OrTools is essentially just typing it in.

One thing I don't like about OrTools is that it seems there are better solvers available if you compile it from source, but I failed to get it built. As a result, I use the "CBC_MIXED_INTEGER_PROGRAMMING" solver because it's built into the precompiled libraries, not because it's necessarily the best one. It's unclear to me what benefit other solvers offer; the solver doesn't seem to be where my problems typically are.


If I were doing another serious large-scale commercial optimisation application where it was more valuable to get a pretty good feasible solution rapidly rather than potentially wait a long time attempting to find a provably optimal solution, I would be very interested in seeing how localsolver performs.

Often the mathematical model of the real world problem or the input data used to parametrise the model has a fair bit of approximation error (e.g. assuming parameters are deterministic when actually they are uncertain, linearising things to bash them into the MIP modelling framework, etc.), so pragmatically it doesn't often seem useful to be too bothered about getting an optimal solution to an approximate problem vs getting an approximate solution to perhaps a better model approximation of the true situation.

https://www.localsolver.com/


+1 on being interested in seeing how LocalSolver performs. They seem very self-confident in their benchmarks (https://www.localsolver.com/benchmarks) and so it would be very interesting hearing from someone with hands-on experience from real-life applications using LocalSolver.


I have been using OSQP [1] quite a bit in a project where I needed to solve many quadratic programs (QPs). When I started with the project back in early 2017, OSQP was still in early stages. I ended up using both cvxopt and MOSEK; both were frustratingly slow.

After I picked up the project again a year later (around 2019ish), I stumbled across OSQP again. OSQP blew both cvxopt and MOSEK out of the water in terms of speed (up to 10 times faster) and quality of the solutions (not as sensitive to bad conditioning). Plus the C interface was quite easy to use and super easy (as far as numerics C code goes) to integrate into my larger project. I particularly liked that the C code has no external dependencies (more precisely: all external dependencies are vendored).

[1] https://osqp.org/


CPLEX, XPRESS and Gurobi are the gold standard for solving an MIP of meaningful scale (relative performance will depend on the specific problem, but you cannot go wrong with any of these three). Unfortunately there is a big gap between the performance of open source vs commercial solvers in the OR space. The types of problems I need to solve are usually unsolvable with open source solvers (very large scale supply chain problems). For smaller problems, OR-Tools or GLPK or CBC do work fine - but once you go commercial, there is no need to switch back to an open source solver.

My typical setup is using Pyomo for model formulation (gives me flexibility to switch out solvers with ease). I bundle multiple licenses using GAMS, it is more cost effective than purchasing individual licenses from solver companies.
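
To illustrate the solver-swapping point, here is a minimal Pyomo model where only the SolverFactory string changes between backends (toy data):

    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               NonNegativeReals, maximize, SolverFactory)

    model = ConcreteModel()
    model.x = Var(domain=NonNegativeReals)
    model.y = Var(domain=NonNegativeReals)
    model.profit = Objective(expr=3 * model.x + 2 * model.y, sense=maximize)
    model.c1 = Constraint(expr=model.x + model.y <= 4)
    model.c2 = Constraint(expr=model.x + 3 * model.y <= 6)

    # swap "cbc" for "gurobi", "cplex", "glpk", ... without touching the model
    SolverFactory("cbc").solve(model)
    print(model.x(), model.y())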


The HiGHS open-source MIP solver is vastly better than Cbc now - and getting better rapidly - although we've got a way to go to be competitive with the best commercial solvers.


I work in quantum information, and I need support for complex semidefinite programs.

I use MATLAB/Octave, with the SeDuMi solver (to share open source code) or MOSEK (amazing, with free academic licenses).

For the modeling, I use either YALMIP (amazing breadth of functionality, though additional features tend to not compose well) or CVX (restricted use cases, but I find it amazingly robust).

I haven't made the jump to Python (CVXPY support for complex variables seems missing/shaky), or Julia (I tend to write OOP-lite code and haven't been able to wrap my head around Julia's abstractions).

I've also been exploring exploiting symmetries in convex programs, see https://replab.github.io/web


Yes! We are a consultancy company that uses solvers to solve supply chain strategy questions for companies (and bespoke vehicle routing problems).

Our go-to is LINDO, mainly because it is powerful and it can be integrated into an Excel model which will let us share models with clients.


SCIP, GLPK.

I don't mind the overhead in developer time. Once you 'git gud' at translating problems to MIP/ LP, it's just a matter of learning syntax. The real trick is in translating problems.

It's a shame more CS courses don't focus on this problem translation meta-algorithm. You can get it in tough theory courses with problem mapping to prove complexity, but its usefulness goes way, way beyond that somewhat niche application. Learning to model problems as max flow, graph partitioning / matching, LP, MIP, gives you absolute super powers way, way beyond what you'd gain by having a solver that's easier to work with.


That skill can be learned, in part, by taking OR-related business administration classes. Some supply chain management courses I looked into were about learning about standard problems (say, travelling salesman problem) and being able to formulate constraints to extend them. Not very theoretical, but useful in practice.


- Cplex and Gurobi, sometimes some open source ones.

- Energy modelling: the core models for one timestep are small (ca 1-2k variables), but together with long timescales/stochastic programming the model formulation blows up (easily by a factor of 10-40k).

- Mostly LP-relaxations of MILPs with manual cuts. MILPs themselves take often too long.

- The solvers are ok:

  - There is a nice theory around LPs, and MILP is understandable. So in principle, I treat them as black boxes.

  - API: Similar enough between solvers because of nice formalism of MILP. Generic APIs on most platforms (pyomo, jump, ..). Beware magical helpers.

  - Performance: For LP predictable. For MILP -- tune it. It becomes tricky when tuning solvers/staying solver-independent without sacrificing performance. Metaheuristics to the rescue. Open source solvers have a harder time here.

- The big problem is the modelling and context part: translating the problem into an MP formulation, ensuring correctness of units/scales, generation of APIs and docs... fine when you do it once by hand, but if you want to run a modeling loop or support multiple models (variants, optimisations, special cases), then this is most of the effort (and costs human time, instead of computing time).


We have complex non-linear stuff that maps really well to the metaheuristic approach that OptQuest uses (https://www.opttek.com/products/optquest/). It’s not free and has only a Java API, but the time it saves us is worth it. I hear Python bindings are coming this spring.


Related question, what are broad recommendations for vehicle routing with capacity, time windows, and a smattering of other constraints? I'm a generalist SWE by trade and my (shallow) survey of the field revealed:

General solvers:

- LocalSolver: good docs, reasonable pricing, docs make it sound fast

- ORTools: Open source, we were leaning towards this since we're not doing anything incredibly fancy and it'd be nice to embed the solver in our server.

- Gurobi: major player in the space, will charge you an arm and a few legs.

Vehicle routing specific solvers were covered in absolutely outstanding detail by "Open Source Routing Engines And Algorithms – An Overview" [1].

We were leaning towards ORTools since we didn't want to get charged by number of jobs or compute. LocalSolver also seemed promising.

Also, is there a cost-efficient way to get distance matrices for ~50 stops? Google maps and Mapbox are wicked expensive. Is PostGIS reasonable?

[1]: https://gis-ops.com/open-source-routing-engines-and-algorith...


VRPs are extremely complex MIPs, so heuristics are usually used. OR-Tools is probably the best option in this case for cost.


We use OpenStreetMap Routing containers. There's a bit of setup, but it works great!
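
If it helps, here's roughly what querying a self-hosted routing container looks like; this sketch assumes OSRM specifically (the parent just says "OpenStreetMap Routing containers") running locally with its table service exposed:

    import requests

    stops = [(-73.9857, 40.7484), (-73.9772, 40.7527), (-74.0060, 40.7128)]  # (lon, lat)
    coords = ";".join(f"{lon},{lat}" for lon, lat in stops)
    resp = requests.get(f"http://localhost:5000/table/v1/driving/{coords}")
    durations = resp.json()["durations"]   # NxN travel-time matrix in seconds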


If I may hijack this thread a bit, I am currently looking for an optimization solver for a specific goal: I wish to constrain the values of variables so that tuples of variables are unique within sets of tuples.

Cf. https://stackoverflow.com/questions/71878354/constraint-prog...

For now, I could not find anything satisfactory, so any recommendation would be most welcome. Otherwise, I guess I'll have to generate a variable for each tuple and use an all_different (e.g. https://cpmpy.readthedocs.io/en/latest/api/expressions/globa... ) on those.
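
In case that fallback turns out to be the way to go, channeling each tuple into a single integer and posting AllDifferent looks roughly like this with OR-Tools CP-SAT (the bounds and tuple count are placeholders):

    from ortools.sat.python import cp_model

    K = 100                                  # strict upper bound on y, hypothetical
    model = cp_model.CpModel()
    pairs = [(model.NewIntVar(0, 9, f"x{i}"), model.NewIntVar(0, K - 1, f"y{i}"))
             for i in range(5)]

    codes = []
    for i, (x, y) in enumerate(pairs):
        code = model.NewIntVar(0, 10 * K - 1, f"code{i}")
        model.Add(code == x * K + y)         # channel (x, y) into one integer
        codes.append(code)
    model.AddAllDifferent(codes)

    cp_model.CpSolver().Solve(model)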


I've mostly worked on continuous high-dimensional problems (and not as high-dimensional as large neural networks), often times grounded in physical situations, where one can anticipate reasonably convex behavior close to the optima -- things like bundle adjustment, different kinds of generalized calibration, etc. In all these cases, (approximate) second-order optimization methods have worked great (Just throwing out a few non-MECE names eg: Conjugate-Gradient and variants, Levenberg-Marquardt, BFGS, etc.).
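
For readers who want a concrete entry point, SciPy covers several of these out of the box; e.g. a small Levenberg-Marquardt style nonlinear least-squares fit on synthetic data:

    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0, 10, 50)
    y_obs = 3.0 * np.exp(-0.5 * t) + 0.05 * np.random.randn(t.size)   # synthetic data

    def residuals(p):
        a, k = p
        return a * np.exp(-k * t) - y_obs

    fit = least_squares(residuals, x0=[1.0, 0.1], method="lm")
    print(fit.x)                             # recovered (a, k)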

In the context of probabilistic modeling, my experiences with belief propagation have been good, but somewhat mixed. Sometimes it feels like mean-field methods could give more bang for the buck given how much less memory they use.


I used to use Matlab's fminsearch a lot. In retrospect I barely understood how it worked and I should have taken much more time to understand it rather than just throwing it at stuff. But I ended up getting some good results with it (some antenna design stuff).


I've used lpsolve [1] for over 20 years, at the beginning of various projects. When the project momentum picks up and the finances can be justified, it's CPLEX.

lpsolve is simple to use. Open source, with packages in most every language. Easy to embed.

CPLEX because it's fast. With CPLEX I often find it takes more time (1 - 2 orders of magnitude) to formulate/construct the problem than it does for CPLEX to actually solve it.

I would love to get the chance to use Gurobi.

[1] https://sourceforge.net/projects/lpsolve/


A past project used OR Tools. I generally liked it. It was a bit confusing at times and the documentation was a bit cryptic for using some of the advanced features, though the community was good about answering questions.

In general, I would recommend OR Tools.


Google OR tools (along with lpsolve). Can be used with C#, free, common API for multiple solvers. I find that on certain problems, even linear solvers can be unstable, and it is useful for the user to be able to switch to another solver with a simple dropdown. I never fully understood what caused the instability (showing problems that are perfectly solvable as infeasible/abnormal). I suspect it has to do with the many orders of magnitude between the range of certain variables and the range of objectives/constraints. The only downside of Google OR Tools is that you need to recompile it yourself to use GLPK (licensing).



In a previous life I used Gecode and found it had pretty good performance and was fairly easy to use, see https://www.gecode.org/


I used to have to solve a ton of LPs or MILPs and rather quickly.

After trying different stuff, and under different platforms (e.g. Matlab + CPLEX, Python + GLPK, etc), I settled first on Python or-tools with COIN-LP (about 100x faster than GLPK). Later we got access to FICO Xpress solver which in turn gave us some other gains.

I really liked this benchmarking website: http://plato.asu.edu/bench.html

It seems like they got into some trouble, so I don't know if they are still tracking the main commercial solvers.


Gurobi is by far the best on the market IME. Performance is very good even on poorly designed models. The API has a lot of convenient features and while it is expensive it's not that awful to run it with their container licensing and even SaaS solutions now.

Xpress is pretty good at performance if you put a bit more effort into getting the model right and has very reasonable pricing.

CBC is incredibly slow compared to commercial solvers. Documentation is awful, and it seems like they can't even compile it sensibly anymore because they forked autotools at some point.


I use MiniZinc in a personal toy project (https://gitlab.com/dustin-space/meal-scheduler), with Gecode or Google's OR-Tools solver as the backend. It's used for meal planning. Unfortunately it's way, way slower than I'd hoped. I suspect I just don't have the domain modeled efficiently. Maybe if I had a few days to put into it, and learned how to properly debug the CSP solver step by step, it might help...


Fun problem. When solving with Gecode, you might want to add the annotation `:: domain_propagation` to the `all_different` constraint (https://www.minizinc.org/doc-2.6.2/en/lib-stdlib-annotations...). Sometimes a good idea, sometimes not.

If you can extract some instances, you could submit your problem to the MiniZinc challenge (https://www.minizinc.org/challenge2022/challenge.html), preferably with several instances of varying complexity. The benefit for you is that it is a good way to ensure that your particular problem is tested against a lot of solvers, and it may be used to improve solvers in the future. The deadline for this year's challenge problem submission is the 6th of May.


Thanks for the tips!


To solve Vehicle Routing Problems at my company we use jsprit, and have integrated it with lpsolve. jsprit natively schedules stops into time buckets; we then use a linear program solved by lpsolve to move them to precise ideal times. Works well.

Although jsprit isn’t the most lightning fast VRP solver, it’s highly customizable, and fast enough. Customizability is huge for us, as we have a tonne of very particular hard and soft constraints we need to be able to represent, and jsprit has always been up to the task.


I've used Gurobi and Mosek and today I tend to use Gurobi whenever I can as a default.

I use JuMP to switch solvers to Mosek or an open-source solver if something doesn't go as expected.


Most constraint solvers require math-equation-based input. If you're more a fan of an object-oriented and functional programming coding style, take a look at our solvers:

- OptaPlanner (Java, Kotlin, open source, Apache license) https://www.optaplanner.org/

- OptaPy (Python, open source, Apache license) https://www.optapy.org


My humble LP needs have always been satisfactorily fulfilled by QSopt and COIN-OR solvers (Clp/Cbc). I guess I'm just too boring for anything fancier.


I’ve used GNU MathProg (via https://online-optimizer.appspot.com/) and Z3 (via its Python bindings). I appreciated, with Z3, being able to use the existing syntax of Python, though some of the errors I got were confusing. The ease of installation/usage for the online-optimizer web UI is hard to beat.
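To give a flavor of the "existing syntax of Python" point, here's a toy Z3 snippet using its Optimize interface (the problem itself is made up):

  from z3 import Ints, Optimize, sat

  x, y = Ints("x y")
  opt = Optimize()
  # plain Python operators build the constraints and objective
  opt.add(x >= 0, y >= 0, x + 2 * y <= 14, 3 * x - y >= 0)
  opt.maximize(x + y)
  if opt.check() == sat:
      print(opt.model())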


The only optimisation solver I use is Excel Solver Add-In.

It’s great, but you have to be able to put your problem into Excel to use it which does not work for all use cases.

In cases where I’m trying to optimise something that doesn’t quite fit into Excel cleanly I’ll usually do some sort of hand-rolled Monte Carlo optimisation. This generally works for me in the types of problems I most often solve.


The Gnumeric spreadsheet has Monte Carlo simulation built in. I think this is unmatched by both Excel and LibreOffice Calc. See https://help.gnome.org/users/gnumeric/stable/sect-advanced-a...


Is it safe to say those optimizations are performed ad hoc? That is, you're manually in control of the optimization process instead of wiring it up to some kind of automation?


Yes! In particular, it is usually to create some concrete data/parameters to be used towards creating the final product.

In the usual Excel Solver case, for a simplified example, I want to create a biased coin where getting heads doubles your money and tails loses your money, with an expected payout of 90% in the long run. The parameters to change are the heads/tails coin weightings, to target a value of 90%. A fair coin gives a long-run payout of 100%. It turns out that if a biased coin hits heads 45% of the time and tails 55% of the time, we end up with this 90% payout. Excel Solver can come up with these two values.

In this example, we can come up with these 45% and 55% values theoretically. With more complex systems, we may want to see the effects of changing a subset of parameters that have an indirect effect on payout.

For example, if we extend the “biased coin game” to instead be “win your money back, 2, or 3 times your wager each with some probability” on a heads result, there are more parameters (and many solutions!) to get to 90% payout. Changing the heads probability has a further, second-order effect on the payout. Using Excel solver it’s straightforward to fix some parameters (heads/tails probability) and allow others to change (1x/2x/3x odds) to get desired results (90% payout).
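As a rough (non-Excel) sketch of that last setup, with made-up numbers: fix the heads probability at 45%, let the 1x/2x/3x probabilities vary, and ask SciPy to hit the 90% target (this finds one of the many valid solutions mentioned above).

  import numpy as np
  from scipy.optimize import minimize

  P_HEADS, PAYOUTS, TARGET = 0.45, np.array([1.0, 2.0, 3.0]), 0.90

  def expected_payout(q):
      # q = probabilities of the 1x/2x/3x outcomes, conditional on heads
      return P_HEADS * np.dot(PAYOUTS, q)

  result = minimize(
      lambda q: (expected_payout(q) - TARGET) ** 2,  # squared miss from target
      x0=np.array([0.6, 0.3, 0.1]),                  # deliberately off-target start
      method="SLSQP",
      bounds=[(0.0, 1.0)] * 3,
      constraints=[{"type": "eq", "fun": lambda q: q.sum() - 1.0}],
  )
  print(result.x, expected_payout(result.x))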

If we further extend this methodology for a biased coin game, we can end up with a coin game that pays with distribution exactly like a slot machine in a casino, though it would not look like one!


I used OR-Tools via the Python bindings a few years ago. It was nice to work with once I got set up, but it was a pain to get it installed (both locally and when deploying to a cloud server).

I would have liked some kind of API that I could call out to instead but nothing existed at the time: you pass in the inputs to construct the linear equations and then you get back the results.


I don't know if this is on-topic or not, but I found this usage of Rosette to generate optimal 8051 assembly code fascinating: https://lab.whitequark.org/notes/2020-04-06/synthesizing-opt...


Is anyone using Picat? I'm deciding whether to bother learning it at the moment. I'd be grateful for experience reports.


You're probably looking for a performance comparison. Look here:

http://plato.asu.edu/bench.html

This one stays pretty well updated. On the MIP side:

  * COPT
  * SCIP
  * HiGHS
seem to be top performers (of the OSS variety).

IIRC OR-Tools often by default uses SCIP for its MILP backend.


The latest results show HiGHS to be significantly better than vanilla SCIP. If "OSS" means "open-source solvers", then COPT isn't open source and SCIP isn't open source for commercial purposes; hence the warning given by OR-Tools. HiGHS is very much the best bet going forward for open-source MIP.


CPLEX (by IBM). The documentation can be a bit thin sometimes, but it's fast. Most benchmarks place it ahead of the Google cloud products.

For fun I made this Golomb ruler solver using cplex: https://github.com/strateos/golomb-solver


I did use Gurobi a few years ago for route optimization within a distribution center, but results were slow with load. I am now playing around with Nvidia's ReOpt... I think they now call it cuOpt. Very curious to see solver results accelerated by GPU. The results shared were quite impressive - has anyone tried it yet?


Nevergrad, a Python optimization platform open-sourced by Facebook, is easy to use. Can't say how it compares to other solvers though.

https://facebookresearch.github.io/nevergrad/
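Roughly the level of ceremony involved, adapted from the style of example in their docs (the toy loss function is made up):

  import nevergrad as ng

  def loss(x):
      # toy blackbox objective over a 2-d array
      return sum((x - 0.5) ** 2)

  optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
  recommendation = optimizer.minimize(loss)
  print(recommendation.value)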


Matlab: optimization and global optimization toolboxes.

Python: PyOptSparse with IPOPT (and NSGA-II, although not the best implementation) and scipy.optimize.

I use them for the Design Optimization of aircraft with multiple non-linear constraints and sometimes multiple objectives.


I used Z3; kiwisolver didn't offer the correct constraints for my geometry problems.


We do non-linear optimization with multiple constraints using scipy.optimize. What we used was beautifully coded and documented, as I assume most of the library is, so it was easy to use, even for beginners.
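This is not their actual code, just a generic sketch of the kind of thing scipy.optimize supports: a nonlinear objective minimized under a nonlinear constraint with the trust-constr method.

  import numpy as np
  from scipy.optimize import minimize, NonlinearConstraint

  def objective(v):
      x, y = v
      return (x - 1.0) ** 2 + (y - 2.5) ** 2

  # keep the point inside the unit circle: x^2 + y^2 <= 1
  circle = NonlinearConstraint(lambda v: v[0] ** 2 + v[1] ** 2, -np.inf, 1.0)

  res = minimize(objective, x0=[0.0, 0.0], method="trust-constr", constraints=[circle])
  print(res.x)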


I have been playing around with Nvidia's ReOpt... I think they now call it cuOpt. Very curious to see solver results accelerated by GPU. The results shared were quite impressive - has anyone tried it yet?


Yes, there is some incredible work there. A practical problem is that these solvers are written in C or C++, and sometimes difficult to configure and install. Not as simple as referencing a Rust crate ...


CBC and Symphony for open-source MILP, Gurobi for commercial MILP. OR-Tools for some VRP work. I use R (ompr package) to interface to CBC, Symphony, and Gurobi. HiGHS is looking promising.


My go-to algorithm is differential evolution. It's nice and simple, and rather effective in my experience. I wrote my own implementation though, so not exactly what you were asking...
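For anyone who would rather not roll their own, SciPy also ships an implementation; a toy call just to show the shape of the API (the Rosenbrock test function stands in for a real objective):

  from scipy.optimize import differential_evolution, rosen

  # Minimize the 4-d Rosenbrock function over a box -- purely illustrative.
  bounds = [(-5.0, 5.0)] * 4
  result = differential_evolution(rosen, bounds, seed=1)
  print(result.x, result.fun)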


Yes, a lot...

I started to write up a page to track the "business of solvers".

https://solver.news

Should be "launching" in the next week or so.


FICO Xpress (not free) has been very good in my experience.


I've tried quite a few open-source solvers but none of them come close to Gurobi. IMO it's worth every penny, and it has great documentation and support.


ECLiPSe was always quite good back when I worked in the area.

https://eclipseclp.org/


I use Optuna; it has a global blackbox function minimizer suited to parallelizing a parameter search space when evaluations are slow.
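A minimal sketch of what that looks like (the objective here is a made-up stand-in for a slow evaluation):

  import optuna

  def objective(trial):
      # toy search space -- in practice these would be real hyperparameters
      x = trial.suggest_float("x", -10.0, 10.0)
      y = trial.suggest_float("y", -10.0, 10.0)
      return (x - 2.0) ** 2 + (y + 1.0) ** 2

  study = optuna.create_study(direction="minimize")
  study.optimize(objective, n_trials=100, n_jobs=4)  # n_jobs runs trials in parallel
  print(study.best_params)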


I have nothing to add except to say the illustration on your page of the mad scientist rabbit with the carrot is fantastic :)


Academic use: CVX[R/Py] are excellent because they plug into both open-source and commercial solvers (Gurobi, Mosek).
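A small illustrative cvxpy snippet showing where that plug-in choice happens (the problem and data are made up; the Gurobi/Mosek backends require licenses):

  import cvxpy as cp
  import numpy as np

  # Toy constrained least squares
  A = np.random.default_rng(0).standard_normal((20, 5))
  b = np.random.default_rng(1).standard_normal(20)

  x = cp.Variable(5)
  prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),
                    [x >= 0, cp.sum(x) == 1])

  prob.solve(solver=cp.SCS)       # open source
  # prob.solve(solver=cp.GUROBI)  # or a commercial backend, if licensed
  # prob.solve(solver=cp.MOSEK)
  print(x.value)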


Mosek (mosek.com) is good if you have conic constraints. Otherwise cvxopt is quite simple for smaller problems.


Last time I needed to do some horrible non-linear least squares optimization in C++ I used Ceres from Google.


Matlab solvers have been the most stable and performant for me, sadly not open source nor free.


Minizinc!


Yes! MiniZinc is great. It's the only sticker I have on my Linux box.

I like how it targets both MIP and CP solvers. It's actually the tool that first led me to CP and Gecode.


ORTools - pretty easy to use in my experience. Not sure about scaling.


Gurobi and CPLEX are light-years ahead of the open-source solvers. Those will work on small problems, but you need the expensive ones for complex models.


Cool! I had used Minizinc for cost optimization.


Mosek is great, especially combined with Cvxpy.


Yes, Gurobi, yes


I use IPOPT, and I like it quite a bit.


Long back. Gurobi and CBC.


Ah, it must be due to Easter! From this thread it seems that now, finally, after all this time, OR (operations research and optimization) have risen from the dead! Sorry for the sacrilege. Ah, YouTube has some really good performances of Parsifal!

At one time, OR looked good to me, and I bet a lot of my career on it: off and on I heard about various parts of optimization. Then at FedEx, in a rush, I wrote some simple software that produced nice fleet schedules and literally saved the company (Smith's remark: "solved the most important problem ...").

But since airplanes are staggeringly expensive, I considered optimization. That looked like 0-1 integer linear programming, with mostly just one row for each city to be served and one column for each candidate single-airplane tour from Memphis and back. In generating the candidate tours, I could easily handle lots of complicated costing and really goofy constraints. The optimization step itself was just set covering.
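Purely to illustrate the shape of that formulation (made-up data, with OR-Tools as a modern stand-in, certainly not what was available back then): each candidate tour is a 0-1 variable, and every city must be covered by at least one chosen tour.

  from ortools.linear_solver import pywraplp

  cities = ["A", "B", "C", "D"]
  # (cost, cities served) for each made-up candidate tour out of the hub
  tours = [(10, {"A", "B"}), (8, {"B", "C"}), (7, {"C", "D"}), (12, {"A", "D"})]

  solver = pywraplp.Solver.CreateSolver("CBC")
  x = [solver.BoolVar("tour_%d" % i) for i in range(len(tours))]
  for city in cities:
      covering = [x[i] for i, (_, served) in enumerate(tours) if city in served]
      solver.Add(solver.Sum(covering) >= 1)   # every city gets served
  solver.Minimize(solver.Sum([c * x[i] for i, (c, _) in enumerate(tours)]))
  solver.Solve()
  print([i for i, v in enumerate(x) if v.solution_value() > 0.5])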

But my wife was still in Maryland; the promised stock was 18+ months late; I went and got an applied math Ph.D. with a lot of emphasis on optimization -- dissertation in stochastic optimal control. I taught linear programming in a business school (trying to help my wife recover from the stress of her Ph.D. -- she never recovered and her body was found floating in a lake; the two Ph.D. degrees were very costly).

Then, after sending 1000+ resumes, net, I had to conclude: Nearly no one in business gives as much as even a small fart about optimization or operations research.

I did stumble onto some small projects: One of these was a marketing resource allocation problem, 0-1 integer linear programming, 600,000 variables, 40,000 constraints. I used the IBM OSL (Optimization Subroutine Library), called from some Fortran code I wrote, and used some Lagrangian relaxation -- in 900 seconds of computing, with as I recall 500 primal-dual iterations, got a feasible solution, from the Lagrange multiplier bounding, within 0.025% of optimality. The customer was not much interested: The CEO had a buddy, his CTO, and my success had apparently embarrassed the CTO and, thus, displeased the CEO. I never talked with the CTO after my success, but his belief was that his simulated annealing, running for days, would be the best approach. So, he was torqued at my success.

Had a similar situation in another problem.

I had to conclude: Optimization seems like an obviously valuable step and tool, and often quite worthwhile. E.g., for some $100 million project, optimization might save 10%, i.e. $10 million.

There were some Nobel prizes based on optimization.

There have been lots of successful, valuable, optimization projects reported.

So, there should be plenty of people in business eager to pursue optimization? Right? I assumed so. I was wrong.

But the guys who have the money and the responsibility and make the decisions very much don't trust or like operations research or optimization. They see such a project as a lose-lose situation: If the project fails, and some will, or just does not do very well, then their career takes a hit. If the project is nicely successful, then that success is a force in the organization that the C-level (CEO, CTO, CFO, etc.) can't really control. The C-level doesn't like to be out of control. Instead, they want to do things their way, and that does not include operations research.

So, I gave up on operations research and optimization.

So, instead of trying to have a career, doing optimization, working for a salary for people who don't want optimization, I'm doing a start-up. There is some pure/applied old/original math in the core of it, but no users will suspect anything mathematical.


I'm having a pretty good career in optimization and OR, even in the corporate sector...


Any ideas on integrating with SQL optimizers?



