
The conference is virtual. It runs Monday, September 18 through Wednesday, September 20. Registration is free.


Even if this paper is correct, the approach is not practical. From table 1 in section 6.2, the LP representation of a 25-city TSP requires 24,580,032 variables and 3,643,825 constraints. Merely formulating that model will require significantly more time than fully optimizing a TSP of that size using an assignment or 2-matching relaxation and adding subtour elimination constraints as they are violated. The former will likely take seconds (or maybe minutes) at that size, while the latter can be accomplished in milliseconds.


That table of values looks like it's growing exponentially. By the time they reach 70 cities, they will be into a trillion variables, which is not practical.

In real-world usage, the TSP gets into the thousands of "cities." This would be for things like integrated circuit layouts.


"exponentially" is not something you can just colloquially throw about in this sort of discussion. The numbers are bounded by n^6, and the whole point of the table is that it seems like it's growing at even less than that.


If so, how can they claim this runs in polynomial time?


I haven't looked at JuMP in a while - the last time I tried it was when I was still doing personal blogging (https://ryanjoneil.github.io/posts/2014-07-18-are-we-getting...).

I remember liking JuMP, but Julia itself didn't feel ready yet. Some of the packages had weird behaviors. For example, Gadfly took several minutes to render some of my charts. IIRC when I looked at the source, it was solving a MIP to compute the best bounding box for display purposes.

I should probably check it out again.

Also: agreed regarding Gurobi licensing.


Julia (and the Gadfly package, which, it's worth noting, is not a standard library or anything like that) has come a long way since 2014. JuMP too. Julia was indeed "not ready" in 2014; it wasn't anywhere near its 1.0 release. We're now on the 1.7.x releases: stable and guaranteed backward compatible since 1.0, with a lot of quality-of-life improvements between 1.0 and 1.7, including much faster compilation.

If I'm solving an optimization problem for personal curiosity, a blog post, etc., JuMP is the tool I reach for. :)


Wow, 2014 and Julia v0.2! That's pretty amazing really, haha. That's the "before times": so long ago that most of the things tracking Julia's growth only start tracking after that point, and the vast majority of today's users hadn't even heard of Julia back then. I would definitely open up Julia again some time and treat it as essentially a whole different language. Things like precompilation, the package compiler, Plots.jl, etc. only came into being much later, so OP should definitely scratch the old workflow and try it fresh.


Yes! MiniZinc is great. It's the only sticker I have on my Linux box.

I like how it targets both MIP and CP solvers. It's actually the tool that first led me to CP and Gecode.


Is it safe to say those optimizations are performed ad hoc? That is, you're manually in control of the optimization process instead of wiring it up to some kind of automation?


Yes! In particular, it's usually to create some concrete data/parameters that go into the final product.

In the usual Excel Solver case, for a simplified example: I want to create a biased coin where heads doubles your money and tails loses your money, with an expected payout of 90% in the long run. The parameters to change are the heads/tails coin weightings, to target a value of 90%. A fair coin gives a long-run payout of 100%. It turns out that if a biased coin hits heads 45% of the time and tails 55% of the time, we end up with this 90% payout. Excel Solver can come up with these two values.

In this example, we can come up with these 45% and 55% values theoretically. With more complex systems, we may want to see the effects of changing a subset of parameters that have an indirect effect on payout.

For example, if we extend the "biased coin game" so that a heads result instead pays 1x, 2x, or 3x your wager, each with some probability, there are more parameters (and many solutions!) that get to a 90% payout. Changing the heads probability has a further, second-order effect on the payout. Using Excel Solver, it's straightforward to fix some parameters (the heads/tails probability) and allow others to change (the 1x/2x/3x odds) to get the desired result (90% payout).
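To make that concrete, here's a minimal sketch of the same idea outside Excel, with scipy standing in for Excel Solver (all numbers are illustrative): fix the heads probability, let the 1x/2x/3x weights vary, and target a 90% expected payout.

    # Hypothetical sketch of the extended biased-coin game described above.
    from scipy.optimize import minimize

    P_HEADS = 0.45      # fixed: probability of heads (tails pays nothing)
    TARGET = 0.90       # desired long-run expected payout

    def expected_payout(w):
        w1, w2, w3 = w  # on heads, pay 1x, 2x, or 3x the wager
        return P_HEADS * (1.0 * w1 + 2.0 * w2 + 3.0 * w3)

    result = minimize(
        lambda w: (expected_payout(w) - TARGET) ** 2,  # hit the target payout
        x0=[0.5, 0.3, 0.2],                            # arbitrary starting weights
        bounds=[(0.0, 1.0)] * 3,
        constraints=[{"type": "eq", "fun": lambda w: sum(w) - 1.0}],  # weights sum to 1
    )
    print(result.x, expected_payout(result.x))

As noted, there are many solutions; the optimizer simply lands on one of them, which is exactly the kind of what-if exploration Excel Solver gets used for here.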

If we further extend this methodology, we can end up with a coin game whose payout distribution is exactly like a casino slot machine's, though it would not look like one!


<3 SCIP. About a dozen years ago I wrote the python-zibopt library (now defunct and rendered moot by the far superior PySCIPOpt).

Hadn't heard of JoD - that looks really interesting. What are your experiences so far?


JoD is Jakob Nordström. He gave a talk on the predecessor of Exact here: https://www.youtube.com/watch?v=yvXy7Nzn-IA

My experience so far: quite good at finding feasible solutions when constraints are complex. Really young code with a lot of potential. Not that good at finding optima.


Thanks for looking at our docs in light of your problem domain. That's way beyond what I anticipated, and much appreciated!

We haven't put a lot more examples into our SDK docs lately since we've been working more on our routing engine and cloud offering. Now we're getting back to more foundational questions like "what should our solver be?" and "how should model formulation work?"

Hop started off as a decision diagram solver, but even internally we strap a lot of different techniques onto it. My hope is to support those patterns better, which is really why I posed this question.

I'd be interested to know: what made the process of converting business logic into solver-speak painful?


> what made the process of converting business logic into solver-speak painful?

The fact that every solver doc I found shows me two near-identical variables and four identical constraints. I can solve that with the Excel add-in. My problem is mind-numbingly complex, like everyone else doing supply-chain optimization. So I need examples for each type of constraint I have, but instead the tools expect me to figure out everything based on very generic examples. Dates are a big deal in what I do. I have no idea how to convert "each day the customer order is delayed is bad, but I also can't make things 3 months before they want it" into solver-speak. Cleaning a machine takes time, so it is often better to make similar things in a row (campaign mode). No idea how to convert that to solver-speak.

The good thing is that once I understand how to convert my data into solver-speak, I can keep it updated on the fly since business logic doesn't change daily, just the data-set.

Feel free to contact me directly and/or we can Zoom etc. if you wish to discuss further. Wish you the best of luck either way!


> I need examples for each type of constraint I have, but instead the tools expect me to figure out everything based on very generic examples

If your problems can be attacked by LP/MIP-type stuff, there's a book, "Model Building in Mathematical Programming" by Williams, that has a couple dozen different optimisation problems for a range of applications, with worked examples of how to formulate a model to solve each of them. Each problem will be much less complex than your real-world optimisation puzzle, but if you browse through the problem descriptions you might be able to match some against parts of your situation and get ideas for how Williams would model and optimise it.


In a previous life, when I worked with solvers a lot, this was exactly my problem too: the examples in the documentation are always way too simple.

What I often saw, and this was in finance, was that one or two developers would gain proficiency in converting problems to a specific solver tool, and then we could never move to a different library because it would have been so much effort to relearn.


"My problem is mind-numbingly complex, like everyone else doing supply-chain optimization" brings me joy. I'll definitely reach out - would love to get your reaction to some of our new ideas.


> "each day the customer order is delayed is bad, but I also can't make things 3 months before they want it

This would be something in the objective function, I think? Assign some daily cost to having stock in inventory and a penalty for each day the "supply delivery to customer" task completes after the deadline.
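A minimal sketch of that objective using PySCIPOpt (the due date, costs, and names here are made up for illustration, not taken from the thread): a completion-day variable plus earliness and tardiness variables that the objective penalizes.

    # Hypothetical earliness/tardiness sketch; in a real model the completion
    # day would also be tied to machine capacity, cleaning times, etc., which
    # is where the hard part lives.
    from pyscipopt import Model

    DUE_DAY = 100      # customer wants the order on day 100
    HORIZON = 200      # planning horizon in days
    LATE_COST = 100.0  # penalty per day the order is late
    HOLD_COST = 5.0    # cost per day of building it early and storing it
    MAX_EARLY = 90     # "can't make things 3 months before they want it"

    m = Model("due-date-sketch")
    complete = m.addVar(vtype="I", lb=0, ub=HORIZON, name="completion_day")
    late = m.addVar(lb=0, name="days_late")
    early = m.addVar(lb=0, ub=MAX_EARLY, name="days_early")

    # late >= complete - DUE_DAY and early >= DUE_DAY - complete;
    # the objective pushes both down to their true values.
    m.addCons(late >= complete - DUE_DAY)
    m.addCons(early >= DUE_DAY - complete)

    m.setObjective(LATE_COST * late + HOLD_COST * early, "minimize")
    m.optimize()
    print(m.getVal(complete), m.getVal(late), m.getVal(early))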

IIRC ILOG solver used to have a lot of constraints that made implementing that kind of thing easier.


Reading this from the perspective of a naive outside observer, I'm wondering (just thinking out loud) about the potential value of a hybrid approach of a) providing customers who know what they want a self-service API, b) supporting customers who just want their problems solved with consulting, and a+b) offering joint R&D scenarios in select circumstances that would progressively add support for novel workloads over time (like what's described here) in the pursuit of differentiation. Obviously this would be aligned with use cases that don't require immediate solutions/consulting. The resulting data might be worth it. And this all obviously depends on where ease-of-use sits, given that's basically NP-hard to resolve fully in this domain.


Definitely try out the API (https://cloud.nextmv.io/) to get a sense for it. Note that these are canned models, and the systems under the hood are very customizable. You can find some more information in our docs (https://docs.nextmv.io/latest/). If you have another problem or need something specific, reach out to support@nextmv.io.


We haven't published any formal benchmarks yet. We will soon!

I'm not a regular OR-Tools user, but it allows you to call its own CP solver or external MIP solvers. I can provide a few general comments about what you'll find when applying different techniques to TSP solving:

1. Nothing beats Concorde for solving pure TSPs: http://www.math.uwaterloo.ca/tsp/concorde.html

2. Mixed integer programming solvers do very well because you can relax the subtour elimination constraints to solve a simpler problem (such as an assignment or a 2-matching), then add them back to the problem as they are violated in the search tree (rough sketch after this list). With tight bounds the solver can fathom away most of the search space.

3. All of this breaks apart when you need to route multiple vehicles or handle side constraints (precedence, time windows, capacity). This is where CP, DDs, and Hybrid Optimization techniques start to really shine.
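Here's a rough sketch of the idea in (2), using PySCIPOpt on a tiny random instance (purely illustrative, not one of our benchmarks). Production codes add the cuts inside the branch-and-bound tree via callbacks; re-solving from scratch keeps the sketch short.

    # Iteratively add violated subtour elimination constraints to a
    # degree-2 relaxation of a small random TSP.
    import random
    from pyscipopt import Model, quicksum

    random.seed(0)
    n = 8
    pts = [(random.random(), random.random()) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    dist = {(i, j): ((pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2) ** 0.5
            for i, j in edges}

    def solve(cuts):
        m = Model("tsp-sketch")
        m.hideOutput()
        x = {e: m.addVar(vtype="B") for e in edges}
        for k in range(n):  # each city is touched by exactly two tour edges
            m.addCons(quicksum(x[e] for e in edges if k in e) == 2)
        for S in cuts:      # previously violated subtour elimination constraints
            m.addCons(quicksum(x[e] for e in edges if e[0] in S and e[1] in S) <= len(S) - 1)
        m.setObjective(quicksum(dist[e] * x[e] for e in edges), "minimize")
        m.optimize()
        return [e for e in edges if m.getVal(x[e]) > 0.5]

    def components(sol):
        # Split the chosen edges into connected components (subtours).
        adj = {i: set() for i in range(n)}
        for i, j in sol:
            adj[i].add(j)
            adj[j].add(i)
        seen, comps = set(), []
        for s in range(n):
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                v = stack.pop()
                if v not in comp:
                    comp.add(v)
                    stack.extend(adj[v])
            seen |= comp
            comps.append(comp)
        return comps

    cuts = []
    while True:
        sol = solve(cuts)
        comps = components(sol)
        if len(comps) == 1:  # one connected tour: done
            break
        cuts.extend(comps)   # forbid each disconnected subtour
    print("tour edges:", sol)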

It's (3) that we're most interested in, because realistic problems tend not to follow simple, pure representations like the TSP. So when we do publish benchmarks (and their associated models) it will likely be for problems like the Li & Lim PDPTW instances (https://www.sintef.no/projectweb/top/pdptw/li-lim-benchmark/).

Another important aspect is time to find high quality solutions. Most people doing on-demand routing need good solutions that are actionable quickly. DDs have already shown a lot of promise in this area.


Thanks for asking!

Value is actually the value of a state itself, and it can increase or decrease from one state to its successors. If the value must increase or decrease monotonically, then that (very useful) information can be given to Hop through a Bounds() method.

In the simplest cases, value is added or subtracted with each state transition, and can be stored in an attribute. However, that isn't required. It's possible to do just about anything in the value function, such as run a simulation to estimate a KPI.

If estimating value is expensive, then it's probably best to keep the diagram width low, and/or use a searcher that dives to high quality solutions before expanding the search. Hop comes with some of these capabilities built in, and most of its solver components are pluggable through interfaces so you can provide your own if you need to.

When it comes to scale, we're trying to make it so Hop scales up and down well to different size problems. As you add side constraints such as precedence, time windows, driver hours, capacity, etc., you'll likely find that DDs do better since those constraints tend to support the search.

Dash is a generic discrete event simulation engine that (intentionally) looks a lot like Hop (JSON in/out, runners that package up sims as atomic binaries). Sim actors conform to a simple interface and use an event sourcing architecture to publish events, update state, and record measurements. We should have a blog post about it in the next couple of weeks, as well as our docs posted online, so check back!


