I'd be very surprised if it was as easy to set up the problem in R or Macsyma. In those languages, you'd probably need to write loops to generate your constraints (but maybe I'm wrong, I'm no R-expert).
Well, my experience is with Mathematica, which has operations research functions built right in. The point being, there are systems where this is already a solved problem, where you can get your answer in a minute or two.
So you'd simply do Minimize[function, {variables/constraints}] and then get on with your day.
If Mathematica is so easy, I'd love to see a significantly simpler version of the same problem written in it. But I have absolutely no idea how I'd write that simpler version.
The vast majority of the code I wrote is setting up the problem (which has 15 vars and 27 constraints), not solving it. I.e., those list comprehensions are a way of writing 9 constraints on a single line.
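To make the "9 constraints on a single line" point concrete, here's a minimal Haskell sketch of the idea. All names here (`Constraint`, `LeqC`, `rowConstraints`) are invented for illustration and aren't from the original code; the point is just that one comprehension expands into a whole family of constraints.

```haskell
-- Hypothetical sketch of generating a constraint family with a
-- list comprehension. The types and names are made up for
-- illustration, not taken from the original program.

-- A linear constraint: a list of (coefficient, variable-index)
-- terms whose sum must be <= the bound.
data Constraint = LeqC [(Double, Int)] Double
  deriving (Show, Eq)

-- One comprehension emits nine constraints of the form
--   x_i + x_{i+1} <= 10   for i in [0..8]
rowConstraints :: [Constraint]
rowConstraints = [ LeqC [(1, i), (1, i + 1)] 10 | i <- [0 .. 8] ]

main :: IO ()
main = do
  print (length rowConstraints)   -- 9
  mapM_ print (take 2 rowConstraints)
```

Writing the same nine constraints by hand, or with an explicit loop that mutates an accumulator, is where the verbosity would come from in a language without comprehensions; the solver call itself is a one-liner either way.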
To demonstrate that Mathematica is easier, you need to demonstrate that it's similarly easy to translate the problem from a verbal specification into an LP problem. Solving the LP is the easy part.
There are packages in R that wrap the GLPK API, which would make this simpler; see Rglpk: http://cran.r-project.org/web/packages/Rglpk/index.html. But if one is already coding in a particular language (in this case Haskell), then I would think coding within that language would be easier.
But I'd love to see code proving me wrong.