Hacker News
Squares in Squares (erich-friedman.github.io)
337 points by yowzadave on Feb 15, 2023 | 49 comments



Originally saw this in a twitter thread by Daniel Piker: https://twitter.com/kangaroophysics/status/16254239511563755...

As a former architect, it's tempting to see similarities in the way architects do space planning: continually re-arranging the various rooms/circulation elements to find the most efficient use of space. Seeing how counter-intuitive these solutions are (and the fact that most of them are not proved, only "found") makes me think that space-planning may not be "solvable" computationally.


> space-planning may not be "solvable" computationally

It isn't. The knapsack problem is NP-complete and roughly translates to "Given a space of size X and a set of items of sizes A, B, C, how do you pack X optimally?". Designing a room layout seems like it would fall squarely into that, with additional restrictions added on top to make it even harder.

https://en.wikipedia.org/wiki/Knapsack_problem

According to Wikipedia we do have algorithms that are useful in practice, but that's not quite the same as "definitely optimal".


I mean, NP-complete problems are perfectly solvable computationally, and given the relatively small number of rooms per floor that architects are concerned with, finding the optimal solution with a computer should still take much less time than it takes a human to come up with some local optimum in the design space. The real problem is actually encoding all the constraints that an architect is really working under, including such inherently difficult-to-quantify aspects as aesthetics, ergonomics, and familiarity! A solution that maximizes the available m^2 is useless if its human users find it too weird.


The knapsack solution space is countable and effectively finite, and thus is solvable, even if not efficiently due to NP-completeness. Space-planning has to work with a potentially uncountable number of positions/orientations in continuous space, and thus might be different.


> effectively finite

This made me scratch my head a bit. I'm aware of there being different types of infinities, but to call a "smaller" infinity "effectively finite" seems a bit odd. I'm not up-to-date on math terminology, though.

In truth, I'm not quite sure what you mean by "solvable" vs "not solvable" here, either. In the above discussion I thought it meant "not tractable in a reasonable amount of time" (i.e. NP-complete => "not solvable"). But you seem to have a different definition you're working from.

Could you elaborate on what you mean by "effectively finite => solvable" ?


It is effectively finite in that the knapsack has a finite capacity and there is a finite selection of items with positive weight, hence the number of items that can fit into the knapsack is limited by the knapsack capacity divided by the smallest item weight. So, while for the unbounded knapsack problem the number of instances of an item that may be included is not limited, it is effectively still limited by the knapsack capacity. Therefore the number of item combinations that have to be considered is finite as well.
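
For what it's worth, here's a minimal Python sketch of that argument (the item weights and values are made up, and positive integer weights are assumed): each item's count is capped by the capacity divided by the smallest weight, so plain enumeration over counts terminates, even if it is hopelessly slow.

    # Sketch: the unbounded knapsack's search space is finite because no item
    # can appear more than capacity // min(weights) times.
    from itertools import product

    def brute_force_unbounded_knapsack(capacity, weights, values):
        max_count = capacity // min(weights)  # upper bound on any item's count
        best_value, best_counts = 0, (0,) * len(weights)
        # Enumerate every combination of counts: (max_count + 1) ** len(weights)
        # candidates -- finite, though exponential.
        for counts in product(range(max_count + 1), repeat=len(weights)):
            total_w = sum(c * w for c, w in zip(counts, weights))
            if total_w <= capacity:
                total_v = sum(c * v for c, v in zip(counts, values))
                if total_v > best_value:
                    best_value, best_counts = total_v, counts
        return best_value, best_counts

    # e.g. brute_force_unbounded_knapsack(10, [3, 4, 5], [4, 5, 7]) -> (14, (0, 0, 2))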

The GP had put "solvable" in quotes. I understood the GP to say that for space planning, there may not be an algorithm that is guaranteed to produce an answer in finite time. For the knapsack problem there is such an algorithm (even if that finite time, due to NP-completeness, could be very long).

The formal term "solvable", as applied to decision problems, actually only means "semidecidable", a much weaker property, see https://en.wikipedia.org/wiki/Decision_problem#Decidability.


The knapsack problem has a finite number of solutions. You can brute-force the optimal solution, if you don't care about efficiency. Ditto the Traveling Salesman problem.

I'm not sure whether this problem is finite or not, given that you can place objects at any real-number coordinates and any real-number angle. But if you simply make those increments small enough (say, you can only put boxes a whole number of Planck lengths apart) then there are a finite number of possible arrangements, so it's possible to brute-force.
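
To put a rough number on "finite once discretised" (the grid spacings ε and δ below are illustrative parameters I'm introducing, not anything from the page): with centres restricted to a grid of spacing ε inside a side-L container and angles taken in steps of δ over [0, π/2), the number of candidate arrangements of n squares is at most

    \left( \left(\frac{L}{\varepsilon}\right)^{2} \cdot \frac{\pi/2}{\delta} \right)^{n}

Finite, but astronomically large even for coarse grids, so "possible to brute-force" is very much in principle only.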


I saw it as well and immediately thought of posting it here, but as always I searched before posting, found it had been posted a year ago, and decided not to.

Glad you did post it though because it’s a really nice visualization and a very good page and deserves to be seen by more people who like such things!


Other fun packings: https://erich-friedman.github.io/packing/

I particularly liked "squares in circles"; I found n=8 pretty surprising.


It reminds me of a real-life "problem" I like to solve around the holidays: what is the smallest cut of wrapping paper required to wrap a given box, with a restriction in one dimension? I find that, most of the time for cube-ish boxes, using a somewhat diagonal orientation can result in a shorter cut of wrapping paper, and reduces the amount of tape needed to keep it on the shape as well!


My mom did upholstery as a hobby; it only occurred to me in adulthood that wrapping paper isn’t designed so that the patterns would just magically line up somehow.


I like to use the offcuts of wrapping paper to make 'hidden' labels: trim the offcut to be 2x1 ratio rectangle; fold in half with pattern outside; pinch the crease sharp; write the name and message on the 'back' blank half; fold the tag in half again; position the 'front' pattern to exactly overlay a corresponding patch of pattern on the package; tape the back message half down to the package with transparent tape.


I’d like to read a systematic analysis of that.


If you are fond of these geometric constructions you might like the artist https://www.artsy.net/artist/victor-vasarely


I can recommend the museum as well: https://en.vasarely.hu/vasarelys-periods/



I wonder what the story is behind n=6. It fits the same simple pattern as n=2, 3, 7, 8, 14, 15, etc., but wasn't proved until several years after the others.


It was proved optimal by Kearney and Shiu (2002) if that helps.

https://www.combinatorics.org/ojs/index.php/eljc/article/vie...


Interestingly, some of the illustrations give me an optical illusion. Consider this one [1] with the slipped squares in the corner. The slipped squares (not the single 45°-rotated one, but the two that are not rotated but offset by sqrt(2)-something) appear to be larger than the squares below them that are part of a regular grid.

[1] https://erich-friedman.github.io/papers/squares/pic/s27b.gif


I'd like to see the percentage improvement over the "naive" approach (just stacking them edge-to-edge) for each of them. I can't tell if the improvements are significant enough for this to have obviously practical benefits.


Obviously(?) the improvement decreases with increasing n. From the table in [0], in terms of the side length (not the area) of the square, it is about 10% for n=5, 7.5% for n=10, and 4.6% for n=82.

[0] https://erich-friedman.github.io/papers/squares/squares.html...
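
As a quick sanity check of the n=5 figure (taking the proved-optimal side s(5) = 2 + 1/√2 from the linked table):

    s_{\text{naive}}(5) = \lceil \sqrt{5} \rceil = 3, \qquad
    s(5) = 2 + \tfrac{1}{\sqrt{2}} \approx 2.7071, \qquad
    \frac{3 - 2.7071}{3} \approx 9.8\%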


Really fun and counterintuitive arrangements.

The linked paper has more details

https://erich-friedman.github.io/papers/squares/squares.html


I'm surprised there's no simple recursive way to find optimal (or at least good) answers. I would naturally assume the optimal way to pack n squares is to split them into groups of m and then pack each of those groups into k square subdivisions. Maybe the recursive patterns only show up at much larger n?


n centroids for the squares, n orientations (from 0 to pi/2) for the squares plus an orientation for the larger square means it can be reduced to finding minima of a 2n+1 degree polynomial.


I think this must have consequences for atoms, particularly metals. Particularly what happens under stress or shear. Maybe too for crystals and ordered regions of quasicrystals.

Interesting to think that the universe is solving NP-complete problems in real-time all day everyday. Mad respect.


The universe is looking for local minima; solving an NP-complete problem means finding a global minimum.


Hmm, is that really the case though? Here, can't you imagine that in a compressed sample of atoms, in a small bounded region, things arrange themselves respecting minimum packings? I can. Why would you think that would be different from a global minimum? It's the solution to the case, why would it be different?


It may be a global minimum but it doesn't have to be. If it ends up in a local minimum then going to a global one would reduce the total energy of the system but it may require additional energy first to 'go over the bump'.

I can't find a good source with packing, but a very similar phenomenon is a soap film on a wire frame. Sometimes it gets stuck in a non-optimal configuration until you blow on it to give it the kick needed to reconfigure. You can see it at 19:35 in this talk by Matt Parker [1].

[1]: https://youtu.be/Iip8VNrHK_8?t=1175


Sure, but I think if you're looking at atoms under compression or shear, there's a lot of energy to go around, so you're going to find them with more than enough energy to rearrange into the minimum-energy configuration. And for small scales (define small, hah!) the local optima are going to be global optima, because there aren't "that many" other possible suboptimal configurations to end up in.

I'm not saying you put a tungsten cube in the hydraulic press and instantly the cube's atoms go into the min packing config... but I am saying that in small enough (but not insignificantly small!) regions you're going to see these globally optimal packing solutions. And it's going to be everywhere, all the time. You cut a wire with some pliers? I guarantee that in some region of that wire the metal atoms were rearranged into an optimal packing under stress. That right there is nature solving an NP-complete problem. Happens all the time, I bet.

That the whole cube doesn't become optimal is no biggie, but that these regions form at all is amazing to me. But also logical. It's just cool that nature solves NP-complete problems as a matter of course. And we need "mathematicians to prove it". Nature does it without a care! :P ;) xx ;p

But also, what about crystals? Can you just consider a perfectly formed crystal matrix, like a pure silicon wafer, to be the atoms in a globally optimal configuration? That may not correlate exactly with packing, as the unit cell is not always a cube, but you see what I mean? I don't think a global optimum in nature is such an irregularity or rarity as you may think. I'm not saying nature isn't messy, it is, but there are sufficient microstates and energy to reach these global optima all the time.

At least that's my intuition about it. I understand if yours differs, but that's no worries. Maybe this line of thought, that nature has ways to reach global optima easily all the time, can lead to better algorithms, different from simulated annealing.


I wonder if there's a number class for the values that the sides take. Are they all ultimately numbers formed with standard operations and radicals? Or are there some that we have only found with numerical techniques?


The main sentence at the top does not make any sense to me, can someone explain like I'm 42? (I'm 43)


We're putting unit squares into larger squares, without letting them overlap. Say you want to put 5 unit squares in a bigger square. You could put them into a square of side length 12, or 100, or 3, with various arrangements. This page is showing the smallest known square that you could put n squares into, for each value of n, along with how they need to be arranged to be able to achieve that.


I believe what it means is: shown below are, for each n, the smallest square that fits n unit squares inside it. A value of n is not shown if the smallest square that fits n unit squares inside it has no tilted unit squares inside it [except that's obviously not quite true, since plenty of no-tilted-unit-square squares are shown].


from context I am guessing that the non-tilted ones that are shown are provably the smallest possible such squares, not just the smallest known. But that is not stated explicitly.


Each is tagged with ‘trivial’, ‘found’ or ‘proved’.


This highlights the worldliness and arbitrariness of many branches of mathematics. Forgive my atheism, but there is nothing "divine" in many of the cramped packings.


It may help to realise that what you are looking at is some projection of a chaotic process. As in other views of chaos such as the Mandelbrot set, one can gain a sense of awe by realising the intricate beauty of the interplay between order and chaos.

Consider the dance between order and chaos in this set of cramped squares. Deviating into chaos from numbers 53-55 only to return to a beautiful order in 56.


your eyes are beholding


Interesting how Frits Göbel found so many in 1979, including some for larger n, while some smaller n eluded him and were only found by others decades later.


It's not clear from this page whether he found results for those smaller n that were later improved upon, or didn't find them at all. Presumably one would have to read his paper to see, but I can't find it.


You'd imagine this could at least be used as material for a 90-minute tutorial somewhere in a logistics or packaging training course.


Sure, it's a good puzzle too.


Is there a standard way to serialize these shapes? Really, I’m interested in how you could validate packings in Python at scale.
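
I'm not aware of a standard serialization, but here's a minimal sketch of a validator in Python using shapely, assuming each packing is given as (x, y, theta) triples for the unit-square centres plus the container side s (that representation is my assumption, not any standard format):

    # Sketch only: check that unit squares, each given as (x, y, theta) for its
    # centre and rotation in radians, fit inside an s-by-s square without
    # overlapping. Touching edges (zero-area intersection) are allowed.
    from shapely.geometry import box
    from shapely.affinity import rotate, translate

    def unit_square(x, y, theta):
        """Unit square centred at (x, y), rotated by theta radians about its centre."""
        return translate(rotate(box(-0.5, -0.5, 0.5, 0.5), theta, use_radians=True), x, y)

    def is_valid_packing(s, placements, tol=1e-9):
        container = box(0.0, 0.0, s, s).buffer(tol)  # tiny buffer so squares may touch the boundary
        squares = [unit_square(x, y, t) for (x, y, t) in placements]
        if not all(container.contains(sq) for sq in squares):
            return False
        for i in range(len(squares)):
            for j in range(i + 1, len(squares)):
                if squares[i].intersection(squares[j]).area > tol:
                    return False
        return True

    # e.g. the trivial 2x2 packing of 4 unit squares:
    # is_valid_packing(2.0, [(0.5, 0.5, 0), (1.5, 0.5, 0), (0.5, 1.5, 0), (1.5, 1.5, 0)])  -> True

At scale you'd probably want to replace the quadratic pairwise loop with a spatial index (e.g. shapely's STRtree), but the geometric predicates stay the same.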


I'd like to see this to 10k squares


sqrt(10k) = 100. It's just a full dense 100x100 grid. Now, 10001 is more interesting.


I meant UP to 10k squares. Like, the whole list.

Would look like stuff from "A New Kind of Science", I bet.

https://writings.stephenwolfram.com/2017/05/a-new-kind-of-sc...


Why are some combinations labeled as "found" and others as "proved"?


Because the "found" ones have not been proven to be the best. They're just the best known.


Interesting pastime.



