When Math Gets Impossibly Hard (quantamagazine.org)
109 points by bryanrasmussen on Sept 16, 2020 | 119 comments



I know a military contractor working on some pretty cool stuff, especially back in the day. On one of their projects they had to solve some really gnarly partial differential equation and weren't really getting anywhere. Since what they actually wanted were values (not the algebraic solution), what they ended up doing was building the equivalent circuit and measuring the voltage and current in it to get the values they needed for some completely different domain.

I always thought that that way of thinking was pretty neat. Can the same abstraction have a different physical manifestation that's cheap to create? If you kinda squint a bit, it's what we do with computers.


I've been doing some restoration on an analog computer. While it seems bizarre now to solve a problem with analog circuits instead of a digital computer, analog computers were a reasonable approach until about the 1970s.

The basic idea of an analog computer is that you use op amps to sum voltages. Integration is easy: just add a capacitor. By hooking up op amps, capacitors, and resistors (usually with a plugboard), you can quickly and easily solve differential equations. (Surprisingly, multiplication by a non-constant value is hard with an analog computer.)
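
To make that concrete, here's a minimal digital sketch of the same kind of patch (my own toy example, not from the machine I'm restoring): solving the damped oscillator y'' = -a*y' - b*y with two integrator stages in a feedback loop, which on the analog machine would be two op-amp integrators plus a summing amp.

    # Toy digital mimic of an analog-computer patch (assumed example):
    # solve y'' = -a*y' - b*y using two integrator stages in feedback.
    a, b = 0.5, 4.0           # the "potentiometer" settings
    y, dy = 1.0, 0.0          # initial conditions dialed into the integrators
    dt = 1e-3
    for _ in range(int(10 / dt)):
        ddy = -a * dy - b * y    # summing amplifier
        dy += ddy * dt           # first integrator
        y += dy * dt             # second integrator
    print(y)  # on the real machine you'd read this off as a voltage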

The nice thing about analog computers is they gave the answer almost instantaneously. You could turn the dials and experiment with different parameters interactively. A digital computer in 1970, on the other hand, might take minutes to solve the same equation. This also made analog computers useful for real-time applications such as flight simulators.

A big problem with analog computers is that the accuracy is limited by your components. If you want 0.1% accuracy, you need expensive 0.1% resistors and capacitors. With a digital computer, you can get as many bits of accuracy as you want using cheap components. Basically, Moore's law killed off analog computers.


Just open up any older electronics textbook, and it'll probably have examples of integrator and summing circuits solving differential equations. At least that's what we had in our control (engineering) books.


In my dad's Calculus II equivalent class in Morocco, one of the questions on the exam was to build the circuit that solves a given differential equation. Pretty neat!

That being said, that exam was substantially shorter and easier than the Cal II exam I had to take!


> multiplication by a non-constant value is hard with an analog computer

Researching this topic also qualifies you to write technobabble for sci-fi movies. Like "operational trans impedance amplifier."


If you haven't seen it yet, the retroencabulator video [1] is a masterclass on how to do it.

[1] https://youtu.be/RXJKdh1KZ0w



I actually use a transimpedance amplifier for my work, and it's even operational most of the time! But one does wonder if technobabble really does sound like real techno jargon to non-experts.


There is practically no scenario where a digital computer couldn't do better if what you wanted was a numerical result and not a symbolic one.

It's simply criminal how underutilised the technique of Automatic Differentiation is! It can compute incredibly complex differentials to machine precision, with good numerical stability and minimal tuning or hand-holding.

Lots of these "difficult" problems aren't, it's just that people aren't aware of the solution because it's not commonly taught in a University setting. Automatic Differentiation is one of those techniques that's "boring" and "numerical", so there are no courses that teach it.
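
For the curious, forward-mode AD fits in a few lines with dual numbers. A minimal sketch (my own illustration; only + and * are overloaded here):

    # Forward-mode AD via dual numbers: carry (value, derivative) pairs.
    class Dual:
        def __init__(self, val, eps=0.0):
            self.val, self.eps = val, eps
        def __add__(self, other):
            return Dual(self.val + other.val, self.eps + other.eps)
        def __mul__(self, other):
            # product rule, applied mechanically
            return Dual(self.val * other.val,
                        self.val * other.eps + self.eps * other.val)

    def f(x):
        return x * x * x + x   # f(x) = x^3 + x, so f'(x) = 3x^2 + 1

    print(f(Dual(2.0, 1.0)).eps)  # 13.0, exact to machine precision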


The parent comment is talking about solving differential equations. That's a very different thing from computing differentials. Automatic differentiation doesn't help with differential equations at all.


Yeah, this was a while back. The guy used to work for the South African military during apartheid and later worked on systems in Canada and Asia. I'm sure you're right that digital computers could do the job today, but back then the software and hardware wasn't close to being up to the task.


Doesn't literally every optimisation and ML class teach it nowadays?


Actually, no! You would think so, but again, it's a boring numerical technique so it is all too often sidelined in favour of the more "flashy" symbolic techniques.

An awful lot of tertiary education has been caught up in the publish-or-perish mentality. Unfortunately for AD, it is not "impressive". Differentiating hideously difficult expressions symbolically is impressive. Publishing a paper packed full of enormous, scary looking expressions is impressive. Merely saying that "we used AD and it worked just fine" is unimpressive.

For decades now we've been incentivizing intellectual wankery over practical utility.


Here's an example from my field: "The derivatives of the finite-build coils are computed -- with almost no additional effort from the authors -- using automatic differentiation." Well, what kind of scientist are you, if you are too lazy to take derivatives? /sarcasm

http://meetings.aps.org/Meeting/DPP20/Session/BP14.32


Isn't that the basis of analog computing? https://en.wikipedia.org/wiki/Analog_computer


Engineers used to use the shape of a soap bubble to calculate the torsional stiffness or perhaps stress distribution of a non-circular shaft.

http://www.newsteelconstruction.com/wp/wp-content/uploads/Te...


Soap film is also used to solve the Steiner problem: https://www.youtube.com/watch?v=dAyDi1aa40E


Isn't that just analog computing? Or is it something more fancy?


This is analog computing, and it is really cool. Yet, I guess if it corresponds to a circuit it should be more an ODE than a PDE (unless your "circuit" involves large pieces of surface of non-uniform resistivity).


It was a true PDE, not an ODE. The circuit was non-trivial, and though I knew PDEs at the time he told the story, I'd never used them on electronics (only heat / pressure flow and other physical engineering stuff), so I couldn't really understand what he was saying.


You can definitely do this! Check out Karplus's "Analog Simulation" for examples with conductive fluids and resistance networks: https://archive.org/details/analogsimulation0000karp


Yes. And numerical software would probably get there faster and with less work.


Numerical solution of PDEs is not really a solved problem. If your PDE involves shockwaves or turbulence (which show up even in really simple PDEs), then the methods to compute solutions get incredibly fancy. There's no "black box" solution for numerical PDEs, as there is for ordinary differential equations.


If you can solve it with an analog computer, without problems with metastability, oscillations, or excessive noise, then you can solve it numerically too. Those computers have much stricter limitations than digital math.


I should call the guy up sometime and get the details about what they were working on. It wasn't a run of the mill analog computer, but who knows how novel it really was.


Your description covers pretty much every quantum computer ever conceived. How certain are you?


Not sure you can call any of the actual quantum computers we've made "analog", but if you're talking about the quantum annealing machines, they are generally more limited and take longer than an equivalent supercomputing cluster.


Hum... Quantum computers tend to be digital, not analog. The word "qubit" is there for a reason; just because the bits are entangled doesn't make them analog.

Besides, the set of problems you can solve is different from the set of problems you can solve quickly. When discussing analog and digital computers, the speed difference isn't all that relevant, but you seem to be confusing them here. Quantum computers do not solve any problem you can't solve on a classical computer, they just solve some faster.


Well yeah, that's exactly my point. Quantum computers are digital, which is not what the above comment claimed, except for quantum annealers which are closer to analog computers.

What I'm saying is that even quantum annealers aren't really faster at any practical problem than a comparable supercomputer as of yet.


Oh, I was intending to reply to the parent of your comment.


Can you provide a simple example of this?


The classic example is the Navier-Stokes equations, a simplified model of fluid mechanics. Mathematicians do not yet know how to prove that they have a solution, but they can implement discrete versions of these equations that behave somewhat realistically. Yet it is extremely tricky to get right; for example, to observe turbulence at the same speed at which it appears in a real fluid.

A much simpler example where naive numerical methods do not work is any first-order equation with converging characteristics, for example the inviscid Burgers equation u_t + u u_x = 0. Even if you start your evolution with a smooth profile, it will form discontinuities. When you try to discretize it, computing derivatives by finite differences does not work at all, and the simplest upwind schemes have a lot of diffusion (which does not correspond to the model).
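
A quick sketch of that upwind behaviour (my own demo; assumes u > 0 everywhere so the upwind direction is always to the left, and a periodic domain):

    import numpy as np

    # First-order upwind scheme for u_t + u*u_x = 0 on a periodic domain.
    N = 200
    dx, dt = 1.0 / N, 0.002                  # CFL: max|u| * dt/dx = 0.6 < 1
    x = np.arange(N) * dx
    u = 1.0 + 0.5 * np.sin(2 * np.pi * x)    # smooth initial profile
    for _ in range(200):
        u = u - (dt / dx) * u * (u - np.roll(u, 1))  # backward difference
    # The sine wave steepens into a shock, but the scheme smears it out:
    # that smearing is the spurious numerical diffusion mentioned above.
    print(u.max() - u.min())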


> Mathematicians do not yet know how to prove that they have a solution

Probably just a typo, but it's not hard to find particular solutions. The question is more whether solutions exist for all reasonable initial conditions. Here's a famous particular solution:

https://en.wikipedia.org/wiki/Taylor%E2%80%93Green_vortex

I've used a variant of this one to test CFD software.

The Millennium prize problem has a succinct statement of what's under question:

> In three space dimensions and time, given an initial velocity field, there exists a vector velocity and a scalar pressure field, which are both smooth and globally defined, that solve the Navier–Stokes equations.

https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_existenc...

Keep in mind that this is for incompressible Navier-Stokes with a constant viscosity. For inviscid flows you could get discontinuities, as you've indicated with the inviscid Burgers equation, and I know that existence and uniqueness for the incompressible Navier-Stokes equations have been proved for certain uncommon viscosity laws (which should be reasonably realistic, as a constant viscosity is not quite right).

(And of course there are other complications not considered in this problem like boundary conditions.)


> Probably just a typo, but it's not hard to find particular solutions.

Indeed, a fluid eternally at rest is surely a solution :)


It depends on what "back in the day" means. Modern computers are absurdly capable compared to what might have been available to these engineers "back in the day".


and higher precision.


It is. I hope it comes back in the photonic form.


I knew a guy who built specific surfaces with plaster and threw marbles at them to find some special solutions to nonlinear mechanics problems. He thought of himself as an experimental mathematician.


Also see Optical Fourier co-processors for a similar example.


Banks do that all the time with Monte Carlo simulation. Can't get a closed-form solution? Simulate every possible outcome and aggregate them.


This is neither how a Monte-Carlo process works (it's sampling the possible outcomes randomly, not simulating all outcomes), nor what is described here (building a physical system where an observable value is described by the differential equation one is trying to solve).


In practice the number of simulated scenarios is quite large, which is what I meant by "every". Not rigorous though, I concur.

Monte Carlo methods are used to solve partial differential equations, I found both approaches similar in the sense that they look at how the system actually/could behave, instead of looking at the equations solely.


This is a pedantic response. There isn't an important difference between building a physical system and building a system in code whose generating process is that which you're trying to model. And everyone knew he didn't mean every outcome literally.


Worked in the securities division of a leading investment bank for 8 years, including on the equities exotics and hybrids trading desk, and I can confirm that we used to use MC whenever we had something we couldn't price using a closed-form method. That said, it's a lot slower and more compute-intensive, so when you could use a grid pricer or PDE solver or whatever, people would use that instead.

You're not really simulating every outcome, just a "large enough" representative sample.
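
As a toy illustration (my own sketch, with made-up parameters): pricing a European call under geometric Brownian motion by sampling, which is handy precisely because Black-Scholes gives a closed form to sanity-check against.

    import numpy as np

    # Monte Carlo price of a European call under GBM (toy example).
    S0, K, r, sigma, T, n = 100.0, 105.0, 0.01, 0.2, 1.0, 1_000_000
    rng = np.random.default_rng(0)
    Z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
    print(price)  # error shrinks like 1/sqrt(n); compare to Black-Scholes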

As an aside, my dad was a chemical engineer and Newton-Raphson was his equivalent everything-is-a-nail numerical method. We used to use Brent's root finder and Monte Carlo simulation in the same way.

Numerical methods are awesome in general.


There is actually even a connection between PDEs and SDEs (stochastic differential equations), namely the Feynman-Kac formula, which is used a lot in financial mathematics; it lets you switch between taking an expectation and solving a PDE.
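
In its simplest form (stated from memory, so treat as a hedged sketch; no discounting or source term): if u solves the terminal-value problem

    \partial_t u + \mu(x,t)\,\partial_x u + \tfrac{1}{2}\sigma(x,t)^2\,\partial_{xx} u = 0, \qquad u(x,T) = \psi(x),

then it has the probabilistic representation

    u(x,t) = \mathbb{E}\big[\psi(X_T) \mid X_t = x\big], \qquad dX_s = \mu(X_s,s)\,ds + \sigma(X_s,s)\,dW_s,

so you can either solve the PDE on a grid or simulate the SDE and average, whichever is cheaper.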


You’re talking more about goal-seeking functions right? That’s the one I always see in a finance context.


Is this something an FPGA or something similar could do? Would be interesting to know of any home-brew attempts at this out there.



FPAAs don't really exist as products you can buy, as far as I know (unlike FPGAs, which are a very mature technology, to the point that I suspect there are nontrivial (say, >5%) odds the average HN reader owns something with an FPGA in it somewhere).


I didn't know about this. It's curious that the article mentions that recently (2016) improved FPAAs were demonstrated. Perhaps an opportunity where there wasn't one before.


I wonder how difficult it would be to re-purpose a modular analog audio synthesizer for this, since they're direct descendants of analog computers in the first place.


There are ASICs out there that simulate analog synths, such as Universal Audio's. If that's the case, it brings to mind some interesting pure-software approaches.


This might be incredibly pedantic, but since this is a mathematical philosophy article, I'm going to go ahead.

These aren't "hard" problems, they're simply questions that don't make sense. Is finding an odd number that is divisible by 2 a "hard problem"? I don't think so, it's just that it turns out that it doesn't make sense to ask the question.

Some of these "hard problems" actually have solutions that demonstrate they aren't solvable, and these solutions are fairly elegant, in which case I'd also say these are not "hard".


I think that's the joke of the title, though; perhaps rephrased slightly: "no matter how hard you try, this mathematical problem cannot actually be solved (and we can prove it, too)."


One interesting way of detecting gerrymandering that has been used in court [0] is to do a random walk on the graph of districting plans, where two plans have an edge between them if you can flip a precinct (the basic building block of a districting plan) in one plan to get the other plan.

By randomly walking this graph, starting from the current plan and randomly flipping precincts, you can build a distribution of how compact the plans you saw were (there are some measures of compactness defined in TFA and the document linked below), and identify outliers, which are likely to be gerrymandered.

[0] https://sites.tufts.edu/vrdi/files/2018/06/md-report.pdf
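
A heavily simplified sketch of the idea (my own toy, tracking seats won rather than compactness for simplicity: a 1-D "state" of 9 precincts cut into 3 contiguous districts, with made-up vote shares; real chains like the one in the report handle 2-D contiguity and population balance):

    import random

    # Party A's vote share in each of 9 precincts (made-up numbers):
    margins = [0.9, 0.8, 0.7, 0.45, 0.45, 0.45, 0.45, 0.45, 0.45]

    def seats(cuts):
        """Seats party A wins given district boundaries at `cuts`."""
        i, j = cuts
        blocks = [margins[:i], margins[i:j], margins[j:]]
        return sum(sum(b) / len(b) > 0.5 for b in blocks)

    cuts, dist = [3, 6], []
    for _ in range(10_000):
        new = cuts[:]
        new[random.randrange(2)] += random.choice([-1, 1])
        if 1 <= new[0] < new[1] <= len(margins) - 1:
            cuts = new                 # accept the boundary flip
        dist.append(seats(cuts))
    # If the enacted plan's statistic sits far out in the tail of `dist`,
    # that's statistical evidence of gerrymandering.
    print(sorted(set(dist)), seats([3, 6]))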


The solution to gerrymandering is probably social, not mathematical, because the goals we want to pursue in district-drawing aren't strictly mathematical.


The journal Science just published an article on gerrymandering with a similar argument. It's appealing to mathematically "solve" gerrymandering, but "The most important decisions in devising an electoral map are grounded in philosophical or political judgments about which the technology is irrelevant. It is nonsensical to completely transform a debate over philosophical values into a mathematical exercise." The article has a nice diagram showing how gerrymandering works as well as a discussion of the various conflicting goals.

https://science.sciencemag.org/content/369/6508/1179


The best solution to gerrymandering is to make it instantly mostly irrelevant, by using the popular vote and ranked-choice voting.

And indeed, that's mostly a social solution in its implementation.


Gerrymandering is about how you break up people into districts. Those districts elect their representatives by popular vote, so popular voting for these representatives doesn't fix gerrymandering.

Statewide votes are mostly done by popular vote, so they're not affected by gerrymandering. Nationwide votes like the one for president are not done by popular vote, but they're also not done by district, so gerrymandering doesn't affect them.


It bothers me that two arguments are commonly advanced for the idea that gerrymandering is a menace:

1. It lets politicians create districts that will vote for them no matter what.

2. It lets politicians waste opposition votes by putting them into districts that will, overall, just barely vote the other way.

The problem is that these two visions of "the problem" directly oppose each other. #1 says that the problem is districts that vote an 80/20 split. #2 says the problem is districts that vote a 51/49 split.


You don't need to hypothesize. Look at Wisconsin. A bunch of fairly Republican districts and a few heavily Democratic districts means that it's quite easy to produce an overwhelming R majority in the State House. The fairly R districts are not sure wins, but they don't have to be.

> Republicans won 57 percent of the vote in contested races in 2016 and won 43 of 50 seats. In 2018, Republicans won the same percentage of the vote in contested races — but won 56 of 63 seats.

> That’s how gerrymandering works. Even the fact that so many more Democratic seats are uncontested is by design: By making certain districts heavily Democratic, many others can be more easily made slightly Republican.

https://www.washingtonpost.com/politics/2018/12/04/several-l...

This stuff works, it's easy to see the effects in heavily gerrymandered states.


> A bunch of fairly Republican districts and a few heavily Democratic districts

Something I just realized is that it will always be that way!

Democrats do better in cities (which are in general heavily Democratic), while Republicans do better in rural areas (typically Republican, but not in a dense way like a city would be).

So if you divide districts naturally by geography, they will always appear gerrymandered in Republicans' favor.


That's why democracies generally divide districts by population; geography is largely irrelevant. Dividing by geography was actually a deliberate choice in order to skew the electoral advantage toward landed men.


> That's why democracies generally divide districts by population. Geography is largely irrelevant.

That's kind of my point: No it is not. Districts are meant to be contiguous geographical areas.

Even if you adjust the geographical size of each district to have similar population sizes (which was your point), my point still stands: Because of the geographical nature of districts, cities will always have a larger Democratic percentage, and it will always look like Republicans have gerrymandered things.


.... Which is why most democracies have multiple districts per city.

It won't look like Republicans gerrymandered it to their advantage, because they won't have the advantage.


The rural vs urban partisan divide is not some exogenous thing. It's only existed in the last couple of political alignments in the US, basically since FDR. It's entirely possible things realign.


The rural/urban divide has existed since the birth of the US. See Jeffersonian v Hamiltonian economies.


Yes, but the partisan divide based on political parties historically has been far more along the North/South divide than the rural/urban divide.


The rural/urban divide may have existed since the birth of the U.S., but the division has not always aligned with the same political parties, which would be impossible anyway, as some parties no longer exist.


Have you seen those maps? They quite clearly are not geographic city vs. rural, nor geographic anything; they sometimes even resemble abstract drawings of four-legged animals.


I think it's because you're off on #1. If you're going to create a district that will overwhelmingly vote for one party, you want it to vote for the other side. All votes over 51% are wasted. It's called "stacking." It's complemented by "cracking," where you break up a unit that would vote for the other side and add them to multiple districts where the other side will only get 40%.

The "cracked" districts will vote for you no matter what, but certainly not at 80/20. You're looking for 51% + a safety factor of 10-20%.


They aren’t necessarily in opposition to each other, though. Imagine having three districts of 60/40 instead of two districts of 90/10 and one of 0/100. In this way, the party in charge has effectively erased the substantial portion of the population that would support the opposition!


What do you think the policy effects are in each case? In your second scenario, you have three politicians in two parties who are all basically unconstrained. In the first scenario, you have three politicians who all nominally belong to the same party, but their behavior is constrained much more sharply by the closeness of their districts.


The core problem is the ability to leverage approximately 50% of votes into winning far more than 50% of seats.


I'd say the core problem goes deeper than that, and getting rid of gerrymandering only solves part of it.

Let's say you have parties representing 10%, 20%, 30% and 40% of the population respectively. Ideally, you'd like the politicians elected in the same proportions. In reality if the populations are dispersed evenly, then when you divide up the populations into seats and elect one representative from each seat you are going to end up with some mix of the 30% and 40% making up 100% of the parliament. If the distribution is perfectly even and you are using FPP then the entire parliament will all be from the 40% - which is more of a dictatorship by the 40% than a democracy.

Eliminating gerrymandering, or a "fairer" voting system like instant runoff, might alter the breakdown between the 30% and 40%, but either way the 10% doesn't get a seat. To have any say at all they have to vote for one of the 30% or 40%. In other words you end up with a two-party system, which is of course what we do have. Neither getting rid of gerrymandering nor using a better voting system actually solves the problem, if the problem is ensuring parliament reflects the voting population. Yet we see those two things proposed over and over again here on HN and elsewhere.

The solution is what the other posters in this sub-thread mention. Germany and NZ use it. To me, it looks to have worked very well for them. But getting there is hard as the incumbent 2 parties will combine to oppose it. The change always makes them lose seats to the minorities.


Which suggests that a simplistic "solution" to gerrymandering is that, after the total vote share of the best-performing party is calculated (call it V%), and their share of seats is also calculated (S%), the second-best-performing party should be able to propose a change in district winners that reduces the gap between V% and S%. (And so on for the other parties competing).

Of course that would be outrageously unfair, with politicians choosing which districts they win rather than voters choosing the politicians to represent them, and yet that is somehow an improvement over the current system of gerrymandering.

A slightly less extreme option would be for the districts that get flipped to be decided by where a given party came closest to winning (the "best near-winner" system used in Baden-Württemberg), but really the aim of this system is to remove all incentive to gerrymander in the first place, so this result flipping rule would never have to be applied.


This is how the German system works. Not unfair at all. You cast two votes: one for a person, one for a party. At the end of the day the party percentages have to roughly line up (with a minimum percent for any representative at all.) This system has some very nice balancing properties in that you can vote for the person you want to represent you in the present race as well as the party you most strongly identify with.


My understanding is that the German system is Mixed-Member Proportional Representation (MMP) [0] which usually involves having a list of top-up candidates who don't belong to any specific district. The Baden-Württemberg system is a clever way around this, although it does lead to some districts being represented by multiple representatives.

Having more representatives than districts seems like it would be an overly-ambitious cultural shift in the US, whereas my proposal keeps the familiar 1:1 correspondence, even if there is gerrymandering. It also doesn't change the ballot papers or counting process; and, if gerrymandering is successfully disincentivised, the process works exactly the same as under the existing system.

[0] https://en.wikipedia.org/wiki/Mixed-member_proportional_repr...


Yes, I don't believe I contradicted anything there, but you're right that I didn't tell the whole story, and thank you for spelling out some of the details. More representatives than districts would be a shift, and I agree this may be a harder pill to swallow.


These don't seem to contradict. It's entirely possible to carve up a rural area into sure-thing 60/40 districts, while carving up cities to be 55/45 while leaving a handful of 5/95 districts to the other party.


Another way of looking at it: You have nine districts. The overall popular vote in the state is 55-45. Should the districts be awarded 5-4? Or should each have a 55-45 split?


Gerrymandering can be much worse than that. The unpopular party could win 8 districts and leave one for the more popular party.

Math: Assume 900 people, so 495 voting for the popular party and 405 for the unpopular party. Put 100 popular voters in district 1. Split the remaining equally (395 popular, 405 unpopular). All districts except the first will be won 51%-49% by the unpopular party. (This is the "pack and crack" gerrymandering strategy, packing the party into some districts and cracking them across the other.)
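
The arithmetic, spelled out (same numbers as above):

    # 900 voters, 9 districts of 100 each: pack, then crack.
    popular, unpopular = 495, 405
    packed = 100                       # district 1: 100% popular-party voters
    per_pop = (popular - packed) / 8   # 49.375 popular voters per district
    per_unpop = unpopular / 8          # 50.625 unpopular voters per district
    print(per_pop, per_unpop)  # unpopular party wins districts 2-9, ~51-49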


The thing about pack and crack is that if you try to do something most people think is fair (“preserving the integrity of communities of common interest”), you find that in areas with an urban core and suburban peripheries you end up naturally with pack-and-crack favoring Republicans, because Democrats are often hyperconcentrated in the urban areas with Republicans holding a relatively narrow majority in the suburbs.

Of course, if you adopt something like STV in small (say ~5 seat) multimember districts, the opportunity for altering representation by how you draw district lines goes way down, and you still retain individual candidate accountability to the general electorate (unlike in party-list proportional schemes).

The problem isn't “how do you draw lines fairly in single-member, FPTP districts”. The problem is single-member, FPTP districts.


We're talking about Federal districts here, not state. It'd take a constitutional amendment to alter the single-member, FPTP practice, wouldn't it? That's much, much, MUCH harder than passing a state law to change redistricting / gerrymandering practices.


> We're talking about Federal districts here, not state.

Yes.

> It'd take a constitutional amendment to alter the single-member, FPTP practice, wouldn't it?

Nope, the single-member district requirement is a federal statute that was adopted in response to states turning to, or at least considering, multimember at-large FPTP districts (majority-take-all) to prevent effectively enfranchising minorities (well, specifically blacks). Which was certainly a good motive, and single-member FPTP districts are better than that, but the same law could be retuned to prevent that but allow (or even require, in states large enough to support it) multimember districts with a candidate-centered proportional scheme like STV.


Ah, thank you - I had been unable to find the details on that.

Suffice to say that it would take a federal statute to allow MMD outside of NM and Hawaii? That still seems like a heavier lift than passing state-level gerrymandering reform.

And they don't need to be set against themselves anyway - if MMD were allowed, state-level gerrymandering reform is still valuable, for those that stick with single districts.

Unless you're talking about a federal law to mandate MMD, which is even harder.


I agree. I'm more just pointing out that it's non-obvious what "fair" even is.

Although I do like the standard that was suggested in the Wisconsin case, the one that Kennedy chickened out on before he retired. Sorry, I forget the details - something about minimizing wasted votes.


You're getting it slightly wrong/backwards. When gerrymandering you want to:

1. Create districts that will overwhelmingly vote for the other guy.

and

2. Create districts that will just barely vote for you.

Because it doesn't matter if you win a district by 1% or by 49% you can take a population that overall is 50/50, or even prefers the other guy and create a lot of districts that will vote for you by a small margin, and only a handful of districts that will vote for the other guy but by a really big margin.

There's actually an android puzzle game that does a great job of communicating how it works: https://play.google.com/store/apps/details?id=com.studioBlim...


No, they complement each other. What do you do with the 'excess' vote after creating a 51/49 district? You combine the excess from N 51/49 districts into a single district that is 20/80. That way you have N districts that fairly reliably vote for you, and a single districts that extremely reliably votes against you.


They oppose each other. You're doing a better job the more of the vote you have in a district of type 1, and you're doing a better job the less of the vote you have in a district of type 2. Contradictions don't get any more direct than that. To the extent that a politician is benefiting himself in style #1, he's hurting himself in style #2, and vice versa.


The people you're replying to aren't realizing your mistake, and you're not reading their replies. You're doing a worse job the more of your vote you have in a district of Type 1. You're creating districts of Type 1 for the other side. Every vote over 51% is a wasted vote, and there's never a reason you would shove all of your voters into one district if you were in control of districting.

> To the extent that a politician is benefiting himself in style #1

There's no possible benefit to a politician in winning a district 80/20 rather than 60/40.

edit: If 6 people want to vote for me, and 6 want to vote for you, I want to put 3 of your voters in one district, and have three more identical districts each with 2 of my voters and 1 of yours. The first district is what you're referring to as Type 1, and the other three are what you're referring to as Type 2. If you can find a more ideal districting arrangement for my party, I'd like to hear it.
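
Spelled out (a trivial check of the arrangement above):

    # 12 voters, 6 each, four districts of 3: pack 3 of yours, crack the rest.
    districts = [["you"] * 3] + [["me", "me", "you"]] * 3
    wins = sum(d.count("me") > d.count("you") for d in districts)
    print(wins, "of", len(districts), "seats")  # 3 of 4 seats from half the votes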


> The people you're replying to aren't realizing your mistake, and you're not reading their replies.

No, you're just not bothering to read my comment. I'll repeat it.

There are two (2) arguments commonly advanced toward the idea that gerrymandering is bad. In summary, they are:

1. Creating safe seats;

2. Nullifying opposition votes.

More of one means less of the other.


Old-fashioned gerrymandering sometimes worked that way. You'd have incumbents- in both parties- try and use the gerrymander to keep their individual seats, for personal political reasons. Party bosses made sure that their own seats were uncontestable, because they had power over the process. Sometimes this would cross party lines (you, party A boss, give me a safe seat, I'll vote to give you a safe seat, we'll both be happy).

The way it works now is that you create a perfectly safe seat for the other party in exchange for many more fairly comfortable seats for your own party. The maps are not drawn up to make individual incumbents happy, but to maximize party power.

The fairly comfortable seats may still be "safe" in normal elections- they're not 100% a sure thing, but they're not particularly competitive and elections can be reliably fought and won cycle after cycle. You can't just not campaign, but the other side has one hand tied behind their backs, so as long as you have a competent campaign and no sex scandals, you're fine. Whether that's the sort of "safe seat" people are talking about I couldn't say.


Can you give examples of informed people arguing that (1) is a problem? 99% of the criticism that I have seen, and what I feel, is that the issue is only (2).

The handful of ultra-secure opposition seats created by (1) are more like an unfortunate byproduct of the truly foul (2).


Does anyone actually complain about #1? In practice, congress members in very safe districts then get primaried from the left: Ocasio-Cortez knocked out Joe Crowley in NY-14, and Pelosi is currently fighting a primary election. The people disadvantaged in a democratic district in a gerrymandered republican state are... republican voters in that district. But they own the state, so what?


Well in that case, #1 is a misinformed argument. #2 is the reason districts are gerrymandered.

The party in charge of drawing the lines will, as a side effect, create some safe seats for the minority party. And in doing so, will retain control of the legislative body, by having a larger number of districts they can win.


They don't necessarily contradict - one could create a couple of 80/20 districts for the political boss of the region and 51/49 (or 60/40 to be safer) for the rest of the region.

Also, I've mostly heard the second argument made, not the first. A large party's leadership should probably lean in that direction anyway, not caring as much about the individual official.


I always thought the solution to gerrymandering would be to use ranked-choice voting across the state, with the top vote-getters being the ones who are elected. Then you create the districts that they represent. After all, land doesn't vote.


The actual solution to gerrymandering is to discard the whole silly idea of "districts" and vote proportionally, just as many countries do, and they are none the worse off for it.

What about the rural states, you say? Well, they still get to manage their own internal affairs with their own legislatures, governors, etc. But as far as national-level decisions are concerned, their weight should be exactly equal to that of their population representation.


>Many states require that districts be “compact,” a term with no fixed mathematical definition.

Every open cover has a finite subcover.


Logged in just to upvote this. I shudder to think what happens in those other states!


I've always found perimeter over area close to the mark for gerrymandering, but not quite good enough (because of the coastline paradox, it forces picking an arbitrary segment size [1]).

My idea - I welcome counterarguments - is to calculate the district's center of mass and find the point in the district furthest from that center. Then create a circle with that distance as the radius and find the percentage of _land_ (maybe even privately owned land?) within that circle that is in the district. My view is that the percentage should be something like 50% (maybe higher? Hard to say without actually designing districts and checking the numbers).

50% would mean that half of the habitable land remaining in the circle could be a different district but no more.

[1] https://en.wikipedia.org/wiki/Coastline_paradox
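
A rough sketch of the metric (my own attempt, using shapely and ignoring the land/water refinement; the circle is approximated as a polygon):

    from shapely.geometry import Point, Polygon

    def compactness(district: Polygon) -> float:
        """District area over the area of the circle centered at the
        center of mass, with radius reaching the furthest district point."""
        c = district.centroid
        radius = max(c.distance(Point(p)) for p in district.exterior.coords)
        circle = c.buffer(radius)
        return district.area / circle.area

    square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
    print(compactness(square))  # ~0.64; long, spindly districts score low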


I don't think the coastline paradox has to apply, because we don't have to define the boundaries by existing geographical features like rivers and so on.

Instead, you can simply put down a finite number of points on a map and define the districts using polygons (connect the points with straight lines).


What about perimeter vs area but let coastlines be counted as straight lines?


The coastline paradox extends to much more than coastlines - pretty much all natural boundaries I would think. It's certainly doable, it just seems to add extra variables to what really needs to be as foolproof as possible.


Nitpick: the bridge problem is impossible because there aren't 2 or 0 nodes of odd degree.

If you have 2 islands with three bridges connecting them, you could just walk them trivially, yet there is an odd number of connections on each node.


TFA specifies the bridge problem in the form where you have to "end up back home", so its conclusion is correct.


Except that, if I understand correctly, the author poses the problem such that one must 'end up back home' (that is, one must find an Eulerian circuit, not a path). If there are 2 nodes of odd degree, then you must start at one and end at the other.


I may be misremembering my graph theory class, but isn't the necessary and sufficient condition for an Eulerian path just that every vertex has even degree?

I'm not sure how it could work if any nodes had odd degree.


For an Eulerian circuit, you need the graph to be connected and for every node to have even degree. For an Eulerian path (which doesn't have to start and end at the same point) you may also allow two vertices to have odd degree.
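
A quick checker for those conditions (my own sketch; assumes the edge list describes a connected multigraph):

    from collections import Counter

    def euler_kind(edges):
        """Return 'circuit', 'path', or 'neither' for a connected multigraph."""
        deg = Counter()
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        odd = sum(1 for d in deg.values() if d % 2)
        return {0: "circuit", 2: "path"}.get(odd, "neither")

    # Königsberg: 4 land masses, 7 bridges, all four nodes of odd degree.
    bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]
    print(euler_kind(bridges))  # neither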


Got it, thanks!


It's sufficient but not necessary. Consider a trivial example with three nodes arranged in a chain. The nodes at the ends of the chain both have degree 1, but walking from one end to the other is an Eulerian path.


Math is hard for me because once a step of a proof is lost on me, the whole thing is lost unless I branch out into a rabbit hole and learn what just happened.


That's part of what makes it such a rich subject. Even something as simple as writing y=sin(x) has so much more embedded into it than the "input x output y" notion of functions. Parametrization of the Unit Circle, right triangles, special angles, you name it. Diving into the rabbit hole enough to convince yourself the step is justified is part of the learning process, if you're engaging with it properly. You do need to be judicious sometimes with what you decide is important to dive further into. That comes with the context/goals and experience of the reader.


ok, but all of those rabbit holes have bottoms. It just may be that the space in between represents a year or more of learning.


They don't have bottoms, because at the bottom you have logical assumptions, like classical logic. These only exist because of history, only because it was the easiest way for someone to say something "insightful". Going down the rabbit hole, you have to choose some arbitrary point to stop at; it would be a point at which you think you can apply the knowledge you have gained so far in a way that other people will "appreciate".

Or you can just not stop, and try to go even deeper. But then you stop being a mathematician, and you turn into a madman, a philosopher or a theologian.


The "rabbit hole" being discussed is just "understanding a step in an argument", not, providing a foundation for logic.

It's not impossible to gain understanding of a step in someone else's argument.


I agree. Not everyone thinks or solves problems in the same ways, and math notation has historically favored only one type of cognition. More visual approaches using touch screens or spatial / tactile ones using AR would be cool to see.

There may also exist types of math that are impossible to represent or reason about with current notation.


If you are interested in mathematical notations, check out Terry Tao's comment [0][1] on desirable properties of math notations. It was discussed not long ago [2].

[0] https://en.wikipedia.org/wiki/Terence_Tao

[1] https://mathoverflow.net/questions/366070/what-are-the-benef...

[2] https://news.ycombinator.com/item?id=23911903



