I don't understand how their min-cut-preserving sparsification works.

You sample edges, and it's supposed to always happen to include the relevant min-cut edges? How?

The details of the paper are way over my head, but here's what I took away from a quick skim.

The random sampling technique of Benczúr and Karger approximately preserves the total weight of every cut in the graph, up to certain error bounds (with high probability). That is, if you draw a particular cut through the graph, the original version might have 100 unit-weight edges crossing it (total weight 100), while the sparse graph might have only 3 edges crossing it, reweighted so that their total weight is 102. Since all cut weights are approximately preserved, you can get an approximate answer to the min-cut problem in the original graph by solving the same problem on the sparse graph.
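
To make the reweighting trick concrete, here's a toy Python sketch (my own illustration, not the actual Benczúr-Karger procedure, which picks each edge's sampling probability from its "strength"): keep each edge with probability p and scale the kept weights by 1/p, so the expected weight crossing any one fixed cut is exactly preserved. The real work in the paper is showing that all exponentially many cuts stay close to their expectations at once.

    import random

    def sparsify_uniform(edges, p, seed=0):
        # Keep each edge (u, v, w) independently with probability p and
        # scale its weight by 1/p, so for any fixed cut the expected
        # total weight crossing it equals the original cut weight.
        rng = random.Random(seed)
        return [(u, v, w / p) for (u, v, w) in edges if rng.random() < p]

    def cut_weight(edges, S):
        # Total weight of edges with exactly one endpoint in the set S.
        return sum(w for (u, v, w) in edges if (u in S) != (v in S))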

The deterministic, exact approach is a lot more complicated. In this case, we're not trying to preserve all cut weights, just the weights of minimum cuts. If you imagine a graph that consists of some dense "clusters" connected by sparse "bottlenecks", then a minimum cut must go through the bottlenecks while avoiding the clusters.

So hand-waving the details, if you could exactly partition the graph into dense clusters, you could just collapse each cluster into a single vertex, and then easily solve the min-cut problem on the graph of clusters instead of the original graph. This partitioning is hard to do exactly. But if you start by doing it approximately, then you can bound the number of "adjustments" that need to be made in order to make the partitioning work.
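
Hand-waving further, here's a sketch of just the contraction step in Python (my own simplification; cluster_of is a hypothetical precomputed vertex-to-cluster map, and computing a good clustering is the hard part the paper addresses):

    def contract(edges, cluster_of):
        # Collapse each cluster to a single vertex: edges inside a
        # cluster vanish, and parallel edges between two clusters have
        # their weights summed. Cuts of the contracted graph correspond
        # exactly to cuts of the original that never split a cluster.
        merged = {}
        for (u, v, w) in edges:
            cu, cv = cluster_of[u], cluster_of[v]
            if cu != cv:
                key = tuple(sorted((cu, cv)))
                merged[key] = merged.get(key, 0) + w
        return [(a, b, w) for (a, b), w in merged.items()]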


My very rough understanding from reading the article (I could be wrong, I'm not a researcher) is that this is accomplished through the key insight of Kawarabayashi and Thorup (2015), which is that min cuts must have low conductance. So it seems that you can partition the graph into well-connected components (perhaps by searching for low-conductance cuts?) and preserve the min cut of the original graph in the reduced graph obtained by contracting each of the partitioned components into a single node.
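
For what it's worth, conductance is usually defined for a vertex set (one side of a cut) rather than for a single edge. A minimal Python sketch of the definition, assuming an undirected weighted edge list and a proper nonempty subset S:

    def conductance(edges, S):
        # Weight crossing the cut (S, complement of S), divided by the
        # smaller of the two sides' volumes, where a side's volume is
        # the sum of its vertices' weighted degrees. A low value means
        # S meets the rest of the graph through a thin bottleneck.
        cut = sum(w for (u, v, w) in edges if (u in S) != (v in S))
        vol_S = sum(w for (u, v, w) in edges for x in (u, v) if x in S)
        vol_rest = 2 * sum(w for (_, _, w) in edges) - vol_S
        return cut / min(vol_S, vol_rest)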

Unless you're asking about the randomized graph sparsification of Benczúr and Karger, in which case the output is correct with high probability [0] but, as with many randomized algorithms, is not guaranteed to be correct.

[0] https://en.wikipedia.org/wiki/With_high_probability
