Visualizing dynamic programming (ezyang.com)
116 points by j_baker on Feb 20, 2011 | 11 comments



For the people who've always wondered why it was called "dynamic programming", there's a funny story there. I'll quote Richard Bellman, who coined the term:

"I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes. An interesting question is, 'Where did the name, dynamic programming, come from?' The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word, research. I'm not using the term lightly; I'm using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term, research, in his presence. You can imagine how he felt, then, about the term, mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various rea- sons. I decided therefore to use the word, 'programming.' I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying—I thought, let's kill two birds with one stone. Let's take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it's impossible to use the word, dynamic, in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities."

So there you go: it was supposed to sound cool, and dynamic, and like something that any self-respecting congressman would want to fund. I'd always vaguely suspected.


Other popular examples he could consider adding:

Image seam carving. Remember when this was all the rage a while ago? The 2D offset pattern is "up-left, up, up-right" for the vanilla version of the algorithm. If you want to support seams with a sharper slope, the offset pattern will be correspondingly wider.
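
A minimal sketch of that recurrence in Python (energy is a hypothetical input: a 2D list of per-pixel energies):

    # cost[y][x] is the cheapest vertical seam ending at (x, y); each cell
    # depends on the up-left, up, and up-right cells of the previous row.
    def min_seam_cost(energy):
        rows, cols = len(energy), len(energy[0])
        cost = [energy[0][:]]          # first row: cost is just the pixel energy
        for y in range(1, rows):
            prev, row = cost[-1], []
            for x in range(cols):
                # the slice clamps the offset pattern at the left/right edges
                row.append(energy[y][x] + min(prev[max(x - 1, 0):x + 2]))
            cost.append(row)
        return min(cost[-1])           # the cheapest seam ends at this minimum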

Optimal game playing. Whenever the space of states is much smaller than the space of move sequences, you probably want dynamic programming. Example: The game state consists of a string of tokens, each with a point value. Players take turns removing a token from the string's left or right edge, adding its value to their running score. Here the move space grows exponentially but the state space only grows quadratically.
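
For concreteness, a sketch of that game in Python (values is a hypothetical list of the tokens' point values; the state is just the interval of remaining tokens):

    from functools import lru_cache

    # solve(i, j) = the best score margin (own score minus opponent's)
    # the player to move can force when values[i..j] remain.
    def best_margin(values):
        @lru_cache(maxsize=None)
        def solve(i, j):
            if i > j:
                return 0
            return max(values[i] - solve(i + 1, j),    # take the left token
                       values[j] - solve(i, j - 1))    # take the right token
        return solve(0, len(values) - 1)

There are only O(n^2) intervals, so this runs in quadratic time even though the number of move sequences is exponential.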

Pseudo-polynomial algorithms for NP-complete problems. You can think of these as one-player games analogous to the two-player games above, where, as before, the state space is small compared to the move space. A classic example is the knapsack problem with integer weights and a small knapsack; here the table offsets would be the different weights.
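
A sketch of that one too (items is a hypothetical list of (weight, value) pairs with integer weights):

    # Pseudo-polynomial 0/1 knapsack: the table is indexed by weight, so the
    # O(len(items) * capacity) running time is polynomial in the magnitude of
    # the capacity, not in the size of its binary encoding.
    def knapsack(items, capacity):
        best = [0] * (capacity + 1)    # best[w] = max value within weight w
        for weight, value in items:
            for w in range(capacity, weight - 1, -1):  # descending: each item used once
                best[w] = max(best[w], best[w - weight] + value)
        return best[capacity]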

I've long loved this quote from Steve Skiena:

"Once you understand it, dynamic programming is probably the easiest algorithm design technique to apply in practice. In fact, I find that dynamic programming algorithms are often easier to reinvent than to try to look up in a book. That said, until you understand dynamic programming, it seems like magic."

When I first read this as a kid having a hard time grokking dynamic programming, I thought it was bluster. Now I know it's the absolute truth. You could even argue that dynamic programming should be considered a single algorithm rather than an algorithm design technique, but that might be going too far.


> "Once you understand it, dynamic programming is probably the easiest algorithm design technique to apply in practice. In fact, I find that dynamic programming algorithms are often easier to reinvent than to try to look up in a book. That said, until you understand dynamic programming, it seems like magic.

This is especially true if you apply memoization to the naive recursive formulation to obtain an efficient algorithm.

http://clojure.github.com/clojure/clojure.core-api.html#cloj...
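
In Python the same trick looks roughly like this (a sketch, using functools.lru_cache where the link above uses Clojure's memoize; edit distance is the article's running example):

    from functools import lru_cache

    # The textbook recursive definition of edit distance, made efficient
    # purely by caching the subproblem results.
    def edit_distance(a, b):
        @lru_cache(maxsize=None)
        def d(i, j):                   # distance between a[:i] and b[:j]
            if i == 0 or j == 0:
                return i + j           # all insertions or all deletions
            return min(d(i - 1, j) + 1,                             # delete a[i-1]
                       d(i, j - 1) + 1,                             # insert b[j-1]
                       d(i - 1, j - 1) + (a[i - 1] != b[j - 1]))    # match/substitute
        return d(len(a), len(b))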


When I first read about dynamic programming, I was already familiar with the idea of memoizing a naive recursive program to make it efficient. After that, hearing all this talk about oddly-shaped tables was actually kind of irritating. "Why are they going to all this trouble to talk about specific ways of memoizing recursive functions?" I asked myself, rhetorically. "Using a hash table has the same amortized asymptotic complexity, and this shouldn't be hard."

It just seemed like way too much artificial complexity for what was, at heart, a remarkably simple and easy technique. This still bothers me. Perhaps it's a historical relic from the ancient times when dynamic programming was done by hand, and those oddly-shaped tables were needed to make it easier for the people working as computers.


Memoization and dynamic programming are not exactly the same.

Memoization wins when the filled-in part of the table is expected to be sparse. Because dynamic programming works bottom up, it always fills in the entire table.

Memoization's downside is that it uses a lot of memory. For the simple variant of the edit-distance problem it uses O(n^2) memory, as opposed to O(n) with dynamic programming, since the bottom-up order only ever needs the previous row of the table. Relatedly, the cache behavior and other constant-factor overheads are usually much worse with memoization.
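
For concreteness, a sketch of the bottom-up version that keeps only the previous row, which is where the O(n) figure comes from:

    # Bottom-up edit distance keeping a single row: O(len(b)) memory
    # (swap the arguments to make it O(min(len(a), len(b)))).
    def edit_distance_bottom_up(a, b):
        prev = list(range(len(b) + 1))     # row 0: distance from "" to b[:j]
        for i in range(1, len(a) + 1):
            curr = [i]                     # column 0: distance from a[:i] to ""
            for j in range(1, len(b) + 1):
                curr.append(min(prev[j] + 1,                           # delete a[i-1]
                                curr[j - 1] + 1,                       # insert b[j-1]
                                prev[j - 1] + (a[i - 1] != b[j - 1]))) # match/substitute
            prev = curr
        return prev[-1]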


Very nice article. But who else thinks the arrows should be drawn the other way?

In his diagrams, an arrow means "depends on." I think it would be more natural to draw them in the direction the data flows, i.e., pointing from each cell to the cells it supports. That way the computation is visualized as running in the direction of the arrows rather than against them.


I love his diagrams! Friendly and accessible.

Found a post on how he makes them: http://blog.ezyang.com/2010/04/diagramming-in-xournal-and-gi... Though if you have a tablet and a good sense of design, the software is probably not that important.


This is pretty cool. I wish my algorithms professor had explained dynamic programming this way.


That's just what I was going to say.

My algorithms professor claimed not to have "programmed" anything since completing her undergraduate degree (apparently she didn't count MATLAB as programming). I've always thought that fact helped account for her odd explanation of "dynamic programming," which never once mentioned that it isn't a programming technique at all.


BTW, tree decomposition is a generalization of dynamic programming. In a general structured problem you may not have a nice table recursion as in the edit-distance example, and tree decomposition is what you get when you abstract away from tables. When the problem structure comes from the real world, as with probabilistic networks, computers are much better than humans at finding a good recursion. Here are some pictures illustrating tree decomposition: http://yaroslavvb.blogspot.com/2011/02/generalized-distribut...


Isn't this the standard way to explain and visualize dynamic programming?



