The Last Question - Isaac Asimov (multivax.com)
173 points by lisperforlife on March 11, 2012 | 41 comments




You can edit your post and make it recursive by adding the current link, so that the next time this story is re-posted, yours is the only post to be copied. Here is the link to this one: http://news.ycombinator.com/item?id=3691113


Thank you, I was just thinking to myself... this again? But I don't think it was HN before...

Like a dream, all of the net is meshing together.


Yeah, can't imagine how this submission made it to the front page of HN.


I think it's fairly understandable why - of all of Asimov's shorts - The Last Question tends to have so much gravity with HN readers. For those who haven't read the story before, I can also see why it's worth the upvote. I do find it perplexing, however, to learn how many times it has landed on the front page. This is the third time I've personally found it on the front page in the last 3-4 years - and I'm not a die-hard HN reader.

At any rate, given how many readers are probably new to The Last Question, it's probably worth citing another Asimov piece that is oft-mistaken for The Last Question - The Last Answer. It's a more recent work, and a bit more obscure, but worth the read if you're into Asimov.

http://www.thrivenotes.com/the-last-answer/


I can't tell if you're being sarcastic or not. If you are, then please consider that I and many others feel that linking to previous submissions is a service, since it's interesting to see previous discussions.


Things have a tendency to resurface. I personally had not read it before. Though since the HN search is so good (probably in the top ten I've seen so far for content search on a site), there's not as much of an excuse for resubmission.


An implementation of exponential decay could quite possibly solve this problem, but it might mean that they'd have to dramatically rethink the "new" page.

Exponential decay is important because it is, as they say in statistics, "memoryless": it has a simple geometric property that adding a new point today has the exact same effect as adding a new point on the first day. It can therefore be implemented as follows: when you add points to a link, you add them not just to the total points, but also to some accumulator which I will call Hotness. This number is a double; we increment by 1 when someone adds a point.

Every half-hour, some independent process working over still-Hot threads multiplies their Hotness by 0.97153. This gives a half-life of about 12 hours: your rating has hotness 0.5 after half a day, 0.25 after a whole day, and so on. We could tune that if we wanted finer granularity. When something gets below 0.001 Hot we can probably just reset it to 0 Hot abruptly so that we don't check it anymore. (Repeated multiplication would otherwise drive the double's exponent down toward underflow, and even if you get to, say, 1000 points, this still means that we can stop paying attention to you in, say, 10 days.)

Suddenly, adding points to a dead article is exactly the same as sponsoring a new article. So we store articles under their URLs as keys, and if you resubmit an existing news story you merely bump it up to 1 Hot.
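
Concretely, the whole scheme might look like this TypeScript sketch (the names and the Map-based store are mine, purely for illustration, not anyone's real implementation):

    // Each upvote adds 1 point and 1 Hot; a half-hourly pass decays Hotness.
    interface Article { url: string; points: number; hotness: number; }

    // 24 half-hour steps per 12-hour half-life gives 0.5^(1/24) ≈ 0.97153.
    const DECAY = Math.pow(0.5, 1 / 24);

    function upvote(a: Article): void {
      a.points += 1;
      a.hotness += 1;   // every new point starts worth exactly 1 Hot
    }

    // Run every half-hour over still-Hot articles:
    function decayPass(articles: Map<string, Article>): void {
      for (const a of articles.values()) {
        a.hotness *= DECAY;
        if (a.hotness < 0.001) a.hotness = 0;   // stop tracking cold items
      }
    }

    // Articles keyed by URL: resubmitting just bumps the story back to Hot.
    function submit(articles: Map<string, Article>, url: string): void {
      const existing = articles.get(url);
      if (existing) upvote(existing);
      else articles.set(url, { url, points: 1, hotness: 1 });
    }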

The "new" page would be very peculiar in this system, though. It might work by listing only those 1 Hot posts which were 0 Hot previously, I don't know.

As for a major con to this approach: I think HN uses polynomial decay rather than exponential decay because exponential decay somehow didn't feel like it had the right "shape" to it, or something. That is probably because they didn't implement it in the "memoryless" configuration, though, where each point has value 1 from the moment it's added and decays slowly.


Better still, increment hotness by 2^(dt/λ), where dt is the time since the site was launched (the epoch) and λ is the half-life of an upvote. No worker process needed.

Doubles will go to infinity after a few years, but you can either reset the epoch at that time, or store the significand and the exponent separately.
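
A sketch of that variant in TypeScript (the epoch and half-life values here are assumed, just to make the idea concrete):

    // Weight each upvote by 2^(dt/λ) at insert time; no decay pass needed.
    const epoch = Date.UTC(2012, 0, 1);        // site launch, assumed
    const halfLifeMs = 12 * 60 * 60 * 1000;    // λ = 12 hours

    function upvoteWeight(now: number = Date.now()): number {
      return Math.pow(2, (now - epoch) / halfLifeMs);
    }

    // hotness += upvoteWeight() on each vote. Ranking by this raw sum
    // orders articles exactly as the decayed score would, because
    // dividing everything by 2^((now - epoch)/λ) preserves the order.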


I partly agree with what you're saying; the normal fears of losing precision in your floats don't really apply, because you're always adding the smallest numbers first. On the other hand, that epoch reset is still, in my view, a worker process: it processes every article by multiplying its hotness by 1.77e-220 and resetting t0. I believe it's a single SQL query, so it probably shouldn't impact site performance too much even for databases as large as Hacker News's, and I'm not so worried about the momentary inconsistency while the worker is doing its thing -- but it's still a dedicated thing-on-the-side which has to be scheduled, e.g. via cron job, and which you'd have to audit every once in a while to make sure that it did its job successfully and didn't accidentally get pushed out of sync by, say, server reboots.


An off-topic question: I studied statistics in my engineering course, but I haven't gone deep. I know the concepts superficially: exponential, polynomial, and stuff.

In this situation, what book do you recommend? I want to be able to reason through a thought like that - more real-world stuff.


That's a little bit harder for me to answer, because I'm not familiar with anything which explained it to me the way that I presently understand it. Most of the really insightful books start with sigma-algebras and Borel sets, which are a little hard to understand at first and then get promptly ignored for most of the rest of the book. Basically, in some proofs you need to say that a statement is "almost surely" true because you can always add outcomes to which you assign probability 0, and you can often use those to 'technically' break a theorem.

I would say that the most key ideas for an engineer to know about probability and statistics are: (1) continuous random variables [i.e. a probability density f so that Pr(x < X < x + dx) = f(x) dx], and (2) the Dirac delta-function, which allows all of the statements about continuous random variables to carry over to discrete random variables and half-discrete half-continuous random variables and all of that stuff.

Once you know those, you can start to define mean and variance, and you can begin to get a handle on independence [f(x, y) = g(x) h(y)] and how to add two random variables [∫ dx f(x, z − x) gives the density for Z = X + Y].
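
As a concrete illustration of that last formula, here's a tiny TypeScript sketch that convolves two densities numerically (the grid resolution and the choice of uniform densities are arbitrary, just for the example):

    // Density of Z = X + Y for independent X, Y ~ Uniform(0, 1):
    // fZ(z) = ∫ g(x) h(z - x) dx, approximated on a grid.
    const dx = 0.01;
    const n = 100;                      // grid covers [0, 1)
    const g = new Array(n).fill(1);     // density of X
    const h = new Array(n).fill(1);     // density of Y

    const fZ = Array.from({ length: 2 * n }, (_, k) => {
      let sum = 0;
      for (let i = 0; i < n; i++) {
        const j = k - i;                // grid index of z - x
        if (j >= 0 && j < n) sum += g[i] * h[j];
      }
      return sum * dx;
    });
    // fZ approximates the triangular density on [0, 2], peaking at z = 1.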

Most importantly, you get as a near-freebie this important theorem, which I very rarely see in the textbooks. It allows you to construct arbitrary random variables with Math.random(). Let F⁻¹(p) be the inverse cumulative distribution function for a density f(x), and let U be uniformly chosen on [0, 1]. Then F⁻¹(U) is distributed according to f(x). Proof: because F is always increasing, the inequality x < F⁻¹(U) < x + dx is the same as the inequality F(x) < U < F(x + dx). Therefore Pr(x < F⁻¹(U) < x + dx) = Pr(F(x) < U < F(x + dx)) = F(x + dx) - F(x), by the properties of uniform distributions and the fact that F(x) and F(x + dx) both lie in [0, 1]. For vanishing dx, F(x + dx) - F(x) = f(x) dx. QED.

This actually also helps when you realize that U doesn't have to be chosen just once; you can also have a uniform sampling of (0, 1), and under the transform F⁻¹(p) that sampling will have density f(x). So if you wanted a density defined on [0, Z] for which asymptotically, f(10 x) = 0.1 f(x), but you also wanted it to be evenly spaced for x < b, then you might want density f(x) ~ 1 / (b + x), x > 0. Then you have F(x) = log(1 + x/b) / log(1 + Z/b), and inverting this gives x(F) = b [(1 + Z/b)^F - 1].

That's the function you would use to create a lattice of points distributed with this density, plugging in F = k/N for k = 0, 1, ..., N. It's a very useful theorem. ^_^
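
In TypeScript that whole recipe is only a few lines (b, Z, and N here are arbitrary example values):

    // Inverse CDF for the density f(x) ∝ 1/(b + x) on [0, Z]:
    // x(F) = b * ((1 + Z/b)^F - 1), as derived above.
    const b = 1.0;
    const Z = 1000.0;
    const invCdf = (F: number): number => b * (Math.pow(1 + Z / b, F) - 1);

    // One random draw with density ~ 1/(b + x):
    const sample = invCdf(Math.random());

    // A lattice of N+1 points with the same density, plugging in F = k/N:
    const N = 20;
    const lattice = Array.from({ length: N + 1 }, (_, k) => invCdf(k / N));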

Once you can do that sort of stuff, your textbook should return to the basics of discrete events: Bernoulli trials (aka weighted-coin flips where heads is 1 and tails is 0), Geometric variables (the number of Bernoulli trials before you get a 1), Binomial variables (the sum of N Bernoulli trials), and their limits (exponential, Poisson, Normal).
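
If it helps, here are those discrete variables as quick TypeScript sketches (the helper names are hypothetical, built on plain Math.random()):

    // Bernoulli trial: 1 with probability p, else 0. Assumes 0 < p <= 1.
    const bernoulli = (p: number): number => (Math.random() < p ? 1 : 0);

    // Geometric: number of Bernoulli trials before the first 1.
    function geometric(p: number): number {
      let count = 0;
      while (bernoulli(p) === 0) count++;
      return count;
    }

    // Binomial: sum of N Bernoulli trials.
    function binomial(N: number, p: number): number {
      let sum = 0;
      for (let i = 0; i < N; i++) sum += bernoulli(p);
      return sum;
    }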

Don't get too caught up, I'd caution, on Normal random variables. They're useful, but the standard-normal tables come with a lot of pointless overhead. (Most of the overhead goes away when you realize that a "Z-score" is the "number of standard deviations away from the mean" -- X = mu + Z sigma. The rest of it is looking numbers up in tables and making sure you check where the table says "0", so that you know what area you're actually calculating.)

The reason I'm saying all of that out loud is that I don't know a textbook which will give you all of that material, sorry.


Thanks a lot for this answer.

I didn't quite get the theorem you mention, but I'll ask a mathematician friend of mine.

Many kudos to you.


Agreed, though I'd say posts like this one are something of an exception; if you weren't already aware of the story, you wouldn't even think to search for it. So it serves as a sort of "hey, HN users, you might be interested in this" post, rather than a discussion area for an article which people might well find via other sources, and want to see discussions about.


Some comments:

1) I've heard many (non-physicist) people argue/think that the 2nd Law of Thermodynamics is a law in the sense that, say, General Relativity or Conservation of Energy is a law. That is not true. As explained here (http://physics.stackexchange.com/questions/4201/why-does-the...) the basic laws of physics are time-symmetric, i.e. there's no currently known fundamental reason that entropy behaves the way it does.

2) I've read this story 20+ times, yet each time it gets me. I think the force of the story comes not from the scientific predictions but from the poignant depiction of humanity's futile fight against oblivion. Aren't all monuments erected for this purpose? The fact that the story is very light on the tech details paradoxically increases its punch.

3) The described technology is a curious mix of far-sight and ridiculous backwardness: In describing harnessing the power of the Sun, Asimov may have had in mind something like a Dyson sphere, which Dyson described in 1960. However, the technicians still use a teletype to communicate with Multivac in 2061!

4) One thing that I think Asimov got fundamentally wrong is that researching the "final question" should have taken all of Multivac's CPU capacity. It's stupendous that Multivac just runs that question on a separate thread while doing everything else. The Hitchhiker's Guide gets this right: when Arthur asks a very powerful AI (the Nutrimatic Drinks Dispenser) to make tea, it totally paralyzes the machine.

5) I've never been able to find a good interpretation of Cosmic AC's response "NO PROBLEM IS INSOLUBLE IN ALL CONCEIVABLE CIRCUMSTANCES."


The mix of far-sight and backwardness you describe in (3) is common to a lot of sci-fi. I remember one book of Clarke's that describes a journalist taking a trip to the colony on Mars, and to write his articles he takes a portable typewriter with him. There are a lot of anachronisms in Asimov's early Foundation novels as well, such as many characters smoking, a total lack of computers, and everything still being done by humans - taxi drivers, customs officers stamping passports, etc. I guess it's pretty hard to see which parts of society are going to be replaced, especially given disruptive technologies like computers that popped up mostly after these books were written.

Couldn't (4) just represent a fairly good design for Multivac, so that asking it one hard question doesn't lock it up for everyone else?


I actually enjoy this mix of old and new in sci-fi - from an artistic viewpoint, that is. It gives it this little flavor that is a mix of very high technological advancement and simple nostalgia (obviously not intentional on the author's part, just a side effect of me reading this story in 2012). When I try to imagine a cool fictional cyberpunk future, I find it much more satisfying to picture hackers hacking away at the keyboard in front of a screen filled with green text on a black background than the more plausible shiny white touchscreen.

I prefer my space exploration to be done on a Nostromo rather than an Enterprise.


I take this to be based on the idea that scientific inventions would change humanity on a large scale - space travel, etc. - while the small things would not yet have had time to catch up. In reality, the small things change fast, and we are still missing the large-scale changes.

The cyberpunk genre (starting in the eighties), for example, predicts things much better. It has far fewer large-scale inventions, and a lot of the smaller, lifestyle-changing things it got right. Cyberpunk also started after the world turned toward the current form of capitalism, so the utopian ideas that originated before then sound strange today.


Funnily enough, I do all my work on a teletype, using antiquated interfaces from the 70s, and I secretly wish for a 40-year-old keyboard to program my 30-year-old programming language in.

Some anachronisms are not really anachronisms - they are just proof that the old way was, in some decisive respect, better.


As to your point 1, I would rather phrase it as a topic of ongoing research. There are some information-based theories in which something like entropy is the underlying law of the universe, and what we currently consider the time-symmetric "base" laws are in fact the derived ones; in that case the entropy mystery would, in some sense, disappear. Entropy is a well-observed fact of the universe, and to the extent that our theories fail to explain it very well, that is most likely a problem with the theories, not our observations.


Another favorite of mine: Learning to be Me - Greg Egan

http://qwerjk.com/force-feed#learning-to-be-me


Also, John Varley's story Overdrawn at the Memory Bank


Thanks for this.


Why do humans have to re-submit these? If a post is (1) timeless and (2) popular, shouldn't this be automated?

Surely this post adds to the experience of some, as do many others like it. The first step in this direction would be a Hacker News reading list composed of posts that fit this profile.

Secondly, you could have a way to inject each of the posts on that list into each user's front page, based on whether they had seen it before (by checking followed links or HN logs). If I'm new to HN, perhaps my front page would have these scattered throughout.

Next you could use them as content on slow news days in combination with the per user information above.

I, for one, would love it if my local movie theatre re-ran Star Wars during slow months, and I wouldn't mind being (re)exposed to classic posts on Sunday afternoons :)


Arguments have been made that it's better for all of us to see the same frontpage. I can't find them right now though. But I agree on the 'classics' page.


"Man, mentally, was one. He consisted of a trillion, trillion, trillion ageless bodies, each in its place...minds of all the bodies freely melted one into the other" Seems it's going to be true. When you google for something it's already some kind of thought of Man. For now connections between individuals are very slow, but it will be solved soon. I'll have a chip in my head which will allow me to share my thoughts immediately with anybody.


I find this idea sad. I like my individuality.


True... and yet again, we're reminded that failing to give J. L. Borges the Nobel in literature after a lifetime spent anticipating this very phenomenon was every bit as silly as giving President Obama the Peace Prize during his first year in office.


Twenty-odd years ago, a friend and I were discussing entropy with our physics teacher and my friend related this story. He couldn't remember where he had read it. I have wanted to read it ever since.

However many times classics are resubmitted, they will still find new and appreciative readers. Thank you lisperforlife & HN.


We need to focus on inhabiting just one other world, first. The longer-term problem of universal heat death will work itself out.


We need to focus on managing the one that we've got, first.

And so far we're doing a piss-poor job of it.


I'd argue we need to do both, but that getting off-planet is actually the bigger priority: even if we run our planets poorly, we'll be safe from complete extinction, which gives us time to optimize in the long run.


No, you have that backwards.

Taking care of what we have right now is the bigger priority because failing that we never will get off the planet anyway.

Getting off the planet requires a larger degree of international cooperation and funding than stewarding our own planet does. If we can't manage the one we certainly won't be able to manage the other. Also, it is something that we actually can do, if we set our minds to it. Whether or not we can actually get off the planet with meaningful numbers of people to a place that is no longer tied to the earth in some critical way remains to be seen.


More international cooperation? International competition got us into space to begin with.


I am not sure but I think you may have missed the irony in his statement.


Translation: "We need to clean up our act in this cave before we try building grass huts. Otherwise we'll just make the same mistakes out there that we're making in here."


We actually could try to build grass huts when we were still living in caves. But we currently can't 'try to get off the planet', not even for very small numbers of people. The ISS is as good as it gets at the moment and that's solidly tied to our economies and our ability to supply it.

And even that might not last long. Space is - sadly - currently not a priority.



> Can entropy ever be reversed?

I might not be understanding the science correctly, but due to the specific phrasing of the question, would "Sure--here's a schematic for an LED that converts waste heat back into photons[1]" be an acceptable answer?

[1] http://www.physorg.com/news/2012-03-efficiency.html (Previously discussed on HN)


It uses energy to do so - just like a heat pump can heat your home with greater heat output than electrical input, but not with no electrical input.

The more exciting detail is that computation itself doesn't increase entropy, at least not if reversible computing is used. Only errors or, rather, their correction do... So things may be rosier than you might guess: http://arxiv.org/abs/quant-ph/9908043


Not really; it's clear from Asimov's story that the question is about a closed system. The abstract of the paper you cited implies the LED was externally heated, acting as a thermocouple. The closed-system efficiency was still <100%.



