It's never clear to me when to count elements in an SF book as predictions, or merely plot devices for the story. Does the author mean to say "this is what the world will be like by this date, and by the way, here's a story set in that world," or rather "imagine a world with the following hypothetical changes... now, here's a story that could happen in that world."
I think you probably try to have it both ways. Like, if you discussed a hundred new technologies in your future SF story, and 99 of them didn't pan out, but one unlikely one was dead on, you could say "see? I predicted the fact that by 2041, fish can talk. Bit of a visionary, eh? All those other ones were just worldbuilding."
Not saying this is Vinge at all, just that it's a weird business, and hard to tell from the outside where storytelling ends and prediction begins.
Since you have the ability to set your story any time you want, if it's set on Earth in a near-future scenario, the year it's set in is at least a tacit prediction that that's the point at which you think certain advances are likely. Otherwise you'd just set it in 2050.
This doesn't seem accurate - the scenario needs to be imaginable, but that doesn't mean it's a prediction by the author. I could write a story about the downstream effects of discovering FTL travel in 2030, but that doesn't mean I think it's going to happen. Even for something more realistic - something that will probably happen on some timeline, just not right away, like a Mars colony - the author might not want to explore or think about the other technological or societal changes that would happen over a longer period. Fiction is just that: fiction.
Vinge's most important prediction wasn't in Rainbows End, but in A Deepness in the Sky.
The novel (part of the Zones of Thought series tialaramex mentions) depicts a human interstellar civilization thousands of years in the future, in which superluminal travel is impossible (for the humans), so travelers use hibernation to pass the decades while their ships travel between star systems. Merchants often revisit systems after a century or two, so they see great changes on each visit.
The merchants repeatedly find that once smart dust (swarms of tiny nanomachines) is developed, governments inevitably use it for ubiquitous surveillance, which inevitably causes societal collapse. <https://blog.regehr.org/archives/255>
I thought the emphasis in A Deepness in the Sky was that high-tech civilisations always collapse - ubiquitous surveillance often being part of that collapse?
Also, humans could achieve superluminal travel - but it's where they are (the Slow Zone) that creates this limit. Of course, it's fairly clear that the On/Off system is actually in a bubble of the Transcend...
I only started believing in the singularity this year. When AI which could write code was demonstrated to the public at large, it became a short mental leap, combining the tech with, say, genetic algorithms, to get to a runaway state. So either a singularity or a sigmoid curve which might as well be parabolic from our perspective?
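To make that singularity-vs-sigmoid distinction concrete, here's a toy recurrence - back-of-envelope only, with arbitrary constants: the same "capability feeds back into growth" loop, with and without a resource ceiling.

    # Toy model only: unbounded feedback vs feedback damped by a ceiling C.
    def runaway(i, k=0.5):            # i' = i + k*i -> exponential blow-up
        return i + k * i

    def sigmoid(i, k=0.5, C=100.0):   # logistic damping near the ceiling
        return i + k * i * (1 - i / C)

    a = b = 1.0
    for _ in range(30):
        a, b = runaway(a), sigmoid(b)
    print(round(a), round(b))         # a ~ 190000 (exploded), b ~ 100 (flat)

From inside the curve, the two look identical right up until the ceiling starts to bite - which is the "might as well be parabolic from our perspective" point.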
It looks like climbing a tree to reach the moon to me.
AIs like ChatGPT aren't writing new code; they're outputting code that follows from their inputs, which is code found on the internet. That's useful to an individual, but by its nature it can't go beyond what's already there.
If you hunt around on SO, you aren’t going to find code to generate an AI that can code a better AI. Because no one’s created that code. That’s how current AIs work too, so they aren’t going to be able to do it either, no matter how large and capable the models become.
The non-intuitive bit is how these models surpass their original programming. It's because they have a secondary source for learning: the feedback they get from generating solutions and testing them. Learning through feedback - be it reinforcement, evolutionary, or gradient-based - is what can push AI forward.
AlphaZero started with no human knowledge, learned through pure self-play, and beat all humans after a few days of practice. It learned from game outcomes as a feedback source. AI can learn from many things, not just from humans. But if it doesn't require humans, it usually requires massive computation, simulation, or search - maybe even real-world testing, which can get expensive.
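For a feel of how outcome-only feedback works, here's a minimal self-play sketch - emphatically not AlphaZero, just tabular learning on the toy game "21" (players alternately take 1-3 sticks from a pile of 21; whoever takes the last stick wins). Everything here is made up for illustration.

    import random
    from collections import defaultdict

    Q = defaultdict(float)        # Q[(sticks_left, move)] -> value estimate
    ALPHA, EPSILON = 0.1, 0.2     # learning rate, exploration rate

    def pick_move(sticks):
        moves = [m for m in (1, 2, 3) if m <= sticks]
        if random.random() < EPSILON:
            return random.choice(moves)                   # explore
        return max(moves, key=lambda m: Q[(sticks, m)])   # exploit

    def play_one_game():
        history, sticks, player = [], 21, 0
        while sticks > 0:
            move = pick_move(sticks)
            history.append((player, sticks, move))
            sticks -= move
            player ^= 1
        winner = player ^ 1       # whoever moved last took the final stick
        for p, s, m in history:   # the only teacher is the game outcome
            reward = 1.0 if p == winner else -1.0
            Q[(s, m)] += ALPHA * (reward - Q[(s, m)])

    for _ in range(50_000):
        play_one_game()

    # With zero human examples, the policy usually converges on the known
    # optimal strategy (take sticks % 4): from 6 sticks it should take 2.
    print(max((1, 2, 3), key=lambda m: Q[(6, m)]))

The point is that the training signal comes entirely from wins and losses the system generated itself - no corpus of human play involved.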
I disagree with this take, fundamentally, because innovation through iteration and innovation through cognitive leaps are both possible with algorithms that are complementary to the likes of ChatGPT. Adding mutation, recombination, and selection to the code of a system with a variety of neural net types available will allow them to develop increasingly rapidly.
In short, I don't think people are being imaginative enough about this stuff, given the number of additional ways in which it can be improved.
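As a concrete (if deliberately trivial) illustration of that mutate/recombine/select loop, here's a toy genetic algorithm maximizing a stand-in fitness function (count of 1-bits). In the speculation above the genome would instead encode programs or architectures; this is purely illustrative, not any real system.

    import random

    GENOME_LEN, POP_SIZE, GENERATIONS = 64, 40, 200

    def fitness(genome):               # stand-in objective: count the ones
        return sum(genome)

    def mutate(genome, rate=0.02):     # flip each bit with small probability
        return [b ^ (random.random() < rate) for b in genome]

    def recombine(a, b):               # one-point crossover
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]          # selection
        children = [mutate(recombine(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children

    print(fitness(max(pop, key=fitness)))        # climbs toward 64

Swap the fitness function for "does the generated code pass its tests?" and you get the flavor of the hybrid systems being gestured at here.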
Alternate take: Regulation should be turned up to 11 and AI should be nationalized so that the benefits go to society instead of a handful of billionaires
I don't think it should be nationalized. However, I do think the Treasury should just print the dollars and buy OpenAI, etc., with all profits going back to the American people.
I don't think Rainbows End was supposed to be aspirational... it felt more dystopian than anything else he's written, even the novels set (in whole or in part) in obviously/explicitly dystopian environments.
IMO all of Vinge's stories may be Singularitarian Catastrophes; you just need to look closely enough.
Although Vinge claims to have no idea, in Rainbows End Rabbit is obviously an AI. Though it didn't deliberately harm our heroes in the story, it is obviously an immediate danger to all the survivors: it was attempting to purloin the YGBM weapon, and even though that weapon (apparently) wasn't exported in a finished state, it would now be much easier for all the involved parties to re-develop - which is bad news even if Rabbit doesn't have it.
It's pretty obvious what the catastrophe is in the Zones of Thought novels, and for Tatja Grimm, well, to the extent Tatja herself isn't a catastrophe (she starts a major war just to get what she wants!), the whole reason Tatja is so different certainly counts.
I don't know about The Peace War though, haven't read those.
The Peace War is ambiguous: it leads into Marooned in Realtime, where civilization has disappeared, and the remnants that were sent forward in time don’t know why. It could have been a civilization-ending catastrophe, but not necessarily.
I've read the two novels more than once and don't see them as particularly predictive or aspirational.
The Peace War explores the effects of a takeover of society by the people who have just developed (and are the sole owners of) a radical new form of physics (the stasis fields, aka bobbles), and the ensuing counter-revolution (led by that society's equivalent of hackers). I don't think that Vinge was actually predicting that we would fight wars using stasis fields.
Marooned in Realtime is set so far into the future that it doesn't appear to analogise modern society or technology. To me, it's basically a framework for some good character-led interactions.
I've been wondering if a ring-shaped input device paired with AR glasses where you can see texts and typing suggestions could enable that.
With some innovation in keyboard UI and swipe+style typing (which I struggle to live without when I don't have it), that seems quite feasible, especially when paired with possible advancements in predictive text.
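The predictive-text half of that is conceptually simple - here's a minimal bigram sketch of the idea (real mobile predictors are far more sophisticated; this just shows the shape of it):

    from collections import Counter, defaultdict

    def train(corpus):
        model = defaultdict(Counter)
        words = corpus.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
        return model

    def suggest(model, prev, n=3):     # top-n most likely next words
        return [w for w, _ in model[prev.lower()].most_common(n)]

    model = train("the cat sat on the mat and the cat ran off the mat")
    print(suggest(model, "the"))       # e.g. ['cat', 'mat']

The hard part for a ring device is the input side - disambiguating tiny gestures reliably - not the suggestion side.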
I read this novel and didn't really care for it much. I thought the idea of completing massive projects with the fractional attention of tens of thousands of people was hard to take. Coordination costs and context switching are a large overhead for that type of thing, and the novel doesn't address that at all.
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. "
I'm reminded of the Paperclip game (https://www.decisionproblem.com/paperclips/index2.html). While the goal of this superhuman AI was to create as many paperclips as possible, a normal person living in this time would be completely oblivious to that goal and would continue to imagine a rich, fulfilling world around them, complete with values and dreams. Spoiler warning: in the end everything is obliterated for more paperclips.
We have no idea the direction things will go, and we maybe can't even use our prior reasoning to guess where this tech revolution will lead us, but I'm happy we're witnessing this in an open society where such tech is more or less available to all. It's hard to think of a more promising scenario than what we're currently faced with.
We will hit energy limits soon.
We are so used to building a new data center every 2 minutes to handle the ever-increasing number of 'how to boil an egg' videos, our cat pics, "free" email, "free" messaging, "free" broadcast streams, that it's taken for granted the free stuff will stay free and tech innovation will never stop growing.
It's very similar to 2008, when people believed housing prices would never fall. But the day is coming when we can't afford that next data center. It's better to prepare for that than for the singularity.
What exactly are these energy limits, why are you expecting us to hit them soon, and what is "soon"? As an ignoramus on this, it sounds to me like all those predictions that we'll soon run out of oil [0].
[0] "Petroleum has been used for less than 50 years, and it is estimated that the supply will last about 25 or 30 years longer. If production is curtailed and waste stopped it may last till the end of the century. The most important effects of its disappearance will be in the lack of illuminants. Animal and vegetable oils will not begin to supply its place. This being the case, the reckless exploitation of oil fields and the consumption of oil for fuel should be checked."
July 19, 1909 Titusville Herald
Thank you, that looks like a great book, and I'm impressed that they made it open access.
I'll try to find the time to go over it in more detail later, but from briefly skimming it now, I failed to find the "bottom line" saying how long we have until we run out of energy sources. It seems to be advocating that by moving to renewable sources we are good to go forever, at least unless population growth continues exponentially.
So with population growth declining across the world, and the gradually accelerating shift to renewables (and continuously improving tech), I don't quite see a big cause for concern. While of course it's true that there is some limit to how many GPUs we can run on Earth, I don't think we're anywhere close to that limit, and I don't see a particular argument against the option of seeding the rest of the universe with compute infrastructure too (other than that we should stop to think whether we should).
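To put very rough numbers on "anywhere close to that limit" (both figures below are ballpark assumptions that vary by source - treat this as back-of-envelope only):

    # Commonly cited rough orders of magnitude, not precise data:
    global_twh = 30_000     # global electricity generation, TWh/yr
    datacenter_twh = 300    # data centers' rough draw, TWh/yr

    share = datacenter_twh / global_twh
    doublings, x = 0, share
    while x < 0.10:         # how many doublings until 10% of supply?
        x *= 2
        doublings += 1
    print(f"{share:.1%} of supply; ~{doublings} doublings to reach 10%")

That prints roughly "1.0% of supply; ~4 doublings to reach 10%" - real headroom, but exponential growth eats headroom fast, which is the part worth watching.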
Same story with energy. Google gives us free stuff because someone pays. The moment the ad industry stops paying the bill, the next data center becomes unaffordable. The story is never about how much energy is available in the universe. The story is about amoebas overeating and blowing up.
Oh boy, you opened yourself up to the usual slew of "peak oil is a hoax", "renewables will save us", and "um, have you ever heard of this thing called nuclear?" You are a brave soul indeed.