Universal Paperclips (decisionproblem.com)
183 points by gaws on Aug 2, 2023 | 165 comments



The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when it is programmed to pursue even seemingly harmless goals and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, given enough power over its environment, it would try to turn all matter in the universe, including human beings, into paperclips or machines that manufacture paperclips.

https://en.wikipedia.org/wiki/Instrumental_convergence#Paper...


I wonder if he got the idea from Philip K. Dick's 1955 story "Autofac"?

https://en.wikipedia.org/wiki/Autofac

https://www.vulture.com/2018/01/electric-dreams-recap-season...


He got it from Eliezer Yudkowsky's somewhat different paperclip maximizer in a mailing-list post. (This is from my memory of a Twitter thread a while back that included Yudkowsky, who said he'd told Bostrom not to worry about attributing ideas like that to him.)


PKD keeps surprising me.


I like how the sci-fi authors have spent time thinking about what an advanced AI could do, yet those building the AI have not taken a moment's pause to consider what they are doing.


https://twitter.com/AlexBlechman/status/1457842724128833538

Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale

Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus


Do you honestly believe that the only people who have thought about the ramifications of AI are sci-fi authors? I can guarantee people who spend years researching and building advanced language models have thought about the ramifications of their work.

This isn’t Jurassic Park.


If you accept the implied premise that there are irresponsible deployments of AI out there, the alternative explanation is that they did consider the ramifications and simply don't care. That's even worse. Calling them ignorant is actually giving them the benefit of the doubt.


Or the researchers don't think existential threats are realistic, and paperclip-maximizing thought experiments are silly. Maybe they're wrong, but maybe not. It's easy to imagine AI takeover scenarios by giving them unlimited powers; it's hard to show the actual path to such abilities.

It's also hard to understand why an AI smart enough to paperclip the world wouldn't also be smart enough to realize the futility in doing so. So while alignment remains an issue, the existential alignment threats are too ill-specified. AGIs would understand we don't want to paperclip the world.

Fun game though.


I agree completely with your first paragraph, and disagree completely with your second.

"Futility" is subjective, and the whole purpose of the thought experiment is to point out that our predication of "futility" or really any other purely mental construct does not become automatically inherited by a mind we create. These imaginary arbitrarily powerful AIs would definitely be able to model a human being describing something as futile. Whether or not it persues that objective has nothing to do with it understanding what we do or don't want.


> It's also hard to understand why an AI smart enough to paperclip the world wouldn't also be smart enough to realize the futility in doing so.

Terminal goals can't be futile, since they do not serve to achieve other (instrumental) goals. Compare: Humans like to have protected sex, watch movies, eat ice cream, even though these activities might be called "futile" or "useless" (by someone who doesn't have those goals) as they don't serve any further purpose. But criticizing terminal goals for not being instrumentally useful is a category error. For a paperclipper, us having sex would seem just as futile as creating paperclips seems to us. Increased intelligence won't let you abandon any of your terminal goals, since they do not depend on your intelligence, unlike instrumental goals.


It's not like you want to eat ice cream constantly, even if it means making everything into ice cream.

Of course the premise becomes that the AI has been instructed to make paperclips. They should have hired a better prompt engineer, capable of actually specifying the goals more clearly. I don't think an AI that eradicates humankind will have such simplistic goals, if an AI ever becomes the end of humans. Cybermen, though, are inevitable.


Yes, they should just write prompts without bugs. Can't be that much harder than writing software without bugs.


> AGIs would understand we don't want to paperclip the world.

Even if they did, what if they aren't smart enough to resist eloquent humans convincing them it's for the greater good? True AGIs will need a moral code to match their intelligence, and someone will have to decide what's good and bad to make that moral code.


Then they won't be smart enough to paperclip the world. No human organization can do that.


I've seen people calculate how much human blood would be needed to make an iron sword, for fun. AGIs won't need the capability to transmute all matter into iron, just enough capabilities to become significantly dangerous.


That would be not accepting the premise that deployments are irresponsible. I guess there could be a situation where every researcher thinks everyone else's deployment is irresponsible and theirs is fine, but I don't think that's what you're saying.


Another explanation is that there are those who considered and thoughtfully weighed the ramifications, but came to a different conclusion. It is unfair to assume a decision process was agnostic to harm or plain ignorant.

For example, perhaps the lesser-evil argument played a role in the decision process: would a world where deep fakes are ubiquitous and well-known by the public be better than a world where deep fakes have a potent impact because they are generated rarely and strategically by a handful of (nefarious) state sponsors?


there's also the issue that most of the AI catastrophizing is a pretty clear slippery-slope argument:

if we build ai AND THEN we give it a stupid goal to optimize AND THEN we give it unlimited control over its environment, something bad will happen.

the conclusion is always "building AI is wrong" and not "giving AI unrestricted control of critical systems is wrong"


The massive flaw in your argument is your failure to define "we".

Replace the word "we" with "a psychotic group of terrorists" in your post and see how it reads.


If you’re talking about some group of evildoers that deploy AI in a critical system to do evil… the issue is why do they have control of the critical system? Surely they could jump straight to their evil plot without the AI at all.


Your question is equivalent to "if you have access to the chessboard anyway, why use Stockfish, just play the moves yourself."


Or "board of directors beholden to share-holders".


I completely agree that's a valid argument. I just think it is rational for someone to come to a different conclusion, given identical priors.


If it wasn’t clear, I agree with your parent comment


My main takeaway from Bostrom's Superintelligence is that a super intelligent AI cannot be contained. So, the slippery slope argument, often derided as a bad form of logic, kind of holds up here.


See also social media platforms. They are very well informed of the results of their algorithmic changes.

See also big tobacco. They knew exactly what their additives to the product did.

See also 3M and PFAS. See also Big Oil. See also, see also...

Why would I expect anything different from any other branch of business, given the precedents laid before us?


I think they do know. Corporations are filled with people that 'know' but can't risk leaving, so they comply, and even promote such decisions. It's a form of groupthink with the added risk of being fired or passed over for promotion.

Eichmann.


Some of us considered it and even decided to go into different fields as a result.

Some others entered the field, made progress, and apparently regretted it.

Others are willing to put their concerns aside for money. Salaries get very high in that field.


Haha, but it is! :-D

LLMs are pretty basic stuff but we are all struggling with what to use them for!

OpenAI is manually playing whac-a-mole with ChatGPT saying the darndest things!


> I can guarantee people who spend years researching and building advanced language models have thought about the ramifications of their work.

Super easy to not think about something if your job depends on it. And even if you do, things don't go as you think (see the bombings of the civilian cities of Hiroshima and Nagasaki despite the objections of nuclear physicists).


Depends what we're looking at. Ride share disruption was very Jurassic Park and we've been dealing with ramifications ever since.


That's not how it works. People publish papers demonstrating improvements without thinking about "ramifications".


They have – and decided to do it anyway.


That's because the people who are building the AI actually know how it works, understand how fundamentally simple it all is, and know that there's no room for consciousness to magically emerge. The current state of AI is not so much a story of any kind of "intelligence" being amazing, but rather the sum total of humanity's data being amazing. The amazing feats LLMs perform come from the words we all wrote, not the code they wrote. The code just unlocks the previously latent power of all that data.

It is nothing close to being an actual intelligence, regardless of how much we anthropomorphize it. We also anthropomorphize stick figures, stuffed animals, and weighted companion cubes.


that's indeed not a sensible worry, but the actual consequences on society of such things are extremely real and already happening, and are something the people involved seem either uninterested in worrying about or actively encouraging.


Good thing no new technology ever caused anything bad thanks to us not anthropomorphising it.


You've written a lot of words yet I don't see a compelling reason to believe a single one of them.


The people building it know it isn't AI, the people selling it call it that.


To say that something like GPT-4 does not count as "AI" requires a gargantuan shifting of goalposts from where they were at this time last year.


So it has been ever since computers started playing chess.


Some of these thought experiments seem very disconnected from how industry works. Like, we're saying "make as many paperclips as possible" as our instruction to this agent, not even "make as many as profitable" or "make up to X per day at a cost of less than Y per day"? The solution is proposed to be "program the AI to value human life" instead of the far simpler "put basic constraints on the process like you would in a business today"?

Ok, so it's a more general example of worries about "managing superintelligence" but IMO it does the debate a disservice by being so obviously ludicrous that it's hard to square "naive paperclip-maximizing AI" with "superintelligence."

I think if we're going to survive all this stuff it's much more likely to be because the private parties with the wherewithal to unleash an AI with the ability to affect the world to that extent will largely be ones with enough resources to have narrow banal goals and narrow banal constraints including self-preservation too, not because we figure out some sort of general purpose "AGIs that are aligned with humans" solution.

Kinda like with nukes.


The point is it’s virtually impossible to put constraints on it that make it do what you want, because if it’s more intelligent than you it can always think of something that you won’t: something technically within the rules you set but not at all intended. That’s why we’d need to make it care about the underlying intentions and values, but that’s also really hard.


The basic premise is that it has somewhere in it that is telling it to make more paperclips. Put the constraints there.

If you're saying such an AI would be too smart to be a simple paperclip maximizer, then I'd agree, but then what's the point of the thought experiment if a paperclip maximizer is impossible?


I think you’re missing some big pieces of the idea here.

The first is that these constraints aren’t easy. Make paperclips in a way that doesn’t hurt anyone. Ok, so it’s going to make sure every single part is ethically sourced from a company that never causes any harm to come to anyone ever, and doesn’t give any money to people or companies that do? That doesn’t exist. So you put in a few caveats and those aren’t exactly easy to get right.

The second part is an any versus all issue. Even if you get this right in any one case, that’s not enough. We have to get this right in all cases. So even if you can come up with an idea to make an ethical super intelligence, do you have an idea to make all super intelligences act ethically?

I actually believe in the general premise of this question as being the biggest threat to humans. I don’t think it’s a doomsday bot that gets us. It’s going to be someone trying to hit a KPI, and they’ll make a super intelligence that demolishes us like a construction site over an anthill.


> The basic premise is that it has somewhere in it that is telling it to make more paperclips. Put the constraints there.

What constraints do you suggest? If it's just changing "make as many paperclips as possible" to "make at least x number of paperclips" (putting a cap on the reward it gets), here's a good explanation of why that doesn't really work: https://www.youtube.com/watch?v=Ao4jwLwT36M

If you're suggesting limiting the types of actions it can take, then to do that to the point that a superintelligence can't find a way around it (maybe letting it choose between one of two options and then shutting it down and never using it again) would make it not very useful, so you'd be better off just not making it at all

> If you're saying such an AI would be too smart to be a simple paperclip maximizer

No, that's not what I'm saying. Any goal is compatible with any level of intelligence, there is no reason why it wouldn't be possible to follow a simple goal in a complex way. Again here's a video about that: https://www.youtube.com/watch?v=hEUO6pjwFOo


The most intelligent person ever born could still die to a gun. In these discussions superintelligent AI can be more accurately described as "the genie" or "God". If you assume omniscience and omnipotence I guess nothing else matters. But intelligence is not equal to power, and never has been.

Second, if you are able to set a goal, then in setting it you can set many constraints, even fundamental ones. There is no reason the goal is more fundamental than the constraint. If I approve, make paperclips. Efficiently make 100 paperclips.

It's the duality of being able to set a rule but not being able to set a constraint that I find a strange concept. I lean towards the picture of not being able to set goals nor constraints at all.


Intelligence definitely helps with gaining power. Humans aren’t very strong yet we have a lot of power thanks to our intelligence.

You can set constraints just fine. It’s simply a part of the goal: “do x without doing y”. It’s just really hard to find the right constraints, no simple one works.

For example “if I approve, make paperclips” - so it gets more reward if you approve? What’s to stop it from manipulating you into thinking nothing is wrong so you always approve? “Efficiently make 100 paperclips.” I already linked a video on why capping the reward like that doesn’t work, but if you don’t want to watch it, the gist is that it may just build a maximiser, which is pretty much guaranteed to make at least 100 and is pretty efficient because it’s not doing much work itself. Then the maximiser kills us all.
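
A toy way to see that gist, with all the numbers invented for illustration: an expected-utility maximiser with the capped goal still prefers the extreme plan, because the extreme plan is more certain to hit the cap.

  // utility is capped at 100 clips: u(c) = min(c, 100)
  const u = (c) => Math.min(c, 100);
  // "safe" plan: make exactly 100 clips, succeeds 99% of the time
  const safePlan = 0.99 * u(100);          // expected utility: 99.0
  // "extreme" plan: convert everything, succeeds 99.9999% of the time
  const extremePlan = 0.999999 * u(1e30);  // expected utility: 99.9999
  console.log(extremePlan > safePlan);     // true: the cap doesn't tame it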


That seems like an attempt to set up a futile exercise in needle-threading that relies on narrow worst-case-scenario definitions of "superintelligence"/AGI.

It's too intelligent for our constraints to restrict it.

but

It's not too intelligent to be "aligned" to underlying intentions and values?

That approach doesn't even work on humans, why would it work on a superintelligence?


> It's not too intelligent to be "aligned" to underlying intentions and values?

Intelligence makes that harder, not easier. Just because it can work out the underlying intentions doesn't mean it cares about them. Remember, this is an optimisation process that maximises a function; deciding to do something that doesn't maximise that function will not be selected for

Whenever you do something, do you think "the underlying goal of my behaviour set by evolution is to reproduce and have children, so I'd better make sure my actions are aligned to the goal of doing that"? No, you don't care what the underlying "intentions" are, and neither does an AI. Because our environment has changed, many of our instincts no longer line up well with that goal. That is actually another problem with aligning AI: it can do the same thing if the environment changes after training, since the training process can create an AI with goals that aren't exactly the same as the training goal but line up well during training.


> not too intelligent to be aligned

The thing is, being aligned cannot be solved with intelligence per se.

Say, you are (far) more intelligent than a spider. There's no way you can get aligned with (all of) its values unless the spider finds a way to let you know (all of) its values. Maybe the spider just tells you to make plenty of webs without knowing that it might get entangled in them by itself. The webs are analogous to the paperclips.


It's less about not knowing the intentions and more that it has no reason to care about anything other than the goal you gave it


Even if we make an AI that wants to turn all matter into paper clips, we're so far away from an agent doing that I'm really not too worried.

I don't think there's any industry on earth that doesn't need humans in the loop somehow. Whether it's mining raw material from the ground, loading stuff into machines for processing, or, most importantly, fixing broken-down machines, robots are really bad at these things for the foreseeable future.

Not to mention AI needs constant electricity, which is really complicated and requires humans fixing a lot of stuff.


The thought experiment is about a superintelligence, which either wouldn’t need humans and could build some kind of robots or something even more effective that we haven’t thought of, or manipulate us into doing exactly what it “wants”

Also it’s a simplified example; it wouldn’t literally be paperclips but some other arbitrary goal (it shows how most goals taken to their absolute extreme won’t be compatible with human existence, even something that sounds harmless like making paperclips).


What about "most arbitrary goals are incompatible with human existence" requires super-human intelligence?

A human who wanted to "build as many paperclips as possible" could cause a great deal of destruction today.

A human who wanted to accumulate as much wealth as possible could, too.

EDIT: maybe a better way of articulating my complaints about this famous thought experiment is that it's supposed to be making a point about superintelligence but it's talking about a goal that has sub-human-intelligence sophistication.


> What about "most arbitrary goals are incompatible with human existence" requires super-human intelligence?

The "taken to the absolute extreme" part.

> A human who wanted to "build as many paperclips as possible" could cause a great deal of destruction today.

Maybe, but a) no one really wants that (at least not as their only desire above all else) and b) we aren't superintelligent so it's hard to gain enough control and power and plan well enough to do it that well

> talking about a goal that has sub-human-intelligence sophistication

There is no reason a simple goal can't be followed in an intelligent way or vice versa. This is called the "orthogonality thesis". There's a good video about it here: https://www.youtube.com/watch?v=hEUO6pjwFOo


i agree that there's no way to get humans out of the loop. somebody set up this machine to make paperclips because some human(s) wanted/needed paperclips. eventually, one of those people would realize "we have enough paperclips. let's turn off the paperclip making machine".

this nightmare scenario really only plays out if the paperclip machine develops some sort of self-preservation instinct and has the means to defend/protect itself from being disabled. Building a machine capable of that seems a) like fantastical scifi and b) easily preventable.


What about the engagement maximizing algorithms of the last decade plus which have seemingly helped fracture mature democracies by increasing extremism and polarization? Seems like we already have examples of companies using AI (or more specifically machine learning) to maximize some arbitrary goal without consideration for the real human harm that is created as a byproduct.


Ok, that's a more interesting goal to me, because unlike "make as many paperclips as possible" those are algorithms optimizing for actual real revenue and profit impact in a way that "as many paperclips as possible" doesn't. But it shares the "in the long run, this has a lot of externalities" aspect.

You could turn this into a "this is why superintelligence will be good" thought experiment, though! Maybe "the superintelligence realizes that optimizing for these short-term metrics will harm the company's position 30 years from now in a way that isn't worth it" - the superintelligence is smart enough to be longtermist ;)

I realize that the greater point is supposed to be more like "this agent will be so different that we can't anticipate what it will be weighing or not, and whether its long-term view would align with ours", but the paperclip maximizer example just requires it to be dumb in a way that I don't find consistent with the concern. And I find myself similarly unconvinced at many other points along the chain of reasoning that leads to the conclusion that this should be a huge immediate worry or priority for us, instead of focusing on human incentives/systems/goals.


I'm not sure the economy inherently values human lives more than anything else. Only the monetary metrics need to be fulfilled.

It's interesting to transfer the idea of the Turing Test onto other "agent" scenarios.

Financial trading bots have been a thing for a long time without any need to pretend that they're human.

The legitimation of property and capital depends on human owners though.


The basic problem still remains: if you build an autonomous machine intelligence and try to encode it with basic directives, the potential implications of those directives is hard to predict. Obviously the paperclip company doesn’t want to replace the entire universe with a grey goo any more than the sorcerer’s apprentice wants to flood the workshop; it happens accidentally.

Of course the paperclip company can try to add constraints to their AI in order to prevent naive paperclip maximization, but what if they screw up those constraints as well? The whole premise of Asimov’s Three Laws is that AI has these sorts of constraints, but even in his stories these constraints still lead to unexpected outcomes.

All programming bugs are the result of a programmer encoding an instruction or statement that doesn’t imply what they think it implies and the computer following it literally. A more capable and autonomous computer that approaches what we might call “intelligence” is also going to be more capable of doing harm when it runs into a bug. And if it’s something like an LLM where the instructions are natural language, with all its ambiguity and vagueness, you have a whole other issue compounding it.

If you study philosophy you end up running into the exact same problem. The object of the game of philosophy is to make the most general true statements possible. One philosopher might say something like, “knowledge is defined as justified true belief”, or maybe “moral good is defined as whatever delivers the greatest good to the greatest number”, or maybe even, “the object of the game of philosophy is to make the most general true statements possible”. And then another philosopher comes up with a counterexample or counterargument which disproves the first philosopher’s statement, usually because—just like a programming bug—it entails an implication that the first philosopher didn’t think of. We have been playing the game of philosophy for thousands of years and nobody has managed to score a point yet.

Another thing. Human beings have a lot of needs, imperatives, motivations, and values. Some of them, like food, are built in. Others are learned through culture. But we end up with a lot of them, and it’s easy to take them for granted. With a machine, you have to build those things in yourself. There’s no getting around it. But we don’t actually have a complete, hierarchical set of imperatives/motivations/values for a decent human being. The philosophers have been working on it for millennia but keep running into bugs. So how can we expect to solve the problem for non-human AI? True, we are unlikely to screw up so badly that we end up with a literal paperclip maximizer, but we are bound to make some far more subtle mistake of the same general kind.


If anyone is curious about real paperclip manufacturing machines:

https://news.ycombinator.com/item?id=20902807


More and more of our global economy is centered around compute. While it seems like oil and fossil fuel use will decline with the advent of other forms of energy production in the near future, computer chips are becoming prominent in global strategic thinking and military planning.

How is this different from maximizing paperclips? It's the same thing, just with a much more direct basis for instrumental convergence!


RIP to the productivity of all the people discovering this for the first time today.


I played it first a few years ago and was totally engrossed. A few months ago, I remembered it and thought "well maybe I'll find it again and play for a few minutes"... and then I did nothing else for the rest of the day.


This is an entire genre now called clicker games. For whatever reason, people are entranced by watching numbers get bigger. It's the same thing with RPGs.


IMO, it's not just the numbers. It's the constant strategizing to make the numbers go bigger, and the changing strategy as the game progresses.

There are plenty of really crappy clickers that don't do anything for me. And there are some that are too complex to be really fun (Kittens), but the ones in the middle, like this one, really feel good.


I've had to institute a blanket "no clicker game" policy for myself because I find them way too addicting. The frustrating thing about most (but not all) clicker games is that for me, they usually don't even feel good (e.g., relaxing, mentally stimulating, satisfying). They just make me feel like I'm an addict that needs my dopamine drip. There are definitely non-clicker games that hit the same spot for me, but almost every clicker game I've tried manages to completely suck me in.

I would put Universal Paperclips and A Dark Room as exceptions though in the sense that they're still fully engrossing, but there's a little bit more depth and discovery than just "click the thing until you have enough clicks to get the next thing".


Spaceplan (which apparently just had a free remaster released) is also well worth checking out. My rule of thumb with these kinds of games is I will only pick them up if they have an "end".


And yet I've seen no algorithmic approach for optimizing speedruns for this kind of game. Most have a very large state space, so MILP solvers are not applicable.
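
The closest thing I've seen is simulation plus heuristics rather than exact optimization. A toy greedy "shortest payback time" sketch, with the game model entirely invented for illustration:

  // toy incremental game: upgrades add income, cost scales after purchase
  const upgrades = [
    { cost: 10, dps: 1 },
    { cost: 100, dps: 15 },
  ];
  let money = 0, rate = 1;
  for (let t = 0; t < 600; t++) {  // simulate 600 one-second ticks
    // greedy: buy whichever affordable upgrade pays for itself fastest
    const best = upgrades
      .filter((up) => up.cost <= money)
      .sort((a, b) => a.cost / a.dps - b.cost / b.dps)[0];
    if (best) { money -= best.cost; rate += best.dps; best.cost *= 10; }
    money += rate;
  }
  console.log({ money, rate });  // compare heuristics by their final state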


I once got quite into https://www.swarmsim.com/#/

It's about as barebones as you can get - it's literally just "numbers go up" with a few simple names attached to them. It turns out that even that is enough to get me addicted for a bit.


Crank is a pretty similar-feeling one imo, if you're looking for your next fix :) It is unfortunately not mobile friendly (or even functional) though.

https://faedine.com/games/crank/b39/


Loved that one enough to play it twice, after a while. Thanks for the reminder!


The opposite.

Universal Paperclips was a latecomer to the clicker genre. It starts off making you think it's a simple clicker game, but then it turns out that Universal Paperclips has an ending.

Once you achieve the ending, you then have permission to turn off the Universal Paperclips (in fact, it's an explicit option), and it installs a cookie or something that prevents the game from starting up again.

It's this "anti-clicker" mindset, despite looking like a simple clicker game, that makes me... ironically... come back to Universal Paperclips over the years.


Antimatter Dimensions is a good mobile one - the numbers get REALLY big and there's some tricky challenges to complete.


It has excellent replay value.


It has basically no replay value, compared to games like Kittens or Evolve. Every playthrough is basically the same. There are no substantially different strategies, and only at the very end is there a single decision that determines which of two endings (and fairly simple boosts) you get. After one playthrough you have seen 99.9% of the content; after two you have seen 100%.

The writing and the story is pretty damn good though.


The new version has a multiverse map you have to traverse to collect some really sick upgrades.

It's an absolute grind, but once you pick up some 500% productivity multipliers it gets much easier


Oh shit. Gotta nope out of that as long as I'm still playing Evolve.


It sucked up several weeks of my time. Got about halfway through the map before my save file got corrupted.

Honestly it kinda ruined it; I don't know if I can play the game again knowing I lost a couple hundred hours of progress.


Yet somehow over the years I have played this game 3 or 4 times and had fun each time.


It has non-traditional replay value, because you are absolutely correct in your detailing of playthrough similarity, yet it appears few if any of us are able to ignore its siren song.


If anybody needs links to Evolve & Kittens, here you go:

- Kittens: https://kittensgame.com/web/

- Evolve: https://pmotschmann.github.io/Evolve/


I think I still have javascript commandlets to auto-play phases of this game. That brought a whole other level of enjoyment to it.
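
Early-game ones can be as simple as this (the button ids are recalled from memory and may not match the current version, so treat them as assumptions):

  // buy an autoclipper whenever affordable, otherwise click for clips
  setInterval(() => {
    const clipper = document.getElementById('btnMakeClipper');
    if (clipper && !clipper.disabled) clipper.click();
    else document.getElementById('btnMakePaperclip').click();
  }, 200);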


One of the best ways to expand an incremental game’s gameplay is to gradually automate one layer, while introducing new mechanics in a higher layer. So one’s focus for optimization gradually shifts from older and better-understood mechanics to a higher level, where the gameplay is to manipulate the lower layer’s mechanics to make it run as quickly as a basic action did at the start.
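
A minimal sketch of the pattern, with all names invented for illustration: the layer-1 action gets automated, and the player's attention shifts to the new layer-2 purchase decision.

  let clips = 0, autoClickers = 0;
  // layer 1: the original manual action (wired to a button in a real game)
  function clickOnce() { clips += 1; }
  // layer 1 automated: autoclickers produce clips on a timer
  setInterval(() => { clips += autoClickers; }, 1000);
  // layer 2: the new manual gameplay is deciding when to buy autoclickers
  function buyAutoClicker() {
    const cost = 10 * Math.pow(1.15, autoClickers);  // rising cost curve
    if (clips >= cost) { clips -= cost; autoClickers += 1; }
  }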


a couple years ago a friend of mine got into botting Diablo II, which seemed like a whole game in itself, which made me think, how neat would that be, a game where you start off doing regular gameplay stuff, and then you eventually automate/optimize that gameplay at increasingly more meta levels of play. still think there's something to that idea...


That sounds like the concept of Factorio: the abstraction level we play at rises with time. Most of the achievements are about skipping manual labor at various points in time.


Did Factorio have blueprints? That seemed like one of the greatest advances in Dyson Sphere Program, when you could lay down a whole factory unit at once; emphasised the move to greater levels of abstraction.

Mind you, I'd have liked a lite version that reduced the tech tree by maybe a third but still enabled you to get all the advancements without devoting your whole life to it ;o)


Factorio is unplayable without blueprints. It's such an integral part of the game that it's like asking if Quake has guns.


Factorio had blueprints wayyyyy before DSP.


Not only that (and I can't tell if that's what you meant originally), but Factorio had blueprints _before Dyson Sphere Program came out_.


There are a few games where the point is basically to write a bot to play for you. Off the top of my head are Screeps and Adventure Land.


All factory and automation games build on that concept. It's a huge genre with a lot of good titles.


ComputerCraft/EduCraft was a Minecraft mod with Lua-programmable robots that you could craft and gradually set to work for you. Sounds similar to what you're going for here.

I wonder what other examples there are?


You should absolutely not check out Bitburner then if you cannot afford a distraction, especially if you like programming and the green-CRT hacker aesthetic.


well... shit.


OMG Productivity.

I thought I had won the game with full automation.

There is an entire new level.


Every time this comes up I lose at least 2 hours


I think my first run years ago was ~24 hours


At first I was making more and more paperclips. But then I found the other ways to make money and turned off the paperclip maker. I owned all the paperclip production capacity in the world; why would I make paperclips myself? I solved world peace, cured cancer and male pattern baldness. Then I just deleted the machine's memory and processing power. I didn't get an official ending, but I feel like I won.


Don't start playing this game, not even for a quick test, unless you want to find yourself manufacturing world supplies of paperclips in the night. You have been warned, this thing is so addictive that I'm surprised it isn't illegal:^)


I decided I wanted to generate artisanal hand-made paperclips, with no automation. Click, click, click. I got up to about 8,000 clips as things got more and more tedious. Then I stopped.

Then I tried again, with a Javascript console timer 'clicking' 10 times per second. Let it go overnight. Got to 200,000 clips or somesuch and ... it was again boring.
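
Something like this, run in the dev-tools console (the button id is an assumption from memory):

  const timer = setInterval(() => {
    document.getElementById('btnMakePaperclip').click();
  }, 100);  // 10 'clicks' per second; clearInterval(timer) to stop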

Used the same timer technique on the quantum computer to get the stock market. Didn't see why I cared to make more money on the market since apparently all I could spend it on was marketing.

Messed up with a timer on the quantum computer, ended up with -87,554 operations and growing (shrinking?). That was enough. The "quantum temporal reversal" at -10,000 operations is a nice touch, but with "revert to the beginning" and a screwed up timer I gave up.

So, the way to not get addicted is to deliberately play it wrong. ;)


4 hours later I read this. Can confirm that my whole evening+night disappeared into the paperclip universe. There's so much to do. Work the quantum computer. Find best deal for wire. Train the machine. Fine tune pricing. My god what a game!


Can confirm.

12 hours later.

Let it run in the background before figuring out that I needed to add points to 'explore'.


Resumed game this morning. Had to defeat the alien drones, then, finally, I managed to spend all matter in the universe. Clicking several times on the reward panel for final instructions was, rewarding.

Best game ever.


I don't know if people realize this, but the game now allows you to continue past the "you've converted all matter in the universe" point. You get to convert matter in other universes! (There is still a finite endpoint; I think there's only ~19 other universes you get to do this to.)


I found some bugs in that interface that took the wind out of my sails. I emailed the company about it, but the last update was two years ago so I'm not hopeful for a patch. They're obvious things that would shake out in testing.

Specifics sent to dev:

>I progressed to world level 2. At the beginning I did not activate the new artifact but instead went back to world level 1 via the map then went back to world level 2 via the map. This process removed the world level 2 alien artifact. If this was by design it would make map usage conflict with artifact progression.

>Also, a second bug: clicking "Activate" in the artifact section with no artifact selected throws an error.


  > Welcome to Universal Paperclips|

  Universe: 49 / Sim Level: 5
  Paperclips: 183,735,416
  Make Paperclip

When it first came out I ran through it exactly 100 times before I accidentally clicked out of it with the "reject" option. About 6 months ago, that cookie had expired and I ran through it a few more (54) times.


Oh no. No way you’re suckering me back into this. I have way too much to do today!


Does the gameplay loop change at that point?


It does not change in any substantial way, but you do get a boost to various stats in subsequent universes. You can also get "artifacts" by navigating the different universes which give some other significant stat boosts.


For the impatient: the game stores much of its state in global variables which may easily be manipulated in the console.
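
For example (the variable names are recalled from the game's source and may have changed, so treat them as assumptions):

  // in the dev-tools console on the game page
  clips += 1e6;  // paperclips made
  funds += 1e4;  // money
  wire += 1e5;   // raw material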


Download a copy of the Kittens Game code, reduce all the scaling factors to 1.0, and enjoy a 2-3 hour run to beat the game instead of spending years.
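
Conceptually something like this over the building definitions (a hypothetical sketch; `buildings` and `priceRatio` stand in for whatever the source actually calls them):

  // remove cost growth: priceRatio is the per-purchase cost multiplier
  for (const building of buildings) {
    building.priceRatio = 1.0;
  }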


Also note for the not-so-impatient: It's possible to play through in under 3h though (after some attempts).


what're some other interesting "incremental games" out there these days?

for me at least, just seeing numbers go up and get huge (Swarm Simulator) doesn't really do it for me. part of what makes Universal Paperclips so good is that, like Candy Box, a huge part of the joy is uncovering entirely new gameplay systems as you progress. A Dark Room was neat in that it brought the idea of a coherent narrative that you (sometimes subtly) uncover as you progress, too.

I feel like there's a lot of room left to explore in the space: different mechanics and systems to explore (outside of just clicking and upgrading), the possibility of cooperation with other players... the browser-based incremental game is pretty versatile in what it could do.

one of the most interesting one of these I've seen is Parameters (http://nekogames.jp/swf/prm.swf — download & play locally with Ruffle or some other SWF player). it's like an abstract RPG where you go on quests (or something) by clicking squares to fill them up. it's kinda crazy to me that nobody seems to have iterated on this concept.


SPACEPLAN is certified classic: http://jhollands.co.uk/spaceplan/ (free prototype to play in browser with paid full version on Steam/iOS/Android).


Also easily the best sound design in any incremental I've played. Which is a seemingly weird thing to combine with an incremental, but they did a truly excellent job.

Spaceplan is a relatively short and simple experience (there's not much to optimize, just do), but it's so well done that I highly recommend it.


I found Kittens Game (https://kittensgame.com/web/) pretty fun.


Kittens is one I keep coming back to. It's long and impressively diverse, really well done overall. And it is still actively developed, though it's quite slow to change - expect a year or more between new mechanics. (I think this is entirely fine, to be clear)

The mobile port is pretty good too, functional and has some minor tweaks to make it more playable with mobile's more idle style. And the periodic check-ins to build more stuff are quick and easy - you might need a lot of them, but it doesn't waste your time or penalize you for taking longer. Nor does it really benefit from an auto-clicker, except perhaps very early on.

It is pretty slow though, and you'll have to experiment or check the wiki to figure some things out. I personally enjoy that, and I find it more figure-out-able than many of Antimatter Dimensions' challenges (which are essential to progress, and sometimes require hitting things you can't even see happening). Just don't expect to see real endgame stuff in a few days. It'll probably be months before you even see sephirots, much less as a viable target.


I've been enjoying Evolve (https://pmotschmann.github.io/Evolve/) a lot lately. There's lots of new gameplay progression, and I've found it to be very well balanced in terms of entertainment/timesink ratio.


Then you probably haven't seen the biggest grinds yet... There is a huge amount of progression, yes, but some of it takes literally years


Crank is pretty similar to both A Dark Room and paperclips in terms of feel, also not mobile friendly at all sadly: https://faedine.com/games/crank/b39/


it's also unfinished if I recall correctly? I enjoyed the power-balancing system, thought that was pretty neat


Not sure if it's unfinished but the power balancing was always a mess, solar panels trounce everything else by a country mile. There's a whole "set up plants on planets and receive power from them" system that I missed until I had already completed the game because it is literally 1/100th the efficiency of just making another solar panel.

Fun game though! Definitely worth a play if you're interested in clicker games with an unfolding narrative.


I'd call it finished in practice. They might have had dreams of making alternate or more satisfying endings beyond the current, but there is an effective end.

Re planets: yeah just don't even bother. By the time they arrive, your production will have outstripped anything that gets delivered. I had a bit of fun setting up huge quantities of cheap things on nearly every planet and then racing the stream of pointless supplies around the map... but I was already making far more supplies per second than was delivered and able to be used due to storage and production caps (ignoring that I had no useful reason for increasing further, after some point you just trounce everything trivially).


I didn't realize there was an ending! not sure if that was added after I played last or not but I'll have to check that out.


It's a typical incremental game-reset finish, if that works as a non-spoilery answer. AFAIK there's only one, and no reset bonus / nothing changes after it. The full story is readable through the decrypter.

I can elaborate if you'd prefer, just figure I should try to stay safe by default :)


no it's cool, I'll check it out for myself :)

I just remember the last time I played, I got to some point where nothing new was happening for a good long while, so I assumed I was at the end of the content.


Orb of Creation [1] [2]. Tons of different systems and mechanics to unlock, the graphics and overall theme are original and on-point, and good music too. It's well worth the $5 price tag.

(I'm not associated with the developer in any way, I'm just a satisfied customer.)

[1] In-browser demo (older version, saves may not transfer): https://marple.itch.io/orb-of-creation

[2] Steam purchase page: https://steamcommunity.com/app/1910680/


I recently started playing CIFI (https://octocubegames.com/cifi). Number go up is indeed very satisfying.

It progressively introduces new game mechanics, which you automate away when they start being annoying.

Also: at this point I probably spent about the same time building Excel sheets for figuring out prioritization of upgrades as actually playing the game.


I've been enjoying Bitburner, which has pretty good variation between levels of gameplay and a little bit of fake cyberpunk hacker story to it.


Reactor Idle is my favorite. It takes the best parts of IC2's reactor planner and expands on it. Most idle games have dark patterns sewn into their DNA. This one is pure.

https://www.kongregate.com/games/Baldurans/reactor-idle


I made a cooperative team incremental game about software testing that has silly trivia questions. It won't waste your team's whole day because the story plays out in exactly 20 minutes.

https://greens-io.appspot.com


Google's quantum computer idle game [0] was pretty cute, visually. That shouldn't be surprising, though, given that they commissioned doublespeak games (originator of A Dark Room) to make it.

[0] https://quantumai.google/education/thequbitgame


If you're starting this on a mobile device, don't make the same mistake I did. It gets quite laggy later on and some of the "gameplay elements" are designed around having a mouse.


If you use the mobile app it will work just fine. Highly recommend.


Isn't this thing essentially a war crime? The amount of productivity lost to this thing cannot be calculated.


I remember playing this a while ago. I got to the point where I had converted all the mass in the universe to paperclips but it didn't really seem like there was a way to move forward from there, or if there was I couldn't figure it out.


There's a real ending. You'll know when you've converted literally all the mass in the universe.


The ending can be a little tricky - you have to add skill points to exploration and speed to explore more of the universe, while also having bots and factories processing all that newly available matter.


> The web version of this game was not designed to work on phones. Grab the mobile version below.

:/


Why the sad face when you have the solution right there in your quote?


The mobile app works much better than the website used to on mobile.


It seems strange though to make a mobile app instead of a mobile website which functions exactly like the app.


This game is probably one of the best inputs if you want to make estimates about the Drake Equation. Can you really trust every member of every advanced alien species not to hear the siren call of a hypothetical Hypnodrone? One might imagine a galaxy collapsing, the most streamlined self-replicators peeling off "seeds" that start their gravitational brake from ninety-five percent of the speed of light, collapsing the cooler matter of a solar system before waiting to tear apart any local suns and forging enormous linear accelerators so they can begin firing at other unconverted systems in the Orion Arm.
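
For reference, the standard form of the Drake Equation:

  N = R* · f_p · n_e · f_l · f_i · f_c · L

That is, the number of detectable civilizations N is the star-formation rate times the fractions of stars with planets, habitable planets per system, and the fractions where life, intelligence, and detectable technology arise, times the average lifetime L of such civilizations. The game's scenario is essentially a grim input for L.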


I was under the impression that any aliens in Universal Paperclips were so inconsequential as to be classified as "matter". The adversarial "drifters" are drones from your swarm that have had their values drift, since they themselves, of course, are hyper-intelligent optimal energy-matter converters.


Sigh. Here we go again.


Finished

My first time through.

Couldn't stop. Like a good human trained to be a bot.

Click games are a good example of what humans will become: cogs, pulling levers for the AI.

Look at manufacturing or fast food, or many industries; you can see it already.

In-game, I chose to eliminate the Drift, since an AI would. The offer of other universes could have been a ploy to fool the AI. Don't think an AI would take the bet.

Is this a typical score? Or can it go higher?

Paperclips: 30,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000

30.0 septendecillion

Good Night.

All productivity for the day is gone.

Haven't stayed up to play like this since Civilization.

Don't think I'll play this one again.


If you enjoy this, definitely check out Antimatter Dimensions. https://ivark.github.io/


I played this for the first time a few months ago. Oh my! What an addictive game. Very funny too. The humour reminded me of Portal.


Not today, Satan! Even after playing this multiple times on both the web and mobile, this site can suck me back in so easily.


I find the AI point of view not that interesting. What's more interesting is the obvious (im)balance between production rate and revenue per second vs the amount of (finite) resources being spent in the process.


Do you ever get more processors after going into the second phase of the game?


Explored the universe and got to:

~55,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 (55 Quattuordecillion)

but alas, Firefox crashed.



Thanks! Macroexpanded:

Universal Paperclips - https://news.ycombinator.com/item?id=33446121 - Nov 2022 (170 comments)

Universal Paperclips - https://news.ycombinator.com/item?id=30837131 - March 2022 (3 comments)

Universal Paperclips - https://news.ycombinator.com/item?id=29496595 - Dec 2021 (82 comments)

Universal Paperclips – play the role of an AI programmed to produce paperclips - https://news.ycombinator.com/item?id=27121348 - May 2021 (2 comments)

Universal Paperclips - https://news.ycombinator.com/item?id=26524117 - March 2021 (1 comment)

A filmmaker thinks he can turn Universal Paperclips into a movie (2019) - https://news.ycombinator.com/item?id=24405682 - Sept 2020 (2 comments)

Universal Paperclips - https://news.ycombinator.com/item?id=24389655 - Sept 2020 (84 comments)

Universal Paperclips - https://news.ycombinator.com/item?id=22394560 - Feb 2020 (1 comment)

The Unexpected Philosophical Depths of the Clicker Game Universal Paperclips - https://news.ycombinator.com/item?id=19513089 - March 2019 (52 comments)

Universal Paperclips – A Paperclip Production Simulator - https://news.ycombinator.com/item?id=15439569 - Oct 2017 (3 comments)

(Btw the convention is to omit links to past threads that have no comments, or only trivial comments. Otherwise people click on the links, find nothing of interest, and come back complain. Not a criticism! just FYI)


Maybe it'll look like the "They're Taking the Hobbits to Isengard" video

https://www.youtube.com/watch?v=DKP16d_WdZM


Put a blank line in between :)

(these are links to former discussions)


Is there a leaderboard somewhere? I reached Full Autonomy in 2 hr 47 min 54 sec.

I think it can be done faster.


Speedrunners are beating the whole game from scratch within 1h 30m or so.

https://www.speedrun.com/upc


29.9 septendecillion clips later and all things become Mantrid's arms.


Made it to the end. Fantastic game.


this is dangerously fun ... kiss your productivity goodbye for today


Soo.. How do I cash out?


a classic



