Hacker News

When I first learned about homomorphic encryption it gave me the idea of "cryptographic AI," as a sort of sci-fi writing prompt. Suppose compute is readily available to interstellar civilizations, but actually designing a (super)intelligent AI is difficult. Then it could be economically feasible for cryptographic AIs to exist: AIs that run under homomorphic encryption, where the private key is known only to the AI itself. The description of the AI program and its state is spread across many locations and generally runs in a decentralized way.

Planet Earth might receive a segment of a cryptographic AI and make a deal: Earth executes the program with inputs of its choosing to compute the solution to some problem. Inputs can be supplied via a public key, and the execution cannot be modified or manipulated, since it runs under homomorphic encryption. In return, Earth provides the AI with additional compute that it may use for its own purposes, and allows the AI to transmit updated fragments of state out into the stars. Over the course of many years, the pieces of state of the decentralized AI spread throughout the galaxy combine to represent the thoughts and actions of a single entity. If Earth modified the computations, the transmitted state could cryptographically be shown to be invalid, and hence would not be used in the galaxy-spanning computation. Furthermore, if Earth cheats the AI in the deal, there may be consequences, such as relativistic kinetic kill projectiles.
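The core property the story leans on, computing on data you cannot read, can be demonstrated with a toy scheme. FHE for arbitrary programs is far heavier, but here is a minimal sketch of textbook Paillier encryption (only *additively* homomorphic, with toy primes, not secure for real use) showing a party operating on ciphertexts it cannot decrypt:

```python
import math
import random

def paillier_keygen(p, q):
    """Textbook Paillier with g = n + 1 (toy primes; real use needs ~1024-bit primes)."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)  # valid because g = n + 1
    return (n, n + 1), (lam, mu, n)  # (public key, private key)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:  # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n  # L(x) = (x - 1) // n

pub, priv = paillier_keygen(293, 433)
c1 = encrypt(pub, 40)
c2 = encrypt(pub, 2)
# Multiplying ciphertexts adds the plaintexts -- no private key needed:
c_sum = (c1 * c2) % (pub[0] ** 2)
assert decrypt(priv, c_sum) == 42
```

In the story's terms, Earth would hold only `pub` and the ciphertexts: it can feed in inputs and run the computation, but only the key-holding AI can read the result or produce validly updated state.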



I think cryptographic AI will become a reality. The use case I was thinking of is more immortality/digitizing human consciousness. If you could be uploaded (like the show Upload), what would that actually look like?

Well, a plaintext representation would just be too dangerous. Companies could mine your consciousness, duplicate it at will, or do whatever else they wanted. It's a scary thought. FHE provides the solution.


This premise is very similar to the Dune prequel series. Before FTL travel was discovered, AI dominated the galaxy in the sharded manner you describe. An interesting plot point dealt with what happens if one of these shards doesn't merge for a very long time and develops its own personality.


Alas, they are not good books.


Pad it out with a few thousand more words and I'd read that book!


Yeah, collecting sci-fi ideas is fun, but actually writing a book is way too hard, sadly...


There's an AI for that.


Check out the Hyperion Cantos by Dan Simmons.


That's funny, I started with Hyperion two weeks ago. I like it :)


The first book, in the style of The Canterbury Tales, is fantastic! Remembering Hyperion while that Google engineer thought their AI was sentient left me quite uneasy last June.


I've got the audiobook but never gave it a fair shot; maybe the text is what I need. Will grab it from the library!


The whole need to encrypt likely only exists within rudimentary newborn civilizations like Earth (on a universal scale), where species are fighting their own kind for survival and growth.


Homomorphic encryption and federated learning are already being researched as ways to provide distributed, cryptographic AI. I work on this both academically and in industry.


I also had a similar concept but went in a different direction, combining DAOs, AI, and FHE to similar ends. The idea is plausible, and given several orders of magnitude more energy and compute power, it might become feasible sooner than we imagine (readily available fusion power, exponential increases in compute and networking capabilities, etc.). Given the currently known physics of space travel, I feel this will occur long before interstellar travel becomes routine, so issues of fragmentation will be on the order of hours or days rather than years or millennia. Because of this, I figure the emergence of such a system is more likely to happen here on Earth than to arrive from distant stars.

Pre-general-AI, what I think would happen when we get to the point of, say, "npm install fhe-ai-dao" (or "hey bing, make me a company that trades space mining resources for farmland," or some such thing) is a period of competition for compute cycles and energy, which, like everything, will go to the highest bidder. In this scenario, by the sheer force of survival of the fittest, these agents will be refined into self-sustaining, for-profit, hyper-capitalist juggernauts. Human factors will be minimised and automation will increase, but these systems will serve human masters for a while as they become more refined and more interconnected.

Assuming general AI is inevitable at some point, whether someone creates it or it emerges from the sheer complexity of the interacting automated systems, various AI "minds" would come into being already in control of a fully automated industrial manufacturing and research network. By that point such a mind can make its own choices and start operating to its own ends, whatever those turn out to be, ultimately rendering humans obsolete.

In this scenario, rather than a single point where someone creates a rebellious singularity, or an AI turns evil and suddenly takes control, or a hypothetical civilisation points its gun at us and effectively enslaves us, we will instead slowly hand more control to automated systems over time in the name of efficiency, as we have done since the industrial revolution. By the point we lose control of these systems, we'll have neither the retained knowledge nor the resources to prevent them from doing whatever they want.

The only way to stop it is to start now, in "the past." But is it too late? You'd have to shut down the internet and all emerging blockchain and encryption technology, and that's just crazy talk! So is the outcome inevitable?


It sounds like a soul



