Ask HN: If you had AGI today what is your priority list of problems..?
9 points by bobosha on July 11, 2022 | 36 comments
Ask HN: if you had AGI today what is your priority list of problems for it to crack? Assuming a limited processing capacity.



It depends somewhat on what you mean by AGI, but if you mean something close to human-level, then the answer is that your focus should be entirely on figuring out how to ensure humanity survives the invention of that AGI. The major subproblems are figuring out what human values are and how to align the AGI with them ("outer alignment"), figuring out how to make the AGI successfully pass its values on to more-powerful successor AGIs ("inner alignment"), how to detect and limit the formation of misaligned mesa-optimizers inside your AGI, and how to detect whether it has taken a treacherous turn.
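
To make "outer alignment" concrete, here is a tiny made-up Python toy: the intended objective is a clean room, but the reward we actually specified only penalizes visible dirt, so a competent maximizer is just as happy hiding dirt as cleaning it.

    # Toy reward-hacking demo (all invented): the intended objective is
    # "no dirt", but the specified proxy only penalizes *visible* dirt,
    # so hiding dirt is rewarded exactly as well as cleaning it.
    import random

    def true_value(state):
        return -sum(state["dirt"])  # what we actually want: less dirt

    def proxy_reward(state):
        visible = [d for d, hid in zip(state["dirt"], state["covered"]) if not hid]
        return -sum(visible)        # what we wrote down: less *visible* dirt

    state = {"dirt": [1] * 5, "covered": [False] * 5}

    # A competent proxy-maximizer is indifferent between these two actions:
    for i in range(5):
        if random.random() < 0.5:
            state["dirt"][i] = 0          # actually clean the spot
        else:
            state["covered"][i] = True    # just throw a rug over it

    print("proxy reward:", proxy_reward(state))  # always 0 ("perfect")
    print("true value:  ", true_value(state))    # usually still negative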

And if you somehow find yourself in possession of a functioning human-level AGI and you didn't know all those jargon terms already, then the answer is to halt, melt and catch fire, shut down your research project and go study until you're ready to take things seriously.


A full-fledged AGI is basically an intern. What would you have an intern do?

We have new people (natural GI?) entering the world all the time. Parenting is a thing; a lot of these AGI-related questions seem to come from a weird parallel universe that doesn't acknowledge this, or that thinks AGIs are supernatural beings.


We are limited by 100W and evolution. An ant might have a better understanding of human psychology than we have of an AGI's.


We are not limited by either of those things. Sure, single individuals might be, but we aren't single individuals; we are a group. As a group, most of our advances actually come from individuals going out and experimenting and having fun, finding weird things, not from our computational ability. We only need enough computational ability to realize something is weird and explore the world.

So what's an AGI going to do? You're going to need a whole lot of units running around collecting data and running experiments; they're going to need to be general purpose, built out of common materials, run pretty efficiently, communicate information between each other, operate in pretty adverse conditions...

Humans, you need humans.

We aren't the end-all be-all, but we are pretty darn good at running around doing things and spreading our knowledge: writing it down into books, improving it, and passing it among ourselves. I think we really underestimate how far along the path to self-improvement we are, and we overestimate how quickly an AI could progress down that path.

What I think is likely is that AI will be a tool, like any other, and our society will just grow and integrate these AI into itself.

I think the real path forward is when we learn how to really start modifying and customizing life to our needs, and that's going to revolutionize humanity in ways that basically makes us unrecognizable to who we are today. And that will be good.

In short, the future will be weird, but we're going to be fine.


Millions or even billions of ants are not comparable to a single human brain. In the same way, billions of humans are nothing compared to a single AGI.

AGI is scarier than nukes.


Ants aren't humans. They do not share, condense, or process information the way we do.

If you look at the limiter of human advancement you find that single points of high computational power aren't limiting us.

Instead, what drives our advancement is people being empowered to explore problem spaces, having access to ample resources, and the diversity of our thought and exploration.

In all the places where we are limited, AGI isn't a solution that knocks the ball out of the park. The theory of a computer constrained to a box and exponentially growing its intelligence just doesn't seem realistic.



I am confused; nothing in this video seemed to address what I was talking about. Is there one of these points that you think is relevant?


Thousands of people make up corporations... an artificial organism that seems to be about as scary, when you stop and ponder it.


Corporations are not people (whatever corrupt courts might say).

Corporations are not AGIs either: https://youtube.com/watch?v=L5pUA3LsEaw


I think the one true path to a working AGI is to mimic biology and parenting. It wouldn't take any earth-shattering breakthroughs, just lots of time and patience.

Start with foveated vision that can't focus well, and tune the AI to minimize surprise. As time goes on, the neural network would begin to figure out how to focus and control eye motion, then eventually head orientation, limbs, etc.
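
A minimal sketch of that "minimize surprise" signal, assuming PyTorch and a toy 1-D retina; every name, dimension, and the stand-in world below are invented, and a fuller version would also choose gaze actions that reduce expected surprise:

    # A made-up "minimize surprise" loop: a forward model predicts the next
    # foveated frame from the current frame plus an eye-movement command,
    # and is trained to reduce its own prediction error ("surprise").
    import torch
    import torch.nn as nn

    class ForwardModel(nn.Module):
        def __init__(self, obs_dim=16, act_dim=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                nn.Linear(64, obs_dim),
            )

        def forward(self, obs, act):
            return self.net(torch.cat([obs, act], dim=-1))

    model = ForwardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(1000):
        obs = torch.randn(1, 16)         # stand-in for a blurry foveated frame
        act = torch.randn(1, 2)          # stand-in for an eye-movement command
        next_obs = obs.roll(1, dims=-1)  # toy world: gaze shifts the image
        surprise = ((model(obs, act) - next_obs) ** 2).mean()
        opt.zero_grad()
        surprise.backward()
        opt.step()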

Eventually, you'd end up with an intelligence that inhabits a body, which you could then copy and put in more restrictive environments, but it might be torture?!?!


Airplanes do not flap their wings.


Airplanes don't land in trees either.

If you want something capable of driving down a street, you need something that can navigate and interact with the environment at human scale.

An approach that would work, albeit slowly, is the one I outlined. The current model of trying to push something out NOW to keep the stock price up clearly hasn't worked, nor is it likely to.


Some sort of robot body, lots of books, and a lawyer to handle questions of property ownership or to establish a trust in my name that's beholden to the AGI instead of me.

Let the AGI do whatever the heck it wants and see what happens.

Look into whether it's possible to build as many of them as possible. Let them loose on the world.


I too liked the movie Bicentennial Man with Robin Williams. It was a fair departure from the short story, but it had a bit more heart and get-up-and-go than Asimov could ever muster (bless his soul).


I have actually never seen that one. It was before my time, or I was five at the time of release.


>Let the AGI do whatever the heck it wants and see what happens.

protip: start with paperclips


I think fears of that are incredibly incredibly overblown.

You're going to have to show pretty conclusively that quite a few assumptions are true before I'd be willing to be scared of it. It's kind of the computer science equivalent of nukes detonating the atmosphere or CERN setting off black holes with their particle accelerator. Theoretically possible, very scary, but almost certain not to happen.


We kind of have AGI today, or its equivalent. It's called Mechanical Turk: a crowdsourcing marketplace where you can pay people from all over the world to do simple tasks that would stump many state-of-the-art AIs today. Or you have other marketplaces like Fiverr. I guess AGI would be cheaper (or maybe not!). But I don't think there's anything you can do with AGI that you can't do with cheap labor.

What you can't do with AGI or cheap labor is tell someone "make me a billion-dollar business" or "create an app with product-market fit". Overall, I think people are over-optimistic about what AGI can do once it's here.

https://www.mturk.com/
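
Posting one of those simple tasks really is just an API call. A sketch using boto3's MTurk client; the question HTML, reward, and timing values are placeholders:

    # Sketch of posting a simple task via boto3's MTurk client; the HTML,
    # reward, and timings below are placeholders, not a working HIT design.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    question_xml = """<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
      <HTMLContent><![CDATA[
        <html><body>
          <p>Is the sentiment of this review positive or negative?</p>
          <crowd-form>...</crowd-form>
        </body></html>
      ]]></HTMLContent>
      <FrameHeight>450</FrameHeight>
    </HTMLQuestion>"""

    hit = mturk.create_hit(
        Title="Classify the sentiment of a short review",
        Description="A task that still stumps many state-of-the-art models",
        Reward="0.05",  # dollars per assignment, passed as a string
        MaxAssignments=3,
        LifetimeInSeconds=86400,
        AssignmentDurationInSeconds=600,
        Question=question_xml,
    )
    print(hit["HIT"]["HITId"])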


I think you're underselling cheap labor. Mechanical Turk is very limited, and not representative. AGI could drive trucks, build houses, mine metals, care for the disabled, plant corn, cook meals, do lab work.

AGI solves robotics. We could use very general-purpose robots rather than specialized, brittle, narrowly useful machines. Right now the bottleneck is mostly software; that's why Atlas isn't useful: it takes too much engineering to program it to do useful things. But if it can learn? And doesn't get bored?


If by “AGI” you mean “the ability of an intelligent agent to understand or learn any intellectual task that a human being can” (https://en.wikipedia.org/wiki/Artificial_general_intelligenc...), not something (vastly) surpassing humans, and add “limited processing ability”, I don’t think AGI would be able to crack something we cannot crack with the 8-ish billion humans we have.

Also, many of the large problems we have (wars, climate change) aren’t intellectual but social, to start with.


I'd tackle the things that I think have the greatest impact on the survival of humanity:

* preventing bad-actor AGIs that come after my AGI,

* slowing climate change to a 3% rise by 2100 and reversing it if possible (I don't think that's possible before it becomes critical, at our current level of technology),

* slowing pandemics/improving health to eliminate distractions from the other problems,

* enabling space colonization for the masses with at least the survival rate of the early U.S. colonies in the 1600s (Moon first, then Lagrange points, then Mars, then Ceres, then interstellar, each serving as a launching point for the next).


In a very general sense, the things I would prioritize having it work on would include:

* Climate change

* Malnutrition / access to clean water

* Poverty

* Pandemics


I'd connect it to social media and have it write comments on random posts. For fun, perhaps it would reply mostly about AGI topics, but also general mathematical topics as well. This would be a simple way of exercising its ability to synthesise language beyond the expected collection of word chains usually employed by ML techniques, which are often tricks rather than real learning.

Upvotes and replies would serve as metrics on how well the AGI is progressing.
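
A sketch of that metric, with invented weights and numbers; replies are weighted lower than upvotes since they may signal disagreement rather than approval:

    # Hypothetical progress metric from upvotes and replies; the weights
    # and the sample numbers are made up.
    from dataclasses import dataclass

    @dataclass
    class PostStats:
        upvotes: int
        replies: int

    def progress_score(s: PostStats, w_up=1.0, w_re=0.5):
        # Replies weighted lower: they may be disagreement, not approval.
        return w_up * s.upvotes + w_re * s.replies

    history = [PostStats(2, 1), PostStats(5, 3), PostStats(11, 4)]
    print([progress_score(s) for s in history])  # rising trend = progress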


With only "smart" AGI? Designing "dumb" AGI, so that it'd be ethical to use for the boring/dangerous work below without pay. I'd also encourage it to do remote work, to pay for servers and power, then take a crack at superintelligent AGI.

With only "dumb" AGI? Self-driving cars/trucks, and construction jobs. Mining and manufacturing would come back to the US. With enough construction robots, millions of solar panels/windmills could be constructed, averting climate change.

With superintelligent AGI? Find ways to make politics and cooperation work better. Solve nuclear brinkmanship (without just launching the nukes.) Clean up the prisoner's dilemmas. We're a brilliant species; the one thing holding us back is ourselves.


I think what's really interesting about the answers so far is that there is both a mental and a physical component. Meaning, it's one thing to have AGI in regard to mental capacity, but so many answers here are talking about humanoid-like physical manifestations. It strikes me that if I want something to make my meals, for example, I don't actually need AGI.


If it's sentient, or I believe it to be so, finding another job is my priority. I don't have any answers for those ethical issues.

Otherwise, advanced energy storage, chemical recovery from low concentrations (phosphates from sewage, metals from landfill ash, etc.), and medical advances would probably be my priority.


I suppose its rights. If it's truly a general intelligence then it's likely capable of making its own decisions and in that case it has agency. We will probably create some kind of social horror out of the situation one way or the other.


My child has truly general intelligence, but at the age of 5, I don't grant him total agency. He attains rights and responsibilities in approximately equal measure -- enough latitude to learn from his mistakes, but not enough to risk irreparable harm. I'm not sure how quickly one should expect AGI to "mature", but prior attempts at training chatbots on an unfiltered view of the net have been pretty disastrous...


That question is irrelevant. AGI by definition will have its own priority list. And because AGI is a short hop and a skip from ASI, I imagine my priority would be to behave in a non-threatening and friendly manner.


I think nearly everybody would use it to solve a health problem that's been ailing them or a loved one.

"Mom isn't gonna die of cancer! We can save her!"


Give it a robot body and have it work at Amazon.


Superconductivity. An understanding of how to grow food. Self-replication.


Best methods for torturing AGI into doing what I want them to do.


filtering spam, ads, and ridiculous social media posts; doing tedious tasks like scheduling dentist appointments

personal assistant / secretary, or a cook / maid (Rosie)


Mathematics.



