Sam Altman Q&A: about GPT-4 and AGI (lesswrong.com)
14 points by yoquan on Sept 7, 2021 | 9 comments



What puzzles me is how Altman and others think there is any intelligence in ML as they are pursuing it, i.e. with pattern matching. How pattern matching, as pursued with ML, could ever yield any intelligence or general intelligence is baffling. Pattern matching != conceptual causal reasoning using a model of the world.


You're effectively restating the Chinese room argument. Which, I believe, has been rebutted.


The Chinese room and “true understanding” don’t matter anyway, as long as the “room” produces economic value and material power.

AGI can be dumb as a rock by Searlean standards but still pose a severe threat to all of us.


I don't think the Chinese room argument can be fully rebutted until we more deeply understand how our own brains work and derive a more practical definition of intelligence from those discoveries.


What I don't get about the AGI speculation is... I can't really figure out how an AGI could exist on current computers. I can imagine AGI in robots more easily. On conventional computers, I think an AGI or its servants would need to be distributed across multiple machines, with either a central hub orchestrating them or some p2p system. But on some single dev's machine, which is the Hollywood style, I have a hard time seeing it.

Edit: and the medium for its impact on the world would be the internet. Shutting down the interconnectedness of our tech and infrastructure should perhaps pull the plug on the AGI. But the more interconnected our civ is via the internet, the more an omnipresent piece of software could resist being rooted out and could technically subvert human intentions.


I’ll be interested to see how many people are actually using Codex in 12 months time.

I honestly don’t know if it’s a shiny new toy that people will get bored with, or a genuinely useful tool that will become part of the average developer’s toolkit.


How does this impact the job market for software engineers, new ones in particular?

I have constant intrusive thoughts and anxiety that tools like this will replace me from one of the few jobs I’m psychologically capable of performing well in.

I don’t want to make too negative a prediction, since I always bias towards pessimism. What do the rest of you think?


From the article: 'Warning sign (of a critical moment in AGI development) will be, when systems become capable of self-improvement.'

Does "self-improvement" here mean that the model will actively optimize itself to correct its mistakes outside of training? Because, as I understand it, for that to happen the model would need to take a different form than current ML.


Once again they will warn us that GPT-4 is a beta and is dangerous, Skynet etc. etc... but hey, we're releasing it anyway.



