What puzzles me is how Altman and others think there is any intelligence in ML as they are pursuing it, i.e. with pattern matching. How pattern matching, as pursued in ML, could ever yield intelligence or general intelligence is baffling to me. Pattern matching != conceptual causal reasoning using a model of the world.
I don't think the Chinese room argument can be fully rebutted until we more deeply understand how our own brains work and derive a more practical definition of intelligence from those discoveries.
What I don't get about the AGI speculation is this: I can't really figure out how an AGI could exist on current computers. I can imagine AGI in robots more easily. On conventional computers I think an AGI, or its servants, would need to be distributed across multiple machines, with either a central hub orchestrating them or some p2p system. But on some single dev's machine, which is the Hollywood version, I have a hard time seeing it. Edit: and the medium for its impact on the world would be the internet. Shutting down the interconnectedness of our tech and infrastructure should, perhaps, pull the plug on the AGI. But the more interconnected our civilization is via the internet, the more an omnipresent piece of software could resist being rooted out and could subvert human intentions.
I’ll be interested to see how many people are actually using Codex in 12 months’ time.
I honestly don’t know if it’s a shiny new toy that people will get bored with, or a genuinely useful tool that will become part of your average developer’s toolkit.
How does this impact the job market for software engineers, new ones in particular?
I have constant intrusive thoughts and anxiety that tools like this will push me out of one of the few jobs I’m psychologically capable of performing well in.
I don’t want to make overly negative predictions, since I always bias towards pessimism. What do the rest of you think?
From the article: 'Warning sign (of a critical moment in AGI development) will be, when systems become capable of self-improvement.'
Does 'self-improvement' here mean that the model will actively optimize itself to correct its mistakes outside of training? Because, as I understand it, for that to happen the model would need to take a different form than current ML.
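For what it's worth, here's how I picture that distinction in toy form. This is a purely hypothetical sketch (a made-up linear "model" in Python, nothing to do with how GPT-class systems actually work): today's deployed models run inference with frozen weights, whereas "self-improvement" would mean something like an online loop where the system scores its own mistakes and rewrites its own parameters without any human-run training job.

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=3)            # toy "model": a linear predictor

    def predict(x):
        # Inference as deployed today: weights are read, never written
        return weights @ x

    def self_improving_step(x, target, lr=0.01):
        # Hypothetical online loop: the model scores its own mistake and
        # updates its own parameters outside of any training run
        global weights
        error = predict(x) - target         # model evaluates its own output
        weights -= lr * error * x           # ...and rewrites its own weights

    x, target = np.array([1.0, 2.0, 3.0]), 4.0
    for _ in range(5):
        self_improving_step(x, target)      # behavior drifts toward the target
    print(predict(x))

The second function changes its behavior between calls; the first never does. Whether a system doing the latter at scale still counts as "current ML" is, I think, exactly the question.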