I'm not worried that GPT-4 is going to "escape its jail, become self aware, and start building a robot army to conquer the world." Or whatever.
I wasn't worried that financial markets and payment systems were going to be replaced by crypto-currencies, or that the IMF would be supplanted by the Ethereum developers. I didn't believe for a second that they would displace the financial institutions and state governments and escape regulation. We were also supposed to see the modern Internet replaced with web3. It was inevitable! These were all hyperbolic claims made by crypto enthusiasts for years. They were preposterous from day one, but that didn't stop many smart people from being taken in and converted into believers.
There are real dangers! However, I think those dangers come from other humans and human institutions, just as the dangers of crypto came from real humans pulling off huge scams and fraud. Whether it's corporations hyping their AI products to healthcare professionals and causing thousands of misdiagnoses that lead to chronic health problems or death, or scammers using these tools to trick their targets' family members into divulging useful information... that's all humans being humans, leveraging a tool that makes it easier for them to cause harm (intentionally or otherwise).
What we need are people cutting through the hype so that regular people don't mistake negligence or intentional malice for the "sentience" of "AGI."
If you want to put yourself in the position of cutting through the hype, conceptual clarity matters. But given your opening and final sentences, you seem to have entirely missed the point about distinguishing between intelligence and sentience. The AI researchers you view as hypesters think more precisely than that.
Edit: there is a lot of hype and bullshit around AI that needs cutting through. But unlike crypto, there's also some substance, so a blanket attitude of scoffing isn't helpful.
TFA isn't blanket scoffing, and neither are my comments. I think LLMs are neat, interesting, and could be very useful.
However, I do remain skeptical of the more extreme claims: those made by researchers who don't publish their experiments for others to reproduce, by companies with a vested interest in maintaining narratives that serve their gains, etc.
That kind of hogwash does make it all the more difficult to have sensible conversations and perform real research.
Edit: I, for one, am hopeful that one day we will be able to add reasoning models to our toolchain so we can dispatch trivial proofs to them. It would be nice to use formal methods in more places in software development and move past the cost and time excuses for avoiding them.
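To make "trivial proofs" concrete, here's a minimal sketch in Lean 4 (my own choice of prover, not something from TFA) of the kind of goal that existing automation already discharges; the hope is that a reasoning model could be handed the goals these stock tactics miss:

    -- Minimal Lean 4 sketch: "trivial" goals that built-in tactics already close.
    -- The hope is that a reasoning model could pick up the goals these tactics
    -- fail on, without a human having to write the proof by hand.
    example : 2 + 2 = 4 := by decide          -- closed by a decision procedure
    example (n : Nat) : n + 0 = n := by simp  -- closed by the simplifier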