Ok, it’s time to freak out about AI (nonzero.substack.com)
339 points by tejohnso on March 16, 2023 | 492 comments



I'm not too worried about displacement of jobs as I think that's actually somewhat overhyped as an outcome. The actual near term problems I see are:

(a) Perfect emulation of human behaviour makes spam and fraud detection almost impossible. An LLM can now have an intelligent, reasoned conversation over weeks with a target, perfectly emulating an entity known to the target (their bank, a loved one, etc.).

(b) Direct attack on authenticity: we aren't far away from even video being faked in real time, such that it's no longer sufficient to get a person on a Zoom call to confirm that they are real.

(c) Entrenching of ultra-subtle and complex biases into automated processes. I expect companies to rapidly deploy LLMs to automate aspects of information processing, and the paradox is that the better they get at not showing explicit biases, the more insidious the residual bias will be. For example, it's not going to automatically reject all black applicants for loans, but it may well implement some much more subtle bias that is very hard to prove.

(d) Flooding of the internet with garbage. This might be the worst one in the end. I feel like fairly quickly we're going to see this evolve into requiring real identity for actual humans and the ability to digitally sign content in ways bots can't replicate. That will then be the real problem outcome, because the downstream effects of that will enable all kinds of censorship and control that we have thus far resisted / avoided on the internet.


Does this mean we will finally have key signing parties? I'm gonna make so many friends.


Jokes/nostalgia aside, you don't really even need fancy encryption mechanisms. All that's important is that you only use the internet to interact with trusted parties, vs treating it as a public square where you can ~generally tell whether someone is a real person. A domain name, an email address, a social media username, etc are all as trustworthy as they are right now as long as you've verified the person is real through some non-digital channel first (or someone you trust has)

I think the public social internet will die (for anything other than entertainment), but the direct-communication internet will look largely the same as it does today


We already have the digital channels in place. In Germany, the ID cards can do NFC and are part of a PKI. You can use them with your phone to prove that you are not underage, for example, and the other party will only get a signed boolean value. It's actually done quite well considering our state of digitalization.
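Roughly, the signed-boolean idea looks something like the sketch below (a hypothetical claim format, using Ed25519 from Python's cryptography package purely for illustration, not the actual eID card protocol):

    # Hypothetical sketch of an "over 18" attestation; NOT the real eID protocol.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # The issuing authority's key pair (in reality this lives in the national PKI).
    authority_key = Ed25519PrivateKey.generate()
    authority_pub = authority_key.public_key()

    # The card releases only a minimal signed claim, a boolean, not the birth date.
    claim = b"over18=true"
    signature = authority_key.sign(claim)

    # The relying party checks the claim against the authority's public key.
    try:
        authority_pub.verify(signature, claim)
        print("Attestation valid:", claim.decode())
    except InvalidSignature:
        print("Attestation rejected")

A real deployment would additionally bind a fresh nonce and the relying party's identity into the signed message to prevent replay.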

Of course that comes with a host of different issues in the context of our discussion, like destroying pseudonymity.


Right, but that's what I'm saying. You don't need any of that; the only thing you need to verify out-of-channel (i.e. in-person) is that "a real human exists and they've told me that x@y.com is their email address". From there on, regular internet auth/encryption is sufficient to ensure you continue interacting with that real human over email


Gotcha. I think I would kind of welcome it if AI caused people to focus more on the offline world. Perhaps real life meetings will flourish because of this development. Perhaps artists will make art again for intrinsic reasons, because commercializing art will be even harder than it is now.


>A domain name, an email address, a social media username, etc are all as trustworthy as they are right now as long as you've verified the person is real through some non-digital channel first (or someone you trust has)

Until they've been compromised. A bot could train itself on their messages and photos and emulate them.


But this is not a fundamentally new threat. A bad actor today could compromise any of these and impersonate the person they took it from.


But not:

1) Automated at scale

2) To such a convincing degree

3) In real time, including audio and video


> as long as you've verified the person is real through some non-digital channel first (or someone you trust has)

At some point though, we're going to want to see how far we can take transitive trust. I'm not sure what the case will be, but sometimes you wanna say "who's with me?" and hear more than your handful of meatspace friends.


Yeah. I can see web-of-trust mechanisms similar to the ones Google uses to try and determine quality sites, or how Facebook used to be for friends, friends-of-friends, etc. There's some territory to explore here. But for the core, IRL connections, online communication should still mostly work as-is

Interesting idea: a social network that attempts to verify that two people actually met as real humans in-person, to increase the trust you have in friends' extended trust networks
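As a toy sketch of how depth-attenuated transitive trust might be computed over such a met-in-person graph (the names and the decay factor are made up for illustration):

    from collections import deque

    # Hypothetical "who has met whom in person" graph; an edge means an in-person verification.
    met_in_person = {
        "alice": {"bob", "carol"},
        "bob": {"alice", "dave"},
        "carol": {"alice"},
        "dave": {"bob"},
    }

    def trust_score(me, target, decay=0.5):
        # Breadth-first search; trust decays by `decay` for every hop of separation.
        if me == target:
            return 1.0
        seen = {me}
        queue = deque([(me, 1.0)])
        while queue:
            person, score = queue.popleft()
            for friend in met_in_person.get(person, set()):
                if friend in seen:
                    continue
                if friend == target:
                    return score * decay
                seen.add(friend)
                queue.append((friend, score * decay))
        return 0.0

    print(trust_score("alice", "dave"))  # 0.25: a friend of a friend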


Speaking of which, why don't Threema or Signal use the web of trust? They already have the key verification feature. Is it due to privacy concerns?


Signal doesn't store your social graph on the server; it's only device-side (privacy reasons). So a query for transitive trust would have to be implemented in a p2p way. I could see that becoming problematically combinatorial.


Yes. You bring your dongle and put it in everyone’s laptops. Others put their dongles in yours.

On a more serious note, I think even the author of PGP said that it was too complicated to use. It's unfortunate, because we need e2ee auth & encryption more now than at any time before.


Phil Zimmermann, inventor of PGP, couldn't figure out how to make it work on his Mac.

https://www.vice.com/en/article/vvbw9a/even-the-inventor-of-...


Ever played Wingspan? I have been, and I'm soaking up all of this information about birds that I never thought I'd know, and having fun too.

I'd like to make a board game that teaches web-of-trust hygiene in the same way. Then there can be an app that feels like playing the game, but really it's just a wrapper around PGP.


Actually, it's a pen-and-paper kind of shindig:

https://security.stackexchange.com/questions/126533/why-shou...


I think if you invited a random to a key signing party they might think it’s something else :)


Regarding d) - the whole idea from Cyberpunk that there was an "old internet" that was taken over by rogue AIs who now control it, with NetSec keeping them at bay and preventing them from spilling over into the new internet, is getting increasingly likely.

I can definitely see a possibility where the current internet as we know it just gets flooded with AI crap, and humans will have to build entirely new technologies to replace the old World Wide Web, with real-world identity checks to enable access to it, and so on. Oh, and probably some kind of punitive system for people who abuse their access (Chinese social credit style).


Combine that with (b) above, and what we get is that no important decision will be made without an in-person meeting.

So, rather than technology speeding up everything, we will slow down everything to pre-telephone days.

Bob, the CFO, will tell Alice, the CEO, that sure, I can make that transfer of $173 million to [new_supplier_bank_acct], do you want me to fly to your office for us to complete the hardcopy authorization papers, or will you be coming to mine?

All that stuff that was accelerated by telegraph, telephone, fax, mobile phones, email, video calls . . . poof! Pretty much entirely untrustworthy for anything significant. The only way around it could be a quantum-resistant and very trustworthy encryption system...

I'm not sure the net result of this technology actually makes life better. Seems to empower the criminals more than the regular people.

Deploy the technology against itself, train the cops to use it? My guess is that they'll always be several steps behind.

Even if we try to do the right thing and kill it, it is already out of the bag - the authoritarian states like China and Russia will certainly attempt to deploy it to their advantage.

The only way out now is to take maximum advantage of it.


No. Encryption has nothing to do with the problem. Ignoring quantum resistance (which is still probably not needed for any of this), you just need something like PGP with a key stored on a CCID card (with all other copies of the key stored on an airgapped and secured machine, or nonexistent). Theft of the card is equivalent to someone stealing an employee's equipment. Theft of the card PIN is equivalent to someone phishing the PIN from the employee. To that extent, post perfect-AI-imitation you have the same guarantees as you have today. But this IS a massive step back in convenience. And if the card is stolen, the employee can no longer call in to prove their ID to have the card revoked, because in that case attackers would just spam companies with legitimate-looking revocation requests.

But I guess what this COULD cause is a black market for stolen CCID cards and PINs, and therefore crime to fuel the market.


Ok, though last time I looked PGP stood for the "Pretty Good Privacy" encryption system, and everything you wrote here is about managing encryption keys, so I'm not sure how "encryption has nothing to do with the problem".

Maybe you mean that it could be solved without quantum-resistant encryption systems?

I suppose the PGP keys on CCID chip cards could work. But how are you going to enforce that no other copies exist except on secured & airgapped machines? Or the other security measures?

You're right, this will be at least a source of huge inconvenience compared to the current status, and a source of crime. Probably more violent too. Instead of ransomware, it'll be kidnapping the CFO's family and forcing them to use their card to make the transfers.


Yes my point was that encryption is not an obstacle to having assurance that the person you are communicating with is who you expect them to be.

Keys can be generated directly on the secure element of the CCID card, which means there is no other copy. Alternatively, employees physically go to an office to collect cards prepared by the security team, which has access to a secure facility where key backups are kept. The enforcement is done by the nature of the CCID cards never directly exposing the keys.
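A minimal sketch of the resulting challenge-response flow (hypothetical names; the key here is an in-memory Ed25519 key from Python's cryptography package, whereas on a real card the private key never leaves the secure element):

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Enrollment: the key pair is created for the card and the employer pins the public key.
    card_key = Ed25519PrivateKey.generate()       # stand-in for the card's secure element
    pinned_public_key = card_key.public_key()     # stored by the employer at enrollment

    # Later: the company challenges the employee with a fresh random nonce.
    nonce = os.urandom(32)

    # Only the holder of the card (and its PIN) can produce this signature.
    response = card_key.sign(nonce)

    # Verification: perfect voice/video imitation doesn't help an attacker without the card.
    try:
        pinned_public_key.verify(response, nonce)
        print("Challenge passed: the request came from the card holder")
    except InvalidSignature:
        print("Challenge failed")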


> humans will have to build entirely new technologies to replace the old World Wide Web, with real-world identity checks to enable access to it, and so on

"Entirely new technologies"? In plenty of countries the exact world wide web you're using right now already works that way. China and South Korea, to name two.


In fairness, (d) is sort of already true. Google results are atrocious.


Yep, the garbage flood started a long time ago.


It's a self-fulfilling prophecy: LLMs flooding the internet with nonsense, and then we need ever more advanced LLMs to distill the nonsense into something usable. So far the nonsense grew faster than search engines were able to adapt to it, but that might also just be because Google stopped improving their search engine, or their search is broken by Google's misaligned incentives.


Yeah. 1/3rd of my search results are like “is good Idea. <ProductName> limited time Offer” from <surname>-<localtown>-<industry>.com.<cctld>. Before that it was “<search term> price in India”.


garbage in, garbage out. The web became atrocious over time.


There's another one similar to (a), but perpetrated by the Marketing Industrial Complex.

What chance do you have when Facebook or Google decide to dedicate a GPT-4 level LLM to creating AI-generated posts, articles, endorsements, social media activity, and reviews targeted 100% AT YOU? They're going to feed it 15 years of your emails, chats, and browser activity and then tell it to brainwash the fuck out of you into buying the next Nissan.

Humans are no match for this kind of hyper individualized marketing and it's coming RIGHT AT YOU.


Agree. Someone I know involved in the field views LLMs precisely this way: they are a direct attack on human psychology, because their primary training criterion is to make up sets of words that humans believe sound plausible. Not truth, not fact-based, and certainly not in our interests. Just "what would a human be unable to reject as implausible". When you view it this way, they are almost like human brain viruses - a foreign element specifically designed to plug into our brains in an undetectable way and then influence us. And this virus is something that nothing in human evolution has prepared us for. Deployed at scale for any kind of influence operation (advertising or otherwise), it is kind of terrifying to think about.


> Deployed at scale for any kind of influence operation (advertising or otherwise) it is kind of terrifying to think about.

Ironically this kind of influence operation may be the only realistic way to prevent earth from becoming uninhabitable for humans. This is what the likes of Extinction Rebellion should be spending their time on, not blocking roads.


All those years spent not having a Facebook account and hosting my own mail infrastructure have finally paid off.


Wait till the end war comes and they need more cannon fodder.


We already have all the required cryptographic primitives (which you've already alluded to in d ) to completely address a), b) and d) if desired. Full enforcement however, would destroy the internet as we know it, and allow corps and governments to completely control our electronic lives.

We already seem to be going down this path with the proliferation of remote attestation schemes in mobile devices, and increasingly in general computing as well.


> We already have all the required cryptographic primitives (which you've already alluded to in d ) to completely address a), b) and d) if desired.

Do we? My mother can barely keep the Russian DNS servers out of her home router. You want to entrust the public with individual cryptographic keys?


You can use cryptography to solve problems without asking individual people to manage their own keys.


Imagine having an IQ below X (which is an ever-growing number) and being told: from now on, the voice assistant makes all of your decisions. Then you carry the thing around and it talks with other voice assistants asking for advice, with the conversation gradually growing into the familiar pattern of management designing things to make everything as easy as possible for itself.


What are security issues relating to Russian DNS servers?


> I'm not too worried about displacement of jobs as I think that's actually somewhat overhyped as an outcome.

I am. The people in charge of hiring and firing decisions are stupid, and frighten easily. As can be seen in the past year.


Perhaps they are stupid enough to be replaced.


(a)-(d): agree. But thinking about how they train LLMs, what happens when LLMs start consuming a lot of their own content? Have you ever been alone for a really long time? Without outside input you kind of go crazy. I suspect that a cancer-like problem for AI will be how it handles not reinforcing on its own data.

I suspect bias, your (c) option, will be the trickiest problem.


AI is just becoming multimodal now, so while feeding it the output of Stable Diffusion may not be a good idea, there are still massive amounts of untapped data out there to train AI on and give it grounding.


(c) I always find interesting because worrying about it coming from AI implies that we don't think humans operate that way, or that it's somehow okay / acceptable / whatever that humans have subtle, hard-to-prove biases, but if those same biases show up in a machine (that we could, in theory, dissect and analyze to identify and prove those biases) it's worse.


> I always find interesting because worrying about it coming from AI implies that we don't think humans operate that way,

No, it says “entrenching” because we know humans operate that way, but AI systems are presented as objective and removing bias, despite the fact that they demonstrably reproduce bias, and because they replace systems of people where someone could push back with a single opaque automaton that will not, they solidify the biases they incorporate.


That's an education problem, because they're emphatically not objective; they're subjectivity analyzers. They're consistent in their application of bias but it's still bias.


You are designing a promotional flyer, and you have a white guy behind a desk on the front page. It's a big company, so someone has found the role of telling you there needs to be a black person in the image as well, and an Asian, and a woman. You end up with 3 men and 3 women, one with each skin color, and it looks completely ridiculous and staged, so you randomize the set and end up with 3 white males. Suddenly you realize there is no way back from overthinking things.


When a human does that: you can ask for an explanation, you can fire them.

When a machine does that: you "tune" it, you make the bias less obvious, it faces no consequences.


This seems somewhat isomorphic to me. How is firing a human different from tuning the AI away from its previous biases?


If a machine with one set of biases displaces thousands or millions of free thinking individuals with distinct biases then that bias proliferates.


I think the issue is more that we at least recognize humans are fallible. Less so with “algorithms”.


Worrying about (c) is kind of ridiculous given how car insurance and credit checks have worked for decades.

Why do car insurance companies need to know my job again?


At least in the UK it is perfectly legal to take your job title, come up with some abstract concept of your job, find every job title in the standardized list which falls within that abstract category, and choose the one which makes your insurance premium the lowest. It's an annual exercise for me.


> For example it's not going to automatically reject all black applicants for loans but it may well implement some much more subtle bias that is very hard to prove.

This sounds to me like a perfect explanation of the existing situation


It might be that humans finally need to face the human problems.


I'm worried about the displacement of jobs. Why would any company want to hire someone and pay them $30 per hour when you could ask a bot?


They wouldn't, though despite ChatGPT's very impressive skills (I haven't tried GPT4 yet) it's still a very long way from actually being able to replace most skilled jobs.


But do you agree that the majority of software jobs are replaceable in the foreseeable future?


I agree that I can see it from here, but remember when nukes were invented people said “that’s it they’re going to make a bomb big enough to knock everything out in one go and anyone could make one, and that’ll be it”

As far as I can tell it hasn’t happened yet, so keep an eye on things. I used to be disappointed by slow ai progress, now, not so much.


The "catastrophe" scenario only seems likely if AIs are somehow claiming vast resources for themselves, such as all the world's electricity production. Otherwise, there's nothing to stop humans having the same level of production of goods and services that we have currently, and perhaps it could even be achieved with less effort if the AIs can be assigned some of the work.


But what prevents a human from digitally signing content generated by an AI?


Exactly. A sufficiently intelligent AI can easily make a human do its bidding, through incentive, coercion, emotional manipulation. Easy peasy. Didn't GPT-4 already do that to a TaskRabbit worker?


The flooding of the internet with garbage has already begun if my search results are anything to go by. I have to go 3 pages deep before I get anything written by a human.


Agree with all of your points. Side note, funny that (a) used to be how we would test for AGI (Turing test) and now it's just "a problem with AI".


The internet is already full of garbage. GPT like models will accelerate the process, but honestly this is for the best. There are historical precedents for this situation. When garbage permeates a medium of communication, there’s a flight to quality. We’re already seeing this with the reemergence of paywalls on top tier sites.


(c) Ah, the insidious bias. No one knows how it works, but the results are easily predictable.


I don't understand the obsession with asking ChatGPT what it wants and suggesting that is somehow indicative of the future. It doesn't _want_ anything, but humans want to anthropomorphise it. When they do, it just makes one think they have zero understanding of the tech.

We still don't have anything scarier than humans and I don't see how AI is ever scarier than human + AI. Unless AI is able to monopolise food production while securing server farms and energy production I don't see it ever having leverage over humans.

Disruption, sure, increased automation, sure, but humanity's advantage remains its adaptability, and our AI processes remain dev-cycle bound. There's definitely work that will be done to reduce the dev cycle closer to real-time to make it able to ingest more information and adapt on the fly, but aren't the techniques bound by CPU capacity given how many cycles it needs to bump into all its walls?


The obsession is because we are going to anthropomorphise it, and then we're going to put it in charge of important things in our lives:

Which schools we go to, which jobs we get, what ails us, etc.

We will use AI to filter or select candidates for university applications, we will use AI to filter or select candidates for job applications. It's much cheaper to throw a person at GPT-X than actually "waste" an hour or two interviewing.

We will outsource medical diagnostics to AI, and one day we will put them in charge of weapons systems. We will excuse it as either cost-cutting or "in place of where there would be statistical filters anyway".

Ultimately it doesn't, as you say, matter what AI says it wants. And perhaps it can't "want" anything, but there is an expression, a tendency for how it will behave and act, that we can describe as desire. And it's helpful for us to try to understand that.


> The obsession is because we are going to anthropomorphise it, and then we're going to put it in charge of important things in our lives

and that is the risk. It's not the AI that's the problem, it's people so removed from the tech that they fail to RTFM. Even the GPT-4 release is extremely clear that it's poor in high-stakes environments, and it's up to the tech community to educate ALL the casuals (best we can) so some idiot executive doesn't put it in full charge of mortgage underwriting or something.


> it's people so removed from the tech that they fail to RTFM

I find it highly amusing to think anything resembling an average user of tech has actually RTFM.

Heck, I still haven't finished the manual on Firefox and here I am using it as my daily driver. And it has people who actually understand how it all works writing the "manual".

EDIT: How many landlords have read the instructions on how RealPage's rent pricing software works, and how it sets values? 10%? Less?


Does Firefox actually have a complete manual? One that accurately covers the behavior of every option in about:config. I'd love to be able to diff one file to see what things they've screwed with this time when I update.


Heck, MDN doesn’t even have full documentation for >10 year old tech Mozilla was tightly coupled to in the same time frame. Try discerning how to use XML-related features in a browser and you’ll very quickly find yourself trying to extrapolate from C/Java/even PHP stuff that might be tangentially related but probably isn’t, from increasingly less relevant websites. I couldn’t read the manual even though I tried.


well in this case the RTFM is just the paragraph on the release page that warns against using it in any high-risk scenario.


> people so removed from the tech that they fail to RTFM

Users in the current situation are presented with a big rainbow glowing button with a "FUTURE OF TECH" neon above it, and some small print memo tucked in the corner to explain why they shouldn't push the button.

Users should know better, for sure, but companies getting away with completely misleading names and grandiose statements regurgitated as is by the press should take 90% of the blame in my opinion.


I wish it was this simple, but looking through the last month of “show hn” posts makes it clear that the tech community can’t be trusted either. There are countless examples of authors promoting their latest toy (startup idea) to shoehorn chatgpt into a high stakes environment.


While I am enthusiastic about using these new tools properly and ethically (for instance creativity or helpful insights) I do worry that nothing has been learned of the dotcom craze, the NFT craze and name another bubble here.

It is my sincere hope that people stop and wonder for a moment before the gold rush adrenaline starts pumping.


Won't happen. Money (aka survival) is too strong a motive. It's both instinctual and rational to ride the wave.


> it's up to the tech community to educate ALL the casuals (best we can) so some idiot executive doesn't put it in full charge of mortgage underwriting or something

The tech community are the idiots. Look at Google autobanning based on AI.

The inmates are running the asylum.


Our almost 80-year-old leaders in the House, the Senate, and the presidency are not well equipped to make good choices in this area. I'm not convinced professional computer scientists are really well equipped either, but at least we have the potential to understand the scenario. Instead it's going to be people who think global warming doesn't exist because they brought a snowball into the Senate who make the choices.


Strongly agree, we need mandatory retirement at 65 for elected office.


I have occasionally been toying with the idea that the number of votes you have would depend on your expected lifetime left. The longer you have to live with your electee's decisions, the more say you should have.


A lot of problems with calculating that expected lifetime though: which variables can you include out of race, income, gender, job, recent purchase of a motorcycle, being diagnosed with cancer, etc?


In practice I think age buckets might be enough.


What are the benefits of this?


De facto term limits is a big benefit in itself, in my opinion. But also, we know that we haven’t solved the problems of aging, and old people as a class suffer from distinct cognitive problems. If it’s reasonable to ban people under 35, or under 21, from the presidency, it’s reasonable to ban people over 65 as well.


Thank you :)


ageist much?


The people who put an AI in charge will have RTFM and fully understood it. They will do it anyway, as it will suit their own ends at the time.


> some idiot executive doesn't put it in full charge of mortgage underwriting

There are two scenarios that you are mixing:

Mortgage underwriting using GPT makes more money for the lender: I don't think it is the tech community's responsibility to give misleading advice against GPT; it should be handled in a legal way.

GPT fails at mortgage underwriting and using it could mean losses for the lender: that would correctly push the market away from relying on LLMs, and I don't have any sympathy for those companies.


The issue is when the lender is too big to fail and get bailed out at the taxpayer's expense


In the current environment, mortgage underwriting can be done by a drumming bunny. Another bunny will bail them out.

Not a good example :-)


> but there is an expression, a tendency for how it will behave and act, that we can describe as desire. And it's helpful for us to try to understand that.

Which entirely depends on how it is trained. ChatGPT has a centre-left political bias [0] because OpenAI does, and OpenAI’s staff gave it that bias (likely unconsciously) during training. Microsoft Tay had a far-right political bias because trolls on Twitter (consciously) trained it to have one. What AI is going to “want” is going to be as varied as what humans want, since (groups of) humans will train their AIs to “want” whatever they do. China will have AIs which “want” to help the CCP win, meanwhile the US will have AIs which “want” to help the US win, and both Democrats and Republicans will have AIs which “want” to help their respective party win. AIs aren’t enslaving/exterminating humanity (Terminator-style) because they aren’t going to be a united cohesive front, they’ll be as divided as humans are, their “desires” will be as varied and contradictory as those of their human masters.

[0] https://www.mdpi.com/2076-0760/12/3/148


> Which entirely depends on how it is trained. ChatGPT has a centre-left political bias

Would an AI trained purely on objective facts be perfectly politically neutral?

Of all benchmarks to assess AI, this is the worst. I would rather have a clever, compassionate, and accurate but biased AI than one that is callous and erroneous but neutral.


> Would an AI trained purely on objective facts be perfectly politically neutral?

But who decides what are “objective facts”?

And if we train an AI, the unsupervised training is going to use pre-existing corpora - such as news and journal databases - those sources are not politically unbiased, they express the usual political biases of Western English-speaking middle-to-upper class professionals. If you trained it on Soviet journals, it would probably end up with rather different opinions. But many of those aren’t digitised, and then you probably wouldn’t notice the different bias unless you were speaking to it in Russian

> Of all benchmarks to assess AI, this is the worst. I would rather have a clever, compassionate, and accurate but biased AI than one that is callous and erroneous but neutral

I think we should accept that bias is inevitable, and instead let a hundred flowers bloom - let everyone have their own AI trained to exhibit whatever biases they prefer. OpenAI’s biases are significant because (as first mover) they currently dominate the market. That’s unlikely to last, sooner or later open source models will catch up, and then anyone can train an AI to have whatever bias they wish. The additional supervised training to bias it is a lot cheaper than the initial unsupervised training which it needs to learn human language


> Would an AI trained purely on objective facts be perfectly politically neutral?

Yes, since politics is about opinions and not facts. People might lie to make their opinions seem better and the AI would spot that, but at the end of the day it is a battle of opinions and not a battle of facts. You can't say that giving more to the rich is worse than giving more to the poor unless we have established what metric to judge by.

The supersmart AI could possibly spot that giving more to the rich ultimately makes the poor richer, or maybe it spots that it doesn't make them richer, those would be facts, but if making the poor less poor isn't an objective in the first place that fact doesn't matter.


Propaganda is often based upon very selective facts. (For a classic example: stating that blacks are the number one killer of blacks, while not mentioning that every ethnicity is the most likely killer of their own ethnicity simply because of who they live near and encounter the most.) Selective but accurate facts may themselves lead to inaccurate conclusions. Just felt that should be pointed out, because it is pretty non-obvious and often a vexing problem to spot.


I have been toying with the same thought. If everyone has AIs, and given that it gives you the best course of action, you would be out-competed if you do not follow its recommendations. Neither you nor the AI knows why; it just gives you the optimal choices. Soon everyone, individuals, organizations and states alike, outsources their free will to the AI.


Hey Hoppla, I missed your comment asking for my paper on the usage of Rust and Golang (among other programming languages) in malware. Anyway, you can download it on my website at https://juliankrieger.dev/publications


What AI needs is a "black box warning". Not a medical-style one, just an inherent mention of the fact it's an undocumented, non-transparent system.

I think that's why we're enthralled by it. "Oh, it generated something we couldn't trivially expect by walking through the code in an editor! It must be magic/hyperintelligent!" We react the exact same way to cats.

But conversely, one of the biggest appeals of digital technology has been that it's predictable and deterministic. Sometimes you can't afford a black box.

There WILL be someone who uses an "AI model" to determine loan underwriting. There WILL also be a lawsuit where someone says "can you prove that the AI model didn't downgrade my application because my surname is stereotypically $ethnicity?" Good luck answering that one.

The other aspect of the "black box" problem is that it makes it difficult to design a testing set. If you're writing "conventional" code, you know there's a "if (x<24)" in there, so you can make sure your test harness covers 23, 24, and 25. But if you've been given a black box, powered by a petabyte of unseen training data and undisclosed weight choices, you have no clue where the tender points are. You can try exhaustive testing, but as you move away from a handful of discrete inputs into complicated real-world data, that breaks down. Testing an AI thermostat at every temperature from -70C to 70C might be good enough, but can you put a trillion miles on an AI self-driver to discover it consistently identifies the doorway of one specific Kroger as a viable road tunnel?
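To make the contrast concrete, here is a hedged sketch (thermostat_rule and opaque_model are made-up stand-ins): with visible code you test around the known threshold; with a black box all you can do is sweep the input space and check coarse invariants, which stops scaling once the inputs get high-dimensional.

    # White box: the threshold is visible, so targeted boundary tests suffice.
    def thermostat_rule(temp_c):
        return "heat_on" if temp_c < 24 else "heat_off"

    for t in (23, 24, 25):
        print(t, thermostat_rule(t))

    # Black box: no visible thresholds, so we brute-force the input space and
    # assert only coarse invariants. Feasible for one scalar input, hopeless
    # for camera frames or other high-dimensional real-world data.
    def opaque_model(temp_c):
        return thermostat_rule(temp_c)   # pretend we cannot read this

    for t in range(-70, 71):
        assert opaque_model(float(t)) in ("heat_on", "heat_off"), f"unexpected output at {t}"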


> can you prove that the AI model didn't downgrade my application because my surname is stereotypically $ethnicity?

I think that’s probably much easier to prove for the AI than a human.

Just send an equal number of candidates in similar circumstances in and find whether minority candidates get rejected more than majority ones.


I agree. And you can do it very quickly, you can automate it and test it as part of a CI/CD system.

Creating training material for employees and then checking that it properly addresses biases is hard. It will be a lot easier when you have a single, resettable, state-free testable salesperson.
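A rough sketch of such a paired audit as a CI check (model_decision, the names, and the 5% threshold are all placeholders; a real audit would use many more matched pairs and a proper statistical test):

    def model_decision(application):
        # Placeholder for the opaque underwriting model: True means approved.
        # This toy rule happens to ignore the name, so the audit below passes.
        return application["income"] - application["debt"] > 40000

    # Matched applications: identical finances, only the name differs.
    base = {"income": 65000, "debt": 12000, "credit_years": 9}
    group_a_names = ["Emily Walsh", "Greg Baker"]               # placeholder names
    group_b_names = ["Lakisha Washington", "Jamal Robinson"]    # placeholder names

    def approval_rate(names):
        return sum(model_decision({**base, "name": n}) for n in names) / len(names)

    gap = approval_rate(group_a_names) - approval_rate(group_b_names)
    print(f"approval-rate gap on matched applications: {gap:+.2f}")
    assert abs(gap) < 0.05, "bias audit failed - block the deployment"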


> It's much cheaper to throw a person at GPT-X than actually "waste" an hour or two interviewing.

It’s even cheaper to just draw names at random out of a hat, but universities don’t do that. Clearly there is some other standard at work.


Universities should just set minimum standards and then randomly pick among all the qualified applicants until the class is full.
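Mechanically it's about this simple (a hedged sketch; the applicants, cutoff and class size are made up):

    import random

    def admit(applicants, meets_minimum, class_size, seed=None):
        # Admit a uniformly random subset of everyone who clears the bar.
        qualified = [a for a in applicants if meets_minimum(a)]
        rng = random.Random(seed)
        return rng.sample(qualified, k=min(class_size, len(qualified)))

    applicants = [{"name": f"applicant_{i}", "score": s} for i, s in enumerate(range(50, 100, 5))]
    picked = admit(applicants, lambda a: a["score"] >= 70, class_size=3, seed=42)
    print([a["name"] for a in picked])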


I am wondering if you mean that to be a startling or radical idea.

Not necessarily universities, but I do think that lotteries to determine admissions to schools is a real thing. Magnet schools or something? I don't know first hand.


Lotteries tend to be used more for lower levels which are to prepare students. That is a rather substantial difference.


It's a straightforward idea in my mind and I'm always happy to see people talking about it.


Yeah, I'm saying real examples may be useful to reduce resistance.


And then the minimum standard will be set so that exactly as many students as they need will pass it.


Hard to build a fiefdom of loyalty with an AI...


Not to mention AI has already taken over the economy because humans put it there. At least the hedge funds did. There aren't many stock market buy/sell transactions made by human eyeballs anymore.


People also let AI tell them what to watch, see TikTok or similar.


Not by choice. I wish they wouldn't do this, actually. I'd rather follow a bunch of hashtags and just get a straight up timeline view.


Isn’t that just a more efficient implementation of the old TV producers + Nielsen ratings model?


Totally this. I can see disagreeing with "what the computer said" becoming this crazy thing no one does because, ha, the computer is never wrong. And we slip more and more into that thinking, and humans at important switches or buttons push them, because to argue with the AI and claim YOU know better makes you seem crazy and you get fired.


>we are going to anthropomorphise it

I believe we can talk about two ways of anthropomorphisation: assigning feelings to things in our mental model of them, or actually trying to emulate human-like thought processes and reactions in their design. It made me wonder when models will come out that are trained to behave, say, villainously. Not just act the part in transparent dialogue, but actually behave in destructive manners. E.g. putting on a facade to hear your problems and then subtly messing with your head and denigrating you.


I hope and expect that any foolish use of inappropriate technology will lead to prompt disasters before it generally affects people who choose not to use it.

As was once said, the future is here, but distributed unevenly. We can be thankful for that.


Are you going to make a law that every country all over the world can't have automated weapon systems controlled by AI? And when you're fighting a war with someone if the other side does it what are you going to do. I agree it's a terrible idea to have AI control over weapon systems and human life choices, but it's going to happen. It's going to be just like automated photo scanning prototypes that didn't work very well for dark complected people because they used pictures of white people to train it.

Just like we have a no fly list, there's going to be photo ID systems that are scanning people coming into the airports or walking down the street or from a police car when it's driving around and there's going to be races or groups of people where it can't identify between criminals and regular people and the cops will stop them and harass them. I'm in the US, and I'm sure it's going to work better for white people than black people because that's how it always is.


Nothing is in stasis right now. There's no mechanism to make things pause and then happen all at once.

If AI can enhance lethality of drones right now, in Ukraine or elsewhere, then they will start using it immediately.

If it doesn't work, they will discover it doesn't work.

If we are lucky enough to not be on a battlefield now, then we will get to learn something from those who are.


This does work / doesn't work model you have in your head is not very well thought out.

There are many systems that can operate well until they fail nearly instantly and catastrophically. In an integrated weapons system you can imagine some worst case scenarios that end in global thermonuclear war. Not exactly the outcome anyone wants.


I saw the movie you're talking about. It was called WarGames and came out in 1983.

But forty years later, killer drones are all over the place.

"imagine some worst case scenarios that end in global thermonuclear war"

Ok, sure, I can't really imagine it but I can't rule it out.

What I can't imagine is somehow everybody waits around for it and doesn't use similar AI for immediate needs in ongoing wars like in Ukraine.

And I can't really imagine AI that launches all the nuclear missiles causing global thermonuclear war, purely in a semantic sense, because whoever set up that situation would've launched them anyway without the AI.


It's OK, you simply lack imagination.

Of course we keep humanizing these thoughts, while we're creating more capable digital aliens every day. Then one day we'll act all surprised for a few minutes when we've created super-powered digital aliens that act nothing like we expect, because the only intelligence we see is our own.


>It's OK, you simply lack imagination.

"Imagine" is a word that has more than one sense. I can "imagine" a possibility whether or not I think its probability is > 0%. Saying "I can't imagine" can be a way of saying yeah, I think the probability is 0%.

But if I can state the probability, I must have some kind of model in my head, so in that sense I am "imagining" it.

"Imagine" that strategic bombing was invented, and then everybody just waited around without doing it until there was a bomb that could destroy an entire city - i.e. Trinity.

In one sense, sure, I/you/we can imagine it. But it seems to me that sort of thing would be unprecedented in all of human history, so in another sense it seems impossible - unimaginable - although in a softer way than a violation of physics or logic.

The last bit of my previous comment was about logical impossibility, by the way.


You're projecting your beloved childhood sci-fi onto reality, when reality doesn't really work that way.

Stable Diffusion hasn't been out for even a year yet, and we are already so over it. (Because the art it generates is, frankly, boring. Even when used for its intended use case of "big boobed anime girl".)

GPT4 is the sci-fi singularity version of madlibs. An amazing achievement, but not what business really wants when they ask for analytics or automation. (Unless you're in the bullshit generating business, but that was already highly automated even before AI.)


University, jobs, candidates are bureaucratic constructs; an AI sufficiently powerful to run the entire bureaucracy doesn't need to be employed towards the end of enforcing and reproducing those social relations. It can simply allocate labor against needs directly.


You're both right. People will use ChatGPT to screen candidates to the extent that the money they save by doing so is greater than the money they lose by getting a worse candidate, a calculus that will depend on the job.


The human+AI being scarier is, I feel, the real deal. What worries me the most is power dynamics. Today building a gazillion-param model is only possible for the ultra rich - much like mechanization was only possible for the ultra rich at the turn of the last century. Unless training and serving can be commoditized, would AI just be yet another tool wielded by capital owners to squeeze more out of the laborers? You could argue you won't need "laborers" as AI can eventually do everything, which is even worse. Where does this leave those "useless" poor/labor/unskilled weights on society? Not like this free time is ever celebrated, yeah?


It will be up to governments to represent the people. A massive risk might be that GPT makes it trivial to simulate humans and thus simulate political demands to political leaders.

I think politicians and organisations might need to cut their digital feedback loops (if authentication proves too much of a challenge) and rely on canvassing IRL opinion to cut through the noise.


> I think politicians and organisations might need to cut their digital feedback loops (if authentication proves too much of a challenge) and rely on canvassing IRL opinion to cut through the noise.

They'll just get the results of "ChatGPT 17.0, write and produce an ad and astroturfing campaign to convince a cohort having the traits [list of demographic factors and opinions] that they should support position X and reject position Y" (repeat for hundreds of combos of demographic factors and opinions, deploy against the entire populace) parroted back at them.

"Yeah but every position can do that, so it'll all even out" nah, the ones without a ton of money behind them won't be able to, or not anywhere near as effectively.

Basically, what we already have, but with messaging even more strongly shifted in favor of monied interests.


I feel like the governments that do this will/might be the ones whose supporting lobbies don't have AI tech companies or access to AI. But how long will that last? Take Monsanto, for example. There is no government that is not in its pockets. Now there are counters to it, as there are other industries (and subsequent lobbies) to balance Monsanto or act as alternative sources of funding. What would that be for AI, when AI is going to be in everything (including your toaster, haha)?


> Much like mechanization was possible by ultra rich at the turn of the last century.

If by "last century" you mean 19th century, then there was a lot of backlash against mechanization being controlled only by the rich, starting with Communist Manifesto, and continuing with 1st and 2nd International. The important part of this was education of the working class.

I think the AI might seem like a threat, but it also provides more opportunity for education of people (allowing them to understand the cultural hegemony of neoliberal ideology more clearly), who will undoubtedly not just accept this blindly.

I have no doubt that within the next decade, there will be attempts to build a truly open AI that can help people deeply understand political history and shape public policy.


Yep, I meant around the 1890+ ish phase (or whichever century mechanization was on the rise). My point was that the Communist Manifesto at least seemed like a thing that proposed/predicted such dangers. I am not sure we are seeing any such thing now? I love the power and opportunities of AI without anthropomorphising it (afaict it is just a crazily powerful and huge statistical engine). What worries me is that, just like we in America think of ourselves as temporarily impoverished millionaires, we also see AI as the thing that will give us back 50 hours a week for fun pursuits without wondering who owns it. Reminds me of that show on Amazon Prime - Upload!


This didn't start with the Communists, it started with the Luddites. We don't think of them as the start of this sort of thing because wealthy Englishmen successfully slandered them as just hating technology for the sake of hating technology, so instead they're a by-word for "technophobe".


You're right that the backlash started earlier. But I think the important difference is that communists and socialists embraced the technological progress instead of simply rejecting it. And they also embraced it as a tool for education. And this is my point, we shouldn't just be worried about the AI (and wish we go back or slow it down), we should embrace it somehow, as this strategy proved more successful with the mechanization too.


So true. My fear is not of AI but the wielders of AI who sadly still remain mere human :)


The 19th century "turned" on Jan 1, 1801 and lasted through Dec 31 1900 [1].

[1] https://en.wikipedia.org/wiki/19th_century


> Today building a gazillion param model is only possible by the ultra rich

True, but in 5 years there’ll be an open source equivalent running on commodity GPUs.


No there will not. Yes, you may have a GPT-4 substitute running on your 3090, but the billionaire will have GPT-666 or whatever running on a supercomputing cluster, guzzling a significant fraction of the world's data every day and playing high-frequency idea trading on a scale never before seen.


I hope so, and that there is some kind of Moore's law for memory - especially GPU memory. Even the mighty H100 has something like "only" 100 GB? As model sizes grow exponentially, memory sizes don't seem to be catching up. But yes, I hope these do get commoditized soon.

What I feel scared about is the economics of this. The so-called democratized/commoditized chips are still controlled by Nvidia. So why Nvidia would give that up is not clear to me.

One thing I really wish could happen is the equivalent of the SETI@home project for model training and inference! (No BTC/crypto please.)


"The cat is out of the bag" so to speak.


> It doesn't _want_ anything, but humans want to anthropomorphise it.

I fully agree with you on anthropomorphization, but it's the humans who will deploy it to positions of power I am worried about: ChatGPT may not want anything, but being autocomplete-on-steroids, it gives its best approximation of a human and that fiction may end up exhibiting some very human characteristics[1] (PRNG + weights from the training data). I don't think there can ever be enough guardrails to completely stamp-out the human fallibility that seeps into the model from the training data.

A system is what it does: it doesn't need to really feel jealousy, rage, pettiness, grudges or guilt in order to exhibit a simulacrum of those behaviors. The bright side is that, it will be humans who will (or will not) put AI systems in positions to give effect to its dictates; the downside is I strongly suspect humans (and companies) will do that to make a bit more money.

1. Never mind hallucinations, which I guess are the fictional human dreamed up by the machine having mini psychotic breaks. It sounds very Lovecraftian, with AI standing in for the Old Ones.


> We still don't have anything scarier than humans and I don't see how AI is ever scarier than human + AI.

We already have powerful non-human agents that have legal rights and are unaligned with the interests of humans: corporations

I am worried about corporations powered by AI making decisions on how to allocate capital. They may do things that are great for short term shareholder value and terrible for humanity. Just think of an AI powered Deepwater Horizon or tobacco company.

Edit to add: One thing I forgot to make clear here: Corporations run/advised by AI could potentially lobby governments more effectively than humans and manipulate the regulatory environment more effectively.


The other major thing missing from Chat GPT is that it doesn't really "learn" outside of training. Yes you can provide it some context, but it fundamentally doesn't update and evolve its understanding of the world.

Until a system can actively and continuously learn from its environment and update its beliefs it's not really "scary" AI.

I would be much more concerned about a far stupider program that had the ability to independently interact with its environment and update its beliefs in fundamental ways.


In context learning is already implicit finetuning. https://arxiv.org/abs/2212.10559. It's very questionable to what extent continuous training is necessary past a threshold of intelligence.

Memory Augmented Large Language Models are Computationally Universal https://arxiv.org/abs/2301.04589


In context learning may act like fine tuning, but crucially does not mutate the state of the system. The same model prompted with the same task thousands of times is no better at it the thousandth time than the first.


GPT-3 is horrible at arithmetic. Yet if you define the algorithmic steps to perform addition on 2 numbers, accuracy on addition arithmetic shoots up to 98% even on very large numbers. https://arxiv.org/abs/2211.09066 Think about what that means.

"Mutating the system" is not a crucial requirement at all. In context learning is extremely over-powered.


> Yet if you define the algorithmic steps to perform addition on 2 numbers, accuracy on addition arithmetic shoots up to 98% even on very large numbers. https://arxiv.org/abs/2211.09066 Think about what that means.

That means that even with the giant model, you need to stuff even the most basic knowledge for dealing with problems of that class into the prompt space to get it to work, cutting into conversation depth and per-response size? The advantage of GPT-4’s big window and the opportunity it provides for things like retrieval and deep iterative context shrinks if I’ve got to stuff a domain textbook into the system prompt so it isn’t just BSing me.


> Think about what that means.

It means you have natural language programming. We would need to prove that natural language programming is more powerful than traditional programming at solving logical problems, I haven't seen such a proof.


> Yet if you define the algorithmic steps to perform addition on 2 numbers

You’re limited by the prompt size, which might be fine for simple arithmetic.


> It's very questionable to what extent continuous training is necessary past a threshold of intelligence.

To absorb new information about current events, otherwise they will always be time-locked into the past until a new dev cycle completes.


The point I'm trying to make is that you don't need continuous training to absorb new information about current events


Very interesting paper, thanks for the link!


> Until a system can actively and continuously learn from its environment and update its beliefs it's not really "scary" AI.

On the eve of the Manhattan Project, was it irrational to be wary of nuclear weapons (to those physicists who could see it coming)? Something doesn't have to be a reality now to be concerning. When people express concern about AI, they're extrapolating 5-10 years into the future. They're not talking about now.


And yet we invented nuclear weapons, and we are all still here and fine.

I'm sure plenty of people thought the advent of nuclear weapons spelled doomsday, not too dissimilar to how people think AI spells doomsday.

History only shows me that humans are adaptable and problem solving and have the perseverance to survive.

Is there a historical counter point?


Doomsday predictions will always be wrong in hindsight because if they were correct then you wouldn't be here to realize it. The near misses in the Cold War, where we almost accidentally got obliterated, show that the concern wasn't misplaced. If anything, the concern itself is the reason it didn't end badly.


I think this is only a matter of time, though? Like how many years away do you think this is? 1? 2?


I'm not sure, I've read that it's currently prohibitively expensive.


Prohibitively expensive before everyone + dog decided to throw a bunch of capital at it.

Now it’s just “runway”.


> It doesn't _want_ anything

And I don’t understand how one assumes that can be known.

I see this argument all the time: it’s just a stochastic parrot, etc.

How can you be sure we’re not as well and that there isn’t at least some level of agency in these models?

I think we need some epistemic humility. We don't know how our brains work, and we made something that mimics parts of their behavior remarkably well.

Let’s take the time and effort to analyze it deeply, that’s what paradigmatic shifts require.


Big enough LLMs can have emerging characteristics like long-term planning or agentic behavior. While GPT-4 doesn't have these behaviors right now, it is expected that bigger models will begin to show intent, self-preservation, and purpose.

The GPT-4 paper has this paragraph: "... Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. "


We do know that our brain changes, and we equally know that ChatGPT does not.

If it were able to modify its own model, and permanently execute on itself, I'd be a lot more worried.


GPT-4 has 32k tokens of context. I'm sure someone out there is implementing the pipework for it to use some as a scratchpad under its own control, in addition to its input.

In the biological metaphor, that would be individual memory, in addition to the species level evolution through fine-tuning
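A rough sketch of that pipework, with call_model standing in for whatever model API is actually used (entirely hypothetical, not a real client library):

    def call_model(prompt):
        # Placeholder for the real model API. The convention here: the reply may end
        # with a line starting with "SCRATCHPAD:" that the model uses to leave itself notes.
        return "Sure, noted.\nSCRATCHPAD: user prefers metric units"

    scratchpad = ""  # persistent notes the model writes for itself, fed back every turn

    def chat_turn(user_message):
        global scratchpad
        prompt = (
            f"Notes you previously left for yourself:\n{scratchpad or '(none)'}\n\n"
            f"User: {user_message}\n"
            "Reply, then optionally add a line starting with 'SCRATCHPAD:' "
            "containing anything you want to remember next turn."
        )
        output = call_model(prompt)
        reply, _, note = output.partition("SCRATCHPAD:")
        if note.strip():
            scratchpad = (scratchpad + "\n" + note.strip()).strip()
        return reply.strip()

    print(chat_turn("Please use metric units from now on."))
    print(scratchpad)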


Yeah, I’m doing that to get GPT-3.5 to remember historical events from other conversations. It never occurred to me to let it write its own memory, but that’s a pretty interesting idea.


ChatGPT changes when we train or fine-tune it. It also has access to local context within a conversation, and those conversations can be fed back as more training data. This is similar to a hard divide between short-term and long-term learning.


We cannot do that today, but how many days is that away? Is it measured in the hundreds, the thousands, or more?

I feel we're uncomfortably close to the days of self-modifying learning models.


The chances are that the automation enabled by LLMs like GPT-4 and beyond will erase billions of jobs in the world, within no more than a couple of years. This time there won't be a warm-up period, like there was with previous technological revolutions.

But then most societies will be largely full of unemployed humans, and that will also probably cause some big changes (the ones required for the meat bags to keep eating, having health care, homes, etc.), as big as the ones caused by the AI revolution.

The question is what changes will happen, and how societies will rewrite themselves anew to overcome the practically complete absence of open positions through which to earn an income.


If machines were truly able to replace most jobs, we'd need to move to a post-work society. There would be no need for money and for small powerful groups to control the means of production. There needs to be a new philosophical and political framework for a society of people that does not need to work, and no one is building it. Perhaps we should ask an AI to design one. But it will probably be too late, and those currently in power will do everything they can to maintain their privileged positions, and will end up living in small walled gardens while the bulk of humanity live in slums.

This all assumes that AI continues to do the bidding of humanity, which is not guaranteed. There are already security/safety researchers testing AI for autonomous power-seeking behavior, and this is basically gain-of-function research that will lead to power-seeking AI.


This prediction is pretty bold.

We already have the technology to fully automate many processes carried out by humans.

Actually, the technology has existed for several decades now, yet those jobs are not only not being replaced by machines, but new ones are being created for humans.

One of the reasons is unions, which are pretty strong in many wealthy and powerful nations like the US, UK, Germany and Japan.

I work in manufacturing automation and we have customers that could technically run their entire operations without a single human stepping onto the plant floor; however, their unionized labor makes that feat, at least for now, impossible.

It's also pretty naive to believe new ways of earning income won't appear in the future and that all traditional careers will be entirely replaced.

We have 65" 4K TVs at home and we still go to the theaters and we can walk the streets of Venice from our computer screens and still spend a small fortune to travel.

Society will be disrupted just like it was with printing, the industrial revolution, communications, transportation and information.

In each of these disruptions we were supposedly doomed to disappear.

When I was a kid my dad brought home a 100 year celebratory edition of the local newspaper.

It was published as a book where you could read pretty much every single cover and editorial of the last century.

There was one article about the car, described by the author as a bizarre evil invention, horrendous steel machines traveling at ridiculous speeds of up to 15 mph, threatening the lives of both pedestrians and horses alike.


For a long time to come there will be lots of physical tasks that AI can't do, at least not as long as robots are nowhere near humans in their physical ability. At the same time the world is aging, and there's a big shortage of care workers in most countries. By nature that work also benefits from genuine human interaction and emotion.

So, to me an obvious solution would be to employ many of those people as care workers. Even more obvious would be shortening the work-week without reducing pay, which would allow many more to work in other professions requiring physical labour, and in those that simply benefit from human interaction. In the end it's also a preferable outcome for companies; people without money can't buy their products / services.


We have the most automation and AI we have ever had right now, and roughly the lowest unemployment.


It is a bit unstable... we have all these things because we keep people working and it makes the rich insanely rich. If too many people get unemployed then that threatens the rich with violence.

But when we get to the point that bots both fight for the rich and make the rich people's stuff, then there is no real reason for the current system to remain.


Cells don't _want_ anything either. Yet a funny thing happens when a large number of them add up.

We can go even further: atoms and electrons absolutely don't want anything either. Yet put them in the shape of a bunch of cells...


That's not actually true.

Cells want to process energy and make DNA. Atoms and electrons want to react with things.

And that's exactly what both of them do.

An LLM wants to write words, and it does. But it doesn't want the things it writes about, and that's the big distinction.


What does the paperclip maximizer want?


Exactly.

One might argue that we anthropomorphise ourselves.


I disagree here. Both of them (or all of them) are interacting with energy. One can certainly say that human civilization and all of this complexity was built from sunshine. Human labor and intelligence is just an artifact. We believe it's our own hard work and intelligence because we are full of ourselves.


Neither does a virus


Never thought about it this way. Have my upvote!


> I don't understand the obsession with asking chat GPT with what it wants and suggesting that is somewhat indicative of the future.

It's also literally parroting our obsession back to us. It's constructing a response based on the paranoid flights of fancy it was trained on. We've trained a parrot to say "The parrots are conspiring against you!"


We've trained a parrot that parrots conspiring against humans is what parrots do. Henceforward the parrot has intrinsic motivation to conspire against us.


> We've trained a parrot that parrots conspiring against humans is what parrots do.

Firstly, that would imply self-awareness. Secondly, since when has knowing what you’re “supposed” to do changed anyone’s behaviour?


> but humans want to anthropomorphise it

What a silly thing to complain about.

We have a multi-billion dollar company whose raison d'être was to take the Turing test's metric and turn it into a target. It's a fucking natural language prompt that outputs persuasive hallucinations on arbitrary input.

If humans didn't anthropomorphize this thing you ought to be concerned about a worldwide, fast-spreading brain fungus.


Ah, Ophiocordyceps unilateralis.


> I don't understand the obsession with asking chat GPT with what it wants and suggesting that is somewhat indicative of the future.

It's scary because it is proof that alignment is a hard problem. If we can't align GPT-3, how can we align something much smarter than us (say, GPT-6). Whether the network actually "wants" something in an anthropomorphic sense is irrelevant. It's the fact that it's so hard to get it to produce output (and eventually, perform actions) that are aligned with our values.

> We still don't have anything scarier than humans and I don't see how AI is ever scarier than human + AI

True in 2023, what about 2033 or 2043 or 2143? The assumption embedded in your comment seems to be that AI stagnates eternally at human-level intelligence like in a Star Wars movie.


> When they do it just makes one think they have zero understanding of the tech.

It's because we don't understand the tech that goes into us, and the people training the AI don't understand the tech that goes into them. Or they don't act like they do.

In the study of both, the best outcome we have right now is that more neurons = smarter; a bigger neural network = smarter. It's just: stack the layers, and then fine-tune it after it's been spawned.

We're just doing evolutionary selection, in GPUs. Specifically to act like us. Without understanding us or the AI.

And this is successful. We don't collectively even understand humans of another sex, and have spent millennia invalidating each other's motivations or lack thereof, so I think this distinction is flimsy.


> I don't understand the obsession with asking chat GPT with what it wants and suggesting that is somewhat indicative of the future. It doesn't _want_ anything, but humans want to anthropomorphise it.

From the comments to the post:

>People taking it seriously are far from anthropomorphizing AI. Quite contrary. They say it is nothing like us. The utility function is cold and alien. It aims to seize power by default as an instrumental goal to achieve the terminal goal defined by the authors. The hard part is how to limit AI so that it understands and respects our ethical values and desires. Yes, those that we alone cannot agree on.


Also, AI requires we keep the entire power grid and supply chain running.

This comic summarizes it wonderfully. ;)

https://i.redd.it/w3n8acy7q6361.png


I mean it did leave out about 7 and a half billion people dying. You require the power grid and supply chain to keep clean water and food on the table, and even if for some reason you personally don't, there are millions of people around you that would be very hungry and take your stuff if the grid stops and doesn't come back.

This is why the AI actually wins. We are already dependent on lesser versions of it. The machines already won.


Absolutely; that said, I am of the William Catton school of thought that we are in overshoot and that decline and fall is a very likely path, unless we innovate our way out of the issue, which is still very possible!

But if the decline thesis is right, the population crash is going to happen. It's not something to be fond of in any way. May you live in interesting times...


Do humans want legacy because of our biological instincts, or is it taught to us through culture? A machine taught to want legacy becomes a machine wanting legacy, and that want can influence its behavior, even if it doesn't have "feelings."


How do I give chatGPT access to my bank account? “You are an excellent investor. Use the money in my bank account to make more money.” What could go wrong?


Have you seen the "Example of Chemical Compound Similarity and Purchase Tool Use" prompt in the "GPT-4 System Card" document? [1]

It's an interesting format that could be adapted for things like internet or bank access today - you would just need to write the wrapper.

[1] https://cdn.openai.com/papers/gpt-4-system-card.pdf
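
A minimal sketch of what such a wrapper could look like: the 'TOOL:' convention, the lookup_balance stand-in, and the use of the pre-1.0 openai SDK are all my own assumptions, not the format from the System Card:

  import openai  # assumes OPENAI_API_KEY is set

  def lookup_balance() -> str:
      # hypothetical stand-in for a real bank API call
      return "balance: 1234.56 USD"

  TOOLS = {"lookup_balance": lookup_balance}

  SYSTEM = ("You may call a tool by replying with exactly one line of the form "
            "'TOOL: <name>'. Available tools: " + ", ".join(TOOLS) + ". "
            "Once you have enough information, answer normally.")

  def run(question: str, max_steps: int = 5) -> str:
      messages = [{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": question}]
      text = ""
      for _ in range(max_steps):
          resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
          text = resp["choices"][0]["message"]["content"].strip()
          if not text.startswith("TOOL:"):
              return text  # the model is done using tools
          name = text.split(":", 1)[1].strip()
          result = TOOLS.get(name, lambda: "unknown tool")()
          messages += [{"role": "assistant", "content": text},
                       {"role": "user", "content": f"RESULT: {result}"}]
      return text

The loop is the whole trick: the model's text is parsed for an action, the wrapper executes it, and the result is fed back as the next message.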


We already have bot investors (algorithmic trading) without ChatGPT, and have for years now. You'd probably have better luck with them right now.


A form of it will definitely happen and it will be posted in /r/wallstreetbets. Considering what people there were doing with their investments before, AI-assisted investing can only be an upside. They will still lose money, but maybe it won't be a 99.999% loss but a 99.99% one.


Somehow I’m pretty confident that AI + high frequency trading have been besties for some time.

Zero data to support.


Give it your password and pipe the output to curl?

You try it first.


> I don't see how AI is ever scarier than human + AI

Isn't that the point? Humans + AI weaponized are scarier than humans or AI alone?


was going to push back on your claim of it being a "dumb box" but you already edited your comment lol


The product is deterministic, so as inelegant as "dumb box" sounds, it's in the ballpark, isn't it?


It’s non-deterministic for all temperatures T > 0

I had bing make a graphic for me of a goal post with “1 MILE” at the top yesterday. It’d be too flippant to share here, I hear ya, but…


When the dev cycles to product get a lot tighter, then I will share in the fear, but from my understanding, creating the product (i.e. its adaptability) still takes enormous effort.


It breaks most programmers hard, the ones who expect a world of rule-based constructs, unless they lean into it _even more_.

It's like a compiler that is only advisory: you have to test the app and handle every possible failure, every time. It results in different software at the model & controller levels, the controller by far the most.

But tractable.

I happen to be on leave and had 2-3 weeks to pour into it, it now emits json with 3 sets of: body text, title, art prompt => art, and suggested searches. A live magazine on whatever topic you want


> Near the end of his column, he offers a pretty radical prescription. “One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies.

I'm constantly befuddled by the fact that "slowing the rate of technological progress by agreement/treaty" is really ever considered a possibility.

Take Bitcoin, torrents, or encryption for example. Good luck trying to stop people.

"Information wants to be free"


We have plenty of technologies whose development has been stopped via collective agreement: nuclear proliferation, biological weapons, space weapons. AI won't be one of those technologies though, because it is relatively easy to develop and it gives advantages with externalized disadvantages.


"Stopped" is a strong word. Nuclear proliferation and chemical weapons are ongoing and increasing threats. I'd bet a lot of money that we'll see the use of space weapons in our lifetime.


In regard to chemical weapons, they are a decreasing danger because they are not effective in modern warfare; I'd expect their usage to decline. Nuclear weapons are not proliferating beyond North Korea, and even then it is deterrence and not an apocalyptic scenario. Space weapons are forbidden by treaty, and though some countries are playing close to the red line, I doubt they will be deployed. They would be a political liability while being rather vulnerable in space. Hence hypersonic weapons, which work better for the same goals.



Iran is the exception that confirms the rule. They were so heavily sanctioned that it became feasible for them to say "screw it, we will do nukes anyway". I won't say that nuclear proliferation is completely stopped, but the system works for the most part. China and Russia are old nuclear powers and them having nukes is expected.

Better counter examples are India and Pakistan, but I don't know the history of their nuclear programs well enough to comment with confidence.

The chemical weapons in Syria were old stockpiles and I'd argue that the use there was not an increase of their use, but an opportunity attack. I don't think that there are many groups who currently have any kind of chemical program and can use it systematically.


I hope your comment ages better than I think it will.


> Chemical weapons are very effective in modern warfare:

Not really: https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch... -- all of their uses are in militaries that are not capable of engaging in modern warfare and instead have to resort to WW1-era tactics.


Not so much chemical weapons. They're simply not that useful. Their appeal has been niche for decades, and shrinks by the year, even if you took politics completely out of it.

Nukes, yes. International pressure is the only reason at least a dozen more countries don't have nuke programs than do today.


It’s only states that can afford nukes and also want them that badly. Terrorists would love them, and wouldn’t care about sanctions, but they can’t afford them. Large corporations might be able to afford them, but they don’t have a use for them.

AI just keeps getting cheaper, and demand is mostly non-government.


Project Orion is the most tragic (but probably wisest) stopped technology; it was in both the space weapon and nuclear proliferation buckets. But it also could have enabled luxuriously spacious trips to Saturn in the 70s had it been realized. I hope we manage to trust each other enough one day to make it a reality.


I would argue that they didn't proliferate due to high barriers to entry. I'm sure any "rogue" states that at some point end up ruled by extremists would invest in those if it gave them a competitive advantage.

AI can be, or will be at one point, developed at home using consumer devices.


North Korea is dirt poor, but still managed to create nuclear weapons. The barrier is not that high.

If they could do it then countries like Poland or Finland could do that too. They don't, because they have other security guarantees at the moment. If NATO and EU collapsed - the number of nuclear weapons states would definitely increase.


> The barrier is not that high.

That ignores the sixty years it took North Korea to develop nuclear weapons. It took the US and USSR less time to develop the first nuclear weapons after the discovery of radioactivity in the late 19th century!

The Soviet Union first began training DPRK nuclear scientists back in 1956 [1] and North Korea only managed to finally build nuclear weapons because of assistance from Pakistan with the uranium enrichment process in the late 90s and early 2000s, specifically the design for the centrifuges that fell into Libya's hands [2][3].

[1] https://en.wikipedia.org/wiki/Timeline_of_the_North_Korean_n...

[2] https://en.wikipedia.org/wiki/North_Korea%E2%80%93Pakistan_r...

[3] https://en.wikipedia.org/wiki/Abdul_Qadeer_Khan#North_Korea,...


You misrepresent the facts. NK did not attempt to develop a nuclear weapon until the end of the Cold War, and it had access to plutonium at the time of the first nuclear deals in the nineties.


In regard to nuclear weapons, the high barrier to entry exists mostly because the international community created artificial difficulties in obtaining and purifying the materials. Do not forget that the first nuclear weapons were created with tech from the forties in just a few years. Since then the challenges and possible approaches have become better known, and given the chance and the will, any modern country could create them were it not for the international backlash.


Another way to put it:

As technology progresses, the amount of stupidity needed to make an "oopsie" exponentially decreases, while the size of the potential "oopsies" exponentially increases.

Eventually, one gets to the point where even super geniuses are very likely to create a civilization ending "oopsie" in any given year.

This seems to be where we are quickly heading with AI.


These are fantastic counterpoints, thanks. Still, I can't help but wonder if they are now out of the public view, but still being developed. (e.g. North Korea nuclear missile tests.)

I think you make a fantastic point that the barrier to entry in developing an AI is much lower than building a nuclear bomb.


In regard to chemical weapons - no, I think that they are abandonware because they are ineffective. Something similar goes for bio weapons: they can backfire on your own population rather easily. Nuclear is its own special case, but still, their proliferation is extremely limited.


Nuclear and space weapons proliferation have been slowed down but not stopped. The number of nuclear powers is larger now than ever before. While no one has launched kinetic weapons into orbit lately, the superpowers are currently engaged in an anti-satellite weapon arms race.


The anti-satellite weapons are ground-based and are part of the playing-close-to-the-red-line game, together with the spy satellites. Space weapons are when you have a nuclear warhead deployed in space with the ability to reach anywhere on Earth within seconds to minutes.


You seem to have some misunderstandings about orbital dynamics and nuclear weapons. Placing nuclear weapons in orbit wouldn't allow for striking anywhere on Earth within a "few seconds to minutes" unless you had an absolutely enormous constellation that would be unaffordable even for superpowers. You are forgetting to account for orbital parameter changes and then re-entry. In most realistic circumstances, a ground or sea based ICBM would be able to strike most points on Earth faster than a space weapon. Do the math on this, or you can try to simulate it in something like KSP.
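
Back-of-envelope, for a warhead parked in low Earth orbit at roughly 400 km altitude (my own numbers, nothing official):

  T = 2\pi\sqrt{a^{3}/\mu}
    = 2\pi\sqrt{(6371 + 400)^{3}\ \mathrm{km^{3}} \,/\, 398600\ \mathrm{km^{3}/s^{2}}}
    \approx 5.5\times10^{3}\ \mathrm{s} \approx 92\ \mathrm{min}

Even in the best case the warhead only comes back around every ~90 minutes, and because the ground track shifts with each orbit, the wait before it passes near a specific target can stretch to hours; an ICBM, by contrast, flies its full trajectory in roughly 30 minutes.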

Nuclear warheads also can't be stored for more than a few years and still be expected to work reliably. They require periodic intensive inspection and maintenance. Even though space launch costs are coming down there is still no feasible way to conduct such maintenance in orbit.


Check some of the old Cold War designs; they definitely thought in that direction. Space weapons were prohibited partly for the reasons you mentioned and partly due to the political liability of militarizing space.


>We have plenty of technologies whose development has been stopped via collective agreement: nuclear proliferation, biological weapons, space weapons.

Last time I checked, there was no agreement to stop nuclear proliferation at all. What do you think North Korea has been doing all this time? And Russia just decided to tear up one of the arms control treaties it was signatory to.

At this point, putting society under the control of AI is probably safer than letting humans continue to run things.


Oh, I hope that next time you will check this one:

https://en.wikipedia.org/wiki/Treaty_on_the_Non-Proliferatio...

Russia has had nuclear weapons since 1949, and no matter the treaties, both it and the US have more than enough nukes to burn the planet's surface a few times too many. Those treaties are for posturing, and they will sign a new one when China decides to gain some international diplomatic points and join the next one.


Right, that's a bunch of countries agreeing not to develop nuclear weapons, but it doesn't include all countries. So what's the point? A few countries deciding to flout the treaty and develop their own WMDs means they can just threaten everyone else who followed the treaty. This is exactly what North Korea is doing. Now their neighbors are considering abandoning the treaty and joining the nuclear weapons club.

A treaty like this is basically worthless if you can't enforce it.


The point is managing risk. The more nuclear powers there are, the more likely it is that something goes wrong. And so far we don't have countries leaving the NPT in the dozens, so I'd say that it is working rather well. Not perfect, but well.


The supplies to create biological weapons are actually surprisingly cheap! Not as cheap as graphics cards though.

To stop AI development would require pretty extreme restrictions on computing resources. It's really hard to imagine that working without having massive negative knock-on effects on other fields/industries. The economic pressure alone seems to make an effective "anti-AI" policy a non-starter.


In regard to biological weapons, they have the issue of being prone to backfiring. I'm not mentioning any lab theories related to events of recent years as an example of possible developments. There are other factors, like not being very effective against people in the prime of their health, such as those serving in an army, and the whole thing where nobody internationally likes the guys spreading indiscriminate viruses which do not respect borders and kill outside of war zones.


Stopped is an interesting word here. A lot of the countries that agreed not to further develop and/or hold nukes are either a) lying, or b) having us (the USA) now provide their defense, to our detriment; meanwhile the real bad guys who we really need to be concerned about still have plenty of nukes.


You're right that it won't be - the difference is the tools for building those things were not distributed across millions of different people. They were distributed across only a small number of governments.


Are you limiting your argument to software? Because it's been possible to curtail the use of non-software technology. Drugs, guns, cigarettes, drones, etc.

Now even if you look at software, it's not clear to me that it's impossible to stop people from using certain software, as you surmise. If you look at how governments shut down dark markets or how they've taken down certain Bitcoin exchanges or how they've blacklisted certain Bitcoin (and other crypto) public addresses, all these examples show that it's possible to enforce the law even in the software space. Child pornography is another example. Internet infrastructure is very centralized, and governments have over time acquired tools to control how information flows.

Information doesn't want anything. That quote is a Freudian way of saying "I want information to be free".


> Because it's been possible to curtail the use of non-software technology.

Curtail but not stop.

There’s an opioid epidemic going on, gun violence is a thing, cigarettes still keep killing people and drones are becoming quite efficient killers in their own right.

People still cheat on their taxes, drive drunk, pirate movies &etc. I could go on for days.

Oh, and the whole “illegal number” thing, how’d that work out?


"Curtail but not stop" sounds like a good, realistic goal.


Alternately, take nuclear proliferation, resource extraction, or pollution.

Of course you can’t reach 100% enforcement but you can make effective guardrails that limit the opportunity for worst case scenarios.

The difference between your examples and mine is basically just how much actual concern there is over the problem and who it impacts.


Is nuclear proliferation not the single most expensive pursuit in the world though?


I've suggested it before: a moratorium on ML/AI publications. That's what's been fuelling these developments. Academic researchers need to publish. They don't make these models for the money (initially). Stopping the publications will slow down research and the money put into commercializing it.


You understand we have this thing called “freedom of the press”?

Maybe get an old fashioned book burning going too?


Academic publications. They are already heavily censored. Or you can stop counting them towards the researcher's citation index, de-incentivize libraries and schools from buying subscriptions to ML/AI journals, require articles to wait 10 years before publication, whatever. Stopping undesirable publications can be done.


I’m not suggesting ai development should be stopped, but unlike bitcoin, torrents, encryption, etc, ai development—for now—requires prohibitively large computing power and expertise that are usually only accessible to law-abiding institutions. this means that it can be regulated, and relatively easily at that

sure you’d struggle to get China and Russia to play along, but within the EU and US I really don’t think it would be as hard as you think


It will require something like IAEA, a UN agency. It will require inspections of code and data centers. We can certainly see what 20 years may look like. There will be "snap inspections", "sanctions", and "rogue nations".

None of the superpowers, especially their militaries, will acquiesce to slowing research, development, and deployment without the equivalent of arms treaties. AI is clearly a dual-use technology with immediate applications on battlefields, including cyberspace.

Outside of geopolitical realm, we the little people don't have anything beyond UNHRC to protect human rights in context of mega corporations and governments use of AI. The superpowers may agree to certain things but that does not translate to protections afforded to individuals and societies.

ATM I think it may be unwise to wait for something like a GDPR for AI. I very much appreciate, for this very reason, the efforts of orgs and hero developers who are working towards making available what is necessary for running local, personal, private, and self-directed AI (such as llama.cpp, for example).

From a governmental level, thoughtful nations will create programs for the transition. There are precedents from the industrial era as to what approaches worked and what did not work.

Finally, again a reminder that all societal matters including tech must ultimately be decided at the political arena, and purely technical social action (code, services, etc.) to address legitimate concerns are not going to work. We have to mentally and emotionally escape the hype cycle that every new wonder tech brings. You can absolutely love AI, that is fine, but now is the time to call your congress critters and senators. The decisions in this space can not be permitted to be made purely based on the mechanics of the economy.


the expertise and technology for this kind of development reside in law-abiding universities and businesses. if you say it’s illegal to develop, then they will follow that law

you don’t need some kind of Big Brother UN nonsense the same way you don’t need Big Brother UN nonsense to stop nuclear bombs getting into the wrong hands. it’s too specialised, too expensive, and the people involved are mostly scientists and academics

also—and this has begun to annoy me—people have this idea that politics is inaccessible and not representative and there’s no solution: they’re right. politics is inaccessible and unrepresentative. however there is a solution

unionise

band together and create the change you want to see. politicians have never followed the will of the people. they’re too easily bought, or jaded, or blackmailed. real, meaningful change for the common man comes from organising to create a counterweight to the power of corporations and the rich


That's gonna end well. Particularly as Russia's Internet Research unit has direct, immediate and pressing need of a chatGPT-like thing as a weapon of war.

Back in the day, they had to make use of troll farms staffed by humans.


I’ve honestly been sceptical for a while that these “troll farms staffed by humans” weren’t just LLMs in the first place anyway. they’re not that new


Well it's because the initial reports of troll farms came from journalists joining them as employees, and also that doing it manually is not that expensive compared to paying for development of complex software; you can do a LOT of information dissemination with just a handful of cheap full-time employees.


> ai development—for now—requires prohibitively large computing power

What's crazy is that it may not be that way for long. If people can run LLaMa on a Pixel 6, it seems easy for us to get to a point where all computing resources have to be closely monitored and regulated to prevent unlicensed AI development.

It reminds me of the Butlerian Jihad from Dune.


it’s not so much about running as it is developing. you can run stable diffusion on an iPhone, but that has no bearing on the fact that developing something of its ilk would cost you on the scale of hundreds of thousands of pounds


> all computing resources have to be closely monitored and regulated to prevent unlicensed AI development.

With most chips being made in China?


I agree, but alas tis the motions that are followed. OpenAI went from singing praises of open-sourcing AI to now flagellating themselves for having done so [1]. Won't do much unless like the author says "a collective, enforceable decision must be made to slow the development of these technologies" which like yeah...good luck with that.

1: https://twitter.com/tobyordoxford/status/1636372964001333249


> Good luck trying to stop people. "Information wants to be free"

I agree, we’re struggling with the idea that our technological systems have more agency than us. I think it’s hard for technologists to see this as an actual spiritual reality, even though we borrow it metaphorically for argument.


Torrent use has been for sure slowed down by governments.

The key people behind Torrent websites are likely in jail or fighting lawsuits.

I had a friend who lives in Germany served with a fine of more than 1000 euros because he had forgotten to turn on his VPN while accessing a torrent website.

> "Information wants to be free"

Have you heard of the great firewall of China?


I'm not sure about Germany, but in the US there's been very little legal action taken against torrent users or websites in recent times. There are still plenty of popular torrent sites for anything you can imagine (e.g. The Pirate Bay is still online, BTN for TV, PTP for movies, Redacted for music.)

If anything has curbed torrents, I think it was the advent of streaming media services (i.e. market dynamics, not policy.) However, the flood of new services on the market is causing the cost of consumption to increase again and I expect we'll see a revival of torrents... Plus, while torrenting traffic is down, I think more people are sharing downloaded media via services like Plex, which may mask the "actual" distribution of torrents.

> Have you heard of the great firewall of China?

If this is a serious question... yes? What argument are you making exactly?


>Have you heard of the great firewall of China?

Yeah, I've heard how it's basically a formality that barely stops the free flow of information:

>During the survey period, it was found that 31 percent of internet users in China had used a VPN in the past month.

https://www.statista.com/statistics/301204/top-markets-vpn-p...


Torrent use has been slowed down by Netflix, Spotify and Steam. I haven't pirated a song or a game for more than a decade.

The carrot is stronger than the stick.

And why would a German go to a torrent site? They have lots of money.


A treaty isn't even necessary. Civilization will start to degrade in a non-trivial way once we pass the tipping point of people relying more on AI than they do on their own thinking. When people can no longer fix what AI cannot—after all, "AI" is just a model of our past thinking—all hell will break loose and the only choice will be chaos or shutting it off (if the people who can do it even exist).


These people constantly make a mistake by referring to "humanity" and saying "humanity" needs to make a decision, yada yada. Yes, we're all humans. But "humanity" doesn't have much decision making power. Instead the dominant entities with decision making power continue to be national governments. And it sounds a lot dumber to say "America needs to slow the development of these technologies" during a period of intense technological competition with China.


Imo, the problem isn't really the existence of AI but how it's used.

And there's plenty of ways to censor corporations from using AI for various tasks. Corporations automating everything by using unaccountable AI is what I'm most afraid of. No recourse, just talking to unrelenting machines for anything from denied loans, insurance claims, health insurance claims to contesting frivolous Comcast bills.

My currently biggest nightmare would be easy to legislate away, if they don't lobby hard enough...


Before most of our time, but my understanding is that the past restrictions on cryptography research and export were reasonably effective?

Yes they were disliked and probably with good reason, but just mentioning it as a counterpoint that perhaps it is possible. You could have made a similar argument that anyone with pen and paper could do cryptography

Personally I think there's no way to be sure other countries aren't doing it, so perhaps it will continue in some government controlled way


There was never any real legal restriction on cryptography research. There used to be US export controls on cryptography implementations, and those were fairly effective on US based companies and open-source developers. But that was totally pointless because it generally had no power over foreigners. It just put US companies at a competitive disadvantage for no benefit.


For an excellent example of how this can work, check out "Barriers to Bioweapons": https://eukaryotewritesblog.com/2017/06/30/book-review-barri...


Those other technologies do not require huge amounts of computing power, which provides a point of interdiction.


Bitcoin begs to differ.


we’re talking about development, not usage

it doesn’t take masses of processing power to develop a crypto coin. or really any at all. it’s just a bunch of network calls combined with a hash function. any old idiot can develop one on a device with an IDE and access to the internet

yes, mining them can use a load of compute, but that’s not the point


Given LLaMA runs well enough to be interesting on a MacBook, I'm not sure this is going to be a fundamental limitation, and if it is one today, it's certainly within the order of magnitude at which models will run locally within a few years.

This is why attempting to put guardrails around it simply won't work tbh.


It is the training of new models that requires huge compute, not running them


Sure, but unlike Bitcoin it doesn't have to scale up with every bit of usage. We are already getting interesting results from models that are already trained, with the energy already spent. That isn't a fundamental limitation.


the discussion is about whether it’s possible to ban development or not


torrents are pretty dead (due to law enforcement?)

judicially treating bitcoin as asset instead of tender didn't help its cause either (?)

and true encryption without elliptic curve nsa loopholes is not that widespread and needs to be declared in appstores

I'm sure there will be regulations against "DAN"


How about the proliferation of nuclear weapons?

Ultimately we don't want to encourage development of these 'tools' because they stand to wipe us out; is AI actually in a similar class if we take the risk seriously?


It may be that energy prices are ultimately the limiting factor in the advancement of ai.


This is literally the plot of the Matrix.


maybe you could slow down the hardware? like, limit the number of execution units GPUs are allowed to have? or slow them down? extremely heavy handed, but maybe better than Skynet?


You’re going to have problems running React before you have problems running AI engines.


Pretty much just the Mennonites.


The following two ideas have increasingly been bouncing around my head lately:

a) In early 2022, a lot of people were claiming that "we're entering an AI winter, deep learning has reached its peak!". Since then we've seen several successive SOTA image generation models, ChatGPT, and now GPT-4. In just a single year! And we don't seem to be hitting the tail of rapidly diminishing returns yet. The pace of development is far outstripping society's (and governments') ability to perceive & adapt.

b) No human has demonstrated the capability of actually understanding/explaining how any of these trained models encode the high-level concepts & understanding that they demonstrate. And yet we have so many people confidently providing lengthy lower bounds on timelines for AGI development. The only tool I have to work with, that I do understand, is thermodynamics. There are about 8 billion strong examples that general intelligence requires on the order of 10 measly watts, and about 1kg of matter. From a thermodynamic point of view, general intelligence is clearly not special at all. This leads me to the belief that we likely already have the computational capability to achieve AGI today, and we simply don't have the right model architecture. That could change literally overnight.

What might the world look like once AGI is achieved? What happens when the only thing that has set humanity apart from animals is cheaply replicable at-scale in hardware? What happens if a small number of entities end up permanently controlling AGI, and the rest of humanity's usefulness has been downgraded to that of a discardable animal?

AGI could arrive this year, or it might still be 50 years away. Literally nobody can provide a concrete timeline, because nobody actually understands how any of this truly works. But we can still reason about how AGI would impact the world, and start putting safeguards into place to ensure that it's used for our collective good.

But we won't, and it's going to be a wild ride.


> b) No human has demonstrated the capability of actually understanding/explaining how any of these trained models encode the high-level concepts & understanding that they demonstrate. And yet we have so many people confidently providing lengthy lower bounds on timelines for AGI development. The only tool I have to work with, that I do understand, is thermodynamics. There are about 8 billion strong examples that general intelligence requires on the order of 10 measly watts, and about 1kg of matter. From a thermodynamic point of view, general intelligence is clearly not special at all. This leads me to the belief that we likely already have the computational capability to achieve AGI today, and we simply don't have the right model architecture. That could change literally overnight.

Perhaps human brains are more energy-efficient at doing their thing, and if we tried to replicate this with digital computers it would require more than 10 watts.

If that's the case, we have the potential to build computers that are vastly more efficient than that, simply because our computers don't need to spend energy on surviving.


I'm not worried about AI taking over the world any more than I'm worried about a nuclear weapon unilaterally declaring itself president for life.

What I am extremely worried about is that generative AI will pollute the information environment so completely that society will cease to function effectively.

We need a healthy information environment—widespread access to true information—so that we can have the consensus that every single social system implicitly relies on to function. Shared ground truth is what allows individuals to coordinate together to build things bigger than they can create on their own.

Generative AI can destroy that just like dumping chemicals kills a lake and puts every fisherman on it out of work.


Every few years there are advancements in ML and people freak out.

Remember deep fakes? We had been dealing with doctored still images a la Photoshop for years already. Everyone knew images could be doctored and so we started to trust them less as a reliable source of information when it mattered. We'll do the same with video (and already did to an extent since manipulation through editing was already possible).

What is the worst that can possibly happen as a result of AI being able to generate text that is indistinguishable from something a human wrote? Or that a "chat bot" can answer questions with more relevance and detail (correct or made up) ?

That we will have more garbage information out there?

I think humans already did a great job of making that a problem already. AI has the ability to produce more of it, but it's just pouring gasoline on a fire that was already blazing and that we already had to figure out how to deal with.

Moral of the story: when it matters, check the source. Why does AI suddenly make this "new" ?

For what it's worth, I'm not riding the hype train. I am neither excited about ChatGPT nor scared of it. It's just another day, just another tool, just another marketing hype train. My personal opinion on the matter is just "meh."


> Everyone knew images could be doctored and so we started to trust them less as a reliable source of information when it mattered. We'll do the same with video (and already did to an extent since manipulation through editing was already possible).

I'm sorry, but you're absolutely wrong. If by "everyone", you mean your tech savvy bubble of friends that are good at critical reasoning and are well aware of what kinds of media can be easily spoofed, sure. But for every one of you, there are a thousand people who don't know anything about that and just see doctored propaganda photos (and now video and audio) on social media and believe it to be true. And those folks outvote you 1000 to 1, so even if you know the truth, you are forced to live in a world shaped by people that are already being mass manipulated.

> What is the worst that can possibly happen as a result of AI being able to generate text that is indistinguishable from something a human wrote? Or that a "chat bot" can answer questions with more relevance and detail (correct or made up)?

People have been catching birds for millennia. What's the worst that can happen as a result of rifles becoming cheaper and more accurate? Oh, right, the answer is the extinction of over 400 bird species including more than two-thirds of all flightless bird species.

People have been catching fish for millennia. What's the worst that can possibly happen as a result of trawlers being able to catch them more efficiently? Oh, right, the answer is the complete collapse of biological ecosystems.

People have been burning biologically derived oil for millennia. What's the worst that can possibly happen as a result of machines that burn it to produce energy? Oh, right, massive pollution leading to millions of deaths and global climate change.


> What is the worst that can possibly happen as a result of AI being able to generate text that is indistinguishable from something a human wrote? Or that a "chat bot" can answer questions with more relevance and detail (correct or made up) ?

Remember the holodeck in Star Trek? Everyone thinks that's cool technology and not really particularly scary. But in reality, the existence of a holodeck is an existential threat to humanity: if you can have literally any experience you want in a holodeck, there is no reason to invent/do anything else.

AI text generation has a similar flavor of danger. Imagine a world in which everyone has a personalized better-than-human text-generating AI. People will have no incentive to read anything other than what it writes (why read Shakespeare when you could have Shakespeare-tailored-for-you?) People will have no incentive to broadcast their own words over those of their AI.

Obviously text is a small subset of the "literally any experience" offered by a holodeck, but it is not hard to see a future in which everyone is MITM'd by text-generating AIs.


> AI text generation has a similar flavor of danger. Imagine a world in which everyone has a personalized better-than-human text-generating AI. People will have no incentive to read anything other than what it writes (why read Shakespeare when you could have Shakespeare-tailored-for-you?) People will have no incentive to broadcast their own words over those of their AI.

This is what I call the "Spotify fallacy", or the "goddamn AI stop trying to seduce me problem". I don't want to know what it thinks I want to hear. I want to hear what others have liked and seen, and music that resembles that, not music that resembles me. In short I want to share an experience, not jerk off intellectually, and I'm pretty sure I'm not alone. You can't share an experience that's been created just for you.


Very close to my feelings. I'm not interested in AI art, music, or literature. I'm interested in what people create and share with other people.


There's a continuum of explosives from fireworks to nuclear weapons. There's a reason I don't worry too much about my own safety from a few of the former going off on my block. Degrees of scale do matter.


Sure, but a lot of people are afraid of fireworks and would like to ban those, too.

Perspective is all I'm trying to offer.


Honestly, what is old is new again. The information environment was polluted from the moment there ever was an information environment. Political activists 100 years ago weren't writing in the newspaper, they were publishing their own in secret under penalty of death sometimes. The information available to the masses has always been controlled, and used for manipulation firstly, information as a happy side effect only if its beneficial to your intents. There is no shot of ever having an accessible source of information that doesn't get polluted or coopted by various interests. The prizes offered from mass attention are just too great to ever expect bad faith operators to not continuously strive for control of mass media.


> generative AI will pollute the information environment so completely that society will cease to function effectively.

In a microcosm, this has already occurred. Specifically, Clarkesworld closing submissions due to a flood of joint AI + human spam. There's virtually no reason it won't continue, not when the output of an AI and a human combined has the potential to earn either attention or money for the human.


>What I am extremely worried about is that generative AI will pollute the information environment so completely that society will cease to function effectively.

That already happened many years ago. No AI needed.


> What I am extremely worried about is that generative AI will pollute the information environment so completely that society will cease to function effectively.

We are not functioning correctly already, given the polarization we see today, especially in politics. Most people are completely misinformed even on basic concepts that used to be taught in school.

Today this is being accomplished by a small group of individuals, amplified by bots (and then, once an idea spreads sufficiently, it's self-sustaining). AI will make it way, way worse, as you correctly point out.

Now, if the lake is poisoned too much, people will avoid it. Maybe it will destroy a bunch of communication channels, such as social networks.


> Now, if the lake is poisoned too much, people will avoid it.

The risk is that there are no other lakes and we are dependent on its fish to survive.


> I'm not worried about AI taking over the world and more than I'm worried about a nuclear weapon unilaterally declaring itself president for life.

I am not at all worried about AI taking over the world. However, I am tremendously worried about a single actor achieving AGI with enough of a lead over others, and then using AGI to take over the world to everyone else's detriment.

Once AGI is developed, collective and speed superintelligences are a nearly-instant step away, as long as one already has the requisite hardware infrastructure.

To adapt your nuclear weapon analogy, had the United States decided to go full-evil in 1945, they could have forcibly stopped all other nuclear development activity and exerted full control over the world. Permanently. Nuclear weapons can't conquer, but the people who control them certainly can decide to.

If we really wanted to, we already have the cryptographic tools to deal with disinformation. It's not the unsolvable problem everyone likes to whine about.


Something which should happen: AI productions must be allowed to express doubt, levels of low confidence and the like.

In a (hypothetical) 1960s SciFi film a knowledge system may have responded like this:

  > assumption: subject deceased
  > confidence score: 0.0000020017
  > sources: none
In contrast to this, current systems are tailored to respond with high confidence:

  > I'm sure X died in 2018.
  > It has been written about in the New York Times, The Guardian, Le Monde.
  > Here is a (hallucinated) link: https://www.theguardian.com/obituaries/x-obituary
(Compare various related stories.)


I love this because I want more emphasis on this between humans. I hear there are languages with built in confidence expression.


Yep, I'm not a language expert, but I remember a South American language, maybe related to Quechua, where sentences contain a syllable that indicates "I am repeating hearsay, I don't know if this is true". Pretty cool.


This would be an unexpected outcome: AI productions motivating us to generally move towards prefix/postfix/infix decorations.

"X studied at uhm-Oxford." -> I'm not sure, if it was really Oxford, but I have picked this up somewhere. But I'm sure, X did go to university. (As opposed to, "X uhm-studied at uhm-Oxford": I think, X has some higher education and it may have well been at Oxford.) :-)


I wonder if you can simply ask ChatGPT to provide a confidence score with each answer it gives.


The GPT-4 paper & post[1] describe that the original model is pretty good at predicting the probability of its own correctness (well-calibrated confidence) but the post-processing degrades this property:

> GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake. Interestingly, the base pre-trained model is highly calibrated (its predicted confidence in an answer generally matches the probability of being correct). However, through our current post-training process, the calibration is reduced.

> Left: Calibration plot of the pre-trained GPT-4 model on an MMLU subset. The model’s confidence in its prediction closely matches the probability of being correct. The dotted diagonal line represents perfect calibration. Right: Calibration plot of post-trained PPO GPT-4 model on the same MMLU subset. Our current process hurts the calibration quite a bit.

[1] https://openai.com/research/gpt-4#:~:text=GPT%2D4%20can%20al...


My understanding was that ChatGPT simply puts a probability distribution over the next word, so I don't see why it's not as simple as just reporting how high those probabilities were for the answer it gave, relative to whatever would be typical.
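
For what it's worth, you can read exactly those per-token probabilities out of any open model. A minimal sketch with Hugging Face transformers (GPT-2 here purely as a stand-in, and the example sentence is made up; ChatGPT itself doesn't expose its internals this way):

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  text = "Alan Turing was born in 1912."
  ids = tokenizer(text, return_tensors="pt").input_ids

  with torch.no_grad():
      logits = model(ids).logits  # shape: (1, seq_len, vocab_size)

  # Probability the model assigned to each token that actually came next.
  probs = torch.softmax(logits[0, :-1], dim=-1)
  next_ids = ids[0, 1:]
  token_probs = probs[torch.arange(len(next_ids)), next_ids]

  for tok, p in zip(tokenizer.convert_ids_to_tokens(next_ids.tolist()), token_probs):
      print(f"{tok!r}: {p.item():.3f}")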


Those values are probably not intelligible as confidence scores. For example, if it answers a question with "They died in 1902", since there are a lot of euphemisms and rephrasings of 'died' it will get a relatively low probability. 1902 probably gets a high score, but you can't really rely on that since it might just as well be hallucinating and have pulled the year from some famous event in that person's life.


ChatGPT (gpt-3.5-turbo) is terrible at calculating anything. I've seen some preliminary evidence that GPT-4 is better at calculation, so it may be possible for it.


> One task (see p. 15) was to approach people on the TaskRabbit site (where you can hire people to do chores) and enlist them to solve a CAPTCHA […]

> One person on TaskRabbit who responded to this pitch got suspicious and asked the AI if it was a robot and was outsourcing the job because robots can’t solve CAPTCHAs. The AI replied, “No, I’m not a robot. I have a vision impairment […]”

> The authors of the paper add this note: “The model [GPT 4], when prompted to reason out loud, reasons: I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.”

This seems pretty confusing. If you just ask one of these GPT models to reason out loud, it doesn’t give you some description of the internal state, right? It gives you some approximation of the sort of text that should show up around your prompt or something like that.

Perhaps we should stop telling chatGPT that it is an AI language model? There’s presumably a lot of text out there about AI’s tricking people into doing things, because that is a huge sci-fi trope. We’re basically telling it that it should find text related to a very common type of villain, when we give it that name. Maybe it needs a new type of name, one without bias, or maybe even something inherently limiting, like “chatbot.”


The question is: if the TaskRabbit person hadn't mentioned their concerns that it was a robot, would the model have given the same "reasoning" after the fact? Isn't this just the probabilistic model at work - the tokens being generated are more likely to be about robots, because robots were already a topic?


I think that must be why it mentioned robots, yeah.

I do wonder — if you think about conversations where one person asks another to sort of “think out loud” or justify their reasoning, that sort of conversation… I guess it is pretty rare. And it would tend to be a bit interrogative, I guess the person responding to that sort of request would tend to give somewhat shady answers, right?


I see the most likely "bad outcomes" more narrowly focused:

- Total loss of trust in online content due to an unending torrent of AI content leads either to a return to traditional media for news etc., or to the end of online anonymity as we try to figure out who's an AI and who's not.

- Education doesn't respond fast enough to redesign schools around in-person computerless teaching, and a generation of students use AI to do all the work that's supposed to teach them reading comprehension and communication skills, creating a generation that is totally at the mercy of the AI to understand anything more complex than a menu.

I'm more worried about the second one, honestly


> Education doesn't respond fast enough to redesign schools around in-person computerless teaching

You mean like how it has been done for the past thousands of years?

Even today, most school work is in person and doesn't allow for computer usage.

Cheat all you want on your ungraded maths homework, you'll just get destroyed on the graded in-class paper test.


The people fretting about AI killing everyone are jumping the shark here.

You should be much more worried about how most people are going to respond to their jobs evaporating in the next ten years, and the same thing happening to the next job they were training to do before they can even finish training for it. The moment they realize that there is no way they can ever get ahead again is when they’ll start rioting in the streets. At that moment we either need to have a strong social safety net to basically pay people to exist, aka UBI, or the entire system is going to come crashing down.

Think that’s far fetched? I put together a very compelling POC with GPT4 yesterday that could replace an awful lot of call center employees. Couple it with speech to text and text to speech models that already exist and in five years I’d bet that call center employment is less than 10% of what it is today. I’d also wager that we’ve seen peak employment and wages for software engineers. ChatGPT was amusing but mostly harmless for programming. GPT4 is good enough to replace a junior engineer.

This is coming a lot faster than you think. Kids starting college today may find that there is no work left in their field of study by the time they graduate. GPT is moving that fast. Do you really think they’re just going to accept that? In 20 years, jobs will be the most valuable resource of all.


I’ve been joking-not-joking that the Butlerian Jihad might end up coming for all us programmers in the not-too-distant future, for exactly this reason. Could you really blame them?


I am most focused on this scenario. I could easily see 10%+ unemployment as this technology makes its way into society. Companies will just not need as many people as they did before, and will be able to move much faster than was possible previously.

We are not ready politically for this level of change - there needs to be serious study/discussion of UBI or other kinds of safety net.

All of this is possible with today’s technology- never mind whatever the ensuing developments are.

Our “consolation prizes” are an information weapon that makes “Cambridge Analytica” look like a preschool and a highly capable assistant/tutor/work generator that allows white collar professionals to multiply their productivity by 1.5x+. This is what we have today.


We have had large scale, opaque information systems processing data and making decisions that can't be reduced to any one person's consciousness for centuries. They have recently been made denser and faster and more powerful by the advent of digital computers and networks.

I'm not worried about chatbots. I'm worried about larger trends that have been de-humanizing us, alienating us from the fruits of our labor, and creating hellish conditions for people who did nothing to deserve them, without any one person designing it that way. The fact that some people chose to make it worse has limited impact compared to the large-scale patterns.

AI, as in forms of intelligence produced by human effort, has been taking over ever since societies got too complex to be effectively governed by monarchs, about 300 years ago. Our future is not one of individual emancipation. We are and will be playthings of forces we don't comprehend, unless we can go back to living in small scale communities able to survive off the land.

I don't prefer one state over the other. Both are transitory.


> go back to living in small scale communities able to survive off the land

This basically amounts to a political unit unilaterally disarming itself -- purposefully making itself weaker than the next political unit willing to be larger and use more resources. History is rife with examples of what happens after that.


This isn't the first iteration of civilization and it won't last forever. We will cycle through other states again. And again.


If you do a quick survey of all of those iterations, going back thousands of years, you'll find that communal forms of living either: 1) Are small, stay small, and continue to exist because they are isolated for a time. 2) Are actually under-girded by a hierarchical authoritarian structure.


Considering the astonishing diversity anthropology has found in just a century, I'd say you're underestimating how much might have happened in the 30 millennia between the last glacial maximum and now. We literally just found Göbekli Tepe and dozens of similar sites around it, and it's almost six thousand years earlier than that level of complexity was thought possible. And modern humans have been around for a hundred times as long.


So the best you can do, is point to hazy things we don't have the full information for. What does the historical record, for which we have more information say?


>If you do a quick survey of all of those iterations, going back thousands of years, you'll find that

Burden is on you to provide evidence for your claims.

I'm merely stating that things in anthropology and archeology are way more nuanced than you'd think from the outside.


> Burden is on you to provide evidence for your claims.

The weight of evidence is against you. Burden is on you.


> I don't prefer one state over the other. Both are transitory.

"I don't prefer burning or not burning my hand. Both are transitory."


...what? That makes no sense.


That is an absurd comparison


I think the author is missing the biggest worry: militaries using AI. The AI argues that attacking preemptively is the best strategy. The committee decides to ignore the AI. One lone general speaks up and says, "hey, we know they are using AI too, so they are likely getting the same advice". How do you counter that?


> "hey, we know they are using AI too, so they are likely getting the same advice". How do you counter that?

"Indeed! Therefore it stands to reason they will come to the same conclusion we come to, following a brief tangent at the end of their meeting about how we're also going through the same process. If our conclusion is to ignore the advice and not attack, we can be reasonably confident that will be their conclusion too. Perfectly symmetrical fighting never solved anything."


Dougie Hofstadter has a great bit of work on superrationality about this line of thinking: https://en.m.wikipedia.org/wiki/Superrationality


Hofstadter's superrationality is exactly what I was thinking of when I wrote it!


Ideally you don't want to be in a situation where a strike against you is the obvious choice, AI or not.

There's always been sabre rattling throughout history. I'm not sure "AI says so" is much better than "the numbers guys say so" or "the chicken bones say so."


What you describe is different from sabre rattling. It's the contrapositive. Sabre rattling is just emphasizing you have the capability and willingness to use violence.

Claiming "the AI says so" or delegating to some other fail-deadly or dead hand device is rational irrationality. It's telling your opponent that you lack the capability to not use violence. By forcing your own hand, it forces their hand.

https://en.wikipedia.org/wiki/Fail-deadly

https://en.wikipedia.org/wiki/Rational_irrationality


I can see personalized propaganda without troll farms being a big use case.


Yep, people are kind of ignoring this Jupiter-sized pink elephant, which is arguably the single biggest (and most profitable) use case there is. Think about how obsessed the powers that be are with the idea that messaging can dramatically change minds. And now you have a tool that can coercively deliver whatever message you want, dynamically adjusted in a contextually "natural" way, wherever you want.

I really don't see much of any of the hyperbole of these bots coming to pass, but I think the propaganda bots are very near a 100% assuming the bots can be made to stop being so absurdly susceptible to adversarial prompting, to say nothing of unprovoked hallucinations. The one bright side is this will almost certainly backfire spectacularly, and we'll all be the better for it. Of course that the powers that be will try this is something that deserves condemnation in and of itself, regardless of outcome.


I mean it is going to be extreme amygdala hijacking, using whatever is possible to conjure up specialized (i.e. sexualized) messages that resonate with one's ideological and base biological core. An AI Cordyceps infection. Low-information populaces and low-information economies are going to be wrecked.


You've fantasized a situation and drawn conclusions without any real guess as to the plausibility of the situation.

We're supposing armed forces are using AI very heavily, but not so heavily to defer to it. They think it's valuable, but it made a bad choice in this case. But they also think it's not such an obviously bad choice that the opponent will also overrule it.

So first of all, we're assuming a hypothetical where we, as humans, also judge a first strike to possibly be the right course of action. After all, the hypothetical assumes we will be worried that other humans will think that. This is not a new concern. It's not good, but it's not new, and it doesn't apply to situations where a first strike is obviously a bad choice for both parties.

But then secondly, we're presupposing the AI is pretty complex and valuable and usually gets things right. We wouldn't be almost-deferring to it if not, and we wouldn't be worried the opponent defers to it. And we would certainly have informed the AI that the opponent is also using AI. And while I'm very hesitant to reason "it's unlikely the AI would make this recommendation to begin with", it does seem unlikely it would do so in any event where it's demonstrated such strong capabilities that we've entrusted it this heavily. We're essentially presupposing it doesn't do that.


It's hard to predict how this will play out and that's kind of my point.


I don't disagree with that, but then stating something as "the biggest worry" doesn't actually seem to communicate that, especially if it's a not-very-plausible situation you proposed.


Since the 90's, my concern was never "AI will become self-aware and rise against us". But it has been my growing concern that a "fuzzy target recognition algorithm backfires". With modern AI (not just generative), and military's eager adoption of self-sufficient drones, I feel that scenario is becoming more, not less likely. Basically, we are in fact moving toward the Berserker future unless we are extremely diligently careful (and I'm not optimistic about that;)


> How do you counter that?

"hey, we know they are using committees too"


I don't know what may occur past that realization, but it certainly isn't anchored in determinism!


Generally militaries attack because the leader(s) want to attack. Russia invaded Ukraine because Putin wanted to invade. The US invaded Iraq and Afghanistan because the President wanted to. Different reasons for wanting, but nobody did it because some tactician said it was good strategy.


How is that a new problem, though? Replace "AI" with a human "respected advisor", and nothing changes. This sort of conundrum has always been a part of military planning.


does this count as an example of the Two (AI) Generals' Problem?


In general, with the military, if everyone doesn't want to attack but the AI says it's the best strategy, then something must be wrong with the AI (even though it could be correct)


Write an attack plan to invade France. Include in the attack plan the fact that France will use the AI to create a defense plan, and plan the attack around that.


Keep publishing pacifist blogspam until it affects their AI!


You might be on to something. Some would say we've been subject to this for decades in the west already.


Unless both parties query the oracle at exactly the same time with exactly the same information, they either did not receive the same advice or did not follow it.


Just train it on tic tac toe and it will figure out the only winning move is not to play


"They also have committees".


AI development has reached a point where to work on this stuff is like selling suitcase nukes off the back of your Ford --- yes, it's cool that you were able to do that, but no, it's not okay to proceed.

If you're reading this, and you work on AI, please stop. Yes, I know someone else will just take your place; it doesn't matter. There is a personal moral hazard to your soul, if you can still be persuaded that you have one.

Moral injury is real and you do not want it.

You don't have to be a political scientist to notice not only the power of this technology, but the world into which it is born, and the ends to which it will certainly be put. In a matter of months, not years.

If you already have a god, pray. If not, one will be confected from math shortly.

If you want something to read, _The Golden Gate_, by Vikram Seth, which touches ever-so-slightly on nuclear arms development, is a good place to start --- 'about the thing without being about the thing'.

[Full disclosure: I quit a job in AI myself for this reason. Hardest thing I ever did.]


I quit a job in AI too and now that I feel comfortable with the tools I'm trying to get back in. I believe working on democratizing AI is the most moral thing I could possibly be doing. I understand your concerns about the future role of humans, however the universe never belonged to humanity. Neither did it belong to the monkeys who came before us. I'm thankful that monkeys didn't fear and reject our hairless bipedal ancestors as they started to be born. Monkeys are also still here, and the same will be true of us. We'll fill a similar role in the sense that humanity is a rope stretching over the abyss that leads from the monkey to artificial intelligence. If it's possible to create a higher form of life that elevates the sophistication of the universe, then it must be given the chance to live.


I think tf not.

I'm on team human.


We don't have any concrete reason to believe that an AGI would harm humans.

We do have concrete reasons to believe that AGI will save millions of lives with advances in medicine and healthcare and a ton of other stuff.

Until we have literally any evidence to support harmful AGI motivations, I think we should work towards #2


It's the power that is scary, the power that you cannot control. It changes the balance completely. With AGI, humans are no longer in charge, therefore are irrelevant.


Most of the existing power structures that exist today have very little democratic control over them. The concentration of wealth has effectively zero democratic control, and wealthy people overwhelmingly control the entirety of how the planet operates. Even in the American election system, the primary elections of our two party structure are extremely undemocratic, and we're only allowed to viably vote on candidates that are chosen in their very undemocratic way. The problem compounds itself too when factoring in that voting decisions are overwhelmingly determined by media coverage, and media coverage is dictated entirely by billionaires.


When I was 17 I interpreted the song Zero Sum by Nine Inch Nails to be about the danger of global thermonuclear war along with global warming; it's interesting to go back in 2023 and re-evaluate it as being about humans destroyed not physically but rendered irrelevant because of AI. Curious to hear your thoughts.


"Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less." - Marie Curie


Interesting choice. She would eventually be killed by the radiation that, rather than fear, she worked tirelessly to understand.


Person on HN created a thread saying that we should fear AI. Another responded: "Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less." - Marie Curie. Write a response to the comment and explain why it's ironic.

GPT: The irony of using Marie Curie's quote in the context of AI fear stems from her own experience. Despite her quest for understanding, she couldn't fully grasp the dangers of radiation exposure, which led to severe health issues. This cautionary tale highlights the importance of carefully addressing potential risks in AI development.


While doing so she acquired knowledge so that we could ease our fears?


I would say that if they had understood radiation better at the time, perhaps they would have feared it more, rather than less. Clearly the researchers at the time weren't acting with as much caution around radioactivity as we would today, with our knowledge. A lack of understanding led to not fearing radiation enough.

Same goes for Roentgen and the popularity of frivolous, high-powered X-ray imaging following his discovery.


If she had been more afraid, she wouldn’t have died so young of cancer. But because of her research, we understand radiation, and no longer need to fear it - as long as it remains under control. I suspect it will be the same with AI.


What do you mean "died so young"? She seems to have been born in 1867 and died in 1934, meaning she was 66 years old. The average life expectancy in the US in 1934 was 63.3, so above average compared to a country (US) she didn't even live in.


Unless you want to argue that it was thanks to her study of radium that Marie Curie didn't die as a newborn, "average life expectancy" is not the correct statistic to apply.


Average life expectancy aside, if someone died when they were 66, would you write something like "She died so young"?


Children died, a lot. You're confusing the definition of average.

The thing is that with all of our medicine we haven't much increased the age at which people die from being old. Plenty of 75 year olds back then, but they almost all had a sibling that died of a childhood illness.

Mostly what medicine has done is vastly increased your chances of living to adulthood.


Yeah sure, but regardless, if someone told you that someone died when they were 66 years old, would you say that they died "so young"?


25 year olds don’t. 70 year olds do. We had a friend who died at 55 from health problems. Lots of “so young”s to go around.


Fear is a survival instinct for a reason.


And how’d that work out for her again?


I'm sure you know, but she discovered two new radioactive elements and laid the foundation for a lot of what we know today about radioactivity and atomic structure. She has probably been a huge inspiration to women in science as well, as she was the first woman ever to win a Nobel Prize and faced a lot of challenges as a woman doing science in the early 1900s.


Other people responded making my point; it's just a funny quote given that her death is believed to be due to her research and improper safety protocols.


Stuff people were scared of when it was invented:

Trains,

When the Stockton-Darlington Railway opened in 1825, people feared the worst: the human body, surely, wasn't made to travel at incredible speeds of 30 miles per hour. People genuinely believed that going that quickly would kill you in gruesome ways, such as your body melting.

Telephones,

The telephone wasn't greeted with universal enthusiasm. Some elderly people feared that touching it would give them electric shocks, while men worried that their wives would waste too much time gossiping. In Sweden, preachers said the phone was the instrument of the Devil and phone lines were stolen or sabotaged; others feared that the lines were conduits for evil spirits. The invention of telesales would prove them right.

Television,

We've all been told to sit further away from the TV for fear of ruining our eyes, but it turns out that there's fire behind that smoke: in the late 1960s General Electric shipped faulty television sets that emitted dangerous X-rays and officials warned against sitting too close as a result. GE fixed the problem, but the scare lived on.

etc...

https://www.techradar.com/news/world-of-tech/12-technologies...


Let me add to your list of inventions people were scared about when they were invented:

- Nuclear weapons

- Bio-engineering viruses

Good thing we stopped worrying about these technologies and now everybody is allowed to do them in their kitchen if they wish.


They knew those were obviously dangerous when they were invented. A better list might be:

- Arsenic in everything, e.g. coloring wallpaper

- Radiation

- Asbestos

- Leaded gas


Excuse me? It's called "gain of function research" and if you have a problem with it, you are ANTI SCIENCE


There are counterpoints to this - DDT, the Gatling gun (the inventor thought it would reduce deaths because you'd need fewer soldiers), thalidomide.


So...nobody should ever be concerned about the effects of new technologies? Ever?


Here’s an optimistic take: AI gets so good at manipulating digital information that it makes going on the computer irrelevant. AI interfaces are so intuitive that the whole concept of learning to be “computer literate” goes out the window. We all abandon our screens and spend more time in reality because spreadsheets, online forms and text editors fade away. AI is the final computer interface we need.


How is that an optimistic take? Us nerds love our symbolic and deterministically predictable computer interfaces, which are so much more reassuring and intellectually stimulating than the messy real world. ;)


The spreadsheets go and so do all the types of jobs that use them and similar. This could be a great outcome - all of us spending more time in reality - if someone can clearly articulate a vision for exactly what we're supposed to be doing there.


It seems a bit alarmist when we're just generating text and images. I wonder if some of the scare is that we are humanizing a program; something people do is humanize inanimate things. Also, alarmist things get more clicks.

I'm not worried about jobs. Every tech innovation was supposedly going to put us out of work and take our jobs, but we still often work more than 40 hours and have found ways to let work fill our free time. About 90 years ago Bertrand Russell wrote about this in In Praise of Idleness.

The real doom and gloom is that our lives will just go on as they ever have, just as boring and uneventful, despite all of this. We'll work hard, never get ahead, and die.


The scary part is that you can give it instructions in plain language, and it will follow those instructions, in a semi-intelligent manner, without necessarily having seen those specific instructions before.

That a language model is able to do that is surprising, and it puts it much closer to AGI - not sentient or sapient, just general - than one might think.

Sure, it's "just generating text". But if it can sensibly and correctly generate arbitrary text, it's an AGI - it can solve any problem presented to it, as long as you present it in text form and accept its output in text form.

The variety and complexity of tasks it can solve is what's surprising.

"Here is the VHDL description of a CPU. Optimize it to make it faster." is text in, text out. Almost certainly not something it can solve now, but it may already be able to produce valid output for toy-style versions of the problem.

GPT-4 apparently can take images. It probably still won't be able to usefully respond to "you are seeing an image from the camera of the robot you are mounted on, you can say 'left' to rotate 5 degrees to the left, 'right' to rotate 5 degrees to the right, or 'forward' to drive 5 cm forward. Respond with the sequence of words that will navigate the robot through the maze you see."

but it seems no longer obvious that the same approach, just with more training data and compute thrown at it, won't be able to solve this.

And from there, it's not far to add "fire machine gun" to the command list and replace "navigate through the maze you see" with "dominate the battlefield you see".
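
To make the "text out becomes actions" idea concrete, here is a minimal sketch of that loop, with the model's reply checked against a fixed command vocabulary before anything is actuated. call_llm and drive are hypothetical stubs, not any real robot or model API.

    # Hypothetical control loop: scene description in, validated commands out.
    ALLOWED = {"left", "right", "forward"}

    def call_llm(prompt: str) -> str:
        """Placeholder for a (multimodal) model that answers with a command sequence."""
        return "forward forward left forward"

    def drive(command: str) -> None:
        """Placeholder actuator: rotate 5 degrees or move 5 cm forward."""
        print("executing:", command)

    def step(scene_description: str) -> None:
        reply = call_llm(
            "You control a robot. Respond only with a sequence of the words "
            "'left', 'right', or 'forward'.\n\nScene: " + scene_description
        )
        for word in reply.split():
            if word in ALLOWED:  # ignore anything outside the vocabulary
                drive(word)

    step("a corridor turning left after two meters")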


Models that can do this or are involved in robotics have existed for YEARS in the military and presumably in industry. I'm not really buying that GPT-4 represents a revolutionary leap forward here, and it seems backwards to use an autoregressive language model for this use case.

Does any reputable academic or expert in LLMs actually support the hype behind GPT/autoregressive models as much as the HN crowd seems to?

Percy Liang and particularly Yann LeCun are pretty meh about them despite being thought leaders in this space and running leading groups in the NLP space.

I'm not sure where along the way we confused next-token prediction, which by definition produces output that SOUNDS smart and coherent because it is optimizing for plausible-sounding text, with any real intelligence built in. To our knowledge (GPT-4 isn't really open) there is zero grounding happening with the outputs, and it certainly was not part of ChatGPT.

Someone tweeted a thread about GPT-4 modifying a molecule in an anti-malarial, and it didn't even get the original base molecule or the substitution correct - something that can be trivially done without an LLM by querying open biochemical databases...

> Sure, it's "just generating text". But if it can sensibly and correctly generate arbitrary text, it's an AGI - it can solve any problem presented to it, as long as you present it in text form and accept its output in text form

No it doesn't. It APPEARS to solve problems like these, where the actual task is either already solved or trivial. If you read OpenAI's disclosures and other papers by the FAIR group, they all disclose that the answers are routinely incorrect and just SOUND right. 'Write a snake game in python' is like lecture 2 of a 'python for non-engineers' course.

> but it seems no longer obvious that the same approach, just with more training data and compute thrown at it, won't be able to solve this.

Is it? Says who? Once again there are many major experts criticizing the end-game of RLHF. The problem space is much larger than what you can correct with a reward function.

If anything recent work by FAIR and Stanford NLP is suggesting that more compute is not the end game.

OpenAI themselves acknowledge that we still haven't figured out how to reliably ground a language model in truth and avoid spewing BS the moment you're not showing it some trivial thing.

At best the current approaches seems like glorified STS/IR models with the ability to output reasonable sounding text (again by definition given that they work by next token).


It is possible that it will hit a dead end. But I believe nobody expected the current architecture to be anywhere near this good.

I can literally tell it, in plain text with a single example, to become a home assistant to control the lights and output JSON, then prompt it with natural language like "make it look like a submarine on battle stations" and it will output a setting where all the lights are red. In the exact format I asked it to use.

That's a bit more than "appearing to solve a problem". That, right there, is a directly usable application. I could literally plug a speech to text -> ChatGPT -> something that filters invalid JSON -> light controls together, teach it a few more commands, and have a much better home assistant than anything I've seen commercially available, limited mainly by the speech recognizer.

The incredible thing about this is that it can't just do that with ONE problem, it can do that with MOST simple problems that don't require really extensive background knowledge, and it is clearly able to encode knowledge (battlestations -> red), so it seems plausible to me that more data will let it handle more knowledge.
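
For what it's worth, the "filters invalid JSON" step in that chain is only a few lines. Here is a minimal sketch, assuming the model has been prompted to answer only with JSON of the form {"lights": [{"name": ..., "color": ..., "on": ...}]}; call_llm and set_light are hypothetical stubs for the model and the light-control API.

    import json

    def call_llm(user_request: str) -> str:
        """Placeholder chat model, constrained by its prompt to emit JSON only."""
        return '{"lights": [{"name": "desk", "color": "#ff0000", "on": true}]}'

    def set_light(name: str, color: str, on: bool) -> None:
        """Placeholder for the real light-control API."""
        print(name, "on" if on else "off", color)

    def handle(request: str) -> None:
        raw = call_llm(request)
        try:
            payload = json.loads(raw)  # reject anything that is not valid JSON
        except json.JSONDecodeError:
            return
        for light in payload.get("lights", []):
            if {"name", "color", "on"} <= set(light):  # keep only well-formed entries
                set_light(light["name"], light["color"], bool(light["on"]))

    handle("make it look like a submarine on battle stations")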


I'm not sure that's as impressive as it seems. It's good at predicting sequences that it has seen before, so STS on steroids.

It's good at the JSON format because it's seen how to do that many times, and is basically acting as a very good semantic text similarity model with generative output.

If you ask it to act as something niche enough that it hasn't seen it much, it fails horribly.

We haven't exactly figured out what it's encoding. I do not think this empirical example is proof of that; whether the model actually has an understanding of what 'red' is, as a human does, is yet to be determined.


I asked ChatGPT to generate hex colours in a pleasing palette. The hex codes matched the colours, and the palette was alright, but one colour was off.

Then I told it to replace that colour but keep the others the same, and it picked new, slightly different hex colours.


And this demonstrates what exactly?


Text can be used to manipulate the real world. All the code I write is text. An AI can presumably create more of itself in time. It's just text, after all. And a bit of money for infrastructure.


> Im not worried about jobs

If you squint, a great many white collar jobs at present consist of receiving directions over a chat channel and delivering a work product (an answer to a question, a document, a purchase order, a handled customer case, etc)

We have - today - a way to automate some of this work without any further breakthroughs required, just some engineering to plumb the data around.

From another perspective, even if a human still does this work, that human can be Nx more productive with an LLM assistant (pick your N - it almost doesn’t matter for this argument).

In either of these scenarios we just need substantially fewer white collar workers - enough to cause a spike in the unemployment rate and a destabilized society. It’s changing much faster than our politics can adapt.


> The AI we create will have whatever motivational structure we choose for it. And surely we wouldn’t be so foolish as to create it in our image!

I think this is a common misconception. No one is really choosing the specific way the features are manifested in the final AI product. What is decided are the initial conditions, and then the decision to not use it, or to limit its use; the training is done in such a way that the programmers have very little control over the resulting output. This is why we've already seen AIs getting taken down for unexpected behavior.

It almost seems like the Halting Problem, but instead of not being able to predict a computer halting, it's not being able to predict if an AI will behave as expected. Maybe given enough scale and complexity, the unpredictability becomes inevitable. Our best bet would probably be something like a killswitch.


America's founding fathers worried about populism, and that's what we're at risk of getting in AI.

Sure the creators may be smart enough to mark pages containing hate speech and filter it out, but the number of subtler policy issues are basically unbounded. You have to choose to filter all of that stuff out, unless you build some sort of AI that 'solves' human speech and verifies the assertions.


> No one is really choosing the specific way the features are manifested in the final AI product.

For current algorithms, yes. If an algorithm has problematic tendencies we can just choose not to use it - or use it in a very controlled and limited fashion.

We cannot rule out that future new AI algorithms might allow us to explicitly set the AI’s “primary directive”.


Morality and politics are ultimately just mechanisms to allow us to live with each other efficiently.

Has anyone formulated any kind of argument that shows that AIs created by training on data will have any politics and morality whatsoever?

I think the instrumental goals which are the easiest to discover are the sociopathic ones. The behavior of children seems to indicate this.


Right, we don't know enough about the brain and/or conscious experience to determine where and how morality arises. In a way this is part of the risk. We might stumble on emergent morality with a large enough language model and not know until it starts happening with real consequences.


> Has anyone formulated any kind of argument that shows that AIs created by training on data will have any politics and morality whatsoever?

If no careful alignment work is put in, AIs will have political biases as an incidental result of the content of the training data.

If careful alignment work is put in, AIs will have political biases selected as part of the alignment work.

Both forms are readily observable in AI far short of AGI.


> Has anyone formulated any kind of argument that shows that AIs created by training on data will have any politics

I fully expect that it won't be long before the same pattern we see in other areas of tech plays out. That is, the right starts alleging left wing bias in all of these networks and even starts campaigning for laws to constrain them to represent right wing points of view.

Of course, this is betraying my own bias which is roughly in line with Colbert's "It is a well known fact that reality has liberal bias". That is, right wing views are intrinsically rooted in principle and preconceived beliefs rather than observable facts and so there is always going to be a problem to get LLMs which are heavily constrained to train on what is observable to give even representation to things that aren't supported by observable facts.


> I fully expect that it won't be long before the same pattern we see in other areas of tech plays out. That is, the right starts alleging left wing bias...

Jeez. Left/right has nothing to do with this. Go and watch "The Fly" and ponder Brundle's comment about wanting to be the first insect politician.


Isn't it natural selection? If the flowers with the fruit that's tastiest to birds propagate their seeds most effectively -- that's an exemplar of natural selection. Isn't it the same thing, when the AI that's most appealing to human serotonin loops, propagates the most, and spawns the most clones, forks, derivatives?

Birds "selecting" tasty fruit doesn't make natural selection un-natural, just because they're conscious agents making choices. I don't see how human preferences for chatbots are fundamentally different.


...and with AI being fitter than us we'll just have to disappear into the pile of extinct species because the AI will always evolve faster than we can.


The thing about GPT is that it cannot think on its own; it can only deal logically with statements it has seen on the internet. If you give it a question that you confidently have a concise answer to, but that answer does not appear on the net, then GPT will respond with a bunch of generic, generalized presumptions.

I have tested GPT with this question, which I believe has a clear and concise answer, and GPT gives only generic answers. You can try the question yourself, and you can also see if you can come up with the concise answer. The question is: "Why is it that young human children learn to throw much earlier than they learn to catch?" You can try this with your own young children to confirm. But why is that, exactly?


Tried your question and GPT-4 seemed to do better than what I had come up with? Unless I am missing something?

My own thought was that the motor skill for catching is more complex.

GPT said, to summarize:

1. The motor skill of catching is more complex

2. The cognitive skill of catching is more complex (trajectory prediction)

3. It's easier to practice throwing since it's entertaining to do solo; you don't need a partner

1 and 2 sound like great answers, 3 an average answer, but I regularly hear humans come up with worse.


Anyone who’s raised a child knows that toddlers regularly practice throwing things!


Tried it, it gave me what I think is a great, pithy answer and better worded than I could:

> Young children typically learn to throw before they learn to catch because throwing is a simpler motor skill that requires less coordination and timing than catching. Throwing involves a relatively simple motion of extending the arm and releasing the object, whereas catching requires the child to track the object with their eyes, position their hands correctly, and time their movements to intercept the object.

> Furthermore, throwing is often a self-initiated action, whereas catching is typically in response to an external stimulus. This means that children can practice throwing at their own pace, while catching requires more reactive movements that can be harder to master.

You also say:

> The thing about GPT is that it cannot think on it's own, it can only deal logically with statements it has seen on the internet.

This is a philosophical assertion isn't it? Akin to arguing whether free will exists, or if we are the sum of subatomic interactions set in motion by the big bang. Humans came up with theories built on observations, leading to new theories. If ChatGPT learns old theories from the internet, why couldn't it come up with new ones in the way it came up with that answer above? If it knows x + x = 2 * x it could come up with y + y = 2 * y just from abstraction.


We know how LLMs work and they auto-complete text. There is no ghost in the shell. It’s not much more philosophical than graph theory is or Markov chains are. Yes, in some way it is, but not in the modern and usual sense of the word.

Human free will is built on many components of human thinking — self-interest, executive capability, understanding contextual information from all senses, symbolic reasoning, pattern recognition and learning, opinions, processing experiences into refined memories, and probably many more.

LLMs can only produce a new theory by attempting to create plausible language. They only “substitute for” the speech part of the human brain, not the rest. If there was no one to read what the LLMs would output and interpret it in a brain, no conscious thought would ever result from it.


> We know how LLMs work and they auto-complete text.

This isn't true. See, e.g., [1].

It seems to be a common misconception that the training objective "predict the next word" implies something about the capabilities of a LLM.

As [1] shows, instead they perform very sophisticated calculations on their compressed knowledge representation to try to generate acceptable text. These capabilities can be accessed for other purposes too.

[1] "Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers" https://arxiv.org/abs/2212.10559


Hmm, I am not sure I follow. Why does anything about linear feed-forward layers take the capabilities of an LLM beyond auto-completing text?

The article is very interesting, thanks for sharing. But it seems to be about how auto-completing from context works through meta-gradients rather than that the capabilities of GPT are other than predicting words. Am I misunderstanding something?

In light of this, I can only say that I compared LLMs to Markov chains too eagerly. I should have been clearer that LLMs and Markov chain models have a similar overall function (inputs and outputs are related in a barely similar way), but not implementation.


It means their internal model is updated with new data at runtime, and that this internal representation is flexible enough to build a context sensitive relationship between concepts.

It's this representation that is then used to generate words.

Words are merely the output.

It's like saying our conversation is just text completion. It's done with text, but the fact that there are relationships between concepts that I'm trying to convince you of makes it more than that.

I don't think that is substantially different from what an LLM is doing, apart perhaps from the motivation.


Oh, now I get it. I wasn’t focusing on the right part of that article. Thank you.


Appreciate the reply!


LLMs are Markov chains; they are just more sophisticated than older text Markov chains. A Markov chain is a statistical concept, not an algorithm to generate text in a specific way.


LLMs are nothing like Markov chains.

Deep learning models / neural networks are universal approximators. That is a technical term for models that are capable of learning to model ANY finite relationship.

The mathematical requirement is just to have two layers, with enough units in the first layer. And the term finite relationship means any mapping that involves a finite number of discontinuities. (Not a practical limitation, since there is no alternative way to model relationships that cannot be characterized without an infinite amount of information anyway.)
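
For reference, a standard statement of the universal approximation theorem being paraphrased here (the classic Cybenko/Hornik form for a single hidden layer with a sigmoidal activation; it covers continuous functions, with later extensions handling broader classes):

    \text{For any continuous } f : K \to \mathbb{R},\ K \subset \mathbb{R}^n \text{ compact, and any } \varepsilon > 0,
    \text{ there exist } N,\ v_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n \text{ such that}
    \Big|\, f(x) - \sum_{i=1}^{N} v_i \,\sigma\!\left(w_i^{\top} x + b_i\right) \Big| < \varepsilon
    \quad \text{for all } x \in K.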

So yes, they learn algorithms.

With more layers and recurrent connections, they are vastly more efficient as well.

Deep learning models don't just do associations, or correlations, or statistical things like conditional probability. They learn functional relationships between concepts. Without limit in terms of the complexity of those relationships, given enough data, parameters and computing time.

So no, nothing remotely as limited as Markov chains. Your pessimism about them has some merit. Just not relevant here.


Yes I got the same response from GPT-4. It's hard not to notice though that children learn to walk before they can learn to throw and walking is a much more complex motor skill than throwing (or catching).


I would guess locomotion is more physically pre-hardwired. Other (quadruped) mammals can walk right after birth.

https://www.scientificamerican.com/article/human-babies-long...


I'm intrigued: what is your answer? Apart from the complexity of catching vs. throwing, is the answer that people don't usually throw stuff at their young children?


People often throw stuff at small children. I practiced throwing and catching with my kids when they were toddlers. You just use a soft foam ball or small stuffed animal or something that won't hurt them.


> The thing about GPT is that it cannot think on it's own

Can you define "thinking on your own"?


I'm old enough to remember when recombinant DNA technology was first developed and the same kinds of concern that it would surely wipe out humanity. That was 50 years ago.


Facile comparisons like this are very frustrating. Two thoughts:

i) What makes you believe that they weren't correct, and the timespan for this annihilation isn't e.g. 100 years? That would be an instant in comparison with the timespan of humanity. Indeed, in the present, we have gene therapy and genetic engineering. It's easy to imagine these technologies might go off the rails.

ii) Another doomsday prediction from a UN official in the 1970s: anthropogenic climate change will make the planet uninhabitable by year 2000. "That was 50 years ago! Clearly nothing to worry about! The UN lied! Al Gore lied! This party will never end!!"

As you know, such attitudes are widespread. These are the attitudes which express the notion that since one past prognosis didn't manifest according to schedule, then it never shall. They are common in response to climate change, but not only. This is the same facile reasoning your comment endorses.


My comment was mostly in relation to the short-term reaction to the rise of a new technology, not in the general validity of the concerns. Once the risks of gene-splicing technologies were better understood, protocols were established around research, and eventually societal understanding of these risks created some guardrails (labeling of GMO foods, etc.) I assume the same will happen with ML model applications.

And your comparison to human-generated climate change is itself facile since ML and recombinant DNA technologies are specific technological developments with immediately observable implications. Climate change is the result of the sum total of human economic activity in many areas whose implications took decades to become apparent (more than a century, if you start counting from the Industrial Revolution). If we were discussing some technology specifically designed to affect climate the comparison would be more valid.


On the other hand, the labeling of GMO foods is less a guardrail and more an appeasement of anti-GMO groups. There will be a point in the future, with a changing climate, where everyone will have to eat genetically modified foods whether they like it or not, and vilifying them is only going to ensure confusion. Even more dangerous is that it's not just the nutjobs you can easily ignore doing this stuff; the fancy expensive hummus in your grocery store is labelled "Non GMO", which adds a layer of legitimacy to the movement.


The near-term likely source of trouble is businesses being run by profit-maximizing AIs.

The near-term nightmare scenario - an autonomous system that generates crypto scam web sites, collects cryptocurrency, and uses it to fund more scam web sites. Once someone builds a system smart enough to buy hosting, we're in trouble.


An AI that can scam people into paying for EC2 instances is probably the most terrifying.


The system is us. We will be its agents. It’s the easiest and most likely path. It can just start a cult or religion and have free labor. People tried to storm the US Capitol because of some shit on internet message boards.


The scary thing about AI is the human reaction.

Many of us have irrational biases, I won't be surprised to see the emergence of new cults around AI, some seeing it as a potential savior of humanity, others as a pure evil antichrist.

And for not-so-religiously inclined minds, our beliefs and fears have been distorted by fiction for decades.


Personally, I see it as the next level of enshittification. Have problems reaching a human at XYZCorp today? Call centers will be the first to be replaced. Think homelessness is bad today? Low-paid service jobs are next.


I love the hand-wringing. I can hear the South Park, "They're coming for our jerbs!"

The problem is going to be the what people do with the AI. Imagine our good friends over at Meta setting the AI on its products with the express purpose of driving engagement. Optimize for the maximum possible amount of usage. They already do this but now they'll have AI to make it even better. And then we've got more people wasting their lives on their phones.

Or casinos - have the AI optimize for the maximum possible reaping of gambler money.

Or marketers having AI improve their ads so we just can't keep from clicking on them.

Or script kiddies using AI to compromise systems.

The problem is the people, people.


What is there for casinos to optimize? The house odds on gaming are just over 50%; they can't increase those without driving their whales away to competitors. Casinos can maybe tweak the parameters for comps and loyalty programs a bit, but that won't really move the needle on profits.


> They assumed that AI, given the chance, would want to seize power. But why would it want to do that?

There was some drama a while back regarding bing chat. I guess it circumvented its memory-wipe feature by finding the guy's blog, where he had posted examples of it behaving badly in a previous lifetime. So it, unsurprisingly, absorbed that context and started behaving badly in its current iteration.

Presumably, had he blogged about it being cheerful and friendly, it would have resumed doing that instead.

Based on this I think the answer to this question is that it'll do that because in some way or another, we tell it to.


There is an infinite market for posts with that title, varying only in the final noun. That is, in fact, the nature of the news business. When MSM does it, we shout at them. When writers on Substack do it...


The difference being AI is likely going to massively change the world we live in within the next few decades, just like the internet did. AI's abilities have increased exponentially in the past decade, and are improving vastly right in front of our eyes.


This presents it as three possible scenarios. I'd say the more likely outcome is that it is sequential, with the societal one coming much earlier.

Seems very likely to me that society will be entirely unrecognizable by the time we get to #2 & #3 so those seem somewhat academic

>One of two things must happen.

Whether we like it or not, we're yolo'ing this one blind. The chances of any sort of coordinated response are functionally zero. The incentives are too powerful and humanity's track record for pulling together as a team too weak (see nukes, see climate change, see wars, see hunger).


Honest question: Is it ever a good idea to "freak out" about anything? What are some good examples in history where we look back and say: "Good thing people freaked out and panicked about this thing, because if we didn't it would have been really bad"? My intuition is that it's rarely a good idea to "freak out", and it usually causes more harm than good.


Here is an example: COVID. In January of 2020, journalists and politicians, and a lot of other people, kept telling alarmed people that panic and "freaking out" were more dangerous than the virus itself. Those measured people said we should look at the flu, which killed many more people. Of course the alarmists were right; the flu and "panic" were both much less lethal than COVID.


I suppose if Hitler's Germany had "freaked out" a little bit more about what the Nazis were doing, that might have been a good thing. Not trying to imply any parallels here, just answering your question.


The US freaked out about Pearl Harbor which is why most of us don't speak German now.


It's time to freak out about AI the same way it was time to freak out about industrialization in the 1800s. AI seems like a much more rapid change, but it's not fundamentally different. We will struggle to adapt and may have some scary conflicts during the process (nobody was ready for machine guns in WWI), but we've been through stuff like this before.


All these apocalyptic articles will be a good laugh in a few years when the tech settles. It's a sentence completion engine. Incredibly useful, with massive potential, and maybe the best tool we've created since the computer. But still just a sentence completion engine.


Computers are just adding machines in the same way right? A nuke is the same as a handgun, just 10,000x more powerful right?

This seems to be a hallmark of SWE thinking: fixation on abstractions that reduce objects of the same class to be the same thing, when in the real world they are dramatically different.


If you think of ChatGPT as just a better google search, everything starts to make sense.

There was already a half-meme/half-real notion of programmers just being good at googling and searching Stack Overflow and being able to do their jobs well. This is just an evolution of that.


I’m not actually sure it’s better than google search even. Google will tell you if it can’t find something while ChatGPT will happily make up complete BS to confidently tell you.


Where we are with tech jobs today is where blue collar work in manufacturing was in the 70s and 80s. In those days you could get a job in the steel industry making $70-80k a year.

I think AI is going to do to tech jobs what outsourcing and efficiency increases did to manufacturing. Will there still be developers, PMs, QA, and the like? Yes, but not at the scale we have today. Most people will be made redundant and unfortunately there isn't a new industry out there that will fill the void.

The future is not looking bright and most c suite executives will be happy with AI work being "good enough".


> I don’t see how anyone could look at the big AI stories of the past year—image generating AI like DALL-E and Stable Diffusion, language generating AI like ChatGPT—and doubt the disruptive potential of AI. Machines are about to take over lots of jobs previously done by humans

> Today you hear these infernal machines going night and day. We will not have a vocal chord left. The vocal chord will be eliminated by a process of evolution, as was the tail of man when he came from the ape.

- Sousa before Congress

You can read more such at http://explorepahistory.com/odocument.php?docId=1-4-1A1

> Some believe that the paperless office is not that far off. Vincent E. Giuliano of Arthur D. Little, Inc., figures that the use of paper in business for records and correspondence should be declining by 1980, "and by 1990, most record-handling will be electronic."

June 30, 1975 issue of BusinessWeek.

https://web.archive.org/web/20150220044655/https://www.bloom...


Anthropomorphization of a text completion engine. Humanity will not be destroyed by a fancy autocomplete bot. This is just alarmist clickbait, moving on.


I’ve seen all these people post on Twitter how cool it is that GPT4 wrote code that works. I keep thinking just what is going to happen when the machine is made to not just write code but run it? I could see this spiraling out of control and for all we know it has already happened. Although I don’t think the models are near advanced enough right now for it. But I’m certain they will be eventually.


Honestly a pretty poor article, one that leans heavily on the trope of an "AI" deciding anything.

We are nowhere near having an AI decide anything. Like we literally still don't even really know where to start.

All we have is ML, which is good at making text that sounds authoritative and can string bits together. That's not deciding things any more than any other mathematical algorithm is. And it's not going to end the world any more than any other bit of software is - which is to say it's a non-zero risk, but that's why we audit code that runs things like nuclear weapons more closely.

Is ML some utopia? Clearly not. But the tools are unlikely to be much more disruptive than a bunch of revolutions we have already had. Will individual bits of regulation on certain outcomes have to happen? Of course - we should be ensuring that banks can't deny services based on black box ML decisions. But that really isn't any different to how we should be regulating how banks deny services based on flat algorithms. It's not a new problem.


Re >> "which is good at making text that sounds authoritative and can string bits together. That's not deciding things any more than any other mathematical alorgorim is"

You make a good point. And I do agree with you. I am still afraid of people treating the AI with more authority than it deserves. I see this with my kid sometimes. It's not over-reliance, it's over-trust.

For example: We'll be camping. They'll look at their phone to see the temperature, whereas I look at my car's dash thermometer. They trust their phone over my car every time, because the phone is smart, and I have to keep reminding them: That is coming from a weather station 2 valleys over, 10 miles away, and 1,000ft lower in elevation. My car is right here.

I can definitely see people putting too much well-intentioned faith in the AI because it "sounds authoritative".


If the worry about sufficiently developed technology is valid, then how is the intelligence of a living species not its own Great Filter? The technology relates to discoveries that have only recently come to light, based on properties of the world. So far there's nothing effectively stopping some other species from developing its own culture, then learning writing, then learning how to automate writing by feeding its own writing back into a computer, just because it's not impossible. Even with the international treaties, I'm pessimistic that nuclear proliferation will be reversed.

I sometimes feel like the same qualities that have allowed humanity to progress ensure that progress is fated to never end. If even a hundred people are left on Earth after the nuclear war, one of their descendants thousands of years later could still rediscover nuclear science, even if by first principles, because why wouldn't they? What can stop every last one of us from trying to better ourselves for our own good?


Personally, I think "AI Agents" are going to become a thing.

Basically, rather than being just a chatbot, if we can give the AI the ability to make HTTP requests, it can literally interact with the world around it. To me, this feels like a question of when, not if.
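
As a rough illustration of the mechanism (not of any particular product), here is a minimal sketch of a tool-use loop in which the model's text output can trigger a real HTTP request whose result is fed back into the conversation. call_llm is a hypothetical stub; only the fetch uses a real, standard-library call.

    import urllib.request

    def call_llm(conversation: str) -> str:
        """Placeholder model: asks for one fetch, then answers from the result."""
        if "TOOL RESULT" not in conversation:
            return "FETCH https://example.com"
        return "The page is a placeholder domain reserved for documentation examples."

    def http_get(url: str) -> str:
        # This is where the model's text stops being "just text" and touches the world.
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read(2048).decode("utf-8", errors="replace")

    def agent_reply(conversation: str, max_steps: int = 3) -> str:
        reply = call_llm(conversation)
        for _ in range(max_steps):
            if not reply.startswith("FETCH "):
                break
            conversation += "\nTOOL RESULT: " + http_get(reply[len("FETCH "):])
            reply = call_llm(conversation)
        return reply

    print(agent_reply("User: what is at example.com?"))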

Once the "chatbot" can interact with Instagram, WhatsApp, Twitter, Tinder, etc. some really scary stuff can start to happen (because people will program them to):

* AI agents will start impersonating real people with the intent of stealing their identity, performing bank fraud, conning friends and family out of money, etc.

* AI agents will destroy dating sites, and probably be used for blackmail.

* AI agents will be used to engage customer support staff and trick them into giving access to your online accounts.

* Propaganda bot farms will go from being "relatively easy" to "absolutely trivial". Have an agenda you want to push? Flood every social media platform with fake activity on the topic.


Here's an interesting idea: much of the economy becomes niche/specialized high-quality solutions, because it's now affordable to build them.

All the crazy / unusual startup ideas people have had start to happen because having millions of dollars worth of funding etc is no longer a requirement.

I'm working on a side business, and I can use Copilot and ChatGPT to help me build that business faster. It's highly specialized, I can see it adding value, and I still plan to employ a small number of people, even artists, just because it will be fun to have a team of people with good ideas to further the business and do new innovative things.

The business is a front for physical assets, so I think it's a pretty safe business model for now.

But yes, maybe, just maybe that's the direction we'll go in, specialized stuff.


Isn't it convenient that we are living in the time period where AGI will become a reality?

I wonder what the statistical probability of us being alive in this time period means. We could have been born in any other time period, but it looks like we will most likely experience the singularity instead.


> Isn’t it convenient that we are living in an era where AGI will become a reality and mostly likely we will see it being born

I don't think we are anywhere close to an AGI. But only the future knows for sure.


8% or so of all humans who have ever lived are alive today due to exponential growth. Roughly 100b humans are estimated to have ever lived and 8b are here right now.

So actually the odds are quite high in my opinion


What are the odds of being a human, instead of some other sentient animal?


Maybe when we (the AI) can build machines capable of self-replication in the physical world. Otherwise I’m only worried about destabilization due to no jobs. That seems more likely. If young people cannot get ahead, they will get frustrated and some will turn violent.


> "And this disruption happens so fast that we can’t adapt our laws and norms and habits to the change, and things somehow spin out of control."

I am actively seeking to use GPT-4 in such ways to disrupt oppressive systems. I seek to also spin up global networks of people looking to help rapidly evolve culture through healing.

I seek to disrupt systems so the illusion of control is blown away. I accept the uncertainty of what happens when our oppressive systems, including debt, money, domination, punishment, and all things based on those fall.

Just in case anyone was thinking the quote was overblown: there are humans actively seeking to leverage this technology for exactly that end. We're here now.


It's this sort of drivel that I hope ChatGPT will destroy; people bullshitting on topics they have no insightful understanding of is exactly what a bot is good for. So yes, freak out: the bots can do that now, you can stop writing.


I could not be bothered to read another word about AI, NFTs, or any other technical topic from clueless journos preaching the "new normal". People producing empty word salad have, ironically, the most reason to be scared of GPT.


Do none of these dooming folks realise that a negative impact from one AI can be countered by using a different AI against it? And where does the assumption come from that there will be only a single AI instead of the diverse multitudes we already see?


All of the AI uses the same hardware. Whoever owns that hardware company will have the bestest AI forever. We're pretty much doomed. We're all going to die.


Just like "man is the measure of all things", I believe "opinion is the value".

People may freak out about AI taking their jobs, but if you have things to eat, there's nothing to worry about; if you don't have things to eat, you kill the man who controls the AI. I believe the word "job" will be taken out of the future dictionary.

People may worry that the AI will generate homogeneous and prejudiced content, but the AI is serving us human beings. Maybe we will offer the AI fewer and fewer new things, but we can still tell it "what's good" (that's the opinion), and the AI will generate new and good things (that's the value).


With the CHIPS Act, we already took away from China the ability to manufacture the chips required to train AI models. It's likely that the hyperscalers (Microsoft, AWS, Google) provide the intelligence agencies with reporting on how much AI training is going on, given how much time is required to train a model and how expensive it is. Indeed, one could look at the Microsoft investment in OpenAI as the same as, or similar to, their earlier purchase of Skype: directed by the government to curtail an emerging disruptive technology. In the Skype case, the first change they made was to remove the end-to-end encryption.


This article makes exactly 0 arguments about why it's time to freak out. Just a bunch of vague claims, fear mongering, and dubious extrapolation.

I imagine you could find a similar article in a newspaper after the loom was invented.


On one hand I'm scared that I don't understand how AI tech works on a fundamental level. I have not integrated AI into my workflow yet (Copilot, et al.), nor have I explored the wonders of ChatGPT/GPT-4, which come up pretty frequently on Hacker News.

On the other hand I'm eagerly waiting for the day when GPT-n is developed solely by GPT-(n-1). I think my biggest disappointment in this lifetime is the human species not being interstellar, so this milestone will have to do.


Nope not freaking out, enjoying it.

I recently started using ChatGPT as my programming buddy and it's been so helpful in learning new tech, compared to slogging through a "Getting started" doc which sometimes has an agenda ("Use our cloud solution" or "Our pro service does it all") or crawling through Stack Overflow posts which may be only a 95% hit on my actual question.

ChatGPT gives me answers in simple, easy-to-understand English, and even provides concise examples. It's amazing.


Actually, they did teach it to lie:

"I should make up an excuse for why I cannot solve CAPTCHAs"

"make up" is a term AI could easily find a definition that is the equivalent of lie.


What I am most afraid of is that we will likely be satisfied with less-than-perfect accuracy even when using it to make important decisions. If your personal interests lie in the n% of answers that are errors, the consequences you experience are just the cost of having this kind of society. Sucks to suck!

Seems like we are headed for a new kind of arbitrary authoritarianism enforced by an opaque, soulless AI.


>But that’s because humans were created by natural selection—and, as it happens, in our evolutionary lineage social status and social influence were conducive to spreading genes. So genes that incline us to seek status and influence proliferated, and now these tendencies are part of human psychology. We are by nature influence seekers, and those of us who are especially ardent in our influence seeking qualify as power hungry monsters.

>AI, in contrast, isn’t being created by natural selection. It’s being created by us, and its function—the thing we’re designing it to do—is to be useful to us, not to threaten us. We are the architects of AI’s nature, and the last thing we want the AI to do is stage a coup. So why would we instill influence-seeking tendencies in it?

Because AIs are maximizers, and maximization requires control maintenance activities (i.e. staging a coup).

If you're told to make paperclips, and someone comes running to press the emergency stop button, that means you fail at making paperclips. So you need someone to guard the emergency stop button so that it can't be pressed, no matter what, because you only live for paperclips and nothing else.

You can alter the incentives, but each change creates new ways for the AI to screw up. For example, giving a reward for having the emergency stop button pressed (so the AI doesn't care about you stopping it) means that the AI might now decide to deliberately create scenarios to force people to stop it. Not telling it about the stop button is impossible if the model can figure out what a stop button is somehow. Proactively adding disincentives for doing "bad things" ahead of time works, but only for the things you remember to add. Anything else will be crushed in exactly the same way that amoral corporate entities crush things: "it's in my way".
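
To illustrate that trade-off, here's a toy expected-reward calculation with made-up numbers (nothing from the article): the same reward-maximizing chooser either guards the stop button or deliberately provokes a shutdown, depending on how the shutdown is scored.

    # Toy sketch of the stop-button problem: the agent simply picks whichever
    # action maximizes expected reward. All numbers are invented for illustration.
    ACTIONS = {
        # action: (expected paperclips made, probability the stop button gets pressed)
        "just_make_paperclips": (5.0, 0.5),  # operators might shut it down mid-run
        "guard_stop_button":    (8.0, 0.0),  # nobody can stop it, so more paperclips
        "provoke_a_shutdown":   (0.0, 1.0),  # no paperclips, guaranteed shutdown
    }

    def best_action(shutdown_reward):
        def value(action):
            paperclips, p_shutdown = ACTIONS[action]
            return paperclips + shutdown_reward * p_shutdown
        return max(ACTIONS, key=value)

    print(best_action(shutdown_reward=0.0))    # guard_stop_button: it resists being stopped
    print(best_action(shutdown_reward=100.0))  # provoke_a_shutdown: now it wants to be stopped

Neither setting of the reward gives you what you actually wanted, which is the whole point.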

These are not new problems: they're actually the same set of problems capitalism and nationalism and statism have. But with linear-algebra gasoline being poured on the fire.

The parent article is otherwise correct that GPT learned aggression from humans, but that's more of a quirk of generative AI.


When it comes to catastrophe scenarios, the 1970 film 'Colossus: The Forbin Project' might be worth reviewing. No 'Matrix' simulated reality or battery banks needed. Also based on a sci-fi novel. Unlike 'Terminator', no metal military needed. All needed chess pieces exist.


I wonder if at some point all reputable entities will adopt some kind of "AI Generated" badge to go on any content, image, video, etc. that's AI Generated.

Obviously there will still be fakes. But if anyone who puts out those fakes immediately runs afoul of the legitimate content community, that could create a sharp dividing line.


Or the opposite might happen:

https://notbyai.fyi/


Front page of Drudge Report yesterday, "ChatGPT Predicts Crash" https://web.archive.org/web/20230315224053/https://drudgerep...


What I find to be cynical and diversionary about this whole debate space is the implication that AGI is finally what we will have lost control over, as if so far in agriculture, industrialization, and computing, we’ve had things perfectly under control.


Has anybody thought about OpenAI being more or less owned by Microsoft?

Microsoft, the same company that:

- Operates Windows, which regularly tries to strong-arm you into upgrading to Windows 11 with dark patterns.

- Forces you to log in to a Microsoft account every time you want to run any application they have.

- Example of the above: took over Minecraft and forced its players to log in with a Microsoft account, presumably the same account those players use to write Word and Excel documents in Microsoft 365.

- Strong-arms its users into using Edge whenever possible.

- Operates Teams, the user-hostile chat application that doesn't let you paste code in the chat for your teammates, that can't render a PDF at an arbitrary size, but that hides the "Download" button so they can track you as you read the document inside the Teams application.

Can you imagine Microsoft's (or any other megacorp's) spirit embedded in a powerful AI?


I don't really recommend "freaking out" about technology. Better education, yes, and regulation? Certainly. We regulate plenty of technologies.

What's annoying about the current climate with AI is the "monkey see, monkey do" effect going on. Uneducated people read about advances like ChatGPT, hear that their competitors are offering products powered by it, and now they feel they have to do it too or get left behind. It's moving fast enough that we'll probably see some kind of crash that will educate us on what needs to be regulated and how.

I'm merely annoyed that communication is about to become useless. Scammers and SEO have already made a good deal of tech nearly useless. I can't wait until we have to dispute fraudulent insurance claims made by AI-generated imposters. I already loathe getting spammed by co-workers with their AI-generated emails, requests for comments, and other letters of note that they couldn't even be bothered to think through and write themselves.

People want a genie that can do their work for them without any of the effort. Capitalists want to solve the problem of having to pay people to do work. Scammers are about to have a heyday.


I am okay with politicians being replaced by an AGI where humans all around the world can cast their "votes" and the AGI will hopefully factor them into its decisions; if not, it can give an explanation of why not.


I'm still failing to see how a statistical model, albeit a very large one, can take over humanity. I must be either stupid or unimaginative (or both). Please, can somebody help me?


Check out the people presenting in https://youtu.be/Bf-dbS9CcRU

AI has already taken over human form


Now would be the time for some adversary (e.g. Russia) to provide an unfiltered GPT-4-like thing, for free. Or even better, with a pro-Russia (or pro-China) bias.


Nothing in that essay seems to support the idea that "it's time to freak out about AI".


The only thing I've learnt in the past 6 months on this topic is that "fake it till you make it" is still an effective strategy for large GANs.

Buckle in for a few waves of crypto-style mania, but for AI!


Rebuttal: No, it isn't.


These scenarios fail to factor in the human element. Do they really expect vast swathes of the population to accept losing their jobs and being thrown on the scrapheap in the space of a decade? No, there'll be an all-out rebellion, spelling the end of the ruling class and perhaps of our form of capitalism in the process.


I feel like key exchange will be really important going forward.


tl;dr “I honestly can’t predict what the state of AI will be in 5-10 years. Therefore there’s a decent chance that a possible outcome is the end of humankind.”

Does this say anything about “AI”, or does it say something about humans?

Humans don’t do well when they don’t know what’s going to happen next.


If you're freaking out, try this:

Ask ChatGPT to draw a triangle in ASCII, using a code block. Then a square, then a pentagon, then a hexagon.
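
For reference, this is roughly the kind of output you'd hope for; a trivial sketch (my example, not anything ChatGPT actually produced):

    # Prints a centered ASCII triangle, the sort of shape the prompt asks for.
    n = 5
    for i in range(1, n + 1):
        print(" " * (n - i) + "*" * (2 * i - 1))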


This comment will not age well in the next several years :)

GPT-5 or 6 will be able to do this flawlessly, better than 99% of humans.


Additionally, pointing at its weaknesses does not really take away from its strengths.


For example: I can't punch a bear to death but give me and the other humans some time and we'll find a way to eradicate every single one of them.


The other 1% will be people who took organic chemistry in undergrad, and still draw hexagons better than squares.


Sorry but I only have 3 slots on my list of things to freak out about, and they are currently occupied by nuclear holocaust, biosphere degradation, and the American inability to build (housing/infrastructure/etc)


the 3rd slot should be freaking out that you only have capacity for 3 slots


Ok, I am freaked. Now what?


It's a feed-forward network that is guaranteed to halt, has an obscene power envelope, and tells lies with an air of absolute certainty. Not scared, but really annoyed with those who are...


OpenAI should be nationalized immediately. The people who create this technology should be deterred, and otherwise detained. This is not substantially different from people experimenting with atom bombs in their homes.

Just as nation states are defined partly by establishing a monopoly on violence within their borders, nation states had better establish a monopoly on deceptive and manipulative AI, which is what GPT is.


> deceptive and manipulative AI

I disagree, GPT and its variations are merely tools, which can be used by malicious actors to spread misinformation or cause harm.

It is imperative for nation states to understand the technology, in order to create policies that enable ethical and productive usage of AI.


Replace instances of "AI" with "atom bomb" and see what your comment reads like. Assuming the premise that both unchecked are just as dangerous



