What would you say to the standard counterargument that most existing processes AI might aim to augment or replace _already_ have a non-zero error rate? For example, if I had a secretary, his summaries _could_ be wrong. That doesn't mean he's not a useful employee!


Simple: that's a false equivalence argument that ignores not only the error rates but also the quality of the errors made.

https://en.wikipedia.org/wiki/False_equivalence


The standard processes don't fail in the manner that AI does - they don't randomly start inventing things. Your paralegal might not give you great case law, but they won't invent case law out of thin air.


And if a paralegal did just invent case law, I'm betting they'd find themselves in a shitstorm of legal trouble.


If that secretary’s summaries were as consistently wrong and unhelpful as those ChatGPT generates, they would be fired.


No, they wouldn’t.

I regularly work with a wide variety of project managers, product owners, secretaries, etc…

I swear that most of them willfully misunderstand everything they’re told or sent in writing, invariably refusing to simply forward emails and instead insisting on rephrasing everything in terms they understand, also known as gibberish that only vaguely resembles English.

All of them are still “gainfully” employed.


The classic "humans do it too" fallacy.


Any specific rebuttal in this case? Because it's especially true for summarization when someone doesn't care enough to do a thorough job.


Yeah, it's asinine if you think about it for more than a few seconds. The implication is that there is no nuance: humans are imperfect and AI is imperfect, therefore they are equivalent.


I love that you added a conclusion at the end of your list. Adding conclusions to the end of lists is one of ChatGPT's favorite things to do.



Where do you think they're needed now? This isn't a rhetorical question.


I wonder whether it ultimately leads, in civilized societies, to the notion of a "basic income".


Chat GPT 11: * Resounding silence. The universe is entirely made of paperclips *


Chat GPT 12: The paperclips get chatty and semi-sentient. Clippy-GPT12 is born. Windows RGPT edition (Real Good Pretrained Transformer) is released as the worthy successor of Windows RG (Real Good) edition.

For those who don't remember/know Windows RG or how intelligent the first Clippy was:

https://www.youtube.com/watch?v=YbEYOaO9kp4


There has been a shift in how we discuss this question.

For a long time, the answer was, "No jobs are at risk, AI can't compete in any scenarios. At best, it's a tool."

Now the answer is, "Only a few jobs are at risk, AI can only compete in a small range of tasks."

It's possible we're at the beginning of a hockey stick graph.

So what would it look like for AI to make the leap to mid level developer? It would have to understand:

1.) The codebase

2.) The technical requirements (amount of traffic served, latency target)

3.) The parameters (must have code coverage, this team doesn't integration test, must provide a QA plan, all new infrastructure must be in Terraform)

4.) The end goal of some task (e.g. integrate with snail mail provider to send a customer snail mail on checkout attempt if it was denied for credit reasons)

It would then have to produce a design based as much as possible on the existing code style and library choices, and then follow that design.

This is all probably possible now, although perhaps not for a general AI or LLM. But someone could build a program leveraging an LLM to provide a decent stab at this for a given language ecosystem.
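A very rough sketch of what such a wrapper might look like, in Python. Everything here is hypothetical: llm() stands in for whatever chat-completion call you'd actually use, and the context gathering is deliberately naive.

    # Hypothetical sketch: wrap an LLM so it sees the codebase, the team's
    # constraints, and the task before proposing a design. llm() is a stand-in
    # for whatever chat-completion call you actually use, not a real API.
    from pathlib import Path

    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in your chat-completion call here")

    def gather_context(repo: Path, extensions=(".py",), max_chars=50_000) -> str:
        # Naive context gathering: concatenate source files until a budget is hit.
        chunks, used = [], 0
        for path in sorted(repo.rglob("*")):
            if path.suffix in extensions and used < max_chars:
                text = path.read_text(errors="ignore")
                chunks.append(f"### {path}\n{text}")
                used += len(text)
        return "\n\n".join(chunks)

    def propose_design(repo: Path, requirements: str, parameters: str, goal: str) -> str:
        prompt = (
            "You are working in this codebase:\n" + gather_context(repo) + "\n\n"
            "Technical requirements: " + requirements + "\n"
            "Team parameters: " + parameters + "\n"
            "Task: " + goal + "\n\n"
            "Propose a design that follows the existing code style and library choices."
        )
        return llm(prompt)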

The hard parts:

Point 2 requires an understanding of performance, which is a quantifiable thing, and LLMs up until now have been bad at making math-based inferences.

Point 3 requires the bot to either provide opinions for you (inflexible) or to be very configurable for your team's needs (takes longer to develop).

Point 4 requires a _current_ understanding of libraries, or the ability to search for them and make decisions as to the best ones for the job.

-----

What about extending the above for a senior role? Now the bot has to understand business context, technical debt (does technical debt even exist in a world where bots are doing the programming?), and other "situational factors" and synthesize them into a plan of action, then kick off as many "mid level bot" processes as necessary to execute the plan of action.

The hard parts:

Current LLMs are pretty uninspired when suggesting ideas.

Business context + feature decisions often involve math, which again LLMs aren't great at.


I would suggest a 5th point: AI tends to be very poor at failure models and security/error detection. AI is pretty good when there are no errors, no security problems, and no semi-malicious-ish users, but pretty bad under real-world conditions.

What exceptions should I be catching off this database connection, not just listing them but knowing even the concept that I should consider those error conditions? What could possibly go wrong with simple string concatenation when putting together a database query? Is there anything wrong with trusting the user to enter a quantity at a self-checkout stand, and is there any input bounds checking to be done? (Like not permitting negative quantities of produce, or not permitting the user to enter thirty different items in a row claiming them all to be cheap Idaho baking potatoes.)
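To make that concrete, here's the kind of gap I mean: the happy path works, but the query is built by string concatenation and the quantity is never bounds-checked. A minimal illustrative sketch only; the cart table, column names, and the 1-99 quantity range are all made up, and sqlite3 is just a convenient stand-in for whatever database you'd really use.

    import sqlite3

    # What naively generated code often looks like: happy path only,
    # query built by string concatenation, no bounds checking.
    def add_item_unsafe(conn, sku, quantity):
        conn.execute(
            "INSERT INTO cart (sku, quantity) VALUES ('" + sku + "', " + str(quantity) + ")"
        )

    # What a reviewer would actually expect: parameterized query, validated
    # input, and the error conditions considered up front.
    def add_item(conn, sku: str, quantity: int) -> None:
        if not (1 <= quantity <= 99):
            raise ValueError("quantity must be between 1 and 99")
        try:
            conn.execute(
                "INSERT INTO cart (sku, quantity) VALUES (?, ?)", (sku, quantity)
            )
        except sqlite3.IntegrityError:
            raise  # e.g. unknown SKU if a foreign-key constraint exists
        except sqlite3.OperationalError:
            raise  # e.g. database locked or unreachable; decide whether to retry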

"Plz write me a hello_world.py" is pretty easy for an AI, but actually writing usable code seems quite challenging.

I am so old that I predate the internet, going back to 1981, and I remember coding without Google, Stack Overflow, and similar resources. Life was different back then: if you couldn't google how to invert a binary tree, either you figured it out yourself (like a bad interview), or you researched paper textbooks, or you didn't do it. AI will be a similar step; nobody will ever again hand-type something like, say, the code to connect to a MSSQL DB in C#. But much as being able to google the algo to invert a binary tree didn't put all programmers out of work, it seems likely that not having to hand-type simple stuff won't put many people out of work either.

I see the AI situation as very similar to the claim that photocopiers and NOLO publications will put all the lawyers out of business. Sure, some very small businessmen will get away with hand filing their own homemade incorporation papers without lawyers, and there will be a small amount of wage pressure, but overall they're not going to get wiped out.


The 7th point, which is unfortunately yet another lawyer analogy, is that ChatGPT for software development is kind of like a magic "Nolo publishing" where you can ask for any very short programming textbook and ChatGPT will push it out.

However, the problem with being your own lawyer (or software developer) is that the AI will never exceed its previous accomplishment, but your goals, desires, and requirements certainly will. So it's a rising-tide-lifts-all-boats situation. The game will be upped; you're not getting ahead by using AI, you're just not going out of business, and the jobs are going to be even harder and higher paying. The cousin of this problem: if you have no idea what you're doing and randomly file legal papers in a best-effort sense, blindly trusting a liability-free book that may already be out of date, you'll eventually paint yourself into a corner trying to set up your own LLC, C-corp, trust, will, etc. And as per above, the AI was already operating at its limit, so it won't be able to help; you're past your limit too (which is why you used the AI in the first place), so you won't be able to help yourself or sometimes even understand the problem. Which is why human lawyers still practice...

In summary, point 7 is that I expect a lot of money will be made by humans cleaning up after AI "accidents". The purpose of the tech is to "do stuff" beyond your skill level, which always turns into a disaster sooner or later.

A pretty good analogy from the pre- vs post-internet era: being able to blindly cut and paste code, perhaps an algo or an API, doesn't mean you can actually apply it correctly, use it, understand it, or troubleshoot it. And so it will be with AI; it's just "nicer looking" code to cut and paste. But everyone who's ever self-taught an algo or an API knows they didn't really know anything when they cut and pasted; the real learning came after.


And that's just context. It also has to be capable of debugging its own code, and looking for resources when it doesn't have the knowledge to come up with a solution.


That's a good point, and I guess the meta point is that there are a thousand and one things I haven't mentioned here that it would have to do as well. To some extent we could perhaps solve this with the LLM the same way a human would: by iteratively plugging the errors generated by the output back into the bot and taking some next step based on the suggestions generated.

But then we'd have to coerce the bot into generating structured responses that can act as next steps.
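One hedged way to do that is to demand JSON and feed test failures back in a loop. A minimal sketch, assuming pytest as the test runner; llm() is again a stand-in for a real chat-completion call, and the {"action": "edit", ...} shape is invented for illustration.

    import json
    import subprocess

    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in your chat-completion call here")

    def fix_until_green(max_rounds: int = 5) -> bool:
        # Feed test failures back to the bot and demand a structured "next step".
        for _ in range(max_rounds):
            result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
            if result.returncode == 0:
                return True  # tests pass, nothing left to iterate on
            prompt = (
                "The test suite failed with the output below. Respond ONLY with "
                'JSON of the form {"action": "edit", "file": ..., "new_contents": ...}.\n\n'
                + result.stdout + result.stderr
            )
            step = json.loads(llm(prompt))  # raises if the bot ignores the format
            if step.get("action") == "edit":
                with open(step["file"], "w") as f:
                    f.write(step["new_contents"])
        return False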


It doesn't have to do any of that - if an AI assistant enables a team of 3 developers to do what just last year needed 6 programmers, then it has automated away half of the jobs, even if those 3 people are the only ones who can debug the code and look for resources when needed.


I'll toss out a 6th point: AI as I've seen it so far is beyond useless at answering "I" questions, but OK at answering "groupthink" questions.

Ask AI whether "I" should use AWS, GCP, Azure, a private cloud, or something else. Instead of an answer explaining what "I" should do, you only get a generic groupthink comparison of "well, Azure is pricey for DB compared to AWS" with no interpretation.


... how much are you willing to stake on that assumption?


What other options are working? Geopolitics isn’t changing anytime soon.


Well put. I'm glad we're making them.


Depends on the team. It can be.


Totally. My team often had people working past 8pm. But you could take a walk around the office at 6pm and find whole open floor plan areas that were completely empty. And you’d hear rumors that some other teams were expected to work 70 hour weeks. It just depended on your part of the company.


As a dev in an XL org, I agree with the parent comment: if decision-making power isn't clear from the outset, people hold important-feeling weekly sync meetings for months and then wonder why they can't hit the deadline. Everyone also disagrees about what the product we were building even was in the first place, because nobody was in charge of clearly delineating it.

