Hacker News
As IBM pushes for automation, its AI simply not up to the job of replacing staff (theregister.com)
63 points by Brajeshwar 69 days ago | 79 comments



> "With AI tools writing that code for us ... why pay for senior-level staff when you can promote a youngster who doesn't really know any better at a much lower price?" he said. "Plus, once you have a seasoned programmer write code that is by law the company's IP and it is fed into an AI library, it basically learns it and the author is no longer needed."

This betrays not only a misunderstanding of what LLMs can do, but a basic lack of knowledge of the software development lifecycle. Krishna gets paid $20 million a year to be this big of an idiot. He should lose his job immediately. He hasn't earned it.

Remember, your company's management would do anything if it meant they could get a dollar. Never bend over backwards for these troglodytes.


Here's a counterpoint:

- They are saying that cynically, as signalling about cost savings.

- They know this will slowly rot and destroy long-term development. They are probably getting screamed at about it from all sides internally.

- They are doing this to meet their own targets.

- If everyone is doing this, it means their competitors are also taking the hit, so it might not be as bad for them long term going this route. It's a similar thing as everyone having bad call centers.


You're right about the first three; they've made up a number which gets them the maximum return on shares. Most of Krishna's compensation is shares.

I don't think they care about competition at all, though. In fact, I'd venture to say that most publicly-traded companies are helmed by people who don't care about the competition beyond what it means for share price. And share price is itself a relatively poor metric for long-term sustainability and growth at a company, because most of the people who are supposed to care about it as an indicator of those things are already hedged and ready to divest at a moment's notice. No one with any real voting power at most major companies has put all of their eggs in the company basket; they diversify.

Executives who are compensated in shares also get compensated so well that they reach the point of diminishing returns on what money can buy you in a given society. If you, for a decade, get paid $1.5 million in cash, then get bonuses for meeting short-term performance targets, there's basically nowhere in the Western world you can't exist comfortably. And that's before you exercise stock options. So the risk of failure has no real bite to it.


Exactly

They are also going for the entirely wrong market.

Everyone is arguing about replacing programming jobs, "AI can't replace my job".

But it is really just fractional.

Let's say you have 5 marketing drones writing boring marketing material. AI allows them to do more; you still need humans to use the tools and to edit, but now you only need 3.

AI can't do the 'entire job' but it did enough to eliminate two positions. This is what is already happening. Do you think this stopped with the ESPN case? Companies didn't stop because they got caught, they just got better at it.


I used to subscribe to this belief long before LLMs started confabulating, but I'm now starting to feel forced to concede what will actually happen: management will fire two marketing drones, the remaining three will use AI to write stuff but then spend five drones' worth of effort fixing the gish gallop, the overall quality will go down, the three drones will burn out, and the AI will be used to replace them entirely. The resulting slop will be entirely useless.


Yes, the causality is reversed in a lot of people's minds, but this is more in line with how some other "innovations" were enforced.

Big Boss reads Gartner. Gartner says do X. Big Boss mandates X, and gets the cost savings they want via firing or a hiring freeze. The quality/quantity of the output was hard to measure compared to the concrete reduction on the spend side.

20 years ago it was offshoring. I recall a job where managers were all essentially given the offer at budget season: you can have 5 hires in Bangalore or 0 hires in the US. Not a pot of money with managers deciding X vs Y; it was simply an ultimatum, X or 0.

Certainly it worked in some areas, but it was first forced top-down in many areas where it didn't. AI efficiencies will happen similarly.


I've waffled back and forth on that same line of reasoning.

Maybe it just needs to reach the point where the AI "confabulating" is equal to or less than human "confabulating".

Plenty of humans BS and make errors. So AI just needs to reach parity to allow replacing the humans.


Some humans are capable of saying "I don't know" or, better, "I'll find out".


I think that information needs to be included in the input stream. Instead of asking a model "what did I have for breakfast", you give it an API it can query to look at your food journal, and if something is not available, it's marked explicitly as unknown.

If you make a human stressed enough, they will in fact tell you an answer whether or not they know the answer.
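The "explicitly unknown" idea above can be sketched concretely: a tool the model queries instead of guessing, where missing entries come back as an explicit UNKNOWN rather than something confabulated. Everything here (the journal data, the function name) is hypothetical, purely for illustration:

```python
# Hypothetical food-journal tool a model could call instead of guessing.
# Entries never logged are reported as explicitly UNKNOWN, so the model
# has no cover for inventing an answer.
FOOD_JOURNAL = {
    "2024-05-01": {"breakfast": "oatmeal", "lunch": "soup"},
    "2024-05-02": {"lunch": "salad"},  # breakfast never logged
}

def query_journal(date: str, meal: str) -> str:
    """Return the logged meal, or the explicit sentinel 'UNKNOWN'."""
    day = FOOD_JOURNAL.get(date)
    if day is None or meal not in day:
        return "UNKNOWN"  # an explicit signal, not an empty string
    return day[meal]

print(query_journal("2024-05-01", "breakfast"))  # oatmeal
print(query_journal("2024-05-02", "breakfast"))  # UNKNOWN
```

The point of the sentinel is that "I don't know" becomes a first-class value in the input stream, rather than a gap the model papers over.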


IBM has been a dead man walking for decades.

If the goal is functional software you're right.

However, there are two goals at IBM: a planet-scale patent portfolio, and satisfying the letter of their contracts at the lowest possible cost.

If they can find a way to get paid without staff they'll be delighted.

Unfortunately, their core competency is hiring 20x as much staff as needed for a job, at the lowest quality they can achieve.


This Forbes article from 2017 seemed to hit the nail on the head, regarding their outsourcing to India being a major problem: https://www.forbes.com/sites/panosmourdoukoutas/2017/10/06/i...


"Outsourcing to AI" could be seen as the latest attempt at making the "outsourcing to India" dream work.


You see this in so many tech companies. Once the product is “done”, technical leadership is out and clueless beancounters take the reins.


ex-ibm here.

Some context from an earlier thread: https://news.ycombinator.com/item?id=41558554#41561970

Basically, management rot sets in with MBAs in charge who have no clue how software products get built. They see coders as overhead. Big mistake. Short term, it leads to some improved profits. Long term, it leads to cultural rot, and the org is doomed. Happening to Amazon and AWS as we speak.


The sad thing is, Krishna supposedly has a PhD in electrical engineering. I'd expect this from the MBA brainrot crowd, but he's supposed to know better.


This read to me like: why pay a properly experienced person to generate good, reliable results when you can combine two sub-par things (no offense to junior engineers, but you don't know all the caveats and sharp edges yet), where one feeds the other and both remain sub-par, while massive money is shovelled into the AI forges so they can figure something out? Then, when the whole house of cards comes crashing down, blame the junior (the human) and cite the fine print saying the system may generate false/incorrect things.


That may very well be the case.

Remember, what happens in five years time isn't Krishna's problem. What happens in the next 90 days is Krishna's problem. And if he can drastically reduce payroll while maintaining revenues in the next 90 days, he is judged by the market to be a good CEO.


I mean, to be fair, it's IBM. Like, the bar for CEO performance is extremely low; they haven't been what you'd call on the up since the 80s.


Expect to see the same story repeating across many large enterprises that dive deep into LLMs without fully understanding exactly what they want out of it.


Oh, they know what they want out of it, but they don't realize that LLMs / AI aren't up to the task at all. At best it's next-generation code completion, and code completion hasn't made any jobs obsolete. (Or rather, the time saved by code completion has been outweighed by the increased number of software projects, and advances in programming languages and libraries make much code generation unnecessary anyway.)

It sounds like IBM's management in this case thinks of AI-powered code generators like they (and others) once did of COBOL: "it's basically English, so non-programmers can use it".


If you can't get good results from outsourcing, why should you expect good results from an LLM?


Because this time, the C-suite closed their eyes extra hard and tapped their heels one more extra time.

Still waking up in western Kansas, though.


Somehow LLMs unearth peoples' forgotten wishes to believe in magic and miracles again.


Expect to see the same story being said while actually being used as a cover for replacing expensive workers (e.g. anyone in the US) with cheaper workers. Wall Street is OK with this.


Sometimes it feels like this AI thing is gonna be The Emperor's New Clothes for a whole class of management types. Maybe we're actually automating their jobs: AI makes me able to do more, but a lot of that is the admin stuff. I still have to do my own research (which many on the non-tech side don't seem to understand yet), but I can focus more on the tricky parts of my job, which happen to also be the parts I find most rewarding. I'm not sure how they plan to automate that when they're sitting there asking ChatGPT questions like it's an oracle.


if you want a vision of the future, imagine a chatbot saying "I'm sorry, are any of the following articles useful?" in response to your screams, forever.


I'm going to steal this comment, that's great!


Black Mirror would have had a field day with it.


It seems that an alarming percentage of executives do not have the skillset to steer a ship when interest rates are above 0%.


AI=Actually Indians?


I always knew the true AI datacenters were the big buildings that said "TATA CONSULTANCY" and "Accenture"...


Anonymous Indians


IBM needs help, and cutting costs isn't the answer.

Their revenue has been falling for a decade now [0]

I don't even know what it is that they sell anymore. I wonder if they do.

Also, I found this paragraph interesting:

> "Senior software engineers stopped being developed in the US around 2012," Blake said. "That’s the real story. No country on Earth is producing new coders faster than old ones retire. India and Brazil were the last countries and both stopped growing new devs circa 2023. China stopped growing new devs in 2020."

The ROI of one good engineer who understands their computer can be absolutely immense. The logical thing to do is to have some good ones and build an organization around them to help them focus 100% on building profitable things.

I get the feeling this Krishna fellow sees engineers as a cost to be reduced and thinks the value is added by the rest of the organization, rather than the other way around.

[0] https://valustox.com/IBM


The "AI will automate it and we can let go of knowledgeable humans" requires a very naive and sinister mindset, which seems widespread among so called leadership roles. If you ask yourself how a rational person could make such a overcommitment to an unproven technology, the answer is: They are not being rational.

Why are they in leadership roles? No clue.


> AI will automate it and we can let go of knowledgeable humans

I used to be upset when people said these things based on pressing a button and watching a computer generate a word salad like a slot machine hitting the jackpot. Until I realized that it’s not a statement about the machine, it’s a statement about themselves. They can’t distinguish between competent people and confident sounding and well-formatted regurgitation. Which is especially concerning from leaders, who really should be able to weed out imposters.

Now, AI may quickly make more leaps that change the equation. But anyone with an ounce of experience in the tech world knows not to take optimistic extrapolations at face value.


> Why are they in leadership roles?

Because they’re useful to the bottom line and are anti-social enough to destroy anyone who threatens it.

People like this, and moreover mentalities like this, should not exist in our society, yet we incentivize them.


Leadership always has a golden parachute when things come crashing down. Regardless of the long-term consequences or outcome, they will always win, or move on to the next thing before the dues arrive.

Also, why can't we replace the leadership with AI? We have more books on management than on actual technical knowledge. We could use AI to make decisions similar to the ones these leaders make, at like a billionth of the cost.


> Why are they in leadership roles? No clue

Because, IMO, there's a large overlap between being an irrational person and also being likable and smart sounding.

Confidence doesn't actually come from being good, it comes from being delusional. In order to truly market yourself you need a healthy disdain for the truth. You spit on reality and create your own.

Engineers and other rational-minded people have a big problem: they're too honest. They do feasibility analysis, they look at risks, they talk about weaknesses, pros and cons. But this doesn't inspire confidence, quite the opposite; it makes the business folk wary. Why go with you, who estimates a 10% chance of failure, when they can go with Mark, who says it's guaranteed to work?

Extrapolate that across all decisions and boom, the people at the top are usually the most irrational. It doesn't really matter that it's all a lie because when everyone walks out of that boardroom, they feel comforted.


The wet dream of management is to get rid of high-paid knowledge workers. Now ChatGPT is the answer to their prayers, or so they think.


> "Watsonx Code Assistant technically knows PHP, but it is very inferior to GitHub Copilot,” Blake told The Register. “Still, it's better than nothing. The CEO keeps imploring developers to use it. No one does, except maybe one or two people."

I'm very curious about what PHP is being written over at IBM.


How misinformed do you have to be to think that you can just wake up one day and replace 8,000 people with "AI"? What does that even mean?

Instead of doing that, you should probably fire and replace the person who suggested that in the first place.


Currently LLM’s have pretty limited context windows and limited ability to come up with brand new ideas, right? But they do write very confidently. So they’ll have to find some people at the company whose main contribution is contextless repetitive pablum, empty phrases expressed with great vigor and confidence. Preferably someone highly paid. Surely nobody like that is working there…


Besides the obvious, you could honestly replace a huge amount of the horrendous documentation (50%+ of it is just carefully camouflaged marketing fluff anyways) for a large number of IBM products with the hallucinations of an LLM and be no worse off.


Yes but then there's also real documentation like this: https://www.ibm.com/docs/en/SSQ2R2_15.0.0/com.ibm.tpf.toolki...


Has anyone ever read an IBM Redbook?


I bet you could even use an LLM to analyze the work output of different groups within a company and see whose output most closely matches that of an LLM.

Hint: Those people won't have words like "content" "editor" "writer" or "documentation" in their title. They will, though, likely have words like "director" and "vice president" in there somewhere.
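Taking the joke semi-seriously, the comparison above could be crudely approximated with nothing fancier than bag-of-words cosine similarity. A stdlib-only toy sketch: the sample texts are invented, and a real analysis would use proper embedding models rather than word counts:

```python
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity of two texts represented as word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Invented samples: LLM-flavored pablum vs. an exec memo vs. an engineer's report.
llm_sample = "we are excited to leverage synergies to drive value going forward"
memo = "excited to leverage synergies and drive shareholder value going forward"
report = "the parser failed on nested unions so we rewrote the lexer in rust"

print(cosine_sim(llm_sample, memo) > cosine_sim(llm_sample, report))  # True
```

Unsurprisingly, the boilerplate memo scores closer to the boilerplate than the concrete engineering report does; that's the whole (half-joking) point.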


Haha. In general, I’m trying very hard to not see jobs I don’t understand as easily replaced by LLMs. Everyone thinks somebody else can be, right?

But I would be fascinated to see a company with the higher levels replaced by LLMs. Like a union or guild where the members are actually voting on the real decisions, and the LLMs handle summarizing these decisions and building narratives.

No idea if it would work. Might still need a human CEO. Would be interesting to see, though.


>>> have pretty limited context windows and limited ability to come up with brand new ideas, right? But they do write very confidently

so LLMs are basically senior management? :P


For some extremely peculiar and entirely ineffable reason, executives don't seem to be considering the possibility of replacing themselves with AI.


It’s long been the dream of management to run the company without those pesky employees. I imagine a good bit of the current AI hype bubble is due to that dream.


I have had this feeling since I started working... Nobody actually wants employees.


The perfect widget will get produced automatically, serving all customer needs. Customers will be supported automatically. Outages will be fixed automatically.

All I do is press a button and collect MRR.


Lot of CTOs about to lose their jobs if true


For good reason


> We're told US-based network engineering staff will be reduced to two or three employees per shift during US business hours, representing a 33 percent loss of staff per shift. That's monitoring and maintaining all of IBM’s global datacenters.

What does "network engineering" mean here? (Is it distinct from a role like "technician" or "operations"?)

If you wanted to make IBM's data centers quickly start having downtime, or become unresponsive to new needs, how many people would you have to hire away?


Sounds like a Network Operations Centre (NOC)?

Operations and tech, sure. But some know Linux, some Windows, some networking. A PC tech won't have a CCNA.


> "Senior software engineers stopped being developed in the US around 2012," Blake said. "That’s the real story. No country on Earth is producing new coders faster than old ones retire. India and Brazil were the last countries and both stopped growing new devs circa 2023. China stopped growing new devs in 2020."

Huh. I hadn't heard this, and it goes against my intuition. They cite a Stackoverflow survey, and some other articles. A bolus of new developers entered the industry around 2000-2010, and are at least senior level now. I assumed the same trend had continued, since development is a pretty safe route to a good salary. What industry are reasonably intelligent people going into these days? The trades are a great option, but I was recently hearing about a scarcity of skilled tradesmen, too. AI was not an existential job threat in 2012, either, so that's not the reason, nor is the present moment's poor job market. What are people doing to make a living now?


I have a suspicion that may refer to how many people hop jobs for pay raises or otherwise manage to avoid improving their skills. Lots more developers, plenty of years-in-profession seniors, few high-experience seniors.

Ever since people started talking about the coding abilities of AI I've been thinking this pattern would just accelerate, with beginners and inexperienced developers leaning on it too much instead of improving their own skills.


An engineer is to a software company what a tractor is to a farmer. The tractor does the thing. Getting rid of the tractor and expecting the same level of productivity is... insane?

You can add a bigger engine that makes the tractor better at tractoring (e.g. GitHub Copilot), but that doesn't replace the tractor.


To me, the most interesting thing in the article was nothing to do with AI.

> "Senior software engineers stopped being developed in the US around 2012," Blake said. "That’s the real story. No country on Earth is producing new coders faster than old ones retire. India and Brazil were the last countries and both stopped growing new devs circa 2023. China stopped growing new devs in 2020."

This is one article quoting one person at IBM. But if that's true, that's a seismic shift in the landscape from the last, oh, 80 years or so.

That ought to make pay and working conditions better - if true.


Unrelated question, but was Watson (the version that played Jeopardy) an LLM?


Not quite... Watson predates the transformer architecture by at least a decade.

Refer to this article https://cbmm.mit.edu/sites/default/files/documents/watson.pd...


I think "LLM" is not synonymous with "transformer", and LLMs are quite a bit older. AFAIK, an LLM is just a model (any kind of large model) that predicts the next token based on the previous tokens.
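That definition can be shown in miniature. Here's a toy bigram next-token predictor in Python; nothing like Watson or a real LLM, just the bare "predict the next token from the previous ones" idea, with made-up example sentences:

```python
from collections import Counter, defaultdict

def train_bigram(corpus: list[str]) -> dict:
    """Count which token follows which across the corpus."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(model: dict, token: str):
    """Return the most frequent successor of `token`, or None if unseen."""
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

model = train_bigram(["the cat sat", "the cat sat on the mat"])
print(predict_next(model, "cat"))  # sat
```

A real LLM replaces the counting table with a neural network over billions of parameters, but the interface is the same: previous tokens in, a distribution over the next token out.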


Yes, it was called Watson, but I don't think that technology is related to what they use now, so probably not an LLM at the time. "Watson" just became a brand name for IBM.


All they have to do is keep on dumping all their most experienced and talented (expensive) staff, and AI will be up to the task after all.


If you keep lowering the average ability of the workers, eventually AI will overtake them. Unless your workers are the ones making the AI, in which case I guess it's possible the AI would regress faster than the workers.


Yup, if the only task is to let people go, the AI can already perform that.


A far cry from the 1979 IBM presentation.

> A COMPUTER CAN NEVER BE HELD ACCOUNTABLE

> THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION


They're notorious for firing people when they're over 50.


If it is so widespread, why hasn't there been a class action lawsuit against them? Not refuting, genuinely curious.


Because they signed away that right: https://features.propublica.org/ibm/ibm-age-discrimination-a.... That same article mentions a lawsuit over IBM changing its pension scheme, forcing it to revert to the old one and pay a $300M settlement.

That said, there have been a number of age discrimination lawsuits, just no major class action it seems: https://www.cohenmilstein.com/case-study/ibm-age-discriminat..., https://www.theregister.com/2024/06/20/ibm_and_kyndryl_again..., https://www.hrdive.com/news/ibm-hr-professionals-suit-age-di..., https://thenationaltriallawyers.org/article/ibm-hit-with-1-5...


They've been trying to for some time now [0]

My understanding is they are having trouble getting the individual arbitration clauses thrown out to allow it to proceed

[0]: https://www.theregister.com/AMP/2024/01/23/ibm_supreme_court...


This article details some past legal actions taken in different countries about it: https://www.theregister.com/2022/08/16/ibm_dinobabies_case_s...

Edit: heh, replied concurrently with others pointing in the same direction. Somebody owes somebody a soft drink.


The first google link for "ibm ageism lawsuit" https://www.cohenmilstein.com/case-study/ibm-age-discriminat...


Why pay senior devs for code that might actually work when you can hire randos who spend all their time debugging hallucination-ridden AI garbage code?


"The truth is that Watsonx [IBM’s generative AI offering] isn’t even available to employees to attempt to try and help automate some meaningless task. It's so far behind OpenAI and ChatGPT that it’s not even close."


Watson is just a brand name for consulting, not any specific tool.


Watsonx is a specific tool.


I always think about the directors/VPs of these divisions with huge salaries, running business units that are completely useless. They should just fire all those people and hire a few OpenAI engineers.



