[flagged] Nvidia CEO says "coding is a dying profession" (techradar.com)
28 points by romeros 7 months ago | 95 comments



Hey chat-gpt, create open-source Nvidia drivers


Hey ChatGPT, what would a CEO do in this situation? And this one? Thanks, you're the new CEO now.

Yread says: "CEO is a dying profession. AI will make sure no one has to make decisions."


I think this has been said pretty much every decade since computers were invented. At one point, for instance, they said that Object-Oriented Programming would kill "coding" because people with limited training, or businesspeople, would just link reusable components and voilà, here is your airline reservation system.


When I hear these types of statements, it's usually from an MBA dropout who thinks we are playing with Legos.


This time they're right. The next big programming languages are going to be English and Chinese.


They've said that every single year since 1950. (So far, the only lasting result is COBOL.)

"AI" doesn't actually code, it compiles code snippets downloaded from the Internet. (Basically, it's just Stackoverflow with extra steps.)


The definition of "code," as you put it, is what's going to change. That hasn't happened at all since 1950.


Even if it does, it will end up looking more like a mathematical proof with exact definitions, sub-definitions and the equivalent of "lemmas" and such.

Or maybe it will look like current laws and regulations, i.e. legal code; maybe something like this (the first legal code I could think of...) [1].

But I'm sure it will not look like what people imagine when they first hear "programming in English" - a book or a nice story or a conversation.

[1] https://codes.findlaw.com/ca/vehicle-code/veh-sect-23152/


It will look a lot like this: https://chat.openai.com/share/345d0bca-7fc3-4b91-a1f1-95b1e8...

In other words, English (or your human language of choice.)


90% of the work of building real-world applications is imo fixing and adding features, i.e. maintenance. I rarely see it considered when talking about "English as code". How does this work in a more complicated application with multiple "pages" and "features"? One conversation per feature? How do you make changes to those when requirements change? If you just go back and change parts of the conversation, how do you make sure the final product still covers all the existing use cases?

The example is outputting (afaict, skimmed most of it) effectively 15 lines of code. In terms of features, the equivalent of a single button on an ops dashboard page.

That's just not maintainable. Sure, you can write one-off, throwaway "code" this way but there's no way you could build (and maintain) a business this way, at least imo.

You'd need to "regenerate" chunks of your business logic from changed requirements, hoping they'd remain about the same. Which is why I think you'd converge on Math or Legal Code: precise and hopefully reproducible output when slightly tweaked.


My thinking is this: strip out everything but my requests, tidy it up a bit, and feed it to ChatGPT in a fresh context window, and you'll (presumably) get the same Python code out, or something functionally equivalent.

It's like running early C++ code through cfront to generate a C program that is then compiled and executed as usual. Except this time, the input is not C++, it's plain English. That is not a small distinction!

For one thing, any reasonably literate person can now get the computer to do what they want. The specification is now the program itself, a longstanding holy grail of programming. What decades of marketing hype and empty promises about "5GL" languages kept dangling is now within reach.

As for maintenance, the document that is 'maintained' is the English, not the Python. When the requirements change, regenerate the output. Ideally no human ever needs to see the Python code, much less edit it. (Notice that ChatGPT isn't just generating the code from scratch, but successfully editing it.)

There is no reason to think that small random variations in the exact output code will be a showstopper. Automated test generation will be a big part of the picture, of course.
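
To make that loop concrete, here is a rough sketch of what "maintain the English, regenerate the code" could look like. Everything in it is made up for illustration: generateCode() stands in for whatever LLM call you would actually use, and the file names and test command are arbitrary.

    // Sketch of the regeneration loop: the English spec is the maintained artifact,
    // the generated code is a disposable build product, and a fixed test suite is
    // what carries the guarantees across regenerations.
    import { execSync } from "node:child_process";
    import { readFileSync, writeFileSync } from "node:fs";

    // Hypothetical LLM wrapper: takes the English spec, returns source code as a string.
    async function generateCode(spec: string): Promise<string> {
      throw new Error("plug in your LLM provider here");
    }

    async function regenerate(): Promise<void> {
      const spec = readFileSync("spec.en.md", "utf8");  // the document a human actually edits
      const code = await generateCode(spec);            // fresh output, possibly slightly different
      writeFileSync("app.generated.py", code);
      execSync("pytest tests/", { stdio: "inherit" });  // same tests, every regeneration
    }

    regenerate().catch((err) => {
      console.error(err);
      process.exit(1);
    });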


That is sort of what I mean: to be able to continuously regenerate the output in the same manner and achieve consistent results, imo, you would probably converge on something that leaves little open to interpretation, i.e. mathematical notation or legal code.


At best it will look like what lawyers write, and that takes way longer to learn to read and write than what programmers do. It isn't an upgrade, it is a downgrade.


To test your idea, I just asked an LLM Assistant:

I need a website on which anyone could leave me a message (inputs are name, age, message). Please create a website for that, and a good-looking one, e.g. use some popular css framework. Also please write a server that stores this data and host both a website and my own panel, so I can read these messages later. I should be able to mark messages as read and delete them. Use Node.js, Express, MongoDB. Don't comment on it much, I need to see a minimal viable product first, then we'll talk.

It started whining about a project plan and the complexity of developing a full-fledged application, and even tried to refuse to post it into the chat, instead providing a link to a minimal example it had just "created" on GitHub. Seems that you're right: it's pretty hard to force it to do a specific thing, and from its answer it feels like you have to hold its hand all the time.

But then I went to the Characters section and created this:

The following is a conversation between Builder and Prompt. Builder is a bot designed to create source code, scripts and configuration files based on Prompt. Builder can create full-fledged applications and solutions and follows best practices in doing so. It doesn't think that something is too complex or too lengthy to generate, it just dumps it right into the chat. Some call this Builder bot "The Chad of Software Development", because it can solve basically any task he gets from Prompt.

Guess what, it started spewing out teh codez along with npx create-style instructions (the frontend just finished, now it's configuring Mongo), and it seems that the code is pretty much what I would write. Who's holding whose hand now? I don't think my request reads like legalese. And the model isn't even GPT-4. It's a 25GB Mixtral with a 4k context window.

The key point here is that most people only see one side of AI, which is the default character card of GPT-3.5/4. There's so much more than that.

(Mongo is up and running, and it's in the middle of the backend now. I see the typings for mongo/Message, CORS, some routes. It's mostly what I wanted, and the routes in the frontend even match those on the backend. Sure, there are lots of problems, but I'm using a model from 2023, and not even the smartest one.)
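
For reference, the backend it was putting together looked roughly like this. This is my own reconstruction of the shape of it, not the model's verbatim output, and names like Message and /api/messages are just my guesses:

    // Minimal Express + Mongoose sketch of the message-board backend described above:
    // a Message model (name, age, message, read flag) plus routes for submit, list,
    // mark-as-read and delete.
    import express from "express";
    import cors from "cors";
    import mongoose from "mongoose";

    const MessageSchema = new mongoose.Schema({
      name: { type: String, required: true },
      age: { type: Number, required: true },
      message: { type: String, required: true },
      read: { type: Boolean, default: false },
    });
    const Message = mongoose.model("Message", MessageSchema);

    const app = express();
    app.use(cors());
    app.use(express.json());

    // Public form posts here.
    app.post("/api/messages", async (req, res) => {
      const doc = await Message.create(req.body);
      res.status(201).json(doc);
    });

    // Owner panel: list, mark as read, delete.
    app.get("/api/messages", async (_req, res) => {
      res.json(await Message.find().sort({ _id: -1 }));
    });
    app.patch("/api/messages/:id/read", async (req, res) => {
      res.json(await Message.findByIdAndUpdate(req.params.id, { read: true }, { new: true }));
    });
    app.delete("/api/messages/:id", async (req, res) => {
      await Message.findByIdAndDelete(req.params.id);
      res.status(204).end();
    });

    mongoose.connect("mongodb://localhost:27017/messages")
      .then(() => app.listen(3000, () => console.log("listening on :3000")));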


No, nothing will change. LLM's cannot code.


Of course they can. Either you haven't used a good LLM, or you're in denial.


The original design goal of COBOL was to make professional programming obsolete. Business people could write code.

And then, some time later, COBOL was the language most commonly used by professional programmers.

So, yeah. This has been going on for a long time.


It's true. Nobody writes (assembly) code anymore; machines do all that work now.

(For that matter, having spent a lot of my career working on compilers, it's notable that nobody really writes code generators anymore, either! It's all been done already.)

When I started coding, memory management was all basically manual, but machines do all that work now.

Yes, coding is a dying profession - but it always has been, and a new profession of coding is perpetually being born atop the old abstractions, as the work continues; because the work is not really about the details of the machines, except incidentally, but about understanding the problems and the needs well enough to express them in whatever the current language of the machines happens to be.


We are still doing a stupid amount of low-level stuff (at least in our heads) now. I thought by the end of the 70s/80s, with Lisp, APL and even BASIC, that we didn't need to worry about memory management, cycle management, etc., anymore, but now, half a century later, we still have to, because if you don't, everything is laggy, leaky, OOMing, killing the browser, eating the battery, costing stupid cloud $, etc.

The abstractions often suck so much that I long for the manual stuff (and do it when I am doing embedded programming for 20KB, 2MHz devices); it is usually far more enjoyable, as you can reason about it in a very goal-oriented way.

And yes, abstracting a human problem into something logical was always the actual job of programming, and that will remain for a while; LLMs are terrible at it so far. When your brain manages to cut it up into small pieces of logic, LLMs do puke out the code that fits with bizarre speed. That code often suffers from not taking performance and memory management into account, though, and thus makes for a terrible app in the end.


> memory management was all basically manual, but machines do all that work now.

Yes, and that has not been a good thing.

The advent and success of Zig and Rust are a modern recognition that outsourcing memory management to a stop-the-world process has been an abysmal failure.


Obviously he has a huge incentive to publicly proclaim that the ceiling of AI is limitless. But does anyone think he actually believes this?


When your business is about selling shovels to gold diggers, there is no way you can admit that there is no gold out there.


I find it tough to understand why anyone would listen to a CEO, at all. If they happened to be telling you something truth-adjacent, it would be only because it served them in some way. We need an 'ok, boomer'* but for business leaders.

* no hate to boomers; just referencing the sentiment


Dude is just looking out for the shareholders.


AI rescued Nvidia when nobody was buying their shovels for digging crypto gold. If it weren't for ChatGPT, Nvidia's story would look very different right now. He is probably just hoping that this dream never ends. Otherwise there's no way to justify the current valuation in the long term.


> The time otherwise spent learning to code should instead be invested in expertise in industries such as farming, biology, manufacturing and education

i.e. absolutely brutal physical labor jobs where labor is de facto expendable and wealth inequality runs rampant.

He might as well have said "If you can't become a doctor or lawyer or don't have a PhD in CS or Math and can't work for me, prepare to work the fields and make me richer."

I'm not sure if he meant that, but that's what I took from this statement. And I can absolutely see that happening as AI "eats the world", as it were.


LLMs can help with programming. But you cannot just copy/paste the code from LLMs and push it to production. You need to understand the code that LLMs are producing, otherwise sooner or later you will get in trouble.


> But you cannot just copy/paste the code from LLMs and push to production

For how long, though? This is the state of LLMs today, but I'm not sure this will be the case in 3-5 years.


As a developer, I agree with this. Huang may be playing a marketing game here, but he's right. "Coding" isn't something humanity needs to do, because coding in textbooks and coding in reality are two different things. Our IRL coding is a bunch of self-inflicted and cheered-on complexity and an artificial barrier that secures the jobs. AI has very good potential to distill coding back to its essence and then some. Not the current AI, but 2024 isn't the last year of humanity either. So coding isn't dying, but coding as we know it should die anyway. It should have, long ago, but it's so compelling to just stay in the comfort zone of being a software developer. He claims everyone will be a coder, and that I find realistic. Of course people incapable of even the easiest levels of "STEM" will exist, but the bar will lower dramatically.


"Commercial trucking is a dying profession." -some guy from 10 years ago

Of course today there are more commercial truckers than ever. So many people seem to forget that not only are sigmoid curves [1] a thing, but literally every single neural network advance to date has ended up being on one. That said, I imagine his comment is more about trying to add fuel to the LLM frenzy than a necessarily sincerely held belief; more hype is probably just an effort to try to make NVDA go up.

[1] - https://en.wikipedia.org/wiki/Sigmoid_function


No way any regulated industry is going to just trust some AI to write the code.

Imagine trying to get a medical device through FDA approval with firmware written by a bot.


It would absolutely get through FDA approvals. You don't know what you're talking about.

The FDA would require extensive efficacy trials and an inventory of side-effects, the same as other devices. The FDA isn't doing computer engineering, it's measuring outcomes and risks.


Or imagine trying to provision a data layer that supports a distributed backend. Good luck basing your cloud costs on a word calculator.


They won't trust it end-to-end, but they will allow AI to generate code and have humans perform the final review, at least at first. That requires way fewer developers.


How is it different from today?


In my view the difference comes down to accountability. In all of the heavily-regulated environments I’ve worked in, humans are responsible for compliance in systems.

If an LLM generates vulnerable code that evades detection and makes it into production, and that vulnerability is exploited, who is responsible? Presumably, the humans who were tasked with reviewing and approving the code.

As the famous quote states, it is easier to write code than to read it. An LLM would be great for improving developer tooling to generate boilerplate code, etc., but I see tremendous risk for any firm attempting to let an LLM design and build any substantial, non-trivial piece of system code. And the effort required to review the code is at least on the same order of magnitude as it would be to write the code (beyond the trivial "make me an HTTP controller or HTML form for these operations" and the like).


Notice he did not say anything about computer science or its principles. You might not need to type Python or Java, but you still need to understand the architecture, the implications of state management, eventual consistency, etc., etc. So yes, typing gets easier, but instead of Python, we'll be typing various prompts, aka prompt engineering. We rarely drop down to bytecode these days, either. The abstraction increases, but the principles are still needed.

I wonder if one day we’ll have the Toyota factory level need for manual work to train the next set of senior leaders: https://www.japantimes.co.jp/news/2014/04/07/business/gods-e...


Although he doesn't seem to qualify the "computer science" part of his statement (as far as I can tell), he does imply that kids should no longer be learning CS, from what I hear at this timestamp:

https://youtu.be/Y1pHXV7E4xY?t=17m40s


The reason their company is worth 2 trillion dollars is the programmers who wrote CUDA.


I do not think it will ever come to the point where we code with AI. I just don't see how using a black box to produce code is ever going to work out well, and in fact it would be pretty dangerous, as you would essentially let the biggest companies, the ones with the cash to actually train the big models, control what goes into your code. And if we think of a future where we want kids to only learn AI programming, they won't actually know what the resulting code does.


Human beings don't generally want to be programmers. They have other things to do with their time. I've never understood this obsession with getting non-technical people into programming with some new development methodology or programming language.

Programming isn't for everyone. A programming language is a tool to solve automation problems. And tons of people just aren't interested in solving automation problems. You can make the language as non-computer-y as you want. It won't be able to remove the stuff that makes automation difficult -- edge cases, repeatability, maintenance, etc.

What AI could replace are all the frankly hare-brained attempts at "natural language" programming languages intended to make programming appealing to people who probably would never find it appealing. So basically, Shortcuts and AppleScript on steroids. That's a great use for AI. But it does not represent a threat to the employment security of the guy who maintains a kernel scheduler, VM subsystem, etc.


Translation: please don't sell your stocks


When I see AI take a large existing legacy codebase and rewrite it perfectly, then I will believe that.


You will never see this from a single human being.

Probably, within a few months, one (or more, possibly collaborating) LLMs just might.

Always fun to see "the difficult ones" proven.


You'd still have to describe to an LLM what to do, which strikes me as about equivalent in complexity to simply writing the code in the first place (code is, after all, a formal description of how a program should behave, even if we perceive that description as more complex than it strictly needs to be). The big wins so far have been leveraging LLMs to apply common patterns to multiple codebases (albeit in a buggy and haphazard fashion), but it's still up to humans to compose these patterns into meaningful programs and validate that they actually function as expected or desired. Humans still have a far superior understanding of what software is, how it functions, what our intentions are in using it, and how to derive good software from bad; we know what a bug is intuitively in a way that LLMs have not been able to demonstrate at all.

However, being able to rewrite a program with formally well-defined behavior (i.e. code) should be within an LLM's capabilities, but LLMs are a long way away from demonstrating semantically coherent coding skills, just the ability to regurgitate common patterns (often filled with bugs and/or incoherent semantics).


An LLM is only an Internet search engine with a fancier interface; it doesn't actually reason about anything. There is nothing "semantic" about an LLM's output.


Non-RAG LLMs don't search the internet, nor even have the capability to do so.


I don't personally detect sentience, yet. But about 5% of my inputs result in some sort of interpretation and/or reasoning (sometimes I have to think for days about why LLM.xyz made such a strange connection, only to realize that the machine's schizotypisms aren't often wrong, just different).


People ascribe sentience and emotion to a smiley face picture, and that is just two dots and a curved line.

That's just what people do; we are hard-wired to see social cues even when there are none.


Absolutely false: at the core of every LLM is a highly compressed text corpus from an Internet search engine.

(The wonder here isn't that an LLM succeeds at text retrieval tasks, the wonder is how highly compressed the index turns out to be. But maybe we just severely overestimate our own information complexity.)


So, you're saying an LLM is just a database that does text retrieval?


Yes, using a statistical model which is in effect a very lossy compressor.


So, what you're telling me is that every thing they say has already been said before, completely verbatim? Like, if I asked it to write a story about a dog named Jebediah surfing to planet Xbajahabvash, it would basically just find a link to someone else's story that wrote about the same dog surfing to the same planet? That sounds like an infinitely large amount of combinations. Perhaps the internet is just infinitely large, squared (or even circled).


So, like a human, then?


Reasoning is not necessary, but semantic coherence is.


It's only as semantically coherent as its training database. An LLM is, in effect, just a lossy compression of its training database. The compression is based on statistical maximum likelihood estimation; there are no mental models (or any other kind) involved in compressing the training database.

You can claim that mental models don't actually exist and everything in the universe is just maximum likelihood, but that would be a religious/spiritual statement, outside the realm of science.
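
For what it's worth, here is the toy version of what "maximum likelihood over a corpus" means: count bigrams, then always emit the most probable next word. Whether that's a fair description of a full-scale LLM is exactly what's being argued here; real models are neural networks with attention and sampling, not lookup tables, but the training objective is likelihood-based in this spirit.

    // Toy bigram model: estimate P(next | cur) by counting, then take the argmax.
    const corpus = "the cat sat on the mat the cat ate the fish".split(" ");

    const counts = new Map<string, Map<string, number>>();
    for (let i = 0; i + 1 < corpus.length; i++) {
      const [cur, next] = [corpus[i], corpus[i + 1]];
      const row = counts.get(cur) ?? new Map<string, number>();
      row.set(next, (row.get(next) ?? 0) + 1);
      counts.set(cur, row);
    }

    // MLE estimate: P(next | cur) = count(cur, next) / count(cur); we only need the argmax.
    function mostLikelyNext(cur: string): string | undefined {
      const row = counts.get(cur);
      if (!row) return undefined;
      return [...row.entries()].sort((a, b) => b[1] - a[1])[0][0];
    }

    console.log(mostLikelyNext("the")); // "cat" (seen twice after "the", vs once for "mat" and "fish")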


We'll have to see what Google's most advanced model with the ridiculously larger context window can do, once it's fully released into the wild. Refactoring an entire code base is presumably asking too much, but it'll be able to do small refactoring since the released model can do that, so we'll have to see where its upper limits are.

Of course, to get it better at refactoring, everyone has to write blog posts on refactoring to feed the machine.


Why blog posts? Wouldn't the codebase "self evolve" based on the proficiencies of other published code?


AI will probably just delete it.

Worked at a few of these dog-shite companies, and the amount of low-quality shit driven by half-baked initiatives from the C-suite (i.e., "we are microservice-oriented now, do that") is astounding.


This seems to me like it would be easier than having it create a greenfield project. The legacy code tells the AI the needed functionality so I would think in the near future an AI could rewrite something in a more concise way with the same style throughout, even adding useful comments and docstrings.


Learn to code^H^H^H^Hfarm. - Jensen Huang

At least he's not telling us to learn coal mining, I guess.


AI still has poor performance when "thinking outside the box".

When AI is able to think creative, novel thoughts, then I'll admit programming will be a thing of the past.

But he should remember that when AI can think creatively, CEOs will be a thing of the past too.


it is creative, and many of the images it produces are amazing.

it is not innovative and, at least right now, doesn't really "know" anything. That might change, and soon.


Nvidia is grooming the next generation of script kiddies.


Nah, touchscreens and Chromebooks in schools are. Kids don't even know what files and folders do.


It's not, because most of the time spent is not coding. It is figuring out what the imprecise feature request is really about, how to integrate with this legacy system that Jeff maintains on the other side of the country, convincing your boss not to spend money on this useless new AI shit thing, etc., etc.


Yeah, coding is the easiest part of the job. And once you hit a certain level of mastery, interacting with an LLM will slow you down.


Also, checking the essential sanity of code and, if necessary, making it more sane.

LLMs may indeed end up replacing web-searching interns or bottom-bin outsourcing, with "code a real programmer has to review and probably rewrite".

However, there's no reason to believe LLMs will gain a novel ability to manage that second part just by making them bigger and bigger.


AI will never fall in love and make babies, so it is not a self-perpetuating species.


Dying "profession". Growing hobby.


He is right, but in that sense every job done in front of a computer is a dying profession, because by the time coding is automated, so will every other computer-based office job be.

However, be wary of getting AI to write your code too soon. In its current state you will save several hours of writing code, only to spend hours more on maintenance. Maintaining code is much harder than writing it. You won't have a human to ask "What was the intention behind X weird pattern".

Capitalism, if it survives, will be forced to change. Or it will end up eating its own head off.


Let him cook


Prove it.


https://finance.yahoo.com/news/nvidia-ceo-the-metaverse-will...

> the economy of the virtual world will be much, much bigger than the economy of the physical world. You’re going to have more cars built and designed in virtual worlds, you’ll have more buildings, more roads, more houses — more hats, more bags, more jackets.


We already have that; it's called credit cards, and the majority of currency is already "virtual" and not cash.


Heh. I'm imagining someone sleeping in their metaverse virtual home, while they're really in a cardboard box under a bridge with a VR headset on.


That's Ready Player One for you.


Too bad the rest of the book was a lame '80s wank.


Can't wait for his 2026 prediction of [insert current bubble here].


I'm so tired of hype cycles in tech, man. LLMs are articulate and knowledgeable but not particularly smart. They can regurgitate Stack Overflow snippets in a context-aware way, and sure, that can be handy in a "code assistant," but it's not enough to build anything on its own. Nothing I've seen suggests real autonomy is anywhere on the horizon. Exponential advancement is always just around the corner, but it never materializes. Little grows exponentially in practice.

"AI is the future" isn't a sincere prediction, coming from the mouth of an industry stakeholder. They're just breathlessly hyping their own product.


> "It is our job to create computing technology such that nobody has to program. And that the programming language is human, everybody in the world is now a programmer. This is the miracle of artificial intelligence."

The Nvidia CEO doesn't understand what programming is.


I'm sure he understands that at a certain point you just gotta keep saying words to keep the AI hype train going.


Golang has only 25 keywords in the language spec. Each one meticulously selected and designed and documented with intent and purpose.

Meanwhile, in English:

https://en.wikipedia.org/wiki/Buffalo_buffalo_Buffalo_buffal...


He understands that if he keeps spouting this stuff the stock keeps going up.


Cargo culting the stock market


That's what the stock market is, though. What's that Keynes quote about how it's not a beauty contest where you're trying to find the most beautiful, but rather you're trying to guess which candidate everyone else will pick? Facts can drive things, but only as much as hype and cargo cults will let them.


They understand what a Moat is.


NVIDIA's CEO (since 1993), who is one of the company's founders, is pretty technical, so I doubt that. Although he is an EE, not a computer scientist. This isn't an MBA spouting Markitecture.

https://en.wikipedia.org/wiki/Jensen_Huang


There are lots of technical people who don't understand what programming is.


You do know that this guy built up NVIDIA from scratch, right? Before that, there was some action from 3dfx, but they fizzled, and NVIDIA basically invented the modern GPU. He has at least some right to be a bit outlandish in his claims (a real founder CEO vs. someone hired to be CEO after the company was already successful), and at least he isn't Musk.


What does any of that have to do with the quote and my comment?


It’s doubtful that he doesn’t understand some amount of programming. It’s impossible to graduate from EE without a couple of programming quarters/semesters, and these people did invent CUDA and such.


I'm not sure you're getting my implicit point, which is fair because I didn't spell it out. What programming is, is not just getting a computer to do something. In many cases, that is the easy part. Programming is just as much, if not more so, understanding a domain and imparting that to a computer process. Thus, for an AI to be able to do that, we'd have to solve extra-human intelligence, as even most humans are terrible at system design.

Yes, I would guess that a EE understands how to program a computer to get it to do something. But that isn't all that programming is. So the CEO claiming that we can in the future program via human language is him failing to understand that we already fail at doing this.


My experience with EEs is that they understand programming languages, but typically don’t understand software engineering (creating and maintaining complex systems involving dozens or hundreds of developers). This leads to them to underestimate the skills required.


This makes them perfect critics, though. They can code but ignore all our fuckery that seems nonsensical from both outside and inside. My vague estimate is that there are at least 2-3 guys like this per "true" developer.



