
What? A ciechanow.ski airfoil explainer? But I have things to do today!


It is too late for us to be saved, my friend, way too late...


Well they won’t touch the principal so long as tuition inflation stays below about 1.5%


YES. Thank you.

I rode in a Model Y last year and just could not believe the mistakes it was making. Disappearing and reappearing cars; the semi in front of us was apparently straddling the lane line for a few miles; somehow an early-2000s Dodge Ram was classified as a small sedan – the list goes on, and this was only a ~10-minute ride. I would be absolutely mortified if a product of mine ended up in front of a customer in that state.


I spent a decade working on commercial computer vision applications that, among other things, had to recognize and track cars. Those are exactly the sort of transient errors you'd expect to see in shipping products and they usually have heuristics to "smooth" over those sorts of problems.
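
To give a flavor of what I mean by "smooth": a minimal sketch, not anything from a real product (the window size and majority-vote rule are purely illustrative):

    from collections import deque

    class TrackSmoother:
        """Smooth a single track's class label over recent frames by majority vote."""

        def __init__(self, window=15):
            self.history = deque(maxlen=window)  # last N raw per-frame classifications

        def update(self, frame_class):
            """Record this frame's raw label and return the smoothed one."""
            self.history.append(frame_class)
            # A majority vote over the window hides single-frame flickers,
            # e.g. a truck momentarily classified as a sedan.
            return max(set(self.history), key=self.history.count)

    smoother = TrackSmoother(window=15)
    for raw in ["truck", "truck", "sedan", "truck", "truck"]:
        label = smoother.update(raw)
    print(label)  # "truck", despite the one-frame misclassification

A display rendering the raw per-frame output, with none of that smoothing, would flicker exactly the way the parent describes.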

That said, would I ever trust my life to a system like that? No.


I'm actually surprised they show it as raw as it looks. Doesn't inspire too much confidence, even though I bet the system must be reclassifying and changing way faster than it renders things on screen.


Don’t get me wrong, my background is also pretty CV-heavy, and I don’t expect perfection by any means.

But the display itself serves basically no purpose besides looking cool, and it just fails pretty badly at that. Also yeah, it made me maybe a little more nervous about being on the road with a Tesla than it should’ve…


We'd usually have something like that in our products as a developer/debug mode, not generally visible to customers.

If anything, if you've got self-driving on in a Tesla, you're not being nervous enough. :)


That’s not the perception for FSD, btw, that’s the output of a much older generation model that you’re seeing. But yeah, it’s pretty bad.


Crosstabs are here - it’s surprisingly consistent across age groups:

https://d3nkl3psvxxpe9.cloudfront.net/documents/econTabRepor...


It very much contradicts what I said earlier. Don't remember the source. The only difference is that this also includes listening to audiobooks. 53% in 2023 for 18-29 year-olds, according to your source. Thanks for the correction.


You can with some charging stations. Well, that’s assuming the card reader works, which is a much less safe assumption than it is with gas pumps.


Don’t think it’s even a year… should be 8-9e6 m^2 per year, or 3-3.5 mi^2 per year.
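
Quick sanity check on the conversion (1 mi ≈ 1609.34 m, so 1 mi^2 ≈ 2.59e6 m^2):

    SQ_M_PER_SQ_MI = 1609.344 ** 2  # about 2.59e6 m^2 per square mile

    for area_m2 in (8e6, 9e6):
        print(f"{area_m2:.0e} m^2/yr = {area_m2 / SQ_M_PER_SQ_MI:.1f} mi^2/yr")
    # 8e+06 m^2/yr = 3.1 mi^2/yr
    # 9e+06 m^2/yr = 3.5 mi^2/yr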


Also, I think we are quite a ways out from a tool being able to devise a solution to a complex high-level problem without online precedent, which is where I find the most satisfaction anyway.

LLMs in particular can be a very fast, surprisingly decent (but, as you mention, very fallible) replacement for Stack Overflow, and, as such, a very good complement to a programmer's skills – seems to me like a net positive at least in the near to medium term.


Spreadsheets didn’t replace accountants; they made them more efficient. I don’t personally believe AI will replace software engineers anytime soon, but it’s already making us more efficient. Just as Excel experience is required to crunch numbers, I suspect AI experience will be required to write code.

I use ChatGPT every day for programming and there are times when it’s spot on and more times when it’s blatantly wrong. I like to use it as a rubber duck to help me think and work through problems. But I’ve learned that whatever the output is requires as much scrutiny as a good code review. I fear there’s a lot of copy and pasting of wrong answers out there. The good news is that for now they will need real engineers to come in and clean up the mess.


Spreadsheets actually did put many accountants and “computers” (the term for people who tallied and computed numbers, ironically a fairly menial job) out of work. And it’s usually the case that a disruptive technology’s benefits are not evenly distributed.

In any case, the unfortunate truth is that AI as it exists today is EXPLICITLY designed to replace people. That’s a far cry from technologies such as the telephone (which, by the way, put thousands of Morse code telegraph operators out of business).


It is especially sad that VC money is currently being spent on developing AI to eliminate good jobs rather than on developing robots to eliminate bad jobs.


Many machinists, welders, etc. would have asked the same question when we shipped most of American manufacturing overseas. There was a generation of experienced people with good jobs who lost their jobs, and white collar workers celebrated it. Just Google “those jobs are never coming back” and you’ll find a lot of heartless comparisons to the horse and buggy.

Why should we treat these office jobs any differently?


Agree - also note that many office jobs have been shipped overseas, and also automated out of existence. When I started work there were slews of support staff booking trips, managing appointments, typing correspondence & copying and typesetting documents. For years we laughed at the paperless office - well, it's been here for a decade and there's no discussion about it anymore.

Interestingly, at the same time as all those jobs disappeared and got automated, there were surges of people into the workforce. Women started to be routinely employed for all but a few years of childbirth and care, and many workers came from overseas. Yet white collar unemployment didn't spike. The driver for this was that the effective size of the economy boomed with the inclusion of Russia, China, Indonesia, India and many other smaller countries in the western sphere/economy post cold war... and growth from innovation.


US manufacturing has not been shipped out. US manufacturing output keeps increasing, though its overall share of GDP is dropping.

US manufacturing jobs went overseas.

What went overseas were those areas of manufacturing where automating was more expensive than hiring low-paid workers elsewhere.

With respect to your final question, I don't think we should treat them differently, but I do think few societies have handled this well.

Most societies are set up in a way that creates a strong disincentive for workers to want production to become more efficient other than at the margins (it helps keep your job safer if your employer is marginally more efficient than average).

Couple that with a tacit assumption that there will always be more jobs, and you have the makings of a problem if AI starts to eat away at broader segments.

If/when AI accelerates this process you either need to find a solution to that (in other words, ensure people do not lose out) or it creates a strong risk of social unrest down the line.


If I didn't celebrate that job loss am I allowed to not celebrate this one?


The plan has always been to build the robots together with the better AI. Robots ended up being much harder than early technologists imagined, for a myriad of different reasons. It turned out that AI is easier, or at least that is the hope.


Actually I'd argue that we've had robots forever, just not what you'd consider robots, because they're quite effective. Consider the humble washing machine or dishwasher: very specialized, and hyper effective. What we don't have is Generalized Robotics, just like we don't have Generalized Intelligence.

Just as "Any sufficiently advanced technology is indistinguishable from magic", "Any sufficiently omnipresent advanced technology is indistinguishable from the mundane". Chat GPT will feel like your smart phone which now feels like your cordless phone which now feels like your corded phone which now feels like wireless telegram on your coal fired steam liner.


No, AI is tremendously harder than early researchers expected. Here's a seminal project proposal from 1955:

"We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer. “


GP didn't say that AI was easier than expected, rather that AI is easier than robotics, which is true. Compared to mid-century expectations, robotics has been the most consistently disappointing field of research besides maybe space travel, and even that is well ahead of robots now.


> well ahead of robots now

I am not working in that field, but as an outsider it feels like the industrial robots doing most of the work on TSMC's and Tesla's production lines are, on the contrary, extremely advanced. Aside from that, what Boston Dynamics or startups making prosthetics have come up with is nothing short of amazing.

If anything software seems to be the bottleneck for building useful humanoids...


I think the state of the art has gotten pretty good, but still nowhere near as good as people thought it would be fifty years ago. More importantly, as of a year ago AI is literally everywhere, hundreds of millions of regular users and more than that who've tried it, almost everyone knows it exists and has some opinion on it. Compare that to even moderately mobile, let alone general, robots. They're only just starting to be seen by most people on a regular basis in some specific, very small geographical locations or campuses. The average person interacts with a mobile or general robot 0 times a day. Science fiction as well as informed expert prediction was always the opposite way around - robots were coming, but they would be dumb. Now it's essentially a guarantee that by the time we have widespread rollout of mobile, safe, general purpose robots, they are going to be very intelligent in the ways that 20 years ago most thought was centuries away.

Basically, it is 1000x easier today to design and build a robot that will have a conversation with you about your interests and then speak poetry about those interests than it is to build a robot that can do all your laundry, and that is the exact opposite of what all of us have been told to expect about the future for the last 70 years.


Space travel was inevitably going to be disappointing without a way to break the light barrier. Even a century ago we thought the sound barrier was impossible to penetrate, so at least we are making progress, albeit slowly.

On the bright side, it is looking more and more like terraforming will be possible. Probably not in our lifetimes, but in a few centuries' time (if humanity survives).


Forget the light barrier, just getting into space cheaply enough is the limiting factor.

Barring something like fusion rockets or a space elevator, it's going to be hard to really do a whole lot in space.


I think the impact of AI is not a matter of good jobs vs. bad jobs but of good workers vs. bad workers. For a given field, AI is making good workers more efficient and eliminating those who are bad at their jobs (e.g. the underperforming accountant who is able to make a living doing the more mundane tasks, whose job is threatened by spreadsheets and automation).


I worry about the effects this has on juniors…


I think AI, particularly text-based, seems like a cleaner problem. Robots are derivative of AI, robotics, batteries, hardware, compute, and societal shifts. It appears our tech tree needs stable AI first; then we can tackle the rest of the problems, which are either physical or infrastructural.


Capitalism always seeks to commodify skills. We of the professional managerial class happily assist, certain they'll never come for our jobs.


A serious, hopefully not flippant question: who are "they" in this case? Particularly as the process you describe tends to the limit.


I would guess that "they" are "the capitalists" as a class. It's very common to use personal pronouns for such abstract entities, and to describe them as behaving in a goal-driven manner. It doesn't really matter who "they" are as individuals (or even if they are individuals).

More accurate would be something like "reducing labor costs increases return on capital investment, so labor costs will be reduced in a system where the economy organizes to maximize return on capital investment". But our language/vocabulary isn't great at describing processes.


Poor phrasing. Apologies. u/jampekka nails it.

Better phrasing may have been

"...happily assist, confident our own jobs will remain secure."


Thanks. Not putting this onto you so I'll say "we/our" to follow your good faith;

What is "coming for our jobs" is some feature of the system, but it being a system of which we presume to be, and hope to remain a part, even though ultimately our part in it must be to eliminate ourselves. Is that fair?

Our hacker's wish to "replace myself with a very small shell-script and hit the beach" is coming true.

The only problem I have with it, even though "we're all hackers now", is I don't see everybody making it to the beach. But maybe everybody doesn't want to.

Will "employment" in the future be a mark of high or low status?


The problem is that under the current system the gains of automation or other increased productivity do not "trickle down" to workers that are replaced by the AI/shell script. Even to those who create the AI/shell script.

The "hit the beach" part requires that you hide the shell script from the company owners, if by hitting the beach you don't mean picking up empty cans for sustinence.


> Will "employment" in the future be a mark of high or low status?

Damn good question.

Also, +1 for beach metaphor.

My (ignorant, evolving) views on these things have most recently been informed by John and Barbara Ehrenreich's observations about the professional-managerial class.

ICYMI:

https://en.wikipedia.org/wiki/Professional%E2%80%93manageria...


An interesting view is that people would still "work" even if they weren't needed for anything productive. In this "bullshit jobs" interpretation, wage labor is so critical for social organization and control that jobs will be "invented" even if the work is not needed for anything, or is actively harmful (and that this is already going on).

https://strikemag.org/bullshit-jobs/


> Spreadsheets actually did put many accountants

https://cpatrendlines.com/2017/09/28/coming-pike-accountants...

Not really seeing any correlation in graduation rates. Excel was introduced in 1985. Every accountant had a computer in the 80s.


> But I’ve learned that whatever the output is requires as much scrutiny as a good code review. I fear there’s a lot of copy and pasting of wrong answers out there. The good news is that for now they will need real engineers to come in and clean up the mess.

Isn't it sad that real engineers are going to work as cleaners for AI output? And in doing this they are in fact training the next generation to be better able to replace real engineers... We are trading our future income for some minor (and questionable) development speed today.


AI might help programmers become more rigorous by lowering the cost of formal methods. Imagine an advanced language where simply writing a function contract, in some kind of Hoare logic or using a dependently-typed signature, yields provably correct code. These kinds of ideas are already worked on, and I believe are the future.
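
As a rough sketch of the idea (runtime-checked asserts standing in for real verification; a system like Dafny or a dependently-typed language would discharge these obligations statically, and the decorator here is invented purely for illustration):

    from functools import wraps

    def contract(pre, post):
        """Attach a precondition and postcondition to a function.
        Checked at runtime here; a real contract language would prove
        them once, at compile time, for all inputs."""
        def decorate(fn):
            @wraps(fn)
            def wrapper(*args):
                assert pre(*args), "precondition violated"
                result = fn(*args)
                assert post(result, *args), "postcondition violated"
                return result
            return wrapper
        return decorate

    @contract(pre=lambda xs: len(xs) > 0,
              post=lambda r, xs: r in xs and all(r <= x for x in xs))
    def smallest(xs):
        return min(xs)

    print(smallest([3, 1, 2]))  # 1; the contract pins down what "smallest" must mean

The hope would be that the programmer writes only the pre/post pair, the AI produces the body, and a checker verifies it.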


I'm not convinced about that. Writing a formal contract for a function is incredibly hard, much harder than writing the function itself. I could open any random function in my codebase and with high probability get a piece of code that is < 50 lines, yet would need pages of formal contract to be "as correct" as it is now.

By "as correct", I mean that such a function may have bugs, but the same is true for an AI-generated function derived from a formal contract, if the contract has a loophole. And in that case, a simple microscopic loophole may lead to very very weird bugs. If you want a taste of that, have a look at how some C++ compilers remove half the code because of an "undefined behaviour" loophole.

Proofreading what Copilot wrote seems like the saner option.


That is because you did not use contracts when you started developing your code. Likewise, it would be hard to enforce structured programming on assembly code that was written without this concept in mind.

Contracts can be quite easy to use, see e.g. Dafny by MS Research.


I think this is further off than you might expect. LLMs work because the “answer” (and the prompt) is fuzzy and inexact. Proving an exact answer is a whole different and significantly more difficult problem, and it’s not clear the LLM approach will scale up to that problem.


Formal methods/dependent types are the future in the same way fusion is, it seems to be perpetually another decade away.

In practice, our industry seems to have reached a sort of limit in how much type system complexity we can actually absorb. If you look at the big new languages that came along in the last 10-15 years (Kotlin, Swift, Go, Rust, TypeScript) then they all have type systems of pretty similar levels of power, with the possible exception of the latter two which have ordinary type systems with some "gimmicks". I don't mean that in a bad way, I mean they have type system features to solve very specific problems beyond generalizable correctness. In the case of Rust it's ownership handling for manual memory management, and for TypeScript it's how to statically express all the things you can do with a pre-existing dynamic type system. None have attempted to integrate generalized academic type theory research like contracts/formal methods/dependent types.

I think this is for a mix of performance and usability reasons that aren't really tractable to solve right now, not even with AI.


> If you look at the big new languages that came along in the last 10-15 years (Kotlin, Swift, Go, Rust, TypeScript) then they all have type systems of pretty similar levels of power, with the possible exception of the latter two which have ordinary type systems with some "gimmicks".

Those are very different type systems:

- Kotlin has a Java-style system with nominal types and subtyping via inheritance

- TypeScript is structurally typed, but otherwise an enormous grab-bag of heuristics with no unifying system to speak of

- Rust is a heavily extended variant of Hindley-Milner with affine types (which is as "academic type theory" as it gets)


Yes, I didn't say they're the same, only that they are of similar levels of power. Write the same program in all three and there won't be a big gap in level of bugginess.

Sometimes Rustaceans like to claim otherwise, but most of the work in Rust's type system goes into taming manual memory management which is solved with a different typing approach in the other two, so unless you need one of those languages for some specific reason then the level of bugs you can catch automatically is going to be in the same ballpark.


> Write the same program in all three and there won't be a big gap in level of bugginess.

I write Typescript at work, and this has not been my experience at all: it's at least an order of magnitude less reliable than even bare ML, let alone any modern Hindley-Milner based language. It's flagrantly, deliberately unsound, and this causes problems on a weekly basis.


Thanks, I've only done a bit of TypeScript so it's interesting to hear that experience. Is the issue interop with JavaScript or a problem even with pure TS codebases?


LLMs are pretty much the antithesis of rigor and formal methods.


So is the off-the-cuff, stream-of-consciousness chatter humans use to talk. We still manage to write good scientific papers (sometimes...), not because we think extra hard and then write a good scientific treatment in one go without edits, research or revisions. Instead we have a whole text structure, revision process, standardised techniques of analysis, searchable research data collections, critique and correction by colleagues, searchable previous findings all "hyperlinked" together by references, and social structures like peer review. That process turns out high-quality, high-information work product at the end, without a significant cognitive adjustment to the humans doing the work aside from just learning the new information required.

I think if we put resources and engineering time into trying to build a "research lab" or "working scientist tool access and support network" with every intelligent actor involved emulated with LLMs, we could probably get much, much more rigorous results out the other end of that process. Approaches like this exist in a sort of embryonic form with LLM strategies like expert debate.


I think the beauty of our craft on a theoretical level is that it very quickly outgrows all of our mathematics and what can be stated based on that (e.g. see the busy beaver problem).

It is honestly humbling and empowering at the same time. Even a hyper-intelligent AI will be unable to reason about any arbitrary code. Especially since current AI - while impressive at many things - is a far cry from being anywhere near good at logical thinking.


I think the opposite! The problem is that almost everything in the universe can be cast as computing, and so we end up with very little differentiating semantics when thinking about what can and can't be done. The busy beaver problem is one of a relatively small number of problems that I am familiar with (probably there is a provably infinite set of them, but I haven't navigated it) which are uncomputable, and it doesn't seem at all relevant to nature.

And yet we have free will (ok, within bounds, I cannot fly to the moon etc, but maybe my path integral allows it), we see processes like the expansion of the universe that we cannot account for and infer them like quantum gravity as well.


They won't need human help when the time comes.


It's also where I find most of the work. There are plenty of off the shelf tools to solve all the needs of the company I work at. However, we still end up making a lot of our own stuff, because we want something that the off the shelf option doesn't do, or it can't scale to the level we need. Other times we buy two tools that can't talk to each other and need to write something to make them talk. I often hear people online say they simply copy/paste stuff together from Stack Overflow, but that has never been something I could do at my job.

My concern isn't about an LLM replacing me. My concern is our CIO will think it can, firing first, and thinking later.


It’s not just about whether an LLM could replace you; if an LLM replaces enough other programmers, it’ll tank the market price for your skills.


I don’t think this will happen because we’ll just increase the complexity of the systems we imagine. I think a variant of Wirth’s law applies here: the overall difficulty of programming tasks stays constant because, when a new tool simplifies a previously hard task, we increase our ambitions.


In general people are already working at their limits; tooling can help a bit, but the real limitation on handling complexity is human intelligence, and that appears to be mostly innate. The people this replaces can’t exactly skill up to escape the replacement, and the AI will keep improving, so the proportion being replaced will only increase. As someone near the top end of the skill level, my hope is that I’ll be one of the last to go, and I’ll hopefully make enough money in that time to afford a well-stocked bunker.


But, for example, I probably couldn’t have written a spell checker myself forty years ago. Now, something like aspell or ispell is just an off-the-shelf library. Similarly, I couldn’t implement Timely Stream Processing in a robust way, but Flink makes it pretty easy for me to use with a minimal conceptual understanding of the moving parts. New abstractions and tools raise the floor, enabling junior and mid-level engineers to do what would have taken a much more senior engineer before they existed.


"in a robust way" does a lot of work here and works as a weasel word/phrase, i.e. it means whatever the reader wants it to mean (or can be redefined in an argument to suit your purpose).

Why is it that you feel that you couldn't make stream processing that works for your use cases? Is it also that you couldn't do it after some research? Are you one of the juniors/mids that you refer to in your post?

I'm trying to understand this type of mindset because I've found that overwhelmingly most things can be done to a perfectly acceptable degree and often better than big offerings just from shedding naysayer attitudes and approaching it from first principles. Not to mention the flexibility you get from then owning and understanding the entire thing.


I think you’re taking what I’m saying the opposite of the way I intended it. With enough time and effort, I could probably implement the relevant papers and then use various tools to prove my implementation free of subtle edge cases. But, Flink (and other stream processing frameworks) let me not spend the complexity budget on implementing watermarks, temporal joins and the various other primitives that my application needs. As a result, I can spend more of my complexity budget within my domain and not on implementation details.


I used to think that way but from my experience and observations I've found that engineers are more limited by their innate intelligence rather than their tooling. Experience counts but without sufficient intelligence some people will never figure out certain things no matter how much experience they have - I wish it wasn't so but it's the reality that I have observed. Better tooling will exacerbate the difference between smart and not so smart engineers with the smart engineers becoming more productive and the not so smart engineers will instead be replaced.


If an LLM gets good enough to come for our jobs it is likely to replace all the people who hire us, all the way up to the people who work at the VC funds that think any of our work had value in the first place (remember: the VC fund managers are yet more employees that work for capital, and are just as subject to being replaced as any low-level worker).


That's true, but it's harder to replace someone when you have a personal connection to them. VC fund managers are more likely to be personally known to the person who signs the checks; low-level workers may never have spoken any words to them or even ever have met them.


I think another possibility is if you have skills that an LLM can’t replicate, your value may actually increase.


Only if the other people that the LLM did replace cannot cross-train into your space. Price is set at the margins. People imagine it’ll be AI taking the jobs, but mostly it’ll be people competing with other people for the space that’s left after AI has taken its slice.


Then the CIO itself gets fired… after all, the average tenure of a CIO is roughly 18 months.


We’ll see - but given the gap between ChatGPT 3 and 4, I think AIs will be competitive with mid-level programmers by the end of the decade. I’d be surprised if they aren’t.

The training systems we use for LLMs are still so crude. ChatGPT has never interacted with a compiler. Imagine learning to write code by only reading (quite small!) snippets on GitHub. That’s the state LLMs are in now. It’s only a matter of time before someone figures out how to put a compiler in a reinforcement learning loop while training an LLM. I think the outcome of that will be something that can program orders of magnitude better. I’ll do it eventually if nobody else does it first. We also need to solve the “context” problem - but that seems tractable to me too.
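
A toy sketch of the kind of feedback signal I mean, with a binary compile/no-compile reward (a real setup would use much richer signals such as tests and warnings, and this assumes gcc is available on the machine):

    import os
    import subprocess
    import tempfile

    def compile_reward(c_source):
        """Return 1.0 if the generated C source compiles cleanly, else 0.0.
        In an RL loop this score would weight the update for the model
        completion that produced the code."""
        with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
            f.write(c_source)
            path = f.name
        try:
            result = subprocess.run(["gcc", "-c", path, "-o", os.devnull],
                                    capture_output=True)
            return 1.0 if result.returncode == 0 else 0.0
        finally:
            os.remove(path)

    print(compile_reward("int main(void) { return 0; }"))  # 1.0
    print(compile_reward("int main(void) { return 0 }"))   # 0.0, missing semicolon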

For all the computational resources they use to do training and inference, our LLMs are still incredibly simple. The fact they can already code so well is a very strong hint for what is to come.


With today's mid level programmers, yes. But by that time, many of today's mid level programmers will be able to do stuff high level programmers do today.

Many people underestimate an LLM's most powerful feature when comparing it with something like Stackoverflow: the ability to ask followup questions and immediately get clarification on anything that is unclear.

I wish I had had access to LLMs when I was younger. So much time wasted on repetitive, mundane in-between code...


> the ability to ask followup questions and immediately get clarification on anything that is unclear.

Not only that, but it has the patience of a saint. It never makes you beg for a solution because it thinks there's an XY problem. It never says "RTFM" before posting an irrelevant part of the documentation because it only skimmed your post. It never says "Why would you use X in 2023? Everyone is using framework Y, I would never hire anyone using X."

The difference comes down to this: unlike a human, it doesn't have an ego or an unwarranted feeling of superiority because it learned an obscure technology.

It just gives you an answer. It might tell you why what you're doing is suboptimal, it might hallucinate an answer that looks real but isn't, but at least you don't have to deal with the worst parts of asking for help online.


Yeah. You also don't have to wait for an answer or interrupt someone to get that answer.

But - in the history of AIs written for chess and go, there was a period for both games where a human playing with an AI could beat either a human playing alone or an AI playing alone.

I suspect we're in that period for programming now, where a human writing code with an AI beats an AI writing code alone, and a human writing code alone.

For chess and go, after a few short years passed, AIs gained nothing by having a human suggesting moves. And I think we'll see the same before long with AI programmers.


Good riddance. I can finally get started on the massive stockpile of potential projects that I never had time for until now.

It's a good time to be in the section of programmers that see writing code as a means to an end and not as the goal itself.

It does surprise me that so many programmers, whose mantra usually is "automate all the things", are so upset now that all the tedious stuff can finally be automated in one big leap.

Just imagine all the stuff we can do when we are not wasting our resources finding obscure solutions to deeply buried environment bugs or any of the other pointless wastes of time!


> are so upset now that all the tedious stuff can finally be automated in one big leap.

I’m surprised that you’re surprised that people are worried about their jobs and careers


The jobs and careers are not going anywhere unless you are doing very low level coding. There will be more opportunities, not less.


The invention of cars didn’t provide more jobs for horses. I’m not convinced artificial minds will make more job opportunities for humans.

A lot of that high level work is probably easier to outsource to an AI than a lot of the mundane programming. If not now, soon. How long before you can walk up to a computer and say “hey computer - make me a program that does X” and it programs it up for you? I think that’ll be here before I retire.


Wouldn't you agree the invention of the car created a lot more jobs (mechanics, designers, marketing people etc) than it eliminated?

As far as I can tell, this will only increase the demand for people who actually understand what is going on behind the scenes and who are able to deploy all of these new capabilities in a way that makes sense.


It did. But not for horses. Or horse riders. And I don’t think the average developer understands how AIs work well enough to stay relevant in the new world that’s coming.

Also, how long before AIs can do that too - before AIs also understand what is going on behind the scenes, and can deploy all these new capabilities in a way that makes sense? You’re talking about all the other ways you can provide value using your brain. My worry is that, whatever you might suggest, artificial brains will be able to do it too. And do it cheaper, better, or both.

GPT4 is already superhuman in the breadth of its knowledge. No human can know as much as it does. And it can respond at superhuman speeds. I’m worried that none of us are smart enough that we can stay ahead of the wave forever.


GPT4's "knowledge" is broad, but not deep. The current generation of LLM's have no clue when it comes to things like intent or actual emotion. They will always pick the most obvious (and boring) choice. There is a big gap between excellent mimicry and true intelligent thought.

As a developer you don't need to know how they work, you just need to be able to wield their power. Should be easy enough if you can read and understand the code it produces (with or without its help).

Horses don't play a part in this; programmers are generally not simple beasts that can only do one thing. I'm sure plenty of horse drivers became car drivers and those that remained found something else to do in what remained of the horse business.

Assuming we do get AI that can do more than just fool those who did not study them, do you really think programmers will be the first to go? By the time our jobs are on the line, so many other jobs will have been replaced that UBI is probably the only logical way to go forward.


> imagine all the stuff we can do

...if we don't have to do stuff?


Like I posted above: for me programming is a means to an end. I have a fridge full of plans that will last me for at least a decade, even if AI would write most of the code for me.

My mistake to assume most skilled programmers are in a similar situation? I know many and none of them have time for their side projects.


I mean, it's a bit of a weird hypothetical situation to discuss, but first of all, if I didn't have to work, I would probably be in a financial pickle, unless the prediction includes UBI of some sort. Secondly, most of the side projects I would like to create are about doing something that this AI would then also be able to do, so it seems like there is nothing left...


So you expect AI will just create all potential interesting side projects by itself when it gets better, no outside intervention required? I have high hopes, but let's be realistic here.

I'm not saying you won't have to work. I'm saying you can skip most of the tedious parts of making something work.

If trying out an idea will only take a fraction of the time and cost it used to, it will become a lot easier to just go for it. That goes for programmers as well as paying clients.


> Just imagine all the stuff we can do when we are not wasting our resources finding obscure solutions to deeply buried environment bugs or any of the other pointless wastes of time!

Yeah, we can line up at the soup kitchen at 4 AM!


So you've never given up on an idea because you didn't have the time for it? I just assumed all programmers discard potential projects all the time. Maybe just my bubble though.


> Not only that, but it has the patience of a saint. It never makes you beg for a solution because it thinks there's an XY problem. It never says "RTFM" before posting an irrelevant part of the documentation because it only skimmed your post. It never says "Why would you use X in 2023? Everyone is using framework Y, I would never hire anyone using X."

> The difference comes down to this: unlike a human, it doesn't have an ego or an unwarranted feeling of superiority because it learned an obscure technology.

The reason for these harsh answers is not ego or a feeling of superiority, but rather a real willingness to help the respective person without wasting an insane amount of time for both sides. Just as one likes to write concise code, quite a few experienced programmers love to give very concise, but helpful, answers. If the answer is in the manual, "RTFM" is a helpful answer. Giving strongly opinionated technology recommendations is also a very helpful way to give the beginner a strong hint about what might be a good choice (until the beginner has a very good judgement of this on his own).

I know that this concise style of talking does not fit the "sugar-coated" kind of speaking that is (unluckily) common in society. But it is much more helpful (in particular for learning programming).


On the other hand, ChatGPT will helpfully run a Bing search, open the relevant manual, summarize the information, and include additional hints or example code without you needing to do anything. It will also provide you the link, in case you wish to verify or read the source material itself.

So while RTFM is a useful answer when you (the expert) are limited by your own time & energy, LLMs present a fundamental paradigm shift that is both more user-friendly and arguably more useful. Asking someone to go from an LLM back to RTFM today would be ~akin to asking someone to go from Google search back to hand-written site listings in 2003.

You could try, but for most people there simply is no going back.


A lot of what we learned was learned by hours and days of frustration.

Just like exercise trains you to be uncomfortable physically and even mentally, frustration is part of the job.

https://www.thecut.com/2016/06/how-exercise-shapes-you-far-b...

Those who are used to having it easy with LLMs will be up against a real test when they hit a wall.


> But by that time, many of today's mid level programmers will be able to do stuff high level programmers do today.

Not without reason some cheeky devils already renamed "Artificial Intelligence" to "Artificial Mediocracy". AIs generate code that is mediocre. This is a clear improvement if the programmer is bad, but leads to deterioration if the programmer is above average.

Thus, AI won't lead to your scenario of mid level programmers being able to do stuff high level programmers do today, but will rather just make bad programmers more mediocre.


The way an LLM can teach and explain is so much better than having to chase down information manually. This is an amazing time to learn how to code.

An LLM can actually spot and fix mediocrity just fine. All you have to do is ask. Drop in some finished code and add "This code does X. What can I do to improve it?"

See what happens. If you did well, you'll even get a compliment.

It's also a massive boon in language mobility. I never really used Python, complex batch files or Unity C# before. Now I just dive right in, safe in the knowledge that I will have an answer to any basic question in seconds.


Why do you say the snippets are small? They don’t get trained on the full source files?


Nope. LLMs have a limited context window partly because that's the chunk size with which they're presented with data to learn during training (and partly for computational complexity reasons).
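
Roughly, the pretraining stream just gets sliced into context-length windows, something like this (a deliberately simplified sketch; real pipelines pack, shuffle and separate documents far more carefully):

    def chunk_tokens(token_ids, context_len=2048):
        """Split one long token stream into fixed-size training windows,
        dropping the ragged tail for simplicity."""
        return [token_ids[i:i + context_len]
                for i in range(0, len(token_ids) - context_len + 1, context_len)]

    chunks = chunk_tokens(list(range(10_000)), context_len=2048)
    print(len(chunks), len(chunks[0]))  # 4 2048

Anything that lands in different windows is never seen together, which is part of why the model only ever learns from fairly small snippets.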

One of the reasons I'm feeling very bullish on LLMs is that if you look at the exact training process being used, it's full of what feels like very obvious low-hanging fruit. I suspect part of the reason that training them is so expensive is that we do it in really dumb ways that would sound like a dystopian hell if you described it to any actual teacher. The fact that we can get such good results from such a terrible training procedure by just blasting through it with computational brute force strongly suggests that much better results should be possible once some of that low-hanging fruit starts being harvested.


Imagine being able to train a model that mimics a good programmer. It would talk and program according to the principles of that programmer's philosophy.


> LLMs in particular can be a very fast, surprisingly decent (but, as you mention, very fallible) replacement for Stack Overflow

I think that sentence nails it. For the people who consider "searching Stack Overflow and copy/pasting" to be programming, sure, LLMs will replace their jobs. But software development is so much more: critical thinking, analysing, gathering requirements, testing ideas and figuring out which to reject, and more.


Two years ago we were quite a ways out from having LLMs that could competently respond to commands without getting into garbage loops and repeating random nonsense over and over. Now nobody even talks about the Turing test anymore because it's so clearly been blown past.

I wouldn't be so sure it will be very long before solving big, hard, and complex problems is within reach...


> LLMs in particular can be a very fast, surprisingly decent (but, as you mention, very fallible) replacement for Stack Overflow

Nice thing about Stack Overflow is it’s self-correcting most of the time thanks to,

https://xkcd.com/386/

GPT not so much.


It’s an interesting question. The shares are likely not transferable without amending the LLC agreement or whatever. If you want in on Anthropic, though, and you know the FTX estate has to liquidate, then there’s a chance you can get in at a good price; maybe less than $3B, but probably not a ton less, because lots of other people want in too.

On the other hand, the current investors probably won’t want to make an exception for FTX if they know it’s going to cause them to write down their stakes.


The other thing about the “crazy kid with big dreams” thing that really bugs me is that real altruism is putting others above one’s self.

Effective Altruism, on the other hand, is roughly the belief that you, personally, should be the savior of all mankind and should also get fabulously wealthy along the way. It’s just pure egotism with good PR.


I mean, Delta’s certainly not amazing (although, ok, Delta One is awfully nice), but I will still 100% go out of my way to book them over, say, United or American.

Like a sibling commenter, I fly out of PDX so will usually fly Alaska, but Delta is a solid #2.

