I've been using AI to solve isolated problems, mainly as a replacement for a search engine, specifically for programming. I'm still not convinced by this "write a whole block of code for me" type of use case. Here are my arguments against the videos from the article.
1. Snake case to camelCase. Even without AI we can already complete these tasks easily. VSCode itself has a "Transform to Camel Case" command for the selection. It is nice that the AI can figure out which text to transform based on context, but not too impressive. I could select one ":", use "Select All Occurrences", press Left, then Ctrl+Shift+Left to select all the keys (or just script the rename; see the short sketch at the end of this comment).
2. Generate boilerplate from documentation. Boilerplate is tedious, but not really time-consuming. How many of you spend 90% of your time writing boilerplate instead of the core logic of the project? If a language/framework (Java used to be one, not sure about now) requires me to spend that much time on boilerplate, that's a language to be ditched/fixed.
3. Turn a problem description into a block of concurrency code. Unlike the boilerplate, this code is more complicated. If I already know the area, I don't need the AI's help to begin with. If I don't know it, how can I trust the generated code to be correct? It could miss a corner case that my question didn't specify, one I don't yet know exists myself. In the end, I still need to spend time learning Python concurrency, and then I'll be writing the same code myself in no time.
In summary, my experience with AI is that if the question is easy (e.g. the exact same question is easy to find on Stack Overflow), the answer is highly accurate. But if it is a unique question, accuracy drops quickly. And it is the latter case that we spend most of our time on.
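For what it's worth, the rename in (1) can also be scripted in a few lines with no AI at all. A rough Python sketch (illustrative; the dict literal is made up):

    import re

    def snake_to_camel(name: str) -> str:
        # foo_bar_baz -> fooBarBaz
        return re.sub(r"_([a-z0-9])", lambda m: m.group(1).upper(), name)

    # Example: rewrite the keys of a parsed JSON object.
    data = {"user_id": 1, "created_at": "2024-01-01"}
    print({snake_to_camel(k): v for k, v in data.items()})
    # {'userId': 1, 'createdAt': '2024-01-01'}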
I started like this. Then I came around and can’t imagine going back.
It's kinda like having a really smart new grad who works instantly and has memorized all the docs. Yes, I have to code review and guide it. That's an easy trade-off to make for typing 1000 tokens/s, never losing focus, and double-checking every detail in real time.
First: it really does save a ton of time for tedious tasks. My best example is test cases. I can write a method in 3 minutes, but Sonnet will write the 8 best test cases in 4 seconds, which would have taken me 10 mins of switching back and forth, looking at branches/errors, and mocking. I can code review and run these in 30s. Often it finds a bug. It’s definitely more patient than me in writing detailed tests.
Instant and pretty great code review: it can understand what you are trying to do, find issues, and fix them quickly. Just ask it to review and fix issues.
Writing new code: it's actually pretty great at this. I needed a util class for config that had fallbacks to config files, env vars and defaults. And I wanted type checking to work on the accessors. Nothing hard, but it would have taken time to look at docs for YAML parsing, how to find the home directory, which env-var API returns null versus errors on a blank value, typing, etc. All easy, but it takes time. Instead I described it in about 20 seconds and it wrote it (with tests) in a few seconds.
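As a rough sketch of the kind of helper I mean (the names and precedence here are illustrative, not what it actually produced):

    import os
    from pathlib import Path
    from typing import Optional, TypeVar

    import yaml  # PyYAML, assumed available

    T = TypeVar("T")

    class Config:
        """Typed accessors with fallback: config file > env var > default."""

        def __init__(self, path: Path = Path.home() / ".myapp" / "config.yaml"):
            self._data: dict = {}
            if path.exists():
                # A missing file just means "no file-level overrides".
                self._data = yaml.safe_load(path.read_text()) or {}

        def get(self, key: str, default: T, env_var: Optional[str] = None) -> T:
            # Coerce whatever we find to the type of the default.
            if key in self._data:
                return type(default)(self._data[key])
            if env_var and os.environ.get(env_var):
                return type(default)(os.environ[env_var])
            return default

    # Usage:
    # cfg = Config()
    # port = cfg.get("port", 8080, env_var="MYAPP_PORT")  # -> int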
It's moved well past the "it can answer Stack Overflow questions" stage. If it has been a while (a while = 6 months in ML), try again with the new Sonnet 3.5.
>My best example is test cases. I can write a method in 3 minutes, but Sonnet will write the 8 best test cases in 4 seconds
For me it doesn't work. The generated tests either fail to run or fail when they do run.
I work in large C# codebases and in each file I have lots of injected dependencies. I have one public method which can call lots of private methods in the same class.
The AI either doesn't properly mock the dependencies or ignores what happens in the private methods.
If I take a lot of time guiding it where to look, it can generate unit tests that pass. But it takes longer than if I write the unit tests myself.
For me it's the same. It's usually just some hallucinated garbage. None of these LLMs have the full picture of my project.
When I can give them isolated tasks like "convert X to Y" or "create a foo that does bar", they're excellent, but for unit testing? Not even going to try anymore. I can write 5 working unit tests manually in the time it takes to write 5 prompts that give me useless stuff I then have to fix by hand.
Why can't we have a LLM cache for a project just like I have a build cache? Analyze one particular commit on the main branch very expensively, then only calculate the differences from that point. Pretty much like git works, just for your model.
I know how to find the context I need, being aided by the IDE and compiler. So yes, my context window contains all of the code in my project, even if it's not instantaneous.
It's not that hard to have an idea of what code is defined where in a project; compilers have been doing that for over half a century. If I'm injecting protocols and mocks into a unit test, it shouldn't really be hard for a computer to figure out their definitions, unless they don't exist yet and I wasn't clear that they should have been created, which would mean I'm giving the AI the wrong prompt and the error is on my side.
Ask Cursor to write usage and mocking documentation for the most important injected dependencies, then include that documentation in your context. I’ve got a large tree of such documentation in my docs folder specifically for guiding AI. Cursor’s Notebook feature can bundle together contexts.
I use Cursor to work on a Rust Qt app that uses the main branch of cxx-qt so it’s definitely not in the training data, but Claude figures out how to write correct Rust code based on the included documentation no problem, including the dependency injection I do through QmlEngine.
Awesome! Would love to hear your thoughts. Any progress on your AI client? I'm intrigued by how many bindings to Qt there are. Recently, I got excited about a Mojo binding[1].
I've found it better at writing tests because it tests the code you've written vs what you intended. I've caught logic bugs because it wrote tests with an assertion for a conditional that was backwards. The readable name of the test clearly pointed out that I was doing the wrong thing (and the test passed!).
Maybe a TLDR from all the issues I'm reading in this thread:
- It's gotten way better in the last 6 months. Both models (Sonnet 3.5 and the new October Sonnet 3.5) and tooling (Cursor). If you last tried Copilot, you should probably give it another look. It's also going to keep getting better. [1]
- It can make errors, and expect to do some code review and guiding. However, the error rates are going way, way down [1]. I'd say it's already below humans for a lot of tasks. I'm often doing 2-3 iterations before applying a diff, but a quick comment like "close, keep the test cases, but use the test fixture at the top of the file to reduce repeated code" and 5 seconds is all it takes to get a full refactor. Compared to code-review turnaround with a team, it's magic.
- You need to learn how to use it. Setting the right prompts, adding files to the context, etc. I'd say it's already worth learning.
- It just knows the docs, and that's pretty invaluable. I know 10ish languages, which also means I don't remember the system call to get an env var in any of them. It does, and it can insert it a lot faster than I can google it. Again, you'll need to code review, but more and more it's nailing idiomatic error checking in each language.
- You don't need libraries for boilerplate tasks. zero_pad is the extreme/joke example, but a lot more of my code is just using system libraries (a couple of stdlib one-liners are sketched right after this list).
- It can do things other tools can't. Tell it to take the visual style of one blog post and port it to another. Tell it to use a test file I wrote as a style reference and update 12 other files to follow that style. Read the README and tests, then write pydocs for a library. Write a GitHub action to build docs and deploy to GitHub Pages (including suggesting libraries, deploy actions, and offering alternatives). Again: you don't blindly trust anything, you code review, and tests are critical.
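For instance, the kind of stdlib one-liners I mean (Python here, purely illustrative):

    import os

    # Read an env var with a default -- the call I never remember off-hand
    # across ten languages.
    home = os.environ.get("HOME", "/tmp")

    # zero_pad without a library: pad a number to 5 digits.
    print(str(42).zfill(5))   # 00042
    print(f"{42:05d}")        # 00042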
Yes, it works for new code and simple cases. If you have large codebases, it doesn't have the context and you have to baby it, telling it which files and functions it should look into before attempting to write something. That takes a lot of time.
Yes, it can do simple tasks, like you said, writing a call to get the environment variables.
But imagine you work on a basket calculation service, where you have base item prices, discounts applied according to complicated rules, various kinds of taxes for countries all over the world, and a different number of decimals for each country. Each of your classes calls 5 to 6 other classes, all with a lot of business logic behind them. Besides that, you also make lots of API calls to other services.
What will the AI do for you? Nothing, it will just help you write one liners to parse or split strings. For everything else it lacks context.
Are you suggesting you would inline all that logic if you hand-rolled the method? Probably not, right? You would have a high-level algorithm of easily understood parts. Why wouldn't the AI be able to 1) write that high-level algorithm and then 2) subsequently write the individual parts?
What's the logic here? "I haven't seen it so it doesn't exist?"
There are hundreds of available examples of it processing large numbers of files, and making correct changes across them. There are benchmarks with open datasets already linked in the thread [1]. It's trivial to find examples of it making much more complex changes than "one liners to parse or split strings".
> Instant and pretty great code review: it can understand what you are trying to do, find issues, and fix them quickly. Just ask it to review and fix issues.
Cursor's code review is surprisingly good. It's caught many bugs for me that would have taken a while to debug, like off-by-one errors or improperly refactored code (like changing is_alive to is_dead and forgetting to negate conditionals).
This “really smart new grad” take is completely insane to me, especially if you know how LLMs work. Look at this SQL snippet Claude (the new Sonnet) generated recently.
    -- Get recipient's push token and sender's username
    SELECT expo_push_token, p.username
    INTO recipient_push_token, sender_username
    FROM profiles p
    WHERE p.id = NEW.recipient_id;
(The comment promises the sender's username, but the query only filters on NEW.recipient_id, so sender_username ends up being the recipient's own username.) Seems like the world has truly gone insane and engineers are tuned into some alternate reality a la Fox News. Well… it'll be a sobering day when the other shoe drops.
This is a prompt I gave to o1-mini a while ago:
My instructions follow now. The scripts which I provided you work perfectly fine. I want you to perform a change though. The image_data.pkl and faiss_index.bin are two databases consisting of rows, one for each image, in the end, right? My problem is that there are many duplicates: images with different names but the same content. I want you to write a script which for each row, i.e. each image, opens the image in python and computes the average expected color and the average variation of color, for each of the colors red, green and blue, and over "random" over all the pixels. Make sure that this procedure is normalized with respect to the resolution. Then once this list of "defining features" is obtained, we can compute the pairwise difference. If two images have less than 1% variation in both expectation and variation, then we consider them to be identical. in this case, delete those rows/images, except for one of course, from the .pkl and the .bin I mentioned in the beginning. Write a log file at the end which lists the filenames of identical images.
It wrote the script, I ran it and it worked. I had it write another script which displays the found duplicate groups so I could see at a glance that the script had indeed worked. And for you this does not constitute any understanding? Yes it is assembling pieces of code or algorithmic procedures which it has memorized. But in this way it creates a script tailored to my wishes. The key is that it has to understand my intent.
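For reference, the core of what it produced boils down to something like this (a paraphrased sketch, not the actual generated script; assumes PIL and NumPy, and leaves out the .pkl/.bin bookkeeping):

    import numpy as np
    from PIL import Image

    def fingerprint(path):
        # Per-channel mean and standard deviation over all pixels; dividing by
        # 255 keeps the numbers resolution- and scale-independent.
        px = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
        return px.mean(axis=(0, 1)), px.std(axis=(0, 1))

    def near_duplicates(paths, tol=0.01):
        # Pairwise comparison: two images count as identical if both their
        # means and their standard deviations differ by less than ~1%.
        feats = [(p, *fingerprint(p)) for p in paths]
        dupes = []
        for i, (p1, m1, s1) in enumerate(feats):
            for p2, m2, s2 in feats[i + 1:]:
                if np.all(np.abs(m1 - m2) < tol) and np.all(np.abs(s1 - s2) < tol):
                    dupes.append((p1, p2))
        return dupes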
Does "it understands" just mean "it gave me what I wanted?" If so, I think it's clear that that just isn't understanding.
Understanding is something a being has or does. And understanding isn't always correct. I'm capable of understanding. My calculator isn't. When my calculator returns a correct answer, we don't say it understood me -- or that it understands anything. And when we say I'm wrong, we mean something different from what we mean when we say a calculator is wrong.
When I say LLMs can't understand, I'm saying they're no different, in this respect, from a calculator, WinZip when it unzips an archive, or a binary search algorithm when you invoke a binary-search function. The LLM, the device, the program, and the function boil down (or can) to the same primitives and the same instruction set. So if LLMs have understanding, then necessarily so do a calculator, WinZip, and a binary-search algorithm. But they don't. Or rather we have no reason to suppose they do.
If "it understands" is just shorthand for "the statistical model and program were designed and tuned in such a way that my input produced the desired output," then "understand" is, again, just unarguably the wrong word, even as shorthand. And this kind of shorthand is dangerous, because over and over I see that it stops being shorthand and becomes literal.
LLMs are basically autocorrect on steroids. We have no reason to think they understand you or your intent any more than your cell phone keyboard does when it guesses the next character or word.
When I look at an image of a dog on my computer screen, I don't think that there's an actual dog anywhere in my computer. Saying that these models "understand" because we like their output is, to me, no different from saying that there is, in fact, a real, actual dog.
"It looks like understanding" just isn't sufficient for us to conclude "it understands."
I think the problem is our traditional notions of "understanding" and "intelligence" fail us. I don't think we understand what we mean by "understanding". Whatever the LLM is doing inside, it's far removed from what a human would do. But on the face of it, from an external perspective, it has many of the same useful properties as if done by a human. And the LLM's outputs seem to be converging closer and closer to what a human would do, even though there is still a large gap. I suggest the focus here shouldn't be so much on what the LLM can't do but the speed at which it is becoming better at doing things.
I think there is only one thing we should focus on: measurable capability on tasks. Understanding, memorization, reasoning, etc. are all just shorthands we use to quickly convey an idea of a capability on a kind of task. One can also attempt to describe mechanistically how the model works, but that is very difficult; that is where you would try to describe your sense of "understanding" rigorously. To keep it simple: I think when you say the LLM does not understand, what you must really mean is that you reckon its performance will quickly decay as the task gets more difficult along various dimensions (depth/complexity, verifiability of the result, length/duration/context size), to a degree where it is still far from being able to act as a labor-delivering agent.
Another fun example from yesterday: I pasted a blog post in Markdown into an HTML comment, selected it, and told Sonnet to convert it to HTML using another blog post as a style reference.
And how do you trust that it didn't just alter or omit some sentences from your blog post?
I just use Pandoc for that purpose and it takes 30 seconds, including the time to install pandoc. For code generation where you'll review everything, AI makes sense; but for such conversion tasks, it doesn't because you won't review the generated HTML.
> it takes 30 seconds, including the time to install pandoc
On some speedrunning competition maybe? Just tested on my work machine: `sudo apt-get install pandoc` took 11 seconds to complete, and it was only that fast because I already had all the dependencies installed.
Also I don't think you'll be able to fulfill the "using another blog post as a style reference" part of GP's requirements - unless, again, you're some grand-master Pandoc speedrunner.
Sure, AI will make mistakes with such conversion tasks. It's not worth it if you're going to review everything carefully anyway. In code, fortunately, you don't have to - the compiler is doing 90% of the grunt work for you. In writing, depends on context. Some text you can eyeball quickly. Sometimes you can get help from your tool.
Literally yesterday I back-ported a CV from English to Polish via Word's Translation feature. I could've done it by hand, but Word did 90% of it correctly, and fixing the remaining issues was a breeze.
Ultimately, what makes LLMs a good tool for random conversions like these is that they're a single tool. Sure, Pandoc can handle GP's case better (if the inputs are well-defined), but it can't do any of the 10 other ad-hoc conversions they may have needed that day.
> Installing pandoc is basically a one-time cost that is amortized over its uses, so... why worry about it?
Because the space of problems that today's LLMs solve well with trivial prompts is vast, far greater than any single classical tool covers. If you're comparing solutions to 100 random problems, you have to count those one-time costs, because you'll need some 50-100 different tools to get through them all.
> Relying on the compiler to catch every mistake is a pretty limited strategy.
No, you're relying on the compiler to catch every mistake that can be caught mechanically - exactly the kind of thing humans suck at. That's kind of the entire point of errors and warnings in compilers, or static typing for that matter.
No, if you are having an LLM generate code that you are not reviewing, you are relying on the compiler 100%. (Or the runtime, if it isn't a compiled language.)
Re: trust. It just works using Sonnet 3.5. It's gained my trust. I do read it after (again, I'm more in a code reviewer role). People make mistakes too, and I think its error rate for repetitive tasks is below most people's. I've also learned how to prompt it. I'd tell it to just add formatting without changing content in the first pass, then in a separate pass ask it to fix spelling/grammar issues. The diffs are easy to read.
Re:Pandoc. Sure, if that's the only task I used it for. But I used it for 10 different ones per day (write a JSON schema for this json file, write a Pydantic validator that does X, write a GitHub workflow doing Y, add syntax highlighting to this JSON, etc). Re:this specific case - I prefer real HTML using my preferred tools (DaisyUI+tailwind) so I can edit it after. I find myself using a lot less boilerplate-saving libraries, and knowing a few tools more deeply.
Why are you comparing its error rate for repetitive tasks with most people's? For such mechanical tasks we already have fully deterministic algorithms, and the error rate of these traditional algorithms is zero. You aren't usually asking a junior assistant to do such conversions manually, so it doesn't make sense to compare its error rate with humans'.
Normalizing this kind of computer error when there should be none makes the world a worse place, bit by bit. The kind of productivity increase you get here does not seem worthwhile.
You didn't also factor in the time to learn Pandoc (and to relearn it if you haven't used it lately). This is also just one of many daily use cases for these tools. The time it takes to know how to use a dozen tools like this adds up when an LLM can just do them all.
This is actually how I would use AI: if I forgot how to do a conversion task, I would ask the AI to tell me the command so that I can run it without having to jog my memory first. The pandoc command is literally one line with a few flags; it's easily reviewable. Then I run pandoc myself. Same thing with the multitude of other rarely used but extremely useful tools such as jq.
In other words, I want AI to help me with invoking other tools to do a job rather than doing the job itself. This nicely sidesteps all the trust issues I have.
What? Is this a joke? Have you actually worked with human office assistants? The whole point of human assistants is that you don't need to verify their work. You hire them with a good wage and you trust that they are working in good faith.
It's disorienting for me to hear that some people are so blinded by AI assistants that they no longer know how human assistants behave.
I replaced SO with ChatGPT and it's the only good use case I found: finding an answer I can build on. But outsourcing my reflection? That's a dangerous path. I tried that on small projects, building a project from scratch with Cursor just to test it. Sometimes it's right on the spot, but in many instances it completely misses some cases and edge cases. Impossible to trust blindly. And if I do so and don't take proper time to read and think about the code, the consequences pile up and make me waste time in the long run, because it's prompt over prompt over prompt to refine it, and sometimes it's still not exactly right. That messes up my thinking, and I prefer to do it myself and use it as documentation on steroids. I never used Google and SO again for docs. I have the feeling that relying on it too much to write even small blocks of code will make us lose some abilities in the long run, and I don't think that's a good thing. Will companies allow us to use AI in code interviews for boilerplate?
The AIs are to a large degree trained on tutorial code, quick examples, how-tos and so on from the net. Code that really should come with a disclaimer: "Don't use in production, example code only."
This leads to your code being littered with problematic edge cases that you still have to learn how to fix. Or, in the worst case, you don't even notice that there are edge cases because you just copy-pasted the code and it works for you. Your users will find the edge cases with time.
Isn't tutorial-level code exactly the best practices that everyone recommends these days? You know, don't write clever code, make things obvious to juniors, don't be a primadonna but instead make sure you can be replaced by any recently hired fresh undergrad, etc.? :)
Not really. For example, tutorial code will often leave out edge cases so as to avoid confusing the reader: if you're teaching a new programmer how to open a file, you might avoid mentioning how to handle escaping special characters in the filename.
Which makes me wonder: if old companies with a history of highly skilled teams trained local models, how much better would those models be at helping solve new, complex problems?
They kinda already are - source code for highly complex open source software is already in the training datasets. The problem is that tutorials are much more descriptive (why the code is doing something, how this particular function works, etc., down to the level of a single line of code), which probably means they're much easier for LLMs to interpret and therefore weighted higher in responses.
I’m slightly worried that these AI tools will hurt language development. Boilerplate heavy and overly verbose languages are flawed. Coding languages should help us express things more succinctly, both as code writers and as code readers.
If AI tools let us vomit out boilerplate and syntax, I guess that sort of helps with the writing part (maybe. As long as you fully understand what the AI is writing). But it doesn’t make the resulting code any more understandable.
Of course, as is always the case, the tools we have now are the dumbest they’ll ever be. Maybe in the future we can have understandable AI that can be used as a programming language, or something. But AI as a programming language generator seems bad.
I used to agree with this, but the proliferation of Javascript made me realize that newer/better programming languages were already not coming to save us.
Wondering: since seniors now pair with LLMs, the world needs far fewer juniors. Some juniors will go off to other industries, but some might start projects in new languages without LLM/business support.
Frankly, otherwise I don't see how a corpus for any new language would get created.
Before you dismiss all of this because "You could do it by hand just as easily", you should actually try using Cursor. It only takes a few minutes to setup.
I'm only 2 weeks in but it's basically impossible for me to imagine going back now.
It's not the same as GH Copilot, or any of the other "glorified auto-complete with a chatbox" tools out there. It's head and shoulders better than everything else I have seen, likely because the people behind it are actual AI experts and have built numerous custom models for specific types of interactions (vs a glorified ChatGPT prompt wrapper).
> I could select one ":", use "Select All Occurrences"
Only if they're the same occurrences. Cursor can often get the idea of what you want to do with a whole block of different names. Unless you're a vim macro master, that's not easily doable.
> How many of you spend 90% of time writing boilerplate instead of the core logic of the project?
It doesn't take much time, but it's a distraction. I'd rather tab through some things quickly than context switch to the docs, finding the example, adapting it for the local script, then getting back to what I was initially trying to do. Working memory in my brain is expensive.
I still spend a good amount of time on boilerplate. Stuff that's not thinking hard about the problem I'm trying to solve: unit tests, error logging, naming classes, methods and variables. Claude is really pretty good at this, not as good as the best code I've read in my career but definitely better than average.
When I review Sonnet's code, it is more likely to be correct than when I review my own. If I make a mistake, I'll read what I intended to write, not what I actually wrote. Whereas when I review Sonnet's, there are two passes, so the chance an error slips through is smaller.
I'm using an expansive definition of boilerplate, to be sure. But like boilerplate, most unit tests require a little bit of thought and then a good amount of typing: setting up the data to test, mocking out methods, writing out assertions to cover all your edge cases.
I've found sonnet and o1 to be pretty good at this. Better than writing the actual code because while modifying a system requires a lot of context of the overall application and domain, unit testing a method usually doesn't.
Yes. You write a function ApplyFooToBar(), and then unit tests that check that, when supplied with the right Foos, the function indeed applies those Foos to the Bar. It's not very intellectually challenging work.
If anything, the challenge is all the boilerplate surrounding the test, because you can't just write down the checks themselves - you need to assemble data and assemble expected results, which you end up DRY-ing into support modules once you have 20 tests needing similar pre-work, and then there's lots of other bullshit to deal with at the intersection of your programming language, your test framework, and your modularization strategy.
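To make the shape concrete, most of these tests look roughly like this (a Python/pytest sketch; ApplyFooToBar and the fixture are made up for illustration):

    import pytest

    def apply_foo_to_bar(foos, bar):
        # Stand-in implementation so the example runs.
        return {**bar, "applied": list(foos)}

    @pytest.fixture
    def bar():
        # The "assemble data" boilerplate that tends to get DRY-ed into fixtures.
        return {"id": 1, "applied": []}

    def test_applies_all_foos(bar):
        assert apply_foo_to_bar(["a", "b"], bar)["applied"] == ["a", "b"]

    def test_empty_foos_leaves_bar_unchanged(bar):
        assert apply_foo_to_bar([], bar)["applied"] == []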
Indeed. Too many tests are just testing nothing other than mocks. That goes for my coworkers directly and for their Copilot output. They're not useful tests; they aren't going to catch actual errors, and they're maybe useful as usage documentation. But in general, they're mostly a waste.
Integration tests, good ones, are harder but far more valuable.
> Too many tests are just testing nothing other than mocks
Totally agree, and I find that they don't help with documentation much either, because the person that wrote it doesn't know what they're trying to test. So it only overcomplicates things.
Also harmful because it gives a false sense of security that the code is tested when it really isn't.
This has been my approach in the past: only certain parts of the code are worth unit testing. But given how much easier unit tests are to write now with AI, I think the % of code worth unit testing has gone up.
> But given how much easier unit tests are to write now with AI I think the % of code worth unit testing has gone up.
I see the argument, I just disagree with it. Test code is still code and it still has to be maintained, which, sure, "the AI will do that", but now there's a lot more that I have to babysit.
The tests that I'm seeing pumped out by my coworkers who are using AI for it just aren't very good tests a lot of the time, and honestly encode too much of the specific implementation details of the module in question into them, making refactoring more of a chore.
The tests I'm talking about simply aren't going to catch any bugs, they weren't used as an isolated execution environment for test driven development, so what use are they? I'm not convinced, not yet anyway.
Just because we can get "9X%" coverage with these tools, doesn't mean we should.
Completely agree. I find it fails miserably at business logic, which is where we spend most of our time. But it does great at generic stuff, which is already trivial to find on Stack Overflow.
Might be that they work in a much more complex codebase, or a language/framework/religion that has less text written on it. Might also be that they (are required to) hold code to a higher standard than you and can't just push half-baked slop to prod.
I've spent a good amount of time in my career reading high-quality code and slop. The difference is not some level of intelligence that Sonnet does not possess. It's a well-thought-out design, good naming, and rigor. Sonnet is as good as, if not better than, the average dev at most of this, and with some good prompting and a little editing it can write code as good as most high-quality open source projects.
Which is usually far higher than most commercial apps the vast majority of us devs work on.
> with some good prompting and a little editing it can write code as good
I agree that with a good developer "babysitting" the model it's capable of producing good code, although this is more because the developer is skilled at producing good code, so they can tell the AI where it should refactor and how (or they can just do it themselves). If you've spent significant time refitting AI code, it's not really AI code anymore; it's yours.
Blindly following an AI's lead is where the problem is, and this is where bad-to-mediocre developers get stuck using an AI, since the effort/skill required to take the AI off its path and get something good out is largely not practised. This is because they don't have to fix their own code, and what the AI spits out is largely functional - why would anyone spend time thinking about a working solution when they don't understand how they arrived at it?
I've spent so much time in my life reviewing bad or mediocre code from mediocre devs, and 95% of the time the code Sonnet 3.5 generates is at least as correct, and 99% of the time more legible, than what a mediocre dev generates.
It's well commented, the naming is great, it rarely tries to get overly clever, it usually does some amount of error handling, it'll at least try to read the documentation, and it finds most of the edge cases.
It's easy to forget one major problem with this: we all have been mediocre devs at some point in our lives -- and there will always be times when we're mediocre, even with all our experience, because we can't be experienced in everything.
If these tools replace mediocre devs, leaving only the great devs to produce the code, what are we going to do when the great devs of today age out, and there's no one to replace them with, because all those mediocre devs went on to do something else, instead of hone their craft until they became great devs?
Or maybe we'll luck out, and by the time that happens, our AIs will be good enough that they can program everything, and do it even better than the best of us.
If you can call that "lucking out" -- some of us might disagree.
Which is deeply sad, because this both tarnishes good output and gives a free pass to the competitor - shit "content" generated by humans. AI models only recently started to match that in quantity.
But then again, most of software industry exists to create and support creation of human slop - advertising, content marketing, all that - so there's bound to be some double standards and salary-blindness present.
Without knowing much about my prompts and work you’ve just assumed it’s why AI gives me bad results. You’re wrong. (Can you see why this is a bad argument?)
Don't get me wrong I love sloppy code as much as the next cowboy, but don't delude yourself or others when the emperor doesn't have clothes on.
> But does great at generic stuff, which is already trivial to find on stack overflow.
The major difference is that with Cursor you just hit "tab", and that thing is done. Vs breaking focus to open up a browser, searching SO, finding an applicable answer (hopefully), translating it into your editor, then reloading context in your head to keep moving.
The benefit of exploring is finding alternatives and learning about the gotchas. And learning more about both the problem space and how the language/library/framework solves it.
There was a thread about that, and the gist was: a calculator is a great tool because it's deterministic and the failures are known (mostly related to precision); it eliminates the entire need for doing computation by hand, and you don't have to babysit it through the computation process with "you're an expert mathematician..."; also, it's just a tool, and you still need to learn basic mathematics to use it.
The equivalent to that is a good IDE that offers good navigation (project and dependencies), great feedback (highlighting, static code analysis,..), semantic manipulation, integration with external tooling, and the build/test/deploy process.
Yes, I think I agree. And when you use a calculator and get a result that doesn't make sense to you, you step in as the human and try to figure out what went wrong.
With the calculator, it's typically a human error that causes the issue. With AI, it's an AI error. But in practice it's not a different workflow.
Give inputs -> some machine does work much faster than you could -> use your human knowledge to verify outputs -> move forward or go back to step 1.
My experience has been different. My major use case for AI tools these days is writing tests. I've found that the generated test cases are very much in line with the domain. It might be because we've been strictly using domain-driven design principles. It even generates test cases that fail, which shows us what we've missed.
Yesterday, I got into an argument on the internet (shocking, I know), so I pulled out an old gravitation simulator that I had built for a game.
I had ChatGPT give me the solar system parameters, which worked fine, but my simulation had an issue that I had actually never resolved. So, working with the AI, I asked it to convert the simulation to constant time (it was locked to the render path - it's over a decade old). Needless to say, it wrote code that set the simulation to real time... in other words, we'd be waiting one year to see the planets go around the sun. After I pointed that out, it figured out what to do but still got things wrong or made some terrible readability decisions. I ended up using it as inspiration instead, and then was able to have the simulation step at one-second resolution (which was required for a stable orbit) but render at 60fps and compress a year into a second.
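The structure I ended up with was roughly this shape (a paraphrased Python sketch with illustrative numbers, not the actual code):

    # Fixed-step simulation decoupled from rendering: step the physics at a
    # 1-second simulated resolution, draw at ~60 fps, and scale time so one
    # real second covers one simulated year.
    SIM_STEP = 1.0                    # simulated seconds per physics step
    TIME_SCALE = 365.25 * 24 * 3600   # simulated seconds per real second
    FRAME_DT = 1.0 / 60.0             # real seconds per rendered frame

    def run_frame(state, step_physics, render):
        # Advance this frame's share of simulated time in fixed-size steps,
        # then render once.
        steps = int(FRAME_DT * TIME_SCALE / SIM_STEP)
        for _ in range(steps):
            step_physics(state, SIM_STEP)
        render(state)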
This sums up my experience as well. You can get an idea or just a direction from it, but the AI itself instantly trips over its own legs in any non-tutorial task. Sometimes I envy, and at the same time feel sorry for, the successful AI-enabled devs, because it feels like they do boilerplate and textbook features all day. What a relief if something can write that for you.
I have a corporation-sponsored subscription to GitHub Copilot + Rider.
When I'm writing unit tests or integration tests it can guess the boilerplate pretty well.
If I already have an AddUserSucceeds test and I start writing `public void Dele...`, it usually fills in the DeleteUserSucceeds function with pretty good guesses about which Asserts I want there - most times it even guesses the API path/function correctly, because it uses the whole project as context.
I can also open a fresh project I've never seen and ask "Where is DbContext initialised" and it'll give me the class and code snippet directly.
Have you tried recently to start a new web app from scratch? Especially integrating a frontend framework with styling and wiring the frontend to the backend.
Oh my god, get ready to waste a full weekend just to set everything up and get a formatted hello world.
That’s why I use Rails for work. But I also had to write a small Nodejs project (vite/react + express) recently for a private project, and it has a lot of nice things going for it that make modern frontend dev really easy - but boy is it time consuming to set up the basics.
I can't imagine having nice things to say about node after working in Rails. Rails does so much for you, provides a coherent picture of how things work. Node gives you nothing but the most irritating programming tools around
I thought Node was great until I had to upgrade some projects and realized that those framework maintainers never maintain their dependencies. Whereas in the C world, lots of projects treat warnings as errors.
that's an indictment of the proliferation of shitty frameworks and documentation. it's not hard to figure out such a combination and then keep a template of it lying around for future projects. you don't have to reach for the latest and shiniest at the start of every project.
> you don't have to reach for the latest and shiniest at the start of every project.
Except you kind of do, because if you're working frontend or mobile, then your chosen non-shitty tech stack is probably lacking some Important Language Innovations or Security Features that Google or Microsoft forced on the industry since the last time you worked with that stack.
(Yes, that's mostly just an indictment of the state of our industry.)
every time you capitulate, you tell them that you're happy to play along and bring more "innovation", so you keep having to run very hard just to stay in place.
Most frontend frameworks come with usable templates. Setting up a new Vite React project and getting to a formatted hello world can be done in half an hour tops.
On a good day, when you're using the most recent right version of macOS, when none of the frontend tool's couple thousand dependencies are transiently incompatible with everything else, yes.
(If no, AI probably won't help you here either. Frontend stuff moves too fast, and there's too much of it. But then perhaps the AI model could tell you that the frontend tool you're using is a ridiculous overkill for your problem anyway.)
I'll be honest, I've never had the command line tool to set up a React / NextJS / Solid / Astro / Svelte / any framework app fail to make something that runs, ever.
What magic command-line tool exactly are you referring to? What tool configures the frontend framework to use a particular CSS framework, sets up webpack to work with your backend endpoints properly, sets up CORS and authentication with the backend for development, and configures the backend to point to and serve the SPA?
Yep, it will likely work and do what it's supposed to do. But what it's supposed to do is probably only 90% of what you want: try to do anything outside what the boilerplate is set up for and you're in for hours of pain. Want SWC instead of TSC? ESLint wasn't set up, or not how you want? Prettier isn't integrated with ESLint? You want TypeScript project references? Something about ES modules vs CJS? Hours and hours and hours of pain.
I understand all this stuff better than the average dev (though I'm not in the top 10%), and I'd be ashamed to put a number on the amount of hours I've lost setting up boilerplate _even_ having used some sort of official generator.
> Boilerplate is tedious, but not really time-consuming.
In the aggregate, almost no programmer can think up code faster than they can type it in. But being a better typist still helps, because it cuts down on the amount you have to hold in your head.
The same goes for automatically generating boilerplate.
> If I don't know, how can I trust the generated code to be correct?
Ask the AI for a proof of correctness. (And I'm only half-joking here.)
In languages like Rust the compiler gives you a lot of help in getting concurrency right, but you still have to write the code. If the Rust compiler approves of some code (AI-generated or artisanally crafted), you are already pretty far along in getting concurrency right.
A great mind can take a complex problem and come up with a simple solution that's easy to understand and obviously correct. AI isn't quite there yet, but getting better all the time.
> In the aggregate, almost no programmer can think up code faster than they can type it in.
And thank god! Code is a liability. The price of code is coming down but selling code is almost entirely supplanted by selling features (SaaS) as a business model. The early cloud services have become legacy dependencies by now (great work if you can get it). Maintaining code is becoming a central business concern in all sectors governed by IT (i.e. all sectors, eating the world and all that).
On a per-feature basis, more code means higher maintenance costs, more bugs, and greater demands on developer skills and experience. Validated production code that delivers proven customer value is not something you refactor on a whim (unless you plan to go out of business), and the fact that you did it in an evening thanks to ClippyGPT means nothing; the costly part is always what comes after: demonstrating value or maintaining trust in a competitive market with a much shallower capital-investment moat.
> In the aggregate, almost no programmer can think up code faster than they can type it in. But being a better typist still helps, because it cuts down on the amount you have to hold in your head.
I mean, at the big-picture level, sure they can. Or in detail, if it is something they have good experience with. In many cases I get a visual of the whole code block, and if I use Copilot I can already predict what it is going to auto-complete for me based on the context, and then I can know within a second whether it was right or wrong. Of course this is more so for side projects, since I know exactly what I want to do, so most of the time it feels like I just have to vomit all the code out. And I feel impatient, so Copilot helps a lot with that.
I don’t even trust the API based exercises anymore unless it’s a stable and well documented API. Too many times I’ve been bitten by an AI mixing and matching method signatures from different versions, using outdated approaches, mixing in apis from similar libraries, or just completely hallucinating a method. Even if I load the entire library docs into the context, I haven’t found one that’s completely reliable.
It just has to be popular and common boilerplate like the example I gave.
It's hard with less popular APIs. It will almost always get something wrong. In such cases, I read docs, search sourcegraph / GitHub, and finally check the source code.
> Snake case to camelCase
> VSCode itself has a "Transform to Camel Case" command
I never understand arguments like this. I have no idea what the shortcut for this command is. I could learn this shortcut, sure, but tomorrow I’ll need something totally different. Surely people can see the value of having a single interface that can complete pretty much any small-to-medium-complexity data transformation. It feels like there’s some kind of purposeful gaslighting going on about this and I don’t really get the motive behind it.
Exactly. I think some commenters are taking this example too literally. It's not about this specific transformation, but how often you need to do similar transformations and don't know the exact shortcut or regex or whatever to make it happen. I can describe what I want in three seconds and be done with it. Literal dropbox.png going on in this thread.
If you aren’t using AI for everything, you’re using it wrong. Go learn how to use it better. It’s your job to find out how. Corporations are going to use it to replace your job.
(Just kidding. I’m just making fun of how AI maxis reply to such comments, but they do it more subtly.)
Boilerplate comes up all the time when writing Erlang with OTP behaviors though, and sometimes you have no idea if it really is the right way or not. There are Emacs skeletons for that (through tempo), but feels like they are sometimes out of date.
1. Is such a small task for me anyway that I don't lose much just doing it by hand.
2. The last time I wrote boilerplate-heavy Java code, 15+ years ago, the IDE already generated most of it for me. Nowadays boilerplate comes in two forms for me: new project setup, where I find it far quicker to use a template or just copy and gut an existing project (and it's not like I start new projects that often anyway), or new components that follow some structure, where AI might actually be useful, but I tend to just copy an existing one and gut it.
3. These aren’t tasks I really trust AI for. I still attempt to use AI for them, but 9 out of 10 times come away disappointed. And the other 1 time end up having to change a lot of it anyway.
I find a lot of value from AI, like you, asking it SO style questions. I do also use it for code snippets, eg “do this in CSS”. Its results for that are usually (but not always) reasonably good. I also use it for isolated helper functions (write a function to flood fill a grid where adjacent values match was a recent one). The results for this range from a perfect solution first try, to absolute trash. It’s still overall faster than not having AI, though. And I use it A LOT for rubber ducking.
I find AI is a useful tool, but I find a lot of the positive stories to be overblown compared to my experience with it. I also stopped using code assistants and just keep a ChatGPT tab open. I sometimes use Claude, but its conversation length limits turned me off.
Looking at the videos in OP, I find the parallelising task to be exactly the kind of tricky and tedious task that I don’t trust AI to do, based on my experience with that kind of task, and with my experience with AI and the subtly buggy results it has given me.