
What domain/type of software do you and they work on? Cursor has been quite effective for me and many others say the same.

As long as one prompts it properly with sufficient context, reviews the generated code, and asks it to revise as needed, the productivity boost is significant in my experience.




Well, the context is the problem. LLMs will really become useful if they (1) understand the WHOLE codebase and all its context, (2) understand the changes made to it over time (local history and git history), and (3) can also use context from Slack, with all of that updating basically in real time.

That will be scary. Until then, it's basically just a better autocomplete for any competent developer.
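For concreteness, even the "changes over time" part could be stitched together today from plain git output; this is a crude, hypothetical sketch, not any particular tool's API:

    import subprocess

    def recent_history(repo: str, path: str, max_commits: int = 20) -> str:
        # Recent commit messages and diffs touching one file: a stand-in for
        # the "changes over time" context described above.
        out = subprocess.run(
            ["git", "-C", repo, "log", f"-{max_commits}", "--patch", "--", path],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    def build_prompt(task: str, file_text: str, history: str) -> str:
        # Naive concatenation; a real tool would rank and trim to fit the context window.
        return f"Task: {task}\n\nCurrent file:\n{file_text}\n\nRecent changes:\n{history}"

The hard part isn't fetching this data, it's keeping it fresh and deciding what's actually relevant.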


What you describe would be needed for a fully autonomous system. But for a copilot sort of situation, the LLM doesn't need to understand and know _everything_. When I implement a feature in a codebase, my mental model doesn't include everything that has ever been done to that codebase, but a somewhat narrow window, just wide enough to solve the issue at hand (unless it's some massive codebase-wide refactor or component integration, but even then it's usually broken down into smaller chunks with clear interfaces and abstractions).


I use Copilot daily, and because it lacks context it's mostly useless except for generating boilerplate and occasionally converting small things from A to B. Oh, and copying functions from Stack Overflow and naming them right.

That's about it. But I spend maybe 5% of my day on those tasks.


I dislike Copilot's context management, personally, and much prefer populating the context of, say, Claude deliberately and manually (using Zed; see https://zed.dev/blog/zed-ai). This fits my workflow much, much better.


Imagine you are coding in your IDE and it suggests a feature because someone mentioned it yesterday in the #app-eng channel. That needs deeper context, though: about the order of events, and about how authoritative a given person is.
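Purely as a hypothetical sketch of what "order of events and how authoritative a person is" might mean in code (the fields and weights here are made up):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class SlackMessage:
        author_role: str   # e.g. "tech-lead", "intern"; would come from org data
        sent_at: datetime
        text: str

    AUTHORITY = {"tech-lead": 1.0, "product-manager": 0.8, "intern": 0.3}

    def relevance(msg: SlackMessage, now: datetime) -> float:
        # Newer messages and more authoritative authors score higher; the top few
        # would be stuffed into the model's context alongside the code.
        age_days = (now - msg.sent_at).total_seconds() / 86400
        recency = 1.0 / (1.0 + age_days)
        return recency * AUTHORITY.get(msg.author_role, 0.5)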


I get value out of LLMs on stock Python or NextJS or whatever, in the cases where the person doing that work was in fact just a lossy channel from SO to my diff queue.

If there’s no computation then there’s no computer science. It may be the case that Excel with attitude was a bubble in hiring.

But Sonnet and 4o both suck at why CUDA isn’t detected on this SkyPilot resource.
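(For context, "CUDA isn't detected" usually boils down to a check like the following failing on the provisioned node; the PyTorch assumption is mine, the comment doesn't say which framework:)

    import torch  # assumes a PyTorch workload; the comment doesn't actually say

    print("torch built with CUDA:", torch.version.cuda)   # None means a CPU-only wheel
    print("CUDA available:", torch.cuda.is_available())   # False: no GPU visible, or driver/runtime mismatch
    # If this prints False, the next step is usually running `nvidia-smi` on the node itself.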


> But Sonnet and 4o both suck at why CUDA isn’t detected on this SkyPilot resource.

I don't understand this sentence; should "both suck at why" be "both suck and why", or perhaps I'm just misunderstanding in general?


SkyPilot is an excellent piece of software attempting an impossible job: run your NVIDIA job on actively adversarial compute fabric whose owners charge the nastiest monopoly markup since the Dutch East India Company (look it up: the only people to run famine margins anywhere near NVIDIA's are slave traders).

To come out of the cloud “credits” game with your shirt on, you need stone cold pros.

The kind of people on the Cursor team. Not the adoring fans who actually use their shit.




