
We are using both Ellipsis and Sweep for our open source project, and they are quite helpful in their own ways. I think selling them as an automated engineer is a little over the top at the moment, but once you get the hang of them they can spot common problems in PRs or handle small documentation-related tasks quite accurately.

Take a look at this PR for example: https://github.com/julep-ai/julep/pull/311

Ellipsis caught a bunch of things that would otherwise have come up only in code review later. It also got a few things wrong, but those are easy to ignore. I like it overall: helpful once you get the hang of it, although far from a “junior dev”.




> Take a look at this PR for example: https://github.com/julep-ai/julep/pull/311

I am still confused if vector size should be 1024 or 728 lol.


Lolll. It’s 1024 but only for documents and not the tools (we changed the embedding model for RAG)


Why isn’t the AI suggesting moving it into an appropriately named const? Magic numbers are poor practice.


Good catch. The team could add this rule to their Ellipsis config file to make sure that it's always flagged: "Never use magic numbers. Always store the number in a variable and use the variable instead."

Docs: https://docs.ellipsis.dev/config#add-custom-rules
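For anyone curious what such a rule entry could look like: this is a guess at the shape, not the actual schema — the file name and key names here are my assumptions, so check the linked docs for the real format.

```yaml
# Hypothetical ellipsis.yaml snippet -- key names are assumptions;
# see docs.ellipsis.dev for the actual config schema.
pr_review:
  rules:
    - "Never use magic numbers. Always store the number in a
      variable and use the variable instead."
```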


But even that isn’t ALWAYS the case. There are times when it is appropriate to have numbers inline, as long as they’re not repeated.

This is where good judgement comes in, which is difficult to encode rules for.
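To make the trade-off concrete, here's a minimal Python sketch — the `create_index` client API is made up purely for illustration:

```python
# Flagged style: 1024 is a magic number; nothing explains why it must
# be this value, or where else it has to agree with the embedding model.
def make_index_inline(client):
    return client.create_index(dimensions=1024)

# Preferred style: the constant names the coupling in one place.
DOC_EMBEDDING_DIM = 1024  # output size of the document embedding model

def make_index(client):
    return client.create_index(dimensions=DOC_EMBEDDING_DIM)

# But a rule stated as "never" over-triggers: an unrepeated,
# self-explanatory literal like (a + b) / 2 needs no name.
```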


> I think selling them as an automated engineer is a little over the top at the moment

Indeed. Amazon originally advertised CodeGuru as being "like having a distinguished engineer on call, 24x7".[^1] That became a punchline at work for a good while.

I can definitely see the value of a tool that helps identify issues and suggest fixes for stuff beyond your typical linter, though. In theory, getting that stuff out of the way could make for more meaningful human reviews. (Just don't overpromise what it can reasonably do.)

[^1]: https://web.archive.org/web/20191203185853/https://aws.amazo...


As it stands today, Ellipsis isn't sold as an AI software engineer.

One of our biggest learnings is that state-of-the-art LLMs aren't good enough to write code autonomously, but they are good enough to be helpful during code review.


Right, I stand corrected; I think I confused it with the branding of other competing products. I remember really liking that Ellipsis does _not_ sell itself as a developer. I’ll edit my comment to reflect that. :)


I’ve been following Sweep and Aider for a while and really love what they’re both doing, especially Sweep.

Would love to get your thoughts on Sweep. Does it meet your expectations? If not, where does it fall short?


Not as the “junior dev” that Sweep markets itself as, but it is useful in its own ways. For example, one really nifty way I found to use it effectively:

- git diff

- gh issue create “sweep: update docs for this file change” for every file changed

It’s not perfect even after that but gives me a good starting point and often just needs a minor change.
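That workflow could be scripted roughly like this — a sketch assuming `main` as the base branch and an authenticated `gh` CLI; the “sweep:” issue-title convention is from the parent comment, and the exact title/body wording is my own:

```python
import subprocess

def changed_files(base: str = "main") -> list[str]:
    """Files changed on the current branch relative to `base`."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def issue_title(path: str) -> str:
    # The "sweep:" prefix is what makes Sweep pick the issue up.
    return f"sweep: update docs for {path}"

def file_doc_issues(base: str = "main") -> None:
    """Open one docs-update issue per changed file via the gh CLI."""
    for path in changed_files(base):
        subprocess.run(
            ["gh", "issue", "create",
             "--title", issue_title(path),
             "--body", f"Docs may be stale after changes to {path}."],
            check=True,
        )
```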


Any thoughts on Aider vs. Sweep so far? I am also interested in trying out both...



