
The most valuable thing I want AI to do with regard to coding is to write all the unit tests and get me to 100% code coverage. Working out the data variance and combinatorics needed to construct all the meaningful tests is laborious, which means it often doesn't get done (we coders are lazy...). That is what I want AI to do: all the mind-numbing, draining work, so I can focus on the system.
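Something like this pytest sketch is the kind of thing I mean: the cross product of inputs is trivial to enumerate but tedious to write by hand (parse_price is a made-up stand-in, defined inline just so the example runs):

    # Sketch of the combinatorial test generation an AI might automate.
    # parse_price is hypothetical; the real target would be your own code.
    import itertools
    from decimal import Decimal

    import pytest

    def parse_price(amount: str, currency: str):
        return (Decimal(amount), currency)

    AMOUNTS = ["0", "0.01", "-1", "9999999.99"]
    CURRENCIES = ["USD", "EUR", "JPY"]

    @pytest.mark.parametrize("amount,currency",
                             itertools.product(AMOUNTS, CURRENCIES))
    def test_parse_price(amount, currency):
        value, cur = parse_price(amount, currency)
        assert cur == currency
        assert str(value) == amount  # Decimal preserves the string form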





Which is also a fantasy, since if you did achieve it, you'd just have tests that verified that all your bugs were in place.
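A toy illustration of that trap: if the tests are generated from what the code actually does rather than what it was meant to do, the bug gets asserted as correct behavior (last_n here is a made-up example):

    def last_n(items, n):
        return items[-n:]    # bug: n == 0 slices as items[0:], the whole list

    def test_last_n_zero():  # generated from behavior, not intent...
        assert last_n([1, 2, 3], 0) == [1, 2, 3]  # ...the bug, now asserted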

Not necessarily. I have used LLMs to write unit tests based on the intent of the code and had them catch bugs. That was for relatively simple cases, of course, but there's no reason this can't scale up in the future.

LLMs absolutely can "detect intent" and correct buggy code. e.g., "this code appears to be trying to foo a bar, but it has a bug..."


How do you expect AI to write unit tests if it doesn't know the precise desired semantics (specification)?

What I personally would like AI to do is refactor the program so it is shorter/clearer, without changing its semantics. Then I (a human) could easily review what it does and whether it conforms to the specification. (For example, rewrite a C program to give exactly the same output, but as Python code.)

In cases where there is a peculiar difference between the desired semantics and the real semantics, it would become apparent as additional complexity in the refactored program. For example, there might be subtle semantic differences between C and Python library functions. If the refactored program used a custom reimplementation of a C function instead of the native Python function, that would indicate the difference matters for the program's semantics and needs to be specified further, or it could be a bug in one of the implementations.
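Integer division is a concrete instance of this kind of difference: C truncates toward zero while Python floors, so a faithful port has to carry a helper like the sketch below, and the helper's very existence signals that behavior on negative operands is part of the spec (c_div is illustrative, not from any library):

    # C's  -7 / 2  truncates toward zero: -3.
    # Python's  -7 // 2  floors: -4.
    def c_div(a: int, b: int) -> int:
        """Reimplements C integer division (truncation toward zero)."""
        q = abs(a) // abs(b)
        return q if (a >= 0) == (b >= 0) else -q

    assert -7 // 2 == -4       # Python semantics
    assert c_div(-7, 2) == -3  # C semantics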


I've been having good results having AI "color in" the areas I might otherwise skimp on, at least in a first pass at a project: really robust fixtures and mocks in tests (which I'm no longer worried will become dead weight as the system changes, because they can be updated automatically and pretty effectively), graceful error handling and messaging for the edgier edge cases, admin views for things that might otherwise only have had a CLI, etc.
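As a rough sketch of what that test scaffolding looks like in pytest (PaymentGateway is a hypothetical dependency):

    # A reusable fixture plus a mocked external dependency.
    from unittest.mock import Mock

    import pytest

    @pytest.fixture
    def gateway():
        gw = Mock(name="PaymentGateway")  # hypothetical service
        gw.charge.return_value = {"status": "ok", "id": "txn-1"}
        return gw

    def test_charge_happy_path(gateway):
        result = gateway.charge(amount=100)
        assert result["status"] == "ok"
        gateway.charge.assert_called_once_with(amount=100)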

A bigger challenge, and a “senior engineer” thing, is to write code with small/tractable state spaces in the first place.

It’s not either/or of course, and AI can help.

But sometimes it takes another leap beyond the current set of test cases.


Tests are the documentation that explains what your application is intended to do. Once AI is able to figure that out, you won't be needed anymore.

Could be the opposite.

Once you write sufficiently detailed unit tests, the AI writes the implementation.
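The test then becomes the specification. A toy sketch of what "sufficiently detailed" could mean, with slugify standing in for the implementation the AI would be asked to produce (the function shown is just one plausible answer):

    # The human writes the spec as tests; the AI's job is to
    # produce a slugify that makes them pass.
    import re

    def slugify(title: str) -> str:  # the AI-written implementation
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    def test_slugify_spec():
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  spaces  ") == "spaces"
        assert slugify("Ümlauts?") == "mlauts"  # spec says ASCII only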


Are we not already more or less there? It is not perfect, to be sure, but LLMs will get you pretty close if you have the documentation to validate what it produces. However, I'm not sure that removes the tedium the parent speaks of when writing tests. Testing is not widely done because it is not particularly fun having to think through the solution up front. As the parent alludes to, many developers want to noodle around with their ideas in the implementation, having no particular focus on what they want to accomplish until they are already in the thick of it.

Mind you, when you treat the implementation as the documentation, it raises the question of what you need tests for.



