
> I have yet to see a good frontend test

You haven't written one yourself? Is your theory that since you can't write one, they aren't possible?

All my tests are good enough to ensure that if I upgrade a library or change underlying functionality, something breaks if the output is not expected. That is my primary concern and that is what I test for.




No, that's not what I meant. Tests are great on the backend, but I've never seen them prevent a bug on the frontend, especially in React, where it's usually obvious what effect your changes will have. In fact, many frontend tests don't even truly test the UI.



> disables input when disabled is passed

That's exactly the kind of test double I'm talking about.


And? There was an actual bug filed around that and the issue was resolved with a test that ensures it doesn't happen again. That is all I care about. Make sure we don't make the same mistake twice and that over time things continue to work with new versions.


Can you elaborate on what’s problematic with this test? Testing that the component does what’s expected based on input tells me the component is working as expected.


The developer who forgets to make disabled work will also forget to write this kind of test for it. Once the behaviour is added, it'll never be randomly removed again, so the test isn't needed then. It looks nice but never helped find a bug.


Part of the complexity of integrating a form library with a ux library is passing all of the correct properties around between the two. In this case, I wasn't doing that correctly and it resulted in a bug where disabled was not being set correctly. Someone filed a bug. The bug was fixed and a test was written to ensure that this doesn't happen again in the future.

You can read the history here: https://github.com/lookfirst/mui-rff/issues/455
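The failure mode in that issue can be sketched without rendering anything: an adapter forwards props from the form library's field state to the UI library's input, and one prop gets dropped along the way. The names below (`FieldProps`, `mapToMuiProps`) are illustrative, not the actual mui-rff code:

```typescript
// Hypothetical adapter between a form library's field props and a UI
// library's input props; a simplification, not the real mui-rff code.
interface FieldProps {
  value: string;
  disabled?: boolean;
}

interface MuiInputProps {
  value: string;
  disabled: boolean;
}

// The shape of the original bug: forgetting to forward `disabled`.
// function mapToMuiProps({ value }: FieldProps): MuiInputProps {
//   return { value, disabled: false };
// }

// The fix, pinned in place by a regression test in the spirit of
// "disables input when disabled is passed".
function mapToMuiProps({ value, disabled = false }: FieldProps): MuiInputProps {
  return { value, disabled };
}

if (mapToMuiProps({ value: "a", disabled: true }).disabled !== true) {
  throw new Error("disabled was not forwarded");
}
```

The point of the test is exactly the delta case: a future refactor of the adapter that silently stops forwarding `disabled` fails immediately instead of waiting for a user to file another issue.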

If you're working with people who randomly 'forget' things while they are doing development, then I guarantee that you're working with people who also write buggy code. In fact, in this case, it was ME who wrote that buggy code. I own it. Not all code gets 100% coverage and sometimes things get missed. That is ok. What is not ok is skipping tests because you might forget something and therefore think it isn't worth writing tests at all.

What I consider buggy code is developers writing the same code at least 2x instead of 1x. If you or your company is paying someone $X a year to write code once and they are actually writing it more than once, then I would highly suggest you look for new people to work with, because that is a terrible return on investment.

If your developers are writing tests along with their code, then the code is far more likely to be correct, better thought out, and less buggy than code that was just hand-tested as they developed it. Speaking of that 2x example, I'd rather pay someone 2x the amount of time to write code with tests than the other way around.


> if you're working with people who randomly 'forget' things while they are doing development, then I guarantee that you're working with people who also write buggy code. In fact, in this case, it was ME who wrote that buggy code. I own it.

We all write bugs; we're human, and it's hard to think of everything all of the time.

My point was, when we make a thinking error that causes us to write a bug, that same thinking error means we also don't write that test that could find it. If you had thought of writing a test for the disabled thing when you wrote the code, you also would have written it correctly because you would have had that case in mind then.

Now you have a test, but only because someone reported the bug and you fixed it. At this point it's almost certainly not useful to have anymore.

I believe in automated tests, but mostly for tricky logic.


If you practice test-driven development, the mindset is slightly different.

Humans can't think of everything all the time - TDD guides you to think about one thing at a time.

No one is sitting there adding a disabled prop on their component that then disables an input for no reason. You had intent when writing it. It's possible that a co-worker distracted you, or you went for a break, or you went down another rabbit hole, and ended up skipping disabling the input based on the prop.

If you're practicing TDD, you would have written a failing test before creating the disabled prop. That test will continue to fail until the production code has been written. It helps to protect against those thinking errors in the first place.
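A micro-version of that red/green loop, with invented names rather than anything from the project under discussion:

```typescript
// Illustrative TDD sequence for a `disabled` prop; names are made up.

type InputModel = { disabled: boolean };

// Step 1 (red): the test exists before the feature. Against a stub
// that ignores the prop, this check would return false.
function testDisabledProp(
  render: (props: { disabled?: boolean }) => InputModel
): boolean {
  return (
    render({ disabled: true }).disabled === true &&
    render({}).disabled === false
  );
}

// Step 2 (green): the minimal production code that makes it pass.
function renderInput(props: { disabled?: boolean }): InputModel {
  return { disabled: props.disabled ?? false };
}

if (!testDisabledProp(renderInput)) {
  throw new Error("disabled prop not handled");
}
```

Because the test is written first, "skipping disabling the input based on the prop" leaves a visibly red test rather than a silent gap.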


I am all about TDD, but in this case, I'm integrating two third party libraries that have many many features.

There is almost no way to even know all the features they have, especially as they change over time. Imagine that MUI adds disabled in a new release and my project doesn't forward it after an upgrade. TDD wouldn't have caught that.


An example where it’s helped me personally is when migrating from Angular Material to Ant Design. The test cases gave me confidence the expected behaviour was preserved.


Very good example. Tests are all about 'delta'. Changes over time. They bring confidence over delta.


I know how to write tests for _logic_ - that usually lives in TypeScript functions (e.g., React hooks).

But for components, that call some of those hooks and then render some JSX, I've never been able to write a test that found a bug later on. Every single time one of them failed, either the change made was so large that the whole build failed to compile anyway, or it was because there was new desired behaviour and the test needed changing, not the code.
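The logic-level tests being contrasted here are straightforward precisely because they target pure functions extracted from hook or component code. A minimal sketch, with an invented helper rather than anything from a real codebase:

```typescript
// A pure function of the kind often extracted from hook logic
// (hypothetical example); testable with no rendering or DOM at all.
function nextPage(current: number, totalPages: number): number {
  // Advance one page, clamped to the last valid index.
  return Math.min(current + 1, totalPages - 1);
}

if (nextPage(0, 5) !== 1) {
  throw new Error("should advance from page 0 to page 1");
}
if (nextPage(4, 5) !== 4) {
  throw new Error("should clamp at the last page");
}
```

The clamping branch is exactly the kind of edge case where such a test can catch a real regression, which is the asymmetry the comment is describing versus tests over rendered JSX.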

If a type of test doesn't help find bugs, it's a waste of space.


I've given an example of a whole repository, that is now 4 years old, full of the types of tests you're talking about. It isn't that hard.

If the desired behavior is changing, then the test should fail and yes, you need to rewrite it. That's how testing works.


It's not hard, but they just make the build of our monorepo take longer and longer without finding actual bugs, in my experience. I feel most unit tests should be commented out once the thing under test is done.


You are arguing against tests. It is not a winnable argument.


No, people are arguing against frontend tests because they're orthogonal to test-driven development.


I really don't understand this comment at all.



