Hacker News
Exploratory Testing (martinfowler.com)
67 points by PretzelFisch on Nov 21, 2019 | 7 comments



In my experience, most new bugs get found through exploratory testing. Automated testing is more for confirming that the system still works than for finding tricky bugs.

It works especially well if the tester has a lot of experience as a developer or tester. When I test a piece of software built on a stack I know, I can usually guess the difficult parts where the devs may have made mistakes. And of course you need real users. They do things nobody ever thought about during development.


Fully agree on understanding users.

I usually strive to develop a strong working relationship with users. Working with them 1:1 is even better. I like to set clear expectations upfront about what I’m capable of helping with to manage my time well.

When I take the time to observe users working - understanding their concerns, figuring out their pain points, gathering feedback - it gives customers a sense of ownership, and they will probably stay customers for a while.

I got into software development after a long stint at Apple Retail during college; it was a good place to weather the storm of the 2008 recession.

The experience I got working as a Mac Genius at Apple has benefited me more than anything else I did for my career in software engineering.

The deep empathy for users I developed from observing dozens of people use their devices for 8 hours a day, over 8 years, is the biggest asset in my skillset. I don't claim to be an expert on how users feel and think, but it's thorough enough to be a positive influence on my work.

If I feel a user is "doing it wrong," a simple practice for staying empathetic is to ask myself, "how can we do better?"


Exploratory testing is also useful for uncovering classes of scenario which an existing automated test suite is not currently capable of covering but which nonetheless harbor a lot of bugs.

One pretty common case is browsers: an automated test suite might run an exhaustive list of scenarios in Chrome successfully while bugs lurk in Edge and Firefox that were assumed not to be there.


> Automated testing is more for confirming that the system still works but not for finding tricky bugs.

Fuzzing and stress testing fall under automated testing, and both can discover new, tricky bugs in cases devs wouldn't have thought of. They definitely won't find everything, but they're still useful.
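A minimal sketch of the idea: throw random inputs at a function and flag any failure that isn't its documented failure mode. The function `parse_age` is a made-up example (not from the thread), with a planted bug that a fuzzer finds but a scripted happy-path test would miss.

```python
import random
import string

def parse_age(text):
    """Toy parser under test (hypothetical): read an age from a string.
    Lurking bug: empty or whitespace-only input crashes with an
    IndexError instead of the documented ValueError."""
    digits = text.strip()
    if digits[0] == "+":          # IndexError when digits == ""
        digits = digits[1:]
    return int(digits)            # raises ValueError for junk, as documented

def fuzz(fn, trials=1000, seed=0):
    """Feed short random strings to fn; collect crashes that are NOT
    the documented failure mode (ValueError)."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        s = "".join(rng.choice(string.printable)
                    for _ in range(rng.randint(0, 10)))
        try:
            fn(s)
        except ValueError:
            pass                  # expected, documented failure
        except Exception as exc:  # anything else is a genuine bug
            crashes.append((s, type(exc).__name__))
    return crashes

bugs = fuzz(parse_age)
```

After the run, `bugs` contains the inputs that triggered the IndexError, which is exactly the kind of edge case a dev "wouldn't have thought of."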


Exactly. If you have a flow with some complex functionality and some known edge cases, an automated process is great for testing that it's not broken by a new release. But a feature is never really tested until a human has put eyes on it, especially if that human has been intimately involved in the planning, design, and execution the whole way through.


The distinction here is between exploratory and scripted testing. Scripted in this sense probably does not mean automated. Think of an actor following a script, reading the lines. That's the meaning.


This reminded me of a blog post by Michael Feathers about a similar idea he calls Characterization Testing; it also includes some code examples. The difference is that instead of trying to find the gaps in your scripted testing, it's about creating tests in order to understand something that isn't tested or has an unclear specification. The intentions are slightly different, but I'd think the process is the same.

https://michaelfeathers.silvrback.com/characterization-testi...
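A rough sketch of the characterization loop (this example is mine, not from Feathers' post): probe the untested legacy code, record what it actually returns, then lock that observed behavior in as assertions, surprises included.

```python
def legacy_format_price(cents):
    # Hypothetical legacy code with an unclear spec.
    dollars = cents // 100
    remainder = cents % 100
    return f"${dollars}.{remainder}"

def test_characterize_format_price():
    # Assertions record observed behavior, not intended behavior.
    assert legacy_format_price(1234) == "$12.34"
    # Surprise the test documents: cents are not zero-padded.
    assert legacy_format_price(1204) == "$12.4"   # not "$12.04"
    assert legacy_format_price(5) == "$0.5"

test_characterize_format_price()
```

Whether the missing zero-padding is a bug or a requirement is a question for later; the characterization test's job is just to pin down what the code does today so refactoring can proceed safely.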



