Russ here, CTO and cofounder of Rainforest QA (https://www.rainforestqa.com). Way back in 2012, my cofounder Fred (fredsters_s) and I got into YC with one idea in mind, but soon pivoted once we saw a pattern among most of the other companies in our cohort.
These startups were trying to push code through CI/CD pipelines as frequently as possible, but were stymied by quality assurance. Labor-intensive QA (specifically, smoke, regression, and other UI tests) tended to be the bottleneck preventing CI/CD from delivering on its promise of speed and efficiency. That left a frustrating dilemma for these teams: slow down the release to do QA, or move faster at the expense of product quality. Given that we were sure CI/CD would be the future of software development, we decided to dedicate our startup to solving this challenge.
Inspired at the time by Mechanical Turk, the question for us was: could we organize and train crowdsourced testers to do manual UI testing quickly, affordably, and accurately enough for CI/CD?
In the following years, we optimized crowd testing to be as fast as it could possibly be, including parallelization of work and 24/7, on-demand availability. (Our human-powered test suites complete in under 17 minutes, on average!) But the fact is, for many rote tasks (like regression tests), humans will never be as fast or as affordable as computers.
The logical conclusion is that teams should simply automate as much UI testing as possible. But we found that UI test automation is out of reach for many startups—it’s expensive to hire an engineer with the skills to create and maintain automated tests in a popular framework like Selenium. Worse, those tests tend to be brittle, further inflating maintenance costs.
With the rise of no-code, we saw an opportunity to make automated UI testing truly accessible to all companies and product contributors. So two years ago, we made a big decision to pivot the company and got to work building a no-code test automation framework from scratch. We’re excited to have launched our new platform this summer.
On our platform, anyone on your team can write, maintain, and run automated UI tests using a WYSIWYG test editor. Unlike other “no-code” test solutions, which still require coding for test maintenance, our proprietary automation framework isn’t a front end for Selenium. And unlike most test automation frameworks, which test the DOM, our automation service interacts with and evaluates the UI of your app or website via machine vision, to give you the confidence that you’re testing exactly what your users and customers will experience. Minor, behind-the-scenes code changes that don’t affect the UI often break Selenium tests (i.e., create false positives), but not Rainforest tests.
Our automated tests return detailed results in under four minutes on average, providing regression steps, video recordings, and HTTP logs of every test. You don’t have to set up or pay extra for testing infrastructure, because it’s all included in the plans on our platform. Tests run on virtual machines in our cloud, including 40+ combinations of platforms and browsers. We build everything with CI/CD pipelines in mind, so most of our customers kick off tests using our API, CLI, or CircleCI.
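To make the CI/CD hook-in concrete, here’s a rough Python sketch of what triggering a run from a pipeline step could look like. The endpoint path, auth header name, and payload fields below are illustrative assumptions for the sake of the example, not exact API documentation:

    # Rough sketch only: the endpoint, header name, and payload fields are
    # assumptions for illustration, not the exact Rainforest API contract.
    import os
    import requests

    API_TOKEN = os.environ["RAINFOREST_API_TOKEN"]  # hypothetical env var name

    def kick_off_run(tags):
        """Ask Rainforest to start an automated run for tests matching `tags`."""
        response = requests.post(
            "https://app.rainforestqa.com/api/1/runs",  # assumed endpoint
            headers={"CLIENT_TOKEN": API_TOKEN},        # assumed auth header
            json={"tags": tags},
        )
        response.raise_for_status()  # fail the pipeline step if the request fails
        return response.json()

    if __name__ == "__main__":
        run = kick_off_run(["smoke"])
        print("Started Rainforest run", run.get("id"))

In practice, most teams wire something like this (or the equivalent CLI call) into a post-deploy step and gate the rest of the pipeline on the run’s result.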
Of course, not all tests can or should be automated—e.g., when a feature’s UI is changing frequently or when you need subjective feedback like, “Is this image clear?” Today’s computers are nowhere near able to replace the ingenuity and judgment of people; that’s why our crowd testing community isn’t going anywhere. But we can now say that Rainforest is the only QA platform that provides on-demand access to both no-code automated testing and manual testing by QA specialists.
We offer a free plan with five hours of test automation every month, because we don’t think cost should make test automation inaccessible, either.
I’m looking forward to your questions and feedback!
Ran into a blog post by Rainforest's CEO [1] that really resonated with me on why the status quo in testing is broken (it's all about misalignment of incentives).
My team did a bake-off and Rainforest was the clear winner. What Rainforest has built is a big technical achievement, and they make it look so easy! We're just getting started with them, but we have high hopes that we can better balance velocity and quality.
[1] https://www.rainforestqa.com/blog/accessible-quality