Launch HN: Rainforest QA (YC S12) – No-Code UI Test Automation
149 points by ukd1 on Oct 21, 2021 | 88 comments
Russ here, CTO and cofounder of Rainforest QA (https://www.rainforestqa.com). Way back in 2012, my cofounder Fred (fredsters_s) and I got into YC with one idea in mind, but soon pivoted once we saw a pattern among most of the other companies in our cohort.

These startups were trying to push code through CI/CD pipelines as frequently as possible, but were stymied by quality assurance. Labor-intensive QA (specifically, smoke, regression, and other UI tests) tended to be the bottleneck preventing CI/CD from delivering on its promise of speed and efficiency. That left a frustrating dilemma for these teams: slow down the release to do QA, or move faster at the expense of product quality. Given that we were sure CI/CD would be the future of software development, we decided to dedicate our startup to solving this challenge.

For us, inspired at the time by Mechanical Turk, the question was: could we organize and train crowdsourced testers to do manual UI testing quickly, affordably, and accurately enough for CI/CD?

In the following years, we optimized crowd testing to be as fast as it could possibly be, including parallelization of work and 24/7, on-demand availability. (Our human-powered test suites complete in under 17 minutes, on average!) But, the fact is, for many rote tasks (like regression tests), humans will never be as fast or as affordable as the processing power of computers.

The logical conclusion is that teams should simply automate as much UI testing as possible. But we found that UI test automation is out of reach for many startups—it’s expensive to hire an engineer who has the skills to create and maintain such automated tests in one of the popular frameworks like Selenium. Worse, those tests tend to be brittle, further inflating maintenance costs.

With the rise of no-code, we saw an opportunity to make automated UI testing truly accessible to all companies and product contributors. So two years ago, we made a big decision to pivot the company and got to work building a no-code test automation framework from scratch. We’re excited to have launched our new platform this summer.

On our platform, anyone on your team can write, maintain, and run automated UI tests using a WYSIWYG test editor. Unlike other “no-code” test solutions, which still require coding for test maintenance, our proprietary automation framework isn’t a front-end for Selenium. And unlike most test automation frameworks, which test the DOM, our automation service interacts with and evaluates the UI of your app or website via machine vision, to give you the confidence that you’re testing exactly what your users and customers will experience. Minor, behind-the-scenes code changes that don’t affect the UI often break Selenium tests (i.e., create false positives), but not Rainforest tests.

Our automated tests return detailed results in under four minutes on average, providing regression steps, video recordings, and HTTP logs of every test. You don’t have to set up or pay extra for testing infrastructure, because it’s all included in the plans on our platform. Tests run on virtual machines in our cloud, including 40+ combinations of platforms and browsers. We build everything with CI/CD pipelines in mind, so most of our customers kick off tests using our API, CLI, or CircleCI.
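
As a rough sketch, kicking off a run from a CI job can be a single authenticated API call. (The endpoint, header, and run-group ID below are hypothetical placeholders, for illustration only; check the API docs for the real parameters.)

    import os
    import requests

    # Hypothetical endpoint and fields, for illustration only.
    API_URL = "https://app.rainforestqa.com/api/1/runs"

    resp = requests.post(
        API_URL,
        headers={"CLIENT_TOKEN": os.environ["RAINFOREST_API_TOKEN"]},
        json={"run_group_id": 123},  # which suite of tests to run
        timeout=30,
    )
    resp.raise_for_status()
    print("Started run:", resp.json()["id"])

A CI step like this can then poll for the result and fail the build if the run fails.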

Of course, not all tests can or should be automated; e.g., when a feature’s UI is changing frequently or when you need subjective feedback like, “Is this image clear?”. Today's computers are nowhere near able to replace the ingenuity and judgement of people; that’s why our crowd testing community isn’t going anywhere. But we can now say that Rainforest is the only QA platform that provides on-demand access to both no-code automated testing and manual testing by QA specialists.

We offer a free plan that provides five free hours of test automation every month, because we don’t think cost should make test automation inaccessible, either.

I’m looking forward to your questions and feedback!




Like many startups, we've struggled with velocity vs. quality while doing the standard unit tests for each PR and full regression tests for bigger changes. But it's never really worked well.

Ran into a blog post by Rainforest's CEO [1] that really resonated with me on why the status quo in testing is broken (it's all about misalignment of incentives).

My team did a bake off and Rainforest was the clear winner. What Rainforest has built is a big technical achievement and they make it look so easy! We're just getting started with them but we have high hopes that we can better balance velocity and quality.

[1] https://www.rainforestqa.com/blog/accessible-quality


Thanks, and glad we won that bakeoff! Shout if we can help y'all out.


Not a question, but I just want to thank you for "inventing" review apps back in the day, the idea of having a full version of your app running for a branch was pretty game changing for the companies I worked at.


Awesome! IMHO it was more of a refinement of the concept of deploying each commit; doing it at a PR and push level worked better. Open-sourcing it, and getting Heroku to bake it in was the icing on the cake for us!

Context for everyone else: in early 2014 we open-sourced https://github.com/rainforestapp/fourchette, which was a pioneer of Heroku Review Apps (https://devcenter.heroku.com/articles/github-integration-rev...) and, more generally, of the concept of doing this per pull request rather than just per branch or commit.


Great to see this shift!

I’d be curious to hear about the comparison to https://reflect.run — I can imagine that access to the tester community is a piece of that…


The obvious major difference is the ability to use the crowd as well as automate, which I believe is unique to Rainforest.

Outwardly, the way they automate is very similar to us. Looking a little deeper, it seems like they _do_ use the DOM pretty heavily (from re-watching the video at https://reflect.run). For us, this is a fundamental difference - we do not believe in this; we want to automate testing like humans. It's harder, but we believe replicating how a human would detect things working or not (visually, via KVM) ends up with less brittle, easier-to-maintain tests that are closer to the reality of how a human would interact with your app.

Also, we test using VMs (or physical devices if needed for mobile), allowing us to test the browser, or any other kind of software. This lets us support a large combination of OS and browser variants out of the box, or custom images for enterprise. Reflect didn't seem to support more than Chrome when I last looked.


Digging a little more into their pricing: our free plan seems equivalent to their $99/mo plan (yet with better data retention, no user limits, and email testing included).


The other major difference is the design principles: our core belief is that everyone owns quality, so we build for the 'no code' user as well. It's a really hard bar to hit, but I think we've done a good job so far - to use Reflect you need to be far more technical. 1/3 of our daily users are PMs.


Hey there - I'm one of the co-founders of Reflect so just to give my perspective:

The workflow for creating tests in Reflect is pretty similar to Rainforest: we both expose a "cloud browser" that loads up your webapp and you interact with that to create your tests. The biggest difference workflow-wise is that Reflect records all your actions automatically, whereas with Rainforest you often need to both specify what step you're going to take, and then actually perform that action in the browser itself. Recording everything automatically is technically harder to pull off since it's forced us to ensure we accurately record every step you take, but we think it makes for a better workflow since you can create tests faster, and there's less chance of inaccuracies that cause tests to not be repeatable.

I would quibble with the statement that you need to be far more technical to use Reflect - we're a no-code product after all. :) We have plenty of folks who aren't developers using our product. But the good thing is that both products have free tiers, so users can always give us both a try for free and decide for themselves.

Edit: Also their statement about Reflect running in headless mode is incorrect. Our test grid is a cluster of VMs: we spin up a Docker container for each test run, and each Docker container is running the test steps using a normal non-headless browser.


Oh good catch Todd, sorry about that - updated.

And totally agreed, both products take a slightly different approach and have different strengths and weaknesses, try them both and see which is a better fit!


What's the reason Rainforest would need to declare "Left click" before doing it, rather than just having the user do it?


It was a design choice; for some actions it's possible - click being a great example - but for others it's not, e.g. hover: are you thinking, or actually hovering? Or waiting: are you thinking, or trying to add a wait to the test?

So, we went the route of having the same way of doing things for everything. We may change this in the future, but at the moment it's consistent and easy to learn. You add an action, then run it - no need to do it twice, or learn multiple ways of doing things.


Chiming in as a happy Rainforest user. This tool has been a huge benefit to our startup. It is a great way to maintain ongoing QA at a minimal cost in hours while iterating your product, and the built-in ability to get your test fixed by Rainforest for a few bucks, or to have certain tests done by hand at an hourly rate, is extremely useful and makes it simple to quantify the cost of outsourcing to Rainforest vs. doing something in-house.

This tool makes the benefits of a well-built automated testing setup much more accessible and less costly.

We had previously been using traditional automated QA tools like Selenium when someone suggested Rainforest, and I am very happy we made the switch. Nothing but praise and well wishes for this team - you are en route to massive success.


Thanks so much - glad y'all love it!


This is amazing. Raising a Series B with an enterprise sales model, then releasing a self service, bottoms up product is like the hardest possible shift. Hopefully it's exhilarating.


It is definitely exhilarating, and a lot of work - but we're proud of what the team has done and where we are at today!


What are some of the most important lessons you learned in building out Rainforest as a self-serve freemium product after building Rainforest as a more enterprise-focused product to start?


Good question; the enterprise-focused product relied on much more hands-on onboarding and day-to-day support - sometimes even at the level of professional services - to use the product. Moving to self-serve exposed all of the hard edges of Rainforest, especially around onboarding, and later around general use. This forced us to up our game significantly on product design, which wasn't a strong focus before. The lesson: it's not an easy shift, and it takes time even if you have a product that works for enterprise.


I wonder if this fits the bill of “do things that don’t scale” — your experience is similar to a theme revealed in my work.


Ok, I'll bite. How do you integrate with the tested software?

Do you a) run the tested software inside your VMs (if so, what's the integration API?) or b) expect your clients to run it (if so, how can the client authenticate your test access?)


Mostly b, but:

a) we can; if so, generally they install it as part of the test (e.g. a client testing a Chrome extension), or have us build a custom VM for them (e.g. clients with a 20GB download)

b) this is the common path: folks push something, CI builds it and ships it to a QA env, they run us, and if it passes, they push to prod.

For b, auth is handled anywhere from zero auth (a fully open QA env - though usually it's SaaS, so you still have to log in to their app), through HTTP auth and IP allowlisting (https://help.rainforestqa.com/docs/which-ip-addresses-do-rai...), to VPN directly into their QA infra. Without pulling numbers, I'd guess 95% go the zero-auth route.


Congrats Rainforest! As a former early employee, and occasional customer, I'm glad you guys are still going. Turk UI testing is a brilliant idea, and automating that with machine vision is a great next step.


Thanks Paul - good to hear from you; I hope all is well!


> slow down or pause the release process to do QA, or move faster at the expense of product quality

This is a constant dilemma for solopreneurs/very small teams. I was just thinking last week: can I find an affordable automated testing platform? (I need to run tests on new features I've added to my latest project, an Electron app.)

Follow-up question: does this work for Electron apps? Either way, I'm still happy to try it for other web app projects if I do another one.


Yes, it works for Electron apps. You'll need to host the binary somewhere, then install it. You can bundle those actions into one test and then reuse it as a building block for your actual tests.


Got it. Thanks


We use Rainforest at dashworks.ai and it's been instrumental in improving the quality of our product. Super intuitive product and great team!


Glad to have you using us, and great to hear of the impact!


What is the benefit of "no-code"?

The benefit of using code is that you have to know what you are doing. Having experienced what happens when people build data-driven systems out of building blocks without a thorough understanding of what they are doing (brittleness, failing in strange ways under load, general unreliability and low quality), I am suspicious.


At least for testing with traditional automation (aka code), the bar is knowing what you're doing AND knowing the product well enough to be able to test it effectively.

We remove the code requirement, making testing accessible to more folks - e.g. product managers and product designers who have great knowledge of the product but don't want to or can't code. This doesn't tend to exclude developers, either: currently our user base is roughly 1/3 engineers, 1/3 product managers/designers, and 1/3 QA folks.


Completely agree that testing should be more accessible to product managers and designers, and love the concept. Consider the simple example of a web form with multiple `input` elements - if I use your product to click on each input and configure a QA procedure, how does the system unambiguously identify each input given that the page layout may change in the future?

The current code-based testing frameworks force me to add an unambiguous marker to the `input` element, like an attribute or ID, which also makes it easy to query from the DOM during the QA process. How does this QA product handle breaking changes to the UI, and how robust could you expect it to be to code changes?
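
For concreteness, the marker-based approach described above looks something like this in Selenium (placeholder URL and id; the point is that the test is coupled to a DOM attribute rather than to what's on screen):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/signup")  # placeholder URL

    # Coupled to the DOM: this breaks if the id is renamed,
    # even when the rendered UI looks identical.
    email_input = driver.find_element(By.ID, "email")
    email_input.send_keys("user@example.com")

    driver.quit()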


It identifies things visually, and optionally with OCR as well. If something moves, there's likely no issue; changing shape or size too much, or failing the OCR match, can cause failures - as expected. If it's an expected update and the test needs changing, we smartly suggest updates to the target if we can find one. Alternatively, we have a crowd-based service to help write and maintain your tests; it's usually used by teams needing high leverage when managing a lot of tests.


Agreed - no-code seems to optimize for a large number of people being somewhat effective, but with less control over the output (increasing fragility and rework).

Code is more upfront effort, but gives more control and thus less fragility, and is more maintainable over time (if done right).

I imagine there are some/many situations where throwing many people at a problem is the “best” way, and this would suit that quite well, I guess.


In my experience, many orgs that work with Selenium and its derivatives have described that (coded) approach as flaky / brittle (i.e., fragile).

Of course, until automation gets to be as clever as humans, any test automation approach is going to have some flavor of brittleness.


Playwright is excellent too. It's much more forgiving with tests that need to hit different origins (common with enterprise apps) and multiple browsers in the same test (to verify collaborative editing, etc.). If you're considering Cypress, I'd highly recommend also giving Playwright a look (https://playwright.dev).


Agreed; also, even with human testing, the nuance can be a double-edged sword. Testing, and specifically QA, is hard - today it's mostly about the "assurance" part: does one feel assured enough to ship this? Which is subjective.


You should try out Cypress - it's really good and it just works out of the box.


This is a brilliant idea and direction. Congrats on launching this.

How do you deal with things like permissions, proprietary information, etc?


Thanks!

Depending on what you mean; assuming it's the security angle:

TLDR; carefully

At a high level, most of our customers are testing in QA - not production - so usually the only proprietary information (outside of credentials to access it) we'd see is something they'd be releasing shortly anyway. That said, we take security seriously:

Our infrastructure and code are heavily tested and reviewed before shipping, as well as externally audited yearly. We're currently audited yearly for HIPAA, and from that we have strong internal controls, processes, documentation, and guidelines around access control and how things are done. Everything is encrypted at rest and in transit (DB, logs, images, etc.). All testing is done through our infra, recorded (video, KVM) and logged (HTTP, HTTPS, DNS, etc.). Obviously we never re-use a VM; they're destroyed after use.

On the crowd side, testers use the same machines as automation does (i.e., with all the same logging levels as above). Additionally, each individual is KYC'd and signs an NDA with us before they can work. Enterprise customers, or folks needing BAAs, have a sub-crowd with extra levels of KYC and other requirements.

We're currently in the early stages of formal SOC 2 compliance, but it's not complete. More details here: https://go.rainforestqa.com/rs/601-CFF-493/images/Rainforest...


This tool has the potential to change my product-building workflow in a revolutionary way.

Congrats - you not only solve a big problem but also introduce a real advancement in UX with the no-code option. The added value from real human QA is a wonderful bonus.


Thanks; we'd love to see how we help you build things!


The hero GIF seems to stop exactly before what I'm curious about: running the test. The mouse literally hovers there, and I even clicked on it, thinking maybe it's an interactive thing I have to continue.


There's a video at the bottom of https://www.rainforestqa.com/how-rainforest-works that shows the product being used, which should answer that!


Yeah, it's not a huge inconvenience - I was down to scroll, just giving some landing-page feedback =)


100%, we'll take it on-board :)


I met Fred in the OG Rainforest “office” many years ago, have fond memories of him being a kind and thoughtful person to talk to. I have been happy to see your success so far and congratulate you on the launch!


This is awesome y'all. Can't believe this didn't exist before


Thanks, Tim - as the first no-code platform for this kind of thing, we're surprised too!


Looks awesome! How hard was it to get the testing working from visuals alone? Would love to learn how you're tackling the problem technically.


It was pretty hard, to be honest, but it helped that we already had our own virtual machine infrastructure that we use for the crowd side. Without that, it would have been a much steeper hill to climb.

Re the problem itself: at a basic level, it's balancing visual matching (what does and doesn't matter in an image; what's the match in the VM; is it acceptable, and how/why?), OCR (what text is there, what matters, etc.), and timing and interaction issues and complexity - and for us it has to work on basically any platform we can run in KVM.
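
Not our actual implementation, but a toy sketch of the two primitives involved - template matching to locate a target on screen, and OCR to confirm what's there (using OpenCV and Tesseract; the thresholds and filenames are made up):

    import cv2
    import pytesseract

    # Toy illustration only - not Rainforest's actual implementation.
    screen = cv2.imread("screenshot.png")     # current VM frame
    target = cv2.imread("submit_button.png")  # target captured at test-creation time

    # Visual matching: where does the target best match the screen?
    scores = cv2.matchTemplate(screen, target, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, (x, y) = cv2.minMaxLoc(scores)

    if best_score < 0.8:  # what counts as an "acceptable" match is the hard part
        raise AssertionError(f"Target not found (best match {best_score:.2f})")

    # OCR: does the matched region actually say what we expect?
    h, w = target.shape[:2]
    text = pytesseract.image_to_string(screen[y:y + h, x:x + w]).strip()
    assert "Submit" in text, f"Expected 'Submit', OCR read {text!r}"

    # A real system also has to handle timing: retry until the UI
    # settles or a timeout expires, then interact via the VM's KVM.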


Are you testing the Rainforest UI with Rainforest? If not, when do you think it will be possible (for the majority of your views, let's say)?


Yes, we are dogfooding pretty hard! We release a bunch of times a day and every release goes through a suite of >130 Rainforest tests for regression. We also add tests for every new feature we release, so it keeps growing.

It was a cool thing to write tests for testing the test-writing interface :) That's one "level deep" - out of curiosity, I was playing with how deep it's possible to go, and it turned out that at 7 levels things get too small to really make sense (terminal within a terminal within a terminal, etc.), but there's no inherent limit.

And we can do this because we work at the pixel level - even though the terminal is a canvas element and it'd be hard/impossible to work with using the DOM, that doesn't really matter for KVM.


Do you test the new-user path? I tried to create a free account with email and didn't get an email. Tried to register with GitHub, authorized Rainforest, and still can't log in. Maybe someone in my organization already tried the free account or something, but as a user I get no info at all. Just an unusable login page that looks like it needs some CI testing ;)


For sure, that's weird - email support@rainforestqa.com and we'll sort you out.


Adding to this, we've been doing this since at least 2013 - https://www.rainforestqa.com/blog/2013-11-04-extreme-dogfood...


Does the UI output some kind of configuration file that can be checked into source control? If not, how do I maintain a history of test changes?


We support that for the human-language tests, but not yet for the automation. A few customers have done it themselves by exporting the JSON (it's a defined, versioned schema), but we've not yet productized that.
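
In the meantime, the DIY version is a small script that pulls each test's JSON and commits it alongside your code. (The endpoints and field names below are hypothetical, for illustration; since the schema is versioned, diffs stay meaningful.)

    import json
    import os
    import pathlib
    import requests

    # Hypothetical endpoints/fields, for illustration only.
    BASE = "https://app.rainforestqa.com/api/1"
    HEADERS = {"CLIENT_TOKEN": os.environ["RAINFOREST_API_TOKEN"]}

    out = pathlib.Path("rainforest_tests")
    out.mkdir(exist_ok=True)

    for t in requests.get(f"{BASE}/tests", headers=HEADERS, timeout=30).json():
        detail = requests.get(f"{BASE}/tests/{t['id']}", headers=HEADERS, timeout=30).json()
        (out / f"{t['id']}.json").write_text(json.dumps(detail, indent=2, sort_keys=True))

    # Commit rainforest_tests/ to source control for a reviewable history.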


Is there a solution for paranoid companies that don't want your VMs to have access to the application(s) under test?


This objection hasn't really been raised more than a couple of times that I can recall. So far, folks have been satisfied using our automation or crowd in the cloud, given our security, external auditing, and contracting.

For the crowd side (which uses the VMs), you can bring your own humans to make a custom crowd, which we'll manage for you. It's mainly used by larger enterprise companies with existing, bigger teams.


It’s definitely possible to get done; I work at a place where we deployed a testing product with a similar architecture, and it took about a year to get approved.

Some ability to host it ourselves or build a dedicated instance would be easier for us, but if your experience shows the market doesn't care, it's probably not worth the time.


I remember working with Rainforest back in 2016 when I was at Zentail - glad to see you guys are still alive and doing well!


Thanks! We’ve definitely improved a lot since then, I’d love to hear what you think today!


This seems like Sikuli back in the day, or other OCR/CV-based testing tools. It will be interesting to see your take on it!

How do you handle branching or error-handling logic that is ultimately code-like? And how would a customer version-control and audit the version of a test that was run at a given point in time?


Good to see y'all again after so long. As a continued fan, I love to see the refinement over time.


Thanks!


Do you plan to expand into no-code web scraping too? The frontend tech is the same for both.


Hey Melony! We[0] do no-code website monitoring and data extraction, and directly integrate it with other no-code tools.

Maybe that’s what you’re looking for?

[0]: https://monitoro.co


No plans to - our focus is on helping folks improve their product quality. I guess you could probably use it for that if you wished, but it's really not optimized for extracting data from pages.


When I click the help centre link (https://help.rainforestqa.com/en/) I get a 404 :D


Oh thanks for letting us know! Where did you find this link? It should be https://help.rainforestqa.com/


I did find it, but now I can't re-find it! Never mind.


It's a bit unclear, but can this be used to test native desktop applications as well?


yes :)


Awesome! I see mentions of it on the website, but I don't see how to do it via the dashboard UI, and there's not much in the docs about it. I wrote to support - would be nice to try this out.


Does this work with desktop apps?


Yes; you can download and install anything, then test it. Ideally you set up the install process as one test, then embed it in other tests for better maintainability. For very large apps, or ones that take a long time to install, we can pre-install things on customized VMs for enterprise folks.


Does this work for SPAs?


Yes, zero issues - as we test like a human would. Rainforest looks at the screen and uses the keyboard and mouse to interact with the software under test. For an SPA, that stack is likely Windows, Chrome, and your SPA inside Chrome.


Are you hiring PMs? :)


Not currently, but we are hiring Product Designers, a Controller (finance), an Implementation Manager, and Customer Success folks.

https://jobs.lever.co/rainforest


Congrats on the launch Russ - really excited to see how this works out!


Thanks Peter! Loved having your support over these years!


Title should say S12 instead of S21, no?


Whoops - that may have originated with me (habit). Fixed now - thanks!


Genuinely thought this was one of the world's longest startup launches!


it kinda is :) - 10 years in, we're just getting started!


There was also this one just a few weeks ago - just a coincidence, btw:

Launch HN: RescueTime (YC W08) – Redesigned for wellness, balance, remote work - https://news.ycombinator.com/item?id=28683597 - Sept 2021 (141 comments)


It said S21 but was changed to S12, so you were right in your assumption.



