I find it particularly frustrating that they force you to upgrade to Verify to solve the problem, unless you want to build out a lot of your own internal risk detection (which is what we ended up doing instead).
With the rise of AI APIs, I expect we'll see similar attack vectors for apps that integrate APIs from OpenAI or Stability. There won't be a colluding telecom, but the API output (a completed task) is relatively fungible and far more valuable in itself than an SMS API response. Something to keep in mind if you're building an AI application: https://stytch.com/blog/securing-ai-against-bot-attacks/
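To make the incentive concrete: every request your app forwards to an upstream model API costs you money and produces output an attacker can resell, so even a crude per-account budget check in front of the upstream call raises the cost of abuse. A minimal in-memory sketch (DAILY_LIMIT, allowRequest, and the Map store are all made up for illustration; a real system would persist counts and combine them with stronger signals):

    // Illustrative per-account request budget in front of an expensive upstream AI API call.
    // In-memory only; a real deployment would persist this (e.g. Redis) and tie it to billing.
    const DAILY_LIMIT = 50; // hypothetical budget per account per day
    const usage = new Map<string, { count: number; day: string }>();

    function allowRequest(accountId: string): boolean {
      const today = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
      const entry = usage.get(accountId);
      if (!entry || entry.day !== today) {
        usage.set(accountId, { count: 1, day: today }); // first request of the day
        return true;
      }
      if (entry.count >= DAILY_LIMIT) return false; // refuse before paying for the upstream call
      entry.count += 1;
      return true;
    }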
Also, a tip for anyone who feels like the low-hanging-fruit prevention methods aren't working (e.g. CAPTCHA, rate limits, etc.):
Consider installing a device fingerprinting system -- this has been the single most effective solution we've seen our customers integrate for more sophisticated bot problems: https://stytch.com/docs/fraud#device-fingerprinting. I'd recommend against the off-the-shelf solutions (e.g. open source ones) because many of them are easily reverse engineered, so they work well for low-level threats but not for persistent ones. Besides our solution, Arkose and Fingerprint Pro are a couple of others I'm aware of.
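To give a sense of what the client side of this looks like, here's a deliberately naive TypeScript sketch built only on standard browser APIs (navigator, screen, crypto.subtle); naiveFingerprint is a made-up name, not any vendor's SDK:

    // Naive browser fingerprint: hash a handful of device signals into a stable ID.
    // Real products collect far more signals, score them server-side, and
    // obfuscate/rotate the collection code.
    async function naiveFingerprint(): Promise<string> {
      const signals = [
        navigator.userAgent,
        navigator.language,
        navigator.hardwareConcurrency,
        `${screen.width}x${screen.height}`,
        screen.colorDepth,
        new Date().getTimezoneOffset(),
      ].join("|");

      // SHA-256 the concatenated signals so only a digest is sent to the server.
      const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(signals));
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

Something this simple is trivially spoofed, which is the reverse-engineering gap mentioned above -- the commercial products differentiate largely by making the collection logic hard to replicate.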