hey everyone, supabase ceo here. The team is extremely excited about this release.
This one has been a long time coming. It's one of our most requested features. We spent a long time getting the Developer Experience right - we investigated several approaches (Containers, Isolates, WASM) and eventually landed on Deno.
Big shout-out to the Deno team. Their open-source and DX-focused philosophy made it an obvious choice in the end. We've wrapped their open-source runtime to make the Supabase experience as integrated as possible, from local CLI development to realtime logs on the Dashboard. Edge Functions are still Experimental, so make sure you send your feedback to make the experience even better.
We'll have a few of the team in the discussion ready to answer any questions.
I have a feeling you may not be able to talk about this, and that's okay. That said, I'm asking anyway.
Did you consider working with Cloudflare? I feel like Supabase is the perfect fill for the "holes" in Cloudflare's services. They currently lack two crucial things: a persistent data store that's accessible outside their platform, and an authentication product. They otherwise have everything else and more. Seems like it would have been a very mutually beneficial relationship.
Any insights on this one by chance?
I'm loving Supabase, and the only barrier to adoption on my side is the lack of SOC 2 compliance, which I understand is coming. However, I don't want to give up best-in-class services for things like CDN, Storage, Workers, etc. I'm kind of an "all in" platform person: I don't want variance in vendors, I want one vendor with variance, if that makes sense.
That doesn't mean I wouldn't still use Supabase, of course! I love the product where I have been able to test it out, and once SOC 2 compliance lands, that will clear the final adoption hurdle for me anyway.
we're working closely with Cloudflare and really love their products.
We did consider Cloudflare Workers - ultimately it came down to the open source philosophy of the Deno runtime.
It's incredibly important to us that everything in Supabase has "escape hatches" - you can choose to host wherever you want to. I'm certain that Workers will get there eventually, but it's not there yet. We're still big fans and will be doing other neat things with them in the future (also, we'll revisit Workers if the runtime becomes open source).
> lack of SOC 2 compliance
We've chosen the auditor - I believe SOC2 Type I is around 4 months away (next Launch Week).
> It's incredibly important to us that everything in Supabase has "escape hatches" - you can choose to host wherever you want to
I'm rooting for you guys; not only is your tech stack very solid, but your commitment to letting your users self-host if they want to is laudable.
But here's something I don't understand about your offering. Taking a look at the pricing page (https://supabase.com/pricing), there is only one pricing tier above free but below the scary-looking "contact us for a quote". The idea is that you get a reasonable baseline included and pay for extra storage and bandwidth. Cool.
But it comes with a number of fixed limits that I find puzzling, like 100k monthly active users. On edge functions: they can run 1000 hours (per month?), be invoked 2 million times, and there's a limit of 100 functions in total. The customer can't pay a little more to have the limits raised a little further like they can with storage and bandwidth; they need to migrate to the enterprise plan. So I must ask: why is that?
I almost don't care, because I'm the kind of person who likes to run their own servers, but I want to support you guys.
You guys are killing it with these product updates, I feel like I'm hearing about them once every few weeks, and I'm not even actively following you. My team is on Firebase right now and I have some mixed feelings about it; will definitely keep you on my radar.
Congratulations on launching this oft-requested feature!
Any timelines for support for Functions over QUIC/HTTP3 and raw UDP/TCP?
Cloudflare Workers, the closest thing to Deno Deploy, announced an intention to support raw TCP a few months back (while HTTP3 has been supported for quite a long time now).
Re: Database Functions:
I really like the dual nature of Cloudflare's database offerings:
- Durable Objects (run functions close to data) as an option for apps that need strong consistency guarantees.
- KV (move read-only data close to functions) as an option for apps that can tolerate races but desire speed.
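For anyone unfamiliar with the second pattern, here's a minimal Worker sketch of "read-only data close to functions" - the `CONFIG` namespace binding and the key are hypothetical:

```ts
// Hypothetical Cloudflare Worker reading from a KV namespace binding.
// Reads are served from the nearest edge copy: fast, but eventually consistent.
export default {
  async fetch(req: Request, env: { CONFIG: KVNamespace }): Promise<Response> {
    const flags = await env.CONFIG.get("feature-flags", "json"); // made-up key
    return new Response(JSON.stringify(flags), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
```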
> Any timelines for support for Functions over QUIC/HTTP3 and raw UDP/TCP?
I'll ask the Deno team. I'm curious what your use-case is for these (especially UDP)?
> Cloudflare's database offerings
DO and KV are both incredible products. I think we'll tackle these a little differently since our "base" is Postgres. We are announcing more "edge-like" products tomorrow which will lay the groundwork.
Have you used Workers KV? In my experience, it's not just eventually consistent but also delayed - often by 10 seconds at least - and the data is not updated even for the same client unless they implement their own caching.
I very much like the idea of automagic edge compute, but I would like to see the analytics. I am skeptical of the edgy claims, even for Cloudflare. Where are the nodes? How do they each tend to respond? I want a map with every node lit up in real time. I want to be able to test nodes by location, etc.
1) Do the functions "freeze" once the response body has been sent back? Is it possible to return a response to the user (for a quick API response) and then continue doing some background work? This has been a source of pain with AWS Lambda-based services like Vercel that freeze execution once the response has been sent.[1]
2) How does keeping a connection pool work between function invocations? For example, when using Prisma for its nice typed DB access.
> Do the functions "freeze" once the response body has been sent back?
Just checked with the Deno team. These functions should not be used as "background workers" - perhaps that is something we explore in the future. It will work for a short time in theory, but it's not guaranteed.
> How does keeping a connection pool work between function invocations
Supabase offers several options here. You can either use the API (PostgREST)[0] - an autogenerated REST API, or the connection pooler (pgbouncer)[1] which we offer with every project.
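As a rough sketch of the first option (the `countries` table is a placeholder, and this assumes the standard SUPABASE_URL/SUPABASE_ANON_KEY env vars are available), an Edge Function that reads through PostgREST with supabase-js needs no connection pool of its own:

```ts
import { serve } from "https://deno.land/std@0.131.0/http/server.ts";
import { createClient } from "https://esm.sh/@supabase/supabase-js";

serve(async (_req) => {
  // Each invocation talks to PostgREST over HTTP, so the function itself
  // holds no long-lived database connections.
  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_ANON_KEY")!,
  );
  const { data, error } = await supabase.from("countries").select("*");
  if (error) return new Response(error.message, { status: 500 });
  return new Response(JSON.stringify(data), {
    headers: { "Content-Type": "application/json" },
  });
});
```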
> For example using Prisma
Supabase is a popular database hosting service for Prisma users because of the built-in pooling - it's a great product, especially their typed interface.
> These functions should not be used as "background workers"
Thanks for checking on it. We actually had to build a Cloudflare Worker-based "fire-and-forget" system to allow our Vercel functions to shoot off background tasks. I was hoping to replace that.
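Roughly, the Worker side of that pattern looks like this (the task endpoint is made up):

```ts
// Cloudflare Worker: reply immediately, then keep the background task alive
// past the response via ctx.waitUntil.
export default {
  async fetch(req: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    ctx.waitUntil(
      fetch("https://example.com/background-task", { method: "POST" }), // made-up endpoint
    );
    return new Response("accepted", { status: 202 });
  },
};
```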
> Supabase is a popular database hosting service for Prisma
Agreed! We actually use Supabase as our backend for Willow[1] and use Prisma when writing backend functionality. It's been a really easy and fast process to use Supabase's JS client on the frontend to access data (with RLS!) and then Prisma+Supabase on the backend to modify data (with types!). We would love to allow users to change data everywhere directly from the browser, but we need to do some background tasks (sending notifications or updating related rows).
The dream would be to have a great DX experience around using insert/update triggers to call Supabase functions to run background tasks. Some type of Terraform-esque configuration (in an SCM) to set it up and keep it in sync would be awesome. We have some triggers that make HTTP calls, but we're limiting usage as keeping track of them outside of our other code isn't simple.
> The dream would be to have a great DX experience around using insert/update triggers to call Supabase functions to run background tasks
We have something for this: Function Hooks (soon to be renamed "Async Triggers")[0]. They are still in alpha, but the extension [1] is getting close. It was important to build something which works with PG background workers so that it's non-blocking. We'll make quick progress on this now that we've released Edge Functions.
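On the receiving end, a hook handler is just an HTTP function. A sketch (the `type`/`table`/`record` payload shape is an assumption based on typical row-change webhooks, not a documented contract):

```ts
import { serve } from "https://deno.land/std@0.131.0/http/server.ts";

serve(async (req) => {
  // Assumed payload: { type: "INSERT" | "UPDATE" | "DELETE", table, record, old_record }
  const { type, table, record } = await req.json();
  if (type === "INSERT" && table === "orders") {
    // e.g. send a notification or update related rows here
    console.log("new order", record.id);
  }
  return new Response("ok");
});
```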
> sending notifications or updating related rows
Tune in for tomorrow's announcement - it's related.
TBH, the average user would be confused if their autoscaling workers were "online and billed for" because some random npm package or a coding accident had "background behavior". Firing off a background task into something similar to AWS SQS is really the right way to handle this scenario.
> However, Supabase already offers a flexible solution for that - Database Functions! As such, for Supabase [Edge] Functions, we decided to deploy far-and-wide so that they are as close to your end-users as possible.
Does this mean that a choice has to be made between high database latency (Edge Functions) and specialised SQL-only functions (Database Functions)? I see that cron-like triggers are still on the roadmap; is there a plan to have TypeScript functions that can run close to the database (or other resources)? Call me new school (as in, not old school), but I prefer processing complex queries in a language that I feel comfortable working in, and SQL is not that.
I know a lot of folks are huge fans of writing pure SQL, but the lack of type safety and of good integration with source control (I dream of a world where database schemas, functions, security access and the rest can be saved to source control for reproducibility) scares me.
We can look at offering an option to restrict edge functions to just launch in the same region as your database (probably can't call it "edge" at that point though). This might also be useful if you are processing data in your function that you do not want to leave a particular geographical region.
> is there a plan to have TypeScript functions that can run close to the database (or other resources)?
Just to double-up on Inian's comment - there is definitely a world where this happens, perhaps even inside the database itself (like plv8). We were focused on the Edge experience this time, but I'm excited about the future that an open-source TS runtime like Deno enables.
> I dream of a world where database schemas, functions, security access and the rest can be saved to source control for reproducibility
this was one of the main reasons we started supabase. we hope to make database development as easy as application development
Supabase and Deno?! This is the best news in a while. I like the feeling of you biting at the heels of Cloudflare and even bigger competitors. I believe in this and am excited to build something new!
90% of the projects I work on are built on Firebase. Our main worry with that is that we are locked in, not only to Firebase but to Google as well, which has a nasty history of just pulling the rug out from under your feet without even telling you why.
Having alternatives to try and implement is always a priority for us, and Supabase now (w/ edge functions) seems to cover most of our use cases, so we are definitely looking to switch to it.
If you are using Firebase Functions, it should be easy to port them over to Supabase Edge Functions. We also offer a Realtime database (Postgres), Auth and Storage as part of our stack.
I'm excited about the new functionality and find the rationale for not defaulting to Cloudflare Workers commendable. I worry a little about the long-term viability of Deno Deploy - is there a backup platform that Supabase edge could deploy to in the event of a Deno Deploy acquisition, or even better, is Supabase going to acquire Deno Deploy?
> is there a backup platform that Supabase edge could deploy to in the event of a Deno Deploy acquisition
Yes, we could host this all ourselves (since the Deno runtime is open source) on AWS, or more likely we'd work with someone like Fly, who have a globally distributed platform like Deno Deploy.
> is Supabase going to acquire Deno Deploy
There's no chance of that happening - we couldn't afford them. Deno will be a huge company, and rightfully so; their product is best-in-class.
I'd love to learn more about how it's making use of Deno. I have been loving Supabase lately for hacking away at side projects, and I have been thinking about moving some of my own backends over to it.
Feel free to ask any specific questions about our Deno usage that aren't covered in the blog post. Inian is in the comments and will be able to cover anything technical.
(Render founder, Supabase fan) You can use both! Render will always be developer-focused even as we move into the enterprise. It's the only way to avoid Heroku's fate.
> Serverless compute options can be broken down into two broad categories:
> Containers as a Service (e.g. Google Cloud Run, Fly.io)
> Functions as a Service (e.g. Cloudflare Workers, Fastly Compute@Edge, Suborbital)
There's also Google Cloud Functions, which is odd not to mention here.
For what I'm working on, I've put CF Workers in front of my GCFs. This allows me to terminate the SSL at the CF Worker (along with easy control of DNS), control the URLs (and even content) that end up being sent to the GCFs, and cache results in CF. It gives me a huge amount of flexibility with very little complexity.
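A stripped-down sketch of that setup (the GCF hostname is made up):

```ts
// Hypothetical Cloudflare Worker: terminate TLS at the edge, rewrite the URL,
// proxy to a Google Cloud Function, and cache successful GET responses in CF.
export default {
  async fetch(req: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default;
    if (req.method === "GET") {
      const hit = await cache.match(req);
      if (hit) return hit;
    }

    const url = new URL(req.url);
    // Made-up GCF origin; this is where URL (and content) control happens.
    const origin = "https://us-central1-my-project.cloudfunctions.net" + url.pathname + url.search;
    const res = await fetch(new Request(origin, req));

    if (req.method === "GET" && res.ok) {
      ctx.waitUntil(cache.put(req, res.clone())); // cache without delaying the reply
    }
    return res;
  },
};
```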
Currently doing 1M hits/day through CF Workers, and it is ~$6/mo + ~$20 for GCP (including a Postgres db and heavy use of pub/sub).
All the other important things are done: CI/CD for the whole development flow, with GitHub Actions doing deployments on push to main, and logging and graphs come standard. The developer experience with both GCP and CF is top notch.
Supabase continues to be an interesting alternative, but I really don't see a reason why I'd go with them.
Google Cloud Functions are not really good compared to Cloudflare Workers :/
GCFs are regional, slow to start, and behave more like on-demand temporary cloud instances than an actual FaaS offering.
> Using golang cloud functions. I have a constant stream of hits, so there are always available functions.
There is no way to avoid cold starts altogether; there will always be tail latencies unless there is a generous number of idle instances running, which GCP charges for.
> under 1s. Hot request/responses are ~20ms.
Those numbers are great - I think Go plays a huge part in that. Google's Node.js Firestore SDK is terrible... I've had 15-second cold starts, which is unacceptable for client-facing functions; there's a whole thread about it here [1]. GCP doesn't have a very wide range of language SDK support for those who don't want to use Go or Node.js...
> My hits are all US based
Edge compute, like Fly.io or Cloudflare Workers, truly shines when you need to serve traffic close to users around the world. Otherwise, normal region-locked functions are fine. Vercel requires you to choose a single region, and it's locked to us-east for free-tier users. For us Europeans over here, SSR effectively has to cross the Atlantic Ocean.
I don't think it is terrible, but I do agree it starts up a lot slower than golang - which is why I went with golang for this project and stopped using Firebase entirely. You really don't want to effectively be parsing all your source code every time a launch happens.
> Edge compute, like Fly.io or Cloudflare Workers truly shines when you need to serve traffic close to the user around the world.
For my app, it is really just about having another layer of control in front of my API calls. I don't care so much about the CDN aspects.
I love the concept of Supabase; I only worry about performance in this case. Almost all API calls in my applications execute some SQL to gather data from the database. Having the edge function execute several SELECT queries against a database on the other side of the world is a few times slower than having the API server near the database. Basically, the distance between the API and the database matters much more than the distance between the user and the API. I guess the only solution is to have read replicas of the database on the edge as well..?
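To put illustrative (not measured) numbers on it: with a ~150 ms round trip from a distant edge region to the database, an endpoint running three sequential queries pays ~450 ms in network time alone, versus ~3 ms when the API sits next to the database. Edge read replicas would fix the reads, but writes still have to travel to the primary.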
It feels like CDN edge functions are the new Jamstack: fast, but it comes at a high cost, and the excitement has shifted from static pages distributed near the user to literally data distributed near the user.
edge functions was an obvious complement for us. we already have database functions [0] which can be executed through a REST api (database co-location is great for data-intensive operations).
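for reference, calling one of those database functions from a client is a one-liner with supabase-js (the function name and argument here are hypothetical):

```ts
import { createClient } from "https://esm.sh/@supabase/supabase-js";

// Placeholder project URL and anon key.
const supabase = createClient("https://your-project.supabase.co", "public-anon-key");

// Runs the Postgres function `calculate_order_total` through the REST API,
// co-located with the data.
const { data, error } = await supabase.rpc("calculate_order_total", { order_id: 42 });
```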
Edge Functions are deployed to 29 regions globally, which means that you can use them for low-cost, low-latency operations.
Static pages relied on the client being able to run JS to populate user-specific data. Also, a lot of data is fine being eventually consistent, which is a good fit for edge functions. An example is rendering the account icon and username on the top right without hitting a central database/static file server.
And Deno Deploy itself likely uses AWS or the like under the hood, so you are paying base infrastructure costs plus a premium for every provider in the stack.
Looks like Dart doesn't support Wasm as a target yet, but is considering it as part of their 2022 roadmap[0]. In the meantime, we will start off with supporting other languages like Rust, which already have good support for compiling to Wasm.
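As a sketch of what that looks like from the TypeScript side (the `add.wasm` artifact and its exported `add` function are made up):

```ts
// Load and run a Wasm binary, e.g. one compiled from Rust with a wasm32 target.
const wasmBytes = await Deno.readFile("./add.wasm"); // hypothetical artifact
const { instance } = await WebAssembly.instantiate(wasmBytes);

const add = instance.exports.add as (a: number, b: number) => number;
console.log(add(2, 3)); // -> 5
```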