I tried Neon a few months ago when attempting to switch away from a self-hosted db. It was a horrible experience: customer support was unhelpful; it was glitchy, slow, and laggy; and even before the price increase their pricing was way too high.
Curious about your self-hosted db - was it also Postgres? How much had you played with pg before you tried Neon? Can you expand what you mean by 'glitchy, slow and laggy'? Slow connection due to cold starts? Scale-up? General inconsistency in performance? What kind of data volumes were you working with?
I ask because my experience with Neon has been completely different to what you just described. Ever since their 'closed beta' days, it has always 'just worked'. Their CLI has been great, none of my automation has ever bombed out without good reason, and I've never seen it cost me more than I expected. Notably, I was also able to self-host it with relative ease, and found that they actually encouraged people to do so. (In contrast, there are a number of similar 'open source' offerings such as Supabase that I've tried self-hosting, and found that while their core codebase is on GH, it is extremely difficult to deploy outside of their own environment. Not intended as a dig at Supabase, they do some really great work and contribute a ton back to the Postgres community - I'm just using them as a relevant example).
As an aside, I've also met people from Neon at various conferences, including co-founder Heikki. They all struck me as genuine Postgres enthusiasts & great fun to geek out with. Neon (like Supabase) have been _really_ pushing the envelope on Postgres for the last couple of years, and have sponsored some significant developments & proposals. In my view they're a 'real OSS company'. While that probably does rose-tint my view of them a little, that's important to me & makes me happier to give them my money. They've certainly done more for Postgres than AWS ever has.
Thank you for the confidence and praise! I do believe we, Neon, try to be as open as possible while also trying to build a business around this open source product.
I'm a proud PostgreSQL contributor myself and value the work of my colleagues highly. However, it should be said that AWS' RDS PostgreSQL offering in late '13 significantly helped increase PostgreSQL adoption, and the recent contributions by AWS' own PostgreSQL Contributors Team should not be discounted. IIUC, their Aurora PostgreSQL offering was also the inspiration for Neon as a product, so I don't think your parting jab at AWS was justified.
I met Heikki in Estonia at a local Postgres user group meetup. That was the first time I heard about Neon. He is a true Postgres hacker and a very cool guy to talk to.
Hey, we do want to add numbers back to that page. The issue was that the original numbers were inaccurate.
"The goal of this metric is to represent the health of a system. However, we found this binary “is there an incident or not” approach wasn’t accurate for describing our service. For example, in the past 30 days, 99.9% of projects hosted on Neon had an uptime better than 99.95%; however, the status page displayed 99.89% uptime."
How many times do you expect to slice it? "For 99% of days this year, 99% of our customers have experienced 99% reliability for 99% of their users, we're 99% sure"? (That's just the long way of saying 95%)
When Neon goes down, Twitter lights up; we haven't had that happen in a long time now.
Internally we measure the number of projects with more than 5 minutes of downtime in the last 30 days, and it's been in the low double digits, out of 700K+ total databases under management.
A TON of effort is going into this: twice-weekly standups, tightening our processes and quality across the board, and a standing item in our monthly board meetings. While stability is black and white, what goes into delivering it is a long tail of small and large improvements and a major team effort.
If you're interested, our team also uses negative feedback to identify where we can improve. But I understand that takes time out of your day.
woof, that does not sound good. Can you say more about how you were connecting? What was slow / glitchy and laggy? Feel free to reach out over DM to me with your project details.
> Feel free to reach out over DM to me with your project details.
I’m not the parent commenter but just giving you a heads up that your HN profile doesn’t list any socials so I don’t know if people will find how to DM you. Unless you guys have a public Discord or something and it’s implied that that is how people would be able to DM you.
We were connecting over `pg` (NodeJS). The web dashboard was constantly logging me out and was overall a mess (although I'd imagine it might be a bit better now, given GA?). There's no DMs on HackerNews btw
node-postgres (pg, probably more specifically pg-pool) doesn't do too well with serverless, not handling TCP connections killed while in the pool (or something): https://github.com/brianc/node-postgres/issues/2112 (open since 2020)
A nice marketing opportunity for Neon/Supabase to get a fix officially released.
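In the meantime, a common application-side workaround for the issue above is to retry a query once when the failure looks like a dead pooled connection. A minimal sketch; the error-code list and helper names are illustrative assumptions, not part of the `pg` API:

```typescript
// Retry queries that fail with connection-level errors (e.g. a TCP
// connection killed while sitting idle in the pool). Error codes here
// are illustrative: Node socket errors plus Postgres admin_shutdown.
const RETRYABLE = new Set(["ECONNRESET", "EPIPE", "57P01"]);

function isConnectionError(err: unknown): boolean {
  const code = (err as { code?: string } | null)?.code;
  return code !== undefined && RETRYABLE.has(code);
}

async function withRetry<T>(run: () => Promise<T>, attempts = 2): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await run();
    } catch (err) {
      lastErr = err;
      // Only retry dropped connections; rethrow real query errors.
      if (!isConnectionError(err)) throw err;
    }
  }
  throw lastErr;
}
```

You would wrap calls as `withRetry(() => pool.query(...))`, and also attach a `pool.on("error", ...)` handler so an idle client dying doesn't crash the process.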
We did not hide it because it was going down. We hid it because it was an incorrect calculation. We are currently redoing our status page to look more like Snowflake's.
I don't have any proof, but you'll just have to trust me, which could be a tough ask. Neon really tries to be transparent in internal and external communication.
We're self-hosting Neon with an internal Kubernetes operator (keep your eyes out for more info) and we're incredibly happy with Neon's technical solutions. I'm not sure we'd be able to build our company without it :o
By far the biggest is being able to scale it on top of Ceph while still getting NVMe-disk performance on the Pageservers. This enables us to increase efficiency (aka cut costs) by 90%+ in a multi-tenant environment.
Of course we don't get the same out of the box experience as CNPG, but considering we have the engineering capacity to build those parts ourselves, it was a no-brainer!
Thanks for the explanation! This sounds like the right use case to use something more complex.
I'm always hesitant to add additional technology to the stack if it doesn't provide a bulletproof benefit. A lot of use cases are perfectly fine with plain Postgres, and I'm always fighting against polluting the stack with unneeded complexity.
Sounds like a good use case. Do you have any benchmarks or numbers which you could share regarding the performance of database? (especially disk writes and reads)
i'd love to learn more about what you're doing! if you haven't already, please send a message. definitely want to hear lessons learned with your k8s operator.
Branching the whole database (data included) seems really great. Congrats!
It does seem a bit pricey though. For $69/month (Scale), I could rent a dedicated server with 8 dedicated CPUs, twice the RAM and 20x the storage (and that's physically attached NVMe in raid 1), and have money to spare:
https://www.hetzner.com/dedicated-rootserver/matrix-ax/
$69/month is peanuts for anyone running something that hopes to make any amount of money. Developer and operational experience, uptime, etc. seem way more important. Just getting to skip dealing with all that nonsense and focus on whether your idea / startup has any value whatsoever seems far more important.
(Obviously if it's just like a hobby deal then run it on a cheap VPS).
Unless it's enough money to spare to pay someone to manage Postgres on the server and scale it up and down as needed, this isn't that useful a comparison.
As an avid self-hoster, by all means, self host Postgres! But self hosted Postgres is not the same product as managed Postgres.
Cheeky jokes aside, you can definitely go down the hetzner/VPS route. Not everyone has the expertise or desire to spend time doing so, but if you do, then go for it I say. We have some nifty features that are non-trivial to recreate, but again, it depends on your needs.
I switched my company over to Neon from PlanetScale. In part for the ability to scale up/down easily and also so I could run multiple databases on the same cluster.
I also looked at (and spun up) RDS but Neon was way easier to work with and scales to 0 (Aurora Serverless is trash IMHO). Neon starts very quickly as well (hundreds of milliseconds in my testing) which is pretty awesome.
I have nightly backups dumped to Cloudflare R2 from a GitHub action.
If Neon announces they're shutting down, migrating the database to another provider is a bash one-liner and an hour of downtime.
If Neon just "goes dark" I can recover from the nightly backup and lose at most 24 hours of data - data that is denormalized into other systems and can be manually recovered if absolutely necessary (but probably not).
Not every company is the same, but for many use cases the database layer is an inconsequential decision as long as the API is fungible for your use case.
Not every decision is a 1-way door.
The most important way you can spend your time when making a 1-way door decision for a company isn't picking the right door. It's turning the decision into a 2-way door.
I'd argue that the most important way to spend the time is on building something customers want, not micro-optimizing with the shiny database vendor of the week. Get a boring managed Postgres and it'll scale for a very, very long time; it's well understood, and if you hit its limits, that's a good problem to have, and there are many solutions for it when the time comes.
My customer profile at the time was mom-and-pop small businesses: bespoke application development that automated back-office work, delivered on a ~6-week timeline.
The size of their tech department was 0.
The goal with the architecture I delivered was to delegate as much as possible below a vendor boundary.
My solution was:
Cloudflare workers for compute.
R2 for storage.
Neon for the database.
HTML/JS/CSS for the frontend.
Neon wasn't a micro-optimization. It was the only vendor in town that would own that much of the DB layer's responsibility (D1 wasn't prod ready yet).
12 months in and so far so good. I've had to "come back in" for a total of 30 minutes of maintenance, when mobile Chrome broke uploading a photo via the camera.
Wouldn't you want the most boring solution that's unlikely to change in the next 5 years for this use case? (AWS etc.)
With a company that did a big funding round, it's very likely that it gets acquired or shut down, products get sunsetted, or API endpoints need to change in the next few years.
That's fair. It was something I was worried about but I knew the worst-case was I could spin up RDS and move over to that. This is for my personal company and the load on the software is very spiky with long periods of near-zero activity (we did the switch over in one of those zero-activity periods). That gave us an easy on-ramp to testing this DB as a replacement.
I have 1 database per client and their needs almost never overlap, so I wanted to share the underlying server/cluster between them. You can't do that on PlanetScale. Aside from that, I liked working with them.
I'm not extremely well versed in the topic, so forgive the ignorance: how does this differ from AWS Aurora's offering? Is pricing or scaling different? It's not immediately obvious why you would want to use this instead.
(Neon ceo)
1. You should try and see the experience
2. It scales to 0. If you have lots of projects, Aurora costs really add up
3. It supports branches and integrates with Vercel. You can have an environment for every dev and every PR.
4. We will be on multiple clouds soon
5. We are just getting started
I wish you the best of luck, but please know that for a lot of us time is money, and literally our most irreplaceable asset. Tech startups that want us to bet on them need some sort of value proposition given the risk of "you might die or disappear or be acqui-hired three months after we moved", so I would encourage you not to answer this question with your 1).
To me the instinctive reaction is "it's for when you will have time to fool around and figure it out by yourself". It is my opinion only, it's worth nothing more than that, and I realize you didn't ask for it, but I just wanted to let you know.
"Unlike AWS Aurora we decided to open source all the changes in Postgres and also send them upstream as well as fully open source our cloud native storage."
You can find replies from folks in this thread as well discussing self hosted deployments and you can find an entire discord channel dedicated to that topic here: https://discord.gg/92vNTzKDGp
I'm not associated with Neon and I've never used it before and I certainly don't disagree with your sentiment, but it seems like Neon has gone above and beyond in making sure that workloads could continue to run in the event the company fails and broadly assuaging those concerns.
The full control plane is not open source, so if you use tenants or many of the not-in-postgres features you will need to implement the control plane. It's not too much work to do so - there's already a few open source projects starting to do so - but just be aware of that.
We have a toy control plane implementation in the neon repo aliased to `cargo neon`, but also available as `neon_local`. I'm honestly not sure if open-sourcing the production control plane is in our future.
(From molnett.com) For those of us that self-host, the architecture of building cold storage on top of object storage and warm storage on top of NVMe is unbeatable. It enables us to build a much more cost-effective offering while keeping Postgres as our database.
Neon won't pull the rug from under an active production database by scaling to 0 when there are still active queries, instead it'll scale databases to 0 only after a longer period of inactivity. So production databases are less likely to scale to 0 because they generally are active most of the time, yet Neon does scale to 0 when it's possible and allowed.
Is 5 a reference to how you hiked the prices so Neon doesn't work for light-workload, moderate-storage databases anymore? $69+ a month for a Segment data warehouse holding 10 GB is obnoxious. It was under $10 before the switch. What's next? $690/mo?
I could've sworn I closed my Reddit account. I thought Neon was meant to be a bit like Git with data, with branches etc., as well as autoscaling and nice stuff like that.
Why is storage priced so high? Depending on the plan it looks like it can cost anywhere from $1.50 to $1.75 per GB. That feels pretty excessive; maybe I’m missing something that causes it to be so pricy? It actively discourages me from wanting to use it for a hobby project, because the storage costs would send my bill to over $100/month before I even use any compute.
Storage costs are something we are going to be looking at soon. If we can reduce our cost of goods sold, it is possible to pass on savings to consumers.
The feature they call "branches" would more accurately be called "snapshots" or "checkpoints". Creating CoW writable versions and rolling back to past versions: that's snapshots. Branches imply merging, which is a huge can of worms even if we apply last-write-wins, because of referential integrity and the like; you can't just merge Postgreses.
I've been using Neon's serverless Postgres for the last 6 months and I'm extremely impressed! As a developer, it has been a game-changer in terms of productivity and letting me focus on building features rather than managing database infrastructure.
Congrats to the amazing Neon team on launching serverless Postgres to the world!
This is valid. However, many production databases won't scale to zero often. The serverless proposition is still valuable if you factor in the development, test, and staging environments scaling to zero plus our autoscaling that doesn't require downtime or dropped connections.
For guaranteed message delivery, you're probably best off using a messaging system designed with that in mind instead of listen/notify.
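The difference is easy to see in miniature: NOTIFY-style delivery is fire-and-forget (Postgres drops notifications sent while nobody is listening), while a queue retains messages until a consumer takes them. A toy sketch of the two semantics, not actual Postgres code:

```typescript
// Fire-and-forget delivery: messages sent with no listeners attached
// are simply lost, mirroring LISTEN/NOTIFY semantics.
class Notifier {
  private listeners: Array<(msg: string) => void> = [];
  listen(fn: (msg: string) => void): void { this.listeners.push(fn); }
  notify(msg: string): void { for (const fn of this.listeners) fn(msg); }
}

// Durable delivery: messages survive until someone dequeues them.
class DurableQueue {
  private messages: string[] = [];
  enqueue(msg: string): void { this.messages.push(msg); }
  dequeue(): string | undefined { return this.messages.shift(); }
}
```

In Postgres terms, the durable variant is usually a jobs table consumed with `SELECT ... FOR UPDATE SKIP LOCKED`, or a dedicated message broker.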
(Neon PM) This is true, and very often production applications need access to their database 24/7; in that case they benefit less from scale to zero and more from the serverless nature of autoscaling.
To provide more context, our original plan was to launch the GA on March 22nd. However, we decided to move the date to April 15th because a few projects required additional time to complete. Last week we saw Supabase's announcement, but we didn't know what it was about and decided not to move our date again.
Storage is the most surprisingly expensive part now. $1.50/GB is kind of a tough pill to swallow. We've been exploring the idea of shifting from Aurora to Neon. With the recent pricing changes, our bill has exploded even with almost zero usage of compute time.
Even if backups/history were taking up zero space, $1.50/GB seems really high for raw storage. The rest of the pricing seems reasonable to me. We're only around 100GB right now, but can see that ballooning up in the future and raising some concerns.
By the way, I do want to say that branching is a game changer. The recent usability improvements like graphs/metrics and being able to reset a branch to the state of another one without affecting the endpoint is so nice. We previously had a messy script that took care of creating a new branch, moving endpoints, renaming, deleting, etc.
More fine-grained permissions for users on projects would be my number one ask at this point, but overall, I really appreciate the improvements the Neon team has made in recent months.
Permissions improvements have been duly noted by others, so expect that work to begin sometime in the near future.
Reducing our COGS is very important, and also something that we will be working on. I would definitely reach out to our sales team for a more custom quote.
Neon keeps an NVMe cache in the Pageservers, and it also keeps copies of the data in S3: one for the main storage, and one for the backup case. The data also gets stored on special replicated storage (Safekeepers). So it might be in 6 different places at the same time (3 Safekeepers, 1 Pageserver, 2 S3 buckets), depending on the data's lifecycle through the system.
This architecture delivers really good safety: Once your transaction commits, the data is already replicated across different AZs, and this is done without there being an S3 request each time. It also means that Neon can deliver features like branching.
> So it might be in 6 different places at the same time (3 safekeepers, 1 pageserver, 2 S3 buckets), details depending on the data's lifecycle through the system.
This almost feels like how cloud providers charge excessive fees for excess bandwidth. Storage is cheap, even with replication, especially at Neon scale.
Yeah, I am not sure what "scales to zero" really means. The storage costs + pre-paid compute could essentially buy you an AWS instance. Sure you do get things configured from the get-go but if you can get your way through Terraform that can work too.
Either way, if you get big enough (will use more storage + compute), it makes sense to move. I'd be against such platforms as they make changes to Postgres that might make you depend on them.
One case where scale to zero can help is if your main branch scales to zero, that means more compute hours for your non-main branches.
FWIW, Neon is really "just" Postgres. I can't think of anything Neon specific that would lock you in, other than that we don't support certain kinds of extensions yet. But if you were going to use those extensions, then you probably wouldn't have picked Neon in the first place. I also wouldn't consider that a lock-in. Do you have any examples that you might be aware of in Neon? Postgres compatibility is something that we take very seriously.
hey all, i've only been with neon for a very short time but i'm super excited to be here because i believe neon can deliver an amazing developer experience that databases have been lacking. while we are simply postgres on top the neon platform is something special that unlocks a lot of new possibilities.
this ga is simply marking readiness for the platform and the team that has been building it. there's a lot more to do going forward.
as we're looking forward, post-GA, i'd love to hear what you think neon needs to focus on next. here's what i'm seeing so far:
- improved gh actions integration
- more extensions
- better developer extension support
- autoscaling communications
- metrics / logs integrations
As far as I know, Neon has no filesystem cache, which is the most important performance enhancer for Postgres. Since many people are already using Neon in production: can anyone comment on the performance for large databases (a couple of tables with around 100 million rows) compared to RDS / Supabase / other hosted services? I can imagine it is substantially slower, but I'd be happy to be surprised if not, or to learn how Neon compensates for the missing filesystem cache.
Neon engineer here. We implemented our own local cache between the pageserver and Postgres (called LFC). It compensates for the lack of a normal file system cache, and scales up with the rest of the autoscaling infrastructure.
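Conceptually, a cache like that can be pictured as an LRU map from page number to page bytes, consulted before going to the pageserver. A toy sketch; the names and the fetch callback are illustrative, not Neon's real internals:

```typescript
// Toy local page cache: serve repeated page reads from memory and only
// call out to the (simulated) pageserver on a miss. Uses Map's
// insertion-order guarantee to implement LRU eviction.
class PageCache {
  private cache = new Map<number, Uint8Array>();
  constructor(
    private capacity: number,
    private fetchFromPageserver: (pageNo: number) => Uint8Array,
  ) {}

  getPage(pageNo: number): Uint8Array {
    const hit = this.cache.get(pageNo);
    if (hit !== undefined) {
      // Refresh LRU position by re-inserting at the end.
      this.cache.delete(pageNo);
      this.cache.set(pageNo, hit);
      return hit;
    }
    const page = this.fetchFromPageserver(pageNo);
    this.cache.set(pageNo, page);
    if (this.cache.size > this.capacity) {
      // Evict the least-recently-used entry (first key in order).
      const oldest = this.cache.keys().next().value as number;
      this.cache.delete(oldest);
    }
    return page;
  }
}
```

The point of the real LFC is the same: hot pages are served locally at disk/memory speed, so most reads never pay the round trip to the pageserver.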
If the Neon driver were to allow us to easily pass in a localhost connection, the development and test experience would be easier. Perhaps Neon could swap to something like this internally: https://github.com/porsager/postgres.
Having run a local dev environment connected to Neon and tests connected to Neon got in our way of adoption. We'd prefer to develop and run tests against a regular Postgres localhost database.
To the PMs of Neon, put yourself in the shoes of a new developer thinking of giving Neon a try. What changes will I have to make to my code and my development workflow?
The technology is interesting but the billing doesn't convince me. I don't feel that I can scale storage more easily with Neon than by just using a bigger server with plain PostgreSQL.
Neon is more than just Postgres with a storage device. It's a managed service with features like branching, autoscaling, bottomless storage, etc.
People who really find Neon valuable usually partake in our "differentiating" features, so if all you need is a managed Postgres, there will definitely be more competitive prices out on the market.
I'm working with a client on a greenfield project and I picked Postgres for the tech stack. For the staging server, I just locally installed Postgres, configured it, and it works perfectly fine. On the flip side, I'd rather just focus on code, and if there's a free tier (which Neon has), I'd rather hand that off to a service.
So, my question is: what trade-offs am I making other than going from a persistent/local db to an off-site one (i.e. probably a degree of speed)? Since it's free, does that mean my data might be inspected? I'm under an NDA, and my client would prefer his data stays in-house unless there's a good reason for it not to.
The free tier gets spun down to idle after five minutes of inactivity. The first request after that usually fails to connect as it takes a few seconds for it to come back up.
Neon cold starts are targeted at just a few hundred milliseconds. Anything on the order of seconds would be a regression in our minds. Obviously this depends on geographical latencies, etc. We are always looking to improve cold starts.
I'm on the free tier. In my case it looks like adding `app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {"pool_pre_ping": True}` for the Flask-SQLAlchemy configuration did the trick. I hope to be a paying customer soon :)
We do not inspect user data. We don't even connect to user databases, unless given permission to. You can read our privacy policy here: https://neon.tech/privacy-policy.
Neon will never be as fast as a database local on your computer, but performance is always something we are paying attention to.
If you’re aiming for EU compliance you’re going to need to host the data within the EU and only have EU staff have any sort of access, like running support on it. Microsoft is exempt from that last bit for some reason, but they are Microsoft so they probably cheat.
There won't be a lot of EU enterprises capable of using your services without rather strict compliance. That may or may not be in your interest, but you might as well be up front about it. With the way the EU is heading in regards to data protection, it may not just be enterprise organisations either by 2025. Those compliance laws are getting stricter by the day.
Nice work Neon! I’ve used it a number of times and am impressed at how quickly databases become available. Leaps and bounds faster than RDS. it’s amazing when you need something now.
Neon seems really great to me, but I wish I could easily run it locally via Kubernetes. I know there are some projects out there[0] but they are all supported by 3rd parties and things like branching and other features don't appear to be supported.
I'd love to be able to just use a helm chart to have Neon in my homelab.
> Postgres was a "it exists" database back in the early 2000's.
If we're really nitpicking, it's not saying Postgres has been the most popular database since the early 2000s. If you use install counts as the metric, I'd assume the statement is true today, since it's either Postgres or MySQL.
Have you considered publishing pricing for larger databases? It would be interesting to know whether the offering would be useful at sizes not covered by the listed tiers.
edit: given the headline feature of infinite PITR capability, knowing the price tag could be quite important. Especially if purging old data isn’t supported (is it?)
Our Sales team is more than happy to discuss custom deals. I don't know if our team has given thought to more plans on the web page; I would think we want to keep the number of plans small to give users an easier time parsing the page. I think competitors typically have a similar number of plans, but I haven't checked in a while.
Aha, I see the $15 per 10 GiB/mo extra in the little info box. That’s … steep, especially for cold data.
I have an application on MySQL that I’d love to move to a service like Neon. Maybe I’ll port it to Postgres some day. But not at those storage prices, at least not without serious care.
I doubt I’ll ever use Neon again after they had our production database go down several times over just a couple of months Q4 last year. We got tired of the downtime during an incident and we were able to spin up a Supabase PG instance and switch over to it faster than Neon could resolve the incident. It wound up taking them days to resolve it fully. They then increased the prices significantly about a month later. Given that it’s only been four months since these issues, I don’t think that they are serious people in the slightest and don’t trust them with a production database.
(Neon CEO)
I'm sorry you had this experience. We had several instabilities in Q4 as the system became popular. That was one of the reasons we called it "preview" until we got stability to where it needs to be. We are also very transparent about every outage (see neonstatus.com) and highlight even the smallest ones.
We are also iterating on pricing. We were purely consumption-based before and realized that we need to offer a $19 paid plan to start, based on the value our customers get. This is in line with what other dev platforms charge for their services.
Reading the pitch and this solves all of the problems I don't have (as a developer and operator of services that don't need to scale to millions of users). How often do you need to clone/restore whole databases, really? But I may be misunderstanding the pitch.
(Neon PM) Instant branching helps in a development workflow because it's like git, but for your database. You can develop or test against an exact copy of production without the risks of testing on production. Autoscaling is useful if the load on your database varies (you can save money, even if your app isn't serving millions of users).
Until you start firing emails and texts to your production customers :-)
That aside, I think Neon is pretty cool. I will wait some time to see how stable a service it is, whether price hikes happen often, or whether VC money destroys it.
I think Aurora is too pricey for me, so I settled on self-hosting PG on Hetzner. I'm there because Hetzner has been stable for me for years. What I fear most is having to migrate a db off of some service because I stopped being their target audience. I know this sucks for startups trying to make it, but a risk is a risk. I'll wait and see.
I tried Neon for our startup but ended up going with PlanetScale. The stability guarantees seemed weak and still scare me. I would be happy to take another look, but I've had no problems with Vitess; it has been a dream. Congrats on your launch to GA.
I don't really see a reason to move off of Planetscale, which is a very mature offering, if it works for you. There is always a cost when moving Postgres providers, doubly so, when you're moving from MySQL to Postgres.
I think the Postgres ecosystem has many differentiating factors where MySQL and MariaDB just can't compete.
The founder of Turso explained how they can have such a generous free tier in a tweet:
> Lots of people have been reaching out to me asking if they should expect our free tier to be around, given recent news.
> Yes, you can expect our free tier to be around.
> The main reason we built a service on SQLite is that we knew we could build the most efficient thing in the market with that. I usually keep these numbers semi-private, but last month Turso had over 20,000 databases under management - in all plans - and our cloud bill was less than 3.5K USD.
> I sympathize with Planetscale, in the sense that running a company is hard and you sometimes have to make hard choices. But look beyond words, into cost structures and incentives:
> Of course our service is expensive to operate, like any service, but most of it is staffing. We'll never find ourselves in a situation where killing our free tier will make any dent in the profitability of the company.
as a "mere user" of the JS or API endpoints, should I think one is better than the other? To me it doesn't really matter if it's Supabase/Neon, Turso, or D1 powering it as long as... it works
With Turso you get an sqlite database behind a SDK or an (HTTP REST) API endpoint.
With Neon you get a postgresql database over a postgres wire format connection.
So if you have an already existing app speaking postgres to a database somewhere, Neon is a drop-in-replacement, while Turso would require adapting to their custom API.
If you are creating a new service, you might need/want to take advantage of e.g. Postgres extensions [0] for storing geographical data, or pg_vector for similarity searches, etc. Or you simply need more stringent serialisability promises than what libsql / Turso can provide.
But if you're just writing something new from scratch and have "simple" demands, I think something like Turso looks cool (and cheap!).
I'd probably care because of the ecosystem. There's a bunch of developer tooling and mature ORMs around postgres. Granted, those same libraries/frameworks probably work with sqlite too.
I guess it's just trust. I trust postgres more than I trust sqlite for building big apps.
One last point: I'd look at feature set of both offerings. I suspect the features of the postgres offering to be more conducive to scale.
I don't get how this achieves point in time recovery? I assume I'm misunderstanding some fundamental part here.
If the branches are achieved using a COW file system, how does this part from the blog post work?
> Suppose a developer fat fingers a table or a database out of existence? No problem, move the branch to the second before that event happened.
How can I go back in time arbitrarily here? I would have assumed that you can only go to existing snapshots if the underlying method is a COW file system.
This is achieved through our Pageserver. The pageserver ingests streams of WAL, and so contains both snapshots and deltas. This lets us efficiently seek to any LSN by replaying page deltas on top of the nearest snapshot (we call it the GetPage@LSN API).
PostgreSQL writes changes to disk using WAL for consistency. Every WAL record is a set of changes to the PostgreSQL data directory (data files, metadata, ...) that need to be persisted together.
Neon indexes this WAL, and restores pages to the right version by replaying all WAL from the previous page snapshot up to the right version, allowing full point-in-time recovery for all persistent tables in the database.
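A toy model of that replay logic (all names here, like `get_page_at_lsn`, are mine, not Neon's actual API; the real Pageserver is far more involved):

```python
# Toy model of page reconstruction from a snapshot plus WAL deltas.
# Per-page history: full page images at some LSNs, change records at others.
snapshots = {10: b"page@10", 50: b"page@50"}   # LSN -> full page image
deltas = {20: b"+a", 30: b"+b", 60: b"+c"}     # LSN -> change record

def get_page_at_lsn(lsn: int) -> bytes:
    """Find the nearest snapshot at or before `lsn`, then replay
    every delta between that snapshot and `lsn` on top of it."""
    base_lsn = max(l for l in snapshots if l <= lsn)
    page = snapshots[base_lsn]
    for l in sorted(deltas):
        if base_lsn < l <= lsn:
            page += deltas[l]   # stand-in for real WAL redo
    return page

print(get_page_at_lsn(35))  # b'page@10+a+b'  (snapshot@10, replay deltas 20, 30)
print(get_page_at_lsn(60))  # b'page@50+c'    (snapshot@50, replay delta 60)
```

Because any LSN can be materialized this way, "move the branch to the second before the fat-finger" is just choosing a target LSN, not restoring a fixed snapshot.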
Calling it "branches" does seem misleading. For instance, would this also work across major PG versions? afaict, it is just not possible to merge two differently versioned postgres-es
Branching in Neon should be interpreted more like the branches of a tree in graph theory than the feature set exposed by git: the whole history of a Neon project forms a tree, with a single path between any two points in the history (and no merges).
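A minimal sketch of that tree-shaped history (names and structure are mine, purely illustrative): each branch records its parent and the LSN it forked at, so walking upward gives the single path from any branch back to the root.

```python
# Toy model: branch history as a tree (no merges, unlike git).
# Each branch knows its parent and the LSN it was created at.
branches = {
    "main": (None, 0),
    "dev": ("main", 100),     # branched from main at LSN 100
    "hotfix": ("dev", 150),   # branched from dev at LSN 150
}

def ancestry(branch: str) -> list:
    """The unique path from a branch back to the root of the history."""
    path = []
    while branch is not None:
        parent, lsn = branches[branch]
        path.append((branch, lsn))
        branch = parent
    return path

print(ancestry("hotfix"))  # [('hotfix', 150), ('dev', 100), ('main', 0)]
```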
> For instance, would this also work across major PG versions
As for multiple major versions: We currently can handle multiple major versions in different tenants on the same pageserver/safekeeper servers, just not for the user's PostgreSQL instance. Major version upgrades (by way of pg_upgrade) are something we're working on, but that's still quite far down the road.
> afaict, it is just not possible to merge two differently versioned postgres-es
Correct, and AFAIK we don't actually advertise anything related to merging histories. If we do, please do tell where, so we can correct that.
We have a few people self-hosting Neon in this thread who are in our Discord's selfhost channel. You might try jumping in there. We don't have many docs related to self-hosting at the moment.
Sure! Each instance of Postgres runs inside a QEMU VM, inside a Kubernetes pod. The VM provides isolation, autoscaling, metrics, and (eventually) live migrations. These VMs share AWS metal nodes.
It looks like the compute for the free tier is free (for your main branch) + 20h for branches, but the lowest paid tier is 300h for all branches. Can anyone using this speak to that? I've seen this trend where free tier has better features than the lowest paid tier.
Edit: Love to see several Neon folks in this thread from various parts of the company. It's always good to get insight from engineering, devrel, product, and CEO.
(Neon PM) If your project uses less than 500 MiB storage, our Free plan might be the best plan for you. If your project needs more storage, branches or larger compute then a paid plan might be a better fit: with Launch you can run your project 24/7 at $19/month.
The Launch plan includes 300 compute hours. All your computes draw from this, regardless of whether it's the primary branch.
For a simple comparison, let's assume you only use 0.25 vCPU computes and your primary compute runs 24/7 (~750 hours per month):
1. On the Free plan, you can have the primary running 24/7 plus an additional 20 hours for other branches.
2. On the Launch plan, you can have the primary running 24/7 plus an additional 450 hours for other branches. And of course the 10 GiB storage + other paid features.
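Spelling out the arithmetic behind those numbers (this assumes, as my reading of the comparison above, that one compute hour equals one full vCPU running for one hour, so a 0.25 vCPU compute burns 0.25 compute hours per wall-clock hour):

```python
# Figures from the comparison above; the accounting model is my assumption.
CU = 0.25              # vCPU per running compute
PRIMARY_HOURS = 750    # primary branch running 24/7 for ~a month
LAUNCH_BUDGET = 300    # compute hours included in the Launch plan

primary_usage = CU * PRIMARY_HOURS           # 187.5 compute hours
remaining = LAUNCH_BUDGET - primary_usage    # 112.5 compute hours
extra_branch_hours = remaining / CU          # wall-clock hours of 0.25-vCPU branch time

print(extra_branch_hours)  # 450.0
```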
Congrats to Neon on the generally available status. Big vote of confidence. I've looked around and Neon is the only serverless Postgres I’ve seen that provides copy-on-write branching. Every other managed service has some significant seeding limitation when one looks at the details.
Neat idea, but the stability and assurance of RDS is pretty compelling, and migrating databases is pretty much the worst in my experience; even big services like Azure can have unexpected database issues that burn time and money.
Also, custom postgres implementations are a bit scary from a performance tuning perspective.
(Neon PM) Do you have an idea of the RAM or compute your application needs? Many applications can run 24/7 with 1 GiB RAM on the Launch plan at $19/month.
Appreciate y'all being active in here, but this doesn't really answer the question. Neon looks really interesting to us; we're currently paying Heroku for a standard-5 Postgres plan at $1400/month. But that's mostly for the 1 TB of storage, and I have no idea how much "compute" we currently use.
Our Sales team would be happy to talk through pricing with you. Do you have any suggestions on where we could be clearer for people like yourself who are in the same position?
Storage costs can be calculated as something along the lines of:
size of current database * retention period (configurable) + compute hours
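A back-of-the-envelope version of that formula (the rates below are invented placeholders, NOT Neon's actual pricing; check their pricing page for real numbers):

```python
# Rough cost model following the formula in the comment above.
# Both rates are hypothetical placeholders, not real Neon prices.
STORAGE_RATE = 0.10   # $ per GiB-month of retained history (made up)
COMPUTE_RATE = 0.16   # $ per compute hour (made up)

def monthly_cost(db_size_gib: float, retention_months: float, compute_hours: float) -> float:
    """Retained history grows with the configurable retention window,
    so storage cost scales with size * retention; compute is billed per hour."""
    history = db_size_gib * retention_months
    return history * STORAGE_RATE + compute_hours * COMPUTE_RATE

print(round(monthly_cost(10, 1, 300), 2))
```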
I find it really sad how many communities these days use Discord rather than a searchable forum. There's no specific Q&A format to home in on specific solutions.
"Managed" means we take care of Postgres administration.
"Serverless" means that your database isn't running if you're not using it. We put computes to sleep after a certain amount of time. We can also scale database resources up and down as needed.
There are some limits. Architecture-wise, storage can grow indefinitely, and compute can scale up to the largest bare-metal node on Amazon, plus as many read replicas as you want.
We spawn a Postgres instance on the fly, if it isn't already up, in just a few hundred milliseconds. Cold start times are something we're always trying to improve.
Good point - yeah, D1 I think is SQLite-based. For my toy projects that hasn't mattered so far, but for more complex projects I see how this is a big difference. Thanks for pointing it out!