Tell HN: Heroku deleted my database with no warning
460 points by fireworks on Jan 31, 2023 | 206 comments
Last December, Heroku nuked the database on one of my active projects. I was travelling at the end of the year and did not catch wind of this until I returned and saw messages about an issue with the app. Sure enough, I checked and noticed that the database was gone and detached on December 9th.

Before the hate comes out, yes, I know Heroku deprecated its free tiers. However, I did not understand this would affect my projects on paid dynos. The real issue here is that I never received a single email or notice of any kind about this. From researching, it appears most people received SEVERAL notices. I did not think there was an issue with my setup because I received zero communication.

Upon reaching out, Heroku has told me that they cannot recover the database. They also admitted that there was "an issue" sending out notifications to me, and confirmed that none were sent.

So I guess just a warning to all - your database might be nuked at any time. I learned my lesson about not doing an offsite backup regularly. I guess the bigger lesson though is that Heroku should really be a last resort option for projects these days. RIP.




This whole debacle has been such short-term thinking from Salesforce. That after carrying these free projects for years they couldn't stomach more than 30d of data retention is just the icing on the cake.

In my view this has caused yet further reputational harm for Heroku, and is going to have a long-term effect on the bottom line from paid projects. The value prop of Heroku has always been being able to sleep at night, but clearly that's gone now.


Herokai here. Unfortunately we had no choice on the data retention front — once we’ve disconnected your database, we aren’t ALLOWED to hold your data for more than 30 days. That’s part of the data scrubbing protocol that we agree to when you sign up. We fought hard for 90+ days internally, but in the end couldn’t get over the issue that we’d be in violation of our contracts with customers.


Having worked at Heroku and had a large part in building Heroku Postgres, I do not recall this explicit policy, and it seems very squirrelly to me. Maybe the policy was introduced in recent years and it really is the case, but it still seems like hiding behind a policy as opposed to doing right by customers.

You could easily block all incoming connections to the database. For a free database of 10k rows there were no SLAs, and you would still technically be hosting the database.

Even taking a dump and emailing it to me feels like a safer option here.

There were better answers here for sure. If the honest answer is "we just didn't feel the effort was worth it for this class of users", at least own that.


Having been through a SOC2 audit: this wouldn't fly. It's on the checklist of issues that you get hit with regardless of what kind of company you are: when customer accounts are terminated, the data retention clock starts ticking.

You can pick an arbitrary time frame for retention, but whatever you pick, you have to communicate to users, and you can't just change it on a whim. Normal customers want this clock short. They don't want you to retain their stuff after they cancel.


But this isn't an account termination. They deleted the data of an active account, without informing the user.

Why could they not turn it into a read-only database without access from the Heroku apps instead? Then it'd just be a routine change to the service offered, would it not?


The customer account wasn't terminated, the free DB being used by paid Dynos was deleted without any input from or notification to the customer.

I highly doubt normal customers want this clock short when the cancellation is not customer initiated.


Data retention policies are written by compliance people, not product people, so distinctions between overt, deliberate cancellation and "cancellation for nonpayment" or "abandonment" or "discontinuation" usually aren't captured in them. I'm not saying it's great that Heroku deleted these databases; I'm saying: the description given upthread, that the databases were deleted because of contractual requirements, is super plausible.


Plausible, sure, but that doesn't mean it isn't a cop out. There are many things that Heroku could have done to prevent this, like delaying the disconnection when they don't have confirmed delivery of the notifications and/or when they are connected to paid dynos/accounts.

This didn't happen because of the contract, but because the people implementing this transition didn't give a crap.


I'm just telling you that what the commenter upthread said rang true. I'm not offering an assessment of compliance regimes.


If you just disconnect the database, the same way a user could (on most services?) to keep the database around and idle, there's no way that counts as any kind of deletion. So do that if you need to and you really want to give customers 90 days to react.


It feels like it would've been better to have written things such that, if the database is attached to a still-active account that's paying for other services, things would be permitted to work a bit differently. But of course the relevant clauses will have been written long before anybody anticipated a new owner deciding to turn a bunch of things off, so under the circumstances at the time of writing it's unsurprising, and not really the authors' fault, that they didn't consider that.

I had a vague thought for a moment about maybe giving customers the option to opt in to an updated set of terms that would've been better designed for the situation ... but since the original Tell HN author didn't get any emails about this at all, presumably they wouldn't have got that one either, so even if that had been done (and given 'vague thought' I'm not claiming any particular level of 'good idea' to it) it presumably wouldn't have helped them anyway.

It seems to me the root problem here isn't so much the compliance initiated policies as a complete failure to take 'make sure people have been adequately warned' sufficiently seriously given the potential consequences.

Deeply unfortunate.

(read that last sentence in understated deadpan dry en_UK to get an appropriate read on just how impressed I'm not by the communications cockup)


Compliance people are product people too (even if in disguise), and obviously (?) vice versa.

Silos lead to issues like this. Cross-functional teams cost more, etc.


Not so much, no.


If compliance affects any user-facing property of the product, then they are, by definition.

Onboarding users/partners would be easier without making them read/click all those consent checkboxes, etc.


That's like saying everybody in a product company is a product person. No, it really doesn't work that way, at least not in general.


The simplest test is "who are you defending"?

Ideally, product people defend the end users. Realistically, compliance people defend the company from getting screwed in an audit and ultimately sued by the government.


They’re accountants in my experience.


Of course it’s plausible. Literally anything to do with contracts or legal in a corp like Salesforce is plausible.


I don't think there's anything in SOC2 that says you can't have multiple tiers of deactivating an account. I think the crucial point here is that Salesforce is requesting the termination, not the customer, so any of their obligations to be responsive to customer data deletion requests are moot (unless the customer comes back and actually requests termination).


Create a passphrase, derive the key, encrypt the data dump with the key, send the passphrase to the customer, delete the key. Presto, you can keep the data as long as you want without any privacy implications, since only the customer could decrypt it. Of course, this assumes a working communication channel between the provider and the customer.
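
A minimal sketch of that idea in Python, assuming the third-party `cryptography` package; the file name is a placeholder, and none of this reflects anything Heroku actually does:

    import base64, os, secrets
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.hashes import SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    passphrase = secrets.token_urlsafe(32)   # handed to the customer out of band
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    key = base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

    with open("customer_db.dump", "rb") as f:          # hypothetical dump file
        blob = salt + Fernet(key).encrypt(f.read())
    with open("customer_db.dump.enc", "wb") as f:
        f.write(blob)
    # provider discards the key, the passphrase and the plaintext dump; only the
    # customer, holding the passphrase, can re-derive the key from the stored salt

Decryption on the customer's side is just the reverse: re-derive the key from the passphrase plus the salt stored at the front of the blob, then Fernet-decrypt the rest.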


> Even taking a dump and emailing it to me feels like a safer option here.

I genuinely had to read this twice to get the intended meaning.


as someone who doesn't use heroku, so disregard my opinion:

i'd probably prefer feces-by-email to surprise database deletion


There is a legal difference between a company policy and a contract with a customer.


Yeah, ideally hold onto a backup for, say, a year; if the owner hasn't come and downloaded it by then, you can assume they don't want it.


It's not that easy. Those databases might contain personal information protected under privacy laws like GDPR, HIPAA and others. Heroku/Salesforce cannot simply store that for longer than agreed upon, and Heroku cancelling the service starts the retention period the customer agreed to as part of the T&C.


The account was not cancelled.


I wonder if you guys delete email addresses from people who ask to unsubscribe, or when I tell you to delete my information, according to GDPR.

Probably not, because it's "difficult".


Many avoid it, but once they have to deal with a (ex-)customer's lawyer they learn what is even more difficult.


This sounds like the legal equivalent of looking the wrong way through a telescope.

Whoever fostered that naive interpretation was a nitwit. If they’re an actual lawyer, they promoted an intentional, mutually harmful unilateral reinterpretation of an agreement and should be sacked.

Cowering behind T&Cs like this is intellectual bankruptcy. There’s always another solution. The law is not a programming language.


If I'm understanding you correctly, the 30 day policy is one that Heroku chose to put in the contract. Engineering might have fought the terms, and yes they need to be followed once set, but it seems totally fair to blame Heroku for creating the limitation in the first place.


When they were written, short sighted acquirers yeeting the free tier was likely not something the people writing the relevant clauses were even considering as a possibility, and honestly it's such a ridiculous decision from a commercial perspective that I find it hard to assign blame for not foreseeing it.

Plus, it would all likely have worked out fine if they'd emailed the customer a warning or three like they intended to do - it was the failure to do so combined with the failure to detect and remediate the initial failure that sent things down such a dark path here.


Are you allowed to inform paying customers that you are going to do this? This is my primary complaint here. I don't understand how this oversight happened. Recovering from this is going to cost an enormous amount of time and energy.


> Are you allowed to inform paying customers that you are going to do this?

I can't be the only one who's basically completely blind to emails from major companies, including SaaS providers, because they're so fucking spammy that the SNR is like 1:99. Notifying me by email, for one of these places, is functionally the same as not notifying me at all.

[EDIT] Sorry, didn't mean to imply the parent wasn't paying attention, just that I'd fully expect a very high percentage of their users to miss the warning in all the noise even if they emailed everyone—even if they emailed them a couple times, actually. That's the cost of every company sending out tons of "join our online seminar on [product]!" and "hey, look, it's our newsletter you never read!" and "it's time for our weekly TOS modification!" emails.


> ...because they're so fucking spammy that the SNR is like 1:99.

This 1000x. I signed up for an SMS gateway service last year. Just for my own hobby use, nothing major. I gave them $10 to start service, and they charge like 2¢ or something per outgoing message.

They have like 180 different prices for 180 countries, territories, provinces, parishes, cantons, prefectures, etc. Every week one of those prices changes, and I get an email notifying me of that. I tried to turn those off in my preferences, but they refused. I can opt out of marketing, weekly digests, and "tips and tricks"; but I can't opt out of pricing notices.

So I added a rule on my end to hide those. I totally understand where they're coming from. They can't NOT give pricing change warnings. But at the same time, in the flood of constant notices, there may be something major I will miss.

It would be nice if they instead gave me the option to never spend more than $0.XX per message, and return an API error if an attempted send fails for price threshold violation. Then the spam wouldn't be needed.
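
Something like this hypothetical client-side guard would do it; every name here is made up for illustration and is not any real gateway's API:

    MAX_PRICE_USD = 0.05                     # per-message cap the customer sets once

    class PriceCapExceeded(Exception):
        pass

    def guarded_send(send_fn, destination, body, current_price_usd):
        # send_fn stands in for whatever the gateway SDK exposes; current_price_usd
        # would come from the provider's published price list, not a weekly email
        if current_price_usd > MAX_PRICE_USD:
            raise PriceCapExceeded(
                f"sending to {destination} costs ${current_price_usd:.3f}, "
                f"cap is ${MAX_PRICE_USD:.2f}")
        return send_fn(destination, body)

Ideally the provider would enforce the same cap server-side and return an API error, which is the version that actually removes the need for the notification emails.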


Shit, that'd even let them optimize their pricing. "If we double this rate, 25% of our customers will start erroring, but factoring in message volume of those customers and assuming they all leave us, it'll still be a 60% increase in revenue from this product."


I'm not blind, they didn't send one. And they admitted to me that they did not send one.


Well, that's even worse, of course.


That's a major failure on their part. I did get several nagging emails from them.


Even if they did send it, email isn't guaranteed. What if it went to spam or was otherwise mishandled or missed?


I use heroku at work and they definitely sent out a ton of emails. It also said, right above all third-party plugins, that they were going to delete them (even if you weren't using their database service).


Right, but in this case - as confirmed by Heroku when contacted - something went wrong at their end and the emails weren't sent in the first place.

The fact that the failure to send a single warning email to this particular customer wasn't both detected and considered a five alarm fire level bug under the circumstances is the thing that moved the decision to sunset the free tier from sad to disastrous for the user who posted the Tell HN in the first place.


That's peak idiocy and the product of lawyers taking over the asylum.


> We fought hard for 90+ days internally, but in the end couldn’t get over the issue that we’d be in violation of our contracts with customers.

Contracts with some customers, surely? You could have the default be 90+ days, then those customers whose contracts specify a shorter timeframe get that shorter timeframe configured on their account instead. You could give the customer the choice at signup, and let them change it later using the settings console. If their contract doesn't specify a period, send them a notification that you will be changing it to 90+ days, but telling them they have the right to object if they disagree with that.


The phrase "aren't allowed" supposes some regulatory agency forbidding an action. When it's your own internal policy that contradicts the action, the proper term is "won't".


Every single executive in charge of this decision about how to handle precious customer data at Heroku should feel completely ashamed and take a long, hard look in the mirror.


No choice? That's just the way it is, it can't be helped? Did God himself come down and decree that Herokai *MUST* only hold data for 30 days? Did the FBI come in and threaten to charge your executives with sedition?

Yea, no. You decided to make the contracts that way. The fact that you "fought hard" but 30-day retention was decided on anyway means that clearly the opinions of engineers don't matter and that the company is completely captured by the lawyers and out-of-touch executives. It hardly inspires confidence.

It also doesn't at all address the fact that you failed to contact an apparently paying customer that their data was about to be nuked, contract or no.


Using the fact that a customer was shown the exit as the basis for destroying their assets doesn't look great to me, at least on the surface…


Please. You'd just need to ask if the customer is OK with 90 days instead of 30. Done.

The company has no commercial interest in doing that, though.


It doesn't sound like this would have helped in this particular case since they were unable to contact the customer.


I was replying to a bullshit claim that they cannot retain for more than 30 days no matter what.

That's hiding behind the T&Cs instead of owning their decision not to even try because there is nothing in it for them.


So you're in favor of companies breaking their terms and conditions at will? I think that would cause quite a lot more outrage and problems.


> So you're in favor of companies breaking their terms and conditions at will? I think that would cause quite a lot more outrage and problems.

Do you really never get e-mails from various companies every few weeks telling you that the terms of service have been updated?

I think I get one from eBay alone every other day.


Those can't be compared that easily. There are many changes you can make easily, but changing data retention periods impacts guarantees customers may have given in compliance with privacy regulations like GDPR, HIPAA, etc., and thus is a substantial change for customers in areas with such regulation.


You did not read my comments, did you?

The T&Cs are an agreement between the parties. That agreement can be changed at any time if both parties agree. So they just need to ask.


But if they make an exception for one person, they're opening the doors to complaints and possibly even legal action from others who want the same change. So making an exception for one person isn't really making an exception for just one person. Instead, it's a large process that needs careful legal consideration.


Contracts can be amended when all parties agree on a change.


An exception for a single client might fly for a small company, but it's not really a _simple_ exception when you're a large company (like Salesforce). It needs careful legal consideration and possibly means changing the general T&C for everyone.


If you enable daily backups, are those nuked too?


That would be interesting to learn. If so, backups would be just an additional surcharge rather than a "real backup strategy".


I assumed the value prop of being on Heroku in 2023 was purely "not having to do the pain in the butt to migrate your legacy app that you made on Heroku in the pre-2010 era" - in other words, I would be really shocked if Heroku got significant new business or any growth at all, now that it's just expensive AWS with some basic CI integration points.

I also assume Salesforce only bought them as a cash generator and has no interest in investing in it. So if they saved bottom line from this move, that's a win for them.

(Feel free to correct my assumptions if I'm very wrong)


"Pre-2010"? Heroku only launched in April 2009. So I would assume the vast majority of current customers are not actually from a "pre-2010 era". Heroku didn't even have a postgres add-on until November 2010 -- the salesforce purchase went through in Dec 2010. In fact, the heroku "golden years" were mostly during the early salesforce period. (Heroku put matz, creator of ruby, on staff in July 2011; there was no Heroku API until 2014!). (source for timeline: https://www.heroku.com/about)

Heroku is still quite a bit of value-added over "AWS with some basic CI integration"; people chose it and still choose it because it requires a lot lot lot less in-house expertise and management hours than AWS for most kinds of standard apps. (I'm not totally sure what AWS services/architectures you are thinking of when you say that; but I'll say: for pretty much all of them.)

(Heroku def has more competition in that space than it did 10 years ago, opinions differ on the relative merits)

AWS of course already existed when Heroku was launched, so if you do consider them just "AWS with some CI integration" then I guess they would have been from the start? What would have made that true now if it wasn't then?

I think Salesforce thought it would somehow be more "synergistic" with their other offerings than it has turned out to be. That they'd get Salesforce customers on heroku when they needed something beyond the "no-code" tools Salesforce already provided, or that they'd do better at converting heroku customers to salesforce customers than they have been. It does seem to be true that salesforce stopped really investing in new heroku features or improvements some years ago, and seem to be looking to minimize costs while continuing to collect revenue, I agree. (Sometimes I'm not even sure how much they care about continuing to collect revenue...)


Sorry, my memory on the dates was wrong.

> it requires a lot lot lot less in-house expertise and management hours than AWS

Yeah, you're not wrong there. I do worry though that it's still too complex to really trust someone with no devops bg to be fully responsible for, and at the same time, it's so outmoded (just meaning, it's not that popular now besides for legacy projects) that you aren't going to find a lot of good candidates who are expert with Heroku compared to expert with AWS.

So to me, Heroku is a gamble that you can get by indefinitely with only what you(r existing eng staff) know how to do in Heroku. Which for some projects, maybe that's a decent gamble for the payoff of easier admin in the short term.

> if you do consider them just "AWS with some CI integration" then I guess they would have been from the start? What would have made that true now if it wasn't then?

Yeah good point -- Let's see if I can try to explain my claim better. I think in 2009 using AWS was completely inelegant in every way, it was pretty barebones and basically just "Rent an EC2 server and do whatever you want, good luck." Heroku had a unique (at that time) approach which made cloud hosting much simpler and more abstract. AWS today at least offers a few ways you can host an application on AWS while doing less server maintenance than you'd have needed in '09. Infrastructure As Code is also something I think has a lot of popularity today that wasn't really a thing in 2009, and Heroku if I remember correctly, isn't really about that. So there isn't as much repeatability on that platform, it's more "click around and build your stuff out in the GUI." Which again, is cool in the short term to bootstrap, but can be painful at a bigger scale.


I used Heroku’s free tier for small projects/prototype. I continue to use their cheapest offering for new projects because it saves me time. If a project grows, I can reevaluate my choice. I don’t think anything in Heroku is a pain to migrate away from. I would miss the intuitive/familiar UI.


Check out render.com, it's a solid alternative.


We currently use Render. It's excellent, but it's missing a couple of key features, like Heroku's continuous rollback protection. If they offered this they'd be head and shoulders better.


(Render CEO) We're actively working on Point-in-Time Recovery and hope to launch it in early access soon.


Think we (Crunchy Data) have been mentioned a few times below; we have near feature parity with Heroku Postgres (the only thing missing is dataclips, coming this quarter). We've got folks who migrate the DB to us and then use a whole host of other options including Fly, Railway, Render, etc.


> Salesforce only bought them as a cash generator

As an early Heroku employee, this gave me quite the chuckle.


Sorry, I should have guessed that wasn't accurate. But like, surely after they stopped doing any investment in the platform, didn't it start to generate profit easily? How could reselling AWS resources with a premium UI not be profitable?

I just imagined Benioff going "Cool, we can buy this and continue to collect the checks every month, while not developing the UI or integrating it in any way into our business." Basically the same thing they did with Exact Target.


Same - wasn’t an employee but used it often in ~2013-2015 timeframe. I’m pretty sure the valuation was many many many multiples of any sort of revenue based metric, wasn’t it?


Yep.

Also re the timeline, the acquisition was announced in 2010. So your own usage was already a few years post.


Salesforce is like the Midas touch of destruction.


render.com


"I learned my lesson about not doing an offsite backup regularly"

Heroku has been a shitshow since the Salesforce takeover, and this isn't to shit on you, because I know it really sucks. BUT please, everyone, do offsite backups and test them. Please people. Please. If you have anything that is important, BACK IT UP on your own, outside of the provider.

Heck, we wrote our own script to backup RDS databases offsite as well even though RDS has backups and restore options. I want that database file.
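
For anyone wanting to do the same, here's roughly the shape of that kind of offsite job in Python (pg_dump plus boto3; the connection string and bucket name are placeholders):

    import datetime, subprocess
    import boto3  # pip install boto3

    DB_URL = "postgres://user:pass@your-rds-host:5432/appdb"  # placeholder
    BUCKET = "my-offsite-backups"  # bucket in a different account/provider than the DB

    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    dump_path = f"/tmp/appdb-{stamp}.dump"

    # custom-format dump so pg_restore can do selective/parallel restores later
    subprocess.run(["pg_dump", "--format=custom", "--file", dump_path, DB_URL], check=True)

    boto3.client("s3").upload_file(dump_path, BUCKET, f"appdb/{stamp}.dump")

Run it from cron somewhere that isn't the same account hosting the database, and you have that database file you want.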


I spoke out loudly at a previous employer about how it's really dumb to have our database and its backups under a single AWS account. A single AWS account compromise, account issue (e.g. AWS shuts down the account), or a disgruntled employee could result in the business being destroyed.

They took some half-hearted efforts to back up the data, but it was far from ideal.

Back up your data, and do it in different places so a failure in one won't affect any other copy of the data.


Yeah, it's pretty cheap and easy, at least relative to the value of the data to most companies, to set up a server you have physical control of and download the backup to it once per day.


It's not exactly cheap to always have a backup that you know works. You have to set it up and test it periodically. Of course, that doesn't mean you shouldn't do it.


Very much that. As a former SRE responsible for tape storage, I saw things like regular backups that contained nothing but the error message "You have no access to this database". Guess what happened when the team accidentally dropped the database?

Unless you're doing regular restores, you don't have a backup. You have hope. So yes, doing backups in a way that gives you some form of guarantee isn't exactly cheap.
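
The cheapest version of "regular restores" I've seen work is a scheduled job that restores the latest dump into a scratch database and runs a sanity query; a rough sketch (paths, names and the table are placeholders):

    import subprocess
    import psycopg2  # pip install psycopg2-binary

    SCRATCH_URL = "postgres://localhost:5432/restore_check"  # throwaway local database
    LATEST_DUMP = "/backups/appdb-latest.dump"               # placeholder path

    # wipe and recreate the scratch db, then restore into it
    subprocess.run(["dropdb", "--if-exists", "restore_check"], check=True)
    subprocess.run(["createdb", "restore_check"], check=True)
    subprocess.run(["pg_restore", "--no-owner", "--dbname", SCRATCH_URL, LATEST_DUMP], check=True)

    # the actual "does this look like our data" check; adjust to your own schema
    with psycopg2.connect(SCRATCH_URL) as conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM users")            # hypothetical table
        assert cur.fetchone()[0] > 0, "backup restored but looks empty -- page someone"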


Or we do have everything in the backup, but the restore process isn't worked out. Someone is losing a weekend writing hacky scripts, and every SLO is being violated, if we ever have to use it for real.


Or you have everything in order but the encryption keys are gone.


That case is still a lot better than no backups and having to tell your customers that the data is gone.


Make them test their disaster recovery plan. One organization I worked for took three tries before they could even simulate their disaster recovery process.


I wish this kind of warning would come with a link to a website like dobackups.org that would document how to actually do them and test them. Something like a front page with different "profiles" like "Windows user", "Web application", "Database". Add to that some more precise docs, links to different solutions, maybe a bit of transparent sponsorship from backup companies, good practices like "how often to do backups", more specialized info for different use cases like family photos, legal documents (with user-contributed infos by countries).

I would like to do backups but it's already not really easy, even harder when you add the whole "figure out how to do backups" to it.


After the Salesforce takeover? The one in 2010? So Heroku was only good for like a year!?


Ex-Herokai here. In some sense, SFDC didn't start taking over H until several years after we were acquired. We had our own CEO, office, mission, org structure, annual meetup, etc.

Then, year by year, those things were taken away. Eventually, circa 2018, investment in the core H product started being reduced and individuals and teams started being reorged to focus on Salesforce priorities. H became a "keeping the lights on" job. Folks who wanted to keep pushing left to work on GitHub actions or on render.com, netlify, other places still striving for great devex.


Heh. I always loved Heroku and have complained that Salesforce is messing it up recently, but turns out the first time I used Heroku was like 5 years after the acquisition. I think Salesforce just started inserting their own branding more recently.


At a guess, Salesforce bought it hoping it would take over the entire space rather than merely having the capacity to be one profitable provider among many, and lost interest once it became clear it was going to be the latter rather than the former.

(guess based only on having watched things unfold from the outside without ever being a user/customer so please do add salt to taste)


As a customer with no knowledge of the inside, this is all I've noticed, which probably misses some things:

1. GitHub integration breach, which wasn't handled well but luckily didn't affect me.

2. No more free-tier DBs :(

3. Salesforce logo, which ofc doesn't matter.

Hearing vague negative things here makes me nervous about the future because I really like Heroku and don't want to be stuck using AWS directly.


A service can run in a state of, if not benevolent, then at least ambivalent, neglect for quite some time without it being a disaster for the users.

I would suggest you start poking around at possible alternatives just in case, but not as yet with any great sense of urgency.

If you Ctrl-F for crunchydata in this comment section you'll find an employee of theirs talking about their postgres hosting and listing a bunch of services their customers use for the other parts of the puzzle. They've hired enough heavyweight core contributors and active, clueful community members that I think it's reasonable to say they're -good- at postgres.

I would say that what I expect to be most likely to happen is gradually increasing prices and gradually degrading service (the people still working on it do seem to care but I don't know if there are enough of them left to avoid bitrot setting in and even if there are today, in a year or two there may not be) so having a plan to move in an orderly fashion but not executing it yet seems like the wise approach.

Neither panicking and moving in a fast and risky way now, nor waiting until (if it happens) things go from aggravating to actively intolerable and moving in a fast and risky way then, are likely to be particularly good ideas.

OTOH, being prepared to migrate elsewhere in an orderly fashion if the cost/benefit calculations tell you it's time, re-running those calculations and double checking your plan every so often, and continuing to enjoy the service in the mean time, seems like a reasonable, responsible, and overall relatively pleasant way forwards.

(disclaimer: I am not a Heroku user myself, but I think the general principles almost certainly apply here just fine, and I'd certainly be comfortable giving the same advice to a consultancy client at work, so without being foolish enough to claim I'm definitely right I'd suggest the above analysis is at least worthy of giving serious consideration to)


Luckily I keep to basic ways of hosting things, so if the time comes to move, I should be able to. A replicated NodeJS web backend with a Postgres DB is going to be supported in tons of places. No special logging, just stdout. Some cronjobs and outbound HTTPS requests, that's typical. The most unusual thing I do is open an SSH tunnel to some Linux server if there's a special long-running process I need to call RPCs on.


Don't mind my curiosity, but I wonder if you do offsite backups of your emails or just take it for granted that Gmail is a reliable enough service? It is very important to me, but I never even considered backing up my Google Takeout data. The point I want to make is that service reliability is a critical factor when considering what to back up. It was expected from Heroku to do the right thing and be conservative in their approach.


Anyone using Google products should absolutely backup their data regularly.


Personally I sync the emails in my Gmail account to my server at Hetzner using imapsync [1] every once in a while. The data on that server is backed up off-site at rsync.net using Borg. I'm intentionally not using Hetzner's storage boxes, because I want my backup to be entirely independent, e.g. if my Hetzner customer account is closed down for whatever reason.

[1] https://imapsync.lamiral.info/


I hear you. I didn't do it for a while but I did 2 things more recently:

1. Set up IMAP backups locally on my computer; everything on the computer is backed up to a 3rd-party backup tool

2. Use Google Takeout from time to time (it is a bit weird at times though).


Backups are apparently "too 20th century" for today's cloud-focused devops, you can just trust your entire business to someone else's procedures and if it all disappears in a puff of smoke, so do you.

(or just maybe we shouldn't have thrown out all the sysadmins with the bathwater)


This happened to me as well: projects with paid dynos but free databases, databases got nuked.

How Heroku missed this is beyond me. They managed to screw over paying customers in their broad attempt to stop freeloaders.

These are good accounts with credit cards on file. Why not just autoconvert me to the lowest tier paid database?

FWIW I was able to get them to restore my databases. But I also had free Heroku Redis on one of my projects and that, they assured me, is gone forever.


I got "lucky" because I randomly checked the status page on my Heroku dashboard. It laid out which services were going to cost me after the switch date. I upgraded to the Eco tier and noticed that the DBs were still marked with a warning. So yeah, I would've been burned as well, had I not done that.


Right? Like just charge me and tell me. Also, while I still think it is way too aggressive how they deprecated free in general, at the very least... maybe uh... I don't know, let me know if you are going to delete my database? Absolutely insane.


Redis is a memory store, you're the one who fucked up treating a cache as a persistence layer.


Redis offers disk persistence [0]. Why can't it be used as a persistence layer?

[0] - https://redis.io/docs/management/persistence/


Redis offers disk persistence; Heroku's free Redis offering did not.


You can use Redis as a cache, but it's not fundamentally a cache.


Heroku sent me repeated messages that they would shut down my account due to inactivity. I was fine with that because finding and turning off the supposedly active Dyno was impossible. Guess what? Still charging me $27 a month for a server I can't even manage.


If this is true I think it deserves spotlight in a separate post.


A bit of a tangent, but my only real problem with Heroku involved a premium DB. Turns out that upgrading to premium enables high availability (HA) by default, and I don't even remember if you can disable it. HA replicates asynchronously to the standby master, so a master failover can cause a small amount of data loss. For my application, this was unacceptable, and I would have preferred unavailability instead (see CAP theorem). Today I have enough experience to check the fine print for that kind of detail, but anyway such a big change should come with big bold letters IMO.

[Edit: To this day I'm still puzzled by what I'm about to describe, so idk if it's Heroku's fault or mine.] I got a call from my colleague one day saying our database had gone back in time. Evidently we lost an hour of records. The code wasn't even capable of deleting rows, and nobody had direct DB access but me, so after leafing through the docs I suspected a failover event caused it. Premium DBs also let you roll back the DB to a previous point in time, and we were able to recover most of our data this way, like Back to the Future. If this really was a failover event, it's super weird that the backup was more up to date than the standby master, and that a whole hour (rather than a minute) was lost.


Having an HA follower is the only difference between the Premium and Standard tiers, so I'm not really sure what else you expected them to do in this case. Like, premium-6 is 2x the cost of the standard-6 plan explicitly because of the HA follower.


Yes. I was inexperienced, saw "high availability," and didn't realize the standby could fall behind and lose data.


Also, it's worded like a strict improvement when really it's a tradeoff. You're sacrificing the guarantee of persistence for more availability. I feel like most people who know what this means are not going to want it.


Yes, AWS is similar with their DB offerings. You can discourage it from doing any updates/reboots (which causes a failover), but ultimately if they want to failover, they can at any time.


Ouch. If it knows it's about to fail over from an update, it really should get the follower totally in sync with the leader first.


HA on rds uses synchronous replication - you won’t lose data on automated failover under any normal circumstances.


Ok that's fine


I wonder what architecture they use that can lose an hour of data?

Most architectures I see might lose a few milliseconds of writes in the typical case, and perhaps a second of writes in the worst case (which occurs when the master gets islanded with a couple of clients).


If it was really the HA causing this, maybe the follower had a temp outage before the leader and hadn't yet caught up.

I already don't want HA if there's a chance for even 1 second of data loss, but for those who can tolerate that, there really should be an upper bound on the staleness. If your leader fails, the follower shouldn't take over unless it knows it's close to up-to-date.
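
For Postgres at least, the staleness is measurable if the failover logic bothers to look; a sketch of the kind of gate you'd want, run against the standby before promoting it (connection string is a placeholder):

    import psycopg2  # pip install psycopg2-binary

    MAX_LAG_SECONDS = 5   # whatever "close to up-to-date" means for your app

    with psycopg2.connect("postgres://standby-host:5432/appdb") as conn, conn.cursor() as cur:
        # seconds since the last transaction this standby replayed; note this also
        # grows when the primary is simply idle, so treat it as an upper bound
        cur.execute("SELECT COALESCE(EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()), 0)")
        lag = cur.fetchone()[0]

    if lag > MAX_LAG_SECONDS:
        raise RuntimeError(f"standby is {lag:.1f}s behind -- refusing to promote")

Whether Heroku's HA machinery does anything like this, I have no idea.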


Heroku banned my account with several income generating projects on it with zero notice. When I called them they treated me like a criminal and like they were doing me a massive fucking favor by even talking to me. I suspect I was supposed to get email notifications leading up to the ban to take corrective action but 100% did not get anything.

For context, someone put in malicious DMCA complaints against my website. Heroku did a shit fucking job (i.e. nothing) verifying the complaints. They provided no ability to dispute that it was bullshit. I took my business elsewhere, not that they care.


Yup - this happened to me too. It was wildly frustrating, but looking back, they did send a few emails - it was just a really small project and it wasn't on my radar.

What pissed me off most is that I WAS paying for the account. I was paying $17/mo for redis and dynos, so there was an active card on file.

Why not just start charging for the postgres db, and only delete if there's no active billing?

Heroku was already a no-go for me with new projects, I just keep old things running in there since it's too much work to migrate off. This just cements that for me.


> Why not just start charging for the postgres db, and only delete if there's no active billing?

Probably because the original terms you agreed to were written without anticipating Salesforce perpetrating this, so while I would not at all be surprised if the vast majority of customers in your position would've been entirely happy with it, they probably didn't have a legal path to do so.


> they probably didn't have a legal path to do so.

That makes no sense. Most large companies change both agreements and prices via notification emails all the time. Probably even Salesforce does so.


Given a Heroku employee showed up in a different subthread of this comment section saying they wanted to change things but couldn't, in spite of a long discussion between the technical and legal sides trying to find a way to do so, it seems to me at least plausible that in this case, for whatever reason, that wasn't an option.

If you want to do a close read of the terms and conditions/contract language as of a reasonable guess as to when OP signed up and link the section you believe does give them the right to make such a change unilaterally, I'd be happy to read it and discuss further.

My current position in the mean time will, however, remain that while there's always a chance (often a pretty good one) that my hypothesis is wrong, claiming it makes no sense at all is a stronger claim than is justified by the information forming the basis of this discussion.


The reputation hit to Heroku is real. It's too bad that they've made these changes and stopped pushing new features.

I built my prior business on Heroku and it was wonderful. With my current company I've switched to Render (though I am looking for something else). For me the writing was on the wall for Heroku, and it wasn't worth it to me to dig in there if new features and support weren't a given.

Curious for anyone with an inside perspective: what happened? Heroku was so far ahead of the pack for a while, then suddenly stopped staying a step ahead. Was this intentional? Why?



Same exact thing happened to me. Out of nowhere, my (paid) app started failing. It was only after signing in that I saw the entire postgres database had been removed. No warning that this was going to happen, no ability to recover a backup. Just... deleted.

Not the end of the world for me, because 95% of the data I had in there had already been processed, but I did lose some. Complete joke of a service.


Something similar happened to me on IBM Cloud, back when it was called Bluemix. This was early in my career for an early-stage startup (that eventually shuttered - not due to this issue, but rather due to a lack of investor interest and us running out of runway as a result). Easy enough setup: staging and production, each with two Docker containers, one for the (Elixir/Sugar) app and one with the (Postgres) database.

Well, one day the production Postgres container just... vanishes. All the storage is gone. After weeks back and forth with IBM's support, they confirm that the loss is permanent. No explanation, no refunds, just "lol fuck you". Naturally, we didn't have backups yet, so the few users we did have now had to start from scratch.

I, too, learned my lesson about not doing an offsite backup regularly - and the bigger lesson that Bluemix should've been a last resort option. Funny enough, we had migrated to that from Heroku; we ended up migrating again to AWS (specifically: Elastic Beanstalk).


Same here. I have 2 big gripes:

- I did not receive any email warnings that this would happen. I have a valid email registered with them because I do receive non-promotional emails. However nothing for this. After getting in contact with customer service their rebuttal was that I also would've known by checking the forums/blog. Who on earth is doing that? It's a terrible response.

- They detached the DBs at the start of December. I wasn't going to fix my broken personal projects over the holidays. I told myself I would take a look come January. Which by that point it was past the 30 day grace period.

In the end, I've had to completely stop paying for the dynos. So Heroku lost business overall from me.


Same thing happened to me. My personal Rails website, which kept my personal blog posts in a postgres database, lost everything. Most of those posts date back to 2014.

I made some mistakes, like not backing up my db and not keeping up on maintenance. But I didn't think it would result in a complete loss of the database considering I was still paying Heroku. This app had been running since 2014. After this incident I removed my website from Heroku.


In case this is useful, I recovered some lost content from my blog using the Internet Archive a few years ago and published notes about how I did it here: https://simonwillison.net/2017/Oct/8/missing-content/


Interesting, good to know. I do have an Internet Archive copy of my website as well.


We also had some of our free databases on our Team tier projects nuked, which Heroku said would instead be automatically upgraded to lowest paid tier. No warning emails either, to anyone on our team.

My personal account did get a bunch of warning emails but nothing on our business accounts.


Same happened to me, I didn't receive any email notification or anything, found out when things started failing.


I am wondering more and more if any trust in a single cloud provider isn't simply an unacceptable risk. The power balance means that a minor error on their side is generally business-ending for you. This is different from almost any other supplier, which can generally be replaced, even if the business is on hold for a few days. So any data recovery plan should be able to answer: what if the relationship with our cloud provider disappears unexpectedly?

This is not a warning against only heroku. Google is famous for terminating without any recourse random accounts because they felt like it that day. Amazon and especially Microsoft seem more dependable, but even they had their share of business-killing behaviour.

So e.g. a backup with another cloud vendor is a requirement for almost any business. And of course, validate it. Easier said than done at scale, of course, but even a partially failed backup is better than nothing.


Even the seemingly independent ones can get acquired by a Google or a Facebook and eventually killed.

Anyone remember Parse? It was quite successful until Facebook got hold of it and seemingly out of nowhere decided to destroy it.


Same exact thing happened to a friend of mine who was just about to launch. He didn't realize that he had only paid for the dynos, but was using a free tier DB. No communication at all, and they wiped out his data. Needless to say, he is no longer a customer.

He's moved on to Render since then (and he's now backing up his data offsite). Painful lesson to learn, but at least he hadn't launched his product yet.


I remember being quite surprised when I first learned that Heroku was owned by Salesforce, because I had such a different impression of the two companies. I haven't used it in a few years, but Heroku used to be a great platform for certain types of projects. Unfortunately, the number of concerning stories I have heard about them in recent years has discouraged me from ever using Heroku for a future project.


For reference here's the email another user received in November.

"Remind HN: Heroku will delete all free dbs and shut down all free dynos Monday" https://news.ycombinator.com/item?id=33755651


For me, the messaging in the announcements, dashboard, emails, and discussions here was clear and obvious that free databases were going to be deleted. If someone managed to miss all that, that's on them.


No in-app notifications, no emails, and I was travelling and not following the news closely. I don't think I should have to rely on forums to know if I'm going to have all my data deleted without warning as a paying customer anyway.


They were charging money for thing A, that connected to (free) thing B, and rather than just begin charging the same customer, using same credit card, for thing B, they deleted it. I would love to see the recording of that zoom meeting...


Is there any way for it to make business sense, or was it pure incompetence?

I wonder how many databases they deleted that they could have instead started charging for. Seems to be incredibly destructive for all involved. (SF shareholders, Heroku customers, etc.)


Heroku also had weird business with Skeb.jp on 12/23-24 JST that was literally solved under the table - after Skeb redeployed to AWS. It seemed like a policy decision made at Salesforce, which owns Heroku. Whether such behavior inspires confidence in their business users, I don't know.


Anything stored with a cloud provider you're not paying a minimum of $10k a month should be assumed to be subject to evaporation without recourse at any time.

Make offsite backups.

For this use case, a raspberry pi 4 with a 1TB SD card hidden somewhere in your home with a cronjob is probably more than enough.


Oh crap! What a horror story.

We recently migrated from Heroku Postgres to Crunchy Bridge and can totally recommend it. So maybe set up your new database there instead.


The same thing happened to me, but I was fortunate enough to catch it in time for them to restore it. Very disappointing because I was paying Heroku over $100/month for various things and didn’t realize an actively used project was on a free tier.


On heroku, be sure you enable database backups with heroku pg:backups:schedule, and also script syncing those backups on a regular basis to some "off site" (non-heroku) storage like your own s3 buckets. It is easy to get the URLs to your backups with `heroku pg:backups:url`.

I have no faith that if a rogue employee clicks the "delete app" button (because did you know that to give an employee the ability to update the ssl cert on your web app you also need to give them permission to delete the whole damn app?) you'll ever be able to get your database back (although you might have a 30 day window to do so, I wouldn't trust it).
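
A rough sketch of the "sync to your own bucket" part, shelling out to the Heroku CLI and using boto3 (the app name and bucket are placeholders):

    import datetime, subprocess, urllib.request
    import boto3  # pip install boto3

    APP = "my-heroku-app"              # placeholder
    BUCKET = "my-own-offsite-bucket"   # bucket you control, outside Heroku

    # heroku pg:backups:url prints a short-lived signed URL for the latest backup
    url = subprocess.run(
        ["heroku", "pg:backups:url", "--app", APP],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    local_path = f"/tmp/{APP}-{stamp}.dump"
    urllib.request.urlretrieve(url, local_path)

    boto3.client("s3").upload_file(local_path, BUCKET, f"{APP}/{stamp}.dump")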


This happened to me but I filed a ticket and they were able to restore it. I guess maybe it was within the 30 day window?


This exact same thing happened to me with a client project I was working on - my client was paying $50/month for dynos, but we received no notifications about needing to pay for the database to avoid it being nuked!

On the upside, when I reached out to Heroku, they did offer to recover the database for me - but by this point in time, I'd already moved on with setting up a new one from scratch (with some changes from the original), so this wasn't particularly helpful.


I also didn’t receive a notification about it, but some good samaritan HN user posted days before to give people a heads up. That’s the only way I found out.


>your database might be nuked at any time

What is not directly possessed by you is not fully YOURS, and probably it is (un)clearly stated in the "free tier" terms of use.

Remember kids, when you encounter a free cloud thing on top of a free self-hostable thing, just spend another day and host it yourself, because you're just facing a sales funnel which will eventually collapse.


Heroku was so nice "back in the day", it was expensive but served me well. I started having some issues before the acquisition so I moved away. I guess I was lucky for that.

They must have had some billing problem, because I got charged for months after stopping everything. I was able to block the charges at the credit card level, but they kept trying, and I never got any answer or explanation from them.


What a sad, money grubbing set of actions from Heroku. Tarnishing their brand forever to save a few dollars. Shame on them for doing this. There were so many better ways to deal with this and still save money in the longer run such as, https://news.ycombinator.com/item?id=33759178.


> I guess the bigger lesson though is that Heroku should really be a last resort option for projects these days.

Maybe people shouldn't, I don't know because I've never used it, but ...

> I learned my lesson about not doing an offsite backup regularly.

This is the bigger lesson, no matter whether you are using third party database hosting, or hosting yourself, whether it's a NoSQL database or a SQL one.

Once the data is gone, it's gone.


Heroku could have done better. At a bare minimum, offering an option to download the existing data for a year, or to convert it to a paid db, would have been a better option and not brought this kind of scorn from the community.

I still use Heroku on a daily basis and wouldn't say this has caused me to re-evaluate my decision to stay with them, but then again I'm not using anything free from them.


Not surprised to hear that. This happened to me in 2019, and for the whole Heroku account not just 1 database. See https://news.ycombinator.com/item?id=21969358.

They have been going downhill since the Salesforce takeover.


Bought some HDDs with some left over end-of-quarter money, found an old Dell server, slapped them in, installed it in a switch closet, and have the cloud perform backups to this local closet server. I even do this at home. I don't trust these guys with anything. Sorry for your loss.


I remember at Google one project sent some announcement to incorrect recipients. Upon investigation it turned out that the bug was in all the SQL queries that selected the relevant customers, but it had been broken for years and no one complained before our investigation.


Well, I moved everything I had to a VPS for the time being because I left it to the end.

So I take this opportunity to ask, what alternatives have people moved to? I really haven't gone back to look what's out there.

Not looking for free, just alternatives to review.


You can host static sites on DigitalOcean's app platform for free. Their lowest tier offering for non-static sites is like $5/mo. Might be worth checking out.


For what kind of an app? If I was tasked to put an app someplace today, the answer would depend if it was easily containerized, and what kind of scale you're talking about.

For small apps, if containerized, I would host on Google Cloud Platform using App Engine Flexible Environment: https://cloud.google.com/appengine/docs/flexible Add on their managed database, of course.

But also, random VPSs are so cheap that if it works fine as is, you could do a lot worse than just running it on Linode, Digital Ocean, Hetzner, etc.


I went full blast into SQLite, agreed there's a migration job in between. But it's totally worth it. In fact my app is now 5x faster.


Happened to me as well; luckily it was just our staging environment, but it was while I was on leave. I feel one should reasonably expect their configuration to be stable for 3 months, and 30-day data retention is just insulting.


Reading this, I keep thinking the real questions are:

1. Notice sending and db deletion are probably two separate teams or responsibilities. Are they?

2. Did people know there was a bug in notice sending, so that some or all notices were not being sent? I ask this because generally in places I've worked where notice sending was an important part of things, you knew if there was a bug and notices were not being sent. But maybe it wasn't that important for Heroku. Maybe it was not known for a while that notices were not being sent - or was it known immediately but other parts of the business chugged along anyway?

3. If they knew notices were not being sent and they went ahead and deleted the db anyway, that seems messed up, but it would probably be OK with people if they had longer data retention for those who did not get notices sent.

4. The whole "your stuff can be deleted at any time without telling you" thing is probably true almost everywhere, in that notice sending can have a bug and deletion is probably not adequately tied to notice sending, such that automatic deletion stops if a notice was not sent. Which I'm thinking is probably the case everywhere - if you work somewhere with automatic deletion and a notice-sending module, what happens? Is this scenario handled?

5. The answer to this is probably no, but is there a legal issue if notices were not sent and data was deleted? The issue might be if only some notices were sent - if account A gets a notice about deletion and is thus able to act on it, and account B does not get a notice and thus cannot, there might be grounds for action. Probably not, but when something seems unfair there might be a law that can be stretched to fit it.


My guess would be that a tiny percentage of accounts didn't get their notices sent, nobody complained about it because they didn't realise they were supposed to be getting them (or at least for whatever reason no such complaints were escalated to the people who would go "oh shit" upon seeing one) and some unfortunate combination of technical factors meant whatever internal monitoring existed didn't pick up the omissions.

Note that this is not to say the end result wasn't an indefensible disaster, only that disasters seldom have only a single cause and the above is my best uneducated guess at how things came together to cause this one.


Similar issue with billing post the Salesforce acquisition. We're on an Enterprise Heroku plan, but the billing could only be handled within a Salesforce account (which we don't have). Autopay disconnected and our app went offline for 3 hours until we could confirm payment over the phone. Billing went from self-serve to having to call in and provide a credit card over the phone.


Best option is https://www.crunchydata.com/ IMO.


Never trust any company or hardware. Either can fail you at any moment. Always keep your own backup. Still sucks though.


This is probably the best advice


I'm curious how this affected your projects on paid dynos, can you provide more details around that?


So I did not realize this at first (it would have helped if I had been, you know, emailed some information about this), but my project was still using a free Postgres add-on despite being on a paid dyno setup. So that, I suppose, was nuked along with the general free tier deprecation.


Did they explain why the notifications didn't work?


"I discussed this with our backend team and upon deeper investigation, we were able to confirm that there was some issue sending the notifications to your email."

Nope.


Guesses 1, 2 and 3: their MTA's IP was blacklisted by a reputable mail hosting provider's spam filtering system (because they send spam).


That phrasing doesn't rule out an issue on your email server's end, like the email could have bounced. (Though I'm guessing that's unlikely.)


Would be surprised if that was the case here - vanilla GSuite setup.


As I said, unlikely. But not impossible. My work has a GSuite setup and we've had a few verifiable bounces over the years, though they weren't Google's fault but rather the fault of glitches at our DNS provider (we don't use Google for DNS).


I don't really understand. You were paying for the dyno, but they deleted your instance anyway?


They deleted the database.

On Heroku, databases and dynos are completely distinct entities, each with their own payment plans.

When you set up a dyno and “add” a database, as most people do, it’s really easy to think that they are part of the same plan, and conclude that the “pro” or paid dyno is actually the combo of the dyno and database. They are not.

I understand why they did this: databases can be shared between applications, which is handy, but it is a sharp edge.


The least they could have done is disconnect the database without deleting the data, so users would at least have a chance to make the decision to either delete or start paying.


This is something I've put into most workflows at $(WORKSPACE=/=heroku) that end up disabling a customer's systems. It depends a bit on how hard we have to kill a system, for various reasons. But at the softest setting, we usually start by pulling DNS or similar access configs for a minute, wait for a minute, disable for 2, wait for 2, ... and the 30 minutes never end.

This gets people on the phone with support quickly if they don't react to their account manager or to mails. For example, we once had a possible security incident coming from a customer system and they didn't react - because they tried to hide their processes internally. Scream-testing got us on calls with their InfoSec in a hurry.
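
A toy version of that escalation loop, just to show the shape of it; the disable/enable hooks stand in for whatever your platform actually exposes, and the intervals are illustrative.

    # Toy "scream test" loop: brown the system out in growing windows so the
    # customer notices and calls in, rather than being hard-killed at once.
    import time

    def scream_test(disable, enable, start_minutes=1, max_minutes=30):
        window = start_minutes
        while window <= max_minutes:
            disable()                   # e.g. pull DNS or an access config
            time.sleep(window * 60)
            enable()                    # restore service
            time.sleep(window * 60)     # give them a chance to react
            window *= 2                 # escalate: 1, 2, 4, 8, 16 ... minutes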


They were likely paying for a web dyno, but were using a free database instance.


Free db but paid dyno in this case I think


Does seem like a cautionary story that should rule Heroku out for anything except quick experiments.


I can only second most of the comments here. One thing should be clear: it doesn't matter whether you have a paid or free plan with someone who stores your data. Always make backups! There are so many things that can go wrong.


Heroku also has a free or cheap tool to back up databases to S3 and to download or copy the backup.

I totally understand the expectation that a managed database service will keep your data, but if it's truly mission critical, take backups…
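
If you want an offsite copy that doesn't depend on the provider at all, something along these lines is enough; a minimal sketch, assuming pg_dump is on PATH and boto3 credentials are configured, with the bucket name and key prefix as placeholders:

    # Minimal offsite-backup sketch: dump the database and copy it to a bucket
    # you control, outside the hosting provider. Names are placeholders.
    import os
    import subprocess
    from datetime import datetime, timezone

    import boto3

    db_url = os.environ["DATABASE_URL"]   # the provider-supplied connection string
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dump_path = f"/tmp/backup-{stamp}.dump"

    # Custom-format dump so individual tables can be restored with pg_restore.
    subprocess.run(["pg_dump", "--format=custom", "--file", dump_path, db_url], check=True)

    boto3.client("s3").upload_file(dump_path, "my-offsite-backups", f"postgres/backup-{stamp}.dump")

Run that nightly from cron or any scheduler and the worst case shrinks from "gone forever" to "a day of data".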


Times like this are great opportunities for growth. This can help you build consistent habits in the future to have more durable storage practices.


Mine was also deleted recently, I just had assumed I wasn’t reading their emails carefully, but good to know I wasn’t the only one.


"Cloud is just a fancy word for someone else's computer."


All your databases are belong to /dev/null


I also had this happen - unbelievable really


Not your hardware, not your database.


[flagged]


He said he used a paid dyno


Check your junk mail folder.


> They also admitted that there was "an issue" sending out notifications to me, and confirmed that none were sent.

Junk mail wasn't the issue in this case.


You didn't have a full backup of your database? That's on you. Disaster Recovery is an important part of any project. Heroku could have blipped out of existence by accident and you would still have the same problem.


What frustrated me about Heroku nuking free projects was that the paid version was, IMO, too expensive. (Granted, I understand that the free tier was probably a money sink for them.)

$5-10 a month is pocket change in some contexts, but for a hobby or a one-off project, it's a huge chunk of change.

I really wish they offered a lower-cost tier.


My personal take is that I'd rather work with Heroku than deal with AWS or GCS.

Sure, there are others out there that provide the same ease of use; I am just too lazy to switch ;)

OTOH I've never had them wipe a database on me, so I guess that would be the motivation to move on.


> I've never had them wipe a database on me,

Give it time, apparently ;)

If you're like me and really into ease of use, I really endorse DigitalOcean. People don't seem to use it at Big Important Companies, but I never had a problem with anything I hosted there when I was at my last, medium-sized company, and everything was so easy to manage in the GUI without having to learn proprietary APIs and whole stacks of infrastructure-as-code stuff like in AWS land.


DO is also good and less restrictive for a new user than AWS (i.e. when getting more resources). I remember having to beg to get more than 1 (or 2) servers at AWS as a new user.

I haven't looked recently, but does DO have Heroku-style click-to-add resources and git-push-to-deploy offerings?


I think this product is the closest: https://www.digitalocean.com/products/app-platform


> OTOH I've never had them wipe a database on me, so I guess that would be the motivation to move on.

About a year ago they shut down my personal blog based on a vague "violation of our terms." There was no violation, either. (I suspect they had to shut down phishing and other abusive sites, and cast a little too wide of a net.)

When they claimed to have turned it back on, I had to keep pestering them because they had goofed something or other. Eventually I couldn't push updates via GitHub. (I wrote the blog engine as an exercise to learn NodeJS.)

Again, if I could have bought an ultra-cheap tier, I would have been happy to pay.


Or let me share dynos across all my projects. $5-$10 per month for all projects, sounds good! $5 per month for each project adds up very quickly.


That was the new “eco” plan that was introduced alongside the ending of free plans — $5/mo for 1000 hours shared across all eco dynos.


I definitely wish you guys had advertised that earlier / better. I feel like I got lucky because I put off migrating until just before the sunset, at which point I discovered eco dynos. I swear they either didn't exist or were harder to find when the sunset was first announced. Thanks for making them available at all, though.

(Context: I maintain an app for an event that runs about one week a year. The ability to use free dynos the other 51 weeks means we retain the ability to do one-off analytics queries, minor development work, etc. during the off-season, without having to delete and recreate the app or something every year. Eco isn't quite as good as free, but it means we can still have separate staging and production instances during the off-season, without paying extra for staging to be idle, and without having to destroy and recreate staging every year.)


Wow, I don’t recall seeing this either. It must have been marketed poorly. Same for Postgres; it looks like there are now some more cost-effective options that weren’t available when I migrated my stuff away.


That's 2 cups of coffee, or 1 latte. It's objectively not a huge chunk of change unless you live in a very LCOL area.

You can also go to any of the competitors, which have their smallest tiers around the same price. Linode has a $5/mo tier.

You could also buy a Raspberry Pi for $50 and run it in your closet.


> for a hobby or a one-off project

I don't want to pay $5 / month indefinitely to host a weekend project.

I don't want to deal with the complexities of hosting a weekend project myself. I've done plenty of that.

I just don't want to end up spending $100-200 / month to host a lifetime of weekend projects and experiments.


Then you can get a Raspberry Pi and host it in your closet.


> I don't want to deal with the complexities of hosting a weekend project myself. I've done plenty of that.

That involves port forwarding, figuring out something like dyndns, and whatever I need to share domains on a single IP.



