I must announce the immediate end of service of SSLPing (sslping.com)
641 points by WelcomeShorty on April 11, 2022 | 262 comments



This seems like a perfectly reasonable way to end a service. The server died, and the effort to bring it back up seems high.

This is a reminder to pay for services you depend on. Per the Pinboard founder's post:

> I love free software and could not have built my site without it. But free web services are not like free software. If your free software project suddenly gets popular, you gain resources: testers, developers and people willing to pitch in. If your free website takes off, you lose resources. Your time is spent firefighting and your money all goes to the nice people at Linode.

> Like a service? Make them charge you or show you ads. If they won't do it, clone them and do it yourself. Soon you'll be the only game in town!

https://blog.pinboard.in/2011/12/don_t_be_a_free_user/


> If your free software project suddenly gets popular, you gain resources: testers, developers and people willing to pitch in.

I wouldn't even say that this is true. If your free software project gets popular you get: users who want their features built and don't want to build them themselves; PRs that need to be reviewed; co-maintainers who need to be communicated with; etc.

Yes, it can also have upsides, but depending on the project and its surrounding ecosystem (and culture), it can easily be more demanding than rewarding.


You also get users who don't understand a thing and who send you tons of emails and want A++ personal support.

Sell the same service for $$$ money and suddenly the users (as a group) are a much nicer bunch.

Offering "Free Software" is sometimes like working in retail.


I've seen this exact effect multiple times. It's really rather amazing that no one expects as much as someone who's paid absolutely nothing for it. Kind of a divide by zero = infinity in the human firmware or something.


Let's agree that it's a global tendency of humans to take advantage of other humans who have shown that they are willing to accept it.


Free software never meant free support, not even for the financial burden of providing source code. (Also why "libre software" is a better term.)


Go tell that to the people who shit all over the emails and bug trackers of free software developers. The rest of us are well aware.


I mean the solution to that predates personal computers...

https://kk.org/ct2/heinleins-fan-mail-solution/


The original complaint was "demanding and petty users flood you and take up your time, through the same channels your good users do", and your solution seems to be "just go ahead and let them take up your time"? What, is Heinlein's Ghost going to pat me on the back if I do? The reality is there is no real "solution" to this problem; there is no solution to "people are annoying." The very premise is completely ill-formed and only possible in a framework that views all human interaction and behavior as completely mechanical.

It's not all bad, though, working on free/libre software, and I'd be remiss not to mention that. Maybe I'm letting it hit too close to home and you're just saying it in jest. But anyone here pretending that you can "solve" these kinds of issues so easily is deluding themselves and telling on themselves at the same time, in my opinion.


> your solution seems to be "just go ahead and let them take up your time"?

How did you get this from my link? I'm suggesting pre-made answers here for the most typical "time-wasters"...


[flagged]


> Well to be fair, if you release something to the public you do have a certain responsibility. At least to provide a running configuration without the documentation being behind a payment.

If it's free, you have zero responsibility. People can choose to use it or not, I don't think you even have a responsibility to keep it running.

If you have a hundred thousand users and you decide tomorrow to just yank it from existence, that's absolutely fine too.

It's free, no one forces you to use it, don't expect anything. The maintainer might get bored with it, delete it, accidentally break it, or change it just for shits and giggles to fuck with people.

It's fine. It's free.


Entitlement to the extreme. Nobody owes anybody anything by marking something as open source.

Take some responsibility.


This is very silly and I urge you to rethink this position. The software is free, it contains a very explicit warranty disclaimer, and the implicit social contract is that nobody owes anyone anything. There is no fraud here.

If you download the code I posted to my GitHub account and try to do something with it, and it doesn’t work or you don’t understand it or I didn’t bother writing docs, that is in fact very much on you.

Perhaps my software project has lots of docs and I’m super helpful to users, and it is therefore more likely to be popular, or perhaps it does not have those properties and will stay obscure, but either way it is your job to decide what free software to use, not mine to make it up to your standards.

I have never seen paywalled OSS docs and it seems like a silly idea, but that doesn’t make it fraudulent or even bad. It just makes it perhaps inadvisable to use. Which again is on you.


You get both the demanding users and the giving users, but the point is, the demanding users don't actually cost your open source repo anything. When it's a website, every user is money out of your pocket.


Demanding users absolutely do cost something -- time and attention at a minimum. Some users will go to great lengths to try to force you to engage with them. An open source repo does not prevent this either in theory or practice.


They actually don't. You can just ignore them. I get that people find them unpleasant. But "finding somebody unpleasant" is different from "someone costs you money."


No you can't, because they report bugs in the same place that all the other people do, they create spam accounts, they concern troll, they divert conversations onto their pet issues, they do everything people have done to irritate other people on the internet for ages. It's very well understood behavior to anyone who has used an internet forum in the past 20 years and also obvious there is no magical way to filter this all out. You're running the project or part of the maintainer team, YOU have to separate the wheat from the chaff, no magical algorithm or "trick" is going to do it for you.

This is before the fun part where, depending on the level of "personality" you're dealing with, you might end up getting a fun weirdo who becomes obsessed with you for a little while and makes life miserable for everyone involved. I know one person who was stalked on GitHub (before they let you ban people from interacting with you, a fairly recent feature) and this person would comment on literally EVERYTHING they did, but only in a "nice" and "helpful" tone. You can't tune that shit out so easily, I'm afraid.

To be fair, you also get some lesser psychopaths: the angry guy who reported his bugs, replied, and demanded to be replied to via Lisp programs in quoted code blocks was a more memorable one.


> because they report bugs in the same place that all the other people do, they create spam accounts, they concern troll, they divert conversations onto their pet issues, they do everything people have done to irritate other people on the internet for ages.

What this is saying is that if your "free software project" includes anything more than providing the source code for free online--in other words, if it includes things like a bug tracker, a discussion forum, etc., that you are actually paying attention to--then it's not just a "free software project" any more, it's more like a "free website", and has many of the same issues people are discussing here relative to the latter. And the solution would be much the same: your time and effort isn't free, so if you're giving it to the project, the project shouldn't be free; you should charge for it.


Can you give me some examples of what you'd consider to be free software projects?


What I would consider "free software" is beside the point. I'm not trying to propose a new definition of "free software" or argue for some particular way of labeling projects as "free software". I'm just pointing out that if a project author decides to provide anything more in relation to their project than their source code for free online, they are taking on a potential burden in time and effort, and if the burden becomes too much for them to provide for free, they will either need to start charging for it, or stop doing it. The SSLPing project has taken option number two.


Love to see your email filter that separates the messages of helpful users from those of demanding users.

If you can't automate this step, you have to read the insults, and pleading, and jerk-faced comments from the demanding users. Which takes time and energy.


My filter is my brain. I just... don't interact with people who don't seem worth interacting with.

And, look, that's not perfect and I'm not saying that it's not unpleasant. I'm saying that it's materially different from every person who uses your service costing you actual dollars. And if you can't see that, I honestly don't know what to tell you.


> Love to see your email filter that separates messages from helpful users from demanding users.

Go one step further and make it a product. OP'll be a millionaire in no time.


It is called the "Pay me" button.


That's a response to a complainer, not a filter to prevent them from taking your time and energy figuring out if it is worth responding.


If you filter who can complain, you do not have to deal with those that you do not care about.

SaaS products with a free tier do it this way:

* To contact support outside the app, you need a PIN. The PIN is displayed when you log in to your paid account. Don't have a PIN? Can't contact support.

* To contact support in-app, your profile needs to be associated with a paid account. Don't have a paid account? Can't contact support in-app.
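A minimal sketch of that kind of gate (all names and PINs here are invented for illustration, not from any real product):

```python
# Hypothetical sketch of PIN-gated support: only requests carrying a PIN
# that maps to a paid account ever reach the human support queue.

PAID_ACCOUNT_PINS = {"4821": "acme-corp", "9034": "widgets-ltd"}  # pin -> account


def accept_support_request(pin, message):
    """Return True if the request is queued for support staff."""
    if pin is None or pin not in PAID_ACCOUNT_PINS:
        # Free-tier or anonymous: drop silently, or auto-reply with a docs link.
        return False
    account = PAID_ACCOUNT_PINS[pin]
    print("queued ticket for %s: %s" % (account, message[:40]))
    return True
```

The point is that the filtering is mechanical and happens before anyone spends attention, which is exactly what the email-filter discussion above says is hard to do by content alone.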


How do you filter/ignore them without ignoring important stuff?


Demanding users of OSS at a minimum will cost you time. At worst they can also cost you reputation. At best they can help you improve your project.

I think in most cases the entitlement many (most?) users have makes it a net negative these days. At least for popular projects.

I think the easiest solution is to charge for support. No issue reports or PRs unless you’re also a paying user.


> No issue reports or PRs unless you’re also a paying user.

You'd turn down quality bug-reports and code-contributions in the name of blocking spam?

I doubt I'm the only one who would object strongly to the idea of "pay me so that you can work on my project".

Open Source software development can be made to scale to the level of the Linux kernel. I don't buy the idea that its principles need to be thrown out in the name of anti-spam practicalities.


> You'd turn down quality bug-reports and code-contributions in the name of blocking spam?

Absolutely I’d “risk” it. Even a negligible amount like $10 would reduce the noise significantly. I’d also pay that in a heartbeat as a user.

> I don't buy the idea that its principles need to be thrown out in the name of anti-spam practicalities.

What “principles” are you referring to?


> What “principles” are you referring to?

Those of Open Source software development:

> The users are treated like co-developers and so they should have access to the source code of the software. Furthermore, users are encouraged to submit additions to the software, code fixes for the software, bug reports, documentation, etc. [0]

Introducing a paywall to keep out those who wish to submit improvements to a project is the antithesis of encouragement.

> I’d also pay that in a heartbeat as a user.

Not every Open Source contributor has money to give.

A better alternative might be for a forge website (GitHub or whomever) to implement a user-scoring system. Wikipedia uses this approach quite successfully, where only users with a certain level of 'credibility' are permitted to make changes to semi-protected articles. StackExchange/StackOverflow does something similar to avoid spam on 'highly active questions'. Even Hacker News does something like this, showing usernames in green for new accounts.

What the forge would actually do with the user-score, I'm not certain. It would be difficult to do anything without making the forge less welcoming to newcomers.
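As a rough illustration of what a forge-side score could look like (the weights and threshold below are entirely invented, not how GitHub, Wikipedia, or Stack Overflow actually compute reputation):

```python
# Hypothetical credibility score in the spirit of Wikipedia's semi-protection:
# tenure and accepted work raise the score, spam strikes lower it.

def credibility(account_age_days, merged_prs, closed_as_spam):
    score = min(account_age_days, 365) // 30   # up to 12 points for tenure
    score += 5 * merged_prs                    # accepted contributions count most
    score -= 10 * closed_as_spam               # spam reports cost dearly
    return score


def may_open_issue(score, threshold=10):
    # Below the threshold, route to a moderated queue instead of the tracker.
    return score >= threshold
```

Even a crude score like this sidesteps the "less welcoming to newcomers" problem somewhat: a brand-new account isn't blocked outright, just routed through moderation until it has some history.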

[0] https://en.wikipedia.org/wiki/Open-source_software#Developme...


I don't agree that this is a "principle". Having free access to the source is IMO the only principle. As stated in the wikipedia article, the rest is just a "suggestion". I think there's plenty of room to evolve while maintaining the core principle of actually being open source. There already exists a lot of OSS that does NOT grant low investment contribution, and that's not always a bad thing.

The point, as I see it, is to raise the investment / friction required to contribute. You are less likely to pollute a project with inane comments if you're invested (not guaranteed, just less likely). I wouldn't recommend this for any / all OSS, but instead when you start getting tons of low value contributions. In fact you'll see a lot of issues on large OSS projects that I wouldn't even categorize as "contributions", but more like complaining or simply asking questions that are often already clearly documented. E.g. "Doesn't work! Fix please!"

We're seeing a lot of OSS projects simply close down their issue reporting because they don't want to deal with it. If there was an easy way to enforce investment / quality of feedback, I think some of these projects would probably keep their issue reporting open.

User scoring could potentially work too. It would be an interesting experiment. I think there's a lot of room to try things. For example, you could complement a paid model with an application process to gain free access (for those who can't afford it) as both would require user investment (money OR time).

You mentioned Linux as an example. They have a detailed process for reporting issues. You can't just one-line tweet an issue and expect it to receive attention. https://docs.kernel.org/admin-guide/reporting-issues.html#st...


> Having free access to the source is IMO the only principle

That doesn't sound right. The OSI have strong opinions about what licences qualify as Open Source licences, it's not enough to just let people see the source-code. [0][1][2]

> There already exists a lot of OSS that does NOT grant low investment contribution, and that's not always a bad thing.

Sure, it's possible for a software project to release under an Open Source licence while using a closed-shop (Cathedral) development process. An extreme example would be the id Tech 4 engine, which was originally closed-source.

We could say that Open Source has two meanings: one is about software licences, the other is about software development methodologies.

> raise the investment / friction required to contribute. You are less likely to pollute a project with inane comments if you're invested (not guaranteed, just less likely).

I agree that raising the barrier to entry will probably be effective in keeping out low-quality 'contributions', but likely at a cost to high-quality contributions.

Also, ideally there should be a proper answer to user-support. That is, support tasks should be properly separated from bug-reports, and there should be some kind of system for handling them, perhaps a community forum or perhaps the offer of paid support.

When someone inexperienced needs help, it's not great if there's no answer for them other than contempt. That's how we get angry trolls.

> I wouldn't recommend this for any / all OSS, but instead when you start getting tons of low value contributions.

I wonder what fraction of projects this applies to. When a project does something arcane, that can act as a barrier against poor-quality comments, but of course it also acts as a barrier against interest in general. If a project does something broadly useful and is easily approachable, that's presumably when you're most likely to have trouble with an onslaught of poor-quality contributions.

> you'll see a lot of issues on large OSS projects that I wouldn't even categorize as "contributions", but more like complaining or simply asking questions that are often already clearly documented. E.g. "Doesn't work! Fix please!"

Right, there's certainly a skill to filing bug-reports and support-requests.

Incidentally I think GitHub does a poor job of separating the two, which may contribute to your problem (assuming you use GitHub).

> Linux as an example. They have a detailed process for reporting issues. You can't just one-line tweet an issue and expect it to receive attention

I think they're a good case-study for this.

They ask that you follow a pretty rigorous sequence of checks before awakening the high-priests of the kernel. Perhaps that's enough. Perhaps more projects should have a quick "Before you open a ticket" document.

I suspect their email-driven process might also strike a lot of people as, well, intimidating. Email is associated with serious communication using your real name. I suspect people will be more considered when using mailing lists.

I also suspect that if you use an email-driven forge like SourceHut, [3] that would greatly increase the average technical competence of those interacting with your project. It may also reduce the total number of people interacting with it, mind.

[0] https://en.wikipedia.org/wiki/Open-source_software#Open-sour...

[1] https://opensource.org/licenses

[2] https://opensource.org/osd

[3] https://sourcehut.org/


Great points and discourse. Cheers!


> I think the easiest solution is to charge for support. No issue reports or PRs unless you’re also a paying user.

Agree 100%. Your time and effort isn't free. Users who want some of it should have to provide value in exchange.


I don’t think anyone implied FOSS projects lack obligation— just that it’s more manageable.

Uptime (and the related system management), large monthly withdrawals from your personal checking account, and soliciting donations to recoup some of them are constant, recurring obligations more urgent than anything I’ve experienced maintaining repos. Most people shrug and move on if their issues and feature requests get ignored. Even an ignored CVE isn’t going to stop you from paying your rent. Doing your best to get the word out to users (even breaking your code to do so if you’ve got an oft-included but rarely considered library or something), as far as I’m concerned, satisfies your ethical obligations.


Hi I'm the creator of sslping.

I'm just discovering that sslping is on the first page of HN. It hurts to have to kill your project, but it hurts even more to become famous after your death!

I've received 30+ super kind emails from users, and even a donation... I didn't even know I had such a fan base


You did good, and you're doing good by killing this the way you need to for your own reasons. You don't owe anyone anything.

Hang in there.


This seems to imply paying for a service guarantees it will stay up.

We've got tons of evidence to show that's not the case. It might increase the likelihood of it staying up due to paying customers, but it might also not. I don't think we have data to conclude either way (more paid/ad-supported services spring up than completely free "services" — to loosely contrast that with just regular "web sites" — but more die an inglorious death too).

If you want assurance you'd be able to access your data or desired functionality, the only approach is to use a free-software based service that allows you to export your data (and import it into another instance). Depending on your desires, you could either pay someone to host it for you and be ready to host it yourself when they decide to kill it, or host it yourself from the get go.

People make this risk assessment subconsciously all the time. E.g. whether to use gmail.com for their email account or any small random provider ("hey, Gmail is more likely to stick around")? Whether to host on YouTube or... You get the point.


All good points of course, and the more honest service providers will tell you the same. In Pinboard's case, the author has regularly joked about being hit by a bus, and how people need to back up their links. He's also got an API in place to enable exactly that.

You could use the same API as a model to reinvent your own personal service, if you so choose. I use it (for now) to just back up everything important, and I'll figure out what to do with the data later if I ever need to.


> This seems to imply paying for a service guarantees it will stay up.

Apposite point seeing that the Pinboard Blog referenced above was last posted to in 2017! Moreover, the home page appears as an unreadable jumble on Firefox mobile.


100% - I'm less and less inclined to build free tiers into my products these days, it's just not sustainable.


Free tiers are/should be a marketing tactic that gets people comfortable enough with your product that when it comes time to put money down, they put it on your thing rather than the competitor. It's when free tiers don't have a sufficient draw into a paid tier that you run into problems.

Microsoft figured this out relatively early on during their monopoly lawsuit, which they cleverly settled by donating a bunch of Microsoft products to schools so that the next generation of the workforce would grow up on Microsoft products and become commercial Microsoft users (and I've seen Apple use this same approach since then). What was ostensibly seen as a punishment by the legal system simply entrenched their position further.

And in gaming, there's an entire industry around "free to play" games that make billions of dollars, although I sincerely hope B2B tech doesn't take marketing inspiration from them.


Microsoft offers their products to educational institutions for a fraction of the price. You just have to license your staff and get the licenses for your students for "free". And the staff licenses for education cost yearly what businesses pay monthly (MS365 E5 for about $90/year).

In return, schools teach their students Microsoft products, and you can't really beat the pricing, because running your own infrastructure will be more expensive, especially if your school isn't big enough to have its own IT department. Prices are rising though, because you can't license boxed versions anymore.

So you now have the choice of paying for Teams and SharePoint Online while not using them, and licensing and running your local file servers, AD, etc., or switching more of your infrastructure to the cloud and increasing the lock-in.

This is really worrying to me, but from a cost perspective you can't really justify continuing to run a lot of infrastructure, especially if your local government doesn't have a lot of funds in the first place. I'd rather spend that money on hardware that the students and teachers can use than pay for servers and licenses.



Yes, I'm aware boxed versions are still available. But they aren't available under the contract the state negotiated with Microsoft. And those are the terms the schools use to buy licenses if they don't want to pay a premium.

To change this there would have to be political will to reduce the lock-in and maybe offer a state run cloud service.


There is absolutely educational pricing for boxed Office. It sounds like you might be upset about the choices your State made, and are conflating that with what is possible.


> and I've seen Apple use this same approach since then

Google as well, recently - unlimited free G Suite (now Workspace) for schools was a massive draw a few years ago, although they are starting to turn around and monetise now that they have a captive audience.


Same with gmail. Initially launched with the idea of "Don't throw anything away - you'll never need to delete another message" but now the landing page of gmail talks more about security, productivity and other things since eventually you're gonna have to pay for more storage.


The best use of free tier is to allow engineers at big companies to try things without having to get permission from someone.

There can be a lot of abuse, though.


Precisely. All of /r/homelab on Reddit pretty much centers around enterprise features for free/cheap so they can learn off the job. It's a win for the employer, win for the software manufacturer, and a win for the engineer who is hopefully doing it out of enjoyment rather than pressure.


Pay-to-win web services, right...


I mean, that's basically what freemium is for b2b - want to unlock features that get your job done faster? pay up.


What do you think AdWords is?


This is a race-to-the-bottom though. Or a marathon-of-deepest-pockets, whatever you call it.

If you decide to make people pay up, a competitor will step up and offer a free version, subsidised by VC money or lucrative other businesses. And if the competitor isn't free, at least it's cheaper: triggering a race-to-the-bottom, ending in "free".

In any market, for any service with potential, there will be a free, or at least cheaper, option. Until mono-/oligopolies are established and prices go up, at which point the customers are "extorted" or close to it: all the losses up to now must be paid from revenues now.

This is seen everywhere (from food delivery, via web hosting, to PaaS to SaaS) and is clear and present proof of why "free markets" aren't automatically efficient and need authorities poking around in them - or else they hardly work at all, let alone efficiently.


This is only true under the premise of profit motivation or some other kind of competition, which is not inherent to the FOSS ecosystem.

>In any market, for any service with potential, there will be a free, or at least cheaper option.

Which is not bad at all. IMHO the only problem here are business decisions, slowly locking you into something. FOSS nerds prefer protocols over platforms, which is why Posix and Linux living up to it are so incredibly important. You are right about the markets being misaligned, which is why we have to be very sceptical about big corps buying into various FOSS-foundations.


I presumed we were talking about the service part and not the software part. My comment was about running a service and not about offering FLOSS (or no floss) software.

Though I expect the same applies there too, only that "race to the bottom" is less of a negative thing. And could probably be "race to open-source". Where slowly all paid-for software gets FLOSS alternatives that -at some point- offer competing support/features/experience and therefore make the paid/proprietary alternatives more or less obsolete. E.g. while Oracle is still running and offering their database, on the whole, the world runs free databases. While Microsoft still sells billions of OSes, on the whole -including Android- the world runs Linux mostly. And except for OSX, hardly any other paid-for unixes survived. And so forth.


That’s assuming quality is a constant and is purely objective.

Is there an opportunity cost in providing a gratis/marketing tier versus focusing more on paying customers?


aka free market capitalism..


I have a side gig B2B SaaS. My business users don't care if it's $1/year or $1000/year or $10,000/year. It's the same HUGE amount of paperwork for them either way, so the right answer is $10,000/year. I do allow very long trials.

I do still consider offering a "free" tier, but with the limitation that there is no free support, and each support incident will cost them $200. But then I remember what I said above.


How did you figure this out pricing-wise? And can you link to the service?


You need to have an idea of the annual revenue of your prospects and price accordingly.

A $10,000 purchase for a small company with $100K revenue is going to be a huge expense. For a company with $100MM annual revenue it's a rounding error.


The guy is not talking about free tiers but about free services though.

Free tiers are good if correctly sized compared to paid tiers: they help users get to try/know your service, they allow students and kids around the world to play with things, they help small companies grow; if your free/paid tiers are well done, those users who start with the free tier will grow into the paid tiers. If not, it should be OK too, not everyone is going to need all your features.


I can't agree more... I'm the creator of SSLPing, and I can say it sucks BIG TIME to discover people loved your product and your product is on the first page of HN just after you had to kill it!


Free tiers are essential; when settling on a service I will try many, in parallel with our current system. If I'm happy it is better, I will switch, either immediately or after a short period, which results in us being paid at a mid tier. If I have to pay to try, and find out if it is any good and works for our specific cases, I'll try it after competitors - so I probably won't get round to it - as something else probably meets our needs first.


Free trials are essential for the reasons you describe - but that doesn’t mean it has to be free forever - just for long enough to make an honest evaluation. 30 days is the norm, and that seems fair to me.


AWS (and generally all *aaS providers') free tiers are enough to experiment with and get a feel for, but are limited enough that using them in any meaningful way is very difficult without incurring cost.

Surely you can appreciate that "time" is not the only commodity that can be meaningfully (and reasonably) limited?


That said, much (but not all) of the AWS "Free Tier" is really just a 12-month free trial. Only a small percentage of the offerings are truly long-term freemium.

https://aws.amazon.com/free/


https://paul.totterman.name/posts/free-clouds/ - you can do quite a bit with some free tiers, and some don't have a time limit


Are you under the impression that their contention is that avoiding free tiers is essential?


I enjoy seeing less affluent audiences use my product for free. It's hard to know who can and who can't pay, so you have to make an arbitrary call. In my case, I assume Android users can't pay and iOS users can. I have run some experiments and I have seen that this is mostly true.

Some people will think that this is unfair, but I think it's great


To be fair, Pinboard.in was down quite a lot recently and its DNS had expired (1).

There was a big discussion here (2), and on Twitter IIRC, that the author had lost interest in the product and no updates were being made. So there are no guarantees even when it's paid.

(1) https://news.ycombinator.com/item?id=29873306

(2) https://news.ycombinator.com/item?id=30628375


Been pretty rock solid for me for years. Nobody said paying someone the price of a cup of coffee would get you a service with zero hiccups (again: none of which I've experienced, but I believe you). Maybe you should start shaking Maciej and demanding he accept more of your money, or run ads. :)


I agree that it's a perfectly fine way of ending a free service. No guarantees = can end whenever, author is free to do whatever they feel.

However, I'm not sure "make the platform charge you" is the right conclusion if the goal is to have a service that A) doesn't disappear and B) won't make you do more work after setting it up (e.g. doesn't change).


Well, if you want it to never change in any way you don't want, then obviously the correct solution is to build it yourself.


You don't have to build it yourself as long as you can host it yourself from software someone else has built. ;-)


> Like a service? Make them charge you or show you ads. If they won't do it, clone them and do it yourself. Soon you'll be the only game in town!

Once you get enough users, talk to a few VCs and raise a round or two. Grow the user numbers by a few more magnitudes. Then aim to get acquired or IPO. As long as user and revenue growth are positive, not having profitability from day one isn't that big of a problem.


This project was killed by invisible complexity. I felt a little sad reading about it. There are lots of comments talking about the sustainability of projects with respect to money and burnout, the need to pay for services etc. I get that and agree. But I also get the impression that wouldn't have helped here.

When we reach a situation where some unknown person pushes an upstream update, which causes a cascade of problems that even someone with deep technical knowledge and daily connection to the project cannot unpick, that's a breakdown in modularity/dependency.

It's what we faced in the 1990s and from it evolved package managers, version control, dependency management and all the good things.

I think in 2022 we're back in a similar situation again. Virtualisation, containers, cloud services and whatnot have regressed development reliability.

I predict the day when a really major service provider comes out and says; "Sorry everyone. It's been 10 days downtime. We've put our 1000 top engineers on the problem and nobody knows what's wrong. We just can't fix it."


> It's what we faced in the 1990s and from it evolved package managers, version control, dependency management and all the good things.

> I think in 2022 we're back in a similar situation again. Virtualisation, containers, cloud services and whatnot have regressed development reliability.

Realistically you probably want to find the middle ground between these: not copying PHP code through SFTP or even working on it directly, but also not running an enterprise Kubernetes cluster for your simple CRUD app.

Doing that can be pretty tough, though, especially with peer pressure to use hot tech so you aren't left behind in the industry due to stagnating skills (e.g. needing to learn GraphQL and gRPC when REST is still good enough; same for containers and Kubernetes).

> I predict the day when a really major service provider comes out and says; "Sorry everyone. It's been 10 days downtime. We've put our 1000 top engineers on the problem and nobody knows what's wrong. We just can't fix it."

That sort of happened with Roblox, they had an outage that went on for 3 days due to Nomad rolling over and dying in their infra and nobody really knew what the problem was or how to fix it for quite a while: https://roblox.fandom.com/wiki/2021_Roblox_outage

From here on out, we'll see more and more outages like this in large companies with complex infrastructures, at least that's my prediction.


> From here on out, we'll see more and more outages like this in large companies with complex infrastructures, at least that's my prediction.

I think part of this problem is that people set up complex fancy infrastructure at a company, and then leave once the interesting/challenging work is done.

A team is left behind to operate that infrastructure, but they may not necessarily understand it to the depth required to react properly when it fails or when it needs updating, etc..

There's a 20+ year old book called The Ingenuity Gap that addresses this exact challenge with complex systems that we aren't equipped to do much more than "keep the lights on"..

It definitely feels like we are running up against similar situations in the software world, particularly with the huge amount of job churn (and corporate brain drain) that the pandemic produced.


> and then leave once the interesting/challenging work is done.

Don't underestimate constructive dismissal.

The courts have begun to take a somewhat dim view of executives being completely ignorant of Too Good to be True scenarios, which has been a viable form of plausible deniability forever and a day. I'm not treating the workers badly, George is treating the workers badly. George, who I brag about all the time and have promoted three times in five years.

There's this sort of notion of 'what have you done for me lately?' that pushes us to underappreciate people who set up systems that usually run on autopilot. Bosses can't absorb the idea that just because nothing has broken doesn't mean it won't break, and these may be the easiest plates in the world to spin but they are in fact spinning, and so this person who "doesn't look busy" is doing so on purpose. It's what some clever person called "on-site on-call" to differentiate doing nothing from actively doing nothing.

It's very reminiscent of the experience some people have buying someone a gift (a subscription, a car, etc) that has an ongoing cost, and then hearing from the person how disappointed they are in the new/other gifts they got. "Last year they bought me a car and this year all I got was a new iPad." No, this year you got an iPad and your rich uncle is still paying for your car, you little shit.

Steve may be maintaining systems that save us 4 headcount, but he looks like a cost center to the business. He's lucky we don't lay him off, so don't talk to me about giving him a 10% raise for boosting that to 4.5 headcount.


You see this at all different levels of complexity.

I'm on the board of a very small non-profit and periodically I see someone come in with grand plans for updating the website and other infrastructure. And you pretty much know that, given free rein, they'll come up with something that will be way beyond what the person coming in after them will be in a position to maintain.


As a volunteer at a small non-profit attempting to keep a similar-sounding setup working long enough to replace it, this is the nightmare I am attempting to unwind. The problem when moving to off-the-shelf software for things is everyone wanting their particular favorite features. Said features were all custom-coded by someone without formal development training, of course.


> I think part of this problem is that people set up complex fancy infrastructure at a company, and then leave once the interesting/challenging work is done.

Most companies do not need complex infrastructure either. It is just cargo culting (or Google culting).


> Doing that can be pretty tough, though, especially with peer pressure to use hot tech so you aren't left behind in the industry due to stagnating skills (e.g. needing to learn GraphQL and gRPC when REST is still good enough, for example; same for containers and Kubernetes).

This is essentially why I’m now in the midst of a career change. I realized I want no part in what is more of a cargo cult than an engineering discipline.


I am curious, if it's not too personal: would you mind sharing what you plan on working on?


I recently enrolled in a newly launched BSc program at my local university titled Sustainable Urban Development, blending administrative and social sciences with engineering in an attempt to tackle current and future issues related to urbanization, urban planning and sustainable development.

https://www.tuni.fi/en/study-with-us/technology-sustainable-...


Good luck!


Thanks :)


"I predict the day when a really major service provider comes out and says; "Sorry everyone. It's been 10 days downtime. We've put our 1000 top engineers on the problem and nobody knows what's wrong. We just can't fix it.""

That day is today: https://confluence.status.atlassian.com/incidents/hf1xxft08n...


Insanity at its best!

Atlassian annual revenue for 2021 was $2.089B, a 29.42% increase from 2020.

Atlassian annual revenue for 2020 was $1.614B, a 33.39% increase from 2019.

Atlassian annual revenue for 2019 was $1.21B, a 37.36% increase from 2018.


6 days and counting


Nah, Atlassian has not given up, and isn’t going to. @nonrandomstring wasn’t referring to incidents that take more than a minute to resolve, they were predicting an incident of such magnitude that a large company shuts down over it. This is definitely not that.


I read it differently: the company will not shut down, but abandon attempts to resolve the issue (whether for a product or a set of customers) due to its complexity. Using major in GP's description was to highlight that the company has necessary resources and still can't figure out what's going on.


Fair enough, there is a valid more optimistic interpretation than mine. Still, this is not even the case you’re suggesting. Atlassian isn’t abandoning any features or customers here, this case isn’t indicative of anything other than a run of the mill outage that’s taking longer than a day to fix.


My Atlassian products have been fine for the last few days, and indeed last few years. I think there was a 1 hour upgrade window a couple of months ago.

But then we self-host.


Excellent comment and I can't speak directly for the author, but that's what I read too. Incidental Complexity is a term related to this. Incidental Complexity is any complexity that is not part of solving the core domain problem and has been introduced by the tooling or approach you chose instead.

The most common example I see is using enterprise tooling for small teams and projects, and then those projects getting snowed under by the weight of the hidden and incidental complexity they had built without noticing.


Essential vs incidental complexity is one of those concepts everybody who wants to call themselves a software engineer should be intimately familiar with. As they say, anyone can design a bridge, but it takes an engineer to just barely design a bridge.


Incidental Complexity pushes me towards serverless offerings. Of course, serverless brings with it its own test/deploy quagmire, but I'd argue for personal projects, it doesn't matter as much.


> I predict the day when a really major service provider comes out and says; "Sorry everyone. It's been 10 days downtime. We've put our 1000 top engineers on the problem and nobody knows what's wrong. We just can't fix it."

On one hand, a single incident is unlikely to be the end of a large company, because companies exist to make money, and also because by the time a company gets to a thousand engineers, its engineering function is already centered on managing complexity, and they will have had hundreds of service outages. This is in stark contrast to the single-user free service we're commenting on, where the owner was already treading water and perhaps just didn't know it, and where the tradeoffs never had financial upside.

On the other hand, you’re not just right about your prediction, but it has already happened in the broader sense, just maybe not over a single incident. Complexity has already killed many software companies, when the software gets too big to manage and can’t be easily refactored, runs on old languages and tools that stop getting updates, relies on too many services that slowly go out of both fashion and support contracts. The list of companies that got too invested in a single codebase and stuck themselves with the inability to adapt quickly enough due to the weight and complexity of what they built is pretty long and growing.


> "On one hand a single incident is unlikely to be the end of a large company because companies exist to make money, and also because by the time a company gets to a thousand engineers, it’s engineering function is already centered on managing complexity."

Absolutely the companies themselves can be very robust. Some friends and colleagues who work for <big defence> tell me they don't fear for their jobs if a project is crashing. Internally such firms are structured to run dozens of projects, all well compartmentalised, such that if one fails they'll just get moved to another. The company itself is survivable via diversity (plus there's .gov money close at hand to bail out in an emergency).

What concerns me about civil resilience is where we've got unexamined mono-cultures like Google Docs or GitHub that become de-facto pillars for countless other businesses. When Trump ordered Adobe to cut-off Venezuela that should have been a resilience wake-up call for everybody (and it prefigures the present Russian situation of course).

There could be many reasons (more likely an unfortunate confluence of unrelated upstream reasons) that topple a vital part of public infrastructure. That's why big-tech monopoly, which erodes hybrid resilience, bothers the hell out of me.


True, but there is also a difference in priorities between package maintainers, which isn't always helpful.

Why were old features removed from the upstream packages? Probably because upstream either wanted to ditch old/insecure protocols or the usage was too low to warrant the investment in keeping it alive. That is completely reasonable for them.

Then look at something like SSLPing, it wants to support old protocols because that is part of the feature set for older servers which have no realistic need to update.

It's something I find a bit troubling about the very vocal "security" crowd for whom only the latest and greatest ciphers/protocols/short-lived ssl certs/stronger DH key pairs are acceptable and everything else is garbage.

Sadly, it seems that realistically, only paying/charging will help this because then it is reasonable for someone to spend 40 hours of their week keeping up with the changes/building their own libraries/etc. to keep their project running.


Yeah; I was going to write that this shutdown notice is a great microcosm for what's wrong with the field.

You focus on containers, but this project had dependencies fail at each level. The author also cites init breaking, the crypto library losing support for old protocols, node breaking compatibility, and I think some frontend junk I haven't heard of also fell out of support.

The project sounds like it could have been a well-scoped command line option to SSL or something, but, because it was built with current best practices, keeping the dependencies up to date and maintaining it would probably take multiple full time engineers / swe's at a FAANG.

On top of that, there are no sustainable business models for good ?aaS tools (small or large), just for platforms and monopolistic stuff.


> because it was built with current best practices, keeping the dependencies up to date and maintaining it would probably take multiple full time engineers / swe's at a FAANG.

Surely significantly less than one full-time engineer? But 10% of an extra full-time job that costs you money isn't a very good proposition.


Each dependency gets to choose when they break your service (they can have a zero day exploit that's only fixed in a compatibility breaking release, for example).

You need two people if you want one to be able to go on vacation while you maintain a 24 hour SLA for security patches. You need a third because this sort of thing generates employee burn out once it is feature complete.

You could try having it be 10% of one employee, but that employee would need to be a full stack expert from low level Linux stuff to JavaScript GUI junk. Good luck finding two people like that, then telling them they own ten feature complete services like this.

The people that fit such a bill and have the personality type to be happy in such a job are expensive and rare!


> 24 hour SLA for security patches

I think that's a significant step up from your earlier guess at "maintaining" the software. There's a big spectrum between "project has been shut down" and "round the clock SLA".


"When we reach a situation where some unknown person pushes an upstream update, which causes a cascade of problems that even someone with deep technical knowledge and daily connection to the project cannot unpick, that's a breakdown in modularity/dependency."

Agreed. I believe that the "UNIX philosophy" is a good strategy for avoiding these scenarios and I believe it is underappreciated - even by architects and designers that should know better.

Forgive me for injecting this word but this is why I recoil so violently from systemd and all of its accoutrements. I can sense how this is going to fail even if I can't clearly elucidate it just yet ...


Nonsense. It was a side project that cost a lot of money. He had to take care of his actual job, family, hobbies, income producing side projects, etc. He had no single person on the planet to help him maintain it or update it. So the dependencies went out of date over the years.

Projects with resources can be maintained and updated. This one has no resources and there is no reason to update it because it's a drag on that person's life.


> we reach a situation where some unknown person

We reach an unnecessary situation. The vast majority of these dependencies don't even add any real value - not enough to justify the danger of having your app just stop working inexplicably.


Why do we need Docker? Just so that it's easy to ship to a server? How about rsync, plus taking the risk of dependencies conflicting?

Sorry, I'm not an expert at all the infra stuff. But I hope we can question things.


He says he had hundreds of users and hosting costs were a problem. This made me curious how this could be. Testing SSL for a few hundred users should be pretty much free these days.

He said Patreon pledges paid for 25% of hosting. So I looked up his Patreon. Turns out he has 3 (!) patreons, together paying $9 per month.

The web is a strange place.

Here comes a fun fact I can add: he is doing better than me. I have a website with about 500,000 monthly users and 5 (!) patreons. So while he only had 1% paying users, I only have 0.001% :)

Did I say the web is a strange place? Will this ever change?


Hi I'm chris, the creator of sslping.

Actually, we're talking "small" amounts. I had 3 dedicated servers hosted in France, which is cheap compared to the USA for dedicated servers. I was paying 65€ per month, for 5 years.

How could testing SSL use that much? well, I wanted an HA mongodb setup, and I wanted to support millions of servers. SSLPing was super fast, around 5 seconds to check your SSL versions, cyphers, check some vulnerabilities, etc... when SSLLabs takes 2 minutes.

And yes, despite 1100 total signups in my database, I had only 3 patreons... The maximum I had was 5.

The web is a strange place. I'm looking into sslping very differently today. Yesterday I killed it, and received tens of emails from users to thank me, and today I'm on the first page of HN... because I killed my project.

Do I have to say that I didn't dare ask people to pay for sslping because I thought it wasn't worth it?


Ping me at Joe.Drumgoole@mongodb.com and I can organise payment of all those costs if you want to continue.


would you like help keeping it alive? or reviving it?

i’m open to help: from untangling the issues it has now to hosting, to rearchitecting it.


let’s fucking go boys!


I think this is natural, not strange. Not saying this to criticize; I mean it as a friendly heads up...

You're acting like a tree in the forest offering fruits for free. People pass by, take a couple of fruits and move on. You put up a shy sign: "if you drop a dollar on the floor it won't hurt".

0.001% dollars is what you get out of this behavior.

It's strange to expect something different. You need to tell people you expect a LOT of them to pay. Otherwise they don't know. You tell them that by restricting access and saying: pay or go away. Offer a limited time free trial. Then charge.


I'm surprised about that too. If I were to build such a service, I'd use cloud and keep it in a free tier. If there aren't enough resources to serve all the servers, throttle it (check not every 24 hours, but every 25 hours, and so on). If there's enough demand, implement a paid tier without throttling and some extra features.
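The throttling idea here can be sketched very simply: with a fixed daily check budget, stretch the re-check interval proportionally instead of dropping checks. A minimal sketch (function name and capacity numbers are illustrative, not from the comment):

```python
def check_interval_hours(num_servers, daily_capacity, base_interval=24.0):
    """Stretch the re-check interval when demand exceeds capacity.

    With budget for `daily_capacity` checks per day, any servers beyond
    that budget stretch the 24h interval proportionally (25h, 30h, ...)
    instead of skipping checks entirely.
    """
    if num_servers <= daily_capacity:
        return base_interval
    return base_interval * num_servers / daily_capacity

# Within budget: normal daily checks; over budget: interval grows.
check_interval_hours(100, 200)  # -> 24.0
check_interval_hours(250, 200)  # -> 30.0
```

The nice property is graceful degradation: every monitored server still gets checked, just slightly less often as demand grows.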


Which, come to think of it, is exactly how cloud providers manage their free tier.


> Turns out he has 3 (!) patreons, together paying $9 per month.

It is possible that he already informed his patreons and they canceled their pledges except 3 of them.


In January he had 6 patreons making about double that. So $50-$100 a month in hosting costs.


I don't think it's strange: to many people, some services are useful, but not always seen as useful enough to commit to X dollars per month, especially if their own usage is built on free stuff as well.

Also, many businesses are not set up to make multiple small(ish) payments to library maintainers, even though IMHO this should become a normal pattern. Imagine if the maintainers of OpenSSL, PHP, Linux, and a thousand projects like them were paid even $1 per month per business use.

I think there is a whole PhD around this, since there are psychological factors in charging up-front vs. freemium with upgrades, etc. Sometimes if I am trying to find a service to do X and find that one charges, I might immediately reject it, even if it is really amazing and would save me more than that in time.

I guess if it was easy, there wouldn't be a problem!


Signing up for any level of subscription is a pretty high bar for me and, as you say, can be logistically challenging for businesses. It's just way too easy to build up money leaks that you aren't getting any real value from.


The web is a strange place indeed, and maybe I misunderstood your reply, but my take after reading it was: you are independently offering a service to others for free, and complaining to us that you don't get enough donations to keep it? Kill it; nobody is entitled to your service for free. Or charge for it. So yeah, I think it will change when people stop putting free stuff online.


It will change as we all "become the change we want to see". How many web apps that have a free (and likely ad-supported) alternative are you supporting/paying for? If this number starts being 5-10, then we can hope for a network effect and the main currency of exchange on the web becoming simply money instead of (private) data, as it is now.


Maybe making your hosting costs visible to your users vs burn rate will make the point more salient. Something like "At current burn rate XYZ service will become untenable in 145 days"
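The "145 days" figure is just balance divided by net daily burn. A tiny illustrative sketch (the balance and cost numbers below are made up, not from the thread):

```python
def runway_days(balance, monthly_cost, monthly_income=0.0):
    """Days until the service becomes untenable at the current burn rate."""
    net_burn_per_day = (monthly_cost - monthly_income) / 30.0
    if net_burn_per_day <= 0:
        return float("inf")  # income covers costs, so there is no deadline
    return balance / net_burn_per_day

# e.g. 300 in the bank, 62/month net burn -> roughly 145 days of runway
int(runway_days(300, 62))  # -> 145
```

Surfacing that single number to users ("this service dies in N days at current funding") is probably more persuasive than a generic donation link.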


Perhaps try $1Hugs?


I'm waiting for my services to die in a similar way. I too will shutter it when it does. 250k monthly users, but all the time and a lot of the costs are borne by me, and where I'm at now, I don't have the time (the money I could afford, but it's the time that is more valuable).

I'm glad to see this, I'm glad to see someone else also conclude that they don't have to take an onerous responsibility for something created lightly and that gained traction.

This is what OSS needs too. If the project gets to be too much, just give it up. Even if others depend on what you've done, you're not compelled to do anything if you're not in a contract and no one is directly paying for your time. Not a donation, but payment to reserve time.

Time is the thing. It ticks by, and technical debt and bit rot eventually make it cost too much.


You are not obligated to run the service (whatever it may be). Run it because you want to, run it because you are exploring cool ideas. Work through the tough spots because it is worth it and you are happy afterwards.

But when it causes you prolonged stress, or you just don't have the time anymore, you are free to shut it down. You can do that now, at x months in the future, or randomly when your servers die. I think your users would prefer that you give them an x-month heads up, even if they would technically get longer service waiting for your servers to die.

You don't have to sacrifice yourself for people you have never met and who have done nothing for you.


Well, you could try to sell it off before that happens. This genuinely seems better for everyone, even if the buyer has to turn around and start charging or putting ads up or whatever, the users get a better experience than an unannounced switch-off. You don’t have to lie about the numbers or try to get a three comma buy out, and you’ll still get some amount of money for your time.

I just googled “sell my side project” and found a few marketplaces for that sort of thing. Might be worth checking out.


It's better for the users for sure, but they're not owed anything for a free service. For the owner of the service, surely it just serves to add more complexity?


For reference, this is roughly what I have... still not looking to sell today (that also takes time and brain cycles) but this is what I wrote in an email to someone enquiring by email after I posted this morning:

So I wasn't looking to sell when I wrote the comment on HN, but the gist is: 8 years ago I created a platform for forums, it's a PostgreSQL database with a Go API layer. It's multi-tenant by default, so hosting many forums on a single server or cluster is trivial. That much is solid, and well maintained. But... I am not a front-end person and with that in mind I had the frontend built in Python + Django originally... it has no database, it's a pure veneer over the API just to use common templating for making the HTML. This part has not been maintained... it's 8 years out of date, Python 2.

The platform I run has a number of sites on it, and I'm loosely aware that over the years other people had spun up instances of their own.

Examples of sites using it:

* https://www.lfgss.com (the biggest site on it)

* https://pignolefixe.microco.sm/ (a french site)

* https://forum.espruino.com (something to do with arduino and javascript)

* https://forum.islington.cc (a pretty strong site)

* https://forum.rapha.cc/ (a private members club)

A common theme is cycling.

The entire thing is secure, privacy focused, very low effort to run. There are no adverts, no tracking, no stats... but web logs say that I served 1.5M HTTP requests in the last 24 hours (to now) and that's behind a well configured Cloudflare cache (those not signed in hit cache for 5 minutes, only those signed in get dynamic HTML).

So that's what I have... a forum platform. Oh... what differentiates this forum platform? It has events... in fact the platform is bespoke, the idea when I started it was to have things like classifieds, events, polls, forms, wiki all be native top-level content within a forum. I never liked on vBulletin or Reddit how you'd have to leave the forum to collaborate beyond conversations, so I was trying to bring it all into the forum (and thus compete with MeetUp, eBay, etc... who don't have communities and wish they did). Imagine Reddit, but with a stronger sense of identity for subreddits, and richer content, and each subreddit able to be on its own domain if they wished... that's this.

I still don't know if I'm necessarily looking to sell... but if we get to the point that the frontend server fails in some horrible way, the Python + Django being 8 years old probably means the effort to get it working is too much. I did realise this, and started a frontend in Go to replace the Django one (I can maintain Go code) https://github.com/buro9/microcosm but you can see the lack of progress... I joined fast growing startups and my career accelerated too, that doesn't leave time for side projects.


You might try selling on MicroAcquire or Flippa (I haven't used either).

Selling any business will usually take a couple months at least, so waiting until things are on fire and super stressful might not be the best strategy. Listing shouldn't take more than a couple afternoons.


Do you know about Microns?(https://microns.io)


That's a really cool project. The frontend code is very easy to follow, although I looked and looked to see where the API was implemented before figuring out that what the readme describes is not actually in that repo (API server isn't published, right?). :D


It was published. I removed it recently as I was about to butcher it into the other repo with the front-end and didn't want two wildly different versions.

As I haven't done that yet I can put it back up once I get back to a city (currently almost off grid in the Lake District - purposefully didn't bring a laptop with me, I can get away with this as my sites haven't been down in 8 years and I don't feel an obligation to interrupt a vacation to get it online if it did go down during a holiday... It would wait a couple of days).


Ditto - I'm at the point where I either need to get my mind back into the space to (basically) rewrite an entire site (it seems), or let it die. I like it, but hosting it is doing nothing but cost me money right now.

Hard choices, but it was nice to do.


I don’t know what your service is, but that’s enough traffic that you’d likely find a willing buyer?

I recently discovered that there is a pretty active market in people buying/selling smaller businesses and websites like this.

Whether you like what the buyer does with it is a different issue…


True regarding the active market, but generally speaking for your offering to be attractive to that market it has to be already successfully generating income above its costs. There is far less of a market for loss-leaders.

(The exception to that, by the way, is that if anyone here ever builds anything so useful to a particular company that their sales engineers or developer evangelists are routinely using it or writing blog posts about how to use it you should absolutely get in touch with them and ask them to buy it.)


Another option would be to just communicate the limits of their commitment to the site to their users. Maybe people would step up to take over or assist, or another solution would be found. At least users would know in advance that it's on the way out.


>250k monthly users, but all the time and a lot of the costs are borne by me and where I'm at now I don't have the time

are you not able to let any of these people know they need to pay or the service will go away soon? I mean, 250K users at $1 a month?


It's not trivial for a side project to start accepting payments. You may need to register a company, open a bank account, find and set up a payment processor, publish a privacy policy, hire an accountant to handle tax reporting, write code, etc. All doable, but not a "let's push this lever and see if money starts flowing" type of deal.

And it's not a sure thing the money will start to flow. It will definitely not be 250K users at $1 a month. 99% of users will just go "fine, I'll switch to one of the other 9 still-free alternatives".


Can you not just send an e-mail with your IBAN asking for donations?

If it's so much money that it seriously impacts your taxes, that seems like a great thing. At least it wouldn't be a big deal unless you get some serious cash.

If you have the email addresses of 250k users who all don't want to deal with switching services, I can imagine getting a fair amount with a once-a-year ask for donations. Likely more than making it a paid service; people are usually happy to give some money for things they use for free, if it's useful.


The OP literally said "(the money I could afford, but it's the time that is more valuable)".


Stripe can take care of most of the hassle. I’d at least send a “Save the service” email with a Stripe payment link (no programming required) and see if I can get some decent bites.


Stripe basically deals with the easiest parts (the technical) and leaves the hardest (or rather, the most energy draining) part for you as a customer/business owner (the bureaucracy of having a business in your country of choice). If you start accepting money, you need to pay taxes, no way around that.


Initially he could declare this as personal income, just like people who accept donations on open source projects. Once he gets many subscriptions, he can start looking at setting up a company.


Leaving aside whether you're likely to bring in any material amount of money, the logistics involved aren't that hard. You set up whatever payment processor you want, track whatever money comes in (paid to a doing business as entity if you like or just you personally), track business expenses (including computers etc. somewhat related to your business), and file any net income as part of your taxes.


Not necessarily. You can incorporate in a country like Estonia, for example.

One-stop-shop accountants will charge you 50-100 euros/mo and take care of everything.

They don't tax revenue, just distributed profits. As long as you keep the money in the business (even investing) or take a salary, you won't pay taxes in Estonia (assuming you don't live there).


> Not necessarily. You can incorporate in a country like Estonia, for example.

But not with Stripe, which always gives you a Delaware C corp as far as I understand. And if there's a Stripe equivalent from Estonia I have yet to find it, despite all the promotion their government does around "e-residency".

Edit: oh nevermind https://support.stripe.com/questions/supporting-companies-th...

(anyhow, Estonia borders Russia, invest at your own peril)


> And if there's a Stripe equivalent from Estonia I have yet to find it, despite all the promotion their government does around "e-residency".

That would be Xolo.io - but you really don't need a Stripe equivalent in Estonia or most places to incorporate. Registering a company is not that hard most places. I wouldn't recommend Estonia because you still need to open a business bank account for your new entity, that's the hard part.

You don't need a C corp for a project like OP's. He would want a bog-standard LLC, which is fiscally transparent for U.S. tax purposes. It gets more convoluted if he himself is not from the US, as it would be treated differently by different countries.

But it isn't hard, it is just boring to get going.


I've been to Estonia twice. It's a great country with great people. I can assure you 99.8% of Estonians have negative interest in having ties to Putin's Russia. Putin is a thug - pardon the harsh opinion - and he wants to loot Ukraine's riches. Estonia is a member of NATO and offers nothing for looting, as they have close to zero natural resources.


Basically: if you start now with an email saying the service is going away unless it gets paid $1 per month (or even $1 per 3 months, depending on how people value it), you can figure out whether it will be worth it, set up Stripe, and then you have about a year to sort out an accountant and set up the business.

I mean, I am in Denmark and it isn't that difficult, and believe me, Denmark is more difficult and entrepreneur-unfriendly than most Western countries.


I wonder how it would fare if a service followed the Amazon wishlist kind of deal, and have random unrelated users directly pay some amount to the service provider, bypassing the site owner.

For instance Linode would expose 30 purchasable items for a site, and each item purchased by a user would pay for 1 day of uptime. As a tradeoff, money going to the project maintainer would need to go through Patreon if they set one up.


> You may need to register a company

Shouldn't you do that anyway to protect yourself?


Hmmm.. Is there a market for that?


> Hmmm.. Is there a market for that?

I don't think it's as bad as the parent comment makes it out to be to accept payments.

You don't need to register a company; you can act as a sole proprietor. This requires no paperwork, fees or filings. You would need a bank account, but most folks already have one, and your personal checking account would work. You can grab and tweak a pre-made privacy policy in less than an hour if you decide to include one.

I'm also pretty sure (I'm not an accountant) that the first year you're self-employed you won't get penalized for not paying quarterly taxes. Of course you still need to pay taxes in April, but you can deal with that then. Even if that weren't true, the worst-case scenario is you pay a fee for not filing every quarter. If you accepted payments and were making tons of money, hiring an accountant wouldn't be an issue, and if you were making close to nothing, you wouldn't have any fees, since you have to owe at least $1,000 in tax for the year to be on the hook for paying quarterly estimated taxes.

The last point, about writing code, stands true though. You would need to implement some type of payment-accepting feature in your app. Fortunately, accepting monthly or annual payments is fairly generic. Once you've implemented this in one app you can mostly drop it into another without a huge amount of effort; it mainly comes down to little tweaks and making the UI match your app.


> I'm also pretty sure (I'm not an accountant) the first year you're self employed you won't get penalized for not paying quarterly taxes.

I'm only chiming in here since I've become somewhat of a reluctant expert on estimated taxes over the years (assuming we're talking about the U.S. here; apologies to those outside the U.S.). And it happens to be fresh on my mind, since I tackled my Q1 taxes yesterday. ;) I've never heard anything about an exception for one's first self-employed year, so be careful with this. (But hey, the tax code is huge and complex, so I may just not know about it.) Although if you also have withholding from a W-2 job, it may cover you.

Generally, everyone is supposed to pay a certain amount of estimated taxes throughout the year, self-employed or not. But for most people this will happen automatically through W-2 withholding. I suppose it could be a rational move to not worry about estimated taxes the first year, focus on your business instead, and eat any penalties/interest. (You wouldn't be the first to do this.) Or you could just make a guess on what you might owe, and send some money in advance of each estimated tax deadline. If you underestimate, I suppose the penalties/interest would be a lot less than if you avoided it altogether. If you overestimate, you get a refund (or if you choose, a carry-forward to the following year).

Or you can do what I did 15 years ago, and spend a weekend studying IRS documents to understand how it all works, set up spreadsheets to calculate estimated taxes using the annualized method, and perform some minimal maintenance on it each year thereafter.
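For anyone who'd rather not build the whole spreadsheet, the core safe-harbor calculation fits in a few lines. This is a simplified sketch of my understanding of the rule (equal installments only, not the annualized method), with thresholds that change over time, so treat it as illustrative rather than tax advice:

```python
# Illustrative sketch of the IRS estimated-tax "safe harbor": no
# underpayment penalty if you prepay the smaller of 90% of this year's
# expected tax or 100% of last year's tax (110% if last year's AGI was
# over $150,000). Equal installments only; the annualized method is
# more involved. Not tax advice -- verify against current IRS guidance.

def safe_harbor_quarterly(expected_tax, prior_year_tax, prior_year_agi):
    prior_factor = 1.10 if prior_year_agi > 150_000 else 1.00
    annual_target = min(0.90 * expected_tax, prior_factor * prior_year_tax)
    return annual_target / 4

# First profitable year: expecting $20k of tax, owed $5k last year
print(safe_harbor_quarterly(20_000, 5_000, 90_000))  # 1250.0
```

Paying against last year's (known) tax is usually the easier target in a first profitable year, since this year's liability is a moving guess.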


Don't forget that HN has international users. The rules you describe don't have to be the same for everyone, and there are a lot of countries with much more bureaucracy.


Not everyone will pay, but even if 1% pay, that's $2.5k a month, and maybe enough to make it worth it.


Careful, this sounds like the 1% fallacy - just because 1% sounds like a small number doesn't make it an easy target.

http://successfulsoftware.net/2013/03/11/the-1-percent-falla...


Obviously, building a service on the assumption that 1% of users will support it may be prone to this fallacy, so a service should not necessarily be built on that assumption. But in this case the costs of building the service are already sunk; now it's just a matter of sending the email to find out whether 1% will support it.

I don't think there is a lot of reason to be careful in this case.


This is a very clearly written article. Business focused without the bs lingo. I like that kind of thing, because it gets me thinking about stuff instead of feeling stupid when stepping outside of my expertise.


That article isn't applicable, as it's discussing 1% of a total market, whereas the discussion at hand is about 1% of the site's current userbase.


It's still relevant. There are still numbers lower than 1%. Less than 1% of viewers will upvote/downvote a video they watched, which is much easier (and less expensive) than paying.


No, it's not relevant, because they are discussing completely different conversion metrics and at different scales. Also, your insistence that less than 1% of viewers will interact is based on the mistaken assumption that 100% of reported views represent someone who actually watched. Real life doesn't work like analytics.

About 1% of the people shown an ad (or any content) will even consciously acknowledge it, while approximately 1-3% of the remaining 1% will interact with it, and then 1-5% of that remaining group will actually buy the product. The 99% who didn't acknowledge its existence in the first place don't actually factor into anything other than how much you get charged for the ad.

The difference between the article and the OP's scenario is that the OP already filtered out the 99% and converted the rest. Now there is only one conversion left, turning them into paying customers, which happens at a much higher rate than 1% when you already have a captive audience.
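To make the arithmetic concrete, here is the funnel above multiplied out (the rates are this comment's illustrative figures, not measured data):

```python
# Multiply out a conversion funnel: each stage keeps only a fraction
# of the previous one. Rates are rough illustrative figures from the
# comment above.

def funnel(audience, *rates):
    n = audience
    for r in rates:
        n *= r
    return n

# Cold ad traffic: 1% acknowledge, ~2% of those interact, ~3% of those buy
cold_buyers = funnel(1_000_000, 0.01, 0.02, 0.03)  # ~6 buyers per million

# Existing user base: only the pay conversion is left
paying_users = funnel(250_000, 0.01)  # 2,500 even at a 1% rate

print(round(cold_buyers), round(paying_users))
```

Three stacked conversions shrink a million impressions to single digits, which is why a pre-filtered user base converts so much better.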


I assume you're talking about the forum spam service? I had a look at your website bookmarks, so sorry if I made the wrong link/conclusion. Why not use a freemium model?


I have been running a product since 2015 and I feel the author's pain. The original tech I used was Ruby on Rails, and it was great (back in Rails 5). However, with Rails 7 and all the other changes since 2015, the debt was pretty overwhelming. Updating dependencies became difficult. Deploying a new server felt impossible. It felt like it was held together with toothpicks and glue. Afraid to update X because Y would break. I couldn't update Y or Z would break.

Unlike the author, my product was making actual money ($XXX,000). I ended up completely rewriting the product from scratch (took ~3 months). I just swapped the old Rails service out for the new service on April 1st.

I stuck to "simple and boring" tech for the new stack. No more Rails, etc.


Lol I thought the appeal of rails was that it is “simple and boring”!

Congrats on having the gumption to do a rewrite. I’m curious what are the more detailed decisions you took to better future proof?

Given how often things shift in our world, I feel like anything I choose now will just end up being obsolete and annoying to maintain in a 5+ year timeframe.


Great question-- and may be worth a detailed/thought out blog post some day!

To future proof the service I wanted to use tech that was based on the fundamentals. This boiled down to no special tooling, no special build process, etc. Furthermore I cut out all dependencies I could. I got my third party dependencies down to two (postgres + nginx).

Practically speaking this meant: no node, no yarn, no ruby, no webpack, no sidekiq, no redis, etc.

The entire service is now Nginx, Postgres and a single binary that has html/css/js/assets embedded inside of it. The entire product is a single Go binary that connects to postgres-- that's it.

I could probably do without Nginx, but it has been nice for a variety of reasons and has never given me a headache.

My tech/decisions may not be for everyone (that is why I didn't share them initially) but it works incredibly well for me and most products I build.


Thanks for the reply! Would enjoy a blog post. I'm guilty of embedding HTML inside of a Rust binary, but it was more to fix an immediate dependency hassle, and I never thought about it as a long-term solution... but maybe it's not such a crazy idea after all.


Frameworks like Rails come with a constant upgrade overhead cost. If you're a solo developer, or even a small (<5) team, that overhead is likely to dominate your development time. Also, the main benefit of the framework (keeping a sane structure while multiple developers are making code additions) isn't any better than just ad-hoc sharing standards and practices amongst the team. So, it's better to use a simple nodejs server or even plain old php pages.

Once you get past a certain number of developers, the constant overhead is something that still needs to be dealt with, but relative to the human-hours of your entire team, it's much smaller. Also, the team is no longer able to ad-hoc share architectural goals, and so having a framework in place that enforces them scales much better.

Starting with a framework is tempting because everyone thinks that they'll get big. But what if you don't? Maybe it's better to have a little passive income than crash and burn. On the other hand, if you do get big, the tradeoff is that you'll need to rewrite with a framework. That's another opportunity to crash and burn, so it's debatable which is better. Personally I think that Twitter has demonstrated how to do the "start dead-simple and rewrite as the team grows" method.


> Frameworks like Rails come with a constant upgrade overhead cost. If you're a solo developer, or even a small (<5) team, that overhead is likely to dominate your development time.

Many years ago I used to write projects with Python/Django. The Django project releases 3-year LTS versions with 3-month overlap. LTS versions are supposed to ease the constant upgrade overhead, and hitting a 3 month window every 3 years doesn't seem too bad, especially when they also release documentation specifically guiding you from one LTS version to the next. Does Ruby/Rails not do this?


3 years isn't all that long. There's still Fortran and Cobol in the wild from 40 years ago. Most people in silicon valley will have worked on a 20 or even 30 year old codebase at some point. And why do I need to upgrade Django? Did the fundamental principle of MVC change? No, the developers of Django just didn't like some Python 3.8 syntax and decided to rip it out and replace it with some shiny new thing in 3.9. They stopped backporting security fixes to the old Django version, the one with the totally-gross Python 3.8 syntax. So now, to use the latest Django version, you need to upgrade Python minor version also. Easily avoided if you rolled your own framework. (Of course, sometimes you don't have a choice, because maybe Python deprecated some perfectly-good syntax between 3.8 and 3.9 because the devs didn't like it anymore). Another thing is, that's every 3 years for Django. If you start stacking other 3rd party dependencies on top of Django (even something as simple as a MongoDB ORM), you'll have a separate, secondary upgrade schedule for that library.


IMHO the solution to retaining control of an app that wants to sprawl out is to set up the dev and prod environment via ansible/puppet/similar, into an otherwise bare VM or container, with only a forward proxy to reach it.

Doesn't mean your application isn't going to sprawl, but you will at least have a complete list of all its moving parts at all times, and changing them out can at least be done in a controlled way.

Still, it sounds like SSLPing's problem (and yours) was mostly bitrot, and no amount of ansible will cure that.


I'm curious what was so hard about upgrading from Rails 5 to Rails 7. The upgrade from 5 to 6 was smooth for me, and 6 to 7 was even smoother.

Seems like a rewrite would be way more effort.


There were many factors. Rails itself isn't too bad to upgrade, but the problem boiled down to various Rails engines (active admin, etc). There came a point where I sat down and had to decide if it was worth de-tangling the mess and potentially breaking a lot of things to upgrade Rails and the various dependencies (only to do this again in 5 years), or possibly build things in a way that would work regardless of the dependencies.

Sure-- the rewrite was more effort up front but the hope was the long term compounding benefits would be worth it. I don't generally advocate for rewrites but in this case I went for it (and actually got it done).


What stack did you end up moving to?


What was the new tech stack?


If the owner of the site reads this, I'm unfamiliar with your project but I've spent my career untangling especially tricky problems of the kind you're describing. If I'm being honest I find it to be fun.

Based on upvotes this is clearly a project some people cared about. If you would like, comment and I'll add a way of contacting me and do free consulting to take a look and see if I can fix it.

I can't make any promises but from what you described it sounds savable.


> it was using Docker Swarm, which fell out of favour (Kubernetes won after all)

As someone who is still running all of their homelab stuff (well, almost) on Docker Swarm, i dread the day when the switch to Kubernetes becomes inevitable.

Maybe Nomad will save me.

Or something like K3s or even Rancher - even though the former has some ingress weirdness with Traefik and default wildcard SSL/TLS certificates since not everyone uses Let's Encrypt or wants DNS challenge for private stuff and the latter needs a lot of resources to even run.

Still, when things go wrong with Docker Swarm, they're generally pretty easy to debug and solve - it's like adding just a little bit more on top of Docker Compose. Worst issues i've had were related to networking and kernel updates breaking something, though purging the node and carrying over all of the bind mount data as well as re-deploying the stacks solved that.


Shit like this is why I never warmed up to Docker. The benefits for a single-man semi-hobbyist working on simple sites, often static, are heavily offset by the busywork these tools constantly generate. They move almost as fast as the JS world, add more and more things to track and update, and risk becoming another point of failure.


> The benefits for a single-man semi-hobbyist working on simple sites, often static, are heavily offset by the busywork these tools constantly generate. They move almost as fast as the JS world, add more and more things to track and update, and risk becoming another point of failure.

In regards to something like Kubernetes, i largely agree.

However, there is (currently) certainly some merit to just using Docker, Docker Compose or even Docker Swarm, all of which are relatively boring and stable.

Those essentially solved the problems of environments, configuration and backups for me. Most *nix distros have humiliatingly failed at having a single directory for everything an application needs to run, which horribly complicates figuring out where your data actually lives and how you could re-create everything from 0 in case of a failure or migration. The same goes for different system packages and even sometimes pinning versions. The same goes for networking being weird, which is especially relevant in situations where you want different versions of the same package running on the same server (e.g. MySQL 5.7 for one project, MySQL 8 for another).

A lot of that was already solved by VMs and tools like Vagrant, but it was a bit heavy and never caught on - Docker essentially solved that problem by allowing you to reason about your applications like applications on a phone - just a package of stuff that you care about, adding some resiliency and load balancing capabilities on top.

And then things like Kubernetes came along and addressed more enterprise concerns that are applicable at scale and inadvertently pulled along a bunch of people for the ride, when they have neither the need nor the capacity to work on such complicated software.

I actually talked more about my experiences with this in my blog post, "My journey from ad hoc chaos to order (a tale of legacy code, services and containers)": https://blog.kronis.dev/articles/my-journey-from-ad-hoc-chao...

That said, it's nice if you don't need to solve the problems that containers do. Alternatively, if you have other ways of solving them (e.g. the ansible + systemd setup), then go ahead and use that until your needs expand!


> Those essentially solved the problems of environments, configuration, as well as backups for me. Most *nix distros have humiliatingly failed at having a single directory for everything an application needs to run, which horribly complicates figuring out where your data actually lives and how you could re-create everything from 0 in case of a failure or migration.

The Filesystem Hierarchy Standard actively militates against a single-directory setup. Instead it is designed so you can keep your static stuff on one read-only location, your interim application runtime stuff on another (/var), your user data on a third (/home)...

It's a high-end standard appropriate for serious computers where these directories can be on different disks, with different filesystems, security protections, performance implications, and backup strategies (no need to back up what you can simply reinstall!). As a small-time user it's only your choice because it's the standard choice.

I guess you can have /opt/xyz?


> As a small-time user it's only your choice because it's the standard choice.

Thanks to Docker i can mostly reject its conventions (for actually organizing my application data).

Instead, i might have something like the following file paths and container bind mounts:

  PATH ON FS --> PATH IN CONTAINER
  /docker/app_foo/data/container_x/var/www/html --> /var/www/html
  /docker/app_foo/data/container_y/var/lib/postgresql/data --> /var/lib/postgresql/data
  /docker/app_bar/data/container_w/app/data --> /app/data
  /docker/app_bar/data/container_z/var/lib/mysql --> /var/lib/mysql
  *the docker/app_X directories can contain deployment descriptors, instructions or other notes if need be
The containers can use FHS internally and not know about where or how things will actually be stored outside of the container (bind mount vs volume, local vs networked). Thankfully, usually there are 1 or 2 directories per container that i actually care about and want to persist, the rest is disposable (e.g. data vs runtime files and libs).
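For concreteness, the first stack above might be expressed in a compose file roughly like this (service and image names are made up for illustration; only the host-side paths come from the layout above):

```yaml
# Hypothetical compose stack mirroring the /docker/app_foo layout;
# everything worth backing up lives under /docker on the host.
version: "3.8"
services:
  web:
    image: php:8-apache
    volumes:
      - /docker/app_foo/data/container_x/var/www/html:/var/www/html
  db:
    image: postgres:14
    volumes:
      - /docker/app_foo/data/container_y/var/lib/postgresql/data:/var/lib/postgresql/data
```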

The hosts can have a non-standard folder at FS root under which all of the persistent data will live, thus backups become easy, and due to organizing stacks/apps in a tree structure, everything that's needed by a particular environment is also immediately obvious.

Furthermore, this doesn't get in the way of the actual OS operating, nor does the OS get in the way of my deployments - if anything ever goes wrong, i can just wipe and reinstall, setup with Ansible and carry over the /docker directory as necessary. Of course, depending on how everything is mounted, it might as well be just reconnecting another drive or a few drives and setting up fstab.

I think that treating your entire system as one large entity is a horrible idea - instead you should have a clear distinction between what's a part of "infrastructure software" and what's your "business software", those two should be kept as separate as possible. The former should be easy to wipe and update and even swap out altogether (within reason), the latter should be treated as critical and backed up thoroughly and often.

Edit: my point is that most existing software is really bad at this distinction and there are far too many opinions and as a consequence your Tomcat install is strewn across a dozen folders, same with many other pieces of software. So why should you live in fear of forgetting some important configuration file or some data directory?

Of course, YMMV, but things like Ansible and Docker (or Podman, or containerd) with the "infrastructure as code" and "cattle instead of pets" approaches is pretty serviceable.


We're using Swarm in production for a bunch of our small CMS projects and it's been great. "A little bit on top of Docker Compose" is a great way to describe it. It's just so easy to set up, and I think it's still a reasonable option when you want to dockerize stuff you've had running on a VPS.


Hey! I'm the creator of SSLPing... yup, I'm the guy who got famous when he killed his dream.

Switch to K3S, you won't regret it. I was hesitant to pour more work into my project to switch to K8S (which I know well, it's part of my day job!)... K3S is an excellent way to get into Kubernetes. Or use Digital Ocean's or other hosted implementations...

Kubernetes has a lot to offer... peace of mind is one of them!


> Switch to K3S, you won't regret it.

K3s is pretty great, at least as long as you keep on the happy path.

In practice, that probably means a DEB based distro, quite possibly Ubuntu LTS and something like Rancher for management (if you prefer a UI of some sort and can afford the spare RAM), or Portainer, or even just Lens.

Sadly, i had a situation where i needed Kubernetes on a server at work, which had about 8 GB of RAM, so OpenShift was out of the question and most other "full" K8s distros also weren't viable, since that same server also had to run the actual containers. Moreover, i was stuck with Oracle Linux, on which getting K3s working properly was a bit problematic (though the same probably applies to most RPM distros).

Not only that, but instead of Let's Encrypt, i needed to use custom SSL/TLS certificates, installing which with the Traefik ingress and using them as the defaults instead of the self-signed ones was quite the mess, about which i wrote in another comment here on HN: https://news.ycombinator.com/item?id=30672765

In short, to get it working i needed:

  - a ConfigMap for Traefik, knowledge about the structure of the ConfigMap (tls.stores.default.defaultCertificate)
  - a TLSSecret for storing the actual certificate/key
  - a TLSStore (which i also needed to actually use the secret, spec.defaultCertificate.secretName)
  - a HelmChartConfig for Traefik to load the ConfigMap with the mounted secrets and config
none of which were documented as well as i'd like, because Traefik isn't necessarily aware of the intricacies of being used with K3s, and K3s hasn't got instructions for such a setup because the happy path is using Let's Encrypt. Furthermore, attempting to use Nginx as the ingress instead failed, and trying to uninstall all of Traefik's resources hung: something about Kubernetes waiting for the resources to do something so that they could be deleted, a callback that never seemed to happen.
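For reference, the TLSStore/secret pair from that list looks roughly like this. Names and namespace are placeholders, and the CRD API version drifts between Traefik releases, so check your installed CRDs before copying anything:

```yaml
# Sketch of a default-certificate setup for the Traefik ingress;
# placeholder names, verify apiVersion against your Traefik release.
apiVersion: v1
kind: Secret
metadata:
  name: default-cert
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded cert>   # placeholder
  tls.key: <base64-encoded key>    # placeholder
---
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: default
spec:
  defaultCertificate:
    secretName: default-cert
```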

Oh, attempting to use Rancher also failed due to recent changes with cgroups v2 and varying support for all of that, whereas it seemed to work properly on a throwaway Ubuntu LTS VM/VPS.

In short:

  - i still think that the goals of the K3s project are really nice, the current resource usage is surprisingly decent
  - if you have to use Kubernetes on RPM, go with OpenShift, if you don't have the resources for it, just use DEB distros
  - the struggle with niche setups, like i needed, is largely not worth your time, the documentation for these still isn't there yet (example of mature documentation: Apache, which has been around for decades)
  - ideally, just pay someone else to run a cluster for you, if you can (i can't, because mostly on-prem at work and am relatively poor in regards to personal homelab)
I'm still torn about migrating over to Nomad or K3s when i eventually retire my Docker Swarm clusters (maybe in the next 5 years), but for now i'm putting those plans on the backburner entirely, to give both projects a few years to mature and become more stable/established. In regards to Rancher, they have RKE2 in the works as well.


I've been using k3s on Ubuntu (for development) for a while, and I switched it to ingress-nginx and cert-manager... It was always straightforward.

But I must admit it's easy/easier when someone else is managing K8S for you, be it DigitalOcean or Tencent (which I have experience with) or AWS / Google Cloud.

K3S is quite comparable to Docker Swarm though. I liked Swarm when I developed SSLPing because of its simplicity compared to K8S, but once you learn your way through K8S, there's no point sticking to Swarm I think...


I know you say you will let the projects mature, but I think it's a pretty good time to start picking up Kubernetes or at least getting your hands dirty with it. Nomad is a much simpler solution but not nearly as popular, in-depth or in demand as Kubernetes.

If you want an 'in' to Kubernetes and better automation of your homelab, I would suggest checking out this repo[1]; it's nearly an A-Z guide on getting k3s running on Ubuntu (I am no distro snob, Ubuntu just works well for most home workloads in Kubernetes). GitOps tools like Flux and Argo are really becoming popular to use with Kubernetes, because all your configuration is stored in Git and the GitOps tools deploy manifests based on the Git repo state. My entire home cluster[2] is open source and there are many who are doing something similar[3].

All of this comes at the cost of learning these tools, which is not easy, but from my interactions with people who do take the plunge from docker-compose or docker-swarm, most of them stay, and once they see the benefits of GitOps and Kubernetes they cannot go back to their previous ways. Automation is king, and it is much easier using Kubernetes because most of the tools already exist for it.

1. https://github.com/k8s-at-home/template-cluster-k3s

2. https://github.com/onedr0p/home-ops

3. https://github.com/k8s-at-home/awesome-home-kubernetes


This announcement illustrates the amount of breakage in the software world very well. After just six years the product is beyond repair because the programs on which it relies changed too much.


I had to kill a free service that I absolutely loved due to similar issues.

Couldn't find a way to monetize; it was running an old version of Node, Upstart vs. systemd, and a niche database (RethinkDB) that fell out of favor, where upgrading was too painful.

It wasn’t nearly as popular as SSLPing, but it was mine and had a following.

Thanks for writing this note: it’s OK to shut something down if it causes you agony.


RethinkDB. What a blast from the past. They were such a compelling product, but they just never got the attention they deserved. Mongo by comparison was strictly worse, but they never got out from under Mongo's shadow.


Hey! SSLPing creator here... I feel your pain, deeply, inside me at the moment!


What was the service?

Getting broken things going again can be kinda fun...


It was a game called Kriegspiel hosted at krgspl.com

More info on the game here: https://en.m.wikipedia.org/wiki/Kriegspiel_(chess)

It worked really well at first, but didn’t scale due to some tech decisions. Maybe at some point I’ll fix it and rerelease it :)


It's crazy that 5 years of updates can break most systems. It's long in internet time, but nothing compared to other industries. Maybe we should settle down a bit and have longer LTS cycles.


I use testssl[1], and it has saved me quite a few times with Nginx and other TLS issues. Give it a try. It is written in bash (the only requirement) and works on Linux, macOS, *BSD and WSL. [1] https://testssl.sh/


I use testssl too; the Dockerfile has saved me countless times, as having an image to run and test old SSL versions is godlike.


For reference, what are the advantages of something like this vs. something like openssl?


One advantage is easier usage. OpenSSL's syntax is quite hard.


Happy user here. I am going to miss your service. It saved my day a couple of times and I loved the no nonsense site, service & notifications.

Thanks for all the fish and I wish you a great time developing your next service!


thanks man! This goes straight to my heart (yeah, I'm chris, I amended the message on sslping.com to show my HN profile). I surely didn't expect SSLPing to END its life on the first page of HN. I'm sad today, really sad :( but my users were the best!


That unstyled short message on a default white background is touching. I felt for Chris reading those technical debt bullets. Kudos to him, and farewell!


PS: I'm not a user of SSLPing, but I built OnlineOrNot and we have a basic SSL verification feature that'll alert you when your SSL certificate is no longer valid: https://onlineornot.com/docs/ssl-verification

Looking at building out this feature further (with expiry warnings 3/7/14/x days before cert expiry), let me know if you're interested.


Quite concerning that the lifetime of modern software, even for a trivial tool, is so short. We have to do something about that.


I can absolutely comprehend how he feels.

Congratulations for recognizing that it's the best for it to be over. Feel relieved and enjoy the newly gained freedom.


I have a service with hundreds of requests per day and thousands of users. It is an old RoR app that is impossible to update now: 3 major versions behind the latest, with several dependencies that can't be built on newer Ruby versions. Even the Ruby version it uses can't be built anymore on more recent OSes because of OpenSSL incompatibilities.

Updating it would take hundreds of hours, which at some point costs almost as much as (or even more than) rebuilding it from scratch. As I do not have this time, it will probably die a slow death.


Respect to Chris for not only keeping it going for so long, but closing things with an explanation instead of just vanishing like most services.

I know that pain of debugging a Docker swarm, and it's not fun!


Thank you man! I'm Chris, the creator of SSLPing, and this goes straight to my heart


Reminds me of a decision I made long ago to use a 32-bit instead of 64-bit Linode server because I read somewhere that 32-bit was slightly faster. Nodejs stopped providing 32-bit binaries at version 8 or 9? and now I’m stuck with the risk of migrating my working cron, php, mysql, node spaghetti or being constrained by apis/packages available in an ancient version of Node.

Edit: I thought of slowly migrating everything to Docker, but the official Docker binaries are also 64-bit only.


What features did SSLPing have that forced it to rely so heavily on old versions of OpenSSL?


From https://web.archive.org/web/20220328120435/https://sslping.c...

> SSLPing needs less than 5 seconds to check your server and tell you what's wrong with your SSL/TLS security.

So I guess it used OpenSSL to figure out what was wrong with people's certificates. Not sure what it used that relied on internals so heavily it was hard to upgrade, or if the public interface just changed a lot.

Even with that, it seems there were multiple issues that "prevented" (or rather "made it harder than justifiable" for) the author from keeping the project up, not just the OpenSSL one.


My guess: SSLv3 has been dropped from OpenSSL, which means hitting an SSLv3 server just fails to connect quite early in the process, so there would not even be cert reading. Plus, IIRC, the connection error can be very obtuse, which means sslping could not do that:

> tell you what's wrong


SSLping creator here...

Actually node.js is using openssl under the hood... SSLPing implemented a partial SSL implementation to quickly test support for SSL versions and ciphers, but used node.js native libs too. Newer node.js versions weren't able to get an SSL certificate out of an SSLv3-only server, for instance, which made it impossible to test for expiration, etc...

But yes, there were multiple issues indeed


Ah ok, so it sounds like an automated (though probably cutdown) version of the SSLLabs server test.


It does a little more and little less.

More: Enter a list of (sub-)domains and get informed via email when "SSL things" change (for better or for worse), or your https certificate is about to expire.

Less: No fancy-pants "report"

Personally I prefer https://hardenize.com nowadays, over ssllabs, for these kinds of queries.


https://hardenize.com is quite pretty, but there's nowhere near $999/mo of value in it for me!


Hardenize's paid plan is intended for larger businesses, where we combine infrastructure discovery with continuous monitoring and many other things. However, ad-hoc assessments are free for everyone and we intend to keep it that way. I hope that we will in time be able to provide plans at lower price points and maybe a free plan at some point.

(Hardenize founder, previously also SSL Labs founder.)


Speculation ahead:

It didn't just check if a certificate is valid now, but whether the certificate, intermediary, CA (or anything in the chain) is about to expire soon.

It might have needed to poke deeper into OpenSSL than regular use does.


You don’t need anything peculiar in OpenSSL to do that. I’ve been monitoring my ssl certs with a simple nodejs script (it just connects to get the cert details so generally less than a second per domain).

Perhaps their service also tested for deprecated ciphers/tls versions?
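The minimal expiry check described above fits in a page of code. Here is a hypothetical Python sketch of the same idea (the stdlib `ssl` module is standing in for whatever the commenter's Node.js script actually used; host and port are illustrative):

```python
import socket
import ssl
import time

def days_left(not_after: str, now: float) -> float:
    """Days between `now` (epoch seconds) and a cert's notAfter string."""
    return (ssl.cert_time_to_seconds(not_after) - now) / 86400.0

def cert_days_left(host: str, port: int = 443) -> float:
    """Fetch the peer certificate and compute days until it expires.

    Note: with default verification on, an already-expired cert fails
    the handshake itself, so this only measures *upcoming* expiry.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # parsed dict with a 'notAfter' field
    return days_left(cert["notAfter"], time.time())

if __name__ == "__main__":
    print(f"example.com: {cert_days_left('example.com'):.1f} days left")
```

Run something like this from cron and email yourself when the number drops below a threshold, and you have the core of a one-domain monitor.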


it did test supported ciphers and SSL versions, common vulnerabilities, etc... (SSLPing creator here)


It's also blocked by AdGuard.


Yeah I was trying to figure out what was going on here. Is it blocked cos it's closing or was it blocked from before?


SSLPing was a tool that was designed to test SSL/TLS/PKI server configuration. According to the author's message, it was built on top of OpenSSL. There have been significant changes made to OpenSSL since 2016, many focused on improving security by removing very old cryptographic primitives that are obsolete and insecure. Think RC4, 3DES, SHA1, and so on. If you're relying on OpenSSL to detect presence of such primitives on a server, upgrading your own (client) OpenSSL version breaks the functionality.

This is a problem for all testing tools that rely on OpenSSL. If you follow this direction, you typically need to use at least two OpenSSL versions, one new to test modern features and one very old to test obsolete features.
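To make the dilemma concrete, here is a rough sketch (not SSLPing's actual code) of how a scanner pins a handshake to a single protocol version using Python's stdlib `ssl`. The catch is that it can only probe versions the local OpenSSL build still ships; on most current builds, pinning SSLv3 fails locally before a single packet is sent, which is exactly the limitation described above:

```python
import socket
import ssl

def offers(host: str, version: ssl.TLSVersion, port: int = 443) -> bool:
    """Attempt a handshake restricted to exactly one protocol version."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # probing protocol support, not identity
    ctx.minimum_version = version    # pin both ends of the allowed range
    ctx.maximum_version = version
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

Usage would be something like `offers("example.com", ssl.TLSVersion.TLSv1_2)`; to report on truly ancient protocols a tool still needs a second, old client stack compiled with them enabled.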


Hi Ivan,

I know you appreciate the issue to its fullest like no one else! Cheers


Hey Chris, please don't kill the project. Instead, give it a chance to live by selling to another founder willing to take on SSLPing.

I'd be happy to help you find this person through my site called Microns (https://www.microns.io).

I have an audience of entrepreneurs and founders who will like your project. Just let me know if you would like to be featured.


Kudos to Chris Hartwig for throwing in the towel and shutting it down.

Here, read this blog, where the author suggests that the service is free and that the free account has no limitations and allows you to scan thousands of websites... appalling: https://www.ctrl.blog/entry/review-sslping.html


thank you man, straight to my heart it goes (SSLPing creator here)


You have my utmost respect. I hope you get to spend the time on other things that bring you joy.


> Docker refuses to run after attempting an OS update which broke too many things (upstart vs. systemd being one, FS drivers, etc...).

So the whole cathedral came down, with no way to recover, after a mistaken update && upgrade with no backups? And all of that code stack was open source?


Damn. Sorry to see that this service was running at a loss.

I have a service that, among other things, checks SSL certificates: DomainProactive (https://domainproactive.com). Please try it out if you have this need.


> - it was using Docker Swarm, which fell out of favour (Kubernetes won after all)

This feels like saying we should stop supporting Firefox because Chrome won after all. Kubernetes is far from perfect, so we shouldn't eradicate all of its competitors on its behalf.


For an alternative, I get great service - not free but very cheap - from updown.io.


I saw updown.io here on HN some time ago, it's a great service, cheap enough I don't notice, and quiet enough I only notice when there's a problem.


What a mess of incompatibility ssl has become, and all to provide questionable amounts of privacy and security along with a monopoly on eavesdropping to those who have the means.


I'm not sure about exact features of this service (I didn't know about it), but I'm using:

- https://www.ssllabs.com/ssltest/

- nmap --script ssl-enum-ciphers news.ycombinator.com -p 443

The nmap one works with various non-HTTP protocols, such as IMAPS and FTPS. The Qualys service is much more detailed, including hints about certificate chain problems etc.


According to the OP, ssllabs takes much longer to run a similar check https://news.ycombinator.com/item?id=30993593.


I'm an older guy Chris and I'm guessing you are in your 20/30s? I was just musing today how my home networking hobby shaped my work life. I'm sure the journey you have been on will shape your life too. Keep doing the stuff :-)


Hey! I'm 50 actually :-) I'll keep doing the stuff nonetheless! Cheers


Sorry, but since the about page of sslping is down, what is it? From what I can tell from the scant information on the internet, it's a monitoring tool that tells you when your websites have an SSL vulnerability?


Make the web monetizable again


Is it just me, or did SSLPing use an incredibly complex set of tools? Simplicity always pays off. But I guess real simplicity is harder to implement, and a lot of those tools make life easier for developers.


I think it's just you, the advice in 2016 was "just throw it up on something using docker compose", "you can later expand it with Swarm".

NodeJS was a common choice too.

The real problem was their lack of an upgrade path I suppose, not the tech stack itself.

I'm also guilty of doing this and I'm an ops person, if you can't keep upgrading at a decent pace then the cost of all the upgrades becomes too great to take in one chunk.


> I think it's just you, the advice in 2016 was "just throw it up on something using docker compose", "you can later expand it with Swarm".

> NodeJS was a common choice too.

I agree it was common and perhaps fashionable, and perhaps it made life easier, but it also added so many levels of complexity and technical debt that are now a burden. That's why I say simplicity is hard, but then it pays off (perhaps).


Compared to the standard deployment stack these days, what SSLPing was using honestly seems extremely simple, though.


It is, but the people realizing this are very small in number.

It's not just SSLPing, it's the entire industry.

SSLPing is arguably a lot simpler (comparatively speaking) than many other products.


By today's standards, Docker Swarm on dedicated servers is very simple. Complexity would be putting this into cloud with auto-scaling and probably half a dozen cloud services stitched together.


These sorts of issues are why I moved my personal systems over to Kubernetes years ago for infra, and for code I focus on language ecosystems with longer horizons.

very sad, but, also, yeah. :-/


Everything improved until nothing worked anymore.


open source it man, we will figure out how to fix it !


Thanks Chris


Can’t he just spin up an EKS cluster, configure his servers there, and flip the DNS?


This entire service ended because there weren't backups.


SSLPing creator here.

There are backups; the db is safe. But you can't easily fix a Docker Swarm cluster when Docker refuses to run, systemd tries to replace upstart, etc. The message I wrote tells the whole story, and backups were not the problem (I even had a redundant three-way database cluster, which most commercial companies don't have).


It seems that posts stay floated near the top of https://news.ycombinator.com/best for a max of maybe ~5-6 days. This has been up for 1-2 days and from a conservative engineering perspective I'd probably be comfortable relying on it staying there for at least 2 more.

So you have a one-shot opportunity for the next day or two to consider changing the webpage to say something different.

Reading the sentiment amongst the comments and the perspective you've also shared, some significant points emerge for me:

- This is a side project you were really invested in and you're really sad to see it go

- It wasn't generating very much money, but you weren't in it for the money

- A few people definitely used and appreciated it - so it had a decent bit of mindshare

IMHO this could easily go the open-source route, or a small private team of assistants could help get this going again and maybe help maintain it as people have spare time available.

- You appear to have some level of one-step-beyond-MVP mindshare, and a catchy name. Not only would this make it tricky for forks to take off in an open-source setting, but I can also see full-clone attempts being promptly chased away with pitchforks as well. This is the scenario where open source shines I think - the risk of wholesale copying seems interestingly low.

- You aren't interested in this for the money, so opening the codebase doesn't really let especially novel cats out of any bags

- There is enough clear interest that people will definitely show up to help

Some other considerations:

- Instead of committing to open-sourcing immediately you could also build a team out of all the people expressing interest in contributing spare time. I honestly wonder if the energy level would be lower than a traditional open source project. In this case I can see enough interest coming to the table to help unravel the existing codebase, and enough people might even stick around long-term to help out with simple maintenance hitches and fixes as they come up.

- If there were any concerns about forks/clones the exceptionally restrictive AGPLv3 would probably take care of those sufficiently well (perhaps this is a good question for the theoretical team mentioned above)

TL;DR, my opinion is that if you maybe want to take a break for a while but dive back into this in the future CHANGE THE WEBPAGE NOW while eyeballs are still looking at it. Point people at a Discord server or empty GitHub repository (lots of closed-source stuff uses GitHub for issues) to pin or star so they get pinged when there's activity in the future.

Of course the caveat is that I can only respond to the context I can see, and this may not be useful advice. Kind of an on-the-spot "uhhhhh....", good luck :P


[flagged]


>I use [service] for SSL monitoring

The /about page of this service confirms, read with your previous comments on this site, that you have an undisclosed affiliation with this service. Namely, you were the person who launched it. You should disclose that fact and not purport to simply be a user of the service.


You feel like plugging your service, you should disclose it's yours.


Seems that running this in a container to start with would have guaranteed certain libraries and OS settings were consistent…


It was run in a container, though:

- it was using Docker Swarm, which fell out of favour (Kubernetes won after all)

Docker Swarm is the built-in Docker orchestrator, so it was running as a Docker container. It still doesn't protect you from anything around it breaking, such as the actual servers the orchestrator runs on. Something like Ansible could have been useful (or even VMs with snapshots) for more easily reproducing the setup in case of failure, but it's perfectly fine not to want to do that if it wasn't financially viable.


https://sslping.com/

This site can’t provide a secure connection

sslping.com sent an invalid response. ERR_SSL_PROTOCOL_ERROR

/Ironic


Did you ever update your root certificates after some of the LetsEncrypt certificates expired in September?


Works for me, FWIW



