
Congratulations! Fast and stable DB.

In 2011 I wrote a PHP client for Redis with features like tags, safe locks, and map/reduce. It was heavily tested and users had no issues - and since there were no issues, it was removed from the list of clients after a few years as "abandoned". Nobody even tried to contact me :) Now I don't use PHP and barely use Redis, so it's all in the past. But that experience taught me a lot of new things.

Salvatore Sanfilippo, you are a great programmer. The Redis API is elegant, safe, and effective. I personally have had zero data losses in all the years I've used Redis. And this DB is always extremely fast.




Funny anecdote though. No patches every few hours, no 1000 issues open? Apparently you were too good of a programmer ;)


I'm still a programmer, I just mostly use Rust instead of PHP. It's not an anecdote; there were 6-7 issues on GitHub and the code was really well tested.


Why not post a link to it? That would be a simple and welcome revival.


Redis evolved, clients evolved; there are better PHP clients now, I believe.

And I'm not here to promote my code; I'm here to congratulate Antirez and thank him for his great work and for being a good example of how to write an API, how to write stable and fast code, and how to communicate with the community. His blog is a source of lessons, wise thoughts, and inspiration.


Google says anecdote means "a short amusing or interesting story about a real incident or person."

Edit: This should have been a reply to the comment from EugeneOZ saying it's not an anecdote; I replied to the wrong comment by accident.


The literal translation from Greek is “unpublished”, which refers to a short incident irrelevant enough to not be part of the main story. In modern Greek, anecdoto means joke.


Hm, I'm no native speaker. In my language, we use the same word for "stories from the past".


I am a native speaker, and this was correct usage of "anecdote" by any reasonable definition.


That is the irony of software development. In theory, good software could last forever without any maintenance. In practice though, nothing has to be replaced as often as software ;-)

What does that teach us about our software development practices?


I think developers need to learn to build to last. We have medical software that hasn’t been updated for two decades that runs just fine.

That’s often unobtainable with modern software development because we rely so much on things that change too often, but it doesn’t have to be that way.

It’s a paradigm shift of course, but I think our business really needs to take maintainability more seriously than it does. This goes for proprietary software as much as Open Source, but with Open Source there is the added layer of governance.

I work in the public sector and we operate quite a few Open Source projects with co-governance shared between different municipalities, and the biggest cost is keeping up with updates to “includes”. It’s so expensive that our strategy has become to actively avoid tech stacks that change too often.


While I don't disagree that we should strive for maintainability, things like medical software, airplane software, or similar highly tested mission-critical pieces are specifically built to last that long. Nobody is going to pay us to build a webshop to last for 20 years; that's just not a necessity when getting it out quickly is so much more important from a business perspective than making it last forever.

> That’s often unobtainable with modern software development because we rely so much on things that change too often, but it doesn’t have to be that way

The reason we rely on things that change often is that we want to leverage them to get products out faster. There are many different layers of that (as every tech stack is essentially a product by someone), so we have lots of updates to deal with. The flip side of slow-moving projects is that bugs might not get fixed or helpful new features might not come in, meaning you have to build them yourself.

As a community we know and have known how to build mission critical software for decades, but we actively often decide not to do it because it isn't that important compared to other factors.


So interestingly - the web shops you’re talking about do want to maintain their client data, and do expect it to be available “forever,” somewhere. The payment processors absolutely do at a minimum. Some of those layers are highly hardened.

While the particular Etsy clone, t-shirt-of-the-day site, or customized shower curtain site will certainly come and go, it’d be an entirely different problem if Visa, PayPal, Stripe, Swipe, or whatever payment processor packed up and went home at random.


We need the foundations/infrastructure to be built to last. People need to identify which kind of software they're making and treat the infrastructure as unchanging. Changes in the basement need to be carefully considered with a default stance of rejecting them unless justified by reasoning that has a time horizon of many years.


Tell that to the number of shops setting up factory IoT based on Node.js...


I have to be honest I do believe Node.js will be around for a long time. The improvements over the past few years have been vast thanks to the ever improving standards and all the major cloud companies are heavily invested. It has the world's largest public package registry as well (not that the sheer quantity means you can always find a high quality library).

I've only recently switched after years of scepticism but for the sort of stuff I do it's more than good enough. It has its warts but so does every language that's stuck around.


I don't necessarily think that the language or the core thing won't stick around. But unless you force people to really decide which packages they require, you end up with an unaudited mess of packages (basically with every package - or is anyone really creating portable, stable apps of a relevant size based on the core Node environment...?)


Yeah for sure the language definitely makes it easy to build something that won't last. I misunderstood your comment so sorry about that.


I agree with you in principle. But it should be noted that any software that interacts with the world outside of itself can't be considered to be in good working order if it hasn't been audited and updated to resist security vulnerabilities.

I'd argue that medical software shouldn't be connected to networks because security is hard, and most people get it so wrong. If that's part of the design, then the goal you're talking about is attainable. But in many cases, software isn't useful for its purpose if it can't access a network, and so the idea of just leaving it alone for decades at a time is an actively bad goal.


You’re absolutely right, but we also operate Django applications that haven’t needed anything but the occasional security update over a lifespan that is longer than the existence of React.js.

I like React, by the way; it’s just an example. But we’ve certainly had to spend a lot of dev time on JS frameworks in general.


Most “runs fine after 20 years” software is really “security nightmares that people are afraid to touch”. Great designs and forward thinking are helpful, but “code and walk away” just isn’t the world we live in.

The new paradigm has to be “plan to evolve with the ecosystem.” There are just too many moving parts to treat software as static.


None of our old software that was built to last has security issues.

I know it’s harder to build with security in mind in the modern connected world, but as an example of a web app that doesn’t need much development time post-implementation, we have a Django app that has needed nothing but security updates and runs perfectly fine. So it’s not like it’s impossible either.

Don’t get me wrong, we’ve been as guilty of “wow this new tech is cool” as anyone else, which is where the lessons come from.


This debate never ends. Modern SaaS offerings are nothing like software from 20+ years ago, which was designed to be delivered in one shot over the course of months or years, with many upfront hours spent crafting a precise spec that would not change whenever an investor dropped a new Series X round or the COO suddenly decided to "pivot" and promised a working minimal viable concept in 2 months without consulting anyone.


Even if the software was bulletproof, the context, environment, requirements, and expectations that the software is used under change, requiring software changes if the software is to remain as effective as it was.


And we are lucky that with software we have the flexibility to rebuild without many of the costs other disciplines face. If I want to rebuild a skyscraper in the same location for the same purpose, I can't build it offsite and then quickly publish it to the building site. I also don't get to reuse any of the cement or girders I used to build it the first time. Additionally, I can't easily redesign a skyscraper to support a totally new use case without impacting existing tenants and the way they use the building.


People say this, and it certainly seems true on its face, but software change is still without a doubt one of the most expensive parts of software development, and in fact we engineers spend a lot of time trying to learn how to design software to support change and how to make changes reliably.

To bring it back to the OP, redis is notable for being developed _very carefully and slowly and intentionally_, compared to much software. You won't get a feature in as quickly as you might want, but redis is comparatively rock-solid and backwards-compatibility-reliable software. These things are related. It takes time and effort and skill to make software that can change in backwards compat and reliable ways, takes lots of concern for writing it carefully in the first place.

Change of software is _not_ in fact easy. It might be easier than a bridge. But of course people just _don't change_ bridges, generally. We understand much better how to make requirements for a bridge that won't need to be changed for decades. Software might be easier to change than a bridge, but dealing with change is nonetheless without a doubt the most expensive and hardest part of producing software that will be used over a long term, and quality software is not cheap. And we haven't learned (and some think it may not be feasible ever) to make software that can last as a long as a bridge without changes.


My most popular open source library is a Redis backed cache for Ruby. It continues to accumulate stars, but I haven’t had any issues in well over a year.

Sometimes software is just finished, not abandoned!


Well, the author himself said that Redis evolved and there are now better PHP clients. That means it wasn't actually finished; it was abandoned.

In a closed environment, sure, it's just finished, but the world is much bigger and not closed at all. Even the good old InstallShield required maintenance from Microsoft over their 16-bit compatibility layer.


One thing I think OSS developers / projects would benefit from is (self) certification.

Let's say you write a great piece of software that has no need of updating. But Redis keeps evolving - making new releases.

You could, for each release of Redis, run a series of checks and add a "verification" that you support versions 1.0, 1.1, and 1.2 of Redis.
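
For what it's worth, such a check can be quite small. Here is a minimal sketch in Python using redis-py, assuming CI starts one Redis server per version in a test matrix; the supported-version list, key names, and script name here are made up for illustration, not taken from any real client:

    # verify_compat.py -- hypothetical per-release verification check.
    # Assumes a Redis server is reachable on localhost:6379 (e.g. one
    # instance per version, started by a CI matrix job).
    import sys
    import redis

    SUPPORTED_VERSIONS = ("6.2", "7.0", "7.2")  # illustrative support matrix

    def smoke_test(r: redis.Redis) -> None:
        """Exercise a few basic operations the client claims to support."""
        r.set("verify:key", "value", ex=60)    # SET with an expiry
        assert r.get("verify:key") == "value"  # GET round-trips
        r.delete("verify:key")                 # clean up the test key

    def main() -> int:
        r = redis.Redis(host="localhost", port=6379, decode_responses=True)
        version = r.info("server")["redis_version"]
        smoke_test(r)
        if any(version.startswith(v) for v in SUPPORTED_VERSIONS):
            print(f"VERIFIED: works against Redis {version}")
            return 0
        print(f"UNVERIFIED: Redis {version} is not in the supported list")
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Run once per Redis version in the matrix, the exit codes and printed lines are the raw material for a "supports 1.0, 1.1, 1.2" style claim.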

But even better is if someone else does this - why? Because it solves a problem I have seen a lot in government circles - the "we cannot use OSS because it is not supported" objection.

But if another company says "we have reviewed, tested and used" version X of php-Redis then you suddenly have a self supporting eco-system.

Webshops that care about winning tenders can say "this software is supported by dozens of providers around the globe", so if its original user goes offline there are still people who provably have skills and experience with it.

Everyone wins

(Note - I am in no way saying this is something you should have done or thought about or arranged - writing software is hard enough without these sorts of long-term, unprofitable activities. It just matches observations I have on getting OSS into government. Plug: http://www.oss4gov.org/manifesto)


To this end: 18F[0] makes extensive use of open-source technology and open-sources its contributions[1][2].

[0]: https://18f.gsa.gov/ [1]: https://18f.gsa.gov/open-source-policy/ [2]: https://github.com/18F


> But if another company says "we have reviewed, tested and used" version X of php-Redis then you suddenly have a self supporting eco-system.

Companies using it is nowhere near the same as 'supporting' it in the sense of providing assistance if there's a problem.


I think if you are prepared to assign some "certificate" that says

- we have tested version X of this and its full test suite passes, and it runs against this version of the Redis server, or it installs cleanly on RHEL 7.1

then it's a positive move

If you also sign off a different certificate saying

- we are a commercial entity that offers "support" (however we define that) for this software

then we are into a much more interesting eco-system

(yes, I am looking for way more than "I downloaded it and it works on my laptop" :-)


> If you also sign off a different certificate saying we are a commercial entity that offers "support" (however we define that) for this software

There are companies that do that already. What's the issue? People that say "there's no support" either don't care to look, or have custom stuff that is not supportable via 'average' support companies.


Do you have examples of this? (I am not thinking of the RHEL world, but of people picking up support for specific mid-sized projects.)


Rogue Wave springs to mind. Try searching for "open source support service" or "open source support company".


That must have been annoying. It’s interesting: I’ve been evaluating React CSS frameworks these last couple of days, and one of the things I’m looking at is the commit log, to see what activity is like on the project. It did cross my mind that ones that had gone quiet might be very complete and bug-free... but I decided that would be the exception.

Anyway, kudos for writing the client and kudos to Redis. It’s a brilliant bit of software.


There should be stats somewhere about the number of downloads and clones. While npm has its fair share of problems, I like that it shows weekly download counts.


Download count is a reasonable proxy for use... But if someone is happily using the package in production, that won't tick any download numbers even though it's arguably the thing anyone considering a package would really want to know.

Maybe there could be a new curated repo for packages, where users are asked to regularly vouch whether they are still using the package? The problem of course is motivating users to give those answers. Without a critical mass of users regularly vouching, the data isn't much help.


Lol, that’s open source marketing 101; GitHub issues are the main source of outreach.



