Computers have an unprecedented ability to reproduce value for free. Programmers need a relatively fixed amount of resources to thrive. (The cost of those resources varies by location, but we all need things like food, shelter, transportation, clothing, tools, and so on.)
If we can find a way to make sure every person has what they need to thrive regardless of their income, programmers can open source all of their software and we can enable the maximum possible value creation. Other engineers, such as those who design commodities like dishwashers and cars or important manufacturing and medical equipment, could also open source their designs so that repair costs stay low and innovative improvements are easy to apply. I genuinely believe this would result in a steeper, more rapid innovation curve, and a better world for all, than a world where we try to monetize things that have zero marginal cost to reproduce.
One problem is that most necessary projects aren’t fun, and most fun projects aren’t necessary. Does anyone design dishwashers as a hobby, as an easy example? How do you propose we motivate people to do work that isn’t fun? Currently the carrot of higher pay or ownership in a more valuable thing is doing that, so we would need something to replace it if that goes away.
I'm 200% convinced there are plenty of people out there who could easily be nerdsniped into building an open source dishwasher! Hackers get up to all kinds of stuff that doesn't seem traditionally fun!
This is true but what does this have to do with this idea? Who says we need bosses and overseers to get stuff done? See:
https://vorondesign.com/
"The original goal of the VORON project, back in 2015, was to create a no-compromise 3D printer that was fun to assemble and a joy to use. It had to be quiet, clean, pretty, and continue to operate 24 hours a day without requiring constant fiddling. In short a true home micro-manufacturing machine without a hefty price tag. It took over a year in development, with every part being redesigned, stress tested and optimized. Shortly after the release a vibrant community formed around the project and continues to grow today. This community is part of what makes VORON such a special experience.
What was once a one-person operation has grown into a small tight-knit group of engineers united under a common design ethos. We're dedicated to creating production-quality printers you can assemble in your kitchen. It's this passion and dedication that drive us to push the boundaries just a little further. We build space shuttles with gardening tools so anyone can have a space shuttle of their own.
Welcome to VORON Design"
> Who says we need bosses and overseers to get stuff done?
I prefer being told what to do so that I can use my mental capacity on things I care about.
I do not care about making a dishwasher or a 3d printer. I would rather pay someone else to do it, and I would rather that person has a team of people under them who can help me with trivial issues if I come across it.
I desperately do not want to waste my time with a dishwasher or anything else in the kitchen.
Yes and to make my vision a reality we need community ownership of the means of production, which would 100% mean access to these tools. With appropriate access to tools, bored engineers under such a system can get a LOT of shit done.
Yeah, for sure. The only issue is that I wouldn't be as interested in building an open source dishwasher for every possible (or even the most common) user or use case, but rather for myself.
I figure someone is going to come along who's excited about adapting the dishwasher to their own use cases, since they also don't want to do dishes. Maybe they'll even have it support their social circle's use cases so they never have to help do their friends' dishes after being over for dinner again.
It's bound to happen at some point, right? There's a lot of people doing dishes.
No. As a hardware hacker myself, the benefit and draw is the fun of it. I don't want to have to hack my own dishwasher and make it work to my requirements, let alone maintain it.
Hackers usually write very bad code, in case you weren't aware
If a group of nerds builds an open-source dishwasher as a hobby, the final product will be something too esoteric and difficult to use for regular people.
Even if you imagine software development to be generally fun, even the mundane parts, the rest of the workflow can be God awful boring. While Communism is a cool idea, it never works, since you need incentives to motivate people.
Funny you say that, because the most popular 3D printer firmwares are open source. Does QA'ing 3D printer firmware sound fun to you? No? Great, well you don't have to do that; someone else is literally doing it right now because they thought it was fun.
By the way this is a decent job for an apprentice under supervision. It’s a good way to learn with minimal risk and young people are often willing to do this kind of thing. Then the veteran comes in and makes a few improvements to the QA system and hopefully it’s good for a while.
Nobody is QAing 3D printer firmware because they thought it would be fun. They are doing it because their printer crashed and now they can no longer print.
You can see many bugs were fixed in the latest release.
What may sound dreadful to you might be interesting for someone else.
While capitalism sounds nice in practice, it results in the destruction of the planet because the profit motive eats everything alive and leaves nothing behind for future generations.
The majority of those bug fixes are from people who probably were paid by companies to work on them.
A couple of people with a bunch of fixes attributed to them seem to work for Red Hat, and several contributors have Collabora email addresses. A fair bunch of fixes would seem to be from members of a LibreOffice team at some company (based on bugzilla comments and email addresses). One contributor seems to be from a company that does consulting related to LibreOffice.
A few fixes are from people registered with what look like private email addresses or with email addresses associated with the LibreOffice or other open source projects themselves. But they seem to be a minority.
That doesn't mean it can't be interesting to those people, or to many people, but it also doesn't mean most of the fixes were motivated by that alone.
(Also, fixing bugs is different than QA.)
> While capitalism sounds nice in practice, it results in the destruction of the planet because the profit motive eats everything alive and leaves nothing behind for future generations.
As always, that doesn't mean communism (or some other supposed polar opposite) would work. Or that those are the two possible binary options.
I get the point that some jobs are boring and need actual material motivation to get done, but...
I'm absolutely sure someone would design one out of sheer annoyance with the existing solution (if the existing solution were bad).
It would be interesting to see whether a system with very short copyright terms (say 3-5 years) would work. You'd still have a first-mover advantage for investing in development, but the overall winners would be companies that can both innovate and fill the market, not ones that just throw out some ideas, patent them, and live off the people actually trying to implement them...
Actually I think the way this works is you have a series of forks. You will have a variety of weirdos who spend a lot of time on their fork of a design which is close but needs some tweaks. Regular people can then choose from a bunch of different forks with different pros and cons. This is similar to design and production in a capitalist system, only it takes way less effort to design a similar but slightly different fork. Most vacuum cleaners are pretty similar, but each company has to design its first machine from scratch, and there's a high barrier to entry for that. With forks, it's easy. And we see with 3D printed objects that when something is in demand you will have a popular option that a lot of people have downloaded, then someone comes along and makes their own fork, and someone sees that and improves upon it too, and now you've got three options but only one person had to design it from scratch.
There are potentially other carrots aside from material wealth that can motivate people to do unpleasant work. Currently it takes significant pay to get people to do certain important but thankless jobs. We could thank them. A legacy is important to many people. They may enjoy an immutable commemoration of their work, if they're secure in a material sense.
If we had UBI, for instance, and people did not have to work in order to have basic needs (food and shelter) met, then the willingness to do unpleasant jobs like sewer cleaning would go down, and it would be necessary to pay people more to do that work.
And the need to pay people more will then drive technological innovation that may today not be worthwhile because "just hire someone" is less expensive. And in a world with UBI, automating away unpleasant jobs becomes more of an unmitigated win.
(In case it isn't clear: I think "UBI plus a free market" is a much better system than "don't pay people but magically hope all the work gets done anyway".)
The UBI argument seems fundamentally flawed - things like food stamps or free social housing make more sense.
Let me give you an example: if the government offers anyone buying a house an incentive of $10K, every house's asking price will go up by $10K and wipe out the discount. The relative/competitive nature of the market works against many en-masse payouts.
If there's a UBI, then there's a price floor that is higher than the non-UBI price floor, meaning that the median prices of goods go up, there's less tax revenue to fund the UBI, and the UBI must then increase to meet the increased cost of a basic standard of living.
Maybe, maybe not. It does depend on how efficient we could make basic necessity items, and those are only a tiny portion of the modern economy, I suppose. Though I don’t trust the government to not simultaneously screw with the incentives for producing those items, whether by accident or not, which would make a situation like the one you describe more likely. Also it’s not at all clear how housing would work— I’m guessing people would complain about the sort of housing you could get for a workable UBI amount (i.e. one that is actually in equilibrium with the rest of the economy, as in being a tiny rider-on). This would realistically politically cause the amount to keep going up (and thus cause inflation).
> basic necessity items, and those are only a tiny portion of the modern economy
If housing/shelter is a necessity, it might not be cheap. I think housing projects aimed to do exactly this. You'll eventually find 'poverty' redefined as those receiving UBI only, stuck in undesirable locations, who don't consider it true that "work is optional".
Another issue is policing, which is also increasingly expensive, and has all sorts of issues in the US. People like to just shout "fix the police", but have no answer as to how you can recruit for such a job.
This aspect of UBI is functionally equivalent to the generous unemployment + other benefits that exist in many developed countries. The accelerating inflation hypothesis is easily empirically disproven there.
(There's of course normal inflation in these places, like everywhere else, it's a by-design feature of most monetary systems)
I know it seems crazy on its face. And I'm sure those leftists you refer to didn't have a coherent concept of how such a system would actually work. There's no way we could just replace paychecks today with rations and social credits and have a functioning system. It'd be an extreme destabilizing change to a system we built incrementally over a long time to be self reinforcing. But I also have the view that people are very malleable and can conform to all sorts of social structures and belief systems.
I like to think about how this works at smaller scales. When there is an office full of people all being paid about the same and (critically) where they all want and care about the same outcome, the shit jobs will get done. I have often called myself a "code janitor" since I clean up shit that was left behind. It's not because I didn't want to be working on fun greenfield projects but because it was shit that just needed to get done. So I did it. And so did others.
Another example to play around with is when you go camping with friends. There's some shit work that just needs to be done. People pitch in. The same with staying at a friend's house or a vacation rental with friends. Or cleaning leaves off of the storm drains. We all do this sort of work because it makes our lives better. If literal shit were piling up in front of my house I would probably shovel it even if it took 8 hours.
Natural disasters are also examples where people do work for free without expectation of compensation. I think people are more like that than what happens in apocalyptic novels (even though I love reading them).
Seems like the key shared characteristic of these examples is small communities where people care about each other. As you say, it works at smaller scales. But free rider problems are a lot tougher in a community of millions or more.
I live in one of those communities (eco village in Australia) and all it takes is one old haggard lady to reduce the workforce to nothing, as no one wants to put up with her abuse.
I think one thing people keep forgetting is that all these problems have already been solved for hundreds and thousands of years. You will never reduce inequality, abuse, pain, suffering. It's much better to contribute and improve what we already have rather than to rip it out and try "yet another variation of the same utopia that we've all been thinking about".
Okay, so are you disputing what I said? Various disparate religions and ideologies have cultivated adherents with notable success across history -- not least among them is free-market capitalism.
You seem smart and it always amazes me how popular communist concepts are with smart people on HN.
Didn't you have some assigned group projects in school? Perhaps some people are easily satisfied by carrying the burden of other people who they don't know and who don't appreciate them, but I'd wager a ridiculously high majority of value producers would not be. Humans are social animals, but we're individuals first and foremost and self-interest will always be the best motivator.
Is there some future where humans are engineered to be satisfied with a predefined role and purpose, amongst other traits? Sure. But until we get to that point, commune-style living is an absolute dud.
By the way, I recommend you visit and try living in an actual commune. My girlfriend told me it was the most disgusting living situation she's ever seen.
Even this is not a fair comparison IMO, because even people who don't mind carrying the majority of the work often get tired and frustrated with it eventually, and in my experience that tends to happen right around the time they enter their most productive years.
There are lots of kibbutzim (communes) in Israel. Apparently most aren't too bad. The one I visited seemed decent. But Jewish culture has a pretty strong community ethic, which is key.
I also visited a couple of Greek Orthodox monasteries and they were not just nice but beautiful. So for small scale communities with a strong cultural binding "communes" can totally work. It doesn't scale though.
I think people misunderstood where I was coming from a bit. To be clear, I wasn't commenting in support of communism, or against rewarding merit, or against rewarding merit with money. Status in life and legacy in death are still motivators that matter, though, and with the right set of shared values, they are powerful.
We still generally have a culture in the US of respecting our veterans and service members, for example. There is some social value in serving that's not material. If there wasn't, the material benefits would need to be more substantial.
Respect for the military is probably well aligned with instinct though; I'm guessing most tribes have respect for their warriors. Can we repeat this for an arbitrary behavior that is not so aligned? Maybe, maybe not?
Umm running the garbage system sounds rad?? I currently design open source farming robots but I would absolutely love designing open source garbage robots. Of course what’s better is community level management of waste production so we don’t even have a lot of garbage to deal with! Reusable washable containers for food, etc.
C'mon, robots obviously! Cleaning sewers doesn't sound like any fun, but designing or remotely piloting a fatberg-blasting sewer shark bot? That sounds kickass!
But in the meantime, while no such robots exist, or while the prototypes get stuck down there, someone has to manually fetch them and do the job. It's not very enticing, and I don't think there will be many software engineers ready to suit up to dig one out.
It took/is taking a long while to develop self-driving cars, despite the money and big players chasing it. How long will a fat-berg shark take-a-to-make-a?
Also, think of the people who might help you with this - sewer-diving experts! You are essentially collaborating with them to displace their own roles.
I used to work in retail stocking shelves. 1000% I would have happily collaborated on developing a shelf-stocking robot... if I got paid something for it.
Running the garbage system is largely a desk job, I would expect. It might not be the most stimulating subject matter to you, but I think it's within the realm of possibility that you'd find people who found it an interesting system to manage.
Cleaning the sewers sounds objectionable. I think you shouldn't discount the idea that in a societal structure that's different from ours you'd remove some of the social stigma that comes from such a job. But at the same time, if you observed that very very few people wanted to clean sewers for whatever reason, and there wasn't enough supply to meet demand, then you invest more in technology that reduces the shortfall. As others suggested, automation.
> I think you shouldn't discount the idea that in a societal structure that's different from ours you'd remove some of the social stigma that comes from such a job.
I can see you haven't done any of jobs like that ever in your life. "Social stigma", lmao, that shit smells
> But at the same time, if you observed that very very few people wanted to clean sewers for whatever reason, and there wasn't enough supply to meet demand, then you invest more in technology that reduces the shortfall.
It's delusional to think every job that's undesirable but necessary could be automated and that it would be cheaper than ye olde good material compensation for doing something hard/unpleasant.
I mean, I'm all for it, but that won't happen to the level that would eliminate unpleasant jobs
> I can see you haven't done any of jobs like that ever in your life. "Social stigma", lmao, that shit smells
I haven't, but I didn't say that shit didn't smell. My point was that one component of why some jobs are worse than others is social stigma. Working at a fishmonger's or in a butcher's shop stinks, and you probably get way less PPE than a sewer cleaner would. But butchers and fishmongers have less social stigma.
> It's delusional to think every job that's undesirable but necessary could be automated and that it would be cheaper than ye olde good material compensation for doing something hard/unpleasant.
The fallacy here is that it _needs_ to be cheaper. Sewer cleaning is valuable. If it requires more investment to automate so that we have enough supply to meet the demand, so be it. The only reason we haven't already automated this smelly job is because it's easier to turn a profit if you just pay people peanuts. If profit is no longer motivating, you can make vastly different decisions.
Sure, but you iterate. We decided as a society that Polio was awful enough that we wanted to eradicate it. If we freed up enough effort that is currently wasted on chasing profits, we could eventually get to solving problems like "shit stinks and it sucks having to clean it".
Absent corruption, profits must by definition come from money paid by people who determined that, according to themselves, the work being paid for has value. This is why capitalism works. It is a mass, distributed, value computation engine.
Considering this, I don’t see how effort is wasted on profit, since the effort produced something that someone considered valuable. Do we have another way of computing value at scale? To date, I haven’t seen any realistic proposals.
Why would you be so bothered to just let them? Would you feel embarrassed that "leftists" are nicer? That can be a motivation too! I know some people just show up so they can have someone to talk to for a few hours on a Saturday.
I think if you can't find volunteers, you can have a lottery.
Many western countries use a lottery to choose juries.
If the winners refuse, they'll have a chance to explain themselves, and then they'll be judged by those who didn't refuse. Exactly what happens next depends.
How often do you talk to people this far to the left? I live in a family full of liberals and none of them even remotely think the world should operate this way. I think you could take every person in the US with ideology this far out to the left and put them in a single medium size stadium.
Second, if true, it would imply that every leftist they've ever talked to will always get asked "who will run the garbage system, and who will clean the sewer", which wouldn't make sense outside the context of having given a specific opinion on a specific topic first.
Hence the topic of conversation is implied as context.
Clearly this isn't going to work in a world where people use the word "sucker" to refer to people who do work to help others. Honestly, do we call volunteers at soup kitchens suckers?
The problem clearly involves an unequal distribution of work.
If everyone else is being paid and you are trying to convince a single person to literally shovel shit for 8 hours, then yes, that won't work. They will feel like they are being taken advantage of. I think this is a common feeling amongst all workers. If your boss asks you to work late, you are much less likely to be pissed off if the boss stays late and helps out too.
We only have structures like this due to the class system, where some people never shovel shit and some people do it all week long. A great suggestion for this, incidentally brought up by Noam Chomsky in 1976, is to rotate people through these jobs, so everyone spends maybe two days a month on the task. People appreciate a little bit of difficult labor; it just sucks when you have to do it every day for survival. I linked directly to the few minutes of this talk where he talks about it here, but if you have more time check out the whole thing:
https://youtu.be/h_x0Y3FqkEI?t=1744
Without any intention of snark or confrontation whatsoever, can I ask you
1) Have you personally tried to put this idea into practice? If yes, for how long? Would you be willing to share some details? If not, why not? It doesn't take changing the entire society; you can do something like that literally now. Be creative: travel to a different random town twice a month and offer to clean toilets in a couple of buildings? Again, I don't mean to be snarky with this example, I just think working on 3D printers doesn't cut it as a type of difficult labor. Think more along the lines of going down into a mine, shoveling literal shit for 8 hours, etc.
2) Imagine the society has actually changed such that everyone is supposed to do a couple of days of difficult labor. What happens if someone refuses? A punishment? If there is no punishment, what happens if almost everyone refuses and there aren't enough people to sustain these undesirable but critical jobs?
Well, they were not that huge, but most organizations had a "board of honor" with photos and names of the best workers displayed for everyone to see.
It was mocked ("look at those idiots that work for free, lol") even before USSR died.
The USSR is the only example I can think of where they ever tried, for a long time, to elevate workers in this way instead of paying them better, like the parent commenter asked for.
To remove property laws is to remove markets by definition. You cannot sell anything you have no ownership over or rights to, given that everyone can just take it.
Hence why open source maintainers are barely paid, and companies in the space rely on selling services, not software. If you want, as the title of the article suggests, a world where people pay for software, strengthen IP laws instead of weakening them.
All property rights are legal fiction, there's no "actual property rights". Owning ideas is perfectly reasonable because that's where the entire value is, and having rights to your own ideas guarantees that you are compensated for the invention and value they provide to others. Property rights, intellectual and non-intellectual, correctly allocate resources to value creators.
What if an idea has value independent of an inventor, e.g. someone invents it independently? Novelty of invention is hard to prove. There is a strong incentive towards rent-seeking beyond the collaborative value invention is supposed to provide.
There are "actual property rights" because an actual property is an actual property - owning one house doesn't mean you own them all. An idea is an abstract "property" that isn't like real estate at all, and most of the concerns highly subjective interpretation of scope and value.
There's a strong incentive towards rent-seeking in all property schemes. The oldest form of rent-seeking literally concerns physical property: land and real estate. Rent-seeking is bad and common, but eliminating property rights is worse; some have tried. And you generally can't own abstract ideas: you can't own the number '5' or 'wizard stories', only a particular logo design or Harry Potter. IP is only concerned with specific artifacts. They're intangible but not abstract; they're instantiations of ideas.
If you produce intellectual work, there are really only three ways to be compensated for it: give it away for free and beg or provide services, accept some sort of feudal patronage (now reinvented on the internet), or have rights to what you make. The last one is the only thing that gives creators independence and compensates them directly, even if, yes, you have to settle subjective differences sometimes.
Anyone who wants to abolish IP needs to tell me how a novelist earns a living without being reduced to some sort of begging.
Even libertarians argue that intellectual property is not real property and is incompatible with capitalism. Removing government-mandated IP restrictions is not "removing property laws", it is freeing innovators to copy what they hold in their hands without government intervention. They own the property they purchase or produce but they don't own the intangible ideas behind the thing.
Mises Institute "How Intellectual Property Hampers Capitalism | Stephan Kinsella"
I don't see why people can't gather and pay someone to design a good dishwasher, even if a lot of other things were available for free.
There is a lot of open-source software, and still software development contractors exist and do well, all the while they may use 90%-100% open-source stacks to solve a client's problem.
Beyond that, the issue is risk. To make a new, better company is a very risky endeavor. It takes sticking through hard times. Very few people are willing to do that without a chance of major remuneration.
We should guarantee minimum income, and abolish intellectual property. Build an economy around actually doing things rather than calling dibs on solutions. Let the market sort out the doing of things, just make sure everyone can participate.
> Build an economy around actually doing things rather than calling dibs on solutions.
If I design something, how do I get compensated for the work if anyone can clone the work? Why is manufacturing counted as "compensated actually-doing-something" while design/R&D isn't?
Abolish IP? Why should you have errant and free access to modify my book and publish it under your name as if you wrote it, with me having zero say in it?
Building a modest economy where everyone can participate is the way to go, but to be honest, this world isn't fair; some have more power than others.
Do you think there would be very many programmers in such a world?
Personally, I think that a lot of people who right now go into programming "because it's a good career", would instead do things that are equally creative but also capture other things high on the Maslow hierarchy — e.g. fame.
Personally, despite enthusiastically enjoying my programming career and puzzle-oriented problem-solving more generally, I'm still intending to retire early and become a novelist. If I could "thrive regardless of income", I'd do that right now.
It is hard to guess what people would work on without needing to worry about money.
You might try your novel, and one of two things could happen:
You find out you love it, you write a really good novel, and society wins.
You try it, find out that the actual experience of writing a novel is a drag. No harm no foul, you move on and keep trying things until you find something you are really passionate about and good at, and society wins.
Maybe it is programming but you just need a more interesting program.
You can also find out that you love it even though the novels (or software or paintings or poems or whatever) aren't interesting to almost anyone else, or aren't even available to anyone; but since you don't need the money, you can keep doing that (and only that), and society simply loses out on whatever you're doing currently.
The key part of what people would work on without needing to worry about money is that there is literally zero reason to assume that the thing worked on would be useful to society in any way whatsoever, it can be useless or even detrimental to it - the current mechanism of monetary compensation is the thing aligning the work to interests of others, remove it and you can't expect that alignment to persist.
Unconditional income is a solution to the problem when we don't need people's labor anymore - it makes all sense when people can just go off and do whatever without worrying if it benefits others enough to justify the basic goods and services they need, and the society is okay with that. But while we still do need the labor of most people, there needs to be motivation to guide that labor to the specific things society needs.
> there is literally zero reason to assume that the thing worked on would be useful to society in any way whatsoever, it can be useless or even detrimental to it - the current mechanism of monetary compensation is the thing aligning the work to interests of others, remove it and you can't expect that alignment to persist.
I think a strong argument can be made that the current system does not necessarily align the work being done with the interests of others in a broad or universal sense. Think about a corporation with a very useful drug whose patent is about to expire. Allowing the drug to go generic would be in the best interests of many poor sick people all over the world (patent harmonization means even poor countries must follow US patent law or get locked out of global systems). However companies often find legal tricks they can use to effectively renew the patents for their drugs. This aligns with the interests of some people - the shareholders for example, but is detrimental to the interests of sick poor people all over the world.
And this isn’t a hypothetical, this just happened again two weeks ago with Johnson and Johnson and only a coordinated pressure campaign from some high profile YouTubers was able to get the company to relax their plans:
https://youtu.be/tMhgw5SW0h4
However when there is no profit motive, people often work on problems that they personally need to solve, and there is often good alignment with the work they are doing and the needs of others.
More broadly, we can say that the current system does not necessarily align the work being done with the needs of most people, and that alternative ways of aligning that work must be possible.
I think there's a huge difference between everyone having unlimited material goods Star Trek style and UBI being a floor for everyone. I think of UBI as a floor that I can't go below no matter how badly I screw up. If I start a company and max out my credit cards to fund it and it goes belly up, then no matter how much I still owe to Chase I will still get my $1000/month to pay the rent and put food in my belly.
But I will still want luxury goods and I'm willing to work for them most of the time. I want a phone upgrade every few years which might be a luxury I couldn't afford under UBI. I like flying airplanes and certainly would need to work to pay for that hobby. But if I get burnt out and want to read books for a year then I could do that too!
Do we really need all of the programmers that are currently being employed? Will society collapse if there aren't 100,000 working on the next photo sharing app?
The important stuff will get done. Anything that is a luxury will get done only if someone wants to do it for themselves or if someone can convince another person to do it. Money doesn't need to disappear in a world of UBI; it's just not something that every single person on earth needs to participate in under threat of starvation and death.
> Yes, it will. You know how? Because the importance will inspire more money to be offered.
Actually I am the lead engineer and maintainer on an open source farming robot project. I do get paid enough to cover my bills, but the pay is less than 50% what I could earn at any of the companies which repeatedly pester me on linkedin. But the work I am doing is more important to me than all that. I am doing the work for less money specifically because it is important.
In a world where we have community control of the means of production, what does it mean for work to be important? It means our community members need us to solve problems so they can grow food, make clothes, and build shelter. Caring for one another is in our DNA, literally! There is no specific monetary incentive required, the care for community members is already enough incentive. The only reason we need to pay people huge sums of money for some SAAS app is that the work is nearly meaningless to us. The money is the only thing that makes it worthwhile. But if we don't need money, we can work on what really matters. And when we open source the results of the work we needed, many others will find that it solves their problems too, perhaps with modest changes, so they don't need to reinvent the wheel over and over again. This means the work goes farther for the same effort.
I've been writing ship software for my entire career. Most times, I've been quite involved in the entire process, from napkin sketch to shrinkwrap.
For me, it is quite gratifying to ship, but for many people, the 50% (or more) of shipping software that is non-coding work isn't fun, and is usually deprecated during the planning process.
Examples are things like end-user documentation (not just maintenance docs), training, error handling, accessibility, localization (a huge task), continuing customer support, legal checklists, patent searches, copyright searches, branding strategies, glossaries, distribution channels, evangelists, usage feedback support, synchronization between all of the above, and the Web site, etc.
Big fat, not-fun pain, but needs to be done. A lot of this stuff really needs to be considered before the first line of code is written, and the journey can take years.
The app I’m working on has been in development for over two years (but to be fair, I did “wipe the slate,” and restart, after about a year). The basic “business logic” is in one SDK that I wrote in about a month and a half. All the rest of the work has been chrome.
Humans have tried all kinds of value transfer systems for thousands of years. Giving someone "tokens" (i.e. currency) to convert into whatever they want or need has been the most flexible version of anything that has come before it. What one person needs to thrive is not the same as what another person needs to thrive, so who gets to set what that level is?
I'd be skeptical of any system where there's no opportunity to get ahead as people will either find ways to take advantage of the system and screw others over, or the system becomes unsustainable as populations shift in size.
Generally the broad concept I work with is “community ownership of the means of production”. What this means is that you are part owner in a cooperative of cooperatives that owns the machinery you depend upon for your well being. Of course your community trades with others and you and everyone have free choice to vote how you please and contribute as you desire. There is no “enforcement” that prevents you from accumulating more wealth but most of what you rely on is borrowed from a “things library” where you are permitted to use it indefinitely but not sell or destroy it, and in times of need the community may request that you return some items you are not using.
More broadly I would say that many people believe the current system actually does not serve people well. We have a very small portion of the society that owns the means of production and 99 percent of the population have to deal with the dictums of those owners with very little say in how production is allocated. This leads to a world where the output is heavily slanted towards the ownership class while everyone else is fighting for scraps. A world with community ownership of the means of production would mean MUCH more wealth for the average person, so concerns over resource allocation would be less of a concern.
The point anyway is that in the current system I certainly don’t get to decide what my “level” is beyond trying to work hard, but in a community ownership model I would have much more say.
As you have said we have been trying different value systems for thousands of years. No reason to believe attempts to improve the system should not continue.
It's not an officially banned word, but I live in the USA and folks turn off their brains if you advocate for communism. Also it's not very specific to advocate for communism, because doing so leaves things quite open to interpretation. Would that mean I want Soviet-style communism? Chinese-style communism? Advocating for "community ownership of the means of production" is precise without being overly prescriptive. I'm not talking about a government at all - we can achieve this ownership a variety of ways. Way better to discuss specifics than to use a term that will cause a million comments like "we tried that and it failed". This openness allows people to decide on their own how the goal might be achieved instead of believing I am going to come in like Lenin or Mao and push a specific plan. I don't want to do that!
I understand "common ownership of means of production" is just plain communism in the way Marx intended when it refers to "post-capitalistic economic organisation".
Unfortunately that is not how human nature works. Value itself depends on the necessity or desire of people for a product and is distributed via the money printing debt system.
In other words: necessary value can only exist if other humans have a problem and depend on somebody/something to solve it (temporarily).
Explaining valuation from individual desire is hard, but group desire depends on techniques to control the masses and in between are many layers of uncertainty.
> the necessity of debt to measure and trade is dubious at best, selfish and evil at worst
This is not what I wrote or meant, but I do agree that it could be understood like that.
It would of course be better to measure necessity with something else, but so far approaches to replace it have failed for various reasons.
These are the sorts of efficiency improvements that would go a long way towards tackling global warming and environmental destruction, particularly the open design to reduce waste. The question is, how can we get from where we are in terms of an economic and political system to one that supports a healthy commons and maximizes value, like you describe?
I mean, I’ve seen worse arguments for socialism, but you seem to be painting an overly rosy picture. Yes, computers can reproduce software at zero marginal cost, but there’s still a considerable investment in the initial creation and ongoing maintenance. While I’m all for a world where programmers and engineers are able to fully devote themselves to open source projects, it’s not as simple as just making sure everyone has their basic needs met.
The incentive structures are complex, and money still serves as a potent motivator for many to push boundaries and innovate. Remember, open-source doesn’t always equate to high-quality or innovative, and proprietary doesn’t always mean restrictive or uncreative. A balanced ecosystem where both proprietary and open-source software can coexist might be a more realistic and productive approach. I’m afraid that balance isn’t too dissimilar from the one we have now, so I’m sort of forced to go with Occam’s razor here.
I certainly think open source under capitalism (work at the margins, engineers spread thin) will always be worse than open source under socialism (abundant workforce, lower stress, more time available).
As far as initial investment in the creation of the software - yeah, that’s programmer time. The point of my scheme is to lower the cost of programmer time because their needs are already met, thus lowering the cost of initial investment.
Hardware is a separate concern but I have a whole thing about how open source hardware tends to bring the hardware costs down to the lowest physically possible cost. Just look at 3D printers under patent ($25k) versus ten years after the patents expired and open source took over the low end ($250).
I’m not sure how Occam’s razor would suggest that the status quo is close to the ideal situation here. Those seem unrelated.
A single paragraph excerpt:
"Men are not good enough for Communism, but are they good enough for Capitalism? If all men were good-hearted, kind, and just, they would never exploit one another, although possessing the means of doing so. With such men the private ownership of capital would be no danger. The capitalist would hasten to share his profits with the workers, and the best-remunerated workers with those suffering from occasional causes. If men were provident they would not produce velvet and articles of luxury while food is wanted in cottages: they would not build palaces as long as there are slums. "
I find it really hilarious that you're literally going "that's not real capitalism" yet you probably hate the same "excuse" from the other side of the ideological spectrum.
Sounds like an excellent idea that will work really well because it's incredibly well aligned with how humans actually function. I really wonder why no one else has thought of communism before.
Just because someone is proposing something for a small slice of society, doesn’t mean they intend to propose something similar for all of society. For instance, insisting on free schools, free (rail) roads, free health care, free water, and nationalised energy plants doesn’t mean they want to make everything free, or that they want to nationalise everything, or that they are nostalgic for communist Russia or whatever. That’s just the Red Scare talking. The fact is, different systems for different slices of society can and do coexist.
Human nature is not limited to the environment we’re currently living in. Genetically we’re barely different from the people of a couple hundred years ago. And yet our ancestors lived under many kinds of societies. It would be a little presumptuous to assume the one we’re currently living in is the best. Especially considering how it came to be: remember that as Thatcher was saying capitalism/neoliberalism was natural, she did "nudge" things along by having the army pay a visit to workers on strike.
Even communism isn’t a monolith. It took various forms, which failed for various reasons. Sometimes it was direct outside interference, like how the Paris Commune was basically crushed by the national army.
> Just because someone is proposing something for a small slice of society, doesn’t mean they intend to propose something similar for all of society
Yup, a "small slice of society" that's just proverbially eating the world.
> remember that as Thatcher was saying capitalism/neoliberalism was natural, she did "nudge" things along by having the army pay a visit to workers on strike.
Unfortunately things never happening at times gets in the way of me remembering them.
> Even communism isn’t a monolith. It took various forms, which failed for various reasons [...such as] direct outside interference
Similarly to how perpetual motion machines took various forms and failed for various reasons, including direct outside interference.
> Yup, a "small slice of society" that's just proverbially eating the world.
Software may be everywhere, but last time I checked very few of us actually write it. Something similar can be said about food: everyone has to eat, but relatively few people actually grow crops. And even if we were to implement UBI for everyone, it’s an idea that was around for quite some time, with weaker versions of it implemented for decades now. It didn’t kill capitalism, let alone spread communism (whatever that means) all over the world — or any single major Western country.
> Unfortunately things never happening at times gets in the way of me remembering them.
Thatcher only planned to use the army and declare a state of emergency. The actual repression only used regular police forces… resulting in quite a few injuries, as well as trumped-up charges (that turned out to have been fabricated by the police, if my link is to be believed): https://www.history.com/news/margaret-thatcher-miners-strike... Also remember what a strike is: people stopping work because they don't want something. Something they reject so badly that they're willing to sacrifice their income to prevent it.
So my main point remains: what Thatcher did was far from natural, not to mention contrary to the will of her people. You could deny that if you want, but to be honest it would be more an expression of your allegiances than your actual beliefs. (Then again, people have an uncanny ability to tailor their beliefs to suit their allegiances…)
The link doesn’t talk about the SAAS model, which is probably the most profitable (and ubiquitous) one these days.
I know people like to rail against it, but I actually like the SAAS model. It keeps incentives aligned. It used to be that I might shell out $200 for a piece of productivity software. Now, I might pay $10 a month instead. The thing is that under the old model, a company was incentivized to make a sale, but retention didn't matter. Now, a sale is almost worthless, but retention is very valuable. Yes, over time I will pay much more with SAAS, but I also have companies that are incentivized to keep the software working. It doesn't matter that I have a perpetual license on accounting software I bought in 2005… it no longer functions with my operating system anyway. SAAS helps solve this problem.
I find SaaS products, including ones I have paid for, disappear at a much greater rate than the rate at which the desktop tools they replaced stop working.
There's also next to nothing I can do as an end user when they do disappear. If I'm very lucky, I get a limited window to be able to export a portion of my data. But we've eroded data formats to the point where even if I can export my data, there might be nothing to plug it into. What good is a CSV, even, when what I need is a tool that processes the data in the CSV? There's no option for me to keep an old machine or a VM around and self-support on a discontinued piece of SaaS.
That's to say nothing of the price hikes. $10 a month today becomes $14.99 next month, $17.99 in a year, and before you know it the proprietary system you've locked yourself into costs five times what you originally paid. Sure, they might add some more features, but since it's SaaS, in many cases you have no real option to switch to a different vendor for the same feature; again, your data is locked up in a format you can't easily extract and work with elsewhere.
SaaS from established firms seems to be more durable & maintained.
The problem is all the flash-in-the-pan, ZIRP-era, VC-funded, never-profitable SaaS startups out there. Hopefully these finally get shaken out over the next couple of years.
For example, I've used Adobe products for a very long time, and they get a lot of flack. I was an extensive user of Photoshop (PS) and Lightroom (LR) for a long time.
However, the old model was: PS, pay $600 once, then $200 for updates every 2 years or so. LR was $200/$100 as I recall. So your run-rate for both was over $150/year (factoring in the initial $800). This was in roughly year-2000 dollars.
For $150 in 2023 dollars, I get constant feature updates, cloud storage & sync, a license to run on at least 2 machines, etc. Inflation adjusted, this is nearly half the price of paying $150 in 2000 (rough math sketched below).
I'm also intrigued by how many very wealthy people are unwilling to pay $10/mo to stream music/video and/or share passwords, when I recall paying $20/CD at the record store in 1998 dollars. You can listen to basically every song you want for the year for the price of (inflation adjusted) 2.5 CDs purchased by my mallrat teenage self back then.
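To sanity-check that run-rate comparison, here's a back-of-the-envelope sketch in Python. The 10-year window and the ~1.78x cumulative CPI multiplier from 2000 to 2023 are my own assumptions, not figures from the comment above:

    # Rough sketch of the Adobe run-rate comparison.
    # Assumed: 10-year window, 2000->2023 CPI multiplier of ~1.78.
    CPI_2000_TO_2023 = 1.78

    # Old perpetual-license model, in ~2000 dollars:
    initial = 600 + 200            # PS + LR, bought once
    upgrades_per_2yr = 200 + 100   # paid upgrades every ~2 years
    years = 10
    old_per_year_2000 = (initial + (years / 2) * upgrades_per_2yr) / years
    old_per_year_2023 = old_per_year_2000 * CPI_2000_TO_2023

    # Current subscription model, in 2023 dollars:
    subscription_2023 = 150

    print(f"Old model:    ~${old_per_year_2000:.0f}/yr (2000 dollars), "
          f"~${old_per_year_2023:.0f}/yr (2023 dollars)")
    print(f"Subscription:  ${subscription_2023}/yr (2023 dollars), "
          f"~${subscription_2023 / CPI_2000_TO_2023:.0f}/yr in 2000 dollars")

Under those assumptions the subscription works out to roughly $84/year in 2000 dollars, which is where the "nearly half of $150" figure lands.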
If I'm buying a lifetime thing, it's an investment. I spent money and got a thing that will never get old. As time goes on, I'm getting more things and I need to spend less.
If I'm buying a subscription, it's an obligation. I'll have to spend money from now until I die or I'll get reduced QoL.
Even if today I have a spare $200/month, that might not be the case tomorrow. Maybe I'll get fired. Maybe the government turns my cash into paper. Maybe I'll have to pay everything I have to doctors to save my life or health. I'll still have the songs I bought, but I'll no longer have access to the streaming service.
Lifetime thing is a rather large statement, especially with software though isn't it?
Most of the pre-subscription-model comparisons were never lifetime purchases.
Software needed a paid upgrade every 3-5 years to get OS support / features.
No software I used in 1995 will run on my current computers. Even 2005 or 2010 is dubious in some cases.
Content constantly changed delivery mechanisms and people had to buy new media/devices every 5-10 years
VHS/Betamax -> Laserdisc -> DVD -> Bluray / HD-DVD -> Bluray 4K
Vinyl -> 8 track -> Cassette -> CD
For many things there are cheap/free alternatives or you can opt for the fixed cost up front version.
Paper books/eBooks/CDs/DVDs/MP3s can still be purchased outright.
Streaming services have ad supported free tiers.
You can go to the library, turn on the radio, or tune into an over-the-air TV signal.
You can buy an old version of photoshop/lightroom put it on an old computer, and don't expect updates.
Etc.
> Lifetime thing is a rather large statement, especially with software though isn't it? Most of the pre-subscription model compares were never lifetime purchases. Software that needed paid purchase update every 3-5 years to get OS support / features.
For sufficiently valuable software, people will hold back on an older OS to keep using the software.
A lot of high-end film scanners will come with the 68k or PowerPC mac that’s used to run the software, because the alternative would be spending $20-30k for a new one. And industrial systems run on similar models.
When boxed software dies, you run it in an emulator and your files can still be read.
> Content constantly changed delivery mechanisms and people had to buy new media/devices every 5-10 years VHS/Betamax -> Laserdisc -> DVD -> Bluray / HD-DVD -> Bluray 4K Vinyl -> 8 track -> Cassette -> CD
You can still find VHS players. You can't get data out of a SaaS app that died yesterday.
> No software I used in 1995 will run on my current computers.
I'd be surprised if many SaaS products from today will still be available in 28 years time.
I'd assume that many 32-bit programs from the Win95 era still work natively on Windows 11, and for the rest (including 16-bit and DOS programs) you can use compatibility layers (e.g. Wine) and emulators.
> Lifetime thing is a rather large statement, especially with software though isn't it? Most of the pre-subscription model compares were never lifetime purchases.
You should hang around more in retro-gaming and retro-computing communities. They invest a lot of time, blood, sweat, and tears to get old software running on modern devices, or to preserve the old computing/gaming devices that are able to run this software.
> I'm also intrigued by how many very wealthy people are unwilling to pay $10/mo to stream music/video and/or share passwords, when I recall paying $20/CD at the record store in 1998 dollars.
Because everything is a recurring automatic charge to my credit card, and one more thing to try and keep track of and continually reevaluate if it's still valuable enough to me to continue paying for it.
When you bought a CD, you didn't have to keep thinking, from that point forward, about whether you wanted to continue paying money to have access to the CD.
I personally find the subscription model in some ways better in terms of cognitive load; choosing between concrete things can be paralysing enough that the two most likely outcomes are failing to make a choice or choosing something and regretting it. The sense of now owning something that I spent hard-earned cash on can feel like a burden if money gets tight.
Subscriptions, on the other hand, match how consuming media feels to me - I spent time doing something I liked and the cost enabled that.
Looking at it from a pure economics point of view, it clearly makes more sense to buy a CD and have access to it forever from that spend. But psychologically it feels very different.
> I'm also intrigued by how many very wealthy people are unwilling to pay $10/mo to stream music/video and/or share passwords, when I recall paying $20/CD at the record store in 1998 dollars.
I think when it was $10 or $15 a month for Netflix, and you got everything, that people did pay. The problem now is that it’s $20 a month for Netflix, and $20 for Hulu, and $25 for Disney plus, and $20 for HBO (ahem, “Max!”), and $15 for Amazon, etc. Fragmentation has meant we’re back to a cable bill worth of cost on top of the actual internet (and possibly actual cable), and half the time you still can’t watch the thing you want to watch (some seasons not currently in rotation etc).
(Also, the cable model was driven by bundling: you may not watch much Discovery Channel or SciFi Channel personally, but you're paying for them regardless. Most people didn't buy that many optional extras, maybe an extra movie channel or sports or something, but most people were never racking up $100 of a la carte services either. A lot of people would have spent a lot less on cable TV if they had been allowed to unbundle.)
Anyway the “piracy is an availability problem” line isn’t always true. A lot of times it’s a price problem too. Even if Super Netflix came out with actually everything on it for $99 a month I don’t think you’d get a lot of takers. There is a number where it’s worth my time to pirate even if it’s available, it’s not like Best Buy didn’t carry music or movies pre-iTunes/Netflix, and you could always buy esoteric bands on the web etc. Netflix solved availability for $10/month and that last part can’t be severed while retaining the truth of the insight.
You might say it's not just Steam that ended piracy, but Steam sales, and as they've slowed down so has my proclivity to spend. I'll buy any old crap at $5 or $10 if it looks fun and throw it on the backlog, but for $30 or $40 it has to be something I'm specifically interested in playing in the near future.
This summer sale was the first time prices have been decent in a long while, for the last 5 years the discounts have been meager and the base prices remained pretty high. 75% off a game you're still trying to get $60 for 3-5 years after launch isn't exactly the deep discount it's presented as. Konami and Capcom are awful about this.
Isn't the line "piracy is a service problem"? That covers availability, price, and even user experience (sometimes piracy is an even better experience than paying, like in games that use Denuvo).
> SaaS from established firms seems to be more durable & maintained.
Google is infamous for shutting down services. And the same thing regularly happens even to large companies when they get acquired by even larger companies who then shut down their existing services and try to force migrate everyone to the parent's offering.
Conversely, stalwarts like Oracle and IBM will often continue providing a service indefinitely. For a price. Because once you're locked in they're happy to keep taking your money. All of your money. Forever. This is... differently terrible?
> the old model was - PS pay $600 once, then $200 for updates every 2 years or so.
But many people would just keep using the original version indefinitely. Paying $800 once is a lot less than paying $150/year until you die. It also lets you choose whether you want to pay more for the new features or save money because you don't need them.
And you can't use the Consumer Price Index for software because software inflation is negative. As more people get computers over time the size of the market increases but the fixed cost of developing the software is the same, so the amortized unit cost goes down and in a competitive market that gets passed on to the customer. In the 90s people paid money for Unix and zip utilities and web browsers and now they're all free because they have such a big market that the unit cost is effectively zero.
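To make the amortization concrete, here's a toy sketch (the fixed-cost figure is made up):

```python
# Toy illustration of the point above: the same one-time development
# cost spread over a growing market makes the per-user cost collapse.
fixed_dev_cost = 10_000_000  # assumed one-time cost to build the software

for users in (100_000, 1_000_000, 10_000_000, 100_000_000):
    print(users, fixed_dev_cost / users)  # per-user: 100.0, 10.0, 1.0, 0.1
```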
SaaS products remain expensive not because they don't follow the same cost structure, but because lock-in through proprietary formats, training costs, and migration costs keeps people stuck on the thing they started with, which in turn keeps competitors from achieving the scale needed to get prices down.
> But many people would just keep using the original version indefinitely. Paying $800 once is a lot less than paying $150/year until you die. It also lets you choose whether you want to pay more for the new features or save money because you don't need them.
The way around that was to change the file format, so if you're in an industry using that file format (say .PSD Photoshop files), at some point you won't be able to open files from your clients...
But that was also a risk, because then companies would standardize on the old version because they didn't want to send files their business partners couldn't open. It also opened the door to a competitor because if you're going to make a compatibility-breaking change anyway...
> The problem is all the flash in the pan ZIRP VC funded never-profit SaaS startups out there.
The thing is those startups sometimes make very useful software while they're around. I ran Sparrow (an email client from > 10 years ago) for years after the company that made it was shuttered and acquired by Google. If Sparrow was a SaaS product it would be gone 30 days after the acquisition was announced.
> SaaS from established firms seems to be more durable & maintained.
I'm sure many other users have noticed this too. I wonder if it makes breaking into the software space as an upstart firm harder than "in the old days".
> I'm also intrigued by how many very wealthy people are unwilling to pay $10/mo to stream music/video and/or share passwords, when I recall paying $20/CD at the record store in 1998 dollars. You can listen to basically every song you want for the year for the price of (inflation adjusted) 2.5 CDs purchased by my mallrat teenage self back then.
I'm willing to pay $10/mo to play music, but that gets me access to nearly all the music I want, on all the devices I use. A CD can only be in one place at a time and needs a specific player. So it's a terrible comparison.
My hypothesis is not that people are spoiled but psychologically anchored.
We buy thousands of items and for most people it’s impossible to know how much something “should” cost. So we anchor our expectations to what we know.
Web software was mostly free for years because it was either ad-supported or a speculative venture capital investment. Or a dev releasing it for free thinking that “if we get lots of users we can raise money and figure out monetization later”. The Social Network came out in 2010 and there’s a scene where Zuckerberg was made to look like a genius for rejecting monetization. People who wanted to be like Zuckerberg made stuff for free then hoped to raise money. Finally, add in that many developers made software for free for personal or ideological reasons.
The end result is that consumers are psychologically anchored to expect that web software “should” be free, an app “should” cost $1 at most, etc. It’s not really about the $10 so much as that people don’t like feeling ripped off, and paying $10 for something that should cost nothing makes them feel ripped off.
An experience is burned into my brain from when a friend, an aspiring yoga teacher, was doing a Twitch stream for 10k viewers as part of an online festival but at the last moment needed to stream to Twitch from his iPhone. There was an app that worked perfectly and cost $15, but he almost sabotaged his whole show frantically searching the App Store for a free alternative because $15 was a ripoff. He caved eventually and unhappily, then to celebrate the stream led friends and family to a sushi restaurant that was $200/person. It was never about his inability to afford $15 but his psychological feeling that a $15 app is a ripoff. But fancy sushi “should” be expensive, so $200 is a fair price.
We are very slowly seeing this change as interest rates rise and everyone understands software monetization better, but it’s a gradual process. For whatever reason it’s often devs themselves who push back the hardest against monetization: in their warped worldview someone charging $10/mo for a SaaS is deeply unethical, but going to work for some FAANG company and fighting hard to maximize TC is completely fine and in fact encouraged. That way your boss worries about monetization and you are free of any moral qualms about it. FAANG devs complaining about subscriptions, privacy, and paywalls are quite common, and they are similar to vegetarians who only eat beef and pork but avoid eating cows or pigs.
Sure, you can get versions of your data that are technically usable/readable by other software out of Google Docs or Figma, but you’ll never have a fully fleshed out original because nothing else can read those formats because they’re not documented and can change at the whim of their creators.
Part of the issue with SaaS is when products are rushed to be built using the "fastest" technologies or platforms. Then, when they get bigger, they end up with a much higher break-even burden.
Building with boring technology on the other hand can remain very low in monthly costs and still provide a lot of scale and capacity for users.
I avoid SaaS precisely because of the subscription model. Occasionally, I need to make a flowchart, but I don't need to make flowcharts every month. I used to be able to pay for flowchart software once, and then use it occasionally. Now it seems that, to get quality flowchart software, I have to pay monthly for something I don't use monthly. So instead, I find some free flowchart software which may or may not be limited in some way that I just deal with, and no one gets my money. Or maybe I find something with a buy-me-a-coffee link, but they would still get more from me if I could just buy a perpetual license for a reasonable price.
Of course, the flowchart is just one example. The same can be said for a lot of utility software I only need occasionally.
Yes. I have some audio waveform generation software I use only once in a long while. I paid about $50 for it almost 5 years ago. If it were SaaS, I'd have paid a lot more than that over the last 5 years.
A long time ago I worked out an agreement with a local gym. To avoid a membership that I would only need for a few months (I was living in a hotel temporarily with no access to my own equipment), I paid $10 each time I showed up. This could be a useful model for rarely-used software.
It's a bit frustrating having to "subscribe" and cancel almost everything. I barely sign up for anything and I still forget that I'm subscribed to things.
Companies are fully aware that many, many people forget about charges on their card and leech off those for extended periods.
Sure but it's also super cheap. That's the benefit. That's the tradeoff.
And it's as easy as setting a calendar reminder.
I do wish you could pay a month without auto renewal turned on, but it's also not a big deal. You can also often just cancel auto-renew immediately after paying, so no need even for a calendar reminder.
Because the friction of signing up, going through any onboarding process and then canceling a month later (assuming cancelling isn't major hassle) is a pain in the ass that I don't want to have to deal with.
I don't think you realize that people willingly pay for convenience.
Some companies solve this issue by providing a read-only client. Users can open files that they created but are not able to modify them without a subscription.
By the way, if you are in the Apple ecosystem, I recently tried the newly included Apple tool, Freeform, and found it to be surprisingly capable.
Funny, to me your scenario makes SaaS seem like an improvement.
If I only use flowchart software 2x/yr, I can just pay those two individual months and nothing else. Six times over three years is way cheaper than buying it outright ever would have been. Plus after three years I'd be needing something that the newer version introduced anyways.
So in your scenario SAAS saves a bunch of money and keeps your features and OS compatibility up to date.
You just have to remember to cancel it once you're done each month, but that's easy enough with a calendar reminder.
This way you get to save a lot of money over buying it outright.
I feel it's the opposite. The incentive is to lock you in and provide as little value as possible for as much money as possible. Get you hooked, take your data hostage, and then jack up the price as much as possible while delivering little to no additional functionality. Bugs? who cares. Broken functionality? No big deal. You are locked in baby!
It reminds me of the dining hall at my university. The food would always be unbelievably good on parents weekend and any time there were tours that would eat there. Every other time it was mediocre at best. The check for the meal plan money cleared and the goal was to give back the bare minimum.
I don't think that the incentive to "provide as little value as possible for as much money as possible" is in any way unique to the SAAS pricing model. Theoretically, every optimized pricing model will attempt to maximize revenue at a given value level.
And in practice, what does "get you hooked, take your data hostage" mean? I can't think of many SAAS subscriptions in my personal life where this is a real issue.
I dunno, this describes my reality pretty accurately. Apple, Figma, and Adobe all try to lock you in with cloud storage and proprietary storage formats: the more you invest in their products, the more you'd lose by not paying them. I used to run some websites off Squarespace, and there's no way to export them and move somewhere else, so you end up paying ~$200 a year to host a static web page, else recreate it from scratch. Gmail has me locked in by having all my emails from the last twenty years. Slack owns my conversation history with my friends. And so on...
> those that do often have extremely valuable products.
I agree with that. All those products above are valuable and useful to me. But, the price is not commensurate with the value of the product alone. The price only makes sense when you add both the value I get from using the product and the pain I would experience by not using the product anymore. The product developers work hard not only to make the product useful, but also to punish you if you leave. That's the gross part.
With SaaS, if the software is barely usable but lacks competition, the vendor gets paid even if they don't fix bugs or broken functionality. When paying up-front, there is always competition - your own old version - so the vendor has a strong financial motivation to make improvements, since the recurring "maintenance" upgrade revenue is conditional on them, unlike with SaaS.
The difference is that with upfront payment, developers are forced to actually add features that provide more utility. Otherwise customers don’t upgrade. With SaaS you have to keep paying, even if the software is completely static with no new features or bug fixes.
As for bug fixes, do you think I am more or less likely to recommend your software to my friends if it is full of bugs and you don’t fix them?
I think I would agree that for large traditional software companies like Autodesk or Adobe, which charged large sums for software versions you typically didn't update yearly, a flat subscription model (Creative Cloud) seems like a bad fit.
Probably less so for software you use daily or make your living off of.
I use a text editor daily. I see no revolutionary methods being added to text editing that could ever justify me paying monthly. Even something as simple as a calorie counter has a monthly charge for features that never change (MyFitnessPal).
I pay for Bear.app. It amounts to something like $14 a year. But it's a beautifully crafted app, and while it uses a database instead of files, it has good export capabilities. I consider it a donation at this point.
SaaS works when not everything is atomized into micro-profitable businesses. The problem with SaaS is it enabled subscription hell and destroyed ownership. When I buy software I reasonably expect to own my copy. No different than when I go to the store and buy a book, or buy a CD of music, or buy food. With SaaS I own nothing. My data is theirs. My stuff is theirs. It is no different than your example where software no longer works with your operating system. If you squint, you can see that once the company changes their model/raises their prices/etc it's no different than my software suddenly not working. The real difference is at least I only paid the exact cost for my utility vs. 5, 10, or even 20x as much for the same utility.
There is a dramatic difference between a world where some software is SaaS but most is owned vs. our current environment where everything is SaaS. It's the gestalt of the SaaS economy you have to look at and not the isolated cases.
Moreover, the issue isn't really "productivity software". That enhances your life. The fact that I can't even own some books, music, simple software, movies, etc. is the problem. It creates an environment where the average person is tied down with so many subscriptions, just for things they'd normally buy once, that they end up poorer than they would be otherwise.
I am at the point where piracy now makes more sense again, and I will basically refuse to purchase any more software. To be honest, I don't care who it hurts. I am tired of being victimized by companies. One of the only pieces of software I pay for is the JetBrains product suite, because they are a company whose SaaS model is actually cooperative. Sublime is another one with more than acceptable terms.
If you pay every month and never own it, that's rent. The landlord will try and lock you in and extract value while providing as little as possible. Sometimes you get a good one that takes care of all the issues, but the majority just want their money.
JetBrains figured this out already. Sell me a perpetual software license that I own and charge me separately to get the updates.
I dislike SaaS very strongly. I will not repeat the "why"s mentioned in this thread, just add one that I haven't seen yet: SaaS incentivizes doing busywork that is visible but not necessarily useful.
For example, JetBrains' products: Oh look, we have changed our icons / updated the UI / improved the UX / etc! We know that nobody asked for this, but it will be shoved down your throat anyway!
Were people actually paying $200 for a piece of productivity software, though? I'm no expert but sort of got the impression that a lot of the consumer-facing software currently charging $10 a month used to retail for 2 figures, not 3.
SaaS is a model that looks great for some cases but overall leads to the shittification of many apps. The way it is often done, to make 100% sure nobody can just keep using a copy of a program they have, is by putting it in the cloud, which means higher costs for the vendor and a worse experience for the user (even the best web apps feel pretty laggy compared to native).
This is timely; I recently commented about paying for software [0]. Professional software is very expensive, but it's also very expensive to create.
There's thankless work, such as programming language development, operating systems (Linux), databases, and Linux distributions, that is profoundly valuable. Even just wrangling them from a devops perspective is painful, though.
I've never paid for any of the work that went into Ubuntu, Python or Java (I use Corretto) or MySQL or C.
I kind of want a community of people that help run a sideproject PaaS and solve the things I would prefer not to work on. Servers that are up-to-date and patched and scalable and robust.
I use OmniNotes on my Android phone, I use FreeFileSync, Typora (paid software), IntelliJ Community.
What's a price that you would pay for your open source software?
If it was like Spotify: Spotify is $9.99 a month and apparently has 210 million subscribers, according to a Bing search for "spotify number of subscribers". That's a fair amount of people's living costs to pay for.
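Rough back-of-the-envelope math (the yearly living-cost figure is just an assumption I picked):

```python
# How many people's living costs a Spotify-sized subscriber base could cover.
subscribers = 210_000_000
monthly_fee = 9.99
yearly_revenue = subscribers * monthly_fee * 12  # ~ $25.2 billion per year

living_cost_per_year = 60_000                    # assumed; varies a lot by country
print(yearly_revenue / living_cost_per_year)     # ~ 419,580 people supported
```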
> I've never paid for any of the work that went into Ubuntu, Python or Java (I use Corretto) or MySQL or C.
You’ve almost certainly paid for them, just not directly. Some share of the cost in the supply chain that delivers you goods and services will inevitably end up with the large enterprises who sponsor or develop those projects.
By eliminating all actors on the stage and referring solely to "large enterprises", welded unequivocally to "...who pay for this", the entire ecosystem is reduced to an absurd oversimplification. It is both insulting to the others who participate and bone-headed wrong about where "resources" come from in this unusual, modern ecosystem.
The assertion was that even if it doesn’t feel like it, you support open source indirectly.
It was not that all funding or contributions are made by large enterprises.
I applaud efforts to more directly support projects that give you utility. It’s becoming easier for individuals to do that (as evidenced by the article).
Besides being sick of subscriptions for every small thing, I'm not sure I understand the premise here:
"Pay to download or for other services: Not worth it; users can find the software somewhere else and they don't need your other services."
So users won't pay a one-time fee, but instead they will pay a subscription to get that one software they need? They won't "find the software somewhere else" if it's behind a subscription, but will do so if it's behind a single payment?
The thing is that this solution scales better. If you had to pay all developers individually, that would not be worth it but with my solution, you have to pay only one.
Also, it doesn't have to be a subscription. The payment is 100% up to the developers that you pay, so they could sell a one time payment and register a lifetime subscription in this system for that.
If I understand correctly, you are not getting one piece of software. You get access to everything in their library, like a spotify subscription. You also choose which developer gets your $5 or whatever, so you retain the meritocratic infrastructure that a traditional marketplace provides.
Now that you mention it, the Spotify subscription is actually very interesting here. A bundled subscription for all the software you use could make sense (though it would probably be 10-100x the cost of a Spotify subscription).
However, OP's resource allocation model (each user determines which developer gets their payment) doesn't make sense to me. I think it would be better to prototype multiple resource allocation models in parallel and see which are most fair and sustainable over time.
Next to nobody will pay 100x a Spotify subscription for anything, no matter how great it is. Despite what business owners like to believe, most normal people in the first world have something like $100 a month total, after food + rent + utilities, to spend on any and all entertainment and luxuries. At best you could maybe charge $60 a month, like cable, but that would have to be an unbelievable deal with no alternative (not possible; it's incredibly easy to make new software, so you'd constantly be undercut by startups and open source chipping away at your catalogue).
I could maaaaaybe see it working on iPhone, as a premium apps service, where they have a lot more control.
SetApp is pretty much that (for Mac, I don't know if they also do Windows stuff). I've avoided it and instead bought a lot of software available in the bundle because I prefer to own the software when I can and when it makes sense.
OAuth might be one way to enable it. Instead of logging in via Google/Facebook/Apple you could log in via "SaaSBundler"; this would register that you've used that product, which could help allocate the distribution of funds. It might need some more work if you wanted to distribute based on time.
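A minimal sketch of what I mean, with all names and the even-split allocation made up:

```python
# Hypothetical: a successful "Sign in with SaaSBundler" OAuth exchange is
# treated as a usage event, and each user's fee is split across the apps
# they actually signed in to during the billing period.
from collections import defaultdict

usage = defaultdict(set)  # user_id -> set of app_ids used this period

def on_oauth_login(user_id: str, app_id: str) -> None:
    """Called after the bundler's OAuth server issues a token for app_id."""
    usage[user_id].add(app_id)

def allocate(user_id: str, monthly_fee_cents: int) -> dict[str, int]:
    """Split the user's fee evenly across the apps they used."""
    apps = sorted(usage[user_id])
    if not apps:
        return {}
    share = monthly_fee_cents // len(apps)
    return {app: share for app in apps}

on_oauth_login("alice", "flowchart-tool")
on_oauth_login("alice", "photo-editor")
print(allocate("alice", 1000))  # {'flowchart-tool': 500, 'photo-editor': 500}
```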
How do you prevent or discourage the rise of "influencer developers"? The problem with subscriptions as a solution is that they end up being a popularity contest. That's not necessarily bad, if people want to spend their money that way, but it doesn't solve the global problem of paying for those who write software. If it takes off it will just mean more Lex Fridman types get a big subscriber base, and a bunch more try to emulate that model. In fact I think it could easily distract a lot of people from focusing on writing software.
I'd encourage a strong "progressive tax" that could, for example, be logarithmic: you get log(x) of whatever your influence is. Getting to 1x (let's say a median pay in a given country) should be pretty easy, but to get something like $1M you would have to make software used on a massive scale.
Whatever revenue you generated that is above what you got paid would go towards the less "lucrative" projects and maintainers keeping the open source going.
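Something like this, as a sketch (the median pay, the log base, and the exact curve are just illustrative assumptions):

```python
# Logarithmic payout: you keep everything up to the median, and above that
# your pay grows only with the logarithm of your gross revenue. The rest
# flows to a common pool for less lucrative projects and maintainers.
import math

MEDIAN_PAY = 50_000  # assumed yearly median pay in some country

def payout(gross_revenue: float) -> float:
    if gross_revenue <= MEDIAN_PAY:
        return max(gross_revenue, 0.0)
    return MEDIAN_PAY * (1 + math.log10(gross_revenue / MEDIAN_PAY))

def surplus(gross_revenue: float) -> float:
    return max(gross_revenue - payout(gross_revenue), 0.0)

print(payout(50_000))      # 50000.0 -> median revenue pays you the median
print(payout(5_000_000))   # 150000.0 -> 100x the revenue only triples your pay
print(surplus(5_000_000))  # 4850000.0 -> goes to the common pool
```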
I know that is a possible problem. Partly, that problem exists with everything; advertisements make people buy from the most popular brands even if they are not the best. Other than that, the developers in this cooperative have to trust each other, so if someone is just popular and doesn't make any good software, the other developers would not accept them.
> What if the person does make decent software, but is a huge influencer?
Then they would probably be able to make more money selling subscriptions than other developers that are less known.
I don't know how different that would be though from if they sold physical products.
One important thing here is that there is a limit to how many subscriptions one developer can sell.
This is done to emulate physical products as much as possible.
Also, they would probably sell the subscriptions for a higher price than other developers, since they can, which would mean that people who don't know about that person would buy from someone who is cheaper.
> Why not opt for the Spotify model? Usage = money. Why turn this into a popularity contest?
That means there has to be usage statistics collection in all software.
Since the software has to be open source, that collection could be abused a lot, up to and including being removed entirely.
I also don't like the idea of having any requirement like that on the software.
It would for example require that the software has access to the internet which doesn't work well for some software.
> I don't know how different that would be though from if they sold physical products
I mean that's the literal point of this website, no? In the real world, a sale is a sale. Imagine going into BestBuy, leaving $100 at the front, telling the clerk to put it all into Sony (because Sony is 4 cool kidz) and then just grabbing a nVidia graphics card and Apple AirPods.
> One important thing here is that there is a limit to how many subscriptions one developer can sell.
Definitely interested in seeing how this will play out. Sounds like a recipe for either (a) a super cool, tight-knit community with high quality contributors who care about their software or (b) a dump for software which wouldn't cut it in the real world market.
>Also, they would probably sell the subscriptions for a higher price than other developers, since they can, which would mean that people who don't know about that person would buy from someone who is cheaper.
My game theory senses are tingling. Why would I incentivize people into buying other people's subscription while gaining access to my stuff?
>That means there has to be usage statistics collection in all software.
You could always implement it on your end, right? Could be download based, or whatever. A one time thingy.
> I mean that's the literal point of this website, no? In the real world, a sale is a sale. Imagine going into BestBuy, leaving $100 at the front, telling the clerk to put it all into Sony (because Sony is 4 cool kidz) and then just grabbing a nVidia graphics card and Apple AirPods.
Ok, I see what you mean now. I think the distribution of who gets the money in 1Sub would be similar to donations, with two remedies:
- The owner of the paywall that made you subscribe gets a 10-credit bonus as described in [0]. This will lead to more money going to the people who make the things that you actually try to use.
- If someone is popular, they will either run out of subscriptions to sell, or they will sell them at a higher price. In either case that makes it possible for the less known developers to sell more subscriptions.
My question is: why isn't there yet a thing (or is there?) that works like AWS, but has the UX experience of a smartphone: you can install "apps" on it -- which you pay for hosting / bandwidth -- and it handles integration with all your devices, while leaving you in charge of how they're configured and what happens with the data?
Sorta like expanding the mobile phone experience to encompass your whole internet experience, so you can choose what services you use, and where they're hosted, and those two things are fundamentally decoupled.
One such app could be a sort of 'charge card' for websites, which would pay them pennies, or larger tips if you like, instead of having to see ads.
Another might be a connection to a search engine which allows you to tailor _your_ search experience instead of it being optimized in e.g. Google's interests with all the commercial stuff at the top.
I want a plug-and-play way to install services like (front-ends) BreezeWiki, Rimgo, Nitter, and Invidious, and (self-hosted) Miniflux, Gitea, a centralized Syncthing node, and an image sync tool (possibly Immich), onto an old laptop I own, without messing with users, groups, AUR builds, upgrading between Postgres versions... like a world where sandstorm.io had taken off. Then access them on any of my devices, like Tailscale but without binding arbitration and a class action waiver...
Haven't tried this project yet, but my plan is to buy a cheap HP EliteDesk / Dell OptiPlex thin client and just install umbrelOS [0], which has an app store with many of those apps, such as: homebridge, home assistant, pihole, tailscale, gitea, syncthing, vaultwarden, nextcloud, etc.
Umbrel's homepage seems uncomfortably slick, they're selling hardware but the OS is free to use. The app store seems to have more cryptocurrency apps than I'd expect from a general-purpose self-hosting service, and lacks Miniflux, Breezewiki, and Rimgo, though it has Invidious, Nitter, and Syncthing. And the OS seems to live under a noncommercial-only license (https://github.com/getumbrel/umbrel/blob/master/LICENSE.md). Overall the vibes seem to run somewhat in conflict with the "by the community, for the community" ethos I'm more interested in. Perhaps if I have the free time (haha when?) I might see if I can polish up Sandstorm for my personal uses, or if it's more work than juggling Docker containers or AUR packages.
> My question is: why isn't there yet a thing (or is there?) that works like AWS, but has the UX experience of a smartphone: you can install "apps" on it -- which you pay for hosting / bandwidth -- and it handles integration with all your devices, while leaving you in charge of how they're configured and what happens with the data?
Not at all. I can't "install a cloud storage app on my Heroku and then access it on my phone" without significant technical skills. As an engineer I could figure it out, but I won't, because I don't want to deal with that. Instead I will fantasize about how it ought to work.
Successful apps have more to lose from being on such an ecosystem than they stand to gain. It’s why so much software starts out as wanting to be open, dominates the market, then puts up the garden walls.
The closest we have to this is app stores - and look how everyone moans about them.
Kind of, but it should be vertically integrated between "cloud" and "edge" and "home-network" and "mobile." With all of that being either resources you own, or resources you're personally billed for, directly by the providers (though aggregated per app), with no ability for the app to extract rents on the costs of those resources (i.e. you're not paying the app so that the app in turn pays for the resources; you're being billed by the "cloud" and "edge" providers directly.)
If you install e.g. a Photos app, then that'd be a viewer app + cache on your phone; a bounded-size cache on your NAS or ISP gateway-router; a thumbnailing and face-detection background worker started in your ISP's edge DC; and a primary store in some cloud.
If you install e.g. Minecraft, then the server for that game will dynamically reposition itself (and migrate its data) between running embedded on device, vs. on appliance-compute on your home network, vs. on your ISP's edge-compute, vs. on the cloud — depending on whether you're playing single-player, vs. multiplayer with someone else on the same network, vs. at least one player being elsewhere in your region, vs. people connecting all over the world. (And, of course, when nobody is connected to it, the server should quiesce to just being dead state and then gradually have that state "evict upward" toward the cloud.)
IMHO a major part of this would be getting ISPs to sell commodity edge-compute power to OS vendors, both in-DC and in-home-network (presumably by putting addressable application processing capability into ISP gateway routers.)
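Roughly the placement logic I have in mind for the game example, as a conceptual sketch (tier names and rules are made up; a real system would also weigh latency, cost, and where the data already lives):

```python
# Pick where the game server should run based on where the players are.
from enum import Enum

class Tier(Enum):
    DORMANT = "no server; state evicted upward to the cloud"
    DEVICE = "embedded on the player's device"
    HOME = "appliance compute on the home network"
    ISP_EDGE = "ISP edge datacenter"
    CLOUD = "cloud region"

def place_server(players: list[dict]) -> Tier:
    """players: e.g. [{'network': 'home-1', 'region': 'eu-west'}, ...]"""
    if not players:
        return Tier.DORMANT
    networks = {p["network"] for p in players}
    regions = {p["region"] for p in players}
    if len(players) == 1:
        return Tier.DEVICE    # single-player
    if len(networks) == 1:
        return Tier.HOME      # everyone on the same home network
    if len(regions) == 1:
        return Tier.ISP_EDGE  # same region, different networks
    return Tier.CLOUD         # players spread across regions

print(place_server([{"network": "home-1", "region": "eu-west"}]))  # Tier.DEVICE
print(place_server([{"network": "home-1", "region": "eu-west"},
                    {"network": "cafe-7", "region": "us-east"}]))  # Tier.CLOUD
```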
If I'm a developer and get to choose what to charge, does that mean I can ask people for $0.01, and they would get access to everything from all developers of this "platform"?
The example on [0] where a developer pays credits when they get a subscriber is confusing. Should Devs "top up" somehow?
> If I'm a developer and get to choose what to charge, does that mean I can ask people for $0.01, and they would get access to everything from all developers of this "platform"?
You can do that, but you will not make a lot of money that way. The number of subscriptions you can sell is limited, so if you sell all of them for $0.01 you will probably wish you had asked for more, and when you have sold out, only the more expensive subscriptions sold by other developers remain, and those developers will make more money than you.
> The example on [0] where a developer pays credits when they get a subscriber is confusing. Should Devs "top up" somehow?
I don't know exactly what you mean by "top up" but the credits are turned into subscriptions when sold. This is how we make sure the developers can't sell infinite subscriptions. The plan is then that with time, the developers will get more credits so that they can sell more subscriptions. How fast they will get more could depend on the current value of their account, where the value could be calculated from the credits and the number of subscribers they have.
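To make that concrete, a minimal sketch of how I picture the mechanics (the account-value formula is just one possible choice, not something final):

```python
# A credit is consumed when a subscription is sold; the "account value"
# used for future credit grants counts remaining credits plus active
# subscribers, i.e. active subscriptions converted back to credits.
from dataclasses import dataclass

@dataclass
class DeveloperAccount:
    credits: int = 0      # unsold capacity
    subscribers: int = 0  # active subscriptions already sold

    def sell_subscription(self) -> bool:
        if self.credits == 0:
            return False  # sold out until new credits are granted
        self.credits -= 1
        self.subscribers += 1
        return True

    def account_value(self) -> int:
        return self.credits + self.subscribers

dev = DeveloperAccount(credits=3)
dev.sell_subscription()
print(dev.credits, dev.subscribers, dev.account_value())  # 2 1 3
```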
> How fast they will get more could depend on the current value of their account, where the value could be calculated from the credits and the number of subscribers they have.
So are you then implicitly setting the price yourself because anyone who doesn't charge enough can't get more credits?
Suppose someone develops an app which takes hardly any effort to make -- it's a hundred lines of code -- but it does something common that everybody needs so if available for $0.01 it would have a hundred million users. Which would gross a million dollars and more than pay for the development of the simple app, so the developer is satisfied with that. But to do that you'd have to let them sell a hundred million subscriptions for $0.01 each.
Now let's go toward the other end of the spectrum. Some app which is specialized and requires a million dollars of developer time but only has a market of 10,000 customers. Those customers would pay $100 each for it, if they had to, but not if they can buy into the system somewhere else for $10 (or $0.01) instead.
In general, who is going to buy a fungible subscription for significantly more than it's available somewhere else? How do you handle the fact that the development cost of a thing isn't proportional to the number of people who use it?
> So are you then implicitly setting the price yourself because anyone who doesn't charge enough can't get more credits?
Everyone can get more credits. The idea is that when we think we need more subscriptions to sell, every developer would get a number of additional credits that is proportional to the number of credits they have (with active subscriptions converted to credits for the calculation).
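As a sketch, the proportional top-up could look like this (the integer rounding is an assumption):

```python
# When the system decides to issue new credits, each developer gets a
# share proportional to their current account value (credits + active subs).
def distribute_new_credits(accounts: dict[str, int], new_credits: int) -> dict[str, int]:
    total = sum(accounts.values())
    if total == 0:
        return {dev: 0 for dev in accounts}
    return {dev: (value * new_credits) // total for dev, value in accounts.items()}

print(distribute_new_credits({"alice": 30, "bob": 10}, 100))  # {'alice': 75, 'bob': 25}
```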
> But to do that you'd have to let them sell a hundred million subscriptions for $0.01 each.
That would be very difficult for them to do, since the number of subscriptions they can sell is limited by how many credits they have.
> Some app which is specialized and requires a million dollars of developer time but only has a market of 10,000 customers.
If you make software for only a few people and you need a lot of money then I don't think this system is for you. It is mostly for developers who make software for everybody.
> Everyone can get more credits. The idea is that when we think we need more subscriptions to sell, every developer would get a number of additional credits that is proportional to the number of credits they have (with active subscriptions converted to credits for the calculation).
This is what I mean by implicitly setting the price. You set it indirectly by rate limiting the number of subscriptions.
A service with high cost and low volume gets priced out, even if it's only somewhat above average, because people can buy a subscription from someone else for less.
Conversely, if subscriptions are rate limited then no one has any incentive to sell them for less than the market rate, which is in turn set by supply and demand (and you having your hand on the supply knob). Why would anyone charge less, or pay more, than the median price?
Then anyone who needs more than that is priced out, and if you allocate credits based on how many people sign up or use a service, the service that provides only trivial value but to a large number of people gets a ton of credits disproportional to the value of their service.
Software has no marginal cost. You can make something that's used by untold millions of people. Even if many people pirate it, enough people won't that you can recoup your development cost and then some.
Software is easier to produce, sell, and distribute than any physical product. You don't have to worry about warehouses filled with unsold inventory. You don't have to worry about quality control and returns. It still blows my mind how much easier it is to run a business that deals with bytes instead of atoms. The OP talks about software having no copy protection, but Amazon sells DVD players and cordless drills for $30. Imagine for a second how hard it is to compete with that. Competing with Google or Microsoft or some startup is a walk in the park in comparison.
In software the hard part is making an excellent product. And let's face it, that's where most people fail. It has nothing to do with monetization.
I mean, sure, this is what all the business books, MBAs have been saying since 60s.
However, since then we have come to learn a lot about software. The most important lesson is that software, just like physical products, needs maintenance. The world is constantly changing and evolving, and software has to keep up, otherwise it'll become obsolete within a couple of years. At the very least it must be patched against newly discovered security threats.
Just look at all the money/effort spent to make features backward compatible, or army of engineers employed by companies just to maintain existing software.
> At the very least it must be patched against newly discovered security threats.
I'd say at the very most it needs security updates. Too much software changes just to change. UI redesigns for the sake of redesign, cramming features that nobody wants so a product owner can get promoted, adding telemetry and analytics to chase metrics that no user cares about, adding annoying notifications and popups to juice "engagement". I pine for the days of desktop software, where I can wake up in the morning and not be worried that some developer 1,000 miles away from me changed my product out from under me because developers gotta develop.
Another benefit of software that doesn't change every week is you can charge one time for it rather than these awful subscription pricing that most software are switching to. They justify subscriptions because "we have to keep paying developers to develop." Not a problem that the user has, so why should the user have to pay for it?
Old, unchanged software is not obsolete. It's mature. Bugfixes only, please.
On the other hand, the reality experienced by software companies is that adding features is profitable. Joel Spolsky talks about this in one of his old blog posts[1]: "I can tell you that nothing we have ever done at Fog Creek has increased our revenue more than releasing a new version with more features."
It makes sense though, if software companies could make just as much money doing less work, they certainly would.
There's really nothing wrong with new features as long as you understand that there's a certain subset of users who don't want things to change. Maybe it's because people are already trained on the current version, or they don't want to have to upgrade machines just to run the new feature set, or any of a thousand reasons you may not have thought of.
And then there are the "upgrades" that try to force you to pay more.
There was a dev tool that I purchased a couple years ago. Don't remember the name. It was reasonably priced and came with 1 year of support. A bit over a year later I got a notification that they had put out an update, so I downloaded it to take a look, only to find out that it had deleted the version I had bought and my license wouldn't transfer over. If I didn't now buy this new version, not only could I not use it past the trial period, but I'd lost the version I had before.
Yeah, I was pissed. And the company really had trouble understanding why I was so pissed off by this behavior. I did finally find out where I could download the version I had before, but there went my entire workday. And the product that previously I would recommend became something I cautioned people to avoid!
The subset of people who don't want things to change are running which OS exactly? User interface is just like any other artistic field: it has fashion trends. Look at something that's been around forever and is still developed: BBEdit. Yeah sure the app has not changed a *ton*, but it's changed more than you think. Many fads in OS X design (like drawers) had to be implemented and later removed.
Any successful piece of software cannot realistically just stay still. It has to keep evolving with the trends of user interface. The difficult part is doing it well. BBEdit has managed it.
>The subset of people who don't want things to change are running which OS exactly?
All of them? Hell I hate it when things change in a way that forces me to give them attention now rather than when I have time. Nothing worse than doing an update and having to rework my flow, scripts, and code just to be productive again. Let me choose when I update my tools, don't force it on me just because your UI team found an even more complicated and torturous way to make simple things ugly and hard - I have my own work to do.
Time moves on. In the 80's, 90's and 00's entire genres and niches were explored and defined. Much of the software I use matured around the time that article was written, and the "new features" in a lot of software are so niche that they aren't big enough draws to buy new software versions. Hence the move to subscription for everything. Microsoft Office is a generic example. The company I work for has all versions from 2007 through current in use. There are no features in the newer versions that change how we work.
Many people I know in my industry have been looking to dump the service contracts for CAD software for 10+ years. No useful added features, no improvements in stability, and no service to speak of through the VARs. What is the point of paying?
I think this is kind of disproven by a feature that was added to Microsoft Word in the 1990s (I don't think it is still around, although I may be mistaken). It was called "WordArt" and let the user do things like write the word "shark" with the letters deformed so it looked like a picture of a shark. Why would you want to do this? I have no idea. It's just obvious that the people working on Microsoft Word needed to add something and just bug fixes weren't enough, I guess (although they still don't have a reference management system which is why things like EndNote still exist)
I wonder if you are trolling or being serious, because I and literally everyone I know would use this feature extensively. For PowerPoints, school presentations, birthday cards. 50% of the time I fired up Word, it would be for that feature.
I seriously have never seen this used ever. But it sounds like you are talking about children using it, which I hadn't considered (I was already an adult in the 1990s).
> I'd say at the very most it needs security updates.
What the parent said about "security updates at the very least" is correct, and sometimes that happens to also be the very most updates that should be made. And sometimes it's that but a little bit more. And sometimes it's that and a lot more.
The hard part is figuring out the right balance. And then, figuring out how to staff in order to achieve that balance.
The "only security updates" approach turns out to be among the hardest to figure out how to staff for. Because the idea is that this software is essentially complete upon release, so the natural business model is to sell it that way, for a one-time fixed price. And then with that revenue structure, the natural cost structure is to move all the staffing to a new project (or to build these kinds of products with project-based contracts to begin with).
But once you've accepted that you should at least be doing updates for security (and I think this is correct in almost all cases), well, now who is going to do those? You have a recurring cost with a non-recurring revenue stream. You can push down the recurring costs as far as possible, but eventually this model just struggles to pencil out. At that point, you'll probably decide to just stop all updates, including security patches.
This phenomenon is why most people making software seek a business model with a recurring revenue stream. It's not an accident that the days of boxed software were also the days of rampant insecurity.
But, you're totally right that the next step in this is often, "well if we have to have ongoing staffing and recurring revenue, we need something for them to do besides maintenance, so let's do UI refreshes and metrics and stuff I guess". It's a test of leadership, to avoid that temptation. Better products have better leadership that is making better decisions about when it makes sense to do more on a product and when it makes sense to mostly leave it be.
> "security updates at the very least" is correct, and sometimes that happens to also be the very most updates that should be made.
And a lot of those updates wouldn't be necessary if software and tools didn't offer so much attack surface - surface they wouldn't need if they cared less about treating those things as necessary features...
Man I dunno. This sounds right and all, but after years of seeing security issues that don't seem to have anything to do with unnecessary attack surface, I have to say that this just seems unrealistic to me. The problem is that no software runs on a machine without an internet connection, and you can't control the attack surface of other software on the machine.
> Old, unchanged software is not obsolete. It's mature. Bugfixes only, please.
This assumes a waterfall approach to development which implies multiple 6 month to year long development cycles.
In reality, a mature stable project can receive monthly updates, and an immature half-working project can be in maintenance mode. Furthermore, this may work for software that should be seen and not heard, doing its job in the background without much user interaction, but for software that users interact with regularly, the design needs to be periodically refreshed to match current trends or users will leave for the newer, sexier product with fewer features. We've seen this time and again. I have absolutely experienced a mature product that was "finished" (abandoned) like 4 OS versions ago that just doesn't run/work on the current OS version because the platform has added new security controls, APIs, and/or UX expectations, etc. No amount of security updates would fix that.
So while I understand where you're coming from, pining for a world where we ship mature software and security updates only, I don't think it's remotely realistic given the way humans operate.
> Software that gets frequent updates isn't "mature and stable" by definition. It's constantly changing.
That's simply not universally true and it's incredibly naïve to try and assert that it is. Obviously there are examples of immature unstable software that receives monthly updates, but it's not a tautology that monthly updates imply immaturity. You either don't work in software or haven't really thought this through.
Stable means the software runs reliably without major issues, and mature means it is a solution well adapted to the problem domain that solves the problem with grace, tried and true. Monthly updates might be "integrate support for new technology/service (that didn't exist 6 months ago)" or "support latest changes in macOS 14" or even "fix issue that happens 0.01% of the time". Other software changes and you have to adapt, and no software ships bug free. Being mature and stable means you have the time to work on things that aren't existential for your product/business, like adding convenient support for some sexy new service as a nice value bump, or making sure those 0.01% of your users aren't occasionally encountering an annoying or frustrating issue.
In this context, stable means that it should not break, not that it will not be updated anymore. The term for what you are referring to is end-of-life.
You're touching on the real problem, here. Software isn't broken, it's just that the inherent issues in capital are starting to become painfully clear in this context.
I've been trying to find a term for "behavior focused on maintaining your job when the need wouldn't exist without such behavior". It's kinda tangential to artificial scarcity but broader in scope, and if we don't have a term for it, we need one badly. So much of our society's resources are committed to solving problems that don't exist, because the actual problem is "you need money to live and for whatever reason the thing you do in the place and time you are isn't necessary or desired".
> I've been trying to find a term for "behavior focused on maintaining your job when the need wouldn't exist without such behavior"
The concept of self-preservation, or maybe "superfluous self-preservation", probably works here. But perhaps "auto-preservation" better conveys the sometimes unconscious nature of what goes on in these situations.
> Another benefit of software that doesn't change every week is you can charge one time for it rather than these awful subscription pricing that most software are switching to.
How do you pay developers to continuously fix bugs, provide security updates and update their software when the underlying hardware and operating system changes?
> How do you pay developers to continuously fix bugs, provide security updates and update their software when the underlying hardware and operating system changes?
Have we really strayed so far that everyone's forgotten how this is done? Security fixes and serious bug fixes should always be free (At least going back N-1. You price that work into the sale price to begin with), and you get ongoing revenue by selling new versions.
And what if the person is happy with the current version “n” that they were using, kept the same operating system while you released n+1 and n+2 to stay compatible with new operating systems, and then upgrades their hardware and finds out that their old software doesn't work?
They will still need to buy a new version or should that be free?
If the author of BBEdit had never added a feature since 1991, you would still have had to pay for new versions to run on your PPC/Classic Mac OS, OS X PPC, x86 Mac, and now your ARM Mac.
Back in the “good old days” MS Office cost $595 for each version if you had a Mac and Windows PC.
Now it’s $99/year for five users and you can run on your Mac, Windows, iPad, iPhone, web, or Android device.
The same for Photoshop.
And you get continuous features added as the platform vendor and software vendor add more capabilities.
> and you get ongoing revenue by selling new versions
This works exactly up until the moment that your software is good enough that most of your userbase stops paying to upgrade. Then you are dead in the water, and the software becomes abandoned by design.
Obviously that's bad for businesses - but it's great for consumers! I think the question that's being asked is if there's some business model out there that delivers what customers want (the ability to just buy a finished product once and have it work decades down the line, like "pass it down to your kids" long) while also delivering profits to shareholders.
There's a reason farmers want the ability to repair their own tractors without having to give John Deere an extra cut, you know.
> if there's some business model out there that delivers what customers want ... while also delivering profits to shareholders.
Of course there is, but that's why software in a box cost hundreds or thousands of dollars per version, with minimal bug or security updates thereafter. The grass is always greener, yeah it's a pain in the ass having a ton of $10/mo subscriptions. But I'd much rather have that - as both a consumer and a developer - than have $800 single-sale purchases.
You emulate the abandonware, old OS and all. She kicked the habit recently, but my sister preferred Word 5.1 for Mac for a long time. That was a 68k program, which she dutifully used _on a PC_ while Apple was busy shipping iOS on ARM and Mac OS on x86. The Centris 610 is very tired, but the software still works. (Well, not the original copy. Those install floppies are very dead.) Software can be uniquely persistent, in a way physical artifacts can't, so why are we so insistent on keeping everyone on the upgrade treadmill?
George R.R. Martin pretty famously uses WordStar on DOS. I can't imagine it'd be some win for consumers (either Martin personally, or downstream enjoyers of his books) if he had to be on the latest internet-connected, ad-infested, notification-riddled copy of Windows just so that his OS and Office Suite could repeatedly check to make sure he still has an active subscription and a valid "digital entitlement."
I still use Office 2010. (Though it gets increasingly difficult to activate it, and it last received security updates in 2020.) In 2010 I was using x86_64 (an Athlon 64 X2), and today I'm using x86_64. Why should I upgrade? It happens to still run on Windows 11, but I'd gladly stuff it in a VM to continue using it. (I do use Office <current 365 build> for work, so I can pretty confidently say there is nothing worth paying for in there. The only feature even remotely interesting is PowerQuery for Excel, which is available as an add-in for Office 2010.)
Well, my wife uses one of my 5-user Office 365 subscription licenses on her Mac. I use it on my iPad and phone. My mom uses it on her Windows laptop and her iPad.
We each get 1TB of online storage.
Compare that to the $599 that Office for Mac used to cost, and that you could only use on one computer.
Companies can bake the cost of one or two maintenance releases and maybe one or two years of security releases into the purchase price. I agree it's not reasonable to expect lifetime updates from a one-time purchase. As long as you're not doing heavy development on these maintenance releases, the company's cost should be very small.
As a user-developer, I'd also be happy with being provided the source (or un-linked object files, or the equivalent for whatever language being used) after the maintenance period was over, so I could continue applying dependency security patches myself.
Depends on whether the bugs are because of preexisting flaws or because the underlying platform has shifted. No one can predict the future, and even OS vendors who once took backward compatibility seriously may not in the future.
The design of MOST non-trivial products is refined over time with no expectation that older versions will be upgraded to the latest and greatest. Yes, material esp. safety defects can lead to recalls but this is relatively rare in the physical world.
The OS under your software is not static. MacOS programs from 10 years ago rarely execute successfully. Windows programs from 20 years ago might. Linux programs from 5 years ago mostly don't unless you have access to source code (and a certain willingness to patch it yourself).
Software "maintenance" is kind of a self-fulfilling prophecy. It's not required to break the old in order to make something new, but unchecked scope creep results in what used to work not working anymore, and thus the artificial need for maintenance.
> I'd say at the very most it needs security updates.
and then you move the bar a little (although I agree):
> Bugfixes only, please.
I would also add updating to work with the current OS / hardware. (I have unusable games that are a recompile away from being usable.)
But I agree with the rest of your points. Especially when, in addition to asking you to fund new features, the new features make the app worse for your use cases.
However I don't know if the root cause is more accurately described as "developers gotta develop" or "product managers gotta produce".
Yea, I don't mean to target individual software developers here. "Developers gotta develop" is commentary on the entire industry, and all the contributors, including developers, UI designers, product owners, QA, executive sponsors. I remember hearing the saying "Programmers are like beavers. Leave a beaver alone to decide what to do and they'll just keep building dams, regardless of the fact that their home is done." I don't know if that's really true about beavers, but it's true about software organizations. The whole software development team will just continue working on the software even long past the point where they're done.
Software compatibility with current modern platforms is a feature, and an owner of software isn't entitled to forward compatibility any more than an owner of a car is entitled to new parts as the old ones degrade.
Software degradation is much like hardware degradation: it happens with time as underlying platforms change.
But interface updates do meaningfully help many people.
Most people in engineering roles think the job is done when the engineering is done, and the maintenance is unnecessary unless it's necessary for stability or security. That's not limited to software, either. The fact is, to the vast majority of non-developer software users, an improved workflow, more intuitive, or yes, even more attractive interface makes more of a difference than moderate performance upgrades or minor stability improvements.
To a developer, interfaces are a way to interact with software, like an API for humans. To everyone else, the interface is the software. Old interfaces are as or more usable to you because you have a sophisticated mental model of software and a high tolerance for logical complexity. These dreaded designers' profession is figuring out how people who don't have those things can most easily solve their problem with the tool you built.
Car controls would look a lot different if the engineers maintained control over the available controls without designer input. They might intuitively understand that the array of controls that change fuel injection parameters should only be used in certain instances, but they liked having them right there just in case. When told that they'd just confuse average drivers and should probably be hidden, they might argue, "I explained to my 6 year old nephew how more or less air can affect engine performance." Multiply that by the dozen internal systems they want to control or get real time data from. A designer would recognize that this would confuse most drivers for little benefit and hide everything but the things most drivers need to find and parse instantly... And they would be met with the same heavy sighs and eyerolls that software designers regularly get from developers.
Designers are in the organization because they can do things that developers can't. They make developers' work vastly more useful to the world, because the way someone solves their problem is as or more important than it being optimally solved using the smallest amount of available resources with 5 9s of reliability instead of 3.
And that's why, in the overwhelming majority of cases, end-user-facing commercial software with professionally designed UIs and someone looking at UX as a whole will dominate FOSS alternatives, while FOSS tools targeted at developers and other technical people do as well as or better than the commercial equivalents.
Even the security updates are often dubious. Software that could be entirely local (with a system provided filesystem backup/sync for data) adds "cloud" functionality so that it can lock you into the SaaS subscription model, and now it's got the network as an attack surface. It's self-justifying. Even there though, it generally just talks to the vendor's servers, and if you control the vendor's servers, you probably have more direct attack routes than some http client bug or some bug in an svg library that the vendor uses for their logo.
"Security" patches are something only checklist-driven corporate IT (i.e. people who can't consider use-case) ought to care about. For individuals, they're mostly a cudgel to justify abusive practices and should be ignored.
> otherwise it'll become obsolete within couple of years
I mean, sure, this is what all the software developers have been saying...
In the meantime, I'm constantly seeing users, even here on HN, complaining about how their favorite software tools are changing. Users the world over annoyed at SaaS, and pining for installable software that they can just put on a machine and never have to worry about forced upgrades or annual maintenance fees, etc., or even the convenience of not needing an internet connection for it to work.
The software world has never been black and white. There are product niches, and also use-case niches. You could probably make a good business by choosing something that's only available as SaaS and releasing a local-only version of it.
> otherwise it'll become obsolete within couple of years. At the very least it must be patched up with newly discovered security threats.
Only if it talks to the internet. I have plenty of software I downloaded over a decade ago that has no internet access and runs perfectly fine on Windows 11. Much of it is even older than that. Just stop trying to cram social media integration into your label-making program and it gets a lot easier.
Probably depends on the user, honestly. Most of the software I use doesn't need to talk to the internet. A lot of it wants to, but that's a different thing.
> The most important of which is that software, just like physical products, needs maintenance. The world is constantly changing and evolving, and software has to keep up otherwise it'll become obsolete within couple of years. At the very least it must be patched up with newly discovered security threats.
I feel this point is largely overstated, or rather that in reality the majority of important patches are due to the shoddy quality of the original software rather than to external changes. Most security issues are rehashes of common, well-known attacks rather than completely novel discoveries. Especially on the desktop the platform churn is pretty low: Windows happily runs decades-old binaries, and on the Linux desktop we have one major breakage happening, Wayland, but otherwise well-written decades-old code is at least source compatible if not binary compatible (although even that is not that far-fetched...).
> The world is constantly changing and evolving, and software has to keep up otherwise it'll become obsolete within couple of years.
There's some truth to this, but I think this factor is usually dramatically overstated. At least, most of the software I use doesn't need to constantly change. The majority of software updates I see are unnecessary, and many of them are undesirable.
A company I worked for 12 years ago was using a version of Microsoft Navision (now Microsoft Dynamics or something). They hadn't upgraded for several years. Upgrading would have meant a bunch of workstations would have needed to use newer versions of Windows beyond XP. Navision was largely unsupported (only by a consultant, not by MS) and of course the workstations were dangerously behind (yes, we were definitely on the internet). But to the users and the owner of the company, everything was working. We had very few problems...EDI was coming in and going out, packages were packed and shipped, inventory and accounting were up to date. It felt to me like things were held together with chewing gum and duct tape, and we were one hard drive failure from disaster, but from the company's bottom line, nothing was broken.
I left before they upgraded anything, and they're still in business, so I guess it worked out. But it proves that not everything has to change to continue to work.
> ..not everything has to change to continue to work...
That was sort of my argument; you could make do without changing the software, but the cost keeps increasing and the user base keeps shrinking. At some point it'll be down to that one customer who refuses to budge and will be asked to make a tough choice: upgrade to the latest version or hire developers yourself to support it... and even that may become impossible if the h/w guys stop manufacturing that old machine configuration.
We build power electronics and our machines also have lots of software in them. People who only work in software have no idea how different a software bug is from a hardware bug. Things we can solve in software mean someone remotes into the machine and goes home to their family at the end of the day. A hardware problem usually means the engineer goes home, packs a bag, gets a plane ticket, and is away from the family for a week, and hopefully we correctly figured out remotely what the real issue is. I did two transatlantic flights this year because there was an issue with a >$5 component on a circuit board.
"If your software need security maintainance it mostly has a failed architecture from the get go."
There are plenty of open source libraries, used by many software developers in their applications, that have had to have security updates. No software will be 100% secure.
"Like 9/10 apps need no internet connectivity at all"
This might have been true 10 years ago. Almost all apps people want need internet connectivity.
Software that does not interact with remote computers is 100% secure. You still have the risk when loading malicious save files or whatever, but floppy-disk-style viruses are a whole other level of security risk, and the user needs to load the files. It doesn't just happen (I know some Windows computers could get infected by merely plugging in some USB stick, but you get my point).
The whole connectivity thing is the fundamental problem. Transferring files between devices has never been as easy as it was during the floppy disk days. Usability is not the driving factor behind forcing the internet into everything.
Maybe you've never experienced the difference between writing software for 1000 people and writing software for 1M people, or (I imagine) 1B. The marginal per-person cost of software is not on shipping. It's on "what kind of weird shit will I now have to do because 1M is a lot of chances for my software to break weirdly, and people have paid for it"
> You don't have to worry about quality control and returns.
You don't have to worry about quality control and returns if you don't care about quality control or returns.
That's applicable to websites, where you have to handle requests from all your users, and more users means more requests to handle.
But if we're talking about plain old regular software, something that needs no server to operate, and functions perfectly fine offline, something like, say, Photoshop, how different is the impact on the manufacturer when the software is used by 1k users, 1M users, and 1B users?
Yes, having 1M or 1B users means more opportunities for the bugs to surface and for people to be upset with the product. But do those scenarios impact the quality of the product for other users? Do they introduce unseen costs to the manufacturer? Do they make the product unprofitable or unsuccessful in any way? Or does it mean that the manufacturer will have to refund 0.1% of their sales, and only benefit from the 99.9% of sales where the product worked as expected?
> how different is the impact on the manufacturer when the software is used by 1k users, 1M users, and 1B users?
_very different_, when the user's environment is different. And 1) you haven't seen shit if you think you can perfectly control the user's environment. 2) every new user is a chance for the environment to bite you.
> Do they make the product unprofitable or unsuccessful in any way?
You do your engineer best to try and fight that. But there's absolutely a marginal cost, which is what I was responding to.
> _very different_, when the user's environment is different. And 1) you haven't seen shit if you think you can perfectly control the user's environment. 2) every new user is a chance for the environment to bite you.
Can you provide some examples of this? I'd like more info here, because off of the top of my head, I can think of the following counter-examples:
1. This isn't a new problem. User environment has been an issue ever since software as an industry was born. Specifying minimum specs is a pretty typical thing. And while I don't have depth of knowledge on these challenges or their history, my understanding is that it's only become less of a factor over time. So why is digital software different in this regard? If the industry was able to sustain itself before it went digital, what about the change to digital makes it unsustainable now?
2. Computer games, which are probably a good candidate for the most resource-heavy programs that need an appropriate environment, still largely adhere to a pay once business model. Doesn't this indicate that offline experiences aren't affected by environment to such a degree that a single payment business model isn't problematic?
> You do your engineer best to try and fight that. But there's absolutely a marginal cost, which is what I was responding to.
It surely has a marginal cost. But is that cost significant, is the question. In particular, significant enough to warrant a recurring payment business model.
> It surely has a marginal cost. But is that cost significant, is the question. In particular, significant enough to warrant a recurring payment business model.
I think you're assuming more of my answer than what I gave. That's fair given that this is the point of the article, but it's not mine. I'm very specifically only responding to "is there a per-user marginal cost on software?", and my answer is most definitely yes.
To warrant a recurring payment business model, I think the right question to ask is "Is there a per user-year marginal cost on software?", and now the answer is in my view, much more complicated and domain-specific. Worse yet, I think that there's perverse incentives at play here in recurring payments.
> I think you're assuming more of my answer than what I gave. That's fair given that this is the point of the article, but it's not mine. I'm very specifically only responding to "is there a per-user marginal cost on software?", and my answer is most definitely yes.
Fair, but I feel it's disingenuous to ignore the context the original comment was written in (the context of the article) and try to argue against a specific point in the post as if it was made without that original context. The sentence may have lacked inherent context, but it was supporting the key points the GP was making in response to the article. It wasn't designed to stand alone.
Granted, I'm not the author of that post, so it's entirely possible they intended it to stand alone, but I think it would still be better to check whether that was the intent rather than to assume so and antagonize what they were saying.
> But do those scenarios impact the quality of the product for other users?
Absolutely. Anything involving internationalization is an open invitation for very weird edge cases. Some languages (Hebrew!) are written right-to-left, some need more than one byte per character (Japanese, Chinese), time formats and time zones vary, and some write currencies with the symbol in front (US dollar) and some at the end (Euro).
If all your testing was done by Americans speaking English, the only thing you may stumble upon is timezones. If you're in Europe, timezones won't be much of an issue (as almost everyone is on CET), but you may find out that, whoops, Windows localizes certain path elements like C:\Users.
On top of that, a constant pain point in support is displays. Most Windows users are on a 1080p screen on their laptop, but may plug in their new 4K monitor and notice that your UI is completely illegible because it doesn't respect DPI settings. Or you thought you supported variable DPI, but never planned on a user stretching your window across two screens with different DPI settings. Or monitors use different color profiles or gamma settings and users complaining about that.
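To make the "environment bites you" point a bit more concrete, here's a minimal sketch in Python of the kind of assumptions that hold on the developer's machine and quietly fail on some fraction of a large user base (the paths and the ".myapp" name are made up for illustration):

    import os
    from datetime import datetime, timezone
    from pathlib import Path

    # Fragile: hardcodes an English-language, C:-drive, per-user layout.
    bad_config_dir = Path(r"C:\Users") / os.environ.get("USERNAME", "user")

    # Sturdier: ask the platform where the home directory actually is.
    good_config_dir = Path.home() / ".myapp"   # ".myapp" is a made-up name

    # Fragile: naive local time; ambiguous across DST changes and time zones.
    bad_timestamp = datetime.now().isoformat()

    # Sturdier: store UTC, and render in the user's zone only at the edges.
    good_timestamp = datetime.now(timezone.utc).isoformat()

None of this is exotic, but every one of these assumptions is invisible until a user with a different locale, disk layout, or time zone runs the code.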
Beyond bugs, scaling your MVP to 1B users will mean expanding your userbase beyond English speaking Americans. This requires upgrades to internationalization, accessibility, possibly compliance with international laws and 3rd party licensing changes per region. Multilingual support staff and international payments processing. With a userbase this large, expect to be sued by people around the world, so you'll need region-specific legal services. Some of these issues just require money and non-technical staff and don't directly impact the user experience aside from diverting resources away from building features and fixing bugs for your original userbase.
Sure, but these aren't business model problems, they're business growth problems. The concern wasn't how to find 1B users in the world (and what do you have to do to get their money), it's whether scaling to 1B users inherently breaks the product, not just for individual users, but for all users.
If a company was only able to sell 2.6M copies of their digital software before running into expansion problems... good for them! That's a lot of sales and they probably made a great deal off of those sales. Sure, they can grow to 1B users, but they don't have to. There's no requirement for them to do that other than choosing to expand into those markets, and that's strictly optional. The business model is doing fine; there's no need to adopt a recurring payment system for ongoing maintenance.
And let's be honest, even if they do choose to expand into those other markets, the cost to convert the existing product to work in those markets is most likely less than the money they'll earn from selling in those markets, so... is there really a need for recurring payments to support maintenance? Will one-payment sale structures inherently fail to make the product profitable in a given market?
There are apparently 2B English speakers in the world, so you could in principle get away with no internationalization and have 1B users. The other things are more a cost of operating a multi-national business, and not a marginal cost of the software as such. You could also in principle scale to ~300M users (or ~100M households) without worrying about international issues by sticking to the US only.
You can tell the people who have never run a business or have worked at one small enough that they see everything. Support staff are not free. Project managers and salespeople can’t keep up with meetings and start sprouting assistants and coworkers. Customers are expensive, especially upset customers. So then the developers have to spend a lot more time making sure customers don’t get upset.
>But if we're talking about plain old regular software, something that needs no server to operate, and functions perfectly fine offline
The main product at work is a desktop application. That means that every OS version / hardware configuration of every platform that any user might install it on can have its own bugs. It means that we support multiple major versions rather than being able to just always deploy the latest version. It means that a user might want to have multiple versions of the software installed side-by-side on the same machine. It doesn't change the fact that more users means more use cases.
Even when customers run software on their own machines, you have to deal with bugs that only occur in rare occasions because your giant user base finds them all. Plus now you’re running in unknowable environments that you have to debug via telephone (the object or the children’s game or both).
Software has a somewhat inverse relationship to scale as manufacturing. For manufacturing the first one costs millions, and each one after costs hundreds for a time. As you get better you winnow away the equipment or maintenance costs and prices drop.
Software use cases grow combinatorially, and almost all useful algorithms have at least log(n) runtime. Even when Knuth says they are O(1), physics or EE say he’s wrong. There are no economies of scale. Racks don’t get cheaper when you run out of network ports. Cooling doesn’t get cheaper when you run out of roof. Things that failed one time in a million calls now happen every hour instead of twice a month, and actually have to be fixed.
As N of people → ∞, chances for software to break → finite maximum. And for good enough software you should consider that maximum already regardless of the number of users.
> And for good enough software you should consider that maximum already for any number of users.
I don't believe such software exists. (And, to be clear, I'm writing from direct, day-job experience.)
EDIT: I take it back. SQLite, cURL. Maybe.
EDIT2: I can't reply to the SEL4 response, so here goes. I'm a huge fan of verification tools, but consider the Spectre class of bugs. Verification is always done wrt a mathematical model that you've defined after inspecting the world and writing down the properties you want to track. But the world changes, and the chance that the world changes increases with the number of users of your software. That's the nature of the beast.
Spectre is a bug in the processor, not in the software. I agree that when you're stuck with unfinalized buggy processors, adding mitigations in software is reasonable. But the processor could be finalized too.
When there was a reply I couldn't reply to, I opened it separately in a new tab, and there I could reply to it; try this.
> Spectre is a bug in the processor, not in the software.
It's a bug in the processor that causes a bug in the software. It's not a bug in your idealized mathematical model, but try telling that to the people who paid you not to leak private keys.
I see my job as an engineer to be to create a product that satisfies the user's expectations (which in this case are eminently reasonable). It matters not one bit that I can point the finger to the chipmakers. I'm still selling something that I now learned doesn't do what I said it would. It's still on me to fix it the best I can. If I care about the product quality, that is.
And yet that's what every good engineer did when Spectre came out. Same with the Pentium fdiv bugs, and same with a host of microcode bugs that come up all the time.
Not my business to decide what you think is reasonable. That's just what happens in the world, and what (in my view) good engineers sign up for.
The choice is between letting hardware be non-finalized and letting that force software to be non-finalizable, or letting software be finalizable and forcing the hardware to be finalized too. I like the latter more. Finalized hardware is better by itself as well.
If you buy a car and the airbags randomly deploy, would you consider it reasonable for the manufacturer to respond "oh, yeah, that'll happen if you drive it on roads rougher than polished stainless steel. You should only be driving on polished roadways"?
If this requirement was known to me before I bought, sure.
I think that this is a bad analogy to hardware, though. Polished steel roads are unreasonable to ask for, but bugless processors are reasonable to ask for.
They may not be impossible to make, but they don't get made. And I guess you think Microsoft has been putting bugs in Windows for the last few decades just for something to do, or was that also the influence of alien gods?
I suspect it’s less about chances to break due to dice rolls and more about chances to not meet the features/requirements that change based on users' varying contexts, which creates a lot of legal and integration requirements that need lots of code and maintenance.
Not at all. Software has low marginal cost, but it has high fixed costs that need a monetizable market to sustain. Good software takes effort and great people. Those are expensive. If you can't monetize you can't put people on your software and it will suck (like most OSS software, for example). Physical manufacturing is hard, but at least it brings in dollars. OSS, privacy and wankers reverse engineering your software shrinks your market substantially.
I'm not sure I get your argument. Basically everything you're talking about applies to physical manufacturing too. You have high fixed costs (equipment, location, assembly line workers, what have you), and you also have marginal cost (software basically has zero marginal cost). Good physical goods also take effort, and great people to design them.
> Physical manufacturing is hard, but it at least brings in dollars
You say this as if it's some indelible fact that if you make a physical product, it WILL be bought and you WILL make a profit no matter what, but I think it's safe to say this is objectively false, as many failed physical businesses would attest.
> OSS, privacy and wankers reverse engineering your software shrinks your market substantially.
As opposed to in the physical world, where nobody ever cribs your ideas and sells them at a discount compared to you... AKA "Amazon's business model"? (not to mention overseas knockoffs of products)
Given all these things being equal then, software has all the same benefits that your parent comment mentioned, while staying at best EQUAL with physical manufacturing, save for maybe higher salaries to the people making your product (arguable in some cases, but on average probably true) but this difference pales in comparison to not having to own a warehouse and manage last-mile shipping costs etc.
A physical product has limitations. Creating 1,000 car mirrors requires capital, storage, and shelf space to sell. Once the mirrors are created no changes can occur. Any change requires a new batch.
Software has expectations that it can and should be changed after purchase through updates/patches/upgrades/saas products. That creates an ongoing cost a physical product doesn't have.
There are tradeoffs and different expectations which make both difficult. I would rather go the software route because I have the advantage of free developer time, but someone else might find making 10,000 widgets from China much easier and cheaper. We think software is easier because we devalue what we add and what we really cost.
>Software has expectations that it can and should be changed after purchase through updates/patches/upgrades/saas products. That creates an ongoing cost a physical product doesn't have.
Nowadays businesses use this to create a constant revenue stream from what used to be a single purchase. It's not to service the product, it's to continue to soak money from the people who do end up spending on it.
Aside from security updates, I just want most of the software I have to stop. No changes, no design upgrades, no "we changed this tier of our pricing", etc. Most of that stuff is working against the customer, not for them. Your SaaS model is so you can make money; I have no incentive to pay more than I have to.
You have to pay their recurring revenue if you want them to stay in business and keep the lights on so you can use their product. That's the hard reality. If you run your own server and fix your own bugs and etc. (which is feasible for many here, I'm not saying it's a bad option) then you can "pay no more than you have to".
If it was just software they sold, it would still exist. It's only SaaS and abusive license verification that mean, if they go out of business, they remove all benefits from previously paid amounts, and that's not in my interest either.
> If you can't monetize you can't put people on your software and it will suck (like most OSS software, for example).
I have worked on FLOSS software and I have worked on non-FLOSS software and I don't see most FLOSS software sucking in a way that non-FLOSS does not.
FLOSS has some advantages - as there is no compelling need to release new features which can drive up revenue and profit (or at least OKRs) for the next quarter, you don't get a constant need to release unneeded junk to try to squeeze the last dime out of consumers. You can actually spend time refactoring the code, or only releasing when it is properly architected.
Most of the servers and smartphones in the world are running on a FLOSS kernel. macOS derives from CSRG's BSD, and even parts of Windows, like the Internet stack, derive from FLOSS. If it sucks so much, why do virtually all major operating systems derive fully, or at least partially, from it?
It's almost like we live in different world, I could not disagree more.
* Software is extremely expensive. Software engineers are expensive, and for a good software project you need a tech lead, a manager and probably a few developers. These are all people you need to pay tons of money for.
* Software is constantly changing, something that worked 2 years ago can be broken beyond repair today. You need a team that can keep up with this.
* Software needs maintenance. You can't just build an app and call it a day, you need to employ a team to maintain it continuously. You can build a massive, gargantuan bridge and maintain it maybe every few years/half a decade to keep it safe for 30+ years; you cannot do that in software.
* Unlike what outsiders think, software -- even "boring" CRUD/web software -- is still very much a research project. If you ask a civil engineer how to build a bridge, they'll tell you about all the techniques that were developed over the many many decades. What a developer focuses on while writing code is mostly ideas developed in the last few years. Although you think you're building a simple app with 3 devs, what you're missing is that you have your own tiny research lab studying how to develop this simple app the cheapest way possible while making it maintainable.
* Software by its very nature is hard to make money off of. Its complexity is opaque to most people, so they're not willing to pay. You'll always have people pirating it, eating away at your bottom line. Moreover, each new piece of software means changing workflows, so even if you have the best product on the market, a decent number of people won't switch from the industry standard.
* Modern software engineering methodology focuses on, among other things, time to ship, feature richness and maintainability. It does not focus on correctness -- partially because our theories on software correctness are lacking (even if you decide to use novel/extreme approaches such as Dependently Typed Programming, formal proofs etc it's unclear/unknown if you'll reach a significantly better correctness metric). This makes your product inherently frustrating to the customer. No matter how much money you spend, you'll always have a product that's a little bit buggy. This means the product is very sensitive to the amount of money you throw at it. If you throw Apple level of money, it'll be less buggy, if you have a barebones team it'll be more buggy.
> * Software needs maintenance. You can't just build an app and call it a day, you need to employ a team to maintain it continuously. You can build a massive, gargantuan bridge and maintain it maybe every few years/half a decade to keep it safe for 30+ years; you cannot do that in software.
> * Unlike what outsiders think, software -- even "boring" CRUD/web software -- is still very much a research project. If you ask a civil engineer how to build a bridge, they'll tell you about all the techniques that were developed over the many many decades.
As a nonpracticing civil engineer, I'll say you're underestimating the ongoing maintenance that goes into any large bridge.
Also, though the techniques may be more established, every bridge must still be designed to fit the specific characteristics of its local geology and geography. But come to think of it, fundamental computer science algorithms are pretty well established, like bridge-building techniques. Software engineering is simply fitting the code to each unique problem, as bridge design fits a bridge to each unique place.
The dirty secret is that you rarely need to invest in new, novel, software engineering techniques which is what you need actual software engineers for. In reality you can just get a few software developers to propose a design for a thing, have a software engineer consultant review the design and sign off, and then go on your merry way building the software. Kinda like how architecture/construction vs engineering works in meat space.
> Unlike what outsiders think, software -- even "boring" CRUD/web software -- is still very much a research project. If you ask a civil engineer how to build a bridge, they'll tell you about all the techniques that were developed over the many many decades. What a developer focuses on while writing code is mostly ideas developed in the last few years.
Most (all?) of the ideas I see are at least 20 years old, if not 40-50. Something like Spring wouldn't be my ideal choice, but it can certainly get the job done for most people, and it's 20 years old. MVC dates back to the 70s. Postgresql is 27 years old and is a fantastic choice. SQL and RDBMSs date back to the 70s. The term CRUD itself dates back to the 80s. Server rendered pages are still easy to do, perform way better than most React-based abominations, and are as old as the web. If anything, software is plagued by these "research projects" that are mostly just to scratch smart people's itches.
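For what it's worth, "server rendered pages are still easy to do" can be shown in a handful of lines; this sketch uses Flask (a real, long-established library), with the route and template contents made up purely for illustration:

    from flask import Flask, render_template_string

    app = Flask(__name__)

    @app.route("/")
    def index():
        # All rendering happens on the server; the browser just receives HTML.
        return render_template_string("<h1>Hello, {{ name }}!</h1>", name="world")

    if __name__ == "__main__":
        app.run()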
Software margins are good, especially compared to physical things. However, the marginal cost is far from zero. It scales with # and variety of users. Today, all software comes with complex dependencies.
Take for example any mobile app. Apps require constant upgrades to keep up with the hardware and software changes on the platforms. You can’t just build an iPhone app and leave it alone to be enjoyed by people. I’ve tried; within a year or two there will be changes that require developer work. If you don’t keep it maintained, it will start to crash and function poorly. Apple, for example, tracks everything and will start with de-boosting search results for your app and end with removing it from the platform entirely.
Google is the same. I’ve tried; I built a Top 25 RPG and got busy with other things. It went from Top 25 to deplatformed in less than 5 years because unmaintained software just doesn’t work in most cases today.
Software is more complex now. All software is a conglomeration of lots of other software: frameworks, platform tools, libraries, APIs, etc.
Another example: Flash
Another example: All the AI software being written on top of the OpenAI API will be broken in a year or two as they roll new versions of the API and deprecate the old.
Software doesn’t just work anymore. The platform that executes it is constantly changing.
> You can’t just build an iPhone app and leave it alone to be enjoyed by people. I’ve tried; within a year or two there will be changes that require developer work. If you don’t keep it maintained, it will start to crash and function poorly
My favorite is when a new Apple update breaks your app, so you identify where the issue is and make a small update, but now Apple rejects your update because of some other arbitrary guidelines it's changed, so you then have to start down that rabbit hole.
> Software doesn’t just work anymore. The platform that executes it is constantly changing.
It depends on the software. But where this is true, it's not because of some innate nature of software, it's because of business decisions software companies have made.
Unlike hobby software, where you could reasonably write everything from scratch just for fun, or, say, write it in 6502 ASM, which hasn’t changed at all in 50 years and won’t change in 50 more, in modern business software you use common frameworks, APIs, and platforms. All of those mean the software you write will fail as those dependencies and platforms change and age out.
Business software has to be built on common building blocks because as a business you need to be able to hire people who can work on it. So things like Java and node and react and dozens of APIs exist which creates dependencies and requires ongoing maintenance as those components continue to change.
The problem with software's non-physical nature is that it has runaway market dominance issues. Software, especially software that interacts with other software, tends to be either open-source maintained by a "community" or a thinly veiled world domination plan.
Low barrier to entry is really important for new software. So it’s this struggle with some orgs trying to increase lock-in (Microsoft, Oracle, etc) and a constant stream of new products taking off, dominating the world, and getting knocked off themselves.
Making an excellent product is hard, but what is really hard is maintaining it for years and decades afterwards.
Maintenance, addition of new functionality, bugfixing, porting to other platforms etc. easily takes 10x-50x the time of the initial release, and eats the vast majority of the developers' time and energy.
This is where "not being paid for your work" translates into abandoned projects.
Plenty of people are using copies of Word, Powerpoint, and Excel 2003 just fine, which received literally zero 'maintenance' for at least a decade or more depending on personal preferences.
For most software that can be sold in a box, without an attached cloud service, this approach works.
EDIT: Also some fraction would be using them on computers that literally haven't been upgraded or connected to the internet for a decade or more.
It is amusing that your argument for software not needing “maintenance” is pointing out 3 pieces of software that had each received 20 years of maintenance by the time they reached the year you picked, 2003.
I think you are confused because the 2003 version of those products had already had as many as 20 years of maintenance, in the form of prior releases upon which they were based. Word was first released in 1983 and Excel in 1985.
Did the development of Excel 2003 not benefit from all of the work done in the previous versions? Even if it had been a from-scratch rewrite, which it was not, it would have still benefited from the design iteration and experience of the development team in solving the same problems in previous versions. Excel 2003 was not a first-release that was suddenly a refined product that would be useful for years without additional maintenance. I used Excel a few times in the 80s, and regularly through the 90s and early 2000s on Macs and Windows and wrote quite a bit of VBA for them professionally.
What part of my claim is bizarre? My claim is simply that Word and Excel had 20 years of history prior to the 2003 release. Your claim that it was originally written on a different OS is not really a reasonable counter-argument. The Windows version of Excel came out 2 years after the Mac version, so that only knocks the history of the "Windows" version down to 16 years.
By the same logic someone could claim that 1985 Excel was in fact based on the preceding 20 years of mainframe and minicomputer software and the very first spreadsheet-esque programs.
And that those were in turn based on the ideas of Von Neuman & co. from 1945 of tabulating numbers efficiently, and so on and so forth in roughly equivalent leaps all the way back to the first abacus.
It's so reductive of a perspective that it's self-defeating, since no human being, including you, could ever actually comprehend the entirety of technological development, or even just the post-1945 developments.
I think there is a pretty clear line between attributing the quality of a piece of software to two decades of prior development at the same company, with its own continuity of developers and institutional knowledge, and the general benefits all software gains from the industry advances it builds upon.
No it's not a clear line at all. In fact Lotus 1-2-3 was literally the direct predecessor from IBM in terms of everything but the brand name and some design choices.
Microsoft also makes Windows, and Windows takes backwards compatibility very seriously.
Even if they don't work on maintaining Office 2003 directly, they indirectly work very hard making sure every subsequent version of Windows does not break Office 2003.
No, they are perfectly usable and functional even on Windows XP or Vista or 7 computers that haven't been touched or connected to the internet since 2012.
That's not backward compatibility then - those are the systems it was made for (Windows 7 would then have been made backwards compatible for Office 2003).
It's backward compatibility if Word 2003 runs on the later Windows versions - like Windows 10 and 11. I don't know the answer to that, but I'm sure someone here does.
> Plenty of people are using copies of Word, Powerpoint, and Excel 2003 just fine
Unless they're also using computers and OSes from 2003 (spoiler -- they're not because those OSes wouldn't work with today's internet), those people are benefiting from untold efforts in the meantime to maintain their OS so it has that compatibility with 20 year old user space code.
if you think those aren't receiving maintenance you're not paying attention or are ignorant as to how hard it is to keep a complex app compiling as operating systems move forward.
Not receiving new features is VERY different from not receiving maintenance. It is wholly implausible to believe that there has been zero energy spent on keeping those codebases working in the past 10 years.
I don't think you understand. Office 2003 (or earlier) and similar products aren't constantly phoning home for updates like more recent software. Millions of people have had a single 100% static binary for these programs running on their computer for many years. The ability to phone home, if it exists at all, may even be broken or disabled.
This is in fact how all software worked until, I don't know, about two decades ago? Things being patched was a big deal, a voluntary manual process, and didn't happen often. The update would even have a well-known name like "Service Pack 2".
The idea that all software must be constantly maintained is recent, and the assumption that it is necessary is mostly self-imposed by the software business. Users don't share this assumption, and in fact on many products, updates are viewed mostly neutrally to negatively, other than perhaps critical security updates on products that are used in connection to the internet or untrusted data.
That kind of rhetoric doesn't fly too far... Your original point was
> Plenty of people are using Word, Powerpoint, and Excel 2003 just fine
Are you claiming that a reasonable majority (for the sake of discussion) of these "plenty of people" are using Office 2003 on Windows XP machines??
I'd doubt it. More like there's plenty of people using old software in modern versions of Windows. The maintenance work, of course, exists and has been done indirectly, by Microsoft, in the development iterations of Windows itself.
If you also include Windows 2000, Vista, and 7 computers that weren't updated in the last decade, I think that would be a sizeable fraction of all Office 2003 users in 2023.
Whether or not they make up the numerical majority of all extant users is simply irrelevant to the point of 'Plenty of people'. It's easily many, many, thousands.
You answered your own issues. Don't open untrusted documents from the net. Not running while connected to the net seems moot, as the software doesn't directly access the internet.
Seems like issues even the most up to date software suffers from.
Support for Office 2003 ended in 2014. Close to a decade ago. No maintenance, no patches, no service packs, nothing. No energy expended working on that codebase.
Office 2016 is going EOL in two years.
That's from Microsoft themselves. They do not hide these facts or make it hard to find.
Unlike recent versions of Office, old ones didn't call home, and Microsoft doesn't really have an idea of how many copies of their software are still in use in some cases.
I find it mind-boggling that a simple program like a word processor has to be continually updated for decades. Just program it right once, for god's sake.
>I find it mind-boggling that a simple program like a word processor has to be continually updated for decades
Your assumption that a word processor is a simple program is something you might want to reconsider: at a low level, handling text rendering in a word processor is highly complex work. Besides text encodings regularly evolving and changing over the years, especially in the pre-UTF-8 world (but even with Unicode), there's also the reality that security threats evolve over time, and once threats are discovered, old code that once seemed fine becomes insecure and dangerous.

In computing the reality is constant change, driven by a regularly shifting computing environment, security fixes, bug fixes, increased computing power permitting new features, new ideas appearing, et al. Software will always be changing; that's the way things are, and there are good reasons for it. Trying to oppose that reality with an unrealistic model that doesn't account for the causes of change just leaves you misunderstanding the way the industry works.
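To give one tiny, hedged illustration in Python of why "just text" isn't simple: the same user-visible character can be one or two code points and several bytes, and naive length or reversal logic quietly breaks on it.

    s1 = "é"            # U+00E9, precomposed
    s2 = "e\u0301"      # 'e' plus a combining acute accent
    print(s1 == s2)                  # False, though they render identically
    print(len(s1), len(s2))          # 1 2
    print(len(s1.encode("utf-8")))   # 2 bytes for one visible character
    print(s2[::-1])                  # naive reversal detaches the accent

Multiply that by bidirectional text, ligatures, line breaking, hyphenation, and font fallback, and "simple" word processing stops looking simple.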
You vastly underestimate the complexity involved. Also, new attacks get discovered that were not even dreamed of 20 years ago. There is no "just get it right" when right is measured by what we know, and that keeps changing.
The traditional way to fund maintenance was to release a new and better version of the product. For example, all the releases of the various office suites from the days of MS-DOS up to Windows up to the cloud. If sales decline, sell to a competitor (good timing required) or close and switch to something else. A company that paid salaries for 5-10 years is still nothing to be ashamed of.
In the case of Apple, keep selling new hardware. I can't remember if they ever sold their software in the first years of Macs or if it was bundled with the hardware.
> I can't remember if they ever sold their software in the first years of Macs or if it was bundled with the hardware.
In the early OSX era they used to sell their office suite separately. Eventually it got bundled with hardware for free. They still sell some software, like Final Cut Pro.
An excellent product doesn't need maintenance if it doesn't rely on any online services. Once it's done, it's done. It does everything it needs and nothing it doesn't need.
Engineering projects usually have a finished state. Software engineering is no different, no matter how much the industry wants you to believe otherwise.
OSes also can be "excellent products". They don't need yearly updates, there's nothing inherent to them that would prevent them from being made perfect, finished and never updated again.
The only case when an otherwise perfect OS would truly need to update is when new hardware capabilities require OS-level changes to support. Sometimes it may be beneficial to expose these new hardware capabilities as APIs for apps to consume. But again, adding new APIs shouldn't break the existing ones. For example, on phones, this would include things like notched screens, fingerprint readers or multiple rear-facing cameras.
> Dependencies break.
Don't update dependencies. Pick one version that serves you well and stick with it forever. I'm serious.
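If you want to make "pick one version and stick with it" mechanical rather than aspirational, one rough sketch (in Python; the package names and version numbers here are hypothetical) is to record the exact versions the software was finalized against and refuse to run against anything else:

    from importlib.metadata import PackageNotFoundError, version

    # Hypothetical pins recorded when the software was declared "done".
    PINNED = {
        "requests": "2.31.0",
        "urllib3": "2.0.7",
    }

    def check_pins() -> None:
        for name, wanted in PINNED.items():
            try:
                got = version(name)
            except PackageNotFoundError:
                raise SystemExit(f"{name} is not installed")
            if got != wanted:
                raise SystemExit(f"{name} {got} found, expected pinned {wanted}")

    if __name__ == "__main__":
        check_pins()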
> Security updates.
It seems like we've already realized that writing code that deals with complex data structures received from untrusted parties in memory-unsafe languages like C is a terrible idea. If you exclude memory safety vulnerabilities, the attack surface shrinks drastically. You'd run out of security vulnerabilities pretty fast if you had any to begin with.
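As a rough sketch of that claim (in Python; parse_save and the file format are made up, hypothesis is a real property-testing library): in a memory-safe language the worst an untrusted file can do to a parser like this is make it raise a clean error, and the test below hammers it with arbitrary bytes to check exactly that.

    from hypothesis import given, strategies as st

    def parse_save(data: bytes) -> dict:
        # Hypothetical loader for untrusted files: reject garbage, never crash.
        if len(data) < 8 or data[:4] != b"SAV1":
            raise ValueError("not a recognised save file")
        size = int.from_bytes(data[4:8], "little")
        payload = data[8:8 + size]
        if len(payload) != size:
            raise ValueError("truncated save file")
        return {"version": 1, "payload": payload}

    @given(st.binary())
    def test_never_worse_than_a_clean_error(data):
        try:
            parse_save(data)
        except ValueError:
            pass  # rejecting malformed input is fine; corrupting memory is not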
> Your house/car need maintenance. Your cities roads/bridges/tunnels need maintenance.
Houses, cars, and road infrastructure are made out of atoms and exposed to elements and stress of our imperfect real world. They wear out. Code doesn't. In 100 years, the bits would be the same they are today (as long as you use a reliable enough storage medium).
I'd rather use an imperfect product that does a good-enough job instead of waiting for a perfect product.
The perfect OS doesn't exist yet. Right now, I'd rather use some OS than no OS.
Why doesn't a perfect OS exist? Good question. Maybe because the programming field is relatively immature, so we're still figuring things out and we don't apply formal verification to everything. Compare that to, say, architecture, where we can calculate how much weight a structure can withstand. Or the other way around: what do we need to do to support an X amount of load.
I guess the stakes are lower too. I wouldn't walk on a wobbly bridge, but I don't mind if a desktop app I use crashes occasionally under unusual circumstances. Critical software (say, aviation) is generally written with more care but it's still not perfect.
This all sounds fine hypothetically, but you might want to take a look around at the world for a while to see why it doesn't fit your model. Obviously your idea hasn't happened, and there are good reasons why this is the case that you could readily discover if you took a look at reality instead of your model of reality.
And? How do updates help any of this? Firewalls are a thing. Memory-safe languages are a thing. Unit tests are a thing. Fuzzing is a thing. And it is not an OS's job to protect the user from themselves (i.e. social engineering). If you've installed malware, you deserve the consequences and you will be more careful next time. It's okay for powerful technologies to require a minimum level of education.
Software engineering is like if a car was built and thus "finished", but the systems it depends on (like roads, and gas stations) changed every N years (with N < 10).
Imagine the gas stations (operating system) changed the kind of fuel they dispense every few years. No, by no means would a car (software) that is fully finished today be able to continue doing its thing tomorrow without ongoing updates.
This also happens in the real world; it's just that the changes play out over decades or centuries, so we as humans don't perceive them as well.
The fact that Microsoft spends a whole lot of money to avoid this is circumstantial. Apple doesn't so much, and at some point your finished software will stop working with newer MacOS releases if you don't update it for the newer system versions.
Linux is even more of a moving target. Good luck having a perfectly well working compiled program today, and trying to run it in 10 years time.
Is there any reason — other than "we're paying our graphic designers full-time salaries so we better get our money's worth" — why OSes have to change so drastically and can't be finished as well, only ever updated to add new APIs for apps and drivers to support new hardware features?
Security is probably the biggest reason. With attacks growing continually more sophisticated, it’s not enough to just patch holes as they’re found — you have to engineer entirely new systems to not be drowned in holes. This unfortunately has compatibility implications.
Look at macOS for example, which over the years has gained app sandboxing and mobile-like access permissions. Software pre-dating these additions that assumes that it has access to everything all the time will have its functionality impaired. Devs had to update their software to not make such huge assumptions and to handle no access cases gracefully.
So, how secure is "secure enough"? Android's security model is okay, and Google knows it, so they just keep redesigning the UI without substantial API changes because the updates have to be coming out with each lap the planet makes around its star.
> Devs had to update their software to not make such huge assumptions and to handle no access cases gracefully.
Sure. But at some point it will reach the "secure enough" state, won't it?
(Actually, macOS permissions work mostly transparently API-wise. Apps can request access explicitly so it better fits their particular UX, but the prompt would also pop up the first time the protected resource is accessed. No code-level changes are necessary to support this.)
> Android's security model is okay, and Google knows it, so they just keep redesigning the UI without substantial API changes because the updates have to be coming out with each lap the planet makes around its star.
Google is a bit of a special case I think due to their culture of using big projects as a means of climbing the corporate ladder. The only thing that could ever possibly result from that is endless churn.
> Sure. But at some point it will reach the "secure enough" state, won't it?
Maybe, I’m too much of a layman in the field of infosec to be able to say.
> (Actually, macOS permissions work mostly transparently API-wise. Apps can request access explicitly so it better fits their particular UX, but the prompt would also pop up the first time the protected resource is accessed)
True, but it’s still problematic if e.g. the user accidentally denies access unknowingly, which will result in the app producing seemingly nonsensical errors. For a good user experience the app needs to be able to tell the user what the real problem is.
The program's interface with its environment need never change: when you write your program as a pure function which only touches exactly the things it fundamentally needs to, you are using a pretty much finalized interface.
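A minimal sketch of that idea in Python (the function and its inputs are made up): keep the core logic a pure function over plain values, and push the environment-dependent parts (clock, locale, filesystem) out to a thin, replaceable edge.

    from datetime import datetime, timezone

    def age_in_days(born: datetime, now: datetime) -> int:
        # Pure: same inputs always give the same output, forever.
        return (now - born).days

    if __name__ == "__main__":
        # The only environment-dependent call lives out here at the edge.
        print(age_in_days(datetime(2003, 10, 25, tzinfo=timezone.utc),
                          datetime.now(timezone.utc)))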
> only ever updated to add new APIs for apps and drivers to support new hardware features
Sounds like it's not "finished" if it needs all these updates.
As for why change the window dressing, the market for style changes over time. Why do car companies change the look of their products? Why does the outside of a cereal box ever change? Do the inside of our houses today look the same as the 80s? The 70s? The 40s?
Are you arguing that Windows and MacOS should continue to look like their 1.0 releases?
> As for why change the window dressing, the market for style changes over time.
Software is a tool, a means to an end. It doesn't need to participate in fashion.
> Are you arguing that Windows and MacOS should continue to look like their 1.0 releases?
Yes. It should remain an option at least. Not the literal "1.0", but the version when a decent UX was figured out. For Windows in particular, that's clearly 95. I know people who used the "classic" theme in Windows 7 and earlier, which remained mostly unchanged from Windows 95, and are resentful of its removal in subsequent versions that they have to use in order to have support for modern hardware.
I wouldn't say that's very clear. There were still lots of refinements and redesign that happened as Windows started becoming more of a multi-user OS with NT and later XP. Just because there was a classic skin in 7 doesn't mean it was the same UX. Even with the "classic" skin there's pretty massive changes in the behavior and usability of most of the UI.
Maybe you find 95 to be the end-all be-all of design, but many don't. UI, and to an extent UX, is partially subjective and then also very different based on the user. Imagine a user who has only used an iPad gets dropped in front of a Windows 95 machine, would they consider it the peak of OS UI/UX design?
And as the product's target market evolves and changes so too should the software for what customers expect. Which brings me to...
> Software is a tool, a means to an end. It doesn't need to participate in fashion.
One could make the same argument of a bed or a couch or hell even a whole house. A house is just a tool, something to keep the environment consistent and shelter from the elements. A couch is just a tool to support a sitting human being.
It doesn't need to participate in fashion. And yet people are pretty dang picky about their furniture choices and paint comes in thousands of colors.
Any piece software competes in a market of software. Say there's two pieces of software with identical feature sets. One looks like an ancient Java Swing UI and has bad colors and overall just looks ugly, meanwhile the other looks nice and pleasing (insert your own ideas of "nice and pleasing" here). One is probably going to hemorrhage users over time, can you imagine which?
> Even with the "classic" skin there's pretty massive changes in the behavior and usability of most of the UI.
The windows themselves, the taskbar and the desktop all worked the same in 7 as they did in 95. Many of the changes made over that time came with settings to revert them — like that new thicc taskbar with icon-only buttons and window grouping.
> Imagine a user who has only used an iPad gets dropped in front of a Windows 95 machine
I'm sick of perfectly good desktop UIs getting redesigns which are compromised by the existence of iPads and other touchscreen devices. This just should not happen, period. Windows 95 UI is straightforward enough once you get the basic principles, which takes all of one hour of poking around. Microsoft didn't conduct all that research for nothing, after all.
What frustrated people about Windows 95 (and 98, and ME) when it was current wasn't the UI. The UI was nice. It was the inherent instability of the system itself due to its architecture. Same for classic Mac OS, it doesn't matter how nice your UI is if the system itself can be trivially crashed or locked up by a single misbehaving app because of cooperative multitasking and lack of memory protection.
> One could make the same argument of a bed or a couch or hell even a whole house.
All beds and couches work the same and look largely the same. You know a bed when you see one.
All buttons and text fields and window titles used to also look largely the same and everyone was fine with that. But then the plague of flat design happened.
Imagine being exhausted after a long flight, walking into your hotel room only to see white, textureless walls, floor, and ceiling, and multiple white textureless blocks of different sizes inside. You get to figure out which one is a bed, which one is a chair, which one is a toilet, and which one is a sink! How exciting! This is what modern affordance-less UIs feel like. A good tool shows how it's meant to be used by its form.
> And yet people are pretty dang picky about their furniture choices and paint comes in thousands of colors.
That same classic Windows theme was extremely customizable for that very reason. You could change all colors and fonts to your liking, and some people did! You could make yourself a dark theme way before dark themes became mainstream.
> Say there's two pieces of software with identical feature sets.
You mean the control skins are the only thing different between them, otherwise all UI/UX being identical down to the layouts?
> All beds and couches work the same and look largely the same.
It's almost like you've never been in a furniture store. They come in tons of different sizes, shapes, textures, colors, features, and more. There's not just a single couch model that you can then change the color on.
> The UI was nice.
I highly disagree. The taskbar would get filled up quickly and get squished for me since there was no grouping. The notification tray couldn't hide things. No virtual desktop support. The start menu was incredibly basic in form. No search from the start menu. Editing the start menu programs list was extremely non-obvious and not self explained. Navigating the nested start menu shortcuts was a huge pain. Installers just threw everything in the start menu leading to a ton of clutter. UI elements are miserable at scaling, in that there is zero scaling support. The taskbar was only on your primary display. Difficult to change default sound output device straight from the taskbar. No central place to see previous notifications. No jumplists. No Win+Number shortcuts to quickly swap between things on the taskbar. Alt+Tab prompt doesn't have window previews. I could go on and on and on and on and on about what I perceive to be massive failures of 95's UI/UX.
I didn't really care for the overall icon design throughout the OS and generally see the overall style in the OS as pretty basic, bland, and boring. Changing the color or setting the title bar font to Comic Sans isn't solving that.
In the end though, you probably disagree with a lot of these things. I think your couch is ugly and uncomfortable, and you probably think my couch is ugly and uncomfortable. UI/UX has some subjectivity involved. Not everyone agrees 95 was the ultimate software design unable to be improved upon.
Uh, look at curl. It is an excellent product, no doubt about it (or if you do, I wonder what your standards for excellence are), and yet we are here, at version 8.0, 27 years after its first release.
Edit:
"if it doesn't rely on any online services"
That is a big IF. How many things don't, at least indirectly? (e.g. by relying on HTTPS, which requires TLS, which requires keeping up with current cryptographic standards.)
If cryptography can't possibly be figured out once and for all and must remain a moving target forever, I'd separate TLS into a module that can be updated independently of the rest of the system.
Engineering projects have a finished state? So once they build a road or bridge or dam, nobody needs to touch it again forever? It's finished, right? No more work, ever.
Even in electronic hardware there's often continuation of design and refinement. Have you never seen a board with a revision number on it?
Real-world objects like these wear out. Code doesn't.
> Have you never seen a board with a revision number on it?
Of course I have. There's a difference though. You can't ship an electronic device that's unfinished with a promise to "fix it later". Yet this is what routinely happens with software these days. Also, if your device serves its purpose well, you'd probably have a "final" board revision with all flaws fixed. If you want to add features to an electronic device, you'd make it a different model, possibly sold concurrently with your existing one to serve people with different needs and budgets.
You just said "engineering". Bridges and roads are engineering as well, buddy. And it's not even just the wear; it's the continued refinement and upgrade of these structures, which is a constant engineering effort.
> Engineering projects usually have a finished state
This is the statement I'm addressing. And it's just not entirely accurate. Things change, assumptions get proven wrong, there's always a newer and better way to do something, etc.
Sure your widget was probably about as good as you could do at the time you first launched it, but several years later there's better components available. Or maybe a supplier stops making some part you were using. Or a few years later you start getting parts back failing early in their service life and need to make an update. What was once your finished state now isn't.
Software today is different from the CD era, where you bought a game or application and that was it. Nowadays, people expect the software to be maintained, kept up to date and always compatible with the latest changes (new OS versions, compatibility with other software, etc.).
Maintenance is the high cost of software, not building it. This is why I sell my products with a perpetual license but with paid yearly updates. I cannot work for free indefinitely as all the "lifetime" licenses promise.
I think there are a few interesting threads to pick at here.
First, some of these problems are created by software developers themselves. In particular, shoving in an online component where one doesn't need to exist basically guarantees that you will have recurring costs and the need for constant maintenance.
Second, Microsoft is much more careful about maintaining backwards compatibility than Apple. I can generally fire up 10+ year old software on Windows 10, no problem. The same is sometimes true on OSX/iOS, but often not. The increasing popularity of Apple products and the lower priority they place on backwards compatibility has definitely made developers' lives harder.
Having said all that, I don't think everybody expects constant updates. I think power users, especially, are used to running what works for them for long periods of time. You probably can't build the next Google on this, but a lifestyle business? Certainly. Just look at Pinboard and its lack of enhancements or UI overhauls - and that's an online service.
Yes. Software is a low-capital business and many people in tech don't want to believe it.
A few offices, MacBooks, and some data center space are very cheap compared to building a manufacturing plant.
On the other side, what tech people understand that the general public does not... is that software has a healthy dose of maintenance and operational costs when it scales. Not a massive cost, but higher than zero - which is what most MBAs think the maintenance cost is.
> In software the hard part is making an excellent product
I'd argue in all domains, the hard part is making an excellent product.
There are virtually zero real-world constraints you can leverage as excuses in the domain of software, other than the original idea was bad or you have really bad people around the idea. Most of the software ideas I have encountered in my career are fantastic. It's not hard to describe what a high quality product experience is like if you are a domain expert and have suffered the gauntlet for 30+ years. The part that always seems to go straight to hell is the implementation of the idea.
I suspect most software projects go bad because there are too many layers of separation between participants. In physical products, substantially more direct interaction is required to get things done. With software products, you can isolate everyone into different multiverses as long as they are pushing PRs to the same GitHub repo (and sometimes not even the repo is shared...). Over time, these silos ultimately ruin any sense of ownership and quality in the product.
It is quite tragic - while on one hand software is the most accessible form of human enterprise ever, it is also the easiest to do wrong. Having no constraints seems like win-win at first, but it is absolutely a double-edged sword. In my view, the best software company CTOs are the ones who intentionally add as many artificial constraints as they can to the technology and process. Do more with less. Force lateral thinking. Make the product people go back to the customer and say things like "we actually can't do that because of a technology policy" instead of pretending like an unlimited infinity of things is always possible.
Software is mostly a non-rivalrous good: https://en.wikipedia.org/wiki/Rivalry_(economics) although it becomes a little bit more that way when it's hosted, rather than distributed via downloads or something, depending on the load it puts on a server.
When you manufacture the physical widget, manufacturing tolerances mean that not every widget is the same. There are variations in the as-produced widgets.
You need a QA/QC process to identify units which are too far out of tolerance and either remove them from the pipeline or remediate them. You also need to track trends in the measured tolerances to proactively fix your production equipment.
In the software world, that’s trivially easy. Your CI pipe publishes an artifact and then every user gets a bit-perfect copy of that artifact. Your entire QC is just: Users compare the artifact’s checksum to the expected checksum. It essentially always matches because we use things like TCP to copy the data.
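For what it's worth, that check is small enough to sketch here. A minimal Python example, assuming a hypothetical artifact name and a checksum published alongside it by the CI pipeline:

    import hashlib

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    # "myapp-1.2.3.tar.gz" and the expected value are placeholders for whatever
    # artifact and published checksum your own CI pipeline produced.
    expected = "0000000000000000000000000000000000000000000000000000000000000000"
    if sha256_of("myapp-1.2.3.tar.gz") != expected:
        raise SystemExit("checksum mismatch: artifact is corrupt or was tampered with")
    print("artifact verified")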
The type of QA you’re talking about is also required for physical widgets.
Parent means quality control in the context of the supply chain.
Still wrong imho, since you need to at least maintain a zip file in someone's CDN, and those folks have to maintain their CDN QoS.
This logic has always bothered me a little and I’ve never understood why, until recently.
The fact of marginal cost results in a lot of software being written that otherwise never would have been. After all, the difficulty of solving a problem for myself often doesn’t offset the trouble of making a reusable solution. It’s only through having other people use it or pay for it that it becomes worthwhile.
Randall Munroe’s chart is incomplete because it thinks too locally.
> Software is easier to produce, sell, and distribute than any physical product.
This is exactly why people should pay for software: consumption of physical goods destroys the planet. Money spent on software can't be spent on destroying the environment.
Ban ads*, make people pay for content and software and save the planet. Win-win-win.
In an industry full of unchecked monopolists, piracy takes the role of providing a reasonable price ceiling at which people switch away from bad but monopolized products.
I'm a bit confused - you subscribe to one developer, and then get the benefit of being subscribed to all?
What's the incentive for a developer to sign up to this then, if they don't get a share of your subscription when you use their service? Isn't this a bit like asking Disney+ to give all Netflix subscribers access with no compensation?
The difference this is supposed to make is that currently most people don't pay for free software. I don't for example. That is because I don't need to. This system is supposed to make more people pay, which should mean that all developers get more money. Giving access to someone who subscribes to someone else is part of what makes this work and if the developers can accept that, they should all benefit from it.
But I don't get any $ from it unless they sign up on MY site, right? Since there's no sharing mechanism.
So I don't see how joining in would benefit me - if anything I'd lose a bit of revenue from people who would have paid and now find they don't need to because they're signed up for some other product which I have no hand in and no revenue from?
> But I don't get any $ from it unless they sign up on MY site, right? Since there's no sharing mechanism.
Exactly.
> So I don't see how joining in would benefit me - if anything I'd lose a bit of revenue from people who would have paid and now find they don't need to because they're signed up for some other product which I have no hand in and no revenue from?
It would not benefit you if the average person paid for multiple free software projects. In that case, they would only have to pay for one instead of multiple.
I don't think that's the case though, so this solution should make more people pay for free software and that should benefit the developers on average.
There does need to be some way for ordinary users to pay something to somewhere in a single convenient way, voluntarily and in voluntary amounts, that somehow ends up being pooled and distributed to or otherwise benefitting all the 37,000 developers and projects whose free work they use all day every day.
This isn't it.
I donate a little to the EFF, monthly automatic, and a few other things irregularly as I feel particular gratitude. It leaves a million people unaccounted for, but all you can do today is pick a few things that matter to you and let others get the others.
And/or pay back/forward by contributing a little work of your own to the commons which I also do, but you can't expect most to do that, and I don't claim mine is valuable. Actually come to think of that, the reason I work on the things I work on is mostly because I just want to, so maybe most of those million are fine and there's no problem. But come to me with any kind of demand, well, I guess that's when paying enters the chat.
One such service that distributes payments could sell subscriptions in this system. That's one of the ideas I have had all along with this project but I guess I forgot to write it down; payment distributors should be one of those you can subscribe to.
I don't get it. I also see other comments not getting it so I don't think it's just me.
Is this like Kindle Unlimited, where someone pays a single subscription and gets access to all content providers on the platform (in this case the content is software), and creators get a proportion of the subscription fee based on how much a user used an app? So e.g. $10 per month, I use FooReader 90% of the time, so they get $9.
Idk, even if I am not getting the details, I don't think any collective approach to apps is going to work. Unlike other industries like movies or music, products in software are very different from each other and are consumed in a variety of ways (library vs end-user app) that have a lot of complicated nuance (in terms of licensing and company goals).
> where someone pays a single subscription and gets access to all content providers on the platform (in this case content is software), where creators get a proportion of the subscription fee
It is like that, except that users buy the subscriptions directly from the developers. 1Sub doesn't handle any money. This also means that the developers get 100% of the money (except for any transaction fees depending on payment method).
I am super confused about the concept. I pay "someone", of my own choosing, and I get access to...what, exactly? "everything"? What is that? What incentive do the developers that I'm not paying have to give me something?
> Pay to download or for other services: Not worth it; users can find the software somewhere else and they don't need your other services.
I also reject this premise. My evidence being the trillions of dollars spent annually on software and other services.
What you get access to is everything that is protected using this site. Anyone can create paywalls. Here is an example of a link that only lets subscribers view this comments page:
> Pay to download or for other services: Not worth it; users can find the software somewhere else and they don't need your other services.
> ...
> The user subscribes to a developer of their choice and in return, all developers (and everyone else who wants to) can give that user some kind of benefit, like giving them access to downloads
And what happens when you release a new version? Someone will have to be the first to pay, and most people who want to upgrade immediately will also pay the day it's released instead of waiting for some sketchy dude to upload the executable somewhere else.
People should not pay for software - average Joe should have all kinds of software basically free.
Now you ask "who should pay for development": corporations, companies, or foundations, where people could still donate but would not have to, and where corporations and companies pay salaries and provide end users with services.
Solo devs should not write and maintain anything without getting paid.
Yes, it is "corporate dystopia", but on the other hand, when I see all the rants and horror stories from OSS maintainers and companies that don't want to contribute, it seems the only reasonable way. The corporation/company/foundation pays salaries for devs and provides people with software, charging for services like keeping data or any other actual services connected to the software they provide, or, in the case of foundations, funded by donations.
This is like the musician problem. There are so many people willing to play for pretty much nothing or for free that it's very hard for the average musician to make money. On the consumer side, why should you always pay for music when so many people are doing it for free? There's an oversupply of eager musicians making music
Same with OSS development. Why should you pay for something if people just do it for free? Doesn't matter who the consumer is.
> Solo devs should not write and maintain anything without getting paid.
But they do, and they will regardless. And until they stop, nothing will change. There's an oversupply of eager coders coding for free
Companies will pay (their own developers) once the OSS solution doesn't work or needs extra extensions that don't exist.
I would amend that with "average Joe should have all kinds of software basically free under a FOSS license".
And how about "a corporation should not be allowed to use any software without paying for it"? It should essentially be treated like a tax. If the software has no price, then the price will be evaluated by some metric (could be lines of code, but the company's revenue could also be considered).
The whole website is very confusing. Why would a user want to subscribe to only one developer? Why does subscribing to one developer give access to all developers? Why not put yourself in the middle and offer a subscription to "1Sub.dev" and give users the same benefits?
What does it mean to "give access to downloads and other resources"? What kind of downloads and resources?
Can you give some examples of services that exist that you think don't work well enough?
> Why would a user want to subscribe to only one developer?
Subscribing to one is easier than subscribing to many. There is less friction and the user gets more for that subscription.
> Why does subscribing to one developer give access to all developers?
All developers (and everyone else) can add subscription checks to whatever they like that will let only subscribers pass.
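To make that concrete, such a check could be only a few lines. The following is a purely hypothetical sketch; the URL, token, and response fields are made up for illustration and are not the actual 1Sub API:

    # Hypothetical subscription gate; the endpoint, parameters, and response
    # shape are invented for illustration and are NOT 1Sub's real API.
    import json
    import urllib.parse
    import urllib.request

    def is_subscriber(user_token: str) -> bool:
        query = urllib.parse.urlencode({"token": user_token})
        with urllib.request.urlopen(f"https://example.invalid/check?{query}") as resp:
            return json.load(resp).get("subscribed", False)

    def serve_download(user_token: str) -> str:
        if not is_subscriber(user_token):
            return "402 Payment Required: subscribe to any participating developer"
        return "here is your download link"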
> Why not put yourself in the middle and offer a subscription to "1Sub.dev" and give users the same benefits?
Then they would all have to pay me. I don't want that. Someone could have something against paying me. Maybe the payment methods I offer don't work for someone. Distributing payments seems like the only right thing to do.
> What does it mean to "give access to downloads and other resources"? What kind of downloads and resources?
It could be anything. Here is an example of a paywall for this comments page that will only let subscribers follow the link:
I'm very confused about how the distributed payment system would work. How much would a subscription cost for a user and how much would a developer see of that?
> I don't know what kind of services you mean.
You write on your website: "Why this is better than the alternatives"
If you could give examples of the alternatives that you think don't work then it might be helpful to see how your service differs from those.
> I'm very confused about how the distributed payment system would work. How much would a subscription cost for a user and how much would a developer see of that?
Developers could sell subscriptions for any price they want. They have a limited number of subscriptions they can sell, so there is supply and demand that influences the price. Users buy directly from the developers, so the developers would get 100% of the money (minus possible transaction fees depending on payment method).
> If you could give examples of the alternatives that you think don't work then it might be helpful to see how your service differs from those.
The alternatives are mainly the ones listed on the page above: buying things from developers in the usual way and donating. There are also other systems that work in a more centralized way, where you pay the system and it then distributes the money to the creators; this system differs from all of those in that it doesn't handle any money.
If you want an example, there is liberapay.com that seems to be donations with centralized payments. My system tries to be better than that because:
- Payments are less voluntary because you get access to stuff when you pay.
- Payments are decentralized so there can be more freedom of choice in how you pay.
- What do you expect open source developers to charge at minimum for access to the catalog in order to make this make sense to do at all?
If people subscribe once and access everything, it seems like they'd need to charge a lot to make it a worthwhile co-op to participate in. It feels like the amount they would have to charge would become pretty financially restrictive to access the code and not in the interests of someone who wanted to open source in the first place...
- How does this handle the scenario of a developer disappearing?
Does everyone who had access through that developer continue to have access?
It seems that since payment processing is handled by individual developers, people would no longer have to pay for access to the whole catalog. Does this now mean that over the long term you are handling an ever-increasing supply of people with access who do not pay but can transfer their access to others for free?
- How does this handle the scenario of developers with subscribers who are supposed to pay a recurring payment but have stopped?
Does the developer have the ability to remove access to the catalog from specific subscribers?
If the developers have the ability to remove subscribers at will, doesn't this disincentivize paying at all because paying gives you no security in your access you just bought? What is your plan to arbitrate this without access to primary payment information to confirm who is right?
- It seems like although decentralized, this approximates to the journal model but for code? Is this your intention?
> - What do you expect open source developers to charge at minimum for access to the catalog in order to make this make sense to do at all?
> If people subscribe once and access everything, it seems like they'd need to charge a lot to make it a worthwhile co-op to participate in.
I have thought about this a bit and yes, when this thing grows, the subscriptions will be worth more and more.
I haven't really done any calculations though because it's really hard to know what things will be like.
Anyway, let's try one:
Let's say there are 100 developers (individuals) and each developer wants $4000 per month. Then if we want a subscription to be $5 per month, or maybe we could allow it to be $10, the number of subscribers per developer would have to be 100 * 4000 / 10 / 100, or just 4000 / 10 = 400. So I guess as long as the number of subscribers is a few hundred times the number of developers (individuals), it could work.
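Spelled out as a quick back-of-the-envelope script (same assumed numbers as above):

    # Back-of-the-envelope check using the assumed numbers from the comment above.
    developers = 100              # individual developers in the system
    income_per_dev = 4000         # target $/month for each developer
    subscription_price = 10       # $/month paid by each subscriber

    total_needed = developers * income_per_dev               # $400,000 per month overall
    subscribers_needed = total_needed / subscription_price   # 40,000 subscribers in total
    per_developer = subscribers_needed / developers          # 400 subscribers each

    print(per_developer)  # -> 400.0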
> - How does this handle the scenario of a developer disappearing?
Interesting question; I have not thought about that. Developers register and unregister the subscriptions so hopefully they would unregister their subscriptions before they disappear.
If they don't do that, it could be forced by the system but there would have to be rules about that then so everybody knows what will happen.
> Does the developer have the ability to remove access to the catalog from specific subscribers?
Yes, they can register and unregister subscriptions as much as they want.
> If the developers have the ability to remove subscribers at will, doesn't this disincentivize paying at all because paying gives you no security in your access you just bought? What is your plan to arbitrate this without access to primary payment information to confirm who is right?
That is between the buyer and the seller. If you buy something and you don't get what you bought, you would try to solve that with the seller.
Of course people can complain to 1Sub too, and then maybe the other developers will lose trust in that developer and they can be kicked out.
> - It seems like although decentralized, this approximates to the journal model but for code? Is this your intention?
I have not thought much about the journal model but I can see how this is similar.
My main vision has been a tax that everyone who wants to be a citizen pays, so that they can then enjoy things that are not sold directly to people.
I feel like the overall system should be clearer. For instance it's not clear how the developers get credits or whether developer accounts are somehow authenticated as representing a genuine entity.
In the opening statement of the site the idea of merely trusting the user without copy protection is completely ignored, but without more details it's not clear if the proposed system is any better.
> As a developer you sell subscriptions independently; you set the price, handle the money and do all of the interactions with the customer. Then you register the subscription in the system by using a simple API.
What prevents me, as a rogue actor, from just adding all my mates to the database without them paying me anything? Would they get access to all other software from the developers who take part in this affair?
> What prevents me, as a rogue actor, from just adding all my mates to the database without them paying me anything? Would they get access to all other software from the developers who take part in this affair?
If you are not a trusted developer in the system then the API key prevents you.
If you are a trusted developer, then you can give away as many subscriptions for free as you like but you only have a limited number of subscriptions to sell so you will not make as much money that way.
> Why would developers use this over just asking for money?
More people should want to pay if they use this system because if you just ask for money, you either don't give anything in return (donations) or you give access to your stuff, but with this system, the user gets access to everything that uses this system.
> What are you going to do about people asking for 1 cent to join the network?
Developers can sell subscriptions for 1 cent but since they have a limited number of subscriptions to sell, they will not make a lot of money that way.
If you mean 1 cent to join as a developer, that is free; it's about trust. This should be a cooperation between developers who trust each other.
It means that there is a supply/demand that influences what price the subscriptions can be sold for.
Developers have a limited number of "credits" that can be turned into subscriptions.
They can get more credits by making people subscribe through their links.
There is also a plan that the credits will be multiplied and grow with time in order to keep the prices on a sane level.
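As a rough illustration of that bookkeeping (the class, starting balance, referral bonus, and growth factor below are assumptions made for the sake of the sketch, not how 1Sub actually implements credits):

    from dataclasses import dataclass

    @dataclass
    class DeveloperAccount:
        # Hypothetical credit bookkeeping for the scheme described above.
        credits: float = 10.0          # unsold credits that can become subscriptions
        active_subscriptions: int = 0

        def sell_subscription(self) -> None:
            if self.credits < 1:
                raise RuntimeError("no credits left to sell")
            self.credits -= 1
            self.active_subscriptions += 1

        def referral_bonus(self, new_subscribers: int, bonus: float = 0.5) -> None:
            # Extra credits when people subscribe through this developer's links.
            self.credits += new_subscribers * bonus

        def periodic_growth(self, factor: float = 1.05) -> None:
            # Credits multiplied over time to keep subscription prices at a sane level.
            self.credits *= factor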
> So someone can subscribe to a 0.99/month product and use several 19.99/month products?
Yes, a developer can sell the subscriptions for very cheap but then they will probably quickly run out of subscriptions (there is a limited number) and then wish they had sold them for more.
Also, the subscription is not really tied to any product; think of it more as a subscription to free software in general, that can be sold by different resellers (the developers).
I know people hate subscriptions but honestly I quite like them. I can usually pay for one month, at not a very high price, to use software when I need it. The problem is to be solved by developers: they should more often offer the option to buy a lifetime license, or allow you to use the software for life after you have paid for one year of subscription (without updates). It's just not profitable enough, I believe. Maybe we will have appropriate laws in the future; that's the solution I would like to see.
Subscriptions just become unmanageable when you have too many. I do like your example of some software you just need for a month, but I don't think that should be a subscription then. That should just be paying for one or two months upfront.
The issue that I have with subscriptions is, as I said, they become unmanageable and they are frequently dishonest, betting on you forgetting to cancel them. You do a one-year subscription for something, forget to cancel in time, and now you're stuck paying for two years.
Both SaaS and many other types of subscriptions really need to drop the recurring part and just let you "rent" the product. That seems more honest to me.
I just use a single-use card whenever I don't use the App Store for a subscription. That way they won't charge me again, and if I end up using and liking the software I will remember to change the card or provide another single-use card.
Paying for one month every once in a while for software that would otherwise be very expensive is about the only benefit I can see for subscriptions. For instance, Apple seems to be moving Final Cut Pro to a subscription model, and a $5/mo subscription is pretty great if you just need to use it once or twice or very sporadically.
Subscriptions always feel a little scummy to me, due in part to the way they're often advertised. I think that "Only $5/mo!" followed by tiny print saying "Billed annually" should be illegal, because it's clearly deceptive advertising.
Sounds like onlyfans/ gumroad business model for developers...
No doubt some developers will benefit from it (maybe 10%), but it will leave the world less open in my opinion.
Imho, the "just buy it" or "patreon to access the development discord/forum/whatever for OSS" seem like the best approaches. Like, I'm in Mastodon's patreon, and I'm happy to buy software. And while it may sting, I'm okay with "major release = new version buy it again". Not fond fond of installed local non-cloud software in the SAAS business model.
It's exactly Patreon or one of its many competitors. The "subscribe to a creator and get special perks" problem is common and solved, but as you note the "CaaS" (creator as a service) model isn't for everyone.
A root of the problem is using economic models for physical items with digital goods and services.
IMO the most sensical low level* economic model for digital things would be one where you pay a really tiny amount every time you derive value from something. A fraction of a penny each time you play a song, each time you edit an image in some software, each time you visit a website.
There are a boatload of obstacles to getting to a model like this, but as a thought exercise it's really interesting to consider an alternate universe where this model got established instead of, say, everything being ad-based. Not only would it provide a model for monetizing software, it would also for example completely reframe DRM (making it both far more ubiquitous but also far less antagonizing to the user, since it would be aligned with what the user is trying to do instead of being at odds with it).
* The idea being that this low-level economic model would exist but for practical reasons (like overcoming human psychology) you might need to overlay a higher-level model like a monthly "unlimited consumption" subscription or tax.
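To make the thought exercise concrete, here is a toy sketch of such per-use metering; the price, names, and batching are assumptions, and a real system would need payment rails this deliberately ignores:

    from collections import defaultdict

    PRICE_PER_USE = 0.001  # a tenth of a cent per use; purely illustrative

    class UsageLedger:
        # Toy per-use metering: accumulate tiny charges per creator, settle in batches.
        def __init__(self) -> None:
            self.owed = defaultdict(float)  # creator -> amount this user owes them

        def record_use(self, creator: str, units: int = 1) -> None:
            # e.g. one song play, one image edit, one page view
            self.owed[creator] += units * PRICE_PER_USE

        def settle(self) -> dict:
            # In practice this would be rolled up into a monthly bill or subscription,
            # since charging per event runs into fees and human psychology.
            totals, self.owed = dict(self.owed), defaultdict(float)
            return totals

    ledger = UsageLedger()
    ledger.record_use("some-music-artist", units=3)
    ledger.record_use("photo-editor-app")
    print(ledger.settle())  # roughly {'some-music-artist': 0.003, 'photo-editor-app': 0.001}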
This is basically the idea that motivated "Bitcoin: A Peer-to-Peer Electronic Cash System"[^1]
"The cost of mediation increases transaction costs, limiting the
minimum practical transaction size and cutting off the possibility for small casual transactions [...]"
And more recently Brave, the browser tried to implement it.
"Crypto and DeFi are hard to use and the $330 billion digital advertising industry is failing users, publishers and advertisers. With Basic Attention Token and Brave we want to take Crypto to the next 1B users and solve the endemic inefficiencies and privacy violations hobbling the digital ad industry."[^2]
I personally think this is a beautiful idea; had it worked out as envisioned, the Internet could've been a very different and likely better place now. Pity that cryptocurrencies turned into what they are in their present condition.
Interesting to think about. However, for that to be feasible I believe the draconian "copyright forever" laws would have to have never happened. I'm against paying rent to corporations to access the work of dead people on principle. Or past say, fifty years even if they lived.
I think I'm in the same boat as you, but can you articulate the 'why' behind that sentiment? (saying it's "on principle" could also be a way to not have to address that question, haha)
As in, if someone created something and you derive value (utility, enjoyment, etc.) from it, what is the basis for at some point no longer providing compensation for that utility?
FWIW, I haven't come up with a completely convincing answer, and yet I still feel like you do! Maybe there is no firm justification for terminating compensation, but instead it's more of an idea instilled by the culture, that after X years, the thing you created becomes owned by society at large just for the greater good, or maybe in recognition that your work came about because of prior accomplishments from others, or that as a society we want ongoing creativity and not stagnation.
The main goal of copyright is to provide an incentive for the creator to continue creating. Permanent royalties are in direct opposition to that goal. If anything they’re an incentive to retire.
I usually consider myself a decently smart individual but damnit this has me questioning that...
I read through your landing page and your how-it-works page and I am still...confused. That it ends on a hand wavey "we haven't solved this part yet" statement does not inspire confidence.
As best I can tell you are going to take a lot of open software and gatekeep it behind a paywall but each user only has to pay once...to someone...and then they can access all of the software behind that gate. So you are trying to make an ecosystem of software that can only be accessed by people that have paid some money at least once?
> Sorry, there are no developers to subscribe to currently.
If you actually want adoption, more needs to be done than posting the thing you built and suggesting people use it. Building effective, self-sufficient marketplaces is tough. Benefit has to be seen on both sides from the get-go.
I wonder if the "tax-funded" model could work for software. The state raises money from the public, but the public determines directly, via usage (minutes spent with it) and usefulness (money gained), how much of that tax goes to which developer. Cut out the monopoly business middleman, but also remove any political moral meddlers in the various "round tables" that are omnipresent in public media systems.
The idea has problems though. How to pay for the background ("invisible") layers?
How to prevent "hyper-transparent citizens"?
Etc.
In spite of all the competition, SaaS pricing is not coming down. There are around 30 Calendly alternatives. However, if you check the price of these alternatives, they are not too far from what the market leader is charging. More on this at https://blog.neeto.com/p/neetocal-a-calendly-alternative-is.
Why do we insist on making software paid? Wouldn't it make more sense to work toward making software more stable, so I could decide to make a calculator app in my free time and have it somehow still be used 200 years later?
Software is stupidly simple to distribute, but for some reason one of the hardest things to keep. Obviously if we cannot use any software of the past, we are stuck with developers having to maintain old or new solutions.
Society is spending billions of dollars each year on complex hardware and software to make that distribution possible. Physical goods are the stupidly simple thing to distribute.
> Pay to download or for other services: Not worth it; users can find the software somewhere else and they don't need your other services.
If users can find the software elsewhere, then it must be cheaper or better if they don't want to use yours. If this is about pirating, then it's just a matter of time before they buy, unless the ransom for decrypting their personal files bankrupts them.
Un-ironically, I make a living from people who pay for my software. I have for 30 years, as a developer for hire, as an independent developer, and even from royalties. It's not hard. Make something useful, make it well, place it where buyers can find it, and price it in a way that makes sense.
People pay for scarcity, not utility. In economics, this is expressed as the water-diamond paradox. Software makers simply need to find ways to make some piece of what they sell scarce (managed workloads). Everything else depends on the conspicuous consumption of idealists; ie it doesn't scale.
Actually I'm seeing a big new wave of open source projects that you can host yourself, but can be used as SaaS if you are willing to pay. I'm always paying because I don't want to bother and because the devs have my /respect
> imagines a sally struthers charity commercial, but with random hipsters and nerds staring sadly at the camera, hoping that somebody, somewhere, will pay them as much money as they think they deserve
Speaking of software business models, I like the idea of charging money for convenience. As in, make the app open-source, but sell compiled binaries and maybe tech support.
Yep, as well as charging for support and consulting. Anything that has to do with developers'/maintainers' time should not be expected to come for free in FOSS projects. Unless the devs are happy to do such work for free ofc.
> By "people" are you excluding organizations such as governments, corporations etc?
If you mean "people" as in "A world where people pay for software", then no.
I think companies, especially software companies, would like to subscribe in this system if it gets big because if they have dependencies that require subscriptions, they probably don't want anything to get in the way for their employees.
This product names crucial issues with how software development is currently monetized, and then offers an alternative that... solves absolutely none of these problems.
Optional extras like 'downloads or other resources' are presumably digital and therefore do not solve the problem - folks can still pirate it. If that's not the point, then it is a donation, in the simplified parlance of the first paragraph of 1sub.dev.
And this all from a company/effort that has such lofty goals that the html title of the page is 'a world where people pay for software'.
This (how do you monetize software development / how do we e.g. let FOSS developers capture more than the current 0.0000000001% of the value they create) is an incredibly difficult problem and this effort sounds like some naive newbie took 5 seconds to think about it and thought: Yeah let's fix things!
At the risk of sounding like a crotchety old fart: Hoo boy if it was that simple, it'd have been solved already.
Alternative plans that work a lot better:
* The NPM ecosystem has a ton of software-as-a-service offerings, e.g. where you can use their site to serve as online tool to e.g. make documentation, to have their site host that documentation, etc. I hate this model (you get nickel-and-dimed and both companies and open source developers alike don't usually like having 50 downstream service providers who, if they go down or have issues, require you having to explain to _your_ customers what's going wrong), but it solves the problems this site names (you can't pirate this, and you get something of value for your money in return).
* Tidelift tries to provide security assurances and support: The payers don't just 'donate', they pay to just be done with the security issues with FOSS dependencies: Tidelift gives you software that scans all your dev work for all your deps and which versions you are on, and tidelift ensures not just that there are no major security holes in those deps, but also that the authors of those deps have made some basic promises about maintaining it in trade for real consideration (namely: money). Github sponsors and the like are more or less barking up the same tree. These setups also solve an unstated problem 1sub.dev tries to solve, which is: You tend to use _a lot_ of software; if you have, say, 600 dependencies (not crazy in this modern age of software dev), and you want to individually set up a 'deal' with all of em, one person has a full time job as they will have to renew over 2 contracts __every working day__ assuming all your subscriptions are yearly.
* Microsoft and co do it as a package deal: You pay one fee for everything they offer and aggressively legally chase down anybody that pirates.
* patreon and co grease the wheels of the donation flow by making it simpler and allowing developers to give something that's hard to pirate: T-shirts and stickers, mentions in the 'about...' page and so on.
* Some developers of FOSS, as well as _many_ commercial outfits, will accept money in trade for priority support.
All of these models have issues. But at least they actually aim to solve the problems. This attempt doesn't even begin to tackle the actual issues, unless I'm missing something.
As a 1million+ user FOSS developer who maintains the library primarily based on privilege (I have enough income to work for the roughly minimum wage I currently get for it, though I could have earned vastly more if I worked for a commercial entity for those hours) - I'm aware that this is not a good situation, that you need to sort out your finances separately just to be a good FOSS author. But, I don't see how 1sub.dev is going to add much compared to what's already there (patreon, github sponsors, FOSS aggregators like apache and eclipse foundation, tidelift, etc).
> offers an alternative that... solves absolutely none of these problems.
Here is how 1sub solves or remedies the problems with the mentioned methods:
- Pay to download or for other services: With 1sub it will be more worth it because you don't just get access to that software or that service, you get access to the software and services of all developers who participate in this system.
- Accepting donations: While 1sub keeps some of the voluntary aspect of donations, you also get something for your money.
> folks can still pirate it
Yes, the point of this is not to make it impossible to do anything without a subscription. It just makes the difference in convenience between subscribing and not subscribing bigger since there are more things that you get or don't get depending on whether you subscribe.
> this effort sounds like some naive newbie took 5 seconds to think about
Interestingly, I have thought about this for many years, and no idea I have had before, nor any solution I have seen, has felt as good as this one, because they always fail in that the user doesn't have enough reason to pay. The main objective of this solution is to give the user more reason to pay.
I think the biggest problem is the financial infrastructure.
We pay for software almost exclusively through digital means, but the fees are too damn high.
Imagine if transaction fees were zero.
Imagine if a piece of software you used cost 10 cents per month. Or someone's Patreon or GitHub sponsorship was 5 cents per month.
And then imagine if starting and stopping the subscription was intuitive and super easy with any digital payment method you happened to use.
I could see the floodgates open, and developers who currently get basically nothing would get a ton of small contributions that together would add up to quite a nice lump sum every month.
A former company I worked for started having a larger Indian userbase. We experimented with supporting them more and it would be similar to what you said - significantly lower prices for them. We chose to mostly ignore the Indian userbase and let them use the product as is without catering to them
The reality is that just because someone pays less doesn't mean they cost less to support. And then, if you support a large number of cheap users, it's even more expensive to support.
As a business, you'd rather have 10 customers paying $10 each instead of 100 customers paying $1 each. Larger businesses can overcome this with economies of scale, but smaller businesses cannot.
Support includes things like "I paid and my account doesn't work". In addition, you simply can't provide a good service without support. Being able to answer questions like "I'm trying to do X with your tool, how do I do it?" leads to better customer engagement and retention. It's part of the cost of doing business. The marginal benefit of doing that to microrevenue customers is not worth it financially, and as a result, you will never get as good of engagement nor retention from them.
One of the strengths of small business over a big co like Google is your support is NOT automated and you take the time and care to talk to and answer your customer's questions. You can't do that when you charge 10 cents a customer
On top of that, you still need to market/advertise to those users.
It's less time consuming, causes less friction, and is more profitable to just charge $10 instead.
From experience I know this truth: Somebody who won't pay $5 per month will never pay $1 per month nor will they ever pay 10 cents per month.
Something in the mind switches and people turn full on psychotic when it comes to paying for digital services, and there's not much that you can do to fight it with logic.
Just look at Github projects for some really good stuff that are used by thousands or millions. At most the developers will have received 10-20 donations. Almost all of the commenters here on HN have never donated a single dollar to the projects that they love and enjoy.