What gets me is that much of the OSS/Linux ecosystem consists of thousands of lashed-together piles of code written by independent and only very loosely coordinated groups, much of it coded and lashed together by amateurs for free, and it is still more robust than software created by multi-billion dollar corporations.
Perhaps one reason is that OSS system programmers are washing their dirty linen in public; not a matter of "many eyes make bugs shallow", but that "any eyes make bad code embarrassing".
Just for example, I'm planning to make one of my commercial projects open source, and I am going to have to do a lot of fixing up before I'm willing to show the source code in public. It's not terrible code, and it works perfectly well, but it's not the sort of code I'd be willing to show to the world in general. Better documentation, TODO and FIXME fixing, checking comments still reflect the code, etc. etc.
But for all my sense of shame for this (perfectly good and working) software, I've seen the insides of several closed-source commercial code bases and seen far, far worse. I would imagine most "enterprise" software is written to a similar standard.
> it is still more robust than software created by multi-billion dollar corporations
OSS software has few to no profit incentives. It is written to do something, not to sell something. It also has little time pressure: if a release slips, there is no impact on quarterly numbers. Commercial software is not an engineering effort, it is a marketing exercise.
When Komatsu decided to go after Caterpillar's market, they set quality as their first strategic intent. They then made sure that later strategic steps were beholden to that earlier one.
The XP/Agile manifesto emphasized 'working software', which in theory was to have a similar intent.
But the problem with manifestos is that people package them and sell them.
Agile manifesto signatories like Jeff Sutherland selling books with titles promising twice the work in half the time don't help.
OSS has a built in incentive to maintain quality, at least for smaller projects.
Companies could too, but unfortunately the management practices that made people quite successful become habits that are hard to change, even when they want to change them.
Hopefully these big public incidents start to make the choice to care about quality an easier sell.
The point is that quality is still an important thing for profit-oriented companies, but it is easy to drop it and only notice after it is too late.
Showing that it aligns with long-term goals is possible, but getting people to act on that is harder.
I disagree. OSS is only ahead if there is no money in it, like new programming languages. Whenever it is profitable, OSS projects just cannot compete with professionals working full time.
OSS is usually reinventing the wheel free from commercial pressures (Linux, GNU, Apache). Or the projects are former commercial products (Firefox, LibreOffice, Kubernetes, Bazel).
You're comparing household name with household name. Commercial software has a marketing budget, but free software spreads more by word of mouth (or by association with a big and professional organisation like GNU), so that's an apples-to-oranges comparison. GIMP isn't very good, as free software image editors go: Script-Fu, plugins, or UI familiarity are basically the only reasons to choose it these days.
I'm curious as to which ones they do not have compared to Adobe products.
The only one I can think of is proper material layer painting in Blender; you can get there with addons, but I haven't found one that's as good. Genuinely the only thing that I miss, and I do this full time.
Darktable has some features that RawTherapee doesn't, and vice versa. I imagine that some of that stuff isn't in the Adobe software. (I've heard that recent versions of Lightroom have removed local file management support, which both these programs still have – though don't quote me on that.)
Linus would disagree, and there's a reason why the kernel keeps a 1000 ft wall up at all times. He would outright reject the majority of userland code, with good reason. It's a miracle that anything outside of the kernel works, and it shows very often. People seem to forget how often distros shit the bed on every update and how frequently people have to rebuild the entire fucking thing to get it to work again. Updating is an incredibly stressful procedure for production anything. I haven't even gotten to audio and video - that's a separate Fuck You ft. Wayland mix. So no, Windows is the gold standard in many ways, precisely because it's mercilessly fucked from every which way and manages to run every day across a bajillion machines doing critical things. I don't care about who is being financially compensated and who isn't - the depth of decision making shows itself in the musings of Raymond Chen and others, and that level of thorough thinking is very rare, even in the OSS world.
I left Alpine off of my list because the only place I've ever seen it used is inside of containers, which usually don't run their distro's init system at all.
Is it? Where Linux = Red Hat or Ubuntu in the real world, Ubuntu managed to force snaps and advertising for Ubuntu Pro down everybody's throats, and the Linux Foundation was utterly helpless against that.
If one was fed up enough with Ubuntu, they could switch to Debian or Mint; all their programs would still run, and their workflows would likely not change much.
But for Windows you'd have to switch to macOS or Linux, neither of which is going to easily support your software (unless it happens to run great under Wine, but again, that's a different tool).
I feel like the argument of "don't like X, use Y" often misses the point when people are expressing their pain with OSS. I find X painful because of reasons A, B, and C, so I take the advice to switch to Y, am happy for half a day before I try to do anything complex, and find pain points A', B' and C'. It's often a carousel of pain, and a question of choosing your poison over things that should just work, just working.
Just as an example, I spent a couple hours yesterday fighting USB audio to get linear volume scaling on my fresh Debian stable install, and I'm not getting that time back, ever. Haven't had that sort of issue on more opinionated platforms like Windows/macOS in living memory.
Linux is a more complicated and more powerful (at least more obviously powerful) tool than Windows or macOS. Daily Linux use isn't for everyone. It can be a hobby in and of itself.
The knowledge floor for productivity is much higher because most Linux projects aren’t concerned with mass appeal or ease of use (whether or not they should be is another discussion)
Debian, for example, is all about extreme stability. They tend to rely on older packages that they know are very very stable, but which may not contain newer features or hardware support.
The strength is extreme customization and control. Something that's harder to get with Windows and even harder to get with macOS.
Repeat after me:
Debian STABLE does not mean Debian WORKING, it just means that major versions of most of the software will not change for this release.
There are many things in Debian STABLE that are not working and that will continue to not work until next major release (or longer).
I think the STABLE in Debian's name is the biggest mislabeling of all time.
The things I build because I'm paid to, in ways I disagree with because MBAs have the final say, are terrible compared to my hobby and passion projects. I don't imagine I'm entirely alone. I hope there's always communities out there building because they want useful things and tools for doing useful things that they control.
I think deadlines are probably also a big factor. Many OSS developers build their projects in their free time for themselves and others. So it could be a passion project where someone takes more pride in their work. I'm a big advocate for that actually.
Much commercial software feels like it is duct-taped together to meet some manager's deadline, so you feel like 'whatever' and are happy to be done with it.
I do not think "amateurs" is a good description of the people writing the code - most will be highly technical people with lots of experience. And "loosely coordinated" can be applied to many "corporations" as well.
I think it matters that people coding in open source do it because they care (similar to your idea, but on the positive side). If you want to make something nice/working/smart, you have a better chance of succeeding if you care than if you are just being paid to do it (or afraid that you will be embarrassed).
In this case, would a programmer with a day job who also hacks on Linux in their free time be a professional at work and an amateur on anything they do independently? Or really any sort of engineer, contractor, person who makes stuff, etc.?
Just my interpretation: that programmer is a professional. They are paid to do programming. They are still a professional even when they are working on their hobby project, because it is not a function of whether they are paid for that particular code, but of whether they are paid for any coding at all.
If that programmer went and coached their friend's basketball team for free, they would be an amateur coach, but they are still a professional programmer even while coaching.
> it is still more robust than software created by multi-billion dollar corporations
Well, in the industry usually fewer eyes are looking at the code than in open source. Nobody strives to make it perfect overall; people are focused on what is important. Numbers, performance, stability - the priorities depend on the project. There are small tasks approved by managers, and developers aren't interested in doing more. A bigger company works the same; it just has more projects.
> What gets me is that much of the OSS/Linux ecosystem consists of thousands of lashed-together piles of code written by independent and only very loosely coordinated groups, much of it coded and lashed together by amateurs for free, and it is still more robust than software created by multi-billion dollar corporations.
This is such a gross mischaracterization.
Linux enjoys stability and robustness because multi-billion dollar corporations like Red Hat and Canonical throw tons of money and resources at achieving this end and at turning the loose collection of scripts plus a kernel into a usable OS.
If they didn't, Linux would have single-digit adoption by hobbyists, and these companies would still be running Solaris and HP-UX.
> Perhaps one reason is that OSS system programmers are washing their dirty linen in public; not a matter of "many eyes make bugs shallow", but that "any eyes make bad code embarrassing".
I've committed sins in production code that I would never dream of committing in one of my published open source projects. The allure of "no one will ever see this" is pretty strong.
>What gets me is that much of the OSS/Linux ecosystem consists of thousands of lashed-together piles of code written by independent and only very loosely coordinated groups, much of it coded and lashed together by amateurs for free, and it is still more robust than software created by multi-billion dollar corporations.
I mostly agree, but I think that there's a delayed effect for OSS projects falling apart. Most of these projects are literally just 1 or 2 people coding in their spare time, with maybe a few casual contributors. The lack of contributors working on extremely important software makes them vulnerable to bad actors (e.g. the XZ backdoor) or to the maintainers going AWOL. The upside is that it's easy for anybody to just go in and fix the issue once it's found, but the problem needs to happen first before anybody does that.
It's a hunch, but I feel like open source has more churn and chaos & is multi-party. And that milieu resembles nature, with change, evolution & dynamics.
Corporations are almost always bound into deep deep path dependence. There's ongoing feature development upon their existing monolithic applications. New ideas are typically driven by the org & product owners, down to the workers. Rarely can engineers build the mandate to do big things, imo.
Closed source's closedness is a horrible disadvantage. Being situated not alone & by yourself but as part of a broader environment, where new ideas & injections can happen, works to reduce the risk of cruft, maladaptation & organizational mismanagement & malpractice. Participating in a broader world & ecosystem engenders a dynamism, resists being beset by technical & organizational stasis.
I'm trying to agree with you here, not shame you, but I do think there's something to the idea that you just shouldn't write code that you wouldn't want to be public. In the long run, it's a principle that will encourage growth in beneficial directions.
Also proprietary code is harder to write because you can't just solve the problem, you have to solve the problem in a way that makes business sense--which often means solving it in a way that does not make sense.
When prototyping something to see if a concept works, or building something for your own private use, you really shouldn't waste time trying to make the code perfect for public consumption. If later you find you want to open source something you wrote, there will inevitably be some clean-up involved, but thinking of writing the cleanest and most readable code on a blue sky project just hampers your ability to create something new and test it quickly.
The problem is that no matter how sincere the promise of "I'll clean it up and release the code" is, it rings very hollow, because realistically few people ever actually get there.
If a developer is so afraid of judgement that they can't release the code to something they want to release, we have a cultural problem (which we do), but the way forward from that is to normalize the idea that sharing code that is more functional than it is pretty beats future promises of code.
as the saying goes, one public repo up on GitHub is worth two in the private gitlab instance
> Perhaps one reason is that OSS system programmers are washing their dirty linen in public; not a matter of "many eyes make bugs shallow", but that "any eyes make bad code embarrassing".
There's something to it. Anecdote of one: at one time management threatened^Wannounced that they planned to open the code base. I for one was not comfortable with that. In a commercial setting, I code with time to release in mind: no frills, no optimizations, no additional checks unless explicitly requested. I've just written too much code that was never released (customer/sales team changed its mind), and time to market was typically of utmost importance. If the product turns out to be viable, one can fix the code later (which, late in my career, is what I spent most of my time on).
Yep. Some of the garbage I've seen out there is shocking. It scares me.
Then I try and get fractional scaling working on Wayland with my NVidia card and want to gouge my eyes out with frustration that after a decade I still can't do what I can do on a closed-source thing that came free with my computer. Actually, make that 25 years now. The enterprise crap, while horrible, actually mostly does work reasonably well. Sometimes I feel dirty about this.
Quality is therefore relative to the consumer. With Linux, I find the attention is on what the engineers care about, not the users. Where there's an impedance mismatch, there are a lot of unhappy users.
> after a decade I still can't do what I can do on a closed source thing that came free with my computer
I don't know the specifics, but there's a good chance that your issue is ultimately because Nvidia wants to keep stuff closed, and Linux is not their main market - at least for actual graphics, I guess these days it's a big market for GPU computing. So it's the interface between closed and open source that's giving you grief.
I don't think this is an nVidia issue. Wayland/nVidia woes are primarily about flickering, empty windows and other rendering problems. I may be wrong, but I believe HiDPI support is a mostly hardware-independent issue.
In a window manager or KDE (X11) you can use nvidia-settings: click Advanced in the monitor settings and select ViewPortIn and ViewPortOut. If you set ViewPortOut to the actual resolution and ViewPortIn to some multiple of the actual resolution, you get fractional scaling; and if you make the chosen factor a function of the relative DPI of your respective monitors, you can make things perceptibly the same size across monitors. That's right, fractional scaling AND mixed DPI!
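The same trick works from the command line. A rough sketch - the connector name and resolutions here are made-up placeholders, so substitute your own from xrandr or the nvidia-settings GUI:

    # The desktop is rendered at 5120x2880 (ViewPortIn) and downsampled by
    # the GPU to the panel's native 3840x2160 (ViewPortOut): fractional scaling.
    nvidia-settings --assign CurrentMetaMode="DP-2: 3840x2160 { ViewPortIn=5120x2880, ViewPortOut=3840x2160+0+0 }"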
You can achieve the same thing with xrandr --scale, and it's easier to automate at login.
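Something like this, assuming an output named HDMI-1 and a 1.25 factor (both placeholders):

    # List connector names and current modes first.
    xrandr
    # Render a 1.25x larger virtual desktop on HDMI-1; the CRTC transform
    # scales it back down to the panel's native resolution.
    xrandr --output HDMI-1 --scale 1.25x1.25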
You can also achieve pretty good fractional scaling in Cinnamon (x11) directly via its configuration. You enable fractional scaling on the right tab and suddenly global scale is replaced with a per monitor scale. Super user friendly.
Also, your copy of Windows was just as free as your computer was. You paid someone to configure Windows acceptably for you, and Microsoft and the various OEMs who make Windows hardware split your money, giving you something usable in return.
You then decided that you wanted Linux on it, and now you are the OEM, which means you get to puzzle out integration and configuration issues, including choosing a DE that supports the features you desire and configuring it to do what you want it to do.
I have been using only 4k monitors in Linux for at least a decade and I have never had any problem with fractional scaling.
I continue to be puzzled whenever I hear about this supposed problem. AFAIK, this is something specific to Gnome, which has a setting for enabling "fractional scaling", whatever that means. I do not use Gnome, so I have never been prevented from using any fractional scaling factor that I liked for my 4k monitors (which have most frequently been connected to NVIDIA cards), going back at least a decade (i.e. by setting whatever value I desired for the monitor DPI).
All GUI systems had fractional scaling already before 1990, including the X Window System and MS Windows, because all had a dots-per-inch setting for the connected monitors. Around 1990, and probably much earlier, the recommendation for writing GUI applications was to use only dimensions in typographic points, or in other display-independent units, for fonts and any other graphic elements.
For any properly written GUI program, changing the DPI of the monitor has always provided fractional scaling without any problems. There have always been some incompetent programmers who have used dimensions in pixels, making their graphic interfaces unscalable, but that has been their fault and not that of the X Window System or of any other window system that was abused by them.
It would have been better if no window system had allowed dimensions given in pixels in any API function. Forty years ago there was the excuse that scaling the graphic elements could sometimes be too slow, so using dimensions in pixels could improve performance, but this excuse had already become obsolete a quarter of a century ago.
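To make the arithmetic concrete: one typographic point is 1/72 inch, so a 12 pt font becomes 12 x 96/72 = 16 px at 96 DPI and 12 x 144/72 = 24 px at 144 DPI. Declaring the monitor's true DPI (or any value you prefer) therefore yields an exact 1.5x, 1.25x, or any other fractional scale, with no extra machinery.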
> For any properly written GUI program, changing the DPI of the monitor has always provided fractional scaling without any problems.
Parent comment was talking about Wayland, and Wayland does not even have a concept of display DPI (IIRC, XWayland simply hardcodes it to 96).
You're correct - in theory. In practice, though, it's a complete mess and we're already waaay past the point of no return, unless, of course, somehow an entirely new stack emerges and gains traction.
> There have always been some incompetent programmers who have used dimensions in pixels
I don't have any hard numbers to back this, but I have a very strong impression that most coders use pixels for some or all of the dimensions, and a lot of them mix units in weird ways. I mean... say, check this very website's CSS and see how it has an unhealthy mix of `pt`s and `px`es.
Don't romanticize the situation too much: open source software is almost entirely written by professional software developers, mostly at their day jobs.