> The other big problem is the burnout from maintainers, who are often unpaid and could use a lot more support from the billion-dollar companies that benefit from using Linux.
This is the crux of the issue: putting the maintenance burden on unpaid volunteers instead of having the burden be carried by the companies that profit from the 6-year LTS.
Canonical volunteered to maintain LTS kernels within the community, but upstream refuses to accept Canonical's contributions - ironically because apparently they "don't work within the community". Source: https://lwn.net/Articles/608917/
Disclosure: I work for Canonical. I'm not authorised to speak for Canonical, expressed opinions here are my own. [Edit: I should add that I don't work on the kernel so I feel like I'm as much an outside observer as you probably are]. But I'm not sure I'm even expressing an opinion here - just citing some relevant, publicly verifiable facts.
Minor thing: if you look at the citation, it says they don't want to hand over control of official branches to Canonical because they don't trust them to engage with the community. Given that, OP must have meant "accept contributions" as "accept Canonical being in control of older official kernel branches, deciding which patches are accepted (even though no one else will)", rather than that they aren't accepting patches from Canonical.
I'm just going by the article I cited. I don't know any more than the reason given there. And come on: how is refusing contributions when the cited reason is not contributing not ironic?
The Linux Foundation "harvests" a couple hundred million dollars a year [1]. They could easily spend more on maintainers. I can't easily find an exact number, but Torvalds is paid between 1 and 2 million dollars a year by The Linux Foundation. They could support other volunteer maintainers as well.
This would be a good use case for government grants. The system to administer them is already there. It's bureaucratic, but it is free money that could support developers long term. It could probably be argued the DOE should offer funding due to the national security (etc.) implications of open source maintenance.
Good for them. With that kind of gumption I'm sure we'll see them running on a lot of devices, mars landers and practically the entire infrastructure of the internet in no time.
To be fair, some companies like Red Hat do maintain LTS kernels, but they choose different kernel versions than upstream LTS and backport different things, so there isn't much crossover with these ones.
As an ex-Novell/SUSE employee this makes sense to me.
Upstream is supposed to keep marching onwards.
Backporting is _so_ much work. And it's unfortunately not sexy work either, so it's always going to be hard attracting unpaid contributors for it.
If you need stability and long term support as a customer, you have companies like RedHat or SUSE whose entire point is providing 10+ years maintenance on these components.
Unfortunately, none of these companies providing 10+ years of maintenance are doing so for most embedded devices. We either need to get SoC vendors to update their kernel baselines regularly (this is hard; we've been trying for a decade and not seen much progress), or get them to backport fixes and patches (there's actually been quite a bit of progress here in getting them to actually take updates from the stable kernel at all! And that's getting thrown away now...)
Exactly. It sounds like currently there's no money to be made supporting old embedded devices (in the consumer space at least), because no one is on the hook for long term maintenance.
Regulations _could_ change the incentives, and create a market for long term servicing. Regulations are hard to get right though...
Or maybe vendors will be incentivized to actually upstream kernel patches, plus stop making 10 different models every year for weird market segmentation reasons.
“Old devices are phased out sooner” seems like an OK solution with some caveats.
It is nice that it makes the cost of not supporting things visible to the users. Assuming “phased out” means the device will actually stop operating; “Company X’s devices have a short lifetime” is an easy thing for people to understand.
I suspect consumers will look for brands that don’t have this reputation, which should give those well behaved brands a boost.
Although, if it does turn out that just letting devices die is the common solution, maybe something will need to be done to account for the additional e-waste that is generated.
Moving toward proprietary OSes; hey, if it solves the problem… although, I don’t see why they’d have an advantage in keeping things up to date.
It is possible that companies will just break the law but then, that’s true of any law.
This won’t make more money available for supporting old devices, it’ll just make the long-term profitability of any device significantly lower, which means less competition and innovation.
A smarter regulation would have been to require non-commercial-use firmware source disclosure, allowing non-competitive long-term maintenance by owners.
Who is responsible for complying with it? If a Chinese or American manufacturer of an embedded device that does not have a presence in the EU fails to provide updates what happens?
How many of the companies producing this stuff have the skills to fix kernel security bugs?
Not sure who the "we" is that you refer to, but Google (and Samsung, and other Android manufacturers, as well as companies building other Linux-based embedded/IoT devices) could band together and create a "Corporate Embedded Linux Consortium", and pool some money together to pay developers to maintain old kernel versions.
If the mainline kernel devs are uncomfortable allowing those to be official kernel.org releases, that's fine: the CELC can host the new versions themselves and call the versions something like "5.10.95-celc" or whatever.
I don't get why this is so difficult for people to grasp: if you want long-term maintenance of something, then pay people to maintain it long-term. It's a frighteningly simple concept.
But yes, it'd be better for SoC vendors to track upstream more closely, and actually release updates for newer kernel versions, instead of the usual practice of locking each chip to whatever already-old kernel they choose from the start. Or, the golden ideal: SoC vendors should upstream their changes. But fat chance of that happening any time soon.
> (there's actually been quite a bit of progress here in getting them to actually take updates from the stable kernel at all! And that's getting thrown away now...)
I found this statement kinda funny. If the original situation was that they wouldn't take updates from the stable kernels, then what were all those unpaid developers even maintaining them for? It's bad enough that it's (for most people) unrewarding work that they weren't getting paid for... but then few people were actually making use of it? Ouch. No wonder they're giving up, regardless of any progress made with the SoC vendors.
>SoC vendors should upstream their changes. But fat chance of that happening any time soon
I honestly do not understand why SoC vendors don't put the extra 1% effort into upstreaming their stuff. I've seen (and worked with) software from these vendors that lags 3 to 4 years behind upstream, and if you diff it against upstream it's like 10 small commits. Granted, these commits are generally hot garbage.
Isn’t this what the Civil Infrastructure Platform (CIP) initiative [0] was also proposing? Maintenance of Linux kernels on the 10+ year horizon aimed at industrial use cases. Has backing from Toshiba, Hitachi, Bosch, Siemens, Renesas, etc, though a marked lack of chip vendors as members. Not really sure how well it is going though.
The Linux kernel doesn't have a stable ABI for device drivers. The device manufacturers either can't or won't publish their drivers as part of the Linux kernel; that's why they fork Linux instead.
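For anyone curious what that looks like in practice, here's a minimal, illustrative module sketch (a toy, not any vendor's actual driver): code like this has to be built against one specific kernel tree's headers, and since there's no stable in-kernel ABI, a later kernel may not build or load it.

    /* Toy out-of-tree module sketch. There is no stable in-kernel ABI, so a
     * module built against one kernel's headers may fail to build or load
     * against a later kernel once internal interfaces change. */
    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Toy example module");

    static int __init toy_init(void)
    {
            pr_info("toy: loaded\n");
            return 0;
    }

    static void __exit toy_exit(void)
    {
            pr_info("toy: unloaded\n");
    }

    module_init(toy_init);
    module_exit(toy_exit);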
> If you need stability and long term support as a customer, you have companies like RedHat or SUSE whose entire point is providing 10+ years maintenance on these components.
Is that even feasible for projects like the Android kernel that distribute their fork to vendors, when Red Hat forbids redistribution of their source code?
This is the root of so much of our software quality problem. “I want to work on something shiny” outweighs “I have pride in this software and want to keep it healthy.”
Personally I love working on legacy software, I actually dislike greenfield projects, but even in the context of legacy software and system maintenance, backporting fixes would still not rate highly or provide much in the way of interesting work for me.
I'd say there's enough software developers that enjoy doing the latter. It's mostly the external motivation (both in community standing and in payments) that push people to shiny new things.
Are you flagging the word "sexy" or are you asking whether some important projects are fun and exciting and other important projects boring?
Surely maintaining 40 year old bank Cobol code is important but it's not considered fun and exciting. Rewriting half of skia from C++ into Rust is arguably not important at all but it's exciting to the point that it reasonably could make the front page of HN.
Google showed up & offered to pay a huge amount of money to extend LTS support 3x, iirc.
At the time, Linux 4.14 was shipping I think.
Personally it made me a bit sad, because it created real permission for Android to never upgrade kernels. I'd hoped eventually Android would start upgrading kernels, and thought the short LTS would surely force them to become a respectable ecosystem that takes maintenance seriously. Making Super LTS was a dodge; suddenly it was ok that 4.14 straggles along until 2024.
Also, an interesting note: supposedly there's a "Civil Infrastructure Platform" that will have some support for 4.14 until 2029! The past is never gone, eh?
Supposedly Project Treble is a whole new driver abstraction in I think mostly userland (maybe?) whose intent is to allow kernel upgrades without having to rewrite drivers (not sure if that is the primary/express goal or was just often mentioned). I'm not sure if Android has yet shipped a kernel upgrade to any phones though; anyone know of a model running an upgraded kernel?
> Personally it made me a bit sad, because it created real permission for Android to never upgrade kernels.
I would argue that it was still better than not doing that, because the vendors weren't going to properly keep up with kernels either way; the choice wasn't "support 3.x for longer or move to 4.x", it was "support 3.x for longer or watch Android devices stay on 3.x without patches".
As the old adage goes, "chips without software is just expensive sand."
Yes, what happened & happens is often monstrously unsupportable & terrible. For the past 6 years, Google rolling up with a dump truck full of bills has been justification to keep doing nothing, to keep letting device kernels be bad.
Your history isn't even good or right. Old releases at the time didn't get official longer support. Canonical just opted to maintain a basically-Super-LTS 3.16 until 2020, regardless of the Google dump-truck-of-money thing going on here. Old phones got nothing directly from this payoff. Google was just paying for a way forward to keep doing nothing, to justify their ongoing delinquency & inactivity.
Which was unprincipled, terrible, and awful before, but which they basically bribed gregkh into suddenly making acceptable, by at least paying for the privilege of being negligent, do-nothing delinquents.
Some comments are saying that the 6-year LTS is needed to support older Android devices. Also, in practice most vendors don't bother releasing updates to phones after the first 2 years, other than security updates.
One possible nice side-effect of not maintaining kernels for so long, and of not letting people stay on out-of-date systems, would be to encourage vendors to let users upgrade to newer versions of Android beyond the current 2-year life span. They would then be more likely to put pressure on their component vendors to get kernel support for their chipsets into mainline, so they don't have the excuse that they can't provide updates because the hardware isn't supported by modern firmware.
Microsoft is famed for their backwards compatibility. How do they achieve this? By hard work and a lot of "if version == x" spread throughout the code? Or is it because of their development process, or do they plan and design for backwards compatibility from day one?
There's a difference between backwards compatibility and backporting. For either, Microsoft can afford to pay engineers to maintain them.
But backwards compatibility isn't what kernel developers are maintaining; they're backporting things like security fixes to older versions of the kernel.
It would be like if a security fix is implemented in Windows 11, and Microsoft also chose to patch the same change in Windows 10. At some point Microsoft decides that older version of Windows won't get new updates, like how Windows 8.1 stopped receiving them this January.
What kernel developers are deciding is that sufficiently old enough kernel branches will stop receiving backports from newer kernels.
They are saying: "Recent versions of Windows can run old programs made for old versions of Windows. How?".
The Linux kernel is very good at it because of the "Do not break userspace" Linus Torvalds' rule. The usual user space on top of the Linux kernel, not so much.
So yes, backward compatibility and backporting are different matters.
And Windows addresses them both indeed. Your parent commenter is not comparing Windows with Linux.
I think the point is more of a "so what?" Windows' backward compatibility is completely irrelevant and uninteresting here because we're not talking about backward compatibility, we're talking about long-term support.
So this does not look like 10-year support for the initial version but rather like switching between different LTS versions over that time. Is there any data from Microsoft itself on support duration, release dates, backports, and how to parse these numbers?
I don't think we can infer all that much from the version numbers without knowing Microsoft's internal processes around this sort of thing, and exactly what those version numbers mean in the context of Microsoft.
To me, though, 6.1.7600.16385 -> 6.1.7601.21701 does sound like long-term support for a single "version" (whatever that word means in this context).
I don't think any of this is useful to compare like this.
Windows has had three major releases in 11 years. The Linux kernel does one every two months. Windows is an entire OS, with a userland and GUI. The Linux kernel is... a kernel.
The development and support cycles are naturally going to be very different for the two. And regardless, the mainline Linux kernel team is not beholden to anyone for any kind of support. Whatever they do is either voluntary, or done because someone has decided to pay some subset of developers for it to get done. Microsoft employs and pays the people who maintain their old Windows versions.
If no one is paying someone enough to maintain an old Linux kernel for six years, why would they choose to do it? It's mostly thankless, unrewarding work. And given that the pace of development for the Linux kernel is much much faster than that of Windows (or even just the Windows/NT kernel), the job is also much more challenging.
Windows 11 uses the NT 10.0 kernel that originally released with Windows 10 in 2015. NT 10.0 will be supported for well over a decade at this point, maybe even two.
NT6.1 (Windows 7) was also supported from 2009 to 2020 (11 years!), and NT 5.1 (Windows XP) was supported from 2001 through either 2014 (13 years!) or 2019 (18 years!) depending on support channel.
Microsoft will support a product for a decade if not more, assuming you're keeping up with security updates which they absolutely will backport, sometimes even beyond EOL if the fix is that important. Linux with 2 years is a bad joke, by comparison.
That only tells me something about naming? I have no clue how many LTS or non-LTS versions were between the one that shipped with windows 10 and 10.0.22621.900. For all I know, that could be like Linux 2.something being all the way from 1996 to 2011, except that Linux 3.something had a major change of "NOTHING. Absolutely nothing." except for a shiny new number (https://en.wikipedia.org/wiki/Linux_kernel).
So honest question: What does 10.0.22621.900 mean? Is 10.0.X.Y supported for a decade or is that discontinued at some point and I am forced to upgrade to 10.0.X+10,Y-5?
You could choose to stay on Windows 7, that is NT 6.1, and Microsoft will still backport updates from newer kernels such as NT 6.2 and NT 10.0 for the support life of NT 6.1.
Yes. The numbers after the Major.Minor numbers are just revision and build numbers of little consequence for most people.
Are you here for thoughtful conversation or are you just being a Micro$oft Windoze troll? Because I can't tell; I would presume most people here know how to read version numbers.
I had to ask you three(!) times to finally get an answer to a simple question and then you go "major versions are obviously of little consequence; that is why they are called major". Clearly someone is trolling, but it isn't me.
Microsoft maintains their kernels/OSes for that long because people are willing to pay for that support.
It's pretty disrespectful to call Linux's process a "bad joke" when these developers mostly aren't getting paid to maintain major versions for any length of time that you'd consider more reasonable.
Meanwhile, if you do want longer-term support for a specific kernel+OS combo, IBM/Red Hat (among others) will be happy to sell it to you. You may think it's inefficient for each enterprise distro to have their own internal kernel fork that they maintain (rather than all contributing to a centralized LTS kernel), but that's the choice they've all seemingly collectively made. I guess they feel that if they're on the hook to support it, they want full and final say of what goes into it.
Also consider that Windows doesn't sell a kernel: they sell a full OS. In the Windows world, you don't mix and match kernel versions with the rest of the system. You get what Microsoft has tested and released together. With Linux, I can start with today's Debian stable and run it for years, but continue updating to a new major kernel version (self-building it if I want or need) every two months. The development and support cycle for an OS is very different than that of a kernel. You just can't compare the two directly. If you want to, compare Windows with RHEL.
Also-also consider that Windows and Linux are used in very different contexts. Microsoft's customers may largely care about different things than (e.g.) Red Hat's customers.
They asked questions and I answered them. I'm not trying to make a point, and I don't think that the OP was trying to make a point with their questions, either.
When I worked in Windows they had entire teams dedicated to backwards compatibility testing and "sustained engineering". At the end of every release cycle there would be a multi-month effort to pack up all of our test automation and hand it off to another team who owned running it for the next several years. Plus SLAs with giant companies that could get you camped out on the floor of a datacenter with a kernel debugger if you pushed out a broken update. It was never a totally perfect system, but they invested a lot of effort (and money) into it.
A friend who worked at MS tells me that there's a huge amount of "if version" in their code. Apparently it's at the level where it's a big maintenance headache.
IIRC it was partially confirmed when some Windows 11 beta builds started causing issues with software thinking it was being executed on 1.1.x (whose identifier internally apparently is 11).
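To illustrate the general failure mode (a hypothetical sketch, not the actual Windows code or the actual bug): version sniffing tends to break when version strings are compared lexically or by prefix instead of being parsed as numbers.

    /* Hypothetical sketch of a brittle version check: lexical comparison
     * thinks "11.0" is "older" than "3.1", and prefix matching can conflate
     * "1.1" with "11". Parsing the components avoids both problems. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *running = "11.0";
        const char *required = "3.1";

        /* Wrong: strcmp compares character by character, so "11.0" < "3.1". */
        if (strcmp(running, required) < 0)
            printf("refusing to run: OS 'too old'\n");

        /* Better: parse the components and compare numerically. */
        int rmaj = 0, rmin = 0, qmaj = 0, qmin = 0;
        sscanf(running, "%d.%d", &rmaj, &rmin);
        sscanf(required, "%d.%d", &qmaj, &qmin);
        if (rmaj > qmaj || (rmaj == qmaj && rmin >= qmin))
            printf("version check passed\n");
        return 0;
    }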
Maybe for a while. But when you add the maintenance burden to the code, it stays there, forever being felt. Over time, this degrades the product for everyone. And indeed, Windows can be unpleasant to use, not least of all because it feels like glued together legacy systems.
Microsoft sunsets their stuff all the time. It's just that they're competing with Google and Apple now, so they're actively trying to push this line to differentiate where they can.
Try to use only a ten year old printer driver sometime. It's a pain. Linux executes 20 year old code with no problem, as long as you kept all the pieces. How do they do it? Never merge anything that breaks known user space. Easy in theory, hard work in practice.
If you want to run applications from the 90s, you're likely to have more success with dosbox or wine than with a plain Windows. Didn't Microsoft completely give up on backwards emulation a few years ago and started virtualizing it instead, with mixed success?
Of course, if you really want something famous for backwards compatibility, look at OS/400 and z/OS. It's all layers of emulation from the hardware up in order to guarantee that an investment in that platform is future proof. It's all expensive in the end of course, as someone has to pay for it, but they live well on the customers who value such things. Running 50 year old code there is commonplace.
IBM i is stellar in design, compatibility, quality, efficiency, reliability, consistency and security. x86 and Linux pale in comparison.
I wish IBM hadn't fenced it so much like a walled garden. Had they issued inexpensive or free licenses for OS/400 targeted to students and developers, maybe also an emulator to develop conveniently on x86, their i platform would probably be more commonplace now, with quite a bit more available software.
What is killing their platform is not the price but mostly the lack of skills and software. And it's probably too late now to change course.
I'm a long time software engineer and do quite a bit of devops both in cloud but also have significant experience building on-prem and datacenter server clusters.
I have never heard of IBM i until this moment right now.
I assume this is specifically for their Power-series hardware? I've only ever seen Linux on Power hardware...
You may have known IBM i under a different name such as eSeries or AS/400, as it has gone through many renamings.
Yes, it currently targets their Power series, although it's fairly hardware independent. As a matter of fact AS/400 binaries don't even care what CPU they run on, as there are several abstraction layers underneath, namely XPF, TIMI and SLIC. It's a bit like a native, hardware-based JVM with the OS being also the SDK. Another peculiarity is that everything is an object in "i", including libraries, programs and files.
But mostly, it requires close to no sysadmin. Just turn it on, start the services and leave it alone for years if needed.
Microsoft dropped 16-bit support on 64-bit machines, but that was because 16-bit support on 32-bit was already using emulation/virtualisation, as does 32-bit on 64-bit. Emulating a 16-bit emulator inside the 32-bit emulator would be too much, even for Microsoft.
Microsoft does drop backwards compatibility sometimes, usually because the backwards compatibility layer leaves a huge security risk.
Yes, old printer drivers for Windows can be a problem, often because of the 32-to-64-bit switch; I have that exact problem with an old printer that still works but that I can't get to install on 64-bit.
20 year old software is rarely a problem, I'm running Office XP on Windows 10 without problems.
Are you really comparing a multibillion dollar company to an open source project?
Also this has nothing to do with backwards compatibility, it's about supporting older kernels with security fixes and similar. The decision is a pragmatic one to lessen the burden on the unpaid volunteers.
Like others have mentioned if a company needs a specific kernel pay up. Or use Windows.
On the contrary, the FreeBSD project offers stable 'LTS' releases for 5 years each.
What I mean by that is each 'major' version with stable API/ABI has a life span of about 5 years - like 5 years of 12.x version, 5 years of 13.x version, etc.
... and all that with only about 1/10 of the Linux headcount (rough estimate).
A difference is that the FreeBSD project has to maintain two versions most of the time and sometimes three.
At the moment there are 12.x and 13.x. 14.x is in beta, so soon there will be three. But 12.x is expected to be dropped at the end of this year, so in 2024 it will be back to two versions.
As far as I can tell there are a lot more Linux kernels in LTS at the moment.
FreeBSD's major release cycle is much longer than that of the Linux kernel (which makes sense, since FBSD has to maintain the entire OS, not just the kernel). Right now they have two active major release series, and there's a new one every 2.5 years or so.
The Linux kernel has a new major release every two months, and it looks like LTS kernels are one out of every five or six major versions, so that's a new LTS kernel every 10-12 months; right now they have six LTS kernels to maintain.
Also I expect that the Linux kernel develops at a much more rapid pace than the FreeBSD kernel. That's not a knock on FreeBSD; that's just the likely reality. Is that development pace sufficient to support thousands of different embedded chips? Does the FreeBSD project even want to see the kind of change in their development process that an influx of new embedded developers would likely entail?
It's pretty active, actually. Look at the release notes for FreeBSD major versions. Some folks think the release engineering team is too active and that major versions should be supported for more than ~5 years.
I didn't say FreeBSD was abandonware, its kernel development has just been relatively stagnant for decades vs. Linux. Which shouldn't come as particularly surprising considering how much more adoption and investment there's been surrounding Linux over that time period.
What do you mean?
I've witnessed many exoduses in technology, the most obvious ones being MySQL > PostgreSQL.
And from PHP to JS & Python.
I don't think it's too far fetched for people clinging to their favorite GNU/Linux distro to switch to FreeBSD, especially on the server side where in my opinion, FreeBSD is the superior choice.
I think the world is better off when there are choices and the Linux near mono culture is not good for the FOSS movement, in my opinion.
6 years is way too short for some use-cases, like phones
sadly it's not rare that some phone hardware proprietary drivers are basically written once and then hardly maintained, and in turn only work for a small number of Linux kernel versions.
so for some hardware it might mean that when it gets released you only have noticeably less than 6 years of kernel support
then between starting to build a phone and releasing it, 2 years might easily pass
so that means from release you can provide _at most_ 4 years of kernel security patches etc.
but dates tend not to align that great, so maybe it's just 3 years
but then you sell your phone for more than one year, right?
in which case less than 2 years can be left between the customer buying your product and you no longer providing kernel updates
that is a huge issue
I mean, think about it: if someone is a bit tight on money and buys a slightly older phone, they probably aren't aware that their phone stops getting security updates in a year or so (3 years of software support since release, but it's a 2-year-old phone), at which point using it is a liability.
EDIT: The answer is, in my opinion, not to have an even longer LTS but to properly maintain drivers and in turn be able to do full kernel updates
That's one of the reasons Purism went out of their way to upstream all drivers for the Librem 5. The distribution can upgrade kernels pretty much whenever it wants.
The downside can be painful though: sourcing components with such properties is hard. You basically have to cherry-pick them from all over the world because they're so few and far between.
That's one of the reasons why the Librem 5 is so thick and consumes so much energy.
Contributing a driver to mainline Linux takes significant time and effort up front. You can't just throw anything over the Linux fence and expect that already-overworked kernel maintainers keep tending for it for the next decades.
Slapping together a half-working out-of-tree kernel module and calling it a day is not only much cheaper; it also buys you the time you need to write the new driver for next year's hot shit SoC that smartphone vendors demand.
What would you want as a buyer. A driver that has already demonstrated that it is good enough to be included in the kernel, or one of unknown quality that may need extra work to integrate with the kernel.
I get why suppliers don't want to do the work. I just don't understand why there isn't enough value add for buyers to justify the premium for the benefits of a mainline driver, and/or why sellers don't try and capture that premium
I don't think buyers are actually going to pay enough for the sellers to justify the added cost. Remember that the buyers have to pass their costs on to their end customers (e.g. consumer phone purchasers), and those people won't accept all phones becoming $50 more expensive or whatever.
Also consider the cultural context. The culture of hardware manufacturers is much different than that of software vendors. They don't view software as a product, but more a necessary evil to make their hardware work with existing software infrastructure. They want to spend as little time on it as possible and then move onto the next thing.
I'm not endorsing this status quo, merely trying to explain it.
The way it seems to me is that a driver takes X hours to make, integrate, etc. It's cheaper for the vendor to spend those X hours, rather than each individual purchaser each spending those X hours.
The easy answer is that buyers largely don't care. Most people get their phones from their ISP, so that's the main target. They get a data plan that comes bundled with a phone and pay it off for 2 years. After 2 years they get a new plan with a new phone.
Caring about long-term maintenance isn't what most buyers do. Going SIM-only on your data plan is out of the ordinary.
Also in my experience people largely pick their phones based on the surface level hardware rather than the long-term reliability. Hence why Apple keeps putting fancier cameras into every iPhone even though I'm pretty sure a good chunk of customers don't need a fancy camera. Heck, just getting a phone that fits in my hand was a struggle because buyers somehow got convinced that bigger phone = better phone and now most smartphones on the market are just half-size tablets.
That trend at least seems to be somewhat reversing though.
The trend is sadly not reversing fast enough. Apple already discontinued their line of slightly too big phones (mini series), and now they only sell oversized phablets. I might not have viable iOS-based hardware options when I upgrade in 2-3 years, and I'm not comfortable switching to an operating system made by an adtech company. I do hope they go back to smaller sizes before then. Kind of baffling to me how Apple otherwise puts a lot of effort into accessibility, but their main line of phones are awkward and uncomfortable to hold even for a fully able-bodied person with average size hands.
I agree, but consider that the buyer must also consider what the end-customer cares about. The buyer is not going to pay the chip manufacturer extra for mainlined (or at least open source) drivers unless their end-customers are asking for that (since those costs will be passed on to the customer). And outside of niche products like Librem's, the vast majority of customers don't even know about chipset drivers, let alone care.
Sadly, far too often, software support simply never enters the picture in sourcing decisions. Back when I was privy to this process at an OEM, the only factors that mattered were:
1. Hit to the BOM (i.e. cost); and
2. Chip suppliability (i.e., can we get enough pieces, by the time we need them, preferably from fewer suppliers).
In the product I was involved in building (full OS, from bootloader to apps), I was lucky that the hardware team (separate company) was willing to base their decisions on my inputs. The hardware company would bear the full brunt of BOM costs, but without software the hardware was DOA and wouldn't even go to manufacturing. This symbiotic relationship, I think, is what made it necessary for them to listen to our inputs.
Even so, I agreed software support wasn't a super strong input because:
1. There's more room for both compromises and making up for compromises, in software; and
2. Estimating level of software support and quality is more nuanced than just a "Has mainline drivers?" checkbox.
For example, RPi 3B vs. Freescale iMX6. The latter had complete mainline support (for our needs) but the former was still out-of-tree for major subsystems. The RPi was cheaper. A lot cheaper.
I okayed RPi for our base board because:
1. Its out-of-tree kernel was kept up-to-date with mainline with a small delay, and would have supported the next LTS kernel by the time our development was expected to finish (a year);
2. Its out-of-tree code was quite easy (almost straightforward) to integrate into the Gentoo-based stack I wanted to build the OS on; and
3. I was already up-and-running with a prototype on RPi with ArchLinuxARM while we were waiting for iMX6 devkits to be sourced. If ArchLinuxARM could support this board natively, I figured it wouldn't be hard to port it to Gentoo; turned out Gentoo already had built-in support for its out-of-tree code.
Of course, not every sourcing decision was as easy as that. I did have to write a driver for an audio chip because its mainline driver did not support the full range of features the hardware did. But even in that case, the decision to go ahead with that chip was only made after I was certain that we could write and maintain said driver.
Yup, exactly. I last worked in this field in 2009, and BOM cost (tempered with component availability) was king. This was also a time when hardware was much less capable, so they usually ran something like vxWorks (or, ::shudder::, uClinux). Building the cheapest product that could get to market fastest (so as to beat competitors to the latest WiFi draft standard) was all that mattered.
Your Raspberry Pi example is IMO even more illustrative than you let on. I'll reiterate that even that platform is not open and doesn't have a full set of mainlined drivers, after a decade of incredibly active development, by a team that is much more dedicated to openness than most other device manufacturers. Granted, they picked a base (ugh, Broadcom) that is among the worst when it comes to documentation and open source, but I think that also proves a point: device manufacturers don't have a ton of choice, and need to strike a balance between openness and practical considerations. The Raspberry Pi folks had price and capability targets to go with their openness needs, and they couldn't always get everything they wanted.
Because you don't have much choice, and each choice has trade offs. If you pick the part from vendor A, you get the mainlined driver, but maybe you get slower performance, or higher power consumption, or a larger component footprint that doesn't work with your form factor.
And most vendors are like vendor B because they're leading the pack in terms of performance, power consumption, and die size (among other things) and have the market power to avoid having to do everything their customers want them to do.
Still, some headway has been made: Google and Samsung have been gradually getting some manufacturers (mainly Qualcomm) to support their chips for longer. It's been a slow process, though.
As for mainlining: it's a long, difficult process, and the vendor-B types just don't care, and mostly don't need to care.
Because the buyers are consumer hardware companies. This means a) there's an expectation that software works just like their hardware: they put it together once and then throw it onto the market. Updating or supporting it is not a particular consideration, unless they re-engineer something significantly to reduce costs. and b) the bean-counters and hardware engineers have more sway than the software engineers: lower cost, better battery life, features, etc on paper will win out over good software support over the life of the product.
because you don't care to give the customer longer-term software support
many consumers are not aware of the danger an unmaintained/non-updatable software stack introduces, or that their (mainly) phone is unmaintained
so the phone vendor buys from B because A is often just not an option (not available for the hardware you need), and then dumps the problem, subtly and mostly unnoticed, on the user
there are some exceptions, e.g. Fairphone is committed to quite long term software support, so they try to use vendor-A types, or vendor-B types with a contractual long-term commitment for driver maintenance
but in the space of phones (and implicitly IoT using phone parts), sadly sometimes (often) the only available option for the hardware you need is a vendor B, where any long-term driver maintenance commitment contract is just not affordable if you are not operating at the scale of a larger phone vendor
E.g. as far as I remember, Fairphone had to do some reverse engineering/patching to continue support for the FP3 until today (and, well, I think another 2 or so years), and I vaguely remember that they were somewhat lucky that some open source driver work for some parts was already ongoing and getting some support from some of the vendors. For the FP5 they managed to have closer cooperation with Qualcomm, allowing them to provide a 5-year extended warranty and target software support for 8 years (since release of the phone).
So without phone producers either being legally forced to provide a certain amount of software support (e.g. 3 years after the last first-party sale), or at least being visibly transparent upfront about the amount of software support they do provide and informing their users when the software isn't supported anymore, I don't expect to see any larger industry-wide changes there.
Though some countries are considering laws like that.
> so for some hardware it might mean that when it gets released you only have noticeably less than 6 years of kernel support
Or they could just upgrade the kernel to a newer version. There's no rule that says the phone needs to run the same major kernel version for its entire lifetime. The issue is that if you buy a sub-€100 phone, how exactly is the manufacturer supposed to finance the development and testing of newer versions of the operating system? It might be cheap enough to just apply security fixes to an LTS kernel, but moving and re-validating drivers for hardware that may not even be manufactured anymore quickly becomes unjustifiably expensive for anything but flagship phones.
That's the point: these drivers should get updated. Obviously the low-level component manufacturers don't want to do this, but perhaps we need to find a way to incentivize them to do so. And if that fails, to legally force them.
> sadly it's not rare that some phone hardware proprietary drivers are basically written once and then hardly maintained and in turn only work for a small number of Linux kernel versions.
These manufacturers should be punished by the lack of LTS and the need to upgrade precisely because of that laziness and incompetence.
> You don't see Windows driver developers having their drivers broken by updates every few months.
At the cost of Windows kernel development being a huge PITA because effectively everything in the driver development kit becomes an ossified API that can't ever change, no matter if there are bugs or there are more efficient ways to get something done.
The Linux kernel developers can do whatever they want to get the best (most performant, most energy saving, ...) system because they don't need to worry about breaking someone else's proprietary code. Device manufacturers can always do the right thing and provide well-written modules to upstream - but many don't because (rightfully) the Linux kernel team demands good code quality which is expensive AF. Just look at the state of most non-Pixel/Samsung code dumps, if you're dedicated enough you'll find tons of vulnerabilities and code smell.
>no matter if there are bugs or there are more efficient ways to get something done.
Stability is worth it. After 30 years of development the kernel developers should be able to come up with a solid design for a stable api for drivers that they don't expect to radically change in a way they can't support.
Stability is worth it to you. Others can hold different opinions and make different decisions, and until and unless you -- or someone like minded -- becomes the leader of a major open source kernel project used in billions of devices, the opinions of those others will rule the day.
Because the kernel developers are not beholden to chipset manufacturers who want to spend the shortest possible time writing a close-source driver and then forgetting about it. They're there to work on whatever they enjoy, as well as whatever their (paying) stakeholders care about.
The solution to all this is pretty simple: release the source to these drivers. I guarantee members of the community -- or, hell, companies who rely on these drivers in their end-products -- will maintain the more popular/generally-useful ones, and will update them to work with newer kernels.
Certainly the ideal would be to mainline these drivers in the first place, but that's a long, difficult process and I frankly don't blame the chipset manufacturers for not caring to go through it.
Also, real classy to call the people who design and build the kernel that runs on billions of devices around the world "lazy and incompetent". Methinks you just don't know what you're talking about.
It's less the kernel developers than a certain subset of companies providing proprietary-only drivers.
Most linux kernel changes are limited enough so that updating a driver is not an issue, IFF you have the source code.
That is how a huge number of drivers are maintained in-tree, if they had to do major changes to all the drivers every time anything changes they wouldn't really get anything done.
Only if you don't have the source code is driver breakage an issue.
But Linux's approach to proprietary drivers has always been that there is no official support when there is no source code.
Why stop at kernel space? You might as well break all of user space every so often. If everything is open source it shouldn't be an issue to fix all broken Linux software right?
> You might as well break all of user space every so often. If everything is open source it shouldn't be an issue to fix all broken Linux software right?
What an uninformed take.
The Linux kernel has a strict "don't break userspace" policy, because they know that userspace is not released in lock step with the kernel. Having this policy is certainly a burden on them to get things right, but they've decided the trade offs make it worth it.
They have also chosen that the trade offs involved in having a stable driver API are not worth it.
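To make concrete which boundary that policy covers (a minimal sketch, nothing more): userspace code like the below keeps working across kernel upgrades because the syscall interface stays stable, while the in-kernel interfaces that drivers depend on carry no such guarantee.

    /* Minimal sketch: this program only touches the stable kernel/userspace
     * boundary (open/read/write/close), which is what "don't break
     * userspace" protects. In-kernel driver APIs are free to change. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[256];
        int fd = open("/proc/version", O_RDONLY);
        if (fd < 0)
            return 1;
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);
        close(fd);
        return 0;
    }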
> People don't want you to break their code.
Then maybe "people" (in this case device manufacturers who write crap drivers) should pony up the money and time to get their drivers mainlined so they don't have to worry about this problem. The Linux kernel team doesn't owe them anything.
> sadly it's not rare that some phone hardware proprietary drivers are basically written once and then hardly maintained and in turn only work for a small number of Linux kernel versions.
The worst part of all of this is: Google could go and mandate that the situation improves by using the Google Play Store license - only grant it if the full source code for the BSP is made available and the manufacturer commits to upstreaming the drivers to the Linux kernel. But they haven't, and so the SoC vendors don't feel any pressure to move to sustainable development models.
Google realistically can't do this. "The SoC vendors" is basically Qualcomm (yes, I know there are others, but if Qualcomm doesn't play ball, none of it matters).
Google has tried to improve the situation, and has made some headway: that's why they were able to get longer support for at least security patches for the Pixel line. Now that they own development of their own SoC, they're able to push that even farther. But consider how that's panned out: they essentially hit a wall with Qualcomm, and had to take ownership of the chipset (based off of Samsung's Exynos chip; they didn't start from scratch) in order to actually get what they want when it comes to long-term support. This should give you an idea of the outsized amount of power Qualcomm has in this situation.
Not many companies have the resources to do what Google is doing here! Even Samsung, who designed their own chipset, still uses Qualcomm for a lot of their products, because building a high-performance SoC with good power consumption numbers is really hard. Good luck to most/all of the smaller Android manufacturers who want more control over their hardware.
(Granted, I'm sure Google didn't decide to build Tensor on their own solely because of the long-term support issues; I bet there were other considerations too.)
6 years may seem like a long time, but check out what the competition is doing. Oracle is supporting Solaris 10 for 20 years, 11.4 for 16 years (23 years if you lump it in with 11.0). HP-UX 11i versions seem to get around 15 years of support.
It really depends on what you're doing, a lot of industries may not need such long-term support. 6 years seems like a happy medium to me, but then again I'm not the one supporting it. I expect the kernel devs would be singing a different tune if people were willing to pay for that extended support.
They're just legacy now IMO and their long term support requirements are a result of this, companies that haven't gotten rid of them by now aren't likely to do it any time soon.
I hate seeing them go. I wasn't such a fan of Solaris but I was of HP-UX. But its days are over. It doesn't even run on x86 or x64 and HP has been paying Intel huge money to keep itanium on life support, which is running out now if it hasn't already.
At least Solaris had an Intel port but it too is very rare now.
There's still a decent population of RHEL 5 systems in the wild. Last year I was offered an engagement (turned down for a few reasons) to help a company upgrade several hundred systems from RHEL 5 to RHEL 6 and start planning for a future rollout of RHEL 7.
Outside of tech focused companies, 10+ year old systems really are the norm.
> Outside of tech focused companies, 10+ year old systems really are the norm.
It's because outside of tech companies, nobody cares about new features. They care about things continuing to work. Companies don't buy software for the fun of exploring new versions, especially frustratingly pointless cosmetic changes that keep hitting their training budgets.
Many companies would be happy with RHEL5 or Windows XP today from a feature standpoint, if it weren't a security vulnerability.
The problem about "things continuing to work" is really that many security fixes require updated architecture too. This is really why it's so hard to do LTS. It's not only about wanting new features.
At megacorp (years ago) we were transitioning to CentOS 7 (from 6) and just starting to wind down our 32-bit windows stuff in AWS. I'm sure there are plenty of legacy Linux systems out there, but I wonder how many folks are actually paying for them.
CentOS/RHEL 6 was already pretty long in the tooth, but being the contrarian I am, I was not looking forward to the impending systemd nonsense.
It’s a nightmare for developers if you get stuck with infrastructure on such dinosaurs and need to deploy a fresh new project. Anything made in the last 3-5 years likely won’t build due to at least openssl even if you get it to otherwise compile. Docker may not run. Postgres may not run. Go binaries? Yeah, those also have issues. It’s like putting yourself into a time capsule with unbreakable windows - you can see how much progress has been made and how much easier your life could’ve been, but you’re stuck here.
Old systems are stable, but there’s a fine line between that and stagnation. Tread carefully.
Most of our .NET workloads are still for .NET Framework, and only now we are starting to have Java 17 for new projects, and only thanks to projects like Spring pushing for it.
Ah, and C++ will most likely be a mix of C++14 and C++17, for new projects.
That's 2 years of the upstream LTS kernel. I would expect that major Linux distributions such as Red Hat's RHEL and Canonical's Ubuntu would continue to do their extended patch cycles against one of the upstream snapshots as they have done in the past. I think 2 years for upstream LTS is probably fine if the vendor patching methodology remains true. This also assumes that smaller distributions such as Alpine are more commonly used in very agile environments such as K8s, Docker Swarm, etc... Perhaps that is a big assumption on my part.
Depends on where the computer is at, I guess. On a desk, 6 years is a pretty long time. In an industrial setting, 6 years is not very long of a lifecycle.
consider that it's 6 years after release of the kernel version
so likely <5 years since release of the hardware in the US
likely <4 years since release of hardware outside of the US
likely <3 years since you bought the hardware
and if you buy older phones having only a year or so of proper security updates is not that unlikely
So for phones you would need more something like a 8 or 10 year LTS, or well, proper driver updates for proprietary hardware. In which case 2 years can be just fine because in general/normally only drivers are affected by kernel updates.
It all comes down to the cycle. When do you enter that 6-year LTS? Is there a new LTS every year or every other year? If you enter 2 years in, or even 4 years in, how much support do you have left?
Do you jump LTS releases, so the one you are on is ending and there's a brand new one available? Or do you go to the one before and possibly have only 2 or 4 years left...
What kind of breaking change would take longer than 2 years to deal with? The reality is that people wait out the entire 6-year period and then do the required months of work at the end. If you make the support period 2 years they will just start working on it sooner.
With the "never break userspace" guarantee, is there ever a reason to want to be on an LTS kernel instead of the latest stable one, other than proprietary kernel modules?
Yes, ZFS. When new, incompatible kernels are released the dkms build for the ZFS kernel modules will fail. By switching to an LTS kernel, I no longer have to worry because my kernel lags so far behind.
The alternative is using a pre-built package repo which will prevent the kernel from updating to an incompatible version using the package dependencies. I lived that way for years and it is an awful experience.
The original intent of the license authors is irrelevant. The aspect of the CDDL that makes it incompatible with GPL is present in the GPL too. Neither license is more or less "dogshit" than the other, they are the same. The difference is the CDDL only applies to code written under the CDDL, whereas the GPL spreads to everything it touches.
If Linux had been under the CDDL, ZFS would have chosen another license. Sun management at the time saw Linux as their primary competitor, and ZFS and DTrace were the crown jewels of Solaris. Just open sourcing them was reported by the people involved to have been a long internal struggle, and there's no chance they would have let the Linux distributors use them for free.
Good or bad, it's the result of another era. Still impressive stuff. It's only recently that things like btrfs and eBPF became usable enough, and not in all situations.
> The original intent of the license authors is irrelevant.
what on earth is that supposed to mean? ZFS is not in the Linux kernel because Sun and then Oracle deliberately decided to do that and continue to want that to be the case. The Linux kernel can't be re-licensed, (the Oracle and Sun code in) ZFS could be relicenced in ten minutes if they cared.
> The aspect of the CDDL that makes it incompatible with GPL is present in the GPL too. Neither license is more or less "dogshit" than the other, they are the same. The difference is the CDDL only applies to code written under the CDDL, whereas the GPL spreads to everything it touches.
It means that the original authors could have originally intended to write a recipe for chocolate chip cookies and somehow accidentally wrote the CDDL. That wouldn't change a thing and it wouldn't make the CDDL any better or worse since it would have exactly the same words. The intent is irrelevant, all that matters is the end result.
> ZFS could be relicenced in ten minutes if they cared.
Indeed, I hope that they do. A copyfree license like the BSD licenses would make ZFS significantly more popular and I think would have saved all the effort sunk into btrfs had it been done earlier.
That aspect of the GPL is what any software end-user should want, all the source code, for every part of what you are using.
It is a shame Oracle hasn't released a CDDLv2 that provides GPL compatibility, they could solve the incompatibility quite easily, since CDDLv1 has an auto-update clause by default. I think some of OpenZFS has CDDLv1-only code, but that could probably be removed or replaced.
Oh :-\ Thanks for the warning, I guess I'll have to remain vigilant. Switching to LTS certainly significantly reduces the frequency of incompatibilities, so I'm definitely going to remain on it, but I guess its not the perfect fix I thought it was.
Some kind of breakage is pretty common in random recent kernels. It might not affect you this time, but do you really want to risk it?
So yes - you do want to be on an LTS kernel. But you only need to stay there for about a year until the next one is released and you can test it for a bit before deploying.
That question applies to LTS kernels too. Do you really want to risk that a backport of an important fix won't introduce a problem that mainline didn't have? Do you really want to risk that there are no security vulnerabilities in old kernels that won't get noticed by maintainers since they were incidentally fixed by some non-security-related change in mainline?
My work uses "tailored and curated fixes in LTS" and at home I use "accidental and experimental fixes in bleeding-edge". I've had way more stuff break because of the former than because of the latter, and not just with the kernel.
Lots of third party software hooks the kernel for various things, such as drivers for enterprise RAID or proprietary networking, and whatever it is Nvidia does these days. Those go far beyond the user space interface and are dependent on binaries staying unchanged. This is for them.
If you are running user space applications only on upstream supported hardware, there is no reason to stay with long time supported kernels, just follow the regular stable which is much easier for everyone.
GPU drivers are routinely the biggest chunk of the kernel (both source and runtime) and have the most surface area to have bugs in them regardless of their openness.
My phone is running a minor version of 4.4 from March 2023. Kernel 4.4 is originally from 2016. This means that they are still patching the kernel after 7 years, even though it's no longer an upstream LTS version.
Hardly, given how the Linux kernel is an implementation detail, Linux drivers are considered legacy (all modern drivers should be in userspace since Treble, written in Java/C++/Rust), and the NDK doesn't expose Linux APIs as a stable interface.
So not something to build GNU/Linux distributions on top of.
The drivers in userspace are part of the GKI initiative[0], not Treble [1]. Treble deals with separation between Vendor, System and OEM. It establishes a process (CTS & VTS tests) to ensure system components (HALs) stay compatible with whatever updates Google makes to Android, but it deals with the base Android, not the Kernel specifically.
Historically, Treble predates GKI, which was created after OEMs disregarded Treble, since Google had the clever idea of leaving Treble updates optional for OEMs.
> Binderized HALs. HALs expressed in HAL interface definition language (HIDL) or Android interface definition language (AIDL). These HALs replace both conventional and legacy HALs used in earlier versions of Android. In a Binderized HAL, the Android framework and HALs communicate with each other using binder inter-process communication (IPC) calls. All devices launching with Android 8.0 or later must support binderized HALs only.
GKI only became a thing in Android 12 to fix Treble adoption issues, as you can also easily check, and GSI was introduced in Android 9, after userspace drivers became a requirement in Android 8 as per link above.
in my opinion for anything internet connected not updating the kernel is a liability
security patches of an LTS kernel are as much updates as moving to a newer kernel version
custom out-of-tree drivers are generally an anti-pattern
the kernel's userspace interface is quite stable (see the sketch below)
automated testing tools have come quite a way
===> you should fully update the kernel; LTS isn't needed
the only offenders which make this hard are certain hardware vendors, mostly related to phones and IoT, which provide proprietary drivers only and also do not update them
even with LTS kernels this has caused tons of problems over time. Maybe the 6-year LTS being abandoned, in combination with some legislatures starting to require security updates for devices for 2-5 years *after they are sold* (i.e. > released), will put enough pressure on vendors to change this for a better approach (whether that is user-land drivers, in-tree drivers, or better driver support in general)
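To illustrate the "stable userspace interface" point in the list above: the contract Linux promises never to break is the syscall boundary (and, by extension, the libc wrappers over it), not anything inside the kernel. A small sketch, with getpid chosen arbitrarily and the file name being my own placeholder:

    /* stable_abi.c - a binary that talks to the kernel only through the
     * syscall interface keeps working across kernel upgrades without a
     * rebuild, which is the whole point of "never break userspace". */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        long raw = syscall(SYS_getpid);   /* raw syscall number: stable ABI */
        pid_t wrapped = getpid();         /* same call via the libc wrapper */
        printf("getpid via syscall(): %ld, via libc: %d\n", raw, (int)wrapped);
        return raw == (long)wrapped ? 0 : 1;
    }

Proprietary kernel drivers sit on the other side of that boundary, which is why they, and not normal applications, are the ones that break.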
My opinion is that 6 years is not enough, I would target 10 years.
But I guess the core of the issue is planned obsolescence in Linux internals: namely, the fear of missing out on features which require significant internal changes and which, if Linux went without them, could lower its attractiveness in some ways.
It all depends on Linus T.: arbitration of "internals breaking changes".
Do you have the resources to achieve that target and still move the project forward?
If I could ask anything from Linus it would be to be a little more relaxed about the "never break userspace" rule. Allow for some innovation and improvements. There are bugs in the kernel that have become documented features because some userspace program took advantage of that.
Arbitration, which is ultimately in the hands of Linus T.
Where ABI stability is paramount for Linus T., ABI bugs will become features.
The glibc/libgcc/libstdc++ folks found a way around it... which ended up even worse: GNU symbol versioning.
Basically, "fixed" symbols get a new version, BUT sourceware binutils is always linking with the latest symbol version, which makes generating "broad glibc version spectrum compatibility" binaries an abomination (I am trying to stay polite)... because glibc/libgcc/libstdc++ devs are grossely abusing GNU symbol versioning. Game/game engine devs are hit super hard. It is actually a real mess, valve is trying technical mitigations, but there are all shabby and a pain to maintain.
Basically, they are killing native elf/linux gaming with this kind of abuse.
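For anyone who hasn't run into this: the usual workaround when you want a single binary to run across a wide range of glibc versions is to pin older symbol versions at build time. A minimal sketch, assuming an x86-64 glibc target (whose baseline version is GLIBC_2.2.5); the classic example is memcpy, whose default version moved to GLIBC_2.14:

    /* old_glibc.c - pin memcpy to the old symbol version so the binary also
     * runs on pre-2.14 glibc systems.  Compile with -fno-builtin-memcpy so
     * the call actually goes through glibc instead of being inlined. */
    #include <stdio.h>
    #include <string.h>

    /* Bind our memcpy references to memcpy@GLIBC_2.2.5 (x86-64 baseline)
     * instead of the newer default memcpy@GLIBC_2.14. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void)
    {
        char dst[16];
        memcpy(dst, "hello", 6);
        printf("%s\n", dst);
        return 0;
    }

It works, but it has to be repeated for every versioned symbol you touch, directly or through static libraries, which is roughly the "shabby and painful to maintain" territory being described here.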
Good move! At least this will push those pesky Android OEMs to make their drivers available upstream. It's time we had a standardized environment for mobile/embedded use cases like the one we have for PCs. All these stupid little devices filling the junkyards because some greedy OEM didn't want to update their drivers is ridiculous.
(Disclaimer: I work for Red Hat, but I don't work on the kernel. I'm a user-land mammal, and sometimes work with kernel maintainers to debug issues.)
It makes total sense and I support this. I've met some of the upstream Linux maintainers at conferences over the years. Some (many?) of them are really oversubscribed and still plug away. They need relief from the drudgery of LTS stuff at some point.
Anyone here involved in backporting fixes to many stable branches in a user-land project will relate to the problem here. It's time-consuming and tedious work. This kind of work is what "Enterprise Linux" companies get paid for by customers.
Why? You have the right to backport fixes yourself to your heart’s delight. Right to repair doesn’t require someone else to do the work for you for free.
> Right to repair doesn’t require someone else to do the work for you for free.
The term maybe not, but the proposed legislation totally does. Same as warranties or customer protection or not using toxic materials or ... ; none of that is "for free" to the manufacturer, but it is mandatory if you want to be allowed to sell your product.
But what if legislation actually required the Linux kernel, say, to have LTS for... every major release? Every point release? That would be bad law and should absolutely not exist. If I'm running a community project I have a big FU for anyone trying to impose support requirements on me. Which was actually a rather hot topic at an open source conference I was just at in Europe.
Whatever you came up with in your mind sounds very weird and, yeah, obviously should not exist. That has nothing to do with the actual law though.
> I have a big FU for anyone trying to impose support requirements on me
Nobody talked about any of that?
Product: Samsung phone. Requirement: Samsung needs to keep that device usable for N years.
To meet that requirement Samsung will also need kernel updates. Whether that means doing them in house, paying someone else, making updates more seamless to easily upgrade or ... . The requirement to find a way to make that work is on Samsung, not you.
> Product: Samsung phone. Requirement: Samsung needs to keep that device usable for N years.
What does "usable" mean though?
If I were to go and dig my old Apple Centris 650 [1] that I bought around 1994 out of my pile of old electronics, and if the hardware still actually works, it would still be able to do everything it did back when it was my only computer. It is running A/UX [2], which is essentially Unix System V 2.2 with features from System V 3 and 4 and BSD 4.2 and 4.3 added.
Even much of what I currently do on Linux servers at work and on the command line at home with MacOS would work or be easy to port.
So in one sense it is usable, because it still does everything that it could do when I got it.
But it would not be very good for networking, because even though it has Ethernet and TCP/IP, it wouldn't have the right protocols to talk to most things on today's internet, and the browsers it has don't implement things that most websites now depend on.
So in another sense we could say it is not usable, although I think it would be more accurate to say it is no longer useful rather than unusable.
Exactly, but Samsung does, and they use the Linux kernel. And this change affects them, and is particularly untimely for them if laws require future long-term support from them (not from Linux). That was the comment you are replying to.
Of course, Linux can just say "not my problem" -- the law does not affect them directly. The discussion topic is whether, with this change in law, companies like Samsung will be willing to invest lots of money to get sufficiently long LTS versions and hence lead to a change in position on the Linux side. ... or a switch to Fuchsia.
So it’s for Samsung to decide if they want to take a different approach or not. And yeah it’s not a kernel.org problem and there’s actually likely no easy mechanism for Samsung to pay for LTS given the work is mostly done by a bunch of people working for many different companies. I think the Linux Foundation only pays the salaries of three maintainers—including Linus.
Device manufacturers can provide support for kernel used by their products themselves, pay some distro vendor to do it for them or contract maintainers directly. You expect Linux or Greg to do it for free because EU says so or what?
How? It's open source and Free Software, literally guaranteeing the right to "repair" your code. Maybe I don't understand your question, but it seems totally unrelated to the concept of an open source support cycle.
It's really disturbing that software is the only thing whose stability seems to be decreasing very significantly over time, and dragging down everything that it's embedded in.
Who wants constant changes and breakage? Who wants software that's in constant need of updating? I'm pretty sure it's not the users.
The point of LTS kernels is that they do get constant updates, i.e. that security patches are backported. There is no world in which you can avoid updating frequently.
There are many more updates than security updates coming to LTS kernels all the time.
More often than not, kernel fixes aren't even categorized in any way. Only for very prominent vulnerabilities is the security impact clear to a larger audience.
So does it mean that Linux is rolling out updates, but these updates do not consider security? Just curious about this; I just started using Linux and this topic is interesting to me.
It means that there are bug fixes all the time, but most of the time no one sorts these into "security" and "non-security" categories.
I remember a message (I can't find it right now) where this is explained. Basically the thinking is that a lot of bugs can be used to break security, but sometimes it takes a lot of effort to figure out how to exploit a bug.
So you have some choices:
* Research every bug to find out the security implications, which is additional work on top of fixing the bug.
* Mark only the bugs that have known security implications as security fixes, basically guaranteeing that you will miss some that you haven't researched.
* Consider all bugs as potentially having security implications. This is basically what they do now.
I understand the sentiment, while also looking at the computing-device market, where the fight for repairability is pretty harsh and far from a given.
Same for cars, which are becoming less and less repairable and viable in the long term; medical devices are now vulnerable to external attack vectors to a point the field didn't predict, and I'd assume it's the same in so many other fields.
One could argue those are all the effect of software getting integrated into what were "dumb" devices, but that's also the march of progress society is longing for... where to draw the line is more and more a tough decision, and the need for regulation kinda puts a lot of the potential improvements into the "when hell freezes over" bucket. I hope I'm deeply mistaken.
There's always time to refactor and tinker with incomplete migration, always time to debug and patch and fuss with updates, but never time to sit down, gather all the requirements, redesign, and reimplement.
Ultimately it's just a question of objectives. Usually software isn't written for its own sake. It's written to achieve a goal, meet a customer need, generate revenue, prove a market, etc. You can achieve those goals without gathering all the requirements, redesigning and re-implementing most of the time. Not in aerospace, or biomedical maybe, where we are willing to pay the outlandish velocity penalties. But most of the things we do aren't that.
Generally, if you want to build a pristine, perfect snowflake, a work of art, then you'll be the only one working on it, on your own time while listening to German electronica, in your house. [1] Nothing wrong with that - I have a few of those projects myself - but I think it's important to remember.
Linux is hoping to adjust the velocity-quality equilibrium a little closer to velocity, and a little further from quality. That's okay too. Linux doesn't have to be everything to everyone. It doesn't have to be flawless to meet the needs of any given person.
> Not in aerospace, or biomedical maybe, where we are willing to pay the outlandish velocity penalties.
… because we really needed X version of software out yesterday? This incredible “velocity” that you speak of has created monstrous software systems, that are dependency nightmares, and are obtuse even for an expert to navigate across. In the rush to release, release, release, the tech sector has layered on layers of tech debt upon layers of tech debt, all while calling themselves “engineers”… There’s nothing to celebrate in the “velocity” of modern software except for someone hustling a dollar faster than someone else.
Gathering requirements doesn't have to take months. It can take hours or a few days. You can even do it in the middle of maintaining a product, e.g. as part of steps towards addressing a feature request or a refactoring. It doesn't strike me as unreasonable that, in the 50 years of building UNIX-style kernels and the 30+ years of Linux development, someone would have written down some functional requirements somewhere.
That is how C rolls. In theory you need to formally verify every single C program to ensure that it does not violate memory safety. Yes, that is akin to the same straitjacket that people complain about in that iron oxide language.