First of all, I wholeheartedly applaud Marcan for carrying the project this far. Both as individuals and as a team, they did great things. What I can say is that a rest is well deserved at this point, because he really poured his soul into this and wore himself down.
On the other hand, I need to say something, though not in bad faith. He needs to stop fighting with the winds he can't control. Users gonna be users, and people gonna be people. Everyone won't be happy, never ever. Even if you integrate everything from the application level down to the silicon, not everyone is happy with what Apple has accomplished technically. Even though Linux makes the world go round, we have seen friction now and then (tipping my hat to another thing he just went through), so he needs to improve his soft skills.
Make no mistake, I'm not making this comment from high above. I was extremely bad at this, and I was bullied online and offline for a decade, and being on the right side of the argument didn't help, either. So I understand how it feels, and how he's heartbroken and fuming right now, and rightly so. However, humans are not an exact science, and learning to work well with people, alongside strong technical chops, is a literal superpower.
I wish Hector a speedy recovery, a good rest, and a bright future. I want to finish with the opening page of Joel Spolsky's "Joel on Software":
Technical problems are easy, people are hard.
Godspeed Hector. I'm waiting for your return.
For the last few years, I've been saying the following regularly (to friends, family and coworkers): communication is the hardest thing humans will ever do. Period.
Going to the moon, launching rockets, building that amazing app... the hardest thing of all is communicating with other people to get it done.
As a founder (for 40+ years and counting), I manage a lot of different types of people, and communication failures are the largest common thread.
Humans have a very, very tough time assuming the point of view of another. That is the root of terrible communication, but assumptions are right up there as a big second.
On the Marcan thing... I just want to say: control what you can and forget the rest (yes, this is straight from stoicism). Users boldly asking for features and not being grateful? Just ignore them. Getting your ego wrapped up in these requests (because that's what it is, even if he doesn't want to admit it) is folly.
I contributed to Marcan for more than a year. I was sad to see the way it ended. I wish him well.
> Humans have a very, very tough time assuming the point of view of another. That is the root of terrible communication, but assumptions are right up there as a big second.
That's very true. I recommend that people read "The Four Agreements", because that thin book has real potential to improve people's lives through active and passive communication.
Also worth being aware of is Robert Kegan's adult development model [0] or something similar; that gives people a framework to go from "humans seem" to some actual percentages and capabilities.
Spoiler, but approximately 66% of the adult population make do without being able to maintain their own perspective independently of what their social circle tells them it is. I imagine that would make it extremely challenging to determine what someone else's perspective is. Especially if that perspective is being formed based on empiricism rather than social signalling.
And if we're making book recommendations, Non-Violent Communication is a gem of an idea.
That's pretty fascinating, thanks for sharing it! It's a pretty compelling explanation as to why some people seem to be completely unable to logically explain their reasoning for certain beliefs and just fall back to "well it should be so because everybody says so."
Ta. I learned about it from my favourite HN comment (https://news.ycombinator.com/item?id=40856578) and have spent the last 6 months wondering why people don't bring it up more. It may just be a model, but it has a lot of explanatory power for why there seem to be so many "stupid" people around. I don't really have a word to describe them; people who are technically reasonable but not convinced by arguments or evidence.
Marcus Aurelius wrote extensive personal reflections in his "Meditations". Seneca wrote detailed letters to friends and family discussing philosophy, life, and death. Epictetus discussed death extensively in his Discourses, but sure, they were philosophical teachings rather than personal goodbyes.
They focus on acceptance and equanimity rather than formal farewells.
That said, "control what you can and forget the rest" is indeed stoicism, albeit simplified.
If they've written "many many words over thousands of years" on the merits of their philosophy, they are also perfectly capable of writing multi-paragraph goodbye letters. That's the bearing it has on the parent's claim. And many did.
Why you felt the need to add your comment is a more apt question.
> If they've written "many many words over thousands of years" on the merits of their philosophy, they are also perfectly capable of writing multi-paragraph goodbye letters. That's the bearing it has on the parent's claim. And many did.
Eh, not really - "multi-paragraph goodbye letters" here refers to the overly dramatic fad that internet denizens sometimes engage in when they leave communities, and they tend to have a lot of whining.
Those types of goodbye letters are not the types of goodbye letters stoics would write.
> Why you felt the need to add your comment is a more apt question.
If you were able to pick up so swiftly what the person I replied to was implying, you should also have been able to pick up that I replied because I disagreed with that implication.
I doubt this, but would be curious to see a source.
> You could then just say that you disagree and state your case, without rudely asking why they posted it.
I didn't find it rude at all, and your reply was far less productive than my IMO neutral question. You took offense on behalf of someone else and inserted yourself when it was unnecessary and entirely reliant on your interpretation and perception. Now we're discussing your perceived slight instead of anything of substance.
> He needs to stop fighting with the winds he can't control. Users gonna be users, and people gonna be people. Everyone won't be happy, never ever.
Right - but it kinda sounds like he's facing headwinds from a lot of different directions.
Headwinds from Apple, who are indifferent to the project, stingy with documentation, and not inclined to reduce their own rate of change.
Headwinds from users, because of the stripped down experience.
Headwinds from the kernel team, who are in the unenviable situation of having to accept and maintain code they can't test for hardware they don't own; and who apparently have some sort of schism over rust support?
Be a heck of a lot easier if at least one of them was on your side.
> Headwinds from Apple, who are indifferent to the project, stingy with documentation, and not inclined to reduce their own rate of change.
That is part of the challenge he chose to take on.
> Headwinds from users, because of the stripped down experience.
Users can be ignored. How much you let users get to you is your own choice.
> Headwinds from the kernel team, who are in the unenviable situation of having to accept and maintain code they can't test for hardware they don't own
You don't have to upstream. Again, it's not the kernel team that chose to add support for "hostile" hardware, so don't try to make this their problem.
> and who apparently have some sort of schism over rust support?
Resistance when trying to push an entirely different language into an established project is entirely expected. The maintainers in question did not ask for people to add Rust to the kernel. They have no obligation to be welcoming to it.
> Be a heck of a lot easier if at least one of them was on your side.
Except for the users, all the conflicts are a direct result of the chosen work. And the users are something you have to choose to listen to as well.
"Their boss" - I'm not sure that boss is best word here.
"did ask for it" - did he? Because from my perspective it looks more like he gave the bone for corporations so they will shut up for rust in kernel. After some time it will end up "Sorry but rust did not have enough support - maintainers left and there were issues with language - well back to C"
Another uphill battle that I haven't seen anyone mention is just how good mobile AMD chips got a year or so after the M1 release. I wouldn't buy a Mac to run Linux on it when I can buy a Lenovo with equally soldered parts that'll work well with the OS I wanna run already.
A lot of it is simply AMD getting onto newer TSMC nodes. Most of Apple's efficiency head start is a better process (they got exclusive access to 5nm at first).
That's my understanding as well, as soon as the node exclusivity dropped they were ballpark equal.
Many ARM SoCs are designed to run on battery only, so the wireless packages and low-power states are better; my AMD couldn't go below 400 MHz.
But yeah the "Apple M hardware is miles and leagues away" hypetrain was just a hypetrain. Impressive and genuinely great but not revolutionary, at best incremental.
I hope to be able to run ARM on an unlocked laptop soon. I run a Chromebook with a MediaTek 520 chip as an extra laptop and it gets two days of battery life; AMD isn't quite there yet.
> But yeah the "Apple M hardware is miles and leagues away" hypetrain was just a hypetrain. Impressive and genuinely great but not revolutionary, at best incremental.
It's more nuanced than that. Apple effectively pulled a "Sony A7-III" move: they released something one generation ahead of everybody else, and disrupted everyone.
Sony called "A7-III" entry level mirrorless, but it had much more features even when compared to the higher-end SLRs of the era, and effectively pulled every other camera on the market one level down.
I don't think even they thought they'd keep that gap forever. I personally didn't think so either, but when it was released, it was leaps and bounds ahead, and it forced other manufacturers to do the same to stay relevant.
They pulled everyone upwards, and now they continue their move. If nothing else, they also showed that computers can be miniaturized much further. The Intel N100 and Raspberry Pi / Orange Pi 5 provide so much performance for daily tasks that things unimaginable at that size are considered normal now.
I like the Sony story, but I don't think Apple did "pull everyone along" like that; they had an exclusivity deal with TSMC to be first on a good manufacturing node improvement. They took their high-quality, high-performance iPhone SoC and gave it more juice and a bit better thermals.
It's just another "Apple integrating well" story.
Their SoC is huge compared to competitors' because Apple doesn't have to make a profit selling an SoC; they profit from selling a device plus services, so they can splurge on the SoC. Splurging on the SoC plus being one node ahead is just "being good". If anything, the team implementing Rosetta are the real wizards doing "revolutionary cool shit".
> they had an exclusivity deal with TSMC to be first on a good manufacturing node improvement.
...plus, they have a whole CPU/GPU design company as a department inside Apple.
Not dissimilar to Sony:
Sony Imaging (the camera division) designed a new sensor using the new capabilities of Sony Semiconductor (the fab), and used their exclusivity to launch a new camera built on top of that new sensor. Plus, we shall not forget that Sony is an audiovisual integration powerhouse. They're one of the very few companies that can design their own DSPs, the accompanying algorithms, and the software on top, and integrate it all into a single product they manufacture themselves. They're on par with Apple's integration chops, if not better (Sony can also horizontally integrate from Venice II to Bravia, or mics to Hi-Fi systems, including everything in between).
The gap also didn't survive in Sony's case (and that's good). Nikon and Fuji use Sony's sensor fabs to tap their capabilities and co-design sensors with the fab side.
Canon had to launch the R series and upscale their sensor-manufacturing chops, just because Sony "integrated well", when viewed from your perspective.
Sony is also not selling you the sensor. It's selling you the integrated package, from sensor to color accuracy to connectivity to reliability and service. The A7-III has integrated WiFi and an FTP client to transfer photos. The A9 adds an Ethernet jack for faster transfers. Again, integration within and between ecosystems.
>But yeah the "Apple M hardware is miles and leagues away" hypetrain was just a hypetrain. Impressive and genuinely great but not revolutionary, at best incremental.
Compared to the incremental changes we'd seen in the AMD/Intel space over the ten years before it arrived, it was revolutionary.
Was Intel switching from the Pentium 4 to the Core architecture considered revolutionary at the time? Was AMD's bulldozer architecture? I don't recall.
We must have different definitions of the word "revolutionary". They put a high-end mobile chip in a laptop and it came out good; what's revolutionary about that? The UMA architecture has advantages, but it's hardly revolutionary.
The jump in performance, efficiency, and battery life was not incremental or "evolutionary". Such jumps we call revolutionary.
What they did doesn't matter. Even if they had merely taken an Intel laptop chip and stuck chewing gum on it, the result was revolutionary.
So much so that it lit a fire under Intel's ass and mobilized the whole industry to compete. For years after it came out, the goal was to copy it and beat it.
What did you expect to call "revolutionary"? Some novel architecture that uses ternary logic? Quantum chips?
And some of these Lenovos are relatively upgradable too. I'm using a ThinkPad I bought refurbished (with a 2 year warranty) and upgraded myself to 40 GB of RAM and 1TB of SSD (there's another slot too if I need it). It cost me $350 including the part upgrades.
Prices seem to have risen a bit since I bought mine. Here's a similar model with a Ryzen 5 7530U for $355: https://www.ebay.com/itm/156626070024 It is certified refurbished and has a two year warranty. It has a SODIMM slot and supports dual SSDs, although not the full size M.2.
It's not just that "people are hard" - it was clear that this will end up this way the moment marcan started ranting on social media about having to send kernel patches via e-mails. Collaborating on software development is a social activity and stuff like convincing maintainers to trust you and your approach is just as important part of it (if not more important) as writing code. Not realizing that is a sure road to burnout (and yes, I'm just as guilty of that myself).
> Not realizing that is a sure road to burnout (and yes, I'm just as guilty of that myself).
Humans are shaped by experience. This is both a boon and a curse. I have also been on the hot end of the stick and burned myself out, sometimes rightly, sometimes wrongly. Understanding that I didn't want to go through this anymore was the point at which I started to change.
> Collaborating on software development is a social activity, and stuff like convincing maintainers to trust you and your approach is just as important a part of it (if not more important) as writing code.
Writing the code is at most 5% of software development IME. This is what I always say to the people I work with. I absolutely love writing code, but there are so many other, more important activities around it that I can't just ignore them and churn out code.
> Writing the code is at most 5% of software development IME.
This really depends on what you work on. And how good the managers are on your team. I talked to a manager at Google once about how he saw his job. He said he saw his entire job as getting all of that stuff out of the way of his team. His job was to handle the BS so his team could spend their time getting work done.
This has been my experience in small projects and in very well run projects. And in immature projects - where bugs are cheap and there’s no code review. In places like that, I’m programming more like 60% of the time. I love that.
But Linux will never be like that ever again. Each line of committed code matters too much, to too many people. It has to be hard to commit bad code to Linux. And that means you've gotta do a lot of talking to justify your code.
I did some work at the IETF a few years ago. It’s just the same there - specs that seem right on day 1 take years to become standards. Look at http2. But then, when that work is done, we have a standard.
As the old saying goes, if you want to go fast, go alone. If you want to go far, go together. Personally I like going fast. But I respect the hell out of people who work on projects like Linux and chrome. They let us go far.
Even in the Google example, it's still in the low percentages when you view it as a system. All the manager did was efficiently allocate resources. It didn't reduce the non-programming work - it simply moved it elsewhere.
Someone who is in a management position, has good political skills and good connections will be way more efficient at doing some of this non-programming work.
This is something that even C-levels forget. Something that takes a CTO 2 minutes to do can take several months for a regular developer to achieve, and I have plenty of experience with, and plenty of examples of, that.
Yeah. I think the whole drama around rust on Linux is a great example of this. If Linus came forward and clearly supported (or clearly rejected) rust on Linux, it would have saved a lot of people months of stress and heartache.
It really depends on what kind of code and for what usage.
People might also enjoy their hobby dev experience more if they were really coding for themselves, without any expectation beyond pushing the code to a repo. As a hobby dev, you don't have to make packages, you don't have to have an issue tracker, you don't have to accept external contributions, and you don't have to support your users if you aren't willing to take that on your shoulders. You don't even need a public git repo; you could just put a release tarball on your personal website when a release is ready.
This works perfectly fine as long as you're happy with being approximately the only user of your code. With some of my projects I do just that, but it gets very different once you add users to the mix.
5%? Sure, there is a lot of activity around software. But out of a 40-hour week I most certainly code more than 2 hours. If this is your workplace, I think it's dysfunctional.
Are you implying that if you can communicate but have nothing backing it up, that's worth 95%? If anything, code can still be taken as-is and understood by someone else. So to me, it's always more important to be able to produce something before being able to communicate.
In a sense, yes. I'm contributing to a small but crucial part of a big project, as the coordinator of a four-person team. The dynamics of the project form the team as a "band of equals": everybody has approximately the same capabilities, roles are divided organically, and I somehow ended up as the "project manager" for that group.
Even before we started coding, we wrote an RFC. We talked about it, discussed it, and ironed it out with the chief architects of the project. When everything made sense, we started implementing it. The total number of coding hours is almost irrelevant, but it's small compared to all the planning, and the work is almost finished now.
The code needs to tap into and fit a specific place in the pipeline. Finding and communicating this place was crucial. The code itself is not. You can write the most sophisticated code in the most elegant way, but if you don't design and implement it to fit into the correct place, that code is toast, and the effort is a waste.
So yes, code might be the most enjoyable (and sometimes voluminous) part, but it's 5% of the job, by weight, at most.
Once your proof of concept gains traction more time is spent in meetings with other teams responsible for the systems you'll be interacting with - making sure you do it "right" rather than scrappy. Once your initial release starts getting popular you spend more time looking at metrics and talking to operations folks to make scaling easier and not waste resources. Once your product starts having customers who depend on it, you spend a lot of time working with product to figure out features that make sense, advise on time estimates, mentor new team members, and advise teams who use your product/services as a dependency.
These are all engineering tasks, and the longer you spend on a team/in a company, the more likely it is you provide more value by doing this than by slinging code. You become a repository of institutional knowledge to dispense.
Think about it the other way around: How much code is written and never used? How much code is written that would be better off never used? How much code is used, only for someone to then notice that it doesn't solve the business problem it was intended to solve? How much code is run without anyone ever noticing that it doesn't solve any business problem?
All the while, you are correct: being able to produce something that solves a problem is much more valuable than being able to talk about it. But unlocking that value (beyond solving your own problem) absolutely requires communication.
It's more like writing the code is just the first step on a long road. You won't go anywhere at all if you don't take it, but if that's the only thing you do, all you've done is take the first step.
I have written plenty of code that's stuck on this first step in my life, including some that went to the very same LKML we're talking about here right now. Some of those things have already been independently written again by other people who actually managed to go further than that.
Perhaps "useless" was the wrong word the GP used. "valued" may be better.
It's fairly common for very useful/valuable code to be discarded because the engineer (or his management) failed to articulate that value to senior leaders as effectively as someone else with inferior code did.
> it was clear that this would end up this way the moment marcan started ranting on social media about having to send kernel patches via e-mail. Collaborating on software development is a social activity, and stuff like convincing maintainers to trust you and your approach is just as important a part of it (if not more important) as writing code.
Yeah but FFS, using email for patches when there are so many better ways of doing development with git? The Linux Foundation could self-host a fucking GitLab instance, and even in the event of GitLab going down the route of enshittification or closed source, they could reasonably take over maintenance of a fork.
I get that the Linux folks want to stay on email to gatekeep themselves from, let's be clear, utter morons who spam any GitHub PR/issue they can find. But at the same time it makes finding new people to replace those who will literally die out in the next decade or two so much harder.
It's an interesting phenomenon where people keep coming out of the woodwork to criticize the most successful software development project in history for doing it wrong.
They're not a microkernel! They're not TDD! They're not C++! They're not CVS! Not SVN! Not SCRUM! Not GitLab!
Yet the project marches on, with a nebulous vision of doing a really useful kernel for everyone. Had they latched on to any of the armchair-expert criticism of how they're doing it wrong all these years, we wouldn't be here.
> Yet the project marches on, with a nebulous vision of doing a really useful kernel for everyone.
The question is - how long will it march on? The lack of new developers for Linux has been a consistent topic for years now. Linus himself isn't getting younger, and the same goes for Greg KH, Ted Ts'o and other influential leads.
When the status quo scares off too many potential newcomers, eventually the project will either wither, or too-inexperienced people will drive it into a wall.
The people in charge decided on their preferred ways of communication. You may believe that there are better ways out there, and I may even agree with you, but ultimately it's completely irrelevant. People responsible decided that this is what works for them and, to be honest, they don't even owe you an explanation. You're being asked to collaborate in this specific way and if you're unable to do it, it's on you. If you want to change it, work your way to become a person who decides on this stuff in the project, or convince the people already responsible. Notice how neither of those are technical tasks and that they don't depend on technical superiority of your proposed methods either.
> Yeah but FFS, using email for patches when there are so many better ways of doing development with git?
You are missing one point, namely that email is probably the only communication medium that's truly decentralized. I mean, with most email providers you can export your mailboxes and go somewhere else. You can have a variety of email clients and ways to back up your mailboxes. No git clone and no specific mailbox or server is in any way special; I think Linus emphasized recently that they made efforts to ensure kernel.org itself is not special in any way.
Yes, I find GitHub's or GitLab's UI, even with all the enshittification by Microsoft and whatnot, better for doing code reviews than sight-reading patches in emails. And yet I cannot ignore the potential danger that choosing a service — any service! — to host kernel development would make it The Service, and make any migration way harder than what you have with email. Knowing life, I'd say pretty confidently that the outcome would be both mailing lists and The Service, both mandatory, with both sides grumbling about undue burdens.
Have you ever been in a project which had to migrate from, say, Atlassian's stack to Github, or from Github to Gitlab, or vice versa? Heck, from SourceForge + CVS/SVN to Github or similar? Those were usually grand endeavors for projects of medium size and up. Migrate all users, all issues, all PRs, all labels, test it all, and you still have to write code while it all is happening. Lots of back-and-forth about preserving some information which resists migration and deciding whether to just let it burn or spend time massaging it into a way the new system will accept it. Burnout pretty much guaranteed, even if everyone is cooperating and there is necessity.
But you could probably build tools on top of email to make your work more pleasant. The whippersnappers who like newer ways might like to run them.
I personally don't think GitHub's PR model is superior to e-mail based patch management, for two reasons: e-mail needs no additional middleware at the git level (I can fetch my mail and start working directly on my machine), and e-mail is one of Git's native patch management mechanisms; a rough sketch of that flow is just below.
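For anyone who hasn't tried it, this is roughly what the native flow looks like (a minimal sketch; the list address and file names are made up for illustration):

```sh
# Contributor: turn the last two commits into mail-ready patch files,
# with a cover letter for the series.
git format-patch -2 --cover-letter -o outgoing/

# Send them straight from git (SMTP settings live in your git config).
git send-email --to=some-subsystem@lists.example.org outgoing/*.patch

# Maintainer: apply a series saved from the mail client as an mbox,
# adding a Signed-off-by line in the process.
git am --signoff series.mbox
```

No forge, no web UI, and nothing in the middle besides a mail server; the same mbox can be archived, mirrored, or re-applied anywhere.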
This is not about spam, server management, or GitLab/Gitea/whatever issues. This is about catering to the most diverse work methods possible, and removing bottlenecks and failure points from the pipeline. If GitLab is down, everybody is blocked. Your mail provider is failing? It'll be up in 5 minutes tops, or your disk is full probably, go handle it yourself.
So Occam's razor rules out all the complex explanations for mail-based patch management. The answer is concise in my head:
> Mailing list is a great archive, it's infinitely simpler and way more robust than a single server, and keeps things neatly decentralized, and as designed.
This is a wind we can't control. I, for one, am not looking at kernel devs and saying "What a bunch of laggard Luddites, they still use e-mail for patch management". On the contrary, I applaud them for making this run so smoothly for this many years. Also, is it different from what I'm used to? Great! I'll learn something new. It's always good to learn something new.
Because, at the end of the day, all complex systems evolve from much simpler ones, over time. The opposite is impossible.
> Your mail provider is failing? It'll be up in 5 minutes tops, or your disk is full probably, go handle it yourself.
Well, until you deal with email deliverability issues, which are staggeringly widespread and random. Email was great for sending quick patches between friends, like you'd exchange a USB key for a group project. For a project the size of Linux? It doesn't scale at all. There is a reason why Google, Meta, Red Hat, and [insert any tech company here] don't collaborate by sending patches via email.
The problem with mail-based patch management is that it doesn't scale well, management-wise... when you have hundreds of patches and multiple reviewers who can review them, GitHub/GitLab environments make it easier to prioritize the patches, assign who will do the review, filter the patches based on tags, and keep track of what hasn't been reviewed yet...
Mail-based patch management is fine for smaller projects, but the Linux kernel is too big by now. It sure is amazing how they seem to make it work despite their scale, but it's kinda obvious by now that some patches can go unnoticed, unprioritized, unassigned...
And open source is all about getting as many developers as possible to contribute to the development. If I contribute something and wait months to get it reviewed, it will deter me from contributing anything more, and I don't care what the reason behind it is. The same goes for contributing something and getting an argument between two or more reviewers about whether it's the right direction, with no definitive answer from a supervisor of the project, while the situation drags on for months...
Email is just the protocol. What you're really saying is that http-based protocols make more powerful tools possible.
It's not really enough to state your case. You have to do the work.
On the surface, the kernel developers are productive enough. Feel free to do shadow work for a maintainer and keep your patch stack in GitLab. If it can be shown to be more effective, lots of maintainers are going to be interested. It's not like they all work the same way!
They just have a least common denominator which is store-and-forward patch transport in standard git email format.
Everyone still has at least the base branch they're working on and their working branch on their machine; that's the beauty of working with Git. Even if someone decides to pull a ragequit and perma-wipe the server, once all the developers push their branches, the work is restored (a rough sketch of that re-seeding is below). And issues can be backed up.
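A minimal sketch of what that re-seeding could look like from any one developer's clone (the host name and remote layout here are made up for illustration):

```sh
# Point the existing clone at a freshly provisioned, empty server.
git remote set-url origin git@new-host.example.org:project.git

# Push every local branch and every tag back up to re-seed it.
git push --all origin
git push --tags origin
```

Repeat this from the other developers' clones and the union of everyone's branches is back on the new server.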
> Also, is it different from what I'm used to? Great! I'll learn something new.
The thing is, it's harder and more cumbersome at a time when better solutions exist. Kernel developers routinely complain about being overworked and about the onboarding of new developers being lacking... part of the cause is certainly that the Linux kernel is a massive piece of technology, and another part is that the social conventions of the Linux kernel are very difficult, but the tooling is also very important - Ballmer had a point with "developers, developers, developers".
People work with highly modern tools in their day jobs, and then they see the state of Linux kernel tooling, and they say "WTF I'm not putting up with that if I'm not getting paid for it".
Or to use a better comparison... everyone is driving on the highway at the same speed, but one car decides to slow down, so everyone else overtakes it. The perpetual difficulty many open source projects have accommodating changing times and trends - partially because a lot of small FOSS is written by people for their own individual use! - is IMHO one of the reasons there is so much chaos in the FOSS world and many private users would rather go for the commercial option.
You are missing the entire point. When you interact with a group of people who already have a culture and a set of practices/traditions, you have to play by their rules, build up credibility with that community... and then maybe, down the road, you can nudge them a little to make changes. But you have to have credibility first, have established that you understand what they do and understand why their preferences are the way they are.
If you approach it from the viewpoint that you have the solution and they are Luddites, you will influence no one and have no effect.
Marcan's career as a developer includes lots of development on hostile systems, where he jailbroke various consoles to allow homebrew.
Asahi Linux is similar, given how hostile and undocumented Apple Silicon is, but it comes with a great number of expectations around feature completeness, and additional bureaucracy for code changes, which really destroys the free-wheeling hacker spirit.
I understand. While I'm not as prolific as he is, I grew up with systems that retrocomputing fans now meticulously restore and use, so I had to do tons of free-wheeling peeking and poking.
What I found is that having this "afterburner mode" alongside "advanced communication" capabilities is what gives you the real edge in real life. So this is why I wish he would build up his soft skills.
These skills occupy different slots. You don't have to sacrifice one for the other.
Probably a few reasons. For Darwin, there are a few small projects, but I think they are all functionally dead. The benefit with Linux, or even the BSDs, is that, sure, you've gotta port to the hardware, but you should get a good set of userland stuff running for 'free' after that. Lots of programs just need to be compiled to target arm64 and they will, at the very minimum, function a little bit. Then you can have package maintainers help improve that. I don't think any of the open source Darwin-based projects got far enough to build anything in userland. So you'd probably just get the Darwin code from Apple, figure out how to build it, and then build everything else on top of it.
The BSDs. You can fork a BSD. Maybe he could try to mainline into a BSD, but he would probably face a similar battle there. Right, once again, the benefit of mainlining into Linux - where there is some (maybe limited) support for including Rust - is that you can narrow your scope. You don't need to worry as much about some things because they will just sort of work; I am thinking of the upper layers of the kernel. You have a CPU scheduler and some subsystems that may not be optimized for the hardware, but at least it is something, and you can focus on other things before coming around to the CPU scheduler. You can fork a BSD, but most would probably consider it a hard fork. I also don't think any of the BSDs have developers who are that interested in bringing in Rust. Some people have mentioned it, but as far as I know, nothing is in the works to mainline any kind of Rust support in the BSD kernels. So he would probably meet similar resistance if he tried to work with FreeBSD. OpenBSD isn't really open to Rust at all.
Why insist on developing in Rust? I mean, I see how it's much cooler and actually better than something like C, but people hugely underestimate how difficult it is to change the established language of a three-decade-old project.
If Rust is the reason you get out of bed in the morning, why not focus on Redox and make it the new Linux? Redox today is much more than Linux was in 1991, so it's not like you would be starting from scratch.
You're probably not as good as Linus at, well, anything related to this field, really. The only way to find out whether you actually are is to do the work. Note that he, too, spent a lot of time whining at people who were perceived as powerful in the field. But in addition to whining, he went and did the work and proved those people wrong.
Mind you, I'm a PHP developer by day, so this Rust-vs-C debate and memory management stuff is not something I've had experience with personally, but the "Rust is magical" section towards the bottom seems like a good summary of why the developer chose to use Rust.
Oh no, I totally agree. I am just saying that, from the perspective of the Asahi Linux project wanting to use as much Rust as they can, that is what they are facing, along with the associated trade-offs.
I personally fall a little more on the side of the Linux kernel C devs. Interop between languages does bring in a lot of complications, and the burden is on the Rust devs to prove it out over the long haul. And yes, that is an uphill battle, and it isn't the first time; tons of organizations go through these pains. As someone who works in a .NET shop, slowly transitioning from .NET Framework to .NET Core is an uphill battle. And that's technically not even a language change!
But I do agree, Redox would probably be less friction and a better route if you want to get into OS dev on an already existing project and be able to go "balls to the wall" with Rust. But you also run into the fact that Redox just has a lot less of everything. That is just because it's a small project.
> Asahi Linux is similar, given how hostile and undocumented Apple Silicon is, […]
«Undocumented» – yes, but «hostile» is an emotionally charged term that elicits a strong negative reaction; more significantly, though, it constitutes a flagrant misrepresentation of the veritable truth as stipulated within the resignation letter itself:
When Apple released the M1, I realized that making it run Linux was my dream project. The technical challenges were the same as my console homebrew projects of the past (in fact, much bigger), but this time, the platform was already open - there was no need for a jailbreak, and no drama and entitled users who want to pirate software to worry about.
Which is consistent with marcan's multiple previous blog posts and comments on here. Porting Linux (as well as NetBSD, OpenBSD) onto Apple Silicon has been no different from porting Linux/*BSD onto SPARC, MIPS, HP-PA and other platforms.
Also, if you had had the chance to reverse-engineer a closed-source system, you would know that «hostile» has a very specific meaning in such a context: it refers to a system that has been designed to resist reverse-engineering attempts. No such resistance has been observed on the Apple Silicon computing contraptions.
> No such resistance has been observed on the Apple Silicon computing contraptions.
I think they even left a "direct boot from image" (or something similar) mode as a small door to allow Asahi Linux development, if not to accelerate it a little bit, without affecting their own roadmap. Even Hector tweeted about it himself!
I also think calling it hostile is a bit of a stretch. I recall Hector making comments like, "yeah, even though it's not greatly documented, it does quite a few things the way I would expect", and I believe he even applauded Apple on a few things. I seem to recall it was specifically around booting.
Yeah, I want to give them accolades for the great work they did.
I just wanted to also add that users will be users. Once it's out, there will be endless posts about "why X" and "why not Y". No matter what you do, lots of people are going to be displeased. It's just the way things go. I hope he will want to pick it up again after some time.