
This was true at the time, but back then Linux wasn't preemptive, had bad SMP support, and had no good threading implementation.

BeOS was designed for SMP hardware, was preemptive, and had good threading support, but beyond that there is nothing really special about it. "Pervasive multithreading" simply consisted of using threads a lot (for a 90's definition of "a lot"), creating and managing the threads by hand. In 90's C++.
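For a flavor of what "by hand" meant, here's a minimal sketch against the kernel kit's C API (the spawn_thread/resume_thread/wait_for_thread calls that BeOS had and Haiku still documents); it's illustrative, not code from the era:

    #include <OS.h>     /* BeOS/Haiku kernel kit */
    #include <stdio.h>

    /* Every concurrent task meant writing a function like this... */
    static int32 worker(void *data)
    {
        printf("working on: %s\n", (const char *)data);
        return 0;
    }

    int main(void)
    {
        /* ...spawning it suspended, starting it explicitly... */
        thread_id tid = spawn_thread(worker, "worker",
                                     B_NORMAL_PRIORITY,
                                     (void *)"some task");
        resume_thread(tid);

        /* ...and joining it by hand. */
        status_t result;
        wait_for_thread(tid, &result);
        return 0;
    }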

Modern operating systems have caught up with that and gone beyond what BeOS did.


I wouldn't say there was nothing special about it, given the limitations at the time. Having threads spawn quickly, and switching between them fast enough that there was no visible user latency, was one hell of a feat on BeOS hardware--a dual-CPU 66MHz PowerPC system, IIRC.

But yeah, it's something we take for granted today.


Yea, it was cool dual-CPU hardware: https://en.wikipedia.org/wiki/BeBox The two green lights on the front were CPU load meters, one per CPU, and the GeekPort was reminiscent of the Raspberry Pi. Only later did they port the OS to x86, similar to NeXT.


Also, it looked like Optimus Prime, which was kinda cool at the time.


Make the one-line change one commit and the reformatting another, and review only the first one. That shouldn't be a problem with a proper review system.


If every engineer needs to make two commits when they change the build file, that's a higher cost compared to having people dedicated to the migration.


Google's source control system at the time (Perforce) didn't allow for this, at least not easily. Not sure about now.


All changes need to be reviewed. That's the point of code reviews.

Your suggestion would allow people to bypass the code review by just saying "oh it's just cleanup don't worry".


My understanding is the 100k changed files were not reviewed by a human; they did some automatic validation. That validation could also be done on demand, e.g. a commit saying "reformatted" could trigger a check to make sure the files were unchanged apart from formatting and bypass human review... but it sounds like they chose a reasonable approach.

I've always been against "reformat the whole code base" but it's an interesting example where it seems to have been the right choice.


Compilers will actually do these kinds of tricks; I've seen them do it for switch statements with a large number of cases.
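As an illustration of the kind of thing compilers do here (a sketch of the well-known lowering, not necessarily the exact trick discussed above): for a dense switch, GCC and Clang typically emit a bounds check plus a jump table, and for sparse case values they often fall back to a binary search over the constants.

    /* A dense switch like this usually compiles to one bounds
       check and an indexed jump through a table -- O(1) instead
       of a chain of comparisons. */
    const char *day_name(int day)
    {
        switch (day) {
        case 0: return "Mon";
        case 1: return "Tue";
        case 2: return "Wed";
        case 3: return "Thu";
        case 4: return "Fri";
        case 5: return "Sat";
        case 6: return "Sun";
        default: return "?";
        }
    }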


Yes, the lack of a unified server is such a shame. The great selling point of Wayland originally was that it avoided extra context switches by unifying the server and the window manager, but this means every window manager must reimplement the server and every protocol by itself. But in an ideal world, we could have had a single Wayland server and window managers being loaded as plugins in the server process.
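As a sketch of what that could look like (everything below is hypothetical -- no such interface exists in Wayland or wlroots, and the names are made up): the unified server could dlopen a WM plugin and delegate policy decisions to it through a small ops table.

    /* Hypothetical plugin ABI for an imagined unified Wayland
       server; none of these types exist in any real project. */
    #include <dlfcn.h>
    #include <stdio.h>

    struct wm_ops {
        /* Called when a client maps a new toplevel: the plugin
           decides placement and writes the position to x/y. */
        void (*place_window)(int width, int height, int *x, int *y);
    };

    int main(void)
    {
        void *handle = dlopen("./mywm.so", RTLD_NOW);
        if (!handle)
            return 1;

        /* Convention: the plugin exports a "wm_ops" table. */
        struct wm_ops *ops = dlsym(handle, "wm_ops");
        if (ops && ops->place_window) {
            int x, y;
            ops->place_window(800, 600, &x, &y);
            printf("plugin placed the window at %d,%d\n", x, y);
        }
        dlclose(handle);
        return 0;
    }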


> But in an ideal world, we could have had a single Wayland server and window managers being loaded as plugins in the server process.

It is never too late to start improving things.


Ecosystem effects mean that it kind of is. Even if we made the software in question (probably on top of wlroots, maybe even just extending wayfire, which IIRC kind of already has the plugin system you need?), there is zero chance GNOME or KDE would use it, and I wouldn't even bet on sway or the like.


Well, if people are using Gnome or KDE they've already largely chosen conformity, so why care about that?


On the one hand, yes, I'm tempted to just ignore GNOME+KDE because they'll never play nice, and just go build something better without them.

On the other hand, this means that the ecosystem overall stays fragmented and we get an eternal state of "to do X on Windows do foo, on macOS do bar, and on desktop Linux see this wiki page discussing your options". And while I suppose there would be value in e.g. screenshot tools only needing 3 code paths, it would have been nice to only need 1.


The overhead is low enough that my current WM is written in Ruby, and the WM is nowhere near the hot path for anything. The notion that it's a problem to have the WM out of process is nonsense. If I ever write a Wayland compositor, I'll add an out-of-process WM interface, because it's far nicer to be able to restart the WM at will without killing apps.


> If I ever write a Wayland compositor, I'll add an out-of-process WM interface

This is something I've been wondering about for a long time - how long will it take before someone writes a Wayland-based server that provides an interface for window managers, system trays, and other such clients running in separate processes, mimicking what X does?


I think the main thing stopping it is that people get "stuck" on the compositor, since even writing a wlroots-based compositor is far more effort than the bare minimum for an X WM.


How on earth does running a file system in userspace suddenly make file systems safe?

Also, how does putting N million lines of code into N projects magically solve problems?


XFS is adding pretty radical features and reworking its internals in interesting ways; it's actually ext4 that doesn't see that many changes these days.


Personally, I don't "search" with ChatGPT. I ask and talk with it, and that's the big deal and the reason why the current query-based search is dead. Think about your typical Stack Overflow question. With Google you have to come up with a good query and then start the tedious process of looking at the results. With ChatGPT you can directly ask for results, redirect the conversation, etc.


Oh it's even better than that.

I literally had my cursor in my config file the other day and didn't know the option for disabling TLS verification (it's for an internal connection between two private certs). I just put my cursor in the right place, asked Copilot what I needed to disable verification, and it returned the correctly formatted Elixir code to paste in, 2-3 lines. And it was correct.

And I then googled for the same thing and couldn't find that result, so I have no idea how Copilot figured it out.


Same here. And unlike Stack Overflow or any other forum, if you have any additional questions, you don't have to wait for an answer (which could take seconds, years, or forever).


And it's not judgmental. It will recommend a better way of doing things, but especially when you explain why you're doing a certain thing, it will help with actually doing it.


Unfortunately, this will invariably turn the Internet into a barren wasteland.


Perhaps, or it will make low-quality content less prominent on the greater web, such as the many sites that would never turn a profit except by blasting a bunch of ads.


I think you misunderstand.

If you get your content via a proxy, how many content generators will we lose?


This post misses a crucial step in its argument: trying to understand why these abstractions happen. Blaming it on some kind of "techno-moral" decay is not an explanation; it only turns the argument into a kind of arrogant self-congratulation. It's like when Plan 9 fanboys argue about the lost "purity" of Unix.

The anecdote about that kind of security person is interesting, and it's not hard to sympathize with him, but he is missing the point: the industry seems to need someone to run these kinds of pre-made security tools. These jobs are not pointless (perhaps they would be, if the people who actually "know" the other abstraction layers didn't create software that sucks); they are solving some problem, and people are working full time and getting paid for them. And the fact that these jobs exist does not, by itself, tell you whether the tasks they focus on are the right or the wrong abstraction.

> What good does an abstraction do when it breaks and nobody any longer understands how the technology under the hood works?

Not many programmers know how a kernel works internally (processes are just another abstraction). Not many know how compilers work and translate high-level code to machine instructions either (there is probably no person in the world who understands all the parts of LLVM/GCC). The number of programmers who know how CPU instructions translate to transistors is very, very small.

Yet all these abstractions sort of work. People argued back in the day against programming in high-level languages; nobody cares about those people now, because the kinds of problems that can be solved with high-level languages can't really be solved in assembly. Abstractions don't appear because companies are stupid; people are trying to solve problems with software and they need to get some concrete task done. Doing the quick hack does not mean they are doing something wrong; it means they are focusing on doing something right at another level. And if you can't understand that, it's _your_ fault.

Of course, plenty of times companies do stupid things, but that's the nature of the problem: companies try different things, some succeed, some don't, some succeed despite being horrible, and some fail despite being brilliant on paper. So abstractions are created all the time, and there is a continuous dialectic between each abstraction and its usefulness, which is measured not by the opinion of other programmers but by the success of the companies adopting and following these trends. For people who know a lot about systems programming and administration, it may feel stupid that these days we have people with cloud certificates in charge of "orchestrating" scalable and fault-tolerant platforms in the cloud who know very little about how Linux systems work underneath. But it turns out that these people can get things working, even if they don't do it as well as you would, and that's what matters - it means the abstraction sort of works, even if it leaks sometimes.

I guess it's not easy to spend decades learning things only to wake up one day and realise that large parts of your knowledge have been abstracted away and automated (i.e. made less relevant, and thus less valuable in the job market). But that's how things are in this field...


The weirdest thing about this era is that we don't see the big players (e.g. Google) trying to take advantage of the situation and create an alternative.


Googs has shown us in the past that it's unable to make a product people want to use, and that it has no stomach for the slow grind of getting a product to beat the establishment.


Au contraire, they can definitely make products people like, but then they insist on either ruining them or shutting them down.


Fine, I'll rephrase: social products. Remember Google+? You and 10 other people do. Most people's memory of Google+ is of how to avoid it after it became mandatory for all Gmail users, and then it was just gone. So yeah, people really liked that product.


Imo Google (basically by accident) owns the best-placed social network for the next decade - YouTube.

Human-generated content will become rarer and more valuable, which is why the remaining non-walled gardens (Reddit and Twitter) are quickly raising the drawbridge.

Video will be the last medium to fall to AI-generated content pretending to be human-made, and by cloning TikTok as YouTube Shorts they remove the barriers to entry for humans.

Social media nowadays is not about friends; it's about algorithms and creator monetization. YouTube is #1 on monetization by a huge margin.

Any text based social media network will be absolutely overrun with LLMs.


> Imo Google (basically by accident) owns the best-placed social network for the next decade - YouTube.

Does it? I mostly use YouTube on my Apple TV, and the few times I open it elsewhere, I barely check the comments other than to gauge the audience's reception. There are no conversations there.


> Imo Google (basically by accident) owns the best-placed social network for the next decade - YouTube.

That's pretty damn sad. Limiting discussion to some video link is just not very social to me. Facebook was the best overall to me until they fucked it up. As a concept, it is the most diverse in letting users be social with various types of posts. Sure, you can do videos, images, and text-only posts, but there were also the events, direct messaging, and other features that I don't really remember because I haven't been there in 10 years.


Google and new products…

Hahahahahhhaha


Meta is reportedly working on some Mastodon thing.


Instagram has a Twitter-like product in the pipeline now.

Google had its foray into social media more than a decade ago and it failed miserably. Since then they've produced a continuous stream of failed products that never received enough support to last long enough to become viable.


Imagine spending countless hours building up your follower base and accumulating the social media cachet, if you will, only to have Google shut it down in three years. Because that's exactly how that would go.


Nobody would join anything social run by Google anymore. For one thing, we all know it would already be in the process of being shut down, not to mention that nobody (even Google customers) trusts them.


What big players are in a position to make an alternative, as well as be trusted enough that people will migrate to it? What an inane comment.


Meta. I'd move to a Meta-run Twitter in a moment. I use FB Messenger and Instagram every day, and Zuck has so far resisted the pull to the far right that Elon and Jack have given in to.

I'd theoretically trust Apple/Google/MS to own the data and governance, but I don't think any of them are capable of executing.


I bet Digg said the same thing ca. 2010


They can give it a catchy name too with their branding, because it adds so much value to their services. Maybe call it Google+?


A platform where everyone but nazis could say whatever they want (as long as it isn’t something a nazi would say) would be so popular.


I like the shadow-ban system. Let users decide which other users get banned, through downvotes and upvotes on comments.
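A minimal sketch of the idea (a hypothetical rule I'm making up; a real system would need vote-weighting, rate limits, and abuse protection):

    /* Hypothetical vote-driven shadow-ban rule: a user whose
       comments are heavily downvoted in aggregate stops being
       shown to others, without being told. */
    #include <stdbool.h>

    bool is_shadow_banned(int upvotes, int downvotes)
    {
        int total = upvotes + downvotes;
        /* Require some volume before judging, then ban when
           more than 80% of the votes are negative. */
        return total >= 50 && downvotes * 5 > total * 4;
    }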


The users of these glasses will be rich people who love to spend money on shiny new things.


The Vision Pro costs less than two nano-texture Studio Displays.

