
This is good news. Moving to the Network Extension framework means that Little Snitch's filtering will run entirely in user space, which is not only great for security but it will also allow the code to be written in a higher level language such as Swift.
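For a rough idea of what that looks like, a content filter built on the Network Extension framework is a user-space subclass of NEFilterDataProvider that gets asked for a verdict on each new flow. This is only a minimal sketch; the class name and the hard-coded host are placeholders, not anything Little Snitch actually ships:

    import NetworkExtension

    // Minimal sketch of a Network Extension content filter (macOS 10.15+).
    // The system calls handleNewFlow(_:) in user space for each new connection.
    class FilterDataProvider: NEFilterDataProvider {

        override func startFilter(completionHandler: @escaping (Error?) -> Void) {
            // Real code would apply filter settings here to say which flows it wants to see.
            completionHandler(nil)
        }

        override func stopFilter(with reason: NEProviderStopReason,
                                 completionHandler: @escaping () -> Void) {
            completionHandler()
        }

        override func handleNewFlow(_ flow: NEFilterFlow) -> NEFilterNewFlowVerdict {
            // Placeholder rule: drop anything going to 203.0.113.1, allow the rest.
            if let socketFlow = flow as? NEFilterSocketFlow,
               let endpoint = socketFlow.remoteEndpoint as? NWHostEndpoint,
               endpoint.hostname == "203.0.113.1" {
                return .drop()
            }
            return .allow()
        }
    }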



> great for security

That depends, doesn't it? You'll be safe from Little Snitch, but Little Snitch will have less power to protect you.


That's not necessarily true. The article mentions this. While ObDev still doesn't have all the APIs necessary to implement all the features of Little Snitch using NetworkExtensions, they are working on it with Apple and feature-parity is not expected to be an issue for the 10.16 release.


Are they a big enough developer to really influence Apple's API design? I'd think companies like VMware etc. are more likely to be able to push and prod Apple to do something than a little guy like the Little Snitch dev.


I don't think they'd say "we're working with Apple" if that wasn't the case.

Also, I wouldn't be surprised if people working on the Network Extension framework were exactly the kind of people who want to be able to continue using Little Snitch themselves.


My guess: there are people inside Apple who rely on and value Little Snitch, and they have enough influence to work with the development team and ensure APIs are built to align with Apple's security/privacy goals while still allowing Little Snitch to function as it currently does.


Little Snitch might be the penultimate use case for Apple. If they’re using all the features and are a small highly competent team, they’re probably much easier to work with and can iterate much faster than a huge company the size of VMWare etc.


If they're the penultimate, who's the ultimate?


Apple?


Little Snitch is wildly successful, and there are many people in security, myself included, who cannot and will not use macOS as a daily driver without it being available.


Have you tried "Hands Off!"? I found it even better than Little Snitch. With that said, I suspect Hands Off! would have the same problem re: APIs.


It's good but not as good as LS in network management. However, HO does allow for read/write management, which is a significant advantage over LS, which does not have that capability.


there's more than just little snitch here, like "hands off!" [0]

i assume they and other devs in the space also are working with apple to get the necessary API calls for feature parity with prior versions.

[0] https://www.oneperiodic.com/products/handsoff/


we all know how well working with apple/microsoft/google/ibm/whoever-owns-the-platform ends.

apple will make them waste lots of time with one support team while another team implements iSnitch, part of next osx, using private apis.


What if that has an impact on performance? Kernel-user space communication usually means copying data into different portions of memory, plus a context switch.


Windows moved basic graphics driver functionality to user space many, many years ago. (Windows Vista)

> Badly written device drivers can cause severe damage to a system (e.g., BSoD and data corruption) since all standard drivers have high privileges when accessing the kernel directly. The User-Mode Driver Framework insulates the kernel from the problems of direct driver access, instead providing a new class of driver with a dedicated application programming interface at the user level of interrupts and memory management.

> If an error occurs, the new framework allows for an immediate driver restart without impacting the system.

https://en.m.wikipedia.org/wiki/User-Mode_Driver_Framework

Has Windows suffered from this change or has the added stability of having a graphics stack capable of restarting itself on error instead of blue screening the entire machine been a good thing?


I don't remember UMDF as supporting video drivers. It was mostly pluggable stuff like USB storage, sound, etc. But I haven't touched that stuff since 2005 or so.

Anyone that remembers WinNT 3.51 or so would likely remember the horrible video performance before most windows graphics code was moved to the kernel in win32k.sys...


Graphics and network stacks are very different. It sounds like macOS is going to have the entire network stack in the kernel except for extensions; this could be the worst of all worlds for performance.


Gotta get rid of all my Mac OS network switches....

I think they value the security and reliability from evicting those kernel extensions and nobody dreams of using this in some high performance production switch so I think it’s ok.


A lot of Macs get used for video editing where they love their SANs. This cuts into video editing perf.


Just to clarify, this would only be a problem if the user had network extensions installed (and potentially lots of them, depending on implementation)? It could have negligible impact if the video editing workstation didn't have these installed, if I'm reading this right?


I'm just saying there's a lot of use cases where people actually saturate their network connections on workstations, and you shouldn't discount them just because 'I'm not running mac as a switch'.

But yes, network perf is needed only for workflows that involve large remote resources, and not to all video editing use cases out there.


You can still saturate your network connection. It just takes a bit of context switching to userland. So if you have a many-core, hyperthreaded CPU, your 1 Gbit network connection will easily get filled without blinking. This is like the SSL argument at the start of the push to encrypt everything: the world was going to end, and then it didn't.


Context switches can absolutely cut into maximum bandwidth and leave you unable to saturate a network.


In certain machines with very weak CPUs and/or many very powerful connections. For a workstation, assuming mild levels of competence, there's no issue.


No, on workstations and servers, particularly in a post-Spectre world, putting your network drivers into user space will absolutely destroy your perf because of the added context switches.

You'd maybe have a point if it were an L4, but Mach ports are now used as an example of how not to do microkernel IPC because of how much overhead they incur.


A few thousand context switches per second is minor enough even with spectre mitigations, and if you need more than that you failed the "mild levels of competence" test.


Because those DPDK guys are just a bunch of clowns I guess, trying to avoid even the normal one user/kernel transition.


They have a completely different goal, much harder than merely saturating a single network port.


No, with DPDK you fundamentally have a 1:1 relationship between a core and a port. And a lot of the use case is very much normal server-style workloads; it's not just people running network switches with it.


According to https://blog.selectel.com/introduction-dpdk-architecture-pri... they are largely trying to avoid bottlenecks that exist inside the Linux kernel itself, bottlenecks that happen even with zero context switches. That's a totally different problem. Also to avoid having a system call per packet, which falls under "mild levels of competence" for an API designed this decade. Userspace networking also exists to eke out absolute minimum latency, which you don't need just to saturate a port.

When your only goal is to avoid throughput bottlenecks, you don't need anything fancy. Avoid having a context switch per packet and you're most of the way there. A context switch every millisecond, or something in that order of magnitude, is completely harmless to throughput. If it causes your core to process 10% fewer packets than if it had zero context switches, then use 1.5 cores. Context switches take nothing anywhere near a millisecond each.
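To put rough numbers on that (the per-switch cost below is an assumption, not a measurement), a quick back-of-envelope check:

    // Back-of-envelope arithmetic; assumes ~5 microseconds per context switch
    // with mitigations on, which is an estimate, not a measured value.
    let switchCost = 5e-6            // seconds per context switch
    let onePerMillisecond = 1_000.0  // switches per second
    let overhead = switchCost * onePerMillisecond
    print(overhead)                  // 0.005 -> about 0.5% of one core

    // Even a few thousand switches per second stays in the low single digits:
    print(switchCost * 5_000)        // 0.025 -> about 2.5% of one core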


Your citation literally says

> Another factor that negatively affects performance is context switching. When an application in the user space needs to send or receive a packet, it executes a system call. The context is switched to kernel mode and then back to user mode. This consumes a significant amount of system resources.

And they're talking about the socket API, so when they say "a packet" they really mean "any number of packets".

The rest is mainly about metadata that needs to be maintained specifically because kernel and user are in different address spaces and can't directly share in-memory data structures, and that is additionally exacerbated by splitting the device driver away from the network stack like macOS is doing.

The only part that isn't ultimately about the user/kernel split and its costs is the general protocol stuff in the network stack, and that was always the most specious of DPDK's claims anyway.

Just so you know, you're talking to someone who used to write NAS drivers.


> And they're talking about the socket API, so when they say "a packet" they really mean "any number of packets".

It's completely different if you have one switch per packet vs. one switch per thousand packets.

You're taking things to a ridiculous extreme to imply that any amount of context switching is a deal-breaker. There is a specific number of context switches before you reach 1%, 10%, 50% overhead. There are many reasons to avoid context switches besides overhead, but they are all either based on the underlying implementation or simply not critical to throughput. You're oversimplifying, despite your credentials. The implementation can be changed/fixed without completely purging context switches. There are many tradeoffs, and doing pure user-space is a viable way to approach things, but it's not the only approach.

Memory sharing and metadata slowness is an easy bottleneck to have, but the way you avoid it, by changing data structures and how you talk to different layers of code and the device, can be done whether you put it in the kernel, in pure user space, or split it between the two.


> A few thousand context switches per second is minor enough even with spectre mitigations

Wouldn’t these be Meltdown mitigations?


Actually, almost all of the networking stack is moving out of the kernel with Skywalk.


This is quite interesting information! I wish it were closer to the top of this thread.


Where can I find more info about Skywalk?


This is the only public place: http://newosxbook.com/bonus/vol1ch16.html. Skywalk is an asynchronous networking interface designed to be adaptable to a number of different needs. I'm not really a networking person so I don't know a lot about it, but mostly the kernel gets out of your way and asynchronously writes data into a ring buffer of some kind. The goal is to let different use cases customize things to their own needs, so the HTTP stack can write its own custom stuff optimized for that use case, and likewise for the IDS stuff or Bluetooth stuff, etc. Most people quoted a 30-50% reduction in overall CPU usage for a wide range of scenarios.
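For anyone wondering what "writes data into a ring buffer" means in practice, here's a purely illustrative single-producer/single-consumer ring in Swift. This is not Skywalk's actual interface; it only sketches the general idea of one side enqueueing packets while the other drains them without a system call per packet:

    import Foundation

    // Illustrative SPSC ring buffer. A real kernel/user shared ring would also
    // need shared memory and atomics/memory barriers; none of that is shown here.
    struct PacketRing {
        private var slots: [Data?]
        private var head = 0   // next slot the producer writes
        private var tail = 0   // next slot the consumer reads
        private let capacity: Int

        init(capacity: Int) {
            self.capacity = capacity
            self.slots = Array(repeating: nil, count: capacity)
        }

        // Producer side ("kernel"): returns false when the ring is full.
        mutating func enqueue(_ packet: Data) -> Bool {
            let next = (head + 1) % capacity
            guard next != tail else { return false }
            slots[head] = packet
            head = next
            return true
        }

        // Consumer side ("userspace stack"): drains everything queued in one pass.
        mutating func drain(_ handle: (Data) -> Void) {
            while tail != head {
                if let packet = slots[tail] {
                    handle(packet)
                    slots[tail] = nil
                }
                tail = (tail + 1) % capacity
            }
        }
    }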


Doesn't the network stack send everything to userspace (user applications) in the end anyway? As long as it doesn't take multiple round trips...


Multiple round trips is exactly what I'm concerned about. Imagine a connection going from, say, Safari to the kernel to Little Snitch to the kernel to the NIC. It may not work this way though.

Anything tun-based tends to have the same problem.


The firewall is also part of the kernel (dunno about macOS though), so the traffic might not come out.


I think it is likely that only the slow path (first packet of each flow) will move to userland. The fast path will still be handled in kernel.


If this were a big issue, people would have noticed a correspondingly large impact on Windows gaming performance.


All GPU consumers in Windows (AFAIK even the OS itself, apart from the bootloader) are userspace programs calling APIs, so a userspace driver isn't a big problem.

The network stack in Windows is part of the kernel, and I haven't heard of userspace implementations of it like DPDK or PF_RING on Linux. GP is wrong about their performance though, as a userspace implementation can actually improve it (good article from Cloudflare [0]).

[0] https://blog.cloudflare.com/why-we-use-the-linux-kernels-tcp...


We shouldn't ever trade security for performance. Doing that is how Microsoft ended up putting shit like font rendering into the kernel. Made Windows very fast, but made it so much worse when a bug was found.


That's pretty broad. I have a gaming machine with practically no personal data on it, I just want it to be fast. But the tradeoffs for my work machine are way different. Security is ALWAYS a tradeoff. If we wanted perfect airline security we'd fly naked.

Also, it's not like limiting vulnerabilities to user space is always a big improvement. If someone hacks my user account on a single-user computer, they have access to all the data I care about anyway. They could ransomware my stuff even without kernel access.


The trade off is not installing Little Snitch.


A gaming machine with no personal data on it. We call that a console, and they are indeed built for speed above all else.


Consoles are really built for a price point above all else. Hence why they're always lacking in performance compared to contemporary gaming PCs.


They also take security very very seriously.


Consoles take DRM safety seriously, the fact that that aligns with user security is purely coincidental.


Except companies are really quite adept at identifying the person connected to all the "no personal data".


Correct, it's an inversely proportional relationship: security vs. convenience and/or performance. I couldn't care less if my gaming box gets owned, but many others are much more serious about their gaming and would hence have other workarounds.


This is impossible... perfect security would require not having ANY performance. All security is about trade-offs, and the answer can't be "trade everything for security".


Sometimes it does make sense to trade security for performance.


I completely disagree. Can you give me an example? Perhaps you can change my mind.


We ran a 100-petabyte cluster with all Meltdown/Spectre mitigation turned off because there was no foreign code running on it that didn't have access to the data itself.

It's all about the threat model. Engineers at the company were considered trusted actors and they were the only ones permitted to connect. If that layer failed, there is no way cache invalidation errors would be the fastest way in.


A machine which is turned off is much slower and more secure than a machine which is turned on, but for some reason people insist on turning their computers on.

Security mechanisms which prevent you from doing the thing you're setting out to do are worthless. Making a computer too slow to be useful is one of the ways to do that. In this specific case, if moving Little Snitch's functionality to userland means that the performance hit of running it was large enough that I have to turn it off when doing network performance sensitive things (say, video conferences) then it'd be a net loss in security compared to the status quo of it running in kernel mode.


/dev/random vs. /dev/urandom: you could argue that a fresh seed via /dev/random is somewhat better, but you wouldn't block everything constantly to get new entropy.


/dev/urandom is better than /dev/random in almost every case, so much so that on macOS they are identical.
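In Swift on macOS, the usual way to get at that CSPRNG (rather than opening either device file yourself) is SecRandomCopyBytes; a minimal sketch:

    import Security

    // Fill a buffer with cryptographically secure random bytes on macOS.
    var bytes = [UInt8](repeating: 0, count: 32)
    let status = SecRandomCopyBytes(kSecRandomDefault, bytes.count, &bytes)
    if status == errSecSuccess {
        print(bytes)
    }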


I can't give you an example but it's perfectly plausible that many users don't store volatile data on their computer and/or are not careless with downloading and running programs. These users might prefer the extra speed up.


Frequently security = correctness.


Agreed. Remember ancarda, it's always about the threat modeling. Every scenario has different business/user needs, and therefore, different tradeoffs that can/will be made. Sometimes, it does make sense to trade performance for security. (N.B. not always, or actually, probably not most of the time.)


I trust that you're typing your comment from OpenBSD? After all, it's the only modern OS that doesn't compromise against security.


Not yet, though I am working on replacing proprietary software I use with free software that's Linux/BSD compatible.

It's a long journey - started on Windows. macOS is a nice stopgap, but the long-term destination is probably something like OpenBSD or Qubes OS.

Perhaps I'll eventually replace much of the old software on my machine with stuff written in memory-safe languages like Rust. There are some far-off efforts like Redox OS that may well end up being an option for me.

I keep my eye on security developments and I try to improve my situation as and when I have the time/energy to.

EDIT: To say, I have also switched from iOS to Android - after many years of waiting till Android itself became more secure. I've also dumped a lot of non-free software like Google Authenticator for free alternatives like andOTP. I'd like to eventually run something like Replicant or whatever is current/actively developed in the future.


I did the opposite and went from Android to iOS because of security. I'd rather live in this "walled garden" than deal with the vulnerabilities that pop up in Android now and then, the malware that's always popping up in their app store, and the fact that Google is always looking over your shoulder at everything you do; no matter how much you "turn off" things in the OS, it still phones home. Microsoft is the same. I'm tired of it. Not to mention that Android manufacturers' idea of an "update" to the OS means you basically have to buy a newer model, as they often lag months behind on software/security updates from Google, while Apple supports their phones and tablets with updates for years afterward. For instance, Google only provides updates to their Pixel phones for 3 years. Meanwhile, my wife's iPhone 6s is still chugging along with the latest OS after 5 years.

But this is just me. Everyone should use what they are comfortable with.


And here it is, just a few hours after I wrote this and here's yet another story about malware on the Google app store:

https://arstechnica.com/information-technology/2020/03/found...


Install an antivirus for your phone. Problem solved?


This isn't even remotely true, OpenBSD needs to be a usable system too.


Trading security for performance is never a good idea. In this case the downside might be that traffic is able to pass through undetected as a result of moving to user space. If your goal is security through monitoring, can you really trust monitoring software that can't see everything?


The whole point of this application is that the information is bounced to user space.


Only when necessary.


no.

Apple will just slowly write itself into the equation so that little snitch can no longer mess with whatever muddled idea apple seems to think is important.

Already with Catalina you have to connect to apple and ask permission before you can even install little snitch. That means little snitch can't protect you from apple, even if you've told apple "my machine doesn't connect to the internet".

And your machine contacts apple every bit as often as microsoft machines even though their philosophy is supposed to be different.

bottom line: you should not have to ask apple permission to do anything with your machine.


Apple has no reason to care about UNIX philosophy at this point though, do they?


UNIX philosophy is a cargo cult worshiped by FOSS followers that never worked with commercial UNIX vendors, or ever bothered reading GNU man pages end to end.

From those commercial vendors, I have worked with Xenix, DG/UX, Solaris, AIX, HP-UX, and Tru64.

None of them ever cared about being philosophers.


It may be a technically superior API, but even so I'm not thrilled that if I want to stay current with macOS updates past the phase-out period, I have to pay for a Little Snitch 5 license. v4 works fine for me, and without this API deprecation issue I almost certainly wouldn't be interested in upgrading.



