What is triald and why is it taking so much disk space? (eclecticlight.co)
262 points by ingve on March 31, 2022 | 283 comments



>Over the last year or so Mac users have run into problems that appear related to a background service named triald. Some report it stealing huge amounts of CPU, others associate it with various glitches, and a few have noticed gigabytes of disk space apparently being taken up by its folder at ~/Library/Trial, and in their Time Machine backups. This article explains as much as I know about this mysterious new service, and ponders what’s going on.

- I find it so odd, Apple users’ fascination with their own operating system, this sort of voyeur or wildlife-documentarian attitude. It’s your machine, but the inner workings are this ineffable mystery.

>I’m not aware of any control over or opt-out from Trial, although it could be covered in Apple’s general request to share data including panic logs, if you can remember where to control that.

>I first noticed Trial when studying Visual Look Up in the log. At the start of each Look Up, the subsystem com.apple.trial enters the following into the log:

>Initializing TRIClient. Trial version: TrialXP-292.18

>_PASEntitlement: Entitlement "com.apple.private.security.storage.triald" is not present.

>Found entitlement: "com.apple.trial.client" --> <private>

><private> 0x12d9547e0 (no container): using Trial root dir <private>

>This suggests that Trial is recording information about Visual Look Up, although nothing more explicit appears in subsequent log entries.

- Apple can doublespeak all they want about digital privacy, but their actions tell the truth. As a paying Apple customer, you’re not deemed worthy enough to even understand how your own machines work, much less to “tamper” with their workings, and who knows what data is actually being shared?

- Personally, I say no thank you. I’ve been on Debian or a variant for almost 10 years now, and haven’t looked back. It’s only gotten easier, as software generally moves more and more to SaaS web apps and the intrinsic cross-platform nature of the browser.


> It’s your machine, but the inner workings are this ineffable mystery.

At some point, every machine's software becomes an ineffable mystery. Do you know what is going on in all your 3rd-party Linux drivers, say, from NVIDIA?


A system should only be a mystery insofar as you were too lazy to look up how exactly something works. Having an "undocumented service" running wild should be unacceptable, even though it has become so commonplace. Linux isn't free of this; far too many times there is no man page to tell you what something does, but at least you can dig into the source if you have to.


Not with 3rd party drivers. I think people fetishize the desire to have complete control, and it is a myth. I've been using Linux since 1997, and all the fighting for transparency has been a tempest in a teapot with no progress made at ALL.

If people wanted to fight for it, stop using NVIDIA products, but do you see that happening? Not bloody likely.

This isn't so much whataboutism as it is about trying to coerce an ecosystem of hundreds of vendors to listen to a small cadre of open source proponents.

If the Linux kernel forces AMD to improve UAI and make it look like LAM, I'd consider that a victory. Let's see what happens.


You've brought up the Nvidia blobs twice now. You are right that those drivers are distributed without source, and so remain a mystery. The key difference is that the Nvidia drivers are the exception. There are open source drivers available for most GPUs, including Nvidia. Personally, I have never felt the need to use anything other than open source drivers. In short, I'm living the dream: every software component in my computer is inspectable. There are no ineffable mysteries.

This ideal falls short in the hardware realm, where we have to contend with the Intel Management Engine and the like.

Nevertheless, the modern Linux world is vastly more open and accessible than the closed-by-design Apple world. Even if I had to use Nvidia drivers, that would be one binary blob in a sea of open source, whereas the Apple ecosystem is opaque binary blobs as far as the eye can see, without even the courtesy of documentation. It makes a difference.


The open source driver doesn’t work correctly. I use Clear Linux, which follows a stateless design, similar to NixOS or Guix. However, I had to install the proprietary driver because the color profile for my monitor was set up incorrectly and the DE (GNOME) would crash/freeze.


It is not a myth. I know every service running on my personal machines (the Ubuntu workstations at work do get away from me a bit...)

Here's the practical upshot - once, many moons ago, I forwarded port 22 to my home desktop machine so I could access it remotely. I had fail2ban running and regularly checked the logs, so I wasn't worried. What I didn't realize was that maybe 6 months prior I'd made a test account for some purpose, with 'test' as the username and password. Inevitably, some worm managed to guess this combo in less than the 6 tries before fail2ban kicked in. The kicker, though? I was sitting at the machine when it happened, and I could tell immediately. The HDD light was flashing, and I knew it shouldn't be. There was load on the CPU, and I knew there shouldn't be. Checked 'top', spotted the weird process immediately, killed it, locked the account (it never escaped the test account so no harm done) and went on with my day.
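
For reference, a minimal sketch of the triage described above, assuming a standard Debian/Ubuntu box with fail2ban installed (the PID is a made-up example; 'test' is the account name from the story):

    pid=12345                     # the unfamiliar PID spotted in top
    ps -o user,args -p "$pid"     # confirm which account it runs under
    kill -9 "$pid"                # stop it
    sudo usermod -L test          # lock the compromised account
    sudo grep 'Ban' /var/log/fail2ban.log   # review what fail2ban caught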

Can you imagine that story on Windows? "My hard drive light is blinking and I don't know why" - yep, that's daily life. "There's a weird process in the list" - yep, there's lots of those, good luck tracking down which company is responsible.


Nobody forces you to buy NVIDIA hardware. I don't.


Aye, the open AMD graphics drivers are pretty good.

Nouveau will get there, one hopes...


Correct, and not even on ideological grounds for me. Intel's stuff was good enough at the time and a lot cheaper.


> If people wanted to fight for it, stop using NVIDIA products, but do you see that happening? Not bloody likely.

Why would I ever buy Nvidia? Asking as a non-AI developer who doesn’t play PC games.

I’m perfectly happy with Intel graphics. I hear it has good, open-source drivers too.

Exactly what am I missing?


The need isn’t to have complete control in actuality, but instead the ability to control any particular something in the system, or at least understand it.

i.e. the ability to examine and understand something weird that’s going on and that may be a potential problem.


> If people wanted to fight for it, stop using NVIDIA products, but do you see that happening? Not bloody likely.

I don't use Nvidia because of this, and I'm not alone. If you ask "what is the best graphics card for linux" on a linux forum you will invariably get a chorus of "not Nvidia".


That's certainly a single piece of evidence that every machine-- including all Linux distros-- is an ineffable mystery.

Similarly, all fruits have fat content. Just off the top of my head:

1. avocados (a whopping 15% fat content!)

2. there is no legal requirement for me to list another fruit here


I'm pretty sure the most common picture of Linus Torvalds is one of him giving NVIDIA a rather rude gesture with his finger, and the reason was the way they push a proprietary driver and don't work well with an open community.


As someone who used to work at Apple and had this knowledge at my fingertips…

It really is nice to have, when you want to look it up. Userspace daemons and their behavior should not be a mystery.

At an extreme minimum, most daemons on macOS have at least a man page describing their high level function.


Well, I just filed FB9971057: triald should have a man page explaining what it does ¯\_(ツ)_/¯. Unsure it would help a person who is dead set on reading logs to figure out what a daemon does, but it might provide some basic insight to the casual reader.


Where is this documentation? Can you demonstrate how to invoke a man page for a particular background daemon? e.g. an arbitrary process from launchctl list output?


Just type man and then the name of the service in the terminal, like you'd do for awk or vim. For example, man launchd.

man triald will report no manual entry, though.
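
If you want to survey that systematically, here's a rough sketch that checks each launchd job for a man page; it's only a heuristic, since many daemon labels don't map cleanly to man page names:

    # assumes launchctl list's usual three-column (PID, Status, Label) output
    launchctl list | awk 'NR > 1 { print $3 }' | while read -r label; do
        name=${label##*.}    # e.g. com.apple.geod -> geod
        man -w "$name" >/dev/null 2>&1 || echo "no man page: $label"
    done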


I already know about launchd. I am talking about stuff like com.apple.geod and the dozens of other daemons running on my device. Where is that documented?


What's your point? That since some software doesn't play nice and doesn't treat its users with respect, we shouldn't value all the software that does? I don't find this agreeable.


I don't think I meant that, although it might sound like it. I think you're accusing me of an all-or-nothing fallacy, but to me the original argument sounds like an obsession with purity; clearly one line isn't enough to explain that.

The replies to my original post seem to me to be arguing for 100% transparency from the keyboard to the atoms on the finfets. I don't think I'm strawmanning that (maybe hyperbolizing a little, but with RISC-V proponents, maybe not!)

That sounds like a noble goal, like world peace, ending starvation, and curing all modern ills. However, the extent to which tribalism and out-grouping emerges from people who don't use the "superior, pure" solution is nonsense: just look at the tone of the first post, snidely declaring that this is proof that Apple is once again such a bad solution. That's not helpful, and after hearing similar tech arguments for nearly 35 years online, it is tiresome (ironic). On top of that, the dream of transparency seems unlikely because every open source hardware project I've ever seen has failed [1]; there is zero motivation for it.

Again, not to say it isn't a noble goal, but don't fetishize it to the point of outgrouping others.

[1] This is a broad statement, as there is plenty of gadgetry that is open source hardware. I mean: open source hardware for Linux laptops. Or at least hardware that has open source drivers but doesn't have an "Appendix H".


Everything is about trade-offs.

Since I can't meaningfully audit kernel level stuff anyway (not my area) I mostly care whether it works for me (or for the production purpose the customer needs or etc.) and that's not an ideal world but I choose to prioritise open source userspace contributions and running code.

I also run windows 10 on a Thinkpad Helix 2 for my personal machine - with WSL to provide me an X11 environment that I use to ssh to the debian/freebsd/<whatever the customer runs> machines I'm actually doing development work on. I like writing server side code (and tolerate writing browser side code) so being a linux desktop admin would involve a whole extra field of expertise that wouldn't gain me anything, and I'd rather spend the required learning time on writing and publishing more code instead.

Other people are entirely welcome to choose entirely different trade-offs, of course, but this particular attitude works for me.

Edit: Ironically, my one foray into OS X ended because Aqua insisted on reaching into my X11 instance and controlling focus, which completely trashed my fvwm2 FocusFollowsMouse setup in terms of usability for me, so I decided the Microsoft black box was a better trade-off than an Apple black box even though in those days I had to have cygwin for my X11 + ssh setup.


It certainly is. I have Ubuntu 20.04 running on a Multiboot Yoga, an old Dell, and a MacBook Pro 2013 (because this is the only OS that will boot). I use both the default Gnome desktop, and sometimes switch to FVWM2 or MWM (mostly on VNC because I hate all the bells-and-whistles on Gnome).

I hate using Linux on the actual laptop. The touchpad is always a trainwreck. Sensitivity and scroll speed/acceleration are infuriating (too much Xorg config hassle). Trying to get "middle click" working -always- has issues. Power buttons never work properly, same with lid-close actions (I've read many knowledge bases and Stack Overflow posts; none seem to work for my particular hardware, as is always the case). It is frustrating as hell to get it to act like a well-behaved "normal" laptop. Don't get me started on suspend/hibernate issues, or the flaky Bluetooth drivers that always need systemctl restart command-line help.

Now that I've made the tradeoff to not use Linux as anything but a remote server via VNC, my choices are Windows or Mac for a primary computer. I don't like ads showing up in start menu, or idiotic MFA and login requirements, so I'm left with Mac.


If I was using full-screen VNC I'd probably not mind MacOS in theory.

But I don't get on with touchpads (the apple ones are absolutely superior to anybody else's but it's just not my thing) and my wrists -really- don't get on with mice these days, so a trackpoint is pretty essential to my comfort.

Happily, I see the start menu sufficiently rarely that any presence of ads doesn't even register to me, and the login stuff is basically "meh" since it's (so far) stayed out of my way sufficiently for me not to care.

But that's my choice of trade-offs - may you have great joy out of yours :D


Thanks for elaborating. I agree with what you just said: a high level of snobbery won't fix anything, and expecting everything to be "pure" is utopian. That being said, I think the original commenter was just expressing his frustration at the fact that "tech minded" people continue to buy into Apple's ecosystem and then observe the behaviour as though they didn't own their stuff. In a way, not dissimilar to your point elsewhere about how people complain about Nvidia drivers but keep buying their hardware.


> Do you know what is going on in all your 3rd-party Linux drivers, say, from NVIDIA?

Sadly not because, like OSX, that software is proprietary and us filthy peasants are denied the honour of gazing upon the holy source code crafted by the holiest engineers, anointed by Huang himself.

Of course, that's assuming you don't download the Nvidia leak and gaze upon the source code within. That would be bad, you're bad for thinking it, don't do that. Only bad people would do that. May Huang have mercy on your soul.


I only buy AMD GPUs, specifically because AMD provides open-source drivers and NVIDIA doesn't.


Same. I bought Intel for a long time because they Just Worked. It looks like AMD will Just Work _and_ have great performance.


No, but I'm able to read and modify the code for all of the in-tree drivers, which are the only ones in use on my machine.

There's also a large difference between a closed driver or firmware blob, where the function is known but the workings are not, and large portions of the userspace, including system services and applications, that are entirely opaque.
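
For what it's worth, this is checkable rather than an article of faith. The kernel sets a "taint" flag when a proprietary (bit 0) or out-of-tree (bit 12) module loads, among other causes, so a 0 here means everything loaded is in-tree and free; amdgpu below is just an example module name:

    cat /proc/sys/kernel/tainted     # 0 = untainted kernel
    modinfo -F license amdgpu        # per-module license check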


> Do you know what is going on in all your 3rd-party Linux drivers, say, from NVIDIA?

I’d certainly like to.


[The machine] will not fulfill its destiny until you let go of the illusion of control. -- Master Oogway.


This reminds me of Windows Search Indexer on my PC. Microsoft's spying aside, it apparently needs 13.5% or more of my CPU at all times. It started after I updated multiple times in a row, and for the past few weeks it has never stopped. I've killed it via Task Manager, only to have it come back shortly after.


One of my first tasks after a clean install of Windows is to disable any unnecessary services, including the Search Indexer. Not only do I disable the service itself, but I also explicitly remove all folders from the "indexed locations" control panel, and explicitly disable the "Allow files to be indexed" bit on the disk itself.

I've literally never needed to search my disk for a file (using the Windows search interface), and if I ever do, I'd rather wait a few more seconds that one time for my search to complete, rather than have the indexer consume my CPU at all times.


It would be acceptable if windows search worked well and was fast -- but it's not. I search for something and it still churns the file system looking for my search query. What is the point of all that indexing if it still seems to always traverse the fs for each search!? By comparison, a third party search application [1] gives me results as soon as my key lifts and doesn't excessively index (at least for me). [1] https://www.voidtools.com/


A reindex of my entire drive in Voidtools Everything is substantially faster than a search in Windows Explorer over that drive.

That does speak to the remarkable efficiency of Voidtools Everything; as I understand it, they don't use Windows' filesystem stack but parse the NTFS partition directly. And I get that the Windows indexer is technically more powerful because it indexes file metadata like author etc. I still don't get what makes Windows search this slow.


If you use Windows as a daily driver, Voidtools' Everything should be in your top 10 first apps to install (or in your install script...). I have been using it for the better part of a decade, no issues, no resource consumption, just lightning fast searching for files of all drives on your system.


My PC had been fine before the updates (hence not updating until I needed some other thing to work), and I'd never seen Search Indexer pop up in Task Manager in any notable way. Something tells me that I might just have a broken version of it, and since Windows Update also looks to be a bit broken, I may just be SOL for the near future.

This set of problems may be obviated by actually getting Manjaro to cooperate. It's on the todo list.


Try out Fedora! There's Fedora Rawhide if you want rolling release.

I have been using Rawhide (+ mainline, non rc or git, kernel) on my work machine for the past 6 months without issue.

If you have to compile a lot of kmods it can be a pain as everything is transitioning to GCC12 but the kernels are still built with GCC11. A container can take care of that though.


I really want to like Fedora - it gets so much right. But: `yum` is the worst package manager I have ever used on linux by far, and that sours the whole experience. I wish I could get "Fedora with pacman/apt instead".


dnf is what replaced yum, and it's alright.

yum is still aliased on most fedora / rhel systems so you can put Fedora 35 on a drive and run yum update -y and it'll kick off the associated dnf process.


Fedora hasn't used "yum" in 7 years (although the alias still exists). I'm curious whether you've ever actually used Fedora.


dnf replaced yum I believe. dnf is certainly much nicer than pacman, it's slower but does a lot more to ensure your system actually works.


Hm, maybe try using Nix alongside it?


Why is it the worst package manager?


I haven't used it since ~2005, but at that time it was inexcusably slow. Notably, remarkably, shockingly slow. I would dread invoking it, it was so slow. 10x the time to do many operations, as apt/dpkg, and that's not an exaggeration.

I assume that got better, though. I hope it did.


That "yum" no longer exists. dnf has a "yum" alias so that scripts still work, and a lot of the commands are named the same and accept the same arguments, but that's basically where the similarity ends.

And yes DNF is much faster.


You need to go into the Search Indexing settings to remove directories from indexing, and only tick the directories that you want to index.

Also, to completely disable it, you need to disable the service and remove the entry from Scheduled Tasks. It keeps coming back because the service and scheduled tasks are not disabled. This is how Google Chrome keeps starting up its updater and bypassing my disabled startup entries for Chrome: because the scheduled task is not removed, Chrome uses it to reactivate the updater and recreate a startup entry. I managed to completely neuter the Chrome updater through Sysinternals Autoruns.


Windows Search Indexer and Windows Update are probably the worst two components in Windows, though at least Windows Update has gotten noticeably better in the last few years.


Even worse is the Mac search indexer. Ever wonder why your CPU temps are pinned at 90°C in low-power mode? Open Activity Monitor, cmd+F for mdworker, and you've found your culprit. It's insane that they enable that by default...


I've never had spotlight hog things for more than a few hours, and only after a fresh install. It's generally well behaved. And it's very useful and fast for searching.


A few hours is a pretty long time for a CPU to be running at near-T-junction temps. On top of that, every time I pull from a larger repo or unzip a large folder, macOS will decide it's time for me to take a quick 2-hour break while it depletes 60% of its battery. Instead of setting up a complicated ignore list, I just disabled Spotlight indexing altogether.


If the contents of the folder or repo run to thousands of files and folders, then it would make sense for Spotlight to index them all.

It's the same for transferring the folder/files to another location: a single 1GB file will take 5 seconds to transfer to an external hard drive, while a 1GB non-zipped folder (consisting of thousands of files from 1KB to 1MB each) will take half an hour or more. I have a 150GB folder that contains over 500k files, averaging 120KB to 4MB, that takes 3 hours to transfer to the external drive. I have another 120GB folder that consists of videos, averaging 300MB to 10GB, which took 20 minutes to transfer completely. These transfers were from NVMe to a spinning-rust external drive over USB 3.1.

If you are pulling a large repo that has thousands of files in it, it will take a while for Spotlight to index each file. That is not the fault of Spotlight; that is just the way it is, because it has to scan each file individually to index it.


I do wish it was faster. I will say, however, that I never really noticed it on my new M1 macbook. They made it less intrusive, I guess. It used to be quite bad for a few hours and I'd sometimes just walk away and grab lunch, or time my new system install for bedtime. Perhaps the switch to SSDs helps as well, it's easy to forget how slow laptop spinning rust was.


> - Apple can doublespeak all they want about digital privacy, but their actions tell the truth. As a paying Apple customer, you’re not deemed worthy enough to even understand how your own machines work, much less to “tamper” with their workings, and who knows what data is actually being shared?

> - Personally, I say no thank you. I’ve been on Debian or a variant for almost 10 years now, and haven’t looked back. It’s only gotten easier, as software generally moves more and more to SaaS web apps and the intrinsic cross-platform nature of the browser.

Personally, all the paying Apple customers in my family have been able to understand how the system works to make video calls and talk to their grandchildren.

10 years ago we tried installing Linux on an HP laptop we paid for and could barely get a picture opened, because... (do I really need to spell this out?) unreliable display drivers meant that the screen blacked out on a whim.

I find it so odd, Linux users overlooking the fact that people who don't tinker with computers all their life do not have the time to get Linux up & running.


I've been using "linux on the desktop" exclusively for the last two decades and it's always curious to me when people complain about having to be a techno wizard to operate it because for at least the last 10 years I've pretty much had to configure nothing and everything from fancy displays to networked printers has worked without a hitch. I can honestly say that I'm not even sure where I would look to configure or troubleshoot some hardware on my modern linux system because that's just not something I ever have to do.

One thing I did do while I was building that first Linux PC, and for every computer I've bought or built since then, was a tiny bit of research on driver compatibility. Just something as simple as typing '<laptop model> linux compatibility' into Google. Or, if while configuring a new system there is a choice of wifi cards or hard drives or whatever, I just look each one up and check before I buy. It is a minor hassle compared to the ongoing benefits I derive from a Linux system.

Sure, in a perfect world all manufacturers would produce high quality open source drivers for all systems, but we are just not that well coordinated. It is getting better, though.


> One thing I did do while I was building that first Linux PC, and for every computer I've bought or built since then, was a tiny bit of research on driver compatibility. Just something as simple as typing '<laptop model> linux compatibility' into Google

And that's the reason you don't have problems - you're going for hardware/laptops that are well supported in Linux.

Do you expect mom and pop to do the same? Versus buying something off-the-shelf with Windows, or MacOS and it just working, without needing to undertake that step?


They don't put anything on anything. They buy Windows computers for Windows, and Mac computers for Mac. By the same token, they would buy Linux computers for Linux. This isn't the '90s or the '00s anymore.


I dunno, my dad spent 6 months researching his new truck.


I would guess that truck is a much larger outlay than a laptop?

And is your dad a Truck person? He may have enjoyed the research process (heck, I enjoy researching tech purchases).


> having to be a techno wizard

> while I was building that first linux pc

> for every computer I've bought or built since then

> bit of research on driver compatibility

This is pretty self explanatory. Your expectation of what basic computer skills should be is way too high.


I look at that the other way round: the average basic computer skill levels are insufferably low.


How many car drivers do you think can even recognize the parts under the hood? That's how things should be - these are tools, and most users will be interested in using it to achieve something, not the intricacies of building the tool itself.


How many cars can be reconfigured on the fly to act as a sporty racer, a huge work truck, an ATV, a unicycle, or literally any wheeled vehicle you can imagine?

Computers are general purpose tools, the ability to reconfigure them to accomplish different tasks is the core feature that makes them useful.

Not being able to pick out software and hardware that work together is like not knowing how to drive. If you can't do this then you are just a passenger.

Willfully avoiding learning basic programming concepts is like refusing to learn how to change a flat tire. I guess nowadays if your car is always connected to a cell tower and you have a comfortable income you might be able to drive your whole life without ever needing this skill but they were still teaching us how to do it back when I took driver's ed.


I have no expectations, only anecdotes for the intellectually curious. People can obviously do and learn about whatever they like.


My trajectory was ~2000-2011 as a heavy Linux-on-the-desktop user (and Windows, for gaming), but quickly converted to Mac when was issued one at work after that, and that's where I've stayed ever since. I didn't even realize how much time I was wasting and how used I'd gotten to working around shitty, broken stuff, mostly by just avoiding doing things. Once I saw what life was like on the other side, it was a no-brainer to switch.

I try Linux again every couple years, and have every time been quickly reminded why I stopped using it (on the desktop, anyway).


Linux was definitely fickle in the early 2000s. It got a lot better over the last ten years. Meanwhile, whenever I have to use Apple products I feel like they've gone backwards in some regards.

I used to dual boot windows for games. But now I can play games on Linux too.


Do you put Linux on Windows computers? If so, did you also put Mac on them?


I think Linux to many is a hobby much like woodworking, one that can also help feed their identity. I've been there myself.

I don't think you can expect deep empathy and a strong desire to simplify things down to the grandparent level if much of the reason for using the system is for geek cred and a landscape for such endless tuning/tweaking.

I'd even put myself in the latter camp, I like tuning things!

Certainly Linux on the desktop has gotten dramatically better over the last few years, but I don't see it ever getting to appliance level without really cutting out what we typically associate with the flexibility of Linux (more like Android or a set top box)


I feel more like it's analogous to driving a stick shift or baking your own bread. "This is superior; everyone should be doing it!" The problem with that sentiment is that it's 100% true yet totally unrealistic (for reasons unknown -- I don't know why every grandparent out there can't learn to use the command line, from a neurological perspective), and the dissonance erupts like steam from a geyser when those people see the folly of not doing those things.


That's a much better analogy.

I just used an outrageously complex espresso machine to make myself some coffee. For most, that would be purely an inferior choice when compared to a drip machine. I think a lot of it depends on how much you want that last few %. I think a lot of people simply aren't interested in the investment required to get a little more out of their computers.

And as a computer guy, of course that makes no sense to me :)


Linux users probably skew more ethical than the average Mac user. Otherwise, Apple wouldn't make so much money locking down future generations' freedoms.


Try again. Linux Mint is fantastically simple to use, and legitimately "just works" for average-Joe use cases.

Linux, and its various distros, are an ever-evolving tech. Linux 10 years ago was so much less capable than Linux in 2022. Just because it didn't work then, definitely doesn't mean it won't work now.


On what laptops does sleep/hibernate work correctly when you close the lid, with Mint?

Seriously, swear to fucking god, every time someone says “oh Linux just works”, the next comment is “oh? I tried that distro on the recommended hardware and had trouble with…”

Next response is ALWAYS: “Oh yeah that’s broken but I don’t use that feature.” Usually about hibernate. As in, apparently everyone uses a laptop plugged into a wall and never closes the lid. Or uses a desktop. I don’t even know, it doesn’t make any goddamn sense.

I just don’t believe you guys any more. Everyone who says Linux just works has adapted their workflows to be hyper specific to the things they do, and have just decided that basic-ass features present and working on mainstream OSes are just gimmicks that the rest of us should forgo out of moral purity.

Also with this “try again” noise. Good lord, happy y’all are blessed with the patience to try again until you find the right Goldilocks set of hardware and tweaks that doesn’t give you a migraine, but there exists a large contingent of the population that doesn’t spend every waking moment staring at glowing rectangles. Spare us a fucking thought, and spare a thought for our relatives who took years to learn how to double click.


> On what laptops does sleep/hibernate work correctly when you close the lid, with Mint?

If anyone knows how to get XUbuntu 18.04 LTS to not shut down when the lid is closed while on AC power, please let me know. I have a subnotebook that sits in a cabinet to run something, and since a recent "upgrade", it won't stay alive. Searching Google produces at least five articles on this, all contradictory and none of which work. There's a GUI for setting this, it's set appropriately, and that doesn't work. There's a relevant configuration file where the "close lid" option can be set, and that doesn't affect this problem.


Is the relevant configuration file logind.conf, or another one? Whenever I had such issues it was always the one file I had to change.
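
For anyone hitting the same wall, a sketch of the systemd-logind side of it (XUbuntu 18.04 ships systemd 237; note the desktop's own power manager, e.g. xfce4-power-manager, can override these settings):

    # in /etc/systemd/logind.conf, set:
    #   HandleLidSwitch=ignore
    #   HandleLidSwitchDocked=ignore
    # then apply the change:
    sudo systemctl restart systemd-logind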


My Dell Precision 55XX works fine with Ubuntu, everything is supported. For example I have two 4k monitors at HiDPI... which I've been using for over five years, despite frequent mentions on this site that it doesn't work.


Interesting... my new Precision 7560 can't sleep _in Windows_!


Cool, you should talk to ModernMech, he’s having trouble with power management. Or indeed any of the folks on this thread:

https://news.ycombinator.com/item?id=30644113


Not possible to reply any longer.


Re: "I find it so odd, Linux users overlooking the fact that people who don't tinker with computers all their life do not have the time to get Linux up & running."

The statement is a bit out of date: apart from on a PC, most Linux users don't even know they are using it.

I find it crazy that people have apps running on their Macs and Windows desktops and can't even find out what those apps are doing, let alone how they work.

Thankfully we have a choice, each to their own.


> I find it crazy that people have apps running on their Macs and Windows desktops and can't even find out what those apps are doing, let alone how they work.

Have you looked at a Linux syslog lately? This is 20.04 LTS on desktop. It does a lot on its own. Probably too much.

    Mar 31 00:33:18 Nagle-LTS whoopsie[1278]: [00:33:17] Cannot reach: https://daisy.ubuntu.com
Since the system was idle at the time, why did it need to contact that site? I think that's crash logging.

    Mar 31 02:17:42 Nagle-LTS snapd[503370]: storehelpers.go:721: cannot refresh: snap has no updates available: "bare", "blender", "brackets", "chromium", "core", "core18", "core20", "demo-curl", "gnome-3-26-1604", "gnome-3-28-1804", "gnome-3-34-1804", "gnome-3-38-2004", "gnome-system-monitor", "gtk-common-themes", "gtk2-common-themes", "inkscape", "remarkable", "snap-store"
    Mar 31 02:17:42 Nagle-LTS snapd[503370]: autorefresh.go:536: auto-refresh: all snaps are up-to-date
I'm not subscribed to any optional Ubuntu services. Yet:

    Mar 31 02:30:09 Nagle-LTS systemd[1]: Starting Ubuntu Advantage Timer for running repeated jobs...
    Mar 31 02:30:11 Nagle-LTS systemd[1]: ua-timer.service: Succeeded.
    Mar 31 02:30:11 Nagle-LTS systemd[1]: Finished Ubuntu Advantage Timer for running repeated jobs.
    Mar 31 03:05:03 Nagle-LTS systemd[1]: Starting Firmware update daemon...
Something is trying to do a firmware update every few hours:

    Mar 31 03:05:04 Nagle-LTS systemd[1]: Started Firmware update daemon.
    Mar 31 03:05:05 Nagle-LTS fwupd[1434387]: 10:05:05:0005 FuPluginPciMei       ME family not supported for 0:9.0.1.1333
    Mar 31 03:05:05 Nagle-LTS fwupd[1434387]: 10:05:05:0007 FuEngine             failed to record HSI attributes: failed to get historical attr: json-glib version too old
    Mar 31 03:05:05 Nagle-LTS systemd[1]: fwupd-refresh.service: Succeeded.
    Mar 31 03:05:05 Nagle-LTS systemd[1]: Finished Refresh fwupd metadata and update motd.
Apparently it didn't work because, despite this being a current system, something is out of date.

The Ubuntu "snap" system is busy, although it didn't update anything today. Two independent systems seem to be updating "snaps", which are part of Ubuntu's container system for remotely updated applications.

    Mar 31 06:36:49 Nagle-LTS systemd[1]: Starting Daily apt upgrade and clean activities...
    Mar 31 06:37:11 Nagle-LTS dbus-daemon[936]: [system] Activating via systemd: service name='org.freedesktop.PackageKit' unit='packagekit.service' requested by ':1.11660' (uid=0 pid=1437238 comm="/usr/bin/gdbus call --system --dest org.freedeskto" label="unconfined")
    Mar 31 06:37:11 Nagle-LTS systemd[1]: Starting PackageKit Daemon...
    Mar 31 06:37:11 Nagle-LTS PackageKit: daemon start
    Mar 31 06:37:11 Nagle-LTS dbus-daemon[936]: [system] Successfully activated service 'org.freedesktop.PackageKit'
    ....
    Mar 31 06:38:52 Nagle-LTS systemd[1]: apt-daily-upgrade.service: Succeeded.
    Mar 31 06:38:52 Nagle-LTS systemd[1]: Finished Daily apt upgrade and clean activities.
    Mar 31 06:43:56 Nagle-LTS PackageKit: daemon quit
    Mar 31 06:43:56 Nagle-LTS systemd[1]: packagekit.service: Succeeded.
There's a lot happening behind the scenes. Hopefully none of it is hostile or contains backdoors. One wonders.


> There's a lot happening behind the scenes. Hopefully none of it is hostile or contains backdoors. One wonders.

No, one doesn't wonder. One googles that, and then there's an answer. That's exactly the root comment's point here.


And as demonstrated, there is a lot in the logs to guide your search; nothing shown is behind the scenes.

Personally I disable snap but I know what it does and why it is useful.


Put macOS on the same laptop and compare.


You're using a different definition of "understand how the systems works". Your grandmother doesn't understand unix signals.


>It’s only gotten easier, as software generally moves more and more to SaaS web apps and the intrinsic cross-platform nature of the browser.

What is the difference between not knowing what the software on your system is doing, and not knowing what any of your software is doing when it isn't even running on your system?


How's the battery life on linux laptops these days?


It's not the battery life that's the issue on laptops these days, as it is typically better than Windows.

The biggest issue is getting all the hardware supported out of the box. For example, there is a certain amount of mucking about you have to do to get any off-the-shelf laptop running Linux. On rare occasions, system updates change something that the laptop doesn't like, and hunting down that change can become tedious.


The solution for that is easy: don't buy the absolute-latest-generation of any hardware.

I've owned Samsung laptops, HP laptops, Thinkpads. I've built my own work desktops. I've never had any issues, as long as the hardware is ~6 months out on the market.

That's about the time it takes for linux devs to catch up with companies that focus on windows drivers.


That's useless advice. Even webcams from laptops as old as 6 years may still not work under Linux. Anything which has an Intel webcam, for example, is almost guaranteed not to work, outside of perhaps a couple of popular models (like MS Surface devices, and then not precisely "out of the box").

The rule still is to get either a Linux-supported or at least a Linux-popular laptop, even if it's more expensive or doesn't match all your features/requirements. Then it doesn't matter whether it's 6 years old or brand new. It will work.

i.e. it's more important for it to be Lenovo and the correct line than the age. But not all Lenovos are equal. T/P stuff is likely supported out of the box even if brand new. X depends on the popularity of the model. Tablets/IdeaPad stuff... look elsewhere. (Lenovo is just an example; leaving other brands for the reader.)


> That's useless advice. Even webcams from laptops as old as 6 years may still not work under Linux. Anything which has an Intel webcam, for example, is almost guaranteed not to work, outside of perhaps a couple of popular models (like MS Surface devices, and then not precisely "out of the box").

I have 5 laptops on Linux at home, all but one being from a "pro" line (i.e. Lenovo ThinkPad, HP EliteBook, Dell Latitude), and all have a working webcam. I'd say that, with the exception of fingerprint readers, which I never cared to test, all the functionality has been working from day 1.

I think anyone is safe provided you buy professional models, don't buy the latest and newest tech, and avoid Nvidia GPUs.


Any laptop that has a webcam connected to the Intel chipset (the IPU) instead of USB UVC is basically unsupported under Linux. Most consumer laptops use the IPU (e.g. IdeaPads, Miix, HP Elites, Surfaces, all Chromebooks, etc.), while most business lines use much more standardized USB UVC webcams. The reason is cost, as usual. USB UVC is basically a separate device; an IPU webcam is much cheaper (processing is done on the Intel chipset, you can use a MIPI sensor like on a phone, you don't even have a real hw-controlled privacy LED but a software facsimile, etc.).

The IPU3 (Kaby Lake era!) is the only generation that has been upstreamed, and as of today it only supports 2 types of sensors that were used in a couple of Lenovo Miix models. Nothing else! There are some patches floating around for the Surface devices of that era [1], but nothing upstreamed. And of course any newer IPU is out of the question (Intel is now at IPU6).

And it happens that all of the IPU3 webcams that work with the upstream kernel do so mostly because of the effort of just ONE guy, who is not employed by Intel and who did not even know C a couple of years ago (if you happen to be reading this, kudos to you!).

[1] https://github.com/linux-surface/linux-surface/blob/master/p...
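
A quick way to tell which kind of webcam a machine has: a UVC camera enumerates on the USB bus and binds to the in-kernel uvcvideo driver, while an IPU-attached sensor won't show up there at all. A minimal check:

    # a UVC webcam appears with Driver=uvcvideo in the USB topology
    lsusb -t | grep -i uvcvideo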


I can confirm that Surface cameras work fine on Linux (even without a patched kernel), and Ideapad webcams that I've tried also work just fine. Not sure where you're getting this from, or if it's still an issue anymore.


Sorry, but no. https://github.com/linux-surface/linux-surface/issues/91 https://github.com/linux-surface/linux-surface/wiki/Camera-S... (note: a green mark means it works with the patches; a red mark means it doesn't work even with the patches). You even need a different userspace; normal v4l consumers don't work even with the patches.

You have an even older model, I guess. Before SP4 webcams were USB.


I've been using Linux as my main OS for almost 15 years.

Even on very cheap HP laptops, following this rule has worked for me flawlessly so far. Between telling people "you should spend more money on X just to be sure" and "Y is cheaper and it is very likely that it will work", I'd 100% recommend them trying the cheaper option first. Even if it turns out to not work, they can always return it and go to the more expensive alternative later.


The rule is not "spend more money"; that's just bullshit. If you buy the most expensive laptop they have, it is almost guaranteed not to work, because no one else will have bought it. The rule is to keep yourself to the business/professional lines, or otherwise the really popular models.

And not only have I _never_ bought a new laptop (always used/refurbished/thrift), but for the past 12 years I have been submitting patches to Linux to support the laptops I get. Once you get outside of the popular lines, there are very few people doing that. It will not happen in 6 months or 6 years unless you do it. Typing this from an HP laptop which even 6 years after release did not have all WMI hotkeys mapped.


I know you didn't say "buy the most expensive", but you said "stick with professional models", which are comparatively more expensive than general models that can work just fine.


How do you deal with laptops that have both an integrated and an Nvidia graphics card? I can't find a reliable way of having the dedicated card on for just some applications on Linux. It's either always on or always off, which is not as convenient or useful as the on-demand mode available on Windows.


> How do you deal with laptops that have both an integrated and an Nvidia graphics card?

Honestly, I don't.

I think all my laptops had only iGPUs. I always treated my laptops as secondary workstations for the times I was away from my main desktop.


It's been my observation over many years that Nvidia and ATI/AMD video cards are the source of a solid 50% of major computer problems. This seems to hold even for Macs. They're best avoided unless you really need them, at least for machines you plan to do non-GPU-related work on.


> The solution for that is easy: don't buy the absolute-latest-generation of any hardware

Or don't put Linux on Windows hardware. System integration teams are a thing, and one person without access to documentation is going to do it poorly.

Instead, buy Linux on Linux hardware.


Still not as good as Windows, but much better than it used to be, especially if you're using well-tested hardware like a ThinkPad.


My experience with ThinkPads has been terrible. I've lost faith in ThinkPads. Too bad we can't get working Linux hardware at work.

Fortunately, I can buy my personal hardware from whomever I wish, and it certainly will not be Lenovo.


I agree that the quality of the hardware has gone downhill (weak hinges, getting more fragile every year, keyboard quality) but running Linux has been fine. No real complaints.


> running Linux has been fine

That has _very_ much not been my and some colleagues' experience.


Pretty good, especially if you install TLP for power management. I get at least a full workday and a half out of my Dell 9375.
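
For anyone wanting to try it, a minimal sketch for a Debian/Ubuntu system:

    sudo apt install tlp    # power-management defaults for laptops
    sudo tlp start          # apply settings immediately
    sudo tlp-stat -b        # show battery status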


Not OP, but my laptop lives long enough for me to get to a plug point if needed. I have power at work and home and I have suspend turned on when I close the lid. I can live with a few seconds of wake up time, when I have no accessible power for long periods of time. It is really not that complicated.


I dual boot Windows and Linux and I found out I can actually work longer in Linux. Maybe it's related to the fact that Windows services consume more CPU.


It varies, like it does for literally every other combination of hardware and software on the market, but mine gets between 8 (light web browsing) and 12 (terminal programming, no web browser running, wifi disabled) hours on a single charge.

For reference, Windows on the same laptop gets about 6 hours while just sitting there.


Good enough, especially with optimisations.


Does it really matter?!

Are you really that careless with your privacy and your basic freedoms to the point that you make your choices based on the ~battery life~ of your computing device?

Sorry to pick on you, but whenever I hear these sorts of questions I realize how shallow people can be, even those who are (supposedly) informed and smart enough to know better.

To be worried about the inconvenience of ~shorter battery life~ over fundamental freedoms is like failing the marshmallow test for tech people.


1. The marshmallow test has been debunked (just like so many other flashy claims in the social sciences field, the science is questionable and the replication crisis is very real). So using it in a rant about how uninformed people are is itself a laughable irony.

2. Many of us simply do not agree that the use of an Apple computing device impinges on either our privacy or basic freedoms, or (more nuanced) that the marginal risk is so small that it’s not worth the moderately large inconvenience of shorter battery life.


Point 2 is laughable.


1. It doesn't matter if the effects of the marshmallow test are true or not. The idea is just to illustrate how tech people are as weak-willed, short-term focused and plain "convenience-at-all-costs" as the general consumers.

2. There is no argument here: if you can not control what and how your device operates, your freedom is being taken. You can argue that you don't value this freedom over the convenience it brings you, but this is at best an admission of you being on (1).


> 1. It doesn't matter if the effects of the marshmallow test are true or not. The idea is just to illustrate how tech people are as weak-willed, short-term focused and plain "convenience-at-all-costs" as the general consumers.

Can you explain to me what the long term gain I could expect, exactly, by foregoing a MacOS laptop?

I have lots of computing devices; many are Linux, a couple are Windows, and I have an Apple laptop.

This follows two attempts at a Linux laptop, with crummier hardware in many ways, and many inconveniences from Linux-on-the-desktop sucking. It just works, and I have lots of battery life.

> 2. There is no argument here: if you can not control what and how your device operates, your freedom is being taken.

Am I supposed to be a free software zealot? If my computer sucks, that also impinges on my freedom.

Yes, there may be some hypothetical ability to clone lots and lots of repos and do a lot of troubleshooting and cure one source of suck-- and then argue with upstream maintainers and kowtow until eventually my fix is accepted. When the sources of suck on Linux outnumber the sources of suck on MacOS by a reasonably high multiple, I'd rather keep the MacOS device in my bag than the other machines I have that run Linux.

If you're going to argue about some kind of collective loss by many people making the same choice as me-- that we all lose freedom, etc-- spare me. Tragedy of the commons, etc: not enough people are going to make the choice you're posing to have any significant effect.


> Can you explain to me what the long term gain I could expect, exactly, by foregoing a MacOS laptop?

You would not be contributing to an ecosystem controlled by a single entity who is increasingly abusing their dominance to extract more rent from consumers and other developers.


> You would not be contributing to an ecosystem controlled by a single entity who is increasingly abusing their dominance to extract more rent from consumers and other developers.

c.f.

> > If you're going to argue about some kind of collective loss by many people making the same choice as me-- that we all lose freedom, etc-- spare me. Tragedy of the commons, etc: not enough people are going to make the choice you're posing to have any significant effect.

So, I'd be using my market power to advocate for things that are disproportionately important to you at my own expense. That doesn't seem like "freedom".

I've been a Linux user and developer forever. For awhile, I was a kernel subsystem maintainer. I've walked the walk about making free software available and useful.

I also believe having a Mac laptop benefits me, and is worth the money, and that it is not morally corrosive. Can you please spare me the judgment that I'm "weak-willed" or stupid for my choice?


> not enough people are going to make the choice you're posing to have any significant effect.

A billion flies can not be wrong...

I am not personally affected by what you choose to use. I could give two shits about it and go on with my day.

It's you who stands to gain or lose - along with all those that make a similar choice.

> That doesn't seem like "freedom".

You are also free to stuff yourself with sugar, drugs, play russian roulette, sleep around with any willing consenting adult.

But no, being free to do X does not mean that X is morally virtuous. And you don't get to say "spare me your judgement", and you shouldn't be expecting any sympathy if you have to face any consequences when things blow up in your face.


> A billion flies can not be wrong...

This willfully pretends not to understand the argument (I hope). My use of MacOS is not likely to make a big difference in the adoption of MacOS, and therefore ascribing all of the massive network effect of usage to me is a tad unfair.

> I am not personally affected by what you choose to use. I could give two shits about it and go on with my day.

OK. Personally, I'm doing quite fine with MacOS for my laptop. Thanks for your concern, but you don't need to call names and judge me over it. If you really don't care so much, you can spare us all the invective.

> You are also free to stuff yourself with sugar, drugs, play russian roulette

All of these things have likely and measurable short term harms to the individual doing them. Using Apple carries some risk of individual and systemic negative effects, but also some benefits (both individually and communally).

Note, this is true of all things. I just drove to school to teach some classes to kids. I could have died on the way. There are possibly even better moral choices of where I could spend my time that are given up (opportunity costs). But, I judged this is what I wanted to do and the benefits outweighed the risks.

> But no, being free to do X does not mean that X is morally virtuous.

I'm pretty sure that I've done way more to make open systems possible on the desktop than you. The fact that one thing I'm doing isn't moving the needle that direction (but isn't hurting it, either) should be OK. Otherwise, you should immediately stop all activity which isn't helping the cause of open-systems-on-desktop to avoid accusations of hypocrisy.

> you shouldn't be expecting any sympathy if you have to face any consequences when things blow up in your face.

Anyone deserves sympathy when things blow up in their face.


> My use of MacOS is not likely to make a big difference in the adoption of MacOS, and therefore ascribing all of the massive network effect of usage to me is a tad unfair.

It's not all to you. But you are contributing to it, and you shouldn't be excusing yourself on the basis of "look at everyone else doing it".

> I'm pretty sure that I've done way more to make open systems possible on the desktop than you.

This is not a competition to see who is more virtuous or less of a sinner. It is just me arguing that using Apple products is a moral failing.

> you should immediately stop all activity which isn't helping the cause of open-systems-on-desktop to avoid accusations of hypocrisy.

I am not perfect and I am not free of sin. But at least I am willing to call a failing as such. The first step to redemption is to accept your own faults.


> But at least I am willing to call a failing as such.

I think there's room in the world for closed offerings and open offerings, and that they each have their own advantages and contributions to the world (and their own risks).

> The first step to redemption is to accept your own faults.

P'raps calling most of the world "weak-willed" (which I notice you've stealth-edited your prior comment to soften) and implying they may not be "smart" is going a bit beyond accepting your own faults and instead exposing others to harsh judgment for not valuing the things that you do.


This is not about "open offerings" and "closed offerings". It is specifically about how little it takes for people to give away their freedoms. To wit, ~battery life~.


> It is specifically about how little it takes for people to give away their freedoms.

You frame it that way -- most people don't.

I like having a MacOS computer. Lots of daily things work better. Including battery life.

Linux still exists and I still use it elsewhere. If Apple turns evil, I can move to having a Linux laptop again and just be slightly more miserable.


> If Apple turns evil

It is not a matter of "turning evil". They already do "evil" things. It's just that you don't care, because it doesn't affect you personally or the things that you consume from them.


> They already do "evil" things.

I disagree. I think Apple has been a pretty good steward as a commercial vendor, with a few missteps. I have a pretty high degree of trust in them-- though my eyes are always open for any substantive problems emerging.


> 2. There is no argument here: if you can not control what and how your device operates, your freedom is being taken. You can argue that you don't value this freedom over the convenience it brings you, but this is at best an admission of you being on (1).

To illustrate something by exaggerating (one of) the numbers to make it crystal clear what I'm getting at:

A Mac that I don't fully control but has 12 hour battery life feels more free in practice than a fully-open platform with a 1-hour battery life. The latter leaves me less able to use the device for things I want to, and takes up more of my time hunting for wall outlets or just having a dead machine when I need one that's working. Time spent keeping my devices working does not feel like freedom, since I could have used that time for other things.

There's hypothetical freedom to do stuff I don't really care to do (audit or fiddle with my OS), then there's the actual range of liberty-of-action for stuff that I do care to do. Personally, I tend to favor the latter, strongly. The former's just a nice-to-have.


>Are you really that careless with your privacy and your basic freedoms to the point that you make your choices based on the ~battery life~ of your computing device?

Not OP, but yes. In a laptop, battery life is the number one thing I'm looking for. That being said, Linux does fine when it comes to battery life in my experience.


You take a very extreme, all or nothing stance. The rest of us make trade offs.


You make trade-offs on dimensions that do not violate fundamental principles or when you are confronted with a situation when you need to choose the lesser of two evils.

"Inconvenience caused by a shorter-battery life" vs "patronizing a company that is systematically abusing its power to take away freedom from everyone" is not such a case that requires any type of trade-off to be made.


My thoughts exactly. A machine that I paid money for and bought shouldn't hold these kinds of mysteries from me. A binary driver is one thing: I can uninstall it, albeit at the cost of impaired functionality, and I had the option of buying another graphics card, OS, whatever at the outset. Here, I buy hardware, OS and everything else from one vendor, and I can't uninstall, delete or kill a process that's running on my own machine? If it's something malicious on Linux, I can deny it access to parts of the drive, I can kill it or make it non-executable. I can even find the executable and corrupt it so that it won't run.


The existence of a Mac OS service you don't immediately understand really has nothing to do with Apple's stance on digital privacy. There's no logical connection here.


I'd argue that's not a helpful take.

This appears to be downloading and executing new code, and potentially returning results back to the mothership.

That there is no explicit opt-in, or even much information about WHAT this is doing, is sadly what I've come to expect from Apple.


I looked at one of the files:

`~/Library/Trial/Treatments/SIRI_DIALOG_ASSETS/factorPacks/6242008c0f185b3e2ef6bf8e/assets/com.apple.siri.dialog.socialconversationfl#d5b431b/dialog/SocialConversation.catfamily/dalWhoIsYourValentine.cat/en.cat.bin`

The file is gzip compressed, and the decompressed data reveals some strings that are probably related to the question asked in the `cat` file:

tag:valentine";Only localize if Valentine's Day is relevant to your localeJ2�SS[If I could have a Valentine, it would be all 7.5 billion people on Earth. That is<sub alias=",">…</sub> a lot of heart-shaped lollipops.]:�SS[If I could have a Valentine, it would be all 7.5 billion people on Earth. That is<sub alias=",">…</sub> a lot of heart-shaped lollipops.]

So this isn't just about A/B testing but also about feeding current data into Siri to allow some local processing of timely events.

Do note, though, that I have Siri disabled on my Mac, so this is of questionable value.
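To reproduce the inspection, a minimal sketch with stock tools (substitute whichever .bin asset you find under your own ~/Library/Trial):

  # confirm the container format, then decompress and dump printable strings
  file en.cat.bin                  # these assets report as gzip compressed data
  gunzip -c en.cat.bin | strings   # -c writes to stdout and leaves the file intact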


Maybe they need to store that much locally if they want to train models locally? In the end, I guess many folks on HN would give up a few GB of their disk space if it means all ML-related stuff can run locally. And as far as I'm aware, Apple is the only one at least partly doing that.

Would be much nicer if they stated the purpose and allowed for some configuration, though. And verification by third parties is obviously still important.


I've also disabled Siri and my ~/Library/Trial directory is only 3.6M (with no SIRI_DIALOG_ASSETS dir - so no valentines for me).

Maybe you should check that Siri is still disabled on your Mac?


I've had Siri disabled the entire time I've owned my mac (just checked, still disabled), and my ~/Library/Trial dir is 180M, 136530 files.


Interesting, I have it off but still have the folder. Did you do any "extra" steps to disable Siri?


My guess is the data was put there before disabling, and disabling doesn't automatically remove the data.


I just checked and for me it's the same. Siri is disabled and the folder is only 3.6 MB.


Same for me. Siri is disabled and there is nothing in these folders.


I trust that Apple isn't doing anything nefarious here, but at the same time, I'm quite grateful that others aren't so trusting. I want the "white hat" suspicious and skeptical rooting through everything. In fact, I don't think I'd trust Apple so much if it weren't for those that don't trust them. Their constructive distrust enables my trust.


What's the phrase? "Trust but verify"?


Yes, the Russian proverb.

It's even better in Russian: доверяй, но проверяй, doveryay no proveryay

It's better because it shows that both words are semantically related. In fact, "verify" is cognate with their root, vera/veryat', probably a common Indo-European root, but I'm too lazy to verify (pun intended).



Interesting - so more like “verity but verify”. Very cool.


"rest/be assured but ensure" is how I would keep the semantics wordplay, but obviously there's always something lost in translation.


Why do you trust that? Apple has specifically admitted to having a nefarious plan to scan your local files without consent and further use your computer (also without consent) to report on you to law enforcement. This may well be that implementation.

The latest desktop and mobile OSes are quietly shipping code to enable this, despite the uproar that happened when Apple announced it. They let people believe they were stopping the project, but it nonetheless proceeds quietly.

Apple is not trustworthy.

https://en.m.wikipedia.org/wiki/MacOS_Monterey

(Edit for clarity: enable above is used in the "to support" sense. I do not know if such clientside scanning is enabled (in the activated sense) yet, only that Apple has explicitly stated at least twice that it will be. It appears imminent. Some or all of the code has already shipped and is on your machine(s).)


Every cloud provider does CSAM scanning or they'll be chased down by certain three letter gov agencies and/or US senators looking to score easy political points.


This thread is about local file scanning and has nothing to do with the cloud. Apple refers to all photos on Apple devices under the "iCloud Photos" brand name.

Apple is within their rights to scan files on their servers. They already do this now, as you note. This thread is about something else: Apple OS bundled malware that scans files on non-Apple-owned machines without consent, even files that have not left the user's own device. This turns your own local hardware into a snitch against your will. What it scans for is also not within your control.

They do not do this today, but they have stated at least twice that they intend to begin.


> This thread is about local file scanning and has nothing to do with the cloud.

It's completely relevant to my point because it's unlikely they'd be doing this without the pressure from US politicians.

There's also a lot of speculation that it's happening device side so that E2E encryption can be enabled for uploads.


https://en.m.wikipedia.org/wiki/Four_Horsemen_of_the_Infocal...

E2E encryption that leaks information to the middle device about what data is being encrypted is not E2E encrypted.

It's backdoored.

Any system that cannot be used safely by child pornographers to hide from government surveillance also cannot be used safely by other groups that the government does not wish to operate. Those groups will (without any convictions) be labeled "violent domestic terrorists" by Apple and the FBI when their in-group memes are added to the clientside scanning signature list.

This is the bright line. It's your computer and data, not theirs.

Companies need to comply with the law. This is not law! Backroom pressure to deny millions due process is not reasonable, democratic, or conducive to preserving human rights. It has no basis in law, it is how dictators operate.

If Apple is breaking current law, charge them. If they are not, stop pressuring them, or maybe try to pass a law to alter their behavior (with associated public debate), like civilized adults do.

"Beautiful company you have there, Tim Apple. Shame if something were to happen to it if you don't help us spy on your billions of users." is about as undemocratic and authoritarian as you can get.

Apple trying to sugarcoat the anti-customer malware they're potentially getting strongarmed into rolling out is just as morally repugnant.

When a judge issues a search warrant, the accused has legal rights that dictate what is and is not probable cause. No such due process will exist with what is or is not added to the content detection signature list, everyone on the platform is deemed worthy of suspicion and is searched. The ML algorithm will declare you guilty based on a list of hashes, the inputs for which you are not allowed to possess or see (but somehow NCMEC is). There is no additional step before guys with machine guns raid your house and steal every phone and computer in it.

This is an end run around human rights to due process.


Unless you encrypt on the camera, it's impossible not to leak information to the first storage device you save to. It's also impossible not to leak information to any viewer you ever use unless you just want to look at the encrypted bytes and not the photo.


> Why do you trust that?

Not OP, but I'll throw my two cents in. While some people on HN were hyperventilating about the CSAM thing, I was digging into the docs to see what was actually happening. In every instance, I found myself a bit more impressed with how much engineering the folks at Apple clearly put into making it work without compromising privacy, contrary to what people kept saying here on HN. It made me a lot more suspicious of randos on HN and a bit more willing to give Apple the benefit of the doubt.

I'm always amenable to more actual evidence, but as of yet that seems completely lacking. If I were more conspiracy minded I'd wonder if there was an orchestrated effort.


People are rightfully upset about it because there is precisely zero amount of clever engineering that makes using my own hardware to spy on me and potentially report me to the police okay, morally or socially or technically. Literally no one would willingly opt in to such a thing on their computer if prompted. This appears to be the US government exercising an extrajudicial, extralegislative mandate (aka functioning as a dictatorship) with regards to a software publisher, and Apple unilaterally forcing this on users via updates.

This is an authoritarian, tyrannical end run around basic human rights to due process, and if indeed it is an essential prerequisite (per the government) to Apple deploying e2e crypto, then we don't actually have freedom of expression (as Apple would presumably be punished by the federal government for publishing software that supports e2e encryption WITHOUT such literal spyware).

The fact that this is even being discussed is a giant blinking red warning light about the lack of freedom in our society. Both for millions like us, who want the availability of quality tools that don't work against our personal interests and human rights, as well as Apple, who appears to not be allowed to publish software unless it contains backdoors for the cops.

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

That should worry you a lot.

PS: Any clientside scanning whatsoever that the user did not consent to, that reports its results to a remote party (even if only positives) is a privacy violation. You cannot scan local files in any way whatsoever (for a purpose that is explicitly against the user, that is, reporting on suspected illegal activity) without compromising privacy, even if Apple's system works perfectly as described. Clientside nonconsensual scanning is inherently a privacy violation.


Do you have any evidence this is what’s happening?

Why should I trust your declarations?


https://web.archive.org/web/20210903214920/https://www.apple...

(linked from the above-linked wikipedia article as citation for the clientside scanning feature documentation on wikipedia)

Apple said they were doing it.

Apple never subsequently said they were not doing it. (They did say some things that implied that they MIGHT not do it, designed to make you stop being mad at them ("Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.").)

They did, however, reiterate that this was going to be released.

What more evidence do you want than the vendor's direct affirmative statements?

When people plainly tell you who they are: believe them.


This has been discussed ad nauseam. The client-side code isn't looking at your files, it's looking for a signature.

If you don’t have the signature, it won’t do anything.

I’m a privacy advocate, and this approach actually increased my trust, as it was a more palatable option than allowing unfettered server-side access in response to LEO requests (which happens in extreme cases, but not on a routine basis).


Signature scanning of my private files without consent, as well as any potential subsequent reporting to law enforcement, are both human rights violations, if you believe what Apple claims to: that "privacy is a human right".

Privacy includes privacy to break the law until and unless specific probable cause is established in a court. Companies have no moral or legal obligation to get involved in mass scanning to enable proactive law enforcement. "But the children!" does not make this not a privacy violation. Signatures don't make this not a privacy violation.

The system is also ripe for abuse. What happens when the legislature mandates a regulatory body be able to set policy to amend the signature list?

You do not want to live in the world this creates.

This is not about child safety. Think more along the lines of Tiananmen Tank Man.


It was never set to do anything without consent for private files, only for files you give to iCloud for storage.


Apple already scans all photos uploaded to iCloud, as they are not end to end encrypted and are viewable by Apple. This has been going on for ages and is not what is being discussed.

The topic is local scanning on an end user device without the consent of the device owner.

Clientside scanning of files given over to the cloud is nonsensical. Apple already scans serverside.


So far you haven't given any evidence of what you claim, which is that this app is Apple implementing their client-side CSAM detection scheme and silently and secretly including it in Mac OS.

No evidence at all.

So I reiterate: Why should I believe you?


You just moved the goalposts.

Please don’t reiterate obtuse questions when reasonable answers were delivered on a silver platter.


The original implication and still the claim is that this app is for the purposes he describes. "This may well be that implementation." There is no evidence of such.

I'm not moving the goal posts. I'm asking him if he has any evidence that this app is for CSAM detection. So far, I've yet to see any.


I just looked into ~/Library/Trial on my main machine (macOS 12.2.1). I got 380MB spread over 584 files (while TFA's author got 300k???). Most if not all of them are related to com.apple.siri.asr.dictation.en_US. Close to 100% of the space is taken up by .fst files, e.g.

  $ file /path/to/msg-bigG.squeezed_quantized_acceptor.fst
  /path/to/msg-bigG.squeezed_quantized_acceptor.fst: OpenFst binary FST data, fst type: squeezed_quantized_acceptor, arc type: standard, version: 3, num states: 1237632, num arcs: 7376910
Appears to be data files for OpenFST[1], "a library for constructing, combining, optimizing, and searching weighted finite-state transducers". I don't do ML so I stopped there.

[1] https://www.openfst.org/twiki/bin/view/FST/WebHome
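For anyone wanting to check what dominates their own folder, here's a quick sketch using standard tools (nothing Trial-specific about it):

  # list the largest files under ~/Library/Trial, biggest first
  find ~/Library/Trial -type f -exec du -k {} + | sort -rn | head -n 20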


I checked mine, and I see lots of mention of "Siri" (TEXT TO SPEECH, FIND MY, etc). Seems most likely this is A/B testing for ML models, such as those for Siri and Photos analysis?

That's my best guess. Since Siri and Photos now process more locally (especially on M1's dedicated ML cores), all the training improvements and testing probably happen locally as well.

It makes sense it would regularly download updates for ML models. I'm more curious about what data gets uploaded back to Apple...


I think Apple is pretty invested in distributed ML training. I.e. your PC will go through your photo gallery/Siri audio data and use the labels and stuff to fine-tune an Apple-provided ML model. The fine-tunings are then re-uploaded to Apple, added up, and distributed to other users in the next update.

The privacy implications of this are dubious. There is as yet no good research showing how these fine-tuned models can be used to reveal exactly what photos you have in your photo library, but many experts believe that won't be the case for long.


> The privacy implications of this are dubious.

Really? Apple can essentially be asked by Law Enforcement for answers to questions like "Give me all the IMEIs of people in location X, radius 10 miles, that appear to have pictures of cats"


The model fine tunings can't immediately be used to say "this person has pictures of cats". They effectively are just a big diff of the ML weights. It's a big set of floating point numbers which is hard to get any meaning out of.

But experts believe that techniques might be developed in the future to figure out, based on the numbers, what data must have been in the training set (and hence your photos). Nobody's managed to do that yet.


No experience with NNs, just a basic understanding of these things.

If you train one model on A) cats, one on B) dogs, and one on C) dogs and cats, is there any research asking whether you can do a network subtraction C - B = D and get a sensible network D that can loosely do what A does?


Try it... but I predict you'll get a model that doesn't work at all...


> The fine-tunings are then re-uploaded to apple, added up, and distributed to other users in the next update.

IMO this is the kind of statement that should have something to back it up.


> I checked mine, and I see lots of mention of "Siri" (TEXT TO SPEECH, FIND MY, etc). Seems most likely this is A/B testing for ML models, such as those for Siri and Photos analysis?

My guess is dictation.

My Trial folder is about 500MB. I've never enabled Siri on my computers, declining the option during setup at first boot. But I do use dictation quite a bit.

If 500MB is the cost of having dictation work solely on my laptop without hitting the internet, I'm OK with it.

The problem is that people love to invent conspiracy theories about Apple because they have an axe to grind.

They get opportunities to do so because of two things: Many of Apple's daemons and other services are not publicly documented. And because it's running a flavor of Unix, it is a lot easier for amateur computer sleuths to poke about and discover what's running that people outside of Apple don't know about.

Operating systems of the past were more opaque, so we just had to blindly trust the manufacturer. But people can see things better now. It would help rebuild trust if Apple would have a standard reference online explaining all of its OS bits.


I get "56 961 719 bytes (567,5 MB on disk) for 147 851 items" on my work laptop, which is definitely not enrolled in any kind of beta programs, and "56 157 910 bytes (566,4 MB on disk) for 147 806 items" on my home computer, which I unboxed yesterday.


I have a big directory here too (100s of thousands of files); however, I am enrolled in the beta program. Perhaps the super-large version of the directory is only for beta program members, while non-beta participants have the small version.


It could also well be the daemon not cleaning up after itself properly in some obscure circumstances. These things happen way too often.


Macbook Pro M1 Pro, 12.3, Siri completely disabled, as are most of the daemons (the ones that can be), and I also have ~333MB across 303k files.


MBP M1, 12.2, Siri enabled (though I haven't actually used it)

    > du -hs ~/Library/Trial
    2.6M /Users/___/Library/Trial

    > find ~/Library/Trial | wc -l
         160


i have siri off and my numbers are 16 and 2.2M (16" intel MBP, 11.4). fwiw, all of my triald's (outgoing) connections are blocked via lulu, but i don't think that makes a difference here.

for folks with siri on and extra bloat, i'd guess they could reduce it some by turning off "Show Siri Suggestions in App" and "Learn from this App" for each app in the "Siri Suggestions & Privacy" section of siri's system prefs (for the apps they don't care about having siri integration).


"Trial" will gather and store the voice samples if you have "Improve Siri & Dictation" turned on in the Privacy settings. This showed up in Big Sur and is probably the exact same thing from iOS. If you have Siri and that disabled the folder should be pretty small.


Howard Oakley is such a prolific writer on newly appearing macOS issues. Take, for example, this piece. It's collecting new information and trying to form a picture, and that's how the community starts learning. Awesome content.


Indeed. He deservedly gets to the homepage here regularly these days. His coverage of the M1s has been very useful.


Open source is where you don't have to apply guesswork to find out what a suspicious looking piece of software is doing on your computer.


Nope, instead you have to apply guesswork to figure out why an apt-get call accidentally uninstalled your desktop environment. :P


Pretty sure there was no guesswork involved in finding out why that happened, since it was resolved so quickly.


I'm confused. `apt-get upgrade` doesn't remove packages. `apt-get dist-upgrade` tells you what packages it will remove and has you confirm that you want those packages removed before you proceed.

https://www.debian.org/doc/manuals/apt-guide/ch2.en.html

What guesswork is needed?


It was a reference to an infamous incident where Linus Sebastian accidentally removed his DE when trying to install Steam via APT.


I'll take that every day. Because it's not like proprietary software doesn't fuck itself up. For an anecdote: I had a laptop with Win8 preinstalled, and Update just wasn't working one day. The solution? Scouring the internet to find a Microsoft executable that fixed Update. How? We'll never know. Because we're not meant to.


Ah yes, gotta love the mystery .exe "fixers" that are sometimes associated with KB articles.


Or the MS update that nuked people's Documents folder?


Or how Exim ended up on my system when I wanted to install a command-line tool that does not require email sending (and even if it did, why on earth would anybody build such package dependencies?).


Unless you blindly launch every apt-get call with the -y flag, it will ask you for confirmation before uninstalling anything.

If you say yes, it is not an accident but 1. your own decision and 2. easily reversible.


You only have to guess if you ignore the obvious warning and type "Yes, do as I say" indiscriminately.

I think that's fair.


Although you could make this argument in this context I don’t think it is valid. I’m quite certain that someone who’s trying to figure out what some obscure service is doing on their machine would be able to use `apt-get` perfectly fine and wouldn’t ignore any all-caps warnings.


Sure thing, Linus...


"One distribution using one package manager is bad when piloted by an incompetent YouTuber who ignores multiple warnings. Therefore, all of free software is bad."


It's more than just Linus's experience with Pop OS.

So many (consumer-focused) FOSS products have huge, obtuse quirks in them that prevent their widespread adoption, and uber-tech-literate people tend to overlook this because they really want to believe that FOSS can succeed.

In reality, a lot of these quirks come about because of the non-commercial nature of the projects[1]. When a business produces software, the aim is to get as many people as possible to use it. This means a smooth (enough) user experience or death.

For software written by volunteers, the aim is to (generally) satisfy the intellectual curiosity of the people writing it. This means huge issues for users, and QA in basically all forms, are overlooked in favour of working on cool/interesting things for developers.

[1] Yes, I know commercial FOSS exists. I run a reasonably large commercial FOSS project myself. However, the overwhelming majority of FOSS projects out there are volunteer-led and run.


While there's some truth here, you're underestimating the amount of great open source software that's easy to use. You just don't think of it, and it's likely heavily used by commercial companies.

Also, terrible obtuse quirks are certainly not unique to FOSS products. Go ask someone who deals with, I don't know, the software used to run hospitals and healthcare systems (electronic health records, or EHR's) as but one example.


Who said free software was easy to use? It just reduces guesswork. It is often the case that precise tools both enable and require a little more understanding on the part of the users.


No. Widespread adoption is hindered by the status quo: Linux is not the default OS on the PC platform. If governments used Linux internally, mandated that schools teach Linux and LibreOffice, and the Windows bundling thing hadn't happened, and especially if all this had happened 15 years ago, the landscape would be vastly different. Linux wouldn't have fewer quirks, maybe it would have more, but nevertheless people would use it because that would make the most sense to them. This is the only thing that matters, not user experience, not the goal of the software, but things that are orthogonal to the software: its integration into society.


apt deciding it should yeet essential packages is not a new thing and has happened many times to many people.


Been using Debian-derived OSes for 22 years and never had any problems. Dual-boot Windows has permanently BSODed multiple times and required a reinstall.

You can do stupid things in Linux, but if you stay on the rails (I only upgraded to Ubuntu 20.04 last week) you're fine. Windows will just mess your system up every so often without you even doing anything except installing the updates it tells you to.


Apt is just one package manager, and a terrible one at that.


Definitely right about that.

Maybe an individual user can be unaware of what is happening with OSS, but the community as a whole will always have accurate information about what is what (unlike this case, where everyone is guessing).


> but the community as a whole will always have accurate information of what is what

OpenSSL audit would like to have a word with this mythical community. log4j vulnerability, too.


There are a few big problems with these examples.

For starters, the obvious implied suggestion is that these types of vulnerabilities don't exist in commonly used closed-source systems. That's been proven hilariously false time and again.

Secondly, commercial vendors have seen fit to adopt open source where it suits them, in order to take advantage of (and offload responsibility for) what these components do. You're effectively saying "the open source community doesn't have accurate information, because look at X and Y" while ignoring that X and Y were also not discovered to have problems by any of the closed-source-using commercial entities that depend on them.


> the obvious implied suggestion is that these types of vulnerabilities don't exist in commonly used closed-source systems

I never implied that, obviously or not.

> Secondly, commercial vendors have seen fit to adopt opensource where it suits them in order to take advantage of

Commercial vendors adopted opensource due to lower cost of ownership, not due to perceived lack of problems or because "community knows exactly what is what"


Except it's been proven again and again that this is not even remotely the truth.

The number of people who can properly analyse complex software to uncover what it actually does is a line asymptotically approaching zero. While OpenSSL is an overused example, it still remains a good one.


And the people who can do so are usually the same people who’d be perfectly capable of analyzing binaries directly.

Besides, you’ll never know if the code you’re looking at abuses some compiler quirk without studying the binary.


> And the people who can do so are usually the same people who’d be perfectly capable of analyzing binaries directly.

I doubt that. And even the people who are good at analyzing binaries probably prefer to have the source code available to save a lot of time.


Somebody should tell the folks designing high-level languages that their code is no easier to analyze than a binary, I guess.


Level of the language has little to do with the complexity of code.

- a modern OS has anywhere upwards of 50 million lines of code (Linux kernel alone is ~30 million lines of code) [1]

- a modern browser is anywhere upwards of 30 million lines of code [2]

- there are over 3.5 million individual packages available for the various Linuxes [3]

And so on. The pretence that there are people and resources readily available to analyse those sources, and understand them well enough to uncover complex vulnerabilities is just that: a pretence, a myth. As evidenced by high-profile bugs that existed in popular codebases for years.

Does the availability of source code make analysis somewhat easier? Yes. There's a difference though between reviewing left-pad on GitHub and auditing OpenSSL, for example. There are thousands of people who can do the former, and perhaps 5 who can do the latter.

That is why "the number of people who can properly analyse complex software to uncover what it actually does is a line asymptotically approaching zero".

[1] https://www.linux.com/news/linux-in-2020-27-8-million-lines-...

[2] https://www.openhub.net/p/chrome/analyses/latest/languages_s...

[3] https://repology.org/repositories/packages


The person you are responding to is specifically talking about looking for exploits that are hidden behind compiler bugs or obfuscated by language features. So you're missing the point, I guess.


Debugging C is usually done in assembly so I guess they know.


On the other hand, you alone are responsible for all the little pieces of software that are part of your Linux system. People running MacOS have willingly given over to Apple the guarantee that Apple-provided MacOS software is not doing anything nefarious. The assumption being that 1) there are enough people capable of detecting malicious behavior that it will get caught, and 2) Apple has a very strong vested interest in maintaining their cred as a privacy-focused company.

It is certainly a trade off, and everyone will have their own reasoning. There is no objectively correct answer.


> given over to Apple the guarantee that Apple-provided MacOS software is not doing anything nefarious

That depends on your definition of nefarious.


I will grant that many (I would go so far as saying most, and to a very rough approximation, all) users do not have the same purist attitude towards privacy that is common on HN. Metrics on app usage and such would be seen as nefarious here but the average user doesn't care at all.


Indeed, only to learn about all the programming languages, libraries and communication protocols used in the system, and then they might understand what is happening.


Whereas with closed source you'd have to do all of that, plus probably a good deal of reverse engineering on top. That unfortunately doesn't make anything easier; it makes it a lot harder.


The difference here is "[Apple] authorized people who are skilled in the art" vs "anyone skilled in the art" of programming can tell others what it does.


Linux package managers are great; I haven't had to build a program from source in years.

But how do I know the source code I check is the binary my machine runs? Even if I build from source I could have a malicious gcc that takes clean source and outputs a malicious binary.

Unless you are running jit or something you can't really know what your computer is running even if you use open source.


You can check whether the binaries you have are produced by the source if you have reproducible builds, see https://wiki.debian.org/ReproducibleBuilds.
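As a rough illustration of the cheaper half of that check, verifying that the .deb apt would install matches the archive's recorded hash (full reproducibility verification additionally means rebuilding from source and comparing the outputs):

  # fetch a package and compare its hash against the archive metadata by eye
  apt-get download hello
  apt-cache show hello | grep ^SHA256
  sha256sum hello_*.deb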


You know, you don’t need the source code to tear a binary apart in IDA.


You do know that you can peek at closed source binaries and figure out what they do? It’s not even all that hard in this case and it would quickly tell you what the software does, assuming it’s not intentionally trying to cloak its behavior, which it is not in this case.


Either way, it’s easier if you’ve got the source code as well.


No, the only protection against that is to block all outgoing connections that aren't on a whitelist. Most servers I have work this way. It's annoying at times, but it lowers the risk of some shady software/library phoning home to ~0%.
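As a minimal sketch of that approach with nftables (the table/chain names and the example address are mine; adapt the whitelist to your own hosts):

  # default-deny all outbound traffic, then whitelist explicitly
  nft add table inet egress
  nft add chain inet egress out '{ type filter hook output priority 0; policy drop; }'
  nft add rule inet egress out oifname "lo" accept
  nft add rule inet egress out ip daddr 192.0.2.10 tcp dport 443 accept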


Yeah, my mother really likes checking out what is going on with these pesky open-source services. She sometimes goes to GitHub to read the code; while at it, she quite often finds bugs, sometimes even security bugs. I am so glad we have a way to understand what is going on using open-source software.

On the other hand, all my blackhat friends have a really bad time with closed-source software. This is the primary reason black-box security testing is dying.


I was never aware of this, and while poking around I came across

~/Library/Trial/Treatments/<NUMBER>/factorPacks/<OTHER_NUMBER>/assets/com.apple.siri.asr.dictation.*/etiquette.json

These files do contain an impressive number of swear words, some of which I had never heard.
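If you want to peek at yours, a one-liner sketch (the path pattern is taken from above; requires jq):

  # pretty-print every etiquette.json under the Trial directory
  find ~/Library/Trial -name etiquette.json -exec jq . {} \;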


The en_GB one on my machine includes words like smeg, squits, and argie – all of which are mild yet anachronistic terms at worst. And, curiously, "poe" – I know Apple doesn't like Power over Ethernet, but really...


Only in one language or in all languages supported by Siri? Would be great when learning a new language :)

But probably needed for Siri to process speech locally, as they'd likely want to filter out those words.


I keep wondering if my Macbook is really mine, or if it's theirs.


Remember when operating systems didn't have multi-gigabyte binary blobs performing cryptic actions with an extremely chatty online connection?


With Little Snitch (and thorough abandonment of iCloud, iMessage, Siri, FaceTime, and the App Store) you can have this once again.

There are about 30 Apple processes to which you need to deny all network access. Then a Mac is relatively quiet. (By default Apple stuff is whitelisted in Little Snitch, you have to go and manually uncheck built-in allow rules.)

Apple built something called the ContentFilterExclusionList that bypassed Little Snitch and VPNs for Apple apps, which was thankfully disabled rapidly (and without an announcement) after a big backlash. Hopefully it will not resurface.

If it does it will still be possible to alter the system boot security level setting and tamper with the signed system volume (and modify the exclusion plist) and the system will still work (and so will the firewall), but it will take some hacking. This (and the work by marcan and the Asahi team) is the sole reason why I just dropped eight large on that chonky mac mini with the twenty cores.
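For the curious, the rough shape of that hack as it circulated for Big Sur-era Intel Macs (a sketch only: the device node is illustrative, Apple Silicon needs different bputil-based steps, and the exact flags vary by macOS version):

  # from Recovery: allow booting an unsealed system snapshot
  csrutil authenticated-root disable
  # back in macOS: mount the system volume read-write, edit, re-bless
  sudo mkdir -p /tmp/sysvol
  sudo mount -o nobrowse -t apfs /dev/disk1s5 /tmp/sysvol
  # ...make your edits under /tmp/sysvol...
  sudo bless --mount /tmp/sysvol --bootefi --create-snapshot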

The escape hatch remains. For now. (You still can't buy them anonymously.)


I just put this list in /etc/hosts. Seems to work well enough. I don't use any Apple services on my Mac. This will probably break those.

https://github.com/adversarialtools/apple-telemetry


You can't buy a Mac anonymously?


Not the maxed out BTO ones, at least not at present for the Mac Studio.


No, I don't remember a time when users were in control and actually owned Apple-made hard- and software.

However, Linux is alive and well, despite some flavours attempting to force auto-updating snap stores and the like onto users.


Linux can be chatty, but I'd recommend installing OpenSnitch to get a handle on the few phone-home calls from programs like snapd (if you choose not to neuter it immediately) and distro-specific upsells like "ubuntu-advantage".

The biggest pay-off for OpenSnitch is if you're going to be installing and using third-party programs like VSCode and Unity, who love to talk to analytics providers.


Can't that be filtered at the DNS level with a list in Pi-hole? Probably much nicer, not having to keep it on all devices.


I agree, but have serious "ugh, but what about when I leave my network"-itis. Like when I tether off my phone (or just use said phone when out).

The ideal solution seems to be to VPN off my home internet connection, but that just feels so broken...


I used WireHole [0] to set up pihole over wireguard for use outside my home. It works really well.

I've thankfully only lost VPN access a handful of times since installing it because our internet is relatively stable.

[0]: https://github.com/IAmStoxe/wirehole


VSCode lets you turn off analytics in the settings (IMO it should be off by default, until a crash is detected).
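There's also a CLI flag, though you have to pass it on every launch:

  # launch VS Code with built-in telemetry disabled for this session
  code --disable-telemetry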

Isn't that enough?


Of course not?


Why? (Assuming VSCode respects the setting)


I don't even understand the question.

If you found that your next door neighbor was sneaking into your house while you're away, and he said "oh I have an option not to do that. Just ask me not to any more." Would that be good enough? You say "ok I would like to invoke that option" and that's it, you happily go away on a month long vacation without installing any cameras or anything because they guy said he won't do that now.

That's a single package, whose setting affects no other apps, and you have to trust that the app even really does what it says, which makes no sense when the topic is apps that may be doing unwanted things behind your back.


I agree with your position but that's a bad analogy (as most analogies are). With your computer you can choose to uninstall and no longer use VSCode. You can't choose your neighbor and you can't force him to move away to prevent his bad actions. Also, it would be more like your neighbor is peeping in your windows or planting a monitoring bug to record your activities, rather than breaking in.

Taking that further, you can ditch your Mac and switch to Linux or BSD if you're concerned about the article subject. In other words, you can sell your house and move if you don't like the direction the neighborhood is going with all the peeping toms about, but there's extra work involved in that solution so you have to decide if you can afford such a change for more privacy. So, you can sell your Mac and move your workflow to a less invasive OS on hardware you choose, but depending on how deep your workflow is intertwined with and relies upon the macOS ecosystem, that can be a ton of work as well.


That analogy is like a car that...


Because then VSCodium wouldn't exist


Purging snap has so far been a 5-minute do-and-forget operation, all the way to 21.10.
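For reference, the usual shape of that operation (a sketch; assumes there are no snaps you actually want to keep):

  snap list                  # see what's installed first
  sudo snap remove firefox   # remove each listed snap
  sudo apt purge snapd       # then remove snapd itself
  sudo apt-mark hold snapd   # keep apt from quietly pulling it back in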


Sadly, Ubuntu is moving Firefox to snap in the upcoming 22.04 release. I think I'll be switching to either Manjaro or plain Arch before 21.10 becomes unsupported.


Unless you rely on systemd, try Void Linux. It has a sane package manager and a very active, friendly community. It requires a little legwork to get to the same polish as Ubuntu and other consumer-oriented distros, but it is rock solid and reliable.

Otherwise be careful with Manjaro; it is a noble project with great goals and some great implementations (and a stellar community!), but they tend to go a bit overboard on customization for customization's sake which has introduced breakage in the past. Nothing that isn't fixed within a day in my experience, but stable it most certainly is not.

There's also Mint for a more traditional and conservative take on Ubuntu's method, as well as MX Linux which shows off how good Debian can be in the right hands.

Or go crazy like me and start using OpenBSD as your main workstation provided your workflow can adapt to it; I'm on 7.0 on this machine with Firefox 95 and literally everything I need is here.


I'd like to throw my two cents in on Devuan. This is a fork of Debian that simply doesn't include systemd, but does include all the other work that Debian has done, and tracks all the Debian releases.

This is my daily driver for both home and work. I've completely ditched Windows and other versions of Linux. Steam runs fine, and with Proton, I have compatibility good enough not to miss Windows at all. Some games (Dark Messiah is one) that won't even run in Windows anymore run great in Proton.

The company I work for does a lot of container-based work, and aside from a couple tweaks to apt sources that naively assume release names, everything "just works".


I recently upgraded to 21.10; in the process, snap was reinstalled and Firefox was reinstalled as a snap. I did, however, just purge snap as I've done many times before, without any issue, then installed Firefox through Flatpak, which as of writing is version 98.0.2, the same as you get through snap.

So, in conclusion, it shouldn't matter one bit how Canonical decides to distribute Firefox, if you can just use the Debian-based apt, or Flatpak.

I hope Canonical moves away from the snap model, because I've had nothing but issues with it, until I got fed up enough to remove it, and then everything was great again. The issues were mostly related to the read-only mounted file system and the mess it otherwise produces. However, even if I had experienced none of those, the whole automatic-updates-whether-you-like-it-or-not thing is one of the reasons why I'm not using Windows or macOS.
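For anyone wanting to do the same, the Flathub ID is the usual one (assumes flatpak and the Flathub remote are already set up):

  flatpak install flathub org.mozilla.firefox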


You can just manually install the official Firefox Linux release. I've done this ever since the Debian Iceweasel days, and it works without issue.


The distro-provided versions can be slightly different than the Mozilla build.

For example, Fedora build is ruled by these principles: https://pagure.io/fesco/issue/1518, while the Mozilla build is not.


Agreed, Mozilla has not exactly shown that they will put user interests first and it makes sense to have a third-party packager to double check their choices.


If you have to fight against your distro, you have a bigger problem than one package, and working around that one package doesn't fix the larger problem, which is that they tried, will try again, and will always be trying.

You need your tools' goals to be aligned with your own, and to reject any that aren't, not cater to them.

Working around the FF package by installing from elsewhere is no answer.


The distros don't update Firefox as often as Firefox does. If you want leading-edge on Linux, it's best to install to a directory you have write access to with the official Linux Firefox installer, and let the browser update itself. Just my experience.
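A sketch of that setup (the URL is Mozilla's standard latest-release redirect; the install directory is just my choice):

  # fetch Mozilla's own build and unpack it somewhere user-writable,
  # so the browser can update itself without root
  curl -L -o firefox.tar.bz2 \
    'https://download.mozilla.org/?product=firefox-latest&os=linux64&lang=en-US'
  mkdir -p ~/.local && tar xjf firefox.tar.bz2 -C ~/.local/
  ~/.local/firefox/firefox &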


What does it mean functionally to move it to snap exclusively? Removing it and maybe some of its dependencies from Ubuntu's default apt repositories? Would there be issues building it and providing it in a custom PPA?


> Apple-made hard- and software.

Not sure why you are singling out Apple here. Windows is worse, as are many (most?) hardware vendors; there is a reason everyone REALLY should fresh-install vanilla Windows over the vendor's OEM version.


thanks for the laugh


The drop cap at the start of the article is capturing pointer events. Adding pointer-events: none; will make people who read by selecting text with the mouse happy.

  .single .entry-content:before { pointer-events: none; }


I suspect that the browser you are using is not spec-compliant. Pseudo-elements (like ::before and ::after) are not actually present in the DOM and are therefore not selectable.

edit: I see that you are probably referring to the z-index of the ::before element overlapping the text, preventing selection.


Uuhhh, these contain the "custom" Siri responses. For example, look at

  ~/Library/Trial/v6/AssetStore/assets/82/623873afd8fba430d4f4a45e/content/dialog/SocialConversation.catfamily/dalAprilFoolsSiri.cat
_params.cat.bin has the description:

  category: Social
  description: For when users say 'April fools' to Siri without any other words.
  listenafterspeaking: false
  readyforloc: true
  subcategory: Daily Rituals

and `en.cat.bin` and all the localisations (en-za.cat.bin, en-sg.cat.bin, …) contain some logic about when to run, plus the answers, such as

  It's April 2nd. April Fools! <break time="300ms"/> If you want to play some tricks of your own, say, 'Tell me an April Fools prank.'


"Note that all these are on the SSV, therefore can't be tampered with in any way."

Who does this computer belong to: the author, or Apple, Inc.? Let me guess: Apple controls the SSV and decides what files it contains. Removing crapware becomes exceedingly difficult. Brilliant.


It sounds somewhat like testing/trialing experimental ML models


Jesus I hate being A/B tested without my consent. Particularly, why is Apple of all places doing this, given their otherwise strong track record of standing up for their users in regard to tracking and other bullshit by major players?!


Apple is no different: now that they are under more pressure to keep growing the bottom line, they are doing more and more of this stuff.

The App Store is a joke.

All that said, it's still currently way better than other big tech, but I am not blindly accepting their behaviour. For now it's good enough for me, but that can change in the future.


You’re associating A/B testing with the practice on the web, where it is often used close to users’ spending decisions.

From what we can tell, this here is not a trial connected to revenue. It's connected to Siri and other ML frameworks, but not to, say, the Apple Store or Music applications.


Apple has been known to implement new features gradually: first roll them out and refine them for private use, then, once the feature is stable, make it available as a public API.

Mark my words: in two or three years, Apple will ban developers from using third-party A/B testing frameworks and require them to use the Apple-provided solution based on triald.


You are seriously misunderstanding what triald does.


While the Trial framework is capable of doing A/B testing (and historically started out for that purpose), it is fundamentally an asset delivery mechanism, and is mostly used for that (i.e. I suspect that most of the domains using Trial currently have no "B" to pair with their "A").

And some of the differentiation that IS being done is not A/B testing in the usual sense of the word. i.e. if one particular hardware model needs a different version of an asset, those models can be assigned to a separate set, but that does not really make it an A/B test.


Maybe you should contain your outrage until it's based on something other than uninformed speculation.


The fact that Apple once again delivers bananaware ("ripens at the customer") to their users is outrage-worthy on its own, particularly when the software in question does not even provide any benefit to the user. Bad enough that Spotlight and Time Machine have had issues for years (which, to Apple's "credit", have now been largely solved), but those at least provide useful functionality. Apple is a filthy rich company; they should be able to afford QA for their software, or at least ask users before deploying unstable software.

Users, particularly those who paid thousands of dollars for the hardware, are not guinea pigs.


I really think you should stop buying Apple products, you are clearly really upset about it.


People have some well-founded respect for the procedures that medical trials have to go through, including informed consent. That’s spilling over/being misdirected to the use of the trial mechanism in other fields, even though it makes absolutely no sense.

The alternative to A/B testing in UX design is not some more rigorous trial with fully-informed consent. Instead, it's the older and still predominant mechanism of "let's just change it".

Consent in medical tests is important because it involves a ‘B’ that is a substance or procedure that has not (yet) been shown to be safe & effective. The trial is less restrictive than the alternative scenario, where it would be illegal to just treat you with ‘B’.

None of that applies: both the A and B in A/B tests are software versions, for which there are zero legal requirements. Neither version will have an impact on your health or kill you, which cannot always be said for untested pharmaceuticals. A tailor is absolutely free to suggest dark navy to one customer and anthracite to the next, even if they flip a coin to decide and/or mentally take note of what customers like.

The possible harm from A/B testing is the maximum of the harms of A and B. If you wouldn’t complain about either alternative, there is no legitimate reason to complain about the combined A/B harm.


A/B tests suck if you are in the test group and suddenly, completely unannounced, a functionality you have been relying on stops working or gets changed and you are now stuck with the problem.

> None of that applies: both the A and B in A/B tests are software versions, for which there are zero legal requirements.

There should be, that is my point.

All that A/B testing and the constant other changes even in minor patch versions only lead to update hesitancy in people, which in turn leads to security issues because people don't update their software because they don't want to be disrupted in their workflows.

Not to mention that most A/B testing is essentially pseudoscience at best if not outright fraud that's being peddled by people wanting to sell A/B testing "solutions", or the implications on user privacy because of the mandatory phone-home.

FWIW this is what made Microsoft so big in the corporate world: for decades you could expect that nothing major would change; you would not need to re-train your employees or buy new software. That changed with Windows XP (although it was easy to roll back at least the visual changes) and got exponentially worse with the utter crap that is Windows 11.


The alternative to A/B testing is paying people to test your software and learning from their direct feedback. Alternatively, you can provide people with a choice to test the changes you're considering.

If I pay for a product, I expect to be treated with respect. Being turned into a guinea pig to save on feedback cost isn't being respectful. Doing so without notice or consent is even worse.

There's no law against it, of course, but that doesn't mean it's good or even okay.


Surely if it's not informed consent then it's not authorised access to your computer. Unauthorised computer access is illegal in USA (CFAA) and UK (CMA), at least.

If it in any way restricts bandwidth or causes other harm, e.g. uses up the last of your HDD space and causes an issue, then that's an additional crime under UK law.

These laws shouldn't just be applied to individuals.


> given their otherwise strong track record of standing up for their users

Citation needed. Apple have an advertising identifier built into the OS to help advertisers track you, they share analytics about how you use their OS by default, and they analyse the data you store with them in iCloud.

I know Google, Microsoft and Amazon all do these anti-user things; that's not my point. My point is that Apple say they have a strong track record of standing up for their users, and their users believe them regardless of that actual record.


Ah, so the article mentions “A/B testing” but doesn’t realize that Apple has actually been rolling out IAP trials for macOS features. Subscriptions baked right into the OS, with feature flags to make sure they are extracting maximum profit. Eddy Cue is personally overseeing this because services revenue is so critical to Apple that they’re willing to hurt the user experience to get some money out of this.

Ok, seriously, y’all need to tone it down with the speculation. Spotting the word “trial” and immediately jumping to the worst conclusion is dumb, and it's how misinformation spreads. As some commenters have mentioned below, triald manages CloudKit asset packs for Siri to allow it to change its local behavior independent of an OS update. That’s all it is. I don’t really have a problem with “oh let’s poke into this thing on my computer and figure out what it does”, but I am kind of sick of people just looking at log entries and coming to a misleading conclusion without really understanding what’s going on under the hood. Yeah, it uses space on your disk and Apple could describe what it is better, but it’s not secret spyware or a plot to fill up your disk for no reason.

(Fun fact: a lot of the assets in the ~/Library/Trial directory are transparently compressed at the filesystem level using a new “LZBITMAP” compressor type in macOS Monterey. Without this it would take more space, so they are definitely aware of the disk usage at Apple.)
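You can see the compression with stock tools; on macOS, ls -lO prints the per-file flags, and transparently compressed files carry a "compressed" flag:

  # spot compressed assets under the Trial directory
  ls -lRO ~/Library/Trial | grep compressed | head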


Collective ignorance generating conspiracy theories & outrage sadly happens way too often on HN. It's like tilting at windmills trying to change that.


That should be reported to US Homeland Security. It needs to be examined for backdoor vulnerability. Now that the Western world is semi at war with Russia, you can't ignore this sort of thing any more.

Right now, anything that can affect running systems from the outside has to be viewed as an attack vector. It would be a good idea for anyone running anything critical or semi-critical to turn off all remote updates for a few months.



