
> I hate that Chromium’s snap takes more than 10 seconds to load on cold boot on a freaking SSD, whereas .deb and Flatpak apps load in 1-2 seconds.

Can someone verify this? As someone who will eventually upgrade to 20.04, this is concerning.




Apparently, snaps are compressed to save disk space, which is why they take so long to start:

- https://www.reddit.com/r/Ubuntu/comments/9scoif/snap_package...

Saving disk space is certainly useful for rarely used apps. However, your web browser (and any other frequently used app) shouldn't be compressed, especially if there is ample disk space.


The bottom line is that they optimize installation time by amortizing it out over the runtime life of the package. In other words, they turn a one-time 15-second process into a 14-second process, in return for turning a many-times 1-second process into a 30-second process. It makes absolutely no sense.

They do this using a filesystem originally designed for embedded devices, with a driver hacked to disable threading support because the sheer number of filesystems snapd mounts would otherwise consume a huge amount of memory in per-CPU buffers used for decompression. In other words, they broke squashfs for everyone in the process of trying to make it work for snap.
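
The decompressor mode is a build-time kernel option; if you want to see which one your kernel was built with (assuming a stock Ubuntu kernel that ships its config in /boot), something like:

    grep SQUASHFS_DECOMP /boot/config-$(uname -r)
    # one of these should be =y:
    #   CONFIG_SQUASHFS_DECOMP_SINGLE        - one decompressor, no threading
    #   CONFIG_SQUASHFS_DECOMP_MULTI         - up to two decompressors per core
    #   CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU  - one decompressor per CPU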

On-demand decompression like this has made very little sense on desktops since the mid 90s, and even if it did, snapd's manifestation of it is particularly terrible.


> On-demand decompression like this has made very little sense on desktops since the mid 90s

OK, maybe not on desktops? But ZFS on-disk compression is a sysadmin's frickin' dream -- just one example: you can access logfiles with plaintext tools like grep while still benefiting from the space savings, at negligible cost. LZ4 has basically no overhead at all: https://www.servethehome.com/the-case-for-using-zfs-compress...
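
It's literally one property on a dataset; a minimal sketch (pool/dataset names made up):

    # enable transparent LZ4 compression on a dataset
    zfs set compression=lz4 tank/var/log
    # check how much space it's saving
    zfs get compression,compressratio tank/var/log
    # reads are decompressed transparently, so plain tools keep working
    # (assuming tank/var/log is mounted at /var/log)
    grep -i error /var/log/syslog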

I really hope you'll try on-disk compression, encryption, deduplication, and that sort of thing sometime; you'll see it's so much better than juggling gzip-compressed, gpg-encrypted files.


Filesystem compression is a completely different animal than this. It has to deal with your ability to modify the file at any time. It doesn't compress the whole file together, it does it in blocks. When you launch a binary (and the system mmaps it) it doesn't have to decompress the entire file before you can start using it, only the first compression block.

Compression also typically makes it faster to launch applications from spinning rust, because the bottleneck is the drive and reading 50MB and decompressing it is faster than reading 100MB uncompressed. This would be true of SSDs as well except that most of them already do this internally.

But snap isn't reading e.g. 64kB and then giving you 128kB on demand (and then prefetching the next block) like the filesystem does, it has to read and decompress the entire 100+MB package before you can even open it. That is very silly and adds a perceptible amount of latency.


> But snap isn't reading e.g. 64kB and then giving you 128kB on demand (and then prefetching the next block) like the filesystem does, it has to read and decompress the entire 100+MB package before you can even open it. That is very silly and adds a perceptible amount of latency.

Wait, I could be wrong about this. I was deducing it from other people saying that it has to decompress the package every time you open it, plus the empirically long application load times, but it turns out it's using squashfs, which at least in principle could be doing block-level compression the same way ZFS does. I haven't checked whether it does or not.

They're doing something wrong though, or it wouldn't be this slow. Possibly more than one thing. Unfortunately there are a lot of different ways to screw this up: not caching the decompressed data, so it has to be decompressed again on every read even if it's already in memory; using too CPU-intensive a compression algorithm or too large a block size; double (or triple, or quadruple) caching because it's loop-mounted, and then forcing slow disk reads through inefficient cache utilization; over-aggressive synchronous prefetching; or any combination of these. Or maybe it actually is doing whole-file compression.

Now I'm curious which one(s) it really is.
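
One way to start narrowing it down, assuming the snap files live in the usual place under /var/lib/snapd/snaps (command names may differ):

    # show the squashfs superblock: compression algorithm, block size, etc.
    unsquashfs -s /var/lib/snapd/snaps/chromium_*.snap
    # drop the page cache, then time a cold start and an immediate second start;
    # if the second run is still slow, the decompressed data probably isn't cached
    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
    time snap run chromium --version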


Can't you just alias cat to zcat and so on? There should be such tools available for just about everything that isn't a container format (zip, 7z, tar).
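
Pretty much, at least for gzip; the z* wrappers ship with it, and zcat -f passes plain files through untouched. E.g. (log file names are just examples of rotated logs):

    alias cat='zcat -f'     # -f copies non-gzip files to stdout unchanged
    zgrep -i error /var/log/syslog.2.gz
    zless /var/log/dpkg.log.1.gz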


It’s a bad implementation. You can run inline compression on latency-sensitive workloads like VDI without issue.

Compression makes a lot of sense, as the cost of fast, high-capacity SSDs is usually much higher than the cost of the extra CPU cycles required to decompress.


It's not squashfs's fault; it's the snap people who just have absolutely no clue. squashfs is designed for embedded systems with, say, 8 or 16 MiB of very slow NOR flash, so you maximize compression ratio at the expense of speed (because the flash is probably still slower).


And decompression is typically very fast. What I don’t understand is why they’re not using something like zstd if they care about speed. It’s a supported algorithm for squashfs, but they still insist on a single-threaded, slow-to-decompress one (xz, iirc?).


The kernel code for reading zstd squashfs images has been merged for some time. But zstd is only recently supported in upstream squashfs-tools for creating the squashfs image.

In my testing with OS installs that depend on squashfs+xz, there is a significant LZMA hit for decompression, resulting in significant latencies; and the higher the compression level used, the bigger the hit when decompressing. While the compression cost for zstd is in the same ballpark as xz at the same compression ratio, (a) the decompression cost is far lower with zstd, translating into faster reads, and (b) it is fairly consistent regardless of compression level.

Another factor for squashfs is the block size. The bigger it is, the better the compression ratio, but the greater the read amplification. I haven't looked at it, but it might be that they're over-optimized for space reduction with too little consideration for performance. Since this isn't a one-time-use image, like an installer, but something intended to be read over and over again, erofs might be an alternative worth benchmarking.
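
If anyone wants to try, repacking the shipped snap with different settings is straightforward; a rough sketch, assuming squashfs-tools new enough for zstd and the usual snap path:

    # unpack the shipped image, then rebuild it with different knobs
    unsquashfs -d chromium-root /var/lib/snapd/snaps/chromium_*.snap
    mksquashfs chromium-root chromium-xz.snap   -comp xz   -b 128K
    mksquashfs chromium-root chromium-zstd.snap -comp zstd -Xcompression-level 3 -b 128K
    ls -l chromium-*.snap    # compare sizes; then loop-mount each and time reads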

https://linuxreviews.org/images/d/d2/EROFS_file_system_OSS20...

https://www.usenix.org/system/files/atc19-gao.pdf

https://lkml.org/lkml/2018/5/31/306


I think there is a cultural bias in this type of application toward saving disk space, to reduce overhead on mirrors.


Whoa, it is a weird focus in this day and age...

Especially since this has been a solved problem for ages, without any real performance penalty. NTFS has had this since 1995, it seems, and ZFS probably since its inception.


Apple has silently compressed files (including executables) since Snow Leopard [1] -- to increase speed. Did Ubuntu pick the wrong compression algorithm?

[1]: https://arstechnica.com/gadgets/2009/08/mac-os-x-10-6/3/


I see two mentions of "increased speed" in that article:

1. Increased installation speed. This one's obvious; less data takes less time to install. This is mentioned in your parent comment.

2. "But compression isn't just about saving disk space. It's also a classic example of trading CPU cycles for decreased I/O latency and bandwidth. Over the past few decades, CPU performance has gotten better (and computing resources more plentiful—more on that later) at a much faster rate than disk performance has increased. Modern hard disk seek times and rotational delays are still measured in milliseconds. In one millisecond, a 2 GHz CPU goes through two million cycles. And then, of course, there's still the actual data transfer time to consider. [...] Given the almost comical glut of CPU resources on a modern multi-core Mac under normal use, the total time needed to transfer a compressed payload from the disk and use the CPU to decompress its contents into memory will still usually be far less than the time it'd take to transfer the data in uncompressed form."

It's an interesting point, but seek times and rotational delays don't apply to SSDs. This is kind of an uneasy comparison to draw with "I hate that Chromium’s snap takes more than 10 seconds to load on cold boot on a freaking SSD".


> It's an interesting point, but seek times and rotational delays don't apply to SSDs.

There is another reason to do compression on SSDs: you have more storage free, and thus less write amplification, so your SSDs will last longer. In fact, some SSD controllers (e.g. SandForce) used to compress data transparently to reduce write amplification.

https://en.wikipedia.org/wiki/SandForce#Technology

The trick they applied: say you had a 500GB SSD and stored 400GB of data, which the controller compressed to 200GB. The drive would still report only 100GB free, but internally it would have an ample 300GB of free blocks, greatly reducing write amplification.

(Of course, the benefit of controller-level compression is gone with full-disk encryption. But I guess FDE was less common back when SandForce SSDs were popular.)


Isn't the point of compression to save transfer time from the disk to memory, not space on the disk?

That's why the kernel is compressed.


Yes. It's a great idea for spinning rust, meh for SATA SSD, and bad for NVMe SSD.


> Saving disk space is certainly useful for rarely used apps

Is it really? I can't recall the last time I ran into disk space issues; it must have been in the 1990s.


Really? I constantly run into disk space issues. Apple still ships its flagship 13" MacBook Pro with 128GB of storage and charges $200 for another 128GB. While other manufacturers' laptops charge a lot less for storage these days, most still only come with 256GB, which is not enough for development IMO.

Even on my desktop, I managed to fill 750GB with various VMs and Android development tools (the SDKs, etc.). While I'm not sure how much compression would have saved me, it could still be worth it (especially since I only use certain VMs or SDK versions once a month).


Yeah, then stop burying yourself with Apple devices.

Anyway, it makes sense to maintain something like an LRU cache and compress only the least-used things.


Why would you? lz4 decompresses at ~5GB/s on a modern CPU [1] with decent compression ratios; that's still more than most SSDs can push nowadays. Most applications are a small fraction of that size on disk.

The problems arise when you start using xz-compressed squashfs images. LZMA2 is optimized for compression ratio and typically decompresses several times slower than even zlib deflate (which is already ~10x slower than lz4).

[1] https://github.com/lz4/lz4
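
Easy to sanity-check locally; lz4 has a built-in benchmark mode, and for xz you can just time a decompress (file name is a placeholder for any large file you have handy):

    lz4 -b1 bigfile              # in-memory compress/decompress benchmark at level 1
    xz -9 -k bigfile             # produces bigfile.xz
    time xz -dc bigfile.xz > /dev/null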


Yeah... and when something is truly large, it generally doesn't compress well anyway, as the large assets are embedded media files. I don't understand the point of compressing stuff like this :/


I still don't understand, though. I run btrfs with compression turned on and have noticed zero performance issues -- only with snaps, as mentioned.
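
For reference, on btrfs it's just a mount option (the zstd level suffix needs kernel 5.1+); something like:

    # /etc/fstab -- transparent zstd compression for the whole filesystem
    UUID=...  /  btrfs  defaults,compress=zstd:1  0  1
    # or compress existing files in place
    btrfs filesystem defragment -r -czstd /home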


I run into disk space issues almost monthly, and I have a fair number of terabytes.


> snaps are compressed to save disk space

The list of dumb decisions that Ubuntu has been making recently just keeps increasing.


I did not notice the load times, as I basically never close Chromium, but I have noticed (not a snap expert):

1) It does not work well with the rest of the OS (e.g. I pin Chromium to the task bar, but after a bit it stops using that icon and instead appears as a new one, where clicking the pinned one opens some new instance).

2) It somehow consumes an insane amount of CPU for me. I have noticed my fans going crazy (mind you, this is a brand new machine with 32GB RAM and 12 cores) and all my cores sitting at 60%.

The kicker: I did not even see Chromium running! I had closed it, but the rogue snap processes would not die. I had to SIGTERM everything and uninstall it.

Then I wanted to install Chromium without snap, but as the post says: YOU CAN'T! At least not easily.

So the solution for me: download Google Chrome, after years of using Chromium, because you can easily install it natively (I still use Firefox as my main browser, but sometimes stuff only works in Chromium-based ones).

It's a total disaster as far as I'm concerned. Next time I reinstall the OS (hopefully not any time soon, since I've just upgraded from 19.10 to 20.04), it will not be Ubuntu.



It takes about 5 seconds for me off an NVMe SSD. For comparison, Firefox (installed with apt) starts in under a second.


Yes, it starts in about 3-4 seconds for me. The only reason it doesn't bother me is that I normally use Firefox.


I can confirm this. The same app (tested with VS Code, IntelliJ, PyCharm, Atom) installed from a deb vs. a snap is like 2 seconds vs. 8-10 seconds on my beefed-up rig.

It's ridiculous.


Took around 10 seconds to start for me, almost exactly.


Clean install, Ubuntu 20.04. Chromium starts in around a second.


Is it installed as a Snap?

From what I understood, only the first start is slow. Once the virtual disk has been decompressed, subsequent launches are much faster.


Same here. Upgraded to 20.04, but a fresh install of Chromium. Starts in 2 seconds on a plain old HDD.


Whatever you do, don't install anything else.


It starts slowly for me, but it isn't a big deal in my opinion. I like the prompts telling me that Chromium is trying to open files, etc.

Overall, snap doesn't seem terrible to me, but I haven't really read into the complaints everyone is sharing.


I can't - snap Chromium takes about 1 second to load for me on Ubuntu 20.04


> Can someone verify this? As someone who will eventually upgrade to 20.04, this is concerning.

Chromium was converted from deb to snap already in 19.10 if not earlier.



