
Interesting point that RAM has basically not moved up in consumer computers in 10+ years... wonder why.

The article answers its own question by pointing out that SSDs allow better swapping, the bus is wider, RAM is increasingly integrated into the processor, etc.

But I think this misses the point. Did these solutions make it uneconomic to increase RAM density, or did scaling challenges in RAM density lead to a bunch of other nearby things getting better to compensate?

I'd guess the problem is in scaling RAM density itself, because performance is just not very good on 8GB MacBooks compared to 16GB ones. If "unified memory" were really that great, I'd expect there to be largely no difference in perceived quality.

Does anyone have expertise in RAM manufacturing challenges?




I don't think SSDs allowing rapid swapping is as big a deal as SSDs being really fast at serving files. On a typical system, pre-SSD, you wanted gobs of RAM to make it fast - not only for your actual application use, but also for the page cache. You wanted that glacial spinning rust to be touched once for any page you'd be using frequently because the access times were so awful.

Now, with SSDs, it's a lot cheaper and faster to read disk, and especially with NVMe, you don't have to read things sequentially. You just "throw the spaghetti at the wall" with regards to all the blocks you want, and it services them. So you don't need nearly as much page cache to have "teh snappy" in your responsiveness.

We've also added compressed RAM to all major OSes (Windows has it, MacOS has it, and Linux at least normally ships with zswap built as a module, though not enabled). So that further improves RAM efficiency - part of the reason I can use 64-bit 4GB ARM boxes is that zswap does a very good job of keeping swap off the disk.
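
If you want to check whether your distro actually has it turned on, here's a minimal sketch of my own that just reads the standard zswap sysfs parameters (the directory only exists if zswap is built into or loaded by the kernel):

    from pathlib import Path

    def zswap_status():
        """Return zswap module parameters from sysfs, or note that it's absent."""
        params = Path("/sys/module/zswap/parameters")
        if not params.is_dir():
            return {"available": False}
        return {"available": True,
                **{p.name: p.read_text().strip() for p in params.iterdir()}}

    # e.g. {'available': True, 'enabled': 'Y', 'compressor': 'lzo', ...}
    print(zswap_status())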

We're using RAM more efficiently than we used to be, and that's helped keep "usable amounts" somewhat stable.

Don't worry, though. Electron apps have heard the complaint and are coming for all the RAM you have! It's shocking just how much less RAM something like ncspot (curses/terminal client for Spotify) uses than the official app...


> especially with NVMe, you don't have to read things sequentially

NVMe is actually not any better than SATA SSDs at random/low-queue-depth IO. The latency-per-request is about the same for the flash memory itself and that's really the dominant factor in purely random requests.

Of course, pretty soon NVMe will be used for DirectStorage, so it'll be preferable in terms of CPU load/game smoothness, but just in terms of raw random access, SSDs really haven't improved in over a decade at this point. Which is what was so attractive about Optane/3D XPoint... it was the first improvement in disk latency in a really long time, and that makes a huge difference in tons of workloads, especially consumer workloads. The 280/480GB Optane SSDs were great.

But yeah you're right that paging and compression and other tricks have let us get more out of the same amount of RAM. Browsers just need to keep one window and a couple tabs open, and they'll page out if they see you launch a game, etc, so as long as one single application doesn't need more than 16GB it's fine.

Also, games are really the most intensive single thing that anyone will do. Browsers are a bunch of granular tabs that can be paged out a piece at a time, whereas you can't really do that with a game. And games are limited by what's being done with consoles... consoles have stayed around the 16GB mark for total system RAM for a long time now too. So the "single largest task" hasn't increased much, and we're much better at paging out the granular stuff.


Latency may be similar but:

1. Pretty sure the OS keeps the IO queue depth as high as it can, so low depth only happens on a mostly idle system.

2. Throughput of NVMe is 10x higher than SATA. So in terms of “time to read the whole file” or “time to complete all I/O requests”, it is also meaningfully better.


> NVMe is actually not any better than SATA SSDs at random/low-queue-depth IO.

The fastest NVMe SSD[0] on UserBenchmark appears to be a fair bit faster at 4k random read compared to the fastest SATA SSD[1].

75 MB/s vs 41.9 MB/s Avg Random 4k Read.

1,419 MB/s vs 431 MB/s Avg Deep queue 4k read

Edit: This comment has been edited, originally I was comparing a flash SATA SSD vs an optane nvme drive, which wasn't a fair comparison.

[0]: https://ssd.userbenchmark.com/SpeedTest/1311638/Samsung-SSD-...

[1]: https://ssd.userbenchmark.com/SpeedTest/1463967/Samsung-SSD-...


That might be with a higher queue depth though, like 4K random QD=4 or something - I don't see where it says "QD=1" or similar anywhere there, and that's a fairly high result if it was really QD=1.

It's true that NVMe does better with a higher queue depth, but consumer workloads tend to be QD=1 (you don't start the next access until this one has finished), and that's the pathological case due to the inherent latency of flash access. Flash is pretty bad at that scenario whether it's SATA or NVMe.

https://images.anandtech.com/graphs/graph11953/burst-rr.png

https://www.anandtech.com/show/11953/the-intel-optane-ssd-90...

So eh, I suppose it's true that NVMe is at least a little better in random 4K QD=1 - a 960 Pro is 59.8 MB/s vs 38.8 for the 850 Pro (although note that's only a 256GB drive, and those often don't have all their flash lanes populated; a 1TB or 2TB might be faster). But it's not really night-and-day better, they're still both quite slow. In contrast Optane can push 420-510 MB/s in pure 4K random QD=1.
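
Rough arithmetic behind those numbers (the latency figures are my own ballpark assumptions, not from the linked reviews): at QD=1 the drive only ever has one request in flight, so throughput is just block size divided by per-request latency.

    BLOCK = 4096  # bytes per 4K random read

    def qd1_mb_per_s(latency_us):
        """Throughput when exactly one request is in flight at a time."""
        return BLOCK / (latency_us / 1e6) / 1e6

    for name, lat in [("SATA flash, ~100 us", 100),
                      ("NVMe flash, ~70 us", 70),
                      ("Optane, ~10 us", 10)]:
        print(f"{name}: ~{qd1_mb_per_s(lat):.0f} MB/s")
    # ~41, ~59 and ~410 MB/s -- the same ballpark as the figures above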


Also people forget that the jump from 8 bit to 16 bit doubled address size, and 16 to 32 did it again, and 32 to 64, again. But each time the percentage of "active memory" that was used by addresses dropped.

And I feel the operating systems have gotten better at paging out large portions of these stupid electron apps, but that may just be wishful thinking.


Memory addresses were never 8 bits. Some early hobbyist machines might have had only 256 bytes of RAM present, but the address space was always larger.


Yeah, the 8-bit machines I used had a 16-bit address space. For example, from my vague/limited Z80 memories, most of the 8-bit registers were paired - so if you wanted a 16-bit address, you used the pair. Too lazy to look it up, but with the Z80 I seem to remember about 7 8-bit registers, and that allowed 3 pairs that could handle a 16-bit value.


Even the Intel 4004--widely regarded as the first commercial microprocessor--had a 12-bit address space.


This got me thinking, and I went digging even further into historic mainframes. These rarely used eight-bit bytes, so calculating memory size on them is a little funny. But all had more than 256 bytes.

Whirlwind I (1951): 2048 16-bit words, so 4k bytes. This was the first digital computer to use core memory (and the first to operate on more than one bit at a time).

EDVAC (designed in 1944): 1024 44-bit words, so about 5.6k.

ENIAC (designed in 1943): No memory at all, at least not like we think of it.

So there you go. All but the earliest digital computers used address spaces greater than eight bits wide. I'm sure there are some microcontrollers and similar that have only eight-bit address spaces, but general-purpose machines seem to have started at 12 bits and gone up from there.


The ENIAC was upgraded to be a stored-program computer after a while, and eventually had 100 words of core memory.


I actually have 100GB of RAM in my desktop machine! It's great, but my usage is pretty niche. I use it as drive space to hold large ML datasets for super fast access.

I think for most use cases ssd is fast enough though.
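
In case anyone's curious, on Linux the simplest version of this trick is just parking the dataset in a tmpfs like /dev/shm. A hypothetical sketch of my own (paths are made up, and not necessarily how I have it set up):

    import shutil
    from pathlib import Path

    dataset = Path("/data/train_shards.tar")       # hypothetical on-disk copy
    ram_copy = Path("/dev/shm") / dataset.name     # tmpfs, i.e. backed by RAM

    if not ram_copy.exists():
        shutil.copy(dataset, ram_copy)             # pay the disk read once

    with ram_copy.open("rb") as f:                 # later reads come from RAM
        header = f.read(4096)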


From the song:

> It does all my work without me even askin'

It sounds like Weird Al had the same use case!


I think it's just RAM reaching the comfortable level, like other things did.

Back when I had a 386 DX 40 MHz with 4MB RAM and a 170MB disk, everything was at a premium. Drawing a game at a decent framerate at 320x200 required careful coding. RAM was always scarce. That disk space ran out in no time at all, and even faster once CD drives showed up.

I remember spending an inordinate time on squeezing out more disk space, and using stuff like DoubleSpace to make more room.

Today I've got a 2TB SSD and that's plenty. Once in a while I notice I've got 500GB worth of builds lying around, do some cleaning, and problem solved for the next 6 months.

I could get more storage but it'd be superfluous, it'd just allow for accumulating more junk before I need a cleaning.

RAM is similar, at some point it ceases to be constraining. 16GB is an okay amount to have unless you run virtual machines, or compile large projects using 32 cores at once (had to upgrade to 64GB for that).


16G is just enough that I only get two or three OOM kills a day. So, it's pathetically low for my usage, but I can't upgrade because it's all soldered on now! 64G or 128G seems like it would be enough to not run into problems.


What are you doing where you're having OOM kills? I think the only time that's ever happened to me on a desktop computer (or laptop) was when I accidentally generated an enormous mesh in Blender.


Blender fun: Select All -> Subdivide -> Subdivide -> Subdivide ... wait, where did all my memory go!

I have learnt the ways of vertices. Not great at making the model I want, but getting there.


As someone on a desktop with 64GB, I find Firefox still manages to slowly creep up to around 40% of RAM, occasionally causing OOM issues.


I wonder if you have a runaway extension... I haven't seen this type of issue from Firefox in a while.


I also have 64GB on my home PC and Firefox tends to get into bad states where it uses up a lot of RAM/CPU too. Restarting it usually fixes things (with saved tabs so I don't lose too much state).

But outside of bugs I can see why we're not at 100GB - even with a PopOS VM soaking up 8GB and running Firefox for at least a day or two with maybe 30 tabs, I'm only at 21GB used. Most of that is Firefox and Edge.


Yeah, it's definitely a cache/extension thing; usually when it gets near the edge I also restart. I do wish there were a way to set a max cache size for Firefox.


FF using 1GB here, after several hours. You must have a leak somewhere.


How many windows/tabs is a much more relevant question in my experience.


FF doesn't load old tabs so you can't just count them if they've been carried over since a restart.


True, active tabs matter, but people who have hundreds of tabs open yet only rarely open new ones or touch old ones are probably also quite rare.


I open and close one or two times per day. Might have a dozen+ open at once, not a huge number.


I usually have 800 to 1000 tabs open and Firefox only uses a few GB.


128 GiB here. Still run into OOMs occasionally.


> Drawing a game at a decent framerate at 320x200 required careful coding.

320x200 is 64,000 pixels.

If you want to maintain 20 fps, then you have to render 1,280,000 pixels per second. At 40 MHz, that's 31.25 clock cycles per pixel. And the IPC of a 386 was pretty awful.

That's also not including any CPU time for game logic.
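
The budget in one line, for anyone who wants to play with the numbers (assuming full-screen redraws):

    width, height, fps, clock_hz = 320, 200, 20, 40_000_000
    pixels_per_second = width * height * fps     # 1,280,000
    print(clock_hz / pixels_per_second)          # 31.25 cycles per pixel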


Hint: you don't replace all pixels at once.


Most PC games of the VGA DOS era did exactly that, though.

But, well, a lot can be done in 30 cycles. If it's a 2D game, then you're mostly blitting sprites. If there's no need for translucency, each row can be reduced to a memcpy (i.e. probably REP MOVSD).

Something like Doom had to do a lot more tricks to be fast enough. Though even then it still repainted all the pixels.
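
Roughly what an opaque sprite blit boils down to - each row is one contiguous copy into the flat mode-13h framebuffer, which is why REP MOVSD was enough. A sketch in Python rather than assembly, with sizes that are just my example:

    SCREEN_W, SCREEN_H = 320, 200
    framebuffer = bytearray(SCREEN_W * SCREEN_H)    # flat 8-bit VGA-style buffer

    def blit_opaque(sprite, sprite_w, sprite_h, x, y):
        """Copy an opaque sprite into the framebuffer, one row at a time."""
        for row in range(sprite_h):
            dst = (y + row) * SCREEN_W + x
            src = row * sprite_w
            # one contiguous copy per row -- the moral equivalent of REP MOVSD
            framebuffer[dst:dst + sprite_w] = sprite[src:src + sprite_w]

    blit_opaque(bytes(256), 16, 16, 100, 50)        # a 16x16 sprite at (100, 50)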


You might enjoy the Fridman interview of Carmack. He talks about that era’s repaint strategies.


For most users that is true. I think there were several applications that drove the demand for more memory, then the 32bit -> 64bit transition drove it further but now for most users 16GB is plenty.


16 GB RAM is above average. I've just opened BestBuy (US, WA, and I'm not logged in so it picked some store in Seattle - YMMV), went to the "All Laptops" section (no filters of any kind) and here's what I get on the first page: 16, 8, 12, 8, 12, 4, 4, 4, 4, 8, 8, 8, 8, 16, 16, 4, 4, 8. Median value is obviously 8 and mean/average is 8.4.
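
For anyone who wants to re-run those numbers:

    from statistics import mean, median
    ram_gb = [16, 8, 12, 8, 12, 4, 4, 4, 4, 8, 8, 8, 8, 16, 16, 4, 4, 8]
    print(median(ram_gb), round(mean(ram_gb), 1))  # 8.0 8.4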

I'd say that's about enough to comfortably use a browser with a few tabs on an otherwise pristine machine with nothing unusual running in the background (and I'm not sure about the memory requirements of all the typically preinstalled crapware). Start opening more tabs or install some apps and 8GB of RAM is going to run out real quick.

And it goes as low as 4 - which is a bad joke. That's appropriate only for quite special low-memory uses (like a thin client, preferably based on a special low-resource GNU/Linux distro) or "I'll install my own RAM anyway so I don't care what comes stock" scenario.


I agree that 4 GiB is too low for browsers these days (and has been for years), but that is only because the web is so bloated. But 4 GiB would also be a waste on any kind of thin client; plenty of local applications should run fine on that with the appropriate OS.


Low end chromebooks have 4gb, but they pretty much count as thin clients. :)


I think it's really quite the opposite.

Compute resource consumption is like a gas, it expands to fill whatever container you give it.

Things didn't reach a comfortable level, Moore's Law just slowed down a lot, so bloat slowed at pace. When developers can get a machine with twice the resources every year and a half, things feel uncomfortable real quick for everybody else. When developers can't... things stop being so uncomfortable for everyone.

However, there is a certain amount of physical reality that has capped needs. Audio and video have limits to perceptual differences; with a given codec (which are getting better as well) there is a maximum bitrate where a human will be able to experience an improvement. Lots of arguing about where exactly, but the limit exists and so the need for storage/compute/memory to handle media has a max and we've hit that.


RAM prices haven't dropped as fast as those of other parts, and images don't really get much "bigger" any more (this drove a lot of early memory jumps: as monitor sizes grew, so did the RAM needed to hold the video image, and image file sizes also grew to "look good" - the last jump we had here was retina).

The other dirty secret is home computers are still single-tasking machines. They may have many programs running, but the user is doing a single task.

Server RAM has continued to grow each and every year.


> The other dirty secret is home computers are still single-tasking machines. They may have many programs running, but the user is doing a single task.

It isn't often you really want it, but it is nice to have when you do. I am still bitter at Intel for mainstream 4-core'ing it for years...


Pretty sure 4 gigs of RAM was the common consumer level then, but I take your point. I think the average consumer became rather less affected by the benefits of increased RAM somewhere around 8GB, and that let manufacturers get away with keeping on turning out machines that size. Specialist software and common OSes carried on getting better at using more if you had more demanding tasks to do, which is probably quite a lot of the people here, but not a high % of mass market computer buyers.

Honestly I think the pace of advance has left me behind too now, as the pool of jobs that really need the next increment of "more" goes down. There might be a few tasks that can readily consume more and more silicon for the foreseeable future, but more and more tasks will become "better doesn't matter". (Someone's going to butt in and say their workload needs way more. Preemptively, I am happy for you. Maybe even a little jealous and certainly interested to hear. But not all cool problems have massive data.)


In 1995, I remember buying a Pentium PC with 32 megs of RAM. Gigabytes of RAM wasn't common until the early 2000's!


>Pretty sure 4 gig RAM was common consumer level then

I wish. Y2k3, AMD Athlon, 256MB of RAM. That could run the whole KDE, Kopete, Konqueror and Amarok DE combo in a breeze.


In 1992, I had an LCII with 10 MB RAM and that was considered extravagant. 24MB in 1994 was considered high end.

I bought a 32MB SIMM to attach to my DX/2-66 DOS Compatibility Card in my PowerMac 6100/60 for $300 in 1996.

Even as late as 2008, having 4GB RAM was not common.


DRAM hit a scaling wall about a decade ago. Capacity per dollar has not been improving significantly, certainly not at the exponential rate it had been for the prior 50 years or so.

https://thememoryguy.com/dram-prices-hit-historic-low/

As to why, search for words like "dram scaling challenge". I'm no expert but I believe capacitor (lack of) scaling is the biggest problem. While transistors and interconnections continued shrinking, capacitors suitable for use in DRAM circuits ran out of steam around 2xnm.


8gigs is "enough" for most people.

Even modern games don't usually use more than eg. 16gigs

Developers are a whole different story, that's why it's not unusual to find business-class laptops with 64+ gigs of ram (just for the developer to run the development environment, usually consisting of multiple virtual machines).


It was pretty difficult to use 64GB of RAM on my old desktop. 95% of my usage was Firefox and the occasional game. The only things that actually utilized that RAM were After Effects and Cinema 4D, which I only use as a hobby. I felt kinda dumb buying that much RAM up until I got into AE and Cinema 4D because most of it just sat there unused.


Can confirm my Mac has 64.


To be fair, CPUs haven't improved a ridiculous amount either.

10 years ago, mainstream computers were being built with i5-2500ks in them. Now, for a similar price point you might be looking at a Ryzen 5 5600x. User Benchmark puts this at a 50-60% increase in effective 1/2/4 core workloads, and a 150-160% increase in 8 core workloads.

Compared to the changes in SSDs (64GB/128GB SATA3 being mainstream then, compared to 1TB NVMe now) or GPUs (Can an HD6850 and RTX 3060 even be compared!?), it's pretty meagre!


https://cpu.userbenchmark.com/Compare/Intel-Core-i5-2500K-vs... is the comparison you’re referring to.

I recall hearing a few years back that User Benchmark was wildly unreliable where AMD was involved, presenting figures that made them look much worse than they actually were. No idea of the present situation. I also vaguely recall the “effective speed” rating being ridiculed (unsure if this is connected with the AMD stuff or not), and +25% does seem rather ridiculous given that its average scores are all over +50%, quite apart from things like the memory speed (the i5-2500K supported DDR3 at up to 1333MHz, the 5600X DDR4 at up to 3200MHz).

An alternative comparison comes from PassMark, who I think are generally regarded as more reliable. https://www.cpubenchmark.net/compare/Intel-i5-2500K-vs-AMD-R... presents a much more favourable view of the 5600X: twice as fast single-threaded, and over 5.3× as fast for the multi-threaded CPU Mark.


In my experience, UserBenchmark will only show exactly one review per part, usually written early in the lifecycle and never updated. Sometimes this is written by random users (older parts), sometimes by staff. All reviews are basically trash, especially the staff reviews.

Also, data fields like 64-Core Perf were made less prominent on Part Lists and Benchmark Result pages around the time Zen+ and Zen 2 All-Core outperformed comparable Intel parts. 1-Core Perf and 8-Core Perf were prioritized on highly visible pages, putting Intel on top of the default filtered list of CPUs.

However, the dataset produced by the millions of benchmarks remains apparently unmolested, and all the data is still visible going back a decade or more, if not slightly obscured by checkboxes and layout changes. (https://www.userbenchmark.com/UserRun/1)


Thanks for that. I'd always just taken them at face value, since they seem authoritative.


Overall RAM in your systems HAS been steadily increasing, it's just targeted at the use cases where that matters, so it's primarily been in things like the GPU rather than system memory. In the past ten years GPUs have gone from ~3GB[1] to now 24GB[2] in high-end consumer cards.

Another factor is speed: in that ten-year span we went from DDR3 to DDR5, moving from roughly 17,000 MB/s to 57,600 MB/s of peak bandwidth per channel (PC3-17000 to PC5-57600). SSDs are also much more common, meaning it's easier to keep RAM full of only the things you need at the moment and drop what you don't.
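
A quick sanity check on where those DDR bandwidth figures come from (the specific module speeds are my assumption; peak bandwidth is just transfer rate times the 8-byte channel width):

    def peak_mb_per_s(mt_per_s):
        """Peak bandwidth of one 64-bit DDR channel, in MB/s."""
        return mt_per_s * 8

    print(peak_mb_per_s(2133))   # 17064 -> DDR3-2133, sold as PC3-17000
    print(peak_mb_per_s(7200))   # 57600 -> DDR5-7200, sold as PC5-57600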

In terms of RAM density increasing, I think it's been more driven by demand than anything and there simply isn't a serious demand out there except in the very high end server space. Even compute is more driven by GPU than system memory, so you're mostly talking about companies that want to run huge in-memory databases, versus in the past when the entire industry has been starving for more memory density. The story has been speed matters more.

I'd also suggest that improvements in compression and hardware-assisted decompression for video streams have made a difference, as whatever is in memory on your average consumer device is smaller.

Coupling this with efficiency improvements of things like DirectStorage where the data fed into the GPU doesn't even need to touch RAM, the requirement to have high system memory available will be further lessened for gaming or compute workloads going forward.

[1]: https://www.anandtech.com/bench/GPU12/372

[2]: https://www.nvidia.com/en-us/geforce/graphics-cards/30-serie...


Worse, it's now all soldered in, so we can't even upgrade it.

Like - fuck. this. shit.

I will take - in a heartbeat - (we'd all better!) whatever slight decrease in performance and slightly larger design comes from RAM not being soldered, if it means I can actually upgrade it later on after my purchase.

Not just because I want continued value out of my purchase - and that's not even getting started on the amount of e-waste that must be created by this selfish, harmful, wasteful pattern.

Think about all the MacBook Airs out there stuck with 4GB of RAM and practically useless 128GB SSD's that could be useful if Apple didn't intentionally make them useless.

We've certainly gone very far backward in terms of any computer hardware worth purchasing in the last ten years.

I really don't get the point - beyond greed - of this whole soldering garbage. No performance or size benefits can be worth not being able to continue to reap value from my investment a couple of years down the road.

Upgradability not only creates seriously less waste, but also nurtures a healthy, positive attitude of valuing our hardware instead of seeing it as a throwaway, replaceable thing.

I truly hope environmental laws make it illegal at some point. The EU seems to be good with that stuff. The soldering issue accomplishes literally nothing beyond nurturing, continuing, and encouraging a selfish culture of waste that only gets worse the more and more accepted we allow it to become.

We've only got one planet. It's way - way - WAY - past time we started acting like it.

Ten years ago, the state of hardware was far more sustainable than it is now.

For. Fucking. Shame.


I'm still daily driving my 2013 MacBook Air with a 128GB SSD and, well, 8GB RAM. Hundreds of browser tabs, Photoshop, KiCad and Illustrator often at the same time. Never reboot, always sleep. Runs Catalina. Best machine ever. [edit] while going through many Eee PCs, several ThinkPads and two desktop PCs in that period [edit2: those were prolly older than 10 years tho ;)]


I don't really understand how you do this. Even running Arch Linux with clamd and a tiny window manager takes ~2-4 GB of RAM to do effectively "nothing".

So, of course, macOS takes ~8GB of RAM to do effectively nothing, and it's about the same for Windows.

Unless you're running something like alpine Linux these days, you're going to be eating a ton of RAM and cycles for surveillance, telemetry, and other silly 'features' for these companies.


I've used those MacBook Airs before and those weak dual-core CPUs really don't have much beef to them, and the screens are these 1440x900 TN panels (not sure about the vertical resolution), which was low spec even for the time.

I don’t see upgradability being essential to longevity. I think you can just buy everything you need for the next decade on day one.

The soldering uses significantly less power.


>> I don’t see upgradability being essential to longevity. I think you can just buy everything you need for the next decade on day one.

Wow, that statement reeks of having so much privilege it’s like you forgot that this isn’t financially feasible for a lot of people, and that upgrading down the line allowed someone to afford a lower spec computer at the time.

Like - you do get that not everyone's wealthy af, yeah? And that companies intentionally bloat the cost of first-party memory and stuff, so that if you're not insanely privileged - like I guess you are - then no, it doesn't even make sense.

All this does is steal from the poor to give to the rich.


That’s a pretty wildly inaccurate guess given I bought that MacBook Air when I was in poverty. It was my life savings at the time.

It's a matter of efficiency: somebody pays for all those RAM slots to get made and for the electricity to power them. Now that computer memory is growing more slowly generation over generation, and now that other sources of power waste have been eliminated, I reckon there's less and less reason to go with slots.

It is a shame that this allows vendors to charge non-market prices for products, but the problem I have is that this has become people's main argument to preserve slotted memory: it's an objectively inferior technology for the mobile devices we use, kept around so we can pay market prices for memory. Realistically though, slotted memory is doomed if that's the only argument for it, since manufacturers have every incentive to stop offering slots. Even if they wanted to offer customers competitive market prices for memory, there's little reason they couldn't do that with soldered memory.


When I bought my laptop this was exactly my thinking, so I immediately put 128GB aftermarket RAM into it.


I remember the old setup - buy or build a PC, and a year or two later max out the RAM (for significantly less money by then) and get another year or two out of it.

Then for awhile you could use SSDs to do the same thing.

But now it’s all laptops as far as the eye can see, with soldered RAM.


I have 64G of RAM in my framework laptop with room for 64G more. It's not all bad. (The state of mobile phone storage (not to mention removable batteries) has gotten significantly worse over time. Heck, they even dropped the 3.5mm headphone jack which is honestly fucking insane.)


Framework is the only laptop I will consider purchasing in the future, for sure.


I think RAM has been increasing, but for the GPU rather than the CPU. Also, I think Microsoft isn't quite as bad as Apple at memory usage, so it still isn't that easy for a large number of people to really need more than 8GB, and for anyone who plays games it is better to have extra memory in the GPU.

I don't think there is all that much swapping needed with 8GB, but most CPU memory tends to be used as disk cache, so it still helps to have faster disks.

I think of computers that started to have 8GB RAM over a decade ago as the "modern era of computing", and those systems are still fine now (at least with an SSD) for a wide range of casual use, other than software that explicitly requires newer stuff (mostly GPU, sometimes virtualization or vector stuff). My sense is most hardware changes since have been in the GPU and SSDs.


This is where it becomes apparent that the way I use my machine is very different to some folks on here.

It is a Lenovo T400 with 4GB of RAM. In order to maximise the lifespan of the SSD, I knocked out the swap partition. So that is 4GB, with no option to move beyond it.

That said, in daily use I have never seen my usage go anywhere near 3GB, but I suspect that is just because I am very frugal with my browsing and keep cleaning up unused tabs.


Put Windows/MacOS on it and you'll use 3 GB just booting up :p.


No background or industry-specific knowledge, but I'd venture to guess smartphones/mobile computing added a lot to the demand for RAM and outpaced increases in manufacturing capacity.

I'd guess that now that the market for smartphones is pretty mature, we should start seeing further RAM increases in the coming years.


> I'd guess the problem is in scaling RAM density itself

Yea, seems[1] that's the issue. I wrote some more about it in my other comment[2].

[1]: https://www.allaboutcircuits.com/news/micron-unveils-1a-dram...

[2]: https://news.ycombinator.com/item?id=32758022


I would guess it’s because the average user just doesn’t use enough ram for it to matter. My current desktop built in 2014 has 64GB and it seems like I never come close to using it all even running multiple virtual machines and things like that.


> Interesting point that RAM has basically not moved up in consumer computers in 10+ years... wonder why.

Smartphones gradually became the limiting factor



