Of course, the other plausible explanation is that (at least at the time) Weird Al didn't know the difference between RAM and hard drive space (especially likely, because he's talking about defragging a hard drive). In fact, I'd be surprised if that wasn't what he meant. I will, however, leave room for him knowing the difference, but intentionally being absurd (it is Weird Al, after all).
That problem (referring to things that are not conventional, fast, volatile random-access memory as "RAM") seems to have only gotten worse in the last twenty years - exacerbated, I believe, by the increased use of flash technology (SSDs, etc., which have admittedly blurred the lines some).
It also doesn't help that smart phone and tablet manufacturers/advertisers have long insisted on referring to the device's long-term storage (also flash tech) as "memory". I doubt that will ever stop bugging me, but I've learned to live with it, albeit curmudgeonly.
I’m going to have to stan for Weird Al here and say that there’s basically 0% chance that he didn’t know the difference between RAM and hard drive space. He’s actually quite meticulous in his songwriting and pays a lot of attention to details. And with a mostly (white and) nerdy (ba-dum-tssshhh!) audience he knows he’d never hear the end of it if he screws up. Must be quite the motivator to get things right.
He even mentions hard drive space separately, as something comparable to removable storage:
You could back up your whole hard drive on a floppy diskette
You're the biggest joke on the Internet
I can only conclude that he did understand the distinction between RAM and hard drive space. The song is full of jokes that show a deep understanding of the topic. Indeed I don't find any nits to pick.
Also, taken in context, telling somebody their whole HDD fits in a floppy is entirely consistent with bragging about your entirely unreal 100 gigs of RAM.
Just to be clear, I did not in any way mean this as disparaging toward Weird Al. I've been a fan of his music for decades.
But as I said elsewhere in this thread, I have been surprised in the past by people I would have described as fairly tech-savvy, who still called hard drive space "memory".
However, if the two phrases are related (as opposed to just being adjacent in the song), at this point I'd guess Al does probably know the difference, and the relationship is meant as over-the-top absurd bragging.
I mean, “memory” in the computer sense is by analogy to memory in the brain sense, which is used for both short- and long-term storage (and I’d argue more commonly for long-term storage). It therefore doesn’t seem unreasonable to me for people to call a computer’s long-term storage its “memory”. The function of a hard drive is to remember things.
He didn’t say “memory” - he said RAM! Why? Because it’s funnier and it rhymes with spam! Plus “hard drive space” is three syllables. It’s a song not a README, sheesh
I consider myself pretty tech savvy and call hard drives memory. HDDs exist in the middle of a memory hierarchy stretching from registers to cloud storage.
Never thought about it before, but I guess SSDs are random access, so it would be technically correct to call them RAM, and that’s my favorite kind of correct.
Jumping on the nitpick train: SSDs are less random-access for writes because of the requirement to explicitly erase an entire block before you can rewrite it. RAM doesn't have this constraint. It has other constraints, like writing and refreshing rows at a time, but combined with the wear that erase cycles cause in flash, you do end up with a significant difference in just how randomly you can access these memories.
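If it helps to see that constraint concretely, here's a toy sketch in Python (purely illustrative; real drives hide this behind a flash translation layer that remaps writes to pre-erased blocks rather than erasing in place):

    # Toy model of NAND flash write constraints (illustrative only).
    PAGES_PER_BLOCK = 64

    class FlashBlock:
        def __init__(self):
            self.pages = [None] * PAGES_PER_BLOCK
            self.written = [False] * PAGES_PER_BLOCK
            self.erase_count = 0  # wear accumulates per erase cycle

        def write(self, page, data):
            if self.written[page]:
                # Unlike RAM, you can't overwrite in place: the whole block
                # must be erased first, taking every other page with it.
                raise RuntimeError("page already written; erase the block first")
            self.pages[page] = data
            self.written[page] = True

        def erase(self):
            self.pages = [None] * PAGES_PER_BLOCK
            self.written = [False] * PAGES_PER_BLOCK
            self.erase_count += 1  # erase cycles are what wear flash out

    block = FlashBlock()
    block.write(0, b"hello")
    try:
        block.write(0, b"world")  # rewriting a page fails...
    except RuntimeError as err:
        print(err)
    block.erase()                 # ...until the entire block is erased
    block.write(0, b"world")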
SRAM is the only type of memory offering true random access. DRAM can suffer so much on pure random access that completely rewriting your algorithm to have mostly sequential access to 20x more memory still ends up 4x faster (lean miner vs mean miner in [1]).
Well, as long as we are nitpicking: random access does not say anything about the block size - you're not going to be writing individual random bits with any kind of memory. And random access also does not say that all blocks have the same access time, only that the time does not depend on what you have accessed previously. In conclusion, an array of spinning rust drives is random access memory.
> I have been surprised in the past by people I would have described as fairly tech-savvy, who still called hard drive space "memory".
That's not really incorrect, though. RAM is random access memory, as opposed to memory that has other access patterns.
It's less common these days, but among people who learned to use computers in the 80s, you'll still hear some use "memory" to refer to storage that isn't RAM.
Calling a hard drive “memory” can also be left over from very old computers which blurred the distinction between transient and permanent/stable memory.
At the time, I had used/tried out the following OSes
- BeOS 4.5 (including some beta versions)
- BeOS 5 (including some beta versions)
- Windows NT 4.0
- OS/2 Warp
- Windows 95 (and its service releases) (including beta versions)
- Windows 98 (and its service releases) (including beta versions)
- PC-DOS
- MS-DOS with DosShell
- MS-DOS with Windows 3.11
- Whatever old version of MacOS was on the school computers
- Slackware
(might have some dates wrong)
I was also on mIRC and downloading from newsgroups regularly.
I think many ISPs would give you guides about using email/newsgroups back then, as those services were considered required for an ISP. TUCOWS was super popular for this newfangled WinSOCK software (TUCOWS stands for The Ultimate Collection Of WinSOCK Software). I remember testing how fast the first consumer cable internet connections were by downloading from them.
You are right for most people that stuff was probably obscure.
At that time you'd have been mocked mercilessly for saying you were "on mIRC". It's a client program; you'd be on IRCnet, EFnet or QuakeNet, and the last one would earn you endless disrespect from IRCnet veterans.
Today, Usenet is obscure. The late '90s were right around when Usenet was at its peak. ISPs back then advertised newsgroups as a feature: "we have a full newsfeed", that sort of thing. I worked at one that built a server with a whopping 9 gigabyte hard drive just for the NNTP feed! That was pretty cutting edge back in 1995.
Maybe so, but I don't see anything in the lyrics that could not be taken from a friend who's "really good with computers".
At this point, however, I'm willing to give Al the benefit of the doubt. It certainly would be "in character" (for lack of a better term), for him to be knowledgeable enough to know the difference. But I have been surprised by others on this particular point in the past.
> I will, however, leave room for him knowing the difference, but intentionally being absurd (it is Weird Al, after all).
This isn't even intentional absurdity. The theme of the song is bragging. Here are some other lyrics.
I'm down with Bill Gates
I call him "Money" for short
I phone him up at home
and I make him do my tech support
Your laptop is a month old? Well that's great,
if you could use a nice heavy paperweight
Installed a T1 line in my house
Upgrade my system at least twice a day
The line about having 100 gigabytes of RAM is completely in keeping with every other part of the song. There's no more reason to think Weird Al might not have known what RAM was than there is reason to believe he didn't know that having Bill Gates do your tech support was unrealistic, or that PCs are rarely upgraded more than once a day.
I think the main point is that all the other (technical) brags are quite quaintly outdated (T1 lines are glacial by today's standards, while, interestingly, the lines about how quickly hardware becomes obsolete have become less accurate over time); on the other hand, 100GB of RAM is still quite extreme.
This was an active topic of debate back in the days when people still relied on modems. T1 gives you 1.544 Mbps of "bandwidth", but the fact that it's "guaranteed" bandwidth means it should be fast enough for anything you'd need to do as an individual user. If you had your own private T1 line all to yourself, the latency should feel the same as being on the same LAN as whatever service you're trying to access. Even a 56k modem still had a small but noticeable latency, especially if you're doing command-line work where you expect echo-back on each character you type.
People don't really understand the speed vs. bandwidth debate, but they do know the psychological difference when latency is low enough to be unnoticeable.
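For scale, a rough back-of-the-envelope of the raw throughput involved (ignoring protocol overhead, and the 5 MB file is just a hypothetical for illustration - latency, as noted above, is really the point):

    # Rough throughput comparison; ignores framing/protocol overhead.
    t1_bps = 1_544_000      # T1: 1.544 Mbps, full duplex, guaranteed
    modem_bps = 56_000      # V.90 modem: 56 kbps downstream at best

    print(f"T1:    ~{t1_bps / 8 / 1024:.0f} KiB/s")     # ~188 KiB/s
    print(f"Modem: ~{modem_bps / 8 / 1024:.1f} KiB/s")  # ~6.8 KiB/s

    # Time to pull down a hypothetical 5 MB file on each:
    size_bits = 5 * 1024 * 1024 * 8
    print(f"5 MB over T1:    ~{size_bits / t1_bps:.0f} s")            # ~27 s
    print(f"5 MB over modem: ~{size_bits / modem_bps / 60:.0f} min")  # ~12 min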
IIRC the point about T1 lines was their availability + uptime + bandwidth guarantees, but yeah, they're not so hot anymore given the ubiquity of other high-performance networking alternatives.
I'd say it differently: 64GB of RAM is easily purchasable and 128GB isn't much harder - just $$$.
But to my knowledge not many purchase it. As the article says, the defaults for new items on the main vendors are still 8 or 16. I suspect most could be happy with 32GB for the next 3 years but that isn't a default.
> There's no more reason to think Weird Al might not have known what RAM was than there is reason to believe he didn't know that having Bill Gates do your tech support was unrealistic
I understand why you might argue that, but I've been surprised in the past by people who were fairly-well-versed on computers (no pun intended), but still called hard drive space "memory".
I should go listen to that section of the song again (like I said elsewhere, it isn't one of my favorites of his). By intentional absurdity, I meant that I could see Al intending this to be a case of bragging in an absurd way "Defraggin' my hard drive for thrills, I got me a hundred gigabytes of RAM".
But now that I noticed the "I" in the middle there (I had missed it before), I'm guessing I was wrong in thinking the lines could be cause-and-effect (which would be the source of the absurdity).
Well, it is secondary memory. Besides, absolutely, it is memory. It's just not the main memory in a relative context about your computer.
In fact, "storage" and "memory" mean about the same thing. And since it's not technically the main or primary memory of our computers anymore, and RAM is actually an implementation detail that all of the in-machine memory shares, I'm not sure I have a good pedantic name for it.
I mean... My computer has a device called RAM that communicates with the north bridge by a protocol where sequential readings have lower latency than random ones. It also has a device called disk (or sometimes drive), that has nothing shaped like a disk (and is never loaded), and communicates with the north bridge by a protocol where sequential readings have lower latency than random ones.
At the CPU side, there is an in-package device that communicates with the memory controller by a protocol where sequential readings have lower latency than random ones. It communicates with an in-die device, that finally communicates with the CPU with a protocol that provides random access. That last one is obviously not called "RAM".
> I'm guessing I was wrong in thinking the lines could be cause-and-effect
This is the root cause of why everyone is arguing with you. For some reason, TFA quotes the defrag line and the RAM line as if they are somehow paired, which you've also latched onto, but it makes no sense to consider them as a pair. Many lines in this song are to be interpreted as their own complete thought. And if we really want to pair up consecutive lines, wouldn't we use lines that rhyme to do so?
...
Nine to five, chillin' at Hewlett Packard? // Workin' at a desk with a dumb little placard?
Yeah, payin' the bills with my mad programming skills // Defraggin' my hard drive for thrills
I got me a hundred gigabytes of RAM // I never feed trolls and I don't read spam
Installed a T1 line in my house // Always at my PC, double-clickin' on my mizouse
As I said before, this is not one of my favorite Weird Al songs (many of which I could sing along with word-for-word from memory). I was never crazy about the song, so I am not surprised that I didn't know the surrounding context (which the original article left out). Still, I don't think you can exclude the possibility of a cause-and-effect relationship spanning the middle of two rhyming lines (perhaps particularly, in rap music), but after seeing and hearing the lines in context, I don't think that was Al's intention here.
It's really not the users' fault. Most civilians understand that computers have to have some way to "remember" their files, but the fact that computers also need memory that "forgets" if the power goes off makes no sense to them.
It shouldn't make sense to us either; it's a ridiculous kludge resulting from the fact that we've never figured out how to make memory fast, dense, cheap, and nonvolatile at the same time.
Actually Intel did figure it out with Optane. Then they killed it because computer designers couldn't figure out how to build computers without the multilevel memory kludge. IMHO this is the single dumbest thing that happened in computer science in the last ten years.
My understanding is that the problems with Optane were a lot more complicated than that. @bcantrill and others talked about this on an episode of their Oxide and Friends Twitter space a few weeks ago. A written summary would be nice.
Thanks for the tip. I learned a lot listening to this [0].
My takeaway: Optane was good technology that was killed by Intel's thoroughly wrong-headed marketing. It's cheaper than DRAM but more expensive than flash. And much faster than flash, so it makes the fastest SSDs. But those SSDs are too expensive for everybody except niche server farms that need the fastest possible storage. And that's not a big enough market to get the kinks worked out of a new physics technology and achieve economies of scale.
Intel thought they knew how to fix that: They sold Optane DIMMs to replace RAM. But they also refused to talk about how it worked, be truthful about the fact that it probably had a limited number of write cycles, or describe even one killer use case. So nobody wanted to trust it with mission-critical data.
Worst of all, Intel only made DIMMs for Intel processors that had the controller on the CPU die. ARM? AMD? RISC-V? No Optane DIMMs for you. This was the dealbreaker that made every designer say "Thanks Intel but I'm good with DRAM." As they said on the podcast, Intel wanted Optane to be both proprietary and ubiquitous, and those two things almost never go together. (Well obviously they do. See Apple for example. But the hosts were discussing fundamental enabling technology, not integrated systems.)
> Most civilians understand that computers have to have some way to "remember" their files, but the fact that computers also need memory that "forgets" if the power goes off makes no sense to them.
Well, of course that makes no sense. It isn't true.
We use volatile memory because we do need low latency, and volatile memory is a cheap way to accomplish that. But the forgetting isn't a feature that we would miss if it went away. It's an antifeature that we work around because volatile memory is cheap.
I agree we would be fine if all memory was nonvolatile, as long as all the other properties like latency were preserved.
In terms of software robustness though, starting from scratch occasionally is a useful thing. Sure, ideally all our software would be correct and would have no way of getting into strange, broken or semi-broken states. In practice, I doubt we'll ever get there. Heck, even biology does the same thing: the birth of a child is basically a reboot that throws away the parent's accumulated semi-broken state.
We have built systems in software that try to be fully persistent, but they never caught on. I believe that's for a good reason.
It would take serious software changes before that became a benefit. If every unoptimized Electron app (but I repeat myself) were writing its memory leaks straight to permanent storage my computer would never work again.
> If every unoptimized Electron app (but I repeat myself) were writing its memory leaks straight to permanent storage my computer would never work again.
This is a catastrophic misunderstanding. I have no idea how you think a computer works, but if memory leaks are written to permanent storage, that will have no effect on anything. The difference between volatile and non-volatile memory is in whether the data is lost when the system loses power.
A memory leak has nothing at all to do with that question. A leak occurs when software at some level believes that it has released memory while software at another level believes that that memory is still allocated to the first software. If your Electron app leaks a bunch of memory, that memory will be reclaimed (or more literally, "recognized as unused") when the app is closed. That's true regardless of whether the memory in question could persist through a power outage. Leaks don't stay leaked because they touch the hard drive -- touching the hard drive is something memory leaks have done forever! They stay leaked because the software leaking them stays alive.
> The difference between volatile and non-volatile memory is in whether the data is lost when the system loses power.
I'm aware. This is a feature for me - I disable suspend/hibernate/resume functionality. I don't want hiberfile.sys taking up space (irrelevant in this scenario, I guess) and I certainly don't want programs to reopen themselves after a restart, especially if it was a crash. If all storage were nonvolatile, OSes would behave as though resuming from hibernate (S4) all the time.
> that memory will be reclaimed [. . .] when the app is closed.
Again, I'm aware. I'm glad you've never had any sort of crash or freeze that would prevent closing a program, but it does happen.
OSes would need to implement a sort of virtual cold boot to clear the right areas of memory, even after a BSOD or kernel panic. Probably wouldn't be that hard, but it would have to happen.
You could still have a restart "from scratch" feature in the OS. But persistent RAM could potentially mean the power dropping for a few seconds means you don't lose your session.
I used to explain it to clients as the difference between having your files nicely organized in cabinets or on shelves, and having the contents of several different files strewn over the desk. For people who like to keep a tidy desk this metaphor made immediate sense.
It explains some aspects and obscures some other ones.
It explains how the documents at arm's reach on the desk are faster to access than the things in the cabinets. It obscures the fact that stuff on the desk disappears if the power supply glitches even just a little.
In fact we invented these batteries with electronics which sense if the electricity is about to go out, so the computer has time to carry the documents from the desk to the cabinets before they disappear. And we think this is normal and even necessary in pro settings. (I’m talking about uninterruptible power supplies, of course.)
A lot of people clear their desk before leaving the office, either because they're tidy and well organized or because they don't want to leave confidential info out where the cleaning staff could browse through it. It's not hard to extend the metaphor to a reboot of the computer being like the end of the workday.
Just three days ago, professional streamer Hakos Baelz made this exact mistake.
...and then had to be told that the cable you plug into the back of a landline telephone isn't just a power connector. Sadly, I doubt we'll ever know how she thought telephones actually worked.
I could be wrong, but I think back in the day the computer monitor was the whole system for debugging and viewing the state of a running computer (which is why the debugger on an Apple II is called a monitor, for example) but now we just use it to mean the physical screen.
CRT monitors were a relatively late addition to terminal devices. Traditional terminals were hardcopy printers; they printed the output you received onto paper.
I was about to say that confusing the first two makes perfect sense because it's been a long time since the Desktop has had any real purpose beyond displaying a background image. Then I realised that the kind of people we're talking about probably have a shortcut for every single application they've ever installed on their desktop and have no idea how to delete them.
I've seen the ones on the second set meaning completely different things from one application or environment to another. Even switching the meaning between themselves.
Your family must be very different than mine! (and coworkers too:)
I have frequently heard the phrase "you plug into here to get wifi" at work. My family and in-laws absolutely positively do not understand the difference between wifi and internet. In particular, the idea that they may be connected to wifi but not have internet is an alien, incomprehensible notion.
It depends who you hang around with. Computer professionals understand the difference. Most others do not, as they don't need to - empirically, to the majority of the population, wifi IS internet (as opposed to cell phones, of course, which use "data":)
My children complain the wifi is down when their ethernet cable is broken. They say that AFTER THEY TELL ME IT'S BROKEN. This is not just a meme, they should know better, and are very unhappy on WiFi, but still tend to call all internet Wi-Fi.
There was an incident years ago with a library offering wired Ethernet for laptops, and the sign said "wired wifi connection". I'm not sure it's really that common.
> The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'."
> The Wi-Fi Alliance used the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created, and the Wi-Fi Alliance was also called the "Wireless Fidelity Alliance Inc" in some publications.
Elsewhere I've seen that it was chosen because of its similarity to HiFi - which is short for "High Fidelity" (regarding audio equipment):
> High fidelity (often shortened to Hi-Fi or HiFi) is the high-quality reproduction of sound. It is important to audiophiles and home audio enthusiasts. Ideally, high-fidelity equipment has inaudible noise and distortion, and a flat (neutral, uncolored) frequency response within the human hearing range.
However, there seems to be a lot of debate as to whether the 'Fi' in WiFi was ever fully accepted to mean "Fidelity". So pretty much it seems marketing people liked the way "WiFi" sounded, so they used it.
Assuming most people use the combo modem/router (WAP, technically, since this entire thread is an exercise in pedantry) provided by their ISP, this makes sense from a non-technical user's perspective, if Comcast always ships hardware with the latest Wifi spec. Of course, you need a whole mess of asterisks after that, because I don't think I own a single client device with Wifi 6E, plus network contention, time sharing, etc. Gamers should do what I do and plug a wire thingy from their computer to their Wifi thingy.
For a lot of people, all their connected devices at home are wireless: smart devices, phones, many have laptops rather than desktops, tablets, …, and while out they connect to other wi-fi networks. It is easy to conflate WiFi and cellular data access, and if you don't use much or any wired networking at home, all your normal network access is therefore WiFi.
Screensaver as wallpaper is more rare but I have heard it, and have done for some time (I know non-technical people who have used wallpaper and screen saver wrong for many years, going back to when fancy screen savers were more common than simply powering off, either calling both by one name or just using them randomly). More common these days though is people simply not knowing the term, except to confuse it with a physical screen protector.
Those lines are separate lines of separate couplets.
The lines around it are
"Paying the bills with my mad programming skills
Defragging my hard drive for thrills
Got me a hundred gigabytes of RAM
I never feed trolls and I don't read spam"
They're not really related to each other besides being right next to each other. The lines rhyme with other lines. Which, if you were trying to link ideas lyrically, is where you'd do it, on the rhyme.
But, the entire song is just a litany of various brags centered around technology. It is, like most of Al's work, pretty clever and knowledgeable. Not only of the source material, but of the subject presented.
Yeah, I guess it depends on whether you interpret them as separate.
It is possible to think of it in a cause-and-effect way "Defragging my hard drive for thrills got me a hundred gigabytes of RAM". Which, honestly, I could see Al saying that as a purposefully absurd statement (because the first could not cause the second, but people brag like that all of the time).
I will admit that, although it seems like it should be (given my profession), this is not one of my favorite Weird Al songs --and I've been a fan for decades. So while I have heard it many times, I can't remember the last time I listened to it.
There's really only one interpretation. There's a definite pause between the lines as well. This isn't really a matter of perspective. The only way I can concede that it's possible to read those lines in your way is not particularly flattering to you.
However, I'm sure I have heard a few people use RAM when they meant hard drive space. Though, in fairness, those were probably friends/relatives who used computers but didn't know a lot about how they actually work.
The thing people are discussing is the 100GiB of RAM line. You wouldn’t typically defrag RAM (although that was a thing back then) and 100GiB is more indicative of HD space and not RAM. So the question is whether the two lines are related and did Weird Al say that defragging his disk freed up 100GiB of RAM (an impossible amount at the time) or 100 GiB of storage and got RAM and disk confused.
Interesting point that RAM has basically not moved up in consumer computers in 10+ years... wonder why.
The article answers its own question by pointing out that SSDs allow better swapping, the bus is wider, RAM is more and more integrated into the processor, etc.
But I think this misses the point. Did these solutions make it uneconomic to increase RAM density, or did scaling challenges in RAM density lead to a bunch of other nearby things getting better to compensate?
I'd guess the problem is in scaling RAM density itself because performance is just not very good on 8gb macbooks, compared to 16gb. If "unified memory" was really that great, I'd expect there to be largely no difference in perceived quality.
Does anyone have expertise in RAM manufacturing challenges?
I don't think SSDs allowing rapid swapping is as big a deal as SSDs being really fast at serving files. On a typical system, pre-SSD, you wanted gobs of RAM to make it fast - not only for your actual application use, but also for the page cache. You wanted that glacial spinning rust to be touched once for any page you'd be using frequently because the access times were so awful.
Now, with SSDs, it's a lot cheaper and faster to read disk, and especially with NVMe, you don't have to read things sequentially. You just "throw the spaghetti at the wall" with regards to all the blocks you want, and it services them. So you don't need nearly as much page cache to have "teh snappy" in your responsiveness.
We've also added compressed RAM to all major OSes (Windows has it, MacOS has it, and Linux at least normally ships with zswap built as a module, though not enabled). So that further improves RAM efficiency - part of the reason I can use 64-bit 4GB ARM boxes is that zswap does a very good job of keeping swap off the disk.
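If you're curious whether your own Linux box has it on, here's a minimal sketch (assuming the zswap module's parameters are exposed under /sys, which depends on the kernel build):

    # Quick peek at zswap status on Linux; path availability depends on
    # the kernel having zswap built in or loaded as a module.
    from pathlib import Path

    params = Path("/sys/module/zswap/parameters")
    if not params.is_dir():
        print("zswap not available in this kernel")
    else:
        for p in sorted(params.iterdir()):
            try:
                print(f"{p.name} = {p.read_text().strip()}")
            except OSError:
                pass  # some parameters may not be readable without root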
We're using RAM more efficiently than we used to be, and that's helped keep "usable amounts" somewhat stable.
Don't worry, though. Electron apps have heard the complaint and are coming for all the RAM you have! It's shocking just how much less RAM something like ncspot (curses/terminal client for Spotify) uses than the official app...
> especially with NVMe, you don't have to read things sequentially
NVMe is actually not any better than SATA SSDs at random/low-queue-depth IO. The latency-per-request is about the same for the flash memory itself and that's really the dominant factor in purely random requests.
Of course pretty soon NVMe will be used for DirectStorage so it'll be preferable to have it in terms of CPU load/game smoothness, but just in terms of raw random access, SSDs really haven't improved in over a decade at this point. Which is what was so attractive about Optane/3D X-point... it was the first improvement in disk latency in a really long time, and that makes a huge difference in tons of workloads, especially consumer workloads. The 280/480GB Optane SSDs were great.
But yeah you're right that paging and compression and other tricks have let us get more out of the same amount of RAM. Browsers just need to keep one window and a couple tabs open, and they'll page out if they see you launch a game, etc, so as long as one single application doesn't need more than 16GB it's fine.
Also, games are really the most intensive single thing that anyone will do. Browsers are a bunch of granular tabs that can be paged out a piece at a time, where you can't really do that with a game. And games are limited by what's being done with consoles... consoles have stayed around the 16GB mark for total system RAM for a long time now too. So the "single largest task" hasn't increased much, and we're much better at doing paging for the granular stuff.
1. Pretty sure IO depth is as high as OSes can make it so small depth only happens on a mostly idle system.
2. Throughput of NVMe is 10x higher than SATA. So in terms of “time to read the whole file” or “time to complete all I/O requests”, it is also meaningfully better from that perspective.
That might be with a higher queue depth though, like 4K Random QD=4 or something; I don't see where it says "QD=1" or similar anywhere there, and that's a fairly high result if it was really QD=1.
It's true that NVMe does better with a higher queue depth though, but, consumer workloads tend to be QD=1 (you don't start the next access until this one has been finished) and that's the pathological case due to the inherent latency of flash access. Flash is pretty bad at those scenarios whether SATA or NVMe.
So eh, I suppose it's true that NVMe is at least a little better in random 4K QD=1, a 960 Pro is 59.8 MB/s vs 38.8 for the 850 Pro (although note that's only a 256GB drive, which often don't have all their flash lanes populated and a 1TB or 2TB might be faster). But it's not really night-and-day better, they're still both quite slow. In contrast Optane can push 420-510 MB/s in pure 4K Random QD=1.
Also people forget that the jump from 8 bit to 16 bit doubled address size, and 16 to 32 did it again, and 32 to 64, again. But each time the percentage of "active memory" that was used by addresses dropped.
And I feel the operating systems have gotten better at paging out large portions of these stupid electron apps, but that may just be wishful thinking.
Memory addresses were never 8 bits. Some early hobbyist machines might have had only 256 bytes of RAM present, but the address space was always larger.
Yeah, the 8-bit machines I used had a 16-bit address space. For example, from my vague/limited Z80 memories, most of the 8-bit registers were paired - so if you wanted a 16-bit address, you used the pair. Too lazy to look it up, but with the Z80 I seem to remember about 7 8-bit registers, and that allowed 3 pairs that could handle a 16-bit value.
This got me thinking, and I went digging even further into historic mainframes. These rarely used eight-bit bytes, so calculating memory size on them is a little funny. But all had more than 256 bytes.
Whirlwind I (1951): 2048 16-bit words, so 4k bytes. This was the first digital computer to use core memory (and the first to operate on more than one bit at a time).
EDVAC (designed in 1944): 1024 44-bit words, so about 5.6k.
ENIAC (designed in 1943): No memory at all, at least not like we think of it.
So there you go. All but the earliest digital computers used an address space greater than eight bits wide. I'm sure there are some microcontrollers and similar that have only eight-bit address spaces, but general-purpose machines seem to have started at 12 bits and gone up from there.
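For anyone checking the arithmetic, the byte figures above are just word-size conversions:

    # Converting word-addressed memories to byte counts.
    machines = {
        "Whirlwind I (1951)": (2048, 16),    # words, bits per word
        "EDVAC (designed 1944)": (1024, 44),
    }
    for name, (words, bits) in machines.items():
        print(f"{name}: {words} x {bits}-bit words = {words * bits // 8} bytes")
    # Whirlwind I (1951): 2048 x 16-bit words = 4096 bytes
    # EDVAC (designed 1944): 1024 x 44-bit words = 5632 bytes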
I actually have 100GB of RAM in my desktop machine! It's great, but my usage is pretty niche. I use it as drive space to hold large ML datasets for super fast access.
I think for most use cases ssd is fast enough though.
I think it's just RAM reaching the comfortable level, like other things did.
Back when I had a 386 DX 40 MHz with 4MB RAM and a 170MB disk, everything was at a premium. Drawing a game at a decent framerate at 320x200 required careful coding. RAM was always scarce. That disk space ran out in no time at all, and even faster once CD drives showed up.
I remember spending an inordinate time on squeezing out more disk space, and using stuff like DoubleSpace to make more room.
Today I've got a 2TB SSD and that's plenty. Once in a while I notice I've got 500GB worth of builds lying around, do some cleaning, and problem solved for the next 6 months.
I could get more storage but it'd be superfluous, it'd just allow for accumulating more junk before I need a cleaning.
RAM is similar, at some point it ceases to be constraining. 16GB is an okay amount to have unless you run virtual machines, or compile large projects using 32 cores at once (had to upgrade to 64GB for that).
16G is just enough that I only get two or three OOM kills a day. So, it's pathetically low for my usage, but I can't upgrade because it's all soldered on now! 64G or 128G seems like it would be enough to not run into problems.
What are you doing where you're having OOM kills? I think the only time that's ever happened to me on a desktop computer (or laptop) was when I accidentally generated an enormous mesh in Blender
I also have 64GB on my home PC and Firefox tends to get into bad states where it uses up a lot of RAM/CPU too. Restarting it usually fixes things (with saved tabs so I don't lose too much state).
But outside of bugs I can see why we're not at 100GB - even with a PopOS VM soaking up 8GB and running Firefox for at least a day or two with maybe 30 tabs, I'm only at 21GB used. Most of that is Firefox and Edge.
Yeah, it's definitely a cache/extension thing; usually when it gets near the edge I also restart. I do wish there were a way to set a max cache size for Firefox.
> Drawing a game at a decent framerate at 320x200 required careful coding.
320x200 is 64,000 pixels.
If you want to maintain 20 fps, then you have to render 1,280,000 pixels per second. At 40 MHz, that's 31.25 clock cycles per pixel. And the IPC of a 386 was pretty awful.
That's also not including any CPU time for game logic.
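Spelled out as a worked calculation (same numbers as above):

    # Per-pixel cycle budget for redrawing 320x200 on a 386 DX/40.
    width, height = 320, 200
    fps = 20
    clock_hz = 40_000_000

    pixels_per_frame = width * height            # 64,000
    pixels_per_second = pixels_per_frame * fps   # 1,280,000
    print(clock_hz / pixels_per_second)          # 31.25 cycles per pixel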
Most PC games of the VGA DOS era did exactly that, though.
But, well, a lot can be done in 30 cycles. If it's a 2D game, then you're mostly blitting sprites. If there's no need for translucency, each row can be reduced to a memcpy (i.e. probably REP MOVSD).
Something like Doom had to do a lot more tricks to be fast enough. Though even then it still repainted all the pixels.
For most users that is true. I think there were several applications that drove the demand for more memory, then the 32bit -> 64bit transition drove it further but now for most users 16GB is plenty.
16 GB RAM is above average. I've just opened BestBuy (US, WA, and I'm not logged in so it picked some store in Seattle - YMMV), went to the "All Laptops" section (no filters of any kind) and here's what I get on the first page: 16, 8, 12, 8, 12, 4, 4, 4, 4, 8, 8, 8, 8, 16, 16, 4, 4, 8. Median value is obviously 8 and mean/average is 8.4.
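For the record, the same list run through Python's statistics module:

    from statistics import mean, median

    ram_gb = [16, 8, 12, 8, 12, 4, 4, 4, 4, 8, 8, 8, 8, 16, 16, 4, 4, 8]
    print(median(ram_gb))          # 8
    print(round(mean(ram_gb), 1))  # 8.4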
I'd say that's about enough to comfortably use a browser with a few tabs on an otherwise pristine machine with nothing unusual running in the background (and I'm not sure about the memory requirements of all the typically preinstalled crapware). Start opening more tabs or install some apps and 8GB of RAM is going to run out real quick.
And it goes as low as 4 - which is a bad joke. That's appropriate only for quite special low-memory uses (like a thin client, preferably based on a special low-resource GNU/Linux distro) or "I'll install my own RAM anyway so I don't care what comes stock" scenario.
I agree that 4 GiB is too low for browsers these days (and has been for years), but that is only because the web is so bloated. But 4 GiB would also be a waste on any kind of thin client. Plenty of local applications should run fine on that with the appropriate OS.
Compute resource consumption is like a gas, it expands to fill whatever container you give it.
Things didn't reach a comfortable level; Moore's Law just slowed down a lot, so bloat slowed at pace. When developers can get a machine with twice as much resources every year and a half, things feel uncomfortable real quick for everybody else. When developers can't... things stop being so uncomfortable for everyone.
However, there is a certain amount of physical reality that has capped needs. Audio and video have limits to perceptual differences; with a given codec (which are getting better as well) there is a maximum bitrate where a human will be able to experience an improvement. Lots of arguing about where exactly, but the limit exists and so the need for storage/compute/memory to handle media has a max and we've hit that.
RAM prices haven't dropped as fast as other parts, images don't ever really get much "bigger" (this drove a lot of early memory jumps, because as monitor sizes grew, so did the RAM necessary to hold the video image, and image file sizes also grew to "look good" - the last jump we had here was retina).
The other dirty secret is home computers are still single-tasking machines. They may have many programs running, but the user is doing a single task.
Server RAM has continued to grow each and every year.
> The other dirty secret is home computers are still single-tasking machines. They may have many programs running, but the user is doing a single task.
It isn't often you really want it, but it is nice to have when you do. I am still bitter at Intel for mainstream 4-core'ing it for years...
Pretty sure 4 gig RAM was common consumer level then, but I take your point. I think the average consumer user became rather less affected by the benefits of increased RAM somewhere around 8 and that let manufacturers get away with keeping on turning out machines that size. Specialist software and common OSes carried on getting better at using more if you had more demanding tasks to do, which is probably quite a lot of the people here, but not a high % of mass market computer buyers.
Honestly I think the pace of advance has left me behind too now, as the pool of jobs that really need the next increment of "more" goes down. There might be a few tasks that can readily consume more and more silicon for the foreseeable, but more and more tasks will become "better doesn't matter". (someone's going to butt in and say their workload needs way more. Preemptively, I am happy for you. Maybe even a little jealous and certainly interested to hear. But not all cool problems have massive data)
DRAM hit a scaling wall about a decade ago. Capacity per dollar has not been improving significantly, certainly not at the exponential rate it had been for the prior 50 years or so.
As to why, search for words like "dram scaling challenge". I'm no expert but I believe capacitor (lack of) scaling is the biggest problem. While transistors and interconnections continued shrinking, capacitors suitable for use in DRAM circuits ran out of steam around 2xnm.
Even modern games don't usually use more than, e.g., 16 gigs.
Developers are a whole different story, that's why it's not unusual to find business-class laptops with 64+ gigs of ram (just for the developer to run the development environment, usually consisting of multiple virtual machines).
It was pretty difficult to use 64GB of RAM on my old desktop. 95% of my usage was Firefox and the occasional game. The only things that actually utilized that RAM were After Effects and Cinema 4D, which I only use as a hobby. I felt kinda dumb buying that much RAM up until I got into AE and Cinema 4D because most of it just sat there unused.
To be fair, CPUs haven't improved a ridiculous amount either.
10 years ago, mainstream computers were being built with i5-2500ks in them. Now, for a similar price point you might be looking at a Ryzen 5 5600x. User Benchmark puts this at a 50-60% increase in effective 1/2/4 core workloads, and a 150-160% increase in 8 core workloads.
Compared to the changes in SSDs (64GB/128GB SATA3 being mainstream then, compared to 1TB NVMe now) or GPUs (Can an HD6850 and RTX 3060 even be compared!?), it's pretty meagre!
I recall hearing a few years back that User Benchmark was wildly unreliable where AMD was involved, presenting figures that made them look much worse than they actually were. No idea of the present situation. I also vaguely recall the “effective speed” rating being ridiculed (unsure if this is connected with the AMD stuff or not), and +25% does seem rather ridiculous given that its average scores are all over +50%, quite apart from things like the memory speed (the i5-2500K supported DDR3 at up to 1333MHz, the 5600X DDR4 at up to 3200MHz).
An alternative comparison comes from PassMark, who I think are generally regarded as more reliable. https://www.cpubenchmark.net/compare/Intel-i5-2500K-vs-AMD-R... presents a much more favourable view of the 5600X: twice as fast single-threaded, and over 5.3× as fast for the multi-threaded CPU Mark.
In my experience; UserBenchmark will only show exactly one review per part, usually written early in the lifecycle, and never updated. Sometimes, this is written by random users (older parts), sometimes by staff. All reviews are basically trash, especially staff reviews.
Also, data fields like 64-Core Perf were made less prominent on Part Lists and Benchmark Result pages around the time Zen+ and Zen 2 All-Core outperformed comparable Intel parts. 1-Core Perf and 8-Core Perf were prioritized on highly visible pages, putting Intel on top of the default filtered list of CPUs.
However, the dataset produced by the millions of benchmarks remains apparently unmolested, and all the data is still visible going back a decade or more, if not slightly obscured by checkboxes and layout changes. (https://www.userbenchmark.com/UserRun/1)
Overall RAM in your systems HAS been steadily increasing, it's just targeted at the use cases where that matters, so it's primarily been in things like the GPU rather than system memory.
In the past ten years GPUs have gone from ~3GB[1] to now 24GB[2] in high-end consumer cards.
Another factor is speed: in that ten-year span we went from DDR3 to DDR5, moving from roughly 17000 to 57600 MB/s of peak module bandwidth. SSDs are much more common, meaning it's easier to keep RAM full of only the things you need at the moment and drop what you don't out.
In terms of RAM density increasing, I think it's been more driven by demand than anything and there simply isn't a serious demand out there except in the very high end server space. Even compute is more driven by GPU than system memory, so you're mostly talking about companies that want to run huge in-memory databases, versus in the past when the entire industry has been starving for more memory density. The story has been speed matters more.
I'd also suggest that improvements in compression and hardware-assisted decompression for video streams have made a difference, as whatever is in memory on your average consumer device is smaller.
Coupling this with efficiency improvements of things like DirectStorage where the data fed into the GPU doesn't even need to touch RAM, the requirement to have high system memory available will be further lessened for gaming or compute workloads going forward.
Worse, it's now all soldered in, so we can't even upgrade it.
Like - fuck. this. shit.
I will take - in a heartbeat (we'd all better!) - whatever slight decrease in performance and slightly larger design comes from not soldering the RAM, if it means I can actually upgrade it later on after my purchase.
Not just because I want continued value out of my purchase - honestly, that's not even getting started on the amount of eWaste that must be created by this selfish, harmful, wasteful pattern.
Think about all the MacBook Airs out there stuck with 4GB of RAM and practically useless 128GB SSD's that could be useful if Apple didn't intentionally make them useless.
We've certainly gone very far backward in terms of any computer hardware worth purchasing in the last ten years.
I really don't get the point - beyond greed - of this whole soldering garbage. No performance or size benefits can be worth not being able to continue to reap value from my investment a couple of years down the road.
This not only creates seriously less waste, but also nurtures a healthy, positive attitude of valuing our hardware instead of seeing it as a throwaway, replaceable thing.
I truly hope environmental laws make it illegal at some point. The EU seems to be good with that stuff. The soldering issue accomplishes literally nothing beyond nurturing, continuing, and encouraging a selfish culture of waste that only gets worse the more and more accepted we allow it to become.
We've only got one planet. It's way - way - WAY - past time we started acting like it.
Ten years ago, the state of hardware was far more sustainable than it is now.
I'm still daily-driving my 2013 MacBook Air with a 128GB SSD and, well, 8GB of RAM. 100s of browser tabs, Photoshop, KiCad and Illustrator, often at the same time. Never reboot, always sleep. Runs Catalina. Best machine ever. [edit] while going through many Eee PCs, several ThinkPads and two desktop PCs in that period [edit2: these were prolly older than 10 years tho ;)]
I don't really understand how you do this.
Even running Arch Linux with clamd and a tiny window manager takes ~2-4 GB of RAM to do effectively "nothing".
So, of course, MacOS takes ~8GB of RAM to do effectively nothing, and it's about the same for Windows.
Unless you're running something like alpine Linux these days, you're going to be eating a ton of RAM and cycles for surveillance, telemetry, and other silly 'features' for these companies.
I’ve used those MacBook airs before and those weak dual core CPUs really don’t have much beef to them, and the screens are these 1440x900 TN panels (not sure about the vertical resolution) which was low spec even for the time.
I don’t see upgradability being essential to longevity. I think you can just buy everything you need for the next decade on day one.
>> I don’t see upgradability being essential to longevity. I think you can just buy everything you need for the next decade on day one.
Wow, that statement reeks of having so much privilege it’s like you forgot that this isn’t financially feasible for a lot of people, and that upgrading down the line allowed someone to afford a lower spec computer at the time.
Like - you do get not everyone’s wealthy af, yeah? And that companies intentionally bloat the cost of first party memory and stuff so that if you’re not insanely privileged; like I guess you are, then, no - it’s not even…it doesn’t even make sense.
All this does is steal from the poor to give to the rich.
That’s a pretty wildly inaccurate guess given I bought that MacBook Air when I was in poverty. It was my life savings at the time.
It’s a matter of efficiency: somebody pays for all those RAM slots to get made and for the electricity to power them. Now that computer memory is growing generation over generation at a slower rate, and now that other sources of power waste have been eliminated, I reckon there’s less and less reason to go with slots.
It is a shame that this allows vendors to charge non-market prices for products, but the problem I have is that this has become people's main argument to preserve slotted memory: it's an objectively inferior technology for mobile devices that we put up with so we can pay market prices for memory. Realistically though, slotted memory is doomed if that's the only argument for it, since manufacturers have every incentive to stop offering the option of slots. Even if they wanted to offer customers competitive market prices for memory, there’s little reason they couldn’t do that with soldered memory.
I have 64G of RAM in my framework laptop with room for 64G more. It's not all bad. (The state of mobile phone storage (not to mention removable batteries) has gotten significantly worse over time. Heck, they even dropped the 3.5mm headphone jack which is honestly fucking insane.)
I think RAM has been increasing but for the GPU rather than CPU. Also, I think Microsoft isn't quite as bad as Apple at memory usage so it still isn't that easy for a large number of people to really need more than 8GB and for anyone who plays games it is better to have extra memory in the GPU. I don't think there is all that much swapping needed with 8GB but most CPU memory tends to be used as disk cache so it still helps to have faster disks. I think of computers that started to have 8GB RAM over a decade ago as the "modern era of computing" and those systems are still fine now (at least with SSD) for a wide range of casual use, other than software that explicitly requires newer stuff (mostly GPU, sometimes virtualization or vector stuff). My sense is most hardware changes since have been in the GPU and SSDs.
This is where it becomes apparent that the way I use my machine is very different to some folks on here.
It is a Lenovo T400 with 4GB of RAM. In order to maximise the life span of the SSD, I knocked out the swap partition. So that is 4GB, with no option to move beyond that.
That said in daily use, I have never seen my usage go anywhere near 3GB but I suspect that is just because I am very frugal with my browsing and keep cleaning up unused tabs.
No background or industry specific knowledge, but I'd venture to guess smart phones/mobile computing added a lot to the demand for RAM and outpaced increases in manufacturing.
I'd guess now that the market for smartphones is pretty mature we should start seeing further RAM increases in the coming years.
I would guess it’s because the average user just doesn’t use enough ram for it to matter. My current desktop built in 2014 has 64GB and it seems like I never come close to using it all even running multiple virtual machines and things like that.
I think a big piece that is ignored is how much normal compute/memory usage has been shifted to the Cloud.
I no longer need to have local resources to do amazing things. I don't need a 10GB+ GPU in my phone to be able to hit some website and use AI to generate images. I have fast web/email search capabilities that don't require me to have a local index of all of it. I can spin up a remote compute cluster with as much RAM as I need if I want some heavy lifting done, and then throw it all away when I'm finished.
"Back in the day" we would try to build for all that we could perceive we would do in the next 5 years, and maybe upgrade memory half way through that cycle if things changed. We ran a lot of things locally, and would close apps if we needed to open something needy. I think also systems have gotten a lot better at sharing and doing memory swapping (see comments about SSD helping here). Back in the old days, if you ran out of memory, the app would just not open at all, or crash.
I just did a test, checking RAM usage before/after closing Chrome (which has 8 tabs open, three of which are Google apps and three of which are Jira, so pretty heavyweight). It's using 2.8GB
Chrome and other modern browsers tune their memory usage taking the current memory pressure into account.
If you _have_ a lot of RAM available, why not use it?
On a 4GB machine, Chrome will _not_ use 5.2 GB for 3 YT videos and a bit of GitHub.
How is priority of memory usage managed between 10 different apps using that kind of allocation policy?
I have never seen Chrome starting to use much less memory when memory pressure becomes high, and I have been monitoring it many times. There is some GC-like behavior releasing a small amount of memory from time to time, but nothing really significant when the system starts running out of available physical memory.
I don't think YouTube videos are really representative of web memory usage, because a significant amount of the memory they take up will be the buffered video content itself
I somewhat jokingly built out my gaming PC with 64 GBs of RAM with Chrome as my justification... unfortunately well before it reaches even 16 GBs of RAM usage it becomes fairly unstable before eventually reporting that it's out of memory despite less than 50% memory utilization.
I'm sorry, but you don't get to tell me what I need to comfortably run a piece of software on my computer. Despite whatever utilization metrics are claimed, Chrome runs like a wounded animal whenever I get into the territory of a dozen windows with approximately a dozen tabs each. If I try to do something unimaginable like open a Google sheet and a Meet call at the same time on my work provided laptop equipped with 16 GB of RAM, it's a disaster.
Then maybe it's something other than memory. Despite having a bunch of extensions running, the only problem I have ever had with 3-400 open tabs is the mental effort of managing them.
I'm honestly perplexed I don't have more problems as the machine I do most of my reading/browsing on is more than a decade old and doesn't even have a GPU. Plus I nearly always have 2 messaging apps, a pdf viewer, and 1-3 other browsers running simultaneously, maybe a logging app too. All running on Windows 10, it's not some turbocharged Gentoo installation or anything.
I'm curious, what if you try running Chrome's Task Manager (shift-escape on Windows, idk what you're using). Cause I noticed things like gmail are pretty well behaved and just sit there, whereas Twitter takes up nearly twice as much memory and grabs a bunch of CPU time every 20 seconds. If you're not running an ad-blocker I would expect that any site with heavy ads/tracking is impacting performance.
I work in advertising, so I think using Ublock Origin is hypocritical
I sincerely think that your industry makes the internet way worse than it needs to be.
Twitter back in the day had a web page (m.twitter.com) which worked everywhere, even under Links2 with the graphical UI.
If the reason is "the user can't be tracked equally well without a JS Big Brother behemoth", then they don't understand how that can be done with cookies, or by just parsing the user's tweets and preferences.
My Carbon X1 with 8 i7 cores isn't broken. Rather, the internet browser I use largely for work has issues managing a large collection of tabs that include a few heavy web-apps (gmail, meet, sheets and play music).
How many tabs do you have open? Typically that's only necessary if you like have 100+ tabs open or are running many demanding web apps at the same time.
>I no longer need to have local resources to do amazing things. I don't need an 10GB+ GPU in my phone to be able to hit some website and use AI to generate images.
Perhaps, but most people don't use the Cloud for anything fancy; they use it to run SaaS apps like email, calendars, note applications, chat, and so on, which could be way better served by local apps - and not only run better and be faster, but take fewer resources than they do running in the Cloud...
Like Slack (be it electron "app" or browser tab) taking 100x the resources to do roughly what ICQ did 20 years ago...
Yes - you can effortlessly lease a 128GB RAM cloud VM today if you need one. I distinctly remember that in 2010 an 8GB server was a "big box", whereas in 2013 having 32GB became commonplace. That's the RAM saturation point.
From a professional perspective, I completely agree.
I used to run into "not enough RAM" situations frequently and convinced my company to splurge 4K EUR on a Macbook Pro with 64 GBs of RAM. I'm very happy with it.
I also had to block several planned purchases where somebody unrelated to the work my team does decided to order MacBooks with 8GBs of RAM for new team members. It's 2022 my dude, the 2000s called and they want their 4*2GB RAM kits back.
From a consumer perspective on the other hand, I feel that current machines with 8GB of RAM run typical software (browsers and whatnot) well enough.
If they are unrelated to the work your team does, how can you be sure they really need more than 8GB of unified memory?
I'm on an 8GB M1 MacBook Air and really don't notice it swapping often, unless I'm running something heavy in addition to browser + terminal + IntelliJ IDEA.
I’ve managed to soft lock 16gb machines with inefficient libraries I needed to use to pull data, whereas a 32gb machine would have just barely survived.
I feel the real use of high amounts of memory is to run bad software.
The M1 is absurdly better than Intel Macs about swapping, I think because of the giant caches. I never have memory issues on my 8gb M1 Mac Mini, but my work 16gb Intel Macbook runs into memory slowdowns as soon as it launches Slack, a browser, and an IDE at the same time.
I have a 16GB M1 MBP. I can easily get it to slow to a crawl if I'm not careful. This is with Intellij Idea and other tools I use regularly. It's the worst thing about Macbooks, limited RAM.
Hahah I'm glad I got the 16GB M1 MacBook Air because I can run Stable Diffusion on it! I'm having so much fun generating images for free to my heart's content.
> But memory has felt like an exception to Moore’s Law for a while, at least in practice.
That's because it is. Memory is failing to scale. That's why there's so much investment in alternative memory technologies, including why Intel sunk so much money into 3D XPoint despite the losses. But the market is brutally optimized, hence the difficulty cracking it.
Commercialization of large machine learning models might push the RAM game for the end user up in the not-too-far future. For unified memory machines like the current Apple chips that is, and otherwise the memory on consumer GPUs for the enthusiasts.
Last week, several posts about Stable Diffusion forks that reduce memory usage made it to the HN front page; they make it possible to generate 512x512 images with < 10 GB of GPU memory, but with a tradeoff in computing speed. Trying to go beyond HD resolution (far from phone photo resolution) will still blow out your top-of-the-line Nvidia consumer GPU.
When approaches like these start hitting your favorite image editing tool, you'll want to get that 256 GB RAM iPad for your mom. Otherwise, you'll have to deal with her waiting minutes to give your family cat a festive hat in last year's Christmas picture.
"Case took the pink handset from its cradle and punched a Hongkong number from memory. He let it ring five times, then hung up. His buyer for the three megabytes of hot RAM in the Hitachi wasn't taking calls."
I bought a Mac Studio recently with 64gb of RAM. I adore it, but I am kind of missing the portability of a laptop.
I’ve been watching Facebook Marketplace recently for a cheap Mac Laptop, and I was surprised to find the place is absolutely flooded with near-new MacBook Pros with 8gb of RAM. It doesn’t make any sense, it’s the worst place to cheap out - it’s not upgradable and makes a huge performance difference.
I suspect it might be self-selecting. The reason there are so many on Facebook Marketplace is that the owners feel they need an upgrade. Do they know the reason for its lackluster performance is the lack of RAM? Maybe not; they just know the machine runs poorly and are going to upgrade to a newer one with 8gb of RAM.
By contrast, the 11 year old iMac I was upgrading from has 24gb of RAM and still held up fine as my daily driver. The lack of software updates was the thing that pushed me to upgrade.
Same, I have a 2nd generation Intel i5 desktop at home. Since I upgraded to an SSD and took the RAM from 8 to 24gb it gained years of new life. Perfectly capable daily driver.
Huh, that's interesting! I had a 2019 MBP with only 16gb of RAM - an amount I've been comfortable with on other computers prior to that - and it was UNUSABLE. Just MISERABLY slow. Upgraded to Apple Silicon with the same amount of RAM and it feels unstoppable, just always running at 150% of expected speed.
(I would have gotten more RAM in the 2019 laptop, but it was during that brief window where new laptop shipping times were measured in months, and I was happy to have ANY work laptop and not have to be using my old personal one, an old Air.)
> The general public isn’t asking for a hundred gigs, but I’d love to see the baseline rise up a bit. It doesn’t feel like we’ve budged meaningfully here for years. Or is that just me?
While the baseline of 8GB hasn't risen, I do think the floor of what we'd consider "unnecessary" has risen. I remember in 2018 I was building a new desktop and I spent a pretty penny on 32GB of RAM; folks on r/buildapc said that was a waste. Nowadays I feel like I've seen a lot of higher end builds that feature 32GB or even 64GB.
Just my 2c; I don't have stats to back this up or anything...
My primary workstation has 128GB and it is for sure unnecessary. Even with multiple projects running each of which spins up a dozen containers in k3s, and a background game of Rimworld as well as all the electron apps that life requires these days, I rarely ever breach 64GB much less 100GB.
The only real use is writing horrifying malloc experiments and having a couple extra seconds to kill them before OOMing.
I routinely use this much RAM for work. And it's not malloc experiments lol. I need 200G to 500G for my research work. Most of our systems have 384G and this is just enough. If I could only have a laptop with that much...
Containers are relatively resource-efficient; if you need to run a bunch of actual VMs for testing (eg, Windows), you can easily find ways to use 128GB.
I wonder if something like Vista is needed to move the needle on consumer RAM again. Pre-Vista, Windows required 64MB of RAM, and you could fudge it a bit even lower if you knew what you were doing. Vista _required_ 1GB of RAM, and recommended 2.
OEMs were selling 64MB desktops right up until Vista was released.
Today, Windows 11 requires 4GB of RAM. If Windows 12 (or whatever they're going to call the next Windows, they're not too good at counting) required the same sized jump as between XP and Vista, it'd require 64GB of RAM.
Vista required 512 MB. In practice, that probably sucked, but those were the paper specs.
XP might have required 64 MB on paper but it crawled and was practically useless on that spec (like Windows 95 was on 4 MB). Never saw anyone do it. A common spec in 2001 would be 256 MB or at an absolute minimum, 128.
Vista came out in 2007 and absolutely no one was selling 64 MB desktop computers at that point. 64 MB is 1997-1998 spec -- in 2007 it would commonly be 512 MB or 1 GB.
It is true, however, that Vista had astronomically high minimum specs for the time -- some machines sold at the time could just barely run it -- and that it probably drove a lot of upgrades.
Hm... I had a Celeron machine running XP ~2004-2007 that had 512MB of RAM, and it wasn't hard to run out, eventually upgraded to 768MB but I was still jealous of my friend running XP with 1GB.
Then, I built a Vista machine in 2007 with 2GB to start, and it was clearly not enough, immediately filled the other 2 slots to go to 4GB.
A bit of bullshit. uBlock Origin plus git://bitreich.org/privacy-haters - apply that config to either Firefox or Chrom*-based browsers. Under Windows you can set the env vars properly, or pass them to the desktop shortcuts as plain arguments for the exe.
Seriously, I tried with a >10yo Celeron netbook and it was perfectly usable once you set up ZRAM/zswap and uBlock Origin. Oh, and some tweaks in about:flags to force GL acceleration. Only OpenGL 2.1 capable, go figure, and yet browsing is snappy on the Celeron.
Does it though? I made another comment but for home use I can't even max out 64 GBs.
The only thing I can think of that'd ever max out my RAM is some sort of training task (even though I'd expect to run out of VRAM first). But those are the kinds of tasks that do best on distributed systems since you don't really need to care about them, just spin it up, run your task, and tear it back down
Google recommends 64GiB to build the Android kernel. That's a thing you could technically do at home. And if you want to do anything else at the same time, you're gonna need to go to 128.
Funny you should say that... my entire career has been built on embedded Android and I've built a lot of images from "scratch" (or as close as you get with random SoC supplier provided garbage tacked on)
The first time I built an AOSP image from scratch was on some dinky office workstation that had been freshly upgraded with a whopping 16GBs so you wouldn't come back in the next morning to a random OOM
These days I get to open a PR and some random monster of a machine on a build farm does the heavy lifting, but I can still say from a lot of experience that 64GB is truly more than plenty for Android OS builds and definitely won't be what keeps you from doing other stuff... IO and CPU usage might make it an interesting proposition, but not RAM.
When Google says 64GB it's for the whole machine: the build process will use a lot of it when configured properly, but not so much that you can't run anything else
(Also again, Android builds are a perfect example of where a remote machine pays off in spades. The spin up is such a small fraction of what's needed you don't have to start messing with containerization if you don't want to, and you can get access to some insanely large instance for a few hours then stop paying for it as soon as it's done.
It just seems unlikely to have a task that warrants that much RAM that isn't just about throwing resources at an otherwise "stateless" task that's a great match for cloud compute)
There are various analytic APIs/libraries that will map data files into memory when there is surplus RAM available. That can really speed up processes which would otherwise be IO bound.
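A minimal sketch (Python) of that pattern, using numpy.memmap; the file name and layout here are made up purely for illustration:

    import numpy as np

    # Memory-map a large binary file of float64 samples; the OS page cache
    # (i.e., spare RAM) ends up serving repeated reads instead of the disk.
    data = np.memmap("samples.f64", dtype=np.float64, mode="r")

    # The first pass faults pages in from disk; with enough free RAM,
    # later passes are served from the page cache at memory speed.
    print(data.mean())
    print(data[::1000].std())  # strided access also benefits once pages are resident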
> The general public isn’t asking for a hundred gigs, but I’d love to see the baseline rise up a bit. It doesn’t feel like we’ve budged meaningfully here for years. Or is that just me?
Then, less than three comments in, somehow we're justifying 256 GB of RAM?
256 GBs of RAM can be useful in some cases, on some computers, in some places, but that's not really a meaningful inference? 1TB of RAM can be useful depending on the workload, any given number could be.
The question is can it be useful for anything even vaguely resembling personal computing, and the answer is for all intents and purposes: No. It's not.
You're getting upset and going all caps over your own misunderstanding....
> The general public isn’t asking for a hundred gigs, but I’d love to see the baseline rise up a bit. It doesn’t feel like we’ve budged meaningfully here for years. Or is that just me?
This was the comment that kicked off the thread. Some people felt 32 GBs was the new baseline, and then out of left field comes _256 GBs_
For any amount of RAM, someone somewhere will be able to use it. But that's the kind of deep observation I expect from a toddler.
If we're going past kiddie pool deep observations of plain fact, no, the baseline wouldn't be anywhere near 256 GBs of RAM based on how people use computers.
(And before you attack me for your own poor understanding of language again: People as in the human collective. "People don't need" is not the same as "no one needs".)
I didn't. At least, not to those of us who can deal with some flexibility in interpreting written communication, and use a little tool called context.
But then again, there are definitely people out there who need every. single. nuance. of the most basic statement spelled out for them, as if they're a biological GPT-3 endpoint (and this site certainly does feel like it's drowning in those people these days), but I don't write comments for them.
Instead I write comments for people who are interested in actual conversation over browbeating everyone in sight because they assumed the most useless interpretation of your statement was the correct one.
It's also worth noting that 64-bit x86 processors only appeared a few years after the song, with all 32-bit systems until then being limited to a theoretical maximum of 64GB (with PAE).
> The general public isn’t asking for a hundred gigs, but I’d love to see the baseline rise up a bit.
I'm still comfortable with 4GB (and currently using ~1GB of it), because I stay away from the bloated web stuff. Outside of certain applications, I think there really isn't a need for huge amounts of RAM; unlike e.g. storage, which even I have several TBs of, simply because of accumulating media.
Not GP, but about 75% of the time, I'm not using any GUI software besides browser & terminal.
In the terminal I'm running a tmux/vim based development environment.
Of the half dozen non-technical users whose habits I know, I think all of them use their PC exclusively for a browser. They could sub in a chromebook, I guess.
Weird Al's song released in 1999 [1]. I don't think it was possible to buy a desktop/workstation/server machine in 1999 with 100gb of addressable RAM. So the lyric is absurd (possibly intentionally so).
The first 64-bit Pentium system went to market in 2004. With 32-bit systems the maximum memory addressable per process is 4GB; total physical memory could be stretched to 64GB with PAE [2]. It might have been possible to get a 64GB RAM Pentium-based server in 1999. But 100GB is simply impossible.
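For reference, those limits fall straight out of the address widths (flat 32-bit addressing vs PAE's 36-bit physical addressing); a quick check in Python:

    # 32-bit flat addressing vs PAE's 36-bit physical addressing
    print(2**32 // 2**30, "GB without PAE")  # 4
    print(2**36 // 2**30, "GB with PAE")     # 64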
There were 64 bit processors from other families available in 1999. DEC, SGI, and Sun all had workstations using 64 bit processors. The earliest I can find supporting 128gb of RAM though is the SGI Origin 3800, shipping in 2001. [3]
Getting a 64 gb 2021 MacBook Pro really changed the game for me.
The computer now works on my rhythm, not its own. No need to suspend tabs or do any other shenanigans. Sometimes it's just not productive to stop everything to clean up tabs and/or open apps. Some rabbit holes are actually quite worth digging into, and they are often RAM-hungry!
Going from 16 to 64 gb on my personal laptop meant I got to choose when to deal with my own mess... on my own terms. I much prefer to do this cleaning exercise on weekends, versus "whenever the computer starts choking up".
So yeah, highly agree with the author here. Wish Apple laptops were RAM upgradable, but I doubt that will be coming anytime soon.
I tried using a 40" 4k tv as a monitor a few years ago. It was big enough that I had to move my head to see different parts of the screen. I didn't like that and now I'm back on around 30".
But he was measuring the width of the monitor, which is not how monitors and TVs are measured -- they're measured diagonally across the screen.
So, assuming he was talking about a 16:9 aspect ratio monitor (which he certainly was not -- he was talking 4:3 -- but I can't be bothered to do the math), a 40" wide monitor is actually a 46" monitor.
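Filling in the math for both aspect ratios (Python): diagonal = width * sqrt(a^2 + b^2) / a for an a:b screen.

    from math import hypot

    width = 40  # inches, measured across the screen
    print(width * hypot(16, 9) / 16)  # ~45.9" diagonal for 16:9
    print(width * hypot(4, 3) / 4)    # 50.0" diagonal for 4:3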
Imagine the revolution when UltraRAM becomes common and inexpensive. Then everybody can have terabytes of storage as cheap as hard disks and faster than today's RAM.
Benefits:
* Fast. 2.5x faster (at least) than RAM when running at the same bandwidth, because it does not need to be refreshed. This was demonstrated by Intel's insanely expensive persistent Optane memory. DRAM requires an electronic refresh at least every 64ms or it loses its contents.
* Archival. Contents could last, with integrity, past 1,000 years.
* Massive. The goal isn't to replace RAM, but to replace hard disks. Since UltraRAM would be faster than RAM, functional obsolescence immediately applies. Storage and memory become a single volume.
When UltraRAM does become available, technologies dependent upon memory optimization and current storage innovation also achieve functional obsolescence for the first time in computer history, which includes database applications. Databases will essentially become a high-level storage mechanism on top of faster lower-level storage mechanisms like file systems.
I'll take cheaper flash any day, but when you start talking about making RAM obsolete, the metric will be latency, not bandwidth. I can already stick e.g. 4x PCIe 4 NVMe drives into an x16 slot, or 32 into an AMD server, and beat whatever the RAM bandwidth and cost/GB will be; it's just not very useful to do so, so nobody really does it as a RAM replacement.
'Workstation-class' computers absolutely can have 128 GB of memory, or even more.
Even Apple does this (not just on the Mac Pro, you can configure the Mac Studio for it too). On the PC side all AMD Threadrippers can support at least that much, as can the Intel i9. These aren't server-class CPUs that require ECC memory or anything exotic like that either. Normal consumer hardware.
I wonder if another thing that's been holding larger amounts of RAM back is the lack of error correction. Maybe DDR5 will mitigate that problem, although it's still suboptimal that consumer-targeted CPUs don't support ECC.
Not sure if it is a compulsory part of the spec, and irrelevant for most laptops since laptops generally wouldn’t use DDR5, but use low-power DDR instead.
On-die ECC should help with data integrity, but on-die ECC doesn't protect integrity all the way to the memory controller, and lack of reporting means I don't think you'll even know when there's an uncorrectable error. Which means you've still got the same basic issue --- memory is not stable, although the error rate is likely reduced.
If you start in 1980 with 32K and then double it every two years, you get a reasonable approximation of the RAM of a desktop-class computer for decades.
You hit 8GB in 2016 and then the rule goes out of the window.
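A quick check of that rule of thumb (Python), using the 1980/32K starting point from the comment above:

    # 32 KiB in 1980, doubling every two years
    ram_kib = 32
    for year in range(1980, 2024, 2):
        if ram_kib >= 1024 ** 2:
            print(year, ram_kib // 1024 ** 2, "GiB")
        elif ram_kib >= 1024:
            print(year, ram_kib // 1024, "MiB")
        else:
            print(year, ram_kib, "KiB")
        ram_kib *= 2
    # lands on exactly 8 GiB in 2016; by 2022 the rule would have predicted 64 GiB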
I purchased a 32GB kit of fast-for-the-time RAM for around $150 in 2016. There was a 64GB kit available for a little more than I wanted to pay as well. In years after that the price of RAM went WAY up due to a mixture of natural disasters, RAM cartels, and a change of priorities for manufacturers. In 2019 I was offered $320 for those same sticks of used RAM. Looking at current motherboards of the same type, they also top out at 64GB (I know there are higher limits but I'm in the Mini-ITX ecosystem). So I agree with your point just disagree a bit on the 8GB number as I had a fairly middle of the road laptop in 2012 with 16GB of RAM as well.
Going from 2016 to today (2022) is three more doublings, which would be 64GB of RAM -- and that is an available configuration in laptops such as the XPS 17 or the MacBook Pro. That's not even considering desktops (which can handle 128GB) or workstations (which can reach terabytes).
> no one wants to close tabs because they might not find that one critical page again
And most of the things in the tabs don't meet the threshold for bookmarking them, because the overhead of maintaining bookmarks makes one want to only bookmark things that are used often.
So basically bookmarks and history devolved into tabs for people? I do notice the difference between my Firefox installation (with 'live downloaded' search suggestions disabled), where I can mostly pull anything I want from the past, and the times when I have to use Chrome, which at least by default makes it difficult to recall a page visited even a moment ago. (Yes, technically you have history hidden in menus.) I would dread living with that in normal browsing.
Google's lack of search quality affects hardware specs. Better at answering general questions and less good at finding a specific thing, especially from a few months ago.
On a related note, the computer from Mega Man X has 8192TB of "real mem", 32768TB of "avail mem", and laughably small cache sizes of 512KB, 768KB, and 32768KB.
You guys must also think 50 cent always carries guns as big as an adult human, Ice Cube has literally shot bullets into the sun, and Ludacris never drives under 100mph. Al is satirizing hyperbole in rap music. The stream of implausible claims is why the song is funny- 100gb of hard drive space wouldn't have fit with the song at all.
Also, plenty of people were actually using computers with 100gb of ram in the late 90s. ASCI Red at Sandia had 1,212gb of ram in 1997. Blue Gene/L, which they were building when this song came out had 16,000gb of ram and was designed to be upgradeable to as much as 128,000gb - even much more if the node quantity were maxed as well.
IDK, anecdotally at least I broke the 100G RAM barrier earlier this year by upgrading to a Ryzen 5900X with 4 x 32G sticks. RAM has largely become quite affordable with a 32G stick of DDR4 going for around ~$100 (and you can occasionally see discounts down to ~$80).
It has been great being able to spin up VMs as needed, or keep hundreds of tabs open on Chrome and not have to worry about things getting bogged down. The only annoyance is that this also results in gigantic memory dumps if the system crashes...
(FWIW, I'm usually hovering around 20G just with browser tabs alone and no VMs running. Admittedly, I do use tabs as a way to organize work so having hundreds is common.)
Not a hardware person but it seems it wouldn't be that much of a difference?
Assuming bit flips happen at a fixed rate (cosmic rays whatnot), then wouldn't the likelihood only depend on the actual amount of memory used rather than the total capacity? Like, if a bit flips in unused RAM does it really matter... And used memory is used one way or the other so the flip would have happened anyways.
I think the bigger issue is outright RAM failures/errors are harder to diagnose. Like if I had a bad chip on one of the sticks, it would be harder to notice since the chance of hitting that error and crashing is much lower, so the system would seem to run fine most of the time. Doing memtest is rather annoyingly slow so I've only done it once after building the system to make sure none of the sticks were DOA.
On another note, for my NAS I do run ECC (4 x 8G DDR3) on a server board and over the course of about 7 years, I've had 1 stick fail in the form of consistently getting stuck bits (which are then corrected and reported) and have had about 3 or 4 spurious flips in total on the other sticks.
Any computer targeted at business users really needs to start at 16GB minimum. I can't understand why 8GB is still the default base configuration on almost every computer.
My company made the mistake of ordering 8GB M1 MacBook Pros for everyone. After all ... they say "Pro", right? Truly the worst computer I've ever used, all because of memory management (and M1s unique memory-sharing architecture).
Even simple tasks like opening a new tab or switching windows showed serious lag. I could not run my local dev server and listen to music at the same time.
We eventually just said "screw it" and bought all new 16GB MacBook Airs. 1000x better.
In other words, you still decided to support a manufacturer that makes it impossible to simply insert another stick of RAM (like we've been doing for decades) and requires you to throw a whole computer away. Or sell it, whatever, you've said that it's pretty much unusable already in spite of being new.
Seeing that from one of the poorer countries, you guys really do have too much money.
the 13" pro and air are basically the same machine :-/ I'm pretty sure they're keeping that pro because it's a physical drop in replacement for previous models, just for business that want that continuity.
That 8gb though. I had to switch to one when my last macbook died. I could develop on it fine, but chrome and teams killed me. Especially teams where it's just now getting a native release. There's no reason a lot of apps are using as much memory as they do. 8gb _should_ be more than fine for any office worker.
I'm growing on the idea that developers shouldn't be getting massive amounts of memory in their machines, so they can feel any pain their users will.
Last time I worked in finance, the typical 2U machines that we ordered were configured by IT with 256GB by default unless we said otherwise. The typical workload was data analytics, and when you've got lots of cores, you get through RAM pretty quickly for those sorts of workloads (and any spare RAM caches data, which is also useful).
Home machines are typically consuming rather than producing stuff, so it seems reasonable that the RAM hasn't leapt ahead, since typical workloads aren't memory bound.
I remember a friend having a 64MB computer and struggling to play Diablo II. It would always lag. I upgraded him to 512MB using some old RAM sticks. The game never lagged again.
I've often wondered this -- why isn't it normal for consumer systems to support more RAM? I built a PC with 256GB DDR4 in 2016 for working with large pointclouds, but most motherboards I looked at only supported 64GB max, and the situation doesn't seem to have changed in the last 6 years. I guess it's because the emphasis with RAM has been on improving speed rather than volume.
Were you thinking about adding more RAM slots, or why the sticks aren't larger?
The reason the sticks aren't larger seems to be because they had trouble scaling down the DRAM memory cells[1] since about 2016.
DRAM works, as you might know, by letting a capacitor hold a charge (or not).
This ability is a function of the geometry of the capacitor: the area of the anode and cathode, as well as the distance between them. Leakage current discharges the capacitor, hence why DRAM needs to be periodically refreshed, the industry settled on 64ms retention time per cell[2]. Leakage current tends to go up when the features shrink, due to there being less insulation between the conductors.
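For a sense of scale, a back-of-envelope sketch (Python) of what that retention window implies; the 8192 refresh commands per window is the typical DDR3/DDR4 figure, assumed here:

    retention_ms = 64        # standard retention window mentioned above
    refresh_commands = 8192  # typical refreshes per window (assumption)
    print(retention_ms * 1000 / refresh_commands, "microseconds between refresh commands")  # ~7.8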
As mentioned in [1] it seems things are about to change, so perhaps we'll see much larger sticks in the near future.
If you were thinking of slots, I think that's more economics and feasibility. The Threadripper motherboards had 8 slots[3], since the CPU supported it, but they were quite expensive compared to regular Ryzen motherboards. It also requires a lot more power, so forget having that in laptops. And the slots take up space, so was only for the full ATX size[4].
If most people here use laptops (which is my -- perhaps flawed -- perception), then maybe not.
I just put together a new Framework laptop with 64GiB, and that was a huge jump from my Dell XPS 13 with 16GiB. All of the other 13" laptops I looked at max out at either 16GiB or 32GiB. Even 15"/16" laptops probably usually only have 16GiB or 32GiB, on average, though I'm sure there are some with 64GiB.
And then phones, even the high end ones max out at around 8GiB, right? I think mine has 6GiB, though it's a few years old now.
Even someone with a desktop machine (for gaming, perhaps) probably "only" has 64GiB. So the total (64GiB desktop + 16GiB laptop + 8GiB phone = 88GiB) is still under 100GiB. I guess if we include the dedicated VRAM on the discrete GPU that might be in the desktop? Not sure I'd count that, though.
I do have a few Raspberry Pis (of various vintage) and an old Mac Mini that are doing various things around the house, so I guess that's another 4+4+1+0.5+16 = 25.5GiB. So my grand total (including my phone and laptop; I don't have a desktop) comes to 97.5GiB -- so close!
I wouldn't count these, but my router has 2GiB, and I have two APs with 128MiB each, so that'd bring me to 99.75GiB. Can't believe I'm only 256MiB short!
> The general public isn’t asking for a hundred gigs, but I’d love to see the baseline rise up a bit.
Please no. The pressure on us software developers to keep our bloat under control, particularly when better options than Electron become available, has to come from somewhere. There will always be people who can't afford to upgrade to the new baseline, and we shouldn't leave them behind if we can help it.
What better options than electron can we hope for?
Writing UIs in TypeScript with React is amazingly productive. A huge amount of effort has been invested into making that entire stack fast and it’s still a lot slower than I‘d want it to be. I’ve written some websites in Yew (react-like rust library), but either Rust or Yew seems to really suck for UI stuff. I feel like we’re missing some kind of nice middle-of-the-road language that’s both reasonably productive and results in reasonably performant apps. OCaml is the only language I know of that’s in that sweet spot, but there’s no UI libraries for it unless you go the ReasonML/react route, but then you’re just compiling to JS again.
If that pressure doesn't exist, then why does it have to come from somewhere? I also run exactly 0 Electron apps but definitely would like to have more RAM - 64GB would give me some breathing room, 128GB would probably be optimal.
Some people can't afford to keep up with our upgrade treadmill. I'm trying to do my part to slow the treadmill down by voluntarily not upgrading my main PC, a 2016 laptop with a quad-core i7-6700HQ and 16 GB of RAM. I suspect that even that machine is too powerful. To really feel users' pain and develop my software in a way that minimizes it, I should probably be using a budget laptop with like 4 GB of RAM, an actual spinning HDD, and the typical Windows crapware still active, and do big compiles and other tasks that truly require more resources on a remote machine.
Hasn’t memory sped up dramatically in the last 20 years? Maybe that, combined with ever faster SSDs, means most people just don’t need too much more.
It used to be incredibly painful to run out of RAM. You’d have to hit swap or page a file in from disk that was tossed and wait hundreds of milliseconds. That was incredibly noticeable.
But now we can get things into memory so much faster having to load something off permanent storage takes a few tens of milliseconds. It’s not so objectionable.
At the same time most people are doing similar things to 20 years ago, needs haven’t scaled. Sure needs have gone up as images have gotten sharper but text is still text. Audio is still only a meg or two a minute. Unless you’re doing high end photo editing, video editing, or neural net stuff (which is mostly GPU memory?) do most people really need much more than 8/16GB?
I remember when the first MacBook Air came out someone, I think it was Jeff Atwood, posted about compile speeds.
The Air had no cooling and an underpowered, low-power little CPU that Apple had Intel make just for them.
But you could pay a crazy amount of money for a teeny-tiny SSD instead of a tiny hard drive.
The SSD was so much faster than standard hard drives that machine could compile code faster than the normal MacBook Pros of the day, even though they could hold more ram and had better CPUs.
Getting things off disk to the CPU matters a lot. It’s OK if you don’t have enough memory if that pipeline is extremely fast.
The situation may have happened again with the M1 Macs. Not only were they faster than the Intel chips at most things but the on-package memory screams and the storage controller is fantastic.
People have reported those machines at 16GB anecdotally feeling amazing despite having half the ram of other existing machines.
Isn't this just the same as processors? We used to primarily care about GHz and GBs, but there were diminishing returns for continuing to grow those. As a result, processors started shifting to multiple cores, but still usually hovering in the same GHz range. Meanwhile RAM total size might be the same in your current machine as it was a decade ago, but bandwidth and clock rates have drastically increased since Weird Al wrote that song. A similar thing happens with digital cameras and megapixels.
It wasn't that things stopped improving, it is that the old measure of improvement has been de-emphasized in favor of other factors that have proved to be more important for normal use.
I don't think so. From [1], in 2006 a Dell XPS M1710 cost $2,845 and had 1GB of RAM. Inflation adjusted, that's $3,550, so a pretty high end system for the time. Today, you can get a 16GB laptop for well under $1000. That's a 16x capacity increase in about as many years for 1/4th the price.
I actually do have 100gb of ram in my desktop, mainly because I got a bit carried away on eBay and bought an epyc, and you kinda have to fill all the slots to get the performance.
It's about diminishing returns: the more RAM you have, the more bytes you're likely to need to move around.
For applications that move more bytes from drives to memory, this means we need to wait for the transfer, and we end up preferring to delegate those applications to the cloud instead of running them on our own machine, because servers are where RAM is increasing and abundant.
So basically we are doing a lot more big-RAM computation elsewhere and small-RAM computation on our desktop.
My MacBook Pro M1 Max has 64, and an M2 Max will probably top out at 96 if they use Samsung's new 12 GB LPDDR5.
An entry-level laptop now comes with 16 GB. Scaling is just not as fast because RAM has sweet spots and swapping is much faster with SSDs. To be quite honest, if you interpret his lyrics, the people with lots of RAM are prosumers or professionals, who can get 100GB of RAM or more; it's not just about the SoCs.
Okay, let's stipulate that "Weird Al" knew exactly what he was talking about, so why did he claim "100 gigabytes" which is an impossible number to achieve in one device? (Other than meter scanning) It should have been 128, yes?
"Weird Al" was, in fact, making a song, not a rigorous piece of technical journalism.
"I got me a hundred gigabytes of ram" flows better lyrically in the song than "I got me a hundred-twenty-eight gigabytes of ram".
Whether he knew the details of typical memory module sizes or not, it's such a minor inaccuracy that the lyrical flow matters far more.
Anyway, I bet you could achieve it if you really wanted to try. Get a dell poweredge server with 24 DIMMs, install 12x8GB (96) + 4x1GB (4) for your total of 100, and with 8 DIMMs leftover.
Problem is, there is a big divide in user base: For a normal user, 8 GB is mostly sufficient. If you have a lot of chrome tabs open, 16 GB makes your day. But, if you are any sort of a creator, you can not have enough ... my 256GB is barely enough.
8 gigabytes is not enough to run Windows. It's OK for an iPad, but any Windows computer with 8GB of RAM is essentially unusable and will crawl to a halt the second you open a web browser.
We use uBlock Origin and try to use compressed RAM as virtual swap everywhere.
Chrom*-based browsers have optimized switches for low end machines, starting with --light; that's it, append that parameter to your desktop shortcut and things will speed up a bit.
Using the web today without uBO is suicidal.
It is ridiculous that to get a laptop PC with more than 16gb it has to be a heavy gaming laptop, super expensive, with high power consumption. At least here in Australia. And the 16gb ones can’t be upgraded further.
It's kind of terrible that people are still stuck on 16 and even 8 GiB of memory, and we're suffering for it.
Firefox recently introduced the most ridiculous feature ever: unloading tabs, which breaks JavaScript applications and gives an unreasonably bad experience. I disabled it, but it keeps changing back to being enabled. I have 128 GiB of memory in this machine; I don't care if Firefox uses 10 or 20 of them.
It's 2022, it's sort of ridiculous that we still even have memory tiers..
I want my on-die 16 TiB nonvolatile RAM, and I want it two years ago!
There's a difference between something still existing and something still being a problem.
The reason it was a problem on spinning disks, was that the delay in getting the next bit of data highly depended on where it was relative to the previous bit. With SSDs (as I understand it) looking up any bit of data is (just about?) independent of where it is located.
So, it's still a thing - but the cure (involving lots of wear) is a lot worse than the disease now.
Yeah, that was one of the reasons I said "just about". But minuscule compared to millisecond seek latencies on a spinning disk.
So the problem getting orders of magnitude smaller while the costs of the cure increasing a lot swung the equation away from defragging being a good thing overall.
Spinny hard drives have a master record file that tells the seek heads where all the parts of the data lie on the magnetic material.
It takes time for the read arms to move back and forth to literally read the data on the sector indicated by the master record file, and defragmenting the drive helped minimize that by moving the files around to be as linearly arranged as physically possible, given the sophistication of the given system.
SSDs also have a similar master record file, indicating where all of the bits of the files are located in the flash memory. But since it makes at most the most atomically minute difference in seek time for the controller to read the data from those sectors (there is no seek arm that has to physically move to the location, nor any delay waiting for the spin of the disk to reach the data), file "fragmentation" has a negligibly minuscule impact on file read speeds or drive performance. And given the dramatic increase in destructive writes that a file defragmentation would cause on an SSD, it should be avoided under any regular usage.
Hard drives do not have this metadata, the filesystem does.
Yes, this is exactly what I said.
People use the same filesystems on HDD and SSD drives.
Yes, of course, but I'm not talking about filesystems but the reasoning behind why fragmentation matters on Hard Drives and not SSDs.
Fragmented files still require more blocks to say where the various parts are.
I don't have a comparison file to review, but in all measurable and tangible matters, this point is moot as it does not impact SSD performance, only Hard Disk performance is affected by fragmentation.
And I explained to you, your understanding is incorrect.
An SSD needs so close to the same amount of time to read a linear file compared to a fragmented file that the difference is imperceptible. 0.1ms per seek, less than 1/1000th of what you are claiming.
To directly copy from Crucial's website:
Should I defrag my SSD?
The short answer is this: you don't have to defrag an SSD.
To understand why, we first need to look at the purpose of defragmenting a drive. Defragging ensures that large files are stored in one continuous area of a hard disk drive so that the file can be read in one go. Mechanical drives have a relatively long seek time of approximately 15ms, so every time a file is fragmented you lose 15ms finding the next one. This really adds up when reading lots of different files split into lots of different fragments.
However, this isn't an issue with SSDs because the seek time is about 0.1ms. You won’t really notice the benefit of defragged files — which means there is no performance advantage to defragging an SSD.
Defragmenting means moving data that's already on your disk to other places on your disk, often sticking it at a temporary position first. That's what gives defragmenting a disadvantage for SSD users. You’re writing data you already have, which uses up some of the NAND's limited rewrite capability. So, you get no performance advantage whatsoever, but you are using up some of the limited rewrite capability.
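To put those two seek figures side by side, a rough back-of-envelope comparison (Python); the fragment count is arbitrary, purely for illustration:

    fragments = 200
    hdd_seek_ms, ssd_seek_ms = 15.0, 0.1  # figures quoted above

    print("extra seek time on HDD:", fragments * hdd_seek_ms / 1000, "s")  # 3.0 s
    print("extra seek time on SSD:", fragments * ssd_seek_ms / 1000, "s")  # 0.02 s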
My understanding is not incorrect, please stop assuming that because your understanding is shallow (using consumer-grade information as a reference makes that obvious) that that of others must be as well.
I never claimed that you should defragment your SSD, I just pointed out that fragmentation is still an issue on SSDs, which is a true fact.
Reading a large fragmented file can be 30% slower than if it is not fragmented. Now if you don't think that's significant, you're just silly.
Well, then, instead of being an abrasive speaker, you should point me to sources that back up your claim.
I was able to find a page that talked about write amplification and read speeds decreasing when files are overly fragmented on SSDs, but that was 7 years ago and technologies have improved dramatically since then. Not to mention that the effect was found in files split into four hundred thousand pieces, which indicates the slowdown may have been more about the in-drive ASIC not having the throughput needed to deal with that many fragments than about the actual read/write of the disk itself.
Where's your recent (last 4 years) paper or article that indicates that file fragmentation has anything more than a negligible effect on read/write speeds on an SSD?
If you can't put up, then nothing you say matters.
This is pure logic, no need for any experiment or benchmark.
Fragmented files require reading more blocks. The exact scheme depends on the filesystem used. The 30% figure came from an estimation of the ext4 worst case vs best case of number of reads needed.
A hard drive can give you the next block almost instantly.
But a random block may take many many times that as it repositions the head and waits for the block to come spinning by.
And an SSD returns any block almost at the same speed, so defragmenting only increases the amount of wear on the drive (and it internally remaps anyway so you don’t even get what you’re after - block 5 is a logical block that may be currently mapped to 466 not next to block 6).
I’ve found it pretty handy to run a defragmentation in a guest OS before trying to shrink virtual hard drives. Not sure it affects all virtual drives but hyper-v vhdx can compact poorly if the data is quite fragmented and it’s often easier to defrag in the guest if the filesystem isn’t something you have drivers for on the host.
They're binary prefixes[0]. If we go by the traditional meaning of the SI prefixes, "M" means "10^6", so 1MB is 1,000,000 bytes. 1MiB, by contrast, is 1,048,576 bytes (1024*1024, or 2^20), which would be more correct for something like RAM.
Yes, in the past the SI prefixes were "abused" for power-of-two quantities, but nowadays it's best to be more precise, as many computer-related things actually do come in power-of-ten quantities, not just power-of-two quantities. For example, my NVMe drive's capacity is actually 2TB, not 2TiB[1].
[1] Ok, ok, it's actually showing up in `parted` as 2,000,398,934,016 bytes, which is a little bit more (~380MiB) than 2TB, but considerably less (~185GiB) than 2TiB.
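Checking that arithmetic (Python): SI "T" is 10^12 while binary "Ti" is 2^40.

    drive_bytes = 2_000_398_934_016       # as reported by parted
    TB, TiB = 10**12, 2**40
    MiB, GiB = 2**20, 2**30

    print((drive_bytes - 2 * TB) / MiB)   # ~380 MiB more than 2 TB
    print((2 * TiB - drive_bytes) / GiB)  # ~185 GiB less than 2 TiB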
If you've been in computing since, say, the 90's, you don't. It was always based on context. Memory = binary, bandwidth = decimal, disk/SSD storage = decimal (maybe!)
I'm amazed by the number of people here that think 16 or even 8 gigs are "enough" for everyday use.
Sure you CAN live like that, but it's noticeably painful with even the most pedestrian of usage.
Open a browser and a handful of tabs, there go 4 gigs; open some messengers and you've got another couple gigs gone. Maybe that's fine as long as the browser and messengers are literally all you do, but the machine becoming painfully slow to use as soon as you go from background-level usage to any sort of productivity is not "good enough" by a long shot IMO.
My primary "personal" system is an M1 MBP with 8GB of RAM. I edit 6K video on it all the time. While it could certainly use more RAM, I don't feel like I need it bad enough to justify spending the money right now.
My gaming desktop only has 16GB of RAM. It's the fastest DDR4 that my motherboard will support, granted, but all of the games I play on it are CPU or GPU bound. Additional RAM isn't going to significantly help anything.
It's acceptable and can do a lot, for sure, but I was getting the vibe that a lot of people consider more basically pointless, which IMO it isn't.
I think with a little bit of multitasking most users would experience a noticeable improvement.
Is the problem actually the lack of ram though? We used to browse the web just fine on megabytes of ram... Now I expect modern sites to take my resources, but 1000x more?
Arguably the problem is the excessive use of the resource, but that's kinda just the reality of it.
I'm not saying it's impossible to deal with, but it's nowhere near the point where I never have to think about it, which is what I would consider truly "enough".
Whippersnappers! :-D