Weird Al had 100 gigs of RAM (rubenerd.com)
301 points by Tomte on Sept 6, 2022 | 400 comments



Of course, the other plausible explanation is that (at least at the time) Weird Al didn't know the difference between RAM and hard drive space (especially likely, because he's talking about defragging a hard drive). In fact, I'd be surprised if that wasn't what he meant. I will, however, leave room for him knowing the difference, but intentionally being absurd (it is Weird Al, after all).

That problem (referring to things that are not conventional fast, volatile random access memory as RAM) seems to have only gotten worse in the last twenty years - exacerbated, I believe, by the increased use of flash technology (SSDs, etc. - which have, admittedly, blurred the lines some).

It also doesn't help that smart phone and tablet manufacturers/advertisers have long insisted on referring to the device's long-term storage (also flash tech) as "memory". I doubt that will ever stop bugging me, but I've learned to live with it, albeit curmudgeonly.

Whippersnappers! :-D


I’m going to have to stan for Weird Al here and say that there’s basically 0% chance that he didn’t know the difference between RAM and hard drive space. He’s actually quite meticulous in his songwriting and pays a lot of attention to details. And with a mostly (white and) nerdy (ba-dum-tssshhh!) audience he knows he’d never hear the end of it if he screws up. Must be quite the motivator to get things right.


He even mentions hard drive space separately, as something comparable to removable storage:

You could back up your whole hard drive on a floppy diskette

You're the biggest joke on the Internet

I can only conclude that he did understand the distinction between RAM and hard drive space. The song is full of jokes that show a deep understanding of the topic. Indeed I don't find any nits to pick.


Also, taken in context, telling somebody their whole HDD fits in a floppy is entirely consistent with bragging about your entirely unreal 100 gigs of RAM.


Just to be clear, I did not in any way mean this as disparaging toward Weird Al. I've been a fan of his music for decades.

But as I said elsewhere in this thread, I have been surprised in the past by people I would have described as fairly tech-savvy, who still called hard drive space "memory".

However, if the two phrases are related (as opposed to just being adjacent in the song), at this point I'd guess Al does probably know the difference, and the relationship is intended as over-the-top absurd bragging.


> who still called hard drive space "memory"

I mean, “memory” in the computer sense is by analogy to memory in the brain sense, which is used for both short and long term storage (and I’d argue more commonly for long term storage). It therefore doesn’t seem unreasonable to me for people to call a computer’s long-term storage its “memory”. The function of a hard drive is to remember things.


He didn’t say “memory” - he said RAM! Why? Because it’s funnier and it rhymes with spam! Plus “hard drive space” is three syllables. It’s a song not a README, sheesh


It’s a SINGME not a README


I like the idea of adding SINGME files to my repos from now on


I consider myself pretty tech savvy and call hard drives memory. HDDs exist in the middle of a memory hierarchy stretching from registers to cloud storage.


You're not wrong, they are a kind of memory. I think more people would understand you if you call RAM memory and HDDs and SSDs storage though.

It goes the other way too: RAM is a type of storage, which is the terminology IBM uses (and I think has used since the 60s or so).


Memory is where you store data


I don’t really care if the unwashed masses understand me.


Do you call them RAM, though?


Never thought about it before, but I guess SSDs are random access, so it would be technically correct to call them RAM, and that’s my favorite kind of correct.


Jumping on the nitpick train: SSDs are less random-access for writes because of the requirement to explicitly erase an entire block before you can rewrite it. RAM doesn't have this constraint. It has other constraints, like writing and refreshing rows at a time, but combined with the difference in erase cycles causing wear in flash, you do end up with a significant difference in just how randomly you can access these memories.


SRAM is the only type of memory offering true random access. DRAM can suffer so much on pure random access that completely rewriting your algorithm to have mostly sequential access to 20x more memory still ends up 4x faster (lean miner vs mean miner in [1]).

[1] https://github.com/tromp/cuckoo


Well as long as we are nitpicking: random access does not say anything about the block size - you're not going to be writing individual random bits with any kind of memory. And random access also does not say that all blocks have the same access time, only that the time does not depend on what you have accessed previously. In conclusion, an array of spinning rust drives is random access memory.


My mother calls the entire computer "hard drive", a leftover from old newspapers in the '90s.


Yep. Most faculty/staff where I work either call a desktop computer a "hard drive" or a "CPU".


RAM is just cache for your drive. And these days your drive is just cache for the internet. It’s all memory, one big memory.


Most of the time, my RAM directly caches the internet. (Ie when I watch YouTube or browse the web, stuff doesn't typically go on disk.)


RAM is NOT cache for your drive. It's the active work space where the CPU tracks all the bits and bobs of executing code - state, variables, etc.

Swap space on disk is a cache of RAM, if you want to go down that path - but RAM and disk support entirely different purposes.


Or, swap (and hibernate) are the RAM cache flushing to disk. ;)


> I have been surprised in the past by people I would have described as fairly tech-savvy, who still called hard drive space "memory".

That's not really incorrect, though. RAM is random access memory, as opposed to memory that has other access patterns.

It's less common these days, but among people who learned to use computers in the 80s, you'll still hear some use "memory" to refer to storage that isn't RAM.


Windows 95/98 labelled HDD space as "Memory" so it's unsurprising that that's common.

That's why I always had to make sure to say "RAM" when I meant memory in the 90s.


>> Windows 95/98 labelled HDD space as "Memory" so it's unsurprising that that's common.

Where did it do that?


Calling a hard drive “memory” can also be left over from very old computers which blurred the distinction between transient and permanent/stable memory.


> basically 0% chance

lost credibility here


The rest of the lyrics make it clear that Weird Al knows exactly what he's talking about.

https://genius.com/Weird-al-yankovic-its-all-about-the-penti...


In this particular song, His Weirdness offers a few timeless observations:

> Upgrade my system at least twice a day

> What kinda chip you got in there, a Dorito?

> You say you've had your desktop for over a week?

> Throw that junk away, man, it's an antique!


yeah, the Usenet reference had to have been obscure even back then. and beta-testing OSes back then would have been... what? Windows Neptune and Copland?


At the time, I had used/tried out the following OSes

  - BeOS 4.5 (including some beta versions)
  - BeOS 5 (including some beta versions)
  - Windows NT 4.0
  - OS/2 Warp
  - Windows 95 (and its service releases) (including beta versions)
  - Windows 98 (and its service releases) (including beta versions)
  - PC-DOS
  - MS-DOS with DosShell
  - MS-DOS with Windows 3.11
  - Whatever old version of MacOS was on the school computers
  - Slackware
(might have some dates wrong)

I was also on mIRC and downloading from newsgroups regularly.

I think many ISPs would give you guides about using email/newsgroups back then, as those services were considered required for an ISP. TUCOWS was super popular for this newfangled WinSOCK software (TUCOWS stands for The Ultimate Collection Of WinSOCK Software). I remember testing how fast the first consumer cable internet connections were by downloading from them.

You are right for most people that stuff was probably obscure.


I had been exposed to some of those, plus SunOS, Solaris, VMS, AmigaOS, QNX, Red Hat and Debian.

There was no such thing as an OS monoculture.


At that time you'd have been mocked mercilessly about mIRC. It's a client program, and you'd be on IRCnet, EFnet or QuakeNet, and the last one would earn you endless disrespect from IRCnet veterans.


Today, Usenet is obscure. Late 90's was right around when Usenet was at its peak. ISPs back then advertised newsgroups as a feature: "we have a full newsfeed", that sort of thing. I worked at one that built a server with a whopping 9 gigabyte hard drive just for the NNTP feed! That was pretty cutting edge back in 1995.


Among the general MTV-watching audience of the 90’s, Usenet was very obscure.


I subscribed to newzbin well into the mid 2010s.


Eh; Debian has been around since '93. I'm sure there were dozens of non-mainstream OS's fighting for nerds' attention by 1999.


Weird Al has an engineering degree from Cal Poly


To be specific, he has an architecture degree from Cal Poly.


Maybe so, but I don't see anything in the lyrics that could not be taken from a friend who's "really good with computers".

At this point, however, I'm willing to give Al the benefit of the doubt. It certainly would be "in character" (for lack of a better term), for him to be knowledgeable enough to know the difference. But I have been surprised by others on this particular point in the past.


> I will, however, leave room for him knowing the difference, but intentionally being absurd (it is Weird Al, after all).

This isn't even intentional absurdity. The theme of the song is bragging. Here are some other lyrics.

    I'm down with Bill Gates
    I call him "Money" for short
    I phone him up at home
    and I make him do my tech support

    Your laptop is a month old? Well that's great,
    if you could use a nice heavy paperweight

    Installed a T1 line in my house

    Upgrade my system at least twice a day
The line about having 100 gigabytes of RAM is completely in keeping with every other part of the song. There's no more reason to think Weird Al might not have known what RAM was than there is reason to believe he didn't know that having Bill Gates do your tech support was unrealistic, or that PCs are rarely upgraded more than once a day.


I think the main point is that all the other (technical) brags are quite quaintly outdated (T1 lines are glacial by today's standards, while interestingly the lines about how quickly hardware becomes obsolete have become less accurate over time); 100GB of RAM, on the other hand, is still quite extreme.


This was an active topic of debate back in the days when people still relied on modems. T1 gives you 1.544 Mbps of "bandwidth", but the fact that it's "guaranteed" bandwidth means it should be fast enough for anything you'd need to do as an individual user. If you had your own private T1 line all to yourself, the latency should feel the same as being on the same LAN as whatever service you're trying to access. Even a 56k modem still had a small but noticeable latency, especially if you're doing command-line work where you expect echo-back on each character you type.

People don't really understand the speed vs. bandwidth debate, but they do know the psychological difference when latency is low enough to be unnoticeable.
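
As a rough back-of-the-envelope illustration (Python; the ~100 KB page size is an assumption for the sake of the example, not a measurement), the raw transfer-time gap was real, but for interactive use the per-keystroke round trip was what you actually felt:

    # Rough bandwidth comparison; page size is an assumed figure for illustration.
    PAGE_BYTES = 100 * 1024      # a ~100 KB late-90s web page
    T1_BPS = 1_544_000           # 1.544 Mbps, all to yourself
    MODEM_BPS = 56_000           # 56k modem, best case

    def transfer_seconds(size_bytes, bits_per_second):
        return size_bytes * 8 / bits_per_second

    print(f"T1:    {transfer_seconds(PAGE_BYTES, T1_BPS):.1f} s")     # ~0.5 s
    print(f"modem: {transfer_seconds(PAGE_BYTES, MODEM_BPS):.1f} s")  # ~14.6 s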


IIRC the point about T1 lines are their availability + uptime + bandwidth guarantees, but yea they're not so hot anymore from the ubiquity of other high performance networking alternatives.


I don't know how extreme I'd consider it; 64GB of RAM is relatively common and 128GB is just one step up from that.


I'd say it differently: 64GB of RAM is easily purchasable and 128GB isn't much harder - just $$$.

But to my knowledge not many purchase it. As the article says, the defaults for new machines from the main vendors are still 8 or 16GB. I suspect most could be happy with 32GB for the next 3 years, but that isn't a default.


64 GB is a high-end desktop these days. But most desktops aren't high-end, and most PCs these days aren't desktops.


> There's no more reason to think Weird Al might not have known what RAM was than there is reason to believe he didn't know that having Bill Gates do your tech support was unrealistic

I understand why you might argue that, but I've been surprised in the past by people who were fairly-well-versed on computers (no pun intended), but still called hard drive space "memory".

I should go listen to that section of the song again (like I said elsewhere, it isn't one of my favorites of his). By intentional absurdity, I meant that I could see Al intending this to be a case of bragging in an absurd way "Defraggin' my hard drive for thrills, I got me a hundred gigabytes of RAM".

But now that I noticed the "I" in the middle there (I had missed it before), I'm guessing I was wrong in thinking the lines could be cause-and-effect (which would be the source of the absurdity).


> but still called hard drive space "memory".

Well, it is secondary memory. Besides, absolutely, it is memory. It's just not the main memory in a relative context about your computer.

In fact, "storage" and "memory" mean about the same thing. And since it's not technically the main or primary memory of our computers anymore, and RAM is actually an implementation detail that all of the in-machine memory shares, I'm not sure I have a good pedantic name for it.


> RAM is actually an implementation detail that all of the in-machine memory shares

Nope. A pagefile is random-access memory backed by your sequential-access hard drive (or, hey, maybe your random-access SSD) instead of your "RAM".


I mean... My computer has a device called RAM that communicates with the north bridge by a protocol where sequential readings have lower latency than random ones. It also has a device called disk (or sometimes drive), that has nothing shaped like a disk (and is never loaded), and communicates with the north bridge by a protocol where sequential readings have lower latency than random ones.

At the CPU side, there is an in-package device that communicates with the memory controller by a protocol where sequential readings have lower latency than random ones. It communicates with an in-die device, that finally communicates with the CPU with a protocol that provides random access. That last one is obviously not called "RAM".


> I'm guessing I was wrong in thinking the lines could be cause-and-effect

This is the root cause of why everyone is arguing with you. For some reason, TFA quotes the defrag line and the RAM line as if they are somehow paired, which you've also latched onto, but it makes no sense to consider them as a pair. Many lines in this song are to be interpreted as their own complete thought. And if we really want to pair up consecutive lines, wouldn't we use lines that rhyme to do so?

...

Nine to five, chillin' at Hewlett Packard? // Workin' at a desk with a dumb little placard?

Yeah, payin' the bills with my mad programming skills // Defraggin' my hard drive for thrills

I got me a hundred gigabytes of RAM // I never feed trolls and I don't read spam

Installed a T1 line in my house // Always at my PC, double-clickin' on my mizouse

...


Yeah, that's fair.

As I said before, this is not one of my favorite Weird Al songs (many of which I could sing along with word-for-word from memory). I was never crazy about the song, so I am not surprised that I didn't know the surrounding context (which the original article left out). Still, I don't think you can exclude the possibility of a cause-and-effect relationship spanning the middle of two rhyming lines (perhaps particularly, in rap music), but after seeing and hearing the lines in context, I don't think that was Al's intention here.


It's really not the users' fault. Most civilians understand that computers have to have some way to "remember" their files, but the fact that computers also need memory that "forgets" if the power goes off makes no sense to them.

It shouldn't make sense to us either; it's a ridiculous kludge resulting from the fact that we've never figured out how to make memory fast, dense, cheap, and nonvolatile at the same time.

Actually Intel did figure it out with Optane. Then they killed it because computer designers couldn't figure out how to build computers without the multilevel memory kludge. IMHO this is the single dumbest thing that happened in computer science in the last ten years.


My understanding is that the problems with Optane were a lot more complicated than that. @bcantrill and others talked about this on an episode of their Oxide and Friends Twitter space a few weeks ago. A written summary would be nice.


Thanks for the tip. I learned a lot listening to this [0].

My takeaway: Optane was good technology that was killed by Intel's thoroughly wrong-headed marketing. It's cheaper than DRAM but more expensive than flash. And much faster than flash, so it makes the fastest SSDs. But those SSDs are too expensive for everybody except niche server farms who need the fastest possible storage. And that's not a big enough market to get the kinks worked out of a new physics technology and achieve economies of scale.

Intel thought they knew how to fix that: They sold Optane DIMMs to replace RAM. But they also refused to talk about how it worked, be truthful about the fact that it probably had a limited number of write cycles, or describe even one killer use case. So nobody wanted to trust it with mission-critical data.

Worst of all, Intel only made DIMMs for Intel processors that had the controller on the CPU die. ARM? AMD? RISC-V? No Optane DIMMs for you. This was the dealbreaker that made every designer say "Thanks Intel but I'm good with DRAM." As they said on the podcast, Intel wanted Optane to be both proprietary and ubiquitous, and those two things almost never go together. (Well obviously they do. See Apple for example. But the hosts were discussing fundamental enabling technology, not integrated systems.)

[0] https://podcasts.apple.com/us/podcast/rip-optane/id162593222...


> Most civilians understand that computers have to have some way to "remember" their files, but the fact that computers also need memory that "forgets" if the power goes off makes no sense to them.

Well, of course that makes no sense. It isn't true.

We use volatile memory because we do need low latency, and volatile memory is a cheap way to accomplish that. But the forgetting isn't a feature that we would miss if it went away. It's an antifeature that we work around because volatile memory is cheap.


I agree we would be fine if all memory was nonvolatile, as long as all the other properties like latency were preserved.

In terms of software robustness though, starting from scratch occasionally is a useful thing. Sure, ideally all our software would be correct and would have no way of getting into strange, broken or semi-broken states. In practice, I doubt we'll ever get there. Heck, even biology does the same thing: the birth of a child is basically a reboot that throws away the parent's accumulated semi-broken state.

We have built systems in software that try to be fully persistent, but they never caught on. I believe that's for a good reason.


It would take serious software changes before that became a benefit. If every unoptimized Electron app (but I repeat myself) were writing its memory leaks straight to permanent storage my computer would never work again.


> If every unoptimized Electron app (but I repeat myself) were writing its memory leaks straight to permanent storage my computer would never work again.

This is a catastrophic misunderstanding. I have no idea how you think a computer works, but if memory leaks are written to permanent storage, that will have no effect on anything. The difference between volatile and non-volatile memory is in whether the data is lost when the system loses power.

A memory leak has nothing at all to do with that question. A leak occurs when software at some level believes that it has released memory while software at another level believes that that memory is still allocated to the first software. If your Electron app leaks a bunch of memory, that memory will be reclaimed (or more literally, "recognized as unused") when the app is closed. That's true regardless of whether the memory in question could persist through a power outage. Leaks don't stay leaked because they touch the hard drive -- touching the hard drive is something memory leaks have done forever! They stay leaked because the software leaking them stays alive.
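
A minimal sketch (Python, purely illustrative; the names are made up) of one common kind of leak, just to underline that a leak is allocation bookkeeping inside a live process, not a property of the storage medium:

    # Illustrative only: a "leak" is live allocation bookkeeping in a running process.
    _cache = []  # grows forever because nothing ever removes entries

    def handle_request(payload: bytes) -> None:
        _cache.append(payload)  # the allocator still (correctly) considers this in use

    # Whether the pages backing _cache sit in DRAM, in swap on an HDD/SSD, or in some
    # hypothetical non-volatile RAM changes nothing: once the process exits, the OS
    # reclaims its whole address space and the "leak" is gone.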


> The difference between volatile and non-volatile memory is in whether the data is lost when the system loses power.

I'm aware. This is a feature for me - I disable suspend/hibernate/resume functionality. I don't want hiberfile.sys taking up space (irrelevant in this scenario, I guess) and I certainly don't want programs to reopen themselves after a restart, especially if it was a crash. If all storage were nonvolatile, OSes would behave as though resuming from hibernate (S4) all the time.

> that memory will be reclaimed [. . .] when the app is closed.

Again, I'm aware. I'm glad you've never had any sort of crash or freeze that would prevent closing a program, but it does happen.

OSes would need to implement a sort of virtual cold boot to clear the right areas of memory, even after a BSOD or kernel panic. Probably wouldn't be that hard, but it would have to happen.


> Again, I'm aware.

Are you? Because according to your words in this more recent comment, your original comment was meaningless gibberish.

> If all storage were nonvolatile, OSes would behave as though resuming from hibernate (S4) all the time.

That isn't even possible to do. Thus, obviously, it would not be done.


> That isn't even possible to do.

What? Of course it has to have a first boot sometime, but past that S5 would no longer need to exist.


You could still have a restart "from scratch" feature in the OS. But persistent RAM could potentially mean the power dropping for a few seconds means you don't lose your session.


I used to explain it to clients as the difference between having your files nicely organized in cabinets or on shelves, and having the contents of several different files strewn over the desk. For people who like to keep a tidy desk this metaphor made immediate sense.


It explains some aspects and obscures some other ones.

It explains how the documents at arm's reach on the desk are faster to access than the things in the cabinets. It obscures the fact that stuff on the desk disappears if the power supply glitches even just a little.

In fact we invented these batteries with electronics which sense if the electricity is about to go out, so the computer has time to carry the documents from the desk to the cabinets before they disappear. And we think this is normal and even necessary in pro settings. (I’m talking about uninterruptible power supplies, of course.)


A lot of people clear their desk before leaving the office, either because they're tidy and well organized or because they don't want to leave confidential info out where the cleaning staff could browse through it. It's not hard to extend the metaphor to a reboot of the computer being like the end of the workday.


on the desk/in the cabinets has worked as a metaphor for anyone I've had to explain it to.


See also "Wi-Fi" meaning any Internet access now. Or "screen saver" meaning the desktop wallpaper.


Just three days ago, professional streamer Hakos Baelz made this exact mistake.

...and then had to be told that the cable you plug into the back of a landline telephone isn't just a power connector. Sadly, I doubt we'll ever know how she thought telephones actually worked.


20mA current loop?


I think she was playing up most of her zoomerisms during that stream to play off Vesper's boomer character.

I think. But who knows.


The CPU is that box that the monitor and keyboard connect to. :-/


I could be wrong, but I think back in the day the computer monitor was the whole system for debugging and viewing the state of a running computer (which is why the debugger on an Apple II is called a monitor, for example) but now we just use it to mean the physical screen.


I'm not sure about that.

CRT monitors were a relatively late addition to terminal devices. Traditional terminals were hardcopy printers; they printed the output you received onto paper.


Internet is down! Umm no it’s fine. Shows me Facebook not working. Umm that’s just Facebook down. So the internet is down!

After a few rounds sure. The entire Internet is down. I’m going to go play an online game.


A pet peeve I've had to grow out of is the missing word "connection" when people say "The Internet is down".


So you don’t believe in technological solipsism?


The web server is down!

- Sales guy


I think you mean "the website is down!"

https://youtu.be/uRGljemfwUE


Huh? I have never encountered those.


My favorite one was somewhere on Reddit where someone bought a house and was asking how to rip out the Ethernet so they could install Wi-Fi.

Thankfully everyone was like "NO. Don't do that, here's how to install a wireless router to your ethernet setup"


I have. Often. It's probably not common in tech circles but "desktop", "wallpaper", and "screen saver" are often used interchangeably.

"Menu bar", "dock", "toolbar", "menu", and other similar terms are used more or less at random.

It's simply not common for the average user to know the names of UI components.


I was about to say that confusing the first two makes perfect sense because it's been a long time since the Desktop has had any real purpose beyond displaying a background image. Then I realised that the kind of people we're talking about probably have a shortcut for every single application they've ever installed on their desktop and have no idea how to delete them.


It's been an equally long period since screensavers had any actual purpose, being replaced by power saving features and screens without burn-in.


They're still somewhat useful with OLEDs, although even there turning the screen off is sometimes better.


It doesn't help that different brands of OS used different terms for the same thing.

The screen saver mixup makes no sense though. That can only mean one thing.


I'm clear on the first set but will cop to not having thought much about which of those is which in the second set.


I've seen the ones on the second set meaning completely different things from one application or environment to another. Even switching the meaning between themselves.


Your family must be very different than mine! (and coworkers too:)

I have frequently heard the phrase at work "you plug into here to get wifi". My family and in-laws absolutely positively do not understand the difference between wifi and internet. In particular, the idea that they may be connected to wifi but not have internet is an alien, incomprehensible notion.

It depends who you hang around with. Computer professionals understand the difference. Most others do not, as they don't need to - empirically, to the majority of the population, wifi IS internet (as opposed to cell phones, of course, which use "data":)


WiFi is the blue ‘e’ on my screensaver.


Aaaugh. You just summarized all the frustration of trying to help my family with technology.


My children complain the wifi is down when their ethernet cable is broken. They say that AFTER THEY TELL ME IT'S BROKEN. This is not just a meme, they should know better, and are very unhappy on WiFi, but still tend to call all internet Wi-Fi.


There was an incident years ago with a library offering wired Ethernet for laptops, and the sign said "wired wifi connection". I'm not sure it's really that common.


wifi but the wi stands for wired


And what does the "fi" stand for?


From: https://en.wikipedia.org/wiki/Wi-Fi

> The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'."

> The Wi-Fi Alliance used the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created, and the Wi-Fi Alliance was also called the "Wireless Fidelity Alliance Inc" in some publications.

Elsewhere I've seen that it was chosen because of its similarity to HiFi - which is short for "High Fidelity" (regarding audio equipment):

https://en.wikipedia.org/wiki/High_fidelity

> High fidelity (often shortened to Hi-Fi or HiFi) is the high-quality reproduction of sound. It is important to audiophiles and home audio enthusiasts. Ideally, high-fidelity equipment has inaudible noise and distortion, and a flat (neutral, uncolored) frequency response within the human hearing range.

However, there seems to be a lot of debate as to whether the 'Fi' in WiFi was ever fully accepted to mean "Fidelity". So pretty much it seems marketing people liked the way "WiFi" sounded, so they used it.


WIres For Information


The same thing it stands for in "WiFi."


I hear stuff like "Comcast offers the fastest wifi for your gaming needs" anytime I am accidentally exposed to ads.


Assuming most people use the combo modem/router (WAP, technically, since this entire thread is an exercise in pedantry) provided by their ISP, this makes sense from a non-technical user's perspective, if Comcast always ships hardware with the latest Wifi spec. Of course, you need a whole mess of asterisks after that, because I don't think I own a single client device with Wifi 6E, plus network contention, time sharing, etc. Gamers should do what I do and plug a wire thingy from their computer to their Wifi thingy.


Ok, I found a hanger wire in my closet and have plugged one end to my computer power hole and the other into my wifi thingy power hole. Now what?

- someone in a much too near future


I've certainly heard both.

For a lot of people all their connected devices at home are wireless: smart devices, phones, many have laptops rather than desktops, tablets, …, and while out they connect to other wi-fi networks. It is easy to conflate WiFi and cellular data access, and if you don't use much or any wired networking at home all your normal network access is therefore WiFi.

Screensaver as wallpaper is more rare but I have heard it, and have done for some time (I know non-technical people who have used wallpaper and screen saver wrong for many years, going back to when fancy screen savers were more common than simply powering off, either calling both by one name or just using them randomly). More common these days though is people simply not knowing the term, except to confuse it with a physical screen protector.


Those lines are separate lines of separate couplets.

The lines around it are

"Paying the bills with my mad programming skills

Defragging my hard drive for thrills

Got me a hundred gigabytes of RAM

I never feed trolls and I don't read spam"

They're not really related to each other besides being right next to each other. The lines rhyme with other lines. Which, if you were trying to link ideas lyrically, is where you'd do it, on the rhyme.

But, the entire song is just a litany of various brags centered around technology. It is, like most of Al's work, pretty clever and knowledgeable. Not only of the source material, but of the subject presented.


Yeah, I guess it depends on whether you interpret them as separate.

It is possible to think of it in a cause-and-effect way "Defragging my hard drive for thrills got me a hundred gigabytes of RAM". Which, honestly, I could see Al saying that as a purposefully absurd statement (because the first could not cause the second, but people brag like that all of the time).

I will admit that, although it seems like it should be (given my profession), this is not one of my favorite Weird Al songs --and I've been a fan for decades. So while I have heard it many times, I can't remember the last time I listened to it.


There's really only one interpretation. There's a definite pause between the lines as well. This isn't really a matter of perspective. The only way I can concede that it's possible to read those lines in your way is not particularly flattering to you.


I still tell people that I'm double clickin' on my mizouse.


I'm 100% sure he knew what he was talking about. That line wouldn't have fit the song if he threw out a reasonable-sounding number for the 90s.

> Defragging my hard drive occasionally, when I start to notice drops in i/o performance

> I got me 32 megabytes of RAM

No thank you.


I don't think I've ever heard anyone confuse ram and mass storage by saying "ram".

All I think I've ever seen is anyone who says "ram" knows what ram is, and people who are fuzzy on it call it all "memory".


Yeah, I think that's mostly true.

However, I'm sure I have heard a few people use RAM when they meant hard drive space. Though, in fairness, those were probably friends/relatives who used computers but didn't know a lot about how they actually work.


Another plausible explanation is that `100 Gigabytes of RAM` is more catchy than `100 Megabytes of RAM`


"A thousand megabytes of RAM" has the same meter, and sounds more like an absurd brag than "A hundred gigabytes" even though it's smaller


Conversely, when it comes to mobile devices, RAM is still used in its original meaning, but somehow internal Flash storage became "ROM".


That one is logical, kinda, flash memory is [EEP]ROM.


Not once you drop the first part and it becomes just "read-only memory", though!


I think the poster is overthinking it.

It was probably just that a gigabyte sounds big, 100 sounds big. Therefore, 100 gigs


> especially likely, because he's talking about *defragging a hard drive*

Can someone explain why this is incorrect? I thought a hard drive is typically what one would defrag


The thing people are discussing is the 100GiB of RAM line. You wouldn’t typically defrag RAM (although that was a thing back then) and 100GiB is more indicative of HD space and not RAM. So the question is whether the two lines are related and did Weird Al say that defragging his disk freed up 100GiB of RAM (an impossible amount at the time) or 100 GiB of storage and got RAM and disk confused.


> and 100GiB is more indicative of HD space and not RAM

That's exactly what clearly points to it being about RAM.

> So the question is whether the two lines are related

There's no reason to think that they're related.


Or the fact that he's "doing it for thrills" implies he doesn't need to because he has mounted his entire file system as a RAM disk.


What should he have been talking about defragging?

I don’t remember ever defragging anything but my hard drive.


> the other plausible explanation

It's not really plausible in the context of those lyrics.


Interesting point that RAM has basically not moved up in consumer computers in 10+ years... wonder why.

The article answers its own question by pointing out that SSDs allow better swapping, the bus is wider, RAM is more and more integrated into the processor, etc.

But I think this misses the point. Did these solutions make it uneconomic to increase RAM density, or did scaling challenges in RAM density lead to a bunch of other nearby things getting better to compensate?

I'd guess the problem is in scaling RAM density itself, because performance is just not very good on 8GB MacBooks compared to 16GB ones. If "unified memory" was really that great, I'd expect there to be largely no difference in perceived quality.

Does anyone have expertise in RAM manufacturing challenge?


I don't think SSDs allowing rapid swapping is as big a deal as SSDs being really fast at serving files. On a typical system, pre-SSD, you wanted gobs of RAM to make it fast - not only for your actual application use, but also for the page cache. You wanted that glacial spinning rust to be touched once for any page you'd be using frequently because the access times were so awful.

Now, with SSDs, it's a lot cheaper and faster to read disk, and especially with NVMe, you don't have to read things sequentially. You just "throw the spaghetti at the wall" with regards to all the blocks you want, and it services them. So you don't need nearly as much page cache to have "teh snappy" in your responsiveness.

We've also added compressed RAM to all major OSes (Windows has it, MacOS has it, and Linux at least normally ships with zswap built as a module, though not enabled). So that further improves RAM efficiency - part of the reason I can use 64-bit 4GB ARM boxes is that zswap does a very good job of keeping swap off the disk.
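
For the curious, on a Linux box where zswap is compiled in you can peek at its state through sysfs (a small sketch; it assumes a reasonably recent kernel exposing the usual module parameters):

    # Print zswap's module parameters, e.g. enabled=Y, compressor=lzo, max_pool_percent=20.
    from pathlib import Path

    params = Path("/sys/module/zswap/parameters")
    if params.is_dir():
        for p in sorted(params.iterdir()):
            print(p.name, "=", p.read_text().strip())
    else:
        print("zswap doesn't appear to be available on this kernel")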

We're using RAM more efficiently than we used to be, and that's helped keep "usable amounts" somewhat stable.

Don't worry, though. Electron apps have heard the complaint and are coming for all the RAM you have! It's shocking just how much less RAM something like ncspot (curses/terminal client for Spotify) uses than the official app...


> especially with NVMe, you don't have to read things sequentially

NVMe is actually not any better than SATA SSDs at random/low-queue-depth IO. The latency-per-request is about the same for the flash memory itself and that's really the dominant factor in purely random requests.

Of course pretty soon NVMe will be used for DirectStorage so it'll be preferable to have it in terms of CPU load/game smoothness, but just in terms of raw random access, SSDs really haven't improved in over a decade at this point. Which is what was so attractive about Optane/3D X-point... it was the first improvement in disk latency in a really long time, and that makes a huge difference in tons of workloads, especially consumer workloads. The 280/480GB Optane SSDs were great.

But yeah you're right that paging and compression and other tricks have let us get more out of the same amount of RAM. Browsers just need to keep one window and a couple tabs open, and they'll page out if they see you launch a game, etc, so as long as one single application doesn't need more than 16GB it's fine.

Also, games are really the most intensive single thing that anyone will do. Browsers are a bunch of granular tabs that can be paged out a piece at a time, where you can't really do that with a game. And games are limited by what's being done with consoles... consoles have stayed around the 16GB mark for total system RAM for a long time now too. So the "single largest task" hasn't increased much, and we're much better at doing paging for the granular stuff.


Latency may be similar but:

1. Pretty sure IO depth is as high as OSes can make it so small depth only happens on a mostly idle system.

2. Throughput of NVMe is 10x higher than SATA. So in terms of “time to read the whole file” or “time to complete all I/O requests”, it is also meaningfully better from that perspective.


> NVMe is actually not any better than SATA SSDs at random/low-queue-depth IO.

The fastest NVMe SSD[0] on UserBenchmark appears to be a fair bit faster at 4k random read compared to the fastest SATA SSD[1].

75 MB/s vs 41.9 MB/s Avg Random 4k Read.

1,419 MB/s vs 431 MB/s Avg Deep queue 4k read

Edit: This comment has been edited; originally I was comparing a flash SATA SSD vs an Optane NVMe drive, which wasn't a fair comparison.

[0]: https://ssd.userbenchmark.com/SpeedTest/1311638/Samsung-SSD-...

[1]: https://ssd.userbenchmark.com/SpeedTest/1463967/Samsung-SSD-...


that might be with a higher queue depth though, like 4K Random QD=4 or something; I don't see where it says "QD=1" or similar anywhere there, and that's a fairly high result if it was really QD=1.

It's true that NVMe does better with a higher queue depth though, but, consumer workloads tend to be QD=1 (you don't start the next access until this one has been finished) and that's the pathological case due to the inherent latency of flash access. Flash is pretty bad at those scenarios whether SATA or NVMe.

https://images.anandtech.com/graphs/graph11953/burst-rr.png

https://www.anandtech.com/show/11953/the-intel-optane-ssd-90...

So eh, I suppose it's true that NVMe is at least a little better in random 4K QD=1, a 960 Pro is 59.8 MB/s vs 38.8 for the 850 Pro (although note that's only a 256GB drive, which often don't have all their flash lanes populated and a 1TB or 2TB might be faster). But it's not really night-and-day better, they're still both quite slow. In contrast Optane can push 420-510 MB/s in pure 4K Random QD=1.


Also people forget that the jump from 8 bit to 16 bit doubled address size, and 16 to 32 did it again, and 32 to 64, again. But each time the percentage of "active memory" that was used by addresses dropped.

And I feel the operating systems have gotten better at paging out large portions of these stupid electron apps, but that may just be wishful thinking.


Memory addresses were never 8 bits. Some early hobbyist machines might have had only 256 bytes of RAM present, but the address space was always larger.


Yeah, the 8bit machines I used had a 16bit address space. For example, from my vague/limited Z80 memories most of the 8bit registers were paired - so if you wanted a 16bit address, you used the pair. Too lazy to look it up, but with the Z80 I seem to remember about 7 8bit registers and that allowed 3 pairs that could handle a 16bit value.


Even the Intel 4004--widely regarded as the first commercial microprocessor--had a 12-bit address space


This got me thinking, and I went digging even further into historic mainframes. These rarely used eight-bit bytes, so calculating memory size on them is a little funny. But all had more than 256 bytes.

Whirlwind I (1951): 2048 16-bit words, so 4k bytes. This was the first digital computer to use core memory (and the first to operate on more than one bit at a time).

EDVAC (designed in 1944): 1024 44-bit words, so about 5.6k.

ENIAC (designed in 1943): No memory at all, at least not like we think of it.

So there you go. All but the earliest digital computer used an address space greater than eight bits wide. I'm sure there are some micro controllers and similar that have only eight bit address spaces, but general purpose machines seem to have started at 12-bits and gone up from there.


The ENIAC was upgraded to be a stored-program computer after a while, and eventually had 100 words of core memory.


I actually have 100GB of RAM in my desktop machine! It's great, but my usage is pretty niche. I use it as drive space to hold large ML datasets for super fast access.

I think for most use cases ssd is fast enough though.


From the song:

> It does all my work without me even askin'

It sounds like Weird Al had the same use case!


I think it's just RAM reaching the comfortable level, like other things did.

Back when I had a 386 DX 40 MHz with 4MB RAM and a 170MB disk, everything was at a premium. Drawing a game at a decent framerate at 320x200 required careful coding. RAM was always scarce. That disk space ran out in no time at all, and even faster once CD drives showed up.

I remember spending an inordinate time on squeezing out more disk space, and using stuff like DoubleSpace to make more room.

Today I've got a 2TB SSD and that's plenty. Once in a while I notice I've got 500GB worth of builds lying around, do some cleaning, and problem solved for the next 6 months.

I could get more storage but it'd be superfluous, it'd just allow for accumulating more junk before I need a cleaning.

RAM is similar, at some point it ceases to be constraining. 16GB is an okay amount to have unless you run virtual machines, or compile large projects using 32 cores at once (had to upgrade to 64GB for that).


16G is just enough that I only get two or three OOM kills a day. So, it's pathetically low for my usage, but I can't upgrade because it's all soldered on now! 64G or 128G seems like it would be enough to not run into problems.


What are you doing where you're having OOM kills? I think the only time that's ever happened to me on a desktop computer (or laptop) was when I accidentally generated an enormous mesh in Blender


Blender fun : Select All -> Subdivide -> Subdivide -> Subdivide ... wait where did all my memory go!

I have learnt the ways of vertices. Not great at making the model I want, but getting there.


As someone on a desktop with 64GB, firefox still manages to slowly rise in usage up to around 40% of RAM, occasionally causing oom issues.


I wonder if you have a runaway extension... I haven't seen this type of issue from Firefox in a while.


I also have 64GB on my home PC and Firefox tends to get into bad states where it uses up a lot of RAM/CPU too. Restarting it usually fixes things (with saved tabs so I don't lose too much state).

But outside of bugs I can see why we're not at 100GB - even with a PopOS VM soaking up 8GB and running Firefox for at least a day or two with maybe 30 tabs, I'm only at 21GB used. Most of that is Firefox and Edge.


yeah, it's definitely a cache/extension thing; usually when it gets near the edge I also restart. I do wish there were a way to set a max cache size for firefox.


FF using 1GB here, after several hours. You must have a leak somewhere.


How many windows/tabs is a much more relevant question in my experience.


FF doesn't load old tabs so you can't just count them if they've been carried over since a restart.


true, active tabs matter, but people that have hundreds of tabs open and only rarely open new ones or touch old ones are probably also quite rare.


I open and close one or two times per day. Might have a dozen+ open at once, not a huge number.


I usually have 800 to 1000 tabs open and Firefox only uses a few GB.


128 GiB here. Still run into OOMs occasionally.


> Drawing a game at a decent framerate at 320x200 required careful coding.

320x200 is 64,000 pixels.

If you want to maintain 20 fps, then you have to render 1,280,000 pixels per second. At 40 MHz, that's 31.25 clock cycles per pixel. And the IPC of a 386 was pretty awful.

That's also not including any CPU time for game logic.


Hint: you don't replace all pixels at once.


Most PC games of the VGA DOS era did exactly that, though.

But, well, a lot can be done in 30 cycles. If it's a 2D game, then you're mostly blitting sprites. If there's no need for translucency, each row can be reduced to a memcpy (i.e. probably REP MOVSD).
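
A sketch of that row-at-a-time idea (Python standing in for the assembly, with a byte-slice copy playing the role of REP MOVSD; the 320x200, one-byte-per-pixel layout is an assumption modelled on VGA mode 13h):

    # Translucency-free sprite blit: each sprite row is one bulk copy into the framebuffer.
    WIDTH, HEIGHT = 320, 200
    framebuffer = bytearray(WIDTH * HEIGHT)  # one byte per pixel, mode-13h style

    def blit(sprite_rows, x, y):
        """sprite_rows: equal-length bytes objects, one per sprite row."""
        w = len(sprite_rows[0])
        for row, pixels in enumerate(sprite_rows):
            offset = (y + row) * WIDTH + x
            framebuffer[offset:offset + w] = pixels  # the slice copy is the "REP MOVSD"

    # e.g. draw a 16x16 solid sprite (palette index 15) at (100, 50)
    blit([bytes([15]) * 16 for _ in range(16)], 100, 50)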

Something like Doom had to do a lot more tricks to be fast enough. Though even then it still repainted all the pixels.


You might enjoy the Fridman interview of Carmack. He talks about that era’s repaint strategies.


For most users that is true. I think there were several applications that drove the demand for more memory, then the 32bit -> 64bit transition drove it further but now for most users 16GB is plenty.


16 GB RAM is above average. I've just opened BestBuy (US, WA, and I'm not logged in so it picked some store in Seattle - YMMV), went to the "All Laptops" section (no filters of any kind) and here's what I get on the first page: 16, 8, 12, 8, 12, 4, 4, 4, 4, 8, 8, 8, 8, 16, 16, 4, 4, 8. Median value is obviously 8 and mean/average is 8.4.
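
For what it's worth, those figures check out (a quick sanity check in Python on the numbers above):

    from statistics import mean, median

    ram_gb = [16, 8, 12, 8, 12, 4, 4, 4, 4, 8, 8, 8, 8, 16, 16, 4, 4, 8]
    print(median(ram_gb))          # 8.0
    print(round(mean(ram_gb), 1))  # 8.4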

I'd say that's about enough to comfortably use a browser with a few tabs on an otherwise pristine machine with nothing unusual running in the background (and I'm not sure about the memory requirements of all the typically preinstalled crapware). Start opening more tabs or install some apps and 8GB RAM is going to run out real quick.

And it goes as low as 4 - which is a bad joke. That's appropriate only for quite special low-memory uses (like a thin client, preferably based on a special low-resource GNU/Linux distro) or "I'll install my own RAM anyway so I don't care what comes stock" scenario.


I agree that 4 GiB is too low for browsers these days (and has been for years), but that is only because the web is so bloated. But 4 GiB would also be a waste on any kind of thin client. Plenty of local applications should run fine on that with the appropriate OS.


Low end chromebooks have 4gb, but they pretty much count as thin clients. :)


I think it's really quite the opposite.

Compute resource consumption is like a gas, it expands to fill whatever container you give it.

Things didn't reach a comfortable level, Moore's Law just slowed down a lot, so bloat slowed at pace. When developers can get a machine with twice as much resources every year and a half, things feel uncomfortable real quick for everybody else. When developers can't ... things stop being so uncomfortable for everyone.

However, there is a certain amount of physical reality that has capped needs. Audio and video have limits to perceptual differences; with a given codec (and codecs are getting better as well) there is a maximum bitrate beyond which a human won't be able to experience an improvement. Lots of arguing about where exactly, but the limit exists, and so the need for storage/compute/memory to handle media has a max and we've hit that.


RAM prices haven't dropped as fast as other parts, and images don't ever really get much "bigger" (this drove a lot of early memory jumps, because as monitor sizes grew, so did the RAM necessary to hold the video image, and image file sizes also grew to "look good" - the last jump we had here was retina).

The other dirty secret is home computers are still single-tasking machines. They may have many programs running, but the user is doing a single task.

Server RAM has continued to grow each and every year.


> The other dirty secret is home computers are still single-tasking machines. They may have many programs running, but the user is doing a single task.

It isn't often you really want it, but it is nice to have when you do. I am still bitter at Intel for mainstream 4-core'ing it for years...


Pretty sure 4 gigs of RAM was common at the consumer level then, but I take your point. I think the average consumer became rather less affected by the benefits of increased RAM somewhere around 8GB, and that let manufacturers get away with continuing to turn out machines that size. Specialist software and common OSes carried on getting better at using more if you had more demanding tasks to do, which probably describes quite a lot of the people here, but not a high % of mass-market computer buyers.

Honestly I think the pace of advance has left me behind too now, as the pool of jobs that really need the next increment of "more" goes down. There might be a few tasks that can readily consume more and more silicon for the foreseeable, but more and more tasks will become "better doesn't matter". (someone's going to butt in and say their workload needs way more. Preemptively, I am happy for you. Maybe even a little jealous and certainly interested to hear. But not all cool problems have massive data)


In 1995, I remember buying a Pentium PC with 32 megs of RAM. Gigabytes of RAM wasn't common until the early 2000's!


>Pretty sure 4 gig RAM was common consumer level then

I wish. Y2k3, AMD Athlon, 256MB of RAM. That could run the whole KDE, Kopete, Konqueror and Amarok DE combo in a breeze.


In 1992, I had an LCII with 10 MB RAM and that was considered extravagant. 24MB in 1994 was considered high end.

I bought a 32MB SIMM to attach to my DX/2-66 DOS Compatibility Card in my PowerMac 6100/60 for $300 in 1996.

Even as late as 2008, having 4GB RAM was not common.


DRAM hit a scaling wall about a decade ago. Capacity per dollar has not been improving significantly, certainly not at the exponential rate it had been for the prior 50 years or so.

https://thememoryguy.com/dram-prices-hit-historic-low/

As to why, search for words like "dram scaling challenge". I'm no expert but I believe capacitor (lack of) scaling is the biggest problem. While transistors and interconnections continued shrinking, capacitors suitable for use in DRAM circuits ran out of steam around 2xnm.


8gigs is "enough" for most people.

Even modern games don't usually use more than, e.g., 16 gigs

Developers are a whole different story, that's why it's not unusual to find business-class laptops with 64+ gigs of ram (just for the developer to run the development environment, usually consisting of multiple virtual machines).


It was pretty difficult to use 64GB of RAM on my old desktop. 95% of my usage was Firefox and the occasional game. The only things that actually utilized that RAM were After Effects and Cinema 4D, which I only use as a hobby. I felt kinda dumb buying that much RAM up until I got into AE and Cinema 4D because most of it just sat there unused.


Can confirm my Mac has 64.


To be fair, CPUs haven't improved a ridiculous amount either.

10 years ago, mainstream computers were being built with i5-2500ks in them. Now, for a similar price point you might be looking at a Ryzen 5 5600x. User Benchmark puts this at a 50-60% increase in effective 1/2/4 core workloads, and a 150-160% increase in 8 core workloads.

Compared to the changes in SSDs (64GB/128GB SATA3 being mainstream then, compared to 1TB NVMe now) or GPUs (Can an HD6850 and RTX 3060 even be compared!?), it's pretty meagre!


https://cpu.userbenchmark.com/Compare/Intel-Core-i5-2500K-vs... is the comparison you’re referring to.

I recall hearing a few years back that User Benchmark was wildly unreliable where AMD was involved, presenting figures that made them look much worse than they actually were. No idea of the present situation. I also vaguely recall the “effective speed” rating being ridiculed (unsure if this is connected with the AMD stuff or not), and +25% does seem rather ridiculous given that its average scores are all over +50%, quite apart from things like the memory speed (the i5-2500K supported DDR3 at up to 1333MHz, the 5600X DDR4 at up to 3200MHz).

An alternative comparison comes from PassMark, who I think are generally regarded as more reliable. https://www.cpubenchmark.net/compare/Intel-i5-2500K-vs-AMD-R... presents a much more favourable view of the 5600X: twice as fast single-threaded, and over 5.3× as fast for the multi-threaded CPU Mark.


In my experience, UserBenchmark will only show exactly one review per part, usually written early in the lifecycle, and never updated. Sometimes this is written by random users (older parts), sometimes by staff. All reviews are basically trash, especially staff reviews.

Also, data fields like 64-Core Perf were made less prominent on Part Lists and Benchmark Result pages around the time Zen+ and Zen 2 All-Core outperformed comparable Intel parts. 1-Core Perf and 8-Core Perf were prioritized on highly visible pages, putting Intel on top of the default filtered list of CPUs.

However, the dataset produced by the millions of benchmarks remains apparently unmolested, and all the data is still visible going back a decade or more, if not slightly obscured by checkboxes and layout changes. (https://www.userbenchmark.com/UserRun/1)


Thanks for that. I'd always just taken them at face value, since they seem authoritative.


Overall RAM in your systems HAS been steadily increasing, it's just targeted at the use cases where that matters, so it's primarily been in things like the GPU rather than system memory. In the past ten years GPUs have gone from ~3GB[1] to now 24GB[2] in high-end consumer cards.

Another factor is speed: in that ten-year span we went from DDR3 to DDR5, moving from roughly 17,000 to 57,600 MB/s of peak module bandwidth. SSDs are much more common, meaning it's easier to keep RAM full of only the things you need at the moment and drop what you don't need.

In terms of RAM density increasing, I think it's been more driven by demand than anything and there simply isn't a serious demand out there except in the very high end server space. Even compute is more driven by GPU than system memory, so you're mostly talking about companies that want to run huge in-memory databases, versus in the past when the entire industry has been starving for more memory density. The story has been speed matters more.

I'd also suggest that improvements in compression and hardware-assisted decompression for video streams have made a difference, as whatever is in memory on your average consumer device is smaller.

Coupling this with efficiency improvements of things like DirectStorage where the data fed into the GPU doesn't even need to touch RAM, the requirement to have high system memory available will be further lessened for gaming or compute workloads going forward.

[1]: https://www.anandtech.com/bench/GPU12/372

[2]: https://www.nvidia.com/en-us/geforce/graphics-cards/30-serie...


Worse, it's now all soldered in, so we can't even upgrade it.

Like - fuck. this. shit.

I will take it in a heartbeat (we'd all better!): whatever slight decrease in performance and slightly larger design comes from the RAM not being soldered, if it means I can actually upgrade it later on after my purchase.

Not just because I want continued value out of my purchase - and that's not even getting started on the amount of e-waste that must be created by this selfish, harmful, wasteful pattern.

Think about all the MacBook Airs out there stuck with 4GB of RAM and practically useless 128GB SSDs - machines that could still be useful if Apple hadn't intentionally crippled them.

In terms of computer hardware worth purchasing, we've certainly gone very far backward in the last ten years.

I really don't get the point - beyond greed - of this whole soldering garbage. No performance or size benefit is worth not being able to keep reaping value from my investment a couple of years down the road.

Upgradeability not only creates seriously less waste, it also nurtures a healthy, positive attitude of valuing our hardware instead of seeing it as a throwaway, replaceable thing.

I truly hope environmental laws make it illegal at some point. The EU seems to be good with that stuff. The soldering issue accomplishes literally nothing beyond nurturing, continuing, and encouraging a selfish culture of waste that only gets worse the more and more accepted we allow it to become.

We've only got one planet. It's way - way - WAY - past time we started acting like it.

Ten years ago, the state of hardware was far more sustainable than it is now.

For. Fucking. Shame.


I'm still daily driving my 2013 MacBook Air with a 128GB SSD and, well, 8GB of RAM. Hundreds of browser tabs, Photoshop, KiCad and Illustrator, often at the same time. Never reboot, always sleep. Runs Catalina. Best machine ever. [edit] Meanwhile I've gone through many Eee PCs, several ThinkPads and two desktop PCs in that period [edit2: those were probably older than 10 years though ;)]


I don't really understand how you do this. Even running Arch Linux with clamd and a tiny window manager takes ~2-4 GB of RAM to do effectively "nothing".

So, of course, macOS takes ~8GB of RAM to do effectively nothing, and it's about the same for Windows.

Unless you're running something like Alpine Linux these days, you're going to be eating a ton of RAM and cycles for surveillance, telemetry, and other silly 'features' for these companies.


I've used those MacBook Airs before and those weak dual-core CPUs really don't have much beef to them, and the screens are these 1440x900 TN panels (not sure about the vertical resolution) which were low-spec even for the time.

I don’t see upgradability being essential to longevity. I think you can just buy everything you need for the next decade on day one.

The soldering uses significantly less power.


>> I don’t see upgradability being essential to longevity. I think you can just buy everything you need for the next decade on day one.

Wow, that statement reeks of so much privilege that it's like you forgot this isn't financially feasible for a lot of people, and that upgrading down the line is what let someone afford a lower-spec computer at the time.

Like - you do get that not everyone's wealthy af, yeah? And that companies intentionally bloat the cost of first-party memory and stuff, so that if you're not insanely privileged - like I guess you are - then, no, it doesn't even make sense.

All this does is steal from the poor to give to the rich.


That’s a pretty wildly inaccurate guess given I bought that MacBook Air when I was in poverty. It was my life savings at the time.

It's a matter of efficiency: somebody pays for all those RAM slots to get made and for the electricity to power them. Now that computer memory is growing generation over generation at a slower rate, and now that other sources of power waste have been eliminated, I reckon there's less and less reason to go with slots.

It is a shame that this allows vendors to charge non-market prices for products, but the problem I have is that this has become people's main argument for preserving slotted memory: an objectively inferior technology for mobile devices that we keep so we can pay market prices for memory. Realistically though, slotted memory is doomed if that's the only argument for it, since manufacturers have every incentive to stop offering slots. Even if they wanted to offer customers competitive market prices for memory, there's little reason they couldn't do that with soldered memory.


When I bought my laptop this was exactly my thinking, so I immediately put 128GB aftermarket RAM into it.


I remember the old setup - buy or build a PC, and a year or two later max the RAM (for significantly less now) and get another year or two out of it.

Then for awhile you could use SSDs to do the same thing.

But now it’s all laptops as far as the eye can see, with soldered RAM.


I have 64G of RAM in my framework laptop with room for 64G more. It's not all bad. (The state of mobile phone storage (not to mention removable batteries) has gotten significantly worse over time. Heck, they even dropped the 3.5mm headphone jack which is honestly fucking insane.)


Framework is the only laptop I will consider purchasing in the future, for sure.


I think RAM has been increasing, but for the GPU rather than the CPU. Also, I think Microsoft isn't quite as bad as Apple at memory usage, so it still isn't the case that a large number of people really need more than 8GB, and for anyone who plays games it's better to have the extra memory in the GPU.

There isn't all that much swapping needed with 8GB, but most CPU memory tends to be used as disk cache, so it still helps to have faster disks.

I think of the computers that started shipping with 8GB of RAM over a decade ago as the "modern era of computing", and those systems are still fine now (at least with an SSD) for a wide range of casual use, other than software that explicitly requires newer stuff (mostly GPU, sometimes virtualization or vector instructions). My sense is that most hardware changes since then have been in the GPU and SSDs.


This is where it becomes apparent that the way I use my machine is very different to some folks on here.

It is a Lenovo T400 with 4GB of RAM. In order to maximise the lifespan of the SSD, I knocked out the swap partition. So that is 4GB, with no option to move beyond that.

That said, in daily use I have never seen my usage go anywhere near 3GB, but I suspect that is just because I am very frugal with my browsing and keep cleaning up unused tabs.


Put Windows/MacOS on it and you'll use 3 GB just booting up :p.


No background or industry specific knowledge, but I'd venture to guess smart phones/mobile computing added a lot to the demand for RAM and outpaced increases in manufacturing.

I'd guess now that the market for smartphones is pretty mature we should start seeing further RAM increases in the coming years.


> I'd guess the problem is in scaling RAM density itself

Yeah, it seems[1] that's the issue. I wrote some more about it in my other comment[2].

[1]: https://www.allaboutcircuits.com/news/micron-unveils-1a-dram...

[2]: https://news.ycombinator.com/item?id=32758022


I would guess it’s because the average user just doesn’t use enough ram for it to matter. My current desktop built in 2014 has 64GB and it seems like I never come close to using it all even running multiple virtual machines and things like that.


> Interesting point that RAM has basically not moved up in consumer computers in 10+ years... wonder why.

Smartphones gradually became the limiting factor


I think a big piece that is ignored is how much normal compute/memory usage has been shifted to the Cloud.

I no longer need to have local resources to do amazing things. I don't need a 10GB+ GPU in my phone to be able to hit some website and use AI to generate images. I have fast web/email search capabilities that don't require me to have a local index of all of it. I can spin up a remote compute cluster with as much RAM as I need if I want some heavy lifting done, and then throw it all away when I'm finished.

"Back in the day" we would try to build for all that we could perceive we would do in the next 5 years, and maybe upgrade memory half way through that cycle if things changed. We ran a lot of things locally, and would close apps if we needed to open something needy. I think also systems have gotten a lot better at sharing and doing memory swapping (see comments about SSD helping here). Back in the old days, if you ran out of memory, the app would just not open at all, or crash.


I need >16GB of RAM to reasonably run Chrome on a personal computer.


I just did a test, checking RAM usage before/after closing Chrome (which has 8 tabs open, three of which are Google apps and three of which are Jira, so pretty heavyweight). It's using 2.8GB.
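
If anyone wants to reproduce that kind of check, here's a rough Python sketch (assuming the psutil package is installed; process names vary by platform, and this is only an approximation):

    import psutil

    # Rough per-browser memory check: sum the resident set size (RSS) of every
    # Chrome process. Caveat: RSS double-counts pages shared between Chrome's
    # renderer processes, so this somewhat overstates the "real" cost.
    total = 0
    for p in psutil.process_iter(['name', 'memory_info']):
        name = (p.info['name'] or '').lower()
        if 'chrome' in name and p.info['memory_info'] is not None:
            total += p.info['memory_info'].rss

    print(f"Chrome processes: {total / 1024**3:.2f} GiB RSS")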


Always reserve about 2G for macOS/Windows and about 1G for a desktop variant of linux.

For example "WindowServer" on my Mac is using 1.22G at the moment by itself.

For Chrome tabs:

95MB static cost per tab; some sites load a lot of data: console.cloud.google.com takes 1GiB of ram for me; LinkedIn takes about 700MiB

Jira, Confluence, Google Cloud Console, Slack and Gmail on my machine consume 4GiB when running simultaneously.

That's before I have things like teams, onedrive/dropbox, docker or an IDE.

I don't think 16G is too little, but I could see how there's a potential there to run out quite easily.


4.4 GB currently for 18 tabs here. Just a few Github and documentation pages open. Nothing heavy.

PS: Then I opened another 3 tabs (of YT videos) and it jumped to 5.2 GB.


Chrome and other modern browsers tune their memory usage taking the current memory pressure into account. If you _have_ a lot of RAM available, why not use it? On a 4GB machine, Chrome will _not_ use 5.2 GB for 3 YT videos and a bit of GitHub.
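
I don't know Chrome's actual heuristics, but as a toy Python sketch of what pressure-aware sizing looks like in general (assumes psutil; the 25% headroom and 512 MiB floor are numbers made up for the example):

    import psutil

    # Toy illustration only, not Chrome's real logic: size an in-process cache
    # off what the OS says is currently available, leaving some headroom.
    def pick_cache_budget_bytes(headroom_fraction=0.25, floor=512 * 1024**2):
        available = psutil.virtual_memory().available
        budget = int(available * (1 - headroom_fraction))
        return max(budget, floor)   # never go below the (arbitrary) floor

    print(f"cache budget: {pick_cache_budget_bytes() / 1024**2:.0f} MiB")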


How is priority of memory usage managed between 10 different apps using that kind of allocation policy?

I have never seen Chrome start to use much less memory when memory pressure becomes high, and I have been monitoring it many times. There is some GC-like behavior releasing a small amount of memory from time to time, but nothing really significant when the system starts running out of available physical memory.


I don't think YouTube videos are really representative of web memory usage, because a significant amount of the memory they take up will be the buffered video content itself


I somewhat jokingly built out my gaming PC with 64 GB of RAM with Chrome as my justification... unfortunately, well before it reaches even 16 GB of RAM usage it becomes fairly unstable, eventually reporting that it's out of memory despite less than 50% system memory utilization.


As someone prone to having 3-400 tabs open at once for months on end, no you don't.

I'm at the point of wanting 32-64gb of memory these days, but only because I like messing with huge datasets.


I'm sorry, but you don't get to tell me what I need to comfortably run a piece of software on my computer. Despite whatever utilization metrics are claimed, Chrome runs like a wounded animal whenever I get into the territory of a dozen windows with approximately a dozen tabs each. If I try to do something unimaginable like open a Google sheet and a Meet call at the same time on my work provided laptop equipped with 16 GB of RAM, it's a disaster.


Then maybe it's something other than memory. Despite having a bunch of extensions running, the only problem I have ever had with 3-400 open tabs is the mental effort of managing them.

I'm honestly perplexed I don't have more problems as the machine I do most of my reading/browsing on is more than a decade old and doesn't even have a GPU. Plus I nearly always have 2 messaging apps, a pdf viewer, and 1-3 other browsers running simultaneously, maybe a logging app too. All running on Windows 10, it's not some turbocharged Gentoo installation or anything.


- Use uBlock Origin

- Add the "--light" parameter to your desktop shortcut/script/launcher.


I work in advertising, so I think using uBlock Origin is hypocritical (and it seems it's not permitted by my employer's security policy).

I can't find documentation for any "--light" flag, so it's unclear what that does / if it still works. But I appreciate the suggestions.


I'm curious: what happens if you try running Chrome's Task Manager (Shift-Escape on Windows; idk what you're using)? Because I've noticed things like Gmail are pretty well behaved and just sit there, whereas Twitter takes up nearly twice as much memory and grabs a bunch of CPU time every 20 seconds. If you're not running an ad blocker, I would expect any site with heavy ads/tracking to impact performance.

> I work in advertising, so I think using uBlock Origin is hypocritical

I sincerely think that your industry makes the internet way worse than it needs to be.


Twitter back in the day had a web page (m.twitter.com) which worked everywhere, even under Links2 with the graphical UI.

If the reason is "the user can't be tracked as well without a JS Big Brother behemoth", then they don't understand that cookies can be used for that, or that they could just parse the user's tweets and preferences.


Well, your industry is generating the same problem you're ranting about. Ads will bring any browser to a crawl.


Your computer is broken.


My X1 Carbon with 8 i7 cores isn't broken. Rather, the internet browser I use largely for work has issues managing a large collection of tabs that includes a few heavy web apps (Gmail, Meet, Sheets and Play Music).


I suspect your work is running lots of tracking software. Check the Chrome Task Manager and see if it is that?


It's definitely broken if your 16 GB RAM can't handle Chrome.


How many tabs do you have open? Typically that's only necessary if you like have 100+ tabs open or are running many demanding web apps at the same time.


Unless you have some extreme circumstances, like a plugin that's leaking memory or 100s of tabs, I really really doubt that.


>I no longer need to have local resources to do amazing things. I don't need an 10GB+ GPU in my phone to be able to hit some website and use AI to generate images.

Perhaps, but most people don't use the Cloud for anything fancy; they use it to run SaaS apps like email, calendars, note applications, chat, and so on, which could be far better served by local apps that not only run better and faster, but take fewer resources than they do running on the Cloud...

Like Slack (be it electron "app" or browser tab) taking 100x the resources to do roughly what ICQ did 20 years ago...


Yes but what Slack does and ICQ didn't is justify a monthly subscription, so it's an improvement you see.


Yes - you can effortlessly lease a 128G RAM cloud VM today if you need one. I distinctly remember that in 2010 an 8G server was a "big box", whereas in 2013 having 32G became commonplace. That's the RAM saturation point.


From a professional perspective, I completely agree.

I used to run into "not enough RAM" situations frequently and convinced my company to splurge 4K EUR on a Macbook Pro with 64 GBs of RAM. I'm very happy with it.

I also had to block several planned purchases where somebody unrelated to the work my team does decided to order MacBooks with 8GBs of RAM for new team members. It's 2022 my dude, the 2000s called and they want their 4*2GB RAM kits back.

From a consumer perspective on the other hand, I feel that current machines with 8GB of RAM run typical software (browsers and whatnot) well enough.


If they are unrelated to the work your team does, how can you be sure they really need more than 8GB of unified memory?

I'm on an 8GB M1 MacBook Air and really don't notice it swapping often, unless I'm running something heavy in addition to browser + terminal + IntelliJ IDEA.


To me it sounds like GP was implying someone not on his team was trying to purchase the under-specced laptops for his team.


Correct, and sorry for being unclear: somebody unrelated was trying to buy under-specced hardware for new members of my team in order to save money.


I've managed to soft-lock 16GB machines with inefficient libraries I needed to use to pull data, whereas a 32GB machine would have just barely survived.

I feel the real use of high amounts of memory is to run bad software.


The M1 is absurdly better than Intel Macs at swapping, I think because of the giant caches. I never have memory issues on my 8GB M1 Mac Mini, but my work 16GB Intel MacBook runs into memory slowdowns as soon as it launches Slack, a browser, and an IDE at the same time.


I have a 16GB M1 MBP. I can easily get it to slow to a crawl if I'm not careful. This is with IntelliJ IDEA and other tools I use regularly. It's the worst thing about MacBooks: limited RAM.


Hahah, I'm glad I got the 16GB M1 MacBook Air because I can run Stable Diffusion on it! I'm having so much fun generating images for free to my heart's content.


Image or video editing?


8GB ram is not enough to run any kind of modern web browser


That's demonstrably false - I can run Safari or Chrome on my 8GB M1 MacBook Air 24/7 without any issues.


In which universe? I can run up to 5 tabs just fine on an Atom netbook with 1GB and a hosts blocking file.

With 8GB on the Intel NUC I own as a desktop (Alpine Linux, XFCE) and uBO, I can open dozens of tabs without blinking.


Currently running Spotify + VSCode + Chrome + Slack on 8GB. That's 4 browsers, and it's totally fine.


> But memory has felt like an exception to Moore’s Law for a while, at least in practice.

That's because it is. Memory is failing to scale. That's why there's so much investment in alternative memory technologies, including why Intel sunk so much money into 3D XPoint despite the losses. But the market is brutally optimized, hence the difficulty cracking it.


SRAM cells need 6 transistors. DRAM cells need a transistor and a capacitor.

Six transistors aren't dense, and neither are capacitors in integrated circuits. NAND, by comparison, needs only one (floating-gate) transistor per cell.

Samsung is working on 3D-stacked DRAM cells: https://www.i-micronews.com/samsung-electronics-gearing-up-t...


Commercialization of large machine learning models might push the RAM game for the end user up in the not-too-distant future - for unified-memory machines like the current Apple chips, that is, and otherwise the memory on consumer GPUs for the enthusiasts.

Last week, several posts made it to the HN front page about Stable Diffusion forks that reduce memory usage, making it possible to generate 512x512 images with < 10 GB of GPU memory, at a tradeoff in computing speed. Trying to go beyond HD resolution (still far from phone photo resolution) will still blow out your top-of-the-line Nvidia consumer GPU.

When approaches like these start hitting your favorite image editing tool, you'll want to get that 256 GB RAM iPad for your mom. Otherwise, you'll have to deal with her waiting minutes to give the family cat a festive hat in last year's Christmas picture.


"Case took the pink handset from its cradle and punched a Hongkong number from memory. He let it ring five times, then hung up. His buyer for the three megabytes of hot RAM in the Hitachi wasn't taking calls."

- Neuromancer, by William Gibson, 1984


I bought a Mac Studio recently with 64gb of RAM. I adore it, but I am kind of missing the portability of a laptop.

I've been watching Facebook Marketplace recently for a cheap Mac laptop, and I was surprised to find the place is absolutely flooded with near-new MacBook Pros with 8GB of RAM. It doesn't make any sense; RAM is the worst place to cheap out - it's not upgradeable and makes a huge performance difference.

I suspect it might be self-selecting. The reason there are so many on Facebook Marketplace is that the owners feel they need an upgrade. Do they know the reason for the lackluster performance is the lack of RAM? Maybe not; they just know the machine runs poorly and are going to upgrade to a newer one - with 8GB of RAM.

By contrast, the 11-year-old iMac I was upgrading from has 24GB of RAM and still held up fine as my daily driver; the lack of software upgrades was what pushed me to upgrade.


Same, I have a 2nd-generation Intel i5 desktop at home. Since I upgraded to an SSD and bumped the RAM from 8 to 24GB, it has gained years of new life. Perfectly capable daily driver.


I bought a MacBook with 64gb ram in 2019 and it’s a beast. So much so that I haven’t felt the need to upgrade to Apple silicon yet.


Huh, that's interesting! I had a 2019 MBP with only 16gb of RAM - an amount I've been comfortable with on other computers prior to that - and it was UNUSABLE. Just MISERABLY slow. Upgraded to Apple Silicon with the same amount of RAM and it feels unstoppable, just always running at 150% of expected speed.

(I would have gotten more RAM in the 2019 laptop, but it was during that brief window where new laptop shipping times were measured in months, and I was happy to have ANY work laptop and not have to be using my old personal one, an old Air.)


> The general public isn’t asking for a hundred gigs, but I’d love to see the baseline rise up a bit. It doesn’t feel like we’ve budged meaningfully here for years. Or is that just me?

While the baseline of 8GB hasn't risen, I do think the floor of what we'd consider "unnecessary" has risen. I remember in 2018 I was building a new desktop and I spent a pretty penny on 32GB of RAM; folks on r/buildapc said that was a waste. Nowadays I feel like I've seen a lot of higher-end builds that feature 32GB or even 64GB.

Just my 2c; I don't have stats to back this up or anything...


My primary workstation has 128GB and it is for sure unnecessary. Even with multiple projects running, each of which spins up a dozen containers in k3s, plus a background game of RimWorld and all the Electron apps that life requires these days, I rarely ever breach 64GB, much less 100GB.

The only real use is writing horrifying malloc experiments and having a couple extra seconds to kill them before OOMing.
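
For the curious, here's a capped version of that kind of experiment in Python; the cap keeps it well short of an actual OOM, so raise it at your own risk:

    # Deliberately wasteful allocation loop, capped so it stops well short of an
    # actual OOM. Each chunk is 1 GiB of zero-filled (i.e. touched) memory, so
    # it really is resident, not just reserved.
    CHUNK = 1024**3   # 1 GiB
    CAP_GIB = 4       # safety cap for the sketch; raise it to live dangerously

    hog = []
    for _ in range(CAP_GIB):
        hog.append(bytearray(CHUNK))
        print(f"holding {len(hog)} GiB")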


I work in video game development and all our workstations have 128GB of RAM - I consider that the bare minimum nowadays.


I routinely use this much RAM for work. And it's not malloc experiments lol. I need 200G to 500G for my research work. Most of our systems have 384G and this is just enough. If only I could have a laptop with that much...


Containers are relatively resource-efficient; if you need to run a bunch of actual VMs for testing (eg, Windows), you can easily find ways to use 128GB.


32gb is totally worthwhile. I don't know if I need 32gb exactly, but I desperately needed more than 16gb.


At least on Linux, any unused RAM is going to be used for your buffer cache. Get as much memory as you can afford.
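
You can watch this happen with a quick look at /proc/meminfo - a Python sketch (Linux-only; field names assume a reasonably recent kernel):

    # Show how much "unused" RAM the kernel is actually using as cache.
    # Parses /proc/meminfo; values in the file are in kB.
    fields = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, value = line.split(':', 1)
            fields[key] = int(value.strip().split()[0])   # kB

    for key in ('MemTotal', 'MemFree', 'MemAvailable', 'Buffers', 'Cached'):
        print(f"{key:14s} {fields[key] / 1024**2:6.1f} GiB")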


I wonder if something like Vista is needed to move the needle on consumer RAM again. Pre-Vista, Windows required 64MB of RAM, and you could fudge it a bit even lower if you knew what you were doing. Vista _required_ 1GB of RAM, and recommended 2.

OEMs were selling 64MB desktops right up until Vista was released.

Today, Windows 11 requires 4GB of RAM. If Windows 12 (or whatever they're going to call the next Windows; they're not too good at counting) required the same-sized jump as between XP and Vista - 16x - it'd require 64GB of RAM.


Most of this is not historically accurate.

Vista required 512 MB. In practice, that probably sucked, but those were the paper specs.

XP might have required 64 MB on paper but it crawled and was practically useless on that spec (like Windows 95 was on 4 MB). Never saw anyone do it. A common spec in 2001 would be 256 MB or at an absolute minimum, 128.

Vista came out in 2007 and absolutely no one was selling 64 MB desktop computers at that point. 64 MB is 1997-1998 spec -- in 2007 it would commonly be 512 MB or 1 GB.

It is true, however, that Vista had astronomically high minimum specs for the time -- some machines sold at the time could just barely run it -- and that it probably drove a lot of upgrades.


XP could run on 128MB once you disabled a lot of services. On Windows 95, anything less than 8MB was a no-no.


Hm... I had a Celeron machine running XP ~2004-2007 that had 512MB of RAM, and it wasn't hard to run out. I eventually upgraded to 768MB but was still jealous of my friend running XP with 1GB.

Then I built a Vista machine in 2007 with 2GB to start, and it was clearly not enough, so I immediately filled the other 2 slots to go to 4GB.


> Today, Windows 11 requires 4GB of RAM

That's gotta be painful, though, unless you only ever use a single browser tab.


A bit of bullshit. uBlock Origin plus git://bitreich.org/privacy-haters - apply that config to either Firefox or Chrom*-based browsers. Under Windows you can set the env vars properly on the desktop shortcuts, as plain arguments for the exe. Seriously, I tried with a >10-year-old Celeron netbook and it was perfectly usable once I set up zram/zswap and uBlock Origin. Oh, and some tweaks in about:flags to force GL acceleration. OpenGL 2.1 capable, go figure, and yet browsing is snappy on the Celeron.


That may be the case but the question is, what big change to the end user could they pitch to justify such a leap in the requirements?


256GB of RAM has its uses. I'd rather go with that than having to fiddle with a 'distributed system' that requires another level of care and feeding.


Does it though? I made another comment but for home use I can't even max out 64 GBs.

The only thing I can think of that'd ever max out my RAM is some sort of training task (even though I'd expect to run out of VRAM first). But those are the kinds of tasks that do best on distributed systems, since you don't really need to babysit them: just spin one up, run your task, and tear it back down.


Google recommends 64GiB to build the Android kernel. That's a thing you could technically do at home. And if you want to do anything else at the same time, you're gonna need to go to 128.


Funny you should say that... my entire career has been built on embedded Android and I've built a lot of images from "scratch" (or as close as you get with random SoC supplier provided garbage tacked on)

The first time I built an AOSP image from scratch was on some dinky office workstation that had been freshly upgraded to a whopping 16GB, so you wouldn't come back the next morning to a random OOM.

These days I get to open a PR and some random monster of a machine on a build farm does the heavy lifting, but I can still say from a lot of experience that 64GB is truly more than plenty for Android OS builds and definitely won't be what keeps you from doing other stuff... IO and CPU usage might make it an interesting proposition, but not RAM.

When Google says 64GB it's for the whole machine: the build process will use a lot of it when configured properly, but not so much that you can't run anything else

(Also again, Android builds are a perfect example of where a remote machine pays off in spades. The spin up is such a small fraction of what's needed you don't have to start messing with containerization if you don't want to, and you can get access to some insanely large instance for a few hours then stop paying for it as soon as it's done.

It just seems unlikely to have a task that warrants that much RAM that isn't just about throwing resources at an otherwise "stateless" task that's a great match for cloud compute)


There are various analytic APIs/libraries that will map data files into memory when there is surplus RAM available. That can really speed up processes which would otherwise be IO bound.
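
A minimal sketch of that pattern using numpy's memmap (the file name and size here are made up for the example; in real use the file already exists):

    import numpy as np

    # Memory-map a large binary file instead of read()ing it all up front.
    # The OS pages data in on demand and keeps it in the page cache, so repeat
    # passes on a machine with spare RAM run at memory speed.
    np.arange(10_000_000, dtype=np.float32).tofile('samples.f32')   # demo data

    data = np.memmap('samples.f32', dtype=np.float32, mode='r')     # near-instant
    print(data.shape, float(data.mean()))   # pages fault in as they're touched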


Are we in the same thread?

> The general public isn’t asking for a hundred gigs, but I’d love to see the baseline rise up a bit. It doesn’t feel like we’ve budged meaningfully here for years. Or is that just me?

Then, less than three comments in, somehow we're justifying 256 GB of RAM?

256 GB of RAM can be useful in some cases, on some computers, in some places, but that's not really a meaningful inference. 1TB of RAM can be useful depending on the workload; any given number could be.

The question is can it be useful for anything even vaguely resembling personal computing, and the answer is for all intents and purposes: No. It's not.


Just because YOU do not have a use case, does not mean NO ONE has a use case.

Your experiences are not the indicative of everyones experiences.


You're getting upset and going all caps over your own misunderstanding....

> The general public isn’t asking for a hundred gigs, but I’d love to see the baseline rise up a bit. It doesn’t feel like we’ve budged meaningfully here for years. Or is that just me?

This was the comment that kicked off the thread. Some people felt 32 GBs was the new baseline, and then out of left field comes _256 GBs_

For any amount of RAM, someone somewhere will be able to use it. But that's the kind of deep observation I expect from a toddler.

If we're going past kiddie pool deep observations of plain fact, no, the baseline wouldn't be anywhere near 256 GBs of RAM based on how people use computers.

(And before you attack me for your own poor understanding of language again: People as in the human collective. "People don't need" is not the same as "no one needs".)


I'm replying to you and YOUR post. Capitalization for emphasis.

I was commenting on how you seemed to make yourself the arbiter of how much ram one could ever possibly use.


And me and "MY" post don't exist in a vacuum!

> you seemed to make yourself the arbiter

I didn't. At least, not to those of us who can deal with some flexibility in interpreting written communication, and use a little tool called context.

But then again, there are definitely people out there who need every. single. nuance. of the most basic statement spelled out for them, as if they're a biological GPT-3 endpoint (and this site certainly does feel like it's drowning in those people these days), but I don't write comments for them.

Instead I write comments for people who are interested in actual conversation over browbeating everyone in sight because they assumed the most useless interpretation of your statement was the correct one.

