Adventures of putting 16 GB RAM in a motherboard that doesn’t support it (2019) (downtowndougbrown.com)
419 points by walterbell on Dec 13, 2021 | 74 comments



My experience has always been that "not supported" means anywhere from "physically impossible" (e.g. because there just aren't that many address bits) to "we won't help you because we haven't tested it".
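To put rough numbers on the "not that many address bits" end of that range (just generic arithmetic, not about any particular chipset), a trivial C sketch:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Maximum physical memory addressable with N address bits. */
        for (unsigned bits = 32; bits <= 36; bits++) {
            uint64_t bytes = 1ULL << bits;
            printf("%u address bits -> %llu GiB max\n",
                   bits, (unsigned long long)(bytes >> 30));
        }
        return 0;
    }

So a controller that only decodes 33 physical address bits tops out at 8 GiB, no matter what DIMMs you install.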

Some quick searching shows that others have managed to get that CPU to use 32GB of RAM:

https://forums.overclockers.com.au/threads/p55-chipset-i5-75...

I thought he would go as far as patching the BIOS itself, which would make it a "permanent" fix. In fact one of the projects I haven't gotten around to finishing is to patch the memory init code for an old Atom processor embedded motherboard to make it recognise more combinations of RAM modules; analysis of the BIOS and leaked sources shows that it was stupidly written as "if(512MB in slot 1 and 512MB in slot 2) ... else if(1GB in slot 1 and empty slot 2) else if(1GB in slot 1 and 1GB in slot 2) ...", when the memory controller can actually be set up generally to work with many more different configurations.
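For illustration, here's roughly the shape of the two approaches in C — the struct, slot sizes, and function names are hypothetical, not taken from the actual BIOS or the leaked sources:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical per-slot sizes in MB, as read from the DIMMs' SPD data. */
    typedef struct { uint32_t slot0_mb, slot1_mb; } dimm_config;

    /* The hard-coded approach described above: only a fixed list of
     * combinations is recognised, everything else is "unsupported". */
    static bool init_memory_hardcoded(dimm_config c)
    {
        if (c.slot0_mb == 512  && c.slot1_mb == 512)  return true; /* 1 GB */
        if (c.slot0_mb == 1024 && c.slot1_mb == 0)    return true; /* 1 GB */
        if (c.slot0_mb == 1024 && c.slot1_mb == 1024) return true; /* 2 GB */
        return false; /* anything else fails, even if the controller could do it */
    }

    /* What a general init could do instead: derive the controller settings
     * from whatever geometry the SPD data reports. */
    static bool init_memory_general(dimm_config c)
    {
        uint32_t total_mb = c.slot0_mb + c.slot1_mb;
        if (total_mb == 0)
            return false;
        /* ...program row/column/bank/rank geometry per slot here... */
        return true;
    }

    int main(void)
    {
        dimm_config c = { 2048, 1024 };  /* a combination the hard-coded table misses */
        printf("hardcoded: %d, general: %d\n",
               init_memory_hardcoded(c), init_memory_general(c));
        return 0;
    }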


One end of that range should include 'you already have it but we're not telling you'.

I noticed from the type numbers that one of my computers had RAM chips with 2x the advertised capacity, and indeed, with some fiddling, I managed to unlock the second half.


I went to the local store to buy 4MB (4x1) for a computer I was building, but they accidentally gave me 16MB (4x4) for the same price. Then they went out of business; it turns out it was a money laundering scam, so I guess they didn't know or care exactly what items went out the door.


That's bizarre. RAM is relatively expensive and has long been a substantial part of the cost of a computer. It's not like CPU binning, where it really can be cheaper to sell you the faster processor and lock it to behave like the cheaper one.


Yep. And this was when RAM was a lot more expensive than it is today. I could not quite believe my luck until it worked.


I vaguely remember some Sinclair computers doing stuff like that because they used cheap RAM chips that weren't guaranteed to be error-free across the entire memory space (so basically binning), so they only used the "good" half, but a lot of the time the "bad" half was also fine?


"16 GB is not supported because the system has two slots and 8 GB sticks do not exist" ...yet.


and marketing didn't want some penny-pinching customer to fixate on HOW MUCH the system would cost if it actually had all that RAM.


Your definition's right in the sense of what "not supported" allows the user to do. But the term also means "if you try this and it doesn't work, we're not responsible," and that's the key thing.

Some companies (and programs, and programming languages, and so on) make it easy to do unsupported things. Some make it hard. None will accept liability for it.

Hardware manufacturers are on the hook for so many potential problems, defects, misconfigurations and misuses that it makes sense from their perspective to support actively only those things they can guarantee will always work. Not things which the hardware can theoretically do but which might not always be the best idea, things that aren't possible now but might be in ten years, etc.


I was looking to Hackintosh an older Dell desktop I had squirreled away, and in the process of gathering all the hardware, every piece of documentation I found insisted that it topped out at 4GB. This was a late-era Core 2, so that seemed completely nonsensical to me (I don't think I ever owned a Core 2 that had less than 8) and, wouldn't you know it, 8GB worked perfectly fine.

Funnily enough, like the article's author I also encountered a DSDT-related problem on that same system: if you were to dump the tables and recompile them with the standard Intel utility it straight up wouldn't work. Came to find out that there are apparently two compilers, one from Intel and another from MS and the MS one is super-lenient about accepting garbage. Eventually worked out that the logic in the stock tables was such that several features (HPET and sleep, iirc) just straight up don't work unless the OS identifies itself to ACPI as Vista (and not like Vista+, Vista exclusively). Such a pain.
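The real tables are written in ASL and use the _OSI method ("Windows 2006" is the string for Vista), but the effective gating logic looked roughly like this simplified C sketch — the helper names and the specific features gated are illustrative, not lifted from the actual DSDT:

    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>

    /* Stand-in for the tables' OS check; real DSDTs query _OSI with
     * version strings like "Windows 2006" (Vista) or "Windows 2009" (7). */
    static bool os_is_vista(const char *reported_os)
    {
        return strcmp(reported_os, "Windows 2006") == 0;
    }

    int main(void)
    {
        const char *reported = "Windows 2009";       /* anything newer than Vista */
        bool hpet_enabled  = os_is_vista(reported);  /* gated on Vista *exclusively* */
        bool sleep_enabled = os_is_vista(reported);

        printf("HPET: %d, sleep: %d\n", hpet_enabled, sleep_enabled);
        return 0;
    }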


There used to be people who would sell you memory for MacBooks or Fujitsu laptops that nearly doubled the official RAM. My impression was that they didn't support arbitrary N GB modules, but they did support a subset, and these guys just found a module that they did support.

For my Fujitsu there was memory soldered to the board so you couldn't quite double it, but you could get around half again as much memory as the manufacturer claimed was the max.


Although Apple said 8GB was the maximum a 2.4GHz Core2Duo Macbook Pro 13" mid-2010 could handle, the Intel chip inside could support 16GB.

https://eshop.macsales.com/memory/maxram


I have one of these machines. That 16GB RAM bump and an SSD has resulted in it still being usable, 12 years after I bought it.


Ditto! Resource heavy web apps used to make the laptop creak a bit but for 95% of tasks it was a solid machine.


Wow. How has the battery in your MacBook Pro not died after 12 years?


For me, I had to replace the battery, but just once so far, during a keyboard replacement.

That did result in losing the power button (which had a fingerprint mechanism in it).


This is true for many older Apple products. I have a similarly aged Mac Mini that officially only supports 8GB but took 16GB without problems.


I had some Dell Core 2 laptops that AFAIK were limited to 4GB by the Intel 945 chipset (well, effectively less, since it couldn't map anything above 4GB, so you couldn't access all of the physical memory). If you had a 965 chipset you could apparently do 8GB.


I ran into DSDT problems as well on my laptop running Linux. How did you figure it out? How do people reverse engineer this stuff? I managed to get all USB features working but this ACPI stuff eludes me.


Read the ACPI spec. Seriously. It's a commonly given answer especially in the Hackintosh scene, but I don't think there's a quicker or easier way than to actually understand what everything is about.

On the other hand, it's funny to think that Hackintosh probably was responsible for motivating a lot of people to learn about ACPI and other complex low-level hardware details.


The ACPI spec is that final frontier for me, the one I'm saving for my retirement-years reading leisure.

I just recently mastered Linux Intel DDIO for a 100Gbps network card. And this is after BellCore SONET/ATM.


I remember having to faff about with DSDT code in order to get the battery indicator to work when running OSX on VMware Player (which you had to patch)


This was shared on HN [0] when it was first written in 2019 and it was heavily discussed (817 points, 178 comments). As soon as I saw the title I knew what post this was; I recall it very vividly and was more than happy to upvote it and then sit down to read it again!

[0]: https://news.ycombinator.com/item?id=19573458


> In order for the document to help me, I clearly needed to find the four error parameters that used to be displayed with the blue screen on older versions of Windows. Windows 10 hides all the information by default, but I discovered that it’s possible to re-enable display of the extra error information by adding an entry to your registry.

It's still baffling how Microsoft completely broke blue screens by hiding any useful info.
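If memory serves, the registry entry the article refers to is a DWORD named DisplayParameters set to 1 under HKLM\SYSTEM\CurrentControlSet\Control\CrashControl — treat the exact value name as my recollection rather than gospel and check the article. A minimal Win32 C sketch that sets it (link against Advapi32.lib, run elevated):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Assumed value name from memory: DisplayParameters = 1 under
         * CrashControl re-enables the bugcheck parameters on the BSOD. */
        HKEY key;
        DWORD one = 1;
        LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                                "SYSTEM\\CurrentControlSet\\Control\\CrashControl",
                                0, KEY_SET_VALUE, &key);
        if (rc != ERROR_SUCCESS) { fprintf(stderr, "open failed: %ld\n", rc); return 1; }

        rc = RegSetValueExA(key, "DisplayParameters", 0, REG_DWORD,
                            (const BYTE *)&one, sizeof(one));
        RegCloseKey(key);
        return rc == ERROR_SUCCESS ? 0 : 1;
    }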


It's part of the deplorable ongoing effort by the IT industry as a whole to dumb everything down as much as possible.


Blue screens got a bad reputation because the people they were most often presented to had no idea how to consume the information. Presenting the right information to the right people is also important.


Everyone knows that a blue screen means that the OS crashed. Maybe they should be more helpful instead? Like, you know, actually show a human-readable description of the error, so the person who sees it knows where to start troubleshooting. Something like "kernel-mode driver for device X performed an invalid memory operation, please try updating it or report this to its developer" instead of "PAGE_FAULT_IN_NONPAGED_AREA".


I generally agree, but I don't think

> "kernel-mode driver for device X performed an invalid memory operation, please try updating it or report this to its developer"

is a good example of a user-friendly error message either.

My experience in writing software and supporting it for end users (even highly educated users) has taught me that a large percentage of people will stop reading if they encounter any technical words or indirect statements. Honestly, I feel like the "Your computer ran into a problem and needs to restart. We'll restart it for you" message that Microsoft settled on is probably about right for the lowest common denominator. It has no technical terms and it doesn't give the user an opportunity for decision paralysis.


People got more impatient over the years and now expect things just to work. I had a computer shop in the late 90s (my first and only business ever) and a lot of my clients were pretty helpful with their observations when they brought their machines in for me to fix.

It is not that people were smarter, of course, but they did expect to have to dirty their hands with their stuff from time to time. Nowadays, despite all of our protests to the contrary, stuff is way more reliable and easy to use, and people have gotten used to complicated stuff just working.


Agree 100%. The idea of "what a computer is" and "who a computer is for" has changed quite a bit. I sometimes forget how far computing has come until I see an old news segment:

https://www.youtube.com/watch?v=0KDdU0DCbJA


> People got more impatient over the years and now expect things just to work.

Except most of the time, things aren't aware of that.

Yes, today's operating systems are more reliable thanks to memory protection and better abstractions. But that's offset by insatiable product managers who absolutely have to keep updating a feature-complete product at any cost, with features and UI redesigns no one ever asked for.


But even if most people don't know how to consume it, it's a lot easier for a tech support rep to just ask for the numbers, rather than see if booting into safe mode or whatever will work well enough for them to walk the customer through manually adding registry entries (which just seems like a bad idea anyway), so they can get the extra debugging info.

Just leave it visible all the time, Microsoft! It won't hurt, and can certainly help!


I have religiously put debugging information into my error messages for the past decade. Often accompanied with a statement like "Please provide this information when reporting this error". I have supported thousands of users. I can count on one hand the number of times that a user has provided that debugging information to me when it was appropriate to do so. I am the only one who uses that debugging information, when I'm running my own tests.

In fact, sometimes I'll get support requests like "Hey I got this error 012345 what does that mean?" and the attached screenshot will show a message like: "Invalid Password, please type your password again (Error code: 012345)"

I absolutely understand the technical utility, but I really wouldn't be surprised if they have better overall support outcomes without it.


> Just leave it visible all the time, Microsoft! It won't hurt, and can certainly help!

It is not MS style. Not so long ago they were blaming sysadmins for 404 errors.


But the screenshot after enabling it just has 4 numbers in the top left corner in blue-on-blue text. I think most people would ignore those, since there's also a large plain-English error message.


Not sure if it's "IT industry as a whole", or Microsoft, because if I'm trying to do tech support for someone, I definitely want that extra information.


Fascinating; so that makes me wonder if the bluescreen code flow also tries to look up the registry value, or if the boot process somehow reads that registry value so early in the boot that by the time a bluescreen happens, the value is already available to it ... in a specific memory location or something?

It's my idea of hell having an error handler that tries to do fancy stuff and then the error handler dies and swallows the original error condition

Also, if you happen to still have the link to that regkey description, I bet this audience would benefit from your research


The missing contents are also in the system eventlog entry, but that presumes that you can reboot to see it.


I still use a 2012 MacBook, which was the last one where Apple graciously let you replace most of the parts yourself, including the RAM sticks, before they started soldering them to the logic board. According to official documentation this computer could only have been ordered with 2x2GB sticks and can only support an upgrade up to 2x4GB sticks, but here I am running 2x8GB sticks without having to do any fiddling, just plug and play. Unlike OP though, Windows 10 via Boot Camp is luckily able to pick up the 16GB just fine somehow.


I do wonder if that Mac Mini from 2011 I have but haven't used in a long time also secretly supports 16 GB. I upgraded it to 8 GB ages ago and the docs said that's as far as it goes.


Funny timing: I just upgraded a 2011 Mac Mini from 4GB to 16GB yesterday, and it works just fine. After searching around, I found a site (can't remember the name) that specifically calls out the actual amount of RAM that various Macs support, regardless of what Apple's official specs say.

(Had to crack it open because the old spinning-rust hard drive was starting to go bad, so I replaced it with an SSD, and figured I should upgrade the RAM while I had it open. I have Linux installed on it, and it's a nice media server box, running Jellyfin and a few other things. Only downside is that its support for HW video decode/encode is kinda limited -- no h265 -- due to its age.)


Not sure if this is what you're thinking of, but Other World Computing has found higher real RAM limits for various old Macs (this post is about the 2009 iMac I inherited): https://eshop.macsales.com/blog/17059-late-2009-core-i5-i7-i...


Look up the mid-2011 Mac mini models on EveryMac.com and it states that 16GB is supported in 2 x 8GB configurations.


I use my 2011 Mac Mini with 16GB. Been running it that way nearly since the day I bought it. It’s on its 3rd HD, a 120GB SSD. Wikipedia article mentioned it would work with 16GB so I upgraded it as soon as I could.


Same here! I'm bummed that Catalina, the last OS update to officially run on it, probably will hit EOL in 2022. I'll likely begrudgingly replace it, rather than look into workarounds to install newer versions, in an unsupported and slow way.


You could install Big Sur using some patch if you don't mind the built-in WiFi card no longer working, but I heard you can swap in the WiFi card from a later model and Big Sur will recognize it.


Why not just continue running Catalina? I am still on Mojave, no issues past EOL for me.


Had one of those notebooks, and did the same thing. It was my second or third MacBook, and I was used to buying the lesser-spec'd machine and then beefing it up with more memory and a better HD or SSD.


I've run 16GB in mine alongside that little dual-core CPU. It works better than it did in 2012 thanks to a 60GB SSD too.


Presentations on open-source firmware, which can be used to extend the lifespan of older motherboards and devices.

2020-2021: https://vimeo.com/user128699411

2019: https://www.youtube.com/channel/UCUVk2lv2h2VbP3Dx4k2axyA/vid...


It's a shame that getting modern machines with Coreboot compatibility is virtually impossible

I'd absolutely pay extra for a Ryzen 5000 series motherboard that I could put Coreboot on

Hopefully the Framework laptop pushes things in the right direction for mobile computing, they've at least stated an intent to add Coreboot support


Most of the time those limits are important for a specific feature of the motherboard that you might not be using.

For example, I recently put 64GB of RAM in a mobo that only supported 32. It didn't work at first, and through trial and error I found out that XMP was the only thing with a hard 32GB limit. Once XMP was disabled, it booted fine with 64GB.


Haha, many many years ago I had a 486 motherboard, and through lots of saving I was able to afford two 8MB SIMMs. I plugged them in and it reported... 12MB

:-D


I had a 5MB 386 system for a while. Often, a program requiring 8MB of RAM would work, just poorly, on anything more than 4MB.


For Windows with test signing on, it seems you may be able to replace the tables with your own generated ones, similar to the OP's original attempt, but without the need to use GRUB. I have no idea of the utility of this, considering it requires test signing, compared to the GRUB approach, but maybe it will come in handy for somebody someday.

> To do this, rename the AML binary to acpitabl.dat and move it to %windir%\system32. At boot time, Windows replaces tables present in the ACPI firmware with those in acpitabl.dat. [0]

[0] https://docs.microsoft.com/en-us/windows-hardware/drivers/br...


When you enable test signing (etc.) don't you then have the text overlay on top of the desktop wallpaper?


I really appreciate writeups like this because it gives me insight into lower level systems that I previously did not know existed.


I'm also always looking for additional low-level understanding. What's tricky is figuring out how to pace it so it's not overwhelming.

Like, my favorite thing is that circuit boards have wobbly traces on them around high-speed interfaces (RAM, PCI, etc.) so all the bits in parallel signals arrive at the same time. The clock speeds are specced right up at the edge of how fast signals can stably propagate.
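To put very rough numbers on that (all values assumed for illustration, not from any datasheet): signals in FR4 travel at roughly 15 cm/ns, so at DDR3-1600 data rates the skew budget is only tens of picoseconds, which works out to around a centimetre of allowable trace-length mismatch:

    #include <stdio.h>

    int main(void)
    {
        /* Rough, assumed numbers for illustration only. */
        double prop_speed_cm_per_ns = 15.0;   /* ~half of c in FR4 */
        double data_rate_mtps = 1600.0;       /* DDR3-1600: 1600 MT/s */

        double unit_interval_ps = 1e6 / data_rate_mtps;    /* 625 ps per bit */
        double skew_budget_ps   = unit_interval_ps * 0.1;  /* say ~10% of the UI */
        /* cm/ns * 10 = mm/ns; ps / 1000 = ns; product = mm of trace. */
        double match_mm = prop_speed_cm_per_ns * 10.0 * (skew_budget_ps / 1000.0);

        printf("unit interval: %.0f ps, skew budget: %.0f ps, "
               "trace length match: ~%.1f mm\n",
               unit_interval_ps, skew_budget_ps, match_mm);
        return 0;
    }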

And this sort of thing is awesome, but like... quite a fair bit beyond the point I'm practically able to engage at.

I guess I'm trying to figure out how to leap without missing anything lol :D which is impossible now I phrase it like that...


Does someone know what's happening with these PCI mappings?

Is this memory-mapped by an address decoder on the motherboard (or north/south bridge)? Or is it a hint to the OS that they should map it here?

Maybe the PCI I/Os are really there, hardwired on the motherboard, but use a higher impedance, though that would be very hacky.

Or maybe they're mapped from that address, and the highest bits do not count, as address decoding is only performed on the lower bits? With physical memory shadowing parts of the physical address space?

Or maybe this is just virtual address space shenanigans? In that case, how does the OS figure how to talk to PCI or the RAM based on the provided mappings?

I might be able to find that information by looking at the specifications and whatever ACPI docs I can find, but if someone already knows how it works, I'd really appreciate a few pointers.


I had an easier experience along these lines. The Lenovo ThinkPad X220's official technical specifications say it only supports 8 GiB of RAM. But many forum posts across the web confirm that people used 2 slots of 8 GiB = 16 GiB with no problem (e.g. https://www.reddit.com/r/thinkpad/comments/6rdra2/does_the_x... ). Sure enough, I tried it and it worked.



Very nice adventure, and I love the fact that the open source ecosystem provided all the tools that OP could use to get ahead.


Yeah, one would expect motherboards could have some heuristics to:

- run at the best speed possible,

- reach the highest capacity possible with the RAM available, and finally

- tell you what the problem is if there is one (e.g. no module in a slot, incompatible modules on a channel).

This cannot be so hard to do. See my recent adventures: https://twitter.com/KaliszAd/status/1462940387916582915


This post is so interesting, and somebody has already archived that page: https://archive.md/p9uxC


Somewhat related: I had an older Asus Core 2 board that would accept 16GB of RAM, but only on a 64-bit OS. However, even with a 64-bit OS, the onboard NIC would fail to work. So I added a NIC to the PCI bus, and it also wouldn't work; I could only get a USB dongle NIC to work. Really weird, and I never figured out why it was doing this.


My n=1: a Dell system, "16GB max memory" (Dell) vs. "32GB max memory" (websites of some leading 3rd-party memory manufacturers). The latter seems to be correct. Both Windows 10 and the hardware self-tests in the Dell's BIOS fully support the extra memory.


I've got an old HP business tower that only officially supports 4GB of DDR2 and has 4 slots.

It works just fine at 800MHz using 4GB total.

I needed to use 667MHz DIMMs so it would handle 8GB.

It almost worked with 800MHz but was unstable.


I love posts like this! I remember back in the day having to cut motherboard traces on my Amiga 500 to get Fat Agnus to recognize 1MB of chip RAM. A pure BIOS/bootloader hack is far less risky!


Too bad there is no GitHub/GitLab or other git repository to help us peons check and test for potential over-expansion of our beloved PC motherboards.


Why did you share this?

Some previous discussion:

https://news.ycombinator.com/item?id=19573458


I'm glad he did as I missed it before. Perfectly happy for items of interest to be reposted periodically, it makes sense.


Just asking why. Why submit it? There's nothing new in it. We're just repeating the same discussions we had before, which can be read in the previous thread from when it was new. Hence linking to the other discussion thread.


As I said above, what is new is "me", the audience, and imho that's reason enough. I will never start digging around like a mad man in the annals of HN for cool stuff to read, but if some good older piece makes the front page again, then that is good.


(2019)



