Hacker News
Thanks for the memory: How cheap RAM changes computing (arstechnica.co.uk)
98 points by rbanffy on Oct 17, 2016 | 76 comments



Commodore 64 RAM Disk. I remember having 64K of RAM soldered onto a generic breadboard with a 9V battery as backup. It made the computer ridiculously fast when you had slow floppy drives for I/O. I also made it work for the Amiga, but the payoff was much lower and I stopped using it because the floppy and hard drives were faster and the bottleneck was at the CPU.

Video Production - If I could store my video projects on a RAM disk it would be pretty amazing. The issue is that SSDs have sped up input so much that the benefit would be much smaller. In video, writes to disk are actually very small and usually only happen while you import, preview or render (which is faster in GPU RAM). Getting a production RAM disk up and ready is a lot of work for a small benefit. It still would be nice to have all my clips reside in RAM and be able to render in real time with less stutter on previews.

Data "Science" - I mostly use R, which is 100% in memory, so cheap RAM has certainly been a boon in my own usage. 64 GB of RAM is pretty cheap, and I can also use Spark if that isn't enough. My biggest time constraint for big data sets is obtaining the sets more than anything else. Having large data sets reside on a RAM disk would really speed things up, but we now have Spark for the truly big data sets.


"Commodore 64 RAM Disk"

Oh man.

I manually stuffed in 1.5MB (three 512KB RAM cards) for an image capture system. My thumbs got blisters. The PC was an overclocked Tandy 80286 (8 MHz!). I forget the name of the goofy Lotus/Intel/Microsoft memory expander contraption. That Tandy was a tank.

We were doing large format image capture for document archival. Totally not worth it. Results were unusable. But we had investors, so...

Before that, I thought I was pretty cool pimping my Apple //e with an 80-column character display and 128KB of RAM. True word processing, WYSIWYG.


EMS


I think that Tandy was the best computer I ever owned.


Hah. I remember using those on MS-DOS PCs back in the late 80s, as well.

We put work-session data into the RAM disk, then saved it back to the hard drive at opportune times.


Ah, the 1988 DRAM shortages, when 256K DRAMs cost more than $10 apiece.


Protectionist measures, fueled by rising-sun Japan hysteria and engineered by Micron. They stagnated the PC industry, postponed the transition to the 386, and stomped the OS/2 launch.

I still haven't forgiven Micron. Never forget!


OS/2 shouldn't have failed! Too bad DOS and Windows apps ran better in a VM than native OS/2 apps, which would just hang 10% of the time! Another reason I loved my Amiga!


Yeah, that OS/2 2.0 fiasco is one of my favorite topics.


And we still have Micron today. They were suffering years of losses until the DRAM market began to recover in 1987. They even bought Elpida too.


In 1990 I paid $150 for a 512K RAM pack.


Used server RAM has been ridiculously cheap. You can get 256GB for $383:

http://www.ebay.com/itm/256GB-64x4GB-PC3-10600R-DDR3-1333-EC...

128GB for $144:

http://www.ebay.com/itm/128GB-32x4GB-DDR3-PC3-10600R-Server-...


Server-grade everything is ridiculously cheap. Unfortunately, the motherboards still seem to be expensive.

I bought a motherboard with two LGA2011 sockets for £35 and built a system around it for only about £100, which included 32GB of RAM and two 8-core Xeons. Unfortunately, the motherboard was faulty, and I have been unable to find a replacement for a reasonable price.



Thanks, although I'm looking for <£40 ideally. They do come up around that price occasionally.


You'll see people complaining about memory usage for editors and IDEs. A few years ago it was Emacs, "the memory hog". Today it's Atom.

The usual complaint is often "you shouldn't need a modern computer to use an editor".

What people fail to realize is that Moore's Law and memory eventually make the problem go away. Spend a little extra on memory today and don't make memory requirements a deciding factor.


If you need an editor, use notepad.exe, nano, or an equivalent.

You run emacs / vim / sublime / atom exactly because it's much more than a simple text editor, it's a development environment of sorts.


Why wouldn't you simply use the editor you leave open on your desktop for the "small editor" tasks?

What kind of person says "this is a job for nano, while this is a job for emacs?"


> What kind of person says "this is a job for nano, while this is a job for emacs?"

Me. I use nano for quick edits over SSH and a different editor/IDE for everything else (which is done locally and pushed); I rarely, if ever, edit anything on the server.


> What kind of person says "this is a job for nano, while this is a job for emacs?"

Depends on what you're used to. Even I as a seasoned die-hard Emacs user who is aware of both TRAMP and emacsclient will still fire up VIM (or vi) for various reasons: because it's what I grew up on, because I can't remember how to use TRAMP to edit that file remotely or as root, or the system is really not up to the task of running emacs.


I have a very specific example of a job for nano vs vim. When I need to copy-paste some code onto a server and I don't have clipboard access over SSH, nano is better for that than vim, because vim's auto-indentation mangles pasted code. So I'll paste the file into nano, and then go into vim to edit it. I'm sure there are other examples.


Use ":set paste" before you paste. (Disable with ":set nopaste".)

See also: http://vim.wikia.com/wiki/Toggle_auto-indenting_for_code_pas...

Or, if your terminal emulator supports "bracketed paste", use something like this: https://github.com/ConradIrwin/vim-bracketed-paste


Indeed, if you use emacs, you just keep it open forever and use it for everything. Such is the culture.

(Some people even try to use emacs as their entire desktop environment.)


Maybe we disagree about scale :-)

Vim is my "little" editor; JetBrains (and the like) are my "big" editor (albeit with vi[m] plugin to avoid the mental context switch).

Somewhere an emacs user is laughing at me, but I'll get over it.


Money can make a lot of problems go away, but it's still worth complaining about. Especially bad is to have computers get better and better while the user experience stays as laggy as ever. Then you can only improve things by being ahead of the curve, wasting really large amounts of money.


> What people fail to realize is that Moore's Law and memory eventually make the problem go away. Spend a little extra on memory today and don't make memory requirements a deciding factor.

What's interesting to consider, though, are unintended consequences. Even though the problem is old, take a look at Bufferbloat: https://en.wikipedia.org/wiki/Bufferbloat

Basically, as RAM got cheap, switch manufacturers started shoving more into switches, packets got delayed to where it broke protocols, and things had to be rethought. This wasn't just some off the cuff hacked together protocols either: it severely hampered TCP, one of the most widely tuned and tested protocols out there.
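The buffering arithmetic behind bufferbloat is simple enough to sketch. A minimal illustration in Python; the buffer size and link rate below are hypothetical, not from any particular switch:

```python
# A packet entering a full drop-tail FIFO buffer waits roughly
# buffer_size / link_rate seconds before it is transmitted.

def queueing_delay_ms(buffer_bytes: int, link_rate_bps: int) -> float:
    """Worst-case time a packet spends waiting in a full buffer."""
    return buffer_bytes * 8 / link_rate_bps * 1000

# A 1 MB buffer in front of a 10 Mbit/s residential uplink:
delay = queueing_delay_ms(1_000_000, 10_000_000)
print(f"{delay:.0f} ms")  # 800 ms -- far beyond what TCP's RTT estimator expects
```

Cheap RAM made multi-megabyte buffers free to add, but as the arithmetic shows, every extra megabyte of standing queue adds hundreds of milliseconds of delay on slow links, which is exactly what confused TCP's loss-based congestion control.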


Moore's law already has one foot in the grave. Multi-core, GPGPU, FPGA, etc. are reactions to this.


I'm honestly surprised we haven't seen consoles that store the entire game in RAM yet. Load times for consoles are absolutely insane.


O tempora, O mores: consoles used to store the entire game in ROM. Up to about 64Mb.

The modern need for patches and DLC makes that impossible now, in addition to the other reasons. But I think it would still be possible to greatly improve load times with a bit of optimisation - it's just not commercially important. Maybe if the console QA system imposed maximum load times it would be different - but then you'd see compromises made elsewhere, such as in level size.

(I play Cities:Skylines, where one of the more popular mods is simply called "Don't Crash", which lazy-loads other mod data to improve performance and reliability)


Remember that the 3DS is still very successful, and runs just off carts (well, also downloadable titles). It seems to run very smoothly.

In practice, there have been a few games that have had patches released -- in almost all modern games a tiny fraction of the game is code, which can be easily replaced, leaving all the art and music which (hopefully) don't need as much patching.


Well, modern carts like the 3DS's are quite different from old cart systems in the sense that they are handled more like storage devices than extensions of memory. I think the GBA was the last system that allowed, for example, running code directly from the cart without first loading it into RAM.


Games on the 3DS aren't typically renowned for over-the-top graphical quality in the overall videogame scene; if you expected the same quality at the same resolution as on the desktop, you'd be making the compromises pjc50 mentioned.


Frankly, the focus on graphics has long since reached the point of wankery.


That's your opinion. Graphics is also not the thing I'm most interested in, but let's not pretend our preferences reflect the overall market.


Sad but true. Too many beautifully rendered games that aren't terribly engaging because amazing graphics are what the mainstream market demands. (I've avoided buying the latest gen consoles partly because of this, but mostly because I just don't get the time to play games that much these days.)

That being said there are also many beautiful games that are amazingly fun, and the graphics do add to the experience, but they're more like the icing on the cake. That's not quite the metaphor I'm groping for, but you get the idea: the gameplay is a fundamental that has to be right, and then the graphics can really add something; without the gameplay the experience is always destined to be flat.


Yeah, I fail to see how the patch and DLC stuff can't be solved with some eMMC added to the board.


It will be interesting to see what happens with VR games. With a regular game you can always do something else - check your phone, walk away for a moment. In VR you are stuck staring at a loading screen unable to easily do anything else. We are already seeing cases where people will abandon a VR experience just because they cannot handle the load times. Content creators take note.


> But I think it would still be possible to greatly improve load times with a bit of optimisation - it's just not commercially important.

A lot of work goes into optimizing titles on consoles... it wouldn't make sense to skimp on such an important part of UX.


The current (market) price for 64 GB of RAM is roughly the same as the price of a modern console. So doing that would essentially double the price of the console.


Double the cost, not necessarily the price. And maybe not even double, considering the price is already undervalued, and buying tons of RAM chips from the factory must be cheaper than market price.


Given a PS4 game can be ~40-50 GB on disk, maybe it's just not cheap enough given how big games are.


And that may be partly compressed as well. Reading that all into memory while uncompressing and un-DRMing might take a while.


Titanfall is 20GB of game and 35GB of uncompressed audio (all translations).
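As a back-of-envelope check that 35 GB of raw audio is plausible (the actual format isn't stated here; 48 kHz/16-bit/stereo PCM is an assumption):

```python
# How much raw PCM audio fits in 35 GB?
# Assumed format (not confirmed for Titanfall): 48 kHz, 16-bit, stereo.

SAMPLE_RATE = 48_000   # samples per second
BYTES_PER_SAMPLE = 2   # 16-bit
CHANNELS = 2           # stereo

bytes_per_hour = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * 3600
hours_in_35_gb = 35e9 / bytes_per_hour
print(f"{bytes_per_hour / 1e9:.2f} GB/hour, ~{hours_in_35_gb:.0f} hours in 35 GB")
```

Under those assumptions, stereo PCM runs about 0.69 GB per hour, so 35 GB is roughly 50 hours of audio summed across all the language tracks.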


I should have a look at the PC version. Is it really PCM audio? Is it seriously not worth it to even compress with a lossless codec?

I miss the days of the PC RIP, where every byte counted.


The game is CPU-bound on some of its rendering, so reducing the load on the CPU by using uncompressed audio allowed them to maintain graphical fidelity with reduced minimum requirements in terms of CPU.


If it's stupid and it works, it's not stupid.

There are plenty of high-speed decompressors that could possibly be adapted to PCM data, though; I'm sure RAD would sell you one.


I think there's also a healthy amount of "People think more GBs equals more content."


Wouldn't the next logical move be to switch to fast Flash memory that can move a GB/s or so? At the moment loading times seem to be bottlenecked by hard drives and optical drives.

Flash memory is still about an order of magnitude cheaper than RAM.


That's what computers are doing by switching entirely to solid states, but I was more thinking your console library would still be on a (very) cheap hard drive and when you start up a game it just starts loading everything it can into RAM while you play.


> when you start up a game it just starts loading everything it can into RAM while you play.

That's kind of what happens now (loading screens), but the problem is that console hardware is designed to barely pull off acceptable games and no more. From what I remember, the Xbox 360 was originally designed with 256 MB of RAM but got bumped to 512. Games up until a few years ago had to deal with that. Many games go through many build cycles of simply optimizing RAM usage. When you have more demands on RAM during gameplay, there isn't any left over to load future levels. Add in that many games today are open world, so you don't know where the player is going to be, and it's not worth it. Even PC-exclusive games don't have this feature.


Which side are you arguing? Everything you're saying sounds like reasons to just throw cheap RAM at the problem and load one level, then put the whole game in RAM while that level is being played.

I imagine with some filesystem trickery you could even do it without the game even knowing, it just magically gets everything it asks for virtually instantly.


I'm arguing against it, on the basis of economics and (to a lesser extent) game design.

Console hardware is almost always sold at a loss. No company would build a console full of RAM (no matter how cheap it is) with almost no load times. Most people (having acclimated to loading screens) would rather buy the cheaper "good enough" alternative.


Aren't Nintendo rumored to get back to sorta-ROM cartridges with their upcoming NX console? (The cartridges would be made of strictly speaking rewritable flash memory.)

They seem to care about loading times....


There're a lot of unsubstantiated NX rumors.


If a console just let you install a PCI-E-based storage device, like the Samsung 950 Pro[1], you could likely speed up load times by an order of magnitude or more.

[1] c.f. http://www.newegg.com/Product/Product.aspx?Item=N82E16820147... (I recently installed it in my desktop computer; my X99 motherboard had a slot for it.)


Hi, I would like to recruit you to the PCMR. ;)


The trouble with modern consoles (games and graphics, if you will) is: which RAM? Most content is loaded into and needed in GPU RAM, not system RAM.


>> modern consoles (games, graphics if you will) is which RAM?

Except the modern console generation doesn't have dedicated GPU RAM. They are using AMD APUs that unify system/GPU RAM.


You're right; on the PS4 it's GDDR5. It leaned towards graphics requirements, not system requirements. Not exactly the breed you'd find in regular kit.


The primary difference between GDDR and DDR is a 128-bit vs 64-bit bus. GDDR is also normally SRAM, while DDR can be SRAM but is normally DRAM.

Saying one is geared to graphics is just marketing.


I think you meant SDRAM not SRAM?


So why did they choose GDDR then?


I'm not Sony Engineering, so IDFK.

GDDR5 trades latency for throughput:

Xbox One DDR3: 68 GB/s, CAS 7

PS4 GDDR5: 127 GB/s, CAS 15

The reason GDDR is normally used in GPUs is that the workloads are more predictable: you are doing the same calculation thousands of times, just iterating across a buffer. This is why SIMD and cache prefetching shine on GPUs; it makes memory latency less important.
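To put those CAS figures in wall-clock terms: latency in nanoseconds is CAS cycles divided by the command clock. The clock figures below are assumptions (DDR3-2133 in the Xbox One with a ~1066 MHz command clock, GDDR5-5500 in the PS4 with a ~1375 MHz command clock), but they illustrate the trade:

```python
# CAS latency in nanoseconds = CAS cycles / command clock (MHz) * 1000.

def cas_ns(cas_cycles: int, clock_mhz: float) -> float:
    """Convert a CAS-cycle count into nanoseconds at a given command clock."""
    return cas_cycles / clock_mhz * 1000

ddr3 = cas_ns(7, 1066)    # Xbox One, assumed DDR3-2133:  ~6.6 ns
gddr5 = cas_ns(15, 1375)  # PS4, assumed GDDR5-5500:     ~10.9 ns
print(f"DDR3 CAS 7: {ddr3:.1f} ns, GDDR5 CAS 15: {gddr5:.1f} ns")
```

So even though GDDR5's CAS count is more than double, in absolute time it is only roughly 65% slower to first access, while delivering nearly twice the bandwidth, which is a fine trade for streaming workloads.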


So, you're saying GDDR is more geared towards operations done commonly in graphics? ;)


No. I'm saying it is geared to workloads with easily predictable or cacheable memory access patterns.


The ability to increase GPU RAM has the same possibilities, but it is VERY expensive right now. A 24GB graphics card costs about $5,000, and most of that is due to who currently needs that much GPU RAM: scientific computing and data analysis, and boy would I love to have one. Imagine having these on your Spark servers and using all that fast GPU RAM.

http://www.newegg.com/Product/Product.aspx?Item=N82E16814132...


We have a couple of Quadros with 12GB at work. They aren't really fast compared to a GTX, but dat RAM makes a difference. We really need a RAM revolution on GPUs.


"Presented by SAP"

"SAP HANA is the in-memory computing platform that enables you to accelerate business processes, deliver more intelligence, and simplify your IT environment"


Completely useless article. Random publicity shots of data centers/super computers, no content...


I really miss the days of Hannibal.


For anyone confused, digi_owl is referring to Jon "Hannibal" Stokes, the co-founder of Ars Technica who left the site in 2011. He wrote the sort of deeply-technical content that this article, well, isn't.

http://arstechnica.com/author/hannibal/


How about John Siracusa?


Article after article of Apple wankery, thanks but no thanks...


Because SAP has built a product, it has expertise regarding the effects of the current memory market on computing. Other companies are also riding the wave.

I found this SE-Radio podcast interesting, with the caveat that it discusses the use of memory in the context of designing and implementing Alluxio:

http://www.se-radio.net/2016/06/se-radio-episode-260-haoyuan...


Unfortunately RAM is not cheap; prices have been at the same level for years now with no visible end to this trend.


I love getting ram on the cheap. I don't really use it, but just having an insane amount of it is so satisfying.

Star Citizen will probably get its own RAM disk, if prices keep on dropping.



