Commodore 64 RAM disk. I remember having 64 KB of RAM soldered onto a generic breadboard with a 9V battery as backup. It made the computer ridiculously fast when all you had for I/O was slow floppy drives. I also made it work for the Amiga, but the payoff was much lower and I stopped using it, because there the floppy and hard drives were faster and the bottleneck was the CPU.
Video Production - If I could store my video projects on a RAM disk it really would be pretty amazing. The issue is that SSDs have sped up input so much that the benefit would be much smaller. In video your writes to disk are actually very small and usually only happen while you import, preview, or render (which is faster in GPU RAM). There is a lot of work in getting a production RAM disk up and ready, for a low payoff. It still would be nice to have all my clips reside in RAM and be able to render previews in "real time" with less stutter.
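For what it's worth, on Linux you can already fake a per-project RAM disk with tmpfs; here's a minimal sketch (the paths and clip extension are just placeholders) that stages clips into /dev/shm, which is RAM-backed, before an editing session:

    import pathlib
    import shutil

    # /dev/shm is tmpfs (RAM-backed) on most Linux distributions.
    # Both paths below are placeholders, not real project locations.
    SRC = pathlib.Path("/media/projects/current")   # where the clips normally live
    DST = pathlib.Path("/dev/shm/ramdisk_project")  # RAM-backed staging area

    DST.mkdir(parents=True, exist_ok=True)
    for clip in SRC.glob("*.mov"):
        shutil.copy2(clip, DST / clip.name)

    # Point the editor's media folder at DST; previews then read from RAM.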
Data "Science" - I most use R and that is 100% in memory so it certainly has been a boom for that in my own usage. 64 GB of RAM is pretty cheap and then I can also use Spark if it isn't enough. My current biggest time restraint for big data sets is getting the sets more then anything else. Having large data sets reside in a RAM Disk would really speed things up but we now have Spark for the actual big data sets.
I manually stuffed in 1.5 MB (three 512 KB RAM cards) for an image capture system. My thumbs got blisters. The PC was an overclocked Tandy 80286 (8 MHz!). I forget the name of the goofy Lotus/Intel/Microsoft memory expander contraption. That Tandy was a tank.
We were doing large-format image capture for document archival. Totally not worth it; the results were unusable. But we had investors, so...
Before that, I thought I was pretty cool pimping my Apple //e with an 80-column character display and 128 KB of RAM. True word processing, WYSIWYG.
Protectionist measures, fueled by rising-sun Japan hysteria and engineered by Micron. They stagnated the PC industry, postponed the transition to the 386, and stomped the OS/2 launch.
OS/2 shouldn't have failed!!!! Too bad DOS and Windows apps ran better in a VM than native OS/2 apps, which would just hang 10% of the time!!!! Another reason I loved my Amiga!
Server grade everything is ridiculously cheap. Unfortunately, the motherboards seem to still be expensive.
I bought a motherboard with two LGA2011 sockets for £35 and built a system around it for only about £100, which included 32 GB of RAM and two 8-core Xeons. Unfortunately, the motherboard was faulty, and I have been unable to find a replacement at a reasonable price.
You'll see people complaining about memory usage for editors and IDEs. A few years ago it was Emacs, "the memory hog". Today it's Atom.
The usual complaint is often "you shouldn't need a modern computer to use an editor".
What people fail to realize is that Moore's Law and memory eventually make the problem go away. Spend a little extra on memory today and don't make memory requirements a deciding factor.
> What kind of person says "this is a job for nano, while this is a job for emacs?"
Me. I use nano for quick edits over SSH and a different editor/IDE for everything else, which is done locally and pushed; I (incredibly) rarely edit anything on the server.
> What kind of person says "this is a job for nano, while this is a job for emacs?"
Depends on what you're used to. Even I, a seasoned die-hard Emacs user who is aware of both TRAMP and emacsclient, will still fire up Vim (or vi) for various reasons: because it's what I grew up on, because I can't remember how to use TRAMP to edit that file remotely or as root, or because the system really isn't up to the task of running Emacs.
I have a very specific example of a job for nano vs. vim. When I need to copy-paste some code onto a server and I don't have clipboard access over SSH, nano is better for that than vim, because vim tends to re-indent pasted lines. So I'll use nano to paste the file into, and then go into vim to edit it. I'm sure there are other examples.
Money can make a lot of problems go away, but it's still worth complaining about. It's especially bad when computers get better and better while the user experience stays as laggy as ever; then you can only improve things by staying ahead of the curve, spending really large amounts of money.
> What people fail to realize is that Moore's Law and memory eventually make the problem go away. Spend a little extra on memory today and don't make memory requirements a deciding factor.
What's interesting to consider, though, are unintended consequences. Even though the problem is old, take a look at Bufferbloat: https://en.wikipedia.org/wiki/Bufferbloat
Basically, as RAM got cheap, switch manufacturers started shoving more of it into switches, packets got delayed to the point where it broke protocols, and things had to be rethought. These weren't just some off-the-cuff, hacked-together protocols either: it severely hampered TCP, one of the most widely tuned and tested protocols out there.
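To put rough numbers on it (these figures are illustrative, not from the article): a few megabytes of buffering in front of a slow uplink adds whole seconds of queuing delay, which loss-based TCP congestion control doesn't notice until far too late.

    # Back-of-the-envelope queuing delay from an oversized, full buffer.
    # Both figures below are illustrative assumptions, not measurements.
    buffer_bytes = 4 * 1024 * 1024      # 4 MiB of buffering in a cheap router/switch
    link_bits_per_s = 10 * 1_000_000    # 10 Mbit/s uplink

    delay_s = (buffer_bytes * 8) / link_bits_per_s
    print(f"Added latency when the buffer is full: {delay_s:.2f} s")  # ~3.36 s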
O tempora, O mores: consoles used to store the entire game in ROM. Up to about 64Mb.
The modern need for patches and DLC makes that impossible now, in addition to the other reasons. But I think it would still be possible to greatly improve load times with a bit of optimisation - it's just not commercially important. Maybe if the console QA system imposed maximum load times it would be different - but then you'd see compromises made elsewhere, such as in level size.
(I play Cities:Skylines, where one of the more popular mods is simply called "Don't Crash", which lazy-loads other mod data to improve performance and reliability)
Remember that the 3DS is still very successful, and runs just off carts (well, also downloadable titles). It seems to run very smoothly.
In practice, there have been a few games that have had patches released -- in almost all modern games a tiny fraction of the game is code, which can be easily replaced, leaving all the art and music which (hopefully) don't need as much patching.
Well, modern carts like the 3DS's are quite different from old cart systems, in the sense that they are handled more like storage devices than as an extension of memory. I think the GBA was the last system that allowed, for example, running code directly from the cart without loading it into RAM first.
Games on the 3DS aren't typically renowned for over-the-top graphics quality relative to the overall videogame scene; if you expected the same quality at the same resolution on a desktop, then you'd be making what pjc50 described: compromises.
Sad but true. Too many beautifully rendered games that aren't terribly engaging because amazing graphics are what the mainstream market demands. (I've avoided buying the latest gen consoles partly because of this, but mostly because I just don't get the time to play games that much these days.)
That being said there are also many beautiful games that are amazingly fun, and the graphics do add to the experience, but they're more like the icing on the cake. That's not quite the metaphor I'm groping for, but you get the idea: the gameplay is a fundamental that has to be right, and then the graphics can really add something; without the gameplay the experience is always destined to be flat.
It will be interesting to see what happens with VR games. With a regular game you can always do something else - check your phone, walk away for a moment. In VR you are stuck staring at a loading screen unable to easily do anything else. We are already seeing cases where people will abandon a VR experience just because they cannot handle the load times. Content creators take note.
The current (market) price for 64 GB of RAM is roughly the same as the price of a modern console. So doing that would essentially double the price of the console.
Double the cost, not necessarily the price. And maybe not even double, considering the console's price is already subsidized, and buying tons of RAM chips straight from the factory must be cheaper than the market price.
The game is CPU-bound on some of its rendering, so reducing the load on the CPU by using uncompressed audio allowed them to maintain graphical fidelity with reduced minimum requirements in terms of CPU.
Wouldn't the next logical move be to switch to fast flash memory that can move a GB/s or so? At the moment loading times seem to be bottlenecked by hard drives and optical drives.
Flash memory is still about an order of magnitude cheaper than RAM.
That's what computers are doing by switching entirely to solid state, but I was thinking more that your console library would still sit on a (very) cheap hard drive, and when you start up a game it just starts loading everything it can into RAM while you play.
> when you start up a game it just starts loading everything it can into RAM while you play.
That's kind of what happens now (loading screens), but the problem is that console hardware is designed to barely pull off acceptable games and no more. From what I remember, the Xbox 360 was originally designed with 256 MB of RAM but got bumped to 512 MB, and games up until a few years ago had to deal with that. Many games go through build cycles of simply optimizing RAM usage. When there are heavy demands on RAM during gameplay, there isn't any left over for loading future levels. Add in that many games today are open world, so you don't know where the player is going to be, and it's not worth it. Even PC-exclusive games don't have this feature.
Which side are you arguing? Everything you're saying sounds like reasons to just throw cheap RAM at the problem and load one level, then put the whole game in RAM while that level is being played.
I imagine that with some filesystem trickery you could even do it without the game knowing; it just magically gets everything it asks for virtually instantly.
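One way to sketch that trickery without touching the game at all (Linux-style, and the asset path is hypothetical): read every asset once in the background so later reads are served from the OS page cache, i.e. from RAM.

    import os

    def prewarm(asset_dir: str, chunk_size: int = 1 << 20) -> None:
        """Read every file under asset_dir once so the OS page cache keeps it in RAM."""
        for root, _dirs, files in os.walk(asset_dir):
            for name in files:
                with open(os.path.join(root, name), "rb") as f:
                    # Discard the data; the read itself pulls the file into the page cache.
                    while f.read(chunk_size):
                        pass

    # Hypothetical install path; run this while the player sits in the menu.
    prewarm("/games/current_title/assets")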
I'm arguing against it, on the basis of economics and (to a lesser extent) game design.
Console hardware is almost always sold at a loss. No company would build a console full of RAM (no matter how cheap it is) with almost no load times. Most people (having acclimated to loading screens) would rather buy the cheaper "good enough" alternative.
Aren't Nintendo rumored to be going back to sorta-ROM cartridges with their upcoming NX console? (The cartridges would be made of, strictly speaking, rewritable flash memory.)
If a console just let you install a PCI-E-based storage device, like the Samsung 950 Pro[1], you could likely speed up load times by an order of magnitude or more.
The reason GDDR is normally used in GPUs is that the workloads are more predictable: you are doing the same calculation thousands of times, just iterating across a buffer. This is why SIMD and cache prefetching shine on GPUs; they make memory latency less important.
The ability to increase GPU RAM has the same possibilities, but it is VERY expensive right now. A 24 GB graphics card costs about $5,000, and most of that is due to who currently needs that much GPU RAM: scientific computing and data analysis, and boy would I love to have one. Imagine having these in your Spark servers and using all that fast GPU RAM.
We have a couple of Quadros with 12 GB at work. They aren't really fast compared to a GTX, but dat RAM makes a difference. We really need a RAM revolution on GPUs.
"SAP HANA is the in-memory computing platform that enables you to accelerate business processes, deliver more intelligence, and simplify your IT environment"
For anyone confused, digi_owl is referring to Jon "Hannibal" Stokes, the co-founder of Ars Technica who left the site in 2011. He wrote the sort of deeply-technical content that this article, well, isn't.
Because SAP has built a product, it has expertise regarding the effects of the current memory market on computing. Other companies are also riding the wave.
I found this podcast interesting but warn that it discusses the use of memory in the context of designing and implementing Alluxio: