Ask HN: Why are flash memory prices going down so much faster than RAM?
165 points by altoz on Jan 2, 2017 | 76 comments
You can now buy 128 GB USB sticks or SD cards for around $20. Meanwhile, 16 GB RAM sticks are still $100+.

Are the performance specifications for RAM that much more stringent? Is the demand for SD cards and USB sticks so much greater that there are stronger economies of scale?




The obvious answer: flash can hold multiple bits per cell and RAM can't.

MLC is half as expensive as SLC. TLC is 33% less expensive than MLC. QLC is 25% less expensive than TLC and 75% cheaper than SLC. Not to mention transparent compression algos. As the controllers improve you can get more bits of storage from the same amount of flash for free. Longevity and reliability suffer, but hey, cheap SSDs!

RAM only gets cheaper through improvements to semiconductor processes, which can also be applied to make flash cheaper. (Big fat asterisk: those processes are very different.) Improvements to flash that allow more levels per cell, on the other hand, can't be applied to RAM. The price difference between flash and RAM will only continue to grow.

Modern flash is quite "analog". The first company to figure out how to reliably store 32 voltage levels per cell (Five bits. PLC?) will make a quick billion.
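A back-of-the-envelope sketch of that arithmetic (Python; it assumes the cost per physical cell stays constant, which ignores the extra controller and ECC work that more levels require):

    # Idealized cost-per-bit scaling for multi-level NAND cells.
    # Assumption: the cost of a physical cell is constant, so cost per bit
    # falls as 1/bits. Real savings are smaller (controllers, ECC, yield).
    cell_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC (hypothetical)": 5}

    for name, bits in cell_types.items():
        levels = 2 ** bits        # distinct charge levels the cell must hold
        cost_per_bit = 1 / bits   # relative to SLC = 1.00
        print(f"{name:20s} {levels:3d} levels, {bits} bits/cell, "
              f"cost/bit = {cost_per_bit:.2f}x SLC")

    # MLC comes out at 0.50x SLC, TLC at 0.33x (33% below MLC),
    # QLC at 0.25x (25% below TLC, 75% below SLC), matching the figures above.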


For other readers wondering what SLC, MLC, TLC, and QLC stand for:

SLC -> Single-Level Cell

MLC -> Multi-Level Cell (in practice now mostly means 2 bits per cell)

TLC -> Triple-Level Cell

QLC -> Quad-Level Cell

It seems like a lot of the higher-level cell designs require a '3D' gate architecture, so you might come across that terminology in marketing materials.

Sources: https://en.wikipedia.org/wiki/Multi-level_cell and http://www.theregister.co.uk/2016/07/28/qlc_flash_primer/


A bit of trivia:

- Each MLC cell actually holds one of 4 distinct charge levels, and thus can encode 2 bits. Same for TLC (8 levels, 3 bits) and QLC (16 charge levels to encode 4 bits).

- "3D" when talking about solid-state memory, would usually refer to vertically stacked cells [1], which gives you more cells per square millimeter at the same manufacturing process node. This is orthogonal to xLC.

[1] http://www.theregister.co.uk/2013/07/23/sandisk_takes_the_bi...


Besides stacking layers on a chip, stacking multiple dies is also used to increase density per PCB area. (I believe that works for both DRAM and flash.) The package you see on the PCB may actually hold tens of silicon dies, each of which itself holds tens of layers.

This method can be used until heat dissipation becomes an issue, which may be the reason why CPUs/GPUs are not stacked.


Yep, they use TSVs (through-silicon vias).


Two other factors:

Flash connection protocols (SATA, IDE) allow much faster evolution, independent of what's on the other side of the interface.

If you have a new flash tech that's 2-3x better in density/speed, you can easily deploy it in the current generation of high-end servers within a few months.

RAM protocols (DDR2/3/4) must be developed 100% in sync with the CPU vendors. If the major CPU vendors (Intel, AMD, ARM SoC makers, Qualcomm) decide they don't want your new 2-3x better interface speed/density, you have zero chance to deploy it. It takes years for JEDEC to agree on a new memory interface standard. Deployment of your new 2x, 5x, or 10x better tech actually depends on your competitors agreeing to make it the new standard.

Flash can accept higher latency as a trade-off if needed. The DDR interface's latency has a large impact on CPU/system benchmarks.


CPUs typically support larger DIMM sizes than exist at launch. Having to wait on CPU manufacturers to support larger sizes is probably not an issue. Having an incredibly small market for those larger sizes probably is more of an issue. The things stored in RAM are replaced all the time. The things stored in flash typically are not replaced, but appended to, creating demand for more.


I think what the parent post was getting at is perhaps better visualized:

For flash storage, the hierarchy looks like this:

CPU -> standardized interface (PCIe, SATA) -> Controller Chip -> NAND

For DRAM, it looks like this: CPU -> DRAM

The DRAM controller is (these days[0]) built directly into the CPU, whereas for flash storage, the NAND controller communicates with the CPU (indirectly) over a standardized interface. So for flash storage, the designers have control over which controller chip they use, and as such can change the NAND technology used at will. They aren't even limited to NAND, if something better comes along. Whereas with RAM, you can't just plug DDR4 into a CPU that only "speaks" DDR3. I think that flexibility is the important distinction.

[0] It used to be that the DRAM controller was on the northbridge, which was a separate chip from the CPU. But for performance and power-consumption reasons, the flexibility trade-off was made. (The Athlon 64 was the first [consumer?] CPU to put the memory controller directly on-die, and that was a large part of the reason it crushed the Pentium 4.)
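To make that layering concrete, here is a toy sketch (Python; the class names are purely illustrative, not any real driver or firmware API):

    # Toy model: the CPU only sees a standardized block interface, so the
    # vendor can swap the controller/NAND behind it at will. The DRAM
    # protocol, by contrast, is baked into the CPU's on-die memory controller.
    class TlcNandController:
        def read(self, lba):
            return b"..."                 # placeholder; could be QLC, 3D NAND, etc.

    class SataInterface:
        def __init__(self, controller):
            self.controller = controller  # vendor's choice, invisible to the CPU
        def read_block(self, lba):
            return self.controller.read(lba)

    class Cpu:
        def __init__(self, storage, dram_generation="DDR3"):
            self.storage = storage
            self.dram_generation = dram_generation  # fixed at design time

    cpu = Cpu(storage=SataInterface(TlcNandController()))
    cpu.storage.read_block(0)  # works regardless of what NAND sits behind SATA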


CPUs may, but getting BIOS support on your particular board may be an issue. I've had a number of boards over the years where I had to wait months for the manufacturer to put out an update to support new DIMM sizes correctly.


Make those protocols NVMe, PCIe and SATA. Don't think I've seen an IDE flash drive yet ;)


CompactFlash (CF) is effectively IDE/PATA.


"MLC is half as expensive as SLC. TLC is 33% less expensive than MLC. QLC is 25% less expensive than TLC and 75% cheaper than SLC."

visualized: http://i.imgur.com/niinuEz.png


All the things you mention combined only explain a factor of up to 4 (probably much less, since the complexity increases), and a factor of 2-3 was already realized years ago.


Yup & the most recent flash memory consumer price drops are due to a process shrink plus 3D die stacking, rather than just MLC/TLC.

Stacking can and will be used for DRAM as well, but 3D DRAM-based products have not hit high-volume consumer markets yet, so stacking is currently widening the cost-per-bit gap between DRAM and NAND.


DRAM-type memory uses a completely different process than flash, even if they're both a form of "memory". The performance of DDR-type memory is well beyond anything in the flash world.

Today 2 GB/s is considered very good for an SSD, but that would be brutally slow for system memory. DDR4 memory typically delivers 30-60 GB/s, with the low end being two-channel and the high end four-channel.
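Rough arithmetic behind those figures (a sketch of theoretical peak bandwidth; sustained numbers are lower):

    # Peak DDR4 channel bandwidth = transfer rate x 64-bit (8-byte) bus width.
    def ddr4_peak_gb_s(mega_transfers_per_s, channels):
        bytes_per_transfer = 8
        return mega_transfers_per_s * 1e6 * bytes_per_transfer * channels / 1e9

    print(ddr4_peak_gb_s(2133, 2))  # ~34 GB/s, dual-channel DDR4-2133
    print(ddr4_peak_gb_s(2133, 4))  # ~68 GB/s, quad-channel DDR4-2133
    # versus roughly 2 GB/s for a very fast 2016-era NVMe SSD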

DRAM has also been the subject of aggressive research and development for many, many decades, while large-scale production of flash is a relatively recent phenomenon. It's the widespread adoption of smartphones, thinner notebooks, and ubiquitous USB keychain-type devices that has pushed it to the volumes it's at now.

There's also the concern that DDR memory must have a very high level of data integrity, bit-flip errors are severely problematic, and it can't wear out even after trillions of cycles. Flash has more pervasive error correction, and while wear is a minor concern, it's still possible to exhaust it if you really, really try.

I'd say the reason flash memory prices are down so steeply is that the new "3D" process used by Intel and Samsung has been a big game-changer, allowing much higher density. DRAM has seen more gradual evolution over the last few generations.


> There's also the concern that DDR memory must have a very high level of data integrity, bit-flip errors are severely problematic, and it can't wear out even after trillions of cycles. Flash has more pervasive error correction, and while wear is a minor concern, it's still possible to exhaust it if you really, really try.

The reason we can't use the pervasive and extremely aggressive error correction [1] that flash storage uses to increase usable densities with DRAM is that forward error correction (FEC) is block-based: to read a couple of bytes from a block you have to read the entire block [2] and decode it before you can have your bytes.

This does not work well for RAM:

Memory fetching is fundamentally based on cache lines, and the overhead in both bandwidth and latency(!) of fetching and decoding, say, 64 KB instead of 64 bytes would be completely unacceptable in most applications.

[1] It's one of the major factors contributing to device endurance.

[2] Simplification: practical FEC is multi-tiered, i.e. there are different block sizes involved and multiple layers of EC at these block sizes.
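A minimal sketch of that overhead argument (the 64-byte cache line is real; the 64 KB FEC block is just an assumed example size):

    # Read amplification if DRAM used large block-based FEC the way NAND does.
    cache_line = 64          # bytes the CPU actually wants per access
    fec_block = 64 * 1024    # assumed FEC codeword size (illustrative only)

    amplification = fec_block // cache_line
    print(f"Each 64-byte fetch would move {amplification}x the data "
          f"and pay the full block-decode latency on top.")
    # -> 1024x more bandwidth per random access, which is fine for flash
    #    (reads are page-sized anyway) but hopeless for DRAM cache-line traffic.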


I think you're only half-right here.

You cannot do this kind of error correction like ECC checking in the CPU because it would require multiple clocks or too many pins.

You cannot do this kind of error correction across multiple DRAM chips on board because it would require too many pins.

But you could do this kind of error correction on-chip, because you could just read 1024 bits (or more) in parallel from multiple areas (or even multiple stacked silicon slices).

One problem, however, is that each small read would incur a high energy cost due to the many bits fed into the extensive error correction circuitry. While it shouldn't be a problem for consecutive reads, random access would lead to high power consumption, which in most cases would not be acceptable.


While your technological reasoning is valid, it explains a constant factor of price difference, but not a change in that difference, which is the original question.

The "3D" process explains an additional factor coming into play at recent times. I bet there are a range of technologies which were developed for flash and used for flash manufacturing but not for DRAM manufacturing. This explains a growing disparity between the size/price ratio.

Let's not stop there. If those technologies are useful for flash, why are they not used for DRAM?

I see no hard reason why the 3D process cannot be made usable for DRAM.

The answer, I guess, is that market forces lead to different priorities: the size/price ratio for DRAM is not so important, and more development goes into other areas like speed, reliability, and power consumption. So the real reason is not technological but market-driven.


DRAM is not flash, flash is not DRAM. They are fundamentally different technologies.

"Let's not stop there. If these technologies are useful for flash, why are they not used for magnetic core memory?"

It's like saying that.

Often pricing is strongly tied to volume. If they're making more flash memory, flash memory becomes less expensive. That does have a moderate crossover effect on DRAM, since anything that drives process improvements benefits all fabricators, but it's not as direct as you'd think.


For some historical context this is worth remembering:

https://en.wikipedia.org/wiki/DRAM_price_fixing


This should be top.

I purchased 64GB of DDR3 for <$100 a year ago. Now the prices are >$100 for 32GB of DDR3. DDR4 prices are idiotic: ~$100 for 16GB.


>I purchased 64GB of DDR3 for <$100 a year ago.

I don't think it was ever this cheap


Looking back on my prior receipts, a stick of 8GB DDR3-1600MHz a year ago was about $12. Currently, the lowest price I see is $21 on pricewatch.com, with Newegg being the seller offering the deal (my receipt is from Portatech).


>Looking back on my prior receipts, a stick of 8GB DDR3-1600MHz a year ago was about $12.

This would make a 4Gbit DDR3 chip cost less than a dollar, and I don't think that DRAMeXchange prices have ever been that low.


You are responding to somebody who claims to have a receipt with an "I think". The best way to respond to an anecdote is data, not opinion.


I'm the kind of person that keeps receipts for tax purposes, as all parts I buy are for business purposes (repairs, fresh builds, etc.)


Umm, would you care to point out where in my unedited post I said "I think"?


I did mention the DRAMeXchange source though.


In the late nineties I spent £100 on 4 megabytes of RAM. Less than 2 months later it was £12.


I don't think DRAM prices declined that fast even in 1996.


It is not the speed, reliability, or processes.

There are numerous flash manufacturers all racing each other to build out capacity and increase density because they want to eat the HDD market.

DRAM is controlled by a small cabal, all of whom have multiple convictions for price fixing and collusion. I believe Hynix shipped some sacrificial C-suite execs to do prison time in the US over it. To a large degree, no one is building out DRAM fab capacity either.

DRAM prices have stayed high because the manufacturers want them to stay high and are colluding (either explicitly or implicitly) to keep prices up. The capital cost to compete is enormous: many billions before you can make your first sale. The day you break ground, memory prices will mysteriously drop so low as to make your venture unprofitable, meaning your commercial loans get called in and you go bankrupt. Everyone understands this and avoids attempting to compete.


This is quite controversial and sounds more than a little conspiracy theory-esque.

While I'm sure this sort of thing goes on, if you could back your statements up with some evidence it would be really helpful.

What you're suggesting here allows for the possibility that at some point some tech firm creates a way for us to ditch traditional DRAM and use the cheaper flash storage instead.

My gut tells me there is more to it than politics and capitalism.


I mean, 5 different companies have pleaded guilty to price-fixing in the DRAM market, so I don't think what is suggested here is too controversial.

https://en.wikipedia.org/wiki/DRAM_price_fixing


Wow. Thanks for surfacing this!


"This is quite controversial and sounds more than a little conspiracy theory-esque."

Maybe so - but about 90% of the world economy competes this way.

People are not stupid.

'Competing on price' is a dumb thing to do in an oligopoly, because it's effectively handing surpluses to another part of the value chain.

Every industry CEO knows how this game is played, and they definitely play it - there's no explicit collusion needed - it just happens naturally when there are 'reasonably intelligent actors acting in self-interest within a market oligopoly'.

'Dumping' to exterminate competition is a very common tactic, used throughout history.

It takes a lot of disruption to change institutional dynamics.

I can't speak for the facts in this situation - I have no idea if those allegations are true or not, but they seem reasonable, I wouldn't be surprised if they are valid statements.

"My gut tells me there is more to it than politics and capitalism." - that's usually all it ever is :)


If that's the case it does leave the door wide open to a technological alternative.

My point is that greed and self-interest are not the only tent poles in this situation - which I fully appreciate are so common.

The technical aspect of DRAM's performance and configuration is clearly another, otherwise the industry would have moved on. This suggests either:

1) a wider circle of collusion, with other manufacturers in their back pockets to lock in an environment ripe for an oligopoly, which is especially preferable if the market is expected to disappear (a better option is primed to surface); or,

2) simply that nothing beats DRAM at the moment, so we're stuck with this until someone intentionally sacrifices themselves in order to drive down the price.

I can't see that happening, of course.

What's more likely, IMHO, is that research and development into other approaches is proving fruitful and the companies who have already invested heavily in the status quo are merely fixing their prices to maintain healthy balance sheets/market cap while they phase in the next-gen tech.


There are well-known strategies for breaking cartels that engage in price-cutting, e.g. Dow Chemical (https://fee.org/articles/herbert-dow-and-predatory-pricing/).

Although repackaging chemicals is probably easier than reselling DRAM, as I assume there are IP rights involved.


Although this is a great article, I'd suggest this is not by any means a 'well known strategy' - it's rather a brilliant bit of entrepreneurialism and an anomaly at best.

It would take some serious financial backing and wherewithal to pull this off; moreover, it could really only be done in a commodity market.

I'm not even sure it could be done today.


It happens in all sorts of industries.

Going back a bit, this one

http://www.heraldscotland.com/news/12093375.display/


I don't doubt that price fixing happens.

I doubt that small businesspeople can out-compete big cartels who want to dump on them.


Yep. As usual, it's impossible to have a reliable cartel if the government doesn't support it.


The Hynix price fixing appears to have taken place over ten years ago. Does your "break ground and get killed" principle still apply? Have any new vendors shown up?


Would you take a billion dollar chance on it?


Would Apple?


If I had to hazard an uneducated guess, I would propose that the recent mainstream acceptance of solid-state drives as a viable, affordable alternative to spinning disk drives has created a sudden demand for flash memory that has caused that industry to thrive.

At least at the retail level in the Best Buy where I worked until recently, I watched Solid State drives transition from something only high end computers had to something that was standard even among the lower priced value machines. We had customers complaining about the smaller drive sizes because they were so accustomed to the gigantic storage offered by the spinning disk media at its height in popularity.

I'd love someone with more industry knowledge to chime in though, as my own experience here is pretty limited. This is simply what I've observed in my own corner of the world.


RAM is only capable of storing one bit per cell. Flash didn't really take off until MLC technology came around, giving the ability to store multiple bits per cell, which vastly increased density.

Theoretically, RAM could be built that way, but it would be much slower. Every cell read/write would need to go through an ADC/DAC, and the noise is much higher due to leakage. This slowness isn't much of a problem for flash because its competition was spinning disks, which were slow as molasses anyway.
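A sketch of what that "analog" read-out amounts to (the threshold voltages here are made-up illustrative values, not a real device's):

    # Reading an MLC cell: the sensed voltage is compared against reference
    # thresholds and quantized to 2 bits, like a tiny ADC.
    import bisect

    read_thresholds = [1.0, 2.0, 3.0]  # volts, separating 4 charge states (illustrative)

    def read_mlc_cell(sensed_voltage):
        level = bisect.bisect(read_thresholds, sensed_voltage)  # 0..3
        return format(level, "02b")                             # 2 bits

    print(read_mlc_cell(0.4))  # '00'
    print(read_mlc_cell(2.6))  # '10'
    # With TLC/QLC the windows between thresholds shrink, so sensing is slower
    # and leakage/noise eats into the margin much faster.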


I think this works by providing a second application for older chip manufacturing facilities. For SD chip designs, speed and size effectively do not matter (the controller will matter a hell of a lot more for final speed than the storage chips' own speed). So they're using the chip fabs that everyone else is abandoning.

As a second bonus, even on old systems SD card circuits are relatively small (compared to a 5-60" LCD they certainly are). Wafers are round and old wafers are used to manufacture LCD displays, so small chips can be placed around them in the manufacturing process and get really good economics by having lots of manufacturing options.

So same reasons displays are getting cheap, except they're even better. So the race to the bottom is happening pretty fast for SD cards.

Not entirely sure about this. Might be entirely wrong, but I'm not sure how to confirm this.


Slow SSDs have gone down in price, but fast SSDs are still expensive. For example, you can get a 500GB Samsung 850 for about $130, but a Samsung 960 Evo costs $250, and the 960 Pro is another $100 on top of that. Those three drives range from 600MB/sec to 2222MB/sec linear reads, the fastest costing the same as a 600MB/sec SSD did 3 years ago.
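Running the dollars-per-gigabyte numbers on those examples (prices as quoted above; the 960 Pro taken as $250 + $100):

    # Price per GB for the 500 GB drives mentioned above.
    drives = {"850 (SATA)": 130, "960 Evo (NVMe)": 250, "960 Pro (NVMe)": 350}
    capacity_gb = 500

    for name, price in drives.items():
        print(f"{name:15s} ${price / capacity_gb:.2f}/GB")
    # -> $0.26/GB for the slow drive vs $0.50-0.70/GB for the fast ones:
    #    roughly a 2-2.7x premium for the faster controller and NAND.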

The demand for slow RAM drops precipitously after whatever Intel chipsets use it stop being used in new systems (not sure if the same is true in the embedded market). For example, nobody's buying DDR2 these days. So the economies of scale dissipate and fabs retool faster.

So while both devices have economies of scale, SSDs have an extra performance dimension to their demand curve that allows slower, higher-density chips to still be profitable.


A question that seems related: what the heck is up with memristors? The Wikipedia page (https://en.wikipedia.org/wiki/Memristor) says that memristors are estimated for commercial viability in 2018, and have been built in prototype, but also says that there are serious doubts about whether memristors can possibly exist in physical reality! What gives?


Perhaps find better sources than Wikipedia? You can buy Memristors now, but it's going to be many many years before they get up to densities you'd expect in modern computing systems. (See sites like http://www.bioinspired.net/products-1.html or http://knowm.org/product/).


In 2017 commercial viability will probably be pushed back to 2019.

There is some debate about whether HP's chips are "true" memristors or just PCM, but ultimately that's irrelevant to customers as long as they store data.


1. There is little demand for more memory.

2. There are only a few memory makers left on the market.

3. Moore's Law is no longer applicable; a smaller transistor isn't necessarily cheaper any more.

4. You can tolerate some bad NAND cells; you don't want bad memory.

China has decided to pour tens of billions into the NAND and DRAM industry by 2020; until then, prices should be very stable/predictable.


Why does the OP imply that there is some relation between the two products? It's true that both use lithography and silicon, but they are dramatically different internally.

Think of flash as a consumable medium - it is, since each cell can only be erased a few hundred times. After all, that's what it means when the vendor says "200TBW for a 256GB device": you can expect 700-800 cycles per cell. This is also why vendors are pushing capacity so hard: it lets them push down the price while at the same time not needing to improve endurance.
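The arithmetic behind that endurance figure (ignoring write amplification and over-provisioning):

    # Rated endurance expressed as whole-drive write (P/E) cycles per cell.
    tbw = 200           # terabytes written, from the example spec
    capacity_gb = 256

    cycles = tbw * 1024 / capacity_gb   # using 1 TB = 1024 GB
    print(f"~{cycles:.0f} full-drive writes, i.e. roughly {cycles:.0f} P/E cycles per cell")
    # -> 800; real-world numbers come out lower once write amplification is
    #    included, hence the 700-800 cycle ballpark.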

So in some sense, the answer is "endurance", since the physics of flash erasure necessitate high capacity, and no one would buy the extra capacity unless it were also relatively cheaper. Whereas DRAM doesn't wear out...


OP is simply asking.


After reading through the answers here, I don't think the real answer has been given.

Is it the technology?

* Flash cells can store more data and be produced more cheaply per cell. But they are more complex to read out and slower.

This can explain some of the factor, but probably not the factor of 40 given by the OP.

* Flash and DRAM probably use different processes.

This could explain a bit but look at the next point...

* DRAM has a much longer history and (at least in the beginning) much higher capital investment.

...which means that DRAM should have the technological advantage. At least through economies of scale.

Is the cumulative investment in flash research already much bigger compared to DRAM research?

Is the process used to produce flash memory so much easier?

Is it the market?

* Obviously people pay the price.

* With DRAM people are hungry for performance more than they are for size.

* We already have more than enough DRAM. The latest MacBook Pro demonstrates that 16 GB of DRAM is enough for just about everybody, while flash storage goes up to 1 TB.

* For those 16 GB of DRAM, speed and power consumption are much more important than raw size.

Coming back to cumulative investment: I think the primary pain point for flash has been the price per GB. Flash could be more durable, faster, more reliable, and less power-hungry, but those are all secondary. It is fast and reliable enough thanks to very complex RAID-like controllers. Power consumption is not a big issue, since HDDs already use a lot and the data mostly just sits around. The main driving point is the price per GB. This is where the money goes in flash development.

On the other hand, for DRAM, after some point it is mostly speed- and power-driven. Reliability has to be comparatively high, as every cell must work for years. Size is mostly increased by improving semiconductor processes, where flash probably uses a lot of the same technology. Using the layer-stacking technologies of flash is probably not yet applicable because it is not reliable enough and not compatible with the cell layout; maybe it never will be.

If we really were hungry for so much RAM we would probably get it. But we aren't. It's good enough. Progress slows down.


You're thinking like an MBA and dismissing the biggest factors

Writing to DRAM is much, much faster than writing to an SSD; there's also reliability (in the sense that you can use chips with defective cells in SSDs).

You mentioned those factors but you're not giving them enough importance. Market factors are important, but the price is not going to be less than cost.


I'm thinking like an MBA here because that's how you have to think in order to reason about the price something has.

Yes, those factors are important! But are they also the main driver of cost? I don't know.

Could we have speed, reliability and size? That is a technological question. I don't have the answer to that.

Would enough people care to pay for speed, reliability and size? That's more of an MBA question and the answer probably is no.

The example given is: People pay extra to have 1 TB of flash in their machine but they are OK with only having 16 GB of RAM. The priority is even to have a lower power consumption instead of having 32 GB of RAM.

While you are right that maybe we cannot deliver the technology from a purely technical standpoint, you kind of miss the point. Even if we could, we wouldn't do it, because the MBAs drive the finances in a certain direction based on (perceived) market forces.


The question was: "Why are flash memory prices going down so much faster than RAM?"

Answer: because there are (technological) ways of making it cheaper (and probably slower/less reliable, but not much), as well as economy of scale factors

Why DRAM doesn't get cheaper: because it has to follow much stricter requirements.

The question itself is naive. It thinks "megabytes of memory" are the same thing regardless of form factor

> Would enough people care to pay for speed, reliability and size? That's more of an MBA question and the answer probably is no.

Yes, probably not. Not the masses at least. Unless they buy Apple, they (still) care about this mostly, apart from adapter shenanigans


16GB RAM is fine for consumer devices (mostly), but hilariously small for scale/hyperscale environments.


The general consensus is that the MacBook Pro is not so great with only 16GB of RAM.


Looking at Hynix's financials, they're making enough money to reduce the cost of RAM quite a bit. Looks like it's just them maximizing their profit, as one would expect. I assume it's similar for the rest. As always with for-profit firms selling hardware or IP.


Unless you're suggesting collusion, that doesn't seem to make sense. Prices should fall in a competitive market, regardless of what profit a manufacturer would ideally like to maximize.


Collusion has already happened in this market. I don't need that, though. I've observed a pattern in a lot of markets where big players, especially those wielding patents, will turn into cartels (intentional or emergent) that act in both individual self-interest and collective self-interest. A race to the bottom wouldn't benefit any of them. So, they compete in little ways to fight for market share but make sure the business models keep the profits up.

The telecoms, both cellular and ISPs, are where this is most obvious: they compete in the most incremental way, saying it's impossible to do anything else. Then a small player comes in doing what they're doing... sometimes the same way... with more benefit at a tiny fraction of the cost. Suddenly, they can afford to do the same with their existing infrastructure. Real competition would've brought in unlimited cell phone plans or gigabit broadband much sooner for similar or lower prices.

The largest vendors of RAM are likely a cartel in practice. It's the best outcome for each of them to not race to the bottom. They don't even need to talk to know that. That they can reinforce it with patents they're more likely to have than the smaller players is icing on the cake.


Objective Analysis has an entire report on this: http://objective-analysis.com/Reports.html#Consolidation


Appreciate it. Going by the abstracts, it looks like there's a cost advantage to NAND, the DRAM people are greedy cartels, and they might have to merge to reduce costs. Looks like the greed is backfiring, but we must remember that CEOs and boards focus on the short term due to incentives. They might still "win" even if the firms start losing.


Cellular providers and ISPs are both regulated (especially ISPs, where a cable provider might have a legal monopoly in an entire municipality). Doesn't seem the same.


Which is irrelevant if their day-to-day operations, especially speed and pricing, aren't regulated in a way that forces one up and the other down. It doesn't help that the head of the regulator usually comes from the telecoms or their lobbyists. Or that they bribe politicians to protect their interests, like getting states to ban taxpayer-funded networks.


An additional point: People don't know the quality of the flash they are buying. You're seeing bottom-bin crappy parts made to look better than they are with ECC. You're also NOT seeing data retention; that cheap USB stick may develop unreadable blocks sooner (this is much more likely with TLC and QLC, where the difference between two bit values can be measured in the hundreds or even dozens of trapped electrons).


RAM is written to faster than any other component other than the onboard processor cache.


Read/write speeds for DRAM are much faster than for flash (although the gap is closing, and the day may soon come when computers are sold with flash storage but no DRAM and the distinction between memory and storage is done away with).

DRAM is also a much more mature technology than flash is, so more of the low-hanging fruit for improvement has already been taken advantage of.


RAM is more expensive to produce in general (more transistors, more stringent specs as you mentioned), but as to rate of change, that seems more likely to be due to competitive pressures and maybe more rapidly increasing demand for flash in recent years versus RAM.


All technical reasons aside, it has got to be mostly demand. I can't see computers coming pre-configured with more than 64 GB, but for storage there is ever more demand as the price reaches parity with spinning disks. One thing that is true is that we are as storage-hungry as ever. High-res photos created by every phone, companies logging every minute process in hopes of using it for analysis later... Our need for storage vastly outstrips our need for RAM.


The market for DRAM is relatively mature, while the market for flash is still developing as it cannibalizes the hard drive market. Consequently, you see flash taking advantage of newer processes and techniques first, as flash manufacturers aim for ever greater volumes in pursuit of profits.


I think NAND flash is more flexible in terms of design than DRAM is, for example 3D NAND. NAND flash generally communicates through a separate controller that uses, for example, the SATA or USB bus, while the DRAM controller is built into the CPU or chipset.


And here I am with 24 GB of ECC RAM for 40 bucks on eBay. Workstation PCs ftw!


This is the relevant chart:

http://www.jcmit.com/mem2015.htm

DIMM prices seem to be going down more slowly lately.



