Computer memory prototype ditches 1s and 0s for denser data storage (newatlas.com)
95 points by DamnInteresting on July 17, 2023 | 105 comments



The Intel 8087 math coprocessor (1980) had too much microcode to fit into a standard ROM. It was implemented with a special ROM that stored two bits per transistor by using four different transistor sizes, almost doubling the density. Each value read from the ROM was thresholded to generate two output bits. This technique was also used in the iAPX 432's I/O processor chip.


NB: ROM shenanigans are very traditional. The Tektronix 7000 readout (1969) managed to encode the stroke of an entire character with just a few transistors. Here's the character '3', encoded in 24 transistors: https://w140.com/tekwiki/images/e/e0/7000-readout-chargen-ci... This uses multi-emitter transistors to encode discrete current ratios (which are then interpolated across groups by smoothly ramping base currents between points, turning them into strokes), with all connections done in the metal layer, so the silicon was the same for all ROM chips.

https://w140.com/tekwiki/wiki/7000_series_readout_system


They were not messing around with the analog wizardry back in the day!

Before even the first microcontroller they were doing this!


ROMs were non-binary before it was cool.


Isn't this exactly what modern TLC/QLC do?


Same concept for reading (threshold regions map to multiple bit values), but the storage mechanism is different.

The 8087 used physically different transistor sizes[0] to read a voltage level that "stored" two bits across four threshold regions. MLC/TLC/QLC/PLC use a single transistor but charge the gate to an "analog" voltage that falls into one of 2^n threshold regions - MLC has four (two bits), TLC has eight (three bits), QLC has 16 (four bits), and PLC has 32 (five bits).

[0]: https://www.righto.com/2018/09/two-bits-per-transistor-high-...
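A minimal sketch of the read side of that scheme, with made-up normalized thresholds (the real 8087 compared currents set by the four transistor sizes, not these values):

    # Illustrative only: decode one multi-level ROM cell into two bits by
    # comparing its analog read level against three thresholds.
    THRESHOLDS = (0.25, 0.50, 0.75)  # hypothetical, normalized 0..1

    def decode_two_bits(read_level: float) -> tuple[int, int]:
        """Map a normalized analog level into one of four regions, i.e. 2 bits."""
        region = sum(read_level >= t for t in THRESHOLDS)  # 0..3
        return (region >> 1) & 1, region & 1

    print(decode_two_bits(0.10))  # (0, 0)
    print(decode_two_bits(0.60))  # (1, 0)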


Vinyl records also don't have 1s and 0s; that doesn't make them better than Blu-ray. MLC NAND has more than just 1s and 0s, it has values from 0..7 or even 0..15. Does anyone really want to process analog data?


> Does anyone really want to process analog data?

But... you do, all the time. Pretty much all the time these binary states are calculated from curves, or approximations. The state isn't 1 or 0, it's the set of states which map to 1 or 0 depending on error-correcting codes, over-sampling, things like Manchester encoding.

We're fundamentally imposing binary on a non-binary state, by making deterministic judgement calls about which side of a divide things fall on, but to do that we have to look at analogue qualities.

I like to imagine people who work in VLSI smile at this statement, but at some level it's analogue everywhere. It's the higher layers which get to act like it's binary, but they map onto an analogue signal substrate.


Well, analogue everywhere until quantum effects come into play.


1. Vinyl grooves are much larger than DVD pits. Kind of a silly comparison considering the article claims the density of this scheme would exceed what we currently have.

2. It appears to not be analog.

>An energy barrier is created at the points where the bridges meet the device contacts, and the height of this barrier can be controlled which changes the electrical resistance of the overall material. That in turn is what encodes the data.

That seems to indicate they can have a base-n encoding where n is what they can practically achieve with the barrier/bridge layering scheme.


Where it gets interesting is LaserDisc, since that IS the same size (more or less) as a 12" LP, play time is roughly 30 minutes per side, and the video signal is fully analog. The audio is analog plus an optional PCM or Dolby Digital signal.

Both the audio and video are FM encoded.

(Yea, 30 minutes per side is long for an LP, more like 20-22 minutes is typical, but hey, it's analog. To a point you could increase playing time by decreasing volume.

The most extreme case of this I'm aware of is "90 Minutes with Arthur Fiedler and the Boston Pops", which is what it says it is... the first side alone is a hair over 47 minutes).


Something that's continuous is analog. I would even say that the MLC/TLC/QLC use analog storage to represent multiple bits per site.

We would already have something similar with SSD and many more bits/levels per site if voltages were precise and stable over time and number of overwrites.

An interesting use would be for storing 1-dimensional data. In effect the errors would be minor and not randomized/scattered as they would be for digital encodings.


Wasn't optane a pretty noticeable leap in voltage stability? They could've eventually pivoted to high density packing if they hadn't given up.


Optane was never going to reach Flash levels of density. It was always in that awkward zone between DDR4 and Flash. Slower than DDR4 but faster than Flash. More dense than DDR4 but less dense than Flash.

It was a failure of marketing and building. Today's computers use a team of DRAM (like DDR5) and Flash. (GPU-ram like HBM or GDDR6 are still DRAM fundamentally).


What do you call Wifi, or cellular data, or ethernet, or DOCSIS?


They claim it's continuous. If continuous values were inherently that great, then vinyl records would be great for data storage. The problems with continuous values are noise, durability and precision. You can make vinyl disks with incredibly high precision, but preventing damage and reading back those values is not practical. That's why even the first analog laser medium had discrete pits and lands instead of a continuous signal.


It can be continuous but still have a limit on data capacity due to noise. In vinyl's case, the more data you store, the higher the error rate (at some point the difference between states is hidden in the noise).

The data rate of a noisy channel or storage medium can be derived from the Shannon-Hartley theorem.

This is why the old analogue phones could only do 56k for example.
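For a rough sense of why analog modems topped out where they did, here's the Shannon-Hartley capacity of a voice-grade line; the bandwidth and SNR figures below are assumed, textbook-style values, not measurements:

    import math

    def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
        """Shannon-Hartley: C = B * log2(1 + S/N), with the SNR given in dB."""
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Assumed voice-channel figures: ~3.1 kHz bandwidth, 35-40 dB SNR
    for snr_db in (35, 40):
        print(f"{snr_db} dB -> {shannon_capacity(3100, snr_db) / 1000:.1f} kbit/s")
    # Roughly 36-41 kbit/s; 56k modems beat this only because the path was
    # digital everywhere except the final analog loop.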


Robbed-bit signalling was largely responsible for the 56 Kbps limit on voice circuits. The FCC's regulations regarding crosstalk further limited real-world throughput to roughly 53 Kbps.

Leased point-to-point DS0 circuits could do a full 64 Kbps if the whole circuit was 8-bit clean.
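The arithmetic behind those two figures, spelled out (standard DS0 parameters):

    SAMPLE_RATE = 8000   # PCM samples per second on a DS0 voice channel
    BITS_PER_SAMPLE = 8  # 8-bit PCM

    clean_ds0 = SAMPLE_RATE * BITS_PER_SAMPLE  # 64,000 bit/s on an 8-bit-clean circuit
    # Robbed-bit signalling steals the LSB in some frames, so modems treat
    # every sample as having only 7 reliable bits.
    robbed_bit = SAMPLE_RATE * (BITS_PER_SAMPLE - 1)  # 56,000 bit/s
    print(clean_ds0, robbed_bit)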


>This is why the old analogue phones could only do 56k for example.

And somehow, analogue circuit-switched phone calls still sounded better, with fewer artifacts, than most mobile phone calls!


There's a legitimate reason for that. Tom Scott did a video on it.[0] It basically boils down to telephone switches starting to compress the audio. Compress and decompress multiple times (by various companies) on the way to you, and the quality degrades very quickly.

[0]: https://www.youtube.com/watch?v=w2A8q3XIhu0


It's been a long time since I've called anyone using a phone number. VoIP apps like Discord took over because they don't sound like crap


There are billions of calls made using a phone number still.

"Across the world, we make 13.5 billion phone calls every day, using mobile phones. The average person makes or receives around eight phone calls every day, meaning that the US deals with around 2.4 billion phone calls across the 300 million cell phone users" that's from 2023 source.

Here's a 2019 one about US:

On average, people make and receive a total of 178 calls per month. Of those 178 total calls, 93 are calls received and 85 are calls made.

Not just older people either:

"80% of Millennials and 84% of Gen Z-ers use their phones during 2020 for phone calls".


Yes, that’s why I hate their headline.


There's a Japanese company that makes a laser turntable that can play a vinyl record without any physical contact. It's pretty cool.


> Does anyone really want to process analog data?

I mean... maybe we should?

All radio transmissions, such as 802.11, are fundamentally analog. And then they are converted back into digital using incredibly dense QAM encodings. If you think QLC's 4 bits per flash cell is big, wait till you see 256-QAM (aka 8 bits per 802.11 symbol), or more.

I believe I saw a spec for some fiber-optic system that uses 32768-QAM (aka 15 bits per symbol).
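Bits per symbol is just log2 of the constellation size; a quick check of the figures above (16 levels is the same count as a QLC flash cell):

    import math

    for points in (16, 256, 4096, 32768):
        print(f"{points}-QAM -> {int(math.log2(points))} bits per symbol")
    # 16-QAM -> 4, 256-QAM -> 8, 4096-QAM -> 12, 32768-QAM -> 15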

--------

Then again, Flash (and other memory) is composed of individual cells. True-analog signals like radio waves are truly analog, so maybe it's not quite the same.

But maybe we should think of things like tape or hard-drives as an analog signal. Or maybe not, who knows? I'm not actually a storage specialist so I don't really know. (Nor am I really in communications, lol).


"Analog" and "digital" do not refer to the physical process of transmission, such as radio waves, but the interpretation of the signal. A digital system has rules for forming and interpreting the signal, so that the transmitted and received messages are identical. These rules typically involve things like signal level thresholds and timing.

Well-engineered systems have "noise immunity" meaning that the thresholds are well above the noise floor, making the likelihood of an error negligible.

If you hang an oscilloscope probe on a "digital" signal, such as the data lines on a memory chip, it might look like a frightening mess, but so long as the rules are obeyed, data transfer occurs faithfully.

In a true analog medium, there's no way that the receiver can make out precisely what was sent, and two receivers will get different information.


"Well-engineered systems have "noise immunity" meaning that the thresholds are well above the noise floor, making the likelihood of an error negligible."

Well-engineered systems also expect noise and compensate with error detection/correction.


All mediums are ultimately analog.


256-QAM and above is there, but you will almost never see it in practice unless you're next to the AP. Wi-Fi 7 defines up to 4096-QAM, so maybe the economics will drive improvements that didn't land in 6E APs.


You'll see it on almost every cable TV channel these days.


I was referring to Wifi but yes


DOCSIS goes up to 16384-QAM.


256-QAM encodes 4 bits on the in-phase component and 4 on the quadrature one. They can be regarded as independent signals: think of a sin() and a cos() being orthogonal to each other even at the same frequency.
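A sketch of that split for 256-QAM: the byte is cut into two 4-bit halves, each selecting one of 16 amplitude levels on the cos (I) and sin (Q) carriers. The level spacing here is arbitrary, and real systems usually Gray-code the mapping:

    def qam256_symbol(byte: int) -> tuple[int, int]:
        """Split one byte into in-phase and quadrature amplitude levels."""
        assert 0 <= byte <= 255
        i_index = byte >> 4        # high nibble -> level on the cos() carrier
        q_index = byte & 0x0F      # low nibble  -> level on the sin() carrier
        # Map index 0..15 onto symmetric levels -15, -13, ..., +13, +15
        def to_level(n: int) -> int:
            return 2 * n - 15
        return to_level(i_index), to_level(q_index)

    print(qam256_symbol(0x00))  # (-15, -15)
    print(qam256_symbol(0xFF))  # (15, 15)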


I'm pretty sure that Einstein showed that light (including radio) is not, in fact, analog, but rather has discrete levels of energy in quantum steps.


That turns out not to matter, because the noise is always at least several quanta at whatever frequency you’re using so you can’t get close to the quantization limit anyway.


Also our physiological sensory and processing equipment (ie, eyes, ears, brain...) translates analog / continuous input into discrete electrical signals; a given neuron fires or doesn't.


That’s not how that works.


Light is produced and consumed in quanta (photons) by regular matter. Light itself is just a radio wave, and radio waves have no such limitation; it's a limitation of atomic particles.


That's not how quantum mechanics works. Radio waves are photons, and photons are radio waves. Different experiments will reveal different aspects of it, but these are not separate phenomena.

Also, the exact same thing is true of every other particle: just as photons are quantizations of the electro-magnetic waves, electrons are quantizations of waves in the electron field, quarks are quantized waves in the quark field etc. Atoms and molecules are also waves; in fact, there have been experiments showing the wave-like nature of molecules with something like 5000 atoms.


I saw water-like behavior of a wave! I have no idea how many molecules it contains (large waves contain many cubic meters of water), but I'm pretty sure that quantum behavior is not limited to 5000 atoms. Waves, even very large ones, move energy in "packets". One wave, one quantum. So what?

A photon is a radio wave (an EM wave). A typical photon is produced and consumed by an electron, so the properties of a photon produced by an electron are limited by the electron; likewise, our ability to detect photons is limited by the electron. However, EM waves in general are not limited by the electron.

On a smaller scale, the electron interacts with other charged particles and with the EM field (the medium) via EM waves. I'm pretty sure the electron is not standing still; it vibrates because of thermal noise. The vibration of a charged particle produces EM waves. Those EM waves are not photons.


> The vibration of a charged particle produces EM waves. Those EM waves are not photons.

Yes, they are. In the Standard Model, the photon is the "carrier" particle of the EM field. Any EM interaction between any two charged particles is ultimately the exchange of one or more photons between those two charged particles. When a particle emits an EM wave, in any way, it emits one or more photons. It's impossible to emit 1/2 a photon or 1.3 photons - the EM field / EM waves come in quantities of whole photons.

In the same way, when two quarks interact via the strong interaction, they are emitting and absorbing gluons, the carrier particle of the strong force.

The only fundamental interaction not currently proven to be quantized is gravity. We do know gravitational waves exist, but we have not been able to measure them with the required fidelity to verify whether they are also quantized, and thus to know whether they are also equivalent to an exchange of particles (we call these hypothetical particles gravitons, and many do believe that gravitational interactions also occur by the exchange of a whole number of gravitons between bodies with "gravitational charge", i.e. mass).


Unlike regular waves, photons are topologically stable, like a smoke-ring kind of wave. Google "hopfions", for example.

If you like hydrodynamic quantum analogs (walking droplets), then you may see that a droplet sometimes escapes its pilot wave. In such cases, the pilot wave continues to travel in the same direction for some time. IMHO, it's similar to how photons are formed: the electron creates a pilot wave, then escapes it, similar to Cherenkov radiation.

Anyway, photons have a special configuration, so they behave differently than regular waves.

Gravitational waves, AFAIK, are not topologically stable at all.


I think this is a faulty analogy. The data is still encoded (or, at least, interpreted) as digital data, except that instead of each bit being just 0 or 1, you can have components that represent 0 to n. But each state is still discrete, the only difference is whether representing, say, 0 to 3 requires 2 components (2 bits) or a single component (a qit?).


Isn't that just how MLC NAND works though?

Also the article is just describing memristor memory isn't it? Intel's been shipping Optane memristor based memory (which they deny, but others are skeptical of the denial) for years, so clearly I'm missing something.



Optane was PCM (phase-change memory), not ReRAM (memristor).


> ... representing, say, 0 to 3 requires 2 components (2 bits) or a single component (a qit?).

I think the term you're looking for is trit (ternary/trinary digit).


A trit is 0,1,2, no? Tri-state.


0 to 3 would be a quat, 0 to 4 a quit, 0 to 5 a hext and so on.


My understanding is that modern hard drives all use probability and error detection/correction to convert a gooey analog signal into chunky discrete goodness.


> Does anyone really want to process analog data?

If they can get it to encode binary data more densely than existing systems, with acceptable speed/reliability/cost? Yes, of course they would.


This would still be digital (discrete), not analog. For example, you can have 4 voltage levels representing 00, 01, 10, 11. This technique is used in modems.

So you can store more information in each memory location resulting in higher density.


The quality of the media is in the perception of the user. Ask any audiophile whether vinyl is best or not.

Analog isn't really the issue here, as there would be some kind of multiplexing/demultiplexing encoding or device to interface with it. It would still be discrete and have defined levels. As the technology advances you may be able to define more levels in the same bandwidth space, thus creating "more" storage through greater efficiency of the spectrum.


There's an argument to be made that if reality isn't discrete (representable exactly by rational numbers) but continuous (representable by real numbers), you could represent programs that cannot be represented in a discrete way, like on a Turing machine. For instance, the typical proof that the halting problem is unsolvable relies on there being only a countably infinite number of programs.


This isn't about analog data. The states will still be quantized/digitized. Besides, currently used transistors in regular CPUs are analog too.

And does anyone want more than 0 and 1? If it's faster and has larger capacities, with equally feasible manufacturing, then yes.


> Does anyone really want to process analog data?

I imagine at some point in the abstractions it'll get a compat layer. Also, I could be wrong about MLC NAND but I thought it still stored 0/1 and just had more bits stored per cell through stacking.


No, multi-level NAND has extra voltage levels. Instead of just a regular off (0) and on (1), they have intermediate states too. For QLC, that's 16 levels; it's quite granular, and the margins of error are very small.


The margins for error are wider than one might immediately assume, since they combine it with error correction codes.


TLC NAND stores one of eight values in the cell. There aren't three separate values like 1, 0, 1, there is only one value, like "5".


I mean, 101 is just 5 in binary.

So each cell stores either three pieces of binary or one piece of octal. It's the same thing.


It's absolutely not the same thing from a hardware perspective. Normal components always operate on bits. We see "5", but the computer sees (at least) 3 bits. It then sends its output as bits to the next one. A 5 must require 3 wires, or 3 cycles of the clock over a single wire.

Building a computer where each wire can encode multiple bits of data at the exact same time fundamentally changes the way operations are done. The entire thing would need to be built in a different way, with different consequences.

It might be a bad idea, or impossible, but it would be different.
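To put rough numbers on that argument, here's the cycle count to move one byte over a single wire as a function of how many levels the wire can carry (a purely hypothetical wire, ignoring signal-integrity and decoding cost):

    import math

    def cycles_to_send(num_bits: int, levels_per_wire: int) -> int:
        """Clock cycles needed to move num_bits over one wire with the given level count."""
        bits_per_cycle = math.log2(levels_per_wire)
        return math.ceil(num_bits / bits_per_cycle)

    for levels in (2, 4, 8, 16):
        print(f"{levels}-level wire: {cycles_to_send(8, levels)} cycles per byte")
    # 2 -> 8, 4 -> 4, 8 -> 3, 16 -> 2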


5 is not stored as 101; it's stored as charge level number 5 out of 8 valid charge levels.


The encoding used on vinyl records has more information density than if zeros and ones were stored, say by two levels of the groove (or a closely related encoding like RZ or NRZ).


Ah, memristors.

I didn't know the term had fallen out of popularity.

These things were all the rage up till about 10 years ago. “The 4th fundamental missing electrical circuit component”: resistor, inductor, capacitor, and.. memristor. We were supposed to build hardware neural nets the size of human brains with these things, and get petabytes of storage in unheard-of densities. What happened?


But this is not a memristor AFAIK; the breakthrough here is that it has multiple states, not that it can remember those states without power. (At least the article does not mention whether the state is preserved, so I would assume not.)


No, it has a continuum of states. It's the same thing.


Binary data simply means that each datum encodes exactly one of two states. We represent them as 0s and 1s, but that's just an abstraction of convenience - physically, that maps to something like "high voltage" and "low voltage", or "dark" and "light", or "on" and "off", depending on the physical medium.

We could have three states - "high voltage", "medium voltage" and "low voltage" - but the advantage of having exactly two is that it makes it harder to mistake one state for another (e.g. if the voltage fluctuates within a specific range). If the measurement range for each datum is [0, 100], you can decide that anything below 50 is "low voltage" and anything at or above 50 is "high voltage". You can also do the same thing with [0, 33), [33, 66), [66, 100], but that requires more precise tools throughout the entire pipeline. When we talk about bits getting flipped (the entire reason that checksums exist in wire protocols), that's the reason: the medium conducting the signal is imprecise, and sometimes the reading is off.

Traditionally, this represents a tradeoff between density and fidelity. If your system has high enough fidelity, you can take advantage of the additional precision and distinguish between more states, representing additional information.

If your system has 8 states [0, 8.3), [8.3, 16.6), ... etc., you can look at this as an octal system, or you can think of it as a binary system in which a read/write error affects a whole three-bit group, rather than a single bit.

At the end of the day, this is a question of signal processing - binary representation is a convenient abstraction that allows us to understand the way that we're interpreting the signals we're reading, but it's fundamentally an arbitrary choice.
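A sketch of the eight-state example above, with made-up thresholds over a 0..100 scale and a plain binary (non-Gray) mapping, showing how one misread region corrupts the whole three-bit group:

    THRESHOLDS = [12.5 * k for k in range(1, 8)]  # 8 regions across 0..100

    def read_region(measurement: float) -> int:
        """Quantize a 0..100 reading into one of 8 regions (an octal digit)."""
        return sum(measurement >= t for t in THRESHOLDS)

    def region_to_bits(region: int) -> str:
        return format(region, "03b")

    stored, misread = 62.0, 49.0  # noise nudged the reading down one region
    print(region_to_bits(read_region(stored)))   # 100 (region 4)
    print(region_to_bits(read_region(misread)))  # 011 (region 3) - all three bits differ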


This is multi-level cell flash memory, and it's been in use for years. Instead of "high" or "low" voltage they use the in-between states to encode more bits per cell. [1]

I worked at an embedded startup when SD cards started switching to MLC and we saw a really notable decrease in storage reliability. We ended up sourcing special SD cards that were flashed with embedded controller firmware restoring them to single level cell functionality. Your storage space is divided by 2^n going from n levels of voltage per cell back to SLC, but we saw greatly increased data integrity.

[1] https://en.wikipedia.org/wiki/Multi-level_cell


> This is multi-level cell flash memory, and it's been in use for years. Instead of "high" or "low" voltage they use the in-between states to encode more bits per cell. [1]

Yes, I'm explaining it with high and low voltage because that's an easy example to wrap your head around if you haven't thought about signal processing or hardware engineering before.


How much calculus would it require in order to get a decent grasp on signal processing? Not to work in the field or become an expert, but to gain an at least somewhat intelligent outsider’s understanding of it?


Learn what's the deal with the Fourier transform and the Z-transform. Then learn how to design some digital FIR filters. Oh, and do learn about the Nyquist-Shannon sampling theorem; it's very important. And that's about it... just joking, there's tons more. But that would be a good start.


Don't work in the field, but I believe fast Fourier transforms are commonplace, so at least enough to have a solid grasp of that?


Most intro classes require calculus 1 & 2.


Increasing the number of bits per cell is an exponential decrease in reliability for a multiplicative increase in capacity. Flash manufacturers have become notoriously secretive about TLC/QLC retention and endurance and hope that you don't notice. Meanwhile SLC is near extinct and sells for much more than the 2x-3x-4x multiples over MLC/TLC/QLC that it should normally cost. It's none other than planned obsolescence.


> Increasing the number of bits per cell is an exponential decrease in reliability for a multiplicative increase in capacity. Flash manufacturers have become notoriously secretive about TLC/QLC retention and endurance and hope that you don't notice. Meanwhile SLC is near extinct and sells for much more than the 2x-3x-4x multiples over MLC/TLC/QLC that it should normally cost.

Yes.

> It's none other than planned obsolescence.

What makes you say this?

Even products that are built for a long lifetime of heavy use only sometimes need to drop down to MLC. Most servers and basically all consumer devices are fine with TLC.

Lack of SLC is not making drives die early.

Even on those 4-5 bit devices, the impact on performance tends to be a bigger deal than the impact on endurance. And there's no great push to get people onto those drives. TLC drives get most of the attention.


> Your storage space is divided by 2^n going from n levels of voltage per cell back to SLC

it's not that bad, it's divided by log2(n).


Those are known as pseudo-SLC, they are common in industrial applications.


> fundamentally an arbitrary choice

It’s not arbitrary, it’s an engineering choice. It’s just easier to design binary circuits.


> We could have three states…

"With the advent of mass-produced binary components for computers, ternary computers have diminished in significance. However, Donald Knuth argues that they will be brought back into development in the future to take advantage of ternary logic's elegance and efficiency."

(Wikipedia article on ternary computers, https://en.wikipedia.org/wiki/Ternary_computer; Donald Knuth's observation is found in The Art of Computer Programming.)


0 and 1 are human definitions like time and date. You can define any state to be a 0 and any other to be 1, whether it is flowing electrons, flowing water, on-off light, etc. By doing this you gain a simpler model on which you can reason.

You can even expand it to whether a 'neuron' is firing (1) or not (0). What causes the firing does not matter, may it be digital or analog, discrete or continuous.

You want to represent more than two states? Use multiple 1s or 0s, and you can still use the model.

Use something else? Now you have to create your own model where you can map your states and state changes, along with all the associated mathematics.

You can no longer compare a USB 'byte' with this new model 'byte'. What even is a 'byte' in your new model? It's like comparing apples and oranges.

It usually all depends on how accurately you can measure your states, often limited by the signal-to-noise ratio: when is a 0 really a 0 (signal) and not a 1 (noise)?

A good example is telecommunications, where the data is sent over an electromagnetic wave (analog, continuous). In the past you could only distinguish between 32 states, but now, due to better sensors, you can distinguish 2048 states, increasing the bits per symbol from 5 to 11. The same goes for HDD and SD card storage.


> Normally it’s challenging to use [hafnium oxide] for memory because it has no structure at the atomic level – its hafnium and oxygen atoms are randomly mixed together.

What on earth does this mean? It looks like a typical, non-random crystal to me:

https://en.wikipedia.org/wiki/Hafnium(IV)_oxide

The actual article is open access, but rather dense:

https://www.science.org/doi/10.1126/sciadv.adg1946

It seems they used amorphous hafnium oxide:

> To create this amorphous nanocomposite, we added Ba to hafnium oxide during the single-step thin film deposition, and because of its simplicity of compositional control, we used pulsed laser deposition (PLD) to deposit the films. The PLD target had a Ba:Hf cation ratio of 1:2, which exceeds the solubility limit of dopants (with large atomic radii) in (crystalline) hafnium oxide, so that the formation of a second (amorphous) phase could be expected (22).

Is that what the article means? Because it also then talks about adding barium on top of this, whereas you don't even get the amorphous phase without barium.

Anyway, the paper also explains the deal with the Barium "bridges", which the authors call columns, fairly clearly (but it's too long to copy here).

Other observations:

- The authors seem to be very proud of producing this device mostly using "industry-compatible" processes, which I think means stuff you could do in a conventional chip fab, and where they don't, they have a story about how they could. This seems to contrast with other experiments in this field, where the techniques would be very difficult to export from the lab.

- Switching time seems to be > ~100 ns. Maybe that can be optimised a lot. And I don't know what that translates to in terms of latency at the bus level. But that time is ~100x SRAM, ~30x DRAM, ~10x some of these other wacky high-speed storage technologies, but ~1000x less than flash. I could have those numbers highly mixed up, so correct me if I'm wrong.

- There is some chat about "neuromorphic" functionality, where storage works like neurons. I won't go into the details but this sounds like a fad that is probably getting semiconductor researchers nice big grants, and will go absolutely nowhere.


Actually, if you think in terms of information theory: base 3 is more efficient than base 2, but the natural base (base e = base 2.718...) is the most efficient, and 3 - e is closer to e than e - 2. So it is more natural for computers to use higher bases, but unnatural for humans.


How are you measuring efficiency? Why is base 3 more efficient than base 2? Why is e the most efficient?


There is a particular notion of "radix economy" which takes into account both the length of a number compared to its size and the informational complexity of having more symbols. It doesn't really have anything to do with data storage, though. If you could create a device to store 1000 base-10 digits ("10its"?) for exactly the same cost as one that stores 1000 base-3 digits, you'd obviously pick the base-10 one, no matter what radix economy says.



Yes -- the Soviets did the right thing, but it was too costly.


What does it mean to have a transcendental base? I understand algebraic bases but I don't see how you could use a transcendental base to represent integers with a finite number of symbols


> 3 - e is closer to e than e - 2

I don't get it.


I assume they mean absolute distance

abs(3-e) < abs(2-e)


I still don’t understand why base e is the most efficient ideal. There has to be some more info on this.


https://en.m.wikipedia.org/wiki/Radix_economy

The idea is that if it costs $r to store a base-r digit, then base 3 (or e on a continuous scale) turns out to be the most efficient. Obviously, there's no a priori reason to think that a 3-level gate is exactly 1.5x more expensive than a 2-level gate, so this is mostly of theoretical interest.
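Spelling that cost model out: charge b units per base-b digit, count the digits of N in base b, and base 3 comes out slightly ahead of base 2 (the continuous version b/ln b is minimized at b = e):

    def digits_in_base(n: int, base: int) -> int:
        count = 0
        while n:
            n //= base
            count += 1
        return count

    def radix_economy(base: int, n: int) -> int:
        """E(b, n) = b * (number of base-b digits needed to write n)."""
        return base * digits_in_base(n, base)

    n = 10**6
    for base in (2, 3, 4, 10):
        print(f"base {base}: E = {radix_economy(base, n)}")
    # base 2: 40, base 3: 39, base 4: 40, base 10: 70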


This was a really interesting article, thank you.

I'm thinking about how this would apply to the human psychology of reading and writing numbers. Then it doesn't make sense to measure economy as b * floor(log_b(n) + 1), because adding more symbols doesn't increase the complexity linearly for people reading or writing numbers. Maybe something like E(b, n) = f(b) * g(floor(log_b(n) + 1)), where f stays constant up to 10 or 20 symbols and then increases, and g increases faster than linearly because it's easier to read shorter numbers than longer ones.


Yeah, I don't even understand non-integer bases.

For example, how does someone express the number 5 in base e…


12.0200112_e = e + 2 + 2/e^2 + e^-5 + e^-6 + 2e^-7 = 4.99999285804...

so 5 in base e is an infinite sequence of digits starting with 12.020011....
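A greedy way to generate those digits (the standard non-integer-base expansion; this sketch assumes x >= 1 and is limited by floating-point precision):

    import math

    def base_e_digits(x: float, num_digits: int = 12) -> str:
        """Greedy base-e expansion: at each power of e, take the largest digit
        (0, 1 or 2) that does not overshoot the remaining value."""
        k = math.floor(math.log(x))  # highest power of e not exceeding x
        out, remainder = [], x
        for power in range(k, k - num_digits, -1):
            digit = int(remainder // math.e ** power)
            out.append(str(digit))
            remainder -= digit * math.e ** power
            if power == 0:
                out.append(".")
        return "".join(out)

    print(base_e_digits(5.0))  # prints 12.0200112000 (matches the expansion above)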


How many digits do you need to do this for a given transcendental?


I've always visualized it like fractals


> As powerful as current computer technology can be, there are a few hard limits to it. Data is encoded into just two states – one or zero. And this data is stored and processed in different parts of a computer system, so it needs to be shuttled back and forth, which consumes energy and time.

This kinda reads like someone who doesn't know how storage works. Storage is rarely encoded as "ones and zeros" in the form you see it in, say, a hex editor. In fact, that's what it's decoded to. Storage schemes vary to limit run length, add error correction, and in some cases include analog states like the ones described in the article, to make more efficient use of charge and material state.

This is trivial, because the smallest unit you actually work with in a computer is not a bit but a byte. And often in fact a word, which is typically 8 bytes (64 bits), or 64 bytes (512 bits), the size of a cache line, or even 4096 bytes, the size of a typical memory page or disk sector. So we have plenty of leeway to comfortably encode these larger units however we wish, and the rest can stay as-is.


How long have we been talking about "new" types of memory/storage? Anyone remember memristors?


The problem is that any new technology has to compete with half a century of improvement made to the current one.

It's not enough to make a proof of concept of your MRAM, FRAM, or whatnot. If you want it to catch on you have to also come up with a way to quickly start making say, 16GB modules competitive with modern DDR5.

That's tough.


The article literally links to another one that talks about the "emerging type of memory" - from 2012.


Has anything happened with memristors?


With the recent successes in quantizing LLMs to 4 bits, memristors could actually become useful. Basically, the lower the precision, the more attractive analog computation becomes.


You can buy them at knowm.com so I'd guess.. maybe?


Bender: "What an awful dream! 1s and 0s everywhere! (gasp) I thought I saw 2..."


Who could have predicted that the key element to break the hegemony of 0 and 1 is ha(l)fnium.



