With 'The Machine,' HP May Have Invented a New Kind of Computer (businessweek.com)
595 points by justincormack on June 11, 2014 | 292 comments



For those of you who are unfamiliar with what a memristor is, as I was, HP has an easy-to-understand analogy on its FAQ page about memristors:

"A common analogy for a resistor is a pipe that carries water. The water itself is analogous to electrical charge, the pressure at the input of the pipe is similar to voltage, and the rate of flow of the water through the pipe is like electrical current. Just as with an electrical resistor, the flow of water through the pipe is faster if the pipe is shorter and/or it has a larger diameter. An analogy for a memristor is an interesting kind of pipe that expands or shrinks when water flows through it. If water flows through the pipe in one direction, the diameter of the pipe increases, thus enabling the water to flow faster. If water flows through the pipe in the opposite direction, the diameter of the pipe decreases, thus slowing down the flow of water. If the water pressure is turned off, the pipe will retain it most recent diameter until the water is turned back on. Thus, the pipe does not store water like a bucket (or a capacitor) – it remembers how much water flowed through it."

Source: http://www.hpl.hp.com/news/2008/apr-jun/memristor_faq.html
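
A minimal numerical sketch of that analogy (my own toy model, not HP's device physics; the bounds R_ON/R_OFF and the drift rate K are made-up numbers): the resistance drifts with the net charge that has flowed through the device, and is simply retained when the current stops.

  # Toy memristor following the pipe analogy (Python).
  R_ON, R_OFF = 100.0, 16000.0    # hypothetical resistance bounds, in ohms
  K = 1.0e4                       # hypothetical drift rate, ohms per coulomb

  def step(R, v, dt):
      """Advance the device state by dt seconds with voltage v applied."""
      i = v / R                           # current through the device
      R = R - K * i * dt                  # forward charge lowers resistance,
      return min(max(R, R_ON), R_OFF)     # reverse charge raises it, clamped

  R = 8000.0
  for _ in range(1000):
      R = step(R, 1.0, 1e-3)    # push "water" one way: the "pipe" widens, R falls
  print(R)
  # With the voltage off nothing changes: R is simply retained.
  for _ in range(1000):
      R = step(R, -1.0, 1e-3)   # push it the other way: the "pipe" narrows, R rises
  print(R)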


Interesting tidbit: a while ago, I was talking about memristors with a physics trained person. And he told me something that really stuck:

It is impossible to make an inductor (coil) without resistance and without capacitance. It is impossible to make a capacitor without some inductance and a little bit of leakage. It is impossible to create a resistor that does not have self-inductance or a bit of capacitance.

If you followed all that and you agree with it (because you've done some electronics), or you've heard of 'parasitic capacitors' and such, then you can see that it must somehow follow that parasitic mem-resistance has been with us all along, but as such a small effect that we simply failed to notice it. None of the resistors, capacitors or coils that we normally use exhibit this effect strongly enough that we have to adapt our designs to minimize it.

That's part of the reason why it took so long to manifest in a usable form: if the effect were as pronounced as resistance on power lines, capacitance on wiring, or inductance in large capacitors, then we'd have exploited it long ago.

So it's going to be something exceedingly subtle when and if it comes to market.


Interestingly, this is exactly what the chief of HP Labs said in this video [1]: that lots of papers noted the effect and talked about anomalies, but no-one made the connection to memristors before them. He also says that the effect is only really noticeable at nano scales (because it needs incredibly high electric fields).

EDIT: on second thought, that's not the case, because the HP memristor is not a memristor in the sense of the original paper describing it - it has nothing to do with magnetic flux. It can merely be described with the same set of equations, so it behaves like a memristor, but is not "actually" the memristor that completes the set [2]. That is, if I understood the video correctly (but I think I did, as the speaker himself talked about a "trick" at the beginning of the video).

[1] https://www.youtube.com/watch?v=bKGhvKyjgLY [2] http://en.wikipedia.org/wiki/Memristor#mediaviewer/File:Two-...


Nice to see confirmation of that theory from that high up.

It seemed extremely logical to me when I heard it the first time and like everything else, in retrospect it is blindingly obvious.

I'd hate to see a memristor dropped into a circuit design and then be handed the task of 'fixing' SPICE to produce the correct output. But by the looks of it, the usefulness of individual memristors will be quite limited.


Can't reply to your other comment yet, so I'll do it here. I think that this is a "duck typing" memristor: You can use it as a memristor, but it's not what was predicted. Anyway, if you're interested, watch the video I linked (thanks to @achille): It really explains what this is all about. If you have some basic understanding of semiconductor physics I think you'll enjoy it :)


I think I get what you're going for now, sorry it took a while for the coin to drop.

Just as a battery can look like a capacitor under some conditions, even though it really isn't one.

Capacitors and batteries can be described using the same basic laws, even though the fundamental mode of operation is completely different.

They have the same basic properties:

- internal resistance

- a certain capacity

- a breakdown voltage

- you can charge them

- you can discharge them by connecting the terminals (hopefully through some suitable load)

- both exhibit self-discharge

- have a certain amount of inductance

And yet one is a real capacitor whereas the other is an electro-chemical device. Only when you open one up, or when you charge/discharge it a large number of times, can you see that there is a real difference between the two; superficially they are interchangeable for some applications.

Does that get it right?


An electrolytic capacitor is an electro-chemical device itself. Also, like a battery, if the "electrolyte" contains a solvent then it is a liquid-state device, as opposed to a film & foil capacitor, which is a solid-state device. Batteries generate a characteristic cell voltage as a result of a chemical reaction. Capacitors store any voltage up to their maximum rating; electrolytics have a higher capacity/size ratio than film & foil due to the chemical help of the electrolyte, but they do not generate voltage on their own, only store what is supplied from a power source.


That's a good comparison, as far as I understand this thing. With the exception that the "real memristor" is only a theoretical device.


See my edit to the previous comment - now I actually think that the symmetry isn't proven by HP's approach (but this doesn't make it any less interesting/promising).


For it to be an actual memristor it would have to be. Really curious about what it is that they will be unveiling.


Is it not "actually" a memristor in the same sense that an electrochemical capacitor is not "actually" a capacitor: it exhibits the same properties as a capacitor, but the properties are caused by redox reactions instead of electric fields (even though most chemistry is really just electric fields anyway)?


You can produce memristors on the macro scale [1]; in fact, it appears that any pair of oxidized conductive metals will work [2].

[1] http://sparkbangbuzz.com/memristor/memristor.htm

[2] http://youtu.be/rvA5r4LtVnc


> it must somehow follow that parasitic mem-resistance must have been with us all along

I'm not so sure about that. It's impossible to make things without unwanted series/parallel resistances because conductors/insulators aren't perfect. It's impossible to make things without parasitic capacitance because without a perfect insulator, any two points have some capacitance between them - just like how there's also a parasitic gas discharge tube if the electric field between terminals reaches the dielectric breakdown voltage of air (this is another two terminal device, it's just not linear). These are all well-known physical phenomena that we deliberately assume away when we draw simple schematic symbols on paper, and then have to add back in when they end up applying.

Constraining the domain to two-terminal AND passive AND linear devices, one will have a very small number of possibilities for equations that describe them. Good old R/C/L that we know and love, and purportedly "M". But what is THE physical process that M describes and therefore gets made interesting by?

One could create a battery-powered opamp circuit that had two external terminals and fulfilled whatever equation you dreamed up (once again up to the limit of your abstraction, in this case constrained to lower frequencies). But the existence of that equation/device doesn't imply that there has to be a low-level physical phenomenon that will let you directly implement it.

Every time I try to read about memristors, I grok the bits in isolation, but mostly come away confused. C and L have state as well, and would also make great computer memory but for those nasty parasitic effects. From the wikipedia page: researchers everywhere are working on memristors, yet memristance was documented 200 years ago, yet the creator is arguing that the definition applies to phase change ram? I almost feel it's the "Cloud" of device physics research.


The big problem is that your two-terminal battery-powered op-amp circuit would (1) sooner or later run out of battery power and (2) would have to exhibit its memory function through some 'ordinary' memory device.

If we're going to use 'ordinary' memory devices anyway then the whole exercise is moot.

What exactly drives 'M' is the billion dollar (and probably many billions of dollars) question...

It's more like the cold fusion of semiconductors, but apparently they have something. I'll hold off any judgment until they do their unveiling, but this had better be good and not yet another promise.


Yes, either linear memory devices like C/L, or creating digital memory cells out of positive-feedback hysteresis. Running out of battery is just one of the limits of the abstraction. I mean it as a mental exercise in demonstrating that one can easily build a circuit that fulfills the equation of a memristor - does its mere emulated existence make it interesting? Why aren't these "emulated memristors" (obviously with a third terminal for a power supply, a different abstraction limit) a standard building block of modern circuits, even in just hobbyist discrete form?

I'm sure there are specific devices being worked on that have exciting applications to memory, and I'm certainly not putting down the researchers involved with them or their work. But it seems like the "memristor" narrative is a bit of a hype train that has taken on a life of its own. Let's say one team of researchers were to develop a miniature device with a similar but decidedly non-linear effect. Would this make it any less important?

(And if my judgment is wrong and we end up with an entirely new family of passives based on the same fundamental physics with usual differentiation based on power handling/size/etc, I'll be happy to eat my words).


> And if my judgment is wrong and we end up with an entirely new family of passives based on the same fundamental physics with usual differentiation based on power handling/size/etc, I'll be happy to eat my words

That will be the best case ever of being happy to be wrong. If true the implications are on par with some pretty advanced SF in terms of computing power and storage. Content addressable memory anybody?


Every effect is linear if you constrain your domain enough.

But you are right - the real news is more that we've got a new way to manufacture some type of memory storage element (that's also conveniently fundamentally tiny).

It is definitely not the only way though: IBM's been working on magnetic racetrack memory for some time, which would have many of the same advantages.


Magnetic racetrack sounds like bubble memory warmed over with associated feelings. The good news is that 'bubbles' are real, they just never broke through commercially.

http://en.wikipedia.org/wiki/Bubble_memory

vs

http://en.wikipedia.org/wiki/Racetrack_memory

So many common elements.


Memristors, like core memory, are destructively read. You measure the current needed to set them to 1, capturing the ones where you needed more current (they were zero initially) and the ones that needed less current (they were one initially). You capture the state in a register, and in the second half of the cycle you write the zeros back where the register shows they were originally zero.

As core memory before it showed, you can do this with sufficient accuracy if the hysteresis difference is large enough. The only requirement is that setting a cell to a known state takes a different amount of current when it was already in that state than when it was in the opposite state.
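
A sketch of that read-restore cycle (my own illustration of the scheme described above, with a made-up ToyCell model and current threshold; it is not HP's controller logic):

  THRESHOLD = 0.5   # hypothetical current threshold separating "was 0" from "was 1"

  class ToyCell:
      """Toy model: forcing a cell to 1 costs more current if it held a 0."""
      def __init__(self, bit):
          self.bit = bit
      def set_to_one(self):
          cost = 1.0 if self.bit == 0 else 0.1   # the hysteresis difference
          self.bit = 1
          return cost
      def set_to_zero(self):
          self.bit = 0

  def read_word(cells):
      # First half-cycle: force every cell to 1, noting which ones took extra current.
      word = [0 if c.set_to_one() > THRESHOLD else 1 for c in cells]
      # Second half-cycle: write the zeros back where the captured register says so.
      for bit, c in zip(word, cells):
          if bit == 0:
              c.set_to_zero()
      return word

  print(read_word([ToyCell(b) for b in (1, 0, 1, 1, 0)]))   # -> [1, 0, 1, 1, 0]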


> Memristors, like core memory, are destructively read

But are they really, with HP's implementation - which is not a "real" memristor? In the video presentation [1] they show that they change the state of the memristor using very short "high" voltage impulses, and then measure its resistance using lower voltages that do not make the oxygen vacancies migrate - and thus do not change the state, if I understood that correctly.

If they can use these low voltages (and low currents) to effectively read the state without altering it, they get incredibly simplified management of the memory, and incredibly low power read operations, which would justify their assertion that their memristors can be packed in 3d without thermal problems.

[1] https://www.youtube.com/watch?v=bKGhvKyjgLY
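
If that reading is right, the behaviour is something like this toy sketch (mine, not HP's; V_SWITCH and the resistance values are invented): the state only changes when the applied voltage exceeds a switching threshold, so a gentle read bias leaves it untouched.

  V_SWITCH = 1.0   # hypothetical threshold below which the oxygen vacancies don't move

  def apply(R, v):
      """Return the new resistance after applying voltage v to the cell."""
      if abs(v) < V_SWITCH:
          return R                           # low-voltage read: state untouched
      return 100.0 if v > 0 else 16000.0     # short high-voltage pulse: switch state

  def read(R, v_read=0.1):
      return v_read / R                      # measure current at a gentle bias

  R = apply(8000.0, 2.0)      # write: drive the cell into its low-resistance state
  i1 = read(R)                # read without disturbing it
  i2 = read(R)                # read again: same answer, nothing was destroyed
  print(i1 == i2)             # -> True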


>It's impossible to make things without parasitic capacitance because without a perfect insulator...

Maybe a nit, but I don't believe that is technically correct. Parasitic capacitance (and inductance) exists between any two objects because the electric (and magnetic) field extends from a charge (or current) to infinity. These fields store energy, even in a perfect vacuum. The lumped-circuit model of stored electric energy is capacitance and the model of stored magnetic energy is inductance, so anything physical which carries charge or current will have capacitance or inductance (will store finite energy). An insulating material (dielectric) in a capacitor can increase the energy storage (in proportion to the dielectric constant of the material), but non-ideal materials are not necessary for the "parasitic" effect to occur.
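
In symbols, the standard lumped-element energy relations being referred to here:

  E_C = \tfrac{1}{2} C V^2, \qquad E_L = \tfrac{1}{2} L I^2

Any arrangement that stores field energy at a given voltage or current therefore has an effective capacitance or inductance, dielectric or no dielectric.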


Any signal that has some non-zero propagation time also needs proportionally high capacitances/inductances because of the energy storage requirement you mentioned (resistors are memoryless so can't model that).


For me, the "discovery" of memristors felt strangely like predicting a fundamental particle based on known symmetries, where one of these symmetry slots has not yet been observed.

http://en.m.wikipedia.org/wiki/Eightfold_Way_(physics)

http://en.m.wikipedia.org/wiki/Memristor


I remember seeing a video [1] where a man discovered that old, corroded brass ammunition casings developed a slight memristance property.

[1]: https://www.youtube.com/watch?v=MlswP_qXbdA


Conceptual symmetry between the resistor, capacitor, inductor, and the memristor:

http://en.wikipedia.org/wiki/Memristor#mediaviewer/File:Two-...

The scales fell from my eyes. Now the electronics math makes sense - Memristor was the missing puzzle piece.


I can't help but notice that the region occupied by the memristor is what would be best described as a 'flux-capacitor'. You really can't make this stuff up.


That reminds me of the missing mechanical equivalent of the capacitor, the inerter, which McLaren secretly began using on their formula one cars in 2005. But does this mean there's potentially another missing suspension component?

http://personal.strath.ac.uk/yixiang.xu/talk_HUST_2008_Chen....

edit: Assuming a Force-Current, Velocity-Voltage equivalence, I get the mechanical memristor behavior to be: dx=M.F.dt, with M being a property of the memristor(memdamper?). Or in other words, v=M.F; velocity is proportional to the force. (The dots are multiplication. Asterisks italicize the text)
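
Spelling that out (my reading of the stated Force-Current, Velocity-Voltage analogy, under which impulse maps to charge and displacement maps to flux):

  F \leftrightarrow i, \quad v \leftrightarrow V \;\Rightarrow\; p = \textstyle\int F\,dt \leftrightarrow q, \quad x = \textstyle\int v\,dt \leftrightarrow \varphi

  d\varphi = M\,dq \;\Rightarrow\; dx = M\,dp = M\,F\,dt \;\Rightarrow\; v = M F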


v = M.F => that's regular inertia isn't it? In the case of the inerter presumably constrained by the stroke of the device. It's like a 'two stroke flywheel' in a nice package.


As explained at http://www-control.eng.cam.ac.uk/~mcs/lecture_j.pdf, mass with respect to the inertial frame is equivalent to a grounded capacitor. However, the single "pole" limited the sorts of mechanical systems that could be designed. The inerter allows a system to be optimized with respect to the difference between accelerations, not simply an absolute acceleration.

I think that two accelerations could also be related with levers, but the quantities involved might make that not so feasible as a suspension component.


It's great material for a T-shirt.


what an awful diagram. it implies there's some kind of symmetry that relates all four, but when you look at the details there are asymmetries. if you think it was helpful look at it again and explain why di appears in two quadrants but dq and dv appear once each.


You can rearrange the inductor equation to get di = (1/L) dϕ. Then the quadrants are perfectly symmetrical except for the ugly 1/L. But that's just an arbitrary unit created by humans, you could fix that by defining the unit of inductance as the inverse of what we use now.
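
Written out, the cyclic form being described (standard definitions, with the inductor relation rearranged as suggested):

  dv = R\,di, \qquad dq = C\,dv, \qquad d\varphi = M\,dq, \qquad di = \tfrac{1}{L}\,d\varphi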


ah, thank-you. that makes more sense.


are you sure you're looking on both sides of the equations?


I read this analogy and immediately started thinking how complicated it would be to use this effect if the diameter is always changing, every time you make your water flow.

Then I watched the video posted by achille [1]. If I understood the video correctly, the analogy should specify a fundamental thing: The diameter of the pipe only increases or decreases if the pressure differential is very high. You can make your water flow at low pressure without affecting the diameter in any measurable way. This should make using memristors practically much easier.

[1] https://www.youtube.com/watch?v=bKGhvKyjgLY


Always thought water increases its speed when you shrink the pipe, not the other way around


Ah, yeah, this could be more carefully stated.

At a bottleneck, the water that gets through is moving faster, but the total volume of water going through is lower.

It's a classic EE analogy, so people tend to shorthand it. One good and thorough explanation of the full bit is here: http://science.howstuffworks.com/environmental/energy/questi...

What happens if you increase the pressure in the tank? You probably can guess that this makes more water come out of the hose. The same is true of an electrical system: Increasing the voltage will make more current flow.

Let's say you increase the diameter of the hose and all of the fittings to the tank. You probably guessed that this also makes more water come out of the hose. This is like decreasing the resistance in an electrical system, which increases the current flow.


> At a bottleneck, the water that gets through is moving faster, but the total volume of water going through is lower.

This is incorrect because the volume of water will be the same under I3 flow (called I-cube flow: Incompressible, Inviscid and Irrotational).

Due to conservation of mass, we have

mass = density x volume = density x area x speed x time

Measuring the amount of volume for the same time with changing area, the only quantity that can change is the speed (recall that we have fixed the time and already assumed incompressible flow, hence density does not change).

So, more area means lower speed and vice versa. But the volume is always the same under the stated assumptions.
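
The same point in symbols - the standard continuity relation for steady incompressible flow:

  \rho A_1 v_1 = \rho A_2 v_2 \;\Rightarrow\; A_1 v_1 = A_2 v_2 = Q \;\text{(constant volumetric flow rate)}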


> water increases its speed when you shrink the pipe

Not exactly. It increases its speed when you shrink the nozzle. If you shrink the pipe and don't change the supply pressure the water will slow down.

The nozzle is special because it's a transition from high pressure (inside the pipe) to zero pressure (the world outside).

If you keep the flow rate of the water unchanged, then shrinking the nozzle means the water must flow faster through it in order for that flow rate to be maintained.

But if the nozzle feeds into a high pressure part of the pipe then things just slow down.


thanks for the explanation, makes a lot of sense and my Physics seem to be rustier than I thought :D


Perhaps they meant flow rate?


That made me think of a chinese finger trap.


Are they teaching memristors in college these days as a fundamental component, or is it still too new?


They are in higher level classes. My EE friends knew what they were instantly


A memristor is the 4th fundamental circuit component, along with capacitors, resistors, and inductors.

The important part of memristors is that they can be arranged to form a crossbar latch, which acts like a transistor.

These crossbar latches are very, very small and very low power. HP plans on achieving a data density of 100GB/cm^2 [1] with read/write speeds approximately 100x faster than flash memory while using 1% of the energy. Also, with lower energy costs, the expected data density is 1 petabyte per cm^3 (due to 3D stacked circuitry).

Basically when this technology comes of age we'll see smart phones reach the order of terabytes of storage.

[1] http://www.eetimes.com/document.asp?doc_id=1168454
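
As a rough consistency check on those two figures (my arithmetic, not from the source): the volumetric number implies roughly ten thousand stacked layers per centimetre of height, i.e. about a micron per layer.

  \frac{1\ \mathrm{PB/cm^3}}{100\ \mathrm{GB/cm^2}} = \frac{10^{15}\ \mathrm{B}}{10^{11}\ \mathrm{B}}\ \mathrm{cm^{-1}} = 10^{4}\ \mathrm{layers/cm} \approx 1\ \mu\mathrm{m\ per\ layer}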


> Basically when this technology comes of age we'll see smart phones reach the order of terabytes of storage.

Looking forward to carrying the library of congress, netflix's entire catalog, and all of openstreetmap on my phone in the next 5 years.


This means we might be able to make a literal version of the hitchhikers guide. All of human knowledge in your hand, no network connectivity needed.


> I felt a great disturbance in the Force, as if millions of voices [of copyright attorneys and trade industry groups] suddenly cried out in terror, and were suddenly silenced.


Guess again. All that storage represents a virtually limitless crypto-space within which intellectual property rights holders can create one-time pads with which to disenfranchise future consumers.

When combined with ubiquitous 4G coverage, all they'll ever need to stream are the limitless persistent auth tokens, locking and unlocking assets as frequently as bandwidth will allow, while remaining profitable.

I bet some of the greediest and most tyrannical IP controllers will have 1:1 authentication ratios, with each and every text character, audio sample and pixel per video frame, such that they'll never reduce bandwidth, and even create backlogs of access history, for billing purposes, when network connectivity returns.

Yes, because you accessed five seconds of full motion video, and witnessed content in which Lady Gaga's eyes moved five pixels to the left, while she pronounced a fricative syllable in the third verse of her top 40 hit single, you now owe Viacom 1/20th of one cent. NOTE: Costs do not include third-party fees covering closed-captioning for the hearing impaired.


You dropped your phone on the corner?


All of Wikipedia (English) already fits on a smartphone with a 64GB SD card.


With no history and no pictures. It's useful, but it's not really "wikipedia" anymore :)


Weird side point, but I'd be really interested to hear how people weight the importance of Wikipedia text vs media vs history. For me the first is 99% of the point, but clearly others feel differently.


History and talk pages can be very important.

When I saw that the film Defiance was based on an interesting-sounding real set of events, I looked at them on Wikipedia. The history and talk pages had an ongoing edit war, with a sizable number of commentators who appeared to feel that the Bielskis should be treated as criminals and murderers rather than fêted as heroes, due to their killing of local Nazi sympathisers.

If you just stuck to the hashed-out Wikipedia page that was current, you'd be completely oblivious to the whole controversy.


Yeah, you're probably right. But the history can be really important on certain pages. https://news.ycombinator.com/item?id=1671756 And images can make some of the math articles useful since they are too-often dense pages of formulas.


Isn't most of the math just LaTeX? So it could be rendered client-side?


I mean the formulas are not very good at explaining the concepts. Sometimes it's helpful to have a picture.


I find Wikipedia's text-heavy presentation disappointing, especially considering that the articles are almost exclusively what I would consider extremely terse. It's like the worst of both worlds.


I've never considered this, but the history of Wikipedia over the next 100 years might be an insanely valuable source of insight about culture.


I remember reading about Xowa, where it was stated that the English Wikipedia requires 25GB and images an additional ~80GB, all of which you can download and access offline with Xowa. Considering there are 128GB SD and micro SDXC cards readily available for ~$100... that's actually amazing.


I suspect if you selected the 5% most "useful" articles you'd miss out on almost nothing and be able to pack it into a very small size.


I'm sure you could batch process the pictures to a reasonable size and store all those on a big flash drive today already.


All of the textual wikipedia is about 10GB compressed, even with the garbage pages. I'd say a good chunk of the history and the images could fit in the remainder.


You could also just strap a portable terabyte hard drive to your phone. Not as cool as this, but it does fit in your hand.


You'd need the network connectivity anyway just to stay 'in sync'. But in off-line mode life would be a lot easier.


It could reduce the amount of data a smartphone needs. I'd like to keep a copy of my local area in my google map and then have it check in once a week to look for changes.


A smartphone could have an onboard map today. It's a conscious choice by google to make it work the way it does. Look at TomTom or any other navigator (and possibly some smartphone apps too) that come with integrated maps.

That service is just grafted on there, it's not a necessity.

It's the google way of doing things.


Actually, Google Maps allows you to cache portions of maps for offline use: https://support.google.com/gmm/answer/2650377?hl=en

There are other navigation apps that do store offline maps; the reason they're not popular outside of outdoor enthusiasts is that they take up a huge amount of space. It seems like a reasonable trade-off to me.


Nokia somehow manages to do offline maps just fine - you can download maps of the countries/cities you need separately and they take up no more than about a hundred MBs each.


Since when is caching a substitute for integration?


I'm not sure I understand what you mean by integration; are you proposing that the mapping app contain the complete map data, which is then occasionally updated as needed? There's a clear advantage to not storing several GB of data you rarely if ever need on a device with very limited storage space.


Flash is a commodity, a typical navigation device will store all the maps for all of Europe including the most crazy little roads and cost a relatively small fraction of a cell phone, includes a good chunk of the hardware and comes with an SD slot for upgrades.

Cell phones could easily provide navigation capabilities for at least the country of origin of the buyer and several around it if the creators decided this was a desirable thing.

All of the US is 1GB. See here (Dutch):

http://www.laptopshop.nl/vragen/29528/115735/tomtom-navigato...

Flash runs a few $ / G. Of course your phone manufacturer will screw you completely on the memory when you buy it (the price difference between a 16G and a 32G phone is ridiculous, but I guess I can't fault them as long as people fall for tricks like that).


>a typical navigation device will store all the maps for all of Europe

Yeah but it won't also have several GB of games, audio, apps and photos on it as well. People would cry bloody murder if they found out google was using 7 GB of space to cache the world map by default.

If you are a person who needs a world map at their fingertips there are apps for that, but the average person just needs Google Maps to figure out what turn they should take occasionally.


If they paid $3 for that 7GB they'd be fine with it. It's not like you have to store that stuff in RAM.


And this is all especially true if we start seeing memristor arrays replacing flash chips in mobile devices; at that point, the few GB it would take to store a worldwide transportation map would be pocket change.


StreetView is a useful, if expensive to implement, addition to maps, and doesn't fit on a phone (yet)


StreetView is overkill to me, basic navigation is pretty much a must, even if you have no dataplan. But of course that's not the way smartphones are being marketed.

I've been holding out from the smartphone revolution for quite a while now, but a phone that would do off-line navigation would be a good thing because that means one less thing to carry along.

I don't like having services forced upon me when a single download would suffice.


Check out MapsWithMe - OpenStreetMap packaged as a basic mapping app, paid pro package for more than basic mapping/location pointing. The OSM db is segmented by country/province/US state so you can tailor the size / coverage as desired.

Haven't looked at the pro pricing or feature list, just having maps that my off-network, GPS equipped old brick can use near home is sweet enough.


I tried MapsWithMe, but those are only maps, no navigation.

If you use Android, check out OsmAnd. It uses the same OSM maps, has navigation (with voice pack too) and it's free for up to 8 countries IIRC. I have been using it for about three months and it works pretty good.

The only problem I found is that city names for some countries are in the local language, so if you go to Greece for example, you would have to type in the names in Greek letters. If they fixed this, it would be perfect.


There is a setting for the names (but I think there have been people on the OSMand mailing list having issues with it). The English names would also have to be present in the OSM data (not for any particular technical reason, just that as far as I'm aware that is the only source of names the devs use for the maps).

Also, the free version allows for 10 downloads, where updating a previously downloaded map (which might only be a region of a country) counts as one of them. But the paid version is only $8.


Thank you, I will!


Not to derail the topic - but maps already does that.

Search for your area. Pull up info sheet for it. Click "Save map to use offline". (Although I've got no idea if it checks for changes)


Didn't that disappear in some recent version? I had to use some special keyword search term to make that happen in the newest Maps.


It did disappear. And fortunately, they've added it back again, in yet another UX interaction.

https://support.google.com/gmm/answer/3273567?hl=en


that's the interaction groby_b just described


However "You can move and zoom in or out on your saved map, but you can’t search or get directions on it."

Reduced the utility considerably imho.

Both Navigon and Co-Pilot (on Android) can store the entire U.S. in about 1.5 GB.


Google maps does check for changes, but in a sort of braindead way. Every 30 days it'll check for map updates, and if it can't (e.g. no internet connection) it'll just delete your locally saved maps.


This is starting to sound like a Version Control type solution where you just update your Wikipedia every now and then and it merges the new info.


I ended up looking into this a little while back as an emergency kit type thing. Wikipedia is useful - and tells me how to do a lot of things. In a disaster, I probably don't have internet. So it would be a good idea to have local wikipedia.

Zim reader was the best I could do, and still is the best at the moment. But I would be totally fine with dedicating a few terabytes to keeping much more complete local archives on my file-server if the update process was reasonably automatic (something like bittorrent would be great).


Urgh not Zim - Kiwix. Kiwix is the best I could do.

Zim is a desktop wiki app.


Software grows to match the hardware. If storage goes up 100x, data will grow to accommodate.

Some things won't grow, to be sure. Anything text-based, for example. But movies, music, pictures... would probably all explode in fidelity and thus size.


Would movies and music really explode? Music hit the ceiling a long time ago - nobody's going to go for a higher resolution / sampling rate than the current "uncompressed" sound. Big publishers are intentionally sacrificing quality for loudness anyway.

Movies can practically scale up only to retina-equivalent resolution on a wall-sized screen. Even then, not many people will want TVs bigger than they can look at without moving their eyes...

Pictures could still grow, especially the internal raw representation taken from the light capturing element, but that's likely to be down-scaled when saving to kill noise.

What else is there that really takes space? (that is not limited by the effective resolution of human senses)


> Music hit the ceiling long time ago - nobody's going to go for higher resolution / sampling rate than the current "uncompressed" sound.

Spatial audio that supports dynamically computed surround sound for arbitrarily many speakers and headphones.

> Movies can practically scale up to retina-equivalent on wall-size equivalent only. Even then not many people will want TVs bigger than they can look at without moving their eyes...

> Pictures could still grow, especially the internal raw representation taken from the light capturing element, but that's likely to be down-scaled when saving to kill noise.

1) High color depth, allowing bright and dim objects to coexist in the same picture without loss of fidelity, and allowing dynamic lighting.

2) Complete depth data, allowing dynamic depth of field changes.

3) Complete geometry and scene-graph data, allowing you to change the camera and perspective.


"Spatial audio that supports dynamically computed surround sound for arbitrarily many speakers and headphones."

Still covers only a small arbitrary constant factor of data that is already very small by modern standards.

Note that surround sound data is, IIRC, already not "twice" the size of stereo.

And your video points 1 and 2 are also still at most only small constant factors of increase over what we already have, with 3 potentially being a compression technique.

Video is nearing its apex; sound is pretty much already there.

There's actually a maximum rate at which our senses can convey information to our brains; any use of data beyond that rate is literally impossible and anything carried beyond that is wasted. Even a full sensorium just isn't that much larger than what we already have. We are, after all, talking about technologies that are in the same ballpark as the maximum theoretical data density that human brains can have, and in practice the memristor storage is going to be much higher. It should not be surprising that it's very difficult to truly "use" all that storage.


> Note that surround sound data is, IIRC, already not "twice" the size of stereo.

Even stereo is also not twice the size of mono. (using joint-stereo encoding)

Agree with all the other points too.

> 2) Complete depth data, allowing dynamic depth of field changes.

This does extend the natural sense range, but is also already possible/done.

> 3) Complete geometry and scene-graph data, allowing you to change the camera and perspective.

That should actually take less space than the final scene in many cases...


Isn't having a computer process some data and zoom in on a portion of it 'using of data beyond the rate at which our senses can convey information to our brains'?

I guess you are talking about the utility of media at higher and higher fidelity and I am just snagging on how you have phrased it.


It's a good point, though. He is mainly speaking to the data rates for media involving 'guided' experiences. Interactive experiences would benefit much more linearly with the amount of data able to be stored (google maps on your phone analogy).


Sensor data is one I would guess. Modern cell phones have an incredible number of sensors in them, but usually don't log the data when it's quickly sampled (accelerometers, background mic), or sample it rarely/slowly (GPS, barometer, ambient lighting) when it is logged. I would personally love to have days, weeks, or months of raw data sampled frequently.

Edit: I'd imagine this is due to a limited amount of storage, a limited number of writes to current solid-state media, and energy usage, all of which this technology appears to solve.


Energy usage is the huge one - if this is applied to computing and not just storage then maybe.


Sensor data is dirt cheap to store, even at 100 data points per second. At 64 bits (8 bytes) per point, that's only 800 bytes per second per channel.

1 GB/s of streaming data is roughly 1.25 million channels of 100-point-per-second 64-bit floats.
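
The arithmetic, spelled out (using decimal GB and 64-bit, i.e. 8-byte, samples):

  100\ \mathrm{samples/s} \times 8\ \mathrm{B} = 800\ \mathrm{B/s\ per\ channel}, \qquad \frac{10^{9}\ \mathrm{B/s}}{800\ \mathrm{B/s}} = 1.25 \times 10^{6}\ \mathrm{channels}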


I think my argument is that we will find new things to store on our PC's similar to how we found new stuff to store on our PC's from the 90s till now.

I guess it is hard to imagine, but let's dream a bit... one ridiculous possibility that I've thought about: consider something like Google Glass. Imagine recording every moment of your day from your smart phone, not just when you wake up. Why would you do this? Well, maybe for little things like having a meeting with someone and wanting to remember details without writing them down. Why take the time to record it manually when you have the ability to do it in HD, for cheap?

Again, this is hard to imagine wanting to do, especially for privacy reasons, but were things like Facebook and Twitter easy to imagine in the 80s? As things have changed, so have social norms... I'm not saying this is a good or bad thing, I'm just stating an observation.

So, I happen to not have any HD vids on my laptop at the moment, but I have a normal video that is 10 minutes long and 70 MB. For something like recording video for 10 hours a day, that's 6 * 70 * 10 MB ≈ 4.2 GB. So, yeah, that's roughly 240 days on your smart phone at 1 TB, ignoring all other things (full seasons of shows, as another commenter mentioned, raw GPS data that could be added to the video, and so on). So maybe it isn't quite reaching the limit, but it's a good example of how, as we get more space, it is possible to imagine finding more things to fill your smartphone's new TB of storage with.
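
Restating that estimate (taking the 70 MB per 10 minutes figure at face value):

  70\ \mathrm{MB} \times 6 \times 10 \approx 4.2\ \mathrm{GB/day}, \qquad \frac{1\ \mathrm{TB}}{4.2\ \mathrm{GB/day}} \approx 240\ \mathrm{days}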


> Again, this is hard to imagine wanting to do, especially for privacy reasons, but were things like Facebook and Twitter easy to imagine in the 80s?

Much easier than the example you give, yes. Considering things like Xanadu were already theorized in the '60s, Facebook and Twitter wouldn't be that huge of a stretch. What's more, Usenet and BBSes already existed in the '80s, so there was a text-based glimpse of the future in some way.

Of course, imagining the scale, deep technical details and the heavy use of graphics back then was likely difficult. But the general concept? Hardly. Especially not something as simplistic as Twitter, the idea of it was most certainly not alien back then.


I wasn't referring to technical aspects, I was referring to social aspects. I'll admit I wasn't born in the 80's, but even when I was a kid in the 90's, I couldn't imagine sharing my day-to-day interactions with 100's of people.


Retaining program traces for every program executed on the system.


Holographic movies require a high bandwidth and a lot of storage space (recorded/pre-rendered).

Current holographic displays require a movie source for every few-point. And to make it convincing for a small POV (e.g. a 19" 4:3 holographic monitor) you need at least 8 few-points; for a 360° next-gen holographic display you would need many terabytes of data for one movie (and high bandwidth to stream the data to the display). Depending on the holographic technology you additionally need a very high resolution relative to the visual output. If you use a common HD resolution, the current visible pixel range is about 640x480 - at least on some expensive research displays I used in 2011.


*view-point


Think about the Oculus Rift: movies/media might become a lot more complicated as VR or VR-like technology comes of age. Or if it's not VR, there might be some other technology.


Caches for streaming the media. The BBC has this problem every now and again, with big events or popular shows: the pipes to the cabinets in the streets aren't big enough to unicast to everyone simultaneously. A CDN node in every cabinet is probably not too far off.


> Software grows to match the hardware.

You mean XML for instance?


Not sure why he was downvoted, xml is a trade off for developer convenience. A trade off we were willing to make because we had disposable bits.


What was the trade off? Harder to parse files for less human readability?


What was the predecessor to xml that was easier to read?


  "If GML was an infant, SGML is the bright youngster who far
   exceeds expectations and made its parents too proud, but
   XML is the drug-addicted gang member who had committed his
   first murder before he had sex, which was rape." -- Erik Naggum

GML is from the late 1960s and is, arguably, easier to read. SGML from the 1980s expanded upon this. XML is essentially just an offshoot of SGML and much less readable than SGML is. For example, here is a valid SGML document:

  <anthology>
    <poem>
      <title>The SICK ROSE
      <stanza>
        <line>O Rose thou art sick.
        <line>The invisible worm, 
        <line>That flies in the night
        <line>In the howling storm:
      <stanza>
        <line>Has found out thy bed
        <line>Of crimson joy:
        <line>And his dark secret love 
        <line>Does thy life destroy.
    <poem> 
      <!-- more poems go here    -->
  </anthology>

Aside from XML's lineage, there is no shortage of markup languages available with varying levels of readability.


I'm confused, why does anthology have a closing tag but the other elements do not?


The same reason <ul> has a closing tag but <li> does not: HTML is SGML where certain tags can be made optional. XML specifically did away with this and made all close tags mandatory (even on contentless tags, which is why xhtml requires you to do <img />).


Just to be clear, are some closing tags optional or are all closing tags optional? If the former, how do you know which?


In this case, the schema probably disallows any other tags inside <line>, which makes </line> superfluous and it can thus be omitted. If <stanza>s are not recursive and poems cannot contain <line>s outside <stanza>s, </stanza> can also be omitted. Etc etc.


You work it out from the schema - if it is not ambiguous you can omit it.


I think it's funny to see people quoting Erik Naggum, though perhaps that's just because my context for him is largely his rather prolific and fiery output on usenet.


I don't know much about him except from his essays about time and XML, both of which I think are spot on, despite the fiery language.


S-expressions.


CSV


XML's only real problem is the dumb idea of requiring closing tags to have the tag name. Otherwise it'd be just as fine as, say, JSON:

  <a>b</>
  "A":"B"

And maybe one can get upset about the entity encoding and other extra aspects. But those aren't really why XML feels so large.


Bandwidth won't grow and IMHO that is now the chief constraint in computing. For low usage it is cheaper to store data on S3 for a month than buying the bandwidth to transfer it once!


We'll think about media in a completely different way when this happens.


No, we'll just use the same shitty car and theft and property analogies.


Can't wait to see it: You wouldn't steal a Library of Congress...


The bigger problem is telecoms won't sell any bandwidth. So everything will have to shared via sneaker net.


As in shuffling around our terabytes of "illicit" storage by hiding terabyte thumbdrives in our sneakers?


Well if we're still capped at say broadband speeds, just driving down the street with your 50TB cellphone doesn't seem like much of a loss.

Maybe we should allow net neutrality to fall, increase socialization as a result :-) lol.

:.:.:

Also no, sneaker net means a network transported by feet, not wires. But I do like the joke, or I pray it's a joke and not a prophecy.


This is already somewhat feasible and someone is actually working on it: http://memkite.com/blog/2014/04/01/technical-feasibility-of-...


Of course, it'll all be in hyper-HD 8K video and with 3D map fidelity down to the centimetre


So storage is 1x10^6 larger, and read/write speeds are 1x10^2 faster.

So it'll take 1x10^4 times as long to copy the capacity.


It will take 1e4 times as long to completely fill that device proportionally to an older one, yes, but that's still two orders of magnitude faster than it would be to fill that equivalent amount of space with the older write speed.


If you can figure out how to put enough memristors in parallel, the transfer speed is essentially unlimited. It's the access time (latency) that is crucial, and makes all the difference, because it is inherent to the technology, and cannot be changed regardless of how many elements you string together.

This explains it well: http://rescomp.stanford.edu/~cheshire/rants/Latency.html?HN_...

Several new non-volatile memory technologies have achieved latency down to 100 picoseconds. This is an order of magnitude faster than the L1 cache on a CPU (SRAM).

CPU power will again become the bottleneck, instead of storage. We will have to build completely new CPU architectures to make use of this, though.


If you're making comparisons based on physical displacement, sure. But it's probably more helpful to think about how fast it will copy an equivalent number of bits, and from this perspective it actually will, indeed, take 1x10^-2 the time of flash storage.


I'll take that tradeoff in exchange for 10^6 larger capacity, any day.


Not only that but it has the potential to change our entire model of computing, and simplify it tremendously. The unification of storage and computing.


Unification of storage and computing...what does that even mean? I get that having nearly unlimited, fast, persistent storage would be awesome, but that's not a conceptual merging of storage and computing -- but I definitely agree about the simplification.


Because processors are now so many orders of magnitude faster than main memory a huge percentage of their transistors and complexity are allocated to things like caching, prefetch, and branch prediction.

Those features are the only things saving modern processors from spending nearly all of their time doing nothing useful while they're waiting for data to be retrieved from main memory.

Look at a Core i7 CPU die, with the L1/L2/L3 cache circled. Understand that many of the remaining transistors are doing things like branch prediction, managing cache coherency, and prefetch and so forth.

  http://i.stack.imgur.com/4Z1nU.png

Now imagine that all of those resources could be doing actual processing instead of simply caching gobs of data that already resides in main memory.

That would be awesome.


Would this really be fast enough to do away with cache completely? A 100x speedup from main memory to L1 cache sounds pretty low, although I don't have any figures to back that up. If this is slower than L1 will there not be a major performance drop to the point we'd have to use at least some cache still?


Without knowing any of the details I'd guess that the processor would still benefit from some small amount of cache.


Well, only one pool of data: no more registers, cache, hard drives etc. The storage will live next to the logic. What's more, memristors can act as logic units as well as storage, and do it interchangeably.


> Well, only one pool of data: no more registers, cache, hard drives etc. The storage will live next to the logic.

Even if all storage uses the same underlying persistence technology, access times will still depend on the distance between the data and the logic. If there are ALUs intertwingled with the storage, access times will still depend on how close the various pieces of data being worked on together are.

Parallel algorithms tend to lose performance as they get chattier and as node speed is lost to increase node count.

Storage living next to logic is effectively a miniaturized version of the "datacenter as a computer" or "warehouse-scale computer" setup that some places are using already. I seem to recall seeing whole books written on the additional complexities of writing software to take advantage of such an environment.

There will never be "only one pool of data". At a minimum, there will be "data in the physical thing that stays on the desk", "data in the physical thing that goes in my pocket", "data on such-and-such private network", and "data available over the Internet".


So, the program doesn't just run on the silicon, it modifies the processor: you can increase buffers, add registers, modify the arithmetic units to use different algorithms and such, like an FPGA that can be adapted on the fly? You could execute a program by running it in place rather than streaming commands to a processing unit?!


In theory, yes, in addition it can also simulate neural networks natively. All kinds of new possibilities.


CPU's may end up looking more like GPU's with thousands of registers.


Compiler optimizations that are currently impeded by register pressure could be enabled to their fullest extent, thereby increasing program speed even more. Compiler writers also get more breathing room to design better optimizations.


or with very few registers per thread on die with the rest in storage ala TMS9900

http://en.wikipedia.org/wiki/Texas_Instruments_TMS9900


Note that other companies had been working on switched resistance crossbar latches with metal oxides before HP, they just didn't have a fancy name for it.


The name comes from the 1971 paper by Leon Chua: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=108333...


Yes, but referring to varying-resistance metal oxides as memristors is much more recent (2008ish, I think) and is still somewhat controversial.


I assume the initial prices will be 10x more per GB than what we have now, much like SSDs and flash storage in the beginning (it's still the case actually) - that's compared to flash storage - so 100x more per GB than HDD storage.


It'll come down. Demand will be massive (if it lives up to the hype). This could potentially unify the RAM and Hard disks in computers.


There will always be a place for yet faster memory. Does this compete with the registers and L1/L2 cache in terms of speed?


But at least caches are often transparent to software, thus don't affect software as much.


From my basic understanding of memristors, they require a certain ramp time. So I don't think it'll replace cache.


It doesn't need to. If it's faster than the whole 'caching' procedure, then it is faster regardless.


how many years away until you expect the technology to hit markets?


Impossible to predict, but it seems we are getting the fabrication down.


Dr. Leon Ong Chua [1] wrote this paper [2] in 1971, entitled "Memristor - The Missing Circuit Element", in which he first outlined the theory of the memristor.

Admittedly, this is not my field, so my fascination may appear to be quite naive to the well-initiated, but I find it simply intriguing that the memristor was first theorized to exist; that is, it is somewhat analogous to the Higgs boson, in that the mathematics precedes the discovery.

To my point (from the paper):

"Although no physical memristor has yet been discovered in the form of a physical device without internal power supply, the circuit-theoretic and quasi-static electromagnetic analyses presented in Sections III and IV make plausible the notion that a memristor device with a monotonically increasing φ-q curve could be invented, if not discovered accidentally."

[1] https://en.wikipedia.org/wiki/Leon_O._Chua

[2] http://www.cpmt.org/scv/meetings/chua.pdf


Another thing that has been predicted by theory first was the laser: http://en.wikipedia.org/wiki/Laser#History


Indeed, electromagnetic waves were theorized by James Clerk Maxwell and shown to exist by Heinrich Hertz about two decades later.

http://en.wikipedia.org/wiki/Heinrich_Hertz#Electromagnetic_...


The theory was revolutionary of course but the properties had already been observed in "unusable" semiconductor materials. They had the pinched hysteresis loops in the 70's.


I encourage folks to watch this tech talk by HP Research Labs on the discovery of the memristor and implementation details. It goes into the gory technical details, from the low level physics to the construction of the gates, the truth tables, and the high level computation possibilities once chips are assembled in a crossbar package.

> https://www.youtube.com/watch?v=bKGhvKyjgLY


Thanks very much for sharing this video. Despite having a background in electrical engineering and device physics, I've never really understood what a memristor was until watching this.


Any idea if their materials have changed? He speaks of Platinum and Titanium Dioxide as the materials they are using. Even discounting the cost of spinning up the necessary fabrication facilities, I would think such materials would keep the costs prohibitive for mass adoption.


Platinum has a long history in industrial applications, often in surprising quantities. There's got to be a few orders of magnitude more of it in a catalytic converter than you would want on a chip die.

Titanium dioxide is better known as white paint pigment. Or sunscreen. People smear it on their skin in large quantities. It's not what you'd call scarce.


In the nano scale I would imagine that the material cost is not the biggest issue


Thanks for the video! It should be the first comment for this thread. I... retweeted it :)


If I understand this right (I'm not an electrical engineer) a hypothetical memristor-based computer would not need discrete RAM and disk storage nor would it have a fixed processing power. Rather, the chip of memristors would simultaneously be acting as the CPU and as non-volatile primary memory with the speed and efficiency of DRAM (or better). Additionally, the number of memristors dedicated to computation vs. storage could be changed in real time on a live system.

This absolutely blows my mind.


> “If you want to really rethink computing architecture, we’re the only game in town now,” he said.

Another way of putting this: because of patents, we're in a privileged position of being the only people who are allowed to work on this problem.

So instead of the whole world rushing to make something interesting with this new tech, we get a group of people working in an ivory tower trying to come up with the perfect thing.

If we're lucky, they'll execute well and deliver a compelling product that they have monopoly power over for 10 years, or however long the patents last.

If we're not lucky, they'll bumble around like cable providers trying to develop "valuable add-ons", and the only reason they'll have any success is because no one is allowed to compete with them.

Sorry if that's overly pessimistic, but that's how this article came off to me. I guess patents do have the benefit that we are getting to hear about this development at all instead of it being a tightly-controlled trade secret. And this kind of payoff is what funds the R&D gamble to begin with. I just hope that they actually deliver reusable parts that other people can build into bigger innovations instead of trying to control the innovations themselves.


I don't think it's a patent issue so much as a question of resources. Sure, Google, Apple, Microsoft and others have the resources to develop a similar machine, but a new type of computer would wipe out their business models in a much more devastating manner than HP's. HP has the resources, but also isn't enjoying so much success that it would be scared away from this risk.


If HP invents an insanely superior computer it will still need software, and HP hasn't lit the world on fire with its software. The real victim would be Intel/ARM.

It's not like Linux / Mac OS X / Windows won't run on a computer with a huge amount of RAM and no "hard disk". Most users won't care if a 100x faster computer runs their existing stuff only 10x faster.


You don't think the fact that HP has a bunch of patents around this sort of disincentivizes others?

Also I'm not seeing how radically denser/faster storage is going to make search engine or consumer device business models tank.


Or, on the other hand, Google/Apple/MSFT are actually incentivized to research this technology, precisely because it could disrupt their revenue. When new technology 5-10 years down the road threatens your present business model, it makes sense to research it so that you're prepared to adapt when it arrives.


>> Another way of putting this: because of patents, we're in a privileged position of being the only people who are allowed to work on this problem.

Isn't that fair since they are the company putting the R&D into it?


I don't think patent incentives work this way. HP are not "the only people who are allowed to work on this problem." Patents don't forbid research, only use.

And patents stack: if HP has a patent on the general idea of memristors (for example), and you get a patent on a particular way of making memristors, then no one including HP can make them your way without licensing your patent as well as HP's patent. (This is different from copyright, where if I write the first Harry Potter story and you write a derivative Harry Potter story, I just own your story.)

The ultimate idea is that when you sell something complicated like a car, a share of the money gets divvied up among a bunch of people/companies who contributed to it -- sort of like selling a movie, where royalties get divvied up among a bunch of people/companies who worked on it.

So, R&D incentives: right now HP doesn't have a commercially viable way of using memristors -- they have to invent a bunch of new stuff first. If someone else invents that stuff before HP can, that person/company will share in the megacash that comes from the breakthrough.

The part that is theoretically bad for R&D is that it's literally a gamble to research a particular topic, because whoever makes a particular discovery first wins all the royalties for that discovery. If you think HP's lab is way ahead of everyone else then you might not want to sink a bunch of money into memristors, because they might be about to patent whatever technique you start researching.

On the other hand, this means you ought to sink your research money into a different angle than HP is investigating, rather than just duplicating what they're doing. Your incentive is to find the highest-payoff place to put down your R&D bets, which (we hope) is a net win for everyone.

We talk a lot about the evils of software patents, but mostly that's because the model of "make a car and pay out royalties to people who helped" doesn't seem to translate/scale to software, so the incentive systems break. If you're coming from software you have to be careful translating your intuitions back to hardware, where the incremental-invention/royalty model is more workable.


> Patents don't forbid research, only use.

From what I have read patents disallow others from doing commercial research unless that research falls inside a narrow exception. The exception covers getting regulatory approval for a drug, or for "amusement", but not for other commercial purposes. The exception is specifically disallowed if the research can further the alleged infringer's legitimate business. See: http://en.wikipedia.org/wiki/Research_exemption

> This is different from copyright, where if I write the first Harry Potter story and you write a derivative Harry Potter story, I just own your story.

I am almost certain this is not true:

"Most countries' legal systems seek to protect both [original and derivative] works. They grant authors the right to impede or otherwise control their integrity and the author's commercial interests. Derivatives and their authors benefit in turn from the full protection of copyright without prejudicing the rights of the original work's author." -- http://en.wikipedia.org/wiki/Derivative_work


From the last paragraph:

"The Machine isn’t on HP’s official roadmap. Fink says it could arrive as early as 2017 or take until the end of the decade. Any delivery date has to be taken with some skepticism given that HP has been hyping the memristor technology for years and failed to meet earlier self-imposed deadlines."

So, does anyone actually know whether significant new progress is being made on this project, or is this article just a win for HP's PR department and nothing more?


Well, in the past there was also some hoopla about IBM's Josephson junction and Intel's bubble memories.

After a few years, the subject goes away to die in encyclopedia articles.

If you do not have a viable product after six years, the outlook is bleak ...


Is this about a storage breakthrough with memristors, about alternatives to Von Neumann architecture, or just an incomprehensible press release?


A little of each. The article got kind of meta at the end.

"Any delivery date has to be taken with some skepticism given that HP has been hyping the memristor technology for years and failed to meet earlier self-imposed deadlines."


A little of each, but with a lot of money being thrown at it: three quarters of HP's research.


I so much want this to be true; I can't wait for it. Not having a layered memory architecture means no VM; everything has to be rethought from the filesystem down. On a related note: "Operating systems have not been taught what to do with all of this memory, and HP will have to get very creative." Does anyone know what this could possibly mean?


Well, you wouldn't have to save files. You could just use a data structure in main memory because it's non-volatile. But how would you share data between programs when there's no disk? And you could execute binaries directly from storage without "loading" them into memory first. But you'd have to keep an original copy somewhere in case it modifies itself as it runs.


You'd probably still have memory partitioned into persistent areas and runtime heap of a sort, and you might very well continue to use the file system metaphor because it still makes sense to humans. Whether a file is bits on magnetic disk, in flash memory, or fast persistent RAM isn't all that meaningful to the user. As for sharing, I don't see why this would be an issue. If you have to move memory contents between machines then you still need some sort of external transport, whether it is a network, or a glowing glass cube.


Filesystem semantics are a bottleneck. Fusion IO had to get rid of them when they demoed their billion-IOPS platform last year. https://news.ycombinator.com/item?id=3434711

For communicating data between programs, would you create a filesystem object that points to memory inside your program, or what?


That doesn't make any sense. Create a file, memory-map it, bam, you have RAM performance with "filesystem semantics". (There is NO overhead associated with memory-mapping a file on a ramdisk in the Linux kernel.)

Probably what they're referring to is something more along the lines of replacing SCSI or NFS with a protocol that is a better match for the random access patterns used to access in-memory structures. (Hence their 64-byte I/O size; that is equal to a cache line on Intel architectures.)
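
To make the memory-mapping point concrete, here's a minimal sketch in plain C (nothing HP-specific; the /dev/shm path and the 1 MiB size are just placeholders I picked for illustration):

    /* Map a file that lives on a ramdisk; after mmap() it is just memory. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const size_t len = 1 << 20;                      /* 1 MiB region */
        int fd = open("/dev/shm/example", O_CREAT | O_RDWR, 0600);
        if (fd < 0) return 1;
        if (ftruncate(fd, len) < 0) return 1;            /* size the backing file */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) return 1;
        strcpy(p, "written with plain pointer stores");  /* no read()/write() syscalls */
        munmap(p, len);
        close(fd);
        return 0;
    }

Once the mapping exists, the "file" is accessed with ordinary loads and stores, which is exactly the access pattern persistent memory would want.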


Hm, how would this memristor stuff connect to the system? Will it look like a memory controller or more like PCI-e flash?


It sounds like HP is pitching it as memory.


Yes. On Linux it's /dev/shm and it is a ramdisk that you can access from multiple processes.

IPC (Inter-Process Communication) is a known technology. This won't change it much if at all.
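
As a minimal sketch of the sharing side (POSIX shared memory, which Linux backs with /dev/shm; the object name "/demo" and the 4 KiB size are assumptions, and a peer process is assumed to have created it and written a NUL-terminated string):

    /* Attach to a shared-memory object another process created. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = shm_open("/demo", O_RDONLY, 0);   /* peer made it with shm_open(..., O_CREAT | O_RDWR, ...) */
        if (fd < 0) return 1;
        char *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) return 1;
        printf("peer wrote: %s\n", p);             /* both processes see the same bytes */
        munmap(p, 4096);
        close(fd);
        return 0;
    }

(On older glibc you may need to link with -lrt for shm_open.)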


Continuing to use the file-system metaphor (instead of, I don't know, anything else; perhaps some kind of global namespace with built-in versioning like Urbit's) seems like lobotomising this incredible new paradigm before it is even born.


It might be. I'm not arguing that it is the ultimate best metaphor to use, but these things do tend to be adopted organically and become very entrenched.


Took the words right out of my mouth (better said, of course).


I mean, there should still be a need for both the data and code sections of a program. Similarly, some kind of tree-like structure for naming "file" locations (for interprocess communication). I would imagine, at least in the early days of some diskless architecture, we'd have something that looks very similar to how computing is currently done, but in one memory space.


And you could execute binaries directly from storage without "loading" them into memory first.

Linux already does this. Program code is cached copy-on-write in shared memory. The fact that, say, bash has to be loaded from disk once is pretty much irrelevant. After that, the kernel's just managing pointers.


I think there would still be a need for "files", i.e., a way to store a self-contained document format in memory that is independent of the code working on the data at any given moment: to allow moving documents between computers, but also to allow updating the software in a safe way (i.e. without losing data), to save older versions of a document, and, most importantly IMHO, to be able to create snapshots that can be safely reloaded in case of a system crash. We're not ready to have systems that can't be rebooted.


Filesystems aren't going to disappear. Databases haven't disappeared, even though there exist 256 GB RAM monsters in data warehouses.

The logical structure of a filesystem is still a useful abstraction (and has been decoupled from physical constraints for a long time now).


Some kind of git-like diff-based snapshot system for programs?


What's the issue with RAM disk to address your concern about "no disk"?


It's a lot of unnecessary copying. The data is already in RAM, but you "save" it to a different RAM location, in the RAM disk. Or if you do all your work directly on the file in the RAM disk, you have to deal with filesystem overhead constantly.


Or if you do all your work directly on the file in the RAM disk, you have to deal with filesystem overhead constantly.

Not so. Take a look at the Linux kernel implementation of memory-mapped ramdisk files. There is no "filesystem overhead". Zero. The kernel pretty much sets up the page mapping and then is hands-off.

Even if your memory mapping is backed by a file on a block device, the only overhead is occasional paging to disk. This happens on the order of SECONDS… not "constantly"!


That's assuming you already have a file and know how big it should be. Or does it take the whole space for one file, and every program on the computer shares the space?


Files can be both sparsely and dynamically allocated.
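
A minimal sketch of what that looks like in practice (the filename and the 1 GiB offset are arbitrary): seeking past the end of a file and writing gives you a huge logical size while only the touched block is actually allocated.

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("sparse.bin", O_CREAT | O_WRONLY, 0600);
        if (fd < 0) return 1;
        if (lseek(fd, (off_t)1 << 30, SEEK_SET) < 0) return 1;  /* jump 1 GiB ahead */
        if (write(fd, "x", 1) != 1) return 1;                   /* logical size: 1 GiB + 1 byte */
        close(fd);
        return 0;
    }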


You save things to persistent storage because you want the copy. In the large majority of cases the format on the filesystem is different from the format in memory because the file format is meant to be more readable by humans or by machines with a different endianness or pointer size etc., whereas data in memory is binary and platform-specific to optimize for fast access by the local machine. The benefit of that distinction doesn't go away just because you make the persistent storage faster.
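
A tiny illustration of that distinction (toy value, toy format; real file formats add headers, versioning, and so on): the in-memory copy uses whatever layout is fastest locally, while the on-disk copy fixes byte order so any machine can read it back.

    #include <arpa/inet.h>   /* htonl / ntohl */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t in_memory = 123456789;           /* native byte order and width */
        uint32_t on_disk   = htonl(in_memory);    /* file format: fixed big-endian */
        uint32_t loaded    = ntohl(on_disk);      /* convert back when loading */
        printf("round trip ok: %d\n", in_memory == loaded);
        return 0;
    }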


Virtual memory doesn't exist solely to support swap files. It also allows any program to assume it's the only thing running (which simplifies assembly code) and enables memory-mapping of files and devices.

(Yes, you'll still need to memory-map files when we have memristors. They still won't necessarily be contiguous on disk, and they still won't necessarily exist locally.)


Probably has to do with the way we interact with RAM (memory addresses?) vs persistent memory (file system). Can those two be merged, or is the whole system like a giant RAM disk?


> Fink has assigned one team to develop the open-source Machine OS, which will assume the availability of a high-speed, constant memory store. Another team is working on a stripped-down version of Linux with similar aims; another team is working on an Android version, looking to a point at which the technology could trickle down to PCs and smartphones.

I was hoping for a second there that somebody at HP had read about Loper OS and Stanislav Datsovskiy's ideas[1], or even about Symbolics LISP machines (great article on the frontpage today[2]), but nope, looks like they are just gonna repeat the same insanity of the past fifty years instead of actually doing something that will progress the field of computing.

[1] http://www.loper-os.org/?p=284

[2] https://news.ycombinator.com/item?id=7878679


I guess it would be like an OS that, once it loads and starts, will run forever. You can put your machine to sleep, which will freeze the state, and then bring it back on again and start processing instructions from where it left off.

But if that's the case then they'll have to think of ways to do e.g. OS upgrades - where you normally reboot to flush the old code out of RAM.

Or how will you recover if you load a bad driver and experience a blue screen? Normally you reboot to flush the bad state and start again.

Somehow I think you'll need a separate, cut-down, immutable OS alongside your system so that you can repair things or make updates to the main RAM/storage space.

Or at least that's how I understand it..?


I really don't understand this whole discussion about redesigning operating systems for this. Just because you have a machine that doesn't lose memory state on power failure doesn't mean you can't reboot it.


It means something like this has never been done before, and so HP would be the first to develop the software to do it!


It seems to me HP saw the marketing power of "Watson" (a platform for marketing AI technologies) and are trying to create "The Machine" to build a marketing platform around advanced computer architecture concepts.

Just like IBM had at least a few interesting ideas to give Watson credibility, HP hopes its memristor work will give "The Machine" enough credibility that they won't get laughed out of the room once they parade it around the MSM.


Soooo... this is basically just persistent memory right?

HP has been working on the memristor since 2008. Memristors have already been produced in labs by U of Michigan: https://en.wikipedia.org/wiki/Memristor

This article seeks to give the image that this is a novel idea that investors can take advantage of. In reality, it is not.


Actually, the cool bit about memristors is that you could treat them either as transistors for computation or as memory, and flip between the two. Like an FPGA.

So imagine the difference in an operating system that could retain its running image between reboots, and not have to distinguish things like stack/heap.

And could reconfigure more computing resources on the fly and back again.

It is... a very different view of computing. Exciting though.


So imagine the difference in an operating system that could retain its running image between reboots

What do you mean by this? This sounds like hibernation to me.

not have to distinguish things like stack/heap

I'm lost here too. This is a distinction that is made for purposes of program structure, not to kowtow to the demands of hardware.


It sounds like hibernation, but it isn't exactly. With hibernation, the RAM is offloaded to the HDD and read back into RAM on boot.

With this system your RAM would be persistent. You could pull the plug on your computer at any time, and everything would still be there. It would start in the exact same state it was in before the power loss: no data loss, no corruption.


Okay, imagine that to put your computer into hibernation mode all you need to do is hit the power button. This is instantaneous because none of the data has to be synced to permanent memory, because it's already in permanent, non-volatile memory.

I'm not sure what he means by distinguishing between stack/heap either.


The stack/heap bit was a bit of a jab at the Harvard architecture in general.

I see memristors as being closer to a Von Neumann architecture. http://en.wikipedia.org/wiki/Von_Neumann_architecture

Even then it's not a perfect analogy, admittedly. Sorry for the confusion!

I guess a better way to put it would be being able to get rid of the distinction between "loaded into memory" versus "in RAM" versus "on permanent storage."


An operating system can retain its memory between reboots using only RAM, as long as the reboots are not "cold" reboots (that is, machine powered off and then back on, or reset through hardware instead of software).

It of course depends on hardware support, but it can be done even today on select platforms.


I imagine that operating system, and my fear of rootkits increases exponentially.


The first computer I worked on, a PDP-11/40 with core memory, retained state when powered off :-)


Funnily enough, memristors seem to work in much the same way as old core memory. I'm not sure I really buy the fourth-fundamental-component stuff. Memristors seem to have similar properties to magnetic cores and memory diodes.


What don't you buy about them being the fourth fundamental circuit type?

It's mathematically provable; it was actually proven back in the '60s. Here, watch this for a decent introduction. https://www.youtube.com/watch?v=bKGhvKyjgLY
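
Roughly, the argument pairs up the four circuit variables (voltage v, current i, charge q, flux φ). Two of the six possible pairings are just definitions, and each of the remaining four defines an element; the memristor is the one relating flux and charge:

    dq = i\,dt, \qquad d\varphi = v\,dt \qquad \text{(definitions)}

    dv = R\,di, \qquad dq = C\,dv, \qquad d\varphi = L\,di, \qquad d\varphi = M\,dq \qquad \text{(the four elements)}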


I tend to agree with a comment elsewhere: if memristance is a fourth "fundamental" element of any importance, it would have had to be well-understood (and compensated for) as a parasitic property of other components. It's not, unless you count purely thermal effects.


Well, I can see resistors, capacitors and inductors as fundamental, but when you go beyond that I'm not convinced that a resistor with hysteresis is particularly more fundamental than an inductor with hysteresis (as in core memory), or a capacitor with hysteresis (not sure they've invented that yet), or a diode for that matter. The "we've discovered the fourth fundamental component" stuff seems a little like hype to me. Not to knock its potential value for building computers.


There's a huge difference between the lab and a production model.


Creating a memristor in a lab is step 1. Step 2 will involve creating them at scale and building out all new infrastructure since current OSes aren't designed to use them. Step 3 is to productize step 2 and put on store shelves. HP is doing step 2 right now with the result being a prototype for step 3.


Link to the printable version, for those who don't like paginated stories:

> http://www.businessweek.com/printer/articles/206401-with-the...


Thanks. We changed the url to that.


It also correctly works with PageDown.


The biggest difference the memristor brings is to Machine Learning. The memristor is a first step towards a hardware implementation of Hebb's rule - "neurons that fire together wire together". Hebb's rule variants are used in machine learning algorithms to do things like web-search, image/object recognition, etc. (Specifically, Hebb's rule can be used to compute PCA.)

It's interesting to note that the brain does not have separate components for memory and computation. Every neuron computes and stores at the same time.
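
As a toy illustration of the learning-rule side (not how a memristor array would actually be driven; the data, dimensions, and learning rate are all made up), Oja's variant of Hebb's rule pulls a weight vector toward the first principal component of roughly zero-mean inputs:

    #include <stdio.h>

    #define DIM 2

    int main(void) {
        /* toy near-zero-mean samples, mostly varying along (1, 1) */
        double x[][DIM] = {{1.0, 0.9}, {-1.1, -1.0}, {0.8, 1.1}, {-0.9, -0.8}};
        double w[DIM] = {0.5, -0.2};
        double eta = 0.05;                                /* learning rate */

        for (int epoch = 0; epoch < 200; epoch++) {
            for (int s = 0; s < 4; s++) {
                double y = 0.0;                           /* y = w . x ("fire") */
                for (int i = 0; i < DIM; i++) y += w[i] * x[s][i];
                for (int i = 0; i < DIM; i++)             /* "wire together", with decay */
                    w[i] += eta * y * (x[s][i] - y * w[i]);
            }
        }
        printf("w = (%.3f, %.3f)\n", w[0], w[1]);         /* ends up roughly along (0.71, 0.71) */
        return 0;
    }

The appeal of memristors here is that the weight-update step could in principle be a physical property of the storage cell rather than a separate compute pass.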


I don't think that's correct. Doesn't the hippocampus store memories?


> Most applications written in the past 50 years have been taught to wait for data, assuming that the memory systems feeding the main computers chips are slow.

... and they'd still have to wait for data, knowing that modern apps love pulling data from the network.


Sounds really neat, but "2017 - next decade" is the engineering estimate version of "sometime, maybe never".


Yup. They lost me at "The company says it will bring the Machine to market within the next few years".


I have my doubts too. Also, the announcement looks a bit like the earlier case of HP almost betting the company on the acquisition of Autonomy (if that was the right name of the company it acquired a few years ago, before writing it off).


It really is incredible, the innovation that comes out of these Silicon Valley startups!


Startups can't throw a billion dollars and a decade at an MVP. This is not an argument against startups.


HP hasn't been a startup since World War II


Yet another article about HP's future monopoly on computing that doesn't mention PCM, RRAM, STT-RAM, etc.


If you want to start developing for this class of machine, you can build a prototype using DRAM, which is equivalent as long as the power stays on.


That's pretty awesome. Memristors are very promising for creating fast, high-density, non-volatile storage, and it's good to see that HP's seeing that in the midst of the solid-state flash memory craze and working on commercial applications for them.

What's also awesome is that - according to the article - HP plans on open-sourcing its custom "Machine OS"; rather refreshing coming from a company that's traditionally released its own operating systems under non-free licenses.

I'm not normally a fan of HP (aside from their printers), but seeing them go after this kind of stuff is certainly exciting.


Totally bought ten shares of HPQ after learning about that. Who knows, their plan might actually work. Go HP!


I bet Businessweek has some too, the way it's written.


There's a catch, of course -- always a catch: a truly nonvolatile memristor violates the Second Law of Thermodynamics. But see:

http://www.nature.com/ncomms/journal/v4/n4/abs/ncomms2784.ht... (open access)

The physics is (unsurprisingly) rather involved and I don't have time to decipher it, but, yeah, here's the guts.


"HP’s bet is the memristor, a nanoscale chip that Labs researchers must build and handle in full anticontamination clean-room suits."

So... it's a chip.


See some of the other answers.

The memristor is not a chip, but the 4th fundamental electrical component; the chip is a hybrid memory/storage chip using memristors (instead of platters or capacitors). Because of its nature it demands a pretty thorough reimagining of processing data.


What he's saying is that that particular aspect of how they're made (requiring a clean room) is nothing special.


Is it web scale? Can we make it social?


Obviously; it's been designed upfront to be SOLOMO.


It's too bad they haven't caught up to MoLoSo yet.


Will it be able to drive itself? Or think?


TL;DR: they are hyping up memristors and hinting at HP trying to bundle them with some of its proprietary crap if/when it finally delivers.


You know a lot of programs that know how to use memristors?


No, but I can name a lot of programs where the authors never even considered how hard disks or RAM worked.


There are quite a few smug replies to you, but my siblings are missing that the interesting thing about memristors is their potential for putting dynamically configurable computation (using implication logic) directly inside high density storage.

Taking advantage of such systems will not be business as usual, it'll look even more foreign to us than CUDA does to a Javascript developer.

Sure, you can use an interface and use them just like a regular storage device, but until and unless you can get the cost per bit below flash it's not interesting at all.

If that was all memristors could do, it's not even clear that they would be in active development - this is a moon shot by HP (no pun intended).
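
To give a flavour of what computation-in-storage means here: the logic primitive usually proposed for memristor crossbars is material implication (IMPLY), from which NAND (and hence everything else) can be built. This is just a truth-table illustration in ordinary C, not device-level code:

    #include <stdio.h>

    static int imply(int p, int q) { return !p || q; }      /* p IMPLY q */

    /* NAND from IMPLY plus a constant 0: NAND(p, q) = p IMPLY (q IMPLY 0) */
    static int nand(int p, int q) { return imply(p, imply(q, 0)); }

    int main(void) {
        for (int p = 0; p <= 1; p++)
            for (int q = 0; q <= 1; q++)
                printf("p=%d q=%d  IMPLY=%d  NAND=%d\n", p, q, imply(p, q), nand(p, q));
        return 0;
    }

The idea is that, in a memristor array, the IMPLY operation would be performed on the stored bits in place, which is where "reconfiguring storage into compute" comes from.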


All of them?

It's just another type of non-volatile storage.


I think the point here is not "knowing how to use memristors". Sure, it is possible to use them simply as another type of non-volatile memory. But it's the same as having an airplane and driving it on the road.

Memristors, the way they are hyped, promise to be orders of magnitude smaller, faster, cheaper AND less energy consuming than all of the other memories that exist. All at the same time. If time proves this all to be true, it can bring really groundbreaking changes to computing. And we all will be able to watch more funny cat videos, at a greater resolution.


That's...it's not...goddamnit. That's what we have drivers and operating systems for. :(


What's the long-term reliability/durability of memristors? Do they die after too many reads or writes?


Now THIS is "Hacker News". Perhaps a slightly unlikely source (does businessweek usually have this quality?) but v interesting article and discussion here. Sorry not to be adding more substantively to it, just glad it's here. :)


Seems like a test platform for memristor tech.


Is there a better way to test a tech than to make a product with it?


In a way it doesn't matter does it? If they can't sell it then they're not going to be supported by shareholders/traders to make it. The oft quoted maxim here is that "companies must work to maximise their profits" - that's our capitalism; if they do the research and don't make a product then it's wasted money from a profit-based perspective.

It's actually really encouraging to me that a company could be doing R&D that's not going to have an immediate pay-off. That those with the finances to push technological development are looking beyond the ends of their noses for a change.


Well, they announced it cautiously back in 2006; Popular Science ran an article on it. They've done some press, mainly with EE magazines, since then.

This technology has been in R&D for nearly a decade. I figure it's pretty mature.


> This technology has been in R&D for nearly a decade. I figure it's pretty mature.

Funny, I read that the exact opposite way. If it was mature we'd be seeing products.


>Martin Fink, the chief technology officer and head of HP Labs, who is expected to unveil HP’s plans at a conference Wednesday.


Plans are not products, but let's wait and see before we get all happy.


Yeah my thought too--I'll believe it when they actually release a product. It's like most research projects--initial results are great. Having it on the market means it really exists though...


GMR went to market from discovery within a decade:

http://www.research.ibm.com/research/gmr.html

I've been hearing for so long about the memristor revolution being just around the corner that I will simply wait until I can buy it before I get happy. It's cynical, for sure, but HP has been hyping this one time too many for me.

I do think it is great that they're working on this as hard as they are and I think if it pays off HP will be worth more than Apple by the time they're done. A fundamental break-through of this magnitude will be worth a fortune the likes of which we can probably not even imagine.


No. I didn't mean that as a criticism.


How about a thumb drive instead of a server farm?


This computer may take as long as a decade to build, according to the article. Why did HP choose to reveal their hand so early? Isn't it preferable to take everyone (especially competitors) by surprise?


> Fink has assigned one team to develop the open-source Machine OS, which will assume the availability of a high-speed, constant memory store.

It's great that they decided to make it open from the start.


As for a new OS, single-level-store OSes should work just fine in such an environment.


Various forms of naming hierarchies and tag databases are more natural for humans, though, than a numbered array.


The article mentions that

> Fink has assigned one team to develop the open-source Machine OS, which will assume the availability of a high-speed, constant memory store.

I have to ask: if this is open-source, where is it? Perhaps my google-fu is weak, but I can't find anything but news articles talking about "The Machine".

EDIT: Aha, found mention in some articles that it "will be made" open source. Future tense. That makes more sense.


They talk about memristors being the fourth type of circuit element and the final element at that.

It's just a resistor with memory, i.e. its R value can be changed by current and remembered. Wouldn't an inductor with memory, in the sense that its L is changed by voltage or current and remembered, be a fifth, and a capacitor with a C that is changed and remembered be a sixth?


In case there's anyone in here who knows: why does the Wikipedia page for memristors say that this 4th circuit element is proven to be impossible to create in physical reality, due to the laws of thermodynamics?

Either HP must be lying, physics must be wrong, or whoever figured out this "proof" must have screwed up.


I think there's STILL debate about whether you can actually make a REAL memristor, or whether you can merely make something that behaves exactly as a memristor would.

So, in theory you can't make one, but in practice you can. Something like that.


> make a REAL memristor, or whether you can merely make something that behaves exactly as a memristor would.

Alan Turing would say there is no difference.


Or Wikipedia is wrong, or you misread Wikipedia.


I'm looking forward to the list of exploits that emerge post-release. Exciting stuff, particularly in terms of moving OS development forward independent of the usual suspects.


I am happy to see them emerging as an R&D power. They haven't had much to show for themselves lately, so I wish them well on this.


Can someone please explain how can light be used to transfer data in computers?


I assume they're talking about fiber optics, or something similar. An optical fiber "wire" made of silica or plastic carries photons the way copper carries electrons. Most telecom trunks use them these days.

The only thing is that I've never heard of fiber optics being used inside a machine, only to connect one machine to another. I imagine the principles are similar, though.


My first thought is "whoa, they are going to have to figure out how to make those screaming fast to be able to keep up with the speed of SSD's by the time memristors are released" (though I guess the appeal of practically unlimited storage is nice...)


Professor F(r)ink??


> could replace a data center’s worth of equipment with a single refrigerator-size machine

Ten years later it will fit in your pocket.


The government has a secret system: a machine ...


HP needs to show up at my door with a check for $599.00 plus tax for a laptop that I used 5 times and that died on day 366! Nerds remember. I don't think this podunk company realizes that once you lose a customer, you lose them for a long time. The reverse is also true: once a male buys a product he likes, you have a customer for life. It's taught in every business school. Men just want the damn thing to work. Oh, and hire a CEO who knows a little bit about engineering. That last sentence goes for all hardware companies.



