Possible unconventional computing techniques of the future (nautil.us)
53 points by diego898 on Feb 12, 2015 | 37 comments



I was expecting a bit more from nautil.us. There are a lot of alternative computation mechanisms, but they don't lend themselves to the environments where we put computers (you're not going to run a satellite on slime mold). Building better biological diagnostics? Sure. But in the computer space where Moore's Law is often cited, graphene or other carbon structures, or even silicene, seem more likely, and 3D structures even more so.


A little rewording, and we get:

> Transistor computing is naturally parallel, aidenn0 says, with computations taking place simultaneously at every logic gate


Reminds me of another recent article, which I think I might also have found here at HN:

http://www.damninteresting.com/on-the-origin-of-circuits/


Boy, I should stop reading HN. The more I read about the scientific advancements that are going on in this field, the more I think about getting a second degree in, say, biology, to try to work on biological computers or stuff like that.


In my opinion, Moore's law is already fine for another 15 years.

(15 years / 18 months = 10, so 10 iterations.) Moore's law is about transistor count.

The reason it's fine to have up to 1024x as many transistors (2^10, since Moore's law is about doubling):

>"Moore's law" is the observation that, over the history of computing hardware, the number of transistors in a dense integrated circuit doubles approximately every two years.

Oh, I see it says 2 years, not even 18 months as I'd thought.

Anyway, the reason it will continue just fine is that we are printing transistors onto one tiny few-millimeter slice (a plane), whereas by 2035 there will obviously be multiple layers (3D). This is totally obvious since

https://www.google.com/search?q=c+%2F+4+ghz

and we're already at 7.49481145 centimeters travelled per clock cycle. Etchings are at 14 nm already. A carbon nanotube is about 4 nm wide, and the carbon atom itself has a diameter of only about 0.14 nm; you're not going to be inserting your etchings into the quarks and protons of atoms and hoping to do your calculation there.
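
A quick sketch of that arithmetic (the 4 GHz clock and 14 nm node are the figures quoted above; everything else is just the speed of light):

    c = 299_792_458            # speed of light in vacuum, m/s
    clock_hz = 4e9             # 4 GHz clock, as in the comment
    feature_m = 14e-9          # 14 nm process node

    per_cycle_m = c / clock_hz
    print(per_cycle_m * 100, "cm per clock cycle")    # ~7.49 cm, matching the figure above
    print(per_cycle_m / feature_m, "feature widths")  # ~5.4 million 14 nm features per cycle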

It's completely obvious that before very long we'll take those ~8 centimeters light travels per cycle at 4 GHz and route them through stacked layers instead of zigzagging across a plane, a single slice, when we could stack thousands of them.

There's nothing wrong with Moore's law except the fact that people are too good at shrinking die size, so it'll be a while before they think inside the box (outside the plane).


You're right that going 3D is a natural thing, but there appear to be pretty serious manufacturing challenges. The most obvious one: chips already take a pretty long time to manufacture - apparently the "latency" of a typical fab is on the order of weeks. If you double the number of layers, you double this latency.

So the only way to get serious about 3D is to produce thin slices in parallel and then put them together somehow. As a corollary, this means that we're most likely never going to see a logical unit such as an entire core being distributed across two layers of transistors.
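
A toy model of that trade-off, just to make the argument concrete (every number here is an assumption for illustration, not real fab data):

    layer_latency_weeks = 10    # assumed end-to-end fab latency for one device layer
    layers = 8                  # assumed number of stacked transistor layers
    bonding_weeks = 1           # assumed time to align and bond finished slices

    sequential = layer_latency_weeks * layers        # build each layer on top of the last
    parallel = layer_latency_weeks + bonding_weeks   # build slices in parallel, then stack

    print(sequential, "weeks if layers are grown sequentially")   # 80
    print(parallel, "weeks if slices are made in parallel")       # 11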

This doesn't invalidate your point, of course. We're likely to see designs where multiple dies, each with several cores, are stacked on top of each other.


(I don't know where you get off concluding "never" but whatever.)

But you're right, there are very (extremely) serious manufacturing challenges. The fact that people were so good at shrinking die size is what has led to the lack of a third dimension. I guarantee that if they couldn't keep shrinking, they would have to go 3D regardless of manufacturing challenges; it's the only direction left to go (other than adding more cores, which they have also done).


Well, there is a "most likely" there ;)

But yes, I highly suspect that technology will never reach a point where multi-layer cores make sense. Part of the reasoning is that even if you reduce distances, you're still fighting with transistor switching speed, and individual cores are already very small. I do believe that designs with cores on one slice and L3 cache on another slice may happen, by the way.

Another part of the reasoning is that (even if manufacturing can largely avoid the multiple-slice-then-glue method), the density of inter-layer connections will likely be much lower than the wiring density on the lower metal interconnect layers [0], which makes it unattractive to have closely tied logic spread across different layers. There are theoretical academic papers talking about the physical design of e.g. adders spread across a small number of layers, but they're frankly not very impressive, and their assumptions about the electrical properties of the inter-layer connections appear a bit on the optimistic side (understandable because hey, you've got to publish something!).

Overall, it's just a big headache, compared to the alternative of going 3D by stacking relatively large logical components such as cores and cache blocks.

[0] To be fair, I haven't seen numbers on this. I don't know how familiar you are with the typical stacks of interconnect metal, which consist of a very small number of layers for the thinnest wires, and then additional layers of increasingly (order of magnitude) larger wires for longer distance connections. In a hypothetical multi-transistor-layer design, how do you arrange those interconnect layers? To have a high density of connections between the layers, you want to only use layers with thin wires between the transistor layers, but then it's unclear how the longer distance wiring should work.

Edit: Let me formulate my position as a more concrete prediction. There will never be a major commercial microprocessor in whose design an automated tool is used to decide the assignment of a significant fraction[1] of transistors to their layer on an individual basis[2].

[1] Meaning a significant fraction of the random logic transistors; with growing caches, the fraction of transistors which is placed by an automatic tool is decreasing anyway.

[2] Meaning that the tool makes individual decisions about fundamental building blocks such as NAND gates. It is slightly more conceivable that an automatic tool is used to assign larger blocks (such as an FPU) to different layers. I would still predict that this won't happen, but with lower confidence.


You should read "The Hazards of Prophecy" http://www.sfcenter.ku.edu/Sci-Tech-Society/stored/futurists... by Arthur C. Clarke


"A ternary lookahead algorithm does not yet exist, and other algorithms that would be needed for ternary to be practical are also missing."

Really? I'm pretty sure carry-select lookahead would work for ternary.
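
A minimal sketch of the idea in base 3 (unbalanced ternary digits 0-2, least-significant digit first; this is only an illustration of carry-select addition, not the parent's design). The key point is that the carry into any block is still only 0 or 1, so each block can be computed for both possible carries and the right result selected:

    def ripple_add_block(a_digits, b_digits, carry_in):
        # Base-3 ripple-carry addition of one block of digits.
        out, carry = [], carry_in
        for a, b in zip(a_digits, b_digits):
            s = a + b + carry
            out.append(s % 3)
            carry = s // 3          # still only 0 or 1, even in base 3
        return out, carry

    def carry_select_add(a_digits, b_digits, block_size=4):
        # Assumes equal-length digit lists. Each block is computed twice,
        # once per possible carry-in, and selected when the real carry arrives.
        result, carry = [], 0
        for i in range(0, len(a_digits), block_size):
            a_blk = a_digits[i:i + block_size]
            b_blk = b_digits[i:i + block_size]
            sum0, c0 = ripple_add_block(a_blk, b_blk, 0)
            sum1, c1 = ripple_add_block(a_blk, b_blk, 1)
            result += sum0 if carry == 0 else sum1
            carry = c0 if carry == 0 else c1
        if carry:
            result.append(carry)
        return result

For example, carry_select_add([2, 2, 2, 2], [1, 0, 0, 0]) gives [0, 0, 0, 0, 1], i.e. 80 + 1 = 81 in base 3.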


Could you expand? :)


What a strange hook. "Moore's law is running out and we soon won't be able to squeeze more performance out of transistors, so... here's a bunch of computing technologies that may have a niche but stand no chance of having better performance than transistors even when optimized." It's a fine article, but the hook doesn't match it.


I agree with you 95%. The 5% is an admittedly far-fetched and idyllic notion that maybe one day we will design the mythical x86 bacterium, and all of a sudden, instead of needing meticulously crafted silicon wafers and logic gates, all we need is cheap, simple hydrocarbons to grow the world's most elaborate supercomputers. They wouldn't be faster, but they'd be more efficient and highly parallel. Like a more distributed version of a brain, where bacterial (or whatever) cells take the place of traditional neurons.


While it's true that there's an unlikely but nice scenario in which it becomes ultra-cheap to make biological processors, I take some issue with the idea of them being "highly parallel."

What stops us from using more highly parallel chips right now is not the expense of silicon; it's that highly parallel chips turn out to just not be as useful as fast chips. Nothing about the ultra-cheap biological computers scenario would change that. That would be a world of omnipresent computing, not one of parallel computing.
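
One standard way to make that concrete (my addition, not the parent's argument) is Amdahl's law: if only part of a workload parallelizes, piling on cores stops helping quickly, whereas a faster core speeds up everything:

    def amdahl_speedup(parallel_fraction, cores):
        # Amdahl's law: overall speedup when only part of the work parallelizes.
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # With 90% of the work parallelizable (an assumed figure), speedup saturates near 10x:
    for cores in (2, 8, 64, 1024):
        print(cores, round(amdahl_speedup(0.9, cores), 2))
    # 2 -> 1.82, 8 -> 4.71, 64 -> 8.77, 1024 -> 9.91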


Haha, the x86 bacterium. I've been looking for that term for so long.


Yes, that is a disappointingly baity title. We changed it to one that attempts to describe the article accurately, but if anyone can suggest a better one, we'll change it again.


I think I would prefer it if progress in hardware efficiency just stopped when Moore's Law finally becomes invalid. Then hardware wouldn't become obsolete so quickly. That would surely be good for the environment, and for the poor.


I'm having trouble counting the ways this comment is invalid. It seems to be invalid at each word:

1) Hardware does not become obsolete so quickly: It is good that hardware becomes obsolete. I can't for the life of me imagine that the world would be a better place if state-of-the-art CPUs were the 8 MHz Z80 my computing life started with.

2) Slower CPUs would be good for the environment: How? By needing more iron for a given task? More energy? Is it by disallowing hugely detailed climate simulations? How???

3) Slow CPUs would be good for the less economically privileged: The cheapest CPU that can be bought today is some orders of magnitude faster than an 8 MHz Z80. How would the less privileged be better off if CPU evolution had stopped in the early 80s?


You should apply http://en.wikipedia.org/wiki/Principle_of_charity

I don't think the OP was making the assertions you are attributing to them. I think they are arguing that a flatter gradient in the improvement of computers would encourage the diffusion of comparable levels of computing power to everyone, including the poor. And that if computers didn't become obsolete so quickly, we wouldn't be buying and throwing them away every year.

Early on the computation curve this was true: computers had 20K to 64K of memory for a couple of decades. Remember the 386-to-486 upgrade adapters in the 90s? We no longer have that upgradability.

"invalid in each word" is a bit rude.


I never understood people who complain about computers rapidly and continuously getting better and cheaper. That's an incredibly short-sighted point of view. Your old computer would still run Windows 3.1 the same as it always did if you hadn't traded it up for something better. The creation of better machines didn't make your old machine any worse except through your raised aspirations. The poor have benefited immensely from Moore's Law and related effects. What used to require a corporate datacenter can now be had in a cheapo android tablet. The environment has benefited similarly through greatly improved computer-aided design and logistical efficiency. Wishing for that process to stop is a highly self-destructive line of thought.

If the problems of e-waste and computational access for the poor are issues you care about then go work for a computer recycle/reuse center. While you're there, cheer on Moore's Law for bringing better and better tech to the center's inbox by making it so cheap that people are happy to give it away for free!


And once again we would learn how to write efficient software, yay.

I can't imagine this happening though (before another paradigm shift).


Yeah, in the desktop era, you had huge amounts of bloat. Programmers assumed there was one user sitting behind the machine, and they were only doing one thing at a time, which was essentially true.

Performance matters more with mobile and cloud (sorry for buzzwords) -- in mobile because of battery life, and in cloud because a single machine supports so many users. And because we got stuck at around 3 GHz, we now actually have to write concurrent and parallel software.

Although I suppose what appears to be happening isn't that application programmers are writing more efficient code. Instead, the stack is getting taller, and systems programmers are applying JIT techniques to high-level code -- e.g. v8, PyPy, HHVM, Julia, Swift, other LLVM users, etc.

So certainly some programmers are writing very efficient code -- there seems to be a resurgence in that. But it seems to be systems programmers and not application programmers. For applications, productivity and high-level abstractions like DSLs seem to be more important than performance (e.g. all these new and bloated JS frameworks).


Come on over to embedded software! It's all about efficiency, space, power, bytes, and bandwidth.


You have no idea how good that sounds. "Enterprise" programming is soul-destroying, and the current vogue for mile-high node-based JS stacks looks even worse.

I miss the good old days of futzing around in 68k assembler for fun and... well, just fun, really.


You see this somewhat in mobile computers (phones). The power and thermal constraints limit performance, so developers limit features compared to their desktop counterparts.


I don't think anything has really changed. iOS 7 ran sluggishly on an iPhone 4, just as Windows Vista did on machines that came out when Windows XP was new.


It's already happening with tiny cloud-based computing images. MirageOS gets you a web server that's less than one megabyte.


Yeah, and software authors would have to get better and more efficient too...


I'd happily pay for a new version of Windows XP/7, Office, Adobe CC, ______ that was 20% faster/more efficient, with the exact same feature set.

Sadly, I don't think the marketing departments realize this. They honestly believe that only new features sell computers.


You'd pay for speed-ups that come from reworking algorithms to make them more efficient, but how do you avoid someone adding wait loops in strategic places and then selling you the software again after taking them out? IBM sells you an increase in processing power for your mainframe by enabling processors that were already there the whole time but disabled by configuration. I don't think I like that business model.


The key is, I have a baseline now: a usable system that is X fast. OSes are mature products, with several alternatives. That keeps people honest.


What about the sizeable parts of capitalism that aren't environmentally focused or that are economically comfortable?

What about the even poorer people who only receive new hardware when richer people buy upgrades?


> What about the even poorer people who only receive new hardware when richer people buy upgrades?

Good point. I suppose a better solution for the problem of hardware obsolescence caused by software inefficiency would be for conscientious software developers to deliberately use underpowered computers. I'm ashamed that I bought a new computer with a Core i7-4770 processor and 32 GB of RAM about a year and a half ago.


I'm feeling this pretty strongly as a minimally part-time freelance developer; I can't afford new hardware, and am coding on a machine that was at best a mediocre performer seven years ago. Startup time for fairly trivial applications is significant running Lubuntu, and starting a Clojure app is painful; I just timed Clooj at 1:48.

I find that it affects my decision whether or not to optimize code; I feel it sooner than I remember doing when I was on a more powerful machine, and that makes me decide, "That inner loop has to go." It also affects my choice of language; I'm more likely to use Pascal or Nim than Python, because the result feels better when I run it.


Physical hardware has a lifetime of only a few years (motherboard components, heat, and moisture). But yeah, if the designs could last longer than a prime-time sitcom, that would be nice.


>less economically privileged.

wow


All right, I don't know why I said it like that. Fixed.



