A New Approach to Multiplication Opens the Door to Better Quantum Computers (quantamagazine.org)
82 points by kristianp on April 27, 2019 | 26 comments



I wrote the paper that this article is describing [1], and can answer questions.

The paper is probably more approachable than you think. Basically I rewrite code like `let intermediate_value = recursive_call(...)` into code like `output += recursive_call(...)`, which allows you to make recursive calls in a way that avoids ever storing the intermediate value. That's important in contexts where you can't simply discard information, namely quantum computing.

[1]: https://arxiv.org/abs/1904.07356
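A classical sketch of that rewrite (illustrative names and a toy computation, not the paper's actual code): instead of a recursive call returning a value that the caller must store, each call adds its contribution directly into an accumulator passed down by reference.

```python
def square_returning(x: int) -> int:
    # Conventional style: the intermediate value is materialized.
    if x == 0:
        return 0
    return square_returning(x - 1) + 2 * x - 1  # (x-1)^2 + (2x-1) = x^2

def square_accumulating(x: int, output: list) -> None:
    # Rewritten style: effectively `output += recursive_call(...)`,
    # so no intermediate value is ever stored.
    if x == 0:
        return
    square_accumulating(x - 1, output)
    output[0] += 2 * x - 1

acc = [0]
square_accumulating(7, acc)
assert acc[0] == square_returning(7) == 49
```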


No questions right now, great work! However, one remark:

I suspect you might find the Turing Completeness & Recursive Types "off hand" result from the following paper of interest:

http://drops.dagstuhl.de/opus/volltexte/2017/7276/pdf/LIPIcs...

To quote:

"We further showed that certain variants of DOT’s recursive self types can be integrated successfully while keeping the calculus strongly normalizing. This result is surprising, as traditional recursive types are known to make a language Turing-complete."


How do you deal with reversibility? I assume that is the hard bit, as tail recursion is not new (and I would have assumed been the first thing picked up if it was easy)


It's not really tail recursion; it's just similar in that you get the recursion into a particular form in order to enable an optimization that avoids the need for an intermediary.

For tail recursion, that form is that the last thing executed in a recursive method should be "return recursive_call(...)". This allows you to tell the sub-call to give its result directly to the current method's caller, avoiding the need for the current method to act as an intermediary bouncing the result from its callee to its caller.
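In code, the tail-call form looks like this (minimal sketch; note Python itself does not eliminate tail calls, so this only shows the shape):

```python
def sum_to(n: int, acc: int = 0) -> int:
    # The recursive call is the very last thing executed, so in a
    # language with tail-call elimination the callee could hand its
    # result straight back to the original caller.
    if n == 0:
        return acc
    return sum_to(n - 1, acc + n)  # tail position: no work after the call

assert sum_to(100) == 5050
```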

In this paper the special form is "output += recursive_call(...)", where output is a mutable reference to the location where the result should be stored. This also allows the current method to avoid acting as an intermediary between its callee and its caller, since the callee will take a mutable reference to (a subsection of) the output and directly mutate it.
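A classical sketch of that special form applied to Karatsuba-style multiplication (hypothetical helper names; this toy uses signed accumulation and five accumulate calls where the paper arranges for three, so it illustrates only the "output += recursive_call(...)" shape, not the paper's construction):

```python
def mult_add(x: int, y: int, out: list, shift: int = 0, sign: int = 1) -> None:
    # Performs out[0] += sign * x * y * 10**shift, with every recursive
    # step in the "accumulate into the output" form rather than returning
    # an intermediate product.
    if x < 10 or y < 10:
        out[0] += sign * x * y * 10 ** shift
        return
    m = min(len(str(x)), len(str(y))) // 2
    x1, x0 = divmod(x, 10 ** m)
    y1, y0 = divmod(y, 10 ** m)
    mult_add(x1, y1, out, shift + 2 * m, sign)        # high * high
    mult_add(x0, y0, out, shift, sign)                # low * low
    mult_add(x1 + x0, y1 + y0, out, shift + m, sign)  # middle term...
    mult_add(x1, y1, out, shift + m, -sign)           # ...minus high
    mult_add(x0, y0, out, shift + m, -sign)           # ...minus low

result = [0]
mult_add(1234, 5678, result)
assert result[0] == 1234 * 5678
```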


If I'm understanding you correctly (and I haven't RTFA yet, sorry) this is a bit like Operational Transform, allowing you to run some or all steps in parallel and achieve eventual consistency, as opposed to TCO, which simply restates recursion as iteration but still requires the operations to run in sequence.


The thing I'm describing doesn't allow the steps to run in parallel. In fact it prevents it, because each of the three recursive calls requires exclusive access to overlapping regions of the output.



One should clarify that techniques such as these, which get described as "Vedic", seemingly have nothing to do with Indian mathematics of the Vedic period.

https://en.wikipedia.org/wiki/Vedic_Mathematics_(book)

https://en.wikipedia.org/wiki/Indian_mathematics#Vedic_perio...


I learned to do multiplication in my head in a similar way, except I'd usually split on multiples of 10 for simplicity.

For 25x63, I would go: 60*10*2 + 3*10*2 + 5*60 + 5*3


That's schoolbook multiplication. It has quadratic complexity in the number of digits.
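The quadratic cost is easy to see in a direct sketch (illustrative code): every digit of one operand meets every digit of the other, so multiplying two n-digit numbers costs about n^2 elementary digit products.

```python
def schoolbook(x: int, y: int) -> int:
    # n digits of x times m digits of y gives n*m digit products.
    xd = [int(d) for d in str(x)][::-1]  # least-significant digit first
    yd = [int(d) for d in str(y)][::-1]
    total = 0
    for i, a in enumerate(xd):
        for j, b in enumerate(yd):
            total += a * b * 10 ** (i + j)
    return total

assert schoolbook(25, 63) == 1575  # the 25x63 example above
```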


I hate to repeat myself, but analog quantum computers are the future. Digital quantum computers are dead on arrival.


“Digital” quantum gates on superconducting quantum processors are implemented by changing the system Hamiltonian (e.g. by changing the magnetic flux through the qubit's Josephson-junction loop, or by driving the qubits with microwave radiation), so in that sense those quantum processors are analog.

Most current quantum algorithms require a series of discrete operations to be performed (e.g. the Grover search is basically a for loop that executes a phase-encoding search function and a diffusion operator in each step), so they require a specific gate set to run them.

That said, researchers have also used superconducting quantum processors to simulate other quantum systems using an analog approach, which is often more efficient than using a digital approach (so-called Trotter-based quantum simulation), so both techniques are relevant, I'd say.
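That "for loop" structure of Grover is visible in a toy state-vector simulation (numpy sketch with assumed sizes, not code for any real processor):

```python
import numpy as np

N = 16       # search space size (4 qubits' worth of basis states)
marked = 11  # the index the oracle recognizes

state = np.full(N, 1 / np.sqrt(N))  # uniform superposition

for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):  # ~ pi/4 * sqrt(N) steps
    state[marked] *= -1               # phase-encoding oracle
    state = 2 * state.mean() - state  # diffusion: reflect about the mean

# Measuring now would find the marked index with high probability.
assert state[marked] ** 2 > 0.9
```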


> Most current quantum algorithms require a series of discrete operations to be performed

Ok so what?

https://en.wikipedia.org/wiki/Continuous-variable_quantum_in...


That means that an analog quantum computer will not give you much advantage when running e.g. Grover search if there's no more efficient implementation using a continuous-variable approach (and to my knowledge there isn't), so it is not even clear how analog quantum computers could replace the gate-based approach there.


Like, I'm not particularly married to Grover's algorithm or any other gate based algorithm. Throw them all out of the window for all I care.


You keep posting this and it’s just completely unsubstantiated.


It’s not. Look into the work of Alessio Serafini https://www.amazon.com/dp/1482246341/ref=cm_sw_r_cp_api_i_UH...

Also there have been articles on this very topic posted on hn.


Why?


They are much faster, and they can solve optimization problems very naturally.

Analog computing used to be a thing before WW2. It was dope for numerical computation but fell out of fashion due to rift. Photonic analog quantum computing is the future.


Analog computing has problems of accumulating error that can’t be resolved.

The more sequential operations you have to perform, the more the error accrues, because, unlike in a discrete system, you have no ability to filter noise.

That’s why analog computers “fell out of fashion”. That and the relatively restricted programmability, higher power usage, etc.

There are some problems where analog can be faster, but the moment you have consecutive operations you need to accept imprecise answers.
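The point above can be illustrated with a toy simulation (assumed noise level, purely illustrative): a chain of noisy identity operations, with and without a restoring threshold after each step.

```python
import random

random.seed(0)
analog = digital = 1.0  # both channels encode logical "1"
for _ in range(1000):
    analog += random.gauss(0, 0.01)   # error accumulates as a random walk
    digital += random.gauss(0, 0.01)
    digital = 1.0 if digital > 0.5 else 0.0  # discrete restoration step

# analog has drifted on the order of 0.01 * sqrt(1000); digital is exactly 1.0
assert digital == 1.0
```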


What happens to this rift if it's operating on an equation with a derivative such as the gradient of a neural network?

I'm a huge fan of deterministic computation, but we seem to be doing okay without it in deep learning with respect to floating-point round-off error. Even where the computation could be deterministic, the algorithms are often written in a way that they are not, due to asynchronous accumulation and computation.

Would we just get a different minimum each time we run or would something more pernicious occur?
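The round-off nondeterminism being tolerated is easy to demonstrate (minimal example): the same three terms summed in two orders give different answers, which is why asynchronous accumulation makes results vary run to run.

```python
# Floating-point addition is not associative:
a = (1e16 + 1.0) - 1e16  # the 1.0 is absorbed by rounding: 0.0
b = (1e16 - 1e16) + 1.0  # 1.0
assert a != b
```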


Something I've never been able to figure out is whether our modern, deep sub-um and even nm-scale process nodes are directly usable for anything analog. I suspect that they aren't: that the noise levels are now at a point where even the simplest "gate"-like structures are only useful in combination with the "attraction/iteration towards a fixed point" mechanism that essentially defines digital electronics (from an "analog" POV), with its inherent resilience to noise. Unfortunately, I doubt that the question is even directly answerable without the sorts of proprietary data that only a multi-billion-dollar chip fab would have in the first place.


They are; certain components even in the smallest process have to be analog. One good process for analog functionality is 22nm SOI; 65nm TSMC also works very well.


Yes, that’s rift. Photonics don’t have that problem.


When I search ddg with the following query:

    rift "analog" "computing" -oculus
The only relevant result is your comment from this thread.


Accumulated error creates a gap, a rift, between the true answer and the computed answer.



