> deep learning will not produce the universal algorithm

I'm curious: what do HN users think the "universal algorithm" will end up looking like?

My own guess (wild speculation) is that we'll start moving in the direction of concepts like tensor networks. While that term sounds like it has something to do with machine learning, it actually comes from theoretical physics. Tensor networks are a relatively recent development in quantum mechanics that show promise because of their ability to extract the "interesting" information from a quantum state. Generally speaking, it's very difficult to compute/describe/compress a quantum state because it "lives" in a Hilbert space whose dimension grows exponentially with the number of particles. Traditionally, quantum chemistry has built this space up using Gaussian basis functions, and solid state physics has built it up using plane waves. The problem is that regardless of the basis set chosen, the number of basis vectors needed to accurately describe a quantum state appears to grow exponentially as the system gets larger.
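To make the scaling concrete, here's a toy Python sketch (my own illustration, nothing from any particular library) of how fast the state vector grows for n spin-1/2 particles:

    # A system of n spin-1/2 particles (qubits) has a state vector of
    # 2**n complex amplitudes, so storage grows exponentially with n.
    for n in (10, 20, 30, 40):
        dim = 2 ** n
        gib = dim * 16 / 2 ** 30  # complex128 = 16 bytes per amplitude
        print(f"n = {n:2d}: dimension = {dim:>16,} (~{gib:,.1f} GiB)")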

Tensor networks are an attempt to alleviate this problem. While the state space of an arbitrary quantum system really is exponentially large in the number of particles, it turns out that for realistic quantum systems the relevant state space is much smaller: real systems seem to live in a tiny corner of Hilbert space. And that tiny corner still contains every state you could actually put a collection of qubits into within the lifetime of the universe.
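For a rough sense of the payoff, here's a back-of-the-envelope comparison (my numbers, purely illustrative) between storing a full state vector and storing a matrix product state, the simplest tensor network; the bond dimension chi, which I've arbitrarily set to 64, controls how much entanglement the MPS can capture:

    # Parameters in a full state vector vs. a matrix product state (MPS),
    # the simplest tensor network. An MPS over n sites of local dimension
    # d with bond dimension chi needs about n * d * chi**2 numbers instead
    # of d**n. chi = 64 is an arbitrary illustrative choice.
    d, chi = 2, 64
    for n in (20, 50, 100):
        full = d ** n
        mps = n * d * chi ** 2
        print(f"n = {n:3d}: full = {full:.2e} amplitudes, MPS ~ {mps:,}")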

The projection of a system's state vector onto either the position or momentum basis is known as the system's "wavefunction" (some texts allow more than these two bases). Because the wavefunction tends to be localized in position/momentum space, a good approximation to the state can usually be built up from Gaussians or plane waves; the exception is when the wavefunction exhibits strong electron correlation (quantum entanglement). Entanglement breaks nature's tendency to localize states about a point in spacetime, which is why the most commonly used basis sets are highly suboptimal for many real electronic systems (superconductors stand out as a notable and somewhat pathological example).
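A quick way to see entanglement as "the thing that resists compression" is the Schmidt decomposition, which for two qubits is just an SVD of the reshaped state vector. A small numpy sketch (again, just an illustration):

    import numpy as np

    # Schmidt decomposition of a two-qubit state: reshape the 4 amplitudes
    # into a 2x2 matrix and look at its singular values. One nonzero value
    # means a product state; several comparable values mean entanglement,
    # which is exactly what makes the state hard to compress.
    def schmidt_entropy(state):
        s = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
        p = s ** 2
        p = p[p > 1e-12]
        return -np.sum(p * np.log2(p))

    product = np.kron([1, 0], [1 / np.sqrt(2), 1 / np.sqrt(2)])
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    print(schmidt_entropy(product))  # ~0 bits: no entanglement
    print(schmidt_entropy(bell))     # 1 bit: maximally entangled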

I'm not entirely familiar with all of the math behind it, but tensor networks essentially describe the small but relevant region of Hilbert space by exploiting properties of the renormalization group. This yields a compact way of describing "real world" quantum states. I think it bears on a "universal algorithm" because real world data rarely consists of a random or uniform scattering of information across the data's state space. In my own research, I've found that many of the NP-hard problems I run into are efficiently solvable in practice (stuff involving low-rank PSD matrices) precisely because the data isn't random. If tensor networks are good at finding a basis that is "local" in abstract Hilbert space with respect to some real-world set of quantum states, then it seems they should work equally well for the real world data that lives on a low-dimensional manifold in a high-dimensional space: the kind of data that machine learning (and eventually artificial general intelligence) seeks to tackle.
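As a non-physics analogue of the same "truncate the unimportant directions" idea, here's a numpy sketch (the sizes and rank are arbitrary choices of mine) showing how a PSD Gram matrix built from low-dimensional data compresses almost perfectly at low rank:

    import numpy as np

    # Data confined to a 5-dimensional subspace of a 200-dimensional
    # ambient space yields a PSD Gram matrix with a rapidly truncating
    # spectrum, so a rank-5 approximation is essentially exact.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 200))
    K = X.T @ X                       # 200x200 PSD Gram matrix, rank <= 5
    w, V = np.linalg.eigh(K)
    w, V = w[::-1], V[:, ::-1]        # sort eigenvalues descending
    r = 5
    K_r = (V[:, :r] * w[:r]) @ V[:, :r].T
    err = np.linalg.norm(K - K_r) / np.linalg.norm(K)
    print(f"rank-{r} relative error: {err:.2e}")  # ~machine precision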



