Steve Perlman unveils white paper explaining “impossible” wireless data rates (venturebeat.com)
110 points by evo_9 on July 28, 2011 | 59 comments



This is the basic idea from the white paper.

DIDO communication begins with the DIDO APs exchanging brief test signals with the DIDO user devices. By analyzing what happened to these test signals as they propagate through the wireless links, the DIDO Data Center determines precisely what will happen when it transmits data signals from the APs to users, and how the simultaneously transmitted signals will sum together when received by each user device. Then, the DIDO Data Center uses this analysis, along with the data each user is requesting (e.g. video from a website), to create precise waveforms for all of the APs that, when transmitted at once will sum together at each user device to create a clean, independent waveform carrying the data requested by that user. So, if there are 10 APs and 10 users all within range of each other, then 10 radio signals will sum together at each antenna of each user’s device to produce an independent waveform for each device with only that device’s data.
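
A toy narrowband sketch of that summing trick in numpy (the channel matrix and symbols are invented; this is just the linear algebra, not the actual DIDO algorithm):

     import numpy as np

     rng = np.random.default_rng(0)
     # H[j, i]: complex gain/phase from AP i to user j, as measured
     # by the test signals (random stand-in values here)
     H = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
     s = np.array([1 + 1j, -1 + 1j, 1 - 1j])   # one symbol per user

     x = np.linalg.solve(H, s)   # what each AP transmits
     y = H @ x                   # what sums at each user's antenna
     print(np.allclose(y, s))    # True: each user sees only their own data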


One question that I haven't seen answered yet is what happens if the receiving antenna is not stationary. It seems to me that if it is moving at the time the test signal is sent out, there is no guarantee that the signals will sum together properly for the receiver now that it is in a different location. Most devices are wireless precisely because they spend most of their time moving around.

But perhaps this system of analyzing the test signals and figuring out how to create the waveforms that will sum up for the receiver happens so fast that it is not a problem to recalibrate constantly to take into account cell phones in moving cars, etc.


From the patent app: "As such, in one embodiment of the invention, the channel characterization matrix 616 at the Base Station is continually updated. In one embodiment, the Base Station 600 periodically (e.g., every 250 milliseconds) sends out a new training signal to each Client Device, and each Client Device continually transmits its channel characterization vector back to the Base Station 600 to ensure that the channel characterization remains accurate (e.g. if the environment changes so as to affect the channel or if a Client Device moves). In one embodiment, the training signal is interleaved within the actual data signal sent to each client device. Typically, the training signals are much lower throughput than the data signals, so this would have little impact on the overall throughput of the system. Accordingly, in this embodiment, the channel characterization matrix 616 may be updated continuously as the Base Station actively communicates with each Client Device, thereby maintaining an accurate channel characterization as the Client Devices move from one location to the next or if the environment changes so as to affect the channel. "


"Each client device continually transmits its channel characterization vector..."

Sounds like the scheme is going to wreak havoc on the battery life of a typical mobile device.


If this continual transmission is only done during periods of network usage, when you're already powering up the CPU and antenna chipset, how much of an additional burden would this be for the client device? I doubt it would be continuously transmitting when the device is in sleep mode, and I have no indication that this is a CPU-intensive task for the client device (the data center device is a different matter).


For the mobile phone use case, it seems like the transmitter would have to be emitting constantly in order to keep the base station apprised of its current position, for the purposes of computing the channel characterization. Otherwise, how would the base station know what transformations to perform on the outgoing signals to reach the mobile device in the event that a call is received? It's not like you could broadcast that information to everyone, because that would end up corrupting the spectrum for everyone else, which defeats the entire point of this system.

The whole scheme seems really dependent on knowing the position of all transmitters/receivers at all times.


The "paging channel" problem is already present in existing mobile radio systems. Solving it pretty much requires some form of TDMA — i.e. scheduled transmission — because only TDMA gives you the ability to actually power your radio off most of the time. Even Wi-Fi has features for this.


Why couldn't you reserve a tiny fraction of the bandwidth of the system (or some other, related spectrum) for this sort of "calling transmitter #92" broadcast? Yeah, it'd impact everyone, but only finitely.


I thought of this, too. (I'm taking it as a given that this technology works as advertised at low speeds.)

At higher speeds, low frequency/long wavelengths ought to work better because the constructively interfering areas will be larger for any single receiver than they would be with a shorter wavelength. Also, when the receiver is traveling at constant velocity, it is very easy to predict where the receiver will be in the next frame. This covers a lot of cases (people on planes, cars, trains).

In fact, I can't think of a place where an average person (e.g., not someone flying a fighter aircraft) would be moving rapidly and unpredictably.


In terms of multi-path, I can well imagine that being inside a fast-moving car in a body of traffic isn't far off the worst-case scenario - not only is the receiver moving (unpredictably, if they're changing lanes), but there are a bunch of reflectors moving around them, also unpredictably.


I imagine there would be some overhead (e.g. each user transmitting a certain user-specific pattern every 1/2 second, and each AP transmitting a certain AP-specific pattern every 1/2 second), to calibrate the location of each user in AP-space. Fundamentally I think all the calculations involved would be fast linear algebra operations that could be done in hardware on the order of microseconds.


This brings to mind a very interesting idea as well. If the changing physical location of the receiver is causing changes in the composite wireless signal the device is receiving, and the base stations are recalculating and refreshing this all the time very quickly, then it could potentially be a very reliable and accurate GPS as well.


Actually, I don't think so. This isn't finding your physical location, then computing the correct profile for that location; it empirically probes the performance your location is experiencing, then directly uses that performance information. While you could analyze the results and at least take a stab at the physical location, it's quite possible it will work no better than cell tower triangulation.

Even trying to empirically map certain performance profiles to certain spaces may be impractical if the client radios' and antennas' performance differs enough to throw off the profiles; if I understand this properly, and given our proclivity for making things as cheap as possible, that probably means this won't work either.


What happens if two clients are right next to each other? I wonder if you can still count on enough interference to give each one a totally different data stream.


"The complete answer to this question is very long, involving immensely complex mathematics, very carefully designed software and hardware, and new data communications and modulation techniques."

None of which is discussed in the white paper. If I had to guess... It seems like a MIMO (Multi-Input, Multi-Output) technique that somehow uses the environment's impulse response (via the pilot signal) to achieve phased array-like spatial independence of signals. Without the "immensely complex mathematics," this white paper doesn't really deliver any insight -- no better than marketing speak IMO.


The patent application appears to have more detail: http://www.faqs.org/patents/app/20090067402


Let me take a stab at an explanation. Suppose we know the positions of all the transmitters and receivers, and we know which receivers are supposed to get which signals. Then we determine the signals (waves) to be emitted by the transmitters such that at each receiver, all the other signals cancel out (interfere destructively), leaving nothing but the signal you want at that receiver.

(Oh, and don't forget to account for reflection, refraction, dispersion, etc. as the waves propagate around.)

If this is what they're doing, then wow, that's pretty cool. It's amazing the problem can be solved fast enough.

The way they get around Shannon is by taking advantage of the spatial separation of the various receivers. In effect, Shannon assumed all receivers are at the same position.

Is this an accurate understanding?


As far as I can tell, yes: basically it's phased-array Wi-Fi, where the system solves for creating channels to individual points rather than beaming energy.

If you are familiar with phased-array RADAR (as used on AEGIS and other platforms), the system computes on the fly the necessary set of signals from an array of antennas which will constructively interfere to put a 'beam' on the target. AEGIS can track hundreds of targets simultaneously (I believe the actual upper limit is classified).

I recall a startup in the bay area that was doing something like that with WiFi access points to provide both range and better signal integrity (you could exceed the power limits for unlicensed use going into the antenna, as the measured output was still within spec). I thought they had been acquired by Atheros, but I'll have to dig a bit deeper to find out for sure.


IIRC, there are already companies doing this in the mobile phone space. In essence, they beam-steer by feeding the signal to the individual antennas with slight differences in timing, which in turn causes constructive interference in the desired direction and destructive interference in other locations. For mobile phone networks, this is only for azimuth. For systems like the ones used for AEGIS (the SPY-1 and variants) you get azimuth and elevation beam steering.
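
For flavor, a minimal sketch of that kind of beam steering for a uniform linear array (textbook phased-array math, not any particular vendor's implementation; frequency and element count invented):

     import numpy as np

     c = 3e8                      # speed of light, m/s
     lam = c / 2.4e9              # wavelength at a 2.4 GHz carrier
     d = lam / 2                  # half-wavelength element spacing
     n = np.arange(8)             # 8 antenna elements
     theta = np.radians(30)       # desired beam direction

     # per-element phase shifts that make emissions add up toward theta
     weights = np.exp(-2j * np.pi * n * d * np.sin(theta) / lam)

     def gain(angle):
         # array factor: how the weighted elements sum toward 'angle'
         steer = np.exp(2j * np.pi * n * d * np.sin(angle) / lam)
         return abs(weights @ steer)

     print(gain(np.radians(30)), gain(np.radians(-40)))  # ~8.0 vs ~1.0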

Honestly, I can't see this working without stepping on some of the patents owned by companies like ArrayCom and others.


How did they fit enough transmit elements into a cell phone to do that? I thought cell phone antennae were omnidirectional.


That is the big claimed upside of this approach: the cell phone's transmitter/receiver is very simple, with only one antenna. The base station number crunching becomes much more sophisticated, but that is fine.


> I recall a startup in the bay area that was doing something like that with WiFi access points to provide both range and better signal integrity

You may be thinking of Vivato, whose assets were acquired by Catcher.


I proposed doing that in 1999: http://lists.canonical.org/pipermail/kragen-tol/1999-June/00... and in audio in 2000 http://lists.canonical.org/pipermail/kragen-tol/2000-May/000... but I was far from the first; the early papers on this go back to the 1970s.

I believe that you can't get all the other signals to cancel out exactly at everybody else's receiver, but you can get the desired signal to interfere constructively and thus have a much higher amplitude.

The common name for this is "MIMO," multiple-input multiple-output. It's not clear from the article how "DIDO" differs from MIMO.

But I didn't account for reflection, refraction, and dispersion (!? what medium is dispersive for radio waves? That's gonna make it rough.).

HASAS (HydroAcoustic Signal Analysis System) is a piece of open-source software designed to do this for underwater passive sonar purposes.


"Oh, and don't forget to account for reflection, refraction, dispersion, etc. as the waves propagate around.)"

See my post

     http://news.ycombinator.com/item?id=2820131
For your "reflection, refraction, dispersion, etc.", in principle the 'system' from each transmitter to each receiver is both time invariant and linear which means that can account for the effects of "reflection, refraction, dispersion, etc." just by applying a 'transfer function' (below). Yes, need a different transfer function for each pair of transmitter and receiver. Apparently the 'test data' that is sent is to determine the transfer functions.

"Signal"? A 'signal' is just a real valued function of the real variable time. So, say that for time t s(t) is such a signal. To keep this simple but still powerful enough for practice, suppose time t is in only a finite interval. In this real problem the length of the interval is likely a small fraction of one second.

"Time invariant"? Over an interval short enough that physical movement, the weather, birds, etc. don't significantly change the situation. In this real problem, we may be asking that the system remain time invariant only over a small fraction of a second.

"Linear": If send through signal x and receive signal r(x) and send through signal y and receive signal r(y), then for numbers a and b when send through signal ax + by receive signal ar(x) + br(y). Such linearity should hold even with lots of "reflection, refraction, dispersion, etc.".

So this linearity is just another example of classic 'linearity' that math is awash in from linear transformations in linear algebra, linearity of differentiation and integration in calculus, 'linear operators' as in much of mathematical physics, much of 'functional analysis' with Hilbert and Banach spaces, the various 'representation' theorems for linear operators, etc. Or, as in G. Simmons, the twin pillars of mathematical analysis are linearity and continuity.

"Transfer function"? Suppose we send signal u(t) and receive signal s(t). Take the Fourier transform of signal u(t). That is, convert u(t) to its 'frequencies'. If u(t) is a sound from one key on an organ, then the Fourier transform gives essentially just the sine waves at the various 'overtones' of the organ note. We are good at understanding such overtones if only because the human ear does some work close to Fourier transforms. Say that the Fourier transform of u(t) is U(w) for frequencies w. Or, U(w) is is the 'spectrum' of u(t).

Now, for our time invariant linear system, say that its 'transfer function' is H(w) for frequencies w. Say that the signal that is received is s(t) with Fourier transform S(w). Okay, presto, for each frequency w,

     S(w) = H(w) U(w).
That is, at each frequency, just multiply the input U(w) by the value of the transfer function H(w) at frequency w and get the output S(w).

Then the final signal received s(t) is the inverse Fourier transform of S(w). Cute. Signals in electronic engineering are just awash in this relationship.
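
To make this concrete, here is a small numpy sketch: estimate H(w) from one test signal, then use it to predict what the channel does to any other signal. (The 'channel' is a toy two-tap echo, modeled as circular convolution so the one-block FFT relation holds exactly; real systems use cyclic prefixes to get the same effect.)

     import numpy as np

     rng = np.random.default_rng(1)
     h = np.array([1.0, 0.0, 0.4])   # toy channel: direct path plus an echo

     def channel(x):
         # circular convolution with h, so S(w) = H(w) U(w) holds exactly
         return np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))).real

     u = rng.normal(size=256)                     # test signal
     H = np.fft.fft(channel(u)) / np.fft.fft(u)   # estimated transfer function

     u2 = rng.normal(size=256)                    # any other signal
     predicted = np.fft.ifft(np.fft.fft(u2) * H).real
     print(np.allclose(predicted, channel(u2)))   # True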

The context of Shannon's theorem is quite different from that of DIDO. If we want to compare with Shannon's theorem, then DIDO raises an issue because as the number of transmitters and receivers and total 'bandwidth' increase, the total transmitted power will have to increase, but a big point of Shannon's work was that the power is limited. That is, Shannon understood that one could push through all the data one wanted if one could just increase power arbitrarily.

Another big point in Shannon's work was 'noise' on the communications channel. Again, in principle, with no noise, you can push through all the data you want even with limited power.

So, here's Shannon's work in a nutshell as I remember it from years ago without review (take with a shovel full of salt): As I did in

     http://news.ycombinator.com/item?id=2820131
consider the problem over a finite interval of time and convert to discrete points in time and in frequency. We are told the maximum power we can send. And as the signal travels, a 'noise' signal gets added.

Due to the finite power, all the signals we can send are in just a sphere (in a finite dimensional space of appropriate dimension). Then due to the noise, what is received is the signal sent plus the noise. So, in effect, due to the added noise, what is received is a point in a (small) sphere around the signal that would have been received without the noise.

So, all the signals that can be received are in a big sphere determined by the power. And each signal that is received is in a small sphere of its own inside the big sphere.

But we want the separate signals to be distinguishable. So, for any two signals sent, we want their small spheres at the receiver not to overlap. So, the number of different signals that can be sent is the number of small spheres that will fit, without overlap, inside the big sphere at the receiver.

So, net, we're talking about the number of little balls that will fit inside a big ball.

Basically that's it.
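
In formula form, that sphere-counting argument leads to the familiar Shannon-Hartley capacity C = B log2(1 + S/N). A quick worked example in Python (the bandwidth and SNR numbers are just illustrative):

     import math

     B = 20e6                        # 20 MHz of bandwidth
     snr_db = 20.0                   # signal-to-noise ratio in dB
     snr = 10 ** (snr_db / 10)       # = 100 as a plain ratio

     C = B * math.log2(1 + snr)      # Shannon-Hartley capacity, bits/second
     print(f"{C / 1e6:.1f} Mbit/s")  # ~133.2 Mbit/s for this one channel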

Likely more details are in

     http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf


... I get the impression that the writer doesn't really understand the DIDO stuff. For that matter, neither do I, but I think I have a rough idea: if you have control over all the DIDO transceivers in the area and know precisely where each one is, you can control how the EM spectrum looks at/near each transceiver.

In effect, DIDO relies heavily on interference. We should not think of the transceivers as independent devices; instead, they are all part of the system, which is under centralized control. (for the uninitiated: interference is any combination of waves, regardless of whether it is actually detrimental or beneficial to the user)

This is harder to achieve than one might think, harder than if all transceivers were stationary. Consider the mobile devices -- those are part of the equation too, and they are variable (not to mention any intervening objects that might contribute unaccounted interference). They also need to communicate with the DIDO datacenter somehow, and this is where my reasoning breaks down...


I advise you to skip the article and read the white paper instead since this article relies heavily on it: http://www.rearden.com/DIDO/DIDO_White_Paper_110727.pdf

Can anyone with expertise in the domain chime in on the credibility of these claims?

My understanding is that the system determines how test signals are modified at each user's device because of interference, and works backwards from that to create the waves that each access point needs to send out so that they all interfere together in a way that leaves them clear at each user's device location.

Sounds OK in theory, but I don't really get how that can work in practice. For example, wouldn't all access points have to send the signals at exactly the same time? How can they ensure that? How can they ensure the interference won't have changed between the test signals and when the real signals are sent?


"For example, wouldn't all access points send the signals at exactly the same time? How can they insure that? How can they insure the interferences won't have changed between the test signals and when the real signals are sent?"

I think the key is to see the DIDO access points as part of a larger system and not as traditional access points. "So, you can think of the DIDO APs as a vast random array of antennas extending out from the DIDO Data Center for miles" as the whitepaper says. I assume that all the access points will be stationary and will talk to the DIDO centre very regularly. This would be required, at least in my mind, to ensure that their environment characterisation is accurate and up to date and to synchronise transmissions.


"For example, wouldn't all access points send the signals at exactly the same time?"

Yes, apparently there is a timing issue.

Maybe we can resolve the timing issue by having the transmitters send timing signals. Then maybe one transmitter becomes the 'central timing' source.

So, maybe discretize time into, say, windows each only a millisecond or so long, with a new window starting every, say, 3 milliseconds. Then at the beginning of each window, each transmitter sends. To get them all to send at the same time, use the central time source, with each transmitter knowing its delay from the central source. So, for a window, the central time source says "SEND", and each transmitter knows just how long to wait after receiving the "SEND" message before starting to send.

Maybe some such.
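
In toy code (all distances invented), the delay compensation would be something like:

     C = 3e8  # speed of light, m/s

     distances_m = [150.0, 600.0, 2400.0]   # each transmitter -> timing source
     delays = [d / C for d in distances_m]  # when "SEND" reaches each one
     target = max(delays)                   # align on the slowest arrival

     # extra wait after hearing "SEND"; all then emit at the same instant
     waits = [target - d for d in delays]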

"How can they insure the interferences won't have changed between the test signals and when the real signals are sent?"

Use the test signals every few milliseconds to recalculate all the transfer functions to all the receivers. If occasionally a receiver moves a little too fast, then we depend on TCP to handle the error.

Maybe.

We do ask that each receiver B mostly be logically connected to nearly the nearest transmitter A. Then transmitters a long way from A, and thus also from B, should mostly be able to forget about the signal from A to B. That should help with the timing issues.

So, in some area with a lot of users, where we want a lot of data rate at the one frequency available in that area, put in a lot of transmitters. Then expect that each user will get associated with a transmitter of their own that is nearly the nearest transmitter to them.

The location of each transmitter is essentially fixed; if the location varies a little day by day, no problem. The location of each receiver is essentially fixed over, say, a few milliseconds; that should be okay if we redo the 'test signal' handshaking every few milliseconds.

Remember that we are only sending digital packet data and looking for latencies under a few milliseconds. So, we get a little time, a millisecond here and there, to send test signals, do the handshaking to get all the transmitters sending at the same time, buffer up at each transmitter the data to send during the next time window, etc.

For the communications between the central 'smart box' and each of the transmitters, have lots of options including, say, some form of multi-drop passive optical.

To make money, a key will be to keep the costs for installation down. So, need to be cheap at the central box, the communications from that box to the transmitters, the transmitters, and the card in each receiver.

Hmm, to 'wire' a suburb, go to a homeowner and rent a few square feet in their attic! Put in the central box and, say, 100 transmitters. To this house, run, say, 10 GbE over optical. So, that would be 100 Mbps per transmitter. Put a UPS in the attic. Hope to give really good service to, say, 25, maybe, 50 houses.

Cute solution to the 'last mile'!

If an attic gets busy, then light another 10 GbE wavelength in the optical fiber to the attic and put in another 100 transmitters.

Maybe!


The idea on which it's based is described on page 12 of the whitepaper. Skip the article. It's useless.


Totally. It glosses over the issue that I wanted to understand.

"The full explanation for why this happens is very long and involves immensely complex mathematics, carefully designed software and hardware, and new data communications and modulation techniques. Simply put, DIDO is a cloud wireless system."


I would wager that it is a phased array in reverse (phased-array broadcasting). It constructs a 4D data space unique to all users, and probably can't handle moving receivers in this version.

Think of it like this: if a phased-array receiver can reconstruct the signal at any point, why can't a phased-array broadcaster emit signals that recreate any signal at any point in space?

Wrong?


The white paper is non-technical, but I think this is the gist:

Currently, if you have multiple users and 1 access point (AP), the users split the bandwidth. Multiple APs and multiple users on the same channel result in split bandwidth as well, since the APs operate independently and interfere with each other.

This proposal uses N APs for N users on the same channel, allowing for full bidirectional use of the channel bandwidth by each user. To send data to N users simultaneously, a central server receives the data and calculates the signal to send to each AP such that each user receives only the clean signal meant for them, post interference. This requires precise localization of the user in AP space, presumably done by having the user transmit a certain pattern at the particular frequency and measuring the result at each of the APs.

For the N users to transmit simultaneously to the N APs, the data center can take each of the incoming signals from the N APs along with the localization of the users in AP space, and apply linear algebra to unmix the signals into a signal from each user.
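
A hedged numpy sketch of that uplink unmixing (the channel matrix and symbols are invented for illustration):

     import numpy as np

     rng = np.random.default_rng(3)
     H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # user -> AP gains
     symbols = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])      # what users send

     noise = 0.01 * (rng.normal(size=4) + 1j * rng.normal(size=4))
     y = H @ symbols + noise                        # the mixed sums the 4 APs hear
     x_hat = np.linalg.lstsq(H, y, rcond=None)[0]   # unmix at the data center
     print(np.round(x_hat, 2))                      # ~ the transmitted symbols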

I imagine this adds some overhead to each channel in order to maintain precise localizations of each user in AP space.


It seems like there should be limits to how many waveforms you could combine without needing more spectrum.

Let's say I wanted to email N distinct 1MB attachments to N users. Before sending, every user generates a unique key and sends it to me. I then use my super math to encode/compress the N distinct 1MB attachments into a single combined 1MB attachment. I then send that combined 1MB attachment to all N users. Each user then uses his unique key to decode/decompress the 1MB combined attachment and, voilà, he gets the distinct 1MB attachment intended for him.

Now scale that up to very high values of N. Linearly. While the keys are changing constantly. And keeping the combined attachment fixed at 1MB.

Is my analogy way off? If not, I don't see how this would be possible.


This is a bit off. Consider it rather like this: you can transmit a sine wave of varying phase and/or amplitude. With a single transmitter and no multi-path interference, each receiver sees the exact wave, plus some noise. Shannon's paper defines the limits on what you can transfer.

Now consider when you have multiple coordinated transmitters transmitting with different phases and amplitudes. Each receiver receives the sum of these functions at relative offsets determined by the varying distances, and so will decode a different symbol.

Visual aid:

http://en.wikipedia.org/wiki/File:Two_sources_interference.g...

That's the simplest interference pattern you'll see. It's easy to see that the wave a receiver gets will depend on location (e.g. notice the bands of 180 degree flipped phase). As a thought experiment you could imagine varying the phase and amplitude of the two transmitters such that receivers in 2 different places would see either similar or different waveforms. There is almost certainly a limit to how many users you could support with N transmitters, but with good enough math and feedback it's potentially fairly high, which is what these guys claim they can do.
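
Here's that thought experiment as a toy numpy script (positions and wavelength invented; free-space phase only, ignoring reflections and noise):

     import numpy as np

     lam = 0.125                                # ~2.4 GHz wavelength, metres
     tx = np.array([[0.0, 0.0], [5.0, 0.0]])    # two transmitter positions
     rx = np.array([[2.0, 3.0], [4.0, 1.0]])    # two receiver positions

     # H[j, i]: free-space response at receiver j from transmitter i
     d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=2)
     H = np.exp(-2j * np.pi * d / lam) / d

     s = np.array([1.0 + 0j, -1.0 + 0j])        # desired symbol at each point
     x = np.linalg.solve(H, s)                  # per-transmitter phase/amplitude
     print(np.allclose(H @ x, s))               # True: each point gets its own wave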


There is an important detail. In this system, you need N transmitters to send the information to the N receivers.

All the transmitters use the same frequency, so the difficult part is to "mix" and "synchronize" the N transmissions in a way that each one of the N receivers sees only the data it needs.


I think your analogy refers to data limits in a single channel. According to the article, DIDO creates different channels for users so that the data rate in each channel isn't affected.


Definitely very interesting technique but it sounds like the big challenge will be scaling up to large numbers of base stations and mobile (vs. semi-stationary) clients. Inverting the matrices mentioned in the patent and continually updating the parameters to create the interference bubble around the roving antenna is going to take a lot of processing horsepower. As mentioned in another comment, getting reasonable battery life out of a mobile device using this technology is going to be tricky.

I didn't read the whole patent; is a changing environment (e.g. cars passing by) taken into account? I imagine that multi-path reflections off of neighboring objects would be too dynamic to update in real time.


So, they claim this has much lower latency than WiFi, on the order of 1 ms, and yet they need a round-trip /over the internet/ to their datacenter to generate a waveform. That AP-to-datacenter communication inherently adds more latency than a WiFi link. What am I missing?


Perhaps what they mean by "data center" is simply a server running at each wireless access point?


Except you need to combine the waveforms of multiple access points, so the APs need to send the current radio profiles to a shared server, the server needs to receive the proxied information and generate the waveforms for each AP, then the waveforms need to get sent back to the APs and transmitted over the radios.

Obviously, this is not what they are referring to in the 1ms timing, but I don't know what it is they are referring to.


This is snake oil until there is much better documentation.

If the original signal had the capacity to carry more signal as an overlay, why wouldn't that capacity have been available to the original sender as well? I haven't seen anything here which contradicts that proposition. There's some hocus pocus about APs arranging for distinct signatures, but that's irrelevant to how distinct signatures can be overlaid without reducing the capacity available to any one signature.

Now, maybe the original had spare capacity which it couldn't use for some reason. But that doesn't contradict Shannon.

And there's no third option. Either Shannon is true and Perlman is false or vice versa.


This is not the case. If you have N spatially separated antennas then you can transmit information at a rate of N * the Shannon limit. This is the case with MIMO as well, except in MIMO both of your terminals have N antennas. This DIDO concept seems to use multiple base station antennas which may be physically separated but only one mobile antenna. Of course, as a consequence the increased channel capacity is strictly one-way.


Are you sure that makes it one-way? Can't you apply the same mapping in reverse to extract received data?


Oversimplifying, Shannon's theorem applies to a single channel between source and destination.

Current Wifi uses the same channel for all destinations, since there is no way to spatially separate destinations - thus the signals to different destinations interfere and proportionately reduce the capacity to individual destinations.

DIDO is creating spatially separated channels to each destination by modeling the spatial domain and creating signals from different APs which interfere to deliver independent channels to each destination.


I'm no network security expert at all, but could something like this, if implemented by a company, render DDoS attacks useless? I know DDoS attacks have to deal with the server's ability to process the packets, but could this technology be engineered to allow these attacks to occur without any effect on the site under attack?


Even if a user can utilize the maximum bandwidth provided by a wireless network, it still doesn't necessarily follow that the resulting data transmission rate would also be capable of overwhelming the information processing capacity of some server.


Article about this from Wired last month: http://www.wired.com/epicenter/2011/06/perlman-holy-grail-wi...


Perlman also gave a talk at Columbia in June where he discussed this a bit (if only superficially): http://www.youtube.com/watch?v=1QxrQnJCXKo

It's around the 55:13 mark.


Can you use this for the uplink also?

Does the base station have to update the clients with the calculated and needed transformations? But then you still can't fit that in with the data of all the other clients.

Am I missing something?


But what about uploads from user devices to access points? According to the explanation provided, this seems to be optimized for data transfer from access points to user devices.


Wish I had read the HN comments first, I came to same conclusion: read the white paper, not the article.

Fascinating stuff, this could very well be revolutionary if it's proven to scale up.


A question: Is there antenna tech that would complement this and hopefully move it towards mass application/adoption faster? Or has antenna tech reached its limit?


Very interesting. It sounds like they are precomputing the streams into a single stream so that the interference at a given location only has to be calculated against background noise and not other signals. So in other words, it's not signals overriding each other, because there is only one signal at a given location, and all devices know how to eavesdrop their own signal out of the master one. Interesting concept.


FTA: "Rearden has built a test system with several access points in Pflugerville, Lake Austin, and Austin — all cities in Texas."

Great. Show us the data.


I've got the basics of how base-station -> mobile works, but how about the reverse? Is upstream still shared bandwidth?


The difficult task will be getting this technology to people: telecom monopolies and complex taxes.


Looks doable in a straightforward, simple, elementary way:

We start with some background in 'signals'. Suppose for time t we have real number u(t). Suppose we have a 'time invariant linear system' and send in signal u = u(t). Suppose the signal that comes out is s = s(t). Suppose the 'transfer function' of the system is H = H(w) for frequency w (H must exist because our system is time invariant and linear). Suppose U = U(w) is the Fourier transform of u and S = S(w) is the Fourier transform of s. Then, presto, for each frequency w,

     S(w) = H(w) U(w),
and the output s = s(t) is just the inverse Fourier transform of S = S(w).

If for time t we have only a finite interval and for the signal have all the power in a finite 'band' (have a 'band-limited' signal), that is, have a maximum frequency with any power, then we can discretize both time and frequency and use the fast Fourier transform (FFT) to do the Fourier transform work (actually a non-trivial signal cannot be both time limited and band-limited at the same time, but as we discretize time the signal can be band-limited and zero at all our discrete time points outside of finite interval of time -- don't worry about such things!).

Suppose for some positive integer n we have n transmitters at distinct geographic locations. Suppose we also have n receivers at distinct geographic locations.

Suppose for receiver j = 1, 2, ..., n we want to send to receiver j signal s_j = s_j(t) for time t. Here we are borrowing from TeX where _j indicates a subscript.

Suppose by sending 'test signals' we have, for transmitter i = 1, 2, ..., n and receiver j = 1, 2, ..., n, 'transfer function' H_ij = H_ij(w) where w is frequency.

So, we want signals u_i = u_i(t) so that when, all at the same time, for all i = 1, 2, ..., n, transmitter i sends signal u_i, then each receiver j receives the desired s_j.

Let U_i be the Fourier transform of u_i and S_j be the Fourier transform of s_j.

Then borrowing from TeX, for each frequency w

     S_j(w) = sum_{i = 1}^n H_ij(w) U_i(w)
Now pick a particular value w of our discrete frequencies.

Let's let S(w) be the n x 1 matrix with S_j(w) in component j, H(w), the n x n matrix with H_ij(w) in component j, i (row for the receiver, column for the transmitter), and U(w), the n x 1 matrix with U_i(w) in component i.

Then we have

     S(w) = H(w) U(w)
where on the right we have matrix multiplication of n x n H(w) and n x 1 U(w).

Then

     U(w) = H(w)^(-1) S(w)
where

     H(w)^(-1)
is the inverse of n x n matrix H(w).

Do this calculation for each w and we have all of U = (U_i). For each i = 1, 2, ..., n, take the inverse Fourier transform of U_i = U_i(w) and get u_i = u_i(t). Done.
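
As a sanity check, the whole derivation fits in a few lines of numpy (with random stand-in transfer functions, since we have no measured ones):

     import numpy as np

     n, T = 4, 512                  # transmitters/receivers, time samples
     rng = np.random.default_rng(4)

     s = rng.normal(size=(n, T))    # desired signal s_j(t) at each receiver j
     S = np.fft.fft(s, axis=1)      # S_j(w)

     # Hw[w][j, i]: stand-in transfer function H_ij(w), one n x n matrix per w
     Hw = rng.normal(size=(T, n, n)) + 1j * rng.normal(size=(T, n, n))

     # U(w) = H(w)^(-1) S(w), solved one frequency at a time (batched)
     U = np.linalg.solve(Hw, S.T[:, :, None])[:, :, 0].T
     u = np.fft.ifft(U, axis=1)     # u_i(t), complex since Hw is a random stand-in

     S_check = (Hw @ U.T[:, :, None])[:, :, 0].T   # channel applied per frequency
     print(np.allclose(S_check, S))                # True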

The math was not very advanced! Sorry 'bout that!

But there is an issue with Shannon's theorem: As n grows, the total power in the signals u_i, i = 1, 2, ..., n, stands to grow. Shannon's result was for bounded power on the communications channel, or at least bounded signal to noise ratio.


"The complete answer to this question is very long, involving immensely complex mathematics, very carefully designed software and hardware, and new data communications and modulation techniques. The following is a highly simplified explanation" = BULLSHIT

Sorry, guys, but it sounds like somebody is trying to raise money for vaporware.


Perhaps you should read the white paper and the patent application before writing off a decade's worth of work as bullshit?



