Not an expert in the field, but it's possible to convolve a signal with the recorded impulse response of a room [1] or a 'Slinky' toy [2], or with that of a subjectively 'desirable' piece of audio equipment [3].
The result is as if your signal were played in that room (with all its reverb) or through that thing (with its 'desirable'-or-otherwise characteristics modeled in its impulse response).
Thinkable in 1962, but not thought possible. Now it's part of an industry.
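If you want to play with it, here's a minimal sketch of the idea (assuming NumPy/SciPy and a mono WAV impulse response, e.g. one downloaded from [2]; the file names are made up). Real convolution-reverb plugins use partitioned FFT convolution for low latency, but the core operation is just this:

    # Minimal convolution-reverb sketch: "play" a dry signal through a room
    # by convolving it with the room's recorded impulse response (IR).
    # Assumes mono WAV files at the same sample rate.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    def convolution_reverb(dry_path, ir_path, out_path, wet=0.5):
        rate_dry, dry = wavfile.read(dry_path)
        rate_ir, ir = wavfile.read(ir_path)
        assert rate_dry == rate_ir, "resample so the sample rates match"

        # Normalize to float in [-1, 1] regardless of the source dtype.
        dry = dry.astype(np.float64) / np.max(np.abs(dry))
        ir = ir.astype(np.float64) / np.max(np.abs(ir))

        # FFT-based convolution: the wet signal is the dry signal as it
        # would sound in the space (or gear) the IR was captured from.
        wet_sig = fftconvolve(dry, ir, mode="full")

        # Mix dry and wet (pad the dry signal to the wet length), then
        # normalize to avoid clipping and write out.
        dry_padded = np.pad(dry, (0, len(wet_sig) - len(dry)))
        mix = (1.0 - wet) * dry_padded + wet * wet_sig
        mix /= np.max(np.abs(mix))
        wavfile.write(out_path, rate_dry, mix.astype(np.float32))

    # Hypothetical file names, just for illustration.
    convolution_reverb("dry_guitar.wav", "slinky_ir.wav", "wet_guitar.wav")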
To add an aside to my other comment, as a grad student in a spatial audio class, I led a project to create a new method of simulating physical reverb by using mics and speakers pointed at each other in an anechoic space, letting the mutual feedback create reverb. You could apply delay to "push back" the walls and apply filters matching the absorption characteristics of different materials. It was a fun project, which won my group Grad Project of the Year at my school's demo day. A couple commercial products and systems use similar concepts to achieve active acoustic control.
The downside of convolutional reverb is the lack of parameterization. You're kind of stuck with one fixed geometry of source, receiver and surfaces. It can also be expensive to apply in real-time processing.
A lot can be done by post-processing the impulse response (volume envelopes, time-stretching, combining with other parts, etc.).
As for the efficiency: a modern laptop can easily run ~100 channels of multi-second convolution reverb at a 44.1/48 kHz sample rate with <10 ms latency, in real time, on one core.
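As a rough illustration of the kind of IR edit I mean (a minimal sketch: NumPy, mono IR assumed; the decay constant and noise floor are made-up numbers), here's imposing an extra exponential envelope and trimming the tail:

    # Sketch of post-processing a recorded impulse response (IR) before
    # convolving with it: impose an extra exponential decay to shorten the
    # tail, then truncate once the signal falls below a noise floor.
    import numpy as np

    def shape_ir(ir, sample_rate, extra_decay_s=0.8, floor_db=-60.0):
        """Multiply `ir` by exp(-t/extra_decay_s) and trim the silent tail.

        extra_decay_s: time constant (seconds) of the added decay; smaller
        values "pull the walls in" by making the reverb die away faster.
        """
        t = np.arange(len(ir)) / sample_rate
        shaped = ir * np.exp(-t / extra_decay_s)

        # Trim everything after the last sample above the floor.
        floor = np.max(np.abs(shaped)) * 10.0 ** (floor_db / 20.0)
        last = np.max(np.nonzero(np.abs(shaped) > floor)[0])
        return shaped[: last + 1]

    # Example: given a 3 s IR at 48 kHz (here just decaying noise as a
    # stand-in), shape it into a shorter, faster-decaying one.
    rate = 48000
    fake_ir = np.random.randn(3 * rate) * np.exp(-np.arange(3 * rate) / (0.7 * rate))
    short_ir = shape_ir(fake_ir, rate, extra_decay_s=0.3)
    print(len(fake_ir) / rate, "->", len(short_ir) / rate, "seconds")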
Yeah, but if you post-process the impulse response, it's difficult to end up with something else that "looks" as much like a real impulse response. If that's important to you.
Don't get me wrong, I'm not trying to argue against using convolutional reverb. I was just throwing out a couple reasons why people still use other approaches.
Not much later, and with its own completely different tech for auto navigation, came the Etak Navigator (https://en.wikipedia.org/wiki/Etak). A good bit of SV history there.
We can't have sidetone or full duplex in wide-area cellular telephony because end-to-end latencies become so long that the far-end echoes go beyond aesthetically disconcerting.
Try this just for fun: Call someone right next to you via your cell phones and talk to each other. One finds one can speak several words, maybe even a short but complete sentence before the other person starts hearing any of it. It can become a laughably long latency when experienced so directly.
But this cumulative latency (be it from queuing delays, voice-packet grouping for burst transmission, buffering delays along the many packet switches from here to there, you name it) comprises engineering trade-offs necessary for optimal radio-level multiple access, efficient packetizing, and economically efficient (packet) transmission.
In other words, I'd postulate we can't (today) afford full-duplex, sidetone-included, wideband (5 kHz+), sub-20-millisecond-latency cellular telephony. Cellular is a different thing, but surely it's its own kind of magic.
But consider modern office telephony over Ethernet: I don't know the internals of it, but aesthetically, I find it significantly better than the analog telephony I remember from 40 years ago: The sidetone's there, it's full-duplex, speakerphone functionality is absolutely superb, and subjectively I find it sounds good.
I'd guesstimate a roughly three-orders-of-magnitude latency ratio between LAN-based and cellular telephony, which seems to me key to the very different engineering possibilities and compromises necessary in these two very different domains.
You need to make sure the person on the other side is also using a landline, though.
Also, Skype is great when both parties have sufficient bandwidth for whatever codec it opts for on a high-quality data connection, but it simply sucks when it detects a poor connection; it gets worse than Google Hangouts does.
If you've ever worked in an office with VoIP phones, you know that the level of high quality audio is eerie. It's almost too real to seem like a phone call.
Wouldn't sidetone be generated locally on the device with near-zero latency? There must be some other reason they don't offer it. Maybe to avoid feedback?
* Understanding binary vs decimal math, deeply. Implementing decimal math out of integers when nothing else is available (see the sketch after this list).
* Understanding locking strategies and the need for shared-data protection. Build, break, and rebuild things until you can intuit, then prove, deadlock, lock failure, or long waits when you suspect their effects.
* Getting good at debugging. Don't point the finger of suspicion at anyone or anything (unless it's at you or your work), /prove/ what's wrong. Toward that end, don't "kill the (error) messenger"; it's just the first to speak up about a problem that might be layers deeper in your software stack.
* Learn data; it outlives code. SQL may bore you but it pays the bills. You didn't like Linear Algebra in school? Well, I didn't much either, but I sure am glad now I took it back then.
* Be the person who can say, "Yeah, I can fix that," and just do that.
* Learn business areas like General Ledger and Accounts Receivable. Be the person who can say "Yeah, I can keep a balance on that for you" and do it.
* As far as application areas go, remember that money, unlike computer languages, never goes out of style.
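To illustrate the decimal-math-out-of-integers point, here's a minimal sketch (the helper names and the half-away-from-zero rounding rule are just one example of what a ledger spec might pin down): keep money as an integer count of the smallest unit and make rounding an explicit, auditable step.

    # Sketch of doing decimal (money) math with nothing but integers:
    # amounts are an integer number of cents, so 0.10 + 0.20 is exact,
    # and the rounding rule is spelled out rather than left to the FPU.
    def to_cents(dollars_str):
        """Parse a decimal string like '19.99' into integer cents."""
        sign = -1 if dollars_str.startswith("-") else 1
        whole, _, frac = dollars_str.lstrip("+-").partition(".")
        frac = (frac + "00")[:2]            # pad/truncate to 2 decimal places
        return sign * (int(whole or "0") * 100 + int(frac))

    def apply_rate(cents, rate_ppm):
        """Multiply by a rate in parts-per-million, rounding half away
        from zero -- the kind of rule a business spec nails down."""
        scaled = cents * rate_ppm
        q, r = divmod(abs(scaled), 1_000_000)
        if 2 * r >= 1_000_000:
            q += 1
        return q if scaled >= 0 else -q

    def fmt(cents):
        sign = "-" if cents < 0 else ""
        return f"{sign}{abs(cents) // 100}.{abs(cents) % 100:02d}"

    # 7.5% tax on $19.99, rounded by an explicit rule that is the same
    # on every machine, with no binary floating point involved.
    subtotal = to_cents("19.99")
    tax = apply_rate(subtotal, 75_000)      # 7.5% == 75,000 ppm
    print(fmt(subtotal + tax))              # 21.49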
Let's look at who's behind it, what it is, how it can be used, and where the benefits accrue:
* Who: NRG, while not a utility, is an owner of utility-scale generation capacity. They wholesale electricity to utilities for distribution and resale. Their portfolio is largely fossil-fuel but increasingly renewables [1]. They see themselves in the "death spiral" some other generation suppliers do [3][4].
"A single unit can integrate natural gas, solar panels,
wind turbines and batteries for storage," NRG says. "The
device is connected to the grid, which allows the two to
work seamlessly together. But if the grid's power goes
down, the engine's energy stays up. Any excess generated
energy can be sent back to the grid and may be credited
for your benefit."
.
Aside from the 15 kW maximum electrical output, output
can be used for space heating (35 kW or 0.12 MMBTU) or
hot water (200 gallons per hour at 70° rise). Additional
uses include snow melt and spa and pool heating.
.
The technical specifications are subject to change.
.
When necessary, the unit seamlessly transitions to a
backup power system, to maintain continuous operations,
NRG says.
.
The Beacon 10 unit's dimensions are 3 ft. 8 in. long by 2
ft. 6 in. wide by 4 ft.high, which NRG says is slightly
larger than a washing machine.
.
The unit weighs 1,500 lbs.
* How it can be used: Disrupt the utility's business by selling or leasing equipment directly to end users, and by doing energy arbitrage with the utility.
The whole business of solar integration and battery integration is optional, and smells of "greenwashing-by-association" to me: The cheapest install of this machine would not include a solar component nor battery, each of which might be very expensive in addition to this machine. Who could afford all that? Certainly not everyone.
* Where benefits accrue: See [5] for the long story, but it looks like NRG mainly. If you lease it from NRG, you may not get as much of the benefit of energy arbitrage.
* What I think of it: Regulators should insist on a "connection charge" for this, with exported energy bought back at wholesale prices while energy delivered to the user is sold at retail prices. Disclosure: I own a solar system with net metering, but believe a "connection charge" is entirely reasonable /for those who have their own generation capability/ and thus use the grid for sale of distributed generation, but /not/ for those who don't.
> The cheapest install of this machine would not include a solar component nor battery, each of which might be very expensive in addition to this machine. Who could afford all that? Certainly not everyone.
I've seen German Stirling engines that use a Fresnel lens to focus sunlight to supply the heat - and the most expensive part is the sun-tracking software and equipment.
-----
[1] https://en.wikipedia.org/wiki/Convolution_reverb
[2] http://www.openairlib.net/auralizationdb?page=1
[3] https://www.soundonsound.com/techniques/convolution-processi...