Apropos of absolutely nothing, but IIRC there's a "Star Trek" movie where a shot of one of the displays on some ship or another had the shield frequency up on screen, which allowed the enemy to match it and disable the shields. Back in the 80s(?) when this came out, I remember thinking "237GHz" (or something like that) was an insanely high frequency and doubting it could ever be reached with any degree of stability. Physics: HMB
After that, I'm sure the Owner's Manual that comes with your new Enterprise-class starship now has a "DO NOT DISPLAY YOUR SHIELD FREQUENCY" entry in the "Warnings" section.
To be fair, we don't know if Starfleet ever figured out how the Klingons bypassed the shields. Nor do we know if they ever discovered someone installed a rootkit on Geordi's visor.
Also, given Starfleet's track record of not learning lessons from previous encounters, I suspect all ships still display the shield frequency in a big, bold font. Possibly even on one of the rear bridge consoles, in view of the main viewscreen.
I mean I get that series like Star Trek rely on technobabble and suspension of disbelief, but surely an enemy can just... detect the frequency, or rapidly adjust their own like tuning in until it matches?
That's the shield modulation frequency though, so the actual frequency of whatever field the shields consist of is unknown. It's also entirely possible the numeric strings displayed in the bottom part of the image constitute the secret key to a pseudorandom binary sequence that is overlaid onto the useful shield signal, and that's what the Klingons actually use to penetrate them.
Something not totally clear from the title, but it seems the claimed rate was actually achieved with a transmitter structure similar to what you would find in coherent optics (see figure 4). Instead of coupling to fiber, they couple to a high-speed photodiode that radiates at the ~140GHz beat frequency of the lasers.
EDIT: After a closer reading of the paper, I noticed the real goal was to assess the LO phase noise improvement when moving from an RF synthesizer to an SBS-laser-and-PD-based LO.
While it looks like they got lower phase noise using the optical LO than the traditional source, there are better conventional sources, like the LNS Series from NoiseXT.
I applaud their novel optics-based system for generating the RF that is then transmitted via the antenna. The world is getting quite interesting when you can actually buy test equipment these days that will tell you the frequency (down to the hertz!) of blue laser light.
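As a rough sanity check on the photomixing idea (my numbers, not the paper's): two laser lines near 1550 nm spaced about a nanometre apart beat on the photodiode at a difference frequency in the ~140 GHz range.

    # Back-of-the-envelope beat-frequency check for photomixing.
    # Wavelengths are illustrative assumptions, not the paper's values.
    C = 299_792_458.0        # speed of light, m/s

    lambda1 = 1550.00e-9     # assumed wavelength of laser line 1, m
    lambda2 = 1551.12e-9     # assumed wavelength of laser line 2, m

    f1 = C / lambda1         # optical frequency of line 1, Hz
    f2 = C / lambda2         # optical frequency of line 2, Hz
    beat = abs(f1 - f2)      # difference frequency radiated by the photodiode

    print(f"f1   = {f1 / 1e12:.3f} THz")
    print(f"f2   = {f2 / 1e12:.3f} THz")
    print(f"beat = {beat / 1e9:.1f} GHz")  # roughly 140 GHz for this spacing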
They make a big point of using HD-FEC rather than SD-FEC.
To me, that makes no sense: the extra power usage of SD-FEC can be tiny, even at high data rates. The FEC problem can be parallelized (if the protocol is designed for this), so the decoder doesn't need to run at line rate.
(SD-FEC lets you get substantially more data throughput in a given channel).
I can think of a number of cases where using a hard-decision decoder can be a better choice. Power would be one factor, and I would strongly disagree that the power delta between soft and hard codes at these data rates is small. Unfortunately, I cannot find any public data to back up that claim for what appear to be (based on coding gain and overhead) an HD-FEC using RS and an SD-FEC using braided BCH or LDPC.
Other factors can include the reduced routing complexity and area requirements of HD, since with SD you have to shuffle multi-bit soft information around. Extra die space is expensive, so you want to avoid it if possible.
However, I think the most likely reason is the latency reduction you get when using HD-FEC. I know that some applications of microwave links are extremely latency sensitive; this research could be targeting one of those applications.
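For anyone unfamiliar with the distinction: a hard-decision demapper hands the decoder one bit per symbol, while a soft-decision demapper hands it a multi-bit reliability value (an LLR), which is where both the extra coding gain and the extra routing/power come from. A toy BPSK-over-AWGN sketch of the two (my own illustration; the paper uses high-order QAM and different codes):

    import numpy as np

    # Toy BPSK-over-AWGN demapping: hard decisions vs. soft decisions (LLRs).
    rng = np.random.default_rng(0)

    n_bits = 8
    ebn0_db = 2.0
    sigma2 = 1.0 / (2 * 10 ** (ebn0_db / 10))   # noise variance per dimension

    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                       # map 0 -> +1, 1 -> -1
    rx = symbols + rng.normal(0, np.sqrt(sigma2), n_bits)

    # Hard decision: one bit per symbol, all reliability information discarded.
    hard_bits = (rx < 0).astype(int)

    # Soft decision: log-likelihood ratio per bit; the magnitude is confidence.
    # A soft decoder can down-weight unreliable bits instead of trusting them.
    llr = 2 * rx / sigma2

    for b, r, h, l in zip(bits, rx, hard_bits, llr):
        print(f"sent={b}  rx={r:+.2f}  hard={h}  llr={l:+6.2f}")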
Yes, it's referring to transmitting with 20 meters between transmitter/receiver. Their highest speed transfer was done at 30 centimeters between transmitter/receiver.
Yes, they have their purposes. I work in finance and we use microwave links to talk between systems in Chicago & New York. Fiber links are the backup here! The reason is purely latency, as the bandwidth of the fiber links is orders of magnitude higher. The major downside is that the microwave links drop quite frequently, since they're sensitive to the totality of the weather between the data centers (any precipitation along the as-the-crow-flies path between NYC & CHI means potential dropped links).
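The latency gap is basically geometry plus the speed of light in glass. A rough back-of-the-envelope, where the distance and fiber route length are assumptions rather than our actual figures:

    # One-way propagation delay, NYC <-> Chicago, microwave vs. fiber.
    # Distances are illustrative assumptions.
    C_KM_S = 299_792.458       # speed of light in vacuum/air, km/s
    FIBER_INDEX = 1.468        # typical refractive index of single-mode fiber

    great_circle_km = 1_150    # assumed straight-line NYC-Chicago distance
    fiber_route_km = 1_400     # assumed real-world fiber path (never straight)

    microwave_ms = great_circle_km / C_KM_S * 1e3
    fiber_ms = fiber_route_km / (C_KM_S / FIBER_INDEX) * 1e3

    print(f"microwave (line of sight): {microwave_ms:.2f} ms one-way")
    print(f"fiber (routed):            {fiber_ms:.2f} ms one-way")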
> Requirements for 6G include extremely high data rates (>100 Gbit/s), ultra-low latency (<0.1 ms)....
> For the wireless communication of the 6G era, new radio frequency bands in sub-THz, ranging from 100 GHz to 300 GHz, have been identified as one of the most promising bands.
> There is also a technical challenge for seamlessly connecting wireless communication systems with fiber-optic communication networks
Seems useful as a cable-replacement for e.g. uncompressed video; DisplayPort tops out at 10.8 Gbps, so even allowing a 19x reduction for "real world" limitations, you could send DP over 20 meters with this.
They are only transmitting 30 mm; when they moved to transmitting over 20 meters they had to drop from 64QAM to 32QAM and lost some throughput. It's also laser generated, so I don't believe it's omnidirectional, just point to point.
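The throughput hit from that modulation drop is easy to quantify: 64QAM carries 6 raw bits per symbol and 32QAM carries 5, so at the same symbol rate you give up about a sixth of the raw rate (a rough illustration that ignores any change in FEC overhead):

    import math

    # Raw bits per symbol for an M-ary QAM constellation.
    def bits_per_symbol(m: int) -> int:
        return int(math.log2(m))

    for m in (64, 32):
        print(f"{m}QAM: {bits_per_symbol(m)} bits/symbol")

    loss = 1 - bits_per_symbol(32) / bits_per_symbol(64)
    print(f"raw-rate loss at the same symbol rate: {loss:.0%}")  # ~17%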
Jokes aside, I get a consistent 1600 Mbps over wifi 6E, which is good enough for me.
Of course you need an AP in every room to get consistent performance, but wifi has always been like that. And unless your applications use forward error correction, wifi will always have latency spikes from L2 retransmission if there's even one wall between the AP and the device.
I paid ~$280 for each U6 Enterprise and that's the only thing that limits how many I have. I'm sure it's the same for anyone that cares about wifi performance. How much of a premium would you pay for better aesthetics?
Space on the ceiling where I have essentially unlimited space?
I'm not saying there aren't advantages to this idea. E.g. powering the APs with 120V from the light fixture would mean they can use fiber instead of PoE 2.5G ethernet. That would save me the cost of an extra switch and also reduce power consumption on both sides of the cable. But I'm sure a combined light+AP unit would be expensive enough to make that a moot point. If anything, I'd prefer to sacrifice even more aesthetics and use a bare PCB to save money if I could.
If you live in a small/medium apartment then 6E is quite good. I've been using one of those "enterprise" tri-band APs for around a year and my experience has been also amazing, with just one AP in the entire flat.
Enabling all bands should allow your device to just drop to 5GHz/2.4GHz when needed. This has been seamless for me.
If I may ask, what is your hardware setup like if you're achieving consistent 1.6Gbps? Is that a reproducible, every day speed? Is that only for LAN or both LAN+WAN?
U6 Enterprise AP, Hasivo ethernet switch from aliexpress (used as a media converter from the AP's 2.5G copper to 10G fiber), MikroTik RB4011iGS for NAT (router on a stick), 56G Mellanox SX6036 for wired LAN. 56G optics from eBay and 10G optics from fs.com
>only for LAN or both LAN+WAN?
Wired LAN is 56 Gbps nominal.
Wireless LAN is 1.6 Gbps actual throughput.
WAN is 1.4 Gbps actual throughput (limited by Comcast DOCSIS).
Consider: the faster the packets go, the faster the channel gets free to let other TDMA stations talk. A big office with a lot of computers still experiences QoS losses over 6E when someone starts watching a 4K video. Get those bursts of video-buffer-filling done faster, and other traffic will stay smooth.
Oh absolutely! That's the main reason why every room needs an AP. A 20 Mbps 4K video only uses up about 1% of the airtime on wifi 6E. Any more than that will noticeably increase p99 latency. A device outside the room could easily use 10x as much airtime for the same bitrate.
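For concreteness, the airtime math looks roughly like this (the PHY rates are my assumptions, not measurements):

    # Rough airtime share of a constant-bitrate stream on a shared wifi channel.
    # PHY rates are illustrative assumptions; real rates vary with signal/MCS.
    def airtime_fraction(stream_mbps: float, phy_mbps: float) -> float:
        return stream_mbps / phy_mbps

    video_mbps = 20.0
    in_room_phy = 1600.0   # strong 6E link in the same room
    far_phy = 160.0        # weak link through a couple of walls

    print(f"same room:     {airtime_fraction(video_mbps, in_room_phy):.1%} airtime")
    print(f"through walls: {airtime_fraction(video_mbps, far_phy):.1%} airtime")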
Nor for me; I have a 3000 square-foot house and run a single access point; I ran Ethernet to a very central place (we have a centrally located open stairwell between the two floors, which helps) and got a PoE WAP that covers essentially the whole house (if you hold your phone in the very corner of the corner rooms, you can get it to drop).
> if you hold your phone in the very corner of the corner rooms, you can get it to drop
See, clearly we have very different standards. If you're seeing packet loss in the corner of the room, there will also be a fuckton of L2 retransmissions throughout the room. The latter won't show up as ping loss%, but it has the same effect on p99 latency.
I guess I don't do things on my phone that are that latency sensitive. Web browsing used to be latency sensitive, but now the typical website does nothing for 250-750ms when I click on a link, and (seemingly randomly) takes multiple seconds about 5% of the time even if I'm connected via GigE, so any network latency is masked by that pretty well.
Every website I use (besides reddit and wikia/Fandom) loads almost instantly.
> typical website does nothing for 250-750ms
Two common causes:
- Your HTTP cache is slow because your working set is larger than your SSD's SLC cache. Cheap 1TB SSDs only have ~50GB of SLC cache, which gets nuked with every background software update, so buy a better one.
- Your p99 DNS response time is slow. Recall that many websites require many DNS queries to randomly generated subdomains, and the whole page is limited by the slowest response. Set prefetch: yes in unbound and query multiple upstream DNS servers in parallel with pihole/dnsmasq to eliminate that issue (a quick way to measure whether this is your problem is sketched below).
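If you want to check whether DNS really is the bottleneck before touching resolver config, timing lookups through your current resolver is enough. A stdlib-only sketch (the domain list is just an example set):

    import socket
    import statistics
    import time

    # Measure p50/p99 resolution latency through the system resolver.
    # Swap in the domains you actually visit; note that any local caching
    # (systemd-resolved, dnsmasq, nscd) will flatter repeat lookups.
    DOMAINS = ["example.com", "wikipedia.org", "github.com", "cloudflare.com"]
    SAMPLES_PER_DOMAIN = 25

    timings_ms = []
    for domain in DOMAINS:
        for _ in range(SAMPLES_PER_DOMAIN):
            start = time.perf_counter()
            try:
                socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP)
            except socket.gaierror:
                continue  # a real test would count failures separately
            timings_ms.append((time.perf_counter() - start) * 1e3)

    timings_ms.sort()
    p50 = statistics.median(timings_ms)
    p99 = timings_ms[int(len(timings_ms) * 0.99) - 1]
    print(f"samples={len(timings_ms)}  p50={p50:.1f} ms  p99={p99:.1f} ms")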