USB Cheat Sheet (fabiensanglard.net)
371 points by WithinReason on May 5, 2022 | 168 comments



Some of the entries seem incorrect: "USB 3.2" (USB 3.2 Gen 2x2) and "USB 4" (USB4 Gen 2×2) should have the same nominal data rate of 2500 MB/s; they're both 2 lanes (x2) of 10 Gb/s. Though they are apparently coded differently electrically, so they're distinct protocols.

The tables would benefit from mentioning the line coding (8b/10b or 128b/132b), as IMO it's one of the most confusing bits when you see the effective data rates:

* USB 3.2 Gen 1x2 has a nominal data rate of 10G (2 lanes at 5G) with a raw throughput of 1GB/s (effective data rates topping out around 900MB/s)

* USB 3.2 Gen 2x1 has the same nominal data rate of 10G (1 lane at 10G) but a raw throughput of 1.2GB/s (and effective data rates topping out around 1.1GB/s)

The difference is that Gen 1x uses the "legacy" 8b/10b encoding, while Gen 2x uses the newer 128b/132b encoding, and thus has a much lower overhead (around 3%, versus 20%).
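
To make the arithmetic above concrete, a minimal sketch (my addition, not from the article or the spec) of how the raw throughput falls out of lane count, per-lane signaling rate, and line coding:

    def raw_throughput_MBps(lanes, lane_gbps, data_bits, coded_bits):
        # raw data throughput in (decimal) MB/s after line-coding overhead
        data_gbps = lanes * lane_gbps * data_bits / coded_bits
        return data_gbps * 1000 / 8

    print(raw_throughput_MBps(2, 5, 8, 10))      # Gen 1x2: ~1000 MB/s (1 GB/s)
    print(raw_throughput_MBps(1, 10, 128, 132))  # Gen 2x1: ~1212 MB/s (~1.2 GB/s)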


Thank you for noticing these issues, I have updated the table.

I would be happy to improve it and add the encoding. I am surprised by some of the summary entries on Wikipedia (https://en.wikipedia.org/wiki/USB4). Looks like USB4 "reverted" to 64b/66b. Is that accurate?


128b/132b is the more efficient coding. The closer to 1 the fraction is, the less coding overhead it has, and 128/132 is larger than 8/10.


Actually, I just noticed that 128/132 is the same fraction as 64/66, so both schemes have the same encoding efficiency. So USB4 did not "revert" in terms of efficiency.


Indeed, according to the wiki page the subtlety is:

> USB 3.1 and DisplayPort 2.0 use 128b/132b encoding, which is identical to 64b/66b, but duplicates each of the preamble bits to reduce the risk of undetected errors there.

I guess that was found not to matter so they went back to the more normal 64/66 in USB4? I'm really weak on the hardware stuff so I really have no idea.


FYI, the last two columns in table 2 are a bit confusing: footnote c says "real life sequential speed", but then the last column title is "real life", so it's unclear what the difference is.


He goes off the rails earlier than that, by saying that USB 2.0 is "also known as" Hi-speed. HS is only one data rate supported by the USB 2.0 standard; it incorporates both full speed from the earlier standard and low speed, which isn't mentioned at all.


That's more of an approximation matching how, frankly, most people think of the specs: yes, USB 2.0 supersedes 1.1 entirely, but everyone will think of "full speed" and "low speed" as USB 1, which USB 2.0 supports for backwards compatibility (BC).

That's also why USB 3.1 and 3.2's rebranding of previous versions is so confusing and a pain in the ass to keep straight: USB 3.2 Gen 1x1 is USB 3.1 Gen 1 is USB 3.0 (ignoring the USB 2.0 BC).


Right, per his chart "Full Speed" should be known as USB 1.1 Full Speed and USB 2.0 Full Speed.


Also should be:

12 Mbps -> 1.43 MiB/s -> 1.5 MB/s

480 Mbps -> 57 MiB/s -> 60 MB/s

5000 Mbps (5 Gbps) -> 596 MiB/s -> 625 MB/s

10000 Mbps (10 Gbps) -> 1192 MiB/s -> 1250 MB/s

20000 Mbps (20 Gbps) -> 2384 MiB/s -> 2500 MB/s

40000 Mbps (40 Gbps) -> 4768 MiB/s -> 5000 MB/s
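
A small sketch (my addition) reproducing the conversions in this list; note that, as the reply below points out, these are conversions of the nominal signaling rates, not of the usable data rates:

    rates_mbps = [12, 480, 5000, 10000, 20000, 40000]

    for mbps in rates_mbps:
        mib_s = mbps * 1e6 / 8 / 2**20  # binary mebibytes per second
        mb_s = mbps / 8                 # decimal megabytes per second
        print(f"{mbps} Mbps -> {mib_s:.2f} MiB/s -> {mb_s:.1f} MB/s")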


No, some of your rates are wrong.

The so-called 5 Gb/s USB has a data rate of 4 Gb/s.

The marketing data rates for Ethernet are true, i.e. 1 Gb/s Ethernet has a 1 Gb/s data rate, but a 1.25 Gb/s encoded bit rate over the cable.

The marketing data rates for the first 2 generations of PCIe, for all 3 generations of SATA, and for USB 3.0 a.k.a. "Gen 1" of later standards, are false, being advertised as 25% larger (because 8 data bits are encoded into 10 bits sent over the wire, which does not matter for the user).

All these misleading marketing data rates have been introduced by Intel, who did not follow the rules used in vendor-neutral standards, like Ethernet.

So PCIe 1 is 2 Gb/s, PCIe 2 & USB 3.0 are 4 Gb/s and SATA 3 is 4.8 Gb/s.

So USB "5 Gbps" => 500 MB/s (not 625 MB/s), and after accounting for protocols like "USB Attached SCSI Protocol", the maximum speed that one can see for an USB SSD on a "5 Gbps" port is between 400 MB/s and 450 MB/s.

The same applies for a USB Type C with 2 x 5 Gb/s links.

As other posters have already mentioned, USB 3.1 a.k.a. the "Gen 2" of later standards has introduced a more efficient encoding, so its speed is approximately 10 Gb/s.

The "10 Gbps" USB is not twice faster than the "5 Gbps" USB, it is 2.5 times faster, and this is important to know.


I should add Nominal vs Raw vs Effective speed to the table.

Can you confirm the rule to be used?

Raw Speed = Nominal / Encoding

UMS Speed = Raw / UMS overhead

In the case of 3.0 that would be:

Nominal = 625 MB/s

Raw = 625 - 20% = 500 MB/s

UMS = 500 - 20% = 400 MB/s


The names for the various bit rates vary between authors and standards.

I believe that the least confusing names would be:

Data bit rate = the rate at which the data bits provided by the user are sent

Signaling bit rate = the rate at which bits are sent over the physical communication medium

The 2 rates are not the same because the user data bits are encoded in some way before being sent. The signaling bit rate does not have any importance, except for those who design communication equipment. For the users of some communication equipment, only the data bit rate matters.

The data bit rate is equal to the signaling bit rate multiplied by the ratio between data bits and the corresponding encoded bits.

For example, for USB 3.0 (single link Gen 1):

Signaling bit rate = 5 Gb/s

Data bit rate = (5 * 8 / 10) Gb/s = 4 Gb/s

Data byte rate = (4 / 8) GB/s = 500 MB/s = 477 MiB/s

5 Gb/s corresponds to 625 MB/s, but for a signaling bit rate it is completely useless to convert bits to bytes, because groups of 8 bits on the physical communication medium do not normally correspond to bytes from the data provided by the user. Only the data bit rate is meaningful to convert to a data byte rate.

For USB 3.1 (single link Gen 2):

Signaling bit rate = 10 Gb/s

Data bit rate = (10 * 128 / 132) Gb/s = 9.7 Gb/s

Data byte rate = (9.7 / 8) GB/s = 1212 MB/s = 1156 MiB/s


>Data bit rate = the rate at which the data bits provided by the user are sent

> Signaling bit rate = the rate at which bits are sent over the physical communication medium

There is a third one: in addition to the line coding, there's the message framing (at the logical level). E.g. USB 3 has a signalling rate of 5 Gb/s and a raw data rate of 4 Gb/s, but a theoretical effective data rate of around 3.2 Gb/s (400 MB/s).
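
Chaining the three levels together for USB 3.0 (Gen 1x1), as a rough sketch (my addition; the ~20% framing/protocol overhead is the figure implied by this comment, not a number taken from the spec):

    signaling_gbps = 5.0
    raw_gbps = signaling_gbps * 8 / 10  # 8b/10b line coding -> 4.0 Gb/s
    effective_gbps = raw_gbps * 0.8     # assumed framing/protocol overhead (~20%)
    print(raw_gbps, effective_gbps, effective_gbps * 1000 / 8)
    # 4.0  3.2  400.0  -> i.e. ~400 MB/s effective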


Is that the same Intel that "forgot" to mention that they overclocked a demo CPU and used an industrial water chiller?


My favorite is when Intel demoed the new Ivy Bridge iGPU by having a guy fire up VLC player to play some footage of a racing game while he pretended to control it with a steering wheel controller.

I looked this up and it's actually even worse than I thought. When called out, he claimed it was being controlled from backstage.

https://www.techpowerup.com/158448/that-dodgy-intel-ivy-brid...


I was curious and went looking for it. That's a pretty hilarious oversight!

https://www.tomshardware.com/news/intel-28-core-cpu-5ghz,372...


Lol that’s hilarious


No USB On-The-Go (https://en.wikipedia.org/wiki/USB_On-The-Go) or Wireless USB (https://en.wikipedia.org/wiki/Wireless_USB)?

USB is a triumph of marketeers over engineers. All these things are called USB because USB sells (see also: Bluetooth).


I don't know anything about wireless USB but USB OTG is called USB because it is USB. It's not some totally unrelated protocol.


I thought OTG was just changing up where the host controller is sitting in the USB relationship? So you can have a device that acts like a client when hooked to a computer, or a master when hooked to a thumb drive/webcam/etc...?


Yeah it just allows you to use a B port as a host (if supported). It's still the USB protocol.


AFAIK, according to the standard you still cannot use the B port as a host, it should instead be an AB port (a socket in which both A and B plugs fit).


I had an iRiver H320 with USB-OTG support. At the time I thought it was just a straight-up mini-B port but you're right, that's actually a mini-AB port!

https://www.guru3d.com/miraserver/images/reviews/soundcards/...

I am not sure I have ever seen mini-A anywhere.

God, what a wacky standard. (USB-OTG specifically but really USB plugs in general)


Another example of mini-AB is the TI-84 series. Two calculators can be directly connected but a USB A-A cable is verboten by the spec (although I sometimes see them nonetheless), so TI put an AB port on the calculator and sold a mini-A to mini-B cable. It is somewhat confusing to users that a cable with two different ends was nonetheless completely transposable.

I'm not sure of this at all but I sort of doubt the TI-84 used spec compliant OTG, because in general the USB implementation on that calculator was very weird and unreliable and gave the feeling that they were doing something uncouth like bit-banging and not quite fast enough. I remember it routinely taking multiple attempts to get something to transfer successfully.


It is one of these connectors:

https://en.wikipedia.org/wiki/USB_hardware#USB_On-The-Go_con...

But every OTG device I have ever used has just used the USB-A port.


I cannot find any examples of this kind of port, can you share a link?



Bluetooth Smart aka. Bluetooth Low Energy aka. Wibree aka. not actually bluetooth


> May 05, 2025

The article is dated May 5, 2025. I've long been wondering about the future of USB.


OP forgot [2025] from the title.


I could still fix it, but I fear the wrath of Dang


USB 4.2 (later renamed to USB 3.2 gen 2 Mk. 1) comes with built in time traveling. They just keep adding features to the protocol and making it complicated.


It's a form of SEO. Google promotes "fresh" content, so if it sees a date less than a year ago it often assumes the content is better. Normally you will see this abused by crappy content mills using a plugin that constantly updates the date on their garbage.

Putting a static date from 3 years in the future seems like a quick and dirty hack to do the same thing.


Not to be read before: see article time stamp


Fun fact: USB 2.0 webcams have existed for over 10 years. USB 2.0 is 60 MB/s.

A pixel of an image is 3 bytes. A 1920x1080 FullHD image is 6.2 MB. At 30 frames per second, a second of FullHD video is 186 MB. How did they do that?

Answer: frames are transferred as JPEG files. Even a cheap $15 webcam is a tiny computer (with a CPU, RAM, etc), which runs a JPEG encoder program.


Most webcams, especially 10 years ago, are not 1080p, or even 60 fps. Many aren't even 720p. 1280 x 720 x 3 bytes x 30 fps = ~80 MB/s. 480p @ 30 fps = 26 MB/s. That is how many webcams can get by without hardware JPEG/H264 encoding.

4K @ 60fps = 1.4GB/sec. USB 3, even with 2 lanes, will have trouble with that.
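
For reference, a quick sketch (my addition) reproducing the uncompressed-bandwidth arithmetic used in this thread; the small differences from the figures quoted above come down to MB vs MiB rounding:

    def uncompressed_MBps(width, height, fps, bytes_per_pixel=3):
        return width * height * bytes_per_pixel * fps / 1e6

    print(uncompressed_MBps(1920, 1080, 30))  # ~187 MB/s  (Full HD)
    print(uncompressed_MBps(1280, 720, 30))   # ~83 MB/s   (720p)
    print(uncompressed_MBps(640, 480, 30))    # ~28 MB/s   (480p)
    print(uncompressed_MBps(3840, 2160, 60))  # ~1493 MB/s (4K @ 60 fps)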


The cheap ones are using hardware JPEG encoders. The associated micro isn't powerful enough to do it in firmware alone.


Surprised they don't use a hardware video encoder. Is it because the well and efficiently supported formats are all MPEG, and thus have fairly high licensing costs on top of the hardware? Or because even efficient HVEs use more resources than webcams can afford? Or because inter-frame coding requires more storage, which (again) means higher costs, which (again) eat into the margin, which cheap webcam manufacturers consider not worth the investment?


My older Logitech C920 has an on-board H.264 encoder. Newer revisions of the same model do not.

I haven't figured out why they chose to remove it, but your point about licensing cost combined with them not advertising it much as a feature, and most of their competitors not including "proper" video encoding might explain it.

Edit: Found an official explanation here: https://www.logitech.com/en-us/video-collaboration/resources... TLDR, they figure most computers at that point had HW encoders.


Unfortunately, this makes it much harder to use these as webcams on a Raspberry Pi (which even has H.264 hardware acceleration – the bottleneck is decoding the MJPEG stream from the camera, for which ffmpeg does not have hardware acceleration on the RPi).


As an alternative to ffmpeg, GStreamer provides hardware accelerated MJPEG decoding on the Pi. I think there are bugs, though, which makes it unsuitable for some use cases. Here's an example pipeline - https://forums.raspberrypi.com/viewtopic.php?p=1989575#p1989...


MJPEG is just a very simple "video" format that needs very simple and cheap electronics to work. Video encoding blocks are mostly part of bigger SoCs and come with licensing costs.

The same goes for the receiving end - decoding a stream of JPEGs is just much simpler in both CPU use and code complexity than dealing with something like H.264.


Hm. But then wouldn't it make more sense to just stream the raw sensor data, which is 1 byte per pixel (or up to 12 bits if you want to get fancy), and then demosaic it on the host? Full HD at 30 fps would be 59.33 MB/s, barely but still fitting into that limit.

But then also I think some webcams use H264? I remember reading that somewhere.


The pixel density doesn't generally refer to the density of the Bayer pattern, which can be even denser. Generally a cluster of four Bayer pixels makes up one pixel (RG/GB), but like most things in computing, the cognitive complexity is borderline fractal and this is a massive simplification.


> Full HD at 30 fps would be 59.33 MB/s, barely but still fitting into that limit.

It's not fitting into anything, I fear; best case scenario, the effective bulk transfer rate of USB 2 is 53 MB/s.

60 is the signaling rate, but that doesn't account for the framing or the packet overhead.


It would need a funny driver, and since that stuff is big parallel image processing it's easy in HW, but if someone has a netbook or cheap/old Celeron it would peg their CPU to do the demosaic and color correction.


> Full HD at 30 fps would be 59.33 MB/s, barely but still fitting into that limit.

That limit is too high even as a theoretical max.

You could do raw 720p.


I don't know where you get "1 byte per pixel" from. At minimum, raw 4:2:0 video would be two bytes per pixel, and RGB would be three bytes per pixel with 8-bit color depth.


You're talking about processed color frames. The GP was suggesting that the camera stream the raw sensor data, which doesn't have individual color channels, just a monochrome grid with 10 or 12 bits of usable data per pixel. A Bayer filter[0] is placed in front of the sensor so that a given color of light falls on each cell. The USB host would be responsible for applying a demosaicing[1] algorithm to create the color channels from the raw sensor data.

If we take the AR0330 sensor used in the USB Camera C1[2] as an example, it has a native resolution of 2304H x 1296V and outputs 10 bits per native pixel after internal A-Law compression[3] for a total raw frame size of 3.56 MiB, assuming optimal packing. The corresponding image, demosaiced and downscaled to Full HD (1920x1080), in RGB with eight bits per channel would be 5.93 MiB.

[0] https://en.wikipedia.org/wiki/Bayer_filter

[1] https://en.wikipedia.org/wiki/Demosaicing

[2] https://www.kurokesu.com/shop/cameras/CAMUSB1

[3] https://www.onsemi.com/products/sensors/image-sensors/ar0330


> it has a native resolution of 2304H x 1296V

Seems to me like that kills the idea dead? GGP assumed 8bpp and that the raw resolution matched the output, and came out... well wrong (the effective bulk transfer rate of USB 2.0 is 53MB/s on a good day), but by just a few megs.

However the raw resolution is 40% higher than the final output, meaning even at 8bpp you're at 85MB/s and you've blown way past any hope of recovering via a few tricks. At 10 bpp you're above 100MB/s.


Native resolution at 10bpp requires 40% less data per frame than the final Full HD RGB output at 8bpp per channel (24bpp total), so it would represent some savings.

The problem is that neither format fits within the limits of USB 2.0 at 15 FPS or higher. To achieve a reasonable framerate you need to apply compression, and generally speaking you'll get better compression if you demosaic first.
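
To make the comparison concrete, a small sketch (my addition) using the AR0330 figures quoted above and the ~53 MB/s effective USB 2.0 bulk rate mentioned earlier in the thread as the budget:

    BUDGET = 53e6  # bytes/s, effective USB 2.0 bulk rate quoted earlier in the thread

    raw_frame = 2304 * 1296 * 10 / 8  # native sensor frame, 10 bpp packed
    rgb_frame = 1920 * 1080 * 3       # demosaiced Full HD, 8 bits per channel

    for name, frame in [("raw 10bpp", raw_frame), ("RGB 24bpp", rgb_frame)]:
        print(f"{name}: {frame / 2**20:.2f} MiB/frame, max {BUDGET / frame:.1f} fps")
    # raw 10bpp: 3.56 MiB/frame, max 14.2 fps
    # RGB 24bpp: 5.93 MiB/frame, max 8.5 fps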


When talking about digital cameras, each "pixel" is a single color sensor. Blame marketing.

Also 4:2:0 is 6 values per 4 pixels. 1.5 bytes per pixel at 8-bit.


It needs a uC with some special hardware anyways to do demosaic or else it would require special drivers that would peg some people's crappy laptop CPUs.

Also the raw YUV 4:2:0 is 1.5 bytes per pixel so that's doing half of the "compression" work for you.


Just how much do you have to hate consumers to come up with a scheme like this? Increment revisions as you add more features, add something to the end to say how fast it goes. The 3.2 renaming is idiotic.


USB 4 AKA USB 4 Gen2x2

USB 4 (opt) AKA USB 4 Gen3x2

They had a chance to fix their colossal fuckup and they decided not to.


In marketing and on cables they've chosen to use the terms USB4 20Gbps and USB4 40Gbps, so at least that's explicit. There are also official ways to mark cables as being 100W or 240W capable.


Their issue was not the naming for consumer or tech users; their issue was "how do we allow any random laptop to claim the latest USB despite not actually supporting it".

It was super obvious with usb 3 and its sub versions, and it gets even worse with 4.


Yes. The "IF" in "USB-IF" stands for implementers forum, it is a consortium of hardware companies who make devices. It's preferable to them if they can slap "USB 3.2 support!" on the box without having to redo their boards with a new, expensive component.

In other words, the incentives here are for USB-IF to promote customer confusion, not to reduce it, because that confusion can sell devices and push profit margins.

It's absolutely terrible that the EU is giving this group a legal monopoly on the ability to create and proliferate new standards. Their incentives fundamentally run against the consumer and they have repeatedly acted against the interests of the consumer. Unlike HDMI, there is no VESA to counterbalance them, it is USB or nothing, so you'll have to deal with these crappy standards going forward.

--

HDMI is doing something similar now too - "HDMI 2.1" is a completely hollow standard where every single feature and signaling mode added since HDMI 2.0 is completely optional. You can take HDMI 2.0 hardware and get it recertified as HDMI 2.1 without any changes - actually you must do this since HDMI Forum is not issuing HDMI 2.0 certifications any more, only HDMI 2.1 going forward, the new standard "supersedes" the old one entirely.

So - "HDMI 2.1" on the box doesn't mean 4K120 support, it doesn't mean VRR support, it doesn't mean HDR support. It could actually just literally be HDMI 2.0 hardware inside. You need to look for specific feature keywords if that is your goal.

https://arstechnica.com/gadgets/2021/12/the-hdmi-forum-follo...

https://www.youtube.com/watch?v=qo9Y7AMPn00


USB versioning is such a clusterfuck.


There was a really short timeframe when I was really positive about USB, but that has been long lost since.

They should've never allowed cables to only provide some capabilities and still get the branding. Having capabilities for connectors was fine IMO, but also accepting them for cables was bad, because you cannot really find out what a cable supports and where the issue originates if something goes wrong.


It’s why I always buy TB3 (or now TB4) cables rather than a cheaper USB-C to USB-C. Due to the strict requirements on TB cables, you can pretty much guarantee it’ll support any use case (alt modes, PD, etc). Sometimes overspending is worth the headache prevention.


Apple just released a €159 cable


And? Apple used to sell a TB2 cable way more expensive than that, if high price is your point, and besides, saying Apple charges a premium price for a premium product (like this) is about as insightful as saying the sky is blue.


FWIW, Monoprice's 1m USB4/TB4 cable (100W) is $25.

https://www.monoprice.com/product?p_id=41946


Apple's cable is 3m, which is likely most of why it's more expensive.

https://www.apple.com/shop/product/MWP02AM/A/thunderbolt-4-p...


Their 1.8m cable is $129, so still >2x the price of competitors. It’s an extremely well made cable, but overkill, probably?


Do you have any of these? The photos show no markings on the cable at all, which is a bummer. If I get some I'll have to put zip tie tags on them to label them, otherwise I'll never remember which ones are the good ones or not.


3m is beyond the max cable length specified by Thunderbolt, so it requires active extenders (they're hidden in the plugs) and tight manufacturing and shielding. You're paying extra for the ability to break that max length spec, and it's one of only a handful of products that do it.

The only other one I'm aware of is the Corning Active Optical Cable series which costs $360 for a 10m Thunderbolt 3 cable or $479 for a 30m cable, or $215 for a 10m Thunderbolt 2 cable (ie slower and different connector, potentially needs a $50 converter on each end). Also those Corning cables have a reputation for failing barely out of warranty even if they are treated very delicately. Amazon reviews are full of "my cable failed 1 year and 1 month after purchase and Corning told me to go eat a dick" type reviews.

https://www.bhphotovideo.com/c/product/1577008-REG/optical_c...

Also, just FYI, but max length spec on a USB 4 cable (which will support Thunderbolt-like features) is 0.8m and you'll need to use special cables to get the full capabilities there too, you can't just use a $15 usb-c to usb-c cable you bought off amazon. Just like some usb-c cables only support usb 2.0 speeds, you won't get full-duplex 40gbps signaling out of a 10gbps half-duplex USB 3.1 cable. USB certification isn't magic, these are physics-based electrical/RF problems here and high-capability cables/devices require more expensive implementations.

But anyway go ahead and click through that B+H link and look through their thunderbolt 3 cable category for another 3-meter cable. You won't find any. If 2 meters is not enough... your options are Apple, Corning, or nothing.

The Apple premium is still a thing, but I'd expect competitors to clock in around $100 if/when they come out. There is always a steep price inflection once you move from passive cables to active cables or fiber. If you can avoid that, great, use a shorter cable. If you can't, you have to pay up. Not everyone can just move everything closer (eg running through walls) and it's always so disappointing to see people arguing against consumers having options just because they don't personally need them. No one is making you buy this, but the people who do now have an option they didn't before. That "if I'm not interested in a product then it shouldn't exist at all!" mindset seems to be extremely pervasive in the tech space and I just don't get it, not every product has to be aimed at you personally. It's "center of the universe syndrome" as one of my teachers liked to call it.

I've looked at the Corning cables for setting up a Vive Wireless Adapter that can be in a different room from my desktop rig (adapter goes in a Thunderbolt enclosure, mounted on the wall, thunderbolt optical cable goes through the walls...) but the price and the failures kinda scared me off. I get that this won't work for normies, but personally I'd prefer to have the transceivers and the fiber be separate so I can replace one or the other if needed. Shipping it pre-assembled is fine but given we're talking about a $500 investment here I'd want it to not break in a year or at least to be semi-repairable if it does.


Can you use USB3 (whatever gen crap it is) for this application maybe?

I can drive an Oculus Quest 2 via 8m of USB3 cable. The cable contains a fiber optic with a repeater hidden inside the female end. The total bandwidth this way is enough for the Oculus Quest 2 at 90fps.


The goal here is wireless, a single thinner cable would be better but it's better to not have to worry about wires at all. And I'm not willing to set up a facebook account just for a Quest.

The Vive Wireless Adapter (VWA) is a PCIe card (single slot/low profile/mitx length). The output from the card is an SMA connector with a RF signal that goes to the antenna, max official length is 2 meters (and it isn't another SMA on the other end, it's hardwired into the antenna, so you have to use an extension, meaning multiple SMA connectors in the middle). I've seen people use some fairly long extension cables, but that attenuates the signal somewhat. It's probably fine but it's undesirable.

There are USB wireless adapters (TPLink makes one iirc) but generally they are agreed to be an inferior solution in various respects - higher CPU usage, higher latency, worse signal quality, a green bar on the top, etc. This is basically an ideal use-case for WiGig, it was literally designed to be a wireless display transmitter, and that's what the Vive Wireless Adapter uses inside, it's actually an off-the-shelf Intel WiGig card. The TPCast uses a much lower-bandwidth solution and compresses it much harder and that requires more latency, more oomph on the PC, and still gets a worse signal quality.

But, the WiGig card only has a short cable to the antenna. Solution: put the card in an enclosure and mount the enclosure on the wall, run the cables to the PC. Problem: thunderbolt also only runs 2 meters. Solution: optical thunderbolt cables. The rest is solvable from there.

The other reason I haven't raced into it is that HTC hasn't kept it up with the newer hardware. The Vive Pro has a higher-res screen and the VWA can only run at (iirc) 3/4ths resolution. It's still a better screen, there's less Screen Door Effect, but when you're talking about dropping around $1000 to get wireless working flawlessly and tucked away into the walls, it better be fucking flawless. On paper the WiGig actually has three channels and should be able to send on all three at once, but this doesn't seem to be implemented...

Honestly the TPCast is probably a 90% solution, it probably chokes on the Vive Pro as well but maybe for $200 instead of $1000 that's acceptable. But it's tough for me to accept "good enough" when there's a technically better solution. The VWA is an absolutely ideal solution here. At one point there were some updates pushed that looks like Valve was working on it, but (with apologies to South Park)... in typical Valve fashion, "they just sort of got high, and wandered off..."

And then, the Index is just an all-around better headset... but it doesn't have a wireless solution at all right now (apart from maybe the TPCast?). It kinda sucks, drives me up the wall that there's no "perfect answer" here. Every solution has some large downsides.


> 3m is beyond the max cable length specified by Thunderbolt, so it requires active extenders (they're hidden in the plugs) and tight manufacturing and shielding. You're paying extra for the ability to break that max length spec, and it's one of only a handful of products that do it.

I'm pretty sure it's not breaking the spec. Are you sure about that claim?

And the main factor is almost always decibels of signal loss rather than length, isn't it?

> Also those Corning cables have a reputation for failing barely out of warranty even if they are treated very delicately. Amazon reviews are full of "my cable failed 1 year and 1 month after purchase and Corning told me to go eat a dick" type reviews.

My understanding is that the thunderbolt 2 ones reliably self-destruct but the thunderbolt 3 ones probably fixed it? At the very least they can take a lot of physical abuse.

> 10gbps half-duplex USB 3.1 cable

I don't think any of the high speed wires are ever half duplex?


> I'm pretty sure it's not breaking the spec. Are you sure about that claim?

Actually we're both wrong... it appears max length for a passive cable is 18 inches for full performance. Passive cables technically max out at 18 inches for 40gbps and drop to 20gbps at 2 meters. Past that you need an active cable (which has signal repeaters).

Active cables generally run up to 2 meters (the Apple is the first 3m active cable except for the Corning AOC cables), but in most cases (everyone except apple) you start dropping features like USB 3.1 or displayport. AFAIK Apple's solutions are unique in that they don't - like for example I looked up a 2 meter Belkin cable advertised as TB3 and it doesn't carry the DisplayPort channel.

Which is why the advice for Thunderbolt is "just shut up and pay apple their money".

https://appleinsider.com/articles/17/08/15/psa-thunderbolt-3...

Not absolutely positive what the official standard is - they might well only say the passive number (ie 18 inches) because active can obviously be more or less arbitrarily long with things like fiber, it might not make sense to define a maximum cable length in that context. Or they might amend it as they go... obviously Apple has now broken the 2 meter barrier with their active copper cable.

I thiiiiink this becomes 0.8m for a passive cable in USB4/TB4 as the official passive spec? CableMatters seems to have a 2 meter active cable out though.

> I don't think any of the high speed wires are ever half duplex?

High-speed is half-duplex, yeah. It looks like SuperSpeed is full-duplex though so I'm wrong on that bit.

I just remember it being a nightmare trying to use USB external hard drives (which would have been back in the USB 2.0/High-speed era when I used them last!) and reading/writing at the same time tanked performance far beyond what you'd get with even an internal HDD. Read or write, one at a time, mixing both was a trip to hell.

https://www.ramelectronics.net/USB-3.aspx


> Actually we're both wrong... it appears max length for a passive cable is 18 inches for full performance. Passive cables technically max out at 18 inches for 40gbps and drop to 20gbps at 2 meters. Past that you need an active cable (which has signal repeaters).

I'm still pretty sure it's based on signal loss, and the lengths are just estimates of what you can reliably get out of a cost-optimized manufacturing process.

> High-speed is half-duplex, yeah. It looks like SuperSpeed is full-duplex though so I'm wrong on that bit.

By "the high speed wires" I mean the pairs introduced with USB 3. Not USB's dumb naming conventions.

> I just remember it being a nightmare trying to use USB external hard drives (which would have been back in the USB 2.0/High-speed era when I used them last!) and reading/writing at the same time tanked performance far beyond what you'd get with even an internal HDD. Read or write, one at a time, mixing both was a trip to hell.

I think a big part of that is also the mass storage protocol combined with slow responses off a hard drive. I have a USB 2.0 SSD-class drive around here and it actually performs pretty well even on mixed workloads.


> for example I looked up a 2 meter Belkin cable advertised as TB3 and it doesn't carry the DisplayPort channel

It’s not possible (I think?) for a USB-C cable to support TB3 but not DisplayPort. Both are alt modes on the USB protocol and use the same wires for transmission, so it doesn’t make sense to go the extra mile of TB certification and not support DP too when you already have the necessary wires.


> 3m is beyond the max cable length specified by Thunderbolt, so it requires active extenders (they're hidden in the plugs) and tight manufacturing and shielding.

The ICs at both ends, tight manufacturing, and shielding are required even for a 0.5m Thunderbolt cable.

For a 3m cable, the manufacturing tolerances and shielding go up even higher, but they were still required even for short cables.


So on the next versions of USB, the cable length will get shorter and shorter until the max gets to 5cm?

While I get the technical reasoning about high frequency/attenuation etc that limits cable length as speeds go higher, there are obviously some practical limits to how short cables can be.

How would that be solved, I don't know.


Keep the same speeds, add more wires.



I'm confused what that section is supposed to represent. E.g. Apple has a 3 meter USB 4 3x2 (40 Gbps) cable but the "cable" value for that section is listed as 0.8m. The only hit I'm getting in the USB 4 spec for "0.8" is on page 59 referring to maximum receiver insertion loss in dB for a gen 3 connection including a 0.8m passive cable but that in itself isn't a hard limitation on cable length.


Not my area of expertise, but maybe some (unrealistic) options include using fiber optics for the data lines, or adding more data lines.


There already exist some fiber-optic USB cables that come in lengths >50m and with support for USB 3.1, so it doesn't seem like a very unrealistic option.


Redmere chips also proved HDMI can go very, very far with a little extra investment. I've run 4K signals hundreds of feet with them. We've seen this problem solved several times, I can't imagine it's physically impossible with USB-C.


That sounds more like fiber optic adapters/converters that fit into usb-ports and talk usb, rather than USB-cables that can be 50+ meters.


I think GP is thinking of fiber optic Thunderbolt cables probably.


https://www.amazon.com/FIBBR-Female-Active-Extension-Optical... provides USB-to-fibre-to-USB in a single cable, and a copper pair in parallel for power.


$169+47 shipping for those curious

Can't say it's not unique and cool, to be fair


Some USB4 / Thunderbolt cables are like that, but with copper in the middle. The drivers on the device end wouldn't be able to maintain signal integrity over that size of cable, so there's a pair of transceivers in each end of the cable to convert the signal into a format that'll survive transmission.


Seems like you can get 3m optical usb-c cables. Oculus sells an official "Full featured USB active optical cable. USB 3.2 Gen 1 Type-C" for tethered play.

That's what the cheatsheet says so maybe that's part of the spec.


What's the difference between talking USB and being USB?


I guess at some point optical will be the only way forward.

Having more data lines in a serial bus is interesting, as the whole reasoning for going from parallel lines (e.g. Centronics, ATA/SCSI or ISA/PCI buses) to serial (SATA/SAS, PCIe, USB) was that coordinating multiple data lines got impossible due to physical limitations (e.g. minimal differences in cable lengths started to matter).


Multiple serial busses, each with its own clocking and buffer, so that the combined data is extracted synchronously at the end. The crosstalk is still a problem but there are ways around that: different twist rates for different pairs for instance.


> I guess at some point optical will be the only way forward.

Maybe. Though Infiniband's currently at 100Gbps per lane on a 1.5 meter passive cable. And active cables can give you a moderate boost while still on copper.


that's a giant QSFP+ cable (I think mine are at least 3/8") with tons of shielding and a terrible bend radius though.

And my cables all have a "10 plug/unplug cycle lifespan" sticker on them - it undoubtedly will go for longer in practice but it's not designed for USB-style usage where you might plug and unplug your phone a dozen times a day as you charge it.

Commercial design concerns are very different from consumer design concerns, basically. Phones would probably be easier if we had a 1/2" x 1/2" x 1.5" connector with a shielded connector body! ;)


Suggestion: maybe include all the USB-C-plug Thunderbolt versions too. My personal policy these days is to just buy reputable Thunderbolt cables for all my USB-C needs. Maybe I'm doing the wrong thing?

Also, I think there's a difference between active and passive USB-C cables, or something like that.


> Maybe I'm doing the wrong thing?

If you're happy with it then probably not.

The main possible issues are that it's more expensive and you get shorter and thicker (less flexible) cables, a passive non-optical TB (or USB4) cable will top out around 1m.

Less capable cables can be longer and thinner which is convenient for e.g. mice and such small devices. But otherwise may not matter overly much.


I've been pretty happy with my less flexible cables. I don't need to snake them around tight corners anywhere. Being less flexible seems to keep them from auto-tangling.


Ah, USB. In the old days it was different cables for different things; nowadays it's one connector for everything, but beware: the cable might physically plug into the socket, but whether you'll get the functionality you want is another matter.


Seems it is going backward to me too.

At one point I remember hooking up a computer being like one of those shape puzzles we give children. If you can match them they'll work. No two of my devices used the same cable or port, but if it fit it'd work.

Keyboard switched to PS/2 so those and PS/2 mice were confusing, but eventually they standardized on colours.

USB came out and you could just plug it in wherever. This was great.

And now? 20 combinations of cable features with the same socket but all do something else. I can only imagine what the return rate will be for stuff like this.


Just how many devices do you meet that regularly hit those edge cases? Outside 4K+ multimonitor connections?

(It's really popular and easy to bash on USB on this forum, but it turns out that in real life your USB-C device will "just work" for pretty much all setups outside really fringe high performance ones. And even those will usually just negotiate lower rate.)


There's something about the naming of USB that is great. I love how there are now something like a dozen 'universal' standards, and how the serial bus now has multiple lanes.


In the "USB-A/B" section, they're all labelled "Type-A", the 3rd and 4th should be labelled "Type B".

It's also missing:

* mini-B 4 wire (older phones, etc.)

* micro-B 4 wire (most electronics prior to Type-C)

* micro-B 8 wire (mostly found on external 2.5" HDDs)

There were also a bunch of other connectors (mini-a, mini-a/b, etc) but they are very rare.


I'm always flabbergasted at how difficult and user-hostile it is to discern between the various USB standards.


Hello Fabien! I saw on Twitter that you had built a gaming setup, can you write an article on your blog as you did for your silent PC?



Amazing setup for an amazing game! Thanks for this post, I love reading you :-)


I did build a PC last year especially to play Diablo 2: Resurrected. I did write something at the time but never published it. Maybe I will clean it up.


So where exactly does USB-C fall into?

I have 2 different generations of USB-C hosts, and they behave quite differently when approaching max capacity, especially with high-quality low-latency audio (USB-C was supposed to be the de-facto replacement for FireWire).


USB-C is a connector type, like USB-A (usually known as the classic USB plug) and USB-B (usually the other side of said plug, a square kind of connector). USB-B had other offspring like miniUSB and microUSB (note that in these cases on the other side of the cable you usually have a USB-A plug).

USB-C is the first time cables have the same connector on both sides, so it obsoletes USB-A and USB-B. But what is sent over USB-C? Can be USB 3, with which it is often conflated because they came around the same time, but it can also be USB 2, so it is a bit hard to tell. But USB 3 can use old-style USB-A as well (the blue plugs with the same shape as the classic USB plugs) and USB 3 micro-B (the microUSB plugs with an extension off to the side).


> Can be USB 3 [...] USB 2, so it is a bit hard to tell.

...or Thunderbolt, USB 4, DisplayPort (through Alt-mode or encapsulated in Thunderbolt), or HDMI (Alt-mode), or MHL (Alt-mode), USB Power Delivery...

Unfortunately, not every cable with USB-C connectors can carry all of these. E.g. there are USB-C cables that can only carry USB 2. Or cables that can carry USB 3, but not Thunderbolt. Also, not all cables can carry the same wattage for power delivery.

It's a mess.


Worse, there are no "best" cables longer than 0.5m: any longer than that, Thunderbolt 3 requires active cables which don't pass non-Thunderbolt data beyond, IIRC, 480 Mbps.

As someone who spent many years using a mix of 25/50/68/80-pin fast/ultra/… single-ended, LVD and HVD parallel SCSI devices, however, USB-C/Thunderbolt cabling still feels like a breath of fresh air.


I think Thunderbolt 4 active cables are supposed to pass higher USB 3 speeds? At least the Apple Thunderbolt 4 cable claims to do so:

https://www.apple.com/shop/product/MN713AM/A/thunderbolt-4-p...


Of course USB-C makes this worse, but the problem already started earlier: a few years ago I connected my phone to my computer with a USB-A to micro USB cable and was scratching my head why it didn't work. Then I remembered that the cable had come with some Bluetooth headphones and was only a charging cable without data lines...


Desktop speakers do this still. Instead of simply being a USB speaker set, they use the line out jack for audio and a USB plug for power.


There's actually a reason for that. Standard USB can (obviously) only transfer digital audio, and most speakers are "dumb" devices designed to just amplify an analog signal. In order to convert digital to analog, you need a DAC (Digital-to-Analog-Converter), and good DACs are still a nontrivial cost to a manufacturer, so whatever DAC you already have in your computer is probably better than the crappy one that would come with cheap consumer speakers.


Funny enough, USB-C can transport analogue audio over USB cable though :)

Tends to be too expensive for the cheapest of products.


There was a period of time when a Google engineer was producing reviews on Amazon about which USB-C cables would make your laptop burn. That was fun, and totally not the sign of an overbloated standard.


IIRC, that particular cable was one which had its power wired to the ground pin and ground wired to the power pin. No standard can help you if the cable is that badly made.

(The effect of that miswiring is to apply a negative voltage, around -5V, to a chip most probably designed for a range of -0.5V to 20.5V; which results in a short circuit through at least the ESD protection diodes within the chip, and possibly other parts of the chip too.)


Yep, a batch of cables having the super basic power pins wired backwards tells you basically nothing about the standard, no matter how often people try to use it as evidence of complication.

And the docks that were frying switches were putting 9 volts on a signal pin, also obviously wrong.


>note that in these cases on the other side of the cable you usually have a USB-A plug

Usually a full-size USB-A, you mean, because what we commonly know as mini-USB and micro-USB are actually mini-B and micro-B, which have corresponding (but now rarely used) mini-A and micro-A ports. Before USB-OTG, USB used to be an explicitly directional protocol, with a master and a slave device.

https://upload.wikimedia.org/wikipedia/commons/8/82/USB_2.0_...

https://en.wikipedia.org/wiki/USB_hardware


It's orthogonal.

USB-A is a host-side connector, USB-B (normal/mini/micro) is a client-side connector, and USB-C is a two-way connector.

Each of them can be implemented for each USB version, except USB-C came later and makes no sense before USB 3.

Then USB versions added features, signalling conventions and wires. But the USB-A and B connectors are backwards compatible all the way to USB 1.0's 1.5 Mbit/s.


On-the-Go allows mini and micro B to be host as well.


Good question. I bought a RaidSonic Icy Box IB-1121-C31 USB 3.1 (10Gbit) S-ATA dock recently (with a USB Type-C connector) that came with a USB-C cable, and I had to buy a special "USB-A to USB Type-C cable" to achieve 10 Gbit/s with the 10 Gbit/s USB-A connector on my mainboard.

The "USB-A to USB Type-C cables" that I already had only worked up to 480 MBit/s.


What does the U in USB stand for again?


Actually, I think it’s just a upside down “∩”. Makes sense because the non-C connectors are always upside down, and all it does is intersect wires.


Unintuitive.


Unintelligible


Uikipedia


U-turn


If you think it's not complicated enough, add Thunderbolt to it.



I like the simple site design with one page of info about USB.

Someone please make a similar one-pager with tables about PCI Express, Ethernet, HDMI...


I created a cheat sheet for Ethernet, when I built my home network: https://github.com/ProZsolt/runbook/blob/master/ethernet-cab...


Really nice!

Today I learned that there is 25 and 40GBASE-T on copper; these PHYs must heat like hell haha.


Should 100, 400, 800G be added? Yes, it's a bit early for 800G, but I think 100/400G are already in use at data centers.


This only covers twisted pair, as I only use this as reference when I need new endpoints in my or my parents house.

I'll update the guide when I rewire my house with fiber optics (currently I don't even use the full potential of cat 6), but contributions are welcomed.


I second that request!


If only it included a guide to the different USB connectors, but that might make TFA too long to publish.


I've definitely used 5m+ extensions on USB 1 (and 2, IIRC) before. I guess it'd be sketchy running something that requires decent throughput without b0rking on ECC/FEC/whatever it uses, but for the temperature sensors I was using, it was fine.


A long time ago, I was using a USB 1 or 2 Wifi adapter through a USB extension cord, I'm pretty sure the total cable length was more than 5 meters. It "worked", but even just flicking a light switch caused the network connection to reset. So yeah, it may "work", for certain values of "work".


This is wrong. For example Full Speed isn't a name for USB 1, it's the name for a speed which is supported by USB 1 and 2 (not sure about 3). Most USB microcontrollers are Full Speed USB 2.


Hi, nice cheatsheet! I've published it on DataStack, where anyone can contribute improvements (like adding columns or corrections).

https://datastack.net/datastack/usb

I can transfer the project to you if you like.


Why didn't they focus more on cable length? I'm not sure how much latency longer cables would add, since the signal still travels at close to light speed.

Maybe there's someone in the world wondering if it's possible to emulate MarioKart from his office PC to the living room with a 10m HDMI and USB3 cable... Just guessing :)


Where's the standard speed?


CC1 and CC2 pins are missing.


They are not required to be wired through all cables. Similar to SBU signals.


USB 4 (opt) is ... optical? Or optional?


According to the wiki table (https://en.wikipedia.org/wiki/USB4#Support_of_data_transfer_...), it's "optional":

* "USB4 20 Gbit/s Transport" (= USB4 20Gbps = USB4 Gen 2x2) is required for host to support

* "USB4 40 Gbit/s Transport" (= USB4 40Gbps = USB4 Gen 3x2) is not

Also USB4 apparently only requires support for tunneling "SuperSpeed USB 10Gbps" (USB 3.2 Gen 2×1), "SuperSpeed USB 20Gbps" (USB 3.2 Gen 2x2) is optional.


It would have a longer max length if the data lanes were optical.


You can actually get optical usb3 and thunderbolt (all generations) cables. Thunderbolt was originally called light peak and shown off by Intel and Apple in demos as optical, and Sony had a line of laptops with optical light peak connectors to connect to external GPUs. But ultimately the default became non-optical because it can carry power too.


> But ultimately the default became non-optical because it can carry power too.

There's no problem in making a cable with two fibre leads for data and two (at higher lengths thicker to reduce issues with voltage drop) power lines.


That’s a good point although optical usb3 and thunderbolt3 cables tend to be advertised with electrical isolation as a feature and suggest using an external hub to provide power.[1]

[1] https://www.corning.com/optical-cables-by-corning/worldwide/...


Whoa, that pricing is quite a bit.


Yeah that must be the real reason it didn’t catch on as a default. Light peak did have cheaper cables though, the optical part was in the connector, not the cable. The connector was a usb-a connector if I remember correctly.. and the history is actually pretty interesting! Apparently that connector was deemed proprietary and frowned upon by the usb folks for causing consumer confusion[1]. Kind of hilarious now, seeing how thunderbolt 1/2 ended up barely being adopted outside of Apple and usb itself is a confusing mess these days.

[1] https://www.theverge.com/2011/10/14/2490694/how-sony-acciden...


> Kind of hilarious now, seeing how thunderbolt 1/2 ended up barely being adopted outside of Apple

Which is to a large degree the fault of Intel restricting the TB spec to hell and back.


All it's missing is the pinouts and the charger resistor divider setup definitions.


Thanks. Nit-picking here but ground is usually abbreviated GND, not GRD.


I have a related question:

Is the official name

USB 3.2 Gen 2x2

or

USB 3.2 Gen 2×2

(x vs ×)?


The USB pages and specs (on usb.org) seem to use "x" where they use the technical spelling.


Which USB do I choose for a 4x4 offroad adventure?


If you don’t mind sharing: what was the “bug”?


Is that year 2025...


Nope, USB4 has been defined for a while now.


Just curious: can USB 3 work without D+ and D-?


USB 3 and previous standards are completely separate connections and software stacks - the USB 1/2 D+/D- pair does not interact at all with SSRX/SSTX. You should be able to literally cut the D+/D- wires in an USB 3 cable and it should still work as a USB 3 cable.


I doubt any normal hosts will enumerate a USB device without the USB 2 data lines. PD will definitely not work. You might be able to get an alt mode running.


They probably would enumerate just fine, because the SuperSpeed enumeration is completely independent from the USB 2.x enumeration. USB-PD uses a separate pin/wire, not the USB 2.x D-/D+ pair, and you cannot get an alt mode running (except the special analog audio and debug accessory modes) if USB-PD doesn't work, since alt mode enumeration goes on top of USB-PD.


I'd actually be interested in a 3-only cable. Force it to use high speed or bust, no surprises, no guessing at which proto runs under the hood on that "universal" connector. Didn't know it was possible; I might make one once I've got a spare cable (just today I've got another spare, but they always have just 4 wires).

More for geeky/testing purposes than to replace all my cables, but still



