> After being informed of IBM's hundreds of millions in yearly patent revenue, CEO Steve Jobs authorized a change in FireWire's licensing policy. Apple would now charge a fee of $1 per port. (So if a device has two ports, that's $2 per unit.)...Intel sent its CTO to talk to Jobs about the change, but the meeting went badly. Intel decided to withdraw its support for FireWire—to pull the plug on efforts to build FireWire into its chipsets—and instead throw its weight behind USB 2.0, which would have a maximum speed of 480 megabits a second (more like 280, or 30 to 40 MB/s, in practice)...A month later, Apple lowered the fee to 25 cents per (end-user) system, with that money distributed between all patent holders. But it was too late. Intel wasn't coming back to the table. This was the death blow for FireWire in most of the PC market.
For all of you who embrace the "fail fast" mindset, keep this story close as a reminder that some mistakes are irrevocable. This was one decision, reversed after 30 days.
It's also a good reminder that Jobs, for all the posthumous praise he gets, made some rather stupid business blunders during his tenures at Apple.
Apple can be aggressive about its tech today (such as Lightning and Thunderbolt; anyone know what they charge for those?), but back then they were much more of a niche player.
Thunderbolt is Intel AFAIK. (Correction: apparently Intel made some claims but Apple owns it.)
My guess is Intel probably would have backed USB over Firewire regardless since it had a stake in USB.
The irony is, USB would probably have failed without Apple.
The fact is that Intel has repeatedly chosen its own technology to promote, and until recently it had the market power to force everyone else to follow suit.
> (Correction: apparently Intel made some claims but Apple owns it.)
This doesn't seem to be correct. While I can't find anything with a simple search, the thunderbolt technology website is copyrighted by Intel, and Wikipedia directly attributes the development to Intel.
IIRC Apple was mostly an early adopter of Thunderbolt.
No one would say 'no' to that, but at the same time people would praise Jobs anyway. "See, he made FireWire, which made Intel make USB 2.0. He knew exactly what he was doing." etc, etc.
With legal teams on hand and millions of dollars at stake, I'd imagine it as dry and unemotional for the most part.
A read-through on some paperwork, a couple of curt refusals, double check to confirm and get clarification, maybe a phone call or two, and call it a day.
I've heard it argued that burning Intel was intentional strategy as Jobs wanted Firewire to be associated with the Mac. People bought Macs simply because they couldn't get their DV cameras working with a third-party Windows-based setup.
What is your interpretation of the "fail fast" mindset? This sounds like the perfect example of a big project failing catastrophically that "fail fast" intends to avoid.
On the contrary, OP is saying that Apple made the decision to bump the license fee up to a dollar, very quickly saw that it was too high and lowered it to 25 cents, but it didn't matter. That one simple failed decision that Apple quickly reversed still ended up killing FireWire.
Would that actually have been possible, though? I'd expect that these royalties were defined by massive multi-year contracts. If that's the case, they couldn't realistically do something like gradually increase the price until they found the equilibrium, at least unless "over time" here is on the scale of a decade. I may be off base here, though; I'd be interested to hear from anyone who knows more.
I'm not sure I agree - at least they didn't fight each other for a few years and eventually both get passed up, like the 56k modem chip fiasco. Apple moved forward and still did OK. They might have "lost", but they also avoided a protracted and potentially self-defeating fight.
Apple might have done OK, but they killed a technical specification that was already present in thousands (millions) of users' devices, and that hundreds of other companies had invested in as the best way forward (technically it remains superior to USB).
One major thing not mentioned is that apparently not everyone bothered implementing FireWire the same way. I never learned the technical reason, but if you bought certain digital audio recording interfaces circa 2007, you learned in the fine print (or after you called customer service) that they only worked with TI FireWire ports. Those were the ones used on Macs at that point; I had a Dell laptop which used a cheaper manufacturer's chip, so (perhaps combined with the fact that my recording interface itself was not a high-end one and maybe cut some corners on its end too) the audio recording quality was staticky and uneven. I ended up having to return it and get a lower-bandwidth, noticeably higher-latency USB-powered one, because at least USB ports were consistent.
I was always curious if some manufacturers were not implementing the full spec here (maybe because by this time according to the article FireWire was already on its last legs), or if this was due to flaws in the spec that left certain things optional or something and that ruined interoperability.
It shouldn't really surprise anyone that a cheaper VIA host interface chip would have worse performance or compatibility than a more expensive TI or Agere/LSI chip. The VIA chips were sufficient for connecting an external hard drive, and that made them sufficient for checking a box on the laptop's spec sheet.
You didn't run into trouble with USB because by that time all your USB ports were coming straight from the Intel southbridge. Even if Intel's host controller implementation had issues, it would have been the device maker's problem to work around them since Intel's market share was so large.
No modern peripheral interconnect is simple enough for host interfaces to be judged on a mere pass/fail basis. The cheap chips almost always find a way to suck, whether it's obscure like FireWire or ubiquitous like gigabit Ethernet.
I don't miss those days at all. I chose to build a fairly expensive setup ~10 years ago that is now completely obsolete. I'd have to build a 32-bit Windows XP machine to get it to even work. It got replaced by maybe $400 in USB devices and they'll power up on any computer I plug it into.
Yeah - the future's pretty awesome. I can run a three-camera live studio with six microphones off a laptop these days (for Internet streaming; higher quality obviously requires a little more work).
My first exposure to FireWire (like many folks, I assume) was the original 5GB iPod I bought from a friend. I thought it was SO cool that you just used this one cable to connect to your computer (in my case, my first Mac, a 600MHz G3 iBook), as well as to the power adapter. The charger was just a brick with a FireWire port and flip-out prongs that could be removed from the brick, just like today's Mac laptop chargers. You could interchange the prongs with foreign prongs (I still have the travel kit somewhere; of course the foreign ones don't have the same flip-out functionality). You could even put your "long" laptop charger cable on the FireWire iPod power brick! This was also true of the AirPort Express. All interchangeable parts.
I'm not sure if Apple was the first company to do this; today everyone has wall chargers that output a single USB port, but I don't remember it existing before then. Hell, I'm not sure if there were any devices TO charge over USB when the iPod first came out -- those little SanDisk flash MP3 players, and I'm sure the bigger Archos hard-disk based MP3 players must have had their own chargers, there's no way they charged over USB.
I'm so happy that the world has gone in this direction, where I can use your Samsung USB charger to charge my Sony phone, or whatever. And Apple has still stuck with the same charging brick style where you can flip out the prongs or use a longer cable (not sure how it is now with USB-C, though).
But that firewire power brick still makes me nostalgic, a simple design that felt so elegant 15 years ago, and still does to this day.
Apple's 12 watt USB power adapter still has the same design. Swappable plugs compatible with all of the laptop power brick international plugs. It comes with the iPad Pro these days (and probably others, too).
I loved FireWire. I had an external HD and DVD-RW, I could daisy chain them and connect them to my Dell, Sony, and Apple machines. Faster than USB, only needed one port... everyone complained FireWire was more expensive but the devices performed so much better.
> Speeds across networks of all sizes are now so high that there's also little need for something like FireWire. "The packets can arrive way before it's needed, because it's so fast," Sirkin noted. "So you don't need to worry about being synchronous any more."
For use cases where reliable low-latency transport is required (i.e., Firewire's main strength), what could possibly be meant by "packets arrive way before it's needed"?
At least on Linux, you can easily be barraged by tons of IRQs from your FireWire device. Every time some data comes in from your device (let's say some music production platform) it's going to DMA that data right on over via a PCI lane or 4 then fire off an interrupt to inform your OS "hey hey got some new information for ya!".
Now imagine the platform was poorly designed so it fires off that IRQ once for each track. You're re-tracking the drums since the drummer you're recording couldn't work with a click track. Let's say, conservatively, you've got 2 overhead condensers, an ambient mic, and an SM57 on the kick. He's playing against the guitar and bass tracks along with some scratch vox. Depending on how you're patched and how tracks are configured, you could easily have 12 tracks each firing off a "hey, I've got data for you! CONSUME IT!"
Each one of those interrupts is expensive, mind you. I mean, not so much now, when we can shield processes on 16-core HT Xeons and basically dedicate a whole core solely to dealing with the interrupts. But imagine the early 2000s, when you had single-core P3s running at 600 MHz. Each IRQ forces a context switch, which means whatever process was actively scheduled now gets bumped. The first IRQ is serviced and that process gets back to work, and maybe it doesn't even have time to restore its execution context before ANOTHER dang IRQ comes in. Like I say, not really a problem these days, but not so long ago....
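If you're curious how busy the controller actually is on a given setup, here's a minimal sketch for watching the interrupt rate on Linux. It assumes the stock firewire_ohci driver shows up in /proc/interrupts under a name containing "firewire" (usually the case, but the match string is an assumption you may need to adjust); it's illustrative, not a proper diagnostic tool.

```python
#!/usr/bin/env python3
"""Sample /proc/interrupts twice and report the FireWire IRQ rate.

Assumes Linux with the in-kernel firewire_ohci driver; the line-matching
below is a heuristic and may need tweaking on other systems.
"""
import time

MATCH = "firewire"   # substring to look for in /proc/interrupts
INTERVAL = 1.0       # sampling window, in seconds


def firewire_irq_count():
    """Sum the per-CPU interrupt counts on any line mentioning MATCH."""
    total = 0
    with open("/proc/interrupts") as f:
        for line in f:
            if MATCH not in line:
                continue
            fields = line.split()
            # fields[0] is the IRQ number ("42:"); the digits that follow
            # are per-CPU counts, ending where the chip/device names start.
            for field in fields[1:]:
                if field.isdigit():
                    total += int(field)
                else:
                    break
    return total


if __name__ == "__main__":
    before = firewire_irq_count()
    time.sleep(INTERVAL)
    after = firewire_irq_count()
    print(f"~{(after - before) / INTERVAL:.0f} FireWire interrupts/second")
```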
I think it's being compared to something like 12 Mbps USB 1.1: Firewire is just so much faster that it doesn't matter.
I deal with upgrading a lot of industrial electronics, and have to answer this question frequently: people are concerned about replacing Modbus RTU and similar setups with modern protocols: "But it's not real time!", "It's too high-level!" "Consumer or office network gear can't possibly work here!"
No, it can. It's freaking gigabit Ethernet. You had 9600 baud before and were happy to read out a few tags a second; now we can stream multiple sensors at multiple kilohertz each, or transfer the entire image of your old PLC in a couple of packets.
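To put rough numbers on that, here's a back-of-envelope sketch. The frame sizes and overhead factors are illustrative assumptions, not measurements:

```python
#!/usr/bin/env python3
"""Back-of-envelope: 9600-baud Modbus RTU vs. gigabit Ethernet.

All numbers are illustrative; real Modbus throughput is lower still once
you add inter-frame gaps, turnaround time, and multi-drop polling.
"""

# Modbus RTU over a 9600 baud serial line: ~10 bits on the wire per byte (8N1).
serial_bytes_per_s = 9600 / 10                    # ~960 bytes/s
rtu_transaction_bytes = 8 + 25                    # request + response for ~10 registers
rtu_reads_per_s = serial_bytes_per_s / rtu_transaction_bytes
print(f"Modbus RTU: at best ~{rtu_reads_per_s:.0f} small reads per second")

# Gigabit Ethernet, assuming only ~70% of the raw rate ends up as usable payload.
gige_payload_bytes_per_s = 1_000_000_000 / 8 * 0.7
sample_rate_hz = 10_000                           # one sensor sampled at 10 kHz
sample_bytes = 4                                  # a 32-bit value per sample
sensors = gige_payload_bytes_per_s / (sample_rate_hz * sample_bytes)
print(f"Gigabit Ethernet: room for ~{sensors:,.0f} such sensors streaming at 10 kHz")
```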
There was a brief time when FireWire was way faster than everything else, but it didn't keep iterating like USB did. Honestly, that's probably a more accurate reason why it died. Plus the decision to use separate connectors for 400 and 800: if they had done what USB did and allowed FireWire mice at 50 Mbps to connect to the same port as 800 Mbps camcorders, and built FireWire 1600 and 3200, it might still be around.
I'm not sure I understand. Are you replacing hardware for clients who think they need low-latency realtime delivery of data, or for clients who actually need to hit low-latency delivery deadlines?
From the article:
> And FireWire had its own micro-controller, so it was unaffected by fluctuations in CPU load.
This is still an important distinction for, say, an USB soundcard vs. a Firewire soundcard.
They need to get data in like... 3 seconds... So the operators can shut the machine down if something goes wrong.
But now the machine has much more intelligence, so it shuts itself off quicker. And where they needed to check the "low latency" and "realtime" boxes in the 90s to get adequate performance, now any bus (like Ethernet, or you can stick Ethercat or Powerlink on top if you need realtime) is so much faster that it doesn't matter.
It's like someone saying they need their Enterprise-grade 15k RPM SCSI drives in their server...when they just need an ordinary consumer SATAIII SSD.
Economies of scale are huge in tech. Stuff that reaches billions of people is often more performant than specialized gear.
This reminds me of cable internet vs DSL. I was such a DSL partisan. Cable's bullshit, you're sharing speeds with your whole neighborhood, no guaranteed bandwidth, yadda yadda yadda.
But at the end of the day, cable internet ended up being so damn fast it didn't matter. DSL providers couldn't bump the speed fast enough, telcos have to run FTTH to compete with cable, the infrastructure takes too long to arrive, and even with guaranteed bandwidth to the CO, your uplink will be oversubscribed anyway.
I type this on gigabit Xfinity DOCSIS 3.1 that speed-tests at 250 Mbps.....
> They need to get data in like... 3 seconds... So the operators can shut the machine down if something goes wrong.
That sounds like a not-necessarily-low-latency-yet-still-realtime problem for which the original low latency solution was replaced with a non-low-latency solution that still hit the realtime scheduling deadlines.
It seems that as more and more big players put money into such solutions, the bona fide low-latency realtime problem space shrinks to a smaller mindshare. I'm not sure whether that mindshare is fittingly small or too small, but the shift is palpable and does have drawbacks.
So if you know about the problem of receiving and sending audio fast enough that the human on the other end doesn't hear the result as an echo, you read between the lines of this article and go, "oh, that's what the microcontroller and isochrony are doing in there." If you don't, however, there's nothing explicitly stated in the article that informs the reader that this problem space still exists and can't be solved by throwing AI/4G/The Cloud at it.
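For anyone outside that world, the budget is easy to sketch. The numbers below are illustrative assumptions (48 kHz, a crude "two buffer periods each way" model, and a roughly 10 ms threshold before delay becomes distracting to a performer), not measurements of any particular interface:

```python
#!/usr/bin/env python3
"""Rough monitoring-latency arithmetic: buffer size vs. audible delay.

The 4-period round-trip model and the ~10 ms threshold are ballpark
assumptions for illustration only.
"""

SAMPLE_RATE = 48_000   # Hz
THRESHOLD_MS = 10      # rough point where a performer starts noticing the delay

for buffer_frames in (64, 128, 256, 512):
    period_ms = buffer_frames / SAMPLE_RATE * 1000
    round_trip_ms = 4 * period_ms     # ~2 periods capture + ~2 periods playback
    verdict = "fine" if round_trip_ms <= THRESHOLD_MS else "starts to feel like an echo"
    print(f"{buffer_frames:4d} frames -> ~{round_trip_ms:5.1f} ms round trip ({verdict})")
```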
TSN (Time-Sensitive Networking) is a real-time standard for Ethernet that lets you reserve slots for time-sensitive information.
It's primarily being driven by the automotive guys who want to be able to wire up 4 displays, 12 speakers, and a video player for an in-car entertainment system.
Before TSN, it was AVB (audio-video bridging) and, while people networking things like gigantic stadium shows loved it, it didn't have enough volume to be popular.
With automotive behind it, it's going to get cheap and ubiquitous really quickly.
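Worth noting that the raw bandwidth for that in-car scenario is tiny; what TSN/AVB buys you is the reservation and the bounded latency, not the throughput. A rough tally with illustrative numbers (24-bit/48 kHz audio, ~10 Mbps compressed video per display, both assumptions):

```python
#!/usr/bin/env python3
"""Illustrative bandwidth tally for the 12-speaker / 4-display car example.

The codec rates are assumptions; the point is that a gigabit link has
bandwidth to spare, and what TSN adds is guaranteed, bounded delivery.
"""

audio_channels = 12
audio_bps = audio_channels * 48_000 * 24          # uncompressed 48 kHz / 24-bit PCM
video_streams = 4
video_bps = video_streams * 10_000_000            # assume ~10 Mbps compressed per display

total_mbps = (audio_bps + video_bps) / 1e6
print(f"Audio ~{audio_bps / 1e6:.1f} Mbps + video ~{video_bps / 1e6:.0f} Mbps "
      f"= ~{total_mbps:.0f} Mbps on a 1000 Mbps link")
```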
Amusingly enough, the audio data packing in AVB/TSN comes from FireWire via IEC 61883-6. AVB seems to be on its way to irrelevance in the pro audio/video world in favor of AES67/SMPTE 2110, which are Layer 3. Requiring switch support didn't help its adoption as it created the classic chicken/egg vendor adoption problem.
That sounds a lot like Ethercat, honestly. And I am more in automotive industrial stuff than the protocols on the car, which - while technically standardized - are usually a bit vendor-specific.
I had a nice chat with a customer rep from a big microcontroller company at a trade show where we talked about how in a couple years the 3 rows of "vendor-specific" industrial networking companies are going to be GONE.
And they don't even realize they're about to get run over.
It really is a shame that FireWire didn't work out. Aside from the far better transfer speeds, when used for external storage FireWire has always been more solid. USB storage has a nasty habit of periodically flickering out for a split second and has generally been more flaky, something that's actually gotten worse with USB 3 — if you haven't run into this set of issues yourself, do a quick Google and you'll find mountains of posts from people having major issues with USB 3 where 2 works great.
Naturally this extra speed and solidness made for a better experience when booting from external media. One of my favorite features of my 4th gen 20GB iPod was the ability to keep an OS X partition on it that I could boot from via FireWire. It worked great — well enough to get work done with, even on low-power machines like 400MHz G3 iMacs — and it saved me several times.
Also, I always loved that little click when plugging in FW400 connectors (can't remember if 800 had this feature). That little bit of tactile confirmation that you've plugged your device in properly is something I wish USB had.
I've used FireWire to connect external disks (verdict: as stable as eSATA, much better than USB2) and to pull video from a cable set-top-box (verdict: the cable box was crappy; no problems directly attributable to the FireWire connection.)
Offering DMA to devices can be good actually if you care about performance, and the security issues can be eliminated with a properly set up IOMMU in the mix.
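If you want to check whether that containment is even in play on a Linux machine, one quick sketch is to walk the standard /sys/kernel/iommu_groups tree (this requires the IOMMU to be enabled in firmware and by the kernel; nothing here is specific to FireWire or Thunderbolt):

```python
#!/usr/bin/env python3
"""List IOMMU groups and their devices from sysfs (Linux only).

An empty or missing tree means device DMA isn't being contained by an
IOMMU at all.
"""
import os

GROUPS = "/sys/kernel/iommu_groups"

if not os.path.isdir(GROUPS):
    print("No IOMMU groups found (IOMMU disabled or unsupported).")
else:
    for group in sorted(os.listdir(GROUPS), key=int):
        devices = os.listdir(os.path.join(GROUPS, group, "devices"))
        print(f"group {group}: {' '.join(sorted(devices))}")
```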
Thunderbolt has DMA and it's offered over USB-C so... kind of by proxy. It's not clear to me whether a USB-C device that doesn't support Thunderbolt (i.e., one that only does the USB 3.1 part) is also clear of this potential issue.
I think people with actual need for eSATA have moved on to PCIe SSDs. USB3 killed everything else at the low end; I remember buying an external HDD with FW800 in the olden days of 2011, then getting a USB3 drive in 2013 which is still going strong and fast (enough).
I loved FireWire. It was faster in the real world than USB 2.0 (this was true of FireWire 400, and even more so 800), and my computer felt so responsive while transferring files.
Ethernet over FireWire in the days of Windows XP was great also!!
Since we're sharing our nostalgic Firewire stories... the first video camera I ever bought (almost 20 years ago) could only connect to computers via Firewire. I remember almost all digital cameras on the market only supported it. It seemed like a clear winner.
It's interesting to see an article that explains why it died.
Yeah, I remember this time being strange. I had bought a good bit of FireWire equipment for my cameras and even got a PCI-E card for my PC. It just seemed to vanish without a trace. I still have those cables and accessories somewhere.
I got one of those Lacie big disks around 2006. I loved using the Firewire 400 port rather than USB. It was at least 20% faster. It's unfortunate that I never got to use the Firewire 800 ports on it.
I call this the "invented here" syndrome: When company management becomes so dysfunctional they don't trust anything invented at their own company unless it's been externally validated.
It's interesting that Apple, the company famous for dropping the floppy disk drive, CD drive, ethernet port, hell, even the USB port, was afraid to ADOPT something new.
This is an amazing story about hardware innovation, politics, and some kind of bullying. I am amazed that, despite it all, these corporations kept innovation in focus, even at the cost of decreasing profit margins.
USB-C is just a connector. Some of the devices using that connector only support USB 2.0 signaling over that connector, so USB-C does not imply complete superiority over FireWire.
FireWire? That would have been a good addition to the article. Hopefully that will allow older FireWire devices to be used with a USB-C-to-Firewire adapter.
A cursory Google search did not reveal any USB-C to FireWire adapters; there are Thunderbolt 3 to Thunderbolt 2 adapters (Thunderbolt 3 is basically the USB-C version of Thunderbolt, while Thunderbolt 2 uses the Mini DisplayPort connector), and there are Thunderbolt 2 to FireWire adapters, so that combo may work.
Using a TB3 -> TB2 adapter and a TB2 -> FW800 adapter definitely works on Apple hardware with every FireWire device I've tested (hard drives, cameras, and audio gear)
I've used it for hard drives and video cameras (it was the standard way to get video off a MiniDV tape camera). I always found it worked well.
But on the video side I remember the different names being used being confusing. And while all Macs had it, some PCs didn't, with USB being ubiquitous and USB 2 being good enough when it finally showed up.
Somewhat oddly, Apple was a big pusher of USB, as that's what the original iMac used.
It will be around for a while, as noted in the comments on the article: IEEE 1394B was used in the F-22 and F-35, so unless those are retrofitted the standard will be propped up in some form for decades.
At the time it was a good idea. It avoided bottlenecking everything through the CPU in an era when CPU power was extremely scarce: A 200MHz G3 chip doesn't have a lot of surplus horsepower.
Let's also hear your whines about how the C64 didn't have protected memory and the Apple ][ was so easy to hack because you could open the lid and poke around with a logic probe.