Apple orders entire supply of TSMC's 3nm chips for iPhone 15 Pro and M3 Macs (macrumors.com)
422 points by retskrad on Feb 22, 2023 | 456 comments



>Apple’s iPod, by comparison, was made far more pocketable thanks to the inclusion of a 1.8-inch Toshiba hard drive. The story behind how Apple came to acquire these hard drives for the original iPod is interesting.

>Jobs had expressed interest in building a portable music player in late 2000 and had tasked Jon Rubinstein with making it happen. The problem, however, was that Jobs wanted a player that was small and sleek like the flash-based music players of the time but that held as many songs as the hard drive-based players at the time (the flash ones would only hold between 10 and 20 songs depending upon how you compressed the music).

>Rubinstein told Jobs that the components necessary to make a small music player with a lot of storage didn’t exist yet. This was in late 2000, according to the story, which is well-documented in Walter Isaacson’s book.

>However in February of 2001, Rubinstein was in Japan meeting with Toshiba when the following scene played out, according to Isaacson (page 384 of "Steve Jobs"):

>At the end of a routine meeting with Toshiba, the engineers mentioned a new product they had in the lab that would be ready by that June. It was a tiny, 1.8-inch drive (the size of a silver dollar) that would hold five gigabytes of storage (about a thousand songs), and they were not sure what to do with it. When the Toshiba engineers showed it to Rubinstein, he knew immediately what it could be used for. A thousand songs in his pocket! Perfect. But he kept a poker face. Jobs was also in Japan, giving the keynote speech at the Tokyo Macworld conference. They met that night at the Hotel Okura, where Jobs was staying. “I know how to do it now,” Rubinstein told him. “All I need is a $10 million check.” Jobs immediately authorized it. So Rubinstein started negotiating with Toshiba to have exclusive rights to every one of the disks it could make, and he began to look around for someone who could lead the development team.

>"Exclusive rights to every one of the disks it could make" is the key takeaway here. Fast forward to later in the year – October 23, 2001, to be exact – and Apple would roll out a $400 pocketable MP3 player that could hold a thousand songs thanks to a hard drive no other company could use for competing products. The scroll wheel was pretty cool, too.

Source: https://techland.time.com/2013/10/23/watch-steve-jobs-unveil...


Interesting article, thanks for that. One of my key takeaways is the iPod represented not one but multiple simultaneous tech revolutions. Consumer storage until then unseen, paired with material and interface design that companies hadn't considered a priority.


I forget the exact timeline, but I think part of it too was music pricing. Physical media (tapes, records) had gotten very expensive, and almost everyone was pirating MP3s from Napster etc. Apple's iTunes prices were low enough that it became simpler just to buy stuff legally than pirate it.

I can't recall if iTunes was introduced at the exact same time as the iPod or not. Google search suggests it may have been 1 or 2 years later, but if so then where were people originally getting music to put on iPods?


> Apple's iTunes prices were low enough that it became simpler just to buy stuff legally than pirate it.

I think this is why Spotify/Netflix are so successful. Streaming a good amount of content for a reasonable price is more convenient and wastes less time (with less stress and fewer moral qualms) compared to pirating.

I used to pirate music/videos all the time and the moment those services were available I stopped because it was just easier to use them. Good UX can trump many other factors.


100%. I stopped pirating altogether for a few years because everything I needed was conveniently available, at a low price. I've told multiple people that if anime had a netflix equivalent with nice UX, I would happily pay for it.

Sadly, now I feel like the UX and recommendation systems of netflix/disney/hulu/etc. are genuinely worse than just pirating. I still pay for spotify though; it's feature-rich, fairly cheap, and delivers all the songs in the world into my pocket.


Yeah - the streaming fragmentation really does feel like it's bringing things back full circle. I think I'm in the minority in binging a show only once it's completed, and only ever subscribing to a service a month at a time.

Although I do keep Disney+ for my child, and Peacock because I still watch The Office a lot.


>delivers all the songs in the world

If only it were so.


Anything you have that's missing on Spotify? For me, one of the killer features of Spotify is the ability to view and integrate songs on local/NAS storage into playlists; the few foreign tracks I had ripped worked just fine.


At the least, there are several songs I had saved that have now been pulled from Spotify. Mostly from smallish artists (<100k monthly listeners), but still...


Crunchyroll?


I tried it for a while; it didn't seem to have all the shows I wanted, and the UX was worse than quite a few of the anime piracy sites.

Of course, anime is a bit of an oddball when it comes to piracy sites: the lack of legal enforcement, combined with the fact that (from what I've noticed) a disproportionate part of the audience are developers, means there are some really good sites out there.


One of the original anime pirate sites.


The quote on this at the time was:

Piracy is free. How do we beat 'Free'?

'Easy'.


How much is your time worth?

Low cost but easy may still come out cheaper than "free" if you place sufficient value on your time.


I remember wanting to watch the HBO "Rome" TV series when I didn't have HBO, so therefore downloading it from usenet in some weird text format - had to combine multiple large downloaded chunks using some tool chain I forget (I think rar was involved). Now, that was painful. Nowadays I'll happily throw a few bucks to whatever streaming service to watch what I want on my Roku TV.


The iTunes being discussed here was not a streaming platform. It was a library management platform (2001) that also offered purchase and download of FairPlay DRM-laden music files (2003). Later it would offer DRM-free MP3 download for select music (2007) and eventually its entire catalog (2009).


Why be so quick to jump to "I can tell someone why they're wrong"? Taking the time to read their comment a second time might lead to the realisation that they clearly weren't commenting based on a perceived connection of 'streaming platforms'; the connection was "business models which succeeded on account of offering a legal & convenient service cheap enough for people to choose it over their previous piracy".

Maybe I've just done the exact same thing; perhaps I was wrong to read an attempted correction into your comment and you just thought you were adding some interesting additional information to the conversation. In which case, my bad, and sorry. But I think it unlikely you'd share that as a random "fun fact" rather than as a reply to a misunderstanding of the previous comment's point.


> The iTunes being discussed here was not a streaming platform.

I never said it was. The point was not to compare iTunes to streaming, but that the UX enhancement iTunes provided mapped out a path that was even simpler than pirating, per u/HarHarVeryFunny, which in their mind made it worth switching away from piracy at the time.

Streaming made this even easier in my opinion, and was the catalyst that prompted me to stop. Granted - when I was a teenager, I simply didn't have any money even for iTunes purchases - but when Spotify came around, I had just gotten my first job, and made the switch then.


> but if so then where were people originally getting music to put on iPods

The first iPod was a Mac-only Firewire device. "You need a Mac with Firewire support to even use this thing" set a minimum platform spec that also made other things implicit — in this case, the availability of a CD-ROM drive. (Which wasn't a given in 2001, but was a given on any Mac that had Firewire support.)

iTunes the software was introduced alongside the iPod; but the iTunes Music Store was not. iTunes was, at first, a CD ripping program (and library manager for music ripped from CD.) To buy music for your iPod, you'd go to the record store, buy a CD, put it in your Mac, rip it, and then load it onto the iPod.

And while, yes, music piracy slightly predated the iTunes Music Store, there were a lot of people buying iPods in that era who had fancy Mac computers, but just plain didn't have internet service to their homes, and so for whom music piracy wasn't really a possibility†. 2001 was an interesting time; Internet access was still very unevenly distributed, and in ways that had nothing to do with economic conditions. Rural areas might have phone lines so bad that even a 28.8kbps modem wouldn't work over them. The rollout of ISDN was only just getting started, and was limited to regions with less red tape (much like fiber rollouts in the last decade.)

† Well, okay, a different form of music piracy was probably happening in these Internet-starved areas: if you were willing to shell out for a CD burner for your Mac (not an internal option, rather an external SCSI-over-Firewire one), then you could also use iTunes to burn those ripped songs back onto a mix-tape CD — and then you could give them to your friends, who could in turn rip them to their PCs. Shame about the quality loss in the process (given iTunes' default ripping encoding of 128kbps AAC — though 256kbps AAC was possible even back then), but it'd be no worse than what dubbing between literal compact cassettes would get you, and certainly better than what you'd get by downloading the Napster-standard 96kbps MP3. (And CD-to-CD streaming bitwise dubbing was also possible back then, too, but not with iTunes, only expensive third-party software; I don't think anyone but professional bootleggers had any reason to try that for music. It was more a thing for ensuring you get clean working copies of PC "must have the CD-ROM in to play" games.)


> in this case, the availability of a CD-ROM drive. (Which wasn't a given in 2001, but was a given on any Mac that had Firewire support.)

It would have still been pretty odd to see a PC with no CD-ROM drive in 2001. It wasn't a given a machine would have a DVD-ROM drive in 2001, but CD-ROM was cheap by 2001 and rather ubiquitous.

Most home PCs in a box started shipping with CD-ROM in the mid 90s, along with Windows 95/98 etc, and by 2001 virtually all new software you bought at retail would require you to have one.

The Firewire port was largely because, IIRC, USB 2 didn't exist at the launch of the iPod - transferring 5GB via USB 1 was painfully slow. The trick the iPod could do at the time that no competitor could was syncing 1000 songs incredibly quickly.

> iTunes the software was introduced alongside the iPod

This isn't the case; iTunes was released before the iPod. Before the iPod, "Rip/Mix/Burn" was a big marketing strategy for the iMac. The iPod did follow quickly, though. Many of us had substantial libraries in iTunes before the iPod landed. Wikipedia suggests it was iTunes 2.0 that brought iPod support.


> by 2001 virtually all new software you bought at retail would require you to have one.

Yeah, but a piece of consumer electronics requiring someone to own a computer produced in the last five years (as a Firewire Mac would have been at the time) is a surprising choice, and especially so in that era. The majority of scanners and printers produced in 2001 were still parallel-port driven, because peripheral companies couldn't be sure that most people even had a computer with USB1.0 support yet. Windows XP was CD-only, but Windows 2000 still had a floppy-disk edition!

I wasn't really implying that Apple forced the spec of "a Mac with Firewire" in order to guarantee access to a CD-ROM drive; but it certainly did help, in the sense that if they had supported USB (or god forbid, parallel) from the start, and perhaps Windows from the start, then there would almost certainly have been people who bought an iPod for use on their Pentium 120 (like the one I was still personally using in 2001!), only to belatedly realize that they had no way of getting music onto their PC in order to then offload it onto the iPod.


It honestly wasn't during this period. In 2001, computers did not last as long as they do today, with performance often doubling every 12 to 18 months. Ask anyone who lived this period - by 2001, virtually everything had CD-ROM. If you didn't have CD-ROM in 2001, you literally couldn't buy or run most software, and again, CD-ROM had been standard in most pcs since around 96/97.

For what it's worth, Microsoft Office 97 was the last version to come with a floppy disk release alongside the CD-ROM (46 floppies, too...). That tells you a great deal about the state of CD-ROM in the late 90s. Office 2001: CD-ROM release only. There are many other examples you can point to. The PC game industry was ~100% CD-ROM or other optical media in 2001 too.


You're talking about the period starting in 2001. I'm talking about the period ending in 2001. 2001 was a transition point from slow progress to the sort of performance-doubling progression you're thinking of. PCs running Windows 3.1 stuck around, doing what they do, until ~2002. People kept making games for DOS until ~2002! Then that all stopped. But someone planning a product for release in 2001, would tend to have been planning in mind of the previous era, not that new era. Except Apple, of course — as they tend to have their finger on the pulse of those sorts of changes (thus removing the floppy drive in the iMac, etc.)

> If you didn't have CD-ROM in 2001, you literally couldn't buy or run most software, and again, CD-ROM had been standard in most pcs since around 96/97.

While yes, this is true, most people didn't buy most software. I think you may have lived in a bit of a tech bubble. At the time, "retail-boxed PC software" was a thing only PC enthusiasts bought!

In that era, outside of a few coastal US cities with great Internet connectivity, most home PCs were used permanently disconnected from the Internet, with only the software that came loaded by the OEM (which is why, to this day, OEMs still feel an obligation to do that.) This is also why

> Microsoft Office 97 was the last version to come with a floppy disk release alongside the CD-ROM (46 floppies, too...)

...because, if you look at PCs after 1996, they all had enough hard-drive space to ship with OEM-preinstalled copies of Microsoft Office. And they almost always did.

But most people of that era weren't buying new PCs. Most people of that era still had only the same PC they had bought possibly nearly a decade prior, to be the fancy electric typewriter for their kids' book reports. Because nothing yet demanded that they upgrade. Because they weren't keeping up with new software. It was a "cycle of stagnation", so to speak.

It was only once Internet connectivity started improving in rural areas, and the web started developing multimedia killer apps with resource requirements that made old PCs feel slow, that the rest of the world started catching up, looking around, and wondering whether they indeed needed a new PC. (And even then, there was a lot of pushback, from a crowd who had up until that point gotten by for years with the same first PC they ever bought; who were entirely unused to technology that changes out from under them. Especially when it was a teenager who wanted access to these killer apps — their parents often didn't get it. "Why do we need a new PC? The old one still works! This is like replacing a refrigerator that's still refrigerating!")

In short, this is why everything other than the iPod, in this era, was still using parallel. Consumer electronics people knew that for every one Silicon Valley technophile who has home broadband, there were 100 teen girls in Kansas with no Internet running Windows 98SE, who might nevertheless be swayed to buy an MP3 player — perhaps to then load it with music ripped from tapes using the PC's line-in jack with a program (distributed on an accompanying floppy!) that does some awful kind of low-bitrate but low-CPU-usage streaming compression.

It wasn't just that it took until 2003 for Apple to secure the IP rights necessary to launch the iTunes Music Store. It really just wouldn't have made sense in 2001: too few people were online buying music to justify the licensing fees. But the adoption curve was clear (and clearly exponential); so Apple started work early, and got the deals done by 2003, when it would start being a profitable venture. Other companies, meanwhile — ones with large IP-rights-holders as partners — had tried the same thing years earlier! But those companies couldn't turn a profit. (At least outside of the few very special "even grandmas here are ahead of the rest of the world" markets of the time, like Japan.)

> Ask anyone who lived this period

To be clear, I'm probably older than you. I'm describing what I saw around me, at my house and my friends' houses and my school and my parents' offices and so forth, as someone who lived in a somewhat-rural area [town of ~50k people] from ~1995 until ~2007.


*2001 was a transition point from slow progress to the sort of performance-doubling progression you're thinking of.*

You are way off here. Let's check a contemporary source to understand the public sentiment at the time:

"My new computer's got the clocks, it rocks But it was obsolete before I opened the box You say you've had your desktop for over a week? Throw that junk away, man, it's an antique Your laptop is a month old? Well that's great If you could use a nice, heavy paperweight"

Weird Al - 1999

You also are totally incorrect about internet connectivity adoption at that point.


> "My new computer's got the clocks, it rocks But it was obsolete before I opened the box You say you've had your desktop for over a week? Throw that junk away, man, it's an antique Your laptop is a month old? Well that's great If you could use a nice, heavy paperweight"

Yes, all that is true... for people who were frequently trying to do anything "interesting" with their computers. Most people didn't try to do anything interesting with their computers! Most people — your average plumber in a small town in Ohio, for example — bought a computer for "productivity" (i.e. running Word); never played a single game on it other than the ones that came with Windows; and forgot it existed basically 99% of the time. Eventually, maybe they got AOL because of the free trial CD/floppy, and checked their email once a month. (But not more often than that; they didn't want to tie up their phone line!)

For these people, buying a piece of consumer electronics like an MP3 player that forced them to use their computer to populate it (or, indeed, to do anything other than use Word) was a confusing incursion into what had previously been an analogue affair of tape dubbing, surprising them with the discovery that the machine they thought would be just fine had "gone bad" (i.e. gone obsolete) despite having been left mostly untouched.

> You also are totally incorrect about internet connectivity adoption at that point.

You are aware that in plenty of places that theoretically had the capacity for Internet connectivity, there were still many people who didn't bother to actually get Internet access, because they had no use for it, and that (unlike today) nothing out there demanded they get online to e.g. file their taxes, right?

Or that, even for many people "with Internet", they were still essentially using a captive portal experience (like AOL) at that point, with no web browser, FTP client, or anything else that would ever lead them to try to download a piece of software or anything else 1MB or larger, right?

Also, you know there are places outside the US, right? And that consumer electronics manufacturers care very much about selling into non-US markets? Look at global Internet penetration in 2001. Think about what you'd have to do to sell a consumer electronics device to someone in India or Indonesia in 2001.

My point wasn't that any of these factors pertained to the specific technosphere bubble that Apple released — and still releases — its products into. My point was, in fact, the opposite: that Apple is distinctive among consumer electronics manufacturers for a product marketing strategy involving addressing only that technosphere bubble. Everyone else used to target the lowest common denominator — that Ohioan plumber with the "antique paperweight", or even whatever someone might be using as a computer somewhere in Nigeria or the Philippines at that time.


This output is worse than chatGPT! So many words to express so few demonstrably incorrect points.


I was born in 1987, in Spain. Most DOS games (mainly shareware ones from magazines) were dead by ~1998-1999. Multimedia software exploded from 1996 on, even for multimedia CDs in Spanish, such as encyclopedias, cooking guides, video guides, tourist VR views, trivia games and so on. No one mentally sane would develop for DOS in 2001. Period. By 1999 everyone who was a PC user had a CD drive, at least to install Windows 98 and any modern game. That's it.

Floppies were only used to share small documents and maybe some split(1)/Hacha-split software carried out of cybercafes on stacks of floppies.

And to boot from floppy on unsupported PC's.

Bear in mind that most Pentium MMX based PC's had a CD drive. Even the ones still running in 2002 were CD based.


Internet access was arguably more equal between rural and urban areas in the late 90s and early 2000s. Few had broadband, and dial-up was mostly the same regardless of where you were living. Most PCs had a CD-ROM drive in 2000. You didn't need good internet to go to a store to pick up a copy of StarCraft, Diablo, Half-Life, Age of Empires, Quake III etc., but you did need a CD-ROM drive and a decent PC. I grew up in a rural area too, and most of my friends had games like StarCraft or Diablo as well.


The idea CD-ROM wasn't ubiquitous in the PC industry and consumer desktops at large in 2001 is just laughable, I'm amazed this is still being debated. Do some people just choose not to remember the explosion in the "multimedia" industry in the 90s?

If you really want to put this to bed, just look at any vintage PC magazine from 97 onwards archived on archive.org, and pay attention to the adverts, which are a literal snapshot of what people could buy at that time. Good luck finding a consumer desktop PC at any price from the Windows 98 era onwards without a CD-ROM.

In 1998, most PC magazines even had demo CD-ROMs stuck to the cover, such was the wide penetration of CD-ROM drives.

> https://archive.org/details/computermagazines


I got my first Mac in 1995 (Performa 5200CD). Obvious from its name, it had a built in CD-ROM drive.

I had the internet throughout the rest of the 90’s, dialing up via a series of external modems. 14.4, 28.8, 33.6, and eventually a 56k modem. I downloaded MP3s and played them with SoundJam MP (which Apple bought and reskinned as iTunes).

That computer was so slow (75MHz PowerPC 603) that it took nearly all of its resources to play an MP3. However, my next computer was a B&W Power Mac G3 (released in 1999) running at 400 MHz.

The difference was night and day, way more than the 5 times implied by the clock speed. Around that time I also got cable Internet (6Mbit) which was far faster than 56K dialup (yes, more than 100X faster).

By the time the iPod came out I had more music than could even fit on the 5GB hard drive. I waited till the 3rd gen and got the 40GB model.

I honestly don’t think my experience was atypical of computer enthusiasts at the time (the type of people who read Slashdot and later HN). It was only grandmas who bought a Pentium 120 in the mid 90’s and hung onto it for decades.


To be clear, my point was that consumer electronics — like the iPod — were and are typically targeted at "everyone", not "computer enthusiasts" (which would be far too few people to be worth the CapEx of hardware product development); and that, for every one computer enthusiast on Earth, there are 100 grandmas (and other similar demographics.)

(Professional electronics, on the other hand, could make all sorts of demands and assume a very specific niche audience, because they were incredibly high-margin. There used to be a big difference between professional vs consumer electronics in terms of not just what parts they used, but what connectivity standards they could assume.)

> It was only grandmas who bought a Pentium 120 in the mid 90’s and hung onto it for decades.

And, uh, you know... poor people. Blue-collar poor people, who don't care much about computing. But who would really like to listen to some music as they take the bus to work.

As I said in my original post, Apple was doing something very exceptional within the space of consumer electronics, by selling a consumer device (not a professional device!) that actually assumed a niche market (computer enthusiasts, who had thus purchased a recent Mac) rather than being "for everyone."


The original iPod was a Mac-only device. Anyone who bought it expecting it to work on their Windows computer, whether it was brand new or Pentium 120-vintage, would've quickly discovered the incompatibility and returned it to the store. Moreover, the initial price for a first gen iPod was $399 ($672 US in 2023 money). Not something I would expect to be affordable to a person struggling along with a 6-year-old computer (Pentium 120 came out in 1995).

For a device that only played music and only worked with FireWire-equipped Macs, the original iPod was not an "everyone" device, it was a luxury product for music enthusiasts.


No, you are wrong. Since Windows 98, having a machine with a CD drive was mandatory for it to count as a multimedia PC. Heck, I remember four-CD game releases such as Blade Runner.


Okay? You're talking about the requirements for new computers in the era, that retail software could assume. Meanwhile, I'm talking about the requirements for consumer electronics sold into the market in the era; where, to have the widest TAM, companies would usually design connectivity around the assumption that the person purchasing the device has a very old computer.


I was there, period. If by the year 2001 you didn't have a CD drive to play Unreal (1998), Deus Ex (2000) or Max Payne (2001), your PC was seriously outdated junk and no company would take it seriously even for office work.

In 1999, yes, maybe a 486 with Windows 95 could be barely usable, but most basic software required a Pentium with MMX because of multimedia and optimisations. That's it. We were already in the Pentium II era; not owning at least a K6 or Pentium MMX would leave you outclassed in no time for any serious task.

AMD and VIA CPUs were really cheap, and by 1999 having a machine with Pentium MMX-like specs and a CD-ROM was the norm because of that.

And OFC by 1999 DOS was pretty much dead and everyone had switched to at least Windows 95.

Yes, there were people and nerds like me in 2004 who were perfectly able to use FreeBSD/Linux on a 486 with 16MB of RAM to play downscaled MP3s, play nethack and surf the web with Lynx. But for sure we weren't the target of commercial software vendors.

Between 1997 and the year 2001, minimum computer specs skyrocketed for everyone.


>The first iPod was a Mac-only Firewire device

I don't remember ever using Firewire for anything.

Are you saying it was only Firewire?

Quite possible if I dug enough, I could find my iPod. It wasn't one of the very first ones, but it was monochrome and hard-drive based.


Yep IIRC it literally has a Firewire port on the top.


Looks like USB support for syncing was introduced in 2003, and for charging in 2004.

Mine was a 4th generation monochrome - in other words, 2004ish. Had to have been USB. I don't think there were any Firewire Macs before 1999, and I don't think I ever got a Mac after that point, or that ran OS X.


>music pricing.

For sure. My memory of those days is getting hazy, but IIRC CD's from top artists were going for $15-$16 in the late 90's. Maybe $12 if you were lucky. Considering inflation and that you might only want two of the twelve songs on the disk, you could easily be paying $10/song (in 2023 dollars). Big chunk of change especially for a pre-teen or teenager.


>IIRC CD's from top artists were going for $15-$16 in the late 90's.

That's because the labels were engaging in pricefixing and blacklisting retailers who did not go along with their scheme. People suspected it but nothing really happened until the FTC stepped in.

https://archive.is/3thhx

Even then the result was basically a slap on the wrist and most people had begun using P2P instead.


I miss what.cd. It was such a great way of discovering new music back before there were algos recommending stuff to you 24/7.


Like always, communities reform. There's a Swiss site now, which I'll have to leave "REDacted", that is just as good as what.cd, if not better.


I also miss what so much


I seem to recall typically being able to buy major CDs in the $14 range in ~1995, but the expensive stores (Tower Records) charged $18.99 and up which is very clear in my mind. That's more than $37 in today's money.


Can't imagine anyone putting anything other than pirated mp3s in 95%+ of mp3 players ever produced.


So would it be fair to say that Apple built the iPod to take advantage of music piracy?


The iPod came out in 2001. The iTunes Music Store came out in 2003.

And the iTunes app itself came out 9 months before the iPod.

Apple had a big marketing push “rip, mix, burn”. iTunes had excellent support for ripping CDs and burning them to CD. I used iTunes for creating CDs for years and didn’t get my first iPod until 2006.


Back in the early days of podcast support I remember downloading episodes over dialup (slower than real time) and burning them to CD occasionally for listening elsewhere


There was no iTunes music store at that time. Apple was advertising how fast iTunes could rip music from a CD and in my experience it was definitely optimised for that.


Yes.

They made it easy to bring your existing collection into digital form, and also to import a library from the friendly pirates.

Then they made it easy to buy new music, after people were in the ecosystem.


Like you said, Napster et al.

Apple pretended it came from CDs people had bought.


I surely remember much of my computer time being spent ripping my CDs, borrowing friends' CDs to rip, going to the library (an excellent source, really) to rip their CDs, etc.

Napster was all but toast once iTunes was on the scene, but Kazaa and Limewire weren't any better at getting, say, a whole album (those were still a thing) perfectly cataloged with metadata - and in high quality. iTunes made that very easy.

With the iPod, I would then just borrow people's iPods instead of CDs.

I didn't get into - and actually still don't subscribe to - a music service like Spotify. A few years after starting the above, I would still just buy CDs from touring bands (these were bands driving cross country in vans and playing condemned warehouses, so safe to say their stuff wasn't being traded as mp3s, and if it was, they would have LOVED it). I remember buying some music off of iTunes Music, but I tried not to, as sharing didn't work well with the system I had - or even across my own devices.

MySpace Music was also big in discovery. Their whole music catalog was lost, though, right? Someone lost the private key?


> MySpace Music was also big in discovery. Their whole music catalog was lost, though, right? Someone lost the private key?

I think archive.org found a bunch of it, but some of it was deleted.


I mean, people at the time did have large collections of CDs with tons of music. Piracy existed, sure, but it was something a relatively small number of tech-savvy people were doing. Hell only half of Americans were even using the internet at the time. And at the time, MP3s were super inconvenient - you needed a CD for your stereo or whatever you were listening to music on, and CD-R drives had only just started to become cheap (and by cheap, I still mean several hundred dollars, which was quite a bit at the time). I would be in no way surprised if greater than 90% of the music that was loaded onto the first cohort of ipods came directly from CDs.


Ripping CDs was common practice back then, in addition to all the piracy you mentioned. I'm pretty sure iTunes had a built-in feature for it.


> I can't recall if iTunes was introduced at the exact same time as the iPod or not

iTunes was released in January of 2001, 9 months before the release of the first generation iPod in October 2001.


Yeah, another example is the NVIDIA Tegra. NVIDIA tried to use it to make their own portable gaming device, but then Nintendo figured out how to use it to make the Switch.


Not to forget the Toshiba AC100 with the original Tegra, which was one of the first usable ARM laptops (netbooks?) and helped drive much of the early Linux desktop support development for ARM.


> One of my key takeaways is the iPod represented not one but multiple simultaneous tech revolutions. Consumer storage until then unseen

I disagree entirely, for reasons that should be obvious once we examine the actual state of the market at the time. Storage was not the thing that made iPod successful.

The microdrive had existed for several years by then[0][1], developed originally for digital cameras. Like most storage technologies, it was growing in capacity as time went on. I don't think "technology gets marginally better with time" counts as a "tech revolution", nor do I think the microdrive counted as a consumer storage technology that was previously unseen by the time the iPod launched. A physically larger 1.8-inch microdrive (with consequently greater capacity) also does not count as a revolution... it's like making a bigger monitor and being surprised that you have more screen real estate.

I also find the anecdote you replied to confusing because surely Apple would have known what a microdrive was before this meeting. That part of the anecdote makes no sense. IBM had their own 1GB microdrive by June of 2000, so even if Apple had exclusive rights to Toshiba's larger 1.8-inch microdrives, that seems entirely irrelevant, contrary to the "key takeaway" that the anecdote posited.

Maybe Apple would have slightly higher storage capacities, but... we're talking about a relatively marginal difference compared to the gulf in storage capacity between these microdrive MP3 players and the flash based MP3 players of the time. An advantage, but not a revolution.

As it turns out, Apple didn't even have a storage capacity advantage[2], in direct contradiction to the anecdote posted upthread.

[0]: https://en.wikipedia.org/wiki/Microdrive

[1]: https://www.youtube.com/watch?v=oAsBiplfM9A

[2]: https://news.ycombinator.com/item?id=34897209


From Wikipedia:

"A second generation of Microdrive was announced by IBM in June 2000 with increased capacities at 512MB and 1GB, with the 512 MB model costing $399 and the 1 GB model costing $499."

Yes, storage technology existed at the time, but it was an order of magnitude more expensive than what Apple needed to put in a consumer product.

It's unquestionable that iPod unleashed the explosive miniaturization of technology, and paved the road for everything that we saw in the following decade with smartphones.


Those prices were still from over a year before Apple announced the iPod, in a market where technology was rapidly advancing and getting cheaper, and those were the MSRP for single unit quantities, not the bulk order prices that would have been available at launch, let alone the bulk order prices that would have been available when Apple was planning to launch their iPod.

As others have pointed out here, there were other competitors offering larger-than-1-inch hard drives like the ones Apple used.

As I have pointed out elsewhere, a portable MP3 player with about 5GB of storage was actually introduced about 2 years before the iPod.

Based on the available evidence, storage was not the defining feature that made iPod successful, which is the claim that anecdote was trying to make. The iPod was successful for other reasons, regardless of how emotionally compelling that anecdote tries to be.


> It's unquestionable that iPod unleashed the explosive miniaturization of technology, and paved the road for everything that we saw in the following decade with smartphones.

PDAs were being developed at the same time as the iPod and probably have a much better parental claim than the iPod. The first smartphones -- the ones before the iPhone -- were basically PDAs with cellular modems, and utilized the same OSes as their non-modem-bearing predecessors.

https://en.wikipedia.org/wiki/Personal_digital_assistant

I had an HTC Wizard, specifically the Cingular 8100, which came out two years before the first iPhone.

https://en.wikipedia.org/wiki/HTC_Wizard


Agree there were many attempts before. Heck, even General Magic created an iPhone-like experience over a decade earlier!

But none of those early predecessors achieved anything even close to the scale of the iPod.

I'm not arguing that Apple is solely responsible for the miniaturization of technology, but selling dozens of millions (and eventually hundreds of millions) of their gizmo certainly helped fast-forward the industry. Things would have happened a lot slower if it weren't for the iPod.


I didn't know about the Microdrive, but Wikipedia says models larger than 1GB (up to 4GB) were first available in 2004, nearly 3 years after the iPod. At that time, for the $499 it cost, you could already get a 40GB iPod.


Yeah, Apple surely could've negotiated price discounts, and I'm not sure what the bill-of-materials cost of the Toshiba hard drives was, but the larger IBM Microdrives were released in late 2000 and were $499 for the 1GB version (https://www.zdnet.com/article/ibm-1-gb-microdrive/). Even if that's a 100% markup, you're talking about $250 for 1GB.

Hard to build a consumer device with an MSRP of $399 if one of the components is over 60% of that cost. Competing firms estimated the total BOM for the original iPod at more like $200.

They were also surely looking at the potential trajectory of storage improvements too -- by the time the 4gb microdrives had been released in 2004, Apple was already shipping 40gb iPods and selling them for $499.


I had added this paragraph before you posted but probably after you saw my comment:

>> Maybe Apple would have slightly higher storage capacities, but... we're talking about a relatively marginal difference compared to the gulf in storage capacity between these microdrive MP3 players and the flash based MP3 players of the time. An advantage, but not a revolution.

Regardless, it turns out that it wasn't even a capacity advantage, since the Personal Jukebox[0][1] was released in 1999 and offered 4.8GB of storage in a package that weighed 300g instead of the 185g of the 2001 iPod. A difference in weight, sure, but... modern smartphones are often between those two weights, so a few extra grams would have been fine and the Personal Jukebox would have flown off shelves, if storage capacity was the defining feature.

Just telling a compelling anecdote doesn't make it accurate. Apple had other advantages at the time, including the scroll wheel and iTunes, as well as brand recognition. Based on the evidence of what existed at the time, I do not think storage was the compelling thing that made iPod more of a success than the Personal Jukebox.

[0]: https://www.pcworld.com/article/520590/evolution_of_the_mp3_...

[1]: https://en.wikipedia.org/wiki/Personal_Jukebox


I think the whole anecdote sounds like a story that's been told a few too many times (including variations on Toshiba's helpless product department making stuff they didn't know how to sell), because finding "that one thing" that made a difference makes for a good story.

Rubinstein saw the drive and realized that the price level and size was perfect for their product idea. And he had buy-in from the higher ups so they could negotiate a certain volume and price that made the project feasible.

That doesn't mean another company couldn't have been successful with a smaller 1GB drive (seeing the success of the smaller iPods later on) or a physically bigger 5GB drive (people were still carrying around walkmans and CD players).

What Apple managed to pull off was making a device that was easy to get music into (with iTunes) and easy to use (with the clickwheel) and, yes, compact and with good design and marketing.


100% agree with your entire comment. That is exactly what I believe too, based on the evidence I've seen. That anecdote is entirely too clever to be accurate. It was probably built on a kernel of truth, but it has been embellished to a degree that appears to make it contradict the reality that existed at the time.


Ah, you must not have seen the pre-iPod players. The Personal Jukebox was 2.5x larger than an iPod. It didn't fit in a pocket, and the grey plastic felt cheap compared to the iPod's shiny metal back. It really felt experimental, and nothing like the polished Walkmen Sony was making.


That's... also what I'm saying, yes. I think we agree?

iPod was not successful because it was the only thing offering high storage capacity. It wasn't the only one. iPod was successful because it offered a touch wheel, iTunes, and Apple's brand recognition. As you point out, it also had very good build quality. None of these things have to do with some exceptional storage capacity.


Apple’s brand recognition wasn’t anything special at the time. The iPod was essentially their first meaningful entry into consumer music. Compared with say Sony, they were nobody - a recently failing maker of outdated computers.


But compared to the brand recognition of Remote Solution? The Remote Solution Personal Jukebox beat iPod to the punch by 2 years on offering a high capacity 5GB portable MP3 player, but it was not some runaway success. Apple's brand recognition was far higher than Remote Solution, which certainly was a factor in helping iPod to succeed where Personal Jukebox did not. (Not the only factor, of course, but one factor.)

Apple's brand may not have been as good as Sony in the music space, but Apple was still a very well-known brand, even if they were struggling at the time.


As has been said elsewhere - it didn’t beat apple to anything because it was giant, had a poor UI and no meaningful companion software. It was a prototype that wasn’t ready for the market. Nothing more.

Comparing the iPod to junk that happens to match on one spec or another tells us nothing.


> As has been said elsewhere

Elsewhere... as in the comment that I replied to? Which you then replied to me?

I was literally in agreement with the "elsewhere" that you're referring to. Your comments are confusing to me. Storage capacity was not the thing that made iPod successful, even though the anecdote at the top of this thread is trying to make that argument. There were a number of other factors that were more important. That's what I said and that's what the "elsewhere" also appears to have said. That's exactly what you just said too, so you're literally agreeing with me, but you want to argue for some reason.

Still, "Apple" was a much better brand than {someone no one has ever heard of}, which matters for something like the Personal Jukebox.


> "Apple" was a much better brand than {someone no one has ever heard of}.

This is of course true, but not relevant.

The personal jukebox was a junk product. Nobody would seriously imagine that the reason it didn’t win against the iPod was the brand.

Now imagine it the other way around - if apple had released the jukebox, and ‘remote solution’ had released the iPod.

Startups with no brand succeed all the time, if they have a great product.


> Now imagine it the other way around - if apple had released the jukebox, and ‘remote solution’ had released the iPod.

They probably would have both failed, in that case: Apple for delivering a product that wasn't good enough, and Remote Solution for having no brand recognition and no retail partnerships/network.

> Startups with no brand succeed all the time, if they have a great product.

Launching a physical product back then wasn't as easy as launching a Kickstarter and eventually transitioning to an Amazon listing. Just having a good product didn't mean you knew how to get it into the hands of customers. Things are much easier these days, and even so... lots of good products still fail to reach market for reasons that have nothing to do with the product itself.


The design on the Personal Jukebox is rough. It reminds me of those cheap no-name PDAs from the late 90s/early 00s that were less capable than the earliest Palm and Newton models and more like toys than tools.


By that time competitors had 1.8" HDD with similar capacities. Creative also had a 40GB MP3 player with a 1.8" drive in 2004, the Zen Touch.


I bought an Archos Jukebox portable MP3 player refurbished with a 20 GB HDD (2.5") in June 2001 (original 6 GB release was December 2000). While not as pocketable and sleek as the iPod, when the iPod came out it was too little (storage, interoperability) too late for me.


Exactly, that’s why the Apple Newton didn’t work out - wrong timing.


I wouldn’t read it that way. Jobs was good at noticing something would become possible 2-3 years before the rest of the industry. Flash-based players existed back then, and would have followed the trajectory the iPod jumped on a few years after it launched. The iPod interface was middling. (iTunes really, really sucked, but the device was nice)

Enabling podcasts was a big deal though. I’ll give them that.


iTunes was amazing. Its UI was so successful that it guided much of the Mac's design for the next decade: Cover Flow and Spotlight are direct descendants.

Every major OS has Spotlight-like UI these days. You can thank iTunes for the search-as-filter paradigm.

It also did everything you’d need regarding MP3s. You could rip CDs, organize and burn them, have the lyrics and album artwork show neatly while displaying a cool set of animated waveforms…

I miss it dearly. The fact that it became a behemoth and later Apple Music was a tragedy.


> Every major OS has Spotlight-like UI these days. You can thank iTunes for the search-as-filter paradigm.

I think your Apple bias is clouding your memory.

Apple certainly didn't invent the Spotlight UI. Quicksilver did it before on OS X, and my memory is hazy, but it also might've been inspired by another app.

iTunes also didn't invent the search-as-filter paradigm. I remember using something similar on foobar2000 in the early 2000s, and it might've been available even before in some Winamp plugins. Again, hazy memory, and it's difficult to track down these things now with any certainty.

I'm not arguing against your love for iTunes, but I think you're overstating its influence on software design.


> Quicksilver did it before on OS X

And before Quicksilver there was LaunchBar, which pre-dates OS X, having originally shipped for NeXT. I believe I started using LaunchBar during the OS X public beta, and I still use it.

You can download all of the old version of LaunchBar including for NeXT:

https://obdev.at/products/launchbar/download-legacy.html


Apple didn't invent much. Quite a bit of their hardware design is inspired by Dieter Rams, and a lot of their software is inspired by similar features from other operating systems. Most recently, the iOS widget system.

What apple does is perfect that design. iTunes didn't invent search-as-filter or the general UI layout, but it certainly perfected it.


Quicksilver was after iTunes. iTunes ran on classic Macs for many iterations.

I’m not claiming iTunes invented it, but it was the most influential implementation.


And it was originally a 3rd party product that Apple acquihired, Casady & Greene's SoundJam.


But the UI was completely rewritten. No instant search, Cover Flow, etc.


"rip, mix, burn" was an excellent campaign.


Every public source tells the story that the iPod went from idea to introduction in 9 months. Jobs also never wanted iTunes on Windows. He was dragged kicking and screaming into doing it.


In retrospect it sucked.

But at the time, iPod + iTunes + Mac was a better combo than anything else available on the market.


> At the end of a routine meeting with Toshiba, the engineers mentioned a new product they had in the lab that would be ready by that June. It was a tiny, 1.8-inch drive (the size of a silver dollar) that would hold five gigabytes of storage (about a thousand songs), and they were not sure what to do with it.

Keep in mind that 1.8 inches was not remotely the smallest hard drive size at the time. The 1.3-inch Kittyhawk microdrive was released by HP in 1992, the 1-inch IBM Microdrive at 340MB was released in 1999, and the 1GB version in 2000.


Could any of them build at scale? Honest question. IBM has had 1nm transistors and etched their logo by manipulating individual atoms for years.

And yet, if you want the best chips by the thousand/millions you run to TSMC


The IBM drive was commonly sold with early DSLRs like the Canon 10D. It was very common and routine. I am even aware of some products in the embedded space that used it.


Define scale. Microdrives were quite common at the time. I used one with a PCMCIA reader before the first iPod and a couple years later I had an MP3 player and a PDA with one. I recall being giddy at the fact I could keep a text copy of wikipedia in my pocket.


Apple couldn’t build at scale either and didn’t have to for the first couple of years. It took two years to sell the first million.


\tangent The Innovator's Dilemma recounts a similar scene in 1994 - except the players don't connect. The big manufacturers couldn't find customers, and the customer couldn't find a seller. His point was about big companies being "captive" to their present customers, and to "the financial structure and organizational culture" of the value network.

But the manufacturers just thought they were ahead of the market. Given this iPod story from 2000, maybe they were.


Here is the story in video format: https://www.youtube.com/watch?v=H1dzNuyq6O0

I highly recommend checking out the other videos by that YouTube channel as well.


Walter Isaacson’s biography of Steve Jobs is a great book if you haven’t read it.


Focus on me, sitting at my admin job at uni, downloading MP3s in early 2000, wanting a portable hard drive for all my songs.

All that stood in the way: random temporal mechanics and genetics


I'm not sure how big a difference this made. If I remember correctly there were iPod alternatives with similar drives pretty quickly.


The fact that we're talking about the 'iPod' & Apple and not the 'iRiver xyz' I think shows it made a big difference.

Yes, I know, survivorship bias, but Apple has a history of not releasing products unless they think they're good enough (for their own definition of good), and if Apple hadn't found a way to make it store enough songs to be a differentiator, we probably would never have heard of the iPod, nor would we be talking about it today.


> I think shows it made a big difference

No, it doesn't. It shows that something made a big difference, but that thing absolutely does not have to be the amount of storage.


To be more clear, I think Apple wouldn't have made the iPod at all unless they got the storage to where they wanted it. So maybe the amount of storage didn't matter to users in the end (anecdotally it did to me, but anyway); the product probably wouldn't have existed until some other storage revolution happened.


Hah, no. It was their world class commercials. Not a single one of my twenty-something friends bought because of specs when they came out. It was these that everyone talked about: https://www.youtube.com/watch?v=MTs5pOn7XFU


At some point, great ads still need to be backed up by a good product, if the iPod was crap people wouldn't keep buying them no matter how good the ads.


Every surviving product is a revolution by this definition.


That's just it, being first to the market is sometimes more important than worrying about competitors. They killed it so hard with the ipod that even if the zune was a better device at the time people wanted the ipod.


The Zune came quite a while later, though (2006 vs the iPod in 2001). The main competitors at the time were Creative (Nomad) and, from 2003 (IIRC), iRiver, and they typically had slow USB ports at the time and were more bulky.


I had a Creative Nomad, which I bought around the end of 2003, and I was scoffed at by iPod owners for how big it was.

Doing a little digging, this was the model: https://www.cnet.com/reviews/creative-nomad-jukebox-zen-xtra... At 4.4 x 3 x 0.86", it was bigger than the original Sony Walkman. It had much larger capacities and was outside the Apple ecosystem, which was an absolute feature at the time. iTunes was horrid.

USB speeds weren't that big of a factor, though YMMV depending on the user. With the larger capacities of HD-based players, you loaded once, which took some time, and then topped up with new additions, which was not a big deal. My particular Nomad model had USB 2.0, though.


people forget that the first ipod was firewire only and firewire was a very apple thing at the time. it wasn't until they gave in and changed it to usb (and released iTunes for windows) that it really set the world on fire.


When the iPod came out USB 2.0 had just been created. USB 2.0 wasn't in widespread use for at least a year or two after.

USB 1.1 was 12mbits/s -- that is 1.5MB/s. Firewire was 400mbit/s.

Accounting only for bus speed, filling up a 5GB iPod would have taken roughly 56 minutes[0] over USB 1.1 vs roughly 1.7 minutes[1] over Firewire 400.

[0] 5 GB = 5,000 MB; 12 Mbit/s / 8 = 1.5 MB/s; 5,000 MB / 1.5 MB/s ≈ 3,333 s ≈ 56 min

[1] 400 Mbit/s / 8 = 50 MB/s; 5,000 MB / 50 MB/s = 100 s ≈ 1.7 min
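
If you want to play with the numbers yourself, here's a rough back-of-the-envelope sketch in Python. It models raw bus speed only (as above) and ignores protocol overhead and the drive's own write speed, so real-world sync times would have been longer:

    # Rough sync-time estimate from raw bus speed alone.
    # Ignores protocol overhead, drive write speed, and filesystem bookkeeping.
    def sync_minutes(capacity_gb, bus_mbit_per_s):
        capacity_mb = capacity_gb * 1000           # decimal GB -> MB
        throughput_mb_per_s = bus_mbit_per_s / 8   # megabits -> megabytes per second
        return capacity_mb / throughput_mb_per_s / 60

    print(f"USB 1.1:      {sync_minutes(5, 12):.1f} min")   # ~55.6 min
    print(f"Firewire 400: {sync_minutes(5, 400):.1f} min")  # ~1.7 min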


It's also worth pointing out that Firewire had the ability to charge and sync the iPod simultaneously. USB ports didn't output multiple amps like they do today, so not only did it take longer, but one would also end up with a warm iPod with even less battery life left after adding music via USB.

Apple even made dual Firewire/USB cables to allow the (Firewire) charging brick to be used while syncing.


It didn’t help that the Zune was sold in a fecal shade of brown. Apparently the product designers never watched Reservoir Dogs.


I remember these drives being advertised in Byte with some comparison to domino pieces.


"No wireless. Less space than a nomad. Lame." - Rob Malda


> Compared to 5nm process, the first-generation 3nm process can reduce power consumption by up to 45%, improve performance by 23% and reduce area by 16% compared to 5nm, while the second-generation 3nm process is to reduce power consumption by up to 50%, improve performance by 30% and reduce area by 35%.

What kind of real-world impact are we likely to see on the 3nm chips for the new iPhone and M3 macs? These figures seem outstanding and generationally impactful if true.


For the first year or so, the experience of using these 3nm machines will be unparalleled. Then after most devs switched to using M3 Macs, app performance will regress back to the baseline, making everything slow on M1/M2 machines, and basically unusable on Intel Macs. Paraphrasing the old saying: "What TSMC giveth, Electron taketh"


As an embedded developer I'm always surprised that physicists and engineers spend decades pushing the limits of physics, for software devs to come along and make text editors no faster than they were on Windows 3.1


Can we stop with the hyperbole? The amount of multitasking we can do on consumer computers today wasn't even close to possible only 10 years ago.

About text editors... you can use blazing-fast editors if you want, like CLI-based ones, Notepad++ or Sublime Text... If devs are not using those to code anymore, there may be reasons, right?


My 2006 Vista machine, with two cores at 2.6GHz, quite simply multitasked better than my current Ryzen 5 3600-based machine. The one simple reason why is that all the programs I ran on that machine were native, while all the apps we run nowadays are blobs of JavaScript in a dumb Chrome window.

Eclipse ran better on my $600 laptop in 2012 than PyCharm runs on my 6-core 2019 MacBook. Also, on my MacBook, high CPU activity causes constant pops in my audio, which is just a thing that happens with MacBooks that people apparently just ignore.


My recollection of Vista was that everybody was complaining about its performance compared to XP because so much software switched to .NET-based apps, bogging everything down. It wasn't until Win7 that everybody was happy (it could even run on netbooks).


The new Notepad on Windows struggles to open files just a few hundred MB in size, and it visibly stutters and freezes when doing not much at all on a high-end gaming PC.

It is apparently an iron law of the Universe that all available computing power must be wasted.


I'd say that's _because_ it's an old app that hasn't been rewritten ground up to take advantage of modern hardware resources.


I'm talking about the NEW Notepad, which has been rewritten from the ground up to waste modern hardware resources.


Fun fact: Turbo Pascal was 38KB and that included the editor, compiler and debugger. It was lightning fast, compiling 10s of kSLOC in mere seconds on a 4.77MHz single-thread 8088 with 640KB of RAM. Now to build a Hello World full stack web app it can take 10+ minutes on a modern PC with 4GHz 16 core CPU and 64GB of RAM :)


> Now to build a Hello World full stack web app it can take 10+ minutes on a modern PC with 4GHz 16 core CPU and 64GB of RAM :)

You’re doing something extremely wrong if this is the case.

Or more likely, the people complaining about “10 minute” web app compile times are just repeating hyperbole they found in other cynical comments.


cool story, now try writing a "Hello World full stack web app" in turbo pascal.


It's actually quite easy if you can find a networking library: bind and listen on a socket on port 80, read text until you get GET / HTTP/1.1, discard whatever comes after, write back a response containing "Hello World" in the correct format (a few lines of text), and that's it! A new full stack web app is born.
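
For the curious, here's roughly what those steps look like as a toy sketch in Python (not the Turbo Pascal the thread is about, and not remotely production-grade HTTP):

    # Toy "full stack" hello world: bind, listen, read the request, reply.
    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 8080))   # port 80 needs elevated privileges; 8080 for the demo
    srv.listen(1)

    while True:
        conn, _ = srv.accept()
        conn.recv(4096)           # read the GET line, discard whatever comes after
        body = b"Hello World"
        conn.sendall(
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: text/plain\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n" + body
        )
        conn.close()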


And another Youtube video is born...


> Now to build a Hello World full stack web app it can take 10+ minutes on a modern PC with 4GHz 16 core CPU and 64GB of RAM :)

That's a bit of an exaggeration. What stack's hello world takes 10 minutes to build?


Depending on your network, `npx create-react-app --template typescript` might easily take 10-15 minutes.


Well, yes, given an infinitely slow network any action depending on the network can take an infinite amount of time.

Since the discussion is around CPU hardware improvements, I don't think network speeds are too relevant.


But isn't it? We've moved a ton of the compute to using the network and I think that makes things a lot slower than they need to be... and a lot less optimized. And along with that, since we have essentially infinite storage, we haven't gotten good at trimming the fat... this is yet another thing that steals from our compute.


Compiling a web app takes seconds as well. The common complaint is the resources needed to run the app as a consumer, particularly when most of the apps you use are all instances of chromium.


In many cases developer resources are more expensive than hardware or network resources and the decreased efficiency of modern software reflects that.

We can all agree software is not as resource efficient as it used to be on the whole.

Beyond that we’d have to discuss specific examples and benchmarks.


I'm simultaneously grateful that we value developer time and wellbeing over software performance and appalled that we too often forget the relationship between software performance and developer time and wellbeing.


I almost exclusively use "gluttonous" apps (browser, VSCode, Spotify, Discord, etc.) and have yet to experience these Electron performance issues decried in HN comment crusades. Everything is perfectly snappy. Ditto for my acquaintances. The ROI on these optimizations likely wouldn't matter for the overwhelming majority of folk.

I could be wrong. I often am. But I suspect that Electron consumes far more resources in the minds of HN users than it does in most computers.


It's possible you and your acquaintances can't tell. Similarly some people claim they can't tell a difference between a 720p video and higher. Or that they can't tell a difference between 128 kbps CBR MP3 and higher.

However the differences are real and have been measured. [1] [2] You are welcome to argue that the performance work isn't worth prioritizing, but don't fall into the trap of claiming the performance gap doesn't exist.

--

[1] https://pavelfatin.com/typing-with-pleasure/

[2] https://danluu.com/input-lag/


Typing in Obsidian is often excruciatingly slow. It is an electron app.

Typing in Sublime Text is always fast. Except for the first line which is the title of the tab. After that, it is fast.

I often switch to ST to type, then copy and paste to the slow app.

It's kinda nuts.


>no faster

In terms of latency it's usually slower by an order of magnitude.

There should be a law that mandates that developers use hardware at least two generations older than the average consumer's.


Doing development work on a 10 year old ThinkPad dramatically changed how I think about performance in my software.

With a tiny bit of attention to performance you can do insane things


I want my ps2 keyboard port back


USB doesn't have to be this slow. Some slowness is required, but not this much. (And USB3 doesn't have to be slower than PS2.)

What we get is a result of nobody on the entire stack giving a shit. It's not a technological limitation.


PS/2 keyboard/mouse support is still there on modern x86, but you have to shop carefully for motherboards. And if you also want a PS/2 mouse, you'll probably need a splitter; 2x PS/2 ports on a single board seem to be no longer available.


Why is it so surprising, when delivery of a product/solution is in most cases prioritized over quality/efficiency to a very large degree? E.g. a good-enough delivery in 3 months is more profitable than a perfect product in a year or so.


The Rust community seems to be leading the fightback on this, but I think it'll be about 5 years before there's a big switchover. If I owned a popular Electron app, I would definitely be thinking about starting to write a Rust version or migrating to Tauri because if you're not then somebody will be. Once that Rust version hits feature parity with your core functionality people will end up jumping ship for the faster, shinier version, just like people jumped ship from Atom to VSCode. Big Electron hitters like the aforementioned VSCode seem to be banking on people switching over to the cloud, which I'm not sure I see the majority doing. I find the UX of using VSCode in the browser not that great.

I'm already in the minority because I do all my work in Sublime Text so I might be completely wrong about all this, but this is how I suspect it will play out.


> If I owned a popular electron app, I would definitely be thinking about starting to write a Rust version or migrating to Tauri because if you're not then somebody will be. Once that Rust version hits feature parity with your core functionality people will end up jumping ship for the faster, shinier version, just like people jumped ship from Atom to VSCode.

Sublime Text was always faster than all of these (and still is) but that’s not why people are picking text editors.

Tauri is great for certain use cases but it’s not even close to a replacement for Electron at this point (having used both).

Optimizing for speed above all else is the kind of thing that appeals to a subset of programmers, but most people just don’t care about at all as long as it’s fast enough. Products like VSCode are more than fast enough for all but a few people who like to count milliseconds.


> Sublime Text was always faster than all of these (and still is) but that’s not why people are picking text editors.

Personally, I think VSCode has massive adoption because it's free and it's the easiest to configure, both of which appeal to newcomers. The other camp that likes VSCode is the nomads who like to work in the cloud. The senior devs I've met generally prefer either JetBrains or Sublime, depending on whether they like a maximal or minimal setup. And then there's the hardcore camp who can get on with Vim and Emacs. I don't know how the latter do it personally; I've tried multiple times to get something similar in functionality to Sublime/VSCode with them and it is always a monumentally frustrating fuck about.

> Tauri is great for certain use cases but it’s not even close to a replacement for Electron at this point (having used both).

I agree but it will definitely be ready within 5 years.

> Optimizing for speed above all else is the kind of thing that appeals to a subset of programmers, but most people just don’t care about at all as long as it’s fast enough. Products like VSCode are more than fast enough for all but a few people who like to count milliseconds.

I do agree with this to an extent but another reason I'm a big Sublime Text fan is the interface. I really appreciate the lack of screen clutter compared to VSCode and especially compared to Jetbrains.


> most people just don’t care about at all as long as it’s fast enough

People care about speed. Even non-technical people. Every non-technical person I know who has bought a new computer even though they already had one did so because of performance. People care about performance so much that they are replacing their whole device! That costs a significant amount of money and comes with the real hassle of transferring their data.

I think people use VSCode over Sublime because of its features, not because they don't care about the speed. It's certainly what got me to migrate from Sublime to VSCode. Sublime is so much faster, but it just doesn't do the things that I want my editor to do. I would instantly switch away from VSCode to a faster alternative if it actually had the subset of VSCode features I care about.


What can’t you get Sublime to do that VSCode can? I’m intrigued because I went the other way from VSCode to Sublime


An example of a feature I use very frequently would be find in files. Sublime does sort of have this, but the ergonomics are worlds apart.

In VSCode the search gets refined as I type, the results are in a sidepanel and clicking on a result will change the main editor's view to that location in that file.

In Sublime I have to first define which files are even subject to the search. There are options to add folders or to search all currently open files, however a bizarre omission of searching all the files in the currently open project, which is what I almost always want and how VSCode works by default.

Then I have to type out the search term, which is sometimes needlessly precise - because in VSCode thanks to the incremental search I sometimes already see that there are no results or just a few results when I've typed in only a small prefix of what I was going to.

Once the search is done, in Sublime the results are opened as a pseudo-file in a new tab and clicking on a result switches away to a different file tab. It works, sure, but it's very inconvenient for cases where I want to quickly click through a bunch of resulting files to see which is the one I actually wanted. Lots of switching back-and-forth between file tabs, as opposed to VSCode where it's all done in a single view context with the results always visible in the sidepanel. This is especially annoying in Sublime when a search result is in a file that I already have open but is very far from the search result pseudo-file tab. This means that to get back to the search results I can't even just switch the file tab, I need to first scroll through the open files tabs to even find the pseudo-file.

So in short, I guess I would describe Sublime's search as a sort of middle point between using grep and VSCode. It's serviceable, but annoying and wastes my time.


> In Sublime I have to first define which files are even subject to the search. There are options to add folders or to search all currently open files, however a bizarre omission of searching all the files in the currently open project, which is what I almost always want and how VSCode works by default.

Sublime 100% does this, because I use it every day. You just have to use command + shift + f, type whatever you want and hit enter, and it will search all files in the open project by default. The only time it won't do this is if there's something in the "where" input from a previous search.

I find the VSCode search results packed in like sardines and like the greater amount of context you get with the Sublime tab, which generally means I get to find what I'm looking for without having to click through to check. I've got a keyboard shortcut set to jump between the entries, which makes it feel a bit snappier to cycle through results as well. All personal preference at the end of the day though.


Tauri is not inherently faster. On the largest platform (Windows), Tauri scores similar performance numbers to Chromium. Why? Because it's still Blink under the hood, since the system browser is Edge.

The instances in which Tauri is faster are when you write your business logic in Rust and your frontend logic in JS.

Simply switching to Tauri will not instantly speed up your app.


I don’t see why Tauri would necessarily be faster, from what I understand the main difference is that it uses a browser bundled with OS where possible but that’s not necessarily a faster browser than Chromium


https://tauri.app/v1/references/benchmarks/

I will admit there doesn't seem to be as much in it as I thought there was.


I find VSCode to be responsive and snappy and I am on older hardware (mid-2014 MacBook Pro). I don't go overboard with thousands of extensions, though I like the flexibility and use a few heavier extensions like the Jupyter extension. I have to say I am very satisfied with my editor and I cannot really understand the VSCode hate.

But I only use Python and no demanding software like Docker.


5 years sounds optimistic


You've got stuff like the following that are coming along nicely so might not be as far down the road as you think:

- https://zed.dev/releases

- https://lapce.dev


> Then after most devs switched to using M3 Macs, app performance will regress back to the baseline, making everything slow on M1/M2 machines, and basically unusable on Intel Macs.

This is one of those HN comments that has me feeling like I live in a different world than the commenter. What Electron apps are you using that are really that slow? I use several of the popular Electron and CEF apps on a daily basis and everything is fast.

The exceptions are anything that loads over a network. Spotify, Slack, and others can feel “slow” when the content is loading from a backend over the internet, but obviously that’s not the fault of the UI


When was the last time you used them on a $200 laptop like your cousin picked up cheap to take to college?

Or tried to help students use an Electron IDE on a computer with 8GB and a 2GHz 10-year-old processor that their dad saved up to buy secondhand so his kid wouldn't need to work overtime at the factory to keep food on the table?

Electron is only fast on developer- and gamer-tier hardware.

Try running this stuff on the hardware Steam says the average gamer has and see how fast it actually feels.


Having used secondhand laptops almost exclusively for 20 years and having helped friends do the same, I'm pretty confident saying that 90% of the time the slowness is a botched Windows installation and a lack of computer hygiene. Not always, but in the overwhelming majority of cases.


When i switch back and forth between two channels in Slack, it takes a noticeable amount of time. I’d say 300-400 ms or so. Is it significantly faster for you, or do you consider that to be fast?


Do you reboot every day?


Obsidian.


This doesn’t make sense. VS Code works fine on a 2010 iMac that’s so old that Apple doesn’t support the OS anymore.

I’m starting to see less software support, though. I should put Linux on it.


I use a mid 2014 MacBook pro. I spend my days in vscode, a terminal and safari. So far it's still going strong. It's sometimes laggy, of course, and the battery life is really bad, but in general it's still a machine that can be worked with.


Last November, I went from a Late 2013 MacBook Pro to an M1 MacBook Pro and it's phenomenal. Sure, I could get by on the Late 2013 but everything is so much better on the new one, I would hate to go back. I can easily code for an entire day now without having to worry about charging, makes a massive difference if you work outside the house a lot. I'm even thinking of upgrading again whenever these M3s come out because the performance and energy efficiency increases sound nuts. My advice would be to upgrade whenever you can. Even just 'little' things like being able to highlight and copy text out of photos and screenshots make it a vastly better experience.


I would love to upgrade, the short battery life is just really immensely frustrating because I need my laptop a lot outside the house too. And it is often slow, especially with too many tabs open. But it's way too much money for me right now. My comment was more a reply to:

> Then after most devs switched to using M3 Macs, app performance will regress back to the baseline, making everything slow on M1/M2 machines, and basically unusable on Intel Macs

which I felt can't be true because my laptop is ancient by modern standards but still usable. Not as snappy as I would like it to be, but also not terrible or unusable.


I think TSMC N3 will be underwhelming. It offers a shrink of logic transistors, but ZERO shrink to SRAM meaning one of Apple's biggest advantages (massive caches) isn't going to have much room to improve.


I know this is a joke but I’ve heard many variations and - how does Electron taketh?

Is the codebase getting de-optimized with time? Is it badly architected extensions? Is the JS runtime becoming slower?


I think it's proliferation as well. If I look at what I'm using day to day: VSCode, Postman, Slack, Teams, Spotify are usually all open; they're all Electron.

People definitely don't optimise their Electron apps as well as they could for time reasons, normally not too bad if you're running 1 Electron app, but if you're running several then it becomes more obvious.

Tbh it's an old/new software design principle, I guess. We all hear the stories of some 386 machine in the basement running some Fortran program that hasn't been touched in years, whereas modern JS-based codebases are expected to be constantly updated. I've worked in JS my whole career & it's very rare to see a project that's one & done like software used to be.

Maybe old software did get updated, but in long release cycles: version 1, 1.1, etc. Now JS-based software is updated (sometimes) every day!


I wonder if there's a baseline for cynical comments about application performance?

Because it sure seems like this exact comment is a tinnitus-like constant ringing noise amongst the tech community.


It's really just HN


> reduce power consumption by up to 50%, improve performance by 30% and reduce area by 35%

To be clear, you don't get all three.

Also, are these Apple's numbers? They are better than what TSMC say (1st gen) 3nm silicon provides:

> N3 technology will offer up to 70% logic density gain, up to 15% speed improvement at the same power and up to 30% power reduction at the same speed as compared with N5 technology

https://www.tsmc.com/english/dedicatedFoundry/technology/log...


>To be clear, you don't get all three.

Actually you do. The branding is called FinFlex, but the upshot is that you can use different FETs for different parts of a circuit. So some critical timing bottleneck blocks could be built with performance optimized FETs and other blocks can optimize for low power.

The numbers from TSMC for N3E are +18% speed improvement at same power, -34% power reduction at same speed, and 1.7x density improvement.

https://www.tomshardware.com/news/tsmc-outlines-3nm-roadmap

>Also, are these Apple's numbers?

Ha ha. That's funny. Does Apple ever release any numbers prerelease?


> Also are these Apple's numbers?

Hope it's obvious, but no, they are not Apple's numbers—this is a rumor about unreleased products.


Bizarrely, it seems the parent plucked those numbers from a Samsung 3nm press release! That's why they don't make sense in the context of TSMC 3nm. I've never had a presentation from Samsung; I guess they need to say something special.


I’m curious why not? Apple Silicon so far has improved performance, and reduced power consumption. Does that mean that it’s a larger chip than the others that came before it?


There is a trade-off between the three kinds of improvements. It is possible to pick a point (for example) that gives a less-than-maximum power reduction as well as a less-than-maximum speed improvement. I think the comparison would be that the Apple Silicon they released is larger than a (hypothetical) alternative that only improved performance but kept the other two constant.


Maybe X*Y*Z=K but K increases over time. Ten years ago, we couldn't do what we can do now.


They got there by skimping on IO


Please elaborate. IO is really not an issue in the M* machines I‘ve used - on the contrary.


none of the M1 laptops let you use multiple external monitors and all of the M1 products have much lower limits on storage than PCs.


Not true. Depending upon the processor choice, M1, M1 Pro, or M1 Max, you get 1, 2, or 4 external monitors. Up to 64 GB RAM. Do you really need more than 64 GB? The number of users who actually need more is a very small part of the market.


The price of RAM is absolutely bonkers though. If i ever need more than 16GB of RAM and more than 512GB of storage, I'll strongly consider moving back to Thinkpads or something.


And the M1 Ultra even offers up to 128 GB RAM -- and that is also unified with GPU RAM.


"Speed" != "Performance"

'70% logic density gain' sounds like 'reduce area by 35%' to me

The only one that might not line up is 'reduce power consumption by up to 50%' vs 'up to 30% power reduction at the same speed as compared with N5 technology'

...but you have to know what's being compared before deciding the numbers do (or don't) make sense


About the area, keep in mind that not every part of a chip is composed of dense logic:

Typically, SRAM (cache memories, for example) doesn't scale as much as core logic, and interconnect (connections between the cores, DDR, etc.) almost doesn't scale at all.

This is highly dependent on the design of the SoC, but the core logic is rarely the largest of the three, so you will not see a 35% reduction in the total area of the chip: only in the areas most densely packed with logic. What you could have, for example, is a 35% more powerful neural network accelerator in the same area as the previous generation. But you will not have a 35% bigger cache.
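
To make that concrete with made-up proportions (the 50/35/15 split and per-block shrinks below are purely illustrative, not real SoC figures):

    # Illustrative die-area math: only the logic portion shrinks meaningfully.
    # The area split below is a made-up example, not a real SoC breakdown.
    area = {"logic": 0.50, "sram": 0.35, "io_interconnect": 0.15}   # fractions of old die
    shrink = {"logic": 0.35, "sram": 0.05, "io_interconnect": 0.0}  # per-block area reduction

    new_die = sum(frac * (1 - shrink[block]) for block, frac in area.items())
    print(f"whole-die area reduction: {1 - new_die:.1%}")           # ~19%, not 35%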


Have you seen the 3nm SRAM scaling? It's really depressing. It won't bother CPUs too much but other devices are going to struggle to scale.


N3 scaling? There's literally zero scaling.

N3 is a stop-gap as TSMC is behind on their N2 GAA process. It's not so different from Intel's 14nm++++++ where it was dramatically better than before, but still not enough.


> About the area, keep in mind that not every part of a chip is composed of dense logic

No, but improving density still makes it smaller (for an otherwise-"identical" offering) :)


> "Speed" != "Performance"

Can you elaborate?


Sure

Start with processor pipelining, branch prediction, speculative execution, etc

Then move to multiple CPUs/CPU cores

A practically-linear program (ie hardly any unpredictable branching) can be run ~5x faster on a 5-stage-pipelined CPU than an unpipelined CPU

Likewise, parallelizable tasks can benefit from multiple CPUs/cores, whereas non-parallelizable tasks will not (much)

And then you have performance-per-watt vs raw hertz count: today, most people seem to care more about performance-per-watt than raw "performance"

Ie, ceteris paribus, a 30W 4-core CPU at 1GHz is going to be more favorably viewed than a 100W 6-core CPU at 1.8GHz - you can throw 12 "low power", "slower" cores at the problem for ~10% less power draw than the "high power", "faster" cores

Then you can also look at the relative efficiency of different CPU platforms and/or microcode changes in a single platform: that was the major selling point of Apple's anti-Intel ads 15-20 years ago - the PowerPC was "faster" than the Pentium because it was "more efficient" at several families of tasks (even though the raw clock speed of the PPC was typically lower than the comparable Pentium offerings)

This comes up with the Apple M-series CPUs vs the Intel chips they replaced - they may not be "faster" (in raw ghz), but they're more performant
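
Putting rough numbers on the hypothetical parts above (a sketch only, treating cores x GHz as a throughput proxy, which holds only for perfectly parallel work):

    # Crude perf-per-watt comparison for the hypothetical parts above.
    def core_ghz_per_watt(cores, ghz, watts):
        return cores * ghz / watts

    low_power  = core_ghz_per_watt(cores=12, ghz=1.0, watts=3 * 30)  # three of the 30W parts
    high_power = core_ghz_per_watt(cores=6,  ghz=1.8, watts=100)

    print(f"12 low-power cores: {low_power:.3f} core-GHz/W")   # ~0.133
    print(f"6 high-power cores: {high_power:.3f} core-GHz/W")  # ~0.108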


All true in the general case. But in the context of comparing silicon nodes (this case, seemingly), the terms "speed" and "performance" are interchangeable and mean exactly the same thing.


I don't read it that way - how did you come to that conclusion?


It'll help make the lightest and smallest devices even more capable than any other competing laptop. After using my work-issued 16" M2 Pro, my 2019 Intel MBP is crawling slow, particularly with Docker. Build times can always be faster, battery life always better, cooling efficiency better.

The real-life luxury of just being able to toss a 14" MacBook Pro that weighs almost nothing into your bag and have a savage development machine with up to 96GB of RAM, no fan noise and minimal heat, and great battery life, is very valuable.

I've started noticing all the processes that really bog my Intel Mac down. Indexing, running out of RAM, builds, etc. I was pretty pessimistic about Macs for a while, but right now it feels like when they started putting Retina displays in them. My previous Mac couldn't even play a 1080p YouTube video smoothly.


Similar for me, I caved in and bought an M2. It's great and all; gotta say though, I'm not entirely happy with how unbearable the 2019 Intel MBP has become.

Yeah, it struggles with YouTube videos and the fans spin in vacuum-cleaner mode. But it didn't in 2019. I feel OS and software updates ruined it. Not what I expect after only about 3 years of use (and obviously it wasn't cheap either).

Oh well, I still gave Apple my money; I just hope it's a special situation with the chipset transition and not the regular lifespan from now on.


It's worth being aware that charging those 2019 Intel Macs from the right-hand-side USB-C ports will cause them to overheat and get thermally throttled.

Not saying you're doing that, or that they are otherwise great machines (they're not), but they can go from being a steak grill back to being a relatively serviceable computer just by switching the charging port.


> charging those 2019 Intel Macs from the right-hand-side USB-C ports will cause them to overheat and get thermal throttled.

Pretty sure you have it backwards, it's charging from the _left_ that causes more throttling.


What browser are you running? My 2017 Mac runs 4k YouTube videos without a glitch in Safari.


I should have articulated that differently. What I meant to say was that the mac I had prior to my first Retina MBP (2013) couldn't run 1080p YT videos well. That model was a 2009 13", if memory serves. So I was just comparing the two transitional periods, whereby my Intel mac feels almost as limited as that 2009 machine did.


I prefer to use Brave to watch Youtube; it skips most of the ads automatically.


History suggests we will see obnoxious hacker news comments about how the M-series processors are some unrivaled alien technology that will be forever unbeaten when it comes to performance followed by AMD releasing a low wattage CPU with better multicore performance about 8 months later (or whenever they get fab capacity anyways).


Likewise, history will tell you we'll periodically see obnoxious hacker news comments claiming the M-series can’t possibly be better than what the x86 world has to offer.

We’ve been reading for decades that ISAs don’t matter that much anymore, and yet everything that needs good battery life is ARM. Intel was the undisputed manufacturing king for 40 years or so. Things change.


I don't know, why did x86 beat PowerPC, MIPS, SPARC, and whatever else in the desktop and server space?

Seems like node size for the price beats everything else which is what we're talking about here.

Apple's anticompetitive practices are not particularly compelling evidence of some inherent architectural superiority.


The first M1 devices released in November 2020.

Where is AMD's competitive chip that came out July 2021?


AMD had the Ryzen 4800U shipping with a similar power envelope and higher multicore scores. Battery life was a good few hours shorter than the M1's, but it was also manufactured on 7nm versus Apple's 5nm. Performance tends to favor whoever is on the denser node, so it's impressive to see x86 still competitive by modern standards.

If you need further evidence of this pattern, look at Nvidia's ridiculous 40-series cards. More expensive than a used car, but they were able to secure TSMC's 4nm node and annihilate everyone else. Apple did the same thing with M1, and it looks like that's what they're gearing up for with M3.


> similar power envelope

Power envelopes are reported in very misleading ways by companies; in practice it is not even close. The way they report things lets them have their cake and eat it too: the benchmark results assume boosting frequencies above the base one, while the advertised "power envelopes" are measured at the base frequency. It becomes quite apparent when you run them on battery, where M1/M2 machines keep performing the same as when plugged in, while AMD/Intel laptops underperform and also don't last as long as Apple's on battery.


Customers care about this:

> Battery life was a good few hours shorter than the M1

They do not care about this:

> ...but it was also manufactured on 7nm versus the Apple's 5nm


I agree. It's worth noting that the 4800U was on store shelves 18 months before the first M1s were released, however. Apple Silicon was released into a market that had already seen comparable APUs.


What AMD-powered computer is more energy efficient than a M1 for example?


Apple M1 Pro vs AMD 5800u - 2 months apart (October vs December 2021). 5nm vs 7nm TSMC. They have comparable performance (60-120% in multi core depending on the task, 90-110% in single core) and the same power draw.


Where did you get those numbers? According to [1] it's not even in the same ballpark. To me it seems that even the normal M1 is a bit better [2].

[1] - https://nanoreview.net/en/cpu-compare/apple-m1-pro-vs-amd-ry...

[2] - https://nanoreview.net/en/cpu-compare/apple-m1-vs-amd-ryzen-...


Is there any Windows machine that's fanless and at M1 speeds?


I can set my Gigabyte laptop to Quiet mode and it's as good as fanless. As to the speed, I can play Doom Eternal on it (it has a mobile 3080 card), which the M1 can't.


As good as fanless is not fanless. I want to be able to put my laptop on top of a bed of dust and not have the interior be coated in a second.


So not fanless


From the point of view of quantum mechanics, if you can't hear the fan, it does not exist)


Yeah, I guess it's mostly Apple buying up the most advanced node capacity at TSMC. It should be interesting once Intel and AMD get on the same node level as Apple hardware; then a comparison can be made.


That will never happen. Apple has too much buying power. AMD and Intel don't make the most popular phone in the world, so they can't put down the money to compete for fab space.


What I really want is for them to release a lineup like Aerith - the super efficient Steam Deck SoC for the general market so we get more fun forms of computers


Also not mentioning the myriad of problems that come with using ARM as your main PC.


This is the power consumption of (some of) the chips alone. That's pretty good, but the power consumption of the screen, cell radio, Bluetooth, accelerometer, microphone, GPS - all of these add up to more than the power consumption of the chip. And not all of them, especially not the screen, are going to be affected by TSMC's process change.

Obviously there's many caveats here based on how each person uses their device. But the broad takeaway is - a 45% reduction in power consumption by Apple's chips doesn't result in a corresponding increase in battery life. We'll only know when the final product is shipped.


Do you have a link which shows the power consumption of all the components you mentioned?

I've seen this comment over and over again, every time an increase in processor efficiency is talked about, and yet those gains usually do result in a very noticeable improvement in battery life.


No, it will have a massive impact on battery life. I only meant to say that a 50% decrease in consumption by the CPU doesn't translate to a doubling of battery life. All those other things matter too, so maybe it translates to 20-30% more. That's still massive!
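
A toy model makes this concrete (the ~40% SoC share of total device power here is purely an illustrative assumption, not a measured figure):

    # Battery life gain when only the SoC's share of total power improves.
    def battery_gain(soc_share, soc_power_cut):
        new_total = 1 - soc_share * soc_power_cut   # new power draw as a fraction of old
        return 1 / new_total

    print(battery_gain(soc_share=0.4, soc_power_cut=0.45))  # ~1.22 -> ~22% longer runtime
    print(battery_gain(soc_share=0.4, soc_power_cut=0.50))  # ~1.25 -> ~25% longer runtime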


Perhaps more fanless Macs. That would be great and make a difference in the user experience. Re iPhone: I only use mail, safari and chat apps so I doubt I can see any difference.


> I only use mail, safari and chat apps so I doubt I can see any difference.

You probably also take photos, where Apple automatically applies some Machine Learning tricks.

And wherever Machine Learning is used, more can be used. One day they could put an entire ChatGPT on the iPhone.


>> One day they could put an entire ChatGPT on the iPhone.

That sounds great, except I never use Siri (in-car commands being the exception).


I've heard (I'm no expert on this, so correct me if I am wrong) that making SRAM is hugely different from making processing units (ALUs, FPUs or others). But a CPU needs both SRAM and processing units. Marketing always shows the best case only, so the final product won't be as good as the marketing numbers.


This comparison is somewhat disingenuous. Why would they compare 3nm to 5nm, when the current TSMC process is 4nm, not 5nm? We don't know how (un)impressive the improvements of the 3nm process are, when we don't know how much of the difference between 5nm and 3nm merely comes from the difference between 5nm and 4nm.


From what I have read, 4nm is not really a significant process change and is really just a variant of 5nm. There have been only minimal improvements in chips going from 5nm to 4nm as a result. 3nm really is different.


"There have been only minimal improvements in chips going form 5nm to 4nm as a result."

I'd like to see evidence for that, i.e. that the change from 4nm to 3nm offers significantly larger efficiency improvements than the change from 5nm to 4nm.


The chip industry articles I have seen say that 4nm is not meaningfully smaller than 5nm and can really just be considered a version of 5nm. I don't have the background to give you more than that, but the message has been consistent across multiple tech writers.

You can see it in the smaller performance gains of the 4nm A16 chips vs the 5nm A15 chips from Apple.


Those numbers are just for marketing. There is no feature on the chip that is actually 5 nm, 4 nm, or whatever.


That's highly irrelevant for my point, which would still stand if the process generations were named differently.


Remember how Apple bought up most of the 5nm capacity for their M1 chips? Qualcomm was then forced to go with Samsung, which resulted in the Snapdragon 888 and 8 Gen 1 SoCs being pretty inefficient compared to the TSMC competition, like the Mediatek Dimensity 9000. Will something like this now happen again?


I thought those chips were inefficient because Qualcomm prioritised performance and benchmark numbers over battery life.


That's definitely part of the issue, but compare the SD8G1 to the SD8G1+: these two chips are literally the exact same design, but the SD8G1 is manufactured by Samsung while the SD8G1+ is manufactured by TSMC. The TSMC variant ends up being ~10% faster while simultaneously drawing less power: https://www.gsmarena.com/testing_the_snapdragon_8_gen_1_plus...

(Apple, of course, still blows both of them out of the water.)


Qualcomm was not forced; Qualcomm was not able to compete with M1 chips.

Qualcomm is a for-profit company; they are not forced to be bad, and being bad means less profit. They are just inefficient and had a different strategy: they saw competition coming and they panicked. They are not used to innovating; they prefer to milk their customers with small and insignificant upgrades every once in a long while, as well as making use of their influence with the deep state to kill any weak competition (Samsung's Exynos).


Perhaps the parent should have said "lead to" rather than "forced", but the point still stands that Qualcomm went to Samsung's foundries, which provided lower quality results.

The Snapdragon 8 Gen 1 was a hot turd. In contrast, "plus" variation, which was the exact same chip but manufactured by TSMC instead of Samsung, got significantly better performance/power numbers.

(BTW, I think you're getting downvoted into oblivion because of the "deep state" nonsense. The rest of what you said is reasonable.)


That's not nonsense, that's the reality [1]

It happened with Japan; look at them nowadays [2]

It also happened with Europe, Alcatel and Nokia, look at them nowadays [3]

For some reason, only Microsoft can get away with bribing foreign countries [4, 5]

We've seen the extension of it with the ban of Chinese 5G and multiple tech companies, including Huawei, which later unveiled novel EUV patents [6]; maybe they knew what they were cooking

TSMC's developing story is also quite revealing, with US officials suggesting sabotaging TSMC [7]

"deep state" nowadays has negative connotation because the government wants it to be that way ('it's a conspiracy' argument), look, it works [8]

And it's not just tech; there are some suspicious stories in other industries, car [9] and metal manufacturing [10] for example

After all these stories, seeing Samsung having to drop plans for a homemade chip with advanced homemade fabs instantly rings bells, especially when the house was tainted with similar stories [11]

[1] - https://en.wikipedia.org/wiki/American_exceptionalism

[2] - https://www.washingtonpost.com/archive/politics/1987/04/18/s...

[3] - https://en.wikipedia.org/wiki/Alcatel-Lucent#Lawsuits

[4] - https://www.theregister.com/2022/03/25/microsoft_accused_of_...

[5] - http://techrights.org/2014/05/28/microsoft-brazil/

[6] - https://www.techspot.com/news/97072-huawei-patents-euv-litho...

[7] - https://eurasiantimes.com/us-taiwan-should-destroy-its-semic...

[8] - https://en.wikipedia.org/wiki/Deep_state_in_the_United_State...

[9] - https://fr.wikipedia.org/wiki/Affaire_Carlos_Ghosn (I couldn't find English coverage)

[10] - https://en.wikipedia.org/wiki/Fr%C3%A9d%C3%A9ric_Pierucci

[11] - https://en.wikipedia.org/wiki/Lee_Kun-hee#Scandals_and_contr...


M2 was a skip generation for me, looking forward to M3.

Maybe the same with iPhone.

I generally care about process node as the main feature of a computer.


The M2 Pro chips finally allow 8k displays! That's a pretty significant new feature (at least on Macs, Windows has supported 8k displays for some time already).


I'm over 40. I can't see all the pixels on a 4k display, let alone 8k. I'm still happy with my 2013 MBP w/ Retina.


It's not just about the resolution, it's about physical area.

My plan is to buy a 65" or 75" 8k TV and use it as computer monitor. I'll have a screen that is as big as my desk.

A display of that size will allow me to have my IDE, source control, terminals, log files, the app that I'm developing, my email, the CI dashboard all visible at the same time.

Of course I'll mainly use the bottom center of the huge display, but I think it will be a completely different experience when you no longer have to cmd-tab between apps but can just glance up/left/right to see what's going on.


What you want is a curved ultrawide monitor like the Samsung Odyssey G9. I have one of them (49"). The curve is critical, because it allows the entire surface of the monitor to be basically the same distance from your eyes... meaning you can just glance to the side to use that app you placed in the corner. I have good vision but even for me a non-curved 49" ultrawide is unpleasant to use because the edges and corners are so far from your eyes. The 49" size is also large enough that it almost completely fills your field of view when seated normally.

A 75" TV is going to cost you a lot of money for a very ineffective display. It's so large that you'll probably have to physically move several feet to see the edges, meaning that realistically only about half the screen is usable.


The ultra-wide monitors sound nice, but they don't really offer that much space; it's basically just two 27" displays side by side.

I currently have two 27" and three 24" displays on my desk. I already physically move to focus on specific tasks (eg. I have email on a vertical monitor on the side, so when answering emails I'll roll over a bit to the side with my chair).

With a TV, I could have one big seamless area instead.

The one thing that does worry me about the TV is the lack of tilt at the top -- everything in the top half of the monitor will be hard to see because of the angle.

When I work with pen and paper, my entire desk is full of stuff, a dozen sheets and books, printouts and diagrams, all visible at a glance.

I don't see why I should limit myself to a much smaller area when working digitally.


Ultrawide 1440p means unbearable font rendering. No thank you.


> My plan is to buy a 65" or 75" 8k TV and use it as computer monitor. I'll have a screen that is as big as my desk.

How far away are you going to sit??


Normal desktop monitor viewing distance (at the back of an 80cm-deep desk), but I'm going to use it at 1x scaling. 65" will probably make the text too small -- I think the ideal size would be an ~80" display, but 75" and 85" TVs would be wider than my desk.
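
A quick effective-PPI check for an 8K panel at 1x scaling backs this up (taking ~110 PPI as a comfortable unscaled density, a rule of thumb rather than a hard number):

    # Effective PPI of a 16:9 8K panel at various diagonal sizes.
    import math

    for diag_in in (65, 75, 80, 85):
        ppi = math.hypot(7680, 4320) / diag_in
        print(f'{diag_in}": {ppi:.0f} PPI')   # 65" ~136, 75" ~118, 80" ~110, 85" ~104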


This is an interesting idea. You should write a blog post or something about your experience once you've settled in.


Might take about 10 minutes to migrate your mouse from one corner to the other!


The mouse will have offspring and die before reaching the other side.


I’m not sure if you’re aware of this, but alongside 3nm processes there’s a cutting edge technology called “glasses”…


Glasses help with focusing, but they don't help if your sensors (retinas) are just degrading.

Putting a very fancy lens on a 0.3MP sensor doesn't give you higher resolution photos.


GP didn't mention degrading retinas, just age. I'm over 50 and need glasses but, while acknowledging that life is a one-way journey, I just don't like people talking like ageing is necessarily this slide into decrepitude after 40.


> GP didn't mention degrading retinas, just age.

Retinas degrading is a factor of age.


The vast, vast majority of people in their 40s (wearing appropriate glasses if necessary) can tell the difference between an average monitor and a 4K display, which is the context of their comment.


I know zero people with a 8k display, I wouldn't call that a significant new feature.


> I know zero people with a 8k display, I wouldn't call that a significant new feature.

From what I understand, even 4k is usually a waste for a living room TV when viewed across the room from a couch. People just don't have the visual acuity to really tell the difference.

But I can easily tell the difference between 4k and 8k when a 43" TV is 16" away from me.

I get that companies want to make money so the new tech is going into big TVs. But at some level, it's very strange to me.


I have a 65" 4k TV and sit across the room from it. The difference from 1080p to 4k is definitely noticeable. It's not miles apart but the details are absolutely crisper and nicer.

I imagine the benefits of 8k only really start showing on 80"+ screens.


That's fair.

I've seen a number of resolution charts like the one here[1]. But I suppose I forgot how different peoples' setups can be.

Sometimes across the room is 6', sometimes it's 15'. And for sure, screen size and eyesight can make a big difference.

---

1. https://www.rtings.com/tv/reviews/by-size/size-to-distance-r...


Right. Indeed my living room is not very big. I have made an error though, my TV is 55" - not 65".


I know people have made naive comments about computers for years, "why would you ever need more than 8MB of RAM?", etc

But I genuinely believe 4k is the upper limit where you stop seeing benefits in increased resolution for a monitor. Our eyes do have limits after all! I have pretty good vision and 4k is enough, and at that point I'd prefer focus on other aspects of picture quality like color accuracy and higher refresh rate.

It's already annoying enough how large a screenshot of your monitor comes out (just tested on my desktop and it came out to 10MB). 8k is unnecessary in the vast majority of cases.


> But I genuinely believe 4k is the upper limit where you stop seeing benefits in increased resolution for a monitor. Our eyes do have limits after all!

I think it depends on how big the monitor is. You're probably right for 27" monitors. But running a 43" TV as a monitor, the pixel density is quite a bit lower than I'd like.


But simultaneously you won't be looking at that giant monitor from the same distance, but much further away. So it really shouldn't make that much of a difference as long as the angular resolution checks out.

I've currently got a 32" one running 1080p; from a normal viewing distance I can't see any pixels at all. Going beyond that and having to render 4x as much already seems super gratuitous. 8K is just completely ridiculous unless you're projecting a DnD game board or something that will be viewed up close.


I and several other programmers I know use a 43" monitor an arms distance away.

I run it at native resolution, like it's 4 1080p monitors glued together with no bezel.

I can definitely tell that reading text on it is exactly like reading from a 1080p monitor, compared to my Retina-display MacBook screen right next to it.

A 43" 8k monitor for $750 or so would be an insta-buy for me.


So many assumptions here! I use tooling to divide my 4K monitor into five rectangles: two across the top each taking up 1/4 of the screen, and three across the bottom each taking up 1/6 of the screen. Even with that, I have to layer some things, and am often actively working with three or more of the "virtual screens" at once.

I am old enough that I've started to wear reading glasses to counter my age-related presbyopia, but if I had an 8K monitor it would probably sit less than two feet from my face just as this 4k monitor does, and I would make use of every pixel.


> But simultaneously you won't be looking at that giant monitor from the same distance

I sit 16-20 inches away from my monitor. I don't really think I sit further from this monitor than smaller monitors I've had in the past. Or a laptop on a stand.

As other people point out, I use it like multiple monitors that are seamlessly glued together with a tiling window manager (MacOS so Rectangle).


> But simultaneously you won't be looking at that giant monitor from the same distance,

You might be if it had a higher pixel density. There are plenty of people who run 2, 3 or even 4 smaller monitors next to each other at close distances. That could potentially be replaced by one much larger 8k monitor in the future.


Do those same eye limits apply to printers? A 24” 4k screen is 180 PPI (according to Google). Can you tell the difference between a page of text and images laser printed at 200 dpi and 1200 dpi?


I think VR can definitely benefit from more pixels.


Agree. The closer the display is to the eyes, the more pixels are needed to make it convincing. I feel the tech is close though. The 2016 HTC Vive (1080 × 1200/eye) is quite pixelated, but the 2020 Index headset (1440 × 1600/eye) looks smooth and clear.


For the unvarnished truth about when resolution is good enough, look at what Apple calls Retina: 4K at 21 inches, 5K at 27 inches, 6K at 32 inches. The resolution of all those displays is good enough to stop individual pixel perception. That's the best idea of what's good enough.


Didn't the M1 MacBook have the much more basic issue of only supporting a single external monitor via HDMI?


The M1 Pro and M1 Max can have multiple display outputs. The M1, be it in the MacBook Air, Mac mini or MacBook Pro, could only ever have 2 display outputs. On laptops, one of those was permanently reserved for the built-in display.


Specifically you need the 14 or 16 inch M1 Pro for multiple display outputs. The M1 Pro 13 inch only supports one external display. Same with the M2 Pro 13 inch.

This is very confusing imo


I am referring to the M1 Pro the chip, not "M1 Pro" as in the MacBook Pro with the M1 chip. The 14 and 16 inch MacBook Pros have the M1 Pro and M1 Max CPUs, whereas the plebeian-tier 13 inch MacBook Pro has only the lowly M1 chip. I think whoever came up with this product strategy is a mastermind who deserves a raise from the perspective of shareholders, and a slow and painful removal of their gonads from the perspective of people who bought the 13 inch MBP.


Regular M2 still has the same limitation of two total displays. Unless you use USB->HDMI adaptors that have their own display controllers.


This is crazy. Couldn't much weaker Intel chips do this years ago?


Intel HD graphics introduced with Westmere (2010) supported dual displays, and then Ivy Bridge (2012) supported three displays on the iGPU.

AMD's Llano (2011) supported dual/triple(1) displays on release and Trinity(2012) brought support for up to four displays at once.

(1)You could do triple displays on Llano, but it took a particular set of boards using Dual-Link DVI to drive two monitors IIRC.


It's Apple's approach to market segmentation: you want more than 1 external monitor? Buy a MacBook Pro with an M1/M2 Pro/Max, for $1000 more.


The M2 Air was a major hardware upgrade from previous Airs. I love mine dearly, from the long battery life to the super cool blue/black colorway. The performance is unreal for something without a fan.


You care about the process node as a main feature? Really? I mean, it's nice and all, but the MAIN FEATURE?


Yes, as computers don't really get faster anymore, and I travel all the time (right now writing this from a hotel restaurant), the biggest change I notice is in battery time, which depends on energy efficiency improvements that usually come with process node change. I'm not being sarcastic.


The M-series is way faster than previous generations even in single-threaded general-purpose code. “Computers don’t get faster anymore” was Intel’s inability to innovate for many years, not an inevitability of physics. We will reach the end of Moore's law someday, but we're not there yet.


Computers get faster in benchmarks. If anything, regular computer tasks like checking your email or editing a file has more latency than 5 years ago. Software bloats to fill whatever container you put it in.


If you use the proper software, it doesn’t. Vim is still as fast as it was 40 years ago if not faster. IO is also getting a lot faster. The SSDs these days are what RAM used to be.


Software gets slower faster than computers get faster.


Why then don’t you say

“I generally care about battery life as the main feature of a computer.”

?


I trust TSMC process numbers (for large changes, not the 4nm bulls*t) more than battery life figures published by a PR/marketing team for my buying decisions. They are much harder to fake. And sure, performance/watt is what matters generally, which is again hard to compare between brands and marketing teams.


Even then, I would say you care about battery life and performance/watt, and think the process node is a better proxy for them than what manufacturers and reviews say.


>Yes, as computers don't really get faster anymore

This was true from the late 2000s to about 2017 but in the last four years or so computers have gotten much, much faster.

Obviously, it is workload dependent. If all you use is a web browser then you may not notice. If you use a computer for any sort of parallelizable processing work the last four years have been transformative.

For workstations, performance has doubled in single-threaded applications since 2019 and since more cores are standard you can see up to a 10x increase in performance. For servers the performance improvement is even greater.

I do synthetic aperture radar processing and image generation and can get through 10x the amount of data in a day than I could in 2019. That's just CPU stuff, for GPU stuff it's even more-- I can do stuff now that was impossible in 2019 due to VRAM limitations.

There have also been specialized accelerations included in CPUs like the JavaScript optimizations Apple baked into the M1, which definitely do make a noticeable difference if all you use a computer for is web browsing.


Yes. TSMC 5nm was incredible. Everything fabbed on TSMC 5nm/4nm has seen a huge generational leap in performance and energy efficiency after years of relative stagnation. It directly improves single-threaded performance and battery life which are two of the absolute most important metrics for user perceived performance.


Most systems have similar IO, OS, runtimes. It makes sense for folks to be primarily concerned with Watts per MIPS.


me too, I love my 2019 Intel MBP, it keeps me warm in the evenings.


I first thought it was sarcasm. (Like someone who says: "the main thing I like about iPhones is their single core performance")


I briefly thought that. Then posted my comment anyways and.. wasn’t disappointed. Lol


Don't take random internet people's sayings at face value; some folks state 1000 things a day, of which maybe 5 are still valid tomorrow.


Process node is HUGE here. M2 is pretty close to just M1 with higher clocks and higher power consumption. This is largely because they didn't get to add transistors.

The N3 shrink doesn't help SRAM one bit, but it does give them more room to improve the CPU.


Caches are important, caches are mostly a question of the transistor wizards (between paradigm shifts at least)


These headlines always make it sound like the chips are fungible.

Is it that they’re basically saturating the entire production capacity with their orders?


Depends on your definition of "supply". The argument is mainly just semantics. They're not wrong per se. A better phrase might have been "production" rather than "supply". I agree, "supply" sounds like I'm buying an off-the-shelf component, not a custom-designed wafer.

In short, Apple is set to purchase every wafer that TSMC produces when they launch their N3 node. For how long should have been, but hasn't been, specified. It's safe to assume that by "entire supply" they mean "until TSMC's next node", at which point TSMC will take orders from other companies for their N3 node.


>It's safe to assume that by "entire supply" they mean "until TSMC's next node"

Is that really safe to assume? Maybe the original paywalled story is more precise with language, but the use of "all available orders" and "100% of the initial N3 supply" seems so vague to me. Could initial mean first month of full scale production?


Looks like all production till somewhere around Q4 2024.

https://www.tomshardware.com/news/intel-15th-gen-arrow-lake-...


Yeah, that's exactly what's happening - TSMC predicts they can make N chips per Y period, and Apple has said they'd like 100% of Y period to be dedicated to making chips for Apple.


It’s totally possible Apple put in an order for 200% and TSMC said they can only do 100%. It’s actually the more likely scenario, and in this case Apple is not a strategizing villain purposely buying up all the supply.

It’s also possible they bought/earned exclusivity on the product. Maybe they funded or helped with the R&D, in which case these things wouldn’t exist without Apple, so it makes sense they should get first access.


Plus an exclusivity agreement to ensure no sharing is happening.


Yes, that is exactly what happened.


At what point does this become a monopoly issue?

Sure, others could hypothetically have outbid them, so on paper it is fair, but Apple has a lot of cash.


The thing is, that Apple cash isn't just buying the production of TSMC, it's funding TSMC's R&D of the next node that they're buying. Without Apple, it's very likely that we would not be continuing to improve nodes at this rate.


2 issues in fact: one is TSMC being the only one capable of making these amazing chips, and the second is Apple buying their entire supply. Does any of that count as a monopoly? I don't think so.


And the entire yield coming from a region of the world that China has said it will annex (at some point).


Why aren't there other companies outbidding Apple?


To give a slightly different answer, because they don't need to.

Intel has their own foundry, and AMD is quite happy with what's a fairly recent move to 5nm. Graphics card manufacturers are more concerned with selling off old cards than making new ones.

Apple isn't going to get very good yields out of 3nm and will likely only get to put it in a few products for now, but for a few months they get to say they'll be the biggest, baddest company on the block, which is what Apple marketing is all about.

Qualcomm gets the entire mobile market either way, and for desktop/server manufacturers Apple isn't an option. Apple 3nm doesn't directly compete with AMD/Intel, so why worry?


Because they don't have the money.

Apple's net income has been at least $20bn every quarter for the past two years[1]. Both Intel and AMD make a small fraction of that[2][3].

[1] https://www.macrotrends.net/stocks/charts/AAPL/apple/net-inc...

[2] https://www.macrotrends.net/stocks/charts/INTC/intel/net-inc...

[3] https://www.macrotrends.net/stocks/charts/AMD/amd/net-income


Not so much that they don't have the money (they don't, I agree), but the bigger issue is that even if they could, their margins aren't nearly as good as Apple's are.


Their potential margins. So they would be taking a large risk to potentially make much less.


Apple is the only company that sells an insanely mass-produced product at insanely high prices. Usually you have to pick one or the other.

So the reason is that no one can afford to outbid them.


Because it's a bad deal. N3 isn't a good node and offers ZERO SRAM scaling. It's a stopgap for N2 GAA.

At the same time, chip shipments have dropped like a rock as everyone braces for recession. Nvidia allegedly even tried to cancel most of their N5 orders because they didn't think they could move the chips.

Apple is pretty much the only company willing to pay large amounts into a risky market. There may not have even been any serious bids aside from their own.


They can't. Apple is the most valuable company in the world and has more cash on hand than many medium sized countries.


What do you mean by "on hand"? Isn't their net worth mostly the company's stock, which they would have to sell for it to be "on hand"?


> isn't their net worth mostly the company's stocks

I think that would better be described as "market cap", which is certainly a much bigger number than "cash on hand." The latter is basically liquid assets, which I think is a much more important metric when we're talking about outbidding competitors for fab space (among other things).

> "Cash on hand can be defined as cash deposits at financial institutions that can immediately be withdrawn at any time, and investments maturing in one year or less that are highly liquid and therefore regarded as cash equivalents and reported with or near cash line items"

Source: https://www.macrotrends.net/stocks/charts/AAPL/apple/cash-on...


Because Apple has more money and higher margins than they do.


Presumably they're trying and that's driving up the price for Apple?


Conjecture: I suspect that Apple is underwriting or paying for the new fabs years in advance, since Apple has wads of cash overseas that it needs to invest (due to US tax laws). Apple can place orders years in advance for all the output of entire fabs, and by paying for fabs it would be able to control costs of exclusive orders for the production. Apple doesn’t want to bid prices up against competitors, so long term contracts make sense. TSMC would perhaps not be able to finance such aggressive moves to new nodes without Apple, and contracts can remove most financial risk for TSMC.

The above is just capitalism at work - the richest company in the world investing in the world’s most expensive toys. Apple has ~5x the market cap of NVidia and Apple can predict their usage of silicon, whereas NVidia’s market is more volatile.

Whether the above is monopoly power is arguable.

I also would strongly suspect that TSMC has political requirements embedded in the contracts. For example Taiwan needs phones that are secure against China (security against state level attacks), and Apple can meet needs that other fab users cannot. There is no doubt in my mind that TSMC are using Apple as part of their silicon shield: https://duckduckgo.com/?q=Taiwan+Silicon-Shield


This phenomenon is called monopsony, and Apple has been doing this since forever (HDs, "retina" displays, glass, semis, etc.). In Apple's case, they essentially subsidize innovation and manufacturing capacity in exchange for temporary exclusivity, and I believe it's been beneficial for the market at large. It's like how guaranteed purchase contracts were instrumental in bringing COVID vaccine development forward.


This makes sense for Apple - they are aiming at the premium end of the market. Having a process advantage over the competition will translate into a performance / battery win. After all, this is what Intel did for decades while they could.


and when the time comes that AI can run on a laptop, Apple will be 3-4 years ahead of everyone else…


The world really needs Intel Foundry Services to succeed, this doesn't look like a healthy market.


Intel is installing high-NA lithography tools from ASML in their new fabs and should be able to manufacture 3nm chips soon. Unfortunately they’ll certainly reserve 100% of that capacity for themselves.


I wonder how many transistor shrink steps we'll get once EUV high-NA tools are in production. Namely, before it becomes insane and too unreliable to be reasonable.


> Namely, before it becomes insane and too unreliable to be reasonable.

Well, the question is the yield rates... but even a low yield rate can be enough if the product can be priced expensively enough.


Every generation of shrink has pushed a company or two out of the fab business. I would guess that around 10-12 angstroms TSMC, Samsung, or Intel will cease being competitive at the leading edge and fold. Most likely that will happen one more time before the shrinking is no longer physically possible while maintaining chip function.


Physical impossibility probably won't be a factor, since, long before that, process improvements will become so expensive that they don't make economic sense. It doesn't help having a marginally more efficient chip for a much higher price.


"When" that happens, I'm extremely curious how and if the remaining incumbents will be able to leverage their dominant positions/technology/ip/etc to establish a footing in whatever new technology supplants the existing.

I'm not really talking about whether they'll be too ignorant to recognize the impending existential risk and take action, but these companies are much more than just ASML customers that know how to use lithograph machines. They have layered stacks of mind boggling processes, expertise and experience.

Will this new technology be so revolutionary and disruptive that all their advantages are rendered obsolete insofar as competing in this new arena.

I guess if anyone could answer any of these questions they wouldn't be sitting around here talking about it on HN :)


All right, then an optimistic estimate of 3 more times to "wrap up" the transistor shrinking with current tech.

Then some dies (or chiplets) will probably grow and consume more electricity: I am thinking GPUs... if they can be fed enough data fast enough, though. Look at Apple's Mx dies.

That said, I still have no idea of the geometry of a transistor on a die to get an idea of how far we are (and I am not even talking about ion doping not working properly anymore).


As for true geometry, current 5nm at TSMC is actually a gate pitch (spacing between the gates) of 51nm and an interconnect pitch (smallest possible distance between the centers of adjacent wires on interconnect metal layers) of 28nm. Intel 4 is 50nm gate pitch and 30nm interconnect. So they're within 1nm of each other. As an example, IBM did create 5nm transistors on silicon nanosheets with GAAFETs in 2017. The spacing between the transistors then had to be 41nm to mitigate quantum tunneling. In the case of TSMC and Intel, the transistors are not at the actual sizes specified, but it is currently not clear that much would be gained if they were, if the pitch still had to be so long.

NOTE: I am an amateur, so please don't take any of that as actionable fact.
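
For a rough sense of scale, here's a back-of-envelope comparison using just the pitch numbers above. This is a crude proxy (real density also depends on cell height, track count and design rules), so treat it as illustrative only:

    # Crude density proxy: contacted gate pitch x minimum metal pitch.
    # Pitch values are the ones quoted above; everything else that matters
    # for real density (cell height, fin depopulation, design rules) is ignored.
    nodes = {
        "TSMC N5": (51, 28),   # (gate pitch nm, metal pitch nm)
        "Intel 4": (50, 30),
    }

    for name, (gate_pitch, metal_pitch) in nodes.items():
        proxy_area = gate_pitch * metal_pitch
        print(f"{name}: {gate_pitch} x {metal_pitch} = {proxy_area} nm^2 (proxy cell area)")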


But the "5nm" label still has meaning insofar as a "3nm" process allows proportionally more density (5/3 increase) right? I guess the trillion dollar question is how many more node shrinks are left before quantum tunneling problems overwhelm solutions to them.


That is correct. We would expect some kind of fractional shrink equivalent to the numbering system used.

Intel expects to shrink from 4 to 3 to 20A and finally 18A. Apparently, decent progress has been made on 20A, so at least 3 shrinks should be expected. TSMC has a roadmap to 2nm, and they expect to slow down after that. Samsung has plans for 1.4nm. My guess is that one major player will fall out around the 20, 18, 14 angstrom area. My present thought is that this will be Samsung. I then expect one or two shrinks before Intel or TSMC falls away. One more after that, and then some other technology may need to step up to the plate.

I had long thought that something like gallium arsenide would replace silicon, allowing for higher frequencies, but cost might be prohibitive. The world after the shrinks may just be a logic and efficiency war.


I wish we would get a picture with all those metrics.

That said, all of those metrics seem huge compared to what EUV high-NA can do: I remember near-"perfect" 7nm "lines" in photoresist.

I wonder what the resolution of the motors to align all that is (I guess piezoelectric). And the interferometers for alignment: X-rays??


Why hasn't Apple just bought TSMC?


Because it's not for sale. Even for all the money in Apple's offshore accounts.

The Taiwanese government would block the sale if a sale could even be agreed.


Why would they? With the current arrangement they get all the benefits and none of the risk. If TSMC falters they can go to whoever supplants TSMC and make the same arrangement and continue to have the latest and greatest chips.


> Why would they?

They could annihilate all competition in the market except for them. Every single Android phone would be forced to use subpar manufacturing from there on out.


The FTC would likely block the transaction precisely because of this. If they didn't block it, Apple would - upon doing the above - immediately subject themselves to anti-trust litigation from a lot of parties with a lot of money. Really hard to see this working out well for them.


So Apple would spend probably over 100 billion dollars to buy TSMC and make them a less profitable company by not serving other companies?

All this would do is weaken TSMC and strengthen other chip manufacturers in exchange for a couple years of absolute chip dominance. But they already have chip dominance, so it is for essentially zero gain.


> make them a less profitable company by not serving other companies?

They become part of Apple. With a stranglehold on chips, the price of Apple products as a whole, or their sales, would skyrocket.


Would they? Apple already has the best chips. What makes you think that the average consumer cares that much about even a 20-30% performance difference? It is not like TSMC is the only competent chip manufacturer. Nvidia largely uses Samsung and they are doing just fine.


Apple already has the best chips, but in this case no one else would have them. Samsung is not doing fine, Nvidia switched off of them after a single generation because their process isn't very good.


> Every single Android phone would be forced to use subpar manufacturing from there on out.

Isn't that kind of how it is already?


EU and the US would not accept a merger in the first place.


It's a Taiwanese company

Why do you think the US or EU would get involved?


why is the EU involved?


If Apple wants to sell their products in the EU, they have to listen to what the EU says.


They honestly already have. They make 80% of the profit in the cell phone market. And even their low-end SE is two years ahead of the competition in performance.

Besides, buying a major supplier out and then getting rid of their other customers is value-destructive.


They just choose to do it now?


Apple doesn't want TSMC. Semi fabrication is a high-risk business whose leader can change at any time. They want the freedom to chase whoever is leading. They can keep those newest chips out of competitors' hands by buying out capacity anyway.


It's hard to imagine that they'd be allowed to. Beyond that though, I think Apple's strategy is to create a monopsony for high-end parts (SSDs, displays, etc.). Buying TSMC would work against that.


Given how expensive (and risky) chip fabrication is at the cutting edge, doesn't it make more sense to just pay extra to use whoever has the cutting-edge node and not take on the R&D risk?


TSMC is worth over $500B.

Apple doesn't have that amount of capital to even make an offer, lol.


Apple has $100bn in available cash; they definitely do have the capital to make an offer if they wanted to.


AAPL has $51B on hand, not $100B.


Where'd you get that number?

Wikipedia seems to put it at a max of under $150b[0]

-------------

[0] https://en.wikipedia.org/wiki/TSMC


That is a severely outdated number; please take a look at the market cap from today:

https://finance.yahoo.com/quote/TSM

it is $454B


Apple's market cap is 2.333T.


It's called a stock swap, and Apple could easily accomplish it from a financial perspective.

They could never achieve it with governments and regulators, however. Like 0.00% chance. Even if the government of Taiwan allowed it, there would be a long line of other governments and regulators blocking it.


They can pay with:

1. Bank loans

2. Apple stock

Completely doable.


I expect it would be more likely they’d just go straight to ASML and start building their own fabs. That said, they may see enough value in their TSMC deal that it’s simply not worth what would be an eye-watering outlay for something that in the end may not really save them all that much.

It’s probably not seen as a bad thing internally that other rivals aren’t able to get on 3nm either, which wouldn’t be true if Apple weren’t buying up the supply.


TSMC has tons of custom IP on top of the ASML machines. Apple could try, but becoming a leading fab company is something that could consume a significant part of their cash without any guarantee they'll succeed, and even if they did, it's a very long-term project.


> I expect it would be more likely they’d just go straight to ASML and start building their own fabs.

Yeah, right. As if it is that simple. Just construct a building and put ASML machines inside and off you go.


easy as spinning up a CRUD app


Because clearly that’s what I just said.

> it’s simply not worth what would be an eye-watering outlay

Sigh.


The problem is not money. It's knowledge. And that knowledge is not in the ASML machines.


I believe there are substantial numbers of patents and/or trade secrets that TSMC holds. Buying ASML alone wouldn't let Apple spin up their own cutting-edge IC production, at least not without a decade or more of extra R&D.

Having said that, it would give them more negotiating leverage against TSMC, since then they would be a monopoly equipment supplier as well as a customer.

Either way, I doubt governments would allow such a purchase. It would be denied on national security grounds and on competition grounds.


I agree - they would probably go straight to ASML.

But not having a fab means they're flexible to jump ship in the future if someone was ever to outpace TSMC.

Also, consider Intel's fabs, which were stuck on 14nm for years. This probably factored into one of the reasons Apple made the jump over to Apple Silicon.


Intel's been going to ASML and building their own fabs for decades now. It's clearly not that easy.


Even if they wanted to try, the $50bn of cash and short-term investments Apple has on hand is about 1/9 of TSMC's market cap. Even with their $100bn or so of longer-term investments, they're still only up to a third of its value.


Taiwan would never allow that. Also, why would Apple want to?


Could it? $500B lying around somewhere? Regulators looking the other way?


Geopolitical Risk


Anti-trust


What would run afoul of antitrust laws? I don’t see it.


TSMC is a critical supplier of Apple and a bunch of Apple's competitors. Making it Apple-exclusive would kill AMD, Nvidia and most Android phones (Samsung and Huawei are the only exceptions using subpar non-TSMC chips) overnight. There is no way in hell that any antitrust regulator allows Apple to destroy their competitors with an anticompetitive purchase of a supplier.


Apple buying TSMC doesn't necessarily mean that they are restricting the supply of their competitors. I think you're confused about what anti-trust is or how it works.

A TSMC acquisition would only be subject to anti-trust if Apple restricted access to supplies that their competitors currently have access to. Buying/owning the means of supply for your competitors isn't subject to anti-trust by itself. Apple would gladly let TSMC continue supplying chips to others. There are fabs built for older feature sizes that Apple doesn't use that are core to TSMC's profitability. They're not going to cut that overnight. Especially under a COO-turned-CEO like Cook.

The monopolistic practice of acquisition would be to hamstring your competition. In this case, the acquisition would be based on synergy.

Samsung, a phone competitor and also screen supplier to Apple, controls the OLED market. They are not subject to anti-trust because they supply a fair market.

A more current example would be the Microsoft-Activision acquisition. Microsoft's defense in that is that they will continue supplying Call of Duty (and similar high-profile games) to competitors.


...but meanwhile it's still fine to corner the market on 3nm...

(nevermind that modern chips are a hybrid of different feature sizes...)


Apple isn't cornering the market. Apple is buying all available supply for actual use.

"Cornering the market" typically refers to controlling supply of a product to restrict supply to the larger market and drive up price.


Consider the sourcing here -- Digitimes rumors are notoriously baseless.


Anybody who follows Apple and their relationship with TSMC knows that this scenario is very likely.

Digitimes gets to be the broken clock that's right twice a day.


When this says 45,000 wafers in March, are we talking about wafer starts or finished wafers? What's the cycle time for 3nm?


The article has no information on what this statement means. I'm sure TSMC hasn't just walked away from their other customers like NVidia and AMD.

TSMC is still in process of ramping up production of their 3nm process (the article mentions they'll reach 45,000 wafers per month by March), so the only sensible reading of this statement seems to be that Apple are getting first X months (where X is unspecified) of production capacity until capacity has ramped up.

It'd be interesting to try to correlate this 45,000 wafers/month number to Apple sales. The only readily available info I could find is that these are 12" wafers and will cost ~$20,000 each. I'm not sure how many good dies Apple is getting out of each wafer. So that's about $1B/month Apple is spending on CPUs, apparently.
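
A back-of-envelope sketch of that math (the die size and yield below are purely illustrative assumptions; only the ~$20,000 wafer price and 45,000 wafers/month come from the article and thread):

    # Rough wafer-cost arithmetic based on the figures in this thread.
    # ASSUMPTIONS: die area and yield are made up for illustration.
    import math

    wafers_per_month = 45_000
    wafer_price_usd = 20_000
    wafer_diameter_mm = 300        # 12" wafer

    die_area_mm2 = 100             # hypothetical phone-SoC-sized die
    yield_fraction = 0.7           # hypothetical early-node yield

    wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
    gross_dies = int(wafer_area_mm2 / die_area_mm2 * 0.9)   # ~10% edge loss, crude
    good_dies = int(gross_dies * yield_fraction)

    monthly_spend = wafers_per_month * wafer_price_usd
    print(f"~${monthly_spend / 1e9:.1f}B/month, ~{good_dies} good dies/wafer, "
          f"~${wafer_price_usd / good_dies:.0f} per good die")

With those assumptions you land at roughly $0.9B/month and a few hundred good dies per wafer, which lines up with the ~$1B/month estimate.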


The article says:

> Apple has reportedly secured all available orders for N3, TSMC's first-generation 3-nanometer process that is likely to be used in the upcoming iPhone 15 Pro lineup as well as new MacBooks scheduled for launch in the second half of 2023.

Note that they phrase it as: first-generation.


"all available orders" might also mean there were some orders already taken by others, and "all available" means "everything that was left"?


OK, so with the 2nd-generation "N3E" due to be available in the 2nd half of the year, that implies Apple has exclusive access to TSMC 3nm for the next few months.


Except for a brief period when HiSilicon was still making leading-edge SoCs, Apple has been the first customer on TSMC's new process technology, and always with a custom process flavor. This is completely normal.


That's not true? For example, I think the first TSMC 4nm chip was by Mediatek, not Apple.


4nm is a 5nm derivative node


What's the evidence this is both true and matters to the original claim? In some sense all nodes will be derivative of the previous one, as they never reinvent the wheel completely.


The evidence is that I am a subject matter expert and there is a very big difference between a derivative (i.e. half) node and a full node. If you want to learn more about full vs half nodes, I would start with Google.


It is interesting that Intel suddenly announced that they would be delaying their orders of this exact chip.


Are Intel getting cutting-edge (3nm) chips made by TSMC? I know they're outsourcing production of some of the older nodes, but that would be a bit surprising. Semiconductor manufacturing is all about scale - selling enough of each generation to fund development of the next generation (which is astronomically expensive). If Intel are buying cutting-edge chips from TSMC then that would seem to be an act of desperation - helping their competitors because they have no choice due to their own troubles.



Wow - desperation indeed. It's amazing how Intel, who had such process expertise, managed to screw up like this, while TSMC and Samsung have adapted to new generations without any real hiccups.


>TSMC and Samsung have adapted to new generations without any real hiccups

Samsung has been having serious issues for years, and TSMC is currently having serious issues with N3. Samsung's issues are too complicated and longstanding to go into, but TSMC's issues are simpler and more recent. Their base N3 process (now called N3B) is a bit of a dud, with Apple being essentially the only major customer. The issue is cost and yields: N3B just doesn't make economic sense for most customers.

TSMC is aware of this and is racing to fix it. The fix is known as N3E, which is an "enhanced" "version" of N3. The reason I put quotes around both of those words is that N3E actually has relatively little to do with N3B, there is no direct path to transition N3B chip designs to N3E, and while it features some improvements, it also has some significant regressions in performance compared to N3B. The highest-profile regression is the total lack of SRAM density improvements compared to N5. Sticking a huge chunk of SRAM on the chip has become very important to modern chip performance, so this is a really big deal.

The process is "enhanced" mostly in terms of economics, not performance, and it's coming an entire year after N3B. This has essentially delayed TSMC's "real" N3 node an entire year while also reducing its performance advantage over competitors. There's been a lot of discussion about the implications of this in the hardware space[1][2]

[1]https://www.semianalysis.com/p/tsmcs-3nm-conundrum-does-it-e...

[2]https://fuse.wikichip.org/news/7048/n3e-replaces-n3-comes-in...


Interesting - thanks!


Resting on laurels. Just like how IBM declined despite its prominence and capabilities, how BlackBerry lost... getting too confident or too comfortable.


Rather, the opposite is true: Intel 7 was too ambitious and delayed for ages as a result, which caused a cascading delay of their entire product lineup for multiple generations.


thanks, that is an interesting tidbit in this context


Remember this is "N3" silicon, everyone wants N3E.


feel like this Intel story's related: https://news.ycombinator.com/item?id=34894943


Why does TSMC do that? When Apple has better options, TSMC will be dumped, as with IBM and Intel before. Why hurt goodwill with other customers?


Because Apple showed up with a big bag of money, presumably.


And, importantly, Apple continues to show up with even more big bags of money. Giving Apple de facto right of first refusal for new node capacity is as much about maintaining TSMC's relationship with its most important customer as it is about this year's bottom line.


I bet they could ask the same amount for 80% of the capacity and Apple would agree.


Because those customers rarely have anywhere else to go, and because those customers would dump TSMC the instant there was a better proposition.

Apple is the customer you want to maintain goodwill with.

Hell, there were times when Samsung was experiencing issues with sourcing their own chips for their own phones because they sold everything to Apple.


Launching new processes and building fabs for them is expensive and very risky. Pre-selling the output helps mitigate some of those concerns.


This is not the first time I've heard of TSMC reserving a very large portion of their production capacity for a single customer, especially in the early days of a new process node. I guess there's an inherent risk in being the launch customer for a new node (unexpected delays, low yield, bugs, etc.) that can only be offset by exceptionally large volume.

Maybe there's a gentleman's agreement among Apple, Samsung, AMD and nvidia that somebody's gotta buy those early wafers, but nobody can do that every single time?


Another part of the reason is that Apple reportedly bankrolls a large amount of the R&D for new nodes for TSMC.


Bingo. TSMC doesn't just give Apple preferential treatment out of the goodness of their heart - Apple pays handsomely for it!


It's business. They aren't going to lose goodwill for selling their product to the highest bidder unless they are reneging on contracts or something crazy like that.


It feels somehow unfair and anticompetitive to buy up all the stock of the best processors so PCs and Android devices can't have any.


They don't really compete with desktop processors. During the initial phases of production, there are still relatively high defects per wafer. But the mobile chips are small, so you lose only a small percentage of chips per wafer due to these defects.

After the defect rate decreases enough, the larger chips will start to get made (GPUs and CPUs).

If this is true, Apple is getting exclusive access for the period of time when the failure rates are highest; other customers will then get their allotments down the road, when defect rates are at acceptable levels for the size of their chips.
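
A quick sketch of why die size matters so much here, using the simple Poisson yield model. The defect density and die areas below are illustrative guesses, not TSMC figures:

    # Poisson yield model: yield ~= exp(-defect_density * die_area).
    # Defect density and die areas are hypothetical, for illustration only.
    import math

    defect_density_per_cm2 = 0.2   # hypothetical early-ramp defect density

    dies_cm2 = {
        "phone SoC  (~1.0 cm^2)": 1.0,
        "laptop SoC (~1.5 cm^2)": 1.5,
        "big GPU    (~6.0 cm^2)": 6.0,
    }

    for name, area in dies_cm2.items():
        expected_yield = math.exp(-defect_density_per_cm2 * area)
        print(f"{name}: expected yield ~{expected_yield:.0%}")

At the same defect density, the big die's yield craters while the phone-sized die stays usable, which is why small mobile parts go first on a fresh node.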


Yaay, I can't wait to have more battery life on my "professional" laptop that has no/poor support for anything I use (games, containers, etc)! /sarcasm

All that extra screen time will help me wait for the day that Linux is stable on the platform!


Can anyone explain to me why these chips don't suffer from quantum tunneling? Isn't there a limit to how small you can go?


There is a limit, but it depends on the materials and transistor geometry, because those shape the potential wells, which affect tunnelling. For example, high-κ dielectric materials. From Wikipedia:

> As the thickness [of the gate insulator] scales below 2 nm, leakage currents due to tunneling increase drastically, leading to high power consumption and reduced device reliability. Replacing the silicon dioxide gate dielectric with a high-κ material allows increased gate capacitance without the associated leakage effects.

In other words, the limit hasn't been reached yet.

Also bear in mind 3nm is a marketing number which has a tenuous connection to a technical feature of the lithography. These days it's really just a number which shrinks with each generation to indicate which generation. The transistors are not actually 3×3 nm in size; they are larger.
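
To put a rough number on the high-κ point, here's an equivalent-oxide-thickness (EOT) calculation. The permittivity values are typical textbook figures and the thickness is an illustrative assumption, not a real process parameter:

    # A high-κ gate dielectric of physical thickness t behaves, capacitance-wise,
    # like SiO2 of thickness t * (κ_SiO2 / κ_highκ). Values are textbook-ish,
    # used here purely for illustration.
    K_SIO2 = 3.9    # relative permittivity of SiO2
    K_HFO2 = 22.0   # approximate relative permittivity of HfO2

    physical_thickness_nm = 3.0   # hypothetical HfO2 gate dielectric thickness
    eot_nm = physical_thickness_nm * (K_SIO2 / K_HFO2)

    print(f"{physical_thickness_nm} nm of HfO2 ~ {eot_nm:.2f} nm of SiO2 (EOT)")
    # ~0.53 nm EOT: same gate capacitance as sub-1nm SiO2, but the physical
    # barrier is ~3 nm thick, so direct tunneling leakage is far lower.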


If an M3 Mac has over 128GB RAM, I'm upgrading again

96GB is just under the threshold where it makes a difference, for me


Imagine how popular the M3 and friends would be if people could buy it off Newegg/Amazon and use an operating system of their choice?


No more popular than it is today. The vast majority of regular people don't give two cents about what is under the hood - just what a computer can or can't do for them.


Just curious: what do you need 128GB of RAM for? I have 16GB on my desktop and never feel constrained.


I've seen my Firefox process peak at 67GB (uncompressed) when viewing certain webapps, and it regularly goes over 16GB (uncompressed; about 10GB compressed). With only 16GB RAM that's too much swapping. It's slow due to the swapping. I can't even do that any more, as that swap also needs more free disk space than I have.

I feel constrained in 16GB all the time, and it's mostly due to browser memory usage, whichever browser I use (Firefox, Safari or Chrome). However, there are other things running too, such as Discord, Telegram, VMware, LibreOffice or MS Office; the GBs add up. If I dare to compile something big, I have to close other things. I'm not even one of those media creators or gamers using heavy graphics!

So, I currently have a late 2013 MBP with 16GB which is going strong still. It's still great, except for RAM and battery. I'm thinking of upgrading to an M2 Max 64GB or 96GB, because I don't want to be caught short with only 24/32GB the way I've felt heavily constrained by 16GB a few years after buying the 2013 MBP.

And I want to get into modern graphics, physics simulations, blockchain simulations, terabyte-scale database engine design, neural nets and chip design more than I have been. I expect those to be memory heavy, enough that even 96GB RAM will be a design constraint. (Then again that's what I said when I bought the 2013 MBP, and in practice I ended up using datacentre servers for the heavy stuff, so I'm still weighing up the purchasing decision, whether it's worth it).


And the rest of us plebs are to be stuck with the Samsung fab again. Great.


Rich company pays large sum of money to other rich company for special treatment, peasants HORRIFIED.


Buy an iPhone then :)


[flagged]


Why do you think Samsung is any better?


A put on Apple stock might be a great hedge against China invading Taiwan.


Considering how badly the launch of the iPhone 14 Pro went (I had to wait 45 days before getting mine) and the sales of the "iPhone 14" (which is just an iPhone 13, basically), this is just expected.


Saying that the launch failed because it had such a long wait time (assuming due to popular demand) has a sort of Yogi Berra quality that I appreciate. He was the one who said, "No one goes there anymore; it's too crowded".

If the launch was delayed because of technical issues then that would be another matter. I'm really not too up on the differences between phones and the specifics of one launch or another these days. When my current iPhone (which I'm reasonably sure is an 11) encounters some issue, I'll go and get whatever number is current at that time.


> Saying that the launch failed because it had such a long wait time (assuming due to popular demand) has a sort of Yogi Berra quality that I appreciate. He was the one who said, "No one goes there anymore; it's too crowded".

Do we really know what happened? Lack of silicon? High demand? Logistics failures?


Has everyone already forgotten all the covid lockdowns in China?


Were you around for the 6+ and 6s+? Those phones were unobtainium for months and months. The 14 Pro being unavailable for 45 days is nothing. And the 14 isn't selling because people saw right through Apple.


I think that was still the trailing edge of the whole "lining up outside the shop at 2am" era. I also did this once for the first iPad release :) Which really did turn out to be a groundbreaking product release, though the actual hardware itself was still quite flawed. It needed a few iterations to become mature.


> We're you around for the 6+ and 6s+?

yep, but I bought it late in its lifecycle


> which is just an iPhone 13, basically

It's a repairable iPhone 13. I went with an iPhone 14 over the 14 Pro because of that -- I don't care as much about the "Pro". What I care about is that I don't want the repair costs for a cracked glass to be so high that it's cheaper to buy a 2 year old used phone.


I'd put heavy quotes around "failed".

There was a general slump in the smartphone market, and Apple was the only one increasing their sales in some months.


Plus there were pandemic issues at the factories in China and in the logistics chain.


Once again we see that Apple is highly exposed to geopolitical risk should China decide to take action against Taiwan. TSMC's US fabs won't be ready for at least a couple of years yet.


It's good to have deep pockets...


Do we know if Apple is getting rid of the notch for M3 macbooks in favor of the "dynamic island" solution from iPhone 14 Pro?


That would make no sense in the context of a MacBook. The phone's notch and Dynamic Island form a freestanding element. The notch on the MacBooks is embedded in the menu bar. There would be no "island". It might be possible for Apple to add Live Activities to the notch on the MacBook (that is what those dynamic notices are called), but only if more software publishers put in the effort.


I am talking mostly about replacing the ugly cut-off at the top with a "smaller hole" instead, not really caring about any software functionality from iPhones.


That notch contains a camera, a couple of sensors and some mounting hardware. Eventually Apple might be able to engineer a smaller version, and maybe eventually have them under the display, but so far that has proven difficult. In the meantime, the notch is a clever method to increase the screen size without making the overall case larger, and it does so by making use of a part of the screen that is normally little used.


They already did the miniaturization with the iPhone 14 Pro, which is why I asked. I don't really need to go through a rationalization of why this ugly hack is good for me.


The components on the iPhone trade surface size for depth. They are much thicker than the lid of a laptop, using much of the thickness of the phone's body.


Get ready for all of the posts about Apple silicon trouncing Intel and AMD.


time to add more OS bloat so that only the 3nm process can support it.



