Playstation 3 Architecture (copetti.org)
401 points by bangonkeyboard on Oct 20, 2021 | 89 comments



This is a pretty good overview. One fun trick we would regularly do was to move all the background audio off the disk and into GPU RAM. We'd stream just enough that we wouldn't saturate the bus, but could get back 20-30 MB of system memory.

There was also that time we ran out of open file handles ahead of a key demo at one of the tradeshows. I remember rewriting the whole audio streaming subsystem to piggyback on the seek-free texture loading system so that we could use that single big file as a source for both streaming texture and audio data. Did that in 24 hours, less than 2 days ahead of the show. Fun times.

[Edit]

Another fun thing from that era was that almost no one other than the in-house Sony studios used the SPUs well. The X360 came out just ahead and had tooling that was miles ahead (PIX!), so everyone started from the X360. None of the workloads were vectorized, so it was absolutely brutal to get them to fit into the SPUs. For a long time a good chunk of that silicon was just idle. If you did start from the PS3, though, because of the vectorized workloads you had insane cache coeheranch and usually ran better on the X360/PC as well.


If I recall correctly, that was the main reason why PhyreEngine came to be; Sony eventually caved in to complaints and released proper tooling for the PS3.


coeheranch is certainly an interesting way of spelling coherence.


It's worth saying that the whole site is an absolute gold mine of fantastic information about console architectures over the ages, and I particularly like the way he always highlights how the DRM has been broken. It's really fascinating to see the convergent and occasionally divergent evolution that goes on in these machines, speaking to, but slightly independent from, the technological trends of their time.


For the adventurous, I can't find words to express the absolute wealth of information found in the 'PS3 Developer Wiki' [1]. Countless people around the world voluntarily sharing their research to build one of the most complete sources of information about an enigmatic console.

[1] https://www.psdevwiki.com/


If anyone is interested in a less in-depth, but still informative overview of game console architectures and DRM cracking history, ModernVintageGamer on YouTube has a lot of videos on the subject that I've enjoyed.

https://www.youtube.com/channel/UCjFaPUcJU1vwk193mnW_w1w


While MVG does a good video production job, he tends to get a lot of information wrong [1] and never corrects it after he's told [2]. Don't get me wrong, he's done very good videos in the past, but I wish he would take better care of his content.

[1] https://www.reddit.com/r/Games/comments/fnl0o1/why_playstati...

[2] https://news.ycombinator.com/item?id=26222361


Yeah, he just seems to read stuff online and never checks with actual sources. I wish he'd talked to us when doing the Wii security video, he got a lot of things wrong.


Doesn't know the difference between megabits/megabytes either.

https://youtu.be/A33wC-ltmdE


They're not public, but I would like to say that I think the official PS4 and PS5 developer pages are also wonderful resources.


Exactly, this is why I love HN. I would never have found this. I will shelve it for a long weekend. Wonderful stuff!


Yeah this is the sort of content that makes the rest of the noise worth it.


Alternative edition without styles: https://classic.copetti.org/writings/consoles/playstation-3/

(ideal if you use accessibility tools, like text to speech, or want to read from an eBook or legacy browser)

If you spot a mistake, please log an issue on the repo (https://github.com/flipacholas/Architecture-of-consoles). Thanks!


Really cool. I remember when Sony really pushed the idea that the Cell arch would power so many different things, shame that never took off.

I wonder if that university still uses their PS3 based supercomputer cluster


I worked on PS3 titles that used SPUs and I can't tell you how relieved I was to find that the PS4 did away with them. The fear that the next PS would have "64 SPUs and a marginally faster PPU" was real.

The analogy I gave to my friends at the time was working in a restaurant kitchen with a tiny stovetop and two dozen microwave ovens; it does some things really fast, but only if you can cut them up into small pieces that are microwave-friendly.


I'm not an expert at all, but I seem to recall issues also arising from the inability to do out-of-order execution.


Both the PS3 and X360 CPUs featured no out-of-order execution, no speculative execution, no automatic prefetch, and a 500-cycle latency for hitting main memory outside of cache.

The hardware design was based on: 1) this is currently the only way to hit that clock rate at this cost; 2) hyperthreading effectively halves all latencies; 3) it's a fixed platform, so the compiler should know exactly what to do. Point 3 was pretty laughable. It is only technically possible if you have huge linear code blocks without branching or dynamic addressing.

I would remind junior programmers that the PS2 ran at 300 MHz and had a 50-cycle memory latency, while the PS3 ran at 3 GHz but had a 500-cycle latency. So, if you are missing cache, your PS3 game runs like it's on a PS2.
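
To put rough numbers on that (just a back-of-the-envelope sketch using the rounded clock rates above; the real clocks were closer to 294 MHz and 3.2 GHz, which doesn't change the conclusion):

  /* Rough wall-clock cost of a full cache miss, using the rounded figures above. */
  #include <stdio.h>

  int main(void) {
      printf("PS2 miss: ~%.0f ns\n",  50.0 / 300e6 * 1e9);   /* ~167 ns */
      printf("PS3 miss: ~%.0f ns\n", 500.0 /   3e9 * 1e9);   /* ~167 ns */
      return 0;
  }

Same wall-clock cost per miss, but the PS3 throws away ten times as many instruction slots waiting for it.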

On the other hand, a lot of people overreacted to the manual DMA situation of SPU programming. DMAs from main memory into SPU memory had a latency of... 500 cycles! Once people put 2 and 2 together, SPU programming became less scary. Still a pain in the ass, and a lot of work to reach peak performance, but more approachable for sub-optimal tasks.
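
If it helps, here's a minimal sketch of the double-buffering pattern people settled on, assuming the Cell SDK's spu_mfcio.h intrinsics (mfc_get, mfc_write_tag_mask, mfc_read_tag_status_all); process_chunk and the chunk layout are placeholders, not anything from a real codebase:

  /* Double-buffered SPU streaming sketch (assumes spu_mfcio.h from the Cell SDK;
     process_chunk and the chunk size are placeholders). */
  #include <spu_mfcio.h>

  #define CHUNK 16384                       /* 16 KB, the max size of a single DMA */
  static char buf[2][CHUNK] __attribute__((aligned(128)));

  extern void process_chunk(void *p, unsigned n);   /* placeholder for the real work */

  static void wait_tag(unsigned tag) {
      mfc_write_tag_mask(1 << tag);
      mfc_read_tag_status_all();             /* block until that tag's DMA completes */
  }

  void stream(unsigned long long ea, int nchunks) {
      int cur = 0;
      mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);           /* kick off the first fetch */
      for (int i = 0; i < nchunks; i++) {
          int next = cur ^ 1;
          if (i + 1 < nchunks)                           /* prefetch the next chunk... */
              mfc_get(buf[next], ea + (unsigned long long)(i + 1) * CHUNK,
                      CHUNK, next, 0, 0);
          wait_tag(cur);                                 /* ...while waiting only on the current one */
          process_chunk(buf[cur], CHUNK);
          cur = next;
      }
  }

The whole trick is that the next DMA is in flight while you chew on the current buffer, so the 500 cycles mostly disappear behind useful work.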


Also that damn load-hit-store penalty. PPEs and Xenon cores could only detect and stall if a read was for an address that was currently being flushed out of the write buffers. Most sane uarchs will bypass out of the store queue, but those cores needed to flush all of the way out to L1 and read it back in.
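
A toy example of the kind of code that hit it (not from any real title, and a compiler may clean up the first version when it can prove the pointers don't alias; when it can't, you get the store-then-reload pattern every iteration):

  /* Load-hit-store toy: the store to *sum is still in flight when the next
     iteration reloads it, so each pass pays the flush-and-reload penalty.
     Accumulating in a local (register) and storing once avoids it. */
  void slow_sum(int *sum, const int *src, int n) {
      for (int i = 0; i < n; i++)
          *sum += src[i];            /* store *sum, then immediately reload it */
  }

  void fast_sum(int *sum, const int *src, int n) {
      int acc = 0;
      for (int i = 0; i < n; i++)
          acc += src[i];             /* stays in a register */
      *sum = acc;                    /* one store at the end */
  }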

Also, IIRC, shift by variable amount was microcoded and would take a cycle for each bit distance shifted.


> Also, IIRC, shift by variable amount was microcoded and would take a cycle for each bit distance shifted.

Wait, it was possible to ship a CPU in 2006 without a barrel shifter? ARM1 was from 1985!

If it was something silly like "only constant bit shifts use the barrel shifter", I'm surprised that compilers didn't compile variable shifts as a jump table to a bunch of constant shift instructions... :)
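
For what it's worth, the jump-table idea would look something like this in C; just a sketch of the shape a compiler could lower to constant-shift instructions, not something any compiler of the era actually emitted as far as I know:

  /* Variable shift as a switch over constant shifts; a compiler can lower the
     switch to a jump table where each target is one immediate-shift instruction. */
  unsigned shl_var(unsigned x, unsigned n) {
      switch (n & 31) {
          case 0: return x;
          case 1: return x << 1;
          case 2: return x << 2;
          case 3: return x << 3;
          /* ...cases 4 through 31 continue the same way... */
          default: return x << (n & 31);   /* stand-in for the remaining cases */
      }
  }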


>Wait, it was possible to ship a CPU in 2006 without a barrel shifter? ARM1 was from 1985!

I was absolutely floored too.

And it made very little sense to me either. PowerPC has probably my favorite bit-manipulation instruction of any mainstream ISA out there: rlwimi

https://www.ibm.com/docs/en/aix/7.2?topic=is-rlwimi-rlimi-ro...

Any logical shift, rotate, bit-field extract (by constant) and more, all in one single-cycle instruction that's been included since the earliest POWER days. They had a barrel rotator in the core for that instruction.
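
For anyone who hasn't met it, this is roughly what rlwimi computes, sketched in C (using IBM's bit numbering, where bit 0 is the MSB; this models the architectural definition, not how the hardware implements it):

  /* rlwimi rA,rS,SH,MB,ME: rotate rS left by SH, then insert it into rA under
     the mask covering bits MB..ME (bit 0 = MSB, mask may wrap around). */
  #include <stdint.h>

  static uint32_t rotl32(uint32_t x, unsigned sh) {
      sh &= 31;
      return sh ? (x << sh) | (x >> (32 - sh)) : x;
  }

  static uint32_t mask_mb_me(unsigned mb, unsigned me) {
      uint32_t m = 0;
      for (unsigned i = mb & 31; ; i = (i + 1) & 31) {
          m |= 0x80000000u >> i;            /* set bit i, counting from the MSB */
          if (i == (me & 31)) break;
      }
      return m;
  }

  uint32_t rlwimi(uint32_t ra, uint32_t rs, unsigned sh, unsigned mb, unsigned me) {
      uint32_t m = mask_mb_me(mb, me);
      return (rotl32(rs, sh) & m) | (ra & ~m);
  }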

The only thing that makes any sense to me is that somehow it would have been too expensive to rig up another register file read port to that sh input, so they just pump it as many times as needed with sh fixed to 1. They seemed to be on some gate-count crusade that might have paid off if they had been able to clock it faster at the end of the day. It took the industry a bit to figure out that ubiquitous 10 GHz chips weren't going to happen, and the hardest lessons would have been right in that design cycle. : \

> If it was something silly like "only constant bit shifts use the barrel shifter", I'm surprised that compilers didn't compile variable shifts as a jump table to a bunch of constant shift instructions... :)

Variable shift isn't the most common op in the world, so as far as I know it was just listed as something to avoid if you're writing tight loops.


The SPU approach is pretty much what a GPU does, but instead of 64 they have hundreds if not thousands.

I think the main issue with the Cell design was that it was too "middle road" and wasn't specialised enough in either direction.


It really isn’t - the individual elements in a GPU can access RAM directly, while you needed to wheelbarrow data (and code!) to and from SPUs manually. This was the major pain point, not the extreme (for that time) parallelism or the weird-ish instruction set.


It depends on what you mean by "access RAM directly"; both SPUs and GPUs (at least on the PS4) can read/write system memory. Both do this asynchronously. If you want random access to system memory, you will die on a GPU even sooner than on an SPU, since latencies are much bigger.

So there is not much difference imho and the GP is correct.


SPUs read memory asynchronously by requesting a DMA; GPUs read memory asynchronously too, but this is not explicitly visible to the programmer, and serious infrastructure is devoted to hiding that latency. The problem with SPUs was never that they are slow, rather that they are outright programmer-hostile.


> but this is not explicitly visible to the programmer, and serious infrastructure is devoted to hiding that latency

What do you mean? You can hide latency on the SPU by double-buffering the DMA, but in a shader there is no such infrastructure at all and no way to hide it like on the SPU; you just block until the memory fetch completes before you need the data.

> they are outright programmer-hostile

Depends on the programmer, I guess. I enjoyed programming SPUs and don't personally know anybody who had complaints. I've only read about the "insanely hard to program PS3" on the internet and wondered "who are those people?". It's especially funny because the RSX was a pitiful piece of crap with crappy tooling [+] from Nvidia, yet nobody complaining about SPUs mentions that.

[+] Not an exaggeration. For example, the Cg compiler would produce different code if you +0/*1 random scalars in your shader, and not necessarily slower code either! So one of the release steps was brute-forcing this to shave a few clocks off the shaders.


> The SPU approach is pretty much what a GPU does, but instead of 64 they have hundreds if not thousands.

Eh, only if you count every SIMD lane as a separate "core" like GPU manufacturer marketing does. More realistically, you should count what NVIDIA calls SMs, where the numbers are more comparable (GeForce RTX 3080 has 80, for example).


PS3: 6 SPUs (1 is system reserved), each 4-way SIMD.

PS4: 18 GCN CUs, each with four 16-wide SIMDs, i.e. 72 SIMDs; each of those is 4 times wider, so the PS4 has the same number of lanes as 288 4-way SPUs would.
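
Spelling out that lane arithmetic, in case it helps:

  /* Lane counting behind the comparison above. */
  #include <stdio.h>

  int main(void) {
      int ps3_lanes = 6 * 4;          /* 6 usable SPUs x 4 lanes      = 24   */
      int ps4_lanes = 18 * 4 * 16;    /* 18 CUs x 4 SIMDs x 16 lanes  = 1152 */
      printf("PS3: %d lanes, PS4: %d lanes (%d 4-way SPU equivalents)\n",
             ps3_lanes, ps4_lanes, ps4_lanes / 4);
      return 0;
  }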


> I wonder if that university still uses their PS3 based supercomputer cluster

Almost certainly not. A typical lifetime of a supercomputer is around 5 years, give or take. After that the electricity they consume makes it not worth continuing to run them vs. buying a new one.

See e.g. the Cell-based Roadrunner, in use 2008-2013: https://en.wikipedia.org/wiki/Roadrunner_(supercomputer)


5 is a bit young.

ORNL's Titan lasted about 7 years: 2012 through 2019. Its predecessor, Jaguar, was 2005 to 2012. Also 7 years.


AFAIK both Titan and Jaguar received significant mid-life upgrades. So in such a scenario 7 years sounds reasonable.

(At a previous job, we had a cluster that was about 15 years old. Of course, it had been expanded and upgraded over the years, so I'm not sure anything was left of the original. Maybe some racks and power cables.. :) )


Notice the N in National Labs, entirely different order of magnitude.


The hype was probably mostly from "Crazy" Ken Kutaragi, who had a knack for dialing the Playstation hype beyond 11. The PS2 was already supposed to replace your home PC and revolutionize ecommerce and online gaming and plug yourself into the Matrix [1]. In the past he was compared to Steve Jobs by some, but in hindsight he seems more like Sony's Baghdad Bob.

[1] https://www.newsweek.com/here-comes-playstation-2-156589


At least the home computer part of that was a shtick to avoid some European import taxes. https://www.theguardian.com/technology/2003/oct/01/business....

I've heard on the grapevine that the PS3's OtherOS facility was internally thought of as another go at the same idea. "Look, judge, it's a general purpose computer for reals this time. Your own universities are using it in super computing clusters, without ever launching a game".


Being forced kicking and screaming into the multicore on the PS3 improved our PC codebase by quite a good margin. So what I'm trying to say is that the hype was real, we have gone multicore, just not with the Cell architecture.


Did the 360 not drag things into multicore very similarly? Pretending its CPU has one core is a pretty similar experience to pretending a PS3 has no SPEs.


Afaik these university projects were done to test out how Cell-based supercomputers would behave. They're not competitive with up-to-date Supercomputers.

In my lab I found a couple of PS3s lying around several years ago, that hadn't been used in quite a while. (One of them may or may not have been adopted for less scientific purposes …)


If you are referring to the Barcelona Supercomputing Center, I think it's been decommissioned (their project is now archived https://web.archive.org/web/20090426190617/https://www.bsc.e...). Though this is expected in my opinion (Buy equipment -> Produce research -> Move to the next thing). IIRC their big thing is now the MareNostrum 4 supercomputer (https://en.wikipedia.org/wiki/MareNostrum).


In 2012 the Air Force Research Laboratory built a supercomputer cluster from 1760 PS3s that I think was in practical use for a while[0]. I recall reading online that the Air Force struck a special deal with Sony to buy some of the last remaining PS3s that had not been updated to no longer support Linux installation during manufacturing, but that isn't mentioned in this source.

>The Condor Cluster project began four years ago, when PlayStation consoles cost about $400 each. At the same time, comparable technology would have cost about $10,000 per unit. Overall, the PS3s for the supercomputer's core cost about $2 million. According to AFRL Director of High Power Computing Mark Barnell, that cost is about 5-10% of the cost of an equivalent system built with off-the-shelf computer parts.

>Another advantage of the PS3-based supercomputer is its energy efficiency: it consumes just 10% of the power of comparable supercomputers.

I wonder how significant the cost and energy savings were by the time the project was finished, and how long the cluster was actually used.

[0] https://phys.org/news/2010-12-air-playstation-3s-supercomput...


PS3 clusters are limited in supercomputing tasks largely by RAM, each PS3 having 256 MB, which is anemic.


I can think of a ton of number crunching you can do where each node having 256 MB wouldn't be awful, like brute-forcing stuff where you can just segment the use case across nodes. But it would certainly be a specialized case.



I remember speculation at the time that the PS3 would be able to borrow from other Cell-powered appliances in your house while it was running, e.g. your toaster, to enhance its power.

Actually, I just did a quick Google on this and it was "Sony" themselves that appeared to mention this! [1]

[1] https://www.tomshardware.com/news/cell-broadband-engine-ps3-...


There was even a Penny Arcade comic about that: https://www.penny-arcade.com/comic/2002/08/07


Hmm, probably not. Seems like the biggest PS3 cluster had 1,760 units and was rated at 500 teraflops (single precision float, presumably).

An NVIDIA A100 GPU does about 20 teraflops. So you only need 25 of those chips to match the theoretical rating, and they have many other advantages like much higher memory per core, etc.


Have you got the NVIDIA numbers right? An NVIDIA DGX Station A100 (a desktop/workstation-sized computer with 4x A100 that draws 1.5 kW of power) is rated at 2.5 petaFLOPS, so five times the PS3 cluster's rating.

Also, the PS3 apparently drew up to 200 W, so a cluster that size would have drawn 352 kW.


The units being used here are likely different. In most press releases Nvidia uses their "tensor core" performance, usually with either sparsity or 16 bit data. A single A100 is said to have 320 teraflops of "tensor float" performance but only 19 teraflops of "normal" full FP32 performance.

This is way out of my field so I don't know the full implications, but my understanding is Nvidia cards can only reach these speeds at the loss of precision or full functionality, so it's an apples-to-oranges comparison versus non-Nvidia chips.


People forget that this was 2006. CUDA was not invented yet. And arguably didn't take off until Alexnet.

NVIDIA had not even released the API for writing vectorised C code yet.

Hence all those 2007 supercomputer stories. They actually had a genuine use, because the Cell was fully supported in the Linux kernel by IBM.

These guys did incredibly well for their time, and they were entirely right about vectorised code.

They were just superseded by the longer term trend of tying together multiple silicon dies and ASICs for HPC.


CUDA was released in beta in November 2006, so it was invented by this point, even if it wasn’t well known.


I mean, the Cell SPU design is everywhere, it's just now part of every GPU and called compute shader. Vector processors are a good idea, but they have to be accessible to programmers through standard, easy-to-use, ideally-cross-platform APIs or invariably they won't get used to their full potential.


If anyone's interested in more writeups about console architectures from this person: https://www.copetti.org/writings/consoles/


minor correction

> The accelerators included within PS3’s Cell are the Synergistic Processor Element (SPE). Cell includes eight of them, although one is physically disabled during manufacturing.

This was disabled in software (by syscon) early on in the boot process. Many people "unlocked" theirs without stability issues. My understanding was that it was a yield thing, and definitely not intended for general use ;)

> This makes you wonder if IBM/Sony/Toshiba hit a wall while trying to scale Cell further, so Sony had no option but to get help from a graphics company. Interviews from early 2nd party developers confirm this: https://www.ign.com/articles/2013/10/08/playstation-3-was-de...

That's also why the PS3 has two separate RAM pools, compared to the Xbox 360's unified memory; Sony was trying to do that as well.

> HDMI connector

I don't know if anyone remembers this, but Sony was talking about multi-display gaming, such as having a status display. There was a prototype that had two HDMI ports and three Ethernet ports; Sony claimed it would also be a home server and router at the time. Devkits did include two HDMI ports, and I suspect that the two-screen claim was something they made up almost on the spot, but who knows.

https://commons.wikimedia.org/wiki/File:PS3_e3_2005_prototyp...


>https://www.ign.com/articles/2013/10/08/playstation-3-was-de

Truncated URL. Was it copied from another comment?



I remember the Idle Thumbs podcast made a running joke of the fact that 2010 was supposed to be "Year of the PS3" because I believe Kaz Hirai had suggested that it would take multiple years for developers to unlock the true power of the system.

Which is an excellent way to brand how much of a pain it was for Devs to wring performance out of it.


This wasn't unusual sentiment for the time. Historically, consoles had a stark difference in quality between the initial batch of games for any system, and the last AAA titles for it. Compare Super Mario to Super Mario 3 on the NES. Or Donkey Kong Country vs Super Mario World on the SNES. Or LoK: Soul Reaver to Tomb Raider 1 on the PS1.

Some of the later PS3 games still look incredible. The biggest graphics limitation is resolution.


And some of those games on the PS3 ran at 20 fps. Hitting 30 when nothing was happening on screen.


There's an element of truth in that, but also there's the fact that, as a developer, if you have to choose between a) shipping on a new console with a smaller install base, b) shipping on an old console with an established user base, or c) shipping on 2 platforms at once,

then option (c) is only possible if you reduce the game to the lowest common denominator (and option (a) becomes more appealing 2-3 years after the fact).

Nowadays the difference between PS4 and PS5 (or Xbox One and Xbox Series) is less significant, but it's still a non-trivial amount of work to maintain; it would have been vastly more difficult with orders of magnitude of difference in performance (between a PS2 and a PS3), or different programming paradigms (PS3 Cell vs PS4 x86).


I think it's taken 3-4 years for devs to utilize the potential of a given console from the 5th gen onwards. I'd be curious to read of a console that was a dream to develop for as soon as people got their hands on a devkit.


If the author sees this: Well done, really nice read.


Thanks. I was worried that, since this console has so much going on inside, if I ended up writing too much (like 100k words) about it the reader might get too tired and lose interest. In the end, I wrote 20k words with lots of diagrams to help out. I also adapted the site to properly add citations, in case the reader wants to know more about a specific topic.


The PS3 dev wiki [1] is also a great resource on this topic.

[1] https://www.psdevwiki.com/ps3/Main_Page


I find the PS3 historically interesting partially because the convoluted architecture was a natural evolution of Sony's continuing steps to make developing for their consoles harder and harder, driven partially by their arrogance at past success.

The PS1 was basically a repackaged project from their Nintendo partnership. They didn't really have time to develop proper development tools for it, and almost as a consequence of that, the development environment was very scrappy - it hooked into existing PCs and included a bunch of libraries that developers were somewhat familiar with using. As a result, the developer toolset was relatively easy to use. It allowed many developers to get started making 3D games and experiences very quickly. This caused developers to be swayed to the Playstation ecosystem early, and drove all 3D resources into Playstation development away from the Sega Saturn, which had the typically convoluted development environment from past generations (further exacerbated by Sega bolting together additional chips onto the Saturn to try to compete with the PS1).

PS2 came along, and Sony was already deviating from their easy-to-use console debut. Their "Emotion Engine" was notoriously hard to develop for, but Sega, with the Dreamcast, didn't have the legs to compete with Sony's momentum from the PS1 and quit the console business. Nintendo also screwed the pooch that generation with the GameCube, and Microsoft didn't seem to be a threat with the Xbox, which was a major flop in Japan.

Along comes the PS3, and Sony goes full speed ahead on their hubris: an expensive console, impossible to develop for, with an architecture somewhat reminiscent of the Saturn's hodgepodge of chips. At this point they completely lost their way from what made the PS1 set them up for multiple generations of success.

It's interesting to consider how the ecosystem matters when developing these hardware products, and how easy it seems for companies to lose sight of that.


On one hand, it's obviously the smart thing to keep your architecture standard in a world where most games are released cross-platform. Consoles are essentially just PCs with a set spec and in a clean wrapper at this point. But on the other hand it's also a bit disappointing that it's not really an option to make strange/interesting new architectures for a console.

Consoles are one of the last holdovers from the era where computers were designed with the hardware and software built and integrated from the bottom up into a cohesive package. Where they'll do a new one only every 7 years or so, and users just expect to have to buy new software for them every time. So it sounds like a space where they could get away with weird innovations and risks, but because of the need to keep cross-platform development feasible, it's not really an option.

I don't know. I see why it is the way it is, but I'd like to live in a world where consoles could get away with weirder things. Nintendo has kind of occupied that space with their touch screens and motion controls and whatnot, but the Switch is also more standard than ever before. Probably a net good for consumers though, with how much software is able to be ported to it.


What's the benefit of being weird today? A console can achieve pretty amazing graphics with basically off-the-shelf components, so the company isn't really buying much by rolling their own solution.

3D was still being figured out in the 90s, which is what led to so much diversity in product lineups, not just for game consoles but for video cards, graphics APIs, etc. Being weird/unique was necessary because everyone was treading in uncharted territory.

Prior to the 3D era, game consoles used repurposed/customized off-the-shelf components. The 6502 and its variants were used in all kinds of consoles and computers, and the 68000 was used in many, many more.

Gaming consoles have come full circle.


> What's the benefit of being weird today

Same benefit it's always had: fun and innovation


Interestingly, Apple is moving back towards the old console model. You now have a bespoke CPU/GPU platform from Apple running their own OS. I'd say we're already seeing the performance benefits of owning/designing/developing the whole stack.


Apple likes to have their own platform tools (Swift, Metal, etc), but I wouldn’t exactly call ARM and PowerVR[0] bespoke.

[0]Yeah Apple says the GPU[1]is entirely their design at this point, but then quietly reached some kind of agreement with Imagination in the past year or so.

[1] Though considering how often GPU drivers need game-specific updates and how fast GPUs change and improve, each GPU is almost like its own little bespoke platform.


The old 8- and 16-bit home computer model, not consoles.

The IBM PC was an exception, and only because IBM couldn't prevent Compaq from carrying on.


Sony is a hardware company. So they thought they were great at making and evaluating hardware.

Ironically, the CEO of nVidia once stated that nVidia is a software company - the graphics cards are just dongles that monetize the software. This is what Sony missed.

The story I heard was that when Sony finally went to buy a graphics card, they wouldn't pay for the software side of things. When nobody could get any performance out of it, they went back, cap in hand, and asked for the software to go with the hardware. I don't know if it's true, but I developed on the PS3 and it feels true.

As I understand it, it was US developers and leaders like Mark Cerny who begged Sony to stop fucking around and just make a console with a big CPU and a big GPU.


As I mentioned in another comment, I think PhyreEngine came out of it.


> impossible to develop for

This is what I remember from the Cell (both BE and the “serious” version IBM used in blade servers). It’s a fascinating machine, but being inconvenient to write software for is a key weakness.


I'm having the weirdest problem with my old PS3: the HDMI chip got overloaded in some manner and I can't get it to connect to my monitor anymore. My monitor is fine with everything else, but on the PS3 it is just a black screen. Weirdly, I can get it to generate a signal that will display on the monitor briefly, up to a certain point, but not after. I put in a different drive and get the initial setup screens, so the video circuitry and CPU still work, but after the setup part gets to the video setup or something it goes black. I need to find a TV with component video so I can find out if the unit will boot on component and maybe reset something in memory to get HDMI working again properly, or if there's some kind of chip that is burnt out and won't let HDMI run after power-on.


Hope this doesn't sound inflammatory but have you tried other HDMI cables?


Not inflammatory, no worries, and I have. It's a head-scratcher, and a lesson to me. It happened when I disconnected and reconnected a cable while the system was turned on. I thought maybe the HDMI port or chip broke, since that is something that can happen, but the fact that I am able to get an initial picture with the help of a box tells me that the HDMI port is actually intact, as is the graphics chip. So maybe it's an HDCP thing; could be that an HDCP chip is burnt out or something. I have ordered a component cable and will take the system over to a TV that has component to see what happens, and whether I can reset the HDMI if I am able to boot into component first.


Brings back memories. I worked on tools for the PS2, and I have vivid memories of SCEI disclosing their PS3 plans in 2000; how it would be Power-based and that IBM would be doing all of the tools work for free. Bah. In the end it worked out because IBM bought our company for $34B.


Interesting, so you worked on the PS2 TOOL (i.e. the PS2 dev system kits)? What was that like?


I worked for Cygnus (acquired by Red Hat (acquired by IBM)), who was contracted for the GNU toolchain for the PS2 (and simulator). I wasn't working on the toolchain code itself, but worked with our hackers to scope the work, wrote SOWs and negotiated with Sony. I loved that period of time, as I got to travel to Japan a lot (for SONY and our other Japanese customers), and I loved the people, culture and food there. Also, that's when I grew an appreciation for masking up when sick!


A fascinating writeup! It's a small thing, but I love the UI here for choosing images where the caption above the image tells you what you get if you click. Very user-friendly.


> The RSX inherits existing Nvidia technology, it’s reported to be based on the 7800 GTX model sold for PCs

You can also see this in the early Cell Evaluation systems which at first had 6800s in SLI and switched to a 7800 GTX before the RSX was ready: http://www.edepot.com/playstation3.html#Early_PS3_Models


Apparently the addition of the RSX was a last-minute modification of the console. They got hit by the same Dennard scaling issue that killed off the NetBurst-derived architectures too [0]. They had been planning on cranking the Cell to over 4 GHz, putting two of them in the PS3, and running GPU tasks 'in software' on the SPUs. The GHz wall hit them hard and had them reeling, looking for an actual graphics chip at the last minute.

[0] https://en.wikipedia.org/wiki/Dennard_scaling#Breakdown_of_D...


The story I have heard directly from insiders is that they had a GPU design from Toshiba. But it was so deeply VLIW that writing an effective compiler for it was deemed infeasible, and hand-coding for it would require deeply dedicated skills. So they gave up on it late in production and went to Nvidia.

They looked at the G80, but that was too early and risky to jump on at the time. So, they settled for the 7800.

It's a shame. If they had delayed the release and went with the G80, the PS3 would have crushed the 360 as far as graphics. Instead, a whole lot of Cell SPU time had to be dedicated to shoring up the PS3's GPU issues to bring it on par with 360 titles.


I heard that the Toshiba GPU was also part of that last minute change after the dual Cell design didn't work out. The Nvidia talks happened sort of in parallel, but that was the last option.

I want to say it's mentioned in The Race For A New Game Machine too, but I'm not 100% on that.


I used to work on the hardware testing of the platform that is used in the PlayStation 3. The blade versions of this platform are called the QS20, QS21 and QS22; the latter was used in Roadrunner. Fun times.

https://en.wikipedia.org/wiki/Roadrunner_(supercomputer)


Given how parallel ECS workloads like Unity’s DOTS are coming more into vogue, it feels a bit strange reading this retrospectively.


This is a good, well-explained article. I'm quite curious about the PS4's and PS5's architecture, and whether it's as complex as this.


I wonder where the even-odd approach for the SPE comes from. Avoiding data hazards and extracting performance seems great as long as the software is up to the task.


I remember my university buying a bunch of PS3s for their datacenter back in the days.


Love this. Any good books on HW architecture at this level?


There are some things mentioned in "Supporting readings" from that site. I can't vouch for them, but perhaps they're of interest: https://www.copetti.org/writings/consoles/materials/readings...



