Windows should work, since WebGPU can target DirectX or Vulkan, and it should be possible to build in WSL.
However, I was planning to announce next week, after I'd had a chance to test with my Windows-using colleagues, and this thread came early - so it's possible we'll run into some hiccups.
Fair enough - I don't think there are any hard blockers to doing this, but to get the same QoL we'll want to add a Dawn DLL to the available prebuilt binaries and adjust the download script.
Will look into this in the coming weeks (or if anyone is up for contributing let us know).
Funny story from today while briefly on hold for a support rep. Got the standard 'we'll call you back if you want blah blah' followed by - I kid you not - "your estimated wait time for the next representative is 0.58333333 minutes"
At the end of the call I suggested she log a problem for IT to pass this along, and that they should either round up or use seconds (I suspect it was actually 35 seconds)
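The arithmetic checks out, assuming the system stores the estimate in seconds and converts to minutes for the prompt without any rounding. A quick sketch of the suspected bug and both suggested fixes:

```python
import math

# Guess: the IVR stores the wait estimate in seconds and blindly
# divides by 60 when reading the prompt in minutes.
wait_seconds = 35
wait_minutes = wait_seconds / 60
print(wait_minutes)             # 0.5833333333333334 - the "0.58333333 minutes"

# Fix 1: round up to whole minutes...
print(math.ceil(wait_minutes))  # 1

# Fix 2: ...or just announce the value in seconds.
print(f"{wait_seconds} seconds")
```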
We had this issue with time estimates at work. The JIRA field was minutes, which got printed as fractional hours elsewhere - not good if you put in 20 minutes. The suggestion was to use 15-minute intervals, but I pointed out that as long as you used multiples of 3 minutes you'd get at most two decimal digits, so I'd round to multiples of 3 instead.
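To spell out the multiples-of-3 trick: 3 minutes is exactly 0.05 hours, so any multiple of 3 minutes is a multiple of 0.05 hours and renders with at most two decimal places. A minimal sketch:

```python
# JIRA stores minutes; another view divides by 60 and displays hours.
def as_hours(minutes: int) -> float:
    return minutes / 60

# 20 minutes gives an ugly repeating decimal:
print(as_hours(20))  # 0.3333333333333333

# Multiples of 3 minutes are exact multiples of 0.05 hours,
# so they always show at most two decimal digits:
for m in (3, 9, 18, 21, 45):
    print(m, as_hours(m))  # e.g. 21 -> 0.35
```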
Those are ridiculously fine-grained tasks (or very precise estimations)?
Anything that takes less than, e.g., half an hour, I would probably just do instead of making a ticket and an estimate. But I guess some companies are more bureaucratic than others?
Well, what actually killed it historically was AMD64. AMD64 could easily not have happened - AMD has a very inconsistent track record - and other contemporary CPUs like Alpha were never serious competitors for mainstream computing, while ARM was nowhere near being a contender yet. In that scenario, mainstream PC users would obviously have stuck with x86-32 for much longer than they actually did, but I think in the end they wouldn't have had any real choice but to be dragged kicking and screaming to Itanium.
PowerPC is the one I’d have bet on - Apple provided baseline volume, IBM’s fabs were competitive enough to be viable, and Windows NT had support. If you had the same Itanium stumble without the unexpectedly-strong x86 options, it’s not hard to imagine that having gotten traction. One other what-if game is asking what would’ve happened if Rick Belluzzo had either not been swayed by the Itanium/Windows pitch or been less effective advocating for it: he took PA-RISC and MIPS out, and really helped boost the idea that the combination was inevitable.
I also wouldn’t have ruled out Alpha. That’s another what-if scenario but they had 2-3 times Intel’s top performance and a clean 64-bit system a decade earlier. The main barrier was the staggering managerial incompetence at DEC: it was almost impossible to buy one unless you were a large existing customer. If they’d had a single competent executive, they could have been far more competitive.
Interesting to note that all the state-of-the-art video game consoles of the era (Xbox 360, PS3 and Wii) used PowerPC CPUs (in the preceding generation the Xbox used a Pentium III, the PS2 used MIPS and the GameCube was already PPC).
Address space pressure was immense back in the day, and simply doubling the width of everything while retaining compatibility was the obvious choice.
> Address space pressure was immense back in the day, and simply doubling the width of everything while retaining compatibility was the obvious choice.
PAE (https://en.wikipedia.org/wiki/Physical_Address_Extension) existed for quite some time to enable x86-32 processors to access > 4 GiB of RAM. Thus, I would argue that the OS providing some functionality to move allocated pages in and out of the 32-bit address space of a process, to let it use more than 4 GiB of memory, would have been a much more obvious choice.
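That facility did exist in rough form, e.g. Windows' Address Windowing Extensions: the process keeps a fixed-size window in its address space and asks the OS to change which physical pages back it. As a toy illustration of the access pattern only (not actual PAE - here a file-backed mmap stands in for the large physical memory, and all names are made up for the sketch):

```python
import mmap
import os
import tempfile

# Stand-in for "physical memory" larger than the process's window:
# an 8 MiB temp file. Real PAE/AWE windowing remaps physical pages;
# a file-backed mapping just illustrates the sliding-window idea.
BACKING_SIZE = 8 * 1024 * 1024
WINDOW_SIZE = 1 * 1024 * 1024  # the fixed-size window the process maps

fd, path = tempfile.mkstemp()
os.ftruncate(fd, BACKING_SIZE)

def read_byte(offset: int) -> int:
    # Map only the 1 MiB window containing `offset`, never the whole region.
    base = (offset // WINDOW_SIZE) * WINDOW_SIZE
    with mmap.mmap(fd, WINDOW_SIZE, offset=base) as window:
        return window[offset - base]

# Write a byte near the top of the backing store, then read it back
# through the window, without ever mapping more than 1 MiB at once.
with mmap.mmap(fd, WINDOW_SIZE, offset=BACKING_SIZE - WINDOW_SIZE) as w:
    w[WINDOW_SIZE - 1] = 42

val = read_byte(BACKING_SIZE - 1)
print(val)  # 42

os.close(fd)
os.remove(path)
```

The awkwardness is visible even in the toy: every access outside the current window needs an explicit remap, which is exactly why applications preferred a flat 64-bit address space.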
> Thus, I would argue that the OS providing some functionality to move allocated pages in and out of the 32-bit address space of a process, to let it use more than 4 GiB of memory ...
Oh, no. Back then the segmented memory model was still remembered and no one wanted a return to that. PAE wasn't seen as anything but a bandaid.
Everyone wanted big flat address space. And we got it. Because it was the obvious choice, and the silicon could support it, Intel or no.
PAE got some use - for that “each process gets 4GB” model you mentioned in Darwin and Linux - but it was slower and didn’t allow individual processes to easily use more than 2-3GB in practice.
In what way? Their track record is pretty consistent, actually, which is partly what led to them fumbling the Athlon lead (along with Intel's shady business practices).
During the AMD64 days, AMD was pretty reliable with their technical advancements.
Yes, but AMD was only able to push AMD64 as an Itanium alternative for servers because they were having something of a renaissance with Opteron (2003 launch). In 2000/2001, AMD was absolutely not seen as something any serious server maker would choose over Intel.
You're right, there were ebbs and flows in their influence... but they were consistent within those trends. Releasing an extension during a strong period was almost certain to be picked up, especially when Intel wasn't offering an alternative (Itanium wasn't considered one, since it was server-only).