Wow, WebGPU support is really cool. Graphics is usually an afterthought in programming language runtimes, which is crazy when you consider that most devices these days have a GPU which is physically larger and has far more raw compute power than the CPU.
I understand why. The extremely proprietary and platform specific nature of graphics APIs and hardware makes it hard. WebGPU is a good choice for portability.
Looks like they are using wgpu, which is the main (only?) implementation of WebGPU. Seems like WebGPU will work because everyone is going to ship the same shim (wgpu) over the top of proprietary APIs.
There are three implementations of WebGPU in development for the three browser engines. Dawn for Blink and wgpu for Gecko are cross platform, and I think Apple may be doing something Metal-specific for WebKit.
Both Dawn and wgpu are standalone libraries too, so applications other than Blink/Gecko can plug either in to get a Metal-ish abstraction that works everywhere
It will be nice to finally have a portable 3D API that's sanely designed (unlike OpenGL) and that non-experts can be expected to handle (unlike Vulkan)
There's not that much boilerplate, actually. Much less than in Vulkan. WebGPU abstracts away memory management, synchronization and image transitions, which make up a lot of the boilerplate code in Vulkan (and are easy to get wrong). The WebGPU hello world is not much longer than equivalent OpenGL 3.x code, but it is a stricter API and in some cases requires a redesign of a renderer. For example, you can't update uniforms in the middle of a render pass; you have to create a large buffer and use dynamic offsets to move inside that buffer. In OpenGL the driver did that for you transparently.
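To illustrate, here's a rough sketch of that dynamic-offset pattern in the WebGPU JS API (the `device`, `pass`, and sizes below are assumed/illustrative, not from any particular codebase):

```ts
// Sketch: assumes an initialized GPUDevice `device` and an open
// GPURenderPassEncoder `pass`; counts and sizes are illustrative.
const objectCount = 16;
const sliceSize = 256; // must respect minUniformBufferOffsetAlignment (typically 256)

// One large buffer holds the per-object uniforms for the whole pass.
const uniformBuffer = device.createBuffer({
  size: sliceSize * objectCount,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});

const bindGroupLayout = device.createBindGroupLayout({
  entries: [{
    binding: 0,
    visibility: GPUShaderStage.VERTEX,
    buffer: { type: "uniform", hasDynamicOffset: true },
  }],
});

const bindGroup = device.createBindGroup({
  layout: bindGroupLayout,
  entries: [{ binding: 0, resource: { buffer: uniformBuffer, size: sliceSize } }],
});

// Inside the render pass: select each object's slice via an offset,
// instead of writing a uniform mid-pass like GL allowed.
for (let i = 0; i < objectCount; i++) {
  pass.setBindGroup(0, bindGroup, [i * sliceSize]);
  pass.draw(3);
}
```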
Whereas in most middleware you can say "here is my scene, do your best", and only dig deeper if it needs some help to do its best.
As is traditional with Khronos APIs, the step from "I managed a rotating cube with Gouraud shading" to loading a game scene with PBR materials is a gigantic one, full of hunting for libraries and trying to put them together, somehow.
It is no surprise that even with WebGL, most people end up picking ThreeJS or BabylonJS, after getting their first triangle.
Khronos APIs aren't meant to be middleware. They are meant to offer a thin low level abstraction over different vendors, upon which you can create middleware which offers higher level abstractions. This is most evident in Vulkan, where parts of the spec are almost built for a specific vendor. Image transitions in Vulkan were mostly designed for AMD; NVIDIA doesn't require image transitions in Vulkan code at all. Render pass APIs were developed mostly for tiled (usually mobile) renderers, which can optimize when they know ahead of time which render targets will be written to.
Yet middleware renders the "portability"[0] of Khronos APIs a moot point, by using the best 3D API for each platform's hardware while at the same time exposing a more developer-friendly infrastructure.
[0] - Anyone that has used Khronos APIs in anger across multiple OSes and GPUs knows the pain of maintaining multiple code paths due to extensions, driver and hardware workarounds.
Easy and simple are not the same, and one still needs to go hunting for libraries to do basic stuff like loading textures, in good old Khronos tradition.
Which are shipped by the same companies as the 3D APIs, in the same OS SDK. Where are the Khronos utility libraries?
The attempt to create an OpenGL SDK repository was a joke, and the best the Vulkan SDK can offer is a tool to avoid loading all the layers by hand, as it has reached the same extension spaghetti as any Khronos API.
But for cross-platform code it makes more sense anyway to use an independent cross-platform image loader library, instead of depending on the platform-specific utility APIs provided by D3DX or MetalKit.
That was just an example of a library, we can carry on with fonts, scene management, 3D math, .....
I did my thesis porting a particle engine from NeXTSTEP into Windows 98, and was big into OpenGL for a couple of years, went through the Longs Peak drama, eventually my focus switched to other 3D APIs by the GL 3.x timeframe.
WebGPU is the common cross-section of the modern 3D APIs (Metal, D3D12 and Vulkan). In general, and somewhat simplified, those APIs move all the expensive render-pipeline-configuration out of the render loop and into the initialization phase. So your code is much more "front-loaded" than GL, but once everything's initialized, the actual rendering is much less code and has much less overhead because most render-state has been pre-compiled into immutable objects. OpenGL on the other hand is a highly dynamic state machine where flipping a single "switch" can lead to expensive reconfiguration of the render pipeline during rendering.
I've been using wgpu for quite some time (in D, through wgpu-native bindings). What exactly do you find horrifying about it? It's much closer to DX12/Vulkan than OpenGL. The main difference is that it introduces the concept of a render pipeline, encapsulating all render state (the shaders used, shader input layouts, vertex layouts, depth/stencil state, color attachments), which can be pre-created and then bound with a single API call. It's a big improvement over OpenGL, which required you to track the state yourself or pre-emptively restore most of it to avoid side effects between calls.
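For context, a sketch of what pre-baking that state looks like (assuming an initialized `device`, a compiled `shaderModule`, and an open render pass `pass`; the entry point names are placeholders):

```ts
// All render state goes into one immutable, pre-validated object at init time.
const pipeline = device.createRenderPipeline({
  layout: "auto",
  vertex: { module: shaderModule, entryPoint: "vs_main" },
  fragment: {
    module: shaderModule,
    entryPoint: "fs_main",
    targets: [{ format: "bgra8unorm" }],
  },
  primitive: { topology: "triangle-list" },
});

// In the render loop, a single call binds all of that state at once.
pass.setPipeline(pipeline);
pass.draw(3);
```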
Have you seen a Vulkan "Hello Triangle" in comparison? That's at least a thousand lines of code, and that's with taking shortcuts :)
132 lines isn't really that much for a triangle in any 3D API, except 90's style OpenGL 1.x (which was a nice and convenient API, but also very inefficient).
I'm sure there will be plenty of high-level Javascript libraries built on top of WebGPU which will cater to different philosophies and use cases, and those will also allow a Hello Triangle with fewer lines of code (but will also involve compromises in flexibility or performance).
Kinda :/ on there not being a GPU permission (that I can find in the post or the docs); GPUs/GPU drivers have been a vector for some pretty nasty attacks in the past...
For years WebGL has been exposed to every website you visit with no permission prompt, and WebGPU soon will be too. It is possible to expose GPUs securely and the WebGPU API is designed with sandboxing and security in mind.
Edit: I believe the bug you linked below (https://bugs.chromium.org/p/project-zero/issues/detail?id=20...) can't be exploited through WebGL or WebGPU in the browser because all GPU access is remoted to a separate process with a special GPU sandbox. I don't know if Deno does this but it should.
The thing is that without read/write or net permissions, a malicious script that turns your computer into a mining rig will throw, because it can't connect to any server or use your filesystem. We already expose the CPU to third party code; doing the same for the GPU is trivial in that sense, if you remove the possibility of doing anything with it without your consent.
That's part of my concern, though since scripts can use the CPU arbitrarily, they could already mine on the CPU. A larger concern is bugs like [0], since GPU drivers are highly complex and highly privileged code.
Yeah that could happen. There was some discussion about a permission in the Discord though, but I don't think it was ever added or thought about on GH.
I was under the impression that Deno is a runtime mostly intended for the server environment which is probably an area where GPU presence isn't generally as strong as on other types of devices outside of the applications that specifically need it. I could be wrong of course about anything or everything.
This is an important example of why to avoid Cloudflare. If your website gets popular, your website's images will be replaced with the image: "This content has been restricted. Using Cloudflare's basic service in this manner is a violation of the Terms of Service." and your website will break.
Small correction: TS (MPEG transport stream) is mainly used for transmissions or broadcasts (e.g. DVB). The container format used for DVDs is PS (MPEG program stream), which is often stored as .mpeg files, and isn't quite the same as TS.
Good illustration of why relying exclusively on code being online (external repositories, code on github, hosted code like this) to make a project compile is a bad idea.
Has anybody here used Deno recently? I tried it out at 1.0 and it was... a little bit rough. Basic APIs like fs weren't stable yet, the documentation on how to import them was out of date, there were multiple VSCode extensions and it took a couple hours to get one of them working properly.
Of course these are all relatively easy things to smooth out over time so I'm curious what the experience is like now.
To be clear: IDE support existed and worked fine once I got it going. It just wasn't a smooth experience out of the box; I had to do some digging on Google, fiddle with the configuration, hunt for answers in GitHub tickets, etc. For a runtime whose main appeal (for me) was no-hassle TypeScript, all the hassle put a pretty big damper on things.
I've not used it professionally for anything yet but I have been hacking around with it on the side for fun (I've recently created a Gopher protocol client and added it as a Third Party module: https://github.com/matt1/deno-gopher / https://deno.land/x/gopher@v0.0.5-alpha - it has been a fairly enjoyable process)
On the whole I have been quite happy with it - the VSCode support is now pretty good (even without proper integration it was still fully usable, just without intellisense and with some red squiggles around the imports etc), and it sounds like this release will improve it even more.
I do agree though that the documentation is a bit sparse - the auto-generated docs are nice enough, but it could benefit from more human-written coverage of the common use-cases... that will come with time I guess.
My main criticism is that the built-in test framework is a GREAT thing to have, but it does not have support for mocking or spies so it is limited in real-world usefulness.
E.g. in my Gopher protocol client, there is not an easy way to mock out the `Deno.connect` built-in function to create a TCP socket so it is hard for me to test failures at the TCP level. I am sure with some gymnastics I can abstract everything away under layers of interfaces and classes and manually inject everything to make testing easier, but it would be nice to just be able to mock/spy out functions/classes rather than have the entire architecture of the project be dictated by the testing framework. There are third-party alternatives that can be used, but I'd prefer to stay as close to "standard" as possible.
I was curious about how `Deno.permissions` prevents things like "requesting the permissions, then clearing the screen and asking to press 'g'". Apparently, `Deno.permissions.request()` just stops program execution waiting for user response, so it's impossible to do anything after this call.
It also seems to reset text color, so it's impossible to make text invisible.
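For reference, a minimal sketch of the API (the awaited call is where execution suspends):

```ts
// Execution blocks here until the user answers the prompt at the terminal,
// so nothing can redraw the screen mid-prompt.
const status = await Deno.permissions.request({ name: "net" });

if (status.state === "granted") {
  // Only now can the script touch the network.
  await fetch("https://example.com");
}
```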
iTerm's custom escape sequences allow a program to change the color profile on the fly. Setting bg and fg to the same value hides Deno's messages and allows someone to conceal which permissions are being granted.
If any Deno dev/members are here, it would be really really nice to see mocking/spy support in the Deno test framework.
This is the major thing holding me back from using Deno more at the moment.
Not being able to mock out a class/function or spy on a particular call means that there are large parts of my code, and particular situations, that can't be tested easily without refactoring my entire codebase to use manual IoC and excessive OO to hide things away under layers of interfaces. E.g. I cannot currently mock out `Deno.connect` (....? or can I?) so it is hard to test all situations there. Unfortunately this sort of thing (connecting to a remote system) is often quite critical and would benefit hugely from extensive testing.
It would be great to be able to create tests where calls to Deno.connect throw errors, or where it returns 0 bytes, or a special sequence of bytes and so on and so on to verify that my code works - this is the classic sort of `Spy` functionality seen and used extensively in Jasmine et al.
I know there are third-party solutions to this, but it would be nice to see this in the standard distribution.
What language runtime provides mocking/spy support? This seems excessively catered to IoC-style OO design, not the kind of thing a multi-paradigm language usually supports.
AFAIK deno ships with "test" as a subcommand - having access to mock/spy seems reasonable in that context. It sort of follows from the "batteries included" approach.
Just like deno bundles typescript - there is a reasonable expectation to track latest stable typescript features.
I believe they intentionally left it up to third party modules instead of baking in one way of doing it. I created a module called mock on their third party module registry that can be used for spying on and stubbing functions. Even with third party modules, you cannot mock Deno.connect directly because the Deno object is not extensible. If you'd like to spy on or stub connect, you could have Deno.connect passed in to your function as an argument that you can replace in tests with a spy wrapper around it or your mocked out version of it.
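A rough sketch of that injection approach (the function and test are illustrative; the std import may differ by version, e.g. older std calls the assertion assertThrowsAsync):

```ts
import { assertRejects } from "https://deno.land/std/testing/asserts.ts";

// Accept the dialer as a parameter, defaulting to the real Deno.connect.
type Dialer = (opts: Deno.ConnectOptions) => Promise<Deno.Conn>;

async function fetchMenu(hostname: string, dial: Dialer = Deno.connect) {
  const conn = await dial({ hostname, port: 70 });
  try {
    // ... write the selector, read the response ...
  } finally {
    conn.close();
  }
}

// In tests, swap in a stub that simulates a TCP-level failure.
Deno.test("surfaces connection errors", async () => {
  const refuse: Dialer = () => Promise.reject(new Deno.errors.ConnectionRefused());
  await assertRejects(() => fetchMenu("gopher.example.com", refuse));
});
```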
Asking out loud, but is anyone aware of a way to build Ruby on Rails assets with Deno in place of yarn and node?
The use case is to be able to leverage ES6/Typescript but have security built into the assets management. I'm not unhappy with the sprockets pipeline, but I would like to move forward with a modern assets pipeline without having to think too hard about the security implications of using yarn and node.
Maybe esbuild can help? It's a go-based bundler that does typescript, es6, etc. and a lot more. https://github.com/evanw/esbuild It's still pretty well integrated into the npm world and distributed as a node package though, so there might be some issues.
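For what it's worth, a minimal build script using esbuild's JS API looks something like this (the paths are placeholders):

```ts
import { build } from "esbuild";

// Bundles TypeScript/ES6 down to a single browser-ready file.
await build({
  entryPoints: ["src/app.ts"],
  bundle: true,
  minify: true,
  sourcemap: true,
  outfile: "public/assets/app.js",
});
```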
esbuild is a "bundler" and can replace webpack/rollup, but you still need to use yarn/npm to install all dependencies, exactly as you do now. If your concern is primarily security-driven, esbuild doesn't address it. The same applies to all the other bundlers like Snowpack, Vite, etc.
As for migrating to Deno: you probably will not be able to do this because most NPM packages will not run in Deno out of the box. Your build pipeline almost certainly relies on libraries like gulp/sass/globby, and those projects will need to be ported first (or made compatible by their authors).
Modules on npm that target the web may work with minimal to no changes. You will need to convert them to ESM. There are CDNs which provide automatic conversion.
The overwhelming majority of NPM packages are still in CJS, and will not work under Deno today. Yes, you can do hack-arounds like jspm.io/skypack, but if the person is concerned with stability and/or security, adding a 3rd party middleman into the pipeline will not go down well.
Now that Node supports ESM, I hope this year will start seeing more migrations, but it will be at a glacial pace because ESM<->CJS compatibility remains extremely brittle (among all bundlers, and for Node+browser usage). If you are writing a small greenfield library you can make it work. But it will be a long time - if ever - before we see things like @babel work.
I don’t know about Rails specifically but I was able to create my own custom asset pipeline for Roda framework using the latest version of Snowpack. I’m assuming it would be no different for Deno.
It's a really good/barebones Roda project that uses webpack for assets/js pipeline and Tailwind CSS for UI. My methodology was:
1) clone the project
2) then swap out webpack for Snowpack
I was able to do that within a day and honestly, it was well worth the effort for the amount of knowledge that comes with it. Snowpack is much, much faster and more efficient in terms of resources vs webpack. The newest version, Snowpack 3, uses esbuild internally, which basically puts it on steroids. Good luck!
FYI, in my dev stack, I'm running Puma server, Sidekiq worker, Snowpack dev server and Guard for live reload capabilities. I'm using Foreman to startup all 4 things using a Procfile and my startup time clocks in at 2.85s from issuing "foreman start". I hope this helps!
If your concern is mainly security, you can try yarn 2 (berry)’s “zero install” feature.
That vendors your dependencies, so the whole project can then be started without yarn itself, while avoiding the gigabytes and millions of files problem of node_modules.
Wow, the `Deno.permissions` API is a great idea! Why haven't I thought of this? It sounds like a natural extension to what browsers are already capable of. It would make it very convenient to grant permissions to remote scripts without knowing the exact permissions they require beforehand.
Indeed. I find this to be a rather strange opinion; in terms of the language itself, I can’t really say one is better than the other for expressing mathematics.
As long as we're sticking to the core language, JavaScript is not the right answer, as the only numeric type, Number, is an IEEE-754 "double". For lots of maths, however, integer semantics are better suited.
Python has arbitrarily sized integers in the language, which improves its usefulness even further.
Of course libraries and extensions could eventually help.
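A quick illustration of the difference (doubles silently lose integer precision past 2^53, which arbitrary-precision integers don't):

```ts
// Doubles can't represent every integer above 2^53...
console.log(2 ** 53 === 2 ** 53 + 1); // true: 9007199254740993 rounds to 9007199254740992

// ...while arbitrary-precision integers (Python's int, JS's BigInt) stay exact.
console.log(2n ** 53n === 2n ** 53n + 1n); // false
```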
And Python doesn't? It has bignum; is that inferior? It is also arbitrary precision. It seems like claiming JavaScript has "sufficient" numerical properties would be better than claiming it is far superior to Python. To do otherwise seems like FUD.
Thus I claim the Python approach is still better in that specific area.
I don't know Deno's source for their claim. Certainly relevant is what area of maths you're looking at. For instance, representations for vectors and matrices might be relevant for different maths problems. The functional approach in some of these things also might be interesting for some maths problems ... broad topic :)
I'm hoping deno can lead to a good Electron alternative, in the same spirit and philosophy. I'm currently working on an app and would love to ditch a bunch of the node specific tooling for the deno alternative.
I have had a quick play around with it - seems functional. Usual caveats about cross-platform webview apply, but I don't see those as major blockers these days unless you are doing something particularly fancy/niche
I love that you can just run the examples by copy-pasting a command and everything is auto-downloaded via Deno's modules. No installation or huge separate binary downloads. Awesome stuff.
What are average build times for a deno project? One reason I've stuck with JS, Electron and node up until now is zero build times, but I really miss static typing.
I really want to use Deno, but until it's got a decent solution for cryptography, it seems as if encrypted session cookies are a no go, which makes a lot of my use cases a no go.
Is a comprehensive std a goal for Deno? In the future, can I replace Python and make a 10 line script to download a gzipped csv and parse it incrementally?
Yes! We have a CSV parser in the standard lib, and HTTP requests that are gzipped are automatically decompressed if they specify the `transfer-encoding: gzip` header.
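Something along these lines should work (the std module path/version here is an assumption, and this parse is all-at-once rather than incremental):

```ts
import { parse } from "https://deno.land/std/encoding/csv.ts";

// fetch transparently decompresses the body when the server marks it as gzipped.
const res = await fetch("https://example.com/data.csv");
const rows = await parse(await res.text(), { skipFirstRow: true });
console.log(`${rows.length} rows`);
```

Run it with `deno run --allow-net` since it touches the network.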
Ok, I see the other thread on this now. Sounds like it’s hosted on Cloudflare? That is somewhat concerning, considering they say they are on the business tier.
The problem is that the blogpost added videos, which triggered Cloudflare into thinking that all the .ts files we distribute are MPEG Transport Stream files, which would violate CF's ToS on streaming.
This is probably a dumb question, but I'm interested in Deno's TypeScript default configuration. I wonder if I can just steal that configuration and just use it for all of my (TS compiled to JS) NodeJS projects.
What's the difference between WGSL and GLSL? Is WGSL a native language understood by the GPU? Or is it a purely Web-level construct that needs translation to GLSL before being piped to the GPU?
GLSL is not a native language understood by the GPU. GLSL is a high level language which has to be compiled to machine instructions for the GPU, just like HLSL and WGSL.
Yes. The GPU only runs machine code. That makes sense.
GLSL is compiled by the OpenGL driver. The question becomes: is WGSL sitting on top of the OpenGL driver, and thus requires the OpenGL drivers to support it (or transpiling to GLSL), or does it have its own WebGPU driver that can compile the WGSL and interface with the GPU directly?
I would imagine the details are platform specific. I doubt that anyone is using OpenGL, as WebGPU is intended to bring the benefits of the Vulkan/Metal/DX12 generation of graphics APIs to the web.
On platforms supporting Vulkan, WebGPU implementations are probably compiling WGSL to SPIR-V before handing it to the driver, and a DX12 implementation would probably compile it to DXIL. I'm not aware of any intermediate representation on Apple platforms, so it's probably transpiled to MSL there.
The latter. WGSL (purposefully) lacks many of the high-level niceties of GLSL or HLSL in order to be a good compile target as well. IIRC it has a 1-1 mapping to SPIR-V constructs, so you can see how one might build higher-level languages/DSLs/compilers on top of it.
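Concretely, WGSL reaches the implementation as a plain string and gets compiled down to whatever the backend wants (a sketch assuming an initialized `device`; the WGSL syntax here follows a recent draft of the spec):

```ts
// The implementation compiles this text to SPIR-V, DXIL, or MSL
// depending on the platform backend.
const shaderModule = device.createShaderModule({
  code: `
    @fragment
    fn fs_main() -> @location(0) vec4<f32> {
      return vec4<f32>(1.0, 0.0, 0.0, 1.0);
    }
  `,
});
```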
Does Deno have a standard API for building Rust modules that communicate with JavaScript? With Node, you have the N-API which is C++, which you can use with the Rust wrappers. But, a standard Rust API for native modules would be much nicer to use compared to the N-API...