Let's count abstraction layers:
1. Domain specific Rust code
2. Backend abstracting over the cust, ash and wgpu crates
3. wgpu and co. abstracting over platforms, drivers and APIs
4. Vulkan, OpenGL, DX12 and Metal abstracting over platforms and drivers
5. Drivers abstracting over vendor specific hardware (one could argue there are more layers in here)
6. Hardware
That's a lot of hidden complexity; better hope one never needs to look under the lid. It's also questionable how well performance-relevant platform specifics survive all these layers.
I think it's worth bearing in mind that all `rust-gpu` does is compile Rust to SPIR-V, which is Vulkan's shader IR. So in a sense layers 2 and 3 are optional, or at least parallel layers rather than cumulative.
And it's also worth remembering that all of Rust's tooling can be used for building its shaders: `cargo`, `cargo test`, `cargo clippy`, `rust-analyzer` (Rust's LSP server).
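To make that concrete, here is a minimal sketch of what a rust-gpu compute shader looks like, modeled on the `spirv-std` examples (attribute syntax can differ between versions, so treat it as illustrative rather than definitive):

```rust
// Ordinary no_std Rust, built with cargo for a spirv-unknown-* target.
#![no_std]

use spirv_std::glam::UVec3;
use spirv_std::spirv;

// One workgroup of 64 threads; each invocation doubles one element.
#[spirv(compute(threads(64)))]
pub fn double(
    #[spirv(global_invocation_id)] id: UVec3,
    #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] data: &mut [f32],
) {
    let i = id.x as usize;
    if i < data.len() {
        data[i] *= 2.0;
    }
}
```

Because it is a regular crate, `cargo clippy` and `rust-analyzer` work on it unchanged.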
It's reasonable to argue that GPU programming isn't hard because GPU architectures are so alien; it's hard because the ecosystem is so stagnant and encumbered by archaic, proprietary, vendor-locked tooling.
It's not all that much worse than a compiler and runtime targeting multiple CPU architectures, with different calling conventions, endianness, etc., and at the hardware level different firmware and microcode.
The demo is admittedly a Rube Goldberg machine, but that's because this is the first time it has been possible. It will get more integrated over time. And just like normal Rust code, you can make it as abstract or concrete as you want. But at least you have the tools to do so.
That's one of the nice things about the Rust ecosystem: you can drill down and do what you want. There is `std::arch`, which is platform specific; there is `asm!` support; you can do things like replace the allocator and the panic handler, etc. And with upcoming features like externally implemented items, it will be even more flexible in targeting whatever layer of abstraction you want.
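A quick sketch of those "drill down" hooks in a bare no_std crate (the null allocator is just a stand-in for whatever you would actually plug in):

```rust
#![no_std]

use core::alloc::{GlobalAlloc, Layout};
use core::panic::PanicInfo;

// Replace the panic handler: you decide what "panic" means on your target.
#[panic_handler]
fn on_panic(_info: &PanicInfo) -> ! {
    loop {}
}

// Replace the allocator: here a do-nothing placeholder that always fails.
struct NullAlloc;

unsafe impl GlobalAlloc for NullAlloc {
    unsafe fn alloc(&self, _layout: Layout) -> *mut u8 {
        core::ptr::null_mut()
    }
    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {}
}

#[global_allocator]
static ALLOC: NullAlloc = NullAlloc;
```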
> but that's because this is the first time it has been possible
Using SPIR-V as an abstraction layer for GPU code across all 3D APIs is hardly a new thing (via SPIRV-Cross, Naga or Tint), and the LLVM SPIR-V backend is also well established by now.
Those don't include CUDA and don't include the CPU host side AFAIK.
SPIR-V isn't the main abstraction layer here, Rust is. This is the first time it has been possible to use Rust for host + device across all these platforms, OSes, and device APIs.
You could make an argument that CubeCL enabled something similar first, but it is more a DSL that looks like Rust than the Rust language proper (but still cool).
"It's only complex because it's new, it will get less complex over time."
They said the same thing about browser tech. Still not simpler under the hood.
As far as I understand, there was a similar mess with CPUs some 50 years ago: All computers were different and there was no such thing as portable code. Then problem solvers came up with abstractions like the C programming language, allowing developers to write more or less the same code for different platforms. I suppose GPUs are slowly going through a similar process now that they're useful in many more domains than just graphics. I'm just spitballing.
The first portable programming language was, uh, Fortran. Indeed, by the time the Unix developers are thinking about porting to different platforms, there are already open source Fortran libraries for math routines (the antecedents of LAPACK). And not long afterwards, the developers of those libraries are going to get together and work out the necessary low-level kernel routines to get good performance on the most powerful hardware of the day--i.e., the BLAS interface that is still the foundation of modern HPC software almost 50 years later.
(One of the problems of C is that people have effectively erased pre-C programming languages from history.)
> I suppose GPUs are slowly going through a similar process now that they're useful in many more domains than just graphics.
I've been waiting for the G in GPU to be replaced with something else since the first CUDA releases. I honestly think that once we rename this tech, more people will learn to use it.
MPU - Matrix Processing Unit
LAPU - Linear Algebra Processing Unit
LAPU is terrific. It also means paw in Russian.
Computers had been enjoying high-level systems languages for a decade before C.
But it's true that you generally couldn't use the same Lisp dialect on two different families of computers, for instance.
Neither could you with C; POSIX exists for a reason.
And yet, we are still using handwritten assembly for hot code paths. All these abstraction layers would need to be porous enough to allow per-device specific code.
> And yet, we are still using handwritten assembly for hot code paths
This is actually a win. It implies that abstractions have a negligible (that is, existing but so small that it can be ignored) cost for anything other than small parts of the codebase.
Who ever said that?
Who said that?
They did.
now that is a relevant username
Complexity is not inherently bad. Browsers are more or less exactly as complex as they need to be in order to allow users to browse the web with modern features while remaining competitive with other browsers.
This is Tesler's Law [0] at work. If you want to fully abstract away GPU compilation, it probably won't get dramatically simpler than this project.
Realistically though, a user can only hope to operate at (3) or maybe (4). So not as much of an add. (Abstraction layers do not stop at 6, by the way, they keep going with firmware and microarchitecture implementing what you think of as the instruction set.)
Don't know about you, but I consider 3 levels of abstraction a lot, especially when it comes to such black-boxy tech like GPUs.
I suspect debugging this Rust code is impossible.
You posted this comment in a browser on an operating system running on at least one CPU using microcode. There are more layers inside those (the OS alone contains a laundry list of abstractions). Three levels of abstractions can be fine.
That looks like the graphics stack of a modern game engine. Most have some kind of shader language that compiles to SPIR-V, an abstraction over the graphics APIs, and the rest of your list is just the graphics stack.
Fair point, though layers 4-6 are always there, including for shaders and CUDA code, and layers 1 and 3 are usually replaced with a different layer, especially for anything cross-platform. So this Rust project might be adding a layer of abstraction, but probably only one-ish.
I work on layers 4-6 and I can confirm there’s a lot of hidden complexity in there. I’d say there are more than 3 layers there too. :P
Though if the Rust compiles to NVVM, it's exactly as bad as C++ CUDA, no?
Tbf, Proton on Linux is about the same number of abstraction layers, and that sometimes has better performance than Windows games running on Windows.
There is absolutely an xkcd 927 feel to this.
But that's not the fault of the new abstraction layers, it's the fault of the GPU industry and its outrageous refusal to coordinate on anything, at all, ever. Every generation of GPU from every vendor has its own toolchain, its own ideas about architecture, its own entirely hidden and undocumented set of quirks, its own secret sauce interfaces available only in its own incompatible development environment...
CPUs weren't like this. People figured out a basic model for programming them back in the '60s and everyone agreed that open docs and collabora-competing toolchains and environments were a good thing. But GPUs never got the memo, and things are a huge mess and remain so.
All the folks up here in the open source community can do is add abstraction layers, which is why we have thirty seven "shading languages" now.
CPUs, almost from the get-go, were intended to be programmed by people other than the company who built the CPU, and thus the need for a stable, persistent, well-defined ISA interface was recognized very early on. But for pretty much every other computer peripheral, the responsibility for the code running on those embedded processors has been with the hardware vendor, their responsibility ending at providing a system library interface. With literal decades of experience in an environment where they're freed from the burden of maintaining stable low-level details, all of these development groups have quite jealously guarded access to that low level and actively resist any attempts to push the interface layers lower.
As frustrating as it is, GPUs are actually the most open of the accelerator classes, since they've been forced to accept a layer like PTX or SPIR-V; trying to do that with other kinds of accelerators is really pulling teeth.
In fairness, the ability to restructure at will probably does make it easier to improve things.
The fact that the upper parts of the stack are so commoditized (i.e. CUDA and WGSL do not in fact represent particularly different modes of computation, and of course the linked article shows that you can drive everything pretty well with scalar rust code) argues strongly against that. Things aren't incompatible because of innovation, they're incompatible because of expedience and paranoia.
Improve things for who?
Pretty sure they mean improve performance; number crunching ability.
In consumer GPU land, that's yet to be observed.
Certainly impressive that this is possible!
However, for my use cases (running on arbitrary client hardware) I generally distrust any abstraction over the GPU API, as the entire point is to leverage the low-level details of the GPU. Treating those details as a nuisance leads to bugs and performance loss, because each target is meaningfully different.
To overcome this, a similar system should be brought forward by the vendors. However, since they failed to settle their arguments, I imagine the platform differences are significant. There are exceptions to this (e.g. ANGLE), but they only arrive at stability by limiting the feature set (and so performance).
It's good that this approach at least allows conditional compilation; that helps for sure.
Rust is a systems language, so you should have the control you need. We intend to bring GPU details and APIs into the language and core / std lib, and expose GPU and driver stuff to the `cfg()` system.
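For a rough idea of what that could look like: gating on `target_arch = "spirv"` already works with rust-gpu today, while a vendor/driver cfg (hinted at in the code comment below) is hypothetical and only illustrates the direction:

```rust
// CPU fallback, used on non-GPU targets.
#[cfg(not(target_arch = "spirv"))]
fn prefix_sum(data: &mut [u32]) {
    for i in 1..data.len() {
        data[i] += data[i - 1];
    }
}

// GPU build: `target_arch = "spirv"` is a real cfg for rust-gpu targets.
#[cfg(target_arch = "spirv")]
fn prefix_sum(data: &mut [u32]) {
    // A vendor-specialized version could be selected here once hypothetical
    // cfgs along the lines of `gpu_vendor = "nvidia"` exist.
    for i in 1..data.len() {
        data[i] += data[i - 1];
    }
}
```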
(Author here)
Who is "we" here? I'm curious to hear more about your ambitions, since surely pulling in wgpu or something similar seems out of scope for the traditionally lean Rust stdlib.
Many of us working on Rust + GPUs in various projects have discussed starting a GPU working group to explore some of these questions:
https://gist.github.com/LegNeato/a1fb3e3a9795af05f22920709d9...
Agreed, I don't think we'd ever pull in things like wgpu, but we might create APIs or traits wgpu could use to improve perf/safety/ergonomics/interoperability.
Cool, looking forward to that. It's certainly a good fit for the Rust story overall, given the increasingly heterogenous nature of systems.
I'm surprised there isn't already a Rust GPU WG. That'd be incredible.
Same here. I'm always hesitant to build anything commercial on abstractions, adapters or translation layers that may or may not have sufficient support in the future.
Sadly, in 2025 we are still in desperate need of an open standard that is supported by all vendors and allows programming against the full feature set of current GPU hardware. The fact that the situation is the way it is while the company with the deepest software moat (Nvidia) also sits as president at Khronos says something to me.
Khronos APIs are the C++ of graphics programming; there is a reason why professional game studios never fight political wars over APIs.
Decades of experience building cross-platform game engines, going back to the days of raw assembly programming across heterogeneous computer architectures.
What matters are the game design and the IP, which they can eventually turn into physical assets like toys, movies, and collectibles.
Hardware abstraction layers are done once per platform; you can even let an intern do it, at least the initial hello triangle.
As for who sits as president at Khronos, that's how elections go in committee-driven standards bodies.
I think you are very experienced in this subject. Can you explain what's wrong with WebGPU? Doesn't it utilize like 80% of the cool features of modern GPUs? Games and ambitious graphics-hungry applications aside, why aren't we seeing more tech built on top of WebGPU, like GUI stacks? Why aren't we seeing browsers and web apps using it?
Do you recommend learning it (considering all the things worth learning nowadays and the rise of LLMs)?
First of all, WebGPU has only been supported in Chrome for a few months, and Firefox support arrives in its next release. And that's just on Windows.
We haven't had enough time to develop anything really.
Secondly, the WebGPU standard is like Vulkan 1.0 and is cumbersome to work with. But that part is hearsay, I don't have much experience with it.
GPU programming is often cumbersome though. I mean, OpenGL and Vulkan are not really trivial either?
WebGPU is about a decade behind in feature support compared to what is available in modern GPUs. Things missing include:
- Bindless resources
- RT acceleration
- 64-bit image atomic operations (these are what make Nanite's software rasterizer possible)
- mesh shaders
It has compute shaders at least. There are a lot of extensions, less flashy to non-experts, being added to Vulkan and D3D12 lately that remove abstractions WebGPU can't give up without becoming a security nightmare. Outside of the rendering algorithms themselves, the vast majority of the API surface area in Vulkan/D3D12 is just ceremony around allocating memory for different purposes. New features like descriptor buffers in Vulkan are removing that ceremony in a very core area, but they're unlikely to ever come to WebGPU.
fwiw some of these features are available outside the browser via 'wgpu' and/or 'dawn', but that doesn't help people in the browser.
Genuine question since you seem to care about the performance:
As an outsider, where we are with GPUs looks a lot like where we were with CPUs many years ago. And (AFAIK), the solution there was three-part compilers where optimizations happen on a middle layer and the third layer transforms the optimized code to run directly on the hardware. A major upside is that the compilers get smarter over time because the abstractions are more evergreen than the hardware targets.
Is that sort of thing possible for GPUs? Or is there too much diversity in GPUs to make it feasible/economical? Or is that obviously where we're going and we just don't have it working yet?
The status quo in GPU-land seems to be that the compiler lives in the GPU driver and is largely opaque to everyone other than the OS/GPU vendors. Sometimes there is an additional layer of compiler in user land that compiles into the language that the driver-compiler understands.
I think a lot of people would love to move to the CPU model where the actual hardware instructions are documented and relatively stable between different GPUs. But that's impossible to do unless the GPU vendors commit to it.
I would like CPUs to move to the GPU model, because in the CPU land adoption of wider SIMD instructions (without manual dispatch/multiversioning faff) takes over a decade, while in the GPU land it's a driver update.
To be clear, I'm talking about the PTX -> SASS compilation (which is something like LLVM bitcode to x86-64 microcode compilation). The fragmented and messy high-level shader language compilers are a different thing, in the higher abstraction layers.
I think Intel and AMD provide ISA docs for their hardware. Not sure about Nvidia; I haven't checked in forever.
Exactly. Not sure why it would be better to run Rust on Nvidia GPUs compared to actual CUDA code.
I get the idea of added abstraction, but do think it becomes a bit jack-of-all-tradesey.
I think the idea is to allow developers to write a single implementation and have a portable binary that can run on any kind of hardware.
We do that all the time - there is lots of code that chooses optimal code paths depending on the runtime environment or which ISA extensions are available.
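On the CPU side in Rust, that pattern is just runtime feature detection plus `target_feature`; a small sketch for AVX2:

```rust
// Choose a code path at runtime based on which ISA extensions the CPU reports.
pub fn sum(xs: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if std::is_x86_feature_detected!("avx2") {
            // Safe to call because we just verified AVX2 is available.
            return unsafe { sum_avx2(xs) };
        }
    }
    xs.iter().sum()
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(xs: &[f32]) -> f32 {
    // With AVX2 enabled for this function, the compiler is free to vectorize.
    xs.iter().sum()
}
```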
Without the tooling though.
Commendable effort, however just like people forget languages are ecosystems, they tend to forget APIs are ecosystems as well.
Sure. The performance-purist in me would be very doubtful about the result's optimality, though.
The performance purists don't use CUDA either, though (that's why DeepSeek used PTX directly).
Everything is an abstraction, and choosing the right level of abstraction for your use case is a tradeoff between your engineering capacity and your performance needs.
The issue in my mind is that this doesn't seem to include any of the critical library functionality specific to, e.g., NVIDIA cards; think reduction operations across threads in a warp and similar. Some of those don't exist in all hardware architectures. We may get to a point where everything can be written in one language, but actually leveraging the hardware correctly will still require a bunch of different implementations, one for each target architecture.
The fact that different hardware has different features is a good thing.
During the build, build.rs uses rustc_codegen_nvvm to compile the GPU kernel to PTX.
The resulting PTX is embedded into the CPU binary as static data.
The host code is compiled normally.
This Rust demo also uses PTX directly.
To be more technically correct, we compile to NVVM IR and then use NVIDIA's NVVM to convert it to PTX.
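For the CUDA path, the host side of that flow (PTX embedded at build time, then loaded and launched) looks roughly like the following with the cust crate from Rust-CUDA; the kernel name and PTX path are placeholders and the exact APIs differ between versions:

```rust
use cust::prelude::*;
use std::error::Error;

// PTX produced during the build and embedded as static data (path is illustrative).
static PTX: &str = include_str!(concat!(env!("OUT_DIR"), "/kernels.ptx"));

fn main() -> Result<(), Box<dyn Error>> {
    let _ctx = cust::quick_init()?;              // create a CUDA context
    let module = Module::from_ptx(PTX, &[])?;    // JIT the embedded PTX
    let kernel = module.get_function("double")?; // look up the kernel by name
    let stream = Stream::new(StreamFlags::NON_BLOCKING, None)?;

    let data = DeviceBuffer::from_slice(&vec![1.0f32; 1024])?;
    unsafe {
        // 16 blocks of 64 threads, no dynamic shared memory, on our stream.
        launch!(kernel<<<16, 64, 0, stream>>>(data.as_device_ptr(), data.len()))?;
    }
    stream.synchronize()?;

    let mut host = vec![0.0f32; 1024];
    data.copy_to(&mut host)?;
    Ok(())
}
```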
I think the sweet spot is:
If your program is written in rust, use an abstraction like Cudarc to send and receive data from the GPU. Write normal CUDA kernels.
Because folks like to program in Rust, not CUDA
"Folks" as in Rust stans, who know very little about CUDA and what makes it nice in the first place, sure, but is there demand for Rust ports among actual CUDA programmers?
I think not.
FYI, Rust-CUDA outputs NVVM IR so it can integrate with the existing CUDA ecosystem. We aren't suggesting rewriting everything in Rust. Check the repo for crates that allow using existing stuff like cuDNN and cuBLAS.
Rust expanded systems programming to a much larger audience. If it can do the same for GPU programming, even if the resulting programs are not (initially) as fast as CUDA programs, that's a big win.
What makes cuda nice in the first place?
All the things marked with red cross in the Rust-CUDA compatibility matrix.
https://github.com/Rust-GPU/Rust-CUDA/blob/main/guide/src/fe...
> Exactly. Not sure why it would be better to run Rust on Nvidia GPUs compared to actual CUDA code.
You get to pull in no_std Rust crates and they run on the GPU, instead of having to convert them to C++.
Everything is an abstraction though; even CUDA abstracts away very different pieces of hardware with totally different capabilities.
I write native audio apps, where every cycle matters. I also need the full compute API instead of graphics shaders.
Is the "Rust -> WebGPU -> SPIR-V -> MSL -> Metal" pipeline robust when it comes to performance? To me, it seems brittle and hard to reason about all these translation stages. Ditto for "... -> Vulkan -> MoltenVK -> ...".
Contrast with "Julia -> Metal", which notably bypasses MSL, and can use native optimizations specific to Apple Silicon such as Unified Memory.
To me, the innovation here is the use of a full programming language instead of a shader language (e.g. Slang). Rust supports newtype, traits, macros, and so on.
I must agree that for numerical computation (and downstream optimisation thereof) Julia is much better suited than an ostensibly "systems" language such as Rust. Moreover, the compatibility matrix[1] for Rust-CUDA tells a story: there's seemingly very little demand for CUDA programming in Rust, and most parts that people love about CUDA are notably missing. If there were demand, surely it would get more traction; alas, it would appear that actual CUDA programmers have very little appetite for it...
[1]: https://github.com/Rust-GPU/Rust-CUDA/blob/main/guide/src/fe...
This is a little crude still, but the fact that this is even possible is mind blowing. This has the potential, if progress continues, to break the vendor-locked nightmare that is GPU software and open up the space to real competition between hardware vendors.
Imagine a world where machine learning models are written in Rust and can run on both Nvidia and AMD.
To get max performance you likely have to break the abstraction and write some vendor-specific code for each, but that's an optimization problem. You still have a portable kernel that runs cross platform.
> Imagine a world where machine learning models are written in Rust and can run on both Nvidia and AMD
Not likely in the next decade, if ever. Unfortunately, the entire ecosystems of JAX and Torch are Python-based. Imagine retraining all those devs to use Rust tooling.
Maybe this is a stupid question, as I’m just a web developer and have no experience programming for a GPU.
Doesn’t WebGPU solve this entire problem by having a single API that’s compatible with every GPU backend? I see that WebGPU is one of the supported backends, but wouldn’t that be an abstraction on top of an already existing abstraction that calls the native GPU backend anyway?
No, it does not. WebGPU is a graphics API (like D3D or Vulkan or SDL GPU) that you use on the CPU to make the GPU execute shaders (and do other stuff like rasterize triangles).
Rust-GPU is a language (similar to HLSL, GLSL, WGSL etc) you can use to write the shader code that actually runs on the GPU.
This is a bit pedantic. WGSL is the shader language that comes with the WebGPU specification and is clearly what the parent (who is unfamiliar with GPU programming) meant.
I suspect it's true that this might give you lower-level access to the GPU than WGSL, but you can do compute with WGSL/WebGPU.
Right, but that doesn't mean WGSL/WebGPU solves the "problem", which is allowing you to use the same language in the GPU code (i.e. the shaders) as the CPU code. You still have to use separate languages.
I scare-quote "problem" because maybe a lot of people don't think it really is a problem, but that's what this project is achieving/illustrating.
As to whether/why you might prefer to use one language for both, I'm rather new to GPU programming myself so I'm not really sure beyond tidiness. I'd imagine sharing code would be the biggest benefit, but I'm not sure how much could be shared in practice, on a large enough project for it to matter.
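As a toy example of the kind of sharing people have in mind (everything here is hypothetical): a plain function in a shared no_std module can be called from the shader and unit-tested on the host with an ordinary `cargo test`.

```rust
// shared.rs: plain Rust with no GPU awareness, usable from both host and shader crates.
pub fn luminance(r: f32, g: f32, b: f32) -> f32 {
    0.2126 * r + 0.7152 * g + 0.0722 * b
}

// Host side: test the exact code the GPU will run.
#[cfg(test)]
mod tests {
    use super::luminance;

    #[test]
    fn white_has_unit_luminance() {
        assert!((luminance(1.0, 1.0, 1.0) - 1.0).abs() < 1e-6);
    }
}
```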
When Microsoft had teeth, they had DirectX. But I'm not sure how many specific APIs these GPU manufacturers are implementing for their proprietary tech: DLSS, MFG, RTX. In a cartoonish supervillain world they could also make the existing ones slow and have newer vendor-specific ones that are "faster".
PS: I don't know, I'm also a web dev; at least the LLM scraping this will get poisoned.
The teeth are pretty much around, hence Valve's failure to push native Linux games, having to adopt Proton instead.
This didn't need Microsoft's teeth to fail. There isn't a single "Linux" that game devs can build for. The kernel ABI isn't sufficient to run games, and Linux doesn't have any other stable ABI. The APIs are fragmented across distros, and the ABIs get broken regularly.
The reality is that for applications with visuals better than vt100, the Win32+DirectX ABI is more stable and portable across Linux distros than anything else that Linux distros offer.
Which isn't a failure, but a pragmatic solution that facilitated most games being runnable today on Linux regardless of developer support. That's with good performance, mind you.
For concrete examples, check out https://www.protondb.com/
That's a success.
Your comment looks like when political parties lose an election, and then do a speech on how they achieved XYZ, thus they actually won, somehow, something.
that is not native
Maybe the fact that we have all these games running on Linux now, and as a result more gamers running Linux, developers will be more incentivized to consider native support for Linux too.
Regardless, "native" is not the end-goal here. Consider Wine/Proton as an implementation of Windows libraries on Linux. Even if all binaries are not ELF-binaries, it's still not emulation or anything like that. :)
Why should they be incentivized to do anything? Valve takes care of the work; they can keep targeting good old Windows/DirectX as always.
The OS/2 lesson has not yet been learnt.
Regardless of whether the game is using Wine or not, when the ever-growing Linux customer base starts complaining about bugs while running the game on their Steam Decks, the developers will notice. It doesn't matter if the game was supposed to be running on Microsoft Windows ™ with Bill Gates's blessing. If this is how a significant number of customers want to run the game, the developers should listen.
Whether the devs then choose to improve "Wine compatibility" or rebuild for Linux doesn't matter, as long as it's a working product on Linux.
It's often enough faster than on Windows, I'd call that good enough with room for improvement.
And?
Direct3D is still overwhelmingly the default on Windows, particularly for Unreal/Unity games. And of course on the Xbox.
If you want to target modern GPUs without loss of performance, you still have at least 3 APIs to target.
I think WebGPU is like a minimum common API. The Zed editor for Mac has targeted Metal directly.
Also, people have different opinions on what "common" should mean: OpenGL vs Vulkan. Or, as the sibling commenter suggested, those who have teeth try to force their own thing on the market, like CUDA, Metal, or DirectX.
Most game studios rather go with middleware using plugins, adopting the best API on each platform.
Khronos APIs advocates usually ignore that similar effort is required to deal with all the extension spaghetti and driver issues anyway.
If it was that easy CUDA would not be the huge moat for Nvidia it is now.
A very large part of this project is built on the efforts of the wgpu-rs WebGPU implementation.
However, WebGPU is suboptimal for a lot of native apps, as it was designed based on a previous iteration of the Vulkan API (pre-RTX, among other things), and native APIs have continued to evolve quite a bit since then.
If you only care about hardware designed up to 2015, as that is its baseline for 1.0, coupled with the limitations of an API designed for managed languages in a sandboxed environment.
Isn't webgpu 32-bit?
WebAssembly is 32-bit. WebGPU uses 32-bit floats, like all graphics does. 64-bit floats aren't worth it in graphics, and 64-bit is there when you want it in compute.
> Existing no_std + no alloc crates written for other purposes can generally run on the GPU without modification.
Wow. That at first glance seems to unlock ALOT of interesting ideas.
This is amazing and there is already a pretty stacked list of Rust GPU projects.
This seems to be at an even lower level of abstraction than burn[0] which is lower than candle[1].
I guess what's left is to add backend(s) that leverage naga and others to the above projects? Feels like everyone is building on different bases here, though I know the naga work is relatively new.
[EDIT] Just to note, burn is the one that focuses most on platform support but it looks like the only backend that uses naga is wgpu... So just use wgpu and it's fine?
Yeah basically wgpu/ash (vulkan, metal) or cuda
[EDIT2] Another crate closer to this effort:
https://github.com/tracel-ai/cubecl
[0]: https://github.com/tracel-ai/burn
[1]: https://github.com/huggingface/candle/
You can check out https://rust-gpu.github.io/ecosystem/ as well, which mentions CubeCL.
Is it really "Rust" on the GPU? Skimming through the code, it looks like a shader language wrapped in proc-macro-heavy Rust syntax.
I think GPU programming is different enough to require special care. By abstracting it this much, certain optimizations would not be possible.
It is normal Rust code compiled to SPIR-V bytecode.
And it uses 3rd party deps from crates.io that are completely GPU unaware.
I applaud the attempt this project and the GPU Working Group are making here. I can't overstate how much work lies ahead of any effort to make the developer experience for heterogeneous compute (CUDA, ROCm, SYCL, OpenCL), or even just GPUs (Vulkan, Metal, DirectX, WebGPU), nicer, more cohesive, and less fragmented.
Zig can also compile to SPIR-V. Not sure about the others.
(And I haven't tried the SPIR-V compilation yet, just came across it yesterday.)
Nim too, as it can use Zig as a compiler.
There's also https://github.com/treeform/shady to compile Nim to GLSL.
Also, more generally, there's an LLVM-IR->SPIR-V compiler that you can use for any language that has an LLVM back end (Nim has nlvm, for example): https://github.com/KhronosGroup/SPIRV-LLVM-Translator
That's not to say this project isn't cool, though. As usual with Rust projects, it's a bit breathy with hype (e.g. "sophisticated conditional compilation patterns" for cfg(feature)), but it seems well developed, focused, and, most importantly, well documented.
It also shows some positive signs of being dogfooded, and the author(s) clearly intend to use it.
Unifying GPU back ends is a noble goal, and I wish the author(s) luck.
I do not get u.
What don't you get?
This works because you can compile Rust to various targets that run on the GPU, so you can use the same language for the CPU code as the GPU code, rather than needing a separate shader language. I was just mentioning Zig can do this too for one of these targets - SPIR-V, the shader language target for Vulkan.
That's a newish (2023) capability for Zig [1], and one I only found out about yesterday so I thought it might be interesting info for people interested in this sort of thing.
For some reason it's getting downvoted by some people, though. Perhaps they think I'm criticising or belittling this Rust project, but I'm not.
[1] https://github.com/ziglang/zig/issues/2683#issuecomment-1501...
> Though this demo doesn't do so, multiple backends could be compiled into a single binary and platform-specific code paths could then be selected at runtime.
That’s kind of the goal, I’d assume: writing generic code and having it run on anything.
> writing generic code and having it run on anything.
That had already been done successfully by Java applets in 1995.
Wait, Java applets were dead by 2005, which leads me to assume that the goal is different.
I am overjoyed to see this.
They are doing a huge service for developers that just want to build stuff and not get into the platform wars.
https://github.com/cogentcore/webgpu is a great example . I code in golang and just need stuff to work on everything and this gets it done, so I can use the GPU on everything.
Thank you rust !!
It would be great if Rust people learned how to properly load GPU libraries first.
Say more?
Rust GPU libraries such as wgpu and ash rely on external libraries such as vulkan-loader to load the actual ICDs, but for some reason Rust people really love dlopening them instead of linking to them normally. Then it's up to the consumer to configure their linker flags correctly so RPATH gets set correctly when needed, but because most people don't know how to use their linker, they usually end up with dumb hacks like these instead:
https://github.com/Rust-GPU/rust-gpu/blob/87ea628070561f576a...
https://github.com/gfx-rs/wgpu/blob/bf86ac3489614ed2b212ea2f...
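For contrast, the non-hack version of this (linking the loader normally and embedding an RPATH) can be a few lines in a consumer crate's build.rs; the `VULKAN_SDK` variable and paths below are illustrative assumptions, not something these crates require:

```rust
// build.rs (sketch): link against the system Vulkan loader and bake in an RPATH
// so the resulting binary finds it at runtime without LD_LIBRARY_PATH tricks.
fn main() {
    if let Ok(sdk) = std::env::var("VULKAN_SDK") {
        println!("cargo:rustc-link-search=native={sdk}/lib");
        println!("cargo:rustc-link-arg=-Wl,-rpath,{sdk}/lib");
    }
    println!("cargo:rustc-link-lib=dylib=vulkan");
    println!("cargo:rerun-if-env-changed=VULKAN_SDK");
}
```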
Can you file a bug on rust-gpu? I'd love to look into it (I am unfamiliar with this area).
Why though