WebGPU is an emerging web technology that promises to bring cutting-edge GPU computing capabilities to the web, benefiting all consumer platforms through a shared code base.
Although its predecessor, WebGL, is powerful, it lacks compute shader capabilities entirely, which limits its range of applications.
WGSL (WebGPU Shading Language) draws on best practices from languages like Rust and GLSL.
As I was learning to use WebGPU, I came across some gaps in the documentation: I was hoping to find a simple starting point for using compute shaders to compute data for vertex and fragment shaders.
The single-file HTML for all the code in this tutorial can be found at https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb - read on for a detailed breakdown.
Here is a one-click demo of this HTML running on my domain: https://www.php.cn/link/bed827b4857bf056d05980661990ccdc (it requires a WebGPU-enabled browser such as Chrome or Edge: https://www.php.cn/link/bae00fb8b4115786ba5dbbb67b9b177a).
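The linked HTML handles all of the boilerplate; for orientation, acquiring a GPU device and configuring the canvas looks roughly like the sketch below (a minimal, standard WebGPU initialization - the canvas id is my own assumption, not taken from the sample):

<code>// Minimal WebGPU initialization sketch - the canvas id is an assumption.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

const canvas = document.getElementById('canvas');
const context = canvas.getContext('webgpu');
context.configure({
  device,
  format: navigator.gpu.getPreferredCanvasFormat(),
});</code>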
This is a particle simulation - it advances in discrete time steps.
Time is tracked on the JS/CPU side and passed to the GPU as a (float) uniform.
Particle data is managed entirely on the GPU - though the CPU is still involved: it allocates the memory and sets the initial values. It is also possible to read the data back to the CPU, but that is omitted in this tutorial.
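As a rough sketch of what that allocation and initialization can look like (assuming each particle stores only a vec2<f32> position, and using the 10k particle count from the example later in the article - the exact code in the sample may differ):

<code>// Sketch: allocate a GPU storage buffer for the particle positions and
// upload initial values from the CPU.
const PARTICLE_COUNT = 10000;
const particleBuffer = device.createBuffer({
  size: PARTICLE_COUNT * 2 * Float32Array.BYTES_PER_ELEMENT, // x and y per particle
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
});

// zero-initialized starting positions; any Float32Array of the right size works
const initialPositions = new Float32Array(PARTICLE_COUNT * 2);
device.queue.writeBuffer(particleBuffer, 0, initialPositions);</code>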
The magic of this setup is that every particle is updated in parallel with all the others, enabling incredible calculation and rendering speeds in the browser (parallelization uses as many GPU cores as possible; dividing the particle count by the core count gives roughly how many sequential iterations each core performs per update step).
The mechanism WebGPU uses for data exchange between CPU and GPU is binding - JS arrays (such as Float32Array) can be "bound" to memory locations in WGSL using WebGPU buffers. WGSL memory locations are identified by two integers: the group number and the binding number.
In our case, both the compute shader and the vertex shader rely on two data bindings: time and particle position.
The time uniform is declared in both the compute shader (https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L43) and the vertex shader (https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L69) - the compute shader uses it to update positions, and the vertex shader uses it to update colors.
Let’s take a look at the binding setup in JS and WGSL, starting with compute shaders.
<code>const computeBindGroup = device.createBindGroup({
  /* see the computePipeline definition at
     https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L102 -
     it is what links the JS strings containing WGSL code to WebGPU */
  layout: computePipeline.getBindGroupLayout(0), // group number 0
  entries: [{
    // the time binding is at binding number 0
    binding: 0,
    resource: {
      /* for reference, the buffer is declared as:
         const timeBuffer = device.createBuffer({
           size: Float32Array.BYTES_PER_ELEMENT,
           usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST})
         https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L129 */
      buffer: timeBuffer
    }
  }, {
    // particle position data is at binding number 1 (still in group 0)
    binding: 1,
    resource: { buffer: particleBuffer }
  }]
});</code>
and the corresponding declaration in the compute shader
<code>// from the compute shader - the vertex shader has a similar declaration
@group(0) @binding(0) var<uniform> t: f32;
@group(0) @binding(1) var<storage, read_write> particles : array<Particle>;</code>
Importantly, we bind the timeBuffer on the JS side to WGSL by matching the group number and binding number in JS and WGSL.
This allows us to control the value of the variable from JS:
<code>/* only 1 element is needed in the array, since time is a single float value */
const timeJs = new Float32Array(1)
let t = 5.3
/* plain JS, just set the value */
timeJs.set([t], 0)
/* pass the data from CPU/JS to GPU/WGSL */
device.queue.writeBuffer(timeBuffer, 0, timeJs);</code>
We store and update particle positions directly in GPU-accessible memory – allowing us to update them in parallel by taking advantage of the GPU’s massive multi-core architecture.
Parallelization is coordinated with the help of the workgroup size, declared in the compute shader:
<code>@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) global_id : vec3<u32>) {
  // ...
}</code>
The @builtin(global_invocation_id) global_id : vec3<u32> parameter is the key piece: by definition, global_invocation_id = workgroup_id * workgroup_size + local_invocation_id - which means it can be used as a particle index.
For example, if we have 10k particles and workgroup_size is 64, we need to schedule Math.ceil(10000/64) workgroups. Each time a compute pass is triggered from JS, we will explicitly tell the GPU to perform this amount of work:
<code>computePass.dispatchWorkgroups(Math.ceil(PARTICLE_COUNT / WORKGROUP_SIZE));</code>
If PARTICLE_COUNT == 10000 and WORKGROUP_SIZE == 64, we launch 157 workgroups (10000/64 = 156.25); within each workgroup, local_invocation_id ranges from 0 to 63 (while workgroup_id ranges from 0 to 156). Since 157 * 64 = 10048, we end up with slightly more invocations than particles in the final workgroup. We handle the overflow by discarding the redundant invocations.
Here is the final compute shader after taking these factors into account:
<code>@compute @workgroup_size(${WORKGROUP_SIZE})
fn main(@builtin(global_invocation_id) global_id : vec3<u32>) {
  let index = global_id.x;
  // discard the extra invocations, since the workgroup grid is not aligned
  if (index >= arrayLength(&particles)) {
    return;
  }
  /* convert the integer index to a float so we can compute the
     position update from the index (and the time) */
  let fi = f32(index);
  particles[index].position = vec2<f32>(
    /* no grand intent behind the formula - it is just an example
       using time + index */
    cos(fi * 0.11) * 0.8 + sin((t + fi) / 100) / 10,
    sin(fi * 0.11) * 0.8 + cos((t + fi) / 100) / 10
  );
}</code>
These values persist across compute passes because particles is declared as a storage variable.
In order to read the particle positions written by the compute shader from the vertex shader, we need a read-only view, since only the compute shader is allowed to write to the storage.
Here is the declaration in WGSL:
<code>@group(0) @binding(0) var<uniform> t: f32;
@group(0) @binding(1) var<storage> particles : array<vec2<f32>>;
/* or, equivalently:
@group(0) @binding(1) var<storage, read> particles : array<vec2<f32>>;
*/</code>
Trying to re-use the compute shader's read_write access mode in the vertex shader simply errors out:
<code>var with 'storage' address space and 'read_write' access mode cannot be used by vertex pipeline stage</code>
Note that the binding numbers in the vertex shader do not have to match the compute shader's binding numbers - they only need to match the bind group declared for the render pipeline in JS:
<code>const renderBindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [{
    binding: 0,
    resource: { buffer: timeBuffer }
  }, {
    binding: 1,
    resource: { buffer: particleBuffer }
  }]
});</code>
In the GitHub sample code I selected binding: 2 (https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L70) - just to explore the boundaries of the constraints imposed by WebGPU.
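The vertex and fragment shaders themselves are short - the real ones are in the linked HTML, but a hedged sketch of what they can look like is below (the entry point names vs_main/fs_main and the color formula are my own assumptions, and each particle is assumed to be drawn as a single point):

<code>// Sketch only - entry point names and color formula are assumptions.
@group(0) @binding(0) var<uniform> t: f32;
@group(0) @binding(1) var<storage> particles : array<vec2<f32>>;

struct VertexOut {
  @builtin(position) pos : vec4<f32>,
  @location(0) color : vec4<f32>,
};

@vertex
fn vs_main(@builtin(vertex_index) vi : u32) -> VertexOut {
  var out : VertexOut;
  // each particle is drawn as a single point at its stored position
  out.pos = vec4<f32>(particles[vi], 0.0, 1.0);
  // cycle the color over time
  let c = 0.5 + 0.5 * sin(t / 100.0);
  out.color = vec4<f32>(c, 0.3, 1.0 - c, 1.0);
  return out;
}

@fragment
fn fs_main(@location(0) color : vec4<f32>) -> @location(0) vec4<f32> {
  return color;
}</code>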
With all settings in place, the update and render loops are coordinated in JS:
<code>/* start the simulation at t = 0 */
let t = 0
function frame() {
  /* use a constant integer time step for simplicity - it renders
     consistently regardless of frame rate. */
  t += 1
  timeJs.set([t], 0)
  device.queue.writeBuffer(timeBuffer, 0, timeJs);

  // compute pass to update the particle positions
  const computePassEncoder = device.createCommandEncoder();
  const computePass = computePassEncoder.beginComputePass();
  computePass.setPipeline(computePipeline);
  computePass.setBindGroup(0, computeBindGroup);
  // it is important to dispatch the right number of workgroups to cover all particles
  computePass.dispatchWorkgroups(Math.ceil(PARTICLE_COUNT / WORKGROUP_SIZE));
  computePass.end();
  device.queue.submit([computePassEncoder.finish()]);

  // render pass
  const commandEncoder = device.createCommandEncoder();
  const passEncoder = commandEncoder.beginRenderPass({
    colorAttachments: [{
      view: context.getCurrentTexture().createView(),
      clearValue: { r: 0.0, g: 0.0, b: 0.0, a: 1.0 },
      loadOp: 'clear',
      storeOp: 'store',
    }]
  });
  passEncoder.setPipeline(pipeline);
  passEncoder.setBindGroup(0, renderBindGroup);
  passEncoder.draw(PARTICLE_COUNT);
  passEncoder.end();
  device.queue.submit([commandEncoder.finish()]);

  requestAnimationFrame(frame);
}
frame();</code>
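For completeness, the computePipeline and pipeline objects used above are created from the JS strings containing the WGSL source - the actual definition is at the #L102 link referenced earlier. A hedged sketch (the computeShaderCode/renderShaderCode variables, the vs_main/fs_main entry point names, and the point-list topology are assumptions):

<code>// Sketch: build the pipelines from JS strings containing the WGSL code.
const computeModule = device.createShaderModule({ code: computeShaderCode });
const renderModule = device.createShaderModule({ code: renderShaderCode });

const computePipeline = device.createComputePipeline({
  layout: 'auto', // lets getBindGroupLayout(0) derive the layout from the WGSL
  compute: { module: computeModule, entryPoint: 'main' },
});

const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module: renderModule, entryPoint: 'vs_main' },
  fragment: {
    module: renderModule,
    entryPoint: 'fs_main',
    targets: [{ format: navigator.gpu.getPreferredCanvasFormat() }],
  },
  primitive: { topology: 'point-list' }, // one vertex per particle
});</code>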
WebGPU unleashes the power of massively parallel GPU computing in the browser.
It runs in passes - each pass has its local variables wired up through a pipeline, with memory bindings bridging CPU memory and GPU memory.
Compute dispatches allow parallel workloads to be coordinated through workgroups.
While it does require some heavy setup, I think the local binding/state style is a huge improvement over WebGL's global state model - making it easier to work with while finally bringing the power of GPU compute to the web.