Table of Contents
High-Level Setup
Bindings
Time - uniforms
Particle Position - WGSL Storage
Reading the compute shader's particle positions in the vertex shader
Run the simulation step by step
Conclusion

WebGPU tutorial: compute, vertex, and fragment shaders on the web

Jan 17, 2025, 08:30 AM

WebGPU is an emerging technology that promises to bring cutting-edge GPU computing capabilities to the web, benefiting all consumer platforms through a shared code base.

Although its predecessor, WebGL, is powerful, it lacks compute shader capabilities, which limits its range of applications.

WGSL (WebGPU Shading Language) draws on best practices from languages like Rust and GLSL.

As I was learning to use WebGPU, I came across some gaps in the documentation: I was hoping to find a simple starting point for using compute shaders to compute data for vertex and fragment shaders.

The single-file HTML for all the code in this tutorial can be found at https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb - read on for a detailed breakdown.

Here is a one-click demo of this HTML running on my domain: https://www.php.cn/link/bed827b4857bf056d05980661990ccdc (requires a WebGPU-enabled browser such as Chrome or Edge: https://www.php.cn/link/bae00fb8b4115786ba5dbbb67b9b177a).

High-Level Setup

This is a particle simulation that advances in discrete time steps.

Time is tracked on the JS/CPU side and passed to the GPU as a (float) uniform.

Particle data is managed entirely on the GPU - though the CPU still interacts with it, allocating the memory and setting the initial values. It is also possible to read the data back to the CPU, but that is omitted in this tutorial.
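For completeness, here is a hedged sketch of what that omitted readback could look like. This helper is not part of the tutorial's source; it assumes `particleBuffer` was created with `GPUBufferUsage.COPY_SRC` in addition to `STORAGE`, and that `device` is the `GPUDevice` from setup:

```javascript
// Sketch only - assumes particleBuffer has GPUBufferUsage.COPY_SRC
async function readParticles(device, particleBuffer, byteLength) {
  // storage buffers cannot be mapped directly; copy into a mappable staging buffer
  const staging = device.createBuffer({
    size: byteLength,
    usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
  });
  const encoder = device.createCommandEncoder();
  encoder.copyBufferToBuffer(particleBuffer, 0, staging, 0, byteLength);
  device.queue.submit([encoder.finish()]);
  // wait for the GPU to finish, then view the bytes as floats
  await staging.mapAsync(GPUMapMode.READ);
  const data = new Float32Array(staging.getMappedRange().slice(0));
  staging.unmap();
  return data;
}
```

The extra staging buffer is required because buffers with `STORAGE` usage cannot also be mapped for CPU reads.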

The magic of this setup is that each particle is updated in parallel with all the other particles, enabling incredible computation and rendering speeds in the browser (parallelization is bounded by the number of cores on the GPU; dividing the particle count by the core count gives the true number of sequential cycles per core per update step).
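As a back-of-the-envelope illustration of that division (the core count here is an assumption for illustration only - WebGPU does not expose the physical core count):

```javascript
// Hypothetical numbers, purely illustrative
const PARTICLE_COUNT = 10000;
const ASSUMED_GPU_CORES = 2048;

// if every core updates one particle per cycle, each core needs
// roughly this many sequential cycles per update step
const cyclesPerCore = Math.ceil(PARTICLE_COUNT / ASSUMED_GPU_CORES);
console.log(cyclesPerCore); // 5
```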

Bindings

The mechanism WebGPU uses for data exchange between CPU and GPU is binding - JS arrays (such as Float32Array) can be "bound" to memory locations in WGSL using WebGPU buffers. WGSL memory locations are identified by two integers: the group number and the binding number.

In our case, both the compute shader and the vertex shader rely on two data bindings: time and particle position.

Time - uniforms

Uniform definitions exist in both the compute shader (https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L43) and the vertex shader (https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L69) - the compute shader uses the time to update particle positions, and the vertex shader uses it to update colors.

Let’s take a look at the binding setup in JS and WGSL, starting with compute shaders.

<code>const computeBindGroup = device.createBindGroup({
  /*
    See the computePipeline definition at
    https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L102

    It is what lets WebGPU link the JS strings to the WGSL code
  */
  layout: computePipeline.getBindGroupLayout(0), // group number 0
  entries: [{
    // time is bound at binding number 0
    binding: 0,
    resource: {
      /*
      For reference, the buffer is declared as:

      const timeBuffer = device.createBuffer({
        size: Float32Array.BYTES_PER_ELEMENT,
        usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST
      })

      https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L129
      */
      buffer: timeBuffer
    }
  },
  {
    // particle position data at binding number 1 (still group 0)
    binding: 1,
    resource: {
      buffer: particleBuffer
    }
  }]
});</code>

and here is the corresponding declaration in the compute shader:

<code>// from the compute shader - there is a similar declaration in the vertex shader
@group(0) @binding(0) var<uniform> t: f32;
@group(0) @binding(1) var<storage, read_write> particles : array<Particle>;</code>

Importantly, we bind the timeBuffer on the JS side to WGSL by matching the group number and binding number in JS and WGSL.

This allows us to control the value of the variable from JS:

<code>/* only 1 element is needed in the array, since time is a single float value */
const timeJs = new Float32Array(1)
let t = 5.3
/* plain JS, just setting the value */
timeJs.set([t], 0)
/* pass the data from CPU/JS to GPU/WGSL */
device.queue.writeBuffer(timeBuffer, 0, timeJs);</code>

Particle Position - WGSL Storage

We store and update particle positions directly in GPU-accessible memory – allowing us to update them in parallel by taking advantage of the GPU’s massive multi-core architecture.

Parallelization is coordinated with the help of the workgroup size, declared in the compute shader:

<code>@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) global_id : vec3<u32>) {
  // ...
}</code>

The @builtin(global_invocation_id) global_id : vec3<u32> value provides a per-thread identifier.

By definition, global_invocation_id = workgroup_id * workgroup_size + local_invocation_id - this means it can be used as a particle index.
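In JS terms, the identity looks like this (a sketch for a 1D dispatch, using the same workgroup size as the shader):

```javascript
const WORKGROUP_SIZE = 64;

// mirrors WGSL's global_invocation_id for a 1D dispatch
function globalInvocationId(workgroupId, localInvocationId) {
  return workgroupId * WORKGROUP_SIZE + localInvocationId;
}

// thread 3 of workgroup 2 handles particle index 131
console.log(globalInvocationId(2, 3)); // 131
```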

For example, if we have 10k particles and workgroup_size is 64, we need to schedule Math.ceil(10000/64) workgroups. Each time a compute pass is triggered from JS, we will explicitly tell the GPU to perform this amount of work:

<code>computePass.dispatchWorkgroups(Math.ceil(PARTICLE_COUNT / WORKGROUP_SIZE));</code>

If PARTICLE_COUNT == 10000 and WORKGROUP_SIZE == 64, we will launch 157 workgroups (10000/64 = 156.25). Within each workgroup, local_invocation_id ranges from 0 to 63, while workgroup_id ranges from 0 to 156. Since 157 * 64 = 10048, we end up with slightly more invocations than particles in the last workgroup. We handle the overflow by discarding the redundant invocations.
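The arithmetic above can be spelled out in a few lines of JS:

```javascript
const PARTICLE_COUNT = 10000;
const WORKGROUP_SIZE = 64;

// round up so every particle is covered by some invocation
const workgroups = Math.ceil(PARTICLE_COUNT / WORKGROUP_SIZE); // 157
const totalInvocations = workgroups * WORKGROUP_SIZE;          // 10048
const discarded = totalInvocations - PARTICLE_COUNT;           // 48 overflow threads
```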

Here is the final compute shader after taking these factors into account:

<code>@compute @workgroup_size(${WORKGROUP_SIZE})
fn main(@builtin(global_invocation_id) global_id : vec3<u32>) {
  let index = global_id.x;
  // discard the extra invocations caused by the workgroup grid misalignment
  if (index >= arrayLength(&particles)) {
    return;
  }
  /* convert the integer index to a float so we can compute a position update from the index (and time) */
  let fi = f32(index);
  particles[index].position = vec2<f32>(
    /* no grand intent behind the formula - just an example of using time + index */
    cos(fi * 0.11) * 0.8 + sin((t + fi)/100)/10,
    sin(fi * 0.11) * 0.8 + cos((t + fi)/100)/10
  );
}</code>

These values will persist across compute passes because particles is declared as a storage variable.
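As a sanity check, here is a CPU-side mirror of the update formula (in JS double precision, so it will not match the GPU's f32 results bit-for-bit) - handy for inspecting a few positions without any readback machinery:

```javascript
// JS mirror of the WGSL position update shown above
function particlePosition(index, t) {
  const fi = index;
  return [
    Math.cos(fi * 0.11) * 0.8 + Math.sin((t + fi) / 100) / 10,
    Math.sin(fi * 0.11) * 0.8 + Math.cos((t + fi) / 100) / 10,
  ];
}

// every coordinate stays within [-0.9, 0.9]: |cos| * 0.8 + |sin| / 10 <= 0.9
```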

Reading the compute shader's particle positions in the vertex shader

In order to read the particle positions written by the compute shader in the vertex shader, we need a read-only view, since only compute shaders can write to storage.

Here is the declaration in WGSL:

<code>@group(0) @binding(0) var<uniform> t: f32;
@group(0) @binding(1) var<storage> particles : array<vec2<f32>>;
/*
or equivalently:

@group(0) @binding(1) var<storage, read> particles : array<vec2<f32>>;
*/</code>

Trying to re-use the compute shader's read_write access mode here simply errors:

<code>var with 'storage' address space and 'read_write' access mode cannot be used by vertex pipeline stage</code>

Note that the binding numbers in the vertex shader do not have to match the compute shader's binding numbers - they only need to match the bind group declaration used for the render pipeline:

<code>const renderBindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [{
    binding: 0,
    resource: {
      buffer: timeBuffer
    }
  },
  {
    binding: 1,
    resource: {
      buffer: particleBuffer
    }
  }]
});</code>

I chose binding: 2 in the GitHub sample code (https://www.php.cn/link/2e5281ee978b78d6f5728aad8f28fedb#L70) - just to explore the boundaries of the constraints imposed by WebGPU.

Run the simulation step by step

With all settings in place, the update and render loops are coordinated in JS:

<code>/* start the simulation at t = 0 */
let t = 0
function frame() {
  /*
    a constant integer time step for simplicity - renders consistently
    regardless of frame rate.
  */
  t += 1
  timeJs.set([t], 0)
  device.queue.writeBuffer(timeBuffer, 0, timeJs);

  // compute pass to update the particle positions
  const computePassEncoder = device.createCommandEncoder();
  const computePass = computePassEncoder.beginComputePass();
  computePass.setPipeline(computePipeline);
  computePass.setBindGroup(0, computeBindGroup);
  // it is important to dispatch the right number of workgroups to process all the particles
  computePass.dispatchWorkgroups(Math.ceil(PARTICLE_COUNT / WORKGROUP_SIZE));
  computePass.end();
  device.queue.submit([computePassEncoder.finish()]);

  // render pass
  const commandEncoder = device.createCommandEncoder();
  const passEncoder = commandEncoder.beginRenderPass({
    colorAttachments: [{
      view: context.getCurrentTexture().createView(),
      clearValue: { r: 0.0, g: 0.0, b: 0.0, a: 1.0 },
      loadOp: 'clear',
      storeOp: 'store',
    }]
  });
  passEncoder.setPipeline(pipeline);
  passEncoder.setBindGroup(0, renderBindGroup);
  passEncoder.draw(PARTICLE_COUNT);
  passEncoder.end();
  device.queue.submit([commandEncoder.finish()]);

  requestAnimationFrame(frame);
}
frame();</code>

Conclusion

WebGPU unleashes the power of massively parallel GPU computing in the browser.

It runs in passes - each pass gets its variables through a pipeline with memory bindings (bridging CPU memory and GPU memory).

Compute passes allow parallel workloads to be coordinated through workgroups.

While it does require some heavy setup, I think the local binding/state style is a huge improvement over WebGL's global state model - making it easier to use while finally bringing the power of GPU compute to the web.

