This series introduces WebGPU, and computer graphics in general.
First, let's look at what we are going to build:
Life Game
3D Rendering
3D Rendering, but with lighting
Rendering 3D Model
Apart from a basic knowledge of JavaScript, no prior experience is needed.
The complete tutorial, along with the source code, is available on my GitHub.
WebGPU is a relatively new API for the GPU. Despite the name, WebGPU can be considered a layer on top of native graphics APIs such as Vulkan, DirectX 12, and Metal. It is designed to be a low-level API and is intended for high-performance applications, such as games and simulations.
In this chapter, we will draw something on the screen. The first part will refer to the Google Codelabs Tutorial. We will create a life game on the screen.
We will create an empty vanilla JS project in Vite with TypeScript enabled, then clear out the extra code, leaving only main.ts.
const main = async () => {
  console.log('Hello, world!')
}

main()
Before actual coding, please check if your browser has WebGPU enabled. You can check it on WebGPU Samples.
Chrome now enables WebGPU by default. On Safari, you should go to the developer settings, open the feature flag settings, and enable WebGPU.
We also need to enable the types for WebGPU: install @webgpu/types, and in the tsc compiler options, add "types": ["@webgpu/types"].
There is a fair amount of boilerplate code in WebGPU; here is how it looks.
First, we need access to the GPU. In WebGPU, this is done through the concept of an adapter, which is a bridge between the browser and the GPU.
const adapter = await navigator.gpu.requestAdapter();
Then we need to request a device from the adapter.
const device = await adapter.requestDevice(); console.log(device);
We draw our triangle on the canvas. We need to get the canvas element and configure it.
const canvas = document.getElementById('app') as HTMLCanvasElement;
const context = canvas.getContext("webgpu")!;
const canvasFormat = navigator.gpu.getPreferredCanvasFormat();
context.configure({
  device: device,
  format: canvasFormat,
});
Here, we use getContext to get a rendering context for the canvas. By specifying webgpu, we get a context that is responsible for rendering with WebGPU.
The canvas format is essentially the color format of the canvas texture, for example bgra8unorm. We usually just use the preferred format reported by the browser.
Lastly, we configure the context with the device and the format.
Before diving further into the engineering details, we first must understand how GPU handles rendering.
The GPU rendering pipeline is a series of steps that the GPU takes to render an image.
A program that runs on the GPU is called a shader. Shaders are written in a special programming language that we will discuss later.
At a high level, the render pipeline runs the vertex shader over the vertices, rasterizes the resulting primitives into pixels, and runs the fragment shader to color them.
Depending on the primitive type, the smallest unit the GPU can render, the pipeline may have different steps. Typically, we use triangles, which tells the GPU to treat every group of 3 vertices as a triangle.
A render pass is one step of the full GPU rendering process. When a render pass begins, the GPU starts rendering to its attachments; when it ends, the results are finalized.
To create a render pass, we need to create an encoder, which is responsible for encoding the render pass into commands the GPU understands.
const encoder = device.createCommandEncoder();

Then we create a render pass.
const pass = encoder.beginRenderPass({
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),
    loadOp: "clear",
    storeOp: "store",
  }]
});

Here, we create a render pass with a color attachment. An attachment is a concept in GPU programming that represents an image to be rendered to. An image may have many aspects that the GPU needs to process, and each of them is an attachment.
Here we only have one attachment, the color attachment. The view is the surface the GPU will render onto; here we set it to the texture of the canvas.
loadOp is the operation the GPU performs before the render pass; clear means the GPU first clears all the data left over from the last frame. storeOp is the operation the GPU performs after the render pass; store means the GPU stores the result to the texture.
loadOp can be load, which preserves the data from the last frame, or clear, which clears the data from the last frame. storeOp can be store, which stores the data to the texture, or discard, which discards the data.
Now, just call pass.end() to end the render pass. The commands so far are recorded by the encoder, not yet executed.
To get the finished command buffer, use the following code,

const commandBuffer = encoder.finish();

And, finally, submit the command buffer to the render queue of the GPU.
Now, you should see an ugly black canvas.
Based on our stereotypical ideas about 3D, we might expect empty space to be blue. We can do that by setting the clear color.
Drawing a Triangle Using Shader
Now, we will draw a triangle on the canvas. We will use a shader to do that. The shader language will be wgsl, WebGPU Shading Language.
Now, suppose we want to draw a triangle with the following coordinates,
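For example (these exact values are an illustrative assumption), three 2D vertices in normalized device coordinates, where both axes run from -1 to 1:

```typescript
// Three 2D vertices of a triangle; each pair of floats is (x, y) in NDC.
// The exact coordinates are an assumed example.
const vertices = new Float32Array([
   0.0,  0.5,  // top
  -0.5, -0.5,  // bottom left
   0.5, -0.5,  // bottom right
]);
```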
As we stated before, to complete a render pipeline, we need a vertex shader and a fragment shader.
Vertex Shader
Use the following code to create shader modules.
label here is simply a name, which is meant for debugging. code is the actual shader code.
The vertex shader is a function that takes per-vertex input and returns the position of the vertex. However, contrary to what we might expect, the vertex shader returns a four dimensional vector, not a three dimensional one. The fourth dimension, w, is used for perspective division. We will discuss it later.
For now, you can simply regard a four dimensional vector (x, y, z, w) as the three dimensional vector (x / w, y / w, z / w).
However, there is another problem: how to pass data into the shader, and how to get data back out of it.
To pass data to the shader, we use a vertex buffer, a buffer that contains the data of the vertices. We can create a buffer with the following code,
Here we create a buffer with a size of 24 bytes: 6 floats of 4 bytes each, which is the size of our vertex data.
usage describes how the buffer will be used, which is VERTEX for vertex data. GPUBufferUsage.COPY_DST means this buffer is a valid copy destination. For every buffer whose data is written by the CPU, we need to set this flag.
Mapping the buffer means making it accessible to the CPU, so the CPU can read and write it. Unmapping means releasing that access, so the CPU can no longer read or write the buffer, and its contents become available to the GPU.
Now, we can write the data to the buffer.
Here, we map the buffer to the CPU, and write the data to the buffer. Then we unmap the buffer.
vertexBuffer.getMappedRange() will return the range of the buffer that is mapped to the CPU. We can use it to write the data to the buffer.
However, these are just raw bytes, and the GPU doesn't know how to interpret them. We need to define the layout of the buffer.
Here, arrayStride is the number of bytes the GPU needs to skip forward in the buffer when it's looking for the next input. For example, if the arrayStride is 8, the GPU will skip 8 bytes to get the next input.
Since we use float32x2 here, the stride is 8 bytes: 4 bytes for each float, and 2 floats for each vertex.
Now we can write the vertex shader.
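A sketch of the vertex shader, stored as a WGSL string; the entry point name vertexMain is an illustrative choice:

```typescript
// A sketch of the vertex shader source in WGSL.
// `vertexMain` is an assumed entry point name.
const vertexShaderCode = /* wgsl */ `
  @vertex
  fn vertexMain(@location(0) pos: vec2f) -> @builtin(position) vec4f {
    // Lift the 2D position into a 4D clip-space vector (z = 0, w = 1).
    return vec4f(pos, 0, 1);
  }
`;
```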
Here, @vertex means this is a vertex shader. @location(0) is the location of the attribute, which is 0, as previously defined in the buffer layout. Please note that in the shader language, you are dealing with the layout of the buffer, so whenever you pass a value, you need to pass either a struct whose fields have @location defined, or just a value with @location.
vec2f is a two dimensional float vector, and vec4f is a four dimensional float vector. Since the vertex shader is required to return a vec4f position, we need to annotate the return value with @builtin(position).
Fragment Shader
The fragment shader, similarly, takes the interpolated vertex output and outputs the attachments, color in this case. Interpolated means that although only the pixels at the vertices have definite values, for every other pixel the values are interpolated, whether linearly, averaged, or by other means. The color of a fragment is a four dimensional vector: red, green, blue, and alpha.
Please note that each color channel is in the range of 0 to 1, not 0 to 255. In addition, the fragment output here effectively defines the color at each vertex, not the color of the whole triangle; the color inside the triangle is determined by interpolating the vertex colors.
Since we currently do not bother to control the color of each fragment, we can simply return a constant color.
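A sketch of the fragment shader, again as a WGSL string; it ignores its inputs and returns a constant yellow (full red and green, alpha 1), written to the color attachment at @location(0):

```typescript
// A sketch of the fragment shader source in WGSL.
// `fragmentMain` is an assumed entry point name.
const fragmentShaderCode = /* wgsl */ `
  @fragment
  fn fragmentMain() -> @location(0) vec4f {
    return vec4f(1, 1, 0, 1); // constant yellow, fully opaque
  }
`;
```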
Render Pipeline
Then we define our customized render pipeline by supplying the vertex and fragment shaders.
Note that in the fragment shader, we need to specify the format of the target, which is the format of the canvas.
Draw Call
Before the render pass ends, we add the draw call.
Here, in setVertexBuffer, the first parameter is the index of the buffer in the pipeline definition's buffers field, and the second parameter is the buffer itself.
When calling draw, the parameter is the number of vertices to draw. Since we have 3 vertices, we draw 3.
Now, you should see a yellow triangle on the canvas.
Draw Life Game Cells
Now we tweak our code a bit: since we want to build a life game, we need to draw squares instead of triangles.
A square is actually two triangles, so we need to draw 6 vertices. The changes here are simple and don't need a detailed explanation.
Now, you should see a yellow square on the canvas.
Coordinate System
We haven't discussed the coordinate system of the GPU yet. It is, well, rather simple: the x-axis points to the right, the y-axis points up, and the z-axis points into the screen, which makes WebGPU's clip space left-handed.
The x and y coordinates range from -1 to 1, with the origin at the center of the screen. The z-axis runs from 0 to 1, where 0 is the near plane and 1 is the far plane. The z-axis is used for depth: when you do 3D rendering, you cannot just use the z coordinate to place an object; you need perspective division. This space is called NDC, normalized device coordinates.
For example, if you want to draw a square covering the top left quarter of the screen, the corner vertices are (-1, 1), (-1, 0), (0, 1), and (0, 0), though you need two triangles to draw it.
The above is the detailed content of Triangles on Web: Draw Something.