
Overview of WebGL 3D in HTML5 (Part 1): WebGL native development opens a new era of web page 3D rendering

May 16, 2016

WebGL opens a new era of 3D rendering on web pages: it lets 3D content be drawn directly in a canvas element without any plug-in. Like the canvas 2D API, WebGL is driven by script, so the basic steps are similar: prepare the working context, prepare the data, draw the objects into the canvas, and render. What sets 3D apart from 2D is the extra domain knowledge involved, such as the world, lights, textures, cameras, and matrices. There is a good Chinese tutorial on WebGL (the first link in the practical references below), so I will not repeat it here; the following is just a brief summary of what I learned.

Browser support
Since Microsoft has its own graphics development plans and has not supported WebGL, IE currently cannot run WebGL without installing a plug-in. For the other mainstream browsers such as Chrome, Firefox, Safari, and Opera, simply installing the latest version is enough. Besides an up-to-date browser, you also need to make sure the graphics card driver is up to date.
Once these are installed, you can open the browser and visit the following URL to verify its WebGL support: http://webglreport.sourceforge.net/.
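If you would rather check from script, here is a minimal sketch (my own, not from the original article) that probes for a WebGL context the same way the initGL function below does:

var canvas = document.createElement("canvas");
// getContext returns null when the requested context type is unsupported
var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
alert(gl ? "WebGL is supported" : "WebGL is not supported");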

If WebGL still does not run after installing the browsers above normally, you can try forcing WebGL support on. Here is how:
Chrome browser
We need to add some startup parameters to Chrome. The specific steps below take the Windows operating system as an example: find the Chrome browser shortcut, right-click it, and select Properties; in the Target box, after the closing quotation mark following chrome.exe, append the following:

--enable-webgl --ignore-gpu-blacklist --allow-file-access-from-files

Close Chrome after clicking OK, and then use this shortcut to launch the Chrome browser.
The meanings of several parameters are as follows:
--enable-webgl enables WebGL support;
--ignore-gpu-blacklist makes the browser ignore the GPU blacklist. Some graphics cards and GPUs are blacklisted, for being too old among other reasons, and are not recommended for running WebGL; this parameter forces WebGL to run on them anyway;
--allow-file-access-from-files allows resources to be loaded from the local file system. If you are not a WebGL developer and do not need to develop and debug WebGL, but just want to look at a WebGL demo, you do not need this parameter.

Firefox browser
Firefox users: enter "about:config" in the browser's address bar, press Enter, then search for "webgl" in the filter. Set webgl.force-enabled to true and webgl.disabled to false. Next, search for "security.fileuri.strict_origin_policy" in the filter and set security.fileuri.strict_origin_policy to false. Then close all open Firefox windows and restart Firefox.
The first two settings force WebGL support on; the last one, security.fileuri.strict_origin_policy, allows resources to be loaded from local files. If you are not a WebGL developer and do not need to develop and debug WebGL, but just want to look at a WebGL demo, you do not need this last setting.

Safari browser
Find "Properties" → "Advanced" in the menu, select "Show Development Menu", then go to the "Develop" menu and select "Turn on WebGL".

Development Steps

The following code briefly walks through the relevant concepts. It comes from the Chinese tutorial in the references and touches on a fair amount of 3D knowledge. Interested readers can jump straight to the Chinese tutorial in the practical references, which is far more detailed and accurate than my explanation here; casual readers can simply skim without digging into the meaning of every line of code.


Preparation
Needless to say, this step adds a canvas element to the page as the rendering container. For example:

The code is as follows:

<canvas id="glcanvas" width="640" height="480">
    Your browser doesn't appear to support the HTML5 canvas element.
</canvas>

It’s time to officially start writing the script. First, take a look at the program entry and overall structure:

The code is as follows:

function start() {
    var canvas = document.getElementById("glcanvas");
    initGL(canvas);
    initShaders();
    initBuffers();
    gl.clearColor(0.0, 0.0, 0.0, 1.0); // clear color: opaque black
    gl.enable(gl.DEPTH_TEST);          // enable depth testing
    drawScene();
}

The methods called here represent the typical WebGL drawing steps:

Step 1: Initialize the WebGL working environment - initGL
The code for this method is as follows:

The code is as follows:

var gl;
function initGL(canvas) {
    gl = null;
    try {
        // Try to grab the standard context. If it fails, fall back to experimental.
        gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
    }
    catch (e) {}
    // If we don't have a GL context, give up now
    if (!gl) {
        alert("Unable to initialize WebGL. Your browser may not support it.");
    }
}

This method is very simple: it obtains the WebGL drawing context by passing the parameter "webgl" to canvas.getContext. However, because the WebGL standard has not yet been finalized, browsers in the experimental stage use the parameter "experimental-webgl" instead, so the code tries both. You could also call canvas.getContext("experimental-webgl") directly and switch to the standard name once the specification settles.

Step 2: Initialize Shaders - initShaders
The concept of a shader is relatively simple: bluntly put, it is a set of instructions for the graphics card. Constructing a 3D scene requires a large amount of computation on colors, positions, and other data. If these calculations were done in software they would be very slow, so they are handed to the graphics card, which performs them very fast; how the calculations are carried out is specified by the shader. Shader code is written in a shading language called GLSL, which we will not describe here.
Shaders can be defined in the HTML and then located and used from code; defining a shader as a string inside the program works just as well (a sketch of the string approach follows the explanation below).
Let’s look at the definition part first:

The code is as follows:

<script id="shader-fs" type="x-shader/x-fragment">
    precision mediump float;
    varying vec4 vColor;
    void main(void) {
        gl_FragColor = vColor;
    }
</script>

<script id="shader-vs" type="x-shader/x-vertex">
    attribute vec3 aVertexPosition;
    attribute vec4 aVertexColor;
    uniform mat4 uMVMatrix;
    uniform mat4 uPMatrix;
    varying vec4 vColor;
    void main(void) {
        gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
        vColor = aVertexColor;
    }
</script>

There are two shaders here: a fragment shader (shader-fs) and a vertex shader (shader-vs).
Regarding these two shaders, it helps to know that 3D models in computers are basically described as points combined into triangular patches. The vertex shader processes the data of those points, while the fragment shader processes, through interpolation, the data of the points lying on each triangular patch.
The vertex shader defined above computes each vertex's position and color; the fragment shader computes the color of each interpolated point. Real application scenarios also involve effects such as lighting calculations in the shaders.
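As mentioned earlier, a shader can also be defined as a plain string in the program. A minimal sketch (my own, not from the tutorial) that compiles the same fragment shader from a JavaScript string:

// Build the fragment shader source as an ordinary string
var fsSource =
    "precision mediump float;\n" +
    "varying vec4 vColor;\n" +
    "void main(void) { gl_FragColor = vColor; }";
var fs = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fs, fsSource); // attach the source text
gl.compileShader(fs);          // compile it
if (!gl.getShaderParameter(fs, gl.COMPILE_STATUS)) {
    alert(gl.getShaderInfoLog(fs)); // report compile errors
}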
With the shaders defined, the program can locate and use them:

The code is as follows:

var shaderProgram;
function initShaders() {
    var fragmentShader = getShader(gl, "shader-fs");
    var vertexShader = getShader(gl, "shader-vs");
    shaderProgram = gl.createProgram();
    gl.attachShader(shaderProgram, vertexShader);
    gl.attachShader(shaderProgram, fragmentShader);
    gl.linkProgram(shaderProgram);
    if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
        alert("Could not initialise shaders");
    }
    gl.useProgram(shaderProgram);
    shaderProgram.vertexPositionAttribute = gl.getAttribLocation(shaderProgram, "aVertexPosition");
    gl.enableVertexAttribArray(shaderProgram.vertexPositionAttribute);
    shaderProgram.vertexColorAttribute = gl.getAttribLocation(shaderProgram, "aVertexColor");
    // The color attribute array must be enabled too, or drawScene's
    // vertexAttribPointer call for colors will have no effect.
    gl.enableVertexAttribArray(shaderProgram.vertexColorAttribute);
    shaderProgram.pMatrixUniform = gl.getUniformLocation(shaderProgram, "uPMatrix");
    shaderProgram.mvMatrixUniform = gl.getUniformLocation(shaderProgram, "uMVMatrix");
}


The shaders are there, but how do you get the graphics card to execute them? The program object is the bridge: it is the linked, GPU-executable form of the shaders, and its job is essentially to let the graphics card run the shader code to render the specified model data.
An auxiliary method getShader is also used here. It walks the HTML document, finds the shader definition, and creates the shader from its source text. I won't go into detail:

The code is as follows:

function getShader(gl, id) {
    var shaderScript, theSource, currentChild, shader;
    shaderScript = document.getElementById(id);
    if (!shaderScript) {
        return null;
    }
    theSource = "";
    currentChild = shaderScript.firstChild;
    while (currentChild) {
        if (currentChild.nodeType == currentChild.TEXT_NODE) {
            theSource += currentChild.textContent;
        }
        currentChild = currentChild.nextSibling;
    }
    if (shaderScript.type == "x-shader/x-fragment") {
        shader = gl.createShader(gl.FRAGMENT_SHADER);
    } else if (shaderScript.type == "x-shader/x-vertex") {
        shader = gl.createShader(gl.VERTEX_SHADER);
    } else {
        // Unknown shader type
        return null;
    }
    gl.shaderSource(shader, theSource);
    // Compile the shader program
    gl.compileShader(shader);
    // See if it compiled successfully
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        alert("An error occurred compiling the shaders: " + gl.getShaderInfoLog(shader));
        return null;
    }
    return shader;
}

Step 3: Create/load model data - initBuffers
In these small examples the model data is generated directly in code; in a real program the data would be loaded from a model file (see the sketch after this listing):

The code is as follows:

var triangleVertexPositionBuffer;
var triangleVertexColorBuffer;
function initBuffers() {
    triangleVertexPositionBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);
    var vertices = [
        0.0, 1.0, 0.0,
        -1.0, -1.0, 0.0,
        1.0, -1.0, 0.0
    ];
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
    triangleVertexPositionBuffer.itemSize = 3;
    triangleVertexPositionBuffer.numItems = 3;
    triangleVertexColorBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexColorBuffer);
    var colors = [
        1.0, 0.0, 0.0, 1.0,
        0.0, 1.0, 0.0, 1.0,
        0.0, 0.0, 1.0, 1.0
    ];
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colors), gl.STATIC_DRAW);
    triangleVertexColorBuffer.itemSize = 4;
    triangleVertexColorBuffer.numItems = 3;
}

The code above creates the triangle's vertices and the per-vertex color data and places them in buffers.
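As noted above, a real program would load this data from a model file rather than hard-coding it. Here is a minimal, hypothetical sketch (the file name model.json and its {"vertices": [...], "colors": [...]} layout are my assumptions, not part of the tutorial) that fetches the data with XMLHttpRequest and fills the same buffers:

function loadModel(url, done) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.onload = function () {
        // Assumed layout: {"vertices": [x,y,z,...], "colors": [r,g,b,a,...]}
        var model = JSON.parse(xhr.responseText);
        gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(model.vertices), gl.STATIC_DRAW);
        gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexColorBuffer);
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(model.colors), gl.STATIC_DRAW);
        done();
    };
    xhr.send();
}
// Usage: loadModel("model.json", drawScene);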

Step 4: Rendering - drawScene
Once the data is prepared, it is handed to WebGL for rendering; the gl.drawArrays method does the actual drawing here. Look at the code:

The code is as follows:

function drawScene() {
    gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    pMatrix = okMat4Proj(45.0, gl.viewportWidth / gl.viewportHeight, 0.1, 100.0);
    mvMatrix = okMat4Trans(-1.5, 0.0, -7.0);
    gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);
    gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, triangleVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
    gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexColorBuffer);
    gl.vertexAttribPointer(shaderProgram.vertexColorAttribute, triangleVertexColorBuffer.itemSize, gl.FLOAT, false, 0, 0);
    setMatrixUniforms();
    gl.drawArrays(gl.TRIANGLES, 0, triangleVertexPositionBuffer.numItems);
}

This function first clears the 3D world's background to black, then sets the projection matrix and the position of the object to be drawn, and finally draws the object from the vertex and color data in the buffers. The auxiliary methods for generating the projection and model-view matrices (okMat4Proj and okMat4Trans, matrix helpers from the Oak3D graphics library) have little to do with the topic and are not explained in detail here; a rough sketch of what they and setMatrixUniforms amount to follows.
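For readers without Oak3D, here is my own approximate sketch (not the Oak3D API, and the tutorial never shows setMatrixUniforms) of what okMat4Proj, okMat4Trans, and the setMatrixUniforms() call boil down to, assuming plain column-major Float32Array matrices:

// Perspective projection matrix (column-major), roughly okMat4Proj(fovDeg, aspect, near, far)
function perspective(fovDeg, aspect, near, far) {
    var f = 1.0 / Math.tan(fovDeg * Math.PI / 360.0); // cot(fov/2)
    var nf = 1.0 / (near - far);
    return new Float32Array([
        f / aspect, 0, 0, 0,
        0, f, 0, 0,
        0, 0, (far + near) * nf, -1,
        0, 0, 2 * far * near * nf, 0
    ]);
}

// Translation matrix (column-major), roughly okMat4Trans(x, y, z)
function translation(x, y, z) {
    return new Float32Array([
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        x, y, z, 1
    ]);
}

// Push the current matrices into the uniforms looked up in initShaders
function setMatrixUniforms() {
    gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, pMatrix);
    gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, mvMatrix);
}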
That is basically the whole process. More complex features such as textures and lights are implemented by adding further WebGL features on top of this foundation. Please refer to the Chinese tutorial below for detailed examples.

So, what is it like to develop with native WebGL? You not only need solid 3D knowledge, you also need to know all sorts of implementation details. WebGL is designed this way so it can adapt flexibly to all kinds of application scenarios, but for non-professionals like me, many of those details do not need to be known. This is why various helper libraries have appeared, such as the Oak3D library touched on in this section (to demonstrate raw WebGL development, the example only used its matrix helpers). The next part will introduce the commonly used Three.js graphics library.

Practical references:
Chinese tutorial: http://www.hiwebgl.com/?p=42
Development center: https://developer.mozilla.org/en/WebGL
