
A brief discussion on the application of needsUpdate in three.js

Many objects in three.js have a needsUpdate attribute that is rarely mentioned in the documentation (then again, there is not much three.js documentation at all, and many problems have to be solved by digging through issues on GitHub). The various tutorials on the internet don't cover it much either, because a simple entry-level program never needs this attribute.
So what is this attribute for? In a nutshell, it tells the renderer: the cache for this object should be updated this frame. Although it is very simple to use as a flag, to know when a cache needs updating and which caches are involved, it is necessary to understand it more carefully.
Why needsUpdate is needed
First of all, let's look at why caching is needed at all. Caches generally exist to reduce the number of data transfers and thereby the time a program spends transmitting data. The same applies here: for an object (Mesh) to be successfully displayed on the screen, its data has to be shuffled through three transfers.
First, all the vertex data and texture data are read from local disk (or over the network) into memory by the program.
Then, after the program has done appropriate processing in memory, the vertex data and texture data of the objects to be drawn are transferred to video memory.
Finally, when rendering each frame, the vertex data and texture data in video memory are fed to the GPU for assembly and drawing.
According to the pyramid data-transmission model, the first step is obviously the slowest; in an environment like WebGL, where assets may come over the network, it is slower still. The second step, the transfer from memory to video memory, also costs time, and a simple measurement of it follows below.
Next, consider how often each of these three steps runs. For small scenes the first step is a one-time operation: all the data of a scene is loaded into memory when the program initializes. Large scenes may load assets asynchronously, but that is not considered here. The frequency of the second step is the main subject this time, so first, a simple program to measure the cost of this transfer:

The code is as follows:

var canvas = document.createElement('canvas');
var _gl = canvas.getContext('experimental-webgl');

// Generate 1000 random vertices (x, y, z per vertex => 3000 floats)
var vertices = [];
for (var i = 0; i < 1000 * 3; i++) {
    vertices.push(i * Math.random());
}

var buffer = _gl.createBuffer();
console.profile('buffer_test');
bindBuffer();
console.profileEnd('buffer_test');

// Transfer the vertex data from memory to video memory 1000 times,
// simulating a scene of 1000 objects with 1000 vertices each
function bindBuffer() {
    for (var i = 0; i < 1000; i++) {
        _gl.bindBuffer(_gl.ARRAY_BUFFER, buffer);
        _gl.bufferData(_gl.ARRAY_BUFFER, new Float32Array(vertices), _gl.STATIC_DRAW);
    }
}

A brief explanation of this program: vertices is an array that stores vertex data; 1000 vertices are generated randomly, and because each vertex has three coordinates (x, y, z), the array holds 3000 values. _gl.createBuffer opens a buffer in video memory to store vertex data, and _gl.bufferData transfers a copy of the generated vertex data from memory to video memory. Suppose a scene has 1000 objects of 1000 vertices each; with each vertex being three 32-bit (4-byte) floats, that is roughly 1000 × 1000 × 12 bytes ≈ 11 MB of data, and the profile shows the transfer takes about 15 ms. That may look like a trivial amount of time, but a real-time program targeting 30 fps has a budget of only about 33 ms per frame, and this single data transfer already consumes nearly half of it. Bear in mind that the bulk of the budget should go to the drawing operations on the GPU and the various processing on the CPU: every step of the rendering pipeline has to be stingy with time.
So the number of transfers in this step should be minimized. In fact, all the vertex and texture data can be pushed from memory to video memory as soon as it is loaded, and that is what three.js does: the first time an object's Geometry is drawn, its vertex data is transferred to video memory and the buffer is cached as geometry.__webglVertexBuffer. On every subsequent draw, the renderer checks the Geometry's verticesNeedUpdate attribute. If no update is needed, the existing cache is used directly; if verticesNeedUpdate is true, the vertex data in the Geometry is re-transmitted to geometry.__webglVertexBuffer. Static objects never need this step, but objects whose vertices change frequently do, for example particle systems that use vertices as particles, or Meshes driven by skeletal animation. Such objects change their vertices every frame, so they must set verticesNeedUpdate to true every frame to tell the renderer to retransmit the data.
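A minimal sketch of that pattern, assuming the classic THREE.Geometry API of the three.js releases of that era (the animateParticles function and its jitter logic are purely illustrative):

// Illustrative: displace every vertex a little each frame, then flag
// the geometry so the renderer re-uploads the vertex buffer
function animateParticles(geometry) {
    for (var i = 0; i < geometry.vertices.length; i++) {
        geometry.vertices[i].y += (Math.random() - 0.5) * 0.1;
    }
    geometry.verticesNeedUpdate = true; // retransmit vertex data this frame
}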
In practice, WebGL programs mostly displace vertices in the vertex shader to implement particle effects and skeletal animation. Doing the computation on the CPU side is easier to extend, but given JavaScript's limited computing power, most of these computation-heavy operations end up on the GPU side, and in that case there is no need to retransmit the vertex data. So the case above does not actually appear much in real programs; what gets updated far more often are the caches for textures and materials.
The case above covered transmitting vertex data. Besides vertices, the other big chunk is textures: a 1024×1024 texture in R8G8B8A8 format occupies a full 4 MB of memory. Look at the following example:

The code is as follows:

var canvas = document.createElement('canvas');
var _gl = canvas.getContext('experimental-webgl');
var texture = _gl.createTexture();

var img = new Image();
img.onload = function () {
    console.profile('texture_test');
    bindTexture();
    console.profileEnd('texture_test');
};
img.src = 'test_tex.jpg';

// Transfer the decoded image from memory to video memory once
function bindTexture() {
    _gl.bindTexture(_gl.TEXTURE_2D, texture);
    _gl.texImage2D(_gl.TEXTURE_2D, 0, _gl.RGBA, _gl.RGBA, _gl.UNSIGNED_BYTE, img);
}

There is no need to repeat this 1000 times: transmitting a 1024×1024 texture once takes about 30 ms, and a 256×256 texture about 2 ms. That is why three.js transmits a texture only once, at the beginning; after that, unless texture.needsUpdate is manually set to true, the copy already sitting in video memory is used directly.
Which caches need to be updated
The two cases above explain why three.js needs a needsUpdate attribute at all. Next, let's list a few scenarios to see under what circumstances these caches have to be updated manually.
Asynchronous loading of textures
This is a small pitfall, because images on the front end load asynchronously. If you write texture.needsUpdate = true immediately after creating the img, the three.js renderer will use _gl.texImage2D to transfer the still-empty texture data to video memory during that frame and then reset the flag to false; after that the data in video memory is never updated again, even once the image has finished loading. So you must wait for the image to load completely and set texture.needsUpdate = true inside the onload handler.
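A minimal sketch of the correct ordering (the file name is illustrative):

var img = new Image();
var texture = new THREE.Texture(img);
img.onload = function () {
    // Flag the texture only once the pixel data actually exists
    texture.needsUpdate = true;
};
img.src = 'test_tex.jpg';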
Video Texture
Most textures are loaded and transferred once, as in the case above, but video textures are different: a video is a stream of pictures, and the picture to display changes every frame, so needsUpdate has to be set to true on every frame to refresh the texture data on the graphics card.
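A sketch of what that looks like in the render loop (the video element, renderer, scene, and camera are assumed to be set up elsewhere; newer three.js versions wrap this pattern in THREE.VideoTexture):

var video = document.getElementById('video');
var videoTexture = new THREE.Texture(video);

function render() {
    if (video.readyState === video.HAVE_ENOUGH_DATA) {
        videoTexture.needsUpdate = true; // re-upload the current video frame
    }
    renderer.render(scene, camera);
    requestAnimationFrame(render);
}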
Using a render buffer
A render buffer is a special object. Normally the program draws the whole scene straight to the screen, but with post-processing, or screen-based techniques such as screen-space ambient occlusion, the scene must first be drawn into a render buffer. That buffer is really just a texture, except it is produced by an earlier draw instead of being loaded from disk. three.js provides a dedicated texture object, WebGLRenderTarget, to initialize and hold the render buffer, and this texture likewise needs needsUpdate set to true every frame.
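A minimal sketch of the render-to-texture flow, using the renderer.render(scene, camera, target) signature of that era (newer versions sample renderTarget.texture instead of the target itself; postScene and postCamera, a full-screen quad setup, are assumed, and the per-frame flag follows the advice above):

var renderTarget = new THREE.WebGLRenderTarget(1024, 1024);
var screenMaterial = new THREE.MeshBasicMaterial({ map: renderTarget });

function render() {
    // First pass: draw the scene into the render buffer, not the screen
    renderer.render(scene, camera, renderTarget);
    renderTarget.needsUpdate = true; // flagged every frame, per the text above
    // Second pass: draw the full-screen scene that samples the buffer
    renderer.render(postScene, postCamera);
    requestAnimationFrame(render);
}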
Material's needsUpdate
Materials in three.js are described by THREE.Material. A material actually has no data of its own to transmit, so why does it need a needsUpdate? This is where shaders come in. A shader is a program on the GPU that makes vertex and pixel processing programmable. In painting, the term shading refers to the way light and dark are rendered, and shading on the GPU is similar: lighting is computed in a program to express the material of an object. Since a shader is a program running on the GPU, like all programs it must be compiled and linked. In WebGL, shader programs are compiled at runtime, which naturally takes time, so ideally each program is compiled exactly once. three.js therefore compiles and links the shader when the material is initialized and caches the resulting program object. Generally a material never needs its whole shader recompiled: adjusting the material only means modifying the shader's uniform parameters. But if the shading model itself is replaced, say swapping the original phong shader for a lambert one, material.needsUpdate must be set to true to trigger a recompile. That situation is rare, though; the common cases are the ones below.
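A sketch of the distinction (the property values are illustrative):

// Tweaking a material: only uniform values change, so the cached
// shader program is reused as-is
material.color.set(0xff0000);
material.opacity = 0.5;

// Changing something baked into the shader itself (here, enabling
// skinning) invalidates the cached program, so force a recompile
material.skinning = true;
material.needsUpdate = true;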
Adding and deleting lights
This one should be fairly common in real scenes, and many people who have just started with three.js fall into the pit: a light is dynamically added to the scene, and then it turns out the light has no effect. This happens when using the built-in shaders of three.js, such as phong and lambert. If you look at the renderer's source code, you will find that three.js uses #define in the built-in shader code to set the number of lights in the scene, and the value of each #define is produced by string-splicing the shader every time the material is updated. The code is as follows:

The code is as follows:

"#define MAX_DIR_LIGHTS " parameters.maxDirLights,
"#define MAX_POINT_LIGHTS " parameters.maxPointLights,
"#define MAX_SPOT_LIGHTS " parameters.maxSpotLights,
"#define MAX_HEMI_LIGHTS " parameters.maxHemiLights,

It is true that this style effectively reduces GPU register usage: with only one light, only the uniform variables that one light requires are declared. But every time the number of lights changes, especially when a light is added, the shader has to be re-spliced, recompiled, and relinked, and at that point material.needsUpdate must be set to true on all the affected materials.
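A minimal sketch (this assumes the scene's materials use three.js's built-in lighting shaders):

var pointLight = new THREE.PointLight(0xffffff);
scene.add(pointLight);

// The light counts baked into each shader via #define are now stale,
// so flag every material in the scene for recompilation
scene.traverse(function (object) {
    if (object.material) {
        object.material.needsUpdate = true;
    }
});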
Changing textures
Changing textures here does not mean updating the texture data; it means a material that used a texture stops using one, or a material that had no texture has one added later. If you do not manually force the material to update, the final result will differ from what you expect. The cause is almost the same as with adding lights above: a macro spliced into the shader determines whether textures are used.

The code is as follows:

parameters.map ? "#define USE_MAP" : "",
parameters.envMap ? "#define USE_ENVMAP" : "",
parameters.lightMap ? "#define USE_LIGHTMAP" : "",
parameters.bumpMap ? "#define USE_BUMPMAP" : "",
parameters.normalMap ? "#define USE_NORMALMAP" : "",
parameters.specularMap ? "#define USE_SPECULARMAP" : "",

So whenever map, envMap, or lightMap changes its truthiness, the material needs to be updated.
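For example (the loader call uses the THREE.ImageUtils API of that era; the file name is illustrative):

// The material was created without a map; adding one later flips the
// USE_MAP macro, so the shader must be recompiled
material.map = THREE.ImageUtils.loadTexture('test_tex.jpg');
material.needsUpdate = true;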
Changes in other vertex data
Actually, the texture change above causes one more problem, mainly when there is no texture at initialization and one is added dynamically later. In that situation setting material.needsUpdate to true is not enough; geometry.uvsNeedUpdate must be set to true as well. Why? Again because of three.js's optimizations. When the geometry and material are first initialized in the renderer, if the renderer determines there is no texture, then even though the data in memory contains a UV coordinate for every vertex, three.js still will not copy that data into video memory. The original intent is presumably to save precious video memory, but once a texture is added, the geometry will not cleverly re-transmit the UV data the texture needs; we have to set uvsNeedUpdate manually to tell it that it is time to upload the UVs. This problem genuinely bothered me for a long time at the beginning.
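Putting the two flags together (a sketch; loader call as above):

material.map = THREE.ImageUtils.loadTexture('test_tex.jpg');
material.needsUpdate = true;   // recompile the shader with USE_MAP defined
geometry.uvsNeedUpdate = true; // upload the UVs that were skipped earlier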
For more on the needsUpdate attributes of the various kinds of vertex data, see this page:
https://github.com/mrdoob/three.js/wiki/Updates
Finally
three.js's optimizations are good, but every optimization brings its own pitfalls. When you hit one, the best approach is to read the source code, or to file an issue on GitHub.