


three.js: using the GPU to select objects and calculate intersection positions
Raycasting method
Selecting objects with the Raycaster that comes with three.js is very simple; the code is as follows:
var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();

function onMouseMove(event) {
    // Compute the normalized device coordinates of the mouse position.
    // Each component ranges from -1 to 1.
    mouse.x = event.clientX / window.innerWidth * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
}

function pick() {
    // Update the picking ray from the camera and mouse position.
    raycaster.setFromCamera(mouse, camera);
    // Find the objects intersected by the picking ray.
    var intersects = raycaster.intersectObjects(scene.children);
}
Internally, it uses bounding-box filtering and then tests whether the picking ray intersects each triangle of the mesh.
However, when the model is very large, say 400,000 faces, selecting objects and computing collision points by traversal is very slow and the user experience suffers.
Using the GPU to select objects does not have this problem: no matter how large the scene and model are, the object and intersection point under the mouse can be obtained within one frame.
Use GPU to select objects
The implementation method is very simple:
1. Create a picking material and replace each model's material in the scene with a different color.
2. Read the pixel color at the mouse position and determine the object at the mouse position based on the color.
Specific implementation code:
1. Create the picking material, traverse the scene, and give each model in the scene a different color.
let maxHexColor = 1;

// Swap in the picking materials.
scene.traverseVisible(n => {
    if (!(n instanceof THREE.Mesh)) {
        return;
    }
    n.oldMaterial = n.material;
    if (n.pickMaterial) { // The picking material was already created.
        n.material = n.pickMaterial;
        return;
    }
    let material = new THREE.ShaderMaterial({
        vertexShader: PickVertexShader,
        fragmentShader: PickFragmentShader,
        uniforms: {
            pickColor: {
                value: new THREE.Color(maxHexColor)
            }
        }
    });
    n.pickColor = maxHexColor;
    maxHexColor++;
    n.material = n.pickMaterial = material;
});

PickVertexShader:

void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

PickFragmentShader:

uniform vec3 pickColor;

void main() {
    gl_FragColor = vec4(pickColor, 1.0);
}
2. Draw the scene onto a WebGLRenderTarget, read the color at the mouse position, and determine the selected object.
let renderTarget = new THREE.WebGLRenderTarget(width, height);
let pixel = new Uint8Array(4);

// Render and read the pixel under the mouse.
renderer.setRenderTarget(renderTarget);
renderer.clear();
renderer.render(scene, camera);
renderer.readRenderTargetPixels(renderTarget, offsetX, height - offsetY, 1, 1, pixel); // Read the color at the mouse position.

// Restore the original materials and find the selected object.
// THREE.Color splits the integer into base-256 channels, so the
// place values here are 0x10000 and 0x100 (not 0xffff and 0xff).
const currentColor = pixel[0] * 0x10000 + pixel[1] * 0x100 + pixel[2];

let selected = null;

scene.traverseVisible(n => {
    if (!(n instanceof THREE.Mesh)) {
        return;
    }
    if (n.pickMaterial && n.pickColor === currentColor) { // The colors match.
        selected = n; // The object under the mouse.
    }
    if (n.oldMaterial) {
        n.material = n.oldMaterial;
        delete n.oldMaterial;
    }
});
Explanation: offsetX and offsetY give the mouse position, and height is the canvas height. The readRenderTargetPixels call reads the color of the 1×1 pixel region at (offsetX, height - offsetY), the mouse position.
pixel is a Uint8Array(4) holding the four rgba channels of the color; each channel ranges from 0 to 255.
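As a standalone sanity check (not part of the original source), the mapping between an integer pick ID and the rgb bytes read back from the render target can be sketched in plain JavaScript. `encodePickId` and `decodePickId` are hypothetical helper names; the encoding mirrors how THREE.Color decomposes an integer into base-256 channels:

```javascript
// Encode an integer pick ID into [r, g, b] bytes the way
// THREE.Color(hex) decomposes an integer color (base 256).
function encodePickId(id) {
    return [(id >> 16) & 255, (id >> 8) & 255, id & 255];
}

// Decode the [r, g, b] bytes read back by readRenderTargetPixels
// into the original pick ID.
function decodePickId(pixel) {
    return pixel[0] * 0x10000 + pixel[1] * 0x100 + pixel[2];
}

const id = 70000; // hypothetical pick ID larger than one byte
console.log(decodePickId(encodePickId(id))); // round-trips to 70000
```

Because each channel is one base-256 digit, IDs up to 0xffffff (about 16.7 million meshes) round-trip exactly.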
Complete implementation code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js
Use the GPU to obtain the intersection position
The implementation method is also very simple:
1. Create a depth shader material and render the scene depth to the WebGLRenderTarget.
2. Calculate the depth of the mouse position, and calculate the intersection position based on the mouse position and depth.
Specific implementation code:
1. Create a depth shader material, encode the depth information, and render it to a WebGLRenderTarget.
Depth Material:
const depthMaterial = new THREE.ShaderMaterial({
    vertexShader: DepthVertexShader,
    fragmentShader: DepthFragmentShader,
    uniforms: {
        far: {
            value: camera.far
        }
    }
});

DepthVertexShader:

precision highp float;

uniform float far;

varying float depth;

void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    depth = gl_Position.z / far;
}

DepthFragmentShader:

precision highp float;

varying float depth;

void main() {
    float hex = abs(depth) * 16777215.0; // 0xffffff
    float r = floor(hex / 65535.0);
    float g = floor((hex - r * 65535.0) / 255.0);
    float b = floor(hex - r * 65535.0 - g * 255.0);
    float a = sign(depth) >= 0.0 ? 1.0 : 0.0; // 1.0 when depth >= 0, 0.0 when depth < 0.
    gl_FragColor = vec4(r / 255.0, g / 255.0, b / 255.0, a);
}
Important Note:
a. gl_Position.z is the depth in camera space; it is linear, ranges from cameraNear to cameraFar, and can be interpolated directly through a shader varying.
b. gl_Position.z is divided by far to bring the value into the 0~1 range so it can be output as a color.
c. Screen-space depth cannot be used: after perspective projection the depth lies in -1~1, most values are very close to 1 (above 0.9), and it is non-linear and almost constant, so the output color would barely change and be very inaccurate.
d. To obtain depth in the fragment shader: the screen-space depth is gl_FragCoord.z, and the camera-space depth can be recovered as gl_FragCoord.z / gl_FragCoord.w.
e. The above applies to perspective projection. In orthographic projection, gl_Position.w is 1, and camera-space and screen-space depth are the same.
f. To output the depth as accurately as possible, all three rgb components are used: gl_Position.z / far lies in 0~1, is multiplied by 0xffffff, and is converted into an rgb color value, where one unit of the r component represents 65535, one unit of g represents 255, and one unit of b represents 1.
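The scheme in note f can be checked outside the shader. The following standalone JavaScript (with hypothetical helper names, mirroring the fragment shader's arithmetic and the later readback step) shows that a 0~1 depth survives the rgb round trip with very small error. Note that the 65535/255 place values are this article's scheme, and some depths can push a channel past 255, which the GPU would clamp:

```javascript
// Mirror of DepthFragmentShader: pack a 0..1 depth into three bytes.
function encodeDepth(depth) {
    const hex = Math.abs(depth) * 16777215.0; // 0xffffff
    const r = Math.floor(hex / 65535.0);
    const g = Math.floor((hex - r * 65535.0) / 255.0);
    const b = Math.floor(hex - r * 65535.0 - g * 255.0);
    return [r, g, b];
}

// Mirror of the readback step: restore the approximate depth.
function decodeDepth(pixel) {
    return (pixel[0] * 65535 + pixel[1] * 255 + pixel[2]) / 0xffffff;
}

const depth = 0.5;
const error = Math.abs(decodeDepth(encodeDepth(depth)) - depth);
console.log(error < 1e-5); // the round-trip error is well under 1e-5
```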
Complete implementation code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js
2. Read the color at the mouse position and restore it to the camera-space depth value.
a. Draw the "encrypted" depth to the WebGLRenderTarget and read the color:
let renderTarget = new THREE.WebGLRenderTarget(width, height);
let pixel = new Uint8Array(4);

scene.overrideMaterial = this.depthMaterial;

renderer.setRenderTarget(renderTarget);
renderer.clear();
renderer.render(scene, camera);
renderer.readRenderTargetPixels(renderTarget, offsetX, height - offsetY, 1, 1, pixel);
Description: offsetX and offsetY give the mouse position, and height is the canvas height. The readRenderTargetPixels call reads the color of the 1×1 pixel region at (offsetX, height - offsetY), the mouse position.
pixel is a Uint8Array(4) holding the four rgba channels of the color; each channel ranges from 0 to 255.
b. "Decrypt" the "encrypted" camera space depth value to obtain the correct camera space depth value.
if (pixel[0] !== 0 || pixel[1] !== 0 || pixel[2] !== 0) {
    let hex = (pixel[0] * 65535 + pixel[1] * 255 + pixel[2]) / 0xffffff;
    if (pixel[3] === 0) {
        hex = -hex;
    }
    cameraDepth = -hex * camera.far; // Depth of the point under the mouse in camera space (note: camera-space depth values are negative).
}
3. Based on the mouse's screen position and the camera-space depth, interpolate to recover the world-space coordinates of the intersection.
let nearPosition = new THREE.Vector3(); // Camera-space coordinates of the mouse screen position at near.
let farPosition = new THREE.Vector3(); // Camera-space coordinates of the mouse screen position at far.
let world = new THREE.Vector3(); // World coordinates computed by interpolation.

// Device coordinates.
const deviceX = this.offsetX / width * 2 - 1;
const deviceY = -this.offsetY / height * 2 + 1;

// Near point.
nearPosition.set(deviceX, deviceY, 1); // Screen space: (0, 0, 1).
nearPosition.applyMatrix4(camera.projectionMatrixInverse); // Camera space: (0, 0, -far).

// Far point.
farPosition.set(deviceX, deviceY, -1); // Screen space: (0, 0, -1).
farPosition.applyMatrix4(camera.projectionMatrixInverse); // Camera space: (0, 0, -near).

// In camera space, interpolate x and y proportionally according to the depth.
const t = (cameraDepth - nearPosition.z) / (farPosition.z - nearPosition.z);

// Convert the intersection from camera space to world space.
world.set(
    nearPosition.x + (farPosition.x - nearPosition.x) * t,
    nearPosition.y + (farPosition.y - nearPosition.y) * t,
    cameraDepth
);
world.applyMatrix4(camera.matrixWorld);
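The interpolation step can be exercised with plain numbers, independent of three.js. This is only a sketch with hypothetical values: given two camera-space points and a target camera-space depth, the same t formula recovers the point on the segment at that depth.

```javascript
// Linearly interpolate between two camera-space points so that the
// result lies at a given camera-space depth (z value).
function pointAtDepth(nearPosition, farPosition, cameraDepth) {
    const t = (cameraDepth - nearPosition.z) / (farPosition.z - nearPosition.z);
    return {
        x: nearPosition.x + (farPosition.x - nearPosition.x) * t,
        y: nearPosition.y + (farPosition.y - nearPosition.y) * t,
        z: cameraDepth
    };
}

// Hypothetical values: one point near the far plane, one near the near plane.
const a = { x: 10, y: 5, z: -100 };
const b = { x: 1, y: 0.5, z: -1 };
const p = pointAtDepth(a, b, -50.5); // -50.5 is exactly halfway in z
console.log(p.x, p.y); // 5.5 2.75, halfway between the two x and y values
```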
Full code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js
Related applications
Selecting objects and computing intersection positions on the GPU is mostly useful where very high performance is required, for example:
1. The hover effect when the mouse moves over a 3D model.
2. When adding a model, the model follows the mouse and its placement in the scene is previewed in real time.
3. Distance and area measurement tools, where lines and polygons are previewed in real time as the mouse moves over a plane and the length or area is calculated.
4. Scenes and models so large that raycast selection is too slow and the user experience suffers.
Below is a screenshot of GPU-based selection with a mouse-hover effect: the red border is the selection effect, and the translucent yellow is the hover effect.
If the above is unclear, you may not be familiar with the projection operations in three.js; the relevant formulas are given below.
Projection operation in three.js
1. modelViewMatrix = camera.matrixWorldInverse * object.matrixWorld
2. viewMatrix = camera.matrixWorldInverse
3. modelMatrix = object.matrixWorld
4. project = applyMatrix4( camera.matrixWorldInverse ).applyMatrix4( camera.projectionMatrix )
5. unproject = applyMatrix4( camera.projectionMatrixInverse ).applyMatrix4( camera.matrixWorld )
6. gl_Position = projectionMatrix * modelViewMatrix * position = projectionMatrix * viewMatrix * modelMatrix * position
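Formulas 4 and 6 rest on matrix associativity: applying modelViewMatrix is the same as applying modelMatrix and then viewMatrix. Below is a minimal check in plain JavaScript using column-major 4x4 matrices, as WebGL does (the helper functions are illustrative, not three.js APIs):

```javascript
// Column-major 4x4 matrix times 4-vector, as WebGL shaders compute it.
function transform(m, v) {
    const out = [0, 0, 0, 0];
    for (let row = 0; row < 4; row++) {
        for (let col = 0; col < 4; col++) {
            out[row] += m[col * 4 + row] * v[col];
        }
    }
    return out;
}

// A translation matrix by (tx, ty, tz), column-major.
function translation(tx, ty, tz) {
    return [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  tx, ty, tz, 1];
}

// Column-major 4x4 matrix product a * b.
function multiply(a, b) {
    const out = new Array(16).fill(0);
    for (let col = 0; col < 4; col++) {
        for (let row = 0; row < 4; row++) {
            for (let k = 0; k < 4; k++) {
                out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
            }
        }
    }
    return out;
}

const modelMatrix = translation(1, 2, 3);   // hypothetical object transform
const viewMatrix = translation(0, 0, -10);  // hypothetical camera inverse
const position = [1, 1, 1, 1];

// (viewMatrix * modelMatrix) * position === viewMatrix * (modelMatrix * position)
const a = transform(multiply(viewMatrix, modelMatrix), position);
const b = transform(viewMatrix, transform(modelMatrix, position));
console.log(a, b); // both [2, 3, -6, 1]
```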
Reference materials:
1. Complete implementation code: https://gitee.com/tengge1/ShadowEditor/blob/master/ShadowEditor.Web/src/event/GPUPickEvent.js
2. Open source three-dimensional scene editor based on three.js: https://github.com/tengge1/ShadowEditor
3. Drawing depth values with shaders in OpenGL: https://stackoverflow.com/questions/6408851/draw-the-depth-value-in-opengl-using-shaders
4. Getting the real fragment depth value in GLSL: https://gamedev.stackexchange.com/questions/93055/getting-the-real-fragment-depth-in-glsl
