Advances in electronic technology let us enjoy an "audio-visual feast" anytime and anywhere; human hearing and vision have been thoroughly liberated.
In recent years, adding haptics to devices has gradually become a new research hotspot, especially under the banner of the "metaverse": high-fidelity touch will undoubtedly make the virtual world far more realistic.
Current tactile simulation technology mainly renders touch through "data-driven" models: the model first records a user's interactions with a real texture, feeds the recorded signal into a texture-generation module, and then "plays back" the touch sensation to the user in the form of vibration.
Most recent methods model texture features, such as friction and microscopic surface structure, from the user's interactive motion and high-frequency vibration signals.
Although the data-driven approach greatly improves the realism of simulation, it still has many limitations.
For example, there are countless types of textures in the world; recording each one would demand unimaginable manpower and material resources, and the approach still could not meet the needs of niche users.
Humans are very sensitive to touch, and different people perceive the same object differently. The data-driven approach therefore cannot fundamentally eliminate the perceptual mismatch between texture recording and texture rendering.
Recently, three doctoral students at the University of Southern California Viterbi School of Engineering proposed a new "preference-driven" framework that leverages humans' ability to resolve texture details to adjust the generated virtual sensation, ultimately achieving quite realistic tactile perception. The paper was published in IEEE Transactions on Haptics.
Paper link: https://ieeexplore.ieee.org/document/9772285
The preference-driven model first gives the user a real texture to touch; the model then uses dozens of variables to randomly generate three virtual textures, from which the user chooses the one that feels most similar to the real object.
Through continuous trial, error, and feedback, the model optimizes the distribution of these variables via search, making the generated texture ever closer to the user's preference. This has a clear advantage over directly recording and playing back textures, since there is always a gap between what the computer records and what humans actually feel.
The process resembles the relationship between a client and a contractor: as the perceiver (the client), if the touch feels wrong, we send it back and let the algorithm (the contractor) revise and regenerate until the result is satisfactory.
This is quite reasonable: different people feel different things when touching the same object, yet the signal the computer emits is identical, so touch needs to be customized for each person.
The entire system consists of two modules. The first is a deep convolutional generative adversarial network (DCGAN) that maps vectors in a latent space to texture models; it is trained on the UPenn Haptic Texture Toolkit (HaTT).
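To make this latent-space mapping concrete, here is a minimal sketch. It is not the paper's trained DCGAN: the generator is replaced by a fixed random linear decoder with a tanh output, and the dimensions (`LATENT_DIM`, `TEXTURE_DIM`) and names are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative dimensions -- NOT taken from the paper.
LATENT_DIM = 8    # size of the latent vector z
TEXTURE_DIM = 32  # size of the texture-model parameter vector

rng = np.random.default_rng(0)
# Stand-in for the trained DCGAN generator: one fixed random linear layer.
W = rng.standard_normal((TEXTURE_DIM, LATENT_DIM))

def generate_texture(z: np.ndarray) -> np.ndarray:
    """Map a latent vector to texture-model parameters (toy decoder).

    tanh mimics the bounded output layer typical of GAN generators.
    """
    return np.tanh(W @ z)

z = rng.standard_normal(LATENT_DIM)
texture = generate_texture(z)
print(texture.shape)  # (32,)
```

In the actual system this mapping is a DCGAN trained on HaTT recordings, so nearby latent vectors decode to perceptually similar textures, which is the property the evolutionary search below exploits.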
The second module is a comparison-based evolutionary algorithm: from a set of generated texture models, the covariance matrix adaptation evolution strategy (CMA-ES) evolves new texture models based on the user's preference feedback.
To simulate a real texture, the researchers first ask the user to touch the real texture with a custom tool, and then to touch a set of virtual texture candidates through a haptic device, with the haptic feedback delivered via a Haptuator attached to the device's stylus.
All the user needs to do is select the virtual texture closest to the real one and adjust the amount of friction with a simple slider interface, since friction is critical to how a texture feels and varies from person to person.
All virtual textures are then updated by the evolution strategy according to the user's selection, and the user selects and adjusts again.
This process repeats until the user finds and saves a virtual texture they consider close to the real texture, or until no closer virtual texture can be found.
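The select-update-repeat loop can be sketched as follows. This is a deliberately simplified evolution strategy rather than the paper's CMA-ES (there is no covariance adaptation), and the human is replaced by a simulated "user" that picks the candidate nearest a hidden target latent vector; every name and constant here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 3           # latent dimension, illustrative
N_CANDIDATES = 3  # the system shows the user three virtual textures

# Hidden "real texture" latent. In the real system no such vector is
# known -- a human judges similarity by touch instead.
target = rng.standard_normal(DIM)

def simulated_user_choice(candidates):
    """Stand-in for the human: pick the candidate nearest the target."""
    return int(np.argmin([np.linalg.norm(c - target) for c in candidates]))

mean = np.zeros(DIM)  # current search center in latent space
sigma = 1.0           # mutation step size
for generation in range(60):
    # Propose three virtual textures scattered around the current estimate.
    candidates = [mean + sigma * rng.standard_normal(DIM)
                  for _ in range(N_CANDIDATES)]
    best = candidates[simulated_user_choice(candidates)]
    mean = 0.5 * (mean + best)  # move toward the preferred texture
    sigma *= 0.95               # gradually narrow the search

print(np.linalg.norm(mean - target))  # distance shrinks over generations
```

CMA-ES improves on this toy loop by also adapting the full covariance of the sampling distribution, so the search can stretch along directions in latent space where the user's preferences are most consistent.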
The researchers divided the evaluation process into two phases, each with a separate group of participants.
In the first stage, five participants each generated and searched virtual textures for five real textures.
The second stage evaluated the gap between the final saved preference-driven texture (VTp) and its corresponding real texture (RT).
The evaluation mainly used adjective ratings of perceptual dimensions including roughness, hardness, and smoothness, and also compared the similarity between VTp, RT, and data-driven textures (VTd).
The experimental results show that, by following the evolutionary process, users can effectively find a virtual texture model that is more realistic than the data-driven one.
In addition, more than 80% of participants rated the virtual textures generated by the preference-driven model higher than those generated by the data-driven model.
Haptic devices are becoming more and more common in video games, fashion design, and surgical simulation; even at home, we are starting to see haptic devices become as commonplace as laptops.
For example, adding touch to first-person video games would greatly enhance the player's sense of reality.
The authors note that when we interact with the environment through tools, tactile feedback is just one form of sensory feedback and audio is another; both are very important.
Beyond games, the results of this work will be particularly useful for the virtual textures used in dental or surgical training, which need to be very accurate.
"Surgical training is definitely a huge field that requires very realistic textures and tactile feedback; decoration design likewise requires textures to be simulated with high precision during development before manufacturing."
Everything from video games to fashion design is integrating haptic technology, and existing virtual texture databases can be improved with this user-preference approach.
Texture search models also let users pull virtual textures from databases, such as the University of Pennsylvania's Haptic Texture Toolkit, and refine them until the desired result is obtained.
Once this technique is combined with a texture search model, users can start from virtual textures previously recorded by others and then optimize them with the evolution strategy.
The authors imagine that in the future, the model may not even need real textures.
Our sense of many everyday objects is very intuitive, so virtual textures could be fine-tuned from photos alone, without reference to real textures. For example, when we see a table, prior experience with such surfaces lets us imagine how it would feel to touch; visual feedback alone could then be shown to users, who would select the matching texture.
The first author, Shihan Lu, is a doctoral candidate in the School of Computer Science at the University of Southern California. He has previously worked on sound in immersive technology, making virtual textures more immersive by introducing matching sounds when tools interact with them.
The second author, Mianlun Zheng, is a doctoral candidate in the School of Computer Science at the University of Southern California, with bachelor's and master's degrees from Wuhan University.