


Chinese scholars developed a caregiving-robot simulation environment, conducted real-world experiments, and won one of the best paper awards at IROS 2022
From October 23 to 27, the top robotics conference IROS 2022 was held in Kyoto, Japan. The conference received 3,579 paper submissions from 57 countries and regions and accepted 1,716 of them, an acceptance rate of 47.9%.
Among them, Lu Cewu's team at Shanghai Jiao Tong University, together with researchers from Cornell University and Columbia University, introduced RCareWorld, a simulation environment for caregiving robots. The work received the Best RoboCup Paper Award, one of the six best paper awards at IROS 2022.
Two Best RoboCup papers. Image source: Twitter@ctwy
- Paper address: https://arxiv.org/pdf/2210.10821.pdf
- Github address: https://github.com/empriselab/RCareWorld
- Paper homepage: https://emprise.cs.cornell.edu/rcareworld/
According to WHO data, 190 million people worldwide have some degree of mobility limitation and need the help of caregivers to achieve a higher quality of life. Major countries are gradually becoming aging societies, the demand for nursing staff is rising sharply, and training such personnel requires long-term investment. Designing caregiving robots is therefore one possible solution.
However, the development of caregiving robots faces many difficulties, for example:
1. Frontier researchers lack accumulated research on the first-person, real needs of caregivers and care recipients (tasks, algorithms, data).
2. Developing, deploying, operating, and maintaining real robots is very expensive.
3. Experiments in this field must be tailored to the daily-activity needs of care recipients, which greatly increases research costs.
Therefore, a simulation platform that faithfully reproduces caregiving scenarios can greatly lower the barrier to entering this research area, making it easier to conduct studies and compare results with academic peers.
Unlike previous simulation environments built for general-purpose robots, the RCareWorld team also incorporated suggestions from people involved in caregiving and from robotics researchers, providing full support for the required robot skill learning, virtual human modeling, activity scene design, functional interfaces, and more.
Robotic caregiving skills
The authors benchmarked common caregiving tasks in the simulation environment: feeding, dressing, body wiping, limb repositioning, assisting with mobility, helping with toileting, and so on.
In addition, the authors conducted two real-world experiments:
1. Policies learned for the body-wiping task were transferred directly to a real-robot experiment.
2. Real-world social caregiving: the authors programmed a NAO robot with a behavior tree to act as a coach, guiding the care recipient's physical rehabilitation through a VR interface (a minimal behavior-tree sketch follows this list).
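The article describes this only at a high level; below is a minimal, hypothetical sketch of how such a coaching behavior tree could be structured in plain Python. The node names and callbacks (e.g. `demonstrate_exercise`) are illustrative assumptions, not the authors' code or the NAO SDK.

```python
# Minimal behavior-tree sketch for a rehabilitation "coach" robot.
# All node names and callbacks are hypothetical, not the RCareWorld or NAO API.

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node: wraps a callable that returns SUCCESS or FAILURE."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

class Sequence:
    """Composite node: succeeds only if every child succeeds in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

# Hypothetical leaf behaviors for one coaching round.
def greet_user():           print("Coach: let's start today's exercise."); return SUCCESS
def demonstrate_exercise(): print("Coach: watch me raise my arm slowly."); return SUCCESS
def wait_for_repetition():  print("VR: user repetition detected.");        return SUCCESS
def give_feedback():        print("Coach: great job, one more time!");     return SUCCESS

coach = Sequence(
    Action("greet", greet_user),
    Action("demo", demonstrate_exercise),
    Action("observe", wait_for_repetition),
    Action("feedback", give_feedback),
)

if __name__ == "__main__":
    coach.tick()  # one pass through the coaching routine
```

A full behavior tree would also use selector nodes and running states so the coach can retry or adapt when the user misses a step; this sketch shows only the sequential happy path.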
Complete virtual human modeling: musculoskeletal, soft tissue, joint range of motion
Musculoskeletal system
The human musculoskeletal system receives activation signals from the nervous system; these signals determine whether muscles contract or relax, which in turn drives the motion of bones and joints. The authors model this with Hill-type muscle models and use data from the OpenSim database to configure the muscles of the virtual human.
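The article does not reproduce the authors' equations, but a Hill-type model in its common textbook form combines an activation-scaled active force (shaped by force-length and force-velocity curves) with a passive elastic force. The sketch below is a generic illustration of that structure; the curve shapes and constants are simplified placeholders, not parameters taken from OpenSim or from the paper.

```python
import math

def hill_muscle_force(activation, l_norm, v_norm, f_max):
    """Generic Hill-type muscle force (simplified illustration).

    activation: neural activation in [0, 1]
    l_norm:     fiber length normalized by optimal fiber length
    v_norm:     fiber velocity normalized by max shortening velocity (+ = lengthening)
    f_max:      maximum isometric force (N)
    """
    # Active force-length curve: bell-shaped, peaks at optimal length (l_norm = 1).
    f_length = math.exp(-((l_norm - 1.0) ** 2) / 0.45)
    # Force-velocity curve: less force when shortening, more when lengthening (clamped).
    f_velocity = max(0.0, min(1.5, 1.0 + 0.5 * v_norm))
    # Passive parallel-elastic element: engages only when stretched past optimal length.
    f_passive = math.exp(5.0 * (l_norm - 1.0)) - 1.0 if l_norm > 1.0 else 0.0
    return f_max * (activation * f_length * f_velocity + f_passive)

# Example: fully activated muscle near optimal length, contracting slowly.
print(hill_muscle_force(activation=1.0, l_norm=1.0, v_norm=-0.1, f_max=1000.0))
```

In a full musculoskeletal simulation, such muscle forces are applied along each muscle path to produce the joint torques that drive the skeleton.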
Soft tissue
On the other hand, when the joints of the human body move, the surface soft tissue deforms. The authors model this with an XPBD-based simulation. Soft tissue is modeled not only on the body surface but also inside the oral cavity, where the authors also model the tongue: as shown in the figure, when a strawberry is placed in the virtual human's mouth, the tongue deforms.
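XPBD (extended position-based dynamics) simulates deformable material by iteratively projecting particle positions onto compliant constraints. The sketch below applies a single XPBD distance constraint between two particles, as a generic illustration of the method rather than the actual soft-tissue solver used in RCareWorld.

```python
import numpy as np

def xpbd_distance_constraint(p1, p2, w1, w2, rest_len, lam, compliance, dt):
    """One XPBD iteration of a distance constraint C = |p1 - p2| - rest_len.

    p1, p2:     particle positions (np.array of shape (3,)), updated in place
    w1, w2:     inverse masses
    lam:        accumulated Lagrange multiplier for this constraint
    compliance: inverse stiffness (0 = rigid constraint)
    dt:         simulation time step
    """
    diff = p1 - p2
    dist = np.linalg.norm(diff)
    if dist < 1e-9:
        return lam
    n = diff / dist                       # constraint gradient direction
    c = dist - rest_len                   # constraint violation
    alpha_tilde = compliance / (dt * dt)  # time-step-scaled compliance
    dlam = (-c - alpha_tilde * lam) / (w1 + w2 + alpha_tilde)
    p1 += w1 * dlam * n                   # project particles along the gradient
    p2 -= w2 * dlam * n
    return lam + dlam

# Example: two unit-mass particles stretched beyond their rest length.
a, b = np.array([0.0, 0.0, 0.0]), np.array([1.5, 0.0, 0.0])
lam = 0.0
for _ in range(10):  # a few solver iterations pull them toward rest length 1.0
    lam = xpbd_distance_constraint(a, b, 1.0, 1.0, 1.0, lam, compliance=1e-4, dt=1 / 60)
print(a, b)
```

A soft-tissue mesh simply repeats this kind of projection over many distance and volume constraints in every time step.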
Joint range of motion
Injuries such as spinal cord injury (C1-C3, C4-C5, C6-C7), cerebral palsy, hemiplegia, and stroke greatly reduce joint mobility and change movement patterns. Based on clinical data, the authors model the corresponding joint ranges of motion of the human body after such injuries.
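The clinical ranges themselves are not listed in the article, so the sketch below only illustrates the mechanism: a per-joint range-of-motion table used to clamp commanded joint angles for an impaired avatar. The joint names and (min, max) limits are placeholders, not clinical data.

```python
# Illustrative only: the (min, max) limits below are placeholders,
# not the clinical ranges used in RCareWorld.
ROM_LIMITS_DEG = {
    "shoulder_flexion": (0.0, 90.0),    # placeholder reduced range
    "elbow_flexion":    (10.0, 120.0),  # placeholder reduced range
    "wrist_extension":  (0.0, 30.0),    # placeholder reduced range
}

def clamp_to_rom(joint_targets_deg):
    """Clamp requested joint angles to the avatar's configured range of motion."""
    clamped = {}
    for joint, angle in joint_targets_deg.items():
        lo, hi = ROM_LIMITS_DEG.get(joint, (-180.0, 180.0))
        clamped[joint] = min(max(angle, lo), hi)
    return clamped

# A motion command that exceeds the impaired range is limited before simulation.
print(clamp_to_rom({"shoulder_flexion": 150.0, "elbow_flexion": 45.0}))
```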
Activity scene: three levels of accessibility
Scenes are divided into three levels according to their accessibility:
- Normal (L1): no modification is required.
- Partial accessibility (L2): some objects are removed, and tools or fixtures that are hard to use are replaced; for example, rotating door knobs are swapped for push-style handles, or the entire door is replaced with a sliding door.
- Fully barrier-free (L3): all activity areas must be passable and reachable, which may involve widening stairs and aisles, installing pull-down overhead cabinets, hollowing out the space under countertops, and other renovations.
The comprehensive modification plan follows Universal Design guidelines. Based on this guidance, the authors modified 16 houses. The house models are selected from the Matterport3D dataset and include a total of 17 kitchens, 17 living rooms, 59 bedrooms, 16 dining rooms, 70 bathrooms, 18 lounges, and 41 other rooms. Suitable areas of the houses are equipped with hospital beds, patient slings, wheelchairs, and other assistive medical equipment.
Functional interfaces: robot models, sensing, interactive interfaces
Based on suggestions from robotics researchers, the simulation environment should:
1. Support control of common caregiving-robot models, including HSR, Stretch, NAO, Fetch, Kinova, Franka, UR, etc.
2. Provide multi-modal sensing: RGB, depth, LiDAR, joint and end-effector force sensing, and full-arm tactile perception.
3. Offer a variety of interactive control interfaces (Python, ROS, VR) that support planning, control, and learning algorithms, making it easy for developers to use; a hedged sketch of a Python-style interaction loop follows this list.
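The article does not show the Python interface itself, so the following is a hypothetical, gym-style sketch of what an episode loop against such a simulator could look like. The class `CaregivingEnv`, its methods, and the observation keys are illustrative assumptions, not the actual RCareWorld or pyrfuniverse API.

```python
import random

class CaregivingEnv:
    """Stand-in environment with a gym-style interface; illustrative only,
    not the RCareWorld/pyrfuniverse API."""
    def reset(self):
        return {"rgb": None, "depth": None, "joint_forces": [0.0] * 7}
    def step(self, action):
        obs = {"rgb": None, "depth": None, "joint_forces": [0.0] * 7}
        reward = random.random()   # placeholder task reward
        done = reward > 0.95       # placeholder termination condition
        return obs, reward, done, {}

def run_episode(env, policy, max_steps=200):
    """Roll out one episode with an arbitrary policy callable."""
    obs, total = env.reset(), 0.0
    for _ in range(max_steps):
        action = policy(obs)
        obs, reward, done, _ = env.step(action)
        total += reward
        if done:
            break
    return total

if __name__ == "__main__":
    random_policy = lambda obs: [random.uniform(-1, 1) for _ in range(7)]
    print("episode return:", run_episode(CaregivingEnv(), random_policy))
```

In practice, the observation dictionary would carry the multi-modal sensor data listed above (RGB, depth, forces, tactile), while the ROS and VR interfaces expose the same simulation to planners and human operators.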
Shanghai Jiao Tong University launches the embodied intelligence platform RobotFlow/RFUniverse
The RCareWorld project is built on the RFUniverse simulation platform. RFUniverse is a multi-physics robot simulation platform within RobotFlow, an embodied-intelligence platform initiated by Professor Lu Cewu's team at Shanghai Jiao Tong University. The platform supports advanced robot manipulation tasks such as food cutting and clothes folding, and provides support for rigid bodies, articulated bodies, deformable bodies, fluids, and other object types. The MVIG group led by Lu Cewu at Shanghai Jiao Tong University has long-standing research in embodied intelligence and computer vision, with more than 100 papers published in Nature, Nature Machine Intelligence, TPAMI, ICRA, and IROS, producing well-known robot learning and computer vision systems such as GraspNet (AnyGrasp), AlphaPose, and HAKE.
Now open source: https://github.com/mvig-robotflow/pyrfuniverse
Xu Wenqiang, co-first author of the RCareWorld paper, is a core developer of this system.
Introduction to the co-first authors
Ye Ruolin received her bachelor's degree from the Department of Electronics at Shanghai Jiao Tong University and is a first-year doctoral student in the Department of Computer Science at Cornell University, advised by Professor Tapomayukh Bhattacharjee. Her research focuses on human-robot interaction. The main work on RCareWorld was completed during her internship at the MVIG Laboratory of Shanghai Jiao Tong University (advisor: Lu Cewu).
Xu Wenqiang is a fourth-year doctoral student in the MVIG Laboratory at Shanghai Jiao Tong University, advised by Professor Lu Cewu. His research focuses on embodied intelligence. He leads the RobotFlow project within the lab, which includes the multi-physics robot simulation platform RFUniverse, the foundation of RCareWorld.
- RobotFlow project: https://robotflow.ai
- RFUniverse platform: https://sites.google.com/view/rfuniverse