


Train your robot dog in real time with Vision Pro! MIT PhD student's open-source project goes viral
Vision Pro has a hot new use case, and this time it links up with embodied intelligence.
An MIT PhD student used Vision Pro's hand-tracking feature to achieve real-time control of a robot dog.
Not only are actions such as opening a door captured accurately, but there is also almost no latency.
As soon as the demo came out, not only did netizens call it amazing, but embodied-intelligence researchers of all stripes also got excited.
For example, this prospective doctoral student from Tsinghua University:
Some people boldly predict: This is how we will interact with the next generation of machines.
The author, Younghyo Park, has open-sourced the implementation on GitHub, and the companion app can be downloaded directly from Vision Pro's App Store.
Use Vision Pro to train robot dogs
Let's take a closer look at the app the author developed: Tracking Streamer.
As the name suggests, the application uses Vision Pro to track human movements and stream that motion data in real time to other robot devices on the same WiFi network.
The motion tracking part mainly relies on Apple’s ARKit library.
Head tracking calls queryDeviceAnchor; users can reset the head frame to its current position by long-pressing the Digital Crown.
Wrist and finger tracking are implemented through HandTrackingProvider, which tracks the position and orientation of the left and right wrists relative to the ground frame, as well as the pose of 25 finger joints on each hand relative to the wrist frame.
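To make those frame conventions concrete, here is a minimal NumPy sketch (all values hypothetical, not from the project) of how the tracked poses compose: a finger joint's pose relative to the wrist frame, left-multiplied by the wrist's pose relative to the ground frame, gives the joint's pose in the ground frame. SE(3) poses are written as 4x4 homogeneous matrices.

```python
import numpy as np

def se3(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous SE(3) matrix from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical values: the wrist pose in the ground frame, and one of the
# 25 finger-joint poses expressed in the wrist frame.
T_ground_wrist = se3(np.eye(3), np.array([0.2, 1.1, 0.4]))
T_wrist_joint = se3(np.eye(3), np.array([0.05, 0.0, 0.02]))

# Composing the two yields that joint's pose in the ground frame.
T_ground_joint = T_ground_wrist @ T_wrist_joint
print(T_ground_joint[:3, 3])  # 3D position of the joint in the ground frame
```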
For network communication, the app streams data using gRPC, which lets many more devices subscribe to the data, including Linux, Mac, and Windows machines.
In addition, to make the data easier to consume, the author also provides a Python API that lets developers programmatically subscribe to and receive the tracking data streamed from Vision Pro.
The API returns data as a dictionary containing the SE(3) poses, that is, the three-dimensional position and orientation, of the head, wrists, and fingers. Developers can process this data directly in Python for further analysis and robot control.
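A minimal usage sketch based on the project's README: a `VisionProStreamer` from the project's `avp_stream` package subscribes to the gRPC stream from the headset's IP address and exposes the latest tracking dictionary. The class name, dictionary keys, and array shapes below are taken from the README at the time of writing and may change, so treat them as assumptions and check the repo.

```python
from avp_stream import VisionProStreamer  # Python API shipped with the project

# IP address displayed in the Tracking Streamer app; replace with your headset's.
streamer = VisionProStreamer(ip="10.29.230.57", record=True)

while True:
    data = streamer.latest  # most recent tracking dictionary
    head = data["head"]                    # (1, 4, 4) SE(3) pose, ground frame
    right_wrist = data["right_wrist"]      # (1, 4, 4) SE(3) pose, ground frame
    right_fingers = data["right_fingers"]  # (25, 4, 4) SE(3) poses, wrist frame
    # ...feed these poses into your robot controller here...
```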
As many practitioners have pointed out, even though the robot dog's movements are still controlled by a human, when combined with imitation-learning algorithms the human acts less as a "controller" and more as a coach for the robot.
By tracking the user's movements, Vision Pro offers an intuitive, simple interface that lets even non-experts supply accurate training data for robots.
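As a rough illustration of that "coach" idea (not part of the project itself, all names hypothetical): a teleoperation loop can log each tracking snapshot together with the robot command derived from it, producing (observation, action) pairs that an imitation-learning method such as behavior cloning can later train on.

```python
import time
import pickle

demonstrations = []  # accumulated (observation, action) pairs

def record_step(tracking: dict, action) -> None:
    """Log one timestamped demonstration step."""
    demonstrations.append({"time": time.time(), "obs": tracking, "action": action})

# Inside a hypothetical teleoperation loop:
#   command = wrist_pose_to_velocity(streamer.latest["right_wrist"])  # your mapping
#   robot.step(command)
#   record_step(streamer.latest, command)

# Afterwards, save the demonstrations for behavior-cloning training.
with open("demos.pkl", "wb") as f:
    pickle.dump(demonstrations, f)
```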
The author himself also wrote in the paper:
In the near future, people may wear devices like Vision Pro every day, just as they wear glasses. Imagine how much data we could collect in the process!
This is a promising source of data from which robots can learn how humans interact with the real world.
Finally, a reminder: if you want to try this open-source project, in addition to a Vision Pro you will also need:
- Apple Developer Account
- Vision Pro Developer Accessory (Developer Strap, priced at $299)
- Mac computer with Xcode installed
Well, it seems you still have to let Apple make some money first (doge).
Project link: https://github.com/Improbable-AI/VisionProTeleop?tab=readme-ov-file