


New technology launched: IDEA Research Institute releases the T-Rex model, letting users give prompts by selecting objects directly on the image
Following the popularity of Grounded SAM, the IDEA Research Institute team is back with another blockbuster: T-Rex, a brand-new visual prompt model that identifies objects from image examples, works right out of the box, and opens up a new world of open-set detection!
Draw a box, detect, done! At the just-concluded 2023 IDEA Conference, Shen Xiangyang, founding chairman of the IDEA Research Institute and foreign member of the National Academy of Engineering, demonstrated a new object detection experience based on visual prompts and launched Interactive Visual Prompt (iVP), the model playground for the new visual prompt model T-Rex, setting off a wave of on-site trials.
On iVP, users can experience firsthand that "a picture is worth a thousand words": mark objects of interest on an image to give the model visual examples, and the model detects all similar instances in the target image. The entire process is interactive and takes just a few steps.
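To make this workflow concrete, below is a minimal sketch of what an interactive visual-prompt call might look like. Everything in it (the `TRexModel` class, the `detect` method, and its parameters) is a hypothetical stand-in stub for illustration, not the actual iVP or T-Rex API:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Box:
    """Axis-aligned bounding box with an optional confidence score."""
    x0: float
    y0: float
    x1: float
    y1: float
    score: float = 1.0

class TRexModel:
    """Stand-in stub for a visual-prompt detector (illustrative only)."""

    def detect(
        self,
        image_path: str,
        prompt_boxes: List[Box],
        negative_boxes: Optional[List[Box]] = None,
        reference_image: Optional[str] = None,
    ) -> List[Box]:
        # A real model would embed the prompted regions and match all
        # visually similar instances in the image; this stub returns a
        # fixed mock result so the sketch runs end to end.
        return [Box(10, 20, 50, 80, 0.97), Box(60, 22, 98, 81, 0.95)]

model = TRexModel()

# Step 1: the user marks one object of interest on the image.
prompt = [Box(12, 18, 52, 79)]

# Step 2: the model detects every similar instance in the image.
detections = model.detect("goods_on_truck.jpg", prompt_boxes=prompt)

# Counting comes for free: the count is the number of detections.
print(f"Detected {len(detections)} similar objects")
```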
Grounded SAM (Grounding DINO + SAM), released by the IDEA Research Institute in April, was once hugely popular on GitHub and has collected 11K stars so far. Unlike Grounded SAM, which supports only text prompts, the newly released T-Rex model provides a visual prompt capability focused on strong interactivity.
T-Rex works out of the box: it can detect objects the model never saw during training, without retraining or fine-tuning. The model applies not only to detection tasks, including counting, but also offers new solutions for interactive intelligent annotation scenarios.
The team revealed that the visual prompt technology grew out of observing pain points in real-world scenarios. Some partners wanted to use vision models to count goods on trucks, but with text prompts alone the model could not identify each item individually, because objects in industrial scenes are rare in daily life and hard to describe in words. In such cases, visual prompts are clearly the more efficient approach, and the intuitive visual feedback and strong interactivity also help improve detection efficiency and accuracy.
Based on these insights into real usage requirements, the team designed T-Rex as a model that accepts multiple visual prompts and can prompt across images. In addition to the basic single-round prompt mode, the model currently supports the following three advanced modes (a hedged sketch follows the list).
- Multi-round positive-example mode: suitable for scenarios where an imprecise visual prompt causes missed detections
- Positive-and-negative-example mode: suitable for scenarios where an ambiguous visual prompt causes false detections
- Cross-image mode: suitable for detecting objects in other images by prompting on a single reference image
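Continuing the hypothetical stub from the sketch above, the three modes might be invoked roughly as follows; the parameter names `negative_boxes` and `reference_image` are illustrative assumptions, not the real T-Rex interface:

```python
positive = [Box(5, 5, 40, 60)]

# 1. Multi-round positive-example mode: after spotting a missed object,
#    the user adds another positive box and runs detection again.
second_round = model.detect("shelf.jpg",
                            prompt_boxes=positive + [Box(70, 8, 105, 63)])

# 2. Positive-and-negative-example mode: a negative box marks a false
#    detection, telling the model to suppress similar unwanted objects.
refined = model.detect("shelf.jpg",
                       prompt_boxes=positive,
                       negative_boxes=[Box(200, 10, 240, 65)])

# 3. Cross-image mode: boxes drawn on a reference image prompt
#    detection on a different target image.
cross = model.detect("target.jpg",
                     prompt_boxes=positive,
                     reference_image="reference.jpg")
```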
In the technical report released at the same time, the team summarized the four main features of the T-Rex model:
- Open set: not restricted to predefined categories; able to detect any object
- Visual prompts: use visual examples to specify detection targets, overcoming the difficulty of fully describing rare or complex objects in words and improving prompting efficiency
- Intuitive visual feedback: provide intuitive feedback such as bounding boxes, helping users evaluate detection results efficiently
- Interactivity: users can conveniently participate in the detection process and correct the model's results
The research team pointed out that in object detection scenarios, visual prompts can compensate for some of the shortcomings of text prompts; in the future, combining the two will further unleash the potential of CV technology in more vertical fields.
For technical details of the T-Rex model, please refer to the technical report released at the same time.
iVP Model Lab: https://deepdataspace.com/playground/ivp
GitHub link: trex-counting.github.io
This work comes from the Computer Vision and Robotics Research Center of the IDEA Research Institute. The team's previously open-sourced object detection model DINO was the first DETR-based model to reach first place on the COCO object detection leaderboard; the hugely popular zero-shot detector Grounding DINO on GitHub and Grounded SAM, which can detect and segment any object, are also this team's work.
