


When "Segment Anything" meets image inpainting: no precise masking needed, just click an object to achieve object removal, content filling, and scene replacement
In early April, Meta released SAM (Segment Anything Model) [1], the first foundation model for image segmentation. SAM is both powerful and very easy to use: the user simply clicks to select an object, and that object is segmented immediately, with highly accurate results. As of April 15, SAM's GitHub repository had 26k stars.
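SAM's click-to-mask interaction can be illustrated with a toy sketch. The function below is a minimal stand-in (simple region growing, not SAM itself) that turns a single click into an object mask; it only mimics the point-prompt interface described above, and all names are illustrative.

```python
import numpy as np

def click_to_mask(image, click, tol=10):
    """Toy stand-in for SAM's point-prompted segmentation.

    SAM predicts an object mask from a single click; here we approximate
    that with region growing: starting from the clicked pixel, grow the
    mask over 4-connected neighbours whose intensity is within `tol` of
    the seed. `click` is a (row, col) pair.
    """
    h, w = image.shape
    seed = int(image[click])
    mask = np.zeros((h, w), dtype=bool)
    stack = [click]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
            continue
        if abs(int(image[r, c]) - seed) > tol:
            continue
        mask[r, c] = True
        stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

# A 6x6 grayscale image with a bright 3x3 "object"; click inside it.
img = np.zeros((6, 6), dtype=np.uint8)
img[1:4, 1:4] = 200
mask = click_to_mask(img, (2, 2))
print(mask.sum())  # the mask covers exactly the 9 object pixels
```

The real SAM predictor returns several candidate masks per click; this sketch keeps only the single-mask idea.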
How to make good use of such a powerful "segment anything" model, and extend it to application scenarios with real practical demand, is a crucial question. For example, what sparks will fly when SAM meets the practical task of image inpainting?
A research team from the University of Science and Technology of China and the Eastern Institute of Technology has given a stunning answer. Building on SAM, they proposed the "Inpaint Anything" (IA) model. Unlike traditional inpainting models, IA requires no detailed mask-drawing operations: a single click marks the selected object. IA supports Remove Anything, Fill Anything, and Replace Anything, covering typical inpainting application scenarios including object removal, content filling, and background replacement.
- Paper link: http://arxiv.org/abs/2304.06790
- Code repository link: https://github.com/geekyutao/Inpaint-Anything
Method introduction
Although current image inpainting systems have made significant progress, they still struggle with mask selection and hole filling. Building on SAM, the researchers attempted mask-free image inpainting for the first time and constructed a new "click-and-fill" paradigm for inpainting, which they call Inpaint Anything (IA). The core idea behind IA is to combine the strengths of different models to build a powerful, user-friendly inpainting system. IA has three main functions:
(i) Remove Anything: the user only needs to click on the object to be removed, and IA removes it without a trace, achieving efficient "magic elimination";
(ii) Fill Anything: the user can further tell IA, via a text prompt, what to fill the object with; IA then drives an embedded AIGC (AI-Generated Content) model (such as Stable Diffusion [2]) to generate the corresponding content, enabling free-form "content creation";
(iii) Replace Anything: the user can also click to select an object to keep and use a text prompt to tell IA what to replace the object's background with, producing a vivid "scene replacement".
The overall framework of IA is shown below:
Remove Anything
Remove Anything diagram
The "Remove Anything" steps are as follows:
- Step 1: The user clicks on the object to be removed;
- Step 2: SAM segments the object;
- Step 3: An image inpainting model (LaMa) fills the resulting hole.
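The "Remove Anything" flow (click, SAM mask, inpaint the hole) can be sketched as follows. The hole filling below is a trivial mean-fill stand-in for LaMa, meant only to show the image-plus-mask interface, not the actual model.

```python
import numpy as np

def remove_object(image, mask):
    """Toy stand-in for the inpainting step of "Remove Anything".

    A real model (e.g. LaMa) synthesizes plausible texture for the
    hole; here we simply fill every masked pixel with the mean of the
    unmasked pixels, which is enough to illustrate the interface:
    image plus mask in, hole-free image out.
    """
    out = image.astype(float).copy()
    out[mask] = image[~mask].mean()
    return out.astype(image.dtype)

img = np.full((8, 8), 100, dtype=np.uint8)
img[2:5, 2:5] = 255            # the "object" the user clicked
mask = img == 255              # pretend this mask came from SAM
clean = remove_object(img, mask)
print(np.unique(clean))        # only the background value remains
```

In the real system the mask would come from SAM's point prompt rather than a threshold, but the composition of the two stages is the same.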
Fill Anything
Fill Anything diagram; text prompt used in the figure: a teddy bear on a bench
The "Fill Anything" steps are as follows:
- Step 1: The user clicks on the object to be filled;
- Step 2: SAM segments the object;
- Step 3: The user describes the desired fill content with a text prompt;
- Step 4: A text-driven image inpainting model (Stable Diffusion) fills the object according to the user-supplied prompt.
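The final compositing step of "Fill Anything" can be sketched as below. `generated` is a constant-colour stand-in for a Stable Diffusion inpainting result (e.g. for the prompt "a teddy bear on a bench"); the function only illustrates how the SAM mask, the original image, and the generated content combine.

```python
import numpy as np

def fill_object(image, mask, generated):
    """Compositing sketch for "Fill Anything".

    Inside the SAM mask we take the generated content; outside it we
    keep the original pixels. `mask` is HxW boolean, images are HxWx3.
    """
    m = mask[..., None].astype(float)   # HxWx1 blend weight
    return (m * generated + (1 - m) * image).astype(image.dtype)

img = np.zeros((4, 4, 3), dtype=np.uint8)     # original scene
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                         # object region from SAM
gen = np.full((4, 4, 3), 50, dtype=np.uint8)  # stand-in for SD output
out = fill_object(img, mask, gen)
print(out[1, 1], out[0, 0])  # filled pixel vs. untouched background
```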
Replace Anything
Replace Anything diagram; text prompt used in the figure: a man in office
The "Replace Anything" steps are as follows:
- Step 1: The user clicks on the object to be kept;
- Step 2: SAM segments the object;
- Step 3: The user describes the desired new background with a text prompt;
- Step 4: A text-driven image inpainting model (Stable Diffusion) replaces the object's background according to the user-supplied prompt.
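"Replace Anything" inverts the blend: pixels inside the mask (the object the user wants to keep) come from the original image, and everything outside is taken from the generated scene. A minimal sketch, again with a constant-colour stand-in for the Stable Diffusion output:

```python
import numpy as np

def replace_background(image, mask, new_background):
    """Compositing sketch for "Replace Anything".

    The inverse of "Fill Anything": keep the masked object from the
    original image and take the background from `new_background`,
    which stands in for a generated scene (e.g. "a man in office").
    """
    m = mask[..., None].astype(float)   # HxWx1 blend weight
    return (m * image + (1 - m) * new_background).astype(image.dtype)

img = np.full((4, 4, 3), 200, dtype=np.uint8)  # object pixels
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                          # region to keep, from SAM
bg = np.full((4, 4, 3), 10, dtype=np.uint8)    # stand-in generated scene
out = replace_background(img, mask, bg)
print(out[1, 1], out[0, 0])  # kept object pixel vs. new background
```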
Model results
The researchers' model also supports 2K high-definition images and arbitrary aspect ratios, which allows the IA system to be efficiently integrated into a variety of environments and existing frameworks.
Remove Anything results
Fill Anything results
Text prompt: a camera lens in the hand
Text prompt: an aircraft carrier on the sea
Text prompt: a sports car on a road
Text prompt: a Picasso painting on the wall
Replace Anything results
Text prompt: sit on the swing
Text prompt: a bus, on the center of a country road, summer
Text prompt: crossroad in the city
Summary
The researchers built this project to demonstrate the powerful capabilities obtainable by fully leveraging existing large-scale AI models, and to reveal the unlimited potential of "Composable AI". Inpaint Anything (IA), proposed by the project, is a multifunctional image inpainting system that integrates object removal, content filling, scene replacement, and other functions (with more on the way, so stay tuned).
IA combines vision foundation models such as SAM, inpainting models such as LaMa, and AIGC models such as Stable Diffusion to achieve user-friendly mask-free inpainting, and supports foolproof operations such as "click to remove" and "prompt to fill". In addition, IA can process images of arbitrary aspect ratio and 2K HD resolution, regardless of their original content.
Currently, the project is fully open source. Finally, everyone is welcome to share and promote Inpaint Anything (IA), and the team looks forward to seeing more new projects built on IA. In the future, the researchers will further explore the potential of Inpaint Anything to support more practical new functions, such as fine-grained image matting and editing, and apply it to more real-world applications.
