


ICCV 2023 awards announced: popular papers such as ControlNet and SAM win top honors
The International Conference on Computer Vision (ICCV) opened in Paris, France this week.
As one of the top academic conferences in computer vision, ICCV is held every two years.
ICCV's popularity has always rivaled that of CVPR, with submission numbers repeatedly setting new records.
At today's opening ceremony, ICCV officially announced this year's paper statistics: a total of 8,068 papers were submitted, of which 2,160 were accepted, for an acceptance rate of 26.8%, slightly higher than the 25.9% of the previous edition, ICCV 2021.

The organizers also released statistics on paper topics: 3D vision from multiple views and sensors was the most popular area.

The highlight of today's opening ceremony was undoubtedly the award presentation. Below are the winners of the best paper award, the best paper nomination, and the best student paper award.
Best Paper: Marr Prize
Two papers won this year's best paper award (the Marr Prize).
The first was conducted by researchers at the University of Toronto.

Paper address: https://openaccess.thecvf.com/content/ICCV2023/papers/Wei_Passive_Ultra-Wideband_Single-Photon_Imaging_ICCV_2023_paper.pdf
Authors: Mian Wei, Sotiris Nousias, Rahul Gulve, David B. Lindell, Kiriakos N. Kutulakos
Institution: University of Toronto
Abstract: This paper considers the problem of imaging a dynamic scene over an extreme range of timescales simultaneously (seconds to picoseconds), and doing so passively, without much light and without any timing signals from the light sources emitting it. Because existing flux estimation techniques for single-photon cameras fail in this regime, the authors develop a flux probing theory that draws insights from stochastic calculus to enable reconstruction of a pixel's time-varying flux from a stream of photon detection timestamps.
Using this theory, the paper (1) shows that passive free-running SPAD cameras have an attainable frequency bandwidth under low-flux conditions that spans the entire DC-to-31-GHz range, (2) derives a novel Fourier-domain flux reconstruction algorithm, and (3) ensures that the algorithm's noise model remains valid even for very low photon counts or non-negligible dead times.
The paper experimentally demonstrates the potential of this asynchronous imaging regime: (1) imaging scenes illuminated simultaneously by light sources operating at significantly different speeds (bulbs, projectors, multiple pulsed lasers) without synchronization; (2) passive non-line-of-sight video acquisition; and (3) recording ultra-wideband video that can later be played back at 30 Hz to show everyday motion, but also a billion times slower to show the propagation of light itself.
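To make the Fourier-domain idea concrete, here is a minimal sketch of flux estimation from a stream of photon timestamps. It assumes an ideal inhomogeneous Poisson process and ignores SPAD dead time, which the paper's full theory explicitly handles; the function names, test signal, and parameters are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_flux_fourier(timestamps, T, freqs, t_eval):
    """Toy Fourier-domain flux estimate from photon detection timestamps.

    For an ideal inhomogeneous Poisson process with rate lambda(t) on [0, T],
    the empirical sum (1/T) * sum_k exp(-2j*pi*f*t_k) is an unbiased estimate
    of the Fourier coefficient of lambda at frequency f.
    """
    timestamps = np.asarray(timestamps)
    coeffs = np.array([np.exp(-2j * np.pi * f * timestamps).sum() / T
                       for f in freqs])
    # Resynthesize the time-varying flux from the estimated spectrum.
    basis = np.exp(2j * np.pi * np.outer(t_eval, freqs))
    return np.real(basis @ coeffs)

def lam(t):
    # Ground-truth flux: a DC level plus a 50 Hz flicker.
    return 800.0 + 600.0 * np.cos(2 * np.pi * 50.0 * t)

# Simulate photon timestamps on [0, T] via Poisson thinning.
T, lam_max = 1.0, 2000.0
candidates = rng.uniform(0, T, rng.poisson(lam_max * T))
photons = candidates[rng.uniform(0, lam_max, candidates.size) < lam(candidates)]

freqs = np.arange(-100, 101, dtype=float)   # probe -100..100 Hz on a 1 Hz grid
t_eval = np.linspace(0.0, T, 400)
flux_hat = estimate_flux_fourier(photons, T, freqs, t_eval)
# flux_hat now approximates lam(t): ~800 DC plus a 600-amplitude 50 Hz ripple.
```

The same estimator evaluated at higher probe frequencies is what lets a free-running SPAD recover flux variations far above the frame rates of conventional cameras.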

The second winning paper is the well-known ControlNet.

Paper address: https://arxiv.org/pdf/2302.05543.pdf
Authors: Lvmin Zhang, Anyi Rao, Maneesh Agrawala
Institution: Stanford University
Abstract: This paper proposes ControlNet, an end-to-end neural network architecture that controls diffusion models (such as Stable Diffusion) by adding extra conditions, improving image generation: it can turn line drawings into full-color images, generate images that share a given depth structure, and use hand keypoints to improve the generation of hands.
The core idea of ControlNet is to add extra conditions beyond the text prompt to steer the diffusion model, giving finer control over the pose, depth, composition, and other properties of the generated image.
Concretely, the extra condition is supplied as an image, such as a Canny edge map, a depth map, a semantic segmentation map, Hough-transform line detection, holistically-nested edge detection (HED), or human pose estimation, and the model preserves that information in the generated image. With this model, a line drawing or scribble can be converted directly into a full-color image, an image with the same depth structure can be generated, and hand keypoints can be used to improve the generation of hands.
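As a concrete illustration, here is a minimal sketch of Canny-edge conditioning using the Hugging Face diffusers library rather than the paper's original codebase; the model IDs are the commonly published community checkpoints, and the file paths and prompt are illustrative.

```python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image
from PIL import Image

# Load the Canny-conditioned ControlNet and attach it to Stable Diffusion.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

# Build the conditioning image: a Canny edge map of an input photo.
image = load_image("input.png")                  # path is illustrative
edges = cv2.Canny(np.array(image), 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

# The edge structure is preserved while the prompt fills in appearance.
result = pipe("a colorful oil painting of a house",
              image=control, num_inference_steps=30).images[0]
result.save("output.png")
```

Swapping the checkpoint (for example, to a depth or pose variant) changes which condition the model respects, with no other code changes.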

For more details, see this site's earlier report: AI deals a crushing blow to human painters: ControlNet comes to text-to-image generation, with depth and edge information fully reusable.
Best Paper Nomination: SAM
In April this year, Meta released an AI model called Segment Anything (SAM), which can generate masks for any object in an image or video. The technology shocked researchers in the computer vision field, with some even declaring that "CV no longer exists".
Now, this high-profile paper has received a best paper nomination.

Paper address: https://arxiv.org/abs/2304.02643
Institution: Meta AI
Previously, there were two approaches to segmentation. The first is interactive segmentation, which can segment objects of any class but requires a human to guide the method by iteratively refining the mask. The second is automatic segmentation, which can segment specific predefined object categories (such as cats or chairs) but requires large numbers of manually annotated examples for training (thousands or even tens of thousands of segmented cats, for instance). Neither approach provides a universal, fully automatic solution to segmentation.
SAM unifies these two approaches. It is a single model that can easily perform both interactive and automatic segmentation. Its promptable interface lets users accomplish a wide range of segmentation tasks simply by designing the right prompt for the model (clicks, boxes, text, and so on).
In short, these properties let SAM adapt to new tasks and domains, a flexibility that is unique in the field of image segmentation.
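As an illustration of the promptable interface, here is a minimal sketch using Meta's open-source segment_anything library; the checkpoint path, image path, and click coordinates are illustrative.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Download "sam_vit_h_4b8939.pth" from Meta's segment-anything
# repository before running; the local path here is illustrative.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)                 # compute the image embedding once

# Interactive-style prompt: a single foreground click at pixel (x=500, y=375).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),            # 1 = foreground, 0 = background
    multimask_output=True,                 # return several candidate masks
)
best_mask = masks[np.argmax(scores)]       # keep the highest-scoring mask
```

For fully automatic segmentation, the same checkpoint can instead be driven by the library's SamAutomaticMaskGenerator, which prompts the model with a grid of points.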
For more details, see this site's earlier report: CV no longer exists? Meta releases the "Segment Anything" AI model, and CV may be having its GPT-3 moment.
Best Student Paper
The best student paper was completed jointly by researchers from Cornell University, Google Research, and UC Berkeley; the first author is Qianqian Wang, a doctoral student at Cornell Tech. Together they proposed OmniMotion, a complete and globally consistent motion representation, along with a new test-time optimization method that performs accurate, dense motion estimation for every pixel in a video.

- Paper address: https://arxiv.org/abs/2306.05422
- Project homepage: https://omnimotion.github.io/
In computer vision there are two commonly used motion estimation methods: sparse feature tracking and dense optical flow. Both have drawbacks: sparse feature tracking cannot model the motion of all pixels, while dense optical flow cannot capture motion trajectories over long time spans.
OmniMotion, the new technique proposed in this work, represents a video with a quasi-3D canonical volume and tracks every pixel through bijections between each frame's local space and the canonical space. This representation not only guarantees global consistency and keeps tracking objects even when they are occluded, but also models any combination of camera and object motion. Experiments show that OmniMotion significantly outperforms existing state-of-the-art methods.
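To illustrate the core bijection idea (not the paper's implementation), here is a toy sketch in which each frame's invertible map is a simple affine transform instead of OmniMotion's learned invertible neural network; correspondences between any two frames are obtained by composing one map with the inverse of another through the shared canonical space.

```python
import numpy as np

class InvertibleMap:
    """Toy stand-in for a per-frame invertible map T_i into canonical space.

    The real method learns an invertible neural network per frame; a
    rotation-plus-translation keeps the bijection explicit and exact here.
    """
    def __init__(self, angle, shift):
        c, s = np.cos(angle), np.sin(angle)
        self.A = np.array([[c, -s], [s, c]])
        self.b = np.asarray(shift, dtype=float)

    def to_canonical(self, x):            # local frame -> canonical space
        return self.A @ x + self.b

    def from_canonical(self, u):          # canonical space -> local frame
        return np.linalg.solve(self.A, u - self.b)

def correspond(x_i, T_i, T_j):
    """Map a pixel from frame i to frame j via the shared canonical space:
    x_j = T_j^{-1}(T_i(x_i)). Because both maps are bijections, the tracks
    stay globally consistent across the whole video."""
    return T_j.from_canonical(T_i.to_canonical(x_i))

T0 = InvertibleMap(angle=0.00, shift=[0.0, 0.0])
T5 = InvertibleMap(angle=0.10, shift=[3.0, -1.0])
x0 = np.array([12.0, 7.0])                # a pixel in frame 0
x5 = correspond(x0, T0, T5)               # where it lands in frame 5
x0_back = correspond(x5, T5, T0)          # round-trips back to the start
```

Because every frame maps into one shared canonical volume, a point's trajectory can be read out in any frame, even for frames where the point was occluded.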

For more details, see this site's earlier report: Track every pixel anytime, anywhere: the occlusion-proof "track everything" video algorithm is here.
Of course, beyond these winners, many other outstanding ICCV papers this year deserve attention. The conference also released a preliminary list of 17 award candidate papers.
