It can also segment new objects that have never been seen before.
This is the capability of a new learning framework from DeepMind: Object discovery and representation networks (Odin for short).
Previous self-supervised learning (SSL) methods can describe a whole scene well, but they struggle to distinguish individual objects.
Now, the Odin method does it, and does it without any supervision.
Distinguishing individual objects in an image is not easy. How does Odin do it?
Its ability to separate the various objects in an image comes mainly from the self-reinforcing cycle at the core of the Odin learning framework.
Odin trains two networks that work together: an object discovery network and an object representation network.
The object discovery network takes a crop of the image as input. The crop covers most of the image area and is not augmented in any other way.
The network then clusters the feature map computed from this input and segments the image into objects according to the clustered features.
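A minimal sketch of this clustering step, assuming a PyTorch feature map; the function name, the use of plain k-means, and the number of segments are illustrative assumptions rather than details from the paper:

```python
import torch
import torch.nn.functional as F

def discover_masks(feature_map, num_segments=8, iters=10):
    """Cluster a (C, H, W) feature map into segment masks with simple k-means.

    Illustrative sketch of feature-space clustering, not DeepMind's released
    implementation; `num_segments` and the k-means routine are assumptions.
    """
    C, H, W = feature_map.shape
    feats = feature_map.permute(1, 2, 0).reshape(-1, C)   # (H*W, C), one vector per pixel
    feats = F.normalize(feats, dim=-1)

    # Initialise centroids from random pixels, then run Lloyd iterations.
    centroids = feats[torch.randperm(feats.shape[0])[:num_segments]]
    for _ in range(iters):
        assign = (feats @ centroids.T).argmax(dim=-1)     # nearest centroid per pixel
        for k in range(num_segments):
            members = feats[assign == k]
            if len(members) > 0:
                centroids[k] = F.normalize(members.mean(0), dim=-1)

    # One binary mask per discovered segment.
    masks = torch.stack([(assign == k).reshape(H, W) for k in range(num_segments)])
    return masks
```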
The input views of the object representation network are derived from the segmented image produced by the object discovery network.
These views are then randomly augmented with flipping, blurring, and point-wise color transformations.
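A sketch of this kind of view augmentation using torchvision; the specific transforms and their parameters are assumptions, not the values used by Odin:

```python
import torchvision.transforms as T

# Random preprocessing of the views: flipping, blurring, and point-wise color changes.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomApply([T.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0))], p=0.5),
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
])
```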
In this way, two sets of masks are obtained that describe the same underlying image content and differ only in how the views were cropped.
A contrastive loss then drives the two sets of masks toward features that better represent the objects in the image.
Specifically, contrastive detection trains the network to recognize the features of each target object while pushing them away from many "negative" features drawn from unrelated objects.
The similarity of the same object across the two views is maximized, the similarity between different objects is minimized, and the resulting features in turn yield better segmentations that separate the objects.
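A simplified sketch of an object-level contrastive loss of this kind, where features are average-pooled within each mask and the same object across the two views serves as the positive pair; the exact formulation is an assumption, not Odin's published loss:

```python
import torch
import torch.nn.functional as F

def mask_pooled_contrastive_loss(feats_a, feats_b, masks_a, masks_b, temperature=0.1):
    """Illustrative object-level contrastive loss.

    feats_*: (C, H, W) feature maps from the two augmented views.
    masks_*: (K, H, W) binary masks for the same K discovered objects in each view.
    Simplified sketch of the idea described above, not the exact Odin loss.
    """
    def pool(feats, masks):
        # Average-pool features inside each mask -> one vector per object.
        m = masks.float().flatten(1)                                   # (K, H*W)
        f = feats.flatten(1)                                           # (C, H*W)
        pooled = (m @ f.T) / m.sum(dim=1, keepdim=True).clamp(min=1)   # (K, C)
        return F.normalize(pooled, dim=-1)

    za, zb = pool(feats_a, masks_a), pool(feats_b, masks_b)
    logits = za @ zb.T / temperature              # (K, K) cross-view similarity matrix
    targets = torch.arange(za.shape[0])           # object k in view A matches object k in view B
    # Same object across views is the positive; all other objects act as negatives.
    return F.cross_entropy(logits, targets)
```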
At the same time, the object discovery network is periodically updated from the parameters of the object representation network.
The ultimate goal is to keep these object-level features roughly invariant across views, in other words, to separate the objects in the image.
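One common way to realise such a periodic update is a momentum (exponential moving average) copy of the representation network's weights, as sketched below; the momentum value and update schedule are assumptions, not details confirmed by the article:

```python
import torch

@torch.no_grad()
def update_discovery_network(discovery_net, representation_net, momentum=0.99):
    """Blend representation-network parameters into the discovery network.

    A momentum (EMA) update is one common way to realise the periodic update
    described above; the exact schedule and momentum value are assumptions.
    """
    for p_disc, p_repr in zip(discovery_net.parameters(), representation_net.parameters()):
        p_disc.mul_(momentum).add_(p_repr, alpha=1.0 - momentum)
```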
So what is the effect of the Odin learning framework?
Odin's transfer-learning performance on scene segmentation, without any prior knowledge of the objects, is also strong.
The model is first pre-trained with Odin on the ImageNet dataset and then evaluated on the COCO dataset as well as on PASCAL and Cityscapes semantic segmentation.
Methods that already know the target objects, that is, methods with prior knowledge, are normally significantly better at scene segmentation than methods without it.
Yet even without prior knowledge, Odin outperforms DetCon and ReLICv2, which do use it.
In addition, the Odin method applies not only to ResNet models but also to more complex models such as the Swin Transformer.
The numbers make Odin's advantages clear. How do those advantages show up in the segmented images themselves?
Compare the segmentations produced by Odin with those from a randomly initialized network (column 3) and an ImageNet-supervised network (column 4).
Columns 3 and 4 fail to delineate object boundaries clearly, or lack the consistency and locality of real-world objects; the segmentations generated by Odin are clearly better.
Reference link:
[1] https://twitter.com/DeepMind/status/1554467389290561541
[2] https://arxiv.org/abs/2203.08777