Yuncong Technology's large model has recently made further progress in the vision field. Its object detector, built on a visual foundation model, stood out from entries submitted by Microsoft Research (MSR), Shanghai Artificial Intelligence Laboratory, the Zhiyuan Artificial Intelligence Research Institute, and many other well-known companies and research institutions on COCO, the best-known benchmark in object detection, setting a new world record.
Yuncong's model reached a mean Average Precision (hereinafter mAP) of 0.662 on the COCO test set, ranking first on the leaderboard (see the figure below). On the validation set, it achieved an mAP of 0.656 with single-scale inference and 0.662 with multi-scale test-time augmentation (TTA), both world-leading results.
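For readers unfamiliar with how these numbers are produced, the sketch below shows the standard way COCO-style mAP is computed with the official pycocotools evaluator. The file paths and the detections JSON are illustrative assumptions, not artifacts from Yuncong's submission.

```python
# Minimal sketch of COCO mAP evaluation with pycocotools.
# Paths and the detection-results file are hypothetical placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations for the COCO validation split (assumed path).
coco_gt = COCO("annotations/instances_val2017.json")

# Detections in the standard COCO results format:
# [{"image_id": ..., "category_id": ..., "bbox": [x, y, w, h], "score": ...}, ...]
coco_dt = coco_gt.loadRes("detections_val2017.json")

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP@[0.50:0.95], AP50, AP75, etc.

# The headline figure (e.g. 0.656 single-scale) corresponds to stats[0]:
# AP averaged over IoU thresholds 0.50:0.05:0.95, all areas, maxDets=100.
print("mAP:", evaluator.stats[0])
```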
Combining big data with self-supervised learning to build core vision technology
Self-supervised pre-training on big data, exemplified by GPT, has produced remarkable breakthroughs in natural language processing (NLP). In vision, foundation-model training that combines big data with self-supervised learning has likewise made important progress.
On the one hand, a wide range of visual data helps the model learn common basic features. Yuncong's visual foundation model is trained on more than 2 billion samples, including large unlabeled data sets and multi-modal image-text data sets. The richness and diversity of this data enable the model to extract robust features, greatly reducing the complexity and development cost of downstream tasks.
On the other hand, self-supervised learning requires no manual annotation, making it feasible to train vision models on massive unlabeled data. Yuncong has made numerous improvements to its self-supervised learning algorithm to better suit fine-grained tasks such as detection and segmentation, as its strong results on COCO detection demonstrate.
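The article does not disclose Yuncong's specific pre-training objective, but the sketch below illustrates one widely used self-supervised technique: a SimCLR-style contrastive (NT-Xent) loss, where two augmented views of the same image must match each other against all other images in the batch. The encoder output here is a random placeholder standing in for a real vision backbone.

```python
# Self-contained sketch of a SimCLR-style contrastive loss (NT-Xent).
# Illustrative of self-supervised pre-training in general, not Yuncong's method.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """z1, z2: [N, D] embeddings of two augmented views of the same N images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # [2N, D]
    sim = z @ z.t() / temperature             # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))     # exclude self-similarity
    # The positive for sample i is the other view of the same image: i+n or i-n.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random tensors in place of backbone features for two views.
views1, views2 = torch.randn(32, 128), torch.randn(32, 128)
print(nt_xent_loss(views1, views2))
```

No labels appear anywhere in the loss; the supervisory signal comes entirely from the data augmentations, which is what makes training on massive unlabeled corpora possible.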
Open-vocabulary object detection and zero-shot detection capabilities significantly reduce R&D costs
Thanks to the strong performance of the visual foundation model, Yuncong's Congrong large model can be trained on large-scale image-text multi-modal data to support zero-shot detection of thousands of object categories, covering energy, transportation, manufacturing, and other industries.
Figure: performance of the large model's zero-shot capability on different data sets
Zero-shot learning mimics human reasoning: it uses past knowledge to infer what an unseen object should look like, giving the computer the ability to recognize new things.
How should we understand zero-shot? Suppose we know the body shapes of donkeys and horses, that tigers and hyenas are striped, and that pandas and penguins are black and white. If we then define a zebra as a horse-like animal with black-and-white stripes, we can pick out the zebras at a zoo by inference alone, without ever having seen a photo of one.
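The mechanism behind this is image-text matching. As a hedged illustration, the sketch below performs zero-shot recognition with a public image-text model (OpenAI's CLIP via Hugging Face transformers) rather than Yuncong's proprietary model; the image path is a hypothetical placeholder.

```python
# Zero-shot recognition via image-text similarity, using public CLIP weights.
# Illustrates the general mechanism only; not Yuncong's model or training data.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a zebra", "a horse", "a panda", "a tiger"]
image = Image.open("animal.jpg")  # hypothetical input photo

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image   # image-text similarity scores
probs = logits.softmax(dim=-1)              # shape [1, num_labels]

# There is no trained "zebra" classifier head; the image is matched against
# free-text descriptions, which is what makes the recognition zero-shot.
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

Because categories are expressed as text, adding a new class requires only writing a new description, not collecting labeled examples or retraining.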
Yuncong's visual foundation model shows strong generalization, greatly reducing the data requirements and development costs of downstream tasks. At the same time, zero-shot capability greatly improves training and development efficiency, making broad application and rapid deployment possible.