Paper 1: Quantum machine learning beyond kernel methods
Abstract: In this article, a research team from the University of Innsbruck, Austria, presents a constructive framework that captures all standard models based on parameterized quantum circuits: the linear quantum model.
The researchers show how tools from quantum information theory can be used to efficiently map data re-uploading circuits into the simpler picture of a linear model in quantum Hilbert space. Furthermore, the experimentally relevant resource requirements of these models are analyzed in terms of the number of qubits and the amount of training data needed. Recent results based on classical machine learning show that linear quantum models must use many more qubits than data re-uploading models to solve certain learning tasks, while kernel methods additionally require many more data points. The results provide a more comprehensive understanding of quantum machine learning models, as well as insights into the compatibility of the different models with NISQ constraints.
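The trade-off between the explicit "linear model" picture and the kernel picture can be illustrated with a purely classical sketch (this is an analogy, not the paper's quantum construction; the frequencies `omega` and the Fourier-style feature map are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# A model that is nonlinear in the raw input x can be written as a *linear*
# model f(x) = <w, phi(x)> once an explicit feature map phi is chosen,
# and equivalently as a kernel expansion f(x) = sum_i a_i * k(x_i, x).
omega = rng.normal(size=8)  # hypothetical frequencies

def phi(x):
    """Explicit feature map: the 'linear model in feature space' picture."""
    return np.concatenate([np.cos(omega * x), np.sin(omega * x)])

def k(x1, x2):
    """The kernel induced by phi: k(x1, x2) = <phi(x1), phi(x2)>."""
    return phi(x1) @ phi(x2)
```

The two pictures describe the same hypothesis class but pay different costs: the explicit-feature picture pays in feature dimension (qubits, in the quantum setting), while the kernel picture pays in the number of data points whose pairwise kernel values must be evaluated, mirroring the resource separation discussed above.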
## Quantum machine learning models studied in this work.
Recommended: Quantum machine learning beyond kernel methods, a unified framework for quantum learning models.
Paper 2: Wearable in-sensor reservoir computing using optoelectronic polymers with through-space charge-transport characteristics for multi-task learning
Abstract: In-sensor multi-task learning is not only a key advantage of biological vision but also a major goal of artificial intelligence. However, conventional silicon vision chips incur large time and energy overheads. In addition, training traditional deep learning models is neither scalable nor affordable on edge devices. In this article,
the research team from the Chinese Academy of Sciences and the University of Hong Kong proposes a materials-algorithm co-design that emulates the learning paradigm of the human retina with low overhead. Based on the bottlebrush-shaped semiconductor p-NDI, which offers efficient exciton dissociation and through-space charge transport, they develop a wearable, transistor-based dynamic in-sensor reservoir computing system that exhibits excellent separability across different tasks, fading memory, and echo-state characteristics. Combined with a "readout function" implemented on a memristive organic diode, the RC system can recognize handwritten letters and digits and classify various items of clothing, with accuracies of 98.04%, 88.18%, and 91.76%, respectively (higher than all reported organic semiconductors).
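The reservoir computing paradigm described above (fading memory, echo-state property, trained linear readout) can be sketched with a minimal classical echo state network. This is an illustrative software analogue, not the authors' optoelectronic device; all dimensions, sequences, and the ridge-regression readout are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 16 input channels, 200 reservoir nodes.
n_in, n_res = 16, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
# Rescale spectral radius below 1: this gives fading memory / echo-state property.
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def reservoir_states(inputs):
    """Drive the reservoir with an input sequence; return the final state."""
    x = np.zeros(n_res)
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)  # fixed nonlinear dynamics, never trained
    return x

def train_readout(states, targets, ridge=1e-3):
    """'Readout function': only this linear map is trained (ridge regression),
    analogous in spirit to the memristive-diode readout in the paper."""
    S = np.asarray(states)
    return np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ targets)

# Toy usage: fit a binary readout on synthetic input sequences.
seqs = [rng.normal(size=(10, n_in)) for _ in range(40)]
labels = np.array([i % 2 for i in range(40)], dtype=float)
S = [reservoir_states(s) for s in seqs]
w = train_readout(S, labels)
preds = (np.asarray(S) @ w > 0.5).astype(float)
```

The design point the paper exploits is visible here: the nonlinear dynamics (the sensor itself, in their device) are fixed, so only the cheap linear readout needs training, which is what makes multi-task learning affordable at the edge.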
Comparison of the photocurrent response of conventional semiconductors and p-NDI, and detailed semiconductor design principles of the RC system within the sensor.
Recommendation: Low energy and time cost: the Chinese Academy of Sciences & University of Hong Kong team uses a new approach to perform multi-task learning with in-sensor reservoir computing in wearable devices.
Paper 3: Dash: Semi-Supervised Learning with Dynamic Thresholding
Abstract: This paper proposes using a dynamic threshold to filter unlabeled samples for semi-supervised learning (SSL). The method reworks the SSL training framework, improving the strategy for selecting unlabeled samples during training: a dynamically changing threshold selects more effective unlabeled samples for training. Dash is a general strategy that can be easily integrated with existing semi-supervised learning methods.
Experimentally, the authors thoroughly verify its effectiveness on standard datasets such as CIFAR-10, CIFAR-100, STL-10, and SVHN. On the theory side, the paper proves convergence properties of the Dash algorithm from the perspective of non-convex optimization.
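The dynamic-thresholding idea can be sketched as follows, assuming a geometrically decaying loss threshold that starts from the warm-up supervised loss (the constants `C` and `gamma` below are illustrative placeholders, not the paper's tuned values):

```python
import numpy as np

def dash_threshold(rho_0, t, C=1.0001, gamma=1.1):
    """Hedged sketch of a Dash-style schedule: the threshold shrinks
    geometrically with the epoch index t, starting from rho_0 (e.g. the
    average supervised loss after warm-up)."""
    return C * gamma ** (-t) * rho_0

def select_unlabeled(losses, rho_t):
    """Keep only unlabeled samples whose pseudo-label loss is below rho_t,
    so harder (likely mislabeled) samples are filtered out early on."""
    losses = np.asarray(losses)
    return np.nonzero(losses < rho_t)[0]
```

As training progresses the threshold tightens, so the set of retained unlabeled samples grows cleaner rather than staying fixed, which is the key difference from a static confidence cutoff such as FixMatch's.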
## FixMatch training framework.
Recommendation: Damo Academy's open-source semi-supervised learning framework Dash sets new SOTA results on multiple benchmarks.
Paper 4: StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis
Abstract: Are diffusion models the best at text-to-image generation? Not necessarily: results from StyleGAN-T, newly released by NVIDIA and others, show that GANs are still competitive. StyleGAN-T takes only 0.1 seconds to generate a 512×512 image.
Recommendation: Is the GAN back? NVIDIA trained StyleGAN-T on 64 A100 GPUs, and it outperforms diffusion models.

Paper 5: Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer
To this end, Tencent Youtu Lab, together with Tsinghua University and Shenzhen University, proposes MKT, a framework based on multi-modal knowledge transfer. It leverages the powerful image-text matching capability of vision-language pre-trained models to retain the key visual consistency information in image classification and achieve open-vocabulary classification in multi-label scenarios. This work has been accepted as an AAAI 2023 Oral.
Comparison of ML-ZSL and MKT methods.
Recommended: AAAI 2023 Oral | How to recognize unseen labels? A multi-modal knowledge transfer framework achieves a new SOTA.
Paper 6: ChatGPT is not all you need. A State of the Art Review of large Generative AI models
Abstract: Over the past two years, a large number of large-scale generative models have emerged in the AI field, such as ChatGPT and Stable Diffusion. These models can perform tasks such as general question answering and automatic creation of artistic images, and they are revolutionizing many fields.
In a recent review paper, researchers from Comillas Pontifical University in Spain attempt to concisely describe the impact of generative AI on many current models and to classify the major recently released generative AI models.
## Classification diagram.
Recommendation: ChatGPT is not all you need: a review of 9 types of generative AI models from 6 major companies.

Paper 7: ClimaX: A foundation model for weather and climate
Authors: Tung Nguyen et al.
Abstract: The Microsoft Autonomous Systems and Robotics Research group and Microsoft Research AI4Science have developed ClimaX, a flexible and scalable deep learning model for weather and climate science that can be trained on heterogeneous datasets spanning different variables, spatio-temporal coverage, and physical groundings. ClimaX extends the Transformer architecture with novel encoding and aggregation blocks that make efficient use of available compute while maintaining generality. ClimaX is pretrained with a self-supervised learning objective on climate datasets derived from CMIP6. The pretrained ClimaX can then be fine-tuned for a wide range of climate and weather tasks, including those involving atmospheric variables and spatio-temporal scales not seen during pretraining.
ClimaX architecture used during pre-training
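The encoding-and-aggregation idea behind handling heterogeneous variables can be roughly sketched in NumPy: each variable gets its own patch embedding, and a learned query cross-attends over the variable axis so the token sequence length is independent of how many variables a dataset provides. All shapes and weights below are hypothetical, not ClimaX's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_vars, n_patch = 32, 4, 49  # hypothetical embedding dim, variables, patches

# Variable tokenization: each climate variable has its own patch embedding
# (here: flattened 4x4 patches of 16 pixels, projected to dimension d).
patch_embed = [rng.normal(0, 0.02, (16, d)) for _ in range(n_vars)]
tokens = np.stack([rng.normal(size=(n_patch, 16)) @ W
                   for W in patch_embed])          # (n_vars, n_patch, d)

# Variable aggregation: a learned query attends over the variable axis,
# collapsing (n_vars, n_patch, d) -> (n_patch, d) regardless of n_vars.
q = rng.normal(0, 0.02, d)
scores = tokens @ q / np.sqrt(d)                   # (n_vars, n_patch)
attn = np.exp(scores) / np.exp(scores).sum(axis=0) # softmax over variables
agg = (attn[..., None] * tokens).sum(axis=0)       # (n_patch, d)
```

Because the aggregated sequence length no longer depends on the number of input variables, the same Transformer backbone can be pretrained and fine-tuned across datasets with different variable sets, which is the flexibility the abstract describes.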
Recommended: The Microsoft team releases ClimaX, the first AI-based foundation model for weather and climate.