Based on information-theoretic calibration, CML makes multimodal machine learning more reliable

PHPz
Release: 2023-06-27 16:26:56

Multimodal machine learning has made impressive progress in a wide range of scenarios, yet the reliability of multimodal models has received little in-depth study. "Information is the elimination of uncertainty," and the original motivation of multimodal learning is consistent with this principle: adding modalities should make predictions more accurate and reliable. However, the paper "Calibrating Multimodal Learning," recently published at ICML 2023, finds that current multimodal learning methods violate this reliability assumption, and provides a detailed analysis and a corrective method.

  • Paper (arXiv): https://arxiv.org/abs/2306.01265
  • Code (GitHub): https://github.com/QingyangZhang/CML

Current multimodal classification methods produce unreliable confidence estimates: when some modalities are removed, the model may output higher confidence, which violates the basic information-theoretic principle that "information is the elimination of uncertainty." To address this problem, the paper proposes the Calibrating Multimodal Learning (CML) method, which can be deployed within different multimodal learning paradigms to improve the rationality and credibility of multimodal learning models.

This work points out that current multimodal learning methods suffer from unreliable prediction confidence: multimodal models tend to rely on only a subset of modalities when estimating confidence. In particular, the study found that the confidence estimated by current models can increase when certain modalities are corrupted. To address this unreasonable behavior, the authors propose an intuitive principle for multimodal learning: when a modality is removed, the model's prediction confidence should not increase. However, current models tend to trust and be dominated by a subset of modalities rather than weighing all modalities fairly, which in turn harms robustness, i.e., the model degrades easily when certain modalities are corrupted.
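
The violation described above can be checked directly. The following minimal sketch (a hypothetical helper, not taken from the paper's code) estimates how often a model's confidence goes up after one modality is dropped; under the principle stated above, this fraction should be close to zero.

```python
# Hypothetical diagnostic: how often does confidence INCREASE when a modality is removed?
import torch
import torch.nn.functional as F

@torch.no_grad()
def confidence_violation_rate(logits_full: torch.Tensor, logits_dropped: torch.Tensor) -> float:
    """logits_full:    [B, C] logits computed from all modalities
       logits_dropped: [B, C] logits computed with one modality removed"""
    conf_full = F.softmax(logits_full, dim=-1).max(dim=-1).values
    conf_drop = F.softmax(logits_dropped, dim=-1).max(dim=-1).values
    # Fraction of samples violating "removing a modality should not raise confidence".
    return (conf_drop > conf_full).float().mean().item()
```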

To address these problems, some existing approaches apply general-purpose uncertainty calibration methods such as Temperature Scaling or Bayesian learning. These methods can produce more accurate confidence estimates than standard training and inference. However, they only match the confidence of the final fused prediction to its accuracy, and do not explicitly consider the relationship between the amount of modal information and the confidence. As a result, they cannot fundamentally improve the credibility of a multimodal learning model.
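
For reference, Temperature Scaling rescales the logits of the fused prediction with a single temperature fitted on a validation set. The sketch below is a generic illustration of that baseline (the function and variable names are assumptions), not part of CML; note that it only calibrates the fused output and says nothing about how confidence should behave across modality subsets.

```python
# Generic Temperature Scaling sketch (standard post-hoc calibration, not CML).
import torch
import torch.nn.functional as F

def fit_temperature(val_logits: torch.Tensor, val_labels: torch.Tensor,
                    steps: int = 200, lr: float = 0.01) -> float:
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so the temperature stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(val_logits / log_t.exp(), val_labels).backward()
        opt.step()
    return log_t.exp().item()

# Usage sketch: T = fit_temperature(val_logits, val_labels)
#               calibrated_probs = F.softmax(test_logits / T, dim=-1)
```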

The authors propose a new regularization technique called Calibrating Multimodal Learning (CML). It enforces consistency between prediction confidence and information content by adding a penalty term to the training objective. The technique builds on a natural intuition: when a modality is removed, prediction confidence should decrease (or at least not increase), which inherently improves confidence calibration. Concretely, a simple regularization term forces the model to learn this intuitive ordering by penalizing samples whose prediction confidence increases when a modality is removed (the formal definition of the penalty term is given in the paper).

This regularization term acts as an additional loss: it imposes a penalty whenever removing modal information causes the prediction confidence to increase.
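
To make the idea concrete, here is a minimal PyTorch-style sketch of such a penalty. It is an illustration under assumptions, not the official implementation: the name `cml_regularizer` and the variables `logits_full` / `logits_subset` are hypothetical, and the model is assumed to produce logits both from the full modality set and from a subset with one modality removed.

```python
# Illustrative sketch of a CML-style penalty (hypothetical names, not the official API).
import torch
import torch.nn.functional as F

def cml_regularizer(logits_full: torch.Tensor, logits_subset: torch.Tensor) -> torch.Tensor:
    """Hinge-style penalty on samples whose confidence rises when a modality is removed.

    logits_full:   [B, C] logits computed from all modalities
    logits_subset: [B, C] logits computed with one modality removed
    """
    conf_full = F.softmax(logits_full, dim=-1).max(dim=-1).values
    conf_subset = F.softmax(logits_subset, dim=-1).max(dim=-1).values
    # Non-zero only when removing information increases confidence.
    return torch.clamp(conf_subset - conf_full, min=0).mean()

# Usage sketch: total_loss = task_loss + lam * cml_regularizer(logits_full, logits_subset)
```

In practice such a penalty would be averaged over sampled modality subsets during training and weighted against the task loss; see the official repository for the exact formulation.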

Experimental results show that CML regularization significantly improves the reliability of the prediction confidence produced by existing multimodal learning methods. In addition, CML improves classification accuracy and model robustness.

Multimodal machine learning has made significant progress across many scenarios, but the reliability of multimodal models remains an open problem. Through extensive empirical study, this paper finds that current multimodal classification methods suffer from unreliable prediction confidence and violate information-theoretic principles. To address this issue, the researchers propose the CML regularization technique, which can be flexibly plugged into existing models and improves confidence calibration, classification accuracy, and robustness. This new technique is expected to play an important role in future multimodal learning and to improve the reliability and practicality of machine learning systems.
