Speech recognition using OpenAI's Whisper model
Speech recognition is a field of artificial intelligence that enables computers to understand human speech and convert it into text. The technology is used in devices such as Alexa and in various chatbot applications, but its most common use is transcribing speech into transcripts or subtitles.
Recent state-of-the-art models such as wav2vec 2.0, Conformer, and HuBERT have significantly advanced the field of speech recognition. These models learn from raw audio without manually labeled data, which lets them efficiently exploit large datasets of unlabeled speech, and they have been scaled up to 1,000,000 hours of training data, well beyond the roughly 1,000 hours typical of academic supervised datasets. However, models pretrained in a supervised manner across multiple datasets and domains have been found to show better robustness and generalization to held-out datasets, so performing tasks such as speech recognition still requires fine-tuning, which limits their full potential. To address this problem, OpenAI developed Whisper, a model trained with large-scale weak supervision.
This article explains the dataset used for training, the model's architecture and training procedure, and how to use Whisper.
Introduction to the Whisper model
The dataset:
Whisper was trained on 680,000 hours of labeled audio data, including 117,000 hours of speech in 96 languages other than English and 125,000 hours of translation data from those languages into English. The training data is drawn from audio paired with transcripts found on the Internet; because many such transcripts are produced by other automatic speech recognition (ASR) systems rather than by humans, heuristics were applied to detect and remove machine-generated transcripts. An audio language detector, trained on VoxLingua107 (a collection of short speech clips extracted from YouTube videos and labeled according to the language of the video title and description, with additional filtering to remove false positives), was used to ensure the spoken audio matched the transcript language.
Model:
The model uses an encoder-decoder architecture.
Resampling: all audio is resampled to 16,000 Hz.
Feature extraction: an 80-channel log-Mel spectrogram representation is computed using 25 ms windows with a 10 ms stride (see the sketch after this list).
Feature normalization: the input is globally scaled to lie between -1 and 1, with approximately zero mean over the pre-training dataset.
Encoder/Decoder: both the encoder and the decoder are Transformers.
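This preprocessing maps directly onto utilities shipped with the whisper package; here is a minimal sketch (the file name sample.wav is a hypothetical placeholder):

import whisper

audio = whisper.load_audio("sample.wav")   # decodes and resamples to 16,000 Hz mono
audio = whisper.pad_or_trim(audio)         # pads/trims to a 30-second window
mel = whisper.log_mel_spectrogram(audio)   # 80-channel log-Mel features
print(mel.shape)                           # torch.Size([80, 3000]): 80 Mel channels x 3000 10-ms frames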
Procedure of the encoder:
The encoder first processes the input representation with a stem of two convolutional layers (filter width 3), using the GELU activation function.
The second convolutional layer has a stride of 2, halving the time resolution.
Sinusoidal position embeddings are then added to the output of the stem, and the encoder Transformer blocks are applied.
The Transformer uses pre-activation residual blocks, and the encoder output is normalized with a final layer-normalization layer.
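As an illustration only (not Whisper's actual code), here is a PyTorch sketch of such a stem; the width of 384 is an assumption borrowed from the tiny model:

import torch
import torch.nn as nn
import torch.nn.functional as F

n_mels, d_model = 80, 384                   # 384 is an assumed width for illustration
conv1 = nn.Conv1d(n_mels, d_model, kernel_size=3, padding=1)             # filter width 3
conv2 = nn.Conv1d(d_model, d_model, kernel_size=3, stride=2, padding=1)  # stride 2

mel = torch.randn(1, n_mels, 3000)          # (batch, mel channels, frames)
x = F.gelu(conv1(mel))
x = F.gelu(conv2(x))                        # time axis halved: (1, 384, 1500)
x = x.permute(0, 2, 1)                      # (batch, time, width) for the Transformer blocks
# sinusoidal position embeddings are added to x before the Transformer blocks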
Model block diagram:
Decoding process:
The decoder uses learned position embeddings and tied input-output token representations (the token embedding matrix is shared with the output projection).
The encoder and decoder have the same width and the same number of Transformer blocks.
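Weight tying can be sketched in a few lines of PyTorch (an illustration under assumed dimensions, not Whisper's implementation; 51,865 is the multilingual Whisper vocabulary size):

import torch
import torch.nn as nn

vocab_size, d_model = 51865, 384
token_embedding = nn.Embedding(vocab_size, d_model)   # embeds input tokens

hidden = torch.randn(1, 10, d_model)                  # decoder hidden states (batch, time, width)
logits = hidden @ token_embedding.weight.T            # same matrix reused as the output projection
print(logits.shape)                                   # torch.Size([1, 10, 51865])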
Training
To study the model's scaling properties, a suite of models of different sizes was trained.
The models are trained with FP16, dynamic loss scaling, and data parallelism.
AdamW and gradient norm clipping are used, with a learning rate that decays linearly to zero after a warmup over the first 2048 updates.
A batch size of 256 is used, and the models are trained for 2^20 updates, which corresponds to between two and three passes over the dataset.
Since the models see the data for only a few epochs, overfitting is not a significant concern, and no data augmentation or regularization techniques were used; training instead relies on the diversity within the large dataset to promote generalization and robustness.
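A minimal PyTorch sketch of this optimization recipe (the model, peak learning rate, and clipping norm below are placeholders, not Whisper's published values):

import torch

net = torch.nn.Linear(10, 10)               # placeholder model
total_steps, warmup_steps = 2 ** 20, 2048

optimizer = torch.optim.AdamW(net.parameters(), lr=1e-3)  # placeholder peak LR

def lr_lambda(step):
    if step < warmup_steps:
        return step / warmup_steps          # linear warmup
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))  # linear decay to zero

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Inside the training loop, after loss.backward():
# torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=1.0)  # placeholder norm
# optimizer.step(); scheduler.step(); optimizer.zero_grad()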
Whisper demonstrates strong accuracy on existing benchmark datasets and has been compared against other state-of-the-art models.
Advantages:
- Whisper is trained on large amounts of real-world data, including the kinds of data used by other models, under weak supervision.
- The model's accuracy was benchmarked against human listeners to evaluate its performance.
- It detects unvoiced regions and applies NLP techniques to punctuate the transcript correctly.
- The model is scalable and can extract transcripts from an audio signal without splitting it into chunks or batches, which reduces the risk of missed sounds.
- The model achieves higher accuracy across a variety of datasets.
Comparative results of Whisper across different datasets: compared with wav2vec 2.0, it achieves the lowest word error rates reported to date.
Whisper was not evaluated on the TIMIT dataset, so to check its word error rate we will demonstrate here how to evaluate Whisper on TIMIT ourselves; in other words, how to use Whisper to build our own speech recognition application.
Using Whisper Model for Speech Recognition
The TIMIT corpus of read speech is designed for acoustic-phonetic research and for the development and evaluation of automatic speech recognition systems. It comprises recordings of 630 speakers covering the eight major dialects of American English, each reading ten phonetically rich sentences. The corpus includes time-aligned orthographic, phonetic, and word transcriptions, as well as a 16-bit, 16 kHz speech waveform file for each utterance. It was developed by the Massachusetts Institute of Technology (MIT), SRI International (SRI), and Texas Instruments (TI). The TIMIT transcriptions have been manually verified, with test and training subsets specified to balance phonetic and dialect coverage.
Installation:
!pip install git+https://github.com/openai/whisper.git
!pip install jiwer
!pip install datasets==1.18.3
The first command installs Whisper and all of its dependencies. jiwer is used to compute the word error rate, and datasets, provided by Hugging Face, is used to download the TIMIT dataset.
Import libraries
import whisper
from pytube import YouTube
from glob import glob
import os
import pandas as pd
from tqdm.notebook import tqdm
Load the TIMIT dataset
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
Calculating the word error rate for different model sizes
Because we want to be able to filter English audio from non-English audio, we use a multilingual model here rather than one of the English-only variants.
The TIMIT dataset, however, is entirely in English, so the same language detection and recognition process applies throughout. In addition, TIMIT already comes split into training and test sets, which we can use directly.
To use Whisper, we first need to understand the parameters, sizes, and speeds of the different models.
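For reference, the openai/whisper repository lists the available checkpoints as follows (approximate figures from its README):

Model    Parameters  English-only  Multilingual  Required VRAM  Relative speed
tiny     39 M        tiny.en       tiny          ~1 GB          ~32x
base     74 M        base.en       base          ~1 GB          ~16x
small    244 M       small.en      small         ~2 GB          ~6x
medium   769 M       medium.en     medium        ~5 GB          ~2x
large    1550 M      N/A           large         ~10 GB         1x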
Load model
model = whisper.load_model('tiny')
tiny can be replaced with any model name from the table above.
Define a language detection function
def lan_detector(audio_file):
    print('reading the audio file')
    audio = whisper.load_audio(audio_file)
    audio = whisper.pad_or_trim(audio)
    mel = whisper.log_mel_spectrogram(audio).to(model.device)
    _, probs = model.detect_language(mel)       # probs maps language codes to probabilities
    return max(probs, key=probs.get) == 'en'    # True if English is the most likely language
Function to convert speech to text
def speech2text(audio_file):
    text = model.transcribe(audio_file)
    return text["text"]
Run the above functions with each model size to obtain the word error rates on the TIMIT training and test sets.
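A minimal sketch of such an evaluation loop using jiwer (the 'file' and 'text' field names follow the timit_asr dataset in datasets 1.18.3 and should be treated as assumptions if your version differs):

import jiwer

hypotheses, references = [], []
for sample in timit['test']:
    if lan_detector(sample['file']):            # TIMIT is all English, so this should always pass
        hypotheses.append(speech2text(sample['file']))
        references.append(sample['text'])

print(f"WER: {jiwer.wer(references, hypotheses):.3f}")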
Transcribing speech from YouTube
Compared with other speech recognition models, Whisper can not only recognize speech but also interpret a speaker's intonation and insert appropriate punctuation marks. Below we test this using a YouTube video.
Here we need the pytube package, which makes it easy to download a video and extract its audio.
def youtube_audio(link):
    youtube_1 = YouTube(link)
    videos = youtube_1.streams.filter(only_audio=True)
    name = str(link.split('=')[-1])             # use the video ID as the download directory name
    out_file = videos[0].download(name)         # download the first audio-only stream
    link = name.split('=')[-1]
    new_filename = link + ".wav"
    print(new_filename)
    os.rename(out_file, new_filename)           # rename the downloaded file to <video id>.wav
    print(name)
    return new_filename, link
After obtaining the audio file, we can apply the speech2text function above to extract the text from it.
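For example (the URL below is a hypothetical placeholder):

audio_file, video_id = youtube_audio("https://www.youtube.com/watch?v=VIDEO_ID")
print(speech2text(audio_file))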
Summary
The full code for this article is available here:
https://drive.google.com/file/d/1FejhGseX_S1Ig_Y5nIPn1OcHN8DLFGIO/view
There are many more operations that can be performed with Whisper; you can try them yourself based on the code in this article.