Table of Contents
1. Background introduction
2. Representation learning MSMC VQ-VAE
3. Acoustic modeling MSMC-TTS
4. Experimental results
5. Summary
6. Author information

How to build high-performance speech synthesis systems with compact speech representations

Apr 13, 2023, 11:10 AM
Tags: deep learning, speech synthesis

The Xiaohongshu Multimedia Intelligent Algorithm Team and the Chinese University of Hong Kong have jointly proposed MSMC-TTS, the first high-performance speech synthesis scheme based on a multi-stage multi-codebook (MSMC) compact speech representation. A feature analyzer based on the vector-quantized variational autoencoder (VQ-VAE) encodes acoustic features stage by stage with several codebooks, producing a set of latent sequences with different temporal resolutions. These latent sequences can be predicted from text by a multi-stage predictor and converted into the target audio by a neural vocoder. Compared with a Mel-spectrogram-based FastSpeech baseline system, this scheme achieves significant improvements in sound quality and naturalness. The work is summarized in the paper "A Multi-Stage Multi-Codebook VQ-VAE Approach to High-Performance Neural TTS", accepted at INTERSPEECH 2022.

1. Background introduction

Text-to-Speech (TTS) is a technology that converts text into speech. It is widely used in video dubbing, audio and video content creation, intelligent human-computer interaction, and other products. The back-end acoustic modeling pipeline of a mainstream speech synthesis system usually comprises three parts: a feature extractor, an acoustic model, and a vocoder. TTS usually performs acoustic modeling on acoustic features obtained through signal processing (such as the Mel spectrogram). However, limited by the fitting ability of the model, the predicted acoustic features differ in distribution from the real data. This makes it difficult for a vocoder trained on real data to generate high-quality audio from predicted features.


TTS system framework diagram


To address this problem, the academic community has adopted more complex model structures and more novel generative algorithms to reduce prediction errors and distribution differences. This work takes a different approach, starting from the compactness of the speech representation itself. For speech synthesis: 1) good compactness of the acoustic features ensures more accurate model predictions and more robust waveform generation; 2) good completeness of the acoustic features ensures better reconstruction of the speech signal. Based on these two considerations, this paper proposes to use the vector-quantized variational autoencoder (VQ-VAE) to mine a better compact representation from the target data.

2. Representation learning MSMC VQ-VAE

A VQ-VAE consists of an encoder and a decoder. The encoder processes the input acoustic feature sequence into a latent sequence and quantizes it with the corresponding codebook. The decoder restores the quantized sequence to the original acoustic feature sequence. As a discretized representation, the quantized sequence offers better compactness (fewer feature parameters). The stronger the quantization, i.e., the smaller the codebook capacity, the more compact the features. However, this also compresses information and degrades feature completeness. To ensure sufficient completeness, more codewords are generally used, but as codebook capacity increases, the amount of data and the number of training steps required to update the codebook grow exponentially, making it difficult for VQ-VAE to enhance representation completeness simply by enlarging the codebook. To address this problem, this paper proposes the multi-head vector quantization (MHVQ) method.


VQ-VAE model structure diagram
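To make the quantization step concrete, here is a minimal PyTorch-style sketch of nearest-codeword lookup with the straight-through gradient trick commonly used to train VQ-VAEs. This is our own illustration, not the authors' code; the tensor shapes and names are assumptions.

```python
import torch

def vector_quantize(z, codebook):
    """Map each latent vector to its nearest codeword (straight-through VQ).

    z:        (batch, time, dim) encoder outputs
    codebook: (num_codewords, dim) learnable codeword embeddings
    """
    # Euclidean distance from every latent vector to every codeword
    cb = codebook.unsqueeze(0).expand(z.size(0), -1, -1)
    dists = torch.cdist(z, cb)                 # (batch, time, num_codewords)
    indices = dists.argmin(dim=-1)             # (batch, time) discrete codes
    z_q = codebook[indices]                    # (batch, time, dim) quantized
    # Straight-through estimator: gradients pass to the encoder as if
    # quantization were the identity function
    z_q = z + (z_q - z).detach()
    return z_q, indices
```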


MHVQ splits a single codebook into several sub-codebooks along the feature dimension. During quantization, each input vector is cut equally into several sub-vectors, each is quantized with its corresponding sub-codebook, and the results are spliced back into an output vector. In this way, codebook utilization and representation capacity can be improved without increasing the number of codebook parameters. For example, to halve the compression rate with a single codebook, the number of codewords must be squared; with MHVQ, the same compression rate is achieved simply by splitting the codebook into two heads. Therefore, this method regulates the completeness of the quantized representation more effectively.


MHVQ example diagram
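A minimal sketch of the split-quantize-splice flow, reusing the vector_quantize helper from the previous sketch (the even split along the feature dimension follows the description above; everything else is an illustrative assumption):

```python
import torch

def multi_head_quantize(z, sub_codebooks):
    """Multi-head vector quantization (MHVQ) sketch.

    z:             (batch, time, dim) latent vectors
    sub_codebooks: list of H tensors, each (num_codewords, dim // H)
    """
    heads = len(sub_codebooks)
    sub_vectors = z.chunk(heads, dim=-1)       # split along the feature dim
    quantized = [
        vector_quantize(sub_z, cb)[0]          # quantize each head separately
        for sub_z, cb in zip(sub_vectors, sub_codebooks)
    ]
    return torch.cat(quantized, dim=-1)        # splice heads back together
```

With H sub-codebooks of N codewords each, a single frame can take N^H distinct values, which is why splitting one codebook into two heads matches the capacity gain of squaring the codeword count.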


In addition, when the speech sequence is quantized, the various kinds of information contained in the speech features are lost to varying degrees. This information differs in temporal granularity: timbre and pronunciation style are coarse-grained, while pitch and pronunciation details are fine-grained. Over-compressing the information at any time scale degrades speech quality to some extent. To alleviate this problem, this work proposes a multi-time-scale speech modeling method. As shown in the figure, several encoders encode the acoustic feature sequence stage by stage to different time scales; the decoder then quantizes and decodes layer by layer, yielding several quantized sequences with different temporal resolutions. The representation composed of this set of sequences is the multi-stage multi-codebook representation proposed in this work.

Multi-stage modeling example diagram
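The staged encode-then-quantize flow might look like the following sketch (a simplification under our own assumptions: average pooling for temporal downsampling, MHVQ at every stage, and quantization applied directly at each encoder output; the paper's actual modules may differ):

```python
import torch
import torch.nn.functional as F

def msmc_encode(features, encoders, sub_codebooks_per_stage, factors):
    """Encode acoustic features into multi-stage multi-codebook sequences.

    features: (batch, time, dim) acoustic feature sequence
    encoders: list of encoder modules, applied fine-to-coarse
    factors:  temporal downsampling factor applied before each stage
    Returns one quantized sequence per time resolution.
    """
    quantized_stages = []
    h = features
    for encoder, codebooks, factor in zip(encoders, sub_codebooks_per_stage, factors):
        # Downsample along time to reach this stage's resolution
        h = F.avg_pool1d(h.transpose(1, 2), kernel_size=factor).transpose(1, 2)
        z = encoder(h)
        z_q = multi_head_quantize(z, codebooks)  # MHVQ at this time scale
        quantized_stages.append(z_q)
        h = z_q                                  # feed the next, coarser stage
    return quantized_stages
```

Coarser stages retain slowly varying information such as timbre and style, while finer stages keep pitch and articulation detail, matching the motivation above.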


3. Acoustic modeling MSMC-TTS

For the multi-stage multi-codebook representation (MSMCR), this paper proposes a corresponding TTS system, the MSMC-TTS system. The system comprises three parts: analysis, synthesis, and prediction. In training, the system first trains the analysis module: the audio in the training set is converted by signal processing into high-completeness acoustic features (the Mel spectrogram in this work), which are used to train the MSMC-VQ-VAE-based feature analyzer. When that training ends, the features are converted into the corresponding MSMCR, which is then used to train the acoustic model and the neural vocoder. At decoding time, the system uses the acoustic model to predict the MSMCR from text, and then uses the neural vocoder to produce the target audio.


MSMC-TTS system framework diagram
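The overall recipe can be outlined as follows. Every helper name here (extract_mel, train_analyzer, and so on) is hypothetical shorthand for the steps just described, not an actual API:

```python
def train_msmc_tts(waveforms, texts):
    """Outline of the MSMC-TTS training steps (all helpers are hypothetical)."""
    # 1) Train the feature analyzer (MSMC-VQ-VAE) on high-completeness
    #    acoustic features obtained by signal processing
    mels = [extract_mel(w) for w in waveforms]
    analyzer = train_analyzer(mels)

    # 2) Convert the training set into compact MSMCR targets
    targets = [analyzer.encode(m) for m in mels]

    # 3) Train the acoustic model (text -> MSMCR) and the neural
    #    vocoder (MSMCR -> waveform) on those targets
    acoustic_model = train_acoustic_model(texts, targets)
    vocoder = train_vocoder(targets, waveforms)
    return acoustic_model, vocoder
```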


This work also proposes a multi-stage predictor to fit MSMCR modeling. The model is implemented on the basis of FastSpeech but differs on the decoder side. It first encodes the text and upsamples the encoding according to predicted duration information. The sequence is then downsampled to each time resolution of the MSMCR. These sequences are decoded and quantized stage by stage, from low resolution to high resolution, by different decoders. At the same time, each low-resolution quantized sequence is fed to the next-stage decoder to assist prediction. Finally, the predicted MSMCR is fed into the neural vocoder to generate the target audio.


Multi-stage predictor structure diagram
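A sketch of that coarse-to-fine decoding loop, under our own assumptions (repeat-based temporal upsampling, concatenation for conditioning, and the multi_head_quantize helper from earlier; this is not the authors' exact architecture):

```python
import torch
import torch.nn.functional as F

def predict_msmcr(text_encoding, decoders, sub_codebooks_per_stage, factors):
    """Predict the MSMCR stage by stage, from coarse to fine.

    text_encoding: (batch, frames, dim) text encoding upsampled by duration
    decoders:      list of decoder modules, ordered coarse to fine
    factors:       downsampling factor of each stage relative to the frame rate
    """
    predictions = []
    prev_q = None
    for decoder, codebooks, factor in zip(decoders, sub_codebooks_per_stage, factors):
        # Bring the text conditioning to this stage's temporal resolution
        cond = F.avg_pool1d(text_encoding.transpose(1, 2), kernel_size=factor)
        cond = cond.transpose(1, 2)
        if prev_q is not None:
            # Upsample the coarser stage's quantized output and feed it in
            up = prev_q.repeat_interleave(cond.size(1) // prev_q.size(1), dim=1)
            cond = torch.cat([cond, up], dim=-1)
        z = decoder(cond)
        prev_q = multi_head_quantize(z, codebooks)  # quantize this stage
        predictions.append(prev_q)
    return predictions  # one sequence per time resolution, coarse to fine
```

Feeding each quantized coarse sequence into the next decoder is what keeps the fine stages consistent with the coarse structure already predicted.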


When training and running the multi-stage predictor, this work chooses to predict the target representation directly in continuous space. This better accounts for the distance relationships between vectors and codewords in a linear continuous space. In addition to the MSE loss commonly used in TTS modeling, the training criterion adds a triplet loss that forces the predicted vector away from non-target codewords and closer to the target codeword. Combining the two loss terms enables the model to better predict the target codewords.
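The combined criterion might be sketched as follows. This is our own formulation of an MSE-plus-triplet loss; the margin value and the choice of the nearest non-target codeword as the negative are assumptions, not details from the paper:

```python
import torch
import torch.nn.functional as F

def predictor_loss(pred, target, codebook, margin=1.0):
    """MSE plus a triplet-style term: pull predictions toward the target
    codeword, push them away from the nearest non-target codeword.

    pred, target: (batch, time, dim); target is the quantized ground truth
    codebook:     (num_codewords, dim)
    margin:       triplet margin (an assumed value)
    """
    mse = F.mse_loss(pred, target)

    # Distance from each predicted vector to every codeword
    cb = codebook.unsqueeze(0).expand(pred.size(0), -1, -1)
    dists = torch.cdist(pred, cb)                        # (batch, time, num_codewords)

    # Positive: distance from the prediction to its target codeword
    d_pos = (pred - target).norm(dim=-1)                 # (batch, time)

    # Negative: distance to the nearest codeword that is NOT the target
    target_idx = torch.cdist(target, cb).argmin(dim=-1)  # (batch, time)
    mask = F.one_hot(target_idx, codebook.size(0)).bool()
    d_neg = dists.masked_fill(mask, float("inf")).min(dim=-1).values

    triplet = F.relu(d_pos - d_neg + margin).mean()
    return mse + triplet
```

The MSE term handles regression in continuous space, while the triplet term shapes predictions relative to the codebook so that the subsequent nearest-codeword lookup lands on the target codeword.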

4. Experimental results

This work was conducted on Nancy, a public single-speaker English dataset (Blizzard Challenge 2011). We organized a mean opinion score (MOS) test to evaluate MSMC-TTS synthesis quality. With the original recordings at 4.50 points, MSMC-TTS scored 4.41, while the baseline system Mel-FS (Mel-spectrogram-based FastSpeech) scored 3.62. After fine-tuning the baseline's vocoder to match Mel-FS output features, its score rose to 3.69. This comparison demonstrates the significant improvement the proposed method brings to the TTS system.


In addition, we examined the impact of modeling complexity on TTS performance. As the number of model parameters decreases exponentially from configuration M1 to M3, the Mel-FS synthesis score drops to 1.86 points. In contrast, for MSMC-TTS the reduction in parameters has no significant impact on synthesis quality: with an acoustic model of only 3.12 MB of parameters, MSMC-TTS still maintains a MOS of 4.47. This demonstrates both the low modeling-complexity requirements of compact-feature-based MSMC-TTS and the method's potential for lightweight TTS systems.



Finally, we compared MSMC-TTS systems built on different MSMCRs to explore the impact of MHVQ and multi-stage modeling on TTS. The V1 system uses a single-stage, single-codebook representation; V2 adds 4-head vector quantization to V1; and V3 adds two-stage modeling to V2. The representation used by V1 has the highest compression ratio, but it exhibits the lowest completeness in analysis-synthesis experiments and the worst synthesis quality in TTS experiments. Once MHVQ improves completeness, the V2 system also improves markedly in TTS quality. Although the multi-stage representation used by V3 shows no further gain in completeness, it yields the best TTS results, with significant improvements in both prosodic naturalness and audio quality. This further shows that multi-stage modeling and multi-scale information retention are of great significance in MSMC-TTS.


5. Summary

This work proposes MSMC-TTS, a new high-performance TTS modeling method, from the perspective of compact speech representations. The system extracts multi-stage multi-codebook representations from audio in place of traditional acoustic features. Input text is converted by a multi-stage predictor into this speech representation, composed of multiple sequences at different temporal resolutions, and converted into the target speech signal by a neural vocoder. Experimental results show that compared with the mainstream Mel-spectrogram-based FastSpeech system, this system achieves better synthesis quality with lower modeling-complexity requirements.

6. Author information

Guo Haohan: intern on the Xiaohongshu Multimedia Intelligent Algorithm Team. He received his bachelor's degree from Northwestern Polytechnical University, where he studied in the ASLP laboratory under Professor Xie Lei. He is currently a Ph.D. student in the HCCL laboratory of the Chinese University of Hong Kong under Professor Meng Meiling. As first author, he has published six papers at the international speech conferences ICASSP, INTERSPEECH, and SLT.

Xie Fenglong: head of speech technology on the Xiaohongshu Multimedia Intelligent Algorithm Team. He has published more than ten papers in speech conferences and journals such as ICASSP, INTERSPEECH, and Speech Communication, and has long served as a reviewer for major speech conferences such as ICASSP and INTERSPEECH. His main research interests are speech signal processing and modeling.
