
Exploring Kokoro TTS Voice Synthesis on Google Colab with T4

Release: 2025-01-27 12:12:09

Kokoro-82M: exploring a high-performance text-to-speech (TTS) model

Kokoro-82M is a high-performance TTS model that generates high-quality audio from plain text, making voice synthesis straightforward to get started with.


Starting from version 0.23, Kokoro-82M also supports Japanese. You can try it through the following link:

[Kokoro TTS on Hugging Face Spaces]

However, the Japanese intonation still sounds slightly unnatural.

In this tutorial, we will use kokoro-onnx, a TTS implementation of Kokoro built on ONNX. We will use version 0.19 (a stable release), which supports only American English and British English voices.

As the title suggests, the code runs on Google Colab (on a T4 GPU).

Install kokoro-onnx

Install the packages:

<code class="language-bash">!git lfs install
!git clone https://huggingface.co/hexgrad/Kokoro-82M
%cd Kokoro-82M
!apt-get -qq -y install espeak-ng > /dev/null 2>&1
!pip install -q phonemizer torch transformers scipy munch
!pip install -U kokoro-onnx</code>
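Note that the `apt-get` line above sends all output to `/dev/null`, so a failed espeak-ng install (the binary the phonemizer shells out to) can go unnoticed and surface later as a confusing synthesis error. A quick sanity check, sketched here as a hypothetical helper:

```python
import shutil

def espeak_available() -> bool:
    # phonemizer invokes the espeak-ng binary, so it must be on PATH.
    return shutil.which("espeak-ng") is not None

print("espeak-ng found:", espeak_available())
```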
Run the official example

Before testing voice synthesis, let's run the official example. The following code generates and plays audio within a few seconds.

<code class="language-python">import numpy as np
import torch
from scipy.io.wavfile import write
from IPython.display import display, Audio

from models import build_model
from kokoro import generate</code>

Voice synthesis

Now, let's get to the main topic and test voice synthesis.

Define the voice pack
<code class="language-python">device = 'cuda' if torch.cuda.is_available() else 'cpu'
MODEL = build_model('kokoro-v0_19.pth', device)
VOICE_NAME = [
    'af', # default voice: a 50-50 mix of Bella and Sarah
    'af_bella', 'af_sarah', 'am_adam', 'am_michael',
    'bf_emma', 'bf_isabella', 'bm_george', 'bm_lewis',
    'af_nicole', 'af_sky',
][0]
VOICEPACK = torch.load(f'voices/{VOICE_NAME}.pt', weights_only=True).to(device)
print(f'Loaded voice: {VOICE_NAME}')

text = "How could I know? It's an unanswerable question. Like asking an unborn child if they'll lead a good life. They haven't even been born."
audio, out_ps = generate(MODEL, text, VOICEPACK, lang=VOICE_NAME[0])

display(Audio(data=audio, rate=24000, autoplay=True))
print(out_ps)</code>
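The imports above pull in scipy's `write`, but the example only plays the audio inline in the notebook. If you also want to keep the result, here is a sketch for saving the float waveform as a standard 16-bit PCM WAV (`save_wav` is my own helper name, not part of Kokoro):

```python
import numpy as np
from scipy.io.wavfile import write

def save_wav(path: str, audio, rate: int = 24000) -> None:
    # Kokoro returns float samples in [-1, 1]; scale to int16 for a PCM WAV.
    pcm = np.clip(np.asarray(audio), -1.0, 1.0)
    write(path, rate, (pcm * 32767).astype(np.int16))

# e.g. save_wav("kokoro_output.wav", audio)
```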

af: American English female voice

am: American English male voice

bf: British English female voice

bm: British English male voice
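The naming convention above can also be decoded programmatically; a small sketch (the helper name is mine, not part of the library):

```python
def describe_voice(name: str) -> str:
    """Expand a Kokoro voice-pack name, e.g. 'bf_emma' -> 'British English female (emma)'."""
    accents = {"a": "American English", "b": "British English"}
    genders = {"f": "female", "m": "male"}
    prefix, _, speaker = name.partition("_")
    label = f"{accents[prefix[0]]} {genders[prefix[1]]}"
    return f"{label} ({speaker})" if speaker else label

print(describe_voice("af_bella"))
```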

We will now load all the available voice packs. To hear the differences between the synthesized voices, we will generate audio with each voice pack using the same example text; you can change the voice-pack variable to use whichever voice you like.
Voice synthesis: mixed voices
<code class="language-python">voicepack_af = torch.load(f'voices/af.pt', weights_only=True).to(device)
voicepack_af_bella = torch.load(f'voices/af_bella.pt', weights_only=True).to(device)
voicepack_af_nicole = torch.load(f'voices/af_nicole.pt', weights_only=True).to(device)
voicepack_af_sarah = torch.load(f'voices/af_sarah.pt', weights_only=True).to(device)
voicepack_af_sky = torch.load(f'voices/af_sky.pt', weights_only=True).to(device)
voicepack_am_adam = torch.load(f'voices/am_adam.pt', weights_only=True).to(device)
voicepack_am_michael = torch.load(f'voices/am_michael.pt', weights_only=True).to(device)
voicepack_bf_emma = torch.load(f'voices/bf_emma.pt', weights_only=True).to(device)
voicepack_bf_isabella = torch.load(f'voices/bf_isabella.pt', weights_only=True).to(device)
voicepack_bm_george = torch.load(f'voices/bm_george.pt', weights_only=True).to(device)
voicepack_bm_lewis = torch.load(f'voices/bm_lewis.pt', weights_only=True).to(device)</code>

First, let's create an averaged voice by combining the two British English female (bf) voices.

Next, let's mix two female voices with a male voice.

<code class="language-python"># The following code blocks repeat the same pattern with different voice
# packs, so they are omitted here for brevity. Each block generates audio
# with a different voice pack and plays it with display(Audio(...)).</code>
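The omitted blocks all follow one pattern: voice packs are tensors of identical shape, so mixing is just a weighted elementwise average. A generic sketch (`mix_voices` is a hypothetical helper, shown with NumPy arrays; the same arithmetic works on the torch tensors loaded above):

```python
import numpy as np

def mix_voices(packs, weights=None):
    """Blend voice packs (arrays/tensors of the same shape) by weighted average."""
    if weights is None:
        weights = [1.0] * len(packs)
    total = sum(weights)
    # Normalize the weights so the blend stays on the same scale as the inputs.
    return sum((w / total) * p for w, p in zip(weights, packs))

# e.g. mix_voices([voicepack_af_bella, voicepack_af_sarah, voicepack_am_adam])
```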
Finally, let's synthesize a mix of the American English and British English male voices.

I also used Gradio to test the effect of the mixed voices: (a link or screenshot of the Gradio demo should be inserted here)

Combining this setup with Ollama could enable some interesting experiments.

<code class="language-python"># Average the two British English female voices into a single voice pack.
bf_average = (voicepack_bf_emma + voicepack_bf_isabella) / 2
audio, out_ps = generate(MODEL, text, bf_average, lang=VOICE_NAME[0])
display(Audio(data=audio, rate=24000, autoplay=True))
print(out_ps)</code>
