Apple's Personal Voice feature lets you create an AI-synthesized clone of your voice on iOS and macOS. While it's a nifty feature, what you can do with it is pretty limited. We think restricting its use strictly to accessibility features is a missed opportunity.
Apple Personal Voice was released with iOS 17 and macOS Sonoma, and is supported on the iPhone 12 and later, as well as Apple Silicon Macs. To set it up, head to (System) Settings > Accessibility > Personal Voice, read a list of phrases aloud, then leave your device plugged in overnight so it can process the recordings and clone your voice using AI.
You must read 150 phrases aloud so the system can capture the sounds and inflections of your unique voice, so you'll probably want to set aside an hour for the process (and you'll be sick of the sound of your own voice by the time you're done).
Currently, you can only use your cloned voice for accessibility tasks within iOS, such as speaking on phone calls and reading typed text aloud through your phone's speaker in face-to-face conversations. With such a cool piece of technology readily available and running securely on your own device, we think there's an opportunity for Apple to let you do more with it.
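Notably, some of the plumbing is already in place: since iOS 17, an app can ask the user's permission to synthesize speech with their Personal Voice through the AVSpeechSynthesizer API, which is exactly why the ideas below feel technically within reach. Here's a minimal sketch of requesting access and speaking with the voice (the hardcoded phrase is our own, and error handling and UI are omitted):

```swift
import AVFoundation

// A minimal sketch (iOS 17+): ask for permission to use the device owner's
// Personal Voice, then speak with it.
let synthesizer = AVSpeechSynthesizer()

AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else { return }

    // Personal Voices appear alongside the system voices, flagged
    // with the .isPersonalVoice trait.
    guard let personalVoice = AVSpeechSynthesisVoice.speechVoices()
        .first(where: { $0.voiceTraits.contains(.isPersonalVoice) }) else { return }

    let utterance = AVSpeechUtterance(string: "Hello, this is my Personal Voice.")
    utterance.voice = personalVoice
    synthesizer.speak(utterance)
}
```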
Here's what we'd love to do with our personal voice (app developers take note).
Imagine if your iPhone could answer a call for you, ask some basic questions in your voice, and then only alert you if the call is important. It'd be a great way to filter out telemarketers, and non-urgent calls (like your partner letting you know they've gone to the shops or an appointment confirmation from your doctor) could be summarized and sent to you via iMessage.
This would be especially useful when you're otherwise engaged, such as driving or at the movies, and can't use your phone.
Not everyone is a great narrator. It can be hard to speak consistently and clearly while reading from a script, and it can be just as difficult to formulate what you want to say off the cuff when speaking unscripted. Being able to write a transcript of what you want to say, then export the resulting speech as an audio file (see the sketch below), would be great for podcasts, YouTube videos, and presentations, helping people who aren't confident public speakers.
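The building blocks for this already exist: AVSpeechSynthesizer can render an utterance into audio buffers rather than playing it through the speaker. A rough sketch, assuming a Personal Voice obtained as in the earlier snippet (the function name and file handling are our own):

```swift
import AVFoundation

// Render a script to an audio file instead of the speaker, using
// AVSpeechSynthesizer.write(_:toBufferCallback:). In a real app, keep the
// synthesizer alive until the final (zero-length) buffer arrives.
func exportNarration(_ script: String, voice: AVSpeechSynthesisVoice,
                     synthesizer: AVSpeechSynthesizer, to url: URL) {
    let utterance = AVSpeechUtterance(string: script)
    utterance.voice = voice

    var file: AVAudioFile?
    synthesizer.write(utterance) { buffer in
        guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else { return }
        do {
            // Create the file lazily so it matches the synthesizer's output format.
            if file == nil {
                file = try AVAudioFile(forWriting: url, settings: pcm.format.settings)
            }
            try file?.write(from: pcm)
        } catch {
            print("Export failed: \(error)")
        }
    }
}
```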
Sure, Apple has yet to perfect Personal Voice, and it can sound a bit robotic at times. But as the feature matures, it's only going to become more convincing.
Ever been on a call with someone who has a cold or a sore throat? They don't sound great, and it can leave a bad impression. An AI-cloned voice could smooth that out, removing the blocked-nose nasal tone, sniffling, coughing, and throat clearing while you're in a meeting. It could also make your audio clearer on bad connections or in noisy environments.
If iOS can replicate your voice, it should also be able to detect when someone else is doing the same without your permission. This is especially relevant as voice-cloning scams, built from recordings of you scraped from social media, become increasingly common.
By securely sharing a fingerprint of your voice with your iCloud Family and other trusted contacts, they could be warned when an incoming call uses a cloned copy of your voice that isn't actually you. This would fit well with Apple's focus on user privacy and security.
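There's no public API for anything like this today, so treat the following as a pure thought experiment: if a device could distill its owner's voice into a numeric embedding (a "voiceprint"), spotting an impostor would reduce to a similarity check on incoming call audio. A sketch, with every name hypothetical:

```swift
import Accelerate

// Purely illustrative — Apple offers no such API. Suppose each device could
// derive a voiceprint embedding from its owner's Personal Voice model and
// share it, end-to-end encrypted, with trusted contacts.
func voiceprintSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count, "Embeddings must share a dimension")
    let dot = vDSP.dot(a, b)
    let magnitudes = vDSP.sumOfSquares(a).squareRoot() * vDSP.sumOfSquares(b).squareRoot()
    return dot / magnitudes // Cosine similarity: values near 1.0 suggest the same voice
}

// Hypothetical usage: warn if a call sounds like a contact but doesn't
// originate from any of that contact's devices.
// if voiceprintSimilarity(callerEmbedding, contactVoiceprint) > 0.9 { warnUser() }
```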
Perhaps the most obvious missed opportunity of the lot. Talk into your phone, and have the Translate app spit out a live translation of your voice. How cool would that be? Convince your friends that you learned Finnish over the weekend!
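The pipeline we're imagining is straightforward: transcribe your speech, translate the text, then speak the result in your cloned voice. The transcription and synthesis halves use real Speech and AVFoundation APIs (permission prompts omitted); the translate(_:to:) step is a placeholder, since Apple ships no translation hook for Personal Voice today:

```swift
import Speech
import AVFoundation

// Sketch: transcribe a recording, translate it, speak it in a Personal Voice.
func speakTranslation(of recording: URL, in targetLanguage: String,
                      voice: AVSpeechSynthesisVoice,
                      synthesizer: AVSpeechSynthesizer) {
    let request = SFSpeechURLRecognitionRequest(url: recording)
    let recognizer = SFSpeechRecognizer() // Uses the device's current locale

    _ = recognizer?.recognitionTask(with: request) { result, _ in
        guard let result, result.isFinal else { return }
        let translated = translate(result.bestTranscription.formattedString,
                                   to: targetLanguage)
        let utterance = AVSpeechUtterance(string: translated)
        utterance.voice = voice
        synthesizer.speak(utterance)
    }
}

// Hypothetical translation step — substitute any translation service here.
func translate(_ text: String, to language: String) -> String {
    return text // Placeholder
}
```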
The Personal Voice feature is kind of hidden in iOS and macOS, and hasn't been widely advertised. iOS has a bunch of other hidden features you might not know about, so be sure to check them out and see if there's something in there that will make your life easier.