Apple’s iOS 17 will evidently include a new feature that can replicate your voice: after around 15 minutes of reading prompts aloud, the phone software samples your speech and uses AI to recreate it.
https://news.sky.com/story/new-iphone-feature-can-create-a-voice-that-sounds-like-you-in-just-15-minutes-12882547
Whilst I applaud the overall idea, in the sense that it could assist those who suffer from ALS by allowing them to convert typed text into their own voice, I also see a more sinister side where this technology could easily be abused.
As a starting point, think “my voice is my password”… Then there’s always the possibility of it being leveraged for deepfake purposes without consent. And what security controls are in place to protect that data, which easily falls into the personally identifiable category?
Life enriching and life changing, yes, but the changes attract a new array of nefarious possibilities. Clearly, it’s possible to protect the voice synthesis with a biometric footprint, but how simple and cost effective would this be to implement?
Would love to hear opinions on this topic.