Humans have officially given their voice to machines.
A research paper published by Google this month—which has not been peer reviewed—details a text-to-speech system called Tacotron 2, which the authors claim generates speech from text with near-human naturalness.
The system is Google’s second official generation of the technology, and it consists of two deep neural networks. The first network translates the text into a spectrogram, a visual way to represent audio frequencies over time. That spectrogram is then fed into WaveNet, a system from Alphabet’s AI research lab DeepMind, which reads the chart and generates the corresponding audio.
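The two-stage pipeline can be sketched in rough pseudocode. This is only an illustrative toy, not Google's actual models: both functions below are stand-ins with made-up logic and shapes, meant to show the division of labor between the text-to-spectrogram network and the spectrogram-to-audio vocoder.

```python
# Conceptual sketch of the Tacotron 2 pipeline described above.
# All names, shapes, and values here are illustrative assumptions.
import numpy as np

def text_to_spectrogram(text: str, n_mels: int = 80) -> np.ndarray:
    # Stand-in for the first network: map each character to one
    # spectrogram frame of n_mels frequency bins (dummy values here;
    # the real network is a learned sequence-to-sequence model).
    frames = [np.full(n_mels, ord(ch) / 255.0) for ch in text]
    return np.stack(frames, axis=1)  # shape: (n_mels, time_steps)

def vocoder(spectrogram: np.ndarray, hop: int = 256) -> np.ndarray:
    # Stand-in for WaveNet: expand each spectrogram frame into `hop`
    # audio samples (crude upsampling; WaveNet instead generates the
    # waveform sample by sample, conditioned on the spectrogram).
    per_frame = spectrogram.mean(axis=0)  # one value per time step
    return np.repeat(per_frame, hop)      # shape: (time_steps * hop,)

spec = text_to_spectrogram("Hello")
audio = vocoder(spec)
print(spec.shape, audio.shape)  # (80, 5) (1280,)
```

The key design point the sketch preserves: the first stage only has to predict a compact time-frequency picture of the speech, and the vocoder handles the much harder job of turning that picture into a raw waveform.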
The Google researchers also demonstrate that Tacotron 2 can handle hard-to-pronounce words and names, and can alter its enunciation based on punctuation and capitalization. For instance, capitalized words are stressed, just as a person would stress a word to signal that it is an important part of the sentence.
Unlike some core AI research the company does, this technology is immediately useful to Google. WaveNet, first announced in 2016, is now used to generate the voice in Google Assistant. Once readied for production, Tacotron 2 could be an even more powerful addition to the service.
However, the system is trained to mimic only a single female voice; to speak as a male, or as a different female voice, Google would need to retrain the system.