Stephen Hawking’s voice is one of the most recognisable in the world. He’s been using it since 1985, and is apparently incredibly attached to it. On his own website, he describes it as ‘the best I have heard’.
Although the means by which he communicates have changed over the years, the voice has remained the same, carefully preserved by the various engineers who have updated his software. The voice is that of MIT engineer Dennis Klatt, who was working on text-to-speech algorithms in the 1980s. He produced one of the first devices that translated text into speech, and used recordings of himself, his wife, and his daughter for the voices. His own recording ended up being the first speech synthesiser voice that Hawking heard, and it stuck.
How Hawking Communicates
Hawking’s first communication system was operated by a hand clicker, and gave him a typing speed of around fifteen words per minute. In a program called EZ Keys, a cursor would constantly move across a keyboard on the computer screen. Hawking would click when it reached the letter he wanted, and slowly build up words and sentences.

When his hand became too weak to use the clicker, in 2008, one of his graduate students developed a cheek switch. This switch is linked to an infrared sensor mounted on Hawking’s glasses, which detects cheek movements. The software works in the same way as the hand clicker, with the cursor always moving over the onscreen keyboard until Hawking stops it.
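The single-switch scanning idea described above can be sketched in a few lines of Python. This is a simplified illustration, not Hawking’s actual software: the function name and the idea of counting switch presses in cursor “ticks” are assumptions made for the example.

```python
import string

def scan_select(press_ticks, layout=string.ascii_lowercase + " "):
    """Simulate single-switch linear scanning.

    A cursor advances one position along `layout` per tick, resetting to
    the start after every selection. A switch press selects whatever the
    cursor is on at that tick. `press_ticks` lists the global tick counts
    at which the user pressed the switch.
    """
    message = []
    prev_press = 0
    for tick in press_ticks:
        # Positions advanced since the last selection give the letter.
        pos = (tick - prev_press) % len(layout)
        message.append(layout[pos])
        prev_press = tick
    return "".join(message)

# Pressing 7 ticks in, then 8 ticks after that, spells "hi":
print(scan_select([7, 15]))
```

The slowness of the method is easy to see here: every letter costs, on average, half a pass over the whole layout, which is why word prediction made such a difference later on.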
His newest computer, last updated by Intel in 2014, uses SwiftKey software, which predicts letters and even words in order to speed things up. For example, ‘black’ would be suggested after ‘the’, and ‘hole’ after ‘the black’. It also has facial recognition technology, which recognises movements of Hawking’s mouth and eyebrows as well as his cheek. Before this new update, Hawking’s typing speed was down to around one or two words per minute. Now he’s twice as fast, and Intel are still looking into ways to improve.
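SwiftKey’s real models are far more sophisticated, but the core idea behind suggesting ‘hole’ after ‘the black’ — predicting the next word from the words already typed — can be sketched as a toy bigram model. Everything here (function names, the tiny corpus) is illustrative, not part of any real SwiftKey API.

```python
from collections import defaultdict, Counter

def build_bigram_model(corpus):
    """Count, for each word, which words follow it in the corpus."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, prev_word, n=3):
    """Return up to n most frequent continuations of prev_word."""
    return [w for w, _ in model[prev_word.lower()].most_common(n)]

corpus = "the black hole emits radiation and the black hole evaporates"
model = build_bigram_model(corpus)
# 'black' is the most common word after 'the' in this toy corpus.
print(suggest(model, "the"))
print(suggest(model, "black"))
```

Even a crude model like this cuts keystrokes dramatically: accepting a suggestion replaces an entire pass of cursor scanning per letter with a single selection.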
Hawking’s Transcripts Will Never Need to Be Transcribed
Hawking’s computer works in roughly the opposite way to audio transcription: he types out sentences to be spoken, instead of copying down what has been said. However, there are obvious links between the two. For a start, Hawking’s speech will never need to be transcribed, because all of it can be saved with a click. In practice, this means that many of his lectures are freely available to read on the Internet, without having to be typed up by anyone else!
Typing out speech also shapes the style of Hawking’s communication, and even in a lecture, where more formal language is expected, the difference stands out. The transcript of Hawking’s TED Talk is well edited and punctuated; reactions from the listeners are not mentioned until the applause at the very end, so it seems safe to assume that the transcript comes from Hawking himself. In contrast, the transcript of Brian Cox’s TED Talk contains many more asides and hesitations, as well as instances of laughter and applause from the audience.
The Typing Speeds Are A Little Different
Like a transcriber, Hawking is reliant on his typing speed to help him communicate, and so Intel’s ability to speed it up obviously makes a massive difference to him. While he is incredibly patient when typing out what he wants to say—one of the Intel team reports that on their first meeting it took Hawking twenty minutes to type approximately thirty words—it is easy to imagine how frustrating the deterioration in his typing speed must have been before Intel updated his software.
The Future of Voice Mechanics
As of last year, the software which Intel developed for Hawking, called ACAT (Assistive Context-Aware Toolkit), is available to download for PC. This is not only the speech software; it’s an entire toolkit aimed at making computers easier to use for disabled people. It has been open sourced so that more developers can access it and suggest improvements. Hopefully, we’ll be seeing more development in this area, and improved facilities for Hawking, in the coming years.