Apple is adding a feature to iOS that creates an AI version of a user's own voice. This will let people with conditions such as ALS continue to speak through the software in a voice that resembles their own.
The feature, called Personal Voice, requires users to record 15 minutes of audio on an iPhone or iPad, after which the software creates an artificial version of the voice locally on the device, says Apple. This allows users to keep speaking after their voice stops working properly due to an illness. It is unclear whether the feature will be limited to people at risk of losing their voice or whether all users will be able to recreate their own.
Personal Voice is one of several Accessibility features Apple is about to add. Another, Live Speech, is a text-to-speech function for FaceTime, among other things: users type responses and the software reads them aloud to other participants, so people who cannot speak with their voice can still take part in conversations.
There will also be options for users with cognitive or visual impairments, including settings that simplify the interface and display it with large buttons. Users can, for example, communicate through an emoji keyboard when they are unable to write. There is an option to combine the Phone and FaceTime apps into a single Calls app, and there are modified versions of Photos, Camera and Music. The features will arrive in an iOS release due later this year, presumably iOS 17.