Generative AI isn’t just for Google or Microsoft. On May 16, Apple previewed the first features of iOS 17, the operating system it will unveil on June 5 at its WWDC conference and roll out to iPhones by the end of the year.
Among them is a function called “Live Speech”. The principle: the user types a text that the iPhone then reads aloud during a phone call, allowing them to converse with the person on the other end.
As Apple specifies, this option is designed for users who do not have, or have lost, the use of their voice, for example due to conditions such as ALS (known in France as Charcot’s disease).
In its press release, Apple discloses an additional option, called “Personal Voice”, which lets you use the “Live Speech” function with a voice that replicates your own.
To create this vocal double, all you have to do is read phrases displayed by the iPhone for fifteen minutes; the machine then uses these recordings to build the digital voice model. Initially, however, this option will only be available in English.
Apple also introduced a version of iOS dedicated to people with cognitive disabilities. Its interface is greatly simplified, with functions reduced to the essentials and described more clearly on screen.
Source: BFM TV
