Apple is testing generative AI concepts that may one day find their way into Siri, but fundamental problems with Siri's underlying architecture stand in the way.
Apple introduced its large language model and other AI tools to employees at its annual AI Summit last month. Apple engineers, including members of the Siri team, are reportedly already testing language-generation concepts "on a weekly basis" in response to the rise of chatbots like ChatGPT.
These next-generation AI technologies highlight how Siri, Alexa, and other voice assistants have fallen behind in the AI race. Siri, in particular, faces multiple obstacles to meaningful improvement.
John Burkey, a former Apple engineer who worked on improving Siri in 2014, told the New York Times that Siri is built on clunky code that makes even basic updates take weeks.
Siri's "complex design" makes it difficult for engineers to add new features. Its database contains a vast number of phrases in nearly two dozen languages, turning it into "a giant snowball." According to Burkey, if someone wanted to add a word to Siri's database, "it would be added to a big pile."
As a result, simple updates to the dataset, such as adding a new phrase, require rebuilding the entire Siri database, which can take up to six weeks; more complex features, such as new search tools, can take up to a year. Burkey therefore believes that Siri cannot become a "creative assistant" like ChatGPT.