Apple said the update will arrive “in the coming weeks.”
As it has said before, its notification summaries — which group and rewrite previews of several recent app notifications into a single alert on users’ lock screens — aim to let users “scan for key details.”
“Apple Intelligence features are in beta, and we’re constantly improving them with user feedback,” the company said in a statement Monday, adding that receiving summaries is optional.
“A software update coming in the coming weeks will clarify when the text displayed is a summary provided by Apple Intelligence. We encourage users to report their concerns if they see an unexpected notification summary.”
The feature, along with others released as part of a broader suite of artificial intelligence tools, was launched in the UK in December. It is only available on iPhone 16, iPhone 15 Pro, and Pro Max models running iOS 18.1 or later, and on select iPads and Macs.
Several examples of the technology appearing to interpret messages in a crude, literal way have gone viral on social media.
In November, a reporter for ProPublica highlighted erroneous Apple AI summaries of alerts from the New York Times app, which suggested the newspaper had reported that Israeli Prime Minister Benjamin Netanyahu had been arrested.
The BBC was unable to independently verify the screenshots, and the New York Times declined to comment.
Reporters Without Borders, an organization that represents the rights and interests of journalists, urged Apple to disable the feature in December.
It said the false headline about Mr Mangione being attributed to the BBC showed that “generative AI services are still too immature to provide reliable information to the public”.
Apple is not alone in releasing generative AI tools that can create text, images, and other content at the request of users — with mixed results.
Google’s AI Overviews feature, which provides a written summary of information from the search engine’s top results in response to user queries, faced criticism last year for some erratic answers.
At the time, a Google spokesperson said that these were “isolated examples” and that the feature generally worked well.