As AI tools move into our phones – Apple, Google and Samsung have launched services that can edit photos, translate languages and search the web – we’re at the dawn of an era in which AI becomes an integral part of our digital lives and increasingly useful on a personal level.
That’s if we allow it, because it takes a little faith.
Take keeping a diary as an example. An AI tool can manage your diary effectively if you give it access. But how far should it go?
To be truly helpful, does it also need to know who you don’t want to date, or which relationships you want to keep secret, and from whom?
Do you want it to keep summaries of your consultations or medical appointments?
This is deeply personal information, potentially very embarrassing and extremely valuable if a glitch meant it was shared. Do you trust big tech firms with this kind of data?
Microsoft is pushing hard on this particular door. In 2024, it got into trouble for demonstrating a tool called Recall, which took snapshots of laptop desktops every few seconds to help users find content they had seen but couldn’t remember where.
It has since made a number of changes to the product, which has yet to launch, but is sticking with it.
“I think we’re moving toward a fundamentally new era of ever-present, persistent, and highly capable co-pilot companions in your everyday life,” the firm’s AI chief, Mustafa Suleyman, told me recently.
Despite the challenges, Ben Wood, principal analyst at technology research firm CCS Insight, expects more personalized AI services to emerge in 2025.
“The data will be constantly updated with new data sources such as emails, messages, documents and social media interactions.
“This will enable the AI service to be tailored specifically to a person’s communication style, needs and preferences,” he says.
But Mr Wood admits that opening up your personal information to AI in this way will be a big step.
“Trust is going to be essential,” he says.