Apple AI stresses privacy with synthetic and anonymised data
Apple is taking a new approach to training its AI models – one that avoids collecting or copying user content from iPhones or Macs.
According to a recent blog post, the company plans to continue relying on synthetic data (artificially constructed data that mimics user behaviour) and differential privacy to improve features like email summaries, without gaining access to personal emails or messages.
For users who opt in to Apple’s Device Analytics program, the company’s AI models will compare synthetic email-like messages against a small sample of a real user’s content stored locally on the device. The device then identifies which of the synthetic messages most closely matches its user sample, and sends information about the selected match back to Apple. No actual user data leaves the device, and Apple says it receives only aggregated information.
The technique will allow Apple to improve its models for longer-form text generation tasks without collecting real user content. It's an extension of the company's long-standing use of differential privacy, which introduces randomised data into broader datasets to help protect individual identities. Apple has used this method since 2016 to understand usage patterns, in line with the company's safeguarding policies.
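Apple has not published its exact mechanism, but the core idea behind differential privacy can be shown with the classic Laplace mechanism: add noise, calibrated to a privacy budget, to an aggregate statistic before releasing it. The sketch below uses invented names and parameters for illustration; it is not Apple's code.

```python
import numpy as np

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release an aggregate count with Laplace noise (illustrative sketch).

    One individual can change the count by at most `sensitivity`, so noise
    with scale sensitivity/epsilon gives epsilon-differential privacy for
    this single query.
    """
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# e.g. report roughly how many devices used a feature,
# while hiding the contribution of any single device
print(private_count(true_count=1284, epsilon=0.5))
```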
Improving Genmoji and other Apple Intelligence features
The company already uses differential privacy to improve features like Genmoji, where it collects general trends about which prompts are most popular without linking any prompt with a specific user or device. In upcoming releases, Apple plans to apply similar methods to other Apple Intelligence features, including Image Playground, Image Wand, Memories Creation, and Writing Tools.
For Genmoji, the company anonymously polls participating devices to determine whether specific prompt fragments have been seen. Each device responds with a noisy signal – some responses reflect actual use, while others are randomised. The approach ensures that only widely-used terms become visible to Apple, and no individual response can be traced back to a user or device, the company says.
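The noisy polling Apple describes for Genmoji resembles the textbook randomised-response technique. The sketch below, with a made-up flip probability and invented function names, shows how individual answers can be randomised while the population-wide rate stays recoverable.

```python
import random

FLIP_PROB = 0.25  # assumed noise level for illustration, not Apple's parameter

def noisy_response(actually_seen: bool) -> bool:
    """A device answers truthfully most of the time, but with probability
    FLIP_PROB it answers at random, so no single reply can be trusted."""
    if random.random() < FLIP_PROB:
        return random.random() < 0.5  # random answer, independent of the truth
    return actually_seen

def estimate_usage_rate(responses: list[bool]) -> float:
    """Invert the known noise to recover the population-wide rate:
    observed = true_rate * (1 - FLIP_PROB) + 0.5 * FLIP_PROB."""
    observed = sum(responses) / len(responses)
    return (observed - 0.5 * FLIP_PROB) / (1 - FLIP_PROB)
```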
Curating synthetic data for better email summaries
While that method works well for short prompts, Apple needed a new approach for more complex tasks like summarising emails. For these, Apple generates thousands of sample messages and converts them into numerical representations, or 'embeddings', based on language, tone, and topic. Participating user devices then compare those embeddings against locally stored samples. Again, only the selected match is shared, not the content itself.
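Apple has not detailed the matching rule, but the device-side step might look like the following sketch, which assumes cosine similarity as the distance measure; all names here are hypothetical.

```python
import numpy as np

def best_match_index(synthetic: np.ndarray, local_samples: np.ndarray) -> int:
    """On-device step: find which synthetic embedding is closest (here, by
    cosine similarity) to any locally stored sample. Only this index is
    reported; the local samples themselves never leave the device."""
    # Normalise rows so a dot product equals cosine similarity.
    syn = synthetic / np.linalg.norm(synthetic, axis=1, keepdims=True)
    loc = local_samples / np.linalg.norm(local_samples, axis=1, keepdims=True)
    similarity = syn @ loc.T  # shape: (n_synthetic, n_local)
    # For each synthetic message, take its best local match, then pick the
    # synthetic message with the strongest match overall.
    return int(similarity.max(axis=1).argmax())
```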
Apple collects the most frequently selected synthetic embeddings from participating devices and uses them to refine its training data. Over time, this process allows the system to generate more relevant and realistic synthetic emails, helping Apple improve its AI outputs for summarisation and text generation without any apparent compromise of user privacy.
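Server-side, the aggregation step could then be as simple as a frequency count over the reported match indices; again, this is an illustrative sketch rather than Apple's implementation.

```python
from collections import Counter

def most_selected(reported_indices: list[int], k: int = 10) -> list[int]:
    """Keep only the synthetic messages chosen most often across many
    devices; these then seed the next round of synthetic-data generation."""
    return [idx for idx, _ in Counter(reported_indices).most_common(k)]
```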
Available in beta
Apple is rolling out the system in beta versions of iOS 18.5, iPadOS 18.5, and macOS 15.5. According to Bloomberg's Mark Gurman, the approach is part of Apple's attempt to address challenges in its AI development, which have included delayed feature rollouts and the fallout from leadership changes in the Siri team.
Whether its approach will yield more useful AI outputs in practice remains to be seen, but it signals a clear public effort to balance user privacy with model performance.