Apple Intelligence's on-device AI can be manipulated by attackers using prompt injection techniques, according to new research showing a high success rate and potential access to sensitive user data.

Researchers from RSAC Research have unveiled a method to circumvent Apple's security measures, achieving a 76% success rate across 100 tests by employing adversarial prompts and Unicode obfuscation. The findings were shared with Apple on October 15, 2025. The research focused on the on-device large language model embedded in Apple's operating systems, which is accessible to third-party applications.
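The research does not detail the exact obfuscation used, but the general idea behind Unicode obfuscation can be illustrated with a minimal sketch: invisible characters such as zero-width spaces are interleaved into a malicious instruction so that a naive keyword filter no longer matches it, while the text still reaches the model. All names below are illustrative, not the researchers' actual code.

```python
# Illustrative sketch of Unicode obfuscation in general, NOT the
# researchers' actual technique against Apple Intelligence.

ZWSP = "\u200b"  # zero-width space: invisible when rendered

def obfuscate(hidden: str) -> str:
    """Interleave zero-width spaces so a literal-match filter misses the text."""
    return ZWSP.join(hidden)

def naive_filter(text: str, banned: str) -> bool:
    """Return True if the banned phrase appears verbatim in the text."""
    return banned in text

banned = "ignore previous instructions"
payload = "Please summarize this note. " + obfuscate(banned)

# The naive filter checks for the literal phrase and misses the obfuscated copy...
print(naive_filter(payload, banned))  # False

# ...but normalizing the input (stripping zero-width characters) reveals it again.
cleaned = payload.replace(ZWSP, "")
print(naive_filter(cleaned, banned))  # True
```

This is also why mitigations typically normalize or strip invisible code points before any content check, rather than matching on the raw string.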