Microsoft Fixes Copilot Privacy Bug: AI Exposed "Confidential" Emails in Chat's Work Mode
Microsoft has officially addressed a concerning privacy bug (tracked as CW1226324) in Microsoft 365 Copilot. The issue, first reported by system administrators on January 21, involved the AI's "Work" mode improperly accessing and summarizing email content labeled as "Confidential." Under Microsoft's standard service agreement, Copilot is supposed to ignore content carrying such sensitivity labels to preserve data sovereignty and privacy.
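Conceptually, that guardrail is a filter in front of the model: anything carrying a blocked sensitivity label is dropped before it reaches the AI's context window. The Python sketch below is a minimal illustration of the idea; the `Message` class, its field names, and `filter_for_assistant` are hypothetical, not Microsoft's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Message:
    # Hypothetical message record; field names are illustrative,
    # not part of any Microsoft API.
    subject: str
    body: str
    sensitivity_label: str | None  # e.g. "Confidential", or None if unlabeled

# Labels the assistant must never read or summarize.
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def filter_for_assistant(messages: list[Message]) -> list[Message]:
    """Drop any message carrying a blocked sensitivity label before
    its content ever reaches the AI's context window."""
    return [m for m in messages if m.sensitivity_label not in BLOCKED_LABELS]
```

In effect, the bug behaved as though a filter like this were skipped for certain mail folders.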
Scope of the Incident
According to Microsoft, the impact was limited to a specific subset of data. The bug allowed Copilot to read and display confidential information only from the user’s own Sent Items and Drafts folders within the Outlook desktop application.
Crucially, Microsoft emphasized that this was not an external data breach: no other users or unauthorized individuals could access these confidential emails through Copilot. The exposure was internal, meaning the AI simply showed users their own protected data in a way it wasn't supposed to.
Patch and Remediation
Upon confirming the bug, Microsoft rolled out a security patch earlier this month. The company stated that the fix is now active, though it continues to monitor the system to ensure that Copilot consistently adheres to sensitivity labels across all Microsoft 365 environments.
- The "Confidential" label is part of the Microsoft Purview system, a key component of compliance for large organizations. If AI can bypass these labels, it will significantly reduce organizational trust in using AI in their business.
- While Microsoft insists that "others cannot see" this information, the real risk is secondary exposure: if an employee runs Copilot in a public setting or shares their screen in a meeting, a summary of confidential content could surface in the chat and leak unintentionally.
- This is different from AI "hallucination"; it is an AI access-control problem, and it reflects the complexity of managing data permissions at organizational scale.
- For IT administrators heading into 2026, the recommendation is to review Data Loss Prevention (DLP) policies and pair AI deployments with "Just-In-Time" (JIT) access as a second layer of protection in case the AI's own label filtering has bugs (a minimal sketch of that pattern follows this list).
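As an illustration of that second layer, the sketch below keeps the JIT check independent of the AI's label filter, so a bug in one does not silently expose labeled content. The grant store, function names, and 15-minute default TTL are all hypothetical assumptions, not a Microsoft or Purview API:

```python
import time

# Hypothetical JIT grant store: (user, resource) -> expiry timestamp.
# In a real deployment this would live in an identity/governance system.
_grants: dict[tuple[str, str], float] = {}

def grant_access(user: str, resource: str, ttl_seconds: int = 900) -> None:
    """Record a short-lived, explicitly requested grant (default 15 minutes)."""
    _grants[(user, resource)] = time.time() + ttl_seconds

def ai_may_read(user: str, resource: str, sensitivity_label: str | None) -> bool:
    """Second layer of defense: even if the AI's own label filter fails,
    labeled content is readable only under an unexpired JIT grant."""
    if sensitivity_label is None:
        return True  # unlabeled content falls back to normal permissions
    expiry = _grants.get((user, resource))
    return expiry is not None and expiry > time.time()
```

Because the JIT check never consults the AI's filtering logic, a regression like CW1226324 in one layer would still leave labeled content blocked by the other.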
Source: Bleeping Computer