Microsoft 365 Copilot Improperly Accessed Classified Drafts in Outlook.

Microsoft Fixes Copilot Privacy Bug: AI Exposed "Confidential" Emails in Chat Work Mode

Microsoft has officially addressed a concerning privacy bug (tracked as CW1226324) in its Microsoft 365 Copilot. The issue, first reported by system administrators on January 21, involved the AI's "Work" mode improperly accessing and summarizing email content labeled "Confidential." Under Microsoft's standard service agreement, Copilot is strictly programmed to ignore content carrying such sensitivity labels in order to ensure data sovereignty and privacy.

Scope of the Incident

According to Microsoft, the impact was limited to a specific subset of data. The bug allowed Copilot to read and display confidential information only from the user’s own Sent Items and Drafts folders within the Outlook desktop application.

Crucially, Microsoft emphasized that this was not an external data breach: no other users or unauthorized individuals could access these confidential emails through Copilot. The error was internal, meaning the AI simply showed users their own protected data in a way it was not supposed to.

Patch and Remediation

Upon confirming the bug, Microsoft rolled out a security patch earlier this month. The company stated that the fix is now active, though they continue to monitor the system to ensure that Copilot consistently adheres to sensitivity labels across all Microsoft 365 environments.

  • The "Confidential" label is part of the Microsoft Purview system, a key component of compliance for large organizations. If AI can bypass these labels, it will significantly reduce organizational trust in using AI in their business.
  • While Microsoft insists that "others cannot see" this information, the real risk is that if an employee uses Copilot to summarize work in public or shares a meeting screen, Copilot might inadvertently surface confidential content in a chat, leading to unintentional data leaks.
  • This is different from AI "hallucination"; it's a problem with AI access control, reflecting the complexity of managing vast amounts of data permissions at the organizational level.
  • For IT administrators, the recommendation for 2026 is to review Data Loss Prevention (DLP) policies and implement "Just-In-Time" access alongside AI as a second layer of protection, in case the AI itself has bugs in its label-filtering system.

Source: Bleeping Computer 
