Platform, Security, Workplace
Someone might be inside your Microsoft 365 environment right now. Here’s how to find out fast. Keep in mind that this article is based on my own experience and point of view.
Picture this: a colleague walks up to your desk on a Tuesday morning looking confused. People are replying to emails she never sent. Someone called her asking why she needed an urgent wire transfer. Her calendar has meetings she never scheduled. According to IBM’s research, the global average cost of a data breach reached $4.44 million in 2025, and stolen or compromised credentials remain the most common attack vector and among the hardest breaches to detect, often taking months to identify and contain.
Every minute you spend figuring out where to look is a minute the attacker spends going deeper. The difference between a contained incident and a full organizational crisis almost always comes down to how quickly someone knew where to look, and what to do when they found it.
This guide gives you that: a clear, repeatable process to detect a Microsoft 365 account compromise in under 10 minutes, written in plain language and without assuming you have a security operations team standing behind you. Keep in mind that this is rapid triage for obvious compromises, not guaranteed full detection!
The Unified Audit Log is automatically enabled for most modern Microsoft 365 tenants, but older tenants or some legacy configurations may still have it disabled. If audit logging was never turned on, you are investigating blind: there is no history of what happened, and admins must enable it manually before Microsoft 365 activities are recorded. Go to the Microsoft Purview compliance portal → Solutions → Audit, and if you see a banner saying “Start recording user and admin activity,” click it immediately. Then come back to this guide.
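If you prefer the command line, you can check (and, on older tenants, enable) audit logging from Exchange Online PowerShell. A hedged sketch; on newer tenants the Set step may be unnecessary because ingestion is managed automatically:

```powershell
# Requires the ExchangeOnlineManagement module
Connect-ExchangeOnline

# Check whether unified audit log ingestion is enabled
Get-AdminAuditLogConfig | Format-List UnifiedAuditLogIngestionEnabled

# If it returns False, turn it on (it can take a few hours before activity is recorded)
Set-AdminAuditLogConfig -UnifiedAuditLogIngestionEnabled $true
```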
Audit (Premium), which is included in Microsoft 365 E5, retains audit records for Exchange, SharePoint, and Microsoft Entra for one year and provides access to critical events, such as when users access, reply to, or forward mail items. A 10-year retention add-on is available for the E5 license. Standard audit (E3 and below) retains most records for only 90–180 days at the time of writing, depending on the service. The shorter your retention window, the smaller your investigation window. This matters if the attacker has been sitting quietly in the account for weeks.
| License | Default audit log retention | Notes |
|---|---|---|
| E3 / Microsoft 365 Business Standard/Premium | 90–180 days (depends on service and tenant settings) | Called Standard Audit. Some critical activities are not recorded or retained long enough for thorough investigations. |
| E5 / Microsoft 365 E5 Compliance Add-on | 1 year by default, extendable to 10 years with add-on | Called Premium Audit. Includes all critical audit events, like mailbox forwarding, admin role changes, OAuth consent events, and SharePoint/OneDrive file activity. |
Key point: Some events that are essential for breach detection are only available in E5 / Premium audit. For example:
• Set-Mailbox (for mailbox-level forwarding)
• Consent to application (OAuth app consent events)
• Certain SharePoint/OneDrive file activity events
• Directory role changes
If you are on E3 or lower, these events might not appear in your audit log or may only be retained for a short period, making it hard to investigate older compromises.
| Audit Event / Action | Available in E3 / Standard Audit? | Available in E5 / Premium Audit? | Why it matters for breach detection |
|---|---|---|---|
| Set-Mailbox (mailbox forwarding changes) | ❌ Not retained / may be missing | ✅ Fully retained | Detects attackers setting up mailbox-level forwarding to exfiltrate emails silently |
| New-InboxRule (including hidden rules) | ✅ Partially visible | ✅ Fully visible | Detects hidden inbox rules that hide security alerts or forward emails |
| Consent to application (OAuth app consent events) | ❌ Not fully visible | ✅ Fully visible | Detects malicious third-party apps maintaining access even after password reset or MFA changes |
| Add member to role (directory role changes) | ❌ Limited or not retained | ✅ Fully retained | Detects attackers trying to escalate privileges or gain admin access |
| FileDownloaded / FileSyncDownloadedFull (SharePoint & OneDrive bulk downloads) | ❌ Partially retained | ✅ Fully retained | Detects large-scale data exfiltration |
| Sign-in logs & authentication events | ✅ Basic logs available | ✅ Full logs + risky sign-ins | Impossible travel, unusual hours, legacy auth usage, MFA bypass detection |
Notes / Recommendations:
1. E3 tenants:
• Some critical events may be missing; consider direct PowerShell checks for mailbox forwarding or OAuth apps.
• Retention window is shorter (typically 90–180 days), so older breaches may be impossible to investigate.
• For OAuth consent events in particular, direct PowerShell checks are required on E3 tenants
2. E5 tenants:
• Premium audit gives full visibility with 1-year retention by default (extendable to 10 years).
• Supports full correlation of suspicious activity across Exchange, SharePoint, OneDrive, Teams, and Azure AD.
Keep in mind that the information in the table above may change; it reflects behavior at the time of writing.
Before you start digging through logs, you need to know what you are looking for. Attackers are predictable: they follow patterns. Here are the five things they almost always do after compromising a Microsoft 365 account. Look for any of these and treat each as serious until proven otherwise.
This is the single most reliable indicator of a compromised Microsoft 365 account. It is also the most consistently overlooked.
After gaining access, an attacker’s first move is almost always to create inbox rules that work silently in the background. These rules serve two purposes: hiding security notifications from the legitimate user, and siphoning data to an external address. A rule that moves emails containing words like “password,” “invoice,” or “Microsoft security alert” into the Junk folder means the account owner never sees warnings about their own compromise. A rule that forwards every incoming email to a Gmail address means the attacker gets a live feed of everything without ever logging in again.
What makes this particularly dangerous is that some of these rules are deliberately hidden and do not appear in standard admin tools, leaving security teams blind to the most dangerous ones. You can only find them using PowerShell’s Get-InboxRule command with the -IncludeHidden parameter, which forces a direct query of the mailbox storage.
Separate from inbox rules, Microsoft 365 allows forwarding to be configured at the mailbox level, a global setting that silently copies every email to an external address. This is different from an inbox rule because it operates at the infrastructure level, is invisible to the user in their Outlook settings, and survives even if someone cleans up suspicious inbox rules. An attacker who configures mailbox-level forwarding and then gets discovered can clean up their inbox rules, remove their registered MFA device, and disappear, while email continues flowing to their external address for weeks afterward.
A login from a country your colleague has never visited. A successful authentication at 3am when she has never worked outside business hours. Two successful logins from different continents within the same 20-minute window, physically impossible unless someone else used her credentials; Microsoft flags this pattern as “impossible travel.” These are the signals the sign-in logs reveal immediately, and they are often the fastest way to confirm what you already suspect.
Once an attacker registers their own authenticator app or phone number on the compromised account, they have effectively locked in permanent access. Even if the legitimate user resets their password, the attacker authenticates with their own MFA method and walks straight back in. Finding an unrecognized MFA registration on an account is not a maybe, it is a confirmed breach until proven otherwise.
This is the sign most IT administrators miss entirely, and attackers rely on that blind spot. OAuth applications are third-party tools that users can authorize to access their Microsoft 365 data. When an attacker tricks someone into clicking a carefully crafted link, the user may unknowingly grant a malicious application permission to read all their emails, access their files, or send mail on their behalf. The application then maintains that access independently of the user’s password or MFA settings. You can reset the password, revoke sessions, even wipe the device, and the OAuth application still has a valid token sitting in your tenant, quietly doing its job.
Here’s your exact investigation sequence. Start the clock!
Go to: Microsoft Entra admin center (entra.microsoft.com) → Users → All Users → Select the suspected user → Sign-in logs
Filter for Successful sign-ins first. You are looking for:
– Unfamiliar countries: If your colleague works in the Netherlands and you see a successful login from Vietnam, that needs explaining before anything else.
– Impossible travel: Look at the timestamps on successful logins. Two successful authentications from cities six flight hours apart, occurring 30 minutes apart, mean two different people used those credentials.
– Unusual hours: A login at 3am from the user’s usual country is less alarming than one from abroad, but still worth noting, especially if it is followed by unusual activity.
– Legacy authentication clients: If you see successful logins under client apps labeled “Other clients,” “IMAP,” “POP,” or “Exchange ActiveSync” from a source that does not match a known device, legacy authentication is being exploited. These protocols bypass MFA entirely, which means your MFA policies offered zero protection against this particular login. Some organizations may also need to check SMTP AUTH, which can be exploited for bypassing MFA.
– Failed attempts followed by success: A cluster of failed logins that ends in a successful one is the fingerprint of a password spray or brute force attack.
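If you would rather pull these sign-ins programmatically than click through the portal, Microsoft Graph PowerShell can do it. A sketch, assuming the Microsoft.Graph modules are installed and your account has the AuditLog.Read.All permission; the address is a placeholder:

```powershell
Connect-MgGraph -Scopes "AuditLog.Read.All","Directory.Read.All"

# Pull the 50 most recent sign-ins for the suspected user
Get-MgAuditLogSignIn -Filter "userPrincipalName eq 'user@yourdomain.com'" -Top 50 |
    Select-Object CreatedDateTime, IpAddress, ClientAppUsed, AppDisplayName,
                  @{n='Country';e={$_.Location.CountryOrRegion}},
                  @{n='ErrorCode';e={$_.Status.ErrorCode}} |
    Format-Table -AutoSize
```

An ErrorCode of 0 means the sign-in succeeded; scan the country, client app, and timestamp columns for the patterns above.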
Where to go (for standard inbox rules): Microsoft 365 admin center → Active users → Select user → Mail tab → Email apps → Manage email apps
For hidden inbox rules (the ones attackers actually use), you need PowerShell:
Open PowerShell connected to Exchange Online and run:
Get-InboxRule -Mailbox "user@yourdomain.com" -IncludeHidden | Format-List Name, Enabled, RedirectTo, ForwardTo, ForwardAsAttachmentTo, DeleteMessage
The -IncludeHidden flag is not optional here. Without it, PowerShell returns only the rules visible in standard admin tools, and the attacker’s rules will not appear. Look for any rule that forwards to an external address, redirects mail to the Junk or Notes folders, or deletes messages automatically. Any of these on an account the user did not configure themselves is a red flag.
To check for global mailbox forwarding:
Get-Mailbox -Identity "user@yourdomain.com" | Format-List ForwardingAddress, ForwardingSmtpAddress, DeliverToMailboxAndForward
If ForwardingSmtpAddress has any value at all, email is being forwarded externally. If DeliverToMailboxAndForward is set to False, the user is not even receiving copies of their own incoming mail; it is going exclusively to the attacker’s address.
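While you are connected to Exchange Online, it can be worth sweeping the whole tenant rather than just the one mailbox; attackers who phished one user often phished several. A sketch; on large tenants this can take a while to run:

```powershell
# List every mailbox with any kind of mailbox-level forwarding configured
Get-Mailbox -ResultSize Unlimited |
    Where-Object { $_.ForwardingSmtpAddress -or $_.ForwardingAddress } |
    Select-Object DisplayName, PrimarySmtpAddress,
                  ForwardingAddress, ForwardingSmtpAddress, DeliverToMailboxAndForward
```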
One more thing to check: Mailbox Delegation
Forwarding rules and mailbox-level forwarding get most of the attention, but there is a third way attackers silently maintain access to a mailbox that almost nobody checks during an initial investigation: delegate permissions. Mailbox delegation allows one account to access another account’s mailbox directly, reading emails, sending messages on behalf of the owner, or both. It is a legitimate feature used by executive assistants and shared mailbox setups. It is also something an attacker with temporary access to an account can configure in seconds, and it will survive a password reset completely intact.
The dangerous part is that the legitimate user has no obvious indication this happened. There is no notification, no visible setting in Outlook, and no inbox rule to stumble across. The attacker’s account simply has quiet, persistent read access to everything that arrives. To check whether any unexpected accounts have been granted delegate access, run the following in Exchange Online PowerShell:
Get-MailboxPermission -Identity "user@yourdomain.com" | Where-Object {$_.AccessRights -eq "FullAccess" -and $_.IsInherited -eq $false}
Any result that is not inherited and does not belong to a legitimate admin or assistant should be removed immediately. Also check Send As permissions separately:
Get-RecipientPermission -Identity "user@yourdomain.com" | Where-Object {$_.AccessRights -eq "SendAs" -and $_.IsInherited -eq $false}
A Send As permission means someone can send emails that appear to come directly from this person, no forwarding rule needed, no trace in the compromised account’s Sent Items folder. For business email compromise, this is particularly valuable to an attacker because the emails look completely authentic to the recipient. If you find anything unexpected in either of these outputs, remove it immediately and note the account name it was granted to, that account may itself be compromised or attacker-controlled.
Where to go: Microsoft Purview compliance portal (compliance.microsoft.com) → Solutions → Audit → New Search
The Unified Audit Log records activity across Microsoft 365 services such as Exchange, SharePoint, OneDrive, and Teams. Authentication events and detailed login information are found in Entra ID sign-in logs. Set your date range to cover the last 30 days (or more if you suspect a longer intrusion). Enter the user’s email address. Leave the activities filter broad for the initial pass, you want to see everything.
Key things to search for:
“New-InboxRule”: when was the suspicious inbox rule created, and does that timestamp match a suspicious login from step one? If they line up, you have connected the dots.
“Set-Mailbox”: This records when mailbox-level settings were changed, including when external forwarding was switched on.
“Add member to role”: if the attacker had enough access and time, they may have tried to elevate the compromised account’s privileges by assigning it an admin role.
“Consent to application”: this is the OAuth footprint. If this event appears in the audit log around the same time as suspicious logins, an attacker almost certainly authorized a malicious application using this account.
“FileDownloaded” or “FileSyncDownloadedFull”: bulk file downloads from SharePoint or OneDrive are a strong indicator of data exfiltration. A user downloading 400 documents in 10 minutes is not normal behavior.
The audit log can be used to find the IP address of the computer used to access a compromised account, determine who set up email forwarding for a mailbox, and determine if a user deleted email items in their mailbox.
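The same search can be run from Exchange Online PowerShell with Search-UnifiedAuditLog, which is handy when you want to export results or correlate timestamps in a script. A sketch; the file name is a placeholder, and you can drop the -Operations filter for the broad first pass described above:

```powershell
# Last 30 days of high-signal events for the suspected user
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) `
    -UserIds "user@yourdomain.com" `
    -Operations New-InboxRule,Set-Mailbox `
    -ResultSize 5000 |
    Select-Object CreationDate, Operations, UserIds, AuditData |
    Export-Csv .\audit-triage.csv -NoTypeInformation
```

The AuditData column is JSON containing the client IP address and the rule or forwarding parameters, which is where you connect an audit event back to a suspicious sign-in from step one.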
Where to go: Microsoft Entra admin center → Users → All users → Select the user → Authentication methods
Go through every single registered authentication method on this account. Ask yourself for each one: did this person add this? Is this their phone number? Is this their authenticator app? Anything you cannot account for was registered by someone else. Also check the Devices registered to this account. An unfamiliar device, particularly one enrolled recently and running an operating system inconsistent with what that user normally uses, suggests the attacker registered their own machine, potentially to satisfy device compliance requirements in your Conditional Access policies.
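The same review can be scripted via Microsoft Graph PowerShell, which is useful when you need to check several accounts quickly. A sketch, assuming the UserAuthenticationMethod.Read.All permission:

```powershell
Connect-MgGraph -Scopes "UserAuthenticationMethod.Read.All"

# List every authentication method registered on the account, with its type
Get-MgUserAuthenticationMethod -UserId "user@yourdomain.com" |
    Select-Object Id, @{n='Type';e={$_.AdditionalProperties.'@odata.type'}}
```

Anything of a type or with details the user cannot account for was registered by someone else.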
Where to go: Microsoft Entra admin center → Applications → Enterprise applications
Filter by “Users and groups” or look for recently added applications. You are looking for anything that was not deliberately installed by your IT team, particularly applications with broad permission scopes such as:
– Mail.Read or Mail.ReadWrite
– Files.ReadWrite.All
– Mail.Send
– Directory.ReadWrite.All
Not all OAuth applications carry the same level of risk, and knowing the difference helps you prioritize what to investigate first. When a regular user authorizes an application, that consent applies only to their own data: the app can access their mailbox or their files, but nobody else’s. When an administrator grants consent on behalf of the entire organization, however, that single approval gives the application access to every user’s data across the whole tenant. This is called admin consent, and it is significantly more dangerous in the wrong hands.
When you are reviewing Enterprise Applications under a suspected compromise, sort by consent type and look at admin-consented applications first. A malicious app with admin consent is not just a problem for one account; it is a problem for every account in your organization simultaneously. Any admin-consented application you cannot directly account for should be treated as the highest priority item in your entire investigation, ahead of everything else on this list.
To quickly identify which applications have been granted admin consent across your tenant, go to Microsoft Entra admin center → Applications → Enterprise applications → filter by “Admin consent” in the permissions column, or run the following:
Get-MgOauth2PermissionGrant -All | Where-Object {$_.ConsentType -eq "AllPrincipals"} | Select-Object ClientId, Scope
Each result with ConsentType set to AllPrincipals is a delegated permission an admin consented to on behalf of the entire tenant. Resolve the ClientId with Get-MgServicePrincipal -ServicePrincipalId to see which application it belongs to.
For a more detailed view of exactly what permissions each application holds, the Microsoft Entra admin center provides a cleaner picture than PowerShell for most investigations: navigate to the specific application → Permissions, and compare the Admin consent tab with the User consent tab side by side.
Any application with these permissions that you cannot explain the origin of should be treated as malicious until proven otherwise. Check when it was authorized, which account authorized it, and whether that timestamp aligns with suspicious sign-in activity from step one.
Detection without response is just watching the damage happen. The moment you confirm a breach, move through these steps without stopping.
Step 1: Block the account
Microsoft 365 admin center → Active users → Select user → Block sign-in
This prevents any new authentication using these credentials. It does not disconnect active sessions (that is the next step), but it closes the front door immediately. Keep in mind that blocking sign-in in the M365 admin center sets the account’s AccountEnabled property to false in Entra ID, and that can have side effects if you use, for example, dynamic security groups that filter on the AccountEnabled property. Just keep that in mind!
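Blocking can also be scripted via Microsoft Graph PowerShell, which is worth having ready before a crisis. A sketch, assuming the User.ReadWrite.All permission; note this disables the account by setting AccountEnabled to false:

```powershell
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Block sign-in by disabling the account
Update-MgUser -UserId "user@yourdomain.com" -AccountEnabled:$false
```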
Step 2: Revoke all active sessions
Microsoft Entra admin center → Users → Select user → Revoke sessions
This invalidates every active refresh token the attacker is holding. Any session they currently have open will require re-authentication, which they cannot complete because sign-in is now blocked. One important caveat: access tokens that were already issued can remain valid for up to approximately one hour before they expire naturally. You have a short window where residual access is still technically possible, which is why the next steps need to happen quickly.
Note: Existing access tokens may remain active until they naturally expire, typically up to one hour, although exact lifetimes depend on policy and service. Administrators can shorten this window with Continuous Access Evaluation, which can revoke tokens in near real time when an account is disabled, but this is advanced and not always configured by default.
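Session revocation can likewise be scripted, which is useful if you are responding to several compromised accounts at once. A sketch via Microsoft Graph PowerShell:

```powershell
# Invalidate every refresh token and session cookie issued to this user
Revoke-MgUserSignInSession -UserId "user@yourdomain.com"
```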
Step 3: Reset the password
Use a strong, randomly generated password the account has never used before. Do not send it via email; if the attacker is still reading the inbox during that one-hour residual window, they will intercept it. Deliver it out of band instead, for example via a password manager such as 1Password, or by phone.
Step 4: Delete the malicious inbox rules and remove forwarding
Using the PowerShell commands from earlier, delete every suspicious rule:
Remove-InboxRule -Mailbox "user@yourdomain.com" -Identity "RuleName"
Keep in mind that it is smart to always export the existing rules first, for example with Get-InboxRule -Mailbox "user@yourdomain.com" -IncludeHidden | Export-Csv .\rules-backup.csv -NoTypeInformation, before deleting anything.
And clear external forwarding:
Set-Mailbox -Identity "user@yourdomain.com" -ForwardingSmtpAddress $null -DeliverToMailboxAndForward $false
Step 5: Remove suspicious MFA methods and devices
Once you’ve blocked the account and revoked sessions, the next step is to make sure the attacker can’t simply log back in using their own authentication methods. Go into the Microsoft Entra admin center, navigate to the affected user, and check their Authentication methods. Look at every phone number, authenticator app, or security key that’s registered. Ask yourself: did the user personally add this? Anything unfamiliar is a red flag. Also check if a Temporary Access Pass was created!
Remove any methods that don’t belong to the legitimate user, and then force them to re-register MFA from scratch. This ensures they’re starting clean with verified methods only.
Next, check registered devices. Attackers sometimes enroll their own machines to satisfy Conditional Access policies or maintain persistent access. Any device that looks unfamiliar, especially a recently added one or a device running an operating system the user doesn’t normally use, should be removed immediately. This step ensures that the attacker’s devices are cut off entirely.
Cleaning up MFA and devices might feel tedious, but it’s one of the most important ways to lock a compromised account down quickly. Without this, the attacker could bypass your password reset and regain access in minutes.
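A hedged sketch of the method cleanup via Graph PowerShell. The method IDs come from the listing cmdlets below (the "&lt;method-id&gt;" placeholders are yours to fill in), and you should only remove methods you have confirmed do not belong to the user:

```powershell
$user = "user@yourdomain.com"

# Review registered phone numbers and authenticator app registrations
Get-MgUserAuthenticationPhoneMethod -UserId $user
Get-MgUserAuthenticationMicrosoftAuthenticatorMethod -UserId $user

# Remove a specific rogue phone method by its Id
Remove-MgUserAuthenticationPhoneMethod -UserId $user `
    -PhoneAuthenticationMethodId "<method-id>"

# Remove a specific rogue authenticator app registration by its Id
Remove-MgUserAuthenticationMicrosoftAuthenticatorMethod -UserId $user `
    -MicrosoftAuthenticatorAuthenticationMethodId "<method-id>"
```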
Step 6: Revoke suspicious OAuth app access
Even after resetting passwords and cleaning up MFA, there’s one more trap many organizations overlook: malicious OAuth applications. These are third-party apps a user might have unwittingly authorized to access their Microsoft 365 data. Once an attacker gets a token through one of these apps, they can continue reading emails, sending messages, or accessing files, all without touching the password.
To check for this, go to Enterprise Applications in Entra. Filter for recently added apps or those with broad permissions, such as Mail.ReadWrite, Mail.Send, Files.ReadWrite.All, or Directory.ReadWrite.All. For each app, ask: did IT authorize this, or did the user install it knowingly? Anything suspicious should be removed immediately.
Don’t forget to check whether other users in the organization authorized the same app. Attackers often capture multiple accounts in a single phishing campaign.
To prevent this from happening again, consider setting up alerts in Microsoft Purview or Microsoft Sentinel that notify you whenever a user grants an application access. For most organizations, OAuth consent should be a rare event, if an alert fires, treat it as urgent.
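To cut off a malicious app tenant-wide rather than clicking through each user's grants, Graph PowerShell can disable the service principal and delete its permission grants. A sketch; the display name is a placeholder for the app you identified:

```powershell
Connect-MgGraph -Scopes "Application.ReadWrite.All","Directory.ReadWrite.All"

# Find the suspicious app's service principal
$sp = Get-MgServicePrincipal -Filter "displayName eq 'Suspicious App Name'"

# Disable it so it can no longer be issued tokens
Update-MgServicePrincipal -ServicePrincipalId $sp.Id -AccountEnabled:$false

# Delete every delegated permission grant it holds (user- and admin-consented)
Get-MgOauth2PermissionGrant -Filter "clientId eq '$($sp.Id)'" |
    ForEach-Object { Remove-MgOauth2PermissionGrant -OAuth2PermissionGrantId $_.Id }
```

Disabling rather than deleting the service principal preserves evidence for the investigation while still blocking access.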
Step 7: Tell the user, and tell your team
The compromised account may still have sent emails that appear to come from a trusted colleague. People in your organization may have already replied to requests from the attacker, shared documents, or taken actions they should not have. Get the word out quickly, and make clear that anything sent from that account in the relevant time window should be treated with suspicion until verified.
Running through the checklist above will catch the majority of compromised accounts. But the attacks making headlines right now are designed specifically to pass every check on that list.
The most notable example: a token theft campaign first identified in December 2025 and documented by KnowBe4 researchers does not steal passwords or bypass MFA through fatigue attacks. Instead, it tricks the user into completing a completely legitimate Microsoft authentication, on the real Microsoft domain, with their real MFA method, and intercepts the OAuth token that Microsoft issues after successful login. The attacker never needs the user’s credentials. They receive a valid, authenticated token that grants full access to the account. Your sign-in logs show a successful MFA-completed login. Everything looks normal. Nothing is.
This is why having the right alerts configured before a breach happens matters as much as knowing how to investigate one after.
Set up Risky User alerts. Microsoft Entra ID Protection, available with a P2 license, scores every sign-in for suspicious characteristics and flags accounts it considers at risk. By default, those flags sit in a dashboard that nobody watches. Configure email alerts for medium and high risk users so that the moment Microsoft’s systems detect something wrong, your team knows about it within minutes rather than days.
Restrict device code flow. Device code authentication exists for legitimate purposes, allowing devices without browsers, like printers or shared screens, to authenticate with Microsoft 365. It is also the exact mechanism that token theft campaigns exploit. If your organization does not use shared devices that require this flow, disable it through a Conditional Access policy. Note that some shared devices like conference room systems or lab machines may require exceptions.
Alert on OAuth consent events. Create an alert in Microsoft Purview or Microsoft Sentinel that fires every time any user in your organization grants permissions to an application. A typical user should almost never do this. When the alert fires, treat it as a priority investigation until you can confirm the application is legitimate.
The 10-minute window in this guide’s title is a target, not a guarantee. The first time you run through this process it will probably take longer, because you are learning where things live and what normal looks like. That is exactly why you should practice it before you need it. Run through this checklist on a known-good account. Understand what your sign-in logs look like when nothing is wrong. Know what a clean inbox rule list looks like. Learn where the enterprise applications page is before 9am on a crisis morning. Because when the moment comes, and for most organizations, it eventually does, you will not have time to figure out the basics. You will only have time to act.