UtopianKnight Consultancy – James Griffiths

STRATEGIC | TECHNICAL | ADVISORY | AI | DEVELOPMENT | vCTO | CYBER | ICS & OT


Using Microsoft Purview and DLP to See What’s Happening with AI Agents in Your Organisation

As artificial intelligence tools continue to permeate the modern workplace, their use introduces a unique set of data protection, compliance, and security concerns. While these tools, ranging from large language models to autonomous agents, offer productivity gains, they can also act as unregulated data gateways if not properly governed.

In this post, we’ll explore how Microsoft Purview and Data Loss Prevention (DLP) policies can help organisations gain visibility into AI agent usage and establish guardrails to protect sensitive information. This is especially critical for organisations that are concerned about data exfiltration, intellectual property loss, and compliance breaches stemming from uncontrolled AI interactions.


Why Monitor AI Agent Activity?

AI agents and tools such as ChatGPT, Copilot, Bard, and Claude are becoming mainstream in business environments. Employees might be using them to:

  • Draft reports or code
  • Summarise internal documents
  • Query data
  • Create client-facing material

However, these use cases carry significant risks, including:

  • Uploading sensitive content (e.g. client data, source code, confidential strategy)
  • Unintentional data sharing with external services
  • Lack of audit trails or accountability for how data was handled
  • Shadow AI tools used without IT or security team awareness

This is where Microsoft Purview and DLP become essential.


What is Microsoft Purview?

Microsoft Purview is a comprehensive data governance and compliance platform that helps organisations manage and protect their data estate across cloud, on-premises, and hybrid environments.

Key components relevant to AI oversight include:

  • Information Protection – classification and labelling of data
  • Data Loss Prevention (DLP) – policies to prevent sensitive data movement
  • Audit and Insider Risk Management – track and flag suspicious activities
  • Communication Compliance – analyse messages for regulatory compliance
  • Microsoft Defender for Cloud Apps (formerly Cloud App Security) – discover unsanctioned applications, including AI tools

When configured correctly, these services can work together to give your organisation visibility and control over how AI tools interact with sensitive data.


Step 1: Discover AI Tools with Microsoft Defender for Cloud Apps

Before implementing controls, you need visibility. Many AI tools operate in the browser and don’t require software installation. This means traditional app whitelisting or endpoint management won’t detect them.

Microsoft Defender for Cloud Apps (MDCA) provides:

  • Cloud Discovery via network traffic logs or endpoint agents
  • App governance with risk scoring
  • Shadow IT detection including tools like ChatGPT, Claude, and Jasper

Example: If an employee uploads files to ChatGPT via the web interface, MDCA can flag the behaviour, recording the domain used and the file size, and indicate whether the activity aligns with approved data handling policies.

Once you identify high-risk AI applications, you can choose to sanction or block them, and monitor for continued use.
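
Even before Cloud Discovery is fully wired up (or as a quick sanity check alongside it), you can triage your own proxy or firewall log exports for known generative-AI domains. The Python sketch below is illustrative only: the domain list is partial, and the log format (a CSV with user and destination_host columns) is an assumption you would adjust to your own export.

  import csv
  from collections import Counter

  # Known generative-AI domains to look for -- a partial, illustrative list.
  AI_DOMAINS = {
      "chat.openai.com", "chatgpt.com",   # ChatGPT
      "claude.ai",                        # Claude
      "gemini.google.com",                # Gemini (formerly Bard)
      "www.jasper.ai",                    # Jasper
  }

  def summarise_ai_traffic(log_path: str) -> Counter:
      """Count requests per (user, AI domain) pair from a proxy log export."""
      hits = Counter()
      with open(log_path, newline="") as f:
          for row in csv.DictReader(f):
              host = row["destination_host"].lower()
              if host in AI_DOMAINS:
                  hits[(row["user"], host)] += 1
      return hits

  # Print the ten noisiest user/domain pairs for follow-up.
  for (user, host), count in summarise_ai_traffic("proxy_log.csv").most_common(10):
      print(f"{user} -> {host}: {count} requests")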


Step 2: Classify and Label Sensitive Data with Microsoft Purview Information Protection

AI tools are most dangerous when used with sensitive data. To mitigate risk, implement a data classification and labelling strategy using Microsoft Purview Information Protection.

Capabilities include:

  • Auto-labelling based on content inspection (e.g., credit card numbers, source code, legal contracts)
  • Manual labelling for user-driven classification
  • Sensitivity labels that apply protection actions (encryption, watermarking, restrictions)

Example: A document containing customer PII is automatically labelled “Confidential – Customer Data”. If a user tries to upload it to a web-based AI tool, DLP rules can trigger a block or alert.

Labelling becomes the foundation for many compliance and protection policies downstream.
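
To make the auto-labelling idea concrete, here is a minimal sketch of the kind of content inspection that drives it: find candidate card numbers by pattern, then validate them with the Luhn checksum, much as Purview's built-in sensitive information types pair a pattern with a checksum and confidence level. Purview performs this server-side; the code is purely illustrative.

  import re

  # Candidate card numbers: 14-19 digits, optionally separated by spaces/hyphens.
  CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,18}\d\b")

  def luhn_valid(number: str) -> bool:
      """Validate a candidate number with the Luhn checksum."""
      digits = [int(d) for d in number if d.isdigit()]
      checksum = 0
      for i, d in enumerate(reversed(digits)):
          if i % 2 == 1:       # double every second digit from the right
              d *= 2
              if d > 9:
                  d -= 9
          checksum += d
      return checksum % 10 == 0

  def contains_card_number(text: str) -> bool:
      return any(luhn_valid(m.group()) for m in CARD_PATTERN.finditer(text))

  print(contains_card_number("Invoice ref 4111 1111 1111 1111"))  # True (a test number)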


Step 3: Monitor and Control Data Flows with Data Loss Prevention (DLP)

With labelling in place, Microsoft Purview DLP policies can prevent sensitive data from leaving your environment via AI agents.

DLP can be applied across:

  • Microsoft 365 Apps (Outlook, Teams, SharePoint, OneDrive)
  • Windows 10/11 endpoints
  • Microsoft Edge and Chrome browsers
  • Power Platform (e.g., Power Automate)
  • Non-Microsoft SaaS apps via MDCA

DLP policies allow you to:

  • Detect when sensitive data is copied to the clipboard, pasted into browser forms, or uploaded
  • Block risky activities in real time
  • Educate users via policy tips and warnings
  • Log and alert security teams of violations

Example: A user attempts to paste internal financial forecasts into an AI chatbot. Purview DLP identifies the sensitivity label, matches it to a rule, and prevents the paste action with a warning: “Sharing of confidential finance data is not permitted.”

This ensures that controls are still enforced even when users turn to unmanaged AI tools.
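
Conceptually, a DLP rule pairs a condition (what the data is, where it is going) with an action. The sketch below models that evaluation in miniature; the labels, domains, and actions are hypothetical examples, not Purview's actual rule schema (real policies are built in the Purview portal or via Security & Compliance PowerShell).

  from dataclasses import dataclass
  from enum import Enum

  class Action(Enum):
      ALLOW = "allow"
      WARN = "warn"     # show a policy tip, let the user proceed
      BLOCK = "block"   # prevent the upload or paste outright

  @dataclass
  class Rule:
      label: str          # sensitivity label the rule applies to
      destination: str    # domain suffix the rule matches
      action: Action

  # Hypothetical rules for the sake of the example.
  RULES = [
      Rule("Confidential - Finance", "chatgpt.com", Action.BLOCK),
      Rule("Confidential - Customer Data", "claude.ai", Action.BLOCK),
      Rule("Internal", "chatgpt.com", Action.WARN),
  ]

  def evaluate(label: str, destination: str) -> Action:
      """Return the action for the first rule matching this label and destination."""
      for rule in RULES:
          if rule.label == label and destination.endswith(rule.destination):
              return rule.action
      return Action.ALLOW

  print(evaluate("Confidential - Finance", "chatgpt.com"))  # Action.BLOCK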


Step 4: Enable Audit Trails and Alerts

Visibility is not just about prevention; it’s also about accountability. Microsoft Purview provides extensive audit logging to monitor:

  • Which users accessed what data
  • When and where they interacted with AI platforms
  • What actions triggered DLP or access controls
  • Policy overrides or attempted bypasses

For higher-risk scenarios, you can configure alerts and escalation workflows, ensuring rapid incident response.

Use Case: A user consistently attempts to upload labelled documents to ChatGPT despite warnings. An alert can be triggered in Microsoft Defender, creating an incident for investigation or triggering insider risk management reviews.
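
If you export audit data for offline review, a few lines of scripting can surface repeat offenders. The sketch below assumes a JSON-lines export and uses the DlpRuleMatch operation name from the unified audit log schema; verify both assumptions against your own export format.

  import json
  from collections import Counter

  def dlp_matches_by_user(export_path: str) -> Counter:
      """Tally DLP rule matches per user from a JSON-lines audit export."""
      hits = Counter()
      with open(export_path) as f:
          for line in f:
              record = json.loads(line)
              # 'DlpRuleMatch' is the operation name used for DLP events in the
              # unified audit log; confirm it matches your export.
              if record.get("Operation") == "DlpRuleMatch":
                  hits[record.get("UserId", "unknown")] += 1
      return hits

  # Users with the most DLP rule matches are candidates for follow-up.
  for user, count in dlp_matches_by_user("audit_export.jsonl").most_common(5):
      print(f"{user}: {count} DLP rule matches")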


Step 5: Use Insider Risk Management for AI-Related Behaviour

Microsoft Purview Insider Risk Management (IRM) allows you to detect patterns of risky behaviour before they become incidents.

IRM leverages signals such as:

  • Unusual access or sharing activity
  • Repeated DLP violations
  • Use of unsanctioned AI apps
  • Departing employees exporting data

Custom policies can be configured for AI-specific concerns, such as:

  • Users copying source code and uploading to web AI tools
  • Employees using AI tools to rewrite sensitive documents
  • Excessive document access combined with file uploads

Example: A departing developer accesses hundreds of files over two days and uploads several to an unsanctioned AI writing assistant. IRM flags this for review, helping to mitigate IP theft.
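
Under the hood, this is a correlation problem: individually benign signals become risky in combination. The sketch below illustrates that pattern with a toy risk score; the thresholds, weights, and event shape are all assumptions made for the example, and real IRM policies are configured in Purview rather than coded.

  from dataclasses import dataclass

  @dataclass
  class UserActivity:
      user: str
      files_accessed_48h: int
      ai_uploads_48h: int
      is_departing: bool

  def risk_score(a: UserActivity) -> int:
      score = 0
      if a.files_accessed_48h > 200:            # bulk file access
          score += 30
      score += min(a.ai_uploads_48h * 10, 40)   # uploads to unsanctioned AI tools
      if a.is_departing:                        # departure amplifies other signals
          score *= 2
      return score

  dev = UserActivity("j.smith", files_accessed_48h=340, ai_uploads_48h=5, is_departing=True)
  print(risk_score(dev))  # 140 -- well above a hypothetical review threshold of 80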


Step 6: Educate and Empower Users

Technology alone isn’t enough. Microsoft Purview also provides tools to educate users in the moment through policy tips, emails, and justifications.

Best practices include:

  • Explain why AI-related DLP policies exist
  • Provide sanctioned alternatives (e.g. Microsoft Copilot or in-house AI solutions)
  • Train users on appropriate use of generative AI
  • Review usage trends to adjust policies without being overly restrictive

Education paired with policy creates a culture of shared responsibility around data protection.


Key Benefits of Using Microsoft Purview for AI Oversight

  • Visibility – Discover which AI tools are in use and how data is flowing across platforms
  • Prevention – Block sensitive data from being uploaded to unmanaged AI platforms
  • Compliance – Support data protection regulations such as UK GDPR, ISO 27001, and HIPAA
  • Risk Reduction – Detect insider threats, IP leakage, or shadow AI usage early
  • Automation – Apply labels, enforce policies, and escalate incidents without manual intervention
  • Flexibility – Tailor DLP and governance to your organisation’s unique risk profile and business needs

Challenges and Considerations

While Microsoft Purview and DLP provide a strong governance foundation, organisations should be aware of some limitations and challenges:

  • False positives: DLP policies can be overly aggressive if not finely tuned, interrupting legitimate workflows.
  • User pushback: Blocking access to popular AI tools without alternatives can cause friction.
  • Policy tuning overhead: Continuous adjustment is needed as tools and data evolve.
  • Shadow AI: Employees may find new tools faster than policies can adapt.
  • Privacy vs. surveillance: A balance must be struck between monitoring activity and respecting employee privacy.

Conclusion

AI agents are becoming embedded in modern digital workflows, bringing both value and risk. Without oversight, they can become unmonitored vectors for data loss or compliance breaches.

By leveraging Microsoft Purview’s classification, labelling, auditing, and Data Loss Prevention policies, organisations can create a robust framework for identifying, monitoring, and managing how AI tools are used without stifling innovation.

The journey starts with visibility and ends with informed, automated governance that empowers employees to work responsibly with cutting-edge tools.

