UtopianKnight Consultancy – James Griffiths

STRATEGIC | TECHNICAL | ADVISORY | AI | DEVELOPMENT | vCTO | CYBER | ICS & OT


AI on Mobile Devices, Laptops, and Browsers – Detecting, Monitoring, and Reducing Risk for Your Business

Artificial Intelligence (AI) is no longer a niche technology reserved for tech giants or research labs. From AI-driven productivity tools on mobile devices to browser-based assistants embedded in everyday workflows, AI is everywhere, whether your business has officially adopted it or not.

Employees are using AI to draft emails, generate reports, analyse data, create marketing content, translate documents, summarise meetings, write code, and even produce images and videos. This can be a game-changer for productivity, but it also introduces new risks, compliance challenges, and visibility gaps for IT and security teams.

The problem?

Many organisations don’t know the full extent of AI use in their environment: which tools are in play, which devices they run on, and whether that usage complies with policy or regulation.

In this article, we’ll explore:

  1. Why AI usage has exploded on mobile devices, laptops, and browsers.
  2. The risks of shadow AI and unmonitored adoption.
  3. How to detect and monitor AI usage in real time.
  4. Practical steps to manage and reduce AI-related risks while enabling safe adoption.
  5. Tools, technologies, and policy frameworks for businesses of all sizes.

1. The AI Explosion – And Why It’s Everywhere

The past 24 months have seen an unprecedented boom in AI adoption. Models like ChatGPT, Claude, Gemini, and Copilot have shifted from being accessed through a handful of websites to deep integration into the tools people use every day.

Here’s why AI usage is now so hard to track:

  • Built into everyday applications – AI assistants are embedded in Microsoft 365, Google Workspace, Adobe Creative Cloud, and even CRM systems.
  • Browser extensions and plugins – Chrome, Edge, and Firefox have a growing library of AI-powered extensions, many with no central management.
  • Mobile app integration – iOS and Android have thousands of AI-powered apps, often requiring only minimal permissions to access sensitive data.
  • Generative AI APIs in custom tools – Low-code and no-code platforms let employees build their own AI workflows without IT involvement.

The net effect?

AI is now ubiquitous and invisible: it runs in the background of tools your employees already use, which means shadow adoption is not just possible, it’s inevitable.


2. The Hidden Risks of Shadow AI

“Shadow AI” refers to employees using AI tools without IT or compliance oversight. While intentions are often good (faster work, better output), the risks are significant.

2.1 Data Leakage

  • Employees may input sensitive company data into AI systems without realising it may be stored or used to train models.
  • Confidential customer information, intellectual property, or strategic plans could end up outside your control.

2.2 Regulatory Non-Compliance

  • The UK and EU GDPR impose strict rules on personal data processing.
  • AI tools may process data in non-approved jurisdictions.
  • Businesses may fail to meet audit and accountability requirements if usage is undocumented.

2.3 Security Risks

  • AI browser extensions could introduce malicious code or data exfiltration capabilities.
  • AI mobile apps may have over-permissive access to device storage, camera, microphone, or GPS.

2.4 Reputational Damage

  • AI-generated output may contain bias, inaccuracies, or offensive material, which could harm brand trust.
  • If such output leaks or is published, it may appear that your organisation endorses it.

2.5 Compliance With Industry Standards

  • Sectors like finance, healthcare, and defence have industry-specific AI guidance emerging rapidly.
  • Using AI without documented risk assessment could breach ISO 27001, PCI DSS, HIPAA, or FCA guidelines.

3. Detecting and Monitoring AI Usage – The Business Imperative

To manage AI risk, you first need visibility. That means detecting where and how AI is being used across devices, browsers, and applications.

3.1 Network and Endpoint Visibility

  • Endpoint Detection and Response (EDR) tools (e.g., Microsoft Defender for Endpoint, CrowdStrike, SentinelOne) can monitor for AI application executables, browser extension installs, and suspicious API calls.
  • Network monitoring can detect traffic to known AI domains (e.g., openai.com, anthropic.com, gemini.google.com).
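As a rough illustration of domain-based detection, the sketch below matches hostnames from a proxy or DNS log export against a small list of known AI service domains. The domain list and the `user,host` line format are assumptions for illustration; substitute your gateway’s real log schema and a maintained category feed.

```python
# Sketch: flag requests to known AI domains in a proxy/DNS log export.
# The domain list and the "user,host" line format are illustrative assumptions.
AI_DOMAINS = {
    "openai.com", "chatgpt.com", "anthropic.com",
    "gemini.google.com", "perplexity.ai",
}

def is_ai_domain(host):
    """True if host equals, or is a subdomain of, a known AI domain."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in AI_DOMAINS)

def flag_ai_requests(log_lines):
    """Yield (user, host) pairs for log lines that hit an AI service."""
    for line in log_lines:
        user, _, host = line.strip().partition(",")
        if is_ai_domain(host):
            yield user, host
```

Matching on the registrable domain (rather than exact hostnames) catches API subdomains such as `api.openai.com` without maintaining a longer list.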

3.2 Cloud App Discovery

  • Microsoft Defender for Cloud Apps or Cisco Umbrella can identify SaaS applications used in your environment, even if unsanctioned.
  • Create AI category filters to flag services like ChatGPT, Midjourney, Jasper, or Perplexity.

3.3 Browser Extension Auditing

  • Use Group Policy Objects (GPO) or MDM profiles to inventory and restrict AI-related browser extensions.
  • Tools like ManageEngine Browser Security Plus or Chrome Enterprise give centralised control.
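To show what an extension audit can look like at the file level, here is a sketch that scans a Chrome-style Extensions directory for `manifest.json` files and flags AI-related names that request risky permissions. The keyword and permission lists are crude illustrative heuristics, not a vetted policy.

```python
import json
from pathlib import Path

# Illustrative heuristics only -- keyword matching is crude and will miss
# (and occasionally mis-flag) extensions; treat hits as leads, not verdicts.
AI_KEYWORDS = {"gpt", "copilot", "chatbot", "assistant"}
RISKY_PERMS = {"<all_urls>", "tabs", "clipboardRead", "webRequest"}

def audit_manifest(manifest):
    """Return a finding dict for an AI-related manifest, else None."""
    name = manifest.get("name", "").lower()
    if not any(k in name for k in AI_KEYWORDS):
        return None
    perms = set(manifest.get("permissions", []))
    return {"name": manifest.get("name"),
            "risky_permissions": sorted(perms & RISKY_PERMS)}

def audit_profile(extensions_dir):
    """Scan <profile>/Extensions/<id>/<version>/manifest.json files."""
    findings = []
    for mf in Path(extensions_dir).glob("*/*/manifest.json"):
        finding = audit_manifest(json.loads(mf.read_text(encoding="utf-8")))
        if finding:
            findings.append(finding)
    return findings
```

In a managed estate, the same inventory is better sourced from Chrome Enterprise reporting; a filesystem scan like this is most useful for spot checks on unmanaged machines.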

3.4 Mobile Device Management (MDM)

  • MDM platforms like Intune, Jamf, or VMware Workspace ONE can:
    • List all installed AI apps.
    • Block unapproved AI apps.
    • Enforce data-sharing restrictions.
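MDM platforms expose app inventories via their consoles or APIs; the sketch below shows the reconciliation step in isolation, comparing an exported list of bundle identifiers against blocked and approved sets. All identifiers here are hypothetical placeholders.

```python
# Sketch: reconcile an MDM app-inventory export against an AI app policy.
# All bundle identifiers below are hypothetical placeholders.
BLOCKED = {"com.example.shadow-ai-chat", "com.example.imagegen"}
APPROVED = {"com.example.sanctioned-assistant"}

def classify_apps(installed):
    """Bucket installed bundle IDs as blocked, approved, or unreviewed."""
    report = {"blocked": [], "approved": [], "unreviewed": []}
    for bundle_id in installed:
        if bundle_id in BLOCKED:
            report["blocked"].append(bundle_id)
        elif bundle_id in APPROVED:
            report["approved"].append(bundle_id)
        else:
            report["unreviewed"].append(bundle_id)
    return report
```

The "unreviewed" bucket matters most: anything not explicitly approved or blocked is a candidate for the next risk assessment cycle.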

3.5 API and Application Logging

  • Implement application logging for internal tools to track AI API calls.
  • Use API gateways to enforce policies on generative AI API usage.

3.6 User Behaviour Analytics

  • AI usage often involves large copy-paste actions, unusual text generation, or access to specific web resources.
  • UBA tools can detect patterns consistent with AI prompting or misuse.
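As a concrete (and deliberately simple) behavioural signal, the sketch below flags paste events far above a user’s own typical size, one weak indicator consistent with bulk-copying data into an AI tool. The threshold, multiplier, and event schema are assumptions for illustration.

```python
from collections import defaultdict
from statistics import median

def flag_anomalous_pastes(events, multiplier=5, floor=2000):
    """Flag paste events far above a user's own typical size.

    events: iterable of (user, chars_pasted) tuples. An event is flagged
    when it exceeds both an absolute floor and multiplier x that user's
    median paste size.
    """
    by_user = defaultdict(list)
    for user, n in events:
        by_user[user].append(n)
    flagged = []
    for user, sizes in by_user.items():
        m = median(sizes)
        flagged += [(user, n) for n in sizes if n >= floor and n > multiplier * m]
    return flagged
```

Baselining per user (rather than one global threshold) avoids flagging roles where large pastes are routine.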

4. Building an AI Usage Policy

Once detection is in place, you need a clear AI policy to define what’s allowed, what’s restricted, and what’s outright banned.

4.1 Define Acceptable Use

  • Clearly state which AI tools are permitted and for what purposes.
  • Require employees to label AI-generated content.

4.2 Data Handling Rules

  • Never input personal data without anonymisation.
  • Restrict client data from entering AI systems unless contractually allowed.

4.3 Risk Classification

  • Define AI usage levels:
    • Low-risk: Public, non-sensitive research.
    • Medium-risk: Internal operational content.
    • High-risk: Customer or financial data (require explicit approval).
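The tiering above can be encoded directly in tooling so that enforcement matches the written policy. A minimal sketch, assuming data categories come from an upstream tagging step (the category names are placeholders):

```python
# Sketch of the three-tier classification above. The category names are
# placeholders; a real classifier would hook into your DLP tagging.
RISK_LEVELS = {
    "public": ("low", "allowed"),
    "internal": ("medium", "allowed with approved tools"),
    "customer": ("high", "explicit approval required"),
    "financial": ("high", "explicit approval required"),
}

def classify_request(data_category):
    """Map a data category to (risk level, required action)."""
    return RISK_LEVELS.get(data_category, ("high", "default-deny: unknown category"))
```

Note the default-deny: an unrecognised category is treated as high-risk rather than silently allowed.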

4.4 Employee Training

  • Educate on AI risks and capabilities.
  • Train employees to spot AI hallucinations and bias.

4.5 Regular Review

  • AI tools evolve quickly; policies should be reviewed at least quarterly.

5. Tools and Technologies for AI Monitoring

Here’s a breakdown of technologies that businesses of all sizes can use:

Category            | Examples                                     | Best for
--------------------|----------------------------------------------|---------------------------------
Network Monitoring  | Palo Alto NGFW, Fortinet, Cisco Umbrella     | Medium–large businesses
EDR/XDR             | SentinelOne, Microsoft Defender, CrowdStrike | All business sizes
Cloud App Discovery | Microsoft Defender for Cloud Apps, Netskope  | Medium–large businesses
Browser Management  | Chrome Enterprise, GPO, ManageEngine         | All sizes
MDM                 | Intune, Jamf, Workspace ONE                  | SMB to enterprise
API Management      | Apigee, Kong, AWS API Gateway                | Businesses with dev capabilities
UBA                 | Exabeam, Sumo Logic, Microsoft Sentinel      | Larger organisations

6. Implementation Roadmap – From Zero to AI-Aware

A structured approach can help your business go from unknown AI usage to controlled and monitored adoption.

Step 1 – Discovery Phase

  • Use network and cloud app discovery tools to map current AI usage.
  • Identify the top 10 AI tools being accessed.

Step 2 – Risk Assessment

  • For each tool, assess:
    • Data storage location.
    • Compliance requirements.
    • Security track record.

Step 3 – Policy Creation

  • Draft an AI usage policy aligned with business goals.
  • Share draft with legal, HR, and department heads.

Step 4 – Control Implementation

  • Use EDR, MDM, and browser management to enforce controls.
  • Block high-risk tools.

Step 5 – Training & Communication

  • Run AI awareness workshops.
  • Encourage employees to suggest approved AI tools.

Step 6 – Continuous Monitoring

  • Set up dashboards in your SIEM or security platform.
  • Automate alerts for new AI service detection.
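The alerting step can be as simple as diffing newly observed domains against a stored baseline. The sketch below shows that core diff; loading the baseline and delivering alerts (SIEM webhook, email) are deployment-specific and stubbed out here as assumptions.

```python
# Sketch: surface AI domains not seen before by diffing against a baseline.
def new_ai_services(observed_today, baseline):
    """Return domains observed today that are absent from the baseline."""
    return sorted(set(observed_today) - set(baseline))

def run_daily_check(observed_today, baseline):
    """Print an alert per new domain; return the updated baseline to persist."""
    for domain in new_ai_services(observed_today, baseline):
        print(f"ALERT: new AI service observed: {domain}")  # stub for a SIEM alert
    return baseline | set(observed_today)
```

Run daily from the discovery tooling’s export, this turns one-off discovery into continuous monitoring with almost no extra infrastructure.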

7. Enabling Safe AI Adoption

The aim isn’t to ban AI entirely; that’s unrealistic and counterproductive. Instead:

  • Sanction safe AI tools – Pick vendors with strong privacy and compliance practices.
  • Integrate AI securely – Use enterprise-grade AI APIs where you control the data pipeline.
  • Sandbox risky experimentation – Create safe environments for employees to try new tools without risking production data.

8. Future Trends – What’s Next for AI Monitoring

  • Native OS AI Assistants – Windows, macOS, iOS, and Android will embed deeper AI features, making detection harder.
  • Encrypted AI Traffic – More AI apps will use encryption, requiring endpoint-level detection.
  • Regulatory Pressure – Expect mandatory AI usage reporting in regulated industries within 2–3 years.
  • AI Watermarking – AI-generated content may carry metadata that can be detected for compliance purposes.

Final Thoughts

AI is here to stay and your employees are almost certainly using it already, whether you know it or not. The question isn’t “Should we allow AI?” but rather “How can we detect, monitor, and manage AI usage in a way that reduces risk and supports productivity?”

For businesses of all sizes, the path forward is clear:

  1. Get visibility – You can’t protect what you can’t see.
  2. Set rules – Define acceptable AI use and train staff.
  3. Enforce controls – Use the right mix of EDR, MDM, and cloud security tools.
  4. Review regularly – AI moves fast; your governance must keep pace.

By taking a proactive approach today, you’ll be able to harness the benefits of AI while protecting your organisation from the significant risks of shadow usage tomorrow.