UtopianKnight Consultancy – James Griffiths

STRATEGIC | TECHNICAL | ADVISORY | AI | DEVELOPMENT | vCTO | CYBER | ICS & OT


The Hidden Risks of Unregulated AI Tool Usage in the Workplace

Artificial Intelligence (AI) tools have rapidly become part of everyday working life. From writing assistants and automated design tools to customer service bots and code generators, the benefits are hard to ignore. They promise increased productivity, faster decision-making, and even improved creativity. But while AI opens doors to new efficiencies, it also introduces very real risks, especially when used without oversight or governance.

Working in the cyber security space, I’ve observed a concerning trend: the unchecked use of AI platforms in workplaces. Often dubbed “shadow AI”, this refers to employees using AI tools that are not sanctioned or monitored by their organisation’s IT or security teams. While this can stem from good intentions, such as trying to streamline tasks, it can inadvertently expose businesses to data breaches, regulatory penalties, and reputational damage.


Why Is Unregulated AI Use a Problem?

Let’s start by understanding why this is a growing concern.

Most modern AI tools operate via the cloud, and many of them rely on processing data through third-party servers located around the world. That means when an employee pastes sensitive company information into a chatbot, the data could be stored, logged, or used to train future AI models. Even if the AI provider claims to anonymise data, businesses have no real control over where it goes or how long it’s kept.

Consider this example:

An employee at a legal firm uses a public AI writing assistant to help draft a contract. They copy and paste client names, clauses, and confidential legal terms into the tool. The document may now be sitting on a third-party server, possibly in another country, well outside the firm’s data protection controls.

This isn’t just theoretical. In early 2023, a well-known tech company banned the use of ChatGPT internally after employees reportedly shared source code and sensitive documentation via the tool.


Key Risks to Businesses

1. Data Protection Breaches (GDPR and Beyond)

One of the biggest threats comes from data protection regulations like the UK GDPR. If employees input personally identifiable information (PII) into AI systems without the necessary safeguards or lawful basis, the organisation could be in breach of the law.

For example, imagine staff using AI to analyse customer feedback emails. If those emails contain names, addresses or contact numbers, and that data is processed by an unauthorised AI tool without the customer’s consent or appropriate safeguards, the organisation could be liable.

Consequences:

  • ICO investigations
  • Financial penalties
  • Mandatory data breach disclosures
  • Reputational damage

2. Loss of Intellectual Property (IP)

AI tools often have unclear terms around data usage. Some reserve the right to retain and use submitted content to improve their models or provide services to others. This poses a serious threat to proprietary business information.

Any sensitive design specs, strategy documents, source code, or internal communications copied into such tools could be used, intentionally or otherwise, in ways the business never intended.

3. Unintentional Exposure of Confidential Information

Misuse of AI isn’t always malicious, or even deliberate. Employees might simply be trying to get their job done faster. But without proper training or awareness, they may inadvertently copy sensitive internal data into an AI tool without realising the consequences.

4. Bypassing Security Controls

Because these tools are typically accessed through web browsers, users can bypass traditional security controls, like email filters or document classification systems. In effect, they create blind spots in your organisation’s security perimeter.


Real-World Examples of AI Misuse

Here are just a few real-world scenarios where AI tool misuse has caused issues:

  • Samsung: In 2023, engineers at Samsung unintentionally leaked internal source code and meeting transcripts by using ChatGPT for debugging and documentation support. The company responded by banning generative AI tools entirely.
  • Amazon: Issued internal guidance warning staff not to input confidential code or business-sensitive information into AI tools after reportedly observing ChatGPT responses that closely resembled internal Amazon data.
  • UK Law Firms: Some have already been flagged by clients and regulators for inadvertently sharing legal case data with cloud-based tools without proper safeguards or client consent.

How to Identify AI Usage in Your Organisation

  1. Network Monitoring and Traffic Logs
    Review DNS logs and outbound HTTP(S) traffic. Frequent queries to popular AI domains (e.g. openai.com, bard.google.com, perplexity.ai) can highlight usage patterns; a minimal scripted check is sketched after this list.
  2. Endpoint Monitoring
    Security platforms like Microsoft Defender for Endpoint or CrowdStrike can be configured to detect when certain applications or browser extensions are used.
  3. Employee Surveys and Interviews
    Often overlooked, direct engagement with employees can be very effective. Ask how they’re using AI, what problems they’re trying to solve, and where they think it adds value. This can help shape official tooling policies.
  4. Shadow IT Discovery Tools
    Some advanced SaaS security tools can automatically discover unsanctioned cloud services being accessed from within the network.
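
As a concrete illustration of the first point, here is a minimal Python sketch that tallies lookups of a watchlist of AI domains in a plain-text DNS log. The log format, the file name (dns_queries.log), and the domain list are assumptions for illustration only; adapt them to whatever your resolver or firewall actually exports.

# dns_ai_scan.py - minimal sketch: count lookups of AI tool domains in a DNS log.
# Assumptions (illustrative only): each log line contains the queried domain as
# a whitespace-separated field, and AI_DOMAINS is a small sample watchlist.
from collections import Counter

AI_DOMAINS = {"openai.com", "bard.google.com", "perplexity.ai", "claude.ai"}

def matches_ai_domain(field: str) -> bool:
    # True if the field is a watched domain or one of its subdomains.
    domain = field.lower().rstrip(".")
    return any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS)

def scan_dns_log(path: str) -> Counter:
    # Tally lookups of watched AI domains in the log file at `path`.
    hits = Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            for field in line.split():
                if matches_ai_domain(field):
                    hits[field.lower().rstrip(".")] += 1
    return hits

if __name__ == "__main__":
    for domain, count in scan_dns_log("dns_queries.log").most_common():
        print(f"{count:6d}  {domain}")

Even a crude tally like this can show which teams are leaning on unsanctioned tools, and where an approved alternative would have the most impact.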

Practical Steps to Manage the Risk

1. Develop a Clear AI Usage Policy

Every organisation needs a written policy that defines:

  • Which AI tools are approved for use
  • What kind of data is permitted to be shared with them
  • How to report suspected misuse
  • Who to contact for support or questions

This policy should sit alongside your existing Acceptable Use and Data Protection policies.

2. Implement Technical Controls

  • Web Filtering: Block access to known AI tool domains that are not sanctioned.
  • Data Loss Prevention (DLP): Use DLP rules to prevent sensitive data from leaving the corporate environment (a minimal sketch of the idea follows this list).
  • Cloud Access Security Brokers (CASBs): Help monitor and control cloud app usage, including AI tools.
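
To make the DLP point concrete, below is a minimal Python sketch of a pre-submission check that flags common UK PII patterns before a prompt is sent to an external AI tool. The regexes and the example prompt are illustrative assumptions, not production detectors.

# dlp_precheck.py - minimal sketch: flag likely PII before a prompt is sent
# to an external AI tool. The regexes are deliberately simple illustrations;
# a real deployment would use a DLP vendor's managed detectors instead.
import re

PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK mobile number": re.compile(r"\b(?:\+44\s?7\d{3}|07\d{3})\s?\d{3}\s?\d{3}\b"),
    "NI number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}

def find_pii(text: str) -> list[str]:
    # Return the names of any PII patterns detected in the text.
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

# Hypothetical prompt an employee might paste into a public chatbot.
prompt = "Summarise the complaint from jane.doe@example.com, tel 07700 900123."
detected = find_pii(prompt)
if detected:
    print("Blocked: prompt appears to contain", ", ".join(detected))
else:
    print("No obvious PII detected")

In practice a check like this sits inside your DLP or proxy layer rather than a standalone script, but the principle is the same: inspect outbound content before it reaches an unsanctioned AI endpoint.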

3. Employee Training and Awareness

Raise awareness through practical examples and case studies. Help staff understand:

  • The risks of sharing company data with AI tools
  • The legal consequences under data protection laws
  • How to use approved tools safely

Encourage a culture of asking before using.

4. Use Enterprise-Grade AI Tools

When possible, opt for AI platforms with enterprise licensing options that offer:

  • Data isolation
  • No data retention
  • EU/UK data centre hosting
  • Audit logs and access control

For instance, Microsoft 365 Copilot and Gemini for Google Workspace offer AI features with better enterprise control than free, public-facing tools.


Final Thoughts

AI in the workplace is here to stay, and for good reason. It can drive innovation, speed, and competitive advantage. But like all powerful tools, it must be handled responsibly.

Unregulated use of AI tools poses significant risks to organisations, from data protection breaches to IP loss. It’s essential that businesses take proactive steps to manage this emerging threat, starting with visibility, governance, and education.

As cyber security professionals, we must strike the balance between enabling innovation and protecting the integrity of our data. Shadow AI may be invisible, but with the right strategy, it doesn’t have to be unmanaged.