UtopianKnight Consultancy – James Griffiths

STRATEGIC | TECHNICAL | ADVISORY | AI | DEVELOPMENT | vCTO | CYBER | ICS & OT


Why Some Cisco ASA Firewalls May Still Be at Risk and What the Red Hat / GitLab Claims Reveal About Broader Security Fault Lines

In October 2025, cybersecurity media lit up with headlines like “Many Cisco ASA firewalls still unsecure, hackers claim Red Hat’s GitLab breach”. These stories link two seemingly distinct issues: zero-day vulnerabilities in Cisco’s Adaptive Security Appliance (ASA) line, and claims of massive data exfiltration from a Red Hat GitLab instance used by its consulting arm. Together, they underscore a deeper systemic truth: even mature security infrastructure and major enterprise vendors are not immune to gaps in practice, patching, configuration, or incident response.

This article will:

  1. Outline the critical vulnerabilities in Cisco ASA/FTD devices and how they’ve been exploited
  2. Review what is (and what is not) known about the Red Hat / GitLab incident
  3. Explore how the two stories intersect in lessons about risk, trust, and defensive posture
  4. Lay out recommendations for organisations still running ASA, and for any organisation relying on third-party platforms or consulting firms

Part I: The ASA Alarm Bells, Why Cisco’s Firewalls Are Under Scrutiny

1. The Zero-Day Exploits: CVE-2025-20333 & CVE-2025-20362

In late September 2025, multiple cybersecurity advisories revealed that two zero-day vulnerabilities had been actively exploited in Cisco ASA and related Firepower Threat Defense (FTD) firewalls.

  • CVE-2025-20333: An authenticated attacker can execute arbitrary code with root privileges via the web VPN interface of an ASA or FTD device under certain configurations. 
  • CVE-2025-20362: An unauthenticated remote attacker can access restricted URL endpoints on the VPN web server, bypassing authentication. 

Cisco’s advisories, and investigative reports from allied cybersecurity agencies, assert that these vulnerabilities have been weaponised in real campaigns, likely state-sponsored, targeting a subset of ASA devices. 

What makes these flaws particularly dangerous is the use of sophisticated persistence techniques. In some confirmed cases, threat actors have modified the ROMMON (the device’s basic bootstrap firmware) of ASA devices to maintain a foothold even after reboots, firmware upgrades, or attempts to reset the device. 

That technique is especially insidious because ROMMON lies below the operating-system layer; compromising it can allow a backdoor to survive typical remedial steps. 
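
For defenders trying to gauge their own exposure, a first question is simply whether a device’s VPN web server answers on the internet at all. The minimal Python sketch below probes a host for the clientless SSL VPN login page commonly served at /+CSCOE+/logon.html; the path and the response heuristic are illustrative assumptions, and this is no substitute for Cisco’s or the NCSC’s official detection tooling.

```python
"""Minimal exposure check: does a host serve the ASA WebVPN login page?

Assumptions (illustrative, not authoritative): the clientless SSL VPN portal
is commonly reachable at /+CSCOE+/logon.html, and a page or auth challenge on
that path suggests the VPN web server is enabled. Use Cisco/NCSC detection
tooling for real triage.
"""
import ssl
import sys
import urllib.error
import urllib.request

WEBVPN_PATH = "/+CSCOE+/logon.html"  # path commonly associated with ASA WebVPN


def webvpn_portal_exposed(host: str, timeout: float = 5.0) -> bool:
    # ASA appliances often present self-signed certificates, so skip
    # certificate verification for this reachability probe only.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    url = f"https://{host}{WEBVPN_PATH}"
    try:
        with urllib.request.urlopen(url, timeout=timeout, context=ctx) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        # An auth challenge still indicates something answers on that path.
        return err.code in (401, 403)
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    for target in sys.argv[1:]:
        state = "EXPOSED" if webvpn_portal_exposed(target) else "no portal seen"
        print(f"{target}: {state}")
```

Run it only against addresses you own or are authorised to test; the goal is an inventory of your own exposure, not scanning third-party infrastructure.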

2. The Scope: How Many ASAs Are Still Vulnerable?

Alarmingly, scanning efforts by security organisations show that tens of thousands of ASA / FTD devices remain exposed and unpatched. A commonly cited figure is ~48,000 vulnerable appliances, many of them internet-facing. 

Even after warnings from Cisco and government agencies, many organisations have yet to remediate, likely due to:

  • Legacy or out-of-support hardware that cannot be patched
  • Complexity or risk in updating firewall configurations in production environments
  • Lack of visibility into where all ASA / FTD devices are deployed
  • Operational inertia or resource constraints

The National Cyber Security Centre (UK) and counterparts elsewhere have published detailed detection scripts and response playbooks, stressing the urgency of upgrades and forensic inspection. 

The NCSC has noted that some ASA 5500-X series models (such as the 5512-X, 5515-X, 5525-X, 5545-X, 5555-X and 5585-X), which predate Secure Boot and hardware trust anchor features, are especially at risk of ROMMON modification. 

Consequently, some organisations are being urged not merely to patch but to replace or retire vulnerable device classes where possible. 
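
Where a full configuration-management database is lacking, even a crude check against a spreadsheet inventory can surface the highest-risk hardware. The sketch below assumes a CSV with hostname and model columns and flags the 5500-X families named above; the file format, column names and model-string normalisation are assumptions for illustration.

```python
"""Flag inventory entries whose ASA hardware predates Secure Boot / hardware
trust anchors and is therefore a ROMMON-tampering concern.

Assumptions: a CSV inventory with 'hostname' and 'model' columns; the model
strings mirror the 5500-X families named in the NCSC guidance above.
"""
import csv
import sys

LEGACY_MODELS = {
    "ASA5512-X", "ASA5515-X", "ASA5525-X",
    "ASA5545-X", "ASA5555-X", "ASA5585-X",
}


def flag_legacy(inventory_csv: str) -> list[str]:
    flagged = []
    with open(inventory_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            # Normalise e.g. "ASA 5515-X", "asa5515-x" or "5515-X" to "ASA5515-X".
            model = row.get("model", "").upper().replace(" ", "")
            if model and not model.startswith("ASA"):
                model = "ASA" + model
            if model in LEGACY_MODELS:
                flagged.append(
                    f"{row.get('hostname', '?')}: {row.get('model')} "
                    "(no Secure Boot / trust anchor - prioritise replacement)"
                )
    return flagged


if __name__ == "__main__":
    for line in flag_legacy(sys.argv[1]):
        print(line)
```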

3. What It Means: “ASA Still Unsecure”

When commentators say “many Cisco ASA firewalls remain unsecure”, they are essentially pointing to a persistent gap between known risk and real remediation action. The vulnerabilities are not theoretical; they are being actively exploited, yet the footprint of vulnerable devices remains large. 

From a security posture perspective, this gap is dangerous because firewalls are often seen as the first line of defence. If that barrier is compromised, attackers can pivot deeper into corporate networks, exfiltrate data, disrupt connectivity, manipulate traffic, or launch further attacks.

In short: a firewall is only as secure as its patching, its configuration, and the organisation’s willingness to respond swiftly to threat intelligence. The ASA case is a stark reminder that even venerable infrastructure cannot be ignored.


Part II: The Red Hat / GitLab Claims, What Happened, What’s Speculative

1. The Incident (as Confirmed by Red Hat)

On 2 October 2025, Red Hat publicly acknowledged that a GitLab instance used by its consulting business had suffered unauthorised access. 

In its statement, Red Hat says:

  • They detected the breach, initiated an investigation, and isolated the affected GitLab instance
  • The compromised data comes from consulting engagements: project specifications, examples, internal communications, and similar material. 
  • Importantly, Red Hat asserts that the instance “does not typically store sensitive personal data”, and to date they have not found evidence of such exposure. 
  • They believe the breach does not affect their broader product ecosystem, supply chain, or software downloads from official channels. 
  • Impacted customers (of Red Hat Consulting) will be contacted directly. 

Red Hat also clarifies that this was a self-managed GitLab Community Edition instance, not GitLab’s own hosted infrastructure. 

So far, the known facts are limited. Red Hat positions the incident as serious but constrained in scope.

2. The Attackers’ Claims & Third-Party Commentary

Contradicting or supplementing Red Hat’s version, a threat actor calling themselves Crimson Collective claims to have exfiltrated 570 GB of compressed data from 28,000 repositories.

According to the attackers and media investigations:

  • The stolen data includes Customer Engagement Reports (CERs), which may contain network and infrastructure configurations, authentication tokens, API keys, database URIs, and other critical secrets. 
  • The directory listing shared by the group references organisations across sectors (banks, telecoms, government, technology), implying broad exposure. 
  • They claim to have used some of those credentials to pivot into customer infrastructure. 
  • They also allege that Red Hat’s response was minimal, amounting to a templated vulnerability disclosure request rather than genuine engagement. 

These claims have stirred alarm across the security community, with some analysts warning of downstream risks: if customers’ network details, keys or VPN setups were exposed, follow-on intrusions could become possible. 

However, independent verification is still underway. Red Hat has not publicly confirmed the full scale of the attackers’ assertions. 

3. Evaluating Credibility and Risk

Given the divergence between what Red Hat confirms and what the attackers claim, how should one assess the situation?

Strengths of the attackers’ claims:

  • The directory structure and repository listing published on Telegram lend some plausibility to the claims. 
  • The scope of consulting documents (CERs) is precisely the kind of asset that, if compromised, can yield deep insight into infrastructure and security posture.
  • The motive is logical: exfiltrate consulting data, then leverage that knowledge for downstream attacks or extortion.

Weaknesses or uncertainties:

  • Red Hat explicitly states it has found no evidence of exposure of personal data, or of supply-chain compromise. That suggests either a narrower breach than claimed, or that some portions of the attackers’ claims are exaggerated. 
  • The link from knowledge of a customer’s infrastructure to actual compromise is nontrivial. Even with access to architecture diagrams or blueprints, executing an attack requires credentials, timing, and access vectors.
  • Some extortion groups tend to inflate their claims to increase leverage, and the alleged timeline and volume remain to be fully corroborated.

Thus, the prudent view is: the breach is real, meaningful and concerning. But the full extent of damage, especially third-party impact, remains unproven. Organisations should behave as though the worst case is possible, while reserving judgment until more forensic detail emerges.


Part III: Intersection, Implications & Lessons Learned

At first glance, the Cisco ASA zero-day saga and the Red Hat / GitLab incident might appear unconnected. One is about firewall vulnerabilities; the other is about code repository compromise. But both carry overlapping lessons and shared strategic implications.

1. The Shared Underlying Truth: Known Vulnerabilities Persist

Both stories emphasise that just because a vulnerability is known doesn’t mean all organisations will fix it promptly. The ASA flaws were publicly disclosed, yet many devices remain unpatched. 

Likewise, the Red Hat incident suggests that even organisations with strong security reputations can have hidden gaps in their deployment, configuration, or incident detection.

In both cases, the “window of exploitation” is long: attackers exploit the lag between disclosure and remediation.

2. Persistence & Low-Level Compromise Are a Growing Threat

The ASA case shows malware able to compromise firmware or bootloader layers (ROMMON), evading patches, reboots and resets. 

In repository or infrastructure attacks, sophisticated adversaries may implant backdoors or lateral access paths that survive credential rotations or standard incident response.

Both stories reinforce that defenders must assume that some internal systems may already be compromised and plan for detection and containment, not just prevention.

3. Supply-Chain & Consulting Risk Amplification

The Red Hat breach is particularly illustrative of the risk of trusted third parties. Many organisations may outsource consulting or infrastructure design. If those consultants’ historic blueprints, credentials, configs or playbooks are exposed, the attacker gets a “map” into multiple customers’ environments all at once. 

This suggests that when choosing vendors, integrators or consultants, organisations should incorporate stricter controls, compartmentalisation, credential hygiene, and post-engagement audits.

4. The Illusion of “Safe” Zones

Many teams believe that internal or partner zones are inherently safer. But both incidents show that vulnerabilities, misconfigurations, or trust assumptions can be exploited even in presumed “trusted” segments, whether that is a firewall appliance’s management interface or a consultancy’s GitLab server.

Defence in depth is not optional; relying on the perimeter alone is too brittle.

5. Communication, Transparency & Crisis Response Matter

Red Hat’s measured public statement, and the attackers’ sharp criticism of Red Hat’s response, highlight that narrative control, stakeholder communication and trust are key during an incident. If an organisation appears passive, defensive or opaque, adversaries may amplify their claims, driving reputational harm.

Similarly, in the ASA case Cisco, governments and cybersecurity agencies have actively published technical guidance, detection scripts, and directives, which helps defenders respond. Timely, transparent guidance can help contain damage.


Part IV: What You Should Do (If You’re Running ASA / Rely on Vendors / Are a Potential Customer)

This section translates the risk into actionable steps.

1. If You Operate Cisco ASA / FTD Devices

  • Inventory & visibility: Identify all ASA / FTD instances, especially those in remote locations or shadow deployments. Many breaches occur via forgotten or unmanaged firewalls.
  • Patch immediately: Ensure that all devices are updated to the fixed firmware versions Cisco has released.
  • Forensic dump & inspection: Where possible, perform memory captures, ROMMON inspection, or forensic dumps to detect signs of firmware manipulation.
  • Factory reset and reconfigure: After patching, reset devices to factory defaults when possible, and rebuild configurations, with regenerated cryptographic keys, passwords and certificates. 
  • Replace legacy hardware: If your ASA hardware lacks Secure Boot or trust anchor capabilities, plan a hardware upgrade or migration where feasible.
  • Monitor logging behaviour: Validate that logging has not been suppressed or tampered with; attackers may disable or reroute logging to hide their traces (a simple syslog heartbeat sketch follows this list). 
  • Segment access: Limit management interface exposure (e.g., place firewall management behind a jump host or VPN, restrict source IPs).
  • Engage external security experts: Use red teaming or specialised firmware integrity assessment to look for low-level compromises.
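
To illustrate the logging point above, the following sketch flags firewalls that have gone quiet in an exported syslog file, which can indicate suppressed or rerouted logging rather than a healthy device. The timestamp format, hostname position and six-hour threshold are assumptions; adapt them to your own log pipeline or SIEM.

```python
"""Detect firewalls that have gone quiet in syslog, which can indicate
suppressed or tampered logging rather than a healthy device.

Assumptions: a plain-text syslog export whose lines start with a naive
ISO-8601 timestamp followed by the device hostname; the 6-hour threshold
is an arbitrary illustrative value - tune it to your environment.
"""
from datetime import datetime, timedelta

SILENCE_THRESHOLD = timedelta(hours=6)  # illustrative threshold - tune locally


def silent_devices(syslog_path, expected_hosts):
    """Return the set of expected hosts with no recent syslog entries."""
    now = datetime.utcnow()
    last_seen = {}
    with open(syslog_path) as fh:
        for line in fh:
            parts = line.split(maxsplit=2)
            if len(parts) < 2:
                continue
            try:
                # assumes naive timestamps such as 2025-10-02T14:31:05
                ts = datetime.fromisoformat(parts[0])
            except ValueError:
                continue
            host = parts[1]
            if host in expected_hosts:
                if host not in last_seen or ts > last_seen[host]:
                    last_seen[host] = ts
    return {h for h in expected_hosts
            if h not in last_seen or now - last_seen[h] > SILENCE_THRESHOLD}


if __name__ == "__main__":
    quiet = silent_devices("firewall.log", {"asa-edge-01", "asa-edge-02"})
    for host in sorted(quiet):
        print(f"WARNING: no recent syslog from {host} - verify logging config")
```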

2. If You Use Third-Party Platforms or Are a Client of Consulting Firms

  • Require strong vendor security policies: Demand that your vendors or consultants run their infrastructure following robust security practices (patching, segmentation, credentials management, incident response).
  • Audit what you share: Limit what network blueprints, topologies or credentials you allow vendors to retain. Use ephemeral, minimised data (a rough secret-scanning sketch follows this list).
  • Rotate credentials and tokens post-engagement: After a consulting engagement ends, invalidate secrets or keys used, rotate credentials, and perform security review.
  • Mandate third-party breach liability clauses: In contracts, include clauses requiring timely disclosure of vendor breaches, obligations to notify clients, and remediation responsibilities.
  • Monitor for suspicious access: Use logging, infrastructure monitoring, and anomaly detection (e.g., access from vendor credentials at odd times).
  • Conduct periodic supply-chain risk assessments: Recognise that breaches in vendor infrastructure can cascade to your organisation.
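
As a rough illustration of the “audit what you share” point, the sketch below walks a directory of engagement deliverables and reports credential-shaped strings. The regexes are illustrative assumptions (AWS access key IDs, private key headers, hard-coded passwords, bearer tokens); a dedicated secret-scanning tool with a maintained rule set should be used for anything beyond quick triage.

```python
"""Rough scan of engagement deliverables for credential-shaped strings before
they are shared with, or retained by, a vendor or consultant.

Assumptions: the regexes are illustrative (AWS access key IDs, private key
headers, hard-coded passwords, bearer tokens); use a dedicated secret-scanning
tool with a maintained rule set for anything beyond quick triage.
"""
import re
import sys
from pathlib import Path

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "Hard-coded password": re.compile(r"(?i)password\s*[=:]\s*\S+"),
    "Bearer token": re.compile(r"(?i)authorization:\s*bearer\s+\S+"),
}


def scan_tree(root: str):
    """Yield (path, finding label, truncated match) for every hit under root."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                yield path, label, match.group(0)[:40]


if __name__ == "__main__":
    for path, label, snippet in scan_tree(sys.argv[1]):
        print(f"{path}: {label}: {snippet}")
```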

3. Incident Response & Crisis Planning

  • Prepare for worst-case assumptions: Even without confirmation, treat high-risk breaches as potentially severe; begin credential rotations, forensics, stakeholder communication.
  • Establish disclosure and communication protocols: Be ready to issue timely, transparent statements to customers, regulators or partners.
  • Post-incident audit and lessons learned: After remediating, perform root cause analysis, extract lessons, and improve your vulnerability management, detection and resilience posture.
  • Engage with intelligence sharing communities: Threat intelligence, ISACs or governmental agencies can offer IOCs, detection scripts and alerts tailored to your industry.
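
Shared indicators are only useful if they are actually checked against your own telemetry. The minimal sketch below matches a plain-text IOC list (one IP address or domain per line) against line-oriented log exports; field names and formats differ per SIEM, so treat it as a starting point rather than a finished detection.

```python
"""Match a shared IOC feed (one indicator per line) against exported logs.

Assumptions: the IOC file is plain text containing IP addresses or domains
(lines starting with '#' are comments), and the logs are line-oriented text;
formats differ per SIEM, so treat this as a starting point, not a detection.
"""
import sys


def load_iocs(path: str) -> set[str]:
    with open(path) as fh:
        return {line.strip() for line in fh
                if line.strip() and not line.startswith("#")}


def match_logs(log_path: str, iocs: set[str]):
    """Yield (line number, matched indicators, raw line) for every hit."""
    with open(log_path) as fh:
        for lineno, line in enumerate(fh, start=1):
            hits = [ioc for ioc in iocs if ioc in line]
            if hits:
                yield lineno, hits, line.rstrip()


if __name__ == "__main__":
    indicators = load_iocs(sys.argv[1])
    for lineno, hits, raw in match_logs(sys.argv[2], indicators):
        print(f"line {lineno}: {', '.join(hits)} -> {raw}")
```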

Part V: Conclusions & Outlook

The convergence of the Cisco ASA vulnerabilities and the Red Hat / GitLab breach claims is a vivid reminder: security is a moving target, and no vendor, infrastructure, or practice is beyond scrutiny. The ASA incident demonstrates how even foundational defensive appliances can become Achilles’ heels if complacency, legacy support, or patch backlog dominate. The Red Hat case reminds us of the pervasive risk of trusting third-party infrastructure and the sensitive exposure that consulting engagements can generate.

If there is one takeaway, it is that resilience (the ability to detect, contain and recover) is as important as prevention. Organisations should assume some portion of their systems may already be compromised and build detection and recovery capabilities commensurate with their risk.