CVE-2025-61784: LLaMA-Factory: SSRF+LFI in multimodal chat API

GHSA-527m-2xhr-j27g | Severity: HIGH | PoC available | CISA SSVC: Track*
Published October 7, 2025
CISO Take

Any authenticated user of LLaMA-Factory ≤0.9.3 can pivot into your internal network or read arbitrary server files via crafted image/video/audio URLs in the chat API — no special skills required. If your team uses this library for LLM fine-tuning (especially in shared environments or with external collaborators), patch to 0.9.4 immediately or disable multimodal URL inputs at the network layer. Low EPSS now does not mean safe: this is trivially exploitable once an attacker has any valid account.

Risk Assessment

Effective risk is higher than CVSS 8.1 suggests in AI/ML contexts. The attack requires only a low-privileged authenticated account — common in collaborative fine-tuning environments where multiple researchers, contractors, or automated pipelines share access. SSRF in cloud-hosted training infrastructure exposes cloud metadata endpoints (IMDSv1/v2), internal databases, model registries, and secret stores. LFI compounds this: an attacker can chain LFI to exfiltrate model weights, training data paths, API keys in .env files, SSH private keys, and Hugging Face tokens typically present on fine-tuning servers. EPSS is low because the CVE is recent, not because the vector is complex.

Affected Systems

Package         Ecosystem   Vulnerable Range   Patched
llama-factory   pip                            No patch
llamafactory    pip         <= 0.9.3           0.9.4

Severity & Risk

CVSS 3.1: 8.1 / 10
EPSS: 0.1% chance of exploitation in 30 days (higher than 20% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: Medium
Sophistication: Trivial
Exploitation Confidence: Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve).
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV (Attack Vector): Network
AC (Attack Complexity): Low
PR (Privileges Required): Low
UI (User Interaction): None
S (Scope): Unchanged
C (Confidentiality): High
I (Integrity): High
A (Availability): None

Recommended Action

  1. PATCH: Upgrade llamafactory (pip) to >=0.9.4 immediately — this is the only complete fix.

  2. WORKAROUND (if patching is blocked): Implement egress firewall rules on the training server to block SSRF targets: deny access to 169.254.169.254 (AWS IMDS), 100.100.100.200 (Alibaba IMDS), internal RFC-1918 ranges, and localhost from the LLaMA-Factory process.

  3. RESTRICT: Disable the multimodal chat API endpoint, or restrict it to admins, if image/video/audio URL inputs are not needed for your workflow.

  4. DETECT: Audit logs for outbound HTTP requests from the LLaMA-Factory process to internal IPs or IMDS addresses. Look for requests to /etc/passwd, ~/.ssh, .env, or *token* paths in access logs.

  5. ROTATE: If the server has been externally accessible with LLaMA-Factory running, assume credentials stored on disk may be compromised — rotate API keys, Hugging Face tokens, cloud IAM credentials, and SSH keys.
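If neither patching nor network-level egress filtering can land immediately, an application-level guard can at least reject URLs that resolve into blocked networks before the server fetches them. A minimal sketch using only the Python standard library; the helper name and deny policy are assumptions for illustration, not part of the LLaMA-Factory API:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Cloud metadata endpoints called out in step 2 (AWS and Alibaba IMDS).
BLOCKED_EXACT = {"169.254.169.254", "100.100.100.200"}

def is_safe_url(url: str) -> bool:
    """Return False for any URL that is not plain HTTP(S) to a public address."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False  # rejects file://, data:, bare paths, etc.
    try:
        # Resolve the hostname first: an attacker can hide an internal
        # IP behind an innocent-looking DNS name.
        addr = socket.getaddrinfo(parsed.hostname, None)[0][4][0]
    except socket.gaierror:
        return False
    if addr in BLOCKED_EXACT:
        return False
    ip = ipaddress.ip_address(addr)
    # Deny loopback, link-local (covers IMDS), RFC-1918, and reserved ranges.
    return not (ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved)
```

Note that resolve-then-fetch checks like this are still subject to DNS-rebinding races (the name can re-resolve to an internal IP between check and fetch), so the egress firewall rules in step 2 remain the stronger control.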

CISA SSVC Assessment

Decision: Track*
Exploitation: PoC
Automatable: No
Technical Impact: Partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act: Article 15 (Accuracy, robustness and cybersecurity); Article 9 (Risk management system)
ISO 42001: 6.1.2 (AI risk assessment); 8.4 (AI system technical security)
NIST AI RMF: MANAGE 2.2 (Mechanisms to sustain risk management); MAP 5.1 (Likelihood and impact of risks identified in mapping)
OWASP LLM Top 10: LLM06:2023 (Sensitive Information Disclosure); LLM07:2023 (Insecure Plugin Design)

Frequently Asked Questions

What is CVE-2025-61784?

CVE-2025-61784 is a Server-Side Request Forgery (SSRF) and Local File Inclusion (LFI) vulnerability in the multimodal chat API of LLaMA-Factory versions 0.9.3 and earlier. Any authenticated user can supply a crafted image, video, or audio URL that the server fetches without validation, allowing requests into internal networks or reads of arbitrary files on the server. Version 0.9.4 fixes the issue.

Is CVE-2025-61784 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-61784, increasing the risk of exploitation.

How to fix CVE-2025-61784?

Upgrade llamafactory (pip) to >=0.9.4 — this is the only complete fix. If patching is blocked, add egress firewall rules that deny access to IMDS endpoints (169.254.169.254, 100.100.100.200), RFC-1918 ranges, and localhost from the LLaMA-Factory process; restrict or disable the multimodal chat API if URL inputs are not needed; audit logs for outbound requests to internal IPs and for sensitive paths such as /etc/passwd, ~/.ssh, and .env; and, if the server was externally accessible, rotate on-disk credentials (API keys, Hugging Face tokens, cloud IAM credentials, SSH keys).

What systems are affected by CVE-2025-61784?

This vulnerability affects the following AI/ML architecture patterns: LLM fine-tuning pipelines, model serving, MLOps platforms, training pipelines, multi-user ML research environments.

What is the CVSS score for CVE-2025-61784?

CVE-2025-61784 has a CVSS v3.1 base score of 8.1 (HIGH). The EPSS exploitation probability is 0.07%.

Technical Details

NVD Description

LLaMA-Factory is a tuning library for large language models. Prior to version 0.9.4, a Server-Side Request Forgery (SSRF) vulnerability in the chat API allows any authenticated user to force the server to make arbitrary HTTP requests to internal and external networks. This can lead to the exposure of sensitive internal services, reconnaissance of the internal network, or interaction with third-party services. The same mechanism also allows for a Local File Inclusion (LFI) vulnerability, enabling users to read arbitrary files from the server's filesystem.

The vulnerability exists in the `_process_request` function within `src/llamafactory/api/chat.py`. This function is responsible for processing incoming multimodal content, including images, videos, and audio provided via URLs. The function checks if the provided URL is a base64 data URI or a local file path (`os.path.isfile`). If neither is true, it falls back to treating the URL as a web URI and makes a direct HTTP GET request using `requests.get(url, stream=True).raw` without any validation or sanitization of the URL. Version 0.9.4 fixes the underlying issue.
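The three-way dispatch described above can be sketched as follows. This is a simplified reconstruction for illustration, following the NVD description rather than the actual project source:

```python
import base64
import os
import re

import requests  # third-party; the vulnerable code path uses requests.get

def process_multimodal_url(url: str):
    """Illustrative reconstruction of the vulnerable dispatch, not real source."""
    m = re.match(r"^data:[^;]*;base64,(.*)$", url, re.DOTALL)
    if m:
        # Branch 1: base64 data URI is decoded in place.
        return base64.b64decode(m.group(1))
    if os.path.isfile(url):
        # Branch 2 (LFI): any path readable by the server process is returned.
        return open(url, "rb").read()
    # Branch 3 (SSRF): everything else is fetched verbatim, with no
    # validation of scheme, host, or destination network.
    return requests.get(url, stream=True).raw
```

Because branch 2 accepts any readable path and branch 3 accepts any URL, a single input field yields both file reads and arbitrary outbound requests.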

Exploitation Scenario

A red teamer (or malicious insider) with a valid LLaMA-Factory account crafts a multimodal chat request containing an image URL set to 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'. The `_process_request` function, finding no base64 header and no local file match, fetches the URL via `requests.get()` and returns the content — leaking the EC2 instance's IAM role credentials in the response. With these credentials, the attacker assumes the IAM role, accesses S3 buckets containing training datasets and model checkpoints, and potentially pivots to other AWS services. Separately, the attacker submits a request whose 'URL' is a plain file path, triggering the `os.path.isfile` branch and reading `/root/.ssh/id_rsa` or `/opt/llamafactory/.env` to harvest SSH keys and API tokens for further access.
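Indicators of this activity can be swept out of API access logs. A minimal detection sketch, assuming (hypothetically) that each log line includes the requested URL or path; the pattern list mirrors the DETECT step above:

```python
import re

# Flag log lines targeting IMDS endpoints, RFC-1918 ranges,
# or sensitive file paths and token material.
SUSPICIOUS = re.compile(
    r"(169\.254\.169\.254|100\.100\.100\.200"      # AWS / Alibaba IMDS
    r"|10\.\d{1,3}\.\d{1,3}\.\d{1,3}"              # RFC-1918: 10.0.0.0/8
    r"|192\.168\.\d{1,3}\.\d{1,3}"                 # RFC-1918: 192.168.0.0/16
    r"|172\.(?:1[6-9]|2\d|3[01])\.\d{1,3}\.\d{1,3}"  # RFC-1918: 172.16.0.0/12
    r"|/etc/passwd|\.ssh/|\.env\b|token)",         # LFI / credential indicators
    re.IGNORECASE,
)

def flag_suspicious(log_lines):
    """Return the subset of log lines matching any suspicious indicator."""
    return [line for line in log_lines if SUSPICIOUS.search(line)]
```

Substring patterns like these are noisy (e.g. "token" appears in benign URLs), so treat hits as triage input, not verdicts.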

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N

Timeline

Published
October 7, 2025
Last Modified
March 19, 2026
First Seen
October 7, 2025

Related Vulnerabilities