CVE-2025-46153: PyTorch: Dropout inconsistency enables membership inference

MEDIUM
Published September 25, 2025
CISO Take

PyTorch's nn.Dropout1d/2d/3d produces statistically incorrect dropout masks when fallback_random=True due to a flawed bernoulli_p decomposition, creating weaker-than-expected randomization. The CVSS C:L rating reflects that predictable dropout patterns can be exploited to infer model internals or training data membership via repeated inference queries. Upgrade to PyTorch 2.7.0 immediately and audit any production pipelines using these dropout variants with fallback_random=True.

Risk Assessment

Medium risk (CVSS 5.3), but organizationally significant given PyTorch's ubiquity across AI/ML production deployments. The no-auth, network-accessible attack vector means any exposed inference endpoint is in scope. Primary risk materializes in scenarios using MC Dropout for uncertainty quantification, where repeated queries can exploit the statistical divergence from the expected Bernoulli distribution. Not in CISA KEV; no known active exploitation. Elevated concern for organizations subject to ISO 42001 or EU AI Act audits where model behavioral consistency is a documented requirement.
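The statistical divergence called out above is straightforward to test for. A minimal sketch, using only Python's standard library and a simulated mask stream in place of real PyTorch dropout output (the function name, sample size, and z-score cutoff are illustrative assumptions, not values from the advisory):

```python
import random
import math

def drop_rate_zscore(masks, p):
    """Z-score of the observed drop frequency against Bernoulli(p).

    masks: iterable of 0/1 values, 1 = channel dropped.
    A correctly implemented dropout should keep |z| small
    (roughly within +/-3 for large samples); a biased
    bernoulli_p decomposition would push it far higher.
    """
    n = len(masks)
    observed = sum(masks) / n
    # Standard error of the sample mean of a Bernoulli(p) variable.
    se = math.sqrt(p * (1 - p) / n)
    return (observed - p) / se

random.seed(0)
p = 0.5
# Simulated, correctly distributed masks stand in for real dropout output.
masks = [1 if random.random() < p else 0 for _ in range(100_000)]
z = drop_rate_zscore(masks, p)
assert abs(z) < 4
```

The same check can be pointed at masks harvested from a compiled model's dropout layers to compare eager against fallback behavior.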

Affected Systems

Package    Ecosystem    Vulnerable Range    Patched
pytorch    pip          —                   No patch


Severity & Risk

CVSS 3.1
5.3 / 10
EPSS
0.1%
chance of exploitation in 30 days
Higher than 23% of all CVEs
Exploitation Status
No known exploitation
Sophistication
Moderate

Attack Surface

AV (Attack Vector): Network
AC (Attack Complexity): Low
PR (Privileges Required): None
UI (User Interaction): None
S (Scope): Unchanged
C (Confidentiality): Low
I (Integrity): None
A (Availability): None

Recommended Action

5 steps
  1. PATCH

    Upgrade PyTorch to 2.7.0+ (fix in PR #143460).

  2. WORKAROUND

    Remove fallback_random=True from all nn.Dropout1d/2d/3d instantiations; default eager execution is not affected.

  3. AUDIT

    Grep codebase and training configs for 'fallback_random=True' combined with Dropout1d/2d/3d — flag any models trained with this configuration for retraining validation.

  4. SERVING

    If using MC Dropout at inference time for uncertainty quantification, treat outputs from unpatched versions as unreliable and prioritize patching accordingly.

  5. DETECTION

    Monitor inference APIs for repeated identical-input queries with statistical analysis of output variance — this is the probe pattern for membership inference exploitation.
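The detection step above can be prototyped as a simple per-client repeat-query counter. A hedged sketch; the class name `ProbeDetector` and the threshold values are illustrative, not part of any shipped tooling:

```python
import hashlib
from collections import defaultdict

class ProbeDetector:
    """Flags clients that repeatedly resend an identical input payload --
    the query pattern an attacker uses to sample MC-Dropout output variance."""

    def __init__(self, repeat_threshold=100):
        self.repeat_threshold = repeat_threshold  # illustrative cutoff
        # client_id -> payload digest -> repeat count
        self.counts = defaultdict(lambda: defaultdict(int))

    def record(self, client_id: str, payload: bytes) -> bool:
        """Record one query; return True once the client crosses the threshold."""
        digest = hashlib.sha256(payload).hexdigest()
        self.counts[client_id][digest] += 1
        return self.counts[client_id][digest] >= self.repeat_threshold

det = ProbeDetector(repeat_threshold=100)
# 150 identical queries from one client exceed the 100-repeat cutoff.
flagged = any(det.record("attacker", b"same-image-bytes") for _ in range(150))
assert flagged
```

In production this counter would sit behind the inference gateway and feed alerts into existing anomaly tooling; statistical analysis of the flagged client's output variance then confirms or dismisses the probe.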

CISA SSVC Assessment

Decision Track
Exploitation none
Automatable No
Technical Impact partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 15 - Accuracy, robustness and cybersecurity of high-risk AI systems
ISO 42001
8.4 - AI system technical robustness and reliability
NIST AI RMF
MANAGE-2.2 - Risk treatments including controls and safeguards are applied

Frequently Asked Questions

What is CVE-2025-46153?

PyTorch's nn.Dropout1d/2d/3d produces statistically incorrect dropout masks when fallback_random=True due to a flawed bernoulli_p decomposition, creating weaker-than-expected randomization. The CVSS C:L rating reflects that predictable dropout patterns can be exploited to infer model internals or training data membership via repeated inference queries. Upgrade to PyTorch 2.7.0 immediately and audit any production pipelines using these dropout variants with fallback_random=True.

Is CVE-2025-46153 actively exploited?

No confirmed active exploitation of CVE-2025-46153 has been reported, but organizations should still patch proactively.

How to fix CVE-2025-46153?

1. PATCH: Upgrade PyTorch to 2.7.0+ (fix in PR #143460). 2. WORKAROUND: Remove fallback_random=True from all nn.Dropout1d/2d/3d instantiations; default eager execution is not affected. 3. AUDIT: Grep codebase and training configs for 'fallback_random=True' combined with Dropout1d/2d/3d — flag any models trained with this configuration for retraining validation. 4. SERVING: If using MC Dropout at inference time for uncertainty quantification, treat outputs from unpatched versions as unreliable and prioritize patching accordingly. 5. DETECTION: Monitor inference APIs for repeated identical-input queries with statistical analysis of output variance — this is the probe pattern for membership inference exploitation.

What systems are affected by CVE-2025-46153?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, uncertainty quantification (MC Dropout).

What is the CVSS score for CVE-2025-46153?

CVE-2025-46153 has a CVSS v3.1 base score of 5.3 (MEDIUM). The EPSS exploitation probability is 0.08%.

Technical Details

NVD Description

PyTorch before 2.7.0 has a bernoulli_p decompose function in decompositions.py that lacks full consistency with the eager CPU implementation, negatively affecting nn.Dropout1d, nn.Dropout2d, and nn.Dropout3d when fallback_random=True.

Exploitation Scenario

An adversary targets a production computer vision API (e.g., a medical imaging classifier) that uses nn.Dropout2d with fallback_random=True for Monte Carlo Dropout uncertainty estimation. The attacker submits 100-200 identical image queries to the public inference endpoint, collecting the probability distribution of outputs across runs. Because the bernoulli_p inconsistency makes dropout masks statistically predictable — diverging from true Bernoulli behavior — the output variance exhibits a distinctive signature for training samples versus non-training samples. Using this oracle, the adversary performs a membership inference attack with measurably higher accuracy than would be achievable against a correctly-implemented system, without requiring any authentication, special access, or knowledge of model architecture.
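The oracle in this scenario reduces to comparing output variance across repeated identical queries. A minimal sketch with simulated endpoint outputs standing in for real model responses (the probability levels, noise magnitudes, and query count are illustrative assumptions, not measured PyTorch behavior):

```python
import random
import statistics

def variance_signature(query_fn, n_queries=150):
    """Sample the model's top-class probability across repeated identical
    queries and return its variance -- the attacker's membership signal."""
    return statistics.pvariance(query_fn() for _ in range(n_queries))

random.seed(1)
# Simulated endpoints: under weakened randomization, MC-Dropout variance
# collapses for training members relative to non-members (illustrative only).
member_endpoint = lambda: 0.95 + random.gauss(0, 0.005)
nonmember_endpoint = lambda: 0.70 + random.gauss(0, 0.04)

member_var = variance_signature(member_endpoint)
nonmember_var = variance_signature(nonmember_endpoint)
assert member_var < nonmember_var  # the distinguishing signature
```

Thresholding on this variance gap is what gives the attack its "measurably higher accuracy" against an endpoint whose dropout masks diverge from true Bernoulli behavior.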

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N

Timeline

Published
September 25, 2025
Last Modified
October 3, 2025
First Seen
September 25, 2025
