CVE-2025-46153: PyTorch: Dropout inconsistency enables membership inference
MEDIUM. PyTorch's nn.Dropout1d/2d/3d produce statistically incorrect dropout masks when fallback_random=True is set, due to a flawed bernoulli_p decomposition, resulting in weaker-than-expected randomization. The CVSS C:L rating reflects that predictable dropout patterns can be exploited to infer model internals or training-data membership via repeated inference queries. Upgrade to PyTorch 2.7.0 and audit any production pipelines that use these dropout variants with fallback_random=True.
Risk Assessment
Medium risk (CVSS 5.3), but organizationally significant given PyTorch's ubiquity across AI/ML production deployments. The no-auth, network-accessible attack vector means any exposed inference endpoint is in scope. Primary risk materializes in scenarios using MC Dropout for uncertainty quantification, where repeated queries can exploit the statistical divergence from the expected Bernoulli distribution. Not in CISA KEV; no known active exploitation. Elevated concern for organizations subject to ISO 42001 or EU AI Act audits where model behavioral consistency is a documented requirement.
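For context, MC Dropout estimates uncertainty by leaving dropout active at inference time and aggregating predictions over repeated stochastic forward passes. A minimal pure-Python sketch of that aggregation (toy_forward is an illustrative stand-in for a real dropout-bearing model call, not PyTorch API):

```python
import random
from statistics import mean, pstdev

def mc_dropout_predict(forward, x, passes=50):
    """Run a stochastic model call (dropout left active) repeatedly
    and summarize the spread of predictions; the standard deviation
    serves as the uncertainty estimate."""
    scores = [forward(x) for _ in range(passes)]
    return mean(scores), pstdev(scores)

def toy_forward(x, p=0.5):
    """Stand-in for a real model with dropout: zero the input with
    probability p, using inverted-dropout rescaling by 1/(1-p)."""
    keep = 0.0 if random.random() < p else 1.0
    return x * keep / (1 - p)
```

Because the uncertainty estimate is built directly from the spread of these repeated passes, any statistical bias in the dropout masks surfaces in the outputs an external caller observes, which is what makes this usage pattern the primary risk scenario.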
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| pytorch | pip | < 2.7.0 | 2.7.0 |
Recommended Action
Five steps:

1. PATCH: Upgrade PyTorch to 2.7.0+ (fix in PR #143460).
2. WORKAROUND: Remove fallback_random=True from all nn.Dropout1d/2d/3d instantiations; the default eager execution path is not affected.
3. AUDIT: Grep codebases and training configs for 'fallback_random=True' combined with Dropout1d/2d/3d, and flag any models trained with this configuration for retraining validation.
4. SERVING: If using MC Dropout at inference time for uncertainty quantification, treat outputs from unpatched versions as unreliable and prioritize patching accordingly.
5. DETECTION: Monitor inference APIs for repeated identical-input queries combined with statistical analysis of output variance; this is the probe pattern for membership-inference exploitation.
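The detection step above can be sketched as a simple log-analysis pass. The function name, log shape, and repeat threshold here are illustrative assumptions, not an existing tool:

```python
import hashlib
from collections import defaultdict
from statistics import pvariance

def flag_probe_patterns(query_log, min_repeats=50):
    """Group inference requests by a hash of their input payload and
    flag inputs repeated often enough for a caller to estimate the
    output distribution (the membership-inference probe pattern)."""
    by_input = defaultdict(list)
    for payload, score in query_log:  # (raw input bytes, model output)
        key = hashlib.sha256(payload).hexdigest()
        by_input[key].append(score)
    flagged = {}
    for key, scores in by_input.items():
        if len(scores) >= min_repeats:
            flagged[key] = {
                "count": len(scores),
                "output_variance": pvariance(scores),
            }
    return flagged
```

In production this would run over API gateway logs; inputs with a high repeat count and a measurable output variance are the ones worth correlating with the attack pattern described in the exploitation scenario.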
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Classification
Compliance Impact
This CVE is relevant to organizations subject to ISO 42001 or EU AI Act audits where model behavioral consistency is a documented requirement.
Frequently Asked Questions
What is CVE-2025-46153?
PyTorch's nn.Dropout1d/2d/3d produces statistically incorrect dropout masks when fallback_random=True due to a flawed bernoulli_p decomposition, creating weaker-than-expected randomization. The CVSS C:L rating reflects that predictable dropout patterns can be exploited to infer model internals or training data membership via repeated inference queries. Upgrade to PyTorch 2.7.0 immediately and audit any production pipelines using these dropout variants with fallback_random=True.
Is CVE-2025-46153 actively exploited?
No confirmed active exploitation of CVE-2025-46153 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-46153?
1. PATCH: Upgrade PyTorch to 2.7.0+ (fix in PR #143460). 2. WORKAROUND: Remove fallback_random=True from all nn.Dropout1d/2d/3d instantiations; the default eager execution path is not affected. 3. AUDIT: Grep codebases and training configs for 'fallback_random=True' combined with Dropout1d/2d/3d, and flag any models trained with this configuration for retraining validation. 4. SERVING: If using MC Dropout at inference time for uncertainty quantification, treat outputs from unpatched versions as unreliable and prioritize patching accordingly. 5. DETECTION: Monitor inference APIs for repeated identical-input queries combined with statistical analysis of output variance; this is the probe pattern for membership-inference exploitation.
What systems are affected by CVE-2025-46153?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, uncertainty quantification (MC Dropout).
What is the CVSS score for CVE-2025-46153?
CVE-2025-46153 has a CVSS v3.1 base score of 5.3 (MEDIUM). The EPSS exploitation probability is 0.08%.
Technical Details
NVD Description
PyTorch before 2.7.0 has a bernoulli_p decompose function in decompositions.py even though it lacks full consistency with the eager CPU implementation, negatively affecting nn.Dropout1d, nn.Dropout2d, and nn.Dropout3d for fallback_random=True.
Exploitation Scenario
An adversary targets a production computer vision API (e.g., a medical imaging classifier) that uses nn.Dropout2d with fallback_random=True for Monte Carlo Dropout uncertainty estimation. The attacker submits 100-200 identical image queries to the public inference endpoint, collecting the probability distribution of outputs across runs. Because the bernoulli_p inconsistency makes dropout masks statistically predictable — diverging from true Bernoulli behavior — the output variance exhibits a distinctive signature for training samples versus non-training samples. Using this oracle, the adversary performs a membership inference attack with measurably higher accuracy than would be achievable against a correctly-implemented system, without requiring any authentication, special access, or knowledge of model architecture.
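The statistical divergence this scenario relies on can be tested with a basic frequency check on observed channel-drop decisions. This sketch assumes you can extract per-channel 0/1 drop indicators from repeated forward passes; the z-threshold of 4 standard errors is an illustrative choice:

```python
import math

def bernoulli_consistency(drop_indicators, p, z=4.0):
    """Check whether a stream of 0/1 channel-drop decisions is
    consistent with Bernoulli(p): for a correct implementation the
    empirical drop rate should lie within z standard errors of p."""
    n = len(drop_indicators)
    rate = sum(drop_indicators) / n
    stderr = math.sqrt(p * (1 - p) / n)
    return rate, abs(rate - p) <= z * stderr
```

An empirical rate that falls outside this band across many collected masks is evidence of the flawed decomposition; the same test, run against a patched build, should pass.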
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N
References
Timeline
Related Vulnerabilities
| CVE | CVSS | Summary | Relation |
|---|---|---|---|
| CVE-2024-5452 | 9.8 | pytorch-lightning: RCE via deepdiff Delta deserialization | Same package: torch |
| CVE-2022-45907 | 9.8 | PyTorch: RCE via unsafe eval in JIT annotations | Same package: torch |
| CVE-2023-43654 | 9.8 | TorchServe: SSRF + RCE via unrestricted model URL loading | Same package: torch |
| CVE-2022-0845 | 9.8 | pytorch-lightning: code injection enables full RCE | Same package: torch |
| CVE-2024-35198 | 9.8 | TorchServe: URL bypass enables arbitrary model loading | Same package: torch |