Paper 2512.12921v1

Cisco Integrated AI Security and Safety Framework Report

… threats now span content safety failures (e.g., harmful or deceptive outputs), model and data integrity compromise (e.g., poisoning, supply-chain tampering), runtime manipulations (e.g., prompt injection, tool and agent misuse), …

medium relevance tool
Paper 2512.10998v1

SCOUT: A Defense Against Data Poisoning Attacks in Fine-Tuned Language Models

Backdoor attacks pose significant security threats to language models by …

high relevance attack
Paper 2601.22308v2

Stealthy Poisoning Attacks Bypass Defenses in Regression Settings

… natural and physical sciences, yet their robustness to poisoning has received less attention. When it has, studies often assume unrealistic threat models and are thus less useful in practice.

high relevance attack
Paper 2601.04266v1

State Backdoor: Towards Stealthy Real-world Poisoning Attack on Vision-Language-Action Model in State Space

Vision-Language-Action (VLA) models are widely deployed in safety…

high relevance attack
Paper 2603.04859v1

Osmosis Distillation: Model Hijacking with the Fewest Samples

… generated by dataset distillation methods, where an adversary can perform a model hijacking attack with only a few poisoned samples in the synthetic dataset. To reveal this threat, we propose …

medium relevance benchmark
Paper 2603.12989v1

Test-Time Attention Purification for Backdoored Large Vision Language Models

… defenses across diverse datasets and backdoor attack types, while preserving the model's utility on both clean and poisoned samples.

medium relevance benchmark
Paper 2509.26032v2

Stealthy Yet Effective: Distribution-Preserving Backdoor Attacks on Graph Classification

… semantic deviation caused by label flipping, both of which make poisoned graphs easily detectable by anomaly detection models. To address this, we propose DPSBA, a clean-label backdoor framework that …

high relevance attack
Paper 2603.18034v1

Semantic Chameleon: Corpus-Dependent Poisoning Attacks and Defenses in RAG Systems

… documents are preferentially retrieved at inference time, enabling targeted manipulation of model outputs. We study gradient-guided corpus poisoning attacks against modern RAG pipelines and evaluate retrieval-layer defenses that …

high relevance attack
Paper 2602.11213v1

Transferable Backdoor Attacks for Code Models via Sharpness-Aware Adversarial Perturbation

… software development but remain vulnerable to backdoor attacks via poisoned training data. Existing backdoor attacks on code models face a fundamental trade-off between transferability and stealthiness. Static trigger-based …

high relevance attack
Paper 2602.06532v1

Dependable Artificial Intelligence with Reliability and Security (DAIReS): A Unified Syndrome Decoding Approach for Hallucination and Backdoor Trigger Detection

… models, including Large Language Models (LLMs), are characterized by a range of system-level attributes such as security and reliability. Recent studies have demonstrated that ML models are vulnerable …

medium relevance defense
Paper 2509.19921v2

On the Fragility of Contribution Score Computation in Federated Learning

… alter the final scores. Second, we explore vulnerabilities posed by poisoning attacks, where malicious participants strategically manipulate their model updates to inflate their own contribution scores or reduce the importance …

medium relevance benchmark
Paper 2603.01019v1

BadRSSD: Backdoor Attacks on Regularized Self-Supervised Diffusion Models

… backdoor attack targeting the representation layer of self-supervised diffusion models. Specifically, it hijacks the semantic representations of poisoned samples with triggers in Principal Component Analysis (PCA) space toward those …

high relevance attack
Paper 2601.05504v2

Memory Poisoning Attack and Defense on Memory Based LLM-Agents

Large language model agents equipped with persistent memory are vulnerable to memory poisoning attacks, where adversaries inject malicious instructions through query-only interactions that corrupt the agent's long-term memory …

high relevance attack
Paper 2509.21761v2

Backdoor Attribution: Elucidating and Controlling Backdoor in Language Models

Fine-tuned Large Language Models (LLMs) are vulnerable to backdoor attacks through data poisoning, yet the internal mechanisms governing these attacks remain a black box. Previous research on interpretability …

medium relevance attack
Paper 2602.04899v1

Phantom Transfer: Data-level Defences are Insufficient Against Data Poisoning

… data-level defences are insufficient for stopping sophisticated data poisoning attacks. We suggest that future work should focus on model audits and white-box security methods.

medium relevance attack
Paper 2602.02629v1

Trustworthy Blockchain-based Federated Learning for Electronic Health Records: Securing Participant Identity with Decentralized Identifiers and Verifiable Credentials

… patient data. Despite its potential, FL remains vulnerable to poisoning and Sybil attacks, in which malicious participants corrupt the global model or infiltrate the network using fake identities. While recent …

medium relevance benchmark
Paper 2602.19547v1

CIBER: A Comprehensive Benchmark for Security Evaluation of Code Interpreter Agents

… four major types of adversarial attacks: Direct/Indirect Prompt Injection, Memory Poisoning, and Prompt-based Backdoor. We evaluate six foundation models across two representative code interpreter agents (OpenInterpreter and OpenCodeInterpreter), incorporating …

medium relevance benchmark
Paper 2602.07200v1

BadSNN: Backdoor Attacks on Spiking Neural Networks via Adversarial Spiking Neuron

… converts input data into spikes following the Leaky Integrate-and-Fire (LIF) neuron model. This model includes several important hyperparameters, such as the membrane potential threshold and membrane time constant …

high relevance attack
Paper 2602.01942v1

Human Society-Inspired Approaches to Agentic AI Security: The 4C Framework

… software components. Although recent work has strengthened defenses against model- and pipeline-level vulnerabilities such as prompt injection, data poisoning, and tool misuse, these system-centric approaches may fail …

medium relevance tool
Paper 2510.09710v2

SeCon-RAG: A Two-Stage Semantic Filtering and Conflict-Free Framework for Trustworthy RAG

Retrieval-augmented generation (RAG) systems enhance large language models (LLMs) with external knowledge but are vulnerable to corpus poisoning and contamination attacks, which can compromise output integrity. Existing defenses often …

medium relevance benchmark
Page 4 of 10