Attack MEDIUM
Xunzhuo Liu, Huamin Chen, Samzong Lu +27 more
As large language models (LLMs) diversify across modalities, capabilities, and cost profiles, the problem of intelligent request routing -- selecting...
1 month ago cs.NI cs.AI
Attack MEDIUM
Kaiwen Wang, Xiaolin Chang, Yuehan Dong +1 more
Secure comparison is a fundamental primitive in multi-party computation, supporting privacy-preserving applications such as machine learning and data...
Attack HIGH
Nadav Kadvil, Malak Fares, Ayellet Tal
Large Vision-Language Models (LVLMs) can be vulnerable to adversarial images that subtly bias their outputs toward plausible yet incorrect responses....
Attack HIGH
Xiaochong Jiang, Shiqi Yang, Wenting Yang +2 more
Agentic systems built on large language models (LLMs) extend beyond text generation to autonomously retrieve information and invoke tools. This...
1 month ago cs.CR cs.AI
Attack HIGH
Amirhossein Farzam, Majid Behabahani, Mani Malek +2 more
Large language models (LLMs) remain vulnerable to jailbreak prompts that are fluent and semantically coherent, and therefore difficult to detect with...
Attack HIGH
Charles Ye, Jasmine Cui, Dylan Hadfield-Menell
Language models remain vulnerable to prompt injection attacks despite extensive safety training. We trace this failure to role confusion: models...
1 month ago cs.CL cs.AI cs.CR
Attack HIGH
Sieun Kim, Yeeun Jo, Sungmin Na +5 more
Red-teaming, where adversarial prompts are crafted to expose harmful behaviors and assess risks, offers a dynamic approach to surfacing underlying...
Attack HIGH
Shenyang Chen, Liuwan Zhu
Standard evaluations of backdoor attacks on text-to-image (T2I) models primarily measure trigger activation and visual fidelity. We challenge this...
1 month ago cs.CR cs.AI
Attack HIGH
Zafir Shamsi, Nikhil Chekuru, Zachary Guzman +1 more
Large Language Models (LLMs) are increasingly integrated into high-stakes applications, making robust safety guarantees a central practical and...
1 month ago cs.CL cs.AI
Attack MEDIUM
Diego Soi, Silvia Lucia Sanna, Lorenzo Pisu +2 more
In recent years, stealthy Android malware has increasingly adopted sophisticated techniques to bypass automatic detection mechanisms and harden...
Attack HIGH
Jingkai Guo, Chaitali Chakrabarti, Deliang Fan
Large language models (LLMs) are increasingly deployed in safety and security critical applications, raising concerns about their robustness to model...
1 month ago cs.CR cs.CL cs.LG
Attack HIGH
Manuel Wirth
As Large Language Models (LLMs) are increasingly integrated into automated decision-making pipelines, specifically within Human Resources (HR), the...
1 month ago cs.CR cs.AI
Attack LOW
Wyatt Benno, Alberto Centelles, Antoine Douchet +1 more
We present Jolt Atlas, a zero-knowledge machine learning (zkML) framework that extends the Jolt proving system to model inference. Unlike zkVMs...
1 month ago cs.CR cs.AI
Attack HIGH
Xinhao Deng, Jiaqing Wu, Miao Chen +3 more
Agent hijacking, highlighted by OWASP as a critical threat to the Large Language Model (LLM) ecosystem, enables adversaries to manipulate execution...
1 month ago cs.AI cs.LG
Attack MEDIUM
Justin Albrethsen, Yash Datta, Kunal Kumar +1 more
While Large Language Model (LLM) capabilities have scaled, safety guardrails remain largely stateless, treating multi-turn dialogues as a series of...
1 month ago cs.AI cs.ET cs.LG
Attack MEDIUM
Nils Palumbo, Sarthak Choudhary, Jihye Choi +2 more
LLM-based agents are increasingly being deployed in contexts requiring complex authorization policies: customer service protocols, approval...
1 month ago cs.CR cs.AI cs.MA
Attack LOW
Adib Sakhawat, Fardeen Sadab
Evaluating the social intelligence of Large Language Models (LLMs) increasingly requires moving beyond static text generation toward dynamic,...
Attack HIGH
Thomas Michel, Debabrota Basu, Emilie Kaufmann
Modern AI models are not static. They go through multiple updates in their lifecycles. Thus, exploiting the model dynamics to create stronger...
1 month ago cs.LG cs.CR math.ST
Attack HIGH
Yiwen Lu
Federated Learning (FL) enables collaborative model training without exposing clients' private data, and has been widely adopted in privacy-sensitive...
1 month ago cs.CR cs.DC