Benchmark MEDIUM
Tomer Kordonsky, Maayan Yamin, Noam Benzimra +2 more
LLMs are increasingly used for code generation, but their outputs often follow recurring templates that can induce predictable vulnerabilities. We...
3 months ago cs.CR cs.AI
PDF
Defense MEDIUM
Rohan Saxena
Fine-tuning language models on narrowly harmful data causes emergent misalignment (EM) -- behavioral failures extending far beyond training...
3 months ago cs.CL cs.AI
PDF
Attack MEDIUM
Patrick Cooper, Alireza Nadali, Ashutosh Trivedi +1 more
Large language models (LLMs) are known to exhibit brittle behavior under adversarial prompts and jailbreak attacks, even after extensive alignment...
3 months ago cs.CL cs.AI cs.CR
PDF
Benchmark MEDIUM
Najmul Hasan, Prashanth BusiReddyGari
The Uniform Resource Locator (URL), introduced in a connectivity-first era to define access and locate resources, remains historically limited,...
3 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Rodrigo Tertulino, Ricardo Almeida, Laercio Alencar
The digitization of healthcare has generated massive volumes of Electronic Health Records (EHRs), offering unprecedented opportunities for training...
3 months ago cs.CR cs.AI cs.LG
PDF
Attack MEDIUM
Ching-Yun Ko, Pin-Yu Chen
Modern artificial intelligence (AI) models are deployed on inference engines to optimize runtime efficiency and resource allocation, particularly for...
3 months ago cs.LG cs.CL cs.PL
PDF
Defense MEDIUM
Zeming Wei, Zhixin Zhang, Chengcan Wu +3 more
Recent advancements in LLMs have led to significant breakthroughs in various AI applications. However, their sophisticated capabilities also...
3 months ago cs.SE cs.AI cs.CL
PDF
Defense MEDIUM
Ali Mahdavi, Santa Aghapour, Azadeh Zamanifar +1 more
Existing Byzantine robust aggregation mechanisms typically rely on full-dimensional gradient comparisons or pairwise distance computations, resulting...
3 months ago cs.CR cs.AI
PDF
Tool MEDIUM
Alsharif Abuadbba, Nazatul Sultan, Surya Nepal +1 more
AI is moving from domain-specific autonomy in closed, predictable settings to large-language-model-driven agents that plan and act in open,...
3 months ago cs.CR cs.AI
PDF
Defense MEDIUM
Siqi Wen, Shu Yang, Shaopeng Fu +3 more
Vision Language Action (VLA) models close the perception action loop by translating multimodal instructions into executable behaviors, but this very...
Survey MEDIUM
Yilin Geng, Omri Abend, Eduard Hovy +1 more
It is not only what we ask large language models (LLMs) to do that matters, but also how we prompt. Phrases like "This is urgent" or "As your...
3 months ago cs.CL cs.AI
PDF
Benchmark MEDIUM
Yen-Shan Chen, Zhi Rui Tam, Cheng-Kuang Wu +1 more
Current evaluations of LLM safety predominantly rely on severity-based taxonomies to assess the harmfulness of malicious queries. We argue that this...
3 months ago cs.CR cs.CL cs.CY
PDF
Benchmark MEDIUM
Max Manolov, Tony Gao, Siddharth Shukla +2 more
Large language models (LLMs) are increasingly used to assist developers with code, yet their implementations of cryptographic functionality often...
3 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Poushali Sengupta, Shashi Raj Pandey, Sabita Maharjan +1 more
Large language models (LLMs) generate outputs by utilizing extensive context, which often includes redundant information from prompts, retrieved...
3 months ago cs.CL cs.AI stat.ML
PDF
Attack MEDIUM
Eliron Rahimi, Elad Hirshel, Rom Himelstein +3 more
Diffusion language models (DLMs) have recently emerged as a promising alternative to autoregressive (AR) models, offering parallel decoding and...
3 months ago cs.LG cs.AI
PDF
Benchmark MEDIUM
Abhilekh Borah, Shubhra Ghosh, Kedar Joshi +2 more
Tasks such as solving arithmetic equations, evaluating truth tables, and completing syllogisms are handled well by large language models (LLMs) in...
Attack MEDIUM
Xinyi Hou, Shenao Wang, Yifan Zhang +4 more
Agentic AI systems built around large language models (LLMs) are moving away from closed, single-model frameworks and toward open ecosystems that...
Attack MEDIUM
Manveer Singh Tamber, Hosna Oyarhoseini, Jimmy Lin
Research on adversarial robustness in language models is currently fragmented across applications and attacks, obscuring shared vulnerabilities. In...
3 months ago cs.CL cs.IR
PDF
Tool MEDIUM
Naen Xu, Hengyu An, Shuo Shi +7 more
Recent advancements in large language models (LLMs) have significantly enhanced the capabilities of collaborative multi-agent systems, enabling them...
3 months ago cs.CL cs.AI cs.CR
PDF