Defense HIGH
Hao Zhu, Jia Li, Cuiyun Gao +7 more
Large language models (LLMs) have achieved remarkable progress in code understanding tasks. However, they demonstrate limited performance in...
4 months ago cs.SE cs.CR
Benchmark MEDIUM
Shiyin Lin
Software fuzzing has become a cornerstone in automated vulnerability discovery, yet existing mutation strategies often lack semantic awareness,...
4 months ago cs.CR cs.AI
Defense MEDIUM
Mohammad Atif Quamar, Mohammad Areeb, Mikhail Kuznetsov +2 more
Aligning large language models (LLMs) with human values is crucial for safe deployment. Inference-time techniques offer granular control over...
Attack HIGH
Geoff McDonald, Jonathan Bar Or
Large Language Models (LLMs) are increasingly deployed in sensitive domains including healthcare, legal services, and confidential communications,...
4 months ago cs.CR cs.AI
Other LOW
Wendong Xu, Chujie Chen, He Xiao +8 more
Large Language Model (LLM) inference services demand exceptionally high availability and low latency, yet multi-GPU Tensor Parallelism (TP) makes...
Attack MEDIUM
Botao 'Amber' Hu, Helena Rong
As the "agentic web" takes shape, with billions of AI agents (often LLM-powered) autonomously transacting and collaborating, trust shifts from human...
4 months ago cs.HC cs.AI cs.MA
Attack HIGH
Yize Liu, Yunyun Hou, Aina Sui
Large Language Models (LLMs) have been widely deployed across various applications, yet their potential security and ethical risks have raised...
4 months ago cs.CR cs.CL
Attack HIGH
Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan +1 more
Open-weight models provide researchers and developers with accessible foundations for diverse downstream applications. We tested the safety and...
4 months ago cs.CR cs.LG
Defense LOW
Bryce-Allen Bagley, Navin Khoshnan
The complexity of human cognition has meant that psychology makes more use of theory and conceptual models than perhaps any other biomedical field...
4 months ago q-bio.NC cs.CL cs.CY
Tool LOW
Qi Li, Jianjun Xu, Pingtao Wei +8 more
With the widespread application of Large Language Models (LLMs), their associated security issues have become increasingly prominent, severely...
Attack HIGH
Rishi Rajesh Shah, Chen Henry Wu, Shashwat Saxena +3 more
Recent advances in long-context language models (LMs) have enabled million-token inputs, expanding their capabilities across complex tasks like...
4 months ago cs.CR cs.AI cs.CL
Benchmark MEDIUM
Jon Kutasov, Chloe Loughridge, Yuqi Sun +4 more
As AI systems become more capable and widely deployed as agents, ensuring their safe operation becomes critical. AI control offers one approach to...
Attack HIGH
Chloe Loughridge, Paul Colognese, Avery Griffin +3 more
As AI deployments become more complex and high-stakes, it becomes increasingly important to be able to estimate their risk. AI control is one...
Survey LOW
Gian Maria Campedelli
While the possibility of reaching human-like Artificial Intelligence (AI) remains controversial, the likelihood that the future will be characterized...
4 months ago cs.CY cs.AI cs.HC
Attack MEDIUM
W. K. M Mithsara, Ning Yang, Ahmed Imteaj +2 more
The widespread integration of wearable sensing devices in Internet of Things (IoT) ecosystems, particularly in healthcare, smart homes, and...
4 months ago cs.LG cs.CR
Defense LOW
Xiumei Deng, Zehui Xiong, Binbin Chen +3 more
Large language models (LLMs) are proliferating rapidly at the edge, delivering intelligent capabilities across diverse application scenarios....
4 months ago cs.DC cs.AI cs.LG
Attack MEDIUM
Roy Rinberg, Adam Karvonen, Alexander Hoover +2 more
As large AI models become increasingly valuable assets, the risk of model weight exfiltration from inference servers grows accordingly. An attacker...
4 months ago cs.CR cs.LG
Benchmark MEDIUM
Patrick Karlsen, Even Eilertsen
This paper investigates some of the risks introduced by "LLM poisoning," the intentional or unintentional introduction of malicious or biased data...
4 months ago cs.CR cs.AI
Attack HIGH
Aashray Reddy, Andrew Zagula, Nicholas Saban
Large Language Models (LLMs) remain vulnerable to jailbreaking attacks where adversarial prompts elicit harmful outputs. Yet most evaluations focus...
4 months ago cs.CL cs.AI cs.CR
Tool HIGH
Xu Liu, Yan Chen, Kan Ling +4 more
The widespread deployment of Large Language Models (LLMs) as public-facing web services and APIs has made their security a core concern for the web...
4 months ago cs.CR cs.LG