Attack MEDIUM
George Mikros
Large language models (LLMs) present a dual challenge for forensic linguistics. They serve as powerful analytical tools enabling scalable corpus...
3 months ago cs.CL cs.CY
Attack MEDIUM
Sima Jafarikhah, Daniel Thompson, Eva Deans +2 more
Manual vulnerability scoring, such as assigning Common Vulnerability Scoring System (CVSS) scores, is a resource-intensive process that is often...
3 months ago cs.CR cs.AI cs.PL
Attack MEDIUM
Donghang Duan, Xu Zheng, Yuefeng He +3 more
Current LLM-based text anonymization frameworks usually rely on remote API services from powerful LLMs, which creates an inherent privacy paradox:...
3 months ago cs.CR cs.CL
Attack HIGH
Songping Wang, Rufan Qian, Yueming Lyu +5 more
Image-to-Video (I2V) generation synthesizes dynamic visual content from image and text inputs, providing significant creative control. However, the...
Attack HIGH
Chenyu Zhang, Yiwen Ma, Lanjun Wang +3 more
Text-to-image (T2I) models commonly incorporate defense mechanisms to prevent the generation of sensitive images. Unfortunately, recent jailbreaking...
3 months ago cs.CR cs.AI cs.CV
Attack HIGH
Shiji Zhao, Shukun Xiong, Yao Huang +7 more
Multimodal Large Language Models (MLLMs) are widely used in various fields due to their powerful cross-modal comprehension and generation...
Attack HIGH
Weikai Lu, Ziqian Zeng, Kehua Zhang +5 more
Multimodal Large Language Models (MLLMs) are increasingly vulnerable to multimodal Indirect Prompt Injection (IPI) attacks, which embed malicious...
3 months ago cs.CR cs.MM
Attack HIGH
Fan Yang
Large Language Models (LLMs) have demonstrated exceptional performance across various tasks, but their security vulnerabilities can be exploited by...
3 months ago cs.CR cs.AI
Attack MEDIUM
Jinbo Liu, Defu Cao, Yifei Wei +6 more
Graph topology is a fundamental determinant of memory leakage in multi-agent LLM systems, yet its effects remain poorly quantified. We introduce MAMA...
3 months ago cs.CR cs.AI cs.CL
Attack MEDIUM
Itay Yona, Amir Sarid, Michael Karasik +1 more
We introduce Doublespeak, a simple in-context representation hijacking attack against large language models (LLMs). The attack works by...
3 months ago cs.CL cs.AI cs.CR
Attack MEDIUM
Hanxiu Zhang, Yue Zheng
The protection of Intellectual Property (IP) in Large Language Models (LLMs) represents a critical challenge in contemporary AI research. While...
3 months ago cs.CR cs.AI cs.CL
Attack HIGH
Jun Leng, Yu Liu, Litian Zhang +3 more
Large Language Models (LLMs) serve as the backbone of modern AI systems, yet they remain susceptible to adversarial jailbreak attacks. Consequently,...
Attack MEDIUM
Thomas Rivasseau
Current research on operator control of Large Language Models improves model robustness against adversarial attacks and misbehavior by training on...
Attack HIGH
Yuan Xiong, Ziqi Miao, Lijun Li +3 more
While Multimodal Large Language Models (MLLMs) show remarkable capabilities, their safety alignments are susceptible to jailbreak attacks. Existing...
3 months ago cs.CV cs.CL cs.CR
Attack HIGH
Afshin Khadangi, Hanna Marxen, Amir Sartipi +2 more
Frontier large language models (LLMs) such as ChatGPT, Grok and Gemini are increasingly used for mental-health support with anxiety, trauma and...
3 months ago cs.CY cs.AI
Attack HIGH
Ziyi Tong, Feifei Sun, Le Minh Nguyen
Multimodal Large Language Models (MLLMs) are emerging as one of the foundational tools in an expanding range of applications. Consequently,...
3 months ago cs.CR cs.AI
Attack HIGH
Yuanhe Zhang, Weiliu Wang, Zhenhong Zhou +5 more
Large Language Model (LLM)-based agents have demonstrated remarkable capabilities in reasoning, planning, and tool usage. The recently proposed Model...
3 months ago cs.CR cs.CL
Attack MEDIUM
Adel Chehade, Edoardo Ragusa, Paolo Gastaldo +1 more
Traffic classification (TC) plays a critical role in cybersecurity, particularly in IoT and embedded contexts, where inspection must often occur...
3 months ago cs.NI cs.CR cs.LG
Attack MEDIUM
Zixia Wang, Gaojie Jin, Jia Hu +1 more
Recent advancements in Large Language Models (LLMs) have led to their widespread adoption in daily applications. Despite their impressive...
3 months ago cs.LG cs.AI
Attack MEDIUM
Alexander Boyd, Franz Nowak, David Hyland +2 more
World models have been recently proposed as sandbox environments in which AI agents can be trained and evaluated before deployment. Although...