Defense LOW
Alessio Benavoli, Alessandro Facchini, Marco Zaffalon
How can we ensure that AI systems are aligned with human values and remain safe? We can study this problem through the frameworks of the AI...
4 months ago cs.AI cs.GT
PDF
Defense HIGH
Toqeer Ali Syed, Mohammad Riyaz Belgaum, Salman Jan +2 more
Software supply chain attacks are increasingly focused on trusted development and delivery procedures, so conventional post-build...
4 months ago cs.CR cs.AI
PDF
Benchmark HIGH
Manu, Yi Guo, Kanchana Thilakarathna +5 more
Large Language Models (LLMs) can be driven into over-generation, emitting thousands of tokens before producing an end-of-sequence (EOS) token. This...
4 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Jiawei Liu, Zhuo Chen, Rui Zhu +4 more
Neural ranking models have achieved remarkable progress and are now widely deployed in real-world applications such as Retrieval-Augmented Generation...
4 months ago cs.CR cs.IR
PDF
Defense LOW
Xingwei Ma, Shiyang Feng, Bo Zhang +1 more
Remote sensing change detection (RSCD), a complex multi-image inference task, traditionally uses pixel-based operators or encoder-decoder networks...
4 months ago cs.CV cs.AI
PDF
Attack HIGH
Zhen Liang, Hai Huang, Zhengkui Chen
Large language models (LLMs), such as ChatGPT, have achieved remarkable success across a wide range of fields. However, their trustworthiness remains...
4 months ago cs.CR cs.AI
PDF
Tool MEDIUM
Armstrong Foundjem, Lionel Nganyewou Tidjon, Leuson Da Silva +1 more
Machine learning (ML) underpins foundation models in finance, healthcare, and critical infrastructure, making them targets for data poisoning, model...
4 months ago cs.CR cs.LG cs.MA
PDF
Benchmark MEDIUM
Karolina Korgul, Yushi Yang, Arkadiusz Drohomirecki +7 more
Web-based agents powered by large language models are increasingly used for tasks such as email management or professional networking. Their reliance...
4 months ago cs.HC cs.AI cs.MA
PDF
Benchmark LOW
Kerem Zaman, Shashank Srivastava
Recent work, using the Biasing Features metric, labels a CoT as unfaithful if it omits a prompt-injected hint that affected the prediction. We argue...
4 months ago cs.CL cs.AI cs.LG
PDF
Attack HIGH
Soham Padia, Dhananjay Vaidya, Ramchandra Mangrulkar
Securing blockchain-enabled IoT networks against sophisticated adversarial attacks remains a critical challenge. This paper presents a trust-based...
4 months ago cs.CR cs.LG cs.MA
PDF
Benchmark HIGH
Woorim Han, Yeongjun Kwak, Miseon Yu +4 more
Learning-based automated vulnerability repair (AVR) techniques that utilize fine-tuned language models have shown promise in generating vulnerability...
Attack HIGH
Zongmin Zhang, Zhen Sun, Yifan Liao +5 more
Prompt-driven Video Segmentation Foundation Models (VSFMs) such as SAM2 are increasingly deployed in applications like autonomous driving and digital...
4 months ago cs.CV cs.CR
PDF
Benchmark LOW
Vahideh Zolfaghari
Background: Large language models (LLMs) are increasingly deployed in medical consultations, yet their safety under realistic user pressures remains...
4 months ago cs.CL cs.AI
PDF
Attack LOW
Jiayu Hu, Beibei Li, Jiangwei Xia +3 more
While Vision-Language Models (VLMs) have garnered increasing attention in the AI community due to their promising practical applications, they...
4 months ago cs.CV cs.LG
PDF
Benchmark LOW
Marc S. Montalvo, Hamed Yaghoobian
Recent advances in large language models (LLMs) are transforming data-intensive domains, with finance representing a high-stakes environment where...
4 months ago cs.MA cs.AI
PDF
Benchmark HIGH
Chinmay Pushkar, Sanchit Kabra, Dhruv Kumar +1 more
Large Language Models (LLMs) have demonstrated significant potential in automated software security, particularly in vulnerability detection....
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Mengqi He, Xinyu Tian, Xin Shen +4 more
Vision-language models (VLMs) achieve remarkable performance but remain vulnerable to adversarial attacks. Entropy, a measure of model uncertainty,...
4 months ago cs.CV cs.LG
PDF
Attack MEDIUM
Tsogt-Ochir Enkhbayar
Warning-framed content in training data (e.g., "DO NOT USE - this code is vulnerable") turns out not to teach language models to avoid the...
4 months ago cs.LG cs.CL cs.CR
PDF
Defense LOW
Eranga Bandara, Tharaka Hewa, Ross Gore +12 more
Agentic AI represents a major shift in how autonomous systems reason, plan, and execute multi-step tasks through the coordination of Large Language...
Attack MEDIUM
Tian Li, Bo Lin, Shangwen Wang +1 more
Retrieval-Augmented Code Generation (RACG) is increasingly adopted to enhance Large Language Models for software development, yet its security...
4 months ago cs.CR cs.SE
PDF