Defense MEDIUM
Saeid Jamshidi, Omar Abdul Wahab, Foutse Khomh +1 more
Federated learning (FL) has become an effective paradigm for privacy-preserving, distributed Intrusion Detection Systems (IDS) in cyber-physical and...
1 month ago cs.CR cs.AI
PDF
Defense MEDIUM
Edward Y. Chang, Longling Geng
Inference-time scaling can amplify reasoning pathologies: sycophancy, rung collapse, and premature certainty. We present RAudit, a diagnostic...
Tool LOW
Saeid Jamshidi, Kawser Wazed Nafi, Arghavan Moradi Dakhel +3 more
Large Language Models (LLMs) are increasingly adopted in sensitive domains such as healthcare and financial institutions' data analytics; however,...
1 month ago cs.CR cs.AI
PDF
Attack MEDIUM
Haitham S. Al-Sinani, Chris J. Mitchell
Wireless ethical hacking relies heavily on skilled practitioners manually interpreting reconnaissance results and executing complex, time-sensitive...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Zhixiang Zhang, Zesen Liu, Yuchong Xie +2 more
Semantic caching has emerged as a pivotal technique for scaling LLM applications, widely adopted by major providers including AWS and Microsoft. By...
1 month ago cs.CR cs.AI
PDF
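The semantic-caching entry above describes the general technique: rather than keying a cache on exact prompt strings, prompts are embedded and a new query reuses a cached response when its embedding is sufficiently similar to a prior prompt's. A minimal sketch of that idea follows; it is an illustration of the technique in general, not this paper's implementation, and the toy character-frequency `embed` function is a stand-in for a real embedding model.

```python
import math

def embed(text):
    # Toy bag-of-letters embedding; a real system would call an
    # embedding model here. (Illustrative stand-in only.)
    vec = [0.0] * 26
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, cached_response)

    def get(self, prompt):
        qv = embed(prompt)
        for ev, response in self.entries:
            if cosine(qv, ev) >= self.threshold:
                return response  # cache hit: the LLM call is skipped
        return None  # cache miss: caller queries the LLM and put()s the result

    def put(self, prompt, response):
        self.entries.append((embed(prompt), response))

cache = SemanticCache(threshold=0.9)
cache.put("What is the capital of France?", "Paris")
hit = cache.get("what is the capital of france")        # paraphrase-level match
miss = cache.get("Explain TLS certificate validation")  # unrelated query
```

The similarity threshold is the security-relevant knob: set too low, semantically different prompts can collide and return another user's cached response, which is the attack surface the paper appears to study.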
Defense MEDIUM
Yanghao Su, Wenbo Zhou, Tianwei Zhang +4 more
Emergent Misalignment refers to a failure mode in which fine-tuning large language models (LLMs) on narrowly scoped data induces broadly misaligned...
1 month ago cs.CL cs.AI cs.CR
PDF
Benchmark LOW
Wei Chen, Zhiyuan Peng, Xin Yin +4 more
Smart contracts are the backbone of the decentralized web, yet ensuring their functional correctness and security remains a critical challenge. While...
Benchmark HIGH
Yunpeng Xiong, Ting Zhang
Static Application Security Testing (SAST) tools are essential for identifying software vulnerabilities, but they often produce a high volume of...
Benchmark MEDIUM
Evgeny Grigorenko, David Stanojević, David Ilić +2 more
Modern Integrated Development Environments (IDEs) increasingly leverage Large Language Models (LLMs) to provide advanced features like code...
1 month ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Farnaz Soltaniani, Shoaib Razzaq, Mohammad Ghafari
Early detection of security bug reports (SBRs) is critical for timely vulnerability mitigation. We present an evaluation of prompt-based engineering...
1 month ago cs.CR cs.AI cs.LG
PDF
Tool MEDIUM
Waleed Khan Mohammed, Zahirul Arief Irfan Bin Shahrul Anuar, Mousa Sufian Mousa Mitani +2 more
Advanced Persistent Threats (APTs) are among the most challenging cyberattacks to detect. They are carried out by highly skilled attackers who...
1 month ago cs.CR cs.AI
PDF
Other LOW
Eduardo C. Garrido-Merchán, Adriana Constanza Cirera Tirschtigel
As Large Language Models become ubiquitous sources of health information, understanding their capacity to accurately represent stigmatized conditions...
Defense MEDIUM
Charles Westphal, Keivan Navaie, Fernando E. Rosas
Fine-tuned LLMs can covertly encode prompt secrets into outputs via steganographic channels. Prior work demonstrated this threat but relied on...
1 month ago cs.CR cs.AI
PDF
Defense MEDIUM
Haoyun Yang, Ronghong Huang, Yong Fang +4 more
Transport Layer Security (TLS) is fundamental to secure online communication, yet vulnerabilities in certificate validation that enable...
Attack LOW
Yilong Huang, Songze Li
Diffusion-based face swapping achieves state-of-the-art performance, yet it also exacerbates the potential harm of malicious face swapping to violate...
1 month ago cs.CV cs.CR cs.LG
PDF
Benchmark HIGH
Ivan K. Tung, Yu Xiang Shi, Alex Chien +2 more
Creating attack paths for cyber defence exercises requires substantial expert effort. Existing automation requires vulnerability graphs or exploit...
1 month ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Jaehee Kim, Pilsung Kang
Modern LLMs are increasingly accessed via black-box APIs, requiring users to transmit sensitive prompts, outputs, and fine-tuning data to external...
1 month ago cs.CR cs.CL
PDF
Benchmark LOW
Yanlin Wang, Ziyao Zhang, Chong Wang +5 more
Large Language Models (LLMs) have demonstrated remarkable capabilities in code generation, but their proficiency in producing secure code remains a...
1 month ago cs.CR cs.SE
PDF
Attack MEDIUM
Mingqian Feng, Xiaodong Liu, Weiwei Yang +3 more
Large Language Models (LLMs) are typically evaluated for safety under single-shot or low-budget adversarial prompting, which underestimates...
Benchmark HIGH
Miao Lin, Feng Yu, Rui Ning +6 more
Deep neural networks are highly susceptible to backdoor attacks, yet most defense methods to date rely on balanced data, overlooking the pervasive...
1 month ago cs.CR cs.CV cs.LG
PDF