Benchmark MEDIUM
Pei-Yu Tseng, Lan Zhang, ZihDwo Yeh +3 more
Cyber Threat Intelligence (CTI) reports contain Indicators of Compromise (IOCs) that are critical for security operations. To operationalize these...
Tool MEDIUM
Shangkun Che, Silin Du, Ge Gao
The widespread use of Large Language Models (LLMs) in text generation has raised increasing concerns about intellectual property disputes....
4 weeks ago cs.CR cs.CL
Attack MEDIUM
Hongru Song, Yu-An Liu, Ruqing Zhang +4 more
Retrieval-augmented generation (RAG) enhances large language model (LLM) reasoning by retrieving external documents, but also opens up new attack...
Attack MEDIUM
Anes Abdennebi, Nadjia Kara, Laaziz Lahlou
The applications of Generative Artificial Intelligence (GenAI) and their intersections with data-driven fields, such as healthcare, finance,...
4 weeks ago cs.CR cs.AI
Defense MEDIUM
Willy Carlos Tchuitcheu, Tan Lu, Ann Dooms
Historical approaches to Table Representation Learning (TRL) have largely adopted the sequential paradigms of Natural Language Processing (NLP). We...
Defense MEDIUM
Adam Stein, Davis Brown, Hamed Hassani +2 more
To identify safety violations, auditors often search over large sets of agent traces. This search is difficult because failures are often rare,...
4 weeks ago cs.AI cs.CL
Benchmark MEDIUM
Ricardo Bessa, Rui Claro, João Trindade +1 more
Large Language Models (LLMs) are redefining offensive cybersecurity by allowing the generation of harmful machine code with minimal human...
Defense MEDIUM
Junxiao Yang, Haoran Liu, Jinzhe Tu +9 more
Large language models (LLMs) often demonstrate strong safety performance in high-resource languages, yet exhibit severe vulnerabilities when queried...
4 weeks ago cs.LG cs.AI cs.CL
Benchmark MEDIUM
Hanbo Huang, Xuan Gong, Yiran Zhang +2 more
Large language model (LLM) watermarking has emerged as a promising approach for detecting and attributing AI-generated text, yet its robustness to...
Benchmark MEDIUM
Ricardo Bessa, Rui Claro, João Trindade +1 more
The application of Machine Learning techniques in code generation is now a common practice for most developers. Tools such as ChatGPT from OpenAI...
Other MEDIUM
Yiran Ling, Wenxuan Li, Siying Dong +5 more
Robot grasping of desktop objects is widely used in intelligent manufacturing, logistics, and agriculture. Although vision-language models (VLMs) show...
Attack MEDIUM
Shuhao Zhang, Yuli Chen, Jiale Han +2 more
Watermarking provides a critical safeguard for large language model (LLM) services by facilitating the detection of LLM-generated text....
1 month ago cs.CR cs.AI
Benchmark MEDIUM
Xiaomeng Hu, Yinger Zhang, Fei Huang +7 more
AI agents are expected to perform professional work across hundreds of occupational domains (from emergency department triage to nuclear reactor...
Benchmark MEDIUM
Yuchen Chen, Yuan Xiao, Chunrong Fang +2 more
The proliferation of large language models for code (CodeLMs) and open-source contributions has heightened concerns over unauthorized use of source...
Defense MEDIUM
Xuwei Ding, Skylar Zhai, Linxin Song +6 more
Computer-use agents (CUAs) can now autonomously complete complex tasks in real digital environments, but when misled, they can also be used to...
1 month ago cs.CR cs.AI
Benchmark MEDIUM
Wenhao Yuan, Chenchen Lin, Jian Chen +3 more
In large language model (LLM) agents, reasoning trajectories are treated as reliable internal beliefs for guiding actions and updating memory....
1 month ago cs.AI cs.CL
Attack MEDIUM
Nam Duong Tran, Phi Le Nguyen
Recent advances in Vision-Language Models (VLMs) have greatly enhanced the integration of visual perception and linguistic reasoning, driving rapid...
1 month ago cs.CV cs.AI
Attack MEDIUM
Nicolás E. Díaz Ferreyra, Monika Swetha Gurupathi, Zadia Codabux +2 more
Generative Artificial Intelligence (GenAI) has become a central component of many development tools (e.g., GitHub Copilot) that support software...
1 month ago cs.SE cs.CR cs.HC
Defense MEDIUM
Weiwei Qi, Zefeng Wu, Tianhang Zheng +4 more
Ensuring Large Language Model (LLM) safety is crucial, yet the lack of a clear understanding about safety mechanisms hinders the development of...
Attack MEDIUM
Labani Halder, Payel Sadhukhan, Sarbani Palit
Ensuring reliability in adversarial settings necessitates treating privacy as a foundational component of data-driven systems. While differential...
1 month ago cs.CR cs.AI cs.LG