📝 Selected Publications
(*: Equal Contribution)

Raconteur: A Knowledgeable, Insightful, and Portable LLM-Powered Shell Command Explainer. Jiangyi Deng*, Xinfeng Li*, Yanjiao Chen, Yijie Bai, Haiqin Weng, Yan Liu, Tao Wei, Wenyuan Xu. In Proceedings of the Network and Distributed System Security Symposium, NDSS 2025 (CCF-A, Big4) [Website] [Dataset]
Raconteur is the first tool to use Large Language Models (LLMs) to explain shell commands clearly. It provides detailed insights into what a command does and why, aligning its explanations with the MITRE ATT&CK framework. Equipped with professional cybersecurity knowledge, Raconteur delivers accurate explanations, and a documentation retriever allows it to handle previously unseen commands. Extensive evaluation shows that Raconteur provides unique insights that help security analysts better understand the intention behind commands.

SafeGen: Mitigating Sexually Explicit Content Generation in Text-to-Image Models. Xinfeng Li, Yuchen Yang, Jiangyi Deng, Chen Yan, Yanjiao Chen, Xiaoyu Ji, Wenyuan Xu. In Proceedings of the ACM Conference on Computer and Communications Security, CCS 2024 (CCF-A, Big4) [Code] [Pretrained Weights] [Paper]
As text-to-image (T2I) technologies advance, their misuse for generating sexually explicit content has become a major concern. SafeGen effectively mitigates this risk by removing explicit visual representations from the model itself, making it safe regardless of the adversarial text input and outperforming eight other protection approaches. SafeGen adjusts the model’s visual self-attention layers so that the high-quality generation of benign images is not compromised.

SafeEar: Content Privacy-Preserving Audio Deepfake Detection. Xinfeng Li, Kai Li, Yifan Zheng, Chen Yan, Xiaoyu Ji, Wenyuan Xu. In Proceedings of the ACM Conference on Computer and Communications Security, CCS 2024 (CCF-A, Big4) [Website] [Dataset] [Code]
To our knowledge, SafeEar is the first content privacy-preserving audio deepfake detection framework. As audio deepfakes and user privacy concerns have become increasingly significant societal issues, we demonstrate how to achieve reliable deepfake detection while preventing both machine and human adversaries from eavesdropping on sensitive user speech content. To facilitate future research, we also developed a comprehensive multilingual deepfake dataset (with more than 1,500,000 genuine & deepfake audio samples) using advanced TTS/VC techniques.

Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time. Xinfeng Li, Chen Yan, Xuancun Lu, Xiaoyu Ji, Wenyuan Xu. In Proceedings of the Network and Distributed System Security Symposium, NDSS 2024 (CCF-A, Big4) [Website 1] [Website 2] [Code]
- First completely inaudible adversarial perturbation attacks, increasing the attack range to 10 meters.
- First attempt at ultrasound transformation modeling.
- Significant improvement in attack distances, universality, and stealthiness compared with prior adversarial example attacks.

Learning Normality is Enough: A Software-based Mitigation against Inaudible Voice Attacks. Xinfeng Li, Xiaoyu Ji, Chen Yan, Chaohao Li, Yichen Li, Zhenning Zhang, Wenyuan Xu. In Proceedings of the 32nd USENIX Security Symposium, USENIX Security 2023 (CCF-A, Big4) [Website]
- Unsupervised, software-based mitigation that can instantly protect diverse legacy devices.
- Universal detection performance demonstrated on a wide range of devices.

Neural Invisibility Cloak: Concealing Adversary in Images via Compromised AI-driven Image Signal Processing. Wenjun Zhu, Xiaoyu Ji, Xinfeng Li, Qihang Chen, Kun Wang, Xinyu Li, Ruoyan Xu, Wenyuan Xu. USENIX Security 2025 (CCF-A, Big4)

LightAntenna: Characterizing the Limits of Fluorescent Lamp-Induced Electromagnetic Interference. Fengchen Yang, Wenze Cui, Xinfeng Li, Chen Yan, Xiaoyu Ji, Wenyuan Xu. NDSS 2025 (CCF-A, Big4)

The Eye of Sherlock Holmes: Uncovering User Private Attribute Profiling via Vision-Language Model Agentic Framework. Feiran Liu, Yuzhe Zhang, Xinyi Huang, Yinan Peng, Xinfeng Li (Corresponding Author), Lixu Wang, Yutong Shen, Ranjie Duan, Simeng Qin, Xiaojun Jia, Qingsong Wen, Wei Dong. ACM Multimedia, MM 2025 (CCF-A)

Heuristic-Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models. Teng Ma, Xiaojun Jia, Ranjie Duan, Xinfeng Li, Yihao Huang, Zhixuan Chu, Yang Liu, Wenqi Ren. ICCV 2025 (CCF-A)

Legilimens: Practical and Unified Content Moderation for Large Language Model Services. Jialin Wu, Jiangyi Deng, Shengyuan Pang, Yanjiao Chen, Jiayang Xu, Xinfeng Li, Wenyuan Xu. ACM CCS 2024 (CCF-A, Big4) [Code]

Scoring Metrics for Assessing Voiceprint Distinctiveness based on Speech Content and Rate. Ruiwen He, Yushi Cheng, Junning Ze, Xinfeng Li, Xiaoyu Ji, Wenyuan Xu. IEEE Transactions on Dependable and Secure Computing, TDSC 2024 (CCF-A)

Detecting Inaudible Voice Commands via Acoustic Attenuation by Multi-channel Microphones. Xiaoyu Ji, Guoming Zhang, Xinfeng Li, Gang Qu, Xiuzhen Cheng, Wenyuan Xu. IEEE Transactions on Dependable and Secure Computing, TDSC 2023 (CCF-A)

The Silent Manipulator: A Practical and Inaudible Backdoor Attack against Speech Recognition Systems. Zhicong Zheng, Xinfeng Li, Chen Yan, Xiaoyu Ji, Wenyuan Xu. ACM Multimedia, MM 2023 (CCF-A)

Enrollment-stage Backdoor Attacks on Speaker Recognition Systems via Adversarial Ultrasound. Xinfeng Li, Junning Ze, Chen Yan, Yushi Cheng, Xiaoyu Ji, Wenyuan Xu. IEEE Internet of Things Journal, IoT-J (SCI, JCR-1)

Towards Pitch-Insensitive Speaker Verification via Soundfield. Xinfeng Li, Zhicong Zheng, Chen Yan, Chaohao Li, Xiaoyu Ji, Wenyuan Xu. IEEE Internet of Things Journal, IoT-J (SCI, JCR-1)

UltraBD: Backdoor Attack against Automatic Speaker Verification Systems via Adversarial Ultrasound. Junning Ze, Xinfeng Li, Yushi Cheng, Xiaoyu Ji, Wenyuan Xu. In Proceedings of the IEEE 28th International Conference on Parallel and Distributed Systems, ICPADS 2022 (CCF-C) [Extended Version]

“OK, Siri” or “Hey, Google”: Evaluating Voiceprint Distinctiveness via Content-based PROLE Score. Ruiwen He, Xiaoyu Ji, Xinfeng Li, Yushi Cheng, Wenyuan Xu. In Proceedings of the 31st USENIX Security Symposium, USENIX Security 2022 (CCF-A, Big4) [Website] [Our Interactive Demo]

EarArray: Defending against DolphinAttack via Acoustic Attenuation. Guoming Zhang, Xiaoyu Ji, Xinfeng Li, Gang Qu, Wenyuan Xu. In Proceedings of the Network and Distributed System Security Symposium, NDSS 2021 (CCF-A, Big4)