I am currently a Research Fellow at NTU, working with Prof. XiaoFeng Wang and Prof. Wei Dong. In 2024, I completed my Ph.D. with honors at Zhejiang University, co-supervised by Prof. Wenyuan Xu, Prof. Xiaoyu Ji, and Prof. Chen Yan. Previously, I obtained my B.Eng., also with honors, from Zhejiang University in 2019.
My research is broadly in the fields of (1) AI security and privacy, with a special focus on the security and privacy issues associated with multimodal AI, and (2) AI for system security, particularly for addressing vulnerabilities in critical IoT, communication, and software systems.
In AI-oriented contexts, I focus on developing trustworthy intelligent audio and vision models, safeguarding user privacy, and fortifying generative models against various leaks and attacks. I also work on regulating AI behavior to ensure alignment with societal responsibilities, especially in the context of large generative models such as Stable Diffusion and GPT-4.
In system-oriented contexts, I work toward developing dependable and secure machine learning (ML) systems and deploying them in critical infrastructure and consumer electronics, e.g., audio/vision-interfaced IoT devices, carrier networks, and software systems.
If you are seeking any form of academic cooperation, please feel free to email me at xinfengli@ntu.edu.sg or lxfmakeit@gmail.com.
I have published over 20 papers in top-tier international security, AI, and mobile sensing conferences and journals, such as USENIX Security, ACM CCS, NDSS, KDD, IEEE TDSC, ICCV, ACL, MM, and EMNLP.
🔥 News
- 2025.06: 🔥 AudioTrust, the first comprehensive trustworthiness benchmark for audio large language models, is now released! We hope it serves as a solid foundation for academia and industry in developing safe audio-based LLM systems. [Github] (Media Coverage: [量子位])
- 2025.06: 🎉 Neural Invisibility Cloak got accepted by USENIX Security’25. Congratulations to Wenjun!
- 2025.04: 🔥 Three (trustworthy) LLM agent survey papers that I led or contributed to are now released: (1) TrustAgent: A survey on trustworthy LLM agents: Threats and countermeasures [Paper (Accepted by KDD’25)]; (2) Advances and challenges in foundation agents: From brain-inspired intelligence to evolutionary, collaborative, and safe systems [Paper] [Github] [HuggingFace] (Media Coverage, e.g., [SANER, 机器之心]); (3) A Comprehensive Survey in LLM (-Agent) Full Stack Safety: Data, Training, and Deployment.
- 2024.11: 🎉 LightAntenna got accepted by NDSS 2025! We reveal a new Electromagnetic Interference (EMI) threat where everyday fluorescent lamps can secretly manipulate IoT devices. Unlike visible metal antennas, LightAntenna transforms unaltered lamps into stealthy EMI sources, enabling remote attacks up to 20 meters away. This groundbreaking method exposes a hidden threat in plain sight.
- 2024.08: 💪🏻 Raconteur was accepted by NDSS 2025! Raconteur is the first tool that uses Large Language Models (LLMs) to explain shell commands clearly. It provides detailed insights into command functions and purposes, aligning with MITRE ATT&CK standards. Backed by professional cybersecurity knowledge and a documentation retriever for previously unseen commands, Raconteur delivers accurate explanations. Extensive testing shows that it offers unique insights that help security analysts better understand command intentions. Please find more (e.g., our dataset) on the [website].
- 2024.08: 📝 Legilimens was accepted by CCS 2024! Legilimens sets a new state of the art for moderating unsafe content in LLM services, with a significant improvement in efficiency. Congratulations to Jialin and Dr. Deng.
- 2024.05: 🔥 SafeGen was accepted by CCS 2024! As T2I technologies advance, their misuse for generating sexually explicit content has become a major concern. SafeGen effectively mitigates this by removing explicit visual representations from the model, making it safe regardless of the adversarial text input and outperforming eight other protection approaches. SafeGen adjusts the model’s visual self-attention layers so that the high-quality production of benign images is not compromised. More information is available via our [code] and [pretrained model].
- 2024.05: 🔥 SafeEar got accepted by CCS 2024! To our knowledge, this is the first content privacy-preserving audio deepfake detection framework. As audio deepfakes and user privacy concerns have become increasingly significant societal issues, we demonstrate how to achieve reliable deepfake detection while preventing both machine and human adversaries from eavesdropping on sensitive user speech content. To facilitate future research, we also developed a comprehensive multilingual deepfake dataset (more than 1,500,000 genuine & deepfake audio samples) using advanced TTS/VC techniques. Please check out our [website][code].
- 2023.08: 🎉 VRifle got accepted by NDSS 2024! We demonstrate how to achieve a completely inaudible adversarial perturbation attack via ultrasound, which achieves the farthest attack range (~10 meters) and the most universal capability (one perturbation can tamper with >28,000 benign samples). Our novel ultrasonic transformation model can be generalized to other attack modalities, such as laser and electromagnetic attacks.
- 2023.08: 🔥 I attended the USENIX Security 2023 Symposium and presented our work NormDetect in person.
- 2023.07: 😄 Our practical backdoor attack (SMA) against ASR models was accepted by ACM MM 2023!
- 2022.09: 🎉 Tuner and UltraBD were accepted by IoT-J 2023 and ICPADS 2022! We demonstrate a practical inaudible backdoor attack against speaker recognition systems.
- 2022.07: 💪🏻 NormDetect was accepted by USENIX Security 2023! We rethink the challenging topic of defending against inaudible voice attacks and present a software-based mitigation that can instantly protect legacy and future devices in an actionable and practical manner, verified on 24 mainstream smart devices with voice interfaces.
- 2021.07: 🎉 PROLE Score was accepted by USENIX Security 2022! “OK Siri” or “Hey Google”? We conduct an extensive voiceprint security measurement. Our findings and proposed metrics can help manufacturers and users craft highly secure voiceprint phrases.
- 2020.12: 🔥 EarArray was accepted by NDSS 2021! We uncover the inherent physical properties of the inaudible attack, i.e., ultrasound field distributions, and redesign microphone arrays for accurate detection and attack localization.
📝 Selected Publications
(*: Equal Contribution)

Raconteur: A Knowledgeable, Insightful, and Portable LLM-Powered Shell Command Explainer, Jiangyi Deng*, Xinfeng Li*, Yanjiao Chen, Yijie Bai, Haiqin Weng, Yan Liu, Tao Wei, Wenyuan Xu. In Proceedings of the Network and Distributed System Security Symposium, NDSS 2025 (CCF-A, Big4) [Website, Dataset]
Raconteur is the first tool to use Large Language Models (LLMs) to explain shell commands clearly. It provides detailed insights into command functions and purposes, aligning with MITRE ATT&CK standards. Backed by professional cybersecurity knowledge and a documentation retriever for previously unseen commands, Raconteur delivers accurate explanations. Extensive testing shows that it offers unique insights that help security analysts better understand command intentions.

SafeGen: Mitigating Sexually Explicit Content Generation in Text-to-Image Models, Xinfeng Li, Yuchen Yang, Jiangyi Deng, Chen Yan, Yanjiao Chen, Xiaoyu Ji, Wenyuan Xu. In Proceedings of the ACM Conference on Computer and Communications Security, CCS 2024 (CCF-A, Big4) [Code][Pretrained Weights][Paper]
As T2I technologies advance, their misuse for generating sexually explicit content has become a major concern. SafeGen effectively mitigates this by removing any explicit visual representations from the model, making it safe regardless of the adversarial text input and outperforming eight other protection approaches. SafeGen adjusts the model’s visual self-attention layers to ensure that the high-quality production of benign images is not compromised.

SafeEar: Content Privacy-Preserving Audio Deepfake Detection, Xinfeng Li, Kai Li, Yifan Zheng, Chen Yan, Xiaoyu Ji, Wenyuan Xu. In Proceedings of the ACM Conference on Computer and Communications Security, CCS 2024 (CCF-A, Big4) [Website][Dataset][Code]
To our knowledge, SafeEar is the first content privacy-preserving audio deepfake detection framework. As audio deepfakes and user privacy concerns have become increasingly significant societal issues, we demonstrate how to achieve reliable deepfake detection while preventing both machine and human adversaries from eavesdropping on sensitive user speech content. To facilitate future research, we also developed a comprehensive multilingual deepfake dataset (with more than 1,500,000 genuine & deepfake audio samples) using advanced TTS/VC techniques.

Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time, Xinfeng Li, Chen Yan, Xuancun Lu, Xiaoyu Ji, Wenyuan Xu. In Proceedings of the Network and Distributed System Security Symposium, NDSS 2024 (CCF-A, Big4) [Website1], [Website2], [Code]
- First completely inaudible adversarial perturbation attacks, increasing the attack range to 10 meters.
- First attempt at ultrasound transformation modeling.
- Significant improvement in attack distances, universality, and stealthiness compared with prior adversarial example attacks.

Learning Normality is Enough: A Software-based Mitigation against Inaudible Voice Attacks, Xinfeng Li, Xiaoyu Ji, Chen Yan, Chaohao Li, Yichen Li, Zhenning Zhang, Wenyuan Xu. In Proceedings of the 32nd USENIX Security 2023 (CCF-A, Big4) [Website]
- Unsupervised software-based mitigation that can instantly protect miscellaneous legacy devices.
- Universal detection performance demonstrated on a wide range of devices.

Neural Invisibility Cloak: Concealing Adversary in Images via Compromised AI-driven Image Signal Processing, Wenjun Zhu, Xiaoyu Ji, Xinfeng Li, Qihang Chen, Kun Wang, Xinyu Li, Ruoyan Xu, and Wenyuan Xu, Zhejiang University. USENIX Security 2025 (CCF-A, Big4)

LightAntenna: Characterizing the Limits of Fluorescent Lamp-Induced Electromagnetic Interference, Fengchen Yang, Wenze Cui, Xinfeng Li, Chen Yan, Xiaoyu Ji, Wenyuan Xu. NDSS 2025 (CCF-A, Big4)

The Eye of Sherlock Holmes: Uncovering User Private Attribute Profiling via Vision-Language Model Agentic Framework, Feiran Liu, Yuzhe Zhang, Xinyi Huang, Yinan Peng, Xinfeng Li^ (Corresponding Author), Lixu Wang, Yutong Shen, Ranjie Duan, Simeng Qin, Xiaojun Jia, Qingsong Wen, Wei Dong. ACM MM 2025 (CCF-A)

Heuristic-Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models, Teng Ma, Xiaojun Jia, Ranjie Duan, Xinfeng Li, Yihao Huang, Zhixuan Chu, Yang Liu, Wenqi Ren. ICCV 2025 (CCF-A)

Legilimens: Practical and Unified Content Moderation for Large Language Model Services, Jialin Wu, Jiangyi Deng, Shengyuan Pang, Yanjiao Chen, Jiayang Xu, Xinfeng Li, Wenyuan Xu. ACM CCS 2024 (CCF-A, Big4) [Code]

Scoring Metrics for Assessing Voiceprint Distinctiveness based on Speech Content and Rate, Ruiwen He, Yushi Cheng, Junning Ze, Xinfeng Li, Xiaoyu Ji, Wenyuan Xu. IEEE Transactions on Dependable and Secure Computing, TDSC 2024 (CCF-A)

Detecting Inaudible Voice Commands via Acoustic Attenuation by Multi-channel Microphones, Xiaoyu Ji, Guoming Zhang, Xinfeng Li, Gang Qu, Xiuzhen Cheng, Wenyuan Xu. IEEE Transactions on Dependable and Secure Computing, TDSC 2023 (CCF-A)

The Silent Manipulator: A Practical and Inaudible Backdoor Attack against Speech Recognition Systems, Zhicong Zheng, Xinfeng Li, Chen Yan, Xiaoyu Ji, Wenyuan Xu. ACM Multimedia, MM 2023 (CCF-A)

Enrollment-stage Backdoor Attacks on Speaker Recognition Systems via Adversarial Ultrasound, Xinfeng Li, Junning Ze, Chen Yan, Yushi Cheng, Xiaoyu Ji, Wenyuan Xu. IEEE Internet of Things Journal, IoT-J (SCI, JCR-1)

Towards Pitch-Insensitive Speaker Verification via Soundfield, Xinfeng Li, Zhicong Zheng, Chen Yan, Chaohao Li, Xiaoyu Ji, Wenyuan Xu. IEEE Internet of Things Journal, IoT-J (SCI, JCR-1)

UltraBD: Backdoor Attack against Automatic Speaker Verification Systems via Adversarial Ultrasound, Junning Ze, Xinfeng Li, Yushi Cheng, Xiaoyu Ji, Wenyuan Xu. In Proceedings of the 2022 IEEE 28th International Conference on Parallel and Distributed Systems, ICPADS 2022 (CCF-C) [Extended Version]

“OK, Siri” or “Hey, Google”: Evaluating Voiceprint Distinctiveness via Content-based PROLE Score, Ruiwen He, Xiaoyu Ji, Xinfeng Li, Yushi Cheng, Wenyuan Xu. In Proceedings of the 31st USENIX Security 2022 (CCF-A, Big4) [Website, Our Interactive Demo]

EarArray: Defending against DolphinAttack via Acoustic Attenuation, Guoming Zhang, Xiaoyu Ji, Xinfeng Li, Gang Qu, Wenyuan Xu. In Proceedings of the Network and Distributed System Security Symposium, NDSS 2021 (CCF-A, Big4)
📚 Professional Services
I actively contribute to the academic community through program organization and peer review for leading conferences and journals in security, AI, and systems.
Program Organization
- KDD 2025: Tutorial Organizer
Conference
- ICLR: Area Chair (2026)
- AAAI: Reviewer (2026)
- IEEE SaTML: TPC Member (2026)
- IEEE S&P: External Reviewer (2019, 2020)
- ACM CCS: External Reviewer (2021, 2022, 2023, 2024)
- USENIX Security: External Reviewer (2019, 2020, 2021, 2024)
- NDSS: External Reviewer (2020, 2022, 2023, 2024)
Journal
- IEEE Transactions on Information Forensics and Security (TIFS)
- IEEE Transactions on Dependable and Secure Computing (TDSC)
- IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
- ACM Transactions on Software Engineering and Methodology (TOSEM)
- IEEE Internet of Things Journal (IoT-J)
- ACM Transactions on Privacy and Security (TOPS)
- ACM Transactions on Internet Technology (TOIT)
- IEEE Transactions on Cognitive Communications and Networking (TCCN)
🎖 Honors and Awards
- CCS 2024 Student Grant (2024)
- NDSS 2024 Student Grant (2024)
- Merit Graduate Student Scholarship, Excellent Graduate Student Scholarship & Excellent Graduate Student Cadre (Zhejiang University, 2020-2023)
- “Challenge Cup” National College Student Curricular Academic Science and Technology Works Competition: First Prize (Zhejiang University, 2021)
- Outstanding Undergraduate at Edison Honor Class (Zhejiang University, 2019)
- Top 10 Students at EE College (Top 1%, Zhejiang University, 2018)
- National Scholarship (Top 2%, Zhejiang University, 2018)
- Meritorious Scholarship (Top 3%, Zhejiang University, 2016-2018)
📖 Education
- 2019.06 - 2024, Ph.D., Zhejiang University, Hangzhou.
- 2015.09 - 2019.06, B.Eng., College of Electrical Engineering, Zhejiang University, Hangzhou.
- 2012.09 - 2015.06, Yuyao High School, Ningbo.
💬 Invited Talks
- 2024.10, ACM CCS 2024 at Salt Lake City, USA. | [Slides]
- 2024.02, NDSS 2024 at San Diego, California, USA. | [Paper] | [Code] | [Video]
- 2023.08, USENIX Security Symposium 2023 at Anaheim, California, USA. | [Slides]
💻 Internships
- 2021.11 - 2022.05, Zhuoxi Brain and Intelligence Institute, Hangzhou, China.