I completed my Ph.D. at USSLAB, Zhejiang University, in 2024, co-supervised by Prof. Wenyuan Xu, Prof. Xiaoyu Ji, and Prof. Chen Yan. I obtained my B.Eng. from Zhejiang University in 2019.

My research is broadly in AI and IoT security, with a particular interest in safeguarding user privacy, intelligent audio/vision systems, and generative models against various forms of leakage and attack. I work toward developing dependable and secure machine learning (ML) systems and am committed to making them ready for deployment in critical infrastructure and consumer electronics. In user-oriented contexts, I also investigate the interplay between protecting user privacy and preserving the usability of ML systems. In model-oriented scenarios, such as generative models in model-as-a-service (MaaS) applications, my work focuses on regulating model behavior to align with societal responsibilities.

If you are seeking any form of academic cooperation, please feel free to email me at xinfengli@zju.edu.cn.

I have published 10+ papers in top-tier international security/AI/mobile-sensing conferences and journals, including USENIX Security, ACM CCS, NDSS, TDSC, IoT-J, and ACM MM.

🔥 News

  • 2024.08:  💪🏻 Raconteur was accepted by NDSS 2025! Raconteur is the first tool using Large Language Models (LLMs) to explain shell commands clearly. It provides detailed insights into command functions and purposes, aligning with MITRE ATT&CK standards. Equipped with professional cybersecurity knowledge, Raconteur delivers accurate explanations. A documentation retriever helps understand new commands. Tested extensively, Raconteur offers unique insights, helping security analysts better understand command intentions. Please find out more (e.g., our dataset) on the [website].
  • 2024.08:  📝 Legilimens was accepted by CCS 2024! A new state-of-the-art technique for moderating unsafe LLM content, with significant improvements in efficiency. Congratulations to Jialin and Dr. Deng.
  • 2024.05:  🔥 SafeGen was accepted by CCS 2024! As T2I technologies advance, their misuse to generate sexually explicit content has become a major concern. SafeGen effectively mitigates this by removing explicit visual representations from the model, making it safe regardless of the adversarial text input and outperforming eight other protection approaches. SafeGen adjusts the model’s visual self-attention layers so that the high-quality production of benign images is not compromised. More information is available via our [code][pretrained model].
  • 2024.05:  🔥 SafeEar got accepted by CCS 2024! To our knowledge, this is the 1st content privacy-preserving audio deepfake detection framework. Since audio deepfakes and user privacy concerns have become increasingly significant societal issues, we demonstrate how to achieve reliable deepfake detection while preventing both machine and human adversaries from eavesdropping on sensitive user speech content. To facilitate future research, we also develop a comprehensive multilingual deepfake dataset (more than 1,500,000 genuine & deepfake audio samples) using advanced TTS/VC techniques. Please check out our [website][code].
  • 2023.08:  🎉 VRifle got accepted by NDSS 2024! We demonstrate how to achieve a completely inaudible adversarial perturbation attack via ultrasound, which achieves the farthest attack range (~10 meters away) and the most universal capability (one perturbation can tamper with >28,000 benign samples). Our novel ultrasonic transformation model can be generalized to other attack modalities, such as laser and electromagnetic.
  • 2023.08:  🔥 I attended the USENIX Security 2023 Symposium and presented our work NormDetect in person.
  • 2023.07:  😄 Our practical backdoor attack (SMA) against ASR models is accepted by ACM MM 2023!
  • 2022.09:  🎉 Tuner and UltraBD were accepted by IoT-J 2023 and ICPADS 2022! We demonstrate practical inaudible backdoor attacks against speaker recognition systems.
  • 2022.07:  💪🏻 NormDetect was accepted by USENIX Security 2023! We rethink the challenging topic of defending against inaudible voice attacks and present a software-based mitigation that can instantly protect legacy and future devices in an actionable and practical manner, verified on 24 mainstream smart devices with voice interfaces.
  • 2021.07:  🎉 PROLE Score was accepted by USENIX Security 2022! “OK Siri” or “Hey Google”? We conduct an extensive voiceprint security measurement. Our findings and designed metrics can aid manufacturers and users in crafting highly secure voiceprint phrases.
  • 2020.12:  🔥 EarArray was accepted by NDSS 2021! We uncover the inherent physical properties of inaudible attacks, i.e., ultrasound field distributions, and redesign microphone arrays for accurate attack detection and localization.

📝 Selected Publications

(*: Equal Contribution)

NDSS 2025

Raconteur: A Knowledgeable, Insightful, and Portable LLM-Powered Shell Command Explainer. Jiangyi Deng*, Xinfeng Li*, Yanjiao Chen, Yijie Bai, Haiqin Weng, Yan Liu, Tao Wei, Wenyuan Xu. In Proceedings of the Network and Distributed System Security Symposium, NDSS 2025 (CCF-A, Big4) [Website, Dataset]

Raconteur is the first tool using Large Language Models (LLMs) to explain shell commands clearly. It provides detailed insights into command functions and purposes, aligning with MITRE ATT&CK standards. Equipped with professional cybersecurity knowledge, Raconteur delivers accurate explanations. A documentation retriever helps understand new commands. Tested extensively, Raconteur offers unique insights, helping security analysts better understand command intentions.

ACM CCS 2024

SafeGen: Mitigating Sexually Explicit Content Generation in Text-to-Image Models. Xinfeng Li, Yuchen Yang, Jiangyi Deng, Chen Yan, Yanjiao Chen, Xiaoyu Ji, Wenyuan Xu. To appear in Proceedings of the ACM Conference on Computer and Communications Security, CCS 2024 (CCF-A, Big4) [Code][Pretrained Weights]

As T2I technologies advance, their misuse to generate sexually explicit content has become a major concern. SafeGen effectively mitigates this by removing explicit visual representations from the model, making it safe regardless of the adversarial text input and outperforming eight other protection approaches. SafeGen adjusts the model’s visual self-attention layers so that the high-quality production of benign images is not compromised.

ACM CCS 2024

SafeEar: Content Privacy-Preserving Audio Deepfake Detection. Xinfeng Li, Kai Li, Yifan Zheng, Chen Yan, Xiaoyu Ji, Wenyuan Xu. To appear in Proceedings of the ACM Conference on Computer and Communications Security, CCS 2024 (CCF-A, Big4) [Website][Dataset][Code]

To our knowledge, SafeEar is the first content privacy-preserving audio deepfake detection framework. Since audio deepfakes and user privacy concerns have become increasingly significant societal issues, we demonstrate how to achieve reliable deepfake detection while preventing both machine and human adversaries from eavesdropping on sensitive user speech content. To facilitate future research, we also develop a comprehensive multilingual deepfake dataset (more than 1,500,000 genuine & deepfake audio samples) using advanced TTS/VC techniques.

NDSS 2024

Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time. Xinfeng Li, Chen Yan, Xuancun Lu, Xiaoyu Ji, Wenyuan Xu. In Proceedings of the Network and Distributed System Security Symposium, NDSS 2024 (CCF-A, Big4) [Website1], [Website2], [Code]

  • First completely inaudible adversarial perturbation attack, increasing the attack range to 10 meters.
  • First attempt at ultrasound transformation modeling.
  • Significant improvement in attack distance, universality, and stealthiness compared with prior adversarial example attacks.
USENIX Security 2023

Learning Normality is Enough: A Software-based Mitigation against the Inaudible Voice Attacks. Xinfeng Li, Xiaoyu Ji, Chen Yan, Chaohao Li, Yichen Li, Zhenning Zhang, Wenyuan Xu. In Proceedings of the 32nd USENIX Security Symposium, USENIX Security 2023 (CCF-A, Big4) [Google Site]

  • Unsupervised software-based mitigation that can instantly protect miscellaneous legacy devices.
  • Universal detection performance demonstrated on a wide range of devices.

🎖 Honors and Awards

  • CCS 2024 Student Grant (2024)
  • NDSS 2024 Student Grant (2024)
  • Merit Graduate Student Scholarship, Excellent Graduate Student Scholarship, and Excellent Graduate Student Cadre (Zhejiang University, 2020-2023)
  • “Challenge Cup” National College Student Extracurricular Academic Science and Technology Works Competition: First Prize (Zhejiang University, 2021)
  • Outstanding Undergraduate, Edison Honor Class (Zhejiang University, 2019)
  • Top-10 Students at EE College (Top 1%, Zhejiang University, 2018)
  • National Scholarship (Top 2%, Zhejiang University, 2018)
  • Meritorious Scholarship (Top 3%, Zhejiang University, 2016-2018)

📖 Education

  • 2019.06 - Present, Ph.D., USSLAB, Zhejiang University, Hangzhou.
  • 2015.09 - 2019.06, Undergraduate, College of Electrical Engineering, Zhejiang University, Hangzhou.
  • 2012.09 - 2015.06, Yuyao High School, Ningbo.

💬 Invited Talks

  • 2023.08, USENIX Security Symposium 2023 at Anaheim, California, USA. | [Slides]
  • 2024.02, NDSS 2024 at San Diego, California, USA. | [Paper] | [Code] | [Video]
