📝 Selected Publications

(*: Equal Contribution)

NDSS 2025

Raconteur: A Knowledgeable, Insightful, and Portable LLM-Powered Shell Command Explainer. Jiangyi Deng*, Xinfeng Li*, Yanjiao Chen, Yijie Bai, Haiqin Weng, Yan Liu, Tao Wei, Wenyuan Xu. In Proceedings of the Network and Distributed System Security Symposium, NDSS 2025 (CCF-A, Big4) [Website][Dataset]

Raconteur is the first tool to use large language models (LLMs) to explain shell commands clearly. It provides detailed insights into what a command does and why, and aligns each behavior with the MITRE ATT&CK framework. Backed by professional cybersecurity knowledge, Raconteur delivers accurate explanations, and a documentation retriever lets it handle previously unseen commands. Extensive evaluation shows that Raconteur offers unique insights that help security analysts better understand the intention behind a command.
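For the curious, here is a minimal sketch of how a retrieval-augmented command explainer of this kind can be wired together. Everything here (the toy corpus `DOCS`, the similarity scoring, the prompt wording, `build_prompt`) is a hypothetical stand-in for illustration, not Raconteur's released implementation:

```python
# Hypothetical sketch: retrieve the most relevant documentation snippet for a
# shell command, then build an LLM prompt that asks for an ATT&CK-aligned
# explanation. Corpus, scoring, and prompt are illustrative stand-ins.
from collections import Counter
import math

DOCS = {  # toy stand-in for a shell-command documentation corpus
    "tar":  "tar - an archiving utility. -x extract, -z gzip, -f file.",
    "curl": "curl - transfer a URL. -O write output to a local file.",
    "nc":   "nc - netcat, reads/writes network connections; used for shells.",
}

def _tf(text):
    return Counter(text.lower().split())

def _cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(command, k=1):
    """Return the k documentation snippets most similar to the command."""
    q = _tf(command)
    ranked = sorted(DOCS.values(), key=lambda d: _cosine(q, _tf(d)), reverse=True)
    return ranked[:k]

def build_prompt(command):
    context = "\n".join(retrieve(command))
    return ("Using the documentation below, explain what this shell command "
            "does and map its behavior to MITRE ATT&CK techniques.\n"
            f"Documentation:\n{context}\nCommand: {command}")

print(build_prompt("tar -xzf payload.tar.gz"))  # prompt would then go to an LLM
```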

ACM CCS 2024

SafeGen: Mitigating Sexually Explicit Content Generation in Text-to-Image Models. Xinfeng Li, Yuchen Yang, Jiangyi Deng, Chen Yan, Yanjiao Chen, Xiaoyu Ji, Wenyuan Xu. In Proceedings of the ACM Conference on Computer and Communications Security, CCS 2024 (CCF-A, Big4) [Code][Pretrained Weights][Paper]

As text-to-image (T2I) technologies advance, their misuse for generating sexually explicit content has become a major concern. SafeGen mitigates this risk by removing explicit visual representations from the model itself, so generation remains safe regardless of adversarial text input; it outperforms eight other protection approaches. To do so, SafeGen adjusts only the model's visual self-attention layers, ensuring that high-quality generation of benign images is not compromised.
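The sketch below illustrates the general idea of editing only self-attention parameters so that explicit latents are degraded while benign latents pass through unchanged. The toy attention module, the "mosaic" target, and the loss terms are illustrative assumptions, not SafeGen's actual training code:

```python
# Hedged sketch: fine-tune only a (toy) self-attention block so it suppresses
# explicit content while preserving benign content. Module names, targets,
# and losses are illustrative stand-ins, not the paper's implementation.
import torch
import torch.nn as nn

class ToySelfAttention(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (batch, tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / x.shape[-1] ** 0.5, -1)
        return self.proj(attn @ v)

attn = ToySelfAttention()
opt = torch.optim.Adam(attn.parameters(), lr=1e-4)  # only attention params train

# Degradation target: steer "explicit" latents toward a blurred average while
# keeping "benign" latents unchanged (random tensors stand in for real data).
explicit_latents = torch.randn(2, 16, 64)
benign_latents = torch.randn(2, 16, 64)
mosaic_target = explicit_latents.mean(dim=1, keepdim=True).expand_as(explicit_latents)

for _ in range(10):
    loss = ((attn(explicit_latents) - mosaic_target).pow(2).mean()
            + (attn(benign_latents) - benign_latents).pow(2).mean())
    opt.zero_grad(); loss.backward(); opt.step()
```

Restricting the edit to self-attention parameters is what keeps the intervention text-agnostic: the suppression lives in the vision pathway rather than in any prompt filter.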

ACM CCS 2024

SafeEar: Content Privacy-Preserving Audio Deepfake Detection. Xinfeng Li, Kai Li, Yifan Zheng, Chen Yan, Xiaoyu Ji, Wenyuan Xu. In Proceedings of the ACM Conference on Computer and Communications Security, CCS 2024 (CCF-A, Big4) [Website][Dataset][Code]

To our knowledge, SafeEar is the first content privacy-preserving audio deepfake detection framework. As audio deepfakes and user privacy concerns have become increasingly significant societal issues, we demonstrate how to achieve reliable deepfake detection while preventing both machine and human adversaries from eavesdropping on sensitive user speech content. To facilitate future research, we also developed a comprehensive multilingual deepfake dataset (with more than 1,500,000 genuine & deepfake audio samples) using advanced TTS/VC techniques.
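As a rough illustration of the detection side, the sketch below classifies genuine vs. deepfake audio from acoustic-style tokens alone, with a frame shuffle standing in for the content-concealment step. The codec, model sizes, and shuffle are assumptions for illustration, not SafeEar's released code:

```python
# Hedged sketch: detect deepfakes from content-free acoustic tokens only,
# so the detector never sees recoverable speech content. All components
# here are illustrative stand-ins for the paper's design.
import torch
import torch.nn as nn

class AcousticOnlyDetector(nn.Module):
    def __init__(self, vocab=1024, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 2)          # genuine vs. deepfake

    def forward(self, acoustic_tokens):        # (batch, time) integer tokens
        # Shuffle frames so residual phonetic order cannot be recovered,
        # an illustrative analogue of a privacy-preserving shuffle step.
        idx = torch.randperm(acoustic_tokens.shape[1])
        h = self.encoder(self.embed(acoustic_tokens[:, idx]))
        return self.head(h.mean(dim=1))

# A real pipeline would obtain acoustic tokens from a disentangling codec
# (semantic tokens are discarded); random integers stand in for them here.
tokens = torch.randint(0, 1024, (4, 200))
logits = AcousticOnlyDetector()(tokens)
print(logits.shape)  # torch.Size([4, 2])
```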

NDSS 2024

Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time. Xinfeng Li, Chen Yan, Xuancun Lu, Xiaoyu Ji, Wenyuan Xu. In Proceedings of the Network and Distributed System Security Symposium, NDSS 2024 (CCF-A, Big4) [Website1][Website2][Code]

  • First completely inaudible adversarial perturbation attacks, extending the attack range to 10 meters.
  • First attempt at ultrasound transformation modeling (see the sketch after this list).
  • Significantly improved attack distance, universality, and stealthiness compared with prior adversarial example attacks.
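
A hedged sketch of the transformation-modeling idea referenced above: approximate the ultrasonic channel with a learned filter, then optimize the perturbation through that model, so that what the microphone receives (rather than what the attacker emits) fools the recognizer. The channel stand-in and losses are illustrative assumptions, not the paper's implementation:

```python
# Hedged sketch: optimize an adversarial perturbation *through* a learned
# model of the ultrasonic channel so it survives real-world playback.
import torch
import torch.nn as nn

channel = nn.Conv1d(1, 1, kernel_size=65, padding=32, bias=False)  # stand-in
# ...suppose `channel` was first fit on (transmitted, recorded) signal pairs.

speech = torch.randn(1, 1, 16000)          # 1 s of victim speech @ 16 kHz
delta = torch.zeros_like(speech, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-3)   # only the perturbation is updated

def asr_loss(x):                           # placeholder for a real ASR loss
    return -x.pow(2).mean()                # purely illustrative objective

for _ in range(100):
    received = speech + channel(delta)     # perturbation passes the channel
    loss = asr_loss(received) + 0.1 * delta.pow(2).mean()  # keep delta small
    opt.zero_grad(); loss.backward(); opt.step()
```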

USENIX Security 2023

Learning Normality is Enough: A Software-based Mitigation against Inaudible Voice Attacks. Xinfeng Li, Xiaoyu Ji, Chen Yan, Chaohao Li, Yichen Li, Zhenning Zhang, Wenyuan Xu. In Proceedings of the 32nd USENIX Security Symposium, USENIX Security 2023 (CCF-A, Big4) [Website]

  • Unsupervised, software-based mitigation that can instantly protect diverse legacy devices (see the sketch after this list).
  • Universal detection performance demonstrated across a wide range of devices.
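
A minimal sketch of the normality-learning idea, assuming a simple autoencoder over spectral features: the model is trained only on genuine voice, and a high reconstruction error at run time flags an anomalous (e.g., ultrasound-injected) signal. The architecture, features, and threshold are illustrative, not the paper's design:

```python
# Hedged sketch: unsupervised anomaly detection by learning "normality".
# Train an autoencoder on genuine-voice features only; flag inputs it
# cannot reconstruct well. All sizes and thresholds are illustrative.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(128, 32), nn.ReLU(),         # encode spectral features
    nn.Linear(32, 128),                    # reconstruct them
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

normal_feats = torch.randn(256, 128)       # stand-in for genuine-voice features
for _ in range(200):                       # learn what "normal" looks like
    recon = autoencoder(normal_feats)
    loss = (recon - normal_feats).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

def is_attack(feats, threshold=2.0):
    """Flag inputs whose reconstruction error exceeds a calibrated threshold."""
    err = (autoencoder(feats) - feats).pow(2).mean(dim=1)
    return err > threshold

print(is_attack(torch.randn(4, 128) * 5))  # out-of-distribution probe
```

Because training needs only normal recordings, such a detector can in principle be deployed as a software update, with no hardware modification to the protected device.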