AI Backdoor - Papers


AI Backdoor - Papers

  1. An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection
    • Comments: To appear in USENIX Security ‘24
    • introduce CodeBreaker, a pioneering LLM-assisted backdoor attack framework
    • code completion task
    • https://arxiv.org/abs/2406.06822
    • [Submitted on 10 Jun 2024]
  2. BadAgent: Inserting and Activating Backdoor Attacks in LLM Agents
  3. RTL-Breaker: Assessing the Security of LLMs against Backdoor Attacks on HDL Code Generation
    • Comments: Accepted at 2025 Design, Automation & Test in Europe (DATE) Conference
    • propose RTL-Breaker, a novel backdoor attack framework for LLM-based HDL code generation
    • HDL code generation and security risks in hardware systems
    • https://doi.org/10.48550/arXiv.2411.17569
    • [Submitted on 11 Nov 2024]
  4. BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models
    • Introduces BackdoorLLM, the first comprehensive benchmark for studying backdoor attacks on LLMs
    • propose BackdoorLLM, a benchmark for evaluating diverse backdoor attacks in LLMs
    • backdoor attacks in LLM text generation tasks
    • https://doi.org/10.48550/arXiv.2408.12798
    • [Submitted on 27 Aug 2024]

Author: Liang Junyi
Reprint policy: All articles in this blog are used except for special statements CC BY 4.0 reprint policy. If reproduced, please indicate source Liang Junyi !
 Previous
2025-01-10 Liang Junyi
Next 
2024-12-27 Liang Junyi
  TOC