[Paper Translation] Large Language Models for Cyber Security: A Systematic Literature Review


Original paper: https://arxiv.org/pdf/2405.04760v3


Large Language Models for Cyber Security: A Systematic Literature Review


HANXIANG XU, Huazhong University of Science and Technology, China
SHENAO WANG, Huazhong University of Science and Technology, China
NINGKE LI, Huazhong University of Science and Technology, China
KAILONG WANG*, Huazhong University of Science and Technology, China
YANJIE ZHAO, Huazhong University of Science and Technology, China
KAI CHEN*, Huazhong University of Science and Technology, China
TING YU, Hamad Bin Khalifa University, The State of Qatar
YANG LIU, Nanyang Technological University, Singapore
HAOYU WANG*, Huazhong University of Science and Technology, China

The rapid advancement of Large Language Models (LLMs) has opened up new opportunities for leveraging artificial intelligence in a variety of application domains, including cyber security. As the volume and sophistication of cyber threats continue to grow, there is an increasing need for intelligent systems that can automatically detect vulnerabilities, analyze malware, and respond to attacks. In this survey, we conduct a comprehensive review of the literature on the application of LLMs in cyber security (LLM4Security). By comprehensively collecting over 30K relevant papers and systematically analyzing 127 papers from top security and software engineering venues, we aim to provide a holistic view of how LLMs are being used to solve diverse problems across the cyber security domain.


Through our analysis, we identify several key findings. First, we observe that LLMs are being applied to a wide range of cyber security tasks, including vulnerability detection, malware analysis, network intrusion detection, and phishing detection. Second, we find that the datasets used for training and evaluating LLMs in these tasks are often limited in size and diversity, highlighting the need for more comprehensive and representative datasets. Third, we identify several promising techniques for adapting LLMs to specific cyber security domains, such as fine-tuning, transfer learning, and domain-specific pre-training. Finally, we discuss the main challenges and opportunities for future research in LLM4Security, including the need for more interpretable and explainable models, the importance of addressing data privacy and security concerns, and the potential for leveraging LLMs for proactive defense and threat hunting.


Overall, our survey provides a comprehensive overview of the current state-of-the-art in LLM4Security and identifies several promising directions for future research. We believe that the insights and findings presented in this survey will contribute to the growing body of knowledge on the application of LLMs in cyber security and provide valuable guidance for researchers and practitioners working in this field.


1 INTRODUCTION


The rapid advancements in natural language processing (NLP) over the past decade have been largely driven by the development of large language models (LLMs). By leveraging the Transformer architecture [206] and training on massive amounts of textual data, LLMs like BERT [50], GPT-3/4 [148, 150], PaLM [41], Claude [16] and Chinchilla [79] have achieved remarkable performance across a wide range of NLP tasks, including language understanding, generation, and reasoning. These foundational models learn rich linguistic representations that can be adapted to downstream applications with minimal fine-tuning, enabling breakthroughs in domains such as open-domain question answering [2], dialogue systems [152, 231], and program synthesis [6].

*Corresponding authors

Table 1. State-of-the-art surveys related to LLMs for security.

Reference | Year | Topic Scope | Discussed Dimensions | Time Frame | # Papers
Motlagh et al. [80] | 2024 | Security applications | Tasks | 2022-2023 | Not specified
Divakaran et al. [51] | 2024 | Security applications | Tasks | 2020-2024 | Not specified
Yao et al. [230] | 2023 | Security applications, security of LLMs | Models, Tasks | 2019-2024 | 281
Yigit et al. [232] | 2024 | Security applications, security of LLMs | Tasks | 2020-2024 | Not specified
Coelho et al. [43] | 2024 | Security applications | Tasks, Domain-specific techniques | 2021-2023 | 19
Novelli et al. [146] | 2024 | Security applications, security of LLMs | Tasks | 2020-2024 | Not specified
LLM4Security | 2024 | Security applications | Models, Tasks, Domain-specific techniques, Data | 2020-2024 | 127

In particular, one important domain where LLMs are beginning to show promise is cyber security. With the growing volume and sophistication of cyber threats, there is an urgent need for intelligent systems that can automatically detect vulnerabilities, analyze malware, and respond to attacks [20, 36, 138]. Recent research has explored the application of LLMs across a wide range of cyber security tasks, i.e., LLM4Security hereafter. In the domain of software security, LLMs have been used for detecting vulnerabilities from natural language descriptions and source code, as well as generating security-related code, such as patches and exploits. These models have shown high accuracy in identifying vulnerable code snippets and generating effective patches for common types of vulnerabilities [30, 40, 65]. Beyond code-level analysis, LLMs have also been applied to understand and analyze higher-level security artifacts, such as security policies and privacy policies, helping to classify documents and detect potential violations [75, 135]. In the realm of network security, LLMs have demonstrated the ability to detect and classify various types of attacks from network traffic data, including DDoS attacks, port scanning, and botnet traffic [10, 11, 140]. Malware analysis is another key area where LLMs are showing promise, with models being used to classify malware families based on textual analysis reports and behavioral descriptions, as well as detecting malicious domains and URLs [93, 123]. LLMs have also been employed in the field of social engineering to detect and defend against phishing attacks by analyzing email contents and identifying deceptive language patterns [90, 172]. Moreover, researchers are exploring the use of LLMs to enhance the robustness and resilience of security systems themselves, by generating adversarial examples for testing the robustness of security classifiers and simulating realistic attack scenarios for training and evaluation purposes [31, 179, 198]. These diverse applications demonstrate the significant potential of LLMs to improve the efficiency and effectiveness of cyber security practices by processing and extracting insights from large amounts of unstructured text, learning patterns from vast datasets, and generating relevant examples for testing and training purposes.


While there have been several valuable efforts in the literature to survey LLM4Security [43, 51, 141, 230], given the growing body of work in this direction, these studies often have a more focused scope. Many of the existing surveys primarily concentrate on reviewing the types of tasks that LLMs can be applied to, without providing an extensive analysis of other essential aspects related to these tasks, such as the data and domain-specific techniques employed [146, 232], as shown in Table 1. For example, Divakaran et al. [51] only analyzed the prospects and challenges of LLMs in various security tasks, discussing the characteristics of each task separately. However, their work lacks insight into the connection between the requirements of these security tasks and data, as well as the application of LLMs in domain-specific techniques.


To address these limitations and provide an in-depth understanding of the state-of-the-art in LLM4Security, we conduct a systematic and extensive survey of the literature. By comprehensively collecting 38,112 relevant papers and systematically analyzing 127 papers from top security and software engineering venues, our survey aims to provide a holistic view of how LLMs are being applied to solve diverse problems across the cyber security domain. In addition to identifying the types of tasks that LLMs are being used for, we also examine the specific datasets, preprocessing techniques, and domain adaptation methods employed in each case. This enables us to provide a more nuanced analysis of the strengths and limitations of different approaches, and to identify the most promising directions for future research. Specifically, we focus on answering four key research questions (RQs):


For each research question, we provide a fine-grained analysis of the approaches, datasets, and evaluation methodologies used in the surveyed papers. We identify common themes and categorize the papers along different dimensions to provide a structured overview of the landscape. Furthermore, we highlight the key challenges and limitations of current approaches to guide future research towards addressing the gaps. We believe our survey can serve as a valuable resource for researchers working at the intersection of NLP, AI, and cyber security. The contributions of this work are summarized as follows:


The survey progresses with the following framework. We outline our survey methodology, including the search strategy, inclusion/exclusion criteria, and the data extraction process, in Section 2. The analysis and findings for each of the four research questions can be found in Sections 3 through 6. Sections 7 and 8 explore the constraints and significance of our results, while also identifying promising directions for future research. Finally, Section 9 concludes the paper.


2 METHODOLOGY


In this study, we conducted a Systematic Literature Review (SLR) to investigate the latest research on LLM4Security. This review aims to provide a comprehensive mapping of the landscape, identifying how LLMs are being deployed to enhance cyber security measures.



Fig. 1. Systematic Literature Review Methodology for LLM4Security.


Following the established SLR guidelines [99, 164], our methodology is structured into three pivotal stages, as shown in Figure 1: Planning (§2.1), Conducting (§2.2, §2.3), and Reporting (§2.4), each meticulously designed to ensure comprehensive coverage and insightful analysis of the current state of research in this burgeoning field.


Planning. Initially, we formulated precise research questions to understand how LLMs are being utilized in security tasks, the benefits derived, and the associated challenges. Subsequently, we developed a detailed protocol delineating our search strategy, including specific venues and databases, keywords, and quality assessment criteria. Each co-author reviewed this protocol to enhance its robustness and align with our research objectives.


Literature survey and analysis. We meticulously crafted our literature search to ensure comprehensiveness, employing both manual and automated strategies across various databases to encompass a wide range of papers. Each study identified underwent a stringent screening process, initially based on their titles and abstracts, followed by a thorough review of the full text to ensure conformity with our predefined criteria. To prevent overlooking related papers, we also conducted forward and backward snowballing on the collected papers.


Reporting. We present our findings through a structured narrative, complemented by visual aids like flowcharts and tables, providing a clear and comprehensive overview of the existing literature. The discussion delves into the implications of our findings, addressing the potential of LLMs to revolutionize cyber security practices and identifying gaps that warrant further investigation.


2.1 Research Questions


The primary aim of this SLR, focused on the context of LLM4Security, is to meticulously dissect and synthesize existing research at the intersection of these two critical fields. This endeavor seeks to illuminate the multifaceted applications of LLMs in cyber security, assess their effectiveness, and delineate the spectrum of methodologies employed across various studies. To further refine this objective, we formulated the following four Research Questions (RQs):


2.2 Search Strategy


To collect and identify a set of relevant literature as accurately as possible, we employed the "Quasi-Gold Standard" (QGS) [239] strategy for the literature search. An overview of the strategy applied in this work is as follows:


Step 1: Identify related venues and databases. To initiate this approach, we first identify specific venues for manual search and then choose suitable libraries and databases for the automated search. In this stage, we opt for six of the top security conferences and journals (i.e., S&P, NDSS, USENIX Security, CCS, TDSC, and TIFS) as well as six of the leading software engineering conferences and journals (i.e., ICSE, ESEC/FSE, ISSTA, ASE, TOSEM, and TSE). Given the emerging nature of LLMs in research, we also include arXiv in both manual and automated searches, enabling us to capture the latest unpublished studies in this rapidly evolving field. For automated searches, we select seven widely utilized databases, namely the ACM Digital Library, IEEE Xplore, Science Direct, Web of Science, Springer, Wiley, and arXiv. These databases offer comprehensive coverage of computer science literature and are commonly employed in systematic reviews within this domain [80, 236, 252].


Step 2: Establish the QGS. In this step, we start by creating a manually curated set of studies that have been carefully screened to form the QGS. A total of 41 papers relevant to LLM4Security are manually identified, aligning with the research objective and encompassing various techniques, application domains, and evaluation methods.


Step 3: Define search keywords. The keywords for the automated search are elicited from the titles and abstracts of the selected QGS papers through word-frequency analysis. The search string consists of two sets of keywords:



Fig. 2. Paper Search and Selection Process.


Step 4: Conduct an automated search. These identified keywords are paired one by one and input into automated searches across the above-mentioned seven widely used databases. Our automated search focused on papers published in 2019 or later, the year GPT-2 was released, as it marked a significant milestone in the development of large language models. The search was conducted in the title, abstract, and keyword fields of the papers in each database. Specifically, the number of papers retrieved from each database after applying the search query and the year filter (2019-2023) is as follows: 398 papers in the ACM Digital Library, 2,112 papers in IEEE Xplore, 724 papers in Science Direct, 4,245 papers in Web of Science, 23,721 papers in Springer, 7,154 papers in Wiley, and 3,557 papers in arXiv.

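The keyword-pairing scheme of Steps 3 and 4 can be sketched as follows; note that the keyword lists below are illustrative placeholders, not the survey's actual QGS-derived terms.

```python
from itertools import product

# Illustrative keyword sets; the survey's real terms were elicited from the
# QGS papers via word-frequency analysis and are not reproduced here.
llm_terms = ["large language model", "LLM", "GPT"]
security_terms = ["vulnerability detection", "malware analysis", "intrusion detection"]

def build_queries(set_a, set_b):
    """Pair every keyword from one set with every keyword from the other,
    yielding one boolean query per pair for the automated database search."""
    return [f'"{a}" AND "{b}"' for a, b in product(set_a, set_b)]

queries = build_queries(llm_terms, security_terms)
print(len(queries))  # 3 x 3 = 9 paired queries
```

Each generated query is then run against the title, abstract, and keyword fields of the seven databases.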

2.3 Study Selection


After obtaining the initial pool of 38,112 papers (38,071 from the automated search and 41 from the QGS), we conducted a multi-stage study selection process to identify the most relevant and high-quality papers for our systematic review.


2.3.1 Coarse-Grained Inclusion and Exclusion Criteria. To select relevant papers for our research questions, we defined four inclusion criteria and eight exclusion criteria (as listed in Table 2) for the coarse-grained paper selection process. Among them, In#1, Ex#1, Ex#2, and Ex#3 were automatically applied based on the keywords, duplication status, length, and publication venue of the papers. The remaining inclusion criteria (In#2~4) and exclusion criteria (Ex#4~8) were manually applied by inspecting the topic and content of each paper. Specifically, criterion In#1 retained 7,582 papers whose titles and abstracts contained a pair of the identified search keywords. Subsequently, Ex#1 filtered out 440 duplicate or multi-version papers from the same authors with little difference. Next, the automated filtering criterion Ex#2 was applied to exclude short papers, tool demos, keynotes, editorials, books, theses, workshop papers, or poster papers, resulting in 4,855 papers being removed. The remaining papers were then screened based on criterion Ex#3, which retained 523 full research papers published in the identified venues or as preprints on arXiv. The remaining inclusion and exclusion criteria (In#2~4, Ex#4~8) were then manually applied to the titles and abstracts of these 523 papers in order to determine their relevance to the research topic. Three researchers independently applied the inclusion and exclusion criteria to the titles and abstracts. Disagreements were resolved through discussion and consensus. After this manual inspection stage, 156 papers were included for further fine-grained full-text quality assessment.


Table 2. Inclusion and exclusion criteria.


Inclusion criteria
In#1: The title and abstract of the paper contain a pair of the identified search keywords;
In#4: Papers that evaluate the performance or effectiveness of LLMs in security scenarios.
Exclusion criteria
Ex#1: Duplicate papers, or multi-version studies by the same authors with little difference;
Ex#2: Short papers of fewer than 8 pages, tool demos, keynotes, editorials, books, theses, workshop papers, or poster papers;
Ex#3: Papers not published in the identified conferences or journals, nor released as preprints on arXiv;
Ex#4: Papers that do not focus on security tasks (e.g., general-domain NLP tasks);
Ex#5: Papers that use traditional machine learning or deep learning techniques without involving LLMs;
Ex#6: Secondary studies such as systematic literature reviews (SLRs), reviews, or surveys;
Ex#7: Papers not written in English;
Ex#8: Papers that focus on the security of LLMs rather than using them for security tasks.
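As a rough sketch, the automated portion of this selection pipeline (In#1, Ex#1, Ex#2, Ex#3) can be viewed as a chain of filters over paper records; the record fields and sample papers below are hypothetical stand-ins, not the survey's actual data or implementation.

```python
# Hypothetical sketch of the automated coarse-grained filters.

def coarse_filter(papers, keywords, venues, min_pages=8):
    # In#1: keep papers whose title/abstract contain a search keyword
    stage = [p for p in papers
             if any(k in (p["title"] + " " + p["abstract"]).lower() for k in keywords)]
    # Ex#1: drop duplicate / multi-version papers (approximated here by title)
    seen, deduped = set(), []
    for p in stage:
        if p["title"] not in seen:
            seen.add(p["title"])
            deduped.append(p)
    # Ex#2: drop short papers, demos, posters, etc. (approximated by length)
    stage = [p for p in deduped if p["pages"] >= min_pages]
    # Ex#3: keep only papers from the identified venues or arXiv
    return [p for p in stage if p["venue"] in venues]

papers = [
    {"title": "LLM-based fuzzing", "abstract": "llm for security", "pages": 12, "venue": "ICSE"},
    {"title": "LLM-based fuzzing", "abstract": "llm for security", "pages": 12, "venue": "ICSE"},
    {"title": "Poster: llm ids", "abstract": "llm intrusion detection", "pages": 2, "venue": "CCS"},
    {"title": "Query planning", "abstract": "databases", "pages": 14, "venue": "ICSE"},
]
selected = coarse_filter(papers, ["llm"], {"ICSE", "CCS", "arXiv"})
print(len(selected))  # 1: the duplicate, the short poster, and the off-topic paper are dropped
```

The manual criteria (In#2~4, Ex#4~8) have no such mechanical form and were applied by the three researchers as described above.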

2.3.2 Fine-grained Quality Assessment. To ensure the included papers are of sufficient quality and rigor, we assessed them using a set of quality criteria adapted from existing guidelines for systematic reviews in software engineering. The quality criteria included:


Each criterion was scored on a 3-point scale (0: not met, 1: partially met, 2: fully met). Papers with a total score of 6 or higher (out of 10) were considered as having acceptable quality. After the quality assessment, 93 papers remained in the selected set.

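With each criterion scored 0-2 and a pass mark of 6 out of 10, the scheme implies five criteria per paper. A minimal sketch of the acceptance rule (the criterion names themselves are not enumerated here):

```python
def passes_quality(scores, threshold=6):
    """Return True if a paper's five criterion scores (each 0, 1, or 2)
    sum to at least the acceptance threshold of 6 out of 10."""
    assert len(scores) == 5 and all(s in (0, 1, 2) for s in scores)
    return sum(scores) >= threshold

print(passes_quality([2, 2, 1, 1, 0]))  # True  (total 6 -> acceptable quality)
print(passes_quality([1, 1, 1, 1, 1]))  # False (total 5 -> excluded)
```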

2.3.3 Forward and Backward Snowballing. To further expand the coverage of relevant literature, we performed forward and backward snowballing on the 93 selected papers. Forward snowballing identified papers that cited the selected papers, while backward snowballing identified papers that were referenced by the selected papers.


Here we obtained 2,056 and 5,255 papers during the forward and backward processes, respectively. We then applied the same inclusion/exclusion criteria and quality assessment to the papers found through snowballing. After the initial keyword filtering and deduplication, 1,978 papers remained. Among them, 68 papers were excluded during the page-number filtering step, and 1,235 papers were removed to ensure the papers were published in the selected venues. After confirming the paper topics and assessing the paper quality, only 44 papers were ultimately retained from the snowballing process, resulting in a final set of 127 papers for data extraction and synthesis.


2.4 Statistics of Selected Papers


After conducting searches and snowballing, a total of 127 relevant research papers were ultimately obtained. The distribution of the included documents is outlined in Figure 3. As depicted in Figure 3(A), 39% of the papers underwent peer review before publication. Among these venues, ICSE had the highest frequency, contributing 7%. Other venues making significant contributions included FSE, ISSTA, ASE, and TSE, with contributions of 5%, 5%, 3%, and 3%, respectively. Meanwhile, the remaining 61% of the papers were published on arXiv, an open-access platform serving as a repository for scholarly articles. This discovery is unsurprising given the rapid emergence of new LLM4Security studies, with many works recently completed and potentially undergoing peer review. Despite the lack of peer review, we conducted rigorous quality assessments on all collected papers to ensure the integrity of our investigation results. This approach enables us to include all high-quality and relevant publications while upholding stringent research standards.



Fig. 3. Overview of the selected 127 papers’ distribution.


The temporal distribution of the included papers is depicted in Figure 3(B). Since 2020, there has been a notable upward trend in the number of publications. In 2020, only 1 relevant paper was published, followed by 2 in 2021. However, the number of papers sharply increased to 11 by 2022. Surprisingly, in 2023, the total count surged to 109 published papers. This rapid growth trend signifies an increasing interest in LLM4Security research. Currently, many works from 2024 are still under review or unpublished. Hence, we have chosen only 6 representative papers. We will continue to observe the developments in LLM4Security research throughout 2024.


Table 3. Extracted data items and related research questions (RQs).


RQ | Data Item
1,2,3,4 | Category of the LLMs
1,3,4 | Category of the security domains
1,2,3 | Characteristics and applicability of the LLMs
1,3 | Security task requirements and the application of LLM solutions
1 | Security tasks belonging to each security domain
3 | Techniques for adapting LLMs to the task
3 | Notable external enhancement techniques
4 | Types and characteristics of the datasets used

After completing the full-text review phase, we proceeded with data extraction. The objective was to collect all relevant information essential for offering detailed and insightful answers to the RQs outlined in §2.1. As illustrated in Table 3, the extracted data included the categorization of security tasks, their corresponding domains, as well as classifications of LLMs, external enhancement techniques, and dataset characteristics. Using the gathered data, we systematically examined the relevant aspects of LLM application within the security domains.


3 RQ1: WHAT TYPES OF SECURITY TASKS HAVE BEEN FACILITATED BY LLM-BASED APPROACHES?


This section delves into the detailed examination of LLM utilization across diverse security domains. We have classified them into five primary domains, aligning with the themes of the collected papers: software and system security, network security, information and content security, hardware security, and blockchain security, totaling 127 papers. Figure 4 visually depicts the distribution of LLMs within these five domains. Additionally, Table 4 offers a comprehensive breakdown of research detailing specific security tasks addressed through LLM application.



Fig. 4. Distribution of LLM usages in security domains.


The majority of research activity in the realm of software and system security, constituting around 59.84% of the total research output, is attributed to the advancements made by code LLMs [178, 247, 250] and the extensive applications of LLMs in software engineering [80]. This emphasis underscores the significant role and impact of LLMs in software and system security, indicating a predominant focus on leveraging LLMs to automate the handling of potential security issues in programs and systems. Approximately 17.32% of the research focus pertains to network security tasks, highlighting the importance of LLMs in aiding traffic detection and network threat analysis. Information and content security activities represent around 14.17% of the research output, signaling a growing interest in employing LLMs for generating and detecting fake content. Conversely, activities in hardware security and blockchain security account for approximately 4.72% and 3.94% of the research output, respectively, suggesting that while exploration in these domains has been comparatively limited thus far, there remains research potential in utilizing LLMs to analyze hardware-level vulnerabilities and potential security risks in blockchain technology.

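The quoted shares follow directly from the per-domain paper counts in Table 4; a quick arithmetic check:

```python
# Per-domain paper counts from Table 4 of the survey.
counts = {
    "software and system security": 76,
    "network security": 22,
    "information and content security": 18,
    "hardware security": 6,
    "blockchain security": 5,
}
total = sum(counts.values())
shares = {domain: round(100 * n / total, 2) for domain, n in counts.items()}
print(total, shares["software and system security"])  # 127 59.84
```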

Table 4. Distribution of security tasks over six security domains.


Security Domain | Security Tasks | Total
Network security | Web fuzzing (3), Traffic and intrusion detection (10), Cyber threat analysis (5), Penetration testing (4) | 22
Software and system security | Vulnerability detection (17), Vulnerability repair (10), Bug detection (8), Bug repair (20), Program fuzzing (6), Reverse engineering and binary analysis (7), Malware detection (2), System log analysis (6) | 76
Information and content security | Phishing and scam detection (8), Harmful content detection (6), Steganography (2), Access control (1), Forensics (1) | 18
Hardware security | Hardware vulnerability detection (2), Hardware vulnerability repair (4) | 6
Blockchain security | Smart contract security (4), Transaction anomaly detection (1) | 5

3.1 Application of LLMs in Network Security


This section explores the application of LLMs in the field of network security. The tasks include web fuzzing, intrusion and anomaly detection, cyber threat analysis, and penetration testing.


Web fuzzing. Web fuzzing employs mutation-based fuzzers that generate test cases incrementally based on the coverage feedback received from the instrumented web application [205]. Security is undeniably the most critical concern for web applications, and fuzzing can help operators discover more of their potential security risks. Liang et al. [115] proposed GPTFuzzer, based on an encoder-decoder architecture. It generates effective payloads for web application firewalls (WAFs) targeting SQL injection, XSS, and RCE attacks by generating fuzz test cases. The model undergoes reinforcement learning [112] fine-tuning with a KL-divergence penalty to effectively generate attack payloads and mitigate the local-optimum issue. Similarly, Liu et al. [120] utilized an encoder-decoder architecture model to generate SQL injection detection test cases for web applications, enabling the translation of user inputs into new test cases. Meng et al.'s CHATAFL [133], on the other hand, shifts focus to leveraging LLMs to generate structured and sequenced effective test inputs for network protocols lacking machine-readable versions.

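In the standard formulation of RL fine-tuning for language models, the KL-divergence penalty mentioned for GPTFuzzer is folded into the reward; a generic sketch of such an objective (the usual formulation, not necessarily GPTFuzzer's exact loss) is:

```latex
R(x, y) = r(x, y) - \beta \, D_{\mathrm{KL}}\bigl(\pi_{\theta}(y \mid x) \,\|\, \pi_{\mathrm{ref}}(y \mid x)\bigr)
```

where $r(x, y)$ is the task reward (e.g., whether a generated payload bypasses the WAF), $\pi_{\theta}$ is the policy being fine-tuned, $\pi_{\mathrm{ref}}$ is the frozen pre-trained reference model, and $\beta$ trades off reward maximization against staying close to the reference distribution, which is what mitigates collapse into a local optimum.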

Traffic and intrusion detection. Detecting network traffic and intrusions is a crucial aspect of network security and management [137]. LLMs have been widely applied in network intrusion detection tasks, covering traditional web applications, IoT (Internet of Things), and in-vehicle network scenarios [11, 62, 131, 138]. LLMs not only learn the characteristics of malicious traffic data [10, 11, 138] and capture anomalies in user-initiated behaviors [24] but also describe the intent of intrusions and abnormal behaviors [3, 10, 58]. Additionally, they can provide corresponding security recommendations and response strategies for identified attack types [37]. Liu et al. [123] proposed a method for detecting malicious URL behavior by utilizing LLMs to extract hierarchical features of malicious URLs. Their work extends the application of LLMs in intrusion detection tasks to the user level, demonstrating the generality and effectiveness of LLMs in intrusion and anomaly detection tasks.

流量和入侵检测。检测网络流量和入侵是网络安全和管理的关键方面 [137]。大语言模型已广泛应用于网络入侵检测任务,涵盖传统网络应用、物联网 (Internet of Things) 和车载网络场景 [11, 62, 131, 138]。大语言模型不仅学习了恶意流量数据的特征 [10, 11, 138] 并捕捉了用户发起的行为中的异常 [24],还描述了入侵和异常行为的意图 [3, 10, 58]。此外,它们还可以为识别出的攻击类型提供相应的安全建议和响应策略 [37]。Liu 等人 [123] 提出了一种通过利用大语言模型提取恶意 URL 的层次特征来检测恶意 URL 行为的方法。他们的工作将大语言模型在入侵检测任务中的应用扩展到了用户层面,展示了大语言模型在入侵和异常检测任务中的通用性和有效性。
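
所谓"恶意 URL 的层次特征",可以理解为把 URL 按结构层级拆开后分别建模。下面给出一个示意性的分解函数(假设性示例:Liu 等人实际使用的特征集并未公开,这里仅演示层次化拆分的思路):

```python
from urllib.parse import urlparse

def hierarchical_url_features(url):
    """把 URL 拆成由粗到细的层级:协议、注册域名、子域名、
    路径段、查询参数键,供模型分层关注。"""
    parts = urlparse(url)
    host_labels = parts.hostname.split(".") if parts.hostname else []
    return {
        "scheme": parts.scheme,
        "domain": ".".join(host_labels[-2:]),            # 粗粒度:注册域名
        "subdomains": host_labels[:-2],                  # 细粒度:子域名链
        "path_segments": [s for s in parts.path.split("/") if s],
        "query_keys": [kv.split("=")[0] for kv in parts.query.split("&") if kv],
    }
```

例如,仿冒银行登录页的 URL 往往在"子域名"层暴露 login、bank 等诱导性词汇。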

Cyber threat analysis. In contemporary risk management strategies, Cyber Threat Intelligence (CTI) reporting plays a pivotal role, as evidenced by recent research [34]. With the continued surge in the volume of CTI reports, there is a growing need for automated tools to facilitate report generation. The application of LLMs in network threat analysis can be categorized into CTI generation and CTI analysis for decision-making. The emphasis of CTI generation varies, including extracting CTI from network security text information (such as books, blogs, and news) [5], generating structured CTI reports from unstructured information [189], and generating CTI from network security entity graphs [162]. Aghaei et al.'s CVEDrill [4] can generate priority recommendation reports for potential cyber security threats and predict their impact. Additionally, Moskal et al. [140] explored the application of ChatGPT in assisting or automating response decision-making for threat behaviors, demonstrating the potential of LLMs in addressing simple network attack activities.

网络威胁分析。在当代风险管理策略中,网络威胁情报 (CTI) 报告扮演着关键角色,正如最近的研究 [34] 所证明的那样。随着 CTI 报告数量的持续激增,对自动化工具以促进报告生成的需求也在不断增加。大语言模型在网络威胁分析中的应用可分为 CTI 生成和 CTI 分析以支持决策。CTI 生成的重点各不相同,包括从网络安全文本信息(如书籍、博客、新闻)中提取 CTI [5],从非结构化信息生成结构化的 CTI 报告 [189],以及从网络安全实体图中生成 CTI [162]。Aghaei 等人的 CVEDrill [4] 可以为潜在网络安全威胁生成优先级推荐报告并预测其影响。此外,Moskal 等人 [140] 探索了 ChatGPT 在辅助或自动化威胁行为响应决策中的应用,展示了大语言模型在应对简单网络攻击活动中的潜力。

Penetration test. Conducting a controlled attack on a computer system to evaluate its security is the essence of penetration testing, which remains a pivotal approach utilized by organizations to bolster their defenses against cyber threats [183]. The general penetration testing process consists of three steps: information gathering, payload construction, and vulnerability exploitation. Temara [198] utilized LLMs to gather information for penetration testing, including the IP address, domain information, vendor technologies, SSL/TLS credentials, and other details of the target website. Sai Charan et al. [31] critically examined the capability of LLMs to generate malicious payloads for penetration testing, with results indicating that ChatGPT can generate more targeted and complex payloads for attackers. Happe et al. [74] developed an automated Linux privilege escalation guidance tool using LLMs. Additionally, the automated penetration testing tool PentestGPT [45], based on LLMs, achieved excellent performance on a penetration testing benchmark containing 13 scenarios and 182 subtasks by combining three self-interacting modules (inference, generation, and parsing modules).

渗透测试。对计算机系统进行受控攻击以评估其安全性是渗透测试的核心,这仍然是组织用来加强防御网络威胁的关键方法 [183]。一般的渗透测试过程包括三个步骤:信息收集、载荷构建和漏洞利用。Temara [198] 利用大语言模型收集渗透测试所需的信息,包括目标网站的 IP 地址、域名信息、供应商技术、SSL/TLS 凭证等详细信息。Sai Charan 等人 [31] 批判性地研究了大语言模型生成渗透测试恶意载荷的能力,结果表明 ChatGPT 可以为攻击者生成更具针对性和复杂性的载荷。Happe 等人 [74] 利用大语言模型开发了一种自动化的 Linux 权限提升指导工具。此外,基于大语言模型的自动化渗透测试工具 PentestGPT [45] 通过结合三个自交互模块(推理、生成和解析模块),在包含 13 个场景和 182 个子任务的渗透测试基准上取得了出色的性能。

3.2 Application of LLMs in Software and System Security

3.2 大语言模型在软件和系统安全中的应用

This section explores the application of LLMs in the field of software and system security. LLMs excel in understanding user commands, inferring program control and data flow, and generating complex data structures [216]. The tasks include vulnerability detection, vulnerability repair, bug detection, bug repair, program fuzzing, reverse engineering and binary analysis, malware detection, and system log analysis.

本节探讨了大语言模型(LLM)在软件和系统安全领域的应用。大语言模型在理解用户命令、推断程序控制和数据流以及生成复杂数据结构方面表现出色[216]。其任务包括漏洞检测、漏洞修复、错误检测、错误修复、程序模糊测试、逆向工程和二进制分析、恶意软件检测以及系统日志分析。

Vulnerability detection. The escalation in software vulnerabilities is evident in the recent surge of vulnerability reports documented by Common Vulnerabilities and Exposures (CVEs) [14]. With this rise, the potential for cyber security attacks grows, posing significant economic and social risks. Hence, the detection of vulnerabilities becomes imperative to safeguard software systems and uphold social and economic stability. The method of utilizing LLMs for static vulnerability detection in code shows significant performance improvements compared to traditional approaches based on graph neural networks or matching rules [17, 36, 38, 40, 61, 98, 124, 168, 199, 203, 211, 238, 246]. The potential demonstrated by GPT series models in vulnerability detection tasks is particularly evident [38, 61, 98, 124, 204, 238]. However, LLMs may generate false positives when dealing with vulnerability detection tasks due to minor changes in function and variable names or modifications to library functions [203]. Liu et al. [121] proposed LATTE, which leverages LLMs to achieve automated binary taint analysis. This overcomes the limitations of traditional taint analysis, which requires manual customization of taint propagation rules and vulnerability inspection rules. They discovered 37 new vulnerabilities in real firmware. Tihanyi et al. [200] used LLMs to generate a large-scale vulnerability-labeled dataset, FormAI, while also noting that over 50% of the code generated by LLMs may contain vulnerabilities, posing a significant risk to software security.

漏洞检测。软件漏洞的增加在最近由通用漏洞披露 (CVE) [14] 记录的漏洞报告中表现得尤为明显。随着漏洞的增加,网络安全攻击的潜在风险也在增长,带来了重大的经济和社会风险。因此,漏洞检测变得至关重要,以保护软件系统并维护社会和经济的稳定。利用大语言模型进行代码静态漏洞检测的方法,相比基于图神经网络或匹配规则的传统方法,显示出显著的性能提升 [17, 36, 38, 40, 61, 98, 124, 168, 199, 203, 211, 238, 246]。GPT 系列模型在漏洞检测任务中表现出的潜力尤为明显 [38, 61, 98, 124, 204, 238]。然而,大语言模型在处理漏洞检测任务时,可能会因函数和变量名称的微小变化或库函数的修改而产生误报 [203]。Liu 等人 [121] 提出了 LATTE,结合大语言模型实现自动化的二进制污点分析。这克服了传统污点分析需要手动定制污点传播规则和漏洞检查规则的局限性。他们在真实固件中发现了 37 个新漏洞。Tihanyi 等人 [200] 使用大语言模型生成了一个大规模的漏洞标记数据集 FormAI,同时也指出大语言模型生成的代码中超过 50% 可能包含漏洞,对软件安全构成了重大风险。

Vulnerability repair. Due to the sharp increase in the number of detected vulnerabilities and the complexity of modern software systems, manually fixing security vulnerabilities is extremely time-consuming and labor-intensive for security experts [243]. Research shows that 50% of vulnerabilities have a lifecycle exceeding 438 days [110]. Delayed vulnerability patching may result in ongoing attacks on software systems [118], causing economic losses to users. The T5 model based on the encoder-decoder architecture performs better in vulnerability repair tasks [65, 240]. Although LLMs can effectively generate fixes, challenges remain in maintaining the functionality correctness of functions [158], and they are susceptible to influences from different programming languages. For example, the current capabilities of LLMs in repairing Java vulnerabilities are limited [218]. Constructing a comprehensive vulnerability repair dataset and fine-tuning LLMs on it can significantly improve the model's performance in vulnerability repair tasks [65]. Alrashedy et al. [30] proposed an automated vulnerability repair tool driven by feedback from static analysis tools. Tol et al. [201] proposed a method called ZeroLeak, which utilizes LLMs to repair side-channel vulnerabilities in programs. Charalambous et al. [12] combined LLMs with Bounded Model Checking (BMC) to verify the effectiveness of repair solutions, addressing the problem of decreased functionality correctness after using LLMs to repair vulnerabilities.

漏洞修复。由于检测到的漏洞数量急剧增加以及现代软件系统的复杂性,手动修复安全漏洞对安全专家来说极其耗时且费力 [243]。研究表明,50% 的漏洞生命周期超过 438 天 [110]。延迟修复漏洞可能导致软件系统持续受到攻击 [118],给用户造成经济损失。基于编码器-解码器架构的 T5 模型在漏洞修复任务中表现更好 [65, 240]。尽管大语言模型能够有效生成修复方案,但在保持功能正确性方面仍存在挑战 [158],并且它们容易受到不同编程语言的影响。例如,目前大语言模型在修复 Java 漏洞方面的能力有限 [218]。构建全面的漏洞修复数据集并在此基础上对大语言模型进行微调,可以显著提高模型在漏洞修复任务中的性能 [65]。Alrashedy 等人 [30] 提出了一种由静态分析工具反馈驱动的自动化漏洞修复工具。Tol 等人 [201] 提出了一种名为 ZeroLeak 的方法,利用大语言模型修复程序中的侧信道漏洞。Charalambous 等人 [12] 将大语言模型与有界模型检验 (BMC) 相结合,验证修复方案的有效性,解决了使用大语言模型修复漏洞后功能正确性下降的问题。

Bug detection. Bugs typically refer to any small faults or errors present in software or hardware, which may cause programs to malfunction or produce unexpected results. Some bugs may be exploited by attackers to create security vulnerabilities. Therefore, bug detection is crucial for the security of software and systems. LLMs can be utilized to generate code lines and compare them with the original code to flag potential bugs within code snippets [7]. They can also combine feedback from static analysis tools to achieve precise bug localization [92, 111]. Fine-tuning techniques are crucial for bug detection tasks as well; applying fine-tuning allows LLMs to identify errors in code without relying on test cases [106, 227]. Additionally, Du et al. [54] and Li et al. [114] introduced the concept of contrastive learning, which focuses LLMs on the subtle differences between correct and buggy versions of code lines. Fang et al. [57] proposed a software-agnostic representation method called Represent Them All, based on contrastive learning and fine-tuning modules, suitable for various downstream tasks including bug detection and predicting the priority and severity of bugs.

Bug 检测。Bugs 通常指软件或硬件中存在的任何小故障或错误,可能导致程序运行异常或产生意外结果。一些 bugs 可能被攻击者利用来制造安全漏洞。因此,bug 检测对于软件和系统的安全至关重要。大语言模型 (LLM) 可用于生成代码行并与原始代码进行比较,以标记代码片段中的潜在 bugs [7]。它们还可以结合静态分析工具的反馈来实现精确的 bug 定位 [92, 111]。微调技术对于 bug 检测任务也至关重要,应用微调可以使大语言模型在不依赖测试用例的情况下识别代码中的错误 [106, 227]。此外,Du 等人 [54] 和 Li 等人 [114] 引入了对比学习的概念,它使大语言模型专注于正确和错误代码行之间的细微差异。Fang 等人 [57] 提出了一种名为 Represent Them All 的与软件无关的表示方法,基于对比学习和微调模块,适用于各种下游任务,包括 bug 检测和预测 bug 的优先级和严重性。
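
对比学习的直觉可以用 InfoNCE 式的损失来说明:正确代码行的嵌入(锚点)被拉近其正样本、推远带 bug 的负样本。下面是纯 Python 的最小示意(假设性示例:真实系统中的嵌入来自大语言模型编码器,这里用普通向量代替):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def info_nce_loss(anchor, positive, negatives, tau=0.1):
    """锚点与正样本越相似、与负样本(含 bug 的版本)越不相似,损失越小。"""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))
```

训练时最小化该损失,即迫使编码器放大"正确版本"与"含 bug 版本"之间的细微差异。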

Bug repair. LLMs possess robust code generation capabilities, and their utilization in engineering for code generation can significantly enhance efficiency. However, code produced by LLMs often carries increased security risks, such as bugs and vulnerabilities [163]. These program bugs can lead to persistent security vulnerabilities. Hence, automating the process of bug fixing is imperative, involving the use of automation technology to analyze flawed code and generate accurate patches to rectify identified issues. LLMs like CodeBERT [88, 105, 222, 241], CodeT5 [88, 197, 209], Codex [56, 92, 223], LLaMa [197], CodeLLaMa [147, 188], CodeGEN [223], UniXcoder [241], T5 [234], PLBART [88], and GPT Series [147, 197, 223, 224, 241, 242, 244] have showcased effectiveness in generating syntactically accurate and contextually relevant code. This includes frameworks with encoder-decoder architecture like Repilot [214], tailored specifically for producing repair patches. Utilizing LLMs for program repair can achieve competitive performance in producing patches for various types of errors and defects [224]. These models effectively capture the underlying semantics and dependencies in code, resulting in precise and efficient patches. Moreover, fine-tuning LLMs on specific code repair datasets can further improve their ability to generate high-quality patches for real-world software projects. Integrating LLMs into program repair not only speeds up the error-fixing process but also allows software developers to focus on more complex tasks, thereby enhancing the reliability and maintainability of the software [223]. As demonstrated in the case of ChatGPT, integrating interactive feedback loops notably enhances the accuracy of program repairs [223]. This iterative process of patch generation and validation fosters a nuanced comprehension of software semantics, thereby resulting in more impactful fixes.
By integrating domain-specific knowledge and technologies with the capabilities of LLMs, their performance is further enhanced. Custom prompts, fine-tuning for specific tasks, retrieving external data, and utilizing static analysis tools [65, 92, 197, 221, 240] significantly improve the effectiveness of bug fixes driven by LLMs.

Bug修复。LLMs具备强大的代码生成能力,在工程中利用它们进行代码生成可以显著提高效率。然而,LLMs生成的代码通常携带更高的安全风险,例如漏洞和缺陷 [163]。这些程序漏洞可能导致持久的安全漏洞。因此,自动化修复漏洞的过程势在必行,这涉及使用自动化技术分析有缺陷的代码并生成准确的补丁来修复已识别的问题。像CodeBERT [88, 105, 222, 241]、CodeT5 [88, 197, 209]、Codex [56, 92, 223]、LLaMa [197]、CodeLLaMa [147, 188]、CodeGEN [223]、UniXcoder [241]、T5 [234]、PLBART [88]和GPT系列 [147, 197, 223, 224, 241, 242, 244]等LLMs在生成语法正确且上下文相关的代码方面展示了有效性。这包括专门用于生成修复补丁的具有编码器-解码器架构的框架,如Repilot [214]。利用LLMs进行程序修复可以在生成各种类型错误和缺陷的补丁方面达到有竞争力的性能 [224]。这些模型有效捕捉了代码中的底层语义和依赖关系,从而生成精确且高效的补丁。此外,在特定代码修复数据集上微调LLMs可以进一步提高它们为现实世界软件项目生成高质量补丁的能力。将LLMs集成到程序修复中不仅加快了错误修复过程,还使软件开发人员能够专注于更复杂的任务,从而提高软件的可靠性和可维护性 [223]。正如ChatGPT的案例所示,当与交互式反馈循环结合时,显著提高了程序修复的准确性 [223]。这种补丁生成和验证的迭代过程促进了对软件语义的细致理解,从而产生更有影响力的修复。通过将领域特定知识和技术与LLMs的能力相结合,进一步提升了它们的性能。自定义提示、特定任务微调、检索外部数据以及使用静态分析工具 [65, 92, 197, 221, 240]显著提高了LLMs驱动的漏洞修复效果。
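
上文所述"补丁生成与验证的迭代过程"可以概括为一个反馈循环:模型提出补丁,验证器(测试套件、静态分析器或 BMC)给出失败信息,并反馈到下一轮提示中。以下为示意性骨架(假设性示例:generate_patch 与 validate 均为调用方提供的钩子,并非任何真实模型 API):

```python
def repair_loop(buggy_code, generate_patch, validate, max_rounds=3):
    """迭代式程序修复:每轮把上一轮验证失败的反馈交给生成器,
    直到补丁通过验证或用尽预算。"""
    feedback = None
    for _ in range(max_rounds):
        patch = generate_patch(buggy_code, feedback)
        ok, feedback = validate(patch)
        if ok:
            return patch
    return None  # 预算内未得到通过验证的补丁
```

这种结构与正文提到的"自定义提示 + 静态分析工具反馈"是同一思路:验证器越强,反馈越具体,修复越可靠。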

Program fuzzing. Fuzz testing, or fuzzing, refers to an automated testing method aimed at generating inputs to uncover unforeseen behaviors. Both researchers and practitioners have effectively developed practical fuzzing tools, demonstrating significant success in detecting numerous bugs and vulnerabilities within real-world systems [22]. The generation capability of LLMs enables testing against various input program languages and different features [46, 220], effectively overcoming the limitations of traditional fuzz testing methods. Under strategies such as repetitive querying, example querying, and iterative querying [237], LLMs can significantly enhance the generation effectiveness of test cases. LLMs can generate test cases that trigger vulnerabilities from historical bug reports of programs [47], produce test cases similar but different from sample inputs [85], analyze compiler source code to generate programs that trigger specific optimizations [228], and split the testing requirements and test case generation using a dual-model interaction framework, assigning them to different LLMs for processing.

程序模糊测试。模糊测试(Fuzzing)是一种自动化测试方法,旨在生成输入以发现未预见的行为。研究人员和实践者已经有效地开发了实用的模糊测试工具,并在检测现实世界系统中的众多错误和漏洞方面取得了显著成功 [22]。大语言模型的生成能力使其能够针对各种输入程序语言和不同特性进行测试 [46, 220],有效克服了传统模糊测试方法的局限性。在重复查询、示例查询和迭代查询等策略下 [237],大语言模型可以显著提高测试用例的生成效果。大语言模型可以从程序的历史错误报告中生成触发漏洞的测试用例 [47],生成与样本输入相似但不同的测试用例 [85],分析编译器源代码以生成触发特定优化的程序 [228],并使用双模型交互框架将测试需求和测试用例生成分开,分配给不同的大语言模型处理。
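
迭代查询式的模糊测试可以看作把大语言模型当作变异器的覆盖率引导循环:能触达新覆盖的输入被保留并继续查询。下面是一个最小示意(假设性示例:mutate 代表对大语言模型的一次调用,coverage_of 代表插桩反馈,均非任何真实工具的 API):

```python
import random

def llm_guided_fuzz(seeds, mutate, coverage_of, rounds=10, rng=None):
    """覆盖率引导的查询循环:候选输入若触达新的覆盖则加入语料库。"""
    rng = rng or random.Random(0)
    corpus = list(seeds)
    seen = set()
    for s in corpus:
        seen |= coverage_of(s)
    for _ in range(rounds):
        candidate = mutate(rng.choice(corpus))  # "与样本相似但不同"的新输入
        cov = coverage_of(candidate)
        if not cov <= seen:                     # 触达了新的覆盖
            corpus.append(candidate)
            seen |= cov
    return corpus
```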

Reverse engineering and binary analysis. Reverse engineering is the process of attempting to understand how existing artifacts work, whether for malicious purposes or defensive purposes, and it holds significant security implications. The capability of LLMs to recognize software functionality and extract important information enables them to perform certain reverse engineering steps [159]. For example, Xu et al. [226] achieved recovery of variable names from binary files by propagating LLMs query results through multiple rounds. Armengol-Estape et al. [15] combined type inference engines with LLMs to perform disassembly of executable files and generate program source code. LLMs can also be used to assist in binary program analysis. Sun et al. [193] proposed DexBert for characterizing Android system binary bytecode. Pei et al. [160] preserved the semantic symmetry of code based on group theory, resulting in their binary analysis framework SYMC demonstrating outstanding generalization and robustness in various binary analysis tasks. Song et al. [191] utilized LLMs to address authorship analysis issues in software engineering, effectively applying them to real-world APT malicious software for organization-level verification. Some studies [86] apply LLMs to enhance the readability and usability of decompiler outputs, thereby assisting reverse engineers in better understanding binary files.

逆向工程与二进制分析。逆向工程是试图理解现有工件如何运作的过程,无论是出于恶意目的还是防御目的,它都具有重大的安全意义。大语言模型识别软件功能和提取重要信息的能力使它们能够执行某些逆向工程步骤 [159]。例如,Xu 等人 [226] 通过多轮传播大语言模型的查询结果,实现了从二进制文件中恢复变量名称。Armengol-Estape 等人 [15] 将类型推断引擎与大语言模型结合,执行可执行文件的反汇编并生成程序源代码。大语言模型还可用于辅助二进制程序分析。Sun 等人 [193] 提出了 DexBert 用于表征 Android 系统二进制字节码。Pei 等人 [160] 基于群论保留了代码的语义对称性,使他们的二进制分析框架 SYMC 在各种二进制分析任务中表现出出色的泛化性和鲁棒性。Song 等人 [191] 利用大语言模型解决软件工程中的作者分析问题,有效地将其应用于现实世界中的 APT 恶意软件进行组织级验证。一些研究 [86] 应用大语言模型来提高反编译器输出的可读性和可用性,从而帮助逆向工程师更好地理解二进制文件。

Malware detection. Due to the rising volume and intricacy of malware, detecting malicious software has emerged as a significant concern. While conventional detection techniques rely on signatures and heuristics, they exhibit limited effectiveness against unknown attacks and are susceptible to evasion through obfuscation techniques [20]. LLMs can extract semantic features of malware, leading to more competitive performance. AVScan2Vec, proposed by Joyce et al. [93], transforms antivirus scan reports into vector representations, effectively handling large-scale malware datasets and performing well in tasks such as malware classification, clustering, and nearest neighbor search. Botacin [23] explored the application of LLMs in malware defense from the perspective of malware generation. While LLMs cannot directly generate complete malware based on simple instructions, they can generate building blocks of malware and successfully construct various malware variants by blending different functionalities and categories. This provides a new perspective for malware detection and defense.

恶意软件检测。随着恶意软件数量和复杂性的增加,检测恶意软件已成为一个重要问题。传统的检测技术依赖于签名和启发式方法,但它们在应对未知攻击时效果有限,并且容易通过混淆技术逃避检测 [20]。大语言模型可以提取恶意软件的语义特征,从而获得更具竞争力的性能。Joyce 等人 [93] 提出的 AVScan2Vec 将反病毒扫描报告转换为向量表示,有效处理大规模恶意软件数据集,并在恶意软件分类、聚类和最近邻搜索等任务中表现良好。Botacin [23] 从恶意软件生成的角度探讨了大语言模型在恶意软件防御中的应用。虽然大语言模型无法根据简单的指令直接生成完整的恶意软件,但它们可以生成恶意软件的构建块,并通过混合不同功能和类别成功构建各种恶意软件变体。这为恶意软件检测和防御提供了新的视角。

System log analysis. Analyzing the growing amount of log data generated by software-intensive systems manually is infeasible due to its sheer volume. Numerous deep learning approaches have been suggested for detecting anomalies in log data. These approaches encounter various challenges, including dealing with high-dimensional and noisy log data, addressing class imbalances, and achieving generalization [89]. Nowadays, researchers are utilizing the language understanding capabilities of LLMs to identify and analyze anomalies in log data. Compared to traditional deep learning methods, LLMs demonstrate outstanding performance and good interpretability [166, 185]. Fine-tuning LLMs for specific types of logs [97] or using reinforcement learning-based fine-tuning strategies [72] can significantly enhance their performance in log analysis tasks. LLMs are also being employed for log analysis in cloud servers [39, 119], where their reasoning abilities can be combined with server logs to infer the root causes of cloud service incidents.

系统日志分析。由于软件密集型系统生成的日志数据量庞大,手动分析这些不断增长的日志数据是不可行的。许多深度学习方法已被提出用于检测日志数据中的异常。这些方法面临各种挑战,包括处理高维和噪声日志数据、解决类别不平衡问题以及实现泛化 [89]。如今,研究人员正在利用大语言模型的语言理解能力来识别和分析日志数据中的异常。与传统的深度学习方法相比,大语言模型展现出卓越的性能和良好的可解释性 [166, 185]。通过对大语言模型进行特定类型日志的微调 [97] 或使用基于强化学习的微调策略 [72],可以显著提升其在日志分析任务中的性能。大语言模型还被用于云服务器的日志分析 [39, 119],其推理能力可以与服务器日志结合,推断云服务事件的根本原因。
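
把日志交给大语言模型分析之前,通常先做模板化:把可变部分(数字、十六进制 ID、IP 地址)掩码掉,让模型关注稳定的事件模板。真实流水线多使用 Drain 等解析器,这里仅用正则做示意(假设性示例):

```python
import re

def normalize_log(line):
    """按从特殊到一般的顺序掩码变量字段,得到事件模板。"""
    line = re.sub(r"\b\d{1,3}(\.\d{1,3}){3}\b", "<IP>", line)  # IPv4 地址
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)        # 十六进制 ID
    line = re.sub(r"\b\d+\b", "<NUM>", line)                   # 其余数字
    return line
```

同一事件的不同实例会归并为同一模板,便于模型统计"罕见模板"并解释异常原因。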

3.3 Application of LLMs in Information and Content Security

3.3 大语言模型在信息和内容安全中的应用

This section explores the application of LLMs in the field of information and content security. The tasks include phishing and scam detection, harmful content detection, steganography, access control, and forensics.

本节探讨大语言模型在信息和内容安全领域的应用,涵盖钓鱼和诈骗、有害内容、隐写术 (steganography)、访问控制和取证等任务。

Phishing and scam detection. Network deception is a deliberate act of introducing false or misleading content into a network system, threatening the personal privacy and property security of users. Emails, short message service (SMS), and web advertisements are leveraged by attackers to entice users and steer them towards phishing sites, enticing them to click on malicious links [196]. LLMs can generate deceptive or false information on a large scale under specific prompts [172], making them useful for automated phishing email generation [77, 176], but compared to manual design methods, phishing emails generated by LLMs have lower click-through rates [77]. LLMs can achieve phishing email detection through prompts based on website information [100] or fine-tuning for specific email features [139, 176]. Spam emails often contain a large number of phishing emails. Labonne et al.'s research [102] has demonstrated the effectiveness of LLMs in spam email detection, showing significant advantages over traditional machine learning methods. An interesting study [28] suggests that LLMs can mimic real human interactions with scammers in an automated and meaningless manner, thereby wasting scammers' time and resources and alleviating the nuisance of scam emails.

钓鱼和诈骗检测。网络欺骗是故意在网络系统中引入虚假或误导性内容的行为,威胁用户的个人隐私和财产安全。攻击者利用电子邮件、短信服务(SMS)和网页广告引诱用户进入钓鱼网站,诱使他们点击恶意链接[196]。大语言模型可以在特定提示下大规模生成欺骗性或虚假信息[172],使其在自动生成钓鱼邮件方面具有应用价值[77, 176],但与手动设计方法相比,大语言模型生成的钓鱼邮件点击率较低[77]。大语言模型可以通过基于网站信息的提示[100]或针对特定邮件特征的微调[139, 176]来实现钓鱼邮件检测。垃圾邮件中通常包含大量钓鱼邮件。Labonne等人的研究[102]表明,大语言模型在垃圾邮件检测方面具有显著优势,优于传统的机器学习方法。一项有趣的研究[28]指出,大语言模型可以以自动化和无意义的方式模仿真实人类与诈骗者的互动,从而浪费诈骗者的时间和资源,减轻诈骗邮件的困扰。
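
基于提示的钓鱼邮件检测,核心是把邮件字段组织成一个判别式提示交给模型。下面的提示构造函数仅为示意(假设性示例:所引研究中的实际措辞与少样本设置各不相同):

```python
def phishing_prompt(sender, subject, body):
    """零样本判别提示:要求模型只回答 PHISHING 或 LEGITIMATE。"""
    return (
        "You are an email security analyst. Decide whether the email below "
        "is PHISHING or LEGITIMATE, and answer with exactly one word.\n"
        f"From: {sender}\nSubject: {subject}\nBody: {body}\nAnswer:"
    )
```

实践中还可在提示里附上发件域名的注册信息、链接目标等网站信息,对应正文提到的"基于网站信息的提示"。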

Harmful contents detection. Social media platforms frequently face criticism for amplifying political polarization and deteriorating public discourse. Users often contribute harmful content that reflects their political beliefs, thereby intensifying contentious and toxic discussions or participating in harmful behavior [215]. The application of LLMs in detecting harmful content can be divided into three aspects: detection of extreme political stances [73, 135], tracking of criminal activity discourse [83], and identification of social media bots [27]. LLMs tend to express attitudes consistent with the values encoded in the programming when faced with political discourse, indicating the complexity and limitations of LLMs in handling social topics [75]. Hartvigsen et al. [132] generated a large-scale dataset of harmful and benign discourse targeting 13 minority groups using LLMs. Through validation, it was found that human annotators struggled to distinguish between LLM-generated and human-written discourse, advancing efforts in filtering and combating harmful contents.

有害内容检测。社交媒体平台经常因加剧政治极化和恶化公共讨论而受到批评。用户常常发布反映其政治信仰的有害内容,从而加剧争议性和有毒的讨论,或参与有害行为 [215]。大语言模型在检测有害内容方面的应用可以分为三个方面:极端政治立场的检测 [73, 135]、犯罪活动言论的追踪 [83] 以及社交媒体机器人的识别 [27]。大语言模型在面对政治话语时,往往会表达与编程中编码的价值观一致的态度,这表明大语言模型在处理社会话题时的复杂性和局限性 [75]。Hartvigsen 等人 [132] 利用大语言模型生成了一个针对 13 个少数群体的有害和良性言论的大规模数据集。通过验证发现,人类标注者难以区分大语言模型生成的言论和人类撰写的言论,这推动了过滤和打击有害内容的努力。

Steganography. Steganography, as discussed in Anderson's work [13], focuses on embedding confidential data within ordinary information carriers without alerting third parties, thereby safeguarding the secrecy and security of the concealed information. Wang et al. [207] introduced a method for linguistic steganalysis using LLMs based on few-shot learning principles, aiming to overcome the limited availability of labeled data by incorporating a small set of labeled samples along with auxiliary unlabeled samples to improve the efficiency of linguistic steganalysis. This approach significantly improves the detection capability of existing methods in scenarios with few samples. Bauer et al. [18] used the GPT-2 model to encode ciphertext into natural language cover texts, allowing users to control the observable format of the ciphertext for covert information transmission on public platforms.

隐写术 (Steganography)。隐写术,如 Anderson 的研究 [13] 所述,专注于将机密数据嵌入到普通信息载体中,而不引起第三方的注意,从而保护隐藏信息的机密性和安全性。Wang 等人 [207] 提出了一种基于少样本学习原则,利用大语言模型进行语言隐写分析的方法,旨在克服标记数据有限的挑战,通过引入少量标记样本和辅助的未标记样本来提高语言隐写分析的效率。这种方法在样本较少的情况下显著提升了现有方法的检测能力。Bauer 等人 [18] 使用 GPT-2 模型将密文编码为自然语言覆盖文本,使用户能够控制密文的可观察格式,以便在公共平台上进行隐蔽信息传输。
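
把密文编码为自然语言载体文本的基本思路可以用一个玩具示例说明:每一步"模型"给出两个候选词,秘密比特决定选哪一个,于是载体文本读起来自然却携带隐藏数据。真实系统(如上文基于 GPT-2 的方法)按模型概率选取候选并可编码更多比特;以下为假设性示意:

```python
def encode_bits(bits, candidates):
    """bits 中的每个比特从对应位置的候选词对里选词,拼成载体文本。"""
    return " ".join(candidates[i][b] for i, b in enumerate(bits))

def decode_bits(text, candidates):
    """接收方持有同样的候选表,按词在候选对中的下标还原比特。"""
    return [candidates[i].index(w) for i, w in enumerate(text.split())]
```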

Access control. Access control aims to restrict the actions or operations permissible for a legitimate user of a computer system [180], with passwords serving as the fundamental component for its implementation. Despite the proliferation of alternative technologies, passwords continue to dominate as the preferred authentication mechanism [156]. PassGPT, a password generation model leveraging LLMs, introduces guided password generation, wherein PassGPT's sampling process generates passwords adhering to user-defined constraints. This approach outperforms existing methods utilizing Generative Adversarial Networks (GANs) by producing a larger set of previously unseen passwords, thereby demonstrating the effectiveness of LLMs in improving existing password strength estimators [173].

访问控制。访问控制旨在限制计算机系统合法用户可执行的动作或操作 [180],密码是其实现的基本组成部分。尽管各种替代技术不断涌现,密码仍然是最主要的身份验证机制 [156]。PassGPT 是一种利用大语言模型的密码生成模型,引入了引导式密码生成:其采样过程可以生成符合用户定义约束的密码。与使用生成对抗网络 (GAN) 的现有方法相比,该方法能够生成更多以前未见过的密码,从而证明了大语言模型在改进现有密码强度评估器方面的有效性 [173]。
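
PassGPT 的"引导式生成"思想是:每一步先用用户定义的约束过滤模型提出的候选字符,再从过滤结果中采样,从而保证生成的密码一定满足策略。以下为示意性骨架(假设性示例:model_candidates 代表模型的候选提案,这里用普通可调用对象代替,并非 PassGPT 的真实实现):

```python
def guided_sample(model_candidates, constraint, max_len=12):
    """受约束的逐字符生成:不满足约束的候选在采样前被剔除。"""
    out = ""
    while len(out) < max_len:
        allowed = [c for c in model_candidates(out) if constraint(out + c)]
        if not allowed:
            break
        out += allowed[0]  # 这里取第一个候选;真实实现按概率采样
    return out
```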

Forensics. Digital forensics plays a pivotal role in the successful prosecution of cybercriminals, whose cases often involve a wide array of digital devices. The evidence retrieved through digital forensic investigations must be admissible in a court of law [184]. Scanlon and colleagues [182] delved into the potential application of LLMs within the field of digital forensics. Their exploration encompassed an assessment of LLM performance across various digital forensic scenarios, including file identification, evidence retrieval, and incident response. Their findings led to the conclusion that while LLMs currently lack the capability to function as standalone digital forensic tools, they can nonetheless serve as supplementary aids in select cases.

取证。数字取证在成功起诉网络犯罪分子方面发挥着关键作用,此类案件往往涉及各种数字设备。通过数字取证调查获取的证据必须在法庭上可采信 [184]。Scanlon 及其同事 [182] 探讨了大语言模型在数字取证领域的潜在应用。他们的研究涵盖了大语言模型在不同数字取证场景中的表现评估,包括文件识别、证据检索和事件响应。他们的研究得出结论,尽管大语言模型目前尚无法作为独立的数字取证工具使用,但它们在某些情况下仍可作为辅助工具。

3.4 Application of LLMs in Hardware Security

3.4 大语言模型在硬件安全中的应用

Modern computing systems are built on System-on-Chip (SoC) architectures because they achieve high levels of integration by using multiple Intellectual Property (IP) cores. However, this also brings about new security challenges, as a vulnerability in one IP core could affect the security of the entire system. While software and firmware patches can address many hardware security vulnerabilities, some vulnerabilities cannot be patched, and extensive security assurances are required during the design process [49]. This section explores the application of LLMs in the field of hardware security. The tasks include hardware vulnerability detection and hardware vulnerability repair.

现代计算系统基于片上系统 (SoC) 架构,因为它们通过使用多个知识产权 (IP) 核实现了高度集成。然而,这也带来了新的安全挑战,因为一个 IP 核中的漏洞可能会影响整个系统的安全性。虽然软件和固件补丁可以解决许多硬件安全漏洞,但有些漏洞无法修补,因此在设计过程中需要广泛的安全保证 [49]。本节探讨了大语言模型在硬件安全领域的应用。其任务包括硬件漏洞检测和硬件漏洞修复。

Hardware vulnerability detection. LLMs can extract security properties from hardware development documents. Meng et al. [134] trained HS-BERT on hardware architecture documents such as RISC-V, OpenRISC, and MIPS, and identified 8 security vulnerabilities in the design of the OpenTitan SoC. Additionally, Paria et al. [155] used LLMs to identify security vulnerabilities from user-defined SoC specifications, map them to relevant CWEs, generate corresponding assertions, and take security measures by executing security policies.

硬件漏洞检测。大语言模型可以从硬件开发文档中提取安全属性。Meng 等人 [134] 在 RISC-V、OpenRISC 和 MIPS 等硬件架构文档上训练了 HS-BERT,并识别出 OpenTitan SoC 设计中的 8 个安全漏洞。此外,Paria 等人 [155] 使用大语言模型从用户定义的 SoC 规范中识别安全漏洞,将其映射到相关的 CWE,生成相应的断言,并通过执行安全策略来采取安全措施。

Hardware vulnerability repair. LLMs have found application within the integrated System-on-Chip (SoC) security verification paradigm, showcasing potential in addressing diverse hardware-level security tasks such as vulnerability insertion, security assessment, verification, and the development of mitigation strategies [179]. By leveraging hardware vulnerability information, LLMs offer advice on vulnerability repair strategies, thereby improving the efficiency and accuracy of hardware vulnerability analysis and mitigation efforts [116]. In their study, Nair and colleagues [144] demonstrated that LLMs can generate hardware-level security vulnerabilities during hardware code generation and explored their utility in generating secure hardware code. They successfully produced secure hardware code for 10 Common Weakness Enumerations (CWEs) at the hardware design level. Additionally, Tan et al. [8] curated a comprehensive corpus of hardware security vulnerabilities and evaluated the performance of LLMs in automating the repair of hardware vulnerabilities based on this corpus.

硬件漏洞修复。大语言模型在集成片上系统 (SoC) 安全验证范式中找到了应用,展示了在解决多种硬件级安全任务(如漏洞插入、安全评估、验证以及缓解策略开发)方面的潜力 [179]。通过利用硬件漏洞信息,大语言模型提供了漏洞修复策略的建议,从而提高了硬件漏洞分析和缓解工作的效率和准确性 [116]。Nair 及其同事 [144] 在他们的研究中证明,大语言模型可以在硬件代码生成过程中生成硬件级安全漏洞,并探索了它们在生成安全硬件代码中的实用性。他们成功地为硬件设计级别的 10 个常见弱点枚举(CWE)生成了安全硬件代码。此外,Tan 等人 [8] 整理了一个全面的硬件安全漏洞语料库,并基于该语料库评估了大语言模型在自动化硬件漏洞修复中的表现。

3.5 Application of LLMs in Blockchain Security

3.5 大语言模型在区块链安全中的应用

This section explores the application of LLMs in the field of blockchain security. The tasks include smart contract security and transaction anomaly detection.

本节探讨大语言模型在区块链安全领域的应用,包括智能合约安全和交易异常检测任务。

Smart contract security. With the advancement of blockchain technology, smart contracts have emerged as a pivotal element in blockchain applications [251]. Despite their significance, the development of smart contracts can introduce vulnerabilities that pose potential risks such as financial losses. While LLMs offer automation for detecting vulnerabilities in smart contracts, the detection outcomes often exhibit a high rate of false positives [32, 42]. Performance varies across different vulnerability types and is constrained by the contextual length of LLMs [32]. GPTLENS [87] divides the detection process of smart contract vulnerabilities into two phases: generation and discrimination. During the generation phase, diverse vulnerability responses are generated, and in the discrimination phase, these responses are evaluated and ranked to mitigate false positives. Sun and colleagues [194] integrated LLMs and program analysis to identify logical vulnerabilities in smart contracts, breaking down logical vulnerability categories into scenarios and attributes. They utilized LLMs to match potential vulnerabilities and further integrated static confirmation to validate the findings of LLMs.

智能合约安全。随着区块链技术的发展,智能合约已成为区块链应用中的关键要素[251]。尽管其重要性不言而喻,但智能合约的开发可能会引入漏洞,带来诸如财务损失等潜在风险。虽然大语言模型能够自动化检测智能合约中的漏洞,但检测结果通常表现出较高的误报率[32, 42]。不同漏洞类型的检测性能各异,并受限于大语言模型的上下文长度[32]。GPTLENS[87]将智能合约漏洞的检测过程分为两个阶段:生成和判别。在生成阶段,生成多样化的漏洞响应;在判别阶段,对这些响应进行评估和排序,以减少误报。Sun及其团队[194]结合大语言模型和程序分析来识别智能合约中的逻辑漏洞,将逻辑漏洞类别分解为场景和属性。他们利用大语言模型匹配潜在漏洞,并进一步结合静态验证来确认大语言模型的发现。
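The generate-then-discriminate workflow described above can be sketched as a small pipeline. This is a minimal illustration, not GPTLENS itself: the `llm` and `score_fn` callables stand in for real model calls, and the auditor-prompt wording is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Finding:
    vulnerability: str  # e.g. "reentrancy in withdraw()"
    score: float = 0.0  # filled in during the discrimination phase

def generate_findings(llm: Callable[[str], List[str]],
                      contract_src: str, n_auditors: int = 3) -> List[Finding]:
    """Generation phase: several independent 'auditor' prompts elicit
    diverse candidate vulnerabilities for the same contract."""
    findings: List[Finding] = []
    for i in range(n_auditors):
        prompt = (f"You are auditor #{i}. List potential vulnerabilities in "
                  f"this Solidity contract:\n{contract_src}")
        findings += [Finding(v) for v in llm(prompt)]
    return findings

def discriminate(score_fn: Callable[[Finding], float],
                 findings: List[Finding], top_k: int = 2) -> List[Finding]:
    """Discrimination phase: a critic scores every candidate and only the
    highest-ranked survive, suppressing false positives."""
    for f in findings:
        f.score = score_fn(f)
    return sorted(findings, key=lambda f: f.score, reverse=True)[:top_k]
```

In a real deployment both phases would be backed by LLM API calls; separating generation breadth from discrimination precision is the key design choice.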

Transaction anomaly detection. Due to the limitations of the search space and the significant manual analysis required, real-time intrusion detection systems for blockchain transactions remain challenging. Traditional methods primarily employ reward-based approaches, focusing on identifying and exploiting profitable transactions, or pattern-based techniques relying on custom rules to infer the intent of blockchain transactions and user address behavior [175, 217]. However, these methods may not accurately capture all anomalies. Therefore, more general and adaptable LLM technology can be applied to effectively identify various abnormal transactions in real-time. Gai et al. [66] apply LLMs to dynamically and in real-time detect anomalies in blockchain transactions. Due to its unrestricted search space and independence from predefined rules or patterns, it enables the detection of a wider range of transaction anomalies.

交易异常检测。由于搜索空间的限制和大量手动分析的需求,区块链交易的实时入侵检测系统仍然具有挑战性。传统方法主要采用基于奖励的方法,专注于识别和利用有利可图的交易,或基于模式的技术,依赖自定义规则来推断区块链交易和用户地址行为的意图 [175, 217]。然而,这些方法可能无法准确捕获所有异常。因此,更通用和适应性强的大语言模型技术可以应用于实时有效地识别各种异常交易。Gai 等人 [66] 应用大语言模型动态实时检测区块链交易中的异常。由于其不受限制的搜索空间和独立于预定义规则或模式,它能够检测到更广泛的交易异常。

RQ1 - Summary

RQ1 - 总结

(1) We have divided cyber security tasks into five domains: software and system security, network security, information and content security, hardware security, and blockchain security. We have summarized the specific applications of LLMs in these domains.

我们将网络安全任务分为五个领域:软件与系统安全、网络安全、信息与内容安全、硬件安全以及区块链安全。我们总结了大语言模型在这些领域的具体应用。

(2) We discussed 21 cyber security tasks and found that LLMs are most widely applied in the field of software and system security, with 76 papers covering 8 tasks. Only 5 papers addressed the least applied domain, blockchain security.

我们讨论了21项网络安全任务,发现大语言模型在软件和系统安全领域的应用最为广泛,共有76篇论文涵盖了8项任务。仅有5篇论文提到了应用最少的领域——区块链安全。

4 RQ2: WHAT LLMS HAVE BEEN EMPLOYED TO SUPPORT CYBER SECURITY TASKS?

4 RQ2: 有哪些大语言模型被用于支持网络安全任务?

4.1 Architecture of LLMs in Cyber security

4.1 大语言模型在网络安全中的架构

Pre-trained Language Models (PLMs) have exhibited impressive capabilities across various NLP tasks [101, 136, 186, 212, 248]. Researchers have noted substantial improvements in their performance as model size increases, with models surpassing certain parameter thresholds showing significant performance gains [79, 186]. The term "Large Language Model" (LLM) distinguishes language models based on the size of their parameters, specifically referring to large-sized PLMs [136, 248]. However, there is no formal consensus in the academic community regarding the minimum parameter size for LLMs, as model capacity is intricately linked to training data size and overall computational resources [96]. In this study, we adopt the LLM categorization framework introduced by Pan et al. [154], which classifies the predominant LLMs explored in our research into three architectural categories: encoder-only, encoder-decoder, and decoder-only. We also considered whether the related models are open-source. Open-source models offer higher flexibility and can acquire new knowledge through fine-tuning on specific tasks based on pre-trained models, while closed-source models can be directly called via APIs, reducing hardware expenses. This taxonomy and the relevant models are shown in Table 5. We analyzed the distribution of different LLM architectures applied in various cyber security domains, as shown in Fig. 5.

预训练语言模型 (Pre-trained Language Models, PLMs) 在各种 NLP 任务中展现了令人印象深刻的能力 [101, 136, 186, 212, 248]。研究人员发现,随着模型规模的增加,其性能显著提升,超过某些参数阈值后更是带来显著的性能增益 [79, 186]。术语“大语言模型 (Large Language Model, LLM)”根据参数规模区分语言模型,特别是指大规模的 PLMs [136, 248]。然而,学术界对于 LLM 的最小参数规模尚无正式共识,因为模型能力与训练数据规模和整体计算资源密切相关 [96]。在本研究中,我们采用 Pan 等人 [154] 提出的 LLM 分类框架,将我们研究中的主要 LLM 分为三种架构类别:仅编码器 (encoder-only)、编码器-解码器 (encoder-decoder) 和仅解码器 (decoder-only)。我们还考虑了相关模型是否开源。开源模型具有更高的灵活性,可以在预训练模型的基础上通过微调获得特定任务的新知识,而闭源模型则可以通过 API 直接调用,减少硬件成本。该分类及相关模型如表 5 所示。我们分析了不同 LLM 架构在网络安全领域中的应用分布,如图 5 所示。

Encoder-only LLMs. Encoder-only models, as their name implies, comprise solely an encoder network. Initially designed for language understanding tasks like text classification, these models, such as BERT and its variants [5, 50, 60, 71, 76, 127, 129, 181], aim to predict a class label for input text [50]. For instance, BERT, which adopts the encoder architecture of the Transformer model, is mentioned in 35 papers included in this study. Encoder-only LLMs use a bidirectional multi-layer self-attention mechanism to calculate the relevance of each token with all other tokens, thereby capturing semantic features that include the global context. This architecture is mainly used for processing input data, focusing on understanding and encoding information rather than generating new text. Researchers employed these models to generate embeddings for data that is relevant to cyber security (such as traffic data and code), mapping complex data types into vector space. These models typically use a masking strategy during pre-training, and the complex training strategies increase training time and the risk of overfitting. In the realm of cyber security, researchers have adopted advanced models that offer capabilities much needed in cyber security tasks such as code understanding [211] and traffic analysis [3].

仅编码器大语言模型 (Encoder-only LLMs)。仅编码器模型,顾名思义,仅包含编码器网络。最初设计用于文本分类等语言理解任务,这些模型如 BERT 及其变体 [5, 50, 60, 71, 76, 127, 129, 181],旨在为输入文本预测类别标签 [50]。例如,采用 Transformer 模型编码器架构的 BERT 在本研究涉及的 35 篇论文中被提及。仅编码器大语言模型使用双向多层自注意力机制计算每个 Token 与所有其他 Token 的相关性,从而捕获包含全局上下文的语义特征。该架构主要用于处理输入数据,侧重于理解和编码信息,而非生成新文本。研究人员使用这些模型为与网络安全相关的数据(如流量数据和代码)生成嵌入,将复杂数据类型映射到向量空间。这些模型在预训练期间通常使用掩码策略,复杂的训练策略增加了训练时间和过拟合风险。在网络安全领域,研究人员采用了提供网络安全任务所需能力的高级模型,如代码理解 [211] 和流量分析 [3]。

Various prominent models, including CodeBERT [60], GraphCodeBERT [71], RoBERTa [127], CharBERT [129], DeBERTa [76], and DistilBERT [181], have gained widespread usage due to their ability to effectively process and analyze code, making them valuable tools in the field of cyber security. An example is RoBERTa [127], which enhances BERT's robustness through various model design adjustments and training techniques. These include altering key hyperparameters, eliminating the next-sentence pre-training objective, and utilizing substantially larger mini-batches and learning rates during training. CodeBERT [60] is a bimodal extension of BERT that utilizes both natural language and source code as its input. It employs a replaced token detection task to bolster its understanding of programming languages, in order to tackle code generation and vulnerability detection tasks. The encoder-only architecture provides models with excellent data representation capabilities. Note that these aforementioned BERT variants were not initially designed for cyber security tasks. Instead, their application in the cyber security field stems from their capabilities as general models in NLP tasks for code semantics interpretation and understanding. In contrast, SecureBERT [5] is a BERT variant specifically designed for cyber threat analysis tasks. Its development highlights the robustness and flexibility of encoder-only architecture models across different tasks. Diverse training tasks and specialized training schemes enhance the model's feature representation capabilities and boost its performance in cyber security-related tasks.

包括 CodeBERT [60]、GraphCodeBERT [71]、RoBERTa [127]、CharBERT [129]、DeBERTa [76] 和 DistilBERT [181] 在内的多种知名模型,因其能够有效处理和分析代码而得到广泛应用,成为网络安全领域中的宝贵工具。以 RoBERTa [127] 为例,它通过多种模型设计调整和训练技术增强了 BERT 的鲁棒性。这些技术包括改变关键超参数、取消下一句预训练目标,以及在训练过程中使用更大的小批量和学习率。CodeBERT [60] 是 BERT 的双模态扩展,它同时使用自然语言和源代码作为输入,并采用替换 Token 检测任务来增强其对编程语言的理解,以应对代码生成和漏洞检测任务。仅编码器架构为模型提供了出色的数据表示能力。需要注意的是,上述 BERT 变体最初并非为网络安全任务设计,它们在网络安全领域的应用源于其作为通用模型在 NLP 任务中对代码语义解释和理解的能力。相比之下,SecureBERT [5] 是专门为网络威胁分析任务设计的 BERT 变体。其发展突显了仅编码器架构模型在不同任务中的鲁棒性和灵活性。多样化的训练任务和专门的训练方案增强了模型的特征表示能力,并提升了其在网络安全相关任务中的表现。

Table 5. The classification of the LLMs used in the collected papers, with the number following the model indicating the count of papers that utilized that particular LLM.

表 5: 收集论文中使用的大语言模型分类,模型后的数字表示使用该模型的论文数量。

模型 发布时间 开源
仅编码器 BERT (8) 2018.10
RoBERTa (12) 2019.07
DistilBERT (3) 2019.10
CodeBERT (8) 2020.02
DeBERTa (1) 2020.06
GraphCodeBERT (1) 2020.09
CharBERT (1) 2020.11
编码器-解码器 T5 (4) 2019.10
BART (1) 2019.10
PLBART (3) 2021.03
CodeT5 (5) 2021.09
UniXcoder (1) 2022.03
Flan-T5 (1) 2022.10
仅解码器 GPT-2 (9) 2019.02
GPT-3 (4) 2020.04
GPT-Neo (1) 2021.03
CodeX (9) 2021.07
CodeGen (5) 2022.03
InCoder (1) 2022.04
PaLM (3) 2022.04
Jurassic-1 (1) 2022.04
GPT-3.5 (52) 2022.11
LLaMa (4) 2023.02
GPT-4 (38) 2023.03
Bard (8) 2023.03
Claude (3) 2023.03
StarCoder (3) 2023.05
Falcon (2) 2023.06
CodeLLaMa (4) 2023.08

Regarding model applicability, as shown in Figure 5, encoder-only models initially garnered attention in the fields of network security [11] and software and system security [106, 222]. In 2023, this approach was extended to the field of information and content security, utilizing encoder-only models to detect harmful content on social media platforms [27, 73, 135].

关于模型适用性,如图 5 所示,仅编码器模型最初在网络安全 [11] 以及软件和系统安全 [106, 222] 领域引起了关注。2023 年,这一方法被扩展到信息和内容安全领域,利用仅编码器模型检测社交媒体平台上的有害内容 [27, 73, 135]。

Encoder-decoder LLMs. The Transformer model, based on the encoder-decoder architecture [206], consists of two sets of Transformer blocks: the encoder and decoder. Stacked multi-head self-attention layers are used by the encoder to encode the input sequence, generating latent representations. In contrast, the decoder performs cross-attention on these representations and sequentially produces the target sequence. The structure of encoder-decoder LLMs makes them highly suitable for sequence-to-sequence tasks such as code translation and summarization. However, their complex architecture requires more computational resources and high-quality labeled data.

编码器-解码器大语言模型。基于编码器-解码器架构 [206] 的 Transformer 模型由两组 Transformer 块组成:编码器和解码器。编码器使用堆叠的多头自注意力层对输入序列进行编码,生成潜在表示。相比之下,解码器对这些表示进行交叉注意力操作,并顺序生成目标序列。编码器-解码器大语言模型的结构使其非常适合序列到序列任务,如代码翻译和摘要生成。然而,其复杂的架构需要更多的计算资源和高质量的标注数据。

Models like BART [109], T5 [171], and CodeT5 [210] exemplify this architecture. CodeT5 [210] and PLBART [9] have built upon the foundation of their original models by introducing bimodal inputs of programming language and text, demonstrating effective code comprehension capabilities. Raffel et al. [171] show in their work that almost all NLP tasks can be framed as sequence-to-sequence generation. In LLM4Security, the encoder-decoder architecture was first applied in the field of network security [120]. However, subsequent research has not widely adopted this approach, possibly due to the complexity of the encoder-decoder structure. From another perspective, owing to its flexible training strategy and excellent adaptability to complex tasks, the encoder-decoder model was later extended to other cyber security tasks such as program fuzzing [47], reverse engineering [15], and phishing email detection [90].

BART [109]、T5 [171] 和 CodeT5 [210] 等模型展示了这种架构。CodeT5 [210] 和 PLBART [9] 在原始模型的基础上,通过引入编程语言和文本的双模态输入,展示了有效的代码理解能力。Raffel 等人 [171] 在他们的工作中表明,几乎所有的 NLP 任务都可以被构造成序列到序列的生成任务。在 LLM4Security 中,编码器-解码器架构最早被应用于网络安全领域 [120]。然而,后续研究并未广泛采用这种方法,可能是因为编码器-解码器结构的复杂性。从另一个角度来看,由于其灵活的训练策略和对复杂任务的出色适应性,编码器-解码器模型后来扩展到其他网络安全任务,如程序模糊测试 [47]、逆向工程 [15] 和钓鱼邮件检测 [90]。
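The observation that nearly any task can be cast as sequence-to-sequence is easy to make concrete: a T5-style model receives a task prefix plus the payload and generates the target sequence. The sketch below uses illustrative task prefixes, not ones prescribed by any particular model.

```python
def to_text2text(task: str, payload: str) -> str:
    """Frame an arbitrary task as text-to-text: the encoder reads the
    prefixed input, the decoder generates the answer sequence."""
    return f"{task}: {payload}"

# Code summarization and code translation become uniform model inputs,
# handled by one and the same encoder-decoder model:
inputs = [
    to_text2text("summarize", "def add(a, b): return a + b"),
    to_text2text("translate Java to Python",
                 "int add(int a, int b) { return a + b; }"),
]
```

This uniform framing is why a single encoder-decoder checkpoint can serve fuzzing-input generation, summarization, and detection tasks with only the prefix changed.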


Fig. 5. Distribution and trend of different model architectures.

图 5: 不同模型架构的分布与趋势。

Decoder-only LLMs. Unlike the encoder-decoder architecture, which involves the encoder processing input text and the decoder generating output text by predicting subsequent tokens from an initial state, decoder-only LLMs rely solely on the decoder module to produce the target output text [169]. This autoregressive training paradigm allows decoder-only models to generate longer-form outputs token by token, making them well-suited for producing detailed analyses, advisories, and even code relevant to cyber security. The attention mechanism in these models also enables them to flexibly draw upon the extensive knowledge stored in their parameters and apply it to the current context.

仅解码器大语言模型 (Decoder-only LLMs)。与编码器-解码器架构(编码器处理输入文本,解码器从初始状态出发通过预测后续 Token 生成输出文本)不同,仅解码器大语言模型仅依靠解码器模块生成目标输出文本 [169]。这种自回归训练范式使仅解码器模型能够逐 Token 地生成较长篇幅的输出,非常适合生成与网络安全相关的详细分析、建议乃至代码。这些模型中的注意力机制还使它们能够灵活调用存储在参数中的海量知识,并将其应用于当前上下文。
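The autoregressive, token-by-token generation loop at the heart of decoder-only models can be sketched independently of any specific LLM. The `next_token` callable below is a placeholder for the model's real next-token prediction over logits.

```python
from typing import Callable, List

def generate(next_token: Callable[[List[str]], str],
             prompt: List[str], max_new_tokens: int = 16,
             stop: str = "<eos>") -> List[str]:
    """Autoregressive decoding: the model repeatedly predicts the next
    token conditioned on the prompt plus everything generated so far, so
    long-form advisories or code emerge one token at a time."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)  # real LLM: sample from next-token logits
        if tok == stop:
            break
        tokens.append(tok)
    return tokens
```

Because each step conditions on the full running context, the loop naturally exploits the large context windows discussed below.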

GPT-2 [170], GPT-3 [25], GPT-3.5 [148], and GPT-4 [150] belong to the GPT series of models, among which GPT-3.5 and GPT-4 are the models most frequently used to address various cyber security issues in this study, covering almost all cyber security applications [45, 58, 62, 182]. Their strong few-shot learning abilities allow rapid development of new cyber security capabilities with minimal fine-tuning. More specialized versions like Codex [33] and others have been fine-tuned for specific code-related tasks. Open-source models like GPT-Neo [21], LLaMa [202], and Falcon [161] also follow this architecture. Additionally, code generation LLMs such as CodeGen [145], InCoder [64], StarCoder [113], and CodeLLaMa [177] have been widely used for bug detection and repair, as well as for vulnerability repair [218, 224, 227].

GPT-2 [170]、GPT-3 [25]、GPT-3.5 [148] 和 GPT-4 [150] 属于 GPT 系列模型,其中 GPT-3.5 和 GPT-4 是本研究中最常用于解决各类网络安全问题的模型,涵盖了几乎所有网络安全应用 [45, 58, 62, 182]。它们强大的少样本学习能力使得在最小微调的情况下能够快速开发新的网络安全能力。更专业的版本如 Codex [33] 等已经针对特定的代码相关任务进行了微调。开源模型如 GPT-Neo [21]、LLaMa [202] 和 Falcon [161] 也遵循了这一架构。此外,代码生成大语言模型如 CodeGen [145]、InCoder [64]、StarCoder [113] 和 CodeLLaMa [177] 已广泛用于错误 (bug) 检测与修复以及漏洞修复 [218, 224, 227]。

The large context window of decoder-only models allows them to take in and utilize more context about the cyber security task, like related vulnerabilities, reports, and code snippets.

解码器专用模型的大上下文窗口使它们能够接收并利用更多关于网络安全任务的上下文信息,例如相关的漏洞、报告和代码片段。

Due to the powerful natural language generation capabilities of the decoder-only architecture, researchers initially attempted to apply it to the generation of fake cyber threat intelligence [172]. Decoder-only LLMs have gained prominence in recent years, especially in 2022 and 2023 as shown in Figure 5, witnessing a surge in development and commercial adoption by leading Internet companies. For instance, Google introduced Bard [69], while Meta unveiled LLaMa [202]. Unlike GPT-4 and its derivative application ChatGPT, which quickly found integration into various cyber security tasks, these newer models have yet to see widespread adoption in the cyber security domain.

由于仅解码器架构 (decoder-only architecture) 在自然语言生成方面的强大能力,研究人员最初尝试将其应用于虚假网络威胁情报的生成 [172]。近年来,仅解码器大语言模型逐渐崭露头角,尤其是在 2022 和 2023 年,如图 5 所示,领先的互联网公司在其开发和商业应用上迎来了显著增长。例如,Google 推出了 Bard [69],而 Meta 发布了 LLaMa [202]。与迅速被集成到各类网络安全任务中的 GPT-4 及其衍生应用 ChatGPT 不同,这些较新的模型尚未在网络安全领域得到广泛应用。

4.2 Trend Analysis

4.2 趋势分析

Illustrated in Figure 5, from 2020 to 2024, there have been significant shifts in the preference and utilization of LLM architectures across cyber security tasks. The selection of decoder-only, encoder-decoder, and encoder-only structures has influenced diverse research directions and solutions in the cyber security field. This examination delves into the trends regarding the adoption of these architectures over time, reflecting the evolving landscape of LLM applications for cyber security tasks.

如图 5 所示,从 2020 年到 2024 年,网络安全任务中大语言模型架构的偏好和使用发生了显著变化。仅解码器、编码器-解码器和仅编码器结构的选择影响了网络安全领域的各种研究方向和解决方案。本文深入探讨了这些架构随时间采用的趋势,反映了大语言模型在网络安全任务中应用的不断演变。

Table 6. Overview of the distribution of LLMs in the open-source community.

表 6: 开源社区中大语言模型的分布概览

(a) Top 20 most downloaded models on Hugging face.

(a) Hugging Face 上下载量最高的 20 个模型

Model Architecture
BERT-base Encoder-only
DistilBERT-base Encoder-only
GPT2 Decoder-only
RoBERTa-large Encoder-only
RoBERTa-base Encoder-only
xlm-RoBERTa-large Encoder-only
xlm-RoBERTa-base Encoder-only
DeBERTa-base Encoder-only
Qwen-VL-Chat Decoder-only
T5-small Encoder-decoder
BERT-base-cased Encoder-only
T5-base Encoder-decoder
BERT-base-uncased Encoder-only
CamemBERT-base Encoder-only
DistilGPT2 Decoder-only
DistilRoBERTa-base Encoder-only
LLaMa3-8B Decoder-only
ALBERT-base-v2 Encoder-only
DeBERTa-v3-base Encoder-only
ByT5-small Encoder-decoder

(b) Top 20 most liked models on Hugging face.

(b) Hugging Face 上最受欢迎的 20 个模型

模型 架构
BLOOM-176B 仅解码器
LLaMa3-8B 仅解码器
LLaMa2-7B 仅解码器
Mixtral-8x7B 仅解码器
Mistral-7B 仅解码器
Phi-2 仅解码器
Gemma-7B 仅解码器
ChatGLM-6B 仅解码器
StarCoder 仅解码器
Falcon-40B 仅解码器
Grok-1 仅解码器
ChatGLM2-6B 仅解码器
GPT2 仅解码器
Dolly-v2-12B 仅解码器
BERT-base 仅编码器
Zephyr-7B 仅解码器
OpenELM 仅解码器
Phi-1.5 仅解码器
Yi-34B 仅解码器
Flan-T5 编码器-解码器

Timeline and Model Architecture distribution. In 2020 and 2021, the use of LLMs in cyber security was limited, with only 3 research papers exploring their potential. In 2020, encoder-decoder LLMs, known for their strong performance on sequence-to-sequence tasks, were the sole architecture used in a single paper. However, in 2021, the focus shifted to decoder-only LLMs, which excel at generating longer-form outputs and handling diverse queries due to their autoregressive generation capabilities and large context windows. This shift can be attributed to the research emphasis on LLM performance in natural language processing tasks and innovations in LLM architectures during this period [25, 96].

时间线和模型架构分布。2020年和2021年,大语言模型在网络安全中的应用有限,仅有三篇研究论文探索了其潜力。2020年,编码器-解码器大语言模型,以其在序列到序列任务中的强大表现而闻名,是唯一在一篇论文中使用的架构。然而,到了2021年,研究重点转向了仅解码器大语言模型,由于其自回归生成能力和大上下文窗口,这类模型在处理多样化查询和生成长篇输出时表现出色。这一转变可归因于该时期大语言模型在自然语言处理任务中的性能研究以及大语言模型架构的创新 [25, 96]。

The year 2022 marked a significant turning point, with the number of papers employing LLMs for cyber security tasks surging to 11, surpassing the combined total from the previous two years. This year also saw increased diversity in the LLM architectures used. Encoder-only LLMs, valued for their representation learning and classification abilities, were utilized in 46% of the research (5 papers). Encoder-decoder LLMs, with their strong performance on well-defined tasks, were featured in 18% (2 papers), while decoder-only LLMs, leveraging their knowledge recall and few-shot learning capabilities, garnered 36% of the research interest (4 papers). This varied distribution highlights the active exploration of different architectures to address the diverse needs and challenges in cyber security.

2022 年是一个重要的转折点,使用大语言模型进行网络安全任务的论文数量激增至 11 篇,超过了过去两年的总和。这一年,使用的大语言模型架构也更加多样化。仅编码器架构的大语言模型以其表征学习和分类能力受到重视,在 46% 的研究中被使用(5 篇论文)。编码器-解码器架构的大语言模型因其在明确任务上的强劲表现,出现在 18% 的研究中(2 篇论文),而仅解码器架构的大语言模型凭借其知识召回和少样本学习能力,获得了 36% 的研究关注(4 篇论文)。这种多样化的分布凸显了人们积极探索不同架构以满足网络安全领域多样化需求和挑战的态势。

The years 2023 and 2024 witnessed a significant shift towards decoder-only LLMs, which emerged as the primary architecture for addressing cyber security challenges. This trend is closely tied to the powerful text comprehension, reasoning capabilities [153, 213], and open-ended generation demonstrated by chatbots like ChatGPT. These decoder-only models require minimal fine-tuning and can generate both syntactically correct and functionally relevant code snippets [103, 178]. In 2023, decoder-only LLMs accounted for 68.9% of the total research, while encoder-decoder LLMs and encoder-only LLMs contributed 10.7% (14 papers) and 22.1% (27 papers), respectively. Remarkably, all studies conducted in 2024 utilized the decoder-only architecture, indicating a strong focus on exploring and leveraging the unique advantages of these models in cyber security research and applications.

2023 年和 2024 年见证了仅解码器大语言模型的显著转变,其成为解决网络安全挑战的主要架构。这一趋势与 ChatGPT 等聊天机器人展现的强大文本理解、推理能力 [153, 213] 以及开放性生成密切相关。这些仅解码器模型需要极少的微调,即可生成语法正确且功能相关的代码片段 [103, 178]。2023 年,仅解码器大语言模型占总研究的 68.9%,而编码器-解码器大语言模型和仅编码器大语言模型分别占 10.7%(14 篇论文)和 22.1%(27 篇论文)。值得注意的是,2024 年进行的所有研究均采用了仅解码器架构,表明在网络安全研究和应用中探索和利用这些模型的独特优势已成为强烈关注点。

The dominance of decoder-only LLMs in cyber security research aligns with the broader trends in the LLM community. An analysis of the top 20 most liked and downloaded LLMs on Hugging Face [1], a popular open-source model community, reveals that while encoder-only models like BERT and its variants have the highest number of downloads, decoder-only models are gaining significant traction. Moreover, 16 out of the top 20 most liked LLMs are decoder-only models, indicating a strong preference and excitement for their potential to handle complex, open-ended tasks. The growing interest in decoder-only LLMs can be attributed to their strong generation, knowledge, and few-shot learning abilities, which make them well-suited for the diverse challenges in cyber security. However, the larger parameter size of these models compared to encoder-only models may limit their current adoption due to the scarcity of computational resources [59].

解码器专用大语言模型在网络安全研究中的主导地位与大语言模型社区的整体趋势一致。对 Hugging Face [1] 上最受欢迎和下载量最高的前 20 个大语言模型的分析表明,虽然 BERT 及其变体等编码器专用模型的下载量最高,但解码器专用模型正获得显著关注。此外,前 20 个最受欢迎的大语言模型中有 16 个是解码器专用模型,这表明人们对其处理复杂、开放式任务的潜力表现出强烈的偏好和期待。对解码器专用大语言模型的兴趣日益增长,可以归因于其强大的生成能力、知识掌握能力以及少样本学习能力,这些能力使其非常适合应对网络安全中的多样化挑战。然而,与编码器专用模型相比,这些模型的参数规模较大,可能因计算资源稀缺而限制其当前的采用 [59]。

Applying LLMs to cyber security. In our research, the use of LLMs can be categorized into agent-based processing and fine-tuning for specific tasks. Closed-source LLMs, represented by the GPT series, are the most popular in the studies we reviewed. Researchers access LLMs online by calling APIs provided by LLM publishers and design task-specific prompts to guide LLMs to solve real-world problems with their training data [53, 91, 130], such as vulnerability repair and penetration testing [38, 45, 218]. Another approach involves locally fine-tuning open-source LLMs using datasets customized for specific functionalities, where researchers are able to achieve significant performance improvements [188, 227].

将大语言模型应用于网络安全。在我们的研究中,大语言模型的使用可以分为基于智能体的处理和特定任务的微调。以GPT系列为代表的闭源大语言模型在我们的研究中最为流行。研究人员通过调用大语言模型发布者提供的API在线访问大语言模型,并设计特定任务的提示词,引导大语言模型利用其训练数据解决现实世界中的问题 [53, 91, 130],例如漏洞修复和渗透测试 [38, 45, 218]。另一种方法涉及本地微调开源大语言模型,通过使用为特定功能定制的数据集,研究人员能够实现显著的性能提升 [188, 227]。
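The prompt-based (agent-style) usage amounts to wrapping the task in a carefully designed instruction before sending it to a hosted model's API. The template below is a hypothetical sketch, not wording taken from the cited studies; a real system would tune the instruction heavily and then parse the API response for the patched code.

```python
def build_repair_prompt(code: str, vuln_description: str) -> str:
    """Assemble a task-specific prompt for a closed-source LLM accessed
    over an API (illustrative wording only)."""
    return (
        "You are a security engineer.\n"
        f"The following code contains a vulnerability: {vuln_description}\n"
        "Return only the repaired code, no explanation.\n\n"
        f"```\n{code}\n```"
    )

prompt = build_repair_prompt(
    "strcpy(buf, user_input);",
    "CWE-120 buffer copy without checking size of input",
)
```

The fine-tuning alternative instead bakes such task knowledge into the weights of a locally hosted open-source model, trading API convenience for control and data privacy.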

In summary, the transition of LLMs in cyber security, progressing from encoder-only architectures to decoder-only architectures, underscores the dynamic nature and flexibility of the field. This change has fundamentally altered the method for addressing cyber security tasks, signaling ongoing innovation within the discipline.

总之,大语言模型在网络安全领域从仅编码器架构向仅解码器架构的转变,突显了该领域的动态性和灵活性。这一变化从根本上改变了处理网络安全任务的方法,标志着该学科的持续创新。

RQ2 - Summary

RQ2 - 总结

5 RQ3: WHAT DOMAIN SPECIFICATION TECHNIQUES ARE USED TO ADAPT LLMS TO SECURITY TASKS?

5 RQ3: 哪些领域规范技术被用于使大语言模型适应安全任务?

LLMs have demonstrated their efficacy across various intelligent tasks [94]. Initially, these models undergo pretraining on extensive unlabeled corpora, followed by fine-tuning for downstream tasks. However, discrepancies in input formats between pre-training and downstream tasks pose challenges in leveraging the knowledge encoded within LLMs efficiently. The techniques employed with LLMs for security tasks can be broadly classified into three categories: prompt engineering, fine-tuning, and external augmentation. We will delve into a comprehensive analysis of these three categories and further explore their subtypes, as well as summarize the connections between LLM techniques and various security tasks.

大语言模型 (LLMs) 在各种智能任务中展示了其有效性 [94]。最初,这些模型在大量的无标注语料库上进行预训练,然后针对下游任务进行微调。然而,预训练和下游任务之间输入格式的差异给有效利用大语言模型中编码的知识带来了挑战。用于安全任务的大语言模型技术大致可以分为三类:提示工程 (prompt engineering)、微调 (fine-tuning) 和外部增强 (external augmentation)。我们将深入分析这三类技术,并进一步探讨它们的子类型,同时总结大语言模型技术与各种安全任务之间的联系。

5.1 Fine-tuning LLMs for Security Tasks

5.1 为大语言模型微调以应对安全任务

Fine-tuning techniques are extensively utilized across various downstream tasks in NLP [192], encompassing the adjustment of LLM parameters to suit specific tasks. This process entails training the model on task-relevant datasets, with the extent of fine-tuning contingent upon task complexity and dataset size [52, 167]. Fine-tuning can mitigate the constraints posed by model size, enabling smaller models fine-tuned for specific tasks to outperform larger models lacking fine-tuning [98, 249]. We classify fine-tuning techniques employed in papers leveraging LLMs for security tasks into two categories: full fine-tuning and partial fine-tuning. Notably, many papers employ fine-tuning without explicitly specifying the technique. In such cases, if an open-source LLM is utilized, we presume full fine-tuning; if a closed-source LLM like GPT series models is utilized, we assume partial fine-tuning.

微调技术广泛应用于 NLP 的各种下游任务 [192],包括调整大语言模型参数以适应特定任务。这一过程需要在任务相关数据集上训练模型,微调的程度取决于任务复杂性和数据集大小 [52, 167]。微调可以缓解模型大小带来的限制,使针对特定任务微调的小模型优于未微调的大模型 [98, 249]。我们将论文中利用大语言模型进行安全任务的微调技术分为两类:全微调和部分微调。值得注意的是,许多论文使用微调时并未明确说明具体技术。在这种情况下,如果使用了开源大语言模型,我们假设为全微调;如果使用了 GPT 系列模型等闭源大语言模型,我们假设为部分微调。
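The full vs. partial distinction can be made concrete with a toy parameter table; in a framework such as PyTorch, freezing corresponds to setting `requires_grad = False` on the excluded parameters. The layer names below are illustrative, not taken from any specific model.

```python
from typing import Dict, Iterable, Tuple

def mark_trainable(param_names: Iterable[str],
                   trainable_prefixes: Tuple[str, ...]) -> Dict[str, bool]:
    """Partial fine-tuning: only parameters under the listed modules are
    updated; full fine-tuning would mark every parameter trainable."""
    return {name: name.startswith(trainable_prefixes) for name in param_names}

# Toy stand-ins for an LLM's named parameters.
names = [f"encoder.layer.{i}.weight" for i in range(12)] + [
    "classifier.weight", "classifier.bias"]

partial = mark_trainable(names, ("classifier",))          # head only
full = mark_trainable(names, ("encoder", "classifier"))   # every parameter
```

Partial fine-tuning updates far fewer parameters, which is why it is the practical option when compute is scarce or when only API-level or adapter-level access to the model is available.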

A total of 32 papers in this study applied fine-tuning techniques to address security tasks. Am