Robustness Verification Method for Artificial Intelligence Systems Based on Source Code Processing
Journal article
Yang, Yan-Jing, Mao, Run-Feng, Tan, Rui, Shen, Haifeng and Rong, Guo-Ping. (2023). Robustness Verification Method for Artificial Intelligence Systems Based on Source Code Processing. Ruanjian Xuebao. 2023(34), pp. 4018-4036. https://doi.org/10.13328/j.cnki.jos.006879
Authors | Yang, Yan-Jing, Mao, Run-Feng, Tan, Rui, Shen, Haifeng and Rong, Guo-Ping |
---|---|
Abstract | The development of artificial intelligence (AI) technology provides strong support for AI systems based on source code processing. Compared with natural language, source code occupies a special semantic space, and machine learning tasks that process it typically rely on abstract syntax trees, data dependency graphs, and control flow graphs to capture structural information and extract features. Through in-depth analysis of source code structure and flexible application of classifiers, existing studies achieve excellent results in experimental settings. However, in real application scenarios where code structures are more complex, most AI systems for source code processing perform poorly and are difficult to deploy in industry, which prompts practitioners to question the robustness of such systems. Because AI-based systems are generally data-driven black boxes, their robustness is hard to measure directly. With the emergence of adversarial attack techniques, scholars in natural language processing have designed task-specific adversarial attacks to verify model robustness and have conducted large-scale empirical studies. To address the instability of AI systems based on source code processing in complex code scenarios, this study proposes robustness verification by the Metropolis-Hastings attack method (RVMHM). First, a code preprocessing tool based on abstract syntax trees extracts the model's variable pool; the MHM source code attack algorithm then perturbs the model's predictions by replacing these variables (an illustrative sketch of the attack loop follows this record). By interfering with the interaction between data and model, the robustness of an AI system is measured through changes in a robustness verification index before and after the attack. Taking vulnerability prediction as a typical binary classification scenario of source code processing, this study verifies the robustness of 12 groups of AI vulnerability prediction models on three open-source project datasets to illustrate the effectiveness of RVMHM for verifying the robustness of AI systems based on source code processing. |
Keywords | Artificial intelligence; code structure analysis; code adversarial attack; system quality evaluation; source code; Metropolis-Hastings attack method |
Year | 01 Jan 2023 |
Journal | Ruanjian Xuebao |
Journal citation | 2023 (34), pp. 4018-4036 |
Publisher | Chinese Academy of Sciences |
ISSN | 1000-9825 |
Digital Object Identifier (DOI) | https://doi.org/10.13328/j.cnki.jos.006879 |
Web address (URL) | https://www.jos.org.cn/josen/article/abstract/6879?st=article_issue |
Open access | Published as non-open access |
Research or scholarly | Research |
Page range | 4018-4036 |
Publisher's version | License: All rights reserved; file access level: Open |
Output status | Published |
Publication dates | 13 Jan 2023 |
Publication process dates | Accepted: 14 Dec 2022; Deposited: 18 Oct 2024 |
Additional information | © Copyright by Institute of Software, Chinese Academy of Sciences. |
Place of publication | China |
Repository record | https://acuresearchbank.acu.edu.au/item/91048/robustness-verification-method-for-artificial-intelligence-systems-based-on-source-code-processing |
File | Shen_2023_Robustness_Verification_Method_for_Artificial_Intelligence_Systems.pdf |
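
The abstract describes RVMHM's core loop: extract a variable pool from the abstract syntax tree, then run a Metropolis-Hastings-style search that renames identifiers until the model's prediction degrades. The sketch below illustrates that loop under stated assumptions; it is not the paper's implementation. The names `mhm_attack`, `variable_pool`, `candidate_names`, and the black-box callable `predict_true_label_prob` are all hypothetical, and candidate names are assumed to be valid identifiers that do not clash with language keywords.

```python
import math
import random

def mhm_attack(predict_true_label_prob, tokens, variable_pool,
               candidate_names, n_iters=200, temperature=1.0):
    """Illustrative Metropolis-Hastings identifier-renaming attack.

    predict_true_label_prob(tokens) -> model's probability for the
    ground-truth label (black-box access only).
    Assumes a non-empty variable_pool extracted from the AST.
    """
    current = list(tokens)
    pool = list(variable_pool)
    current_prob = predict_true_label_prob(current)

    for _ in range(n_iters):
        old_name = random.choice(pool)             # variable to rename
        new_name = random.choice(candidate_names)  # proposed substitute
        if new_name in pool:                       # avoid identifier collisions
            continue
        proposal = [new_name if t == old_name else t for t in current]
        proposal_prob = predict_true_label_prob(proposal)

        # Metropolis-style acceptance with a symmetric proposal: always accept
        # renamings that lower the true-label probability; accept worse ones
        # with small probability so the search can escape local optima.
        accept = min(1.0, math.exp((current_prob - proposal_prob) / temperature))
        if random.random() < accept:
            current, current_prob = proposal, proposal_prob
            pool[pool.index(old_name)] = new_name
        if current_prob < 0.5:  # binary task: the prediction has flipped
            break
    return current, current_prob
```

Running such an attack over a labeled test set and comparing a metric such as accuracy or F1 before and after the attack yields the kind of robustness verification index the abstract refers to: a large drop signals a brittle model.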