Distilling quality enhancing comments from code reviews to underpin reviewer recommendation
Journal article
Rong, Guoping, Yu, Yongda, Zhang, Yifan, Zhang, He, Shen, Haifeng, Shao, Dong, Kuang, Hongyu, Wang, Min, Wei, Zhao, Xu, Yong and Wang, Juhong. (2024). Distilling quality enhancing comments from code reviews to underpin reviewer recommendation. IEEE Transactions on Software Engineering. 50(7), pp. 1658-1674. https://doi.org/10.1109/TSE.2024.3356819
Authors | Rong, Guoping, Yu, Yongda, Zhang, Yifan, Zhang, He, Shen, Haifeng, Shao, Dong, Kuang, Hongyu, Wang, Min, Wei, Zhao, Xu, Yong and Wang, Juhong |
Abstract | Code review is an important practice in software development. One of its main objectives is the assurance of code quality. For this purpose, the efficacy of code review is subject to the credibility of reviewers, i.e., reviewers who have demonstrated strong evidence of previously making quality-enhancing comments are more credible than those who have not. Code reviewer recommendation (CRR) is designed to assist in recommending suitable reviewers for a specific objective and, in this context, the assurance of code quality. Its performance is susceptible to the relevance of its training dataset to this objective. That dataset is composed of all reviewers' historical review comments and often contains a plethora of comments that are irrelevant to the enhancement of code quality. Furthermore, recommendation accuracy has been adopted as the sole metric to evaluate a recommender's performance, which is inadequate because it does not take reviewers' relevant credibility into consideration. These two issues form the ground truth problem in CRR, as they both originate from the relevance of the dataset used to train and evaluate CRR algorithms. To tackle this problem, we first propose the concept of Quality-Enhancing Review Comments (QERC), which includes three types of comments: change-triggering inline comments, informative general comments, and approve-to-merge comments. We then devise a set of algorithms and procedures to obtain a distilled dataset by applying QERC to the original dataset. We finally introduce a new metric, reviewer's credibility for quality enhancement (RCQE), as a complement to recommendation accuracy for evaluating the performance of recommenders. To validate the proposed QERC-based approach to CRR, we conduct empirical studies using real data from seven projects containing over 82K pull requests and 346K review comments. Results show that: (a) QERC can effectively address the ground truth problem by distilling quality-enhancing comments from the dataset containing original code reviews, (b) QERC can assist recommenders in finding highly credible reviewers at a slight cost of recommendation accuracy, and (c) even "wrong" recommendations using the distilled dataset are likely to be more credible than those using the original dataset. |
Keywords | code review; review comment; reviewer recommendation |
Year | 2024 |
Journal | IEEE Transactions on Software Engineering |
Journal citation | 50 (7), pp. 1658-1674 |
Publisher | Institute of Electrical and Electronics Engineers |
ISSN | 0098-5589 |
Digital Object Identifier (DOI) | https://doi.org/10.1109/TSE.2024.3356819 |
Scopus EID | 2-s2.0-85183976635 |
Funder | National Natural Science Foundation of China (NSFC); Jiangsu Provincial Key Research and Development Program; Nanjing University |
Publisher's version | License: All rights reserved; File access level: Controlled |
Output status | Published |
Publication dates | Online: 22 Jan 2024 |
Publication process dates | Deposited: 12 Feb 2025 |
Grant ID | 62072227; 62202219; 62302210; BE2021002-2; ZZKT2022A25; KFKT2022A09; KFKT2023A09; KFKT2023A10 |
https://acuresearchbank.acu.edu.au/item/9151v/distilling-quality-enhancing-comments-from-code-reviews-to-underpin-reviewer-recommendation