Leveraging Prompt-based Large Language Models for Code Smell Detection: A Comparative Study on the MLCQ Dataset
Abstract
Code smells are indicators of potential issues in software code that can make maintenance more challenging. Traditional approaches to detecting code smells have primarily relied on handcrafted rules and heuristics, while recent advances have explored Machine Learning (ML) and Deep Learning (DL) techniques. In this paper, we investigate the application of prompt-based Large Language Models (LLMs) for code smell detection, utilizing state-of-the-art models, namely Generative Pretrained Transformer-4 (GPT-4) and Large Language Model Meta AI (LLaMA). We conduct an extensive analysis of the Machine Learning Code Quality (MLCQ) dataset, focusing on how these LLMs perform when prompted to identify and classify code smells. By systematically evaluating each model’s performance, we provide insights into their precision, recall, and ability to generalize across different types of code smells. Our results demonstrate the potential of LLMs as a promising tool for automating the detection of certain types of code smells, while showing that they underperform for others.
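To make the prompt-based setup concrete, the sketch below shows one way a single code snippet could be submitted to a chat-style LLM for smell classification. It is a minimal illustration, not the authors' actual pipeline or prompt: the prompt wording, the `classify_snippet` helper, the temperature setting, and the example snippet are all assumptions introduced here for clarity.

```python
# A minimal sketch (not the paper's pipeline) of prompt-based code smell
# classification using the OpenAI chat completions API. The prompt text,
# helper names, and example input are illustrative assumptions.
from openai import OpenAI

# The four smell types annotated in the MLCQ dataset.
SMELLS = ["blob", "data class", "feature envy", "long method"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_snippet(code: str) -> str:
    """Ask the model whether the snippet exhibits one of the MLCQ smells."""
    prompt = (
        "You are a code reviewer. Decide whether the following Java code "
        f"exhibits one of these code smells: {', '.join(SMELLS)}. "
        "Answer with the smell name only, or 'none'.\n\n"
        f"```java\n{code}\n```"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output simplifies evaluation
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip().lower()


if __name__ == "__main__":
    snippet = "public class Util { /* hundreds of unrelated methods ... */ }"
    print(classify_snippet(snippet))
```

Running such a prompt over each labeled MLCQ sample and comparing the returned label against the ground-truth annotation is one straightforward way to compute the precision and recall figures discussed in the abstract.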
\keywords{Code Smells \and GPT-4 \and Large Language Models \and LLaMA \and LLMs \and Machine Learning}
Domains
Computer Science [cs]