Conference Paper, Year: 2025

Leveraging Prompt-based Large Language Models for Code Smell Detection: A Comparative Study on the MLCQ Dataset

Abstract

Code smells are indicators of potential issues in software code that can make maintenance more challenging. Traditional approaches to detecting code smells have primarily relied on handcrafted rules and heuristics, while recent advances have explored Machine Learning (ML) and Deep Learning (DL) techniques. In this paper, we investigate the application of prompt-based Large Language Models (LLMs) for code smell detection, utilizing state-of-the-art models, namely Generative Pre-trained Transformer 4 (GPT-4) and Large Language Model Meta AI (LLaMA). We conduct an extensive analysis of the Machine Learning Code Quality (MLCQ) dataset, focusing on how these LLMs perform when prompted to identify and classify code smells. By systematically evaluating each model's performance, we provide insights into their precision, recall, and ability to generalize across different types of code smells. Our results demonstrate that LLMs are a promising tool for automating the detection of certain types of code smells, while they underperform on others.
Keywords: Code Smells, GPT-4, Large Language Models, LLaMA, LLMs, Machine Learning
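As context for the approach summarized above, the sketch below shows how a single code snippet might be classified with a prompt-based LLM. It is a minimal illustration assuming the OpenAI Python client; the model name, prompt wording, and label handling are illustrative assumptions, not the authors' exact protocol, which is described in the paper itself.

```python
# Minimal sketch of prompt-based code smell classification, assuming the
# OpenAI Python client (pip install openai). Prompt wording and model name
# are illustrative; the paper's exact prompts are described in the PDF.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# MLCQ annotates four smell types; "none" covers clean snippets.
LABELS = ["blob", "data class", "feature envy", "long method", "none"]

def classify_snippet(code: str) -> str:
    """Ask the model to assign one smell label to a Java snippet."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic answers make evaluation repeatable
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a code reviewer. Answer with exactly one label "
                    "from: " + ", ".join(LABELS) + "."
                ),
            },
            {
                "role": "user",
                "content": "Which code smell does this Java snippet "
                           "exhibit?\n\n" + code,
            },
        ],
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    # Toy snippet; a real evaluation would iterate over the MLCQ dataset
    # and compare predictions against its human-annotated labels.
    print(classify_snippet("public int getX() { return this.x; }"))
```

Per-smell precision and recall, as reported in the paper, could then be computed by comparing such predictions against the MLCQ annotations.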
Main file: EIDWT 2025 paper.pdf (438.02 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04881949, version 1 (13-01-2025)

Identifiers

  • HAL Id: hal-04881949, version 1

Cite

Djamel Mesbah, Nour El Madhoun, Khaldoun Al Agha, Hani Chalouati. Leveraging Prompt-based Large Language Models for Code Smell Detection: A Comparative Study on the MLCQ Dataset. The 13th International Conference on Emerging Internet, Data & Web Technologies (EIDWT-2025), Feb 2025, Matsue, Japan. ⟨hal-04881949⟩
