Please use this identifier to cite or link to this item:
http://hdl.handle.net/10174/39856
| Title: | Integrating Large Language Models into Automated Software Testing |
| Authors: | Iznaga, Yanet; Rato, Luís; Salgueiro, Pedro; León, Javier |
| Editors: | Bellavista, Paolo |
| Keywords: | automated software testing; large language models; test case generation; low-rank adaptation; Codestral Mamba model |
| Issue Date: | Oct-2025 |
| Publisher: | MDPI |
| Citation: | Iznaga, Y. S., Rato, L., Salgueiro, P., & León, J. L. (2025). Integrating Large Language Models into Automated Software Testing. Future Internet, 17(10), 476. https://doi.org/10.3390/fi17100476 |
| Abstract: | This work investigates the use of large language models (LLMs) to enhance automation in software testing, with a particular focus on generating high-quality, context-aware test scripts from natural language descriptions, while addressing both text-to-code and text+code-to-code generation tasks. The Codestral Mamba model was fine-tuned via a proposed integration of low-rank adaptation (LoRA) matrices into its architecture, enabling efficient domain-specific adaptation and positioning Mamba as a viable alternative to Transformer-based models. The model was trained and evaluated on two benchmark datasets: CONCODE/CodeXGLUE and the proprietary TestCase2Code dataset. Through structured prompt engineering, the system was optimized to generate syntactically valid and semantically meaningful code for test cases. Experimental results demonstrate that the proposed methodology successfully enables the automatic generation of code-based test cases using LLMs. In addition, this work reports secondary benefits, including improvements in test coverage, automation efficiency, and defect detection compared with traditional manual approaches. The integration of LLMs into the software testing pipeline also showed potential for reducing time and cost while enhancing developer productivity and software quality. The findings suggest that LLM-driven approaches can be effectively aligned with continuous integration and deployment workflows. This work contributes to the growing body of research on AI-assisted software engineering and offers practical insights into the capabilities and limitations of current LLM technologies for testing automation. |
| URI: | https://www.mdpi.com/1999-5903/17/10/476 ; http://hdl.handle.net/10174/39856 |
| Type: | article |
| Appears in Collections: | VISTALab - Publicações - Artigos em Revistas Internacionais Com Arbitragem Científica |
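The low-rank adaptation (LoRA) approach mentioned in the abstract can be sketched minimally: a frozen pretrained weight matrix W is augmented with a trainable low-rank product B·A, so only the small A and B matrices are updated during fine-tuning. The dimensions, rank, and scaling factor below are illustrative assumptions, not the configuration actually used for Codestral Mamba in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer size, LoRA rank, and scaling factor (assumed values).
d_out, d_in, r = 8, 8, 2
alpha = 4.0

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero-init)

def lora_forward(x):
    """Forward pass: base output plus the scaled low-rank correction."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialised to zero the LoRA branch contributes nothing,
# so the adapted layer initially matches the frozen base layer exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Zero-initialising B is the standard LoRA convention: training starts from the pretrained model's behaviour, and the adaptation grows from there as A and B are optimised.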
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.