Please use this identifier to cite or link to this item:
http://hdl.handle.net/10174/41401
| Title: | Parameter Efficient Fine-Tuning of LLMs: Application to Machine Translation from English to Portuguese |
| Authors: | Santos, Daniel; Nogueira, Vitor; Quaresma, Paulo |
| Keywords: | Translation; Large language models; Memory management; Hardware; Machine translation; Optimization; Fine-tuning; LoRA; QLoRA |
| Issue Date: | 2025 |
| Citation: | D. Santos, V. B. Nogueira and P. Quaresma, "Parameter Efficient Fine-Tunning of LLMs: Application to Machine Translation from English to Portuguese," 2025 4th International Conference on Computer Technologies (ICCTech), Kuala Lumpur, Malaysia, 2025, pp. 24-28, doi: 10.1109/ICCTech66294.2025.00014. |
| Abstract: | Fine-tuning Large Language Models (LLMs) for specific tasks, such as machine translation, is a computationally expensive process that often requires substantial hardware resources. Parameter-Efficient Fine-Tuning (PEFT) methods, such as Low-Rank Adaptation (LoRA) and Quantized Low-Rank Adaptation (QLoRA), offer a resource-efficient alternative by significantly reducing the number of trainable parameters and memory requirements. In this work, we compare the performance and memory efficiency of LoRA and QLoRA on English-Portuguese translation tasks, utilizing two cutting-edge LLMs, Meta LLaMA 3.1 8B and Mistral 7B. Our experiments demonstrate that both LoRA and QLoRA achieve substantial memory savings. Moreover, this work underscores the practical advantages of LoRA and QLoRA in resource-constrained environments, providing a foundation for further optimization and experimentation in machine translation using large language models. |
| URI: | https://ieeexplore.ieee.org/document/11078353 ; http://hdl.handle.net/10174/41401 |
| ISBN: | 979-8-3315-1453-2 |
| Type: | article |
| Appears in Collections: | VISTALab - Articles in Conference Proceedings (Artigos em Livros de Actas/Proceedings) |
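For readers who want to reproduce the kind of setup the abstract describes, the sketch below shows a minimal LoRA/QLoRA configuration using the Hugging Face transformers, peft, and bitsandbytes libraries. This is not the authors' code: the model identifier, adapter rank, target modules, and other hyperparameters are illustrative assumptions, not values reported in the paper. LoRA freezes the pretrained weight matrix W and trains a low-rank update, W + (alpha/r)·BA with B ∈ R^{d×r} and A ∈ R^{r×k}, so only the small A and B matrices are trainable; QLoRA additionally stores the frozen base weights in 4-bit NF4 precision, which is where the extra memory savings come from.

```python
# Minimal QLoRA-style setup sketch (not the paper's code); the model ID and
# all hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-3.1-8B"  # assumed Hub ID; Mistral 7B is set up the same way

# 4-bit NF4 quantization of the frozen base weights: the "Q" in QLoRA.
# Omit this config (loading in bf16 instead) to get plain LoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)  # recasts norms, prepares k-bit model for training

# LoRA adapters: small rank-r matrices injected into the attention projections;
# only these (well under 1% of the 7-8B parameters) are trained.
lora_config = LoraConfig(
    r=16,                      # adapter rank (assumption)
    lora_alpha=32,             # scaling factor alpha (assumption)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts
```

Training then proceeds with an ordinary causal-LM trainer over English-Portuguese parallel text; swapping model_id for an assumed Mistral identifier such as "mistralai/Mistral-7B-v0.1" would give the Mistral 7B variant of the same comparison.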
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.