LoRA-Based Lightweight Adaptation of Pretrained Models for Low-Resource Text Summarization

Thayer Ellison

Abstract

This study investigates LoRA (Low-Rank Adaptation) as a parameter-efficient fine-tuning method for low-resource text summarization, experimentally evaluating its performance across different training-data scales and settings of the low-rank dimension r. In scenarios with limited computing resources, LoRA adapts a pretrained model by injecting trainable low-rank matrices alongside its frozen weights, substantially reducing memory usage and compute requirements. Experiments show that with only 1% to 10% of the training data, LoRA fine-tuning achieves ROUGE scores only slightly below those of full-parameter fine-tuning, confirming its effectiveness in low-resource data environments. Analysis of the rank r shows that small values of r preserve computational efficiency, while moderately increasing r improves summary quality; once r exceeds a certain threshold, however, the performance gain plateaus, indicating that LoRA requires balancing model quality against computational overhead. Overall, LoRA provides an efficient and practical solution for fine-tuning large pretrained models in low-resource environments and merits further study in future applications.
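The adaptation mechanism the abstract describes can be illustrated with a minimal sketch. The dimensions, scaling factor, and initialization below are illustrative assumptions (following the standard LoRA formulation), not the paper's exact configuration: instead of updating a full weight matrix W, LoRA learns two small matrices B and A of rank r and uses W + (alpha / r) * B @ A at inference time.

```python
import numpy as np

# Hypothetical layer sizes and LoRA hyperparameters (not from the paper)
d_out, d_in, r, alpha = 512, 512, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                 # trainable, zero init: W_eff == W at start

def lora_forward(x):
    # Forward pass with the low-rank update applied to the frozen weight
    w_eff = W + (alpha / r) * (B @ A)
    return x @ w_eff.T

# Trainable-parameter count: full fine-tuning vs. LoRA at rank r
full_params = d_out * d_in          # 262144
lora_params = r * (d_out + d_in)    # 8192, ~3% of the full count
```

This parameter count is what drives the memory savings the abstract reports, and the role of r is visible directly: lora_params grows linearly in r, so raising r buys expressive capacity at a proportional cost in trainable parameters.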

Article Details

How to Cite
Ellison, T. (2025). LoRA-Based Lightweight Adaptation of Pretrained Models for Low-Resource Text Summarization. Journal of Computer Science and Software Applications, 5(6). Retrieved from https://mfacademia.org/index.php/jcssa/article/view/232
Section: Articles