Transfer Methods for Large Language Models in Low-Resource Text Generation Tasks

Yingnan Deng

Abstract

This study investigates the transferability of large language models to low-resource text generation tasks. To address the decline in generation performance of pre-trained language models under data-scarce conditions, the study proposes a transfer mechanism that combines instruction tuning with parameter-efficient fine-tuning, aiming to improve generation stability and semantic consistency in low-resource settings. Several text generation tasks with limited samples are constructed from the NATURAL INSTRUCTIONS v2 dataset, and three mainstream fine-tuning strategies are systematically compared: full fine-tuning, Low-Rank Adaptation (LoRA), and Adapter tuning, all evaluated with the BLEU, ROUGE-L, and METEOR metrics. Performance under few-shot, zero-shot, and instruction-tuning settings is also analyzed, together with a comparison of multilingual and monolingual models to assess the cross-lingual benefits of multilingual pretraining. Experimental results show that instruction tuning achieves higher generation quality and stronger generalization in low-resource environments. LoRA, as a parameter-efficient method, approaches the performance of full fine-tuning while updating far fewer parameters. Monolingual models, in contrast, underperform on cross-lingual tasks, whereas multilingual models show stronger adaptability owing to their broader linguistic coverage. Overall, the proposed method and experimental framework offer an effective technical path, with systematic validation, for transferring the capabilities of large language models to low-resource tasks.
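
The contrast drawn above between full fine-tuning and LoRA can be illustrated with a brief sketch. The snippet below is a minimal, hypothetical setup using the Hugging Face transformers, peft, and evaluate libraries; the backbone model (google/mt5-base), the LoRA rank, and the target modules are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch: LoRA adapters on a multilingual seq2seq backbone, plus the
# three generation metrics named in the abstract (BLEU, ROUGE-L, METEOR).
# All model names and hyperparameters below are illustrative placeholders.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model
import evaluate

model_name = "google/mt5-base"  # assumed multilingual backbone, not the paper's exact choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Low-rank adapters on the attention projections: only these small matrices are
# trained, so the number of updated parameters stays far below full fine-tuning.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q", "v"],  # query/value projections in (m)T5 blocks
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model

# Generation quality metrics used for the comparison.
bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")    # reports rougeL among its scores
meteor = evaluate.load("meteor")

predictions = ["a generated summary"]  # placeholder model outputs
references = ["a reference summary"]   # placeholder gold references
print(bleu.compute(predictions=predictions, references=[[r] for r in references])["bleu"])
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
print(meteor.compute(predictions=predictions, references=references)["meteor"])
```

In a sketch like this, swapping the LoRA-wrapped model for the unwrapped backbone recovers the full fine-tuning baseline, so the same training and evaluation loop can serve both conditions of the comparison.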

Article Details

How to Cite
Deng, Y. (2024). Transfer Methods for Large Language Models in Low-Resource Text Generation Tasks. Journal of Computer Science and Software Applications, 4(6). https://doi.org/10.5281/zenodo.15392270
Section
Articles