Transfer Methods for Large Language Models in Low-Resource Text Generation Tasks
Abstract
This study investigates the transferability of large language models in low-resource text generation tasks. To address the decline in generation performance of pre-trained language models under data-scarce conditions, it proposes a transfer mechanism that combines instruction tuning with parameter-efficient fine-tuning, aiming to improve generation stability and semantic consistency in low-resource settings. Several text generation tasks with limited samples are constructed from the NATURAL INSTRUCTIONS v2 dataset. Three mainstream fine-tuning strategies, namely full fine-tuning, Low-Rank Adaptation (LoRA), and Adapter tuning, are systematically compared and evaluated with BLEU, ROUGE-L, and METEOR. Performance under few-shot, zero-shot, and instruction-tuning settings is also analyzed, and multilingual models are compared with monolingual models to assess the cross-lingual benefits of multilingual pretraining. The experimental results show that instruction tuning achieves higher generation quality and better generalization in low-resource environments. LoRA, as a parameter-efficient method, approaches the performance of full fine-tuning while updating far fewer parameters. In contrast, monolingual models underperform on cross-lingual tasks, whereas multilingual models adapt more readily owing to their broader linguistic coverage. Overall, the proposed method and experimental framework provide an effective technical path, with systematic validation, for transferring the capabilities of large language models to low-resource tasks.
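
The following is a minimal sketch of how a LoRA-based parameter-efficient fine-tuning setup of the kind described above could look, assuming the Hugging Face transformers and peft libraries. The base model name, rank, and target modules are illustrative assumptions, not the configuration reported in this study.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model_name = "google/mt5-small"  # assumed multilingual backbone, for illustration only
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(base_model_name)

# LoRA freezes the pre-trained weights and learns small low-rank update
# matrices for selected projection layers, so only a fraction of the
# parameters is updated during fine-tuning.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # rank of the low-rank update (assumed value)
    lora_alpha=32,              # scaling factor for the update
    lora_dropout=0.1,
    target_modules=["q", "v"],  # query/value projections in mT5 attention blocks
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the share of trainable parameters

# The wrapped model can then be trained with a standard Seq2SeqTrainer on an
# instruction-formatted, low-resource training split.
```

Likewise, the reported metrics can be computed with the Hugging Face evaluate library; the prediction and reference strings below are placeholder data, not results from the paper.

```python
import evaluate

# Placeholder outputs and references; in the study these would be the model's
# generations and the gold references for each low-resource task.
predictions = ["a short summary generated by the model"]
references = ["a short summary written by a human annotator"]

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")

results = {
    "BLEU": bleu.compute(predictions=predictions,
                         references=[[r] for r in references])["bleu"],
    "ROUGE-L": rouge.compute(predictions=predictions,
                             references=references)["rougeL"],
    "METEOR": meteor.compute(predictions=predictions,
                             references=references)["meteor"],
}
print(results)
```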
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
Mind forge Academia also operates under the Creative Commons License CC BY 4.0. This allows you to copy and redistribute the material in any medium or format for any purpose, even commercially, provided that you give appropriate citation information.