1.
Kai A, Zhu L, Gong J. Efficient Compression of Large Language Models with Distillation and Fine-Tuning. JCSSA [Internet]. 2023 Oct. 1 [cited 2025 Apr. 16];3(4):30-8. Available from: https://mfacademia.org/index.php/jcssa/article/view/215