[1] A. Kai, L. Zhu, and J. Gong, “Efficient Compression of Large Language Models with Distillation and Fine-Tuning,” JCSSA, vol. 3, no. 4, pp. 30–38, Oct. 2023.