(1)
Kai, A.; Zhu, L.; Gong, J. Efficient Compression of Large Language Models With Distillation and Fine-Tuning. JCSSA 2023, 3, 30-38.