Advancing Emotional Analysis with Large Language Models


Haowei Yang
Yun Zi
Honglin Qin
Hongye Zheng
Yuxiang Hu

Abstract

This research aims to improve the efficiency of intelligence acquisition through sentiment analysis of public opinion, a crucial element of open-source intelligence, using a few-shot learning framework. To address the poor performance of sentiment detection models in low-resource settings where labeled data is scarce, we propose a method that combines the knowledge of large language models with contrastive prompts. The method first augments the training samples with general knowledge drawn from large language models. It then uses unlabeled data for contrastive embedding training to strengthen the semantic representations of the text encoder. Finally, a prompt-based learning mechanism performs iterative self-prediction training on the unlabeled data, further adapting the model to the target task. Experiments on public datasets show that the proposed model outperforms baseline methods given the same amount of labeled data.
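To make the contrastive embedding step concrete, the sketch below computes an InfoNCE-style contrastive loss over a batch of sentence embeddings, treating same-index rows as positive pairs and all other in-batch rows as negatives. This is a minimal NumPy sketch under the assumption that a standard in-batch InfoNCE objective is used; the function name, temperature value, and batch construction are illustrative and not taken from the paper.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.05):
    """InfoNCE loss over a batch: each anchor's positive is the
    same-index row of `positives`; all other rows serve as
    in-batch negatives."""
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sims = a @ p.T / temperature  # (batch, batch) similarity matrix
    # Cross-entropy with the diagonal (matching pair) as the target class
    sims = sims - sims.max(axis=1, keepdims=True)  # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

In practice, the anchor and positive embeddings would come from two augmented views of the same unlabeled sentence encoded by the text encoder; minimizing this loss pulls the views together while pushing apart embeddings of different sentences in the batch.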

Article Details

How to Cite
Yang, H., Zi, Y., Qin, H., Zheng, H., & Hu, Y. (2024). Advancing Emotional Analysis with Large Language Models. Journal of Computer Science and Software Applications, 4(3), 8–15. https://doi.org/10.5281/zenodo.12204513
