Investigating Hierarchical Term Relationships in Large Language Models


Guohui Cai
Jiangchuan Gong
Junliang Du
Hao Liu
Anda Kai

Abstract

Hypernym-hyponym relationship detection plays a crucial role in knowledge organization, semantic search, and natural language understanding, with significant implications for AI-driven information management. This study investigates the effectiveness of large language models (LLMs), including GPT-4o, LLaMA-2, and Falcon-40B, in automatically identifying hierarchical term relationships. Experimental results indicate that GPT-4o achieves the highest accuracy, particularly when fine-tuned, while longer terms and complex domains remain challenging for all models. The findings highlight key limitations, such as weak multilingual generalization and difficulty in processing extended terms, underscoring the need for improved context-aware embeddings and hierarchical reasoning techniques. By integrating AI-driven semantic understanding with external knowledge sources such as ontologies and knowledge graphs, the study presents a scalable framework for hypernym detection that advances both theoretical research and practical applications in intelligent search, automated question answering, and domain-specific knowledge extraction. Future work should focus on improving model interpretability, cross-domain adaptability, and efficiency, leveraging advances in multimodal AI and self-supervised learning to refine hierarchical knowledge representation and strengthen AI-driven semantic computing.
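
To make the task concrete, the sketch below shows one common way to pose hypernym-hyponym detection to an LLM as a binary classification prompt. It is a minimal illustration only: the prompt wording and the ask_llm callable are assumptions for exposition, not the exact prompts, models, or fine-tuning setup evaluated in the paper.

    # Minimal sketch of prompt-based hypernym detection with an LLM.
    # The prompt template and the `ask_llm` backend are illustrative
    # assumptions, not the authors' exact experimental setup.
    from typing import Callable

    PROMPT = (
        "Decide whether the first term is a hypernym (broader category) "
        "of the second term. Answer only 'yes' or 'no'.\n"
        "Term 1: {candidate}\n"
        "Term 2: {term}\n"
        "Answer:"
    )

    def is_hypernym(candidate: str, term: str,
                    ask_llm: Callable[[str], str]) -> bool:
        """Return True if the model judges `candidate` to be a hypernym of `term`."""
        reply = ask_llm(PROMPT.format(candidate=candidate, term=term))
        return reply.strip().lower().startswith("yes")

    # Usage with any chat-completion backend wrapped as `ask_llm`:
    #   is_hypernym("animal", "dog", ask_llm)  -> expected True
    #   is_hypernym("dog", "animal", ask_llm)  -> expected False

Keeping the backend behind a simple callable makes it straightforward to compare models such as GPT-4o, LLaMA-2, and Falcon-40B under the same prompt, which mirrors the comparative setup described in the abstract.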

Article Details

How to Cite
Cai, G., Gong, J., Du, J., Liu, H., & Kai, A. (2025). Investigating Hierarchical Term Relationships in Large Language Models. Journal of Computer Science and Software Applications, 5(4). https://doi.org/10.5281/zenodo.15165092