Investigating Hierarchical Term Relationships in Large Language Models
Abstract
Hypernym-hyponym relationship detection plays a crucial role in knowledge organization, semantic search, and natural language understanding, with significant implications for AI-driven information management. This study investigates how effectively large language models (LLMs), including GPT-4o, LLaMA-2, and Falcon-40B, identify hierarchical term relationships automatically. Experimental results indicate that GPT-4o achieves the highest accuracy, particularly when fine-tuned, while longer terms and complex domains remain challenging for all models. The findings highlight key limitations, such as weak multilingual generalization and difficulty processing extended terms, underscoring the need for improved context-aware embeddings and hierarchical reasoning techniques. By integrating AI-driven semantic understanding with external knowledge sources such as ontologies and knowledge graphs, the study presents a scalable framework for hypernym detection, advancing both theoretical research and practical applications in intelligent search, automated question answering, and domain-specific knowledge extraction. Future work should focus on enhancing model interpretability, cross-domain adaptability, and efficiency, leveraging advances in multimodal AI and self-supervised learning to refine hierarchical knowledge representation and improve AI-driven semantic computing.
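The abstract describes framing hypernym detection as a task posed to an LLM. A minimal sketch of one common zero-shot setup is shown below; the prompt wording, function names, and the yes/no parsing rule are illustrative assumptions, not the study's actual protocol, and the model call itself is left out.

```python
# Illustrative sketch: zero-shot hypernym classification via an LLM prompt.
# The prompt template and parse rule are assumptions for demonstration only.

def build_hypernym_prompt(term_a: str, term_b: str) -> str:
    """Construct a yes/no prompt asking whether term_a is a hypernym
    (broader category) of term_b."""
    return (
        "Answer with exactly 'yes' or 'no'.\n"
        f'Is "{term_a}" a hypernym (a broader category) of "{term_b}"?\n'
        "Answer:"
    )

def parse_yes_no(completion: str) -> bool:
    """Map a model completion to a boolean label; anything that does not
    start with 'yes' is treated as a negative answer."""
    return completion.strip().lower().startswith("yes")

# Usage with a placeholder completion (no API call is made here):
prompt = build_hypernym_prompt("animal", "dog")
label = parse_yes_no("Yes, every dog is an animal.")  # True
```

In practice the prompt string would be sent to the chosen model (GPT-4o, LLaMA-2, or Falcon-40B in the study) and the parsed label compared against gold hypernym pairs to compute accuracy.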
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
Mind forge Academia also operates under the Creative Commons licence CC-BY 4.0. This allows you to copy and redistribute the material in any medium or format for any purpose, even commercially, provided that you give appropriate attribution.