Improved Transformer for Cross-Domain Knowledge Extraction with Feature Alignment


Pochun Li

Abstract

Cross-domain knowledge extraction is an important natural language processing task that aims to extract entities and relations from unstructured text across different domains, supporting knowledge graph construction and text comprehension. However, traditional models struggle with differences in data distribution and semantic shift in cross-domain scenarios, resulting in insufficient generalization ability. This paper proposes a cross-domain knowledge extraction model based on an improved Transformer. By introducing a domain adaptation module, a dynamic feature alignment mechanism, and a knowledge enhancement module, the model achieves effective modeling and transfer of data from different domains. Experimental results show that the model outperforms existing mainstream methods on multiple datasets such as SemEval-2010 Task 8, particularly on metrics such as accuracy (ACC), AUC, F1, and recall. In addition, ablation experiments verify the contribution of each module to overall model performance, providing a new technical route and theoretical support for cross-domain knowledge extraction tasks. In the future, this study will be extended to multimodal data and larger-scale knowledge extraction scenarios to further advance the field.
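The abstract does not specify how the dynamic feature alignment mechanism is implemented. One common way to align source- and target-domain feature distributions is a Maximum Mean Discrepancy (MMD) loss, which the alignment module could minimize alongside the extraction objective. The sketch below is purely illustrative and not taken from the paper; the function names, the RBF bandwidth, and the NumPy formulation are all assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Pairwise RBF kernel matrix between rows of X and rows of Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(source, target, gamma=0.03):
    # Biased estimate of the squared Maximum Mean Discrepancy between
    # a source-domain feature batch and a target-domain feature batch.
    # A feature-alignment module would minimize this quantity so that
    # the two domains become indistinguishable in feature space.
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Toy check: features drawn from the same distribution should yield a
# smaller discrepancy than features from a shifted distribution.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 16))        # source-domain features
tgt_close = rng.normal(0.0, 1.0, size=(64, 16))  # same distribution
tgt_far = rng.normal(3.0, 1.0, size=(64, 16))    # mean-shifted distribution
print(mmd2(src, tgt_close) < mmd2(src, tgt_far))  # → True
```

In a full model this loss would be computed on encoder outputs and added (with a weighting coefficient) to the entity/relation extraction loss, encouraging domain-invariant representations.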

Article Details

How to Cite
Li, P. (2025). Improved Transformer for Cross-Domain Knowledge Extraction with Feature Alignment. Journal of Computer Science and Software Applications, 5(2). https://doi.org/10.5281/zenodo.14832321