Building a large-scale testing dataset for conceptual semantic annotation of text

Xiao Wei, Dajun Zeng, Xiangfeng Luo, Wei Wu

Research output: Contribution to journal › Article

Abstract

One major obstacle facing research on semantic annotation is the lack of large-scale testing datasets. In this paper, we develop a systematic approach to constructing such datasets. The approach is based on guided ontology auto-construction and annotation methods that require little a priori domain knowledge and little user knowledge of the documents. We demonstrate the efficacy of the proposed approach by developing a large-scale testing dataset from information available in MeSH and PubMed. The resulting testing dataset consists of a large-scale ontology, a large-scale set of annotated documents, and baselines for evaluating target algorithms; it can be used to evaluate both ontology construction algorithms and semantic annotation algorithms.
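The paper does not publish its extraction pipeline, but the raw material it builds on is openly accessible through NCBI's E-utilities. The sketch below is a minimal, hypothetical illustration (not the authors' method) of how PubMed records and their curated MeSH headings could be pulled as document/annotation pairs of the kind such a dataset is built from; the query string, helper names, and retrieval limits are assumptions for demonstration.

    # Minimal sketch (not the authors' pipeline): fetch PubMed abstracts with
    # their curated MeSH headings via NCBI E-utilities, yielding raw
    # document/annotation pairs for a testing dataset.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

    def search_pmids(query, retmax=20):
        """Return PubMed IDs matching a query (query is illustrative)."""
        url = (f"{EUTILS}/esearch.fcgi?db=pubmed&retmax={retmax}"
               f"&term={urllib.parse.quote(query)}")
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        return [e.text for e in root.findall(".//Id")]

    def fetch_documents(pmids):
        """Fetch abstract text and MeSH headings for each PMID."""
        url = (f"{EUTILS}/efetch.fcgi?db=pubmed&retmode=xml"
               f"&id={','.join(pmids)}")
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        docs = []
        for art in root.findall(".//PubmedArticle"):
            abstract = " ".join(
                t.text or "" for t in art.findall(".//AbstractText"))
            mesh = [d.text for d in
                    art.findall(".//MeshHeading/DescriptorName")]
            docs.append({"abstract": abstract, "mesh_annotations": mesh})
        return docs

    if __name__ == "__main__":
        docs = fetch_documents(search_pmids("semantic annotation"))
        print(f"fetched {len(docs)} documents; first has "
              f"{len(docs[0]['mesh_annotations'])} MeSH annotations")

The curated MeSH headings attached to each record can serve as gold-standard concept annotations, and the MeSH descriptor hierarchy as an ontology backbone, which is presumably what makes the MeSH/PubMed pairing attractive for this kind of benchmark.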

Original language: English (US)
Pages (from-to): 63-72
Number of pages: 10
Journal: International Journal of Computational Science and Engineering
Volume: 16
Issue number: 1
DOIs
Publication status: Published - Jan 1 2018
Externally published: Yes

Keywords

  • evaluation baseline
  • evaluation parameters
  • guided annotation method
  • MeSH
  • ontology auto-construction
  • ontology concept learning
  • a priori knowledge
  • PubMed
  • semantic annotation
  • testing dataset

ASJC Scopus subject areas

  • Software
  • Modeling and Simulation
  • Hardware and Architecture
  • Computational Mathematics
  • Computational Theory and Mathematics
