Ontology-conformal recognition of materials entities using language models

04 September 2024, Version 1
This content is a preprint and has not undergone peer review at the time of posting.

Abstract

Retrieving structured materials information from unstructured text is essential for data mining and for automatically developing comprehensive ontologies. Information extraction is a complex task composed of multiple subtasks and therefore often relies on systems of task-specialized language models. A foundation language model can, in principle, address not only a variety of these subtasks but also a range of domains without the need to generate costly large-scale annotated datasets for each downstream task. While the materials science domain, which is adversely affected by data scarcity, would benefit strongly from this, foundation language models struggle with information extraction subtasks in domain-specific settings. This also applies to the named entity recognition (NER) subtask, which aims to detect relevant entity types in natural language. This work assesses whether foundation large language models (LLMs) can successfully perform NER in the materials mechanics and fatigue domain and thereby alleviate the data annotation burden. Specifically, we compare few-shot prompting of foundation LLMs with the current state of the art, fine-tuned task-specific NER models. The study is performed on two materials fatigue datasets annotated at a comparatively fine-grained level. The datasets cover adjacent domains, allowing us to assess how well both NER methodologies generalize under typical domain shifts. Task-specific models are shown to significantly outperform general foundation models. However, the GPT-4 foundation model attains promising F1-scores with the proposed two-stage prompting strategy despite being provided with only ten demonstrations; under these conditions, it even outperforms task-specific models for some rather general entity types. Possible directions for improving foundation-LLM-based NER are discussed. Our findings reveal a strong dependence on the quality of the few-shot demonstrations used for in-context learning (ICL) when handling domain shift. The study also highlights the significance of domain-specific pre-training by comparing task-specific models that differ primarily in their pre-training corpus.
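
As an illustration of the two-stage prompting strategy evaluated here, the sketch below separates span detection and type assignment into two successive LLM calls. It is a minimal sketch under stated assumptions, not the paper's actual implementation: the entity types, the single demonstration sentence, and the `query_llm` callable (a stand-in for any chat-completion API such as GPT-4's) are all illustrative.

```python
# Minimal sketch of two-stage few-shot NER prompting.
# Assumptions (not from the paper): the entity-type schema, the demonstration
# pair, and `query_llm`, a placeholder for any chat-completion API.
from typing import Callable

ENTITY_TYPES = ["Material", "TestCondition", "FatigueProperty"]  # illustrative schema

# Few-shot demonstrations: (sentence, expected JSON annotation) pairs.
# The study used ten such demonstrations; one hypothetical example is shown.
FEW_SHOT_DEMOS = [
    (
        "The Ti-6Al-4V specimens were tested at a stress ratio of R = 0.1.",
        '[{"span": "Ti-6Al-4V", "type": "Material"}, '
        '{"span": "R = 0.1", "type": "TestCondition"}]',
    ),
]


def stage1_prompt(sentence: str) -> str:
    """Stage 1: ask the model only to mark candidate entity spans."""
    demos = "\n\n".join(f"Sentence: {s}\nEntities: {a}" for s, a in FEW_SHOT_DEMOS)
    return (
        "Mark every domain-relevant entity span in the sentence.\n\n"
        f"{demos}\n\nSentence: {sentence}\nEntities:"
    )


def stage2_prompt(sentence: str, spans: str) -> str:
    """Stage 2: ask the model to assign one allowed type to each detected span."""
    return (
        f"Allowed entity types: {', '.join(ENTITY_TYPES)}\n"
        f"Sentence: {sentence}\n"
        f"Detected spans: {spans}\n"
        'Assign exactly one type per span; answer as a JSON list of '
        '{"span", "type"} objects.'
    )


def two_stage_ner(sentence: str, query_llm: Callable[[str], str]) -> str:
    """Run span detection, then type assignment, as two separate LLM calls."""
    spans = query_llm(stage1_prompt(sentence))
    return query_llm(stage2_prompt(sentence, spans))
```

Keeping the two decisions separate lets each call carry a narrower instruction, which is one plausible reason such staged prompts can outperform a single monolithic extraction prompt with the same demonstration budget.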

Keywords

Materials science
Fatigue
Large Language Models
Named Entity Recognition
Parameter-efficient fine-tuning
Foundation models
Prompt engineering
Ontology
Literature mining
