Zero- and Few-Shots Knowledge Graph Triplet Extraction with Large Language Models

dc.contributor.author: Papaluca, Andrea
dc.contributor.author: Krefl, Daniel
dc.contributor.author: Rodríguez Méndez, Sergio J.
dc.contributor.author: Lensky, Artem
dc.contributor.author: Suominen, Hanna
dc.date.accessioned: 2025-05-31T01:28:34Z
dc.date.available: 2025-05-31T01:28:34Z
dc.date.issued: 2024
dc.description.abstract: In this work, we tested the Triplet Extraction (TE) capabilities of a variety of Large Language Models (LLMs) of different sizes in the Zero- and Few-Shots settings. In detail, we proposed a pipeline that dynamically gathers contextual information from a Knowledge Base (KB), both in the form of context triplets and of (sentence, triplets) pairs as examples, and provides it to the LLM through a prompt. The additional context allowed the LLMs to be competitive with all the older fully trained baselines based on the Bidirectional Long Short-Term Memory (BiLSTM) Network architecture. We further conducted a detailed analysis of the quality of the gathered KB context, finding it to be strongly correlated with the final TE performance of the model. In contrast, the size of the model appeared to only logarithmically improve the TE capabilities of the LLMs. We release the code on GitHub for reproducibility.
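The retrieval-augmented prompting pipeline the abstract describes can be sketched as follows. The prompt layout, function names, and triplet formatting are illustrative assumptions, not the paper's exact implementation:

```python
def format_triplet(triplet):
    """Render a (head, relation, tail) tuple as a single prompt token."""
    head, relation, tail = triplet
    return f"({head}; {relation}; {tail})"

def build_prompt(sentence, context_triplets, examples):
    """Assemble a TE prompt: KB context triplets, few-shot (sentence, triplets)
    pairs retrieved from the KB, then the target sentence to annotate."""
    parts = ["Extract (head; relation; tail) triplets from the sentence."]
    if context_triplets:
        parts.append("Relevant KB facts:")
        parts += [format_triplet(t) for t in context_triplets]
    for ex_sentence, ex_triplets in examples:  # Few-Shots demonstrations
        parts.append(f"Sentence: {ex_sentence}")
        parts.append("Triplets: " + " ".join(format_triplet(t) for t in ex_triplets))
    parts.append(f"Sentence: {sentence}")
    parts.append("Triplets:")  # the LLM completes from here
    return "\n".join(parts)
```

In the Zero-Shots setting, `examples` would simply be empty, leaving only the instruction and any retrieved KB context before the target sentence.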
dc.description.sponsorship: Andrea Papaluca was supported by an Australian Government Research Training Program International Scholarship. Artem Lensky was partially supported by the Commonwealth Department of Defence, Defence Science and Technology Group.
dc.description.status: Peer-reviewed
dc.format.extent: 12
dc.identifier.isbn: 9798891761476
dc.identifier.other: ORCID:/0000-0001-7203-8399/work/171153707
dc.identifier.scopus: 85204482864
dc.identifier.uri: http://www.scopus.com/inward/record.url?scp=85204482864&partnerID=8YFLogxK
dc.identifier.uri: https://hdl.handle.net/1885/733755750
dc.language.iso: en
dc.publisher: Association for Computational Linguistics (ACL)
dc.relation.ispartof: KaLLM 2024 - 1st Workshop on Knowledge Graphs and Large Language Models, Proceedings of the Workshop
dc.relation.ispartofseries: 1st Workshop on Knowledge Graphs and Large Language Models, KaLLM 2024
dc.relation.ispartofseries: KaLLM 2024 - 1st Workshop on Knowledge Graphs and Large Language Models, Proceedings of the Workshop
dc.rights: Publisher Copyright: ©2024 Association for Computational Linguistics.
dc.title: Zero- and Few-Shots Knowledge Graph Triplet Extraction with Large Language Models
dc.type: Conference paper
dspace.entity.type: Publication
local.bibliographicCitation.lastpage: 23
local.bibliographicCitation.startpage: 12
local.contributor.affiliation: Papaluca, Andrea; Australian National University
local.contributor.affiliation: Rodríguez Méndez, Sergio J.; School of Computing, ANU College of Systems and Society, The Australian National University
local.contributor.affiliation: Lensky, Artem; University of New South Wales
local.contributor.affiliation: Suominen, Hanna; School of Computing, ANU College of Systems and Society, The Australian National University
local.identifier.pure: f61e6154-68f4-40b2-95ac-6692bd7fc261
local.identifier.url: https://www.scopus.com/pages/publications/85204482864
local.type.status: Published