
Task 1 of the CLEF eHealth Evaluation Lab 2016: Handover Information Extraction

Suominen, Hanna; Zhou, Liyuan; Goeuriot, Lorraine; Kelly, Liadh

Description

Cascaded speech recognition (SR) and information extraction (IE) could support the best practice for clinical handover and release clinicians’ time from writing documents to patient interaction and education. However, high requirements for processing correctness evoke methodological challenges and hence, processing correctness needs to be carefully evaluated as meeting the requirements. This overview paper reports on how these issues were addressed in a shared task of the eHealth evaluation lab of the Conference and Labs of the Evaluation Forum (CLEF) in 2016.

dc.contributor.author: Suominen, Hanna
dc.contributor.author: Zhou, Liyuan
dc.contributor.author: Goeuriot, Lorraine
dc.contributor.author: Kelly, Liadh
dc.contributor.editor: Cappellato, Larsen B.
dc.coverage.spatial: Évora, Portugal
dc.date.accessioned: 2022-05-16T04:56:38Z
dc.date.available: 2022-05-16T04:56:38Z
dc.date.created: September 5-8 2016
dc.identifier.isbn: 9783319445632
dc.identifier.uri: http://hdl.handle.net/1885/265427
dc.description.abstract: Cascaded speech recognition (SR) and information extraction (IE) could support the best practice for clinical handover and release clinicians’ time from writing documents to patient interaction and education. However, high requirements for processing correctness evoke methodological challenges and hence, processing correctness needs to be carefully evaluated as meeting the requirements. This overview paper reports on how these issues were addressed in a shared task of the eHealth evaluation lab of the Conference and Labs of the Evaluation Forum (CLEF) in 2016. This IE task built on the 2015 CLEF eHealth Task on SR by using its 201 synthetic handover documents for training and validation (appr. 8,500 + 7,700 words) and releasing another 100 documents with over 6,500 expert-annotated words for testing. It attracted 25 team registrations and 3 team submissions with 2 methods each. When using the macro-averaged F1 over the 35 form headings present in the training documents for evaluation on the test documents, all participant methods outperformed all 4 baselines, including the organizers’ method (F1 = 0.25), published in 2015 in a top-tier medical informatics journal and provided to the participants as an option to build on, a random classifier (F1 = 0.02), and majority classifiers for the two most common classes (i.e., NA to filter out text irrelevant to the form and the most common form heading, both with F1 < 0.00). The top-2 methods (F1 = 0.38 and 0.37) had statistically significantly (p < 0.05, Wilcoxon signed-rank test) better performance than the third-best method (F1 = 0.35). In comparison, the top-3 methods and the organizers’ method (7th) had F1 of 0.81, 0.80, 0.81, and 0.75 in the NA class, respectively.
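The abstract's evaluation measure, the macro-averaged F1 over the 35 form headings, averages per-class F1 scores with equal weight, so rare headings count as much as frequent ones. A minimal sketch of that computation is below (toy labels; this is illustrative only, not the organizers' evaluation script):

```python
def macro_f1(gold, pred, labels):
    """Macro-averaged F1: compute F1 per class, then take the unweighted mean."""
    f1_scores = []
    for label in labels:
        # Count true positives, false positives, and false negatives for this class.
        tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
        fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
        fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)
    # Unweighted mean over all classes, including those the system never predicts.
    return sum(f1_scores) / len(f1_scores)
```

Because the mean is unweighted, a system that does well only on the dominant NA class still scores poorly overall, which is consistent with the gap the abstract reports between NA-class F1 (around 0.8) and overall macro-averaged F1 (around 0.38).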
dc.format.mimetype: application/pdf
dc.language.iso: en_AU
dc.publisher: Springer
dc.relation.ispartofseries: 7th International Conference of the CLEF Association, CLEF 2016
dc.rights: © 2016 Springer
dc.source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.title: Task 1 of the CLEF eHealth Evaluation Lab 2016: Handover Information Extraction
dc.type: Conference paper
local.description.notes: Imported from ARIES
local.description.refereed: Yes
local.identifier.citationvolume: 1609
dc.date.issued: 2016
local.identifier.absfor: 080105 - Expert Systems
local.identifier.ariespublication: u4334215xPUB1690
local.publisher.url: https://link.springer.com/
local.type.status: Accepted Version
local.contributor.affiliation: Suominen, Hanna, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Zhou, Liyuan, NICTA
local.contributor.affiliation: Goeuriot, Lorraine, Université Grenoble Alpes
local.contributor.affiliation: Kelly, Liadh, Trinity College Dublin
local.bibliographicCitation.startpage: 1
local.bibliographicCitation.lastpage: 14
local.identifier.absseo: 920210 - Nursing
local.identifier.absseo: 970108 - Expanding Knowledge in the Information and Computing Sciences
dc.date.updated: 2020-12-27T07:30:48Z
local.identifier.scopusID: 2-s2.0-84984820786
dcterms.accessRights: Open Access
dc.provenance: https://www.springernature.com/gp/open-research/policies/book-policies... "Authors whose work is accepted for publication in a non-open access Springer or Palgrave Macmillan book are permitted to self-archive the accepted manuscript (AM), on their own personal website and/or in their funder or institutional repositories, for public release after an embargo period (see the table below)." From the publisher site (as at 16 May 2022).
Collections: ANU Research Publications

File: Task 1 of the CLEF eHealth Evaluation Lab 2016.pdf (380.04 kB, Adobe PDF)


Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.
