
Testing realistic forensic speaker identification in Japanese : a likelihood ratio based approach using formants

Kinoshita, Yuko


dc.contributor.author: Kinoshita, Yuko
dc.date.accessioned: 2016-11-16T00:08:22Z
dc.date.available: 2016-11-16T00:08:22Z
dc.date.copyright: 2001
dc.identifier.other: b2107258
dc.identifier.uri: http://hdl.handle.net/1885/110339
dc.description.abstract: This thesis sets out to investigate whether forensic speaker identification can be performed using forensically realistic data, that is, natural and non-contemporaneous speech. To date, no forensic phonetic research has tested how accurately speakers can be discriminated on the basis of their voice under forensically realistic conditions, despite the fact that the use of speech recordings in forensic investigation, or as part of evidence in court, is not an unusual practice today. This research thus aims to provide the first test of the accuracy of realistic forensic speaker identification using the centre frequencies of formants, which are currently the most commonly used acoustic parameter in actual forensic speaker identification. The current state of forensic speaker identification in Japan gives this research further significance. Forensic speaker identification in Japan has relied on visual examination of spectrograms and occasional use of automatic speaker recognition techniques. Japanese research on forensic speaker identification has likewise concentrated on the application of automatic speaker recognition, and no linguistic analysis or interpretation of speech data has been included. This thesis will therefore serve as the first linguistic analysis in Japanese forensic speaker identification research. The thesis first examines which segment / formant combinations are more promising as speaker identification parameters. Those parameters are then combined, and how accurately they discriminate two speech samples is tested. For this testing, three different statistical approaches are presented and examined. As a result, the distance-based approach using the likelihood ratio as the discrimination score (the likelihood ratio-based distance method) was found to be the most effective. The results showed that speakers can indeed be discriminated on the basis of their formant frequencies, provided a sufficient number of parameters is incorporated. With this approach and six parameters, the successful discrimination rates were approximately %.7% for positive discrimination (discriminating two different speakers) and 90% for negative discrimination (identifying the same speaker), when the threshold was set at a likelihood ratio of 1. (An illustrative likelihood-ratio sketch follows the record below.)
dc.format.extent: xvii, 380 leaves
dc.language.iso: en
dc.subject.lcc: PL541.K56 2001
dc.subject.lcsh: Japanese language -- Phonetics
dc.subject.lcsh: Formants (Speech)
dc.subject.lcsh: Forensic linguistics
dc.subject.lcsh: Phonetics, Acoustic
dc.title: Testing realistic forensic speaker identification in Japanese : a likelihood ratio based approach using formants
dc.type: Thesis (PhD)
local.contributor.supervisor: Rose, Phil
dcterms.valid: 2001
local.description.notes: This thesis has been made available through exception 200AB to the Copyright Act.
local.type.degree: Doctor of Philosophy (PhD)
dc.date.issued: 2001
local.identifier.doi: 10.25911/5d7638f8d32f9
dc.date.updated: 2016-11-01T00:13:08Z
local.mintdoi: mint
Collections: Open Access Theses
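The likelihood-ratio approach summarised in the abstract can be illustrated with a minimal sketch. The Python example below is not the thesis's implementation: it assumes a single formant parameter (a hypothetical F2 centre frequency), models the suspect and background speaker populations as univariate Gaussians with invented means and standard deviations, and computes LR = p(measurement | same speaker) / p(measurement | different speaker), taking LR > 1 as support for the same-speaker hypothesis, matching the threshold described in the abstract.

```python
import math


def gaussian_pdf(x, mean, sd):
    """Univariate normal density (used here as a stand-in distribution model)."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))


def likelihood_ratio(measurement, suspect_mean, suspect_sd, population_mean, population_sd):
    """LR = p(measurement | same speaker) / p(measurement | different speaker)."""
    same_speaker = gaussian_pdf(measurement, suspect_mean, suspect_sd)
    different_speaker = gaussian_pdf(measurement, population_mean, population_sd)
    return same_speaker / different_speaker


# Hypothetical F2 centre frequency (Hz) measured in a questioned recording,
# compared against the suspect's own F2 distribution and a background
# (population) F2 distribution.  All figures are illustrative only.
questioned_f2 = 1450.0
lr = likelihood_ratio(
    questioned_f2,
    suspect_mean=1430.0, suspect_sd=60.0,
    population_mean=1550.0, population_sd=120.0,
)

# LR > 1 favours the same-speaker hypothesis; LR < 1 favours different speakers.
print(f"likelihood ratio: {lr:.2f}")
```

In practice several segment/formant combinations would contribute evidence; combining their likelihood ratios by simple multiplication assumes independence between parameters, which is a simplification of whatever weighting the thesis itself applies.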

Download

File: b21072589-kinoshita.y.pdf
Size: 10.88 MB
Format: Adobe PDF


Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.
