Convolutional neural networks for transient candidate vetting in large-scale surveys

dc.contributor.author: Gieseke, Fabian
dc.contributor.author: Bloemen, Steven
dc.contributor.author: van den Bogaard, Cas
dc.contributor.author: Heskes, Tom
dc.contributor.author: Kindler, Jonas
dc.contributor.author: Scalzo, Richard
dc.contributor.author: Ribeiro, Valerio A R M
dc.contributor.author: van Roestel, Jan
dc.contributor.author: Groot, Paul J
dc.contributor.author: Yuan, Fang
dc.contributor.author: Moller, Anais
dc.contributor.author: Tucker, Brad
dc.date.accessioned: 2021-06-01T05:29:48Z
dc.date.available: 2021-06-01T05:29:48Z
dc.date.issued: 2017
dc.date.updated: 2020-11-23T10:22:11Z
dc.description.abstract: Current synoptic sky surveys monitor large areas of the sky to find variable and transient astronomical sources. As the number of detections per night at a single telescope easily exceeds several thousand, current detection pipelines make intensive use of machine learning algorithms to classify the detected objects and to filter out the most interesting candidates. A number of upcoming surveys will produce up to three orders of magnitude more data, which renders high-precision classification systems essential to reduce the manual and, hence, expensive vetting by human experts. We present an approach based on convolutional neural networks to discriminate between true astrophysical sources and artefacts in reference-subtracted optical images. We show that relatively simple networks are already competitive with state-of-the-art systems and that their quality can further be improved via slightly deeper networks and additional pre-processing steps, eventually yielding models that outperform state-of-the-art systems. In particular, our best model correctly classifies about 97.3 per cent of all ‘real’ and 99.7 per cent of all ‘bogus’ instances on a test set containing 1942 ‘bogus’ and 227 ‘real’ instances in total. Furthermore, the networks considered in this work can also successfully classify these objects without relying on difference images, which might pave the way for future detection pipelines that do not contain image subtraction steps at all.
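As a rough illustration of the kind of classifier the abstract describes, the Python sketch below builds a small convolutional network that maps candidate image stamps to a real/bogus score. This is not the architecture from the paper: the stamp size (30x30 pixels), the channel layout (new, reference, and difference image stacked as channels), the layer widths, and the use of PyTorch are assumptions made only for the example.

# Minimal sketch of a real/bogus CNN for transient candidate stamps.
# NOT the authors' architecture: stamp size, channel layout, and layer
# widths below are illustrative assumptions only.
import torch
import torch.nn as nn

class RealBogusCNN(nn.Module):
    def __init__(self, in_channels: int = 3, stamp_size: int = 30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 30x30 -> 15x15
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 15x15 -> 7x7
        )
        flat = 32 * (stamp_size // 4) ** 2        # 32 * 7 * 7 for 30x30 stamps
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                     # single logit: real vs. bogus
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    # Hypothetical batch of 8 candidate stamps with 3 channels each
    # (e.g. new, reference, and difference image).
    model = RealBogusCNN()
    stamps = torch.randn(8, 3, 30, 30)
    p_real = torch.sigmoid(model(stamps))         # probability of being 'real'
    print(p_real.shape)                           # torch.Size([8, 1])

In practice such a network would be trained on expert-labelled ‘real’ and ‘bogus’ stamps with a binary cross-entropy loss; the point of the sketch is only that even a shallow stack of convolution and pooling layers, as the abstract notes, can already be competitive for this vetting task.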
dc.description.sponsorship: FG and VARMR acknowledge financial support from the Radboud Excellence Initiative. VARMR further acknowledges financial support from Fundação para a Ciência e a Tecnologia (FCT) in the form of an exploratory project of reference IF/00498/2015, from the Center for Research & Development in Mathematics and Applications (CIDMA) strategic project UID/MAT/04106/2013, and from Enabling Green E-science for the Square Kilometre Array Research Infrastructure (ENGAGE SKA), POCI-01-0145-FEDER-022217, funded by Programa Operacional Competitividade e Internacionalização (COMPETE 2020) and FCT, Portugal.
dc.format.mimetype: application/pdf
dc.identifier.issn: 0035-8711
dc.identifier.uri: http://hdl.handle.net/1885/235775
dc.language.iso: en_AU
dc.provenance: https://v2.sherpa.ac.uk/id/publication/24618... "The Published Version can be archived in Institutional Repository" from SHERPA/RoMEO site (as at 1/06/2021). This article has been accepted for publication in Monthly Notices of the Royal Astronomical Society. © 2017 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society. All rights reserved.
dc.publisher: Oxford University Press
dc.rights: © 2017 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society
dc.source: Monthly Notices of the Royal Astronomical Society
dc.subject: methods: data analysis
dc.subject: techniques: image processing
dc.subject: surveys
dc.subject: supernovae: general
dc.title: Convolutional neural networks for transient candidate vetting in large-scale surveys
dc.type: Journal article
dcterms.accessRights: Open Access
local.bibliographicCitation.issue: 3
local.bibliographicCitation.lastpage: 3114
local.bibliographicCitation.startpage: 3101
local.contributor.affiliation: Gieseke, Fabian, Radboud University
local.contributor.affiliation: Bloemen, Steven, Radboud University
local.contributor.affiliation: van den Bogaard, Cas, Radboud University
local.contributor.affiliation: Heskes, Tom, Radboud University
local.contributor.affiliation: Kindler, Jonas, University of Osnabrück
local.contributor.affiliation: Scalzo, Richard, College of Science, ANU
local.contributor.affiliation: Ribeiro, Valerio A R M, Radboud University
local.contributor.affiliation: van Roestel, Jan, Radboud University
local.contributor.affiliation: Groot, Paul J, Radboud University
local.contributor.affiliation: Yuan, Fang, College of Science, ANU
local.contributor.affiliation: Moller, Anais, College of Science, ANU
local.contributor.affiliation: Tucker, Brad, College of Science, ANU
local.contributor.authoremail: u4956999@anu.edu.au
local.contributor.authoruid: Scalzo, Richard, u4956999
local.contributor.authoruid: Yuan, Fang, u4981546
local.contributor.authoruid: Moller, Anais, u1018833
local.contributor.authoruid: Tucker, Brad, u4362859
local.description.notes: Imported from ARIES
local.identifier.absfor: 170205 - Neurocognitive Patterns and Neural Networks
local.identifier.absfor: 020199 - Astronomical and Space Sciences not elsewhere classified
local.identifier.absseo: 970102 - Expanding Knowledge in the Physical Sciences
local.identifier.ariespublication: u4485658xPUB367
local.identifier.citationvolume: 472
local.identifier.doi: 10.1093/mnras/stx2161
local.identifier.scopusID: 2-s2.0-85052497993
local.identifier.uidSubmittedBy: u4485658
local.publisher.url: http://mnras.oxfordjournals.org/
local.type.status: Published Version

Downloads

Original bundle
Name: 01_Gieseke_Convolutional_neural_networks_2017.pdf
Size: 7.73 MB
Format: Adobe Portable Document Format