Spatial-aware feature aggregation for cross-view image based geo-localization

dc.contributor.author: Shi, Yujiao
dc.contributor.author: Liu, Liu
dc.contributor.author: Yu, Xin
dc.contributor.author: Li, Hongdong
dc.coverage.spatial: Vancouver, Canada
dc.date.accessioned: 2024-01-21T23:38:47Z
dc.date.created: December 8-14, 2019
dc.date.issued: 2019
dc.date.updated: 2022-10-02T07:17:08Z
dc.description.abstract: Recent work shows that it is possible to train a deep network to determine the geographic location of a ground-level image (e.g., a Google Street View panorama) by matching it against a satellite map covering a wide geographic area of interest. Conventional deep networks, which often cast the problem as a metric-embedding task, however, suffer from low recall rates. One key reason is the vast difference between the two view modalities, i.e., ground view versus aerial/satellite view: they not only exhibit very different visual appearances but also have distinct geometric configurations. Existing deep methods overlook these appearance and geometric differences and instead rely on a brute-force training procedure, leading to inferior performance. In this paper, we develop a new deep network that explicitly addresses these inherent differences between ground and aerial views. We observe that pixels lying along the same azimuth direction in an aerial image approximately correspond to a vertical image column in the ground-view image. We therefore propose a two-step approach to exploit this prior. The first step applies a regular polar transform to warp an aerial image so that its domain is closer to that of a ground-view panorama. Note that the polar transform, as a purely geometric transformation, is agnostic to scene content and hence cannot bring the two domains into full alignment. We therefore add a subsequent spatial-attention mechanism that brings corresponding deep features closer in the embedding space. To improve the robustness of the feature representation, we introduce a feature aggregation strategy that learns multiple spatial embeddings. With this two-step approach, we achieve more discriminative deep representations, making cross-view geo-localization more accurate.
Our experiments on standard benchmark datasets show significant performance gains, with the recall rate more than doubling compared with the previous state of the art. Remarkably, the top-1 recall rate improves from 22.5% in [5] (or 40.7% in [11]) to 89.8% on the CVUSA benchmark, and from 20.1% in [5] to 81.0% on the new CVACT dataset.
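The polar-transform step described in the abstract can be sketched as follows. This is a minimal nearest-neighbour version, assuming a square aerial image centred on the query location; the output height/width and the exact radial mapping are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def polar_transform(aerial, height=128, width=512):
    """Warp a square aerial image into a panorama-like polar layout.

    Each output column corresponds to one azimuth direction from the
    aerial image centre, so content along a single azimuth line ends up
    in a single vertical column, as in a ground-view panorama. Each
    output row corresponds to a radial distance (top row = outer edge).
    Nearest-neighbour sampling keeps the sketch short; a real pipeline
    would use bilinear interpolation.
    """
    size = aerial.shape[0]  # assume a square H x W x C aerial image
    ys, xs = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    radius = (size / 2.0) * (height - ys) / height   # distance from centre
    theta = 2.0 * np.pi * xs / width                 # azimuth angle
    src_y = size / 2.0 - radius * np.cos(theta)
    src_x = size / 2.0 + radius * np.sin(theta)
    src_y = np.clip(np.round(src_y).astype(int), 0, size - 1)
    src_x = np.clip(np.round(src_x).astype(int), 0, size - 1)
    return aerial[src_y, src_x]

# usage: warp a dummy 256x256 RGB aerial image into a 128x512 panorama
pano = polar_transform(np.zeros((256, 256, 3), dtype=np.uint8))
```

Because the transform is purely geometric, it only roughly aligns the two domains; the learned spatial-attention module is what compensates for the remaining content-dependent distortions.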
dc.description.sponsorship: This research is supported in part by the China Scholarship Council (201708320417), the Australian Research Council (ARC) Centre of Excellence for Robotic Vision (CE140100016), ARC Discovery (DP190102261) and ARC LIEF (LE190100080), and in part by a research gift from Baidu RAL (ApolloScapes-Robotics and Autonomous Driving Lab). The authors gratefully acknowledge the GPU gift donated by NVIDIA Corporation. We thank all anonymous reviewers for their constructive comments.
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/1885/311666
dc.language.iso: en_AU
dc.publisher: Neural Information Processing Systems Foundation
dc.relation: http://purl.org/au-research/grants/arc/CE140100016
dc.relation: http://purl.org/au-research/grants/arc/DP190102261
dc.relation: http://purl.org/au-research/grants/arc/LE190100080
dc.relation.ispartofseries: 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019
dc.rights: © 2019 Neural Information Processing Systems Foundation
dc.source: Advances in Neural Information Processing Systems
dc.title: Spatial-aware feature aggregation for cross-view image based geo-localization
dc.type: Conference paper
local.bibliographicCitation.lastpage: 11
local.bibliographicCitation.startpage: 1
local.contributor.affiliation: Shi, Yujiao, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Liu, Liu, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Yu, Xin, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Li, Hongdong, College of Engineering and Computer Science, ANU
local.contributor.authoremail: u4056952@anu.edu.au
local.contributor.authoruid: Shi, Yujiao, u6293587
local.contributor.authoruid: Liu, Liu, u1013337
local.contributor.authoruid: Yu, Xin, u5819038
local.contributor.authoruid: Li, Hongdong, u4056952
local.description.embargo: 2099-12-31
local.description.notes: Imported from ARIES
local.description.refereed: Yes
local.identifier.absfor: 460306 - Image processing
local.identifier.absfor: 461103 - Deep learning
local.identifier.ariespublication: a383154xPUB14042
local.identifier.doi: 10.5555/3454287.3455192
local.identifier.scopusID: 2-s2.0-85090169788
local.identifier.uidSubmittedBy: a383154
local.publisher.url: https://dl.acm.org/
local.type.status: Published Version

Downloads

Original bundle
Name: Spatial-aware feature aggregation for cross-view image based geo-localization.pdf
Size: 4.02 MB
Format: Adobe Portable Document Format