Scalable Large-Margin Mahalanobis Distance Metric Learning

dc.contributor.authorShen, Chunhua
dc.contributor.authorKim, Junae
dc.contributor.authorWang, Lei
dc.date.accessioned2015-12-07T22:21:18Z
dc.date.issued2010
dc.date.updated2016-02-24T11:29:43Z
dc.description.abstractFor many machine learning algorithms, such as k-nearest neighbor (k-NN) classifiers and k-means clustering, success often depends heavily on the metric used to compute distances between data points. An effective way to define such a metric is to learn it from a set of labeled training samples. In this work, we propose a fast and scalable algorithm for learning a Mahalanobis distance metric. The Mahalanobis metric can be viewed as the Euclidean distance metric applied to linearly transformed input data. Employing the principle of margin maximization to achieve better generalization performance, the algorithm formulates metric learning as a convex optimization problem in which a positive semidefinite (p.s.d.) matrix is the unknown variable. Based on the theorem that a trace-one p.s.d. matrix can always be represented as a convex combination of rank-one matrices, our algorithm accommodates any differentiable loss function and solves the resulting optimization problem with a specialized gradient descent procedure. Throughout the optimization, the algorithm maintains the positive semidefiniteness of the matrix variable, which is essential for a valid Mahalanobis metric. Compared with conventional methods such as standard interior-point algorithms [2] or the special solver used in large margin nearest neighbor [24], our algorithm is much more efficient and scales considerably better. Experiments on benchmark data sets suggest that, compared with state-of-the-art metric learning algorithms, our algorithm achieves comparable classification accuracy with reduced computational complexity.
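The two facts the abstract relies on can be checked numerically: a Mahalanobis distance under a p.s.d. matrix M equals the Euclidean distance between linearly transformed points, and a trace-one p.s.d. matrix decomposes as a convex combination of rank-one matrices (its eigendecomposition). The sketch below illustrates both with NumPy; the variable names and the random test data are illustrative, not from the paper.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance: (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
M = A.T @ A              # a generic p.s.d. matrix
M /= np.trace(M)         # normalize to trace one, as in the paper

x, y = rng.standard_normal(3), rng.standard_normal(3)

# Fact 1: M = L^T L (Cholesky), so d_M(x, y) is the Euclidean
# distance between the transformed points Lx and Ly.
L = np.linalg.cholesky(M).T
euclid_sq = float(np.sum((L @ x - L @ y) ** 2))
assert np.isclose(mahalanobis_sq(x, y, M), euclid_sq)

# Fact 2: a trace-one p.s.d. matrix is a convex combination of
# rank-one matrices: M = sum_i w_i u_i u_i^T with w_i >= 0, sum w_i = 1.
w, U = np.linalg.eigh(M)
M_rebuilt = sum(wi * np.outer(ui, ui) for wi, ui in zip(w, U.T))
assert w.min() >= 0 and np.isclose(w.sum(), 1.0)
assert np.allclose(M, M_rebuilt)
```

The second fact is what lets the paper's gradient descent procedure keep the matrix variable p.s.d. at every step: updates stay inside the convex hull of rank-one matrices.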
dc.identifier.issn1045-9227
dc.identifier.urihttp://hdl.handle.net/1885/19986
dc.publisherInstitute of Electrical and Electronics Engineers (IEEE Inc)
dc.sourceIEEE Transactions on Neural Networks
dc.subjectKeywords: Benchmark data; Classification accuracy; Conventional methods; Convex combinations; Convex optimization problems; Data points; Distance metric learning; Effective solution; Euclidean distance; Generalization performance; Gradient descent; Input data; large-margin nearest neighbor; Mahalanobis distance; semidefinite optimization
dc.titleScalable Large-Margin Mahalanobis Distance Metric Learning
dc.typeJournal article
local.bibliographicCitation.issue9
local.bibliographicCitation.startpage7
local.contributor.affiliationShen, Chunhua, College of Engineering and Computer Science, ANU
local.contributor.affiliationKim, Junae, College of Engineering and Computer Science, ANU
local.contributor.affiliationWang, Lei, College of Engineering and Computer Science, ANU
local.contributor.authoremailrepository.admin@anu.edu.au
local.contributor.authoruidShen, Chunhua, a224095
local.contributor.authoruidKim, Junae, u4374373
local.contributor.authoruidWang, Lei, u4259382
local.description.embargo2037-12-31
local.description.notesImported from ARIES
local.identifier.absfor080104 - Computer Vision
local.identifier.absseo970109 - Expanding Knowledge in Engineering
local.identifier.ariespublicationu4963866xPUB10
local.identifier.citationvolume21
local.identifier.doi10.1109/TNN.2010.2052630
local.identifier.scopusID2-s2.0-77956342884
local.identifier.uidSubmittedByu4963866
local.type.statusPublished Version

Downloads

Original bundle

Name: 01_Shen_Scalable_Large-Margin_2010.pdf
Size: 432.45 KB
Format: Adobe Portable Document Format