Kim, Junae

### Description

The development of an appropriate data-dependent distance metric is a compelling goal for many visual recognition tasks. This thesis proposes three efficient and scalable distance metric learning algorithms that employ the principle of margin maximization to secure better generalization performance. The proposed algorithms formulate metric learning as a convex optimization problem with a positive semidefinite (psd) matrix variable. A standard interior-point semidefinite programming (SDP) solver has a complexity of O(n^6.5), where n is the number of variables, and can only solve problems with up to a few thousand variables. Since the number of variables is D(D+1)/2, where D is the dimensionality of the input data, this corresponds to a limit of around D < 100. This high complexity hampers the application of metric learning to high-dimensional problems. Compared with conventional methods such as standard interior-point algorithms or the special-purpose solver used in large-margin nearest neighbor (LMNN), our algorithms are much more efficient and scale considerably better.

The first algorithm, SDPMetric, is based on an important theorem stating that a psd matrix with a trace of one can always be represented as a convex combination of rank-one matrices. The algorithm not only naturally maintains the psd requirement on the matrix variable, which is essential for metric learning, but also significantly cuts the computational overhead, so it remains efficient as the dimensionality of the feature vectors grows. In brief, only the leading eigendecomposition is required at each step; hence the time complexity is O(tD^2), where t is the number of iterations and D is the dimensionality of the feature vectors.

The second algorithm, BoostMetric, uses a boosting technique to learn the Mahalanobis distance metric.
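The trace-one rank-one decomposition that this family of algorithms relies on can be verified numerically with an eigendecomposition. The sketch below is an illustrative NumPy check under assumed variable names, not the thesis implementation:

```python
import numpy as np

# Illustration: any psd matrix X with trace one can be written as a convex
# combination of trace-one rank-one matrices, X = sum_i w_i * u_i u_i^T,
# where the w_i are the (non-negative) eigenvalues and sum to one.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
X = A @ A.T                       # psd by construction
X /= np.trace(X)                  # normalise to trace one

w, U = np.linalg.eigh(X)          # eigenvalues act as the convex weights
X_rebuilt = sum(w_i * np.outer(u, u) for w_i, u in zip(w, U.T))

assert np.allclose(X, X_rebuilt)              # decomposition reconstructs X
assert np.all(w >= -1e-12)                    # weights are non-negative
assert np.isclose(w.sum(), 1.0)               # weights sum to one
```

Because only the leading eigenvector is needed per iteration in practice, each step costs O(D^2) rather than the full O(D^3) eigendecomposition shown here.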
One of the primary difficulties in learning this metric is ensuring that the Mahalanobis matrix remains psd. SDP can enforce this constraint but does not scale well. Like SDPMetric, BoostMetric instead rests on the key observation that any psd matrix can be decomposed into a positive linear combination of trace-one rank-one matrices. BoostMetric thus uses rank-one psd matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting method is easy to implement, does not require tuning, and can accommodate various types of constraints.

The last algorithm, FrobMetric, offers a very efficient approach to this metric learning problem. It formulates a Lagrange dual problem that is much simpler to optimize and can therefore be used to solve much larger Mahalanobis metric learning problems. In general, the proposed approach has a time complexity of O(tD^3), with t = 20-30 for most problems in our experiments. As presented in each chapter, our experiments on various datasets across several applications show that our algorithms achieve classification accuracy comparable to state-of-the-art metric learning algorithms at reduced computational cost.
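The boosting idea of accumulating rank-one psd weak learners can be sketched as follows. This is a heavily simplified illustration under assumed names (the "gradient" matrix here is a random stand-in for the real constraint-driven matrix in BoostMetric), not the thesis code:

```python
import numpy as np

def leading_rank_one(G):
    """Trace-one rank-one weak learner: outer product of the top eigenvector."""
    _, vecs = np.linalg.eigh(G)   # eigh returns eigenvalues in ascending order
    u = vecs[:, -1]               # eigenvector of the largest eigenvalue
    return np.outer(u, u)         # psd, rank one, trace one

rng = np.random.default_rng(1)
D = 4
M = np.zeros((D, D))              # learned metric, built up iteratively
for _ in range(10):
    G = rng.standard_normal((D, D))
    G = (G + G.T) / 2             # symmetric stand-in for the gradient matrix
    M += 0.1 * leading_rank_one(G)  # positive weights keep M psd throughout

# M is psd by construction: no projection back onto the psd cone is needed.
assert np.all(np.linalg.eigvalsh(M) >= -1e-10)
```

The key design point, as the abstract notes, is that positivity of the weights alone guarantees M stays psd, which is what lets the method avoid expensive SDP machinery.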

Items in Open Research are protected by copyright, with all rights reserved, unless otherwise indicated.