Li, Hanxi
Description
Boosting is one of the most well-known learning methods for building highly accurate classifiers or regressors from a set of weak classifiers. Much effort has been devoted to understanding boosting algorithms, yet questions about why boosting succeeds remain open.
In this thesis, we study boosting algorithms from a new perspective. We begin by empirically comparing the LPBoost and AdaBoost algorithms. The results and the corresponding analysis show that, besides the minimum margin, which LPBoost optimizes directly and globally, the margin distribution plays a more important role. Inspired by this observation, we prove that the Lagrange dual problems of AdaBoost, LogitBoost and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems. By examining these dual problems, we show that the success of boosting algorithms can be understood in terms of maintaining a better margin distribution: maximizing margins while at the same time controlling the margin variance. We further point out that AdaBoost approximately maximizes the average margin, rather than the minimum margin. The dual formulation also enables us to develop column-generation based optimization algorithms that are totally corrective. The new algorithm, termed AdaBoost-CG, produces classification results almost identical to those of standard stage-wise additive boosting algorithms, but converges much faster. Consequently, fewer weak classifiers are needed to build the ensemble with our proposed optimization technique.
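To make the column-generation idea concrete, the following is a minimal Python sketch of a totally corrective boosting loop in the spirit of AdaBoost-CG. The function names, the exponential loss, and the off-the-shelf solver are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch of column-generation, totally corrective boosting.
# All names, the exponential loss, and the generic solver are assumptions
# for illustration, not the thesis's exact algorithm.
import numpy as np
from scipy.optimize import minimize

def exp_loss(alpha, H, y):
    """Exponential loss of the ensemble sum_j alpha_j h_j(x) on the training set."""
    margins = y * (H @ alpha)          # per-sample margins
    return np.mean(np.exp(-margins))

def boost_cg(X, y, weak_learner_oracle, n_rounds=50, tol=1e-6):
    """Totally corrective boosting via column generation.

    weak_learner_oracle: callable(w) -> h, where h(X) returns +/-1 predictions;
    it should (approximately) maximize the edge sum_i w_i y_i h(x_i)
    under the current sample distribution w.
    """
    n = len(y)
    chosen, H_cols = [], []
    w = np.ones(n) / n                      # initial sample distribution
    alpha = np.zeros(0)
    for _ in range(n_rounds):
        h = weak_learner_oracle(w)          # column generation: best new weak learner
        preds = h(X)
        edge = np.sum(w * y * preds)
        if edge < tol:                      # no weak learner violates the dual constraint
            break
        chosen.append(h)
        H_cols.append(preds)
        H = np.column_stack(H_cols)
        # Totally corrective step: re-optimize ALL coefficients, not just the new one.
        res = minimize(exp_loss, x0=np.append(alpha, 0.0), args=(H, y),
                       bounds=[(0, None)] * H.shape[1])
        alpha = res.x
        # Update the sample distribution (the dual variables of the exponential loss).
        w = np.exp(-y * (H @ alpha))
        w /= w.sum()
    return chosen, alpha
```

The key difference from stage-wise AdaBoost is the inner re-optimization over all coefficients at every round, which is what allows the ensemble to converge with fewer weak classifiers.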
The significance of the margin distribution motivates us to design a new column-generation based algorithm that directly maximizes the average margin while minimizing the margin variance. We term this novel method MDBoost and show its superiority over other boosting-like algorithms; a schematic form of its objective is given below. Moreover, considering the primal and dual problems together yields important new insights into the characteristics of boosting algorithms. We then propose a general framework for designing new boosting algorithms. A wide variety of machine learning problems essentially minimize a regularized risk functional. We show that the proposed boosting framework, termed AnyBoostTc, can accommodate various loss functions and different regularizers in a totally corrective optimization manner. A large body of totally corrective boosting algorithms can be solved very efficiently by working with the primal rather than the dual, without resorting to sophisticated convex optimization solvers. We also demonstrate that some boosting algorithms, such as AdaBoost, can be interpreted in our framework, even though their optimization is not totally corrective.
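The trade-off that MDBoost targets can be written schematically as an average-margin versus margin-variance objective; the notation below is assumed for illustration and the exact constraints and parameterization in the thesis may differ.

```latex
% Schematic MDBoost-style objective (notation assumed, not verbatim from the thesis):
% maximize the average margin while penalizing the margin variance.
\max_{\alpha \ge 0} \;\;
  \frac{1}{n}\sum_{i=1}^{n}\rho_i
  \;-\; \frac{D}{2n}\sum_{i=1}^{n}\bigl(\rho_i - \bar{\rho}\bigr)^2
\quad \text{s.t.} \quad
  \rho_i = y_i \sum_{j} \alpha_j h_j(x_i), \qquad
  \sum_{j} \alpha_j = 1,
```

where the margin of sample i is rho_i, bar(rho) is the average margin, and D > 0 controls the trade-off between a large average margin and a small margin variance.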
We conclude our study by applying the totally corrective boosting algorithm to a long-standing computer vision problem: face recognition. Linear regression face recognizers, constrained by two categories of locality, are selected and combined within both the traditional and the totally corrective boosting frameworks. To our knowledge, this is the first time that linear-representation classifiers have been boosted for face recognition. The instance-based weak classifiers bring several advantages, which we establish theoretically or verify empirically in this work. Benefiting from the robust weak learner and the advanced learning framework, our algorithms achieve the best reported recognition rates on face recognition benchmark datasets.
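For context, a linear-representation (linear regression) face classifier of the kind boosted here classifies a probe image by per-class least-squares reconstruction. The sketch below is a minimal illustration under assumed names and shapes; the locality constraints and the boosting integration from the thesis are not reproduced.

```python
# Minimal sketch of a linear-representation (linear regression) face classifier
# used as a weak learner. Names, shapes, and the absence of locality constraints
# are illustrative assumptions.
import numpy as np

def lrc_predict(probe, class_galleries):
    """Classify a probe face by per-class least-squares reconstruction.

    probe:            1-D array, the vectorized probe image.
    class_galleries:  dict mapping class label -> (d, m_c) array whose columns
                      are vectorized training images of that class.
    Returns the label whose gallery reconstructs the probe with the smallest residual.
    """
    best_label, best_residual = None, np.inf
    for label, G in class_galleries.items():
        # Least-squares coefficients representing the probe in the span of G.
        beta, *_ = np.linalg.lstsq(G, probe, rcond=None)
        residual = np.linalg.norm(probe - G @ beta)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```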