Adaptive regression and model selection in data mining problems

Authors

Bakin, Sergey


Abstract

Data Mining is a new and rapidly evolving area which deals with problems related to extracting structure from massive commercial and scientific data sets. Regression analysis is one of the major Data Mining techniques. The data sets encountered in Data Mining are often characterized by a large number of attributes (variables) as well as data records. This imposes two major requirements on the regression analysis tools used in Data Mining: first, in order to produce accurate and parsimonious models exhibiting the most important features of the problem at hand, they should be able to perform model selection adaptively; second, the cost of running such tools has to be reasonably low. Most modern regression tools fail to meet these requirements. This thesis is intended to contribute to the improvement of existing methodologies as well as to propose new approaches. We focus on two regression estimation techniques. The first, called Probing Least Absolute Squares Modelling (PLASM), is a generalization of the Least Absolute Shrinkage and Selection Operator (LASSO) of R. Tibshirani, which minimizes the residual sum of squares subject to the l1-norm of the regression coefficients being less than a constant. LASSO has been shown to enjoy the stability of ridge regression coupled with the ability to carry out model selection. In our approach, PLASM, we replace the constraint employed in LASSO with a different one. PLASM allows for an arbitrary grouping of basis functions in a model and includes LASSO as a special case. The implication of the new constraint is that PLASM is able to perform model selection in terms of groups of basis functions.
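In symbols, the LASSO fit described above solves a constrained least-squares problem; the group-wise bound shown after it is only a sketch of the kind of constraint that grouped selection suggests (the exact constraint used by PLASM is developed in the thesis itself):

```latex
% LASSO (Tibshirani): residual sum of squares under an l1 bound
\min_{\beta}\; \sum_{i=1}^{n}\Big(y_i - \sum_{j=1}^{p}\beta_j x_{ij}\Big)^{2}
\quad\text{subject to}\quad \sum_{j=1}^{p}|\beta_j| \le t .

% A group-wise analogue (illustrative only): partition \{1,\dots,p\}
% into groups G_1,\dots,G_K and bound the sum of the group norms,
\sum_{k=1}^{K}\big\|\beta_{G_k}\big\|_{2} \le t ,
% which reduces to the l1 constraint when every group is a singleton,
% so a formulation of this kind contains LASSO as a special case.
```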
Group-wise selection turns out to be very useful in a number of data-analytic problems. For example, in additive modelling the dimensionality of the PLASM minimization problem is much smaller than that of LASSO and is independent (at least explicitly) of the number of data points, which makes it suitable for use in the Data Mining context. The second tool we consider in this thesis is Multivariate Adaptive Regression Splines (MARS), developed by J. Friedman. In our version of MARS, called BMARS, we use B-splines instead of truncated power basis functions. Because B-splines have compact support, we can introduce a new strategy whereby at any moment the algorithm builds a model using B-splines of a certain scale only, switching over to splines of a smaller scale once the fitting ability of the current splines has been exhausted. We also discuss a parallel version of BMARS as well as an application of the algorithm to the processing of a large commercial data set. The results of the numerical experiments demonstrate that, while being considerably more efficient, BMARS produces models competitive with those of the original MARS.
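The contrast between the truncated power basis of the original MARS and the compactly supported B-splines used by BMARS can be illustrated with a small sketch. The knot values and function names here are illustrative only, and SciPy's B-spline routines stand in for whatever implementation the thesis uses:

```python
import numpy as np
from scipy.interpolate import BSpline

# Truncated power basis function (x - t)_+ as used by the original MARS:
# it is nonzero on the whole half-line x > t, so every observation past
# the knot keeps contributing to the fit.
def truncated_power(x, knot):
    return np.maximum(x - knot, 0.0)

# A cubic B-spline, by contrast, has compact support: it is nonzero only
# between its first and last defining knots (knot values are illustrative).
knots = np.array([0.2, 0.3, 0.4, 0.5, 0.6])   # 5 knots -> one cubic B-spline
bspline = BSpline.basis_element(knots)

x = np.linspace(0.0, 1.0, 101)
tp_vals = truncated_power(x, 0.2)
bs_vals = np.nan_to_num(bspline(x, extrapolate=False))  # 0 outside support

# The truncated power function is still large at x = 1.0 ...
far_tp = tp_vals[-1]                     # well above zero
# ... while the B-spline vanishes everywhere outside [0.2, 0.6].
far_bs = np.abs(bs_vals[x > 0.61]).max()  # exactly zero
```

This compact support is what makes the scale-by-scale strategy possible: once a coarse-scale spline is placed, data points outside its knot span are unaffected, so the algorithm can refine locally with smaller-scale splines.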
