Linear Models with Regularization


dc.contributor University of Helsinki, Faculty of Science, Department of Mathematics and Statistics; Huang, Zhiyong; 2012-10-04
dc.description Abstract only. The paper copy of the whole thesis is available for reading-room use at the Helsinki University Library; search the HELKA online catalog. en
dc.description.abstract In this master's thesis we present two important classes of regularized linear models: regularized least squares regression (LS) and regularized least absolute deviation (LAD). The use of regularized regression for variable selection was pioneered by Tibshirani (1996), and his proposed LASSO rapidly became a popular and competitive variable-selection method. The properties of LASSO have been studied intensively, and different algorithms for solving LASSO have been developed. While LASSO's success was widely acclaimed, its limitations were also noticed, and a number of alternative methods have been proposed in subsequent research. Among these methods, adaptive LASSO (Zou, 2006) and SCAD (Fan and Li, 2001) attempt to improve the efficiency of LASSO; LAD LASSO (Wang et al., 2007) allows non-Gaussian error distributions; ridge, elastic net (Zou and Hastie, 2005) and bridge (Frank and Friedman, 1993) adopt penalties other than the L1 penalty; while fused LASSO (Tibshirani et al., 2005) and grouped LASSO (Yuan and Lin, 2006) take extra structural constraints in the data into account. We discuss LASSO at length in the thesis: its properties under orthogonal design, singular design and p > n design are examined, its asymptotic performance is investigated, and its limitations are carefully illustrated. Two other commonly used regularization methods in LS, ridge and elastic net, are discussed as well. Regularized LAD is another focus of the thesis. As a robust statistic, LAD, which fits the conditional median rather than the conditional mean of the response, has a bounded influence function and a high conditional breakdown point. It is therefore natural to use regularized LAD for variable selection in the presence of long-tailed errors or outliers in the response. Compared with LASSO, LAD LASSO performs robust estimation and variable selection simultaneously. We conduct a simulation study and examine two real data examples to assess the performance of these regularized linear models.
Our results demonstrate that no single estimator dominates the others in all cases. The sparsity of the true model, the distribution of the noise, the noise-to-signal ratio, the sample size and the correlation of the predictors all matter. When the noise has a normal distribution, LASSO, adaptive LASSO and elastic net often outperform the others in prediction accuracy; adaptive LASSO is the best at variable selection, and elastic net tends to produce less sparse models than LASSO. When the noise follows a Laplace distribution, LAD LASSO is competitive with LASSO but less efficient than adaptive LASSO. For noise with an extremely long-tailed distribution, such as the Cauchy distribution, LAD LASSO dominates the others in both prediction accuracy and variable selection. fi
dc.language.iso en fi
dc.title Linear Models with Regularization fi
dc.type.ontasot Master's thesis (pro gradu) fi
dc.subject.discipline Bayesian Statistics and Decision Analysis fi
