Ridge Regression and Ordinary Least Squares (OLS).
Initialization.
Parameters:
    alpha : float (>= 0.0) - regularization parameter; alpha=0.0 gives OLS
New in version 2.2.0.
learn(x, y)
    Compute the regression coefficients.
    Parameters:
        x : 2d array_like object (N x P) - matrix of regressors
        y : 1d array_like object (N) - response vector
pred(x)
    Compute the predicted response.
    Parameters:
        x : 2d array_like object (N x P) - test data
    Returns:
        y_hat : 1d ndarray (N) - predicted response
Note
The predicted response is computed as:
\hat{y} = \beta_0 + X \boldsymbol\beta
Example (requires matplotlib module):
>>> import numpy as np
>>> import mlpy
>>> import matplotlib.pyplot as plt
>>> x = np.array([[1], [2], [3], [4], [5], [6]]) # p = 1
>>> y = np.array([0.13, 0.19, 0.31, 0.38, 0.49, 0.64])
>>> rr = mlpy.RidgeRegression(alpha=0.0) # OLS
>>> rr.learn(x, y)
>>> y_hat = rr.pred(x)
>>> plt.figure(1)
>>> plt.plot(x[:, 0], y, 'o') # show y
>>> plt.plot(x[:, 0], y_hat) # show y_hat
>>> plt.show()
>>> rr.beta0()
0.0046666666666667078
>>> rr.beta()
array([ 0.10057143])
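As a sanity check, the OLS fit above can be reproduced with the textbook normal equations (the names X1 and b below are ours, not part of mlpy; this continues the session above):
>>> X1 = np.hstack([np.ones((x.shape[0], 1)), x])           # prepend an intercept column
>>> b = np.linalg.solve(np.dot(X1.T, X1), np.dot(X1.T, y))  # normal equations: (X'X) b = X'y
>>> np.allclose(b[0], rr.beta0()) and np.allclose(b[1:], rr.beta())
True
>>> np.allclose(rr.beta0() + np.dot(x, rr.beta()), rr.pred(x))  # y_hat = beta_0 + X beta
True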
Kernel Ridge Regression.
Initialization.
Parameters:
    kernel : a Kernel object (e.g. mlpy.KernelGaussian) - kernel function
    alpha : float (> 0.0) - regularization parameter
New in version 2.2.0.
learn(x, y)
    Compute the regression coefficients.
    Parameters:
        x : 2d array_like object (N x P) - matrix of regressors
        y : 1d array_like object (N) - response vector
pred(x)
    Compute the predicted response.
    Parameters:
        x : 2d array_like object (N x P) - test data
    Returns:
        y_hat : 1d ndarray (N) - predicted response
Example (requires matplotlib module):
>>> import numpy as np
>>> import mlpy
>>> import matplotlib.pyplot as plt
>>> x = np.array([[1], [2], [3], [4], [5], [6]]) # p = 1
>>> y = np.array([0.13, 0.19, 0.31, 0.38, 0.49, 0.64])
>>> kernel = mlpy.KernelGaussian(sigma=0.01)
>>> krr = mlpy.KernelRidgeRegression(kernel=kernel, alpha=0.01)
>>> krr.learn(x, y)
>>> y_hat = krr.pred(x)
>>> plt.figure(1)
>>> plt.plot(x[:, 0], y, 'o') # show y
>>> plt.plot(x[:, 0], y_hat) # show y_hat
>>> plt.show()
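For reference, kernel ridge regression solves (K + alpha*I) c = y in the training kernel matrix K and predicts with K c. A minimal numpy sketch of that textbook form, assuming the common Gaussian kernel exp(-||a - b||^2 / (2 sigma^2)); mlpy's KernelGaussian may use a different parameterization (and may center the data), so the coefficients need not match krr exactly:
>>> xf = x.astype(float)
>>> d2 = np.sum((xf[:, np.newaxis, :] - xf[np.newaxis, :, :])**2, axis=2)  # pairwise squared distances
>>> K = np.exp(-d2 / (2 * 0.01**2))                    # Gaussian kernel matrix, sigma = 0.01
>>> c = np.linalg.solve(K + 0.01 * np.eye(len(y)), y)  # solve (K + alpha*I) c = y, alpha = 0.01
>>> y_hat_manual = np.dot(K, c)                        # predicted response on the training points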
Least Angle Regression is described in [Efron04].
Covariates should be standardized to have mean 0 and unit length, and the response should have mean 0:
\sum_{i=1}^n{x_{ij}} = 0, \hspace{1cm} \sum_{i=1}^n{x_{ij}^2} = 1, \hspace{1cm} \sum_{i=1}^n{y_i} = 0 \hspace{1cm} \mathrm{for} \hspace{0.2cm} j = 1, 2, \dots, p.
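A minimal numpy sketch of this preprocessing (the helper name standardize is ours, not part of mlpy):
>>> import numpy as np
>>> def standardize(x, y):
...     xc = x - x.mean(axis=0)                   # column means become 0
...     xs = xc / np.sqrt(np.sum(xc**2, axis=0))  # columns scaled to unit length
...     return xs, y - y.mean()                   # response centered to mean 0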
Least Angle Regression (LAR).
Initialization.
New in version 2.2.0.
learn(x, y)
    Compute the regression coefficients.
    Parameters:
        x : 2d array_like object (N x P) - matrix of regressors (standardized as above)
        y : 1d array_like object (N) - response vector (centered as above)
pred(x)
    Compute the predicted response.
    Parameters:
        x : 2d array_like object (N x P) - test data
    Returns:
        y_hat : 1d ndarray (N) - predicted response
It implements a simple modification of the LARS algorithm that produces Lasso estimates. See [Efron04] and [Tibshirani96].
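For reference, the Lasso estimate of [Tibshirani96] minimizes the residual sum of squares under an L1 constraint on the coefficients:
\hat{\boldsymbol\beta} = \operatorname{argmin}_{\boldsymbol\beta} \sum_{i=1}^n \left( y_i - \sum_{j=1}^p x_{ij} \beta_j \right)^2 \hspace{0.5cm} \mathrm{subject \; to} \hspace{0.5cm} \sum_{j=1}^p |\beta_j| \le t.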
Covariates should be standardized to have mean 0 and unit length, and the response should have mean 0:
\sum_{i=1}^n{x_{ij}} = 0, \hspace{1cm} \sum_{i=1}^n{x_{ij}^2} = 1, \hspace{1cm} \sum_{i=1}^n{y_i} = 0 \hspace{1cm} \mathrm{for} \hspace{0.2cm} j = 1, 2, \dots, p.
LASSO computed with the LARS algorithm.
Initialization.
New in version 2.2.0.
learn(x, y)
    Compute the regression coefficients.
    Parameters:
        x : 2d array_like object (N x P) - matrix of regressors (standardized as above)
        y : 1d array_like object (N) - response vector (centered as above)
pred(x)
    Compute the predicted response.
    Parameters:
        x : 2d array_like object (N x P) - test data
    Returns:
        y_hat : 1d ndarray (N) - predicted response
Gradient Descent Method.
Initialization.
New in version 2.2.0.
learn(x, y)
    Compute the regression coefficients.
    Parameters:
        x : 2d array_like object (N x P) - matrix of regressors
        y : 1d array_like object (N) - response vector
pred(x)
    Compute the predicted response.
    Parameters:
        x : 2d array_like object (N x P) - test data
    Returns:
        y_hat : 1d ndarray (N) - predicted response
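As an illustration of the method (not mlpy's implementation; the helper name, step size, and iteration count below are ours), gradient descent repeatedly moves the coefficients against the least squares gradient x'(x beta - y):
>>> import numpy as np
>>> def gd_learn(x, y, step=0.01, iters=1000):
...     beta = np.zeros(x.shape[1])
...     for _ in range(iters):
...         grad = np.dot(x.T, np.dot(x, beta) - y)  # gradient of 0.5 * ||y - x beta||^2
...         beta = beta - step * grad                # move against the gradient
...     return beta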
[Efron04] Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani. Least Angle Regression. Annals of Statistics, 2004, volume 32, pages 407-499.
[Tibshirani96] Robert Tibshirani. Regression shrinkage and selection via the lasso. J. Royal. Statist. Soc B., 1996, volume 58, number 1, pages 267-288.