1. Background of the hedonic pricing method.

The Boston house-price data of Harrison, D. and Rubinfeld, D.L. ‘Hedonic prices and the demand for clean air’, J. Environ. Economics & Management, vol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, ‘Regression diagnostics …’, Wiley, 1980. N.B. Various transformations are used in the table on pages 244-261 of the latter.

The Boston house-price data has been used in many machine learning papers that address regression problems. References:
- Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.
- Quinlan, R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.

2. Reference algorithm:

Machine learning datasets: http://archive.ics.uci.edu/ml/datasets.php

Python Meets Machine Learning — Linear Regression

Copyright notice: this article was originally written by HNU_NJU_wjw and is licensed under CC 4.0 BY-SA; please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/HNU_Csee_wjw/article/details/88086183
Overview
"May I ask where a tavern can be found? A shepherd boy points far off to Apricot Blossom Village."

This article uses Jupyter notebooks.

A linear model tries to learn a function that makes predictions through a linear combination of the attributes, i.e.
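In the usual notation, with attributes $x_1, x_2, \dots, x_d$, learned weights $w_1, \dots, w_d$, and a bias $b$:

$$f(\boldsymbol{x}) = w_1 x_1 + w_2 x_2 + \cdots + w_d x_d + b, \qquad \text{or in vector form} \qquad f(\boldsymbol{x}) = \boldsymbol{w}^{\mathsf T}\boldsymbol{x} + b.$$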

Linear models are simple in form and easy to build, yet they embody some of the important basic ideas of machine learning. Many more powerful nonlinear models can be obtained from linear models by introducing hierarchical structure or high-dimensional mappings. Moreover, because the weights w directly express how important each attribute is for the prediction, linear models are highly interpretable.

For example, suppose we want to judge whether a watermelon is tasty or not. Given a melon, we can directly observe its color and the shape of its root stem, and we can tap it to hear what it sounds like. The "color", "root shape", and "knock sound" here are the "features".

If, in the watermelon problem, we learn something like:
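For instance, a weighting of the following form; the coefficients here are the illustrative values from Zhou Zhihua's *Machine Learning* textbook, from which this watermelon example is taken:

$$f_{\text{good}}(\boldsymbol{x}) = 0.2\,x_{\text{color}} + 0.5\,x_{\text{root}} + 0.3\,x_{\text{sound}} + 1$$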

then it means the melon's quality can be judged by jointly considering color, root, and knock sound, with the root mattering most and the sound mattering more than the color.

The process of learning such a linear equation from the data works as follows.

The goal of learning is to minimize the error between the predicted values and the true values, so that our predictions become as accurate as possible, i.e.
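Writing the predicted value as $\hat{y}^{(i)} = a x^{(i)} + b$, we choose $a$ and $b$ to make the sum of squared errors

$$J(a, b) = \sum_{i=1}^{m}\bigl(y^{(i)} - a x^{(i)} - b\bigr)^2$$

as small as possible.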

This is a classic least squares problem. Through a series of mathematical derivations, the values of a and b can be computed.

The mathematical derivation follows; it involves a little calculus and linear algebra and can be skipped.

This is an extremum problem: set the partial derivatives of the objective function with respect to a and b equal to zero.

First set the partial derivative of J with respect to b to zero, then set the partial derivative of J with respect to a to zero, and simplify further, as sketched below.
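A sketch of the standard derivation, ending in the formulas that the code below implements:

$$\frac{\partial J}{\partial b} = -2\sum_{i=1}^{m}\bigl(y^{(i)} - a x^{(i)} - b\bigr) = 0 \;\Rightarrow\; b = \bar{y} - a\bar{x}$$

$$\frac{\partial J}{\partial a} = -2\sum_{i=1}^{m}\bigl(y^{(i)} - a x^{(i)} - b\bigr)x^{(i)} = 0$$

Substituting $b = \bar{y} - a\bar{x}$ and simplifying gives

$$a = \frac{\sum_{i=1}^{m}\bigl(x^{(i)} - \bar{x}\bigr)\bigl(y^{(i)} - \bar{y}\bigr)}{\sum_{i=1}^{m}\bigl(x^{(i)} - \bar{x}\bigr)^2}, \qquad b = \bar{y} - a\bar{x}.$$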

This completes the derivation of a and b.

Now for the exciting part: time to code!

01 Implementing Simple Linear Regression

  1. Prepare the data:

import numpy as np
import matplotlib.pyplot as plt

x = np.array([1., 2., 3., 4., 5.])
y = np.array([1., 3., 2., 3., 5.])

plt.scatter(x, y)
plt.axis([0, 6, 0, 6])
plt.show()

  2. Next, compute the values of a and b

x_mean = np.mean(x)
y_mean = np.mean(y)

# Numerator
num = 0.0

# Denominator
d = 0.0

for x_i, y_i in zip(x, y):
    num += (x_i - x_mean) * (y_i - y_mean)
    d += (x_i - x_mean) ** 2

a = num / d
b = y_mean - a * x_mean

a

Out[7]:

0.80000000000000004

b

Out[8]:

0.39999999999999947

Plot the fitted line:

y_hat = a * x + b
plt.scatter(x, y)
plt.plot(x, y_hat, color='r')
plt.axis([0, 6, 0, 6])
plt.show()

  3. Make a prediction for a given point

Predict a single point:

x_predict = 6
y_predict = a * x_predict + b

y_predict

Out[11]:

5.2000000000000002

Using our own SimpleLinearRegression
(the full code is in the appendix at the end of this article~)

from playML.SimpleLinearRegression import SimpleLinearRegression1

reg1 = SimpleLinearRegression1()
reg1.fit(x, y)

Out[12]:

SimpleLinearRegression()

reg1.predict(np.array([x_predict]))

Prediction result:

array([ 5.2])

Check a and b:

reg1.a_

Out[14]:

0.80000000000000004

reg1.b_

Out[15]:

0.39999999999999947

Plot the regression line:

y_hat1 = reg1.predict(x)

plt.scatter(x, y)
plt.plot(x, y_hat1, color='r')
plt.axis([0, 6, 0, 6])
plt.show()

Vectorized implementation of SimpleLinearRegression
Recall the expressions for a and b derived above.

Observing that both the numerator and the denominator can be viewed as the dot product of two vectors, we can drop the accumulation inside the for loop, which greatly improves both execution efficiency and readability.

(If I remember correctly, the vector dot product is first-year high-school material, emmmm.)
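As a minimal sketch, the loop above collapses to two NumPy dot products (this is essentially what the vectorized fit in the appendix class does):

# x, y, x_mean, y_mean as computed earlier
num = (x - x_mean).dot(y - y_mean)   # numerator: dot product of the centered vectors
d = (x - x_mean).dot(x - x_mean)     # denominator: squared norm of the centered x
a = num / d
b = y_mean - a * x_mean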

from playML.SimpleLinearRegression import SimpleLinearRegression2

reg2 = SimpleLinearRegression2()
reg2.fit(x, y)

Out[19]:

SimpleLinearRegression()

reg2.a_

Out[20]:

0.80000000000000004

reg2.b_

Out[21]:

0.39999999999999947

y_hat2 = reg2.predict(x)
plt.scatter(x, y)
plt.plot(x, y_hat2, color='r')
plt.axis([0, 6, 0, 6])
plt.show()
 

Performance test of the vectorized implementation
m = 1000000
big_x = np.random.random(size=m)
big_y = big_x * 2.0 + 3.0 + np.random.normal(size=m)  # add noise to the data

"""
%timeit reg1.fit(big_x, big_y)
%timeit reg2.fit(big_x, big_y)
832 ms ± 18.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
13.3 ms ± 730 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
"""

reg1.a_

Out[26]:

1.9921719353685576

reg1.b_

Out[27]:

3.0060568970079213

reg2.a_

Out[28]:

1.9921719353687457

reg2.b_

Out[29]:

3.0060568970078272

05 Metrics for evaluating regression algorithms
MSE (Mean Squared Error)

RMSE (Root Mean Squared Error)

MAE (Mean Absolute Error)
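In formulas, with $m$ test samples, predictions $\hat{y}^{(i)}$, and true values $y^{(i)}$:

$$\text{MSE} = \frac{1}{m}\sum_{i=1}^{m}\bigl(\hat{y}^{(i)} - y^{(i)}\bigr)^2, \qquad \text{RMSE} = \sqrt{\text{MSE}}, \qquad \text{MAE} = \frac{1}{m}\sum_{i=1}^{m}\bigl|\hat{y}^{(i)} - y^{(i)}\bigr|.$$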

import numpy as np
import matplotlib.pyplot as plt

from sklearn import datasets
Boston housing data

boston = datasets.load_boston()
print(boston.DESCR)

"""
Boston House Prices dataset
===========================

Notes

Data Set Characteristics:
:Number of Instances: 506
:Number of Attributes: 13 numeric/categorical predictive
:Median Value (attribute 14) is usually the target
:Attribute Information (in order):
- CRIM     per capita crime rate by town
- ZN       proportion of residential land zoned for lots over 25,000 sq.ft.
- INDUS    proportion of non-retail business acres per town
- CHAS     Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
- NOX      nitric oxides concentration (parts per 10 million)
- RM       average number of rooms per dwelling
- AGE      proportion of owner-occupied units built prior to 1940
- DIS      weighted distances to five Boston employment centres
- RAD      index of accessibility to radial highways
- TAX      full-value property-tax rate per $10,000
- PTRATIO  pupil-teacher ratio by town
- B        1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
- LSTAT    % lower status of the population
- MEDV     Median value of owner-occupied homes in $1000's
:Missing Attribute Values: None
:Creator: Harrison, D. and Rubinfeld, D.L.
This is a copy of UCI ML housing dataset.
http://archive.ics.uci.edu/ml/datasets/Housing
This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.
The Boston house-price data of Harrison, D. and Rubinfeld, D.L. ‘Hedonic prices and the demand for clean air’, J. Environ. Economics & Management, vol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, ‘Regression diagnostics …’, Wiley, 1980. N.B. Various transformations are used in the table on pages 244-261 of the latter. The Boston house-price data has been used in many machine learning papers that address regression problems. References – Belsley, Kuh & Welsch, ‘Regression diagnostics: Identifying Influential Data and Sources of Collinearity’, Wiley, 1980. 244-261. – Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.

- many more! (see http://archive.ics.uci.edu/ml/datasets/Housing)
"""

boston.feature_names

Out[4]:

array(['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX',
       'PTRATIO', 'B', 'LSTAT'], dtype='<U7')

x = boston.data[:, 5]  # use only the RM (number of rooms) feature

x.shape

Out[9]:

(506,)

y = boston.target

y.shape

Out[12]:

(506,)

plt.scatter(x, y)
plt.show()

np.max(y)

Out[14]:

50.0

Remove the points that sit at the upper cap (likely an artifact of how the values were recorded), using fancy indexing:

x = x[y < 50.0]
y = y[y < 50.0]

plt.scatter(x, y)
plt.show()

Using simple linear regression
from playML.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(x, y, seed=666)

x_train.shape

Out[18]:

(392,)

x_test.shape

Out[19]:

(98,)

from playML.SimpleLinearRegression import SimpleLinearRegression
reg = SimpleLinearRegression()
reg.fit(x_train, y_train)

Out[21]:

SimpleLinearRegression()

reg.a_

Out[22]:

7.8608543562689563

reg.b_

Out[23]:

-27.45934280670555

plt.scatter(x_train, y_train)
plt.plot(x_train, reg.predict(x_train), color='r')
plt.show()

y_predict = reg.predict(x_test)

 MSE
mse_test = np.sum((y_predict - y_test) ** 2) / len(y_test)

mse_test

Out[27]:

24.15660213438743

RMSE
from math import sqrt

rmse_test = sqrt(mse_test)
rmse_test  # this value has the same units as y; the average error is about 4.9 (thousands of dollars)

Out[29]:

4.914936635846634

MAE
mae_test = np.sum(np.absolute(y_predict - y_test)) / len(y_test)
mae_test

Out[30]:

3.5430974409463873

from playML.metrics import mean_squared_error
from playML.metrics import root_mean_squared_error
from playML.metrics import mean_absolute_error

mean_squared_error(y_test, y_predict)

Out[32]:

24.15660213438743


root_mean_squared_error(y_test, y_predict)

Out[33]:

4.914936635846634

mean_absolute_error(y_test, y_predict)

Out[34]:

3.5430974409463873

MSE and MAE in scikit-learn
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error

mean_squared_error(y_test, y_predict)

Out[36]:

24.15660213438743

mean_absolute_error(y_test, y_predict)

Out[37]:

3.5430974409463873

Although the MSE, RMSE, and MAE introduced above can all evaluate a linear regression algorithm, the numbers themselves are not very meaningful across datasets. For predictions on different datasets, there is no guarantee that an RMSE of 5 (e.g., predicting fluctuations in student grades) is better than an RMSE of 10 (e.g., predicting fluctuations in house prices).

So the most commonly used metric is R-squared (R²).

 R Squared
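The usual definition is

$$R^2 = 1 - \frac{\sum_{i}\bigl(\hat{y}^{(i)} - y^{(i)}\bigr)^2}{\sum_{i}\bigl(\bar{y} - y^{(i)}\bigr)^2}$$

where $\bar{y}$ is the mean of the true values.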

Looking closely at the formula, it is not hard to see what the numerator and denominator actually mean.

Roughly speaking, rigidly predicting the mean of the training data for every input x gives a fairly large error, whereas our model is there to bring the output closer to the true values.

Transforming the formula a bit further shows that R² is related to the variance of y and to the MSE.
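Dividing numerator and denominator by $m$ gives

$$R^2 = 1 - \frac{\text{MSE}(\hat{y}, y)}{\text{Var}(y)},$$

which is exactly what the code below computes.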

R Square
1 - mean_squared_error(y_test, y_predict) / np.var(y_test)

Out[32]:

0.61293168039373236

from playML.metrics import r2_score
r2_score(y_test, y_predict)

Out[34]:

0.61293168039373236

from sklearn.metrics import r2_score
r2_score(y_test, y_predict)

Out[35]:

0.61293168039373236

reg.score(x_test, y_test)  # test our own implementation

Out[36]:

0.61293168039373236

That wraps up simple linear regression. Next we discuss linear regression when the data have multiple features, i.e., multiple (multivariate) linear regression.

Now y is no longer determined by a single feature x, but by all of the features X_i together.

In matrix form this is written as:
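With $\boldsymbol{X}_b$ denoting the data matrix prepended with a column of ones and $\boldsymbol{\theta} = (\theta_0, \theta_1, \dots, \theta_n)^{\mathsf T}$ the parameter vector,

$$\hat{y}^{(i)} = \theta_0 + \theta_1 X^{(i)}_1 + \cdots + \theta_n X^{(i)}_n \qquad\Longleftrightarrow\qquad \hat{\boldsymbol{y}} = \boldsymbol{X}_b\,\boldsymbol{\theta}.$$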

The optimization objective changes accordingly, i.e.
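we now minimize

$$J(\boldsymbol{\theta}) = \bigl(\boldsymbol{y} - \boldsymbol{X}_b\boldsymbol{\theta}\bigr)^{\mathsf T}\bigl(\boldsymbol{y} - \boldsymbol{X}_b\boldsymbol{\theta}\bigr)$$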

(a row vector multiplied by a column vector, so the result is a scalar)

The result is:
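Setting the gradient with respect to $\boldsymbol{\theta}$ to zero yields the normal equation, which is what fit_normal implements below:

$$\boldsymbol{\theta} = \bigl(\boldsymbol{X}_b^{\mathsf T}\boldsymbol{X}_b\bigr)^{-1}\boldsymbol{X}_b^{\mathsf T}\boldsymbol{y}$$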

Judging from this expression, the method has a fairly high time complexity, on the order of O(n³) because of the matrix inversion, but it allows us to skip normalizing the data.

We usually also adopt the following definitions:
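the first component of $\boldsymbol{\theta}$ is the intercept and the rest are the coefficients, mirroring interception and coef_ in the code:

$$\text{intercept} = \theta_0, \qquad \text{coefficients} = (\theta_1, \theta_2, \dots, \theta_n).$$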

06 Implementing the multivariate linear regression model
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets

boston = datasets.load_boston()

X = boston.data
y = boston.target

X = X[y < 50.0]
y = y[y < 50.0]

X.shape

Out[3]:

(490, 13)

from playML.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, seed=666)

from playML.LinearRegression import LinearRegression

reg = LinearRegression()
reg.fit_normal(X_train, y_train)

Out[5]:

LinearRegression()

reg.coef_  # view the coefficients
"""
Out[6]:
array([ -1.18919477e-01, 3.63991462e-02, -3.56494193e-02, 5.66737830e-02,
-1.16195486e+01, 3.42022185e+00, -2.31470282e-02, -1.19509560e+00, 2.59339091e-01,
-1.40112724e-02, -8.36521175e-01, 7.92283639e-03, -3.81966137e-01])
"""

reg.interception  # view the intercept

Out[7]:

34.161435496217081

reg.score(X_test, y_test)

Out[8]:

0.81298026026584913

07 Further discussion of the linear regression model
import numpy as np
from sklearn import datasets

boston = datasets.load_boston()

X = boston.data
y = boston.target

X = X[y < 50.0]
y = y[y < 50.0]

from sklearn.linear_model import LinearRegression

lin_reg = LinearRegression()
lin_reg.fit(X, y)

Out[2]:

LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)

lin_reg.coef_  # positive coefficients indicate positive correlation, negative ones negative correlation
"""
Out[3]:
array([ -1.05574295e-01, 3.52748549e-02, -4.35179251e-02, 4.55405227e-01, -1.24268073e+01, 3.75411229e+00, -2.36116881e-02, -1.21088069e+00, 2.50740082e-01, -1.37702943e-02, -8.38888137e-01, 7.93577159e-03, -3.50952134e-01])
"""

np.argsort(lin_reg.coef_)

Out[4]:

array([ 4, 7, 10, 12, 0, 2, 6, 9, 11, 1, 8, 3, 5])

boston.feature_names

Out[5]:

array(['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX',
       'PTRATIO', 'B', 'LSTAT'], dtype='<U7')

print(boston.DESCR)

Checking against the description printed earlier, the feature with the strongest positive correlation is RM, the number of rooms, and the strongest negative one is the nitric oxide concentration (NOX). This shows that linear regression results are interpretable.

Summary
Comparing linear regression with kNN

Linear regression: parametric learning; only solves regression problems; assumes a linear relationship in the data; highly interpretable.

kNN: non-parametric learning; solves both regression and classification problems; makes no assumptions about the data.

Appendix:

SimpleLinearRegression.py
import numpy as np
from .metrics import r2_score

# Vectorized operations are used to speed up the computation

class SimpleLinearRegression:

    def __init__(self):
        """Initialize the Simple Linear Regression model"""
        self.a_ = None
        self.b_ = None

    def fit(self, x_train, y_train):
        """Train the Simple Linear Regression model on the training set x_train, y_train"""
        # Only one-dimensional data can be handled
        assert x_train.ndim == 1, \
            "Simple Linear Regressor can only solve single feature training data."
        # There must be as many x values as y values
        assert len(x_train) == len(y_train), \
            "the size of x_train must be equal to the size of y_train"

        x_mean = np.mean(x_train)
        y_mean = np.mean(y_train)

        # Loop version, kept for reference:
        # for x_i, y_i in zip(x_train, y_train):
        #     num += (x_i - x_mean) * (y_i - y_mean)
        #     d += (x_i - x_mean) ** 2

        # Numerator, computed as a vector dot product
        num = (x_train - x_mean).dot(y_train - y_mean)
        # Denominator
        d = (x_train - x_mean).dot(x_train - x_mean)

        self.a_ = num / d
        self.b_ = y_mean - self.a_ * x_mean

        return self

    def predict(self, x_predict):
        """Given a dataset x_predict to be predicted, return the vector of predictions"""
        assert x_predict.ndim == 1, \
            "Simple Linear Regressor can only solve single feature training data."
        # fit must be called before predict
        assert self.a_ is not None and self.b_ is not None, \
            "must fit before predict!"

        return np.array([self._predict(x) for x in x_predict])

    def _predict(self, x_single):
        """Given a single value x_single, return its predicted value"""
        return self.a_ * x_single + self.b_

    def score(self, x_test, y_test):
        """Evaluate the model's accuracy on the test set x_test, y_test"""
        y_predict = self.predict(x_test)
        return r2_score(y_test, y_predict)

    def __repr__(self):
        return "SimpleLinearRegression()"

LinearRegression.py
import numpy as np
from .metrics import r2_score

class LinearRegression:

    def __init__(self):
        """Initialize the Linear Regression model"""
        self.coef_ = None           # coefficients (theta_1 ... theta_n)
        self.interception = None    # intercept (theta_0)
        self._theta = None          # private parameter vector

    def fit_normal(self, X_train, y_train):
        """Train the Linear Regression model on X_train, y_train using the normal equation"""
        # There must be one label per training sample
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"

        # X_b is X_train with a column of ones prepended
        X_b = np.hstack([np.ones((len(X_train), 1)), X_train])
        self._theta = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y_train)

        self.interception = self._theta[0]
        self.coef_ = self._theta[1:]

        return self

    def predict(self, X_predict):
        """Given a dataset X_predict to be predicted, return the vector of predictions"""
        assert self.interception is not None and self.coef_ is not None, \
            "must fit before predict!"
        # Each feature corresponds to one coefficient, so the counts must match
        assert X_predict.shape[1] == len(self.coef_), \
            "the feature number of X_predict must be equal to X_train"

        X_b = np.hstack([np.ones((len(X_predict), 1)), X_predict])
        return X_b.dot(self._theta)

    def score(self, X_test, y_test):
        """Evaluate the model's accuracy on the test set X_test and y_test"""
        y_predict = self.predict(X_test)
        return r2_score(y_test, y_predict)

    def __repr__(self):
        return "LinearRegression()"

metrics.py
import numpy as np
from math import sqrt

def accuracy_score(y_true, y_predict):
    """Compute the classification accuracy between y_true and y_predict"""
    # The two arrays must have the same length to be compared element-wise
    assert y_true.shape[0] == y_predict.shape[0], \
        "the size of y_true must be equal to the size of y_predict"

    return sum(y_predict == y_true) / len(y_true)


def mean_squared_error(y_true, y_predict):
    """Compute the MSE between y_true and y_predict"""
    # The two arrays must have the same length
    assert len(y_true) == len(y_predict), \
        "the size of y_true must be equal to the size of y_predict"

    return np.sum((y_predict - y_true) ** 2) / len(y_true)


def root_mean_squared_error(y_true, y_predict):
    """Compute the RMSE between y_true and y_predict"""
    return sqrt(mean_squared_error(y_true, y_predict))


def mean_absolute_error(y_true, y_predict):
    """Compute the MAE between y_true and y_predict"""
    # The two arrays must have the same length
    assert len(y_true) == len(y_predict), \
        "the size of y_true must be equal to the size of y_predict"

    return np.sum(np.absolute(y_true - y_predict)) / len(y_true)


def r2_score(y_true, y_predict):
    """Compute the R squared between y_true and y_predict"""
    return 1 - mean_squared_error(y_true, y_predict) / np.var(y_true)
