Video: Andrew Ng, Machine Learning
Slides: local copy
Model
Linear Regression
Gradient Descent
Repeat until convergence:

$$w_j := w_j - \alpha \frac{\partial J(w,b)}{\partial w_j} \quad \text{for } j = 0 \dots n-1 \tag{1}$$

$$b := b - \alpha \frac{\partial J(w,b)}{\partial b} \tag{2}$$

where the cost function and the prediction function are

$$J(w,b) = \frac{1}{2m} \sum_{i=0}^{m-1} \left( f_{w,b}(x^{(i)}) - y^{(i)} \right)^2 \tag{3}$$

$$f_{w,b}(x^{(i)}) = w \cdot x^{(i)} + b \tag{4}$$

and the partial derivatives are

$$\frac{\partial J(w,b)}{\partial w_j} = \frac{1}{m} \sum_{i=0}^{m-1} \left( f_{w,b}(x^{(i)}) - y^{(i)} \right) x_j^{(i)} \tag{5}$$

$$\frac{\partial J(w,b)}{\partial b} = \frac{1}{m} \sum_{i=0}^{m-1} \left( f_{w,b}(x^{(i)}) - y^{(i)} \right) \tag{6}$$
Here $w_j$ are the feature coefficients (weights), $b$ is the constant term (bias), $\alpha$ is the learning rate, $J(w,b)$ is the cost function, and $f_{w,b}(x)$ is the prediction function.
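The update rule above can be sketched in NumPy as batch gradient descent. This is a minimal illustration, not the course's reference implementation; the toy data and function names are my own:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """Batch gradient descent for linear regression f(x) = w·x + b."""
    m, n = X.shape
    w = np.zeros(n)
    b = 0.0
    for _ in range(num_iters):
        err = X @ w + b - y      # f_{w,b}(x^(i)) - y^(i) for all i at once
        dj_dw = X.T @ err / m    # ∂J/∂w_j, equation (5)
        dj_db = err.mean()       # ∂J/∂b, equation (6)
        w -= alpha * dj_dw       # simultaneous update, equations (1)-(2)
        b -= alpha * dj_db
    return w, b

# Toy data generated from y = 2x + 1, so w ≈ [2], b ≈ 1 after convergence
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])
w, b = gradient_descent(X, y, alpha=0.1, num_iters=2000)
```

Note that both gradients are computed from the same `err` vector before either parameter is updated, which is what "simultaneous update" means in the lectures.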
Logistic Regression
Libraries
NumPy
A Python library for numerical array computation.
```python
import numpy as np
np.set_printoptions(precision=2)

# 2-D array: 3 samples × 4 features
x = np.array([[2104, 5, 1, 45],
              [1416, 3, 2, 40],
              [852, 2, 1, 35]])
print(x.shape)  # (3, 4)

# 2-D array with a single row
x = np.array([[2104, 1416, 852]])
print(x.shape)  # (1, 3)

# 1-D array (vector)
x = np.array([2104, 1416, 852])
print(x.shape)  # (3,)

# x.reshape(a, b) returns the same data viewed with shape (a, b)
```
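Array shapes matter when computing the prediction $f_{w,b}(x) = w \cdot x + b$ for a whole dataset at once. A small sketch; the weight values here are arbitrary illustrative numbers, not fitted parameters:

```python
import numpy as np

# 3 samples × 4 features, as in the example above
X = np.array([[2104, 5, 1, 45],
              [1416, 3, 2, 40],
              [852, 2, 1, 35]])
w = np.array([0.39, 18.75, -53.36, -26.42])  # shape (4,), one weight per feature
b = 785.18

# (3, 4) matrix @ (4,) vector → (3,) vector: one prediction per sample,
# and the scalar b broadcasts across all three entries.
f = X @ w + b
print(f.shape)  # (3,)
```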
matplotlib
Python's plotting library.
Scikit-Learn
A Python machine learning library.
```python
from sklearn.linear_model import LinearRegression, SGDRegressor
from sklearn.preprocessing import StandardScaler

# Z-score normalize the features
scaler = StandardScaler()
X_norm = scaler.fit_transform(X_train)

# Fit linear regression by stochastic gradient descent
sgdr = SGDRegressor(max_iter=1000)
sgdr.fit(X_norm, y_train)

# Learned parameters (in the normalized feature space)
b_norm = sgdr.intercept_
w_norm = sgdr.coef_

# Predictions on the (normalized) training set
y_pred_sgd = sgdr.predict(X_norm)
```
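One pitfall with this pipeline: at prediction time, new samples must be scaled with the scaler fitted on the training data (`transform`, never `fit_transform` again). A self-contained sketch; the training arrays and the new sample are made-up values, since `X_train`/`y_train` are not defined in these notes:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical housing data: size, bedrooms, floors, age → price
X_train = np.array([[2104, 5, 1, 45],
                    [1416, 3, 2, 40],
                    [852, 2, 1, 35]], dtype=float)
y_train = np.array([460.0, 232.0, 178.0])

scaler = StandardScaler()
X_norm = scaler.fit_transform(X_train)   # fit the scaler on training data only

sgdr = SGDRegressor(max_iter=1000)
sgdr.fit(X_norm, y_train)

# A new sample must go through the SAME fitted scaler
x_new = np.array([[1200, 3, 1, 40]], dtype=float)
x_new_norm = scaler.transform(x_new)     # transform, not fit_transform
y_new = sgdr.predict(x_new_norm)
print(y_new.shape)  # (1,)
```

Refitting the scaler on new data would shift the feature space away from the one the model was trained in, silently corrupting the predictions.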