Solving Logistic Regression with Gradient Descent
from sklearn import preprocessing as pp  # assumed import; pp.scale standardizes the feature columns
scaled_data = orig_data.copy()
scaled_data[:, 1:3] = pp.scale(orig_data[:, 1:3])  # z-score the two feature columns (column 0 is the bias term)
# Batch gradient descent on the scaled data, learning rate 0.001, stop after 5000 iterations
runExpe(scaled_data, theta, n, STOP_ITER, thresh=5000, alpha=0.001)
***Scaled data - learning rate: 0.001 - Gradient descent - Stop: 5000 iterations
Theta: [[ 0.32653044] [ 0.84802277] [ 0.78686591]] - Iter: 5000 - Last cost: 0.38 - Duration: 1.02s
array([[ 0.32653044],
       [ 0.84802277],
       [ 0.78686591]])
[Figure: output_32_2.png (https://www.easck.com/d/file/200628/20200628143637710.jpg)]
On the raw data the cost only got down to about 0.61, while on the scaled data it now drops below 0.4, so preprocessing the data is very important.
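For intuition, pp.scale performs z-score standardization: each feature column is shifted to zero mean and rescaled to unit variance. A minimal NumPy equivalent (a sketch, not the library's internals) looks like this:

import numpy as np

# z-score standardization of the two feature columns
features = orig_data[:, 1:3]
standardized = (features - features.mean(axis=0)) / features.std(axis=0)  # matches pp.scale's default behavior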
# Switch the stopping rule: stop when the gradient norm falls below 0.02
runExpe(scaled_data, theta, n, STOP_GRAD, thresh=0.02, alpha=0.001)
***Scaled data - learning rate: 0.001 - Gradient descent - Stop: gradient norm < 0.02
Theta: [[ 1.10868347] [ 2.57412148] [ 2.41283358]] - Iter: 58762 - Last cost: 0.22 - Duration: 12.75s
array([[ 1.10868347],
       [ 2.57412148],
       [ 2.41283358]])
[Figure: output_34_2.png (https://www.easck.com/d/file/200628/20200628143637711.jpg)]
More iterations drive the loss down further: here 58,762 iterations bring the cost to 0.22.
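The STOP_GRAD rule used above is defined earlier in the article; as a rough sketch (assuming an L2 norm on the gradient, which may differ from the article's exact stopCriterion), the check is simply:

import numpy as np

def should_stop_on_gradient(grad, thresh):
    # stop iterating once the gradient's L2 norm falls below the threshold
    return np.linalg.norm(grad) < thresh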
# Mini-batch descent with batch size 16, stopping when the gradient norm falls below 0.002*2 = 0.004
runExpe(scaled_data, theta, 16, STOP_GRAD, thresh=0.002*2, alpha=0.001)
***Scaled data - learning rate: 0.001 - Mini-batch (16) descent - Stop: gradient norm < 0.004
Theta: [[ 1.07757538] [ 2.51112557] [ 2.34671716]] - Iter: 54137 - Last cost: 0.22 - Duration: 5.31s
array([[ 1.07757538],
       [ 2.51112557],
       [ 2.34671716]])
[Figure: output_36_2.png (https://www.easck.com/d/file/200628/20200628143637712.jpg)]
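Mini-batch descent updates theta from a 16-sample slice at a time instead of the full dataset, which is why it reaches a similar cost in less time. A hedged sketch of one pass (with a hypothetical gradient_fn helper, not the article's runExpe internals):

import numpy as np

def minibatch_pass(data, theta, batch_size, alpha, gradient_fn):
    np.random.shuffle(data)                      # reshuffle so batches differ between passes
    for start in range(0, data.shape[0], batch_size):
        batch = data[start:start + batch_size]   # a slice of (at most) batch_size rows
        X, y = batch[:, :-1], batch[:, -1]
        theta = theta - alpha * gradient_fn(X, y, theta)  # one parameter update per mini-batch
    return theta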
# Set the classification threshold: predict class 1 when the model output is >= 0.5
def predict(X, theta):
    return [1 if x >= 0.5 else 0 for x in model(X, theta)]
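predict calls model(X, theta), which was defined earlier in the article; assuming the standard logistic regression setup there, it returns the sigmoid of the linear score X·theta, roughly:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def model(X, theta):
    # probability of the positive class for each sample (theta as a 1 x n row vector)
    return sigmoid(np.dot(X, theta.T))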
Accuracy
scaled_X = scaled_data[:, :3]
y = scaled_data[:, 3]
predictions = predict(scaled_X, theta)
correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in zip(predictions, y)]
# percentage of correct predictions (the original used "% len(correct)", a modulo, which only works by accident when there are exactly 100 samples)
accuracy = sum(map(int, correct)) * 100 // len(correct)
print('accuracy = {0}%'.format(accuracy))
accuracy = 60%
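As an optional cross-check (not in the original article), sklearn.metrics.accuracy_score gives the same ratio directly:

from sklearn.metrics import accuracy_score

print(accuracy_score(y, predictions))  # fraction of correct predictions, e.g. 0.6 for 60%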