Affine Layer
During forward propagation, the bias is added to every sample in the batch (the 1st, the 2nd, and so on). Therefore, during backward propagation, the backward values coming from each sample must be summed together to form the elements of the bias gradient.
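For intuition, here is a minimal NumPy illustration (with a hypothetical 2-sample batch and 3 output units) of why the bias gradient is the sum of the upstream gradient over the batch axis:

import numpy as np

# Hypothetical upstream gradient dout for a batch of 2 samples, 3 output units
dout = np.array([[1., 2., 3.],
                 [4., 5., 6.]])

# The same bias was added to both samples in the forward pass, so its gradient
# is the sum of dout over the batch axis
db = np.sum(dout, axis=0)
print(db)  # [5. 7. 9.]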
import numpy as np

class Affine:
    def __init__(self, W, b):
        self.W = W
        self.b = b
        self.x = None
        self.dW = None
        self.db = None

    def forward(self, x):
        self.x = x
        out = np.dot(x, self.W) + self.b
        return out

    def backward(self, dout):
        dx = np.dot(dout, self.W.T)        # gradient w.r.t. the input
        self.dW = np.dot(self.x.T, dout)   # gradient w.r.t. the weights
        self.db = np.sum(dout, axis=0)     # bias gradient: sum over the batch axis
        return dx
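A quick usage sketch (with hypothetical sizes) to confirm the shapes produced by the Affine layer above:

# Hypothetical sizes: batch of 2 samples, 4 inputs, 3 outputs
x = np.random.randn(2, 4)
W = np.random.randn(4, 3)
b = np.zeros(3)

layer = Affine(W, b)
out = layer.forward(x)                   # shape (2, 3)
dx = layer.backward(np.ones_like(out))   # pretend upstream gradient of ones
print(out.shape, dx.shape, layer.dW.shape, layer.db.shape)
# (2, 3) (2, 4) (4, 3) (3,)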
Softmax Layer
The Softmax layer normalizes its input (so that the output values sum to 1) before producing its output. Cross-entropy error is used as the loss function paired with softmax.
The backward pass of this Softmax-with-Loss combination yields (y1 - t1, y2 - t2, y3 - t3), where y is the Softmax output and t is the teacher (supervision) data.
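The TwoLayerNet implementation below uses SoftmaxWithLoss from the book's common.layers; as a reference, here is a minimal self-contained sketch of such a layer, assuming batched 2-D input and one-hot teacher labels (not necessarily identical to the book's version):

import numpy as np

class SoftmaxWithLoss:
    def __init__(self):
        self.loss = None  # loss value
        self.y = None     # softmax output
        self.t = None     # teacher data (one-hot)

    def forward(self, x, t):
        self.t = t
        # Numerically stable softmax, computed row-wise over the batch
        x = x - np.max(x, axis=1, keepdims=True)
        exp_x = np.exp(x)
        self.y = exp_x / np.sum(exp_x, axis=1, keepdims=True)
        # Cross-entropy error averaged over the batch
        batch_size = x.shape[0]
        self.loss = -np.sum(self.t * np.log(self.y + 1e-7)) / batch_size
        return self.loss

    def backward(self, dout=1):
        # The "clean" result noted above: (y - t), divided by the batch size
        batch_size = self.t.shape[0]
        dx = (self.y - self.t) / batch_size
        return dx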
The goal of neural network learning is to adjust the weight parameters so that the network's output approaches the correct labels.
Neural network learning:
A neural network has suitable weights and biases; the process of adjusting them to fit the training data is called learning. It proceeds in the following steps (a training-loop sketch that puts these steps together follows the TwoLayerNet implementation below):
1. Randomly select a portion of the training data (a mini-batch).
2. Compute the gradient of the loss function with respect to each weight parameter.
3. Update the weight parameters by a small amount in the direction of the gradient.
4. Repeat steps 1, 2, and 3.
Neural network implementation using error backpropagation:
import sys, os
sys.path.append(os.pardir)
import numpy as np
from common.layers import *
from common.gradient import numerical_gradient
from collections import OrderedDict

class TwoLayerNet:
    def __init__(self, input_size, hidden_size, output_size, weight_init_std=0.01):
        # Initialize weights with small Gaussian noise and biases with zeros
        self.params = {}
        self.params['W1'] = weight_init_std * np.random.randn(input_size, hidden_size)
        self.params['b1'] = np.zeros(hidden_size)
        self.params['W2'] = weight_init_std * np.random.randn(hidden_size, output_size)
        self.params['b2'] = np.zeros(output_size)

        # Build the layers; OrderedDict preserves the forward (and reversed backward) order
        self.layers = OrderedDict()
        self.layers['Affine1'] = Affine(self.params['W1'], self.params['b1'])
        self.layers['Relu1'] = Relu()
        self.layers['Affine2'] = Affine(self.params['W2'], self.params['b2'])
        self.lastLayer = SoftmaxWithLoss()

    def predict(self, x):
        for layer in self.layers.values():
            x = layer.forward(x)
        return x

    def loss(self, x, t):
        y = self.predict(x)
        return self.lastLayer.forward(y, t)

    def accuracy(self, x, t):
        y = self.predict(x)
        y = np.argmax(y, axis=1)
        if t.ndim != 1:
            t = np.argmax(t, axis=1)
        accuracy = np.sum(y == t) / float(x.shape[0])
        return accuracy

    def numerical_gradient(self, x, t):
        # Gradients by numerical differentiation (calls the imported function)
        loss_W = lambda W: self.loss(x, t)
        grads = {}
        grads['W1'] = numerical_gradient(loss_W, self.params['W1'])
        grads['b1'] = numerical_gradient(loss_W, self.params['b1'])
        grads['W2'] = numerical_gradient(loss_W, self.params['W2'])
        grads['b2'] = numerical_gradient(loss_W, self.params['b2'])
        return grads

    def gradient(self, x, t):
        # Forward pass (also caches the values each layer needs for backward)
        self.loss(x, t)

        # Backward pass through the layers in reverse order
        dout = 1
        dout = self.lastLayer.backward(dout)
        layers = list(self.layers.values())
        layers.reverse()
        for layer in layers:
            dout = layer.backward(dout)

        # Collect the gradients stored by the Affine layers
        grads = {}
        grads['W1'], grads['b1'] = self.layers['Affine1'].dW, self.layers['Affine1'].db
        grads['W2'], grads['b2'] = self.layers['Affine2'].dW, self.layers['Affine2'].db
        return grads
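A minimal training-loop sketch that puts the four learning steps together with the TwoLayerNet above. It assumes the book's MNIST loader dataset.mnist.load_mnist is importable; the hyperparameters are illustrative only:

from dataset.mnist import load_mnist

(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, one_hot_label=True)

network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)

iters_num = 10000
train_size = x_train.shape[0]
batch_size = 100
learning_rate = 0.1

for i in range(iters_num):
    # Step 1: randomly sample a mini-batch
    batch_mask = np.random.choice(train_size, batch_size)
    x_batch = x_train[batch_mask]
    t_batch = t_train[batch_mask]

    # Step 2: compute gradients via error backpropagation
    grad = network.gradient(x_batch, t_batch)

    # Step 3: in-place SGD update (the Affine layers hold references to the
    # same arrays, so they see the updated parameters)
    for key in ('W1', 'b1', 'W2', 'b2'):
        network.params[key] -= learning_rate * grad[key]
    # Step 4: the loop repeats steps 1-3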
Gradient check for error backpropagation. There are two ways to compute gradients: numerical differentiation and error backpropagation. Numerical differentiation is simple to implement and hard to get wrong, whereas the backpropagation implementation is more complex and error-prone. For that reason, the results of the two methods are routinely compared to confirm that the gradients computed by backpropagation are consistent; this is called gradient check.
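For the comparison loop below to run, both kinds of gradients must first be computed on a small batch. A setup sketch, reusing the MNIST arrays loaded in the training sketch above:

network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)
x_batch = x_train[:3]
t_batch = t_train[:3]

grad_numerical = network.numerical_gradient(x_batch, t_batch)  # slow but simple
grad_backprop = network.gradient(x_batch, t_batch)             # backprop result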
for key in grad_numerical.keys():
    diff = np.average(np.abs(grad_backprop[key] - grad_numerical[key]))
    print(key + ":" + str(diff))