
MATLAB ReLU: the ReLU Function in MATLAB


This part of the code starts from the delta of the output node, calculates the error of each hidden node, and uses it for the next error calculation.

The process repeats the same steps through delta3, delta2, and delta1.

e = d - y;                  % error at the output node
delta = e;                  % output delta

e3 = W4'*delta;             % error propagated back to the third hidden layer
delta3 = (v3 > 0).*e3;      % multiply by the ReLU derivative

e2 = W3'*delta3;            % error propagated back to the second hidden layer
delta2 = (v2 > 0).*e2;

e1 = W2'*delta2;            % error propagated back to the first hidden layer
delta1 = (v1 > 0).*e1;

Something worth noticing in this code is the derivative of the ReLU function.

For example, in the calculation of the delta of the third hidden layer, delta3, the derivative of the ReLU function is coded as follows:

(v3 > 0)

Let's see how this line becomes the derivative of the ReLU function.

If the expression in the parentheses is true, MATLAB returns 1; if it is false, MATLAB returns 0.

Therefore, this line yields 1 wherever an element of v3 is greater than 0 and 0 otherwise.
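
For instance, a quick check in the command window (the values of v3 here are made up for illustration) shows the element-wise result:

v3 = [-1.2; 0; 2.5];
(v3 > 0)          % displays ans = [0; 0; 1], a logical column vector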

This is exactly the result produced by the definition of the derivative of the ReLU function, shown here:

derivative of ReLU(x) = 1, if x > 0
                      = 0, if x <= 0
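
As a sanity check, a minimal sketch (the variable names v, d_explicit, and d_oneline are illustrative, not from the book) confirms that the logical comparison and the piecewise definition give the same values:

v = [-3; -0.5; 0; 1; 4];           % hypothetical pre-activation values

d_explicit = zeros(size(v));        % derivative from the piecewise definition
for i = 1:numel(v)
    if v(i) > 0
        d_explicit(i) = 1;
    end
end

d_oneline = double(v > 0);          % derivative as coded in the backpropagation

isequal(d_explicit, d_oneline)      % returns logical 1 (true); both give [0;0;0;1;1]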

The following listing shows the TestDeepReLU.m file, which tests the DeepReLU function.

This program calls the DeepReLU function and trains the network 10,000 times.

It then enters the training data into the trained network and displays the output.

We verify the adequacy of the training by comparing the network's output with the correct output.

clear all

X = zeros(5, 5, 5);

X(:, :, 1) = [ 0 1 1 0 0;
               0 0 1 0 0;
               0 0 1 0 0;
               0 0 1 0 0;
               0 1 1 1 0 ];

X(:, :, 2) = [ 1 1 1 1 0;
               0 0 0 0 1;
               0 1 1 1 0;
               1 0 0 0 0;
               1 1 1 1 1 ];

X(:, :, 3) = [ 1 1 1 1 0;
               0 0 0 0 1;
               0 1 1 1 0;
               0 0 0 0 1;
               1 1 1 1 0 ];

X(:, :, 4) = [ 0 0 0 1 0;
               0 0 1 1 0;
               0 1 0 1 0;
               1 1 1 1 1;
               0 0 0 1 0 ];

X(:, :, 5) = [ 1 1 1 1 1;
               1 0 0 0 0;
               1 1 1 1 0;
               0 0 0 0 1;
               1 1 1 1 0 ];

D = [ 1 0 0 0 0;
      0 1 0 0 0;
      0 0 1 0 0;
      0 0 0 1 0;
      0 0 0 0 1 ];

W1 = 2*rand(20, 25) - 1;
W2 = 2*rand(20, 20) - 1;
W3 = 2*rand(20, 20) - 1;
W4 = 2*rand( 5, 20) - 1;

for epoch = 1:10000               % train
  [W1, W2, W3, W4] = DeepReLU(W1, W2, W3, W4, X, D);
end

N = 5;                            % inference
for k = 1:N
  x  = reshape(X(:, :, k), 25, 1);
  v1 = W1*x;
  y1 = ReLU(v1);
  v2 = W2*y1;
  y2 = ReLU(v2);
  v3 = W3*y2;
  y3 = ReLU(v3);
  v  = W4*y3;
  y  = Softmax(v)
end
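
The listing calls three helper functions, ReLU, Softmax, and DeepReLU, which are defined in separate files (ReLU.m, Softmax.m, and DeepReLU.m) and not reproduced here. The following is a minimal sketch of what they could look like, assembled from the backpropagation snippet shown earlier; the learning rate alpha = 0.01 and the plain SGD weight update are assumptions for illustration, not necessarily the book's exact implementation:

function y = ReLU(x)
  % Rectified linear unit, applied element-wise
  y = max(0, x);
end

function y = Softmax(x)
  % Softmax over a column vector (shifted by max(x) for numerical stability)
  ex = exp(x - max(x));
  y  = ex / sum(ex);
end

function [W1, W2, W3, W4] = DeepReLU(W1, W2, W3, W4, X, D)
  % One pass of stochastic gradient descent over the five training images
  alpha = 0.01;                       % assumed learning rate (illustrative)
  for k = 1:5
    x = reshape(X(:, :, k), 25, 1);   % flatten the 5x5 image
    d = D(k, :)';                     % one-hot target for image k

    % forward pass
    v1 = W1*x;  y1 = ReLU(v1);
    v2 = W2*y1; y2 = ReLU(v2);
    v3 = W3*y2; y3 = ReLU(v3);
    v  = W4*y3; y  = Softmax(v);

    % backpropagation, exactly as in the snippet above
    e  = d - y;          delta  = e;
    e3 = W4'*delta;      delta3 = (v3 > 0).*e3;
    e2 = W3'*delta3;     delta2 = (v2 > 0).*e2;
    e1 = W2'*delta2;     delta1 = (v1 > 0).*e1;

    % weight updates (simple SGD form, assumed)
    W4 = W4 + alpha*delta *y3';
    W3 = W3 + alpha*delta3*y2';
    W2 = W2 + alpha*delta2*y1';
    W1 = W1 + alpha*delta1*x';
  end
end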

As this code is almost identical to the previous test programs, a detailed explanation is omitted.

This code occasionally fails to train properly and yields wrong outputs, something that never happened when we used the sigmoid activation function.

The sensitivity of the ReLU function to the initial weight values appears to cause this anomaly.
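
If a particular run needs to be reproduced while investigating this sensitivity (a suggestion beyond the book's listing), the random seed can be fixed before the weights are initialized:

rng(3);                      % any fixed seed makes rand, and therefore W1-W4, repeatable
W1 = 2*rand(20, 25) - 1;     % same initialization as in TestDeepReLU.m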

—— Translated from "MATLAB Deep Learning" by Phil Kim.
