
Why Can L2 Regularization Reduce Overfitting? The Definition of the L2 Norm


Regularization means appending a regularization term of one's choosing to the loss function. Take the mean squared error loss of a linear model as an example:

$$\mathrm{loss} = \frac{1}{n}\sum_{i=1}^{n}\left(w_1 x_1^{(i)} + w_2 x_2^{(i)} + b - \hat{y}^{(i)}\right)^2 + \lambda\lVert w\rVert^2$$

$$\lVert w\rVert^2 = \frac{1}{2}\left(w_1^2 + w_2^2\right)$$

(With a slight abuse of notation, the factor $\frac{1}{2}$ is folded into the norm term so that the penalty's gradient with respect to $w_1$ is simply $\lambda w_1$.)

Here $\lambda\lVert w\rVert^2$ is the L2-norm regularization term. When it is added to the loss function, gradient descent tends to pick models with small parameters, suppressing the overly flexible functions that complex models can represent.
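To make this concrete, here is a minimal sketch (the weight values and $\lambda$ are made up for illustration) comparing the penalty paid by small versus large weights:

```python
import torch

# Two hypothetical weight vectors, assumed to fit the training data equally well
w_small = torch.tensor([0.1, 0.2])
w_large = torch.tensor([3.0, -4.0])

lambd = 1.0

def l2_penalty(w):
    # lambd * ||w||^2, with the 1/2 folded in as in the definition above
    return lambd * 0.5 * (w ** 2).sum()

print(l2_penalty(w_small))  # tensor(0.0250)
print(l2_penalty(w_large))  # tensor(12.5000): the large weights pay a far bigger penalty
```

With the penalty added to the loss, gradient descent prefers the model with small weights whenever both fit the data comparably well.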

L2-norm regularization is in effect weight decay. Why?

Differentiate the loss (with the L2 penalty included) with respect to each parameter:

$$\frac{\partial\,\mathrm{loss}}{\partial w_1} = \frac{2}{n}\sum_{i=1}^{n}\left(w_1 x_1^{(i)} + w_2 x_2^{(i)} + b - \hat{y}^{(i)}\right) x_1^{(i)} + \lambda w_1$$

$$w_1^* = (1 - \lambda)\,w_1 - \mathrm{grad}$$

At each update, the parameter is first multiplied by a factor smaller than 1, and only then is the original gradient subtracted; hence the name "weight decay". (Strictly speaking, with learning rate $\eta$ the update is $w_1^* = (1 - \eta\lambda)w_1 - \eta\cdot\mathrm{grad}$; here $\eta$ has been absorbed into $\lambda$ and $\mathrm{grad}$.)
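This update rule is what the weight_decay argument of PyTorch optimizers implements. A minimal sketch verifying it with torch.optim.SGD (the lr, lambd, and dummy gradient values are assumptions for illustration):

```python
import torch

lr, lambd = 0.1, 0.5             # assumed values for illustration
w = torch.tensor([2.0], requires_grad=True)
w.grad = torch.tensor([1.0])     # pretend this gradient came from the data loss

# SGD with weight_decay adds lambd*w to the gradient before the update:
# w <- w - lr*(grad + lambd*w) = (1 - lr*lambd)*w - lr*grad
opt = torch.optim.SGD([w], lr=lr, weight_decay=lambd)
opt.step()

print(w)  # tensor([1.8000], requires_grad=True) = (1 - 0.1*0.5)*2.0 - 0.1*1.0
```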

Below we run a high-dimensional linear experiment.
Suppose the true equation is

$$y = 0.01\sum_{i=1}^{200} x_i + 0.5 + \epsilon, \qquad \epsilon \sim \mathcal{N}(0,\ 0.01^2)$$

with 200 features, and 10 training samples and 10 test samples.

```python
import torch
import matplotlib.pyplot as plt

# Simulate the dataset
num_train, num_test = 10, 10
num_features = 200
true_w = torch.ones((num_features, 1), dtype=torch.float32) * 0.01
true_b = torch.tensor(0.5)
samples = torch.normal(0, 1, (num_train + num_test, num_features))
noise = torch.normal(0, 0.01, (num_train + num_test, 1))
labels = samples.matmul(true_w) + true_b + noise
train_samples, train_labels = samples[:num_train], labels[:num_train]
test_samples, test_labels = samples[num_train:], labels[num_train:]

# Loss function with an L2 regularization term
def loss_function(predict, label, w, lambd):
    loss = (predict - label) ** 2
    loss = loss.mean() + lambd * (w ** 2).mean()
    return loss

# Plot train/test loss curves on a log scale
def semilogy(x_val, y_val, x_label, y_label, x2_val, y2_val, legend):
    plt.figure(figsize=(3, 3))
    plt.xlabel(x_label)
    plt.ylabel(y_label)
    plt.semilogy(x_val, y_val)
    if x2_val and y2_val:
        plt.semilogy(x2_val, y2_val)
    plt.legend(legend)
    plt.show()

# Fit the linear model and plot the loss curves
def fit_and_plot(train_samples, train_labels, test_samples, test_labels, num_epoch, lambd):
    w = torch.normal(0, 1, (train_samples.shape[-1], 1), requires_grad=True)
    b = torch.tensor(0., requires_grad=True)
    optimizer = torch.optim.Adam([w, b], lr=0.05)
    train_loss = []
    test_loss = []
    for epoch in range(num_epoch):
        predict = train_samples.matmul(w) + b
        epoch_train_loss = loss_function(predict, train_labels, w, lambd)
        optimizer.zero_grad()
        epoch_train_loss.backward()
        optimizer.step()
        with torch.no_grad():  # no gradients needed for evaluation
            test_predict = test_samples.matmul(w) + b  # fixed typo: was test_sapmles
            epoch_test_loss = loss_function(test_predict, test_labels, w, lambd)
        train_loss.append(epoch_train_loss.item())
        test_loss.append(epoch_test_loss.item())
    semilogy(range(1, num_epoch + 1), train_loss, 'epoch', 'loss',
             range(1, num_epoch + 1), test_loss, ['train', 'test'])
```
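To reproduce the comparison, one might call fit_and_plot twice, once without and once with the penalty; num_epoch=100 and lambd=5 here are assumed values, not taken from the original:

```python
# Without regularization: train loss drops, but test loss stays high (overfitting)
fit_and_plot(train_samples, train_labels, test_samples, test_labels, num_epoch=100, lambd=0)

# With an L2 penalty: the weights stay small and the test loss improves
fit_and_plot(train_samples, train_labels, test_samples, test_labels, num_epoch=100, lambd=5)
```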


We can see that with the regularization term added, the model's loss on the test set indeed comes down.
