
Learning Computer Fundamentals: Basic Computer Knowledge

Date: 2023-05-06 10:59:38  Views: 181879  Author: 327

Reposted from: https://blog.csdn.net/sinat_36618660/article/details/99650804

The learning rate is one of the most important hyperparameters in neural network training, and many techniques exist for scheduling it. Warmup is one of them.

(1) What is Warmup?

Warmup is a learning-rate warmup method mentioned in the ResNet paper: at the start of training, a small learning rate is used for a few epochs or steps (e.g. 4 epochs, or 10,000 steps), after which training switches to the preset learning rate.

(2) Why use Warmup?

Because the model's weights are randomly initialized at the start of training, choosing a large learning rate at that point can make training unstable (the loss oscillates). Warmup keeps the learning rate small for the first few epochs or steps so that the model gradually stabilizes; once it is relatively stable, training continues with the preset learning rate, which speeds up convergence and improves the final result.

Example: when training a 110-layer ResNet on CIFAR-10, the ResNet paper first trains with a learning rate of 0.01 until the training error drops below 80%, and then continues training with a learning rate of 0.1.
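As a minimal sketch of this constant-warmup policy, the rule from the ResNet example can be written as a tiny helper (the function name `constant_warmup_lr` is illustrative, not from the paper; the rates and the 80% threshold follow the CIFAR-10 setting above):

```python
def constant_warmup_lr(train_error, warmup_lr=0.01, base_lr=0.1):
    """Return the ResNet-style constant-warmup learning rate:
    use the small warmup_lr while training error is still at or
    above 80%, then switch to the full base_lr."""
    return warmup_lr if train_error >= 0.80 else base_lr

# Early in training the error is high, so the small warmup rate is used.
print(constant_warmup_lr(0.95))  # 0.01
# Once the error drops below 80%, switch to the preset rate.
print(constant_warmup_lr(0.60))  # 0.1
```

The schedule is a step function of training error rather than of step count, which is exactly what makes the later "gradual" variant below an improvement.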

(3) Improvements to Warmup

The warmup described in (2) is constant warmup. Its drawback is that the training error may spike at the moment the learning rate jumps from the small value to the larger one. To address this, Facebook proposed gradual warmup in 2018: starting from the initial small learning rate, the rate is increased a little at every step until it reaches the preset larger learning rate, after which training continues at that preset rate.

1. Simulation code implementing gradual warmup:

```python
"""
Implements gradual warmup: if train_steps < warmup_steps, the
learning rate will be `train_steps / warmup_steps * init_lr`.
Args:
    warmup_steps: warmup step threshold; once train_steps exceeds it,
        the preset learning rate is used
    train_steps: current training step
    init_lr: preset learning rate
"""
import numpy as np

warmup_steps = 2500
init_lr = 0.1
# Simulate 15000 training steps.
max_steps = 15000
for train_steps in range(max_steps):
    if warmup_steps and train_steps < warmup_steps:
        warmup_percent_done = train_steps / warmup_steps
        warmup_learning_rate = init_lr * warmup_percent_done  # gradual warmup lr
        learning_rate = warmup_learning_rate
    else:
        # learning_rate = np.sin(learning_rate)  # sin decay after warmup
        learning_rate = learning_rate ** 1.0001  # approximate exponential decay after warmup
    if (train_steps + 1) % 100 == 0:
        print("train_steps:%.3f--warmup_steps:%.3f--learning_rate:%.3f" % (
            train_steps + 1, warmup_steps, learning_rate))
```

2. The curve of the warmup learning rate, and of the learning-rate decay after warmup, produced by the code above: [figure not preserved in this repost]

(4) Summary

With warmup, training starts at a small initial learning rate that is increased a little at each step until it reaches the preset larger learning rate, after which training continues at that rate. This speeds up convergence and works well in practice.

Example from maskrcnn_benchmark:

```python
from bisect import bisect_right

import torch

# Separating MultiStepLR with WarmupLR,
# but the current LR scheduler design doesn't allow it.
# Reference: https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/maskrcnn_benchmark/solver/lr_scheduler.py


class WarmupMultiStepLR(torch.optim.lr_scheduler._LRScheduler):
    def __init__(self, optimizer, milestones, gamma=0.1, warmup_factor=1.0 / 3,
                 warmup_iters=500, warmup_method="linear", last_epoch=-1):
        if not list(milestones) == sorted(milestones):
            raise ValueError(
                "Milestones should be a list of increasing integers. Got {}".format(milestones))
        if warmup_method not in ("constant", "linear"):
            raise ValueError(
                "Only 'constant' or 'linear' warmup_method accepted, got {}".format(warmup_method))
        self.milestones = milestones
        self.gamma = gamma
        self.warmup_factor = warmup_factor
        self.warmup_iters = warmup_iters
        self.warmup_method = warmup_method
        # Attributes must be set before super().__init__, which calls get_lr().
        super(WarmupMultiStepLR, self).__init__(optimizer, last_epoch)

    def get_lr(self):
        warmup_factor = 1
        if self.last_epoch < self.warmup_iters:
            if self.warmup_method == "constant":
                warmup_factor = self.warmup_factor
            elif self.warmup_method == "linear":
                alpha = float(self.last_epoch) / self.warmup_iters
                warmup_factor = self.warmup_factor * (1 - alpha) + alpha
        return [base_lr * warmup_factor *
                self.gamma ** bisect_right(self.milestones, self.last_epoch)
                for base_lr in self.base_lrs]


class WarmupPolyLR(torch.optim.lr_scheduler._LRScheduler):
    def __init__(self, optimizer, target_lr=0, max_iters=0, power=0.9,
                 warmup_factor=1.0 / 3, warmup_iters=500, warmup_method="linear",
                 last_epoch=-1):
        if warmup_method not in ("constant", "linear"):
            raise ValueError(
                "Only 'constant' or 'linear' warmup_method accepted, got {}".format(warmup_method))
        self.target_lr = target_lr
        self.max_iters = max_iters
        self.power = power
        self.warmup_factor = warmup_factor
        self.warmup_iters = warmup_iters
        self.warmup_method = warmup_method
        super(WarmupPolyLR, self).__init__(optimizer, last_epoch)

    def get_lr(self):
        N = self.max_iters - self.warmup_iters
        T = self.last_epoch - self.warmup_iters
        if self.last_epoch < self.warmup_iters:
            if self.warmup_method == "constant":
                warmup_factor = self.warmup_factor
            elif self.warmup_method == "linear":
                alpha = float(self.last_epoch) / self.warmup_iters
                warmup_factor = self.warmup_factor * (1 - alpha) + alpha
            else:
                raise ValueError("Unknown warmup type.")
            return [self.target_lr + (base_lr - self.target_lr) * warmup_factor
                    for base_lr in self.base_lrs]
        factor = pow(1 - T / N, self.power)
        return [self.target_lr + (base_lr - self.target_lr) * factor
                for base_lr in self.base_lrs]
```
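To see the numbers this scheduler actually produces, here is a torch-free sketch that mirrors the `get_lr` logic of `WarmupMultiStepLR` for a single parameter group (the helper name `warmup_multistep_lr` and the milestone values are illustrative, not part of the library):

```python
from bisect import bisect_right


def warmup_multistep_lr(step, base_lr=0.1, milestones=(6000, 8000), gamma=0.1,
                        warmup_factor=1.0 / 3, warmup_iters=500,
                        warmup_method="linear"):
    """Mirror of WarmupMultiStepLR.get_lr for one base learning rate."""
    factor = 1.0
    if step < warmup_iters:
        if warmup_method == "constant":
            factor = warmup_factor
        elif warmup_method == "linear":
            # Linearly interpolate from warmup_factor up to 1.
            alpha = float(step) / warmup_iters
            factor = warmup_factor * (1 - alpha) + alpha
    # After each milestone, multiply by another power of gamma.
    return base_lr * factor * gamma ** bisect_right(milestones, step)


print(warmup_multistep_lr(0))     # warmup start: base_lr / 3
print(warmup_multistep_lr(500))   # warmup finished: full base_lr
print(warmup_multistep_lr(7000))  # past first milestone: base_lr * gamma
```

Note the interaction of the two mechanisms: during the first `warmup_iters` steps the rate ramps linearly from `base_lr * warmup_factor` to `base_lr`, and independently each passed milestone multiplies the result by `gamma`.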

 
