
LeakyReLU activation function (single-layer ReLU function)


Judging from previous tuning runs, overfitting has been the biggest problem. Building on tuning record 12, this post reduces the depth to 9 residual blocks and runs the test again.

The principle of the adaptively parametric ReLU (APReLU) activation function is as follows.

(Figure: the adaptively parametric ReLU activation function)
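Conceptually, APReLU splits the input into a positive part max(x, 0) and a negative part min(x, 0), learns per-channel scaling coefficients from global statistics of both parts, and outputs max(x, 0) + α · min(x, 0). As a minimal sketch of just the forward computation (NumPy, with a hypothetical fixed alpha standing in for the small trainable sub-network that produces the coefficients in the real layer):

import numpy as np

def aprelu_forward(x, alpha):
    # x: feature map of shape (H, W, C); alpha: per-channel slopes of shape (C,)
    pos = np.maximum(x, 0.0)   # positive features (plain ReLU)
    neg = np.minimum(x, 0.0)   # negative features
    return pos + alpha * neg   # learned channel-wise slope on the negative part

x = np.random.randn(32, 32, 16).astype('float32')
alpha = np.full(16, 0.25, dtype='float32')  # hypothetical coefficients in (0, 1)
print(aprelu_forward(x, alpha).shape)       # (32, 32, 16)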

The full Keras program is as follows:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
'''
Created on Tue Apr 14 04:17:45 2020
Implemented using TensorFlow 1.0.1 and Keras 2.2.1

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Shaojiang Dong, Michael Pecht,
Deep Residual Networks with Adaptively Parametric Rectifier Linear Units for Fault Diagnosis,
IEEE Transactions on Industrial Electronics, 2020, DOI: 10.1109/TIE.2020.2972458

@author: Minghang Zhao
'''

from __future__ import print_function
import keras
import numpy as np
from keras.datasets import cifar10
from keras.layers import Dense, Conv2D, BatchNormalization, Activation, Minimum
from keras.layers import AveragePooling2D, Input, GlobalAveragePooling2D, Concatenate, Reshape
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler
K.set_learning_phase(1)
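# Note (added): this script targets standalone Keras 2.2.x on a TF 1.x
# backend; in modern tf.keras, fit_generator is deprecated in favour of
# model.fit, and the optimizer attribute is `learning_rate` rather than `lr`.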

# The data, split between train and test sets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Normalize and zero-centre the data (note: the test set is centred
# using the training-set mean, computed before x_train is shifted)
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_test = x_test - np.mean(x_train)
x_train = x_train - np.mean(x_train)
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# Convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
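# Sanity check (added sketch): with the standard CIFAR-10 split, the arrays
# are now zero-centred float32 images and one-hot labels.
assert x_train.shape == (50000, 32, 32, 3) and y_train.shape == (50000, 10)
assert x_test.shape == (10000, 32, 32, 3) and y_test.shape == (10000, 10)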

# Schedule the learning rate, multiply by 0.1 every 1500 epochs
def scheduler(epoch):
    if epoch % 1500 == 0 and epoch != 0:
        lr = K.get_value(model.optimizer.lr)
        K.set_value(model.optimizer.lr, lr * 0.1)
        print('lr changed to {}'.format(lr * 0.1))
    return K.get_value(model.optimizer.lr)
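# Note (added): the step decay above is equivalent to
# lr(epoch) = 0.1 * 0.1 ** (epoch // 1500), i.e. the rate drops at
# epochs 1500, 3000 and 4500 within the 5000-epoch run.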

# An adaptively parametric rectifier linear unit (APReLU)
def aprelu(inputs):
    # get the number of channels
    channels = inputs.get_shape().as_list()[-1]
    # get a zero feature map
    zeros_input = keras.layers.subtract([inputs, inputs])
    # get a feature map with only positive features
    pos_input = Activation('relu')(inputs)
    # get a feature map with only negative features
    neg_input = Minimum()([inputs, zeros_input])
    # define a small network to obtain the scaling coefficients
    scales_p = GlobalAveragePooling2D()(pos_input)
    scales_n = GlobalAveragePooling2D()(neg_input)
    scales = Concatenate()([scales_n, scales_p])
    scales = Dense(channels, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
    scales = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(scales)
    scales = Activation('relu')(scales)
    scales = Dense(channels, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
    scales = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(scales)
    scales = Activation('sigmoid')(scales)
    scales = Reshape((1, 1, channels))(scales)
    # apply a parametric relu
    neg_part = keras.layers.multiply([scales, neg_input])
    return keras.layers.add([pos_input, neg_part])

# Residual block
def residual_block(incoming, nb_blocks, out_channels, downsample=False, downsample_strides=2):
    residual = incoming
    in_channels = incoming.get_shape().as_list()[-1]
    for i in range(nb_blocks):
        identity = residual
        if not downsample:
            downsample_strides = 1
        residual = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(residual)
        residual = aprelu(residual)
        residual = Conv2D(out_channels, 3, strides=(downsample_strides, downsample_strides),
                          padding='same', kernel_initializer='he_normal',
                          kernel_regularizer=l2(1e-4))(residual)
        residual = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(residual)
        residual = aprelu(residual)
        residual = Conv2D(out_channels, 3, padding='same', kernel_initializer='he_normal',
                          kernel_regularizer=l2(1e-4))(residual)
        # Downsampling
        if downsample_strides > 1:
            identity = AveragePooling2D(pool_size=(1, 1), strides=(2, 2))(identity)
        # Zero-padding to match channels
        if in_channels != out_channels:
            zeros_identity = keras.layers.subtract([identity, identity])
            identity = keras.layers.concatenate([identity, zeros_identity])
            in_channels = out_channels
        residual = keras.layers.add([residual, identity])
    return residual

# define and train a model
inputs = Input(shape=(32, 32, 3))
net = Conv2D(16, 3, padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(inputs)
net = residual_block(net, 3, 16, downsample=False)
net = residual_block(net, 1, 32, downsample=True)
net = residual_block(net, 2, 32, downsample=False)
net = residual_block(net, 1, 64, downsample=True)
net = residual_block(net, 2, 64, downsample=False)
net = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(net)
net = Activation('relu')(net)
net = GlobalAveragePooling2D()(net)
outputs = Dense(10, activation='softmax', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(net)
model = Model(inputs=inputs, outputs=outputs)
sgd = optimizers.SGD(lr=0.1, decay=0., momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

# data augmentation
datagen = ImageDataGenerator(
    # randomly rotate images by up to 30 degrees
    rotation_range=30,
    # range for random zoom
    zoom_range=0.2,
    # shear angle in counter-clockwise direction in degrees
    shear_range=30,
    # randomly flip images horizontally
    horizontal_flip=True,
    # randomly shift images horizontally
    width_shift_range=0.125,
    # randomly shift images vertically
    height_shift_range=0.125)
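# Note (added): shear_range is an angle in degrees; combined with 30-degree
# rotations, zooming and horizontal flips this is fairly aggressive
# augmentation, consistent with the goal of curbing overfitting.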
reduce_lr = LearningRateScheduler(scheduler)

# Fit the model on the batches generated by datagen.flow()
model.fit_generator(datagen.flow(x_train, y_train, batch_size=625),
                    validation_data=(x_test, y_test), epochs=5000,
                    verbose=1, callbacks=[reduce_lr], workers=10)

# get results
K.set_learning_phase(0)
DRSN_train_score = model.evaluate(x_train, y_train, batch_size=625, verbose=0)
print('Train loss:', DRSN_train_score[0])
print('Train accuracy:', DRSN_train_score[1])
DRSN_test_score = model.evaluate(x_test, y_test, batch_size=625, verbose=0)
print('Test loss:', DRSN_test_score[0])
print('Test accuracy:', DRSN_test_score[1])
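For reference, the residual_block calls above stack 3 + 1 + 2 + 1 + 2 = 9 residual blocks, matching the nine-block budget described at the top of the post. Also worth noting is the parameter-free shortcut used when the channel count changes: subtracting a tensor from itself gives zeros of the same shape, and concatenating those zeros doubles the channels without a 1x1 convolution. A small NumPy sketch of the trick, with hypothetical shapes:

import numpy as np

identity = np.random.randn(1, 16, 16, 16).astype('float32')  # e.g. after pooling
zeros_identity = identity - identity                         # all zeros, same shape
widened = np.concatenate([identity, zeros_identity], axis=-1)
print(widened.shape)  # (1, 16, 16, 32): channels doubled with no new weights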

The experimental results are as follows:

Epoch 3023/5000
 - 12s 151ms/step - loss: 0.0976 - acc: 0.9960 - val_loss: 0.4282 - val_acc: 0.9107
Epoch 3024/5000
 - 12s 152ms/step - loss: 0.0984 - acc: 0.9959 - val_loss: 0.4283 - val_acc: 0.9111
Epoch 3025/5000
 - 12s 152ms/step - loss: 0.0969 - acc: 0.9960 - val_loss: 0.4288 - val_acc: 0.9090

Overfitting is still present, so the network needs to be shrunk further.

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Shaojiang Dong, Michael Pecht, Deep Residual Networks with Adaptively Parametric Rectifier Linear Units for Fault Diagnosis, IEEE Transactions on Industrial Electronics, 2020, DOI: 10.1109/TIE.2020.2972458

https://ieeexplore.ieee.org/document/8998530
