
Convolutional neural network example: the differences between VGG16 and VGG19

Date: 2023-05-05 16:53:39 | Views: 161299 | Author: 3333

1. VGG Overview

VGGNet is a model proposed by the Visual Geometry Group at the University of Oxford. It competed in both the classification and the localization tasks of the ImageNet 2014 challenge (ILSVRC-2014). VGGNet's key contribution was demonstrating that small convolution kernels, combined with increased network depth, effectively improve performance. VGG largely inherits the design of AlexNet, while having one distinctive characteristic: the network is very deep.
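The benefit of small kernels can be illustrated with a quick parameter count: three stacked 3×3 convolutions cover the same 7×7 receptive field as a single 7×7 convolution, but with far fewer weights. A standalone sketch (the channel count 512 is just an example; biases are ignored):

```python
# Parameter count for C input channels and C output channels, ignoring biases.
C = 512
stacked_3x3 = 3 * (3 * 3 * C * C)   # three stacked 3x3 convs, receptive field 7x7
single_7x7 = 7 * 7 * C * C          # one 7x7 conv with the same receptive field

print(stacked_3x3)  # 27 * C^2
print(single_7x7)   # 49 * C^2 -- the stack uses ~45% fewer parameters
```

The stack also interleaves three ReLU non-linearities instead of one, which is the other argument the VGG paper makes for depth.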

2. VGGNet Structure

VGGNet has five configurations, A through E, with depths of 11, 11, 13, 16, and 19 weight layers respectively. The most typical configurations are VGG16 and VGG19. This article focuses on VGG16 and shares a Keras implementation. Its structure is shown in column D of the table below (red box):
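As for the difference named in the title: VGG19 simply adds one extra 3×3 convolution to each of blocks 3, 4, and 5, raising the count of weight layers from 16 to 19. A quick standalone tally:

```python
# Convolution layers per block in each configuration (from the VGG paper's table).
vgg16_blocks = [2, 2, 3, 3, 3]
vgg19_blocks = [2, 2, 4, 4, 4]
fc_layers = 3  # both end with FC-4096, FC-4096, FC-1000

print(sum(vgg16_blocks) + fc_layers)  # 16
print(sum(vgg19_blocks) + fc_layers)  # 19
```

Everything else (kernel sizes, channel widths, pooling, the fully connected head) is identical between the two.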

3. VGG16 Network Structure

VGGNet's default input image size is 224×224×3 (as the input row of the table shows). VGG16 contains 16 layers with trainable parameters: 13 convolutional layers and 3 fully connected layers. The 5 pooling layers and the activation layers carry no parameters and are not counted.
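The per-layer parameter counts that `model.summary()` reports for these convolutions follow a simple formula: each filter holds k×k×C_in weights plus one bias. A standalone sanity check (the helper name `conv_params` is ours, not a Keras API):

```python
def conv_params(k, c_in, c_out):
    # Each of the c_out filters has k*k*c_in weights plus one bias.
    return (k * k * c_in + 1) * c_out

# The first two conv layers of VGG16 (RGB input has 3 channels).
block1_conv1 = conv_params(3, 3, 64)
block1_conv2 = conv_params(3, 64, 64)
print(block1_conv1)  # 1792
print(block1_conv2)  # 36928
```

These match the figures Keras prints for the corresponding layers.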

The VGG16 network can be divided into six modules plus one input module, as follows:

- Input module: 224×224×3 image
- Block 1: conv3-64, conv3-64, maxpool
- Block 2: conv3-128, conv3-128, maxpool
- Block 3: conv3-256, conv3-256, conv3-256, maxpool
- Block 4: conv3-512, conv3-512, conv3-512, maxpool
- Block 5: conv3-512, conv3-512, conv3-512, maxpool
- Block 6 (fully connected and output layers): FC-4096 (a Flatten layer is actually needed before this), FC-4096, FC-1000 (for classification), softmax
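Each max-pooling layer above halves the spatial resolution, so a 224×224 input shrinks to 7×7 before the fully connected head. A standalone sketch of that arithmetic:

```python
size = 224
sizes = []
for block in range(1, 6):
    size //= 2  # each 2x2 stride-2 maxpool halves height and width
    sizes.append(size)
    print(f"after block{block}_pool: {size}x{size}")

# The 7x7x512 feature map is flattened before FC-4096.
print("flatten size:", sizes[-1] * sizes[-1] * 512)  # 25088
```

This is why the first FC-4096 layer alone holds over 100 million parameters in the original VGG16.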

4. VGG16 Keras Implementation

Import Model from keras.models in preparation for building the network with the functional API, then load and preprocess the MNIST data:

from keras.models import Model
from keras.layers import Input, Flatten, Dense, Dropout
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import RMSprop
from keras.datasets import mnist
from keras.utils import np_utils
import matplotlib.pyplot as plt
import numpy as np

# Data preprocessing: reshape to 28x28x1 and scale pixels to [0, 1]
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
X_test1 = X_test   # keep the raw test images and labels for visualization later
Y_test1 = Y_test
X_train = X_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
X_test = X_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0
Y_train = np_utils.to_categorical(Y_train, 10)   # one-hot encode the labels
Y_test = np_utils.to_categorical(Y_test, 10)
print(X_train.shape, X_test.shape)
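The one-hot encoding done by `np_utils.to_categorical` can be reproduced in plain NumPy, which makes it clear what the labels look like after preprocessing. A minimal sketch (`to_one_hot` is our own helper name, not a Keras API):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    # One row per label; put a 1.0 in the column of the class index.
    out = np.zeros((len(labels), num_classes), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(to_one_hot([3, 0], 10))
# row 0 has a 1 at index 3, row 1 has a 1 at index 0
```

The softmax output layer produces a vector of the same shape, which is why categorical cross-entropy can compare the two directly.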

Build the VGG16-style model (adapted for MNIST: 28x28x1 input, Dense(256) instead of FC-4096, and a 10-class output), then train and evaluate it:

def vgg16():
    x_input = Input((28, 28, 1))  # input shape 28*28*1

    # Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(x_input)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)

    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)

    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)

    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)

    # Block 5 (no pooling here: the 28x28 input is already down to 1x1 by now)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)

    # Block 6: fully connected and output layers
    x = Flatten()(x)
    x = Dense(256, activation="relu")(x)
    x = Dropout(0.5)(x)
    x = Dense(256, activation="relu")(x)
    x = Dropout(0.5)(x)
    x = Dense(10, activation="softmax")(x)  # output layer, one unit per digit class

    # Define the model: input is x_input, output is the softmax layer
    model = Model(inputs=x_input, outputs=x)
    model.summary()  # print a summary of the network
    return model

model = vgg16()
optimizer = RMSprop(lr=1e-4)
# 10-class softmax output, so use categorical (not binary) cross-entropy
model.compile(loss="categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])

# Train and evaluate the model
n_epoch = 4
batch_size = 128

def run_model():
    training = model.fit(
        X_train, Y_train,
        batch_size=batch_size,
        epochs=n_epoch,
        validation_split=0.25,
        verbose=1
    )
    # Evaluate on the held-out test set, not the training data
    test = model.evaluate(X_test, Y_test, verbose=1)
    return training, test

training, test = run_model()
print("loss:", test[0])
print("accuracy:", test[1])

def show_train(training_history, train, validation):
    plt.plot(training_history.history[train], linestyle="-", color="b")
    plt.plot(training_history.history[validation], linestyle="--", color="r")
    plt.title("training history")
    plt.xlabel("epoch")
    plt.ylabel("accuracy")
    plt.legend(["training", "validation"], loc="lower right")
    plt.show()

show_train(training, "accuracy", "val_accuracy")

def show_train1(training_history, train, validation):
    plt.plot(training_history.history[train], linestyle="-", color="b")
    plt.plot(training_history.history[validation], linestyle="--", color="r")
    plt.title("training history")
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.legend(["training", "validation"], loc="upper right")
    plt.show()

show_train1(training, "loss", "val_loss")

prediction = model.predict(X_test)

def image_show(image):
    fig = plt.gcf()            # get the current figure
    fig.set_size_inches(2, 2)  # resize the figure
    plt.imshow(image, cmap="binary")
    plt.show()

def result(i):
    image_show(X_test1[i])
    print("true label:", Y_test1[i])
    print("prediction:", np.argmax(prediction[i]))

result(0)
result(1)
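The `result` function decodes the softmax output with `np.argmax`: the predicted class is the index of the largest probability in each row. A minimal standalone illustration (the probability values here are made up):

```python
import numpy as np

# Two fake softmax outputs over 3 classes.
probs = np.array([[0.1, 0.7, 0.2],
                  [0.05, 0.15, 0.8]])

preds = probs.argmax(axis=1)  # index of the max probability per row
print(preds)  # prints [1 2]
```

With 10 classes the mechanics are identical; `prediction[i]` is one such row.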

 
