Lab 3: Image Classification with Convolutional Neural Networks

Published July 3, 2023

This lab covers the following tasks:

1. Using TensorFlow and Keras, build a convolutional neural network yourself and classify the dogs-vs-cats dataset, annotating the key steps with comments.
2. Explain what overfitting is and what data augmentation is. With data augmentation alone, how much does accuracy improve? After then adding a dropout layer, what is the practical effect?
3. Complete the dog/cat classification with a VGG19 network and report the experimental results.

1 Understanding Convolutional Neural Networks

1.1 Setting up the environment

First check the Anaconda version, then create a conda environment named tensorflow:

```
conda --version                            # check the Anaconda version (errors here mean the install failed)
conda create -n tensorflow pip python=3.6  # create a conda environment named tensorflow
```

After it is created, run `activate tensorflow`; if the prompt gains the environment prefix, activation succeeded. Inside the environment, install the CPU-only build of TensorFlow:

```
pip install --ignore-installed --upgrade tensorflow
```

Exit and list the environments; if tensorflow shows up, the installation succeeded. Then install Keras:

```
pip install keras
```

Switch the environment to the tensorflow one just created, then install Jupyter (this takes a little while).

Tip: in the same console you can also add the jupyter_contrib_nbextensions plugin, which provides code auto-completion, PEP 8 checks, font-size control, code line numbers, spell checking, a table of contents, and more:

```
pip install jupyter_contrib_nbextensions -i /simple/
jupyter contrib nbextension install --user --skip-running-check
```

Tick the "hinterland" extension to enable code auto-completion.

1.2 The cats-and-dogs example

Back to the assignment: create a new Python 3 notebook, import the keras package, and print its version (mainly to confirm the download succeeded). Then add the following code (adjust the paths of your own training images and of the folders to be created):

```python
import os, shutil  # for copying files

# Path to the directory with the original, uncompressed dataset
original_dataset_dir = 'D:\\QQ\\kaggle_Dog&Cat\\train'

# The directory where we will store our smaller dataset
base_dir = 'D:\\QQ\\kaggle_Dog&Cat\\find_cats_and_dogs'
os.mkdir(base_dir)

# Directories for the training, validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)

# Directory with the training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
os.mkdir(train_cats_dir)

# Directory with the training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
os.mkdir(train_dogs_dir)

# Directory with the validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
os.mkdir(validation_cats_dir)

# Directory with the validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
os.mkdir(validation_dogs_dir)

# Directory with the test cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
os.mkdir(test_cats_dir)

# Directory with the test dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
os.mkdir(test_dogs_dir)

# Copy the first 1000 cat images to train_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(train_cats_dir, fname)
    shutil.copyfile(src, dst)

# Copy the next 500 cat images to validation_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(validation_cats_dir, fname)
    shutil.copyfile(src, dst)
```
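The copy loops in this section differ only in the filename prefix, the index range, and the destination directory, so they can be folded into one helper. A sketch — `copy_subset` is a name introduced here, not part of the original code:

```python
import os, shutil

def copy_subset(src_dir, dst_dir, prefix, start, stop):
    """Copy images named '<prefix>.<i>.jpg' for i in [start, stop) into dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for i in range(start, stop):
        fname = '{}.{}.jpg'.format(prefix, i)
        shutil.copyfile(os.path.join(src_dir, fname),
                        os.path.join(dst_dir, fname))

# Each loop then becomes a one-liner, e.g.:
# copy_subset(original_dataset_dir, train_cats_dir, 'cat', 0, 1000)
# copy_subset(original_dataset_dir, validation_cats_dir, 'cat', 1000, 1500)
# copy_subset(original_dataset_dir, test_dogs_dir, 'dog', 1500, 2000)
```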

```python
# Copy the next 500 cat images to test_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(test_cats_dir, fname)
    shutil.copyfile(src, dst)

# Copy the first 1000 dog images to train_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(train_dogs_dir, fname)
    shutil.copyfile(src, dst)

# Copy the next 500 dog images to validation_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(validation_dogs_dir, fname)
    shutil.copyfile(src, dst)

# Copy the next 500 dog images to test_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(test_dogs_dir, fname)
    shutil.copyfile(src, dst)
```

Opening the folders confirms that the dataset has been created. Just in case, count the images:

```python
print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
print('total test cat images:', len(os.listdir(test_cats_dir)))
print('total test dog images:', len(os.listdir(test_dogs_dir)))
```

As the counts show, the training, validation, and test images are all in place.

2 Convolutional Neural Networks

Next we use the classic deep-learning algorithm for images: the convolutional neural network (CNN).

2.1 Building the network model

First build the model, then use model.summary() to print the parameters of each layer:

```python
from keras import layers
from keras import models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
```

2.2 Reading images with an image generator

model.compile() configures the optimizer (loss computes the loss, here binary cross-entropy; metrics is a list of metrics used to evaluate the model during training and testing). All images are resized to a uniform 150×150.

ImageDataGenerator works like an adapter that converts the image files on disk into the required format: you first configure an adapter such as train_datagen, which can apply various transformations to the images as needed, then point it at the right directory and specify the data format (image size, batch size, label format, and so on). Each item yielded by train_generator is an (X, y) tuple, where X has shape (20, 150, 150, 3) and y has shape (20,).

```python
from keras import optimizers

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

from keras.preprocessing.image import ImageDataGenerator
```
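model.summary() makes each layer's output shape concrete, and the spatial sizes can also be checked by hand: each 3×3 "valid" convolution trims 2 pixels from the side length, and each 2×2 max-pool floor-halves it. A small sketch of that arithmetic (`conv_then_pool` is a name introduced here for illustration):

```python
def conv_then_pool(size, pairs):
    """Side length after `pairs` rounds of (3x3 valid Conv2D -> 2x2 MaxPooling2D)."""
    for _ in range(pairs):
        size = (size - 2) // 2   # conv trims 2 (no padding), pool floor-halves
    return size

side = conv_then_pool(150, 4)   # the model has four Conv2D/MaxPooling2D pairs
flat = side * side * 128        # the last Conv2D has 128 filters
print(side, flat)               # 7 6272: Flatten feeds 6272 units into Dense(512)
```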

```python
# All images will be rescaled by 1/255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    # This is the target directory
    train_dir,
    # All images will be resized to 150x150
    target_size=(150, 150),
    batch_size=20,
    # Since we use binary cross-entropy loss, we need binary labels
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
```
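On the questions the assignment poses: overfitting is when a model memorizes its training images and stops generalizing to new ones (training accuracy keeps rising while validation accuracy stalls or falls); data augmentation counters it by generating label-preserving variants of each training image, which ImageDataGenerator can apply on the fly via arguments such as rotations and flips. A minimal pure-Python sketch of one such transform, a random horizontal flip (`horizontal_flip` is an illustrative helper, not part of Keras):

```python
import random

def horizontal_flip(image, p=0.5, rng=random.random):
    """Mirror an image (a nested list of pixel rows) left-right with probability p."""
    if rng() < p:
        return [row[::-1] for row in image]
    return image

img = [[1, 2, 3],
       [4, 5, 6]]
flipped = horizontal_flip(img, p=1.0)   # force the flip for demonstration
print(flipped)                          # [[3, 2, 1], [6, 5, 4]]
```

A flipped cat is still a cat, so the label is unchanged while the network sees a new input; dropout attacks overfitting from the other side, by randomly zeroing activations during training.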

