# Machine Learning in Practice: Recognizing MNIST Handwritten Digits 0~9 (Fully-Connected Neural Network)

## Related posts

- Machine Learning, Supervised Learning (1): Linear Regression, Polynomial Regression, Algorithm Optimization (detailed notes)
- Machine Learning, Supervised Learning (2): Binary Logistic Regression
- Machine Learning, Supervised Learning (3): Neural Network Basics
- Machine Learning in Practice: Predicting Second-Hand House Prices (Linear Regression)
- Machine Learning in Practice: A Benign/Malignant Tumor Classifier (Binary Logistic Regression)

## About the MNIST dataset

MNIST is one of the best-known and most widely used datasets in machine learning and computer vision. It is a large database of handwritten digits containing 70,000 images: 60,000 training images and 10,000 test images. Each image is a 28x28-pixel grayscale image with pixel values from 0 (black) to 255 (white), and each image carries a label from 0 to 9.

Before starting the experiment, you can get a feel for this classic dataset with the small quiz below, which tests your own digit-recognition ability. Recognizing handwritten digits is trivial for humans, but because the images are low-resolution and some digits are written rather loosely, reaching 100% accuracy is still hard: experiments suggest average human accuracy is roughly 97.5%~98.5%. The quiz code:

```python
import numpy as np
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
from random import sample

# Load the MNIST test set
(_, _), (x_test, y_test) = mnist.load_data()

# Randomly pick 100 samples
indices = sample(range(len(x_test)), 100)

correct = 0
total = 100

for i, idx in enumerate(indices, 1):
    # Show the image
    plt.imshow(x_test[idx], cmap='gray')
    plt.axis('off')
    plt.show()

    # Ask for an answer
    user_answer = input(f"Question {i}/100: what digit is this? ")

    # Check it
    if int(user_answer) == y_test[idx]:
        correct += 1
        print("Correct!")
    else:
        print(f"Wrong. The answer is {y_test[idx]}")
    print(f"Running accuracy: {correct}/{i} ({correct/i*100:.2f}%)")

print(f"\nFinal accuracy: {correct}/{total} ({correct/total*100:.2f}%)")
```

## Experiment

### Environment

PyCharm + Jupyter Notebook

### Imports

```python
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.metrics import accuracy_score
from tensorflow.keras.layers import Input, Dense, Dropout
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Sequential
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.callbacks import EarlyStopping

import matplotlib
matplotlib.rcParams['font.family'] = 'SimHei'  # or 'Microsoft YaHei', for CJK labels
matplotlib.rcParams['axes.unicode_minus'] = False  # render minus signs correctly
```

### Loading MNIST

Load the handwritten-digit data (training and test sets):

```python
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
```

Check the sizes of the training and test sets:

```python
print(f"x_train.shape: {x_train.shape}")
print(f"y_train.shape: {y_train.shape}")
print(f"x_test.shape: {x_test.shape}")
print(f"y_test.shape: {y_test.shape}")
```

```
x_train.shape: (60000, 28, 28)
y_train.shape: (60000,)
x_test.shape: (10000, 28, 28)
y_test.shape: (10000,)
```

Look at 64 handwritten images from the training set:

```python
# Show 64 random training images
# Size of the training set
m = x_train.shape[0]
# 8x8 grid of subplots
fig, axes = plt.subplots(8, 8, figsize=(8, 8))
# Each subplot shows one randomly chosen image
for i, ax in enumerate(axes.flat):
    idx = np.random.randint(m)
    # imshow(): render the pixel matrix; cmap='gray' gives a grayscale image
    ax.imshow(x_train[idx], cmap='gray')
    # Put the label above the image
    ax.set_title(y_train[idx])
    # Remove the axes
    ax.axis('off')
# Adjust spacing between subplots
plt.tight_layout()
```

(For space reasons, not all 64 images are reproduced here.)

Flatten each grayscale pixel matrix into a vector and normalize it (divide by 255, mapping 0~255 to 0~1):

```python
x_train_flat = x_train.reshape(60000, 28*28).astype('float32') / 255
x_test_flat = x_test.reshape(10000, 28*28).astype('float32') / 255
```

Check the flattened shapes:

```python
print(f"x_train.shape: {x_train_flat.shape}")
print(f"x_test.shape: {x_test_flat.shape}")
```

```
x_train.shape: (60000, 784)
x_test.shape: (10000, 784)
```

### Building, training and evaluating the models

Start with a three-layer fully-connected network. The hidden layers use the ReLU activation; the loss is sparse categorical cross-entropy with `from_logits=True`, which computes the softmax inside the loss for better numerical stability; the optimizer is Adam with an initial learning rate of 0.001.

```python
# Build the network
model1 = Sequential([
    Input(shape=(784,)),
    Dense(128, activation='relu', name='L1'),
    Dense(32, activation='relu', name='L2'),
    Dense(10, activation='linear', name='L3'),
], name='model1')

# Compile the model
model1.compile(loss=SparseCategoricalCrossentropy(from_logits=True),
               optimizer=Adam(learning_rate=0.001))

# Inspect the model
model1.summary()
```

```
Model: "model1"
 Layer (type)                Output Shape              Param #
 L1 (Dense)                  (None, 128)               100480
 L2 (Dense)                  (None, 32)                4128
 L3 (Dense)                  (None, 10)                330
Total params: 104,938
Trainable params: 104,938
Non-trainable params: 0
```

Fit model1 to the training set, initially for 20 epochs:

```python
model1.fit(x_train_flat, y_train, epochs=20)
```

```
Epoch 1/20
1875/1875 [==============================] - 12s 5ms/step - loss: 0.2502
Epoch 2/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.1057
...
Epoch 20/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0084
```

Inspect the training results. The model outputs raw logits, so they are passed through softmax to obtain probability vectors, and the predicted digit is the index of the largest probability:

```python
# Predictions on the training set (logits)
z_train_hat = model1.predict(x_train_flat)
# Softmax turns each row of logits into a probability vector
p_train_hat = tf.nn.softmax(z_train_hat).numpy()
# The predicted digit is the index of the max probability
y_train_hat = np.argmax(p_train_hat, axis=1)
print(y_train_hat)
```

This post-processing can be wrapped in a function:

```python
# Network output (logits) -> predicted digits
def get_result(z):
    p = tf.nn.softmax(z)
    y = np.argmax(p, axis=1)
    return y
```

To make the post-processing concrete, look at the first sample's logits, probability vector and predicted digit:

```python
print(f"Logits: {z_train_hat[0]}")
print(f"Probabilities: {p_train_hat[0]}")
print(f"Target: {y_train_hat[0]}")
```

```
Logits: [-21.427883 -11.558845 -15.150495 15.6205845 -58.351833 29.704205
 -23.925339 -30.009314 -11.389831 -14.521982]
Probabilities: [6.2175050e-23 1.2013921e-18 3.3101813e-20 7.6482343e-07 0.0000000e+00
 9.9999928e-01 5.1166414e-24 1.1661356e-26 1.4226123e-18 6.2059749e-20]
Target: 5
```

model1's training accuracy reaches 99.8%:

```python
print(f"model1 training accuracy: {accuracy_score(y_train, y_train_hat)}")
```

```
model1 training accuracy: 0.998133
```

On the test set model1 reaches 97.9%, which is quite respectable:

```python
z_test_hat = model1.predict(x_test_flat)
y_test_hat = get_result(z_test_hat)
print(f"model1 test accuracy: {accuracy_score(y_test, y_test_hat)}")
```

```
313/313 [==============================] - 1s 3ms/step
model1 test accuracy: 0.9789
```

To streamline the later experiments, wrap the whole train/test cycle in a `run_model` function, and add early stopping: training stops when the training loss has not improved for 10 epochs.

```python
early_stopping = EarlyStopping(
    monitor='loss',
    patience=10,               # stop if training loss has not improved for 10 epochs
    restore_best_weights=True  # roll back to the best weights
)

def run_model(model, epochs):
    model.fit(x_train_flat, y_train, epochs=epochs, callbacks=[early_stopping])
    z_train_hat = model.predict(x_train_flat)
    y_train_hat = get_result(z_train_hat)
    print(f"{model.name} training accuracy: {accuracy_score(y_train, y_train_hat)}")
    z_test_hat = model.predict(x_test_flat)
    y_test_hat = get_result(z_test_hat)
    print(f"{model.name} test accuracy: {accuracy_score(y_test, y_test_hat)}")
```

Look at which images trip the model up:

```python
# Show up to n misclassified images
def show_error_pic(x, y, y_pred, n=64):
    wrong_idx = (y != y_pred)
    # Collect the misclassified images and labels
    x_wrong = x[wrong_idx]
    y_wrong = y[wrong_idx]
    y_pred_wrong = y_pred[wrong_idx]
    # Keep the first n errors
    n = min(n, len(x_wrong))
    x_wrong = x_wrong[:n]
    y_wrong = y_wrong[:n]
    y_pred_wrong = y_pred_wrong[:n]
    # Lay out the grid
    rows = int(np.ceil(n / 8))
    fig, axes = plt.subplots(rows, 8, figsize=(20, 2.5 * rows))
    axes = axes.flatten()
    for i in range(n):
        ax = axes[i]
        ax.imshow(x_wrong[i].reshape(28, 28), cmap='gray')
        ax.set_title(f"True: {y_wrong[i]}, Pred: {y_pred_wrong[i]}")
        ax.axis('off')
    # Hide any unused subplots
    for i in range(n, len(axes)):
        axes[i].axis('off')
    plt.tight_layout()
    plt.show()

show_error_pic(x_test, y_test, y_test_hat)
```

(Again, only part of the grid is shown here for space reasons.)

### Model optimization

Our first, fairly simple network already performs very well: 99.8% training accuracy and 97.9% test accuracy, while average human accuracy is roughly 97.5%~98.5%. The gap between training and test accuracy suggests the model has some degree of high variance, so we can try regularization or more data; alternatively, we can try a larger network and see whether accuracy improves further.

**model2: model1 with the epoch count raised to 40**

```python
# Build the network (same architecture as model1)
model2 = Sequential([
    Input(shape=(784,)),
    Dense(128, activation='relu', name='L1'),
    Dense(32, activation='relu', name='L2'),
    Dense(10, activation='linear', name='L3'),
], name='model2')

# Compile the model
model2.compile(loss=SparseCategoricalCrossentropy(from_logits=True),
               optimizer=Adam(learning_rate=0.001))
model2.summary()

run_model(model2, 40)
```

```
Model: "model2"
 Layer (type)                Output Shape              Param #
 L1 (Dense)                  (None, 128)               100480
 L2 (Dense)                  (None, 32)                4128
 L3 (Dense)                  (None, 10)                330
Total params: 104,938
Epoch 1/40
1875/1875 [==============================] - 10s 5ms/step - loss: 0.2670
Epoch 2/40
1875/1875 [==============================] - 10s 5ms/step - loss: 0.1124
...
Epoch 40/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0063
model2 training accuracy: 0.9984166666666666
model2 test accuracy: 0.98
```

Test accuracy rises to 98%, a slight improvement, but given that the run time doubles, the gain is not compelling.

**model3: a wider and deeper network, 20 epochs**

```python
# Increase the width and depth of the network
model3 = Sequential([
    Input(shape=(784,)),
    Dense(256, activation='relu', name='L1'),
    Dense(128, activation='relu', name='L2'),
    Dense(64, activation='relu', name='L3'),
    Dense(10, activation='linear', name='L4'),
], name='model3')

model3.compile(loss=SparseCategoricalCrossentropy(from_logits=True),
               optimizer=Adam(learning_rate=0.001))
model3.summary()

run_model(model3, 20)
```

```
Model: "model3"
 Layer (type)                Output Shape              Param #
 L1 (Dense)                  (None, 256)               200960
 L2 (Dense)                  (None, 128)               32896
 L3 (Dense)                  (None, 64)                8256
 L4 (Dense)                  (None, 10)                650
Total params: 242,762
Epoch 1/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.2152
Epoch 2/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.0908
...
Epoch 20/20
1875/1875 [==============================] - 13s 7ms/step - loss: 0.0094
model3 training accuracy: 0.9989333333333333
model3 test accuracy: 0.9816
```

model3 reaches 99.9% training accuracy and a new best test accuracy of 98.2%.

**model4: model1 plus Dropout layers for regularization**

```python
# Dropout regularization
model4 = Sequential([
    Input(shape=(784,)),
    Dense(128, activation='relu', name='L1'),
    Dropout(0.3),
    Dense(64, activation='relu', name='L2'),
    Dropout(0.2),
    Dense(10, activation='linear', name='L3'),
], name='model4')

model4.compile(loss=SparseCategoricalCrossentropy(from_logits=True),
               optimizer=Adam(learning_rate=0.001))
model4.summary()

run_model(model4, 20)
```

```
Model: "model4"
 Layer (type)                Output Shape              Param #
 L1 (Dense)                  (None, 128)               100480
 dropout_2 (Dropout)         (None, 128)               0
 L2 (Dense)                  (None, 64)                8256
 dropout_3 (Dropout)         (None, 64)                0
 L3 (Dense)                  (None, 10)                650
Total params: 109,386
Epoch 1/20
1875/1875 [==============================] - 15s 7ms/step - loss: 0.3686
Epoch 2/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.1855
...
Epoch 20/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0614
model4 training accuracy: 0.9951833333333333
model4 test accuracy: 0.98
```

model4's training accuracy drops to 99.5%, but its test accuracy of 98% is slightly better than model1's. Dropout does reduce the model's variance and improve its generalization.

**model7: model3's architecture plus Dropout, trained for 40 epochs**

Putting this together, we take model3's architecture, add Dropout regularization, and train for 40 epochs:

```python
# Final fully-connected network
model7 = Sequential([
    Input(shape=(784,)),
    Dense(256, activation='relu', name='L1'),
    Dropout(0.3),
    Dense(128, activation='relu', name='L2'),
    Dropout(0.2),
    Dense(64, activation='relu', name='L3'),
    Dropout(0.1),
    Dense(10, activation='linear', name='L4'),
], name='model7')

model7.compile(loss=SparseCategoricalCrossentropy(from_logits=True),
               optimizer=Adam(learning_rate=0.001))
model7.summary()

run_model(model7, 40)
```

```
Model: "model7"
 Layer (type)                Output Shape              Param #
 L1 (Dense)                  (None, 256)               200960
 dropout_4 (Dropout)         (None, 256)               0
 L2 (Dense)                  (None, 128)               32896
 dropout_5 (Dropout)         (None, 128)               0
 L3 (Dense)                  (None, 64)                8256
 dropout_6 (Dropout)         (None, 64)                0
 L4 (Dense)                  (None, 10)                650
Total params: 242,762
Epoch 1/40
1875/1875 [==============================] - 16s 8ms/step - loss: 0.3174
Epoch 2/40
1875/1875 [==============================] - 14s 7ms/step - loss: 0.1572
...
Epoch 40/40
1875/1875 [==============================] - 38s 20ms/step - loss: 0.0264
model7 training accuracy: 0.9984333333333333
model7 test accuracy: 0.9831
```

model7 reaches 99.8% training accuracy and 98.3% test accuracy, an improvement of almost 0.4 percentage points over model1's 97.9%.

This experiment is a practice exercise following the neural-network basics post, so it only uses fully-connected models. CNNs are known to be stronger at image recognition, so to finish, we build a CNN for comparison (the architecture was generated by GPT):

```python
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten

model8 = Sequential([
    Input(shape=(28, 28, 1)),
    Conv2D(32, kernel_size=(3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, kernel_size=(3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='linear')
], name='cnn_model')

model8.compile(loss=SparseCategoricalCrossentropy(from_logits=True),
               optimizer=Adam(learning_rate=0.001))
model8.summary()

# Conv2D expects a channel axis, so keep the 2-D images and add one,
# normalizing to 0~1 as before
x_train_cnn = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255
x_test_cnn = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255

model8.fit(x_train_cnn, y_train, epochs=20, callbacks=[early_stopping])

z_train_hat = model8.predict(x_train_cnn)
y_train_hat = get_result(z_train_hat)
print(f"{model8.name} training accuracy: {accuracy_score(y_train, y_train_hat)}")

z_test_hat = model8.predict(x_test_cnn)
y_test_hat = get_result(z_test_hat)
print(f"{model8.name} test accuracy: {accuracy_score(y_test, y_test_hat)}")
```

```
cnn_model training accuracy: 0.9982333333333333
cnn_model test accuracy: 0.9878
```

The CNN's test accuracy reaches 98.8%, clearly better than any of the fully-connected networks above.
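The `l2` regularizer imported at the top is never actually used in the experiments above. As a further optimization experiment, L2 weight decay could be tried alongside (or instead of) Dropout. Below is a minimal sketch on model1's architecture; the coefficient 0.0001 is an assumed starting point, not a tuned value:

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.regularizers import l2
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.optimizers import Adam

# Same architecture as model1, but each Dense layer carries an L2 weight penalty
model_l2 = Sequential([
    Input(shape=(784,)),
    Dense(128, activation='relu', kernel_regularizer=l2(0.0001), name='L1'),
    Dense(32, activation='relu', kernel_regularizer=l2(0.0001), name='L2'),
    Dense(10, activation='linear', kernel_regularizer=l2(0.0001), name='L3'),
], name='model_l2')

model_l2.compile(loss=SparseCategoricalCrossentropy(from_logits=True),
                 optimizer=Adam(learning_rate=0.001))
# The penalty is added to the training loss automatically,
# so training is unchanged, e.g.:
# model_l2.fit(x_train_flat, y_train, epochs=20)
```

Like Dropout, the penalty only affects training; at prediction time the model behaves as a plain fully-connected network.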
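`show_error_pic` shows individual mistakes; a confusion matrix makes it easy to see which digit pairs are confused most often. A minimal sketch using scikit-learn: in the experiment above the inputs would be `y_test` and `y_test_hat`, but tiny stand-in arrays are used here so the snippet runs on its own:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Stand-ins for y_test and y_test_hat from the experiment above
y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])
y_pred = np.array([0, 1, 2, 1, 1, 0, 2, 2])

# cm[i, j] = number of images of digit i predicted as digit j
cm = confusion_matrix(y_true, y_pred)
print(cm)

# Off-diagonal entries are the errors; the largest is the most common confusion
errors = cm.copy()
np.fill_diagonal(errors, 0)
i, j = np.unravel_index(np.argmax(errors), errors.shape)
print(f"most common confusion: true {i} predicted as {j} ({errors[i, j]}x)")
```

On the real test predictions this gives a 10x10 matrix whose off-diagonal peaks point at the digit pairs worth inspecting with `show_error_pic`.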