Table of Contents

1. Basic Introduction
<https://blog.csdn.net/sinat_35821976/article/details/81503953#%E5%86%85%E5%AE%B9%E4%BB%8B%E7%BB%8D>

2. Network Structure
<https://blog.csdn.net/sinat_35821976/article/details/81503953#%E7%BD%91%E7%BB%9C%E7%BB%93%E6%9E%84>

2.1 Convolutional Layer
<https://blog.csdn.net/sinat_35821976/article/details/81503953#%E5%8D%B7%E7%A7%AF%E5%B1%82>

2.1.1 Padding
<https://blog.csdn.net/sinat_35821976/article/details/81503953#2.1.1%20Padding>

2.1.2 Stride
<https://blog.csdn.net/sinat_35821976/article/details/81503953#2.1.2%20Stride>

2.1.3 Multi-Channel Computation
<https://blog.csdn.net/sinat_35821976/article/details/81503953#2.1.3%20%E5%A4%9A%E9%80%9A%E9%81%93%E8%AE%A1%E7%AE%97>

2.2 Pooling Layer
<https://blog.csdn.net/sinat_35821976/article/details/81503953#%E6%B1%A0%E5%8C%96%E5%B1%82>

2.2.1 Max Pooling
<https://blog.csdn.net/sinat_35821976/article/details/81503953#2.2.1%20%E6%9C%80%E5%A4%A7%E6%B1%A0%E5%8C%96>

2.2.2 Average Pooling
<https://blog.csdn.net/sinat_35821976/article/details/81503953#2.2.2%20%E5%B9%B3%E5%9D%87%E6%B1%A0%E5%8C%96>

2.3 Fully Connected Layer
<https://blog.csdn.net/sinat_35821976/article/details/81503953#%E5%85%A8%E8%BF%9E%E6%8E%A5%E5%B1%82>

3. Code Example
<https://blog.csdn.net/sinat_35821976/article/details/81503953#%E4%BB%A3%E7%A0%81%E5%AE%9E%E4%BE%8B>

1. Basic Introduction

For the basics of convolutional neural networks, see: Machine Learning Algorithms: Convolutional Neural Networks
<https://blog.csdn.net/sinat_35821976/article/details/78700377>

2. Network Structure

A convolutional neural network generally consists of convolutional layers, pooling layers, and fully connected layers. Each is introduced below.

2.1 Convolutional Layer

The convolution used in convolutional neural networks differs slightly from convolution in signal processing. In signal processing, convolution involves flipping (mirroring), multiplying, and summing; the convolution in a convolutional layer skips the flipping step and simply multiplies and sums, as shown in the figure below.




On the far left is the input, in the middle is the convolution kernel, and on the far right is the output. The computation is simple: multiply the kernel with the corresponding positions of the input and sum the products. Besides the green example in the figure, we can work out the result for the red circle: (-1)*2+(-1)*9+(-1)*2+1*4+1*4 = -5. That is the whole convolution computation. For the full input, the result also depends on two parameters, padding and stride, which are introduced below.
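Before moving on to those two parameters, here is a minimal NumPy sketch of the multiply-and-sum operation just described (the input and kernel values are made up for illustration; this is plain cross-correlation with stride 1 and no padding, not TensorFlow's API):

import numpy as np

def conv2d_plain(x, k):
    # Slide the kernel over the input, multiply elementwise and sum (no flipping).
    n, f = x.shape[0], k.shape[0]
    out = np.zeros((n - f + 1, n - f + 1))
    for i in range(n - f + 1):
        for j in range(n - f + 1):
            out[i, j] = np.sum(x[i:i+f, j:j+f] * k)
    return out

x = np.arange(25).reshape(5, 5)        # made-up 5x5 input
k = np.array([[1, 0, -1]] * 3)         # made-up 3x3 kernel
print(conv2d_plain(x, k))              # output is (5-3+1) x (5-3+1) = 3x3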

2.1.1 Padding


Padding is an operation that appears in many places. In encryption, for example, padding is added to a plaintext that is too short so that it matches the required block length. The idea is the same here: something is added around the original input so that its size fits the subsequent operation, as shown below:


In TensorFlow, padding has two modes. One is VALID, which applies no padding: if the input is n*n and the kernel is f*f, the output size is (n-f+1)*(n-f+1). The other is SAME, which keeps the output the same size as the input: if the padding size is p, then keeping the output equal to the input requires p = (f-1)/2, which in turn requires the kernel size f to be odd.
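As a quick illustrative check of these two modes (the numbers are made up; this is plain arithmetic, not TensorFlow code):

n, f = 28, 5                    # e.g. a 28*28 input and a 5*5 kernel
valid_out = n - f + 1           # VALID: 24
p = (f - 1) // 2                # SAME needs p = 2 (f must be odd for p to be an integer)
same_out = n + 2 * p - f + 1    # 28, the same as the input
print(valid_out, p, same_out)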

2.1.2 Stride

Stride is the distance the kernel moves over the input at each step; the figure makes this clear: moving along the red boxes corresponds to stride = 1, while moving along the blue boxes corresponds to stride = 2. Once stride is added, the output size changes. If the input is n*n, the kernel is f*f, the padding is p, and the stride is s, the final output size (per side) is:

⌊(n + 2p - f) / s⌋ + 1
As a code example, the two lines below perform convolution in TensorFlow, where x is the input and W is the weights. The strides argument has the form strides = [b, h, w, c]: b is the stride over samples (normally 1, so every sample is used); h and w are the strides along the height and width, i.e. the vertical and horizontal step sizes; c is the stride over channels (normally 1, so every channel takes part).
tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
tf.nn.conv2d(x, W, strides=[1, 2, 2, 1], padding='VALID')
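The following small helper simply evaluates the output-size formula above (an illustrative sketch, not part of TensorFlow; the numbers are only examples):

def conv_output_size(n, f, p=0, s=1):
    # Output side length for an n*n input, f*f kernel, padding p and stride s.
    return (n + 2 * p - f) // s + 1

print(conv_output_size(7, 3, p=0, s=1))   # 5  (VALID, stride 1)
print(conv_output_size(7, 3, p=1, s=1))   # 7  (SAME, stride 1)
print(conv_output_size(7, 3, p=1, s=2))   # 4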
2.1.3 Multi-Channel Computation


Besides height and width, a convolution kernel also has a number of channels. The key point is that the channel count of a single kernel must equal the channel count of the image; for an RGB image, the kernel therefore has size h*w*3. Borrowing the illustration from Andrew Ng's course: with a single kernel, the convolved image has one output channel, and the computation is straightforward: multiply each channel elementwise at the corresponding positions, then sum across channels.



In general there is more than one kernel. The multi-kernel case is not complicated either: apply the single-kernel operation for each kernel, then stack the results together, as shown in the figure:
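A minimal NumPy sketch of this (the input and kernels are random, made-up data; shapes follow the H*W*C convention used above):

import numpy as np

def conv_multi_kernel(x, kernels):
    # x: (H, W, C) input; kernels: (num_kernels, f, f, C).
    # Each kernel produces one output channel; outputs are stacked along the last axis.
    H, W, C = x.shape
    num_k, f = kernels.shape[0], kernels.shape[1]
    out = np.zeros((H - f + 1, W - f + 1, num_k))
    for k in range(num_k):
        for i in range(H - f + 1):
            for j in range(W - f + 1):
                # multiply over all channels at this position, then sum everything
                out[i, j, k] = np.sum(x[i:i+f, j:j+f, :] * kernels[k])
    return out

x = np.random.rand(6, 6, 3)                 # made-up RGB-like input
kernels = np.random.rand(2, 3, 3, 3)        # two 3*3*3 kernels
print(conv_multi_kernel(x, kernels).shape)  # (4, 4, 2): one output channel per kernel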



2.2 Pooling Layer

The main purpose of the pooling layer is dimensionality reduction: it downsamples the output of the convolution. There are two common kinds, max pooling and average pooling.

2.2.1 Max Pooling

Max pooling, as the name suggests, downsamples by taking the maximum value. In the figure, the pooling window is 2*2 and the stride is also 2*2.



2.2.2 Average Pooling

Average pooling uses the local mean as the sampled value. For the same data as above, the result after average pooling is:



Again, here is the corresponding code. The ksize argument works much like strides, so it is not described again.
tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
tf.nn.avg_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
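For a concrete feel of the two pooling types, here is a small NumPy sketch with a made-up 4*4 input, a 2*2 window and stride 2 (illustrative only, not the TensorFlow implementation):

import numpy as np

def pool2d(x, size=2, stride=2, mode="max"):
    # Downsample x (H, W) with a size*size window; "max" or "avg" picks the pooling type.
    H, W = x.shape
    out_h, out_w = (H - size) // stride + 1, (W - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = x[i*stride:i*stride+size, j*stride:j*stride+size]
            out[i, j] = window.max() if mode == "max" else window.mean()
    return out

x = np.array([[1, 3, 2, 4],
              [5, 6, 7, 8],
              [3, 2, 1, 0],
              [1, 2, 3, 4]], dtype=float)
print(pool2d(x, mode="max"))   # [[6. 8.] [3. 4.]]
print(pool2d(x, mode="avg"))   # [[3.75 5.25] [2.   2.  ]]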
2.3 Fully Connected Layer

The fully connected layer flattens the output of the convolution and pooling layers into a one-dimensional vector and feeds it into an ordinary regression or classification network; it usually follows a pooling layer, as shown in the figure:



This is easier to understand with code: dims is the length of the flattened pooling output, and the second line uses ReLU as the activation function of the fully connected layer.
fc = tf.reshape(pool_out, [-1, dims])
fc_out = tf.nn.relu(tf.matmul(fc, W_fc) + b_fc)
3. Code Example

Below, AlexNet is used as an example to give a complete convolutional network architecture. The AlexNet architecture and the features learned by each part are shown in the figure below.



The code is as follows:
import tensorflow as tf
import numpy as np

# layer helpers
# max-pooling layer
def maxPoolLayer(x, kHeight, kWidth, strideX, strideY, name, padding="SAME"):
    """max-pooling"""
    return tf.nn.max_pool(x, ksize=[1, kHeight, kWidth, 1],
                          strides=[1, strideX, strideY, 1], padding=padding, name=name)
# dropout
def dropout(x, keepPro, name=None):
    """dropout"""
    return tf.nn.dropout(x, keepPro, name=name)
# local response normalization layer
def LRN(x, R, alpha, beta, name=None, bias=1.0):
    """LRN"""
    return tf.nn.local_response_normalization(x, depth_radius=R, alpha=alpha,
                                              beta=beta, bias=bias, name=name)
# fully connected layer
def fcLayer(x, inputD, outputD, reluFlag, name):
    """fully-connect"""
    with tf.variable_scope(name) as scope:
        w = tf.get_variable("w", shape=[inputD, outputD], dtype="float")
        b = tf.get_variable("b", [outputD], dtype="float")
        out = tf.nn.xw_plus_b(x, w, b, name=scope.name)
        if reluFlag:
            return tf.nn.relu(out)
        else:
            return out
# convolutional layer (groups > 1 gives AlexNet's grouped convolution)
def convLayer(x, kHeight, kWidth, strideX, strideY, featureNum, name,
              padding="SAME", groups=1):
    """convolution"""
    channel = int(x.get_shape()[-1])
    conv = lambda a, b: tf.nn.conv2d(a, b, strides=[1, strideY, strideX, 1], padding=padding)
    with tf.variable_scope(name) as scope:
        w = tf.get_variable("w", shape=[kHeight, kWidth, channel // groups, featureNum])
        b = tf.get_variable("b", shape=[featureNum])
        xNew = tf.split(value=x, num_or_size_splits=groups, axis=3)
        wNew = tf.split(value=w, num_or_size_splits=groups, axis=3)
        featureMap = [conv(t1, t2) for t1, t2 in zip(xNew, wNew)]
        mergeFeatureMap = tf.concat(axis=3, values=featureMap)
        out = tf.nn.bias_add(mergeFeatureMap, b)
        return tf.nn.relu(tf.reshape(out, mergeFeatureMap.get_shape().as_list()), name=scope.name)

class alexNet(object):
    """alexNet model"""
    def __init__(self, x, keepPro, classNum, skip, modelPath="bvlc_alexnet.npy"):
        self.X = x
        self.KEEPPRO = keepPro
        self.CLASSNUM = classNum
        self.SKIP = skip
        self.MODELPATH = modelPath
        # build CNN
        self.buildCNN()

    # build AlexNet
    def buildCNN(self):
        """build model"""
        conv1 = convLayer(self.X, 11, 11, 4, 4, 96, "conv1", "VALID")
        lrn1 = LRN(conv1, 2, 2e-05, 0.75, "norm1")
        pool1 = maxPoolLayer(lrn1, 3, 3, 2, 2, "pool1", "VALID")
        conv2 = convLayer(pool1, 5, 5, 1, 1, 256, "conv2", groups=2)
        lrn2 = LRN(conv2, 2, 2e-05, 0.75, "lrn2")
        pool2 = maxPoolLayer(lrn2, 3, 3, 2, 2, "pool2", "VALID")
        conv3 = convLayer(pool2, 3, 3, 1, 1, 384, "conv3")
        conv4 = convLayer(conv3, 3, 3, 1, 1, 384, "conv4", groups=2)
        conv5 = convLayer(conv4, 3, 3, 1, 1, 256, "conv5", groups=2)
        pool5 = maxPoolLayer(conv5, 3, 3, 2, 2, "pool5", "VALID")
        fcIn = tf.reshape(pool5, [-1, 256 * 6 * 6])
        fc1 = fcLayer(fcIn, 256 * 6 * 6, 4096, True, "fc6")
        dropout1 = dropout(fc1, self.KEEPPRO)
        fc2 = fcLayer(dropout1, 4096, 4096, True, "fc7")
        dropout2 = dropout(fc2, self.KEEPPRO)
        self.fc3 = fcLayer(dropout2, 4096, self.CLASSNUM, True, "fc8")

    def loadModel(self, sess):
        """load model"""
        wDict = np.load(self.MODELPATH, encoding="bytes").item()
        # for each layer in the model
        for name in wDict:
            if name not in self.SKIP:
                with tf.variable_scope(name, reuse=True):
                    for p in wDict[name]:
                        if len(p.shape) == 1:  # bias
                            sess.run(tf.get_variable('b', trainable=False).assign(p))
                        else:  # weights
                            sess.run(tf.get_variable('w', trainable=False).assign(p))
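A minimal usage sketch of the class above (TensorFlow 1.x style, matching the code). The input shape, keep probability, class count, and random test image are illustrative assumptions, and the pretrained weights file bvlc_alexnet.npy has to be obtained separately:

# Hypothetical usage sketch; assumes the pretrained weights file is available.
x = tf.placeholder("float", [1, 227, 227, 3])         # AlexNet expects 227*227 RGB inputs
model = alexNet(x, keepPro=1.0, classNum=1000, skip=[])
softmax = tf.nn.softmax(model.fc3)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    model.loadModel(sess)                              # load weights from bvlc_alexnet.npy
    img = np.random.rand(1, 227, 227, 3)               # stand-in for a real preprocessed image
    probs = sess.run(softmax, feed_dict={x: img})
    print(np.argmax(probs))                            # index of the predicted class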
 
