Preface: This post is based on the TensorFlow framework. It builds a BP neural network with a single hidden layer and uses it for image recognition. The content is fairly simple; these are all my study notes.

------------------------------------------------------------------------

1. Load the MNIST dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
mnist is a lightweight class. It stores the training, validation, and test sets as NumPy arrays. MNIST is also the classic Google dataset for image recognition. MNIST dataset download link: <https://pan.baidu.com/s/1d9ty82> Password: jcam
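For a quick sanity check, the three splits and their array shapes can be printed directly. A minimal sketch, assuming the TensorFlow 1.x tutorials API used throughout this post:

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
print(mnist.train.images.shape)       # (55000, 784) - flattened 28*28 images
print(mnist.train.labels.shape)       # (55000, 10)  - one-hot labels
print(mnist.validation.images.shape)  # (5000, 784)  - validation split
print(mnist.test.images.shape)        # (10000, 784) - test split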

2. Start the TensorFlow framework
import tensorflow as tf
sess = tf.InteractiveSession()
init = tf.global_variables_initializer()
sess.run(init)
The connection between the TensorFlow framework and its backend is called a session; in other words, we use a session to start up the TensorFlow framework (I still need to learn more about this).
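To make the session idea concrete, here is a minimal sketch (my own toy example, not from the original program) showing that nothing is computed until the session runs the graph:

import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b                       # only builds a graph node; nothing runs yet
sess = tf.InteractiveSession()  # opens the connection to the backend
print(sess.run(c))              # 6.0 - the session actually executes the graph
sess.close()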

3. Predefine the input X and the true value Y
X = tf.placeholder(tf.float32, shape=[None, 784])
Y = tf.placeholder(tf.float32, shape=[None, 10])
* X and Y are represented by placeholders for now; when TensorFlow runs a computation, it works with the concrete values fed into the placeholders.
* tf.float32 is the storage type; shape=[None, 784] is the data dimension. Each picture in the MNIST dataset is 28*28, and at computation time the 28*28 two-dimensional data is flattened into a one-dimensional vector of length 784 (a small flattening sketch follows this list). None means the size of that dimension is variable, i.e. the number of X, Y samples fed in can be anything.
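As a sketch of the flattening described above (the random 28*28 array is just a stand-in for a real MNIST image):

import numpy as np

image = np.random.rand(28, 28).astype(np.float32)  # stand-in for one MNIST image
flat = image.reshape(1, 784)                        # 1 sample, 784 features
# At run time the placeholder receives concrete values through feed_dict:
# sess.run(y, feed_dict={X: flat})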
4. Build the BP neural network
""" The way of generating with random sequence, Creating a neural network with a hidden layer.(784,300,10) """
#truncated_normal: Select the mean value of normal distribution=0.1 Nearby random value w1 = tf.Variable(tf.truncated_normal([784
,300],stddev=0.1)) w2 = tf.Variable(tf.zeros([300,10])) b1 =
tf.Variable(tf.zeros([300])) b2 = tf.Variable(tf.zeros([10]))
#relu,softmax Are activation functions L1 = tf.nn.relu(tf.matmul(X,w1)+b1) y =
tf.nn.softmax(tf.matmul(L1,w2)+b2)

The input layer of the BP neural network has 784 neurons, the hidden layer 300, and the output layer 10. We initialize the weights w1, w2 and the biases b1, b2 for each layer: w1 is drawn from a truncated-normal random sequence, while w2 and both biases start at zero. We then define how the hidden layer and the output layer are computed, together with their activation functions.
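As a sanity check on these definitions, the total number of trainable parameters is 784*300 + 300 + 300*10 + 10 = 238,510. A small sketch that verifies this, assuming the variables above have already been created:

import numpy as np

# Assumes w1, w2, b1, b2 from the code above are already defined.
n_params = sum(np.prod(v.get_shape().as_list()) for v in tf.trainable_variables())
print(n_params)  # 238510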

5. Compute the error and optimize the weights with gradient descent
# Quadratic cost function: the error between the prediction y and the true value Y
loss = tf.reduce_mean(tf.square(Y - y))
# Gradient descent: GradientDescentOptimizer with a learning rate of 0.5
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

The error is also called the loss function or cost function. TensorFlow has many built-in optimization algorithms; here we choose the simplest one, the GradientDescentOptimizer, to reduce the quadratic loss, with the learning rate (step size) set to 0.5.
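Note that this program minimizes the quadratic cost, not cross-entropy. For softmax outputs, cross-entropy is the more common choice and usually trains faster. A hedged sketch of that alternative (not what this post's program uses), reusing the L1, w2, b2, Y names from above:

# Alternative loss: cross-entropy on the raw logits;
# tf.nn.softmax_cross_entropy_with_logits applies softmax internally.
logits = tf.matmul(L1, w2) + b2
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)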

6. Compute the accuracy
# The results are stored in a Boolean list
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(Y, 1))
# Accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
* tf.argmax(): returns the index of the maximum value of a tensor along a given dimension. Because the label vectors here are one-hot (all 0s and a single 1), the index of the maximum value 1 is the corresponding class label.
* tf.argmax(y,1) returns the predicted label for any input x, while tf.argmax(Y,1) is the correct label.
* correct_prediction is therefore a Boolean array. To compute the classification accuracy, we cast the Boolean values to floats (1.0 for right, 0.0 for wrong) and take the mean. For example: [True, False, True, True] becomes [1, 0, 1, 1], giving an accuracy of 0.75.
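The same pipeline can be mirrored in plain NumPy as a worked example (my own toy values):

import numpy as np

y_pred = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.4, 0.6]])  # predictions
y_true = np.array([[0, 1], [1, 0], [1, 0], [0, 1]])                  # one-hot labels
correct = np.argmax(y_pred, 1) == np.argmax(y_true, 1)  # [True, True, False, True]
print(correct.astype(np.float32).mean())                # 0.75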
7. Other notes
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
sess.run(train_step, feed_dict={X: batch_xs, Y: batch_ys})
acc = sess.run(accuracy, feed_dict={X: mnist.test.images, Y: mnist.test.labels})
* batch_xs and batch_ys: one batch drawn from the MNIST dataset, containing the data items and the label items.
* feed_dict={X: batch_xs, Y: batch_ys}: passes the values of batch_xs and batch_ys in as X and Y.
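For reference, a sketch of what one next_batch call returns (assuming batch_size = 500, as in the full program below, and the mnist object from section 1):

batch_xs, batch_ys = mnist.train.next_batch(500)
print(batch_xs.shape)  # (500, 784) - image data for the batch
print(batch_ys.shape)  # (500, 10)  - one-hot labels for the batch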
Source code and results
# -*- coding:utf-8 -*-
# -*- author:zzZ_CMing
# -*- 2018/01/23;21:49
# -*- python3.5
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# Read the MNIST dataset
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
# Size of each batch
batch_size = 500
# Number of batches (floor division)
n_batch = mnist.train.num_examples // batch_size
# Predefine the input X and the true output Y with placeholders
X = tf.placeholder(tf.float32, [None, 784])
Y = tf.placeholder(tf.float32, [None, 10])
# Create a neural network with one hidden layer (784, 300, 10)
# truncated_normal: random values from a truncated normal distribution (stddev=0.1)
w1 = tf.Variable(tf.truncated_normal([784, 300], stddev=0.1))
w2 = tf.Variable(tf.zeros([300, 10]))
b1 = tf.Variable(tf.zeros([300]))
b2 = tf.Variable(tf.zeros([10]))
# relu and softmax are activation functions
L1 = tf.nn.relu(tf.matmul(X, w1) + b1)
y = tf.nn.softmax(tf.matmul(L1, w2) + b2)
# Quadratic cost function: the error between the prediction and the true value
loss = tf.reduce_mean(tf.square(Y - y))
# Gradient descent: GradientDescentOptimizer with a learning rate of 0.5
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# The results are stored in a Boolean list
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(Y, 1))
# Accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Initialize the variables and start the TF session
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(21):
    for batch in range(n_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        sess.run(train_step, feed_dict={X: batch_xs, Y: batch_ys})
    acc = sess.run(accuracy, feed_dict={X: mnist.test.images, Y: mnist.test.labels})
    print("Iter " + str(i) + ", Testing Accuracy " + str(acc))
Results:


* By the 10th iteration the accuracy has reached 90.85%. For today's technology this is still relatively low: CNN convolutional neural networks can already reach over 98% accuracy on more complex image recognition, so I need to take my study further.
* The program runs slowly; let's all keep learning.
------------------------------------------------------------------------

Series recommendations:

【Supervised learning】1: Three methods of handwritten digit recognition based on the KNN algorithm
<https://blog.csdn.net/zzz_cming/article/details/78938107>
------------------------------------------------------------------------
【Unsupervised learning】1: An introduction to the K-means algorithm, with code implementation
<https://blog.csdn.net/zzz_cming/article/details/79859490>
【Unsupervised learning】2: An introduction to the DBSCAN algorithm, with code implementation
<https://blog.csdn.net/zzz_cming/article/details/79863036>
【Unsupervised learning】3: The Density Peaks clustering algorithm (local density clustering)
<https://blog.csdn.net/zzz_cming/article/details/79889909>
------------------------------------------------------------------------
【Deep learning】1: Perceptron principles, and solving the XOR problem with a multi-layer perceptron
<https://blog.csdn.net/zzz_cming/article/details/79031869>
【Deep learning】2: BP neural network principles, and solving the XOR problem
<https://blog.csdn.net/zzz_cming/article/details/79118894>
【Deep learning】3: BP neural network recognition of the MNIST dataset
<https://blog.csdn.net/zzz_cming/article/details/79136928>
【Deep learning】4: BP neural network + sklearn digit recognition
<https://blog.csdn.net/zzz_cming/article/details/79182103>
【Deep learning】5: CNN convolutional neural network principles, MNIST dataset recognition
<https://blog.csdn.net/zzz_cming/article/details/79192815>
【Deep learning】8: CNN convolutional neural network recognition of the sklearn dataset (with source code)
<https://blog.csdn.net/zzz_cming/article/details/79691459>
【Deep learning】6: RNN recurrent neural network principles, MNIST dataset recognition
<https://blog.csdn.net/zzz_cming/article/details/79235475>
【Deep learning】7: Hopfield neural network (DHNN) principles
<https://blog.csdn.net/zzz_cming/article/details/79289502>
------------------------------------------------------------------------
A brief introduction to the TensorFlow framework <https://blog.csdn.net/zzz_cming/article/details/79235469>
------------------------------------------------------------------------