Original link <http://www.asimovinstitute.org/neural-network-zoo/>

Feed-forward neural networks (FF or FFNN) and perceptrons (P)

Feed-forward neural networks and perceptrons are very straightforward: they feed information from the front to the back (input and output, respectively). Neural networks are usually described as having layers, where each layer consists of input, hidden or output cells. A single layer never has connections within itself, and usually two adjacent layers are fully connected (every neuron in one layer connects to every neuron in the other layer). The simplest, somewhat practical network has two input cells and one output cell, which can be used to model logic gates. FFNNs are usually trained through back-propagation, giving the network paired datasets of "inputs" and "expected outputs". This is called supervised learning, as opposed to unsupervised learning. The error being back-propagated can be, for example, the mean squared error (MSE) or a linear error. Given enough hidden neurons, the network can in theory always model the relationship between the input and the output. Practically, the use of FFNNs on their own is rather limited, but they are often combined with other networks to form new types of networks.
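As a minimal sketch of the "two input cells, one output cell" logic-gate network described above (my own illustrative example, not code from the original article), here is a single perceptron trained with the classic perceptron learning rule to behave as an AND gate:

```python
# A single perceptron: two inputs, one output, a step activation.
def step(x):
    return 1 if x > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]          # supervised "expected outputs" of an AND gate

w = [0.0, 0.0]
b = 0.0
lr = 0.1

for epoch in range(10):
    for (x1, x2), t in zip(inputs, targets):
        y = step(w[0] * x1 + w[1] * x2 + b)
        # The error (t - y) is fed back into the weights.
        w[0] += lr * (t - y) * x1
        w[1] += lr * (t - y) * x2
        b += lr * (t - y)

preds = [step(w[0] * x1 + w[1] * x2 + b) for x1, x2 in inputs]
print(preds)  # [0, 0, 0, 1]
```

With these starting weights the rule converges within a handful of epochs; the trained unit reproduces the AND truth table.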


Hopfield networks (HN)

Every neuron in a Hopfield network is connected to every other neuron; it is a completely entangled plate of spaghetti. Each node functions as an input cell before training, as a hidden cell during training, and as an output cell afterwards. The network is trained by setting the value of the neurons to the desired pattern, after which the weights can be computed; the weights do not change after that. Once trained for one or more patterns, the network will always converge to one of the learned patterns, because the network is only stable in those states. Note that it does not always conform to the desired state. It stabilises in part thanks to the total "energy" or "temperature" of the network being reduced incrementally during training.
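A toy sketch of this idea (my own illustrative example; the 8-neuron pattern is an arbitrary choice): store one bipolar pattern with the one-shot Hebbian rule, corrupt it, and let the frozen-weight network converge back to the learned state:

```python
import numpy as np

# One stored pattern of 8 bipolar (+1/-1) neurons.
pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])
n = len(pattern)

# "Training": compute the weights once from the desired pattern, then freeze them.
W = np.outer(pattern, pattern) / n
np.fill_diagonal(W, 0)          # no self-connections

# Start from a corrupted version of the pattern (one flipped neuron).
state = pattern.copy()
state[0] = -state[0]

# Repeated updates converge to the learned pattern, a stable state of the net.
for _ in range(5):
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, pattern))  # True
```

With a single stored pattern the corrupted state is pulled back to it in one update; with several stored patterns the network converges to whichever learned state is nearest, which, as noted above, is not always the one you wanted.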


Convolutional neural networks (CNN or DCNN)

Convolutional neural networks are quite different from most other types of networks. They were originally used for image processing, but were later also applied to other types of input, such as audio. A typical use case for CNNs is classification: feed the network an image of a cat, and it outputs the label "cat". CNNs tend to start with an input "scanner", which is not intended to parse all of the training data at once. For example, to input an image of 200*200 pixels, you would not want a layer with 40 000 nodes. Rather, you create a scanning input layer of, say, 20*20, and feed it the first 20*20 pixels of the image. Once you have passed that input, you move the scanner one pixel to the right and feed it the next 20*20 pixels, until you have scanned the entire image. Note that you are not moving the input in 20*20 steps, nor chopping the image into separate 20*20 blocks; rather, you are crawling over it with the 20*20 scanner. This input data is then fed through convolutional layers instead of normal layers, where not all nodes are connected to all other nodes. Each node only concerns itself with close neighbouring cells (how close depends on the implementation, but usually not many). These convolutional layers also tend to shrink as they become deeper, mostly by easily divisible factors of the input size (so 20 would probably become 10, then 5). Powers of two are frequently used here, because they divide cleanly: 32, 16, 8, 4, 2, 1. Besides these convolutional layers, CNNs also often feature pooling layers. Pooling is a way to filter out details: the most common pooling technique is max pooling, where we take, say, 2*2 pixels and pass on only the pixel with the largest value (e.g. the most red). To apply CNNs to audio, you feed in segments of the input audio wave, inching forward little by little.
In practice, CNNs are often capped off with an FFNN at the end to further process the data, which allows them to handle highly non-linear abstractions (e.g. classification). These CNN+FFNN networks are often called DCNNs, but the names and abbreviations DCNN and CNN are usually interchangeable.
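The scanner and max-pooling steps described above can be sketched in a few lines of NumPy (an illustrative sketch; the image and window sizes here are assumptions, not from the original article):

```python
import numpy as np

def sliding_windows(img, size, stride=1):
    """Crawl a size*size scanner over the image, one stride at a time."""
    h, w = img.shape
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, stride)
            for j in range(0, w - size + 1, stride)]

def max_pool_2x2(img):
    """Max pooling: keep only the largest value of every 2*2 block."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)      # a tiny stand-in "image"
patches = sliding_windows(img, 2)        # 9 overlapping 2*2 patches
pooled = max_pool_2x2(img)               # a 2*2 summary of the 4*4 image

print(len(patches))  # 9
print(pooled)        # [[ 5.  7.] [13. 15.]]
```

Note how the scanner produces overlapping patches (it moves one pixel at a time), while pooling partitions the image into disjoint blocks and discards everything but each block's maximum.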


Deconvolutional networks (DN)

Deconvolutional networks (DN), also called inverse graphics networks, are convolutional neural networks in reverse. Feed this network the word "cat" and train it by comparing the image it generates to real pictures of cats, and it can learn to produce a picture that it thinks matches the input. DNs can be combined with FFNNs.
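One common way to realise this "convolution in reverse" is a transposed convolution, where every input value stamps a scaled copy of a kernel onto a larger output. A rough sketch (my own illustration; the kernel, stride, and sizes are arbitrary choices, not from the original article):

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Upsample x: each input cell stamps kernel * x[i, j] into the output."""
    n, k = x.shape[0], kernel.shape[0]
    out = np.zeros(((n - 1) * stride + k,) * 2)
    for i in range(n):
        for j in range(n):
            out[i * stride:i * stride + k,
                j * stride:j * stride + k] += x[i, j] * kernel
    return out

x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = transposed_conv2d(x, np.ones((2, 2)))   # 2*2 input becomes a 4*4 output
print(y)
# [[1. 1. 2. 2.]
#  [1. 1. 2. 2.]
#  [3. 3. 4. 4.]
#  [3. 3. 4. 4.]]
```

A convolution shrinks its input; this operation grows it, which is what lets a DN go from a compact description like "cat" back up to an image-sized output.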


Generative adversarial networks (GAN)

Generative adversarial networks are a different breed of network: they are twins, two networks working together. A GAN consists of any two networks (usually a combination of FFs and CNNs), where one is tasked with generating content and the other with judging it. The discriminating network receives either training data or content produced by the generating network. How well the discriminating network predicts whether the data is real counts as part of the error of the generating network. This creates a form of competition: the discriminator gets better and better at distinguishing real data from generated data, while the generator learns to produce data that is less and less predictable to the discriminator. This works well in part because even quite complex noise-like patterns are eventually predictable, whereas generated content sharing features with the input data is much harder to tell apart. GANs are quite difficult to train, because you are not just training two networks (each of which has its own problems) but also managing the dynamic equilibrium between them. If either the discriminator or the generator becomes much better than the other, the GAN will not converge, since there is an intrinsic divergence between the two networks.
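The alternating-update dynamic described above can be sketched with a deliberately tiny GAN: a two-parameter generator trying to match a 1-D Gaussian, against a logistic discriminator, with the gradients written out by hand (everything here, including the data distribution and learning rate, is an assumption for illustration):

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Discriminator D(x) = sigmoid(w*x + b); generator samples mu + s*z, z ~ N(0, 1).
# Real data comes from N(3, 0.5); the generator starts at N(0, 1).
w, b = 0.1, 0.0
mu, s = 0.0, 1.0
lr = 0.01

for step in range(8000):
    real = random.gauss(3.0, 0.5)
    z = random.gauss(0.0, 1.0)
    fake = mu + s * z

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake) (the non-saturating generator loss).
    d_fake = sigmoid(w * fake + b)
    gx = (1 - d_fake) * w              # d log D(fake) / d fake
    mu += lr * gx
    s += lr * gx * z

print("generator mean after training:", round(mu, 2))
```

The generator only ever sees the discriminator's opinion of its samples, never the real data directly, yet its mean drifts toward the real mean of 3. The oscillation you see if you plot `mu` over time is a miniature version of the fragile equilibrium described above.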


Recurrent neural networks (RNN)

Recurrent neural networks are FFNNs with a time twist: they are not stateless; they have connections through time. Neurons are fed information not just from the previous layer, but also from themselves in the previous pass. This means that the order in which you feed and train the network matters: feeding it "milk" and then "cookies" may yield different results than feeding it "cookies" and then "milk". One big problem with RNNs is the vanishing (or exploding) gradient problem: depending on the activation function used, information rapidly gets lost over time, just as very deep FFNNs lose information with depth. Intuitively this would not seem like a big problem, because these are just weights and not neuron states, but the weights through time are actually where information from the past is stored; if a weight reaches 0 or 1 000 000, the previous state will not be very informative. RNNs can in principle be used in many fields, because most forms of data that do not actually have an intrinsic timeline (i.e. unlike sound or video) can still be represented as a sequence. A picture or a string of text can be fed one pixel or one character at a time, so the time-dependent weights are used over what came earlier in the sequence, not over what happened some seconds earlier. In general, recurrent networks are a good choice for advancing or completing information, such as autocompletion.
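The "milk then cookies" order-dependence is easy to demonstrate with a single recurrent layer (a toy sketch with random, untrained weights; the vocabulary and layer sizes are assumptions of mine):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: 3-dimensional one-hot "words", 4 hidden units.
W_in = rng.normal(size=(4, 3))   # input -> hidden weights
W_h = rng.normal(size=(4, 4))    # hidden -> hidden (recurrent) weights

def run(sequence):
    """Feed the sequence through one recurrent layer; return the final state."""
    h = np.zeros(4)
    for x in sequence:
        # Each step sees the current input AND the state from the previous pass.
        h = np.tanh(W_in @ x + W_h @ h)
    return h

milk = np.array([1.0, 0.0, 0.0])
cookies = np.array([0.0, 1.0, 0.0])

h1 = run([milk, cookies])
h2 = run([cookies, milk])
print(np.allclose(h1, h2))  # False: the two orderings leave different states
```

Because the hidden state is folded back in at every step, the same two inputs in a different order end in different states; a stateless FFNN given the same bag of inputs could not tell the orderings apart.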