trainview
WHAT IS IT?
This is a model of a very small neural network. It is based on the Perceptron model, but instead of one layer, this network has two layers of perceptron-like units. Furthermore, the layers activate each other in a nonlinear way. Together, these two additions mean the network can learn operations that a single layer cannot.
The goal of the network is to take input from the input nodes on the far left and classify it appropriately at the output node on the far right. It does this by attempting to classify many examples while a supervisor tells it whether each classification was right or wrong. Based on this feedback, the neural network updates its weights until it classifies all inputs correctly.
HOW IT WORKS
Initially, the weights on the links of the network are random.
The nodes on the left are called the input nodes, the nodes in the middle are called the hidden nodes, and the node on the right is called the output node.
The activation values of the input nodes are the inputs to the network. The activation value of each hidden node is computed by taking the activation values of the input nodes, multiplying each by its link weight, summing the results, and passing the sum through the sigmoid function. Similarly, the activation value of the output node is computed from the activation values of the hidden nodes, multiplied by their link weights, summed, and passed through the sigmoid function. The output of the network is 1 if the activation of the output node is greater than 0.5, and 0 otherwise.
The sigmoid function maps negative inputs to values between 0 and 0.5, and positive inputs to values between 0.5 and 1. Its output increases smoothly and nonlinearly from 0 to 1, with its steepest slope around an input of 0, where the output is 0.5.
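The forward pass described above can be sketched in Python (the model itself is written in NetLogo, and the weight values below are made up for illustration; a real run would learn them):

```python
import math

def sigmoid(x):
    # Maps negative inputs into (0, 0.5) and positive inputs into (0.5, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each node: weighted sum of incoming activations plus bias, then sigmoid.
    return [sigmoid(sum(i * w for i, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

inputs = [1, 0]
hidden = layer(inputs, [[0.1, -0.2], [0.3, 0.4]], [0.05, -0.05])
output = layer(hidden, [[0.2, -0.1]], [0.1])[0]
net_output = 1 if output > 0.5 else 0   # threshold the output node at 0.5
```

Every activation stays strictly between 0 and 1; only the final thresholding turns it into a 0/1 classification.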
In order for the network to learn anything, it needs to be trained. In this example, the training algorithm used is called the backpropagation algorithm. It consists of two phases: propagate and backpropagate. The propagate phase was described above: it propagates the activation values of the input nodes to the output node of the network. In the backpropagate phase, the error in the produced value is passed back through the network layer by layer.
In the backpropagation phase, the error is first calculated as the difference between the correct (expected) output and the actual output of the network. Since all of the hidden nodes connected to the output contribute to the error, all of the weights need to be updated. To do this, we calculate how much each node contributed to the overall error at the output. This is done by calculating a local gradient for each node, excluding the input nodes (since the input is the activation we provide to the network and thus has no error associated with it).
The local gradients are calculated layer by layer. For the output node, the local gradient is the error multiplied by the result of passing the node's activation value through the derivative of the activation function. Since, in this model, the activation function is the sigmoid function, its simplified derivative ends up being:
activation_value * (1 - activation_value)
If we wished to use a different activation function, we would use the derivative of that function instead.
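The simplified form can be checked numerically in Python by comparing it against a central-difference approximation of the sigmoid's slope:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Compare the analytic form a * (1 - a), with a = sigmoid(x),
# against a central-difference numerical derivative at x.
x = 0.7
a = sigmoid(x)
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
analytic = a * (1 - a)
assert abs(numeric - analytic) < 1e-8
```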
For each hidden node, the local gradient is calculated as follows:
For each output node connected to the hidden node, multiply its local gradient with the weight of the link connecting them;
Sum all the results from the previous step;
Multiply that sum with the result of passing the activation value of the hidden node to the derivative of the activation function.
To update the weight of each link, we multiply the learning rate by the local gradient of end2 (the node at the destination of the link; for a link connecting a hidden node to the output node, this is the output node) and by the activation value of end1 (the node at the source of the link; for the same link, this is the hidden node). The result is then added to the old weight.
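A minimal Python sketch of the whole propagate/backpropagate cycle for this model's 2-2-1 topology, trained here on the OR function (variable names, the learning rate, and the training loop length are illustrative, not taken from the model):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
LEARNING_RATE = 0.5          # illustrative value

# 2-2-1 topology: w_ih[i][h] links input i to hidden h; w_ho[h] links hidden h
# to the single output node. b_h and b_o play the role of the bias node's links.
w_ih = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(2)]
b_h  = [random.uniform(-0.1, 0.1) for _ in range(2)]
w_ho = [random.uniform(-0.1, 0.1) for _ in range(2)]
b_o  = random.uniform(-0.1, 0.1)

def propagate(inputs):
    hidden = [sigmoid(sum(inputs[i] * w_ih[i][h] for i in range(2)) + b_h[h])
              for h in range(2)]
    output = sigmoid(sum(hidden[h] * w_ho[h] for h in range(2)) + b_o)
    return hidden, output

def backpropagate(inputs, hidden, output, target):
    global b_o
    # Local gradient at the output node: error times sigmoid derivative.
    grad_o = (target - output) * output * (1 - output)
    # Local gradient at each hidden node: the output gradient weighted by the
    # connecting link, times the hidden node's own sigmoid derivative.
    grad_h = [grad_o * w_ho[h] * hidden[h] * (1 - hidden[h]) for h in range(2)]
    # Each link: new weight = old weight + rate * gradient(end2) * activation(end1).
    for h in range(2):
        w_ho[h] += LEARNING_RATE * grad_o * hidden[h]
        b_h[h]  += LEARNING_RATE * grad_h[h]
        for i in range(2):
            w_ih[i][h] += LEARNING_RATE * grad_h[h] * inputs[i]
    b_o += LEARNING_RATE * grad_o

# Repeat the propagate/backpropagate phases on random OR examples.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
for _ in range(20000):
    x, t = random.choice(examples)
    h_act, o_act = propagate(x)
    backpropagate(x, h_act, o_act, t)
```

After training, thresholding the output at 0.5 should classify all four OR inputs correctly.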
The propagate and backpropagate phases are repeated for each example shown to the network.
HOW TO USE IT
Press SETUP to create the network and initialize the weights to small random numbers.
Press TRAIN ONCE to run one epoch of training. The number of examples presented to the network during this epoch is controlled by the EXAMPLES-PER-EPOCH slider.
Press TRAIN to continually train the network.
In the view, the thicker a link, the larger the absolute value of its weight. If the link is red, its weight is positive; if the link is blue, its weight is negative.
If SHOW-WEIGHTS? is on then the links will be labeled with their weights.
To test the network, set INPUT-1 and INPUT-2, then press the TEST button. A dialog box will appear telling you whether or not the network was able to correctly classify the input that you gave it.
LEARNING-RATE controls how much the neural network will learn from any one example.
TARGET-FUNCTION allows you to choose which function the network is trying to solve.
THINGS TO NOTICE
Unlike the Perceptron model, this model is able to learn both OR and XOR. It can learn XOR because the hidden layer (the middle nodes) and the nonlinear activation allow the network to draw two lines separating the input space into positive and negative regions, whereas a perceptron with a linear activation can only draw a single line. As a result, one hidden node essentially learns the OR function (turn on if either input is on), while the other learns an exclusion function (turn on if both inputs are on) that is weighted negatively.
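This division of labor can be checked directly with boolean helpers (a hand-built stand-in for the trained hidden layer, not the model's actual learned weights):

```python
# Truth-table helpers standing in for what the two hidden nodes learn.
def OR(a, b):
    return 1 if (a or b) else 0

def AND(a, b):   # the "both inputs on" exclusion detector
    return 1 if (a and b) else 0

# Combining OR positively and AND negatively reproduces XOR.
def xor_from_hidden(a, b):
    return 1 if (OR(a, b) - AND(a, b)) > 0 else 0

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_hidden(a, b) == (a ^ b)
```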
However, unlike the Perceptron model, this model takes longer to learn any of the functions, including the simple OR function. This is because it has many more weights to learn. The Perceptron model had to learn three weights (the two input links and the bias link). This model has to learn nine weights (four input-to-hidden weights, two hidden-to-output weights, and three bias weights).
THINGS TO TRY
Manipulate the LEARNING-RATE parameter. Can you speed up or slow down the training?
Switch back and forth between OR and XOR several times during a run. Why does it take less time for the network to return to 0 error the longer the network runs?
EXTENDING THE MODEL
Add additional functions for the network to learn beside OR and XOR. This may require you to add additional hidden nodes to the network.
Backpropagation using gradient descent is considered somewhat unrealistic as a model of real neurons, because in a real neuronal system there is no way for the output node to pass its error back. Can you implement another weight-update rule that is more biologically plausible?
NETLOGO FEATURES
This model uses the link primitives. It also makes heavy use of lists.
RELATED MODELS
This is the second in the series of models devoted to understanding artificial neural networks. The first model is Perceptron.
CREDITS AND REFERENCES
The code for this model is inspired by the pseudo-code which can be found in Tom M. Mitchell's "Machine Learning" (1997).
See also Haykin (2009) Neural Networks and Learning Machines, Third Edition.
Thanks to Craig Brozefsky for his work in improving this model and to Marin Aglić Čuvić for info tab improvements.
HOW TO CITE
If you mention this model or the NetLogo software in a publication, we ask that you include the citations below.
For the model itself:
- Rand, W. and Wilensky, U. (2006). NetLogo Artificial Neural Net - Multilayer model. http://ccl.northwestern.edu/netlogo/models/ArtificialNeuralNet-Multilayer. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.
Please cite the NetLogo software as:
- Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.
COPYRIGHT AND LICENSE
Copyright 2006 Uri Wilensky.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
Commercial licenses are also available. To inquire about commercial licenses, please contact Uri Wilensky at uri@northwestern.edu.
links-own [ weight ]

breed [ bias-nodes bias-node ]
breed [ input-nodes input-node ]
breed [ output-nodes output-node ]
breed [ hidden-nodes hidden-node ]

turtles-own [
  activation  ;; Determines the nodes output
  err         ;; Used by backpropagation to feed error backwards
]

globals [
  epoch-error    ;; measurement of how many training examples the network got wrong in the epoch
  input-node-1   ;; keep the input and output nodes
  input-node-2   ;; in global variables so we can
  output-node-1  ;; refer to them directly
]

;;;
;;; SETUP PROCEDURES
;;;

to setup
  clear-all
  ask patches [ set pcolor gray ]
  set-default-shape bias-nodes "bias-node"
  set-default-shape input-nodes "circle"
  set-default-shape output-nodes "output-node"
  set-default-shape hidden-nodes "output-node"
  set-default-shape links "small-arrow-shape"
  setup-nodes
  setup-links
  propagate
  reset-ticks
end

to setup-nodes
  create-bias-nodes 1 [ setxy -6 14 ]
  ask bias-nodes [ set activation 1 set label activation set size 4 ]
  create-input-nodes 1 [ setxy -14 -6 set input-node-1 self set size 4 ]
  create-input-nodes 1 [ setxy -14 6 set input-node-2 self set size 4 ]
  ask input-nodes [ set activation random 2 set label (word "value: " activation) set label-color red ]
  create-hidden-nodes 1 [ setxy 6 -6 ]
  create-hidden-nodes 1 [ setxy 6 6 ]
  ask hidden-nodes [ set activation random 2 set size 4 ]
  create-output-nodes 1 [ setxy 26 0 set output-node-1 self set activation random 2 set size 4 ]
end

to setup-links
  connect-all bias-nodes hidden-nodes
  connect-all bias-nodes output-nodes
  connect-all input-nodes hidden-nodes
  connect-all hidden-nodes output-nodes
end

to connect-all [ nodes1 nodes2 ]
  ask nodes1 [
    create-links-to nodes2 [ set weight random-float 0.2 - 0.1 ]
  ]
end

to recolor
  ask turtles [ set color item (step activation) [ black white ] ]
  ask links [
    set thickness 0.05 * abs weight
    ifelse show-weights?
      [ set label precision weight 4 ]
      [ set label "" ]
    ifelse weight > 0
      [ set color [ 255 0 0 196 ] ]   ; transparent red
      [ set color [ 0 0 255 196 ] ]   ; transparent light blue
  ]
end

;;;
;;; TRAINING PROCEDURES
;;;

to train
  set epoch-error 0
  repeat examples-per-epoch [
    ask input-nodes [ set activation random 2 set label (word "value: " activation) ]
    propagate
    backpropagate
  ]
  set epoch-error epoch-error / examples-per-epoch
  tick
end

;;;
;;; FUNCTIONS TO LEARN
;;;

to-report target-answer
  let a [ activation ] of input-node-1 = 1
  let b [ activation ] of input-node-2 = 1
  ;; run-result will interpret target-function as the appropriate boolean operator
  report ifelse-value run-result (word "a " target-function " b") [ 1 ] [ 0 ]
end

;;;
;;; PROPAGATION PROCEDURES
;;;

;; carry out one calculation from beginning to end
to propagate
  ask hidden-nodes [
    set activation new-activation
    set label sigmoid sum [ [ activation ] of end1 * weight ] of my-in-links
  ]
  ask output-nodes [
    set activation new-activation
    set label sigmoid sum [ [ activation ] of end1 * weight ] of my-in-links
  ]
  recolor
end

;; Determine the activation of a node based on the activation of its input nodes
to-report new-activation  ;; node procedure
  report sigmoid sum [ [ activation ] of end1 * weight ] of my-in-links
end

;; changes weights to correct for errors
to backpropagate
  let example-error 0
  let answer target-answer
  ask output-node-1 [
    ;; `activation * (1 - activation)` is used because it is the
    ;; derivative of the sigmoid activation function. If we used a
    ;; different activation function, we would use its derivative.
    set err activation * (1 - activation) * (answer - activation)
    set example-error example-error + ((answer - activation) ^ 2)
  ]
  set epoch-error epoch-error + example-error
  ;; The hidden layer nodes are given error values adjusted appropriately for their
  ;; link weights
  ask hidden-nodes [
    set err activation * (1 - activation) * sum [ weight * [ err ] of end2 ] of my-out-links
  ]
  ask links [
    set weight weight + learning-rate * [ err ] of end2 * [ activation ] of end1
  ]
end

;;;
;;; MISC PROCEDURES
;;;

;; computes the sigmoid function of the given input value
to-report sigmoid [input]
  report 1 / (1 + e ^ (- input))
end

;; computes the step function of the given input value
to-report step [input]
  report ifelse-value input > 0.5 [ 1 ] [ 0 ]
end

;;;
;;; TESTING PROCEDURES
;;;

;; test runs one instance and computes the output
to test
  let result result-for-inputs input-1 input-2
  let correct? ifelse-value result = target-answer [ "correct" ] [ "incorrect" ]
  user-message (word
    "The expected answer for " input-1 " " target-function " " input-2
    " is " target-answer ".\n\n"
    "The network reported " result ", which is " correct? ".")
end

to-report result-for-inputs [n1 n2]
  ask input-node-1 [ set activation n1 ]
  ask input-node-2 [ set activation n2 ]
  propagate
  report step [ activation ] of one-of output-nodes
end

; Copyright 2006 Uri Wilensky.
; See Info tab for full copyright and license.