Human brains consist of billions of neurons that continually process information. Each neuron is like a tiny computer of limited capability that processes input information from other neurons into output information to other neurons. Connected together, these neurons form the most intelligent system known.
For some years now, researchers have been developing models that mimic the activity of neurons to produce a form of artificial intelligence. These "Neural Networks" are formed from tens or hundreds of simulated neurons connected together in much the same way as the brain's neurons. Just like the neurons of the brain, artificial neurons can change their response to a given set of inputs, or "learn".
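The behavior of a single simulated neuron can be sketched in a few lines. The following is a minimal illustration, not taken from the text: it assumes a common design in which a neuron forms a weighted sum of its inputs and passes it through a sigmoid "squashing" function. The weights are what change when the neuron "learns".

```python
import math

def neuron(inputs, weights, bias):
    # A simulated neuron: weighted sum of inputs plus a bias,
    # squashed into the range (0, 1) by a sigmoid activation.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Example: a neuron with two inputs (weights and bias are arbitrary here).
print(neuron([1.0, 0.5], [0.4, -0.6], 0.1))  # a value between 0 and 1
```

Connecting many such neurons, with the outputs of one layer feeding the inputs of the next, yields a network.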
In the past, many learning algorithms have been developed, mostly with limited success. The introduction of the backpropagation paradigm, however, marked a turning point: it is an extremely effective learning tool that can be applied to a wide variety of problems.
The backpropagation paradigm requires supervised training. This means the neural network must be trained by repeatedly presenting it with examples. Each example includes both inputs (the information you would use to make a decision) and desired outputs (the resulting decision, prediction, or response). Based on a calculation involving the inputs, the desired outputs, and the network's own response to the inputs, the backpropagation paradigm tries to adapt the response of each neuron so that the network's overall behavior improves. This trial-and-error process continues until the network reaches a specified level of accuracy.
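The training loop just described can be sketched for the simplest possible case: a single sigmoid neuron learning the logical OR function. This is a hypothetical illustration, using the delta rule (the one-neuron special case of backpropagation); the learning rate and epoch count are arbitrary choices, and a real network would repeat the same weight-adjustment step across many layers of neurons.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training examples: inputs and desired outputs (logical OR).
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
rate = 0.5  # learning rate (an assumed value)

# Repeatedly present the examples; after each one, nudge the weights
# in the direction that reduces the error between the neuron's actual
# output and the desired output.
for epoch in range(5000):
    for inputs, target in examples:
        out = sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)
        error = target - out
        delta = error * out * (1 - out)  # error scaled by the sigmoid's slope
        weights = [w + rate * delta * x for w, x in zip(weights, inputs)]
        bias += rate * delta

# After training, the neuron's rounded output matches each desired output.
for inputs, target in examples:
    out = sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)
    print(inputs, round(out))
```

The loop stops here after a fixed number of passes; in practice, as the text notes, training continues until the network reaches a specified level of accuracy.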
Once a network is trained and tested, its neurons can be "frozen". This means the neurons' ability to "learn" or adapt is stopped. The network can then be used to perform the task it was trained for.
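Freezing amounts to fixing the weights and using the network purely to compute outputs. A minimal sketch, with hypothetical weights standing in for the result of a training run on logical OR:

```python
import math

def freeze(weights, bias):
    # Once frozen, the weights never change; the returned function
    # only computes outputs from inputs.
    def predict(inputs):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))
    return predict

# Hypothetical weights from a neuron trained on logical OR.
predict = freeze([4.0, 4.0], -2.0)
print(round(predict([0, 0])))  # → 0
print(round(predict([1, 1])))  # → 1
```

The frozen network behaves the same way every time it is used, which is exactly what is wanted once training is complete.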