Activity states of neurons are organised by excitatory and inhibitory signals exchanged between them. Sensory information, combined with stored data, results in convergence to a stationary state that lasts a fraction of a second. (The sensory store's time-scale is tuned to the typical progression of events in real life.)
LTM is reorganised through synaptic plasticity, e.g. Hebb's rule: connections are strengthened when both neurons are active together. The effect is to increase the likelihood of those activity states that occurred in the past.
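The rule above can be sketched numerically. This is a minimal rate-based form of Hebbian learning, with a toy activity pattern and learning rate chosen purely for illustration:

```python
import numpy as np

def hebb_update(W, x, lr=0.1):
    """One Hebbian step: dW = lr * outer(post, pre); here post = pre = x."""
    return W + lr * np.outer(x, x)

x = np.array([1.0, 0.0, 1.0])   # units 0 and 2 fire together
W = np.zeros((3, 3))            # all connections start at zero
for _ in range(5):
    W = hebb_update(W, x)

# Weights between co-active units (0 and 2) grow with each repetition,
# making that activity pattern more likely to recur; weights involving
# the silent unit 1 stay at zero.
```

Repeated presentation of the same pattern strengthens exactly the connections among the units that pattern activates, which is the sense in which past activity states become more likely.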
Supervised learning: A desired output signal is provided to the circuit along with the input signal. Over several cycles the system adjusts its parameters to bring its response closer to the desired output. (Example: the back-propagation algorithm).
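The cycle of comparing the response to a desired output and adjusting parameters can be shown in its simplest form: a single linear unit trained by gradient descent on squared error (the delta rule, the one-layer special case of back-propagation). The data and learning rate are toy assumptions:

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs                      # desired outputs: the target function y = 2x

w = 0.0                            # the single adjustable parameter
lr = 0.05
for _ in range(200):               # several cycles over the training set
    for x, y in zip(xs, ys):
        pred = w * x
        err = pred - y             # difference from the desired output
        w -= lr * err * x          # gradient step on 0.5 * err**2

# w converges toward 2.0, bringing the response ever closer to the targets
```

Full back-propagation applies the same error-derivative logic layer by layer through a deeper network.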
Most human intelligence (e.g. perception: recognising objects and events in the environment) is acquired by mere exposure, without a teacher. Tasks like sorting by appearance can be done by very young children. The mechanism behind such ``unsupervised learning'' is assumed to be ``self-organisation''.
Unsupervised learning in a neural network does in fact involve target values: most often the targets are the same as the inputs. With this target, unsupervised learning performs the task of dimensionality reduction, compressing the information from the inputs. Unsupervised learning (I think under the name of ``cluster analysis'') is used in data visualization. (Interpreted from ftp://ftp.sas.com/pub/neural/FAQ.html)
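The ``targets = inputs'' idea can be sketched as a linear autoencoder: a single hidden unit compresses 2-D correlated inputs into a 1-D code and reconstructs them. The data, network sizes, and learning rate below are illustrative assumptions, not anything from the FAQ:

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = np.hstack([t, t]) + 0.01 * rng.normal(size=(200, 2))  # 2nd input tracks the 1st

W_enc = 0.1 * rng.normal(size=(2, 1))   # input (2-D) -> code (1-D)
W_dec = 0.1 * rng.normal(size=(1, 2))   # code (1-D)  -> reconstruction (2-D)

lr = 0.02
for _ in range(2000):
    H = X @ W_enc                  # encode: dimensionality reduction to 1-D
    R = H @ W_dec                  # decode: the network's output
    E = R - X                      # error measured against the input itself
    W_dec -= lr * (H.T @ E) / len(X)
    W_enc -= lr * (X.T @ (E @ W_dec.T)) / len(X)

mse = np.mean(((X @ W_enc) @ W_dec - X) ** 2)
# mse ends up far below the input variance: because the two input
# dimensions are redundant, a 1-D code preserves almost everything
```

Compression succeeds precisely because the inputs carry hidden correlation; with independent inputs, the same 1-D bottleneck would lose information.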
Though supervised learning was the popular form in the '80s, unsupervised learning now predominates (DeLiang Wang, review of ``Unsupervised Learning: Foundations of Neural Computation'', eds. Geoffrey Hinton and Terrence Sejnowski, AI Magazine, Summer 2001).
The objective of unsupervised learning is to find hidden structure and correlations in the input data: discover clusters, extract features that characterise the data compactly, and uncover non-accidental coincidences.
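Cluster discovery, the first of these objectives, can be illustrated with plain k-means on two obvious blobs. The data and the choice of k are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(loc=(0.0, 0.0), scale=0.2, size=(50, 2))
b = rng.normal(loc=(5.0, 5.0), scale=0.2, size=(50, 2))
X = np.vstack([a, b])              # no labels: structure must be discovered

k = 2
centers = X[rng.choice(len(X), k, replace=False)]  # seed centres from the data
for _ in range(10):
    # assign each point to its nearest centre
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # move each centre to the mean of its assigned points
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])

# the centres settle on the two blob means, recovering the hidden grouping
```

No target values are supplied at any point; the grouping emerges purely from distances within the data.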
Although most connectionist models of high-level cognition are still over-simplified and biologically implausible, the underlying (distributed, bottom-up) principle remains an important feature of cognitive science.
Current NN applications range from home appliances to predicting the stock market, extracting image data from radar information, controlling cars, robots ...