Categorization and effective perceptron learning in feed-forward neural networks
By:
Waelbroeck H., Zertuche F.
Published:
1 Jan 2000
Abstract:
In the initial stages of back-propagation learning, the weights in the first synaptic layer are small and the hidden neurons operate in the linear regime, so the input-output map is the same as that of a perceptron. As the weights increase, the hidden neurons begin to saturate; each saturation pattern represents a particular category of input vectors, for which the neural network (NN) locally behaves as a perceptron. The effective perceptron weights for each category are functions of the synaptic weights of the NN. We define an internal temperature as the inverse of the norm of the first synaptic matrix, and an entropy function that measures the disorder of the hidden-neuron layer. The learning curves are recast in the language of a condensation process: the entropy drops abruptly when the hidden neurons begin to saturate, marking a clear transition from the global perceptron state to an ordered state characterized by a different local effective perceptron in each category. The problem of optimizing the effective perceptrons across categories is frustrated.
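The quantities named in the abstract lend themselves to a small numerical illustration. The Python sketch below assumes a two-layer network with tanh hidden units and a single linear output; the category labels (hidden-layer sign patterns), the Jacobian-based effective perceptron weights, the Frobenius-norm temperature, and the per-unit binary entropy are plausible readings of the paper's definitions, not its exact formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer feed-forward net: x -> h = tanh(W x) -> y = v . h
n_in, n_hid = 8, 4
W = rng.standard_normal((n_hid, n_in))   # first synaptic layer
v = rng.standard_normal(n_hid)           # output layer

def category(W, x):
    """Saturation pattern of the hidden layer: the sign of each
    pre-activation. Inputs sharing a pattern form one category."""
    return tuple(np.sign(W @ x).astype(int))

def effective_weights(W, v, x):
    """Local effective perceptron weights: the Jacobian dy/dx.
    For small ||W|| (high T) this is close to v @ W for every input;
    once the units saturate it depends only on the category of x."""
    g = 1.0 - np.tanh(W @ x) ** 2        # tanh' at each hidden unit
    return (v * g) @ W

def internal_temperature(W):
    """T = 1 / ||W||; the Frobenius norm is an assumption here."""
    return 1.0 / np.linalg.norm(W)

def hidden_entropy(W, xs):
    """Disorder of the hidden layer: mean binary entropy per unit,
    reading p_i = (1 + tanh(W x)_i)/2 as a spin-up probability. One
    plausible realization of the paper's entropy, not its exact form.
    S -> 0 as the units saturate (tanh -> +/-1)."""
    S = 0.0
    for x in xs:
        p = np.clip(0.5 * (1.0 + np.tanh(W @ x)), 1e-12, 1.0 - 1e-12)
        S -= np.mean(p * np.log(p) + (1.0 - p) * np.log1p(-p))
    return S / len(xs)

xs = [rng.standard_normal(n_in) for _ in range(500)]
for scale in (0.1, 1.0, 10.0):           # growing weights = cooling
    Ws = scale * W
    print(f"scale={scale:5.1f}  T={internal_temperature(Ws):.3f}  "
          f"S={hidden_entropy(Ws, xs):.3f}")
```

Running the loop mimics the condensation picture: scaling W up lowers T, the entropy falls from near ln 2 per unit toward zero, and each input becomes pinned to a fixed saturation pattern with its own effective perceptron.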