Fast learning of biased patterns in neural networks.
Int J Neural Syst 4:3 (1993) 223-230
Abstract:
Standard gradient-descent training algorithms for neural networks require training times of the order of the number of neurons N if the patterns are biased. In this paper, modified algorithms are presented which require training times equal to those of the unbiased case, which are of order 1. Exact convergence proofs are given. Gain parameters which produce minimal learning times in large networks are computed by replica methods. It is demonstrated how these modified algorithms are applied in order to produce four types of solutions to the learning problem: (1) a solution with all internal fields equal to the desired output, (2) the Adaline (or pseudo-inverse) solution, (3) the perceptron of optimal stability without threshold, and (4) the perceptron of optimal stability with threshold.
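As an illustration of solution types (1) and (2) in this abstract, the sketch below computes the pseudo-inverse (Adaline) couplings for biased ±1 patterns directly, so that every internal field equals the desired output. This is a minimal sketch, not the paper's modified gradient-descent algorithm; the bias m, pattern matrix xi and output vector sigma are assumed notation.

import numpy as np

rng = np.random.default_rng(0)
N, P, m = 200, 50, 0.6  # neurons, patterns, pattern bias (assumed illustrative values)

# Biased +/-1 patterns with mean activity m, and random desired outputs.
xi = np.where(rng.random((P, N)) < (1.0 + m) / 2.0, 1.0, -1.0)
sigma = rng.choice([-1.0, 1.0], size=P)

# Pseudo-inverse (Adaline) couplings: with J = sqrt(N) * pinv(xi) @ sigma,
# the internal fields xi @ J / sqrt(N) reproduce sigma exactly when P < N.
J = np.sqrt(N) * np.linalg.pinv(xi) @ sigma
fields = xi @ J / np.sqrt(N)
print(np.allclose(fields, sigma))  # expected: True, all internal fields match the desired outputs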
Neural networks optimally trained with noisy data.
Phys Rev E Stat Phys Plasmas Fluids Relat Interdiscip Topics 47:6 (1993) 4465-4482
Coupled Dynamics of Fast Neurons and Slow Interactions
Advances in Neural Information Processing Systems 6 (1993) 447-454
Abstract:
A simple model of coupled dynamics of fast neurons and slow interactions, modelling self-organization in recurrent neural networks, leads naturally to an effective statistical mechanics characterized by a partition function which is an average over a replicated system. This is reminiscent of the replica trick used to study spin glasses, but with the difference that the number of replicas has a physical meaning as the ratio of two temperatures and can be varied throughout the whole range of real values. The model has interesting phase consequences as a function of this ratio and external stimuli, and can be extended to a range of other models.
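A schematic form of the effective statistical mechanics described above, with assumed notation (not taken verbatim from the paper): writing \beta for the fast-neuron inverse temperature, \tilde{\beta} for the slow-interaction inverse temperature, and Z_{\beta}(J) for the neuron partition function at fixed couplings J,

Z_{\mathrm{eff}} = \int \mathrm{d}J \, P(J)\,\bigl[Z_{\beta}(J)\bigr]^{n}, \qquad Z_{\beta}(J) = \sum_{\{s\}} e^{-\beta H(s;J)}, \qquad n = \tilde{\beta}/\beta,

so the replica number n is the ratio of the two temperatures and, unlike in the standard replica trick, can be varied over all real values.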
A feature retrieving attractor neural network
Journal of Physics A: Mathematical and General 26:10 (1993) 2333-2342
A soluble superconductive glass model
Journal of Physics A: Mathematical and General 26:23 (1993) L1201-L1205