New Learning Theory: Learning can be achieved without iteratively tuning the (artificial) hidden nodes (or the hundreds of types of biological neurons), even when the model of the biological neurons is unknown, as long as their output functions are nonlinear piecewise continuous. Such a network can approximate any continuous target function to any desired accuracy, and can also separate any disjoint regions, without tuning the hidden neurons (a minimal sketch follows the references below).
G.-B. Huang, et al., “Universal approximation using incremental constructive feedforward networks with random hidden nodes,” IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879-892, 2006.
G.-B. Huang and L. Chen, “Convex Incremental Extreme Learning Machine,” Neurocomputing, vol. 70, pp. 3056-3062, 2007.
G.-B. Huang, et al., “Extreme Learning Machine for Regression and Multiclass Classification,” IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, vol. 42, no. 2, pp. 513-529, 2012.
G.-B. Huang, “An Insight into Extreme Learning Machines: Random Neurons, Random Features and Kernels,” Cognitive Computation, vol. 6, 2014.
G.-B. Huang, “What are Extreme Learning Machines? Filling the Gap between Frank Rosenblatt's Dream and John von Neumann's Puzzle,” Cognitive Computation, vol. 7, 2015.
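To make the claim concrete, here is a minimal NumPy sketch of the scheme the theory covers: the hidden-node parameters are drawn at random and never tuned, and only the linear output weights are solved for, via the Moore-Penrose pseudoinverse as in the basic ELM algorithm. The network sizes, activation choice, and function names are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

def elm_fit(X, T, n_hidden=200, seed=0):
    """Fit a single-hidden-layer network whose hidden nodes are random
    and never tuned; only the linear output weights are learned."""
    rng = np.random.default_rng(seed)
    # Random hidden-node parameters (any nonlinear piecewise-continuous
    # activation would do; tanh is used here only for illustration).
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T      # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: approximate a continuous target with untuned hidden nodes.
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
T = np.sin(3.0 * X)
W, b, beta = elm_fit(X, T, n_hidden=50)
print(np.max(np.abs(elm_predict(X, W, b, beta) - T)))  # small max error
```

Increasing n_hidden drives the approximation error down, which is the spirit of the incremental universal-approximation results in the 2006 and 2007 papers above.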
The puzzle posed by J. von Neumann, the father of modern computers [von Neumann 1951, 1956]:
Why “an imperfect (biological) neural network, containing many random connections, can be made to perform reliably those functions which might be represented by idealized wiring diagrams”
Answered by the ELM learning theory [Huang, et al. 2006, 2007, 2008]:
“As long as the output functions of hidden neurons are nonlinear piecewise continuous and even if their shapes and modeling are unknown, (biological) neural networks with random hidden neurons attain both universal approximation and classification capabilities, and the changes in finite number of hidden neurons and their related connections do not affect the overall performance of the networks.” [Huang 2014]
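The last clause of the quote can be illustrated with a toy experiment (a sketch under illustrative assumptions, not the proof technique of the cited papers): over-provision the random hidden layer, remove a finite handful of hidden neurons and their connections, re-solve only the linear output weights, and observe that the fit barely changes. All sizes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
T = np.sin(3.0 * X)

# Over-provisioned random hidden layer (sizes chosen for illustration).
n_hidden = 300
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)
err_full = np.max(np.abs(H @ np.linalg.pinv(H) @ T - T))

# Delete a finite handful of hidden neurons and their connections,
# then re-solve only the linear output weights.
drop = rng.choice(n_hidden, size=10, replace=False)
keep = np.setdiff1d(np.arange(n_hidden), drop)
H2 = H[:, keep]
err_pruned = np.max(np.abs(H2 @ np.linalg.pinv(H2) @ T - T))

print(err_full, err_pruned)  # both stay small: the change is tolerated
```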
ELM Learning Theory [Huang, et al. 2006, 2007, 2008, 2014, 2015]
More and more biological evidence keeps emerging in support:
M. Rigotti, et al., “The importance of mixed selectivity in complex cognitive tasks,” Nature, vol. 497, 2013
O. Barak, et al., “The sparseness of mixed selectivity neurons controls the generalization-discrimination trade-off,” Journal of Neuroscience, vol. 33, no. 9, 2013
S. Fusi, E. K. Miller, and M. Rigotti, “Why neurons mix: high dimensionality for higher cognition,” Current Opinion in Neurobiology, vol. 37, 2016
J. Xie and C. Padoa-Schioppa, “Neuronal remapping and circuit persistence in economic decisions,” Nature Neuroscience, vol. 19, 2016
E. L. Rich and J. D. Wallis, “What stays the same in orbitofrontal cortex,” Nature Neuroscience, vol. 19, no. 6, 2016
R. I. Arriaga, et al., “Visual Categorization with Random Projection,” Neural Computation, vol. 27, 2015
T. Tuma, et al., “Stochastic phase-change neurons,” Nature Nanotechnology, vol. 11, 2016