Proceedings of IEEE 24th International Symposium on Fault-Tolerant Computing (1994)
Austin, TX, USA
June 15, 1994 to June 17, 1994
ISBN: 0-8186-5520-8
pp: 360-369
Ching-Tai Chin , Sch. of Comput. & Inf. Sci., Syracuse Univ., NY, USA
K. Mehrotra , Sch. of Comput. & Inf. Sci., Syracuse Univ., NY, USA
C.K. Mohan , Sch. of Comput. & Inf. Sci., Syracuse Univ., NY, USA
S. Ranka , Sch. of Comput. & Inf. Sci., Syracuse Univ., NY, USA
ABSTRACT
This paper addresses methods of improving the fault tolerance of feedforward neural nets. The first method is to coerce weights to have low magnitudes during the backpropagation training process, since fault tolerance is degraded by high-magnitude weights; at the same time, additional hidden nodes are added dynamically to the network to ensure that the desired performance can be obtained. The second method is to add artificial faults to various components (nodes and links) of a network during training. The third method is to repeatedly remove nodes that do not significantly affect the network output, and then add new nodes that share the load of the more critical nodes in the network. Experimental results show that these methods obtain better robustness than plain backpropagation training, and compare favorably with other approaches.
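The second method described above, injecting artificial stuck-at-0 faults into hidden nodes during backpropagation, can be illustrated with a minimal NumPy sketch. This is not the authors' code: the task (XOR), the network size, the fault probability, and the learning rate are all illustrative assumptions, and only node faults (not link faults) are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, a classic test problem for a small feedforward net.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = 6                       # hidden nodes (illustrative choice)
W1 = rng.normal(0.0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)
lr, fault_p = 0.5, 0.1      # learning rate and per-node fault probability

def forward(mask):
    # A zero in `mask` simulates a hidden node stuck at 0.
    h = sigmoid(X @ W1 + b1) * mask
    return h, sigmoid(h @ W2 + b2)

mse0 = np.mean((forward(np.ones(H))[1] - Y) ** 2)   # pre-training error

for _ in range(5000):
    # Inject artificial faults: each hidden node fails independently.
    mask = (rng.random(H) > fault_p).astype(float)
    h, out = forward(mask)
    # Backpropagate through the faulted forward pass; faulted nodes
    # contribute zero gradient because their masked output is 0.
    d_out = (out - Y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)

# Fault-free error after fault-injected training.
mse = np.mean((forward(np.ones(H))[1] - Y) ** 2)
```

Because no single hidden node can be relied upon during training, the load spreads across nodes, so the trained network tends to degrade more gracefully when a node is actually lost; the same mechanism later became familiar as dropout-style training.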
INDEX TERMS
feedforward neural nets, backpropagation, fault tolerant computing, learning (artificial intelligence)
CITATION

Ching-Tai Chin, K. Mehrotra, C. Mohan and S. Ranka, "Training techniques to obtain fault-tolerant neural networks," Proceedings of IEEE 24th International Symposium on Fault-Tolerant Computing (FTCS), Austin, TX, USA, 1994, pp. 360-369.
doi:10.1109/FTCS.1994.315624