Asian Institute of Technology (AIT)

Training algorithms for recurrent neural networks

Author: Nguyen Nhi Gia Vinh
Call Number: AIT Thesis no. CS-02-14
Subject(s): Neural networks (Computer science); Computer algorithms
Note: A thesis submitted in partial fulfillment of the requirements for the degree of Master of Engineering, School of Advanced Technologies
Publisher: Asian Institute of Technology
Series Statement: Thesis; no. CS-02-14
Abstract: In recent years, artificial neural networks have been applied in many fields; recurrent neural networks in particular have attracted considerable attention in research and publication. A recurrent neural network is one in which self-loops and backward connections between nodes are allowed. As a consequence, dynamic behaviors that are not possible with strictly feedforward neural networks, such as limit cycles and chaos, can be produced by recurrent networks. This diversity of dynamic behavior makes recurrent neural networks well suited to many important problems, including filtering and forecasting. Another possible benefit of recurrent neural networks is that smaller networks may provide the functionality of much larger feedforward networks. Despite their potential and capability, the main problem with recurrent networks is the difficulty of training them: the complexity and slow convergence of the existing training algorithms. This study focuses on understanding the dynamic behavior of recurrent neural networks and on improving algorithms to achieve faster convergence. An experiment is carried out using two sets of rainfall-discharge data with fully recurrent networks and Elman (partially) recurrent networks. The results obtained show that:
- Autocorrelation and cross-correlation analysis can be used to determine the number of input nodes.
- The number of hidden nodes can be determined by combining the Baum-Haussler rule and the Bayesian Information Criterion, as proposed in this study.
- In terms of computation time, the offline training algorithm of Atiya and Parlos is fastest, while the real-time learning algorithm is slowest.
- In terms of performance statistics, the offline training algorithm of Atiya and Parlos performs best.
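The abstract's first finding, that autocorrelation analysis can guide the choice of input nodes, can be illustrated with a minimal sketch. This is not code from the thesis: the threshold, sample data, and function names are hypothetical, and a real study would also use cross-correlation with the rainfall series and a significance bound rather than a fixed cutoff.

```python
# Hypothetical illustration: choose the number of lagged input nodes for a
# time-series network from the sample autocorrelation function (ACF).

def autocorrelation(series, lag):
    """Sample autocorrelation of `series` at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag))
    return cov / var

def suggest_input_nodes(series, max_lag=10, threshold=0.2):
    """Largest lag whose |ACF| exceeds the (hypothetical) threshold;
    that many lagged values would feed the network's input layer."""
    significant = [lag for lag in range(1, max_lag + 1)
                   if abs(autocorrelation(series, lag)) > threshold]
    return max(significant) if significant else 1
```

For a strongly trending series the ACF decays slowly, so more lags (and hence more input nodes) are suggested than for near-white noise.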
Year: 2002
Corresponding Series Added Entry: Asian Institute of Technology. Thesis; no. CS-02-14
Type: Thesis
School: School of Advanced Technologies (SAT)
Department: Department of Information and Communications Technologies (DICT)
Academic Program/FoS: Computer Science (CS)
Chairperson(s): Huynh Ngoc Phien
Examination Committee(s): Sadananda, Ramakoti; Hoang Le Tien
Scholarship Donor(s): Ministry of Education and Training of Viet Nam
Degree: Thesis (M.Eng.) - Asian Institute of Technology, 2002
