
A comparison of selected training algorithms for recurrent neural networks

Author: Aree Chairatanatrai
Call Number: AIT Thesis no. IM-01-02
Subject(s): Neural networks (Computer science)

Note: A thesis submitted in partial fulfillment of the requirements for the degree of Master of Engineering, School of Advanced Technologies
Publisher: Asian Institute of Technology
Series Statement: Thesis ; no. IM-01-02
Abstract: Recurrent Neural Networks (RNNs) are a type of neural network in which self-loops and backward connections between nodes are allowed. One consequence is that recurrent networks can produce dynamic behaviors that are not possible with strictly feed-forward networks, such as limit cycles and chaos. To gain further insight into RNNs, this study focused on the network performance of two fast on-line algorithms, namely Error Back Propagation and Exponentially Weighted Least Squares (EBP-EWLS), and Accelerating Convergence using Approximated Gradient (ACAG). To evaluate the performance of these two algorithms on forecasting problems, three types of data were considered: daily stock prices, quarterly export and gross domestic product, and daily streamflow. In terms of the efficiency index, root mean squared error, mean absolute deviation, and maximum relative error, both algorithms perform very satisfactorily, with EBP-EWLS being slightly better; however, it takes more computation time than ACAG does. Based on the results obtained, a new algorithm was developed by combining three methods: Error Back Propagation, Error Self-Recurrent, and Recursive Least Squares. From its application to the above data sets, it was found that the new algorithm is considerably faster than EBP-EWLS and, at the same time, eliminates the very sensitive parameter in the ACAG algorithm. Moreover, the new algorithm performs much better than ACAG and reaches almost the same performance as EBP-EWLS. Finally, a simple analysis of the complexity of RNNs was carried out. It was found that both storage and computation time increase with the number of hidden nodes, and that, for the same number of added nodes, the increase in storage and computation time is much higher when the nodes are added to the hidden layer than when they are added to the input layer.
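
The comparison in the abstract rests on four accuracy measures. As a hedged illustration (a minimal sketch, not the thesis code), the Python snippet below computes them for a forecast series; it assumes the "efficiency index" takes the Nash-Sutcliffe form 1 - SSE/SST, a common convention in streamflow forecasting, and that relative error is measured with respect to the observed values.

    # Minimal sketch of the four error measures named in the abstract.
    # Assumption: the efficiency index is the Nash-Sutcliffe form
    # EI = 1 - sum((o - p)^2) / sum((o - mean(o))^2).
    import numpy as np

    def efficiency_index(observed, predicted):
        sse = np.sum((observed - predicted) ** 2)
        sst = np.sum((observed - observed.mean()) ** 2)
        return 1.0 - sse / sst

    def root_mean_squared_error(observed, predicted):
        return float(np.sqrt(np.mean((observed - predicted) ** 2)))

    def mean_absolute_deviation(observed, predicted):
        return float(np.mean(np.abs(observed - predicted)))

    def maximum_relative_error(observed, predicted):
        # Relative error taken with respect to the observed values (assumption).
        return float(np.max(np.abs((observed - predicted) / observed)))

    if __name__ == "__main__":
        observed = np.array([10.2, 11.0, 9.8, 10.5])   # e.g. daily streamflow
        predicted = np.array([10.0, 11.3, 9.9, 10.4])  # one-step-ahead forecasts
        print(efficiency_index(observed, predicted))
        print(root_mean_squared_error(observed, predicted))
        print(mean_absolute_deviation(observed, predicted))
        print(maximum_relative_error(observed, predicted))

An efficiency index close to 1 and small values of the remaining three measures indicate a good forecast, which is the sense in which the abstract reports that both algorithms "perform very satisfactorily."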
Year: 2001
Corresponding Series Added Entry: Asian Institute of Technology. Thesis ; no. IM-01-02
Type: Thesis
School: School of Advanced Technologies (SAT)
Department: Department of Information and Communications Technologies (DICT)
Academic Program/FoS: Information Management (IM)
Chairperson(s): Huynh Ngoc Phien
Examination Committee(s): Sadananda, Ramakoti; Hoang Le Tien
Scholarship Donor(s): His Majesty the King of Thailand
Degree: Thesis (M.Eng.) - Asian Institute of Technology, 2001

