
Efficient architectures for MLP-BP Artificial Neural Networks implemented on FPGAs


Title: Efficient architectures for MLP-BP Artificial Neural Networks implemented on FPGAs
Author: Savich, Antony Walter
Department: School of Engineering
Advisor: Moussa, Medhat; Areibi, Shawki
Abstract: Artificial Neural Networks, and the Multi-Layer Perceptron with Back Propagation (MLP-BP) algorithm in particular, have historically suffered from slow training, yet many applications require real-time training. This thesis studies aspects of implementing MLP-BP in Field Programmable Gate Array (FPGA) hardware to accelerate network training. This is accomplished through analysis of numeric representation and its effect on network convergence, hardware performance, and resource consumption. The effects of pipelining on the Back Propagation algorithm are analyzed, and a novel hardware architecture is presented. The new architecture allows extended flexibility in the choice of numeric representation, the degree of system-level parallelism, and network virtualization. Careful architectural design yields a high degree of resource-consumption efficiency, allowing large network topologies to be placed within a single FPGA. Performance measurements for the pipelined architecture demonstrate an improvement of at least three orders of magnitude over software implementations.
URI: https://hdl.handle.net/10214/25007
Date: 2006
Terms of Use: All items in the Atrium are protected by copyright with all rights reserved unless otherwise indicated.
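The central trade-off named in the abstract, how a reduced-precision (fixed-point) numeric representation affects MLP-BP convergence, can be illustrated with a minimal software sketch. The sketch below is not taken from the thesis: the network size, the XOR task, the bit widths, the learning rate, and the quantize()/train_xor() helpers are all illustrative assumptions. It simply emulates a limited-precision hardware datapath by re-quantizing the weights after every back-propagation update.

import numpy as np

def quantize(x, frac_bits=8, int_bits=4):
    # Round x onto a signed fixed-point grid with int_bits integer bits and
    # frac_bits fractional bits (illustrative stand-in for a hardware datapath).
    scale = 2.0 ** frac_bits
    lo, hi = -(2.0 ** int_bits), 2.0 ** int_bits - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, lo, hi)

def train_xor(frac_bits=None, epochs=5000, lr=1.0, seed=0):
    # Train a 2-4-1 MLP with plain back propagation on XOR. If frac_bits is
    # given, weights are re-quantized after every update, simulating a
    # fixed-point implementation; otherwise training runs in float64.
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(epochs):
        # forward pass
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        # backward pass (squared-error loss, sigmoid derivatives)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(0)
        if frac_bits is not None:  # emulate limited numeric precision
            W1, b1 = quantize(W1, frac_bits), quantize(b1, frac_bits)
            W2, b2 = quantize(W2, frac_bits), quantize(b2, frac_bits)
    return float(((out > 0.5) == y).mean())

if __name__ == "__main__":
    for bits in (None, 12, 8, 4):
        label = "float64" if bits is None else f"fixed, {bits} frac bits"
        print(f"{label:>22}: XOR accuracy = {train_xor(bits):.2f}")

Running the sketch prints XOR accuracy for float64 training and for several fixed-point fraction widths. With very few fractional bits, small weight updates round to zero and training stalls, which is the precision-versus-convergence effect the thesis analyzes for actual FPGA datapaths.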


Files in this item

Savich_AntonyW_MSc.pdf (5.674 MB, PDF)
