
Inference and Training of a Multilayer Perceptron in a Deep Reinforcement Learning Context on a FPGA

This thesis addresses the design and verification of a multilayer perceptron (MLP) and the corresponding optimization algorithm, batch gradient descent (BGD), on an FPGA using high-level synthesis (HLS) for Xilinx devices. The solutions developed in this project are used in a reinforcement learning environment for the control of power electronic systems. The thesis briefly presents the principle of reinforcement learning, a mathematical description of the MLP and the BGD, as well as programming techniques for HLS. The structure of the solutions and performance examinations are presented in the results section of the thesis. The project delivers functionally verified solutions for execution on an FPGA. The solutions process a three-layer MLP with 16 inputs, 16 outputs and 128 neurons in the hidden layer in 2,361 clock cycles at a 100 MHz clock frequency, which results in a runtime of 23.6 µs. The corresponding BGD for one training example has a minimum runtime of 13,141 clock cycles, or 131.4 µs. However, performance is expected to improve further once several issues described in the thesis have been resolved.
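As a rough illustration of what such an HLS design can look like, the sketch below implements a 16-128-16 MLP forward pass and a per-layer BGD weight update in Xilinx-style HLS C++. Only the layer sizes are taken from the abstract; the data type (float rather than a fixed-point ap_fixed<> type), the ReLU activation, the pragmas and all function names are assumptions for illustration, not the thesis's actual implementation.

    // Minimal sketch (not the thesis code) of an HLS MLP and BGD update.
    #include <cstddef>

    constexpr std::size_t N_IN  = 16;   // input neurons (from the abstract)
    constexpr std::size_t N_HID = 128;  // hidden neurons (from the abstract)
    constexpr std::size_t N_OUT = 16;   // output neurons (from the abstract)

    typedef float data_t;  // assumption; HLS designs often use ap_fixed<> instead

    static data_t relu(data_t x) { return x > data_t(0) ? x : data_t(0); }

    // One fully connected layer: out = activation(W * in + b).
    template <std::size_t IN, std::size_t OUT>
    void dense(const data_t w[OUT][IN], const data_t b[OUT],
               const data_t in[IN], data_t out[OUT], bool use_relu) {
        for (std::size_t o = 0; o < OUT; ++o) {
            data_t acc = b[o];
            for (std::size_t i = 0; i < IN; ++i) {
    #pragma HLS PIPELINE II=1
                acc += w[o][i] * in[i];  // pipelined multiply-accumulate
            }
            out[o] = use_relu ? relu(acc) : acc;
        }
    }

    // Top-level inference function for the three-layer (16-128-16) MLP.
    void mlp_forward(const data_t w1[N_HID][N_IN],  const data_t b1[N_HID],
                     const data_t w2[N_OUT][N_HID], const data_t b2[N_OUT],
                     const data_t x[N_IN], data_t y[N_OUT]) {
        data_t hidden[N_HID];
        dense<N_IN, N_HID>(w1, b1, x, hidden, true);   // hidden layer, ReLU
        dense<N_HID, N_OUT>(w2, b2, hidden, y, false); // linear output layer
    }

    // Hypothetical BGD weight update for one layer: w -= lr * dw, where the
    // gradients dw are assumed to have been accumulated over the batch by
    // backpropagation (not shown here).
    template <std::size_t IN, std::size_t OUT>
    void bgd_update(data_t w[OUT][IN], const data_t dw[OUT][IN], data_t lr) {
        for (std::size_t o = 0; o < OUT; ++o) {
            for (std::size_t i = 0; i < IN; ++i) {
    #pragma HLS PIPELINE II=1
                w[o][i] -= lr * dw[o][i];
            }
        }
    }

The pragmas compile as plain C++ (unknown pragmas are ignored by standard compilers), so the sketch can be unit-tested on a host before synthesis; in an actual HLS flow, array partitioning and pipelining choices would dominate the cycle counts reported in the abstract.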

Metadata
Author: Thilo Wendt
URN: urn:nbn:de:bvb:92-opus4-8521
Referees: Armin Dietz, Tobias Schindler
Document Type: Master's Thesis
Language: English
Publishing Institution: Technische Hochschule Nürnberg Georg Simon Ohm
Granting Institution: Technische Hochschule Nürnberg Georg Simon Ohm
Date of final exam: 2022/05/09
Release Date: 2022/06/29
Tags: Artificial Neural Network; Batch Gradient Descent; FPGA; High Level Synthesis
GND Keywords: Neuronales Netz; Logiksynthese; Field programmable gate array
Number of pages: 38
Institutes: Institut für leistungselektronische Systeme ELSYS
Dewey Decimal Classification: 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing, computer science
Licence: No license; German copyright law applies