Neural Networks and Memristive Hardware Accelerators
Neural network systems are modeled after the human brain and can accomplish many difficult tasks, such as image and speech recognition and generation, as well as autonomous driving. Traditional computer hardware suffers from the von Neumann bottleneck and therefore requires new strategies to handle the enormous growth in data. This can be tackled with so-called in-memory computing, which uses memory hardware to store and process data in the same device. One promising solution is novel nanoscale materials that provide resistors with a memory function, so-called memristors.
This course will introduce the basic concepts of machine learning and neural networks for different data types, such as time series and images. The audience will learn about different network learning methods, optimizers, and loss functions, and will come to understand that these methods rely on vast amounts of data and that computing power is a limiting factor in the development of neural models.
The course will then bridge from software to hardware accelerators for implementing neural networks, specifically those based on memristive devices. Basic memristor theory and its application in crossbars, logic circuits, and neurons are introduced, and insights into the concepts of crossbar mapping, peripherals, and spiking neural networks are provided.
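To give a flavor of the crossbar concept (a minimal sketch, not part of the course material itself): in a memristive crossbar the programmed conductances form a matrix G, the applied voltages a vector V, and Ohm's and Kirchhoff's laws yield the output currents I = G·V, i.e. an analog matrix-vector multiplication. The conductance and voltage values below are purely illustrative.

import numpy as np

# Conductance matrix G (in siemens): each memristor in the crossbar stores one weight.
# Values are illustrative only.
G = np.array([[1.0e-4, 2.0e-4, 0.5e-4],
              [3.0e-4, 1.5e-4, 2.5e-4]])

# Input voltages V (in volts) applied to the crossbar lines.
V = np.array([0.2, 0.1, 0.3])

# Ohm's law per device plus Kirchhoff's current law per output line
# gives I = G @ V: the crossbar performs the matrix-vector product in place.
I = G @ V
print(I)  # output currents, one per summing line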
In addition, the students learn how to use the Python language, how to implement basic neural models in code using ML-related Python libraries such as PyTorch, and how to simulate memristive circuits using LTSpice.
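As a rough preview of the kind of exercise involved (a minimal sketch with arbitrary layer sizes and dummy data, not the actual course code), a small fully connected classifier in PyTorch with a loss function and an optimizer could look like this:

import torch
import torch.nn as nn

# A small fully connected network for illustration; layer sizes are arbitrary.
model = nn.Sequential(
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

loss_fn = nn.CrossEntropyLoss()                            # loss function for classification
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # one possible optimizer

# One training step on random dummy data (stand-in for a real dataset).
x = torch.randn(32, 28 * 28)        # batch of 32 flattened "images"
y = torch.randint(0, 10, (32,))     # dummy class labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()                     # backpropagation
optimizer.step()                    # gradient-descent update
print(loss.item())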