
Judith Hemp, Dr. Schenk GmbH, Munich, ONLINE

Event Date: July 8, 2021 16:15


Runtime and Power-Demand Estimation for Inference on Embedded Neural Network Accelerators

Abstract - Deep learning is an important method and research area in science in general and in computer science in particular. Following this trend, large companies such as Google integrate neural networks into their products, while many new startups dedicate themselves to the topic. The ongoing development of new techniques, driven by the successful use of deep learning methods in many application areas, has made neural networks increasingly complex. As a result, deep learning applications are often associated with high computing costs, high energy consumption, and large memory requirements. General-purpose hardware can no longer keep pace with these growing demands, while cloud-based solutions cannot meet the high-bandwidth, low-power, and real-time requirements of many deep learning applications. In the search for embedded solutions, special-purpose hardware is designed to accelerate deep learning applications efficiently, much of it tailored to applications at the edge. Such embedded devices, however, typically have limited resources in terms of computation power, on-chip memory, and available energy. Neural networks therefore need to be designed not only to be accurate but also to use these limited resources carefully. Developing neural networks with their resource consumption in mind requires knowledge of these non-functional properties, so methods for estimating the resource requirements of a neural network execution must be provided. Building on this idea, the presentation describes an approach to create resource models using common machine learning methods such as random forest regression. These resource models estimate the execution time and power demand of artificial neural networks executed on embedded deep learning accelerator hardware. In addition, measurement-based evaluation results are shown, using an Edge Tensor Processing Unit as a representative of the emerging hardware for embedded deep learning acceleration.
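
As a rough sketch of what such a measurement-based resource model could look like, the snippet below fits a random forest regressor (the method named in the abstract) on synthetic stand-in data. The feature set (MAC count, parameter count, input size), the synthetic latencies, and the error metric are illustrative assumptions and not the measured Edge TPU dataset or the exact pipeline presented in the talk.

```python
# Minimal sketch of a learned runtime model for an embedded accelerator.
# NOT the author's implementation: features, data, and metric are assumed
# placeholders; real use would replace them with measured benchmark data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)

# Placeholder for measured samples: each row describes one network
# configuration (assumed features: MAC count, parameter count, input size,
# all normalized) together with the latency measured on the accelerator.
n_samples = 500
features = rng.uniform(size=(n_samples, 3))
latency_ms = (2.0 * features[:, 0]            # synthetic stand-in target:
              + 0.5 * features[:, 1]          # latency grows with MACs/params
              + rng.normal(0.0, 0.05, n_samples))

X_train, X_test, y_train, y_test = train_test_split(
    features, latency_ms, random_state=0)

# Random forest regression as the resource model.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAPE on held-out configurations: "
      f"{mean_absolute_percentage_error(y_test, pred):.2%}")
```

An analogous model could be trained on measured power figures instead of latencies to obtain the power-demand estimate mentioned in the abstract.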

Judith about herself - I am one of those students who studied at the university for a long time and with pleasure. The peacefully humming CIP pools of Friedrich-Alexander University Erlangen-Nuremberg were my home for many years (2012-2020). During this time, I took advantage of the university's rich offerings by participating in competitions (Audi Autonomous Driving Cup 2018, RuCTF 2020, various ICPCs), working at three different chairs (Cell Biology, Computer Architecture, Operating Systems) as a tutor/research assistant, not learning two languages (Spanish, Swahili), and enjoying the culinary delights of the Südmensa. I had many enjoyable experiences at the university, but probably one of the best was presenting part of my master's thesis in Austin, Texas, during the 'First International Workshop on Benchmarking Machine Learning Workloads on Emerging Hardware' in 2020. After graduation, however, real life caught up with me, and now I am working as a software developer at a company with the pleasant name 'Dr. Schenk GmbH' in Munich, where I write fast and modern C++ code.

GitHub: Inesteem
LinkedIn: https://www.linkedin.com/in/judith-hemp-b1bab11b2



