Energy-Aware Hardware/Software Co-Design for Deep Neural Networks on Reconfigurable Platforms
DOI: https://doi.org/10.31838/ESA/03.01.06

Keywords:
Energy-Aware Computing, Hardware/Software Co-Design, Deep Neural Networks (DNNs), Reconfigurable Platforms (FPGAs), Edge AI Acceleration, Low-Power Embedded Systems

Abstract
This paper presents an energy-aware hardware/software co-design framework for executing deep neural networks (DNNs) on reconfigurable hardware such as field-programmable gate arrays (FPGAs). The framework jointly optimizes power consumption and computational performance by coupling fine-grained runtime energy monitoring, an adaptive quantization strategy, and dynamic layer-wise hardware mapping. Conventional FPGA-based solutions typically lack coordinated adaptation between the hardware and software levels, leaving overall energy efficiency short of its potential. The proposed method, by contrast, enables informed trade-offs between energy consumption and model accuracy through tight coupling of hardware and software optimization. To validate the framework, extensive experiments are run on widely used DNN models, including ResNet-18 and MobileNetV2, achieving up to a 52% reduction in energy consumption and a 1.8× increase in inference throughput relative to conventional baseline architectures without co-optimization. The framework is portable across a wide range of neural network topologies and hardware configurations, making it particularly well suited to deployment in edge-AI, energy-constrained, and embedded systems, where efficiency is essential.
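To make the coupling of adaptive quantization and energy awareness concrete, the following is a minimal sketch of how a layer-wise bit-width selection step might look. It is an illustration under assumed inputs, not the paper's actual algorithm: each layer is given a hypothetical table of candidate bit-widths with estimated energy costs and accuracy penalties (such tables could be filled from the runtime energy monitor and a validation pass), and a greedy loop lowers precision where it saves the most energy per unit of accuracy lost, within a total accuracy-drop budget.

```python
# Illustrative sketch only: layer names, energy values (mJ), and accuracy
# drops below are assumptions, not measurements from the paper.

def select_bitwidths(layers, accuracy_budget):
    """Pick a bit-width per layer to minimize energy within an accuracy budget.

    layers: dict mapping layer name -> list of (bits, energy_mj, acc_drop)
    accuracy_budget: maximum total accuracy drop allowed across all layers
    Returns: dict mapping layer name -> chosen bit-width.
    """
    # Start from the highest-precision (safest) option for every layer.
    choice = {name: max(opts, key=lambda o: o[0]) for name, opts in layers.items()}
    used = sum(choice[n][2] for n in layers)

    improved = True
    while improved:
        improved = False
        best = None  # (energy saved per unit accuracy lost, layer, option)
        for name, opts in layers.items():
            cur = choice[name]
            for opt in opts:
                if opt[1] < cur[1]:  # this option saves energy
                    extra_drop = opt[2] - cur[2]
                    if used + extra_drop <= accuracy_budget:
                        energy_gain = cur[1] - opt[1]
                        score = energy_gain / max(extra_drop, 1e-9)
                        if best is None or score > best[0]:
                            best = (score, name, opt)
        if best is not None:
            _, name, opt = best
            used += opt[2] - choice[name][2]
            choice[name] = opt
            improved = True
    return {name: choice[name][0] for name in layers}


# Hypothetical two-layer example: each tuple is (bits, energy_mj, acc_drop).
layers = {
    "conv1": [(8, 1.0, 0.00), (4, 0.6, 0.30)],
    "fc":    [(8, 2.0, 0.00), (4, 1.1, 0.50)],
}
plan = select_bitwidths(layers, accuracy_budget=0.6)
```

With the numbers above, the greedy pass quantizes "fc" to 4 bits (the larger energy saving per unit of accuracy lost) and keeps "conv1" at 8 bits, since downgrading both would exceed the 0.6 budget. In a real co-design loop, the chosen plan would then drive the dynamic layer-wise hardware mapping described in the abstract.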