Fine-Grained Dynamic Partial Reconfiguration for Energy-Efficient AI Inference on FPGAs

Authors

  • Elena Lindberg, Assistant Professor, ECE, Arcada University of Applied Sciences (Yrkeshögskolan Arcada), Helsinki, Finland.

Keywords:

FPGAs, Dynamic Partial Reconfiguration (DPR), Fine-Grained Architectures, AI Inference, Energy-Efficiency, Bitstream Optimization, Hardware Accelerators, Deep Learning.

Abstract

As Deep Neural Networks (DNNs) grow in complexity, and with them the hardware required to run them, dark-silicon effects become increasingly apparent: over-provisioned hardware resources waste power both under low-load conditions and at peak performance. Conventional accelerators are typically sized for the worst-case layer of a network, paying a high energy cost when executing smaller or less computationally demanding layers. This paper proposes a novel Fine-Grained Dynamic Partial Reconfiguration (DPR) architecture that allocates hardware more efficiently by adapting it at sub-layer granularity. In contrast to coarse-grained approaches that swap complete models, our design partitions the FPGA fabric into modular reconfigurable tiles. This makes it possible to retune functional units on the fly, for example switching convolution engines between INT4, INT8, and binary logic to match the needs of a particular DNN layer. We integrate a predictive pre-fetching controller that reduces the latency overhead of the Internal Configuration Access Port (ICAP) through a double-buffered scheduling policy, allowing reconfiguration and computation to overlap. Experiments on a Xilinx Zynq UltraScale+ MPSoC show that our fine-grained DPR method delivers dynamic power savings of up to 30%. Notably, the predictive controller effectively hides reconfiguration latency, sustaining high throughput with a performance overhead of less than 5% relative to a static architecture. These results suggest that power-proportional AI accelerators built on fine-grained DPR are a feasible path for power-constrained edge computing systems.
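The latency-hiding idea behind the double-buffered schedule can be illustrated with a small timing model. The sketch below is not the paper's implementation; the per-layer reconfiguration and compute times are arbitrary illustrative values, and the model conservatively assumes the next bitstream starts streaming only when the current layer's computation begins.

```python
# Conceptual timing model for double-buffered DPR scheduling.
# Each layer is (reconfig_time, compute_time) in arbitrary units.

def sequential_latency(layers):
    """Baseline: reconfigure, then compute, for every layer (no overlap)."""
    return sum(r + c for r, c in layers)

def prefetch_latency(layers):
    """Double-buffered schedule: while layer i computes on the active
    tiles, the bitstream for layer i+1 streams into the shadow tiles
    via the ICAP. Layer i+1 stalls only if its reconfiguration takes
    longer than layer i's computation."""
    t = layers[0][0]                      # first reconfiguration cannot be hidden
    for i, (_, c) in enumerate(layers):
        t += c                            # compute layer i
        if i + 1 < len(layers):
            r_next = layers[i + 1][0]
            t += max(0, r_next - c)       # stall only on uncovered reconfig time
    return t

# Illustrative values only (not measured data from the paper).
layers = [(4, 10), (4, 3), (4, 12), (4, 5)]
print(sequential_latency(layers))  # 46
print(prefetch_latency(layers))    # 35
```

In this toy schedule, only the first reconfiguration and the one short-compute layer (3 < 4 units) expose any ICAP latency; all other reconfigurations are fully overlapped with computation, which is the effect the predictive controller exploits.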

Published

2026-04-12

Section

Articles

How to Cite

Elena Lindberg. (2026). Fine-Grained Dynamic Partial Reconfiguration for Energy-Efficient AI Inference on FPGAs. SCCTS Transactions on Reconfigurable Computing, 3(3), 1–6. https://ecejournals.in/index.php/rcc/article/view/518