Carbon Nanotube FET-Enabled VLSI Architecture for Energy-Efficient Deep Learning Accelerators in Edge AI Systems
DOI:
https://doi.org/10.31838/JIVCT/03.01.05

Keywords:
Carbon Nanotube FET (CNTFET), VLSI Architecture, Deep Learning Accelerator, Edge AI Systems, Energy Efficiency, Quantized Neural Networks, Hardware Accelerator, Low-Power Design

Abstract
The growing demand for edge artificial intelligence (AI) applications, including real-time image recognition, self-driving cars, and healthcare logistics, requires hardware that delivers both energy efficiency and high throughput. Conventional power-hungry CMOS-based accelerators are increasingly limited by power, scaling, and thermal constraints, motivating the exploration of emerging device technologies. This study proposes a VLSI deep learning accelerator architecture based on Carbon Nanotube Field-Effect Transistors (CNTFETs), chosen for their outstanding electrical characteristics: ultra-high carrier mobility, extremely low leakage current, and excellent scalability. The aim is to design and validate a CNTFET-based architecture optimised for convolutional neural networks (CNNs) under the resource constraints of edge computing. Because energy consumption is a central concern, the proposed system employs quantized arithmetic operators, a memory-efficient dataflow, and power gating of memory banks. A seven-layer CNN was implemented and simulated at both the architecture level, in Verilog, and the device level, using HSPICE models of CNTFET behaviour. The results show a 53% reduction in energy consumption and a 41% smaller silicon area compared with equivalent CMOS-based designs, with negligible performance loss. These findings demonstrate the promise of CNTFET-based designs and architectures as energy-efficient hardware for future edge AI, with relevance to the emerging fields of neuromorphic computing and deep learning.