AI-Integrated System-on-Chip (SoC) Architectures for High-Performance Edge Computing: Design Trends and Optimization Challenges

Authors

  • Noel Unciano, Environment and Biotechnology Division, Industrial Technology Development Institute, Philippines
  • Jeon Sungho, Department of Electrical and Computer Engineering, Seoul National University, Seoul 08826, Korea

DOI:

https://doi.org/10.31838/JIVCT/02.03.06

Keywords:

AI-SoC Architectures, Edge AI Computing, Neural Processing Units (NPUs), Heterogeneous System-on-Chip, Low-Power Inference, Hardware-Software Co-Design, Embedded AI Accelerators, Thermal-Aware SoC Design, Real-Time AI Processing, Energy-Efficient AI Hardware

Abstract

The growth of edge applications in autonomous systems, healthcare, and smart environments demands highly efficient and scalable computing platforms. This paper surveys the state of the art in AI-integrated System-on-Chip (SoC) architectures designed to meet the performance, energy, and latency requirements of modern edge computing. The focus is on how embedded AI accelerators, such as neural processing units (NPUs) and digital signal processors (DSPs), can be integrated into heterogeneous SoC platforms. Methodologically, the review analyzes recent advances in hardware-software co-design, dataflow architectures, and memory hierarchies, and examines heterogeneous core coupling strategies, thermally aware floorplanning, and energy-aware task scheduling. Comparative lessons from commercially available SoCs such as the Apple Neural Engine (ANE), Google Edge TPU, and NVIDIA Jetson are presented to highlight real-world trade-offs. The findings indicate that AI-tailored acceleration techniques, including systolic-array accelerators, near-memory computing, and quantization-aware processing, deliver substantial gains in inference speed and power efficiency. Remaining challenges include memory bandwidth scaling, real-time workload scheduling, and thermal dissipation. The paper concludes that future AI-SoC architectures should be modular, reconfigurable, and secure, with runtime and power profiles suited to deployment at the edge. This overview distills practical design considerations and outlines the major research directions for next-generation SoCs supporting sustainable, intelligent computing at the edge.
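
To make the quantization-aware processing mentioned in the abstract concrete, the sketch below (not taken from the paper; a minimal NumPy illustration with invented tensor shapes) shows the basic mechanism an edge NPU uses for an int8 layer: symmetric quantization of weights and activations, int32 accumulation in the MAC array, and a single rescale back to floating point. This is the arithmetic that underlies the reported bandwidth and power savings.

import numpy as np

def quantize_sym(x, num_bits=8):
    """Symmetric per-tensor quantization: float32 tensor -> int8 tensor plus a scale."""
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for int8
    scale = float(np.max(np.abs(x))) / qmax
    if scale == 0.0:                               # guard against an all-zero tensor
        scale = 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def int8_linear(x_q, x_scale, w_q, w_scale):
    """Int8 matrix-vector product with int32 accumulation, as an NPU MAC array would do."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T   # int32 accumulators
    return acc.astype(np.float32) * (x_scale * w_scale)   # dequantize once per output

# Hypothetical layer: 256-dim activation, 128 output channels (shapes are illustrative only).
rng = np.random.default_rng(0)
x = rng.standard_normal(256).astype(np.float32)
w = rng.standard_normal((128, 256)).astype(np.float32)

x_q, x_s = quantize_sym(x)
w_q, w_s = quantize_sym(w)

y_int8 = int8_linear(x_q, x_s, w_q, w_s)
y_fp32 = w @ x
print("max abs error vs. float32:", np.max(np.abs(y_int8 - y_fp32)))

In a real deployment the scales are calibrated offline (or learned during quantization-aware training) and the int8 operands halve or quarter the memory traffic per inference, which is where much of the efficiency gain on edge SoCs comes from.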

Published

2025-07-13

Section

Articles

How to Cite

[1]
Noel Unciano and Jeon Sungho, “AI-Integrated System-on-Chip (SoC) Architectures for High-Performance Edge Computing: Design Trends and Optimization Challenges”, Journal of Integrated VLSI, Embedded and Computing Technologies, vol. 2, no. 3, pp. 47–55, Jul. 2025, doi: 10.31838/JIVCT/02.03.06.