Transformer-Based Deep Feature Learning for AI-Enhanced Fault Diagnosis in Deep Submicron VLSI Circuits
DOI:
https://doi.org/10.31838/ECE/03.02.02

Keywords:
Fault Diagnosis, VLSI, Deep Submicron, Transformers, Feature Learning, AI, IC Testing, Self-Attention, Fault Classification

Abstract
This paper introduces a transformer-based artificial intelligence framework for the accurate and efficient diagnosis of faults in deep submicron Very-Large-Scale Integration (VLSI) circuits. Technology scaling inevitably makes fault localization and classification harder, as diagnostic models become highly sensitive to noise, process variability, and shrinking feature sizes. The goal of this research is to develop a diagnostic model that captures the complex temporal and spatial correlations in circuit signal data. The proposed solution combines deep feature learning with a transformer architecture whose multi-head self-attention enables the model to capture long-range dependencies in test waveforms and logic signatures. The architecture is trained on simulated datasets generated from the industry-standard ISCAS and ITC'99 benchmark circuits, covering a wide range of fault types including stuck-at, bridging, and delay faults. Evaluation criteria include diagnosis accuracy, inference speed, and generalization under process variations. The experimental results show that the transformer model outperforms traditional CNN and LSTM baselines, achieving 96.3% fault classification accuracy and a 30% reduction in inference time. The model also exhibits strong noise tolerance on the test data and generalizes well across circuits. In summary, this study concludes that transformer-based architectures are effective for improving VLSI fault diagnosis and offers practical evidence toward future AI-based post-silicon validation tools capable of real-time diagnosis in state-of-the-art semiconductor fabrication facilities.
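To make the described architecture concrete, the sketch below illustrates, in PyTorch, the general pattern the abstract names: a multi-head self-attention encoder over windowed test signals followed by a classification head for fault types. This is not the authors' implementation; the model dimensions, sequence length, number of input features, and the three-class output (stuck-at, bridging, delay) are illustrative assumptions.

import torch
import torch.nn as nn

class FaultTransformer(nn.Module):
    """Minimal transformer-encoder classifier over circuit test waveforms.

    All hyperparameters are illustrative, not taken from the paper.
    """
    def __init__(self, n_features=8, d_model=64, n_heads=4,
                 n_layers=3, n_classes=3, seq_len=128):
        super().__init__()
        # Project raw per-timestep signal features into the model dimension.
        self.embed = nn.Linear(n_features, d_model)
        # Learned positional encoding over the test-waveform time axis.
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        # Multi-head self-attention stack: captures long-range dependencies
        # across the waveform, the property the abstract attributes to the model.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Classify into fault categories (e.g. stuck-at, bridging, delay).
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        h = self.embed(x) + self.pos
        h = self.encoder(h)                # (batch, seq_len, d_model)
        return self.head(h.mean(dim=1))    # mean-pool over time, then classify

model = FaultTransformer()
logits = model(torch.randn(2, 128, 8))     # two example waveform windows
print(logits.shape)                        # torch.Size([2, 3])

Mean-pooling over the time axis is one simple readout choice; a CLS-style token or attention pooling would fit the same encoder without changing the overall pattern.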