IR drop analysis determines whether any voltage drop exceeds a given constraint, e.g., 10% of the supply voltage. The analysis is very time consuming, especially for dynamic IR drop, because it must account for the switching activity of every gate over a long simulation time. To overcome this, we propose a machine learning-based approach that predicts IR drop hotspots quickly, enabling early-stage design analysis and opening up opportunities for better optimization. Our IR drop hotspot predictor applies an image-recognition ML model augmented with a recurrent network, so that hotspots are detected with the transient components taken into account.
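
As an illustration only, the PyTorch sketch below shows the general structure we have in mind; the layer sizes, the input feature maps (e.g. per-window power, toggle-rate, and PDN maps), and the name IRDropHotspotNet are hypothetical, not our actual model. A CNN encodes each time window of the tile grid and an LSTM accumulates the transient component before a per-tile hotspot probability is produced.

import torch
import torch.nn as nn

class IRDropHotspotNet(nn.Module):
    """CNN per time window + LSTM over time, producing a per-tile hotspot map."""
    def __init__(self, in_ch=4, feat=32):
        super().__init__()
        # Per-window CNN: encodes the feature maps of one simulation window.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # Recurrent part: captures the transient (time-dependent) component.
        self.rnn = nn.LSTM(feat, feat, batch_first=True)
        self.head = nn.Conv2d(feat, 1, 1)      # per-tile hotspot logit

    def forward(self, x):                      # x: (batch, time, ch, H, W)
        b, t, c, h, w = x.shape
        f = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1, h, w)
        f = f.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, -1)  # one sequence per tile
        out, _ = self.rnn(f)
        last = out[:, -1, :].reshape(b, h, w, -1).permute(0, 3, 1, 2)
        return torch.sigmoid(self.head(last))  # (batch, 1, H, W) hotspot probability

# Toy usage: 8 time windows of 4 feature maps on a 64x64 tile grid.
prob = IRDropHotspotNet()(torch.randn(2, 8, 4, 64, 64))
print(prob.shape)                              # torch.Size([2, 1, 64, 64])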

A lithography hotspot is a defect that causes manufacturing failures such as bridging or necking in the wafer pattern. Hotspots are conventionally detected after tape-out, so it is hard to check at the P&R stage whether a hotspot exists; in addition, detecting them requires heavy lithography computation. Our proposed ML-based lithography hotspot detector acts as a bridge between manufacturing and P&R by offering early feedback at the P&R stage. First, the detector quickly predicts the PV band of the post-OPC layout directly from the pre-OPC layout. Second, it provides a hotspot fix guide by identifying the patterns that cause each hotspot. These two features enable lithography hotspots to be detected and fixed early in the VLSI design flow.
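
A minimal sketch of the first feature, assuming the pre-OPC layout is rasterized into an image and the PV band is predicted pixel-wise; the network size, the name PVBandPredictor, and the threshold value are illustrative rather than our actual model.

import torch
import torch.nn as nn

class PVBandPredictor(nn.Module):
    """Fully convolutional net: rasterized pre-OPC layout -> PV band width map."""
    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, feat, 5, padding=2), nn.ReLU(),
            nn.Conv2d(feat, feat, 5, padding=2), nn.ReLU(),
            nn.Conv2d(feat, 1, 1),             # predicted PV band per pixel
        )
    def forward(self, layout):
        return self.net(layout)

def flag_hotspots(pv_band, threshold):
    """Mark pixels whose predicted PV band exceeds a given threshold."""
    return pv_band > threshold

layout = torch.rand(1, 1, 128, 128)            # dummy rasterized pre-OPC clip
pred = PVBandPredictor()(layout)
print(flag_hotspots(pred, threshold=0.8).sum().item(), "hotspot pixels")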

Standard model-based OPC (MB-OPC) tries to minimize the average, or the maximum, edge placement error (EPE) over all segments. This may not be ideal for layers in which some segments are more important than others; it has been reported that the worst negative slack becomes 36.4% more negative when circuit timing analysis is performed on the post-OPC resist contour of the poly layer. An OPC method that minimizes EPE preferentially on important segments is called context-aware OPC. Our proposed context-aware OPC is based on a recurrent neural network (RNN) that assigns higher weights to segments associated with timing-critical paths, taking both each segment's importance and its neighbors into account.
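
The sketch below illustrates the weighting idea under simplifying assumptions: per-segment features (e.g. timing criticality) are ordered along the polygon boundary, a bidirectional GRU lets each weight depend on neighboring segments, and OPC then minimizes a weighted EPE objective. The names, feature counts, and sizes are placeholders.

import torch
import torch.nn as nn

class SegmentWeightRNN(nn.Module):
    """Bidirectional RNN over an ordered segment sequence: each segment's weight
    depends on its own features and on those of its neighbors."""
    def __init__(self, n_feat=6, hidden=16):
        super().__init__()
        self.rnn = nn.GRU(n_feat, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 1)

    def forward(self, seg_feats):              # (batch, n_segments, n_feat)
        h, _ = self.rnn(seg_feats)
        return torch.softmax(self.fc(h).squeeze(-1), dim=-1)  # weights sum to 1

def weighted_epe_loss(epe, weights):
    """Context-aware objective: weighted sum of squared EPE over segments."""
    return (weights * epe.pow(2)).sum(dim=-1).mean()

feats = torch.randn(4, 50, 6)                  # 50 segments per clip, 6 features each
epe = torch.randn(4, 50)                       # simulated EPE per segment
weights = SegmentWeightRNN()(feats)
print(weighted_epe_loss(epe, weights))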

Dummy fill is commonly used to planarize the chemical mechanical polishing (CMP) process, and many automated fill methods have been developed. We propose a dummy fill method based on a generative adversarial network (GAN) that improves on existing methods in terms of metal density uniformity and the timing of critical nets. The generated dummy patterns are similar to those of existing methods, but the GAN-based method applies additional optimizations that make the CMP dummy fill pattern more effective.
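
A conditional-GAN sketch of the idea, with hypothetical networks and an assumed density-uniformity penalty standing in for the additional optimizations; the actual objectives (e.g. coupling to critical nets) are not shown, and the penalty weight is arbitrary.

import torch
import torch.nn as nn

class FillGenerator(nn.Module):
    """Maps the design metal-density map to a dummy-fill density map."""
    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 1, 1), nn.Sigmoid(),          # fill density in [0, 1]
        )
    def forward(self, density):
        return self.net(density)

class FillDiscriminator(nn.Module):
    """Judges whether a (design, fill) pair looks like a reference fill solution."""
    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, feat, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat, 1),
        )
    def forward(self, design, fill):
        return self.net(torch.cat([design, fill], dim=1))

# One generator step: adversarial loss plus a density-uniformity penalty.
G, D = FillGenerator(), FillDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

design = torch.rand(8, 1, 64, 64)              # design metal density per tile
fill = G(design)
total = design + fill
uniformity = total.var(dim=(2, 3)).mean()      # penalize non-uniform total density
adv = bce(D(design, fill), torch.ones(8, 1))   # try to fool the discriminator
(adv + 10.0 * uniformity).backward()
opt_g.step()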

The recent evolution of semiconductor technology requires concurrent optimization of process/device, standard cell (SC) architecture, and digital implementation, which has led to the concept of DTCO. In particular, co-optimizing the SC architecture and the design rules to achieve the best chip PPAY (power, performance, area, yield) is an important stage of DTCO. In this project, we will develop an SC layout generator that eases technology migration and produces competitive QoR (quality of result), and then a design space exploration (DSE) tool based on machine learning that analyzes chip PPA across diverse SC architectures and design rules. Finally, we will construct a DTCO framework for co-optimizing cell architecture and design rules.
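
For the DSE part, a minimal surrogate-model sketch conveys the intended flow; the SC architecture and design-rule knobs, their ranges, and the PPA data are all made up for illustration. A model is fit on previously implemented design points and then used to rank unexplored (architecture, rule) combinations by predicted PPA.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical knobs: cell height (tracks), fin count, poly pitch, metal pitch.
rng = np.random.default_rng(0)
X = rng.uniform([5, 2, 40, 28], [9, 4, 60, 40], size=(200, 4))
# Placeholder PPA figure of merit from prior P&R runs (random for illustration).
y = rng.normal(size=200)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Score candidate (SC architecture, design rule) combinations and pick the one
# with the best (here: lowest) predicted figure of merit.
candidates = rng.uniform([5, 2, 40, 28], [9, 4, 60, 40], size=(10_000, 4))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("suggested SC/design-rule point:", best)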

Design-Technology Co-Optimization (DTCO) activities become crucial in the early stages of technology development. Our research develops methods to predict standard cell and circuit performance that reflect layout and process factors in a DTCO environment for next-generation devices and processes. Principal Component Analysis (PCA) and Bayesian inference techniques such as Adaptive Importance Sampling (AIS) can be used to account for process variability efficiently and to predict the statistical yields of standard cells and circuits.
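
A toy sketch of the sampling idea, with a hypothetical linear pass/fail model and the PCA step omitted for brevity: the sampling distribution of the (reduced) process variables is shifted toward observed failures over a few rounds, and the final failure probability is corrected with importance weights.

import numpy as np

rng = np.random.default_rng(1)

def fails(x):
    """Hypothetical pass/fail model: the cell fails when a performance metric,
    a linear combination of reduced process variables, exceeds a limit."""
    return x @ np.array([1.0, 0.7, 0.4]) > 4.5

# Adaptive importance sampling: recentre the proposal on observed failures.
dim, n = 3, 5000
mean = np.zeros(dim)
for _ in range(5):
    x = rng.normal(mean, 1.0, size=(n, dim))
    f = fails(x)
    if f.any():
        mean = x[f].mean(axis=0)

# Final estimate with importance weights p(x)/q(x) for the shifted Gaussian.
x = rng.normal(mean, 1.0, size=(n, dim))
log_w = -0.5 * (x ** 2).sum(1) + 0.5 * ((x - mean) ** 2).sum(1)
p_fail = np.mean(fails(x) * np.exp(log_w))
print("estimated failure probability:", p_fail, "yield:", 1 - p_fail)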