ASIC and VLSI Design Flashcards
“Lambda” (λ) in VLSI design
A scalable unit of length used in layout design rules. λ is defined as half the minimum feature size of a process (e.g., λ = 90 nm in a 180 nm process). Expressing all layout dimensions as multiples of λ lets a design be ported to a new process, in principle, by changing only the value of λ
Moore’s Law
The observation that the number of transistors on a chip doubles approximately every two years (no longer strictly holds; the pace of doubling has slowed in recent years)
Origin of the term VLSI
The level of integration of chips has been classified as small-scale, medium-scale, large-scale, and very large-scale. Small-scale integration (SSI) circuits, such as the 7404 inverter, have fewer than 10 gates, with roughly half a dozen transistors per gate. Medium-scale integration (MSI) circuits, such as the 74161 counter, have up to 1000 gates. Large-scale integration (LSI) circuits, such as simple 8-bit microprocessors, have up to 10,000 gates. It soon became apparent that new names would have to be invented every few years if this trend continued, and thus the term very large-scale integration (VLSI) is used to describe most integrated circuits from the 1980s onward.
Historical Rate at which the number of transistors on a chip increases
Since the beginning of the atomic age (1945), the number of transistors (N) that can fit in a processor has gone up 100-fold every 15 years:
N = 100^((t − 1945) / 15)
*This gives a trendline very close to actual historical data for real Intel processors
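As a quick sanity check, the trendline above can be evaluated directly. A minimal sketch in Python (the function name `transistor_trend` is my own, not from any standard library):

```python
# Trendline from the card above: N = 100 ** ((t - 1945) / 15)
def transistor_trend(year: float) -> float:
    """Model estimate of transistors per processor in a given year."""
    return 100 ** ((year - 1945) / 15)

# Sample a few years to see the model's estimates
for year in (1971, 2000, 2023):
    print(f"{year}: ~{transistor_trend(year):.2e} transistors")
```

For 1971 the model gives roughly 2.9 × 10³, in the same ballpark as the Intel 4004's 2300 transistors. Note also that 100× every 15 years is equivalent to doubling about every 2.26 years (15 / log₂ 100), close to the two-year cadence of Moore's Law.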
Small scale integration
Up to 10 gates per chip
Medium scale integration
Up to 1000 gates per chip
Large scale integration
Up to 10,000 gates per chip
Example: 8-bit processors
Very Large Scale Integration (VLSI)
Over 10,000 gates per chip
*Nearly all modern chips are VLSI
*Expected to achieve 100 billion transistors per processor by Summer 2027
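The Summer 2027 figure is consistent with the trendline card above, N = 100^((t − 1945)/15): setting N = 10¹¹ and solving for t gives t = 1945 + 15·log₁₀₀(10¹¹) = 1945 + 82.5 = 2027.5, i.e., mid-2027. A small sketch of this inversion (the helper name `year_for_count` is my own):

```python
import math

# Invert N = 100 ** ((t - 1945) / 15) to find the year t at which
# the trendline reaches a target transistor count n.
def year_for_count(n: float) -> float:
    """Year at which the trendline predicts n transistors per processor."""
    return 1945 + 15 * math.log(n, 100)

print(year_for_count(1e11))  # 100 billion -> approximately 2027.5
```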
Three reasons for the decline of Moore’s Law
1) Physical Limitations: As transistors approach atomic scales, issues like quantum tunneling and heat dissipation become significant challenges. Shrinking transistors further to fit more on a chip becomes increasingly difficult due to these effects
2) Rising Costs: The complexity of manufacturing smaller transistors, using technologies like FinFET and Gate-All-Around (GAA), has made semiconductor production more expensive and challenging. This has slowed down the frequency of doubling transistor counts
3) Alternative Strategies: Instead of continuing with transistor scaling, chip manufacturers like Intel are focusing on multichip architectures and 3D stacking to increase performance. These methods allow for more transistors without needing to shrink individual transistors
Heterogeneous computing
Instead of relying solely on general-purpose CPUs, companies are developing specialized accelerators for specific tasks, such as AI processing, graphics (GPUs), and networking. This has led to a rise in application-specific integrated circuits (ASICs) and system-on-chip (SoC) designs that optimize for particular workloads
Financial Implications of the decline of Moore’s Law
Semiconductor companies have been forced to invest heavily in research and development (R&D) for advanced manufacturing techniques like Extreme Ultraviolet Lithography (EUV) and new materials (e.g., FinFETs, Gate-All-Around FETs). This has driven up costs significantly. As a result, the industry has seen consolidation, with fewer companies able to compete at the leading edge of semiconductor technology.
The investment required to maintain competitive manufacturing capabilities has pushed some companies to focus on fabless models, relying on pure play foundries like TSMC or Samsung for fabrication, while concentrating on chip design
“Slower Node Transitions”
Because of the decline of Moore’s Law, the move from one process node to the next (e.g., 10 nm to 7 nm) now takes longer than in the past, resulting in longer product cycles. This impacts both integrated device manufacturers (IDMs) and pure play foundries, as they need to optimize their current process nodes and innovate in other areas, such as power efficiency and packaging, to differentiate their products
Current strategy used by manufacturers to optimize power consumption
They are designing low-power cores and adopting multi-core architectures, where different cores handle different tasks to balance performance with efficiency.
Apple’s M-series chips (M1, M2) exemplify this trend, integrating performance and efficiency cores to optimize energy usage, especially in mobile and battery-powered devices
Emerging Chip manufacturing business models
1) Chiplet Ecosystem: Companies are now developing chiplets that can be integrated into larger systems. This has led to collaborations between chipmakers and foundries to develop modular and customizable chip solutions.
2) Collaboration Across the Supply Chain: As costs rise, collaboration between design firms, foundries, and packaging companies has intensified to share the burden of R&D and manufacturing investments. For instance, Intel’s IDM 2.0 strategy involves manufacturing for external clients, similar to pure play foundries
3) Vertical Integration: Companies like Apple are increasingly designing their own processors (e.g., M-series), controlling the entire hardware-software stack to optimize performance without relying on general-purpose chips from external vendors
Multi-chip Modules
MCMs are packages that contain multiple integrated circuits (ICs) or dies assembled onto a single substrate. These dies are typically placed side by side (in a 2D arrangement) and connected using wire bonds or a silicon interposer.
Key Features:
- 2D Integration: Multiple dies are placed on a substrate in close proximity.
- Parallel Processing: MCMs allow for several chips to work together, which increases overall system performance by distributing workloads across multiple dies.
- Cost-Effective: MCMs can reduce costs by using different process nodes for different chips (e.g., using an older process for less critical parts).
- Used for: Applications requiring multiple functionalities (e.g., CPU and memory) on the same package, such as in graphics processors (GPUs) or system-on-chip (SoC) designs.
Example: GPUs often use MCMs to combine processing cores and memory chips within a single package, allowing faster communication and reducing overall system latency.
3D Stacking in VLSI
Overview:
3D stacking involves stacking multiple layers of integrated circuits (ICs) on top of each other, forming a vertical integration. These layers are interconnected through Through-Silicon Vias (TSVs), which allow for shorter interconnects between layers compared to traditional 2D layouts.
Key Features:
- Vertical Integration: Chips are stacked vertically, resulting in a smaller footprint, which reduces latency and power consumption by minimizing the distance signals need to travel.
- Higher Density: By stacking layers of logic, memory, or both, 3D ICs achieve higher transistor densities.
- Better Power Efficiency: Signals travel shorter distances, and stacking reduces power loss, making 3D ICs highly efficient.
Applications:
3D stacking is particularly useful in high-performance computing, where memory bandwidth is crucial, such as in High Bandwidth Memory (HBM) for GPUs and AI processors.
Example: High Bandwidth Memory (HBM) and stacked DRAM solutions are popular examples of 3D stacking technology used in advanced GPUs and AI chips.
Chiplets
Overview:
Chiplets are small functional blocks or “mini-chips” that can be mixed and matched within a package to form a larger, more complex system. Each chiplet performs a specific function (e.g., I/O, memory, or CPU), and they are connected using high-speed interconnects on a substrate or interposer.
Key Features:
- Modularity: Chiplets allow designers to reuse blocks of IP across different products, saving design time and costs.
- Scalability: Different chiplets can be manufactured using different process nodes, optimizing costs and performance for specific functions.
- Customization: Chiplets enable a modular approach to building chips, allowing manufacturers to create highly customized solutions for specific workloads.
Application: High-performance processors (e.g., AMD’s Ryzen and EPYC processors) and heterogeneous systems where different tasks require specialized chiplets for optimal performance.
Example: AMD’s Zen architecture uses chiplets to separate CPU cores from I/O, allowing the cores to be manufactured on an advanced node (e.g., 7nm) while using an older process for I/O, thus optimizing cost and performance.