Compared with 10 years ago, and certainly compared with 20 or 30 years ago, the number of ways in which processor developers are able to differentiate their silicon has grown tremendously. And that has implications for a lot of companies within the chip industry.
There was a time when processor advances revolved to a large degree around a couple of things:
- Improving CPU clock speeds and power efficiency with the help of manufacturing advances that shrank transistor sizes (i.e., Moore's Law).
- Microarchitecture advances that improved the number of instructions a processor could handle per clock cycle (IPC gains).
Later, as multi-core CPUs became the norm and accelerators featuring many tiny processing cores (for example, GPUs) proliferated, core count increases (made possible by Moore's Law) were also an important way to differentiate, as were design advances that improved how effectively and efficiently all of the cores on a given chip worked together.
Today, chip developers definitely haven't stopped trying to improve clock speeds, IPC or core counts. Indeed, AMD's (AMD) CPU share gains in recent years have a lot to do with how (with the help of manufacturing partner Taiwan Semiconductor (TSM)) it has driven core counts and clock speeds higher for its PC and server CPUs, and how its latest CPU core microarchitectures have narrowed Intel's (INTC) traditional IPC lead.
But as some recent chip announcements and news reports drive home -- see this one, or this one, or this one -- there are also now several other ways that processor developers see avenues for differentiating their offerings. These include:
- Embracing a different instruction set architecture (ISA) that -- although potentially creating software compatibility challenges -- can deliver cost, performance and/or power efficiency advantages relative to an established ISA. Amazon.com's (AMZN) and Marvell's (MRVL) development of ARM-architecture server CPUs is a case in point, as are Apple's (AAPL) reported plans to launch Macs powered by ARM CPUs.
- Developing 2.5D and 3D chip packaging technologies that improve system performance and power efficiency -- for example, Intel's Foveros 3D packaging tech, or its Co-EMIB tech for connecting two groups of chips stacked using Foveros.
- Developing co-processors dedicated to functions such as processing photos and videos, deploying deep learning models or encrypting and decrypting content. Such co-processors are now a common sight on everything from Apple's and Qualcomm's (QCOM) mobile processors to Nvidia's (NVDA) GPUs.
- Creating solutions that eliminate communications bottlenecks, whether within a system or between systems. Nvidia's acquisition of high-speed server interconnect developer Mellanox Technologies was driven in large part by a desire to better address system-to-system bottlenecks within data centers packed with Nvidia server GPUs.
- Optimizing a processor's ability to run a particular piece of software, or a type of software. Google's Tensor Processing Units (TPUs), which are designed from the ground up to run the popular TensorFlow machine learning framework, are one good example.
This broad expansion in the number of ways companies try to, and are able to, make their processors stand out has a couple of important implications.
First, it spells an increase in the total amount of processor R&D going on. That in turn is a positive for electronic design automation (EDA) software vendors such as Cadence Design Systems (CDNS) and Synopsys (SNPS).
Second, and perhaps more importantly, it's leading processor and accelerator markets to become more fragmented, with many smaller players able to carve out niches even if there's a clear market leader. In addition to being a positive for the smaller players (and at times, a negative for market leaders), that's probably also good news for TSMC, considering how many of the smaller players are turning to TSMC to manufacture their chips.