Microprocessor architecture choice in the age of AI: exploring ARM and RISC-V
In a world where the everyday devices we use are based on programmable
silicon, choosing the right microprocessor architecture is key to
delivering a successful product. In the age of AI, the choice of
microprocessor to anchor an AI solution is especially important.
BY GOPAL HEGDE, SENIOR VICE PRESIDENT OF ENGINEERING AND OPERATIONS AT SIMA.AI.
INNOVATORS building their own silicon are familiar with the process of selecting an Instruction Set Architecture (ISA), which dictates the instructions and data types a CPU uses to perform calculations, manage data and interact with memory and other components in a given product. It is also important to understand the ecosystem of design implementations, tools and extended software support, as well as the flexibility of the licensing options.
For AI/ML products, additional considerations include support for the data types used in AI/ML and native instructions that accelerate AI/ML applications.
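As a rough illustration of what that support means in practice, the C sketch below computes an int8 dot product with int32 accumulation, the core operation of quantized inference (the kernel name dot_i8 is hypothetical and used purely for illustration). With the right target flags, a vectorizing compiler can map such a loop onto ISA-specific acceleration, for example the SDOT/UDOT dot-product instructions introduced with ARMv8.2-A, or the RISC-V "V" vector extension, assuming the chosen core implements them.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* int8 dot product with int32 accumulation: the inner loop of
 * quantized (INT8) neural-network inference. The name dot_i8 is
 * hypothetical; a vectorizing compiler may lower this loop to
 * ISA-specific instructions (e.g. ARM SDOT/UDOT or RISC-V "V"
 * vector operations) when the target supports them. */
static int32_t dot_i8(const int8_t *a, const int8_t *b, size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++) {
        /* Widen to int32 before multiplying so the products do not
         * overflow the narrow int8 operands. */
        acc += (int32_t)a[i] * (int32_t)b[i];
    }
    return acc;
}

int main(void)
{
    int8_t a[8] = { 1, -2, 3, -4, 5, -6, 7, -8 };
    int8_t b[8] = { 8,  7, 6,  5, 4,  3, 2,  1 };
    printf("dot = %d\n", (int)dot_i8(a, b, 8));
    return 0;
}

Whether the accelerated instructions are actually emitted depends on the compiler, its flags and the specific core, which is exactly the kind of ISA-level consideration that matters when anchoring an AI product.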
While the x86 ISA has dominated the computing market, x86 CPU core licenses are not widely available outside of the Intel foundry ecosystem. That leaves two primary players with licensable ISAs and strong ecosystems: ARM and RISC-V. ARM boasts a legacy of dominance in the embedded device market, while open-source RISC-V claims to be the architecture of choice for the flexibility desired by emerging AI companies. These factors largely determine how each architecture is adopted and used in AI systems.
ARM and RISC-V side-by-side
ARM emerged in the 1990s and has become ubiquitous thanks to its energy efficiency, broad ecosystem support, flexible licensing terms and integrated design implementations. The general RISC design philosophy prioritizes a reduced number of instruction classes, parallel pipeline units and a large general-purpose register set, though ARM has further evolved this with extensions. ARM processors are specifically designed to reduce power consumption and extend battery life, and include features for multi-threading, co-processors and higher code density, along with comprehensive software compilation and hardware debug technology.
ARM is licensable IP, allowing many companies to integrate custom ARM-based designs into their products. The associated license fees fund ongoing ARM development and allow the company to continue improving its technology, such as new extensions and optimizations for modern workloads like AI, and hardened implementations targeted at deep-submicron processes. However, ARM defines the roadmap for its ISA according to its own view of how to support AI/ML workloads. This has not always been accepted by companies that feel there are better approaches to supporting AI/ML algorithms on a programmable processor than the ones ARM is promoting.
ARM’s licensing model and vast ecosystem support have made it the dominant architecture for mobile, IoT and embedded use cases. In fact, chips containing ARM IP power most of today’s devices and are used by Apple, Nvidia, Qualcomm, MediaTek, Google and many other vendors in mobile, consumer and embedded silicon products. Yet ARM has not been widely adopted for running AI/ML workloads themselves (except in microcontrollers for tinyML use cases), but rather for hosting the surrounding software stacks, because these emerging algorithms demand higher computational performance and power efficiency than general-purpose cores can deliver.
Companies flock to ARM for a simple reason: its solutions work with their software. In addition, ARM closely controls the ISA and provides ARM validation suites (AVS) to ensure that software written for one ARM ISA implementation is binary compatible with any other ARM ISA implementation.