
Magazine Feature
This article was originally featured in the edition:
2023 Issue 3

The potential of AI


Realising impact for businesses and consumers.

By Leo Charlton, Technology Analyst at IDTechEx, whose research interests are in quantum technologies and nanophotonics.

The emergence of generative AI over the past five years – the most famous examples of which are OpenAI’s DALL-E 2 image generator and ChatGPT – has been a key milestone for the ongoing AI boom.

The robust predictive abilities of ChatGPT in particular have given a glimpse into the transformational power of AI across numerous industry verticals. As the breadth and depth of AI models grow, companies will face the question of how to utilize AI tools effectively for maximum business impact.

While software has by and large received more media attention of late than hardware – understandably, given that end-users and those analysing the impact of such technologies ultimately care about what a tool can do, not how it does it – the promise of AI models would remain unrealized were it not for the design and manufacture of hardware that can run these models cost-effectively. As software grows in complexity (the most advanced AI models being far more computationally intensive than their predecessors), advanced hardware is needed to facilitate that growth.

According to a recently published report by IDTechEx on AI chips – the semiconductor circuitry that enables such AI functionalities as natural language processing, object detection and classification, and speech recognition – the global AI chips market will grow to more than US$250 billion by 2033, with the IT & Telecoms, Banking, Financial Services and Insurance (BFSI), and Consumer Electronics industries being key beneficiaries of emerging AI technologies.

Growing AI usage in edge devices
In the aforementioned AI Chips: 2023–2033 report, IDTechEx consider recent trends in investments for AI hardware at both the design and manufacture phases of the supply chain.

In addition to analysing new product launches and financial data from the key market players, IDTechEx model revenue growth for AI chips over the next ten years at several degrees of granularity.

A key finding from the report is related to the revenue split over the forecast period between chips used for inference purposes versus those used for training.

Training and inference are the two stages of the machine learning process, wherein computer programs utilize data to make predictions based on a model, and then optimize the model to better fit with the data provided by adjusting the weightings used. The first stage of implementing an AI algorithm is the training stage, where data is fed into the model and the model adjusts its weights until it fits appropriately with the provided data.

The second stage is the inference stage, where the trained AI algorithm is executed, and new data (that was not provided in the training stage) is classified in a manner consistent with the acquired data. Of the two stages, the training stage is more computationally intensive, given that this stage involves performing the same computation millions of times.
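The two stages described above can be illustrated with a minimal sketch – a hypothetical toy example in plain Python, not drawn from the report. Training repeatedly adjusts the weights of a simple linear model until it fits the provided data; inference then applies the trained model once to an input it has never seen:

```python
# Toy illustration of the two stages of machine learning:
# training (adjust weights to fit the data) and inference
# (run the trained model on new data). Hypothetical example.

def train(data, epochs=1000, lr=0.01):
    """Fit y = w*x + b by gradient descent, adjusting the
    weights until the model fits the provided data."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):  # the same computation, many times over
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def infer(w, b, x):
    """Apply the trained model to a new input: a single cheap pass."""
    return w * x + b

# Training data drawn from y = 2x + 1
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
print(infer(w, b, 4))  # an input not seen during training; close to 9
```

Even in this toy example, training loops over the dataset a thousand times while inference is a single multiply-and-add – a small-scale analogue of why the training stage is so much more computationally intensive.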

The training for some leading AI algorithms can take days to complete, with ChatGPT using around 10,000 Nvidia A100 GPUs to train the GPT-3.5 large language model (LLM) on which it is based. Yet, despite these already impressive numbers, IDTechEx forecast that chips used for inference purposes will grow to contribute more than two-thirds of total AI chip market revenue by 2033.

As all AI training takes place within the data centre, in a cloud computing environment, this forecast speaks not only to the continued use of inference chips in the cloud, but also to AI chips in edge devices – which are used for inference purposes – growing at a higher rate over the next ten years than those in the cloud.

Adoption of AI-capable chips within edge devices is imperative for certain applications – such as fully autonomous vehicles – and increasingly commonplace in others, such as mobile phones. Wherever AI proves crucial to a particular application, effective deployment has the potential to create ‘new normals’ across industries.