At the event, AMD Chair and CEO Lisa Su said the MI300X is AMD's product truly designed for generative artificial intelligence. Compared with NVIDIA's H100 chip, the MI300X provides 2.4 times the memory and 1.6 times the memory bandwidth; generative AI and large language models demand substantial increases in both computing power and memory.

With the large memory of the AMD Instinct MI300X, users can run a large language model such as Falcon-40B, a 40-billion-parameter model, on a single MI300X accelerator. AMD also introduced the AMD Instinct™ Platform, which combines eight MI300X accelerators into a single industry-standard design to provide the ultimate solution for AI inference and training.
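A rough back-of-the-envelope estimate shows why such a model fits on one accelerator (assuming 16-bit weights, a detail not stated in the announcement): 40 billion parameters × 2 bytes ≈ 80 GB of weights, comfortably within the MI300X's 192 GB of HBM3, which is 2.4 times the H100's 80 GB.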
Earlier, Reuters reported that Amazon's cloud computing business is considering using AMD's new artificial intelligence (AI) chips.

In addition, AMD announced that the AMD Instinct MI300A, the world's first APU accelerator for HPC and AI workloads, is now sampling to customers, and that the MI300X will begin sampling to key customers in the third quarter.
Previously, the 61st edition of the TOP500 list of global supercomputers, released in early June, showed that NVIDIA was the biggest winner by the number of accelerators/coprocessors used across all ranked systems: of the 28 accelerator/coprocessor products on the list, 20 are from NVIDIA and only one is from AMD.
Among them, the NVIDIA Tesla V100, NVIDIA A100, NVIDIA A100 SXM4 40GB, NVIDIA Tesla A100 80G, NVIDIA Tesla V100 SXM2, and AMD Instinct MI250X occupy the top six spots, with 61, 27, 18, 10, 10, and 10 systems respectively, accounting for 12.2%, 5.4%, 3.6%, 2%, 2%, and 2% of the list.