• Microsoft’s ND H100 v5 VMs appear to be among its most potent computational offerings to date.
  • Enterprises can register their interest in the new VMs, and the company promises to expand the offering to make hundreds of thousands of H100 GPUs available to customers by next year.

Microsoft Corp. plans to encourage organizations to deploy their most advanced artificial intelligence projects to the Azure cloud platform by providing access to Nvidia Corp.’s latest and most powerful graphics processing units.

The company has announced that its ND H100 v5 Virtual Machine series is now generally available on the Azure cloud. It gives customers access to the high-performance computing infrastructure needed to train and run generative AI models.

Furthermore, it is expanding the availability of its Azure OpenAI Service, which gives clients access to the most advanced AI models developed by ChatGPT maker OpenAI LP. According to the company, Azure OpenAI is now available in additional regions worldwide.

Microsoft’s ND H100 v5 VMs appear to be among its most potent computational offerings to date. They’re already available in Azure’s East and West US regions, and they’re outfitted with eight of Nvidia’s most powerful H100 GPUs.

Nvidia introduced the H100 GPUs last year, stating that they are built on the company’s new Hopper architecture. They can provide an order of magnitude more processing power than the Nvidia A100 GPUs used to train the original ChatGPT.

The benefit of the H100 GPUs is that they offer “significantly faster AI model performance” than the earlier generation of GPUs, Nvidia stated at the time of release. The ND H100 v5 VMs pair the GPUs with Intel Corp.’s latest 4th Gen Intel Xeon Scalable CPUs and low-latency networking via Nvidia’s Quantum-2 CX7 InfiniBand technology. They also integrate DDR5 memory, which provides faster data transfer speeds for the largest AI training datasets, and PCIe Gen5, which offers 64 GB/sec of bandwidth per GPU.
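A quick back-of-envelope calculation, using only the figures quoted above (eight H100 GPUs per VM, 64 GB/sec of PCIe Gen5 bandwidth each), shows the aggregate host-to-GPU bandwidth a single ND H100 v5 VM would offer:

```python
# Back-of-envelope check of the interconnect figures quoted in the article.
GPUS_PER_VM = 8        # H100 GPUs per ND H100 v5 VM
PCIE_GEN5_GB_PER_SEC = 64  # PCIe Gen5 bandwidth per GPU, per the article

aggregate_pcie = GPUS_PER_VM * PCIE_GEN5_GB_PER_SEC
print(f"Aggregate host-to-GPU PCIe bandwidth: {aggregate_pcie} GB/sec")
# -> Aggregate host-to-GPU PCIe bandwidth: 512 GB/sec
```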

Microsoft makes some bold promises regarding the ND H100 v5 instances’ performance, including a six-fold speedup in matrix multiplication operations and 1.8 times faster inference on large language models such as BLOOM 175B.

Organizations can register their interest in the new VMs, and the company promises to scale the offering to hundreds of thousands of H100 GPUs for customers by next year.

Because the ND H100 v5 VMs are designed for generative AI workloads, Microsoft is also expanding the Azure OpenAI Service, which provides direct access to OpenAI’s cutting-edge AI models, including GPT-4 and GPT-3.5 Turbo. The Azure OpenAI Service, launched in January, was initially available only in Azure’s East United States, France Central, South Central United States, and West Europe regions. It has now been expanded to East United States 2, UK South, Canada East, and Japan East, according to the company.
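For developers, access to these models goes through an Azure OpenAI resource and a model deployment rather than OpenAI’s own endpoint. The sketch below, using the `AzureOpenAI` client from the official `openai` Python library, shows the general shape of a chat call; the endpoint, key, and deployment name (`my-gpt-4-deployment`) are placeholders for values from your own Azure resource, not anything specified in the article:

```python
def build_messages(user_prompt: str) -> list[dict]:
    """Assemble a minimal chat payload for the chat completions API."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    import os
    from openai import AzureOpenAI  # pip install openai

    # Endpoint, key, and deployment name are placeholders for your own resource.
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )
    response = client.chat.completions.create(
        # In Azure, "model" is your deployment name, not the model family name.
        model="my-gpt-4-deployment",
        messages=build_messages("Summarize this quarter's support tickets."),
    )
    print(response.choices[0].message.content)
```

Note the design difference from the plain OpenAI API: Azure routes requests to a named deployment inside your resource, which is how region availability, as discussed above, becomes relevant to application code.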

Holger Mueller of Constellation Research Inc. said this latest development shows how keen Microsoft has been to bolster its AI solutions and make them robust and widely available. “It wants to make OpenAI available to more customers and it’s offering Nvidia-based virtual machines alongside it, for customers to run and monetize their custom AI models. The speed of the rollout, along with availability and stability, is key for customer adoption, as many enterprises are tied to specific regions due to data residency and security requirements,” he added.

According to Microsoft, the Azure OpenAI Service is already utilized by over 11,000 commercial customers and is growing at a rate of roughly 100 new users daily.

Nidhi Chappell, General Manager of Azure HPC, AI, SAP, and Confidential Computing at Microsoft, said, “As part of this expansion, we are increasing the availability of GPT-4, our most advanced generative AI model, across the new regions. This enhancement allows more customers to leverage GPT-4’s capabilities for content generation, document intelligence, customer service and beyond.”