Google has put NVIDIA GPUs to work powering AI, HPC, and visualization workloads, and now claims the “broadest GPU availability” of any cloud. In addition to offering T4 instances in its U.S. and Netherlands GCP regions, Google’s recent beta launch extends the GPU option to datacenters in Brazil, India, Singapore, and Tokyo, marking the first time GPUs have been offered in those Google Cloud Platform regions.
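As a rough sketch of what launching one of these beta instances looks like, the `gcloud` command below requests a T4 accelerator; the instance name, zone, machine type, and image family here are illustrative placeholders, not values from the announcement:

```shell
# Hypothetical example: instance name, zone, machine type, and image
# family are placeholders -- adjust to a region where T4s are available.
gcloud compute instances create t4-demo \
  --zone=asia-southeast1-b \
  --machine-type=n1-standard-8 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --image-family=common-cu110 \
  --image-project=deeplearning-platform-release \
  --maintenance-policy=TERMINATE \
  --metadata=install-nvidia-driver=True
```

GPU instances require `--maintenance-policy=TERMINATE` because accelerator-attached VMs cannot live-migrate during host maintenance.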
At a press event in Beijing, NVIDIA released the latest version of its Virtual GPU software, v7.x, for virtualization platforms. An NVIDIA senior solution architect for GPU virtualization gave a detailed introduction to the performance, features, and operation of the latest release.
Today, GPUs have become a significant capital investment and productivity tool for enterprises. Virtualization helps enterprises allocate GPU resources fully across their users. Earlier, the GRID virtual GPU (vGPU) platform, combined with the VMware Horizon vDGA platform, enabled virtualization for Tesla GPUs.
Based on Nvidia’s Turing architecture, the T4 is the successor to the P4 Pascal-based chips, introduced in 2016. Incorporating 320 Turing Tensor Cores and 2,560 CUDA cores, the T4 claims a theoretical 8.1 teraflops of single-precision performance, 65 teraflops of mixed-precision, 130 teraflops of INT8 and 260 teraflops of INT4 performance. Google notes that the T4’s 16 GB of memory benefits both large training models and the running of many smaller inference models.
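The quoted peak figures follow a simple pattern on the Tensor Cores: each halving of operand precision doubles the theoretical throughput (65 → 130 → 260). A small Python sketch makes the arithmetic explicit; the dictionary keys are just illustrative labels:

```python
# T4 peak-throughput figures quoted above (teraflops / tera-ops).
t4_peak = {
    "fp32": 8.1,    # CUDA cores, single precision
    "mixed": 65.0,  # Tensor Cores, FP16 mixed precision
    "int8": 130.0,  # Tensor Cores, 8-bit integer
    "int4": 260.0,  # Tensor Cores, 4-bit integer
}

# Each halving of integer precision doubles the Tensor Core figure.
assert t4_peak["int8"] == 2 * t4_peak["mixed"]
assert t4_peak["int4"] == 2 * t4_peak["int8"]

# Relative speedup of INT8 inference over FP32, per the quoted peaks.
speedup = t4_peak["int8"] / t4_peak["fp32"]
print(f"INT8 vs FP32 theoretical peak: {speedup:.1f}x")
# prints "INT8 vs FP32 theoretical peak: 16.0x"
```

These are theoretical peaks; realized inference throughput depends on the model, batch size, and memory bandwidth.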
The Nvidia T4 is aimed at machine-learning and data-visualization applications, as well as other GPU-accelerated workloads. The card features GDDR6 memory, which improves both performance and power efficiency.
Its highly efficient design also makes it cost-effective to install additional GPUs in a server. Key features include support for deep learning inference workflows, as well as new RT Cores that enable real-time ray-tracing acceleration and batch rendering.
The most important update in vGPU v7.x is support for allocating multiple physical GPUs to a single VM. If a user needs more GPU resources and computing power, multiple GPUs can be assigned within the virtualization platform to meet those high-compute requirements.
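Inside a VM granted several GPUs, application code still has to distribute work across them. The pure-Python sketch below (no CUDA calls) illustrates the idea: enumerate the devices the guest exposes, then shard a batch across them. Reading `CUDA_VISIBLE_DEVICES` as the device list is an assumption about the guest environment, not something specified by vGPU:

```python
import os

def visible_gpus():
    """Return the GPU indices the VM exposes to applications.
    Assumption: CUDA_VISIBLE_DEVICES lists them; default to GPU 0."""
    raw = os.environ.get("CUDA_VISIBLE_DEVICES", "0")
    return [int(i) for i in raw.split(",") if i.strip()]

def shard_batch(batch, gpus):
    """Split a batch of work items round-robin across the assigned GPUs."""
    shards = {g: [] for g in gpus}
    for i, item in enumerate(batch):
        shards[gpus[i % len(gpus)]].append(item)
    return shards

# Example: a VM granted four GPUs under vGPU v7.x shards eight items.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
print(shard_batch(list(range(8)), visible_gpus()))
# prints {0: [0, 4], 1: [1, 5], 2: [2, 6], 3: [3, 7]}
```

A real workload would replace the lists with per-GPU inference streams, but the allocation logic is the same.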
Going forward, NVIDIA will continue to work closely with VMware, leveraging the strengths of both companies to meet the needs of the market.