
NVIDIA increases AI computing power with Volta and Tesla V100

Nvidia is at the forefront of using GPUs for AI and machine learning applications, and has just presented the Volta GPU computing architecture and the Tesla V100 data center GPU.

Artificial intelligence (AI) and machine learning are enormously important technological developments, and they require vast amounts of computing power. Microsoft is currently holding its Build 2017 developer conference, and there is hardly a product on show that does not, in one way or another, integrate AI or machine learning.

One of the best ways to build this kind of high-speed computing infrastructure is to use GPUs, which can be far more efficient than general-purpose CPUs for these workloads.

Nvidia described Volta as the "most powerful in the world", built with 21 billion transistors that deliver the deep learning performance of 100 CPUs. That equates to five times the peak teraflops of the Pascal architecture and 15 times that of Maxwell. According to Nvidia, Volta's performance is four times the improvement that Moore's Law would predict.

According to Jensen Huang, founder and CEO of Nvidia: "Deep learning, AI's groundbreaking approach to creating software that learns, has an insatiable demand for processing power. Thousands of Nvidia engineers have spent three years building Volta not only to help meet this need, but also to allow the industry to realize AI's potential to change everyone's life."

In addition to the Volta architecture, Nvidia also presented the Tesla V100 data center GPU, which incorporates a number of new technologies. As described in Nvidia's announcement, they include:

– Tensor Cores, designed to accelerate AI workloads. Equipped with 640 Tensor Cores, the V100 delivers 120 teraflops of deep learning performance, equivalent to the performance of 100 CPUs.
– A new GPU architecture with over 21 billion transistors. It unites CUDA cores and Tensor Cores within a unified architecture, providing the performance of an AI supercomputer in a single GPU.
– NVLink, the next generation of high-speed interconnect, which links GPUs to each other and GPUs to CPUs with twice the throughput of first-generation NVLink.
– 900 GB/sec HBM2 DRAM, developed in collaboration with Samsung, which achieves 50% more memory bandwidth than previous-generation GPUs, essential to feed Volta's extraordinary processing speed.
– Software optimized for Volta, including CUDA, cuDNN and TensorRT, which leading frameworks and applications can exploit to accelerate AI and research (a minimal example of this programming model is sketched at the end of this article).

Organizations already planning to use Volta in their applications include Amazon Web Services, Baidu, Facebook, Google and Microsoft. As AI and machine learning become increasingly integrated into the technology we use every day, it will probably be solutions such as Volta and the Tesla V100 GPU that power them.
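
In practice, most developers will reach the Tensor Cores through cuDNN, TensorRT or a deep learning framework, but CUDA also exposes the programming model directly. The following is a minimal, hypothetical sketch of that model using the warp-level WMMA API introduced alongside Volta in CUDA 9; the 16x16 tile size, memory layouts, kernel name and launch configuration are illustrative assumptions, not details from Nvidia's announcement.

    // Minimal Tensor Core sketch for Volta (compile with: nvcc -arch=sm_70).
    // Each warp multiplies a 16x16 half-precision tile of A by a 16x16 tile of B
    // and accumulates the result into a 16x16 single-precision tile of C.
    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    __global__ void tensor_core_tile(const half *a, const half *b, float *c) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

        wmma::fill_fragment(c_frag, 0.0f);              // start the accumulator at zero
        wmma::load_matrix_sync(a_frag, a, 16);          // leading dimension of 16
        wmma::load_matrix_sync(b_frag, b, 16);
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag); // D = A*B + C on the Tensor Cores
        wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
    }

    // Illustrative launch with a single warp: tensor_core_tile<<<1, 32>>>(dA, dB, dC);

Libraries such as cuDNN tile large matrix multiplications and convolutions into thousands of warp-level operations like this one, which is how the V100 reaches the quoted 120 teraflops of deep learning throughput.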