A monster that words can hardly describe: NVIDIA unveils its Volta core

With more than 20 billion transistors, a new production process and enormous physical dimensions, the green chip developer hopes to revolutionize the world of machine learning and artificial intelligence - and we are already fantasizing about the next generation of gaming cards

We hoped to get some hints about the new and intriguing Volta architecture at NVIDIA's annual GTC event in California - but we have to admit that we did not really expect the full, massive and impressive official disclosure we actually received. The developer from Santa Clara once again managed to leave us nearly speechless when it announced its GV100 core, which is meant to take the world of high-performance computing to a completely different level. Again.

The monster we've been waiting for since 2014

The Volta era will begin with a new production process from TSMC, NVIDIA's long-time partner, one that even carries the company's name - 12nm FFN, initials for 3D FinFET transistors and for NVIDIA. Beyond the production process, which should be more efficient than the Taiwanese giant's 16-nanometer process (and apparently also than the 14-nanometer processes of competitors such as Samsung and GlobalFoundries), we get a completely new type of unit: Tensor Cores, which join the well-known CUDA cores and specialize in multiplying and accumulating small 4x4 matrices, storing the results in 16-bit or 32-bit floating-point format as needed for further processing by the rest of the cores.
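
To make the idea concrete, here is a minimal software sketch of what a single Tensor Core computes in one clock: a fused multiply-accumulate D = A x B + C on 4x4 matrices with half-precision inputs and a 32-bit accumulator. The kernel and its names are our own illustration, not NVIDIA's hardware interface.

```cuda
#include <cuda_fp16.h>

// Illustrative only: software emulation of the operation a Tensor Core
// performs in hardware as a single fused instruction. Launch with a
// 4x4 thread block; one thread computes one output element.
__global__ void tensor_op_4x4(const __half* A, const __half* B,
                              const float* C, float* D) {
    int row = threadIdx.y;                 // 0..3
    int col = threadIdx.x;                 // 0..3
    float acc = C[row * 4 + col];          // start from the accumulator C
    for (int k = 0; k < 4; ++k)            // 4-wide dot product per element
        acc += __half2float(A[row * 4 + k]) * __half2float(B[k * 4 + col]);
    D[row * 4 + col] = acc;                // FP32 result (FP16 also possible)
}
```

Each such operation amounts to 64 multiply-accumulates, or 128 floating-point operations, per core per clock - which is the root of the dramatic throughput figures below.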

Tensor calculations are the name of the new game

This arrangement should be particularly effective and useful for machine learning and deep learning applications, as well as the artificial intelligence applications that are in many cases derived from them, all of which rely on parallel processing that is as wide and as fast as possible. NVIDIA declares that the massive GV100 core delivers up to 120 TFLOPS in tensor calculations, up to 30 TFLOPS in 16-bit calculations, up to 15 TFLOPS in standard 32-bit calculations and up to 7.5 TFLOPS in 64-bit calculations - almost 50 percent more than the full GP100 core, which until now was the most powerful parallel processing element in the world. Stunning!
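
These headline numbers can be checked with simple arithmetic. The core counts below are our assumption, based on the widely reported Tesla V100 configuration (5,120 active CUDA cores out of 5,376 on the die, and 640 Tensor Cores), since the quoted figures give only the totals:

```cuda
#include <cstdio>

// Back-of-the-envelope verification of the quoted peak throughput.
int main() {
    const double clock_hz = 1.455e9;  // quoted boost frequency
    const int cuda_cores = 5120;      // assumed active cores in Tesla V100
    const int tensor_cores = 640;     // assumed Tensor Core count

    // Each CUDA core performs 1 FMA = 2 floating-point ops per clock.
    double fp32 = cuda_cores * 2 * clock_hz;          // ~14.9 TFLOPS
    double fp16 = fp32 * 2;                           // ~29.8 TFLOPS (packed)
    double fp64 = fp32 / 2;                           // ~7.5 TFLOPS
    // Each Tensor Core performs a 4x4x4 matrix FMA = 128 ops per clock.
    double tensor = tensor_cores * 128.0 * clock_hz;  // ~119 TFLOPS

    printf("FP64 %.1f | FP32 %.1f | FP16 %.1f | Tensor %.1f TFLOPS\n",
           fp64 / 1e12, fp32 / 1e12, fp16 / 1e12, tensor / 1e12);
    return 0;
}
```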

Fantastical by any measure whatsoever

In order to reach these fantastic performance figures, NVIDIA has decided to use one of the largest silicon chips ever seen on the market - with a total area of 815 square millimeters (about 33 percent more than the GP100), housing up to 5,376 CUDA cores, 672 Tensor Cores, 128KB of L1 cache per multiprocessor, 6MB of L2 cache (both shared across all the processing units for maximum efficiency) and 16GB of HBM2 memory in four stacks, which provide a gigantic effective bandwidth of 900GB/s - close to the maximum theoretical level of a terabyte per second we were promised for the technology.
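
The 900GB/s figure also falls out of simple arithmetic on the memory configuration; the per-pin data rate below is our assumption, chosen to match the quoted total:

```cuda
#include <cstdio>

// Sanity check of the effective memory bandwidth claim.
int main() {
    const int bus_bits = 4 * 1024;     // four HBM2 stacks, 1,024 bits each
    const double gbps_per_pin = 1.75;  // assumed effective data rate per pin
    double gb_per_s = bus_bits * gbps_per_pin / 8.0;     // bits -> bytes
    printf("Effective bandwidth: ~%.0f GB/s\n", gb_per_s);  // ~896 GB/s
    // At 2Gbps per pin the same bus would reach the 1TB/s theoretical cap.
    return 0;
}
```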

Almost a full PetaFLOP of machine learning performance in one single server package

Packing more than 21 billion transistors onto one giant core (in the Tesla V100 processing unit, the first practical product based on the GV100) running at a phenomenal 1,455MHz frequency to deliver the groundbreaking performance mentioned above is one thing - but achieving it while maintaining a power envelope of only 300 watts, the same as last year's Tesla P100 units based on the GP100 core, is something that leaves us truly open-mouthed. NVIDIA has succeeded in improving its processing efficiency by some 50 percent, at least on paper, over a core that was until now the most prominent in this field.

Detailed technological comparison, from NVIDIA's official website

NVIDIA will offer the V100 in advanced processing arrays for the worlds of servers, research, science and HPC, with four or eight processing units (connected via the NVLink 2.0 interface, delivering 25 gigabits per second in each direction for every lane), and price tags ranging from $70,000 to $150,000 - with pre-orders already open for companies and entities that are interested, and a promise of practical availability during the third quarter of 2017.
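
For those keeping score, the per-lane figure adds up to the aggregate interconnect bandwidth as follows - a quick check assuming eight lanes per link and six links per V100 unit, as widely reported:

```cuda
#include <cstdio>

// NVLink 2.0 aggregate bandwidth from the per-lane signaling rate.
int main() {
    const double lane_gbps = 25.0;            // per lane, per direction
    const int lanes_per_link = 8, links = 6;  // assumed V100 configuration
    double per_link = lane_gbps * lanes_per_link / 8.0;  // 25 GB/s one way
    double total = per_link * links * 2;                 // both directions
    printf("Per link: %.0f GB/s each way, aggregate: %.0f GB/s\n",
           per_link, total);                  // 25 GB/s and 300 GB/s
    return 0;
}
```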

Do you have a few tens of thousands of spare dollars? You can enjoy the Tesla V100 before everyone else

GeForce time

Just like the GP100 core before it, the GV100 core is also irrelevant to home consumers looking for record-breaking gaming performance - but NVIDIA will offer additional Volta cores in the not-too-distant future designed specifically for this audience, probably without most of the circuits that provide maximal 64-bit double-precision throughput and without the new Tensor Cores - yet with up to 84 processing clusters and 5,376 processing units (at least in some models), well beyond the maximum number we received in the current generation, which may hint at the degree of potential improvement at various price levels, in a particularly rosy and optimistic world.
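
How rosy? Purely speculative arithmetic on our part, comparing the full GV100 configuration with the largest current-generation gaming core (the full GP102 die, with 3,840 CUDA cores):

```cuda
#include <cstdio>

// Speculative core-count comparison - our assumption, not NVIDIA's roadmap.
int main() {
    const int gv100_cores = 5376;  // 84 multiprocessors x 64 cores
    const int gp102_cores = 3840;  // full Pascal gaming die
    printf("Potential core-count uplift: +%.0f%%\n",
           100.0 * (gv100_cores - gp102_cores) / gp102_cores);  // +40%
    return 0;
}
```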

Is it possible that this year too we will get a jump of dozens of percent in the maximum and effective performance of our graphics cards? The potential definitely exists

NVIDIA did not say a word about Volta in the gaming market and the GeForce family, but last year we saw the announcement of the Tesla P100 units and the GP100 core at the GTC conference in early April - followed by the GTX 1080 and the GTX 1070 at the beginning of May, only one month later - and although we are skeptical about the chances of seeing an exact repeat of that move this year (after all, the flagship GeForce GTX 1080 Ti was launched with great fanfare only about a month ago), there is not a bad chance at all that a new generation of GeForce models with a completely new core is only a few months away from us.

An announcement of new home models in the near future, even if practical availability comes only a few weeks or even months later, would be a winning pre-emptive strike by NVIDIA against AMD's delayed Vega generation, and a move to ensure that Jen-Hsun Huang and his men continue to maintain their significant technological advantage in all categories of parallel processing. The bottom line is that if you think we have already received almost all the surprises 2017 has to offer, you should think again - things are only heating up now.