With more than 20 billion transistors, a new production process and enormous physical dimensions, the green chip developer hopes to revolutionize the world of machine learning and artificial intelligence - and we are already fantasizing about the next generation of GeForce gaming cards.
We hoped to get some hints about the new and intriguing Volta architecture from NVIDIA at its annual GTC event in California - but we have to admit we did not really expect the full, massive and impressive official unveiling we actually received. The developer from Santa Clara once again managed to leave us nearly speechless when it announced its GV100 core, which is meant to take the world of high-performance computing to a completely different level. Again.
The Volta era at NVIDIA begins with a new production process from TSMC, its long-time partner, one that even carries the company's name - 12nm FFN, the initials standing for 3D FinFET transistors and for NVIDIA. The process is supposed to be more efficient than the Taiwanese giant's 16-nanometer process (and apparently also than the 14-nanometer processes of competitors such as Samsung and GlobalFoundries). On top of the new process comes an architecture that adds entirely new Tensor Cores alongside the familiar CUDA cores, built specifically to multiply and accumulate small 4-by-4 matrices and to store the results in 16-bit or 32-bit floating-point cells, as needed for further processing by the rest of the cores.
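To make the Tensor Core operation concrete, here is a minimal sketch (our own illustration, not NVIDIA's API) of the fused step each core performs per clock: D = A × B + C, with 4×4 half-precision inputs and an accumulator kept in FP16 or FP32.

```python
import numpy as np

def tensor_core_mma(a_fp16, b_fp16, c, accumulate_fp32=True):
    """Emulate one 4x4 mixed-precision multiply-accumulate step.

    a_fp16, b_fp16: 4x4 FP16 input matrices.
    c: 4x4 accumulator matrix (FP16 or FP32).
    The products of FP16 inputs are summed into the wider accumulator
    to limit rounding error - the key idea behind mixed precision.
    """
    acc_dtype = np.float32 if accumulate_fp32 else np.float16
    return (a_fp16.astype(acc_dtype) @ b_fp16.astype(acc_dtype)
            + c.astype(acc_dtype))

a = np.ones((4, 4), dtype=np.float16)
b = np.ones((4, 4), dtype=np.float16)
c = np.zeros((4, 4), dtype=np.float32)
d = tensor_core_mma(a, b, c)   # each element sums 4 products of 1.0
```

On the real hardware this whole step is a single instruction; the emulation above only mirrors the data flow and precisions involved.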
This design should be particularly effective and useful for machine learning and deep learning applications, as well as the artificial intelligence applications that in many cases build on them, all of which rely on simple parallel processing that is as wide and as fast as possible. NVIDIA announces that the massive GV100 core delivers up to 120 TFLOPS in Tensor Core operations, up to 30 TFLOPS in 16-bit calculations, up to 15 TFLOPS in standard 32-bit calculations and up to 7.5 TFLOPS in 64-bit calculations - almost 50 percent more than the full GP100 core, until now the most powerful parallel processing element in the world. Stunning!
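The quoted peaks can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes the Tesla V100 configuration (80 active multiprocessors, 8 Tensor Cores and 64 FP32 CUDA cores each) at the 1,455MHz boost clock - assumed figures for illustration, not an official formula.

```python
BOOST_HZ = 1.455e9                 # assumed boost clock: 1,455 MHz

# Each Tensor Core performs a 4x4x4 multiply-accumulate per clock:
# 64 multiplies + 64 adds = 128 floating-point operations.
tensor_cores = 80 * 8              # 8 Tensor Cores per SM -> 640
tensor_tflops = tensor_cores * 128 * BOOST_HZ / 1e12

# Each FP32 CUDA core performs one fused multiply-add (2 ops) per clock.
cuda_cores = 80 * 64               # 64 FP32 cores per SM -> 5,120
fp32_tflops = cuda_cores * 2 * BOOST_HZ / 1e12

print(round(tensor_tflops))        # 119 -> quoted as "up to 120 TFLOPS"
print(round(fp32_tflops, 1))       # 14.9 -> quoted as "up to 15 TFLOPS"
```

The numbers land within rounding distance of the official figures, which suggests the marketing peaks are straight clock-times-cores arithmetic.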
To reach these fantastic performance figures, NVIDIA has decided to build one of the largest silicon chips the market has ever seen - with a total area of 815 square millimeters (about 33 percent more than the GP100), housing up to 5,376 CUDA cores, 672 Tensor Cores, 128KB of L1 cache per multiprocessor, 6 megabytes of L2 cache (shared by all the processing units for maximum efficiency) and 16 gigabytes of HBM2 memory in four stacks, delivering a gigantic effective bandwidth of 900GB/s - close to the theoretical maximum of a terabyte per second we were promised from the technology.
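The memory bandwidth figure also checks out from the stack configuration. The pin speed below is our assumption (roughly 1.75Gb/s per pin, a commonly cited HBM2 rate), not a number from the announcement.

```python
# Rough HBM2 bandwidth check for four stacks, each with a 1,024-bit
# interface, at an assumed ~1.75 Gb/s per pin.
stacks = 4
bus_bits = stacks * 1024                     # 4,096-bit total interface
pin_rate_gbps = 1.75                         # assumed gigabits/sec per pin
bandwidth_gb_s = bus_bits * pin_rate_gbps / 8  # bits -> bytes

print(round(bandwidth_gb_s))                 # 896 -> marketed as ~900 GB/s
```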
Packing more than 21 billion transistors onto one giant core (in the Tesla V100 processing unit, the first practical product based on the GV100) running at a phenomenal 1,455MHz to achieve the groundbreaking performance mentioned above is one thing - but doing so while maintaining a power envelope of only 300 watts, the same as last year's GP100-based Tesla P100 units, is something that leaves us truly open-mouthed. NVIDIA has managed to improve its processing efficiency by at least 50 percent, at least on paper, over Pascal, which was itself the standout in this field until now.
NVIDIA will offer the Tesla V100 in advanced processing systems for the server, research, science and HPC worlds, with four or eight processing units connected over the NVLink 2.0 interface (25GB/s in each direction per link), and price tags ranging from $70,000 to $150,000 - with early purchase already open for companies and organizations that are interested, and a promise of practical availability during the third quarter of 2017.
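For a sense of what those NVLink 2.0 figures add up to per GPU, assuming the six links per GV100 that NVIDIA described for the Tesla V100:

```python
# Aggregate NVLink 2.0 bandwidth per GPU (assuming six links per GV100).
links = 6
per_direction_gb_s = 25            # GB/s in each direction, per link
total_gb_s = links * per_direction_gb_s * 2   # both directions combined

print(total_gb_s)                  # 300 GB/s total bidirectional bandwidth
```

That is several times what PCI Express 3.0 x16 offers, which is why these multi-GPU systems lean on NVLink for inter-GPU traffic.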
Just like the GP100 core before it, the GV100 core is irrelevant to home consumers looking for record gaming performance - NVIDIA will offer additional Volta cores in the not-too-distant future designed specifically for that audience, probably without most of the circuits that provide maximum 64-bit double-precision performance and without the new Tensor Cores - but with fast memories (at least in some models) and up to 84 processing clusters with 5,376 processing units, roughly 40 percent more than the maximum we received in the Pascal generation - which may hint at the degree of potential improvement at the various price levels, in a particularly rosy and optimistic scenario.
NVIDIA did not say a word about Volta in the gaming market and the GeForce family, but last year we saw the announcement of the Tesla P100 units and the GP100 core at the GTC conference in early April, followed by the GTX 1080 and GTX 1070 at the beginning of May, only one month later - and although we are skeptical about the chances of an exact repeat of that move this year (after all, the flagship GeForce GTX 1080 Ti was launched with great fanfare only about a month ago), there is still a decent chance that a new generation of GeForce models with a completely new architecture is only a few months away from us.
Announcing new home models in the near future, even if practical availability comes only weeks or even months later, would be a winning pre-emptive strike by NVIDIA against AMD's delayed Vega generation, and a move to ensure that Jen-Hsun Huang and his people continue to maintain their significant technological advantage in every category of parallel processing. The bottom line is that if you think we have already received almost all the surprises 2017 has to give us, you should think again - things are only heating up now.