The most powerful GPU core from the Santa Clara company will serve as the basis for the world's most powerful system for artificial-intelligence development.
The GP100 core, which NVIDIA unveiled in June 2016, is a particularly impressive technological achievement: it delivers not only close to 10 TFLOPS (ten trillion calculations per second) in standard 32-bit single-precision mode, but also almost 5 TFLOPS at 64-bit double precision, which is essential for certain applications in the world of high-performance computing; in other words, for the large and powerful machines that must perform particularly complex calculations in real time, commonly known simply as 'supercomputers'.
These unique capabilities meant that the core was originally launched only as part of the Tesla P100, which is intended not for graphical display but for accelerating computation in large, dense processing systems; and although we recently saw the first GP100-based graphics product as well, it too was a Quadro-family card, designed for professionals who know exactly what they need and are willing to pay accordingly.
The GP100 core has one more impressive capability: close to 20 TFLOPS (that is, 20,000,000,000,000 calculations every single second) at 16-bit half precision, a mode increasingly used in machine learning and deep learning, which lets computers analyze enormous amounts of new data and identify the connections within it with minimal intervention from human operators. It now seems this capability will help NVIDIA and its formidable Tesla P100 units break into the list of the ten most powerful computers in the world, at a higher position than any other system based on the latest Pascal architecture.
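To illustrate what 'half precision' means in practice, here is a minimal NumPy sketch; NumPy's float16 type is used purely for illustration (GPU FP16 arithmetic is a separate hardware path), showing both the halved storage cost and the reduced precision compared with standard 32-bit values:

```python
import numpy as np

# FP32 vs FP16: half the storage per value, but far fewer significant digits.
x32 = np.float32(3.14159265)
x16 = np.float16(x32)

print(np.dtype(np.float32).itemsize)  # 4 bytes per value
print(np.dtype(np.float16).itemsize)  # 2 bytes per value
print(float(x16))                     # 3.140625, the nearest FP16 value to pi
```

The halved memory footprint is exactly why half precision doubles effective throughput on hardware like the GP100: twice as many values fit in the same registers and memory bandwidth.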
That prestigious achievement belongs to TSUBAME 3.0, the formidable new computer at the Tokyo Institute of Technology, which aspires to become the most advanced and powerful platform in the world for artificial-intelligence development (via deep learning and neural networks), with 47 PFLOPS (that is, 47,000 trillion calculations per second) of half-precision performance, and about 23 PFLOPS of standard single-precision performance, which should be accurate enough to count toward the TOP500 supercomputer list that is published twice a year.
TSUBAME 3.0 will start operating in the middle of this year, joining the TSUBAME 2.5 platform already running at Tokyo Tech, which is based on NVIDIA's Kepler-generation processing units and serves similar applications. Running the two arrays together could provide even more staggering performance of about 64 PFLOPS for 16-bit computing, further strengthening the institute's position as the leading force in this emerging processing field, at least until the new supercomputer at the AIST institute (also in Tokyo) arrives, which is expected to deliver about 130 PFLOPS for machine-learning applications sometime early next year.
TSUBAME 3.0 will achieve its enormous performance thanks to 540 processing nodes, each with a pair of up-to-date Xeon E5-2680 v4 processors responsible for running the operating system and the relevant development applications, plus four Tesla P100 accelerators communicating over NVLink, alongside dynamic memory shared by all elements within the node. Communication between the hundreds of nodes is based on Intel's Omni-Path interconnect, with a bandwidth of 100 gigabits per second.
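A quick back-of-the-envelope calculation shows how the node count adds up to roughly the quoted aggregate figure. This is only a sketch: the per-GPU FP16 peak of ~21.2 TFLOPS is an assumption based on the P100's published SXM2 peak, not a figure from the article (the small remaining gap to the quoted 47 PFLOPS presumably comes from the CPUs and rounding):

```python
# Back-of-the-envelope aggregate FP16 throughput for TSUBAME 3.0.
# The per-GPU peak below is an assumed figure (~21.2 TFLOPS FP16 for a
# NVLink/SXM2 Tesla P100), not taken from the article itself.
nodes = 540
gpus_per_node = 4
fp16_tflops_per_gpu = 21.2  # assumed peak

total_fp16_pflops = nodes * gpus_per_node * fp16_tflops_per_gpu / 1000
print(f"~{total_fp16_pflops:.1f} PFLOPS FP16")  # ~45.8, close to the quoted 47
```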
This is a large and massive array, but it should be noted that NVIDIA's GP100 cores are also among the most power-efficient processing products ever created, so there is a good chance TSUBAME 3.0 will enter not only the list of the ten most powerful supercomputers but also the list of the ten 'greenest' ones in the field. That matters no less in the long run, both for operators' energy costs and as part of the vision of moving to exascale systems (with performance of 1,000,000 trillion floating-point calculations per second) by the end of the decade, which will require such systems to be as cost-effective as possible in order to be truly sensible and practical.
Later in 2017, it will be interesting to see whether AMD attempts to challenge some of these innovative NVIDIA capabilities with its new Zen architecture and Vega GPUs, or whether the green camp's developer will remain unrivaled in parallel processing on an unprecedented scale, as it has been so far.