New graphics core, RTX benefits
In this review we take another of NVIDIA's new GeForce RTX 30 series video cards for a spin. This time it is a name with a lot of responsibility behind it: the RTX 3060, a card aimed at a particularly large market share.
We are at the beginning of the summer of 2021, and during this period every year the hardware world is supposed to "thaw" with refreshes, new products, and promising technologies. This time, due to the crazy and unconventional market situation, we arrive at the launch of additional hardware (albeit slightly late) when market conditions do not exactly promise a soft landing. Still, the world keeps turning even when obtaining hardware becomes particularly challenging.
Earlier this year we saw NVIDIA's GeForce RTX 30 family expand with another product: the GeForce RTX 3060. From the name of this video card, it is clear that it continues a tradition of especially popular graphics cards for the mid-range of the gaming market. This segment usually captures huge market shares, significantly larger than those of more expensive graphics cards. By "expensive", we usually mean video cards that cost more than $350.
The RTX 2060 gained great popularity, as did the GTX 1060 and GTX 960 before it. According to hardware surveys such as Steam's, the user share of these video cards (and of the GTX 970, which to be fair also came in under $350) has typically been huge.
The GeForce RTX 3060 is a fresh graphics card from NVIDIA's Ampere series (the new architecture), and it positions itself at a lower price and performance point than the RTX 3060 Ti launched at the end of last year (can you believe half a year has passed since then?). Its official price is $329, although due to the high demand and the cryptocurrency mining boom, its actual local price is about three times that.
The core of the RTX 3060 includes 3,584 CUDA cores along with 112 Tensor cores for machine learning and 28 RT (ray tracing) cores. For comparison, the Ti version has 4,864 CUDA cores, 152 Tensor cores, and 38 RT cores. These are quite large hardware differences between the two cards; in fact, I do not remember the last time the hardware gap was this significant between a card that officially costs $400 and one that costs only $70 less.
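To put the gap in numbers, here is a quick sketch using only the core counts quoted above; as it turns out, the Ti carries the same proportional advantage in every unit type:

```python
# Core counts quoted in the review for the two cards.
rtx_3060 = {"CUDA": 3584, "Tensor": 112, "RT": 28}
rtx_3060_ti = {"CUDA": 4864, "Tensor": 152, "RT": 38}

for unit in rtx_3060:
    extra = rtx_3060_ti[unit] / rtx_3060[unit] - 1
    print(f"{unit}: the Ti has {extra:.0%} more")
# Each unit type comes out to roughly 36% more on the Ti.
```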
There is also a difference in favor of the RTX 3060 in terms of graphics memory. While the RTX 3060 Ti pairs a 256-bit memory controller with 8GB of GDDR6 memory, the RTX 3060 carries 12GB on a 192-bit controller. It is quite possible this is connected to the launch of the Radeon RX 6700 XT, a slightly more expensive video card that also carries 12GB of graphics memory. With a 192-bit controller, the choice is sometimes between 6GB and 12GB, simply due to the memory chip capacities that can be attached per memory channel. Today, 12GB is the significantly smarter choice over 6GB, no doubt about it.
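The controller-width arithmetic above can be sketched quickly. Assuming one 32-bit channel per memory chip with all chips the same density, and chip capacities of 1GB or 2GB (our assumption for illustration, not a figure from GIGABYTE or NVIDIA):

```python
def capacity_options(bus_width_bits, chip_sizes_gb=(1, 2)):
    """Possible total VRAM for a bus built from 32-bit GDDR6 channels,
    one chip per channel, all chips of the same density."""
    channels = bus_width_bits // 32
    return [channels * size for size in chip_sizes_gb]

print(capacity_options(192))  # 192-bit -> 6 channels -> [6, 12] GB
print(capacity_options(256))  # 256-bit -> 8 channels -> [8, 16] GB
```

This is why a 192-bit card lands on either 6GB or 12GB: the bus width fixes the channel count, and the chip density per channel does the rest.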
Specifically, the video card in this review is the GAMING model from hardware maker GIGABYTE. It ships at a core frequency of 1,320MHz, which is the default set by NVIDIA. The card is cooled by GIGABYTE's advanced Windforce cooler with three fans. The printed circuit board itself is relatively short, and the heatsink extends well beyond it, so some of the hot air exits from the back of the cooler while the rest exits from the sides.
This video card uses a single 8-pin PCI Express power connector. Its power envelope is set at 170W. This is a strictly enforced limit, and raising the power consumption beyond it is not possible without hardware modifications, although the card may occasionally draw a few watts more under maximum load. Because the printed circuit board is relatively short, the connector sits in the middle of the card.
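As a sanity check on the single 8-pin connector: the PCI Express specification rates the x16 slot at 75W and an 8-pin auxiliary connector at 150W, so the 170W envelope fits with room to spare. A minimal sketch of the budget:

```python
SLOT_W = 75        # PCI Express x16 slot power limit (per spec)
EIGHT_PIN_W = 150  # 8-pin PCIe auxiliary connector rating (per spec)
TDP_W = 170        # the card's configured power envelope

available = SLOT_W + EIGHT_PIN_W
print(f"available {available}W vs envelope {TDP_W}W, "
      f"headroom {available - TDP_W}W")
assert TDP_W <= available  # one 8-pin is enough for this card
```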
This video card has four display outputs: two HDMI 2.1 and two DisplayPort 1.4a. All four ports can drive displays simultaneously.
In this review it is also important for us to address DLSS 2.0 and the various RTX features that are unique to NVIDIA.
When NVIDIA launched the RTX 20 series, the company announced a significant change in its approach to features, dedicating hardware capability to specific goals. In an unusual move, instead of handling certain visual features as a byproduct of general-purpose compute, NVIDIA decided to embed dedicated processors within the core so they could be used for very specific computational purposes.
As a result, alongside the familiar CUDA cores we got Tensor and RT cores. This step completely changed NVIDIA's approach to how seriously it pursues performance improvements across a variety of games and applications.
Since the RTX 20 cards launched, Tensor cores have been used by the deep learning and machine learning world to perform various training tasks faster. At launch, the differences between the RTX 20 series and the GTX 10 series appeared relatively small, but once code paths were updated, users began to see performance gains of tens of percent. The RTX 30 series brings an even larger helping of those Tensor cores, and even with challenging product availability, machine learning users running RTX 30 cards enjoy tens of percent more performance than the previous generation.
Another technology, and in my opinion the most important of NVIDIA's entire new feature set, is DLSS (Deep Learning Super Sampling). The idea is quite simple: NVIDIA trains AI models on the graphics of a particular game, and that post-training knowledge then helps the graphics card reconstruct detail in the game to improve performance, sparing the card from rendering those details itself.
In practice, this means the game is rendered at a lower resolution than the one displayed, and AI helps it appear as if it is shown at the screen's native resolution, or close to it. Adoption of the technology relies mainly on the partnership between NVIDIA and the studio behind each game. When DLSS launched, the number of titles supporting it was quite minimal, and it was clear that the methodology by which NVIDIA implements the feature needed more work.
When a new version called DLSS 2.0 launched, games that supported it showed a tremendous improvement in performance with minimal damage to image quality. Sometimes there is even an improvement in image quality alongside the improvement in performance. Because of this, we believe DLSS 2.0 is one of the most important performance-enhancing features in the gaming world today.
Games that support DLSS 2.0 include Cyberpunk 2077 and the popular Call of Duty: Warzone.
Here are examples of using DLSS in Warzone. One mode is without DLSS at all; Ultra Performance mode uses only a quarter of the native resolution; Performance mode uses half; and Quality mode uses 75% of the native resolution. Notice the performance figures in the upper-right corner of the screen. Performance mode adds close to 50% to the frame rate, while visually it is very hard to call the result a poor image, or significantly worse than with DLSS fully off.
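To make those resolution figures concrete, here is a minimal sketch that treats the quoted fractions as per-axis scale factors (our reading of the review; NVIDIA's exact per-mode scale factors differ somewhat and are documented separately), applied to a 4K native resolution as an example:

```python
import math

NATIVE = (3840, 2160)  # 4K native resolution, chosen as an example

# The review's quoted fractions, interpreted here as per-axis scale
# factors -- an assumption for illustration, not NVIDIA's exact values.
modes = {"Ultra Performance": 0.25, "Performance": 0.50, "Quality": 0.75}

for name, scale in modes.items():
    w = math.floor(NATIVE[0] * scale)
    h = math.floor(NATIVE[1] * scale)
    print(f"{name}: renders {w}x{h}, upscaled to {NATIVE[0]}x{NATIVE[1]}")
```

The point of the sketch is the workload: in Performance mode the card shades only a quarter of the pixels it would at native 4K, which is where the frame-rate gain comes from.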
The performance improvement is absolutely dramatic. This is not magic; technically there is a decrease in image quality, at least in Performance mode. In Quality mode, however, the image quality is great and the performance improvement is still tremendous. DLSS 2.0 is a very promising technology that is already taking hold, and in the near future we will see many more titles using it. We know that every upcoming Call of Duty and Battlefield title (at least for now) is going to use DLSS.
Another technology with dedicated hardware in the video card is real-time ray tracing (RTRT for short). The role of the RT cores is to compute and process advanced ray tracing for realistic lighting. I think one of the games that best illustrates this technology is actually Minecraft.
Taking simple textures and bouncing light off them at different intensities reaches its peak in this title. Of course there are other games that benefit from hardware-accelerated ray tracing; Cyberpunk 2077 is among them.
Ray-traced shading is very demanding. Although the RTX 3060 supports the technology, it is sometimes better to use standard shading without RTRT and stick to DLSS. In fact, we recommend enabling DLSS whenever possible, and leaving RTRT to more powerful graphics cards, such as the RTX 3070 and up. Admittedly, there are cases where lighter games make use of RTRT, and then using the technology on an RTX 3060 can make a lot of sense.
So far, we have mostly talked about NVIDIA's new technologies without attaching many performance tests to them. Now that the RTX feature set is becoming more and more common in the gaming world, we will start getting into performance tests and image quality comparisons in different situations.
From here, we move on to the performance tests on the next page.