Is it possible that the RX 6800XT is faster than the 3090? - Page 11 - Video Cards - HWzone Forums



nec_000

Quote of ch33

I rely on news I read and on what these same companies have told outlets such as HWUB. I am not under the impression that they will lower the prices of their third-party models once there is stock, because that is not the reason they give for the high prices. I assume those 3,500 shekels are for a reference model; those actually sell for $650, if anyone can find one in stock.

Niuag also has the 3080 for less than 3,000, so what exactly is the point here? You cannot get any of them at this price unless you use bots.


Quote of yoavke

This claim was dropped when you performed an AGESA update and received a 10 FPS upgrade where we estimated there was no CPU limitation.

 

 

 

Regarding your quoting of lanzar's words above, here they are in full:

[attached screenshot: image.png.bc2361e5545504f14a8985307053bd6c.png]

 

We learned this week (somewhat surprisingly) that at 1440p, in a significant portion of titles, we run into a CPU bottleneck.

Obviously on 10th-gen Intel parts, but we sometimes saw it happen on Ryzen 5000 as well. As we discussed, the new generation of accelerators,

Ampere and RDNA2, is such a powerful generation that it finishes its frame work with time to spare even at 1440p,

which is considered a heavy resolution. And still....

Not in all cases, but in a significant number of cases.

 

Therefore the CPU performance improvement, after the AGESA update you point to, did in some cases improve 1440p performance,

in the cases where there was a CPU bottleneck at 1440p. We have not seen it happen at 4K.

 

Note what lanzar wrote: "Certainly not at high resolutions on full settings".

Hence lanzar's statement, to which you referred, is valid for the highest resolution, and that is how it should be treated.

 

Edited By nec_000
Link to content
Share on other sites

Meanwhile, here is the compiled set of results for the 5700XT and 1080Ti, on the same tuned and optimized Ryzen 5900X machine at 1440p.

We wanted to see whether the 5700XT's position has improved compared to launch day a year and a half ago;

back then it averaged about 10% slower than the 1080Ti. We examined 7 titles to give us a general direction:

 

stock clocks

shadow of the tomb raider
1080ti = 90 fps
5700xt = 82 fps

 

borderlands 3
1080ti = 59.53 fps
5700xt = 60.25 fps

 

hitman 2
1080ti = 83.21 fps
5700xt = 71.98 fps

 

gears 5
1080ti = 89.4 fps
5700xt = 73 fps

 

horizon zero dawn
1080ti = 72 fps
5700xt = 69 fps

 

metro exodus
1080ti = 62.86 fps
5700xt = 58.13 fps

 

assassin's creed odyssey
1080ti = 58 fps
5700xt = 64 fps


overclock

shadow of the tomb raider
1080ti = 95 fps
5700xt = 89 fps

 

borderlands 3
1080ti = 63.14 fps
5700xt = 65.55 fps

 

hitman 2
1080ti = 86.82 fps
5700xt = 78.31 fps

 

gears 5
1080ti = 95 fps
5700xt = 77.6 fps

 

horizon zero dawn
1080ti = 76 fps
5700xt = 73 fps

 

metro exodus
1080ti = 65.89 fps
5700xt = 63.65 fps

 

assassin's creed odyssey
1080ti = 61 fps
5700xt = 68 fps

 

Summary:

 1080ti stock = 515 fps total
 5700xt stock = 478.36 fps total
 1080ti oc = 542.85 fps total
 5700xt oc = 515.11 fps total
1080ti faster than 5700xt at stock clocks = 7.7%
1080ti faster than 5700xt at oc clocks = 5.4%

 

1080TI WINS 5/7 games
5700xt WINS 2/7 games

 

* There has been no significant change in the gaps compared to launch day a year and a half ago, and if anything, only a small one. As far as a sample of 7 titles can tell us.
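To make the bookkeeping reproducible, here is a small Python sketch that recomputes the stock-clock totals and the percentage gap. The per-game numbers are copied from the list above; the script itself is mine, not part of the original measurement setup:

```python
# Recompute the stock-clock totals and the 1080Ti's lead from the
# per-game 1440p results listed above (numbers copied from this post).
stock = {  # title: (1080ti fps, 5700xt fps)
    "shadow of the tomb raider": (90.0, 82.0),
    "borderlands 3": (59.53, 60.25),
    "hitman 2": (83.21, 71.98),
    "gears 5": (89.4, 73.0),
    "horizon zero dawn": (72.0, 69.0),
    "metro exodus": (62.86, 58.13),
    "assassin's creed odyssey": (58.0, 64.0),
}

total_1080ti = sum(a for a, _ in stock.values())
total_5700xt = sum(b for _, b in stock.values())
gap = (total_1080ti / total_5700xt - 1) * 100

print(f"1080ti stock total: {total_1080ti:.2f} fps")  # 515.00
print(f"5700xt stock total: {total_5700xt:.2f} fps")  # 478.36
print(f"1080ti faster by:   {gap:.1f}%")              # 7.7
```

The same loop over the OC numbers gives 542.85 versus 515.11 fps, i.e. a gap of about 5.4%.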

Edited By nec_000

 

We also ran a test to answer the question:

Is RDNA2, represented by the 6800XT, twice as fast as the 5700XT, as declared?

 

To this end, having realized that at 1440p the 6800XT cannot always reach one hundred percent saturation, and that in some titles

the CPU limits it, we explicitly looked for the cases where this does not happen, i.e. where the GPU is indeed at 100% load as this resolution requires.

 

For example, Gears 5 at 1440p:

6800XT OC 170.8FPS

5700XT OC 77.6FPS

2.2 times ratio

 

Or Borderlands 3 at 1440p:

6800XT OC 137FPS

5700XT OC 65.55FPS

2.1 times ratio

 

Or Shadow of the Tomb Raider at 1440p:

6800XT OC 176FPS

5700XT OC 89FPS

~2.0 times ratio
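As a sanity check, the three ratios above can be recomputed in a few lines of Python. The FPS figures are the measured values quoted in this post:

```python
# Verify the "twice as fast" ratios from the three GPU-bound 1440p cases above.
cases = {  # title: (6800XT OC fps, 5700XT OC fps)
    "gears 5": (170.8, 77.6),
    "borderlands 3": (137.0, 65.55),
    "shadow of the tomb raider": (176.0, 89.0),
}
for title, (new, old) in cases.items():
    print(f"{title}: x{new / old:.2f}")
# gears 5: x2.20
# borderlands 3: x2.09
# shadow of the tomb raider: x1.98
```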

 

And in the 4K cases where the 5700XT's 8GB of VRAM still suffices to run the title, even then this ratio roughly holds.

 

If so,

AMD's statement in this case was not empty talk: this is an architecture capable of roughly twice the performance of its predecessor.

Let's not forget: this is a 519 mm² chip versus a 251 mm² one, both on the same 7nm lithography.

Given that they slightly more than doubled the chip, and the results are also slightly more than double, it makes sense; there is no contradiction here. This was also the premise

before we were exposed to RDNA2 this year. Recall that last year (2019), when the nickname "Big Navi" leaked out to the world, and it became clear

that the chip would be about twice the size of the previous one (according to the leaks), fans of the field and the industry already assumed that performance

should double if the chip size doubles, and that any lesser result would be considered an architectural failure.

 

What you should pay attention to:

Of RDNA2's 519 mm², about a quarter of the area is allocated to the 128MB cache (about 6 billion transistors out of 26).

So the chip without the cache theoretically corresponds to an area of about 400 mm². In effect, the part allocated purely to computation is about 400 mm²,

and it manages to double what RDNA1's 251 mm² provided, thanks to improved utilization achieved through a higher working frequency,

and thanks to the chip working against cache rather than directly against VRAM. The cache, in turn, manages to "keep the pistons of the core engine fed" at a very fast pace

and does not let it waste time on unnecessary round-trips to memory.

 

400 mm² (the computational part) is 2 times better (up to 2.2) than 251 mm², although it is only 1.6 times larger.

Whatever is above that 1.6 is the architecture's improved utilization (working frequency + cache).
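For what it's worth, the area bookkeeping above checks out if we use the transistor share as a rough proxy for area, as the post does. All figures are the post's estimates:

```python
# Rough check of the die-area split described above: treat the cache's
# ~6B-of-26B transistor share as a proxy for its share of the 519 mm^2 die.
die_mm2 = 519.0
cache_fraction = 6 / 26           # cache transistors / total transistors
compute_mm2 = die_mm2 * (1 - cache_fraction)
print(f"cache share ~{cache_fraction:.0%}, compute area ~{compute_mm2:.0f} mm^2")
# cache share ~23%, compute area ~399 mm^2
```

Close enough to "about a quarter" and "about 400 mm²" for back-of-envelope purposes (in reality SRAM is denser than logic, so the true area split differs somewhat).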

 

You can also see the following interesting thing:

If the 5700XT readily overclocked to 2100MHz, while we readily reached 2700MHz on the 6800XT,

that is an improvement of about 30% in working frequency from RDNA1 to RDNA2.

We have also already seen a 1.6-fold increase in core area (the part allocated to computation), 400 mm² versus 251 mm².

 

Multiplying the area increase (1.6) by the frequency improvement (1.3) yields roughly 2.1,

which is exactly the improvement we saw in the measurements. It is impressive how linearly the values converge here.
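The back-of-envelope model in the last two paragraphs can be written out explicitly, using the area and frequency figures estimated above:

```python
# Multiply the compute-area ratio by the OC frequency ratio and compare
# with the ~2.0-2.2x speedups measured earlier in the thread.
area_ratio = 400.0 / 251.0    # Navi 21 compute area vs Navi 10, mm^2
freq_ratio = 2700.0 / 2100.0  # 6800XT OC vs 5700XT OC, MHz
predicted = area_ratio * freq_ratio
print(f"{area_ratio:.2f} x {freq_ratio:.2f} = {predicted:.2f}")  # 1.59 x 1.29 = 2.05
```

The product lands right inside the ~2.0-2.2 range of the measured ratios.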

 

We learn from this linear convergence that RDNA2 is, per unit, very similar in efficiency to RDNA1.

That also makes sense, since it is an evolution of it. What they succeeded in doing was raise the working frequency on the same lithography,

and add a fast cache. These are the two major improvements that went into this generation.

The more significant and influential of the two is, as stated, the working frequency, because the improvement scales linearly with it.

 

 

 

 

 

 

 

Edited By nec_000

Now, given the findings we have seen and collected, we (probably?) have a decent ability .... 🤔

to perform a judicious interpolation of what the performance of the intermediate RDNA2 part will be, the one to be launched as the 6700XT series

this coming January (according to the best rumors circulating on the net, that is the likely launch date).

 

We know it is named Navi 22, that it too carries a cache (96MB to be exact), with 12GB of VRAM,

which means it too, like its big brother Navi 21, keeps the core's pistons fully and properly saturated,

without the unnecessary round-trips of IO to VRAM.

 

We know it contains 40 CUs, and its working frequency, as demonstrated by big brother Navi 21 on lanzar's 6800XT card,

should be at least the same. Probably a bit more, if anything, since smaller chips can usually

reach a slightly higher working frequency (silicon physics and signal propagation in the conductor).

 

According to this interpolation, which includes:

width = 40CU

working frequency OC'd to 2.7GHz on good custom cards such as .... (we will not assume a working frequency beyond that at this point),

 

we should expect (in my cautious and preliminary estimate at this point) to land roughly 7-10% below the performance of a 3070 (or a 2080Ti, for that matter).

If the card is priced at $400 MSRP (as rumor has it?), it will definitely be another interesting product in the lineup of possible graphics cards for the consumer,

and will put pressure on the price of the 3060Ti.

** 7-10% below a 3070 is a few percent faster than a 3060Ti.
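The footnote's arithmetic can be sketched as follows. The 12% gap between the 3070 and the 3060Ti is my assumed round figure for illustration, not a number measured in this thread:

```python
# Where does a card 7-10% below a 3070 land relative to a 3060Ti,
# assuming (for illustration) the 3060Ti sits ~12% below the 3070?
perf_3070 = 100.0
perf_3060ti = perf_3070 * (1 - 0.12)  # assumed gap, not measured here
for gap in (0.07, 0.10):
    est_6700xt = perf_3070 * (1 - gap)
    edge = (est_6700xt / perf_3060ti - 1) * 100
    print(f"{gap:.0%} below 3070 -> {edge:.1f}% above a 3060Ti")
# 7% below 3070 -> 5.7% above a 3060Ti
# 10% below 3070 -> 2.3% above a 3060Ti
```

Either way, "a few percent faster than a 3060Ti" holds under this assumption.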

 

 

*** It can be assumed that anemic 6700XT reference versions, call them vanilla versions, the ones that run (probably?) at a somewhat sluggish 2.1-2.2GHz,

are unlikely to impress or threaten the 3060Ti in anything, and an overclock will be required (as stated above and as illustrated well in this thread) in order to

show what the chip is really capable of accomplishing.

 

Edited By nec_000

Quote of Moon-Mage

I personally very much enjoy CP and I play with RT, which I think adds a lot (although there are areas in the game where you must turn RT off because the FPS drops). And again, if I had an AMD card I would not see a significant difference in other games, and I would not have the option of RT or DLSS in CP, so even without RT I would get less FPS.

 

Are you really playing like that, changing the settings depending on the area you are in? Personally, I do not see how that is fun. I prefer lower settings that always run stable/smooth.

 

In any case, to say that CP is "broken" is just an exaggeration. There are bugs in the game, but they are not really that frequent, and do not really interfere. The game's world is huge and impressive, and relative to that, they are negligible.

 

Hardware comparisons according to MSRP are a joke. First of all, not the whole world lives in the US. Second, even within the US the MSRP is not exactly true. Not that the final conclusions of the major sites' reviews particularly interest me. I know how to look at the graphs myself, and weigh what I see there against the price available to me. In general, in my opinion the reviews of the major American sites are lacking. Not all of us constantly buy the newest generations and pair them with the newest platforms. Recently GAMERS NEXUS and HARDWARE UNBOXED have started doing a few more intergenerational reviews and GPU/CPU SCALING pieces that somewhat meet this need, but still not enough. To my great joy, a generation of TECH YOUTUBERS has given this a solution, sometimes an impressive one.

 

Edited By Ido.G

Quote of ch33

I do not understand this "Fine Wine" narrative. I do not know which communities you hang out in; in the ones I hang out in, it is mostly a network joke. You could call it "drivers that get better with time"... you could also call it bad drivers on DAY 1 that leave performance on the table and take AMD a year-plus to fix, while Nvidia's drivers are good from launch day.

 

How did the VEGA 56 / VEGA 64 cards age, by the way? And what are the owners of the Radeon VII doing today with their formidable 16GB? Has anyone asked how they are doing? Or have we forgotten that these cards exist, as AMD apparently has. They matured less like wine, more like milk. And they have indeed "improved" over the years compared to the "neglect" of Nvidia, whose 4-year-old GTX 1000 series still occupies the vast majority of market share and offers excellent performance.

 

This is exactly what you wrote, and you were right, folks, with one small addition:

You wrote: "Bad drivers on DAY 1 that leave performance on the table and take AMD a year + to fix, while the drivers of Nvidia are good from the launch day."

 

Plus -> Nvidia abandons old series and stops optimizing their drivers, and this is what helps AMD

rise after a year-plus relative to Nvidia's products, which did not rise similarly.

Nvidia quite neglects its old series, especially two series back, which at the moment means the 1000 series.

Partly because it begrudges the resources it invests in the matter, which cost it a lot of money, but also because it is better off optimizing

the performance of the new series, those on the shelf in stores, which it sells and makes money on. That much is

self-evident.

It is also better for it not to improve old series, because that would only spare customers the urgent need for a new purchase (if the old one

still works great). Just like Apple intentionally slowed down iPhones after 3 years (and was caught in 2017 by the regulator) just to

encourage the purchase of a new iPhone.

This is the same ancient method known in consumerism and industry as planned obsolescence.

 

The small manufacturer, the underdog, cannot practice such monopolistic tactics, because it can barely manage to sell

and convince the few customers who have chosen it. AMD prefers, as a business strategy, not to slow down old products (through

driver neglect), in order to gain at least some credit or relative advantage in this aspect, in the face of the competition, in the eyes of the consumer.

 

And as evidence that they gain it: everyone in the community is familiar with the phenomenon of an AMD card, after a year-plus, changing positions and overtaking

its original direct competitor in performance. You can see this in countless comparisons of the HD7970 versus the GTX680, of

the R9-290 versus the GTX780Ti, of the RX480/580 versus the GTX970/980/GTX1060, and so on...

 

Notice that AMD is still maintaining drivers for the old GCN architecture. Crazy.

These cards were launched back in December 2011; why waste resources on such an old product?

 

In my opinion they want to create a reputation in the community that their cards maintain their value on the second-hand market,

similar to a brand that retains its value as it ages. Take the Toyota brand, for the sake of example: that brand also

has value retention as a relative advantage, i.e. it depreciates slowly and trades well on the used market.

 

Therefore the AMD video card drivers, which you say are crappy on day 1 and do not show the full capacity of

the product at the time, ripen their fruit only some time later (sometimes a year-plus after). And this trait is called, in community

terminology, fine wine. ** If so, it is very interesting what RDNA2's gap over Ampere will be in a year, if

today RDNA2 still does not show us its full potential while Ampere does....

 

Frame the method however you like; this is the jargon that has taken root throughout the community on the net, and it is called finewine.

If you do not want to present this terminology in a positive light, i.e. as something that improves over time, that is definitely understandable and acceptable;

it can then be presented in reverse, as a non-negative. We will call it, in the long terminology, something like:

 not fucking the customer on purpose by planned obsolescence.

 

This is the tactic of the big players that dominate the market, Nvidia and the like... because they can afford it; the customer is captive

in their hands whether he likes it or not. And this is not a tactic AMD can afford to adopt, because if it dared, then it would lose

the few customers it has, and the little reputation it has managed to build in this aspect as well.

 

I, and the community, have no problem phrasing it this way if you prefer, without using the term fine wine.

I will call it by the bold acronym nftcbpo.

 

That is, it is not AMD at all that causes the phenomenon, i.e. these are not products that improve in any way,

but its competitors who "destroy" the performance of their own products through driver neglect.

 

Call it something positive and attribute it to AMD, which I understood you did not want to do,

or call it something negative and attribute it to their competitors, in this aspect of product aging over time.

Choose which of the two you would like to work with, and I will happily adjust myself...

 

 

 

Edited By nec_000

Quote of nec_000


Before we sit down to concentrate all the results we have measured so far (from the selection Lior made) into one neat table,

does anyone have a request for another title to be reviewed?

 

I would love if you could check out Flight Simulator 2020

 


@nec_000

 

Even 3 lines would have been enough. Or paragraphs, for that matter.

What I extracted from the text here is that you started talking about neglect as one thing, moved on to talking about planned obsolescence as another thing, and in the end you decided to close this vague logical circle and talk about both as the same thing.

 

So I did not understand: which of them does Nvidia do? Because they are in fact not the same thing. If you accuse it of deliberately slowing down its products, i.e. a GTX980 is objectively slower today than it was in 2017 because of some malicious code in the driver, that is a serious charge and I hope it is based on something. If you accuse it of "neglect", i.e. at some point it stops trying to milk more performance out of 4- and 5-year-old architectures, that is already something else. In my opinion that is an acceptable practice as long as the card is still officially supported, and to the best of my knowledge Nvidia's drivers today still support back to the 600 series.

 

I have no intention of getting into AMD vs Nvidia across the forum pages. I do not root for any corporation and do not think they are my friends, nor do I have a soft spot for underdogs. So I suggested that what you boast about as "Fine Wine" is simply a matter of perspective.


ch33 probably did not understand, so let us clarify.

 

Do not confuse smartphone slowdown (an intentional and of course illegal action by Apple, for which it was fined by the regulator),

which is done by lowering the working frequency of the device's processor (while claiming it was meant to

preserve the battery, and other such excuses that convinced no one),

 

with the reverse kind of slowdown:

which is simply avoiding writing up-to-date, performance-optimized code for new titles as they come out. That is what Nvidia does.

 

A graphics accelerator maker optimizes by writing code and improving driver parameter settings for each new title that comes out,

but Nvidia only applies this process to the newest series it has on the market (i.e. new in stores) at the time.

For previous series, it only makes sure the code is correct and bug-free, i.e. that the title will run.

 

The topic is probably new to you and you have not been exposed to it, so I recommend reading about it. The net is full of work on the subject, including measurements

collected over the years showing this. Our forum has also discussed it in the past; you can find threads with links to measurements.

 

When an Nvidia card is new, it has some power X relative to the competition, and relative to

Nvidia's own other cards as well, of course. But when Nvidia's next generation is born (two years later), Nvidia invests in writing

optimizations only for its new generation (the one on store shelves) and neglects the previous one.

 

The main business purpose is clear, beyond the direct financial savings of course:

it produces a larger artificial gap between the new series and the previous series, thereby indirectly speeding up the pace of card purchases

from the new series, a pace that would otherwise have been slower. This is called planned obsolescence.

 

Edited By nec_000

Thank you very much for all the information.

 

I would love to hear where one can purchase the card (a 6800XT from the recommended models) without tearing a hole in one's pocket (up to 4,200 NIS).

"It is recommended to purchase one of the custom cards such as the ASRock Taichi, or the XFX 319 (and let it be said that the Red Devil is also excellent)".

Edited By joseph samuel

 

The interim update is that, as it appears, the sharpening filter was NOT turned on by default on lanzar's machine.

This means that all the measurements he provided are relevant.

 

Yes, we ran a spot measurement to make sure whether there is a performance difference between with and without, and we did get two values in the sanity check:

 89.7 versus 87.6 FPS. That is a gap of about 2%, which means the function does have a price.

 

But here lies the big "but":

if the function was not active, and something else is causing the differences in the image, that already requires a completely different kind of research.

 

A quick 15 minutes of scrolling and reading on the net brings up all sorts of interesting findings. That is, it is not a rare thing, on the one hand.

There is a lot of evidence from users on reddit and other forums describing differences in the output image

between the different brands of cards. Some of the more thorough accounts give the impression that the difference is related

to the signal transmitted from the video card and its interpretation on different monitors. It is possible that the result must be photographed with an external camera rather than sampling

the frame buffer programmatically, which will not necessarily show us differences at this level.

 

But this topic is not related to the current thread about "performance", and exploring differences in the image that each card produces

on the display itself (the screen) should be done in a completely separate thread.

One that, before entering, and in order to treat the subject seriously, first requires a comprehensive and thorough study, because the subject is not as clear or as well known

in the community as it may seem. It would be a shame to dive in without enough specific knowledge on the subject, experience, and most importantly real evidence

of whether there is a difference in the picture.

 

** So let's put this issue aside for now. Friends who know the subject are welcome to open a separate thread and discuss it together.

In the meantime I will wait for lanzar to get hold of a high quality DSLR and send me a raw image. Or we will jump into it and try to see with our own eyes; in my opinion

we need to find a twin of the monitor he has, to make a side-by-side comparison at the same time. Otherwise it will be difficult to spot differences if the monitors

are not identical to one another. And the truth is that I also have enough cards and monitors at home, so maybe I will test locally, only I do not have a pair of twin

screens. I have an almost-pair, but not quite identical: a U2414h and a P2414h. Almost the same screen; the difference is that the U comes factory calibrated

while the P does not. I might try to calibrate the uncalibrated screen using the calibrated one (as a reference for comparison). That can help a lot,

from experience I have had in the past.

Edited By nec_000
