Lately - HWzone Forums

Popular Content

Showing content with the highest reputation since 18/09/20 in all areas

  1. Kfir strikes again (or for the hundredth time, I lost count). Proves time and time again that he is a kaka child whose place is in FXP.
    6 points
  2. Written by forum member nec_000, on 29.10.2020 ** The author grants the site owners permission to use the article below as they see fit, provided they credit the author when posting or using it. Here is an article about the groundbreaking new technology that was first implemented in the new generation of video cards. In it we will understand what it is, why it entered the picture, what its greatness and genius are, what root problem in the graphics industry it came to solve, why it could only now be implemented for the first time (technologically), and why from now on everyone will adopt it.

Background: We all heard yesterday, at the launch of AMD's new RX6000 series, about a new technology called Infinity Cache. We understood it is something AMD implemented in the new product, but not exactly how it works, why, and what it does. Today we will all understand, and understand well. First, for simplicity: the word "infinity" is branding and has no meaning. We will call the mechanism by its simplest and most terminologically correct name, cache. The article is presented for the benefit of the members, to enlighten and broaden their knowledge, at an academic (but concise) level that fits in one concentrated page, so that the average enthusiast can understand the subject well.

Historical background: Decades ago, the first video card was born, allowing the computer to display the desired output on a screen. At first it was just text, and for that it was enough to talk directly to the slow system RAM; nothing more was required. Later it became necessary to display not just simple text but more complex graphics, which created a difficulty: one of the heaviest resources consumed by graphics work is memory traffic, high and wide. We will explain why later. So over the years graphics cards had to stop using the computer's generic RAM (which was too slow for the task) and start working with fast memory mounted on the video card itself. It served as a dedicated (and fast) memory buffer for the exclusive use of the graphics processor: faster, though smaller, than the computer's RAM. It was called graphics memory, and later its name settled as VRAM. This gave the graphics processor (by now called a GPU) higher access speeds and better bandwidth than standard system RAM could provide. The improved bandwidth is what lets the GPU do effective image-processing work; without it the GPU would be in starvation mode, suffocated and hungry for data. Over the years, Moore's law has meant that every 24 months (about two years) the processing power of graphics processors doubles. The doubling rests on a lithography step roughly every two years, which allows fitting twice as many transistors into the same chip area (plus a slight increase in operating frequency). Together these yield a doubling of performance (at least 2x) every generation (two years).
The problem is that while the graphics chips increase their power at this fast, exponential rate, the memory bandwidth available to the graphics processor cannot keep up. The lithography shrink each generation does not bring a matching doubling of memory bus bandwidth, only a doubling of memory capacity, and capacity is not the scarce resource here. Let's see where this leads over the years: while processing power doubles every two years as described, memory bandwidth could not be increased at the same rate. A gap opens over time: on one side processing power grows at a doubling rate, while bandwidth grows lazily; one resource pulls away from the other, which cannot keep pace. Over the years the industry has addressed the problem with the following techniques: Repeated doubling of the memory bus width in bits: from 64 bits to 128, then 256, and so on up to 512... until it hit a glass ceiling that limits how many traces can be pushed onto a single card's PCB. It stops (more or less) at a limit of about 512 traces per PCB; beyond that it is too complex and expensive. Another method was to transmit more information with each clock tick. Initially memory worked as SDR, one bit written or read per clock per pin. Then came DDR, one write and one read per clock. By the GDDR5 generation this grew to two writes and two reads per clock, four transfers per tick. Today, in the GDDR6X generation, each pin carries a multi-level PAM4 signal: four distinct voltage levels per symbol, two bits per transfer instead of one. This is a very difficult signal to control reliably: the voltage levels sit much closer to one another, so identifying the correct value becomes ever harder, demands very delicate discrimination on the receiver side, and produces quite a few errors. There is a limit to what electrical signaling can do, and we are approaching it in huge strides; pushing further is becoming a particularly difficult challenge, bordering on impractical or not technically worthwhile. Another method, squeezed to the last drop over the past decade, is data compression: at first lossless, and recently even lossy, all in an effort to milk every last drop of juice out of the memory bandwidth. Graphics keeps advancing and soaring upward while memory fails to catch up at a satisfactory pace. All the methods above (widening the bus in bits up to 512, complex signaling that carries more information per clock, data compression) have reached the limit of their capacity in modern times. No further progress can be made this way, and a new, groundbreaking method simply must be found to overcome the memory bandwidth limit and feed the starving graphics processor.
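To make the arithmetic concrete, here is a small calculation sketch (my own illustration, not from the original article; the per-pin rates are typical published figures) of how bus width and per-pin data rate combine into bandwidth:

```python
# Effective bandwidth in GB/s = bus_width_bits * per-pin data rate (Gbps) / 8.
# Per-pin rates below are typical figures for each memory generation.

def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

examples = {
    "128-bit DDR-era": bandwidth_gb_s(128, 1.0),
    "256-bit GDDR5":   bandwidth_gb_s(256, 8.0),
    "384-bit GDDR6X":  bandwidth_gb_s(384, 19.5),  # RTX 3090 class
}
for name, bw in examples.items():
    print(f"{name}: {bw:.0f} GB/s")   # 16, 256, 936 GB/s respectively
```

Both levers multiply together, which is exactly why exhausting each of them (bus width capped near 512 bits, signaling nearing its reliability limit) caps the product.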
Understand: without bandwidth that keeps rising to support the increasing processing power, we cannot move forward with graphics work and we reach stagnation. The new method found: caching. The big problem with graphics work is, as mentioned, the enormous memory traffic it requires. The reason so much memory is required is that the input is huge: the texture set is large and heavy, and it is what drinks most of the memory capacity and fills the card's buffer, henceforth known as VRAM. If textures were stored in the computer's general memory (RAM), then as we already understood from the explanation above, the graphics processor would have very slow access to them, and would spend most of its time waiting idle for data to reach it. In short, we would get very low graphics-processor efficiency and poor performance. That is why VRAM was invented in the first place: to bring everything the graphics processor needs close to it, so it can work fast and not starve for data. It would be silly to have a powerful processor that constantly waits for information and whose work is therefore delayed. Since the memory consumed is large, and textures consume the vast majority of it, there was no way to produce a graphics processor with an on-die cache large enough to hold all the necessary information. What was good and suitable for the world of CPUs, a tiny cache, does not suit graphics applications: very small cache volumes are enough for a CPU, that is its way of working, which differs from a graphics processor, which requires a huge cache to be relevant. Thus, for all those decades, embedding cache memory in graphics processors was not possible or practical. Only now that lithography has first reached 7nm is the transistor density sufficient to hold a large enough number of transistors in one chip. This gives the makers of graphics accelerators the technical possibility to embed a practically sufficient cache in the graphics processor. The world's first graphics processor to embed such a cache (in addition to VRAM) as an intermediate layer, the technological pioneer, is the Big Navi core, known by the code name Navi21. This core is about 500-plus mm² and holds a huge count of 26 billion transistors, of which about 6 billion (a quarter of the chip's real estate) are allocated to 128MB of cache memory. 128 megabytes are 128 million bytes; a byte is 8 bits, so 128MB is 1,024 megabits, or 1.024 billion bits. The kind of on-die memory cell in question requires about 6 transistors to store one bit, so representing 1.024 billion bits takes about 6 billion transistors. That is the allocation of 6 billion transistors taken from the chip's real estate for a 128MB cache array. Why is it only now, for the first time, possible to embed cache memory in a graphics processor?
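A quick sanity check of that transistor arithmetic (my sketch; the 6-transistors-per-bit figure is the standard 6T SRAM cell the post refers to):

```python
# 128 MB cache -> bits -> transistors, at 6 transistors per bit (6T SRAM cell).
cache_bytes = 128 * 10**6          # the post counts 128 MB as 128 million bytes
bits = cache_bytes * 8             # 1.024 billion bits
transistors = bits * 6             # one 6T cell per bit
print(f"{bits/1e9:.3f} Gbit -> {transistors/1e9:.2f}B transistors")
# 1.024 Gbit -> 6.14B transistors, matching the ~6 billion quoted for Navi21
```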
Hence, if we go just one lithographic generation back, to the 14/16nm generation, we see that the video cards based on it held about 11 billion transistors at most (the largest chips among them). Clearly, if an allocation of 6 billion transistors were required (out of 11), more than half the chip would go to cache memory alone, and not enough transistors would remain for the chip's graphics-processing logic. But today, when lithography for the first time allows no fewer than 26 billion transistors on a single chip, allocating a quarter of them to the cache (6 of 26) becomes a practical option, because 20 billion transistors still remain for the graphics work itself, and that is a sufficient amount. A ratio of 25% for cache and 75% for work is a ratio that has only now become practical. This could not be done in the past: such transistor density in a single chip did not exist, and had anyone dared to embed a large enough cache, not enough transistors would have remained for the rest of the chip's needs. The next smart question arises: what is so special about a 128MB cache, what is the magic number, such that it could not be done in sizes smaller than 128MB? Some understanding of how graphics work happens (concisely): There are several stages until the image is built, and we will keep things very simple to keep this readable. The graphics processor receives from the main CPU a set of instructions for making and building an image. These include the vertices that make up the polygons, and the textures, which, as mentioned, are stored locally in VRAM. Everything the video card receives from the CPU is kept close by in VRAM so that it can assemble the image quickly, without delays; VRAM is the graphics processor's fast, private work surface. In the first stage the processor builds the polygons (three-dimensional shapes) from the vertices; this is called the geometry stage. Next it renders the faces of the polygons, i.e. draws and paints them. It draws them from the textures that make up the object in question (a texture is a piece of image): it takes a texture and spreads it across the face. The mathematical idea here is a mapping from the two-dimensional space of the image (the texture) onto the three-dimensional surface that is the polygon's face. There are a few more stages of course, intermediate stages, and stages after rendering the faces from the textures, including lighting calculations and the like... we will not go into them in this article; the topic is very interesting, but it is not the relevant scope for this discussion. What is important for us to understand is that the rendering phase (taking the textures and drawing/spreading them over the faces) is the heaviest phase in terms of memory consumption, i.e. I/O against VRAM. As more resolution is consumed, or more frames per second, processing consumption scales linearly, and memory bandwidth consumption scales with it: every pixel operation requires reading the pixel from the texture map and writing it onto the appropriate polygon face. Read, work, write...
And so it repeats, pixel after pixel. X times more pixels requires X times the processing power (which, as we already understood, is easy to achieve) but also X times the memory bandwidth; the two go hand in hand. The processor cannot process another pixel if the memory does not let it read one, and it cannot write another pixel onto the polygon if the memory does not let it write. Our problem and limitation is that memory bandwidth cannot be improved at the rate the graphics processor can; hence the starvation for memory bandwidth. One of the newest ways to break the glass ceiling in the problematic layer, which is, as stated, memory bandwidth, is to create something faster than VRAM: a cache. It is memory even closer to the graphics processor, sitting directly on the die itself, and it is, in principle, no longer limited; it grows in power directly with the growth of the chip itself. More transistors mean more area that can be allocated as cache, but also more bandwidth, because cache bandwidth scales with the chip's operating frequency and with the number of interconnects built on the die between the cache and the memory controller sitting in the chip. Why, then, is 128MB the constant that for the first time gives a sufficient size, such that much less would not do? Because once two basic things can be done at once, holding at least one whole texture in the cache and holding a single whole frame in the cache at the same time, you can for the first time do all the calculations needed to render one complete image using only the cache, and that is the minimum needed to get there. A single HD frame, i.e. 1080p, uncompressed, is about 2 million pixels; at 32 bits per pixel that yields 64 million bits, which is 8 million bytes, meaning about 8MB of memory is needed to hold it. In a 4K image the amount is multiplied by 4, i.e. about 32MB for a single frame. Pretty quickly one realizes that to perform all the calculation work required for a full frame, a capacity on the order of 128MB is the threshold that ensures everything fits in at once. In that situation the graphics card takes one single texture, caches all of it, i.e. reads it only once from VRAM, and starts drawing pixel by pixel from it and writing the pixels to a frame buffer, with everything together converging at the same time into the 128MB cache allocated to the graphics processor. 128MB is the first practical minimum volume that lets this business work in 4K as well. Smaller volumes make it difficult, to the point of not fitting everything at once, which drains all the wisdom and rationale of using a cache. As we said, we are not going deep into the whole graphics pipeline; there are of course more steps, and it is not just painting textures. Suffice it to say that the intermediate calculations of the other stages, and the lighting calculations at the end, do not consume a large amount of memory, and they can be purged from the cache and freed once finished, because they are no longer needed.
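The frame-buffer numbers are easy to verify (my sketch of the post's own arithmetic, at 32 bits = 4 bytes per pixel):

```python
# Uncompressed frame-buffer size at 4 bytes (32 bits) per pixel.
def frame_mb(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    return width * height * bytes_per_pixel / 1e6

print(f"1080p: {frame_mb(1920, 1080):.1f} MB")  # ~8.3 MB -> the post's ~8MB
print(f"4K:    {frame_mb(3840, 2160):.1f} MB")  # ~33 MB  -> the post's ~32MB
```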
What is important to understand is that the heaviest consumer is the rendering phase, painting the textures across the faces of the polygons. And if one whole texture is cached, and a whole frame is cached at the same time, that is the critical size for the whole process. This is what a 128MB cache makes possible, and smaller sizes do not. That is why they had to wait until 2020, for 7nm to be born, so that it would be practically possible to build a cache of the sufficient minimum size. The graphics processor loads one texture into the cache, renders the hundreds and thousands of polygons that draw from that texture, then evicts it, loads the next texture in the queue, and so on until it is done. Note that in this method each texture is read only once per complete image, no matter how many polygons use it. The processor does not finish with a texture in the cache until it has finished drawing all the polygons that texture is used to draw, which is hundreds and thousands of times in a single image. We can immediately see that the read traffic against VRAM saved here is a factor in the thousands. A dramatic improvement. On the write side, remember that the frame buffer no longer sits in VRAM (where every pixel write would mean writing to VRAM) but sits entirely in the cache as well, meaning there are no writes to VRAM. The processor draws all the pixels directly in the cache until the entire frame is produced, and from there sends it to the screen. The savings on the write side are likewise orders of magnitude. In fact, by working directly with a cache that provides everything needed as a workspace for building one complete image, we have reduced the memory traffic against VRAM to a fraction of what was required before. It should already be clear what genius this produces for us: we broke the glass ceiling of memory bandwidth, the one that has constrained graphics processors since the field was born decades ago, and moved the problem into the chip itself, where it is easy to address and the limitation is almost non-existent, since cache grows directly with lithography itself. Hallelujah. Bandwidth calculations to understand the business: AMD chose, in its technical implementation in the Navi21 chip as mentioned, a cache bus of 4096 bits. That means each tick (per clock) allows a write or a read of 4096 bits. Because cache memory works SDR-style, it is more flexible than DDR and certainly more flexible than all the later schemes: it can perform a write or a read each clock, as it pleases. DDR, by contrast, provides one write and one read, but not two of the same type. In effect, DDR gives half the bandwidth for any single read or write operation; the two are advertised together as a sum, but in practice that is misleading. The same limitation applies, even more strongly, to GDDR5 and today's GDDR6X: they carry more than one read and one write per clock, but always several of each type in a fixed half-and-half ratio. So here is another advantage of the cache, which is SDR and unrestricted: it can use 100% of its bandwidth for reads only or for writes only, and is not forced to settle for at most half of either kind. This further improves traffic in actual practice.
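A toy sketch of that texture-batched ordering (entirely my illustration; real GPUs do this in hardware and drivers, not in Python) that counts how VRAM reads collapse to one per texture per frame:

```python
# Toy model of the rendering order described above: load a texture into the
# cache once, draw every polygon that uses it, evict, move to the next.
from collections import Counter

def render_frame(texture_to_polygons: dict[str, list[int]]):
    vram_reads = Counter()      # reads from slow VRAM
    cache_samples = Counter()   # texture samples served from the on-die cache
    for tex_id, polygon_pixel_counts in texture_to_polygons.items():
        vram_reads[tex_id] += 1                 # the single VRAM read
        for pixels in polygon_pixel_counts:     # every face using this texture
            cache_samples[tex_id] += pixels     # all served from cache
        # texture evicted here; the frame buffer stays in cache throughout
    return vram_reads, cache_samples

reads, samples = render_frame({"wall": [5000] * 300, "floor": [8000] * 200})
for tex in reads:
    print(f"{tex}: {reads[tex]} VRAM read vs {samples[tex]:,} cached samples")
```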
The Navi21 chip, in bandwidth terms, works at a typical average operating frequency of 2150MHz, that is, 2150 million clocks per second. Thus the memory bandwidth created against the cache is 4096 bits times 2150 million = 8.8 trillion bits per second. Divide by 8 and you get 1.1 trillion bytes per second, or in short, 1100GB per second. The total VRAM bandwidth on this card is a 256-bit bus in GDDR6 configuration running at 16Gbps per pin, or in other words only 512GB per second. We see that the cache memory actually more than doubles the bandwidth available to the GPU core: the graphics core now sees as its work surface a crazy speed of about 1100GB per second, and of the preferred all-SDR kind at that. For comparison, the bandwidth of the RTX3090, using ultra-fast 384-bit GDDR6X memory at 19.5Gbps per pin, is only 936GB per second (and limited to half writes and half reads). In other words, embedding this pioneering cache allows Navi21 to gain an even greater effective bandwidth than the RTX3090 flagship card. In fact, at the moment, the Navi21 core receives more bandwidth than it is capable of processing at full output. The fact that this bandwidth is limited to only 128MB of capacity should, by this point, not really bother the reader: at every stage of the graphics work toward completing the full picture, this volume satisfies the needs of the method/technique by which the graphics work is done. The GPU caches what it needs for one image, all of it, then throws away what it no longer needs, until one image is complete and ready to be transmitted to the screen. AMD wanted to describe what this bandwidth is equivalent to compared to the traditional method of the past, so it built the following slide, which we will explain right away - scroll down past the image: In the left column, the bandwidth obtained only from traditional GDDR6 memory chips, 256 bits wide at 16Gbps per pin, which produces 512GB/s. The middle column is the same times one and a half, i.e. 384 bits, which produces 768GB/s. And in the right column they took the cache bandwidth of 1100GB/s combined with the additional potential provided by the generic VRAM, which is itself 256 bits (512GB/s), giving 1100 + 512 = 1612GB/s in total. If we divide 1612 by the left column's 512 we get the ratio they wrote, roughly 3x. And here we have the genius, which is actually quite trivial and simple to understand, and has existed in the world of computing since time immemorial: a cache, which for the first time, thanks to sufficiently advanced lithography, can be implemented practically and effectively for graphics, in a graphics processor. Something that could not be done until modern lithography was born. Because this method is so dramatic and groundbreaking, technologically and in application, it will now be adopted by the entire industry, Nvidia and Intel included. It is simple, inexpensive, effective, and easy to implement. It cannot be patented either: cache is cache, and it is older than all the members of this forum.
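Reproducing those numbers (my sketch of the post's arithmetic; figures as quoted in the post):

```python
# GB/s = bus_bits * Gbps_per_pin / 8; for the on-die cache, the per-"pin"
# rate is simply the core clock (SDR: one bit per wire per clock).
navi21_cache = 4096 * 2.15 / 8     # 4096-bit cache bus at ~2150 MHz
gddr6_256    = 256 * 16 / 8        # 256-bit GDDR6, 16 Gbps per pin
gddr6x_3090  = 384 * 19.5 / 8      # RTX 3090: 384-bit GDDR6X, 19.5 Gbps per pin

print(f"Navi21 cache:   {navi21_cache:7.1f} GB/s")   # ~1100.8
print(f"256-bit GDDR6:  {gddr6_256:7.1f} GB/s")      #   512.0
print(f"RTX3090 GDDR6X: {gddr6x_3090:7.1f} GB/s")    #   936.0
print(f"cache + VRAM:   {navi21_cache + gddr6_256:7.1f} GB/s")  # ~1612.8, ~3.1x of 512
```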
From now on, graphics processing technology no longer depends on the limits of the memory chips and bus width (in bits) of which VRAM is composed. VRAM can now be much slower and cheaper - i.e. slow, cheap memories can be used for the implementation - provided the cache, in its speed and size, satisfies the graphics processor's hunger at the pace of reads and writes needed to render a complete image. The next generation of lithography, probably 5nm, which will enable the next generation of cards within about two years, will double the transistor count from the region of 26-28 billion today (26 billion for Navi21, 28 billion for GA102) to the region of 50 billion transistors. In that situation, graphics cards will be able to allocate even more than 6 billion transistors to caching - say 12 billion transistors for a 256MB cache - which will further improve the flexibility and performance with which the graphics processor does its job. For when we already have 50 billion transistors on the chip, allocating 12 billion of them still leaves 38 billion for the working part of the chip, and 12 versus 38 is a reasonable ratio. The vendors will examine what the optimum is in this respect, in the ratio between cache and processing power, and decide accordingly how much cache to allocate in the graphics processor. * To the extent that members wish to discuss and dig deeper into the subject, and/or into the areas the article marked as out of scope, feel free to ask and dig. I will do my best, to the extent of my knowledge of the field, to respond and expand.
    6 points
  3. CSM means backward-compatibility mode for an old BIOS. In other words: MBR and not GPT, not UEFI mode. This is why the system does not boot: the appropriate disk partition is missing. There is a built-in tool in Windows that allows you to convert, but there is a certain risk of data loss. Before you begin it is advisable to create an image of the disk with software like Macrium Reflect so that it can be restored if something goes wrong. Instructions for converting disk partitions are here; refer to the second chapter, which explains how to do it from the desktop. The tool in question is MBR2GPT; a typical run looks like the sketch below. The alternative option is to reinstall the system after turning off CSM mode in UEFI (which will lead to it being installed in UEFI mode).
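For reference, this is the documented MBR2GPT usage from an elevated command prompt (disk 0 is an assumption; check your disk number first, and back up as noted above):

```
mbr2gpt /validate /disk:0 /allowFullOS
mbr2gpt /convert /disk:0 /allowFullOS
```

The /allowFullOS switch is what permits running it from the desktop rather than from the recovery environment.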
    5 points
  4. I got connected this week with a provider over Bezeq fiber (the first and currently the only customer in Kiryat Malachi). The installation cost me 450. For now the option is their Be router, which I think is not worth it, because as a router it is one big trash can! It would have been nicer if it were possible to get a fiber-to-RJ45 adapter from the infrastructure, so you could privately connect a router from another company instead of the Be router, which is very limited in its options: no Bcyber cancellation, no DMZ setting, no changing the scope to a more normal one, no changing to Google DNS. Meanwhile I work with an external mesh solution, not Bezeq's. I called Bezeq technical support and was told there is one particular model they do work with, but I found nothing except instructions online. I attached a PDF with 3 models licensed by Bezeq; the last model on the list, they told me, is the one that works. The connection type of Bezeq's infrastructure is GPON ONT SC/APC (the connector in the green frame, not the blue!!!!). Mind you, if I connect something else, support told me they cannot help me if I have faults. Another thing: Bezeq is just getting started and their technical support has no answers on how this works, so if you buy a fiber adapter suitable for Bezeq's optical connection, you are on your own. I have already plowed through a few places on the internet regarding what is suitable and what is not. What I found online with delivery so far: two models of Ubiquiti UFiber (one of which has a POE option), one model from Huawei and another from ZTE. There is a particular TP-Link router with a suitable GPON connector, but no way to buy it online. gpon.pdf Update: Today I bought a TP-Link fiber adapter together with a VR600 router. I also connected two switches, so the entire existing network in the house runs at gigabit. For WiFi in the house I work with an external mesh (3 boxes). I set up what was needed in terms of opening ports and assigning IPs by MAC address, switched the scope to the one I have been used to since the TD-W9970 days, and everything works at its peak. Tomorrow I return their router to Bezeq, and salamat.
    5 points
  5. I'm in favor of open source, and it's great that AMD relies on it and not on proprietary tools. The problem is that it does not rely on it because it supports open source; it relies on it because it has no choice. NVIDIA has a highly developed software department, and the industry is familiar and experienced with its tools. AMD's alternatives are not ones it created. You mentioned ML. There are two options: NVIDIA's proprietary CUDA, or OpenCL, which is open source. AMD supports OpenCL, but not because it's better or anything. OpenCL is not AMD's - AMD neither created nor developed it - and NVIDIA supports it too. In fact, NVIDIA started with CUDA and OpenCL came much later, which means that, as I mentioned here before, the industry's tools work mainly with CUDA. And NVIDIA's software is very good; I would even dare say it is a software company no less than a hardware company. AMD, on the other hand, does not have this advantage. The same goes for Intel, by the way: Intel and NVIDIA help compiler developers and game/game-engine developers respectively. Guess whose hardware the results especially suit... AMD, to the best of my knowledge, does not do this (as for this last point I am not 100% sure). Intel also participates in decision making; AMD does not (as above). But there is still room for optimism, because even though Intel invests in the software and AMD does not, AMD managed to overtake Intel... or not. Let us not forget that although Intel optimizes for its processors, the architecture is the same, so AMD benefits as well - which is not true for the graphics cards. Edit: Not that it's a real measure, but I checked the number of repositories on GitHub. NVIDIA has 2, of which 214 forks (i.e. its 26). Intel 188, of which 715 forks (i.e. its 21). AMD has - guess! - 694, of which 22 forks. Its total 9. Intel has 13 times as many and NVIDIA 53.3846154 times. It gives some order of magnitude.
    5 points
  6. It is best to remove Driver Booster immediately; it only causes trouble.
    5 points
  7. Leave it, he's a professional. He understands more than you. In fact, he understands more than you about everything: economics, psychology, anything related to technology and computer science. Too bad he never learned to write forum paragraphs without line breaks in random places that make reading his posts one big nightmare - and that is before we address the bizarre tendency to ramble, repeat himself, and attempt high language (which would be a little more impressive were it not accompanied by lots of spelling mistakes). In addition, one must note the obsessive editing of old posts, so that it is impossible to respond to them without looking like an idiot when the paragraph you referred to has been deleted (tip: always quote it). And bottom line, go steal games, because that's what he does with all his many years of experience in the industry.
    5 points
  8. Oh, lie down, you old, crazy piece of shit. Do not throw your complexes at me. Go buy a couple of second-hand fans and clean them. If you think I have any sentiment for one of these companies just because I purchased a product from one of them, you are completely off the mark. I will not get to it because: 1. Corona. 2. I have better things to do with my time. 3. I will lose no sleep even if my card turns out not to be the fastest in the universe. My God, how full of hot air you are.
    4 points
  9. What is the deal with these messages? What do they contribute? Card-box porn? Not clear to me...
    4 points
  10. If so, at a good time, we have finished the first round, in which we measure the eight titles we originally tested on the 6800xt, so that we have a baseline for reference and comparison against each other. We are very glad we insisted on finding a good 3080 card and did not settle for the first card we opened, an EVGA, because in the second card, an ASUS, we caught such a specimen that every internet comparison to be found online considers it the fastest card available = a golden sample of the Ampere architecture. Moreover, it helps us give a fair estimate of the 3090's capabilities as well, to a considerable extent, because in some situations it is faster, and that was our overarching goal originally. In the absence of a 3090 at hand, we got as close as possible under the circumstances, and no one is happier than we are. We very much hope the members appreciate lanzar's hard work, lasting hours and days, as well as the considerable financial outlay invested in purchasing the cards. Understand, this is many tens of thousands of shekels, and for that alone lanzar deserves one huge thank-you; it is much more than what journalists in the field do with their own money. All measurements in this first round are 1440P; later we will also upload 4K, possibly tonight (if we have the strength). ** lanzar, this Asus 3080 - put a sticker on it and mark it. It passes to me when you finish with it, and you do not pass it to any other customer or friend. My name is written on it, dir balak. Note that in the current post we will attach only the latest measurements of the 3080, since all the 6800xt measurements are in this thread anyway, at the very beginning, on pages 1-6. Whoever wants can go there and take a look; there is no point in uploading them again, and it's a shame about the space it would occupy in the forum. After presenting each 3080 measurement, we will put up the reference table for comparison, so we can follow the improvement we achieved on the 3080 relative to the card's default at stock frequency, as measured by Lior / TPU. In the Gears5 title our 3080 puts out 146.2FPS, compared to the 170.8FPS the 6800xt put out; the 6800xt is thus 16.8% faster in this test. This is the TPU measurement in the same title that we chose to reference in the current thread. In the Horizon title our 3080 puts out 136FPS, compared to the 142FPS the 6800xt put out; the 6800xt is thus 4.4% faster in this test. This is Lior's measurement in the same title. Note that our overclocked 3080 overtakes Asus's 3090 OC version; with us it produced 136FPS in this title. In the Red Dead Redemption title our 3080 produces 105.6FPS, compared to the 116FPS produced by the 6800xt; the 6800xt is thus 9.8% faster in this test. This is Lior's measurement in the same title. Note that our overclocked 3080 overtakes Asus's 3090 OC version by an inconceivable gap: with us it produced 105.6FPS while the 3090 managed barely 82.6. Look how much the overclock and the Ryzen 5000 platform contribute - and maybe the latest drivers that have come out since have improved something as well, who knows. In the Hitman2 title our 3080 produces 131.5FPS, compared to the 146.2FPS produced by the 6800xt;
the 6800xt is thus 11.1% faster in this test. In the Hitman title we used TPU's work as the reference, which was easier to compare against because the running parameters could be copied identically, and here too our 3080 flies nicely and opens an impressive gap over the stock 3080: we have 131.5 versus its 122.8 at stock. In the Assassin's Creed Odyssey title our 3080 produces 91FPS, compared to the 95FPS produced by the 6800xt; the 6800xt is thus 4.4% faster in this test. Here, referencing Lior's work, our overclocked 3080 overtakes the Asus 3090 OC with 91FPS. In the Borderlands title our 3080 puts out 110FPS, compared to the 137FPS produced by the 6800xt. (We will mention that the second 6800xt card, our faster one, produced 138 in the sanity test, but we will of course use 137 as the representative result.) The 6800xt is thus 24.5% faster in this test; and here is Lior's measurement as the comparison reference. We improved, but not dramatically in this case. In the Metro title our 3080 produces 109.93FPS, compared to the 118.4FPS produced by the 6800xt; the 6800xt is thus 7.7% faster in this test. Here is the comparison against Lior's measurements: with the overclocked 3080 we managed to match the performance of the Asus 3090 OC. In the Tomb Raider title our 3080 produces 168FPS, compared to the 177FPS produced by the 6800xt; the 6800xt is thus 4.3% faster in this test. These are Lior's measurements; here too our overclocked 3080 opened a nice gap over Asus's 3090 OC version. So these are the eight measurements in total, which show one of the better 3080 cards to be found. Not only that: comparing the results to the reference indices from Lior and TPU, we improved significantly in most cases, mainly because we overclocked our 3080 quite a bit, and also because our platform is a reasonably optimized Ryzen 5000, not the outdated Intel platforms that Lior and TPU measured on, which particularly hurt performance at 1440P, where the platform and the Intel processor form a bottleneck. We are very pleased with what we learned today; the Ryzen 5000 and the move to the 3080 card contribute a great deal to the subject. Now we await the 4K measurements, and we will see what happens there compared to the reference indices.
    4 points
  11. What a delusional post. You did not buy the components as a prebuilt computer, so the warranty applies to each component separately. In principle he did you a favor by finding the source of the problem for you; he could have told you to break your head identifying the defective component and bring only it in for testing. A full monetary credit for the component, when no identical or equivalent component exists, is above and beyond on the store's part. Asking for a monetary credit for all the components you bought separately, after using them, is a ridiculous and silly demand.
    4 points
  12. @zone glide I'll go a little nec on you here and tell you something from a place of "teaching you". The problem with your indecision stems from a lack of experience in the hardware world. You can always wait, and there is always something around the corner. Wait for the RTX4080 and you will get another nec post in the face in January 2022 saying that actually the 5080 and AMD's next product are the real thing and there will be a huge jump in performance. Or maybe you'll just wait for the 3080 with more VRAM, because it's terribly important, since apparently we want to own a computer for two whole decades; just don't forget that by the time such a card exists, a 3080 Ti SuperDuper or some other invented refresh will already be just around the corner, so maybe you'll wait a bit longer. I remember you asking a year ago about upgrading your antique computer; I guess you did not upgrade then because you decided to wait for the next generation, and now you are again considering waiting for the next generation. Buy what you need and stop letting the people here make you anxious about what will come in two years. No matter what you choose to buy and when, within 6-12 months something will come along that makes your purchase look funny. I bought a 3060 Ti and effectively got a 2080 Super that someone else paid double for maybe a few months ago. And next year my card will also be embarrassed in the benchmarks by a card that costs half. That's how it is; that's the world of hardware. If you can't handle it, don't buy technology. I have never seen a person who spends so much time on buying hardware and worries so much about it.
    4 points
  13. You want to think about the future? Buy a 3080 Ti with 20GB for NIS 5,000. In two years you will get the RTX4060 for NIS 2,000, which gives the same performance. What am I getting at? A consumer who wants value for money does not buy these cards in advance. Want to insure yourself for the future? Take 2,000 shekels, put them aside, go buy an RTX3060 when it comes out, and in two years take the 2,000 you saved now and buy a 4060. You save money, you get similar performance in the future but with newer technology, and you can also sell your current card while it still has good market value and recoup part of the amount. You also never let yourself be left with an out-of-warranty video card in the system. So what's the conclusion? Why buy a 3080 Ti with 20GB? Because you want the performance it offers. Not tomorrow - today.
    4 points
  14. 16GB of memory is twice the gimmick that RT is - and of course there is DLSS, which is the main game changer at Nvidia today, not RT itself. The day you will actually need 16GB on your video card, even at 16K resolution, the card will already be such an antique in terms of computing power that it will long since have left your computer, replaced by a shiny new video card. An absolute gimmick, and AMD knows it too; the only reason it's 16GB at all, and not some more modest number like Nvidia's 10, is that AMD wanted to stick with the same 256-bit bus width. There is no doubt that Nvidia has won this generation in the meantime. Even for those unimpressed by RT, it still offers more here and now than what 16GB offers here and now - which is nothing at all. And DLSS has no answer from the red side; most of the serious games launching in the coming years will support it, and it is completely free performance. I would say these 16GB are useful outside gaming, for those who do some kind of content work with the card, but there too Nvidia has a clear and distinct advantage for its users in most areas because of much better software and features. A situation has arisen where in the areas where you might actually have wanted the memory (ML for example), no one wants an AMD card anyway.
    4 points
  15. Hello dear forum members! It's been a long time (6.5 years) since I installed Xeon processors on the LGA771 boards that work to this day (a reminder for those who want to remember - link). Today I come to you with another installation I did on the computer (against Intel's wishes): upgrading the 6700K I had until now to a 9900K on a Z170 board (not officially supported). It should be noted that I take no responsibility for this guide; everyone carries it out on their own initiative, understanding the dangers involved in the process. The complete installation guide is here: https://linustechtips.com/topic/1118475-guide-running-coffeelakerefresh-cpus-on-skykabylake-motherboards/. Note that there are some updates (due to the update of the CoffeeTime software to version 0.92) and additional findings I discovered while attempting to install the processor. Each motherboard requires a different installation, but for clarity I will mention here the highlights for Gigabyte boards.
1. First download the BIOS files and the CoffeeTime 0.92 software as shown in the manual, along with the FlashProgrammingTool (FPT) software. Place them in folders on drive C.
2. Prepare the BIOS using the software, with administrator privileges, as shown in the following image. Note that both the ME and the VBIOS + GOP must be updated so they can work with the processors (plus the appropriate microcodes - and make sure it is saved!). In the EXTRA tab I personally added a memory expansion to 128GB, whatever. Also important: under MAC1 add the MAC address of your Intel network card and keep the number for yourself (it can be found in the network card properties).
3. If your operating system is installed on NVMe, and the NVMe is in MBR format and not GPT, you will need to convert it to GPT before proceeding, using RECOVERY mode with the MBR2GPT command.
4. Great - you have the BIOS, the SSD is in the appropriate format, and you are ready to burn. Before that, please make sure you have a backup of the BIOS (before editing) on a disk-on-key, in case you need to roll back through the BIOS (I needed to). Now comes the step of burning the BIOS using FPT (on other boards the way may be different, such as using a hardware programmer), as shown in the guide (and sketched below). It is important not to disconnect the computer from the power supply at this point, otherwise the BIOS will be corrupted.
5. Once the burn is complete, you are ready to install the CPU. Shut down the computer via FPT only, using the fptw64 -greset command, and before the computer comes back up, turn it off and do not let it power on. Disconnect it from the power supply and remove the battery.
6. Before installing the processor, you will need to cover some of its pads, according to the board (Gigabyte requires covering the most), and bridge some of them (depending on the board). Here is the taping I did, using supplies from a kind AliExpress seller who also threw in tweezers.
7. Install the processor on the board. Mount the cooler loosely for a moment and make sure the board comes up (if you can; I used the board's diagnostic LEDs to verify it comes up) before you put everything together. If it does not come up, you may not have prepared the BIOS properly, or not applied the microcodes properly. If it comes up, shut the computer down.
8. Make sure the cooler is seated properly, turn on the computer, and go into the BIOS.
9. Great, you're almost done! In my case the CPU ran at too high a voltage (1.4 VCORE), which loaded the VRMs and even caused crashes. This is of course exceptional, and the VCORE should be lower. In the BIOS use Adaptive Vcore and drop it by at least 0.100V (in my case; play with it so the CPU does not get too much voltage) and check stability. Monitor the VRM and CPU temperatures using HWiNFO.
10. Enjoy the upgrade!
P.S. My old Mugen 2 (which once cooled a Q9300) cools this processor well (better than the 6700K, thanks to the CPU being soldered to the IHS, unlike the 6700K) and is silent. I get higher and more stable frame rates, my brother can encode movies at twice the speed, and I can rest easy for a few more years until I have to upgrade. The computer is stable after I arranged the voltages. It was very enjoyable and worth it! Just make sure you have a board with a VRM good enough for this, and be prepared for the complexity of the process. Too bad Intel did not let us simply install the CPU the normal way (because, as you can see, it works great), but that is what modding is for. I would be happy to help with any question or request!
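For orientation, the burning step boils down to FPT invocations of this shape (flag names per Intel's Flash Programming Tool; treat this as a sketch and verify against the linked guide before flashing anything):

```
:: dump the current BIOS image as a backup
fptw64 -d backup.bin
:: write the modified image to the BIOS region
fptw64 -f modded.bin -bios
:: global reset, as the guide specifies
fptw64 -greset
```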
    4 points
  16. I am thinking of devoting some time to researching VRAM occupancy, VRAM caching in modern game engines, and why the conventional thinking about VRAM occupancy can be misleading. Not sure it would be particularly popular, but I wonder how much demand there is.
    4 points
  17. If @captaincaveman has already commented (and rightly so) to @aviv00, then I will allow myself to comment here that the expression is "on the contrary" and not "to a great extent". The origin of the word is Aramaic, for those who are wondering. This is not the first time, which is why I comment...
    4 points
  18. Today, AMD's Ryzen 5000 processors are officially released to the world, which is a historic turning point for the processor manufacturer. We took a look at the Ryzen 9 5950X and were amazed at its tremendous power - all the numbers here
    4 points
  19. My slowest core reaches 4750MHz and the fastest exceeds 5GHz. Crazy.
    4 points
  20. Please, let's not get into personal attacks here. In my opinion the discussion is interesting and important. If @napoleon45 thinks otherwise, that is his right; he is welcome not to come in here. If you want to explain why @nec_000 is wrong, that is also welcome. Just please give a scientific and technical explanation, the same way @nec_000 scientifically explained why this is a novelty and a breakthrough.
    4 points
  21. I informed my wife that I am coming home early tonight, that she should set me up with fresh watermelon and Bulgarian cheese in the family room by the TV, nuts next to it, a cold bottle of Cola Zero, and a towel at the ready, and that she should take the kids out somewhere... Question: why? I told her: the final against Maccabi. They had better not dare disturb me.
    4 points
  22. Delusional. A person shows up, registers to the forum, and craps on a series as his first and only action on the site. Are you that bored?
    4 points
  23. It is not possible to split every discussion into a thousand sub-discussions. Like it or not, this is a discussion about reasons for buying or not buying. It would be better for the moderators to simply merge the threads.
    4 points
  24. The specification you received is reasonable-ish. The case is really expensive relative to what it gives; the power supply is also super expensive and no better than supplies that cost significantly less; the storage is really unclear - a tiny 256GB disk for 220 shekels?! - instead of a 1TB SSD at ~500 that would at least let you install games on it too. In addition, the cooling for the super-ultra-expensive processor is no better than cooling that costs hundreds of shekels less (and that is even ignoring the fact that the 5600X is a super efficient, cool-running processor that does not need cooling above 150 shekels). If you plan to stay with the computer for many years I would also upgrade to a 5800X or switch to a 10850K. The screen recommended to you is worth replacing; I would go for the Dell S2721DGF or Gigabyte M27Q. I would go for something like this: https://tms.co.il/246215 and at most add memory in the future if you see the need. I just wanted to add that KSP had one until very recently at ~1,700 NIS; the one I put here is about as good, costs less, and is in stock. If you prefer AMD, replace the board and processor accordingly.
    3 points
  25. Go on, turn on ray tracing and enjoy the slide show.
    3 points
  26. Strange that I came across this discussion only now, but better late than never. The customer bought several parts; one died; the official importer no longer stocks it. We offered a full cash credit for the faulty product and ran into a very long stretch of telephone quarrels. We had no solution he liked. His board was exotic and uncommon, and once Asus's official importer has no solution, all we could do was give him a credit with us or return his money for the failed part. We have a specific policy for products that go out of circulation: refunding the amount paid for them. We encounter this mostly after several technological generations have passed and production of various products has ceased, and so it is with the board in his hands. If a few days had passed from the day of purchase to the day of death, we might have behaved differently; but when you sell hundreds of computers a month, with parts sometimes dying, it is not possible to refund a customer's entire purchase, only the price of the damaged/defective part. Unfortunately I do not have recordings of these conversations, but a lot of bad blood flowed in discussing what the customer deserves in such a case, and to our great regret he did not accept our position. I was happy to read the variety of comments here from people who did not even see what the problem is with our decision. As for the parking problems at our place, there is no reason to actually come in to shop: we can take a defective product to the lab directly from your car, and also return a product to the car, both for repairs and for new orders. It is written clearly and prominently on our contact page. Happy holiday.
    3 points
  27. AMD wipes the floor with Intel's product, plain and simple. Those with a keen eye will notice improvements not only in performance but in a very important area: security and encryption of all the virtualization layers, a solution to which Intel has no answer. AMD will pick up a huge amount of sales; cyber is a very sensitive issue today, and whoever has an answer on it wins the deal:
    3 points
  28. The problem, as I understand it, splits into two vectors. At Nvidia's level, this is competition with the miners, who - what can you do - are willing to pay much more than a domestic/private consumer, because for them it pays off as a business matter: they measure the card as a production-line machine needed in their factory. Even if they pay double for the machine, it still turns them a profit. So they are pretty much killing this market for home consumers, who are not willing or able to pay double for the product. At AMD's level, its products this generation are less attractive for mining (thankfully); the problem there is different, on the supply side. AMD managed to get an allocation of about 150 thousand silicon wafers per quarter from TSMC, which is already very impressive, but nothing beyond that, even though it could have consumed double if only it were allocated. The rest of the 7nm production goes to TSMC's other customers, who also need this production infrastructure for their products. From the allocation of 150 thousand wafers per quarter that AMD received, 120 thousand are reserved for making Microsoft and Sony console chips - an 80% share of the pie, as stated - a result of AMD's contractual obligations to supply about 9 million console processors per quarter, an annual rate of 36 million, which is the rate at which consoles sell globally in a normal year. In the Corona year even this is not enough, and there are shortages of consoles on the shelves. That leaves only a small share, just 20% (30 thousand wafers), for everything else AMD makes, including processors and video cards, for all segments together, i.e. servers and laptops as well. As a result, the volume of video cards and Ryzen 5000 processors for the home/private consumer is very small, probably on the order of only a tenth of what it used to be. And that is the result. Unfortunately this is not going to change in the foreseeable future: AMD's commitment to keep supplying console chips continues in parallel and does not end, and TSMC's 7nm output is not going to grow any further, as they have moved on to the 5nm generation. Whoever wants additional production share at TSMC must order an allocation on the 5nm line, at a sky-high price, and that's how it is. Except that right now Apple is drinking the entire 5nm capacity and letting practically no one else take anything; every new line that starts producing 5nm, Apple takes at full capacity. It is willing to pay the most money of all, and TSMC naturally chooses the most serious/big/heavy customer who pays the most, and that is Apple. This is one of the reasons Apple products are expensive, by the way - not the only one, but one of many reasons: its customers pay a premium to ride the newest lithography in the industry. The iPhone is a premium product and its A14 processor is manufactured on the newest lithography there is today = 5nm. In the world's chipmaking duopoly, Samsung's situation is quite similar: it has the older 8nm, which NVIDIA managed to grab this year, and it has the newest and most expensive 5nm, which only huge, rich customers like Qualcomm can afford. So AMD and NVIDIA were not left with too many choices. We will probably be riding this 7/8nm infrastructure for an entire year, and the shortages will continue to dominate the area as long as this mining boom is not over. At least on the Nvidia side there is hope that if the mining ends, the problem will be solved.
On the AMD side there is no optimistic horizon in the coming year, until it can allocate itself something from a 5nm slice at TSMC. And even when it does get a share, it will be expensive, because everyone will fight for an allocation at TSMC on this lithography. Therefore the next generation AMD is planning on 5nm, including Ryzen 6000 and RDNA3, will probably also suffer significant shortages, just as its current products suffer today on 7nm. We are in an age with only two advanced lithography providers, TSMC and Samsung, and the whole industry and its sister want them. So supply is limited, demand is sky-high (a combination of mining + a newly launched console generation + Corona, all three joining together at the same time), and we are in trouble.
    3 points
  29. Have you ever seen a man courted by two women throw one of them out because the other said she was here first? No! It's convenient for him, and that's clear to everyone. The executives throw us sentences like "We really care about gamers, we're going to do everything we can to raise supplies." Learn how the world works, accept it, and most importantly stop thinking that your discussions in the forums are making the money machine stop moving.
    3 points
  30. It's still a good card if you buy it for its main feature - burning cockroaches that have crawled into your case.
    3 points
  31. I did not say it was a bad thing. I gave you some perspective in order to help, because I see you show up in a lot of threads asking different people similar questions. If that's the reaction, then I'll really stop trying to help.
    3 points
  32. I don't understand this "Fine Wine" narrative. I don't know which communities you hang out in; in the ones I hang out in it is mostly a running network joke. You could call it "drivers that get better with time"... You could also call it bad drivers on day 1 that leave performance on the table and take AMD a year or more to fix, while NVIDIA's drivers are good from launch. How did the VEGA 56 / VEGA 64 cards age, by the way? And what are Radeon VII owners doing today with their formidable 16GB - has anyone asked how they're doing? Or have we forgotten these cards exist, like AMD sort of has. Matured less like wine, more like milk. Indeed, they have improved over the years compared to the "neglect" of NVIDIA, whose 4-year-old GTX 1000 series still occupies the vast majority of the market share and offers excellent performance.
    3 points
  33. Well, come on. Let's start with threads and locks; it's simpler.

Threads serve a very simple function: running code in parallel. Take Word as an example. The word processor saves the document every ten minutes (configurable). Now, suppose I have a huge document with lots of graphics. How huge? I have had presentations that took a few minutes to save. I really do not want Word to suddenly hang for two minutes in the middle of my work, just after I came up with some cool idea (because, after all, Murphy's Law), in a desperate attempt to save the document in case of a power outage or who knows what. If such a thing were to happen to me... I would not envy Microsoft's programmers. In any case, this is why Word does its autosaving in a separate thread. That way I will not get stuck (or rather, I will not feel stuck - more on that soon), but the document will be saved in case Murphy decides to show his power.

Now, there are two reasons to run code in parallel. Actually three, but I'll talk about the third later. The first: I want two actions to be performed "as if in parallel". That is the Word example, and not surprisingly, many actions that could get the UI stuck are done in a separate thread. In fact, any action that can take more than a few milliseconds should be performed in a separate thread.

The second: a heavy, algorithmic calculation. Let's say I need to check whether a very large number is prime, and I want to do it the naive way: checking every number up to the root to see whether it divides it. Why this bad method? Because I like to annoy my customers. The problem is, I do like annoying the customers, but not the boss (he can still fire me). So I split the work: I spawn, say, 4 threads, and each one checks a different quarter of the candidate numbers. Will that help or maybe hurt? Depends, of course - the eternal answer. If the CPU has enough cores, the operating system will happily give each thread its own core, and then things will indeed be faster. But if not, the operating system will still oblige: it will run several threads on one core, switching between them every so often - the so-called context switch. That is how things appear to run in parallel. This helps the first problem (Word's), but not the second: the context switches slow the work down, because the processor now has extra bookkeeping to do. So for this kind of problem you usually check how many cores the processor has and spawn that number of threads (see the sketch below). There are programming languages / runtimes that do this automatically, like Go. Note that a few dozen other programs usually run on the computer, each also wanting CPU time, which can affect the matter. All fine so far.

So what is a lock? Well, it's related to the question of the waiter (not kidding - there is a problem about this, Dijkstra's dining philosophers, with a waiter). Let's take a simpler example - but first, the core-count sketch promised above.
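A minimal sketch, under my own assumptions (Environment.ProcessorCount and Task.Run are the real .NET APIs; the helper HasDivisorInRange, the sample number, and the slicing are mine), of spawning one worker per logical core for the naive prime check:

using System;
using System.Threading.Tasks;

public static class Program
{
    // Naive check: does any candidate in [from, to) divide n?
    private static bool HasDivisorInRange(long n, long from, long to)
    {
        for (long d = from; d < to; d++)
        {
            if (n % d == 0)
            {
                return true;
            }
        }
        return false;
    }

    public static void Main()
    {
        long n = 1_000_000_007;                   // hypothetical number to test
        long root = (long)Math.Sqrt(n);           // candidates run from 2 up to the root
        int workers = Environment.ProcessorCount; // one worker per logical core
        long slice = (root - 1) / workers + 1;    // size of each worker's range

        var tasks = new Task<bool>[workers];
        for (int i = 0; i < workers; i++)
        {
            long from = 2 + i * slice;
            long to = Math.Min(root + 1, from + slice);
            tasks[i] = Task.Run(() => HasDivisorInRange(n, from, to));
        }
        Task.WaitAll(tasks);

        bool composite = Array.Exists(tasks, t => t.Result);
        Console.WriteLine(n + (composite ? " is composite" : " is prime"));
    }
}

Note that each task gets its own contiguous slice of candidate divisors, so the workers never write to shared state and no lock is needed yet. That is exactly what changes in the example that follows.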
Now run the following C# code (before that, make sure you understand it; in case you did not know, Task.Run runs a new thread (inaccurate, but not critical here) and Task.WaitAll blocks the current thread until all the specified tasks are finished):

using System;
using System.Threading.Tasks;

public static class Program
{
    private static int _Counter = 0;

    public static void DoWork()
    {
        for (int i = 0; i < 1_000_000; i++)
        {
            _Counter++;
        }
    }

    public static void Main()
    {
        Task t1 = Task.Run(DoWork);
        Task t2 = Task.Run(DoWork);
        Task.WaitAll(t1, t2);
        Console.WriteLine("{0:n0}", _Counter);
    }
}

I made you a dotnetfiddle here. Enter the link and Run. I'm waiting.
...
Good. Did you run it? Run it again. And again. And again. Ten times. Pay attention to the result!
...
So what happened here? Not only is the result not 2 million as expected, it changes from run to run! What is going on?

To really understand what happened, one has to go into the assembly. Do not worry, I will not abuse you; I'll leave the assembly for another time. Instead I will try to explain in words (an intentionally simplified explanation, do not preach at me about it, I know): when you do ++, the CPU cannot really reach into memory and add one. It is more complicated than that. First, the value has to be loaded from memory. Then, 1 is added. Finally, the new value is written back. Now, if you do this in a loop, that's a real waste - why load the value every time? Better to load it once at the beginning and store it once at the end.

Let's picture it. Thread 1 starts running. It loads _Counter from memory. Adds 1. Adds 1. Adds 1. Adds 1. ... Good. Now it's thread 2's turn. It starts running. It loads _Counter. Let's say that at this point thread 1 has already done 50 thousand increments, so _Counter contains 50 thousand. Adds 1. Adds 1. Adds 1. ... Back to thread 1, which adds 1 - to its own copy of the counter, which is still 50 thousand behind! Adds 1. Adds 1. ... And so it goes. If thread 1 finishes first, thread 2's write lands last and the result will be one million and fifty thousand. If thread 2 finishes first, the result will be one million, since thread 1, which started from zero, writes last.

Now you can also understand why the value changed between runs: it depends on which core each thread runs on. Maybe by chance thread 2 runs on a faster core, so it finishes first and loses the race? (Yes, yes, first. That is not a mistake. Read it again and understand.) It depends on the temperature of the computer, the speed of the cores, whether the antivirus is running a scan right now, and a thousand and one other things. A race. That is exactly what is going on here, and it is also the source of the term: race condition.

Do not underestimate the matter. The Bell telephone exchange (if I remember correctly - I could not find it on Google) collapsed due to an unexpected race condition. Between 1985 and 1987 a number of people died because of a race condition in a radiation therapy machine (source).

How do you prevent this? We need some way to tell the processor: "Honey, do not rush anywhere. If anyone else is busy here right now, calm down. Wait for them to finish." Well, thankfully, there is such a way. A lot of them, to be honest. We will deal with one: locking. The idea is this: we create an object usually called a mutex (short for mutual exclusion) or some similar name. Before accessing the shared resource (_Counter, in our case), we "lock" the mutex, and release it when we are done. If we try to lock a mutex that is already locked by another thread, our thread is blocked until the other one releases its lock. This does not consume CPU time: it is implemented at the operating-system level and, to some extent, even in the CPU.
Here it is in C#:

using System;
using System.Threading;
using System.Threading.Tasks;

public static class Program
{
    private static int _Counter = 0;
    private static Mutex _Mutex = new Mutex();

    public static void DoWork()
    {
        for (int i = 0; i < 1_000_000; i++)
        {
            _Mutex.WaitOne();
            _Counter++;
            _Mutex.ReleaseMutex();
        }
    }

    public static void Main()
    {
        Task t1 = Task.Run(DoWork);
        Task t2 = Task.Run(DoWork);
        Task.WaitAll(t1, t2);
        Console.WriteLine("{0:n0}", _Counter);
    }
}

And as usual there is a link here. C# has a tool that makes this more comfortable: the lock statement:

using System;
using System.Threading.Tasks;

public static class Program
{
    private static int _Counter = 0;
    private static object _CounterLock = new object();

    public static void DoWork()
    {
        for (int i = 0; i < 1_000_000; i++)
        {
            lock (_CounterLock)
            {
                _Counter++;
            }
        }
    }

    public static void Main()
    {
        Task t1 = Task.Run(DoWork);
        Task t2 = Task.Run(DoWork);
        Task.WaitAll(t1, t2);
        Console.WriteLine("{0:n0}", _Counter);
    }
}

And the link: https://dotnetfiddle.net/UJENnV. The lock statement translates to the following code:

using System;
using System.Threading;
using System.Threading.Tasks;

public static class Program
{
    private static int _Counter = 0;
    private static object _CounterLock = new object();

    public static void DoWork()
    {
        for (int i = 0; i < 1_000_000; i++)
        {
            bool lockWasTaken = false;
            try
            {
                Monitor.Enter(_CounterLock, ref lockWasTaken);
                _Counter++;
            }
            finally
            {
                if (lockWasTaken)
                {
                    Monitor.Exit(_CounterLock);
                }
            }
        }
    }

    public static void Main()
    {
        Task t1 = Task.Run(DoWork);
        Task t2 = Task.Run(DoWork);
        Task.WaitAll(t1, t2);
        Console.WriteLine("{0:n0}", _Counter);
    }
}

https://dotnetfiddle.net/L7Z5QW. From this you can understand why I needed an object separate from _Counter to lock on: a reference type is needed, because otherwise the value would be copied and we would not be able to acquire a lock on the shared thing.

And last but not least, for common cases like arithmetic operations we have the Interlocked class:

using System;
using System.Threading;
using System.Threading.Tasks;

public static class Program
{
    private static int _Counter = 0;

    public static void DoWork()
    {
        for (int i = 0; i < 1_000_000; i++)
        {
            Interlocked.Increment(ref _Counter);
        }
    }

    public static void Main()
    {
        Task t1 = Task.Run(DoWork);
        Task t2 = Task.Run(DoWork);
        Task.WaitAll(t1, t2);
        Console.WriteLine("{0:n0}", _Counter);
    }
}

https://dotnetfiddle.net/Ks6Tt1. It is also faster, because locking is slow, while Interlocked translates to a CPU-level instruction (at least on x86). Important note: most classes in .NET, including the collection classes, are not thread-safe. For collections that are, we have System.Collections.Concurrent (quick sketch in the P.S. below). Well, enough for now. I wrote a lot. About async/await below...
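P.S. - the promised sketch of that last note, a minimal example of the same counter exercise using one of those concurrent collections. ConcurrentDictionary and its AddOrUpdate method are the real .NET APIs; the "clicks" key and the scenario are just my illustration:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class Program
{
    // The dictionary does its own fine-grained locking internally,
    // so no explicit Mutex/lock is needed around updates.
    private static ConcurrentDictionary<string, int> _Hits = new ConcurrentDictionary<string, int>();

    public static void DoWork()
    {
        for (int i = 0; i < 1_000_000; i++)
        {
            // Insert 1 if the key is absent; otherwise retry the
            // update delegate until it applies atomically.
            _Hits.AddOrUpdate("clicks", 1, (key, value) => value + 1);
        }
    }

    public static void Main()
    {
        Task t1 = Task.Run(DoWork);
        Task t2 = Task.Run(DoWork);
        Task.WaitAll(t1, t2);
        Console.WriteLine("{0:n0}", _Hits["clicks"]); // reliably 2,000,000
    }
}

For a single counter this will not beat Interlocked, but it generalizes to keyed data that several threads update at once.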
    3 points
  34. I understand what your point is; it is simply incorrect, because the comparison you make is incorrect. If one insists on comparing consoles to a computer, a more logical comparison would be 16GB on consoles versus 16 + 8 = 24GB on a computer. Look at the previous generation of consoles: both already had 8GB when they came out in 2013. Seven years later, people are still getting along without much trouble with video cards with 4-6GB of memory. If you want to worry about the consoles, I would worry far more about their processing power, not a marginal matter like memory. They have an RDNA2-based graphics processor that, from what can be seen so far, is on the order of a 2080 Super in processing power. Meaning: the 3070 is only slightly faster, and the 3060 12GB will be slower. The whole current generation (even the 3080 is not that much faster than what is in there) will start to feel a bit old as soon as truly next-gen games come out - games developed for this console generation without backward support for PS4 / XBOX ONE. And no amount of VRAM will help with that. My rough estimate: a year and a half away.
    3 points
  35. Even if there were only one game in the whole world that currently supports DLSS and RT, that would still be one more game where you get some benefit than the number of games that benefit from 16GB of memory. So there is no contradiction here. @zone glide There are some things you need to understand about DLSS: a. It is not something born yesterday; NVIDIA has been working on it for years, and it took a long time until it got to where it is today. b. DLSS is built on artificial intelligence and machine-learning algorithms, a field in which NVIDIA is huge (this is not a company that only deals with giving you FPS in games) and invests a lot of money in research and development. This is not some dumb upscaling technique that has always existed. c. DLSS runs on dedicated hardware on the card - NVIDIA's tensor cores, which are designed specifically for deep learning. AMD has no equivalent. When you see all this, it is not inconceivable that it will take AMD a similar number of years to put up a worthy parallel to DLSS - the same number of years it took DLSS itself to brew to the point where it is today.
    3 points
  36. Tomorrow at 16:00 Israel time - some pictures, because that is the date when the "unboxing" embargo lifts. Exactly two days later, at 16:00 on Wednesday - reviews with performance tests. Little Big Navi will be there, Big Big Navi will be there too. The very biggest Navi gets its own launch later.
    3 points
  37. Every time, the same grievance about launch-day prices. It is well known that one has to wait 2-3 months until the price (derived from stock) stabilizes. Until then, they will charge whatever price they want, and whoever finds it important enough will pay. Don't like it - don't pay.
    3 points
  38. 3 points
  39. Hi, a month ago I asked who would be interested in a review of the effectiveness of this technology at the practical level. I will say that for me, the reason for using an SSD cache is so I can store all my games in one place. That requires at least 4TB, and the cheapest SSD of that size costs around 2,000 NIS, so I wanted a standard HDD for storage, but without the high loading times - as close to SSD speed as possible. Before I publish the only benchmark I ran (CrystalDiskMark), I will mention that I do not use StoreMI even though I am on an AMD platform, because I decided to try SSD cache on top of RAID 0, and neither Intel's platform nor AMD's supports that combination. If you want more details on how I did it, etc., happily. I pulled the drives from an old 4TB NAS that served as a mirror, and in any case most of my files are cloud-backed games. So, on to the pictures: above you can see my baseline, which is a standard HDD, and above that the speed of two HDD drives in RAID 0 without SSD cache. And here is the odd thing: sometimes SEQ1M reaches 600 and sometimes it settles around 400-500; it is not clear to me what causes this oscillation. Game loading times were filmed, but I have no knowledge of video editing; I tried... I have the recordings, but I do not really have the patience to learn editing. Maybe someone here will want the clips and turn them into a YouTube video or something - it's not really burning for me. I will also note that the loading times are slightly better than on my regular SSD, which is relatively old, unlike the SSD used as cache. All I wanted was a 4TB game library for the price of an HDD plus a little more for a small SSD, one that loads quickly, and that goal was fully achieved. Hopefully someone will benefit from this discussion. If someone suddenly comes up with an idea for a benchmark, write to me here and I will try to run it. And if anyone is wondering about my specification, it is listed in my profile; even though I wrote a lot here, I simply did not feel the need to repeat it.
    3 points
  40. When Intel wants a piece of AMD - Intel's CEO, caught behind the scenes:
    3 points
  41. I believe they will only get attention if they arrive in better shape than NVIDIA's.
    3 points
  42. Even if we start from the assumption that the performance increase really is that big, however impressive it is, what matters in the end is the cost-benefit ratio. If I compare the current Ryzen generation against Intel in gaming, for example, then Intel has a pretty big advantage in FHD with a 2080 Ti, but at 2K the advantage is already quite negligible. I assume a significant performance improvement in Ryzen will translate, at best, into a significant advantage in FHD with a 2080 Ti and a negligible advantage at 2K. Since no one buys a 2080 Ti or 3080 for FHD, Ryzen's advantage will not be felt, just as Intel's advantage is not felt today. In my opinion the price increases really take their toll on this launch, and that is why I am not excited about it. What will happen in the end is that for the average user it will be better to buy a cheaper Intel processor than a Ryzen, and in general the price level of mid-range processors will go up.
    3 points
  43. I did not claim that if you play against Sherwood - you with a high-end computer and he with a 386 from yesteryear - you will beat him because yours is stronger. The talent aspect exists and is primary, but it is unrelated to the debate over whether or not latency is felt. Factually, it is. And I am tired of dragging the issue out; let the reader decide. If I may use your own parables: buy a powerful computer to compete for the same reason that if you send your son to a football team, you do not send him in flip-flops or sandals, trusting that talent will express itself on its own, but with good quality cleats. And you do not send your son to learn ping pong with a racket from the everything-for-a-shekel store, like the Brit above, so that he can learn on his own. And you do not go racing in a standard Fiesta just because you already know how to shift gears nicely and drop the clutch coming out of a turn, because "the main thing is talent". Talent is paramount, but it does not grant you license to give up good, proper tools. There are those for whom gaming is a pastime, who will buy a 2600K-based computer from AliExpress and enjoy it - legitimate. But someone for whom it is a sport and a competitive game (and competitive does not mean winning trophies on the US West Coast, but playing and competing systematically in his free time and reaching a good level of performance - in the pros' terms, a high K/D, between 2 and 3) is willing to invest a lot of money to avoid the situations shown in the videos above: a router that allows selecting nearby servers (four-digit price), a subscription to ExitLag with a monthly payment for a VPN that automatically picks servers for low ping, a suitable video card and screen, a mouse with 1ms response time, and so on. To claim that you will play at 30 FPS and surely win because it's all talent is ignorance. Talent is the most dominant component of success, but not everything starts and ends there - with all due respect to the ping pong king who switched to coaching. Play on a system with a ping of 250-300 against someone with 50, and you will throw your keyboard at the wall in frustration. You write from the perspective of whatever it is you do in the defense / air force establishment; I write as a gamer who lives and breathes the not-at-all-niche mainstream of gaming as sport and competition. And I am done with that.
    3 points
  44. Here is a comparison of the new series against the outgoing one:
The 5950X costs $799, while the 3950X cost $749 (an increase of $50, about 6.7%).
The 5900X costs $549, while the 3900X cost $499 (an increase of $50, about 10%).
The 5800X costs $449, while the 3700X cost $329 (an increase of $120, about 36% - that's what I expected).
The 5600X costs $299, while the 3600 cost $199 (an increase of $100, about 50%!).

The cheap 6- and 8-core processors saw a significant increase of about $100-120. The most powerful processors saw a relatively minor increase of only $50, which is even smaller considering that these processors are more expensive in the first place, so $50 is a small percentage of their price. I take from this that AMD knows it holds a very strong card, and since most sales to home consumers are of 6/8-core parts anyway, that is where it is currently maximizing profitability. On the other hand, with the more expensive processors, which sell in smaller quantities, mostly to power users, they do not go wild, because without a tempting price it would be difficult to reach (presumably) high volumes, which is what they are after at the moment. Striking while the iron is hot - while Intel still has no competitive answer and the clock is ticking - is an opportunity to make money.

Due to the strong increase on the 6/8-core processors and the relatively small increase on the 12/16-core ones, a situation has arisen in which the price gap between the cheap and the expensive parts has narrowed, to the point that we get a kind of "law of increasing marginal value". This will make quite a few consumers wonder: maybe it is worth paying another X percent to get more than X percent more cores? Definitely a clever move designed to maximize revenue per sale this round. I liked it.

Notice: here sits a user who had not considered for a moment buying more than 8 cores. He does not need more than 8 in his home PC for games and desktop work. But go tell him: for another $100 - $549 instead of $449 - won't you take a 12-core processor? Very tempting. All with the aim of getting as many consumers as possible to make exactly that calculation and eye the more expensive processor even if they do not need it. That is another $100 in AMD's pocket, while its production cost gap is nearly zero - less than $20 of extra production cost for another $100 of revenue from the customer.
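As a quick worked example of that upsell logic, using the list prices above (the core counts are the well-known specs of these parts):

5800X: $449 / 8 cores ≈ $56 per core
5900X: $549 / 12 cores ≈ $46 per core

So stepping up from the 5800X means paying about 22% more money ($449 → $549) for 50% more cores (8 → 12) - exactly the squint-at-the-bigger-chip math described above.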
    3 points
  45. In general I do not like video articles and always prefer a written one. On the other hand, I think it is not respectful to respond in a discussion without watching the relevant video. I tried to watch it, but the guy's accent just makes me uncomfortable, so I could not get through to the end - my apologies for that in advance. I did manage to hear one sentence that caught my attention: "I collected the information from trusted and reputable sites." I think it is important to take all the information from the same place, for consistency, so even before watching the video I went to techpowerup and checked the launches of several cards. I am somewhat worried the guy may have picked his data tendentiously; since I did not watch the whole video, I may not have understood it properly. It is worth noting that in the past, factory-OC cards showed a more serious improvement than I see today; in at least one article I saw a 10% improvement over the NVIDIA reference. All the numbers I took are from launch-date reviews of NVIDIA reference and FE cards, which also means the drivers were not the most mature, the cards having just launched. As drivers mature, some of the gaps widen a bit; I assume even now the 3080's drivers have not fully matured. At first I looked only at FHD; from the 1080 (relative to the 980) I looked at QHD, and for the cards after the 1080 I looked at 4K. In fact, already with the 980 Ti the improvement at QHD and 4K is higher than at FHD. The improvement of the 3080 and 2080 over their predecessors at FHD is smaller in percentage terms than at 4K and QHD, and I chose not to comment on their FHD numbers. I did not look at a specific game but at the overall performance summary. For prices, I refer to the US MSRP.

The 280 cost $620 at the time (my starting point).
The 480 launched at $500 and gave a 56% improvement over the 280.
The 580 launched at $500 and gave a 15% improvement over the 480.
The 680 launched at $500 and gave a 23% improvement over the 580.
The 780 launched at $650 and gave a 27% improvement over the 680.
The 780 Ti (3GB) launched at $700 and gave a 17% improvement over the 780; its launch lowered the 780's price to $500.
The 980 (4GB) launched at $550, giving an 8% improvement over the 780 Ti and 31% over the 780.
The 980 Ti (6GB) launched at $650, giving a 22% improvement over the 980 and 32% over the 780 Ti; it lowered the 980's price to $500 as mentioned. Up to here I looked at FHD.
The 1080 launched at $600/700 depending on the version, giving a 66% improvement over the 980 and 37% over the 980 Ti. Here I looked at QHD.
The 1080 Ti launched at $700, giving a 28% improvement over the 1080 and 75% over the 980 Ti; it lowered the 1080's price to $500. QHD.
The 2080 launched at $700/800 depending on the version, giving a 45% improvement over the 1080 and 9% over the 1080 Ti. 4K from here on.
The 2080 Super launched at $700, giving a 7% improvement over the 2080 and 19% over the 1080 Ti. I could not find whether it lowered the 2080's official price.
The 3080 launched at $700 and gives a 67% improvement over the 2080 and 56% over the 2080 Super.

I did not refer to extreme cards like the 2080 Ti, 3090 and the Titans. I allow myself to guess that the cards that hit the market at a competitive price probably did so when there was relevant competition in the segment.
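Taking the QHD/4K numbers above at face value, one can also chain them to estimate the cumulative jump at the roughly constant $700 price point, say from the 980 to the 3080:

1.66 (980 → 1080) × 1.45 (1080 → 2080) × 1.67 (2080 → 3080) ≈ 4.0

So, by this data, roughly 4x the performance across three generations.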
I would love for someone to explain to me why I should care about all this information, and why I wasted all my time collecting and posting it.
    3 points
  46. You are a thief. The fact that you do good things at the same time does not change that fact. The poor "it's just copy number 1,000,001" argument can be applied to anything. Steal a car of which 1,000,000 were manufactured. Steal an iPhone of which 1,000,000 were made. Join the mafia and steal bread, because they baked 1,000,000 loaves and will not feel it if one goes missing. Even if it is the billionth copy, it is not your property. You are a thief. This is not socialism; this is theft. And here's the thing: I also contribute, I also volunteer, and I do it all without stealing. Zero.
    3 points