Lately - HWzone Forums


Popular Content

Showing content with the highest reputation since 27/01/20 in all areas

  1. The problem is that the basic figure your whole doctorate rests on actually contradicts your position. The video clearly shows that although the average frame rate is similar between the processors, the i3 frequently drops to single-digit FPS - and that is after they made every possible effort to minimize the CPU load. I'm not a big fan of FPS shooters, but from what I remember, if you run into some enemies and right then the computer stutters to an unplayable level, the computer is not doing its job. From what you can see here it drops to single digits just from walking around. Anyone aiming at 4K needs a very powerful video card, and it is best to pair it with a processor that keeps up all the time - not most of the time, and certainly not just a fraction of the time. These days there may still be room for debate between 6 and 8 cores. As for the broader concept: while office work on a computer falls under "basic consumption" (today even elementary school teachers are expected to print things, for example), gaming is a leisure culture, and leisure is a very different thing. You can watch a football match on TV, or you can fly to Spain and see the derby live. You can watch Cirque du Soleil videos on YouTube, or you can go to the show and pay a few hundred shekels - especially if you have to fly to Vegas to see it. You can buy flavored ice cream with the texture of cardboard mixed with milk and sugar, or you can pay up to NIS 100 per kilo at an ice cream parlor. A clever deck of cards (or two) can provide hundreds of hours of play alone and plenty more with a partner - so why buy a gaming computer at all? And in general, you need to connect with reality and adjust to your target audience: do you really think that people with such a basic misunderstanding of how to handle money are the ones sitting and reading your scrolls? You are preaching to the converted; the real (let's say, less sensible) target audience doesn't read more than two consecutive sentences. Some people know exactly how much money is in their wallet right now - that's their budget, and whatever fits it is a good deal for them. Some know they don't have money, but know how much they are willing to pay each month and for how long - that's their budget. Less wise are those who have no idea how much money they have; that is simply unwise behavior, and it's not certain a computer forum is what will save them, not certain they should be saved - maybe Dad sprays money out of his ears. But it is worth checking what they expect from the computer and trying to match them with a machine that meets those expectations - or sending them to buy a deck of cards.
    8 points
  2. I get where you're coming from. You're right that this really isn't 'theft' in the literal sense - you can't go and file a police complaint about it. But in our vernacular, 'theft' is when the sum charged crosses the moral threshold of what's worth paying, and most of us are not stupid consumers - even if you open your post with examples that try to paint us that way. Believe it or not, I'm fairly sure we're all reasonably rational people who know what theft is. Don't be surprised if more people in this thread decide to avoid buying locally precisely because you come at us with this passive-aggressive tone. 'Israel' and 'free economy' are not two words that go together; we are a country of monopolies and duopolies. There is a lot of rot here - we pay far more than Europeans and get only a little more for it. 'Communist' culture, 'extreme socialist' culture - excuse me? You talk as if someone is twisting the importers' arms (assuming you are one, because you sound like a shill even though I don't see a title under your username) and forcing them to sell to us at such-and-such prices. We will simply vote with our credit cards, and we will criticize the pricing policy in this country as much as we like, because there is no North Korean-style censor here preventing us from comparing prices with Europe and the US and understanding how badly we get squeezed in Israel. If I can sum up your message, it's roughly: don't you dare criticize their prices, you cheeky Israelis. When the cards come out we will see where it is most profitable to buy. What good does it do if you only climb down from the tree once the prices from abroad arrive? I'm in favor of supporting Israeli sellers and making life easier for myself along the way, but I will not roll over for the importers in this country.
    7 points
  3. Thanks so much to everyone for the support. I appreciate it, and the site team's decision, and I would be glad to come back. I hope and believe we can use this episode to keep helping the forum together, in a more respectful and tolerant atmosphere, because at the end of the day that is the main reason we are here.
    7 points
  4. A few days ago, after hours of tinkering with what looks like a waste of time on user-versus-user squabbles, we decided to eliminate the option of negative reputation on the site entirely. Every so often someone exploits the system to "hurt" and harass another user; from now on only neutral or positive reactions are available in the forum, so reputation can no longer be tampered with. The user's long history with the site and its staff is something the veterans are familiar with, but I see no point in discussing it publicly. We are aware of the criticism of how we make disciplinary decisions with users, and it is in our interest both that users get service when they need it and that the forum keeps moving and circulating. Personally, in recent months the volume of my comments and my participation as a member of the site team has dropped; part of that is due to a great deal of disciplinary work with users - there are discipline issues that are quite serious - and another part is a focus on hardware reviews, as some of you have probably noticed in the site's sections. We (the site team) try not to get dragged into politics, nor into showy protests, nor into private messages from users whose whole intent is to get another user banned - and that has gone in both directions in many situations. Napoleon's ban has been lifted. We know he could simply create a new account, as users have done before, but for the sake of the existing reputation this is understandable; if he wants to come back - he can. It is important to understand that the situation is a bit more complex than it appears: this is a user who is responsible for a significant share of the positive traffic for forum users, but on the other hand every negative situation turned into a disciplinary incident with twenty ricochets hitting other users. We (the site team) created general rules in order to enforce them, and the conflict around an active user who regularly helps but also regularly breaks the rules is very problematic. To many it can look as if that user has complete immunity from a ban, and that is a large part of why the decision to ban was understandable and justified. We do hope that this other side of the coin improves if the user decides to return. Everyone, without exception, will benefit from it.
    7 points
  5. Kfir strikes again (or for the hundredth time, I've lost count). Proving yet again that he is a kaka child whose place is in FXP.
    6 points
  6. Written by forum member nec_000, on 29.10.2020. ** The author grants the site owners permission to use the article below as they see fit, provided they credit the author when posting / using it. Here is an article about the groundbreaking new technology that was implemented for the first time in the new generation of video cards. In it we will understand what the technology is, why it entered the picture, what its greatness and genius are, what root problem in the graphics industry it came to solve, why it could only now be implemented for the first time (technologically), and why from now on everyone will adopt it. Background: we all heard yesterday, at the launch of AMD's new RX6000 series, about a new technology called Infinity Cache. We understood that it is something AMD implemented in the new product, but not exactly how it works, why, and what it does. Today we will all understand, and understand well. First, for simplicity: the word "infinity" is branding and has no technical meaning; we will call the mechanism by its simplest and most terminologically correct name - cache. The article is offered for the benefit of the members, to broaden their knowledge of the subject, at a level academic enough (but concise) to fit on one concentrated page, and to bring order so that the average enthusiast can understand the topic well. Historical background: decades ago the first video card was born, allowing a computer to display its output on a screen. At first this was only text, and for that it was enough to talk directly to the slow system RAM; nothing more was required. Later it became necessary to display not just simple text but more complex graphics, and that raised a difficulty: one of the heaviest resources consumed by graphics work is memory traffic - lots of it, and fast. We will explain why shortly. Over the years, therefore, graphics cards had to stop relying on the computer's generic RAM (which was too slow for the task) and start working with fast memory mounted on the video card itself, serving as a dedicated (and fast) buffer for the exclusive use of the graphics processor. It was faster, though much smaller, than the computer's RAM. It was called graphics memory, and later the name settled as VRAM. This gave the graphics processor (by now called a GPU) higher access speeds and wider bandwidth than standard system RAM could provide, and that bandwidth is what lets the GPU do effective image-processing work. Without it the GPU would be in starvation mode - suffocated and hungry for data. Over the years, Moore's Law has meant that roughly every 24 months (about two years) the processing power of graphics processors doubles. The doubling stems from a step change in lithography roughly once every two years, which allows twice as many transistors in the same chip area (plus a slight increase in operating frequency); together these deliver a doubling of performance (at least) every cycle of about two years.
The problem is that while the graphics chips increase their processing power at that fast, exponential rate, the memory bandwidth available to the graphics processor cannot keep up. Shrinking the lithography node each cycle unfortunately does not bring a matching doubling of memory bus width, only a doubling of memory capacity - and capacity is not the scarce resource in the equation before us. Let's see where this leads over the years: while the computational power obtainable from the graphics processor doubles every cycle of roughly two years, it has not been possible to provide a corresponding increase in memory bandwidth. A gap opens up over time - processing power grows at double the pace while bandwidth crawls along behind - one resource steadily pulling away from another that cannot keep up. Over the years the industry attacked the problem with the following techniques. First, repeated widening of the memory bus, in bits: from 64 bits to 128, then 256, and onward to 512... until hitting a glass ceiling that limits how many traces can be routed on a single card's PCB. It stops (more or less) at around 512 traces on a PCB board; beyond that it becomes too complex and expensive. A second method was to transmit more information with each clock beat. Initially memory worked as SDR - a single bit written or read per clock per pin. Then came DDR - one transfer on each clock edge. Eventually, in the GDDR5 generation, this grew to effectively four transfers per clock. Today, in the GDDR6X generation, the signal packs even more data into each beat using multi-level (PAM4) signalling, which is very hard to control reliably: several distinct voltage levels must be told apart in every beat, and they sit very close to one another, so identifying the correct value becomes ever harder. The task requires very delicate discrimination on the controller side, which leads to quite a few errors. There is a limit to what electrical signalling can do, and we are approaching that limit in huge strides; pushing further is becoming an exceptionally difficult challenge, bordering on impractical or not technically worthwhile. A third method, squeezed to the last drop over the past decade, is data compression - initially lossless, and more recently even lossy - all with the aim of milking every last bit of effective bandwidth out of the memory. The graphics processors keep progressing and soaring upward, while the memory fails to catch up at a satisfactory pace. All of the methods above - widening the bus in bits (up to 512), using complex signalling to transmit more bits per clock, and compressing data - have reached the limit of their capacity in modern times. No further meaningful progress can be made with them, and a new, groundbreaking method simply must be found to overcome the memory bandwidth limit and feed the hungry graphics processor.
Understand: without bandwidth that keeps rising to support the growing processing power, graphics work cannot move forward and we reach stagnation. The new method found: caching. The big problem with graphics work is, as mentioned, the enormous memory traffic it requires. The reason so much memory is needed is that the input is huge: the texture set is large and heavy, and it is what drinks most of the memory capacity. It is what gets placed in the card's buffer, known as VRAM. If the textures were stored in the computer's general memory (system RAM), then - as we understood from the explanation above - the graphics processor would have very slow access to them and would spend most of its time idle, waiting for data to arrive; in short, very low GPU utilization and poor performance. That is why VRAM was invented in the first place: to bring everything the graphics processor needs for its work close to it, so that it can work fast and not starve for data. It would be silly to have a powerful processor that constantly waits for information and is therefore held back. Since the memory consumed is large, and textures consume the vast majority of it, there was no way to produce a graphics processor with a cache on the die itself big enough to hold all the necessary information. What was good and suitable for the world of CPUs - tiny cache memories - is not suitable for graphics: very small cache capacities are enough for a CPU, that is its way of working, which differs from a graphics processor, where a cache must be huge to be relevant. Thus, for decades, embedding a cache in a graphics processor was neither possible nor practical. Only now, with lithography reaching 7nm for the first time, is transistor density sufficient to hold a large enough number of transistors on one chip, giving graphics chip makers the technical ability to embed a practically useful cache in the graphics processor. The world's first graphics processor - the technological pioneer that integrates a cache (in addition to VRAM) as an intermediate layer - is the Big Navi core, known by the code name Navi21. This core is about 500-plus mm² in size and contains a huge number of transistors, about 26 billion, of which roughly 6 billion (about a quarter of the chip's real estate) are allocated to 128MB of cache. 128 megabytes are 128 million bytes; a byte is 8 bits, so 128MB is about 1,024 megabits, or roughly 1.024 billion bits. The kind of on-die memory in question needs about 6 transistors to store one bit of information, so 1.024 billion bits need about 6 billion transistors to represent them - and that is the allocation of 6 billion transistors that must be carved out of the chip's real estate for a 128MB cache array.
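A quick sketch of that transistor arithmetic (assuming the commonly cited 6-transistor SRAM cell per bit; the exact cell design used in Navi21 is not public):

```python
# Rough transistor budget for a 128 MB on-die cache, assuming a classic
# 6-transistor (6T) SRAM cell per stored bit.
cache_bits = 128 * 1024 * 1024 * 8        # 128 MB -> ~1.07 billion bits
transistors_per_bit = 6                   # 6T SRAM cell (assumption)
cache_transistors = cache_bits * transistors_per_bit

navi21_total = 26e9                       # ~26 billion transistors in Navi21
print(f"cache transistors: {cache_transistors / 1e9:.1f} billion")   # ~6.4 billion
print(f"share of the die:  {cache_transistors / navi21_total:.0%}")  # ~25%
```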
Why is it only now possible, for the first time, to embed a cache in a graphics processor? If we go back just one lithographic generation, to 14/16nm, we see that the video cards based on it topped out at about 11 billion transistors (the largest chips among them). Clearly, if an allocation of 6 billion transistors were required (out of 11), more than half the chip would be wasted on cache alone, and not enough transistors would remain for the chip's graphics-processing logic. But today, when lithography for the first time allows no fewer than 26 billion transistors on a single chip, allocating a quarter of them to the cache (6 out of 26) becomes practical, because 20 billion transistors still remain for the graphics work itself, and that is a sufficient amount. A ratio of roughly 25% for cache and 75% for compute is, for the first time, workable. This could not be done in the past, because such transistor density on a single chip simply did not exist, and had anyone dared to embed a large enough cache, not enough transistors would have been left for the rest of the chip's needs. The next smart question: what is so special about 128MB - why is that the magic number, and why could it not be done with less? Some understanding of how graphics work proceeds (concisely): there are several stages in building an image, and we will keep things very simple for brevity. The graphics processor receives from the main CPU a set of instructions for constructing the image. These include the vertices that make up the polygons, and the textures, which, as mentioned, are stored locally in VRAM. Everything the video card receives from the CPU is kept close by in VRAM so that it can assemble the image quickly, without delays or waiting; VRAM is the graphics processor's fast, private work surface. In the first stage the processor builds the polygons (three-dimensional primitives) from the vertices - the geometry stage. Next it renders the faces of the polygons, i.e. draws and paints them, using the textures that belong to the object in question (a texture is a piece of image): it takes a texture and spreads it across the face. The mathematical idea is a mapping from the two-dimensional space of the image (the texture) onto the three-dimensional surface of the polygon face. There are of course more stages - intermediate ones, and stages after the rendering (lighting calculations and the like) - which we will not go into in this article, interesting as they are, mainly because they are not in scope for this discussion. What matters most for us is that the rendering phase (taking the textures and drawing / spreading them across the faces) is the heaviest phase in terms of memory consumption, i.e. I/O against VRAM. As resolution rises, or as more frames per second are produced, the processing load grows linearly and with it the memory bandwidth consumed: every pixel operation requires reading from the texture map and writing to the appropriate polygon face. Read, work, write...
...and so on, over and over, pixel after pixel. X times more pixels requires X times the processing power (which, as we already understood, is easy to obtain) but also X times the memory bandwidth - they go hand in hand. The processor cannot process another pixel if the memory does not let it read one, and it cannot write another pixel onto the polygon if the memory does not let it write one. Our problem and limitation is that memory bandwidth cannot be improved at the rate the graphics processor can; hence the starvation for memory bandwidth. One of the newest ways to break the glass ceiling in the problematic layer, which as mentioned is memory bandwidth, is to create something faster than VRAM - a cache. It is a memory even closer to the graphics processor, sitting on the die itself, which in principle is no longer limited: it grows in capability directly with the growth of the chip. More transistors mean more capacity that can be allocated to cache, but also more bandwidth, because cache bandwidth scales with the chip's operating frequency and with the number of interconnects built on the die between the cache and the memory controller that sits on the same chip. Why, then, is 128MB the cache size that for the first time is sufficient, and why could far less not do? Because once two basic things can be done - holding at least one whole texture in the cache, and storing a single frame in the cache at the same time - you can, for the first time, perform all the calculations needed to render one complete image using only the cache, and that is the minimum you need to reach. A single uncompressed HD (1080P) frame is about 2 million pixels; at 32 bits per pixel that is 64 million bits, or 8 million bytes - roughly 8MB of memory to hold it. A 4K frame is four times that, i.e. roughly 32MB for a single frame. Pretty quickly one realizes that, to do all the calculation work for a full frame, a capacity on the order of 128MB is the threshold that ensures everything fits in at once. In that situation the graphics card takes one texture, caches it whole - reading it from VRAM only once - and starts drawing pixel after pixel from it, writing the results into a frame buffer, with everything together fitting inside the 128MB cache allocated on the graphics processor. 128MB is thus the first practically sufficient capacity that lets this scheme work in 4K as well; smaller capacities make it hard, to the point of not being able to do everything at once, and drain away all the wisdom and rationale of using a cache. We said we would not go deep into the whole graphics pipeline - there are of course more steps, and it is not just painting textures - but suffice it to say that the intermediate calculations of the other stages, and the lighting calculations at the end, do not consume a large amount of memory, and their data can be purged from the cache and freed once they are finished, because it is no longer needed.
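The frame-buffer arithmetic above, as a quick sketch (32 bits per pixel; real engines may use different formats and keep additional buffers):

```python
# Size of an uncompressed frame buffer at 32 bits (4 bytes) per pixel.
def frame_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / (1024 ** 2)

print(f"1080p frame: {frame_mb(1920, 1080):.1f} MB")   # ~7.9 MB
print(f"4K frame:    {frame_mb(3840, 2160):.1f} MB")   # ~31.6 MB
# Either fits comfortably inside a 128 MB on-die cache alongside a texture.
```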
What is important to understand is that the heaviest consumer is the rendering phase - painting the textures across the faces of the polygons. If one whole texture is held in the cache, and a whole frame is held in the cache at the same time, that is the critical capacity in the whole process. That is what 128MB of cache makes possible, and what smaller sizes do not really allow. Hence the industry had to wait until 2020, for 7nm to arrive, so that a cache of sufficient minimum size could be built in practice. The graphics card loads one texture into the cache, renders the hundreds or thousands of polygons that are drawn from it, then flushes it, loads the next texture in the queue, and so on until it is done. Note that with this method each texture is read only once per complete image, no matter how many polygons it serves: the processor does not finish with a texture in the cache until it has finished drawing every polygon that texture is used for, which can be hundreds or thousands of times in a single image. We can immediately see that the read traffic against VRAM shrinks here by a factor in the thousands - a dramatic improvement. On the write side, remember that the frame buffer no longer sits in VRAM (where every pixel drawn would mean a write to VRAM) but sits entirely in the cache as well, meaning there are no writes to VRAM: the processor draws all the pixels directly in the cache until the entire frame is finished, and from there sends it to the screen. The savings on the write side are likewise on the order of thousands. In effect, by working directly against a cache that provides everything needed as a workspace for building one complete image, we have reduced the memory traffic against VRAM to a fraction of what was required before. It is already clear what genius this delivers: we broke the glass ceiling of memory bandwidth - the one that has constrained graphics processors since the field was born decades ago - and moved the problem into the chip itself, where it is easy to address and the limitation is almost nonexistent, because cache capacity and bandwidth grow directly with lithography itself. Hallelujah. Bandwidth calculations, to understand the business: AMD chose, in its technical implementation in the Navi21 chip, a cache interface 4096 bits wide, meaning that in each beat (per clock) it allows a write or a read of 4096 bits. The cache works SDR-style, which is more flexible than DDR and certainly more flexible than the later schemes: it can perform a write or a read on any given clock, as it pleases, and can dedicate 100% of its bandwidth to reads only or to writes only. DDR-type VRAM, by contrast, shares one bus between reads and writes, so its headline bandwidth has to be split between the two and switching between them costs efficiency - in practice the quoted figure is misleading. The same applies, even more so, to GDDR5 and to today's GDDR6X. So here is another advantage of the cache: being SDR and unrestricted, it improves traffic even further in actual practice.
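Before the numbers, a toy illustration of the read-traffic savings described above (the figures below are invented purely for illustration and are not a simulation of a real engine):

```python
# Toy model: texture bytes read from VRAM per frame, with and without a cache
# large enough to hold a whole texture. All numbers are illustrative assumptions.
textures_per_frame = 200        # distinct textures used in one frame (assumption)
texture_mb = 4                  # average texture size in MB (assumption)
uses_per_texture = 1000         # polygons drawn from each texture (assumption)

reads_without_cache = textures_per_frame * texture_mb * uses_per_texture  # re-read per use
reads_with_cache    = textures_per_frame * texture_mb                     # read once each

print(f"VRAM texture reads without cache: ~{reads_without_cache:,} MB per frame")
print(f"VRAM texture reads with cache:    ~{reads_with_cache:,} MB per frame")
print(f"reduction factor:                 ~{uses_per_texture}x")
```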
In bandwidth terms the Navi21 chip works at a typical average operating frequency of about 2150MHz, i.e. 2150 million clocks per second. The bandwidth against the cache is therefore 4096 bits times 2150 million, roughly 8.8 trillion bits per second; divide by 8 and you get about 1.1 trillion bytes per second - in short, about 1100GB per second. The overall VRAM bandwidth on this card, with a 256-bit GDDR6 configuration running at an effective 16Gbps per pin, is only 512GB per second. So the cache more than doubles the bandwidth available to the GPU core: the core now sees as its work surface a crazy ~1100GB per second, and of a preferable kind at that, being all SDR. For comparison, the bandwidth of the RTX3090, with ultra-fast 384-bit GDDR6X at 19.5Gbps per pin, is only 936GB per second (and that figure must be shared between reads and writes on the same bus). In other words, integrating this pioneering cache lets Navi21 achieve an even greater effective bandwidth than the RTX3090 flagship. In fact, at the moment, the Navi21 core gets more bandwidth than it is capable of digesting at full output. The fact that this bandwidth is limited to a capacity of only 128MB should, by now, not really bother the reader, since at every stage of the graphics work toward completing the full image this capacity satisfies the needs of the technique described: the chip caches what it needs for one image, uses it, throws away what it no longer needs, and so on until a complete image is ready to be sent to the screen. AMD wanted to illustrate what this bandwidth is equivalent to compared with the traditional method, so it built the following slide, which we will explain right away - scroll down past the image: in the left column, the bandwidth obtained from traditional GDDR6 memory chips alone, 256 bits wide at 16Gbps per pin, which produces 512GB per second. The middle column is the same thing, only one and a half times wider, i.e. 384 bits, which produces 768GB per second. In the right column they took the cache bandwidth of about 1100GB per second combined with the additional bandwidth still provided by the regular 256-bit VRAM (512GB per second), giving 1100 + 512 = about 1612GB per second in total. Divide the 1612 of the right column by the 512 of the left column and you get the ratio they wrote - roughly 3x. And here we have the genius, which is actually quite trivial and simple to understand and has existed in the world of computing since time immemorial: a cache, which for the first time, thanks to sufficiently advanced lithography, can be implemented as a practical and effective graphics cache inside a graphics processor - something that could not be done before modern lithography was born. Because the method is so dramatic and groundbreaking, technologically and practically, it will now be adopted by the whole industry, Nvidia and Intel included. It is simple, inexpensive, effective, and easy to implement. It cannot be patented either: a cache is a cache, and it is older than all the members of this forum.
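The bandwidth arithmetic above, as a short sketch (the clock and per-pin rates are the figures quoted in the post, not official specifications):

```python
# Bandwidth figures used in the post, in GB/s.
def cache_bw(bus_bits, clock_mhz):
    # On-die cache: bus_bits transferred once per clock (SDR-style).
    return bus_bits * clock_mhz * 1e6 / 8 / 1e9

def vram_bw(bus_bits, gbps_per_pin):
    # GDDR VRAM: per-pin data rate (Gbps) times bus width.
    return bus_bits * gbps_per_pin / 8

infinity_cache = cache_bw(4096, 2150)     # ~1101 GB/s
gddr6_256      = vram_bw(256, 16)         # 512 GB/s (Navi21 VRAM)
gddr6x_384     = vram_bw(384, 19.5)       # 936 GB/s (RTX 3090)
combined       = infinity_cache + gddr6_256

print(round(infinity_cache), round(gddr6x_384), round(combined))
print(f"vs. plain 256-bit GDDR6: {combined / gddr6_256:.2f}x")   # ~3.15x
```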
From now on, graphics processing technology no longer depends on the limitations of the memory chips and the bus width (in bits) of which VRAM is composed. The VRAM can now be much slower and cheaper - i.e. implemented with slow, inexpensive memory - provided the cache, through its speed and size, supplies the graphics processor's appetite for the read and write rates needed to render a complete image. The next lithography generation, probably 5nm, which will enable the next generation of cards within about two years, will roughly double the transistor count from today's region of 26-28 billion (26 billion for Navi21, 28 billion for GA102) to around 50 billion. In that situation graphics cards will be able to allocate even more than 6 billion transistors to cache - say 12 billion transistors for 256MB of cache - which will further improve the headroom and performance with which the graphics processor does its job. When there are already 50 billion transistors on the chip, allocating 12 billion of them still leaves 38 billion for the working logic, and a ratio of 12 to 38 is reasonable. The designers will examine where the optimum lies in this respect - the ratio between cache and processing power - and decide accordingly how much cache to allocate in the graphics processor. * To the extent that members wish to discuss and dig deeper into the subject, and / or into the areas the article marked as out of scope for now, feel free to ask and dig. I will do my best, to the extent of my knowledge of the field, to respond and expand.
    6 points
  7. CSM means a backward-compatibility mode for a legacy BIOS - in other words MBR rather than GPT, not UEFI mode. That is why the system does not boot: the appropriate disk partition layout is missing. Windows has a built-in tool that can convert the disk, but there is some risk of data loss. Before you begin, it is advisable to create an image of the disk with software like Macrium Reflect so it can be restored if something goes wrong. Instructions for converting the disk partitions are here; refer to the second chapter, which explains how to do it from the desktop. The alternative is to reinstall the system after turning off CSM mode in the UEFI settings (which will cause Windows to be installed in UEFI mode).
    5 points
  8. I was connected this week by a Bezeq fiber installer (the first, and currently only, customer in Kiryat Malachi). The installation cost me NIS 450. For now the only option is their Be router, which as a router is one big trash can! It would have been nicer if you could get a fiber-to-RJ45 adapter for the infrastructure, so you could privately connect a router from another company instead of the Be router, which is very limited in its options (no way to disable BCyber, no DMZ setting, no changing the scope to a more sensible one, no changing to Google DNS). Meanwhile I run an external mesh solution rather than Bezeq's. I called Bezeq technical support and was told there is one particular model they do work with, but I found nothing about it online except instructions. I attached a PDF with the 3 models licensed by Bezeq; the last model on the list is, they told me, the one that works. The connection type on Bezeq's infrastructure is GPON, with an SC/APC connector (the one in the green frame, not the blue!!!!). The catch is that if you connect something else, support told me they cannot help with faults. Another issue: Bezeq is only getting started and their technical support has no answers about how any of this works, so if you buy a fiber adapter that fits Bezeq's optical connection, you are on your own. I have already plowed through a few places online regarding what fits and what doesn't. What I found online with shipping so far are two Ubiquiti UFiber models (one of which has a PoE option), one model from Huawei, and another from ZTE. There is a particular TP-Link router with a suitable GPON port, but there is no way to buy it online. gpon.pdf Update: today I bought a TP-Link fiber adapter together with a VR600 router. I also connected two switches so that the whole existing home network runs at gigabit; for Wi-Fi in the house I use an external mesh (3 units). I set up what was needed in terms of port forwarding and assigning IPs by MAC address, switched the scope to the one I have been used to since the TD-W9970 days, and everything runs at full tilt. Tomorrow I return Bezeq's router to them, and salamat.
    5 points
  9. I'm in favor of open source, and it's great that AMD relies on it rather than on proprietary tools. The problem is that it does not rely on it because it supports open source; it relies on it because it has no choice. NVIDIA has a highly developed software department, and the industry is familiar and experienced with its tools. AMD's alternatives are not tools it created. You mentioned ML. There are two options - NVIDIA's proprietary CUDA, or OpenCL, which is open. AMD supports OpenCL, but not because it is better or anything: OpenCL is not AMD's, AMD did not develop it, and NVIDIA supports it too. In fact NVIDIA started with CUDA, and OpenCL came much later - which means that, as I mentioned here before, the industry's tools work mainly with CUDA. And NVIDIA's software is very good; I would even dare say it is a software company no less than a hardware company. AMD, on the other hand, does not have that advantage. The same goes for Intel, by the way: both Intel and NVIDIA assist the developers of compilers and of games / game engines, respectively. Guess whose hardware those end up best suited for... AMD, to the best of my knowledge, does not do this (about that last point I am not 100% sure). NVIDIA also takes part in the relevant decision making; AMD does not (as above). There is still room for optimism, because even though Intel invests in the software and AMD does not, AMD managed to overtake Intel... or not. Let us not forget that although Intel optimizes for its own processors, the CPU architecture is the same for both - which is not the case with the graphics cards. Edit: not that it's a real measure, but I checked the number of repositories on GitHub. NVIDIA has roughly 720, of which 26 are forks (i.e. about 694 of its own). Intel has 188, of which 21 are forks (about 167 of its own). And AMD has - guess what - 22, of which 9 are forks, so about 13 of its own in total. That puts Intel at roughly 13 times AMD's count and NVIDIA at roughly 53 times. It gives some sense of the order of magnitude.
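For what it's worth, a rough way to reproduce that kind of count via the public GitHub REST API (a sketch - the organization names are assumptions, the counts change over time, and unauthenticated requests are rate-limited):

```python
# Count an organization's public repositories on GitHub, split into
# original repositories and forks. Unauthenticated API use is rate-limited.
import requests

def count_repos(org):
    own = forks = 0
    page = 1
    while True:
        resp = requests.get(
            f"https://api.github.com/orgs/{org}/repos",
            params={"per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        repos = resp.json()
        if not repos:
            break
        forks += sum(1 for r in repos if r["fork"])
        own   += sum(1 for r in repos if not r["fork"])
        page += 1
    return own, forks

for org in ("NVIDIA", "intel", "amd"):   # org names as assumed here; verify on github.com
    own, forks = count_repos(org)
    print(f"{org}: {own} original repositories, {forks} forks")
```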
    5 points
  10. It is best to remove Driver Booster immediately - it only causes trouble.
    5 points
  11. Leave it, he's a professional. He understands more than you do - in fact, he understands more than you do about everything: economics, psychology, anything related to technology and computer science. Too bad he never learned to write forum paragraphs without dropping line breaks in random places, which makes reading his posts one big nightmare - and that's before we get to the bizarre tendency to ramble, repeat himself, and attempt lofty language (which would be a little more impressive if it weren't accompanied by lots of spelling mistakes). Add to that the obsessive editing of old posts, so that you cannot respond to them without looking like an idiot because the paragraph you were referring to has been deleted (tip: always quote it). And bottom line, go pirate games, because that's what he does with all his many years of experience in the industry.
    5 points
  12. Which option is "recommended" is not really the question - it is a question of budget and requirements: if the budget allows a balanced six-core system, go for six cores, and if it allows eight, go for eight. It is a pity to spill so much ink on a point so minor that, in the end, it is just one decision among the whole set of decisions that make up this spec.
    5 points
  13. The long wait is coming to an end and Alder Lake processors are becoming available for everyone - has Intel succeeded in its mission to regain the performance crown? We dived into an in-depth examination of the new Core i5 and Core i9 models!
    4 points
  14. Oh, lie down already, you crazy old piece of shit. Don't throw your complexes at me. Go buy a couple of second-hand fans and clean them. If you think I have any sentiment for one of these companies just because I bought a product from one of them, you are completely off the mark. I won't get into it because: 1. Corona. 2. I have better things to do with my time. 3. I will lose no sleep even if my card turns out not to be the fastest in the universe. My God, how much hot air you blow.
    4 points
  15. What's the deal with these posts? What do they contribute? Card-box porn? Not really clear to me...
    4 points
  16. If so, good timing: we have finished the first round, in which we measured the eight titles we originally tested on the 6800xt, so we have a reference for comparison against each other. We are very glad we insisted on finding a good 3080 - not just the first card we opened, the EVGA - because the second card, the ASUS, turned out to be such an exemplar that in any comparison you find online it is effectively considered the fastest card available, a golden sample of the Ampere architecture. Moreover, it helps us give a decent estimate of the 3090's capabilities as well, because in some situations it is faster than one, and that was our overarching goal originally. In the absence of a 3090 at hand, we got as close as possible under the circumstances, and no one is happier than we are. We very much hope the members appreciate lanzar's hard work, which stretches over hours and days, as well as the considerable financial outlay invested in buying the cards - understand, this is many tens of thousands of shekels, and for that alone lanzar deserves a huge shout-out. It is far more than what journalists in the field do with their own money. All the measurements in this first round are at 1440P; later we will also upload 4K, possibly tonight (if we manage). ** lanzar, this ASUS 3080 - put a sticker on it and mark it; it comes to me when you finish with it, and you are not passing it to any other customer or friend. My name is written on it, dir balak. Note that in the current post we will attach only the latest measurements of the 3080, since all the 6800xt measurements are already in this thread, at the very beginning, on pages 1-6. Whoever wants can go back there and look - there is no point uploading them again and wasting the space in the forum. After presenting each 3080 measurement we will put up the reference table for comparison, so we can track the improvement we achieved on the 3080 relative to the card's stock-frequency default as measured by Lior / TPU. In the Gears 5 title our 3080 puts out 146.2FPS, compared with the 170.8FPS the 6800xt put out - meaning the 6800xt is 16.8% faster in this test; here is the TPU measurement in the same title that we chose to reference in this thread. In the Horizon title our 3080 puts out 136FPS, compared with the 142FPS the 6800xt put out - meaning the 6800xt is 4.4% faster in this test; this is Lior's measurement in the same title. Note that our overclocked 3080 overtakes the ASUS 3090 OC version, releasing 136FPS in this title. In the Red Dead Redemption title our 3080 puts out 105.6FPS, compared with the 116FPS the 6800xt put out - meaning the 6800xt is 9.8% faster in this test; this is Lior's measurement in the same title. Note that our overclocked 3080 overtakes the ASUS 3090 OC by an inconceivable gap: ours put out 105.6FPS while the 3090 barely managed 82.6. Look how much the Ryzen 5000 platform contributes - and perhaps also the newer drivers released since, who knows. In the Hitman 2 title our 3080 puts out 131.5FPS, compared with the 146.2FPS the 6800xt put out.
Meaning the 6800xt is 11.1% faster in this test. In Hitman we used TPU's work as the reference, since it was easier to compare against - the run parameters could be copied exactly - and here too our 3080 flies nicely and opens an impressive gap over the stock 3080: 131.5 for us versus 122.8 at stock. In the Assassin's Creed Odyssey title our 3080 puts out 91FPS, compared with the 95FPS the 6800xt put out - meaning the 6800xt is 4.4% faster in this test; here, with Lior's work as the reference, our overclocked 3080 overtakes the ASUS 3090 OC with 91FPS. In the Borderlands title our 3080 puts out 110FPS, compared with the 137FPS the 6800xt put out (we will mention that the second, faster 6800xt card took out 138 in the sanity test, but we will of course use 137 as the representative result) - meaning the 6800xt is 24.5% faster in this test; and here is Lior's measurement as the comparison reference. We improved, but not dramatically in this case. In the Metro title our 3080 puts out 109.93FPS, compared with the 118.4FPS the 6800xt put out - meaning the 6800xt is 7.7% faster in this test; here is the comparison against Lior's measurements. With the overclocked 3080 we managed to match the performance of the ASUS 3090 OC. In the Tomb Raider title our 3080 puts out 168FPS, compared with the 177FPS the 6800xt put out - meaning the 6800xt is 4.3% faster in this test; these are Lior's measurements, and here too our overclocked 3080 opened a nice gap over the ASUS 3090 OC. So these are the eight measurements in total, which show one of the better 3080 cards to be found; and not only that, but if we compare the results to the reference indices from Lior and TPU, we improved significantly in most cases - mainly because we overclocked our 3080 quite a bit, and also because our platform is a reasonably optimized Ryzen 5000 rather than the older Intel platforms that Lior and TPU measured on, which particularly hurt performance at 1440P, where the platform and the Intel processor form a bottleneck. We are very pleased with what we learned today: the Ryzen 5000 and the tuning applied to the 3080 card matter a great deal here. Now we are waiting for the 4K measurements, and we will see what happens there compared with the reference indices.
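A small helper for the percentage gaps quoted above, recomputed directly from the FPS pairs in the post (minor differences from the quoted percentages are possible where a figure was rounded):

```python
# How far ahead the 6800 XT result is, per title, from the (3080, 6800 XT) FPS pairs above.
results = {
    "Gears 5":                  (146.2, 170.8),
    "Horizon":                  (136.0, 142.0),
    "Red Dead Redemption":      (105.6, 116.0),
    "Hitman 2":                 (131.5, 146.2),
    "Assassin's Creed Odyssey": (91.0, 95.0),
    "Borderlands":              (110.0, 137.0),
    "Metro":                    (109.93, 118.4),
    "Tomb Raider":              (168.0, 177.0),
}

for title, (fps_3080, fps_6800xt) in results.items():
    gap = (fps_6800xt / fps_3080 - 1) * 100
    print(f"{title}: 6800 XT ahead by {gap:.1f}%")
```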
    4 points
  17. What a hallucinatory post. You did not buy the components as a complete computer, so the warranty applies to each component separately. In principle he did you a favor by tracking down the source of the problem for you; he could just as well have told you to figure out the defective component yourself and bring only it in for testing. Full monetary credit for the component, if there is no identical or equivalent replacement, is already above and beyond on the store's part. Asking for a monetary credit for all the components you bought separately, after using them, is a ridiculous and silly demand.
    4 points
  18. @zone glide, I'll play a bit of nec here and tell you something from a place of "let me teach you". The problem with your indecision stems from a lack of experience in the hardware world. You can always wait, and there is always something around the corner. Wait for the RTX4080, and in January 2022 you will get another nec-style post in the face explaining that actually the 5080 and AMD's next product are the real thing and there will be a huge jump in performance. Or maybe you'll just wait for the 3080 with more VRAM, because it's terribly important and after all we want to own a computer for two whole decades - just don't forget that by the time such a card exists, a 3080 Ti SuperDuper or some other made-up refresh will already be just around the corner, so maybe you'll wait a bit longer. I remember you asking a year ago about upgrading your antique computer; I'm guessing you didn't upgrade back then because you decided to wait for the next generation, and now you are again considering waiting for the next generation. Buy what you need and stop letting the people here make you anxious about what will come in two years. No matter what you choose to buy and when, within 6-12 months something will arrive that makes your purchase look funny. I bought a 3060 Ti and effectively got a 2080 Super that someone else, maybe a few months ago, paid double for. And next year my card will also be embarrassed in the benchmarks by a card that costs half as much. That's how it is, that's the hardware world; if you can't handle it, don't buy technology. I have never seen a person who spends so much time on buying hardware and agonizes over it so much.
    4 points
  19. So think about the future: buy a 3080 Ti with 20GB for NIS 5,000, and in two years you will get an RTX4060 for NIS 2,000 that delivers the same performance. What am I getting at? A consumer who wants value for money does not buy these cards up front. Want to insure yourself for the future? Take NIS 2,000, put it aside, buy an RTX3060 when it comes out, and in two years take the 2,000 you saved now and buy a 4060. You save money, you still get similar performance in the future but with newer technology, and you can also sell your current card while it still has decent market value and recover part of the amount. You may not even end up with an out-of-warranty video card in the system. So what's the conclusion? Why buy a 3080 Ti with 20GB? Because you want the performance it offers - not tomorrow, today.
    4 points
  20. 16GB of memory is twice as big a gimmick as RT - and of course there's DLSS, which is the main game changer at Nvidia today, more than RT itself. By the day you actually need 16GB on your video card, even at extreme resolutions, the card will already be such an antique in terms of compute power that it will long since have been replaced by a shiny new one. An absolute gimmick, and AMD knows it too; the only reason it's 16GB at all, and not a more modest number like Nvidia's 10, is that AMD wanted to stick with a 256-bit bus width. There is no doubt that Nvidia has, for now, won this generation. Even for those unimpressed by RT, it still offers more here and now than what 16GB offers here and now - which is nothing at all. And DLSS has no answer from the red side; most of the serious games launching in the coming years will support it, and it is essentially free performance. I would say the 16GB is useful outside gaming, for those who do real work with their card - but there, too, Nvidia has a clear and distinct advantage for its users in most areas because of much better software and features. We have ended up in a situation where, in the areas where you might actually have wanted that memory (ML, for example), nobody wants an AMD card anyway.
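Context for the bus-width point: GDDR6 chips expose a 32-bit interface and commonly come in 1GB or 2GB densities, so capacity tends to follow bus width in fixed steps (a simplified sketch - clamshell configurations and other densities also exist):

```python
# Simplified link between memory bus width and VRAM capacity, assuming one
# 32-bit GDDR6 chip per channel and no clamshell mode.
def vram_options_gb(bus_bits, chip_densities_gb=(1, 2)):
    channels = bus_bits // 32
    return [channels * density for density in chip_densities_gb]

print("256-bit bus:", vram_options_gb(256), "GB")   # [8, 16] -> 16 GB on the 6800 XT
print("320-bit bus:", vram_options_gb(320), "GB")   # [10, 20] -> 10 GB on the RTX 3080
```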
    4 points
  21. Hello dear forum members! It's been a long time (6.5 years) since I installed Xeon processors on LGA771 boards, which work to this day (a reminder for those who want one - link). Today I come to you with another installation I did on the computer (against Intel's wishes): upgrading the 6700K I had until now to a 9900K on a Z170 board (not officially supported). It should be noted that I take no responsibility for this guide, and anyone following it does so on their own initiative, understanding the risks involved. The complete installation guide is here: https://linustechtips.com/topic/1118475-guide-running-coffeelakerefresh-cpus-on-skykabylake-motherboards/. Note that there are some updates (the CoffeeTime tool has been updated to version 0.92) and additional findings I made while attempting to install the processor. Each motherboard requires a different procedure, but for clarity I will mention here the highlights for Gigabyte boards. 1. First download the BIOS files and the CoffeeTime 0.92 tool as shown in the guide, along with the Flash Programming Tool (FPT). Place them in folders on drive C. 2. Prepare the BIOS using the tool, run with administrator privileges, as shown in the following image. Note that both the ME and the VBIOS + GOP must be updated so they can work with these processors (plus the appropriate microcodes - and make sure everything is saved!). In the EXTRA tab I also added, personally, the memory expansion to 128GB, whatever. Also important: under MAC1 enter the MAC address of the Intel network card in your possession and keep the number for yourself (it can be found in the network card properties). 3. If your operating system is installed on an NVMe drive, and the NVMe is formatted as MBR rather than GPT, you will need to convert it to GPT before making the switch, using recovery mode and the MBR2GPT command. 4. Great - you have the BIOS, the SSD is in the proper format, and you are ready to flash. Before that, please make sure you have a backup of the BIOS (before editing) on a USB stick in case you need to roll back through the BIOS (I needed to). Now comes the step of flashing the BIOS with FPT (on other boards the method may differ, for example using a programmer), as shown in the guide. It is important not to disconnect the computer from the power supply at this point, otherwise the BIOS will be corrupted. Once the flash is complete, you are ready to install the CPU. 5. Shut down the computer via FPT only, using the fptw64 -greset command, and before the computer restarts, turn it off and do not let it power back on. Disconnect it from the power supply and remove the battery. 6. Before installing the processor you will need to cover some of its pads, depending on the board (Gigabyte requires covering the most), and bridge some of them (also depending on the board). Here is the taping I did, using a kit from a kind AliExpress seller who also included tweezers. 7. Install the processor on the board. Mount the cooler loosely for a moment and make sure the board POSTs (if you can - I used the board's diagnostic LEDs to confirm it comes up) before you put everything together (if it does not come up, you may not have prepared the BIOS properly, or not applied the pin mods properly). If it comes up, power off and shut down the computer. 8. Make sure the cooler is seated properly, turn on the computer, and enter the BIOS. 9. Great, you're almost done! In my case the CPU was running at too high a voltage (1.4 VCORE), which loaded the VRMs and also caused crashes.
That is of course abnormal, and the VCORE should be lower. In the BIOS use an adaptive VCORE and bring it down by at least 0.100V (in my case - play with it to make sure the CPU is not getting too much voltage) and check stability. Monitor the VRM and CPU temperatures with HWiNFO. 10. Enjoy the renewed computer! P.S. My old Mugen 2 (which still cooled the Q9300) cools this processor well (better than the 6700K, thanks to the 9900K being soldered to the IHS, unlike the 6700K) and stays quiet. I get higher and more stable frame rates, my brother can encode movies at twice the speed, and I can rest easy for a few more years until I have to upgrade. The computer is stable after I sorted out the voltages. It was very enjoyable and worth it! Just make sure you have a board with a VRM good enough for this, and be prepared for the complexity of the process. Too bad Intel did not let us simply install the CPU the normal way (because as you can see, it works great) - but that is what modding is for. I would be happy to help with any question / request!
    4 points
  22. I'm thinking about devoting some time to researching VRAM occupancy, VRAM caching in modern game engines, and why the conventional way of thinking about VRAM occupancy can be misleading. Not sure it would be particularly popular, but I wonder how much demand there is.
    4 points
  23. If @captaincaveman has already (and rightly) corrected @aviv00, I will allow myself to point out here that the expression is "adraba" ("on the contrary"), not the mangled form that keeps being used. The word's origin is Aramaic, for those who are wondering. It's not the first time, hence the comment...
    4 points
  24. Today, AMD's Ryzen 5000 processors are officially released to the world, which is a historic turning point for the processor manufacturer. We took a look at the Ryzen 9 5950X and were amazed at its tremendous power - all the numbers here
    4 points
  25. My slowest core reaches 4750MHz and the fastest exceed 5GHz. Crazy.
    4 points
  26. Please, let's not go down personal paths here. In my opinion the discussion is interesting and important. If @napoleon45 thinks otherwise, that is his right - he is welcome not to come in here. If you want to explain why @nec_000 is wrong, that is also welcome; just please give a scientific, technical explanation, just as @nec_000 scientifically explained why this is a novelty and a breakthrough.
    4 points
  27. I informed my wife that I'm coming home early tonight, that she should set me up in the family room by the TV with fresh watermelon and Bulgarian cheese, nuts on the side, a cold bottle of Cola Zero, and a red towel at the ready, and that she should take the kids out somewhere... She asked why? I told her: the final against Maccabi. They had better not dare disturb me.
    4 points
  28. Surreal. A person shows up, registers to the forum, and dumps on a series as his first and only action on the site. Are you that bored?
    4 points
  29. It is not possible to split every discussion into a thousand sub-discussions; like it or not, this is a discussion about reasons to buy or not to buy. It would be better for the moderators to simply merge the threads.
    4 points
  30. Depends on which startup and which role. Experts in important fields, with both depth and breadth of understanding, will be in demand at any age.
What damage do you mean? I can think of two things. It is clear that someone who starts working at 30 will accumulate less seniority, less money and less pension than someone who starts at 20, but the relevant question is of course the comparison to other fields. Let's say you start at age 30 with a 20K salary, while someone who started at age 20 is the same age as you with a 30K salary and a respectable accrual. It is true that he has an advantage over you, but if your personal alternative is another job where you would continue to earn (say) 15K - then where is the damage? There is of course the matter of stability: if you are in a tenured position, then even with lower wages there can be long-term benefits. The second thing is that you may be afraid that if you start at age 30, you will not have time to acquire knowledge and experience and become a professional before you reach an age where it is hard to put in long hours, and then you will be uncompetitive against those with more experience. It is something to think about, but I would not call it "damage"; it is a risk to be aware of, not certain harm.
And that, in the end, is the right advice. It has nothing to do with the profession, but with the specific workplace you work in versus the place where you live. Tens of thousands of high-tech people live in exactly the same reality. It is a matter of choice, priorities, and willingness to invest in settling somewhere that may be more suitable.
This is true to some extent, but the reason you need some love for the field (or I would say, "affinity" for the profession) is that if you have none at all, it is simply not an area you can succeed in. It is not a job as a cashier or a guard or a bank clerk or a sanitation inspector, where you can more or less be a robot, work from a template, show up, punch the card and go home. This is a job that requires creativity and thought on a regular basis. You have no chance if you do not come with a certain desire (and of course ability, but in my opinion that is something anyone with normal intelligence can develop).
    4 points
  31. Your interpretation is incorrect. The tests on those sites measure the actual traffic speed, that is, they test the combination of the infrastructure and the ISP; they are unable to separate the two.
    4 points
  32. Enough... this whole mantra is tiring already. One might think Apple products do not suffer from bugs and security issues - especially Safari, which gives hackers free entry. Remind me which iPhone is the one that got an update and was deliberately turned into a slow "turtle" of a device... In short, friend, there are no significant differences between an iPhone and an Android device except for the price.
    4 points
  33. I don't understand why anyone thinks AMD can / should / "behaves" differently from Intel... It is another global, capitalist, publicly traded company that has done, does, and will do everything it can to present as many positive quarters as possible. Just business; don't look for friends there.
    4 points
  34. True, there is nothing you can do about it - quality costs money, and those who want quality will have to pay for it. Although today the price differences are on the order of a few tens of shekels, at the time of purchase (which was somewhere in 2018) the Silicon Power SSD was excellent value for money, so I considered it a worthwhile purchase. In fact, the mere act of writing these lines on the computer running that drive proves it to me again. Regarding survivability - as long as there are no statistics showing, black on white, that the survivability of Silicon Power drives is too low, I don't think reliability / survivability can be determined - certainly not based on the experience of the two or three people who commented throughout this discussion. It goes without saying that this is not necessarily a problem with the manufacturer; people have malfunctions with drives of all types and models. Silicon Power may indeed show lower reliability, but for now that is a hypothesis. And even if the reliability is lower, one has to ask how much lower it is relative to the price level. Regarding reviews - I think differently. A person who loves his product will enjoy it and forget about its existence. A person who suffers from his product will make sure others know about it. That is, a person is more likely to post a negative review - even of a good product - than to bother posting something positive about a product they enjoy; it is their way of expressing disapproval of what they purchased.
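On that note, anyone who wants data on their own drive rather than anecdotes can read its SMART health and wear counters. A small sketch (my addition; it requires smartmontools, and the device path is only an example - adjust it to your system):

    # overall health verdict as reported by the drive
    smartctl -H /dev/sda
    # detailed attributes: reallocated sectors, total data written, wear indicators, etc.
    smartctl -A /dev/sda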
    4 points
  35. To begin with, I did not claim the exclusion was unjustified; I only asked about giving him another chance. My feeling is that he annoys you and other people, so you dismiss his contribution - not that I claim that someone who contributes can do whatever he wants. To put it simply, he became inconvenient for you, so you put him away (with a reason for the protocol). In case you haven't noticed, this forum matters to him more than to any other user, and you missed that big time. Good luck to everyone.
    4 points
  36. An update: I finally got a reply from their service - they answered me by email! A quick Google search on Gett Basket turns up several results with reviews ranging from 1 to 5 stars, and a number of sites collecting quite a few complaints. I was nowhere near a smart consumer here - research before you buy. May everyone have an excellent weekend without Corona!
    4 points
  37. In principle I certainly agree with you - I haven't bought Intel and probably won't (certainly not a 10900 or any evolution of it) as long as this is the state of the market. It's not that these are gamers who make a living from it as a main occupation; the tournaments in Israel are more about competition and title than the financial side. What I meant is that today there are a fair number of gamers who sit and grind for hours to compete in games (each season and its crop), and learn from the internet to channel a considerable budget into reaching MAX FPS, no matter the price, as long as the parents' Visa doesn't wear out from all the swiping. Not that I think this is right - certainly not when it comes from an unprofessional, obsessive "sporting" angle - but the phenomenon is growing; feel free to check how many Discord communities there are in the country for every battle-royale game, and what the mood is when the guys competing there choose specs. Again, not wise consumerism, but it is a hobby the internet "educates" you into, and the herd... well, the herd. To each his own.
    4 points
  38. AMD wipes the floor with Intel's product, just like that. Those with a keen eye will notice improvements not only in performance, but also in a very important area - the security and encryption of all the virtualization layers - a capability for which Intel has no answer. AMD is going to pick up a huge amount of sales; cyber is a very sensitive issue today, and whoever has an answer on that front wins the deal.
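Presumably this refers to features along the lines of AMD's SEV (Secure Encrypted Virtualization), where each guest's memory is encrypted with its own key. As a small illustration (my addition, assuming a Linux KVM host on an EPYC machine), you can check whether the feature is active like this:

    # prints 1 or Y when AMD SEV is enabled in the kvm_amd module
    cat /sys/module/kvm_amd/parameters/sev
    # the kernel log also notes SEV support at boot
    dmesg | grep -i sev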
    3 points
  39. The problem, as I understand it, splits into two vectors.
At Nvidia's level this is competition with the miners, and there is nothing to be done - they are willing to pay much more than a domestic / private consumer, because it pays off for them as a business: the card is a production-line machine they need in their factory, and that is how they measure it. Even if they pay double for the machine, it still turns them a profit. And so they are pretty much killing this market for home consumers who are not willing / able to pay double for the product.
At AMD's level, its products in this generation are less attractive for mining (thankfully), but the problem is different - it is on the supply side. AMD managed to secure an allocation of about 150 thousand silicon wafers per quarter from TSMC, which is already very impressive, but nothing beyond that, even though it could have sold double if only the capacity had been allocated to it. The rest of the 7nm output goes to TSMC's other customers, who also need this production infrastructure for their products. Out of the 150 thousand wafers per quarter that AMD received, about 120 thousand are earmarked for the Microsoft and Sony console chips - the lion's share, roughly 80%, as noted - as a result of AMD's contractual obligations to supply about 9 million console processors per quarter, an annual rate of 36 million, which is the rate at which consoles sell globally in a normal year. In a Corona year even that is not enough, and there are shortages of consoles on the shelves. That leaves only a small share, about 20% (30 thousand wafers), for everything else AMD produces - processors and video cards, for all segments together, including servers and laptops. As a result, the volume of video cards and Ryzen 5000 processors for the home / private consumer is very small, probably on the order of only a tenth of what it used to be. And that is the result.
Unfortunately this is not going to change in the foreseeable future. AMD's commitment to keep supplying console chips continues and does not end, and output at TSMC's 7nm is not going to grow any further, as they have moved on to the 5nm generation. That means whoever wants additional production capacity at TSMC has to order an allocation on the 5nm line, at a sky-high price, and that's that. Except that right now essentially the entire 5nm capacity is being drunk up by Apple, which leaves practically nothing for anyone else - every new fab that starts producing 5nm, Apple takes at full capacity. It is willing to pay the most of anyone, and TSMC naturally chooses the most serious / biggest / heaviest customer that pays the most, which is Apple. This, by the way, is one of the reasons Apple products are expensive - not the only reason, but one of many: its customers pay a premium to ride the newest lithography in the industry. The iPhone is a premium product and its A14 processor is manufactured on the newest lithography available today, 5nm.
In the chip-making duopoly, Samsung's situation is quite similar: it has the older 8nm that NVIDIA managed to grab this year, and it has the newest and most expensive 5nm, which only huge and rich customers like Qualcomm can afford. So AMD and NVIDIA were not left with many choices. We will probably be riding this 7/8nm infrastructure for an entire year, and the shortages will keep dominating the field as long as the mining boom is not over. At least on the Nvidia side there is hope that if the mining ends, the problem will be solved.
On the AMD side there is no optimistic horizon in the coming year, until it can secure some share of 5nm capacity at TSMC. And even when it does get a share, it will be expensive, because everyone will be fighting for an allocation at TSMC on that lithography. Therefore the next generation AMD is planning on 5nm, including Ryzen 6000 and RDNA3, will probably also suffer significant shortages, just as its current products suffer today on 7nm. We are in an era with only two advanced lithography providers, TSMC and Samsung, and the entire industry, and then some, wants them. So supply is limited, demand is sky-high (a combination of the mining boom + a new console generation launching + Corona, all three hitting at once), and we are in trouble.
    3 points
  40. Have you ever seen a man courted by two women throw one of them out because the other told him she was here first? No! It's convenient for them, and that's clear to everyone! The executives throw us lines like "We really care about gamers, we're going to do everything we can to increase supply." Learn how the world works, accept it, and most importantly stop thinking that your discussions in the forums will make the money machine stop moving.
    3 points
  41. I did not say it was a bad thing. I gave you a perspective in order to help because I see you appear in a lot of threads and ask different people similar questions. If that's the reaction, then I'll really stop trying to help.
    3 points
  42. I believe they will only be taken care of if they are in better shape than NVIDIA.
    3 points
  43. You are a thief. The fact that you do good things at the same time does not change that fact. The feeble "it's copy number 1,000,001" argument can be applied to anything: steal a vehicle of which 1,000,000 have been manufactured; steal an iPhone of which 1,000,000 have been made; walk into a bakery and steal bread because they baked 1,000,000 loaves and won't feel it if one goes missing. Even if it's the billionth copy, it's not your property. You are a thief. This is not socialism, this is theft. And what a line - "I also contribute, I also volunteer" - I do all of that too, without stealing. Zero.
    3 points
  44. The wait and the anticipation were worth it - we took the Taiwanese giant's GeForce RTX 3080 TUF OC in for in-depth testing to find out what performance leap you can expect in the Ampere era. To the article
    3 points
  45. If this interests anyone - I have received official confirmation from NVIDIA that reviews of the 3080 in their own reference design alone will go up before September 17th. NVIDIA did not allocate samples to the Israeli media. I'm not certain about the date the reference reviews go up - I think it will happen on the 14th of the month. The schedule for next week is 3080 reviews, and the following week 3090 reviews.
    3 points
  46. https://www.techpowerup.com/review/intel-core-i7-10700/22.html
    3 points
  47. An update, as I promised back then..... According to the lab, the NVMe drive was faulty and it was replaced. Hoping this helps other people who run into the same problem in the future. And again, thank you so much to everyone who spent their time trying to help - much appreciated!
    3 points
  48. Thanks a lot Yitzchak, you came through huge!!! A great guy - he managed to solve the problem in a few minutes over a single call. A thousand thanks to everyone who tried.
    3 points
  49. Wow, look what happened: Bethesda, the publisher of the legendary Doom Eternal title we have all been waiting for, markets it through two channels - one under Steam and the other directly through its own platform (the Bethesda launcher). Without noticing, in the version it sells directly, it accidentally, horribly, left in the exe file that runs the game before the Denuvo protection is applied. It took the gamer community just a few hours to notice it and start scratching their heads over how this could happen. In effect, the game requires no cracking at all - it was left open to free copying for anyone who wants it, by none other than the title's own maker. Since then, the game has been spreading online to anyone who wants the free version. The Bethesda people quickly fixed the version on the site to remove the open file, but as they say, it's too late. Anyone who wants to read about this fiasco is welcome to enjoy the link below: https://www.reddit.com/r/CrackWatch/comments/fli390/um_guys_i_think_i_cracked_doom_eternal_serious/ ** As a result, anyone who wants to get the title without paying $60 for it can, with a 10-second Google search, find the open version with unimaginable ease - it is already in every possible corner. It should be emphasized again: this is not a crack at all, but a human error by the publisher itself, which released it into the open air, and by the time they noticed, it was too late. Some think this was revenge by one of the employees.
    3 points