Lately - HWzone Forums


Popular Content

Showing content with the highest reputation since 20/09/19 in all areas

  1. The problem is that the basic figure on which your doctorate rests actually negates your position. The video clearly shows that although the average frame rate is similar between the processors, the i3 often dips to single digits - and that is after they made every unreasonable effort to minimize the CPU load. I'm not a big fan of first-person shooters, but from what I remember, if you run into some enemies and right at that moment the computer stutters to an unplayable level, the computer is not doing its job properly. From what you see here, it drops to single digits just from walking around. Anyone aiming for 4K needs a very powerful video card, and it is best to pair it with a processor that keeps up all of the time - not most of the time, and certainly not just a fraction of the time. Nowadays there may still be room for deliberation between 6 and 8 cores. As for the broader concept: while office work on a computer falls under "basic consumption" (today even elementary school teachers require print jobs, for example), gaming is a leisure culture, and leisure is priced very differently. You can watch a football match on TV, or you can fly to Spain and see the match in person. You can watch Cirque du Soleil videos on YouTube, or go to the show and pay a few hundred shekels - especially if you have to fly to Vegas to see it. You can buy parve ice cream that tastes like cardboard mixed with milk and sugar, or pay up to NIS 100 per kilo at an ice cream parlor. A decent pack of cards (or two) can provide hundreds of hours of play alone, and several times that with a partner. So why buy a computer for gaming at all? And in general, one needs to connect with reality and adjust to the target audience: do you really think that those with such a basic misunderstanding of financial behavior are the ones sitting and reading your scrolls? You are preaching to the converted; the real (hypothetical, unreasonable) target audience won't read more than two consecutive sentences. There are people who know how much money they have in their wallet right now - that is their budget, and up to that ceiling a deal is a deal. There are people who know they have no money, but know how much they are willing to pay each month and for how long - that is their budget. Less wise are those who do not know how much money they have; that is unwise behavior, and it is not certain a computer forum is what will save them, not certain they should be saved - maybe Dad sprays money around. But it is worth checking what they expect from the computer and trying to match them with a computer that will meet those expectations - or send them to buy a pack of cards.
    8 points
  2. I see where you're coming from. And you're right - this really is not 'theft' in the literal sense that you could go and file a police complaint over it. But in our vernacular, 'theft' is when the sum taken crosses the moral threshold of what is worth paying, and most of us are not stupid consumers - even if you open your post with examples that try to paint us as such. Believe it or not, I'm pretty sure we are all fairly rational people, and we know what theft is. Don't be surprised if more people in this thread choose all the more to avoid shopping in this country when you come at us with this passive aggression. 'Israel' and 'free economy' are not a pair of words that go together; we are a country of monopolies and duopolies. There is a lot of rot here - we pay a lot more than Europeans and get a little less. "Communist" culture, "extreme socialist" culture - excuse me? You talk as if someone is twisting the importers' arms (assuming you are one, because you sound like a shill, even though I don't see a title under your username) and forcing them to sell to us at such-and-such prices. We will simply vote with our credit cards. And we will criticize the pricing policy in this country as much as we like, because there is no North Korean style censor here preventing us from comparing with Europe and the US and understanding how badly we are being squeezed in Israel. If I can sum up your message, it is something like: don't you dare criticize their prices - cope with it, you cheeky Israelis. When the cards come out, we will see where it is most profitable to buy. And if you climb down from the tree only when the prices from abroad arrive, what good is that? I am in favor of supporting Israelis, and of making life easier for myself along the way, but I will not roll over for the importers in this country.
    7 points
  3. Thanks so much to everyone for the support. I appreciate it, and the site team's decision, and would be glad to come back. I hope and believe we can use this episode to keep helping the forum together, in a more respectful and tolerant atmosphere, because in the end that is the main reason we are here.
    7 points
  4. A few days ago, after hours of tinkering with what looks like a waste of time between users, we decided to completely remove the option of negative reputation on the site. Every now and then someone exploits the system to 'hurt' and harass another user; from now on only neutral or positive reactions are available in the forum, so reputation can no longer be gamed. This user's long history with the site and its staff is something the veterans are familiar with, but I see no point in discussing it publicly. We are aware of the criticism of the disciplinary decisions we make regarding users, and it is in our interest both that users get help when they need it and that blood keeps circulating through the forum. Personally, in recent months the volume of my comments and my participation as a member of the site team has dropped; part of that is due to a great deal of disciplinary handling of users, including some quite serious discipline issues, and another part is a focus on hardware reviews, as some of you have probably noticed in the site's sections. We (the site team) try not to get dragged into politics, nor into visual protests, nor into private messages from users whose whole aim is to get another user banned - and this has worked in both directions in many situations. Napoleon's ban has been lifted. We know a banned user can simply create a new account, as has happened before, but for the sake of the existing reputation the decision is understandable: if he wants to come back - he can. It is important to understand that the situation is a bit more complex than it appears, since this is a user responsible for a significant share of the positive traffic for forum users; on the other hand, every negative situation turned into a disciplinary incident with 20 ricochets toward other users. We (the site team) created general rules in order to enforce them, and the conflict of an active user who regularly helps but also regularly breaks the rules is very problematic. To many this can look as if the user has complete immunity from banning, and that, more than anything, is what made the decision to ban understandable and justified. We do hope that the other side of this coin improves if the user does decide to return. Everyone, without exception, will benefit from it.
    7 points
  5. Kfir strikes again (or for the hundredth time, I've lost count), proving time and time again that he is a kaka child whose place is in FXP.
    6 points
  6. Written by forum member nec_000, on 29.10.2020. ** The author grants the site owners permission to use the article below as they see fit, provided they credit the author when posting / using it.

Here is an article about the groundbreaking new technology that was implemented for the first time in the new generation of video cards. In the article we will understand what it is, why it entered the picture, what its greatness and genius are, what root problem it came to solve in the graphics industry, why it could only now be implemented for the first time (technologically), and why from now on everyone will adopt it.

Background: we all heard yesterday, at the launch of AMD's new RX6000 series, about a new technology called Infinity Cache. We understood that it is something AMD implemented in the new product, but not exactly how it works, why, and what it does. So today we will all understand, and understand well. First of all, for simplicity: the word "infinity" is branding and has no meaning. We will call the mechanism by its simplest and most terminologically correct name - cache. The article is presented for the benefit of the members, to broaden their knowledge of the subject, at an academic (but concise) level that fits one concentrated page, so that the average enthusiast can understand the topic well.

Historical background: decades ago the first video card was born, allowing the computer to display output on a screen. At first it was just text, and for that it was enough for the card to talk directly to the slow system RAM; nothing more was required. Later it became necessary to display not just simple text but more complex graphics, and that created a difficulty: one of the heaviest resources graphics work consumes is high, large-volume memory traffic (we will explain why shortly). So over the years graphics cards had to stop using the computer's generic RAM (which was too slow for the task) and start working with fast memory mounted on the video card itself. It served as a dedicated (and fast) buffer for the graphics processor's exclusive use, faster than the computer's RAM. It was called graphics memory; later the name settled as VRAM. This gave the graphics processor (by now called a GPU) higher access speeds and bandwidths than standard computer RAM could provide. The upgraded bandwidth lets the GPU do effective image-processing work; without it the GPU would be in starvation mode - suffocated and hungry for data.

Over the years, Moore's Law has meant that every 24 months or so (about two years) the processing power of graphics processors doubles. The doubling rests on a step change in lithography roughly once every two years, which makes it possible to fit twice the transistors in the same chip area (plus, of course, a slight increase in operating frequency). The two together have produced a steady doubling of performance - 2x (at least) every iteration (two years).
The problem is that while the graphics chips grow their power at the fast, exponential rate described above, the same cannot be done for the memory bandwidth available to the graphics processor. The halving of lithography each iteration unfortunately yields no parallel doubling of memory bus width, only a doubling of memory capacity, which is not the scarce resource in the equation before us. Let's see what this leads to over the years: while every two years, as stated, the computational power obtainable from the graphics processor doubles, it was not possible to provide a matching increase in memory bandwidth. A gap opens over time: processing power doubles quickly while bandwidth grows lazily - one resource pulls away from the other, which cannot keep up. Over the years the industry attacked the problem with the following techniques:

Repeated doubling of the memory bus width in bits: from 64 bits to 128, then 256, and so on up to 512... until hitting a glass ceiling that limits how many additional traces can be routed on a single card's PCB. It stopped (more or less) at a limit of 512 traces per PCB; beyond that it is too complex and expensive.

Another method was to transmit more information with each clock beat. Initially memory worked as SDR - a single bit written or read per clock. Then came DDR - one write and one read of a bit per clock. With the GDDR5 generation this became two writes and two reads per clock, four bits per beat in total. Today, in the GDDR6X generation, multi-level signaling packs several bits into each beat, which means many distinct voltage values on the wire. This is a very difficult signal to control reliably: the more values, the closer together they sit, and identifying the correct one becomes ever harder. The task requires very delicate discrimination on the receiving side, which leads to quite a few errors. There is a limit to what electrical signaling can do, and we are approaching it in huge strides; pushing further is becoming an exceptionally difficult challenge, bordering on impractical or not technically worthwhile.

Another method, squeezed to the last drop over the past decade, is data compression - initially lossless, and recently even lossy - all in order to milk every last drop of effective memory bandwidth. The graphics processors advance and soar upward while the memory fails to catch up at a satisfactory pace. All of the above - widening the bus in bits (up to 512), complex signaling to transmit more information per clock, and compression - have reached the limit of their capacity in modern times. No further progress can be made along these lines, and a new, groundbreaking method simply must be found to overcome the memory bandwidth limit and feed the hungry graphics processor.
Understand: without bandwidth that keeps rising to support the growing processing power, graphics work cannot move forward, and we reach stagnation. The new method found: caching. The big problem with graphics work is, as mentioned, the enormous memory traffic it requires. The reason so much memory is required is that the input is huge: the texture space is large and heavy, and it is what drinks most of the memory capacity; it is what fills the card's buffer, known as VRAM. If the textures were stored in the computer's general RAM then, as we understood from the explanation above, the graphics processor would have very slow access to them and would spend most of its time idly waiting for data - in short, very low GPU efficiency and poor performance. That is why VRAM was invented in the first place: to bring everything the graphics processor needs for its work close to it, so it can work fast and not starve for data. It would be silly to have a powerful processor that constantly waits for information and is therefore held back.

Since the memory consumed is large, and textures consume the vast majority of it, there was no way to build a graphics processor with an on-die cache large enough to hold all the necessary information. What was good for the world of CPUs - tiny caches - is not suitable for graphics. Very small cache volumes suffice for a CPU; that is its way of working, and it differs from a graphics processor, which needs a huge cache for the cache to be relevant at all. Thus, for decades, embedding cache memory in graphics processors was neither possible nor practical. Only now that lithography has reached 7nm for the first time is the transistor density high enough to hold a sufficient number of transistors on one chip. This is what finally gives graphics accelerator manufacturers the technical ability to embed a practically useful cache in the graphics processor.

The world's first graphics processor - the technological pioneer that incorporates cache memory (in addition to VRAM) as an intermediate layer - is the Big Navi core, codenamed Navi21. This core is about 500+ mm² and holds a huge count of 26 billion transistors, of which about 6 billion (a quarter of the chip's real estate) are allocated to 128MB of cache. 128 megabytes are 128 million bytes; a byte has 8 bits, so 128MB is 1,024 megabits, or about 1.024 billion bits. The kind of on-die memory in question needs about 6 transistors to store one bit of information, so 1.024 billion bits require about 6 billion transistors to represent them. Hence the allocation of 6 billion transistors, taken from the chip's real estate, for a 128MB cache array. Why is it only now possible, for the first time, to embed cache memory in a graphics processor?
Because if we go just one lithographic generation back, to 14/16nm, the video cards based on it topped out at about 11 billion transistors (the largest chips among them). Clearly, if an allocation of 6 billion transistors were required (out of 11), more than half the chip would go to waste on cache alone, and not enough transistors would remain for the chip's graphics-processing logic. But today, when lithography for the first time allows no less than 26 billion transistors on a single chip, allocating a quarter of them to the cause (6 out of 26) becomes a practical option, because 20 billion transistors still remain for the graphics work itself, and that is a sufficient amount. A ratio of 25% for cache and 75% for work is, for the first time, practical - something that could not be done in the past, because such transistor density was unavailable, and had anyone dared to embed a large enough cache, not enough transistors would have remained for the rest of the chip's needs.

The next smart question: what is so special about 128MB - the magic number - that smaller sizes would not do?

A little understanding of how a graphics operation proceeds (concisely): there are several stages until the image is built, and we will keep things very simple to keep this short. The graphics processor receives from the main processor a set of instructions for building an image. These include the vertices that make up the polygons, and the textures, which, as mentioned, are stored locally in VRAM. Everything the video card receives from the processor is kept close by, in VRAM, so the image can be assembled quickly, without delays or hitches; the VRAM is the graphics processor's fast, private work surface. In the first stage the processor builds the polygons (three-dimensional shapes) from the vertices - the geometric stage. Next it renders the faces of the polygons, i.e. draws and paints them. It paints them from the textures belonging to the same object (a texture is a piece of image): it takes a texture and spreads it across the face. The mathematical concept here is a transformation from the two-dimensional space of the image (the texture) to the three-dimensional space of the face (the polygon). There are of course more stages - intermediate ones, and a stage after rasterization (rendering the faces from the textures) that includes lighting calculations and the like - which this article will not go into, interesting as they are, mainly because they are not the relevant scope for this discussion.

What matters most for us to understand is that the rendering phase (taking the textures and drawing/spreading them onto the faces) is the heaviest phase in terms of memory consumption, i.e. I/O against VRAM. As resolution grows, or frames per second grow, the processing load grows linearly and with it the memory bandwidth consumed. Every pixel operation requires reading the pixel from the texture map and writing it onto the appropriate polygon face. Read, process, write...
And so on, over and over, pixel after pixel. X times more pixels requires X times the processing power (which, we already understood, is easy to achieve) but also X times the memory bandwidth; they go hand in hand. The processor cannot process another pixel if the memory will not let it read one, and cannot write another pixel onto the polygon if the memory will not let it write one. Our problem and limitation is that memory bandwidth cannot be improved at the rate the graphics processor can - hence the starvation for memory bandwidth.

One of the newest ways to break the glass ceiling in the problematic layer - memory bandwidth, as stated - is to create something even faster than VRAM, namely a cache: a memory even closer to the graphics processor, sitting directly on the die, which in principle is no longer limited and grows in power directly with the growth of the chip itself. More transistors mean more capacity that can be allocated as cache, but also more bandwidth, because cache bandwidth scales linearly with the chip's operating frequency and with the number of interconnects built on the die between the cache and the memory controller sitting in the chip itself.

Why, then, is 128MB of cache the constant that for the first time provides a sufficient size, and why could much less not do? Because once two basic things become possible - holding at least one whole texture in the cache, and holding a single complete frame in the cache at the same time - then, for the first time, all the calculations needed to render one whole image can be done using only the cache. That is the minimum needed to get there. A single uncompressed HD (1080p) frame is about 2 million pixels; at 32 bits per pixel that yields 64 million bits, which is 8 million bytes, meaning about 8MB of memory is needed to hold it. In a 4K image the amount is, as is well known, multiplied by 4, i.e. about 32MB per frame. Pretty quickly one realizes that to perform all the work required for a full frame, a capacity on the order of 128MB is the threshold that ensures everything fits in at once. In such a situation the graphics card takes one texture, caches it whole - i.e. reads it only once from VRAM - and starts drawing from it pixel by pixel, writing the pixels to a frame buffer, with everything converging together into the 128MB cache allocated to the graphics processor. 128MB is the first practical minimum volume that allows this business to work in 4K as well. Smaller volumes make it difficult, to the point of impossible, to do everything at once, and drain all the wisdom and rationale of the cache. As we said, we are not going deep into the whole graphics pipeline; there are of course more steps, and it is not just painting textures. Suffice it to say that the intermediate calculations of the other stages, and the lighting calculations at the end, do not consume a large amount of memory, and can be purged from the cache and freed once finished, because they are no longer needed.
What is important to understand is that the heaviest consumer is the rasterization, or rendering, phase - painting the textures across the polygon faces. If one whole texture can be cached, and one whole frame can be cached at the same time, that is the critical size for the whole process. That is what 128MB of cache makes possible, and what smaller sizes largely do not. This is why it was necessary to wait until 2020, for 7nm to be born, for it to become practically possible to build a cache of the sufficient minimum size.

The graphics card loads one texture into the cache, renders the hundreds and thousands of polygons that texture is used to draw, then evicts it, loads the next texture in the queue, and so on until it finishes. Note that in this method each texture is read only once per complete image, no matter how many polygons use it. The processor does not release a texture from the cache until it has finished drawing every polygon that texture draws, which happens hundreds and thousands of times in a single image. One immediately sees that the read traffic against VRAM saved here is on the order of thousands-fold. A dramatic improvement. On the write side, remember that the frame buffer no longer sits in VRAM (where every pixel write would mean writing to VRAM) but sits entirely in the cache as well, meaning there are no writes to VRAM. The processor draws all the pixels directly in the cache until the entire frame is finished, and from there sends it to the screen. The savings on the write side are likewise orders of magnitude of thousands. In effect, by working directly against a cache that provides everything needed as a workspace for building one complete image, we have reduced the memory traffic against VRAM to something far smaller than was required before. The genius this buys us should already be clear: we broke the glass ceiling of memory bandwidth - the one that has dogged graphics processors since the field was born decades ago - and moved the problem into the chip itself, where it is easy to address and the limitation is almost non-existent, since cache capacity scales directly with lithography itself. Hallelujah.

Bandwidth calculations, to understand the business: AMD chose, in its technical implementation in the Navi21 chip, a cache bus 4096 bits wide. This means each beat (each clock) allows a write or a read of 4096 bits. Cache memory works in SDR fashion, which is more flexible than DDR and certainly more flexible than all the later schemes: per clock it can perform a write or a read, whichever it pleases. DDR, by contrast, provides one write and one read, but not two of the same kind. In effect DDR offers half the bandwidth for any pure read or pure write workload, even though it is advertised as one combined figure, which in practice is misleading. The same limitation applies, even more severely, to GDDR5 and today's GDDR6X: it is no longer one read and one write per clock but several of each type, in a fixed half-and-half ratio. So here is another advantage of the cache, which is SDR and unrestricted: it can use 100% of its bandwidth for reads only, or writes only, and is not forced to settle for at most half of either kind. This further improves traffic in actual practice.
In bandwidth terms the Navi21 chip works at a typical average operating frequency of 2150MHz, i.e. 2,150 million clocks per second. Thus the bandwidth against the cache is 4096 bits times 2,150 million = about 8.8 trillion bits per second. Divide by 8 and you get about 1.1 trillion bytes per second - in short, about 1100GB per second. The card's overall VRAM bandwidth is a 256-bit GDDR6 configuration at 16 bits per clock per pin, or in other words only 512GB per second. We see that the cache more than doubles the bandwidth available to the GPU core: the core now sees as its work surface a crazy speed of about 1100GB per second, and of the preferable kind at that - all SDR. For comparison, the bandwidth of the RTX3090, using ultra-fast 384-bit GDDR6X memory at 19.5 bits per clock per pin, is only 936GB per second (and limited to half writes and half reads). In other words, embedding this pioneering cache gives Navi21 an even greater effective bandwidth than the RTX3090 flagship. In fact, at the moment, the Navi21 core receives more bandwidth than it is capable of processing at full output. That this bandwidth is limited to only 128MB of capacity should, by now, not really bother the reader: at every stage of the graphics work toward completing the full picture, this volume satisfies the needs of the technique. The GPU caches what it needs for one image - all of it - and throws out what it no longer needs when done, until one image is complete and ready to be sent to the screen.

AMD wanted to illustrate what this bandwidth is equivalent to compared with the traditional method, so it built the following slide, explained below - scroll down past the image: in the left column, the bandwidth obtained with traditional GDDR6 memory chips alone, 256 bits wide at 16 bits per clock per pin, producing 512GB per second. The middle column is the same at one and a half times the width, i.e. 384 bits, producing 768GB per second. In the right column they took the cache bandwidth of 1100GB/s combined with the additional bandwidth provided by the generic VRAM, itself 256 bits (512GB/s), giving 1100 + 512 = 1612GB/s in total. Divide 1612 by the left column's 512 and you get the ratio they wrote - on the order of 3x. (A short code sketch at the end of this post re-runs these numbers.) And here we have genius that is quite trivial and simple to understand, and has existed in the world of computing since time immemorial: a cache, which for the first time, thanks to sufficiently advanced lithography, can be implemented practically and effectively for graphics, inside a graphics processor - something that could not be done until modern lithography was born. Because this method is such a dramatic technological and practical breakthrough, it will now be adopted by the whole industry, NVIDIA and Intel included. It is simple, inexpensive, effective, and easy to implement. It cannot be patented, either: cache is cache, and it is older than all the members of this forum.
From now on, graphics processing technology will no longer depend on the limitation of the memory chips and bus width (in bits) of which VRAM is composed. VRAM can now be much slower and cheaper - i.e. slow, inexpensive memories can be used - provided the cache's speed and size supply and satisfy the graphics processor's hunger, at the rhythm of reads and writes needed to render a complete image. The next lithography generation, probably 5nm, which will enable the next generation of cards within about two years, will double the transistor count from the region of 26-28 billion today (26 billion for Navi21 and 28 billion for GA102) to the region of 50 billion. Graphics cards will then be able to allocate even more than 6 billion transistors to caching - say 12 billion transistors for 256MB of cache - which will further improve the headroom and performance with which the graphics processor does its job. When there are already 50 billion transistors on the chip, allocating 12 billion of them still leaves 38 billion for the working logic, and 12 versus 38 is a reasonable ratio. The designers will examine the optimum ratio between cache and processing power, and decide accordingly how much cache to allocate in the graphics processor. * To the extent that members wish to discuss and dig deeper into the subject, and/or into the areas the article marked as out of scope, feel free to ask and dig. I will do my best, to the extent of my command of the field, to respond and elaborate.
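To make the article's arithmetic easy to re-check, here is a minimal sketch in Python that re-runs the same numbers - the cache's transistor budget, frame-buffer sizes, and the effective bandwidths. All inputs are taken from the article itself; the helper names are mine.

```python
# Re-running the article's own arithmetic (all figures from the post above).
BITS_PER_BYTE = 8
SRAM_TRANSISTORS_PER_BIT = 6  # 6-transistor SRAM cell, as the article assumes

# Cache real estate: 128MB of SRAM costs ~6 billion transistors.
cache_bits = 128 * 1024**2 * BITS_PER_BYTE
print(f"128MB cache -> ~{cache_bits * SRAM_TRANSISTORS_PER_BIT / 1e9:.1f}B transistors")

# Frame-buffer sizing: one whole uncompressed frame must fit in the cache.
def frame_mbytes(width, height, bits_per_pixel=32):
    return width * height * bits_per_pixel / BITS_PER_BYTE / 1e6

print(f"1080p frame ~{frame_mbytes(1920, 1080):.0f}MB, "
      f"4K frame ~{frame_mbytes(3840, 2160):.0f}MB")  # ~8MB and ~33MB (article rounds to 32)

# Effective bandwidth: bus width (bits) * data rate per pin (Gbit/s) / 8 -> GB/s.
def bus_gb_per_s(bus_bits, gbit_per_pin):
    return bus_bits * gbit_per_pin / BITS_PER_BYTE

cache_bw   = bus_gb_per_s(4096, 2.15)   # 4096-bit cache, SDR at ~2150MHz -> ~1101 GB/s
vram_gddr6 = bus_gb_per_s(256, 16.0)    # 256-bit GDDR6 at 16Gbps -> 512 GB/s
rtx3090_bw = bus_gb_per_s(384, 19.5)    # 384-bit GDDR6X at 19.5Gbps -> 936 GB/s

print(f"cache {cache_bw:.0f} GB/s vs RTX3090 VRAM {rtx3090_bw:.0f} GB/s")
print(f"cache + VRAM = {cache_bw + vram_gddr6:.0f} GB/s, "
      f"~{(cache_bw + vram_gddr6) / vram_gddr6:.1f}x a plain 256-bit GDDR6 bus")
```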
    6 points
  7. CSM means backward-compatibility mode for an old BIOS. In other words: MBR rather than GPT, not UEFI mode. This is why the system does not boot - the appropriate disk partition layout is missing. There is a built-in tool in Windows that lets you convert, but there is a certain risk of data loss. Before you begin, it is advisable to create an image of the disk with software like Macrium Reflect so that it can be restored if something goes wrong. Instructions for converting disk partitions here. Refer to the second chapter, which explains how to do it from the desktop. The alternative is to reinstall the system after turning off CSM mode in the UEFI setup (which will cause it to be installed in UEFI mode).
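For what it's worth, the built-in tool referred to above is MBR2GPT, and a typical run from within Windows looks like the sketch below. The disk number 0 is an assumption - check yours first (e.g. with diskpart), and back up as described before converting.

```
:: validate the layout first; /disk:0 is an example - substitute your disk number
mbr2gpt /validate /disk:0 /allowFullOS
:: then perform the actual conversion
mbr2gpt /convert /disk:0 /allowFullOS
```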
    5 points
  8. I got connected this week through a Bezeq supplier (the first, and currently the only, customer in Kiryat Malachi). The installation cost me 450. For now the only option is their BE router, which I think is not worth it, because as a router it is one big trash can! It would have been nicer if one could get a fiber-to-RJ45 adapter for the infrastructure, in order to privately connect a router from another company instead of the BE router, which is very limited in its options (no BCYBER cancellation, DMZ settings, changing the scope to a more normal one, switching to Google DNS). Meanwhile I run an external mesh solution, not Bezeq's. I called Bezeq technical support and was told there is one particular model they do work with, but I found nothing about it except instructions online. I attached a PDF with the 3 models licensed by Bezeq; the last model on the list is, I was told, the one that works. The connection type of Bezeq's infrastructure is GPON ONT SC/APC (the connector in the green frame, not the blue one!!!!). Mind you, if you connect something else, support told me they cannot help with faults. Another point: Bezeq is only getting started with this, and their technical support has no answers about how it works, so if you buy a fiber adapter that fits Bezeq's optical connection, you are on your own. I have already plowed through a few places on the Internet regarding what fits and what does not. What I found online with delivery so far: two Ubiquiti UFiber models (one of them with a POE option), one model from Huawei and another from ZTE. There is a particular TP-Link router with a suitable GPON connection, but there is no way to buy it online. gpon.pdf Update: today I bought a TP-Link fiber adapter together with a VR600 router. I also connected two switches, so the entire existing network in the house runs at gigabit. For WiFi I use an external mesh (3 units). I set up what was needed in terms of port forwarding and IP assignment by MAC address, switched the scope to the one I have been used to since my TD-W9970 days, and everything runs at its peak. Tomorrow I return their router to Bezeq, and good riddance.
    5 points
  9. Welcome, forum members. I am opening this discussion after being alarmed by the number of posts here, across various help threads, in which forum users cite unreliable and false sources when comparing hardware products such as processors, video cards and memory.

Hardware comparisons in a nutshell. Nowadays, in the age of written and filmed media, hardware comparisons are the bread and butter of those who run websites and communities. HWzone, too, is among the sites worldwide that maintain relationships with major hardware companies such as Intel, AMD and NVIDIA for product reviews. Many media outlets around the world are in friendly contact with a variety of manufacturers in order to receive hardware for review and to deliver coverage in time to one audience or another. Sometimes outlets also rely on purchasing hardware for coverage, especially smaller ones that otherwise cannot cover an important product that is not available for review. The technical side of a comparison depends mainly on the outlet itself: some cover only the simple and general level, while others go down to the depths of the transistor. Typically, technical comparisons for written or video reviews (or both) are a lengthy and rigorous procedure involving repeated testing to verify the reliability of the information, and making sure the results represent something the user can actually relate to. Again - each outlet and its choices.

Clicks for money on fake or stolen information. Here we come to the YouTube of recent years. Thanks to the video platform we are exposed to a lot of great information on computers, hardware and smart consumerism. But the platform does not belong only to those with knowledge and a desire to share it. Recently, a surge of benchmark channels has raised real concern about people's perception of what hardware can actually do. Channels without reputation, technical background or any provenance for their information sprout on the network like mushrooms after the rain. These are channels with generic names, usually hardware-related, that push out comparisons at a high rate - most often significantly higher than any experienced, veteran outlet can manage. Many of these channels originate in Asia, but some are also common in Eastern Europe and South America. The method is simple: record an excerpt from a computer game or its built-in benchmark, or download one from the network, slap on a few hardware model names, and let numbers run across the screen. Sometimes there are even technical, plausible-looking graphs. The next step is to stuff as much generic information as possible into the video title and description - it speaks nicely to search bots - and it usually looks something like this: yes, that is a genuine excerpt from a video description. Usually there will also be a huge pile of affiliate links to Amazon and Chinese stores, plus a Bitcoin address for "donations". The data for the graphs can easily be lifted from one of hundreds of hardware and computer sites, and the money is already in your pocket: the data is easy to take from anywhere, and you don't have to spend hours and days on actual hardware. Still, it is important to understand that reliable information can also be found among small, anonymous channels. It is a platform for everyone.
Therefore it is important to weigh any piece of information and check that the channel in question does not suffer from the ills of the benchmark factories; sometimes it is the only information of its kind available to us.

OK, I'm convinced - so where is the information? Well, I am of course biased and will say that HWzone is always a great source, but we obviously cannot cover every game and every hardware configuration; we are one media body overall. My recommendation is to stick to the good old outlets - those for whom technical background and the substance of the testing matter, and who spend a lot of time bringing data to their users, rather than videos with bouncy music and generic graphs only. It is important to cross-check data, and important to understand trends in how hardware behaves against software. If card Y lags behind card X in a certain game, establish that as a fact on the ground by relying on additional sources, because maybe something specific in that particular lab is skewing the results. Apart from us, sites across the web such as PugetSystems, Techpowerup, Guru3D, GamersNexus, Techspot and Anandtech, to name a few, are media veterans who understand the importance of reliable hardware results and don't rush to publish spam for clicks. There are of course many other good ones, and this post would never end if we tried to list every outlet that matters to the hardware communities of the world. This is, of course, my own view, as someone who has lived this day in and day out for a good while. I hope I have at least provided some food for thought about the YouTube benchmark industry, and about the concern that the information illustrated in those videos may be unrepresentative - slightly dramatized and inflated for views. The problem is not information tailored to a specific user's question, but those faceless videos on the network. Sometimes, when no proper research exists for a specific technical dilemma, one simply has to draw lines between the reference points that do exist. This discussion remains open to user comments.
    5 points
  10. I'm in favor of open source, and it's great that AMD relies on it rather than on proprietary tools. The problem is that it doesn't rely on it because it supports open source; it relies on it because it has no choice. NVIDIA has a highly developed software department, and the industry is familiar and experienced with its tools. AMD's alternatives are not tools it created. You mentioned ML. There are two options - NVIDIA's proprietary CUDA, or OpenCL, which is open source. AMD supports OpenCL, but not because OpenCL is better or anything; OpenCL is not AMD's - it neither developed it, nor is NVIDIA unable to support it. In fact NVIDIA started with CUDA and OpenCL came much later, which means, as I mentioned here before, that the industry's tools work mainly with CUDA. NVIDIA's software is very good; I would even dare say it is a software company no less than a hardware company. AMD, on the other hand, does not have this advantage. The same goes for Intel, by the way: Intel and NVIDIA help the developers of compilers and of games / game engines, respectively. Guess whose hardware the results especially suit... AMD does not do this, to the best of my knowledge (about this last point I am not 100% sure). Intel also participates in standards decision-making; AMD does not (as above). But there is still room for optimism, because even though Intel invests in software and AMD does not, AMD managed to overtake Intel... or not. Let's not forget that although Intel optimizes for its own processors, the architecture is the same - which is not true for graphics cards. Edit: not that it's a real measure, but I checked the number of repositories on GitHub. NVIDIA has 908, of which 214 are forks (i.e. 694 of its own). Intel has 884, of which 715 are forks (i.e. 169 of its own). AMD has - guess! - 35, of which 22 are forks: 13 of its own in total. Intel has 13 times as many of its own as AMD, and NVIDIA 53.4 times as many. It gives some sense of the order of magnitude.
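As a side note, a count like the one above can be reproduced with GitHub's public REST API. A minimal sketch in Python follows; it assumes the vendors' organizations live under the names nvidia, intel and amd, unauthenticated requests are rate-limited, and the numbers naturally drift over time.

```python
# Count an organization's public GitHub repos, split into forks and originals.
import requests

def own_vs_forks(org):
    own = forks = 0
    page = 1
    while True:
        resp = requests.get(
            f"https://api.github.com/orgs/{org}/repos",
            params={"per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        repos = resp.json()
        if not repos:  # empty page -> no more repositories
            break
        for repo in repos:
            if repo["fork"]:
                forks += 1
            else:
                own += 1
        page += 1
    return own, forks

for org in ("nvidia", "intel", "amd"):
    own, forks = own_vs_forks(org)
    print(f"{org}: {own} original repos, {forks} forks")
```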
    5 points
  11. It is best to remove Driver Booster immediately; it only causes trouble.
    5 points
  12. Leave it, he's a professional. He understands more than you. In fact, he understands more than you about everything: economics, psychology, anything related to technology and computer science. Too bad he never learned to write forum paragraphs without breaking lines in random places, which makes reading his posts one big nightmare - and that is before we address the bizarre tendency to ramble, repeat himself, and attempt ridiculously lofty language (which would be a little more impressive were it not accompanied by lots of spelling mistakes). Add to that the obsessive editing of old posts, which makes it impossible to respond to them without looking like an idiot once the paragraph you were referring to has been deleted (tip: always quote it). And bottom line, go pirate games, because that is what he does with all his many years of experience in the industry.
    5 points
  13. There's really no question here - it's a matter of budget and requirements: if the budget allows a balanced six-core system, go for six cores, and if it allows eight, go for eight. It's a shame to dwell on a point this minor when, in the end, the specification is a whole set of decisions about parts.
    5 points
  14. If Intel is cutting prices, and not because of force-8 typhoons in the East, then AMD is probably doing something right. Intel retains billions in reserves to absorb the cuts - a legitimate move, and it certainly indicates competition. Their sales numbers are exactly what does not interest consumers at all; that interests investors. Why on earth we didn't lock that thread, I don't understand - even when I return from a vacation abroad, I am always amazed it hasn't rested. @Askme @KAKADU999 You are just delusional. How much poison can you hold behind a keyboard... Your parents - I wish them only health - should have taught you not to respond in discussions that don't interest you; that would have spared this forum tens of thousands of unnecessary comments.
    5 points
  15. Oh, lie down already, you crazy old piece of shit. Don't throw your complexes at me. Go buy a couple of second-hand fans and clean them. If you think I have any sentiment for one of these companies just because I purchased a product from one of them, you are completely off the mark. I won't get to it because: 1. Corona. 2. I have better things to do with my time. 3. I will lose no sleep even if my card turns out not to be the fastest in the universe. My God, how much hot air you blow.
    4 points
  16. What's the deal with these messages? What do they contribute? Graphics-card box unboxing porn? Not clear to me...
    4 points
  17. So, at long last, we have finished the first round, in which we measured the eight titles we originally tested on the 6800XT, so we have a reference for comparison between the two. We are very glad we insisted on finding a good 3080 - not settling for the first card we opened, an EVGA - because the second card, an ASUS, turned out to be such a specimen that in any comparison found online it is actually considered the fastest card available = a golden sample of the Ampere architecture. Moreover, it lets us give a decent estimate of the 3090's capabilities as well, because in some situations it is faster, and that was our overarching goal in the first place. In the absence of a 3090 at hand, we got as close as possible under the circumstances, and no one is happier than we are. We very much hope the members appreciate lanzar's hard work, which stretched over hours and days, as well as the considerable financial outlay invested in buying the cards - understand, this is many tens of thousands of shekels, and for that lanzar deserves one huge chapeau. This is much more than what journalists in the field do with their own money. All measurements in this first round are at 1440P; 4K will follow, possibly tonight (if we get to it). ** lanzar, this ASUS 3080 - put a sticker on it and mark it. It passes to me when you finish with it, and you are not passing it to any other customer or friend. My name is written on it, dir balak. Note that in this post we will attach only the latest measurements of the 3080, since all the 6800XT measurements are already at the very beginning of this thread, on pages 1-6; whoever wants can go back and look - there is no point uploading them again, and it's a shame about the space they would take up. After each 3080 measurement we will put up the reference table, so that the improvement we achieved on the 3080 can be tracked against the card's stock performance as measured by Lior / TPU. In the Gears 5 title our 3080 puts out 146.2FPS, compared to 170.8FPS from the 6800XT, so the 6800XT is 16.8% faster in this test; this is TPU's measurement in the same title, which we chose as the reference in this thread. In the Horizon title our 3080 puts out 136FPS, compared to 142FPS from the 6800XT, so the 6800XT is 4.4% faster in this test; this is Lior's measurement in the same title. Note that our accelerated 3080 overtakes the ASUS 3090 OC version - with us it put out 136FPS in this title. In Red Dead Redemption our 3080 puts out 105.6FPS, compared to 116FPS from the 6800XT, so the 6800XT is 9.8% faster in this test; this is Lior's measurement in the same title. Note that our hurried 3080 overtakes the ASUS 3090 OC by an inconceivable gap: with us it put out 105.6FPS while the 3090 managed barely 82.6. Look how much the fast Ryzen 5000 platform contributes - and perhaps also the newer drivers released since, who knows. In Hitman 2 our 3080 puts out 131.5FPS, compared to 146.2FPS from the 6800XT.
So the 6800XT is 11.1% faster in this test. For Hitman we used TPU's work as the reference, since its running parameters could be copied exactly; here too our 3080 flies nicely and opens an impressive gap over the stock 3080 - 131.5 for us versus 122.8 stock. In Assassin's Creed Odyssey our 3080 puts out 91FPS, compared to 95FPS from the 6800XT, so the 6800XT is 4.4% faster in this test; here, with Lior's work as the reference, our accelerated 3080 overtakes the ASUS 3090 OC with its 91FPS. In Borderlands our 3080 puts out 110FPS, compared to 137FPS from the 6800XT (we will note that the second, faster 6800XT card got 138 in the sanity test, but we use 137 as the representative result, of course), so the 6800XT is 24.5% faster in this test; and here is Lior's measurement as the comparison reference - we improved, but not dramatically in this case. In Metro our 3080 puts out 109.93FPS, compared to 118.4FPS from the 6800XT, so the 6800XT is 7.7% faster in this test; against Lior's measurements, the accelerated 3080 managed to match the ASUS 3090 OC. In Tomb Raider our 3080 puts out 168FPS, compared to 177FPS from the 6800XT, so the 6800XT is 4.3% faster in this test; these are Lior's measurements, and here too our hurried 3080 opened a nice gap over the ASUS 3090 OC. So there you have the total of 8 measurements, showing one of the better 3080 cards to be found; and if we compare the results to the reference indices from Lior and TPU, we improved significantly in most cases - mainly because we overclocked our 3080 quite a bit, and also because our platform is a well-optimized Ryzen 5000 rather than the dated Intel platforms Lior and TPU measured on, which particularly hurt performance at 1440P, where the platform and the Intel processor become a bottleneck. We are very pleased with what we learned today; the Ryzen 5000 and the move to the 3080 contribute a great deal. Now we await the 4K measurements, and we will see how they compare to the reference indices.
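For anyone who wants to re-check the quoted gaps, here is a minimal sketch that recomputes them from the FPS pairs in this post. The numbers are the post's own; note the Tomb Raider pair works out to about 5.4% rather than the quoted 4.3%, likely a rounding difference somewhere.

```python
# (title, our overclocked 3080 FPS, 6800XT FPS) - all 1440P figures from this post.
results = [
    ("Gears 5",             146.2,  170.8),
    ("Horizon",             136.0,  142.0),
    ("Red Dead Redemption", 105.6,  116.0),
    ("Hitman 2",            131.5,  146.2),
    ("AC Odyssey",           91.0,   95.0),
    ("Borderlands",         110.0,  137.0),
    ("Metro",               109.93, 118.4),
    ("Tomb Raider",         168.0,  177.0),
]

for title, rtx3080, rx6800xt in results:
    gap = (rx6800xt / rtx3080 - 1) * 100  # how much faster the 6800XT is, in percent
    print(f"{title:22s} 6800XT +{gap:4.1f}%")
```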
    4 points
  18. What a hallucinatory post. You did not buy a complete computer, you bought components, so the warranty applies to each component separately. In principle he did you a favor by locating the source of the problem for you; he could just as well have told you to break your head identifying the defective component and bring only it in for testing. Full monetary credit for the component, when no identical or equivalent component exists, is above and beyond on the store's part. Asking for monetary credit for all the components you bought separately, after using them, is a ridiculous and silly demand.
    4 points
  19. @zone glide I'll play a little nec here and tell you something from a place of "teaching you". The problem with your indecision stems from a lack of experience in the hardware world. You can always wait, and there is always something around the corner. Wait for the RTX4080 - you will get another nec message in the face in January 2022 saying that actually the 5080 and AMD's next product are the real thing, and there will be a huge jump in performance. Or maybe you'll just wait for the 3080 with more VRAM, because that's terribly important - after all, we want to own a computer for two whole decades; just don't forget that by the time such a card exists, just around the corner will be a 3080 Ti SuperDuper or some other invented refresh, so maybe you'll wait a bit longer still. I remember you a year ago, asking about upgrading your antique computer. I guess you didn't upgrade then because you decided to wait for the next generation, and now you are again considering waiting for the next generation. Buy what you need, and stop letting the people here give you anxiety about what will come in two years. No matter what you buy and when, within 6-12 months something will come along that makes your purchase look funny. I bought a 3060 Ti and effectively got a 2080 Super that someone else, maybe a few months ago, paid double for. And next year my card too will be embarrassed in the benchmarks by a card that costs half. That's how it is; that's the hardware world. If you can't handle it, don't buy technology. I have never seen a person spend so much time buying hardware and worry so much about it.
    4 points
  20. If you really want to think about the future: buy a 3080 Ti with 20GB for NIS 5,000, and in two years you will get the RTX4060 for NIS 2,000 with the same performance. What am I getting at? A consumer who wants value for money does not buy these cards up front. Want to insure yourself for the future? Take NIS 2,000, put it aside, buy an RTX3060 when it comes out, and in two years take the 2,000 you saved now and buy the 4060. You save money, you get similar performance down the road but with newer technology, and you can also sell your current card while it still has decent market value and recover part of the amount. You may even avoid being left with an out-of-warranty video card in your system. So what's the conclusion? Why buy a 3080 Ti with 20GB? Because you want the performance it offers - not tomorrow, today.
    4 points
  21. 16GB of memory is twice the gimmick RT is - and certainly next to DLSS, which is the real game changer at NVIDIA today, not RT itself. The day you actually need 16GB on your video card, even at 16K resolution, the card will already be such an antique in terms of computing power that it will long since have left your computer, replaced by a shiny new one. An absolute gimmick, and AMD knows it too; the only reason it is 16GB at all, and not some more modest number like NVIDIA's 10, is that AMD wanted to stick with the same 256-bit bus width. There is no doubt that NVIDIA has won this generation in the meantime. Even for those unimpressed by RT, it still offers more here and now than those 16GB offer here and now - which is nothing whatsoever. And DLSS has no answer from the red side; most of the serious games launching in the coming years will support it, and it is essentially free performance. I would say the 16GB could be useful outside gaming, for those who work with the card in some way, but there too NVIDIA has a clear and distinct advantage for its users in most areas because of much better software and features. A situation has arisen where, in the areas where you might actually have wanted the 16GB (ML, for example), no one wants an AMD card anyway.
    4 points
  22. Hello dear forum members! It's been a long time (6.5 years) since I installed Xeon processors on LGA771 boards that still work to this day (reminder for those who want to remember - link). Today I come to you with another installation I did (against Intel's wishes): upgrading the 6700K I had to a 9900K on a Z170 board (not officially supported). It should be noted that I take no responsibility for this guide; everyone carries it out on their own initiative, understanding the risks involved. The complete installation guide is here: https://linustechtips.com/topic/1118475-guide-running-coffeelakerefresh-cpus-on-skykabylake-motherboards/. Note that there are some updates (following the update of the CoffeeTime software to version 0.92) and additional observations I made during my attempts to install the processor. Each motherboard requires a different installation, but for clarity I will list the highlights for Gigabyte boards here.

1. First download the BIOS files and the CoffeeTime 0.92 software as shown in the manual, along with the FlashProgrammingTool (FPT) software. Place them in folders on drive C.

2. Prepare the BIOS using the software with administrator privileges, as shown in the following image. Note that both the ME and the VBIOS + GOP must be updated so they can work with the processors (along with the appropriate microcodes - and make sure everything is saved!). In the EXTRA tab I also added a memory expansion to 128GB, why not. Also important: under MAC1 add the MAC address of your Intel network card and keep the number for yourself (it can be found in the network card properties).

3. If your operating system is installed on an NVMe drive, and the drive is in MBR format rather than GPT, you will need to convert it to GPT before the mod, using recovery mode and the MBR2GPT command.

4. Great - you have the BIOS, the SSD in the appropriate format, and you are ready to flash. Before that, make sure you have a backup of the BIOS (before editing) on a USB flash drive in case you need to roll back through the BIOS (I needed to). Now comes the step of flashing the BIOS with FPT (on other boards the method may differ, e.g. using a hardware programmer), as shown in the guide. It is important not to disconnect the computer from the power supply at this point, otherwise the BIOS will be corrupted.

5. Once flashing is complete, shut down the computer via FPT only, using the fptw64 -greset command; before the computer restarts, turn it off and do not let it power back on. Disconnect it from the power supply and remove the battery.

6. Before installing the processor, you will need to cover some of its pads depending on the board (Gigabyte requires covering the most) and bridge some of them (also board-dependent). Here is the taping I did, using supplies from a kind AliExpress seller who also included tweezers.

7. Install the processor on the board. Mount the cooler loosely for a moment and make sure the board POSTs (I used the board's debug LEDs to verify) before you put everything together. If it does not come up, you may not have prepared the BIOS or the microcodes properly. If it comes up, shut the computer down.

8. Seat the cooler properly, turn on the computer, and enter the BIOS. Great, you're almost done! In my case the CPU ran at too high a voltage (1.4 VCORE), which loaded the VRMs and caused crashes. This is exceptional, of course, and the VCORE should be lower. In the BIOS, use Adaptive Vcore and drop it by at least 0.100V (that was my case; play with it to make sure the CPU does not get too much voltage) and check stability. Monitor the VRM and CPU temperatures using HWiNFO.

9. Enjoy!

P.S. My old Mugen 2 (which still cooled a Q9300) cools this processor well - better than the 6700K, thanks to the 9900K's soldered IHS as opposed to the 6700K's - and stays quiet. I get higher and more stable frame rates, my brother can encode movies at twice the speed, and I can rest easy for a few more years until I have to upgrade. The computer is stable after I sorted out the voltages. It was very enjoyable and worth it! Just make sure you have a board with a VRM good enough for this, and be prepared for the complexity of the process. Too bad Intel did not let us simply install the CPU the normal way (because as you can see, it works great), but that is what modding is for. I would be happy to help with any question or request!
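For reference, a rough command-line sketch of steps 3-5 above. Only fptw64 -greset is taken verbatim from my notes; the MBR2GPT syntax is the standard Windows 10 tool, and the FPT flash flags are an illustrative assumption - check the guide for your exact board and BIOS region layout:

```
:: Step 3 - validate and convert the system disk from MBR to GPT
:: (from a recovery-environment prompt; add /allowFullOS if run inside Windows)
mbr2gpt /validate /disk:0
mbr2gpt /convert /disk:0

:: Step 4 - flash the prepared image to the BIOS region with Intel FPT
:: (flag layout varies per board - illustrative only)
fptw64 -f modded_bios.bin -bios

:: Step 5 - global reset through FPT, then cut power before the board comes back up
fptw64 -greset
```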
    4 points
  23. I'm thinking about devoting time to researching VRAM occupancy, VRAM caching in modern game engines, and why conventional thinking in terms of VRAM occupancy might be misleading. Not sure it will be particularly popular, but I wonder how much demand there is.
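As a taste of why the conventional reading is misleading: monitoring tools report memory that is allocated, not memory a game actually needs, and engines routinely over-allocate as a cache. A minimal sketch of reading the raw counter on an Nvidia card via NVML, assuming the pynvml bindings (pip install nvidia-ml-py) - illustrative only:

```python
# Read total/used VRAM through NVML (Nvidia cards only).
# "used" counts everything allocated on the device, including engine-side
# caches, so on its own it says little about true VRAM *need*.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"total: {mem.total / 2**30:.1f} GiB")
print(f"used (allocated, incl. caches): {mem.used / 2**30:.1f} GiB")
print(f"free: {mem.free / 2**30:.1f} GiB")
pynvml.nvmlShutdown()
```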
    4 points
  24. Since @captaincaveman has already commented (and rightly so) to @aviv00, I will allow myself to comment here that one says "on the contrary" and not "to a great extent". The origin of the word is Aramaic, for those who are wondering. This is not the first time, hence the comment...
    4 points
  25. Today, AMD's Ryzen 5000 processors are officially released to the world, which is a historic turning point for the processor manufacturer. We took a look at the Ryzen 9 5950X and were amazed at its tremendous power - all the numbers here
    4 points
  26. My slowest core reaches 4750 MHz and the fastest exceeds 5 GHz. Crazy.
    4 points
  27. Please, let's not get personal here. In my opinion, the discussion is interesting and important. If @napoleon45 thinks otherwise, that is his right; he is welcome not to come in here. If you want to explain why @nec_000 is wrong, also happily. Just please give a scientific and technical explanation, just as @nec_000 scientifically explained why it is a novelty and a breakthrough.
    4 points
  28. I informed my wife that I am coming home early tonight, that she should set me up fresh watermelon with Bulgarian cheese in the family room by the TV, nuts next to it, a cold Cola Zero bottle, a red towel at the ready, and that she should take the kids out somewhere... Question: why? I told her - the final game against Maccabi. Let them not dare disturb me.
    4 points
  29. Unbelievable. A person shows up, registers to the forum, and dumps on a series as his first and only action on the site. Are you that bored?
    4 points
  30. It is not possible to split every discussion into a thousand sub-discussions; like it or not, this is a discussion about reasons for buying or not buying. It would be better for the moderators to simply merge the threads.
    4 points
  31. Your interpretation is incorrect: those sites' tests measure actual traffic speed, meaning they test the combination of infrastructure and provider; they cannot separate the two.
    4 points
  32. Enough... this whole mantra is tiring already. One might think Apple products do not suffer from bugs and security issues - especially Safari, which allows hackers free entry. Remind me which iPhone got an update and deliberately became a slow "turtle" device... In short, friend, there are no significant differences between an iPhone and an Android device except the price.
    4 points
  33. I don't understand why anyone thinks AMD can/should/"would behave" differently from Intel... It is another global, capitalist, publicly traded company that has done, does, and will do everything it can to present as many positive quarters as possible. Just business; don't look for friends there.
    4 points
  34. True, nothing to be done - quality costs money, and those who want quality will have to pay for it. Although today's price differences amount to a few tens of shekels, at the time of purchase (somewhere in 2018) the Silicon Power SSD was excellent value for money, so I considered it a worthwhile purchase. In fact, the mere writing of these lines on the computer running that drive proves it to me again. Regarding survivability: as long as there are no statistics showing in black and white that the survivability of Silicon Power drives is too low, I don't think reliability can be determined - certainly not based on the experience of the two or three people who commented throughout this discussion. Needless to say, no manufacturer is problem-free; people have malfunctions with drives of all types and models. Silicon Power may show lower reliability, but that is strictly a hypothesis. And even if reliability is lower, one must weigh how much lower it is against the price level. Regarding reviews, I think differently: a person who loves his product will enjoy it and forget it exists, while a person who suffers from his product will make sure others know about it. That is, a person is more likely to post a negative review about a product that failed him than to bother posting praise for a product he enjoys. It is his way of expressing disapproval of the product he purchased.
    4 points
  35. To begin with, I did not claim the exclusion was unjustified; I asked that you give him another chance. I feel that he annoys you and other people, so you ignore his contribution - not that I claim that whoever contributes can do anything he likes. To put it simply: he did you no good, so you put him away (with a reason for the protocol). If you haven't noticed, this forum matters to him more than to any other user, and you missed that big time. Good luck to everyone.
    4 points
  36. An update: I finally got a reply from their service - they answered me by email! A quick Google search on Gett Basket turns up several results with reviews ranging from 1 to 5 stars, and a number of sites collecting plenty of complaints. I would not go anywhere near them; be smart consumers and research before buying. May everyone have an excellent weekend without corona!
    4 points
  37. In principle, I certainly agree with you: I haven't bought Intel and probably won't (certainly not the 10900 or any evolution of it) as long as this is the market. It's not that there are many gamers who make a living from it as a main occupation - the tournaments in Israel are more about competition and titles than about money. What I meant was that today there are a fair number of gamers who sit and grind for hours to compete in games (each season and its crop), learn from the net, and channel a considerable budget into reaching MAX FPS, no matter the price, as long as the parents' credit card doesn't wear out from all the swiping. Not that I think that's right, certainly the way it comes from an obsessive, unprofessional sporting angle, but the phenomenon is growing; feel free to check how many Discord communities there are in this country for every battle-royale game, and what the mood is when the guys competing there choose specs. Again, not wise consumerism, but it's a hobby where the net "teaches" you what to buy, and the herd... well, the herd. But to each his own.
    4 points
  38. Update for early December 2019: there is more and more evidence that DELL's service quality does not quite justify its reputation. In light of this, I have decided to stop over-weighting DELL as a superior service provider that gives it an advantage over another manufacturer with the same specs.

Hi, everyone. Below are some important parameters to consider when deciding which laptop to buy. For your use! So, what makes a laptop good?
a. Good hardware.
b. Good performance in practice.
c. Good build quality.
d. Long battery life.
e. Low weight.
f. A good warranty and a quality provider giving good service.
g. As low a price as possible.

1. The two laptops in question - the Lenovo E490, which receives IBM service rather than CPM like most of Lenovo's simpler laptops, and of course the Dell - enjoy high-quality and reliable service, which is the most important parameter for a laptop over time. Any laptop can break down, cheap or expensive alike; then the quality of the warranty and the quality of service are critical.

2. So, in addition to performance, screen quality, and easy, low-cost upgrade options, these two laptops stand above everything else in their price range in terms of warranty and service quality. True, there are thinner, prettier machines, etc., but as a whole of all the considerations, each of the other laptops on your list falls short in one section or another.

3. As for the i7 vs. i5 processor - in practice this is almost meaningless. The i7 differs from the i5 in a higher frequency, but in 99 percent of laptops, and certainly within your budget range, what determines performance is not what's written on the spec page but the build, the battery, and the cooling system, which throttles processor performance so as not to exceed the thermal envelope set by the manufacturer. So in practice there is no performance difference between the processors, and sometimes it's even reversed.

Now, the Lenovo has an excellent keyboard, because it's a ThinkPad; you're welcome to check. But it is not backlit. The DELL 5481 keyboard is backlit, but battery life is at least an hour shorter than the Lenovo's.

Every company has series and sub-series of laptops. Lenovo, for example, has a home series, a business series, and a premium series. The S340, like its predecessor the S330, belongs to the manufacturer's basic series - that is, the most basic it can produce in terms of the quality of the product and its components, and I am trying to use polite language... you will understand on your own. More decent models in the same basic series are the S540, and I don't even know if there is an S740 as well. In other words, each series consists of three families. Of course there are other series, like Yoga, etc. For that matter, the E490 belongs to the business series and shares the level of the ThinkPad series I mentioned earlier. At the top of the ThinkPad series stands the CARBON X1, considered the best laptop in its category (a laptop for business people, the opposite of a student machine, and it costs accordingly). Comparing the E490 to the S340 is like comparing a Ferrari to a Mazda. Service for the S340, part of the IDEAPAD family, is also provided by CPM, which can be counted among the worst in the country - unlike IBM's, reputed to be among the best in the country alongside Dell's. All as of 2019. The laptop examples are for illustration only; it is clear that models will change over time. What will not change are the considerations for a wise choice.
    4 points
  39. Truthfully, I didn't mean to discuss politics at all, but rather consumerism and what it means for us as consumers. Against this background, I appreciate the man's work. I am old enough to remember a great deal of Israeli politics, and honestly, I do not remember, in documented Israeli history, a political actor in the field of consumerism and the cost of living who has done more for us consumers. That's all I'm saying; that's what I mean - less about the man himself, more about the essence of what has been done here for us in the last decade by players no one expected anything from. Much has changed, but there is still a way to go. For example, I'm annoyed at the gas grill issue, very much. So much so that I still haven't bought one. It's a matter of principle for me: I am willing to give up a gas grill and keep working on charcoal, just not to pay an importer an exorbitant margin because "I am his captive customer" - the importer's own words, which is amazing and outrageous.

As for the small contribution by which Creative Labs products started being sold from Amazon to Israel, which had actually been blocked until last summer - that's on me. The chain of events goes like this: when I was looking to purchase Creative products this year, I suddenly found that in Israel they are much more expensive than abroad. I also found that Creative does not allow their products to be sold to Israel through major international suppliers like Amazon and others. A short investigation into why found that they were trying to protect the Israeli importer (their representative in the country) and block any private import channel of their products to Israel, just to allow it to sell here at a high, exorbitant price. Along the way I discovered (and indirectly taught Creative Europe) that in the distant past their representative in Israel was Astronix, and they still thought it was. Thanks to me (yours truly), they learned that their franchise in Israel is no longer in Astronix's hands but moved to Amtel a few years ago. That is how big the mess in their European offices was. Consider how many phone calls and how much correspondence I had to make just to understand a situation Creative Europe itself was unaware of: they initially sent me to Astronix, who said they had long since stopped being Creative's representative; when I returned to Creative with that information, they inquired thoroughly and came back to me with the details of the current importer (Amtel).

You already know this writer - this attempt to protect Astronix (sorry, Amtel) angered him. I conducted correspondence with Creative globally, with a copy to the CEO in Singapore, some of it assertive, with a tangible threat: if they did not take the initiative to end the protection of the Israeli importer through the embargo they had created, the investigation I did, with its evidence, would go to the Israeli regulator, which long ago established (I checked) that parallel-import barriers are punishable by cancelling the franchise of the brand's representative in the country, leading to a ban on importing the brand through any official channel. About two months after my correspondence with Creative began, to my delight and true surprise, they changed their policy and ended the embargo on importing their products to Israel through Amazon. So here we are - sometimes the fight gets results.

Even the importer's representative in the country, a very nice guy by the name of Achishi, called to talk to me on the phone and apologize: this was apparently something they had forgotten about, which still mistakenly existed (a remnant of the old business practice between importers in Israel and overseas suppliers, left over from the agreement with Astronix and not in any way related to Amtel). They took it seriously, and it reflects well on Creative. It truly surprised me, but it's a fact - it worked. Along the way, to illustrate Creative's dealer prices in Israel, they agreed to sell to me personally at the dealer price, whatever I wanted. That is nice, and really unexpected too. They know how to please and reassure an annoyed customer, and that is what they aimed for. I was a moment away from involving the regulator; to my delight it was resolved without that. Creative and their Israeli representative moved fast, and it flatters them.
    4 points
  40. Step one is to photograph the package where it was thrown outside, before you pick it up. Step two is to photograph it, still closed, after you have collected it (indoors, with proper lighting). Step three is to photograph the contents of the package (even if the inner cartons are wet) after removing them from the outer box. If you haven't done all of the above, maybe you can at least use your camera, or your neighbors', showing the package being thrown over the gate and standing all night in the rain. If you don't have a camera covering the yard - you can order one from Amazon. If you have done none of the above, it will be hard to prove the allegations. As for the parts themselves - they do not absorb moisture. If you opened the package and they were dry, they probably didn't get wet. Quite a few packages (board or processor boxes) are also protected by hermetically sealed plastic - memory sticks, for example, come in sealed plastic packaging - and those parts are even more protected.

Option A - the parts are wet, i.e., you can see water on them. Take photos and send them to Amazon, and do not assemble anything or connect power!

Option B - the parts were wet but dried before you opened the packaging (which makes little sense if they were enclosed in wet outer packaging in the cold). In that case, if the water was clean rainwater, there should be no corrosion without electricity (the only power present is the BIOS battery on the board), but you may see soluble residue left on the board after the water dried. As mentioned, this scenario makes little sense, but if there is any residue, photograph it and send it to Amazon.

Option C - the parts are dry. If there is any concern, let them sit in a dry, warm room for a full day (not under the air conditioner or by a window, and out of reach of anyone who could do them harm).
    4 points
  41. Nonsense, that's simply wrong - a stigma created because high-tech (in the technologies used today) is a young field, and the people in it are still quite young; they simply haven't had time to get older... Someone who is 35 and stuck with the technologies of 20 years ago won't find a job even after two years of searching, but someone who is 35 or even 40 and has kept up to date with current technology has no problem finding work, and is even in high demand.
    4 points
  42. On this occasion - something I feel a moral obligation to note for our readers, because I know they will ask. Had Threadripper 3000 processors reached us in reasonable time, we would have delayed the Intel CPU review until both launches and presented concrete data for both together in the two separate reviews. These processors are indeed on their way to us, but AMD decided that there would be very few review samples worldwide, and we are apparently in the "second wave". They are expected to arrive in a few days, and there will be a comprehensive review that will also include more in-depth tests for all the processors. In addition, this Intel review will be updated retroactively to maintain its relevance and correctness with respect to the entire market, so that whoever searches for the 10980XE will also find results from its competitors. On another note, because I don't often get the chance: thanks to the hardware and computer community who continue to support, read, and watch the content we prepare. The hard core interested in blue-and-white content is the fuel that keeps me awake for many hours gathering comparison data. Thanks. And there will also be more video content soon, because there are more tools for producing it.
    4 points
  43. The launch of the Ryzen 3000 generation last summer and its move to 7nm lithography - in effect the product line that hit Intel below the belt - led to AMD's best business quarter in 14 years. The picture is interesting not only for a potential investor in the capital market but also for computer enthusiasts, because this news means there is good competition in the market, which in turn helps bring products to the general public and, at the same time, helps speed up technological progress in the field: when fierce competition is underway, all players push forward trying to catch up with their competitors. /amd-reports-third-quarter-2019-financial-results

Highlights: until the launch of the first Ryzen generation in April 2017, AMD was an unprofitable company selling around $4 billion a year. With the first Ryzen launch, AMD closed 2017 with a relatively small loss compared to what had been usual, with sales rising by some 25% to roughly $5.3 billion - the year that marks the turning point. 2018, accompanied by the launch of the second-generation Ryzen 2000 series, brought continued growth in market share, a first annual finish with a minimal profit, and sales of about $6.5 billion. 2019 marks the move to the third-generation Ryzen 3000 series on the new 7nm lithography, and for the first time AMD sells more in the amateur/private consumer segment than Intel does. You can see that since Ryzen was born, AMD has grown by about $1 billion in sales a year, rising from a low of around $4 billion in 2016 toward $7 billion by the end of 2019. This is a fast pace of around 20-25% per annum, something the industry considers extreme and that indicates a major boom at the company. Also, gross margin increased from a poor area of about 23% at the close of 2016 to a fairly optimistic 43% (Q3 2019 quarterly data).

This is important to us not just from the investor's point of view but from the consumer's: the better the competition, the better for us. Competition grows as the underdog becomes stronger at the expense of the strong player, forcing it in turn to bend for the benefit of the consumer. And we definitely saw what this did to Intel in terms of pricing and product launch rate. It will only continue that way. AMD Q3'19 Earnings Slides.pdf
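A quick sanity check on that growth pace, using the approximate revenue figures above - treat the inputs as ballpark, not reported, numbers:

```latex
\text{CAGR}_{2016\to 2018} = \left(\frac{6.5}{4.3}\right)^{1/2} - 1 \approx 23\%\ \text{per year}
```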
    4 points
  44. Competition does not mean "threat" from competitor A to competitor B; that is a shallow way of looking at things which says nothing. Such a threat exists, but you did not bring the figure that actually describes it. Look at what is happening at Intel itself: the pressure it is under, what that has done to its processor portfolio and pricing, not to mention the pressure on the production side. Look at Intel's profitability, which fell because it had to sell at lower prices. That is the right figure. And back to our case: competition certainly means that in the segments where the two (and only two) manufacturers operate, the consumer benefits. Here, in the home PC processor segment, the competition since 2017 has been nothing short of dramatic. And not only in this segment: in HEDT, too, it has pushed forward the number of available cores as well as the prices. What did we have in Intel's 6900 series - max 10 cores? And at what price, $2,000? $3,000? Just now Intel cut its HEDT prices in half. That's a lot. And that is after it already lowered HEDT prices last year, as well as two years ago, each time in response to the launch of the then-current Threadripper series, because each Threadripper iteration undercut Intel by at least half. Intel had to respond. It is a revolution in this market, and it would not have happened without the competition. And the core count in HEDT is now 32, no longer 10 - and another year has passed; this year we will probably see even 64 cores. That means the core count, if we pause on the details for a moment, has multiplied more than six-fold in just two years. Incomprehensible. This is the way to analyze the competition AMD's breakthrough has brought and its significance for the consumer 👍 You and I get far more product today for less money, and progress, compared with the great stagnation that prevailed up to 2016. I'm sure you are as happy about it as any other enthusiast in our field. As for the sales ratio, let's see what it looks like after AMD starts grabbing relevant market share in the server segment as well, because that's where the big money is - most of the sales are there. When AMD digs in there, the business will change. It currently holds something like 3% of that market, if my memory serves - data as of the beginning of 2019 - meaning there is still very wide potential to grow.
    4 points
  45. The discussion is legitimate in any case; there's no need to reopen the whole 2500K debate, so the question is simply whether it is still good enough or no longer relevant. I'm not trying to convince anyone to upgrade from a 2500K, but I also can't wholeheartedly recommend that anyone buy a 2500K today as an "upgrade" for gaming - in my opinion that's a mistake. I would not recommend any 4-core without HT for purchase in 2019, all the more so for a user who probably doesn't want to mess with overclocking, which is this processor's whole ticket to relevance in 2019. That sums up my doctrine on the subject... for the thread opener's consideration. I still recommend opening a thread and explaining exactly what you need from a computer. Ask a generic, dull question and you will get a generic, dull answer - which is currently the Ryzen 3600/2600.
    4 points
  46. Tomorrow at 16:00 Israel time - some pictures, because that is the date when "unboxings" are allowed. Exactly two days later, on Wednesday at 16:00 - reviews with performance tests. The little big Navi will be there, the big big Navi will be there too. The very biggest Navi gets its own launch later.
    3 points
  47. 3 points
  48. You talk as if you were some kind of prophet and everything you describe is exactly what everyone said all along. No one thought RT and DLSS were ready at the launch of the 20 series, and video cards going down in price when a new generation arrives - how surprising and novel. Here is what you said at the time (September 2018); is anything written there correct? No, not even close. Here is what I answered - interestingly, this is exactly what happened with the 5700 XT and its little brother. And here is what you answered. I will let others judge whether you were a prophet or not. And if anyone is interested in knowing what people really thought, feel free to read here: https://hwzone.co.il/community/topic/588303-violet-cards-rtx-20-launched-discussion-concentrated/#comments or here: https://hwzone.co.il/community/topic/588302-2080-flop/?tab=comments#comment-5123141
    3 points
  49. https://www.reddit.com/r/Amd/comments/epmvtc/what_mhz_ram_speed_for_ryzen_5_3600_is_best/ https://www.tomshardware.com/reviews/amd-ryzen-3000-best-memory-timings,6310.html Here is something specific about the 3600, and I don't have time for more than that - I'm off to work. I don't think anyone would recommend a B550 board to her with memory under 3000MHz; in my opinion that's a bad pick. No need for more than a B450 chipset - she doesn't overclock, and it's just money down the drain. The 970 EVO is an insanely expensive NVMe that was not meant for gaming at all; its speed advantage shows mainly in copying large files - content creators, video editing, etc. For gaming it will give her no extra performance; drop it. A 27" FHD screen?! The ideal resolution for 27" is QHD; FHD belongs on 21-24" screens. I wonder whether you are really on top of the details of what you recommend. Really off to work now. Good luck to everyone.
    3 points