14.05.2008, 10:27
Again, regurgitated talking points. You might try reading a few detailed tech articles and educating yourself, because while what you're saying might sound plausible to some people here... well, let's just delve into some details again, shall we? This time, try to argue with me on a level that displays technical knowledge of the subject material rather than things you've read at Gamespot.
The number of stream processors is not relevant in roughly 95 percent of the PC games out there today. Shader/stream processor count is only ONE way of measuring speed, and definitely not the best single one. The Radeon 3870 has 320 shader processors in a wholly different array architecture, yet it's not as fast as an 8800 with 112 or 128 (depending on variant; even the 64-stream 9600GT outperforms the 3870 in many games), so raw numbers are not all you need to look at. While it is of course possible that games will be designed to take advantage of the greater number of stream processors in a 9800GX2, it's unlikely to matter, because this "dual core via SLI" GPU solution is likely to be eclipsed by the upcoming GeForce 10 series (which may or may not feature a multi-core GPU in its lineup).
If you'd read the actual specification sheets for the GX2 and the regular 8800 cards, you'd notice a few important points. For example, they always list "double" the 8800 spec for ROPs, fillrates, and so on. This is a LIE: it represents purely theoretical performance, as if the 9800 could run both cores at full strength in a truly load-sharing SLI solution. That doesn't exist right now. Even games that take proper advantage of SLI (the best nvidia could design after over 5 years of R&D) don't do so efficiently. Not even SLIGHTLY efficiently! There have been a few tech articles recently about quad-SLI via two 9800GX2s, and they are often outperformed by two 8800-series cards... fancy that! And all of this is based on games that actually gain some benefit from SLI, which to date remains a pitifully small list. Nvidia's "official" list of games that support SLI is, again, a lie. The games that "support" it often merely show up as "SLI enabled" on the load-sharing graph, or give a 5-40 percent boost only when load sharing is possible (i.e. in areas that are not taxing the card's internal bandwidth too heavily; at least that's my understanding, and on this point I may be incorrect).
On PCIE 2.0 once again: you pay too much attention to marketing and hype. It's true that 2.0 doubles the theoretical bandwidth, but remember the AGP 8x hype days? If not, here is what happened: AGP 4x was the top, 8x was being pushed, and it was slowly adopted on the theory that cards might come out that would need the extra bandwidth of 8x (which is about half the speed of a PCIE 1.1 x16 slot, if I remember right, at roughly 2.0GB/s). At the time, NO card, not a single one, needed even a full 4x AGP link, but they wanted to plan ahead. To date, and to the best of my knowledge, there still isn't a single card that uses more bus throughput than AGP 8x provides. There are a couple of very obscure articles about this that I can dig up if you want to read them, and if you do some research you'll see that a couple of companies are still doggedly supporting AGP with the latest Radeon HD3000 series. Of course these cards are limited, because the CPU support on older AGP motherboards just isn't there. A 3850 paired with a P4 2.8 is stupid, but notice that bus throughput is not the limiting factor there; it's the slow CPU. That was my point all along.
http://www.newegg.com/Product/Product.a ... 6814102730
http://forums.guru3d.com/showthread.php?t=256030
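If you want rough numbers to back that up, here's a quick back-of-the-envelope sketch using the standard published theoretical peak figures (spec-sheet numbers, not anything I've benchmarked; real-world throughput is always lower):

```python
# Rough theoretical peak bandwidths (per direction) for the buses discussed above.
# These are the standard published figures; actual sustained throughput is lower.

GB = 1e9
agp_1x = 66.66e6 * 4  # 66 MHz strobe x 32-bit (4-byte) bus ~= 266 MB/s

buses = {
    "AGP 4x":       agp_1x * 4,   # ~1.07 GB/s
    "AGP 8x":       agp_1x * 8,   # ~2.1 GB/s
    "PCIe 1.1 x16": 16 * 250e6,   # 250 MB/s per lane after 8b/10b encoding
    "PCIe 2.0 x16": 16 * 500e6,   # 5 GT/s doubles the per-lane rate
}

for name, bw in buses.items():
    print(f"{name:13s} ~ {bw / GB:.2f} GB/s")
```

Which is exactly the point: AGP 8x sits at about half of a PCIE 1.1 x16 slot, and no current card even saturates that.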
AGP was phased out because bus technology needed to keep ahead of graphics card technology, and the best way to do that was to simply overengineer. Nothing wrong with that, but at this point in time PCIE has more than enough headroom for the next few years. Plus, 2.0 is backwards compatible, so a card designed for 2.0 will still work in a 1.1 slot, and the day when 1.1 is the limiting factor in your graphics performance is far, far away.
As for the Core 2 Duo comment: this discussion has been done to death and the argument always ends the same way. Video throughput is currently limited by the hardware available. The 8800GTX 768, still the single fastest card out there for high-resolution gaming, does not see ANY increase in performance as processor speed is raised past ~3.2GHz. I will grant you that for a few VERY SPECIFIC applications, a 9800GX2 with a Core 2 Duo at 3.8-4GHz will outperform a Q6600 at 3.2GHz with an 8800GT/GTS512 (TripleHead2Go with IL-2, for example!), but these cases are not common. The benefit of having four cores is already, today, much greater than being restricted to two, and the technology is taking off in a big way. I would estimate at least 4 out of every 5 big titles released in the next two years will scale across however many cores you have, meaning the more you have, the better off you are. Do some math here: 4 x 3.2 = 12.8 versus 2 x 4 = 8. Even with a 4GHz overclock the dual core still comes out far behind in aggregate, and most Q6600s are actually capable of reaching speeds well above 3.2GHz. Until we get the next generation of video hardware (and with ATI flailing, things are not looking bright; have you looked at the spec sheet for the 4870? ICK.) we have to work from the 3.2GHz-limit model. If you do a lot of non-game work that requires a fast processor AND you use software that doesn't take advantage of multiple cores, then a C2D is a pretty smart choice. Don't get me wrong, a C2D is never a "bad" choice, but it isn't the "best" choice for the majority of people out there.
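To spell out that back-of-the-envelope math (a deliberately crude model: it assumes a workload that actually scales across all cores and ignores the ~3.2GHz GPU-bound ceiling above; the clocks are just the ones from my example):

```python
# Crude aggregate-throughput comparison from the paragraph above.
# Assumes a perfectly multi-threaded workload; real scaling is worse than this.
def aggregate_ghz(cores, clock_ghz):
    return cores * clock_ghz

q6600_oc = aggregate_ghz(4, 3.2)  # quad core at a typical 3.2 GHz overclock
c2d_oc   = aggregate_ghz(2, 4.0)  # dual core even at a big 4.0 GHz overclock

print(f"Q6600 @ 3.2 GHz: {q6600_oc:.1f} aggregate 'GHz'")  # 12.8
print(f"C2D   @ 4.0 GHz: {c2d_oc:.1f} aggregate 'GHz'")    # 8.0
```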
PS: Bioshock was a monumental disappointment. That is my opinion as a very dedicated, very long-suffering PC gamer who has played both prequels (and other TTLG games) many times. I don't suppose you played System Shock 2? Pity. And DX10 on an engine that didn't even properly utilize DX9 OR support widescreen... pardon me while I have a good chuckle. MS sure knows propaganda, boy oh boy... and before you claim it was widescreen, do some more research. Vert-minus without an FOV adjustment is not acceptable for any PC game trying to be "A-list". Thank god for racer_s.
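For anyone unfamiliar with the vert-minus complaint, here's a rough sketch of the standard FOV trigonometry (the 90-degree 4:3 baseline is just an illustrative number, not Bioshock's actual value):

```python
import math

def hfov_from_vfov(vfov_deg, aspect):
    """Horizontal FOV implied by a vertical FOV at a given aspect ratio."""
    v = math.radians(vfov_deg)
    return math.degrees(2 * math.atan(math.tan(v / 2) * aspect))

def vfov_from_hfov(hfov_deg, aspect):
    """Vertical FOV implied by a horizontal FOV at a given aspect ratio."""
    h = math.radians(hfov_deg)
    return math.degrees(2 * math.atan(math.tan(h / 2) / aspect))

base_hfov = 90.0  # illustrative 4:3 baseline
base_vfov = vfov_from_hfov(base_hfov, 4 / 3)
print(f"4:3  vertical FOV:      {base_vfov:.1f} deg")                     # ~73.7

# Vert-minus: horizontal FOV stays at 90, so the vertical view SHRINKS at 16:9.
print(f"16:9 vert- vertical:    {vfov_from_hfov(base_hfov, 16/9):.1f} deg")  # ~58.7, you see less
# Hor-plus: the 4:3 vertical FOV is kept, so the horizontal view widens instead.
print(f"16:9 hor+ horizontal:   {hfov_from_vfov(base_vfov, 16/9):.1f} deg")  # ~106.3, you see more
```

That's why a vert-minus game on a widescreen monitor shows you less of the world than the 4:3 version, not more, unless something like racer_s's FOV fix is applied.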
Loooong post sorry, but I've got to combat the spread of disinformation.