The previous Radeon HD 4000 cards hit the sweet spots just right in price, chip cost and performance. While the HD 3870 of the previous generation was an underpowered card, the 2.5x increase in shaders provided enough power to compete with Nvidia and its GT200-based cards. The RV740 was even better, building on that success in die size and compute-to-bandwidth ratio, which was more balanced in the HD 4850 than in the HD 4870.
Today, these new cards are at least as expensive to build as the old G92a, and probably more so if TSMC's 40nm process is still a problem, due to the clear shader power overshoot that happened with the "Evergreen" chips.
Turns out I wasn't too far off on those claims. On Monday, Anandtech published an article that unveiled many of the design targets for the "Evergreen" line of cards, namely those set for the Radeon HD 5800 series:
What resulted was sort of a lame compromise. The final PRS was left without a die size spec. Carrell agreed to make the RV870 at least 2x the performance of what they were expecting to get out of the RV770. I call it a lame compromise because engineering took that as a green light to build a big chip. They were ready to build something at least 20mm on a side, probably 22mm after feature creep.

Twice the performance and DX11 support? A big die, obviously. It would also need a suitably wide memory bus, definitely more than the final 256 bits, for that increase in shader power to translate into actual performance gains. Someone forgot that and, though the lead engineer on the design defended the Radeon HD 5800 cards, he rightfully wasn't very happy with the early targets:
Carrell reluctantly went along with the desire to build a 400+ mm2 RV870 because he believed that when engineering wakes up and realizes that this isn’t going to be cheap, they’d be having another discussion.

Sideport technology was one of the first things to go, which meant that once again two GPUs could not be combined into a high-performance, micro-stutter-free card. The memory bus fell short of what is needed to keep 1600 "shaders" properly fed, but in the end the title of the story holds true: AMD showed up to the fight. Nvidia is still struggling to deliver its new architecture, problems with TSMC's 40nm process have hit it in worse ways than what AMD faced with the "Cypress" Radeon HD 5800 GPUs, and the end result will very likely materialize as first-generation GPUs cut down to just 448 shaders instead of 512.
In early 2008, going into February, TSMC started dropping hints that ATI might not want to be so aggressive on what they think 40nm is going to cost. ATI’s costs might have been, at the time, a little optimistic.
Engineering came back and said that RV870 was going to be pretty expensive and suggested looking at the configuration a second time.
Which is exactly what they did.
The team met and stuck with Rick Bergman’s compromise: the GPU had to be at least 2x RV770, but the die size had to come down. ATI changed the configuration for Cypress (high end, single GPU RV870) in March of 2008.
Nvidia has another GeForce FX 5800 on its hands, and while AMD's top chip is not groundbreaking, it is the undisputed leader and is available in quantity.
After what I call the misstep with the "Cypress" chip, we may not see the results of this experiment with big dies in the upcoming "Northern Islands" architecture, but only in the one after that. By then, it could very well be 2008 all over again. Or so I hope.