Conclusive thoughts

Looking back at the findings in terms of stability and performance, this Back-to-Back CAS Delay timing reminds me of the Performance Level setting on the Core 2 Duo platforms we've been playing with for so long now. In fact, I think it's pretty much the equivalent: the Performance Level timing is described as tRD, or Read Delay, which is quite similar to what the B2B timing is referred to as: Burst Read Delay. Hardware enthusiasts will agree with me that tRD was one of the most powerful timings on the C2D platform, especially in terms of performance.
As we already explained in the second part of the first page, this timing is vital when trying to stabilize a high-frequency memory overclock. Both Leeghoofd, my fellow Madshrimps reviewer, and I have experienced exactly the same behavior when trying to improve stability above 1GHz memory (2GHz effective): increase B2B to 10 or even 12 and you'll be able to get it running flawlessly. The downside, of course, is a loss in performance.
For those who have an i7 processor with a locked multiplier, this timing might be the key to a higher BCLK frequency, especially in combination with high-frequency memory. As already said, on the Rampage 2 Gene I was only able to run 200/2000 after increasing the B2B timing to a value of 12. For those who want to tune their memory for the highest performance, this timing might also be interesting when your memory kit isn't among the highest-binned. As the table on the previous page already showed: 1600CL8 isn't slower than 2000CL8 by definition, as long as you're able to keep the B2B value as low as possible.
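To see why a lower-clocked kit with a tight B2B value can keep up with a higher-clocked one, it helps to express the timings in nanoseconds rather than clock cycles. The sketch below is illustrative arithmetic only: the helper names and the example B2B values are my assumptions, not figures from the benchmark tables.

```python
# Convert memory timings from clock cycles to nanoseconds.
# A DDR "effective" rating (e.g. 1600 MT/s) is twice the command clock,
# so one clock at DDR3-1600 runs at 800 MHz (1.25 ns per cycle).

def cycle_ns(ddr_rate_mts):
    """Duration of one command clock in ns for a given DDR effective rate."""
    return 1000.0 / (ddr_rate_mts / 2.0)

def timing_ns(ddr_rate_mts, cycles):
    """Absolute time a timing of `cycles` clocks takes at this speed."""
    return cycles * cycle_ns(ddr_rate_mts)

# CAS latency 8 at both speeds:
print(timing_ns(1600, 8))   # 10.0 ns at DDR3-1600 CL8
print(timing_ns(2000, 8))   # 8.0 ns at DDR3-2000 CL8

# A B2B penalty of 12 cycles at DDR3-2000 costs 12 ns between bursts,
# while a B2B of 4 at DDR3-1600 costs only 5 ns -- which is how the
# slower kit can come out ahead on sustained reads:
print(timing_ns(2000, 12))  # 12.0 ns
print(timing_ns(1600, 4))   # 5.0 ns
```

In other words, the 2000CL8 kit saves 2 ns on the initial access but gives back 7 ns per burst pair through the relaxed B2B setting, under these assumed values.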
We already sent this feedback to different motherboard manufacturers, and MSI has already given us a beta BIOS to play with the B2B timing. Strangely enough, apart from Asus and MSI, no other motherboard manufacturer has exposed this timing in the BIOS. Judging from overclocking capabilities and memory performance, most motherboards have this particular timing set at 10 or 12. Also, the Asus motherboard reports the auto setting as "0", but we are not entirely sure the timing is indeed set to a value of "0", since first tests led us to think the auto setting is closer to "4" than a real "0". Let's hope other manufacturers follow and give the end user the opportunity to manually change this rather important memory timing.
More tests will be conducted soon and you'll hear from us in the forums!
To end with, I'd like to thank:
Milan from Asus for the Rampage 2 Gene
Manu from Tones for the Core i7 965
Leona, Hendry and Eric from MSI for the motherboard and taking the time to answer my mails
Albrecht from Madshrimps for providing me with factual data on the B2B timing.
Q: "On page 1 it sounds like no matter what setting you use, its not stable."
A: No, what I say on page 1 is that one of the weird characteristics of the issue is that the instability doesn't scale. It's not because you raise the timing by one that you'll get a more stable system by definition. I tried from 0 up to 10 and every single setting was equally unstable: 2M no problem, 4M crashing after 1 or 2 loops. The non-scaling characteristic also shows at the point where it gets stable: at 11 I couldn't do 4M, at 12 I could do anything.
Q: "So you mean, at some point losening up b2b doesnt improve stability, that memory speed is simply unstable and changing b2b doesnt change that."
A: That's something I forgot to mention in the article, well, at least forgot to mention explicitly. It's indeed true that the instability is only under load ... and apparently not under all load. As the article states: 2M was completely stable, copying files also ... 4M crashed after 2 loops or so.
But changing the B2B value definitely helps to get more stability, no doubt about that. The problem is that it doesn't scale like you'd expect. For instance, tRAS you can increase by one and make the system a bit more stable. With B2B, that's not the case: it's either fully stable or not at all.
E.g.:
0 - unstable
4 - unstable
8 - unstable
9 - unstable
10 - stable
11 - stable
So, 0 is just as unstable as 9.
Q: "I don't understand the Performance scaling graph on page two"
A: I have had a colleague ask me about that graph as well, haha. Basically, I calculated the effect of changing each variable in the different tests. The longer the bar, the more effect a certain variable has in that particular test. It's a different representation of the five other performance charts. I thought it would be more clear, but apparently the opposite is true. So, for instance, in the Everest-Copy benchmark, the B2B timing has the most effect.
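One way to read "effect of changing each variable" is as the spread of benchmark scores you get while sweeping only that variable. The sketch below shows that interpretation; it is my reconstruction of the idea, and the function name and all benchmark numbers are made up for illustration, not taken from the article's charts.

```python
# Per-variable "effect" as used in a scaling chart: for each setting,
# take the spread (max - min) of scores obtained while only that
# setting changes; the biggest spread gets the longest bar.

def effect(scores):
    """Spread of benchmark scores while varying a single setting."""
    return max(scores) - min(scores)

# Hypothetical Everest-Copy results (MB/s), one sweep per setting:
results = {
    "B2B":       [11200, 11900, 12600],  # sweeping B2B 12 -> 4
    "CAS":       [12200, 12450, 12600],  # sweeping CL 9 -> 7
    "Frequency": [12100, 12350, 12600],  # sweeping 1600 -> 2000
}

effects = {name: effect(scores) for name, scores in results.items()}
for name, e in sorted(effects.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {e} MB/s spread")
# With these made-up numbers, B2B shows the largest spread, i.e. the
# longest bar in the chart.
```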
Q: "I find the title very misleading, i like the article but its not at all what i expected to find with that title"
A: Sorry about that!