RAM and Latency: What You Need to Know

No one likes a slow computer. Well, at least not anyone I know. People spend thousands of dollars on upgrades and cutting-edge systems just to compute a little faster. However, buying new hardware is not the only way to get a quicker computing experience. For those looking to squeeze the last bit of performance out of their machines, there are many ways to optimize a system for speed.

Overclocking your CPU, tweaking your registry, and fine-tuning the BIOS are some of the most common ways to optimize your system. Another often overlooked method is reducing your RAM’s latency.

RAM latency comes into play whenever the CPU needs to retrieve information from memory. To read data from RAM, the CPU sends out a request over the front side bus (FSB). However, the CPU operates faster than the memory, so it must wait while the proper segment of memory is located and read before the data can be sent back.

RAM latency is measured in wasted FSB clock cycles, since the data travels over the FSB. The bigger the latency number, the more FSB clock cycles are wasted waiting. The goal in reducing latency is to get the data back to the CPU in the fewest FSB clock cycles possible.
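To see why cycle counts matter, it helps to translate them into actual time. This is a quick illustrative sketch, not a measurement tool; the bus frequencies below are hypothetical example figures, not values from any particular system.

```python
# Illustrative sketch: convert a latency measured in FSB clock cycles
# into wall-clock time. The frequencies used are hypothetical examples.

def latency_ns(cycles: int, fsb_mhz: float) -> float:
    """Time spent waiting, in nanoseconds, for a given cycle count."""
    # One cycle at 1 MHz lasts 1000 ns, so scale by the bus frequency.
    return cycles / fsb_mhz * 1000

# The same 3-cycle latency costs less real time on a faster bus:
print(latency_ns(3, 200))  # 15.0 ns at 200 MHz
print(latency_ns(3, 166))  # ~18.07 ns at 166 MHz
```

This is also why raising the bus speed (discussed next) cuts latency even when the cycle counts stay the same: each wasted cycle simply takes less time.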

The easiest way to reduce RAM latency is to increase the speed of the front side bus. This means that the FSB can send and receive data between the CPU and memory faster. However, this also overclocks the CPU, RAM, and possibly the AGP bus as well.

Overclocking your system will void your computer’s warranties and could damage or even destroy it, so only attempt it if you’re willing to risk frying your computer. Adjusting your PC’s FSB is usually done through the BIOS or through jumpers on your motherboard, although not all motherboards support overclocking.

A safer method is to adjust your RAM’s timings, although this can still potentially damage your system and usually yields only nominal performance gains. There’s no simple way to say it, so I’ll apologize in advance for spitting some tech jargon your way.

RAM timings are expressed as four numbers: CAS, RCD, RP, and RAS. CAS is the number of clock cycles between requesting a column of memory and the data being ready, RCD is the number of cycles between opening a row and accessing a column in it (the RAS-to-CAS delay), RP is the number of cycles needed to close one row and open the next for reading, and RAS is the minimum number of clock cycles a row must stay open while it’s being accessed. To simplify that explanation, remember that RAM timings are measured in clock cycles, so the lower the numbers, the faster your system is.
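A cycle count only means something relative to the clock driving it, so here is a small hedged sketch that maps a CAS-RCD-RP-RAS timing set to nanoseconds. The 200 MHz memory clock is just an assumed example figure, not a value from the author’s system.

```python
# Hedged sketch: express a CAS-RCD-RP-RAS timing set in nanoseconds.
# The 200 MHz memory clock here is an assumed example figure.

def timing_ns(timings, mem_clock_mhz=200.0):
    """Map each timing (in clock cycles) to its duration in nanoseconds."""
    cycle_ns = 1000.0 / mem_clock_mhz  # duration of one clock cycle
    return {name: cycles * cycle_ns
            for name, cycles in zip(("CAS", "RCD", "RP", "RAS"), timings)}

print(timing_ns((3, 3, 3, 8)))  # CAS 3 = 15 ns per column access at 200 MHz
print(timing_ns((2, 3, 3, 7)))  # a tighter set: CAS 2 = 10 ns
```

Shaving CAS from 3 to 2 saves one cycle on every column access, which is why CAS tends to be the most noticeable timing to tighten.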

For example, my RAM timings are 3-3-3-8 (CAS-RCD-RP-RAS). To optimize my timings, I first tried lowering my CAS in my BIOS to 2.5 and rebooting. Windows booted just fine and everything worked correctly, so I rebooted, went back into my BIOS, and dropped my CAS down to 2, making my timings 2-3-3-8. Again, this setup seemed stable, so I went in and tried reducing my RAS, since it was pretty high. That was the last stable tweak I could make. When I tried to go below 2-3-3-7 in any timing, my system either wouldn’t boot or Windows would generate a slew of memory-related errors.

I used 3DMark 2005 to get an idea of how much improvement my memory adjustments made, if any. Before tweaking my timings I posted a 3DMark score of 2105; after the tweaking I scored 2114. Not much of a difference, but when it comes to optimizing your system for speed, every last bit helps.
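To put those two scores in perspective, a quick back-of-the-envelope calculation on the numbers above:

```python
# Relative improvement between the two benchmark scores reported above.
before, after = 2105, 2114
gain_pct = (after - before) / before * 100
print(f"{gain_pct:.2f}% improvement")  # roughly 0.43%
```

Under half a percent, in other words: real, but only worthwhile as one of several stacked optimizations.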

While reducing RAM latency may not have a huge impact on system performance, it can give your machine a little extra kick, which, combined with other optimization methods, can result in a much quicker PC. So until you’re ready to buy a new computer, consider tuning up your current one to its fullest potential.