While looking into low Cpk on a Common Mode Rejection Ratio (CMRR) test in a legacy test program, I took a close look at the code. I didn't have to look too closely to see the problem. The value of N, the number of samples, is often all I need to look at to tell whether a test is done correctly. And there it was in its shining glory: 2000.

Yep, someone took 2000 samples to do a CMRR test. Why is this a problem? Is it too big? Too small? Nope. I can tell because it's not an *integer power of two*. That tells me something very important, which I later confirmed by reading further.

No FFT for this test, oh no, that would be too complex! Well, in fact it is, because the result of an FFT is most often an array of complex values, real and imaginary parts, that need further processing to give you magnitude and phase.
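That further processing is a one-liner in any modern environment. Here is a minimal sketch (not the original test program; the sample rate and tone are invented for illustration) showing the complex FFT output being converted into magnitude and phase:

```python
import numpy as np

n = 1024                 # number of samples: an integer power of two
fs = 1000.0              # assumed sample rate in Hz, for illustration only
t = np.arange(n) / fs
cycles = 8               # an integer number of cycles fits the window: coherent
freq = cycles * fs / n
signal = 0.001 * np.sin(2 * np.pi * freq * t)   # 2 mVpp tone, 1 mV peak

spectrum = np.fft.rfft(signal)       # array of complex bin values
magnitude = np.abs(spectrum) * 2 / n # scale each bin to peak amplitude
phase = np.angle(spectrum)           # phase of each bin, in radians

print(f"amplitude at bin {cycles}: {magnitude[cycles] * 1000:.3f} mV")
# prints "amplitude at bin 8: 1.000 mV"
```

The magnitude and phase arrays are what the rest of the test would actually consume; the complex intermediate is nothing to be afraid of.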

But then again, if the ancient Greeks and Egyptians could calculate the hypotenuse of a triangle with papyrus and quill, or clay tablet and stylus, while you have a modern sub-terahertz processor waiting patiently to do your bidding, why are you letting that stop you?

It was so disappointing to move down the page and see that the test was being done with an equation like Vpp = Max(array) - Min(array). How sad: the Cpk is suffering and the test results are unreliable because someone is trying to measure a 2 mVpp signal by literally measuring peak-to-peak. The noise is seriously clouding the measurement, and the Cpk gives it away so plainly.
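It's easy to see why Max-Min fails here. The noise rides on top of the signal peaks, so the extremes of the capture measure the signal plus roughly twice the worst-case noise excursion. A hypothetical example (the noise level is invented, but representative of the problem):

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed for a repeatable illustration
n = 2000                          # the suspicious sample count from the test
t = np.arange(n)
signal = 0.001 * np.sin(2 * np.pi * 8 * t / n)  # 2 mVpp tone (1 mV peak)
noise = rng.normal(0.0, 0.0005, n)              # assumed 0.5 mV RMS noise

# The legacy test's approach: Vpp = Max(array) - Min(array)
vpp_measured = np.max(signal + noise) - np.min(signal + noise)
print(f"true Vpp: 2.000 mV, Max-Min Vpp: {vpp_measured * 1000:.3f} mV")
```

With 2000 samples, the noise extremes alone reach several standard deviations, so the reported Vpp lands well above the true 2 mV, and it varies run to run, which is exactly what a poor Cpk looks like.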

By capturing 2048 samples properly, that is, coherently, they could have done an FFT, which would, at the very least, spread the noise out across 1024 bins, meaning the amount of noise landing in the frequency bin of interest would be roughly one tenth of one percent of what corrupts a simple Max-Min test. It is almost a certainty that the Cpk for this test would improve by at least an order of magnitude. Such a fantastic improvement for such little effort.

No problem, then: I can spend a few minutes doing the math, implementing the test, and savoring the new results.