It's interesting how history tends to repeat itself. People hire me to help them out with their data converters, and then for one reason or another don't take advantage of my knowledge. "We got that data converter guy, now what?" .. It's funny how much people will pay you to hang around. History repeats..
So this first happened while I was working at a previous large company not known for analog. There was a group of experts working at a remote design center in Israel. I was told "these guys are experts at math" and mixed-signal circuits, so for me that should make the job easier. Math can settle an argument in any language; it doesn't matter what language you think in if the math works.
The design was an 8-bit low-power ADC for a gigabit Ethernet chip (over copper, in 2000). The gigabit version of the Ethernet standard still uses time-domain templates and is a PAM-5 solution. Of course, you need to transmit more levels than that since there is a transmit low-pass filter. Link margin was low, and in the standard a DFSE and a Viterbi decoder were used. What this all means is that the "eye" at the input of the receiver is a mess. It has a bunch of intersymbol interference and echo. (Echo is the transmit signal reflecting back into the receiver.) So basically this IEEE standard requires an ADC instead of a simple slicer at the receiver input. Of course, some analog pre-processing can reduce the spec on the ADC, but you still can't ever get rid of it. So you have to build it. (Paper on Gig RX)
So it's my duty to "architect" and lead a team to "design" this ADC. The schedule is short, just a few months, which is not long for an ADC. There were all kinds of things we could do to reduce the area and power of that ADC, but given the tight schedule and 20+ hrs/wk in meetings, we had little choice but to go bare-bones: a non-scaled pipelined ADC. No problem for this fine team (Link to Paper), who ended up building the lowest-power embedded ADC at the time, later published by Springer thanks to Perry's help. So I put together an error and power budget, broke the design into subsections, and staffed it with a team who comprise the author list on the Springer paper. The capacitor array layout was drawn and balanced perfectly by Mel Sparkman.
During the development of this ADC it was important for the Israeli design team to be involved in all the steps of the process. "ADCs are magical things" is what some people think, even as recently as maybe a week ago before I typed this. There is fear and confusion among many electrical engineers about data converters, error correction and calibration. I admit they are tricky and combine system expertise along with circuit knowledge. System-level specifications end up setting the size of capacitors, transistors and amplifier architectures inside the ADC. Several times during my career, people from the sidelines have tried to get involved and micro-manage the design process to "make sure" I am doing the right thing. I don't mind people watching me, I think I know what I am doing, but it does slow me and the team down to explain everything. You want an ADC or a lecture?
So I put together a block diagram and wrote the behavioral model in C. The Israeli team also put together their own "matlab" model to confirm my results. During the first meeting there was "a problem" with my ADC. For some reason I was accused of "sandbagging" my design. It was always my fault.
"How much margin do you put in your design, SSA?" - asks senior architect
"Normally I don't comment on design margin, but there is some" - SSA
"I think your ADC is WAY better than what you state - why so much margin?" --asks senior architect
"Show me the data" - SSA
So he brings out a MATLAB result, showing our 8-bit ADC with an SNDR of 59dB. Now, there is an equation for ENOB (Effective Number of Bits). The math is:
6.02(ENOB) + 1.76 = SNDR
So for SNDR = 59dB, ENOB = (59 - 1.76)/6.02 = 9.51
It's an 8-bit ADC, so theoretical max SNDR = 8(6.02) + 1.76 = 49.9dB
59dB (result) > 49.9dB (theoretical 8-bit)
So it did appear that I had ~9dB of margin? He got "better than theoretical" performance..
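For anyone who wants to check the arithmetic, here is a minimal sketch of the two formulas in Python (function names are mine, not from any toolbox):

```python
def enob_from_sndr(sndr_db):
    # Invert SNDR = 6.02*ENOB + 1.76 to get effective bits
    return (sndr_db - 1.76) / 6.02

def max_sndr_db(bits):
    # Best-case SNDR for an ideal N-bit ADC driven by a full-scale sine
    return 6.02 * bits + 1.76

print(enob_from_sndr(59.0))   # ~9.51 "effective bits" claimed from an 8-bit ADC
print(max_sndr_db(8))         # ~49.9dB theoretical ceiling for 8 bits
```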
The SNDR values are affected by the test vectors going into the ADC (or out of a DAC). There are underlying assumptions that, when violated, can give you better-than-theoretical results.
"Ahh I see, you have higher than theoretical, so you have a problem in your test vector" - SSA
"What do you mean, I am a mathematical expert with much more experience.. etc" -Senior arch
SSA Responds:
There are assumptions behind 6.02(ENOB) + 1.76:
1. The input signal is a full-scale sine-wave; this formula ONLY works for sine-waves (first non-streetsmart mistake)
2. The quantization noise is a uniformly distributed random variable with the width of one of the "stair steps" of the ADC, or one LSB
20*Log10(Signal_rms/Quant_noise_rms) = 6.02(ENOB) + 1.76
Quant_noise_rms = LSB/Sqrt(12), which is the standard deviation of a uniform probability density function one LSB wide. LSB = VFS/2^8 for an 8-bit ADC.
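You can sanity-check the LSB/Sqrt(12) number with a quick Monte-Carlo sketch; the sample count and seed below are arbitrary choices of mine:

```python
import math
import random
import statistics

random.seed(1)
BITS, VFS = 8, 1.0
lsb = VFS / 2**BITS                      # LSB = VFS / 2^8

# Model quantization error as uniform over one code width, [-LSB/2, +LSB/2]
errors = [random.uniform(-lsb / 2, lsb / 2) for _ in range(200_000)]

empirical_rms = statistics.pstdev(errors)
analytic_rms = lsb / math.sqrt(12)       # std dev of a uniform PDF one LSB wide
print(empirical_rms, analytic_rms)       # the two should agree closely
```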
Now a common mistake is to violate the Quantization noise uniform PDF criteria.
"What input signal frequency did you use?" - SSA
Answer: Fs/4, or 0.25* Sample rate. - Senior Arch
So now SSA knows the problem. "I know your problem"
For Fs/4, the ADC pattern is +1, 0, -1, 0, +1, 0...
So the quantization error is: e1, e2, e3, e2, e1, e2... (repeating... that's NOT random!!!)
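You can see the repeating error pattern directly. Below is a sketch with an ideal 8-bit midtread quantizer; the 0.95 amplitude and the small phase offset are arbitrary choices of mine, picked so no sample lands exactly on a code:

```python
import math

LSB = 2.0 / 2**8                          # ideal 8-bit quantizer over [-1, +1)

def quantize(x):
    return max(-128, min(127, round(x / LSB))) * LSB

# An ideal sine at exactly Fs/4 only ever revisits the same four voltages
base = [0.95 * math.sin(math.pi * k / 2 + 0.3) for k in range(4)]
x = [base[n % 4] for n in range(2048)]

err = [quantize(v) - v for v in x]
distinct = {round(e, 12) for e in err}    # the error sequence just repeats
print(len(distinct))                      # at most 4 distinct error values
```

Four distinct values cycling forever is a deterministic pattern, not the uniform random noise the ENOB formula assumes.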
So a couple of weeks ago, it happened again: a senior expert showing greater-than-theoretical ENOB... and it reminded me of this. All the guy had to do was call or ask, but what would I know, soaking up that paycheck as a data converter expert...
So Fs/4 is a bad test-signal for ENOB since it violates the "uniform probability density" assumption in the quantization noise power calculation. Fs/4 is not bad for debug, if you are trying to isolate a bad code, so don't get me wrong. There is a time and place for everything.
So how should we pick the input signal?
That's based on a "sentence" in my mind. If we pick a signal that is related to the sample rate by a prime number, we know we will break up that "pattern" and whiten the PDF of the quantization noise. It goes like:
"In 2^N samples I want a prime number of input signal periods"
2^N * Ts = Prime * Tinput
2^N is desired # of data points
Ts = Sample Period
Prime = A prime number, 3, 5, 7, 11..
Tinput = Period of input signal
This can be "re-written" into the form found in Maxim and ADI app-notes:
Finput = Prime * Fsample / 2^N
So this is where the Prime # Table comes in!!!
If your input frequency meets the above equation, I promise you will NEVER get better-than-theoretical SNDR, especially with larger prime #s. A popular one was suggested..
Fin=20.20263672MHz
Prime = 331
Fsample=125MHz
2^N=2048
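The numbers above multiply out directly; here is the one-liner if you want to reproduce the frequency:

```python
fs = 125e6        # sample rate, Hz
prime = 331       # number of input-signal periods in the record
record = 2048     # 2^N samples

fin = prime * fs / record
print(fin)        # 20202636.71875 Hz, i.e. the 20.20263672 MHz above
```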
Notice that I had to carry all those digits; this is one of those cases where you need to key in every one of them. To this day, nearly 14 years later, I still have that frequency memorized. We used it during simulations, verification and validation in the lab.
Senior Architect guy plugged in the above numbers, and lost about 11dB in SNDR. Oh well, I could have just acted like I could architect a better than theoretical ADC.
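The whole story can be reproduced in a few lines. This is a sketch, not his MATLAB model: an ideal 8-bit quantizer, the coherent prime-number test tone from above, and an Fs/4 tone whose amplitude I deliberately chose to sit exactly on a code level (127/128 of full scale) so that every sample quantizes with zero error, the extreme case of the repeating-pattern problem:

```python
import math

LSB = 2.0 / 2**8                 # ideal 8-bit quantizer over [-1, +1)
A = 127 / 128                    # amplitude sitting exactly on a code level

def quantize(x):
    return max(-128, min(127, round(x / LSB))) * LSB

def sndr_db(x):
    q = [quantize(v) for v in x]
    ps = sum(v * v for v in x)                       # signal power
    pe = sum((a - b) ** 2 for a, b in zip(q, x))     # quantization error power
    return 10 * math.log10(ps / pe) if pe else float("inf")

N, PRIME = 2048, 331
# Coherent sampling: 331 (prime) input periods in 2048 samples
coherent = [A * math.sin(2 * math.pi * PRIME * n / N) for n in range(N)]
# Fs/4: the sine only ever hits 0, +A, -A, all exact code levels here
fs4 = [A * (0, 1, 0, -1)[n % 4] for n in range(N)]

print(sndr_db(coherent))   # lands close to the ~49.9dB theoretical ceiling
print(sndr_db(fs4))        # "infinite" -- zero quantization error, a bogus result
```

The coherent tone exercises every part of the transfer curve and returns an honest number; the Fs/4 tone exercises three codes and reports whatever those three codes happen to say.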
So this is the reason I have a prime number table on my desk. I need it to set signal frequencies for test-vectors that do not violate the assumptions of the ENOB equation. All data converter experts should have a prime table. They are free, and available via Google.
If you ever see higher-than-theoretical ENOB (or SNDR) for a data converter, then there is a violation of the assumptions underneath. This means the ADC was not being "exercised" fully and the results are invalid. (You could always look at a code histogram.) If you are an experienced guy and show better-than-theoretical ENOB, then you will no longer look experienced in front of your peers.