Serial Data Quality Comparator Ideas
kf4hcw
kf4hcw at lifeatwarp9.com
Mon Dec 18 13:53:27 EST 2017
On 12/18/2017 12:48 PM, Joe Leikhim wrote:
>
> Thoughts. If I am reinventing the wheel, please let me know!
>
> This is to be a device, so using test equipment is not the solution.
I'm fond of reinventing wheels -- it teaches us about wheels and we need
folks that can invent them instead of just buying them... plus, there's
a lot more to know about wheels than appears in the brochure.
This device essentially becomes a piece of test equipment.
I spent some time thinking about this kind of thing when working on the
DMDL concept -- where the first stages of the demodulator for a DMDL
chain would have to recover the carrier and then make predictions about
what has happened to it.
You need to define a bit more closely what characteristics you want to
determine about signal quality.
1. Is there enough of it that you can capture the carrier/clock with a
PLL, and how much confidence do you have?
It sounds like you've already been thinking about this part, because
you talk about measuring jitter. Jitter is one part of measuring clock
recovery confidence. Another would be signal strength.
2. Deliberate phase shift vs unintended phase shift. Specifically, in
most HF cases anyway, you're going to expect that a deliberate phase
shift in the carrier will be some multiple of 90 degrees. Other vhf/uhf
cases might include finer shifts than that... but the goal is to measure
it precisely so:
A good way to think about this is to determine at your PLL whether
you've seen a 90, 180, or 270 degree phase shift. If you have then your
PLL should be able to remain locked on the "original" clock. If your PLL
is driving an oscillator at 4x the desired frequency and you divide that
down into quadrature then you can build the phase detection logic to use
the best match and determine the confidence of that match. Choose the
quad with the highest confidence by freely switching to it for your PLL
loop, make note of the shift if it occurs, and keep track of the
confidence value.
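If it helps, here's a rough Python/numpy sketch of that quad-selection
idea. It assumes you've already mixed the carrier down to baseband I/Q,
and the function name and block averaging are just my placeholders, not
a finished design:

import numpy as np

def nearest_quad(samples):
    """Snap the measured carrier phase to the nearest 0/90/180/270 degree
    state and report a 0..1 confidence based on how far off the grid it sits.

    samples: complex baseband I/Q with the carrier already mixed down near 0 Hz.
    """
    phase = np.degrees(np.angle(np.mean(samples)))    # average phase over the block
    quad = int(round(phase / 90.0)) % 4                # 0..3 -> 0/90/180/270 degrees
    error = abs(((phase - quad * 90.0) + 180.0) % 360.0 - 180.0)  # distance to the grid, 0..45
    confidence = 1.0 - (error / 45.0)                  # 1.0 on-grid, 0.0 halfway between quads
    return quad * 90, confidence

# toy usage: a block of samples sitting near 180 degrees with a little phase noise
rng = np.random.default_rng(0)
block = np.exp(1j * np.radians(180 + 5 * rng.standard_normal(256)))
print(nearest_quad(block))   # roughly (180, confidence near 1)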
This approach gets you a couple of useful things... An SSB signal will
typically switch phase by 180 degrees at zero crossings of the
modulated signal -- but otherwise shouldn't jitter much. Similarly with
bpsk, except that a bpsk signal's amplitude should not change much at
all between phase shifts -- the same goes for qpsk. You shouldn't see
much amplitude shifting until you get to qam type signals.
FM signals will tend to show a more continuous phase shift that you
should be able to track -- but that's more specialized.
FSK signals will tend to show high confidence at one of several
predicted frequencies but not elsewhere.
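For the FSK case, a rough sketch of "confidence at predicted
frequencies" might look like this -- the tone frequencies, bandwidth,
and block length below are placeholder assumptions:

import numpy as np

def fsk_tone_confidence(samples, fs, tones_hz, bin_hw_hz=25.0):
    """Fraction of band energy sitting at the predicted FSK tone frequencies.

    samples: real-valued audio-rate samples of the candidate signal
    fs: sample rate in Hz
    tones_hz: expected tone frequencies, e.g. [1275.0, 1445.0] (placeholders)
    bin_hw_hz: half-width of the window around each tone that counts as a hit
    """
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples)))) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    hit = np.zeros_like(freqs, dtype=bool)
    for tone in tones_hz:
        hit |= np.abs(freqs - tone) <= bin_hw_hz
    total = spectrum.sum()
    return float(spectrum[hit].sum() / total) if total > 0 else 0.0

fs = 8000.0
t = np.arange(4096) / fs
test = np.sin(2 * np.pi * 1275.0 * t)                    # a clean "mark" tone
print(fsk_tone_confidence(test, fs, [1275.0, 1445.0]))   # close to 1.0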
3. Amplitude consistency. If you're looking at a digital modulation then
you should have some expected amplitudes that you can determine from the
median maximum amplitude. Signal quality measured in this aspect will
depend on the kind of signal you're receiving and the amount of
deviation from the expected amplitudes.
If you're receiving qpsk or bpsk, you should expect the amplitude of
the highest quality signal to remain almost constant at the median
amplitude. The more it varies, the lower the quality of the signal. If
it's qam, then there should be some discrete amplitudes, and the
amplitudes that you see should not vary much from the median discrete
values seen and from those calculated from the median maximum
amplitude. The more variability, the lower the quality of the signal.
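A rough sketch of that amplitude-consistency measurement, assuming you
already have baseband samples; for qam the expected levels are passed
in as ratios of the median envelope (my convention here, nothing
standard):

import numpy as np

def amplitude_consistency(samples, expected_levels=None):
    """Return a 0..1 consistency score for the envelope of a signal.

    For psk-type signals pass expected_levels=None: the envelope is compared
    against its own median. For qam, pass the expected discrete amplitudes as
    ratios of the median envelope.
    """
    env = np.abs(samples)
    ref = np.median(env)
    if ref == 0:
        return 0.0
    if expected_levels is None:
        deviation = np.abs(env - ref) / ref       # bpsk/qpsk: envelope should hug the median
    else:
        levels = np.asarray(expected_levels, dtype=float) * ref
        deviation = np.min(np.abs(env[:, None] - levels[None, :]), axis=1) / ref
    return float(np.clip(1.0 - np.mean(deviation), 0.0, 1.0))

# a constant-envelope (ideal psk) block scores 1.0; envelope noise drags it down
print(amplitude_consistency(np.exp(1j * np.linspace(0, 20, 500))))   # 1.0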
If you're looking at an am, dsb, or ssb signal then you can decode the
amplitude and examine its spectrum. If the spectrum of the amplitude
modulation is a reasonable match for voice traffic and tends to remain
self-consistent then the quality is good. If the spectrum departs from
this model then the quality is reduced. In particular, the more random
noise that is present, the lower the quality of the signal. Within a
given 2-3 kHz bandwidth there should be some fairly specific peaks and
valleys --- if the spectrum looks closer to flat, that means there is
less intelligible data present (more like random noise).
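One way to put a number on "closer to flat" is a spectral flatness
measure (geometric mean over arithmetic mean of the power spectrum)
across the voice band. A sketch, with 300-3000 Hz assumed for the band:

import numpy as np

def voice_band_flatness(audio, fs, band=(300.0, 3000.0)):
    """Spectral flatness of the demodulated audio within the voice band.

    Returns a value near 1.0 for noise-like (flat) spectra and nearer 0.0 for
    peaky, voice-like spectra -- so lower means "better" for this quality model.
    """
    spectrum = np.abs(np.fft.rfft(audio * np.hanning(len(audio)))) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    p = spectrum[in_band] + 1e-20                  # guard against log(0)
    return float(np.exp(np.mean(np.log(p))) / np.mean(p))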
===
All of this boils down to making measurements that compare
characteristics of the apparent signal to the characteristics predicted
by some model. Most of these models are fairly easy to construct (at
least logically).
You could output the scalar values of the model matches individually or
you could combine them into a single scalar value to represent signal
quality.
Decibels may be helpful here -- if you define a perfect match as 0 dB
and reduce the signal quality in each domain based on the model match
expressed as a ratio, then a signal that matches the model less well
will have a negative score associated with that characteristic.
For example, if the model predicts that the phase of the recovered
clock will be either 0 degrees or 180 degrees, then the match should be
worst at 90 degrees (as far as possible from matching), so your ratio
runs between 1/1 and 0/1. If your recovered clock jitters by 10 degrees
then that's roughly 10% of your available margin, giving you a signal
quality of 10*log(9/10), or about -0.46 dB. Get off a bit more, say 20
degrees either side, and now you have only (90-20)/90 == 7/9, or about
-1.1 dB... and so forth.
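That scoring rule is simple enough to write down directly; a sketch,
with the 90 degree margin left as a parameter:

import math

def phase_quality_db(jitter_deg, margin_deg=90.0):
    """Score phase jitter against the available margin; 0 dB means a perfect match.

    jitter_deg: observed deviation of the recovered clock from its expected state
    margin_deg: distance from an expected state to the worst case (90 degrees
                for a 0/180 degree model)
    """
    remaining = max(margin_deg - jitter_deg, 0.0)
    if remaining == 0.0:
        return float('-inf')            # no margin left at all
    return 10.0 * math.log10(remaining / margin_deg)

print(round(phase_quality_db(10.0), 2))   # (90-10)/90 -> -0.51 dB (the 9/10 rounding above gives -0.46)
print(round(phase_quality_db(20.0), 2))   # (90-20)/90 -> -1.09 dB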
You can get even more accurate about phase if you also model the baud
clock... Presume that a particular phase state should persist for at
least some time n, and that if it changes, each change must occur after
some multiple of n. An easier way to do this is to start a clock at the
last detected phase change and run it until either it is reset at the
expected baud interval (the phase didn't change, but aligns with the
baud clock) or the phase changes again. Essentially you're measuring
jitter in the recovered baud clock.
The measure is the number of clock ticks that occur before the phase
change divided by the number of clock ticks that can occur in a
perfectly received symbol... Let's say a 100 ms symbol and a 1 ms
clock, giving you a maximum of 100 ticks per phase state.
Say the detected phase lasts for 80 ticks out of 100 -- but remember
that once you get down to 50% you're no better than flipping a coin, so
your usable margin is only 50/100. The calculation ends up being more
like 30/50, and for that measurement you get 10*log(30/50), or about
-2.2 dB... integrate those in a sliding average and you get some idea
of how accurately you can detect the bits that are encoded on the
apparent signal.
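And a sketch of the tick-counting version, with the 50% coin-flip floor
baked in (the 100-tick symbol is just the example above):

import math

def baud_quality_db(observed_ticks, ticks_per_symbol=100):
    """Score how long a phase state persisted against a perfectly received symbol.

    Anything at or below half a symbol is treated as no better than a coin flip,
    so the usable margin runs from ticks_per_symbol/2 up to ticks_per_symbol.
    """
    floor = ticks_per_symbol / 2.0
    margin = ticks_per_symbol - floor
    above = max(min(observed_ticks, ticks_per_symbol) - floor, 0.0)
    if above == 0.0:
        return float('-inf')
    return 10.0 * math.log10(above / margin)

print(round(baud_quality_db(80), 2))   # 30/50 -> about -2.22 dB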
-- these are all just suggestions that I've played with; but hopefully
you get the idea of how they might be implemented:
Start with a collection of PLLs (as you've suggested); adjust their
phase detection logic to accommodate predicted phase shifts and measure
jitter. That gives you a scalar value for the instantaneous
"unpredictability" of carrier phase angle.
Use the synchronized clock to help detect the amplitude of the apparent
carrier. This might even come easily from a quadrature phase detection
system in the first step.
Having detected the amplitude, compare the amplitude signal(s) to the
expected model. BPSK and QPSK amplitudes should be very consistent, so
measuring them against a low-pass (integrated) version will give you a
scalar value for the "unpredictability" of the amplitude.
Go a step further: for digital signals where you can predict the baud
rate, measure how predictably the system detects new symbols (for QAM
include the discrete amplitude values; for AM, DSB, and SSB use
spectral characteristics).
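Putting those pieces together, one way to fold the individual dB scores
into a single quality figure -- the equal weighting here is just a
placeholder:

def combined_quality_db(scores, weights=None):
    """Weighted average of per-characteristic dB scores; 0 dB = every model matched.

    scores: e.g. {'phase': -0.51, 'amplitude': -1.2, 'baud': -2.22}
    weights: optional dict with the same keys; defaults to equal weighting
    """
    if weights is None:
        weights = {key: 1.0 for key in scores}
    total = sum(weights[key] for key in scores)
    return sum(weights[key] * scores[key] for key in scores) / total

print(round(combined_quality_db({'phase': -0.51, 'amplitude': -1.2, 'baud': -2.22}), 2))   # -1.31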
===
Let me know if any of this was useful... and let me know if you decide
to build some of it -- I've not had time yet, but I've been dying to
put it together.
Your signal quality measurement device could be the same hardware
(mostly) as my DMDL detector.
_M
PS: One of the other things I've toyed with regarding amplitude
characterization is the idea of measuring the slope of any shifts in
amplitude. For audio type signals the slopes (normalized dv/dt) will have
an upper limit associated with the highest frequency expected in the
audio channel. In a digital modulation scheme the slopes will have a
bipolar limit since symbol shifts tend to create either no shift or a
high frequency shift. A generalized DMDL detector might measure this as
a matter of course, and then actual signal recovery would occur
downstream by appropriately integrating and/or differentiating these
slopes.
In signal quality detection the problem is simply one of allowing slope
ranges that are appropriate for the expected signal and considering
slopes outside of that envelope to be misses (unpredicted). The ratio of
predicted changes to unpredicted changes tells you the signal quality.
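A sketch of that slope-envelope check. One way to pick the ceiling is
to note that a full-scale sinusoid at the highest expected audio
frequency f has a peak normalized slope of about 2*pi*f per second;
max_slope_per_s below is just a parameter:

import numpy as np

def slope_hit_ratio(envelope, fs, max_slope_per_s):
    """Fraction of amplitude-slope samples that stay inside the allowed envelope.

    envelope: detected amplitude samples, normalized to a 0..1 range
    fs: sample rate of the envelope in Hz
    max_slope_per_s: largest |d(amplitude)/dt| the expected signal should produce
    """
    slopes = np.diff(envelope) * fs        # normalized dv/dt at each step
    if len(slopes) == 0:
        return 0.0
    hits = np.abs(slopes) <= max_slope_per_s
    return float(np.mean(hits))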
--
kf4hcw
Pete McNeil
lifeatwarp9.com/kf4hcw