Normalized Q-Scale Analysis: Theory and Background


www.edn.com

A robust, repeatable, and accurate technique estimates and measures random and bounded, uncorrelated jitter components.

By Martin Miller, PhD, LeCroy Corporation

Jitter is an important aspect of signal integrity for both optical and electrical serial data streams (and clocks). As data rates become faster and timing budgets smaller, it has become crucial for engineers to more accurately measure jitter and its components: random jitter (Rj), deterministic jitter (Dj), data-dependent jitter (DDj), duty-cycle distortion (DCD), and periodic jitter (Pj). A newly introduced method called Normalized Q-Scale Analysis offers a more robust, repeatable, and accurate technique to estimate and measure the random and bounded, uncorrelated jitter components. The following article presents the technical background underlying this method.[1]

For the purposes of this discussion (in connection with jitter measurements), the entire subject surrounds the matter of interpreting the observed distribution of timing errors. This observed distribution is the histogram of TIE (time-interval error) values, which you obtain through analysis of either clock or NRZ data waveforms that a digital recording instrument (such as a digital oscilloscope) acquires.

HISTOGRAMS AND PDF

A histogram is a form of data representation that expresses the frequency of occurrence of measurement values sorted, or binned, into adjacent, contiguous, equal-width intervals (or bins). When you collect the timing errors (TIE) as a histogram, the histogram serves as an approximation to the PDF (Probability Density Function)[2] of this statistically based phenomenon (jitter). The PDF is (in theory) a smooth function that the underlying physics of the measured phenomenon determines (and, of course, what you actually observe includes the physics of the instrumentation as well).

The PDF is a continuous function, and reflects the density of probability over each interval of the measurement value, x:

\Pr(a \le x < b) = \int_{x=a}^{x=b} \mathrm{PDF}(x)\,dx \qquad (0.1)

That is to say, the density of probability as a function of the measured quantity, when integrated over a given region, gives the probability that any measurement value will be within that region.

The process of forming a histogram is based upon a pre-specified set of bin boundaries (meeting the above conditions of contiguity and equal width). A further, usually unspoken, constraint is that the histogram range must cover all possible observation values if it is to be useful. You express the set of bin boundaries as:

\{h_i\}, \qquad h_i = h_{\mathrm{leftmost}} + i \cdot h_{\mathrm{width}} \qquad (0.2)

The set of observations (of the measured quantity x) are binned, or counted, for each histogram bin range. The resulting histogram is a collection of populations (or counts), one for each bin region:

H_i = \mathrm{Population}_i = \#\,\mathrm{observations}(h_i \le x < h_{i+1}) \qquad (0.3)
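As a concrete illustration, the binning of equations (0.2) and (0.3) can be sketched in a few lines of Python. The function name and the simulated TIE values are hypothetical, for illustration only:

```python
import random

def make_histogram(samples, n_bins, h_leftmost, h_width):
    """Bin samples into contiguous, equal-width bins following the
    convention h_i = h_leftmost + i * h_width; returns the counts H_i."""
    counts = [0] * n_bins
    for x in samples:
        i = int((x - h_leftmost) // h_width)
        if 0 <= i < n_bins:  # the range must cover all observations to be useful
            counts[i] += 1
    return counts

# Example: 10,000 simulated TIE values (Gaussian, sigma = 1 ps, units of ps)
random.seed(1)
tie = [random.gauss(0.0, 1.0) for _ in range(10_000)]
H = make_histogram(tie, n_bins=80, h_leftmost=-4.0, h_width=0.1)
print(sum(H))   # total population (rare samples outside [-4, 4) are excluded)
```
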

Now a measurement histogram (such as the one we will analyze to estimate jitter) represents a single experiment, with some number of trials or individual TIE measured values. It is only an approximation of the PDF insofar as the true PDF plays its probabilistic role and so is reflected in the resulting observations.

[1] You can apply the normalized Q-scale method to other statistical studies and measurements, in particular for examining the nature of vertical noise distributions.

[2] See http://en.wikipedia.org/wiki/Probability_density_function

Published by EDN, www.edn.com. Copyright Reed Business Information. All rights reserved.

The fraction of the population in each bin approximates the integral of the PDF over that bin:

\Pr(h_i \le x < h_{i+1}) = \int_{x=h_i}^{x=h_{i+1}} \mathrm{PDF}(x)\,dx \approx \frac{\#\,\mathrm{observations}(h_i \le x < h_{i+1})}{\#\,\mathrm{observations}(-\infty < x < \infty)} = \frac{H_i}{\sum_k H_k} \qquad (0.4)


INTEGRATING THE PDF: TOTAL JITTER AND ITS RELATIONSHIP TO BIT ERROR RATIO (BER)

The CDF (Cumulative Distribution Function) expresses the probability that an observation will fall between the value x' = -\infty and the value x:

\mathrm{CDF}(x) = \Pr(-\infty < x' \le x) = \int_{x'=-\infty}^{x'=x} \mathrm{PDF}(x')\,dx' \qquad (0.5)

Of course, this value is purely theoretical. You can, however, calculate the EDF (Empirical Distribution Function) by summing the histogram from the left extreme to some value x:

\mathrm{EDF}(x = h_i) = \frac{1}{P_{\mathrm{total}}} \sum_{k=0}^{k=i-1} H_k, \qquad P_{\mathrm{total}} = \sum_{k=0}^{k=N-1} H_k \qquad (0.6)
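A minimal sketch of equation (0.6) in Python (the function name is hypothetical; note that the running sum here gives the EDF evaluated at the right edge of each bin):

```python
from itertools import accumulate

def edf_from_histogram(H):
    """Empirical Distribution Function per (0.6): the cumulative sum of the
    bin populations H_k, normalized by the total population P_total."""
    p_total = sum(H)
    return [c / p_total for c in accumulate(H)]

H = [1, 4, 10, 20, 10, 4, 1]   # toy histogram
edf = edf_from_histogram(H)
print(edf[-1])                  # 1.0 -- the EDF reaches unity at the right edge
```
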

Now, for our purposes and in keeping with tradition for discussions of jitter, we make a small change of representation. We are interested in variations in timing (jitter) both before and after the mean timing value (or to the left or right of the mean). As such, and to view the timing errors in a symmetric fashion, you paste together two halves, the right-hand and the left-hand parts of the EDF, where the left-hand part is summed from the left (increasing bin index, starting with the leftmost bin) and the right-hand part is summed from the rightmost bin with decreasing bin index, noting that these are approximations of the right-hand and left-hand parts of the jitter bathtub curve:

\mathrm{EDF}_{\mathrm{left}}(x = h_i) = \frac{1}{P_{\mathrm{total}}} \sum_{k=0}^{k=i-1} H_k \qquad (0.7)

\mathrm{EDF}_{\mathrm{right}}(x = h_i) = \frac{1}{P_{\mathrm{total}}} \sum_{k=i}^{k=N-1} H_k \qquad (0.8)

In practice, these two functions are joined at the median of the histogram (the bin containing the median, or the bin for which 50% of the total population is in that bin or those with lower index). Consequently, 50% of the total population also falls within that bin and those bins with higher index.
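The two halves of equations (0.7) and (0.8) can be sketched as follows (a hypothetical helper, shown on a toy histogram):

```python
def edf_halves(H):
    """Left- and right-hand EDFs per (0.7)/(0.8): the left half is summed
    from the leftmost bin upward, the right half from the rightmost bin down."""
    p_total = sum(H)
    left, right = [], []
    run = 0
    for h in H:                    # EDF_left(h_i): population strictly below bin i
        left.append(run / p_total)
        run += h
    run = 0
    for h in reversed(H):          # EDF_right(h_i): population in bin i and above
        run += h
        right.append(run / p_total)
    right.reverse()
    return left, right

H = [1, 4, 10, 20, 10, 4, 1]
left, right = edf_halves(H)
# At every bin boundary the two halves together cover the whole population:
print(all(abs(l + r - 1.0) < 1e-12 for l, r in zip(left, right)))  # True
```
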

In telecommunications or datacommunications, the relative rate of bit errors is called BER.[3] It should be obvious that the probability of exceeding some timing limit is the cause of a bit error.[4]

[3] Sometimes called bit error ratio, BER is the relative rate of bit errors compared to the bit rate, often expressed as a power of 10. For example: "the BER is 10 to the minus 15th."

[4] This assumption is a naive one in many systems or designs.


Historical note: Before the introduction of Normalized Q-Scale Analysis, we applied the extrapolation to the histogram tails, whereas now we apply the extrapolation to the EDF or in fact to each half of the EDF as described above.

Although the EDF is well-defined in the central region of the histogram, where events are populous, it is poorly defined at the extremes. This, of course, is the nature of the problem of analyzing jitter, because we are most interested in learning about the nature of those most rare (timing-error) events.

So we strive to obtain an estimation of this extremal behavior even beyond where we have real observed data. To this end, we strive to fit the data at the extremes of the EDF to plausible mathematical forms (suggested by the underlying physics). One such mathematical form is the error function, which relates closely to the CDF of a Gaussian distribution.

The error function erf(x), the inverse error function erf^{-1}(x), and related functions are:

\mathrm{CDF}(x) = \frac{1 + \mathrm{erf}(x)}{2} \qquad (0.9)

\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-u^2}\,du \qquad (0.10)

\mathrm{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_{x}^{\infty} e^{-u^2}\,du = 1 - \mathrm{erf}(x) \qquad (0.11)

But more importantly, the inverse error function erf^{-1}(x) is the inverse function (neither the reciprocal nor the complementary function). This function gives the displacement x for which a given value is the error function of that displacement:

\mathrm{erf}(\mathrm{erf}^{-1}(z)) = z, \quad \text{for } -1 < z < 1 \qquad (0.12)

And:

\mathrm{erf}^{-1}(\mathrm{erf}(x)) = x \qquad (0.13)
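Although erf^{-1} has no closed form, erf is strictly increasing, so the inverse can be evaluated numerically. A simple bisection sketch in Python (the function name is hypothetical) demonstrates property (0.12):

```python
import math

def erfinv(z, tol=1e-13):
    """Numerical inverse of math.erf by bisection, exploiting the fact
    that erf is strictly increasing on the real line."""
    if not -1.0 < z < 1.0:
        raise ValueError("erfinv is defined for -1 < z < 1")
    lo, hi = -6.0, 6.0              # erf(+/-6) is +/-1 to double precision
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Properties (0.12) and (0.13): the functions undo each other
print(round(math.erf(erfinv(0.5)), 10))   # 0.5
print(round(erfinv(math.erf(1.0)), 10))   # 1.0
```
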

The following picture can help to put the general shapes of these functions in perspective. In particular, remember not to confuse the complementary error function (erfc) with the inverse error function (erf^{-1}).


[Figure 1: The general shapes of erf, erfc, and the normal CDF, where CDF_{normal}(\delta) = (1 + \mathrm{erf}(\delta))/2 and CDF^{-1}_{normal}(p) = \mathrm{erf}^{-1}(2p - 1). The relationship between the inverse error function and total jitter.]

It is noteworthy that the inverse error function has no closed analytical form. It is also noteworthy that the heuristic jitter equation (often used as a model of signal jitter behavior) is linearly related to this inverse error function. Many engineers who measure jitter have seen this equation:

Tj(BER) = Dj + \alpha(BER) \cdot Rj \qquad (0.14)

Usually this equation is presented with qualifying remarks, such as: "the function \alpha(BER) is the number of standard deviations, for a Gaussian with a sigma of 1, that corresponds to the specified bit error ratio (BER)." Now, it turns out, \alpha(BER) is, to within a constant factor, exactly the inverse error function. That is:

\alpha(BER) = -2 \cdot \mathrm{CDF}^{-1}_{\mathrm{Gaussian}}\!\left(\frac{BER}{2\,\rho_{tx}}\right) \qquad (0.15)

or

\alpha(BER) = 2 \cdot \mathrm{erf}^{-1}\!\left(1 - \frac{BER}{\rho_{tx}}\right) \qquad (0.16)


where in both cases we explicitly incorporate the transition density \rho_{tx},[6] because the purpose of the alpha factor is to calculate Tj, and transition density is required for this purpose (because jitter is pertinent only in causing errors for bits that have transitions).
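A numerical sketch of equation (0.16), using the identity erf^{-1}(1 - y) = erfc^{-1}(y) to keep precision in the deep tail. The function names are hypothetical, and the article's erf-based convention for alpha is assumed throughout:

```python
import math

def erfcinv(y, tol=1e-13):
    """Inverse complementary error function by bisection (erfc is strictly
    decreasing); erfc retains precision for the tiny tail probabilities
    that BER analysis lives in."""
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def alpha(ber, rho_tx=0.5):
    """alpha(BER) per (0.16): 2*erfinv(1 - BER/rho_tx) == 2*erfcinv(BER/rho_tx).
    rho_tx is the transition density (about 0.5 for PRBS patterns)."""
    return 2.0 * erfcinv(ber / rho_tx)

# alpha grows slowly as the BER target tightens:
print(round(alpha(1e-12), 3))   # roughly 10 in this erf-based convention
```
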

APPLYING THE ERROR FUNCTION TO THE MEASURED JITTER CDF (ON Q-SCALE)

A notion in science of preferred coordinates says that a physical-mathematical relationship can be most simply expressed when the coordinates of the problem are well-chosen. An example a professor gave me in graduate school was the case of Kepler's laws. Those laws expressed in the usual Cartesian coordinate system are rather obscure, but expressed in polar coordinates they are dead simple.

Forgive the digression, but this problem of analyzing the EDF to predict a CDF is best served by a coordinate transformation for the variable BER.

Several sources have proposed and described the subject of Q-Scale.[7][8] The desired transformation is to a new variable Q obtained from BER as follows:

Q(BER) = \mathrm{CDF}^{-1}_{\mathrm{Gaussian}}\!\left(\frac{BER}{2}\right) \qquad (0.17)

The Cumulative Distribution Function for a normal Gaussian distribution is related to the error function in the following way (see Figure 1):

\mathrm{CDF}(x) = \frac{1 + \mathrm{erf}(x)}{2} \qquad (0.18)

The CDF is the function that provides the probability for an event (for a system obeying a Gaussian probability density function) occurring to the left of the value x.

Note also, by inspection:

\mathrm{CDF}^{-1}(p) = \mathrm{erf}^{-1}(2p - 1) \qquad (0.19)

This representation for the CDF and EDF is so interesting (and thus preferred) because on this scale, the CDF of a Gaussian PDF is a straight line. When the CDF or EDF is in the modified symmetric form, its graph appears as a teepee, or the upper lines of a triangle. Below is a plot of an EDF (simulation) for a Gaussian PDF with a sigma of 1 psec.

[6] This is the ratio of transitions between bit values to the total number of bits (so upper-bounded by 1), but normally about 0.5 for standard test patterns and in particular for PRBS patterns. See the white paper from LeCroy on this subject.

[7] "Fibre Channel: Methods for Jitter and Signal Quality Specification (MJSQ)," T11 project, DT Rev, June.

[8] "Jitter Analysis: The Dual-Dirac Model, RJ/DJ, and Q-scale," Ransom Stephens, Agilent Technologies white paper, December.


[Figure 2: A simulated EDF, plotted on the Q-scale, for a Gaussian PDF with a sigma of 1 psec]

The other interesting attribute of this representation is that the slope of the lines gives the sigma of the distribution. This treatment so far is common. All is well for a single Gaussian distribution.
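The straight-line behavior can be checked numerically. This sketch uses the standard normal quantile as the Q transform (a simplification of the article's erf-based definition, and all names are hypothetical): on this scale a simulated Gaussian EDF becomes a line whose slope is 1/sigma and whose Q = 0 crossing sits at the mean.

```python
import math, random

def erfinv(z, tol=1e-13):
    lo, hi = -6.0, 6.0                  # bisection: erf is strictly increasing
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def q_scale(p):
    """Standard normal quantile, Q(p) = sqrt(2) * erfinv(2p - 1)."""
    return math.sqrt(2.0) * erfinv(2.0 * p - 1.0)

# Simulated EDF (from sorted samples) for a Gaussian: mu = 2 ps, sigma = 1 ps
random.seed(7)
mu, sigma, n = 2.0, 1.0, 50_000
samples = sorted(random.gauss(mu, sigma) for _ in range(n))
ranks = range(500, n - 500, 1000)       # avoid the sparsely populated extremes
xs = [samples[k] for k in ranks]
qs = [q_scale((k + 0.5) / n) for k in ranks]

# Least-squares slope of Q versus x: on the Q scale the Gaussian EDF is a line
mx, mq = sum(xs) / len(xs), sum(qs) / len(qs)
slope = sum((x - mx) * (q - mq) for x, q in zip(xs, qs)) / \
        sum((x - mx) ** 2 for x in xs)
print(round(1.0 / slope, 1), round(mx - mq / slope, 1))  # ~ sigma, ~ mu
```
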

Now enter the idea that this coordinate transformation should have a variable normalization factor. This normalization is such that when the area of an un-normalized Gaussian is \rho_{\mathrm{norm}}, the resulting CDF manifests as straight lines whose slope reveals sigma and whose intercept with Q = 0 gives the mean of the Gaussian PDF. This is the normalized Q-Scale, where:

Q'(BER) = \mathrm{CDF}^{-1}_{\mathrm{Gaussian}}\!\left(\frac{BER}{2\,\rho_{\mathrm{norm}}}\right) \qquad (0.20)

As an example, when you analyze two Gaussian distributions in this way, their EDFs appear as below. The linear fits on this coordinate scale yield the proper (simulated) values for their sigmas and means.


[Figure 3: The EDFs of two simulated Gaussian distributions on the normalized Q-scale, with linear fits]

AUTOMATIC RE-NORMALIZATION OF THE Q-SCALE

This is the heart of the (patent-pending) method for analysis of the EDF, yielding best estimates of the underlying PDFs defining the measured system's behavior. The procedure is very straightforward: You create the symmetric EDF for each side of the distribution.

The method can be described in steps. First, you form a set of elements to be fitted, each consisting of three values: a vertical (Q-Scale) value; a horizontal coordinate (for the case of jitter, time, but the method is extensible to vertical noise analysis, or in fact any variable); and an associated error. Statistically, the error of the input variable is furnished as the square root of the total population of observations (the same sum) contributing to the estimate (of the EDF in this case). Upper and lower values of BER are obtained for these variations, and a weight inversely proportional to the error is assigned to the data point.

Varying the Q-Scale normalization factor over a range of plausible values, you determine the best fit to linear behavior.

Using this value to correctly scale the probability, you obtain an intercept (with Q_{\mathrm{norm}}(BER) = 0) for both the right-hand side and the left-hand side of CDF(Q_{\mathrm{norm}}(BER)). You may interpret this value as the mean value associated with the noise/jitter contributor, with sigma equal to the reciprocal of the slope obtained from the best linear fit, and amplitude (area) equal to the best normalization factor itself.

Under the assumption of symmetric noise/jitter contributions, you may obtain and use a single, ideal normalization. Ideally, however, you must treat both extremes of the EDF independently, yielding strengths (\rho_{\mathrm{norm}}) and sigmas (reciprocal slopes) to characterize the extremal behavior of each side of the EDF distribution. In this case, we show only an example of one side of the EDF.
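The renormalization search itself can be sketched as follows. This is a simplified version under stated assumptions: it uses the standard normal quantile rather than the article's erf convention, and an analytic half-weight Gaussian tail in place of a measured EDF. The correct normalization factor is the one that makes the transformed points most nearly linear:

```python
import math

def phi(z):                         # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_inv(p, tol=1e-13):          # inverse normal CDF by bisection
    lo, hi = -8.0, 8.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def linear_misfit(xs, qs):
    """Sum of squared residuals about the least-squares line through (xs, qs)."""
    n = len(xs)
    mx, mq = sum(xs) / n, sum(qs) / n
    slope = sum((x - mx) * (q - mq) for x, q in zip(xs, qs)) / \
            sum((x - mx) ** 2 for x in xs)
    return sum((q - mq - slope * (x - mx)) ** 2 for x, q in zip(xs, qs))

# One dual-Dirac Gaussian of weight 0.5 (mu = -3 ps, sigma = 1 ps): its
# left-tail cumulative probability, evaluated analytically on a grid
rho_true, mu, sigma = 0.5, -3.0, 1.0
xs = [mu + 0.2 * k for k in range(-15, 6)]
ps = [rho_true * phi((x - mu) / sigma) for x in xs]

# Scan candidate normalization factors; only the true weight linearizes Q'
candidates = [0.25, 0.4, 0.5, 0.75, 1.0]
best = min(candidates, key=lambda r: linear_misfit(xs, [phi_inv(p / r) for p in ps]))
print(best)   # 0.5
```

With the winning normalization, the fitted slope recovers 1/sigma and the Q' = 0 crossing recovers the mean, as the text describes.
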

OBTAINING THE DETERMINISTIC AND RANDOM (GAUSSIAN) COMPONENTS FROM THE NORMALIZED Q-SCALE DIAGRAM

Although there are detailed differences between this approach for obtaining Rj and Dj and other methods, the most important figure we need to establish is total jitter, referring roughly back to the heuristic jitter equation. We rewrite it as:


Tj(BER) = Dj + 0.5\,\alpha\!\left(\frac{BER}{\rho_{\mathrm{left}}}\right) Rj_{\mathrm{left}} + 0.5\,\alpha\!\left(\frac{BER}{\rho_{\mathrm{right}}}\right) Rj_{\mathrm{right}} \qquad (0.21)

where Dj is the separation between the two means of the distributions, and \rho_{\mathrm{left}} and \rho_{\mathrm{right}} are the weights of the two Gaussians. If the distribution is governed by only two Gaussian distributions of equal weight, this equation degenerates into:

Tj(BER) = Dj + 0.5\,\alpha(2\,BER)\,Rj_{\mathrm{left}} + 0.5\,\alpha(2\,BER)\,Rj_{\mathrm{right}} \qquad (0.22)

And if, in addition, the two Gaussians have the same sigma (or Rj), then:

Tj(BER) = Dj + \alpha(2\,BER)\,Rj \qquad (0.23)

This equation is disturbingly different from equation (0.14), because in the traditional dual-Dirac discussion, we typically forget that the two Gaussian distributions are only half-strength.

In general, to approximate the traditional Dj and Rj values, you still use the separation of the outermost means, and approximate Rj by the mean of the two sigmas.

You do, however, calculate Tj from the most precise estimation of the CDF from the renormalized Q-Scale fit, using the correct weights and sigmas to reconstruct a theoretical PDF: the central region of the histogram is used as-is, and the extremes are replaced by theoretical Gaussians of the correct strength, sigma, and mean. You then reconstruct this information into a CDF, so that Tj(BER) becomes a simple calculation on that curve.
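Once the per-side weights and sigmas are in hand, the total-jitter bookkeeping of equations (0.21) through (0.23) is simple arithmetic. A sketch with hypothetical names, transition density omitted, and the article's erf-based alpha convention assumed:

```python
import math

def erfcinv(y, tol=1e-13):          # inverse of math.erfc, by bisection
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def alpha(ber):
    """alpha(BER) in the article's erf convention: 2 * erfinv(1 - BER)."""
    return 2.0 * erfcinv(ber)

def tj(ber, dj, rj_left, rj_right, w_left=0.5, w_right=0.5):
    """Total jitter per (0.21): each tail Gaussian enters at half strength,
    with the BER rescaled by that side's weight."""
    return (dj + 0.5 * alpha(ber / w_left) * rj_left
               + 0.5 * alpha(ber / w_right) * rj_right)

# Equal weights and equal sigmas collapse (0.21) into (0.23):
dj, rj, ber = 10.0, 1.5, 1e-12      # ps, ps, target BER
print(abs(tj(ber, dj, rj, rj) - (dj + alpha(2 * ber) * rj)) < 1e-9)  # True
```
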

Author's biography

Martin Miller, PhD, has been a hands-on engineer and designer at LeCroy for 29 years. He has contributed analog, digital, and software designs, and in the past 16 years he has focused on measurement-and-display software capabilities for LeCroy scopes. Miller holds several US patents and participates in task groups for JEDEC concerning jitter measurements. He has a doctorate in high-energy physics from the University of Rochester (Rochester, NY).

GLOSSARY

PDF of Gaussian: Histograms are closest in terms of a measurable probability density, and are said to approximate the PDF when normalized. The probability density function for the Gaussian distribution is
\rho(x) = \frac{w}{\sigma \sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}}

CDF of Gaussian: The CDF is
\mathrm{CDF}(x) = \int_{-\infty}^{x} \mathrm{PDF}(u)\,du
and for a Gaussian (that is, where the PDF(x) is a Gaussian function):
\mathrm{CDF}_{\mathrm{Gaussian}}(x) = \frac{1 + \mathrm{erf}(x)}{2}

Inverse CDF: CDF^{-1}(p) gives the horizontal coordinate for which the probability, in a system governed by a Gaussian distribution, of an event occurring to the left of that coordinate is p.

SCDF: Symmetric Cumulative Distribution Function, which ranges nominally from 0 to 0.5 and, rather than growing larger than 0.5, grows back down to zero.

BER: Bit Error Ratio or bit error rate. Since roughly 2004, there has been a tendency to always say Bit Error Ratio, because "rate" carries a connotation of errors per time.

Q-Scale: An alternative vertical scale to BER or Log10(BER), where
Q(BER) = \mathrm{CDF}^{-1}_{\mathrm{Gaussian}}(BER/2)

Re-Normalized Q-Scale: The scale where
Q'(BER) = \mathrm{CDF}^{-1}_{\mathrm{Gaussian}}\!\left(\frac{BER}{2\,\rho_{\mathrm{norm}}}\right)

EDF: The Empirical Distribution Function is the observed analogy to the CDF, which you obtain by summing the histogram (in the same way that the CDF is the integral of the PDF).

Histogram: A series of populations associated with (usually) adjacent horizontal intervals of equal size; also known as a frequency-of-occurrence graph. A histogram is formed from a series of measurements or observations by binning the values (of the measurement or observation) into the bins or intervals of the histogram. An observed or measured distribution is usually characterized by its number of bins, a bin width or interval, and a coordinate associated with (usually) the left edge of the first or leftmost bin.

Distribution: Refers to either the predicted distribution of a set of trial events, or the ideal distribution associated with an underlying physical process. Many theoretical distributions exist, such as Gaussian and Poisson.

Error Function: erf(x) is defined as
\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-u^2}\,du = 2\,\mathrm{CDF}_{\mathrm{Gaussian}}(x) - 1
and so is closely related to the cumulative distribution function for a Gaussian.

Complementary Error Function: erfc(x) = 1 - erf(x), or
\mathrm{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_{x}^{\infty} e^{-u^2}\,du

Inverse Error Function: Whereas the error function gives a value related to the probability of an event occurring to the left of a horizontal coordinate x when only the coordinate is specified, the inverse error function gives the coordinate when only (a value related to) the probability, p, is specified. This function has no closed analytic form but can be obtained by numerical methods.

Rj or RJ: Common notation for random jitter, a concept that really means random Gaussian unbounded jitter or, more generally, the part of jitter that grows with decreasing BER.

Dj or DJ: Common notation for deterministic jitter or bounded jitter. Included in this category are periodic jitter and data-dependent jitter.
