
Theory and Background

© Copyright LMS International 2000

Table of Contents

Part I  Signal processing

Chapter 1  Spectral processing
1.1  Digital signal processing . . . 2
1.2  Aliasing . . . 8
1.3  Leakage and windows . . . 10
    1.3.1  Windows . . . 11
        Window characteristics . . . 12
        Window types . . . 13
        Choosing window functions . . . 15
        Window correction mode . . . 15
        Window correction factors . . . 17
1.4  Averaging . . . 18
1.5  Reading list . . . 20

Chapter 2  Structural dynamics testing
2.1  Signal analysis . . . 24
2.2  System analysis . . . 25
2.3  Signature analysis . . . 27

Chapter 3  Functions
3.1  Time domain functions . . . 32
    Time Record . . . 32
    Autocorrelation . . . 32
    Crosscorrelation . . . 33
    Histogram . . . 34
    Probability Density . . . 34
    Probability Distribution . . . 34
3.2  Frequency domain functions . . . 36
    Spectrum . . . 36
    Autopower Spectrum . . . 37
    Crosspower spectrum . . . 38
    Coherence . . . 39
    Principal Component Spectra . . . 41
    Frequency Response Function . . . 42
    Impulse Response . . . 44
3.3  Composite functions . . . 46
    Overall level (OA) . . . 46
    Frequency section . . . 46
    Order sections . . . 48
    Octave sections . . . 48
3.4  Units . . . 49
3.5  Rms calculations . . . 50

Part II  Acoustics and Sound Quality

Chapter 4  Acoustic measurements
4.1  Acoustic measurement functions . . . 56
    Sound pressure level . . . 56
    Sound Intensity . . . 56
    Residual intensity . . . 56
    Pressure residual intensity index . . . 57
4.2  Calculation of acoustic quantities . . . 58
4.3  Acoustic measurement surfaces . . . 59
4.4  Acoustic ISO standards . . . 60
    Frequency bands . . . 60
    Field indicators . . . 60
        F1 Sound field temporal variability indicator . . . 60
        F2 Surface pressure-intensity indicator . . . 61
        F3 Negative partial power indicator . . . 61
        F4 Non-uniformity indicator . . . 62
        5.5.1  The criteria
    Measurement mesh adequacy . . . 64

Chapter 5  Terminology and definitions
5.1  Acoustic quantities . . . 68
    Sound power (P) . . . 68
    Sound pressure . . . 68
    Sound (Acoustic) intensity . . . 69
    Free field . . . 70
    Particle velocity . . . 72
    Acoustic impedance (Z) . . . 74
5.2  Reference conditions . . . 74
5.3  dB scale . . . 76
    Sound power level Lw . . . 77
    Particle velocity level Lv . . . 77
    Sound (Acoustic) intensity level LI . . . 77
    Sound pressure level LP . . . 78
5.4  Octave bands . . . 79
5.5  Acoustic weighting . . . 80

Chapter 6  Sound quality
6.1  The basic concepts of Sound Quality . . . 84
    Sound signals . . . 84
    The perception of sounds by the human ear . . . 85
    Binaural hearing . . . 86
6.2  Sound perception . . . 86
    Loudness . . . 87
    Pitch . . . 87
    Critical bands . . . 88
    Masking . . . 89
    Temporal effects . . . 90
6.3  Sound quality analysis . . . 91
    Analysis of sound signals . . . 91
    Binaural recording and playback . . . 93
    Reading list . . . 95

Chapter 7  Sound metrics
7.1  Sound pressure level . . . 100
    Time domain sound pressure level . . . 100
7.2  Equivalent sound pressure level . . . 101
7.3  Loudness . . . 102
    7.3.1  Stevens Mark VI . . . 103
    7.3.2  Stevens Mark VII . . . 104
    7.3.3  Loudness Zwicker . . . 104
7.4  Sharpness . . . 107
7.5  Roughness . . . 109
7.6  Fluctuation strength . . . 110
7.7  Pitch . . . 111
7.8  Articulation index (AI) . . . 112
7.9  Speech interference level (SIL, PSIL) . . . 114
7.10  Impulsiveness . . . 115

Chapter 8  Acoustic holography
8.1  Introduction . . . 118
8.2  Acoustic holography concepts . . . 119
    Temporal and spatial frequency . . . 119
    Summation of plane waves . . . 121
    Propagating and evanescent waves . . . 122
    (Back) propagating to other planes . . . 124
    The Wiener filter and the AdHoc window . . . 125
    Derivation of other acoustic quantities . . . 126

Part III  Time data processing

Chapter 9  Statistical functions
Minimum, maximum, range and extremum . . . 130
Sum . . . 130
Integration . . . 130
Root mean square . . . 131
Crest factor . . . 131
Mean . . . 131
Median . . . 132
Percentiles . . . 133
Variance and standard deviation . . . 133
Mean absolute deviation . . . 134
Extreme deviation . . . 134
Skewness . . . 134
Kurtosis . . . 135
Markov regression . . . 136

Chapter 10  Time frequency analysis
10.1  Introduction . . . 140
10.2  Linear time-frequency representations . . . 142
    The Short Time Fourier Transform (STFT) . . . 142
    Wavelet analysis . . . 143
10.3  Quadratic time-frequency representations . . . 146
    The Wigner-Ville distribution . . . 147
    Generalization . . . 148
10.4  References . . . 150

Chapter 11  Resampling
11.1  Fixed resampling . . . 152
    11.1.1  Integer downsampling . . . 153
    11.1.2  Integer upsampling . . . 154
    11.1.3  Fractional ratios . . . 156
    11.1.4  Arbitrary ratios . . . 157
11.2  Adaptive resampling . . . 159
    Implementation example . . . 159
11.3  References . . . 162

Chapter 12  Digital filtering
12.1  Basic definitions relating to digital filtering . . . 164
12.2  FIR and IIR filter design . . . 170
    12.2.1  Filter design terminology . . . 171
        Filter characteristics . . . 171
        Linear phase filters . . . 171
        Filter types . . . 172
    12.2.2  Design of FIR filters . . . 174
        Design of an FIR window filter . . . 174
        FIR multi window Filter . . . 176
        FIR Remez filter . . . 177
    12.2.3  Design of IIR filters using analog prototypes . . . 178
        Step 1) Specify the filter characteristics . . . 178
        Step 2) Compute the analog frequencies . . . 179
        Step 3) Select the suitable analog filter . . . 180
            Bessel filters . . . 180
            Butterworth filters . . . 181
            Chebyshev (type I) filters . . . 182
            Inverse Chebyshev (type II) filters . . . 183
            Cauer (elliptical) filter . . . 183
        Step 4) Transform the prototype low pass filter . . . 185
        Step 5) Apply a bilinear transformation . . . 185
        Determining the filter order . . . 185
    12.2.4  IIR Inverse design filter . . . 187
12.3  Analysis . . . 188
12.4  Applying filters . . . 189
12.5  References . . . 191

Chapter 13  Harmonic tracking
13.1  Introduction . . . 194
    Conditions for use . . . 194
13.2  Theoretical background . . . 195
    13.2.1  Determination of the Rpm . . . 195
    13.2.2  Waveform tracking . . . 195
        The Structural equation . . . 196
        The Data equation . . . 196
13.3  Practical considerations . . . 199

Chapter 14  Counting and histogramming
14.1  Introduction . . . 204
14.2  One dimensional counting methods . . . 206
    14.2.1  Peak count methods . . . 206
    14.2.2  Level cross counting methods . . . 207
    14.2.3  Range counting methods . . . 208
        Counting of single ranges . . . 208
        Counting of range-pairs . . . 209
14.3  Two-dimensional counting methods . . . 211
    14.3.1  From-to counting . . . 211
    14.3.2  Range-mean counting . . . 212
    14.3.3  "Range pair-range" or "Rainflow" method . . . 213
14.4  References . . . 217

Part IV  Analysis and design

Chapter 15  Estimation of modal parameters
15.1  Estimation of modal parameters . . . 220
    A note about units . . . 222
15.2  Types of analysis . . . 223
    15.2.1  Single or multiple degree of freedom method . . . 223
    15.2.2  Local or global estimates . . . 225
    15.2.3  Multiple input analysis . . . 226
    15.2.4  Time vs frequency domain implementation . . . 228
    15.2.5  Vibro-acoustic modal analysis . . . 230
15.3  Parameter estimation methods . . . 233
    Selection of a method . . . 233
    15.3.1  Peak picking . . . 234
    15.3.2  Mode picking . . . 236
    15.3.3  Circle fitting . . . 237
    15.3.4  Complex mode indicator function . . . 238
        Cross checking and tracking . . . 242
    15.3.5  Least squares complex exponential . . . 243
        Model for continuous data . . . 244
        Model for sampled data . . . 244
        Practical implementation of the method . . . 245
        Determining the optimum number of modes . . . 246
        Example . . . 248
        Model for sampled data . . . 250
        Practical implementation of the method . . . 251
        Example . . . 253
    15.3.6  Least squares frequency domain . . . 254
    15.3.7  Frequency domain direct parameter identification . . . 256
15.4  Maximum likelihood method . . . 260
    15.4.1  Theoretical aspects . . . 260
15.5  Calculation of static compensation modes . . . 264

Chapter 16  Operational modal analysis
16.1  Why operational modal analysis? . . . 268
16.2  Theoretical aspects . . . 270
    16.2.1  Stochastic subspace identification methods . . . 270
    16.2.2  Natural Excitation Techniques . . . 275
    16.2.3  Selection of the modal parameter identification method . . . 277
16.3  References . . . 279

Chapter 17  Running modes analysis
17.1  Running mode analysis . . . 282
17.2  Measuring running modes . . . 284
    17.2.1  Transmissibility functions . . . 284
    17.2.2  Crosspower spectra . . . 286
17.3  Identification and scaling of running modes . . . 288
    17.3.1  Scaling of running modes . . . 288
17.4  Interpretation of results . . . 290
    Modal Scale Factors and Modal Assurance Criterion . . . 290
    Modal decomposition . . . 291

Chapter 18  Modal validation
18.1  Introduction . . . 294
18.2  MSF and MAC . . . 295
18.3  Mode participation . . . 297
18.4  Reciprocity between inputs and outputs . . . 298
18.5  Generalized modal parameters . . . 300
18.6  Mode complexity . . . 302
18.7  Modal phase collinearity . . . 303
18.8  Comparison of models . . . 304
18.9  Mode indicator functions . . . 305
18.10  Summation of FRFs . . . 307
18.11  Synthesis of FRFs . . . 308

Chapter 19  Rigid body modes
19.1  Calculation of rigid body properties . . . 310
    Derivation of rigid body properties from measured FRFs . . . 310
    Calculation of the rigid body properties . . . 311
19.2  Rigid body mode analysis . . . 316
    19.2.1  Decomposition of measured modes into rigid body modes . . . 317
    19.2.2  Synthesis of rigid body modes based on geometrical data . . . 318
19.3  References . . . 320

Chapter 20  Design
20.1  Using the modal model for modal design . . . 322
20.2  Sensitivity . . . 325
    20.2.1  Mathematical background to sensitivity analysis . . . 325
20.3  Modification prediction . . . 329
    20.3.1  Mathematical background . . . 329
    20.3.2  Implementation of Modification prediction . . . 338
    20.3.3  Definition of modifications to the model . . . 339
    20.3.4  Modification prediction calculation . . . 347
    20.3.5  Units of scaling . . . 348
        Example of the application of a beam element . . . 349
        Static condensation . . . 351
20.4  Forced response . . . 354
    20.4.1  Mathematical background for forced response . . . 354

Chapter 21  Geometry concepts
21.1  The geometry of a test structure . . . 358
    Nodes . . . 359
    Location . . . 359
    Orientation . . . 359

Part I  Signal processing

Chapter 1  Spectral processing
Chapter 2  Structural dynamics testing . . . 23
Chapter 3  Functions . . . 31

Chapter 1

Spectral processing

This chapter provides an overview of the terminology and techniques used in general signal processing of vibrational and acoustic data:

Digital signal processing
Aliasing
Leakage and windows
Averaging

This is by no means a comprehensive treatment of the subject, and a reading list is given at the end of the chapter.

1.1  Digital signal processing


Time and frequency domains
It is a property of all real waveforms that they can be built up from a number of sine waves of appropriate amplitudes and frequencies. Viewing these waves in the frequency domain rather than the time domain can be useful, since the individual components are more readily revealed.
[Figure: the same waveform viewed in the time domain and as discrete spectral lines in the frequency domain, with amplitude on the vertical axis]
Each sine wave in the time domain is represented by one spectral line in the
frequency domain. The series of lines describing a waveform is known as its
frequency spectrum.
Fourier transform
The conversion of a time signal to the frequency domain (and its inverse) is
achieved using the Fourier Transform as defined below.


S_x(f) = \int_{-\infty}^{+\infty} x(t)\, e^{-j 2\pi f t}\, dt        Eqn. 1-1

x(t) = \int_{-\infty}^{+\infty} S_x(f)\, e^{+j 2\pi f t}\, df        Eqn. 1-2

This function is continuous and in order to use the Fourier Transform digitally
a numerical integration must be performed between fixed limits.
The Discrete Fourier Transform (DFT)
The digital computation of the Fourier Transform is called the Discrete Fourier Transform. It calculates values at discrete frequency points (mΔf) and performs a numerical integration between fixed limits (N samples), as illustrated below.

[Figure: the sampled product x(t)e^{-j 2\pi m \Delta f t} evaluated at discrete points along the time axis]

Since the waveform is being sampled at discrete intervals and during a finite
observation time, we do not have an exact representation of it in either domain.
This gives rise to shortcomings which are discussed later.
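The discrete sum that replaces the continuous integral can be written out directly. The sketch below (NumPy; the test signal and sampling values are illustrative assumptions, not from the text) evaluates the DFT definition X[m] = Σ x[n]·e^{-j2πmn/N} and checks it against the library FFT:

```python
import numpy as np

def dft(x):
    """Direct O(N^2) evaluation of the Discrete Fourier Transform:
    X[m] = sum_{n=0}^{N-1} x[n] * exp(-j*2*pi*m*n/N).
    For illustration only; the FFT computes the same result efficiently."""
    N = len(x)
    n = np.arange(N)
    # One row per spectral line m, one column per time sample n
    kernel = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return kernel @ np.asarray(x, dtype=complex)

# A 4 Hz sine sampled at 32 Hz for 1 second (N = 32, line spacing 1 Hz)
fs, N = 32, 32
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 4 * t)

X = dft(x)
# Energy appears in line m = 4 and its mirror m = N - 4 (negative frequency)
peak_lines = np.argsort(np.abs(X))[-2:]
print(sorted(int(m) for m in peak_lines))   # [4, 28]
```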

Hermitian symmetry
The Fourier transform of a sinusoidal function results in a complex function whose real and imaginary parts are symmetrical about zero frequency. This is illustrated below. In the majority of cases only the real part is taken into account, and of this only the positive frequencies are shown, so the representation of the frequency spectrum of a sine wave of amplitude A reduces to the single positive-frequency line of height A/2.
[Figure: a sinusoid X(t) of amplitude A and the real and imaginary parts of its spectrum S(f), each showing symmetric lines of height A/2 at -f and +f; only the positive-frequency part of the real spectrum is normally displayed]
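This symmetry is easy to verify numerically. The short NumPy check below is a sketch (the sampling rate, blocksize, amplitude and frequency are illustrative assumptions, not values from the text); the division by N is one common normalisation convention that makes line heights read directly as amplitudes.

```python
import numpy as np

fs = 64          # sampling rate in Hz (assumed for illustration)
N = 64           # blocksize
A = 2.0          # sinusoid amplitude
f0 = 5           # sinusoid frequency in Hz (an exact number of cycles per block)

t = np.arange(N) / fs
x = A * np.cos(2 * np.pi * f0 * t)

X = np.fft.fft(x) / N   # normalised so line heights read as amplitudes

# Hermitian symmetry of a real signal: S(-f) = conj(S(+f))
print(np.allclose(X[1:], np.conj(X[1:][::-1])))        # True
# The real part carries two symmetric lines of height A/2 at +/- f0
print(round(X[f0].real, 6), round(X[N - f0].real, 6))  # 1.0 1.0
```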

The Fast Fourier Transform (FFT)


The Fast Fourier Transform is a dedicated algorithm to compute the DFT. It
thus determines the spectral (frequency) contents of a sampled and discretized
time signal. The resulting spectrum is also discrete. The reverse procedure is
referred to as an inverse or backward FFT.


[Figure: a block of N time samples is transformed by the FFT into N/2 spectral lines; the inverse FFT performs the reverse operation]

To achieve high calculation performance the FFT algorithm requires that the
number of time samples (N) be a power of 2 (such as 2, 4, 8, ..., 512, 1024, 2048).

Blocksize
Such a time record of N samples is referred to as a block of data with N being
the blocksize. N samples in the time domain converts to N/2 spectral (frequency)
lines. Each line contains information about both amplitude and phase.

Frequency range
The time taken to collect the sample block is T. The lowest frequency that can be detected is the reciprocal of this observation time, 1/T.
The frequency spacing between the spectral lines is therefore 1/T, and the highest frequency that can be determined is (N/2)·(1/T).

[Figure: the N/2 spectral lines lie at frequencies 1/T, 2/T, 3/T, ..., (N/2)/T]

    Δf = 1/T

    fmax = (N/2)·(1/T) = (N/2)·Δf
The frequency range that can be covered is dependent on both the blocksize (N) and the sampling period (T). To cover high frequencies you need to sample at a fast rate, which implies a short sampling interval.
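These relationships are easy to verify numerically. The sketch below (plain Python; the blocksize and sampling rate are illustrative values, not taken from the text) derives the observation time, line spacing and frequency range:

```python
# Illustrative values (not from the text): a 1024-sample block at 2048 Hz.
N = 1024             # blocksize (a power of 2, as the FFT requires)
fs = 2048.0          # sampling frequency in Hz

dt = 1.0 / fs        # sampling interval
T = N * dt           # observation time: the time to collect one block
df = 1.0 / T         # frequency resolution = spacing of the spectral lines
n_lines = N // 2     # N time samples -> N/2 spectral lines
fmax = n_lines * df  # highest frequency = (N/2) * (1/T)

print(T, df, n_lines, fmax)  # 0.5 s, 2 Hz, 512 lines, 1024 Hz
```

Note that fmax works out to fs/2, the Nyquist frequency discussed in section 1.2, and that increasing fs at a fixed blocksize widens the range but coarsens the resolution.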

Real time Bandwidth


Remember that an FFT requires a complete block of data to be gathered before it
can transform it. The time taken to gather a complete block of data depends on
the blocksize and the frequency range but it is possible to be gathering a second
time record while the first one is being transformed. If the computation time
takes less than the measurement time, then it can be ignored and the process is
said to be operating in real time.
[Figure: in real time operation, time records 1-4 are acquired back to back while FFT 1, FFT 2 and FFT 3 are computed during the acquisition of the following record]

This is not the case if the computation time is taking longer than the measure
ment time or if the acquisition requires a trigger condition.

Overlap
Overlap processing involves using time records that are not completely inde
pendent of each other as illustrated below.

[Figure: overlapping time records 1-4, each record re-using part of the previous record's samples, feeding FFT 1, FFT 2 and FFT 3]

If the time data is not being weighted at all by the application of a window,
then overlap processing does not include any new data and therefore makes no
statistical improvement to the estimation procedure. When windows are being
applied however, the overlap process can utilize data that would otherwise be
ignored.
The figure below shows data that is weighted with a Hanning window. In this
case the first and last 20% of each sample period is practically lost and contrib
utes hardly anything towards the averaging process.

[Figure: sampled data weighted with a Hanning window, processed with no overlap - the tapered ends of each record contribute almost nothing]

Applying an overlap of at least 30% means that this data is once again included
- as shown below. This not only speeds up the acquisition (for the same num
ber of averages) but also makes it statistically more reliable since a much higher
proportion of the acquired data is being included in the averaging process.

[Figure: sampled data processed with 30% overlap - the window-attenuated data is included in a neighbouring record]
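The record layout described above can be sketched as a simple segmentation routine (illustrative code; the function name and block lengths are our own, and a real analyzer does this on streaming data):

```python
def segment(samples, blocksize, overlap):
    """Cut a long acquisition into (possibly overlapping) records.

    overlap is a fraction (0.3 = 30%): each new record re-uses that
    fraction of the previous record's samples."""
    step = int(blocksize * (1.0 - overlap))
    records = []
    start = 0
    while start + blocksize <= len(samples):
        records.append(samples[start:start + blocksize])
        start += step
    return records

data = list(range(100))
no_overlap = segment(data, 20, 0.0)    # 5 independent records
with_overlap = segment(data, 20, 0.3)  # step of 14 samples -> 6 records
print(len(no_overlap), len(with_overlap))
```

With the same amount of acquired data, the overlapped case yields more averages, which is exactly the statistical gain described above.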

1.2 Aliasing

Sampling at too low a frequency can give rise to the problem of aliasing, which can lead to erroneous results as illustrated below.

This problem can be overcome by observing what is known as the Nyquist criterion, which stipulates that the sampling frequency (fs) must be greater than twice the highest frequency of interest (fm):

    fs > 2·fm

The highest frequency that can be measured is fmax, which is half the sampling frequency (fs) and is also known as the Nyquist frequency (fn):

    fmax = fs/2 = fn

The problem of aliasing can also be illustrated in the frequency domain.


[Figure: measured frequency versus input frequency - inputs f2, f3 and f4 above the Nyquist frequency fold back around the lines fn, 2fn (= fs), 3fn and 4fn onto the measured frequency f1]
All multiples of the Nyquist frequency (fn) act as `folding lines'. So f4 is folded back on f3 around the line 3fn, f3 is folded back on f2 around the line 2fn, and f2 is folded back on f1 around the line fn. Signals at f2, f3 and f4 are therefore all seen as signals at frequency f1.
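The folding rule can be written down directly. A small sketch (the sampling rate is illustrative; the function is our own construction, not an LMS routine):

```python
def apparent_frequency(f_in, fs):
    """Frequency at which a tone at f_in appears when sampled at fs.

    Every multiple of the Nyquist frequency fn = fs/2 acts as a
    'folding line': input frequencies are folded back into 0..fn."""
    fn = fs / 2.0
    f = f_in % fs          # aliasing is periodic in fs
    return f if f <= fn else fs - f

fs = 1000.0                            # illustrative sampling rate
print(apparent_frequency(400.0, fs))   # below fn: measured correctly, 400.0
print(apparent_frequency(600.0, fs))   # folded around fn -> 400.0
print(apparent_frequency(1400.0, fs))  # folded around 3fn -> 400.0
```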
The only sure way to avoid such problems is to apply an analog or digital anti-aliasing filter to limit the high frequency content of the signal. Filters are less than ideal however, so the cut-off frequency of the filter must be positioned with respect to fmax and the roll-off characteristics of the filter.

[Figure: an ideal filter would cut off sharply at fmax; a real filter rolls off gradually between fmax and fs]

1.3 Leakage and windows


A further problem associated with the discrete time sampling of the data is that of leakage. A continuous sine wave, such as the one shown below, should result in a single spectral line.
[Figure: a continuous sine waveform in the time domain and its single-line frequency spectrum]
Because the signals are measured over a sample period T, the DFT assumes that
this is representative for all time. When the sine wave is not periodic in the
sample time window, the result is a consequent leakage of energy from the
original line spectrum due to the discontinuities at the edges.
[Figure: a discretely sampled waveform that is not periodic in the sample time; the waveform assumed by the DFT contains discontinuities at the edges, and the spectral energy smears into neighbouring frequency lines]
The user should be aware that leakage is one of the most serious problems
associated with digital signal processing. Whilst aliasing errors can be reduced
by various techniques, leakage errors can never be eliminated. Leakage can be re
duced by using different excitation techniques and increasing the frequency
resolution, or through the use of windows as described below.
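The effect is easy to reproduce with NumPy's FFT (the frequencies below are illustrative): a sine that fits an integer number of cycles in the block concentrates its energy in one line, while a half-cycle offset spreads it over many lines.

```python
import numpy as np

fs, N = 1024.0, 1024        # illustrative: 1 s of data, so df = 1 Hz
t = np.arange(N) / fs

def peak_fraction(f0):
    """Fraction of the total spectral energy held by the largest line."""
    power = np.abs(np.fft.rfft(np.sin(2 * np.pi * f0 * t))) ** 2
    return power.max() / power.sum()

# Periodic in the window (exactly 100 cycles): one line holds everything.
print(peak_fraction(100.0))   # ~1.0
# Not periodic (100.5 cycles): energy leaks into the neighbouring lines.
print(peak_fraction(100.5))   # well below 1
```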


1.3.1 Windows
The problem of discontinuities at the edge can be alleviated either by ensuring
that the signal and the sampling period are synchronous or by ensuring that the
function is zero at the start and end of the sampling period. This latter situa
tion can be achieved by applying what is called a `window function' which
normally takes the form of an amplitude modulated sine wave.

[Figure: three cases - the frequency spectrum of a sine wave periodic in the sample period T (a single line); of a sine wave not periodic in the sample period, without a window (severe leakage); and of the same non-periodic sine wave with a window applied]

The use of windows itself gives rise to errors of which the user should be aware, and windows should therefore be avoided where possible. The various types of windowing function distribute the energy in different ways. The choice of window depends on the input function and on your area of interest.

Self windowing functions


Self windowing functions are those that are periodic in the sample period T or
transient signals. Transient signals are those where the function is naturally
zero at the start and end of the sampling period such as impulse and burst sig
nals. Self windowing functions should be adopted whenever possible since the
application of a window function presents problems of its own. A rectangular
or uniform window can then be used since it does not affect the energy dis
tribution.
Note!

It should be noted that synchronizing the signal and the sampling time, or using a self windowing function, is preferable to using a window.


Window characteristics
The time windows provided take a number of forms, many of which are amplitude modulated sine waves. They are all in effect filters, and the properties of the various windows can be compared by examining their filter characteristics in the frequency domain, where they can be characterized by the factors shown below.
[Figure: the frequency domain characteristic of a window - the noise bandwidth of the central lobe, the highest side lobe level relative to 0 dB, and the side lobe falloff, plotted against log f]
The windows vary in the amount of energy squeezed into the central lobe as compared to that in the side lobes. The choice of window depends on both the aim of the analysis and the type of signal you are using. In general, the broader the noise bandwidth, the worse the frequency resolution, since it becomes more difficult to pick out adjacent frequencies with similar amplitudes. On the other hand, selectivity (i.e. the ability to pick out a small component next to a large one) improves with side lobe falloff. Typically a window that scores well on noise bandwidth is weak on side lobe falloff, so the choice is a trade off between the two. A summary of these characteristics of the windows provided is given in Table 1.1.
Window type     Highest side   Side lobe falloff   Noise bandwidth   Max. amp.
                lobe (dB)      (dB/decade)         (bins)            error (dB)

Uniform         -13            -20                 1.00              3.9
Hanning         -32            -60                 1.5               1.4
Hamming         -43            -20                 1.36              1.8
Kaiser-Bessel   -69            -20                 1.8               1.0
Blackman        -92            -20                 2.0               1.1
Flattop         -93            -                   3.43              <0.01

Table 1.1 Properties of time windows
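The noise bandwidth column of Table 1.1 can be reproduced from the window samples using N·Σw²/(Σw)², the usual definition of equivalent noise bandwidth in bins (this formula is not quoted in the text). A NumPy sketch:

```python
import numpy as np

def noise_bandwidth_bins(w):
    """Equivalent noise bandwidth of a window in bins:
    N * sum(w^2) / (sum(w))^2  (1.0 for the Uniform window)."""
    n = len(w)
    return n * np.sum(w ** 2) / np.sum(w) ** 2

N = 4096
uniform = np.ones(N)
# Periodic Hanning window: 0.5 - 0.5*cos(2*pi*n/N)
hanning = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)

print(round(noise_bandwidth_bins(uniform), 2))  # 1.0  (cf. Table 1.1)
print(round(noise_bandwidth_bins(hanning), 2))  # 1.5  (cf. Table 1.1)
```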


Window types

Uniform window
This window is used when leakage is not a prob
lem since it does not affect the energy distribu
tion. It is applied in the case of periodic sine
waves, impulses, transients... where the function
is naturally zero at the start and end of the sam
pling period.
The following windows - Hanning, Hamming, Blackman, Kaiser-Bessel and Flattop - all take the form of an amplitude modulated sine wave in the time domain. For a comparison of their frequency domain filter characteristics, see Table 1.1.

Hanning
This window is most commonly applied for general purpose analysis of ran
dom signals with discrete frequency components. It has the effect of applying a
round topped filter. The ability to distinguish between adjacent frequencies of
similar amplitude is low so it is not suitable for accurate measurements of small
signals.

Hamming
This window has a higher side lobe than the Hanning but a lower fall off rate
and is best used when the dynamic range is about 50dB.

Blackman
This window is useful for detecting a weak component in the presence of a
strong one.

Kaiser-Bessel
The filter characteristics of this window provide good selectivity, and thus
make it suitable for distinguishing multiple tone signals with widely different
levels. It can cause more leakage than a Hanning window when used with ran
dom excitation.

Part I

Signal processing

13

Chapter 1

Spectral processing

Flattop
This window's name derives from its low ripple characteristics in the filter pass
band. This window should be used for accurate amplitude measurements of
single tone frequencies and is best suited for calibration purposes.

Force window
This type of window is used with a transient signal in the case of impact testing. It is designed to eliminate stray noise in the excitation channel, as illustrated here. It has a value of 1 during the impact period and 0 otherwise.

Exponential window
This window is also used with a transient signal. It is designed to ensure that the signal dies away sufficiently at the end of the sampling period, as shown below. The window is a decaying exponential of the form e^(-βt); the `Exponential decay' setting determines the % level at the end of the time window.

An exponential window is normally applied to the response (output) channels during impact testing. It is also the most appropriate window to be used with a burst excitation signal, in which case it should be applied to all channels, i.e. force(s) and response(s). It does however introduce artificial damping into the measurement data, which should be carefully taken into account in further processing such as modal analysis.


Choosing window functions

For the analysis of transient signals use:

Uniform        for general purposes
Force          for short impulses and transients, to improve the signal to noise ratio
Exponential    for transients which are longer than the sample period or which do not decay sufficiently within this period.

For the analysis of continuous signals use:

Hanning        for general purposes
Blackman or
Kaiser-Bessel  if selectivity is important and you need to distinguish between harmonic signals with very different levels
Flattop        for calibration procedures and for those situations where correct amplitude measurements are important.
Uniform        only when analyzing special sinusoids whose frequencies coincide with center frequencies of the analysis.

For system analysis, i.e. measurement of FRFs, use:

Force          for the excitation (reference) signal when this is a hammer
Exponential    for the response signal of lightly damped systems with hammer excitation
Hanning        for reference and response channels when using random excitation signals
Uniform        for reference and response channels when using pseudo random excitation signals

Window correction mode

Applying a window distorts the nature of the signal, and correction factors have to be applied to compensate for this. This correction can be applied in one of two ways.

Amplitude   where the amplitude is corrected to the original value.

Energy      where the correction factor gives the correct signal energy for a particular frequency band. This is the only method that should be used for broad band analysis.


If a window is applied to a function a number of times, the effect of the window may be squared or cubed, and this affects the correction factor required.

Amplitude correction
Consider the example of a sine wave signal and a Hanning window.
[Figure: the unwindowed and the windowed (sine wave x Hanning) signals in the time domain, and their amplitude spectra in the frequency domain]

When the windowed signal (sine wave x Hanning window) is transformed to the frequency domain, the amplitude of the resulting spectrum will be only half that of the equivalent unwindowed signal. Thus, in order to correct for the effect of the Hanning window on the amplitude of the frequency spectrum, the resulting spectrum has to be multiplied by an amplitude correction factor of 2.
Amplitude correction must be used for amplitude measurements of single tone frequencies if the analysis is to yield correct results.

Energy correction
Windowing also affects broadband signals.

[Figure: the original broadband signal, the window function, and the windowed signal]

In this case however it is usually the energy in the signal that is important to maintain, and an energy correction factor will be applied to restore the energy level of the windowed signal to that of the original signal.
In the case of a Hanning window, the energy in the windowed signal is 61% of that of the original signal. The windowed data therefore needs to be multiplied by 1.63 to correct the energy level.
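Both Hanning factors quoted above can be reproduced from the window samples. In this sketch the amplitude factor is taken as 1/mean(w) and the energy factor as 1/√mean(w²); these are standard definitions consistent with, but not spelled out in, the text:

```python
import numpy as np

N = 4096
w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)  # Hanning window

# Amplitude mode: the window halves the average level of a tone, so the
# spectrum must be multiplied by 1 / mean(w).
amp_correction = 1.0 / np.mean(w)

# Energy mode: the windowed signal retains mean(w^2) of the original
# power; the rms-restoring factor is 1 / sqrt(mean(w^2)).
energy_correction = 1.0 / np.sqrt(np.mean(w ** 2))

print(round(amp_correction, 2))     # 2.0  (the factor of 2 above)
print(round(energy_correction, 2))  # 1.63 (Table 1.2, Hanning x1)
```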

Window correction factors


The actual correction factor that is needed to compensate for the application of
the time window depends on the window correction mode and the number of
windows applied. Table 1.2 lists the values used.
Window type     Amplitude mode   Energy mode

Uniform         1.00             1.00
Hanning x1      2.00             1.63
Hanning x2      2.67             1.91
Hanning x3      3.20             2.11
Blackman        2.80             1.97
Hamming         1.85             1.59
Kaiser-Bessel   2.49             1.86
Flattop         4.18             2.26

Table 1.2 Window correction factors


1.4 Averaging
Signals in the real world are contaminated by noise - both random and bias. This contamination can be reduced by averaging a number of measurements, in which case the random noise will average towards zero. Bias errors however, such as nonlinearities, leakage and mass loading, are not reduced by the averaging process. A number of different techniques for averaging measurements are provided.

Linear
This produces a linearly weighted average in which all the individual measurements have the same influence on the final averaged value. If the average value of M consecutive measurement ensembles is x̄, then

    x̄ = (1/M) · Σ(m=0..M-1) x_m        Eqn 1-3

The intermediate average is a running sum, x̄_a,n = x̄_a,(n-1) + x_n; the final scaling can be done at the end of the acquisition.

Stable
In the case of stable averaging, again all the individual measurements have the same influence on the final averaged value. In this case though, the intermediate averaging result is based on

    x̄_n = x̄_(n-1) · (n-1)/n + x_n/n        Eqn 1-4

The advantage of stable averaging is that the intermediate averaging results are always properly scaled. This scaling however makes the procedure slightly more time consuming.

Exponential
Exponential averaging on the other hand yields an averaging result in which the newest measurement has the largest influence, while the effect of the older ones is gradually diminished. In this case

    x̄_n = α · x_n + (1 - α) · x̄_(n-1)        Eqn 1-5

where α is a constant which acts as a weighting factor.
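The schemes can be compared in a few lines of Python. This is a sketch: the exact weighting convention the analyzer uses for exponential averaging may differ from the illustrative form here.

```python
def stable_average(samples):
    """Stable averaging: avg_n = avg_(n-1)*(n-1)/n + x_n/n, so every
    intermediate result is already properly scaled."""
    avg = 0.0
    for n, x in enumerate(samples, start=1):
        avg = avg * (n - 1) / n + x / n
    return avg

def exponential_average(samples, alpha):
    """Exponential averaging, illustrative form: the newest measurement
    gets weight alpha, older ones decay geometrically."""
    avg = samples[0]
    for x in samples[1:]:
        avg = alpha * x + (1 - alpha) * avg
    return avg

data = [1.0, 2.0, 3.0, 4.0]
linear = sum(data) / len(data)         # every measurement weighted equally
print(linear, stable_average(data))    # both 2.5
print(exponential_average(data, 0.5))  # 3.125: biased toward newest data
```

Stable averaging reaches the same final value as linear averaging; exponential averaging deliberately does not, which makes it suitable for tracking slowly changing signals.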

Peak level hold

In this case a comparison has to be made between individual measurement ensembles. When they contain complex data, the comparison is based on the amplitude information. For peak level hold averaging, the last measurement ensemble, consisting of N individual samples x_n(k) (where k = 0...N-1 and N is the blocksize), is compared to the average of the n-1 previous steps, x̄_(n-1)(k). The new average x̄_n(k) is then defined as

    x̄_n(k) = x_n(k)         if |x_n(k)| > |x̄_(n-1)(k)|        Eqn 1-6
    x̄_n(k) = x̄_(n-1)(k)     otherwise

In this way the averaging result contains, for a specific k, the maximum value in an absolute sense of all the ensembles considered during the averaging process.

Peak reference hold

In peak reference hold averaging, one channel determines the averaging process. If x_i is the ensemble for channel i and x_r represents the reference channel, then the last measurement ensemble x_r,n(k) (where k = 0...N-1) is compared to the average of the n-1 previous steps, x̄_r,(n-1)(k). The new average x̄_i,n(k) is then defined as

    x̄_i,n(k) = x_i,n(k)        if |x_r,n(k)| > |x̄_r,(n-1)(k)|        Eqn 1-7
    x̄_i,n(k) = x̄_i,(n-1)(k)    otherwise

This way, the averaging result contains all values that coincide with the maximum values for the reference channel.
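Peak level hold is simply a per-sample maximum in the absolute sense, which a short sketch makes concrete (pure Python, illustrative data):

```python
def peak_level_hold(ensembles):
    """Peak level hold (Eqn 1-6): for every sample index k, keep the
    value with the largest absolute magnitude over all ensembles."""
    held = list(ensembles[0])
    for ensemble in ensembles[1:]:
        for k, x in enumerate(ensemble):
            if abs(x) > abs(held[k]):
                held[k] = x
    return held

blocks = [[1.0, -5.0, 2.0],
          [3.0, 4.0, -2.5],
          [-2.0, 1.0, 0.5]]
print(peak_level_hold(blocks))  # [3.0, -5.0, -2.5]
```

Note that the sign of the winning value is preserved; only the comparison uses the magnitude.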


1.5 Reading list
Signal and system theory
J. S. Bendat and A.G. Piersol.
Random Data : Analysis and Measurement Procedures
Wiley - Interscience, 1971.
J. S. Bendat and A.G. Piersol.
Engineering Applications of Correlation and Spectral Analysis
Wiley - Interscience, 1980.
R.K. Otnes and L. Enochson.
Applied Time Series Analysis
John Wiley & Sons, 1978.
J. Max
Méthodes et Techniques de Traitement du Signal (2 Tomes)
Masson, 1972, 1986.
General literature in digital signal processing
A.V. Oppenheim and R.W. Schafer
Digital Signal Processing
Prentice Hall, Englewood Cliffs N.J., 1975.
L.R. Rabiner and B. Gold
Theory and Application of Digital Signal Processing
Prentice Hall, Englewood Cliffs N.J., 1975.
K.G. Beauchamp and C.K. Yuen
Digital Methods for Signal Analysis
George Allen & Unwin, London 1979.
M. Bellanger
Traitement Numérique du Signal
Masson, Paris 1981.
A. Peled and B. Liu
Digital Signal Processing
Theory, Design And Implementation
John Wiley & Sons.
Discrete Fourier Transform
E.O. Brigham
The Fast Fourier Transform
Prentice Hall, Englewood Cliffs N.J., 1974.


R.W. Ramirez
The FFT : Fundamentals and Concepts
Prentice Hall, Englewood Cliffs N.J., 1985.
C.S. Burrus and T.W. Parks
DFT/FFT and Convolution Algorithms : Theory and Implementation
John Wiley & Sons, 1985.
H.J. Nussbaumer
Fast Fourier Transform and Convolution Algorithms
Springer Verlag, 1982.
R.E. Blahut
Fast Algorithms for Digital Signal Processing
Addison Wesley, 1985.
IEEE-ASSP Society
Programs for Digital Signal Processing
IEEE Press, New York, 1979.


Chapter 2

Structural dynamics
testing

Understanding the structural dynamics of a structure is essential both for improving the performance of existing structures and for the design and development of new ones.
This chapter provides an introduction to the types of analysis used in examining the dynamic behavior of structures:
Signal analysis
Signature analysis
System analysis

2.1 Signal analysis
The dynamic analysis of a linear physical system can be achieved by measuring the response of the system (output) to a form of excitation. This excitation can consist of operational forces which, while typical, are not necessarily known. Measuring the response to known excitation forces is discussed in section 2.2.
In examining the vibrational behavior of a structure, there are a range of func
tions that can be acquired which will provide information on the frequencies at
which particular phenomena occur. These measurement functions are de
scribed in chapter 3.
Noise levels are a common problem, and specific information about acoustic measurement functions is given in a separate set of documentation on Acoustics and sound quality.
The examination of the behavior of a structure due to a changing environment,
such as during an engine run up is termed signature analysis and this subject is
discussed in section 2.3.

2.2 System analysis
System analysis refers to a method of examining the properties of a system, i.e.
how a structure responds to a specific input. In the case of a linear system, this
relationship between the input and the output is a fundamental characteristic of
the system and can be used to predict the behavior of the system due to differ
ent stimuli.
[Figure: the same system transforms different inputs into corresponding outputs through one input/output relationship]

Modal analysis is a form of system analysis which results in a modal model of the system, composed of a set of frequencies, damping values and mode shapes.
The Frequency Response Function (FRF) is a frequency domain function expressing the ratio between a response (output) signal and a reference (input) signal. The position and direction of the measurements are termed Degrees Of Freedom (DOFs). An FRF thus always depends on 2 DOFs: the response DOF (numerator) and the reference DOF (denominator).
[Diagram: input from reference DOF Xj -> FRF H(f) -> output from response DOF Xi]

    H(f) = Xi / Xj

For modal purposes the response signal is most commonly the acceleration at
the response DOF due to a force input at another. In this case peaks in the FRF
indicate that low input levels generate high response levels (resonances), while
minima indicate low response levels, even for high inputs (anti-resonances).
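A minimal simulation shows how an FRF estimate picks out a resonance. The sketch uses the averaged crosspower/autopower ratio (the common H1 estimator, of which the text's H(f) = Xi/Xj is the single-measurement form); the system parameters are invented for illustration:

```python
import numpy as np

# Illustrative single degree of freedom system (all values invented):
# resonance at 100 Hz with 2% damping, simulated in the frequency domain.
N, fs = 1024, 1024.0
freq = np.fft.rfftfreq(N, d=1.0 / fs)
f0, zeta = 100.0, 0.02
H_true = 1.0 / (1.0 - (freq / f0) ** 2 + 2j * zeta * freq / f0)

# Average crosspower over autopower across M "measurements" - the common
# H1 estimator, whose single-measurement form is H(f) = Xi / Xj.
rng = np.random.default_rng(0)
M, Sxy, Sxx = 20, 0.0, 0.0
for _ in range(M):
    X = np.fft.rfft(rng.standard_normal(N))  # random excitation spectrum
    Y = H_true * X                           # response spectrum
    Sxy += np.conj(X) * Y
    Sxx += np.conj(X) * X
H_est = Sxy / Sxx

peak_line = int(np.argmax(np.abs(H_est)))
print(freq[peak_line])  # close to the 100 Hz resonance
```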

[Figure: an FRF magnitude plot (log amplitude versus frequency) showing a resonance peak and an anti-resonance minimum]

Measurement points
The number of acquisition channels determines the number of response and excitation points that can be measured at any one time. Their position on the test system can be defined as part of the geometry of the structure. In order to visualize the response of each DOF, their geometrical positions must be defined.

Exciting the structure

The input to the structure can be applied either with a hammer or with a shaker. Using a shaker requires a `Source' signal, which can take a number of forms; the choice of signal depends on the nature of the analysis.
If the response is measured at several response DOFs and the system is excited at a number of inputs, the resulting FRFs are termed Multiple Input Multiple Output.
When a hammer is used to excite a mechanical structure, the procedure is termed Impact testing. This type of testing can be done in one of two ways. The first method involves measuring the response at a fixed point and applying the hammer at a number of excitation points; this case is termed `Roving hammer'. The alternative is to apply the hammer to one point and to measure the response at all the other points; this case is termed `Fixed hammer'.

2.3 Signature analysis
This involves analyzing a series of non-stationary signals that are varying over the analysis period. An example would be the vibrational/acoustical behavior of a structure as a function of rotational speed. Thus during `run-up' and/or `run-down' a series of signals are measured to determine the behavior of the structure, with the rotational speed determined from the tacho signal.
Spectral data are analyzed and plotted against the external parameter as illus
trated below. Such an arrangement is known as a waterfall or map of mea
sured functions. The functions that can be acquired during a run and placed in
a waterfall are listed in sections 3.1 and 3.2.
[Figure: a waterfall - basic functions plotted against the tracking parameter, from which composite functions are derived]
As well as the waterfall of measured functions, signature analysis enables you
to obtain so-called composite functions. These are two-dimensional functions
that are directly related to the tracking parameter value. Such functions are
overall levels and frequency sections and they are described in section 3.3.
Measurements are taken during the acquisition but further analyses of the mea
sured functions in relation to the tracking parameters can be performed during
post processing.
Tracking
The dominant parameter describing the change of a signal is termed the tracking parameter. This could be time, rpm, temperature or some other quantity. The rotational speed is commonly used as a tracking parameter, and for this a tacho signal is used to determine the rpm.
A number of pulses per revolution are generated by the rotating shaft. The tacho channel uses a positive slope crossing of a trigger level to determine the time between pulses and thus the rpm.

[Figure: a tacho pulse train with successive pulse intervals t1, t2, t3]
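The time-between-pulses calculation can be sketched directly (pure Python; the pulse times and pulses-per-rev value are illustrative):

```python
def rpm_from_tacho(pulse_times, pulses_per_rev):
    """Estimate rpm from the times (s) at which the tacho signal crosses
    the trigger level with positive slope."""
    rpms = []
    for t_prev, t_next in zip(pulse_times, pulse_times[1:]):
        revs_per_sec = 1.0 / (pulses_per_rev * (t_next - t_prev))
        rpms.append(60.0 * revs_per_sec)
    return rpms

# Illustrative: 1 pulse/rev, shaft running up from 600 to 1200 rpm.
times = [0.0, 0.1, 0.175, 0.225]  # intervals of 0.1, 0.075, 0.05 s
print(rpm_from_tacho(times, 1))   # approximately 600, 800, 1200 rpm
```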


While a number of channels can be used to measure tracking values, one must
be used to control the acquisition, i.e. to determine when the measurements
will be made.
Parameters relating to signature analysis

Sampling frequency    fs = 1/Δt
Sampling period       T = N·Δt
Number of revs        P
Samples per rev       M
Blocksize             N = M·P samples

P = number of revs/block = (number of revs/sec) · (number of secs) = rpm(Hz) · T

M = number of samples/rev = (number of samples/sec) · (number of secs/rev) = fs / rpm(Hz)

N = number of samples (data acquisition size, blocksize) = (number of samples/rev) · (number of revs) = M·P
Orders
For rotating machinery most signal phenomena are related to the rotational
speed and its harmonics.
A rotational speed harmonic is called an order. It is the proportionality
constant (O) between the rotational speed (rpm) and the frequency (f).


f = O · rpm (Hz)

For stationary signals the relevant analysis parameters are the maximum frequency (fmax) and the frequency resolution (Δf):

    fmax = fs / 2
    Δf = 1 / T

For rotational equipment the relevant analysis parameters are the maximum order (Omax) and the order resolution (ΔO):

    fmax = Omax · rpm        Omax = M / 2
    Δf = ΔO · rpm            ΔO = 1 / P

Fixed sampling
This is another term for basic signature analysis, where signals are measured
using the standard data acquisition techniques as described above i.e. with a
fixed sampling frequency and sampling period. The rpm is measured but is
used only for control of the acquisition, and annotation of the acquired blocks.
In this case, the maximum order and the order resolution will vary with the ro
tational speed (rpm).

Order tracking
This involves measuring signals at different rotational speeds but in this case,
the sampling frequency (fs ) and observation time (T) are dependent on the rpm.
The data is sampled synchronously with the rotational speed (rpm). In this
way the number of samples per revolution is kept constant. The signals are in fact
sampled at constant shaft angle increments rather than time increments. This
implies that the maximum order measured remains constant (Omax= M / 2).
When order tracking, the number of revolutions per measurement (P) is independent of the rotational speed. Thus, with a constant P, the order resolution is constant (ΔO = 1/P). The orders lie exactly on spectral lines, and leakage problems are avoided when an integer number of revolutions is measured.
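The relations Omax = M/2 and ΔO = 1/P can be turned into a small setup calculation (the function names are our own and the numbers illustrative):

```python
def order_tracking_setup(max_order, order_resolution):
    """Derive per-revolution acquisition parameters for order tracking:
    samples per rev (M) from Omax = M/2, revs per block (P) from
    dO = 1/P, and the blocksize N = M*P."""
    M = int(2 * max_order)
    P = int(round(1.0 / order_resolution))
    return M, P, M * P

def sampling_frequency(M, rpm):
    """With order tracking the sampling rate follows the shaft:
    M samples per revolution at rpm/60 revolutions per second."""
    return M * rpm / 60.0

M, P, N = order_tracking_setup(max_order=8, order_resolution=0.25)
print(M, P, N)                      # 16 samples/rev, 4 revs, N = 64
print(sampling_frequency(M, 3000))  # 800.0 Hz at 3000 rpm
print(sampling_frequency(M, 6000))  # 1600.0 Hz at 6000 rpm
```

The doubling of fs with rpm is the point of order tracking: the per-revolution parameters M and P, and hence Omax and ΔO, stay fixed while the shaft speed changes.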


Chapter 3

Functions

This chapter gives a brief description of the various functions that can be measured and their uses.
Time domain measurement functions
Frequency domain measurement functions
This chapter does not deal with acoustic measurements, which are dealt with in a separate set of documents, "Acoustics and sound quality".
It does describe the specific functions that are associated with signature analysis and which are based on a tracking parameter.
Composite functions
In addition this chapter mentions the use of consistent units and how rms values are calculated for the various measurement functions.
Units
Calculation of rms values

3.1 Time domain functions
Time Record
N instantaneous time samples x(n) are taken, where N = the blocksize. The result of a time record measurement x̄(n) is the ensemble average of a series of M instantaneous time records, where M = the number of averages and A designates the averaging operator.

    x̄(n) = A(m=0..M-1) (x_m(n)),    n = 0...N-1        Eqn 3-1

Averaging is useful in perceiving signals disguised by the presence of noise.


The specification of the number of averages taken in the determination of a
block of data as well as the various averaging methods used are described in sec
tion 1.4.
In the case of Signature Analysis, a map or waterfall is obtained of all the time
measurements taken during the acquisition. Because this analysis deals with
changing signals, averaging is only useful with signals that change slowly or in
a stepwise fashion.

Autocorrelation
Correlation is a measure of the similarity between two quantities. The autocorrelation function is found by taking a signal and comparing it with a time shifted version of itself.
The time domain autocorrelation function Rxx(τ) is thus acquired by multiplying a signal by the same signal displaced by a time shift (τ) and integrating the product over all time.

    Rxx(τ) = lim(T→∞) (1/T) ∫ x(t)·x(t+τ) dt        Eqn 3-2

However this function is more commonly computed by using the corresponding frequency domain function. In this case the discrete autocorrelation function Rxx(n) of a sampled signal x(n) is calculated as

    Rxx(n) = F⁻¹{Sxx(k)},    k = 0...N-1,  n = 0...N-1        Eqn 3-3


where F -1 is the inverse Fourier Transform and Sxx (k) is the discrete autopower
spectrum.
It can be seen that the greatest correlation will occur when τ = 0, and the autocorrelation function will thus be a maximum at this point, equal to the mean square value of x(t). Purely random signals will therefore exhibit just one peak, at τ = 0. Periodic signals however will exhibit further peaks when the time shift equals a multiple of the period.
The autocorrelation function of a periodic signal is also periodic and has the same period as the waveform itself. This property is useful in detecting signals hidden by noise. The advantage of using the autocorrelation function rather than linear averaging is that no synchronizing trigger is required. Certain impulse type signals also show up better using the autocorrelation function than using a frequency domain function.
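Eqn 3-3 can be exercised with NumPy. Note that the FFT-based result is circular, i.e. it assumes the block repeats periodically; the continuous definition above glosses over this detail.

```python
import numpy as np

def autocorrelation(x):
    """Discrete autocorrelation per Eqn 3-3: the inverse FFT of the
    autopower spectrum Sxx(k) = X(k) * conj(X(k)). The result is
    circular: the FFT assumes the block repeats periodically."""
    X = np.fft.fft(x)
    return np.real(np.fft.ifft(X * np.conj(X)))

N = 256
n = np.arange(N)
x = np.sin(2 * np.pi * 8 * n / N)  # periodic signal, 8 cycles per block

r = autocorrelation(x)
print(r[0])        # maximum at zero shift: N * mean square value = 128
print(r[N // 8])   # one signal period later the peak repeats
```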

Crosscorrelation
Cross correlation is a measure of the similarity between two different signals. It
therefore requires multiple channels. In terms of the time domain it is defined
as:

    Rxy(τ) = lim(T→∞) (1/T) ∫ x(t)·y(t+τ) dt        Eqn 3-4

As in the case of the autocorrelation function, the discrete cross correlation function Rxy(n) between two sampled signals x(n) and y(n) is calculated as

    Rxy(n) = F⁻¹{Sxy(k)},    k = 0...N-1,  n = 0...N-1        Eqn 3-5

with Sxy (k) being the discrete crosspower spectrum between the two signals.
Cross correlation indicates the similarity between two signals as a function of
the time shift. It is therefore useful in determining the time difference between
such signals.
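A sketch of Eqn 3-5 used for time-delay estimation (NumPy; the delay and block length are illustrative):

```python
import numpy as np

def crosscorrelation(x, y):
    """Discrete cross correlation per Eqn 3-5: the inverse FFT of the
    crosspower spectrum Sxy(k) = conj(X(k)) * Y(k) (circular)."""
    Sxy = np.conj(np.fft.fft(x)) * np.fft.fft(y)
    return np.real(np.fft.ifft(Sxy))

rng = np.random.default_rng(1)
N, delay = 512, 37
x = rng.standard_normal(N)
y = np.roll(x, delay)       # y is x delayed by 37 samples (circularly)

Rxy = crosscorrelation(x, y)
print(int(np.argmax(Rxy)))  # 37: the time shift between the two signals
```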


Histogram
The probability histogram q(j) describes the relative occurrence of specific
signal levels. Let the signal input range of a sampled signal x(n) be divided
into J classes. Each class j, j = 0...J-1, can be characterized by an average
value x_j and a class increment \Delta x.

    Figure 3-1  Histogram (the signal range is divided into a number of
    classes and the number of samples falling in each class is counted)

The probability histogram of a sampled signal x(n) can then be defined as,

    q(j) = \frac{1}{N} \sum_{n=0}^{N-1} k_x(n), \quad j = 0 \ldots J-1        Eqn 3-6

where

    k_x(n) = 1, \; if \; x_j - \frac{\Delta x}{2} \le x(n) < x_j + \frac{\Delta x}{2}
    k_x(n) = 0, \; otherwise

The maximum value of J is either the number of time samples (Time data) or
spectral lines in the block.

Probability Density
The probability density p(j) is a normalized representation of the
probability histogram q(j),

    p(j) = \frac{100 \, q(j)}{\Delta x}, \quad j = 0 \ldots J-1        Eqn 3-7

This function is expressed in percent per engineering unit.

Probability Distribution
The probability distribution d(j) gives the probability (in percent) that the
signal level is below a given value. This function is calculated from the
probability histogram q(j) given in equation 3-6.


    d(j) = 100 \sum_{i=0}^{j} q(i), \quad j = 0 \ldots J-1        Eqn 3-8
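The histogram, density and distribution functions of Eqns 3-6 to 3-8 can be sketched together with numpy. This is an illustrative implementation, not LMS code; the class edges are taken from the data range, and the names are assumptions.

```python
import numpy as np

def probability_functions(x, J=32):
    """Histogram q(j), density p(j) [%/EU] and distribution d(j) [%]
    in the spirit of Eqns 3-6 to 3-8."""
    N = len(x)
    counts, edges = np.histogram(x, bins=J)   # class counts over the signal range
    dx = edges[1] - edges[0]                  # class increment
    q = counts / N                            # relative occurrence, sums to 1
    p = 100.0 * q / dx                        # percent per engineering unit
    d = 100.0 * np.cumsum(q)                  # cumulative probability in percent
    return q, p, d

x = np.random.default_rng(1).standard_normal(10000)
q, p, d = probability_functions(x)
# q sums to 1 and the distribution ends at 100 %.
```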

3.2  Frequency domain functions


Spectrum
The instantaneous discrete frequency spectrum X(k) is defined as the discrete
Fourier transform of the instantaneous sampled time record.

    X(k) = F\{x(n)\}, \quad n = 0 \ldots N-1, \; k = 0 \ldots N-1        Eqn 3-9

The result of a frequency spectrum measurement is the ensemble average (here
denoted by the operator A) of a series of M instantaneous discrete frequency
spectra X_m(k), m = 0...M-1,

    X(k) = A_{m=0}^{M-1} \{X_m(k)\}, \quad k = 0 \ldots N-1        Eqn 3-10

Since only real valued time records are considered, the frequency spectrum
has a Hermitian symmetry,

    X(k) = X^*(-k) = X^*(N-k), \quad k = 0 \ldots N/2        Eqn 3-11

where X^* is the complex conjugate.


The number of spectral lines is equal to half the number of time samples.
The FFT algorithms produce a double sided Fourier transform which is
corrected to single-sided spectral quantities. Only the positive frequency
values are considered. These are then adapted according to the format
required. A Peak amplitude multiplies the result by a factor 2, so producing
the amplitude of the time signal in case of a sine wave. Rms amplitude
multiplies the result by \sqrt{2}.
As with time record averaging, the non-synchronous signals will average out.
This function is useful therefore in distinguishing a signal that is
contaminated by noise. When a trigger signal is available, the frequency
spectrum has the advantage over autopower spectrum averaging in that the
noise averages to zero, rather than to its mean square value.


Autopower Spectrum
The autopower spectrum is the squared magnitude of the frequency spectrum.
The discrete autopower spectrum S_{xx}(k) of a sampled time signal is defined
as the ensemble average of the squared magnitude of M instantaneous discrete
frequency spectra X_m(k),

    S_{xx}(k) = A_{m=0}^{M-1} \{X_m(k) \, X_m^*(k)\}, \quad k = 0 \ldots N-1        Eqn 3-12

where X^* is the complex conjugate.


Thus while the frequency spectrum is complex and carries phase information,
the autopower spectrum is real and contains no phase information.
Since only real valued time records are considered, the autopower spectrum is
symmetric with respect to zero frequency,

    S_{xx}(k) = S_{xx}(-k) = S_{xx}(N-k), \quad k = 0 \ldots N/2        Eqn 3-13

    Figure 3-2  Autopower spectra. For a sine of amplitude A, the double
    sided frequency spectrum X has lines of height A/2, the double sided
    autopower spectrum S_xx has lines of (A/2)^2, and the single sided (rms
    power) autopower spectrum G_xx has a single line of A^2/2.

Of this double sided spectrum, only the positive frequency values are
considered. In order to obtain a time signal power estimate, a summation of
the power spectrum values at the positive and negative frequencies must be
made, resulting in the so-called RMS Autopower spectrum G_{xx}(k),

    G_{xx}(k) = S_{xx}(k), \quad when \; k = 0
    G_{xx}(k) = 2 \, S_{xx}(k), \quad when \; k = 1 \ldots N/2 - 1        Eqn 3-14

The power spectrum values correspond to the Fourier coefficients resulting
from a double sided Fourier transform, but these values are corrected to
single-sided spectral quantities, expressed as RMS or as PEAK amplitude
values.


There are a number of formats in which autopower spectra are presented.
The Power Spectral Density normalizes the level with respect to the frequency
resolution. This overcomes differences that may arise from using a specific
Bandwidth. This is the standard way of measuring stationary broadband
signals.
For transient signals the Energy Spectral Density may be more interesting,
since this looks at the level of the energy rather than the average power
over the total acquisition time, and is obtained by multiplying the Power
Spectral Density by the measurement period.
The interrelationship of these autopower formats is shown in Table 3.1. The
parameters A and T are as illustrated in Figure 3-2, and \Delta F is the
frequency resolution. Examples of the different modes and units are shown
below.

    Amplitude mode | Amplitude format | Value other than DC line
    ---------------|------------------|-------------------------
    RMS            | Power            | A^2/2
    RMS            | Linear           | A/\sqrt{2}
    RMS            | PSD              | A^2/(2 \Delta F)
    RMS            | ESD              | A^2 T/(2 \Delta F)
    Peak           | Power            | A^2
    Peak           | Linear           | A
    Peak           | PSD              | A^2/\Delta F
    Peak           | ESD              | A^2 T/\Delta F

    Table 3.1 Autopower spectrum formats
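The interrelationship of the formats can be checked with a small numeric sketch for a sine of peak amplitude A. The numbers below follow the standard definitions of these quantities (power, PSD, ESD), not any LMS-specific code.

```python
# Autopower formats for a sine of peak amplitude A, with frequency
# resolution df = 1/T where T is the measurement period.
A, T = 2.0, 4.0
df = 1.0 / T

rms_power  = A**2 / 2             # RMS / Power
rms_linear = A / 2**0.5           # RMS / Linear (rms value of the sine)
rms_psd    = A**2 / (2 * df)      # RMS / PSD: power per unit bandwidth
rms_esd    = rms_psd * T          # RMS / ESD: PSD times the measurement period

peak_power = A**2                 # Peak / Power is twice the RMS power
```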

Crosspower spectrum
The cross power spectrum S_{xy} is a measure of the mutual power between two
signals at each frequency in the analysis band. It is the dual of the cross
correlation function.
It is defined as the following product -

    S_{xy}(k) = A_{m=0}^{M-1} \{X_m^*(k) \, Y_m(k)\}, \quad k = 0 \ldots N-1        Eqn 3-15

where X_m^*(k) is the complex conjugate of the instantaneous frequency
spectrum of the one time signal x(n), and Y_m(k) is the instantaneous
frequency spectrum of a related time signal y(n).

The crosspower spectrum contains information about both the magnitude and
phase of the signals. Its phase at any frequency is the relative phase
between the two signals, and as such it is useful in analyzing phase
relationships.
Since it is a product, it will have a high value when both signal levels are
high, and a low value when both signal levels are low. It is therefore an
indicator of major signal levels on both the input and output. Its use in
this respect should be treated with caution however, since a high value can
also arise from just the output level without indicating that the input is
the cause. The interdependence of input and output is revealed in the
coherence function, which is described in the following subsection.
The cross power spectrum is used in the calculation of frequency response
functions.
The Amplitude mode in which the crosspower spectrum is presented is as
described in the previous section on the Autopower spectrum. RMS and PEAK
values are considered.

Coherence
There are three types of coherence functions: the ordinary coherence, the
partial coherence and the virtual coherence.
Ordinary Coherence
The (squared) ordinary coherence between a signal x_i(n) and a signal x_j(n)
is defined by,

    \gamma_{ij}^2(k) = \frac{|S_{ij}(k)|^2}{S_{ii}(k) \, S_{jj}(k)}        Eqn 3-16

where S_{ij}(k) is the averaged crosspower, and S_{ii}(k) and S_{jj}(k) are
the averaged autopowers.
It is a ratio of the maximum energy in a combined output signal due to its
various components, and the total amount of energy in the output signal.
Coherence can be used as a measure of the power in one channel that is caused
by the power in another channel. As such it is useful in assessing the
accuracy of transfer function measurements. It does not however need to apply
to input and output, and can also be measured between shakers.


The coherence function can take values that range between 0 and 1. A high
value (near 1) indicates that the output is due almost entirely to the input
and you can feel confident in the frequency response function measurements. A
low value (near 0) indicates problems such as extraneous input signals not
being measured, noise, nonlinearities or time delays in the system.
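Eqn 3-16 only behaves as described when the auto- and crosspowers are ensemble averages over several records; with a single record the coherence is identically 1. A minimal block-averaged sketch (illustrative names, rectangular blocks, no window):

```python
import numpy as np

def ordinary_coherence(x, y, nblock=256):
    """Squared ordinary coherence (Eqn 3-16) from block-averaged spectra."""
    M = len(x) // nblock
    Sxx = Syy = Sxy = 0.0
    for m in range(M):
        X = np.fft.rfft(x[m * nblock:(m + 1) * nblock])
        Y = np.fft.rfft(y[m * nblock:(m + 1) * nblock])
        Sxx = Sxx + (X * np.conj(X)).real     # averaged autopowers
        Syy = Syy + (Y * np.conj(Y)).real
        Sxy = Sxy + np.conj(X) * Y            # averaged crosspower
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(2)
x = rng.standard_normal(256 * 64)
y = 0.5 * x + 0.1 * rng.standard_normal(x.size)   # linearly related plus noise
coh = ordinary_coherence(x, y)
# coh stays below 1 because of the added noise, but is high at all frequencies.
```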
Multiple coherence (used in the calculation of the measurement function FRF)
The multiple coherence function is the coefficient that describes, in the
frequency domain, the causal relationship between a single signal (an output
spectrum) and a set of other signals (the considered input spectra), as a
function of frequency and all considered references. It is the ratio of the
energy in an output signal caused by several input signals to the total
amount of energy in the output signal. It is used to verify the amount of
noise on the measurements, as all responses should be related to the applied
references (inputs).
The multiple coherence function between a single response spectrum Y(k) and a
set of reference spectra X_i(k) is calculated from

    \gamma_{y:x}^2(k) = 1 - \frac{S_{yy.n!}(k)}{S_{yy}(k)}        Eqn 3-17

where

    S_{yy}(k) is the autopower of response signal y(n)
    S_{yy.n!}(k) is the part of autopower S_{yy}(k) from which the
    contributions of all reference spectra X_i(k) have been eliminated
The value of the multiple coherence is always between 0 and 1.

Partial Coherence
The partial coherence is the ordinary coherence between conditioned signals.
Conditioned signals are those where the causal effects of other signals are
removed in a linear least squares sense.
To define the partial coherence, consider the signals x_1, ..., x_i, x_j, ...
The partial coherence between x_i and x_j, after eliminating the signals
x_1 ... x_g, is given by,

    \gamma_{p \, ij \cdot g}^2(k) = \frac{|S_{ij \cdot g}(k)|^2}{S_{ii \cdot g}(k) \, S_{jj \cdot g}(k)}        Eqn 3-18

with:

    S_{ii \cdot g}(k) = autopower of signal x_i without the influence of the
                        signals x_1 ... x_g
    S_{jj \cdot g}(k) = autopower of signal x_j without the influence of the
                        signals x_1 ... x_g
    S_{ij \cdot g}(k) = crosspower between signals x_i and x_j without the
                        influence of the signals x_1 ... x_g

The partial coherence can take values between 0 and 1.


Virtual Coherence
The virtual coherence is an ordinary coherence between a signal and a
principal component, which is discussed below. The virtual coherence is
calculated from,

    \gamma_{v \, ij}^2(k) = \frac{|S'_{ij}(k)|^2}{S'_{ii}(k) \, S_{jj}(k)}        Eqn 3-19

with:

    S'_{ii}(k) = autopower of principal component x'_i
    S'_{ij}(k) = crosspower between signal x_j and principal component x'_i

The value of the virtual coherence is always between 0 and 1. The sum of the
virtual coherences between any signal and all principal components is also in
the range [0,1].

Principal Component Spectra
Consider a set of signals x_1 ... x_n. Now assume that a set of perfectly
uncorrelated signals can be determined such that, by linear combinations,
they describe the original set of signals. These signals (indicated by
x'_1 ... x'_n) are called the principal components of the signals in the
original set. Note that the coherence between the principal components is
exactly 0, as they are, by definition, perfectly uncorrelated. The principal
components are in a sense the main independent mechanisms (sources)
observable in the signal set.
The principal components can be calculated either on the sampled time data or
on the corresponding spectra. The fundamental relations are,

    \{X(k)\} = [U] \{X'(k)\}
    \{X'(k)\} = [U]^h \{X(k)\}
    [U]^h [U] = [I]
    [S'_{xx}] = [U]^h [S_{xx}] [U]        Eqn 3-20

where

    [S'_{xx}] = diagonal matrix with the autopowers of the principal
                component spectra on the diagonal
    \{X'(k)\} = an uncorrelated set of principal component signals
    [U] = unitary transformation matrix

The major application of the principal component spectra is in determining
the number of uncorrelated mechanisms (sources) in a signal set. A well known
example is the diagnosis of multiple input excitation for multiple
input/multiple output FRF estimation.
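Because [S_xx] is Hermitian, the diagonalization in Eqn 3-20 amounts to an eigenvalue decomposition of the crosspower matrix at each frequency line. A small sketch (assumed, not LMS code) shows how two fully correlated channels collapse to a single principal component:

```python
import numpy as np

# Crosspower matrix [Sxx] at one frequency line for two channels that are
# driven by one and the same source: the matrix has rank 1.
Sxx = np.array([[2.0, 2.0],
                [2.0, 2.0]])

evals, U = np.linalg.eigh(Sxx)          # U plays the role of the unitary [U]
S_principal = np.sort(evals)[::-1]      # autopowers of the principal components
# Only one nonzero principal component: one independent mechanism (source).
```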

Frequency Response Function
The frequency response function (FRF) matrix [H(k)] expresses the frequency
domain relationship between the inputs and outputs of a linear time-invariant
system: the reference (input) spectra X(k) pass through the system H(k) to
produce the response (output) spectra Y(k).
If N_i is the number of system inputs and N_o the number of system outputs,
let \{X(k)\} be an N_i-vector with the system input signals and \{Y(k)\} an
N_o-vector with the system output signals. A frequency response function
matrix [H(k)] of size (N_o, N_i) can then be defined such that,

    \{Y(k)\} = [H(k)] \{X(k)\}        Eqn 3-21

The system described above is an ideal one where the output is related
directly to the input and there is no contamination by noise. This is not the
case in reality, and various estimators are used to estimate [H(k)] from the
measured input and output signals.
The H1 Estimator
The most commonly used one is the H1 estimator, which assumes that there is
no noise on the input and consequently that all the X measurements are
accurate.

Its noise model is Y = HX + N, with the noise N entering on the output only.
It minimizes the noise on the output in a least squares sense. In this case
the transfer function is given by -

    [H_1(k)] = [S_{yx}(k)] \, [S_{xx}(k)]^{-1}        Eqn 3-22

This estimator tends to give an underestimate of the FRF if there is noise on
the input. H1 estimates the anti-resonances better than the resonances. Best
results are obtained with this estimator when the inputs are uncorrelated.

The H2 Estimator
Alternatively, the H2 estimator can be used. This assumes that there is no
noise on the output and consequently that all the Y measurements are
accurate. Its noise model is Y = H(X - M), with the noise M entering on the
input only.
It minimizes the noise on the input in a least squares sense, and in this
case the transfer function is given by -

    [H_2(k)] = [S_{yy}(k)] \, [S_{yx}(k)]^{-1}        Eqn 3-23

This estimator tends to give an overestimate of the FRF if there is noise on
the output. This estimator estimates the resonances better than the
anti-resonances.
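For a single input and single output, the two estimators reduce to scalar ratios of averaged powers. A sketch under that assumption (illustrative helper, rectangular blocks, no window) is:

```python
import numpy as np

def frf_h1_h2(x, y, nblock=512):
    """Single-input/single-output H1 and H2 FRF estimates (Eqns 3-22, 3-23)."""
    M = len(x) // nblock
    Sxx = Syy = Syx = 0.0
    for m in range(M):
        X = np.fft.rfft(x[m * nblock:(m + 1) * nblock])
        Y = np.fft.rfft(y[m * nblock:(m + 1) * nblock])
        Sxx = Sxx + (X * np.conj(X)).real
        Syy = Syy + (Y * np.conj(Y)).real
        Syx = Syx + Y * np.conj(X)       # crosspower, output against input
    H1 = Syx / Sxx                       # assumes a noise-free input
    H2 = Syy / np.conj(Syx)              # assumes a noise-free output
    return H1, H2

x = np.random.default_rng(3).standard_normal(512 * 32)
y = 3.0 * x                              # ideal gain-3 system, no noise
H1, H2 = frf_h1_h2(x, y)
# Without noise the two estimates coincide with the true FRF.
```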


Note!
This estimator can only be implemented in the case of a single output.

The Hv Estimator
Finally, with the Hv estimator, [H(k)] is calculated from the eigenvector
corresponding to the smallest eigenvalue of a matrix [S_{xxy}]:

    [S_{xxy}] = \begin{bmatrix} [S_{xx}] & [S_{xy}] \\ [S_{yx}] & [S_{yy}] \end{bmatrix}        Eqn 3-24

This estimator minimizes the global noise contribution in a total least
squares sense; its noise model is Y - N = H(X - M), with noise allowed on
both the input and the output. When using this estimator, the partitioning of
the noise over the input and output signals can be scaled.
This estimator provides the best overall estimate of the frequency response
function. It approximates the H2 estimator at the resonances and the H1
estimator at the anti-resonances. It does however require more computational
time than the other two.
Frequency response functions depend on there being at least one reference
channel and one response channel.

Impulse Response
The impulse response (IR) function matrix [h(t)] expresses the time domain
relationship between the inputs and outputs of a linear system. This
relationship takes the form of a convolution integral,

    y(t) = \int_{-\infty}^{\infty} h(\tau) \, x(t - \tau) \, d\tau        Eqn 3-25

[h(t)] is calculated using the inverse Fourier transform of the frequency
response function as shown below -

    [h(t)] = F^{-1}\{[H(k)]\}        Eqn 3-26

Impulse response functions depend on there being at least one reference
channel and one response channel.
The FRF estimators (H1, H2 and Hv) are as described above.
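The inverse-transform step of Eqn 3-26 can be sketched for a known analytic FRF rather than a measured one. The first-order low-pass system, the sampling parameters and the fs scaling below are all assumptions made for the illustration:

```python
import numpy as np

# Impulse response from a sampled FRF (Eqn 3-26 sketch). H(k) here is the
# analytic FRF of a first-order low-pass with time constant tau, sampled on
# the rfft frequency grid of an N-point measurement at fs Hz.
N, fs, tau = 1024, 1024.0, 0.05
f = np.fft.rfftfreq(N, d=1.0 / fs)
H = 1.0 / (1.0 + 2j * np.pi * f * tau)

h = np.fft.irfft(H, n=N) * fs          # scale by fs to approximate h(t)
t = np.arange(N) / fs
# h(t) approximates the analytic response (1/tau) * exp(-t/tau); the sum of
# h(t) dt recovers the DC gain H(0) = 1.
```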

3.3  Composite functions
The functions described in this section represent functions that can be
acquired or processed during a Signature analysis. Since this type of
analysis is intended to examine the evolution of signals as a function of a
changing environment (e.g. rpm, time, ...), there need to be functions that
express this evolution. These are called composite functions, as they are
derived from the `basic' measurement functions described in the previous
section, for different environmental conditions.

Overall level (OA)
This function describes the evolution of the total energy in the measured
signal. As such it is always expressed as a frequency spectrum rms value. It
is available with all basic measurement functions. Energy correction is
applied to this function.

ANSI 1.4 time based OA level calculation
The time signal is exponentially averaged to calculate the Overall level over
a particular bandwidth. An exponential weighting factor e^{-\Delta t / \tau}
is used, where \Delta t is the sample period of the signal and \tau is a time
constant. The value of \tau depends on the type of signal, and three
standardized values are supplied:

    \tau = 35 ms for impulse (peaky) signals
    \tau = 125 ms for fast changing signals
    \tau = 1000 ms for slow changing signals

When the signal contains spikes and is therefore defined as "impulse", an
additional peak detector mechanism is implemented. In this case the signal is
first averaged using the 35 ms averaging time constant and then peaks are
detected using a decay rate of 1500 ms.
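The exponential averaging itself amounts to a single-pole recursive filter on the squared signal. A sketch under that interpretation (the function name and recursion form are assumptions, not the LMS implementation):

```python
import numpy as np

def exponential_overall_level(x2, dt, tau=0.125):
    """Exponentially averaged mean-square level with weighting exp(-dt/tau)."""
    alpha = np.exp(-dt / tau)            # per-sample forgetting factor
    out = np.empty_like(x2)
    acc = x2[0]
    for i, v in enumerate(x2):
        acc = alpha * acc + (1.0 - alpha) * v   # recursive exponential average
        out[i] = acc
    return out

# Constant-power input: the averaged level settles at that power.
x2 = np.ones(5000)                       # squared signal samples
lvl = exponential_overall_level(x2, dt=1 / 4096, tau=0.125)
```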

Frequency section
This function describes the evolution of the energy of the measured signal
over the rpm range in a specified frequency band. It is always expressed as
an Rms frequency spectrum and is available only when the basic measurement
function is a frequency domain function.
The frequency section is calculated by integrating over a Bandwidth around
the center frequency value.

A section band is defined by its Lower bandvalue, Center frequency and Upper
bandvalue.
The center frequency is the frequency at which the section will be calculated
and is specified by the Center parameter. The Lower bandvalue and the Upper
bandvalue are given by

    Center frequency +/- (Bandwidth/2)

The Bandwidth is determined by the Band mode parameter. Possible ways in
which to express the Bandwidth are:

    - a fixed frequency range
    - a fixed number of spectral lines (the lines closest to the exact
      frequency value are used)
    - a percentage of the selected center frequency

These options are illustrated in a figure plotting the band limits against
rpm: with Band mode = frequency or Band mode = lines the bandwidth \Delta f
is constant, while with Band mode = % the ratio \Delta f / f_c is constant.

Order sections
This function describes the evolution of the energy of the measured signal in
a specified `order' band. Orders are introduced in chapter 2.3, in the
chapter on types of testing. An `order' band is a frequency band whose center
frequency changes as a function of the measurement environment or tracking
parameter. It is necessary therefore that the tracking parameter be a
`frequency' type of parameter (e.g. rotation speed in rpm). An order is
nothing other than a multiple of this basic tracking parameter. The evolution
of the energy in a specified order band is expressed as a function of the
measured rpm. Through post processing it is also possible to examine it in
terms of measured time or frequency.
Possible means of defining the span for integration are:

    - a fixed frequency range
    - a fixed number of spectral lines (the lines closest to the exact value
      are used)
    - a fixed order Bandwidth
    - a percentage of the selected order value

These options are illustrated in a figure plotting the band limits against
rpm: with Band mode = frequency or Band mode = lines, \Delta f is constant;
with Band mode = order, the order O is constant and f = constant . rpm; with
Band mode = %, the order is constant and the order bandwidth of order i is
Bandwidth (%) . i.

Octave sections
An octave section represents the summation of values over octave bands. The
center frequencies of the bands are defined in the ISO 266 norm. Possible
octave bands are 1/1, 1/2, 1/3, 1/12 and 1/24 octaves.


3.4  Units
To ensure consistency in the manipulation of data, LMS software always
operates with an internal set of reference units. The physical quantities
with a canonical dimension of length, angle, mass, time, temperature,
current, and light each have a corresponding reference unit as listed below:
    Canonical dimension | Abbreviation | Reference unit | Abbreviation
    --------------------|--------------|----------------|-------------
    length              | le           | meter          | m
    angle               | an           | radian         | rad
    mass                | ma           | kilogram       | kg
    time                | ti           | second         | s
    current             | cu           | Ampere         | A
    temperature         | te           | degree Kelvin  | K
    light               | li           | candela        | cd

    Table 3.2 Reference units

This means that all data in either the internal data structures of the LMS
software or the database is stored in these units. A physical quantity with a
dimension that is a combination of the above canonical dimensions will be
allocated a unit in the internal unit system that is a combination of the
corresponding reference units.
For example, a quantity with dimension of acceleration (length/time^2) will
have a unit that is the reference unit of length divided by the reference
unit of time squared (m/s^2).

3.5  Rms calculations
This section describes the ways in which rms calculations are performed for
different measurement functions. RMS stands for Root Mean Square and is a
measure of the energy in a signal.
If the data is amplitude corrected, then it is automatically converted to energy
correction for the calculations.

Time and Impulse records
When dealing with time samples, a certain number of samples must be analyzed
in order to obtain a measure of the nature of, and the energy in, the signal.
This is done by squaring the values, summing them, and then taking an average
(mean) to remove the influence of the number of samples. Then the square root
of the mean is taken to arrive at the rms value. So for a range of samples
y_i starting at sample 0 and ending at sample k,

    Rms = \sqrt{\frac{1}{k+1} \sum_{i=0}^{k} y_i^2}        Eqn 3-27

Taking the example of a sine wave of amplitude A, the rms value is
A / \sqrt{2}.

Frequency spectra
The frequency spectrum is first converted to a double sided amplitude
spectrum: a single sided line of amplitude 2A becomes lines of amplitude A at
the corresponding positive and negative frequencies.
The frequency range over which you want the rms value computed is defined by
the upper and lower values f1 and f2. All lines completely within the range
will be included in the calculations (A_i, where i takes values of 1 to k-1).
For the lines at the beginning and the end of the range (A_0 and A_k), half
of each value is taken.
The rms value is then computed using the following formula

    Rms = \sqrt{2 \left( \frac{A_0^2}{2} + \sum_{i=1}^{k-1} A_i^2 + \frac{A_k^2}{2} \right)}        Eqn 3-28
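Eqn 3-28 can be sketched directly; a sanity check is that a single sine of peak amplitude 1, appearing as a double sided line of 0.5 inside the band, must give an rms of 1/\sqrt{2}. The helper name is illustrative:

```python
import numpy as np

def rms_from_spectrum(A):
    """RMS over a band of double sided amplitude lines A[0..k] (Eqn 3-28).
    The first and last lines enter with half weight; the factor 2 restores
    the negative-frequency half of the double sided spectrum."""
    s = A[0] ** 2 / 2 + np.sum(A[1:-1] ** 2) + A[-1] ** 2 / 2
    return np.sqrt(2 * s)

# One sine of peak amplitude 1: a double sided line of 0.5 inside the band.
A = np.array([0.0, 0.5, 0.0])
rms = rms_from_spectrum(A)
```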

Autopower and crosspower spectra
These spectra are first converted to a double sided power spectrum. The
number of lines (k) included in the calculations depends on the defined
frequency span. As was the case for the frequency spectrum shown above, the
values for the first and last lines (A_0 and A_k) are halved. The rms value
is then computed using the following formula

    Rms = \sqrt{2 \left( \frac{A_0}{2} + \sum_{i=1}^{k-1} A_i + \frac{A_k}{2} \right)}        Eqn 3-29

FRF, Impedance, Transmissibility and Transmittance
Rms values for these types of functions are not well defined. The Lms
interpretation for an FRF is to find the rms response when a force of
amplitude 1 is applied. A force of amplitude 1 has an rms value F_rms equal
to

    F_{rms} = 1 \cdot \sqrt{k}        Eqn 3-30

where k is the number of samples in the range.
The rms of the response, X_rms, is derived from equation 3-28.
The rms of the FRF, H_rms, is therefore

    H_{rms} = \frac{X_{rms}}{F_{rms}} = \sqrt{\frac{1}{k} \cdot 2 \left( \frac{A_0^2}{2} + \sum_{i=1}^{k-1} A_i^2 + \frac{A_k^2}{2} \right)}        Eqn 3-31

Sound power, sound intensity (active and reactive), SFTVI and SFUI
SFTVI (sound field temporal uniformity indicator) and SFUI (sound field
uniformity indicator) are ISO defined functions for acoustic measurements and
analysis. The rms computes the total energy in a band, so since these
functions are already a measure of energy, the values of the spectral lines
can simply be added.

    Rms = \frac{A_0}{2} + \sum_{i=1}^{k-1} A_i + \frac{A_k}{2}        Eqn 3-32

Particle velocity (active and reactive)
Although the particle velocity is basically a frequency spectrum, since it is
calculated as a single sided spectrum it differs by a factor of 2 from
equation 3-28.

    Rms = \sqrt{\frac{A_0^2}{2} + \sum_{i=1}^{k-1} A_i^2 + \frac{A_k^2}{2}}        Eqn 3-33

Theory and Background

Part II
Acoustics and Sound Quality

Chapter 4  Terminology and definitions . . . . . . . . . . . . . .  55
Chapter 5  Acoustic measurements . . . . . . . . . . . . . . . . .  67
Chapter 6  Sound quality . . . . . . . . . . . . . . . . . . . . .  83
Chapter 7  Sound metrics . . . . . . . . . . . . . . . . . . . . .  99
Chapter 8  Acoustic holography . . . . . . . . . . . . . . . . . . 117

Chapter 4
Terminology and definitions

This chapter contains definitions of basic terms associated with acoustics:

    - Acoustic quantities
    - Reference conditions
    - Octave bands
    - Acoustic weighting

4.1  Acoustic quantities

Sound power (P)
The amount of noise emitted from a source depends on the sound power of that
source. The sound power is a basic characteristic of a noise source,
providing an absolute parameter that can be used for comparison. This differs
from the sound pressure levels it gives rise to, which depend on a number of
external factors.
The total sound power P_I of a source surrounded by N measurement surfaces is
given by

    P_I = \sum_{i=1}^{N} P_i        Eqn 4-1

The power of a sound source is expressed in Joules per second, or Watts.
The sound power can also be represented by the letter W.

Sound pressure
The effect of the sound power emanating from a source is the level of sound
pressure. Sound pressure is what the ear detects as noise, the level of which
depends to a great extent on the acoustic environment and the distance from
the source. The sound pressure is defined as the difference between the
actual and ambient pressure.
This is a scalar quantity that can be derived from measured sound pressure
spectra or autopower spectra, either at one specific frequency (spectral
line) or integrated over a certain frequency band.
Sound pressure measurements can be obtained at each measurement point, and
are independent of the measurement direction (X, Y, or Z). The units are
Pascal (Pa) or N/m^2.

Sound (Acoustic) intensity
An important quantity to be derived from the sound power is sound intensity.
The sound intensity of a sound wave describes the direction and net flow of
acoustic energy through an area.

    Total power \; P_I = \oint \vec{I} \cdot d\vec{S}        Eqn 4-2

Sound intensity is a vector, orientated in 3D space, with the fundamental
units of W/m^2 (power transmitted per unit area).
The area is represented as a vector in 3D space with a length equal to the
amount of geometrical area, and a direction perpendicular to the measurement
surface. As such, the vector product I_i . S_i represents the flow of
acoustic energy in a direction perpendicular to a surface. This is the usual
direction in which intensity is measured. If the acoustic intensity vector
lies within the surface itself, the transmitted sound power equals zero.
Intensity is also the time-averaged rate of energy flow per unit area.

    I = \frac{1}{T} \int_T I(t) \, dt        Eqn 4-3

As such, if the energy is flowing back and forth, resulting in zero net
energy flow, then there will be zero intensity.
Normal sound intensity
This is the component of the sound intensity vector normal to the measurement
surface.

Free field
This term refers to an idealized situation where the sound flows directly out
from the source, and both pressure and intensity levels drop with increasing
distance from the source according to the inverse square law.

Diffuse field
In a diffuse field the sound is reflected many times, such that the net
intensity can be zero.

Particle velocity
Pressure variations give rise to movements of the air particles. It is the
product of pressure and particle velocity that results in the intensity. In a
medium with mean flow therefore

    I = p \cdot v        Eqn 4-4

where

    p = sound pressure (Pa)
    v = particle velocity (m/s)

The particle velocity of a medium is defined as the average velocity of a
volume element of that medium. This volume element must be large enough to
contain millions of molecules, so that it may be thought of as a continuous
fluid, yet small enough that acoustic variables such as pressure, density and
velocity may be considered to be constant throughout the volume element.
Equation 4-4 can be used to compute the particle velocity once the acoustic
intensity and the sound pressure have been measured. Particle velocity is a
vector in 3D space expressed in units of m/s.
In a diffuse field the pressure and velocity phase vary at random, giving
rise to a net intensity of zero.
Under certain circumstances (i.e. plane progressive waves in a free field),
the particle velocity can also be calculated from the pressure and the
impedance of the medium (\rho c).

    v = \frac{p_e}{\rho c}        Eqn 4-5

where

    p_e = effective sound pressure (Pa)
    \rho = mass density of the medium (kg/m^3)
    c = velocity of sound in the medium (m/s)

By combining equations 4-4 and 4-5 it can be seen that in a free field a
relationship exists enabling the acoustic intensity to be determined from the
effective pressure of a plane wave.

    |I| = \frac{p_e^2}{\rho c}        Eqn 4-6

Acoustic impedance (Z)
This is defined as the product of the mass density of a medium and the
velocity of sound in that medium.

    Z = \rho \cdot c        Eqn 4-7

where

    \rho = mass density (kg/m^3)
    c = velocity of sound in the medium (m/s)

4.2  Reference conditions
It is common practice to define standards for acoustic intensity, pressure,
etc. at an air temperature of 20 °C and a standard atmospheric pressure of
1013 hPa. Under these conditions:

    the density of air \rho_0 = 1.21 kg/m^3
    the velocity of sound in air c = 343 m/s
    the acoustic impedance \rho_0 c = 415 rayls (kg/m^2 s)

dB scale
Since the range of pressure levels that can be detected is large, and the ear
responds logarithmically to a stimulus, it is practical to express acoustic
parameters as a logarithmic ratio of a measured value to a reference value.
Hence the use of the decibel scales, for which the reference values for
intensity, pressure and power are defined below.

Sound power level L_W
This is defined as the logarithmic measure of the absolute (unsigned) value
of the sound power generated by a source.

    L_W = 10 \log_{10} \frac{|P_I|}{P_0} \; dB        Eqn 4-8

The reference sound power is P_0 = 10^{-12} W.

Particle velocity level L_v
This is defined as the logarithmic measure of the particle velocity.

    L_v = 20 \log_{10} \frac{v}{v_0} \; dB        Eqn 4-9

The reference particle velocity is v_0 = 50 \cdot 10^{-9} m/s.


Sound (Acoustic) intensity level L_I
This is the logarithmic measure of the absolute value of the intensity
vector.

    L_I = 10 \log_{10} \frac{|I|}{I_0} \; dB        Eqn 4-10

The commonly used reference standard intensity for airborne sounds is
I_0 = 10^{-12} W/m^2.

Normal acoustic intensity level L_{In}
This is the logarithmic measure of the absolute value of the normal intensity
vector.

    L_{In} = 10 \log_{10} \frac{|I_n|}{I_0} \; dB        Eqn 4-11

Sound pressure level L_p
This is defined as

    L_p = 10 \log_{10} \left( \frac{p}{p_0} \right)^2 = 20 \log_{10} \frac{p}{p_0} \; dB        Eqn 4-12

where p is the rms value of the acoustic pressure (in Pa).
The above reference values for intensity and power correspond to an effective
rms reference pressure of

    p_0 = 0.00002 Pa = 20 µPa

This sound pressure level of 20 µPa is known as the standardized normal
hearing threshold, and represents the quietest sound at 1000 Hz that can be
heard by the average person.
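Eqn 4-12 with the 20 micropascal reference can be sketched in a couple of lines (the function name is illustrative):

```python
import numpy as np

P0 = 20e-6                               # reference pressure: 20 micropascal

def spl_db(p_rms):
    """Sound pressure level in dB (Eqn 4-12)."""
    return 20.0 * np.log10(p_rms / P0)

threshold = spl_db(20e-6)                # the hearing threshold maps to 0 dB
one_pascal = spl_db(1.0)                 # 1 Pa rms is roughly 94 dB SPL
```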

4.3  Octave bands
Complete (1/1) octave bands represent frequency bands where the center fre
quency of one band is approximately twice (according to standardized values)
that of the previous one.

f_c,i+1 = 2 · f_c,i

Partial octave bands (1/3, 1/12, 1/24 ...) represent frequency bands where

f_c,i+1 = 2^(1/x) · f_c,i    where x = 3, 12, 24 ...

The lower band limit of a 1/x octave band is f_c · 2^(-1/(2x))
The upper band limit of a 1/x octave band is f_c · 2^(+1/(2x))

The bands defined by these formulas are termed the `natural' bands. The international norm ISO 266 defines normalized center frequencies for octave bands and the values for 1/1, 1/2 and 1/3 octave bands are listed in table 4.1. Natural frequencies are used for calculations but the normalized frequencies are used for annotation. Octave bands above or below the normalized values are annotated with the natural frequencies.
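The natural band formulas above can be sketched as follows (illustrative code, not part of the original text):

```python
def band_limits(fc, x):
    """Lower and upper limits of a 1/x octave band with center fc (Hz)."""
    return fc * 2 ** (-1 / (2 * x)), fc * 2 ** (1 / (2 * x))

def next_center(fc, x):
    """Center frequency of the following 1/x octave band."""
    return fc * 2 ** (1 / x)

lo, hi = band_limits(1000.0, 1)    # 1/1 octave band around 1 kHz
print(round(lo, 1), round(hi, 1))  # 707.1 1414.2
```

Note that the 1/3 octave band following 1000 Hz has a natural center of about 1259.9 Hz, which is annotated with the normalized value 1250 Hz.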


Normalized       1/1   1/2   1/3
frequency (Hz)   oct   oct   oct
16                x     x     x
18
20                            x
22.4                    x
25                            x
28
31.5              x     x     x
35.5
40                            x
45                      x
50                            x
56
63                x     x     x
71
80                            x
90                      x
100                           x
112
125               x     x     x
140
160                           x
180                     x
200                           x
224
250               x     x     x
280
315                           x
355                     x
400                           x
450
500               x     x     x
560
630                           x
710                     x
800                           x
900
1000              x     x     x
1120
1250                          x
1400                    x
1600                          x
1800
2000              x     x     x
2240
2500                          x
2800                    x
3150                          x
3550
4000              x     x     x
4500
5000                          x
5600                    x
6300                          x
7100
8000              x     x     x
9000
10000                         x
11200                   x
12500                         x
14000
16000             x     x     x

Table 4.1    Normalized frequencies (Hz)

4.4

Acoustic weighting

Frequency weighting
Frequency weighting

The human ear has nonlinear, frequency dependent characteristics, which means that the sensation of loudness cannot be perfectly described by the sound pressure level or its spectrum. To derive an experienced loudness level from the sound pressure signal, the frequency spectrum of the sound pressure signal is multiplied by a frequency weighting function. These weighting functions are based on experimentally determined equal loudness contours which express the loudness sensation as a function of sound pressure level and frequency. A number of equal loudness contours are shown in Figure 4-1. The loudness level is expressed in `Phons'. 1 kHz tones are used as the reference, which means that for a 1000 Hz tone, the Phon value corresponds to the dB sound pressure level.

Figure 4-1    Equal loudness perception contours

A, B and C weighting for acoustic signals. A-weighting modifies the frequency response such that it follows approximately the equal loudness curve of 40 phons and is applied to signals with a sound pressure level of 40 dB. The A-weighted sound level has been shown to correlate extremely well with subjective responses. The B and C weighting follow more or less the 70 and 100 phon contours respectively. These contours can be seen in Figure 4-2. The resulting value is then denoted by LA, LB, ... with unit dBA, dBB ...
Table 4.2 (overleaf) shows the relative response attenuations or amplifications of the 3 types of filters. In between the listed normal frequencies, these filter spectra are linearly interpolated on a log-log scale. Figure 4-2 shows the same information in a graphical form.
Figure 4-2    Standardized weighting curves

1/3 Octave band weighting values

Center frequency (Hz)   A weighting (dB)   B weighting (dB)   C weighting (dB)
16                      -56.7              -28.5              -8.5
20                      -50.5              -24.2              -6.2
25                      -44.7              -20.4              -4.4
31.5                    -39.4              -17.1              -3.0
40                      -34.6              -14.2              -2.0
50                      -30.2              -11.6              -1.3
63                      -26.2              -9.3               -0.8
80                      -22.5              -7.4               -0.5
100                     -19.1              -5.6               -0.3
125                     -16.1              -4.2               -0.2
160                     -13.4              -3.0               -0.1
200                     -10.9              -2.0                0
250                     -8.6               -1.3                0
315                     -6.6               -0.8                0
400                     -4.8               -0.5                0
500                     -3.2               -0.3                0
630                     -1.9               -0.1                0
800                     -0.8                0                  0
1000                     0                  0                  0
1250                    +0.6                0                  0
1600                    +1.0                0                 -0.1
2000                    +1.2               -0.1               -0.2
2500                    +1.3               -0.2               -0.3
3150                    +1.2               -0.4               -0.5
4000                    +1.0               -0.7               -0.8
5000                    +0.5               -1.2               -1.3
6300                    -0.1               -1.9               -2.0
8000                    -1.1               -2.9               -3.0
10000                   -2.5               -4.3               -4.4
12500                   -4.3               -6.1               -6.2
16000                   -6.6               -8.4               -8.5
20000                   -9.3               -11.1              -11.2

Table 4.2    Weighting of acoustic signals
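The A-weighting values of Table 4.2 can also be generated analytically. The closed-form filter definition below comes from the IEC 61672 sound level meter standard, not from this book, but it reproduces the tabulated values closely (illustrative sketch):

```python
import math

def a_weight(f):
    """A-weighting relative response in dB at frequency f (Hz),
    using the IEC 61672 analytic definition (an assumption here,
    not quoted from the text)."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00  # offset makes A(1000 Hz) = 0

print(round(a_weight(100), 1))   # -19.1, matching Table 4.2
```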

Chapter 5

Acoustic measurements

This chapter discusses the measurement of acoustic quantities. In addition it describes the calculation of acoustic quantities based on measured ones and other parameters used in these calculations.

  Measured acoustic functions
  Calculation of acoustic quantities
  Acoustic measurement surfaces
  Frequency bands
  Field indicators

5.1

Acoustic measurement functions


This section describes the acoustic quantities that can be measured. From measured quantities it is possible to derive further quantities as described in section 5.2.

Sound pressure level

This is defined by equation 4-12 and can be measured using a single channel. It will result in an averaged pressure or autopower spectrum.
For measurements in the free field, and in the direction of propagation, the normal sound intensity level will be equal to the sound pressure level. In practice, when not working under free field conditions, the sound intensity level will be lower than the sound pressure level.

Sound Intensity

The sound intensity in a specified direction at a point is the average rate of sound energy transmitted in the specified direction through a unit area normal to this direction at the point considered.
In most situations it is the component of the sound intensity vector normal to the measurement surface, I_n, which is measured.
In order to determine sound intensity you can measure both the instantaneous pressure and the corresponding particle velocity simultaneously. In practice, the sound pressure can be obtained directly using a microphone. The instantaneous particle velocity can be calculated from the pressure gradient between two closely spaced microphones. A sound intensity probe can therefore consist of two closely spaced pressure microphones which measure both the sound pressure and the pressure gradient between the microphones.
For frequency domain calculations, it can be shown that the sound intensity can be calculated from the imaginary part of the crosspower between the two microphone signals. The following formula is used

I = Imag( S_1,2 ) / (2·π·f·ρ·d)    Eqn 5-1

Where S_1,2 is the double sided crosspower between the two microphone signals, f is the signal frequency, d is the microphone distance and ρ is the air density.
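Eqn 5-1 can be sketched numerically. Crosspower sign and scaling conventions differ between texts, so the implementation below is an illustrative assumption (one-sided crosspower, sign chosen so that propagation from microphone 1 to microphone 2 gives positive intensity), checked against the plane-wave relation |I| = p_rms^2 / (ρ·c):

```python
import numpy as np

def pp_intensity(p1, p2, fs, d, rho=1.21):
    """Two-microphone (p-p) sound intensity estimate per frequency bin.
    p1, p2: pressure time records (Pa), fs: sampling rate (Hz),
    d: microphone spacing (m), rho: air density (kg/m^3)."""
    n = len(p1)
    P1 = np.fft.rfft(p1)
    P2 = np.fft.rfft(p2)
    S12 = 2.0 * P1 * np.conj(P2) / n ** 2   # one-sided crosspower
    f = np.fft.rfftfreq(n, 1.0 / fs)
    f[0] = np.inf                           # suppress the DC bin
    return f, np.imag(S12) / (2 * np.pi * f * rho * d)

# Plane-wave check: a 500 Hz wave delayed by d/c at the far microphone
fs, n, f0, d, c, rho = 16384, 16384, 500.0, 0.012, 343.0, 1.21
t = np.arange(n) / fs
p1 = np.sin(2 * np.pi * f0 * t)
p2 = np.sin(2 * np.pi * f0 * (t - d / c))
f, I = pp_intensity(p1, p2, fs, d, rho)
I_est = I[int(f0 * n / fs)]     # bin at 500 Hz
I_exact = 0.5 / (rho * c)       # p_rms^2 / (rho*c) for a 1 Pa amplitude
```

The small remaining deviation is the finite-difference bias of the probe, which grows as the microphone spacing becomes a larger fraction of the wavelength.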


For this function, all channels are processed as channel pairs, each pair consisting of two consecutive channels. It therefore requires that an even number of channels is defined.
The reactive sound intensity (non propagating energy) is calculated as

I_reactive = ( S_1,1 − S_2,2 ) / (2·π·f·ρ·d)    Eqn 5-2

For the idealized case of measurements in the free field (free space without reflections) and in the direction of propagation, the reactive intensity is zero.

Residual intensity

This is defined as

RI = L_p − δ_pI0    Eqn 5-3

where L_p is the measured sound pressure level and δ_pI0 is the pressure residual intensity index. To calculate the residual intensity therefore it is necessary to have the pressure residual intensity index available. This is described below.
Intensity measurements can be made in a sound field where the sound intensity level is in the range

L_p − δ_pI0 ≤ L_I ≤ L_p    Eqn 5-4

L_p is defined in equation 4-12, and L_I in equation 4-10. In a free field the pressure and intensity levels are the same, whereas in all other cases, the measured intensity will be less than the pressure. The residual intensity (L_p − δ_pI0) represents the lowest intensity level which can be detected by the system for the given sound pressure level.


Pressure residual intensity index

For the calculation of the pressure residual intensity index of a sound intensity probe, it is required to place the intensity probe in a sound field such that the sound pressure is uniform over the volume. In these conditions there will be no difference between the two signals at both microphones, and hence the measured intensity should be zero. However, the phase mismatch between the two measuring channels causes a small difference between the two signals making it appear as if there is some intensity. The intensity detected can be likened to a noise floor below which measurements cannot be made. This intensity lower limit is not fixed but varies with the pressure level. What is fixed, is the difference between the pressure and the intensity level when the same signal is fed to both channels. It is this which is defined as the pressure residual intensity index. Mathematically therefore the pressure residual intensity index is

δ_pI0 = (L_p − L_In) dB    Eqn 5-5

where L_p is the sound pressure level and L_In is the normal sound intensity level.

Dynamic capability index

In order to ensure a particular level of accuracy for the measurements it is necessary to increase the measurement floor defined by the residual intensity level by an amount termed the `bias error factor' (K)

L_p − δ_pI0 + K dB ≤ L_I ≤ L_p    Eqn 5-6

Figure 5-1    Dynamic capability index L_d

The `bias error factor' (K) is selected according to the grade of accuracy required from the table below.

Grade of accuracy          Bias error factor K (dB)
Precision    (class 1)     10
Engineering  (class 2)     10
Survey       (class 3)     7

Table 5.1    Bias error factor (K)

The difference between the pressure residual intensity index and K therefore represents the range in which the probe should be operating and is termed the `dynamic capability index' (Ld) for the probe.

L_d = (δ_pI0 − K) dB    Eqn 5-7

5.2

Calculation of acoustic quantities


Acoustic functions can be derived from ones that have been measured. This section describes these analysis functions and Table 5.2 gives an overview of them and the measured quantities required for their derivation.
Calculations will be made over specific frequency bands. This subject is discussed in section 5.4. Some functions are computed over a known area. The subject of defining surfaces (meshes) for acoustic functions is discussed in section 5.3.

Effective sound pressure

The effective sound pressure pe or prms may be computed from a measured sound pressure spectrum or from its autopower spectrum.

p_e^2 = 2 ∫[f1..f2] |p(f)|^2 df = 2 ∫[f1..f2] A(f) df    Eqn 5-8

Acoustic intensity

This is a vector quantity calculated directly from measured acoustic intensity functions.

I = ∫[f1..f2] I(f) df    Eqn 5-9

When intensity measurements are not available but sound pressure measurements are available, then the magnitude of the acoustic intensity can be computed from the effective sound pressure p_e and the acoustic impedance ρ·c

I = p_e^2 / (ρ0·c)    Eqn 5-10

but only under the assumption of plane progressive waves in a free field.
Sound power

This is calculated from the geometrical area S and the acoustic intensity component perpendicular to a surface

P = I_n · S    Eqn 5-11

Under certain circumstances, intensity can be assumed to be proportional to effective sound pressure, and then

P = ( p_e^2 / (ρ·c) ) · S    Eqn 5-12

Particle velocities

These can be calculated when both acoustic intensity and sound pressure data are available

v = I / p    Eqn 5-13

All the possible analysis functions are summarized in Table 5.2. (These are
based on the assumption of plane progressive waves in a free field.)
Acoustic quantity   Symbol   Required data                        Formula                   MKS units

Effective (RMS)     p_e      sound pressure spectrum p            sqrt( 2·Σ|p|^2 )          Pa or N/m^2
sound pressure               pressure autopower A                 sqrt( 2·Σ A )

Intensity           I        intensity                            ∫ I df                    W/m^2

Sound power         P        intensity and area                   I·S                       W
                             sound pressure spectrum and area     ( p_e^2 / ρ0·c )·S (1)
                             pressure autopower and area          ( p_e^2 / ρ0·c )·S (1)

Particle velocity   v        intensity and sound pressure         I / p                     m/s

(1) assuming plane progressive waves in a free field

Table 5.2    Overview of analysis functions for acoustic signals

5.3

Acoustic measurement surfaces


Acoustic measurements differ from other types of signals in that they are measured some distance away from the object rather than on the test structure itself. The measurement points are termed associated nodes, that are surrounded by a hypothetical measurement surface. An organized collection of measurement surfaces and nodes is termed a measurement mesh and there are ISO standards that define such meshes for particular measurement types.

Figure 5-2    Sound source, acoustic measurement mesh and nodes

Acoustic measurement meshes can be parallelepiped, cylindrical or spherical in shape.
Associated nodes on measurement meshes have a nodal orientation. This is always Cartesian, and the orientation of the +Z nodal coordinate system for a measurement defines the measurement direction.

Acoustic ISO standards

The ISO-3744 and ISO-3745 standards describe sound pressure measurements. The microphone positions are defined on a (hemi-)spherical or a parallelepiped measurement mesh. The possible dimensions of the measurement mesh depend on the characteristic distance of the reference surface. This reference surface is defined as the smallest rectangular box that encloses the noise source.


ISO-3744    Acoustics - Determination of sound power levels of noise sources - Engineering methods for free-field conditions over a reflecting plane.

ISO-3745    Acoustics - Determination of sound power levels of noise sources - Precision methods for anechoic and semi-anechoic rooms.

The ISO-9614-1 standard describes sound intensity measurements. In this case the microphone positions of the measurement meshes are not defined. The quality of the mesh has to be judged during measurements.
It describes a number of field indicators that allow a judgment of the accuracy of the measurements and the mesh.

5.4

Frequency bands
Whenever an acoustic quantity is integrated over a certain frequency band, the following formula applies

a = ∫[f1..f2] a(f) df    Eqn 5-14

The integration of a continuous function a(f) is replaced by a finite sum over the corresponding discrete samples:

a = (1/2)·a1 + Σ a_i + (1/2)·a2    (f1 < f_i < f2)    Eqn 5-15

where a1 = a(f1) and a2 = a(f2)

This integration takes into account the full value of all data samples between the two limits, and 50 % of the first and last sample. It can be obtained between any two measured frequency limits.
It is good practice to maintain the type of frequency band that was used in the acquisition of the data for the calculation. In fact data acquired in octave bands must remain in those bands for the analysis. The calculation of the field indicators also makes little sense unless the analysis bands correspond with the measurement bands.
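The half-weighted end-sample sum of Eqn 5-15 can be sketched as follows (illustrative code, not part of the original text):

```python
import numpy as np

def band_integrate(a, i1, i2):
    """Band value per Eqn 5-15: full weight for interior samples,
    half weight for the two samples at the band limits i1 and i2."""
    seg = a[i1:i2 + 1]
    return seg.sum() - 0.5 * (seg[0] + seg[-1])

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(band_integrate(a, 0, 4))   # 0.5*1 + 2 + 3 + 4 + 0.5*5 = 12.0
```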


5.5

Field indicators

When attempting to analyze the sound power being radiated from a noise source in situ, the international standard ISO 9614-1 lays out a number of measurement conditions which must be adhered to if the results are to be considered acceptable for this purpose. A number of criteria must be satisfied, based on the values of particular indicator functions, to ensure the requisite adequacy of the measurements and meshes. This section describes both the field indicators themselves and the criteria used to assess the results.

F1 Sound field temporal variability indicator

This gives the measure of temporal (or time) variability of the field. It is defined as follows

F1 = (1/Ī_n) · sqrt( (1/(M−1)) · Σ[k=1..M] (I_nk − Ī_n)^2 )    Eqn 5-16

Where Ī_n is the mean value of M short time averages of I_nk, defined in the following equation.

Ī_n = (1/M) · Σ[k=1..M] I_nk    Eqn 5-17

F2 Surface pressure-intensity indicator

In a free field where sound is only radiating out from a source, the pressure and intensity levels are equal in magnitude. In a diffuse or reactive field however, intensity can be low when the pressure is high. A lower measured intensity can also arise if the sound wave is incident at an angle to the probe since this also affects the phase change detected across the probe. The pressure-intensity indicator examines the difference between the pressure and the absolute values of intensity. This function can be determined on a point to point basis during the acquisition, but the function F2 described here represents the value averaged over all the measured surfaces.


F2 = L_p − L_|In|    Eqn 5-18

L_p is the surface sound pressure level defined as

L_p = 10 log10( (1/N) · Σ[i=1..N] (p_i / p_0)^2 )    Eqn 5-19

where i indicates the measurement surface and N is the total number of surfaces (of the local component).

L_|In| is the surface normal unsigned acoustic intensity level defined as

L_|In| = 10 log10( (1/N) · Σ[i=1..N] |I_ni| / I_0 )    Eqn 5-20

where |I_ni| is the absolute (unsigned) value of the normal intensity vector.
Note!

A large difference between intensity and pressure suggests that the probe is not well aligned or that you are operating in a diffuse field.
In order to calculate F2 it is necessary to have both intensity and autopower (or pressure) measurements for all points on the mesh.

F3 Negative partial power indicator

This indicator also examines the difference between measured intensity and pressure, but in this case the direction of the intensities is taken into account. Thus this function expresses the variation between intensities arising from the source under investigation (positive) and those being generated by extraneous sources (negative).

F3 = L_p − L_In    Eqn 5-21

L_p is the surface sound pressure level defined above.
L_In is the surface normal signed acoustic intensity level defined as

L_In = 10 log10( (1/N) · Σ[i=1..N] I_ni / I_0 )    Eqn 5-22

Note!

If the quantity Σ I_ni is negative, then the effect of extraneous sources is too great and the set of measurements does not satisfy the ISO requirements.
In order to calculate F3 it is necessary to have both intensity and autopower (or pressure) measurements for all points on the mesh.

F4 Nonuniformity indicator

This indicates the measure of spatial (or positional) variability that exists in the field. It can be compared with the statistical parameter standard deviation.

F4 = (1/Ī_n) · sqrt( (1/(N−1)) · Σ[i=1..N] (I_ni − Ī_n)^2 )    Eqn 5-23

Where i indicates the measurement surface and N is the total number of surfaces. Ī_n is the mean of the normal acoustic intensity vectors taken over the N surfaces.

Ī_n = (1/N) · Σ[i=1..N] I_ni    Eqn 5-24

In order to calculate F4, only intensity measurements are required.
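The surface-averaged indicators F2, F3 and F4 (Eqns 5-18 to 5-24) can be sketched numerically; the function name and the per-surface array layout below are illustrative assumptions, not from the text:

```python
import numpy as np

def field_indicators(p, In, p0=2e-5, I0=1e-12):
    """F2, F3, F4 from per-surface rms pressures p (Pa) and signed
    normal intensities In (W/m^2), one entry per measurement surface.
    F3 assumes the summed intensity is positive (net outward flow)."""
    N = len(In)
    Lp = 10 * np.log10(np.mean((p / p0) ** 2))          # Eqn 5-19
    F2 = Lp - 10 * np.log10(np.mean(np.abs(In)) / I0)   # Eqns 5-18, 5-20
    F3 = Lp - 10 * np.log10(np.mean(In) / I0)           # Eqns 5-21, 5-22
    In_mean = In.mean()                                 # Eqn 5-24
    F4 = np.sqrt(np.sum((In - In_mean) ** 2) / (N - 1)) / In_mean  # Eqn 5-23
    return F2, F3, F4
```

Since the signed mean can never exceed the unsigned mean, F3 is always at least as large as F2, which is what makes the F3 − F2 difference a usable indicator of extraneous sources.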

5.5.1

The criteria

Three criteria can be evaluated in verifying the results of an acoustic intensity analysis.

Ld − F2    Measurement chain accuracy

If a measurement array is to be considered suitable for determining the sound power level of a noise source according to ISO 9614-1, then the dynamic capability index (Ld) must be greater than the indicator F2 for each frequency band.

Ld − F2 ≥ 0    Criterion 1

Ld is dependent on the measurement equipment and is defined in equation 5-7. F2 is defined in equation 5-18. Ld is derived from the pressure residual intensity index which must be computed during the measurement phase.
If this criterion is not satisfied then it is an indication that the levels being measured are too low for the source and that it is necessary to reduce the average distance between the measurement surface and the source.

F3 − F2    Extraneous noise sources

If the difference between field indicators F2 and F3 is significant (greater than 3 dB), it is a strong indication of the presence of a directional extraneous noise source in the vicinity of the noise source under test.
In that case, the situation can be improved by reducing the average distance between the measurement surface and the source, shielding measurement surfaces from the extraneous noises or reducing some reflections towards the source under investigation.

Measurement mesh adequacy

A check on the adequacy of the measurement positions (mesh) can be made using the following criterion.

N > C·F4^2    Criterion 2

where N is the number of measurement (probe) positions
F4 is the indicator defined in equation 5-23
C is a factor selected from table 5.3 depending on the accuracy required.

Where the same mesh is used for a number of bands then the maximum value of C·F4^2 will be considered when evaluating the criterion.
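A minimal sketch of this criterion check (illustrative, not from the text):

```python
def mesh_adequate(N, F4, C):
    """Criterion 2: the number of probe positions N must exceed
    C * F4^2 for the measurement mesh to be considered adequate."""
    return N > C * F4 ** 2

print(mesh_adequate(20, 0.5, 19))   # 19 * 0.25 = 4.75 < 20 -> True
```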


Center frequencies (Hz)                            Factor C

Octave band       1/3 Octave band        Precision    Engineering    Survey
                                         class 1      class 2        class 3
63-125            50-160                 19           11
250-500           200-630                19
1000-4000         800-5000               57           29
6300                                     19           14
A weighted (63 - 4k or 50 - 6.3k) Hz

Table 5.3    Values of factor C for measurement mesh accuracy

Chapter 6

Sound quality

The purpose of this chapter is to introduce you to the fundamentals of sound quality.

  Basic theory relating to sound quality
  Sound quality analysis

An extensive reading list is included at the end of the chapter for more detailed information.

6.1

The basic concepts of Sound Quality


Sound signals

The characteristics of a sound as it is perceived are not exactly the same as the characteristics of the sound being emitted. The discussion starts with definitions which describe the actual sound signals, and then discusses the physical and psychological effects that influence the perception of a particular signal.

Sound power and sound pressure

The amount of noise emitted from a source depends on the sound power of that source.
The effect of the sound power emanating from a source is the level of sound (or acoustic) pressure. Sound pressure is what the eardrum detects - the level of which depends to a great extent on the acoustic environment and the distance from the source.
Sound pressure is what is measured by microphones and the majority of data used in a sound quality analysis would have the dimension pressure and thus be referred to as a sound signal. This is not an absolute condition however and vibrational data too can be analyzed.
Sound pressure level

The basic descriptor of a sound signal is the sound pressure level (SPL) denoted by L and described in equation 4-12. The sound pressure of 20 µPa is known as the standardized normal hearing threshold and represents the quietest sound at 1000 Hz that can be heard by the average person.
Since the range of pressure levels that can be detected is large and the ear responds logarithmically to a stimulus, it is practical to express acoustic parameters as a logarithmic ratio of a measured value to a reference value. Hence the use of the decibel scales.
Hearing frequency range

The upper threshold frequency for human hearing is around 20 kHz. Signals with a frequency content below this value are referred to as audio signals. Sampling of audio signals therefore requires a sampling rate of at least twice the maximum frequency that can be detected by the ear in order to avoid aliasing problems. You will find therefore that CD recorders use a sampling rate of 44.1 kHz and DAT recorders 48 kHz.
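The aliasing risk mentioned above can be demonstrated numerically (an illustrative sketch, not part of the original text): a 30 kHz tone sampled at 44.1 kHz reappears at 44.1 − 30 = 14.1 kHz.

```python
import numpy as np

fs = 44100          # CD sampling rate (Hz)
f_tone = 30000      # ultrasonic tone, above fs/2 = 22050 Hz
n = 4410            # 0.1 s of samples, giving 10 Hz bin spacing
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_tone * t)

spec = np.abs(np.fft.rfft(x))
f = np.fft.rfftfreq(n, 1 / fs)
peak = f[np.argmax(spec)]
print(peak)         # 14100.0 - the alias, not 30000
```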
Loudness and pitch

A sound can be characterized by its loudness (related to the SPL) and its frequency content. The common term for describing the frequency content of a sound (or tone) is its `pitch'. However pitch is very much a perceived frequency sensation and depends on both the frequency and the sound pressure level. Both loudness and pitch are discussed further below.


The perception of sounds by the human ear

An important element in explaining why two sounds with an equal dB level may have a totally different subjective quality is related to the physics of the human hearing process. The human ear is a complex, nonlinear device, with specific frequency dependent transmission characteristics. In addition, the fact that hearing usually involves two ears (is `binaural') has a considerable influence on sound perception. The correct understanding of the hearing processes will lead to a better appreciation of why a sound has its specific quality, which in turn will result in improved models and quantitative analysis procedures.
Physics alone, however, is not sufficient to explain all aspects of sound perception. It is also influenced by psychological factors such as attitude, background, expectations, environment, context, etc. As a consequence, there is no better `judge' of sound quality than the human listener, despite all efforts at quantification and modelling.
The purpose of this section is merely to highlight the salient points of this subject. For a more thorough understanding of this topic you should refer to the reading list at the end of the chapter. Specific references to items in this list are contained within brackets, {1} thus.

The hearing process

Before reaching the eardrum, an incident acoustic signal is considerably modified by the spectral and spatial filtering characteristics of the human body and the ear. The human torso itself acts as a directional filter through diffraction, resulting in the fact that very significant interaural differences in sound pressure level occur depending on the direction of the source {2}.
Figure 6-1 shows the various parts of the ear (from {5}). The outer ear consists of the pinna and the ear canal. Diffraction effects at the pinna and direction independent effects within the ear canal result in the human ear being most sensitive in the frequency range 1 to 10 kHz. The middle ear links the eardrum to the cochlea, which is the actual sound receptor. The final link between an acoustic signal and a neural response takes place in the cochlea, which is in the inner ear.

Figure 6-1    The main parts of the ear

Binaural hearing

Another essential characteristic of human hearing is that it is binaural in nature. The sound signals received by the left and right ear show a relative time delay as well as a spectral difference dependent on the direction of the sound. Below about 1500 Hz, the phase difference between the two signals will be the main contribution to localization, while above this frequency the interaural level difference and difference in spectrum will be the principal factors.
Processing in the human brain not only allows the sound to be spatially localized, but also to suppress unwanted sounds and to concentrate on a sound coming from a specific direction {2, 6}. This is the well known `cocktail party' effect where it is possible to focus one's hearing on an individual a certain distance away in the presence of significant background noise.

Sound perception

The body, head and outer ear effects consist mainly of a spatial and spectral filtering that is applied to the acoustic stimulus. Consequently, just looking at the frequency spectrum of a freely positioned microphone does not necessarily lead to a correct assessment of the human response. In other words, there is no simple relationship between the measured physical sound pressure level and the human perception of the same sound.


The effects of the inner ear are many, but the most important are its nonlinear characteristics. This means that the auditory impression of sound strength, which is referred to by the term `loudness', is not linearly related to the sound pressure level. In addition, the perceived loudness of a pure tone of constant sound pressure level varies with its frequency. Also the auditory impression of frequency, which is referred to by the term `pitch', is not linearly related to the frequency itself. These and other effects are described below.

Loudness

The sound pressure level is not linearly related to the auditory impression of sound strength (or loudness). Together with the frequency dependencies discussed above, this means that the sensation of loudness cannot be correctly described by the acoustic pressure level or its spectrum. Figure 6-2 {5} shows a number of curves representing levels of perceived equal loudness (for sinusoidal tones) across a frequency range as a function of acoustic pressure level.

Figure 6-2    Equal loudness perception contours {5}

Pitch

The perceived `frequency sensation', referred to as `pitch', is not directly related to the frequency itself {6}.

The pitch of a pure tone varies with both the frequency and the sound pressure level, and this relationship is itself dependent on the frequency of the tone. Pure tones can be used though to determine how pitch is perceived. One possibility is to measure the sensation of `half pitch'. In this case the subject is asked to listen to one pure tone, and then adjust the frequency of a second one such that it produces half the pitch of the first one. At low frequencies, the halving of the pitch sensation corresponds to a ratio of 2:1 in frequency. At high frequencies however this does not occur and the corresponding frequency ratio is larger than 2:1. For example a pure tone of 8 kHz produces a `half pitch' of only 1300 Hz.
So although the ratio between pitches can be determined from experiments, to obtain absolute values, it is necessary to determine a reference for the sensation `ratio pitch'. A reference frequency of 125 Hz was chosen so that at low frequencies, the numerical value of the frequency is identical to the numerical value of the ratio pitch. Because ratio pitch determined in this way is related to our sensation of melodies, it was assigned the dimension `mel'. Therefore a pure tone of 125 Hz has a ratio pitch of 125 mel, and the tuning standard, 440 Hz, shows a ratio pitch with almost the same numerical value.
At high frequencies, the numerical value of frequency and that of the ratio pitch deviate substantially from one another. The experimental finding that a pure tone of 8 kHz has a `half pitch' of 1300 Hz is reflected in the numerical values of the corresponding ratio pitch. The frequency of 8 kHz corresponds to a ratio pitch of 2100 mel and the frequency of 1300 Hz corresponds to a ratio pitch of 1050 mel, which is half of 2100 mel.

Critical bands

The inner ear can be considered to act as a set of overlapping constant percentage bandwidth filters. The noise bandwidths concerned are approximately constant, with a bandwidth of around 110 Hz, for frequencies below 500 Hz, evolving to a constant percentage value (about 23 %) at higher frequencies. This corresponds perfectly with the nonlinear frequency-distance characteristics of the cochlea. These bandwidths are often referred to as `critical bandwidths' and a `Bark' scale is associated with them as shown in Table 6.1.


Critical Band (Bark)   Center Frequency (Hz)   Bandwidth (Hz)
 1        50      100
 2       150      100
 3       250      100
 4       350      100
 5       450      110
 6       570      120
 7       700      140
 8       840      150
 9      1000      160
10      1170      190
11      1370      210
12      1600      240
13      1850      280
14      2150      320
15      2500      380
16      2900      450
17      3400      550
18      4000      700
19      4800      900
20      5800     1100
21      7000     1300
22      8500     1800
23     10500     2500
24     13500     3500

Table 6.1   Table of critical bands
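Table 6.1 can be used directly as a lookup. The sketch below (illustrative only, not part of any LMS tool) reconstructs the band edges by accumulating the bandwidths from 0 Hz, which reproduces the conventional Bark band edges (100 Hz, 200 Hz, ... 15500 Hz):

```python
# Table 6.1 as data: (band number, center frequency in Hz, bandwidth in Hz).
BARK_BANDS = [
    (1, 50, 100), (2, 150, 100), (3, 250, 100), (4, 350, 100),
    (5, 450, 110), (6, 570, 120), (7, 700, 140), (8, 840, 150),
    (9, 1000, 160), (10, 1170, 190), (11, 1370, 210), (12, 1600, 240),
    (13, 1850, 280), (14, 2150, 320), (15, 2500, 380), (16, 2900, 450),
    (17, 3400, 550), (18, 4000, 700), (19, 4800, 900), (20, 5800, 1100),
    (21, 7000, 1300), (22, 8500, 1800), (23, 10500, 2500), (24, 13500, 3500),
]

def critical_band(freq_hz):
    """Return the critical band (Bark) number containing freq_hz.

    Band edges are obtained by accumulating the bandwidths from 0 Hz,
    an assumption that matches the standard Bark band edges.
    """
    lower = 0.0
    for band, _center, bandwidth in BARK_BANDS:
        upper = lower + bandwidth
        if lower <= freq_hz < upper:
            return band
        lower = upper
    raise ValueError("frequency outside the 0 - 15.5 kHz Bark range")
```

For example, 1 kHz falls in band 9 (920-1080 Hz), consistent with the table's center frequency of 1000 Hz for that band.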

Masking
The critical bands described above have important implications for sounds composed of multiple components. For example, narrow band random sounds falling within one such filter bandwidth will add up to the global sensation of loudness at the center frequency of the filter. On the other hand, a high level sound component may `mask' another, lower level sound which is too close in frequency.
An example of masking is shown below {5}. A 50 dB, 4 kHz tone (marked +) can be heard in the presence of narrow-band noise, centered around 1200 Hz, up to a noise level of 90 dB. If the noise level rises to 100 dB, the tone is not heard.

Figure 6-3   Masking effects of narrow band noise {5}
(figure: sound pressure level in dB versus frequency, showing the level of the masking noise and the threshold of hearing)

Part II   Acoustics and Sound Quality

Chapter 6

Sound quality

The higher the level of the masking sound, the wider the frequency band over which masking occurs. Again, it turns out that multiple sound components falling within one of the ear filter bandwidths add up to the masking level, while when they are farther apart each can be considered as a separate sound with its own masking properties.

Temporal effects
Finally, a number of temporal effects are associated with the hearing process. Sounds must `build up' before causing a neural reaction; the reaction time, however, depends on the sound level. This has an effect on the perceived loudness, since the loudness of a tone burst decreases for durations smaller than about 200 ms. For longer durations, the loudness is almost independent of duration.
This also has consequences for masking:
- Short sounds preceding a second, loud sound can be reduced in loudness or even masked. The time intervals for this temporal `pre-masking' phenomenon are in the order of tens of milliseconds.
- A similar effect may occur after switching off a loud sound. During a time interval of up to 200 ms (dependent on masking and tone level), short tone bursts may be masked (post-masking).
- In the presence of a given continuous sound, tone bursts with levels exceeding that of the first signal might be obscured, depending on their length. This is called `simultaneous masking'.
A detailed discussion of these temporal effects can be found in {6}.


6.2   Sound quality analysis


One of the fundamental problems with sound quality is that `what-you-hear-is-not-what-you-get'. Nonlinear physical characteristics of the human ear mean that the sound perceived is not directly related to the sound level being generated. Furthermore, `what-you-like-is-not-what-you-hear', since the appreciation or non-appreciation of a sound depends to a great extent on the situation and the attitude of the listener. An appreciation of the physical and psycho-acoustic aspects of human hearing is essential to the understanding of sound quality, and to this end a short summary of the significant points and terms used is given in section 6.1.
In the majority of problems or studies related to acoustics, the issue at hand is acoustic comfort, and not hearing damage or structural integrity.
In order to properly describe this acoustic comfort, it has long since become clear that the acoustic pressure level is by no means sufficient or even adequate to correctly represent the actual hearing sensations. This is due to the very complex nature of the auditory impressions of acoustic signals (or `sounds'), leading to the use of concepts such as the `quality' of the sound.
Auditory impressions can be annoying, in which case the sound is unwanted and is often referred to as `noise'. Typical examples are irritating engine, road or wind noise in a car, aircraft noise, and machine or fan noise in the working environment.
Examples of vehicle noises which, while being annoying, do not contribute significantly to the sound pressure level are wiper noise, fuel pump noise, alternator whine and dashboard squeaks. To express this negative quality or annoyance, a multitude of qualitative concepts like whine, rattle, boom, rumble, hiss, beat, squeak, speech interference, harshness, sharpness, roughness and fluctuation strength are used.
But not everything you hear is either bad or unwanted. A sound can be an important messenger of information, in which case it conveys a positive feeling. Examples are the solidity of a door-slam, the feeling of sportiness of a car engine (or exhaust) during acceleration, the smoothness of a limousine engine, and the `catching' of a door lock or a seat belt.
In these cases, the noise does not need to be removed, but it has to sound `right'.

Analysis of sound signals

Having identified a problem, the aim is to measure, evaluate and modify sounds, and a prerequisite for this is a high quality recording of the sound.


Figure 6-4   Sound quality analysis
(block diagram with the elements: input, digital spectral processing, digital filtering, replay, comfort analysis, reporting, output)

Measurements
Sound quality measurements are acoustic measurements made with microphones. These can be digitally recorded and imported into the computer system, but in order to successfully evaluate a sound it is absolutely essential that it is both recorded and replayed in the most accurate and representative way possible. Binaural recording is a technique whereby microphones are mounted inside the ears of an artificial head to represent the sensation of human hearing as closely as possible.
Evaluation
The next step in dealing with a sound quality issue is to gain a proper understanding of the quality of the sound. In order to evaluate sound quality characteristics, different (non-exclusive) approaches may be followed.
(a) The acoustic signal can be evaluated subjectively by a specialist or a jury of listeners. This can be achieved by replaying the signal either digitally via a recorder or directly via an analog output to headphones or speakers. When using direct replay, cyclic repetition of a particular segment can be performed, and techniques are provided to suppress the `click' at the start and end of a segment, as well as on-line notch filtering. This latter facility can give a very fast assessment of the critical spectral characteristics of a sound.
(b) The acoustic pressure signal is processed in such a way that perception-relevant quantitative values can be obtained through the use of adequate sound quality metrics. Such metrics form part of the comfort analysis.
Modification
Important information on the nature of a sound can be obtained by modifying
the sound signal and comparing its perceived quality with the original. This
modification can be imposed in the time, frequency or order domains.


An important consequence of sound modification is that it can also serve to generate the `target' sounds which become the specifications for the subsequent product modifications.

Binaural recording and playback

The ultimate goal of a sound quality analysis must be to record, analyze, possibly modify and then play back a sound in such a way as to reproduce exactly what the listener would have experienced if he had listened to the original sound. The purpose of this section is to give an overview of this whole process and to introduce the factors that are involved in it. It also serves as a means of clarifying the terminology used in such a process.
Figure 6-5   Binaural recording and playback
(diagram: free or diffuse field → artificial head → recording equalization → DAT recorder → calibration → computer (sound quality analysis) → equalization / de-equalization → listener)

Recording
The first stage in this process is to make an exact recording of a sound. A single microphone situated in free space is insufficient for this, since at least four microphones would be necessary to correctly capture the 3D nature of the sound. It has been demonstrated in the previous section that the pressure experienced by the eardrum is greatly influenced by the presence of the head and torso of the listener, and is further affected by the non-linear operating characteristics of the ear itself. As a consequence of this, one of the most accurate ways to record a sound is to mimic the function of the ears themselves and place two microphones inside the ear canals. Such a technique is known as binaural recording, and involves two inputs representing what the left and the right ears would hear.
Although it is possible to place the microphones inside the ears of a human head, it is more common to use an artificial head which provides similar spatial filtering to that of an actual head, shoulders and torso.
Equalization
You may wish to reconstruct this recording as if it were the original sound and not as it is heard inside the head. In this case, you will need to `undo' the modifications that were caused by the presence of the head. The sound can be reconstructed as if it were in a free field or a diffuse field.


A free field refers to an idealized situation where the sound flows directly out from the source and the pressure levels drop with increasing distance from the source. In a diffuse field, the sound occupies a smaller space and is reflected many times.
Thus, when you are recording a sound you can determine the type of field you wish to reconstruct it in, and the appropriate compensation or equalization will be applied. If you only wish to replay the sound through headphones, then you do not need equalization, and so you can either select a non-equalized recording or you will have to de-equalize it before it is replayed through headphones.
Transfer to computer
The recording on the DAT recorder is held in a 16 bit audio format. When this is transferred to a computer system, it is then converted to a 32 bit floating point format. To achieve this conversion a calibration factor is required.
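As an illustration of this calibration step, the sketch below scales raw 16 bit integer samples to floating point pressures. The full-scale-equals-1-Pa factor is a made-up assumption for the example; in practice the factor is derived from a recorded reference signal (e.g. a pistonphone tone of known level):

```python
# Hypothetical calibration: full scale on the 16 bit recording (32768 counts)
# is ASSUMED to correspond to 1.0 Pa here. A real factor comes from recording
# a calibration tone of known sound pressure.
CAL_PA_PER_COUNT = 1.0 / 32768.0

def counts_to_pascal(samples):
    """Convert raw 16 bit DAT sample counts to floating point pressures in Pa."""
    return [s * CAL_PA_PER_COUNT for s in samples]
```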
Replay
When you replay the signal over headphones, de-equalization may be necessary if free field or diffuse field equalization has been applied to the original recording. In addition, compensation is required to take account of the transfer function associated with the particular set of headphones to be used.


6.3   Reading list

1   D.LUBMAN, Noise Quality, Toward a Larger Vision of Noise Control Engineering, Journal of Noise Control Engineering, ....
2   J.BLAUERT, Spatial Hearing, MIT Press, Cambridge (MA), 1983.
3   W.BRAY ET AL, Development and Use of Binaural Measurement Technique, Proc. Noise Con '91, Tarytown (NY), July 14-16, 1991, pp. 443-450.
4   D.HAMMERSHOI, H.MOLLER, Binaural Auralisation: Head-Related Transfer Functions Measured on Human Subjects, Proc. 93rd AES Convention, Vienna (A), March 24-27, 1992, 7 pp.
5   J.HASSAL, K.ZAVERI, Acoustic Noise Measurements, Bruel & Kjaer, DK-2850 Naerum, Denmark, 1988.
6   E.ZWICKER, H.FASTL, Psychoacoustics, Facts and Models, Springer Verlag, Berlin (Germany), 1990.
7   J.HOLMES, Speech Synthesis and Recognition, Van Nostrand Reinhold, Wokingham, Berkshire (UK), 1988.
8   M.HUSSAIN, J.GOELLES, Statistical Evaluation of an Annoyance Index for Engine Noise Recordings, SAE paper 911080, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 16-18, 1991, pp. 359-368.
9   H.SHIFFBAENKER ET AL, Development and Application of an Evaluation Technique to Assess the Subjective Character of Engine Noise, SAE paper 911081, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 16-18, 1991, pp. 369-379.
10  K.TAKANAMI ET AL, Improving Interior Noise Produced During Acceleration, SAE paper 911078, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 16-18, 1991, pp. 339-348.
11  G.IRATO, G.RUSPA, Influence of the Experimental Setting on the Evaluation of Subjective Noise Quality, Proc. 2nd International Conference on Vehicle Comfort, Bologna (Italy), Oct. 14-16, 1992, pp. 1033-1044.
12  INTERNATIONAL ORGANIZATION FOR STANDARDIZATION, Method for Calculating Loudness Level, ISO 532-1975 (E).
13  E.ZWICKER ET AL, Program for Calculating Loudness According to DIN 45631 (ISO 532B), Journal of the Acoustical Society of Japan (E), Vol. 12, Nr. 1, 1991.
14  S.J.STEVENS, Procedure for Calculating Loudness: Mark VI, J. Acoust. Soc. Am., Vol. 33, Nr. 11, pp. 1577-1585, 1961.
15  S.J.STEVENS, Perceived Level of Noise by Mark VII and Decibel, J. Acoust. Soc. Am., Vol. 51, Nr. 2, pp. 575-601, 1971.
16  E.ZWICKER, Procedure for Calculating Loudness of Temporally Variable Sounds, J. Acoust. Soc. Am., Vol. 62, Nr. 3, pp. 675-681, 1977.
17  L.L.BERANEK, Criteria for Noise and Vibration in Communities, Buildings and Vehicles, in Noise and Vibration Control, revised edition, McGraw-Hill Inc., 1988.
18  W.AURES, Berechnungsverfahren für den sensorischen Wohlklang beliebiger Schallsignale, Acustica, Vol. 59, pp. 130-141, 1985.
19  M.ZOLLNER, Psychoacoustic Roughness. A New Quality Criterion, Cortex Electronic, 1992.
20  W.AURES, Ein Berechnungsverfahren der Rauhigkeit, Acustica, Vol. 58, pp. 268-280, 1985.
21  M.F.RUSSEL, What Price Noise Quality Indices, Proc. Engineering Integrity Society Symposium on NVH Challenges - Problem Solutions, Oct. 21, 1992.
22  M.F.RUSSEL ET AL, Subjective Assessment of Diesel Vehicle Noise, IMechE paper 925187, Ref. C389/044, FISITA Conference Engineering for the Customer, pp. 37-42, 1992.
23  D.G.FISH, Vehicle Noise Quality - Towards Improving the Correlation of Objective Measurements with Subjective Rating, IMechE paper 925186, Ref. C389/468, FISITA Conference Engineering for the Customer, pp. 29-36, 1992.
24  G.TOWNSEND, A New Approach to the Analysis of Impulsiveness in the Noise of Motor Vehicles, Proc. Autotech '89, paper 7/26.
25  MOTOR INDUSTRY RESEARCH ASSOCIATION, Improving Correlation of Objective Measurements with Subjective Rating of Vehicle Noise, MIRA research report K3866326.
26  F.K.BRANDL ET AL, A Concept for Definition of Subjective Noise Character - A Basis for More Efficient Vehicle Noise Reduction Strategies, Proc. Internoise '89, Newport Beach (CA), Dec. 4-6, 1989, pp. 1279-1282.
27  R.S.THOMAS, A Development Process to Improve Vehicle Sound Quality, SAE paper 911079, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 13-16, 1991, pp. 349-358.
28  G.R.BIENVENUE, M.A.NOBILE, The Prominence Ratio Technique in Characterizing Perception of Noise Signals Containing Discrete Tones, Proc. Internoise '92, Toronto (Canada), July 20-22, 1992, pp. 1115-1118.
29  K.TSUGE ET AL, A Study of Noise in Vehicle Passenger Compartment during Acceleration, SAE paper 8509665, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 15-17, 1985, pp. 27-34.
30  T.WAKITA ET AL, Objective Rating of Rumble in Vehicle Passenger Compartment during Acceleration, SAE paper 891155, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 16-18, 1989, pp. 305-312.
31  W.YAGISHASHI, Analysis of Car Interior Noise during Acceleration Taking into Account Auditory Impressions, JSAE Review (E), Vol. 12, Nr. 4, Oct. 1991, pp. 58-61.
32  K.FUJITA ET AL, Research on Sound Quality Evaluation Methods for Exhaust Noise, JSAE Review (E), Vol. 9, Nr. 2, April 1988, pp. 28-33.
33  AMERICAN NATIONAL STANDARD, S3.14-1977 (R-1986), Rating Noise with Respect to Speech Interference, Acoustical Society of America.
34  H.STEENEKEN, T.HOUTGAST, RASTI, A Tool for Evaluating Auditoria, Bruel & Kjaer Technical Review, Nr. 3-1985, pp. 13-30.
35  M.NAKAMURA, T.YAMASHITA, Sound Evaluation in Cars by RASTI Method, JSAE Review, Vol. 11, Nr. 4, Oct. 1990, pp. 38-41.
36  H.MOLLER, Fundamentals of Binaural Technology, Applied Acoustics, Vol. 36, 1992, pp. 171-218.
37  K.GENUIT, M.BURKHARD, Artificial Head Measurement System for Subjective Evaluation of Sound Quality, Sound and Vibration, March 1992, pp. 18-23.
38  G.MICHEL, G.EBBIT, Binaural Measurements of Loudness as a Parameter in the Evaluation of Sound Quality in Automobiles, Proc. Noise Con '91, Tarytown (NY), July 14-16, 1991, pp. 483-490.
39  G.THEILE, The Importance of Diffuse Field Equalisation for Stereophonic Recording and Reproduction, Proc. 13th Tonmeistertagung, 1984.
40  D.S.MANDIC, P.R.DONOVAN, An Evaluation of Binaural Measurement Systems as Acoustic Transducers, Proc. Noise Con '91, Tarytown (NY), July 14-16, 1991, pp. 459-466.
41  H.HAMMERSHOI, H.MOLLER, Artificial Heads for Free Field Recording: How Well Do They Simulate Real Heads?, Proc. 14th ICA, Beijing, 1992, paper H6-7 (2 pp).
42  K.GENUIT, H.GIERLICH, Investigation between Objective Noise Measurement and Subjective Classification, SAE paper 891154, Proc. SAE Noise and Vibration Conference, Traverse City (MI), May 16-18, 1989, pp. 295-303.
43  H.MOLLER ET AL, Transfer Characteristics of Headphones, Proc. 92nd AES Convention, Vienna (A), March 24-27, 1992, 28 pp.
44  Y.OKAMOTO ET AL, Evaluation of Vehicle Sounds Through Synthesized Sounds that Respond to Driving Operation, JSAE Review (E), Vol. 12, Nr. 4, Oct. 1991, pp. 52-57.
45  S.M.HUTCHINS ET AL, Noise, Vibration and Harshness from the Customer's Point of View, IMechE paper 925181, Ref. C389/049, Proc. FISITA-92 Conference, Engineering for the Customer.
46  H.AOKI ET AL, Effects of Power Plant Vibration on Sound Quality in the Passenger Compartment During Acceleration, SAE paper 870955, Proc. SAE Noise and Vibration Conference, Traverse City (MI), Apr. 28-30, 1987, pp. 53-62.
47  K.C.PARSONS, M.J.GRIFFIN, Methods for Predicting Passenger Vibration Discomfort, SAE Technical Paper 831921.
48  M.J.GRIFFIN, Handbook of Human Vibration, Academic Press Ltd., ISBN 0-12-03040-4.
49  J.D.LEATHERWOOD, L.M.BARKER, A User-Oriented and Computerized Model for Estimating Vehicle Ride Quality, NASA Technical Paper 2299, 1984.
50  INTERNATIONAL STANDARD, Ref. No. ISO 2631/1-1985 (E).
51  INTERNATIONAL STANDARD, Ref. No. ISO 5349-1986 (E).
52  BRITISH STANDARDS INSTITUTION, Measurement and Evaluation of Human Exposure to Whole-Body Mechanical Vibration and Repeated Shock, Ref. No. BS 6841-1987.
53  AMERICAN NATIONAL STANDARD, S3.14-1977 (R-1986), Rating Noise with Respect to Speech Interference, order from the Acoustical Society of America.
54  ANSI S3.5, Calculation of the Articulation Index, American National Standards Institute, Inc., 1430 Broadway, New York, New York 10018, USA, 1969.
55  INTERNATIONAL STANDARD, Ref. No. ISO 532-1975 (E).

Chapter 7

Sound metrics

It may be said that the best way to evaluate the quality of a sound is to listen to it and express an opinion about it, but in a lot of cases there is also a strong interest in correlating the results from these subjective evaluations with measurable parameters. Therefore a number of sound quality metrics exist, where perception-relevant quantitative values are calculated from the acoustic pressure signal.
Sound pressure levels
Loudness metrics
Sharpness
Roughness
Fluctuation strength
Pitch
Articulation index
Speech interference levels
Impulsiveness
The references are listed in chapter 6.

7.1   Sound pressure level


The basic descriptor of a sound signal is the sound pressure level (SPL), denoted by L and described in equation 4-12.
The stimulus of the sound pressure level needs to be interpreted as a hearing sensation, and one approach consists of multiplying the frequency spectrum of the acoustic pressure signal with a weighting function before calculating the RMS level. Several weighting functions have been defined, of which the A, B, C and D weightings are the most widely used. They are based on experimentally determined equal loudness contours, which express the loudness sensation of single tones as a function of sound pressure level and frequency.

Time domain sound pressure level


This function calculates the frequency and time weighted sound pressure level according to the IEC 651 and ANSI S1.4-1983 standards.
Frequency weighting can be applied to the time signal using the A, B or C weightings described above. The time signal is then exponentially averaged to arrive at the sound pressure level. An exponential weighting factor $e^{-t/\tau}$ is used, where t is the sample period of the signal and $\tau$ is the time constant. The value of $\tau$ depends on the type of signal (mode), and three default (standardized) values are supplied:
$\tau$ = 35 ms for impulse (peaky) signals
$\tau$ = 125 ms for fast changing signals
$\tau$ = 1000 ms for slow changing signals.
By selecting the type of signal (mode), the appropriate time constant is applied.
When the signal contains spikes and is therefore described by the mode `impulse', an additional peak detector mechanism is implemented. In this case, when an increase in the averaged signal is detected, the signal is followed exactly. When the signal is decreasing, exponential averaging is used with a long time constant, set by default to 1500 ms. The time constant used in this situation is termed the decay time constant.
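The exponential averaging described above can be sketched as a one-pole detector on the squared pressure. This is an illustrative reading of the standard, not the LMS implementation; the impulse-mode peak-hold branch is omitted, and τ = 0.125 s gives the `fast' characteristic:

```python
import math

P0 = 20e-6  # reference sound pressure, 20 µPa

def time_weighted_spl(pressure, fs, tau=0.125):
    """Exponentially time-weighted SPL trace in dB.

    pressure: (already frequency-weighted) pressure samples in Pa
    fs:       sample rate in Hz (so the sample period t is 1/fs)
    tau:      time constant in s (0.125 = fast, 1.0 = slow)
    """
    decay = math.exp(-1.0 / (fs * tau))   # e^(-t/tau) per sample
    mean_square = 0.0
    levels = []
    for p in pressure:
        # one-pole exponential average of the squared pressure
        mean_square = decay * mean_square + (1.0 - decay) * p * p
        levels.append(10.0 * math.log10(mean_square / P0 ** 2))
    return levels
```

For a constant 1 Pa input the trace rises toward the steady-state level of about 94 dB once several time constants have elapsed.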


7.2   Equivalent sound pressure level


The ISO standards ISO 1996/1-1982 and ISO 1999:1990 provide a definition for the `equivalent A-weighted sound pressure level in decibels', identified as L_Aeq,T.
This function gives the value of the A-weighted sound pressure level of a continuous, steady sound that, within a specified time interval T, has the same mean square sound pressure as the sound under consideration, whose level varies with time. This leads to the expression:
L_{Aeq,T} = 10 \log \left[ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \frac{p_A^2(t)}{p_0^2} \, dt \right]          Eqn 7-1

where
L_Aeq,T is the equivalent continuous A-weighted sound pressure level, in decibels, determined over a time interval T starting at t1 and ending at t2;
p_0 is the reference sound pressure (20 µPa);
p_A(t) is the instantaneous A-weighted sound pressure of the sound signal.
In practice, with sampled data, the equivalent sound pressure level is computed by a summation of the sampled values of the pressure, over the number of samples required.
As a generalization, you can apply the same formula to a non-A-weighted sound pressure signal p(t) to obtain L_eq,T.
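With sampled data, Eqn 7-1 reduces to the mean of the squared pressure samples. A minimal sketch (not the LMS implementation; the samples are assumed to be A-weighted already if L_Aeq is wanted):

```python
import math

P0 = 20e-6  # reference sound pressure, 20 µPa

def leq(pressure_samples):
    """Equivalent continuous sound pressure level (Eqn 7-1, discretized)
    of sampled pressures in Pa, taken over the whole record."""
    n = len(pressure_samples)
    mean_square = sum(p * p for p in pressure_samples) / n
    return 10.0 * math.log10(mean_square / P0 ** 2)
```

A constant pressure of 1 Pa gives the familiar value of about 94 dB, and a constant pressure equal to the 20 µPa reference gives 0 dB.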

7.3   Loudness
The equal loudness contours shown in Figure 6-2 in the chapter `Sound quality' are the result of large numbers of psycho-acoustical experiments and are in principle only valid for the specific sound types involved in the test. These curves are valid for pure tones and depict the actual experienced loudness for a tone of given frequency and sound pressure level when compared to a reference tone. The resulting value is called the `loudness level'.
The loudness level itself is expressed in Phons. 1 kHz tones are used as the reference, which means that for a 1 kHz tone, the Phon value corresponds to the dB sound pressure level. The equal loudness contours for free field pure tones and diffuse field narrow-band random noise are standardized as ISO 226-1987 (E).
A linear unit derived from the (logarithmic) Phon values is the Sone (S), which
is related to the Phon (P) in the following way :

S  2 (P40)10

Eqn 7-2

The Sone scale's linear relationship to the experienced loudness makes it easier
to interpret. A loudness of 1 Sone corresponds to a loudness level of 40 Phons.
A tone which is twice as loud, will have double the loudness (Sone) value, and a
loudness level which is 10 Phons higher.
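Eqn 7-2 and its inverse are simple to evaluate numerically; a small sketch:

```python
import math

def phon_to_sone(phon):
    """Eqn 7-2: S = 2 ** ((P - 40) / 10)."""
    return 2.0 ** ((phon - 40.0) / 10.0)

def sone_to_phon(sone):
    """Inverse of Eqn 7-2: P = 40 + 10 * log2(S)."""
    return 40.0 + 10.0 * math.log2(sone)
```

As the text states: 40 Phons gives 1 Sone, and each 10 Phon increase doubles the Sone value.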
When broadband or multi-tone sounds are being considered, the loudness is evaluated as a frequency spectrum in terms of critical bands instead of as a single total value. Critical bands and Barks are described in Table 6.1 in the chapter on `Sound quality'. In this case the terminology `specific loudness' is used, expressed in Sones/Bark.
For steady state sounds, standardized calculation procedures have been defined by Zwicker and Stevens and are accepted as ISO standards {12, 13, 14}. A more recent procedure by Stevens {15} has not yet been accepted as an ISO standard.
They are both based on:
- a convention for the relation between octave band sound pressure levels and octave band partial (specific) loudness descriptions
- a convention to combine the specific loudness values into a global loudness, taking into account masking effects.
For temporally varying sounds, Zwicker has also proposed an approach taking into account temporal effects {16}, which is not yet accepted as an ISO standard.


7.3.1   Stevens Mark VI

The Stevens (Mark VI) method, standardized as ISO 532-A-1975 and ANSI S3.4-1980, starts from octave band sound pressure levels. Their loudness is compared to that of a critical band noise at 1 kHz. It is only defined for diffuse sound fields with relatively smooth, broadband spectra. Through a set of standardized curves, each octave band level is converted into a partial loudness index (s); see Figure 7-1. The partial loudness values are then combined into a total loudness (in Sones), using equation 7-3.

s_t = s_m + F (\Sigma s - s_m)          Eqn 7-3

where
s_t = the total loudness, in Sones
s_m = the greatest of the loudness indices, in Sones
\Sigma s = the sum of the loudness indices of all bands, in Sones
F = fractional loudness contribution factor, reflecting masking effects; it depends on the type of octave measurement (0.3 for 1/1 octaves, 0.15 for 1/3 octaves).

Figure 7-1   Loudness (Mark VI)
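Equation 7-3 itself is a one-liner once the per-band loudness indices s are known (reading those indices from the standardized curves of Figure 7-1 is the part not sketched here):

```python
def stevens_total_loudness(indices, third_octave=False):
    """Eqn 7-3: s_t = s_m + F * (sum(s) - s_m).

    indices:      partial loudness indices per band, in Sones
    third_octave: True for 1/3 octave data (F = 0.15),
                  False for 1/1 octave data (F = 0.3)
    """
    F = 0.15 if third_octave else 0.3
    sm = max(indices)                    # greatest (dominant) band index
    return sm + F * (sum(indices) - sm)  # dominant band plus masked remainder
```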

7.3.2   Stevens Mark VII

A more recent calculation scheme is Stevens Mark VII {15, 17}, which uses a more refined partial loudness calculation (see Figure 7-2), as well as a level-dependent calculation for F in equation 7-3. The reference frequency is 3150 Hz. Apart from the loudness (in Sones), the logarithmic unit `perceived loudness level' (PLdB) is used here, which is 32 dB for a loudness of 1 Sone at 3150 Hz. PLdB values will be about 8 dB lower than the loudness level in Phons. Examples are discussed in {5} and {17}.

Figure 7-2   Loudness (Mark VII)

7.3.3   Loudness Zwicker
Loudness assessment using the Zwicker method (standardized as ISO 532B)
starts from 1/3 octave band sound pressure level data, which can originate
from either a free or diffuse sound field. It is capable of dealing with complex
broadband noises, which may include pure tones.


The method takes masking effects into account. Masking effects are important for sounds composed of multiple components: a high level sound component may `mask' another, lower level sound which is too close in frequency. An example of masking is shown below {5}. A 50 dB, 4 kHz tone (marked +) can be heard in the presence of narrow-band noise, centered around 1200 Hz, up to a noise level of 90 dB. If the noise level rises to 100 dB, the tone is not heard.

Figure 7-3   Masking effects of narrow band noise {5}
(figure: sound pressure level in dB versus frequency, showing the level of the masking noise and the threshold of hearing)

The method uses different sets of graphs for diffuse and free fields that relate loudness level to sound pressure level and take the masking into account by a sloping-edge filter characteristic for each octave band. In this way, dominant (and hence masking) frequency bands will show their influence over a large frequency range and prevent masked sounds from contributing to the total level.
Figure 7-4 shows an example of the Zwicker method. The 1/3 octave band data are transferred to the appropriate Zwicker diagram.

Figure 7-4   Example loudness calculation according to Zwicker's method {5}

The partial loudness contours are computed for each defined segment (global evaluation) or frame (tracked evaluation) using a classical Zwicker loudness calculation. The frame or segment size should be selected to ensure that the spectral resolution needed for the FFT-based octave band analysis can be achieved. The frame size can be used to restrict the analysis to time periods over which time-varying signals can be regarded as stationary.
The Zwicker loudness analysis allows you to distinguish between unmasked and masked contours, thus allowing you to see that certain levels are either partially or completely masked by previous ones.
The total loudness is calculated as the surface under the enveloping partial loudness contours and can be expressed in Sones, or as a loudness level in Phons, as a function of time. This is presented as a single value in the global evaluation and as a trace of values for the tracked evaluation.


7.4   Sharpness

A sensation which is relevant to the pleasantness of a sound is its `sharpness', allowing you to classify sounds as shrill (sharp) or `dull'. The sharpness sensation is strongly related to the spectral content and center frequency of narrow-band sounds, and is not dependent on the loudness level or the detailed spectral content of the sound.
Roughly, it corresponds to the first spectral moment of the specific loudness, with a pre-emphasis for higher frequencies. A quantitative procedure has been proposed, expressing the sharpness in the unit `acum'. The reference sound of 1 acum is a narrow-band noise, one critical band wide, at a center frequency of 1 kHz and having a level of 60 dB.
The dependency of sharpness on the center frequency and bandwidth of the noise is shown in Figure 7-5 {6}. The middle curve represents a noise of one critical bandwidth as a function of center frequency; the upper and lower curves represent the sharpness of noises with a fixed upper (10 kHz) or lower (0.2 kHz) cut-off frequency, as a function of the other cut-off value. Higher frequency noises produce higher sharpness.

Figure 7-5   Sharpness of bandlimited noise


The specific sharpness S'(z) is calculated according to:

S'(z) = \frac{0.11 \, N'(z) \, g(z) \, z \, \Delta z}{\sum_{z = 0\,\mathrm{Bark}}^{24\,\mathrm{Bark}} N'(z) \, \Delta z}          Eqn 7-4

where
N'(z) is the specific Zwicker loudness;
g(z) is a weighting function that pre-stresses higher frequency components (Figure 7-6); g(z) has unit value below 16 Bark and rises exponentially as

g(z) = 0.066 \, e^{0.171 z}          Eqn 7-5

Figure 7-6   Sharpness calculation weighting function

The total sharpness S, expressed in `acums', is obtained by integrating the specific sharpness:

S = \sum_{z = 0\,\mathrm{Bark}}^{24\,\mathrm{Bark}} S'(z) \, \Delta z          Eqn 7-6
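Equations 7-4 to 7-6 combine into a short numerical procedure. The sketch below assumes the specific loudness N'(z) is already available, sampled every Δz = 0.1 Bark over the 0-24 Bark range (an assumed discretization, not the LMS one):

```python
import math

def weighting_g(z):
    """Eqn 7-5: unit value below 16 Bark, exponential rise above."""
    return 1.0 if z <= 16.0 else 0.066 * math.exp(0.171 * z)

def sharpness_acum(specific_loudness, dz=0.1):
    """Total sharpness (Eqns 7-4 and 7-6, combined) in acum.

    specific_loudness: N'(z) samples in Sones/Bark, one per dz Bark,
                       taken at the bin centers z = (i + 0.5) * dz.
    """
    numerator = 0.0    # sum of N'(z) g(z) z dz  (first spectral moment)
    total_loudness = 0.0
    for i, n in enumerate(specific_loudness):
        z = (i + 0.5) * dz
        numerator += n * weighting_g(z) * z * dz
        total_loudness += n * dz
    if total_loudness == 0.0:
        return 0.0
    return 0.11 * numerator / total_loudness
```

Because of the moment-style numerator and the g(z) pre-emphasis, loudness concentrated above 16 Bark yields a markedly higher sharpness than the same loudness at low Bark values.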


7.5   Roughness

The roughness or harshness of a sound is a quality associated with amplitude modulations of tones. When the modulation frequency is very low (below about 15 Hz), the actual time-varying loudness fluctuations can be perceived. This fluctuation sensation is discussed in section 7.6.
At high modulation frequencies (above 150-300 Hz), three separate tones can be heard. In the intermediate frequency range (15-300 Hz), the sensation is of a stationary but rough tone, which renders it rather unpleasant. This sensation is often associated with engine noise, where fractional orders can cause the modulation effects.
Roughness increases with the degree of modulation and with the modulation frequency, and is less sensitive to the base frequency. The unit used to describe roughness is the `asper'; 1 asper being produced by a 100 %, 70 Hz modulated 1 kHz tone of 60 dB.
The dependency relationship between modulation depth and frequency is however not straightforward. An important element is that the temporal variations of the loudness can cause masking effects, and a temporal masking depth \Delta L is introduced, representing the difference between maximum and minimum in the actually perceived time-dependent loudness pattern. Due to post-masking, this masking depth is smaller than the modulation depth, with the difference becoming greater at higher frequencies. The roughness R of an amplitude modulated sound can then be approximated as

R \propto f_{mod} \, \Delta L          Eqn 7-7

Quantitative procedures to calculate roughness have been proposed. They involve the calculation of a partial or "specific roughness" in each critical band, based on modulation frequency and depth, including masking effects, and then integrating these to obtain the total roughness.


7.6


Fluctuation strength
When the sound functions have modulation frequencies below 20 Hz, they are perceived as changes in the sound volume over time. Typically, fluctuating signals sound louder (and more annoying) than steady-state signals of the same rms amplitude. In this case, the intensity of the sensation is referred to as "fluctuation strength", with the unit "vacil". A reference sound of 1 vacil corresponds to a 1 kHz tone of 60 dB with a 100% amplitude modulation at 4 Hz.
The ear is most sensitive to fluctuations at 4 Hz. Quantitative models have been proposed for the fluctuation strength {6} which take into account the temporal masking effects due to the sound fluctuation.
The dependency of the fluctuation strength (F) on the modulation frequency (f_mod) and masking depth (ΔL) is then the following:

	F ∝ ΔL / ( (f_mod / 4 Hz) + (4 Hz / f_mod) )	Eqn 7-8
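Eqn 7-8 is a proportionality, so a minimal sketch can only evaluate its right-hand side in relative, uncalibrated units:

```python
def fluctuation_strength_rel(delta_L, f_mod):
    """Relative (unscaled) fluctuation strength from Eqn 7-8.

    delta_L: temporal masking depth in dB; f_mod: modulation frequency in Hz.
    The denominator is minimal at f_mod = 4 Hz, where the ear is most
    sensitive to fluctuations."""
    return delta_L / (f_mod / 4.0 + 4.0 / f_mod)
```

The function peaks at a modulation frequency of 4 Hz and falls off symmetrically for modulation frequencies a fixed factor above or below 4 Hz.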


7.7

Pitch
Pitch is a sound attribute that classifies sounds on a scale from low to high. For pure tones, pitch depends largely on the frequency of the tone, but it is also influenced by its level.
In a complex tone, consisting of many spectral components, one or more pitches can be perceived. These pitches also depend to a large extent on the frequencies of the constituent components, but masking effects can also occur, making some pitches more prominent than others.
Pitches, both for pure and complex tones, which can be derived from the spectral content of the signals are called spectral pitches.
It has been observed that in a complex tone, consisting of a fundamental frequency and a number of its harmonics, a pitch corresponding to the fundamental frequency is perceived even when that fundamental frequency is filtered out of the signal. In this case, the perceived pitch no longer relates to a component actually present in the signal, but to the difference between the higher harmonics. This type of pitch is called residue pitch or virtual pitch.
The pitch calculation is implemented according to the method developed by Terhardt (J. Acoust. Soc. Am., Vol 71, pp 679-688, 1982). Both spectral and virtual pitches can be derived, as well as the weight of each calculated pitch. These weights indicate how prominently the pitches are perceived.
If the effect of the tone level on the pitch is taken into account in the calculation, the calculated pitch is called the true pitch. If the influence of level on the tone is neglected, it is called the nominal pitch.


7.8


Articulation index (AI)


The Articulation Index is a parameter developed with a view to assuring speech privacy. Speech privacy can be defined as the lack of intrusion of recognizable speech into an area; background sound or noise then provides a positive quality of privacy.
The measure of interference caused by noise to the masking of speech can be calculated by weighting the noise spectrum (in 1/3 octave bands) according to its importance to the understanding of speech. From this weighted spectrum, the Articulation Index is derived.

A graphical equivalent of the calculation is given in Figure 7-7 (from {17}). The 1/3 octave bands relevant to speech are weighted by a number of dots. When the sound pressure level is plotted on this graph, the AI can be derived as the number of dots above the spectrum divided by the total number. Practical calculations are of course based on tables.

Figure 7-7	Graphical representation of the Articulation Index

This index can then be related to a percentage of syllables understood (see Figure 7-8, from {17}). For complete privacy, an AI of 0.05 is the limit; for semi-privacy, to discuss non-confidential matters, an AI of 0.1 is acceptable {17}.

Figure 7-8	Intelligibility of sentences as a function of articulation index

There are two methods available.

Standard
The calculation is based on the work of Beranek as set out in "The design of speech communication systems", Proceedings of the IRE, Vol 45, 880-884, 1947. The results of this method will lie in the range 0-100%.

Modified
These calculations are based upon the AIM method, which has been described in the work mentioned above, but which opens up the internal floating range of 30 dB to a fixed range of 80 dB between the limits of 20 and 100 dB. The results of this method will lie in the range -107% to almost 160%.


7.9


Speech interference level (SIL, PSIL)


When the comprehension of speech is the goal, background sound or noise has the negative quality of interference. It can cause annoyance, and can even be hazardous in a working environment where instructions need to be correctly understood. Therefore, a noise rating called the `Speech Interference Level' (SIL) was developed.
Beranek originally defined it as the arithmetic average of the sound pressure levels in the bands 600-1200, 1200-2400 and 2400-4800 Hz. Since the definition of the new preferred octave band limits, this definition was changed to the `Preferred Speech Interference Level' or PSIL, defined as the average sound pressure level in the 500, 1000 and 2000 Hz octave bands {5,17}.
In 1977, the Speech Interference Level was standardized as ANSI S3.14-1977 (R-1986) {33}, which also included the 4 kHz octave band. This is in accordance with an ISO suggestion, described in ISO Technical Report TR 3352-1974. On average, the ANSI-SIL is about 1 dB higher than the original (Beranek) value and about 2.5 dB lower than the PSIL {17}.
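The two averaging definitions above translate directly into code (octave-band levels in dB are assumed to be measured beforehand):

```python
def psil(l500, l1000, l2000):
    """Preferred Speech Interference Level: arithmetic average of the
    sound pressure levels (dB) in the 500, 1000 and 2000 Hz octave bands."""
    return (l500 + l1000 + l2000) / 3.0

def ansi_sil(l500, l1000, l2000, l4000):
    """ANSI S3.14 SIL, which also includes the 4 kHz octave band."""
    return (l500 + l1000 + l2000 + l4000) / 4.0
```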
The application of the SIL to the actual understanding of speech is presented in several graphs and tables {5,17}. These show the relationship between SIL and the conditions under which speech can be understood. As an example, Figure 7-9 shows the relationship between the ease of face-to-face conversation, the ambient noise level in PSIL, and the separation distance in meters {5}.

Figure 7-9	Communication limits in the presence of background noise (after Webster)

7.10

Impulsiveness
This metric is used to quantify the impulsive nature of a signal. It is used, for instance, in the quantification of diesel engine noise.
The algorithm for calculating impulsiveness is based on the signal envelope, and results in a number of output values: the mean impulse peak level, mean impulse rise rate and mean impulse duration. Each of these parameters is described in the figure below. In addition, the mean impulse rate (occurrence) is determined.
Figure: impulse parameters defined on the signal envelope — peak level, rise rate, rise time, fall time, center position, and the threshold (rms level plus threshold offset)

A certain threshold is used to determine the occurrence of an impulsive event. That threshold is the sum of the overall segment RMS value (in the global computation) or the RMS of the frame (in the case of a tracked calculation), and a user-defined threshold offset.
The start of the impulse is defined by the minimum which occurs before the crossing of the threshold. The rise time is the time between the impulse start and the moment at which the impulse peak level is reached. The peak level is expressed in dB, and is the difference between the impulse peak and the threshold level. The end of the impulse is defined by the first minimum which occurs after the envelope falls back below the threshold level. The duration of the impulse is the sum of the rise time and fall time.
The rise rate is the maximum rise rate occurring between the impulse start and the impulse peak.
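A simplified sketch of this envelope-threshold logic (the envelope in dB and its RMS level are assumed to be computed beforehand; the exact LMS implementation details, such as the rise-rate estimate, are not reproduced):

```python
def detect_impulses(env_db, rms_db, offset_db, dt):
    """Locate impulsive events on a signal envelope (dB, sampled every dt s).

    An impulse occurs where the envelope crosses rms_db + offset_db.
    Returns one (peak_level_dB, rise_time, fall_time) tuple per impulse;
    the peak level is expressed relative to the threshold."""
    thr = rms_db + offset_db
    impulses, i, n = [], 0, len(env_db)
    while i < n:
        if env_db[i] > thr:
            # impulse start: local minimum before the threshold crossing
            start = i
            while start > 0 and env_db[start - 1] < env_db[start]:
                start -= 1
            # walk to the end of the above-threshold region
            j = i
            while j < n and env_db[j] > thr:
                j += 1
            peak = max(range(start, j), key=lambda k: env_db[k])
            # impulse end: first local minimum after falling below threshold
            end = j
            while end + 1 < n and env_db[end + 1] < env_db[end]:
                end += 1
            impulses.append((env_db[peak] - thr,   # peak level (dB re threshold)
                             (peak - start) * dt,  # rise time
                             (end - peak) * dt))   # fall time
            i = end + 1
        else:
            i += 1
    return impulses
```

The mean impulse peak level, duration (rise + fall time) and rate then follow by averaging and counting over the returned list.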


Chapter 8

Acoustic holography

This chapter describes the background to acoustic holography.


8.1


Introduction
Acoustic holography allows you to accurately localize noise sources. It therefore helps in both the reduction of unwanted vibro-acoustic noise and the optimization of noise levels. It:

-	estimates the acoustic power and the spectral content emitted by the object under examination;

-	maps sound pressure, velocity and intensity on the measurement plane and on all parallel planes. The mapping of these acoustical quantities outside the measurement plane is done through acoustical holography (near field - far field);

-	estimates the acoustic level of the principal sources, including contribution analysis.

This document describes the principles of taking acoustic measurements and the subsequent analysis of acoustic holography data, for both stationary and transient measurements.

Basic principles
In performing acoustic holography, you need to measure cross spectra between a set of reference transducers and the hologram microphones. From these measurements you can derive sound intensity, particle velocity and sound power values.
A basic assumption is that you are operating in free field conditions and that the energy flow is coming directly from the source. Measurements need to be taken close to the source.
The technique provides you with an accurate 3D characterization of the sound field and the source, with a higher spatial resolution than is possible with conventional intensity measurements.


8.2

Acoustic holography concepts


The principle of acoustic holography is to decompose the measured pressure field into plane waves, by using a spatial Fourier transform. With the frequency being fixed, we can calculate how each of these plane waves propagates, and by adding them we can find the pressure field on any plane which is parallel to the measurement plane.
Consider an acoustic wave. Measuring the pressure on a plane means cutting the wavefronts by the measurement plane.
The goal is to determine the whole acoustic wavefront from the known pressure on the measurement plane. Each microphone in the array measures the complex pressure (amplitude and phase).

Temporal and spatial frequency

In considering how to do this, we will compare the time and the spatial domain.
Time domain
When considering measurements in the time domain, the position relative to the sound source is fixed and we obtain a measure of the pressure variation as a function of time. A pressure signal with period T has temporal frequency f = 1/T and corresponding acoustic wavelength λ = c/f.

The transformation from the time to the frequency domain is achieved using the Fourier transform:

	F(ω) = ∫ f(t) e^(-jωt) dt	Eqn 8-1

with the integral taken over all time.



Spatial domain
If we now consider measurements where time is fixed and pressure varies as a function of distance, we can obtain a measure of the energy flow.
The spatial frequency of this function, or wavenumber (k0), is defined as:

	k0 = 2πf / c = ω / c = 2π / λ

where c is the speed of sound, f is the temporal frequency

and λ the acoustic wavelength.
If we fix the temporal frequency, this means that the acoustic wavelength is fixed too.
The complex pressure as a function of space is called the pressure image at the specified frequency.
Conversion from the spatial domain is also done using a Fourier transform. In acoustic holography, pressure is measured in two dimensions (x and y, for example), so a 2-dimensional transformation is performed.

	S(kx, ky) = ∫∫ P_measured(x, y) e^(-j(kx·x + ky·y)) dx dy	Eqn 8-2

where S(kx, ky) is the spatial transform of the measured pressure field to the wavenumber (kx, ky) domain, resulting in the 2-D hologram pressure field.
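In practice, Eqn 8-2 is evaluated on the sampled microphone grid with a 2-D FFT; a naive (slow, but explicit) discrete version of the transform is sketched below:

```python
import cmath

def spatial_dft2(P, dx, dy):
    """Naive 2-D spatial DFT of a sampled pressure image (Eqn 8-2).

    P[m][n] holds the complex pressure at grid point (m*dx, n*dy);
    the result approximates S(kx, ky) on the discrete wavenumber grid."""
    M, N = len(P), len(P[0])
    return [[sum(P[m][n] * cmath.exp(-2j * cmath.pi * (km * m / M + kn * n / N))
                 for m in range(M) for n in range(N)) * dx * dy
             for kn in range(N)]
            for km in range(M)]
```

A production implementation would use an FFT (O(MN log MN) instead of O(M²N²)), but the result is the same wavenumber-domain pressure image.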


A measured pressure (sound) wave with a particular temporal frequency can propagate in a number of directions, so the wavenumber vector (k) will have a number of components. The appearance of these vectors depends on the plane on which you are looking at them. The aim is to find the components of these vectors in the 2 dimensions that define the plane, and to do this, projections of the vectors onto the plane are made.

Summation of plane waves


The spatial Fourier transform implies that a measured pressure field can be
considered as a sum of sinusoidal functions.

Each of these sinusoidal functions can be understood as the result of cutting the
wavefronts of a plane wave by the measurement plane.


There is a coincidence between the nodes of the sinusoidal function (the spatial periodicity on the measurement plane) and the wavefronts (separated by the wavelength λ = c/f). In effect, decomposing the pressure field into a sum of sinusoidal functions means decomposing the real acoustic wave into a sum of plane waves.
Whatever the angle of incidence, the spatial periodicity must be greater than the wavelength λ.

Propagating and evanescent waves


There are two kinds of plane waves:

propagating waves, whose level remains the same as they propagate but which undergo a phase shift;

evanescent waves, whose level decreases as they propagate.

Propagating waves represent the sound field that is propagated away from the near field towards the far field. Evanescent waves describe the complex sound field in the near field of the source.
To understand why we must take evanescent plane waves into account, let us consider our decomposition of the pressure field into sinusoidal functions. If the spatial periodicity of a sinusoidal function is shorter than the wavelength, it cannot be the result of cutting a propagating plane wave by the measurement plane.


Whatever the direction of the propagating plane wave may be, there is no possible coincidence between the nodes of the sinusoidal function and the wavefronts. Therefore, this sinusoidal function must be understood as the intersection between an evanescent wave (which can have a smaller spatial periodicity than propagating waves) and the measurement plane.
A mathematical interpretation of the evanescent waves is based on the value of kz, which is the component perpendicular to the measurement directions in the wavenumber domain.
kz can be determined from the wavenumber k0 and the known values of kx and ky from the transformation:

	k0² = kx² + ky² + kz² = (ω/c)²

	kz = √( (ω/c)² − (kx² + ky²) )	Eqn 8-3

kz is real when kx² + ky² ≤ (ω/c)² (the spatial periodicity is greater than the wavelength). This means that these waves lie within the circle of radius ω/c in the wavenumber domain. kz is imaginary outside this region.
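Eqn 8-3 translates directly into code; using a complex square root returns the imaginary kz of evanescent components automatically:

```python
import cmath

def axial_wavenumber(omega, c, kx, ky):
    """kz from Eqn 8-3: real for propagating plane waves
    (kx^2 + ky^2 <= (omega/c)^2), imaginary for evanescent ones."""
    return cmath.sqrt((omega / c) ** 2 - (kx ** 2 + ky ** 2))
```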


Figure: in the (kx, ky) plane, waves with kx² + ky² ≤ k0² (where k0 = ω/c) have real kz and are propagating; waves outside this circle have imaginary kz and are evanescent.

When kz is imaginary, the propagation factor becomes a damped exponential function (e^(jkz·z)), meaning that a propagated wave undergoes an amplitude modification while the phase is not changed.

(Back) propagating to other planes


Pressure levels at other planes can be found using Rayleigh's integral equation with Dirichlet's Green function:

	P(r) = ∫ P(r') Gd(r − r') dx dy	Eqn 8-4

where the Green function Gd can be thought of as the transformation function that transforms the sound pressure field from one plane to another.
We can use wavenumber domain properties (k) to predict the pressure at a different spatial position (z).
The practical computation of Rayleigh's equation is

for z > z':

	S(kx, ky, z) = S(kx, ky, z') gd(kx, ky, z − z')	Eqn 8-5

for 0 < z < z':

	S(kx, ky, z) = S(kx, ky, z') / gd(kx, ky, z' − z)	Eqn 8-6

where z' is the measurement plane and z is the position of the required plane. The Green function is given by

	gd = e^(jkz·Δz)

with Δz the distance between the two planes, and kz can be found from equation 8-3.
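A sketch of Eqn 8-5 for a single wavenumber component: with a real kz only the phase changes, while with an imaginary kz (an evanescent wave) the amplitude decays as the component is propagated away from the source:

```python
import cmath

def propagate_component(S, kz, dz):
    """Propagate one wavenumber component over a distance dz (Eqn 8-5),
    i.e. multiply by the Green function g_d = exp(j * kz * dz)."""
    return S * cmath.exp(1j * kz * dz)
```

Back-propagation towards the source (Eqn 8-6) divides by the same Green function, which is why evanescent components are then amplified and must be regularized (see the Wiener filter below).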


The final step is to perform an inverse transformation back to the temporal domain.

The Wiener filter and the Ad Hoc window

As mentioned above, evanescent waves undergo a change in amplitude when propagating. Propagating towards the source implies an amplification of the signal that is a function of kz. Evanescent waves that lie far away from the circle (of radius k0) have a large kz, so their amplitude is amplified significantly when propagating to the source. The contribution of these evanescent waves results in an increase of spatial resolution. Note that the inclusion of evanescent waves is only appropriate when propagating towards the source.
Propagating away from the source, the evanescent waves decrease so rapidly in amplitude that their contribution to the spatial resolution becomes negligible.
However, the further away a wave is located from the circle, the less accurate the amplitude estimate becomes, so that at a certain point noise is propagated, and at that point the propagated image starts to blur.
When propagating towards the source, a Wiener filter can be used to include a certain number of evanescent waves to improve the resolution. Taking a higher number of waves into account may result in the amplification becoming unstable. This depends on a parameter of the Wiener filter known as the Signal to Noise Ratio (SNR). When the SNR value is greater than 15 dB, the amplification will become unstable as the number of evanescent waves included increases. Using a low SNR value (5 dB, for example) means that the evanescent waves are taken into account but are so attenuated that the improvement in resolution is negligible. The default value of 15 dB provides the best compromise in terms of resolution and amplification.
When the Wiener filter is used, the pressure image needs to be multiplied by a two-dimensional window. As is the case with a single FFT, the observed pressure must be `periodic' within the observed hologram. If this is not the case, truncation errors occur, as with a single FFT. These truncation errors manifest themselves as ghost sources at the borders of the observed area.
Two windows are used


The rectangular window
This does not modify the pressure image. In the case of a rectangular window, only propagating waves are included in the calculations, resulting in a resolution equivalent to an intensity measurement.
The so-called Ad Hoc window
For a time signal, the FFT algorithm takes the time signal and duplicates it from minus to plus infinity. If the amplitude of the measured time signal differs between the start and the end of the window, a discontinuity occurs during this duplication, introducing an error in the FFT algorithm. This can be corrected using a Hanning window. Holography uses a double FFT, so the Ad Hoc window is used, which is basically a two-dimensional Hanning window, thus removing discontinuities in both the x and y directions.
The one-dimensional Ad Hoc window W, with taper parameter α, would be:

When (N−1)(1−α)/2 ≤ I ≤ (N−1)(1+α)/2:

	W[I] = 1

When I < (N−1)(1−α)/2:

	W[I] = 0.5 + 0.5 cos( 2π (I − (N−1)(1−α)/2) / (αN) )

When I > (N−1)(1+α)/2:

	W[I] = 0.5 + 0.5 cos( 2π (I − (N−1)(1+α)/2) / (αN) )
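A sketch of this one-dimensional window, reading α as the taper parameter in the formulas above (the exact parameterization used by the LMS implementation is not documented here); the result is a Tukey-like taper, flat in the center and cosine-shaped towards both edges:

```python
import math

def adhoc_window_1d(N, alpha):
    """1-D Ad Hoc window: unity over the central region, cosine-tapered
    towards both edges, following the three cases above."""
    lo = (N - 1) * (1.0 - alpha) / 2.0
    hi = (N - 1) * (1.0 + alpha) / 2.0
    w = []
    for i in range(N):
        if i < lo:
            w.append(0.5 + 0.5 * math.cos(2 * math.pi * (i - lo) / (alpha * N)))
        elif i > hi:
            w.append(0.5 + 0.5 * math.cos(2 * math.pi * (i - hi) / (alpha * N)))
        else:
            w.append(1.0)
    return w
```

The two-dimensional Ad Hoc window is then the outer product of two such one-dimensional windows, one along x and one along y.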


Derivation of other acoustic quantities


If we know how the plane waves propagate, we can calculate the pressure field in any parallel plane by adding the contributions of all plane waves. This will be correct only if all acoustic sources are on the same side of both planes; a calculation plane on the other side of the source gives an incorrect result.

Knowing the pressure field on the parallel plane, it is possible to calculate the particle velocity and eventually the intensity on this plane.
The particle velocity (V) will be known if the pressure gradient can be determined, which is the case with acoustic holography since the pressure can be measured at r and (r + Δr):

	∇P(r) = f( P(r), P(r + Δr) )

	V = (j / ρck) ∇P(r)	Eqn 8-7

where ρ is the density of the medium. Once the pressure and the velocity are known, the intensity is the product of the two:

	I = P V	Eqn 8-8

Theory and Background

Part III
Time data processing

Chapter 9	Statistical functions . . . . . . . . . . . . . . .	129
Chapter 10	Time frequency analysis . . . . . . . . . . . . . .	139
Chapter 11	Resampling . . . . . . . . . . . . . . . . . . . .	151
Chapter 12	Digital filtering . . . . . . . . . . . . . . . . .	163
Chapter 13	Harmonic tracking . . . . . . . . . . . . . . . . .	193
Chapter 14	Counting and histogramming . . . . . . . . . . . .	203

Chapter 9

Statistical functions

Descriptive statistics provide information that characterizes sets of data. This chapter gives a very brief summary of a variety of statistical functions.


Minimum, maximum, range and extremum


These functions are shown in Figure 9-1 and described below.

Figure 9-1	Minimum, maximum, range and extremum of a function (shown for both the real and the absolute value of a signal)

Minimum
This is defined as the lowest value contained within the specified range of values.
Maximum
This is defined as the highest value contained within the specified range of values.
Range
The range is the difference between the minimum and maximum values.
Extremum
The extremum is the highest absolute value contained within the specified range. It is equal to the maximum when the absolute value of the maximum is greater than the absolute value of the minimum, and is equal to the minimum value otherwise.

Sum
This is the summation of all the (N) values within the frame:

	Sum = Σ_{j=0…N−1} x_j	Eqn. 9-1

Integration
This is the area under the curve of values, found by summing the averages of successive pairs of values multiplied by the time increment (the trapezium rule):

	area = Σ_{j=0…N−2} ((x_j + x_{j+1}) / 2) Δt	Eqn. 9-2

Root mean square

The root mean square, also called the effective value, is given by

	RMS = √( (1/N) Σ_{j=0…N−1} x_j² )	Eqn 9-3

where N is the number of samples. Its energy content is equivalent to that of the original time series.

Crest factor
The crest factor is given by

	|max − min| / (2 RMS)	Eqn 9-4

The crest factor provides a measure of the "spikiness" of the data. A sine signal has a crest factor of about 1.4 (√2). A random signal has a crest factor of about 3 or 4. A short spike will yield a high crest factor.
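Eqns 9-3 and 9-4 in code form:

```python
import math

def rms(x):
    """Root mean square of a sample list (Eqn 9-3)."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def crest_factor(x):
    """Crest factor (Eqn 9-4): |max - min| / (2 * RMS)."""
    return abs(max(x) - min(x)) / (2.0 * rms(x))
```

For a full-period sine this returns √2 ≈ 1.4, as stated above.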

Mean
The mean of a set of data values (x) estimates the central value contained within the set. It is defined as

	x̄ = (1/N) Σ_{j=0…N−1} x_j	Eqn 9-5


where N is the number of samples.
The mean is not the only parameter which characterizes the central value of a distribution. An alternative is the median.
The mean and the median both provide information on the average or central value of the data. The choice of the most suitable one to use depends on the skewness, described on page 134.

Median
The median of a probability function p(x) is the value for which larger and smaller values of x are equally probable:

	∫_{−∞}^{x_med} p(x) dx = ∫_{x_med}^{+∞} p(x) dx = 1/2	Eqn 9-6

For discrete data, the median is defined as the middle value of the data samples when they are arranged in increasing (or decreasing) order.
When N is odd, the median is

	x_med = x_{(N+1)/2}	Eqn 9-7

Thus half the values are numerically greater than the median and half are smaller.
When N is even, the median is estimated as the mean of the two unique central values:

	x_med = ( x_{N/2} + x_{N/2+1} ) / 2	Eqn 9-8

The mean and median both provide information on the average or central value of a set of data. Which is the most suitable one to use in a particular circumstance depends on the skewness of the data. Skewness is illustrated in Figure 9-2.


Figure 9-2	Symmetrical and skew data distributions: (a) symmetrical data, no skew, mean = median; (b) positive skewness, mean > median; (c) negative skewness, mean < median.

Skewness refers to the shape of the distribution about the central value. Perfectly symmetrical data has no skew. Data distributions where there is a small number of extremely high values are said to exhibit positive skew. Those with a few extremely low values show negative skew. The mean is more influenced by such extreme values than the median, but can be used with confidence if the skewness lies within the range -1 to 1. For the calculation of skewness, see Equation 9-13 below.

Percentiles
The median can also be expressed as the 50th percentile, since it represents the value where 50% of all the values in the data set lie below it and 50% lie above it. It is also possible to compute the 10th, 25th, 75th and 90th percentiles.
The nth percentile of a probability function p(x) is the value below which n% of the values in the set lie. So 10% of the values are smaller than the 10th percentile and 90% are larger.
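Eqns 9-7 and 9-8, together with a simple rank-based percentile, in code form:

```python
def median(data):
    """Median of a data set: the middle sorted value (Eqn 9-7), or the
    mean of the two central values when N is even (Eqn 9-8)."""
    s = sorted(data)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0

def percentile(data, p):
    """Simple rank-based nth percentile (no interpolation): the sorted
    value below which roughly p% of the samples lie."""
    s = sorted(data)
    k = min(len(s) - 1, int(p / 100.0 * len(s)))
    return s[k]
```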

Variance and standard deviation


Further information on the spread of values in a distribution can be obtained by determining how much the data values vary from the mean value. The variance is given by

	var(x_0, …, x_{N−1}) = (1/(N−1)) Σ_{j=0…N−1} (x_j − x̄)²	Eqn 9-9

and as such can also be regarded as the second order moment of a distribution.
The standard deviation is defined as the square root of the variance:


	σ(x_0, …, x_{N−1}) = √( var(x_0, …, x_{N−1}) )	Eqn 9-10

The standard deviation is in the same units as the original measurement.

Mean absolute deviation


It is not uncommon, in real life, to be dealing with a distribution whose second order moment does not exist (i.e. is infinite). In this case, the variance or standard deviation is useless as a measure of the data width around its central value. This can occur even when the width of the peak looks perfectly finite to the eye.
A more robust estimator of the width is the average deviation or mean absolute deviation, defined by:

	ADev(x_0, …, x_{N−1}) = (1/N) Σ_{j=0…N−1} |x_j − x̄|	Eqn 9-11

Extreme deviation
The extreme deviation is given by

	max( max − mean, mean − min )	Eqn 9-12

The extreme deviation is similar to the crest factor, except that it is referenced to the mean and will therefore follow data which drifts away from zero.

Skewness
Skewness was illustrated in Figure 9-2. It characterizes the degree of asymmetry of the distribution around its central value. It is defined as

	skew(x_0, …, x_{N−1}) = (1/N) Σ_{j=0…N−1} ( (x_j − x̄) / σ )³	Eqn 9-13

The skewness is a unitless parameter known as the third order moment of a distribution.


Even if the estimated skewness is other than zero, it does not necessarily mean that the data is in fact skewed. You can have confidence in the skewness only when the estimated skewness is larger than the standard deviation on this estimated parameter (Eqn 9-13). For the idealized case of a normal (Gaussian) distribution, the standard deviation on the estimated skewness is approximately √(6/N). In real life it is good practice to place confidence in skewness only when the estimated value is several times as large as this.

Kurtosis
One further characteristic of a distribution can be obtained from the kurtosis of a function. This is also a unitless parameter, one that measures the relative sharpness or flatness of a distribution relative to a normal or Gaussian one. This is illustrated in Figure 9-3.

Figure 9-3	Distributions with positive and negative kurtosis compared to a normal distribution.

The kurtosis is defined as

	kurt(x_0, …, x_{N−1}) = [ (1/N) Σ_{j=0…N−1} ( (x_j − x̄) / σ )⁴ ] − 3	Eqn 9-14

The term −3 is necessary so that a Gaussian distribution has a kurtosis of zero.
The kurtosis is the fourth order moment of a distribution and is a unitless parameter. A positive value indicates that the distribution has longer tails than the Gaussian distribution, while a negative value indicates that the distribution has shorter tails.
The standard deviation of (Eqn 9-14) is √(24/N) for the idealized case of a normal (Gaussian) distribution. However, the kurtosis depends on such a high moment that there are many real-life distributions for which the standard deviation of equation 9-14 is effectively infinite.
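Eqns 9-13 and 9-14 in code form, using the standard deviation from Eqns 9-9/9-10:

```python
import math

def skew_and_kurtosis(x):
    """Skewness (Eqn 9-13) and kurtosis (Eqn 9-14) of a sample list."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))  # Eqns 9-9/9-10
    skew = sum(((v - mean) / sd) ** 3 for v in x) / n
    kurt = sum(((v - mean) / sd) ** 4 for v in x) / n - 3.0
    return skew, kurt
```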


Note!

Higher order moments (skewness and kurtosis) are often less robust than lower order moments, which are based on linear sums. (It is possible that the calculation of the skewness or kurtosis generates an overflow.) They must be used with caution.

Markov regression
This function provides you with a measure of the likelihood of one data value within a set being similar to another.
It is based on the circular autocorrelation R(.) of a set of data. This calculates the correlation between one particular value and a value displaced by a certain lag; the circular correlation takes the last shifted value and wraps it around to the start.
The circular correlation for a lag of 1 data sample is given by

	R(1) = Σ_{j=0…N−2} x_j x_{j+1} + x_0 x_{N−1}	Eqn 9-15

The circular correlation for a lag of 0 is given by

	R(0) = Σ_{j=0…N−1} x_j²	Eqn 9-16

The Markov regression coefficient is the ratio of these two quantities:

	Markov regression coefficient = R(1) / R(0)	Eqn 9-17


This coefficient can therefore take values between 0 (very low correlation) and 1 (high similarity). It approaches 1 for a narrowband or filtered signal and 0 for broadband signals. It therefore provides an indication of how much a broadband signal has been filtered.
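Eqns 9-15 to 9-17 in code form:

```python
def markov_regression(x):
    """Markov regression coefficient: circular lag-1 autocorrelation R(1)
    normalized by the zero-lag value R(0) (Eqns 9-15 to 9-17)."""
    n = len(x)
    r1 = sum(x[j] * x[(j + 1) % n] for j in range(n))  # wraps last to first
    r0 = sum(v * v for v in x)
    return r1 / r0
```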


Chapter 10

Time frequency analysis

The objective of a time-frequency analysis is to examine the spectral (frequency) contents of a signal when this is varying in time. This chapter provides a very brief account of the background theory related to this type of analysis:
Introduction to the theory
Linear representations
Quadratic representations


10.1

Introduction
Many physical signals are non-stationary. Fourier analysis establishes a one-to-one relationship between the time and the frequency domain, but provides no time localization of a signal's frequency components. Whilst an overall representation of all frequencies that appeared during the observation period is presented, there is no indication as to exactly at what time which frequencies were present.
Time-frequency analysis methods describe a signal jointly in terms of both time and frequency. The aim is to find a distribution that determines the portion of the signal's energy which lies in a particular time and/or frequency range. In addition, these distributions might or might not satisfy some other interesting mathematical properties, such as the "marginal equations".
The instantaneous power of a signal at time t is given by

	|s(t)|²	= energy or intensity per unit time, at time t

The intensity per unit frequency is given by the squared modulus of the Fourier transform S(ω):

	|S(ω)|²	= energy or intensity per unit frequency, at frequency ω

The joint function P(ω, t) should represent the energy per unit time and per unit frequency:

	P(ω, t)	= energy or intensity per unit frequency (at frequency ω) per unit time (at time t)

Ideally, summing this energy distribution over all frequencies should give the
instantaneous power

∫ P(ω, t) dω = |s(t)|²    Eqn 10-1

and summing over all time should give the energy density spectrum.

∫ P(ω, t) dt = |S(ω)|²    Eqn 10-2


Equations 10-1 and 10-2 are known as the `marginal' equations. In addition
the total energy E

E = ∬ P(ω, t) dω dt    Eqn 10-3

should be equal to the total energy in the signal while satisfying the marginals.
There are a number of distributions which satisfy equations 10-1 and 10-2 but
which demonstrate very dissimilar behavior.
In general there are two main classes of time-frequency analysis methods:

- linear techniques, discussed in section 10.2

- quadratic techniques, discussed in section 10.3.


10.2

Linear time-frequency representations


These are representations that satisfy the linearity principle. If x1 and x2 are
signals, then T(t, f) is a linear time-frequency representation if

x1(t) → Tx1(t, f)
x2(t) → Tx2(t, f)
x(t) = c1·x1(t) + c2·x2(t) → Tx(t, f) = c1·Tx1(t, f) + c2·Tx2(t, f)

Two linear techniques are discussed:

The Short Time Fourier Transform

Wavelet analysis

The Short Time Fourier Transform (STFT)


A standard method used to investigate time-varying signals is the so-called
Short Time Fourier Transform (STFT). This involves selecting a relatively
narrow observation period, applying a time window and then computing the
frequencies in that range. The observation window then slides along the entire
time signal to obtain a series of spectra, shown as vertical bands in Figure 10-1.
Figure 10-1  The Short Time Fourier Transform: a time window g(t) in a sliding
frame moves along the time axis, each position yielding one spectrum (a vertical
band) in the time/frequency plane.

For a time signal s(t) multiplied by a window function g(t), the Short Time
Fourier Transform located at time τ is given by

STFT(ω, τ) = (1/2π) ∫ e^(−jωt) s(t) g*(t − τ) dt    Eqn 10-4



This is a useful technique if it is possible to select the observation period so that
the signal can be regarded as being stationary within that period. There is a
whole range of signals, however, where the frequency content changes so rapidly
that the time period required would be unacceptably small.
This technique suffers from a further disadvantage in that the same time window
is used throughout the analysis, and it is this that determines the frequency
resolution (Δf = 1/T). This fixed relationship means that there has to be a
trade-off between frequency resolution and time resolution. So, if you have a
signal composed of short bursts interspersed with long quasi-stationary periods,
then each type of signal component can be analyzed with either good time
resolution or good frequency resolution but not both.
An alternative view of the STFT is gained if it is expressed in terms of the
Fourier transforms of the signal, S(ω), and the window function, G(ω). Equation
10-4 then becomes

STFT(ω, τ) = (1/2π) ∫ e^(jω′τ) S(ω′) G*(ω′ − ω) dω′    Eqn 10-5

By analogy with the previous discussion this reflects the behavior around the
frequency ω ``for all times'', as illustrated by the horizontal bands in Figure
10-1. These bands can be regarded as a bank of bandpass filters which have
impulse responses corresponding to the window function.
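The sliding-window analysis of Eqn 10-4 can be sketched with SciPy's `stft` routine (one possible implementation, not the LMS one). `nperseg` sets the window length and therefore the frequency resolution Δf = fs/nperseg, at the cost of time resolution:

```python
import numpy as np
from scipy import signal

fs = 1000.0                     # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
# A non-stationary test signal: 50 Hz in the first second, 200 Hz in the second.
x = np.where(t < 1.0, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 200 * t))

# Hann-windowed STFT; each column of Zxx is one "vertical band" of Figure 10-1.
f, tau, Zxx = signal.stft(x, fs=fs, window='hann', nperseg=256)

# The dominant frequency tracked over time:
dominant = f[np.abs(Zxx).argmax(axis=0)]
print(dominant[tau < 0.9])   # bins near 50 Hz
print(dominant[tau > 1.1])   # bins near 200 Hz
```

The frequency change at t = 1 s, invisible in a single Fourier spectrum of the whole record, is localized to within one window length.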

Wavelet analysis
A method that provides an alternative for the analysis of non-stationary
signals, where it becomes difficult to find the right compromise between time and
frequency resolution for the analysis window of the STFT, is Wavelet analysis.
In effect, the Fourier transform decomposes the signal using a set of basis
functions, which in this case are sine waves. The Wavelet transform also
decomposes the signal, but it uses another set of basis functions, called wavelets.
These basis functions are concentrated in time, which results in a higher time
localization of the signal's energy. One prototype basis function is defined, and
a scaling factor is then used to dilate or contract this prototype function to
arrive at the series of basis functions needed for the analysis.


This brings us to the definition of the Continuous Wavelet transform. If h(t) is
the prototype function (basic wavelet) localized at time t0 and frequency ω0,
then the scaled versions (wavelets) are given by

h_a(t) = (1/√|a|) h(t/a)    Eqn 10-6

where a is the scale factor given by ω0/ω.


The Continuous Wavelet Transform CWT is given by

CWT(a, τ) = (1/√|a|) ∫ s(t) h*((t − τ)/a) dt    Eqn 10-7

where τ is the time localization.


A disadvantage of the STFT is that it uses a single analysis window of constant
width. The result is that there is a fixed relationship between the frequency
and time resolutions: improving one can only be achieved at the cost of the
other. Mapping this onto the time/frequency plane results in a fixed grid as
shown in Figure 10-2(a).
The use of the scaling factor to dilate or contract the basic wavelet results in an
analysis window that is narrow at high frequencies and wide at low frequencies.
Figure 10-1 likens the STFT to a series of constant width bandpass filters.
Using this concept again, the wavelet transform can be considered as a bank of
constant relative bandwidth filters, i.e.

Δf / f = c

where c is a constant. This is illustrated in Figure 10-2(b): by allowing
both the frequency and time resolutions to vary, a multi-resolution analysis is
possible.


Figure 10-2  Mapping of the time/frequency plane: (a) STFT (fixed grid),
(b) Wavelet analysis (variable grid), with time and frequency axes.

This is in fact a very natural way to analyse a signal. Low frequencies are
phenomena that change slowly with time, so requiring a low resolution in this
domain. In this situation, a good time resolution can be sacrificed for a high
frequency resolution. High frequency phenomena vary rapidly with time, which
then becomes the important dimension, so under these conditions wavelet analysis
increases the time resolution at the cost of frequency resolution. This type of
analysis is also very closely related to the human hearing process, since the
human ear seems to analyse sounds in terms of octave bands.
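A minimal sketch of Eqn 10-7, assuming a complex Morlet prototype wavelet (our choice; the LMS implementation is not specified here), evaluated by direct convolution with the dilated wavelets. Small scales probe high frequencies with short windows, large scales the reverse:

```python
import numpy as np

def morlet(t, w0=6.0):
    """Complex Morlet basic wavelet h(t) (an assumed prototype)."""
    return np.pi ** -0.25 * np.exp(1j * w0 * t) * np.exp(-t ** 2 / 2)

def cwt(x, scales, dt=1.0):
    """CWT of Eqn 10-7 by correlation with h_a(t) = h(t/a) / sqrt(a)."""
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, a in enumerate(scales):
        # support of the dilated wavelet: wide at large scales (low frequency)
        t = np.arange(-4 * a, 4 * a + dt, dt)
        h_a = morlet(t / a) / np.sqrt(a)
        out[i] = np.convolve(x, np.conj(h_a[::-1]), mode='same') * dt
    return out

# Example: a 25 Hz tone sampled at 500 Hz; scales counted in samples.
fs = 500.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 25 * t)
scales = np.arange(1, 31)
W = cwt(x, scales)
# The scale of maximum response; its centre frequency is w0*fs/(2*pi*a).
best = scales[np.abs(W[:, 250]).argmax()]
print(best, 6.0 * fs / (2 * np.pi * best))
```

With w0 = 6 the responding scale maps back to roughly 25 Hz, illustrating the a = ω0/ω relation of Eqn 10-6.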


10.3

Quadratic time-frequency representations


Whilst linearity is a desirable property, in many cases it is more interesting to
interpret a time-frequency representation as a time-frequency energy distribution,
which is a quadratic signal representation. This type of time-frequency
representation exhibits many desirable mathematical properties, but it is
important to investigate the consequences of the bilinearity principle.

x(t) → Tx(t, f)
y(t) → Ty(t, f)
z(t) = c1·x(t) + c2·y(t) → Tz(t, f) = |c1|²·Tx(t, f) + |c2|²·Ty(t, f) + c1·c2*·Txy(t, f) + c2·c1*·Tyx(t, f)    Eqn 10-8
The first two terms in this result can be seen as "signal terms", and the last two
terms as "interference terms". These interference terms are necessary to
satisfy mathematically desirable properties like the "marginal equations", but they
often make interpretation of the results difficult.
The interference terms can be recognized by their oscillatory nature, and different
so-called "smoothing" techniques can be used to reduce their effect. This,
however, leads to a new trade-off: that of a reduction of interference terms
against time-frequency localization. The spectral smearing effect of the
smoothing windows will disperse the signal's energy in the time-frequency
plane, thereby reducing the time-frequency localization of all signal components.
Two examples of quadratic time-frequency representations are the spectrogram
and the scalogram

spectrogram = |STFT|²
scalogram = |WT|²

which are the energy counterparts of the Short-Time Fourier Transform
(STFT) and the Wavelet Transform (WT) respectively. The interference terms
for these representations only exist where different signal components overlap.
Hence if the signal components are sufficiently far apart in the time-frequency
plane, the interference terms will be essentially zero. While neither of these
representations satisfies the marginal equations, this is not of great concern for
a qualitative energy localization assessment.
For an adequate interpretation of time-frequency analysis results, it is often
good practice to use several techniques (STFT or WT together with a quadratic
method), which makes it possible to distinguish the "signal components" from
the "interference terms".
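The quadratic nature of the spectrogram can be checked directly: squaring the STFT magnitude gives an energy distribution, and scaling the signal by c scales that distribution by |c|². A short sketch (scipy's `stft` is our stand-in implementation):

```python
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Two components well separated in the time-frequency plane.
x = np.where(t < 1.0, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 200 * t))

f, tau, Zxx = signal.stft(x, fs=fs, nperseg=256)
spec = np.abs(Zxx) ** 2          # spectrogram = |STFT|^2

# Bilinear behaviour of Eqn 10-8: doubling the signal quadruples the energy.
_, _, Zxx2 = signal.stft(2 * x, fs=fs, nperseg=256)
print(np.allclose(np.abs(Zxx2) ** 2, 4 * spec))
```

Because the 50 Hz and 200 Hz components never overlap in the time-frequency plane, this spectrogram shows no visible interference terms between the two ridges.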


The Wigner-Ville distribution

The Wigner-Ville distribution is

W(ω, t) = (1/2π) ∫ s*(t − τ/2) s(t + τ/2) e^(−jωτ) dτ    Eqn 10-9

where τ is the local time. In terms of the spectrum it is

W(ω, t) = (1/2π) ∫ S*(ω + θ/2) S(ω − θ/2) e^(−jθt) dθ    Eqn 10-10

where θ is the local frequency.


This distribution satisfies the marginals and is real. In addition, time and
frequency shifts in the signal cause corresponding shifts in the distribution.
Many of its characteristics can be understood by considering the fact that in
equation 10-9, at any point t, a section of data prior to this point is multiplied
with a section following this point and the results summed. This can be
visualized by imagining that the segment to the left is folded over on top of the
segment to the right. Where there is an overlap there will be a product and
therefore a value for the distribution.
For a signal only starting at time (tstart), all points to its left have value zero,
resulting in a distribution that is also zero there. The same will apply at the
end point (tend).

Thus one characteristic of the Wigner-Ville distribution is that for a signal of
finite duration the distribution is zero before the start and beyond the end. The
same can be said of the frequency version, which means that for a band-limited
signal the Wigner-Ville distribution will be zero outside of that range.
The same manoeuvre can be used to see why the reverse does not hold if the
signal level drops to zero at some point.

At a point (t0) where the signal itself is zero, multiplying the section to the left
by the section to the right results in a non-zero value. In general it can be said
that the Wigner distribution is not zero where the signal is. This unwelcome
characteristic makes it difficult to interpret, especially when analyzing signals
with many components.
The same mechanism accounts for noisiness that can be seen in the distribution
in places where it is not present in the signal. Consider two points t1 and t2
outside a burst of noise.

When evaluating the distribution at point (t1) the overlapping sections will not
include the noise, but at point (t2), where there is also no noise in the signal,
the noise will already influence the distribution. Noise will therefore be spread
over a wider period in the distribution than it occupied in the actual signal.
The same reasoning can be used to explain the appearance of the interference
terms along the frequency axis. This is especially so when a signal contains
multiple frequency components at the same moment in time, which will result
in interference terms at a frequency midway between the frequencies of the
different components. As mentioned above, these terms can easily be recognized
by their oscillatory nature and smoothing techniques can reduce their effect.
Some possible smoothing techniques are discussed below.
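A direct discrete evaluation of Eqn 10-9 can be sketched as follows. This is a bare-bones illustration (practical implementations work on the analytic signal and apply smoothing): for each time index the lag product s(t + τ/2)·s*(t − τ/2) is formed and Fourier transformed over the lag.

```python
import numpy as np

def wigner_ville(x):
    """Direct discrete Wigner-Ville distribution (illustrative sketch).
    Returns an N x N array W[frequency bin, time index]."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.empty((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)            # largest lag with full overlap
        tau = np.arange(-tau_max, tau_max + 1)
        r = np.zeros(N, dtype=complex)
        r[tau % N] = x[n + tau] * np.conj(x[n - tau])  # s(t+tau/2) s*(t-tau/2)
        W[:, n] = np.fft.fft(r).real           # FFT over the lag variable
    return W

# A single complex tone concentrates on one frequency line; the lag kernel
# doubles the oscillation rate, so the peak sits at bin 2*f0*N.
N = 64
x = np.exp(2j * np.pi * 6 * np.arange(N) / N)   # f0 = 6/N
W = wigner_ville(x)
print(np.argmax(W[:, N // 2]))   # -> 12

# Finite-duration property: the distribution is zero before the signal starts.
x2 = np.concatenate([np.zeros(16), np.ones(48)])
print(np.allclose(wigner_ville(x2)[:, :16], 0))   # -> True
```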

Generalization
A generalization of the Wigner-Ville distribution leads to a whole class of
time-frequency representations whose main desirable mathematical property is
invariance under operations like time shift, frequency shift or time/frequency
scaling. This means that a shift in time or frequency of the signal leads to an
equivalent shift of the time-frequency representation of that signal, or that
scaling the signal leads to a corresponding scaling of the time-frequency
representation.


This more general class of time/frequency representations is defined as follows

Tx(t, f) = ∬ φT(t − t′, f − f′) Wx(t′, f′) dt′ df′    Eqn 10-11

where Wx(t′, f′) is the Wigner-Ville distribution of the signal x(t), and where φT
is the "kernel function". It is the choice of this kernel function that determines
the basic properties of each specific time-frequency representation derived
from this general definition. The kernel function can also be seen as a smoothing
function applied to the Wigner-Ville distribution.
Typical examples of techniques that can be defined in this framework are

Spectrogram
where the kernel = Wigner distribution of the analysis window.

Smoothed Pseudo-Wigner Distribution (SPWD)
where the kernel = separable smoothing function with independent smoothing
spread in the time and frequency domains.

Pseudo-Wigner Distribution (PWD)
the same as the SPWD, but with no smoothing along the frequency axis.
This can also be considered as a "short-time Wigner distribution".

Choi-Williams Distribution (CWD)
where the kernel = exponential smoothing function.

The class of shift-invariant representations (time and frequency shifting) is
also called Cohen's class; examples of representations belonging to that
class are the spectrogram, the Wigner-Ville distribution, the PWD and the SPWD.
The class of time shift/time scale invariant representations is also known as the
Affine class; examples of representations belonging to this class are the
scalogram, the Wigner-Ville distribution and the CWD.


10.4

References

Books
Time-frequency analysis:
Leon Cohen - Prentice Hall - 1995 - 299 pp. - ISBN 0-13-594532-1

Papers
Linear and Quadratic Time-frequency Signal Representations:
F. Hlawatsch, G.F. Boudreaux-Bartels (IEEE SP Magazine, April 1992)
Time-frequency distributions - A review:
Leon Cohen (Proc. of IEEE, July 1989)
Wavelets and signal processing:
O. Rioul, M. Vetterli (IEEE SP Magazine, October 1991)
Time-frequency analysis applied to door slam sound quality problems:
H. Van der Auweraer, K. Wyckaert, W. Hendrickx (Journal de physique IV, May 1994)


Chapter 11

Resampling

This chapter is concerned with both fixed resampling and adaptive
(or synchronous) resampling. It discusses the general principles
involved in both of these processes and contains a reading list for
further information.
Fixed resampling
Adaptive resampling



11.1

Fixed resampling
The process of converting a signal that has been sampled at a particular rate to
one that is sampled at a different rate is known as resampling.
Resampling may be necessary for a number of reasons. A DAT recorder, for
example, samples a signal at a rate of 48000 samples per second. If the signal has
a bandwidth of only 200 Hz then 500 samples a second would be adequate, and
as a consequence far more data exists than is needed to describe the signal. In
this situation the sample rate can be decreased, a process which is referred to
as decimation or downsampling.
On the other hand, while a critically sampled signal may contain all the
information needed to adequately describe the frequency content of the signal, it
may not look good, or be easy to interpret, in the time domain.

Increasing the sampling rate will generate a signal which has identical spectral
content but a much better defined time waveform. When the resampling
involves an increase in the sampling rate it is referred to as interpolation or
upsampling.

A further instance where a specific sampling rate is required is when a signal
must be replayed through a D/A convertor, which may well support only one
very specific sampling rate.
This section considers the theoretical background to the process of digital
resampling and the factors that must be taken into account to realize resampling
and achieve the required accuracy of results. It should be noted however that
the contents of this document are by no means a comprehensive treatment of
this subject. For a more thorough understanding you should refer to the reading
list given at the end of the section and in particular to references [3] and [4].


11.1.1

Integer downsampling
Integer downsampling by a factor n effectively means retaining every nth point
of the source data. However it is necessary to take measures to avoid aliasing
problems when doing this. The example below shows the effects of downsampling
by a factor of 13, when the original number of samples per period was 16.
Sampling a signal at a rate lower than 2 points per period of the highest
frequency in the signal will give rise to erroneous results.

To avoid aliasing due to the resampling process, it is necessary to ensure that
the signal does not contain frequencies any higher than can be described by the
reduced sample rate. The use of a low pass filter will achieve this. To illustrate
this, consider the example of downsampling by a factor of 5 described below.
The original signal was sampled at 1 kHz, implying a bandwidth of 500 Hz. It
contains 2 spectral components, one at 8 Hz and another at 325 Hz.
(spectrum: components at 8 Hz and 325 Hz within the 500 Hz bandwidth)

Downsampling by a factor of 5 will reduce the sample rate to 200 Hz and the
bandwidth to 100 Hz. It is first necessary to apply a low pass filter to limit the
spectral content of the data to the 100 Hz bandwidth. This will remove the
higher frequency component, leaving a time domain signal containing 125
points per period for the remaining 8 Hz component.
(spectrum after filtering: only the 8 Hz component remains in the 100 Hz
bandwidth, with 125 points per period in the time domain)


The downsampling by a factor of 5 is then performed by taking every 5th point.

Not applying the filter would result in the 325 Hz component folding down to
75 Hz in the 100 Hz bandwidth, and as a consequence the result would be heavily
distorted.
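The example above can be sketched with SciPy's `decimate`, which applies the low pass filter before taking every 5th point. This is a stand-in implementation used for illustration:

```python
import numpy as np
from scipy import signal

fs = 1000.0                      # original rate: 500 Hz bandwidth
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 325 * t)

# decimate low-pass filters first, so the 325 Hz component is removed
# rather than folded down to 75 Hz.
y = signal.decimate(x, 5)        # new rate 200 Hz, bandwidth 100 Hz

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=5 / fs)
print(freqs[spectrum.argmax()])                  # the 8 Hz component survives
print(spectrum[np.argmin(np.abs(freqs - 75))])   # essentially nothing at 75 Hz
```

Taking every 5th point of `x` directly (without the filter) would instead put a strong alias at 75 Hz.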
11.1.2


Integer upsampling
Integer upsampling by a factor n involves inserting (n−1) data points between
the original measured ones. Normally the inserted points will have a value of
zero, and it is then necessary to apply an appropriate filter to remove the
harmonics introduced by the process.

The trace shown here is upsampled by a factor of 4, which means that 3 zeros are
added between each of the existing data points. The result is that in the time
domain the signal looks highly distorted.

It can be proven that the spectrum of the upsampled signal consists of the
original one plus a mirrored version of it at all higher frequencies.


The `distortion' introduced by inserting zeroes can therefore be filtered out by
a properly designed low pass filter which will retain just the spectral content
of the original signal bandwidth.
The improvement in the time domain representation of a signal by upsampling
is illustrated below for the case of a critically sampled sine wave. The sample
rate is just greater than 2 points per period, so the Nyquist criterion is satisfied.
Although the time domain representation is poor, there is enough information
for an accurate representation in the frequency domain. Upsampling by a
factor of 10 will make the time domain description of the signal more accurate.


The resulting signal has identical spectral content to the original. The increased
number of points per cycle provides a much improved time domain description
of the waveform.
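SciPy's `resample_poly` (our stand-in for the upsampling scheme described above) inserts the zeros and applies the low pass anti-imaging FIR filter in one step:

```python
import numpy as np
from scipy import signal

fs = 10.0                              # a 4 Hz sine, barely over 2 points/period
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 4 * t)

# Upsample by 10: zero insertion plus low-pass filtering of the mirrored images.
y = signal.resample_poly(x, up=10, down=1)   # 100 Hz: 25 points per period

print(len(y))   # 10x the original sample count
freqs = np.fft.rfftfreq(len(y), d=1 / 100.0)
print(freqs[np.abs(np.fft.rfft(y)).argmax()])   # spectral peak still at 4 Hz
```

The spectral content is unchanged; only the time-domain description of the waveform improves.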


11.1.3

Fractional ratios
Resampling by a non-integer ratio can be realized by a combination of
upsampling and downsampling. So downsampling by a factor of 2.5 can be achieved
by first upsampling by a factor of 2 then downsampling by a factor of 5. The order
in which these two processes are done is very important if the original signal
content of interest is to be preserved.
Consider a signal sampled at 2 kHz and which contains signals up to 300 Hz. A
new sampling rate of 800 Hz is required, representing a downsampling by a
factor of 2.5.

If the signal is first downsampled by a factor of 5, a filter at 200 Hz is required.
As a result, all the signal content between 200 and 300 Hz will be eliminated,
and the subsequent upsampling will not, of course, be capable of restoring this.

The correct procedure is to first upsample to 4 kHz (a bandwidth of 2 kHz). A
lowpass filter set at 1 kHz will retain the original spectral content. The next
stage is to downsample by a factor of 5 with a low pass filter at 400 Hz, thus
maintaining the original frequencies of up to 300 Hz.

When a non-integer resampling factor is required, the software determines the
optimum ratios and the sequence of resample operations required to achieve
the desired sample rate conversion.
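The 2.5:1 example can be sketched with `resample_poly`, which performs the up-by-2 and down-by-5 stages internally in the correct order (again a stand-in, not the LMS software):

```python
import numpy as np
from scipy import signal

fs = 2000.0                       # signal content up to 300 Hz
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 250 * t)   # a component above 200 Hz

# Downsampling by 2.5 = upsample by 2, then downsample by 5; because the
# upsampling happens first, content between 200 and 300 Hz survives.
y = signal.resample_poly(x, up=2, down=5)    # new rate 800 Hz

freqs = np.fft.rfftfreq(len(y), d=1 / 800.0)
print(freqs[np.abs(np.fft.rfft(y)).argmax()])   # 250 Hz is preserved
```

Downsampling by 5 first would have filtered at 200 Hz and destroyed the 250 Hz component irrecoverably.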


11.1.4

Arbitrary ratios
Some resampling requirements can not be easily realized by a simple combination
of an upsampling and a downsampling. For some ratios, even though they
can be expressed as a fraction, an extremely high intermediate upsampling
ratio is needed. The process imposes a heavy computational load and the result
is numerically not well conditioned.
Consider for instance a measurement at 8192 samples per second that is to be
resampled to 8000 Hz for replay on digital audio hardware. This can theoretically
be realized by upsampling by a factor of 125 followed by downsampling by a
factor of 128, but this is computationally extremely costly.
In this situation another strategy is used. Consider the signal shown below,
which was originally sampled at a rate indicated by the white circles. The
required sample rate is indicated by the filled circles. The new sample rate is not
an integer ratio of the original.

The first stage is to upsample by a relatively high factor (a). This factor is
known as the `Upsampling factor before interpolation' parameter and the
default value used is 15. The resulting sample rate is indicated by the squares.
The second stage then involves performing a linear interpolation on the
upsampled signal to arrive at a new sample rate that is an integer multiple (b) of
the target frequency. This introduces an error ε which will be small as long as
the source trace is upsampled at a high enough ratio. The maximum distortion
that can occur with the upsampling factor is indicated by the software.

This error is indicated in the form of the `SDR' (Signal to Distortion Ratio). It
depends on the `Upsampling factor before interpolation' parameter and the
filter's cut-off frequency as shown below:

SDR = 10 log10 (80 × (100 R / <cut-off in percent>))

where
R = Upsampling factor before interpolation
cut-off in % = the cut-off frequency as a % of the Nyquist frequency.


The final stage in this process is to downsample by this integer factor (b) to the
required rate. It is also possible for the downsampling to be achieved directly by
the interpolation process itself, as long as the downsampling rate being
performed is lower than the preceding upsampling rate (a).


11.2

Adaptive resampling
Adaptive or synchronous resampling enables you to resample a signal such
that its characteristics can be examined in a different domain. A well known
mechanical application is the extraction of ``order-related'' phenomena of
engine vibrations based on the measurement of the rotation speed of one of its
components. Phenomena which are very difficult to analyze or interpret in one
domain become clear and obvious in another.
For synchronous averaging, for example, it is essential that repetitive phenomena
occur at the very same instant in the different signal sections that are
averaged. Using the synchronous resampling technique, the data can be
transformed into that particular domain in which the phenomena are indeed
repetitive.
In the same way as the Fourier transform presents the contents of time domain
data in the frequency domain, it converts angle domain data to the order
domain. Just as something that happens twice every second has a frequency of 2
Hz, something that occurs twice every cycle is related to order 2. Consider the
example of measurements taken on an engine at a supposedly constant rpm.
Even very slight variations in rpm will result in a frequency domain
representation where the related spectral components are sharp for the low orders, but
become smeared out for higher frequencies. The small rpm variations lead to
leakage errors in the frequency domain.
For applications where there is a need to investigate higher order phenomena
(such as gear box analysis for example), such smearing makes it very difficult
to discriminate order from resonance components. Transforming such data to
the order domain will result in all orders being clearly shown, but any
resonance phenomenon present will be smeared out. The frequency and order
domain representations are therefore complementary to one another and useful
information can be obtained in the domain most suited for analysis. The adaptive
resampling facility enables you to convert from one domain to another.

Implementation example
The example below illustrates the procedure involved in converting from the
time to the angle domain. The principle can be used to convert between any
two domains.
Your original time signal must be measured in conjunction with a tracking
signal. This is most likely to be a tacho signal: a pulse train that can be converted
to an rpm/time function and then integrated to obtain an angle/time function.


(figure: angle (ordinate) plotted against time (abscissa))

In the case of a transformation from the time domain into the angle domain, the
required (constant) resolution in the angle domain (Δθ) defines the time
intervals at which data samples of the vibration measurement should be available.

(figure: constant angle increments projected onto the angle/time curve define
the required, non-uniform time instants t1, t2, t3 among the measured points)

The most appropriate resolution (Δθ) is based on the minimum slew rate which
must be coped with.
When sampling in the time domain the time increment is the reciprocal of the
sampling frequency:

Δt = 1 / Fs = T

So according to the Nyquist criterion, information is available up to Fs/2.
Adaptive resampling conforms to the same rules: if you do not have enough
samples then information is lost, while if you use too many samples then the
processing effort is unnecessarily increased.
It is necessary to determine the angle increment which corresponds to the
required Fs in the time/frequency domain. Adaptive resampling uses a varying
time increment if the angle/time relationship is non-linear. Data loss will occur
first at the lowest rpm values (slew rate) and the aim is to determine the
threshold angle Δθ between over and under sampling.

Δθ = (dθ/dt)_min / Fs = rpm_min / Fs

So for example, if the minimum slew rate (dθ/dt) is 500 rpm and the sample
frequency is 2000 Hz, then the threshold angle will be

Δθ = (500 / 2000) × (360 / 60) = 1.5 degrees


Using an angle increment less than this value will yield more data points in the
angle domain without any gain in information, thus representing excessive
processing. Using a higher increment value will result in a loss of information
in the lower rpm ranges, which will not be recovered if the data is transformed
back to the original domain.
From the required point θ1 in the angle domain, the θ(t) function is consulted to
find the corresponding time instant (t1). The value of the measured time signal at
that instant (y′1) must then be determined as the value for that angle position.
This is repeated for every value in the angle domain.
Depending on the resolution of the original signal and the relation between both
domains, interpolation may be required. In order to maintain the dynamic
nature of the signal, it is essential to preserve its spectral content, so the signal
is first upsampled before interpolation.
The final interpolation ratio (and thus the corresponding upsampling factor) is
governed by the actual local distance between the available and required data
samples. As a final stage the constructed angle-domain signal needs to be
resampled (usually downsampled) to match the desired angle resolution.

Preservation of the spectral characteristics during the upsampling, interpolation,
and downsampling steps indicated above requires the correct application
of these procedures, as well as low-pass finite impulse response (FIR) filters
with enough suppression in the stop-band, a low ripple in the pass-band, and
yet optimal speed performance for acceptable computing times. The principles
of resampling are discussed in section 11.1.4.
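The time-to-angle conversion can be sketched as follows. This is a simplified illustration, not the LMS implementation: `np.interp` stands in for the upsample-plus-interpolate scheme, and the run-up profile is invented for the example.

```python
import numpy as np

fs = 2000.0
t = np.arange(0, 2.0, 1 / fs)

# A run-up from 500 rpm to 1500 rpm; rpm -> degrees/s is a factor of 6.
rpm = 500.0 + 500.0 * t
theta = np.cumsum(rpm * 6.0 / fs)        # integrated tacho: angle vs. time

# Vibration locked to order 2 (two events per revolution).
x = np.sin(2 * np.pi * 2 * theta / 360.0)

# Threshold angle from the minimum slew rate: (500/2000) * (360/60) = 1.5 deg.
dtheta = (rpm.min() / fs) * 6.0
theta_uniform = np.arange(theta[0], theta[-1], dtheta)
y = np.interp(theta_uniform, theta, x)   # value of x at each required angle

# In the angle domain the order-2 component has a fixed period of 180 degrees,
# so its spectral line is sharp despite the changing rpm.
orders = np.fft.rfftfreq(len(y), d=dtheta) * 360.0   # cycles per revolution
print(orders[np.abs(np.fft.rfft(y)).argmax()])       # close to order 2
```

A plain FFT of `x` against time would smear this component between 2 × 500/60 ≈ 17 Hz and 2 × 1500/60 = 50 Hz.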


11.3


References
[1] A. V. Oppenheim and R.W. Schafer
Digital Signal Processing
Prentice Hall 1975

[2] L.R. Rabiner and B. Gold
Theory and Application of Digital Signal Processing
Prentice Hall 1975

[3] R.E. Crochiere and L.R. Rabiner
Multirate Digital Signal Processing
Prentice Hall 1983

[4] J.G. Proakis and D.G. Manolakis
Digital Signal Processing: Principles, Algorithms and Applications
MacMillan Publishing 1992


Chapter 12

Digital filtering

Filtering is most often used to enhance signals by removing
unwanted components. This chapter describes the theoretical basis
used in the design of digital filters.
Basic definitions related to digital filtering
Types of filters and their design
Analysis of filters
Application of filters
This is by no means a comprehensive text and aims just to give some
insight into the subject. A reading list is appended at the end of the
chapter.



12.1

Basic definitions relating to digital filtering


A linear time-invariant system
Discrete time signals are defined for discrete values of time, i.e. when t = nΔT. A
general way of describing a sequence of discrete pulses of amplitude a(n) is
given in equation 12-1.

{a(n)} = Σ (over m) a(m) u0(n − m)    Eqn 12-1

where u0 is the unit impulse. A discrete-time system is an algorithm for
converting one sequence into another, in which the input x(n) is related to the
output y(n) by the specific system Φ.

y(n) = Φ[x(n)]    Eqn 12-2

A linear system implies that applying the input ax1 + bx2 will result in the output
ay1 + by2, where a and b are arbitrary constants.
A time-invariant system implies that the input sequence x(n − n0) will result in
the output y(n − n0) for all n0.
From equation 12-1 the input x(n) to a system can be expressed as

x(n) = Σ (over m) x(m) u0(n − m)    Eqn 12-3


If h(n) is defined as the impulse response of a system, that is, the response to the sequence u0(n), then by time invariance h(n−m) is the response to u0(n−m). By linearity, the response to the sequence x(m)u0(n−m) must be x(m)h(n−m).
Thus the response to x(n) is given by

y(n) = Σ_m x(m) h(n − m) = Σ_m h(m) x(n − m)    Eqn 12-4

Equation 12-4 is known as the convolution sum, and y(n) is known as the convolution of x(n) and h(n), designated by x(n) * h(n). Thus for a linear time invariant (LTI) system a relation exists between the input and output that is completely characterized by the impulse response h(n) of the system.
x(n) → [LTI system, h(n)] → y(n)
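To make the convolution sum concrete, here is a minimal Python sketch (the function name `convolve` is ours, not from the original text). It implements equation 12-4 directly for finite sequences:

```python
def convolve(x, h):
    """Convolution sum y(n) = sum_m x(m) h(n-m) for two finite sequences,
    both assumed to start at n = 0 (equation 12-4)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for m, xm in enumerate(x):
        for k, hk in enumerate(h):
            y[m + k] += xm * hk   # x(m) contributes to output sample n = m + k
    return y

# Feeding the unit impulse u0(n) through the system returns h(n) itself.
print(convolve([1, 0, 0], [0.5, 0.25, 0.125]))  # → [0.5, 0.25, 0.125, 0.0, 0.0]
```

As expected for an LTI system, the impulse input reproduces the impulse response.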

Stability and causality

The constraints of stability and causality define a more restricted class of linear time-invariant systems which have important practical applications.
A stable system is one for which every bounded input results in a bounded output. The necessary and sufficient condition for stability is

Σ_n |h(n)| < ∞    Eqn 12-5

A causal system is one for which the output for any n = n0 depends only on the input for n ≤ n0. A linear time-invariant system is causal if and only if the unit sample response is zero for n < 0, in which case it may be referred to as a causal sequence.
sequence.
Difference equations
Some linear time-invariant systems have input and output sequences that are related by a constant coefficient linear difference equation. Representing such systems in this way can provide a means of making them realizable, and the appropriate difference equation reveals useful information on the characteristics of the system under investigation, such as the natural frequencies, their multiplicity, the order of the system, frequencies for which there is zero transmission ...

Part III Time data processing


The general form of an Mth order linear constant coefficient difference equation is given in equation 12-6:

y(n) = Σ_{i=0..M} b_i x(n − i) − Σ_{i=1..N} a_i y(n − i)    Eqn 12-6

An example of a first order difference equation is given by

y(n) = −a1 y(n − 1) + b0 x(n) + b1 x(n − 1)    Eqn 12-7

which can be realized as follows: the current input x(n) is weighted by b0, the delayed input x(n−1) by b1 and the delayed output y(n−1) by −a1, and the three terms are summed to form y(n). Each delay block represents a one sample delay. A realization such as this, where separate delays are used for both input and output, is known as Direct form 1. More detailed information on filter realizations can be obtained from the references listed at the end of this chapter.
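A Direct form 1 realization of the first order example can be sketched in a few lines of Python (a sketch of ours, not from the original text; the function name is hypothetical):

```python
def direct_form_1(b0, b1, a1, x):
    """Direct form 1 realization of the first order difference equation
    y(n) = -a1*y(n-1) + b0*x(n) + b1*x(n-1) (equation 12-7)."""
    x_prev = 0.0   # one-sample delay on the input,  x(n-1)
    y_prev = 0.0   # one-sample delay on the output, y(n-1)
    y = []
    for xn in x:
        yn = -a1 * y_prev + b0 * xn + b1 * x_prev
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y

# Impulse response of y(n) = 0.5*y(n-1) + x(n): a geometric decay.
print(direct_form_1(1.0, 0.0, -0.5, [1, 0, 0, 0]))   # → [1.0, 0.5, 0.25, 0.125]
```

Because the output is fed back, the impulse response of this recursive system never dies out exactly, which anticipates the IIR filters discussed later in this chapter.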
The z transform
The z transform of a sequence x(n) is given by

X(z) = Σ_n x(n) z^−n    Eqn 12-8

where z is a complex variable. The z transform is a useful technique for representing and manipulating sequences.
The information contained in the z transform can be displayed in terms of poles and zeros. If the poles of the function X(z) fall within a radius R1, where R1 < 1, then the system is stable.


In the z plane, the overall representation of a linear time invariant system is given by

H(z) = Y(z) / X(z)    Eqn 12-9

and H(z) can again be expressed in the general form of difference equations

H(z) = (a0 + a1 z^−1 + a2 z^−2 + ... + aM z^−M) / (1 + b1 z^−1 + b2 z^−2 + ... + bN z^−N)    Eqn 12-10

The frequency response of filters

Consider the case when the input to a filter is x(n) = e^{jω0 n} (equivalent to a sampled sinusoid of frequency ω0). From equation 12-4

y(n) = Σ_m h(m) e^{jω0(n−m)} = e^{jω0 n} Σ_m h(m) e^{−jω0 m} = x(n) H(e^{jω0})    Eqns 12-11

The quantity H(e^{jω}) is the frequency response function of the filter, which gives the transmission of the system for every value of ω.
This is in fact the z transform of the impulse response function with z = e^{jω}:

H(z)|_{z=e^{jω}} = H(e^{jω}) = Σ_n h(n) e^{−jωn}    Eqn 12-12

which means that the frequency response of a filter is an important indicator of a system's response to any input sequence that can be represented as a continuous superposition of input sequences x(n).
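As a small illustration (ours, not from the original text), the frequency response of equation 12-12 can be evaluated directly for a finite impulse response:

```python
import cmath, math

def freq_response(h, w):
    """H(e^{jw}) = sum_n h(n) e^{-jwn}: the z transform of a finite impulse
    response h evaluated on the unit circle (equation 12-12)."""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

# A two-point moving average passes DC (w = 0) unchanged and blocks w = pi.
print(abs(freq_response([0.5, 0.5], 0.0)))       # → 1.0
print(abs(freq_response([0.5, 0.5], math.pi)))   # ~ 0.0
```

The two printed values confirm the low pass character of the averaging filter.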
Relationship between the frequency response and the Fourier transform of a filter
The frequency response of a linear time invariant system can be viewed as the Fourier series representation of H(e^{jω}).


H(e^{jω}) = Σ_n h(n) e^{−jωn}    Eqn 12-13

h(n) = (1/2π) ∫_{−π}^{π} H(e^{jω}) e^{jωn} dω

where the impulse response coefficients are also the Fourier series coefficients.
Since the above relationships are valid for any sequence that can be summed, the same can apply to x(n) and y(n), and it can be shown that

Y(e^{jω}) = X(e^{jω}) H(e^{jω})    Eqn 12-14

and so the convolution in the time domain has been converted to multiplication in the frequency domain.
Discrete Fourier Transform
For a periodic sequence of N samples, the Discrete Fourier Transform is given as

Hp(k) = Σ_{n=0..N−1} hp(n) e^{−j(2π/N)nk}    Eqn 12-15

and the DFT coefficients are identical to the z transform of that same sequence evaluated at N equally spaced points around the unit circle. The DFT coefficients are therefore a unique representation of a sequence of finite duration.
The continuous frequency response can be obtained from the DFT coefficients by artificially increasing the number of points equally spaced around the unit circle. So by augmenting a finite duration sequence with additional equally spaced zero valued samples, the Fourier transform can be calculated with arbitrary resolution.
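The zero padding idea can be demonstrated with a direct DFT (a sketch of ours, not from the original text):

```python
import cmath

def dft(x):
    """H(k) = sum_n x(n) e^{-j(2*pi/N)nk} (equation 12-15)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

h = [1.0, 1.0, 1.0, 1.0]        # finite duration sequence, N = 4
coarse = dft(h)                  # 4 samples around the unit circle
fine = dft(h + [0.0] * 12)       # zero padded to 16 points: a finer sampling
                                 # of the same underlying transform
```

Appending zeros does not change the underlying transform; it only evaluates it at more points around the unit circle.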

Finite and Infinite Impulse Response Filters

When an impulse response h(n) is made up of a sequence of finite pulses between the limits N1 < n < N2, and is zero outside these limits, then the system is called a finite impulse response (FIR) filter or system.
Such filters are always stable and can be realized by delaying the impulse response by an appropriate amount. The design of FIR filters is described in section 12.2.2.
A filter (system) whose impulse response extends to either −∞ or +∞ (or both) is termed an infinite impulse response (IIR) filter or system. Design of these filters is discussed in sections 12.2.3 and 12.2.4.

Use of digital filters

Digital filters can be used in a range of applications such as anti-aliasing,
smoothing,
elimination of noise,
compensation (equalization),
modification of fatigue/damage characteristics.
They have some important advantages compared to analog filters:
high accuracy,
consistent behavior and characteristics,
few physical constraints,
independence of hardware,
the signals can be easily used by different processing algorithms.


12.2 FIR and IIR filter design

Filters fall into two distinct categories: the Finite Impulse Response (FIR) filters and the Infinite Impulse Response (IIR) filters. A comparison of the two categories of filters is given below.
Characteristic            FIR                                     IIR
Stability                 always stable (all poles at z = 0)      stable if |poles| < 1
Phase                     linear (important in applications       nonlinear
                          such as speech processing)
Efficiency                low: the length (nr of taps) must       better: a lower order
                          be relatively large to produce an       is required
                          adequately sharp cut off
Round off error           low                                     high
sensitivity
Start up transients       finite duration                         infinite duration
Adaptive filtering        easy                                    difficult
Realization               straightforward (direct form)           more critical (direct
                                                                  or cascaded)

There are nine basic designs of filters that are described in this chapter, as listed below.

FIR Window                 see page 174
FIR Multi window           see page 176
FIR Remez                  see page 177
IIR Bessel                 see page 180
IIR Butterworth            see page 181
IIR Chebyshev              see page 182
IIR Inverse Chebyshev      see page 183
IIR Cauer                  see page 183
IIR Inverse design         see page 187

This section begins with an introduction to the terminology used in filter design. The following subsections deal with the processes and parameters involved in each sort of filter mentioned above.


12.2.1 Filter design terminology

Filter characteristics
The nomenclature used in describing a (low pass) filter is illustrated in Figure 12-1: the magnitude response |H(ω)| shows the pass band ripple, the attenuation and the stop band ripple over the pass band, transition band and stop band regions.

Figure 12-1 Filter characteristics

The filter design functions operate with normalized frequencies, with a unit frequency equal to the sampling frequency:

Normalized frequency = frequency (Hz) / sampling frequency

and thus lies in the range 0 to 0.5.

Angular frequency on the unit circle = Normalized frequency × 2π

Linear phase filters

The frequency response of a filter has an amplitude and a phase:

H(e^{jω}) = |H(e^{jω})| e^{jθ(ω)}

For a linear phase, θ(ω) = −αω where −π ≤ ω ≤ π. It can be shown that a necessary condition for this is that the impulse response function is symmetric,

h(n) = h(N − 1 − n)

and in this case α = (N − 1)/2.
This means that for each value of N there is only one value of α for which exactly linear phase will be obtained. Figure 12-2 shows the type of symmetry required when N is odd and even.
Figure 12-2 Symmetrical impulses for odd and even N (e.g. N = 11 with α = 5, and N = 12 with α = 5.5, each symmetric about its center)

Filter types
Several types of filter are provided (low pass, high pass, band pass and band stop, as illustrated in Figure 12-3), as well as multipoint filters where the required response can be of an arbitrary shape.

Figure 12-3 Filter types

In addition it is also possible to design a Differentiator filter and a Hilbert transformer. These can both be designed using the Remez exchange algorithm and they are briefly described here.
Differentiator filter
Such a filter takes the derivative of a signal, and an ideal differentiator has a desired frequency response of

Hd(ω) = jω    −π ≤ ω ≤ π    Eqn 12-16

The unit sample response is

h(n) = (1/2π) ∫_{−π}^{π} Hd(ω) e^{jωn} dω = (1/2π) ∫_{−π}^{π} jω e^{jωn} dω = cos(πn)/n    Eqn 12-17

which is an antisymmetric unit sample response. In practice however the ideal case is not required and a pass band will be specified, with a stop band ripple, transition band and stop band, as shown in Figure 12-4.

Figure 12-4 Characteristics of a differentiator filter

Hilbert transformer
This filter imparts a 90° phase shift to the input. The ideal Hilbert transformer has a desired frequency response of

Hd(ω) = −j    0 < ω ≤ π
Hd(ω) = j    −π ≤ ω < 0    Eqn 12-18

The unit sample response is

h(n) = (1/2π) ∫_{−π}^{π} Hd(ω) e^{jωn} dω    Eqn 12-19
     = (1/2π) [ ∫_{−π}^{0} j e^{jωn} dω − ∫_{0}^{π} j e^{jωn} dω ]
     = (2/π) sin²(πn/2) / n

In practice however the ideal case is not required, and the desired frequency response of a Hilbert transformer can be specified as Hd(ω) = 1 between the limits ωl < ω < ωu as shown in Figure 12-5.

Figure 12-5 Characteristics of a Hilbert transformer

12.2.2 Design of FIR filters


Design of an FIR window filter
The frequency response of a filter can be expanded into the Fourier series

H(e^{jω}) = Σ_n h(n) e^{−jωn}    Eqn 12-20

h(n) = (1/2π) ∫_{−π}^{π} H(e^{jω}) e^{jωn} dω

The coefficients of the Fourier series are identical to the impulse response of the filter. Such a filter is not realizable however, since it begins at −∞ and is infinitely long. It needs to be both truncated to make it finite and shifted to make it realizable. Direct truncation is possible but leads to the Gibbs phenomenon of overshoot and ripple illustrated in Figure 12-6.

Figure 12-6 Gibbs phenomenon due to truncation of the Fourier series

A solution to this is to truncate the Fourier series with a window function. This is a finite weighting sequence which will modify the Fourier coefficients to control the convergence of the series. Then

ĥ(n) = h(n) w(n)    Eqn 12-21

where w(n) is the window function sequence and ĥ(n) gives the required impulse response.
The desirable characteristics of a window function are
a narrow main lobe containing as much energy as possible,
side lobes that decrease in energy rapidly as ω tends to π.
The windows supported are listed below.

Rectangular
This is equivalent to direct truncation.

W(n) = 1    when −(N−1)/2 ≤ n ≤ (N−1)/2
     = 0    elsewhere

Hanning
This type of window trades off transition width for ripple cancellation. In this case

W(n) = α + (1 − α) cos(2πn/N)    when −(N−1)/2 ≤ n ≤ (N−1)/2
     = 0    elsewhere

with α = 0.5.
Hamming
This has similar properties to the Hanning window described above. The formula is the same but in this case α = 0.54.
Kaiser
The Kaiser window function is a simplified approximation of a prolate spheroidal wave function, which exhibits the desirable quality of being a time-limited function whose Fourier transform approximates a band-limited function. It displays minimum energy outside a selected frequency band and is described by the following formula:

W(n) = I0(β √(1 − [2n/(N−1)]²)) / I0(β)    when −(N−1)/2 ≤ n ≤ (N−1)/2

where I0 is the zeroth order Bessel function and β is a constant representing a trade-off between the height of the side lobe ripple and the width of the main lobe.
Chebyshev
This is another example of an essentially optimum window, like the Kaiser window, in the sense that it is a finite duration sequence that has the minimum spectral energy beyond the specified limits. The window function is derived from the Chebyshev polynomial, which is described below.
The Chebyshev polynomial of degree r in x, where −1 ≤ x ≤ 1, is denoted by Tr = Tr(x):

Tr(x) = cos(r · cos⁻¹(x))

and

Tr+1(x) = 2x Tr(x) − Tr−1(x)

so that

T0 = 1,  Tr(1) = 1,  Tr(−1) = (−1)^r,  T2r(0) = (−1)^r,  T2r+1(0) = 0

The window function W(n) is obtained from the inverse DFT of the Chebyshev polynomial evaluated at N equally spaced points around the unit circle.
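The windowed-sinc idea behind the FIR window design can be sketched as follows (a sketch of ours, not from the original text; it uses the Hamming window formula given above, written in its centred form):

```python
import math

def fir_lowpass_hamming(N, fc):
    """Windowed-sinc FIR low pass sketch: truncate the ideal impulse response
    sin(2*pi*fc*k)/(pi*k) (fc = normalized cutoff, 0..0.5) with the Hamming
    window alpha + (1-alpha)*cos(2*pi*k/N), alpha = 0.54, and shift it by
    (N-1)/2 samples so that it is causal. N should be odd."""
    M = (N - 1) // 2
    h = []
    for n in range(N):
        k = n - M
        ideal = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
        w = 0.54 + 0.46 * math.cos(2 * math.pi * k / N)   # Hamming, centred
        h.append(ideal * w)
    return h

h = fir_lowpass_hamming(21, 0.125)
# h is symmetric, h(n) = h(N-1-n), so the filter has exactly linear phase.
```

The symmetry of the result is exactly the linear phase condition h(n) = h(N−1−n) discussed earlier in this section.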

FIR multi window filter

This allows you to design a filter of arbitrary shape and is suited for narrow band selective filters. It uses the design technique known as frequency sampling.
It will be recalled from equations 12-15 that a filter can be defined by its DFT coefficients, and that the DFT coefficients can be regarded as samples of the z transform of the function evaluated at N points around the unit circle:

H(k) = Σ_{n=0..N−1} h(n) e^{−j(2π/N)nk}

h(n) = (1/N) Σ_{k=0..N−1} H(k) e^{j(2π/N)nk}

H(k) = H(z)|_{z=e^{j(2π/N)k}}

From these relationships, and since e^{j2πk} = 1, it can be shown that

H(z) = ((1 − z^−N)/N) Σ_{k=0..N−1} H(k) / (1 − z^−1 e^{j(2π/N)k})    Eqn 12-22

The desired filter specification can be sampled in frequency at N equidistant points around the unit circle, to give the desired frequency response H(k). The continuous frequency response can be obtained by interpolation of these sampled values around the unit circle.
The filter coefficients are obtained after applying an inverse FFT on the interpolated response. The coefficients are tapered smoothly to zero at the ends by multiplying the impulse response by the specified window function.
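The core of the frequency sampling technique, the inverse DFT of the desired samples H(k), can be sketched as follows (ours, not from the original text; the windowing step is omitted):

```python
import cmath

def freq_sampling_fir(Hk):
    """Frequency sampling sketch: the filter coefficients are the inverse DFT
    h(n) = (1/N) sum_k H(k) e^{j(2*pi/N)nk} of the N desired frequency
    samples H(k)."""
    N = len(Hk)
    return [sum(Hk[k] * cmath.exp(2j * cmath.pi * n * k / N)
                for k in range(N)).real / N
            for n in range(N)]

# Eight samples of an ideal low pass: pass bins 0, 1 and 7 (7 = -1 mod 8),
# so that the samples are conjugate symmetric and h(n) is real.
h = freq_sampling_fir([1, 1, 0, 0, 0, 0, 0, 1])
```

By construction the DFT of the resulting h reproduces the specified samples exactly; between the samples the response is interpolated.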

FIR Remez filter

This uses the Remez exchange algorithm and the Chebyshev approximation theory to arrive at filters that optimally fit the desired and the actual frequency responses, in the sense that the error between them is minimized. The Parks-McClellan algorithm employed enables you to design an equi-ripple optimal FIR filter.
The desired frequency response is expressed as a gabarit which contains a number of frequency bands. These bands are interpolated onto a dense grid in a similar way to that described for the multipoint FIR filter design using a window described above.
The weighted approximation error between the desired frequency response and the actual response is spread evenly across the passbands and the stopbands, and the maximum error is minimized by linear optimization techniques. The approximation errors in both the pass and stop bands for a low pass filter are illustrated in Figure 12-7.

Figure 12-7 Approximation errors (pass band ripple ±δ1 about unity, stop band ripple δ2)

The filter coefficients are obtained after applying an inverse DFT on the optimum frequency response.
Weighting
For each frequency band the approximation errors can be weighted. This is done by specifying a weighting function W(ω). Applying a weighting function of 1 (unity) in all bands implies an even distribution of the errors over the whole frequency band. To reduce the ripple in one particular band it is necessary to change the relative weighting across the bands, and in this case to ensure that the band of interest has a relatively high weighting. It is convenient to normalize W(ω) in the stopband to unity and to set it to the ratio of the approximation errors (δ2/δ1) in the passband.

12.2.3 Design of IIR filters using analog prototypes

The steps involved in this design process are described in the following subsections. References for further reading on filters can be found on page 191.

Step 1) Specify the filter characteristics

The required filter characteristics are described in Figure 12-8. These will of course depend on the type of filter required.

Figure 12-8 Filter specification for IIR filters (maximum ripple in the pass band (dB), attenuation (dB), lower cutoff ωl and upper cutoff ωu)

Step 2) Compute the analog frequencies

A prototype low pass filter will be designed based on the required digital cutoff frequency ωc. First however the digital frequency ωd must be converted to an analog one, ωa. This is achieved through a bilinear transformation from the digital (z) plane to the analog (s) plane, where s and z are related by

s = (2/T) (1 − z^−1)/(1 + z^−1)    Eqn 12-23

When z = e^{jωT} (the unit circle) and s = jωa,

s = (2/T) (1 − e^{−jωT})/(1 + e^{−jωT}) = j (2/T) tan(ωd T/2)    Eqn 12-24

ωa = (2/T) tan(ωd T/2)    Eqn 12-25

The analog ω axis is mapped onto one revolution of the unit circle, but in a non-linear fashion. It is necessary to compensate for this nonlinearity (warping) as shown in Figure 12-9.
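Equation 12-25 (the prewarping step) is easy to evaluate directly; the sketch below is ours, not from the original text:

```python
import math

def prewarp(f_digital, fs):
    """Analog design frequency w_a = (2/T) * tan(w_d * T / 2) corresponding
    to the digital frequency w_d under the bilinear transformation
    (equation 12-25); T = 1/fs."""
    T = 1.0 / fs
    wd = 2.0 * math.pi * f_digital
    return (2.0 / T) * math.tan(wd * T / 2.0)

# Far below fs/2 the warping is negligible; close to fs/2 it grows rapidly.
print(prewarp(10.0, 1000.0))    # close to 2*pi*10  (~ 62.83 rad/s)
print(prewarp(400.0, 1000.0))   # well above 2*pi*400 (~ 2513 rad/s)
```

This is why low frequencies survive the bilinear transformation almost unchanged, while frequencies near half the sampling rate are strongly compressed.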

Figure 12-9 Conversion from digital to analog frequencies

Step 3) Select the suitable analog filter

It is now necessary to select a suitable low pass analog prototype filter that will produce the required characteristics. The selection can be made from the following types of filter:
Bessel filters
Butterworth filters
Chebyshev type I filters
Inverse Chebyshev (type II) filters
Cauer (elliptical) filters

Bessel filters
The goal of the Bessel approximation for filter design is to obtain a flat delay characteristic in the passband. The delay characteristics of the Bessel approximation are far superior to those of the Butterworth and the Chebyshev approximations; however, the flat delay is achieved at the expense of the stopband attenuation, which is even lower than that of the Butterworth. The poor stopband characteristics of the Bessel approximation make it impractical for most filtering applications.

Bessel filters have sloping pass and stop bands and a wide transition width, resulting in a cutoff frequency that is not well defined.
The transfer function is given by

H(s) = d0 / Bn(s)    Eqn 12-26

where Bn(s) is the nth order Bessel polynomial

Bn(s) = (2n − 1) Bn−1(s) + s² Bn−2(s)    Eqn 12-27

and d0 is a normalizing constant:

d0 = (2n)! / (2^n n!)    Eqn 12-28

Butterworth filters
These are characterized by the response being maximally flat in the pass band and monotonic in the pass band and stop band. Maximally flat means that as many derivatives as possible are zero at the origin. The squared magnitude response of a Butterworth filter is

|H(ω)|² = 1 / (1 + (ω/ωc)^{2n})    Eqn 12-29

where n is the order of the filter. The transfer function of this filter can be determined by evaluating equation 12-29 at s = jω:

|H(jω)|² = H(s)H(−s) = 1 / (1 + (s/jωc)^{2n})    Eqn 12-30

Butterworth filters are all-pole filters, i.e. the zeros of H(s) are all at s = ∞. They have magnitude 1/√2 when ω/ωc = 1, i.e. the magnitude response is down 3 dB at the cutoff frequency.

Figure 12-10 Characteristics of a Butterworth filter (3 dB down at ωc; the roll-off steepens with increasing order, e.g. n = 4 versus n = 10)

A means of determining the optimum order is described on page 185.
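The properties above are immediate from equation 12-29; a one-line Python check (ours, not from the original text):

```python
def butter_mag2(w, wc, n):
    """Squared magnitude |H(w)|^2 = 1/(1 + (w/wc)^(2n)) of an order n
    Butterworth low pass (equation 12-29)."""
    return 1.0 / (1.0 + (w / wc) ** (2 * n))

print(butter_mag2(1.0, 1.0, 4))    # → 0.5  (-3 dB at the cutoff, any order)
print(butter_mag2(2.0, 1.0, 10) < butter_mag2(2.0, 1.0, 4))   # → True
```

The second line confirms that raising the order steepens the roll-off beyond the cutoff, as in Figure 12-10.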

Chebyshev (type I) filters

These are all pole filters that have equi-ripple pass bands and monotone stop bands. The formula is

|H(ω)|² = 1 / (1 + ε² Cn²(ω/ωc))    Eqn 12-31

where Cn(ω) are the Chebyshev polynomials and ε is the parameter related to the ripple in the pass band: the pass band response ripples between 1 and 1/√(1+ε²), the n odd and n even cases differing only in the behavior at ω = 0.
For the same loss requirements, the Chebyshev approximation usually requires a lower order than the Butterworth approximation, but at the expense of an equi-ripple passband. Therefore the transition width of a Chebyshev filter is narrower than for a Butterworth filter of the same order.
The increased stopband attenuation is achieved by changing the approximation conditions in that band, thus minimizing the maximum deviation from the ideal flat characteristics. The stopband loss keeps increasing at the maximum possible rate of 6 × <Order> dB/octave.


Chebyshev filters show a non-uniform group delay and substantially non-linear phase. A means of determining the optimum order is described on page 185.

Inverse Chebyshev (type II) filters

These contain poles and zeros and have equi-ripple stop bands with maximally flat pass bands. In this case

|H(ω)|² = ε² Cn²(ωr/ω) / (1 + ε² Cn²(ωr/ω))    Eqn 12-32

where Cn(ω) are the Chebyshev polynomials, ε is the ripple parameter and ωr is the lowest frequency at which the stop band loss attains a specified value (the n odd and n even cases differ only in detail).
For the same loss requirements, the Inverse Chebyshev approximation usually requires a lower order than the Butterworth approximation, but at the expense of an equi-ripple stopband.
The increased passband flatness is achieved by changing the approximation conditions in that band, thus minimizing the maximum deviation from the ideal flat characteristics.

Cauer (elliptical) filter

These filters are optimum in the sense that, for a given filter order and ripple specifications, they achieve the fastest transition between the pass and the stop band (i.e. the narrowest transition band). They have equi-ripple stop bands and pass bands.

The transfer function is given by

|H(ω)|² = 1 / (1 + ε² Rn²(ω, L))    Eqn 12-33

where Rn(ω, L) is called a Chebyshev rational function and L is a parameter describing the ripple properties of Rn(ω, L). The determination of Rn(ω, L) involves the use of the Jacobi elliptic function; ε is a parameter related to the passband ripple.
For a given requirement, this approximation will in general require a lower order than the Butterworth or the Chebyshev ones. The Cauer approximation will thus lead to the least costly filter realization, but at the expense of the worst delay characteristics.
In the Chebyshev and Butterworth approximations, the stopband loss keeps increasing at the maximum possible rate of 6 × <Order> dB/octave. Therefore these approximations provide increasingly more loss than the wanted flat attenuation that is really needed above the edge of the stopband. This source of inefficiency for both approximations is remedied by the Cauer or elliptic approximation.


Step 4) Transform the prototype low pass filter

At this point we have selected a suitable low pass filter prototype with a normalized cutoff frequency ωc = 1. The next stage is to transform this low pass filter into the type of analog filter required, with the desired cutoff frequencies. To achieve this the following transformations are applied.

Transform                  Replace s by
Low pass to low pass       s / ωc
Low pass to high pass      ωc / s
Low pass to band pass      (s² + ωu ωl) / (s (ωu − ωl))
Low pass to band stop      (s (ωu − ωl)) / (s² + ωu ωl)

Step 5) Apply a bilinear transformation

The final stage in this design process is to apply a bilinear transformation to map the (s) plane to the (z) plane to obtain the desired digital filter:

H(z) = H(s)|_{s = (2/T)(1 − z^−1)/(1 + z^−1)}    Eqn 12-34

The final result is a set of filter coefficients a and b, stored in vectors of length n+1, where n is the order of the filter. A facility, described below, enables you to determine the optimum order of a filter required for a particular design.

Determining the filter order

You can determine the filter order and the cutoff frequency for a given set of design parameters, shown in Figure 12-11: the passband ripple δ1 (the passband magnitude stays between 1 and 1 − δ1 up to ωp), the attenuation δ2 (reached from ωs onwards) and the transition width Δω.

Figure 12-11 Specifications required to determine filter order

Ripple passband       This determines the ripple parameter δ1. It is
                      expressed in dB.

Attenuation           When this is defined, the ripple parameter δ2 is
                      determined. It is expressed in dB.

Lower frequency       These are the two edge frequencies ωp (end of the
Upper frequency       pass band) and ωs (start of the stop band) of a low
                      pass or high pass filter. Band pass and band stop
                      filters will require a second pair of frequencies to
                      be defined.

Sampling frequency    This is the sampling frequency at which the filter
                      must operate.

The filter can be any one of the types mentioned above, and the prototype can be either a Butterworth, Chebyshev type I or type II, or a Cauer filter. This process does not apply to the Bessel filter, because of the particular condition pertaining to these filters in that the filter order affects the cutoff frequency.
The minimum filter order required is determined from a set of functions described below.
One function relates the pass band and stop band ripple specifications to a filter design parameter δ, where

δ² = ((1 − δ1)⁻² − 1) / (δ2⁻² − 1)

Another parameter relates the pass band cut off frequency ωp, the transition width Δω and the low pass filter transition ratio k, where

k = ωp / ωs    (analog)
k = tan(ωp/2) / tan(ωs/2)    (digital)

A final function relates the filter order n, the low pass filter transition ratio k and the filter design parameter δ. This relationship depends on the type of prototype analog filter:

n ≥ ln(1/δ) / ln(1/k)    Butterworth

n ≥ cosh⁻¹(1/δ) / cosh⁻¹(1/k)    Chebyshev

n ≥ K(k) K(√(1 − δ²)) / (K(δ) K(√(1 − k²)))    Elliptic

where K(·) is the complete elliptic integral of the first kind.
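The Butterworth case is simple enough to sketch directly (ours, not from the original text; the δ formula follows from the band edge magnitude constraints of equation 12-29, and the function name is hypothetical):

```python
import math

def butterworth_order(d1, d2, wp, ws):
    """Minimum Butterworth order from the passband ripple d1, stopband
    ripple d2 and the digital edge frequencies wp < ws (radians/sample):
    n >= ln(1/delta) / ln(1/k), with the transition ratio
    k = tan(wp/2)/tan(ws/2) and delta^2 = ((1-d1)^-2 - 1)/(d2^-2 - 1)."""
    delta = math.sqrt(((1.0 - d1) ** -2 - 1.0) / (d2 ** -2 - 1.0))
    k = math.tan(wp / 2.0) / math.tan(ws / 2.0)
    return math.ceil(math.log(1.0 / delta) / math.log(1.0 / k))

# 10% passband droop, 20 dB stopband, band edges at 0.2*pi and 0.4*pi:
print(butterworth_order(0.1, 0.1, 0.2 * math.pi, 0.4 * math.pi))   # → 4
```

Tightening either ripple specification or narrowing the transition band pushes the required order up, as the monotone form of the formula suggests.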

12.2.4 IIR Inverse design filter

The `filter inverse design' command uses a direct digital design technique rather than the digitization of existing analog filters as described in section 12.2.3. An iterative procedure is used to perform a least squares error fit between the actual frequency response and the specified desired response.
The required response is obtained from a specified gabarit that contains the necessary frequency and magnitude break points, which are mapped onto a grid.
The outcome is a set of filter coefficients.


12.3 Analysis

This section describes the functions that provide information on the characteristics of filters.

Frequency response of filters

This function computes the magnitude and phase of the frequency response H(e^{jω}) of the filter defined by the coefficients a and b in equation 12-10.

Group delay
The group delay of a filter provides a measure of the average delay of the filter as a function of frequency. The frequency response of a filter is given by

H(z)|_{z=e^{jω}} = H(e^{jω}) = |H(e^{jω})| e^{jθ(ω)}

The phase delay is defined as

τp(ω) = −θ(ω)/ω    Eqn 12-35

and the group delay is defined as the first derivative of the phase:

τg(ω) = −dθ(ω)/dω    Eqn 12-36

If the waveform is not to be distorted, then the group delay should be constant over the frequency bands being passed by the filter.
For a linear phase, θ(ω) = −αω where −π ≤ ω ≤ π, and α is then both the phase delay and the group delay.
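Equation 12-36 can be checked numerically for an FIR filter (a sketch of ours, not from the original text, using a central difference on the phase):

```python
import cmath

def group_delay_fir(h, w, dw=1e-6):
    """tau_g(w) = -d(theta)/dw (equation 12-36), estimated by a central
    difference on the phase of H(e^{jw}) for an FIR impulse response h."""
    def phase(wi):
        return cmath.phase(sum(hn * cmath.exp(-1j * wi * n)
                               for n, hn in enumerate(h)))
    return -(phase(w + dw) - phase(w - dw)) / (2.0 * dw)

# A symmetric 5-tap filter (linear phase) delays every passed frequency by
# (N-1)/2 = 2 samples.
print(group_delay_fir([1.0, 2.0, 3.0, 2.0, 1.0], 0.3))   # ~ 2.0
```

A production implementation would unwrap the phase before differentiating; the sketch is valid where the phase does not jump across ±π.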


12.4 Applying filters

This section describes how filters can be applied to data.
Direct trace filtering
Implementing this method basically filters the data x according to the filter defined by coefficients a and b to produce the filtered data y.
Zero phase filtering
This option also filters the data using the filter defined by the coefficients a and b, but in such a way as to produce no phase distortion. In the case of FIR filters an exactly linear phase is possible, since the output is simply delayed by a fixed number of samples, but with IIR filters the phase distortion is very non-linear. If the data has been recorded, however, and the whole sequence can be replayed, then this problem can be overcome by using the concept of `time reversal'. In effect the data is filtered twice, once in the forwards direction, then in the reverse direction, which removes all the phase distortion but results in the magnitude effect of the filter being squared.
If x(n) = 0 when n < 0, then the z transform of the time reversed sequence is

Z{x(−n)} = Σ_{n=−∞..0} x(−n) z^−n    Eqn 12-37

which, with the substitution u = −n, becomes

Σ_{u=0..∞} x(u) z^u

So if X(z) = Z{x(n)}, then

Z{x(−n)} = X(z^−1)

Time reversal filtering can be realized using the method shown in Figure 12-12: the input x(n) is time reversed to give a(n) = x(−n), filtered by h(n) to give f(n), time reversed again to give b(n) = f(−n), and filtered once more by h(n) to give the output y(n).

Figure 12-12 Realization of zero phase filters

In this case it can be seen that

A(z) = X(z^−1)
F(z) = A(z)H(z) = H(z)X(z^−1)
B(z) = F(z^−1) = H(z^−1)X(z)
Y(z) = H(z)B(z) = H(z)H(z^−1)X(z)

So the `equivalent' filter for the input data is

Heq(z) = H(z)H(z^−1)

and with z = e^{jω}

Heq(z) = H(e^{jω})H(e^{−jω}) = |H(e^{jω})|²

i.e. zero phase and squared magnitude. Using this filtering method results in starting and end transients, which in this implementation are minimized by carefully matching the initial conditions.
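The forward-backward scheme of Figure 12-12 can be sketched with a simple first order IIR filter (ours, not from the original text; unlike the implementation described above, no care is taken over the end transients):

```python
def filt(b0, a1, x):
    """First order IIR y(n) = -a1*y(n-1) + b0*x(n), applied along x."""
    y, y_prev = [], 0.0
    for xn in x:
        y_prev = -a1 * y_prev + b0 * xn
        y.append(y_prev)
    return y

def zero_phase(b0, a1, x):
    """Time reversal filtering: filter, reverse, filter again, reverse.
    The phase contributions cancel and the magnitude response is squared."""
    forward = filt(b0, a1, x)
    backward = filt(b0, a1, forward[::-1])
    return backward[::-1]

# Unity DC gain filter (b0 = 0.5, a1 = -0.5): a long constant input comes
# back essentially unchanged in the middle of the record, and an impulse
# produces a symmetric (zero phase) two-sided response.
y = zero_phase(0.5, -0.5, [1.0] * 50)
```

The symmetric impulse response is the time-domain signature of the zero phase property derived above.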


12.5 References

[1]

A.V. Oppenheim and R.W. Schafer

Digital Signal Processing
Prentice Hall, 1975

[2]

L.R. Rabiner and B. Gold

Theory and Application of Digital Signal Processing
Prentice Hall, 1975

[3]

R.E. Crochiere and L.R. Rabiner

Multirate Digital Signal Processing
Prentice Hall, 1983

[4]

J.G. Proakis and D.G. Manolakis

Digital Signal Processing: Principles, Algorithms and Applications
Macmillan Publishing, 1992


Chapter 13

Harmonic tracking

This chapter describes the concepts involved in Harmonic tracking using a Kalman filter:
Theoretical background
Practical considerations


13.1 Introduction

There are a number of circumstances when it is necessary to track periodic components (orders) when the signal of interest is buried in noise, or the rotational speed is changing rapidly. Indeed some effects only manifest themselves when the rate of change of frequency is high. In these situations, real time analog and digital filters have limited resolution due to transients and excessive processing requirements. The Kalman filter however is able to accurately track signals of a known structure concealed in a confusion of noise and other periodic components of unknown structure.
An important characteristic of the Kalman filter is that it is non-stationary. It functions well at high slew rates, because the system model used does not presume either fixed time or frequency content, but adapts itself automatically as the system itself is changing. This ability to derive the system model for each time sample in the recording (within certain user-defined constraints) frees it from the usual time/frequency resolution constraint encountered with the traditional frequency transformations.

Conditions for use


Some important capabilities of the Kalman filter are V

the ability to track an order with arbitrary fractional order resolution


from signals sampled at a constant rate,

fine spectral resolution of the orders (e.g. 0.01 Hz) obtained after just a few measurement samples (not even one cycle of the fundamental component),

virtually no slew rate limitations,

the ability to produce an order value for every measurement sample


point,

no phase distortion.

In order to use the Kalman filter the following conditions must apply:


The structure of the signal (sine wave) to be tracked must be accurately


known.

The signals must be acquired at a constant sampling rate.

An accurate estimate of the instantaneous Rpm value is required when


you are dealing with signals that vary with rotational speed.


13.2

Theoretical background
The application of the Kalman filter to track harmonic components involves two stages.
1 Accurate determination of the Rpm
If you want to track an order, then you must provide the corresponding Rpm/time trace. Your Rpm may have been determined using a tacho signal, which results in a pulse train, or a swept sine function; in either case you will need to convert it to an Rpm/time function.
2 The tracking of the specified waveform
Section 13.2.2 describes the mathematical background to the operation of the
tracking function.
Some practical considerations are discussed in section 13.3.

13.2.1

Determination of the Rpm


Since the Kalman filter is highly selective and accurate in tracking a target signal buried in noise, it is crucial that the instantaneous Rpm of the system is precisely modelled, otherwise the wrong component will be tracked. The Rpm information can be derived from the tachometer channel, which is sampled at the same rate as the measurement channels to obtain a small statistical variability in the period estimation. Clearly the tachometer events will occur at a lower rate and so to reduce the error on the period estimate, resampling is performed on the original tachometer signal.
The first part of the process therefore is to convert the original tacho signal from
a pulse train to an rpm/time function.
The second step involves obtaining an equidistant function. Since all mechanical systems have some inertia, it is reasonable to expect the speed to be a continuous function, so a cubic spline with the appropriate boundary conditions can be used to obtain the required `sample-by-sample Rpm' estimate of the speed function.
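The two steps can be sketched as follows. This is an illustrative sketch only (the function and variable names are not from the product), using SciPy's CubicSpline for the resampling stage:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def tacho_to_rpm(pulse_times, pulses_per_rev, sample_times):
    """Convert tacho pulse arrival times into a sample-by-sample
    Rpm estimate at the measurement sample instants."""
    pulse_times = np.asarray(pulse_times, dtype=float)
    # Step 1: pulse train -> rpm/time function.  Each pulse interval
    # gives a speed estimate located at the interval midpoint.
    periods = np.diff(pulse_times)                        # s per pulse
    rpm = 60.0 / (periods * pulses_per_rev)
    mid_times = 0.5 * (pulse_times[:-1] + pulse_times[1:])
    # Step 2: equidistant resampling.  A cubic spline is reasonable
    # because inertia makes the speed a smooth function of time.
    return CubicSpline(mid_times, rpm)(np.asarray(sample_times))
```

For a constant-speed tacho signal this simply reproduces the constant speed at every measurement instant; for a run-up or run-down it interpolates smoothly between the pulse-based estimates.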

13.2.2

Waveform tracking
The Kalman filtering method involves setting up and solving a pair of equations known as the Structural and the Data equations.


The Structural equation


This equation defines the shape or structure of the waveform you wish to track. A sine wave x(t) of frequency f, for example, sampled at time increments Δt, satisfies the following second order difference equation

x(nΔt) − 2 cos(2πfΔt) x((n−1)Δt) + x((n−2)Δt) = 0

Eqn. 13-1

By dropping the time increment Δt this can be written more simply as

x(n) − c(n) x(n−1) + x(n−2) = 0

Eqn. 13-2

where c(n) = 2 cos(2πfΔt)


When the instantaneous frequency f is known, equation 13-2 is a linear frequency dependent constraint equation on the sine wave, which is known as the structural equation.
When tracking a sine wave which is changing in frequency, and which is contaminated by noise and other sinusoids, a non-homogeneity term ε(n) is introduced. This allows the sine wave to vary in frequency, amplitude and phase, and Equation 13-2 then becomes

x(n) − c(n) x(n−1) + x(n−2) = ε(n)

Eqn. 13-3

ε(n) is a deterministic but unknown term which allows for deviations from the true stationary wave.
It is also useful to define s_ε(n) as the standard deviation of the non-homogeneity of the structural equation.

The Data equation


x(n) is the time history defined by the structural equation, but the measured

signal y(n) contains both the signal that matches the structural equation as well
as noise and other periodic components.


y(n) = x(n) + η(n)

Eqn. 13-4

where η(n) contains noise and periodic components at frequencies other than the target signal.
Once again s_η(n) is defined as the standard deviation of the nuisance element of the data equation.

The Least squares formulation


For any point in time (n), equations 13-3 and 13-4 provide linear equations for {x(n), x(n−1), x(n−2)}. Rearranging these equations gives an unweighted matrix form, with the structural equation on the top row and the data equation on the bottom.

    [ 1  −c(n)  1 ]   [ x(n−2) ]   [ ε(n)        ]
    [ 0    0    1 ] · [ x(n−1) ] = [ y(n) − η(n) ]
                      [ x(n)   ]

Eqn. 13-5

The error in equation 13-5 is made isotropic by applying a weighting factor r(n), which is defined as the ratio of the standard deviations of the errors in the structural and data equations.

r(n) = s_ε(n) / s_η(n)

Eqn. 13-6

Equation 13-5 then becomes

    [ 1  −c(n)  1    ]   [ x(n−2) ]   [ ε(n)               ]
    [ 0    0    r(n) ] · [ x(n−1) ] = [ r(n)(y(n) − η(n))  ]
                         [ x(n)   ]

Eqn. 13-7

The weighting function r(n) expresses the degree of confidence between the
structural equation and data equation, or, the certainty of the presence of orders
in the data. This function shapes the nature of the Kalman filter and influences
its tracking characteristics. A small value for r(n) leads to a filter that is highly
discriminating in frequency, but which takes time to converge. Conversely, fast
convergence with low frequency resolution is achieved by choosing a large r(n).


When applied to all observed time points Equation 13-7 provides a system of
overdetermined equations which may be solved using standard least squares
techniques.
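The stacked system of Equation 13-7 can be assembled and solved with a dense least squares solver, as a minimal sketch. The names, the scalar (constant) weighting r, and the batch solve are illustrative assumptions; a practical implementation would solve the system recursively and let r(n) vary per sample:

```python
import numpy as np

def track_order(y, freq_hz, fs, r=0.05):
    """Least-squares solution of the stacked structural (Eqn 13-3)
    and weighted data (Eqn 13-7) equations for one tracked order."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # c(n) = 2 cos(2*pi*f*dt); freq_hz may be a scalar or a per-sample array
    c = 2.0 * np.cos(2.0 * np.pi * np.broadcast_to(freq_hz, (n,)) / fs)
    A = np.zeros((2 * n - 2, n))
    b = np.zeros(2 * n - 2)
    for k in range(2, n):        # structural rows: x(k) - c(k)x(k-1) + x(k-2) ~ 0
        A[k - 2, k - 2] = 1.0
        A[k - 2, k - 1] = -c[k]
        A[k - 2, k] = 1.0
    for k in range(n):           # data rows: r * x(k) ~ r * y(k)
        A[n - 2 + k, k] = r
        b[n - 2 + k] = r * y[k]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

For a clean sine wave at the tracked frequency, the structural rows are satisfied exactly and the solution reproduces the signal; with noise added, a small r pulls the solution toward the structural model and rejects the noise.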


13.3

Practical considerations
This section considers some practical characteristics of the Kalman filter and the
parameters that influence them.

Frequency resolution
In principle the Kalman filter is capable of tracking sinusoidal components of any frequency up to half the sample frequency. In practice however, it has been found that the ability to distinguish between two closely spaced sine waves is inversely proportional to the total observation time. As a consequence, the observation time should be equal to the inverse of the minimum frequency spacing required between components.

Filter characteristics
It was mentioned above that the weighting r(n) used in Equation 13-7 can be used to influence the nature of the tracking filter. This weighting can be adjusted through the specification of a harmonic confidence factor, which is defined as the inverse of the weighting factor.

HC = 1 / r(n) = s_η(n) / s_ε(n)

Eqn. 13-8

Applying a high value implies confidence in the harmonic (structural data) and
assumes that the error in your measured data is high. In this case the filter will
be narrow so that it is highly discriminating in frequency. This is obtained at
the cost of time to converge in amplitude. Applying a low value implies that
the error in the measured data is low and consequently a wider filter can be
used which while less discriminating in frequency has the advantage that the
amplitude converges more quickly.
The three Kalman filters shown below are characterized by different harmonic
confidence factors which influence the width of the filter.


Figure 13-1  Effect of the Harmonic Confidence Factor (filters for HC = 50, 100 and 200)

Bandwidth characteristics
Equation 13-7 shows that the weighting function r(n), which is the inverse of the harmonic confidence factor, can be different for every time point. This means that the bandwidth of the filter can vary as a function of the frequency or order being tracked.
Using a frequency defined bandwidth means that at low Rpm values, a number of orders will be encompassed by the filter range.
Figure 13-2  Defining the filter bandwidth in terms of frequency and amplitude (amplitude vs. frequency, orders vs. Rpm)

Allowable slew rates


The formulation of the Kalman filter assumes that the frequency of the signal to
be tracked remains constant over three consecutive measurement points. When
the frequency is varying, but the variation over these three points is less than the
bandwidth of the filter then no problem arises.
The minimum value of the bandwidth is equal to the inverse of the observation time T. If the sample rate is Fs, then three consecutive samples span 2/Fs seconds, over which the frequency change must stay within 1/T, so the slew rate must be less than Fs/(2T).


Tracking closely spaced order signals with a high slew rate requires sampling at a high frequency over a long period, which imposes a heavy computational effort. However, even the severe slew rate encountered during the deceleration of a gas turbine, 75 Hz/sec tracked over a 5 second observation, implies from the above only a minimum sample rate of 750 Hz. It can be seen therefore that such an extreme slew rate does not impose any realistic limitation on the sample rate.
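The arithmetic is trivial to check (a hypothetical helper, not an LMS function):

```python
def min_sample_rate(slew_hz_per_s, observation_time_s):
    """From slew < Fs / (2T): the sample rate Fs must be at least
    2 * T * slew for the frequency change over three consecutive
    samples to stay within the minimum bandwidth 1/T."""
    return 2.0 * observation_time_s * slew_hz_per_s

# Gas turbine deceleration: 75 Hz/s tracked over a 5 s observation
print(min_sample_rate(75.0, 5.0))
```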


Chapter 14

Counting and histogramming

This chapter provides an introduction to various counting methods


and provides a reading list for further information at the end
Counting of single events and occurrences
Two-dimensional counting methods


14.1

Introduction
In fatigue analysis, real life measurements of mechanical or thermal loads are
used to assess and predict the damage inflicted by such loads over the life time
of a product. Figure 14-1 shows such measurements made on a vehicle part
over a period of around 5 minutes (330 seconds).

Figure 14-1  Typical load/time data (acceleration (g) vs. time (s), range ±0.4 g)

In terms of fatigue analysis, it is the occurrence of specific events that is of more significance than the frequency content of the loads. The approach used is to scan such time histories looking for typical fatigue-generating events and then to register how often they occur. These typical events can be demonstrated with a zoomed-in section of a load time history, shown in Figure 14-2.

Figure 14-2

Typical events in a data trace

The interesting events are:


The occurrence of peaks at specific levels


These are represented by the circles
and are determined using
``Peak counting'' methods described in section 14.2.1.


The exceedence or crossing of specific levels.


These are represented by the squares
and are determined using
``Level cross'' counting methods described in section 14.2.2.

The occurrence of signal changes of a certain size.


These are represented by the arrows and are determined using ``Range count'' methods described in section 14.2.3.

The determination of the signal characteristics based on the events mentioned above is a two stage process:

Stage 1, counting
The data is scanned for the occurrence of one of the events listed above. This in effect reduces the full time history to a set of mechanical or thermal load events.

Stage 2, histogramming
This involves dividing the counted occurrences into classes, where for each event its number of occurrences is specified.


14.2

One-dimensional counting methods


The procedures described above deal with the counting of `single events' or occurrences, which are further explored in this section.
Section 14.3 describes a number of methods used to examine the occurrence of additional event circumstances. These methods are termed `Two-dimensional counting methods'.

14.2.1

Peak count methods


The turning points in a data trace are termed ``peaks'' (maximums) and ``valleys'' (minimums). The number of times that peaks and valleys occur at specific levels is counted as shown below. You can choose to count both the peaks and the valleys (extrema), or just the peaks (maxima), or just the valleys (minima).
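This counting step can be sketched as follows (an illustrative helper, not the product's implementation):

```python
import numpy as np

def count_turning_points(signal, bin_edges, mode="extrema"):
    """Histogram the levels at which peaks (maxima), valleys (minima)
    or both (extrema) occur in a signal."""
    s = np.asarray(signal, dtype=float)
    peaks, valleys = [], []
    for i in range(1, len(s) - 1):
        if s[i - 1] < s[i] > s[i + 1]:
            peaks.append(s[i])        # local maximum = peak
        elif s[i - 1] > s[i] < s[i + 1]:
            valleys.append(s[i])      # local minimum = valley
    values = {"maxima": peaks, "minima": valleys,
              "extrema": peaks + valleys}[mode]
    counts, _ = np.histogram(values, bins=bin_edges)
    return counts
```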

Figure 14-3

Counting of peaks and valleys

A histogram is then created by calculating the distribution of the number of occurrences as a function of the level at which the occurrence appeared. Figure 14-4 shows the results of processing the above peak-valley reduction according to the three types of counting methods.


Figure 14-4  Histograms of peaks (maxima), valleys (minima) and both (extrema), as number of occurrences vs. level

14.2.2

Level cross counting methods


This procedure counts the number of times that the signal crosses various levels. Distinctions can be made between an upward (positive) and a downward (negative) crossing as illustrated below. You can choose to count the positive (up) crossings, the negative (down) crossings, or both types.

Figure 14-5

Counting of level crossings

Peak counts and level cross counts are closely related. The number of positive crossings of a certain level is equal to the number of peaks above that level minus the number of valleys above it. This implies that a level cross count can be derived from a peak-valley count.
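A direct crossing count can be sketched as follows (illustrative names, not the product's code):

```python
import numpy as np

def level_crossings(signal, levels, direction="both"):
    """Count up (positive-going), down (negative-going) or both
    crossings of each level in `levels`."""
    s = np.asarray(signal, dtype=float)
    result = []
    for level in levels:
        above = s > level
        up = int(np.count_nonzero(~above[:-1] & above[1:]))
        down = int(np.count_nonzero(above[:-1] & ~above[1:]))
        result.append({"up": up, "down": down, "both": up + down}[direction])
    return result
```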
A level crossing count is typically initiated by specifying a grid on top of the signal to determine the levels. The grid can be specified in ordinate units or as a percentage of the ordinate range. The resulting histograms for the above signal when up, down and both types of crossings are counted are shown below.


Figure 14-6  Histograms of level crossing counts: up (+) crossings, down (−) crossings, and up (+) & down (−) crossings, as number of occurrences vs. level

Range counting methods


A range count method will determine the number of times that a specific range
change is observed between successive peak-valley sequences.

Counting of single ranges


The range between successive peak-valley pairs is counted. Ranges are consid
ered positive when the slope is rising and negative when the slope is falling.
4
1

+
1

1
+
1

+
1
+
1

+
4

Figure 14-7

Counting of single peak-valley ranges

A histogram of the number of occurrences, as a function of the range, is generated.



Figure 14-8

Histogram of single peak-valley ranges
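The single range count and its histogram can be sketched as (illustrative helper):

```python
import numpy as np

def single_range_histogram(turning_points, bin_edges):
    """Signed ranges between successive turning points (positive for a
    rising slope, negative for a falling one), binned into a histogram."""
    ranges = np.diff(np.asarray(turning_points, dtype=float))
    counts, _ = np.histogram(ranges, bins=bin_edges)
    return ranges, counts
```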

Counting of range-pairs
The counting of single ranges (usually indicated as a range count) is both simple and straightforward, but sensitive to small variations of the signal. Thus in the analysis of the left hand signal illustrated in Figure 14-9, single range counting would result in a large number of relatively small ranges.

Figure 14-9  Sensitivity of single range counting to signal variation (the same signal before and after a low pass filter)

If this signal were passed through a filter suppressing the small load variations, the resulting signal would reveal a count of only one very large range. As a consequence the two analysis results are completely different, and the method is very sensitive to small signal variations.
The range-pair counting method overcomes this sensitivity. Rather than splitting the signal up into consecutive ranges, it is interpreted in terms of a ``main'' signal variation (or range) with a smaller cycle (range pair) superimposed on it.

Figure 14-10 Range pair counting


If a pair of extremes is separated by a range that is less than the defined range of interest (R), they are `filtered out' of the range count.


14.3

Twodimensional counting methods


The counting methods described so far consider the occurrence of single events in isolation from any other circumstances which may affect these events. However, it is also meaningful to count events differently, depending on other circumstances, using `two-dimensional' methods. Such methods are discussed in this section.

14.3.1

From-to counting
Such a ``combined" event can be the occurrence of a peak at level j followed by
a valley at level i. As an example, consider the combination of a valley at level
A followed by a peak at level C as illustrated in Figure 14-11.

Figure 14-11 From-to counting

In this example, the from-to sequence (1→2) is counted separately from the sequences (3→4) and (11→12), although the ranges involved are identical (C−A = D−B).
The result of such ``from-to'' counting can be presented in a so-called Markov matrix A[i,j]. The element a_ij gives the number of peaks at level j followed by a valley at level i. The matrix of results of counting the events in Figure 14-11 is shown below.


(Markov matrix: ``From'' levels j as columns, ``To'' levels i as rows, with separate additional columns for the counts of peaks and valleys at each level)

The lower left triangle of the Markov matrix contains the positive from-to events; the upper right triangle summarizes the negative transitions. The additional separate columns contain the counting results for peaks and valleys at a particular level. These results are easily obtained from the triangles of the Markov matrix.
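From-to counting over a sequence of turning points can be sketched as follows (a hypothetical helper; the turning points are assumed already classified into integer level indices):

```python
import numpy as np

def markov_matrix(levels, n_levels):
    """Markov matrix A[i, j]: number of transitions from a turning point
    at level j to the next turning point at level i.  Rising (positive)
    transitions fill one triangle, falling ones the other."""
    A = np.zeros((n_levels, n_levels), dtype=int)
    for j, i in zip(levels[:-1], levels[1:]):
        A[i, j] += 1          # from level j (column) to level i (row)
    return A
```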

14.3.2

Range-mean counting
Another example of a two-dimensional counting method results in the so-called Range-mean matrix. The variation or range (i−j) is associated with its corresponding mean value (i+j)/2.

Figure 14-12 Range mean counting

Instead of considering the actual values of A and C, the Range-mean method will consider the values C−A (the range) and B (= (A+C)/2, the mean).
Ranges, means and the number of occurrences can be displayed in a 3D format.


Figure 14-13  Display of range-mean counting (number of events vs. range and mean)

14.3.3

Range pair-range or Rainflow method


A two-dimensional counting method of special interest, especially for fatigue damage calculations, is the ``range pair-range'' method. Such a method was also developed, simultaneously and independently, in Japan, where it is known as the ``Rainflow method''. Both methods yield exactly the same results, i.e. they extract the same range-pairs and ranges from the signal, by combining the range-pair counting principle and the single range counting principle into one method. For further details see the references listed at the end of this chapter.
Essentially the signal is split into separate cycles, each having a specific amplitude (or range) and a mean. The result can be put directly into cumulative fatigue damage calculations according to Miner's rule and into simple crack growth calculations. Three steps are involved in the complete procedure.
1

Conversion of the load history into a peak-valley sequence.


As the counting procedure considers only the values of successive peaks
and valleys, the complete signal may first be reduced to a peak-valley se
quence. In doing this it is usual to apply a specific ``range-filter" or gate.
For a range filter of size R, a peak (or valley) at a certain level is only rec
ognized as such if the signal has dropped (or risen) to a level which is R
lower (or higher) then the previous peak (or valley) level.


Figure 14-14 Conversion of a load history to a peak valley sequence

In the above example e1 is counted as a peak because the signal drops by more than the range filter size R after it.
After counting the first peak, the next valid valley is looked for, which in this case is e2. This point is validated as a valley as the signal rises by more than R to go to e3. The algorithm then searches for the next valid peak. The first peak encountered is e3, but this is not counted as a valid peak as the signal does not drop sufficiently before reaching the next extremum in the signal (e4). So the algorithm checks whether the following peak is a valid one. Peak e5 is regarded as valid since the drop in signal level following it is greater than R.
In this example the range filter eliminated the small signal variation (e3, e4) from the peak-valley sequence.
Note that increasing the range filter eliminates only those transitions from the histogram for which the range is smaller than the new value of R. This is important for fatigue purposes since it proves that the filtering is not that sensitive to the range filter size.
2

Scanning of the entire signal for range-pairs.


This phase of the counting procedure consists of taking a set of four consecutive points and checking whether a range-pair is contained in it. If not, the search through the peak-valley sequence continues by shifting one data point ahead. Once a range-pair is detected, the pair is counted and removed from the sequence. After this, the next new set of four points is formed by adding the closest two previously scanned points to the two remaining after removal of the range pair. The fact that earlier scanned points are re-considered clearly distinguishes range pair-range counting from single range counting.

3

Counting the ``Residue''


At the end of the second phase, a ``residue'' of peaks and valleys is left, which is analyzed according to the single range principle. It can be shown that this residue has a specific shape, namely a diverging part followed by a converging part.
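The scan-and-remove phase and the residue can be sketched with a stack-based four-point test; this is an illustrative implementation of the principle, not the product's code, and it returns each counted pair as its (valley, peak) levels:

```python
def rangepair_range_count(turning_points):
    """Scan a peak-valley sequence with the four-point rule: whenever
    the inner pair of four consecutive extremes lies within the range
    of the outer pair, count it as a cycle and remove it, stepping
    backwards to re-examine earlier points.  Returns (cycles, residue)."""
    stack, cycles = [], []
    for point in turning_points:
        stack.append(point)
        while len(stack) >= 4:
            p0, p1, p2, p3 = stack[-4:]
            inner_lo, inner_hi = sorted((p1, p2))
            outer_lo, outer_hi = sorted((p0, p3))
            if outer_lo <= inner_lo and inner_hi <= outer_hi:
                cycles.append((inner_lo, inner_hi))
                del stack[-3:-1]          # remove the counted inner pair
            else:
                break
    return cycles, stack                  # stack holds the residue
```

Note how removing a counted pair makes the two outer points adjacent again, so the loop automatically "steps backwards" over previously scanned extremes.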

Example
The following example shows how the range pair-range method operates. Consider the time signal shown below. A peak-valley reduction with a range filter of size R results in the peak-valley sequence shown below.

The second phase (scanning for range-pair occurrences) starts by looking at the first 4 extremes. In this group (S1, S2, S3, S4), a pair is counted if the two inner extremes (S2, S3) fall within the range covered by the two outer extremes (S1 and S4). If this is not the case (as in this example), then the algorithm moves one step forward and considers the extremes S2, S3, S4 and S5. These do not satisfy the condition either, so the extremes S3, S4, S5 and S6 are considered, and this time a range pair is counted.

Counting a range-pair implies deleting the counted extremes from the signal. ``Stepping backwards'', the extremes S1, S2, S3 and S6 are now considered and another pair (S2, S3) is found.



From the remaining four extremes, no further ``pairs'' can be extracted. This forms the residue, which is further counted as single ``from-to'' ranges.

Further considerations
The result of the range pair-range counting depends on the length of the data record being analyzed at one time, because the largest range counted will be between the lowest valley and the highest peak. This largest variation is often referred to as the `half load cycle'. If the lowest valley occurs near the beginning of a very long load cycle, and the highest peak near the end, you should consider whether it makes physical sense to combine such occurrences, so remote in time, into one cycle.
The counting method is insensitive to the size of the range filter applied. The only effect of increasing the range filter size from R to 3R, for example, is that all elements in a from-to counting for which |from − to| < 3R become zero. In other words, the choice of the range filter size is not critical.
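The range filter (gate) of step 1 can be sketched as a hysteresis reduction; this is an illustrative sketch under the simplifying assumption that the first sample is taken as the first extremum:

```python
def peak_valley_reduce(signal, gate):
    """Reduce a signal to alternating peaks and valleys, ignoring any
    reversal smaller than the range filter size `gate` (R)."""
    out = [signal[0]]
    trend = 0                              # +1 rising, -1 falling
    for x in signal[1:]:
        if trend == 0:
            if abs(x - out[0]) > gate:     # first excursion larger than R
                trend = 1 if x > out[0] else -1
                out.append(x)
        elif (x - out[-1]) * trend >= 0:
            out[-1] = x                    # still moving the same way
        elif abs(x - out[-1]) > gate:
            out.append(x)                  # reversal larger than R
            trend = -trend
    return out
```

A small wiggle that never reverses by more than the gate is absorbed into the surrounding transition, exactly as the (e3, e4) pair was eliminated in Figure 14-14.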


14.4

References
[1] Fatigue load monitoring of tactical aircraft, de Jonghe J.B., 29th Meeting of the AGARD SMP, Istanbul, September 1969.
[2] The monitoring of fatigue loads, de Jonghe J.B., IACS Congress, Rome, September 1970.
[3] Statistical load data processing, van Dijk C.M., 6th ICAF Symposium, Miami, Florida USA, May 1971.
[4] Fatigue of Metals subjected to varying stress, Matsuiski M. & Endo T., Kyushu district meeting, Japan Society of Mechanical Engineers, March 1968.
[5] Cycle counting and fatigue damage, Watson P., SEE Symposium of 12th February 1975, Journal of the Society of Environmental Engineers, September 1976.


Theory and Background

Part IV
Analysis and design
Chapter 15  Estimation of modal parameters . . . . . . . . 219
Chapter 16  Operational modal analysis . . . . . . . . . . 267
Chapter 17  Running modes analysis . . . . . . . . . . . . 281
Chapter 18  Modal validation . . . . . . . . . . . . . . . 293
Chapter 19  Rigid body modes . . . . . . . . . . . . . . . 309
Chapter 20  Design . . . . . . . . . . . . . . . . . . . . 321
Chapter 21  Geometry concepts . . . . . . . . . . . . . . 357

Chapter 15

Estimation of modal parameters

This chapter describes the basic principles involved in estimating modal parameters. The topics covered are:
The definition and derivation of modal parameters
Factors to consider in the estimation
Descriptions of different parameter estimation techniques
Calculation of static compensation modes


15.1

Estimation of modal parameters


A modal analysis provides a set of modal parameters that characterize the dynamic behavior of a structure. These modal parameters form the modal model, and Figure 15-1 illustrates the process of arriving at the modal parameters.

(A frequency response function is measured between input and output, and a curve fit to the FRF estimates the modal parameters: frequency, damping and mode shapes.)

Figure 15-1 Derivation of modal parameters

If a structure exists on which measurements can be made, then it can be assumed that a parametric model can be defined that describes that data. The starting point is usually a set of measured data, most commonly frequency response functions (FRFs), or their time domain equivalent, impulse responses (IRs). For IRs the relation between modal parameters and the measurements is expressed in Equation 15-1.

h_ij(t) = Σ_{k=1..N} ( r_ijk e^(λ_k t) + r*_ijk e^(λ*_k t) )

Eqn 15-1

The corresponding relation for FRFs is given in Equation 15-2.


h_ij(jω) = Σ_{k=1..N} ( r_ijk / (jω − λ_k) + r*_ijk / (jω − λ*_k) )

Eqn 15-2

where
h_ij(t) = IR between the response (or output) degree of freedom i and the reference (or input) DOF j
h_ij(jω) = FRF between the response DOF i and reference DOF j
N = number of modes of vibration that contribute to the structure's dynamic response within the frequency range under consideration
r_ijk = residue value for mode k
λ_k = pole value for mode k
* designates the complex conjugate.


The pole value can be expressed as shown in Equations 15-3 and 15-4.

λ_k = −σ_k + j ω_dk

Eqn 15-3

where
ω_dk = the damped natural frequency of mode k
σ_k = the damping factor of mode k

or

λ_k = −ζ_k ω_nk + j ω_nk √(1 − ζ_k²)

Eqn 15-4

where
ω_nk = the undamped natural frequency of mode k
ζ_k = the damping ratio of mode k

Equation 15-5 shows that the residue can be proven to be the product of three
terms
r_ijk = a_k v_ik v_jk

Eqn 15-5

where
v_ik = the mode shape coefficient at response DOF i of mode k
v_jk = the mode shape coefficient at reference DOF j of mode k
a_k = a complex scaling constant, whose value is determined by the scaling of the mode shapes

Note that the mode shape coefficients can be either real (normal mode shapes) or complex. If the mode shapes are real, the scaling constant can be expressed as

a_k = 1 / (2 j m_k ω_dk)

Eqn 15-6

where
m_k = the modal mass of mode k

The poles, natural frequencies (damped and undamped), damping factors or ratios, mode shapes, and residues are commonly referred to as modal parameters (parameters of the modes of the structure).
The fundamental problem of parameter estimation consists of adjusting (estimating) the parameters in the model so that the data predicted by the model approximate (or curve-fit) the measured data as closely as possible. Modal parameters can be estimated using a number of techniques. These techniques are discussed in the following sections.
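Equation 15-2 can be evaluated directly to synthesize an FRF from a set of estimated poles and residues; a minimal sketch (illustrative names, conjugate pair of each mode added automatically):

```python
import numpy as np

def frf_from_modes(omega, poles, residues):
    """Synthesize h_ij(j*omega) from pole values lambda_k and
    residues r_ijk according to Eqn 15-2."""
    jw = 1j * np.asarray(omega, dtype=float)
    h = np.zeros(jw.shape, dtype=complex)
    for lam, r in zip(poles, residues):
        # each mode contributes a pole term plus its complex conjugate
        h += r / (jw - lam) + np.conj(r) / (jw - np.conj(lam))
    return h
```

Comparing such a synthesized FRF against the measured one is the usual check that a set of estimated modal parameters actually curve-fits the data.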

A note about units


The frequency and damping values have a dimension of 1/time, and are therefore stored in Hz.
The residues, as appearing in Equation 15-1 or 15-2, have the same dimension as the measurement data. As an aside, it is important to note that residues have a dimension. Residues are composed of a product of mode shape coefficients and a scaling constant (Equation 15-5). The mode shape coefficients by themselves do not have any dimension, nor absolute (or scaled) magnitude. Dimension, and therefore units, will be viewed as attributes of the scaling constant.
Finally, for multiple input analysis, the residues are written in factored form as the product of mode shapes with modal participation factors. Again, the product of the factors has a dimension and absolute magnitude. Formally, the mode shape coefficients will again be considered as without dimension, and therefore units will be viewed as attributes of the residues.


15.2

Types of analysis
This section discusses some general principles to be considered when performing a modal analysis. These topics include:
Using single or multiple degree of freedom methods in section 15.2.1
Making local or global estimates in section 15.2.2
Using multiple input analysis in section 15.2.3
Using time or frequency domain analysis in section 15.2.4
Special conditions which apply when performing vibro-acoustic analysis in section 15.2.5
The specific parameter estimation techniques are described in section 15.3.

15.2.1

Single or multiple degree of freedom method


If, in a given frequency band, only one mode is assumed to be important, then
the parameters of this mode can be determined separately. This assumption is
sometimes called the single degree of freedom (sDOF) assumption.
Figure 15-2  The single degree of freedom assumption (FRF magnitude d/f vs. frequency, band ω_min to ω_max)

Under this assumption, the FRF equation 15-2 can be simplified to equation
15-7. This is assuming the data to have the dimension of displacement over
force.
h_ij ≈ r_ijk / (jω − λ_k) + r*_ijk / (jω − λ*_k),  for ω_min ≤ ω ≤ ω_max

Eqn 15-7
It is possible to compensate for the modes in the neighborhood of this band by introducing so-called upper and lower residual terms into the equation.
h_ij = r_ijk / (jω − λ_k) + r*_ijk / (jω − λ*_k) + ur_ij − lr_ij / ω²

Eqn 15-8

where
ur_ij = upper residual term (residual stiffness) used to approximate modes at frequencies above ω_max
lr_ij = lower residual term (residual mass) used to approximate modes at frequencies below ω_min
Upper and lower residuals are illustrated in Figure 15-3.
Figure 15-3  Upper and lower residuals (FRF magnitude d/f vs. frequency, showing the stiffness line approached by the upper residual ur_ij and the mass line approached by the lower residual lr_ij/ω²)

Equation 15-7 can be further simplified by neglecting the complex conjugate term, and so becomes

h_ij ≈ r_ijk / (jω − λ_k)

Eqn 15-9

Single degree of freedom methods


The single DOF assumption forms the basis for parameter estimation techniques such as Peak picking, Mode picking and Circle fitting.
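A minimal peak-picking sketch on a magnitude FRF, using the half-power (−3 dB) bandwidth for the damping ratio; this is an illustrative textbook estimator, not the product's implementation:

```python
import numpy as np

def peak_pick(freq, frf_mag):
    """sDOF peak picking: the resonance frequency is taken at the
    magnitude peak, the damping ratio from the half-power bandwidth,
    zeta ~ (f2 - f1) / (2 * f_peak)."""
    mag = np.asarray(frf_mag, dtype=float)
    k = int(np.argmax(mag))
    f_peak = freq[k]
    half = mag[k] / np.sqrt(2.0)                  # -3 dB level
    below_left = np.where(mag[:k] < half)[0]
    below_right = np.where(mag[k:] < half)[0]
    f1 = freq[below_left[-1]] if below_left.size else freq[0]
    f2 = freq[k + below_right[0]] if below_right.size else freq[-1]
    return f_peak, (f2 - f1) / (2.0 * f_peak)
```

This works only under the sDOF assumption: a close neighboring mode distorts both the peak location and the half-power points.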


Multiple degree of freedom methods


The sDOF assumption is valid only if the modes of the system are well decoupled. In general this may not be the case. It then becomes necessary to approximate the data with a model that includes terms for several modes. The parameters of several modes are then estimated simultaneously with so-called multiple degree of freedom methods.

15.2.2  Local or global estimates


If you recall the time domain relationship between modal parameters and measurement functions,

    h_ij(t) = Σ_{k=1}^{N} ( r_ijk e^(λ_k t) + r*_ijk e^(λ*_k t) )        Eqn 15-10

you will see that the pole values λ_k are independent of both the response and the reference DOFs. In other words, the pole value λ_k is a characteristic of the system and should be found in any function that is measured on the structure. When applying parameter estimation techniques, one of two strategies can be employed: making local or global estimates.

Local estimates

Each data record h_ij is analyzed individually, and a potentially different estimate of the pole value λ_k is found each time. Analyzing data in this manner produces as many estimates of each pole as there are data records. It is then left to the user to decide which estimate is the best, or to somehow calculate the best average of all the estimates. Peak picking and Circle fitting are techniques that calculate local estimates of pole values.

Global estimates

All the data records are analyzed simultaneously in order to estimate the structure's characteristics. With this approach, a unique estimate of the pole values λ_k is obtained. Such estimates are therefore called global estimates. The Least Squares Complex Exponential, Complex Mode Indicator Function and Direct Parameter Identification methods allow you to obtain global estimates of structure characteristics.


15.2.3  Multiple input analysis


Assume that data is available between Ni input DOFs and No output DOFs. The expression for each of the individual data records (equation 15-10) can then be rewritten in matrix form for all the data records:

    [H(t)] = Σ_{k=1}^{N} ( [R_k] e^(λ_k t) + [R*_k] e^(λ*_k t) )        Eqn 15-11

where
    [H]   = (No × Ni) matrix with h_ij as elements
    [R_k] = (No × Ni) matrix with r_ijk as elements

Equation 15-5 can be used to express the residue matrix in factored form:

    [R_k] = a_k {V}_k ⟨V⟩_k        Eqn 15-12

where
    {V}_k = No vector (column) with mode shape coefficients at the output DOFs
    ⟨V⟩_k = Ni vector (row) with mode shape coefficients at the input DOFs

If DOFs i and j are both output and input DOFs, then the above equation implies Maxwell-Betti reciprocity:

    r_ijk = r_jik        Eqn 15-13

This assumption is not essential however, since the residue matrix can be expressed in a more general form:

    [R_k] = {V}_k ⟨L⟩_k        Eqn 15-14

where ⟨L⟩_k is a vector (row) with Ni coefficients that express the participation of the mode k in response data relative to the different input DOFs.
These coefficients are therefore called modal participation factors. Note that if reciprocity is assumed, the modal participation factors are proportional to the mode shape coefficients at the input DOFs.
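The rank-one structure of the residue matrix, and the way reciprocity follows when the participation row is proportional to the mode shape at the input DOFs, can be sketched numerically. The vectors and scaling factor below are invented for illustration:

```python
import numpy as np

# Hedged sketch of eqns 15-12 to 15-14: the residue matrix of one mode as
# a rank-1 outer product of mode shape column {V}_k and participation row
# <L>_k. All numbers are made up.
V = np.array([1.0, -0.4, 0.7])      # mode shape at 3 output DOFs
a_k = 2.5                            # scaling factor

# Reciprocal case (eqn 15-12): the participation row is proportional to the
# mode shape at the input DOFs (here, inputs are output DOFs 0 and 1).
L = a_k * V[:2]
R = np.outer(V, L)                   # residue matrix [R_k], size 3 x 2

# Maxwell-Betti reciprocity (eqn 15-13): r_ijk = r_jik for shared DOFs
print(np.isclose(R[0, 1], R[1, 0]))
```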


Using the factored form of the residue matrix, equation 15-11 can be written as

    [H(t)] = Σ_{k=1}^{N} ( {V}_k ⟨L⟩_k e^(λ_k t) + {V*}_k ⟨L*⟩_k e^(λ*_k t) )        Eqn 15-15

If just the data between any output DOF i and all input DOFs is considered, then

    ⟨H⟩_i(t) = Σ_{k=1}^{N} ( v_ik ⟨L⟩_k e^(λ_k t) + v*_ik ⟨L*⟩_k e^(λ*_k t) )        Eqn 15-16

where
    ⟨H⟩_i = Ni vector of data between output DOF i and all input DOFs

It is essential in the model of equation 15-16 that both the poles and the modal participation factors are independent of the output DOF. In other words, in this formulation the characteristics become

    ⟨L⟩_k e^(λ_k t)        Eqn 15-17

A multiple input modal parameter estimation technique is one that analyses data relative to several inputs simultaneously to estimate the characteristics expressed by equation 15-17 (i.e. both the pole values and the modal participation factors). The basis for these techniques is a model expressed by equation 15-16.

The identification of modal participation factors is essential for decoupling highly coupled or even repeated roots. To illustrate this, consider a structure that has two modes with pole values λ_1 and λ_2 very close to each other. Neglecting the other modes and the complex conjugate terms, the response data relative to the input DOF j can be expressed as
    {H}_j(t) = {V}_1 l_1j e^(λ_1 t) + {V}_2 l_2j e^(λ_2 t) + …        Eqn 15-18

or, since λ_1 ≈ λ_2 = λ,

    {H}_j(t) = ( {V}_1 l_1j + {V}_2 l_2j ) e^(λ t) + …        Eqn 15-19

The latter equation shows that in the response data relative to an input DOF j, a combination of the coupled modes is observed and not the individual modes. The combination coefficients for the modes are the modal participation factors l_1j and l_2j.

The response data relative to another input DOF l is expressed by an equation similar to equation 15-19:

    {H}_l(t) = ( {V}_1 l_1l + {V}_2 l_2l ) e^(λ t) + …        Eqn 15-20

The only difference between these last two equations is the modal participation factors l_1l and l_2l. If these are linearly independent of the modal participation factors for input j, then the modes will appear in a different combination in the response data relative to input l. As a multiple input parameter estimation technique analyses data relative to several inputs simultaneously, and the modal participation factors are identified, it becomes possible to detect highly coupled or repeated modes.

15.2.4  Time vs frequency domain implementation


Using digital signal processing methods, only samples of a continuous function
are available. For modal parameter estimation the sampled data consist most
frequently of FRF measurements. Normally these are taken at equally spaced
frequency lines. Testing techniques such as stepped sine excitation allow you to
measure data at unequally spaced frequency lines.
For modal parameter estimation applications with the data measured in the frequency domain, introducing the sampled nature of the data transforms the equation for the model to

    h_ij,n(jω) = Σ_{k=1}^{N} [ r_ijk/(jω_n − λ_k) + r*_ijk/(jω_n − λ*_k) ]        Eqn 15-21

where
    h_ij,n = samples of data in the measured range
    ω_n = sampled value of frequency in the measured range


A frequency domain parameter estimation method uses data directly in the frequency domain to estimate modal parameters. It is therefore irrelevant whether the frequency lines are equally spaced or not. Such methods are based directly on the model expressed by equation 15-21.
If the data are sampled at equally spaced frequency lines, then the FRF can be transformed back to the time domain to obtain a corresponding Impulse Response (IR). A Fast Fourier Transform (FFT) algorithm is used for this transformation, but the restriction that the number of frequency lines be a power of 2 (e.g. 32, 64, 128...) no longer applies. After transformation, a series of equally spaced samples of the corresponding impulse response functions is obtained. A time domain parameter estimation technique allows you to analyze such equally spaced time samples to estimate modal parameters.
In practice, a variety of conditions mean that the frequency band over which
data is analyzed is smaller than the full measurement band. This is illustrated
in Figure 15-4.

Figure 15-4  Analysis frequency band vs. measurement band (h_ij amplitude vs. frequency; the analysis band runs from ω_min to ω_max within the wider measurement band)

The analysis frequency band includes only three modes whereas the measurement band includes five. If the data is transformed from frequency to time domain, then the time increment between samples will be determined by the analysis frequency band and not the measurement band. If the frequency band of analysis is bounded by ω_max and ω_min, then Δt is determined from

    Δt = 2π / ( 2(ω_max − ω_min) )        Eqn 15-22

By substituting sampled time for continuous time,

    h_ij,n = Σ_{k=1}^{N} ( r_ijk e^(λ_k nΔt) + r*_ijk e^(λ*_k nΔt) )        Eqn 15-23

or

    h_ij,n = Σ_{k=1}^{N} ( r_ijk z_k^n + r*_ijk z_k^(*n) )        Eqn 15-24

where

    z_k = e^(λ_k Δt)        Eqn 15-25

Time domain parameter estimation methods are based on the model defined by equation 15-24. They analyze h_ij,n to estimate z_k; λ_k is then calculated from equation 15-25. Note however that this calculation is not unique, since for any integer m

    z_k = e^((λ_k + j·m·2π/Δt)Δt) = e^(λ_k Δt)        Eqn 15-26

This implies that no poles outside the frequency band 2π/Δt can be identified. In other words, with a time domain parameter estimation method, all estimated poles are to be found in the frequency band of analysis (ω_min, ω_max). This may cause problems in estimating modal parameters if the data in the frequency band of analysis is strongly influenced by modes outside this band (residual effects). Since with frequency domain methods λ_k is estimated directly, no such limitation arises. A frequency domain technique may therefore sometimes be preferred over a time domain technique for analyzing data over a narrow frequency band, where residual effects are important.
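The non-uniqueness of equation 15-26 can be checked numerically: a pole shifted by any integer multiple of j·2π/Δt maps onto exactly the same z_k. The pole and sampling values below are invented for illustration:

```python
import cmath

# Illustrative values (not from the text): a lightly damped pole, and a
# time increment corresponding to a 200 Hz analysis band.
dt = 1.0 / 200.0                            # time increment between IR samples
lam = complex(-3.14, 2 * cmath.pi * 50)     # pole lambda_k = -sigma + j*omega_d

# A pole shifted by an integer multiple of j*2*pi/dt (eqn 15-26) ...
m = 3
lam_shifted = lam + 1j * m * 2 * cmath.pi / dt

z = cmath.exp(lam * dt)                     # z_k = e^(lambda_k * dt), eqn 15-25
z_shifted = cmath.exp(lam_shifted * dt)

# ... yields exactly the same z_k, so a time domain method cannot tell the
# two apart: all estimated poles fall inside the analysis band.
print(abs(z - z_shifted) < 1e-9)
```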

15.2.5  Vibro-acoustic modal analysis


Coupling between the structural dynamic behavior of a system and its interior acoustical characteristics can have an important impact in many applications. Based on combined vibrational and acoustical measurements with respect to acoustical or structural excitation, a mixed vibro-acoustical analysis can be performed.

The finite element equation of motion is used to derive the equations describing the vibro-acoustical behavior:


    ( −ω² M_S + iω C_S + K_S ){x} = {f} + {l_p}        Eqn 15-27

with
    M_S, C_S, K_S = the structural mass, damping and stiffness matrices
    {f} = the externally applied forces
    {l_p} = the acoustical pressure loading vector

In the fluid, the indirect acoustical formulation states:

    ( −ω² M_f + iω C_f + K_f ){p} = {q̇} + ω²{l_f}        Eqn 15-28

with
    M_f, C_f, K_f = matrices describing the pressure-volume acceleration relation
    ω²{l_f} = the structural loading vector on the acoustical equations

Combining these equations with

    {l_p} = ∫_Sb p dS        Eqn 15-29

    {l_f} = ∫_Sb x dS        Eqn 15-30

and rewriting the formulations results in the description of the vibro-acoustical coupled system:

    ( [ K_S  K_c ]        [ C_S  0   ]        [ M_S  0   ] )  { x }     { f }
    ( [ 0    K_f ]  + iω  [ 0    C_f ]  − ω²  [ M_c  M_f ] )  { p }  =  { q̇ }        Eqn 15-31

This represents a second order model formulation of the vibro-acoustical behavior, which is clearly non-symmetrical.
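The block structure of equation 15-31 and its non-symmetry can be sketched numerically. All matrix values below are invented for illustration; only the placement of the coupling blocks (K_c in the stiffness matrix, M_c in the mass matrix) follows the equation:

```python
import numpy as np

# Minimal 1-DOF-structure / 1-DOF-fluid sketch of eqn 15-31 with made-up
# numbers; Kc and Mc are the coupling blocks linking pressure and motion.
Ks, Kf, Kc = 1.0e6, 4.0e5, -1.0     # stiffness blocks and coupling
Cs, Cf = 10.0, 5.0                  # damping blocks
Ms, Mf, Mc = 1.0, 2.0e-3, 1.0       # mass blocks and coupling

K = np.array([[Ks, Kc], [0.0, Kf]])
C = np.array([[Cs, 0.0], [0.0, Cf]])
M = np.array([[Ms, 0.0], [Mc, Mf]])

w = 2 * np.pi * 100.0               # evaluate the dynamic stiffness at 100 Hz
Z = K + 1j * w * C - w**2 * M       # (K + iwC - w^2 M) from eqn 15-31

# The coupled system matrix is non-symmetric: the coupling sits in the
# upper-right of K but in the lower-left of M.
print(np.allclose(Z, Z.T))
```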
The above equation also reflects the vibro-acoustical reciprocity principle, which can be expressed as:

    p_i / f_j |_(q̇=0) = − ẍ_j / q̇_i |_(f=0)        Eqn 15-32

Most of the multiple input - multiple output modal parameter algorithms do not require symmetry. So the non-symmetry of the basic set of equations, and hence of the modal description, does not pose a problem in obtaining valid modal frequencies, damping factors and mode shapes.


Structural excitation can be substituted for acoustical excitation. The modal models derived from both are compatible, but differ in a scaling factor per mode due to the special non-symmetry of the set of equations. To go from the structural formulation to the acoustical formulation, a scaling factor which is the squared eigenvalue of the corresponding mode is required. This is fully explained in the paper 'Vibro-acoustical Modal Analysis: Reciprocity, Model Symmetry and Model Validity' by K. Wyckaert and F. Augusztinovicz.


15.3  Parameter estimation methods


A summary of the different methods and their applications is given in Table 15.1.

    Method                  Application            DOF     Domain  Estimates  Inputs
    ------------------------------------------------------------------------------------
    Peak picking            frequency,             single  freq    local      single
                            damping
    Mode picking            mode shapes            single  freq    local      single
    Circle fitting          frequency,             single  freq    local      single
                            damping,
                            mode shapes
    Complex Mode            frequency,             multi   freq    global     single or
    Indicator Function      damping,                                          multiple
                            mode shapes
    Least Squares           frequency,             multi   time    global     single or
    Complex Exponential     damping,                                          multiple
                            modal participation
                            factors
    Least Squares           mode shapes            multi   freq    global     single or
    Frequency Domain                                                          multiple
    Frequency domain        frequency,             multi   freq    global     single or
    Direct Parameter        damping,                                          multiple
    identification          modal participation
                            factors

Table 15.1  Parameter estimation methods and applications

Selection of a method

A guide on which parameter estimation method to adopt is outlined below. Details on all the methods are given in the following sections.

SDOF

Single degree of freedom curve fitters are rough and ready, and will give you a quick impression of the most dominant modes (frequency, damping and mode shapes) influencing a structure under test. As such they are useful in checking the measurement setup and can help assess:
-  whether all the transducers are working and correctly calibrated;
-  whether the accelerometers are correctly labelled with their node and direction;
-  whether all the nodes are instrumented.

For this purpose it is recommended to identify real modes, since these are the easiest to interpret when displayed.

The circle fitter gives the most accurate estimates of the SDOF techniques, but may create large errors at nodal points of the mode shapes.
Complex MIF
This method can be used in the same way as the SDOF techniques to give you
an idea of the most dominant modes and check the test setup. It has the
advantage that multiple input FRFs can be used and the mode shape estimates
are of a higher quality. Furthermore, it can extract a modal model that includes
the most dominant modes in a particular frequency band.
Time domain MDOF
This is the most general purpose parameter estimation technique and is probably the standard tool used in modal analysis. It provides a complete and accurate modal model from MIMO FRFs. Its major weakness is in analyzing heavily damped systems where the damping is greater than 5%, such as a fully equipped car.
Frequency domain MDOF
The Frequency Domain Direct Parameter technique provides similar results to the time domain technique described above in terms of accuracy, but is generally slower. It is weak when dealing with lightly damped systems (damping less than 0.3%), but performs better on heavily damped ones, thus complementing the other MDOF technique. Since it operates in the frequency domain, it is able to analyze FRFs with an unequally spaced frequency axis.

15.3.1  Peak picking
Peak picking is a single DOF method to make local estimates of frequency and damping. The method is based on the observation that the system response goes through an extremum in the neighborhood of the natural frequencies.

For example, on a frequency response function (FRF) the real part will be zero around the natural frequency (minimum coincident part), the imaginary part will be maximal (peak quadrature) and the amplitude will also be maximal (peak amplitude). The frequency value where this extremum is observed is called the resonant frequency ω_r, and is a good estimate of the natural frequency ω_nk of the mode for lightly damped systems.


A corresponding estimate of the damping can be found with the 3 dB rule. The frequency values ω_1 and ω_2 on both sides of the peak of the FRF, at which the amplitude is 3 dB below the peak amplitude, are introduced in the formula in equation 15-33 to yield the critical damping ratio ζ. The method is illustrated in Figure 15-5 below; ω_1 and ω_2 are also called half power points.

    ζ = (ω_2 − ω_1) / (2 ω_r)        Eqn 15-33

Figure 15-5  Half power (3 dB) method for damping estimates (FRF amplitude in dB vs. frequency, showing ω_1, ω_r and ω_2 with the 3 dB drop)
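The 3 dB rule can be sketched on a synthetic SDOF FRF; the resonant frequency and damping ratio below are made-up test values, not from the text:

```python
import numpy as np

# Hedged sketch of the half power method: locate the 3 dB points w1, w2
# around the peak of a synthetic FRF and apply eqn 15-33.
wr, zeta = 2 * np.pi * 20.0, 0.02          # illustrative resonance and damping
w = np.linspace(0.5 * wr, 1.5 * wr, 20001)
h = np.abs(1.0 / (wr**2 - w**2 + 2j * zeta * wr * w))   # displacement/force FRF

i_peak = np.argmax(h)
half_power = h[i_peak] / np.sqrt(2)        # amplitude 3 dB below the peak

# first point at/above the half power level left of the peak, last one right of it
left = w[:i_peak][h[:i_peak] >= half_power][0]
right = w[i_peak:][h[i_peak:] >= half_power][-1]

zeta_est = (right - left) / (2 * w[i_peak])  # eqn 15-33
print(round(zeta_est, 3))                    # close to the true value 0.02
```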

Since the curve fitter locates the resonance frequency on a spectral line, significant errors can be introduced if the FRF has a low frequency resolution and the peaks of modes fall between two spectral lines. This can be compensated for by extrapolating the slopes on either side of the picked line to determine the amplitude of the FRF more precisely.

It may be necessary to deal with the situation where one of the half power points is not found. This may arise when the frequency of one mode is close to that of another mode, or when it is near the ends of the measured frequency range.

Note!

Peak picking is a single DOF method: it is therefore only suitable for data with
well separated modes.

As this method yields local estimates, it requires only one data record to obtain
frequency and damping values for all modes. However, if several data records
are available, it may be that different records identify different modes.


15.3.2  Mode picking
If you assume that the modes are uncoupled and lightly damped, the modal amplitude can be computed from the peak quadrature or peak amplitude of the FRF. With this assumption, the data in the neighborhood of the resonant frequency can be approximated by

    h_ij,n ≈ r_ijk / (jω_n − λ_k)        Eqn 15-34

(see also equation 15-7)

The amplitude is maximum at the resonant frequency. However, for lightly damped modes the resonant frequency, natural frequency and damped natural frequency are all approximately the same. Therefore the amplitude at resonance, or the modal amplitude, is found at ω_n equal to ω_dk.

By substituting ω_dk for ω_n in equation 15-34, the modal amplitude is given by

    r_ijk / σ_k        Eqn 15-35

Note that from the modal amplitude, a residue or mode shape estimate is obtained by multiplying by the modal damping.

To use the Mode picking method you must have an estimate of ω_dk. This estimate can be obtained with the Peak picking method (see section 15.3.1) or other techniques.
The Mode picking method is obviously quite sensitive to frequency shifts in the data. If, for example, the resonant frequency of a mode in a data record is shifted a few spectral lines with respect to the frequency that is used as resonant frequency for that mode, then the modal amplitude would be erroneously picked. To accommodate situations where frequency shifts occur, you need to specify an allowed frequency shift around the resonant frequencies ω_dk that are used to calculate the modal amplitudes. Rather than picking the modal amplitude at the resonant frequencies, the method now scans a band around each modal frequency for each data record. The maximum amplitude in this band is used to determine the modal amplitude and thus the mode shape coefficient.

Mode picking allows you to make a very quick determination of a modal model. The accuracy of this model, however, depends on how well the assumptions of the method apply to the data.
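The band-scan step can be sketched as follows; the helper name `mode_pick`, the FRF and all numeric values are invented for illustration:

```python
import numpy as np

# Hedged sketch of the band scan: around a nominal resonant frequency wd,
# search within an allowed shift for the maximum FRF amplitude.
def mode_pick(freqs, frf, wd, allowed_shift):
    """Return the modal amplitude found in the band wd +/- allowed_shift."""
    in_band = (freqs >= wd - allowed_shift) & (freqs <= wd + allowed_shift)
    return np.abs(frf[in_band]).max()

freqs = np.linspace(0, 100, 1001)            # spectral lines, 0.1 apart
wd = 40.0                                     # nominal resonant frequency
# synthetic FRF whose actual peak is shifted two spectral lines from wd
frf = 1.0 / (1 + 1j * (freqs - 40.2) / 0.5)

picked = mode_pick(freqs, frf, wd, allowed_shift=1.0)
exact = np.abs(frf).max()
print(abs(picked - exact) < 1e-12)           # the shifted peak is still found
```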


15.3.3  Circle fitting
The Circle fitting method is based on estimating a circle in the complex plane through data points in a band around a selected mode. The method was originally developed by Kennedy and Pancu for lightly damped systems under the single DOF assumption. In the band around a mode, the data can be approximately described by

    h_ij,n = r_ijk/(jω_n − λ_k) + r*_ijk/(jω_n − λ*_k)        Eqn 15-36

Making an abstraction of the indices i, j and k, introducing complex notation for the residue, and approximating the complex conjugate term by a complex constant, equation 15-36 transforms to

    h_n = (U + jV) / ( σ + j(ω_n − ω_d) ) + R + jI        Eqn 15-37

It can be demonstrated that the modal parameters in this expression can be derived from the coefficients of a circle that is fitted to the data in the complex plane, as shown in Figure 15-6.

Figure 15-6  Relation between circle fitting parameters and modal parameters (a circle of diameter d in the complex plane passing through (R, I), with phase angle φ = arctan(V/U) and the natural frequency ω_d at the point of maximum angular spacing)

The natural frequency ω_d is determined by the maximum angular spacing method, where the natural frequency is assumed to occur at the point of maximum rate of change of angle between data points in the complex plane.


Having determined the natural frequency and assuming a lightly damped system, the damping is given by equation 15-38, where ω_1 and ω_2 are two data frequencies on either side of ω_d and φ_1, φ_2 the corresponding angles on the circle:

    σ = (ω_2 − ω_1) / ( tan(φ_1/2) + tan(φ_2/2) )        Eqn 15-38

The complex residue U + jV is determined from the diameter of the circle d and the phase φ, as illustrated in Figure 15-6:

    φ = arctan(V/U)        Eqn 15-39

    √(U² + V²) = σ d        Eqn 15-40

Circle fitting is a basic sDOF parameter estimation method. It can be used to obtain frequency, damping and mode shape estimates. The method is fast, but should really be used interactively to obtain the best possible results.
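The geometric core of the method, fitting a circle through complex FRF samples, can be sketched with a simple algebraic (Kasa-style) least squares fit. This is not the text's algorithm, only an illustration; the SDOF parameters are made up:

```python
import numpy as np

# Hedged sketch: fit a circle through complex FRF points around a mode.
def fit_circle(points):
    """Least squares circle through complex points; returns (center, radius)."""
    x, y = points.real, points.imag
    # Solve x^2 + y^2 + D*x + E*y + F = 0 for D, E, F in least squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = complex(-D / 2, -E / 2)
    radius = np.sqrt(abs(center) ** 2 - F)
    return center, radius

# SDOF data in a band around the mode (eqn 15-37 without residual terms)
sigma, wd = 1.0, 2 * np.pi * 10.0
w = np.linspace(wd - 5, wd + 5, 51)
h = 1.0 / (sigma + 1j * (w - wd))

center, radius = fit_circle(h)
# For this FRF the locus is a circle of diameter 1/sigma through the origin.
print(round(2 * radius, 6))
```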

15.3.4  Complex mode indicator function


The Complex Mode Indicator Function method allows you to identify a modal model for a mechanical system where multiple reference FRFs were measured. The method provides a quick and easy way of determining the number of modes in a system and of detecting the presence of repeated roots. This information can then be used as a basis for more sophisticated multiple input techniques such as LSCE or FDPI. However, in cases where modes are well excited and obvious, it can yield sufficiently accurate estimates of modal parameters.
The FRF matrix of a system with No (output) and Ni (input) degrees of freedom can be expressed as follows:

    [H(ω)] = Σ_{r=1}^{2N} {Φ}_r ( Q_r / (jω − λ_r) ) {L}_r^T        Eqn 15-41

Or in matrix form as

    [H(ω)] = [Φ] diag( Q_r / (jω − λ_r) ) [L]^T        Eqn 15-42


where
    [H(ω)] = the FRF matrix of size No by Ni
    [Φ] = the mode shape matrix of size No by 2N
    Q_r = the scaling factor for the rth mode
    λ_r = the system pole value for the rth mode
    [L]^T = the transposed modal participation factor matrix of size Ni by 2N
Taking the singular value decomposition of the FRF matrix at each spectral line results in

    [H] = [U][S][V]^H        Eqn 15-43

where
    [U] = the left singular matrix, corresponding to the matrix of mode shape vectors
    [S] = the diagonal singular value matrix
    [V] = the right singular matrix, corresponding to the matrix of modal participation vectors

In comparing equations 15-42 and 15-43: the mode shape and modal participation vectors in equation 15-42 are, through the singular value decomposition, scaled to be unitary vectors, and the mass matrix is assumed to be an identity matrix, so that the orthogonality of the modal vectors is still satisfied.
For any one mode, the natural frequency is the one where the maximum singular value occurs.
The Complex Mode Indicator Function is defined as the eigenvalues solved from the normal matrix, which is formed from the FRF matrix ([H]^H [H]) at each spectral line:

    [H]^H [H] = [V][S]²[V]^H        Eqn 15-44

    CMIF_k(ω) = λ_k(ω) = s_k(ω)²,   k = 1, 2, … N_i        Eqn 15-45

where
    λ_k(ω) = the kth eigenvalue of the normal FRF matrix at frequency ω
    s_k(ω) = the kth singular value of the FRF matrix at frequency ω
    N_i = the number of inputs

In practice, the [H]^H [H] matrix is calculated at each spectral line and the eigenvalues are obtained. The CMIF is a plot of these values on a log scale as a function of frequency. As many CMIFs can be obtained as there are references. Distinct peaks indicate modes, and their corresponding frequency is the damped natural frequency of the mode. This is illustrated in Figure 15-7. Peaks in the CMIF function can be searched for automatically, whilst taking into account criteria that are used to eliminate spurious peaks due to noise or measurement errors.
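The per-line SVD computation of equations 15-43 to 15-45 can be sketched on a synthetic two-mode, three-output / two-input system. The poles, mode shapes and participation rows below are all invented:

```python
import numpy as np

# Hedged sketch of the CMIF: singular values of the FRF matrix at each
# spectral line, squared, one curve per reference.
w = np.linspace(1, 100, 500)
poles = [-0.5 + 30j, -0.8 + 70j]
shapes = [np.array([1.0, 0.5, -0.3]), np.array([0.2, -1.0, 0.8])]   # {V}_r
partic = [np.array([1.0, 0.4]), np.array([-0.6, 1.0])]              # <L>_r

H = np.zeros((len(w), 3, 2), dtype=complex)
for lam, V, L in zip(poles, shapes, partic):
    R = np.outer(V, L)                                   # residue matrix
    for n, wn in enumerate(w):
        H[n] += R / (1j * wn - lam) + R.conj() / (1j * wn - lam.conjugate())

# CMIF_k(w) = s_k(w)^2, eqn 15-45; one curve per input (reference)
cmif = np.array([np.linalg.svd(H[n], compute_uv=False) ** 2
                 for n in range(len(w))])

# the first CMIF peaks near the damped natural frequencies (30 and 70 rad/s)
peak_w = w[np.argmax(cmif[:, 0])]
print(float(peak_w))
```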
Figure 15-7  Example of a CMIF showing selected frequencies (CMIF on a log scale vs. frequency)

When the frequencies have been selected, equations 15-43 and 15-44 can be
used to yield the complex conjugate of the modal participation factors [V],
and the as yet unscaled mode shape vectors [U].
The unscaled mode shape vectors and the modal participation factors are used to generate an enhanced FRF for each mode r, defined by

    H^E_r(ω) = {U}_r^H [H(ω)] {V}_r        Eqn 15-46

Since the mode shape vectors and modal participation factors are normalized to unitary vectors by the singular value decomposition, the enhanced FRF is actually the decoupled single mode response function

    H^E_r(ω) = Q_r / (jω − λ_r)        Eqn 15-47

A single degree of freedom method (such as the circle fitter technique) can now
be applied to improve the accuracy of the natural frequency estimate and then
to extract damping values and the scaling factor for the mode shape.


Figure 15-8  Example of a CMIF and the corresponding enhanced FRF

One CMIF can be calculated for each reference DOF. They can be sorted in terms of the magnitude of the eigenvalues, and can all be plotted as a function of frequency as shown in the example in Figure 15-9.
Figure 15-9  Example of first and second order CMIFs (CMIF_1 and CMIF_2 on a log scale vs. frequency)


Cross checking and tracking

At any one frequency these functions will indicate how many significant independent phenomena are taking place, as well as their relative importance.

At a resonance, at least one CMIF will peak, implying that at least one mode is active. At a different frequency, however, it may be that a different mode has increased its influence and is the major contributor to the response. Between resonances, a cross over point can occur where the contributions of two modes are equal. If the CMIFs are sorted as shown in Figure 15-9, this can result in a higher order CMIF exhibiting a peak, and in one CMIF exhibiting a dip at the same time as a lower order function exhibits a peak.
A check on peaks in the second order CMIF functions can be made to determine whether they are due to the cross over effect or to a genuine pole of second order. This is done by calculating the MAC matrix using data on either side of the frequency of interest:

    MAC(1a,1b)  MAC(1a,2b)
    MAC(2a,1b)  MAC(2a,2b)

where a and b represent the frequencies and 1 and 2 the CMIF functions; CMIF 1 contains the larger values and CMIF 2 the smaller ones.

When this MAC matrix approximates a unity matrix,

    ≈1  ≈0
    ≈0  ≈1

the peak in CMIF_2 represents a resonance peak: the mode is not changing between frequencies a and b.

When this MAC matrix is anti-diagonal,

    ≈0  ≈1
    ≈1  ≈0

the peak in CMIF_2 represents a cross over point: the mode is switching between frequencies a and b.
Peak picking can be facilitated by using tracked CMIFs. This alters the display of the CMIFs: when the mode shapes represented by the two CMIFs switch, the CMIFs are also switched. This is determined by the cross over check described above.
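The cross-over check can be sketched as follows. The singular vectors are invented, and the MAC formula used is the standard Modal Assurance Criterion (an assumption, since the text does not spell out its definition):

```python
import numpy as np

# Hedged sketch: the 2x2 MAC matrix between the two dominant singular
# vectors at frequencies a and b, as used in the cross-over check.
def mac(u, v):
    """Modal Assurance Criterion between two vectors."""
    return abs(np.vdot(u, v)) ** 2 / (np.vdot(u, u).real * np.vdot(v, v).real)

# dominant and second singular vectors at frequency a ...
u1a = np.array([1.0, 0.5, -0.3])
u2a = np.array([0.2, -1.0, 0.8])
# ... and at frequency b, where the two shapes have switched order
u1b, u2b = u2a, u1a

M = np.array([[mac(u1a, u1b), mac(u1a, u2b)],
              [mac(u2a, u1b), mac(u2a, u2b)]])

# An anti-diagonal MAC matrix flags a cross over point rather than a
# genuine second order pole.
print(M.round(2))
```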


An example of the tracked versions of the CMIFs illustrated in Figure 15-9 is shown below.

Figure 15-10  Example of first and second order tracked CMIFs

15.3.5  Least squares complex exponential


The Least Squares Complex Exponential method allows you to estimate values
of modal frequency and damping for several modes simultaneously. Since all
the data is analyzed simultaneously, global estimates are obtained.
To understand how the method works, recall the expression for an impulse response (IR) given below:

    h_ij(t) = Σ_{k=1}^{N} ( r_ijk e^(λ_k t) + r*_ijk e^(λ*_k t) )        Eqn 15-48

It can be seen from this expression that the pole values λ_k are not a function of a particular response (output) or reference (input) DOF. In other words, the pole values are global (rather than local) characteristics of the structure: they are the same for any measured FRF on the structure. It should therefore be possible to use all the available data measured on the system to identify global estimates simultaneously.

This method can be used with single and multiple inputs.


Model for continuous data

A particular problem when trying to work with equation 15-48 to achieve the above objective is that it contains the residues r_ijk, which do depend on the response and reference DOFs. It is therefore essential to define another parametric model for the data h_ij, in which the coefficients are independent of response and reference DOFs and can be used to identify estimates for λ_k. It can be proved that such a model takes the form of a linear differential equation of order 2N with constant real coefficients:

    (d/dt)^(2N) h_ij + a_1 (d/dt)^(2N−1) h_ij + … + a_2N h_ij = 0        Eqn 15-49

Indeed, equation 15-48 expresses the data as a linear superposition of a set of 2N damped complex exponentials occurring in complex conjugate pairs. Such complex exponentials can be viewed as the characteristic solutions of a linear differential equation with constant real coefficients:

    (d/dt)^(2N) f(t) + a_1 (d/dt)^(2N−1) f(t) + … + a_2N f(t) = 0        Eqn 15-50

The impulse response, being a linear superposition of characteristic solutions, is by itself also a characteristic solution. Therefore equation 15-49 is valid if the coefficients are such that

    λ^(2N) + a_1 λ^(2N−1) + … + a_2N = 0   for λ = λ_k, λ*_k,  k = 1 … N        Eqn 15-51

Turning the reasoning around, therefore, one could first try to estimate the coefficients in equation 15-49 using all available data. Estimates of the complex exponential coefficients λ_k can then be found by solving equation 15-51.

Model for sampled data

Measured data is however sampled, not continuous. So rather than working from equation 15-48 it is necessary to work with

    h_ij,n = Σ_{k=1}^{N} ( r_ijk z_k^n + r*_ijk z_k^(*n) ),   z_k = e^(λ_k Δt)        Eqn 15-52

Instead of damped complex exponentials, the characteristics are now power series with base numbers z_k.


Following a similar reasoning to that explained above for continuous data, it can be proved that the sampled data is the solution of a linear finite difference equation with constant real coefficients of order 2N (instead of a differential equation as for continuous data):

    h_ij,n + a_1 h_ij,n−1 + … + a_2N h_ij,n−2N = 0        Eqn 15-53

The characteristics z_k, and therefore the poles λ_k, can be found by solving

    z_k^(2N) + a_1 z_k^(2N−1) + … + a_2N = 0        Eqn 15-54

Practical implementation of the method

The Least Squares Complex Exponential is a method that estimates the coefficients in equation 15-53 using data measured on the system.

In principle any data record h_ij,n can be used. Applying the method to just a single data record at a time will result in local estimates of the poles.

To estimate the coefficients in equation 15-53 in a least squares sense, the equations for all possible time points and all possible response and reference DOFs are to be solved simultaneously, as indicated in equation 15-55. This equation system will be greatly overdetermined. To find the least squares solution, the normal equations technique can be applied so that the final solution is calculated from a compact equation with a square coefficient matrix, equation 15-56. The coefficient matrix in this equation is called a covariance matrix.

    [ h_11,2N−1    …  h_11,0        ]              [ h_11,2N    ]
    [     ⋮               ⋮         ]  [ a_1  ]    [     ⋮      ]
    [ h_11,Nt−1    …  h_11,Nt−2N    ]  [ a_2  ]    [ h_11,Nt    ]
    [     ⋮               ⋮         ]  [  ⋮   ] = −[     ⋮      ]        Eqn 15-55
    [ h_ij,n−1     …  h_ij,n−2N     ]  [ a_2N ]    [ h_ij,n     ]
    [     ⋮               ⋮         ]              [     ⋮      ]
    [ h_NoNi,Nt−1  …  h_NoNi,Nt−2N  ]              [ h_NoNi,Nt  ]

where
    Nt = last available time sample
    No = number of response DOFs
    Ni = number of input DOFs

We can write this in a simpler manner:


    [ r_1,1  r_1,2  …  r_1,2N  ]  [ a_1  ]     [ r_1,0  ]
    [  .     r_2,2  …  r_2,2N  ]  [ a_2  ]     [ r_2,0  ]
    [  .      .     ⋱    ⋮     ]  [  ⋮   ] = − [   ⋮    ]        Eqn 15-56
    [  .      .     …  r_2N,2N ]  [ a_2N ]     [ r_2N,0 ]

The coefficients in the covariance matrix are defined as

    r_k,l = Σ_{i=1}^{No} Σ_{j=1}^{Ni} Σ_{n=1}^{Nt} ( h_ij,n−k h_ij,n−l )        Eqn 15-57

Building this covariance matrix is the first stage in applying the Least Squares
Complex Exponential method. This phase is usually the most time consuming
since all the available data is used to build the inner products expressed by
equation 15-57.
Note that after solving equation 15-56 all that is required to calculate the esti
mates of modal frequency and damping is to substitute the estimated coeffi
cients in equation 15-54 and to solve for zk .
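The whole chain — stack the equations of 15-55, form the normal equations of 15-56, and root the characteristic polynomial of 15-54 — can be sketched in a few lines (the function name and the synthetic one-mode test signal are illustrative, not part of the method):

```python
import numpy as np

def lsce_poles(h, n_modes, dt):
    """Least Squares Complex Exponential on one impulse response h.

    Solves h[n] + a1*h[n-1] + ... + a_2N*h[n-2N] = 0 in a least squares
    sense (Eqns 15-55/15-56), then roots the characteristic polynomial
    of Eqn 15-54 to recover the poles lambda_k."""
    m = 2 * n_modes
    # One row of Eqn 15-55 per usable time point: past samples on the
    # left, the current sample (negated) on the right.
    A = np.column_stack([h[m - k: len(h) - k] for k in range(1, m + 1)])
    b = -h[m:]
    # Normal equations (the covariance matrix of Eqn 15-56).
    a = np.linalg.solve(A.T @ A, A.T @ b)
    # z^2N + a1 z^(2N-1) + ... + a_2N = 0 (Eqn 15-54).
    z = np.roots(np.concatenate(([1.0], a)))
    return np.log(z) / dt            # poles lambda_k = ln(z_k) / dt

# Synthetic single-mode impulse response: 5 Hz, light damping.
dt = 0.01
lam = -0.02 * 2 * np.pi * 5 + 1j * 2 * np.pi * 5
t = np.arange(200) * dt
h = np.real(np.exp(lam * t) + np.exp(np.conj(lam) * t))
poles = lsce_poles(h, n_modes=1, dt=dt)
```

The recovered pole pair should match the pole used to synthesize the data.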

Determining the optimum number of modes


The solution of equation 15-56 results in least squares estimates of the coeffi
cients in the model expressed by equation 15-53. It is also possible therefore to
calculate the corresponding least squares error. This error is of importance in
determining the minimum number of modes in the data.
In the preceding discussion it has been assumed that N modes are present in
the data. However, the number of modes contained in the data is in fact un
known. It is preferable that this should be determined by the method itself.
Using the Least Squares Complex Exponential method, this can be achieved by
observing the evolution of the least squares error on the solutions of equation
15-56 as a function of the number of assumed modes.
To do this, an equation like equation 15-56 is initially created, assuming a num
ber of modes N that is sufficiently large. A subset of such an equation is then
taken to solve for the coefficients of a model that describes just one mode

\begin{bmatrix} r_{1,1} & r_{1,2} \\ \text{sym.} & r_{2,2} \end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \end{Bmatrix}
= -
\begin{Bmatrix} r_{1,0} \\ r_{2,0} \end{Bmatrix}

The corresponding least squares error is represented by \epsilon_1.


When 2 modes are assumed in the data then the subset to be solved is

246

The Lms Theory and Background Book

Estimation of modal parameters

\begin{bmatrix}
r_{1,1} & r_{1,2} & r_{1,3} & r_{1,4} \\
 & r_{2,2} & r_{2,3} & r_{2,4} \\
 & & r_{3,3} & r_{3,4} \\
\text{sym.} & & & r_{4,4}
\end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{Bmatrix}
= -
\begin{Bmatrix} r_{1,0} \\ r_{2,0} \\ r_{3,0} \\ r_{4,0} \end{Bmatrix}

with corresponding least squares error \epsilon_2, and so on. Now if a model is
assumed with a number of modes equal to the number of modes present in the
data, then the corresponding least squares error should be significantly
smaller than the error for models with fewer modes.
A diagram that plots the least squares error for increasing number of modes is
called the least squares error chart. Figure 15-11 shows a typical diagram if data
is analyzed for a system with 4 modes (and 4 modes are observable from the
data!).
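Building such an error chart amounts to solving ever-larger subsets of one overdetermined system and recording the residual; a sketch (using a direct least squares solve on column subsets as a shortcut for the covariance-matrix subsets; the synthetic two-mode signal is illustrative):

```python
import numpy as np

def ls_error_chart(h, max_modes):
    """Least squares error versus assumed number of modes.

    For each assumed mode count N the leading 2N columns of the
    Eqn 15-55 system are fitted; the squared residual is the error
    plotted in the least squares error chart."""
    m_max = 2 * max_modes
    A = np.column_stack([h[m_max - k: len(h) - k] for k in range(1, m_max + 1)])
    b = -h[m_max:]
    errors = []
    for n in range(1, max_modes + 1):
        a, *_ = np.linalg.lstsq(A[:, :2 * n], b, rcond=None)
        r = b - A[:, :2 * n] @ a
        errors.append(float(r @ r))
    return errors

# Two-mode synthetic data: the error should drop sharply at N = 2.
dt = 0.01
t = np.arange(300) * dt
h = (np.exp(-0.5 * t) * np.cos(2 * np.pi * 5 * t)
     + np.exp(-0.8 * t) * np.cos(2 * np.pi * 9 * t))
errs = ls_error_chart(h, max_modes=4)
```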
Noise on the data may cause the error diagram to show a significant drop at a
certain number of modes, followed by a continued decrease of the error as the
number of modes is increased. The problem now is to determine how many
extra modes, or so called computational modes, are to be considered to com
pensate for the noise on the data so that the best estimates of modal frequency
and damping can be obtained. This problem is also illustrated in Figure 15-11.
[Figure: least squares error plotted against the number of modes, for data with and without noise]

Figure 15-11 Least squares error diagram, system with 4 modes

To determine the optimal number of modes you could try to compare frequen
cy and damping estimates that are calculated from models with various num
ber of modes. Physical intuition would lead you to expect that estimates of fre
quency and damping corresponding to true structural modes, should recur (in
approximately the same place) as the number of modes is increased. Computa
tional modes will not reappear with identical frequency and damping. A dia
gram that shows the evolution of frequency and damping as the number of
modes is increased is called a stabilization diagram. The optimal set of modes
to use can then be identified as those modes for which the frequency and
damping values do not change significantly as the number of modes increases;
in other words, those which have stabilized.
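A minimal sketch of that stabilization test, comparing the poles of two consecutive model orders (the 1% frequency and 5% damping tolerances are typical but arbitrary choices, not prescribed by the method):

```python
import numpy as np

def stabilized(poles_prev, poles_new, f_tol=0.01, d_tol=0.05):
    """Flag poles of the larger model that recur in the smaller one.

    A pole counts as stabilized when both its frequency (imaginary
    part) and its damping (real part) match some pole of the
    lower-order model within the given relative tolerances."""
    flags = []
    for p in poles_new:
        ok = False
        for q in poles_prev:
            f_p, f_q = abs(p.imag), abs(q.imag)
            if f_q == 0:
                continue
            if (abs(f_p - f_q) / f_q < f_tol
                    and abs(p.real - q.real) / max(abs(q.real), 1e-12) < d_tol):
                ok = True
        flags.append(ok)
    return flags

# A physical pole recurs at nearly the same place; a computational one moves.
prev = [-0.5 + 31.4j, -2.0 + 55.0j]
new = [-0.5 + 31.5j, -9.0 + 70.0j]
flags = stabilized(prev, new)
```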


[Figure: stabilization diagram; pole symbols plotted per number of modes against frequency, overlaid on an amplitude curve]

Figure 15-12 A stabilization diagram

Example
Let two data records be measured on a system, both shown in Figure 15-13.
[Figure: the two records h11 and h21 plotted against time t, each oscillating between -1 and 1]

Figure 15-13 Example least squares complex exponential

Let four data samples be measured of which the values are listed in the Table
below.


n    h11    h21
0     1      0
1     0      1
2    -1      0
3     0     -1

Consider a model for 1 mode (N = 1). Equations 15-55 and 15-56 become
respectively

\begin{bmatrix} 0 & 1 \\ -1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \end{Bmatrix}
=
\begin{Bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{Bmatrix}

\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \end{Bmatrix}
=
\begin{Bmatrix} 0 \\ 2 \end{Bmatrix}

The solution is therefore a_1 = 0, a_2 = 1. Now equation 15-54 is used to
calculate z_k and so \lambda_k:

z^2 + 1 = 0, \quad z = \pm j

The frequency and damping values follow from z = e^{\lambda \Delta t}:

z = +j: \quad \lambda = 0 + j\,\pi/(2\Delta t)

z = -j: \quad \lambda = 0 - j\,\pi/(2\Delta t)

The solution indicates a mode with a period of 4\Delta t and zero damping.
This is compatible with the trend of the curves shown in Figure 15-13.
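The four-sample example can be checked numerically; this sketch mirrors the steps above (NumPy is used purely for convenience):

```python
import numpy as np

# Data from the table above: h11 and h21 sampled at n = 0..3.
h11 = np.array([1.0, 0.0, -1.0, 0.0])
h21 = np.array([0.0, 1.0, 0.0, -1.0])

# Eqn 15-55 rows for both records (N = 1, so two coefficients):
# h[n] + a1*h[n-1] + a2*h[n-2] = 0 for n = 2, 3.
A = np.array([[h11[1], h11[0]],
              [h11[2], h11[1]],
              [h21[1], h21[0]],
              [h21[2], h21[1]]])
b = -np.array([h11[2], h11[3], h21[2], h21[3]])

# Normal equations (Eqn 15-56): here [2 0; 0 2] {a} = {0; 2}.
a = np.linalg.solve(A.T @ A, A.T @ b)

# Characteristic equation z^2 + a1*z + a2 = 0 gives z = +/- j.
z = np.roots([1.0, a[0], a[1]])
```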

15.3.5.1 Multiple input least squares complex exponential


The Least Squares Complex Exponential method, described above, uses all data
measured on a structure to estimate global estimates of modal frequency and
damping. In principle, data relative to several reference DOFs can be used.
However the model used by the previous method does not take specific advan
tage of this.


The multiple input Least Squares Complex Exponential, (or polyreference), is


an extension of the Least Squares Complex Exponential that does allow consis
tent simultaneous analysis of data relative to several reference DOFs. The
method computes global estimates of frequency and damping and also of
modal participation factors. Modal participation factors are terms which ex
press the participation of modes in the system response as a function of the ref
erence (or input) DOF (see section 15.2.3). The simultaneous estimation of fre
quency, damping and modal participation factors means that highly coupled,
even repeated modes can be identified.
The basis for the Multiple Input Least Squares Complex Exponential method is
the model of the data introduced in section 15.2.3 equation 15-16.
[H(t)]_i = \sum_{k=1}^{N} v_{ik}[L]_k e^{\lambda_k t} + v^*_{ik}[L^*]_k e^{\lambda^*_k t}

Eqn 15-58

where
[H(t)]_i = row vector (dimension N_i) of IRs between response DOF i and all input DOFs
[L]_k = vector of modal participation factors for mode k; if N_i reference DOFs are assumed then [L]_k is of dimension N_i
v_{ik} = the mode shape coefficient at response DOF i for mode k

Note that in this model, frequency, damping and modal participation factors
are independent of the particular response DOF. It should therefore be possible
to estimate these coefficients using all the available data simultaneously.

Model for sampled data


The model expressed by equation 15-58 is not directly suitable for global es
timation of frequency, damping and modal participation factors as it still con
tains the mode shape coefficients that are dependent on the response DOF.
Therefore a more suitable model must be derived.
Introducing firstly the sampled nature of the data, equation 15-58 is rewritten
as,
[H_n]_i = \sum_{k=1}^{N} v_{ik}[L]_k z_k^n + v^*_{ik}[L^*]_k (z^*_k)^n

Eqn 15-59

with z_k = e^{\lambda_k \Delta t}
It can be proved that if the data can be described by equation 15-59, it can also
be described by the following model


[H_n]_i + [H_{n-1}]_i[A_1] + \ldots + [H_{n-p}]_i[A_p] = 0

Eqn 15-60

if the following conditions are fulfilled

[L]_k \left[ z_k^p I + z_k^{p-1}[A_1] + \ldots + [A_p] \right] = 0

Eqn 15-61

p N_i \geq 2N

Eqn 15-62

(The proof of this follows from basic calculus along the same lines as for Least
Squares Complex Exponential in section 15.3.5).
Equation 15-60 represents, in matrix notation, a coupled set of N_i finite
difference equations with constant coefficients. The coefficients [A_1] ... [A_p]
are therefore matrices of dimension (N_i \times N_i).
The condition expressed by equation 15-61 states that the terms [L]_k and z_k^n
are characteristic solutions of this system of finite difference equations. As
equation 15-59 is a superposition of 2N such terms, the number of characteristic
solutions of this system, p N_i, must at least equal 2N, as expressed by
equation 15-62.
Note finally, that if data for each reference DOF is treated individually, i.e. Ni =
1, then equation 15-60 and 15-61 simplify to equations 15-53 and 15-54. Thus
the least squares complex exponential method is a special case of the multiple
input least squares complex exponential method.
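Once the coefficient matrices [A1]...[Ap] are known, the poles z_k and participation vectors [L]_k of equation 15-61 follow from an ordinary eigenvalue problem on a block companion matrix; a sketch (the companion layout shown is one standard choice, and the 2x2 test matrix is illustrative):

```python
import numpy as np

def polyreference_poles(A_list, dt):
    """Poles and participation vectors from [A1]..[Ap] of Eqn 15-60.

    Eqn 15-61, transposed, asks for z and {L} with
        (z^p I + z^(p-1) A1.T + ... + Ap.T) {L} = 0,
    which the block companion matrix below turns into an ordinary
    eigenvalue problem."""
    p = len(A_list)
    ni = A_list[0].shape[0]
    C = np.zeros((p * ni, p * ni), dtype=complex)
    if p > 1:
        C[:-ni, ni:] = np.eye((p - 1) * ni)
    for j, Aj in enumerate(A_list):          # Aj is A_{j+1}
        C[-ni:, (p - 1 - j) * ni:(p - j) * ni] = -Aj.T
    z, V = np.linalg.eig(C)
    L = V[:ni, :]          # first block row holds the participation vectors
    return np.log(z) / dt, L

# Illustration: p = 1, Ni = 2, A1 = [[0, -1], [1, 0]] has z = +/- j,
# i.e. undamped poles at a quarter of the sampling frequency.
lam, L = polyreference_poles([np.array([[0.0, -1.0], [1.0, 0.0]])], dt=1.0)
```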

Practical implementation of the method


To estimate the coefficients in equation 15-60 in a least squares sense the equa
tions for all possible time points and all possible response DOFs are to be
solved simultaneously, as indicated by equation 15-63. A least squares solution
is found, for example using the normal equations method, from equation 15-64.
The coefficient matrix in this equation is again in the form of a covariance ma
trix,

\begin{bmatrix}
[H_{p-1}]_1 & \cdots & [H_0]_1 \\
\vdots & & \vdots \\
[H_{N_t-1}]_1 & \cdots & [H_{N_t-p}]_1 \\
\vdots & & \vdots \\
[H_{n-1}]_i & \cdots & [H_{n-p}]_i \\
\vdots & & \vdots \\
[H_{N_t-1}]_{N_0} & \cdots & [H_{N_t-p}]_{N_0}
\end{bmatrix}
\begin{Bmatrix} [A_1] \\ [A_2] \\ \vdots \\ [A_p] \end{Bmatrix}
= -
\begin{Bmatrix} [H_p]_1 \\ \vdots \\ [H_{N_t}]_1 \\ \vdots \\ [H_n]_i \\ \vdots \\ [H_{N_t}]_{N_0} \end{Bmatrix}

Eqn 15-63

where


Nt = the last available time sample


N0 = the number of response DOFs
[R_{k,l}] = \sum_{i=1}^{N_0} \sum_{n=p}^{N_t} [H_{n-k}]_i^t \, [H_{n-l}]_i

Eqn 15-64

\begin{bmatrix}
[R_{1,1}] & [R_{1,2}] & \cdots & [R_{1,p}] \\
 & [R_{2,2}] & \cdots & [R_{2,p}] \\
 & & \ddots & \vdots \\
\text{sym.} & & & [R_{p,p}]
\end{bmatrix}
\begin{Bmatrix} [A_1] \\ [A_2] \\ \vdots \\ [A_p] \end{Bmatrix}
= -
\begin{Bmatrix} [R_{1,0}] \\ [R_{2,0}] \\ \vdots \\ [R_{p,0}] \end{Bmatrix}

Eqn 15-65

The order (p) of the finite difference equation is related to the number of modes
in the data by equation 15-62. It is preferable that this be determined by the
method itself. As the coefficients of the finite difference equation are solved for
in a least squares sense, this can be done by observing the least squares error as
a function of the assumed order. As an order is reached such that the model
can describe as many modes as are present in the data, the error should drop
considerably.
Due to the condition expressed by equation 15-62 there is no linear relation be
tween the number of modes that can be described by the model and the order
of the model. The relation between the number of modes, the order of the
model and the number of reference DOFs is listed in Table 15.2. It can be seen
that a model of order 8 can describe 11 or 12 modes if data for 3 inputs are ana
lyzed simultaneously. In the error diagrams therefore the same least squares
error is shown for 11 and 12 modes.
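The entries of Table 15.2 follow directly from the condition p·Ni >= 2N of equation 15-62: the tabulated order is the smallest integer p satisfying it. A sketch:

```python
import math

def required_order(n_modes, n_inputs):
    """Smallest model order p with p * Ni >= 2N (Eqn 15-62)."""
    return math.ceil(2 * n_modes / n_inputs)

def modes_described(order, n_inputs):
    """Mode counts N for which this order is the minimum required -
    these share one entry in the least squares error chart."""
    return [n for n in range(1, order * n_inputs + 1)
            if required_order(n, n_inputs) == order]
```

For example, with 3 inputs an order-8 model is the minimum for both 11 and 12 modes, which is why the error chart shows the same error for those two counts.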
As for the Least Squares Complex Exponential method, a stabilization diagram
can again be created to determine the optimal number of modes. As well as
comparing frequency and damping values calculated from models of consecu
tive order it is now also possible to compare the stabilization of modal partici
pation factors. In section 15.2.3, the modal participation factors were shown to
be proportional to the mode shape coefficients at the reference DOFs. They also
represent a physical characteristic of the structure like the frequency and damp
ing. Therefore, the values corresponding to structural modes should also stabi
lize as the order of the model is increased. This additional criterion adds much
to the readability of the stabilization diagram and to the ability to distinguish
computational modes from physical modes
Additionally, the modal participation factors can be used by themselves to
identify physical modes. If they are normalized with respect to the largest, the
values should all be approximately real, in phase or in anti-phase, for structur
al modes.


N     Ni=1  Ni=2  Ni=3  Ni=4  Ni=5  Ni=6
1      2     1     1     1     1     1
2      4     2     2     1     1     1
3      6     3     2     2     2     1
4      8     4     3     2     2     2
5     10     5     4     3     2     2
6     12     6     4     3     3     2
7     14     7     5     4     3     3
8     16     8     6     4     4     3
9     18     9     6     5     4     3
10    20    10     7     5     4     4
11    22    11     8     6     5     4
12    24    12     8     6     5     4
13    26    13     9     7     6     5
14    28    14    10     7     6     5
15    30    15    10     8     6     5
16    32    16    11     8     7     6
17    34    17    12     9     7     6
18    36    18    12     9     8     6
19    38    19    13    10     8     7
20    40    20    14    10     8     7
21    42    21    14    11     9     7
22    44    22    15    11     9     8
23    46    23    16    12    10     8
24    48    24    16    12    10     8
25    50    25    17    13    10     9
26    52    26    18    13    11     9
27    54    27    18    14    11     9
28    56    28    19    14    12    10
29    58    29    20    15    12    10
30    60    30    20    15    12    10
31    62    31    21    16    13    11
32    64    32    22    16    13    11

Table 15.2 Relation between modal order (tabulated), number of modes (N) and
number of reference DOFs (Ni)

Example
To clarify the method, consider again the example discussed on page 248. Let
the example system satisfy reciprocity, so that h12 is equal to h21. The vector
[h11 h12] then represents the data between response DOF 1 and reference
DOFs 1 and 2.
Considering a model for 1 mode (so p= 1, as Ni = 2) equations 15-55 and 15-56
become respectively





\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \end{bmatrix} [A_1]
= -
\begin{bmatrix} 0 & 1 \\ -1 & 0 \\ 0 & -1 \\ 1 & 0 \end{bmatrix}

\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} [A_1]
=
\begin{bmatrix} 0 & -2 \\ 2 & 0 \end{bmatrix}

so that [A_1] = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}.

The resulting matrix polynomial is therefore

[l_1 \; l_2] \begin{bmatrix} z & -1 \\ 1 & z \end{bmatrix} = [0 \; 0]

and the solutions of this eigenvalue problem are

z = \pm j, \quad \lambda = 0 \pm j\,\pi/(2\Delta t)

[L] = [\pm j, \; 1]
Notice that the solution for the frequency and damping is the same as found
with the Least Squares Complex Exponential (see page 249). In addition you
also find an estimate of the modal participation factors. For this example they
indicate that there should be a phase difference of 90° in the system response
between excitation from reference DOFs 1 and 2 as h11 is a cosine, and h12 a
sine. This estimate seems to be correct.

15.3.6

Least squares frequency domain


The Least Squares Frequency Domain method is a multiple DOF technique to
estimate residues, or mode shape coefficients. The method requires that fre
quency and damping values have already been estimated. It can be used with
single or multiple inputs.
Consider the model expressed by equation 15-66
h_{ij}(t) = \sum_{k=1}^{N} r_{ijk} e^{\lambda_k t} + r^*_{ijk} e^{\lambda^*_k t}

Eqn 15-66

If estimates of the modal frequency and damping are available, then the resi
dues appear linearly as unknowns in this model.


To estimate the residues, equation 15-66 is transformed back to the frequency


domain. Assuming sampled data therefore
h_{ij,p} = \sum_{k=1}^{N} \left( \frac{r_{ijk}}{j\omega_p - \lambda_k} + \frac{r^*_{ijk}}{j\omega_p - \lambda^*_k} \right) - \frac{lr_{ij}}{\omega_p^2} + ur_{ij}

Eqn 15-67

where
ur_{ij} = an upper residual term used to approximate modes at frequencies above \omega_{max}
lr_{ij} = a lower residual term used to approximate modes at frequencies below \omega_{min}
These are illustrated in Figure 15-3. Note that the residues as well as lower and
upper residuals are local characteristics; in other words, they depend on the
particular response and reference DOF.
The Least Squares Frequency Domain method is based on the model expressed
by equation 15-67. Least squares estimates of residues, lower and upper resid
uals are calculated by analyzing all data values in a selected frequency range.

15.3.6.1 Multiple input least squares frequency domain


The multiple input Least Squares Frequency Domain method is a multiple DOF
technique to estimate mode shapes. The method analyses data relative to sev
eral reference DOFs simultaneously to estimate mode shape coefficients that are
independent of reference DOFs.
Consider the model expressed by equation 15-58,
[H(t)]_i = \sum_{k=1}^{N} v_{ik}[L]_k e^{\lambda_k t} + v^*_{ik}[L^*]_k e^{\lambda^*_k t}

Eqn 15-68

If estimates of frequency, damping and modal participation factors are avail


able, then the mode shape coefficients appear linearly as the only unknowns in
this model. Furthermore, they are only dependent on the response DOF (and
not on the reference DOF) so that data relative to several reference DOFs can be
analyzed simultaneously.
To estimate the residues, equation 15-68 is transformed to the frequency do
main. Adding residual terms and assuming sampled data results in


[H_p]_i = \sum_{k=1}^{N} \left( \frac{v_{ik}[L]_k}{j\omega_p - \lambda_k} + \frac{v^*_{ik}[L^*]_k}{j\omega_p - \lambda^*_k} \right) - \frac{[LR]_i}{\omega_p^2} + [UR]_i

Eqn 15-69

where
[UR]_i = upper residuals between response DOF i and all reference DOFs, a vector of dimension N_i
[LR]_i = lower residuals between response DOF i and all reference DOFs, a vector of dimension N_i
The multiple input LSFD method is based on equation 15-69.

15.3.7

Frequency domain direct parameter identification


The Frequency domain Direct Parameter Identification (FDPI) technique allows
you to estimate the natural frequencies, damping values and mode shapes of
several modes simultaneously. If data relative to several references are avail
able, a multiple input analysis will also extract values for the modal participa
tion factors. In this case, the FDPI technique offers the same capabilities as the
LSCE time domain method.

Theoretical background
The basis of the FDPI method is the second order differential equation for me
chanical structures
M \ddot{y}(t) + C \dot{y}(t) + K y(t) = f(t)

Eqn 15-70

When transformed into the frequency domain, this equation can be reformulated
in terms of measured FRFs

[-\omega^2 I + j\omega A_1 + A_0][H(\omega)] = j\omega B_1 + B_0

Eqn 15-71

where
\omega = the frequency variable


A_1 = M^{-1}C, the mass modified damping matrix (N_0 by N_0)
A_0 = M^{-1}K, the mass modified stiffness matrix (N_0 by N_0)
H(\omega) = the matrix of FRFs (N_0 by N_i)
B_0, B_1 = the force distribution matrices (N_0 by N_i)
Note that for the single input case, the H(\omega) matrix becomes a column
vector of frequency dependent FRFs.
Equation 15-71 is valid for every discrete frequency value \omega. When these
equations are assembled for all available FRFs, including multiple input -
multiple output test cases, the unknown matrix coefficients A_0, A_1, B_0 and
B_1 can be estimated from the measurement data H(\omega). Equation 15-71 thus
means that the measurement data H(\omega) can be described by a second order
linear model with constant matrix coefficients. From the identified matrices,
the system's poles and mode shapes can be estimated via an eigenvalue and
eigenvector decomposition of the system matrix.

\begin{bmatrix} 0 & I \\ -A_0 & -A_1 \end{bmatrix} [\Psi] = [\Psi][\Lambda]

Eqn 15-72

This will yield the diagonal matrix [\Lambda] of poles and a matrix [\Psi] of
eigenvectors. It will become clear from the following section that the matrix
[\Psi] thus obtained is not equal to the matrix of mode shapes, although it is
related to it.
In a final step, the modal participation factors are estimated from another
least squares problem, using the obtained [\Lambda] and [\Psi] matrices.
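With A0 and A1 identified, extracting the poles is a plain eigenvalue computation on the assembled system matrix; a sketch (the block companion layout is a standard state-space choice, and the 1-DOF check values are illustrative):

```python
import numpy as np

def fdpi_poles(A0, A1):
    """Poles and eigenvectors from the identified A0 = M^-1 K and
    A1 = M^-1 C matrices, via the eigenvalue decomposition of the
    system matrix (cf. Eqn 15-72)."""
    n = A0.shape[0]
    S = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-A0, -A1]])
    lam, psi = np.linalg.eig(S)   # lam: poles, psi: eigenvectors
    return lam, psi

# 1-DOF check: m = 1, c = 0.4, k = 100 gives the conjugate pole pair
# of s^2 + 0.4 s + 100 = 0, i.e. -0.2 +/- j*sqrt(99.96).
lam, _ = fdpi_poles(np.array([[100.0]]), np.array([[0.4]]))
```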

Data reduction
Prior to estimating the system matrix, all available data are condensed via a
projection on their principal components. For all response stations, a maximum
of N_m principal components are first calculated and then analyzed. The
obtained matrix [\Psi] represents the modal matrix for this set of fictitious
response stations.
The data reduction procedure offers the following advantages


the calculation time is drastically decreased for the estimation of model


parameters. This is especially important for the calculation of least
squares error charts and stabilization diagrams.

the number of contributing modes is more easily determined from the


singular value analysis.


Residual correction terms


The FDPI technique operates directly on frequency domain data. It is therefore
capable of taking into account the effects of modes outside the frequency band
of analysis. This feature significantly improves the analysis results when
modes below or above the selected band influence the data set. In the case
where both upper and lower residual terms are included in the model, equa
tion 15-71 becomes
[-\omega^2 I + j\omega A_1 + A_0][H(\omega)] = \omega^2 C_2 + \omega C_1 + C_0 + \omega^{-1} C_{-1} + \omega^{-2} C_{-2}

Eqn 15-73
The presence of these residual terms will influence the estimates for frequency,
damping and mode shapes (as well as the modal participation factors for multi
ple input analysis).
Determining the optimum number of modes
As with the Least Squares Complex Exponential (LSCE) method, a least squares
error chart can be built to determine the optimal number of modes in the se
lected frequency band. Because of the principal component projection, this
chart may look somewhat different. For small models, only the first (most im
portant) principal data are used, and the global error will decrease drastically.
As more and more principal components are included by estimating more
modes, their information becomes less important, which may distort the least
squares error chart.
A more reliable tool for estimating the optimal number of modes for the FDPI
technique is the singular values diagram. As an alternative to the error dia
gram, and to some extent to the stabilization diagram too, the rank of the calcu
lated covariance matrix can be determined. The rank of the matrix is also a
good indication of the optimal number of modes to be used in the analysis.
The rank of the matrix can be determined using a singular value decomposi
tion. A diagram showing the normalized singular values in ascending order is
called a singular values diagram: the rank of the matrix is determined at the
point where the singular values become significantly smaller compared to the
previous values.
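The rank determination described here can be sketched directly with an SVD (the relative drop threshold is an illustrative choice):

```python
import numpy as np

def numerical_rank(M, drop=1e-3):
    """Rank estimate in the spirit of the singular values diagram:
    count singular values until one falls below `drop` times the
    largest, i.e. the point where they become significantly smaller."""
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > drop * s[0]))

# A matrix built from 3 independent response vectors has rank 3.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 50))
```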
When building a stabilization diagram, (see LSCE method page 247), the same
data are described by models of increasing order. An updating procedure is
implemented to save calculation time.
PseudoDOFs for small measurement sets
Due to the type of identification algorithm, the FDPI technique can only esti
mate as many modes in the model as there are measurement Degrees of Free
dom. This means that normally


N_m \leq N_0

However, using a similar approach as for the time domain LSCE method, it is
possible to create so-called "pseudo" Degrees of Freedom from the measurements
that are available, thus generating enough "new" measurements to allow a full
identification on as few as one measurement.

Mode shape estimation


Using the reduced mode shapes [\Psi] for the principal responses, and the
transformation matrix between the principal and physical responses, the FDPI
algorithm allows you to identify the complete mode shapes of the system by
expanding the reduced [\Psi] matrix.
This mode shape expansion offers several advantages:

it is very fast (no least squares solution required as for the LSFD meth
od)

it identifies a mode shape vector as a global direction in the modal


space, rather than estimating its elements one by one via mutually in
dependent least squares problems.

If the mode shape expansion method is not employed then the LSFD technique
is used to estimate mode shapes.

Normal modes
From the meaning of the matrices [A0 ] and [A1 ] and the eigenvalue problem
(15-72), it is possible to estimate damped (generally complex) mode shapes
[\Psi], or undamped real normal modes.
Normal modes can be identified via the FDPI technique by solving an eigenva
lue problem for the reduced mass and stiffness matrices only
[M^{-1}K]\,\{\psi\}_n = \omega_n^2 \{\psi\}_n

Eqn 15-74

This eigenvalue problem is very much related to the one that is solved by FEM
software packages that ignore the damping contribution in a system. This is an
entirely different approach to the one that is used to estimate real modes via the
LSFD technique. The latter technique estimates the real-valued mode shape
coefficients that curve-fit the data set in a best least squares sense (proportional
damping assumed), while the FDPI method uses an FEM-like approach.
Damping values are computed by applying a circle-fitter to enhanced FRFs for
each mode. The enhanced FRFs are calculated by projecting the principal FRFs
on the reduced mode shapes.


15.4

Maximum likelihood method


A multi-variable frequency-domain maximum likelihood (ML) estimator is
proposed to identify the modal parameters together with their confidence inter
vals. The solver is robust to errors in the non-parametric noise model and can
handle measurements with a large dynamical range.
Although the LSCE-LSFD approach has proven to be useful in solving many
vibration problems, the method has some drawbacks:

15.4.1

the polyreference LSCE estimator does not always work well when the
number of references (inputs) is larger than 3 for example

the frequencies should be uniformly distributed

the method is not able to handle noisy measurements properly, which


can result in unclear stabilization plots and

the method does not deliver confidence intervals on the estimated mo


dal parameters.

Theoretical aspects
A scalar matrix-fraction description better known as a common-denominator
model will be used. The Frequency Response Function (FRF) between output
o and input i is modeled as
\hat{H}_{oi}(f) = \frac{N_{oi}(f)}{D(f)}

Eqn 15-75

for i = 1, ..., N_i and o = 1, ..., N_o
with
N_{oi}(f) = \sum_{j=0}^{n} \Omega_j(f)\, B_{oij}

the numerator polynomial between output o and input i, and


D(f) = \sum_{j=0}^{n} \Omega_j(f)\, A_j

the common-denominator polynomial.

The polynomial basis functions \Omega_j(f) are given by \Omega_j(f) = e^{i 2\pi f T_s j}
for a discrete-time model (with T_s the sampling period). The complex-valued
coefficients B_{oij} and A_j are the parameters to be estimated. The approach
used to optimize the computation speed and memory requirements will first be
explained for the Least Squares Solver and then these results will be
extrapolated to the ML estimator.
The Least-Squares Solver

Replacing the model \hat{H}_{oi}(f) in equation 15-75 by the measured FRF
H_{oi}(f) gives, after multiplication with the denominator polynomial,

\sum_{j=0}^{n} \Omega_j(f) B_{oij} - \sum_{j=0}^{n} \Omega_j(f) H_{oi}(f) A_j \approx 0

Eqn 15-76

for i = 1, ..., N_i, o = 1, ..., N_o and f = 1, ..., N_f
Note that equation 15-76 can be multiplied with a weighting function Woi (f ).
The quality of the estimate can often be improved by using an adequate
weighting function.
As the elements in equation 15-76 are linear in the parameters, they can be re
formulated as

\begin{bmatrix}
X_1 & 0 & \cdots & 0 & Y_1 \\
0 & X_2 & \cdots & 0 & Y_2 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & X_{N_0N_i} & Y_{N_0N_i}
\end{bmatrix}
\begin{Bmatrix} B_1 \\ B_2 \\ \vdots \\ B_{N_0N_i} \\ A \end{Bmatrix}
\approx 0

with

B_k = \begin{Bmatrix} B_{oi0} \\ B_{oi1} \\ \vdots \\ B_{oin} \end{Bmatrix}, \qquad
A = \begin{Bmatrix} A_0 \\ A_1 \\ \vdots \\ A_n \end{Bmatrix}

X_k(f) = W_{oi}(f)\,[\Omega_0(f), \Omega_1(f), \ldots, \Omega_n(f)]

Y_k(f) = -X_k(f) \cdot H_{oi}(f)

and k = (o-1)N_i + i = 1, \ldots, N_o N_i
The (complex) Jacobian matrix J of this least squares problem,

J = \begin{bmatrix}
X_1 & 0 & \cdots & 0 & Y_1 \\
0 & X_2 & \cdots & 0 & Y_2 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & X_{N_oN_i} & Y_{N_oN_i}
\end{bmatrix}

Eqn 15-77

has Nf No Ni rows and (n+1)(No Ni +1) columns (with Nf >> n, where n is the
order of the polynomials). Because every element in equation 15-76 has been
weighted with Woi (f ), the Xk 's in equation 15-77 can all be different.

The Maximum-Likelihood Solver


Using referenced measurements (e.g., FRF data) makes it easier to get global
estimates from measurements that were obtained by roving the sensors over the
structure under test (which is a common practice in experimental modal analy
sis). Because of this, the FRFs will be used here as primary data instead of the
input/output spectra (i.e. non-referenced data). However, one should take care
that the FRFs are not contaminated by systematic errors.

The ML equations
Assuming the different FRFs to be uncorrelated, the (negative) log-likelihood
function reduces to
l_{ML}(\theta) = \sum_{o=1}^{N_o} \sum_{i=1}^{N_i} \sum_{f=1}^{N_f} \frac{|\hat{H}_{oi}(\theta, \omega_f) - H_{oi}(\omega_f)|^2}{\mathrm{var}\{H_{oi}(\omega_f)\}}

Eqn 15-78

The ML estimate of \theta = [B_1^T, \ldots, B_{N_oN_i}^T, A^T]^T is given by
minimizing equation 15-78. This can be done by means of a Gauss-Newton
optimization algorithm, which takes advantage of the quadratic form of the
cost function (15-78). The Gauss-Newton iterations are given by

(a) solve J_m^H J_m \,\delta\theta_m = -J_m^H r_m for \delta\theta_m


(b) set \theta_{m+1} = \theta_m + \delta\theta_m

with r_m = r(\theta_m), J_m = \partial r(\theta)/\partial\theta \,|_{\theta_m}, and
r(\theta) = \begin{Bmatrix}
\left( \hat{H}_{11}(\theta, \omega_1) - H_{11}(\omega_1) \right) / \sqrt{\mathrm{var}\{H_{11}(\omega_1)\}} \\
\vdots \\
\left( \hat{H}_{11}(\theta, \omega_{N_f}) - H_{11}(\omega_{N_f}) \right) / \sqrt{\mathrm{var}\{H_{11}(\omega_{N_f})\}} \\
\left( \hat{H}_{12}(\theta, \omega_1) - H_{12}(\omega_1) \right) / \sqrt{\mathrm{var}\{H_{12}(\omega_1)\}} \\
\vdots \\
\left( \hat{H}_{N_oN_i}(\theta, \omega_{N_f}) - H_{N_oN_i}(\omega_{N_f}) \right) / \sqrt{\mathrm{var}\{H_{N_oN_i}(\omega_{N_f})\}}
\end{Bmatrix}
Deriving confidence intervals

The covariance matrix of the ML estimate \hat{\theta}_{ML} is usually close to
the corresponding Cramér-Rao Lower Bound (CRLB): \mathrm{cov}\{\hat{\theta}_{ML}\} \approx CRLB.

A good approximation of this Cramér-Rao Lower Bound is given by

CRLB \approx [J_m^H J_m]^{-1}

Eqn 15-79

with J_m the Jacobian matrix evaluated in the last iteration step of the
Gauss-Newton algorithm. As one is mainly interested in the uncertainty on the
resonance frequencies and damping ratios, only the covariance matrix of the
denominator coefficients is in fact required.
Hence, it is not necessary to invert the full matrix to obtain the uncertainty
on the poles (or on the resonance frequencies and the damping ratios).
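The Gauss-Newton step and the covariance approximation of equation 15-79 can be sketched on a generic weighted-residual problem (a toy one-parameter exponential fit here, not the FRF model itself; the function names are illustrative):

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, n_iter=50):
    """Gauss-Newton: solve J^H J dtheta = -J^H r, update theta.
    Returns the estimate and the CRLB approximation [J^H J]^-1."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(theta), jacobian(theta)
        dtheta = np.linalg.solve(J.conj().T @ J, -J.conj().T @ r)
        theta = theta + dtheta.real
    crlb = np.linalg.inv(J.conj().T @ J)   # Eqn 15-79, last-iteration J
    return theta, crlb

# Toy problem: fit y = exp(-a t) with true a = 0.7, starting from 0.2.
t = np.linspace(0.0, 5.0, 50)
y = np.exp(-0.7 * t)
res = lambda th: np.exp(-th[0] * t) - y
jac = lambda th: (-t * np.exp(-th[0] * t)).reshape(-1, 1)
theta, crlb = gauss_newton(res, jac, [0.2])
```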


15.5

Calculation of static compensation modes


Modal synthesis can be used to couple substructures together in low frequency
ranges. For this, modal models for each of the substructures are required as
separate disconnected items. However the results of this coupling may be less
than optimal due to truncation errors. Truncation errors arise because only a
limited number of modes are taken into account.
To improve the results, both static and dynamic compensation terms can be
used.
[H_C(\omega)] \approx [H_R(\omega)] + [H_0] + \omega^2 [H_1]

Eqn 15-80

where [H_C(\omega)] is the exact FRF, [H_R(\omega)] the modal FRF (the modal
superposition), [H_0] the static compensation term and \omega^2[H_1] the
dynamic compensation term.

Truncation errors can be approximated by a quadratic function using a Taylor


expansion. It has been shown that there is a good correspondence between the
real truncation error and the quadratic estimation.
Static compensation terms can be derived from direct (driving point) FRFs.
These static compensation terms are calculated using the upper residual terms
which were obtained while fitting the FRF matrix of the coupling points (driv
ing points and cross terms). The upper residual terms are converted by means
of a singular value compensation into regular mode shapes and participation
factors. These mode shapes and participation factors can be used afterwards
for modal substructuring, in addition to the regular modes of the two substruc
tures.
Frequency of the static compensation modes
The frequency of the static compensation modes (\omega_0) must be
significantly higher than the frequency band of the modes which are taken into
account during the substructuring calculations. The upper limit of the
frequency band used in modal substructuring is defined by the frequency of the
upper residual (\omega_{upper\ residual}):

\omega_0 = 10.0\, \omega_{upper\ residual}
The Singular Value decomposition (SVD)
In order to calculate the static compensation terms, a singular value decomposi
tion has to be applied on the upper residual term matrix. This is obtained by
putting all upper residual terms together in one big matrix.


R_{upper\ residual} = U \Sigma V^T

The mode shape values of the static compensation modes (\Psi) are related to
the left singular vectors, the singular values and the frequency value
\omega_0:

\Psi_j = U_j \sqrt{\sigma_j}\, \omega_0

The participation factor values (L) can be derived from the mode shape values
(\Psi) and the modal mass m_r:

L_j = \Psi_j / (2 m_r \omega_0)

For mass-normalized modes (m_r = 1) and a symmetric residual matrix (U = V)
this reduces to

L_j = (\sqrt{\sigma_j} / 2)\, V_j
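Assuming the relation Psi_j = U_j·sqrt(sigma_j)·omega_0, a small numerical sketch: for a symmetric residual matrix the compensation shapes reproduce the residual matrix through Psi Psi^T = omega_0^2 R, which is the defining property of the SVD step.

```python
import numpy as np

def static_compensation_shapes(R_upper, omega0):
    """Mode shapes of the static compensation modes from the SVD of
    the upper residual matrix: psi_j = u_j * sqrt(sigma_j) * omega0."""
    U, s, Vt = np.linalg.svd(R_upper, full_matrices=False)
    Psi = U * np.sqrt(s) * omega0   # scales each column u_j
    return Psi, s, Vt

# Symmetric (driving point + cross terms) residual matrix example.
R = np.array([[2.0, 0.5], [0.5, 1.0]])
omega0 = 100.0
Psi, s, Vt = static_compensation_shapes(R, omega0)
```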


Chapter 16

Operational modal
analysis

This chapter describes the theoretical and technical background re


lating to operational modal analysis.
Reasons for performing operational modal analysis
Theoretical aspects


16.1

Why operational modal analysis?


Traditional modal model identification methods and procedures are based on
forced excitation laboratory tests during which Frequency Response Functions
(FRFs) are measured. However, the real loading conditions to which a struc
ture is subjected often differ considerably from those used in laboratory test
ing. Since all real-world systems are to a certain extent non-linear, the models
obtained under real loading will be linearized for much more representative
working points. Additionally, environmental influences on system behavior
(such as pre-stress of suspensions, load-induced stiffening and aero-elastic in
teraction) will be taken into account.
In many cases, such as small excitation of off-shore platforms or traffic/wind excitation of civil constructions, forced excitation tests are very difficult, if not impossible, to conduct, at least with standard testing equipment. In such situations operational data are often the only data available.
It is also the case that large in-operation data sets are measured anyway, for level verification, operating field shape analysis and other purposes. Hence, extending classical operating data analysis procedures with modal parameter identification capabilities allows a better exploitation of these data.
Finally, the availability of models established in operation opens the way for in-situ model-based diagnosis and damage detection. Hence, there is considerable interest in extracting valid models directly from operating data.

Traditional processing of operational data


An accepted way of dealing with operational analysis in industry is based on a peak-picking technique applied to the auto- and crosspowers of the operational responses. Such processing results in the so-called "Running Mode Analysis". By selecting the peaks in the spectra, approximate estimates of the resonance frequencies and operational deflection shapes can be obtained. These shapes can then be compared to, or even decomposed into, the laboratory modal results.
Correlation of the operating data set with the modal database measured in the lab allows an assessment of the modes which are dominant for a particular operating condition. In the case of partially correlated inputs (e.g. road analysis), principal component techniques are employed to decompose the multi-reference problem into subsets of single-reference problems, which can be analyzed in parallel. These decomposed sets of data can be fed to an animation program, to interpret the operational deflection shapes for each principal component as a function of frequency.


The auto- and crosspower peak-picking method requires considerable engineering skill to select the peaks which correspond to system resonances. In addition, no information about the damping of the modes is obtained, and the operational deflection shapes may differ significantly from the real mode shapes in the case of closely spaced modes. Pre-knowledge of a modal model derived from FRF measurements in the lab is often indispensable to successfully perform a conventional operational (running modes) analysis.
Curve-fitting techniques, therefore, which allow modal parameters to be extracted directly from the operational data, would be of great use to the engineer. Such techniques would identify the dominant modes excited under driving conditions, and this information might even be used to improve some traditional FRF tests in the laboratory.

Using Operational modal analysis


The purpose of this procedure is to extract modal frequencies, damping and mode shapes from data taken under operating conditions; that is, under the influence of the structure's natural excitation, such as airflow around the structure (e.g. wind turbines, aeroplanes, helicopters), road input, liquid flow (in pipes), road traffic (e.g. bridges) or internal excitation (rotating machinery).
Theoretically, one could consider the case where the input forces are measured under such conditions, which means that conventional FRF processing and modal analysis techniques could be used. However, the Operational modal analysis software is aimed specifically at applications where the inputs cannot be measured, and works when only responses such as acceleration signals are available. The ideal situation is when the input has a flat spectrum.
Three methods are discussed, all of which use time domain correlation functions. These auto- and cross-correlation functions can be calculated directly from raw time data or be derived from measured auto- and crosspowers by inverse FFT processing.
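As an illustrative aside, the two routes to the correlation functions can be compared in a short numpy sketch (this is not LMS code; the signals, lengths and lag count are invented). It shows that a direct time-domain correlation estimate and the inverse FFT of the cross-spectrum agree when the FFT is zero-padded to avoid circular wrap-around:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
x = rng.standard_normal(n)                          # response channel 1
y = np.roll(x, 3) + 0.1 * rng.standard_normal(n)    # response channel 2, delayed copy

# Direct time-domain estimate for lags 0 .. L-1
L = 64
r_direct = np.array([np.dot(y[k:], x[:n - k]) / n for k in range(L)])

# Inverse-FFT estimate: form the cross-spectrum, then transform back.
# Zero-padding to 2n makes the circular correlation equal the linear one.
X = np.fft.rfft(x, 2 * n)
Y = np.fft.rfft(y, 2 * n)
r_fft = np.fft.irfft(Y * np.conj(X))[:L] / n

assert np.allclose(r_direct, r_fft, atol=1e-8)
```

In practice the spectra would be obtained by FFT segment averaging with a time window (e.g. Hanning), as discussed in section 16.2.1; that averaging step is omitted from this sketch.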


16.2 Theoretical aspects

This section describes the mathematical background to the methods used to identify modal parameters from operational data.
Over recent years, several modal parameter estimation techniques have been proposed and studied for modal parameter extraction from output-only data. These include:

Auto-Regressive Moving Average (ARMA) models
the Natural Excitation Technique (NExT)
stochastic subspace methods
The Natural Excitation Technique (NExT)
The underlying principle of the NExT technique is that correlation functions between the responses can be expressed as a sum of decaying sinusoids. Each decaying sinusoid has a damped natural frequency and damping ratio that is identical to that of the corresponding structural mode. Consequently, conventional modal parameter techniques such as the polyreference Least-Squares Complex Exponential (LSCE) method can be used for output-only system identification.
Stochastic subspace methods
With the subspace approach, first a reduced set of system states is derived, and then a state space model is identified. From the state space model, the modal parameters are derived. The terminology "subspace" comes mainly from control theory; it is a "family name" which groups methods that use Singular Value Decomposition in the identification process.
Two subspace techniques, referred to as the Balanced Realization (BR) and the Canonical Variate Analysis (CVA), are provided.

16.2.1 Stochastic subspace identification methods

The following stochastic discrete time state space model is considered:
$\{x_{k+1}\} = [A]\{x_k\} + \{w_k\}$

Eqn 16-1

$\{y_k\} = [C]\{x_k\} + \{v_k\}$
where

$\{x_k\}$ represents the state vector of dimension n,
$\{y_k\}$ is the output vector of dimension $N_{resp}$,
$\{w_k\}$, $\{v_k\}$ are zero-mean, white vector sequences, which represent the process noise and measurement noise respectively.
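As a concrete (purely hypothetical) instance of this model, the numpy sketch below simulates Eqn 16-1 for a single 5 Hz mode with 2 % damping; the system matrices, noise levels and record length are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01
wn, zeta = 2 * np.pi * 5.0, 0.02                    # 5 Hz mode, 2 % damping
lam = -zeta * wn + 1j * wn * np.sqrt(1 - zeta**2)   # continuous pole
mu = np.exp(lam * dt)                               # discrete eigenvalue
A = np.array([[mu.real, mu.imag], [-mu.imag, mu.real]])  # real 2x2 realization
C = np.array([[1.0, 0.0]])                          # one output channel

x = np.zeros(2)
y = np.empty(2000)
for k in range(y.size):
    y[k] = (C @ x)[0] + 0.01 * rng.standard_normal()  # v_k: measurement noise
    x = A @ x + rng.standard_normal(2)                # w_k: process noise

# The eigenvalues of [A] carry the modal information (cf. Eqns 16-3/16-4)
freq = np.abs(np.log(np.linalg.eigvals(A)[0])) / dt / (2 * np.pi)
```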


For p and q large enough, the matrices [A] and [C] are respectively the state space matrix and the output matrix. Along with this model, the observability matrix $[O_p]$ of order p and the controllability matrix $[C_q]$ of order q are defined:

$[O_p] = \begin{bmatrix} [C] \\ [C][A] \\ \vdots \\ [C][A]^{p-1} \end{bmatrix}; \qquad [C_q] = \begin{bmatrix} [G] & [A][G] & \cdots & [A]^{q-1}[G] \end{bmatrix}$

Eqn 16-2

where $[G] = E\big[\{x_{k+1}\}\{y_k\}^T\big]$ and $E[\cdot]$ denotes the expectation operator. The matrices $[O_p]$ and $[C_q]$ are assumed to be of rank $2N_m$, where $N_m$ is the number of system modes.

The dynamics of the system are completely characterized by the eigenvalues and the observed parts of the eigenvectors of the [A] matrix. The eigenvalue decomposition of [A] is given by

$[A] = [\Psi][\Lambda][\Psi]^{-1}$

Eqn 16-3
Complex eigenvectors and eigenvalues in equation 16-3 always appear in complex conjugate pairs. The discrete eigenvalues $\mu_r$ on the diagonal of $[\Lambda]$ can be transformed into continuous eigenvalues or system poles $\lambda_r$ by using the following equation:

$\mu_r = e^{\lambda_r \Delta t} \;\Leftrightarrow\; \lambda_r = \sigma_r + i\omega_r = \frac{1}{\Delta t}\ln(\mu_r)$

Eqn 16-4

where $\sigma_r$ is the damping factor and $\omega_r$ the damped natural frequency of the r-th mode.
The damping ratio $\zeta_r$ of the r-th mode is given by

$\zeta_r = \dfrac{-\sigma_r}{\sqrt{\sigma_r^2 + \omega_r^2}}$

Eqn 16-5

The mode shape $\{\varphi\}_r$ of the r-th mode at the sensor locations consists of the observed parts of the system eigenvectors $\{\psi\}_r$ of $[\Psi]$, given by the following equation:

$\{\varphi\}_r = [C]\{\psi\}_r$

Eqn 16-6

The extracted mode shapes cannot be mass-normalized, as this would require measurement of the input force.
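Given an identified pair $[A]$, $[C]$, the steps of equations 16-3 to 16-6 can be sketched in a few lines of numpy. The two-state, two-output example below is hypothetical, constructed from a known 20 Hz mode so the recovered parameters can be checked:

```python
import numpy as np

dt = 0.002
wn, zeta = 2 * np.pi * 20.0, 0.05                  # known 20 Hz mode, 5 % damping
mu = np.exp((-zeta * wn + 1j * wn * np.sqrt(1 - zeta**2)) * dt)
A = np.array([[mu.real, mu.imag], [-mu.imag, mu.real]])   # "identified" state matrix
C = np.array([[1.0, 0.5], [0.2, -0.3]])                   # "identified" output matrix

mu_r, Psi = np.linalg.eig(A)                 # Eqn 16-3: A = Psi Lambda Psi^-1
lam = np.log(mu_r) / dt                      # Eqn 16-4: continuous poles
sigma, omega_d = lam.real, lam.imag
zeta_est = -sigma / np.sqrt(sigma**2 + omega_d**2)        # Eqn 16-5
shapes = C @ Psi                             # Eqn 16-6: mode shapes at the sensors

f_d = np.abs(omega_d) / (2 * np.pi)          # damped natural frequency in Hz
```

The recovered shapes are only defined up to a complex scaling, consistent with the remark above that no mass normalization is possible.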


The stochastic realization problem

The problem considered here is the estimation of the matrices [A] and [C] in equation 16-2, up to a similarity transformation, using only the output measurements $\{y_k\}$. This problem is known as the stochastic realization problem and has been addressed by many researchers from the control community as well as the statistics community [4, 5, 6].
Two correlation-driven subspace algorithms are briefly discussed below, known as the Balanced Realization (BR) and the Canonical Variate Analysis (CVA).
Given a sequence of correlations

$[R_k] = E\big[\{y_{k+m}\}\{y_m\}_{\text{ref}}^T\big]$

Eqn 16-7

where $\{y_k\}_{\text{ref}}$ is a vector containing $N_{ref}$ outputs serving as references.


For p, q, let $[H_{p,q}]$ be the following block-Hankel matrix:

$[H_{p,q}] = \begin{bmatrix} [R_1] & [R_2] & \cdots & [R_q] \\ [R_2] & [R_3] & \cdots & [R_{q+1}] \\ \vdots & \vdots & & \vdots \\ [R_p] & [R_{p+1}] & \cdots & [R_{p+q-1}] \end{bmatrix}$

Eqn 16-8

Direct computation of the $[R_k]$ from the model equations leads to the following factorization property:

$[H_{p,q}] = [O_p][C_q]$

Eqn 16-9
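A small numpy sketch can make the factorization tangible: synthetic correlation matrices for a single mode ($N_m = 1$) are stacked into the block-Hankel matrix of Eqn 16-8, which then has rank $2N_m = 2$. The pole, vectors and dimensions below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_out, n_ref = 3, 2
mu = 0.95 * np.exp(1j * 0.3)                 # discrete pole of the single mode
phi = rng.standard_normal(n_out) + 1j * rng.standard_normal(n_out)   # mode shape
L = rng.standard_normal(n_ref) + 1j * rng.standard_normal(n_ref)     # multipliers

def R(k):
    # correlation matrix: one decaying term plus its complex conjugate
    return np.real(np.outer(phi, L) * mu**k
                   + np.outer(phi.conj(), L.conj()) * np.conj(mu)**k)

p, q = 4, 4
H = np.block([[R(i + j + 1) for j in range(q)] for i in range(p)])   # Eqn 16-8

# Eqn 16-9: H = O_p C_q, hence rank 2*Nm = 2
rank = np.linalg.matrix_rank(H, tol=1e-8)
```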

Let $[W_1]$ and $[W_2]$ be two user-defined invertible weighting matrices of size $pN_{resp}$ and $qN_{resp}$, respectively. Pre- and post-multiplying the Hankel matrix with $[W_1]$ and $[W_2]$ and performing an SVD on the weighted Hankel matrix gives the following:

$[W_1][H_{p,q}][W_2]^T = \big[[U_1]\;[U_2]\big] \begin{bmatrix} [S_1] & [0] \\ [0] & [0] \end{bmatrix} \begin{bmatrix} [V_1]^T \\ [V_2]^T \end{bmatrix} = [U_1][S_1][V_1]^T$

Eqn 16-10

where $[S_1]$ contains the n non-zero singular values in decreasing order, the n columns of $[U_1]$ are the corresponding left singular vectors and the n columns of $[V_1]$ are the corresponding right singular vectors.
On the other hand, the factorization property of the weighted Hankel matrix results in


$[W_1][H_{p,q}][W_2]^T = [W_1][O_p][C_q][W_2]^T$

Eqn 16-11

From equations 16-10 and 16-11, it can easily be seen that the observability matrix can be recovered, up to a similarity transformation, as

$[O_p] = [W_1]^{-1}[U_1][S_1]^{1/2}$

Eqn 16-12

The system matrices are then estimated, up to a similarity transformation, using the shift structure of $[O_p]$. So,

$[C] = \text{first block row of } [O_p]$

Eqn 16-13

and [A] is computed as the solution of

$[O_{p-1}^{\uparrow}] = [O_{p-1}][A]$

Eqn 16-14

where $[O_{p-1}]$ is the matrix obtained by deleting the last block row of $[O_p]$ and $[O_{p-1}^{\uparrow}]$ is the matrix shifted up by one block row.
Different choices of weighting will lead to different stochastic subspace identification methods. Two particular choices for the weighting matrices give rise to the Balanced Realization and the Canonical Variate Analysis methods.

Balanced Realization (BR)


$[W_1] = [I] \quad\text{and}\quad [W_2] = [I]$

Eqn 16-15

So no weighting is involved.
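The whole BR chain (Eqns 16-9 to 16-15 with $[W_1] = [W_2] = [I]$) can be sketched on noise-free synthetic correlations. The generating system, its 8 Hz pole and the matrix $[G]$ are all invented, and the model order is assumed known:

```python
import numpy as np

dt = 0.01
mu_true = np.exp((-0.6 + 1j * 2 * np.pi * 8.0) * dt)      # true discrete pole
A_true = np.array([[mu_true.real, mu_true.imag], [-mu_true.imag, mu_true.real]])
C_true = np.array([[1.0, 0.0], [0.3, 0.7]])
G = np.array([[0.5, 0.1], [-0.2, 0.4]])                   # [G] of Eqn 16-2

def R(k):
    # factorization property: R_k = C A^(k-1) G
    return C_true @ np.linalg.matrix_power(A_true, k - 1) @ G

p = q = 5
H = np.block([[R(i + j + 1) for j in range(q)] for i in range(p)])   # Eqn 16-8
U, s, Vt = np.linalg.svd(H)                               # Eqn 16-10, W1 = W2 = I
n = 2                                                     # model order (known here)
Op = U[:, :n] @ np.diag(np.sqrt(s[:n]))                   # Eqn 16-12
C_id = Op[:2, :]                                          # Eqn 16-13: first block row
A_id = np.linalg.lstsq(Op[:-2, :], Op[2:, :], rcond=None)[0]   # Eqn 16-14

# Recovered poles are similarity-invariant, so frequency and damping match
lam_id = np.log(np.linalg.eigvals(A_id)) / dt
f_id = np.abs(lam_id.imag) / (2 * np.pi)
```

With measured data the Hankel matrix would instead be filled with the empirical correlations of Eqn 16-19, and the order n chosen from the singular values or a stabilization diagram.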

Canonical Variate Analysis (CVA)


CVA requires that all responses serve as references, so $\{y_k\} = \{y_k\}_{\text{ref}}$. Consequently, the correlation matrix $[R_k]$ given by equation 16-7 is square. Define then the following Toeplitz matrices:

$[\mathcal{T}^{+}] = \begin{bmatrix} [R_0] & [R_1]^T & \cdots & [R_{p-1}]^T \\ [R_1] & [R_0] & \cdots & [R_{p-2}]^T \\ \vdots & \vdots & \ddots & \vdots \\ [R_{p-1}] & [R_{p-2}] & \cdots & [R_0] \end{bmatrix}; \qquad [\mathcal{T}^{-}] = \begin{bmatrix} [R_0] & [R_1] & \cdots & [R_{p-1}] \\ [R_1]^T & [R_0] & \cdots & [R_{p-2}] \\ \vdots & \vdots & \ddots & \vdots \\ [R_{p-1}]^T & [R_{p-2}]^T & \cdots & [R_0] \end{bmatrix}$

Eqn 16-16


Let the full-rank factorizations of $[\mathcal{T}^{+}]$ and $[\mathcal{T}^{-}]$ be

$[\mathcal{T}^{+}] = [L^{+}][L^{+}]^T; \qquad [\mathcal{T}^{-}] = [L^{-}][L^{-}]^T$

Eqn 16-17

In the case of CVA, the weighting is as follows:

$[W_1] = [L^{+}]^{-1} \quad\text{and}\quad [W_2] = [L^{-}]^{-1}$

Eqn 16-18

With this weighting, the singular values in equation 16-10 correspond to the
so-called canonical angles. A physical interpretation of the CVA weighting is
that the system modes are balanced in terms of energy. Modes which are less
well excited in operational conditions might be better identified.
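For a scalar process the CVA weighting can be sketched directly; the AR(1) correlation sequence below is invented, and a Cholesky factor serves as one possible full-rank factorization for Eqn 16-17. The weighted singular values are then canonical correlations, which never exceed one:

```python
import numpy as np

a, r0, p = 0.8, 1.0, 4                       # AR(1) process: R_k = r0 * a^|k|
Rk = lambda k: r0 * a ** abs(k)

# Toeplitz covariance matrices of "future" and "past" (Eqn 16-16, scalar case)
T_plus = np.array([[Rk(i - j) for j in range(p)] for i in range(p)])
T_minus = T_plus.copy()
L_plus = np.linalg.cholesky(T_plus)          # Eqn 16-17: T+ = L+ L+^T
L_minus = np.linalg.cholesky(T_minus)
W1, W2 = np.linalg.inv(L_plus), np.linalg.inv(L_minus)    # Eqn 16-18

H = np.array([[Rk(i + j + 1) for j in range(p)] for i in range(p)])  # Hankel
s = np.linalg.svd(W1 @ H @ W2.T, compute_uv=False)        # canonical correlations
```

Because the singular values are bounded by one regardless of response level, weakly excited modes are not swamped by strongly excited ones, which is the "energy balancing" referred to above.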

Practical implementation of correlation-driven stochastic subspace methods

Equation 16-10 only holds for `true' block-Hankel matrices and for a finite order system. In practice, the system has a larger, possibly infinite, order, and the Hankel and Toeplitz matrices in equations 16-8 and 16-16 will be filled with `empirical' correlations, which are computed as follows:
$[\hat{R}_k] = \frac{1}{M}\sum_{m=0}^{M-1} \{y_{m+k}\}\{y_m\}_{\text{ref}}^T$

Eqn 16-19

where M is the number of data samples.


Although equation 16-19 is a preferred estimator for the correlation functions
as no leakage errors are made and as it can also be used for non-stationary
data, the evaluation of equation (19) in the time domain is not really efficient in
computational effort. A faster estimator for the correlation functions can be im
plemented by taking the inverse FFT of auto-and crosspower spectra which are
calculated on the basis of the FFT and segment averaging. This however as
sumes stationary signals and time windowing (e.g. Hanning) is needed to
avoid leakage.
The SVD of the weighted empirical Hankel matrix will then result in the following:

$[W_1][\hat{H}_{p,q}][W_2]^T = \big[[\hat{U}_1]\;[\hat{U}_2]\big] \begin{bmatrix} [\hat{S}_1] & [0] \\ [0] & [\hat{S}_2] \end{bmatrix} \begin{bmatrix} [\hat{V}_1]^T \\ [\hat{V}_2]^T \end{bmatrix} = [\hat{U}_1][\hat{S}_1][\hat{V}_1]^T + [\hat{U}_2][\hat{S}_2][\hat{V}_2]^T$

Eqn 16-20

with


$[\hat{S}_1] = \mathrm{diag}(\sigma_1, \ldots, \sigma_n), \quad \sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_n > 0$

$[\hat{S}_2] = \mathrm{diag}(\sigma_{n+1}, \ldots, \sigma_{pN_{resp}}), \quad \sigma_{n+1} \geq \sigma_{n+2} \geq \ldots \geq \sigma_{pN_{resp}} \geq 0$

Eqn 16-21

Identification of a model of order n is done by truncating the singular values, i.e. by keeping $[\hat{S}_1]$. The observability matrix is then approximated by

$[\hat{O}_p] = [W_1]^{-1}[\hat{U}_1][\hat{S}_1]^{1/2}$

Eqn 16-22

As the model order is typically unknown, inspection of the singular values might help the engineer to select n such that $\sigma_n \gg \sigma_{n+1}$. In practice, this criterion is not often of great use, as no significant drop in the singular values can be observed. Other techniques, such as stabilization diagrams, are then needed in order to find the correct model order.
The remaining steps of the algorithm are similar to those described in equations 16-11 to 16-18, where theoretical quantities are replaced with empirical ones.

16.2.2 Natural Excitation Techniques

Subtitled: The Polyreference Least Squares Complex Exponential (LSCE) method applied to auto- and crosscorrelation functions
Polyreference LSCE applied to impulse response functions is a well-known technique in conventional modal analysis, yielding global estimates of poles and modal participation factors [7]. It has been shown that, under the assumption that the system is excited by stationary white noise, correlation functions between the response signals can also be expressed as a sum of decaying sinusoids [8].
Each decaying sinusoid has a damped natural frequency and damping ratio that is identical to that of a corresponding structural mode. Consequently, the classical modal parameter techniques using impulse response functions as input, like Polyreference LSCE, the Eigensystem Realization Algorithm (ERA) and the Ibrahim Time Domain method, are also appropriate for extracting the modal parameters from response-only data measured under operational conditions.
This technique is also referred to as NExT, standing for Natural Excitation Technique. An interesting remark is that the ERA method applied to correlation functions instead of impulse response functions is basically the same as the Balanced Realization method.


Mathematically, the Polyreference LSCE will decompose the correlation functions as a sum of decaying sinusoids. So,

$[R_k] = \sum_{r=1}^{N_m} \{\varphi\}_r e^{\lambda_r k \Delta t} \{L\}_r^T + \{\varphi\}_r^* e^{\lambda_r^* k \Delta t} \{L\}_r^{T*}$

or

$[R_k] = \sum_{r=1}^{N_m} \{\varphi\}_r \mu_r^k \{L\}_r^T + \{\varphi\}_r^* \mu_r^{*k} \{L\}_r^{T*}$

Eqn 16-23

where $\mu_r = e^{\lambda_r \Delta t}$ and $\{L\}_r$ is a column vector of $N_{ref}$ multipliers which are constant for all response stations for the r-th mode.
(Note that in conventional modal analysis, these constant multipliers are the modal participation factors.)
The combinations of complex exponentials and constant multipliers, $\mu_r^k\{L\}_r^T$ or $\mu_r^{*k}\{L\}_r^{T*}$, are a solution of the following matrix finite difference equation of order t:

$\mu_r^k\{L\}_r^T[I] + \mu_r^{k-1}\{L\}_r^T[F_1] + \ldots + \mu_r^{k-t}\{L\}_r^T[F_t] = \{0\}$

Eqn 16-24

where $[F_1] \ldots [F_t]$ are coefficient matrices of dimension $N_{ref} \times N_{ref}$.
In the case where the system has $N_m$ physical modes, the order t in equation 16-24 should theoretically be equal to $2N_m/N_{ref}$ in order to find the $2N_m$ characteristic poles. In practice, overspecification of the model order will be needed.
Since the correlation functions are a linear combination of the characteristic solutions of equation 16-24, $\mu_r^k\{L\}_r^T$ or $\mu_r^{*k}\{L\}_r^{T*}$, they are also a solution of that equation. Hence,

$[R_k][I] + [R_{k-1}][F_1] + \ldots + [R_{k-t}][F_t] = 0$

Eqn 16-25

Equation 16-25, which uses all response stations simultaneously, enables a global least squares estimate of the coefficient matrices $[F_1] \ldots [F_t]$. The overdetermination is also achieved by considering all available or selected time intervals. Once the coefficient matrices are known, equation 16-24 can be reformulated into a generalized eigenvalue problem resulting in $N_{ref}\,t$ eigenvalues $\mu_r$, yielding estimates for the system poles $\lambda_r$ and the corresponding left eigenvectors $\{L\}_r^T$.
The outputs which function as references have to be chosen in such a way that they contain all of the relevant modal information. In fact, the selection of output-reference channels is similar to choosing the input-reference locations in a traditional modal test.
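For the single-reference case ($N_{ref} = 1$) the coefficient matrices reduce to scalars and the eigenvalue problem to a companion matrix, which makes the procedure easy to sketch in numpy; the 30 Hz test pole and all lengths are invented:

```python
import numpy as np

dt = 0.001
lam_true = -2.0 + 1j * 2 * np.pi * 30.0      # true pole: 30 Hz, light damping
k = np.arange(200)
Rk = np.real(np.exp(lam_true * dt * k))      # decaying sinusoid (Eqn 16-23)

t = 2                                        # order: 2 poles with Nref = 1
# Eqn 16-25 rows: R_k + R_{k-1} f1 + R_{k-2} f2 = 0, solved in least squares
Amat = np.column_stack([Rk[t - i:len(Rk) - i] for i in range(1, t + 1)])
f = np.linalg.lstsq(Amat, -Rk[t:], rcond=None)[0]

# Companion matrix of z^2 + f1 z + f2 = 0 gives the discrete poles (Eqn 16-24)
comp = np.array([[-f[0], -f[1]], [1.0, 0.0]])
poles = np.log(np.linalg.eigvals(comp)) / dt
f_est = np.abs(poles.imag).max() / (2 * np.pi)
```

With $N_{ref} > 1$ the scalars $f_i$ become $N_{ref} \times N_{ref}$ matrices $[F_i]$, the least squares problem stacks all response stations and time intervals, and the companion form generalizes to the block eigenvalue problem mentioned above.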


Extraction of mode shapes in a second step and model validation

Contrary to the stochastic subspace methods, the Polyreference LSCE does not yield the mode shapes. So, a second step is needed to extract the mode shapes using the identified modal frequencies and modal damping ratios. For output-only data, it has been shown [9] that this can be done by fitting the auto- and crosspower spectra between the responses and the responses serving as references:
$X_{mn}(j\omega) = \sum_{r=1}^{N_m} \left[ \frac{A_r^{mn}}{j\omega - \lambda_r} + \frac{A_r^{mn*}}{j\omega - \lambda_r^*} + \frac{B_r^{mn}}{-j\omega - \lambda_r} + \frac{B_r^{mn*}}{-j\omega - \lambda_r^*} \right]$

Eqn 16-26

where $X_{mn}(j\omega)$ is the crosspower between the m-th response station and the n-th response station serving as a reference.
In the case of autopowers (m = n), $A_r^{mn}$ equals $B_r^{mn}$. The residue $A_r^{mn}$ is proportional to the m-th component of the mode shape $\{\varphi\}_r$ and the residue $B_r^{mn}$ is proportional to the n-th component of the mode shape $\{\varphi\}_r$. Consequently, by fitting the crosspowers between all response stations and one reference station, the complete mode shape can be derived.
The power spectra fitting step offers the advantage that not all responses need to be included in the time-domain parameter extraction scheme, and that consequently, mode shapes of a large number of response stations can be easily processed by consecutively fitting the spectra. Additionally, it provides a graphical quality check by overlaying the actual test data with the synthesized data. In comparison with modal FRF synthesis, it can be observed in equation 16-26 that two additional terms as a function of $-j\omega$ need to be included for a correct synthesis of the auto- and crosspowers, which are assumed to be estimated on the basis of the FFT and segment averaging. If $X_{mn}(j\omega)$ were not calculated with the FFT segment averaging approach, but as the FFT of the correlation function between response m and response n estimated using equation 16-19, the last two terms in equation 16-26 could be neglected.

16.2.3 Selection of the modal parameter identification method

This section discusses the criteria for selecting a particular method.

LSCE - LSFD
This classical Least Squares Complex Exponential method is adapted to work on auto-correlation and cross-correlation functions instead of FRFs or Impulse Response functions.


A subset of the response functions can be selected as references in the computation of the crosspower functions. The responses chosen as references should contain all of the relevant modal information, as is required for the input-reference locations in a traditional modal test.
Mode shapes are identified in a secondary process using the Least Squares Frequency Domain procedure. For the theoretical background on this method see section 16.2.2.
BR (Balanced Realization)
This is one of the "subspace" techniques which identifies frequency, damping and mode shapes.
A subset of the response functions can be selected as references. These are used in the computation of the crosspower functions from the original time domain data.
This method is useful in identifying the most dominant modes occurring under operational conditions.
CVA (Canonical Variate Analysis)
This is the second of the "subspace" techniques, which identifies frequency, damping and mode shapes.
In this case all the response functions must be selected as references, which are used in the computation of the crosspower functions from the original time domain data. This method thus requires more computational effort, but the algorithm gives equal "importance" to all modes and can identify modes which are not well excited under operational conditions.
For the theoretical background on subspace methods see section 16.2.1.


16.3 References

[1] LMS CADA-X Running Modes Manual, 1997.

[2] Otte D., "Development and Evaluation of Singular Value Analysis Methodologies for Studying Multivariate Noise and Vibration Problems", PhD thesis, K.U.Leuven, 1994.
[3] Otte D., Van de Ponseele P., Leuridan J., "Operational Deflection Shapes in Multisource Environments", Proc. 8th International Modal Analysis Conference, pp. 413-421, Florida, 1990.
[4] Abdelghani M., Basseville M., Benveniste A., "In-operation Damage Monitoring and Diagnostics of Vibrating Structures, with Applications to Offshore Structures and Rotating Machinery", Proc. of IMAC XV, Orlando, 1997.
[5] Desai U.B., Debajyoti P., Kirkpatrick R.D., "A realization approach to stochastic model reduction", Int. J. Control, Vol. 42, No. 4, pp. 821-838, 1985.
[6] Kung S., "A new identification and model reduction algorithm via singular value decomposition", Proc. 12th Asilomar Conf. Circuits, Systems and Computers, pp. 705-714, Pacific Groves, 1978.
[7] Brown D., Allemang R., Zimmerman R., and Mergeay M., "Parameter Estimation Techniques for Modal Analysis", SAE Paper 790221, pp. 19, 1979.
[8] James G.H. III, Carne T.G., and Laufer J.P., "The Natural Excitation Technique (NExT) for Modal Parameter Extraction from Operating Structures", The International Journal of Analytical and Experimental Modal Analysis, Vol. 10, No. 4, pp. 260-277, 1995.
[9] Hermans L., Van der Auweraer H., "On the Use of Auto- and Cross-correlation Functions to Extract Modal Parameters from Output-only Data", Proc. of the 6th International Conference on Recent Advances in Structural Dynamics, Work in progress paper, 1997.
[10] Van der Auweraer H., Wyckaert K., Hendricx W., "From Sound Quality to the Engineering of Solutions for NVH Problems: Case Studies", Acustica/Acta Acustica, Vol. 83, No. 5, pp. 796-804, 1997.
[11] Wyckaert K., Van der Auweraer H., Hendricx W., "Correlation of Acoustical Modal Analysis with Operating Data for Road Noise Problems", Proc. 3rd International Congress on Air- and Structure-Borne Sound and Vibration, Montreal (CND), June 13-15, 1994, pp. 931-940, 1994.
[12] Wyckaert K., Hendricx W., "Transmission Path Analysis in View of Active Cancellation of Road Induced Noise in Automotive Vehicles", Proc. 3rd International Congress on Air- and Structure-Borne Sound and Vibration, Montreal (CND), June 13-15, 1994, pp. 1437-1445, 1994.


[13] Van der Auweraer H., Ishaque K., Leuridan J., "Signal Processing and System Identification Techniques for Flutter Test Data Analysis", Proc. 15th Int. Seminar on Modal Analysis, K.U.Leuven, pp. 517-538, Leuven, 1990.
[14] Van der Auweraer H., Guillaume P., "A Maximum Likelihood Parameter Estimation Technique to Analyse Multiple Input/Multiple Output Flutter Test Data", AGARD Structures and Materials Panel Specialists' Meeting on Advanced Aeroservoelastic Testing and Data Analysis, Paper No. 12, May 1995.
[15] Van der Auweraer H., Leuridan J., Pintelon R., Schoukens J., "A Frequency Domain Maximum Likelihood Identification Scheme with Application to Flight Flutter Data Analysis", Proc. 8th IMAC, pp. 1252-1261, Kissimmee, 1990.


Chapter 17

Running modes analysis

This chapter describes the basic principles involved in running mode analysis. It includes the following topics:

The definition of running modes analysis
The type of measurement data required for running mode analysis
The identification and scaling of running modes
The interpretation and validation of running modes


17.1 Running mode analysis


The aim of modal analysis is to identify a modal model that describes the dynamic behavior of a (mechanical) system. This behavior is identified by means of the transfer functions measured between any two degrees of freedom of the system.
The outcome of a modal analysis therefore is the estimated modal parameters of the system, which are the natural frequencies ($\omega_n$), damping ratios ($\zeta$) and scaled mode shapes ($V_{ik}$).
One of the most common ways of estimating the modal parameters is based upon the measurement of FRFs between one or more inputs (reference DOFs) and all response DOFs of interest. These measurements are made under well-defined and controlled conditions, where all input and output signals are measured and no unknown forces (external or internal) are acting on the system.
The modal model is (ideally) valid under any circumstances; that is to say, whatever the frequency contents, level or nature of the acting forces. This makes modal analysis a very powerful tool, and the modal model (once identified) can be used in a number of ways, such as troubleshooting, forced response prediction, sensitivity analysis or modification prediction.
For many reasons, a complete modal analysis can be impracticable. It may be that the cost of the test setup is too high, the measurement object (e.g. a prototype) cannot be made available for the period of time required to perform a modal analysis, or it is simply impossible to isolate the object from all the forces acting on the system and excite it artificially.
In this case, it is possible to take measurements of the system while it is operating. A number of output signals can be measured (one at each response DOF), while the system is operating under stationary conditions. This provides a set of measurements ($X_i(\omega)$) as a function of frequency.
The measured quantity $X_i(\omega)$ at DOF i can be any number of things: displacement, acceleration, voltage, angular position or angular acceleration, for example. It is however measured for one particular operating condition, with an unknown level or nature of the acting forces or inputs.
If you are interested in a particular phenomenon at a well defined frequency, it
is very often most helpful to see what the output levels are at that frequency for
each measurement DOF. So you might, for example, want to know what the
harmonic motion of measurement point 13 is at 85.6 Hz, or perhaps its level of
acceleration. These values can then be assembled in a vector {X}, having one
element for each of the measurement DOFs.


Animating the system's wire frame model can lead to a better understanding of these phenomena. This makes it possible to show each motion (or acceleration) level at the corresponding DOF, in a cyclic manner. Because of the external resemblance of the animated representation of the vector quantity {X} to the mode shape vector {V}, the vector {X} is called a running mode, or an operational deflection shape.
These running modes must be interpreted entirely differently from modal modes. Running modes only reflect the cyclic motion of each DOF under specific operational conditions, and at a specific frequency. Using a modal model based on displacement/force frequency response functions {H}, the displacement running mode {X} can be described as follows.
$\{X_i(\omega_p)\} = \{H_{i1}(\omega_p)\}F_1(\omega_p) + \{H_{i2}(\omega_p)\}F_2(\omega_p) + \ldots + \{H_{im}(\omega_p)\}F_m(\omega_p)$

Eqn 17-1

$\{X_i(\omega_p)\} = \sum_{k=1}^{2N} \frac{V_{ik}V_{1k}}{j\omega_p - \lambda_k}\,F_1(\omega_p) + \ldots + \sum_{k=1}^{2N} \frac{V_{ik}V_{mk}}{j\omega_p - \lambda_k}\,F_m(\omega_p)$

Eqn 17-2

where:

i = the DOF counter
$\omega_p$ = the particular angular frequency
$F_j(\omega)$ = the force input spectrum at DOF j
m = the number of acting forces

The above equation clearly shows that running modes:

can be identified at any of the measured frequencies $\omega_p$, whereas a modal mode has a fixed natural frequency determined by the structural characteristics of the system (mass, size, Young's modulus, etc.)

depend on the level and nature of the acting force(s)

depend on the structural characteristics of the system, through its FRF behavior

depend on the frequency contents of each of the acting forces: if $F_3(\omega_p)$ happens to be zero at $\omega_p$, it will not contribute to the running mode $\{X(\omega_p)\}$

will be dominant at structural resonances ($\omega_p \approx \omega_k$), but also at peaks in the acting force spectra.


17.2 Measuring running modes


Ideally, all response spectra for a running mode analysis would be acquired:

simultaneously

in a short period of time in which the operating conditions of the test object remain constant

with signals having a high signal-to-noise ratio, so that no averaging is required.

In practice, the number of acquisition channels on the measurement system limits the number of response signals which can be measured simultaneously, and so different sets of responses have to be measured at different periods of time. Additionally, if a relatively high level of noise is present on the signals, an averaging procedure may be necessary during the acquisition of the response signals.
Because of varying operating conditions, it is usual to choose a specific response DOF as a reference station and then measure the responses relative to this reference. If the operating conditions then change slightly from one measurement to the next, this will hopefully affect all response signals in the same way, and the change will be cancelled out because of the relative nature of the measurements. This procedure also guarantees a fixed phase relationship between the different response signals, using the phase of the reference signal as a reference.
The two measured functions available for running mode analysis are transmissibility functions and crosspower spectra.

17.2.1 Transmissibility functions

When the response signals are related to the reference by simply dividing each response signal frequency spectrum by the reference frequency spectrum, the result is the transmissibility function (T):

$T_{ij}(\omega) = \dfrac{X_i(\omega)}{X_j(\omega)}$

Eqn 17-3

where j is the reference station.


When averaging is involved, transmissibilities can be calculated from measured crosspower and autopower spectra:

$T_{ij}(\omega) = \dfrac{G_{ij}(\omega)}{G_{jj}(\omega)}$

Eqn 17-4

The transmissibility function represents the complex ratio (amplitude and phase) between two spectra. A peak in this function may thus be caused either by a peak in the numerator crosspower (i.e. a structural resonance or a peak in the excitation spectrum), or by a zero (anti-resonance) in the denominator autopower spectrum. As resonance peaks will occur at the same frequencies for cross- and autopower spectra, while anti-resonances do not, the denominator zeros will cause more peaks in $T_{ij}$, whereas resonance peaks tend to cancel each other out.
As with Frequency Response Functions (acceleration over force), different estimators ($H_1$, $H_2$, $H_V$) can be used to estimate the transmissibility functions. In practice, the difference between these different methods of estimating $T_{ij}(\omega)$ is small when the coherence function is high (near 100 %). When estimating the transmissibility functions from equation 17-4 above, the coherence function ($\gamma^2$) can also be calculated using the following equation:

$\gamma_{ij}^2(\omega) = \dfrac{|G_{ij}(\omega)|^2}{G_{ii}(\omega)\,G_{jj}(\omega)}$

Eqn 17-5

The coherence function expresses the linear relationship between the two response signals of the measured system. This coherence function is expected to be high, since both responses are caused by the same acting forces. In practice, however, it can be low for the same reasons as those affecting the measurement of FRFs, that is to say a low signal to noise ratio for one or both of the signals, bad signal conditioning, etc.
Another interesting reason why the coherence between two measured signals may be low can be derived from equation 17-1 when it is substituted in equation 17-3. The linear relationship (and hence the coherence) will vary as a function of the weighting factors F_j(ω), for example because of changing operating conditions during the averaging process. High coherence function values in the frequency regions of interest therefore indicate both a high quality of the measurement signals and stationary operating conditions.
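As a concrete sketch of equations 17-4 and 17-5 (Python with NumPy; the function and variable names are illustrative, not part of the LMS software), both quantities can be estimated from an ensemble of averaged response spectra:

```python
import numpy as np

def transmissibility_and_coherence(x_i, x_j):
    """Estimate T_ij (Eqn 17-4) and coherence (Eqn 17-5) from averaged
    auto- and crosspower spectra of two response signals.

    x_i, x_j : complex arrays (n_averages, n_spectral_lines).
    """
    G_ii = np.mean(x_i * np.conj(x_i), axis=0).real   # autopower of response i
    G_jj = np.mean(x_j * np.conj(x_j), axis=0).real   # autopower of reference j
    G_ij = np.mean(x_i * np.conj(x_j), axis=0)        # crosspower (cf. Eqn 17-10)
    T_ij = G_ij / G_jj                                # Eqn 17-4
    coherence = np.abs(G_ij) ** 2 / (G_ii * G_jj)     # Eqn 17-5
    return T_ij, coherence
```

For two perfectly linearly related responses the coherence is identically one; noise or changing operating conditions during the averaging pull it below one.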

Part IV

Modal Analysis and Design

285

Chapter 17 Running modes analysis

Absolutely scaled running mode coefficients for each DOF i can be obtained by multiplying the transmissibility spectra by the RMS value of the reference autopower spectrum.

$$|X_i(\omega)| = |T_{ij}(\omega)| \cdot \sqrt{G_{jj}(\omega)} \qquad \text{Eqn 17-6}$$

When the measured autopower spectrum has units of displacement squared, the scaled running mode will be expressed in units of displacement (for example, meters or inches), provided the transmissibility functions themselves are dimensionless. Displacement running modes can be converted to velocities or accelerations by simply multiplying by $j\omega$ or $(j\omega)^2$. For a certain value of ω (say ω₀), the following relationships apply.

$$|X_i(\omega_0)| = |T_{ij}(\omega_0)| \cdot \sqrt{G_{jj}(\omega_0)} \qquad [\mathrm{m}] \qquad \text{Eqn 17-7}$$

$$|\dot{X}_i(\omega_0)| = |X_i(\omega_0)| \cdot |j\omega_0| \qquad [\mathrm{m/s}] \qquad \text{Eqn 17-8}$$

$$|\ddot{X}_i(\omega_0)| = |X_i(\omega_0)| \cdot |j\omega_0|^2 \qquad [\mathrm{m/s^2}] \qquad \text{Eqn 17-9}$$
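The conversions of equations 17-7 to 17-9 can be sketched as follows (a minimal illustration; the function name and argument layout are assumptions, not the LMS interface):

```python
import numpy as np

def scaled_running_mode(T_ij, G_jj, omega):
    """Absolutely scaled running mode coefficient and its velocity and
    acceleration equivalents (Eqns 17-7 to 17-9).

    T_ij  : complex transmissibility value at omega (dimensionless)
    G_jj  : reference autopower at omega, displacement-squared units assumed
    omega : angular frequency omega_0 in rad/s
    """
    x_disp = T_ij * np.sqrt(G_jj)   # Eqn 17-7, displacement [m]
    x_vel = x_disp * 1j * omega     # Eqn 17-8, multiply by j*omega -> [m/s]
    x_acc = x_vel * 1j * omega      # Eqn 17-9, multiply by j*omega again -> [m/s^2]
    return x_disp, x_vel, x_acc
```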

17.2.2 Crosspower spectra

When it can be assumed that the operating conditions are not going to change while measuring all response signals, then it is possible to measure just crosspower spectra between each response DOF i and a certain reference DOF j.

$$G_{ij}(\omega) = X_i(\omega) \cdot X^*_j(\omega) \qquad \text{Eqn 17-10}$$

where * denotes the complex conjugate.


Compared to transmissibility functions, crosspower functions have the advantage that peaks clearly indicate high response levels (which may still be caused by a structural resonance or a peak in the acting force spectrum). This technique is especially useful when all the response signals are measured simultaneously by a multi-channel measurement system. In this case, the operating conditions are indeed the same for all response DOFs.


Absolutely scaled running modes can, in this case, be obtained again by means of the autopower spectrum of the reference station j.

$$\{X_i(\omega)\} = \frac{G_{ij}(\omega)}{\sqrt{G_{jj}(\omega)}} \qquad \text{Eqn 17-11}$$

When displacements were measured, the running mode coefficients will have units of displacement. Equations 17-8 and 17-9 can be used to derive velocity or acceleration values.
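Equation 17-11 is a one-liner in code; a sketch (illustrative names only):

```python
import numpy as np

def running_mode_from_crosspower(G_ij, G_jj):
    """Scaled running mode coefficient from crosspower data (Eqn 17-11).

    G_ij : crosspower between response DOF i and reference DOF j
    G_jj : autopower of the reference DOF j (real, positive)
    """
    return G_ij / np.sqrt(G_jj)
```

With G_ij = X_i X_j* and G_jj = |X_j|², the result has the magnitude of X_i and its phase expressed relative to the reference signal.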


17.3 Identification and scaling of running modes
Unlike modal modes, a running mode can be identified at any arbitrary frequency of the measured spectra.
Simple peak picking and mode picking methods can be used to extract the sampled values corresponding to a certain spectral line from the measured spectra. They can then be scaled, and assembled into a vector which can be listed, or animated using a 3D wire frame model of the measured object. For a measurement blocksize of 1024 (512 spectral lines), it is thus possible to identify 512 running modes - or even more when interpolating between the spectral lines.

Note! There is no such quantity as damping defined for a running mode. Similarly, other modal parameter concepts such as residues or modal participation factors have no meaning for running mode analysis.

17.3.1 Scaling of running modes

It is possible to scale the identified running modes to values with absolute meaning.
The scaling of running mode coefficients that have been determined using peak picking methods depends upon the nature of the measurement data (e.g. transmissibilities, or autopowers).
Several ways of scaling running modes can be considered.

- If transmissibility spectra were measured, then scaling can be performed using the reference autopower spectrum, as described in equation 17-6.
- If crosspowers were measured, then equation 17-11 can be applied to scale the running modes, again using the reference autopower spectrum.
- It is possible to convert between displacement, velocity and acceleration coefficients using equations 17-7, 17-8 and 17-9, where it is possible to integrate or differentiate once or twice.
- A number of running modes can be scaled manually, by entering a complex scale factor. Each individual mode shape coefficient will be multiplied by this scaling factor.
- Finally, a very general scaling mechanism can be used to scale a number of running modes using a spectrum. Individual running mode coefficients will be multiplied by the (possibly complex) value of the spectrum block belonging to the spectral line that corresponds to the frequency of that particular mode.

Each one of the above scaling methods may change the units of the scaled running mode. The scaling factor's units will be incorporated into the mode shape coefficient units, which were initially obtained from the measurement data.


17.4 Interpretation of results

A set of functions exists that is designed to assess the validity of modes. These include the Modal Scale Factor, the Modal Assurance Criterion and Modal decomposition.

Modal Scale Factors and Modal Assurance Criterion

Both the Modal Scale Factor and the Modal Assurance Criterion are mathematical tools used to compare two vectors of equal length. They can be used to compare running and modal mode shape information.
The Modal Scale Factor between columns l and j of mode shape k, or MSF_jlk, is the ratio between two vectors. Although this ratio should be independent of the row index i (the response station), a least squares estimate has to be computed for it when more than one output station coefficient is available.

$$\mathrm{MSF}_{jlk} = \frac{\{V_{jk}\}^{t*}\{V_{lk}\}}{\{V_{jk}\}^{t*}\{V_{jk}\}} \qquad \text{Eqn 17-12}$$

where $\{V_{jk}\}$ is the jth column of $[V_k]$.


The corresponding Modal Assurance Criterion expresses the degree of confi
dence in this calculation, which is obtained using equation 17-13.

MAC jlk 

({V jk} t*{V lk}) 2


({V jk} t {V jk})({V lk} t {V lk})
*

Eqn 17-13

If a linear relationship exists between the two complex vectors {V_jk} and {V_lk}, then the MSF is the corresponding proportionality constant between them, and the MAC value will be near to one. If they are linearly independent, the MAC value will be small (near zero), and the MSF not very meaningful.
Modal Scale Factor and Modal Assurance Criterion values can be used to compare an obtained modal model with the accepted running modes. The MAC values for corresponding mode shapes should be near 100 % and the MSF between corresponding vectors should be close to unity. When multiple inputs are used, the MSF can be calculated for each input, while the corresponding MAC will be the same for all of them.
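Equations 17-12 and 17-13 translate directly into code; a sketch (NumPy's `vdot` conjugates its first argument, which matches the {V}^t* notation; function names are illustrative):

```python
import numpy as np

def msf(v_j, v_l):
    """Modal Scale Factor between two equally long vectors (Eqn 17-12)."""
    return np.vdot(v_j, v_l) / np.vdot(v_j, v_j)

def mac(v_j, v_l):
    """Modal Assurance Criterion between two vectors (Eqn 17-13)."""
    return (np.abs(np.vdot(v_j, v_l)) ** 2
            / (np.vdot(v_j, v_j).real * np.vdot(v_l, v_l).real))
```

For proportional vectors the MAC is one and the MSF is the proportionality constant; for orthogonal vectors the MAC is zero and the MSF is not meaningful.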


Modal decomposition

When a modal model for the same DOFs is available for a measured object, it is possible to compare modal and running modes and to track down the resonance phenomena causing a particular running mode to become predominant. This is termed Modal decomposition. By decomposing each running mode into a linear combination of the modal modes, it becomes clear whether or not a running mode originates primarily from a resonance phenomenon.
The modal modes form what is termed the `basis' group of modes. The running modes are in a separate group that is to be decomposed. The following formula applies.
$$\{X_i(\omega_0)\} = a_1\{V_1\} + a_2\{V_2\} + \dots + a_n\{V_n\} + \mathrm{Rest} \qquad \text{Eqn 17-14}$$

where
X_i is the ith mode of the group to be decomposed (running modes)
V_i is the ith mode of the basis group (modal modes)
a_i are the scaling coefficients needed to satisfy the above equation.
The scaling coefficients are rescaled relative to the maximum value $a_{max}$:

$$\{X_i(\omega_0)\} = \frac{a_1}{a_{max}}100\%\,\{V_1\} + \dots + \frac{a_n}{a_{max}}100\%\,\{V_n\} + \mathrm{Rest} \qquad \text{Eqn 17-15}$$

The "Rest" is expressed as a relative error:

$$\mathrm{Rest} = 100\% \cdot \frac{\left\|\{X_i(\omega_0)\} - [a_1\{V_1\} + \dots + a_n\{V_n\}]\right\|}{\left\|\{X_i(\omega_0)\}\right\|} \qquad \text{Eqn 17-16}$$

Note! Take care when interpreting these values, since resemblance of the modal and the running mode may be purely coincidental. A running mode at 56 Hz will have no connection with a modal mode at 200 Hz even if they look alike.
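A least squares sketch of the decomposition in equations 17-14 to 17-16 (the LMS implementation may differ; names are illustrative):

```python
import numpy as np

def decompose_running_mode(x, basis):
    """Decompose a running mode x into a linear combination of modal
    modes (the columns of `basis`), per Eqn 17-14.

    Returns the coefficients a_i, the coefficients rescaled relative to
    the maximum (Eqn 17-15, in percent) and the relative 'Rest' error
    (Eqn 17-16, in percent).
    """
    a, *_ = np.linalg.lstsq(basis, x, rcond=None)        # least squares fit
    rest = 100.0 * np.linalg.norm(x - basis @ a) / np.linalg.norm(x)
    a_pct = 100.0 * a / a[np.argmax(np.abs(a))]          # rescale to max = 100 %
    return a, a_pct, rest
```

A low "Rest" value indicates that the running mode is well explained by the modal (resonance) modes in the basis group.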


Chapter 18

Modal validation

This document describes tools used to verify the validity of a modal model.
- Modal Scale Factors and Modal Assurance Criterion
- Mode participation
- Reciprocity
- Scaling
- Modal Phase Collinearity and Mean Phase Deviation
- Comparison of modal models
- Mode Indicator Functions
- Summation of FRFs
- Synthesis of FRFs


18.1 Introduction

A number of means are available to validate the accuracy of modal models of frequencies, damping values, mode shapes and modal participation factors. These tools are:
- Modal Scale Factors between modes and corresponding correlation factors (Modal Assurance Criterion, MAC), described in section 18.2.
- Mode participation, described in section 18.3.
- Reciprocity between inputs and outputs, described in section 18.4.
- Generalized modal parameters (Scaling), described in section 18.5.
- Mode complexity, described in section 18.6.
- Modal Phase Collinearity and Mean Phase Deviation indices, described in section 18.7.
- Comparison of modal models, described in section 18.8.
- Mode Indicator Functions, described in section 18.9.
- Summation of FRF data in the Index table, described in section 18.10.
- Synthesis of FRFs, described in section 18.11.

Some validation procedures allow you to convert the complex mode shape vectors to normalized ones. Normalized mode shapes are obtained from the amplitudes of the complex mode shape coefficients after a rotation over their weighted mean phase angle in the complex plane.


18.2 MSF and MAC

Modal Scale Factors and Modal Assurance Criterion

The FRF between input j and output i on a structure can be written in partial fraction expansion form as

$$h_{ij}(\omega) = \sum_{k=1}^{N}\left(\frac{r_{ijk}}{j\omega - \lambda_k} + \frac{r^*_{ijk}}{j\omega - \lambda^*_k}\right) \qquad \text{Eqn 18-1}$$

The matrix of FRFs is then expressed as

$$[H] = \sum_{k=1}^{N}\left(\frac{[R_k]}{j\omega - \lambda_k} + \frac{[R_k]^*}{j\omega - \lambda^*_k}\right) \qquad \text{Eqn 18-2}$$

where [R_k] represents the matrix of residues. When Maxwell's reciprocity principle holds for the tested structure, this residue matrix is symmetric and can be rewritten as

$$[R_k] = a_k\{V_k\}\{V_k\}^t \qquad \text{Eqn 18-3}$$

The ratio between two residue elements on the same row i but in two different columns j and l can be computed as

$$\frac{r_{ij,k}}{r_{il,k}} = \frac{v_{jk}}{v_{lk}} = \mathrm{MSF}_{jlk} \qquad \text{Eqn 18-4}$$

This ratio MSF_jlk is called the Modal Scale Factor between columns l and j of mode k. Although this ratio should be independent of the row index i (the response station), a least squares estimate has to be computed for it when more than one output station residue coefficient is available.

$$\mathrm{MSF}_{jlk} = \frac{\{R_{jk}\}^{t*}\{R_{lk}\}}{\{R_{jk}\}^{t*}\{R_{jk}\}} \qquad \text{Eqn 18-5}$$

where $\{R_{jk}\}$ is the jth column of $[R_k]$.


The corresponding Modal Assurance Criterion expresses a degree of confidence for this calculation:

$$\mathrm{MAC}_{jlk} = \frac{\left|\{R_{jk}\}^{t*}\{R_{lk}\}\right|^2}{\left(\{R_{jk}\}^{t*}\{R_{jk}\}\right)\left(\{R_{lk}\}^{t*}\{R_{lk}\}\right)} \qquad \text{Eqn 18-6}$$

If a linear relationship exists between the two complex vectors {R_jk} and {R_lk}, the MSF is the corresponding proportionality constant between them and the MAC value will be near to one. If they are linearly independent, the MAC value will be small (near zero), and the MSF not very meaningful.
In a more general way, the MAC concept can be applied to two arbitrary complex vectors. This is useful in comparing two arbitrarily scaled mode shape vectors, since similar mode shapes have a high MAC value.
Modal Scale Factor and Modal Assurance Criterion values can be used to compare two modal models obtained, for example, from two different modal parameter estimation processes on the same test data. When comparing mode shapes, the MAC values for corresponding modes should be near 100 % and the MSF between corresponding residue vectors (mode shapes, scaled by the modal participation factors) should be close to unity. When multiple inputs were used, this MSF can be calculated for each input while the corresponding MAC will be the same for all of them.
A second application for the MAC value is derived from the orthogonality of mode shape vectors when weighted by the mass matrix:
$$\{V_k\}^t[M]\{V_l\} = m_k \ \text{ when } k = l, \qquad = 0 \ \text{ otherwise} \qquad \text{Eqn 18-7}$$
where m_k represents the modal mass for mode k.
Even when no exact mass matrix is available, it can usually be assumed to be almost diagonal with more or less equal elements. In this case, the calculation of the MAC value between two different modes is approximately equivalent to checking their orthogonality.
For more specific information on using the MSF and MAC for interpreting results in a running mode analysis, see section 17.4.


18.3 Mode participation

The relative importance of different modes in a certain frequency band can be investigated using the concept of modal participation. For each mode, the sum of all residue values for a specific reference expresses that mode's contribution to the response. At the same time these sums can be added over all references, to evaluate the importance of each mode.

Note! These evaluations are only meaningful when the same response and reference stations are included for all modes.

When a comparison is made of the residue sums for one mode at all the references, it evaluates the reference point selection for that mode. The reference with the highest residue sum is the best one to excite that mode.
When these sums are added together for all references, the importance of the modes themselves is evaluated. The mode with the highest result is the most important one.
Finally, the sums of residues can be added for all modes. Comparison of these results between different inputs allows you to evaluate the selection of reference stations in a global sense for all modes.
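The three levels of summation described above can be sketched as follows (the array layout is an assumption made for illustration, not the LMS data format):

```python
import numpy as np

def mode_participation(residues):
    """Residue sums for mode participation (section 18.3).

    residues : complex array (n_modes, n_responses, n_references).
    Returns:
      per_ref  : (n_modes, n_references) sums over response DOFs - evaluates
                 the reference point selection per mode
      per_mode : (n_modes,) sums added over all references - the most
                 important mode has the highest value
      per_input: (n_references,) sums added over all modes - evaluates the
                 reference stations in a global sense
    """
    per_ref = np.abs(residues).sum(axis=1)
    per_mode = per_ref.sum(axis=1)
    per_input = per_ref.sum(axis=0)
    return per_ref, per_mode, per_input
```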


18.4 Reciprocity between inputs and outputs

Reciprocity is one of the fundamental assumptions of modal analysis theory. This section discusses the reciprocity of FRFs and the reciprocity of the modal model.

Reciprocity of FRFs

Reciprocity of FRFs means that measuring the response at DOF i while exciting at DOF j is the same as measuring the response at DOF j while exciting at DOF i. This is expressed mathematically as

$$h_{ij}(\omega) = h_{ji}(\omega) \qquad \text{Eqn 18-8}$$

This means that the FRF matrix is symmetric. Note that this property is inherently assumed when performing hammer impact testing to measure FRFs or impulse responses.

Reciprocity in the modal model

Using the modal model for the FRF matrix

$$[H] = \sum_{k=1}^{N}\left(\frac{\{V\}_k\{L\}^t_k}{j\omega - \lambda_k} + \frac{\{V\}^*_k\{L\}^{*t}_k}{j\omega - \lambda^*_k}\right) \qquad \text{Eqn 18-9}$$

it becomes clear that, when this matrix is symmetric, the roles of the mode shape vectors and the modal participation vectors can be switched. Making an abstraction of the absolute scaling of residues, this property can be expressed as follows.
For a reciprocal test structure, the modal participation factors should be proportional to the mode shape coefficients at the input stations.
Using this proportionality between mode shapes and modal participation factors, reciprocity can be checked for each mode when data for more than one input station has been used for the modal parameter estimation.


If reciprocity exists then it is possible to correctly synthesize the transfer function between any pair of response and reference DOFs. This is done by computing a scaling factor between the driving point mode shape and the modal participation factor. This same scaling factor is then used as a reference to derive the necessary participation factor from the available mode shape coefficient.
If reciprocity is not satisfied then only the transfer functions between the measured response and reference DOFs can be correctly synthesized. If reciprocity is required then it can be imposed on the model, and a number of options are available to calculate the proportionality factor needed to do this.
1 Select one driving point for each mode. The best choice in this case is the one with the largest driving point residue, since it is the one that best excites and is observed from that input DOF.
2 Select one specific driving point for all modes. Other participation factors are disregarded for scaling.
3 Compute a reciprocal scale factor (RSF) using a least squares average of all the driving point data, as defined by the following formula for n driving points.

$$\mathrm{RSF} = \frac{\sum_{i=1}^{n} v^*_i\, l_i}{\sum_{i=1}^{n} v^*_i\, v_i}$$

where
v_i = the mode shape coefficient
l_i = the modal participation factor
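The least squares average of option 3 can be sketched as (illustrative names; `np.vdot` conjugates its first argument, matching the v* in the formula):

```python
import numpy as np

def reciprocal_scale_factor(v, l):
    """Reciprocal scale factor over n driving points (section 18.4):
    RSF = sum(v_i^* l_i) / sum(v_i^* v_i).

    v : driving point mode shape coefficients
    l : corresponding modal participation factors
    """
    v = np.asarray(v)
    return np.vdot(v, np.asarray(l)) / np.vdot(v, v)
```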


18.5 Generalized modal parameters

This section deals with mode shape scaling and generalized parameters (modal mass).
The residue r_ij,k between locations i and j for mode k can be written as the product of a scaling factor a_k (which is independent of the location) and the modal vector components in both locations. If the structure is proportionally damped, the modal vectors of the structure are real whereas the residues are purely imaginary. As a consequence, the scaling factor a_k is also purely imaginary.

$$r_{ij,k} = a_k\,v_{ik}\,v_{jk}, \qquad a_k = \frac{1}{2j\,\omega_{dk}\,m_k} \qquad \text{Eqn 18-10}$$

Equation 18-1 can then be rewritten as

$$H_{ij}(j\omega) = \sum_{k=1}^{N}\frac{1}{2j\,m_k\,\omega_{dk}}\left(\frac{v_{ik}v_{jk}}{j\omega - \lambda_k} - \frac{v^*_{ik}v^*_{jk}}{j\omega - \lambda^*_k}\right) \qquad \text{Eqn 18-11}$$

where
m_k = the modal mass of mode k
ω_dk = the damped natural frequency of mode k = $\omega_{nk}\sqrt{1 - \zeta^2_k}$
ζ_k = the critical damping ratio of mode k
ω_nk = the undamped natural frequency of mode k
At this point, it should be pointed out that equation 18-11 contains N more parameters than equation 18-1, i.e. one more parameter per mode. This is due to the fact that residues are scaled quantities whereas the modal vectors are determined within a scaling factor only. In equation 18-11 the modal mass values play the role of the scaling constants. It is clear that the value of the modal mass depends on the scaling scheme that was used to obtain the numerical values of the modal vector amplitudes.
When the residues of a proportionally damped structure are known, equations 18-10 and 18-11 can therefore be used to compute the modal mass and the modal vector amplitudes once a scaling method is proposed. Indeed, residues, modal vectors and modal mass are related by the following equation


$$r_{ijk} = \frac{v_{ik}\,v_{jk}}{2j\,\omega_{dk}\,m_k} \qquad \text{Eqn 18-12}$$

To compute the amplitudes of one modal vector and the corresponding modal mass from a set of residues with respect to a given input location j, you need one additional equation, since the set of equations that can be written for all output locations i in the form of equation 18-12 is undetermined: N equations in N+1 unknowns are obtained. This last equation will actually determine the scaling of the modal vector.
Note that an eigenvector determines only a direction in the state space and has no absolutely scaled amplitude, while a residue has a magnitude with physical meaning. The scaling of the eigenvectors will determine the modal mass. Modal stiffness is determined as the modal mass multiplied by the natural frequency squared. Modal damping is twice the modal mass multiplied by the natural frequency and the damping ratio.
- Unity mass
  In this case the mode shapes and participation factors are scaled such that the modal mass (m_k) in equation 18-12 is equal to 1.
- Unity stiffness
  In this case the mode shapes and participation factors are scaled such that the modal stiffness (k_k = m_k ω_k²) is scaled to 1.
- Unity modal A
  In this case the mode shapes and participation factors are scaled such that the scaling factor (a_k) is scaled to 1. This scaling factor is independent of the DOFs.
- Unity length
  In this case the mode shapes and participation factors are scaled such that the squared norm of the vector v_ik is scaled to unity: $\sum_{i=1}^{N_0} v^2_{ik} = 1$.
- Unity maximum
  In this case the mode shapes and participation factors are scaled such that the vector v_ik is scaled to 1, where i is the DOF with the largest mode shape amplitude.
- Unity component
  In this case the mode shapes and participation factors are scaled such that the vector v_ik is scaled to 1, where i is any DOF selected by the user.


18.6 Mode complexity

When a mass is added to a mechanical structure at a certain measurement point, the damped natural frequencies of all modes will shift downwards. This theoretical characteristic forms the basis of a criterion for the evaluation of estimated mode shape vectors.
For each response station, the sensitivity of each natural frequency to a mass increase at that station can be calculated, and should be negative. A quantity called the "Mode Overcomplexity Value" (MOV) is defined as the (weighted) percentage of the response stations for which a mass addition indeed decreases the natural frequency of a specific mode,
$$\mathrm{MOV}_k = \frac{\sum_{i=1}^{N_0} w_i\,a_{ik}}{\sum_{i=1}^{N_0} w_i} \times 100\% \qquad \text{Eqn 18-13}$$

where
w_i is the weighting factor
  = 1 for unweighted calculations
  = |v_ik|² for weighted calculations
a_ik = 1 if the kth frequency sensitivity to a mass addition in point i is negative
  = 0 otherwise

This MOV index should be high (near 100 %) for high quality modes. If this index is low, the considered mode shape vector is either computational or wrongly estimated. It is called "overcomplex", which means that the phase angle of some modal coefficients exceeds a reasonable limit.
However, if this MOV is low for all modes for a specific input station (say, below 10 %), this might indicate that the excitation force direction was wrongly entered while measuring the FRFs for that input station. This error may be corrected by changing the signs of the modal participation factors for all modes for that particular input.
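Equation 18-13 as a short sketch (names are illustrative assumptions):

```python
import numpy as np

def mode_overcomplexity_value(freq_sensitivity, mode_shape=None):
    """Mode Overcomplexity Value for one mode (Eqn 18-13).

    freq_sensitivity : per-station sensitivity of the natural frequency to
                       a mass addition (negative for a well-behaved station)
    mode_shape       : optional complex coefficients v_ik; when given, the
                       weights w_i = |v_ik|^2 are used, otherwise w_i = 1
    """
    a = (np.asarray(freq_sensitivity) < 0).astype(float)   # a_ik
    w = np.ones_like(a) if mode_shape is None else np.abs(mode_shape) ** 2
    return 100.0 * np.sum(w * a) / np.sum(w)
```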


18.7 Modal phase collinearity

For lightly or proportionally damped structures, the estimated mode shapes should be purely normal. This means that the phase angle between two different complex mode shape coefficients of the same mode (i.e. for two different response stations) should be either 0°, 180° or -180°. An indicator called the "Modal Phase Collinearity" (MPC) index expresses the linear functional relationship between the real and the imaginary parts of the unscaled mode shape vector.
This index should be high (near 100 %) for real normal modes. A low MPC index indicates a rather complex mode, either because of local damping elements in the tested structure or because of an erroneous measurement or analysis procedure.

Mean phase deviation

Another indicator of the complexity of unscaled mode shape vectors is the Mean Phase Deviation (MPD). This index is the statistical variance of the phase angles of each mode shape coefficient from their mean value, and indicates the phase scatter of a mode shape. This MPD value should be low (near 0°) for real normal modes.


18.8 Comparison of models

When you have two groups of modes representing the same modal space, you can compare the two groups. The comparison concerns the damped frequencies, the damping values, the modal phase collinearities and the MAC values of the two groups. This is a useful way of comparing sets of modes generated from the same data but using different estimation techniques, for example.


18.9 Mode indicator functions

Mode Indicator Functions (MIFs) are frequency domain functions that exhibit local minima at the natural frequencies of real normal modes. The number of MIFs that can be computed for a given data set equals the number of input locations that are available. The so-called primary MIF will exhibit a local minimum at each of the structure's natural frequencies. The secondary MIF will have local minima only in the case of repeated roots. Depending on the number of input locations for which data is available, higher order MIFs can be computed to determine the multiplicity of the repeated root. So a root with a multiplicity of four will cause a minimum in the first, second, third and fourth MIF, for example. An example of a MIF is shown below.

Given a structure's FRF matrix [H] describing its input-output characteristics, and a force vector {F}, the output or response {X} can be computed from the following equation

$$\{X\} = [H]\{F\} \qquad \text{Eqn 18-14}$$

Removing the brackets from the notation, equation 18-14 can be split into real and imaginary parts

$$X_r + jX_i = (H_r + jH_i)(F_r + jF_i) \qquad \text{Eqn 18-15}$$

For real normal modes, the structural response must lag the excitation forces by 90°. Therefore, when the structure is excited at the correct frequency according to one of these modes (modal tuning), the contribution of the real part of the response vector X to its total length must become minimal. Mathematically this can be formulated as the following minimisation problem


$$\min_{|F| = 1} \frac{X^t_r X_r}{X^t_r X_r + X^t_i X_i} \qquad \text{Eqn 18-16}$$

Substituting the expression 18-15 for the real and imaginary parts of the response in this expression yields

$$\min_{|F| = 1} \frac{F^t H^t_r H_r F}{F^t \left(H^t_r H_r + H^t_i H_i\right) F} \qquad \text{Eqn 18-17}$$

The solution of equation 18-17 reduces to finding the minima of frequency functions that are built from eigenvalues. The following eigenvalue problem is formulated at each spectral line under investigation

$$H^t_r H_r F = \lambda\,\left(H^t_r H_r + H^t_i H_i\right) F \qquad \text{Eqn 18-18}$$

The square matrices $H^t_r H_r$ and $H^t_i H_i$ have as many rows and columns as the number of input or reference locations that were used to create them (i.e. the number of columns of the FRF matrix that were measured). The primary Mode Indicator Function is now constructed from the smallest eigenvalue of expression 18-18 at each spectral line. It exhibits noticeable local minima at the frequencies where real normal modes exist. A second MIF can be constructed using the second smallest eigenvalue of 18-18 for each spectral line. It will contain noticeable local minima if the structure has repeated modes. This can be repeated for all other eigenvalues of equation 18-18. The number of functions that can be constructed is equal to the number of eigenvalues, which is the same as the number of input stations. From these functions, you can then deduce the multiplicity of each of the normal modes.
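The per-spectral-line eigenvalue problem of equation 18-18 can be sketched with plain NumPy (the FRF array layout used here is an assumption for illustration):

```python
import numpy as np

def mode_indicator_functions(H):
    """Mode Indicator Functions from FRF data (Eqn 18-18).

    H : complex array (n_lines, n_outputs, n_inputs).
    Returns (n_lines, n_inputs); column 0 (the smallest eigenvalue per
    line) is the primary MIF, column 1 the secondary MIF, and so on.
    """
    n_lines, _, n_in = H.shape
    mif = np.empty((n_lines, n_in))
    for s in range(n_lines):
        Hr, Hi = H[s].real, H[s].imag
        A = Hr.T @ Hr                  # numerator matrix  Hr^t Hr
        B = A + Hi.T @ Hi              # denominator matrix Hr^t Hr + Hi^t Hi
        # eigenvalues of the generalized problem A F = lambda B F
        lam = np.linalg.eigvals(np.linalg.solve(B, A))
        mif[s] = np.sort(lam.real)     # smallest first = primary MIF
    return mif
```

At a frequency where the response is purely imaginary (a real normal mode, response lagging the force by 90°), the numerator matrix vanishes and the primary MIF drops to zero.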


18.10 Summation of FRFs

An important indication of the accuracy of the natural frequency estimates is their coincidence with resonance peaks in the FRF measurements. These resonance peaks can be enhanced by a summation of all available data, by either real or imaginary parts.
Graphically comparing this summation of FRFs with the values of the natural frequencies of the modes in a display module can be useful. Problems like missing modes, erroneous frequency estimates or resonances shifting because of mass loading by the transducers can easily be detected this way.


18.11 Synthesis of FRFs

The FRFs that you have obtained from a modal model can be synthesized in a number of ways. Scaled mode shapes (i.e. mode shapes and modal participation factors) have to be available for at least one input station for which a mode shape coefficient is also available. Using the Maxwell-Betti reciprocity principle between inputs and outputs (section 18.4), it is however possible to calculate the FRF between any two measurement stations.

Correlation and errors

It is also possible to assess correlation and error values relating the measured and synthesized FRFs.
The correlation is the normalized complex product of the synthesized and measured values.
$$\mathrm{correlation} = \frac{\left|\sum_i S_i M^*_i\right|^2}{\left(\sum_i S_i S^*_i\right)\left(\sum_i M_i M^*_i\right)} \qquad \text{Eqn 18-19}$$

with
S_i = the complex value of the synthesized FRF at spectral line i
M_i = the complex value of the measured FRF at spectral line i
The LS error is the least squares difference normalized to the synthesized values.

$$\mathrm{LS\ error} = \frac{\sum_i (S_i - M_i)(S_i - M_i)^*}{\sum_i S_i S^*_i} \qquad \text{Eqn 18-20}$$

A listing of the FRFs where the correlation is lower than a specified percentage and the error is higher than a specified percentage provides useful information on the quality of the synthesized FRF.
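Equations 18-19 and 18-20 in code (a sketch; the function name is an illustrative assumption):

```python
import numpy as np

def synthesis_quality(S, M):
    """Correlation (Eqn 18-19) and LS error (Eqn 18-20) between a
    synthesized FRF S and a measured FRF M (complex spectra)."""
    S = np.asarray(S)
    M = np.asarray(M)
    corr = (np.abs(np.sum(S * np.conj(M))) ** 2
            / (np.sum(S * np.conj(S)).real * np.sum(M * np.conj(M)).real))
    d = S - M
    ls_error = np.sum(d * np.conj(d)).real / np.sum(S * np.conj(S)).real
    return corr, ls_error
```

Note that the correlation is insensitive to an overall scale factor between the two FRFs, while the LS error is not.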


Chapter 19

Rigid body modes

In this chapter the behavior of a structure as a rigid body is discussed. The following topics are covered:
- The calculation of rigid body properties of a structure from FRF measurements
- Rigid body analysis to determine rigid body modes


19.1 Calculation of rigid body properties

This section discusses the theory used in the calculation of rigid body properties. Experimental frequency response functions (FRFs) can be used to derive the structural modes of a structure and the inertia properties of a system. These properties are: the moments of inertia, the products of inertia and the principal moments of inertia.
In general, two types of method are applied.
1 The first type determines the inertia characteristics using the rigid body mode shapes obtained from test data. This is the Modal Model Method described in reference [1].
2 The second type starts from the mass line, i.e. the FRF inertia restraint of the softly suspended structure. This mass line is used in a set of kinematic and dynamic equations, from which the rigid body characteristics (mass, center of gravity, principal directions and moments of inertia) can be determined (reference [2]). Some of these methods also look for the suspension stiffnesses while others consider the mass of the system as known (reference [3]). This type of method is described in more detail below.
Figure 19-1: Rigid body modes. (FRF magnitude (acc/force) versus frequency, showing the rigid body modes, the mass line, the frequency band used for the analysis, and the first deformation mode.)

Derivation of rigid body properties from measured FRFs

Input data
FRFs are required in order to determine the rigid body properties. The input format is required to be acceleration/force; if this is not the case, a transformation can be applied. Rotational or scalar (acoustic) measurements are not used in the rigid body calculations.


In theory, 2 excitations and 6 responses are needed for the calculations. Practical tests show that the best results are obtained when at least 6 excitations (e.g. 2 nodes in 3 directions) and 12 responses are measured.

Reference axis system
All the rigid body properties are calculated relative to a reference axis system. The reference axis system is defined by the three coordinate values of its origin and three Euler angles representing its rotation.

Specification of the frequency band
Rigid body properties are calculated in a global (least squares) sense over a specified frequency band between the last rigid body mode and the first deformation mode (see Figure 19-1).

Mass line value
The "mass line" value which is needed for the calculations can be derived from the measured FRFs in three ways:
1) When the rigid body modes and deformation modes are sufficiently spaced, the amplitude values (with the sign of the real part) of the original, unchanged measured FRFs can be used. In this case there is no need to have the deformation modes available for the rigid body modes analysis.
2) When the spacing between rigid body modes and deformation modes is not sufficient, the FRFs have to be corrected. In this case the influence of the first deformation modes, if significant, can be subtracted from the original FRFs. The amplitude values (with the sign of the real part) of the synthesized FRFs are used.
3) If accurate measured FRFs are not available in the frequency range directly above the rigid body modes, then lower residual terms, determined in a frequency band which contains the first deformation modes, can be used. Residual terms can be determined from a modal analysis. Lower residuals represent the influence of the modes below the deformation modes, and are therefore representative of the rigid body modes.

Calculation of the rigid body properties

1 Calculation of the reference acceleration matrix

1.1 Coordinate transformation
If the nodes corresponding to the response DOFs used do not have global directions, or when a reference (not coincident with the global origin) is specified, then a rotation of the measured accelerations into the global/reference axis system is needed.

Part IV

Modal Analysis and Design


Chapter 19 Rigid body modes

All three directions (+X, +Y, +Z) are required. For the three measured (local) accelerations of output node "o":

$$\{\ddot{X}\}_g = [T]_o^{-1}\{\ddot{X}\}_l \qquad \text{Eqn 19-1}$$

where
$\{\ddot{X}\}_g$ is the global acceleration vector
$\{\ddot{X}\}_l$ is the local acceleration vector
$[T]_o^{-1}$ is the rotation matrix (global to local) of node "o".

When a reference is specified which does not coincide with the global origin, the three measured accelerations of output node "o" are also rotated according to the axes of the reference system:

$$\{\ddot{X}\}_r = ([T]_r[T]_o^{-1})\{\ddot{X}\}_l \qquad \text{Eqn 19-2}$$

where $[T]_r$ is the rotation matrix (global to local) of node "r".
1.2 System of equations
For all spectral lines of the selected band, for all response nodes P, Q, ... and for all inputs 1, 2, ... under consideration:
$$
\begin{bmatrix}
\ddot{X}_{1Px} & \ddot{X}_{2Px} & \cdots \\
\ddot{X}_{1Py} & \ddot{X}_{2Py} & \cdots \\
\ddot{X}_{1Pz} & \ddot{X}_{2Pz} & \cdots \\
\ddot{X}_{1Qx} & \ddot{X}_{2Qx} & \cdots \\
\ddot{X}_{1Qy} & \ddot{X}_{2Qy} & \cdots \\
\ddot{X}_{1Qz} & \ddot{X}_{2Qz} & \cdots \\
\vdots & \vdots &
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 & Z_P & -Y_P \\
0 & 1 & 0 & -Z_P & 0 & X_P \\
0 & 0 & 1 & Y_P & -X_P & 0 \\
1 & 0 & 0 & 0 & Z_Q & -Y_Q \\
0 & 1 & 0 & -Z_Q & 0 & X_Q \\
0 & 0 & 1 & Y_Q & -X_Q & 0 \\
\vdots & & & & & \vdots
\end{bmatrix}
\begin{bmatrix}
\ddot{X}_{1gx} & \ddot{X}_{2gx} & \cdots \\
\ddot{X}_{1gy} & \ddot{X}_{2gy} & \cdots \\
\ddot{X}_{1gz} & \ddot{X}_{2gz} & \cdots \\
\ddot{\Theta}_{1gx} & \ddot{\Theta}_{2gx} & \cdots \\
\ddot{\Theta}_{1gy} & \ddot{\Theta}_{2gy} & \cdots \\
\ddot{\Theta}_{1gz} & \ddot{\Theta}_{2gz} & \cdots
\end{bmatrix}
\qquad \text{Eqn 19-3}
$$

where each column of the right-hand matrix contains the translational ($\ddot{X}$) and rotational ($\ddot{\Theta}$) acceleration of the corresponding input towards the global axis system, and $X_P$, $Y_P$ and $Z_P$ are the global coordinates of node P (or towards the reference axis system).
This over-determined system of equations (the number of output DOFs is greater than or equal to 6) is solved for each spectral line in a least squares sense. In this way the reference acceleration matrix is found at each spectral line. Further, a general solution of the reference acceleration matrix over the total frequency band is calculated by solving, in a least squares sense, the global set of equations containing all outputs and all spectral lines.
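As a sketch of this least squares step (Eqn 19-3), the following assumes three non-collinear response nodes with known global coordinates and simulates consistent rigid body data for one input at one spectral line; all node coordinates and values are illustrative assumptions, not part of the LMS implementation.

```python
import numpy as np

def kinematic_block(x, y, z):
    # [I | -r~] block of Eqn 19-3 for one response node at (x, y, z)
    return np.hstack([np.eye(3),
                      np.array([[0.0,   z,  -y],
                                [ -z, 0.0,   x],
                                [  y,  -x, 0.0]])])

# Three non-collinear response nodes (assumed coordinates)
nodes = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 1.5)]
A = np.vstack([kinematic_block(*n) for n in nodes])   # 9 x 6, over-determined

# Simulated reference acceleration: 3 translations + 3 rotations
ref_true = np.array([0.1, 0.0, -0.2, 0.05, 0.0, 0.01])
x_meas = A @ ref_true                                 # "measured" accelerations

# Least squares solution for the 6 reference accelerations
ref_est, *_ = np.linalg.lstsq(A, x_meas, rcond=None)
```

Note that two response nodes are never sufficient: a rotation about the axis through both nodes moves neither of them, so the system only becomes well-conditioned with at least three non-collinear nodes.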


2 Calculation of the reference force matrix

2.1 Coordinate transformation
For input force 1 in the local X-direction of node "i":

$$\{F_1\} = [T]_i^{-1}\begin{Bmatrix}1.0\\0.0\\0.0\end{Bmatrix} \qquad \text{Eqn 19-4}$$

where $[T]_i^{-1}$ is the rotation matrix (global to local) of node "i".

When the reference "r" is not coincident with the global origin:

$$\{F_1\} = ([T]_r[T]_i^{-1})\begin{Bmatrix}1.0\\0.0\\0.0\end{Bmatrix} \qquad \text{Eqn 19-5}$$

where $[T]_r$ is the rotation matrix (global to local) of reference node "r".
Similar equations are used when the input has Y-direction or Z-direction.
2.2 System of equations
For all inputs 1, 2, ...:

$$
\begin{Bmatrix}
F_{1gx}\\ F_{1gy}\\ F_{1gz}\\ M_{1gx}\\ M_{1gy}\\ M_{1gz}
\end{Bmatrix}
=
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
0 & -Z_1 & Y_1\\
Z_1 & 0 & -X_1\\
-Y_1 & X_1 & 0
\end{bmatrix}
\{F_1\}
\qquad \text{Eqn 19-6}
$$

where
the left-hand side is the reference force matrix towards the global axis system for input 1
$\{F_1\}$ is the applied force at input 1
$X_1$, $Y_1$ and $Z_1$ are the global coordinates of the node corresponding with input 1.
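The mapping of Eqn 19-6 is simply the cross-product (skew) matrix of the input node's global coordinates acting on the applied force; a minimal sketch with assumed values:

```python
import numpy as np

def reference_force(F, node_xyz):
    # Returns [Fx Fy Fz Mx My Mz], with M = r x F and r the node coordinates
    x, y, z = node_xyz
    skew = np.array([[0.0,  -z,   y],
                     [  z, 0.0,  -x],
                     [ -y,   x, 0.0]])
    return np.concatenate([F, skew @ F])

F1 = np.array([10.0, 0.0, 0.0])                  # force along global X (assumed)
ref_F = reference_force(F1, (0.0, 0.5, 0.0))     # offset node -> moment about Z
```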
3 Calculation of the coordinates of the center of gravity and the moments and products of inertia
For (i) each input and each spectral line, and (ii) each input over the total band:


$$
\begin{Bmatrix}
F_{gx} - m\ddot{X}_{gx}\\
F_{gy} - m\ddot{X}_{gy}\\
F_{gz} - m\ddot{X}_{gz}\\
M_{gx}\\ M_{gy}\\ M_{gz}
\end{Bmatrix}
=
\begin{bmatrix}
0 & -m\ddot{\Theta}_z & m\ddot{\Theta}_y & 0 & 0 & 0 & 0 & 0 & 0\\
m\ddot{\Theta}_z & 0 & -m\ddot{\Theta}_x & 0 & 0 & 0 & 0 & 0 & 0\\
-m\ddot{\Theta}_y & m\ddot{\Theta}_x & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & F_{gz} & -F_{gy} & \ddot{\Theta}_x & 0 & 0 & -\ddot{\Theta}_y & 0 & -\ddot{\Theta}_z\\
-F_{gz} & 0 & F_{gx} & 0 & \ddot{\Theta}_y & 0 & -\ddot{\Theta}_x & -\ddot{\Theta}_z & 0\\
F_{gy} & -F_{gx} & 0 & 0 & 0 & \ddot{\Theta}_z & 0 & -\ddot{\Theta}_y & -\ddot{\Theta}_x
\end{bmatrix}
\begin{Bmatrix}
X_{cog}\\ Y_{cog}\\ Z_{cog}\\ I_{xx}\\ I_{yy}\\ I_{zz}\\ I_{xy}\\ I_{yz}\\ I_{xz}
\end{Bmatrix}
\qquad \text{Eqn 19-7}
$$

$X_{cog}$, $Y_{cog}$ and $Z_{cog}$ are the global coordinates of the center of gravity
$I_{xx}$, $I_{yy}$ and $I_{zz}$ are the moments of inertia towards the global axis system
$I_{xy}$, $I_{yz}$ and $I_{xz}$ are the products of inertia towards the global axis system.
This set of equations can be solved in two steps. First, the coordinates of the center of gravity are solved from the first three equations (per reference). Afterwards, these values are filled in to the last three equations to solve the moments and products of inertia.
Step 1
For each input and each spectral line, and for each input over the total band:

$$
\begin{Bmatrix}
F_{gx} - m\ddot{X}_{gx}\\
F_{gy} - m\ddot{X}_{gy}\\
F_{gz} - m\ddot{X}_{gz}
\end{Bmatrix}
=
\begin{bmatrix}
0 & -m\ddot{\Theta}_z & m\ddot{\Theta}_y\\
m\ddot{\Theta}_z & 0 & -m\ddot{\Theta}_x\\
-m\ddot{\Theta}_y & m\ddot{\Theta}_x & 0
\end{bmatrix}
\begin{Bmatrix}
x_{cog}\\ y_{cog}\\ z_{cog}
\end{Bmatrix}
\qquad \text{Eqn 19-8}
$$

Step 2
For each input and each spectral line, and for each input over the total band:

$$
\begin{Bmatrix}
M_{gx} - y_{cog}F_{gz} + z_{cog}F_{gy}\\
M_{gy} - z_{cog}F_{gx} + x_{cog}F_{gz}\\
M_{gz} - x_{cog}F_{gy} + y_{cog}F_{gx}
\end{Bmatrix}
=
\begin{bmatrix}
\ddot{\Theta}_x & 0 & 0 & -\ddot{\Theta}_y & 0 & -\ddot{\Theta}_z\\
0 & \ddot{\Theta}_y & 0 & -\ddot{\Theta}_x & -\ddot{\Theta}_z & 0\\
0 & 0 & \ddot{\Theta}_z & 0 & -\ddot{\Theta}_y & -\ddot{\Theta}_x
\end{bmatrix}
\begin{Bmatrix}
I_{xx}\\ I_{yy}\\ I_{zz}\\ I_{xy}\\ I_{yz}\\ I_{xz}
\end{Bmatrix}
\qquad \text{Eqn 19-9}
$$

At each spectral line, these over-determined sets of equations (number of inputs greater than or equal to 2) are solved in a least squares sense. A global solution for these rigid body properties over the total band can also be found from the global acceleration matrix over the total frequency band (see equation 19-3).


If wanted, only the second set of equations is solved. In this case the coordinates of the center of gravity are presumed to be known and specified by the user.
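The two-step solution (Eqns 19-8 and 19-9) can be sketched as follows, with synthetic but physically consistent data for three inputs; the mass, accelerations and inertia values are all assumed for illustration:

```python
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def inertia_rows(th):
    tx, ty, tz = th
    # Coefficients of [Ixx Iyy Izz Ixy Iyz Ixz] in [I]{th}, with the sign
    # convention I = [[Ixx,-Ixy,-Ixz],[-Ixy,Iyy,-Iyz],[-Ixz,-Iyz,Izz]]
    return np.array([[tx, 0.0, 0.0, -ty, 0.0, -tz],
                     [0.0, ty, 0.0, -tx, -tz, 0.0],
                     [0.0, 0.0, tz, 0.0, -ty, -tx]])

m = 2.0                                              # total mass (assumed)
cog_true = np.array([0.2, -0.1, 0.3])
I_true = np.array([1.5, 2.5, 3.0, 0.1, -0.05, 0.2])  # Ixx Iyy Izz Ixy Iyz Ixz

# Synthesize consistent (rotation, translation, force, moment) data, 3 inputs
ths = [np.array([0.4, 0.1, -0.3]), np.array([-0.2, 0.5, 0.1]),
       np.array([0.1, -0.3, 0.6])]
accs = [np.array([0.1, -0.2, 0.3]), np.array([0.0, 0.3, -0.1]),
        np.array([-0.2, 0.1, 0.2])]
Fs = [m * (a + np.cross(th, cog_true)) for th, a in zip(ths, accs)]
Ms = [np.cross(cog_true, F) + inertia_rows(th) @ I_true
      for th, F in zip(ths, Fs)]

# Step 1: F_g - m a_g = m (th x r_cog), stacked over inputs, least squares
A1 = np.vstack([m * skew(th) for th in ths])
b1 = np.concatenate([F - m * a for F, a in zip(Fs, accs)])
cog, *_ = np.linalg.lstsq(A1, b1, rcond=None)

# Step 2: M_g - r_cog x F_g = (inertia rows) {I}, stacked, least squares
A2 = np.vstack([inertia_rows(th) for th in ths])
b2 = np.concatenate([M - np.cross(cog, F) for M, F in zip(Ms, Fs)])
I_est, *_ = np.linalg.lstsq(A2, b2, rcond=None)
```

Three inputs with independent rotational accelerations are used here because two inputs leave a one-dimensional ambiguity in the symmetric inertia tensor, which in practice is resolved by stacking many spectral lines.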
4 Calculation of the principal moments and axes of inertia
In general $\{L_g\} = [A]\{\dot{\Theta}_g\}$:

$$
\begin{Bmatrix} L_x\\ L_y\\ L_z \end{Bmatrix}
=
\begin{bmatrix}
I_{xx} & -I_{xy} & -I_{xz}\\
-I_{yx} & I_{yy} & -I_{yz}\\
-I_{zx} & -I_{zy} & I_{zz}
\end{bmatrix}
\begin{Bmatrix} \dot{\Theta}_x\\ \dot{\Theta}_y\\ \dot{\Theta}_z \end{Bmatrix}
\qquad \text{Eqn 19-10}
$$

where
$\{L_g\}$ is the vector of total impulse (angular momentum) towards the global (reference) axis system
$[A]$ is the (symmetric) matrix of inertia
$\{\dot{\Theta}_g\}$ is the vector of angular velocity.
This leads to an eigenvalue problem, where
the eigenvalues $I_1$, $I_2$, $I_3$ are the 3 principal moments of inertia
the eigenvectors $\{e_1\}$, $\{e_2\}$, $\{e_3\}$ are the directions of the 3 principal axes of inertia.
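Numerically this is a symmetric eigenvalue problem; a sketch with an assumed inertia matrix:

```python
import numpy as np

Ixx, Iyy, Izz = 1.5, 2.5, 3.0          # assumed moments of inertia
Ixy, Iyz, Ixz = 0.1, -0.05, 0.2        # assumed products of inertia
A = np.array([[ Ixx, -Ixy, -Ixz],
              [-Ixy,  Iyy, -Iyz],
              [-Ixz, -Iyz,  Izz]])     # symmetric matrix of inertia, Eqn 19-10

I_principal, axes = np.linalg.eigh(A)  # eigenvalues I1 <= I2 <= I3
# columns of `axes` are the directions {e1}, {e2}, {e3} of the principal axes
```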


19.2 Rigid body mode analysis


A rigid body is a (part of a) structure that does not itself deform, but moves periodically as a whole at a certain frequency.
The modal parameters for such a rigid body mode are determined not by the dynamics of the structure itself, but by the dynamic properties of the boundary conditions of that structure. These include the way it is attached to its surroundings (or to the rest of the structure), the stiffness and damping characteristics of the suspending elements, its global mass, etc. A rigid body can be compared to a simple system with a mass attached to a fixed point by a spring and a damper element.
It has 6 modes of vibration, i.e. translation along the X, Y and Z axes, and rotation about these axes. Every mode which is measured for such a system will be a linear combination of these 6 modes.
Section 19.1 describes how the inertia properties of a structure can be calculated from measured FRFs. This enables you to calculate the center of gravity, the moments of inertia and the principal axes, as well as synthesized rigid body modes.
This section discusses how rigid body modes are used and describes two methods by which the modes can be determined, namely
- decomposition of measured modes into rigid body modes
- synthesis of rigid body modes based on geometrical data

Use of rigid body analysis

In modal analysis applications, the fact that (part of) a structure acts as a rigid body up to a certain frequency can be used in different ways.
1 Debugging the measurement setup
Rigid body modes can be used to verify the measurement setup when the frequency range of the measured FRFs covers a rigid body mode of the entire structure in its suspension (elastic cords or air bags for example). In this case, a simple peak picking procedure and an animation of the resulting mode will indicate which measurement points are not moving "in line" with the rest of the structure. Deviations from this rigid body motion can be caused by

- non-measured nodes (not moving at all)
- wrong response point identification (moving out of line)
- wrong response direction (moving in opposite direction)
- bad transducers or wrong calibration values (wrong amplitude)
- other measurement errors

Obvious errors, as in the first four cases, can be easily detected by curve-fitting a rigid body mode of the structure.
2 Completion of non-measured DOFs
Mode shape coefficients for non-measured points and/or directions can be calculated based on the assumption that the resulting deformed mode shape should still be a rigid body motion. This is achieved by first calculating the weighting coefficients for each of the 6 rigid body motions of the structure from the measured data, and then applying the same weighting to obtain the motion of the non-measured DOFs. This takes the geometry constraints into account and thus preserves the rigid body motion of the structure. This feature is useful to complete sparsely measured rigid parts of a wire frame model for animation.
3 Correction of measurement errors
Using the same approach as described under 2, it is also possible to re-calculate mode shape coefficients for measured DOFs and compare them to the actually measured ones, in order to evaluate measurement errors (as under 1) or measurement noise. It is even possible to replace the measured data by the calculated data and so smooth the mode shapes to obtain good rigid body motion for (parts of) the structure.
4 Synthesis of modes based on the geometry of the structure
Rigid body modes can be calculated for a structure based on the structure's mass, moments of inertia, boundary conditions, and user-specified values for frequency and damping. This is useful, for example, when coupling two substructures for which the modal parameters have been obtained separately.

19.2.1 Decomposition of measured modes into rigid body modes

The decomposition into rigid body modes is quite simple and involves the following steps.
1 Use the geometry data to construct the 6 rigid body motions of the structure.


2 Decompose a given mode shape into these 6 modes. This involves solving a system of linear equations and can only be accomplished if enough equations can be built. This means that at least 6 measured DOFs must be available and that the equations must be linearly independent. It is, for example, not possible to calculate the contribution of a rotation about the Z axis from data for 2 points on that axis, even if both points have all 3 DOFs measured.
3 Calculate the mode shape coefficients for the requested DOFs based upon the geometry and the 6 weighting coefficients.
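The three steps above can be sketched as follows; the node geometry, mode shape and noise level are assumed example values:

```python
import numpy as np

def rigid_basis(nodes):
    # Step 1: rows are the x, y, z DOFs of each node; the 6 columns are the
    # rigid body motions (3 translations, 3 rotations about the origin)
    return np.vstack([np.hstack([np.eye(3),
                                 np.array([[0.0,   z,  -y],
                                           [ -z, 0.0,   x],
                                           [  y,  -x, 0.0]])])
                      for x, y, z in nodes])

nodes = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
B = rigid_basis(nodes)                       # 12 x 6, >= 6 independent DOFs

# A "measured" mode shape that is almost a rigid body motion (noise assumed)
w_true = np.array([0.3, -0.1, 0.2, 0.05, 0.0, -0.02])   # weighting coefficients
shape = B @ w_true + 1e-3 * np.cos(np.arange(12))

w_fit, *_ = np.linalg.lstsq(B, shape, rcond=None)       # step 2: decompose
smoothed = B @ w_fit                                    # step 3: re-synthesize
```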

Limitations
Calculating the rigid body motion for a part of the structure (for example one single component) can sometimes prove a little awkward. The component will indeed move as a rigid body, but it is not constrained to remain connected to the rest of the structure. When applied to the tail wing of an airplane, for example, the wing may rotate about a horizontal axis through the middle of the wing but may no longer be connected to the fuselage at its base. The same may happen to the engine block of a car, which may become disconnected from its supports when a rigid body motion is applied to it.

19.2.2 Synthesis of rigid body modes based on geometrical data

The synthesis of rigid body modes for a 'free-free' structure is based on the translation along and the rotation about the three principal axes of inertia. The position of these three axes, the principal moments of inertia about them and the mass are required for the calculation of the rigid body modes. The damping and frequency are specified by the user. The residues are calculated as follows:

$$R_{trans} = \frac{1}{2m\omega} \qquad\qquad R_{rot} = \frac{r_f\,r_x}{2I\omega}$$

where
m is the total mass
$\omega$ is the user-defined damped natural frequency
$r_f$ is the perpendicular distance from the reference DOF to the respective axis of inertia
$r_x$ is the perpendicular distance from the response DOF to the respective axis of inertia
I is the moment of inertia about the respective axis of inertia.
Rigid body modes are useful in completing the modal model of a structure that
is being used for structural modification purposes.
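A minimal sketch of these residue formulas, with assumed values for the mass, inertia, frequency and distances:

```python
import numpy as np

def rigid_residues(m, I, omega, r_f, r_x):
    R_trans = 1.0 / (2.0 * m * omega)          # translation along a principal axis
    R_rot = (r_f * r_x) / (2.0 * I * omega)    # rotation about a principal axis
    return R_trans, R_rot

m = 12.0                   # total mass [kg] (assumed)
I = 3.5                    # moment of inertia about the axis [kg m^2] (assumed)
omega = 2 * np.pi * 4.0    # user-defined damped natural frequency [rad/s]
R_t, R_r = rigid_residues(m, I, omega, r_f=0.4, r_x=0.25)
```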


19.3 References
[1] Toivola, J. and Nuutila, O.
Comparison of Three Methods for Determining Rigid Body Inertia Properties from Frequency Response Functions
Tampere University of Technology, P.O. Box 589, SF-33101 Tampere, Finland
[2] Okuzumi, H.
Identification of the Rigid Body Characteristics of a Powerplant by Using Experimentally Obtained Transfer Functions
Central Engineering Laboratories, Nissan Motor Co., Ltd., Jun 1991
[3] Lemaire, G. and Gielen, L.
Het bepalen van de inertie-parameters van een star lichaam door middel van transfertfuncties
Eindwerk, Katholieke Hogeschool Brugge-Oostende, dep. industriele wetenschappen en technologie, 1995-1996
[4] LMS International
LMS CADA-X Modal Analysis Manual, Revision 3.4
LMS International, Leuven, Belgium, pp 2.6-2.7, pp 3.24-3.32, 1996
[5] LMS International
How to Add Rigid Body Modes to an Existing Modal Model in CADA-X
LMS International Consulting reports, Ref. DVDB/sh/911295, Leuven, Belgium, 22 pp, 1991


Chapter 20

Design

This chapter discusses the three types of analysis that can be performed to determine the effect of design changes on the modal behavior of a structure. These are
- Sensitivity
- Modification prediction
- Forced response

20.1 Using the modal model for modal design


Correctly scaled mode shapes are an absolute pre-requisite of the correct ap
plication of the design procedures described here.
The dynamic behavior of a structure can be fully described and modelled there
fore if the poles #k ), and the residues rijk for each mode k and each pair of re
sponse and reference DOFs i and j are known.
In practise however the modal model is often defined by the poles (frequency
and damping values) and the residues for only one (or a few) reference sta
tion(s) j. The question now arises as to how this limited modal model can be
used for the prediction of responses when forces are acting on a degree of free
dom for which residues are not readily available. The residues required be
tween any two degrees of freedom can be derived as follows.
For a linear structure which obeys the Maxwell-Betti reciprocity principle be
tween inputs and outputs, the FRF between two DOFs i and j can be obtained
by exciting the structure at DOF j and measuring the response at DOF i, or by
exciting at DOF i and measuring the response at j:
$$H_{ij}(\omega) = H_{ji}(\omega) \qquad \text{Eqn 20-1}$$

In other words, the FRF matrix for a reciprocal structure is symmetric.


Under these circumstances, the residue for each mode k between two response DOFs m and n can be obtained from the residues between each of them and the available reference j:

$$r_{mjk} = a_k\phi_{mk}\phi_{jk} \qquad \text{Eqn 20-2}$$

$$r_{njk} = a_k\phi_{nk}\phi_{jk} \qquad \text{Eqn 20-3}$$

where
$r_{mjk}$ is the known residue between DOFs m and j
$r_{njk}$ is the known residue between DOFs n and j
$\phi_{mk}$ is the unknown mode shape coefficient at response DOF m
$\phi_{nk}$ is the unknown mode shape coefficient at response DOF n
$\phi_{jk}$ is the unknown mode shape coefficient at response DOF j

The required residue is then

$$r_{mnk} = a_k\phi_{mk}\phi_{nk} = \frac{(a_k\phi_{mk}\phi_{jk})(a_k\phi_{nk}\phi_{jk})}{a_k\phi_{jk}\phi_{jk}} = \frac{r_{mjk}\,r_{njk}}{r_{jjk}} \qquad \text{Eqn 20-4}$$

where $r_{jjk}$ is the known driving point residue.
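This reciprocity relation (Eqn 20-4) is a one-line computation; the sketch below checks it against the underlying modal model $r_{ijk} = a_k\phi_{ik}\phi_{jk}$ with assumed complex values:

```python
# Eqn 20-4: residue between DOFs m and n from residues to a common reference j
def residue_between(r_mjk, r_njk, r_jjk):
    return r_mjk * r_njk / r_jjk

# Consistency check with r_ijk = a_k * phi_i * phi_j (assumed example values)
a_k = 2.0 - 0.5j
phi = {"m": 0.3 + 0.1j, "n": -0.2 + 0.4j, "j": 0.7 - 0.2j}
r_mjk = a_k * phi["m"] * phi["j"]
r_njk = a_k * phi["n"] * phi["j"]
r_jjk = a_k * phi["j"] * phi["j"]     # driving point residue
r_mnk = residue_between(r_mjk, r_njk, r_jjk)
```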


The starting point for modal synthesis applications is the available modal mod
el for the structure to be modified or for each of the substructures to be as
sembled.
It is important however that some conditions are met.
V

In order to be able to scale the included mode shapes correctly, they


must include driving point coefficients.

Mode shape coefficients need only be available for the Degrees Of Free
dom which are affected by the structural changes.

The information used to obtain this scaling comprises the poles, the (unscaled) mode shapes and the modal participation factors for a number of reference stations. The required scaled mode shape coefficients can be obtained from this information as follows. For the $N_i$ points for which output data are also available (i.e. driving points), a vector of complex modal participation factors $L_{kj}$ for each mode k can be built:

$$\lfloor L \rfloor_k = \lfloor L_1\ L_2\ \dots\ L_{N_i} \rfloor \qquad \text{Eqn 20-5}$$

The corresponding unscaled mode shape coefficients $W_{ik}$ are assembled in a column vector $\{W\}_k$:

$$\{W\}_k = \begin{Bmatrix} W_1\\ W_2\\ \vdots\\ W_{N_i} \end{Bmatrix}_k \qquad \text{Eqn 20-6}$$

The residues $[R]_k$ are defined as the product of mode shapes and modal participation factors:

$$[R]_k = \{W\}_k\lfloor L \rfloor_k \qquad \text{Eqn 20-7}$$

The scaled mode shapes $\{V\}_k$ used in the theoretical derivation of the previous chapter are related to the unscaled mode shapes $\{W\}_k$ via a complex scaling factor $\gamma_k$ for each mode:

$$\{V\}_k = \gamma_k\{W\}_k \qquad \text{Eqn 20-8}$$

From the definition of the residues, these mode shapes are scaled such that

$$[R]_k = \{W\}_k\lfloor L \rfloor_k = \{V\}_k\{V\}_k^t \qquad \text{Eqn 20-9}$$

or, from equation 20-8, $\gamma_k^2\{W\}_k\{W\}_k^t = \{W\}_k\lfloor L \rfloor_k$, which is solved in a least squares sense as

$$\gamma_k^2 = \frac{W_{1k}^*L_{1k} + W_{2k}^*L_{2k} + \dots + W_{N_ik}^*L_{N_ik}}{W_{1k}^*W_{1k} + W_{2k}^*W_{2k} + \dots + W_{N_ik}^*W_{N_ik}} \qquad \text{Eqn 20-10}$$

In the special case where only one input is considered, i.e. only one set of residues is available, the scaling factor becomes

$$\gamma_k^2 = \frac{L_{1k}}{W_{1k}} \qquad \text{Eqn 20-11}$$

The scaling of equation 20-8 actually converts the generally valid modal model of mode shape vectors W and modal participation factors L to a model of scaled mode shape vectors V, in which the modal participation factors are absorbed via equation 20-10. Obviously some information is lost by removing the scaling factors L from the model; as a consequence, the resulting model is only valid for reciprocal structures with a symmetric FRF matrix. The calculation of the scaling factor according to equation 20-10 is in fact the best compromise in a least squares sense to approximate a non-reciprocal modal model by a reduced reciprocal one.
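The least squares scaling of Eqn 20-10 can be sketched as follows; for exactly reciprocal data, where $L_{ik} = \gamma_k^2 W_{ik}$, it recovers the scaling factor exactly (all values are assumed):

```python
import numpy as np

def scaling_factor_sq(W_k, L_k):
    # Eqn 20-10: gamma_k^2 = (W_k^H L_k) / (W_k^H W_k); vdot conjugates W_k
    return np.vdot(W_k, L_k) / np.vdot(W_k, W_k)

# Exactly reciprocal synthetic data (assumed values)
gamma_sq_true = 0.8 - 0.3j
W_k = np.array([0.3 + 0.1j, -0.2 + 0.4j, 0.7 - 0.2j])
L_k = gamma_sq_true * W_k
gamma_sq = scaling_factor_sq(W_k, L_k)
```

With a single driving point this reduces to the ratio of Eqn 20-11, `L_k[0] / W_k[0]`.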


20.2 Sensitivity
An experimental modal analysis of a structure results in a dynamic model in terms of modal parameters. The qualitative information contained in this model can be used to identify dynamic problems, for example by animation of the mode shapes. Through physical insight and expertise, structural modifications can be proposed to overcome specific dynamic problems.
For structures with complex dynamic behavior, predictions about the effect of physical changes on modal parameters are usually very difficult, if not impossible, to make. When unsatisfactory dynamic behavior is detected or suspected, the designer can use trial and error procedures to try out a number of modifications, but there is no guarantee that any of these attempts will yield satisfying results. On the other hand, numerical techniques can be employed which use the quantitative results of a modal test to evaluate the effects of structural changes.
These structural changes can be imposed by modifying the physical characteristics of the structure in terms of its inertia, stiffness and damping. A sensitivity analysis allows you to see how changes in these physical characteristics affect particular modes at various points on the structure. It computes only the sensitivity of the modal model to structural alterations, and does not involve actually applying any changes. A sensitivity analysis provides you with the means of determining the points where such modifications will have most effect.

20.2.1 Mathematical background to sensitivity analysis

Determining the sensitivity of a DOF to various parameters involves (in a mathematical sense) evaluating the partial derivatives of the eigenproperties of a matrix with respect to its individual elements.
Modal parameters are related to the Frequency Response Function as follows.

$$H_{ij}(\omega) = \sum_{k=1}^{2N}\frac{r_{ijk}}{j\omega - \lambda_k} \qquad \text{Eqn 20-12}$$

The partial derivative of this equation with respect to a physical parameter P can be computed as follows:

$$\frac{\partial H_{ij}}{\partial P} = \sum_{k=1}^{2N}\frac{\partial r_{ijk}}{\partial P}\,\frac{1}{j\omega - \lambda_k} + \sum_{k=1}^{2N}\frac{r_{ijk}}{(j\omega - \lambda_k)^2}\,\frac{\partial \lambda_k}{\partial P} \qquad \text{Eqn 20-13}$$

P can be a mass at one DOF or damping or stiffness between a pair of DOFs.


The dynamic stiffness matrix Q is given by

$$Q = -\omega^2 M_{cc} + j\omega C_{cc} + K_{cc} \qquad \text{Eqn 20-14}$$

where
M is the mass matrix
C is the damping matrix
K is the stiffness matrix
the subscript c denotes that only those elements in the matrices that are affected by P will be considered.

Using this equation and the theory of adjoint matrices, equation 20-13 can be rewritten in the form

$$\frac{\partial H_{ij}}{\partial P} = -\{H_{ic}\}^t\,\frac{\partial Q}{\partial P}\,\{H_{cj}\} \qquad \text{Eqn 20-15}$$

Using equation 20-12, equation 20-15 becomes

$$\frac{\partial H_{ij}}{\partial P} = -\left\{\sum_{k=1}^{2N}\frac{r_{ick}}{j\omega - \lambda_k}\right\}^t\frac{\partial Q}{\partial P}\left\{\sum_{k=1}^{2N}\frac{r_{cjk}}{j\omega - \lambda_k}\right\} \qquad \text{Eqn 20-16}$$

Splitting up equation 20-16 into partial fractions, and identifying the corresponding terms of equation 20-13, gives the sensitivities for the frequency (20-17) and the mode shape (20-18):

$$\frac{\partial \lambda_k}{\partial P} = -\frac{1}{r_{ijk}}\,\{r_{ick}\}^t\left[\frac{\partial Q}{\partial P}\right]_{j\omega=\lambda_k}\{r_{cjk}\} \qquad \text{Eqn 20-17}$$

$$\frac{\partial r_{ijk}}{\partial P} = -\{r_{ick}\}^t\left[\frac{\partial Q}{\partial P}\right]_{j\omega=\lambda_k}\sum_{\substack{m=1\\ m\neq k}}^{2N}\frac{\{r_{cjm}\}}{\lambda_m-\lambda_k} - \sum_{\substack{m=1\\ m\neq k}}^{2N}\frac{\{r_{icm}\}^t}{\lambda_m-\lambda_k}\left[\frac{\partial Q}{\partial P}\right]_{j\omega=\lambda_k}\{r_{cjk}\} \qquad \text{Eqn 20-18}$$

So, from equations 20-17 and 20-18, the residues $r_{ick}$ and $r_{cjk}$ for each DOF c that is influenced by the structural change are required in order to calculate the sensitivity to that change. Even if not all the residues are available, the Maxwell-Betti reciprocity principle can be used to calculate the required values. The residue $r_{ick}$ can be derived for any DOF c when the residues for DOFs i and c are available for an arbitrary reference j, on condition that the driving point residue $r_{jjk}$ is also available. The driving point residue is also required if the mode shapes are to be correctly scaled.
From the general formula of equation 20-18, it is now possible to calculate the sensitivity value of a mode shape coefficient for DOF i when a structural change is considered for the parameter P which will affect DOFs a and b. The corresponding scaled mode shape coefficients for each mode in the modal model are required. From the definition of the dynamic stiffness matrix Q, the three specific cases of P being a mass, a linear spring (stiffness) or a viscous damper can be considered.

Mass
This is the case where P is a mass at a specific DOF a. Equations 20-17 and 20-18 are then simplified to

$$\frac{\partial \lambda_k}{\partial m_a} = -\lambda_k^2\,\phi_{ak}^2 \qquad \text{Eqn 20-19}$$

$$\frac{\partial \phi_{ik}}{\partial m_a} = -\lambda_k\,\phi_{ak}^2\,\phi_{ik} + \phi_{ak}\sum_{\substack{m=1\\ m\neq k}}^{2N}\frac{\lambda_k^2\,\phi_{am}\,\phi_{im}}{\lambda_k-\lambda_m} \qquad \text{Eqn 20-20}$$

Stiffness
This is the case where P is a linear spring between DOFs a and b. Equations 20-17 and 20-18 are then simplified to

$$\frac{\partial \lambda_k}{\partial k_{ab}} = -(\phi_{ak}-\phi_{bk})^2 \qquad \text{Eqn 20-21}$$

$$\frac{\partial \phi_{ik}}{\partial k_{ab}} = -(\phi_{ak}-\phi_{bk})\sum_{\substack{m=1\\ m\neq k}}^{2N}\frac{(\phi_{am}-\phi_{bm})\,\phi_{im}}{\lambda_k-\lambda_m} \qquad \text{Eqn 20-22}$$

Note that if DOF b is a fixed point ("ground") then $\phi_{bm} = \phi_{bk} = 0$.

Damping
This is the case where P is a viscous damper between DOFs a and b. Equations 20-17 and 20-18 then become

$$\frac{\partial \lambda_k}{\partial c_{ab}} = -\lambda_k(\phi_{ak}-\phi_{bk})^2 \qquad \text{Eqn 20-23}$$

$$\frac{\partial \phi_{ik}}{\partial c_{ab}} = -\frac{(\phi_{ak}-\phi_{bk})^2}{2}\,\phi_{ik} - (\phi_{ak}-\phi_{bk})\sum_{\substack{m=1\\ m\neq k}}^{2N}\frac{\lambda_k\,(\phi_{am}-\phi_{bm})\,\phi_{im}}{\lambda_k-\lambda_m} \qquad \text{Eqn 20-24}$$

The imaginary parts of equations 20-19, 20-21 and 20-23 are used to compute the sensitivities of the damped natural frequencies. The corresponding real parts express the sensitivities of the damping factors or exponential decay rates.
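The mass case can be sketched numerically as follows, using the simplified forms of Eqns 20-19 and 20-20 as reconstructed above; the poles and scaled mode shape coefficients are assumed example values, not measured data:

```python
import numpy as np

# Poles (complex conjugate pairs) and scaled mode shape coefficients (assumed)
lam = np.array([-0.5 + 25.1j, -0.5 - 25.1j, -1.2 + 60.3j, -1.2 - 60.3j])
phi = np.array([[0.3, 0.3,  0.5,  0.5],    # row 0: DOF i, columns = modes k
                [0.8, 0.8, -0.4, -0.4]])   # row 1: DOF a

def pole_sensitivity_to_mass(k, a_idx):
    # Eqn 20-19: d(lambda_k)/d(m_a) = -lambda_k^2 * phi_ak^2
    return -lam[k] ** 2 * phi[a_idx, k] ** 2

def shape_sensitivity_to_mass(k, i_idx, a_idx):
    # Eqn 20-20 (as reconstructed): direct term plus contributions of all
    # other modes m != k
    s = -lam[k] * phi[a_idx, k] ** 2 * phi[i_idx, k]
    for mm in range(len(lam)):
        if mm != k:
            s += (phi[a_idx, k] * lam[k] ** 2 *
                  phi[a_idx, mm] * phi[i_idx, mm] / (lam[k] - lam[mm]))
    return s

dlam = pole_sensitivity_to_mass(0, 1)      # imag part: frequency sensitivity
dphi = shape_sensitivity_to_mass(0, 0, 1)
```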


20.3 Modification prediction
This section describes the use of a dynamics modification theory to predict the effect of structural modifications on a mechanical structure's modal parameters. These modifications can take the form of local mass, stiffness and/or damping changes, or FEM-like rod, truss, beam or plate reinforcements. In addition to local modifications, a substructure assembly theory allows you to predict the modal model for a structure that consists of an assembly of substructures.
Modification prediction allows you to evaluate:
- the effect of structural modifications
- the effect of any number and type of connections between any number of substructures (only if installed)
- the dynamics of small scale models, built up from lumped mass-spring-dashpot elements
Such an analysis avoids time consuming experimental trial and error procedures of modifying prototypes or scale models of mechanical structures, measuring and analyzing the dynamic behavior and evaluating the effects of these modifications.

20.3.1 Mathematical background

The starting point for the structural modification and substructure theory is the modal model described in section 15.1.
The first section of this theoretical background deals with the coupling and modification of substructures using flexible coupling and general viscous damping. It continues with the cases of rigid coupling and flexible coupling with proportional damping.

Modal models for the assembly of substructures with flexible coupling and viscous damping

Modal models of substructures
Consider two structures, 1 and 2. They obey the following equations of motion in the Laplace domain:

$$s^2[M_1]\{x_1\} + s[C_1]\{x_1\} + [K_1]\{x_1\} = \{f_1\} \qquad \text{Eqn 20-25}$$

$$s^2[M_2]\{x_2\} + s[C_2]\{x_2\} + [K_2]\{x_2\} = \{f_2\} \qquad \text{Eqn 20-26}$$

The matrices $M_i$, $C_i$ and $K_i$ are the mass, damping and stiffness matrices of the structure 1 or 2 corresponding to the subscript i. General viscous damping is allowed. The system matrices are symmetric. The displacement vectors are $\{x_1\}$ and $\{x_2\}$, and the force vectors $\{f_1\}$ and $\{f_2\}$ respectively.
The modal parameters for substructure 1 will first be derived in a general way. For substructure 2 the same method can be used but will not be entirely repeated.
The transformation to decouple the equations of motion can be found by adding a set of dummy equations (Duncan's method):

$$s[M_1]\{x_1\} - s[M_1]\{x_1\} = \{0\} \qquad \text{Eqn 20-27}$$

The system equations for substructure 1 become:

$$s[A_1]\{y_1\} + [B_1]\{y_1\} = \{p_1\} \qquad \text{Eqn 20-28}$$

where

$$[A_1] = \begin{bmatrix} 0 & M_1\\ M_1 & C_1 \end{bmatrix} \qquad
[B_1] = \begin{bmatrix} -M_1 & 0\\ 0 & K_1 \end{bmatrix} \qquad
\{y_1\} = \begin{Bmatrix} sx_1\\ x_1 \end{Bmatrix} \qquad
\{p_1\} = \begin{Bmatrix} 0\\ f_1 \end{Bmatrix}$$

The matrices $A_1$ and $B_1$ are diagonalized by the transformation matrix $V_1$, the matrix of eigenvectors of substructure 1. The corresponding eigenvalues are stored in the diagonal matrix $\Lambda_1$. Due to the addition of equation 20-27 there are twice as many eigenvalues as there are degrees of freedom. They appear in complex conjugate pairs.
The matrices $A_1$ and $B_1$ are diagonalized by post- and pre-multiplication with the eigenvector matrix $V_1$ and its transpose:

$$[V_1]^t[A_1][V_1] = \lceil a_1 \rceil \qquad \text{Eqn 20-29}$$

$$[V_1]^t[B_1][V_1] = \lceil b_1 \rceil \qquad \text{Eqn 20-30}$$

The matrix of eigenvectors $V_1$ defines a coordinate transformation from physical coordinates $\{y_1\}$ to modal coordinates $\{q_1\}$:

$$\{y_1\} = [V_1]\{q_1\} \qquad \text{Eqn 20-31}$$

Using expressions 20-29 and 20-30 in the equation of motion 20-28, after pre-multiplication with the transpose of $V_1$ and substitution of expression 20-31, one obtains the equations of motion in modal coordinates for substructure 1:

$$s\lceil a_1\rceil\{q_1\} + \lceil b_1\rceil\{q_1\} = [V_1]^t\{p_1\} \qquad \text{Eqn 20-32}$$

It can be seen that the equations of motion in modal space are uncoupled.
The same procedure can be repeated for substructure 2, yielding a diagonal eigenvalue matrix $\Lambda_2$ and an eigenvector matrix $V_2$. The eigenvector matrix $V_2$ defines a transformation to modal coordinates $\{q_2\}$. The equations of motion for substructure 2 in modal space are:

$$s\lceil a_2\rceil\{q_2\} + \lceil b_2\rceil\{q_2\} = [V_2]^t\{p_2\} \qquad \text{Eqn 20-33}$$

Substructure assembly
The system matrices of both substructures can be merged to give a structure composed of two dynamically independent substructures. For this assembled structure one can easily derive the modal parameters, since they are the same as those of the two substructures but gathered in one eigenvalue matrix and one eigenvector matrix.
More explicitly, this substructuring yields the following system matrices:

$$[A] = \begin{bmatrix} A_1 & 0\\ 0 & A_2 \end{bmatrix} \qquad
[B] = \begin{bmatrix} B_1 & 0\\ 0 & B_2 \end{bmatrix} \qquad
\{y\} = \begin{Bmatrix} y_1\\ y_2 \end{Bmatrix} \qquad
\{p\} = \begin{Bmatrix} p_1\\ p_2 \end{Bmatrix} \qquad \text{Eqn 20-34}$$

which yields as equation:

$$s[A]\{y\} + [B]\{y\} = \{p\} \qquad \text{Eqn 20-35}$$

It can be verified that the matrices of equation 20-35 are diagonalized by the eigenvector matrix V composed as follows:

$$[V] = \begin{bmatrix} V_1 & 0\\ 0 & V_2 \end{bmatrix} \qquad \text{Eqn 20-36}$$

and that the eigenvalue diagonal matrix is:

$$[\Lambda] = \begin{bmatrix} \Lambda_1 & 0\\ 0 & \Lambda_2 \end{bmatrix} \qquad \text{Eqn 20-37}$$

This yields a transformation to modal coordinates:

$$\{y\} = [V]\{q\} \qquad \text{Eqn 20-38}$$

where

$$\{q\} = \begin{Bmatrix} q_1\\ q_2 \end{Bmatrix}$$

An expression of the type of equation 20-33, using the eigenvector and eigenvalue matrices, yields:

$$s\lceil a\rceil\{q\} + \lceil b\rceil\{q\} = [V]^t\{p\} \qquad \text{Eqn 20-39}$$

A close look at the matrix of eigenvectors V shows that the two substructures 1 and 2 are still dynamically independent. Indeed, any force at any point of one substructure will not induce any motion at any point of the other substructure.
The two substructures can now be connected with flexible connections modelled as springs and dampers. With the connection matrices $K_c$ and $C_c$, equation 20-35 becomes:

$$s([A] + [A_c])\{y\} + ([B] + [B_c])\{y\} = \{p\} \qquad \text{Eqn 20-40}$$


where

$$[A_c] = \begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & C_c & 0 & -C_c\\ 0 & 0 & 0 & 0\\ 0 & -C_c & 0 & C_c \end{bmatrix} \qquad
[B_c] = \begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & K_c & 0 & -K_c\\ 0 & 0 & 0 & 0\\ 0 & -K_c & 0 & K_c \end{bmatrix}$$

The system matrices of the connected substructures will no longer be diagonalized by the transformation matrix V as those of the unconnected substructures were. This is due to the introduction of the connection stiffness and/or damping values.
Modification of structures
Before decoupling the equations of motion of the connected substructures, a number of modifications to each substructure can be added. Let the structural modifications be gathered in the modification matrices

$$\Delta M_1,\ \Delta C_1,\ \Delta K_1 \quad \text{and} \quad \Delta M_2,\ \Delta C_2,\ \Delta K_2 \qquad \text{Eqn 20-41}$$

These changes can be brought together in system matrices for the modifications:

$$[\Delta A] = \begin{bmatrix} 0 & \Delta M_1 & 0 & 0\\ \Delta M_1 & \Delta C_1 & 0 & 0\\ 0 & 0 & 0 & \Delta M_2\\ 0 & 0 & \Delta M_2 & \Delta C_2 \end{bmatrix} \qquad
[\Delta B] = \begin{bmatrix} -\Delta M_1 & 0 & 0 & 0\\ 0 & \Delta K_1 & 0 & 0\\ 0 & 0 & -\Delta M_2 & 0\\ 0 & 0 & 0 & \Delta K_2 \end{bmatrix} \qquad \text{Eqn 20-42}$$

It is clear from the matrices of the previous expression that the modifications are not coupling the substructures; they are only modifying each substructure separately.
When the modifications of expression 20-42 are added to the system equation of the connected structure (Eqn 20-40), one obtains the final equation in physical coordinates:

$$s([A] + [A_c] + [\Delta A])\{y\} + ([B] + [B_c] + [\Delta B])\{y\} = \{p\} \qquad \text{Eqn 20-43}$$

Uncoupling the equations of motion

Using the coordinate transformation of the original unconnected substructures (expression 20-36) and pre-multiplying with $V^t$, one derives a new set of equations of motion in modal coordinates:

$$s[A_m]\{q\} + [B_m]\{q\} = [V]^t\{p\} \qquad \text{Eqn 20-44}$$

where

$$[A_m] = \lceil a\rceil + [V]^t[A_c][V] + [V]^t[\Delta A][V]$$
$$[B_m] = \lceil b\rceil + [V]^t[B_c][V] + [V]^t[\Delta B][V]$$

The matrices $A_m$ and $B_m$ for the modified structure can again be diagonalized by a general eigenvalue decomposition. When the new eigenvalues and eigenvectors are represented by $\Lambda'$ and $W$, one has:

$$[W]^t[A_m][W] = \lceil a'\rceil \qquad [W]^t[B_m][W] = \lceil b'\rceil$$

Consider then the transformation:

$$\{q\} = [W]\{q'\} \qquad \text{Eqn 20-45}$$

Substituting this in equation 20-44 and pre-multiplying with $W^t$ yields:

$$s\lceil a'\rceil\{q'\} + \lceil b'\rceil\{q'\} = [W]^t[V]^t\{p\} \qquad \text{Eqn 20-46}$$

The transformation matrices V and W can be combined in one matrix $V'$:

$$[V'] = [V][W] \qquad \text{Eqn 20-47}$$

which then gives the following transformation equation:

$$\{y\} = [V']\{q'\} \qquad \text{Eqn 20-48}$$

Equation 20-48 is the transformation between the physical coordinates and the modal coordinates of the connected and modified substructures. With this coordinate transformation the uncoupled equations of motion are:

$$s\lceil a'\rceil\{q'\} + \lceil b'\rceil\{q'\} = [V']^t\{p\} \qquad \text{Eqn 20-49}$$

The natural frequencies and the damping factors are found as the imaginary and the real parts respectively of the eigenvalues in $\lceil\Lambda'\rceil$. The mode shapes are the columns of the matrix $V'$.
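The procedure above can be sketched numerically for two 1-DOF grounded substructures coupled by a spring; all physical values are assumed, and for brevity the eigenproblem is solved directly in the physical state-space coordinates rather than in the modal coordinates of Eqn 20-44 (both yield the same poles):

```python
import numpy as np

def state_matrices(m, c, k):
    # A = [[0, M], [M, C]], B = [[-M, 0], [0, K]] as in Eqn 20-28
    M, C, K = np.array([[m]]), np.array([[c]]), np.array([[k]])
    Z = np.zeros((1, 1))
    return np.block([[Z, M], [M, C]]), np.block([[-M, Z], [Z, K]])

A1, B1 = state_matrices(1.0, 0.02, 100.0)   # substructure 1 (assumed values)
A2, B2 = state_matrices(2.0, 0.05, 400.0)   # substructure 2 (assumed values)

Z2 = np.zeros((2, 2))
A = np.block([[A1, Z2], [Z2, A2]])          # Eqn 20-34, y = (sx1, x1, sx2, x2)
B = np.block([[B1, Z2], [Z2, B2]])

# Flexible coupling: spring k_c between the displacement DOFs (indices 1, 3)
k_c = 50.0
Bc = np.zeros((4, 4))
Bc[1, 1] = Bc[3, 3] = k_c
Bc[1, 3] = Bc[3, 1] = -k_c

# s A y + (B + Bc) y = 0  =>  s y = -inv(A) (B + Bc) y
poles = np.linalg.eigvals(np.linalg.solve(A, -(B + Bc)))
freqs = np.sort(np.abs(poles.imag))[::2]    # damped natural freqs [rad/s]
```

The poles come out in complex conjugate pairs with small negative real parts (the lightly damped, stable coupled system), and the two coupled natural frequencies lie between and above those of the uncoupled substructures.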


Flexible coupling with proportional damping

The theory discussed above relates to flexible coupling with general viscous damping. In this section we consider the cases of zero and proportional damping.
Recall the general equation of motion for viscous damping:

$$(s^2[M] + s[C] + [K])\{X\} = \{F\} \qquad \text{Eqn 20-50}$$

Zero damping
In the case of no damping, [C] = [0], and the following eigenvalue problem is to be solved, with eigenvalues ωr² and eigenvectors {ψ}r:

(s²[M] + [K]){X} = {0}

Eqn 20-51

This system has purely imaginary poles, occurring in complex conjugate pairs:

λ1 = jω1, ..., λN = jωN

Eqn 20-52

λ1* = −jω1, ..., λN* = −jωN

Eqn 20-53

The modal vectors are real, and are called normal modes (phase: +/− 180°).
The equation of motion can be diagonalized, based on the orthogonality of the modal vectors. Transformation to modal coordinates leads to an equation of motion with diagonal system matrices, being the modal mass and modal stiffness matrices:

[Ψ] = [{ψ1} ... {ψN}]

Eqn 20-54

[Ψ]t[M][Ψ] = m        [Ψ]t[K][Ψ] = k

Eqn 20-55

{X} = [Ψ]{q}

Eqn 20-56

(−ω²·m + k){q} = {0}

Eqn 20-57

where: kr = mr·ωr²

Eqn 20-58
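With diagonal modal mass and stiffness matrices, Eqn 20-58 gives each natural frequency directly as ωr = √(kr/mr). A small sketch with assumed modal values:

```python
import math

# diagonal modal mass and stiffness entries (illustrative values)
m = [2.0, 2.0, 0.5]        # modal masses
k = [8.0, 72.0, 200.0]     # modal stiffnesses

# Eqn 20-58 solved per mode: w_r = sqrt(k_r / m_r)
omega = [math.sqrt(kr / mr) for kr, mr in zip(k, m)]
print(omega)   # natural frequencies in rad/s
```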

Proportional damping
In the case of proportional damping, the damping system matrix is a linear combination of the mass system matrix and the stiffness system matrix:


[C] = α[M] + β[K]

Eqn 20-59


This leads to the following equation of motion:

((s² + αs)[M] + (βs + 1)[K]){X} = {0}

Eqn 20-60

The eigenvalues are related to the complex poles by:

(λr² + α·λr) / (β·λr + 1) = −ωr²

Eqn 20-61

The complex poles are solved from the real eigenvalues (−ωn²) and the damping factors (α, β). When more than two original modes are taken into account (in practical cases this is always the case), the damping factors can be solved in a least squares way from the modal masses, modal stiffnesses and modal damping factors.
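The mapping from an undamped mode to a complex pole pair under proportional damping can be sketched as follows, using the standard relation 2·ζr·ωr = α + β·ωr² (the numerical values of α and β are assumed purely for illustration):

```python
import math

a, b = 0.4, 0.001                 # assumed proportionality constants alpha, beta
poles = {}
for w_r in (10.0, 50.0):          # undamped natural frequencies (rad/s)
    zeta = (a + b * w_r ** 2) / (2.0 * w_r)        # modal damping ratio
    # complex pole: real part = -zeta*w_r, imag part = damped natural frequency
    poles[w_r] = complex(-zeta * w_r, w_r * math.sqrt(1.0 - zeta ** 2))
print(poles)
```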
Modal synthesis
Only mass and stiffness coupling modifications (ΔM, ΔK), and not damping coupling modifications, can be applied. The equation of motion of the coupled system is:
(s²([M] + [ΔM]) + [K] + [ΔK]){X} = {0}

Eqn 20-62

In modal space:

(−ω²(m + [Δm]) + k + [Δk]){q} = {0}

Eqn 20-63

where:

[Δm] = [Ψ]t[ΔM][Ψ] and [Δk] = [Ψ]t[ΔK][Ψ]

Eqn 20-65

The mode shapes of the modified system follow from the modal-space eigenvectors [qr]m by back-transformation:

[Ψ]m = [Ψ]·[qr]m

Eqn 20-64
The eigenvalues and eigenvectors of this equation, back-transformed from modal to physical space, are the modal parameters of the coupled system.
In the case of proportional damping, the complex poles can be solved from the eigenvalues and the proportional damping factors α and β.
The option to use proportional damping is provided when modes are predicted. It reduces the computation time when dealing with large structures with numerous modifications and mode shapes containing a lot of DOFs. At least two original modes must be used in order to determine α and β.


Rigid coupling
The above theory relates to flexible coupling, but it is also possible to place
constraints on DOFs connecting substructures to create rigid coupling between
them, or to constrain a single DOF, thus fixing it rigidly to `ground'. In this
case the restrained DOFs will have zero displacement.
Constraints on the physical degrees of freedom are
[R]{Y} = {0}

Eqn 20-66

Performing a modal transformation:

{Y} = [Ψ]{q}

Eqn 20-67

yields constraints in modal space:

[R][Ψ]{q} = [T]{q} = {0}

Eqn 20-68

The modal coordinates are split up into dependent modal coordinates qd and independent modal coordinates qi. The constraint matrix [T] is also split up:

 

[[Td] [Ti]] { qd } = {0}
            { qi }

Eqn 20-69

{ qd }   [ −[Td]^-1·[Ti] ]
{ qi } = [       I       ] {qi} = [T']{qi}

Eqn 20-70

The choice of the dependent modal coordinates has to be made so as to lead to a non-singular [Td].
This leads to the new eigenvalue problem:

(s·[T']t·a·[T'] + [T']t·b·[T']){qi} = {0}

Eqn 20-71

When the eigenvalues and the eigenvectors with the independent modal coordinates qi are solved, the dependent modal coordinates qd of the eigenvectors can be calculated. In a last step, the mode shapes in physical coordinates are found by the inverse modal transformation.
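The elimination of the dependent modal coordinates (Eqns 20-69 and 20-70) can be illustrated for the simplest case of one constraint, where [Td] reduces to a scalar; the numbers are illustrative:

```python
T = [2.0, 1.0, -1.0]        # one constraint row: 2*q1 + q2 - q3 = 0
Td, Ti = T[0], T[1:]        # q1 chosen as the dependent coordinate
qi = [0.5, 0.3]             # independent modal coordinates, chosen freely
qd = -(Ti[0] * qi[0] + Ti[1] * qi[1]) / Td     # Eqn 20-70 with a 1x1 [Td]
q = [qd] + qi
residual = sum(t * x for t, x in zip(T, q))    # check [T]{q} = {0}
print(qd, residual)
```

Whatever values the independent coordinates take, the reconstructed full vector satisfies the constraint exactly, which is the whole point of the reduction.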
Constraints can be defined in the same way as other structural modifications.


20.3.2 Implementation of Modification prediction


This section discusses some of the more practical aspects of performing modification prediction. This process allows you to compute the natural frequencies, damping values and scaled mode shapes for a modified mechanical structure which is possibly built up from a number of substructures.

20.3.2.1 Retrieval of the modal model


The starting point for modal synthesis applications is the available modal model for the structure to be modified, or for each of the substructures to be assembled.
All modal parameters (natural frequencies, damping values, and scaled mode shapes) have to be available for the calculation procedure. It is important, however, that some conditions are met.

1 Driving point coefficients
In order to be able to scale the included mode shapes correctly, they must include driving point coefficients. This means that for at least one record of the modal participation factor table, the force input (reference) identifier should match with a record of the mode shape table for the same mode, and neither one of them should be equal to zero. Note that this driving point Degree Of Freedom can be different for each of the included modes.
2 Matching DOFs for modes and modifications
Mode shape coefficients need only be available for the Degrees Of Freedom which are affected by the structural changes. This means those for which mass, stiffness or damping modifications are to be considered, or to which structural elements are to be attached. Moreover, it is perfectly possible to use incomplete mode shape vectors missing some coefficients for irrelevant Degrees Of Freedom.
To obtain correct results, the modal model should include all structural modes needed to accurately describe the dynamic response for the frequency band of interest. This aspect is especially important when an experimental modal model was obtained from a set of FRFs relative to only one reference station, which happened not to excite some structural modes. This may arise if the reference station was located on or near a nodal point for these modes. In this case the modal model may be well suited to describe the measured FRFs, but not the dynamic behavior of the structure as such.


A similar problem occurs for out-of-band effects caused by the presence of modes above or below the frequency band of experimental modal parameter estimation. Some of the frequency domain techniques for estimating mode shape coefficients allow correction terms (residual masses and flexibilities) to compensate for these residual effects. Using these corrections it is often possible to curve-fit the measurement data fairly accurately. Unfortunately, these residual terms cannot be scaled correctly for other reference stations as is done for the mode shape coefficients in the previous sections. They cannot therefore be included in the calculations. For this reason, it is advisable to use a sufficiently large modal model, i.e. one with at least one mode below and one mode above the frequency band of interest.
When using a modal model for a limited frequency band it is possible that important structural modifications would generate modes with a natural frequency outside the range of this frequency band. Since the original modal models are not valid at these frequencies, the predicted results will not be very reliable. It is therefore advisable to either include all modes for the frequency band of the resulting modal model, or to keep the structural modifications small enough to avoid these problems. In any case you should not attach too much confidence to modes with natural frequencies outside the frequency band of the original modal model.
3 Correctly scaled mode shapes
To obtain correctly scaled mode shapes, the original mode shapes should be scaled in a consistent unit set which respects the consistency of physical quantities: poles, response engineering units per Volt, etc. A correct calibration of measurement signal transducers and acquisition equipment is required to attach any absolute scaling values to the obtained results.

20.3.3 Definition of modifications to the model

At each of the available Degrees Of Freedom of the modal model you can define one or more local modifications to influence the dynamic behavior of the mechanical structure. The structure can also be modified by the addition of complete substructures for which modal models exist, and by the use of constraints providing rigid coupling.

20.3.3.1 Mass modifications


A point mass can be added to a node on the structure. To add a mass modification you simply have to specify the node and the mass.

Part IV

Modal Analysis and Design

339

Chapter 20 Design

20.3.3.2 Stiffness modifications


A stiffness connection (spring) can be added between any two Degrees Of Freedom of the structure.
To add a stiffness modification you have to specify the DOFs between which the stiffness is to be applied and the stiffness value.
Note that stiffness (with mass) can also be added to a structure through the addition of a truss or a rod.

20.3.3.3 Damping modifications


A damping element (dashpot) can be added between any two Degrees Of Freedom of the structure.
To add a damping modification you have to specify the DOFs between which the damping is to be applied, and the damping value.
Note that damping can also be added to the structure through the addition of a tuned absorber.

20.3.3.4 Truss elements


A truss element can be defined as a doubly hinged rod between two points. Forces located at the ends of the truss element (nodal forces) are directed along the axis of the rod. Since trusses are modelled with hinges at the ends, they cannot withstand transversal forces. Bending and torsion moments cannot be transmitted from one element to the next.
It provides a means of adding stiffness and mass between two points by the addition of a connection for which you know the physical characteristics.
To add a truss element you have to specify the nodes between which the truss is to be fixed and the physical characteristics of the truss.
A truss element is characterized by its:
- cross sectional area A
- material's Young's modulus of elasticity E
- mass density d
These must all be expressed in the active unit system.


A truss element between two nodes is translated into elementary mass and
stiffness modifications. The longitudinal stiffness is related to a 6 by 6 stiffness
matrix for 6 Degrees Of Freedom (3 for each node). This matrix is obtained by
projecting the longitudinal stiffness along each of the 3 coordinate axes.

20.3.3.5 Rod elements


A rod element can be added between any two separate nodes on the structure.
Rods are modelled with hinges at their ends, so the (nodal) forces acting on the ends are directed along the axis of the rod. Bending and torsion moments cannot be transmitted from one element to the next.
In effect it provides a means of adding stiffness and mass between two points
by the addition of a connection for which you know the mass and the stiffness.
To add a rod element you have to specify the nodes between which the rod is to be fixed and the physical characteristics of the rod. A rod element is characterized by its:
- longitudinal stiffness Kij
- mass M.
The longitudinal stiffness is related to a 6 by 6 stiffness matrix for 6 Degrees Of Freedom (3 for each node). This matrix is obtained by projecting the longitudinal stiffness along each of the 3 co-ordinate axes.
The mass M is divided into two equal parts at both ends of the rod.
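The projection described above can be sketched as follows; the routine builds the 6 by 6 element stiffness matrix from the direction cosines of the rod axis. The function name and the example values are illustrative, not part of any product interface:

```python
import math

def rod_stiffness(K, n1, n2):
    # direction cosines of the rod axis from the two node positions
    d = [b - a for a, b in zip(n1, n2)]
    L = math.sqrt(sum(c * c for c in d))
    e = [c / L for c in d]
    # 3x3 block k_ij = K * e_i * e_j; the full 6x6 matrix is [[k, -k], [-k, k]]
    k = [[K * e[i] * e[j] for j in range(3)] for i in range(3)]
    return [[(1 if (r < 3) == (c < 3) else -1) * k[r % 3][c % 3]
             for c in range(6)] for r in range(6)]

# rod of longitudinal stiffness 1000 lying along the x axis
Ke = rod_stiffness(1000.0, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
print(Ke[0][0], Ke[0][3])
```

For a rod along the x axis, all the stiffness projects onto the x components of both nodes, and the off-diagonal block carries the opposite sign, as expected for an axial element.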

20.3.3.6 Beam elements


A beam element is an element that can transfer translational forces and moments of bending and torsion.
To add a beam element you have to specify the following parameters, which are illustrated below:

the two end nodes (n1, n2)

the area of its cross section (A)

the material's Young's modulus (E)

the material's mass density (m)

the material's shear modulus (G)

the moment of inertia for bending in two planes (Ip, Ib)

the moment of inertia for torsion (It)

a reference node to define the orientation of the moments of inertia for bending (r)
[Figure: beam element with cross sectional area A, moments of inertia Ip, Ib and It, material properties E, G, m, and reference node (r)]

The reference node together with the two end nodes defines the so-called reference plane. The moments of inertia for bending are defined in two directions:
Ib for bending in the reference plane
Ip for bending in a plane perpendicular to the reference plane
The 2 end nodes have six Degrees Of Freedom each: 3 translations and 3 rotations. A beam element can therefore transmit six forces to another beam element: 3 translational forces and 3 moments. For end nodes that are not connected to another beam only the translational forces can be transmitted, as is for example the case for a stand-alone beam. In the same way, beams that are positioned on a straight line (colinear beams) will not be subjected to torsion.

20.3.3.7 Plate membrane elements


A plate membrane element is a two dimensional quadrilateral element capable
of transferring both bending forces (perpendicular to the plane of the plate) and
membrane forces (in the same plane as the plate).
To add a plate element you have to specify the following parameters which are
illustrated below :


The name of the plate

The four corner nodes c1, c2, c3, and c4

The plate thickness (t) expressed in the appropriate user unit

The number of divisions along the first side, between c1 and c2 (a)

The number of divisions along the second side, between c2 and c3 (b)

The connection nodes n1, n2 and n3

Material properties of the plate, i.e. Young's Modulus (E), Poisson's ratio (ν), mass density (m).
These must all be expressed in the appropriate unit.
[Figure: quadrilateral plate with corner nodes c1, c2, c3, c4, connection node n3 and thickness t]

When a plate is defined with a and b divisions along its two sides, a mesh of (a
x b) rectangles is created as shown in the diagram. As the corner nodes already
exist this means that ((a+1).(b+1) - 4) new nodes are generated.
If there are connection nodes defined then the mesh point situated closest to a
connection node is replaced by that node.
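The node bookkeeping follows directly from the formula in the text; the function name is ours:

```python
# New nodes generated for a plate meshed with a x b divisions:
# the grid has (a+1)(b+1) points, of which the 4 corner nodes already exist.
def new_nodes(a, b):
    return (a + 1) * (b + 1) - 4

print(new_nodes(4, 3))   # 5*4 - 4 = 16 new nodes
```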
The plate so defined should comply with the following conditions (1):

the mesh elements should not deviate too much from a rectangular form, i.e. each corner angle should be ≈ 90°

(1) The calculation of the mass and stiffness matrices of a plate membrane described here is based on the plate theory of Mindlin.


the mesh elements should be approximately square, i.e. the ratio of length/width should be ≈ 1

the plate should not be too thick, i.e. the ratio of length/thickness should be

Each of the corner nodes of the mesh elements has 6 Degrees Of Freedom - 3 translations and 3 rotations - and so can transmit six forces to another mesh element. This is also the case between elements of different plate membranes, as long as they are connected either at a corner or at a common connection node.

20.3.3.8 Tuned absorbers


A tuned absorber is a single Degree Of Freedom system consisting of a rigid mass which is connected by a spring and a dashpot to a more complex structure.

[Figure: SDOF absorber: mass m connected through spring k and dashpot c, with motions xa·e^jωt and xr·e^jωt]

The parameters m, k and c of this SDOF system are designed such that the motion of the coupling point in the direction of this absorber is decreased (damped) as much as possible for a certain frequency, typically at resonance.

If the motion of the coupling point in the direction of the absorber is designated by xa, and the frequency to be damped by f (= ω/2π), then the following formulae apply for the equations of motion of m (xr is the relative displacement between the absorber's mass and the attachment point).


(k·xr + jωc·xr)·e^jωt = m·(xa + xr)·ω²·e^jωt

Eqn 20-72

When this equation is solved for xr:

xr = m·ω²·xa / (−m·ω² + jωc + k)

Eqn 20-73

The force acting on the attachment point is:

F·e^jωt = (k + jωc)·xr·e^jωt

Eqn 20-74

From equations 20-73 and 20-74:

F = (k + jωc)·m·ω²·xa / (−m·ω² + jωc + k)

Eqn 20-75

This force can be imagined as being generated by the inertia of an equivalent mass meq, which is rigidly attached to the attachment point:

F = meq·ω²·xa

Eqn 20-76

meq = (k + jωc)·m / (−m·ω² + jωc + k)

Eqn 20-77

It can be shown that if no damping is used (c = 0), the mass and stiffness of the absorber can be designed such that the vibration of the attachment point is eliminated entirely (xa = 0). This happens if the natural frequency of the absorber equals the forcing frequency ω.
The most practical application of a tuned absorber is the reduction of vibration levels at a resonance frequency ωn. In this case, the absorber's own natural frequency for optimal tuning is:

ωna = √(k/m) = (1 / (1 + μ))·ωn

Eqn 20-78

where μ is the ratio between the absorber's mass and the "equivalent" mass of the system at resonance:

μ = m / meq

Eqn 20-79

An optimal damping ratio for the absorber is then obtained from:

ζopt = c / (2·√(k·m)) = √( 3μ / (8·(1 + μ)³) )

Eqn 20-80

From equations 20-78, 20-79 and 20-80 the physical parameters m, c and k of the attached absorber can be computed if the following values are known:
meq  the equivalent mass (see further)
ωn   the target frequency of tuning, i.e. the natural frequency of the mode to be tuned
m    the absorber's mass, to be specified by the user
The equivalent mass of the system for a certain mode can be obtained as follows:

meq = 1 / (Vi²·2jωd)

Eqn 20-81

where
Vi is the scaled mode shape coefficient of the mode to be tuned at the attachment point
ωd is the damped natural frequency of the mode to be tuned.
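Equations 20-78 to 20-80 can be turned into a short sizing calculation. The equivalent mass, target frequency and absorber mass below are assumed example values:

```python
import math

m_eq = 10.0                 # equivalent mass of the mode (assumed, kg)
w_n  = 100.0                # resonance frequency to be damped (rad/s)
m    = 1.0                  # absorber mass chosen by the user (kg)

mu   = m / m_eq                                        # mass ratio (Eqn 20-79)
w_na = w_n / (1.0 + mu)                                # optimal tuning (Eqn 20-78)
k    = m * w_na ** 2                                   # absorber spring stiffness
zeta = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))   # optimal damping (Eqn 20-80)
c    = zeta * 2.0 * math.sqrt(k * m)                   # absorber dashpot value
print(mu, w_na, k, zeta, c)
```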

20.3.3.9 Constraints
Physical constraints can be defined between separate DOFs or between one DOF and itself.
Defining a constraint between two separate DOFs applies a rigid coupling between them. Defining a constraint between a DOF and itself effectively fixes it to `ground'.


20.3.4 Modification prediction calculation


Once the required modifications have been defined, the modification prediction calculation process can be started.
For the simplified case of two substructures which are possibly modified (symbol Δ) and connected to each other (subscript c), the following procedure is followed to predict the modal model of the resulting structure:
1. Retrieve the modal models for each substructure. Build the diagonal matrices Λ1 and Λ2 of poles and the (possibly complex) modal matrices V1 and V2 of scaled mode shapes.
2. Join both modal models into the global matrices Λ (equation 20-37) and V (20-36).
3. Define the connecting elements (springs and dashpots) between both substructures. This yields matrices Ac and Bc (equation 20-40).
4. Define the necessary modifications and join them into matrices ΔA and ΔB (equation 20-42).
5. Use the modal matrix V to transform the connection and modification matrices to the modal space.
6. Add the diagonalized matrices in modal space (equation 20-44) to yield the system matrix of the resulting structure.
7. Calculate the modal model via an eigenvalue and eigenvector decomposition of the resulting system matrix. This yields the complex poles (natural frequencies and damping factors) and the mode shapes.
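The procedure above can be sketched for the simplest possible case: two undamped single-mode substructures with unit modal masses, coupled by one spring. Steps 5 to 7 then reduce to a symmetric 2x2 eigenvalue problem (all numbers are illustrative):

```python
import math

# original undamped substructure modes (unit modal masses)
w1, w2 = 10.0, 20.0        # natural frequencies (rad/s)
v1, v2 = 1.0, 1.0          # mode shape coefficients at the coupling DOFs
k_c = 50.0                 # coupling spring between the two DOFs

# steps 5/6: transform the spring to modal space and add it
# to the diagonal modal stiffness diag(w1^2, w2^2)
k11 = w1 ** 2 + k_c * v1 * v1
k22 = w2 ** 2 + k_c * v2 * v2
k12 = -k_c * v1 * v2

# step 7: eigenvalues of the symmetric 2x2 matrix give the new w^2
tr = k11 + k22
det = k11 * k22 - k12 * k12
root = math.sqrt(tr * tr / 4.0 - det)
new_freqs = sorted([math.sqrt(tr / 2.0 - root), math.sqrt(tr / 2.0 + root)])
print(new_freqs)
```

Both predicted frequencies lie above the originals, as expected when only stiffness is added.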

Numerical problems
The eigenvalue problem mentioned above, which is to be solved for the modified system, can be subject to numerical problems. These can arise from two sources:

The presence of unbalanced structural modifications, such as those introducing large amounts of stiffness to simulate a fixation, or local heavy dampers.

A wide range of original natural frequencies. This can occur especially when rigid body modes of free-free systems (virtually at 0 Hz) are imported from an FE code and mixed with flexible modes at high frequencies. More specifically, in this case it is the ratio of the highest to the lowest natural frequency that is the relevant factor.

In practice these numerical problems are manifested in the modified modal model by unrealistic modal parameters or missing modes. While it is impossible to eliminate such problems, they can be reported during the modification prediction calculation.
The criterion used in this respect is the condition number of the system matrix. The system matrix is the one whose eigenvalues and eigenvectors yield the modal parameters. If this condition number exceeds a certain (critical) value, this is reported to the user. The critical value used has been established by empirical tests and is by default set to 1e+8.
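The conditioning check can be illustrated on a symmetric 2x2 matrix, where the condition number is the ratio of the extreme eigenvalue magnitudes; how the product computes it for general complex system matrices is not specified here, so this is only a sketch:

```python
import math

CRITICAL = 1e8    # default critical condition number from the text

def cond_sym_2x2(a11, a12, a22):
    # eigenvalues of a symmetric 2x2 matrix via trace and determinant
    tr, det = a11 + a22, a11 * a22 - a12 * a12
    root = math.sqrt(tr * tr / 4.0 - det)
    lam_max, lam_min = tr / 2.0 + root, tr / 2.0 - root
    return abs(lam_max) / abs(lam_min)

print(cond_sym_2x2(1.0, 0.0, 1.0))    # balanced matrix: well conditioned
print(cond_sym_2x2(1e9, 0.0, 1.0))    # huge added stiffness: ill conditioned
```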

20.3.5 Units of scaling
In order to obtain correct modification prediction results, it is absolutely necessary to maintain a correct scaling of the original modal model using a consistent unit set.
The scaled mode shapes of the original structure have a physical dimension related to the measurement data from which they were extracted by modal parameter estimation techniques. Since this modal model is a valid description for the relation between input forces and response displacements, the applied modifications should be defined in a unit set which is consistent for these quantities. The same rule applies to the interpretation of the resulting modal model.
Erroneous results are bound to occur when the original mode shape vectors are not scaled correctly. This might arise because of the incorrect definition of the reference point for the data (wrong driving point residue), not using the correct transducer sensitivity or calibration factors for the experimental FRFs (force as well as response transducers), or the use of an inconsistent unit set during the modal test or analysis phase. These errors may cause an entirely wrong transformation of the applied physical modifications to the modal space, and a small mass modification for example may grow out of proportion because of this bad scaling.


Example of the application of a beam element


The following example will illustrate the procedure. Suppose the dynamic behavior of an isotropic plate is to be influenced by a rib fixed to the plate as shown below.
[Figure: a main plate stiffened by a rib of I cross section, discretized into 4 beam elements (elem 1 to elem 4) connected at nodes 1 to 5]
The procedure becomes:

1 Discretization of the rib into 4 beam elements, interconnected at nodes corresponding to measurement points of the experimental analysis.

2 Definition or calculation of the following physical parameters:

A = cross section of the beam
It = moment of inertia for torsion
Ib = moment of inertia for bending in the reference plane, defined by the nodes n1, n2 and r
E = Young's modulus of elasticity
G = shear modulus
L = length of the beam
1, 2, 3 = orientation of the local beam reference system in the global system. This information is derived from the position of the three nodes n1, n2 and r as shown in Figure 20-1.
m = material's mass density

From the geometrical properties of the beam the user can calculate the cross sectional area and the different moments of inertia. Tables listing the characteristics of various types can be found.
[Figure 20-1: Stiffening rib orientation and local co-ordinate system (axes 1, 2 and 3), defined by the nodes n1 and n2 in the global x, y, z system]
3 Construction of the element matrices for each beam element.
An element stiffness (full) and mass (diagonal) matrix can be built from the relations between the 6 forces and 6 Degrees Of Freedom at each end node (u1, v1, w1 and the three rotations for node 1; u2, v2, w2 and the three rotations for node 2).
[Figure 20-2: Element matrices for nodes 1 and 2, relating the translations and rotations of both end nodes through translation blocks T1, T2 and rotation blocks R1, R2]


4 Assembly of the element matrices as shown below

[Figure 20-3: Assembly of the element matrices: the Ti and Ri blocks of the individual beam elements overlap along the diagonal of the global matrix]
5 Perform a static condensation (see below) of the rotational DOFs.
6 Add the condensed matrices to the system matrices and continue the calculation procedure as for other (lumped) modifications.
Remarks:

The element matrices of a beam model must be assembled before condensation and addition to the system matrices, to allow moments to be transmitted between different elements.

It is important to keep in mind that the basic assumption in beam-bending analysis is that a plane section originally normal to the neutral axis remains plane during deformation. This assumption is true provided that the ratio of beam length to beam height is greater than 2. Furthermore, shear effects do not contribute to the elements of the stiffness matrix.

Care should be taken with the input of moments of inertia. In the example stated above the distance between the axis of the plate and the axis of the beam must be taken into account.

Static condensation
Static condensation in a dynamic analysis is based upon the assumption that the mass at some Degrees Of Freedom can be neglected without a significant loss of accuracy of the dynamic model in the frequency range of interest. More explicitly, for the beam elements in the application of interest, consider the rotational Degrees Of Freedom to be without mass. The assembled mass and stiffness matrices of the entire beam can then be partitioned as follows:


[K] = [ KTT  KTR ]        [M] = [ MTT  [0] ]
      [ KRT  KRR ]              [ [0]  [0] ]

Eqn 20-82

where
T refers to the translational DOFs
R refers to the rotational DOFs.

The modal parameters describing the dynamic behavior of this structure are then obtained by solving the following eigenvalue problem:

[ KTT  KTR ] [ VT ]        [ MTT  [0] ] [ VT ]
[ KRT  KRR ] [ VR ]  =  ω² [ [0]  [0] ] [ VR ]

Eqn 20-83

From the bottom half of equation 20-83 a relation between the translational and the rotational DOFs is derived:

KRT·VT + KRR·VR = {0}

Eqn 20-84

which can be solved to express the rotational DOFs in terms of the translational ones:

VR = −KRR^-1·KRT·VT

Eqn 20-85

Introduction of equation 20-85 into equation 20-83 yields:

KT·VT = ω²·MTT·VT

Eqn 20-86

with

KT = KTT − KTR·KRR^-1·KRT

Eqn 20-87

The matrices [KT] and [MTT] of equation 20-86 are used to dynamically model the beam structure. The model will only be valid in the frequency range where the mass effects of the rotational DOFs are negligible. Mass effects only contribute significantly to the dynamic behavior around and above those resonances where they are capable of storing a considerable amount of kinetic energy.
Note that [KT] as expressed in equation 20-87 can only be computed if [KRR] is non-singular. The stiffness matrix is singular if rigid body motion is possible. The rigid body mode of a beam along its longitudinal axis is not naturally eliminated by constraining its three translational DOFs, so causing in general a first order singularity. With such configurations it will not be possible to store torsional deformation energy in the beam; therefore the corresponding off-diagonal elements of the assembled stiffness matrix can be neglected and the diagonal elements made relatively small. In this way the matrix becomes invertible, and the predicted dynamic behavior will reflect the inability to store torsional deformation energy in the beam. This operation will, however, not be necessary when the beam is two or three dimensional, as in such cases rigid body motion through rotation around one of the axes is no longer possible.
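The condensation of Eqn 20-87 can be verified with scalar partitions, i.e. one translational and one massless rotational DOF (values illustrative):

```python
# Static (Guyan) condensation with scalar partitions:
# K_T = K_TT - K_TR * K_RR^-1 * K_RT  (Eqn 20-87)
K_TT, K_TR, K_RT, K_RR = 12.0, 6.0, 6.0, 4.0
K_T = K_TT - K_TR * (1.0 / K_RR) * K_RT
print(K_T)   # condensed translational stiffness
```

The condensed stiffness (3.0 here) is lower than K_TT alone: releasing the massless rotation makes the remaining translational DOF more flexible, as expected.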


20.4 Forced response
Experimental modal analysis results in a dynamic model described by the modal parameters: damped natural frequency, exponential decay rate and scaled mode shapes (residues). These modal parameters provide valuable insight into the dynamic behavior of a structure. Problem areas can be identified by animating the mode shapes, and the relative importance of the mode shapes can be assessed by comparing their amplitudes.
In most cases however the designer is less interested in the dynamic characteristics themselves than in knowing how the structure is going to behave under normal operating conditions. The important points to determine are:

what will happen under dynamic loading conditions?

which of the natural frequencies will dominate the response?

which points will exhibit large deformations?

how will the structure deform at particular frequencies?

The natural frequencies of the modes of vibration which seem to be the most important parameters in the modal model may well not dominate the response if conditions are such that they are not excited.
The Forced response functions enable you to answer these questions by determining the response of the modal model to known force spectra.

20.4.1 Mathematical background for forced response

The structure's modal model forms the input for the computation of its dynamic response, and is the starting point for the forced response analysis.
The equations of motion of a linear, time invariant mechanical structure are expressed in the frequency domain as follows:

{X(ω)} = [H(ω)]·{F(ω)}

Eqn 20-88

where {X(ω)} is the response spectra vector (N0 by 1),
[H(ω)] is the Frequency Response Function matrix (N0 by N0), and
{F(ω)} is the applied force spectra vector (N0 by 1).


These quantities are complex-valued functions of the frequency variable ω, and are valid for every value of ω for which these functions are known.
When the response at one specific degree of freedom (DOF), say i, is needed, the above equation becomes:

Xi(ω) = Σ(j=1..N0) Hij(ω)·Fj(ω)

Eqn 20-89

This means that the response at DOF i can be written as a linear combination of the applied forces, each weighted by the corresponding FRF between input DOF j and output DOF i. These frequency dependent weighting factors describe the dynamic flexibility between two degrees of freedom i and j of a mechanical structure.
When the modal model for that structure is available, e.g. from modal test data or finite element calculations, the FRF can be modelled as given by:

Hij(ω) = Σ(k=1..2N) rijk / (jω − λk)

Eqn 20-90

Using equation 20-89, it is now possible to predict the dynamic response at DOF i when the structure is subjected to a number of simultaneous loads at DOFs j for which scaled mode shape coefficients (residues) are also available in the modal model:

Xi(ω) = Σ(j=1..N0) Σ(k=1..2N) [vik·vjk / (jω − λk)]·Fj(ω)

Eqn 20-91

Even if not all the residues are available, the Maxwell-Betti reciprocity principle can be used to calculate the required values. Equation 20-4 allows the residue rick to be derived for any reference DOF c when the residues for DOFs i and c are available for an arbitrary reference j, on condition that the driving point residue rjjk is also available. The driving point residue is also required if the mode shapes are to be correctly scaled.
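Equations 20-89 and 20-90 combine into a short synthesis loop: build the FRF from poles and residues (including the complex conjugate terms) and weight the force spectrum with it. Single mode, single input, all values assumed for illustration:

```python
def frf(w, residues, poles):
    # Eqn 20-90 with the conjugate terms written out explicitly
    jw = 1j * w
    H = 0j
    for r, lam in zip(residues, poles):
        H += r / (jw - lam) + r.conjugate() / (jw - lam.conjugate())
    return H

pole = complex(-0.5, 10.0)     # decay rate -0.5, damped frequency 10 rad/s
residue = complex(0.0, 0.05)   # scaled residue v_ik * v_jk (assumed value)
F = 2.0                        # force spectrum value at this frequency

X = frf(10.0, [residue], [pole]) * F   # response spectrum (Eqn 20-89)
print(abs(X))
```

Evaluated at the damped natural frequency, the response is dominated by the nearly-resonant first term; far from resonance the same expression drops off sharply.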
Equation 20-91 represents the response at all DOFs to all forces, with a contribution from all modes. The contribution of each mode is given by:

for mode k = 1 to N:

fk(ω) = [1 / (jω − λk)]·Σ(j=1..N0) vjk·Fj(ω) = pk(ω)·Σ(j=1..N0) vjk·Fj(ω)

for mode k = N+1 to 2N (complex conjugate modes):

fk(ω) = [1 / (jω − λk*)]·Σ(j=1..N0) vjk*·Fj(ω) = pk(ω)·Σ(j=1..N0) vjk*·Fj(ω)

Eqn 20-92

The response for each DOF, taking into account the contribution of each mode, is then given by:

Xi(ω) = Σ(k=1..N) vik·fk(ω) + Σ(k=N+1..2N) vik*·fk(ω)

Eqn 20-93

Chapter 21

Geometry concepts

This chapter describes the basic concepts involved in the definition of the geometry of a structure:

the geometry of a test structure

the definition of nodes


21.1

The geometry of a test structure


A geometrical representation of a test structure is necessary for the display and animation of mode shapes, and for the implementation of design modifications. This chapter discusses the basics regarding the geometry definition of a model for a test structure.

The most important part of the model is the nodes. These define the points where measurements will be taken on the structure, and the points where the mode shape deformations are calculated. It is common practice to define connections or edges between specific nodes to form a wire frame model of the structure. In addition, surfaces can be defined that aid in the visual representation of the structure.
Figure 21-1: A wire frame model of a structure, showing a node, a connection and a surface.

Note that the definition of nodes and meshes for acoustic measurements is described in the "Acoustic" documentation.



21.2

Nodes
A node is defined by its location and its orientation.

Location
The location of a node in 3D space is defined by a set of 3 real numbers known as its coordinates. Coordinates are always defined relative to a reference coordinate system.

The reference coordinates are normally shown along with the model in the display window. The origin of the global coordinate system is the origin of the 3D space that contains the test structure; the global symmetry of the structure should be considered when defining it.
The reference coordinate system can be either Cartesian, cylindrical or spherical.
Figure 21-2: Coordinate systems (right-handed Cartesian, cylindrical, spherical).

So, as an example, the same node would appear as follows in each of the coordinate systems: Cartesian (x, y, z), cylindrical (r, θ, z) and spherical (r, θ, φ).
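As a sketch, the conversions from cylindrical and spherical coordinates back to Cartesian can be written as follows. The angle conventions assumed here (θ measured in the x-y plane from the x axis, φ measured up from the x-y plane) are illustrative assumptions, since conventions differ between tools.

```python
import math

# Sketch: express the same node in Cartesian coordinates, starting from
# cylindrical (r, theta, z) or spherical (r, theta, phi) coordinates.
def cartesian_from_cylindrical(r, theta_deg, z):
    t = math.radians(theta_deg)
    return (r * math.cos(t), r * math.sin(t), z)

def cartesian_from_spherical(r, theta_deg, phi_deg):
    t, p = math.radians(theta_deg), math.radians(phi_deg)
    return (r * math.cos(p) * math.cos(t),
            r * math.cos(p) * math.sin(t),
            r * math.sin(p))

# Hypothetical node locations (not the book's worked example)
x, y, z = cartesian_from_cylindrical(2.0, 45.0, 1.0)
xs, ys, zs = cartesian_from_spherical(3.0, 45.0, 55.0)

assert math.isclose(math.hypot(x, y), 2.0)                 # cylindrical radius
assert math.isclose(math.sqrt(xs**2 + ys**2 + zs**2), 3.0) # spherical radius
```

Whichever reference system the coordinates are entered in, the node occupies a single location in 3D space; only its numerical description changes.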

Orientation
Nodal orientation is defined using a Cartesian coordinate system. In many applications the orientation of the node defines the measurement directions.



Figure 21-3: Nodal coordinate system.

The origin of the nodal coordinate system coincides with the node's location. If the principal axes of the nodal coordinate system are not coincident with the measurement directions, in either a positive or a negative sense, then the difference must be defined with Euler angles.

Euler angles
Three Euler angles are used to define the orientation of one coordinate system relative to a reference coordinate system with the same origin.
"xy
The first angle, "xy (Euler XY) is a rotation
about the Zr axis of the reference system. (Posi
tive from Xr axis to Yr axis). This generates a
first intermediate system indicated by a single
quote ' on the axis labels.

Zr z'

y'

Xr
"xy

"xz
The second angle "xz (Euler XZ) is a rotation
about the y' axis of the first intermediate system.
(Positive from the x' axis to the z' axis). This gen
erates a second intermediate system, indicated
by two quotes " on the axis labels.

360

z'

z"

y'y''

x"
+
"xz

Yr

x'

x'

The Lms Theory and Background Book

Geometry concepts

"yz

Finally the third angle, "yz (Euler YZ) is a rota


tion about the x" axis of the second intermediate
system, positive from the y" axis to the z" axis.
This last orientation generates the desired new
coordinate system orientation.

x''X

z''

Y
+ "yz
y''

Degrees Of Freedom (DOFs)


The Degrees Of Freedom of a node represent the directions in which the node is free to move. Each node therefore has a maximum of 7 Degrees Of Freedom: 3 translational, 3 rotational and a scalar DOF (Sc).
Figure 21-4: Degrees of freedom (translations X, Y, Z; rotations RX, RY, RZ; scalar).
