Remote and Coupled Vibration Measurement Methods

"Imaging small vibrations have been of interest since Chladni first placed sand on a vibrating plate to make modal patterns visible. Discuss methods that have since been developed to remote sense and optically image sub-micrometer vibrations (e.g. laser Doppler vibrometry, holographic interferometry, electronic speckle pattern interferometry, etc.) including your own method currently being developed. Discuss the pros and cons of each method, and how your own method capitalizes on the strengths---or attempts to overcome any limitations---of its predecessors in the context of your particular application requirement(s)."

-Tamara Smyth

Part 1:
Practical Considerations for Vibratory Measurement Methods

The process of measuring structure-borne vibration differs significantly from that of measuring air-borne vibration. The task is complicated by several factors, the most obvious of which is the difficulty of measuring variables at any location besides the boundary. This predicament is exactly inverted in air-borne sound, where measurements can easily occur within the field of interest, and boundary measurements are relegated to specialized sensors.[p.3][7] Furthermore, the variables which can be readily measured in traditional structure-borne acoustics are kinematic variables: displacement and its derivatives. Pressure---the old standby of air-borne acousticians---must typically be inferred from these variables. Referring back to chapter 1, section 1, we can see that the structure-borne equivalent of acoustical pressure, stress, must be calculated as a 2nd-rank tensor, because of its directionality.

Furthermore, the excursions of the kinematic variables in a solid as a result of vibratory stress can be quite small. The precise relationship between these variables depends primarily on the material's elasticity constants, c_{ijkl}, which are sensitive to the directionality and polarization of the vibratory motion, relative to the particles in the solid. See chapter 1, section 1 for an introductory look at this formalization.

For example, given equation (1.7), we can calculate the ratio of the inter-particulate equilibrium length to the length under unit normal stress for a piece of polycrystalline aluminum with a density of 2,695 \frac{kg}{m^3}. This value is the amount of compression observed as the result of a stress of 1 \frac{N}{m^2}. Substituting in the value of c_{11} = 11.1, with units of 10^{10} \frac{N}{m^2},[p.12][3] we find the material will strain on the order of \frac{1}{11.1\times 10^{10}}, or about 9\times 10^{-12}, from its original length. Given the boundary conditions for a free surface (see chapter 1, section 2), we conclude that the stress of a compression wave traveling through this material will be zero at the surface, where the kinematic waves will be at maximum magnitude.
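These magnitudes are easy to misjudge, so a quick numeric check is worthwhile. A minimal sketch of the calculation above, using the c_{11} value quoted from the text (the script itself is only illustrative):

```python
# Strain of polycrystalline aluminum under a unit normal stress,
# using the constitutive relation T = c * S  =>  S = T / c.
c11 = 11.1e10   # elastic constant c_11 for aluminum, N/m^2 (from the text)
T = 1.0         # applied normal stress, N/m^2

strain = T / c11   # dimensionless fractional compression
# on the order of 9e-12: far below anything visible without magnification
```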

Another rule of thumb to derive the orders of magnitude entailed by structure-borne sound is the maximal strain a given material can withstand before fracture. In the orders of magnitude near this limit, the constitutive relation equation (1.7) no longer holds, and becomes nonlinear. For most materials, the strains in the nonlinear region between fracture and Hookean linearity are on the order of 10^{-3} to 10^{-5}. [p.16][3]

As should be clear from these back-of-the-envelope calculations, the direct observation of vibration in a solid is generally restricted to those in possession of a microscope. However, many individuals have devised methods for indirectly observing such phenomena. A famous example occurred in 1787, in Wittenberg, Germany, where a 31-year-old Ernst Chladni first published the results of an experiment which would captivate the imaginations of scientists and laypeople alike.[30][32] His experiment, which will be discussed in greater depth in section (3.6) of this document, consisted of sprinkling a fine powder over the surface of an object, and setting it into vibration using a violin bow. The particles collected along the nodal lines, where the incident kinematic waves were at a minimum amplitude. The dynamic shapes and the synesthetic correspondence between sight and sound helped to canonize acoustics as a legitimate scientific discipline.

The techniques discussed in the pages that follow will be assessed according to a number of attributes: cost, field-readiness, quality, generality, and impedance. In this context, cost refers to economic cost---measured in dollars, not flops. Field-readiness is actually a cluster of concerns including mechanical robustness, portability, and ease of calibration. These considerations may take on a variety of forms, depending on the technique: for example, tube lasers are not very field-ready because of their high power drain. The term quality refers to the information content of the measurements, meaning linearity (or linearizability), signal-to-noise ratio (SNR), gain-bandwidth product (GBP), and sensitivity. Generality means we ask ``what can it measure?'' For example, a technique which measures perturbations in the magnetic field would be difficult to apply to the general case. Finally, impedance refers not to electrical impedance---as any modern sensor can be trivially impedance-matched with proper gain-staging---but rather mechanical impedance, as explored in chapter 1, section 4. Sometimes, despite our best efforts, these considerations will become interdependent. This is the nature of considerations.

Part 2:
Vibration Sensors that Require Inertial Coupling

A large number of structural vibration measurement devices require the mechanical coupling of the sensor to the object of interest. Even fairly recent articles in structural vibration analysis cite methods that involve these devices.[14][17] Using sensors which mechanically couple to the vibrating body has many advantages: they often provide high-quality measurements with a low-cost, extremely field-ready apparatus. Furthermore, many of these devices are reversible, meaning they can also serve as actuators.[p.69][7] In some applications, these sensors can be extremely precise, and can home in on specific mode families within the material.[p.36][3]

The drawbacks to using mechanically coupled sensors most frequently involve the acoustic impedance load concomitant with their operation. All sensors have inertia, and this will always result in a normal stress being added to the system at the location of the sensor. Recall that the characteristic impedance of a material is given by the stress over the particle velocity in the medium, or T/\dot{u}. If the impedance of the sensor is high enough, the dynamic behavior of the object changes. This can be somewhat ameliorated by ensuring the sensors are extremely lightweight, or perhaps by the clever use of suspension springs.[p.4][7] Recall from chapter 1, section 4 that at an interface between two impedances Z_1 and Z_2, if Z_2 is much higher than Z_1, the reflected wave will not be completely inverted at the interface, and a partial standing-wave will result. The least adversely affected frequencies are those for which the sensor has a resonance. Since the sensor's inertial and restorative forces are \pi apart in phase at such frequencies, the impedance will tend toward zero. However, the severely limited bandwidth of such a measuring device would not make for a particularly good sensor.[p.9][7]
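The severity of this loading can be gauged from the impedance ratio alone. A minimal sketch, assuming a ten-to-one mismatch (an invented example, not a value from the text):

```python
# Reflection coefficient at an interface between impedances Z1 and Z2,
# and the standing-wave ratio the mismatch produces.
def reflection_coefficient(Z1, Z2):
    return (Z2 - Z1) / (Z2 + Z1)

def standing_wave_ratio(r):
    return (1 + abs(r)) / (1 - abs(r))

Z1 = 1.0    # impedance of the medium (normalized)
Z2 = 10.0   # sensor impedance, ten times higher (invented example)

r = reflection_coefficient(Z1, Z2)   # ~0.82: most of the wave reflects
swr = standing_wave_ratio(r)         # ~10: a pronounced partial standing wave
```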

The carbon microphone is an element which changes its resistance in response to strain. The principle of operation is simple: carbon particles are suspended in an insulating substrate and are pressed together when the material exhibits strain. A voltage across the element encodes the strain into changes in current. This type of instrument is very lightweight and cheap, and can be made into force-sensing resistors (FSRs) and strain gauges. However, carbon microphones do not have reasonable signal-to-noise ratios or bandwidths for high-quality vibration sensing. Furthermore, they are fairly easily damaged. Their generality is also questionable, since they must be glued onto the object of study. They do, however, work fairly well for measuring low frequencies.[p.26][7]

Variable inductance sensors typically measure displacement. They are usually implemented with an alternating voltage, and the displacement is encoded into the current. This approach allows the engineer to filter out interference sidebands easily. However, the carrier frequency must be high enough that the entire bandwidth of interest can pass through without artifacts. Typically, the usable bandwidth is from DC to about one quarter of the carrier frequency. This technique encodes the signal onto the amplitude of the carrier wave. An alternative to the variable inductance method is to use an LC oscillator with the inductor safely hidden within the housing, and instead measure variable capacitance. In this case, the frequency of the circuit will vary as \begin{equation}\omega_0 = \frac{1}{\sqrt{LC}} \text{,} \end{equation}

although there will be some amplitude modulation as well. The second plate of the capacitor may either be the object itself or some foil attached to it. The variable capacitance sensor can measure very small excursions of the material. The mechanical impedance entailed by this method can be fairly low, at least in comparison to others in its family. However, the technique is susceptible to interference, and not particularly field-ready.[7]
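To make the frequency encoding concrete, consider how the oscillator moves with a small change in plate capacitance. A sketch, with component values invented purely for illustration:

```python
import math

def lc_frequency(L, C):
    # resonant frequency of an LC oscillator: f0 = 1 / (2*pi*sqrt(L*C))
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L = 1e-3       # 1 mH inductor, hidden in the housing (invented value)
C = 100e-12    # 100 pF nominal plate capacitance (invented value)

f0 = lc_frequency(L, C)           # carrier of roughly 503 kHz
f1 = lc_frequency(L, C * 1.001)   # a 0.1% capacitance increase...
# ...lowers the frequency by about 0.05%, since f0 varies as C**-0.5
```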

By far, the most common type of coupled instrument for structure-borne sound is the piezoelectric transducer. These devices are extremely convenient and cheap. Furthermore, they can be made to be quite accurate. Because of the potentially high efficiency with which these devices can convert energy from stress to current and back, piezoelectrics are desirable in low power or high frequency applications. When used as sensors, they can be made to approach ideal accelerometers.[p.67][7] They are so important in structure-borne sound that we will consider their unique acoustic properties in moderate detail.

Piezoelectric materials are generally crystals or amorphous ceramics that have asymmetries in their atomic lattice.[p.17][18] These materials have a different set of constitutive relations, and therefore a different wave equation, which explains the lengthy title of chapter 1, section 1. These materials exhibit coupling between strain and electrical polarization, which is the result of unevenly distributed charges in the lattice. This coupling results in two effects, which are always observed together: the direct piezoelectric effect, wherein the material develops a macroscopic net charge distribution as the result of mechanical strain, and the converse piezoelectric effect, wherein mechanical strain occurs as the result of the application of an electric field.[p.23][3] The constitutive relations for piezoelectric materials may be derived from an adjustment of the non-piezoelectric elastic constitutive relations, as defined in equation (1.5), by adding a second equation, resulting in a system of equations[p.24][3],

\begin{align}\label{pecrelation} T_I &= \sum_{J=1}^6 \left( c_{IJ}^E S_J \right)- \sum_{k=1}^3 \left( e_{Ik}E_k \right) \notag\\ D_i &= \sum_{j = 1}^3 \left( \epsilon_{ij}^S E_j \right) + \sum_{J = 1}^6 \left( e_{iJ} S_J \right) \qquad \text{,} \end{align}

where e_{Ij} is a 3rd-rank tensor in reduced notation (see chapter 1, section 1) describing the piezoelectric stress constants, D_i is a 1st-rank tensor containing the electrical displacement components, \epsilon_{ij} is a 2nd-rank tensor of the permittivity constants, and E_j is a 1st-rank tensor containing the electric field components.[p.24][3] c_{IJ}^E is analogous to the elasticity tensor, c_{IJ}, from equation (1.7), except that we have now defined its components for the condition of zero electric field.[p.19][18] The above system of equations shows the nature of the interdependence of the electric and mechanical variables: stress relates to strain through the elasticity constants c (as usual), and to the electric field through the piezoelectric stress constants e, while the electric displacement D relates to the electric field E through the permittivity constants \epsilon, and to the strain through the piezoelectric stress constants once again. The entire system forms a loop of dependence among the variables. The piezoelectric stress tensor e alone contains 3\times6 = 18 constants; together with the elastic and permittivity constants, these fully characterize a piezoelectric solid. Furthermore, strain patterns will vary with the piezoelectric material chosen.[p.24][3]

From the constitutive relations defined in equations \eqref{pecrelation}, we can now derive the piezoelectric wave equation in a few simple steps. First, we note that the electric field components, E, can be written in terms of the potential, \phi, as E_k = -\partial \phi / \partial x_k. We also note that S_J = \partial u_l / \partial x_k, which comes from (1.7). Substituting these into our definition of T_I from \eqref{pecrelation}, we find[p.24][3]

\begin{equation} T_I = \sum_{J=1}^6 \left( c_{IJ}^E \frac{\partial u_l}{\partial x_k}\right) + \sum_{k=1}^3 \left( e_{Ik} \frac{\partial \phi}{\partial x_k} \right) \qquad \text{,} \end{equation}

differentiating both sides with respect to x_j, and noting, just as we did in (1.4), that \partial T_I / \partial x_j = \rho \ddot{u}_i gives the piezoelectric wave equation[p.27][3]:

\begin{equation}\label{pwaveq} \rho\ddot{u}_i = \sum_{j,k,l = 1}^3 \left( c_{ijkl} \frac{\partial^2 u_l}{\partial x_k \partial x_j} \right) + \sum_{j,k=1}^3 e_{ijk}\left( \frac{\partial^2 \phi}{\partial x_k \partial x_j} \right) \end{equation}

The above system represents 3 equations, but there are four unknowns: u_x, u_y, u_z, and \phi. Since the divergence of the electric displacement, \nabla \cdot D, is zero for a piezoelectric solid with no free charges, we can find the fourth equation to be[p.27][3]

\begin{equation}\label{pwaveq_phi}\sum_{i,k,l=1}^3 e_{ikl}\frac{\partial^2 u_l}{\partial x_k \partial x_i} - \sum_{i,k=1}^3 \epsilon_{ik} \frac{\partial^2 \phi}{\partial x_k \partial x_i} = 0 \qquad \text{.} \end{equation}

Piezoelectric coupling changes the effective elasticity coefficient c. To find this effect, we solve the system of equations for a plane shear wave. We start by setting two of our particle displacement components, u, to zero, and combining equations \eqref{pwaveq} and \eqref{pwaveq_phi}. Since our hypothetical wave only has displacement coefficients along a single component---notated as x---we will drop the tensor notation for clarity.[p.27][3]

\begin{align}c \frac{\partial^2 u}{\partial x^2} + e \frac{\partial^2 \phi}{\partial x^2} &= \rho \ddot{u}\label{pwaveq_plane}\\ e \frac{\partial^2 u}{\partial x^2} - \epsilon \frac{\partial^2 \phi}{\partial x^2} &= 0\notag \end{align}

Multiplying the second equation by e/\epsilon, so that its \phi term matches that of the first equation, yields:

\begin{equation} \frac{e^2}{\epsilon} \frac{\partial^2 u}{\partial x^2} - e \frac{\partial^2 \phi}{\partial x^2} = 0 \qquad \text{,} \end{equation}

which can now be added to equation \eqref{pwaveq_plane}, to cancel the \phi terms,

\begin{equation} \left(\frac{e^2}{\epsilon} + c\right) \frac{\partial^2u}{\partial x^2} = \rho \ddot{u} \qquad \text{.} \end{equation}

Comparing the left-hand side of the above equation to that of equation (1.9) shows the relationship between the elasticity coefficient c' in a material exhibiting piezoelectric coupling, and the elasticity coefficient c in a material not exhibiting such coupling (for that vibratory regime), all other things being equal, to be

\begin{align} c' = c\left( 1 + \frac{e^2}{c \epsilon}\right) = c (1 + K^2) \qquad \text{.} \end{align}

This value, K^2 = e^2/c \epsilon, is called the electromechanical coupling coefficient, and it is given for a wave with a specific propagation direction and specific particulate displacement direction, on a particular material. Thus, like c, it depends on direction, although K^2 itself is dimensionless.[p.28][3] N.B. this is a different variable from k, the spatial frequency, although they are intimately related. Piezoelectric stiffening always results in a lower spatial frequency k = \omega / v, and a higher phase velocity v, relative to the unstiffened values, since

\begin{align} k' &= \omega \ \sqrt{\frac{\rho}{c'}} = \frac{\omega}{v'} \notag\\ \text{and} \qquad v' &= v \sqrt{1 + K^2} \qquad \text{.}\label{pvelocity_K} \end{align}

Ceramics like PZT can have a K^2 value of up to 0.5, while other substances like gallium arsenide can have K^2 on the order of 10^{-4}.[p.22][18]
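Equation \eqref{pvelocity_K} is simple enough to check numerically. A sketch using the two K^2 figures just quoted (the base velocity is an arbitrary normalization):

```python
import math

def stiffened_velocity(v, K2):
    # piezoelectric stiffening raises the phase velocity: v' = v*sqrt(1 + K^2)
    return v * math.sqrt(1.0 + K2)

v = 1.0                                 # unstiffened phase velocity (normalized)
vp_pzt = stiffened_velocity(v, 0.5)     # PZT ceiling: about a 22% increase
vp_gaas = stiffened_velocity(v, 1e-4)   # gallium arsenide: nearly negligible
```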

Perhaps the most common method of incorporating piezoelectrics into vibrational measurement involves simply coupling the element to a surface and reading off the voltage-encoded acceleration signal. These are the kind of sensors typically used as ``contact microphones.'' Care must be taken to provide the correct electrical impedance load to these sensors; otherwise the low-frequency response will roll off. Care must also be taken to ensure the mechanical coupling is secure: in semi-permanent applications, the author prefers a double coat of epoxy, and for temporary applications, rare earth magnets. This method is cost-effective, as the resonant properties of the piezoelectric can be ignored, and virtually no external gear is needed. Piezoelectrics are at least as field-ready as any other accelerometer; perhaps more so, because of how readily they can be replaced should things go awry. The only real drawback is the mechanical impedance caused by coupling.

The piezoelectric effect is frequently used in the implementation of other devices as well. As electromechanical resonators, piezoelectrics are extremely selective in frequency, which allows them to be used as efficient oscillators and filters.[p.27][18] There are several variants of piezoelectric sensors, each one sensing a different mode of vibration. However, the vibrations most of these devices are intended to sense are forced. These forced signals are often produced by the converse piezoelectric effect.[p.154][7] Therefore, despite their novelty, their applicability to the topic at hand---that of spatially measuring acoustic vibration in the field---is fatally limited.

Thickness Shear Mode (TSM) resonators, as their name implies, act on the shear mode of vibration as it propagates through the thickness of the crystal. TSM resonators are similar in form factor to the general-purpose ``piezo-discs''; however, they are much more sensitive when driven to resonance. By tracking fluctuations in the resonant frequency of the crystal, a rigidly bound mass may be measured.[p.37][3]

The measurement works using the fairly simple hypothesis offered by Lord Rayleigh: that mechanical resonance occurs only when the peak kinetic energy balances the peak potential energy in a system. A mass that has been rigidly coupled to the surface of the crystal results in a change in kinetic energy, since the surface is naturally an anti-node for the displacement wave, without a change in potential energy, which is zero at the surface regardless.[p.43][3]

Recall from the earlier discussion of the boundary behaviors of a vibrating body (chapter 1, section 4) that reflection at a surface is the result of a change in impedance. When the impedance at the surface is zero, the reflected wave is shifted by \pi, and the shear wave traveling across the sensor matches the resonance given by the object alone. However, if the impedance on the other side of the interface is nonzero, the wave is no longer rotated by \pi, and in the case of our forced vibration in chapter 1, section 4, a partial standing wave resulted.[p.119][7] In this case, the vibration is not held at a given frequency, but rather held at resonance, so the frequency of the vibration changes to match the impedance ratio at the barrier.

The quality of TSM measurements, when applied correctly, is outstanding: the sensor can be driven at any of the thickness shear modes, which, similarly to the example we gave in chapter 1, section 6, would favor odd integer multiples of the fundamental mode, whose frequency is likely in the megahertz range. The sensitivity of the TSM is given by the Sauerbrey equation,[p.44][3]

\begin{equation} \delta f = - \frac{2 f_1^2 \rho_s}{\sqrt{\mu_q \rho_q}} \qquad \text{,} \end{equation}

where \rho_s is the surface mass density, or mass per unit area. Since the fluctuations in frequency as the result of incident mass density are proportional to the square of the resonant frequency, this technique can be calibrated for many applications, particularly in the medical and chemical fields. While TSMs typically measure the contribution of mass density most directly, the Rayleigh hypothesis should hold for any net force in the system. The fact that these devices must inject vibrations into the object to perform measurements is troubling as well. However, as long as the vibratory frequencies of interest are sufficiently low relative to the resonator, this should not pose much of a problem. Even the nonlinear relationship between incident mass density and frequency shift must be corrected for after the fact.
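The Sauerbrey sensitivity is easy to evaluate numerically. A sketch using standard handbook values for AT-cut quartz and an invented mass loading (neither is taken from the text):

```python
import math

def sauerbrey_shift(f1, rho_s, mu_q=2.947e10, rho_q=2648.0):
    # Frequency shift of a TSM resonator loaded by surface mass density rho_s.
    # mu_q (shear stiffness, N/m^2) and rho_q (density, kg/m^3) are handbook
    # values for AT-cut quartz, assumed here for illustration.
    return -2.0 * f1**2 * rho_s / math.sqrt(mu_q * rho_q)

f1 = 5e6       # 5 MHz fundamental, a common crystal frequency
rho_s = 1e-6   # 1 mg/m^2 of added mass (invented loading)

df = sauerbrey_shift(f1, rho_s)   # about -5.7 Hz: small, but easily tracked
```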

Although prices vary over a wide range, the transducer itself is fairly cheap, and the implementation circuit is fairly simple. The frequency and stability of the crystal are the two primary factors in the cost-versus-quality curve. Digitizing the signal with consumer audio equipment would require heterodyning with a high-frequency oscillator, perhaps also supplied by a crystal for stability purposes.

The measurement is only precise when the object of interest is thin in comparison with the wavelength of the vibration. This is because a reflection from the opposite boundary of the measured object would then cause its own reflections, and in fact would behave similarly to the coupled sensor described earlier in this section. As a result, TSMs are typically used to measure the mass of rigid, thin films. Therefore, generality is not a strong point of this technique.

We previously discussed Surface Acoustic Waves in chapter 1, section 3, in the context of boundary interactions in vibrating objects. These waves have practical uses mostly related to communications and non-destructive testing. That being said, the field of SAW devices is ripe for development. The basic idea of a SAW device is to line a material with strips of piezoelectric material, which alternate to form a pattern that corresponds to the mode of vibration of interest. These striated piezoelectrics, called Interdigital Transducers (IDTs), can act as either sensors or actuators. Frequently, these are designed as two-port devices, to encompass both behaviors.[p.36][3]

Using the IDT as a building-block, along with materials selected for favorable acoustic properties, ultrasonic mechanical circuits can be made to perform a number of signal processing tasks.[p.318][18] These devices can perform massive computations in parallel, such as taking arbitrarily large Fourier Transforms, applying filtering, and inverse transforming.[p.318][18] Sensor-actuator components have been constructed that exploit a dizzying array of vibratory regimes, for a variety of applications. Acoustic Plate Mode devices (APM), which excite shear plate modes both symmetric and asymmetric, and Flexural Plate Wave sensors, which are typically used in micromachining processes, both seem especially well-suited for measuring properties of fluids.[p.106][3][p.115][3]

However, for our purposes, the essential problem with such devices remains one of impedance. Even in the cases where APM and FPW devices can measure the effects of individual molecules, there is a complex vibratory reaction between the field being measured and the tools used to measure it.[p.120][3] The quality of the measurements is clearly very high, while field-readiness and generality are questionable; but the impedance is the true site of failure here. We will revisit the topic of piezoelectric devices after a brief characterization of the field in which they may be more useful to us: optics.

Part 3:
A Brief Overview of Electromagnetic Theory

The characterization of light sources is essential for any optical imaging application. Like sound, light can be described as a wave phenomenon: a solution to the differential wave equation. This time, the wave equation derives from the differential form of Maxwell's equations for free space. This model is known as the Classical model, and we will largely content ourselves with this oversimplification, just as we did for acoustics earlier in this document. The vector formulation of Maxwell's equations is concisely descriptive: [p.44][15]
\begin{align}\label{maxwell} \nabla^2 E &= \epsilon_0 \mu_0 \frac{\partial^2 E}{\partial t^2} \notag\\ \nabla^2 B &= \epsilon_0 \mu_0 \frac{\partial^2 B}{\partial t^2} \end{align}

Above, E and B are vectors with x, y, and z components. \epsilon_0 and \mu_0 are the permittivity and permeability of free space, respectively. Thus, the propagation speed of such a disturbance (in a vacuum) is v = 1/\sqrt{\epsilon_0 \mu_0}. Light is a coupled electromagnetic field disturbance---that is, an electric and magnetic field, moving in synchrony---each component of which satisfies the scalar differential wave equation, just as we saw for sound in chapter 1, section 1. That equation, which we have seen before in terms of exponential functions in equation (1.9), looks like this when written more generally. [p.44][15]

\begin{equation}\label{waveq_light} \frac{\partial^2\Psi}{\partial x^2} + \frac{\partial^2 \Psi}{\partial y^2} + \frac{\partial^2 \Psi}{\partial z^2}= \frac{1}{v_0^2}\frac{\partial^2\Psi}{\partial t^2} \end{equation}

We will once again restrict our discussion of wave propagation to harmonic plane waves, without loss of generality (cf. chapter 1, section 2), because light behaves linearly at relatively small amplitudes, just like sound.[p.286][15] Recall that harmonic waves are of the form, e.g., e^{-\mathsf{i}(\omega t \pm kx)}. Harmonic waves are also called monochromatic, in the context of light. In a vacuum, both constituent fields of light propagate in transverse waves, and they are in phase. Their axes of displacement are not only perpendicular to their propagation direction, but to each other. Plane waves can therefore be oriented, or polarized, by rotating these vectors along the plane of constant phase.[p.45][15]
Just like acoustic waves, electromagnetic waves transmit energy without globally translating matter, according to the classical model. We can now define the energy per unit volume of a traveling harmonic plane wave, which invites cautious analogies to equation (1.10):

\begin{align}\label{em_energy}W_E &= \frac{\epsilon_0 E^2}{2}\notag\\ W_B &= \frac{B^2}{2\mu_0}\qquad \text{,} \end{align}

with the notable difference that with light, we have two field variables to keep straight. N.B. the above equations are distinct from our treatment in chapter 1, section 2 of kinetic versus potential energy. While in acoustics, we described impedance and damping in terms of complex numbers, which changed the phase relationships between W_k and W_p, this is not the case with W_E and W_B. The electric and magnetic fields cannot be out of phase with one another: that would violate Maxwell's Equations.[p.46][15] Because these two disturbances are always connected to each other---by a real-valued constant, no less---the description of one field disturbance or the other is typically sufficient. The standard conceit is to describe the E-field, since its magnitudes are much ``larger.'' A simple manipulation of Maxwell's Equations reveals by how much (for plane waves):[p.45][15]

\begin{equation}\label{EvB} E_i = v_0 B_j \qquad \text{,} \end{equation}

where v_0 is 2.998\times 10^{8} meters per second. The energy in each field, however, is the same. The total energy per unit volume of the wave is then given by[p.46][15]

\begin{equation}\label{emEtotal} W_t = W_E + W_B = \epsilon_0 E^2 = \frac{B^2}{\mu_0} \qquad \text{.} \end{equation}
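The equality of the two energy densities follows directly from equations \eqref{EvB} and \eqref{em_energy}, and can be verified numerically. A sketch (the field amplitude is arbitrary):

```python
import math

eps0 = 8.854e-12         # permittivity of free space, F/m
mu0 = 4e-7 * math.pi     # permeability of free space, H/m
v0 = 1.0 / math.sqrt(eps0 * mu0)   # propagation speed, ~2.998e8 m/s

E = 100.0      # electric field amplitude, V/m (arbitrary)
B = E / v0     # the coupled magnetic field, from E = v0 * B

W_E = eps0 * E**2 / 2.0     # electric energy density
W_B = B**2 / (2.0 * mu0)    # magnetic energy density: identical to W_E
```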

To calculate the total power flow per unit area, we use the \emph{Poynting vector},

\begin{equation}\label{poynting} S(x,y,z) = v_0^2 \epsilon_0 E(x,y,z) \times B(x,y,z) \end{equation}

where the operator \times denotes a cross-product.[p.46][15] The Poynting vector is an instantaneous measure whose magnitude fluctuates at twice the frequency of the radiation. This makes it impossible to observe directly; in practice, only its time average is meaningful. Generally, to quantify the degree to which something is illuminated, the measure to use is irradiance. Irradiance is the field variable that most sensors will measure. It is defined by

\begin{align}\label{irradiance} I = \epsilon_0 v_0 \langle E^2 \rangle_T \qquad \text{,} \end{align}

wherein the braces and the subscript, \langle \rangle_T, denote a time-average.[p.48][15]
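For a harmonic plane wave, the time average works out to \langle E^2 \rangle_T = E_0^2/2, so irradiance follows directly from the field amplitude. A sketch with an invented amplitude:

```python
eps0 = 8.854e-12   # permittivity of free space, F/m
v0 = 2.998e8       # propagation speed in vacuum, m/s

E0 = 1000.0        # peak E-field amplitude, V/m (invented for illustration)
mean_E2 = E0**2 / 2.0     # time average of E^2 for a harmonic wave
I = eps0 * v0 * mean_E2   # irradiance, W/m^2: roughly 1.3 kW/m^2
```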

The angle of reflection, \theta_r, exhibited by a wavefront of light after striking a reflective surface, will always be equal to the wavefront's angle of incidence, \theta_i. When light travels through an appropriately transparent, non-dispersive material, the permittivity \epsilon and permeability \mu, and, as a result, the velocity v, change. While this change in propagation velocity does not affect the temporal frequency (color) of the light, it alters the angle of transmitted light, \theta_t, and the amplitudes of the reflected and transmitted components. The ratio which describes this change in propagation is called the relative index-of-refraction between the two media, and it is given by Snell's Law[p.97][15]:

\begin{equation}\label{snell_em} \eta_{ti} \equiv \frac{\sin{\theta_i}}{\sin{\theta_t}} = \frac{v_i}{v_t} \end{equation}

In contrast to this value, which compares two materials, there is also an index-of-refraction \eta, which describes a given material's properties in relation to a vacuum. The incident, transmitted, and reflected vectors are all coplanar, and that plane is called the plane-of-incidence. To find the amplitudes and polarizations of the resulting refracted and reflected light, we must split the incident E-field into components which are parallel and perpendicular to this plane. Assuming the change in permeability is negligible, we can use the Fresnel equations:[p.109][15]

\begin{align}\label{fresnel} r_\perp &\equiv \left( \frac{E_{0r}}{E_{0i}} \right)_\perp = \frac{n_i \cos{\theta_i} - n_t \cos{\theta_t}}{n_i \cos{\theta_i} + n_t \cos{\theta_t}}\notag\\ t_\perp &\equiv \left( \frac{E_{0t}}{E_{0i}} \right)_\perp = \frac{2n_i \cos{\theta_i}}{n_i \cos{\theta_i} + n_t \cos{\theta_t}}\notag\\ {}\notag\\ r_\parallel &\equiv \left( \frac{E_{0r}}{E_{0i}} \right)_\parallel = \frac{n_t \cos{\theta_i} - n_i \cos{\theta_t}}{n_i \cos{\theta_t} + n_t \cos{\theta_i}}\\ t_\parallel &\equiv \left( \frac{E_{0t}}{E_{0i}} \right)_\parallel = \frac{2n_i \cos{\theta_i}}{n_i \cos{\theta_t} + n_t \cos{\theta_i}}\notag \end{align}

These equations may be verified using the boundary conditions and Maxwell's equations. The boundary conditions require that the E-field component tangent to the boundary be continuous, and likewise for the normal component of the electric displacement, D. N.B. for a given interface with n_i > n_t, there is an incident angle, called the critical angle, at which \theta_t = \pi / 2. At or beyond this angle, all of the light is internally reflected.[p.112][15] Electromagnetic waves that propagate in this regime bear certain similarities to mechanical waves in SAW devices, in that their propagation is restricted. However, the similarities have more to do with the applications than with the physical phenomena.[p.440][18]
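The perpendicular-polarization pair from equations \eqref{fresnel}, combined with Snell's law, can be sketched as follows (the indices and angles are invented examples; the parallel pair would be implemented analogously):

```python
import math

def fresnel_perp(n_i, n_t, theta_i):
    """Amplitude coefficients r, t for the perpendicular polarization.

    Returns None beyond the critical angle (total internal reflection)."""
    sin_t = n_i * math.sin(theta_i) / n_t   # Snell's law
    if abs(sin_t) > 1.0:
        return None
    theta_t = math.asin(sin_t)
    ci, ct = math.cos(theta_i), math.cos(theta_t)
    r = (n_i * ci - n_t * ct) / (n_i * ci + n_t * ct)
    t = 2.0 * n_i * ci / (n_i * ci + n_t * ct)
    return r, t

# normal incidence, air to glass: r = (1 - 1.5)/(1 + 1.5) = -0.2,
# so about 4% of the power (r squared) reflects
r, t = fresnel_perp(1.0, 1.5, 0.0)

# glass to air beyond the critical angle (~41.8 degrees): total reflection
blocked = fresnel_perp(1.5, 1.0, math.radians(45.0))
```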

It is common to simplify the propagation of waves into rays: straight lines perpendicular to the wavefronts. The ray model loses a lot of detail, even in its description of free-space propagation; however, it is useful for back-of-the-envelope estimates of some interactions.[p.95][15] Making the further assumption that such rays travel together in bundles which subtend only small angles from the main axis of an optical system is called the paraxial approximation. This allows us to assume that optical components behave linearly, and as a result, many free-space optical systems can be easily modeled as matrix multiplications.[p.444][13] This technique, of constructing ray transfer matrices, has implications for both modeling and regression analysis.
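Under the paraxial approximation, each optical element becomes a 2\times2 matrix acting on a ray's (height, angle) pair. A minimal sketch (the focal length and distances are invented for illustration):

```python
def thin_lens(f):
    # ray transfer matrix for a thin lens of focal length f
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def free_space(d):
    # ray transfer matrix for propagation over a distance d
    return [[1.0, d], [0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# a ray parallel to the axis, 1 cm off-center, through a 10 cm lens,
# then 10 cm of free travel: it should cross the axis at the focal point
system = matmul(free_space(0.10), thin_lens(0.10))
y, u = 0.01, 0.0
y_out = system[0][0] * y + system[0][1] * u   # height at the focus: ~0
u_out = system[1][0] * y + system[1][1] * u   # angle: -0.1 rad, converging
```

Note the multiplication order: the matrix for the first element encountered sits rightmost, so a whole optical train collapses into a single matrix.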

On the other hand, the ray model assumes that all light travels as a plane wave, which is a fine assumption until these waves start interfering with each other. Huygens's principle is another useful approximation: it suggests that all the points of a propagating wavefront are sources of spherical wavelets, which add together to form the wavefront at some later time. This picture will help us visualize phenomena related to diffraction.[p.100][15]

Diffraction is a broadly defined family of effects caused by the confinement of waves. It arises from the constructive and destructive interference patterns endemic to all waves, exacerbated by discontinuities in propagation factors at boundaries. It is especially apparent when the extent of the wave's confinement is comparable to its wavelength.[p.32][13] However, diffraction is present in even the most finely calibrated optical systems.

Optical systems that are theoretically perfect---from a geometrical optics perspective, at least---are said to be ``diffraction limited.''[p.143][15] There are several methods for describing the performance of an optical system in this state. These depend on application-specific assumptions, such as the coherence of the source---its chromatic purity, quantified by the coherence length. Despite their differences, however, all of these measures somehow involve the Airy pattern, a radially symmetric function of irradiance over deviation angle. The Airy pattern is essentially a cross-section of the image of an ideal point source at infinity, which has passed through an aperture at some point in the optical path, i.e.,[p.445][15]

\begin{equation}\label{airy_patt} I(\theta) = \left[ \frac{2 J_1 (ka \sin{\theta})}{ka \sin{\theta}} \right]^2 \qquad \text{,} \end{equation}

where J_1(u) is a Bessel function of the first kind, order one, k is the spatial frequency of the light source, a is the aperture radius, and \theta is the angle of deviation from the center.[p.445][15]

By ``at infinity,'' we mean that the wavefronts of the point source---ostensibly spherical---have traveled sufficiently far to become plane waves. Since all lenses, mirrors, etc. are of finite radius, we can assume any distant object fits this description. In the neighborhood of \theta = 0, this function is at a maximum. This central circular region is known as the Airy disk, and its angular radius is given by:[p.445][15]

\begin{equation} (\Delta \varphi)_{min} = \Delta \theta = \frac{1.22 \lambda}{D} \qquad \text{,} \end{equation}

where D is the diameter of the aperture, and \lambda is the wavelength. This disk is the result of Fraunhofer (far-field) diffraction, and the Airy pattern of equation \eqref{airy_patt} is, up to scaling, the squared magnitude of the Fourier transform of the aperture.[p.496][15] Sadly, time and space constraints preclude the author from providing this fascinating proof; an excellent form of it may be found in [p.112][13].
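The Airy pattern of equation \eqref{airy_patt} can be evaluated directly; the wavelength and aperture below are illustrative, and the Bessel function is approximated by its integral representation so the sketch is self-contained. The first null reproduces the 1.22\lambda/D factor, since the first zero of J_1 falls at 3.8317 \approx 1.22\pi.

```python
import numpy as np

def J1(x, n=4000):
    """Bessel function of the first kind, order one, via its integral
    representation J_1(x) = (1/pi) * integral_0^pi cos(tau - x sin(tau)) dtau,
    evaluated with the trapezoid rule (adequate for a sketch)."""
    tau = np.linspace(0.0, np.pi, n)
    f = np.cos(tau - x * np.sin(tau))
    return (np.sum(f) - 0.5 * (f[0] + f[-1])) * (np.pi / (n - 1)) / np.pi

lam = 500e-9              # wavelength: 500 nm (illustrative)
D = 1e-3                  # aperture diameter: 1 mm (illustrative)
a, k = D / 2, 2 * np.pi / lam

def airy(theta):
    """Normalized Airy irradiance of equation (airy_patt)."""
    u = k * a * np.sin(theta)
    if np.isclose(u, 0.0):
        return 1.0        # the limit of [2 J1(u)/u]^2 as u -> 0
    return (2 * J1(u) / u) ** 2

# The first zero of J_1 is at u = 3.8317, so the first dark ring sits at
# sin(theta) = 3.8317 / (k a) = 1.22 lambda / D
theta_null = np.arcsin(3.8317 / (k * a))
```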

Part 4:
Laser Beam Characterization via Transverse EM Modes

Lasers use resonance to produce a beam of polarized, monochromatic light. These devices provide a light source which can create finely detailed images of vibratory motion. In order to reliably image vibration with laser light, we must characterize the beam lasers produce. This will both provide us with the means to interpret the results of imaging, and equip us with a vocabulary to differentiate the quality of illumination sources.

While there are several implementations of lasers, all designs are based on the principle of optical feedback. An easily visualized case is the gas laser. In gas lasers, the resonance is produced in a gas-filled tube which is periodically excited and relaxed electrically. During the relaxation part of this cycle, the gas mixture releases photons, which are initially scattered in all directions. The component frequencies of this light are limited by the transition bandwidth of the gases used. Each end of the tube is fitted with a high-reflectance mirror, one or both of which can also transmit light. Prior to reaching the mirrors, the light passes through windows tilted at Brewster's angle, which only permit the emission of light whose E-field is parallel to the plane of incidence.[p.564][15]

Because of the phase-aligning effect of the boundaries in the cavity, light in this tube behaves like vibration in a solid object with two free ends. For reference, the case of a solid object with a single free end was discussed in chapter 1, section 6. In the present case, the minimum round-trip length the light must traverse before emission is 2L, where L is the length of the tube. Light whose wavelength is 2L would constitute the first longitudinal cavity mode. Note that these are not longitudinal waves---the light remains transverse; the term refers to the standing-wave pattern along the tube's axis. A given tube theoretically has an infinite number of these modes, at integer multiples of the fundamental; however, a mode must fall within the transition bandwidth of the lasing medium in order to be emitted.[p.558][15]

In addition to longitudinal modes, the light inside the tube can exhibit standing waves transverse to the tube's axis as well. These modes are called Transverse Electric and Magnetic modes, or TEM_{mn} for short. The TEM_{mn} indices refer to the mode number along each axis of symmetry, and prescribe much of the beam's behavior on exiting the tube. Depending on the construction of the laser, these irradiance profiles can be either rectangularly symmetric (Hermite-Gaussian),

\begin{equation}\label{rect_mode} U_{mn}(x, y, z) = H_m\left(\frac{x}{w}\right)H_n\left(\frac{y}{w}\right)u(x,y,z)\qquad \text{,} \end{equation}

where H refers to a pair of Hermite polynomials, and w is the radius of the beam, or circularly symmetric (Laguerre-Gaussian),

\begin{equation}\label{circ_mode} U_{pl}(r, \varphi, z) = L_{pl}\left(\frac{r}{w}, \varphi\right)u(r, z)\qquad \text{,} \end{equation}

where L refers to a Laguerre polynomial, r is a radial coordinate and \varphi is an angular coordinate.[p.4][23] The function u refers to the Gaussian Distribution, of the form

\begin{equation}\label{gauss} u = \sqrt{ \frac{2}{\pi}} e^{-\frac{x^2 + y^2}{w^2}} = \sqrt{ \frac{2}{\pi}}e^{-\frac{r^2}{w^2}} \qquad \text{,} \end{equation}

depending on the axis of symmetry.[p.5][23] In addition to these modes, there also exist \emph{degenerate modes}, notated as TEM_{mn}^*, which consist of two modes that share a frequency, arranged in quadrature.[p.6][23] These modes are loosely analogous to those found in plates.
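A sketch of the rectangular (Hermite-Gaussian) irradiance profiles, following the document's form of equations \eqref{rect_mode} and \eqref{gauss} with H_m(x/w) arguments; normalization constants are omitted, and the unit beam radius is illustrative.

```python
import numpy as np
from numpy.polynomial.hermite import hermval  # physicists' Hermite H_m

def tem_mn(x, y, m, n, w=1.0):
    """Unnormalized irradiance cross-section of a rectangular TEM_mn mode:
    the squared product of two Hermite polynomials and a Gaussian envelope."""
    def H(order, u):
        c = np.zeros(order + 1)
        c[order] = 1.0                  # select the single basis polynomial H_order
        return hermval(u, c)
    g = np.sqrt(2.0 / np.pi) * np.exp(-(x**2 + y**2) / w**2)  # Gaussian envelope u
    return (H(m, x / w) * H(n, y / w) * g) ** 2

# TEM_00 peaks on axis; TEM_10 has a null along the line x = 0,
# since H_1(0) = 0.
```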

Typical lasers exhibit several of these modes simultaneously. This is possible because modes are energetically orthogonal (cf. chapter 1, section 6). Mixed-mode beams are generally more tolerant to microscopic impurities on the lenses or mirrors, and therefore sometimes desirable for specific tasks. However, the irradiance profile which best minimizes divergence from the collinear regime---due to diffraction---is the Gaussian distribution, equation \eqref{gauss}.[p.13][23]

The Airy disk is a reasonable place to start characterizing the irradiance profiles of monochromatic beams: 84% of the irradiance may be localized this way, and a further 7% falls between the first and second nulls of the Bessel function. However, the width of such beams is not constant, and so measurements which take into account only the width at the aperture fail to completely characterize the beam.[p.2][23]

A Gaussian, fundamental-mode (TEM_{00}), non-aberrant laser beam differs from the cylinder implied by such measurements, and also from the cone implied by ray tracing. These spatial distortions amount to a hyperbolic change in radius over the propagation length, of the form:[p.562][15]

\begin{equation}\label{wz} w(z) = w_0 \sqrt{ 1 + \left( \frac{\lambda z}{\pi w_0^2} \right)^2 } \qquad \text{,} \end{equation}

where z is the position along the beam, measured from the location of the minimum radius, or ``waist.'' The radius w_0 is the smallest radius the beam achieves, which occurs at z=0. N.B. ray tracing would have us believe this radius to be zero, but diffraction keeps it limited by the wavelength, \lambda. The waist usually occurs some distance away from the aperture.[p.11][23] If this distance is known, we may call it z_0 and replace the occurrences of z in these equations with z-z_0 to shift the spatial functions by the appropriate amount.

A complete characterization of the ideal fundamental-mode beam merely requires a knowledge of the length over which the cross sectional area doubles, from the minimum. This range is called the Rayleigh range z_R, and is, equivalently, the region where radius grows from w_0 to \sqrt{2}w_0. From z_R, we may infer all other spatial parameters of an ideal beam. Thus, the Rayleigh range is an extremely important aspect of the beam, given by[p.562][15]

\begin{equation}\label{zr} z_R = \frac{\pi w_0^2}{\lambda} \qquad \text{.} \end{equation}

An ideal fundamental-mode beam will also have variations in the curvature of its wavefronts. This measure is given by the beam's radius of curvature, R(z), expressed as a function of z:[p.12][23]

\begin{equation}\label{rz} R(z) = z \left(1 + \frac{z_R^2}{z^2} \right) \qquad \text{.} \end{equation}

Near the waist, these wavefronts are arbitrarily close to plane waves. At the edges of the Rayleigh range, \pm z_R, the magnitude of the curvature is maximized; at z=0 and z=\pm \infty, it is minimized. The sign of the radius of curvature matches the sign of z. This parameter will affect the beam's interaction with lenses, as well as define regions where near-field (Rayleigh) versus far-field (Fraunhofer) diffraction regimes are attainable.

We can also determine the phase-shift \psi accumulated across the beam as a function of z:[p.12][23]

\begin{equation}\label{psiz} \psi(z) = -\tan^{-1}\left(\frac{z}{z_R}\right) \qquad \text{,} \end{equation}

in comparison to an ideal plane wave.

The full angular divergence, \theta, is the largest angle developed by the beam on either side of the z axis. Once again, this angle is asymptotically approached in the far positive and negative fields. This limit is given by[p.12][23]

\begin{equation}\label{fadiv} \theta = \frac{2\lambda}{\pi w_0} = \frac{2w_0}{z_R} \qquad \text{.} \end{equation}
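These spatial parameters can be collected into a small numerical sketch; the wavelength (HeNe) and waist radius below are illustrative. Note that the wavefront radius of curvature is undefined exactly at the waist, where the wavefront is planar.

```python
import numpy as np

lam = 632.8e-9                    # HeNe wavelength (illustrative)
w0 = 0.5e-3                       # waist radius: 0.5 mm (illustrative)

z_R = np.pi * w0**2 / lam         # Rayleigh range

def w(z):
    """Beam radius as a function of position along the axis."""
    return w0 * np.sqrt(1.0 + (z / z_R) ** 2)

def R(z):
    """Wavefront radius of curvature, z * (1 + (z_R / z)^2); z != 0."""
    return z * (1.0 + (z_R / z) ** 2)

def psi(z):
    """Accumulated (Gouy) phase shift relative to an ideal plane wave."""
    return -np.arctan(z / z_R)

theta = 2.0 * lam / (np.pi * w0)  # full far-field divergence angle

# At z = z_R the radius has grown to sqrt(2) w0 (the area has doubled),
# and |R| reaches its minimum value, 2 z_R.
```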

The above equations \eqref{wz}, \eqref{zr}, \eqref{rz}, and \eqref{fadiv} must be adjusted for the case of mixed-mode beams. To do this, we consider that the given combination of higher-order modes is related to a fundamental-mode, embedded Gaussian beam, which may or may not actually be present in the mixture. We posit that the radius of this embedded Gaussian beam is proportional to that of the given beam at all points z; this ratio is M. By custom, the parameters of the mixed-mode beam are capitalized, so M \equiv W_0 / w_0. Noting that Z_R = z_R, and making both substitutions into equations \eqref{wz}, \eqref{rz}, and \eqref{fadiv}, gives us another set of equations that characterize the mixed-mode parameters:[p.13][23]

\begin{align}\label{mixed_mode} &Z_R = \frac{\pi W_0^2}{M^2 \lambda} = z_R\notag\\ {}\notag\\ &W(z) = W_0 \sqrt{1 +\left(\frac{z^2}{z_R^2}\right)}\notag\\ {}\notag\\ &R(z) = z \left(1 + \frac{z_R^2}{z^2} \right)\notag\\ {}\notag\\ &\Theta = \frac{2M^2 \lambda}{\pi W_0}= \frac{2W_0}{z_R} = M\theta \end{align}

Equation \eqref{psiz} cannot apply to the mixed-mode case in a straightforward way, because the phase term was derived for a beam with a single transverse mode, whereas the mixed-mode beam ostensibly contains several. By convention, the quality of a beam is quoted as M^2 rather than M, because M^2 is an invariant of the beam, unchanged by ordinary optical elements.[p.15][23]
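A sketch of the embedded-Gaussian bookkeeping; the wavelength, waist, and the value of M below are illustrative.

```python
import numpy as np

lam = 632.8e-9                    # wavelength (illustrative)
w0 = 0.5e-3                       # embedded (fundamental) Gaussian waist
M = 1.5                           # hypothetical ratio W0 / w0

W0 = M * w0                       # mixed-mode waist radius
z_R = np.pi * w0**2 / lam         # Rayleigh range of the embedded beam
Z_R = np.pi * W0**2 / (M**2 * lam)  # shared: Z_R = z_R

theta = 2.0 * lam / (np.pi * w0)          # fundamental divergence
Theta = 2.0 * M**2 * lam / (np.pi * W0)   # mixed-mode divergence, M * theta

# The waist-divergence product W0 * Theta = M^2 * (w0 * theta) is the
# invariant that ordinary optical elements cannot improve.
```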

Part 5:
Vibration Sensors that Do not Require Inertial Coupling

We are now in a position to assess a few optical techniques for vibration measurement. The techniques herein will once again be judged on the same set of attributes as in section \ref{sec:coupling}; however, in this case, all of the techniques can be deployed without loading the object, so the impedance attribute will be ignored.

Interferometry

As a general term, optical interference occurs when two or more light waves interact in such a way as to produce any irradiance distribution other than the sum of their individual irradiances.[p.367][15] For the design of an instrument, the standard method is to produce an observable fringe pattern: a region of light and dark bands of sufficient width and spacing for the photosensitive element. In order for interference to occur, the beams must exhibit spatial and temporal coherence. Spatial coherence is the result of alignment, whereas temporal coherence is the result of spectral purity.[p.371][15] Vendors of monochromatic light sources typically quote a coherence length, which describes the distance over which a source remains approximately monochromatic.[p.298][15] Interference can be produced by many different arrangements. The interferometry techniques we will discuss are all amplitude-splitting techniques, meaning they require a half-silvered mirror (equivalently, a ``beam-splitter'') to split a single light source into two. The number of parts involved is an important consideration; an increased part count will generally raise production cost and increase the chances of failure. However, occasionally these costs come with benefits, such as increased image quality or generality.

Michelson Interferometry

The Michelson Interferometer is a basic amplitude-splitting interferometry technique which is seldom used on its own for instrumentation purposes.[p.387][15] However, it is well studied and serves as a model on top of which other features may be added.

Light travels from the source into a beam-splitter O, at an angle. The beam is partially reflected and partially transmitted. The reflected component travels to mirror M_1 and the transmitted component to mirror M_2. Both beams are reflected by their respective mirrors straight back into the beam-splitter, where a component of each wave is then projected down to the detector.

N.B. element C is a clear piece of dielectric with the same index of refraction and width as the beam-splitter O. It corrects for the difference in path lengths between the two beams. However, it will not correct for the additional phase shift introduced on the OM_2 leg by the internal reflection at O. There will also be a secondary set of beams (omitted), which will occur on the left side of O.[p.388][15]

This implementation is still not exactly correct in theory, but after a little calibration, it will produce a fringe pattern at the detector, the nulls of which are fairly close to:

\begin{equation}\label{mfringe} 2d\cos{\theta_m} = m\lambda_0 \end{equation}

where m is any integer---call it the order, d is the separation between mirror M_2 and the image of M_1 in the beam-splitter, and \lambda_0 is the wavelength of the light source. The fact that the fringe diameters are wavelength-dependent is a dispersion byproduct of the beam-splitter. If the light source has multiple wavelengths present, interference still occurs, and a separate set of fringes appears at the appropriate values of \lambda_0 for every component wavelength.[p.388][15]

If instead we use a monochromatic source, the interference between the two parallel mirrors M_1 and M_2 will consist of many concentric rings. These rings are called fringes of equal inclination, and each has a fixed order, m. By counting the rings, we can find the number of half-wavelengths between the mirrors M_1 and M_2. As the mirrors move closer together, the order decreases, and one after another, the rings move toward the center of the bull's-eye and disappear.

To find the angular radius of a particular ring, p, we use the following:[p.390][15]

\begin{equation}\label{haidinger} \theta_p \approx \sqrt{ \frac{p \lambda_0}{d} } \end{equation}

where, since the rings are stationary, each \theta_p corresponds to a fixed \theta_m. We will see a different case below. The approximation follows from the small-angle Taylor expansion \cos\theta \approx 1 - \theta^2/2, which is valid because we presume to look only at a small angle subtended from the detector.[p.390][15]

If we replace M_1 with an arbitrary surface, we could use this as a rangefinder, and for some applications this sort of topology works well.[p.390][15] By adding an attenuator on the OM_2 leg---now called the reference beam---we can adjust for losses due to scattering at the surface. This matching is necessary for the system to produce a visible interference pattern in the first place.

Keep in mind that for a wavelength \lambda=500 nm, and a distance d = 10 cm, the order m_0, which corresponds to the number of fringes in the pattern, will be 400,000.
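These figures follow directly from equations \eqref{mfringe} and \eqref{haidinger}; a short check with the values quoted above:

```python
import numpy as np

lam = 500e-9                 # wavelength: 500 nm, as in the text
d = 0.10                     # mirror separation: 10 cm, as in the text

# Central order: the number of half-wavelengths in the round trip 2d.
m0 = 2.0 * d / lam           # 400,000, matching the figure above

def theta_p(p):
    """Small-angle radius of the p-th ring out from the center."""
    return np.sqrt(p * lam / d)

# The first ring appears at roughly 2.2 mrad from the axis.
```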

While interpreting the fringe pattern as a whole is something our eyes can do quickly, it is unreasonable to assume our detector would have the spatial resolution or frame rate necessary to do this in real time. Furthermore, it isn't necessary, as long as we decide to look at the velocity instead.

A typical interferometer only has a single photodetector, or at most an array. A single photodetector can be used to count the number of fringes that pass over a single point, and would be able to detect the magnitude of the velocity. In general, to detect the velocity while preserving the direction, a slightly different technique is used.[p.241][2]

The cost of building such a device as a prototype is not very high, although some optical components need to be well made: in particular, the mirrors must be of good quality, and every element must be mounted on finely adjustable stands. Constructing a housing for the device, however, would be costly.

The field-readiness of the device, as described above, is fairly low. There are a minimum of eight components, counting the reference attenuator (discussed but not shown), each of which would require fully locking, adjustable supports. Furthermore, unless additional mirrors were added, the maximum distance that could be measured with the device is the length of the reference beam; an arm must therefore be constructed to house it, and so on.

The quality is limited by the device's inability to discern the direction of motion; it is clearly much better suited for displacement measurements. If the fringe patterns could be analyzed more directly, it could be made to track direction, but that would entail a loss in frame rate and field-readiness, and an increase in cost.

The device is fairly general. It can measure vibrations even on rough surfaces by calibrating the reference beam.

Laser Doppler Vibrometry

Laser Doppler Vibrometry is an attempt to overcome some of the drawbacks of Michelson and other standard interferometry methods, most importantly the sign ambiguity.[12] This is typically accomplished in one of two ways. The first method is to use polarization to read instantaneous velocity in quadrature. The second is to frequency-shift the reference beam by a constant value prior to mixing the beams back together.[12]

Since the first method is essentially a doubling of the interferometer approach, we shall focus on the operation of the second. The frequency shifting is performed by an acousto-optic modulator, typically a Bragg cell. Such devices are transparent ultrasonic piezoelectric transducers capable of modulating their indices of refraction at sufficiently high frequencies to shift the frequency of the light by a few tens of MHz. This shift is applied to the reference beam in order to produce fringe patterns that move at a constant velocity when the surface is at rest. This way, a displacement in the negative direction corresponds to a slowing of the fringe motion rather than an ambiguous reversal. The bandwidth is then limited by how far the Bragg cell can shift the reference beam frequency.[p.241][2]
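The sign recovery can be seen with a short calculation; the shift frequency and surface velocities below are illustrative.

```python
lam = 632.8e-9               # laser wavelength (illustrative)
f_B = 40e6                   # Bragg-cell shift: 40 MHz (illustrative)

def beat_frequency(v):
    """Detector beat frequency for surface velocity v (positive taken as
    toward the instrument): the Doppler shift 2 v / lam rides on f_B."""
    return f_B + 2.0 * v / lam

# Without the offset, +10 mm/s and -10 mm/s give the same |Doppler| shift
# and are indistinguishable; with it, they land on opposite sides of f_B,
# so the sign of the velocity survives demodulation.
up = beat_frequency(+0.01)   # slightly above 40 MHz
down = beat_frequency(-0.01) # slightly below 40 MHz
```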

Either method costs considerably more, even to prototype. The quadrature method doubles the cost of the prototype and will also increase the cost of a field version considerably. The frequency-shift method requires the use of a Bragg cell, which is expensive.

As for their field-readiness, the added bulk of the quadrature method is unreasonable. Furthermore, it poses even more problems related to calibration. The frequency shift method seems to be a better choice for field work out of the two. Bragg cells are not very big, and therefore portability is not any more hindered than the case of the interferometer.

For the quadrature method the quality is likely very good. For the frequency shift method, the quality depends on the Bragg cell, but it will be optically band-limited, due to the heterodyning process.

There are many variants that have been suggested on this general idea. Cf. [25] for a good overview of early techniques, and [2] and [28] for more specific applications.

Self-Mixing LDV

An especially attractive variant on the Laser Doppler Vibrometer, the Self-Mixing LDV, uses the laser cavity itself as the location of beam mixing. Therefore, no external mirrors are required. This technique strongly suggests the use of a diode laser, although it has been attempted in tube lasers as well.[25] Modern laser diodes frequently have a second pair of pins, which connect to the embedded photosensitive element. While the self-mixing approach seems to work without any accoutrements besides the laser diode and an attenuator, the dynamic range in this configuration is poor. Thus, much of the research contributing to the design of these devices is dedicated to improving them in this respect. Since this technique has been established for some time, some fairly sophisticated implementations exist in the literature. This technique looks very promising on all fronts.

 


Bibliography

1 

A Bäcker.

Random waves and more: Eigenfunctions in chaotic and mixed systems.

The European Physical Journal Special Topics, 145:161-169,
2007.

2 

J.R. Baker, R.I. Laming, T.H. Wilmshurst, and N.A. Halliwell.

A new, high sensitivity laser vibrometer.

Optics & Laser Technology, 22(4):241-244, August 1990.

3 

D. S. Ballantine, R. M. White, S. J. Martin, A. J. Ricco, G. C. Frye,
H. Wohltjen, and E. T. Zellers. 

Acoustic Wave Sensors.

Academic Press, Inc, San Diego, CA, 1997.

4 

ME Bonds.

Aesthetic Amputations: Absolute Music and the Deleted Endings of
Hanslick's Vom Musikalisch-Schönen.

Nineteenth Century Music, 36(1):3-23, 2012.

5 

Kevin K. Chen, Clarence W. Rowley, and Jonathan H. Tu.

Variants of Dynamic Mode Decomposition: Boundary Condition, Koopman,
and Fourier Analyses.

Journal of Nonlinear Science, 22(6):887-915, April 2012.

6 

R Courant and D Hilbert.

Methods of mathematical physics.

John Wiley and Sons, Inc, New York, 2 edition, 1937.

7 

L. Cremer and M. Heckl.

Structure-Borne Sound.

Springer-Verlag, Berlin, 2 edition, 1973.

8 

Andrew Dewar.

Reframing Sounds: Recontextualization as Compositional Process in
the Work of Alvin Lucier.

LMJ, 2012.

9 

BF Feeny and R. Kappagantu.

On the physical interpretation of proper orthogonal modes in
vibrations.

Journal of Sound and Vibration, 211(4):607-616, 1998.

10 

I K Fodor.

A Survey of Dimension Reduction Techniques.

2002.

11 

Michel Foucault.

Nietzsche, Genealogy, History.

In Paul Rabinow, editor, The Foucault Reader. Pantheon, New
York, 1984.

12 

Guido Giuliani, Simone Bozzi-Pietra, and S Donati.

Self-mixing laser diode vibrometer.

Measurement Science and ..., 14:24, 2003.

13 

Joseph W. Goodman.

Introduction to Fourier Optics.

Ben Roberts, Greenwood Village, CO, 3 edition, 2005.

14 

S. Han and B. Feeny.

Application of proper orthogonal decomposition to structural
vibration analysis.

Mechanical Systems and Signal Processing, 2003.

15 

Eugene Hecht and A. R. Ganesan.

Optics.

Addison-Wesley, Boston, 4 edition, 1987.

16 

Hans Jenny.

Cymatics: a study of wave phenomena and vibration.

3 edition, 2001.

17 

G Kerschen and JC Golinval.

Physical interpretation of the proper orthogonal modes using the
singular value decomposition.

Journal of Sound and Vibration, 2002.

18 

Gordon S. Kino.

Acoustic Waves.

Prentice-Hall, Inc, Englewood Cliffs, NJ, 1987.

19 

Ronald Kuivila.

Images and Actions in the Music of Alvin Lucier.

Leonardo Music Journal, 2012.

20 

Bruno Latour.

Politics of nature: East and West perspectives.

Ethics & Global Politics, 4(1):1-10, March 2011.

21 

Alvin Lucier.

Reflections.

Musiktexte, Koln, 3 edition, 1995.

22 

Alvin Lucier.

Music 109.

Wesleyan University Press, Middletown, CT, 2012.

23 

GF Marshall and GE Stutz.

Handbook of optical and laser scanning.

2004.

24 

P. A. Nelson and S. J. Elliot.

Active Control of Sound.

Academic Press, Inc, San Diego, CA, 1992.

25 

CJD Pickering, NA Halliwell, and TH Wilmshurst.

The laser vibrometer: a portable instrument.

Journal of Sound and Vibration, 107:471-485, 1986.

26 

William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P.
Flannery. 

Numerical Recipes in C.

Cambridge University Press, New York, 2 edition, 1988.

27 

Christopher E. Reid and Thomas B. Passin.

Signal Processing in C.

John Wiley and Sons, Inc, New York, 1992.

28 

P.A. Roos, M. Stephens, and C.E. Wieman.

Laser vibrometer based on optical-feedback-induced frequency
modulation of a single-mode laser diode.

Applied optics, 35(34):6754-61, December 1996.

29 

P Scherz.

Practical Electronics for Inventors.

McGraw-Hill, New York, 2 edition, 2007.

30 

HJ Stöckmann.

Chladni meets Napoleon.

The European Physical Journal Special Topics, 2007.

31 

Gilbert Strang.

Introduction to Linear Algebra.

Wellesley-Cambridge Press, Wellesley, 4 edition, 2009.

32 

D. Ullmann.

Life and work of EFF Chladni.

The European Physical Journal Special Topics, 2007.

33 

David S. Watkins.

Fundamentals of Matrix Computations.

John Wiley and Sons, Inc, Hoboken, NJ, 3 edition, 2010.

34 

J Yang.

Semiotics, Presence and the Sublime in the Work of Alvin Lucier.

Leonardo Music Journal, 2012.
