Impulse Response in Audio Transducers

Authored by AEC Engineering

The structure of what we perceive as speech, music, and sound is determined by the nature of constantly changing sound-pressure waves. With some degree of certainty we can say that the only constant in this whole process is change itself.

Sound pressure clusters vary with such speed and level that the chance of the same waveform repeating itself is virtually nonexistent. It follows, then, that the challenge facing components in a sound reproduction chain is to convey these ever-changing patterns in both electrical and acoustic form. Steady-state signals characterized by a constant wave-form do not, for all practical purposes, occur in nature. Picture #1 illustrates the structure of a single piano tone.

Audio transducers – loudspeakers – understandably face the greatest challenge in the sound reproduction sequence. Confronted by signals of extreme complexity, their task is to convert these wave-forms from electrical to mechanical energy, then to an acoustical analogue of the original signal – a difficult task indeed. To precisely transform and recreate abruptly changing wave-forms and patterns, one after the other, is all but impossible for today’s loudspeakers. Even the simplest of sound patterns cannot be reproduced coherently over most of the audible range.

Why is it that two loudspeakers with nearly identical frequency response measurements so often sound quite different from one another?

There is, of course, an underlying assumption to such a question: namely, that frequency response is the most important determinant of sound, and that an examination of frequency response is fundamental to understanding the acoustical properties of a loudspeaker. Some claim to be able to make meaningful determinations about acoustic performance by analyzing a loudspeaker’s frequency response alone. Yet when time – the fundamental element in change – is added as a third axis in the matrix of measurements, it becomes clear that there are important patterns only hinted at in the frequency response. It is the conclusion of this paper that although frequency response is one of several indicators that can be meaningful in an overall evaluation of loudspeaker performance, alone – or as a primary tool – it is no more than a finger pointing at the moon.

Moreover, we can be greatly misled by frequency response measurements. We often draw the wrong conclusions and implement secondary design decisions aimed at improving measured frequency response, rather than at making a better sounding, more accurate loudspeaker. What we lose by analyzing a loudspeaker strictly in the frequency domain becomes all the more apparent when we understand that a frequency response curve is a simple two-variable mathematical reduction: amplitude as a function of frequency. In reality, however, every loudspeaker works in the time domain, and does so with a very complex set of instructions. At its input is an electrical signal with varying amplitude – a time-dependent variable. The loudspeaker transforms this electrical signal into magnetic energy and generates motion, moving a diaphragm as a function of time and mechanical driving force, ideally in exact relation to the input signal. The transducer’s diaphragm sets air into motion, and these sound pressure variations reach our ears. It is only in the brain that this time-dependent signal is transformed into the frequency domain; otherwise we would perceive sound in terms of varying time constants, rather than hearing and recognizing certain pitches.
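The brain's time-to-frequency transformation has a direct mathematical analogue in the Fourier transform. A short numpy sketch makes the distinction concrete: in the time domain a tone is nothing but oscillating amplitude, and "440 Hz" only appears once we transform it (the tone, sample rate, and duration here are arbitrary illustrative choices):

```python
import numpy as np

fs = 48_000                               # sample rate, Hz (illustrative)
t = np.arange(fs) / fs                    # one second of time samples
tone = np.sin(2 * np.pi * 440.0 * t)      # in the time domain: just varying amplitude

# Transform to the frequency domain: the pitch becomes visible as a spectral peak.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1 / fs)

assert abs(freqs[np.argmax(spectrum)] - 440.0) < 1.0   # peak lands at 440 Hz
```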

Most current measurement practice takes this time-to-frequency transformation for granted. It is assumed that, since the transformation is commonly understood and intuitive by nature, it is of no significance where in the sequence of events we take it into account. We perform it early in the process, using steady-state constant tones, in order to produce a display that coincides with our common perception. In doing so we inadvertently create other problems: we ignore real issues and make critical design decisions based on incomplete criteria or wrong assumptions.

Different time-related functions of varied sound patterns can all result in the same measured amplitude response at a given frequency. Since only one very specific signal profile in the time domain represents the input correctly, other profiles that contribute to the same amplitude response in fact represent distortion. Ironically, we often assume that when a frequency response measurement is “in the window” we can move on to address other issues of concern, never fully aware of the kinds of flaws we are actually rewarding in the process. This is one very important reason why speakers with nearly identical frequency response can sound quite different from one another.
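This point can be demonstrated in a few lines of numpy (a minimal sketch, not any particular measurement system): a decaying response and its time-reversed mirror behave very differently in time, yet their amplitude responses are identical.

```python
import numpy as np

# An arbitrary decaying "response" and its time-reversed version:
# radically different time-domain behavior, identical amplitude response.
rng = np.random.default_rng(0)
h = rng.standard_normal(64) * np.exp(-np.arange(64) / 8.0)
h_rev = h[::-1]   # time reversal preserves the magnitude spectrum, not the phase

mag = np.abs(np.fft.rfft(h))
mag_rev = np.abs(np.fft.rfft(h_rev))

assert np.allclose(mag, mag_rev)   # the amplitude responses match exactly...
assert not np.allclose(h, h_rev)   # ...yet the time-domain signals differ
```

An amplitude-only measurement cannot distinguish these two signals; only a time-domain (impulse) view can.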

Picture #2. A perfect electrical step signal: an instantaneous rise followed by a continuous level in the new plane.
Picture #3. An ideal speaker response, showing the effect of a band-pass with upper and lower limits. Ideally we would have an infinitely steep rising wave-front followed by a gradual signal decay. Rise-time is inversely proportional to the upper limit of bandwidth, while the duration of the decay is inversely proportional to the lower cut-off frequency.
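The bandwidth/rise-time relationship in Picture #3 can be sketched with scipy; the Butterworth band-pass here is a hypothetical stand-in for a speaker's band limits, and the cutoff frequencies are illustrative, not taken from any real driver:

```python
import numpy as np
from scipy import signal

fs = 96_000  # sample rate, Hz (illustrative)

def bandpass_step(f_lo, f_hi, n=4096):
    """Step response of a Butterworth band-pass used as a crude speaker model."""
    sos = signal.butter(2, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return signal.sosfilt(sos, np.ones(n))

def rise_time(y):
    """Samples taken to climb from 10% to 90% of the initial peak."""
    peak = np.max(y)
    return np.argmax(y >= 0.9 * peak) - np.argmax(y >= 0.1 * peak)

wide = bandpass_step(50, 20_000)    # extended top end: fast rise
narrow = bandpass_step(50, 5_000)   # limited top end: slower rise

# A higher upper cut-off yields a shorter (steeper) rise.
assert rise_time(wide) < rise_time(narrow)
```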

Despite amplitude response measurements that are consistently good, often exemplary, nearly all of today’s two and three way loudspeakers will distort an input signal in the time domain almost beyond recognition. This is confirmed by a close look at their short duration impulse response, and is due to inherent deficiencies in their time domain characteristics. In this type of evaluation we work with either a short duration step or a Dirac impulse. A step signal contains more energy than a Dirac impulse, providing a better signal-to-noise ratio in the display; a step response also demonstrates distortions in the signal more clearly. Both contain the same information, and one can be converted into the other by differentiation and integration.
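The step/impulse equivalence is easy to verify numerically. In discrete time the calculus reduces to a running sum and a first difference; the band-pass below is an arbitrary stand-in for the system under test:

```python
import numpy as np
from scipy import signal

# An arbitrary band-limited system (illustrative cutoffs, not a real speaker).
sos = signal.butter(2, [100, 10_000], btype="bandpass", fs=48_000, output="sos")

impulse = np.zeros(2048); impulse[0] = 1.0
step = np.ones(2048)

h = signal.sosfilt(sos, impulse)   # impulse response
s = signal.sosfilt(sos, step)      # step response

# Integrating the impulse response gives the step response, and
# differentiating the step response recovers the impulse response.
assert np.allclose(np.cumsum(h), s, atol=1e-9)
assert np.allclose(np.diff(s, prepend=0.0), h)
```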

In practical terms, an impulse signal much more closely approximates music than does a sinusoidal waveform. Music is characterized by an infinite succession of attacks and decays of sound patterns that give individual musical instruments their timbre and voices their characteristic sound. It is our belief that measurement of a speaker’s impulse response represents the primary and most fundamental method of analyzing transducer performance. Impulse testing is both repeatable and meaningful; it adds time domain considerations to the analysis, and most importantly it presents a loudspeaker with a signal that very closely resembles real music.

The use of impulse testing immediately reveals a common deficiency in nearly every multi-way speaker. In viewing the leading edge of an impulse as reproduced by a typical three way system, for example, we see stepped delays in its reproduction as it is crossed over from the high frequency driver to the midrange, and then again to the woofer. The high frequency information therefore arrives first, followed by that of the midrange and then the low frequency driver, each delayed successively by several microseconds. What we see are three or more individual responses to a single stimulus.
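The stepped-arrival effect described above can be sketched numerically. The delays and pulse widths below are purely illustrative choices, not measurements of any real system; each driver's contribution is modeled as a band-limited (Gaussian) pulse:

```python
import numpy as np

n = 512
t = np.arange(n)

def arrival(delay, width):
    """A band-limited impulse (Gaussian pulse) arriving after `delay` samples."""
    return np.exp(-0.5 * ((t - delay) / width) ** 2)

# Hypothetical three-way system: tweeter output arrives first, then mid, then woofer.
tweeter = arrival(delay=20, width=2)
mid     = arrival(delay=45, width=8)
woofer  = arrival(delay=90, width=30)
total = tweeter + mid + woofer

# The summed response to a single stimulus peaks three separate times, not once.
peaks = [i for i in range(1, n - 1)
         if total[i] > total[i - 1] and total[i] > total[i + 1] and total[i] > 0.5]
assert len(peaks) == 3
```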

It is understandable, given these examples, that our hearing is constantly being challenged by time domain nonlinearities, additive distortions, and a wholesale reorganization of important cues. As a result the reproduction of music is at best only a rough approximation; merely hinting at what it was that was originally recorded. We ask a lot of our brains in the process, trying to coax recognition and appreciation from what is ultimately a mixture of familiar and unfamiliar cues.

Picture 4
Picture 5
Picture 6

This relates in some measure to the evolutionary development of our hearing mechanism. Originally human hearing was fine-tuned to decipher minuscule temporal sequences, like the sound of a breaking branch. Our lives depended on recognizing what was happening, in what direction, and how far away, to avoid a possible life-threatening attack. Analyzing this important aspect of hearing, we now understand that the human auditory mechanism can perceive time increments on the order of a few hundred-thousandths of a second. In that brief period we recognize the character of a sound (frequency content) as well as its direction and loudness, giving us a cue as to distance. All of this is found in the leading edge of a wave-front. The steep rise of an impulse should be reproduced coherently and without temporal sequencing to recreate the sense of live information. In reality most transducers create a range of different and contradictory cues, making our brains work very hard to decipher the origin of these sounds. Hence the term “listener fatigue.”

To improve the reproduction of fine detail and to enhance spatial information, many designers also opt to raise the overall level of the high frequency driver. The first arrival of unnaturally emphasized high frequencies, which in almost all cases are the delicate harmonics of fundamental tones yet to arrive, stresses many of the processes we as humans use to identify and recognize music. It is our contention that this practice, adopted in the interest of creating a generally acceptable response curve, actually undermines our recognition and appreciation of space and detail as they occur in nature.

Inverting the polarity of one driver relative to the rest of the system – a common practice in conventional crossover design – further complicates and convolutes the situation. This simultaneously creates compression and rarefaction in two adjacent drivers, destroying dynamics in a wave-form and altering the absolute level and frequency content of a sound cluster. The brain is forced to work ever harder at recognizing what is being presented.

From our perspective it is clearly preferable to define driver parameters in the context of the overall system, and to design the system with coherence and time linearity as primary, not secondary, parameters. In such a system all drivers work in phase, and each is given a set of instructions that allows it to participate in the overall recreation of an impulse signal that is intact and coherent. This is the approach taken in the AEC WTL loudspeaker design, whose impulse response replicates the step input as closely as possible. It is also important to note that amplitude response is not at all neglected. It is simply put in perspective from a design standpoint, and its evaluation is done with an understanding of where the time-frequency transformation occurs in the recognition and appreciation of music. The difference this approach makes is demonstrated in the ability to coherently reproduce natural musical timbre over the complete bandwidth, and to retrieve delicate spatial cues, lending a live, natural quality to our perception of what is being reproduced.

As mentioned at the outset of this paper, pure tones composed of constant, repeating sine waves simply do not occur in nature. While they form the basis for many important lab measurements, live acoustic music does not consist of these waveforms. Most importantly, what we hear and recognize as music is made up of a combination of fundamental tones and their distinctive harmonics, blending in time to give character to the sounds that we hear. To reproduce these sounds and to retrieve the delicate qualities that make music recognizable, a transducer must reproduce both the amplitude and the phase content of a signal coherently.

Harmonic structures are both delicate and complex. They are an integral part of sound from the very lowest frequencies to the upper limits of hearing and beyond. The key to deciphering the most varied sounds in nature lies in the delicate arrangement of these harmonics. They differentiate one instrument from another and convey such things as the subliminal message of joy, anger, or fear in the human voice. Their bandwidth is often quite wide, making coherency and integration through a speaker’s crossover essential. A transducer absolutely must have the ability to reproduce an impulse with one voice instead of a chorus. The ESS WTL (Wave-Form Transient Linearity) design accomplishes this and preserves delicate overtone structure in the following ways:

Relative to the listening position, bass, midrange, and high-frequency drivers are precisely positioned to create a virtual point source. This eliminates one of the overriding causes of discontinuity.

Picture 7

The AEC WTL design preserves the delicate relationship between fundamental tones and their harmonics, not only on-axis but off-axis as well. This is largely the result of carefully controlled dispersion, ensuring that reflected sound is as integrated and coherent as the on-axis information. Since most of the sound reaching a listener’s ears in a typical listening room is reflected, this is a very important consideration.

Any discussion of coherency in sound reproduction must include some general appreciation of the importance of linear phase response. This is one of the more difficult parameters to explain – as difficult, perhaps, for the expert to explain as for the layman to understand. The effects of phase distortion are nonetheless very audible, and they affect the coherency of reproduced sound not only in the resolution of fine detail but also in the reproduction of important spatial characteristics.

To illustrate this point, imagine an amusement park mirror that distorts images by elongating or compressing them. While the mirror may greatly change the visual perspective of images being reflected, it does not render them completely unrecognizable. This smearing of spatial characteristics, created at a carnival for fun, corresponds to the time distortion and phase anomalies created in the reproduction of music quite unintentionally. Phase distortion is not readily detectable with conventional amplitude measurements. It does not, after all, represent an additive or subtractive form of distortion, but more one of reorganization. While difficult to measure quantitatively, it is a deadly sin nonetheless; but one whose presence can be detected and confirmed by impulse measurements.
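A first-order all-pass filter makes this concrete: its amplitude response is perfectly flat, so conventional amplitude measurements see nothing, yet it demonstrably reorganizes the signal in time. A minimal scipy sketch (the coefficient value is arbitrary):

```python
import numpy as np
from scipy import signal

# First-order all-pass: H(z) = (a + z^-1) / (1 + a*z^-1), |a| < 1.
a = 0.7
num = [a, 1.0]
den = [1.0, a]

# Amplitude response: unity at every frequency.
w, h = signal.freqz(num, den, worN=512)
assert np.allclose(np.abs(h), 1.0)

# Time domain: a simple pulse is clearly altered by the filter.
pulse = np.zeros(256); pulse[10] = 1.0
out = signal.lfilter(num, den, pulse)
assert not np.allclose(out, pulse)
```

Pure phase distortion is invisible to an amplitude sweep, but an impulse measurement reveals it immediately.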

Since any crossover creates its own set of problems, many of them related to phase linearity, the fewer points of transition in a loudspeaker the better in many respects. This is said, of course, with the understanding that the individual drivers must themselves be predominantly linear over their useable bandwidth. ESS designs, based on such a philosophy, are exemplary in areas of fundamental importance such as time coherence, and significantly outperform multi-way designs built around three and four driver complements in these areas. Linear phase response corresponds to constant time coherence. This has been thoroughly researched in amplifier design by analyzing group delay through an amplification chain. We have seen the theoretical postulation of these precepts; in Pictures #8 and #9 it is possible to see their actual implementation in ESS loudspeakers.
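The correspondence between linear phase and constant group delay can be checked directly. The symmetric FIR band-pass below is an illustrative example, not an ESS design: because its coefficients are symmetric, every frequency in the pass-band is delayed by exactly the same amount.

```python
import numpy as np
from scipy import signal

# A symmetric (linear-phase) FIR band-pass; cutoffs are in fractions of Nyquist.
ntaps = 101
taps = signal.firwin(ntaps, [0.05, 0.45], pass_zero=False)

# Group delay across frequency, in samples.
w, gd = signal.group_delay((taps, 1.0), w=512)

# Within the pass-band the group delay is constant: (ntaps - 1) / 2 = 50 samples.
band = (w > 0.1 * np.pi) & (w < 0.4 * np.pi)
assert np.allclose(gd[band], 50.0, atol=1e-3)
```

A non-symmetric filter of the same amplitude response would show group delay varying with frequency – different parts of the spectrum arriving at different times.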

In analyzing real-world transducer performance in the frequency domain, what we often see in response to a single stimulus (test signal) is an assembly of separate, time-staggered signals. Looking at the spectral nature of the individual wave-forms in the reproduced sound, we see much more than the original stimulus – 3000 Hz – as documented in the following display.

The analysis of these complex wave-forms is aided tremendously by a new kind of impulse measurement, one that displays the steep rising wave-front and provides an enhanced view of decay patterns from new and different perspectives – in addition to the more traditional “waterfall” display. We can now analyze complex structures with regard to the time factor, and see how very different wave-form mixes can produce the same amplitude response – illustrating a fundamental precept of this paper: even distortion products can contribute to a flat amplitude response.

As we look at the response of a transducer between 500 Hz and 5 kHz, we can see the constantly changing character of reproduced sound in conventional speakers. We also see conclusive evidence that out-of-band problems do in fact modulate into the audible band, contaminating the reproduced spectrum with harmonically unrelated additive distortion. Most of us can relate to this phenomenon from being in the presence of an out-of-tune piano: while we recognize that it is a piano we are hearing, we are easily drawn into a fatiguing listening exercise in the effort to understand what is going on acoustically, often hearing discordant tones within the mix created by the interaction of harmonically unrelated notes.

Employing this new Dynamic Measurement System we can now add time considerations in a meaningful and repeatable way to the evaluation of audio transducers. Building on, but greatly enhancing, impulse measurement techniques of the past we can now analyze in consummate detail and with the speed of modern computers what is real and what is not. We clearly see the difference between source (test stimulus) and response and can develop a new level of understanding of what it is that lends a sense of live and natural to reproduced sound.

Coherency can now be seen and evaluated in a new and meaningful way. It is our contention that impulse testing, bringing time into the matrix of fundamental parameters in transducer performance, is integral to understanding the nature of reproduced sound. We believe that we have instrumentation that underscores the delicate balance between time and frequency in the creation and reproduction of live acoustic music; and that with our new Dynamic Measurement System we have turned the corner on the design of a whole new generation of audio transducers. The ESS WTL design, our standard bearer, is the first product of its kind to be wholly developed with this new level of design sophistication.

Impulse Response in Audio Transducers

Authored by AEC Engineering

The structure of what we perceive as speech, music, and sound is determined by the nature of constantly changing sound-pressure waves. With some degree of certainty we can say that the only constant in this whole process is change itself.

Sound pressure clusters vary with such speed and level that the chance of the same waveform repeating itself is virtually nonexistent. It follows, then, that the challenge facing components in a sound reproduction chain is to convey these ever changing patterns in both electrical and acoustic form. Steady-state signals characterized by a constant wave-form do not for all practical purposes occur in nature. Picture #1 illustrates the structure of a single piano tone.

Audio transducers – loudspeakers – understandably face the greatest challenge in the sound reproduction sequence. Confronted by signals of extreme complexity their task is to convert these wave-forms from electrical to mechanical energy, then to an acoustical analogue of the original signal – a difficult task indeed. To precisely transform and recreate abruptly changing wave-forms and patterns one after the other is all but impossible for today’s loudspeakers. Even the simplest of sound patterns cannot be reproduced coherently over most of the audible range.

Why is it that two loudspeakers with nearly identical frequency response measurements very often sound quite different one from the other?

There is an underlying assumption, of course, to such a question; e.g., that frequency response is the most important determiner of sound; and that an examination of frequency response is fundamental to understanding the acoustical properties of a loudspeaker. While some claim to be able to make meaningful determinations about acoustic performance by analyzing a loudspeaker’s frequency response, by adding time – the fundamental element in change – as a third axis in the matrix of measurements it becomes clear there are important patterns only hinted at in the frequency response alone. It is the conclusion of this paper that although frequency response is one of several indicators that can be meaningful in an overall evaluation of loudspeaker performance, it is our contention that alone – or as a primary tool – it is no more than a finger pointing at the moon.

Moreover, we can be greatly misled by frequency response measurements. We often draw the wrong conclusions, and implement secondary design decisions aimed at improving measured frequency response, rather than making a better sounding more accurate loudspeaker. What we lose by analyzing a loudspeaker strictly in the frequency domain becomes all the more apparent when we understand that a frequency response curve is a simple two function mathematical calculation. In reality, however, every loudspeaker works in the time domain and does so with a very complex set of instructions. At its input is an electrical signal with varying amplitude – a time dependant variable. The loudspeaker then transforms this electrical impulse into magnetic energy and generates motion, moving a diaphragm as a function of time and mechanical driving force, ideally in exact relation to the input signal. The transducer’s diaphragm sets air into motion and these sound pressure variations reach our ear. It is only in the brain that this time dependant signal is transformed into the frequency domain. Otherwise we would perceive sound in terms of varying time constants, rather than hearing and recognizing certain pitches.

Most current measurements are such that this time/frequency transformation is taken for granted. It is assumed that since this transformation is commonly understood and intuitive by nature that it is of no significance where in the sequence of events we take it into account. We do so early in the process in order to produce a display that coincides with our common perception. And we do so by using steady-state constant tones. In this whole process we inadvertently create other problems; we ignore real issues, and make critical design decisions based on incomplete criteria or wrong assumptions.

Different time related functions of varied sound patterns can all result in the same measured amplitude response at a given frequency. Naturally, since only one very specific signal profile in the time domain represents the input correctly, other profiles that contribute to the same amplitude response actually, in fact, represent distortion. In the most ironic of ways, we often assume that when a frequency response measurement is “in the window,” we can move on to address other issues of concern, never being fully aware of the kinds of flaws we are actually rewarding in the process. This is one very important reason why speakers with nearly identical frequency response will most likely sound quite different one from the other.

Picture #2. A perfect electrical step signal. Exemplified by an infinite rise-time followed by a continuous level in the new plane.
Picture #3. An ideal speaker response. Showing the effect of a band-pass with upper and lower limits. Ideally we would have an infinitely steep, rising wave-front followed by a gradual signal decay. Rise-time is proportional to the upper limit of bandwidth, while the duration of the decay is proportional to its lower limit and directly related to its cut-off frequency.

Despite amplitude response measurements that are consistently good, often exemplary, nearly all of today’s two and three way loudspeakers will distort an input signal in the time domain almost beyond recognition. This is confirmed by a close look at their short duration impulse response, and is due to inherent deficiencies in their time domain characteristics. In this type of evaluation we work with either a short duration step or a Dirac impulse. A step signal contains more energy than a Dirac impulse, providing a better signal to noise display. A step response also more clearly demonstrates distortions in the signal. Both have the same content and can be converted one to the other by means of differential and integral calculus.

In practical terms, an impulse signal much more closely approximates music than does a sinusoidal waveform. Music is characterized by an infinite succession of attacks and decays of sound patterns that give individual musical instruments their timbre and voices their characteristic sound. It is our belief that measurement of a speaker’s impulse response represents the primary and most fundamental method of analyzing transducer performance. Impulse testing is both repeatable and meaningful; it adds time domain considerations to the analysis, and most importantly it presents a loudspeaker with a signal that very closely resembles real music.

The use of impulse testing immediately reveals a common deficiency in nearly every multi-way speaker. In viewing the leading edge of an impulse as reproduced by a typical three way system, for example, we see stepped delays in its reproduction as it is crossed over from the high frequency driver to the midrange, and then again to the woofer. The high frequency information therefore arrives first, followed by that of the midrange and then the low frequency driver, each delayed successively by several microseconds. What we see are three or more individual responses to a single stimulus.

It is understandable, given these examples, that our hearing is constantly being challenged by time domain nonlinearities, additive distortions, and a wholesale reorganization of important cues. As a result the reproduction of music is at best only a rough approximation; merely hinting at what it was that was originally recorded. We ask a lot of our brains in the process, trying to coax recognition and appreciation from what is ultimately a mixture of familiar and unfamiliar cues.

Picture 4
Picture 6
Picture 5

This relates in some measure to the evolutionary development of our hearing mechanism. Originally human hearing was fine-tuned to decipher miniscule temporal sequences, like the sound of a breaking branch. Our life depended on recognizing what was happening, and in what direction and how far away, to avoid a possible life-threatening attack. Analyzing this important aspect of hearing, we now understand that the human auditory mechanism can perceive time increments in the order of a few 100,000th of a second. In that brief period we recognize the character of a sound (frequency content) as well as direction and loudness, giving us a clue as to distance. All of this is found in the leading edge of a wave-front. The steep rise of an impulse should be reproduced coherently and without temporal sequencing to recreate the sense of live information. In reality most transducers create a range of different and contradictory clues, making our brains work very hard to decipher the origin of these sounds. Hence the term “listener fatigue.”

To improve the reproduction of fine detail and to enhance spatial information many designers also opt to raise the overall level of the high frequency driver. The first arrival of unnaturally emphasized high frequencies, which in almost all cases are the delicate harmonics of fundamental tones yet to arrive, stresses many of the processes we as humans use to identify and recognize music. It is our contention that in the interest of creating a generally acceptable response curve, this actually undermines our recognition and appreciation of space and detail as they occur in nature.

Inverting one driver in relation to the overall system as discussed briefly above further complicates and convolutes the situation. This process simultaneously creates compression and rarefaction in two adjacent drivers, destroying dynamics in a wave-form, and altering the absolute level and frequency content of a sound cluster. This forces the brain to work increasingly hard at recognizing what it is that is being presented.

From our perspective it is clearly preferable to define driver parameters in the context of the overall system; and to design the system with coherence and time linearity as primary, not secondary, parameters. In such a system all drivers would work in phase and each would be given a set of instructions that would allow it to participate in the overall recreation of an impulse signal that is in tact and coherent. This is the approach taken in the AEC WTL loudspeaker design. Impulse response in this loudspeaker replicates as closely as possible the step input. It is also important to note that amplitude response is not at all neglected. It is simply put in perspective from a design standpoint; and its evaluation done with an understanding of where the time-frequency transformation occurs in the recognition and appreciation of music. The difference this approach makes is demonstrated in the ability to coherently reproduce natural musical timbre over a complete bandwidth, and to retrieve delicate spatial cues, lending a sense of live and natural to our perception of what is being reproduced.

As mentioned at the outset of this paper, pure tones comprised of constant and repeating sine waves simply do not occur in nature. While they form the basis for many important lab measurements, live acoustic music does not consist of these waveforms. Most importantly, what we hear and recognize as music is made up of a combination of fundamental tones and their distinctive harmonics that blend in time to give character to the sounds that we hear. To reproduce these sounds and to retrieve the delicate qualities that make music recognizable it is important that a transducer reproduces both the amplitude and a signal’s phase content coherently.

Harmonic structures are both delicate and complex. They are an integral part of sound from the very lowest frequencies to the upper limits of hearing and beyond. The key to deciphering most varied sounds in nature lies in the delicate arrangement of these harmonics. They differentiate one instrument from another and convey such things as the subliminal message of joy, anger or fear in the human voice. Their bandwidth is often quite wide, making coherency and integration through a speaker’s crossover essential. A transducer absolutely must have the ability to reproduce an impulse with one voice instead of a chorus. The ESS WTL (Wave-Form Transient Linearity) design accomplishes this and preserves delicate overtone structure in the following ways:

Relative to the listening position, bass, midrange, and high frequency drivers are precisely positioned to create a virtual point source. This eliminates one of the overriding causes of discontinuity.

    

Picture 7 The AEC WTL design preserves the delicate relationship between fundamental tones and their harmonics, not only on-axis but also off-axis as well. This is largely the result of carefully controlled dispersion, ensuring that reflected sound will be as integrated and coherent as the on axis information. Since most sounds reaching a listener’s ears in a typical listening room are reflective in nature this is a very important consideration.

Any discussion of coherency in sound reproduction must invariably include some general appreciation of the importance of linear phase response. This is one of the more difficult parameters to explain – as much for the expert to explain perhaps as the layman to understand. However, the effects of phase distortion are very audible, and they affect the coherency of reproduced sound not only in terms of the resolution of fine detail, but also in the reproduction of important spatial characteristics.

To illustrate this point, imagine an amusement park mirror that distorts images by elongating or compressing them. While the mirror may greatly change the visual perspective of images being reflected, it does not render them completely unrecognizable. This smearing of spatial characteristics, created at a carnival for fun, corresponds to the time distortion and phase anomalies created in the reproduction of music quite unintentionally. Phase distortion is not readily detectable with conventional amplitude measurements. It does not, after all, represent an additive or subtractive form of distortion, but more one of reorganization. While difficult to measure quantitatively, it is a deadly sin nonetheless; but one whose presence can be detected and confirmed by impulse measurements.

Since any crossover creates its own set of problems, many of them related to phase linearity, the fewer points of transition in a loudspeaker the better in many respects. This, of course, assumes that the individual drivers are themselves predominantly linear over their usable bandwidth. ESS designs, built on this philosophy, are exemplary in areas of fundamental importance such as time coherence, and in these areas significantly outperform multi-way designs built around three- and four-driver complements. Linear phase response corresponds to constant group delay, meaning that all frequencies arrive with the same time offset; this relationship has been thoroughly researched in amplifier design by analyzing group delay through an amplification chain. We have seen the theoretical postulation of these precepts; in Pictures #8 and #9 it is possible to see their actual implementation in ESS loudspeakers.
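The link between linear phase and constant group delay can be checked numerically. The sketch below is a generic illustration, not the paper's measurement method: it estimates group delay as the negative derivative of phase, and contrasts a pure delay (linear phase, constant group delay) with a first-order all-pass filter, which has a perfectly flat magnitude response yet a frequency-dependent delay. The pole coefficient and test frequencies are arbitrary choices.

```python
import cmath
import math

def group_delay(h, w, dw=1e-6):
    """Numerical group delay tau(w) = -d(phase)/dw of frequency response h."""
    d = cmath.phase(h(w + dw)) - cmath.phase(h(w - dw))
    # unwrap a possible jump across the +/- pi branch cut
    while d > math.pi:
        d -= 2*math.pi
    while d < -math.pi:
        d += 2*math.pi
    return -d / (2*dw)

# Pure delay of D samples: linear phase, hence constant group delay.
D = 3
delay = lambda w: cmath.exp(-1j*w*D)

# First-order all-pass (illustrative pole at a=0.6): |H(w)| = 1 at every
# frequency, but the phase is nonlinear, so the delay varies with frequency.
a = 0.6
allpass = lambda w: (cmath.exp(-1j*w) - a) / (1 - a*cmath.exp(-1j*w))

for w in (0.5, 1.0, 2.0):
    print(w, group_delay(delay, w), abs(allpass(w)), group_delay(allpass, w))
```

An amplitude sweep of the all-pass would read as perfectly "flat", while its group delay reveals that low frequencies lag high ones: exactly the kind of time smearing a crossover can introduce without leaving a trace in the frequency-response curve.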

In analyzing real-world transducer performance in the frequency domain, what we often see in response to a single stimulus (test signal) is an assembly of separate, time-interrupted signals. As we look at the spectral nature of the individual wave-forms in the reproduced sound, we see much more than the original 3000 Hz stimulus, as documented in the following display.

The analysis of these complex wave-forms is aided tremendously by a new kind of impulse measurement, one that displays the steep rising wave-front and provides an enhanced view of decay patterns from new and different perspectives, in addition to the more traditional “waterfall” display. We can now analyze complex structures with regard to the time factor and can see how very different wave-form mixes can produce the same amplitude response, illustrating a fundamental precept of this paper: even distortion products can contribute to a flat amplitude response.

As we look at the response of a transducer between 500 Hz and 5 kHz, we can see the constantly changing character of reproduced sound in conventional speakers. We also see conclusive evidence that out-of-band problems do in fact modulate into the audible band, contaminating the reproduced spectrum with harmonically unrelated additive distortion. Most of us can relate to this phenomenon from being in the presence of an out-of-tune piano: while we recognize that it is a piano we are hearing, we are easily drawn into a fatiguing listening exercise in the effort to understand what is going on acoustically, often hearing discordant tones within the mix of sounds created by the interaction of harmonically unrelated notes.
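How a nonlinearity creates harmonically unrelated additive distortion can be shown with a minimal sketch. All numbers here are illustrative, and the quadratic transfer curve is a stand-in for real driver misbehavior, not ESS data: two clean tones passed through a mildly nonlinear system acquire sum and difference components (intermodulation products) that are not harmonics of either input.

```python
import math

N = 2048          # samples per analysis window (arbitrary)
k1, k2 = 300, 470  # two input tones, as DFT bin numbers, deliberately unrelated

# Two-tone test signal.
x = [math.sin(2*math.pi*k1*n/N) + math.sin(2*math.pi*k2*n/N) for n in range(N)]

# A mildly nonlinear "driver": y = x + 0.1*x^2 (hypothetical transfer curve).
y = [v + 0.1*v*v for v in x]

def mag(sig, k):
    """Magnitude of DFT bin k, scaled so a unit sinusoid reads 1.0."""
    re = sum(sig[n]*math.cos(2*math.pi*k*n/N) for n in range(N))
    im = sum(sig[n]*math.sin(2*math.pi*k*n/N) for n in range(N))
    return math.hypot(re, im) * 2 / N

# The input has energy only at k1 and k2; the output additionally has
# energy at k2-k1 and k2+k1, neither of which is a harmonic of k1 or k2.
for k in (k1, k2, k2 - k1, k2 + k1):
    print(k, round(mag(x, k), 4), round(mag(y, k), 4))
```

The difference tone (bin k2-k1) is the discordant, "out-of-tune piano" component: it sits inside the band, is added to the spectrum, and bears no harmonic relationship to either original tone.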

Employing this new Dynamic Measurement System, we can now add time considerations in a meaningful and repeatable way to the evaluation of audio transducers. Building on, but greatly enhancing, the impulse measurement techniques of the past, we can now analyze in consummate detail, and with the speed of modern computers, what is real and what is not. We clearly see the difference between source (test stimulus) and response, and can develop a new level of understanding of what lends a sense of liveness and naturalness to reproduced sound.

Coherency can now be seen and evaluated in a new and meaningful way. It is our contention that impulse testing, bringing time into the matrix of fundamental parameters in transducer performance, is integral to understanding the nature of reproduced sound. We believe that we have instrumentation that underscores the delicate balance between time and frequency in the creation and reproduction of live acoustic music; and that with our new Dynamic Measurement System we have turned the corner on the design of a whole new generation of audio transducers. The ESS WTL design, our standard bearer, is the first product of its kind to be wholly developed with this new level of design sophistication.