Java Programming Notes # 2352
- Preface
- General Background Information
- Preview
- Discussion and Sample Code
- Run the Program
- Summary
- What's Next?
- Complete Program Listings
Preface
DSP and adaptive filtering
With the decrease in cost and the increase in speed of digital devices, Digital Signal Processing (DSP) is showing up in everything from cell phones to hearing aids to rock concerts. Many applications of DSP are static. That is, the characteristics of the digital processor don't change with time or circumstances. However, a particularly interesting branch of DSP is adaptive filtering. This is a situation where the characteristics of the digital processor change with time, circumstances, or both.
Second in a series
This is the second lesson in a series designed to teach you about adaptive filtering in Java.
The first lesson, entitled Adaptive Filtering in Java, Getting Started, introduced you to the topic by showing you how to write a Java program to adaptively design a time-delay convolution filter with a flat amplitude response and a linear phase response using an LMS adaptive algorithm. That was a relatively simple time-adaptive filtering problem for which the correct solution was well known in advance. That made it possible to check the adaptive solution against the known solution.
An adaptive whitening filter
In this lesson, I will show you how to write an adaptive whitening filter program in Java, which is conceptually more difficult than the filter that I explained in the previous lesson. This lesson will also show you how to use the whitening filter to extract wide-band signal from a channel in which the signal is corrupted by one or more components of narrow-band noise.
Viewing tip
You may find it useful to open another copy of this lesson in a separate browser window. That will make it easier for you to scroll back and forth among the different listings and figures while you are reading about them.
Supplementary material
I recommend that you also study the other lessons in my extensive collection of online Java tutorials. You will find those lessons published at Gamelan.com. However, as of the date of this writing, Gamelan doesn't maintain a consolidated index of my Java tutorial lessons, and sometimes they are difficult to locate there. You will find a consolidated index at www.DickBaldwin.com.
General Background Information
Review of DSP concepts
Before getting into the details of the program, I need to prepare you to understand the program by reviewing some digital signal processing (DSP) concepts with you.
Sampled time series, convolution, and frequency spectrum
First there is the matter of the spectrum of a signal as well as the concepts of convolution and sampled time series. In order to understand this program, you will first need to understand the material in the following previously-published lessons:
- 100 Periodic Motion and Sinusoids
- 104 Sampled Time Series
- 108 Averaging Time Series
- 1478 Fun with Java, How and Why Spectral Analysis Works
- 1482 Spectrum Analysis using Java, Sampling Frequency, Folding Frequency, and the FFT Algorithm
- 1483 Spectrum Analysis using Java, Frequency Resolution versus Data Length
- 1484 Spectrum Analysis using Java, Complex Spectrum and Phase Angle
- 1485 Spectrum Analysis using Java, Forward and Inverse Transforms, Filtering in the Frequency Domain
- 1487 Convolution and Frequency Filtering in Java
- 1488 Convolution and Matched Filtering in Java
- 1492 Plotting Large Quantities of Data using Java
Data predictability
The adaptive design of the whitening filter in this lesson is based on the predictability, or lack thereof, of a time series. Predictability is a measure of the degree to which it is possible to use the current sample and a set of previous samples to predict the value of the next sample.
White noise versus a single-frequency sinusoid
The two extremes of predictability are given by white noise on one hand and a single frequency sinusoid on the other.
(Recall that insofar as sampled time series are concerned, white noise is represented by a time series that is composed of equal contributions of all frequencies in the spectrum between zero and the Nyquist folding frequency, which is one-half the sampling frequency.)
Generating white noise
The easiest way to generate sampled white noise is to take the values for the samples from a random number generator. If you take a sufficiently long series of such values and perform a spectral analysis on that time series, you will find that as the length of the series approaches infinity, the spectrum approaches the ideal case of an equal contribution of energy at all frequencies.
(If that doesn't happen, then the values produced by your random number generator aren't truly random.)
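As a concrete illustration, here is a minimal Java sketch of generating sampled white noise from a random number generator, as described above. The class name and seed value are my own choices for illustration, not part of the program discussed in this lesson:

```java
import java.util.Random;

public class WhiteNoiseDemo {
    // Generate `length` samples of uniformly distributed white noise
    // in the range -0.5 to +0.5, as described in the text.
    public static double[] generate(int length, long seed) {
        Random rng = new Random(seed);
        double[] samples = new double[length];
        for (int i = 0; i < length; i++) {
            samples[i] = rng.nextDouble() - 0.5; // uniform on [-0.5, 0.5)
        }
        return samples;
    }

    public static void main(String[] args) {
        double[] noise = generate(16, 12345L);
        for (double s : noise) {
            System.out.printf("%8.4f%n", s);
        }
    }
}
```

A spectral analysis of a sufficiently long series produced this way should approach a flat spectrum, as discussed above.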
Random values are uncorrelated
If the series of values produced by the random number generator is truly random, then the value of each sample is totally uncorrelated with all previous values. If there is no correlation between successive values, then it is not possible to successfully predict the next value (except through pure chance) based on a knowledge of some subset or all of the previous values.
(For example, given a true coin and given the outcome of any number of previous tosses, it is not possible to predict the next toss with a probability of success greater than one chance in two. In other words, knowing the outcome of many previous tosses doesn't improve your likelihood of correctly predicting the next toss to better than one chance in two.)
Therefore, if white noise is equivalent to a series of values produced by a random number generator, it is not possible to predict the value of a white noise sample using any number of previous samples.
A sinusoid is predictable
On the other hand, a pure single-frequency sinusoid is completely deterministic. There is nothing random about it. Given a small number of successive values from a pure sinusoid, it is easy to design a convolution filter that will produce a perfect prediction of the next value.
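To make the predictability of a sinusoid concrete: a pure sinusoid sin(w*n) satisfies the exact two-term recurrence x[n+1] = 2*cos(w)*x[n] - x[n-1], so a prediction filter with only two coefficients predicts it perfectly. The following Java sketch is my own illustration, not code from this lesson's program:

```java
public class SinusoidPredictor {
    // A pure sinusoid sin(w*n) obeys the exact recurrence
    //   x[n+1] = 2*cos(w)*x[n] - x[n-1]
    // so a two-coefficient prediction filter predicts it perfectly.
    public static double predictNext(double prev, double current, double w) {
        return 2.0 * Math.cos(w) * current - prev;
    }

    public static void main(String[] args) {
        double w = 2.0 * Math.PI / 32.0; // 32 samples per cycle
        for (int n = 1; n < 10; n++) {
            double predicted =
                predictNext(Math.sin(w * (n - 1)), Math.sin(w * n), w);
            double actual = Math.sin(w * (n + 1));
            System.out.printf("n=%d predicted=%9.6f actual=%9.6f%n",
                              n, predicted, actual);
        }
    }
}
```

The predicted and actual values agree to within floating-point precision, which is the extreme opposite of the white-noise case.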
Predictability is inversely related to bandwidth
In the real world, signals and noise are neither pure sinusoids nor completely random. However, the narrower the bandwidth of a time series, the easier it is to predict the next value given a set of previous values. Similarly, the wider the bandwidth of a time series, the more difficult it is to predict the next value given a set of previous values. The program in this lesson will take those facts into account to adaptively design a convolution filter that will extract wide-band signals that have been corrupted by additive narrow-band noise.
Why would we want to do this?
This is not an unusual circumstance. Wide-band signals corrupted by narrow-band noise can occur in a variety of real-world situations. Some of the most common are situations in which wide-band signals are corrupted by additive reverberation noise. This can occur in a theatre, for example, where specific audio frequencies tend to reverberate due to the architecture. Another common example is an audio system that is corrupted by 60-cycle hum.
Reflection seismology
One of the earliest applications of digital whitening filters (although not necessarily adaptive) took place in the industry that searches for underground petroleum deposits using reflection seismology.
In reflection seismology, a burst of energy is "shot" into the earth where it is reflected back to the surface by the different layers in the earth. The reflected energy that arrives back at the surface is measured by sensors on the surface. The two-way travel time of the energy to and from each layer is different. Thus, the reflections from the shallow layers arrive back at the surface before the reflections from the deeper layers. The output from each sensor (or possibly each group of sensors added together) is digitized and treated as a sampled time series.
Repeat the process many times
This process is repeated over and over moving along a straight line on the surface of the earth. Then the sampled time series are plotted on the same display with equal spacing between the "traces" as they are often called. Each trace represents a point on the surface of the earth, and the peaks and valleys in the time series represent reflections from the various layers in the earth below that point.
Orient the display
If this display is then oriented such that the zero time reference is at the top of the display and time increases going down the display, the peaks and valleys on the individual traces can be correlated by eye to trace out the layering in the earth. Examples of such displays are shown in Figure 2 at the following URL:
http://sepwww.stanford.edu/sep/prof/iei/mltp/paper_html/node4.html
Each of the panels in Figure 2 at the above URL consists of hundreds of seismic traces with time going down the page. To the trained eye, the layering in the earth is evident in those images.
Initially used on shore
Reflection seismology was first used to search for underground petroleum deposits underneath the land masses on the earth. In this case, the shot of energy often consisted of a small explosion with the explosive material being tamped into a shallow borehole in the earth. The sensors for each different shot point were often placed on the surface of the earth in a line.
Moving offshore
Around the middle of the twentieth century, this technique was moved offshore to those portions of the earth covered with shallow water along the continental shelves. The purpose was to find underground petroleum deposits under these shallow water areas. In this case, the sensors were often trailed along behind the boat on a cable that was slightly submerged. The shots consisted of a variety of acoustic energy sources such as small explosions, or the release of a burst of air into the water from a high-pressure pneumatic device.
Reverberation
A special new problem was encountered with the transition to offshore exploration. When the shot was fired in an attempt to inject energy into the earth, a large percentage of the energy became trapped in the water layer and continued to bounce back and forth between the surface of the water and the surface of the earth below the water. This is a form of narrow-band reverberation.
The level of the reverberation energy was greater than the level of the reflections from the deep layering of the earth. Thus, the reverberation energy appeared as narrow-band reverberation noise in the output from the sensors, and the reflection energy of interest appeared as wide-band signals. The reverberation energy tended to mask the reflections from the different surfaces in the earth making it difficult to interpret the results.
Mathematical solutions
Different mathematical techniques (usually involving matrix inversions) were used to design convolution filters that could be used to filter out the narrow-band noise and to make the wide-band signals visible in the displays. These filters were called whitening filters, and the overall process was often referred to as deconvolution.
If you are interested in learning more about the reverberation problem and deconvolution in exploration seismology, visit this site or go to Google and search for the keywords seismic and deconvolution.
An adaptive solution
The adaptive algorithm that I will present in this lesson is an adaptive approach to the matrix inversion solutions that were frequently used to solve this reverberation problem.
The algorithm is also appropriate for use in a variety of other application areas involving wide-band signals corrupted by narrow-band noise.
Before getting into the details of the program, I am going to present and explain some experimental results that were produced using the program.
How does it work?
In the previous lesson, you learned how to use a least mean square (LMS) adaptive algorithm to adjust the individual coefficients in a convolution filter. The setup was such that when the filter was applied to one sampled time series it would attempt to cause the output to look like another sampled time series.
In the scenario presented in the previous lesson, the second sampled time series was simply a time-shifted version of the first time series. As a result, the convolution filter that resulted from the adaptive process was a filter with a flat amplitude response and a linear phase response. When the filter was applied to the first sampled time series, the output was a time-shifted version of that time series that matched the second time series.
We will use that same approach in this lesson, but will apply the approach to a different scenario.
The scenario for this lesson
In this lesson, we will have a sampled time series that consists of the sum of unpredictable wide-band signal and narrow-band (predictable) noise. The objective is to produce a replica of the narrow-band noise and then to subtract it from the original time series consisting of signal plus noise. If successful, this will produce an output consisting mainly of the original wide-band signal.
Will predict the next sample in the series
We will set the adaptive algorithm up so that it uses the current sample plus a specified number of history samples to develop a convolution filter that is capable of predicting the value of the next sample.
Because the narrow-band noise is largely predictable and the wide-band signal is largely unpredictable, the filter coefficients will adjust themselves to make a good prediction of the narrow-band noise. When we apply this convolution filter to the time series consisting of signal plus noise, the output will be an estimate of the waveform of the narrow-band noise. We will then subtract that waveform from the time series consisting of signal plus noise, leaving an estimate of the wide-band signal.
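The predict-compare-update cycle just described can be sketched as a single LMS iteration. This is my own minimal illustration of the technique, not the actual code from the program in this lesson; the names coefficients, history, and feedbackGain are assumptions based on the discussion:

```java
public class LmsPredictorSketch {
    // One LMS iteration: predict the next sample from the filter's
    // history window, compare against the actual next sample, and
    // nudge each coefficient in proportion to the error.
    // Returns the prediction error for this iteration.
    public static double adaptOnce(double[] coefficients,
                                   double[] history,   // current + previous samples
                                   double target,      // actual next sample
                                   double feedbackGain) {
        // Apply the prediction filter (a dot product with the history).
        double prediction = 0.0;
        for (int i = 0; i < coefficients.length; i++) {
            prediction += coefficients[i] * history[i];
        }
        double error = target - prediction;
        // LMS update: move each coefficient in the direction that
        // reduces the squared prediction error.
        for (int i = 0; i < coefficients.length; i++) {
            coefficients[i] += feedbackGain * error * history[i];
        }
        return error;
    }

    public static void main(String[] args) {
        // Adaptively learn to predict a pure (i.e. narrow-band) sinusoid.
        int filterLength = 8;
        double[] coefficients = new double[filterLength]; // starts at zero
        double w = 2.0 * Math.PI / 20.0;
        double error = 0.0;
        for (int n = filterLength; n < 5000; n++) {
            double[] history = new double[filterLength];
            for (int i = 0; i < filterLength; i++) {
                history[i] = Math.sin(w * (n - i));
            }
            error = adaptOnce(coefficients, history,
                              Math.sin(w * (n + 1)), 0.01);
        }
        System.out.printf("final prediction error: %.6f%n", error);
    }
}
```

Because the sinusoid is predictable, the error shrinks toward zero as the coefficients converge; with white noise in the input, the error for the noise component would not shrink, which is exactly the behavior the whitening filter exploits.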
The quality of the results
The quality of the estimate of the wide-band signal will depend on a variety of factors including but not limited to:
- The number of narrow-band noise components that are added to the signal.
- The signal-to-noise ratio.
- The number of coefficients in the convolution filter.
- The feedback gain factor.
- The number of iterations allowed for the adaptive process to converge to a solution.
Some experiments
Before getting into the details regarding the program code, we will perform some experiments where we will vary the factors in the above list and observe the results.
First, however, I want to discuss the difference between a prediction filter and a whitening filter, and to introduce you to the graphic output produced by the program.
The whitening process
In the above discussion, I explained that we will develop a convolution filter that can be applied to a sampled time series consisting of signal plus noise. The filter uses the current sample plus a specified number of historical samples to produce an output value that is an estimate of the value of the next sample.
I also explained that in order to separate the signal from the noise, we will subtract the estimate of the next sample from the actual value of the next sample. The combined process of applying the prediction filter and performing the subtraction can be thought of as a whitening process.
The whitening filter
I hope that by now you are sufficiently familiar with the convolution process that you will recognize that we can combine these two steps simply by concatenating a coefficient value of -1 onto the end of the prediction filter and applying this filter to the sampled time series consisting of signal plus noise.
I will refer to the filter that is created by concatenating a coefficient with a value of -1 onto the prediction filter as the whitening filter. I will show you an example of a whitening filter shortly.
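In code, constructing the whitening filter from the prediction filter is a simple concatenation. The following Java sketch is my own illustration, not the actual code from this lesson's program; in particular, the alignment convention in applyAt (the -1 lining up with the sample being predicted) is an assumption based on the discussion above:

```java
public class WhiteningFilterSketch {
    // Build a whitening filter by concatenating a coefficient of -1
    // onto the end of the prediction filter, as described in the text.
    public static double[] buildWhiteningFilter(double[] predictionCoefficients) {
        double[] whitening = new double[predictionCoefficients.length + 1];
        System.arraycopy(predictionCoefficients, 0, whitening, 0,
                         predictionCoefficients.length);
        whitening[whitening.length - 1] = -1.0; // subtracts the actual sample
        return whitening;
    }

    // Apply the whitening filter at one position: the final -1 slot
    // lines up with the sample being predicted, so the result is
    // (prediction minus actual), i.e. the negated prediction error.
    public static double applyAt(double[] filter, double[] series, int n) {
        double sum = 0.0;
        for (int i = 0; i < filter.length; i++) {
            sum += filter[i] * series[n + i];
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] prediction = {0.2, -0.1, 0.4};
        double[] whitening = buildWhiteningFilter(prediction);
        System.out.println(java.util.Arrays.toString(whitening));
    }
}
```

With an all-zero prediction filter, the whitening filter simply negates the input, which matches the top trace in Figure 1 looking like a (shifted, inverted) copy of the signal-plus-noise trace at the start of the run.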
The time-series output
This program uses a class named PlotALot07 to display various sampled time series involved in the adaptive process.
(In fact, much of the code in this program involves displaying various results for explanation purposes having nothing to do with the actual adaptive process.)
PlotALot07
An object of the PlotALot07 class produces multiple pages of plotted data with multiple traces or time series on each page. Figure 1 shows an example of one of the pages produced by this program.
Figure 1
Each page displays six different sampled time series plotted horizontally with time increasing from left to right. (At this point, I will start referring to the sampled time series as traces.)
Figure 1 shows the page produced by the program at the beginning of an adaptive run for a specific set of parameters.
The output from the whitening filter
The black trace at the top of Figure 1 shows the output from the whitening filter. Ideally this trace contains the wide-band signal with the narrow-band noise having been removed. However, in Figure 1, the top trace is still significantly corrupted by the narrow-band noise.
Figure 2 shows the graphic output produced by the same run after approximately 500 adaptive iterations. At this point, the narrow-band noise has been largely removed by the application of the whitening filter leaving only the wide-band signal in the top trace in Figure 2.
Figure 2
The wide-band signal
The second (red) trace in Figure 1 and Figure 2 shows the raw wide-band signal prior to adding the narrow-band noise. This wide-band signal consists of samples taken from a random number generator. Therefore, this is a white signal containing equal contributions of all frequency components between zero and the Nyquist folding frequency.
Ideally, the top trace should look exactly like the second trace once the narrow-band noise has been removed. This is pretty much the case after 500 adaptive iterations in Figure 2.
The narrow-band noise
The third (blue) trace in Figure 1 and Figure 2 shows the narrow-band noise that was added to purposely corrupt the wide-band signal. For the case shown in Figure 1 and Figure 2, the narrow-band noise consisted of a single sinusoid with a peak-to-peak amplitude roughly twice the peak-to-peak amplitude of the wide-band signal.
The wide-band signal plus the narrow-band noise
The fourth (green) trace in Figure 1 and Figure 2 shows the sum of the wide-band signal and the narrow-band noise. This is the time series that is processed by the whitening filter to produce the output shown in the top trace.
You might note that at the beginning of the adaptive run in Figure 1, the output of the whitening filter in the top trace is very similar to the fourth trace except for a time shift. However, by the end of 500 adaptive iterations, the output from the whitening filter bears little resemblance to the fourth trace, but instead looks much more like the second trace, which is pure signal.
Output from the prediction filter
The fifth (violet) trace is the output produced by applying the prediction filter to the fourth trace, which consists of the sum of signal and noise. At the beginning of the adaptive process in Figure 1, the output from the prediction filter is essentially zero for all output values. (This is because all of the coefficients in the prediction filter were initialized to a value of zero.) However, by the end of 500 adaptive iterations, the output from the prediction filter in the fifth trace is a very good replica of the narrow-band noise in the third trace. Thus, subtracting the prediction filter output from the input that consists of the sum of signal and noise leaves a good estimate of the signal.
The adaptive target
The sixth trace at the bottom is the target time series that is used to control the adaptive process.
(I explained the use of an adaptive target in the previous lesson.)
This trace displays the next sample beyond the samples that are processed by the prediction filter during each adaptive iteration. This trace is essentially the signal plus noise with a time shift as you can see by comparing it to the fourth (green) trace in Figure 1 and Figure 2. The prediction filter attempts to predict the value of this trace during each iteration and the adaptive process is designed to improve the ability of the prediction filter to perform that prediction in a high quality fashion.
The impulse response and the frequency response
As another approach to explaining how adaptive whitening works, Figure 3 shows the impulse response and the frequency response of the whitening filter at the beginning of the run, and at the end of every 100 iterations of the iterative adaptive process.
The impulse responses of the whitening filters at those points in time are shown in the panel on the left of Figure 3. The frequency response of each of the impulse responses is shown in the panel on the right of Figure 3.
Figure 3
The impulse response of the whitening filter
First consider the impulse response of the whitening filter. The top trace in the left panel shows the impulse response at the beginning of the run before the adaptive process begins. Each of the traces below that one shows the impulse response at the end of each set of 100 adaptive iterations, ending with the impulse response at the end of 500 iterations.
The impulse response of the whitening filter always ends with a coefficient value of -1.
(Recall that the whitening filter is constructed by concatenating a coefficient with a value of -1 onto the end of the prediction filter.)
The impulse response of the prediction filter
Thus, the impulse response of the prediction filter consists of all of the coefficient values to the left of the coefficient having the value of -1. These coefficient values are initialized to values of zero at the beginning of the adaptive process as shown by the top impulse response in Figure 3.
As you can see by examining each impulse response going down the page, the adaptive process causes the prediction filter coefficients to take on different values as the adaptive process proceeds through 500 adaptive iterations.
As you can also see, the coefficient values for the prediction filter have pretty well stabilized by the end of 300 iterations for this set of conditions.
The frequency response
Although the format can be a little confusing, the right panel in Figure 3 shows the amplitude and phase response of each of the whitening filters shown in the left panel. Each of the plots in the right panel shows the frequency response from a frequency of zero on the left, to the Nyquist folding frequency (one-half the sampling frequency) on the right.
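For readers who want to reproduce such plots, the frequency response of a short filter can be computed by evaluating the discrete-time Fourier transform of its impulse response directly at each frequency of interest. This Java sketch is my own illustration, not the code that produced Figure 3:

```java
public class FilterResponseSketch {
    // Compute the amplitude and phase response of a filter at a given
    // fraction of the Nyquist folding frequency by evaluating the DTFT
    // of the impulse response directly.
    // fractionOfNyquist = 0.0 is DC; 1.0 is the folding frequency.
    public static double[] responseAt(double[] impulseResponse,
                                      double fractionOfNyquist) {
        double omega = Math.PI * fractionOfNyquist; // radians per sample
        double re = 0.0, im = 0.0;
        for (int n = 0; n < impulseResponse.length; n++) {
            re += impulseResponse[n] * Math.cos(omega * n);
            im -= impulseResponse[n] * Math.sin(omega * n);
        }
        double amplitude = Math.sqrt(re * re + im * im);
        double phaseDegrees = Math.toDegrees(Math.atan2(im, re));
        return new double[]{amplitude, phaseDegrees};
    }

    public static void main(String[] args) {
        // A whitening filter whose prediction coefficients are still
        // all zero is just a delayed -1: flat amplitude of 1 everywhere.
        double[] filter = {0.0, 0.0, -1.0};
        for (int k = 0; k <= 4; k++) {
            double[] r = responseAt(filter, k / 4.0);
            System.out.printf("f=%.2f amp=%.3f phase=%8.2f%n",
                              k / 4.0, r[0], r[1]);
        }
    }
}
```

Sweeping fractionOfNyquist from 0.0 to 1.0 over a converged whitening filter would show the notch and flat passbands discussed below.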
The red and black traces
To get your bearings, consider the red trace and the black trace at the bottom of the right panel. The black trace with the notch near the bottom of the right panel shows the amplitude response of the whitening filter in the bottom of the left panel. The red trace at the bottom of the right panel shows the corresponding phase response for the same whitening filter plotted over an interval of +180 degrees to -180 degrees.
Each such pair of red and black traces shows the amplitude and phase response of the whitening filter whose impulse response appears immediately to its left in the left panel.
A notch filter
Consider first the amplitude response shown by the black trace at the bottom of the right panel. This amplitude response shows a reasonably sharp notch at a frequency about one fourth of the way between zero on the left and the Nyquist folding frequency on the right. The location of the notch matches the frequency of the narrow-band noise that was suppressed by the adaptive process.
A flat wide-band response
The frequency response of the whitening filter is relatively flat at all frequencies on both sides of the notch. When this filter is applied to the input consisting of wide-band signal plus narrow-band noise at the same frequency as the notch, the filter does a reasonably good job of preserving the wide-band signal and suppressing the narrow-band noise. That agrees with what we saw in the time series output in Figure 2.
The adaptive progression
If you examine the amplitude response curves at each level from top to bottom, you can see how this notch develops as the adaptive process converges. As was the case with the impulse response, the position of the notch and the flatness at surrounding frequencies was pretty well established and stabilized by the end of about 300 adaptive iterations.
The phase response
Another important characteristic of the whitening filter is the phase response. The output of a filter with a flat amplitude response and a phase shift of zero degrees simply reproduces the input. That is probably the best case scenario. A phase shift of 180 degrees (or -180 degrees) reverses the algebraic sign of the input values. This is probably the next best scenario because this phase shift is relatively easy to compensate for.
(Note that a -180-degree phase shift is the same as a +180-degree phase shift.)
Phase or waveform distortion
Except for the unique case of a linear phase shift (see the previous lesson), phase shifts between the two extremes of zero degrees and 180 degrees usually introduce phase or waveform distortion into the signal. This is usually undesirable and can be difficult to compensate for.
The phase response curve
The red phase response curves in Figure 3 are plotted against a black axis that represents zero degrees. As you can see, at the end of 500 adaptive iterations and at most frequencies, the phase shift is either +180 degrees or -180 degrees, indicating that there will be very little phase or waveform distortion in the signal as it passes through the whitening filter. The only frequencies where this is not true are in the narrow band of frequencies in the near vicinity of the notch in the amplitude response. Thus, we can expect a small amount of phase distortion for those signal components on either side of the notch in the amplitude response.
Overall, as we saw in Figure 2, this whitening filter does a reasonably good job of suppressing the narrow-band noise while preserving the wide-band signal with very little phase or waveform distortion.
Required input data
The user is required to provide the information shown in Figure 4 as command-line parameters to the program.
(If the user fails to provide the required command-line parameters, default values are used. The results shown in Figures 1 through 3 resulted from the default values.)
- feedbackGain: The gain factor that is used in the feedback loop to adjust the coefficient values in the prediction/whitening filter. (A whitening filter is a prediction filter with a -1 appended to its end.) If the value of the feedbackGain is too high, the program will become unstable. If it is too low, convergence will take a long time. Values toward the low end tend to converge to better solutions. It is possible for the feedbackGain value to be low enough to avoid instability but high enough to cause the adaptive process to bounce around and never find a good solution. Typical useful values for feedbackGain in this program are around 0.00001.
- numberIterations: This is the number of iterations that the program executes before stopping and displaying all of the graphic results.
- predictionFilterLength: This is the number of coefficients in the prediction filter. This can be any integer value greater than zero. The program will throw an exception if this value is zero. Typical values are 15 to 30. Longer filters tend to produce better results in terms of the narrowness of the notches at the noise frequencies and the flatness of the filter between the notches.
- signalScale: A scale factor that is applied to the wide-band signal provided by the random noise generator. The random noise generator produces uniformly distributed values ranging from -0.5 to +0.5. Scaling values from 10 to 20 work well in terms of producing a wide-band signal that is of a suitable magnitude for plotting. Set this to 0 to see how the program behaves in the presence of noise and the absence of signal.
- noiseScale: A scale factor that is applied to each of the sinusoidal noise functions before they are added to the signal. The raw sinusoids vary from -1.0 to +1.0. Scaling values from 10 to 20 work well in terms of being of a suitable magnitude for plotting. Set this to 0 to see how the program behaves in the presence of wide-band signal and the absence of narrow-band noise.
- numberNoiseSources: This value specifies the number of sinusoidal noise components that are added to the wide-band signal. Must be an integer value from 0 to 3.
Figure 4
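A minimal sketch of how these six parameters might be read from the command line, falling back to the defaults when they are not all supplied. This is my own illustration; the actual parsing code in Adapt02 may be organized differently:

```java
public class ParamSketch {
    // Parse the six command-line parameters described in Figure 4,
    // returning the defaults from Figure 5 when fewer than six
    // arguments are supplied. Order: feedbackGain, numberIterations,
    // predictionFilterLength, signalScale, noiseScale,
    // numberNoiseSources.
    public static double[] parse(String[] args) {
        double[] p = {1.0E-5, 500, 26, 20.0, 20.0, 1}; // defaults
        if (args.length == 6) {
            for (int i = 0; i < 6; i++) {
                p[i] = Double.parseDouble(args[i]);
            }
        }
        return p;
    }

    public static void main(String[] args) {
        double[] p = parse(args);
        System.out.println("feedbackGain: " + p[0]);
        System.out.println("numberIterations: " + (int) p[1]);
        System.out.println("predictionFilterLength: " + (int) p[2]);
        System.out.println("signalScale: " + p[3]);
        System.out.println("noiseScale: " + p[4]);
        System.out.println("numberNoiseSources: " + (int) p[5]);
    }
}
```

Running it with no arguments prints the same default values shown in Figure 5 below.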
The default values
For the record, the default values that produced the output shown in Figures 1 through 3 are as shown in Figure 5.
Using following values by default:
feedbackGain: 1.0E-5
numberIterations: 500
predictionFilterLength: 26
signalScale: 20.0
noiseScale: 20.0
numberNoiseSources: 1
Figure 5
A more difficult problem
Now let's look at the experimental results for a considerably more difficult scenario. The parameters for this scenario are shown in Figure 6.
Using following values from input:
feedbackGain: 1.0E-5
numberIterations: 1000
predictionFilterLength: 45
signalScale: 20.0
noiseScale: 10.0
numberNoiseSources: 3
Figure 6
The main thing that makes this scenario more difficult is the fact that there are three narrow-band noise components instead of only one. This means that the adaptive process will be required to build a whitening filter with a frequency response that has three notches but which is otherwise flat.
To accommodate this added difficulty, I increased the prediction filter length to 45 coefficients and extended the number of adaptive iterations from 500 to 1000. I didn't change the feedback gain.
The time-domain output
Figure 7 shows the time-domain graphs at the beginning and at the end of the adaptive run after 1000 adaptive iterations.
Figure 7
As you can see in the bottom panel of Figure 7, the whitening filter output in the top (black) trace is a reasonably good representation of the actual wide-band signal shown in the second (red) trace. This indicates that the adaptive process was successful in designing a whitening filter that suppresses the three narrow-band noise components while preserving the wide-band signal.
The impulse response and the frequency response
Figures 8 and 9 show the impulse and frequency response curves for the whitening filter as the adaptive process converges. The traces at the top of Figure 8 show the impulse response and frequency response of the whitening filter before the adaptive process began. Each successive set of traces shows the response curves at the end of 100 adaptive iterations.
Figure 8
The traces at the bottom of Figure 8 show the response curves after 500 adaptive iterations.
Figure 9
The fifth set of traces down from the top in Figure 9 shows the response curves at the end of 1000 iterations.
Three notches are visible
You can see the three notches in the frequency response develop as you examine the curves from the top of Figure 8 to near the bottom of Figure 9.
Reasonably flat amplitude response
Although some ripple is evident in the amplitude response near the bottom of Figure 9, the amplitude response outside the areas of the three notches is reasonably flat.
Well-behaved phase response
Also, outside the areas of the three notches, the phase response is very close to either 180 degrees or -180 degrees indicating that there should be very little phase or waveform distortion for the wide-band signal. This agrees with a visual comparison of the first and second traces in the bottom panel of Figure 7.
Enough talk, let's see some code
Now that you know what to expect from the behavior of this program, it's time to examine the program code in some detail.
Preview
The program named Adapt02 illustrates one aspect of time-adaptive signal processing. This program implements a time-adaptive whitening filter using a predictive approach.
Input signal plus noise
The program input is a time series consisting of a wide-band signal plus up to three sinusoidal noise components. The program adaptively creates a filter that attempts to eliminate the sinusoidal noise while preserving the wide-band signal.
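The input time series just described can be sketched in a few lines of Java. This is not the code from Adapt02; the class and method names here are illustrative, and I have assumed a uniform random number generator for the wide-band signal and unit-amplitude sinusoids for the noise.

```java
import java.util.Random;

public class InputSynthesisSketch {
    // Build a time series consisting of a wide-band random signal plus
    // the sum of sinusoids at the given frequencies (expressed as
    // fractions of the sampling frequency).
    public static double[] makeInput(int len, double[] noiseFreqs, long seed) {
        Random gen = new Random(seed);
        double[] input = new double[len];
        for (int n = 0; n < len; n++) {
            // Wide-band signal: uniform random samples centered on zero.
            double signal = gen.nextDouble() - 0.5;
            // Narrow-band noise: sum of up to three sinusoids.
            double sineNoise = 0.0;
            for (double f : noiseFreqs) {
                sineNoise += Math.sin(2.0 * Math.PI * f * n);
            }
            input[n] = signal + sineNoise;
        }
        return input;
    }

    public static void main(String[] args) {
        double[] input = makeInput(400, new double[]{0.1, 0.2, 0.3}, 1L);
        System.out.println("Generated " + input.length + " samples");
    }
}
```

With three unit-amplitude sinusoids, each sample is bounded in magnitude by 3.5, since the random signal lies between -0.5 and +0.5.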
Time series output
The following time series are displayed when the program runs:
- -err: This is the negative of the error, which is actually the output from the whitening filter. Ideally, this time series contains the wide-band signal with the sinusoidal noise removed.
- signal: The raw wideband signal consisting of samples taken from a random number generator.
- sineNoise: The raw noise consisting of the sum of one, two, or three sinusoidal functions.
- input: The sum of the signal and the sinusoidal noise.
- output: The output produced by applying the prediction filter to the input signal plus noise.
- target: The target time series that is used to control the adaptive process. This is the next sample beyond the samples that are processed by the prediction filter. The prediction filter attempts to predict this value. Thus, the adaptive process attempts to cause the output from the prediction filter to match the next sample in the incoming signal plus noise.
Examples of these six sampled time series outputs are shown in Figure 1 and Figure 2 above.
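The relationships among the err, output, and target series described in the list above can be captured in a short predictive LMS loop. The following is a minimal sketch under my own assumptions, not the code from Adapt02: the filter predicts each incoming sample (the target) from the preceding samples, the prediction error drives the coefficient updates, and the negated error is the whitening-filter output.

```java
public class LmsPredictorSketch {
    // Adaptively whiten the input. numCoeffs is the length of the
    // prediction filter; feedbackGain is the LMS step size.
    public static double[] whiten(double[] input, int numCoeffs,
                                  double feedbackGain) {
        double[] coeffs = new double[numCoeffs];
        double[] out = new double[input.length];
        for (int n = numCoeffs; n < input.length; n++) {
            // Predict the next sample from the previous numCoeffs samples.
            double prediction = 0.0;
            for (int k = 0; k < numCoeffs; k++) {
                prediction += coeffs[k] * input[n - 1 - k];
            }
            double target = input[n];   // the sample being predicted
            double err = prediction - target;
            out[n] = -err;              // whitening-filter output
            // LMS update: adjust each coefficient to reduce the error.
            for (int k = 0; k < numCoeffs; k++) {
                coeffs[k] -= feedbackGain * err * input[n - 1 - k];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Feed the predictor a pure sinusoid; the output should shrink
        // as the filter learns to predict (and thus cancel) it.
        double[] sine = new double[2000];
        for (int n = 0; n < sine.length; n++) {
            sine[n] = Math.sin(2.0 * Math.PI * 0.1 * n);
        }
        double[] out = whiten(sine, 8, 0.02);
        System.out.println("First output sample after startup: " + out[8]);
        System.out.println("Late output sample: " + out[1999]);
    }
}
```

Because the sinusoids are predictable from past samples and the wide-band signal is not, the converged predictor cancels the sinusoids while the unpredictable wide-band component passes through as the error.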
Frequency response of the whitening filter
Although not required by the adaptive process, the frequency response of the whitening filter is computed and displayed once every 100 adaptive iterations. This output is provided to help you understand the adaptive process.
Ideally the amplitude response will be flat with very narrow notches at the frequencies of the interfering sinusoidal noise components.
Both the amplitude and the phase response are displayed once every 100 iterations. This makes it possible for you to see the notches develop in the frequency response of the whitening filter as it converges on a solution. It also makes it possible for you to see how the phase behaves at and between the notches in the amplitude response.
An example of the frequency response output is shown in the right panel in Figure 3 above.
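The frequency response of a time-domain filter can be obtained by evaluating the DFT of its impulse response at a set of analysis frequencies from zero to half the sampling frequency. The sketch below illustrates that computation; it is not the ForwardRealToComplex01 class that Adapt02 actually uses, and the names are mine.

```java
public class FreqResponseSketch {
    // Return {amplitude, phaseInDegrees}, each of length bins, for the
    // filter whose impulse response is h, evaluated at equally spaced
    // frequencies from 0 to 0.5 cycles per sample (the folding frequency).
    public static double[][] response(double[] h, int bins) {
        double[] amp = new double[bins];
        double[] phaseDeg = new double[bins];
        for (int m = 0; m < bins; m++) {
            double f = 0.5 * m / (bins - 1);  // cycles per sample
            double re = 0.0;
            double im = 0.0;
            // Evaluate the DFT of h at frequency f.
            for (int k = 0; k < h.length; k++) {
                double ang = -2.0 * Math.PI * f * k;
                re += h[k] * Math.cos(ang);
                im += h[k] * Math.sin(ang);
            }
            amp[m] = Math.hypot(re, im);
            phaseDeg[m] = Math.toDegrees(Math.atan2(im, re));
        }
        return new double[][]{amp, phaseDeg};
    }

    public static void main(String[] args) {
        // A unit impulse has a perfectly flat amplitude response
        // and zero phase at every frequency.
        double[][] r = response(new double[]{1.0}, 16);
        System.out.println("Amplitude at DC: " + r[0][0]);
        System.out.println("Phase at DC (degrees): " + r[1][0]);
    }
}
```

Plotting the amplitude array as the adaptation proceeds would show the notches deepening at the sinusoidal noise frequencies, just as in Figures 8 and 9.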
Impulse response of the whitening filter
The individual time-domain whitening filters (from which the frequency response is computed) are also displayed once every 100 iterations. An example is shown in the left panel of Figure 3.
Command-line input
The user provides six command-line parameters to control the operation of the program. These command-line parameters are described in Figure 4 above. If the user doesn't provide any command-line parameters, six default values are used instead.
In addition to the class named Adapt02, this program requires the following classes:
- PlotALot01
- PlotALot03
- PlotALot07
- ForwardRealToComplex01
I provided the source code for and explained the class named PlotALot01 in the earlier lesson entitled Plotting Large Quantities of Data using Java. Therefore, I won't repeat that explanation in this lesson.
I also provided and explained the class named PlotALot03 in the earlier lesson entitled Plotting Large Quantities of Data using Java, and I won't repeat that material here either.
I provided the source code for and explained the class named ForwardRealToComplex01 in the earlier lesson entitled Spectrum Analysis using Java, Sampling Frequency, Folding Frequency, and the FFT Algorithm. Once again, I will simply refer you to that lesson and won't repeat that material here.
The class named PlotALot07 is new to this lesson. The source code for this class is provided in Listing 22 near the end of the lesson. The class named PlotALot07 is a simple extension of the class named PlotALot04, which I explained in the lesson entitled Plotting Large Quantities of Data using Java. I will refer you to that lesson for a general explanation of the class and won't provide further explanation of the class named PlotALot07.
Program testing
This program was tested using J2SE 5.0 running under Windows XP. J2SE 5.0 or later is required due to the use of Generics and the use of static import directives.