
Active Noise Cancellation for Audio Signals Using Neural Network based Adaptive Filters

Harsh Bhate, Karthik Natarajan and Adhithya Natarajan V.


Department of Electronics and Communication Engineering,
SRM University, Chennai, India
adhiram19@gmail.com

Ramamoorthy
Professor, Department of Electronics and Communication Engineering,
SRM University, Chennai, India
natarajan.v@ktr.srmuniv.ac.in

Abstract— Active noise control, the technique of achieving noise reduction using an electronically generated anti-noise, has been around for a while. Traditional implementations of active noise control involve the use of algorithms such as Least Mean Squares (LMS). This paper deals with the subject of achieving active noise control for an acoustic signal using a neural network based adaptive filter.

Keywords—Active Noise Cancellation, Neural Networks, Adaptive Filters, Gradient Descent, Least Mean Square

I. INTRODUCTION

Noise, or unwanted signal(s), is an integral part of nature, and its study is a subject in itself. Traditional methods of controlling noise have been either cost-prohibitive and/or bulky and inefficient. This paper looks into the problem of achieving sufficient active noise control for audio frequencies at the lower end of the spectrum in a way that is cost-effective, computationally lean and scalable. The paper begins with a description of the problem and the traditional approaches to achieving sufficient noise control, then moves on to a qualitative discussion of a method for achieving active noise control using neural networks. This is followed by computer simulations and benchmarking against traditional methods. The future scope of the paper includes the possibility of a physical implementation of the said method.

II. EXISTING SOLUTION

A. Plant Description

Figure 1. Feedforward System for 1D Duct System [1]

The plant model for implementing active noise control uses a feedforward system design in which a reference microphone listens to the noisy signal and a control element estimates the anti-noise signal, subsequently introducing it through a speaker. The acoustic output thus achieved is benchmarked with the help of an error microphone, which acts as the feedback component of the system. Using the two microphones, the controller tries to arrive at the most optimized estimate of the noisy signal for cancellation.
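To make the signal flow concrete, the following is a minimal numerical sketch (in Python, not part of the original work) of the idealized feedforward arrangement of Figure 1: a reference noise signal propagates down a pure-delay primary path to the error microphone, and a hypothetical perfect controller injects a sign-inverted copy of it. The pure-delay path and the perfect controller are simplifying assumptions for illustration only.

# Idealized feedforward ANC sketch: a pure-delay primary path and an
# assumed perfect controller that emits the exact anti-noise.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(size=1000)             # signal at the reference microphone

primary_delay = 5                         # acoustic path: source -> error mic
primary = np.roll(noise, primary_delay)   # noise arriving at the error mic

# A perfect controller would match the primary path and flip the sign;
# real controllers must *estimate* this, which is what the adaptive
# filters in the following sections do.
anti_noise = -np.roll(noise, primary_delay)

residual = primary + anti_noise           # what the error microphone hears
print("residual power:", np.mean(residual ** 2))  # exactly 0 in this ideal case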
B. Least Mean Squares (LMS) Algorithm

The Least Mean Squares (LMS) algorithm is one of the most common error-reducing algorithms owing to its simplicity. It is a gradient descent type algorithm that works by traversing the negative of the error function gradient to arrive at the minimum.

Figure 2. LMS Adaptive Filter [2]

The LMS algorithm involves the use of an adaptive algorithm that updates the weights of a filter to achieve adaptive filtering. The weights are updated as follows:

w_{n+1} = w_n − μ∇ε[n]   (1)

Here, w_n represents the filter weights at the nth instant, μ the step size and ε[n] the error at the nth instant. The step size determines the rate of convergence of the algorithm; it also determines its stability.
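For a linear filter y[n] = wᵀx[n] with a squared-error cost, the gradient in (1) reduces to the familiar update w ← w + 2μe[n]x[n]. The following Python sketch illustrates this; the filter order, step size and signal names are assumed for demonstration and are not taken from the paper.

# Minimal LMS adaptive filter implementing the update in equation (1).
import numpy as np

def lms_filter(x, d, order=4, mu=0.01):
    """Adapt an FIR filter so that its output y tracks the desired signal d."""
    w = np.zeros(order)                     # filter weights w_n
    y = np.zeros(len(x))                    # filter output
    e = np.zeros(len(x))                    # error signal
    for n in range(order - 1, len(x)):
        x_n = x[n - order + 1:n + 1][::-1]  # current and past input samples
        y[n] = w @ x_n
        e[n] = d[n] - y[n]                  # instantaneous error
        w += 2 * mu * e[n] * x_n            # negative-gradient step of (1)
    return y, e, w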
III. NEURAL NETWORK BASED ADAPTIVE FILTER

Artificial neural networks are computational constructs loosely based on biological neural networks. The two neural network architectures of interest in this paper are the ADALINE (Adaptive Linear Neuron) and MADALINE (Multilayer Adaptive Linear Neuron) networks.

C. ADALINE Neural Network

Figure 3. ADALINE Neural Network [3]

An ADALINE neural network is a single-layer neural network. The architecture takes the weighted inputs and passes them through an input function, such as addition. The output of the input function drives the decision making of the activation function, which quantizes the output of the network. The network adapts the input weights based on the error using some learning methodology.
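The following sketch (illustrative, with assumed names and parameters) captures the ADALINE unit just described: a weighted sum as the input function, a sign quantizer as the activation, and an error-driven weight update computed on the linear output.

# A single ADALINE unit: weighted sum, sign quantizer, error-driven update.
import numpy as np

class Adaline:
    def __init__(self, n_inputs, mu=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.01, size=n_inputs)  # input weights
        self.b = 0.0                                    # bias term
        self.mu = mu                                    # learning rate

    def net_input(self, x):
        return self.w @ x + self.b        # input function: weighted sum

    def predict(self, x):
        # The activation function quantizes the output of the input function.
        return 1.0 if self.net_input(x) >= 0.0 else -1.0

    def update(self, x, target):
        # Adapt the weights from the error on the *linear* output,
        # as is conventional for ADALINE training.
        error = target - self.net_input(x)
        self.w += self.mu * error * x
        self.b += self.mu * error
        return error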
D. MADALINE Neural Network

Figure 4. MADALINE Neural Network [3]

A MADALINE neural network is a multi-layer neural network. The architecture feeds the inputs into a layer of ADALINE neurons, and the output is passed on to further layers of neurons until the final layer, where the output is processed.

E. ADALINE vs MADALINE Neural Networks

While ADALINE neural networks offer easy computation, they are only good at noise cancellation for linearly separable signals; MADALINE neural networks do a better job on complex signals. [4]

F. ADALINE Neural Network based Adaptive Filter

Filters are causal systems; thus, implementing an adaptive filter out of a neural network requires the inclusion of a causal element in the form of a tapped delay line. The following architecture is thus arrived at (a code sketch of this arrangement follows the Widrow-Hoff rule below):

Figure 5. Linear Adaptive Filter with Tapped Delay [5]

G. Widrow-Hoff Learning Rule

The Widrow-Hoff learning rule is an LMS-based gradient descent rule. The weight update equation is given by:

w_{n+1} = w_n − μ∇ε[n] ≈ w_n − με_i   (2)

Here, w_n represents the filter weights at the nth instant, μ the step size and ε[n] the error vector at the nth instant. The step size determines the rate of convergence of the algorithm, as well as its stability. Since exact computation of the gradient of the error is not possible, any one element ε_i of the error vector is used as a crude approximation.
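Combining the tapped delay line of Figure 5 with the Widrow-Hoff update of (2) yields a linear adaptive filter of the kind used in the remainder of this paper. The sketch below is illustrative: the two-delay depth mirrors the simulation in Section IV, but the test signals and learning rate are assumed.

# Tapped-delay linear (ADALINE) adaptive filter with Widrow-Hoff updates.
import numpy as np

def adaline_tapped_delay(noised, desired, n_delays=2, mu=0.05):
    taps = n_delays + 1                        # current sample + delayed copies
    w = np.zeros(taps)
    output = np.zeros(len(noised))
    error = np.zeros(len(noised))
    for n in range(taps - 1, len(noised)):
        x = noised[n - taps + 1:n + 1][::-1]   # contents of the delay line
        output[n] = w @ x                      # linear neuron output
        error[n] = desired[n] - output[n]
        w += mu * error[n] * x                 # Widrow-Hoff step, cf. (2)
    return output, error

# Usage sketch: recover an assumed 220 Hz tone buried in Gaussian noise.
t = np.arange(8000) / 8000.0                   # one second at 8000 Hz
clean = np.sin(2 * np.pi * 220 * t)
rng = np.random.default_rng(1)
noised = clean + 0.5 * rng.normal(size=len(t))
y, e = adaline_tapped_delay(noised, clean)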
IV. NOISE CANCELLATION USING ADALINE NEURAL NET BASED ADAPTIVE FILTER

To simulate the performance of an ADALINE neural network based adaptive filter, SIMULINK was used on a data set comprising a song (stereo audio sampled at 8000 Hz) mixed with Gaussian noise to produce the noised signal for cancellation, with two tapped delays.

H. Stability Analysis

The selection of the learning rate is crucial for noise control. A small learning rate results in slow convergence of the output to the desired signal, while a large learning rate causes the error to overshoot the minimum point. The following graphs were observed for different learning rates to determine the ideal one:

Figure 6. Error Signal with learning rate = 0.01

Figure 7. Error Signal with learning rate = 0.1

Figure 8. Error Signal with learning rate = 0.5

As the graphs suggest, the best convergence is observed with a learning rate of 0.1. The higher learning rate of 0.5 overshoots the error signal beyond a satisfactory value, while the lower learning rate of 0.01 is too slow to converge.

I. Final Result

After settling on the learning rate, the following results were observed:

Figure 9. System Output vs Desired Signal

In Figure 9, the orange trace denotes the output signal and the blue trace the desired signal. As time passes, the output signal slowly converges to the desired signal; accordingly, the error converges to a minimum.

Figure 10. Error Rate for the signals shown in Fig. 9
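The learning-rate comparison of Figures 6-8 can be reproduced qualitatively with a sweep like the one below. It reuses the adaline_tapped_delay sketch from Section III; the test signal is an assumed stand-in for the paper's song, so only the relative behaviour of the three rates is meaningful.

# Sweep the three learning rates from the stability analysis and compare
# the mean squared error over the final stretch of each run.
import numpy as np

t = np.arange(16000) / 8000.0                      # two seconds at 8000 Hz
rng = np.random.default_rng(2)
desired = np.sin(2 * np.pi * 220 * t)              # stand-in for the song
noised = desired + 0.5 * rng.normal(size=len(t))   # Gaussian-noised input

for mu in (0.01, 0.1, 0.5):
    _, e = adaline_tapped_delay(noised, desired, n_delays=2, mu=mu)
    print(f"mu = {mu}: tail MSE = {np.mean(e[-4000:] ** 2):.4f}")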


Table 1. Results

  Correlation of Desired Signal w.r.t.    Correlation Coefficient
  Noised Signal                           0.9828
  Output Signal                           0.9956

The correlation of the filtered output signal with the desired signal is closer to 1 than that of the noised signal, thereby showing promising results.
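The figures in Table 1 are Pearson correlation coefficients, which can be computed directly with NumPy; the sketch below assumes signals named as in the earlier sketches.

# Correlation of the noised and filtered signals against the desired signal.
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two equal-length signals."""
    return np.corrcoef(a, b)[0, 1]

# e.g., with desired, noised and the filter output y from a run above:
# print("noised vs desired:", correlation(noised, desired))
# print("output vs desired:", correlation(y, desired))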
V. PHYSICAL IMPLEMENTATION AND FUTURE WORK

Future work involves implementation of the filter design on a digital signal processor.

REFERENCES

[1] C. H. Hansen, Understanding Active Noise Cancellation. New York, NY, USA: Taylor & Francis, 2001.
[2] National Instruments, "Least Mean Squares (LMS) Adaptive Filter," 23 August 2013. [Online]. Available: http://www.ni.com/example/31220/en/.
[3] S. Raschka, "Machine Learning FAQ." [Online]. Available: https://sebastianraschka.com/faq/docs/diff-perceptron-adaline-neuralnet.html.
[4] B. Widrow, 1992. [Online]. Available: http://www-isl.stanford.edu/~widrow/papers/c1992artificialneural.pdf.
[5] MathWorks Inc., "Adaptive Neural Network Filters." [Online]. Available: https://in.mathworks.com/help/nnet/ug/adaptive-neural-network-filters.html. [Accessed 13 March 2017].
