Signal approximation using linear combination of functions


How can I approximate the signal $x(t)=0.001\,t^3 \exp(-0.1t)$ on the interval $[0,100]$ using a linear combination of the following functions?

$f_1(t)=A_1$

$f_2(t)=A_2\cos(0.05t)$

$f_3(t)=A_3\cos(0.1t)$

$f_4(t)=A_4\cos(0.2t+1)\exp(-0.2t)$

$f_5(t)=A_5\,t^3$

Can I use MATLAB code to do it over the whole interval $[0,100]$?

I tried to write the following MATLAB code. Is it correct?

t=[0:100];
x_t=0.001*(t.^3).*exp(-0.1*t); % signal to approximate
f1_t=ones(size(t));            % basis functions must be evaluated at t, not at x_t
f2_t=cos(0.05*t);
f3_t=cos(0.1*t);
f4_t=cos(0.2*t+1).*exp(-0.2*t);
f5_t=t.^3;
M=[f1_t' f2_t' f3_t' f4_t' f5_t']; % design matrix of basis functions
A=M\x_t'; % least-squares coefficients
f_t=(A(1)*f1_t)+(A(2)*f2_t)+(A(3)*f3_t)+(A(4)*f4_t)+(A(5)*f5_t);
figure(1),plot(t,f_t);
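For a cross-check outside MATLAB, the same least-squares solve (the `M\x_t'` step) can be sketched in plain Python with no third-party packages; `gauss_solve` and `basis` are small helpers written here for illustration, not library functions:

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back-substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

t = [float(i) for i in range(101)]                       # t = 0..100
x = [0.001 * ti**3 * math.exp(-0.1 * ti) for ti in t]    # target signal

def basis(ti):
    """The five basis functions f1..f5 evaluated at ti."""
    return [1.0,
            math.cos(0.05 * ti),
            math.cos(0.1 * ti),
            math.cos(0.2 * ti + 1) * math.exp(-0.2 * ti),
            ti**3]

Mrows = [basis(ti) for ti in t]  # design matrix, one row per sample

# Normal equations (M^T M) A = M^T x, equivalent to MATLAB's M\x'
n = 5
MtM = [[sum(Mrows[k][i] * Mrows[k][j] for k in range(len(t)))
        for j in range(n)] for i in range(n)]
Mtx = [sum(Mrows[k][i] * x[k] for k in range(len(t))) for i in range(n)]
A = gauss_solve(MtM, Mtx)

fit = [sum(A[j] * basis(ti)[j] for j in range(n)) for ti in t]
sse = sum((xi - fi)**2 for xi, fi in zip(x, fit))
print(A, sse)
```

Solving the normal equations directly works here because the problem is small; for larger or ill-conditioned design matrices, MATLAB's backslash (a QR-based solve) is the numerically safer choice.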

Best answer:

For $L^2$ approximation you can use the Gauss-Newton algorithm to minimize the squared error $$\min_{\vec A}\,\,\sum_i\big(y_i-f(\vec A,x_i)\big)^2.$$ The derivation can be found in any textbook on numerical optimization; the result is the iterative update $$\Delta\vec A=(J^TJ)^{-1}J^T\vec r$$ $$\vec A=\vec A+a\,\Delta \vec A$$ where $a$ is a damping coefficient and $$J=\begin{pmatrix}\bigg(\frac{\partial f}{\partial A_1}\bigg)_{x=x_1}&...&\bigg(\frac{\partial f}{\partial A_n}\bigg)_{x=x_1}\\\ ...&...&...\\\ \bigg(\frac{\partial f}{\partial A_1}\bigg)_{x=x_m}&...&\bigg(\frac{\partial f}{\partial A_n}\bigg)_{x=x_m}\end{pmatrix}\quad \vec r=\begin{pmatrix}y_1-f(\vec A,x_1)\\\ ... \\\ y_m-f(\vec A,x_m) \end{pmatrix}$$ In your case the model is $$f(\vec A,x)=A_1+A_2\cos(0.05x)+A_3\cos(0.1x)+A_4\cos(0.2x+1)e^{-0.2x}+A_5x^3$$ Since $f$ is linear in the coefficients, the entries of the Jacobian do not depend on $\vec A$: $$\frac{\partial f}{\partial A_1}=1$$ $$\frac{\partial f}{\partial A_2}=\cos(0.05x)$$ $$\frac{\partial f}{\partial A_3}=\cos(0.1x)$$ $$\frac{\partial f}{\partial A_4}=\cos(0.2x+1)e^{-0.2x}$$ $$\frac{\partial f}{\partial A_5}=x^3$$

You can use the code below:

clc
clear
% Generate 101 equally spaced points on [0,100]
X=[0:100];
% Generate y values of the target signal
Y=0.001.*(X.^3).*exp(-0.1.*X);

plot(X,Y,'r');

% Partial derivatives of f with respect to each coefficient
% (anonymous functions; inline() is deprecated and has been
% removed from recent MATLAB releases)
fA1=@(x) 1;
fA2=@(x) cos(0.05*x);
fA3=@(x) cos(0.1*x);
fA4=@(x) cos(0.2*x+1).*exp(-0.2*x);
fA5=@(x) x.^3;
% Residual y - f(A,x)
r=@(A1,A2,A3,A4,A5,x,y) y-A1-A2*cos(0.05*x)-A3*cos(0.1*x)-A4*cos(0.2*x+1)*exp(-0.2*x)-A5*x^3;


% Initial guess
A=[0.1;0.1;0.1;0.1;0.1];
% Preallocate Jacobian and residual
J=zeros(size(X,2),5);
res=zeros(size(X,2),1);
% Initialize error
error=0;

% max number of iterations
for j=1:1000

    for i=1:size(X,2)
        % Generate Jacobian
        J(i,1)=fA1(X(i));
        J(i,2)=fA2(X(i));
        J(i,3)=fA3(X(i));
        J(i,4)=fA4(X(i));
        J(i,5)=fA5(X(i));
        % Generate residual
        res(i,1)=r(A(1),A(2),A(3),A(4),A(5),X(i),Y(i));
    end
    % Calculate increment
    % Calculate increment by solving the normal equations
    % (backslash is preferred over forming the explicit inverse)
    dA=(J'*J)\(J'*res);
    A=A+0.1*dA;
    prev_error=error;
    error=transpose(res)*res;
    if(mod(j,50)==0)
        fprintf('%4i %+4.3f / %+4.3f / %+4.3f / %+4.3f / %+4.3f / %7.5f \n',j,A(1),A(2),A(3),A(4),A(5),error);
    end
    if ((error<1) || (abs(error-prev_error)<0.000001))
        fprintf('%4i %+4.3f / %+4.3f / %+4.3f / %+4.3f / %+4.3f / %7.5f \n',j,A(1),A(2),A(3),A(4),A(5),error);
        break;
    end

end

for i=1:size(X,2)
    Y_calc(i)=A(1)+A(2)*cos(0.05*X(i))+A(3)*cos(0.1*X(i))+A(4)*cos(0.2*X(i)+1)*exp(-0.2*X(i))+A(5)*X(i)^3;
end
hold on
plot(X,Y_calc,'b+');
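Because the model is linear in the coefficients, the Jacobian is constant and a single undamped step ($a=1$) already lands on the least-squares optimum; the damping and iteration are only needed for models that are nonlinear in the parameters. A minimal plain-Python sketch of one Gauss-Newton step on a toy two-parameter line fit (illustrative data, not the signal above):

```python
# One Gauss-Newton step on a model linear in its parameters:
# f(A, x) = A1 + A2*x, fitted to points lying exactly on y = 1 + 2x.
xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 5.0]

A = [0.0, 0.0]  # arbitrary initial guess

# Jacobian rows: df/dA1 = 1, df/dA2 = x  (independent of A)
J = [[1.0, x] for x in xs]
r = [y - (A[0] + A[1] * x) for x, y in zip(xs, ys)]  # residuals

# Normal equations (J^T J) dA = J^T r, solved with Cramer's rule (2x2)
a11 = sum(row[0] * row[0] for row in J)
a12 = sum(row[0] * row[1] for row in J)
a22 = sum(row[1] * row[1] for row in J)
b1 = sum(row[0] * ri for row, ri in zip(J, r))
b2 = sum(row[1] * ri for row, ri in zip(J, r))
det = a11 * a22 - a12 * a12
dA = [(b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det]

A = [A[0] + dA[0], A[1] + dA[1]]  # full step, a = 1
print(A)  # -> [1.0, 2.0], the exact least-squares fit in one step
```

A second step from this point would give $\Delta\vec A = 0$, since the residual is already orthogonal to the columns of $J$.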

The solution is $A_1=0.916$, $A_2=-0.297$, $A_3=-0.471$, $A_4=-0.071$ and $A_5=0$. The graph is shown below (red: original signal; blue: approximation).

[Plot: original signal in red, approximation in blue]