Gradient descent (derivative) for an algorithm to minimize the error, but it doesn't work with negative real values


I have a simple algorithm and I would like to minimize its error.
Therefore I calculate the derivatives of the error with respect to the inputs.

algorithm:
-- constant values -- $$ real = 3 $$ -- initial parameters -- $$ Ain = 1.6$$ $$ Bin = -1.2$$

-- algorithm -- $$ Aout = 10^{\frac{\left | Ain \right |}{3}}-1 $$ $$ Bout = 10^{\frac{\left | Bin \right |}{3}}-1 $$ $$ error = (real - Aout - Bout)^{2} $$

-- derivatives -- $$ Aout\_der = \frac{\ln(10) * Ain * 10^{ \frac{|Ain|}{3} }}{3 * |Ain|}$$ $$ Bout\_der = \frac{\ln(10) * Bin * 10^{ \frac{|Bin|}{3} }}{3 * |Bin|}$$ $$ error\_A\_der = 2*( real - Aout - Bout ) * ( - Aout\_der ) $$ $$ error\_B\_der = 2*( real - Aout - Bout ) * ( - Bout\_der ) $$
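As a sanity check (not part of the original post), the analytic derivative above can be compared against a central finite difference — a minimal Python sketch, assuming the same formulas:

```python
import math

def aout(a):
    # Aout = 10**(|a|/3) - 1, as in the algorithm above
    return 10 ** (abs(a) / 3) - 1

def aout_der(a):
    # analytic derivative: ln(10) * a * 10**(|a|/3) / (3 * |a|)
    # (valid for a != 0; at a = 0 the derivative is undefined)
    return math.log(10) * a * 10 ** (abs(a) / 3) / (3 * abs(a))

a, h = 1.6, 1e-6
numeric = (aout(a + h) - aout(a - h)) / (2 * h)
print(aout_der(a), numeric)  # the two values should agree closely
```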



Here you can see my program:

    import math

    real = 3.0
    Ain, Bin = 1.6, -1.2

    for i in range(10000):

        #-------------------#
        # STEP 1: Grad diff #
        #-------------------#
        Aout = 10 ** (abs(Ain) / 3) - 1
        Bout = 10 ** (abs(Bin) / 3) - 1

        Aout_der = (math.log(10) * Ain * 10 ** (abs(Ain) / 3)) / (3 * abs(Ain))
        Bout_der = (math.log(10) * Bin * 10 ** (abs(Bin) / 3)) / (3 * abs(Bin))

        #--------------------------------------#
        # Step 2: Error                        #
        # |u(x)|'    =  u(x) / |u(x)| * u'(x)  #
        # &                                    #
        # (u(x)**n)' =  n * u(x)**(n-1) * u'(x)#
        #--------------------------------------#
        error = (real - Aout - Bout) ** 2

        # Derivative of the error w.r.t. Ain and Bin
        error_A_der = 2 * (real - Aout - Bout) * (-Aout_der)
        error_B_der = 2 * (real - Aout - Bout) * (-Bout_der)

        #--------------------------------#
        # Step 3: update the Ain and Bin #
        #--------------------------------#
        Ain = Ain - error_A_der * 0.001
        Bin = Bin - error_B_der * 0.001

        # stopping criterion
        if error < 0.0001:
            break

And everything works okay, as long as real ≥ 0.


if real = 3

i stopped at: 584
error: 9.759144048796994e-09
Ain: 1.3523187418649631
Bin: -1.0133921615317931


if real = 1

i stopped at: 1701
error: 9.988463991863827e-09
Ain: 0.6265998215802447
Bin: -0.4220107676504184


if real = 0

i stopped at: 6599
error: 9.964813417717756e-09
Ain: 0.00012989918077312943
Bin: 1.5323536787935905e-07



But when real becomes negative, it doesn't work anymore.


if real = -1

i stopped at: 9999
error: 1.00236266444978
Ain: -0.0007688867423695804
Bin: 0.0007688895693352667


if real = -3

i stopped at: 9999
error: 9.021301796298907
Ain: 0.0023093982869384407
Bin: 0.0023093982869377746


if real = -10

i stopped at: 9999
error: 100.23817133864833
Ain: -0.007730146364071466
Bin: -0.007730146364071119

Also notice that the error is close to real^2 when real is negative.

My question: am I doing something wrong, and how can I solve this? On average the value of real can differ from -10 to 10,
thus the range of real is [-10, 10].

Extra info: real is a constant. You can think of it in terms of backpropagation: Aout + Bout should approximate real.

Kind regards

BEST ANSWER

$Aout + Bout$ cannot possibly be negative: it is the sum of $2$ nonnegative quantities, as $10^{|x|/3} - 1$ is never less than $0$ for any $x \in \mathbb{R}$ (this is because $10^{y} \geq 1$ for $y \geq 0$).

So when you do the gradient descent, it tries to achieve the best possible error, which occurs when $A_{in}$ and $B_{in}$ are zero. That's why you get $A_{in}$ and $B_{in}$ to be nearly zero after the algorithm terminates, and in that case the error is $(\textbf{real} - 0 - 0 )^2 = \textbf{real}^2$, which you already noted.
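A minimal sketch illustrating the point (a hypothetical check, reusing the question's formulas):

```python
def out(x):
    # 10**(|x|/3) - 1 is >= 0 for every real x, since 10**(|x|/3) >= 1
    return 10 ** (abs(x) / 3) - 1

# Aout + Bout can never be negative
assert all(out(x) >= 0 for x in [-10, -1.2, 0.0, 1.6, 10])

# so for a negative target the best achievable point is Aout = Bout = 0,
# which leaves error = real**2
real = -3
best_error = (real - out(0) - out(0)) ** 2
print(best_error)  # 9.0, i.e. real**2
```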