Maple - LinearSolve evaluates seemingly equivalent augmented matrix differently


I am trying to get myself acquainted with Maple and was fooling around with some linear algebra.

I noticed that the following commands

restart;
with(LinearAlgebra);

P:=<0.6,0.3;0.4,0.7>;
Id := IdentityMatrix(2);

A:=P-Id;
b:=ZeroVector(2);

Aug1:=<A|b>;
LinearSolve(Aug1,free='x');

produce the unexpected result $$ \begin{bmatrix} -0. \\ -0. \\ \end{bmatrix} $$

Entering the augmented matrix manually, however

Aug2:=<-0.4,0.3,0;0.4,-0.3,0>;
LinearSolve(Aug2,free='x');

does produce the result I was expecting $$ \begin{bmatrix} 0.750000000000000x_1 \\ x_1 \\ \end{bmatrix} $$

The only difference I see is that Aug1 looks like $$ \begin{bmatrix} -0.400000000000000 & 0.300000000000000 & 0 \\ 0.400000000000000 & -0.300000000000000 & 0 \\ \end{bmatrix} $$ while Aug2 looks like $$ \begin{bmatrix} -0.4 & 0.3 & 0 \\ 0.4 & -0.3 & 0 \\ \end{bmatrix} $$

Using whattype(), I already determined that both Aug1 and Aug2 have the Matrix datatype and that their elements have the float datatype.

Any idea how I could produce the expected result using Aug1 ($=P-Id$)?

Best answer:

When you subtract Id, the 2x2 identity Matrix, from your Matrix P (whose entries are all of type numeric, and at least one is a float), the resulting Matrix A has datatype=float[8] and is stored in so-called hardware double precision.

That is a base-2 format, in which the floats 0.3 and 0.7 cannot be represented exactly. Notice how the following values p1 and -temp differ by a very small (but nonzero!) amount. I use Maple's evalhf mode here to mimic the double-precision operations in question.

evalhf(proc() local p1,p2,res,temp;
   p1:=0.3; p2:=0.7;
   temp:=p2-1.0; res:=p1+temp;
   eval(lprint([p1,temp,res]));
   return NULL; end proc()):

   [.299999999999999989, -.300000000000000044,
     -.555111512312578270e-16]

HFloat(0.3), HFloat(0.7)-HFloat(1.0):
lprint(%);

  HFloat(.299999999999999989), 
  HFloat(-.300000000000000044)

`+`(%);

                        -17
    -5.55111512312578 10   

Notice that res is not exactly 0.0.
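The same residue can be reproduced in any language that uses IEEE 754 doubles; here is a minimal sketch in Python (not Maple code, just an illustration of the binary round-off behind res):

```python
# IEEE 754 doubles are base-2, so 0.3 and 0.7 are not exactly
# representable.  (0.7 - 1.0) is therefore not exactly -0.3, and
# adding 0.3 back leaves a tiny nonzero residue instead of 0.0.
p1 = 0.3
p2 = 0.7
temp = p2 - 1.0      # -0.30000000000000004..., not -0.3
res = p1 + temp      # about -5.55e-17, not 0.0
print(repr(temp), repr(res))
```

The residue is exactly -2^(-54), the same -5.55111512312578e-17 seen in the Maple output above.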

At the default values of the environment variables Digits (10) and UseHardwareFloats (deduced), the following LinearAlgebra computations are done in hardware double precision.

The result from LinearSolve is indeed a single, valid solution, but the parametric answer is not obtained. That is because the linear-solving algorithm does not detect that the two rows are "close enough" to being merely elementwise negations of each other.

restart;
with(LinearAlgebra):
P:=<0.6,0.3;0.4,0.7>:
Id := IdentityMatrix(2):
b:=ZeroVector(2):

UseHardwareFloats; # default is `deduced`

             deduced

A:=P-Id;

       [-0.400000000000000   0.300000000000000]
  A := [                                      ]
       [ 0.400000000000000  -0.300000000000000]

lprint(A);

  Matrix(2,2,{(1, 1) =
  HFloat(-.400000000000000022),
  (1, 2) = HFloat(.299999999999999989),
  (2, 1) = HFloat(.400000000000000022),
  (2, 2) = HFloat(-.300000000000000044)}, 
  datatype = float[8],storage = rectangular,
  order = Fortran_order,shape = [])

Aug1:=<A|b>:

LinearSolve(Aug1);

              [-0.]
              [   ]
              [-0.]

The following provides some insight as to how that was computed.

LUDecomposition(Aug1);

  [1  0]  [1.0   0.]  [-0.400000000000000        0.300000000000000  0.]
  [    ]  [        ]  [                                               ]
  [0  1], [-1.  1.0], [                                        -17    ]
                      [                0.  -5.55111512312578 10     0.]

RowOperation(Aug1,[2,1],1);

     [-0.400000000000000        0.300000000000000  0]
     [                                              ]
     [                                        -17   ]
     [                0.  -5.55111512312578 10     0]

ReducedRowEchelonForm(Aug1);

          [ 1.  -0.  -0.]
          [             ]
          [-0.   1.  -0.]

Notice the similarity with the earlier scalar computation. Notice that the bottom rows of the results from ReducedRowEchelonForm and RowOperation are not all zero, and neither is the bottom row of the U Matrix returned by LUDecomposition.
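For illustration, here is the same elimination step in Python (a sketch, not LinearSolve's actual algorithm), using the double-precision entries exactly as Maple stored them in the lprint output above. The second pivot comes out as a tiny nonzero number, so the Matrix looks numerically like it has rank 2 and the unique solution [0, 0] is returned:

```python
# One Gaussian-elimination step on A = P - Id in hardware doubles.
# In exact arithmetic row 2 is -(row 1), so the second pivot would
# be zero; in base-2 doubles a residue of about -5.55e-17 survives.
a11, a12 = -0.4, 0.3          # first row of A
a21, a22 = 0.4, 0.7 - 1.0     # second row; a22 is NOT exactly -0.3
m = a21 / a11                 # multiplier: exactly -1.0 here
u22 = a22 - m * a12           # second pivot: nonzero
print(m, u22)
```

The value of u22 matches the -5.55111512312578e-17 entry in the U factor shown by LUDecomposition above.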

One way to work around this issue is to perform the computations in so-called "software floats", which in Maple means a base-10 floating-point format in which the values 0.3 and 0.7 can be stored "exactly".

SFloat(0.3), SFloat(0.7)-SFloat(1.0):
lprint(%);

  .3, -.3

`+`(%);

            0.
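Python's decimal module can play the same role as an illustration of base-10 arithmetic (it is not Maple's sfloat implementation, just an analogous radix-10 format): 0.3 and 0.7 are exact in base 10, so the cancellation is exact:

```python
from decimal import Decimal

# In a radix-10 float format 0.3, 0.7 and 1.0 are all exact, so the
# arithmetic that left a ~5.55e-17 residue in binary cancels exactly.
p1 = Decimal("0.3")
p2 = Decimal("0.7")
res = p1 + (p2 - Decimal("1.0"))
print(res)      # exactly zero
```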

With the default value of deduced for UseHardwareFloats, software floats are used whenever Digits is greater than 15. Alternatively, UseHardwareFloats can be set to false, in which case software floats are used for any valid setting of Digits.

Let's re-do all those computations, with UseHardwareFloats set to false.

UseHardwareFloats:=false:

A:=P-Id;

           [-0.4   0.3]
      A := [          ]
           [ 0.4  -0.3]

lprint(A);

  Matrix(2,2,{(1, 1) = -.4, (1, 2) = .3,
  (2, 1) = .4, (2, 2) = -.3},
  datatype = sfloat,storage = rectangular,
  order = Fortran_order,shape = [])

Aug1:=<A|b>:

LinearSolve(Aug1);

         [0.7500000000 _t0[1]]
         [                   ]
         [      _t0[1]       ]

The following provides some insight as to how that was computed.

LUDecomposition(Aug1);

     [1  0]  [          1.  0.]  [-0.4  0.3  0.]
     [    ]  [                ]  [             ]
     [0  1], [-1.000000000  1.], [  0.   0.  0.]

RowOperation(Aug1,[2,1],1);

         [-0.4  0.3  0]
         [            ]
         [  0.   0.  0]

ReducedRowEchelonForm(Aug1);

     [1.  -0.7500000000  -0.]
     [                      ]
     [0.             0.   0.]
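As an illustration of why the rank-1 structure is now detected, here is one elimination step using Python's decimal module as a base-10 stand-in for Maple's sfloats (a sketch, not LinearAlgebra's actual code):

```python
from decimal import Decimal

# One Gaussian-elimination step on A = P - Id, in base 10: the
# second pivot cancels to exactly zero, so a solver can recognize
# A as rank 1 and return a parametric answer.
a11, a12 = Decimal("-0.4"), Decimal("0.3")
a21, a22 = Decimal("0.4"), Decimal("0.7") - Decimal("1.0")
m = a21 / a11                 # exactly -1
u22 = a22 - m * a12           # exactly 0
print(m, u22)
```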

Another way to get at a parametric solution for your example is to compute the NullSpace (Kernel) of your A Matrix, and form a parametrized linear combination of the (here, one) Vector in the basis.
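As a sketch of that null-space idea in plain Python (the basis Vector here is worked out by hand for this 2x2 example, not computed with Maple's NullSpace), the kernel of A is spanned by (0.75, 1), and every scalar multiple solves the homogeneous system up to round-off:

```python
# A = P - Id has rank 1 (in exact arithmetic); its null space is
# spanned by v = (0.75, 1), so every solution of A.x = 0 is t*v,
# which is exactly the parametric form [0.75*t, t] shown earlier.
A = [[-0.4,  0.3],
     [ 0.4, -0.3]]
v = (0.75, 1.0)                        # null-space basis vector
for t in (1.0, -2.5, 13.0):            # arbitrary parameter values
    x = (t * v[0], t * v[1])
    residual = [A[i][0] * x[0] + A[i][1] * x[1] for i in range(2)]
    print(t, residual)                 # residuals are tiny (order 1e-16)
```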

This question is about Maple programming and not about mathematics per se, and as such it is off-topic in this forum. More suitable venues would be stackoverflow.com or www.mapleprimes.com, the Maplesoft user forum.