How to reverse matrix vector multiplication?


I'm using the simple matrix × vector multiplication below to calculate result. Now I wonder: how can I calculate vector if I know matrix and result, so that when I afterwards multiply matrix and vector again, I get result?


Sorry, I don't know what this kind of multiplication is called; I was never deep into these math topics. I have a program that does my calculation. I hope you can understand and classify the multiplication:

// float[] matrix = [4x4] stored flat, float[] vector = [4] column vector

float[] result = new float[vector.Length];
for (int column = 0; column < 4; column++)
{
    // result[column] = sum over all rows of matrix[column + row * 4] * vector[row]
    for (int row = 0; row < 4; row++)
        result[column] += matrix[column + row * 4] * vector[row];
}
return result;

Update: I found this for inverted matrices, and I now remember that mathematicians don't care about the complexity of things, but we programmers do. Is there no way to avoid matrix inversion (to reduce the complexity)?

Solution: I implemented the raw 4x4 matrix inversion and, alternatively, I inverted the matrix generation. In the end I got the very same matrices and the very same valid results for my vector. I chose the second path because it reduces the complexity to around the same as the calculation described in my first sentence above.
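For reference, the "reverse" step can be sketched in Python with NumPy (the matrix values below are made up for illustration; this assumes the standard product result = matrix · vector — if your flat indexing actually computes the transposed product, substitute matrix.T):

```python
import numpy as np

# Hypothetical invertible 4x4 matrix and a vector (illustrative values).
matrix = np.array([[2.0, 0.0, 0.0, 0.0],
                   [0.0, 3.0, 0.0, 0.0],
                   [0.0, 0.0, 4.0, 0.0],
                   [1.0, 0.0, 0.0, 5.0]])
vector = np.array([1.0, 2.0, 3.0, 4.0])

# Forward step from the question: result = matrix @ vector.
result = matrix @ vector

# "Reversing" it: recover the vector from matrix and result.
# np.linalg.solve is both faster and more accurate than computing
# the inverse explicitly with np.linalg.inv and multiplying.
recovered = np.linalg.solve(matrix, result)

print(np.allclose(recovered, vector))  # prints True
```

For a fixed 4x4 matrix solved many times, precomputing the inverse (or an LU factorization) once and reusing it is the reasonable shortcut.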

Thank you for the help!

There are 3 answers below.

BEST ANSWER

If I understand your question correctly, you have a matrix $A$ and a vector $b$ and want to find the vector $x$ which satisfies $$Ax=b.$$

If the matrix $A$ is invertible, then the equation $Ax=b$ has a unique solution, namely $x=A^{-1}b$. If $A$ is not invertible, there may be either zero or many solutions to your problem.
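A minimal NumPy sketch of both cases (the matrices here are made-up examples, not from the question):

```python
import numpy as np

# Invertible case: Ax = b has exactly one solution.
A = np.array([[1.0, 2.0], [3.0, 4.0]])   # det = -2, so A is invertible
b = np.array([5.0, 6.0])
x = np.linalg.solve(A, b)                # the unique x = A^{-1} b
print(np.allclose(A @ x, b))             # prints True

# Singular case: the rows of S are linearly dependent.
S = np.array([[1.0, 2.0], [2.0, 4.0]])
# np.linalg.solve(S, b) raises LinAlgError here; depending on b the
# system S x = b has either no solution or infinitely many.
```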

ANSWER

If you want to solve a linear system of the form $$Ax = b,$$ you shouldn't compute the inverse of the matrix $A$. If $A\in \mathbb{R}^{n \times n}$ has no special structure, computing $A^{-1}$ can take up to $O(n^3)$ steps. This is only worthwhile if you have to solve the system a LOT of times for fixed $A$ and different $b$. For symmetric positive-definite systems one usually uses the conjugate gradient method, which (in exact arithmetic) yields the solution in at most $n$ iterations. Consider also using a preconditioner. Anyway, the best is probably to reproduce what the Matlab command $A\setminus b$ is doing, and this can be found here.
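A sketch of the conjugate gradient approach using SciPy's scipy.sparse.linalg.cg (assuming SciPy is available; the tridiagonal matrix below is just a standard symmetric positive-definite test case, standing in for whatever system you actually have):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# A symmetric positive-definite tridiagonal system, stored sparsely
# so only O(n) entries are kept rather than the full n*n.
n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Conjugate gradients; info == 0 signals convergence.
x, info = cg(A, b)

print(info == 0, float(np.linalg.norm(A @ x - b)))
```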

ANSWER

If one were to list 5 computational problems that are central to computer science, solving the matrix equation $Ax = b$ would be one of them. Here, $A$ is your matrix, $b$ is your result, and $x$ is the vector you are trying to solve for.

Regardless of which method you decide to use, you should be using a library rather than hand-coding the algorithm. There are two classical methods for solving the linear system $Ax = b$. The first is Gaussian elimination, suitable for small to medium-sized, dense matrices. This is implemented in NumPy as numpy.linalg.solve, or more generally, in LAPACK as gesv. Gaussian elimination, in all its implementations, is exact in exact arithmetic, but you can never beat its $O(n^3)$ scaling.

The second is the iterative approach, suitable for large, sparse matrices. There are several variations on this method. If $A$ is symmetric and positive-definite, you can use Conjugate Gradients; otherwise, you will want to use GMRES. Actual library implementations are more varied, and you will have to search the internet to find one suitable for your purposes. Iterative methods exploit sparsity to run far faster than Gaussian elimination, at the cost of producing an approximate rather than an exact solution. However, given well-conditioned, sparse matrices, iterative methods have superb accuracy.
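As a sketch, GMRES via scipy.sparse.linalg.gmres on a small nonsymmetric sparse system (the matrix here is an arbitrary diagonally dominant example chosen for illustration, not something from the question):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# A nonsymmetric sparse matrix (different sub- and superdiagonals),
# so Conjugate Gradients does not apply but GMRES does. Diagonal
# dominance (|3| > |-1| + |-2|) keeps it well-conditioned.
n = 100
A = diags([-1.0, 3.0, -2.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = gmres(A, b)   # info == 0 signals convergence

print(info == 0, float(np.linalg.norm(A @ x - b)))
```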

There is a third but less common option. If you know some special knowledge about the structure of your matrix, then you can also look for structured matrix solvers. Given the nature of your question, it is likely that this is not the case for your problem.
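For example, if the matrix happens to be tridiagonal, a banded solver such as SciPy's scipy.linalg.solve_banded solves the system in $O(n)$ rather than $O(n^3)$ (a sketch with a made-up tridiagonal system):

```python
import numpy as np
from scipy.linalg import solve_banded

# The band is given in LAPACK's diagonal-ordered form:
# row 0 = superdiagonal, row 1 = main diagonal, row 2 = subdiagonal.
n = 5
ab = np.array([[0.0] + [-1.0] * (n - 1),    # superdiagonal (first entry unused)
               [2.0] * n,                   # main diagonal
               [-1.0] * (n - 1) + [0.0]])   # subdiagonal (last entry unused)
b = np.ones(n)

x = solve_banded((1, 1), ab, b)             # (1, 1) = one band below, one above

# Rebuild the dense matrix only to verify the solution.
A = (np.diag([2.0] * n)
     + np.diag([-1.0] * (n - 1), 1)
     + np.diag([-1.0] * (n - 1), -1))
print(np.allclose(A @ x, b))  # prints True
```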

The bottom line is that you should search the internet to find a suitable library. If computational efficiency is very important to you, nothing that you come up with is going to beat the performance of subroutines in libraries like LAPACK.