Computational complexity of matrix multiplication


True or False: Let $A,B \in \mathbb{R}^{d \times d}$ be two matrices. Computing the product $AB \in \mathbb{R}^{d \times d}$ takes roughly $d^2$ floating point operations.

I'm having a hard time counting floating point operations. I believe the statement above is false, but I'm not sure how to show it. How do you count FLOPs?


HINT

Well, the naive way of computing the product of two matrices $C = AB$ defines $$ c_{ij} = \sum_{k=1}^d a_{ik} b_{kj}. $$

  1. How many operations (multiplications and additions) are needed to compute one such $c_{ij}$?
  2. How many entries do you need to compute?

Multiply the answers from (1) and (2) and you get the number of floating point operations needed.

Can you now complete this?
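To make the hint concrete, here is a minimal Python sketch of the naive algorithm, computing each entry $c_{ij}$ directly from the definition (the function name and the use of lists of lists are my own choices for illustration):

```python
def naive_matmul(A, B):
    """Naive product of two d x d matrices (lists of lists),
    computing each entry c_ij = sum_k a_ik * b_kj."""
    d = len(A)
    C = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(d):
            # one entry costs d multiplications and d - 1 additions
            for k in range(d):
                C[i][j] += A[i][k] * B[k][j]
    return C

# quick check against a hand-computed 2 x 2 product
A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
assert naive_matmul(A, B) == [[19.0, 22.0], [43.0, 50.0]]
```

The innermost loop runs $d$ times per entry, and the two outer loops visit all $d^2$ entries, which is exactly the product suggested in steps (1) and (2).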


UPDATE

Following your comments, you almost got it right. Note that $$ c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \ldots + a_{id}b_{dj}, $$ and each term in the sum needs $1$ multiplication to be computed, which is $d$ multiplications total, as you note. Now there are $d$ numbers to add, and that requires $d-1$ additions (it takes $1$ operation to add $2$ numbers, $2$ operations to add $3$ numbers, etc.).

Overall, computing one entry takes $d + (d-1) = 2d-1$ operations, and we have $d^2$ entries in the matrix, so a total of $d^2(2d-1) = \Theta(d^3)$ operations.
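As a sanity check on this count, one can tally the operations of the naive algorithm explicitly (a hedged sketch; the counter names are my own) and compare against the closed form $d^2(2d-1)$:

```python
def count_flops(d):
    """Count multiplications and additions performed by the
    naive d x d matrix product."""
    mults = adds = 0
    for i in range(d):
        for j in range(d):
            mults += d      # d products a_ik * b_kj for entry c_ij
            adds += d - 1   # d - 1 additions to sum those products
    return mults + adds

# agrees with d^2 * (2d - 1) for several sizes
for d in (1, 2, 5, 10):
    assert count_flops(d) == d**2 * (2 * d - 1)
```

For example, $d = 10$ already costs $1900$ operations rather than the $d^2 = 100$ the question proposes, confirming the statement is false.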