Graph theory beginning


We now consider different arithmetic operations for vectors and matrices and want to express the number of individual calculation steps as a function of n in O-notation. The number of operations gives an approximate measure of the runtime complexity, which is additionally influenced by the precision of the number representation used in the implementation and by various memory issues. We will use the representation

Runtime = Number of operations · Duration of a single operation

Let n, m ∈ N and let V = R^n be an n-dimensional R-vector space.

• Let a, b ∈ V and λ ∈ R. Determine the runtime for the calculation of a + λ · b.

• Let A, B ∈ R^{m×n} and λ ∈ R. Determine the number of necessary operations for the matrix addition A + λB.

• Let A ∈ R^{m×n} and v ∈ R^n. Determine the runtime of the multiplication A · v.

• Let A ∈ R^{n×n}. Determine the runtime of the Gauß-elimination needed to reach a reduced row echelon form.

I am not sure I completely understand what I am supposed to do here. If we don't have any concrete values to work with, how am I supposed to determine anything? Do I just guess? If anyone can help me understand how to approach these problems, or point me to resources that explain them, that would be great.


You do have variables: they are $n$ and $m$.

The idea here is that there are some calculations in linear algebra to be done. Each of these will involve doing some additions and multiplications, using the various scalars and vector/matrix entries that might be given. How many additions are needed, and how many multiplications, will depend on the size of the vectors and matrices. Bigger matrices will need more operations.

In reality, not all "additions" and "multiplications" take the same amount of time to carry out. But for the purposes of this analysis, we are going to ignore that. We want to get "an approximate measure of the runtime complexity", not a precise one which would be much harder to work out. So we are only looking at the size of the inputs, not their values. That is why we do not have specific examples of $a$, $b$, etc.: we're not trying to do a specific calculation, but determine how long such calculations generally take.

For a prompt like

Let a, b ∈ V and λ ∈ R. Determine the runtime for the calculation of a + λ · b.

We are trying to find out how many additions and multiplications this calculation needs, for arbitrary vectors each of length $n$. We are not interested in the contents of the vectors because that will not affect the total count of operations. Only their size matters.

For example, if $n=2$ then we are working out $$ \begin{pmatrix}a_1 \\ a_2\end{pmatrix} + \lambda \begin{pmatrix}b_1 \\ b_2\end{pmatrix} = \begin{pmatrix}a_1 + \lambda b_1\\a_2 + \lambda b_2\end{pmatrix}. $$ I have done two multiplications and two additions. The values $a_1$, $a_2$, $b_1$, $b_2$ and $\lambda$ could be anything but that does not change how many operations are involved. (We might imagine doing a bit better in some cases, such as avoiding some work if $\lambda = 0$. Or, for some calculations there might be a better-than-naive algorithm. But because we are only trying to come up with an upper bound, expressed in terms of the dimensions of the matrices, we don't have to worry about that.)
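To make the counting concrete, here is a small sketch (my own illustration, not part of the original answer) that performs this $n=2$ calculation in Python while tallying each scalar operation:

```python
# Count the scalar operations performed while computing a + lam * b
# for vectors of length n = 2. The values are arbitrary; only the
# length of the vectors affects the counts.
mults = 0
adds = 0

a = [3.0, -1.0]
b = [2.0, 5.0]
lam = 4.0

result = []
for i in range(len(a)):
    scaled = lam * b[i]            # one multiplication per entry
    mults += 1
    result.append(a[i] + scaled)   # one addition per entry
    adds += 1

print(mults, adds)  # → 2 2
```

Running it with any other values of `a`, `b`, or `lam` gives the same counts, which is exactly the point: the operation count depends on $n$, not on the data.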

Thinking about what happens for different choices of $n$, we can see that we'll be doing $n$ multiplications ($\lambda b_i$ for each $i$) followed by $n$ additions (adding each $a_i$ to the corresponding $\lambda b_i$), for $2n$ total arithmetic operations. We could write the runtime "as a function of $n$ in $O$-notation" as $O(n)$, or break down the operations separately.
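The same style of reasoning handles the other prompts. As a hedged sketch (my own illustration; the function names `axpy_ops` and `matvec_ops` are made up for this example): the vector update costs $n$ multiplications and $n$ additions, and a matrix-vector product $A \cdot v$ with $A \in \mathbb{R}^{m \times n}$ costs $n$ multiplications and $n-1$ additions for each of the $m$ output entries, for $m(2n-1) = O(mn)$ operations in total.

```python
def axpy_ops(n):
    """(mults, adds) needed for a + lam*b with vectors of length n."""
    return n, n

def matvec_ops(m, n):
    """(mults, adds) needed for A @ v with A an m-by-n matrix:
    each of the m output entries is a dot product of length n,
    costing n multiplications and n - 1 additions."""
    return m * n, m * (n - 1)

# Sanity-check the axpy formula against a direct simulation.
for n in (1, 2, 5):
    mults = adds = 0
    for i in range(n):
        mults += 1  # lam * b[i]
        adds += 1   # a[i] + (lam * b[i])
    assert (mults, adds) == axpy_ops(n)

print(axpy_ops(3))       # → (3, 3): 2n = 6 operations, O(n)
print(matvec_ops(2, 3))  # → (6, 4): m(2n - 1) = 10 operations, O(mn)
```

The matrix addition $A + \lambda B$ works out the same way entry by entry, giving $O(mn)$, and a similar (if more involved) count over the elimination steps of Gauß-elimination on an $n \times n$ matrix leads to the well-known $O(n^3)$ bound.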