This is a repost of a StackOverflow thread; I was told that people on this forum would be able to answer it faster. So here it is:
This is not homework. I'm taking a computer architecture MOOC on my own time, and there is a problem I can't figure out. Maybe someone can help me. Here it is:
- Memory operations currently take 30% of the execution time.
- A new widget called a "cache" speeds up 80% of memory operations by a factor of 4.
- A second new widget called an "L2 cache" speeds up half of the remaining 20% by a factor of 2.
What is the total speedup?
Here is the formula that is used to calculate the speedup:
Speedup = 1 / [(1 - sum of sped-up portions) + (sped-up portion 1)/speedup1 + (sped-up portion 2)/speedup2 + ...]
I calculated it as follows:
Speedup = 1 / [0.7 + 0.3*0.8/4 + 0.3*0.2*0.5/2 + 0.3*0.2*0.5] = 1.2422
But this answer is marked wrong, which indicates that my reasoning is off somewhere, and I can't figure out where. Can someone help me out?
Thanks.
Execution time = 1
Memory operations = 0.3
80% of the memory operations = 0.24; this now takes 0.06 seconds, so 0.18 seconds are saved
20% of the memory operations = 0.06
Half of that remaining 0.06 = 0.03; this now takes 0.015 seconds, so 0.015 seconds are saved
0.195 seconds saved in total
What had taken 1 second now takes 0.805 seconds.
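The step-by-step savings above can be checked with a short script (plain Python; the variable names are my own, not from the course):

```python
# Baseline execution time normalized to 1.
total = 1.0
mem = 0.3 * total              # memory operations: 30% of execution time

fast = 0.8 * mem               # 80% of memory ops, sped up 4x by the cache
slow_half = 0.5 * (0.2 * mem)  # half of the remaining 20%, sped up 2x by the L2
untouched = 0.5 * (0.2 * mem)  # the other half of the remaining 20% is not sped up

new_time = (total - mem) + fast / 4 + slow_half / 2 + untouched
print(round(new_time, 4))      # 0.805
print(round(1 / new_time, 4))  # 1.2422
```

Note that the untouched half of the slow memory operations (0.03) is simply carried over at full cost, which is exactly the extra `0.3*0.2*0.5` term in the question's calculation.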
1/0.805 = 1.2422, so you seem to have done it correctly. What makes you think your answer is wrong?
This is how I would solve it. "Speed" and "improvement" have different definitions, and the distinction is sometimes not obvious. "What had taken 1 second now takes 0.805 seconds" is an unambiguous way to put it.
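To make the "fractions of the original time" bookkeeping explicit, the same formula can be wrapped in a small helper (my own sketch, not from the course; `amdahl` is a name I made up):

```python
def amdahl(portions):
    """Overall speedup given (fraction_of_original_time, speedup) pairs.

    Any fraction of the original time not listed is assumed unaffected
    (i.e. it runs at its old speed).
    """
    affected = sum(f for f, _ in portions)
    new_time = (1 - affected) + sum(f / s for f, s in portions)
    return 1 / new_time

# The cache problem: 0.24 of total time sped up 4x, 0.03 sped up 2x.
# The untouched 0.03 of slow memory ops falls into the unaffected part.
print(round(amdahl([(0.24, 4), (0.03, 2)]), 4))  # 1.2422
```

Expressing each term as a fraction of *total* execution time up front (0.24 and 0.03 rather than 80% and 10% of the memory portion) avoids the nested-percentage mistakes this kind of problem invites.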