Today I tried to solve a geometric problem (finding the collision point between two circles in a specific situation). I found a working solution, but I'm not sure it was optimal (maybe my solution took more calculations and work than necessary to get the result).
The solution took some known data (the circles' positions and radii, and so on), calculated a few intermediate results (a circle's new center point after moving along a vector, etc.), and finally calculated the collision point I was looking for.
This was needed for a computer program, so it all had to be brought down to formulas. Some formulas took the known data and calculated the intermediate results; the other(s) took the intermediate results and calculated the final result. So it should be possible to combine all these formulas into one big one which takes the known starting data and calculates the final result. Is that correct (in the general case)?
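Combining the formulas is just substitution: wherever the final formula uses an intermediate value, replace it with the expression that computes that value. A minimal sketch with Python's sympy (the formulas here are made up for illustration, not the actual circle geometry):

```python
import sympy as sp

a, b, t = sp.symbols("a b t")

# A hypothetical intermediate result computed from the known data a, b.
intermediate = a + 2*b

# A hypothetical final result written in terms of the intermediate value t.
final = t**2 - a

# Substituting collapses the two steps into one formula in a and b alone.
combined = final.subs(t, intermediate)
print(sp.expand(combined))
```

The same substitution works for any chain of formulas, which is why the "one big formula" always exists (as long as each step is itself a formula).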
And if my big formula took only a few known data values (a few variables) and used them in multiple places, then the formula could be reduced/simplified, e.g.: $$x^2 + 64x - y - 32x + 3y \;\to\; x(x + 32) + 2y$$
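A computer algebra system can carry out this kind of simplification automatically. A minimal sketch using Python's sympy (the expression is the one from the example above):

```python
import sympy as sp

x, y = sp.symbols("x y")

# The unsimplified expression.
expr = x**2 + 64*x - y - 32*x + 3*y

# sympy combines like terms: the result is x**2 + 32*x + 2*y.
simplified = sp.simplify(expr)
print(simplified)

# Confirm it equals the hand-factored form x*(x + 32) + 2*y.
print(sp.simplify(simplified - (x*(x + 32) + 2*y)) == 0)  # True
```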
So my question is: if my solution is correct (regardless of how complicated or suboptimal the way of calculating the result is), and I can bring it all down to one formula which only uses the starting variables, and I then maximally reduce/simplify that formula, will I always (in every case) get the optimal formula/way to calculate the final result? Regardless of the method (the way of getting to the result)?
EDIT: After thinking about my question, I think I can state it differently. Let's say my solution is correct (i.e. produces the correct result) and that I can bring it down to a single formula. If I used different approaches, I would probably get different formulas. But are they all equivalent (can I always transform one into another)? Can I reduce every such formula to the same form (regardless of its optimality)?
Defining "maximally reduce" to mean writing a function in its simplest form, and "optimal formula" to mean the calculation method that takes the least amount of time: no, reduction is not guaranteed to find the most efficient method.
Here's a quick counterexample. Let's say I have a computer that takes 1 second to multiply two numbers, and 0.1 seconds to add two numbers. Now I have two functions, $$f(x) = 5x$$ $$g(x) = x + x + x + x + x$$ It looks like $f$ is much simpler! But computing $f$ takes 1 second (one multiplication), while $g$ takes only 0.4 seconds (four additions).
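The comparison can be made concrete with a toy cost model. This sketch just counts operations under the assumed costs from the example above (1 s per multiplication, 0.1 s per addition); these are not real hardware timings:

```python
MUL_COST = 1.0   # assumed seconds per multiplication
ADD_COST = 0.1   # assumed seconds per addition

# f(x) = 5*x uses one multiplication.
cost_f = 1 * MUL_COST

# g(x) = x + x + x + x + x uses four additions.
cost_g = 4 * ADD_COST

print(cost_f)  # 1.0
print(cost_g)  # 0.4
```

The "simpler" formula loses under this cost model, even though both compute the same function.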
This seems like a pretty ridiculous scenario, but floating-point multiplication really was once much slower than addition! A more relevant example is the Discrete Fourier Transform (DFT). The mathematical description is quite simple, but there are complicated ways of computing the transform much more quickly (the Fast Fourier Transform). Mathematical notation is not directly related to computational efficiency!
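To illustrate: a direct implementation of the DFT definition does $O(n^2)$ work, while an FFT computes exactly the same values in $O(n \log n)$. A minimal sketch using numpy, with a small made-up test signal:

```python
import numpy as np

def naive_dft(signal):
    """Direct O(n^2) evaluation of the DFT definition."""
    n = len(signal)
    k = np.arange(n)
    # Matrix of twiddle factors e^{-2*pi*i*j*k/n}.
    twiddle = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return twiddle @ signal

signal = np.array([1.0, 2.0, 3.0, 4.0])

# Both methods compute the same transform; the FFT is just faster.
print(np.allclose(naive_dft(signal), np.fft.fft(signal)))  # True
```

Both are valid "formulas" for the same function; neither notation tells you which one to run.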