I was doing a programming exercise. It took me a lot of time, and when I finished and checked the result, I saw that a lot of people had solved it with a formula that I could not understand, whereas I used a more algorithmic approach.
Here is the exercise:
If you consider a given number d, how many proper fractions can be built using d as a denominator?
For example, let's assume that d is 15: you can build a total of 8 different proper fractions between 0 and 1 with it: 1/15, 2/15, 4/15, 7/15, 8/15, 11/15, 13/15 and 14/15.
You have to build a function that computes how many proper fractions you can build with a given denominator.
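My algorithmic approach was roughly along these lines (a Python sketch of what I did; the function name is my own): count the numerators $k$ between 1 and $d-1$ whose fraction $k/d$ is already in lowest terms, i.e. $\gcd(k, d) = 1$.

```python
from math import gcd

def count_proper_fractions(d):
    # A fraction k/d (0 < k < d) is "proper" in the sense of the
    # exercise when it cannot be reduced, i.e. gcd(k, d) == 1.
    return sum(1 for k in range(1, d) if gcd(k, d) == 1)

print(count_proper_fractions(15))  # 8, matching the example above
```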
The mathematical approach that most people used is like this: given a number $N$, the first step is to calculate the distinct prime factors (PF) of $N$. Then they combine the elements of PF using this formula:
$$ result = \prod_{i=1}^{n} \left(1 - \frac{1}{pf_i}\right) $$
with:
$pf_i$ the $i$-th distinct prime factor of $N$,
$n$ the number of distinct prime factors.
Then the answer to the question is $result \cdot N$. Where does this formula come from?
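To make the question concrete, here is the formula as I understand it in code (a sketch; the function name is mine). It multiplies $N$ by $(1 - 1/p)$ once for each distinct prime factor $p$, using integer arithmetic:

```python
def totient_by_formula(n):
    # Start from N and apply N * (1 - 1/p) for each distinct prime p.
    # Using result -= result // p keeps everything in integers.
    result = n
    m = n
    p = 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p      # apply (1 - 1/p) exactly once
            while m % p == 0:          # strip all copies of this prime
                m //= p
        p += 1
    if m > 1:                          # leftover prime factor > sqrt(n)
        result -= result // m
    return result

print(totient_by_formula(15))  # 8, same as the brute-force count
```

For $N = 15$ the distinct prime factors are 3 and 5, so the formula gives $15 \cdot (1 - \tfrac{1}{3}) \cdot (1 - \tfrac{1}{5}) = 15 \cdot \tfrac{2}{3} \cdot \tfrac{4}{5} = 8$, which matches my count, and I would like to understand why.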