If you have a function of the form $f(kx)$, the graph of $f$ is horizontally scaled by a factor depending on $k$: the larger the magnitude of $k$, the more compressed the graph becomes, and the smaller the magnitude, the more stretched it becomes.
So by this definition, $k$ should be called the horizontal compression factor of the function, meaning that if $k = \frac{1}{2}$, the graph is horizontally compressed by a factor of $\frac{1}{2}$; and since stretching is the inverse of compressing, you could also say the graph is horizontally stretched by a factor of $2$. By the same logic, if $k = 2$, the graph is horizontally compressed by a factor of $2$, or horizontally stretched by a factor of $\frac{1}{2}$.
But this is not the case: the accepted practice is to say the graph is compressed by a factor of $k$ if $|k|>1$ and stretched by a factor of $k$ if $0<|k|<1$.
This seems extremely unintuitive, and to me it doesn't use the word "factor" correctly. So my question is: why do we describe stretching and compressing transformations like this?
Note that by the transformation:
$$f(x)\to f(kx)$$
if $|k|>1$, the graph of $f$ is horizontally compressed;
if $0<|k|<1$, the graph of $f$ is horizontally stretched.
In fact, the value the original function takes at $x=1$ is taken by the new function $f(kx)$ at $x=\frac{1}{k}$; thus, if $|k|>1$, every point of the graph moves closer to the $y$-axis, and the function is compressed.
If $k<0$, the graph is also reflected across the $y$-axis.
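The point correspondence above can be checked numerically; here is a minimal Python sketch, with $f = \sin$ chosen arbitrarily as the test function:

```python
import math

k = 4  # compression factor; |k| > 1, so the graph should be compressed

def f(x):
    return math.sin(x)

def g(x):
    # the transformed function f(kx)
    return f(k * x)

# The value f takes at x = 1 is taken by g at x = 1/k:
assert math.isclose(g(1 / k), f(1))

# With k < 0 the graph is additionally reflected across the y-axis:
# f(kx) with negative k equals f(|k| * (-x)), the mirror image.
k_neg = -4
assert math.isclose(math.sin(k_neg * 1), math.sin(abs(k_neg) * -1))
```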
Let's try with $\sin(kx)$:
http://www.wolframalpha.com/input/?i=plot+sin(4x),sin(2x),sinx,sin(x%2F2),sin(x%2F4)
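The same effect is visible in the period: $\sin(kx)$ has period $\frac{2\pi}{k}$, so a larger $|k|$ gives a shorter period, i.e. a more compressed graph. A quick numeric check in Python over the same values of $k$ as in the plot:

```python
import math

for k in (4, 2, 1, 0.5, 0.25):
    period = 2 * math.pi / k  # period of sin(kx)
    x = 0.7  # arbitrary sample point
    # shifting x by one period leaves sin(kx) unchanged
    assert math.isclose(math.sin(k * (x + period)), math.sin(k * x), abs_tol=1e-9)
    print(f"k = {k}: period = {period:.4f}")
```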