Let $f$ be a continuous function that is differentiable at $x=1$, with $f(1)>0$.
Consider the following equation:
$$\lim \limits_{x\to 1} \left(\frac{\color{Blue}{f(x)-f(1)}(x-1)}{\color{Blue}{(x-1)}f(1)}\right)^\frac{1}{\log x} = \lim \limits_{x\to 1} \left( \frac{\color{Blue}{f'(1)}(x-1)}{f(1)} \right)^\frac{1}{\log x}$$
My question is: why can you evaluate the blue expression on the LHS as $f'(1)$ before taking the limit, even though the whole expression is raised to the power $\frac{1}{\log x}$?
It seems to me like claiming $$\lim\limits_{n\to\infty}\left(1+\frac{1}{n}\right)^n = \lim\limits_{n\to\infty}(1+0)^n = 1.$$
So what makes the first substitution valid while the second is nonsense?
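As a quick numeric sketch of why the second substitution is nonsense (my own check, not part of the question): the form $1^\infty$ is indeterminate, and $(1+1/n)^n$ in fact tends to $e$, not $1$.

```python
import math

def compound(n: int) -> float:
    """Evaluate (1 + 1/n)^n for a given n."""
    return (1.0 + 1.0 / n) ** n

# Substituting the inner limit first would predict 1^infinity "=" 1,
# but the sequence actually converges to e = 2.71828...
for n in (10, 1_000, 1_000_000):
    print(n, compound(n))

print("e =", math.e)
```

The printed values approach $e$, which is exactly why the inner limit cannot be taken first here.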
The identity $$\lim_{x\rightarrow\infty} f(x)^{g(x)}=\left(\lim_{x\rightarrow\infty}f(x)\right)^{\lim_{x\rightarrow\infty}g(x)}$$ holds only if both limits exist (in the narrow sense, i.e. are finite)! In your second example the exponent limit is $\lim_{n\rightarrow\infty}n=\infty$, which does not exist (in the narrow sense). In your first example I think the expression inside the brackets should be $$\frac{\bigl(f(x)-f(1)\bigr)(x-1)}{(x-1)f(1)},$$ with parentheses around the difference in the numerator.
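A one-line justification of the rule above (a standard argument, not spelled out in the answer): write the power as an exponential and use the continuity of $\exp$ and $\log$, assuming the base limit is positive.

```latex
f(x)^{g(x)} = e^{\,g(x)\ln f(x)}
\quad\Longrightarrow\quad
\lim_{x\to\infty} f(x)^{g(x)}
  = \exp\!\Bigl(\lim_{x\to\infty} g(x)\cdot\lim_{x\to\infty}\ln f(x)\Bigr)
  = a^{\,b},
```

where $f(x)\to a>0$ and $g(x)\to b$ are both finite. If either limit fails to exist, e.g. $g(n)=n\to\infty$ while $f\to 1$, then $g\ln f$ is an indeterminate $\infty\cdot 0$ form and the rule breaks down, which is exactly the $(1+1/n)^n$ situation.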