My question is about the proof of the Central Limit Theorem as given in "Probability with Martingales" by David Williams (18.4). $\theta$ is fixed below. He says as $n\to\infty$ $$n\log\left( 1 - \frac{1}{2}\frac{\theta^2}{n}+o\left(\frac{\theta^2}{n}\right) \right) = n\left(-\frac{1}{2}\frac{\theta^2}{n}+o\left(\frac{\theta^2}{n}\right)\right)\to-\frac{1}{2}\theta^2$$
He justifies the first equality by the inequality that says $$\lvert\log(1+z)-z\rvert\leq \lvert z\rvert^2,\qquad \lvert z\rvert \leq \frac{1}{2}$$ I don't see how he is applying this inequality. So that is my first question.
My second question is about the limit expression. If I start with
$$f_n = n\left(-\frac{1}{2}\frac{\theta^2}{n}+o\left(\frac{\theta^2}{n}\right)\right)$$
can I then say the following? $$\frac{1}{n}\left(f_n + \frac{1}{2}\theta^2\right) = o\left(\frac{\theta^2}{n}\right)$$ $$\lim_{n\to\infty}\frac{\frac{1}{n}\left(f_n + \frac{1}{2}\theta^2\right)}{\frac{\theta^2}{n}} = \lim_{n\to\infty}\frac{\left(f_n + \frac{1}{2}\theta^2\right)}{\theta^2} = 0$$ Hence $$f_n \to -\frac{1}{2}\theta^2$$ I find the $o$ notation rather confusing. That is why I am asking if my interpretation of how Williams uses it in his proof is correct.
Little oh is your friend. It allows manipulation of limits in a much more intuitive way, instead of having to rely on algebraic manipulations, which are always error-prone. Starting from your second question, if $f_n$ is given as it is, then you can write:
$$f_n=-\frac{n\theta^2}{2n}+n\cdot o\left(\frac{\theta^2}{n}\right)$$ The first term is just $-\frac{\theta^2}{2}$, and the second term tends to zero as $n\to\infty$ by the definition of little-oh: $g(n)=o(h(n))$ if and only if $g(n)/h(n)$ tends to zero as $n\to\infty$. Therefore $o(\frac{1}{n})$ (one usually omits constants inside the oh-notation) is a quantity that, when divided by $1/n$, i.e., multiplied by $n$, tends to zero as $n\to\infty$. In other words, its rate of convergence to zero as $n\to\infty$ is "faster" than the rate of convergence of $1/n$ to zero. So if you multiply it by $n$, it will still converge to zero, and this is precisely what's done in the proof above. All this rather verbose description is usually omitted in proofs, as the little-oh notation becomes familiar.
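If it helps to see this numerically, here is a minimal sketch (not part of Williams's proof): the concrete choice $g(n)=1/n^2$ is an illustrative example of an $o(1/n)$ quantity, and we check that both $g(n)/(1/n)$ and $n\cdot g(n)$ shrink as $n$ grows.

```python
# Illustrative example only: g(n) = 1/n^2 is one particular o(1/n) quantity.
def g(n):
    return 1.0 / n**2

for n in [10, 100, 1000, 10000]:
    ratio = g(n) / (1.0 / n)   # g(n)/(1/n) -> 0 certifies g(n) = o(1/n)
    scaled = n * g(n)          # multiplying an o(1/n) term by n still vanishes
    print(n, ratio, scaled)
```

Both printed columns are $1/n$ here, so they visibly tend to zero, which is exactly the property used when $n\cdot o(1/n)\to 0$ in the proof.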
Now let's practice with your first question. Using the inequality you wrote, we have:
$$-|z|^2\leq \log(1+z)-z\leq |z|^2$$
Put $z_n=-\frac{1}{2}\frac{\theta^2}{n}+o(\frac{\theta^2}{n})$. Since $z_n\to 0$, we have $|z_n|\leq\frac{1}{2}$ for all large enough $n$, so the inequality applies. Moreover $z_n^2$ only contains terms of order of magnitude $1/n^2$ or smaller, and so $|z_n|^2=o(1/n)$. So:
$$o(1/n)\leq \log(1+z_n)-z_n\leq o(1/n)$$
hence $\log(1+z_n)=z_n+o(1/n)$. Observe that the same symbol $o(1/n)$ is used for possibly different quantities, but we can still add them as if they were the same, and from an inequality of the form:
$$o(1/n)+z_n\leq \log(1+z_n)\leq z_n+o(1/n)$$
deduce that $\log(1+z_n)$ actually equals $z_n+o(1/n)$. This kind of manipulation may look strange at first and may require some practice to get used to, but once you do it saves a lot of computation.
Anyway, now that you have $\log(1+z_n)=z_n+o(1/n)$, with $z_n$ as above, the rest follows.
So
$$n\log(1-\frac{1}{2}\frac{\theta^2}{n}+o(\frac{\theta^2}{n}))=n\log(1+z_n)=n(z_n+o(1/n))\to-\frac{1}{2}\theta^2$$
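As a sanity check (again, not part of the proof), one can evaluate the left-hand side numerically. The value $\theta=1.7$ is an arbitrary illustrative choice, and the $o(\theta^2/n)$ remainder is dropped, which does not affect the limit.

```python
import math

theta = 1.7                      # any fixed theta; 1.7 is an arbitrary choice
target = -0.5 * theta**2         # the claimed limit -theta^2/2

for n in [10**2, 10**4, 10**6]:
    z_n = -0.5 * theta**2 / n    # dropping the o(theta^2/n) remainder in this sketch
    val = n * math.log(1 + z_n)
    print(n, val, abs(val - target))
```

The gap between `val` and `target` shrinks roughly like $1/n$, consistent with the $n\cdot o(1/n)$ error term in the argument above.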