Perhaps this question is very specific to the book 'Probability Theory' by Varadhan.
I'm trying to understand the proofs of Lindeberg's Theorem and the Accompanying Laws Theorem. The setup for both is that we are trying to find a limiting distribution of a sequence of random variables $$ Z_1 = X_{1,1} + \dots + X_{1,k_1}\\ Z_2 = X_{2,1} + \dots + X_{2,k_2}\\ \vdots\\ Z_n = X_{n,1} + \dots + X_{n,k_n}\\ \vdots $$ with some assumptions about independence and uniform infinitesimality of the $X_{n,j}$. Let $\phi_{n,j}$ denote the characteristic function of each $X_{n,j}$. The book then makes some estimates and computations for the product $$\hat{\mu}_n = \prod\limits_j \phi_{n,j}. $$ (Note this is the characteristic function of $Z_n$.) One of these computations, at the bottom of page 51 and again at the top of page 56, takes the log of this product and uses $\log(ab) = \log(a) + \log(b)$. In general, though, I feel this needs some justification. We are taking the log of a product of complex numbers, and if the arguments add up to a large angle, the identity $$\log\left(\prod a_i\right) = \sum\log(a_i)$$ no longer seems valid. (We might have to add some multiple of $2\pi i$, right?)
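To make my worry concrete, here is a quick check (my own toy example in Python, nothing to do with the book) where the principal-branch identity fails:

```python
import cmath, math

a = cmath.exp(1.9j)   # arg(a) = 1.9
b = cmath.exp(1.9j)   # arg(b) = 1.9, so arg(a*b) = 3.8 > pi

lhs = cmath.log(a * b)             # principal log wraps: imaginary part is 3.8 - 2*pi
rhs = cmath.log(a) + cmath.log(b)  # imaginary part is 3.8
# lhs and rhs differ by exactly 2*pi*i
```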
I hope someone can clarify my misunderstanding here, perhaps by providing a justification that I missed. Additionally, I have noticed that another book (Billingsley) avoids logs in its proof of Lindeberg's theorem, but I find it frustrating to change books and notation, and I don't see the Accompanying Laws theorem in Billingsley.
Thanks for reading and apologies for the wall of text.
Edit: The proofs I'm referring to can be found in some notes from the author's website. Theorems 3.18 and 3.19.
The problem of what the logarithm of a characteristic function is, is addressed in Eugene Lukacs's *Characteristic Functions*.
It is true that in general, for $z\in\mathbb C{\setminus}\{0\}$, the value of the integral $\int_\gamma \frac {du}u$ depends on the path $\gamma$ connecting $1$ to $z$. But in the context of Varadhan's proof, the rule stated by Lukacs saves the day. If a characteristic function $f$ does not vanish anywhere on $[-T,T]$, there is a unique continuous function $\ell(t)$ on $[-T,T]$ with $\ell(0)=0$ such that $\exp(\ell(t))=f(t)$ on $[-T,T]$. We then define $\log f$ to be that $\ell$. This amounts to using the contour $\gamma:[0,t]\to\mathbb C{\setminus}\{0\}$, connecting $1$ to $f(t)$, given by $\gamma(u)=f(u)$ on $[0,t]$, in the formula $\log f(t) = \int_\gamma \frac{du}u$. Since all the characteristic functions in Varadhan's proof are differentiable, this takes the form $$\log f(t) = \int_0^t \frac {f'(u)}{f(u)}\,du.$$
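As a sanity check (my own numerical illustration, not from either book), this integral can be approximated directly. For the Poisson characteristic function $f(t)=\exp[\lambda(e^{it}-1)]$ the continuous logarithm is $\lambda(e^{it}-1)$, and once $\lambda\sin t$ exceeds $\pi$ this disagrees with the principal-branch $\log f(t)$ by $2\pi i$:

```python
import cmath

lam = 5.0  # illustrative Poisson rate

def f(t):
    # characteristic function of Poisson(lam)
    return cmath.exp(lam * (cmath.exp(1j * t) - 1))

def log_cf(f, t, n=4000, eps=1e-6):
    # approximate log f(t) = int_0^t f'(u)/f(u) du by the trapezoidal rule,
    # with f' replaced by a central difference
    def g(u):
        return (f(u + eps) - f(u - eps)) / (2 * eps) / f(u)
    h = t / n
    s = 0.5 * (g(0.0) + g(t))
    s += sum(g(k * h) for k in range(1, n))
    return s * h

t = 1.0
ell = log_cf(f, t)
exact = lam * (cmath.exp(1j * t) - 1)
# ell matches exact, while cmath.log(f(t)) has its imaginary part
# wrapped into (-pi, pi] and so sits 2*pi below the true value
```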
If $g$ is another characteristic function that also does not vanish anywhere on $[-T,T]$, and $h=fg$, this convention allows us to conclude $\log h = \log f + \log g$, because for each $t\in[-T,T]$, $$\exp(\log h(t))=h(t)=f(t)g(t)=\exp(\log f(t))\exp(\log g(t)) = \exp(\log f(t)+\log g(t))\tag{L},$$ and because the sum of the two continuous functions $\log f(t)$ and $\log g(t)$ is continuous.
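One way to see (L) in action numerically (again my own illustration): compute the continuous logarithm by accumulating the argument of $f$ along the curve $u\mapsto f(u)$, and check that the logs of two non-vanishing characteristic functions add up to the log of their product, precisely where the principal branch would fail:

```python
import cmath, math

def dist_log(f, t, n=2000):
    # continuous (distinguished) logarithm: follow u -> f(u) on [0, t]
    # and accumulate the argument step by step; each step is small,
    # so the principal value of each increment is the correct one
    pts = [f(k * t / n) for k in range(n + 1)]
    arg = sum(cmath.phase(b / a) for a, b in zip(pts, pts[1:]))
    return math.log(abs(pts[-1])) + 1j * arg

def poisson_cf(lam):
    return lambda t: cmath.exp(lam * (cmath.exp(1j * t) - 1))

f, g = poisson_cf(3.0), poisson_cf(4.0)
h = lambda t: f(t) * g(t)   # h is the Poisson(7) characteristic function

t = 1.0
# dist_log(h, t) == dist_log(f, t) + dist_log(g, t), even though the
# principal log of h(t) is off by 2*pi*i here (since 7*sin(1) > pi)
```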
Here is how this fits into Varadhan's argument. There are characteristic functions $\phi_{n,j}(t)$ and $\hat\mu_n=\prod_{j=1}^n\phi_{n,j}$, and other functions $\psi_{n,j}$ with $\psi_{n,j}(t)=\exp[\phi_{n,j}(t)-1]$ and $\psi_n=\prod_{j=1}^n \psi_{n,j}$. It is supposed that for each finite $T$, $$\lim_{n\to\infty}\sup_{|t|\le T} \sup_{1\le j\le n}|\phi_{n,j}(t)-1|=0\tag{H1}$$ and $$\sup_n\sup_{|t|\le T}\sum_{j=1}^n|\phi_{n,j}(t)-1|<\infty. \tag{H2}$$ Varadhan starts a long chain of deductions from these, the first of which puzzles the OP: "this would imply that" $$\lim_{n\to\infty}\sup_{|t|\le T}|\log \hat\mu_n(t)-\log\psi_n(t)|\le \lim_{n\to\infty}\sup_{|t|\le T}\sum_{j=1}^n|\log \phi_{n,j}(t)-[\phi_{n,j}(t)-1]|\dots$$
Here is why this first step follows. For a given $T$ and all $n$ sufficiently large, we have $|\phi_{n,j}(t)-1|\le 1/2$ on $[-T,T]$, so all the functions in sight are non-vanishing on $[-T,T]$, and the recipe (L) tells us that $\log\hat\mu_n(t)=\sum_j \log\phi_{n,j}(t)$ and $\log\psi_n(t)=\sum_j \log \psi_{n,j}(t)$. (The $\psi_{n,j}$ and $\psi_n$ are also characteristic functions, non-vanishing on $[-T,T]$, so (L) applies to them, too.) Moreover, $t\mapsto\phi_{n,j}(t)-1$ is continuous and vanishes at $t=0$, so by the uniqueness of the continuous logarithm, $\log\psi_{n,j}(t)=\phi_{n,j}(t)-1$. So we have $$\begin{align*}\left|\log \hat\mu_n(t)-\log\psi_n(t)\right| &= \left|\sum_{j=1}^n \log\phi_{n,j}(t) - \sum_{j=1}^n \log\psi_{n,j}(t)\right|\\&\le \sum_{j=1}^n \left|\log\phi_{n,j}(t)- \log\psi_{n,j}(t)\right|=\sum_{j=1}^n \left|\log\phi_{n,j}(t)-[\phi_{n,j}(t)-1]\right|\end{align*}$$ and so on.
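The "and so on" then runs through the elementary estimate $|\log(1+z)-z|\le|z|^2$ for $|z|\le 1/2$ (my paraphrase of the standard step; Varadhan's constants may differ), applied with $z=\phi_{n,j}(t)-1$, at which point (H1) and (H2) finish the job. A quick spot-check of that estimate:

```python
import cmath, math, random

# spot-check the classical bound |log(1+z) - z| <= |z|^2 for |z| <= 1/2,
# which controls each summand once sup_j |phi_{n,j}(t) - 1| <= 1/2;
# note 1 + z stays in the right half-plane, so the principal log is the
# continuous one here
random.seed(1)
worst = 0.0
for _ in range(10_000):
    r = 0.5 * random.random()
    z = r * cmath.exp(2j * math.pi * random.random())
    err = abs(cmath.log(1 + z) - z)
    worst = max(worst, err - abs(z) ** 2)
# worst stays <= 0: the bound holds at every sampled point
```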