A recent Mathologer video about the sum $1+2+3+\ldots = -1/12$ reawakened some unease I have about using analytic continuation to assign values to divergent sums. Specifically, I am unsure whether the analytic continuations of different functions give the same result, or whether they don't and it's something else that I've not understood. I'll lay out below what I think must be true, to see if anyone knows whether it is true, or why it doesn't have to be.
Proposition
Let $\Omega_1,\Omega_2\subset \mathbb{C}$ be open and connected, and let $u_k:\Omega_1 \rightarrow \mathbb{C}$ and $v_k:\Omega_2 \rightarrow \mathbb{C}$, $k\in\{1,2,\ldots\}$, be analytic. Define $$ f(z) = \sum_{k=1}^\infty u_k(z) \\ g(z) = \sum_{k=1}^\infty v_k(z) $$ and suppose that the $u_k,v_k$ are such that both series converge and define analytic functions on some (possibly different) open subsets of $\Omega_1$ and $\Omega_2$ respectively. Suppose also that $f$ and $g$ extend analytically to all of $\Omega_1$ and $\Omega_2$, and denote these (necessarily unique, since the domains are connected) continuations by $\tilde{f},\tilde{g}$. In addition suppose that there exist $z_1\in \Omega_1$ and $z_2\in \Omega_2$ such that, for a given (not necessarily convergent) sequence $a_1,a_2,\ldots$, $$ u_k(z_1)=a_k \\ v_k(z_2)=a_k $$ for all $k$. Then $\tilde{f}(z_1) = \tilde{g}(z_2)$.
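(To fix ideas, here is the motivating instance spelled out in this template: take $u_k(z) = k^z$ on $\Omega_1 = \mathbb{C}$, so that $f(z) = \sum_{k=1}^\infty k^z$ converges for $\operatorname{Re} z < -1$ and equals $\zeta(-z)$ there. With $z_1 = 1$ we get $a_k = k$, and the continuation gives $\tilde{f}(1) = \zeta(-1) = -\frac{1}{12}$, the value the video assigns to $1+2+3+\ldots$.)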
Questions
It seems to me that, if the above is true, then the analytic continuation of a function has meaning with regard to infinite sums; if it is not, then it doesn't.
Is the above true?
If yes: prove it (or justify it intuitively, link to a proof, reference a book, etc.)
If no: why are the results obtained by analytically continuing the Riemann zeta function the ones we go with?
It need not be the case that $\tilde{f}(z_1) = \tilde{g}(z_2)$, even when both $f$ and $g$ can be uniquely analytically continued to the respective $z_j$.
For $k \in \mathbb{N}\setminus \{0\}$, define (using the real-valued logarithms)
$$u_k(z) = (k+1)\cdot k^z \qquad\text{and}\qquad v_k(z) = k\cdot (k+1)^z\,.$$
The $u_k$ and $v_k$ are all entire functions, and we have $u_k(1) = v_k(1) = k(k+1)$ for all $k$. For $\operatorname{Re} z < -2$, we can compute
$$f(z) = \sum_{k = 1}^{\infty} (k+1)k^z = \sum_{k = 1}^{\infty} k^{z+1} + \sum_{k = 1}^{\infty} k^z = \zeta(-z-1) + \zeta(-z)$$
and (shifting the summation index, and noting that the $m = 1$ term vanishes and may be added freely)
$$g(z) = \sum_{k = 1}^{\infty} k(k+1)^z = \sum_{m = 2}^{\infty} (m-1)m^z = \sum_{m = 1}^{\infty} (m-1)m^z = \sum_{m = 1}^{\infty} m^{z+1} - \sum_{m = 1}^{\infty} m^z = \zeta(-z-1) - \zeta(-z)\,.$$
Hence, using $\zeta(-2) = 0$, we get $\tilde{f}(1) = \zeta(-2) + \zeta(-1) = \zeta(-1) = -\frac{1}{12}$ and $\tilde{g}(1) = \zeta(-2) - \zeta(-1) = +\frac{1}{12}$.
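For a concrete sanity check (my addition, a sketch assuming the `mpmath` Python library): sum both series directly at a point with $\operatorname{Re} z < -2$, compare against the zeta expressions above, and then evaluate the two continuations at $z = 1$.

```python
# A sketch of a numerical check, assuming mpmath is installed.
from mpmath import mp, mpf, inf, nsum, zeta

mp.dps = 25

z = mpf(-4)  # Re z < -2, inside the half-plane of convergence

# Direct summation of f and g versus the zeta expressions:
f_direct = nsum(lambda k: (k + 1) * k**z, [1, inf])
g_direct = nsum(lambda k: k * (k + 1)**z, [1, inf])
print(f_direct, zeta(-z - 1) + zeta(-z))  # the two values agree
print(g_direct, zeta(-z - 1) - zeta(-z))  # the two values agree

# The analytic continuations evaluated at z = 1 disagree:
print(zeta(-2) + zeta(-1))  # -1/12
print(zeta(-2) - zeta(-1))  # +1/12
```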
We don't always go with them. For one thing, this method applies only to some sequences: one cannot obtain a value for the series of a fast-growing sequence with it (for example, it assigns no value to $\sum_k k!$, since the Dirichlet series $\sum_k k!\, k^{-s}$ converges for no $s$). Even when it can be used, a different method may be more inviting.
There are many summation methods for divergent series, some of which look more reasonable than others. If the more reasonable-looking methods assign the same value to a divergent series, that's an indication that this particular value makes more sense than others. (For every divergent series and every complex number $w$, there are summation methods assigning the value $w$ to the series; typically, such methods don't even pretend to be reasonable.) The same holds if some of these methods assign the same value and the others assign no value at all to the series in question. If the more reasonable-looking methods assign different values to the series, things get hairier: there may be reasons to prefer one of these methods to the others in particular circumstances, but such a choice ought to be justified.
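As a toy illustration of two reasonable-looking methods agreeing (my example, not from the answer above): Abel summation and analytic continuation of the Dirichlet eta function both assign $\frac{1}{2}$ to Grandi's series $1 - 1 + 1 - \ldots$. A minimal sketch, again assuming `mpmath`:

```python
# Two summation methods agreeing on Grandi's series 1 - 1 + 1 - ...
# (a sketch assuming mpmath is installed).
from mpmath import mp, mpf, altzeta

mp.dps = 20

def abel_partial(x, terms=20000):
    """Truncated Abel sum: sum_{k>=0} (-1)^k x^k for 0 < x < 1."""
    x = mpf(x)
    return sum((-1)**k * x**k for k in range(terms))

# Abel summation: let x -> 1 from below; the values approach 1/2.
for x in ('0.9', '0.99', '0.999'):
    print(x, abel_partial(x))

# Analytic continuation: altzeta is mpmath's Dirichlet eta function
# eta(s) = sum_{k>=1} (-1)^(k-1) k^(-s), convergent for Re s > 0;
# its continuation gives eta(0) = 1/2, matching the Abel value.
print(altzeta(0))
```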