Conditions for establishing if a multivariate function is not divergent at a point and iterated limits

Consider a multivariate function \begin{equation} f(x;x_1,\ldots,x_n)=\frac{g(x;x_1,\ldots,x_n)}{\prod_{i=1}^n (x-x_i)}, \end{equation} where $g$ is a function, not necessarily a polynomial, which is well-behaved in the limits $x_i\rightarrow x$ (say it is $\mathcal{C}^\infty$). Is it possible to conclude that $f$ is finite in all the limits $x_i\rightarrow x$ if \begin{equation} c_{j_1,\ldots,j_n}=\lim_{\{x_i\rightarrow x\}}\Bigg[\frac{\partial^j}{\partial x_1^{j_1}\cdots\partial x_n^{j_n}}\prod_{i=1}^n (x-x_i)\, f\Bigg]=0, \ \ \forall j_1,\ldots,j_n \ \text{ such that } \ j_1+\cdots+j_n=j, \ \forall j\le n? \end{equation} This is my main question and the one deserving of the "useful click" thingy.

Now for my second question: let $v^\star\in\mathbb{R}$, let $t_1,t_2:\mathbb{R}\rightarrow \mathbb{R}$, and suppose $t_1(v^\star)=t_2(v^\star)$. Under what conditions is it true (I'm happy to also look at suggested literature) that \begin{equation} \lim_{v\rightarrow v^\star}\lim_{t\rightarrow t_1(v)} h(t)=\lim_{v\rightarrow v^\star}\lim_{t\rightarrow t_2(v)} h(t)? \end{equation}

There are 2 best solutions below

On BEST ANSWER

I'll answer your first question in this post. I am still not completely sure of my interpretation, but rather than continue to play a game of 20 questions, I'll just take a stab at it.

First, let me introduce you to multi-index notation. A multi-index is a tuple $\alpha = (\alpha_1, \alpha_2, ..., \alpha_n) \in \Bbb N^n$ (where I am including $0 \in \Bbb N$). Define $|\alpha| := \sum_i \alpha_i$, and define $$\partial^{\alpha}g := \dfrac{\partial^{|\alpha|}g}{\partial x^\alpha} := \dfrac{\partial^{|\alpha|}g}{\partial x_1^{\alpha_1}\ldots\partial x_n^{\alpha_n}}$$ It makes for a much cleaner notation than you are using. Your condition then becomes $$\partial^\alpha g(x,x, ..., x) = 0 \quad \forall \alpha\text{ with }|\alpha| \le n$$ (assuming $g$ is at least $C^n$, so the limit is the same as the function value).
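As a concrete illustration of the notation (a sketch using sympy; the helper name `partial_alpha` is mine, not a standard API), a multi-index derivative is just successive differentiation, $\alpha_i$ times in the $i$-th variable:

```python
import sympy as sp

def partial_alpha(g, variables, alpha):
    """Apply the mixed partial derivative encoded by the multi-index alpha."""
    for var, order in zip(variables, alpha):
        g = sp.diff(g, var, order)  # diff 'order' times in 'var' (0 is a no-op)
    return g

x1, x2 = sp.symbols('x1 x2')
g = x1**2 * x2              # example g with n = 2
# alpha = (1, 1): differentiate once in x1 and once in x2
print(partial_alpha(g, (x1, x2), (1, 1)))   # 2*x1
```

With this helper, checking the condition $\partial^\alpha g(a) = 0$ for all $|\alpha| \le n$ is a finite loop over multi-indexes.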

There are some other ways this problem can be simplified notationally, which just makes it easier to deal with. First, $x$ is just a constant in this problem, not a variable, so there is no reason to include it in the parameter list. Also, instead of having the $x_i$ all converge to the same value, you can generalize the problem by giving them each their own limits $a_i$. This also frees up $x$, which can now be used to represent $x = (x_1, x_2, ..., x_n)$. Similarly, call $a = (a_1, a_2, ..., a_n)$.

So we have an $a \in \Bbb R^n$, a neighborhood $U$ of $a$, and a function $g \in C^n(U)$ which satisfies $\partial^\alpha g(a) = 0$ for all multi-indexes $\alpha$ with $|\alpha| \le n$. The function $f$ satisfies $$f(x) = \dfrac {g(x)}{\prod_{i=1}^n(x_i - a_i)}, \quad x \in U,\ x_i \ne a_i \text{ for all } i$$

And as I understand it, your question is whether this is enough to show that $\lim_{x \to a} f(x)$ converges to a finite value.

The answer is no. For an example, let $a = \mathbf 0$ and $g(x) = \|x\|^{2n} = \left(\sum_i x_i^2\right)^n$, so that $$f(x) = \dfrac {\left(\sum_i x_i^2\right)^n}{\prod_i x_i}$$ Since $g$ is homogeneous of degree $2n$, every partial derivative of $g$ of order $\le 2n-1$ vanishes at $\mathbf 0$, so in particular $\partial^\alpha g(\mathbf 0) = 0$ for all $|\alpha| \le n$. Note that $g(x) = 0$ only for $x = \mathbf 0$. But the denominator of $f$ is $0$ wherever $x_1 = 0$. This means $f$ is unbounded in every neighborhood of $\mathbf 0$, so $\lim_{x \to \mathbf 0} f(x)$ does not exist.
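A quick sanity check of this divergence (a sketch using sympy; the choice of path is mine): fix $x_1 = 1$ and let $x_2 \to 0^+$, so the numerator stays positive while the denominator vanishes. The quotient blows up whether the numerator is $\|x\|^2$ or a higher power of it:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# f = g / (x1*x2) with n = 2; along x1 = 1, x2 -> 0+ the numerator
# tends to a positive constant while the denominator vanishes
for g in (x1**2 + x2**2, (x1**2 + x2**2)**2):
    f = g / (x1 * x2)
    L = sp.limit(f.subs(x1, 1), x2, 0, '+')
    print(g, '->', L)   # both diverge to oo
```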

What you actually need for this to hold is $g(x) = 0$ whenever $x_i = a_i$ for some $i$, together with $g \in C^n(U)$. This ensures that for all points with an $x_i = a_i$, the limit of $f$ will just be a finite multiple of $\frac{\partial g}{\partial x_i}$. Since the partial derivative is finite by the requirement $g\in C^n(U)$, $f$ must be finite also.
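For instance (a sketch using sympy; the example $g$ is my own choice), $g(x_1,x_2)=\sin x_1 \sin x_2$ vanishes on both hyperplanes $x_1=0$ and $x_2=0$, and the corresponding $f$ stays finite at the origin:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

# g is smooth and vanishes wherever x1 = 0 or x2 = 0
g = sp.sin(x1) * sp.sin(x2)
f = g / (x1 * x2)

# iterated limit toward the origin (a = (0, 0))
L = sp.limit(sp.limit(f, x1, 0), x2, 0)
print(L)   # 1
```

Here $f$ factors into two one-variable $\sin(u)/u$ terms, so the full limit also exists and agrees with the iterated one; in general an iterated limit is only a necessary check, not a proof that the full limit exists.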

FYI - since your two questions are not directly related to each other, they should really be asked in separate threads. There are a few reasons for this. The main one is that other people looking for answers to similar questions are not likely to find one that is buried in another question. However, I am not going to bother about it here. But please keep it in mind for future inquiries.

I'll answer the second question in this post:

That $t_1(v^*) = t_2(v^*)$ is immaterial, as the limits have nothing to do with the value of the function directly at the point where the limit is taken. Instead they depend on how the function behaves near that point. What you need here is $$\lim_{v\to v^*} t_1(v) = \lim_{v\to v^*} t_2(v)$$ Call the common value of those limits $a$. The first thing needed for the two limits to be equal is that they exist, and that requires that $\lim_{t \to t_1(v)} h(t)$ and $\lim_{t \to t_2(v)} h(t)$ converge when $t_1(v)$ and $t_2(v)$ are sufficiently close to $a$. Otherwise, the outer limits would not even be defined. If you want a condition that doesn't depend on the specifics of $t_1$ and $t_2$, then you can use

  • There exists a neighborhood $U$ of $a$ such that $\lim_{t \to x} h(t)$ converges for all $x \in U, x \ne a$.

(Because limits are not concerned with the value of the function at the point itself, existence of the limit does not require convergence at $a$ itself.)

If you define $g(x) = \lim_{t \to x} h(t)$ for $x \in U$, then your limit equation can be restated as $$\lim_{v \to v^*} g(t_1(v)) = \lim_{v \to v^*} g(t_2(v))$$ $t_1$ and $t_2$ are two ways of approaching $a$, and you need $g$ to have the same limit along each. Again, if you want a condition that doesn't rely on the exact behavior of $t_1$ and $t_2$, that would simply be that the full limit $\lim_{t \to a} g(t)$ converges. But by the definition of $g$, this is the same as saying $\lim_{t \to a} h(t)$ converges, which just means dropping the $x \ne a$ exception from the previous condition.

To summarize, $$\lim_{v \to v^*} \lim_{t\to t_1(v)} h(t) = \lim_{v \to v^*} \lim_{t\to t_2(v)} h(t)$$ if

  • There is a point $a$ such that $\lim_{v\to v^*} t_1(v) = \lim_{v\to v^*} t_2(v) = a$, and
  • there is a neighborhood $U$ of $a$ such that $\lim_{t \to x} h(t)$ converges for all $x \in U$.

These are sufficient conditions. It is possible that the two limits will exist and be equal even when these conditions fail (for example if the limits of the $t_i$ are different, but $h$ happens to converge to the same value at both places).
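To see the sufficient conditions in action (a sketch using sympy; the choices of $h$, $t_1$, $t_2$ are mine): take $h(t)=\sin(t)/t$, $v^* = 0$, $t_1(v)=v$, and $t_2(v)=2v$, so both inner limit points approach $a = 0$, where $\lim_{t\to 0} h(t) = 1$ exists:

```python
import sympy as sp

t, v = sp.symbols('t v', real=True)

h = sp.sin(t) / t    # removable singularity at t = 0; the limit there is 1
t1 = v               # t1(v) -> 0 as v -> 0
t2 = 2 * v           # t2(v) -> 0 as v -> 0

# for v != 0, h is continuous at t1(v) and t2(v), so the inner limit
# is just evaluation of h at that point
inner1 = h.subs(t, t1)   # sin(v)/v
inner2 = h.subs(t, t2)   # sin(2*v)/(2*v)

L1 = sp.limit(inner1, v, 0)
L2 = sp.limit(inner2, v, 0)
print(L1, L2)   # 1 1
```

Both iterated limits equal $1$, as the two bullet conditions predict.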