The picture gives part of the statement of the Implicit Function Theorem. I know that a nonzero determinant corresponds to linear independence of the equations in $\mathbb{R}^k$, but other than that I cannot see why $\det D_yF(a,b)$ must be nonzero in order to solve $F(x,y)=0$ for $y$.
Question regarding the requirement (determinant) for implicit function theorem
410 Views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)

What I'll provide is motivation for why we impose the condition $\det (D_yF(a,b)) \neq 0$, and how one might come up with it. For the full explanation of where this fact is used, refer to the proof in your book.
I hope you know that differential calculus is (roughly speaking) the theory of locally approximating functions by linear ones, because linear things are nice to work with. So the key idea behind the implicit function theorem, the inverse function theorem, or really any "big" theorem in differential calculus is to say to yourself: "first solve the problem in the special case where everything is linear, then use linear approximation to transfer that solution to the general case."
So, in the spirit of this guiding principle, we consider a very special case: let $A \in M_{k \times n}(\Bbb{R})$ and $B \in M_{k \times k}(\Bbb{R})$, and define the function $G: \Bbb{R}^n \times \Bbb{R}^k \to \Bbb{R}^k$ by \begin{equation} G(x,y) = Ax + By. \end{equation} The question at hand is: if $G(x,y) = 0$, can we solve for $y$ in terms of $x$? The answer is pretty simple in this case, because if the matrix $B$ is invertible (i.e. $\det B \neq 0$) then \begin{equation} G(x,y) = 0 \end{equation} implies that \begin{align} Ax + By = 0, \end{align} and hence \begin{align} y = - (B^{-1}A) x. \end{align}
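As a quick numerical sanity check of this linear case (the matrices $A$ and $B$ below are arbitrary illustrative choices, not from the question), one can solve $Ax + By = 0$ for $y$ with a standard linear solve:

```python
import numpy as np

# Illustrative sizes: n = 3 (the x variables), k = 2 (the y variables).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])   # k x n
B = np.array([[2.0, 1.0],
              [1.0, 1.0]])        # k x k, det B = 1, so B is invertible

x = np.array([1.0, -1.0, 2.0])

# Since det B != 0, G(x, y) = Ax + By = 0 has the unique solution
# y = -B^{-1} A x.  Prefer a linear solve to forming the inverse explicitly.
y = np.linalg.solve(B, -A @ x)

# Check: G(x, y) should vanish (up to floating-point error).
residual = A @ x + B @ y
print(residual)   # ~ [0, 0]
```

Using `np.linalg.solve` rather than `np.linalg.inv` is the standard idiom: it is more accurate and fails loudly (raises `LinAlgError`) exactly when $\det B = 0$, mirroring the hypothesis of the theorem.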
So, to solve the problem in this special case, we had to assume that $B$ is invertible (i.e. $\det B \neq 0$). This is the key insight gained by solving the special linear case!
This is useful because in general the function $F$ you have been given in the theorem might be very complicated, so you don't know what it really looks like. However, near a point $(a,b)$, where $F(a,b) = 0$, we can use the power of differential calculus to say \begin{align} F(x,y) \approx D_xF(a,b) \cdot (x-a) + D_yF(a,b) \cdot (y-b) \quad \text{if $(x,y)$ is near $(a,b)$} \tag{$*$} \end{align} (the approximation being better the closer $(x,y)$ is to $(a,b)$)
Now, the actual question you're being asked is: if $F(x,y) = 0$, can we solve for $y$ in terms of $x$ (at least for $(x,y)$ close to $(a,b)$)? This is a difficult problem, but we can use the linear approximation ($*$) to get a rough idea: we have \begin{align} 0 &= F(x,y) \\ & \approx D_xF(a,b) \cdot (x-a) + D_yF(a,b) \cdot (y-b) \end{align} Notice how this is almost like the situation we had above with the function $G$. Here, $A = D_xF(a,b)$ and $B = D_yF(a,b)$. So, if in this general case we impose the condition that $D_yF(a,b)$ is invertible (i.e. its determinant is nonzero), then we get \begin{align} y \approx - (D_yF(a,b))^{-1} \cdot D_xF(a,b) \cdot (x-a) + b \end{align}
Thus, we have used our knowledge of the exact solution in the special linearized case to get a "rough approximate solution" in the general case. Now, all that remains to rigorously prove the theorem is to do some detailed and technical analysis of all the error terms wherever I said $\approx$ above, and to show that even in the general case, we really can solve for $y$ in terms of $x$, provided that $D_yF(a,b)$ is invertible (your book should cover all the detailed arguments).
This is the motivation for why we put $\det D_yF(a,b) \neq 0$ as part of our hypothesis, and it also outlines the thought process of how one might come up with such a requirement. Of course, after coming up with such a requirement, one can come up with examples to show that if this condition is not satisfied, then we cannot solve for $y$ in terms of $x$.
Indeed a simple example to show that the assumption $\det D_yF(a,b) \neq 0$ is needed for the theorem to be true is the following:
let $k=n=1$, define $F: \Bbb{R} \times \Bbb{R} \to \Bbb{R}$ by $F(x,y) = x^2 + y^2 - 1$. Choose $(a,b) = (1,0)$. Then, clearly $F(1,0) = 0$ and $D_yF(1,0) = 0$ (this is a $1 \times 1$ matrix). So, the determinant is also $0$.
Now, notice that the set of $(x,y)$ satisfying $F(x,y) = 0$ is the unit circle in the plane. It should be clear pictorially that near $(1,0)$ it is impossible to solve for $y$ as a function of $x$.
Here the determinant is $0$ and solving for $y$ fails, which shows why the determinant condition is required. (However, notice that $D_xF(1,0) = 2 \neq 0$, so we can solve for $x$ as a function of $y$.)
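The failure at $(1,0)$ can also be checked numerically (a small sketch; the sample points $x = 0.99$ and $x = 1.01$ are arbitrary choices on either side of $x = 1$):

```python
import numpy as np

# F(x, y) = x^2 + y^2 - 1 at (a, b) = (1, 0): D_yF(1, 0) = 2*0 = 0.
# For x just below 1 there are TWO solutions y = +/- sqrt(1 - x^2),
# and for x just above 1 there are NONE, so no single-valued function
# y(x) exists on any neighbourhood of x = 1.
x = 0.99
ys = [np.sqrt(1 - x**2), -np.sqrt(1 - x**2)]
print(ys)           # two distinct roots

x = 1.01
print(1 - x**2 < 0) # True: no real y satisfies F(x, y) = 0 here
```

So the obstruction is not merely technical: on one side of $x = 1$ the equation over-determines $y$, and on the other side it has no solution at all.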
Edit in response to comments:
Recall that in general, by definition, for any function $F: \Bbb{R}^p \to \Bbb{R}^m$, we say $F$ is differentiable at $\alpha$ if there is an $m \times p$ matrix $T$ such that \begin{equation} F(\xi) - F(\alpha) = T(\xi - \alpha) + o(\lVert\xi - \alpha \rVert). \end{equation} If $F$ is differentiable at $\alpha$, then $T$ is unique, and we denote it by the symbol $DF(\alpha)$. That is, we can approximate the change $F(\xi)-F(\alpha)$ by the linear part $DF(\alpha) \cdot (\xi -\alpha)$, and the approximation is valid up to an accuracy of little-oh.
In your particular case, write $p = n+k$, $\xi = \begin{bmatrix} x \\y \end{bmatrix} $, and write $\alpha = (a,b)$. Note that we have the following block matrix decomposition: \begin{align} DF(a,b) = \begin{bmatrix} D_xF(a,b) & D_yF(a,b) \end{bmatrix} \end{align} Hence, we get \begin{align} F(x,y) &= F(a,b) + DF(a,b) \cdot \begin{bmatrix} x-a \\ y-b \end{bmatrix} + o(\lVert (x,y) - (a,b)\rVert) \\ &= F(a,b) + \begin{bmatrix} D_xF(a,b) & D_yF(a,b) \end{bmatrix} \cdot \begin{bmatrix} x-a \\ y-b \end{bmatrix} + o(\lVert (x,y) - (a,b)\rVert) \\ &= F(a,b) + D_xF(a,b) \cdot (x-a) + D_yF(a,b) \cdot (y-b) + o(\lVert (x,y) - (a,b)\rVert) \end{align}
This is the proper statement in general, and everything is an equal sign (there are no approximations, because we already took the error term into account with the little-oh notation). In the case of the implicit function theorem, we have $F(a,b) = 0$ by assumption. Hence, we get the statement \begin{equation} F(x,y) = D_xF(a,b) \cdot (x-a) + D_yF(a,b) \cdot (y-b) + o(\lVert (x,y) - (a,b)\rVert) \end{equation}
(In my above explanation, I was too lazy to carry around the little-oh, so I just wrote $\approx$ everywhere instead)
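The little-oh claim can itself be verified numerically. Sticking with the illustrative circle example $F(x,y) = x^2 + y^2 - 1$ at the point $(a,b) = (0.6, 0.8)$ (my choice of base point and step direction, purely for demonstration), the error of the linear approximation, divided by the step length, should go to $0$ as the step shrinks:

```python
import numpy as np

# Check the little-oh statement for F(x, y) = x^2 + y^2 - 1 at (0.6, 0.8).
a, b = 0.6, 0.8
DxF, DyF = 2 * a, 2 * b

def F(x, y):
    return x**2 + y**2 - 1

for t in [1e-1, 1e-2, 1e-3]:
    # Step of length t in a fixed unit direction (0.6, 0.8).
    dx, dy = t * 0.6, t * 0.8
    # F(a, b) = 0, so the error of the linear approximation is:
    err = F(a + dx, b + dy) - (DxF * dx + DyF * dy)
    print(t, abs(err) / t)   # ratio shrinks with t, as o(t) requires
```

For this quadratic $F$ the error is exactly $dx^2 + dy^2 = t^2$, so the ratio equals $t$: a concrete instance of an $o(\lVert \xi - \alpha \rVert)$ remainder.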