basic multivariable calculus


All I knew about partial derivatives before thinking through the paragraphs below was that you hold one variable constant and differentiate with respect to the other. I picked this up from YouTube videos, where it is mentioned a lot; I have no formal education in multivariable calculus, since I'll be starting college this year (technically still in high school).

What clicked in my mind was this: in 3D space, when you fix one coordinate you get a plane, and within that plane you can take an ordinary derivative. So you find the derivative with respect to x while holding y fixed, and then the derivative with respect to y while holding x fixed, because with a multivariable function, x is one variable, y is the second, and f(x,y) can be taken on the z-axis. First of all, is this correct?

What I do not get is this: now you have two tangent lines lying in perpendicular planes at a single point. What do you do with them? Add them as vectors, or just leave them as they are?

Second question: does this mean I can't graphically interpret three-variable calculus, since imagining 4D is very difficult?

Third question: how does integration work in this setting?


There are 2 best solutions below


On your first question: it depends, but what you often want is the gradient of the function, the vector whose components are the partial derivatives with respect to each variable. The gradient points in the direction of fastest increase of the function, and its magnitude is that maximal rate of change. This generalizes single-variable calculus, where the graph lives in 2D, the derivative is the slope of the tangent line, and that slope is THE rate of change.
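To make this concrete, here is a minimal sketch that estimates a gradient numerically with central differences; the helper name `grad` and the example function f(x, y) = x^2 + y^2 are my own choices, not anything from the answer.

```python
import math

def grad(f, p, h=1e-6):
    """Estimate the gradient of f at point p: one central-difference
    partial derivative per coordinate, all others held fixed."""
    g = []
    for i in range(len(p)):
        fwd = list(p); fwd[i] += h
        bwd = list(p); bwd[i] -= h
        g.append((f(fwd) - f(bwd)) / (2 * h))
    return g

# f(x, y) = x^2 + y^2, whose exact gradient is (2x, 2y)
f = lambda p: p[0]**2 + p[1]**2
g = grad(f, [1.0, 2.0])
print(g)               # approximately [2.0, 4.0]
print(math.hypot(*g))  # magnitude: the fastest rate of increase, ~ sqrt(20)
```

The magnitude printed at the end is exactly the "rate of fastest increase" described above.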

On your second question: while you can't draw four dimensions, you can often draw the partial derivatives of a function of three variables. For example, if f(x, y, z) = xy + xz + x, then ∂f/∂x = y + z + 1, which depends only on y and z and so can be graphed as an ordinary surface in 3D.
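A quick numerical check of that example (the helper name `partial_x` is mine): holding y and z fixed and differencing in x recovers y + z + 1.

```python
def partial_x(f, x, y, z, h=1e-6):
    # central-difference estimate of the x-partial, holding y and z fixed
    return (f(x + h, y, z) - f(x - h, y, z)) / (2 * h)

f = lambda x, y, z: x*y + x*z + x   # f(x, y, z) = xy + xz + x
# df/dx = y + z + 1 does not depend on x, so the same answer comes out
# at any x; with y = 2, z = 3 it should be 6
print(partial_x(f, 5.0, 2.0, 3.0))  # ~ 6.0
```

Since the result depends only on (y, z), its graph is a plane over the yz-plane, which is exactly the "3-dimensional" picture claimed above.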

Your third question is extremely general. There are several types of integrals in multivariable calculus with various uses, e.g. finding the area of a surface or the volume of a region; these integrals have many applications, particularly in physics.


Your first impression of partial derivatives is sound, and worth keeping. Suppose you have a function depending on more than one quantity. If we treat all but one of those quantities as unchanging, then in effect we have an ordinary function of one variable, and the usual derivative with respect to that variable works as before. We may differentiate with respect to each of the other variables in the same way, treating all the remaining parameters as fixed. This is what's usually called a partial derivative. For example, for the function $e^x,$ the partial derivative with respect to $y$ is $0$ if we consider $x$ and $y$ as independent.
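That last example can be checked numerically in one line of differencing (the helper name `partial_y` is my own): since $e^x$ never mentions $y,$ wiggling $y$ changes nothing.

```python
import math

def partial_y(f, x, y, h=1e-6):
    # hold x fixed, difference in y only
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# f(x, y) = e^x does not involve y, so its y-partial vanishes everywhere
f = lambda x, y: math.exp(x)
print(partial_y(f, 1.0, 7.0))  # 0.0
```

The two function values in the numerator are identical, so the difference quotient is exactly zero, not merely small.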

What you've been seeing online is the geometric interpretation of the partial derivative. If you know the geometric meaning of the single-variable derivative, you're good to go. Suppose you have a function of two parameters, say $z=z(x,y).$ In $3$-D space its graph is in general a surface, with the $xy$-plane as the domain and the ordinates above that plane representing $z.$ Now fix one of $x$ or $y,$ say $y$: in the $xy$-plane we pick a particular value $y_0,$ which restricts the domain to the set of all points with that $y$-coordinate, i.e. the line $y=y_0$ in the $xy$-plane. The only variable is now $x,$ restricted to this line, and the ordinates over the line form a plane that intersects the surface in a curve. The derivative of $z(x,y_0)$ with respect to $x$ at a certain point $x_0$ on this line is just the slope of the tangent line to the curve cut out of the surface $z=z(x,y)$ by the plane $y=y_0,$ at the point $x=x_0.$ This is just the usual derivative, once again.
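As a sketch of that slicing picture (the surface $z=x^2+y^2$ and the helper name `slope_along_x` are my own choices): fixing $y=y_0$ leaves a one-variable curve, and the ordinary difference quotient along it gives the partial.

```python
def slope_along_x(z, x0, y0, h=1e-6):
    # slope of the curve cut out of the surface by the plane y = y0,
    # evaluated at x = x0 -- i.e. the partial dz/dx at (x0, y0)
    return (z(x0 + h, y0) - z(x0 - h, y0)) / (2 * h)

z = lambda x, y: x**2 + y**2       # a paraboloid surface
# along the slice y = 1, the curve is z = x^2 + 1, whose slope is 2x
print(slope_along_x(z, 2.0, 1.0))  # ~ 4.0
```

Note that $y_0$ enters only as a frozen constant; the computation is literally one-variable differentiation along the slice.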

What do you do with partials? For one thing, whatever you did with single-variable derivatives. But there is more if you consider the parameters $x,y$ in $z(x,y)$ as varying simultaneously: you can find a representative rate of change of $z$ with respect to both $x$ and $y,$ given by the magnitude of what's called the gradient of $z.$ And yes, this gradient is composed from all the partial derivatives of the function in question. Geometrically, the partials in each coordinate combine into a single vector -- the gradient -- which always points in the direction of greatest ascent on the graph of the function (which I'm thinking of as a surface in $3$D now).
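Here is a minimal sketch of that "direction of greatest ascent" claim, with a surface and helper names of my own choosing: assemble the gradient from the two partials, then verify that a small step along it raises the function more than steps in some other directions.

```python
import math

def grad2(f, x, y, h=1e-6):
    # the gradient is just the two partials packed into one vector
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

f = lambda x, y: x**2 + 3*y**2
gx, gy = grad2(f, 1.0, 1.0)              # close to (2, 6)
norm = math.hypot(gx, gy)
ug = (gx / norm, gy / norm)              # unit vector along the gradient

t = 1e-4                                 # small step size
step = lambda u: f(1.0 + t * u[0], 1.0 + t * u[1])
others = [(1.0, 0.0), (0.0, 1.0), (-ug[0], -ug[1])]
steepest = all(step(ug) > step(u) for u in others)
print(steepest)                          # True: the gradient direction wins
```

For a small enough step this holds against every direction, not just the three sampled here, which is exactly the geometric content of the gradient.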

As for graphics, that depends on what you want. Of course, you can't use the usual interpretation, where the domain is spanned by $n-1$ orthogonal directions and the last coordinate, orthogonal to the rest, represents the values of the function. This fails at $n=4,$ since you can't fit four orthogonal directions into $3$ dimensions. But there are many other ways to think of multivariate functions. For a function of four variables $x,y,z,t,$ for example, you may think of the domain $(x,y,z)$ as changing -- being deformed -- with respect to time $t$; think of fluids, say. For functions of even more variables, think of quantities in real life, which usually depend simultaneously on several variables. So you need not always think of a graph in the usual sense. That picture is useful for $1$-dimensional, and maybe $2$-dimensional, domains, but as you go higher you need to learn to think of functions differently -- that's if you must think graphically at all, which is not always necessary. Just think of playing with quantities depending on other quantities, and you're good to go, even with infinitely many variables.

As for integration, you need to know what it means and exactly what you're asking about. There are two likely things -- looking for a primitive, or actually integrating. First of all, integrating with respect to only one variable works as before, so that's done. As for finding a primitive function, recall that when we differentiate totally, we get a collection of partials arranged into the gradient vector; so given the gradient, we can sometimes recover the exact primitive, although such questions are not of much importance here. Now to integration proper: we proceed as before, summing up certain differentials, each the product of a value of the function at a point with an elementary differential at that point. The only difference now is that our differential is $2$-dimensional, following our domain -- geometrically, an infinitesimal area. And we proceed as before. Partition the domain into pieces, say rectangles. Pick a convenient value of the function in each rectangle. Multiply the area of each rectangle by its representative value, and add up all these products. Then make the rectangles smaller ad infinitum, while increasing their number. If this sum has a limit, it is the integral of the function over the domain. This is just the bare bones -- many interesting things arise that did not in the $1$D case.
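The rectangle recipe above can be sketched directly in code; the function `riemann2d`, the midpoint choice of "convenient value", and the test integrand $xy$ over the unit square are all my own illustrative choices.

```python
def riemann2d(f, a, b, c, d, n=100):
    """Approximate the integral of f over the rectangle [a,b] x [c,d]:
    partition into n*n small rectangles, pick the value of f at each
    midpoint, multiply by the rectangle's area, and sum."""
    hx, hy = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = a + (i + 0.5) * hx       # midpoint of rectangle (i, j)
            y = c + (j + 0.5) * hy
            total += f(x, y) * hx * hy   # value times elementary area
    return total

# the integral of x*y over the unit square is exactly 1/4
print(riemann2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0))  # ~ 0.25
```

Refining the partition (larger `n`) is the finite analogue of "making the rectangles smaller ad infinitum"; the limit of these sums is the double integral.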