# Intro

So, the wave of single-variable calculus has wrought its wrath upon the small, insignificant villages of your nation. You come, seeking aid from the international community, only to be told that another mega-tsunami is projected to crash upon your shores. That tsunami is multivariable calculus. May the force be with you, because you will damn need it. Oh lord, is calc 3 difficult.

To begin, let's take a look at an extension of single-variable calculus. We'll get into the partials soon, I swear!

%%COMIC%%

# Directional Derivatives

Now, remember the [[Partial Derivatives (Maths)#Fundamental Theorem of Calculus - Multivariable Edition|Multivariable Theorem of Calculus]]? Let's extend what we know about it by introducing a *direction* to our derivative. Notice the change in the symbol in front, from the $\partial$ to a $D_{v}$!

$D_{v}f(a,b) = \lim_{ h \to 0 } \frac{f\big((a,b) + h\overrightarrow{v}\big) - f(a,b)}{h} \tag{1}$

A vector has replaced the one-dimensional addition we perform for the small quantity. This also means, taking $v_1$ as the $x$ component of this vector and $v_2$ as the $y$ component, that we can rewrite this as:

$D_{v}f(a,b) = \lim_{ h \to 0 } \frac{f(a + hv_{1},\, b + hv_{2}) - f(a,b)}{h} \tag{2}$

But wait! If we think about it, taking the partial of this function with respect to $x$ is the same as differentiating along the vector $(1,0)$, isn't it? Think about it - we're only taking the derivative along *one* of the axes, yes? This means we can rewrite the expression as:

$D_{v} f(a,b) = v_{1} \frac{\partial f}{\partial x}(a,b) + v_{2} \frac{\partial f}{\partial y}(a,b) \tag{3}$

This is also:

$D_{v} f(a,b) = \overrightarrow{v} \cdot \left( \frac{\partial f}{\partial x}(a,b),\ \frac{\partial f}{\partial y}(a,b) \right) \tag{4}$

since this is just the multivariable theorem rewritten, but in a certain direction as determined by $\overrightarrow{v}$. Amazing! You can try to break apart this limit yourself, but be warned - it ain't fun.

>[!Success]- TBA - Proof for Equation (3)
>It's just a brute force approach! We'll backtrack to (2) before anything else to address the numerator.
>$f(a + hv_{1}, b + hv_{2}) - f(a,b) \tag{5}$
>*You need the Mean Value Theorem for this, if it helps - but I haven't gotten around to learning it yet, so I'll put this here for when I need to do revision!*
>

# Gradient

See the vector to the right of equation (4)? It's the almighty **Gradient**! I'll write it again in column vector form:

$\nabla f = \begin{pmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \end{pmatrix} \tag{6}$

Which means $D_v f = \overrightarrow{v} \cdot \nabla f$! We call a function whose partial derivatives exist and are themselves continuous a *continuously differentiable* (surprise, surprise) function. There are some rules that the gradient must abide by if the function is continuously differentiable:

1. **The Gradient $\nabla f$ MUST be normal to any well-behaved level curve $f(x,y) = c$.**
2. **The Gradient $\nabla f$ is the direction where the function $f(x,y)$ increases the fastest.**
3. **The negative Gradient $-\nabla f$ is the direction where the function $f(x,y)$ decreases the fastest.**

Let's home in on number one, since the other two are just waffle (though the quick sketch below pokes at rule two, and at equation (4), numerically).
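Before the geometry, here's a quick numerical sanity check of the dot-product formula and rule two. This is just a sketch of mine - the sample function $f(x,y) = x^2 + 3y$, the point $(1,2)$, the direction $(0.6, 0.8)$ and the finite-difference step $h$ are all made up for illustration, not anything from the material above.

```python
import math

# A made-up sample function and its hand-computed gradient.
def f(x, y):
    return x**2 + 3*y

def grad_f(x, y):
    # df/dx = 2x, df/dy = 3
    return (2*x, 3.0)

def directional_derivative(f, a, b, v, h=1e-6):
    # The limit definition from equation (2): [f(a + h*v1, b + h*v2) - f(a, b)] / h
    v1, v2 = v
    return (f(a + h*v1, b + h*v2) - f(a, b)) / h

a, b = 1.0, 2.0
v = (0.6, 0.8)  # a unit vector, so directions are compared fairly

limit_version = directional_derivative(f, a, b, v)
gx, gy = grad_f(a, b)
dot_version = v[0]*gx + v[1]*gy  # equation (4): v . grad f

print(limit_version, dot_version)  # ~3.6 and 3.6 - they agree

# Rule two: over all unit directions, the largest D_v f is |grad f|,
# attained when v points along the gradient.
best = max(
    directional_derivative(f, a, b, (math.cos(t), math.sin(t)))
    for t in (i * 2 * math.pi / 360 for i in range(360))
)
print(best, math.hypot(gx, gy))  # ~3.605 vs ~3.606
```

The finite-difference version of (2) and the dot-product version of (4) land on the same number, and the best direction you can sample gives roughly $\lvert \nabla f \rvert$ - rule two in action.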
To understand why it's always going to be perpendicular, we'll have to take a look at the vector itself.

![[samplemultivarcurve.png|center]]

If we take a vector for our directional derivative that points *along* the level curve (the curve where the value of the function stays the same), then the value of the function in its direction will not change, meaning that the directional derivative at that point is 0. This, however, has a neat effect, which we can visualise by rewriting the dot product:

$D_{v}f = 0 = \lvert \overrightarrow{v} \rvert \, \lvert \nabla f \rvert \cos \theta \tag{7}$

For $\cos \theta$ to be 0 (assuming neither vector is zero), $\theta = 90^{\circ}$, which means $\overrightarrow{v} \perp \nabla f$ in this scenario. Since $\overrightarrow{v}$ points along the level curve, this has the ramification that $\nabla f$ will **always** be perpendicular to the level curve, which makes our lives far, far easier. We say that the vectors associated with the gradient live in the $\mathbb{R}^2$ space, which is just the set of ordered pairs of real numbers - the plane you get by gluing together two copies of the tried-and-tested $\mathbb{R}$ set.

>[!Tip]- The Contours (TBA)
>This will be a section explaining how heatmaps can help build intuition for the Gradient!

# Maxima and Minima

Before we go into multivariable functions, let's first look to single-variable functions, where we set the derivative to zero to find the stationary points, then use the second derivative to determine the nature of each stationary point. That much is obvious, and it actually transfers forward to the domain of the multivariable, in terms of just stationary points alone! Yippee!

Still, that relies on the partial derivatives with respect to both $x$ and $y$ being zero - which, as you'll come to know, is really, really not often the case. So, this section will be dedicated to finding those odd points, and how we can differentiate between them!

The points where the partials with respect to both $x$ and $y$ are 0 are called **Critical Points**, where $\nabla f(a,b) = 0$. There's also the case where a critical point behaves like a local minimum along one direction and a local maximum along another - that's called a **Saddle Point.** Easy, huh?

You can distinguish between the two by finding different ways to interpret a function (through analysis). For example, for the function $f(x,y) = xy$, we can look along the lines $y = x$ and $y = -x$: along $y = x$ we get $f = x^2$ (a minimum at the origin), while along $y = -x$ we get $f = -x^2$ (a maximum at the origin). This makes the apparent minimum/maximum at $(0,0)$ for this function a Saddle Point.

Alternatively, you can use this other equation:

$D = \frac{\partial ^2f}{\partial x^2} (a,b)\, \frac{\partial ^2f}{\partial y^2} (a,b) - \left( \frac{\partial ^2f}{\partial y \, \partial x} (a,b) \right)^{2} \tag{8}$

And we just look at the sign of $D$ to determine what happens!

- If $D > 0$ and $\frac{\partial ^2f}{\partial x^2} (a,b) > 0$, the point is a local min at $(a,b)$.
- If $D > 0$ and $\frac{\partial ^2f}{\partial x^2} (a,b) < 0$, the point is a local max at $(a,b)$.
- If $D < 0$, the point is neither a local min nor a local max - it's a Saddle Point.
- If $D = 0$, we cannot gain anything from this equation.

%%technically is a matrix too but let's not bother with that for the time being%%
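To round things off, here's a tiny sketch of the test in equation (8). The second partials are hand-computed and hard-coded (my own choices for illustration), so this only shows the bookkeeping of the $D$ test, not symbolic differentiation.

```python
# A small sketch of the discriminant test (8), applied by hand to two
# sample functions at their critical point (0, 0).

def classify(fxx, fyy, fxy):
    """Apply D = fxx*fyy - fxy**2 at a critical point and report the verdict."""
    D = fxx * fyy - fxy**2
    if D > 0 and fxx > 0:
        return "local min"
    if D > 0 and fxx < 0:
        return "local max"
    if D < 0:
        return "saddle point"
    return "inconclusive (D = 0)"

# f(x, y) = x*y at (0, 0): fxx = 0, fyy = 0, fxy = 1  ->  D = -1 < 0
print(classify(0, 0, 1))   # saddle point

# f(x, y) = x**2 + y**2 at (0, 0): fxx = 2, fyy = 2, fxy = 0  ->  D = 4 > 0
print(classify(2, 2, 0))   # local min
```

For $f(x,y) = xy$ the test gives $D = -1 < 0$, agreeing with the $y = x$ / $y = -x$ analysis above: the point $(0,0)$ is a Saddle Point.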