# Intro

So, the wave of single-variable calculus has wrought its wrath upon the small, insignificant villages of your nation. You come, seeking aid from the international community, only to be told that another mega-tsunami is projected to crash upon your shores. That tsunami is multivariable calculus. May the force be with you, because you will damn well need it - oh lord, is calc 3 difficult.

To begin, let's take a look at an extension of single-variable calculus. We'll get into the partials soon, I swear!
%%COMIC%%

# Directional Derivatives

Now, remember the [[Partial Derivatives (Maths)#Fundamental Theorem of Calculus - Multivariable Edition|Fundamental Theorem of Calculus - Multivariable Edition]]? Let's extend what we know about it by introducing a *direction* to our derivative. Notice the change in the notation out front, from the $\partial$ to a $D_{v}$!

$$D_{v}f(a,b) = \lim_{ h \to 0 } \frac{f((a,b) + h\overrightarrow{v}) - f(a,b)}{h} \tag{1}$$

A vector has replaced the one-dimensional addition we perform for the small quantity. This also means, taking $v_{1}$ as the $x$ component of this vector and $v_{2}$ as the $y$ component, that we can rewrite this as:

$$D_{v}f(a,b) = \lim_{ h \to 0 } \frac{f(a + hv_{1},\, b + hv_{2}) - f(a,b)}{h} \tag{2}$$

But wait! Taking the partial of this function with respect to $x$ is the same as taking this directional derivative along the vector $(1,0)$, isn't it? Think about it - we're only taking the derivative along *one* of the axes, yes? This means we can rewrite this expression as:

$$D_{v} f(a,b) = v_{1} \frac{\partial f}{\partial x}(a,b) + v_{2} \frac{\partial f}{\partial y}(a,b) \tag{3}$$

This is also:

$$D_{v} f(a,b) = \overrightarrow{v} \cdot \left( \frac{\partial f}{\partial x}(a,b),\ \frac{\partial f}{\partial y}(a,b) \right) \tag{4}$$

which is just the multivariable theorem rewritten, but in a direction determined by $\overrightarrow{v}$. Amazing! You can try to break apart this limit yourself, but be warned - it ain't fun.

>[!Success]- TBA - Proof for Equation (3)
>It's just a brute force approach! We'll backtrack to (2) before anything else to address the numerator.
>$$f(a + hv_{1}, b + hv_{2}) - f(a,b) \tag{5}$$
>*You need the Mean Value Theorem for this, if it helps - but I haven't gotten around to learning it yet, so I'll put this here for when I need to do revision!*

# Gradient

See the vector to the right of equation (4)? It's the almighty **Gradient**! I'll write it again in column vector form:

$$\nabla f = \begin{pmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \end{pmatrix} \tag{6}$$

Which means $D_{v} f = \overrightarrow{v} \cdot \nabla f$!

We call a function whose partial derivatives exist and are themselves *continuous* (no jumps or asymptotes) a *continuously differentiable* (surprise, surprise) function. There are some rules that the gradient must abide by if the function is continuously differentiable:

1. **The Gradient $\nabla f$ MUST be normal to any well-behaved level curve $f(x,y) = c$.**
2. **The Gradient $\nabla f$ is the direction where the function $f(x,y)$ increases the fastest.**
3. **The negative Gradient $-\nabla f$ is the direction where the function $f(x,y)$ decreases the fastest.**

Let's home in on number one, since the other two are just waffle. To understand why it's always going to be perpendicular, we'll have to take a look at the vector itself.

![[samplemultivarcurve.png|center]]

If we take a vector for our directional derivative that points *along* the level curve (the curve where the value of the function stays the same), then the value of the function will not change in that direction, meaning that the directional derivative at that point is 0. This, however, has a neat effect, which we can visualise by rewriting the dot product:

$$D_{v}f = 0 = \lvert \overrightarrow{v} \rvert \lvert \nabla f \rvert \cos \theta \tag{7}$$

As long as neither $\overrightarrow{v}$ nor $\nabla f$ is the zero vector, the only way this product can be 0 is for $\cos \theta = 0$, so $\theta = 90^{\circ}$, which means $\overrightarrow{v} \perp \nabla f$ in this scenario. Generalised, however, this has the ramification that $\nabla f$ will **always** be perpendicular to the level curve, which makes our lives far, far easier. We say that the vectors associated with the gradient live in the $\mathbb{R}^{2}$ space, which is just the pairs-of-real-numbers ($\mathbb{R} \times \mathbb{R}$) version of the tried-and-tested $\mathbb{R}$ set.
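To make all of this concrete, here's a quick worked example (the function is just my own pick for illustration): take $f(x,y) = x^{2} + y^{2}$, whose level curves are circles centred on the origin.

$$\nabla f = \begin{pmatrix} 2x \\ 2y \end{pmatrix}, \qquad \nabla f(1,2) = \begin{pmatrix} 2 \\ 4 \end{pmatrix}$$

Heading radially outwards with $\overrightarrow{v} = (1,2)$ (the same direction as $\nabla f$ here):

$$D_{v}f(1,2) = \overrightarrow{v} \cdot \nabla f(1,2) = 1 \cdot 2 + 2 \cdot 4 = 10$$

Heading along the circle instead, with the tangent vector $\overrightarrow{v} = (-2,1)$:

$$D_{v}f(1,2) = -2 \cdot 2 + 1 \cdot 4 = 0$$

Zero along the level curve, and as large as it gets (for a vector of that length) when $\overrightarrow{v}$ points the same way as $\nabla f$ - rules one and two in action.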
>[!Tip]- The Contours (TBA)
>This will be a section explaining how heatmaps can help build intuition for the Gradient!

# Maxima and Minima

%%Hehe. Saddle points and Critical Points are the things you should be talking about here, got it?%%
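While this section is still TBA, here's one quick worked example to come back to (a function of my own choosing again, so double-check it during revision): for a continuously differentiable function, the candidates for maxima and minima are the *critical points*, where the gradient vanishes, i.e. $\nabla f = \overrightarrow{0}$. Take $f(x,y) = x^{2} - y^{2}$:

$$\nabla f = \begin{pmatrix} 2x \\ -2y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad \text{only at } (0,0)$$

But $f$ increases as you walk along the $x$-axis and decreases along the $y$-axis, so $(0,0)$ is neither a maximum nor a minimum - it's a *saddle point*, which is exactly why critical points need further checking.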