The word ‘calculus’ itself is sufficient to make the average person cower, let alone with obscure adjectives attached to such a terrifying concept. Nevertheless, vector calculus opens the portal to a much higher level of mathematical computation and, for those in specialized fields, has a wide range of applications. This post serves to provide some clarity towards the unsettling features of vector calculus, provide a general introduction to the subject, and hopefully demonstrate some useful applications in a code-based setting.

## A Brief Overview

If you are venturing this deep into the mathematical universe, you may have had previous encounters with vectors, so I will attempt to skip over the mundanities. Because future posts will generally be made with respect to three-dimensional spaces, we will first define our Cartesian system, which is R × R × R = {(x,y,z) | x, y, z ∈ R}. This jargon simply means that our three-dimensional coordinate system is comprised of all the ordered triples that can be formed from the real numbers.

With our spatial system appropriately defined, we may speedily review the fundamental premise of vectors as they pertain to our investigation of vector functions: vectors are mathematical entities that specify both magnitude and direction. A vector is spatially defined by the relationship between two points along the vector, typically its initial point and its terminal point. Suppose our vector v has initial point A and terminal point B. We would then model this vector as v = AB. Any given vector can also be understood by virtue of its components. A vector in three-dimensional space has three components, say ‘a’, ‘b’, and ‘c’. If vector AB is positioned with its initial point at the origin of our space, then it may be written in component form as AB = <a,b,c>, wherein ‘a’ represents the component in the x-dimension, ‘b’ the component in the y-dimension, and ‘c’ the component in the z-dimension.

Vectors support a variety of their own computations which will be briefly reviewed:

- Vector Addition: Suppose we have vectors AB and BC represented by AB = <a,b,c> and BC = <d,e,f>. We may take the sum of these vectors such that AB + BC = AC. We say that AC is the resultant vector of the sum of AB and BC, and it can be computed componentwise as AC = <a+d, b+e, c+f>.
- Vector Subtraction: Taking the difference of two vectors is really taking a sum in which one vector is inverted. For our vectors AB and BC, the difference is equivalent to AB − BC = AB + (−BC). Again, this operation returns a resultant vector, said to be the difference vector, which can be computed componentwise as <a−d, b−e, c−f>.
- Scalar Multiplication: A vector can be multiplied throughout by a real number called a scalar, which rescales the length of the vector. If we apply a scalar k to the vector AB, then the resultant vector is kAB, whose length is |k| times the original length of AB. If k > 0, the direction of the vector is retained, while if k < 0, the direction of the vector is reversed. Under scalar multiplication, the resultant components may be modeled as <ka, kb, kc>.
- Computing Vector Length: Recall the distance formula, which is the square root of the sum of the squared differences between the coordinates of two points. Computing the length (magnitude) of a vector is really an abstraction of this device. If we desire to compute the length of vector AB, we must know the individual points A and B in three dimensions. Modeling these as A(x,y,z) and B(h,k,l), the magnitude of AB can then be computed as |AB| = ((x−h)^{2} + (y−k)^{2} + (z−l)^{2})^{1/2}.
- Dot Product: The dot product of two vectors is a type of multiplication which does not procure a new vector, but rather, returns a real number. If we take the dot product of our vectors AB and BC, this may be modeled as AB·BC= ad + be + cf. The dot product is particularly useful for quantifying the angle between two vectors. If vectors AB and BC begin at the same initial point, then the angle between them is computed by the dot product as follows: cosθ=(AB·BC)/(|AB||BC|)
- Cross Product: The cross product is also a multiplication-based function but differs from the dot product in that it does not return a real number; rather, it returns another vector. The computation is quite similar to taking the determinant of a matrix, but such detail need not be elaborated upon here. Working again with vectors AB and BC, the cross product may be modeled as such: AB × BC = <bf−ce, cd−af, ae−bd>. The cross product is also useful for computing angles via the following function: sinθ = |AB × BC| / (|AB||BC|).
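As a concrete illustration, the operations above can be sketched in a few lines of Python. This is a minimal sketch using only the standard library; the function names are my own choices for this post, not from any particular vector library.

```python
import math

# Componentwise operations on 3-D vectors, stored as tuples (a, b, c)

def add(u, v):
    # <a+d, b+e, c+f>
    return tuple(a + b for a, b in zip(u, v))

def sub(u, v):
    # <a-d, b-e, c-f>
    return tuple(a - b for a, b in zip(u, v))

def scale(k, v):
    # <ka, kb, kc>
    return tuple(k * a for a in v)

def length(v):
    # (a^2 + b^2 + c^2)^(1/2)
    return math.sqrt(sum(a * a for a in v))

def dot(u, v):
    # ad + be + cf  (a real number, not a vector)
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    # <bf-ce, cd-af, ae-bd>  (another vector)
    a, b, c = u
    d, e, f = v
    return (b * f - c * e, c * d - a * f, a * e - b * d)

AB = (1.0, 2.0, 3.0)
BC = (4.0, 5.0, 6.0)
print(add(AB, BC))    # (5.0, 7.0, 9.0)
print(dot(AB, BC))    # 32.0
print(cross(AB, BC))  # (-3.0, 6.0, -3.0)

# angle between AB and BC via the dot product
theta = math.acos(dot(AB, BC) / (length(AB) * length(BC)))
```

Note that `dot` returns a plain number while `cross` returns another tuple, mirroring the distinction drawn above.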

## Vector Functions

With the banalities now acknowledged, we may elaborate on the true power of vectors: the functions they constitute. If a general function is simply a relationship between domain and range elements, then a vector function is an abstraction of such, relating real numbers in the domain to vectors constituting the range. Let's consider a new vector, ‘r’, whose values are three-dimensional vectors. We may say that the domain is denoted by ‘t’, which ranges over a set of real numbers. The components of vector ‘r’ may be modeled as general functions ‘f’, ‘g’, and ‘h’. In such a case, a function of ‘r’ may be established such that r(t) = <f(t), g(t), h(t)>. From this format we may observe that f(t), g(t), and h(t) constitute the components of the vector ‘r’ at different values of ‘t’.

The function r(t) specifies a curve in three dimensions, known as a space curve. The component functions of r(t), namely f(t), g(t), and h(t), are known as the parametric equations of the space curve.

Let’s consider our very own vector function which we can model via Desmos. Let’s call our experimental vector ‘A’ whose domain is established by the set of real numbers ‘t’. Let us define our three parametric equations: f(t)=cos(t) , g(t)=sin(t) , and h(t)=t. Taking into account these parametric equations, then our vector function of A may be modeled as: A(t)=<cos(t) , sin(t), t>.

Before graphing, let us break down what this function implies. The function A(t) says that the vector A at a particular value of ‘t’ is defined by its components, which are themselves functions of ‘t’. Thus, for every value of ‘t’, we obtain a new triple of components defining a new vector A for that particular value of ‘t’. The value of f(t) denotes displacement in the x-dimension, g(t) in the y-dimension, and h(t) in the z-dimension. The space curve, then, is the curve traced out by all the points denoted by A(t). Let us see what the computer-generated image of the vector function produces.
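While Desmos handles the rendering, we can sketch the sampling of the curve ourselves. The following Python snippet (standard library only; a sketch of the sampling, not a plotting routine) evaluates A(t) at a few parameter values to show the triples that trace out the helix.

```python
import math

def A(t):
    # A(t) = <cos(t), sin(t), t>
    return (math.cos(t), math.sin(t), t)

# Sample the space curve at a few parameter values; each value of t
# yields a triple (x, y, z) lying on the helix.
for t in (0.0, math.pi / 2, math.pi, 2 * math.pi):
    x, y, z = A(t)
    print(f"t = {t:.3f} -> ({x:.3f}, {y:.3f}, {z:.3f})")
```

Feeding a dense range of t values into a 3-D plotting tool produces the spiral shape you see in the image: x and y circle around while z climbs steadily.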

## Vector Calculus: Differentiation and Integration

Now, this is, in fact, a discussion of higher-level calculus. So you may be asking: where is the differentiation/integration? Truth be told, if you are familiar with these techniques, then applying them to vector functions is just as simple, if not simpler, than using them with general functions. This is because these parameter functions tend not to be so complex as to require techniques such as partial fraction decomposition or other higher-level mechanisms. For that reason, I am not going to discuss the methods of these techniques. However, I will provide a general blueprint as to how they tend to manifest.

Let us reconsider our original vector function A: A(t) = <cos(t), sin(t), t>. If we desire to take the derivative of our vector function A, then we must take the derivative of every parameter function constituting its components. Therefore, we may undertake the following: A'(t) = <f'(t), g'(t), h'(t)>. By applying simple differentiation techniques, we may see that the derivative of A(t) may be computed as A'(t) = <−sin(t), cos(t), 1>.

Integration of vector functions follows concomitant expectations: to compute the integral of a vector function, we need only compute the integrals of its parametric components. As such, we may model vector function integration via: ∫A(t) dt = <∫f(t) dt, ∫g(t) dt, ∫h(t) dt>. Conducting this computation, we find that: ∫A(t) dt = <sin(t), −cos(t), t^{2}/2> + C, where C is a constant vector of integration. Not too shabby.
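The componentwise antiderivative can be checked numerically the same way: by the fundamental theorem of calculus, F(b) − F(a) should match a numerical integral of A over [a, b]. A sketch using a simple midpoint rule (`integrate` is my own helper, not a library routine):

```python
import math

def A(t):
    # A(t) = <cos(t), sin(t), t>
    return (math.cos(t), math.sin(t), t)

def F(t):
    # componentwise antiderivative: <sin(t), -cos(t), t^2/2>
    return (math.sin(t), -math.cos(t), t * t / 2)

def integrate(f, a, b, n=10000):
    # midpoint rule, applied to each component of a vector function
    h = (b - a) / n
    total = [0.0, 0.0, 0.0]
    for i in range(n):
        v = f(a + (i + 0.5) * h)
        for j in range(3):
            total[j] += v[j] * h
    return tuple(total)

a, b = 0.0, 2.0
exact = tuple(fb - fa for fb, fa in zip(F(b), F(a)))  # F(b) - F(a)
approx = integrate(A, a, b)
print(max(abs(e - p) for e, p in zip(exact, approx)))  # very small
```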

## Using Vector Differentiation and Integration for Computing Arc Length and Curvature

The techniques of vectorial differentiation and integration are primarily useful for computing the length of the space curve produced by a vector function, as well as for computing the tangent of the curve.

Here, we will first establish the analysis of arc length with differentiation and integration. If our vector function A is comprised of components f(t), g(t), and h(t), then the length of the curve from t = a to t = b may be computed by the function: L = ∫_{a}^{b} ([f'(t)]^{2} + [g'(t)]^{2} + [h'(t)]^{2})^{1/2} dt. Let's break down what this function implies: we take the derivative of each parameter function and square it. Then we take the sum of these squares and take the square root. Finally, we integrate that square root over the interval.
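For our helix, |A'(t)| = (sin²(t) + cos²(t) + 1)^{1/2} = √2 at every t, so the arc length over [0, 2π] should be 2π√2. A quick numerical sketch (midpoint rule, standard library only; helper names are my own) confirms this:

```python
import math

def speed(t):
    # |A'(t)| = (sin^2 t + cos^2 t + 1)^(1/2), which equals sqrt(2) for the helix
    return math.sqrt(math.sin(t) ** 2 + math.cos(t) ** 2 + 1.0)

def arc_length(a, b, n=10000):
    # integrate the speed over [a, b] with the midpoint rule
    h = (b - a) / n
    return sum(speed(a + (i + 0.5) * h) * h for i in range(n))

L = arc_length(0.0, 2 * math.pi)
print(L, 2 * math.pi * math.sqrt(2))  # both approximately 8.8858
```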

Acknowledging this is particularly useful for identifying the tangent of a space curve at a particular point. The unit tangent ‘T’ to our vector function ‘A’ is found by dividing the derivative of the vector function by the magnitude of that derivative, such that: T(t) = A'(t) / |A'(t)|.

A normal vector refers to a vector that is orthogonal to a tangent vector. The unit normal vector is readily found by dividing the derivative of the tangent function by the magnitude of that derivative: N(t) = T'(t) / |T'(t)|. The binormal vector is a vector orthogonal to both the tangent vector and the normal vector, which can be readily computed as the cross product of the tangent and normal vectors: B(t) = T(t) × N(t).
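Putting the frame together for the helix, this Python sketch computes the unit tangent, normal, and binormal at a point. T'(t) is approximated with a central difference rather than derived symbolically, and the helper names are my own.

```python
import math

def A_prime(t):
    # derivative of the helix A(t) = <cos(t), sin(t), t>
    return (-math.sin(t), math.cos(t), 1.0)

def normalize(v):
    m = math.sqrt(sum(a * a for a in v))
    return tuple(a / m for a in v)

def T(t):
    # unit tangent: A'(t) / |A'(t)|
    return normalize(A_prime(t))

def N(t, h=1e-6):
    # unit normal: T'(t) / |T'(t)|, with T' approximated by a central difference
    tp, tm = T(t + h), T(t - h)
    dT = tuple((p - m) / (2 * h) for p, m in zip(tp, tm))
    return normalize(dT)

def B(t):
    # binormal: T x N
    a, b, c = T(t)
    d, e, f = N(t)
    return (b * f - c * e, c * d - a * f, a * e - b * d)

t = 1.0
print(T(t))  # (-sin t, cos t, 1) / sqrt(2)
print(N(t))  # approximately (-cos t, -sin t, 0)
print(B(t))  # a unit vector orthogonal to both
```

For the helix, N(t) works out to (−cos t, −sin t, 0), pointing horizontally back toward the central axis, which matches the numerical result.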

## En Masse

This post has now provided a thorough overview of vector calculus, incorporating:

- Review of Vector Computations
- Description of Vector Functions and their Space Curves
- Exploring Differentiation and Integration of Vector Functions
- Using Calculus Techniques to Investigate Tangent, Normal, and Binormal Vectors

Later discussions will focus on extrapolations of these components individually. If any confusion remains, future proofs may be useful to show how these functions work. Furthermore, several examples of these will be worked out and provided with thorough explanation to provide a comprehensive analysis of these topics. This page will be updated with links to these resources as they become available.

If you have any questions or suggestions, don’t hesitate to leave comments.