We are used to $\mathbb R$ being an ordered field: from the construction of the real numbers we know that exactly one of $a<b$, $a=b$ or $a>b$ holds. Thus we're able to use real numbers to say something is bigger, better or faster than something else. It is also well known, at some level, that $\mathbb R^d$, the vector space whose elements are $d$-tuples of real numbers, admits *no* such total order. This has momentous consequences: you're no longer able to say who's the better man when there are multiple scores measuring multiple factors, or what's best for society when each person has their own scale from better to worse. You *can* hope the true scoreable thing falls within a one-dimensional manifold and perform some kind of 'dimensionality reduction' (averaging is a crude method; regression and SVD are better, but still reliant on ad hoc hypotheses). But this is no longer "the math". It's methodology, and it's questionable.

A vector space can be given *some* structures that are weaker than ordering but produce useful mathematics. First, you can define a distance $\mathrm{dist}\colon \mathbb R^d \times \mathbb R^d \to \mathbb R$ -- a function that is scalable ($\mathrm{dist}(\alpha \mathbf a, \alpha \mathbf b) = |\alpha|\cdot \mathrm{dist}(\mathbf a,\mathbf b)$ for a scalar $\alpha$), sub-additive ($\mathrm{dist}(\mathbf a,\mathbf c) \leq \mathrm{dist}(\mathbf a,\mathbf b) + \mathrm{dist}(\mathbf b,\mathbf c)$) and point-separating ($\mathrm{dist}(\mathbf a,\mathbf b)=0$ if and only if $\mathbf a=\mathbf b$). Equivalently, you can define a norm $\|\cdot\|$ that is scalable, sub-additive, and point-separating, such that $\mathrm{dist}(\mathbf a,\mathbf b) = \|\mathbf a-\mathbf b\|$.

Note that these axioms imply that both distance and norm are always positive (except for the trivial case $\|(0,\cdots,0)\|=0$). In other words, they give our vector space a sense of *size*, but not order: you can try to rank vectors by their norm, but $\|(0,-1)\|=\|(0,1)\|$ even though one is "positive" and the other is "negative". To introduce the idea of an orientation on a vector space we need to develop some additional concepts.
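As a quick sanity check, these axioms can be verified numerically for the Euclidean norm (a minimal Python sketch; the particular vectors and scalar are arbitrary choices):

```python
import math

def norm(v):
    """Euclidean norm on R^d."""
    return math.sqrt(sum(x * x for x in v))

def dist(a, b):
    """Distance induced by the norm: dist(a, b) = ||a - b||."""
    return norm([x - y for x, y in zip(a, b)])

a, b, c = (1.0, 2.0), (-3.0, 0.5), (0.0, 4.0)

# Scalability: dist(alpha*a, alpha*b) = |alpha| * dist(a, b)
alpha = -2.5
assert math.isclose(dist([alpha * x for x in a], [alpha * x for x in b]),
                    abs(alpha) * dist(a, b))

# Sub-additivity (triangle inequality)
assert dist(a, c) <= dist(a, b) + dist(b, c)

# Size without order: opposite vectors have the same norm
assert norm((0.0, -1.0)) == norm((0.0, 1.0))
```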

A multilinear map (or tensor) of order $k$ on $\mathbb V$ is a function $\omega\colon \mathbb V^k \to \mathbb R$ that is linear in each of its $k$ arguments when the others are held fixed. The set of all tensors of order $k$ on $\mathbb V$ forms a vector space under pointwise addition and scalar multiplication, as the following examples will show.

Rank 1 tensors are linear functionals (or covectors), that is, vectors in the dual space $\mathbb V^\ast$ of linear maps $v^\ast\colon \mathbb V \to \mathbb R$. They can be represented by $(1\times \dim \mathbb V)$ row matrices as follows: given a basis $(\mathbf e_1,\cdots,\mathbf e_n)$ of $\mathbb V$, we define $\mathbf a = (a_1, \cdots, a_n)$ as the coordinate vector of $\omega$, given by $$ \mathbf a = \begin{bmatrix} \omega(\mathbf e_1) & \cdots & \omega(\mathbf e_n) \end{bmatrix} $$ Then, if $\mathbf{u} = \sum_{i=1}^n u_i \mathbf e_i$, linearity gives $$ \omega(\mathbf u) = \omega\left(\sum_i u_i \mathbf e_i\right) = \sum_i \omega(u_i \mathbf e_i) = \sum_i u_i \omega(\mathbf e_i)=\sum_i a_i u_i = \mathbf a^\top \mathbf u $$ We verify that this representation is consistent with the linear structure by noting that $\omega(\alpha \mathbf u)= \mathbf a^\top(\alpha \mathbf u)=\alpha(\mathbf a^\top \mathbf u)$ and $\omega(\mathbf u) + \omega(\mathbf v) = \mathbf a^\top\mathbf u + \mathbf a^\top\mathbf v=\mathbf a^\top (\mathbf u + \mathbf v) = \omega(\mathbf u + \mathbf v)$ for all $\mathbf u, \mathbf v \in \mathbb V$.
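Concretely, applying a covector through its row representation is just a dot product. A minimal sketch (the particular values of $\mathbf a$ and $\mathbf u$ are made up for illustration):

```python
# A covector on R^3, represented by its values on the basis vectors:
# a_i = omega(e_i).  Applying omega to u is then the dot product a . u.
a = [2.0, -1.0, 0.5]          # omega(e_1), omega(e_2), omega(e_3)

def omega(u):
    return sum(ai * ui for ai, ui in zip(a, u))

u, v = [1.0, 2.0, 4.0], [0.0, 1.0, -2.0]
print(omega(u))   # 2*1 - 1*2 + 0.5*4 = 2.0

# Linearity checks from the text
assert omega([3 * x for x in u]) == 3 * omega(u)
assert omega([x + y for x, y in zip(u, v)]) == omega(u) + omega(v)
```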

Rank 2 tensors (bilinear maps) have a similar matrix correspondence. This can be seen by arranging the values of the tensor on pairs of basis vectors of $\mathbb V$ as the entries of a matrix:

$$[A]_{ij} = \omega(\mathbf e_i,\mathbf e_j) $$

Let, then, $\mathbf u = \sum_i u_i \mathbf e_i$ and $\mathbf v = \sum_i v_i \mathbf e_i$; then $$\omega(\mathbf u,\mathbf v) = \sum_{i,j} [A]_{ij}\, u_i v_j = \mathbf u ^\top A \mathbf v$$
The same arguments as before show that the correspondence $\omega \mapsto A$ respects the linear structure of the vector space of all rank 2 tensors.

Being bilinear forms, rank-2 tensors admit the familiar matrix representation given by $$ \begin{bmatrix} \text{--} & \mathbf u^\top & \text{--} \end{bmatrix} \begin{bmatrix} \omega(\mathbf e_1,\mathbf e_1)&\cdots& \omega(\mathbf e_1 ,\mathbf e_n)\\ \vdots & \ddots & \vdots \\ \omega(\mathbf e_n,\mathbf e_1)& \cdots &\omega(\mathbf e_n,\mathbf e_n) \end{bmatrix} \begin{bmatrix} | \\ \mathbf v \\ | \end{bmatrix} $$ The rows of the matrix $A$ are, then, the row representations of the covectors $\omega(\mathbf e_i, \cdot)$, one for each basis vector of $\mathbb V$.
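In code, evaluating a rank-2 tensor through its matrix is exactly the double sum $\sum_{i,j} [A]_{ij} u_i v_j$; a sketch with an arbitrary $2\times 2$ matrix:

```python
# Rank-2 tensor on R^2 represented by the matrix A_ij = omega(e_i, e_j);
# evaluation is the double sum u^T A v.  Entries chosen arbitrarily.
A = [[1.0, 2.0],
     [0.0, -1.0]]

def omega(u, v):
    return sum(A[i][j] * u[i] * v[j]
               for i in range(len(u)) for j in range(len(v)))

u, v = [1.0, 3.0], [2.0, -1.0]
print(omega(u, v))   # 1*1*2 + 2*1*(-1) + 0 + (-1)*3*(-1) = 3.0

# Bilinearity in the first argument
assert omega([2 * x for x in u], v) == 2 * omega(u, v)
```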

Tensors of higher rank can be similarly given bases by structures of the form
$$
[A]_{i_1,i_2,\cdots,i_k} = \omega(\mathbf e_{i_1},\mathbf e_{i_2},\cdots, \mathbf e_{i_k})
$$
which could conceivably be represented as "$k$-dimensional matrices" (although this is hard to visualize even in three dimensions).
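A "$k$-dimensional matrix" is naturally stored as a nested array; here a hypothetical rank-3 tensor on $\mathbb R^2$ (entries invented for illustration), evaluated by the triple sum analogous to $\mathbf u^\top A \mathbf v$:

```python
from itertools import product

n = 2
# A rank-3 tensor on R^2 stored as a nested ("3-dimensional") array:
# A[i][j][l] = omega(e_i, e_j, e_l).  The entries here are arbitrary.
A = [[[float(i + 2 * j - l) for l in range(n)]
      for j in range(n)] for i in range(n)]

def omega(u, v, w):
    # Triple sum: omega(u, v, w) = sum_{i,j,l} A[i][j][l] * u_i * v_j * w_l
    return sum(A[i][j][l] * u[i] * v[j] * w[l]
               for i, j, l in product(range(n), repeat=3))

u, v, w = [1.0, 2.0], [3.0, -1.0], [0.5, 2.0]
# Multilinearity in the first argument:
assert omega([2 * x for x in u], v, w) == 2 * omega(u, v, w)
```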

Tensors of rank $k$ are said to be $k$-forms if they are alternating -- that is, if swapping any two of their arguments flips the sign of the result.

$1$-forms are just covectors (which are trivially alternating), while $2$-forms are bilinear maps $\omega\colon \mathbb V\times \mathbb V \to \mathbb R$, such that $$ \omega(q,p) = -\omega(p,q) $$

For $k=3$, in turn, $\omega(p,q,r) = -\omega(q,p,r) = \omega(q,r,p) = -\omega(p,r,q) = \omega(r,p,q) = -\omega(r,q,p)$. The set of $k$-forms on the vector space $\mathbb V$ (i.e. such that each of its arguments is a vector in $\mathbb V$) is denoted $\Omega^k(\mathbb V)$, and is itself a vector space, a subspace of the space of all rank-$k$ tensors.
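For example, the $2\times 2$ determinant $\omega(p,q) = p_1 q_2 - p_2 q_1$ is a 2-form; a quick numerical check of the alternating property (sample vectors chosen arbitrarily):

```python
# A 2-form on R^2: omega(p, q) = p1*q2 - p2*q1 (the 2x2 determinant).
def omega(p, q):
    return p[0] * q[1] - p[1] * q[0]

p, q = (3.0, 1.0), (2.0, 5.0)
assert omega(q, p) == -omega(p, q)   # alternating: swapping arguments flips the sign
assert omega(p, p) == 0.0            # hence omega vanishes on repeated arguments
```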

Through a $k$-form, any $k$ vectors can be given an orientation, either positive or negative. If $k$ matches the dimension of $\mathbb V$, then the sign of $\omega(\mathbf e_1, \cdots, \mathbf e_n)$ is the orientation of the vector space.

As a bonus: for each $k\in \mathbb N$, the order-$k$ volume element $\det_{(k)} \in \Omega^k(\mathbb V)$ (with $\dim \mathbb V = k$) is the unique alternating tensor such that $$ \det_{(k)}(\mathbf e_1, \cdots, \mathbf e_k) = 1 $$ (where $(\mathbf e_1,\cdots, \mathbf e_k)$ is the canonical basis of $\mathbb V$).

Because $\det_{(k)}$ is alternating, it defines an *oriented* volume: its absolute value measures volume, while its sign gives an orientation, fixed by $\det_{(k)}(\mathbf e_1, \cdots, \mathbf e_k) = 1$.

The familiar determinant of an $(m\times m)$ matrix $A$ is then given by $\det A = {\det}_{(m)}(\mathbf a_1, \cdots, \mathbf a_m)$, where $\mathbf a_i$ is the $i$-th column of $A$.
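Putting the pieces together, the Leibniz formula builds $\det_{(m)}$ directly as an alternating multilinear map of the columns; a self-contained Python sketch:

```python
import math
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple of indices:
    -1 to the power of the number of inversions."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det(cols):
    """det_(m) of m column vectors via the Leibniz formula:
    sum over permutations p of sign(p) * prod_j cols[j][p(j)]."""
    m = len(cols)
    return sum(sign(p) * math.prod(cols[j][p[j]] for j in range(m))
               for p in permutations(range(m)))

e1, e2, e3 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]
assert det([e1, e2, e3]) == 1.0    # canonical basis has unit oriented volume
assert det([e2, e1, e3]) == -1.0   # swapping two vectors reverses orientation
```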

All rights reserved.

Unbounded Variation takes no responsibility for any damage, whether physical, financial, psychological or otherwise, that may result from reading or applying the information contained in its articles. Our mission is chaos. Expect us.