Tensors

Co-ordinate Transformations

Let one co-ordinate system be defined by basis vectors \([\boldsymbol{e_1}, ..., \boldsymbol{e_n}]\). A second co-ordinate system can be defined with basis vectors \([\boldsymbol{\bar{e}_1}, ..., \boldsymbol{\bar{e}_n}]\) as follows: $$ \begin{equation} \boldsymbol{\bar{e}_i} = \sum_{j=1}^n { F^j_i \boldsymbol{e_j}} \quad \quad \quad i=1,...,n \label{eq:eq0} \end{equation} $$ Let the matrix \(F\) be defined as having its \( (i,j) \) element equal to \(F^i_j\), with the superscript indexing the row and the subscript the column. This matrix is sometimes referred to as the forward transformation matrix. The second co-ordinate system is well defined provided that the matrix \(F\) is invertible. Equation \eqref{eq:eq0} can be written in matrix format as follows: $$ [\boldsymbol{\bar{e}_1}, ..., \boldsymbol{\bar{e}_n}] = [\boldsymbol{e_1}, ..., \boldsymbol{e_n}] F $$ Using tensor notation (the Einstein summation convention, in which the repeated index \(j\) is implicitly summed) it may also be written: $$ \boldsymbol{\bar{e}_i} = F^j_i \boldsymbol{e_j} \quad \quad \quad i=1,...,n $$ Let \(B = F^{-1} \). Then we can also write the old basis vectors in terms of the new ones: $$ \begin{equation} [\boldsymbol{e_1}, ..., \boldsymbol{e_n}] = [\boldsymbol{\bar{e}_1}, ..., \boldsymbol{\bar{e}_n}] B \label{eq:eq1} \end{equation} $$ $$ \boldsymbol{e_i} = B^j_i \boldsymbol{\bar{e}_j} \quad \quad \quad i=1,...,n $$ The matrix \(B\) is sometimes referred to as the backward transformation matrix.
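As a concrete numerical check, here is a minimal sketch (Python with NumPy; the matrix chosen for \(F\) is an arbitrary invertible example, not taken from the text) that builds a new basis from an old one and confirms that \(B = F^{-1}\) recovers the old basis:

```python
import numpy as np

# Old basis: columns of E are e_1, ..., e_n (here the standard basis of R^2).
E = np.eye(2)

# An arbitrary invertible forward transformation matrix F (example values).
F = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# New basis: [e1_bar, ..., en_bar] = [e1, ..., en] F.
E_bar = E @ F

# Backward transformation matrix B = F^{-1} recovers the old basis.
B = np.linalg.inv(F)
assert np.allclose(E_bar @ B, E)
```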

Let \( \boldsymbol{v} = (v^1, ..., v^n)\) and \( \boldsymbol{\bar{v}} = (\bar{v}^1, ..., \bar{v}^n)\) be the co-ordinates of the same vector in the old and new co-ordinate systems respectively. Thus, we can write: $$ \begin{equation} \sum_{i=1}^n {v^i \boldsymbol{e_i}} = \sum_{i=1}^n {\bar{v}^i \boldsymbol{\bar{e}_i}} \end{equation} $$ which when written in matrix form produces: $$ [\boldsymbol{e_1}, ..., \boldsymbol{e_n}] \boldsymbol{v} = [\boldsymbol{\bar{e}_1}, ..., \boldsymbol{\bar{e}_n}] \boldsymbol{\bar{v}} $$ Using equation \eqref{eq:eq1} we can write: $$ [\boldsymbol{\bar{e}_1}, ..., \boldsymbol{\bar{e}_n}] B \boldsymbol{v} = [\boldsymbol{\bar{e}_1}, ..., \boldsymbol{\bar{e}_n}] \boldsymbol{\bar{v}} $$ which, since the vectors \(\boldsymbol{\bar{e}_i}\) form a basis and are therefore linearly independent, informs us that: $$ \boldsymbol{\bar{v}} = B \boldsymbol{v} \quad \quad \quad \quad \bar{v}^i = B_k^i v^k $$ and consequently: $$ \boldsymbol{v} = F \boldsymbol{\bar{v}} \quad \quad \quad \quad v^i = F_k^i \bar{v}^k $$
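Extending the sketch above, the following confirms numerically that the components transform with \(B\) while the vector itself, \(\sum_i v^i \boldsymbol{e_i}\), stays fixed (all values are arbitrary examples):

```python
import numpy as np

F = np.array([[2.0, 1.0],    # forward transformation (example values)
              [1.0, 1.0]])
B = np.linalg.inv(F)          # backward transformation
E = np.eye(2)                 # old basis vectors as columns
E_bar = E @ F                 # new basis vectors as columns

v = np.array([3.0, -2.0])     # components in the old basis (example values)
v_bar = B @ v                 # components in the new basis

# The vector itself is unchanged: sum_i v^i e_i == sum_i vbar^i ebar_i.
assert np.allclose(E @ v, E_bar @ v_bar)
# And v = F v_bar recovers the old components.
assert np.allclose(v, F @ v_bar)
```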

Covectors

Let \([\boldsymbol{\epsilon^1}, ..., \boldsymbol{\epsilon^n}]\) be the basis covectors dual to \([\boldsymbol{e_1}, ..., \boldsymbol{e_n}]\), defined by \(\boldsymbol{\epsilon^i}(\boldsymbol{e_j}) = \delta^i_j\), and let \([\boldsymbol{\bar{\epsilon}^1}, ..., \boldsymbol{\bar{\epsilon}^n}]\) be the corresponding dual basis for the new co-ordinate system. The new basis covectors are related to the old ones by: $$ \begin{bmatrix} \boldsymbol{\bar{\epsilon}^1} \\ \vdots \\ \boldsymbol{\bar{\epsilon}^n} \end{bmatrix} = B \begin{bmatrix} \boldsymbol{\epsilon^1} \\ \vdots \\ \boldsymbol{\epsilon^n} \end{bmatrix} $$ Using tensor notation it may also be written: $$ \boldsymbol{\bar{\epsilon}^i} = B_k^i \boldsymbol{\epsilon^k} \quad \quad \quad i=1,...,n $$ The old basis covectors in terms of the new ones: $$ \begin{equation} \begin{bmatrix} \boldsymbol{\epsilon^1} \\ \vdots \\ \boldsymbol{\epsilon^n} \end{bmatrix} = F \begin{bmatrix} \boldsymbol{\bar{\epsilon}^1} \\ \vdots \\ \boldsymbol{\bar{\epsilon}^n} \end{bmatrix} \label{eq:eq4} \end{equation} $$ $$ \boldsymbol{\epsilon^i} = F_k^i \boldsymbol{\bar{\epsilon}^k} \quad \quad \quad i=1,...,n $$ Let \( \boldsymbol{w} = (w_1, ..., w_n)\) and \( \boldsymbol{\bar{w}} = (\bar{w}_1, ..., \bar{w}_n)\) be the co-ordinates of the same covector in the old and new co-ordinate systems respectively. Thus, we can write: $$ \begin{equation} \sum_{i=1}^n {w_i \boldsymbol{\epsilon^i}} = \sum_{i=1}^n {\bar{w}_i \boldsymbol{\bar{\epsilon}^i}} \end{equation} $$ which when written in matrix form produces: $$ \boldsymbol{w} \begin{bmatrix} \boldsymbol{\epsilon^1} \\ \vdots \\ \boldsymbol{\epsilon^n} \end{bmatrix} = \boldsymbol{\bar{w}} \begin{bmatrix} \boldsymbol{\bar{\epsilon}^1} \\ \vdots \\ \boldsymbol{\bar{\epsilon}^n} \end{bmatrix} $$ Using equation \eqref{eq:eq4} we can write: $$ \boldsymbol{w}F\begin{bmatrix} \boldsymbol{\bar{\epsilon}^1} \\ \vdots \\ \boldsymbol{\bar{\epsilon}^n} \end{bmatrix} = \boldsymbol{\bar{w}} \begin{bmatrix} \boldsymbol{\bar{\epsilon}^1} \\ \vdots \\ \boldsymbol{\bar{\epsilon}^n} \end{bmatrix} $$ which informs us that: $$ \boldsymbol{\bar{w}} = \boldsymbol{w} F \quad \quad \quad \quad \bar{w}_i = w_k F_i^k$$ and consequently: $$ \boldsymbol{w} = \boldsymbol{\bar{w}} B \quad \quad \quad \quad w_i = \bar{w}_k B_i^k$$
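Covector components thus transform with \(F\), oppositely to vector components, which is exactly what makes the pairing \(w_i v^i\) a scalar. A minimal numerical sketch (arbitrary example values):

```python
import numpy as np

F = np.array([[2.0, 1.0],    # forward transformation (example values)
              [1.0, 1.0]])
B = np.linalg.inv(F)

v = np.array([3.0, -2.0])     # vector components (transform with B)
w = np.array([1.0, 4.0])      # covector components, as a row (transform with F)

v_bar = B @ v                 # vbar^i = B^i_k v^k
w_bar = w @ F                 # wbar_i = w_k F^k_i

# The pairing w_i v^i is a scalar: identical in both co-ordinate systems.
assert np.allclose(w @ v, w_bar @ v_bar)
```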

Covariant and Contravariant Tensors

Let \(A^1, ..., A^n\) be the components of a vector defined at co-ordinates \((x^1, ..., x^n)\) in a co-ordinate system defined by basis vectors \((\boldsymbol{e_1}, ..., \boldsymbol{e_n})\). Furthermore, let \(\bar{A}^1, ..., \bar{A}^n\) be the components of the same vector defined at co-ordinates \( (\bar{x}^1, ..., \bar{x}^n)\) in a co-ordinate system defined by basis vectors \((\boldsymbol{\bar{e}_1}, ..., \boldsymbol{\bar{e}_n})\).

A vector is said to be contravariant if the two sets of vector components are related as follows: $$ \begin{equation} \bar{A}^i (\bar{x}^1, ..., \bar{x}^n) = \sum_{j=1}^n {\partial \bar{x}^i \over \partial x^j} A^j(x^1, ..., x^n) \quad \quad \quad i=1,...,n \label{eq:eq7a} \end{equation} $$ in which \( A^j(x^1, ..., x^n) \) indicates that \(A^j\) is a function of \((x^1, ..., x^n)\). Note the use of superscripts to denote the components of a contravariant vector.

A vector is said to be covariant if the two sets of vector components are related as follows: $$ \begin{equation} \bar{A}_i (\bar{x}^1, ..., \bar{x}^n) = \sum_{j=1}^n {\partial x^j \over \partial \bar{x}^i} A_j(x^1, ..., x^n) \quad \quad \quad i=1,...,n \label{eq:eq7b} \end{equation} $$ Note the use of subscripts to denote the components of a covariant vector.
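For a non-linear change of co-ordinates the matrices \([{\partial \bar{x}^i / \partial x^j}]\) and \([{\partial x^i / \partial \bar{x}^j}]\) are still inverses of one another at each point, so the two transformation laws again preserve the scalar \(w_i v^i\). A minimal sketch using 2D polar co-ordinates as the barred system (all component values are arbitrary examples):

```python
import numpy as np

# Polar co-ordinates as the barred system: (xbar^1, xbar^2) = (r, theta),
# with x^1 = r cos(theta), x^2 = r sin(theta).
def jac_x_wrt_xbar(r, theta):
    # [dx^i / dxbar^j]: superscript indexes the row, subscript the column.
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])

r, theta = 2.0, 0.7                      # an arbitrary sample point
J = jac_x_wrt_xbar(r, theta)             # [dx^i / dxbar^j]
J_inv = np.linalg.inv(J)                 # [dxbar^i / dx^j]

v = np.array([1.0, 2.0])                 # contravariant components in (x^1, x^2)
w = np.array([3.0, -1.0])                # covariant components in (x^1, x^2)

v_bar = J_inv @ v                        # transforms with [dxbar^i / dx^j]
w_bar = w @ J                            # wbar_i = w_k dx^k/dxbar^i (row form)

# Opposite transformation laws keep the scalar w_i v^i invariant.
assert np.allclose(w @ v, w_bar @ v_bar)
```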

Covariant and Contravariant Tensors - Summary

The basis vectors are covariant and are indexed with a subscript. The co-ordinates of a vector are contravariant and are indexed with a superscript. The basis covectors are contravariant and are indexed with a superscript. The co-ordinates of a covector are covariant and are indexed with a subscript.

Denote: $$ [{\partial \bar{x}^i \over \partial x^j}] := \begin{bmatrix} \partial \bar{x}^1 \over \partial x^1& \cdots &\partial \bar{x}^1 \over \partial x^n \\ \vdots &\partial \bar{x}^i \over \partial x^j& \vdots \\ \partial \bar{x}^n \over \partial x^1& \cdots &\partial \bar{x}^n \over \partial x^n \end{bmatrix} $$ and $$ [{\partial x^i \over \partial \bar{x}^j}] := \begin{bmatrix} \partial x^1 \over \partial {\bar{x}}^1& \cdots &\partial x^1 \over \partial {\bar{x}}^n \\ \vdots &\partial x^i \over \partial {\bar{x}}^j& \vdots \\ \partial x^n \over \partial {\bar{x}}^1& \cdots &\partial x^n \over \partial {\bar{x}}^n \end{bmatrix} $$ Then equations \eqref{eq:eq7a} and \eqref{eq:eq7b} can be written in matrix form as follows, with contravariant components arranged as a column vector and covariant components as a row vector (as in the covector section above): $$ \boldsymbol{\bar{A}} = [{\partial \bar{x}^i \over \partial x^j}]\boldsymbol{A} $$ $$ \boldsymbol{\bar{A}} = \boldsymbol{A}\,[{\partial x^i \over \partial \bar{x}^j}] $$

Let \( \bar{A}^i (\bar{x}^1, ..., \bar{x}^n) := \bar{x}^i \) and \(A^i(x^1, ..., x^n) := x^i \); that is, for the linear change of basis considered earlier, take the co-ordinates themselves as the components of a contravariant vector. Since \(\boldsymbol{\bar{x}} = B\boldsymbol{x}\) and \(\boldsymbol{x} = F\boldsymbol{\bar{x}}\), we then have: $$ [{\partial \bar{x}^i \over \partial x^j}] \equiv B$$ and: $$ [{\partial x^i \over \partial \bar{x}^j}] \equiv F$$
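A quick numerical sanity check of this identification (reusing the example \(F\) from the earlier sketches): a finite-difference Jacobian of the linear map \(\boldsymbol{\bar{x}} = B\boldsymbol{x}\) reproduces the constant matrix \(B\):

```python
import numpy as np

F = np.array([[2.0, 1.0],    # forward transformation (example values)
              [1.0, 1.0]])
B = np.linalg.inv(F)

def xbar(x):
    # Linear co-ordinate change: xbar = B x.
    return B @ x

# Finite-difference Jacobian [dxbar^i / dx^j] at an arbitrary point.
x0, eps = np.array([1.0, 2.0]), 1e-6
jac = np.column_stack([(xbar(x0 + eps * np.eye(2)[:, j]) - xbar(x0)) / eps
                       for j in range(2)])
assert np.allclose(jac, B)   # ...and likewise [dx^i / dxbar^j] == F
```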

A tensor is said to be mixed if it contains both covariant and contravariant indices. It is assigned the type (or valence) \((p,q)\) if it has \(p\) contravariant indices and \(q\) covariant indices.
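For example, a type \((1,1)\) tensor \(T^k_l\) transforms with one factor of each Jacobian, \(\bar{T}^i_j = {\partial \bar{x}^i \over \partial x^k}{\partial x^l \over \partial \bar{x}^j} T^k_l\), which for a linear change of basis reads \(\bar{T} = B\,T\,F\) in matrix form. A minimal sketch (example values) verifying that the transformed tensor acts consistently on transformed vectors:

```python
import numpy as np

F = np.array([[2.0, 1.0],    # forward transformation (example values)
              [1.0, 1.0]])
B = np.linalg.inv(F)

T = np.array([[1.0, 2.0],    # components of a (1,1) tensor, old system
              [0.0, 3.0]])
T_bar = B @ T @ F            # Tbar^i_j = B^i_k T^k_l F^l_j

# A (1,1) tensor is a linear map on vectors; its action must agree
# in both systems: transforming (T v) gives T_bar applied to (B v).
v = np.array([3.0, -2.0])
assert np.allclose(B @ (T @ v), T_bar @ (B @ v))
```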

Metric Tensor

Let \(\boldsymbol{r} = \boldsymbol{r}(u^1, u^2, u^3) \) be the position vector of a point in \(\mathbb{R}^3\), where \(u^1, u^2, u^3\) are curvilinear co-ordinates. The differential arc-length \(ds\) is given (in terms of the curvilinear co-ordinates) by: $$ ds^2 = d\boldsymbol{r} \cdot d\boldsymbol{r} = \sum_{p=1}^3 \sum_{q=1}^3 g_{pq} \, du^p du^q $$ in which: $$ g_{pq} = {{\partial \boldsymbol{r}} \over {\partial u^p}} \cdot {{\partial \boldsymbol{r}} \over {\partial u^q}} $$ are the components of the rank-2 metric tensor. Under a change of co-ordinates \(u^p \to \bar{u}^k\) the differentials transform contravariantly (using tensor notation): $$ du^p = {{\partial u^p} \over {\partial \bar{u}^k}} d\bar{u}^k $$ so that: $$ ds^2 = g_{pq} \, du^p du^q = g_{pq} {{\partial u^p} \over {\partial \bar{u}^k}} {{\partial u^q} \over {\partial \bar{u}^l}} \, d\bar{u}^k d\bar{u}^l = \bar{g}_{kl} \, d\bar{u}^k d\bar{u}^l $$ where: $$ \bar{g}_{kl} = g_{pq} {{\partial u^p} \over {\partial \bar{u}^k}} {{\partial u^q} \over {\partial \bar{u}^l}}$$ Since each index of \(g\) picks up one factor of \(\partial u / \partial \bar{u}\), this demonstrates that the metric tensor is a covariant rank-2 tensor of type (0,2).
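As a concrete check, the sketch below (spherical co-ordinates and the sample point are assumptions chosen for illustration) computes \(g_{pq}\) by finite-difference partial derivatives of \(\boldsymbol{r}\) and compares against the known spherical-co-ordinate metric:

```python
import numpy as np

def position(u):
    # Spherical co-ordinates (u^1, u^2, u^3) = (r, theta, phi) in R^3.
    r, theta, phi = u
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

def metric(u, eps=1e-6):
    # g_pq = (dr/du^p) . (dr/du^q), partial derivatives taken by
    # central finite differences.
    dr = []
    for p in range(3):
        du = np.zeros(3)
        du[p] = eps
        dr.append((position(u + du) - position(u - du)) / (2 * eps))
    return np.array([[dr[p] @ dr[q] for q in range(3)] for p in range(3)])

u = np.array([2.0, 0.8, 1.3])            # an arbitrary sample point
g = metric(u)

# Known spherical metric: ds^2 = dr^2 + r^2 dtheta^2 + r^2 sin^2(theta) dphi^2.
expected = np.diag([1.0, u[0]**2, (u[0] * np.sin(u[1]))**2])
assert np.allclose(g, expected, atol=1e-4)
```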