| Description | Equations |
| --- | --- |
| Zero-order tensor (scalar) | $(s) \in \mathbb{R}$ |
| First-order tensor (vector) | $[\mathbf{v}] \in E^3$ |
| Second-order tensor | $\{\mathbf{T}\} \in L(E^3, E^3)$ |
| Partial derivatives | $\phi_{,i} \equiv \dfrac{\partial \phi}{\partial x_i}$ |
| Kronecker delta | $\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$ |
| Permutation symbol (Levi-Civita symbol) | $\varepsilon_{ijk} = \begin{cases} 1 & ijk = 123, 231, 312 \\ -1 & ijk = 321, 132, 213 \\ 0 & \text{otherwise (two indices alike)} \end{cases}$ |
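Both symbols are easy to materialize as arrays; a minimal numpy sketch (the variable names are our own, purely illustrative):

```python
import numpy as np

# Kronecker delta as the 3x3 identity: delta[i, j] = 1 if i == j else 0.
delta = np.eye(3)

# Permutation symbol from the closed form eps_ijk = (i - j)(j - k)(k - i)/2;
# the formula is shift-invariant, so 0-based indices give the same values.
eps = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            eps[i, j, k] = (i - j) * (j - k) * (k - i) / 2

assert eps[0, 1, 2] == 1 and eps[2, 1, 0] == -1 and eps[0, 0, 2] == 0
```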
| Description | Equations |
| --- | --- |
| Integers | $\mathbb{Z}$ |
| Natural numbers | $\mathbb{N}$ |
| Real numbers | $\mathbb{R}$ |
| Element of (in) | $x \in X$ |
| Not element of (not in) | $x \notin X$ |
| Subset | $X \subseteq Y$ |
| Proper subset | $X \subset Y$ |
| Union (or) | $X \cup Y$ |
| Intersection (and) | $X \cap Y$ |
| Empty set | $\varnothing$ |
| Cartesian product | $X \times Y = \{(x, y) \mid x \in X,\ y \in Y\}$ |
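Python's built-in sets mirror this notation almost one-to-one; a small sketch (the sample sets are arbitrary):

```python
from itertools import product

X, Y = {1, 2}, {'a', 'b'}

assert 1 in X and 3 not in X   # element of / not element of
assert {1} <= X and {1} < X    # subset / proper subset
print(X | {3})                 # union
print(X & {2})                 # intersection
print(set())                   # empty set

# Cartesian product X x Y = {(x, y) | x in X, y in Y}
print(set(product(X, Y)))
```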
| Definition of Vector Space $\{V, +; \mathbb{R}, \cdot\}$ | Equations |
| --- | --- |
| Closure under linear combination | $\mathbf{u}, \mathbf{v} \in V$ and $a, b \in \mathbb{R}$ imply $(a \cdot \mathbf{u} + b \cdot \mathbf{v}) \in V$ |
| Existence of null element | $\exists \mathbf{0} \in V$ satisfying $\mathbf{u} + \mathbf{0} = \mathbf{u}$ |
| Existence of additive inverse | $\exists (-\mathbf{u}) \in V$ satisfying $\mathbf{u} + (-\mathbf{u}) = \mathbf{0}$ |
| Existence of scalar identity | $1 \cdot \mathbf{u} = \mathbf{u}$ |
| Associativity of vector addition $(+)$ | $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$ |
| Associativity of scalar multiplication $(\cdot)$ | $(\alpha\beta) \cdot \mathbf{u} = \alpha \cdot (\beta \cdot \mathbf{u})$ |
| Distributivity w.r.t. $\mathbb{R}$ | $(\alpha + \beta) \cdot \mathbf{u} = \alpha \cdot \mathbf{u} + \beta \cdot \mathbf{u}$ |
| Distributivity w.r.t. $V$ | $\alpha \cdot (\mathbf{u} + \mathbf{v}) = \alpha \cdot \mathbf{u} + \alpha \cdot \mathbf{v}$ |
| Commutativity of vector addition $(+)$ | $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$ |
| Description | Equations |
| --- | --- |
| Linear subspace | $U \subseteq V$ with $\alpha\mathbf{u}_1 + \beta\mathbf{u}_2 \in U$ for all $\mathbf{u}_1, \mathbf{u}_2 \in U$ |
| Linearly independent | $\displaystyle\sum_{i=1}^N \alpha_i \mathbf{v}_i = \mathbf{0} \iff \alpha_i = 0$ |
| Finite dimensional | $V^n$: $\exists n \in \mathbb{Z}$ such that every linearly independent set contains at most $n$ elements |
| Basis | $\displaystyle\mathbf{v} = \sum_{i=1}^n \alpha_i \mathbf{b}_i$ |
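In coordinates, linear independence is a rank condition and expressing $\mathbf{v}$ in a basis is a linear solve; a numpy sketch with a hand-picked basis:

```python
import numpy as np

# Candidate basis vectors as columns; they are linearly independent
# iff alpha = 0 is the only solution of B @ alpha = 0 (full rank).
B = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])
assert np.linalg.matrix_rank(B) == 3   # the columns form a basis of R^3

# Coordinates alpha of v in this basis: v = sum_i alpha_i b_i.
v = np.array([2.0, 3.0, 4.0])
alpha = np.linalg.solve(B, v)
assert np.allclose(B @ alpha, v)
```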
| Definition of inner product | Equations |
| --- | --- |
| Commutativity of inner product $(\cdot)$ | $\mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u}$ |
| Distributivity of $(+)$ | $\mathbf{u} \cdot (\mathbf{v} + \mathbf{w}) = \mathbf{u} \cdot \mathbf{v} + \mathbf{u} \cdot \mathbf{w}$ |
| Associativity of $(\cdot)$ | $(\alpha\mathbf{u}) \cdot \mathbf{v} = \alpha(\mathbf{u} \cdot \mathbf{v})$ |
| Positive definite | $\mathbf{u} \cdot \mathbf{u} \ge 0$; $\mathbf{u} \cdot \mathbf{u} = 0 \iff \mathbf{u} = \mathbf{0}$ |
| Description | Equations |
| --- | --- |
| Euclidean norm (magnitude) | $\vert\mathbf{u}\vert = \sqrt{\mathbf{u} \cdot \mathbf{u}}$ |
| Distance | $d(\mathbf{u}, \mathbf{v}) \equiv \vert\mathbf{u} - \mathbf{v}\vert$ |
| Orthogonal | $\mathbf{u} \cdot \mathbf{v} = 0$ |
| Orthonormal | $\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}$ |
| Orthonormal basis | $\displaystyle\mathbf{v} = \sum_{i=1}^n \alpha_i \mathbf{e}_i$ |
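All of these are one-liners in numpy; a quick sketch (the orthogonal pair below is chosen by hand):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, -1.0])

assert np.isclose(u @ v, v @ u)                       # commutativity
assert np.isclose(np.linalg.norm(u), np.sqrt(u @ u))  # |u| = sqrt(u . u)
print(np.linalg.norm(u - v))                          # distance d(u, v) = |u - v|
assert np.isclose(u @ v, 0.0)                         # this u, v pair is orthogonal

E = np.eye(3)                                         # the standard basis
assert np.allclose(E @ E.T, np.eye(3))                # e_i . e_j = delta_ij
```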
| Definition of vector product | Equations |
| --- | --- |
| Negative commutativity | $\mathbf{u} \times \mathbf{v} = -\mathbf{v} \times \mathbf{u}$ |
| Triple product (box product) | $(\mathbf{u} \times \mathbf{v}) \cdot \mathbf{w} = (\mathbf{v} \times \mathbf{w}) \cdot \mathbf{u} = (\mathbf{w} \times \mathbf{u}) \cdot \mathbf{v}$, i.e. $[\mathbf{u}, \mathbf{v}, \mathbf{w}] = [\mathbf{v}, \mathbf{w}, \mathbf{u}] = [\mathbf{w}, \mathbf{u}, \mathbf{v}]$ |
| Magnitude of vector product | $\lvert\mathbf{u} \times \mathbf{v}\rvert = \lvert\mathbf{u}\rvert \lvert\mathbf{v}\rvert \sin\theta$, where $\cos\theta = \dfrac{\mathbf{u} \cdot \mathbf{v}}{\lvert\mathbf{u}\rvert \lvert\mathbf{v}\rvert}$ |
| Triple cross product | $\mathbf{u} \times (\mathbf{v} \times \mathbf{w}) = (\mathbf{u} \cdot \mathbf{w})\mathbf{v} - (\mathbf{u} \cdot \mathbf{v})\mathbf{w}$ |
| Description | Equations |
| --- | --- |
| Self cross product | $\mathbf{u} \times \mathbf{u} = \mathbf{0}$ |
| Cross product is orthogonal to original vectors | $(\mathbf{u} \times \mathbf{v}) \cdot \mathbf{u} = 0$, $(\mathbf{u} \times \mathbf{v}) \cdot \mathbf{v} = 0$ |
| Right-handed orthonormal basis | $\displaystyle\mathbf{e}_i \times \mathbf{e}_j = \sum_{k=1}^3 \varepsilon_{ijk} \mathbf{e}_k$ |
| Vector product in tensor notation | $\displaystyle\mathbf{u} \times \mathbf{v} = \sum_{i=1}^3 \sum_{j=1}^3 u_i v_j\, \mathbf{e}_i \times \mathbf{e}_j$ |
| Vector product in permutation notation | $\displaystyle\mathbf{u} \times \mathbf{v} = \sum_{i=1}^3 \sum_{j=1}^3 \sum_{k=1}^3 u_i v_j \varepsilon_{ijk} \mathbf{e}_k$ |
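The permutation-notation formula is exactly an einsum contraction; a sketch reusing the `eps` array built earlier:

```python
import numpy as np

# Permutation symbol (same construction as in the first sketch).
eps = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            eps[i, j, k] = (i - j) * (j - k) * (k - i) / 2

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# (u x v)_k = u_i v_j eps_ijk, summed over i and j.
w = np.einsum('i,j,ijk->k', u, v, eps)
assert np.allclose(w, np.cross(u, v))
assert np.isclose(w @ u, 0.0) and np.isclose(w @ v, 0.0)  # orthogonality
```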
| Description | Equations |
| --- | --- |
| Vector-to-scalar function | $f: U \to \mathbb{R}$ |
| Vector-to-vector function | $\mathbf{f}: U \to V$ |
| Linear function | $f(\alpha_1 \mathbf{v}_1 + \alpha_2 \mathbf{v}_2) = \alpha_1 f(\mathbf{v}_1) + \alpha_2 f(\mathbf{v}_2)$ |
| General form of linear function | $f(\mathbf{v}) = \mathbf{a} \cdot \mathbf{v}$ |
| Description | Equations |
| --- | --- |
| Dummy index: appears twice and is summed | $\displaystyle u_i v_i \equiv \sum_{i=1}^3 u_i v_i = \mathbf{u} \cdot \mathbf{v}$ |
| Free index: appears once and is stacked | $\displaystyle v_i \equiv v_i \mathbf{e}_i = \sum_{i=1}^3 v_i \mathbf{e}_i = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix}^T$, $\displaystyle V_{ij} \equiv V_{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j = \sum_{i=1}^3 \sum_{j=1}^3 V_{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j = \begin{bmatrix} V_{11} & V_{12} & V_{13} \\ V_{21} & V_{22} & V_{23} \\ V_{31} & V_{32} & V_{33} \end{bmatrix}$ |
| No index appears more than twice | $\cancel{v_{iii}}$, $\cancel{v_i u_i w_i}$ |
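`np.einsum` implements this convention directly; a sketch with arbitrary sample arrays:

```python
import numpy as np

u, v = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
V = np.arange(9.0).reshape(3, 3)

# Dummy index (appears twice, summed over): u_i v_i = u . v
assert np.isclose(np.einsum('i,i->', u, v), u @ v)

# Free index (appears once, stacked): (V v)_i = V_ij v_j
assert np.allclose(np.einsum('ij,j->i', V, v), V @ v)

# Two free indices produce a second-order result: v_i u_j
assert np.allclose(np.einsum('i,j->ij', v, u), np.outer(v, u))
# Note: einsum itself tolerates an index repeated more than twice;
# the "at most twice" rule is a convention of the notation, not of numpy.
```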
All notation below follows Einstein's indicial (summation) convention.
| Description | Equations |
| --- | --- |
| Tensor | $\mathbf{A}$ in $\mathbf{f}(\mathbf{v}) = \mathbf{A}\mathbf{v}$ |
| Tensor (dyadic) product | $(\mathbf{a} \otimes \mathbf{b})\mathbf{v} \equiv (\mathbf{b} \cdot \mathbf{v})\mathbf{a}$ |
| Tensor (dyadic) product in components | $(\mathbf{a} \otimes \mathbf{b})_{ij} = a_i b_j$ |
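A quick check of the defining property with `np.outer` (sample vectors are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.0])
v = np.array([4.0, 5.0, 6.0])

A = np.outer(a, b)                      # (a (x) b)_ij = a_i b_j
assert np.allclose(A @ v, (b @ v) * a)  # (a (x) b) v = (b . v) a
```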
| Description | Equations |
| --- | --- |
| Transpose | $\mathbf{u} \cdot \mathbf{T}\mathbf{v} \equiv \mathbf{v} \cdot \mathbf{T}^T \mathbf{u}$, $(T^T)_{ij} = T_{ji}$ |
| Tensor multiplication | $(\mathbf{T}\mathbf{S})\mathbf{v} \equiv \mathbf{T}(\mathbf{S}\mathbf{v})$, $(\mathbf{T}\mathbf{S})_{ij} = T_{ik} S_{kj}$ |
| Trace | $\mathrm{tr}(\mathbf{u} \otimes \mathbf{v}) \equiv \mathbf{u} \cdot \mathbf{v}$, $\mathrm{tr}(\mathbf{T}) = T_{ii} = \sum \mathrm{diag}(\mathbf{T})$ |
| Contraction (inner product, dot product) | $\mathbf{T} \cdot \mathbf{S} \equiv \mathrm{tr}(\mathbf{T}\mathbf{S}^T) = T_{ij} S_{ij}$ |
| Identity tensor | $\mathbf{I}\mathbf{v} \equiv \mathbf{v}$, $I_{ij} = \delta_{ij}$ |
| Zero tensor | $\mathbf{O}\mathbf{v} = \mathbf{0}$, $O_{ij} = 0$ |
| Symmetric | $\mathbf{T}^T = \mathbf{T}$, $T_{ij} = T_{ji}$ |
| Skew-symmetric | $\mathbf{T}^T = -\mathbf{T}$, $T_{ij} = -T_{ji}$ |
| Positive-definite | $\mathbf{v} \cdot \mathbf{T}\mathbf{v} \ge 0$; $\mathbf{v} \cdot \mathbf{T}\mathbf{v} = 0 \iff \mathbf{v} = \mathbf{0}$ |
| Invertible | $\mathbf{T}\mathbf{v} = \mathbf{w}$ uniquely determines $\mathbf{v}$, i.e. $\mathbf{v} = \mathbf{T}^{-1}\mathbf{w}$; $\mathbf{T}\mathbf{T}^{-1} = \mathbf{I}$ |
| Orthogonal | $\mathbf{T}^T\mathbf{T} = \mathbf{T}\mathbf{T}^T = \mathbf{I}$, i.e. $\mathbf{T}^T = \mathbf{T}^{-1}$ |
| Characteristic polynomial | $\det(\mathbf{T} - \lambda\mathbf{I}) = -\lambda^3 + I_1\lambda^2 - I_2\lambda + I_3 = 0$ |
| Eigenvalue | $\lambda$ |
| Principal invariant 1 | $I_1 = \mathrm{tr}(\mathbf{T})$ |
| Principal invariant 2 | $I_2 = \frac{1}{2}\left[\mathrm{tr}(\mathbf{T})^2 - \mathrm{tr}(\mathbf{T}^2)\right]$ |
| Principal invariant 3 | $I_3 = \det(\mathbf{T})$ |
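A numpy sketch verifying that the eigenvalues of a sample tensor satisfy the characteristic polynomial built from the principal invariants (the matrix is an arbitrary choice):

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

I1 = np.trace(T)
I2 = 0.5 * (np.trace(T)**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

# Each eigenvalue satisfies -l^3 + I1*l^2 - I2*l + I3 = 0.
for lam in np.linalg.eigvals(T):
    assert np.isclose(-lam**3 + I1 * lam**2 - I2 * lam + I3, 0.0)

# Contraction from the table above: T . S = tr(T S^T) = T_ij S_ij.
S = np.arange(9.0).reshape(3, 3)
assert np.isclose(np.trace(T @ S.T), np.einsum('ij,ij->', T, S))
```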
| Description | Equations |
| --- | --- |
| Distributivity of transpose | $(\mathbf{T} + \mathbf{S})^T = \mathbf{S}^T + \mathbf{T}^T$ |
| Transpose flips multiplication order | $(\mathbf{T}\mathbf{S})^T = \mathbf{S}^T \mathbf{T}^T$ |
| Inverse flips multiplication order | $(\mathbf{T}\mathbf{S})^{-1} = \mathbf{S}^{-1}\mathbf{T}^{-1}$ |
| Transpose-inverse | $\mathbf{T}^{-T} \equiv (\mathbf{T}^{-1})^T = (\mathbf{T}^T)^{-1}$ |
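These order-flipping rules are easy to confirm numerically on random matrices (almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
T, S = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

assert np.allclose((T + S).T, S.T + T.T)
assert np.allclose((T @ S).T, S.T @ T.T)
assert np.allclose(np.linalg.inv(T @ S), np.linalg.inv(S) @ np.linalg.inv(T))
assert np.allclose(np.linalg.inv(T).T, np.linalg.inv(T.T))
```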
| Description | Notation | Domain | Range |
| --- | --- | --- | --- |
| Scalar-to-scalar | $\phi_1(t)$ | $\mathbb{R}$ | $\mathbb{R}$ |
| Vector-to-scalar | $\phi_2(\mathbf{x})$ | $E^3$ | $\mathbb{R}$ |
| Multivariable scalar-valued | $\phi_3(\mathbf{x}, t)$ | $E^3 \times \mathbb{R}$ | $\mathbb{R}$ |
| Scalar-to-vector | $\mathbf{v}_1(t)$ | $\mathbb{R}$ | $E^3$ |
| Vector-to-vector | $\mathbf{v}_2(\mathbf{x})$ | $E^3$ | $E^3$ |
| Multivariable vector-valued | $\mathbf{v}_3(\mathbf{x}, t)$ | $E^3 \times \mathbb{R}$ | $E^3$ |
| Scalar-to-tensor | $\mathbf{T}_1(t)$ | $\mathbb{R}$ | $L(E^3, E^3)$ |
| Vector-to-tensor | $\mathbf{T}_2(\mathbf{x})$ | $E^3$ | $L(E^3, E^3)$ |
| Multivariable tensor-valued | $\mathbf{T}_3(\mathbf{x}, t)$ | $E^3 \times \mathbb{R}$ | $L(E^3, E^3)$ |
| Description | Definition |
| --- | --- |
| Gradient of a scalar function | $[\mathrm{grad}\ \phi(\mathbf{x})] \cdot \mathbf{w} \equiv \left[\dfrac{d}{d\omega} \phi(\mathbf{x} + \omega\mathbf{w})\right]_{\omega = 0}$ |
| Gradient of a vector function | $[\mathrm{grad}\ \mathbf{v}(\mathbf{x})]\mathbf{w} \equiv \left[\dfrac{d}{d\omega} \mathbf{v}(\mathbf{x} + \omega\mathbf{w})\right]_{\omega = 0}$ |
| Divergence of a vector function | $\mathrm{div}\ \mathbf{v}(\mathbf{x}) \equiv \mathrm{tr}[\mathrm{grad}\ \mathbf{v}(\mathbf{x})]$ |
| Divergence of a tensor function | $[\mathrm{div}\ \mathbf{T}(\mathbf{x})] \cdot \mathbf{w} \equiv \mathrm{div}[\mathbf{T}^T(\mathbf{x})\mathbf{w}]$ |
| Curl of a vector function | $[\mathrm{curl}\ \mathbf{v}(\mathbf{x})] \cdot \mathbf{w} \equiv \mathrm{div}[\mathbf{v}(\mathbf{x}) \times \mathbf{w}]$ |
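The gradient definition is operational: it can be tested against a finite-difference directional derivative. A sketch with a hand-picked field `phi` (our own illustrative choice):

```python
import numpy as np

def phi(x):
    """Sample scalar field phi(x) = x1^2 * x2 + x3 (arbitrary choice)."""
    return x[0]**2 * x[1] + x[2]

def directional_derivative(f, x, w, h=1e-6):
    """[d/d omega f(x + omega w)] at omega = 0, by central differences."""
    return (f(x + h * w) - f(x - h * w)) / (2 * h)

x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 2.0])

# For this phi, grad phi = (2 x1 x2, x1^2, 1); the definition says
# (grad phi) . w equals the directional derivative along w.
grad_phi = np.array([2 * x[0] * x[1], x[0]**2, 1.0])
assert np.isclose(grad_phi @ w, directional_derivative(phi, x, w))
```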
| Description | Expression in Orthonormal Coordinates |
| --- | --- |
| Gradient of a scalar function | $\mathrm{grad}\ \phi(\mathbf{x}) = \phi_{,i}\, \mathbf{e}_i$ |
| Gradient of a vector function | $\mathrm{grad}\ \mathbf{v}(\mathbf{x}) = v_{i,j}\, \mathbf{e}_i \otimes \mathbf{e}_j$ |
| Divergence of a vector function | $\mathrm{div}\ \mathbf{v}(\mathbf{x}) = v_{i,i}$ |
| Divergence of a tensor function | $\mathrm{div}\ \mathbf{T}(\mathbf{x}) = T_{ji,i}\, \mathbf{e}_j = T_{ij,j}\, \mathbf{e}_i$ |
| Curl of a vector function | $\mathrm{curl}\ \mathbf{v}(\mathbf{x}) = \varepsilon_{ijk} v_{j,i}\, \mathbf{e}_k$ |
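These coordinate formulas translate directly into symbolic code; a sympy sketch on an arbitrary sample field:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]
v = [x2 * x3, x1**2, sp.sin(x3)]  # arbitrary sample vector field

# grad v = v_{i,j} e_i (x) e_j: rows index the component, columns the derivative.
grad_v = sp.Matrix(3, 3, lambda i, j: sp.diff(v[i], X[j]))

# div v = v_{i,i} = tr(grad v)
div_v = grad_v.trace()

# (curl v)_k = eps_ijk v_{j,i}; the closed form below is exact in integers.
eps = lambda i, j, k: (i - j) * (j - k) * (k - i) // 2
curl_v = [sum(eps(i, j, k) * sp.diff(v[j], X[i])
              for i in range(3) for j in range(3)) for k in range(3)]

print(grad_v)   # Matrix([[0, x3, x2], [2*x1, 0, 0], [0, 0, cos(x3)]])
print(div_v)    # cos(x3)
print(curl_v)   # [0, x2, 2*x1 - x3]
```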
| Description | Domain | Range |
| --- | --- | --- |
| Gradient of a scalar function | Scalar function | Vector function |
| Gradient of a vector function, $\mathrm{grad} = \nabla$ | Vector function | Tensor function |
| Divergence of a vector function | Vector function | Scalar |
| Divergence of a tensor function, $\mathrm{div} = \nabla\cdot$ | Tensor function | Vector |
| Curl of a vector function, $\mathrm{curl} = \nabla\times$ | Vector function | Vector function |
| Description | Equations |
| --- | --- |
| - | $\mathrm{grad}(\phi\mathbf{v}) = \phi\,\mathrm{grad}(\mathbf{v}) + \mathbf{v} \otimes \mathrm{grad}(\phi)$ |
| - | $\mathrm{div}(\phi\mathbf{v}) = \phi\,\mathrm{div}(\mathbf{v}) + \mathbf{v} \cdot \mathrm{grad}(\phi)$ |
| - | $\mathrm{curl}[\mathrm{grad}(\phi)] = \mathbf{0}$ |
| - | $\mathrm{div}[\mathrm{curl}(\mathbf{v})] = 0$ |
| - | $\mathrm{grad}(\mathbf{v} \cdot \mathbf{w}) = [\mathrm{grad}(\mathbf{v})]^T \mathbf{w} + [\mathrm{grad}(\mathbf{w})]^T \mathbf{v}$ |
| - | $\mathrm{grad}[\mathrm{div}(\mathbf{v})] = \mathrm{div}[\mathrm{grad}(\mathbf{v})]^T$ |
| - | $\mathrm{div}(\mathbf{v} \otimes \mathbf{w}) = [\mathrm{grad}(\mathbf{v})]\mathbf{w} + \mathbf{v}\,\mathrm{div}(\mathbf{w})$ |
| - | $\mathrm{curl}[\mathrm{curl}(\mathbf{v})] = \mathrm{grad}[\mathrm{div}(\mathbf{v})] - \mathrm{div}[\mathrm{grad}(\mathbf{v})]$ |
| - | $\mathrm{div}(\mathbf{v} \times \mathbf{w}) = \mathbf{w} \cdot \mathrm{curl}(\mathbf{v}) - \mathbf{v} \cdot \mathrm{curl}(\mathbf{w})$ |
| - | $\mathrm{curl}(\mathbf{v} \times \mathbf{w}) = \mathrm{div}(\mathbf{v} \otimes \mathbf{w} - \mathbf{w} \otimes \mathbf{v})$ |
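Identities like $\mathrm{curl}[\mathrm{grad}(\phi)] = \mathbf{0}$ and $\mathrm{div}[\mathrm{curl}(\mathbf{v})] = 0$ can be confirmed symbolically; a sympy sketch using the coordinate expressions above (test fields are arbitrary):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]
eps = lambda i, j, k: (i - j) * (j - k) * (k - i) // 2

def grad(phi):   # grad(phi)_i = phi_{,i}
    return [sp.diff(phi, xi) for xi in X]

def div(v):      # div(v) = v_{i,i}
    return sum(sp.diff(v[i], X[i]) for i in range(3))

def curl(v):     # (curl v)_k = eps_ijk v_{j,i}
    return [sp.expand(sum(eps(i, j, k) * sp.diff(v[j], X[i])
            for i in range(3) for j in range(3))) for k in range(3)]

phi = x1 * sp.exp(x2) + x3**2                # arbitrary test fields
v = [x2**2 * x3, x1 * x3**2, x1**2 * x2]

assert curl(grad(phi)) == [0, 0, 0]          # curl[grad(phi)] = 0
assert sp.expand(div(curl(v))) == 0          # div[curl(v)] = 0
```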
| Description | Equations |
| --- | --- |
| Permutation symbol | $\varepsilon_{ijk} = \frac{1}{2}(i - j)(j - k)(k - i)$ |
| Permutation symbol and Kronecker delta | $\varepsilon_{ijk} \varepsilon_{ijm} = 2\delta_{km}$ |
| $\varepsilon$-$\delta$ identity | $\varepsilon_{ijk}\varepsilon_{mnk} = \delta_{im}\delta_{jn} - \delta_{in}\delta_{jm}$ |
| Determinant in permutation symbol | $\det(\mathbf{A}) = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \varepsilon_{ijk} a_{1i} a_{2j} a_{3k}$ |
| Dot product of basis vectors | $\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}$ as a scalar |
| Basis representation of the identity tensor | $\delta_{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j = \mathbf{I}$ as a tensor |
Note that the meaning of an indicial expression depends on context: e.g. $\delta_{ij}$ can act as a scalar or as a tensor depending on what it multiplies.
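The $\varepsilon$-$\delta$ identities and the determinant formula check out numerically; a numpy sketch:

```python
import numpy as np

delta = np.eye(3)
eps = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            eps[i, j, k] = (i - j) * (j - k) * (k - i) / 2

# eps_ijk eps_ijm = 2 delta_km
assert np.allclose(np.einsum('ijk,ijm->km', eps, eps), 2 * delta)

# eps-delta identity: eps_ijk eps_mnk = delta_im delta_jn - delta_in delta_jm
lhs = np.einsum('ijk,mnk->ijmn', eps, eps)
rhs = (np.einsum('im,jn->ijmn', delta, delta)
       - np.einsum('in,jm->ijmn', delta, delta))
assert np.allclose(lhs, rhs)

# det(A) = eps_ijk a_1i a_2j a_3k, on a random sample matrix
A = np.random.default_rng(0).normal(size=(3, 3))
assert np.isclose(np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2]),
                  np.linalg.det(A))
```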