import theme
theme.load_style()
This lecture by Tim Fuller is licensed under the Creative Commons Attribution 4.0 International License. All code examples are also licensed under the MIT license.
A matrix ${\boldsymbol{K}}$ can be written in "indicial" notation as
$$ {\boldsymbol{K}}=K_{ij} $$where $i$ and $j$ represent the rows and columns of the matrix, respectively. Matrix-vector multiplication, in indicial notation, is written as
$$ {\boldsymbol{K}}{\boldsymbol{x}}=\sum_{j=1}^{N}K_{ij}x_{j}=K_{ij}x_{j} $$We drop the summation symbol when a subscript is repeated (appears twice) in an expression, i.e., summation is implied over the repeated subscript. Incidentally, when a subscript is not repeated in an expression, it is called a 'free' index and takes values from 1 to $N$. In our example above, we sum over $j$ for each of the $N$ values of the free index $i$.
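As a quick check of the implied-summation convention, here is a small NumPy sketch; the matrix and vector values are arbitrary and chosen only for illustration:

```python
import numpy as np

# An arbitrary 3x3 matrix K_ij and vector x_j, for illustration only
K = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x = np.array([1.0, 2.0, 3.0])

# K_ij x_j with the sum over the repeated index j written out explicitly
explicit = np.array([sum(K[i, j] * x[j] for j in range(3)) for i in range(3)])

# The same contraction in einsum's index notation, which mirrors K_ij x_j
implied = np.einsum("ij,j->i", K, x)

print(explicit)                      # [ 4. 10.  8.]
print(np.allclose(implied, K @ x))   # True: all three forms agree
```

Using the indicial notation we write the transpose of a matrix as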
$$ K_{ij}^T=K_{ji} $$and if a matrix is symmetric
$$ K_{ij}=K_{ji} $$The integral of a sum
$$ \int \sum_{i=1}^n c_iy_i \,\mathrm{d}{y} \ \ \ c_i\in{\rm I\kern-.16em R} $$is equal to the sum of integrals:
$$ \int \sum_{i=1}^n c_iy_i \,\mathrm{d}{y} =\sum_{i=1}^nc_i \int y_i \,\mathrm{d}{y} $$Proof:
$$ \int \sum_{i=1}^n c_iy_i \,\mathrm{d}{y} = \int \left( c_1y_1 + c_2y_2 + \cdots + c_ny_n \right) \mathrm{d}{y} $$$$ = \int c_1y_1\,\mathrm{d}{y} + \int c_2y_2\,\mathrm{d}{y} + \cdots + \int c_ny_n \,\mathrm{d}{y}= c_1\int y_1\,\mathrm{d}{y} + c_2 \int y_2\,\mathrm{d}{y} + \cdots +c_n \int y_n \,\mathrm{d}{y} $$$$ =\sum_{i=1}^nc_i \int y_i \,\mathrm{d}{y} $$$$ \Rightarrow\int \sum_{i=1}^n c_iy_i \,\mathrm{d}{y}=\sum_{i=1}^nc_i \int y_i \,\mathrm{d}{y} $$
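As a quick symbolic spot-check of this identity, here is a small SymPy sketch; the two integrand functions and constants are arbitrary and chosen only for illustration:

```python
import sympy as sp

y = sp.symbols("y")
c1, c2 = sp.symbols("c1 c2", real=True)

# Two arbitrary integrand functions of y, for illustration only
y1 = sp.sin(y)
y2 = y**2

# Integral of the sum versus the sum of the integrals
lhs = sp.integrate(c1 * y1 + c2 * y2, y)
rhs = c1 * sp.integrate(y1, y) + c2 * sp.integrate(y2, y)

print(sp.simplify(lhs - rhs))  # 0: the two forms agree
```

Similarly,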
$$ \frac{d\left( \sum_{j=1}^n u_j\right)}{dx} = \sum_{j=1}^n\frac{du_j}{dx} $$Any integral of the form
$$ \int_0^Lf(x)\,\mathrm{d}{x} $$can be written as a sum over elements:
$$ \int_0^Lf(x)\,\mathrm{d}{x}=\int_{\Omega_1}f(x)\,\mathrm{d}{x}+\int_{\Omega_2}f(x)\,\mathrm{d}{x}+\cdots+\int_{\Omega_n}f(x)\,\mathrm{d}{x} $$or
$$ \int_0^Lf(x)\,\mathrm{d}{x}=\sum_{e}\int_{\Omega_e}f(x)\,\mathrm{d}{x} $$where the sum is taken over all elements and $\Omega_e$ represents the $e^{\rm th}$ element domain.
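A small numerical sketch of this element-wise splitting; the integrand, domain length, and number of elements are arbitrary and chosen only for illustration:

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary integrand f(x) and domain [0, L], for illustration only
f = lambda x: np.exp(-x) * np.sin(3.0 * x)
L = 2.0

# Integral over the whole domain
whole, _ = quad(f, 0.0, L)

# The same integral accumulated element by element over 4 equal sub-domains
edges = np.linspace(0.0, L, 5)
by_elements = sum(quad(f, a, b)[0] for a, b in zip(edges[:-1], edges[1:]))

print(np.isclose(whole, by_elements))  # True
```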
Consider the equation
$$ c_1x+c_2y =0, \ \ c_1,c_2\in {\rm I\kern-.16em R} $$If $c_1$ and $c_2$ are arbitrary, independent of one another, and non-zero, that is,
$$ c_1\neq kc_2 $$$$ c_1,c_2\neq0 $$then it follows that
$$ x=y=0 $$In general, if
$$ \sum_{i=1}^n c_ix_i = 0, \ \ \forall c_i $$$$ \Rightarrow x_i = 0 $$In general, a $k^{\rm th}$ order ODE
$$ a_ku^{(k)} + a_{k-1}u^{(k-1)}+\cdots+ a_{2}u''+ a_{1}u' + a_0u=0 $$requires that
$$ u\in C^{k-1} $$which means that $u$ has $k-1$ continuous derivatives. For example, if
$$ \frac{d^2u}{dx^2}=0 $$then $u\in C^1$, or the first derivative of $u$ must be continuous. Let's see if this is true.
$$ \frac{d^2u}{dx^2} = 0 \Rightarrow u=c_1x + c_2 $$and
$$ u'=c_1 $$Sure enough, $u'$ is continuous and $u\in C^1$.
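The same small example can be checked symbolically; a minimal SymPy sketch:

```python
import sympy as sp

x = sp.symbols("x")
u = sp.Function("u")

# Solve u'' = 0; the general solution is linear in x
sol = sp.dsolve(sp.Eq(u(x).diff(x, 2), 0), u(x))
print(sol)                  # Eq(u(x), C1 + C2*x)

# Its first derivative is a constant, hence continuous, so u is C^1
print(sp.diff(sol.rhs, x))  # C2
```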
Recall that, for $a=a(x), \ u=u(x)$, and $w=w(x)$
$$ \frac{d}{dx}\left( wa \frac{du}{dx} \right)=w\left[\frac{d}{dx}\left( a\frac{du}{dx}\right)\right]+ a\frac{dw}{dx}\frac{du}{dx} $$Note that the product rule of differentiation is used in integration by parts
$$ \int_{x_0}^{x_f}w\left[\frac{d}{dx}\left( a\frac{du}{dx}\right)\right] \,\mathrm{d}{x} = \int_{x_0}^{x_f}\frac{d}{dx}\left( wa \frac{du}{dx}\right) \,\mathrm{d}{x} - \int_{x_0}^{x_f}a\frac{dw}{dx}\frac{du}{dx}\,\mathrm{d}{x} $$$$ \Rightarrow \int_{x_0}^{x_f}w\left[\frac{d}{dx}\left( a\frac{du}{dx}\right)\right] \,\mathrm{d}{x} =- \int_{x_0}^{x_f}a\frac{dw}{dx}\frac{du}{dx}\,\mathrm{d}{x} + \left( wa \frac{du}{dx}\right)\Bigg|_{x_0}^{x_f} $$Each one of these identities and rules will be used in the formulation of the FEM to follow.
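As a quick spot-check of the integration-by-parts identity above, here is a short SymPy sketch; the particular choices of $a$, $u$, and $w$ and the symbolic limits are arbitrary and used only for illustration:

```python
import sympy as sp

x, x0, xf = sp.symbols("x x_0 x_f")

# Arbitrary smooth choices for a(x), u(x), and w(x), for illustration only
a = 1 + x**2
u = x**3
w = 2 * x + 1

flux = w * a * sp.diff(u, x)  # the boundary term w*a*du/dx

lhs = sp.integrate(w * sp.diff(a * sp.diff(u, x), x), (x, x0, xf))
rhs = (-sp.integrate(a * sp.diff(w, x) * sp.diff(u, x), (x, x0, xf))
       + flux.subs(x, xf) - flux.subs(x, x0))

print(sp.simplify(lhs - rhs))  # 0: the identity holds for these choices
```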
An inner product between two vectors ${\boldsymbol{u}}$ and ${\boldsymbol{v}}$, denoted by $({\boldsymbol{u}},{\boldsymbol{v}})$, is a product with the following properties. For the vectors ${\boldsymbol{u}}$, ${\boldsymbol{v}}$, ${\boldsymbol{w}}$ and scalar $\alpha$
$$ \begin{split} 1. \ & ({\boldsymbol{u}}+{\boldsymbol{v}},{\boldsymbol{w}})=({\boldsymbol{u}},{\boldsymbol{w}})+({\boldsymbol{v}},{\boldsymbol{w}})\\ 2. \ & (\alpha{\boldsymbol{u}},{\boldsymbol{v}})=\alpha({\boldsymbol{u}},{\boldsymbol{v}})\\ 3. \ & ({\boldsymbol{u}},{\boldsymbol{v}})=({\boldsymbol{v}},{\boldsymbol{u}})\\ 4. \ & ({\boldsymbol{v}},{\boldsymbol{v}}) > 0 \ \mathrm{for \ all} \ {\boldsymbol{v}}\neq 0\\ 5. \ & ({\boldsymbol{v}},{\boldsymbol{v}}) = 0 \ \mathrm{iff} \ {\boldsymbol{v}}= 0\\ \end{split} $$In Cartesian space, the inner product of two vectors is simply the familiar dot product, or
$$ ({\boldsymbol{u}},{\boldsymbol{v}}) = {\boldsymbol{u}}\cdot\boldsymbol{v}=\sum_{i=1}^N u_iv_i=\|{\boldsymbol{u}}\|\,\|{\boldsymbol{v}}\|\cos\theta $$where $\theta$ is the angle between the two vectors. We use the $(\cdot,\cdot)$ notation because we will soon generalize the inner product for vectors to functions. You will recall from linear algebra that two vectors are orthogonal if
$$ ({\boldsymbol{u}},{\boldsymbol{v}}) = 0 $$We can also think of inner products of functions in much the same way as inner products of vectors. For instance, suppose that instead of being vectors, ${\boldsymbol{u}}$ and ${\boldsymbol{v}}$ were functions and that instead of summing the product of the functions at discrete points we summed the product of the functions at an infinite number of points on some interval. Then we can think of the sum as an integral and the function inner product is then defined as
$$ (u(x),v(x))=\int_a^b u(x)v(x) \,dx $$The hand-waving above is not a proof, but it serves to show the similarity between the dot product and the function inner product. However, it is relatively straightforward to show that the integral defined above does in fact satisfy all of the properties of an inner product and is, therefore, an inner product.
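To make the limiting-sum picture concrete, here is a small NumPy sketch; the functions $u$, $v$ and the interval are arbitrary and chosen only for illustration:

```python
import numpy as np

# Arbitrary functions and interval, for illustration only
u = lambda x: np.sin(x)
v = lambda x: x**2
a, b = 0.0, 1.0

# "Dot product" of the functions sampled at N midpoints, scaled by dx;
# as N grows it approaches the integral defining the function inner product
for N in (10, 100, 1000, 10000):
    dx = (b - a) / N
    xs = a + dx * (np.arange(N) + 0.5)   # midpoints of N sub-intervals
    print(N, np.sum(u(xs) * v(xs)) * dx)

# Exact value: integral of x^2 sin(x) on [0, 1] = 2*sin(1) + cos(1) - 2 ≈ 0.22324
```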
Two functions are orthogonal if
$$ (u,v)=\int_a^b uv \,dx =0 $$
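For example, $\sin(x)$ and $\sin(2x)$ are orthogonal on $[0,\pi]$ under this inner product; a minimal SymPy check:

```python
import sympy as sp

x = sp.symbols("x")

# (sin(x), sin(2x)) on [0, pi] vanishes, so the two functions are orthogonal
print(sp.integrate(sp.sin(x) * sp.sin(2 * x), (x, 0, sp.pi)))  # 0

# whereas (sin(x), sin(x)) does not vanish
print(sp.integrate(sp.sin(x) * sp.sin(x), (x, 0, sp.pi)))      # pi/2
```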