Variational Calculus: Deriving the strong and weak form

Most physical processes happening in our environment can be expressed in terms of (partial) differential equations. The real-world problem is converted into a suitable mathematical differential-equation statement, which is solved to arrive at a mathematical solution, which in turn is physically interpreted to give a real-world solution. This holds for simple phenomena such as heat transfer, fluid flow, population growth and compound interest, as well as for complex ones such as pandemic spread and nuclear reactions. As Civil/Mechanical Engineers, we are concerned with problems in the domain of continuum mechanics, such as spring systems and linear elastic bodies.

The differential equation thus developed for a physical system is called the Governing Differential Equation (GDE). To solve such a GDE, we may need some extra details about the system, known as the boundary conditions or initial conditions of the system. The combination of the GDE and the boundary (and/or initial) conditions constitutes the ‘strong form’ of the system. This means that the underlying statements apply to every single point inside the system, and solving the strong form gives an exact solution of the system. As an example, the strong form of a 1-D bar element is given below.

But solving a strong form is not realistically possible all the time. If the system under consideration has inherent complexities in the form of geometrical arbitrariness, loading pattern or boundary conditions, an exact solution of the strong form may not be attainable. In such cases, we resort to numerical methods, which give an ‘approximate solution’ of the system. The Finite Element Method, the Finite Difference Method, etc. are examples of such approximate numerical methods. To work around the difficulties of the strong form, we develop what is known as the ‘weak form’ of the system. A weak form is basically another interpretation of the GDE and boundary conditions. But the weak form doesn’t necessarily hold true for every point in the domain, and the solution of a weak form needs weaker mathematical requirements (less smoothness) than that of the strong form. In particular, the order of the highest derivative in the weak form will be half that of the strong form, and hence a solution is easier to obtain. So for an approximate but efficient solution of a complex domain, we can adopt a weak form and avoid the difficulties of the strong form.

There are many ways in which we can obtain the weak form and the strong form. We focus on the Variational Principles to derive them.

Variational Principles

Variational methods are nothing but a type of optimization methods. They are used in all fields of mathematics to arrive at the optima (minima) of a certain class of functions called the functionals. Functionals are nothing but functions of functions, i.e the dependent variable(s) in a function is a function itself. We will see examples as we move along. But before going to the variational principle, let us understand the basics of ‘single variable optimization’.

 

Single Variable Optimisation

Consider a function $F(x)$ with an independent variable $x$, plotted as shown above. At a point $x^{*}$ lies the optimum value (local minimum) of the function. It means that in the neighbourhood of $x^{*}$, the value of $F(x)$ will be greater than $F(x^{*})$:

$F(x^{*} + \Delta x) \gt F(x^{*})$

When expanded as a Taylor series about $x^{*}$, this becomes

$\begin{aligned}&F\left(x^{*}+\Delta x\right)=F\left(x^{*}\right)+\left.\Delta x \cdot F^{\prime}(x)\right|_{x=x^{*}}+\ldots \\&F\left(x^{*}+\Delta x\right)-F\left(x^{*}\right) \approx \left.\Delta x \, F^{\prime}(x)\right|_{x=x^{*}}\end{aligned}$

where higher-order terms are neglected for small $\Delta x$.

For any arbitrary $\Delta x$, the LHS must be $\geq 0$. This is possible only if $F'(x^{*}) = 0$; otherwise, you could choose a $\Delta x$ with sign opposite to $F'(x^{*})$ and make the RHS negative. So this is the necessary condition for single-variable optimization.

$F^{\prime}(x)|_{x=x^{*}} = 0$ is the first-order necessary condition.
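This necessary condition is easy to check numerically. A minimal sketch (the function $F(x) = (x-2)^2 + 1$, with its minimum at $x^{*} = 2$, is an assumed example, not taken from the text):

```python
# Assumed example function with a local minimum at x* = 2:
def F(x):
    return (x - 2.0) ** 2 + 1.0

def dF(x, h=1e-6):
    # central-difference approximation of F'(x)
    return (F(x + h) - F(x - h)) / (2.0 * h)

x_star = 2.0
print(abs(dF(x_star)) < 1e-8)          # first-order condition F'(x*) = 0 holds
print(all(F(x_star + dx) > F(x_star)   # F(x* + Δx) > F(x*) in the neighbourhood
          for dx in (0.1, -0.1, 0.01, -0.01)))
```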

We extend this idea to functionals in the variational principle.

Functionals

Consider the same axial bar as shown above. The Potential energy $\Pi$ of the system is given by

$\Pi = W_{int} -W_{ext}$

When expanded for the whole domain, it becomes

$\begin{aligned}\Pi &=\int_{v} \frac{1}{2} \sigma \cdot \varepsilon \, d v-\int_{0}^{L} q(x) \cdot u(x) \, d x \\&=\int_{v} \frac{1}{2} E \varepsilon^{2} \, d v-\int_{0}^{L} q \cdot u \, d x \\&=\frac{1}{2} A E \int_{0}^{L}\left(\frac{d u}{d x}\right)^{2} d x-\int_{0}^{L} q u \, d x \\\Pi &=\frac{A E}{2} \int_{0}^{L}\left(u^{\prime}\right)^{2} d x-\int_{0}^{L} q u \, d x\end{aligned}$

Now $\Pi$ is a function of $u(x)$, which is itself a function of $x$. Such a function of a function is called a functional.

So the above can be rewritten as

$\Pi=\int_{0}^{L} F(u, u', x) \ d x$

We adopt the ‘theorem of minimum potential energy’ to describe the variational principle and corresponding weak form in elasticity problems. According to the theorem, the $u(x)$ that minimizes $\Pi (u(x))$ corresponds to the solution of the weak form of the system. We have to determine the optimum $u(x)$ which gives the optimum $\Pi$. So the independent variable is itself a function. The path that we take to solve such optimization problems is called the variational principle.
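To make the theorem concrete, the functional can be evaluated numerically for competing displacement fields. A hedged sketch, assuming $AE = 1$, $L = 1$, a uniform load $q = 1$, and a bar fixed at $x = 0$ and free at $x = L$ (data chosen for illustration, not from the text):

```python
import numpy as np

# Evaluate Pi(u) = (AE/2) * ∫ (u')^2 dx - ∫ q u dx for two admissible
# fields and check that the equilibrium solution u*(x) = x - x^2/2
# (which solves AE u'' + q = 0 for this assumed data) gives the lower value.

AE, L, q = 1.0, 1.0, 1.0
x = np.linspace(0.0, L, 10001)

def trapz(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def Pi(u):
    du = np.gradient(u, x)                        # numerical u'(x)
    return 0.5 * AE * trapz(du**2, x) - trapz(q * u, x)

u_star = x - 0.5 * x**2   # equilibrium solution for these data
u_trial = 0.5 * x         # a competing admissible field with u(0) = 0

print(round(Pi(u_star), 4))    # ≈ -1/6
print(round(Pi(u_trial), 4))   # ≈ -1/8
```

The minimizer indeed attains the smaller potential energy, as the theorem predicts.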

First Variation

We will now describe how $\Pi$ varies as the function $u(x)$ is varied. An infinitesimal change in a function is called a variation of the function, denoted $\delta u(x)$.

Consider a function $u(x)$ dependent on $x$. Between two points 1 and 2, we want to find the $u(x)$ that minimizes the functional (the potential energy in elasticity). Let this $u(x)$ be called $u^{*}(x)$, as in the figure above. Similar to the perturbation $(\Delta x)$ about the local minimum in single-variable optimization, we can have perturbations/variations here also, but between 1 and 2; that is, they are themselves functions of $x$. The only condition these variations must satisfy is the boundary conditions, i.e. they should start at 1 and end at 2. The figure below represents many such perturbations $\tilde u(x)$.

Let $\delta u(x)$ denote the difference between the perturbed function and the minimizing function at any point $x$, such that

$\delta u(x) = \tilde u(x) - u^*(x)$

For notational simplicity, let $u(x)$ replace $u^*(x)$ in what follows.

Deriving the Strong Form

Let us define the integrals to be optimized

$\begin{aligned}&I=\int_{x_{1}}^{x_{2}} F\left(u, u^{\prime}, x\right) d x \\&\tilde{I}=\int_{x_{1}}^{x_{2}} F\left(\tilde{u}, \tilde{u}^{\prime}, x\right) d x\end{aligned}$

Expanding $F$ as a Taylor series to first order in the variation,

$F\left(\tilde{u}, \tilde{u}^{\prime}, x\right) = F\left(u+\delta u,\, u^{\prime}+\delta u^{\prime},\, x\right) = F\left(u, u^{\prime}, x\right)+\frac{\partial F}{\partial u} \cdot \delta u+\frac{\partial F}{\partial u^{\prime}} \delta u^{\prime}$

$\therefore \tilde I - I = \delta^{1} I=\int_{x_{1}}^{x_{2}}\left(\frac{\partial F}{\partial u} \delta u+\frac{\partial F}{\partial u^{\prime}} \delta u^{\prime}\right) d x$

Integrating the second term by parts and rearranging gives

$\begin{array}{r}\delta^{1} I=\int_{x_{1}}^{x_{2}}\left[\frac{\partial F}{\partial u}-\frac{d}{d x}\left(\frac{\partial F}{\partial u^{\prime}}\right)\right] \delta u \, d x +\left.\frac{\partial F}{\partial u^{\prime}} \delta u\right|_{x_{2}}-\left.\frac{\partial F}{\partial u^{\prime}} \delta u\right|_{x_{1}}\end{array}$
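Here the integration-by-parts step applied to the second term reads

$\int_{x_{1}}^{x_{2}} \frac{\partial F}{\partial u^{\prime}}\, \delta u^{\prime}\, d x=\left.\frac{\partial F}{\partial u^{\prime}}\, \delta u\right|_{x_{1}}^{x_{2}}-\int_{x_{1}}^{x_{2}} \frac{d}{d x}\left(\frac{\partial F}{\partial u^{\prime}}\right) \delta u\, d x$

which produces both the modified integrand and the boundary terms at $x_1$ and $x_2$.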

The same arguments of single-variable optimization are valid here as well. We have the arbitrary choice of $\delta u$ to make; this comes from the perturbation $\tilde u(x)$ that we choose. The requirement is that $\delta^1 I \geq 0$; if that holds, then $u^{*}(x)$ gives the local minimum. With the same sign argument that we used in single-variable optimization, the arbitrariness of $\delta u$ forces $\delta^1 I = 0$. Moreover, since $\delta u$ is arbitrary both in the interior of the domain and at the boundary, every term of the equation must vanish individually.

$\begin{gathered}\frac{\partial F}{\partial u}-\frac{d}{d x}\left(\frac{\partial F}{\partial u^{\prime}}\right)=0 \\\left.\frac{\partial F}{\partial u^{\prime}} \delta u\right|_{x_{1}}=0 \\\left.\frac{\partial F}{\partial u^{\prime}} \delta u\right|_{x_{2}}=0\end{gathered}$

The first equation is called the Euler-Lagrange equation, and applying it to the potential-energy functional gives the strong form of the system. The remaining terms constitute the boundary conditions.

E.g., consider as before $\Pi = \frac{A E}{2} \int_{0}^{L}(u^{\prime})^{2} d x-\int_{0}^{L} q u \, d x$

$\begin{aligned}\frac{\partial F}{\partial u}-\frac{d}{d x}\left[\frac{\partial F}{\partial u^{\prime}}\right]=0 & \Rightarrow-q-\frac{d}{d x}\left[A E \frac{d u}{d x}\right]=0 \\&\Rightarrow A E \frac{d^{2} u}{d x^{2}}+q=0\end{aligned}$
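This strong form can be solved directly. A hedged finite-difference sketch, where the data $AE = 1$, $L = 1$, $q = 1$ and the boundary conditions $u(0) = 0$, $u'(L) = 0$ are assumed for illustration:

```python
import numpy as np

# Solve AE u'' + q = 0 on (0, L) by central finite differences and
# compare against the closed-form solution u(x) = (q/AE)(L x - x^2/2)
# valid for these assumed boundary conditions.

AE, L, q = 1.0, 1.0, 1.0
n = 101
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

A = np.zeros((n, n))
b = np.full(n, -q * h**2 / AE)   # u_{i-1} - 2 u_i + u_{i+1} = -q h^2 / AE
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A[0, 0], b[0] = 1.0, 0.0                        # essential BC: u(0) = 0
A[-1, -1], A[-1, -2], b[-1] = 1.0, -1.0, 0.0    # one-sided u'(L) = 0

u = np.linalg.solve(A, b)
u_exact = (q / AE) * (L * x - 0.5 * x**2)
print(float(np.max(np.abs(u - u_exact))))   # small discretization error
```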

Deriving the Weak Form

The infinitesimal change in a function is called a variation of the function and is denoted by $\delta u(x) \equiv \zeta w(x)$, where $w(x)$ is an arbitrary function and $0 < \zeta \ll 1$, i.e. $\zeta$ is a very small positive number.

The corresponding change in the functional is called the variation in the functional and denoted by

$\delta \Pi= \Pi(u(x) + \zeta w(x)) - \Pi (u(x)) = \Pi(u(x) + \delta u(x)) - \Pi (u(x))$

Let us evaluate the variation in $\Pi_{int}$, i.e. $\delta \Pi_{int}$

$\begin{aligned}\delta \Pi_{\mathrm{int}} &=\frac{1}{2} \int_{\Omega} A E\left(\frac{\mathrm{d} u}{\mathrm{~d} x}+\zeta \frac{\mathrm{d} w}{\mathrm{~d} x}\right)^{2} \mathrm{~d} x-\frac{1}{2} \int_{\Omega} A E\left(\frac{\mathrm{d} u}{\mathrm{~d} x}\right)^{2} \mathrm{~d} x \\&=\frac{1}{2} \int_{\Omega} A E\left(\left(\frac{\mathrm{d} u}{\mathrm{~d} x}\right)^{2}+2 \zeta \frac{\mathrm{d} u}{\mathrm{~d} x} \frac{\mathrm{d} w}{\mathrm{~d} x}+\zeta^{2}\left(\frac{\mathrm{d} w}{\mathrm{~d} x}\right)^{2}\right) \mathrm{d} x-\frac{1}{2} \int_{\Omega} A E\left(\frac{\mathrm{d} u}{\mathrm{~d} x}\right)^{2} \mathrm{~d} x\end{aligned}$

Cancelling the common terms and neglecting the higher-order term in $\zeta^{2}$ gives us

$\delta \Pi_{\mathrm{int}}=\zeta \int_{\Omega} A E\left(\frac{\mathrm{d} w}{\mathrm{~d} x}\right)\left(\frac{\mathrm{d} u}{\mathrm{~d} x}\right) \mathrm{d} x$

Similarly, the variation in $\Pi_{ext}$ is

$\delta \Pi_{e x t}=\int_{\Omega}(u+\zeta w) q \, d x-\int_{\Omega} u q \, d x=\zeta \int_{\Omega} w q \, d x$

where $\Omega$ represents the domain and is equivalent to $0 → L$ in the 1D axial bar case under consideration. (or the path 1→ 2 in the figure above)

The same arguments of single-variable optimization (SVO) are valid here as well. To minimize $\Pi$, the variation of the functional must equal 0 (equivalent to $F'(x) = 0$ in SVO). Here the arbitrary choice is the function $\delta u(x) \equiv \zeta w(x)$ instead of the $\Delta x$ of SVO. That is

$\delta \Pi = \delta \Pi_{int} - \delta \Pi_{ext} = 0 \\ \delta \Pi / \zeta=\int_{\Omega} A E\left(\frac{\mathrm{d} w}{\mathrm{~d} x}\right)\left(\frac{\mathrm{d} u}{\mathrm{~d} x}\right) \mathrm{d} x-\int_{\Omega} w q \mathrm{~d} x = 0$

The above formulation is known as the variational weak form of the axially loaded bar. The derivation above is the generalized procedure for deriving any elasticity weak form using variational principles.

In Finite Element terms, the function $w(x)$ is called the test function and the unknown function $u(x)$ is called the trial function. If you compare the strong form and weak form, you can see that the order of the differential equation has halved from 2 to 1.

$Strong \ Form: AE\frac{d^2 u}{dx^2} + q(x) = 0 \\ Weak \ Form: \int_{\Omega} A E\left(\frac{\mathrm{d} w}{\mathrm{~d} x}\right)\left(\frac{\mathrm{d} u}{\mathrm{~d} x}\right) \mathrm{d} x-\int_{\Omega} w q \mathrm{~d} x = 0$
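The weak form above is exactly what a finite element code discretizes. A minimal Galerkin sketch with linear elements, under the same assumed data as before ($AE = 1$, $L = 1$, $q = 1$, fixed at $x = 0$, free at $x = L$):

```python
import numpy as np

# Linear "hat" functions serve as both trial functions u(x) and test
# functions w(x), turning the weak form
#   ∫ AE w' u' dx - ∫ w q dx = 0   into the linear system   K u = f.

AE, L, q = 1.0, 1.0, 1.0
n_el = 10                  # number of linear elements (arbitrary choice)
n = n_el + 1
x = np.linspace(0.0, L, n)
h = L / n_el

K = np.zeros((n, n))
f = np.zeros(n)
ke = (AE / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness ∫ AE Ni' Nj' dx
fe = (q * h / 2.0) * np.array([1.0, 1.0])             # element load ∫ Ni q dx
for e in range(n_el):
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    f[idx] += fe

K[0, :], f[0] = 0.0, 0.0   # enforce the essential BC u(0) = 0
K[0, 0] = 1.0

u = np.linalg.solve(K, f)
u_exact = (q / AE) * (L * x - 0.5 * x**2)
print(float(np.max(np.abs(u - u_exact))))
```

For this constant-coefficient 1D problem with uniform load, the linear-element Galerkin solution reproduces the exact solution at the nodes, so the printed error is essentially machine precision.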

Another way of obtaining the weak form is to multiply the strong form by the arbitrary function $w(x)$ and perform integration by parts over the higher-order terms to bring down the order. This is the Galerkin method.

A generalized weak form of elasticity, applicable to 2D or 3D domains, may be derived in the same way:

$\int_{\Omega} \boldsymbol{\sigma}(\boldsymbol{u}): \boldsymbol{\varepsilon}(\boldsymbol{w}) d \Omega - \int_{\Omega} \boldsymbol{q} \cdot \boldsymbol{w} d \Omega = 0$

where $\sigma, \varepsilon$ are the stress and strain tensors respectively.
