Monday, May 13, 2024

Hodge * Operator

 Basics of wedge products
We know that every vector space has a dual space consisting of linear functionals. Similarly, a tangent space has a dual. Typically the tangent space of an $n$-dimensional manifold is spanned by $\{ \frac{\partial }{\partial x^i} \}$, $i=1,2,\cdots,n$. Since this is a vector space, there is a dual to it, called the ``cotangent space''. Just as the elements of a vector space dual are maps to the real numbers, the cotangent space consists of the differentials $dx_i$, which are linear maps to the reals. These are called ``covectors''.

If you consider the Riemann integral $\int f\, dx$, the $dx$ there is the limit of $\delta x$, a small strip in the $x$-$y$ plane. This picture can guide intuition about the $dx_i$.
 

The differentials $dx_i$ come with an antisymmetric product called the ``wedge'' product, given by
\begin{equation}
dx \wedge dy = - dy \wedge dx
\end{equation}

In ordinary integration $dx\,dy$ is the area element and $dx\,dy\,dz$ the volume element. Similarly, the wedge product gives an oriented area or, more generally, an oriented volume.

Wedge products of covectors, like covectors themselves, produce real numbers when applied to tangent vectors. Setting $dy = dx$ in the antisymmetry relation, $dx \wedge dx = - dx \wedge dx$ implies $dx \wedge dx=0$.
For a function $f(x,y,z)$ on $\mathcal{R}^3$, the total differential is
\begin{equation}
  df = \frac{\partial{f}}{\partial{x}}dx + \frac{\partial{f}}{\partial{y}}dy + \frac{\partial{f}}{\partial{z}}dz
\end{equation}
Here $dx, dy, dz$ are covectors (duals). An expression such as $df$ above, in which each term involves a single differential $dx, dy, dz$, is called a $1$-form. Wedging two $1$-forms gives a $2$-form (e.g.\ $dx\wedge dy$), and so on; $0$-forms are simply functions.

The space of $k$-forms is denoted by $\Lambda^k_n(M)$, where $M$ is the $n$-dimensional manifold or space under consideration.

From the above equation it is evident that the $d$ operator acts on functions, that is $0$-forms, and produces $1$-forms. More generally, $d:\Lambda^k_n(M) \rightarrow \Lambda^{k+1}_{n}(M)$, which means that if we apply $d$ to a $k$-form we get a $(k+1)$-form.

From equation (2) we can also infer that the dual space has basis $dx,dy,dz$, so it is $3$-dimensional. Similarly, for $n=3$, the space of $2$-forms is also $3$-dimensional. In general, the dimension of $\Lambda^k_n(M)$ as a vector space is $^nC_k$. Since $^nC_k={}^nC_{n-k}$, the spaces of $k$-forms and $(n-k)$-forms have the same dimension.
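As a quick check of this dimension count, here is a tiny Python snippet (not part of the original post) using the binomial coefficient:

```python
# Dimensions of Lambda^k for n = 3 are C(3, k): 1, 3, 3, 1 for k = 0, 1, 2, 3,
# illustrating dim Lambda^k = dim Lambda^(n-k).
from math import comb

n = 3
print([comb(n, k) for k in range(n + 1)])   # [1, 3, 3, 1]
```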

Hodge *
Let $M$ be an $n$-dimensional $C^\infty$ manifold. For a given integer $0 \leq k \leq n$, $\Lambda^k T^*_pM$ and $\Lambda^{n-k}T^{*}_pM$ have the same dimension as vector spaces and they are isomorphic. Here $T^*M$ is the dual of the tangent space $TM$. If $M$ is oriented and has a Riemannian metric, then for each point $p \in M$ there is a natural isomorphism
\begin{equation}
  *:\Lambda^k T^*_pM \rightarrow \Lambda^{n-k}T^{*}_pM
\end{equation}
By varying $p$ we get a linear isomorphism
\begin{equation}
  *:\mathcal{A}^k(M) \rightarrow \mathcal{A}^{n-k}(M)
\end{equation}
This operator is called the ``Hodge'' star operator, where $\mathcal{A}^k(M)$ denotes the vector space of $k$-forms on $M$.


A bit of explanation of the above definition. A $C^\infty$ manifold is a smooth manifold.
A Riemannian metric is a positive definite inner product defined on the tangent space at each point of the manifold, $g_p: T_pM \times T_p M \rightarrow \mathcal{R}$, in such a way that $g_p$ varies smoothly ($C^\infty$) with $p$. A very standard way to express this is $ds^2 = \sum_{i,j=1}^n g_{ij}dx_i dx_j$.
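For example, the Euclidean metric on $\mathcal{R}^3$ has $g_{ij} = \delta_{ij}$, so
\[
  ds^2 = dx^2 + dy^2 + dz^2.
\]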

Going back to Hodge star, if we write $V$ for $T_p M$, and use $V^*$ for $T_p^* M$, we get,
\begin{equation}
  *: \Lambda^k V^* \rightarrow \Lambda^{n-k} V^*
\end{equation}

This linear map can be defined, for a positively oriented orthonormal basis $\theta_1, \ldots, \theta_n$ of $V^*$, by setting
\begin{equation}
  *(\theta_1 \wedge \theta_2 \wedge \cdots \wedge \theta_k) = \theta_{k+1}\wedge \cdots \wedge \theta_n
\end{equation}
In particular
\begin{equation}
  *1 = \theta_1 \wedge \theta_2 \wedge \cdots \wedge \theta_n, \qquad *(\theta_1 \wedge \theta_2 \wedge \cdots \wedge \theta_n) = 1
\end{equation}

Let $\omega$ be the one-form corresponding to a vector field $X \in \Xi(M)$. For example, if $X = \text{grad}\, f$ then $\omega = f_x dx + f_y dy + f_z dz$, where $f_x = \frac{\partial f}{\partial x}$ and so on. Then,
\begin{equation}
  div X = *d*\omega
\end{equation}

For the $\omega$ in the example above, we can work this out as follows:
\begin{align*}
  *\omega &= *(f_x dx + f_y dy + f_z dz) = f_x\,{*dx}+f_y\,{*dy}+f_z\,{*dz} \\
   &= f_x\, dy \wedge dz + f_y\, dz \wedge dx + f_z\, dx \wedge dy \\
  d{*\omega} &= (f_{xx} + f_{yy} + f_{zz})\, dx \wedge dy \wedge dz \\
  *d{*\omega} &= f_{xx}+f_{yy}+f_{zz} = \frac{\partial^2f}{\partial x^2}+ \frac{\partial^2f}{\partial y^2}+ \frac{\partial^2f}{\partial z^2} = \text{div}\, X
\end{align*}
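As a sanity check of this computation (a small sympy sketch, not part of the original post; the sample function $f$ is an arbitrary choice), we can compare the coefficient of $d{*\omega}$ with the Laplacian of $f$:

```python
# Verify *d*omega = f_xx + f_yy + f_zz for omega = df on R^3, using sympy.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.exp(x) * sp.sin(y) * z**2                 # arbitrary smooth sample function

# omega = df has components (f_x, f_y, f_z)
fx, fy, fz = [sp.diff(f, v) for v in (x, y, z)]

# *omega = f_x dy^dz + f_y dz^dx + f_z dx^dy, so
# d(*omega) = (d f_x/dx + d f_y/dy + d f_z/dz) dx^dy^dz and *d*omega is that coefficient
star_d_star = sp.diff(fx, x) + sp.diff(fy, y) + sp.diff(fz, z)

laplacian = sum(sp.diff(f, v, 2) for v in (x, y, z))
print(sp.simplify(star_d_star - laplacian))      # expected: 0
```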


Monday, May 6, 2024

Chain complex

 A cochain complex $\mathcal{C}$ is a collection of vector spaces $\{C^k\}_{k\in\mathcal{Z}}$ together with a sequence of linear maps $d_k:C^k \rightarrow C^{k+1}$,
\begin{equation}
  \cdots \rightarrow C^{-1} \xrightarrow{d_{-1}} C^{0} \xrightarrow{d_{0}} C^1 \xrightarrow{d_{1}} C^2\xrightarrow{d_{2}}\cdots
\end{equation}
with
\begin{equation}
  d_k \circ d_{k-1} = 0
\end{equation}
The maps $\{d_k\}$ are known as the ``differentials'' of the cochain complex.
One relevant example of a cochain complex is the collection $\Omega^{*}(M)$ of differential forms on a manifold $M$ together with the exterior derivative.
\begin{equation}
  \cdots \rightarrow \Omega^{-1}(M) \xrightarrow{d_{-1}} \Omega^{0}(M) \xrightarrow{d_{0}} \Omega^1(M) \xrightarrow{d_{1}} \Omega^2(M)\xrightarrow{d_{2}}\cdots,\;d\circ d = 0
\end{equation}
The above cochain complex is known as the de Rham complex.
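As a small illustration of $d \circ d = 0$ (a sympy sketch, not part of the original post): for a $0$-form $f$ on $\mathcal{R}^3$, each component of $d(df)$ is a difference of mixed partial derivatives, which vanishes.

```python
# The dx_i ^ dx_j component of d(df) is d_i d_j f - d_j d_i f, which is zero
# by equality of mixed partials; check it for an arbitrary sample f.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.exp(x * y) * sp.cos(z) + x**3 * z

coords = (x, y, z)
components = [sp.diff(f, a, b) - sp.diff(f, b, a)
              for i, a in enumerate(coords) for b in coords[i + 1:]]
print(components)   # [0, 0, 0]
```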

Thursday, May 2, 2024

Mixed formulation of Laplacian



The following deals with the mixed formulation of the Dirichlet problem for the Poisson equation. To get started, we cast the Poisson equation as a first-order system.

Recall that the original (one-dimensional) Poisson equation is
\begin{equation}
  -u^{''}(x) = f(x)
\end{equation}
Let's set $\sigma = -\text{grad}\; u$. The gradient of a scalar function is a vector field; written out in full,
\begin{equation}
  \nabla u = (\frac{\partial u}{\partial x},\frac{\partial u}{\partial y}, \frac{\partial u}{\partial z} )
\end{equation}
Clearly $\text{grad}\; u$ is a vector field, and for our case we simply set $\sigma= -\text{grad}\; u$.
To express the original equation $-u^{''}(x) = f(x)$ in terms of $\sigma$, we use another operator, div.
Let's write down what div is.
The divergence of a vector field, say $F=(F_x,F_y,F_z)$, is a scalar function that represents the net rate of outward flux per unit volume at each point in the field. It gives a measure of how much the vector field is spreading out or compressing at a given point. The divergence is calculated as follows:
\begin{equation}
  \text{div} F = \frac{\partial F_x}{\partial x}+ \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}
\end{equation}
It is clear from the above definition that if we take $\text{div}\; \text{grad}\; u$, we get
\begin{equation}
  \text{ div } \text{grad}\; u = \frac{\partial^2 u}{\partial x^2}+ \frac{\partial^2 u}{\partial y^2}+ \frac{\partial^2 u}{\partial z^2}
\end{equation}
which for our one-dimensional equation is just $u^{''}$.
Hence, the original equation becomes
\begin{equation}
  -\text{div}\; \text{grad}\; u(x) = f(x)
\end{equation}
This in turn is written as the first-order system

\begin{equation}
  \sigma = - \text{grad}\; u, \text{div}\; \sigma = f
\end{equation}
The pair $(\sigma,u)$ can be characterized as the unique critical point of the functional
\begin{equation}
  I(\sigma,u) = \int_\Omega \left(\frac{1}{2} \sigma\cdot\sigma - u\, \text{div}\, \sigma\right)dx + \int_\Omega fu\, dx
\end{equation}
over $H(\text{div};\Omega) \times L^2(\Omega)$, where $H(\text{div};\Omega) = \{\sigma \in L^2(\Omega); \text{div}\, \sigma \in L^2(\Omega)\}$.
Equivalently, one can solve the weak problem: find $(\sigma,u) \in H(\text{div};\Omega) \times L^2(\Omega)$ such that
\begin{equation}
  \int_{\Omega}\tau\cdot\sigma\, dx - \int_{\Omega}u\, \text{div}\, \tau\, dx = 0, \quad \tau \in H(\text{div};\Omega)
\end{equation}
\begin{equation}
  \int_{\Omega} \text{div}\, \sigma\, v\, dx = \int_{\Omega} f v\, dx, \quad v \in L^2(\Omega)
\end{equation}
This fits into the abstract framework if we define $V=H(\text{div};\Omega) \times L^2(\Omega)$ and
\begin{equation}
  B(\sigma,u;\tau,v) = \int_{\Omega}\sigma\cdot\tau\, dx - \int_{\Omega}u\, \text{div}\, \tau\, dx + \int_{\Omega}\text{div}\, \sigma\, v\, dx, \quad F(\tau,v) = \int_{\Omega}fv\,dx
\end{equation}


In this case the bilinear form $B$ is not coercive, and so the choice of subspaces and the analysis are not as simple
as for the standard finite element method for Poisson's equation. Finite element discretizations based on such saddle point variational principles are called mixed finite element methods. Thus a mixed finite element method for Poisson's equation is obtained by choosing subspaces $\Sigma_h \subset H(\text{div};\Omega)$ and $V_h \subset L^2(\Omega)$ and seeking a critical point of $I$ over $\Sigma_h \times V_h$. The resulting Galerkin method has the form: Find $\sigma_h \in \Sigma_h,u_h \in V_h$ satisfying
\begin{equation}
      \int_{\Omega} \sigma_h \cdot \tau \, dx - \int_{\Omega} u_h \, \text{div} \tau \, dx = 0, \quad \forall \tau \in \Sigma_h,
    \int_{\Omega} \text{div} \sigma_h v \, dx = \int_{\Omega} fv \, dx, \quad \forall v \in V_h.
  \end{equation}
Since the bilinear form is not coercive, it is not automatic that the linear system is nonsingular.
Consider the case $f=0$. Choosing $\tau = \sigma_h$ and $v = u_h$ and adding the two discretized variational equations, the terms involving $u_h$ cancel and we are left with $\int_{\Omega} \sigma_h \cdot \sigma_h \, dx = 0$, so $\sigma_h = 0$.
The first equation then only says that $\int_{\Omega} u_h \, \text{div}\, \tau \, dx = 0$ for all $\tau \in \Sigma_h$, which does not force $u_h = 0$ unless the subspaces $\Sigma_h$ and $V_h$ are chosen compatibly. So $u_h$ need not vanish, and nonsingularity of the discrete system depends on the choice of spaces.
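To make this concrete, here is a minimal numerical sketch (not from the original text) of the mixed method in one dimension on $(0,1)$, taking $\Sigma_h$ to be continuous piecewise linears and $V_h$ piecewise constants, which is a stable pairing in 1D. The right-hand side $f(x) = \pi^2\sin(\pi x)$ is an assumed manufactured choice so that the exact solution $u=\sin(\pi x)$ is available for comparison.

```python
# Mixed finite elements for -u'' = f on (0,1), u(0)=u(1)=0:
# sigma_h in continuous P1 (hat functions), u_h in piecewise constants P0.
import numpy as np

n = 64                               # number of elements
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)

# Mass matrix A_ij = int phi_i phi_j dx for P1 hat functions (assembled element by element).
A = np.zeros((n + 1, n + 1))
for k in range(n):
    A[k:k + 2, k:k + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])

# B_ki = int_{element k} phi_i' dx  (couples div sigma with the P0 test functions).
B = np.zeros((n, n + 1))
for k in range(n):
    B[k, k], B[k, k + 1] = -1.0, 1.0

# Load vector F_k = int_{element k} f dx, approximated by midpoint quadrature.
f = lambda t: np.pi**2 * np.sin(np.pi * t)
mid = 0.5 * (nodes[:-1] + nodes[1:])
F = h * f(mid)

# Saddle-point system  [A  -B^T; B  0] [sigma; u] = [0; F]
K = np.block([[A, -B.T], [B, np.zeros((n, n))]])
rhs = np.concatenate([np.zeros(n + 1), F])
sol = np.linalg.solve(K, rhs)
sigma_h, u_h = sol[:n + 1], sol[n + 1:]

# Compare u_h with the exact u = sin(pi x) at element midpoints; the error shrinks as h -> 0.
print("max |u_h - u| at midpoints:", np.max(np.abs(u_h - np.sin(np.pi * mid))))
```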



Tuesday, April 30, 2024

Galerkin method in abstract settings

 
This section deals with analyzing errors in the finite element method. To determine when the Galerkin method will produce a good approximation, an abstraction is introduced.
Let $B:V \times V \rightarrow \mathcal{R}$ be a bounded bilinear form and let $F:V\rightarrow \mathcal{R}$ be a bounded linear form.
It is assumed that the problem to be solved can be stated as, find $u \in V$ such that
\begin{equation}
  B(u,v)=F(v), v \in V
\end{equation}
  Example
To make sense of the above abstraction, it is best to see how it arises from the weak form of a PDE, as shown below.
Consider Poisson's equation given by:
\[
- \Delta u = f \quad \text{in } \Omega, \quad u = 0 \quad \text{on } \partial \Omega
\]
where \(\Omega\) is a bounded domain, \(f\) is a given function, and \(u\) is the function to be determined.


The weak formulation involves multiplying the differential equation by a test function \( v \) from the space \( H_0^1(\Omega) \), integrating over the domain \(\Omega\), and applying integration by parts:
\[
\int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega f v \, dx
\]
Here, \( B(u, v) = \int_\Omega \nabla u \cdot \nabla v \, dx \) and \( F(v) = \int_\Omega f v \, dx \) define the bounded bilinear and linear forms respectively.

Well posed problem
The problem in the abstract setting is considered well-posed if, for each $F \in V^*$, there exists a unique solution $u \in V$ and the mapping $F \mapsto u$ is bounded. Here $V^*$ means the dual space of $V$. Equivalently, the problem is well-posed if $L:V \rightarrow V^*$ given by $\langle Lu,v\rangle=B(u,v)$ is an isomorphism.

Example
For the Dirichlet problem for Poisson's equation, we have
\begin{align*}
  V &= \mathring{H}^1(\Omega) \\
  B(u,v) &= \int_{\Omega} \text{grad}\, u(x) \cdot \text{grad}\, v(x)\, dx \\
  F(v) &= \int_{\Omega} f(x) v(x)\, dx
\end{align*}
 A generalized Galerkin method for the abstract problem begins with a finite-dimensional normed vector space $V_h$, a bilinear form $B_h:V_h \times V_h \rightarrow \mathcal{R}$ and a linear form $F_h:V_h \rightarrow \mathcal{R}$, and defines $u_h \in V_h$ by
 \begin{equation}
   B_h(u_h,v) = F_h(v), v \in V_h
 \end{equation}
 The above equation can be written in the form $L_h u_h = F_h$, where $L_h:V_h\rightarrow V_h^*$ is given by $\langle L_hu,v\rangle=B_h(u,v)$.
 If the finite-dimensional problem is nonsingular, then the norm of the discrete solution operator, known as the ``stability constant'', is
 \begin{equation}
   \parallel L_h^{-1} \parallel
 \end{equation}

 In this approximation of the original problem determined by $V,B,F$ by $V_h,B_h,F_h$, the intention is that $V_h$ in some sense approximates $V$ and that $B_h,F_h$ approximate $B,F$. This is the idea behind ``consistency''.
 The goal is for $u_h$ to approximate $u$; this is known as ``convergence''.

 To this end, assume there is a restriction operator $\pi_h:V \rightarrow V_h$ so that $\pi_hu$ is close to $u$.
 Using the equation $L_h u_h = F_h$, we can define the ``consistency error'' as
 \begin{equation}
   L_h\pi_h u - F_h
 \end{equation}
 The error we wish to control is
 \begin{equation}
   \pi_h u - u_h
 \end{equation}
 It is easy to see the relation between the error and the consistency error:
 \begin{equation}
   \pi_h u - u_h = L_h^{-1}(L_h\pi_h u - F_h)
 \end{equation}
 Recall that the norm of $L_h^{-1}$ is the stability constant. If we take norms on both sides of the above equation, we see that the norm of the error is bounded by the product of the stability constant and the norm of the consistency error:
 \begin{equation}
   \parallel  \pi_h u - u_h  \parallel \leq \parallel L_h^{-1} \parallel \parallel L_h\pi_h u - F_h \parallel
 \end{equation}
Expressing this in terms of the bilinear forms, the consistency error becomes
\begin{equation}
  \parallel L_h\pi_hu - F_h \parallel = \sup_{0 \neq v \in V_h}\frac{B_h(\pi_hu,v) - F_h(v)}{\parallel v \parallel}
\end{equation}
The above equation, especially the right-hand side, needs some explanation.
The consistency error measures how close $B_h(\pi_hu,v)$ is to $F_h(v)$ for test functions $v$ in the subspace $V_h$. Here $F_h$ represents the discrete analogue of the forcing functional. The supremum means we take the worst (largest) consistency error relative to the norm of the test function $v$, over all possible nonzero test functions in $V_h$.
The finite-dimensional problem is nonsingular if and only if
\begin{equation}
  \gamma_h = \inf_{0\neq u \in V_h} \sup_{0 \neq v \in V_h} \frac{B_h(u,v)}{\parallel u \parallel \parallel v \parallel} > 0,
\end{equation}
and the stability constant is then given by $\gamma^{-1}_h$.
Notes:

The quantity $\gamma_h$ above measures the smallest ratio of the bilinear form $B_h(u,v)$ (after maximizing over $v$) to the product of the norms of $u$ and $v$, over all nonzero functions $u,v$ in the subspace $V_h$. The condition $\gamma_h > 0$ is an inf-sup (stability) condition: it is weaker than coercivity over $V_h$, but it still bounds $B_h$ away from degeneracy. Since $B_h$ is represented by a matrix with respect to a basis of $V_h$, the condition says that this matrix is nonsingular. For numerical methods, this means the discrete problem is well-posed and its solution depends continuously on the data.
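As an illustration (a small numpy sketch, not from the original text): if we represent $B_h$ by its matrix in a basis of $V_h$ and, as a simplifying assumption, measure $u$ and $v$ by the Euclidean norms of their coefficient vectors rather than the true $V_h$ norm, then $\gamma_h$ is just the smallest singular value of that matrix.

```python
# gamma_h = inf_u sup_v u^T M v / (|u||v|) equals the smallest singular value of M
# when |.| is the Euclidean norm of the coefficient vectors.
import numpy as np

n = 20
h = 1.0 / (n + 1)
# Matrix of B_h(u, v) = int u'v' dx for hat functions on (0,1): tridiag(-1, 2, -1)/h
M = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

gamma_h = np.linalg.svd(M, compute_uv=False).min()
print("gamma_h (Euclidean norms):", gamma_h)   # positive, so the discrete system is nonsingular
```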


Monday, April 29, 2024

PDE- Galerkin method and example

 
The Galerkin method is a numerical technique that transforms boundary value problems, particularly partial differential equations (PDEs), into a system of linear algebraic equations by projecting the problem onto a finite-dimensional subspace.
Steps of the Galerkin Method


Problem Setup:

Start with the weak form of the differential equation, integrated against a test function.

Choice of Basis Functions:

Select basis functions \( \{\phi_i\}_{i=1}^n \) that satisfy the boundary conditions.

Approximation of the Solution:

Assume \( u(x) \approx u_n(x) = \sum_{i=1}^n c_i \phi_i(x) \).

Galerkin Projection:

Ensure the residual is orthogonal to the span of the basis functions:
  \[
  \int \text{Residual} \cdot \phi_j \, dx = 0 \quad \text{for all } j.
  \]

Example: One-Dimensional Poisson Equation

Consider the problem
\[
-u''(x) = f(x), \quad u(0) = u(1) = 0.
\]
on the interval \([0,1]\).
Step 1: Weak Form

Multiply by a test function \( v(x) \) and integrate by parts to get:
\[
\int_0^1 u'(x) v'(x) \, dx = \int_0^1 f(x) v(x) \, dx.
\]
Step 2: Discretization

Choose linear basis functions and assume:
\[
u(x) \approx u_n(x) = \sum_{i=1}^n c_i \phi_i(x).
\]
Step 3: Galerkin Projection

Insert the approximation into the weak form using basis functions as test functions:
\[
\sum_{i=1}^n c_i \int_0^1 \phi_i'(x) \phi_j'(x) \, dx = \int_0^1 f(x) \phi_j(x) \, dx \quad \text{for all } j.
\]
Step 4: Matrix Formulation

Define the stiffness matrix \( \mathbf{A} \) and load vector \( \mathbf{b} \) as follows:
\[
A_{ij} = \int_0^1 \phi_i'(x) \phi_j'(x) \, dx, \quad b_j = \int_0^1 f(x) \phi_j(x) \, dx.
\]
This leads to the linear system:
\[
\mathbf{Ac} = \mathbf{b},
\]
where \( \mathbf{c} \) is the vector of coefficients.
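Here is a minimal numpy sketch of these steps (not from the original post) on \([0,1]\) with hat basis functions; the choice \(f(x) = \pi^2 \sin(\pi x)\) is an assumption made so that the exact solution \(u = \sin(\pi x)\) is known for comparison.

```python
# Galerkin/FEM for -u'' = f on (0,1), u(0) = u(1) = 0, with linear hat basis functions.
import numpy as np

n = 50                               # number of interior nodes
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)       # interior nodes

# Stiffness matrix A_ij = int phi_i' phi_j' dx for hat functions: tridiag(-1, 2, -1)/h
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

# Load vector b_j = int f phi_j dx, approximated by nodal (one-point) quadrature: h * f(x_j)
f = lambda t: np.pi**2 * np.sin(np.pi * t)
b = h * f(x)

c = np.linalg.solve(A, b)            # coefficients of u_n = sum_i c_i phi_i
print("max nodal error:", np.max(np.abs(c - np.sin(np.pi * x))))   # shrinks as h -> 0
```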
Example: Very simple and concrete
Take a differential equation
\[
  \frac{d^2y}{dx^2} + x+ y = 0 , 0 \le x \le 1
\]
with boundary condition \(y(0) = y(1) = 0 \)

Example:

Step 1
Take a trial solution \(y(x) = a_0+a_1x+a_2 x^2\). Always take the number of constants one greater than the highest order of the derivative. Here we have a second-order derivative, so we take \( 2+1=3\) constants \(a_0,a_1,a_2\).
Apply the boundary condition \( x=0, y=0 \); this leads to \( a_0 =0 \). Then applying the boundary condition \(x = 1, y = 0\) leads to \(a_1 = -a_2 \). The trial function therefore becomes \( y(x) = a_2(x^2-x) \). This is a one-parameter trial function, with parameter \( a_2 \).

Step 2
Compute the weighting function
\[
  W(x) = \frac{\partial{y}}{\partial{a_2}} = x^2 - x
\]

Step 3
Compute the domain residual: simply substitute the trial function into the original differential equation.
\begin{align*}
  R_d &= \frac{d^2y}{dx^2} + x + y \\
  R_d(x) &= \frac{d^2}{dx^2}\left[a_2(x^2-x)\right] + x + a_2(x^2-x) = 2a_2 + x + a_2(x^2-x)
\end{align*}
 Step 4
Minimization of domain residual:
\[
  \int_0^1 W(x) R_d(x) dx = 0
\]
Carrying out the integration and solving gives \( a_2 = -\frac{5}{18} \).
Then the solution is \( y(x) = -\frac{5}{18}(x^2 - x) = \frac{5}{18}(x - x^2) \).
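A quick symbolic check of this one-parameter computation (a sympy sketch, not part of the original post; the comparison point \(x=1/2\) and the exact solution \( \sin x/\sin 1 - x \) are added here for reference):

```python
# One-parameter Galerkin solution of y'' + x + y = 0, y(0) = y(1) = 0.
import sympy as sp

x, a2 = sp.symbols('x a2')
y = a2 * (x**2 - x)                      # trial function satisfying the boundary conditions
W = sp.diff(y, a2)                       # weighting function x^2 - x
R = sp.diff(y, x, 2) + x + y             # domain residual
sol = sp.solve(sp.integrate(W * R, (x, 0, 1)), a2)
print(sol)                               # [-5/18]

# Compare with the exact solution y = sin(x)/sin(1) - x at x = 1/2
y_gal = y.subs(a2, sol[0])
exact = sp.sin(x) / sp.sin(1) - x
print(float(y_gal.subs(x, sp.Rational(1, 2))), float(exact.subs(x, sp.Rational(1, 2))))
```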
Other formulations
For the differential equation \( -u^{''} = f \), we can write this as a first-order system by setting
\[
  \sigma = -u' , \sigma' = f
\]
The pair \( (\sigma,u) \) can be characterized variationally as the unique critical point of the functional
\[
  I(\sigma,u) = \int_{-1}^1 (\frac{1}{2}\sigma^2 - u \sigma') dx +  \int_{-1}^1 fu dx
\]
over \( H^1(-1,1) \times L^2(-1,1) \)
Note the relation to the energy functional \(J(u)\): at the solution, where \(\sigma = -u'\) and \(u(-1)=u(1)=0\), integration by parts gives \(-\int_{-1}^1 u\sigma'\,dx = \int_{-1}^1 u'\sigma\, dx = -\int_{-1}^1 |u'(x)|^2 dx\), and therefore
\begin{align*}
  I(\sigma,u) &= \int_{-1}^1 \left(\frac{1}{2}\sigma^2 - u \sigma'\right) dx +  \int_{-1}^1 fu\, dx \\
       &=  \frac{1}{2}\int_{-1}^1 |u'(x)|^2 dx - \int_{-1}^1 |u'(x)|^2 dx + \int_{-1}^1 f(x) u(x)\, dx \\
       &= -\left(\frac{1}{2}\int_{-1}^1 |u'(x)|^2 dx - \int_{-1}^1 f(x) u(x)\, dx\right) \\
       &= -J(u)
\end{align*}
So the same \(u\) is singled out by both functionals: \(J\) attains a minimum, while \(I\) has a saddle point.
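A quick symbolic check of this relation (a sympy sketch, not part of the original post), using the assumed sample \(u = \sin(\pi x)\), which vanishes at \(x = \pm 1\):

```python
# Check I(-u', u) = -J(u) for a sample u vanishing at the endpoints.
import sympy as sp

x = sp.symbols('x')
u = sp.sin(sp.pi * x)                  # u(-1) = u(1) = 0
f = -sp.diff(u, x, 2)                  # so that -u'' = f
sigma = -sp.diff(u, x)

J_u = (sp.Rational(1, 2) * sp.integrate(sp.diff(u, x)**2, (x, -1, 1))
       - sp.integrate(f * u, (x, -1, 1)))
I_su = (sp.integrate(sp.Rational(1, 2) * sigma**2 - u * sp.diff(sigma, x), (x, -1, 1))
        + sp.integrate(f * u, (x, -1, 1)))
print(sp.simplify(I_su + J_u))         # expected: 0
```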

Hilbert space properties for PDEs

 In the context of PDEs the following properties are important for the Hilbert spaces.

Notation: Let $V$ be a Hilbert space and let $a(\cdot,\cdot):V\times V \rightarrow \mathcal{R}$ be a bilinear form.
Property 1:
For PDE solutions, we need this bilinear form to be bounded.
\begin{equation}
  |a(u,v)| \le M\,||u||\, ||v|| \quad \text{for some constant } M>0
\end{equation}
Property 2:
This is called \(V\)-ellipticity. The concept of $V$-ellipticity is crucial in establishing the well-posedness (existence, uniqueness, and stability of solutions) of boundary value problems formulated in a variational framework. It guarantees the uniqueness and stability of solutions to the corresponding variational problems. In essence, if the bilinear form derived from a PDE is $V$-elliptic, then the solution to the variational problem (and hence to the PDE) depends continuously on the data (such as boundary conditions and external forces), ensuring that small changes in the input lead to small changes in the output.
Mathematically, it provides a lower bound for the bilinear form.
\begin{equation}
  a(v,v) \ge \alpha ||v||^2 \quad \text{for some constant } \alpha > 0 \text{ and all } v \in V
\end{equation}

Simple example
The simplest example of a Hilbert space is Euclidean space with the Euclidean inner product. It is not too difficult to verify that this inner product is bilinear, that it is bounded in the sense of Property $1$, and that it has $V$-ellipticity (Property $2$).
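As a quick check (not in the original post), for \(a(u,v) = u \cdot v\) on Euclidean space:
\begin{align*}
  |a(u,v)| = |u\cdot v| &\le ||u||\,||v|| && \text{(Cauchy--Schwarz, so Property 1 holds with } M=1\text{)}\\
  a(v,v) = v\cdot v &= ||v||^2 && \text{(so Property 2 holds with } \alpha=1\text{)}
\end{align*}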

Sunday, April 28, 2024

Weak formulation of boundary value PDE and its meaning

Energy functional
An energy functional is a mapping from a function space (often a Sobolev space) to the real numbers, which assigns a "total energy" value to each function in the space. The energy assigned typically depends on the function and its derivatives, reflecting physical or geometrical properties like potential energy, kinetic energy, or strain energy in various physical contexts.

\begin{equation}
  J(u) = \frac{1}{2} \int_{-1}^1 |u'(x)|^2 dx - \int_{-1}^1f(x)u(x)dx
\end{equation}
over \(\mathring{H^1}(-1,1)\), the Sobolev space of functions that vanish at the boundary points, which is the right setting for the weak formulation.

Find \(u \in \mathring{H^1}(-1,1)\) such that
\begin{equation}
   \int_{-1}^1u'(x)v'(x)\,dx =  \int_{-1}^1 f(x)v(x)\,dx, \quad v \in \mathring{H^1}(-1,1)
\end{equation}

Weak Formulation of a Boundary Value Problem

We consider a boundary value problem where we seek to find a function \( u \) that satisfies the differential equation
\[
-u''(x) = f(x) \quad \text{on} \quad (-1, 1),
\]
with boundary conditions
\[
u(-1) = u(1) = 0.
\]
Multiplying by a Test Function
To derive the weak form, multiply the differential equation by a test function \( v(x) \), which is smooth and vanishes at the boundary points \( x = -1 \) and \( x = 1 \), hence \( v(-1) = v(1) = 0 \). This test function \( v(x) \) belongs to the space \( \mathring{H}^1(-1, 1) \), a subspace of \( H^1(-1, 1) \). We obtain:
\[
-u''(x)v(x) = f(x)v(x).
\]
Integration Over the Domain
Integrate both sides over the interval \( (-1, 1) \):
\[
-\int_{-1}^1 u''(x) v(x) \, dx = \int_{-1}^1 f(x) v(x) \, dx.
\]
Integration by Parts
Use integration by parts on the left-hand side:
\begin{align*}
-\int_{-1}^1 u''(x) v(x) \, dx &= \left[ -u'(x)v(x) \right]_{-1}^1 + \int_{-1}^1 u'(x) v'(x) \, dx \\
&= 0 + \int_{-1}^1 u'(x) v'(x) \, dx,
\end{align*}
where the boundary terms vanish because \( v(-1) = v(1) = 0 \).
Weak Formulation
Thus, we have the weak formulation of the boundary value problem:
\[
\int_{-1}^1 u'(x) v'(x) \, dx = \int_{-1}^1 f(x) v(x) \, dx.
\]
This equation must hold for all test functions \( v(x) \in \mathring{H}^1(-1,1) \).
Meaning of the Weak Formulation
In this form, the differential equation \( -u'' = f \) is translated into an integral equation that does not require the function \( u \) to be twice differentiable. Instead, \( u \) needs only to have its first derivative in \( L^2(-1, 1) \). This allows the inclusion of functions with less smoothness, accommodating more general solutions such as those exhibiting weak derivatives.
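A quick concrete check of this identity (a sympy sketch, not part of the original post; the choices \(u = 1 - x^2\), \(f = 2\), and the single test function \(v = \cos(\pi x/2)\) are assumptions for illustration):

```python
# Verify int u'v' dx = int f v dx for u = 1 - x^2 (so -u'' = 2) and one admissible test function.
import sympy as sp

x = sp.symbols('x')
u = 1 - x**2                          # satisfies u(-1) = u(1) = 0
f = -sp.diff(u, x, 2)                 # f = 2
v = sp.cos(sp.pi * x / 2)             # smooth test function vanishing at x = -1, 1

lhs = sp.integrate(sp.diff(u, x) * sp.diff(v, x), (x, -1, 1))
rhs = sp.integrate(f * v, (x, -1, 1))
print(lhs, rhs, sp.simplify(lhs - rhs))   # both sides equal 8/pi
```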

Saturday, April 27, 2024

Simplest two-point boundary value problem

  

 The previous blog post discussed the Sobolev space definition and an example.

Here we focus on the simplest two-point boundary value problem.

Simplest two-point boundary value problem
\begin{equation}
  -u^{''}(x) = f(x), -1 < x < 1, u(-1)=u(1)=0
\end{equation}
Energy functional
An energy functional is a mapping from a function space (often a Sobolev space) to the real numbers, which assigns a "total energy" value to each function in the space. The energy assigned typically depends on the function and its derivatives, reflecting physical or geometrical properties like potential energy, kinetic energy, or strain energy in various physical contexts.

\begin{equation}
  J(u) = \frac{1}{2} \int_{-1}^1 |u'(x)|^2 dx - \int_{-1}^1f(x)u(x)dx
\end{equation}
over \(\mathring{H^1}(-1,1)\), the Sobolev space of functions that vanish at the boundary points, which is the right setting for the weak formulation.

Find \(u \in \mathring{H^1}(-1,1)\) such that
\begin{equation}
   \int_{-1}^1u'(x)v'(x)\,dx =  \int_{-1}^1 f(x)v(x)\,dx, \quad v \in \mathring{H^1}(-1,1)
\end{equation}

 Example

Imagine a fluid flowing between two long, parallel plates separated by a distance, typically normalized for simplicity. The flow is assumed to be steady (unchanging with time) and laminar (smooth with no turbulence).
The fluid is driven by a pressure gradient or external force such as gravity.
 
 Navier-Stokes Equations: For an incompressible, steady flow with no change in flow direction, the Navier-Stokes equations can simplify significantly. For a flow driven by a pressure gradient in the x-direction, or an external force such as gravity, and assuming no dependence on the y coordinate (2D flow), the equation simplifies to:
 \begin{equation}
   -\mu \frac{d^2u}{dx^2} = \frac{dp}{dx}+f
 \end{equation}
 where \(\mu\) is the viscosity of the fluid, \(\frac{dp}{dx}\) is the pressure gradient, and \(f\) is an additional force such as gravity.

 Assuming a constant pressure gradient and a constant \(f\), we can rewrite the above equation as
 \begin{equation}
   -u^{''}(x) = g(x)
 \end{equation}
 where \(g(x) = \frac{1}{\mu}( \frac{dp}{dx}+f) \).
 Assume the walls are at $x=-1$ and $x=1$ and the flow is zero there, that is, $u(-1)=u(1)=0$.
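For constant \(g\), this boundary value problem can be solved in closed form (a quick check, giving the familiar parabolic velocity profile):
\[
  u(x) = \frac{g}{2}\left(1 - x^2\right), \qquad -u''(x) = g, \quad u(-1)=u(1)=0.
\]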

 

Friday, April 26, 2024

Sobolev Space definition with example

 Sobolev spaces are essential in functional analysis and partial differential equations.

 The space \( H^1(-1,1) \) is a specific Sobolev space defined on the interval \((-1, 1)\) on the real line.

Definition of \( H^1(-1,1) \)

The Sobolev space \( H^1(-1,1) \) consists of functions \( u \) such that both the function itself and its first (weak) derivative are in \( L^2(-1,1) \). Specifically, the space is defined as follows:


  •    A function \( u \) belongs to \( L^2(-1,1) \) if:

  \[
  \int_{-1}^1 |u(x)|^2 \, dx < \infty
  \]

  •   The first weak derivative \( u' \) must also be in \( L^2(-1,1) \), which means:

  \[
  \int_{-1}^1 |u'(x)|^2 \, dx < \infty
  \]


Weak derivatives are used to accommodate functions that might not be differentiable in the classical sense everywhere on the interval, including functions that are continuous and differentiable almost everywhere but may have points of derivative discontinuity.


Norm in \( H^1(-1,1) \)

The norm in the Sobolev space \( H^1(-1,1) \) is defined to incorporate both the function and its derivative, given by:
\[
\|u\|_{H^1} = \left( \int_{-1}^1 |u(x)|^2 + |u'(x)|^2 \, dx \right)^{1/2}
\]

Importance and Applications

Sobolev spaces such as \( H^1(-1,1) \) are crucial in the study of partial differential equations (PDEs), as they frequently arise as the solution spaces for various PDEs. Understanding Sobolev spaces is foundational for advanced studies in fields such as mathematical physics, engineering, and applied mathematics.


Example Function for \( H^1(-1,1) \)

Working with a concrete example clarifies above definitions.

Consider the function \( u(x) = x^2 \) as an example for a function in the Sobolev space \( H^1(-1,1) \). We will verify that both \( u \) and its first derivative \( u' \) belong to \( L^2(-1,1) \).

Verification of \( u \in L^2(-1,1) \)
First, we check if the function \( u(x) = x^2 \) is square integrable over \((-1,1)\):

\[
\int_{-1}^1 |u(x)|^2 \, dx = \int_{-1}^1 x^4 \, dx
\]

Calculate the integral:

\[
\int_{-1}^1 x^4 \, dx = \left[ \frac{x^5}{5} \right]_{-1}^1 = \frac{1}{5} - \left(-\frac{1}{5}\right) = \frac{2}{5}
\]

This result confirms that \( u(x) = x^2 \) is in \( L^2(-1,1) \) as the integral is finite (\( < \infty \)).

Verification of \( u' \in L^2(-1,1) \)
Next, compute the derivative of \( u(x) \) and check its square integrability:

\[ u'(x) = 2x \]

Check if \( u'(x) = 2x \) is square integrable:

\[
\int_{-1}^1 |u'(x)|^2 \, dx = \int_{-1}^1 (2x)^2 \, dx = 4 \int_{-1}^1 x^2 \, dx
\]

Computing the integral:

\[
\int_{-1}^1 x^2 \, dx = \left[ \frac{x^3}{3} \right]_{-1}^1 = \frac{1}{3} - \left(-\frac{1}{3}\right) = \frac{2}{3}
\]

Thus:

\[
4 \int_{-1}^1 x^2 \, dx = 4 \cdot \frac{2}{3} = \frac{8}{3}
\]

Since this integral is also finite, \( u'(x) = 2x \) is in \( L^2(-1,1) \).


Thus, the function \( u(x) = x^2 \) and its derivative \( u'(x) = 2x \) are both in \( L^2(-1,1) \), confirming that \( u(x) = x^2 \) is a valid example of a function in the Sobolev space \( H^1(-1,1) \).
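As a quick check of these computations, and to get the \(H^1\) norm of the example as well (a sympy sketch, not part of the original post):

```python
# Verify the two integrals for u = x^2 and compute its H^1(-1,1) norm.
import sympy as sp

x = sp.symbols('x')
u = x**2

print(sp.integrate(u**2, (x, -1, 1)))                   # 2/5   = ||u||_{L^2}^2
print(sp.integrate(sp.diff(u, x)**2, (x, -1, 1)))       # 8/3   = ||u'||_{L^2}^2
print(sp.sqrt(sp.Rational(2, 5) + sp.Rational(8, 3)))   # sqrt(46/15), the H^1 norm
```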


Sunday, March 31, 2024

Field Extension - Simple example

 The real numbers form a field, and so do the complex numbers and the rationals. What is deeply interesting is that we can construct several other fields based on an existing field. These are called field extensions.

The standard example of such a field extension goes like this. Let $Q$ be the field of rationals. Consider the polynomial $X^2 -2 \in Q[X]$. Clearly the roots of this polynomial, $X=\sqrt{2}$ and $X=-\sqrt{2}$, don't belong to $Q$, as they are irrational.

One nice thing we can do is extend $Q$ by adjoining these roots to form a new field. Let's call it $Q(\sqrt{2})$. It is the smallest field containing both $Q$ and $\sqrt{2}$.

For example, every element of $Q(\sqrt{2})$ can be written as $a + b\sqrt{2}$ with $a, b \in Q$.

Elements: We can add (or multiply) any two elements and the resulting element is still in $Q(\sqrt{2})$; for instance, $(a+b\sqrt{2})(c+d\sqrt{2}) = (ac+2bd) + (ad+bc)\sqrt{2}$.

This setup allows for the exploration of linear algebra concepts within field extensions, including the basis, dimension, and linear transformations, offering a rich framework for understanding more complex algebraic structures.
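To make the arithmetic in $Q(\sqrt{2})$ concrete, here is a toy Python sketch (not part of the original post) representing an element $a + b\sqrt{2}$ by the pair of rationals $(a,b)$:

```python
# Toy model of the field Q(sqrt(2)): elements a + b*sqrt(2) with rational a, b.
from fractions import Fraction

class QSqrt2:
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __add__(self, other):
        return QSqrt2(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*sqrt(2)) * (c + d*sqrt(2)) = (ac + 2bd) + (ad + bc)*sqrt(2)
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def inverse(self):
        # 1/(a + b*sqrt(2)) = (a - b*sqrt(2)) / (a^2 - 2b^2); nonzero since sqrt(2) is irrational
        d = self.a**2 - 2 * self.b**2
        return QSqrt2(self.a / d, -self.b / d)

    def __repr__(self):
        return f"{self.a} + {self.b}*sqrt(2)"

u = QSqrt2(1, 1)                     # 1 + sqrt(2)
v = QSqrt2(Fraction(1, 2), 3)        # 1/2 + 3*sqrt(2)
print(u + v, u * v, u.inverse(), u * u.inverse())   # sums, products, inverses stay in Q(sqrt(2))
```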
