
Saturday, June 1, 2024

Chain complexes on Hilbert spaces

 Chain complexes are mathematical structures used extensively in algebraic topology, homological algebra, and other areas of mathematics. They can be defined on various types of algebraic structures, including vector spaces and, more specifically, Hilbert spaces. Here, I'll explain the concept of chain complexes and provide an example based on Hilbert spaces.
 

Definition of Chain Complexes

A **chain complex** is a sequence of objects (usually groups or modules, but in the context of Hilbert spaces, these objects are Hilbert spaces themselves) connected by morphisms (usually called boundary maps or differentials) that satisfy a specific property: the composition of any two consecutive maps is zero. This sequence can be written as follows:

$\cdots \longrightarrow H_{n+1} \xrightarrow{\;d_{n+1}\;} H_n \xrightarrow{\;d_n\;} H_{n-1} \longrightarrow \cdots$

where each $H_n$ is a Hilbert space and the maps (denoted by $d_n : H_n \to H_{n-1}$) are continuous linear operators. The key property that makes this sequence a chain complex is that:

$d_{n-1} \circ d_n = 0$ for all $n$

This means the image of each map is contained in the kernel of the next map.
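For finite-dimensional spaces, the chain-complex condition can be verified by plain matrix multiplication. Here is a small numpy sketch using the boundary maps of a filled triangle (my own illustration; the spaces here are finite-dimensional stand-ins for the $H_n$):

```python
import numpy as np

# Boundary maps of a filled triangle with vertices {0, 1, 2}:
# H_2 = span{face}, H_1 = span{edges 01, 02, 12}, H_0 = span{vertices}.
# d2 sends the face to its oriented boundary of edges;
# d1 sends each edge to (head vertex - tail vertex).
d2 = np.array([[1.0], [-1.0], [1.0]])          # face -> edges
d1 = np.array([[-1.0, -1.0,  0.0],
               [ 1.0,  0.0, -1.0],
               [ 0.0,  1.0,  1.0]])            # edges -> vertices

# The defining property of a chain complex: d1 composed with d2 is zero.
print(np.allclose(d1 @ d2, 0))                 # True
```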
 

 Hilbert Spaces

A **Hilbert space** is a vector space equipped with an inner product that is complete with respect to the induced norm. It generalizes the notion of Euclidean space to an infinite-dimensional context. Common examples include spaces of square-integrable functions.
 

Example of a Chain Complex on Hilbert Spaces

Let's consider a simple example involving function spaces. Define the spaces $H_0$, $H_1$, and $H_2$ as:

- $H_0$: the space of real-valued continuous functions on $[0,1]$ that are zero at the endpoints.
- $H_1$: the space of real-valued square-integrable functions on $[0,1]$.
- $H_2$: the space of real-valued continuous functions on $[0,1]$.

(Strictly speaking, $H_0$ and $H_2$ are inner-product spaces, dense subspaces of $L^2[0,1]$ under the $L^2$ inner product, rather than complete Hilbert spaces.)

We define the boundary maps $d_1 : H_1 \to H_0$ and $d_2 : H_2 \to H_1$ by:

- $d_1(f) = f'$, the derivative of $f$, assuming $f$ is differentiable almost everywhere and that $f'$ is continuous and zero at the endpoints (making it belong to $H_0$).
- $d_2(g) = g$, the inclusion map, using the fact that every continuous function on $[0,1]$ is also square-integrable.

To check the property $d_1 \circ d_2 = 0$, observe that:

$d_1(d_2(g)) = d_1(g) = g'$

Since $g'$ is not zero for a general continuous $g$, the composition does not vanish automatically. Requiring that $g'$ is continuous and zero at the endpoints only makes $d_1 \circ d_2$ well-defined as a map into $H_0$; for the chain-complex property to hold strictly, we must restrict $H_2$ further, to functions $g$ with $g' = 0$.

This example, though simplified, shows how chain complexes can be constructed in the setting of Hilbert spaces and how they relate to familiar concepts in calculus and functional analysis. Chain complexes in this setting are particularly interesting in the study of differential operators and their kernels and images, which play crucial roles in the theory of partial differential equations and spectral theory.

Monday, May 13, 2024

Hodge * Operator

 Basics of wedge products
In the description below, we are assuming that our underlying space is an orientable $C^\infty$ manifold.

We know every vector space has a dual space consisting of linear functionals. Similarly, the tangent space at a point has a dual. The tangent space is typically spanned by $\frac{\partial}{\partial x^i}$, $i = 1, 2, \ldots, n$, giving an $n$-dimensional basis for the tangent space. Since this is a vector space, it has a dual called the ``cotangent space''. Just as vector-space duals consist of maps to the real numbers, the cotangent space consists of the differentials $dx^i$, which are linear maps to $\mathbb{R}$. These are called ``covectors''.

If you consider the Riemann integral $\int f\,dx$, the $dx$ is the limit of $\delta x$, a small strip in the $xy$ plane. This picture can guide thinking about the $dx^i$.
 

The differentials $dx^i$ carry an operation called the ``wedge'' operation, which is antisymmetric:
$dx \wedge dy = -\,dy \wedge dx$

In ordinary integration $dx\,dy$ is the area element and $dx\,dy\,dz$ the volume element. Similarly, the wedge product gives an oriented area or, more generally, an oriented volume.

A wedge of covectors, evaluated on tangent vectors, again maps to the reals. Antisymmetry gives $dx \wedge dx = -\,dx \wedge dx$, which implies $dx \wedge dx = 0$.
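A quick way to see the antisymmetry concretely is to model 1-forms on $\mathbb{R}^3$ by their coefficient vectors and the wedge by the antisymmetrized outer product; this is only an illustrative sketch, not standard library functionality:

```python
import numpy as np

# Represent a 1-form on R^3 by its coefficients in the basis (dx, dy, dz);
# the wedge of two 1-forms is the antisymmetrized outer product.
def wedge(a, b):
    return np.outer(a, b) - np.outer(b, a)

dx = np.array([1.0, 0.0, 0.0])
dy = np.array([0.0, 1.0, 0.0])

print(np.allclose(wedge(dx, dy), -wedge(dy, dx)))  # antisymmetry: True
print(np.allclose(wedge(dx, dx), 0))               # dx ^ dx = 0:   True
```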
For a function $f(x,y,z)$ on $\mathbb{R}^3$, the total differential leads to the following equation:
$df = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy + \frac{\partial f}{\partial z}dz$
Here $dx, dy, dz$ are covectors, or duals. An expression such as $df$ above, built additively from the single differentials $dx, dy, dz$, is called a 1-form. Wedging two 1-forms gives a 2-form (e.g. $dx \wedge dy$), and so on; 0-forms are simply functions.

The collection of $k$-covectors is denoted by $\Lambda^k_n(M)$, where $M$ is the $n$-dimensional manifold or space under consideration.

From the above equation it is evident that the $d$ operator acts on functions, that is, 0-forms, and produces 1-forms. Generalizing, we can write $d : \Lambda^k_n(M) \to \Lambda^{k+1}_n(M)$, which means that if we apply $d$ to a $k$-form, we get a $(k+1)$-form.

From the total differential above we can also infer that the dual space has basis $dx, dy, dz$, so it is 3-dimensional. Similarly, for $n = 3$, if we take 2-forms, the dimension is also 3. In general, the vector space $\Lambda^k_n(M)$ has dimension $\binom{n}{k}$. Since $\binom{n}{k} = \binom{n}{n-k}$, the spaces of $k$-covectors and $(n-k)$-covectors have the same dimension.
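The dimension count can be sanity-checked in a couple of lines of Python:

```python
from math import comb

# dimensions of the spaces of k-covectors on a 3-dimensional space
n = 3
dims = [comb(n, k) for k in range(n + 1)]
print(dims)                                                      # [1, 3, 3, 1]
print(all(comb(n, k) == comb(n, n - k) for k in range(n + 1)))   # True
```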

Hodge *
Let $M$ be an $n$-dimensional $C^\infty$ manifold. For a given integer $0 \le k \le n$, $\Lambda^k T^*_pM$ and $\Lambda^{n-k} T^*_pM$ have the same dimension as vector spaces and are isomorphic. Here $T^*M$ is the dual of the tangent space $TM$. If $M$ is oriented and has a Riemannian metric, then for each point $p \in M$ there is a natural isomorphism
$\ast : \Lambda^k T^*_pM \to \Lambda^{n-k} T^*_pM$
By varying $p$ we get a linear isomorphism
$\ast : A^k(M) \to A^{n-k}(M)$
This operator is called the ``Hodge'' star operator, where $A^k(M)$ represents the vector space of $k$-forms.


A bit of explanation for the above definition. A $C^\infty$ manifold means a smooth manifold.
A Riemannian metric is a positive definite inner product defined on the tangent space at each point of the manifold, $g_p : T_pM \times T_pM \to \mathbb{R}$, in such a way that $g_p$ varies smoothly with $p$. The standard way to express this is $ds^2 = \sum_{i,j=1}^n g_{ij}\,dx^i\,dx^j$.

Going back to the Hodge star, if we write $V$ for $T_pM$ and $V^*$ for $T^*_pM$, we get
$\ast : \Lambda^k V^* \to \Lambda^{n-k} V^*$

For an oriented orthonormal basis $\theta^1, \ldots, \theta^n$ of $V^*$, this linear map can be defined by setting
$\ast(\theta^1 \wedge \theta^2 \wedge \cdots \wedge \theta^k) = \theta^{k+1} \wedge \cdots \wedge \theta^n$
In particular
$\ast 1 = \theta^1 \wedge \theta^2 \wedge \cdots \wedge \theta^n, \qquad \ast(\theta^1 \wedge \theta^2 \wedge \cdots \wedge \theta^n) = 1$

Let $\omega$ be the 1-form corresponding to a vector field $X \in \Xi(M)$. For example, $\omega = f_x\,dx + f_y\,dy + f_z\,dz$, where $f_x = \frac{\partial f}{\partial x}$ etc. (so that $\omega = df$ and $X = \operatorname{grad} f$). Then,
$\operatorname{div} X = \ast\, d \ast \omega$

For the $\omega$ given in the above example, we can work this out as follows:
$\ast\omega = \ast(f_x\,dx + f_y\,dy + f_z\,dz) = f_x\,dy\wedge dz + f_y\,dz\wedge dx + f_z\,dx\wedge dy$
$d\ast\omega = \left(\frac{\partial f_x}{\partial x} + \frac{\partial f_y}{\partial y} + \frac{\partial f_z}{\partial z}\right) dx\wedge dy\wedge dz$
$\ast\, d\ast\omega = \frac{\partial f_x}{\partial x} + \frac{\partial f_y}{\partial y} + \frac{\partial f_z}{\partial z} = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2} = \operatorname{div} X$
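This computation can be checked symbolically. The sympy sketch below works with the component functions directly (scalar derivatives standing in for the forms) and verifies that $\ast d \ast \omega$ equals the Laplacian of $f$ when $\omega = df$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

# omega = f_x dx + f_y dy + f_z dz with f_x = df/dx etc.;
# *d*omega reduces to the divergence of (f_x, f_y, f_z),
# which for omega = df is the Laplacian of f.
fx, fy, fz = f.diff(x), f.diff(y), f.diff(z)
div = fx.diff(x) + fy.diff(y) + fz.diff(z)
laplacian = f.diff(x, 2) + f.diff(y, 2) + f.diff(z, 2)
print(sp.simplify(div - laplacian) == 0)   # True
```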


Monday, May 6, 2024

Chain complex

A cochain complex $C^\bullet$ is a collection of vector spaces $\{C^k\}_{k \in \mathbb{Z}}$ together with a sequence of linear maps $d^k : C^k \to C^{k+1}$,
$\cdots \longrightarrow C^{-1} \xrightarrow{\;d^{-1}\;} C^0 \xrightarrow{\;d^0\;} C^1 \xrightarrow{\;d^1\;} C^2 \xrightarrow{\;d^2\;} \cdots$
with
$d^k \circ d^{k-1} = 0$
The $d^k$ are a collection of linear maps known as the ``differentials'' of the cochain complex.
One relevant example of a cochain complex is the vector space $\Omega^\bullet(M)$ of differential forms on a manifold together with the exterior derivative:
$0 \longrightarrow \Omega^0(M) \xrightarrow{\;d^0\;} \Omega^1(M) \xrightarrow{\;d^1\;} \Omega^2(M) \xrightarrow{\;d^2\;} \cdots, \qquad d \circ d = 0$
The above cochain complex is known as the de Rham complex.
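For 0-forms on $\mathbb{R}^3$ the condition $d \circ d = 0$ reads $\operatorname{curl}(\operatorname{grad} f) = 0$, which can be checked symbolically with sympy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

# In R^3 the exterior derivative on 0-forms is grad and on 1-forms is curl,
# so d(df) = 0 says curl(grad f) = 0 (equality of mixed partials).
grad = [f.diff(v) for v in (x, y, z)]
curl = [grad[2].diff(y) - grad[1].diff(z),
        grad[0].diff(z) - grad[2].diff(x),
        grad[1].diff(x) - grad[0].diff(y)]
print(all(sp.simplify(c) == 0 for c in curl))   # True
```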

Thursday, May 2, 2024

Mixed formulation of Laplacian



The following deals with the mixed problem of Dirichlet kind for the Poisson equation. To get started, we cast the Poisson equation as a first-order system.

Recall, the original Poisson equation is
$\Delta u(x) = f(x)$
Let's set $\sigma = \operatorname{grad} u$. grad, aka the gradient, written in its full glory is
$\operatorname{grad} u = \nabla u = \left(\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \frac{\partial u}{\partial z}\right)$
so clearly $\operatorname{grad} u$ is a vector field. For our case, we simply set $\sigma = \operatorname{grad} u$.
Our original equation is $\Delta u(x) = f(x)$. To express this in terms of $\sigma$, we use another operator: div.
Let's write down what div is.
The divergence of a vector field, say $F = (F_x, F_y, F_z)$, is a scalar function that represents the net rate of outward flux per unit volume at each point in the field. It gives a measure of how much the vector field is spreading out or compressing at a given point. The divergence is calculated as follows:
$\operatorname{div} F = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}$
Clear from the above definition, if we take $\operatorname{div}\operatorname{grad} u$, we get
$\operatorname{div}\operatorname{grad} u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}$
which is the same as $\Delta u$ in our original equation.
Hence, the original equation becomes
$\operatorname{div}\operatorname{grad} u(x) = f(x)$
This in turn is shortened to the first-order system

$\sigma = \operatorname{grad} u, \qquad \operatorname{div}\sigma = f$
The pair $(\sigma, u)$ can be characterized as the (unique) critical point of the functional
$I(\sigma, u) = \int_\Omega \left(\tfrac{1}{2}\sigma\cdot\sigma + u \operatorname{div}\sigma\right) dx - \int_\Omega f u \, dx$
over $H(\operatorname{div};\Omega) \times L^2(\Omega)$, where $H(\operatorname{div};\Omega) = \{\sigma \in L^2(\Omega;\mathbb{R}^n) : \operatorname{div}\sigma \in L^2(\Omega)\}$.
Equivalently one can solve the weak problem
$\int_\Omega \tau\cdot\sigma \, dx + \int_\Omega u \operatorname{div}\tau \, dx = 0, \qquad \forall \tau \in H(\operatorname{div};\Omega)$
$\int_\Omega \operatorname{div}\sigma \, v \, dx = \int_\Omega f v \, dx, \qquad \forall v \in L^2(\Omega)$
This fits into the abstract framework if we define $V = H(\operatorname{div};\Omega) \times L^2(\Omega)$ and
$B(\sigma, u; \tau, v) = \int_\Omega \sigma\cdot\tau \, dx + \int_\Omega u \operatorname{div}\tau \, dx + \int_\Omega \operatorname{div}\sigma \, v \, dx, \qquad F(\tau, v) = \int_\Omega f v \, dx$


In this case the bilinear form $B$ is not coercive, and so the choice of subspaces and the analysis are not as simple as for the standard finite element method for Poisson's equation. Finite element discretizations based on such saddle point variational principles are called mixed finite element methods. Thus a mixed finite element method for Poisson's equation is obtained by choosing subspaces $\Sigma_h \subset H(\operatorname{div};\Omega)$ and $V_h \subset L^2(\Omega)$ and seeking a critical point of $I$ over $\Sigma_h \times V_h$. The resulting Galerkin method has the form: find $\sigma_h \in \Sigma_h$, $u_h \in V_h$ satisfying
$\int_\Omega \sigma_h\cdot\tau \, dx + \int_\Omega u_h \operatorname{div}\tau \, dx = 0 \quad \forall \tau \in \Sigma_h, \qquad \int_\Omega \operatorname{div}\sigma_h \, v \, dx = \int_\Omega f v \, dx \quad \forall v \in V_h.$
Since the bilinear form is not coercive, it is not automatic that the linear system is nonsingular. Consider the homogeneous case $f = 0$. Choosing $\tau = \sigma_h$ and $v = u_h$ in the two discretized variational equations and combining them, it follows immediately that $\int_\Omega |\sigma_h|^2 \, dx = 0$, so $\sigma_h = 0$. However, $u_h$ need not vanish, so nonsingularity depends on the choice of the subspaces $\Sigma_h$ and $V_h$.
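The saddle-point structure can be seen in a minimal numpy sketch of the one-dimensional analogue ($\sigma = u'$, $\sigma' = f$ on $(0,1)$, $u(0) = u(1) = 0$). The particular choice of piecewise-linear $\sigma_h$, piecewise-constant $u_h$, and all names here are my own illustration, not prescribed by the text above:

```python
import numpy as np

# 1D mixed Poisson: sigma = u', sigma' = f on (0,1), u(0) = u(1) = 0.
# sigma_h: continuous piecewise linears (hat functions), n+1 dofs;
# u_h: piecewise constants, n dofs.
n = 20
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# A[i,j] = integral of phi_i * phi_j  (mass matrix of the hat functions)
A = np.zeros((n + 1, n + 1))
for e in range(n):
    A[e:e + 2, e:e + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])

# B[k,j] = integral over element k of phi_j' = phi_j(x_{k+1}) - phi_j(x_k)
B = np.zeros((n, n + 1))
for e in range(n):
    B[e, e], B[e, e + 1] = -1.0, 1.0

F = h * np.ones(n)               # load: integral of f * v for f = 1

# discrete weak problem:  A s + B^T u = 0,  B s = F  (a saddle-point system)
K = np.block([[A, B.T], [B, np.zeros((n, n))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(n + 1), F]))
s, u = sol[:n + 1], sol[n + 1:]
```

For $f = 1$ the exact solution is $u = (x^2 - x)/2$, $\sigma = x - \tfrac{1}{2}$; here the computed $\sigma_h$ interpolates $\sigma$ at the nodes and $u_h$ recovers the element averages of $u$ (up to roundoff).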



Tuesday, April 30, 2024

Galerkin method in abstract settings

 
This section deals with analyzing errors in the finite element method. To determine when the Galerkin method will produce a good approximation, an abstraction is introduced.
Let $B : V \times V \to \mathbb{R}$ be a bounded bilinear form and let $F : V \to \mathbb{R}$ be a bounded linear form.
It is assumed that the problem to be solved can be stated as: find $u \in V$ such that
$B(u, v) = F(v), \qquad \forall v \in V$
  Example
To make sense of the above abstraction, it is best to see how it arises from the weak form of a PDE, as shown below.
Consider Poisson's equation given by:
$-\Delta u = f \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega$
where Ω is a bounded domain, f is a given function, and u is the function to be determined.


The weak formulation involves multiplying the differential equation by a test function $v$ from the space $H^1_0(\Omega)$, integrating over the domain $\Omega$, and applying integration by parts:
$\int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega f v \, dx$
Here, $B(u, v) = \int_\Omega \nabla u \cdot \nabla v \, dx$ and $F(v) = \int_\Omega f v \, dx$ define the bounded bilinear and linear forms respectively.

Well posed problem
The abstract problem is considered well-posed if for each $F \in V'$ there exists a unique solution $u \in V$ and the mapping $F \mapsto u$ is bounded. Here $V'$ denotes the dual space of $V$. Equivalently, the problem is well-posed if $L : V \to V'$ given by $\langle Lu, v \rangle = B(u, v)$ is an isomorphism.

Example
For the Dirichlet problem of Poisson's equation, we have
$V = H^1_0(\Omega), \qquad B(u, v) = \int_\Omega \operatorname{grad} u(x) \cdot \operatorname{grad} v(x) \, dx, \qquad F(v) = \int_\Omega f(x) v(x) \, dx$
A generalized Galerkin method for the abstract problem begins with a finite-dimensional normed vector space $V_h$, a bilinear form $B_h : V_h \times V_h \to \mathbb{R}$, and a linear form $F_h : V_h \to \mathbb{R}$, and defines $u_h \in V_h$ by
$B_h(u_h, v) = F_h(v), \qquad \forall v \in V_h$
The above equation can be written in the form $L_h u_h = F_h$, where $L_h : V_h \to V_h'$ is given by $\langle L_h u, v \rangle = B_h(u, v)$.
If the finite-dimensional problem is nonsingular, then the norm of the discrete solution operator, known as the ``stability constant'', is defined as
$\|L_h^{-1}\|$

In this approximation of the original problem determined by $V, B, F$ by $V_h, B_h, F_h$, the intention is that $V_h$ in some sense approximates $V$ and that $B_h, F_h$ approximate $B, F$. This is the idea behind ``consistency''.
The goal is for $u_h$ to approximate $u$. This is known as ``convergence''.

To this end, assume there is a restriction operator $\pi_h : V \to V_h$ so that $\pi_h u$ is close to $u$.
Using the equation $L_h u_h = F_h$, we define the ``consistency error'' as
$L_h \pi_h u - F_h$
The error we wish to control is
$\pi_h u - u_h$
It is easy to see the relation between the error and the consistency error:
$\pi_h u - u_h = L_h^{-1}(L_h \pi_h u - F_h)$
Recall that the norm of $L_h^{-1}$ is the stability constant. Taking norms on both sides of the above equation, we see that the norm of the error is bounded by the product of the stability constant and the norm of the consistency error:
$\|\pi_h u - u_h\| \le \|L_h^{-1}\| \, \|L_h \pi_h u - F_h\|$
Expressing this in terms of bilinear forms, the consistency error becomes
$\|L_h \pi_h u - F_h\|_{V_h'} = \sup_{0 \ne v \in V_h} \frac{B_h(\pi_h u, v) - F_h(v)}{\|v\|}$
The right-hand side needs some explanation.
The consistency error measures how close $B_h(\pi_h u, v)$ is to $F_h(v)$ for all test functions $v$ in the subspace $V_h$. Here $F_h$ represents the discrete analogue of the forcing term. The supremum of the ratio picks out the worst (largest) consistency error relative to the norm of the test function $v$, over all possible nonzero test functions in $V_h$.
The finite-dimensional problem is nonsingular if and only if
$\gamma_h := \inf_{0 \ne u \in V_h} \sup_{0 \ne v \in V_h} \frac{B_h(u, v)}{\|u\| \, \|v\|} > 0$
and the stability constant is given by $\gamma_h^{-1}$.
Notes:

The quantity $\gamma_h$ above measures the smallest ratio of the bilinear form $B_h(u, v)$ to the product of the norms of $u$ and $v$ over all nonzero functions $u, v$ in the subspace $V_h$. The condition $\gamma_h > 0$ shows that the bilinear form $B_h$ is bounded from below with a positive lower bound; when such a bound holds with $v = u$, the form is called ``coercive'' over the subspace $V_h$. Since $B_h$ corresponds to a matrix in a chosen basis of $V_h$, the condition shows that this matrix is nonsingular. For numerical methods, this means the discrete problem is well-posed and its solution depends continuously on the data.
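When $V_h$ carries the Euclidean norm, $\gamma_h$ is just the smallest singular value of the matrix representing $B_h$. A toy numpy check (the $2 \times 2$ matrix is made up for illustration):

```python
import numpy as np

# With Euclidean norms on V_h, gamma_h = inf sup B_h(u,v)/(|u| |v|)
# equals the smallest singular value of the matrix M representing B_h.
M = np.array([[2.0, 0.0],
              [0.0, 0.5]])
gamma_h = np.linalg.svd(M, compute_uv=False).min()
print(gamma_h)     # 0.5 > 0: nonsingular, stability constant 1/0.5 = 2
```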


Monday, April 29, 2024

PDE- Galerkin method and example

 
The Galerkin method is a numerical technique that transforms boundary value problems, particularly partial differential equations (PDEs), into a system of linear algebraic equations by projecting the problem onto a finite-dimensional subspace.
Steps of the Galerkin Method


 Problem Setup: 

Start with the weak form of the differential equation integrated against a test function.

Choice of Basis Functions

Select basis functions $\{\phi_i\}_{i=1}^n$ that satisfy the boundary conditions.
 Approximation of the Solution: 

Assume $u(x) \approx u_n(x) = \sum_{i=1}^n c_i \phi_i(x)$.
 Galerkin Projection: 

Ensure the residual is orthogonal to the span of the basis functions:
$\int \text{Residual} \cdot \phi_j \, dx = 0 \quad \text{for all } j.$

Example: One-Dimensional Poisson Equation

Consider the problem
$-u''(x) = f(x), \qquad u(0) = u(1) = 0,$
on the interval [0,1].
Step 1: Weak Form

Multiply by a test function v(x) and integrate by parts to get:
$\int_0^1 u'(x) v'(x) \, dx = \int_0^1 f(x) v(x) \, dx.$
Step 2: Discretization

Choose linear basis functions and assume:
$u(x) \approx u_n(x) = \sum_{i=1}^n c_i \phi_i(x).$
Step 3: Galerkin Projection

Insert the approximation into the weak form using basis functions as test functions:
$\sum_{i=1}^n c_i \int_0^1 \phi_i'(x) \phi_j'(x) \, dx = \int_0^1 f(x) \phi_j(x) \, dx \quad \text{for all } j.$
Step 4: Matrix Formulation

Define the stiffness matrix A and load vector b as follows:
$A_{ij} = \int_0^1 \phi_i'(x) \phi_j'(x) \, dx, \qquad b_j = \int_0^1 f(x) \phi_j(x) \, dx.$
This leads to the linear system:
$Ac = b,$
where c is the vector of coefficients.
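The four steps can be carried out in a few lines of numpy for $f = 1$, using hat functions on a uniform mesh (a sketch under those assumptions, not a general FEM code):

```python
import numpy as np

# -u'' = f on (0,1), u(0) = u(1) = 0, with f = 1 and P1 hat functions
# on a uniform mesh; A is the standard tridiagonal stiffness matrix.
n = 8                                  # number of elements
h = 1.0 / n
main = 2.0 * np.ones(n - 1)
off = -np.ones(n - 2)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h
b = h * np.ones(n - 1)                 # load vector: b_j = integral of phi_j
c = np.linalg.solve(A, b)              # nodal values of u_n

# exact solution u(x) = x(1 - x)/2 attains its maximum 1/8 at x = 1/2
print(round(c.max(), 6))               # 0.125
```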
Example: Very simple and concrete
Take a differential equation
$\frac{d^2 y}{dx^2} + x + y = 0, \qquad 0 \le x \le 1$
with boundary conditions $y(0) = y(1) = 0$.

Example:

Step 1
Take a trial solution $y(x) = a_0 + a_1 x + a_2 x^2$. Always take the number of constants one more than the order of the differential equation. Here we have a second-order equation, so we take $2 + 1 = 3$ constants $a_0, a_1, a_2$.
Apply the boundary condition $x = 0, y = 0$. This leads to $a_0 = 0$. Then the boundary condition $x = 1, y = 0$ leads to $a_1 = -a_2$. The trial function becomes $y(x) = a_2(x^2 - x)$. This is a one-parameter solution, the parameter being $a_2$.

Step 2
Compute the weighting function
$W(x) = \frac{\partial y}{\partial a_2} = x^2 - x$

Step 3
Compute the domain residual: simply substitute the trial function into the original differential equation.
$R_d = \frac{d^2 y}{dx^2} + x + y, \qquad R_d(x) = \frac{d^2}{dx^2}\left[a_2(x^2 - x)\right] + x + a_2(x^2 - x) = 2a_2 + x + a_2(x^2 - x)$
 Step 4
Minimization of the domain residual:
$\int_0^1 W(x) R_d(x) \, dx = 0$
Carrying out the integration gives $a_2 = -\frac{5}{18}$.
Then the solution is $y(x) = \frac{5}{18}(x - x^2)$.
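The whole computation can be verified with sympy (a direct transcription of Steps 1-4 above):

```python
import sympy as sp

x, a2 = sp.symbols('x a2')

y = a2 * (x**2 - x)                    # trial solution with y(0) = y(1) = 0
W = sp.diff(y, a2)                     # weighting function: x^2 - x
R = sp.diff(y, x, 2) + x + y           # domain residual of y'' + x + y = 0

eq = sp.integrate(W * R, (x, 0, 1))    # enforce  integral of W * R dx = 0
sol = sp.solve(eq, a2)[0]
print(sol)                             # -5/18, so y(x) = (5/18)(x - x^2)
```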
Other formulations
For the differential equation $u'' = f$, we can write this as a first-order system by setting
$\sigma = u', \qquad \sigma' = f$
The pair $(\sigma, u)$ can be characterized variationally as the unique critical point of the functional
$I(\sigma, u) = \int_{-1}^1 \left(\tfrac{1}{2}\sigma^2 + u\sigma'\right) dx - \int_{-1}^1 f u \, dx$
over $H^1(-1, 1) \times L^2(-1, 1)$.
Note the relation to the original energy functional $J(u)$: substituting $\sigma = u'$,
$J(u) = \frac{1}{2}\int_{-1}^1 |u'(x)|^2 \, dx - \int_{-1}^1 f(x) u(x) \, dx = \frac{1}{2}\int_{-1}^1 \sigma^2 \, dx - \int_{-1}^1 f(x) u(x) \, dx$
Using $\sigma' = f$ and integrating by parts, $\int_{-1}^1 f u \, dx = \int_{-1}^1 \sigma' u \, dx = [\sigma u]_{-1}^1 - \int_{-1}^1 \sigma u' \, dx$, which rewrites $J$ entirely in terms of the pair $(\sigma, u)$.

Hilbert space properties for PDEs

 In the context of PDEs the following properties are important for the Hilbert spaces.

Notation: Let $V$ be a Hilbert space and let $a(\cdot, \cdot) : V \times V \to \mathbb{R}$ be a bilinear form.
Property 1:
For PDE solutions, we need this bilinear form to be bounded.
$|a(u, v)| \le M \|u\| \, \|v\| \quad \text{for some constant } M > 0$
Property 2:
This is called ``V-ellipticity''. The concept of V-ellipticity is crucial in establishing the well-posedness (existence, uniqueness, and stability of solutions) of boundary value problems formulated in a variational framework. It guarantees the uniqueness and stability of solutions to the corresponding variational problems. In essence, if the bilinear form derived from a PDE is V-elliptic, then the solution to the variational problem (and hence to the PDE) depends continuously on the data (such as boundary conditions and external forces), ensuring that small changes in input lead to small changes in the output.
Mathematically, it provides a lower bound for the bilinear form:
$a(v, v) \ge \alpha \|v\|^2 \quad \forall v \in V$, where $\alpha > 0$ is a constant.

Simple example
The simplest example of a Hilbert space is Euclidean space $\mathbb{R}^n$ with the Euclidean inner product $a(u, v) = \sum_i u_i v_i$. It is not too difficult to verify that this form is bilinear, bounded in the sense of Property 1 (with $M = 1$, by the Cauchy-Schwarz inequality), and V-elliptic in the sense of Property 2 (with $\alpha = 1$, since $a(v, v) = \|v\|^2$).
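Both properties are easy to illustrate numerically for the Euclidean inner product:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=3)
v = rng.normal(size=3)

a = float(np.dot(u, v))                          # the bilinear form a(u, v)
# Property 1 with M = 1 is the Cauchy-Schwarz inequality;
# Property 2 holds with alpha = 1 since a(v, v) = ||v||^2.
print(abs(a) <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)   # True
print(np.isclose(np.dot(v, v), np.linalg.norm(v) ** 2))          # True
```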
