Saturday, June 1, 2024

Chain complexes on Hilbert spaces

 Chain complexes are mathematical structures used extensively in algebraic topology, homological algebra, and other areas of mathematics. They can be defined on various types of algebraic structures, including vector spaces and, more specifically, Hilbert spaces. Here, I'll explain the concept of chain complexes and provide an example based on Hilbert spaces.
 

Definition of Chain Complexes

A **chain complex** is a sequence of objects (usually groups or modules, but in the context of Hilbert spaces, these objects are Hilbert spaces themselves) connected by morphisms (usually called boundary maps or differentials) that satisfy a specific property: the composition of any two consecutive maps is zero. This sequence can be written as follows:

\[ \cdots \rightarrow H_{n+1} \rightarrow H_n \rightarrow H_{n-1} \rightarrow \cdots \]

where each \( H_n \) is a Hilbert space and the maps (denoted by \( d_n: H_n \rightarrow H_{n-1} \)) are continuous linear operators. The key property that makes this sequence a chain complex is that:

\[ d_{n-1} \circ d_n = 0 \text{ for all } n \]

This means the image of each map is contained in the kernel of the next map.
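As a finite-dimensional warm-up (my own illustration using the boundary matrices of a solid triangle, not Hilbert-space operators), the defining identity \( d_{n-1} \circ d_n = 0 \) can be checked with plain matrix multiplication:

```python
import numpy as np

# Boundary maps of a filled triangle with vertices {0,1,2} and
# oriented edges e0=(0,1), e1=(1,2), e2=(0,2).
# d1 sends each edge to (head - tail) in the vertex space;
# d2 sends the 2-cell to its boundary edges e0 + e1 - e2.
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])      # shape (vertices, edges)
d2 = np.array([[ 1],
               [ 1],
               [-1]])              # shape (edges, faces)

# The defining property of a chain complex: d1 ∘ d2 = 0
print(d1 @ d2)                     # zero matrix
```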
 

 Hilbert Spaces

A **Hilbert space** is a vector space equipped with an inner product that is complete with respect to the induced norm. It generalizes the notion of Euclidean space to an infinite-dimensional context. Common examples include spaces of square-integrable functions.
 

Example of a Chain Complex on Hilbert Spaces

Let's consider a simple example involving function spaces. (Strictly speaking, the spaces of continuous functions below are inner-product spaces rather than complete Hilbert spaces, but they serve to illustrate the mechanism.) Define the spaces \( H_0 \), \( H_1 \), and \( H_2 \) as:

- \( H_0 \): space of real-valued continuous functions on \([0,1]\) that are zero at the endpoints.
- \( H_1 \): space of real-valued square-integrable functions on \([0,1]\).
- \( H_2 \): space of real-valued continuous functions on \([0,1]\).

We define the boundary maps \( d_1: H_1 \rightarrow H_0 \) and \( d_2: H_2 \rightarrow H_1 \) by:

- \( d_1(f) = f' \), the derivative of \( f \), assuming \( f \) is differentiable almost everywhere and that \( f' \) is continuous and zero at the endpoints (making it belong to \( H_0 \)).
- \( d_2(g) = g \), the inclusion map, assuming that every continuous function is also square-integrable.

To check the property \( d_1 \circ d_2 = 0 \), observe that:

\[ d_1(d_2(g)) = d_1(g) = g' \]

For a general continuous \( g \) this is not zero, so the maps as defined do not yet form a chain complex: we need \( g' \) to vanish identically, not merely at the endpoints. To obtain a genuine chain complex we must restrict \( H_2 \), for example to functions whose derivative is identically zero, so that \( d_1(d_2(g)) = 0 \) holds automatically.

This example, though simplified, shows how chain complexes can be constructed in the setting of Hilbert spaces and how they relate to familiar concepts in calculus and functional analysis. Chain complexes in this setting are particularly interesting in the study of differential operators and their kernels and images, which play crucial roles in the theory of partial differential equations and spectral theory.

Monday, May 13, 2024

Hodge * Operator

 Basics of wedge products
  In the description below, we are assuming that our underlying space is an orientable $C^\infty$ manifold.

A vector space has a dual space consisting of linear functionals. Similarly, the tangent space at a point of a manifold has a dual. The tangent space of an $n$-dimensional manifold is spanned by $\{ \frac{\partial }{\partial x^i} \}$, $i=1,2,\cdots,n$. Since this is a vector space, it has a dual, called the ``cotangent space''. Just as the dual of a vector space consists of maps to the real numbers, the cotangent space consists of the differentials $dx_i$, which are linear maps to the reals. These are called ``covectors''.

If you consider the Riemann integral $\int f dx$, the $dx$ there is the limit of $\delta x$, a small strip in the $x$-$y$ plane. This picture can guide intuition about $dx_i$.
 

The differentials $dx_i$ have an operation called ``wedge'' operation, given by
\begin{equation}
dx \wedge dy = - dy \wedge dx
\end{equation}

In ordinary integration $dx\,dy$ is the area element and $dx\,dy\,dz$ the volume element. Similarly, the wedge product gives an oriented area or, more generally, an oriented volume.

Setting $dy = dx$ in the antisymmetry relation gives $dx \wedge dx = - dx \wedge dx$, which implies $dx \wedge dx=0$.
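Concretely, two covectors can be wedged via the determinant formula $(\alpha\wedge\beta)(u,v) = \alpha(u)\beta(v) - \alpha(v)\beta(u)$; a small numerical sketch of the antisymmetry (my own illustration, not from the text):

```python
import numpy as np

def wedge(alpha, beta):
    """Wedge of two covectors: (α∧β)(u, v) = α(u)β(v) − α(v)β(u)."""
    def two_form(u, v):
        return (alpha @ u) * (beta @ v) - (alpha @ v) * (beta @ u)
    return two_form

dx = np.array([1.0, 0.0, 0.0])    # covector picking out the x-component
dy = np.array([0.0, 1.0, 0.0])

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Antisymmetry: dx∧dy = −dy∧dx, hence dx∧dx = 0
print(wedge(dx, dy)(u, v) == -wedge(dy, dx)(u, v))   # True
print(wedge(dx, dx)(u, v))                            # 0.0
```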
For a function $f(x,y,z)$ in $\mathcal{R}^3$, the total differential leads to the following equation.
\begin{equation}
  df = \frac{\partial{f}}{\partial{x}}dx + \frac{\partial{f}}{\partial{y}}dy + \frac{\partial{f}}{\partial{z}}dz
\end{equation}
Here $dx,dy,dz$ are covectors (duals). An expression such as $df$ above, in which each summand involves a single differential $dx,dy,dz$, is called a $1$-form. Wedging two $1$-forms gives a $2$-form (such as $dx\wedge dy$), and so on; $0$-forms are simply functions.

The space of $k$-covectors is denoted by $\Lambda^k_n(M)$, where $M$ is the $n$-dimensional manifold or space under consideration.

From the above equation it is evident that the $d$ operator acts on functions, that is, $0$-forms, and produces $1$-forms. Generalizing, we can write $d:\Lambda^k_n(M) \rightarrow \Lambda^{k+1}_n(M)$, which means that applying $d$ to a $k$-form yields a $(k+1)$-form.

From equation (2) we can also infer that the dual space has basis $dx,dy,dz$, so it is $3$-dimensional. Similarly, for $n=3$ the space of $2$-forms also has dimension $3$ (with basis $dx\wedge dy$, $dy\wedge dz$, $dz\wedge dx$). In general, the dimension of $\Lambda^k_n(M)$ as a vector space is $\binom{n}{k}$. Since $\binom{n}{k}=\binom{n}{n-k}$, the spaces of $k$-covectors and $(n-k)$-covectors have the same dimension.

Hodge *
Let $M$ be an $n$-dimensional $C^\infty$ manifold. For a given integer $0 \leq k \leq n$, $\Lambda^k T^*_pM$ and $\Lambda^{n-k}T^{*}_pM$ have the same dimension as vector spaces and they are isomorphic. Here $T^*_pM$ is the dual of the tangent space $T_pM$. If $M$ is oriented and has a Riemannian metric, then for each point $p \in M$ there is a natural isomorphism
\begin{equation}
  *:\Lambda^k T^*_pM \cong \Lambda^{n-k}T^{*}_pM
\end{equation}
By varying $p$ we get a linear isomorphism
\begin{equation}
  *:\mathcal{A}^k(M) \rightarrow \mathcal{A}^{n-k}(M)
\end{equation}
This operator is called the ``Hodge star'' operator, where $\mathcal{A}^k(M)$ denotes the vector space of $k$-forms on $M$.


A bit of explanation for the above definition. A $C^\infty$ manifold is a smooth manifold.
A Riemannian metric is a positive definite inner product defined on the tangent space at each point of the manifold, $g_p: T_pM \times T_pM \rightarrow \mathcal{R}$, varying smoothly ($C^\infty$) with $p$. The standard way to express this is $ds^2 = \sum_{i,j=1}^n g_{ij}\,dx_i\, dx_j$.

Going back to Hodge star, if we write $V$ for $T_p M$, and use $V^*$ for $T_p^* M$, we get,
\begin{equation}
  *: \Lambda^k V^* \rightarrow \Lambda^{n-k} V^*
\end{equation}

This linear map can be defined on a positively oriented orthonormal basis $\theta_1,\ldots,\theta_n$ of $V^*$ by setting
\begin{equation}
  *(\theta_1 \wedge \theta_2 \wedge \cdots \wedge \theta_k) = \theta_{k+1}\wedge \cdots \wedge \theta_n
\end{equation}
In particular
\begin{equation}
  *1 = \theta_1 \wedge \theta_2 \wedge \cdots \wedge \theta_n, \qquad *(\theta_1 \wedge \theta_2 \wedge \cdots \wedge \theta_n) = 1
\end{equation}

Let $X \in \Xi(M)$ be a vector field and let $\omega$ be the corresponding one-form. For example, if $X = \text{grad}\, f$, then $\omega = f_x dx + f_y dy + f_z dz$, where $f_x = \frac{\partial f}{\partial x}$, etc. Then,
\begin{equation}
  div X = *d*\omega
\end{equation}

For the $\omega$ given above, we can work this out as follows:
\begin{align*}
  *\omega = *(f_x dx + f_y dy + f_z dz) = f_x*dx+f_y*dy+f_z*dz \\
  = f_x dy \wedge dz + f_y dz \wedge dx + f_z dx \wedge dy \\
  d*\omega = f_{xx} dx \wedge dy \wedge dz + f_{yy}  dx \wedge dy \wedge dz + f_{zz}  dx \wedge dy \wedge dz \\
  *d*\omega = f_{xx}+f_{yy}+f_{zz} = \frac{\partial^2f}{\partial x^2}+ \frac{\partial^2f}{\partial y^2}+ \frac{\partial^2f}{\partial z^2} = div X
\end{align*}
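The last identity can be checked symbolically, since $*d*\omega$ for $\omega = df$ reduces to the sum of second partials; a small sketch (my own, with a sample $f$):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 * y + sp.sin(y * z)          # sample smooth function

grad_f = [sp.diff(f, v) for v in (x, y, z)]                   # X = grad f
div_grad_f = sum(sp.diff(g, v) for g, v in zip(grad_f, (x, y, z)))

# *d*ω for ω = df equals f_xx + f_yy + f_zz
laplacian = sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)
print(sp.simplify(div_grad_f - laplacian))                     # 0
```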


Monday, May 6, 2024

Chain complex

A cochain complex $\mathcal{C}$ is a collection of vector spaces $\{C^k\}_{k\in\mathcal{Z}}$ together with a sequence of linear maps $d_k:C^k \rightarrow C^{k+1}$
\begin{equation}
  \cdots \rightarrow C^{-1} \xrightarrow{d_{-1}} C^{0} \xrightarrow{d_{0}} C^1 \xrightarrow{d_{1}} C^2\xrightarrow{d_{2}}\cdots
\end{equation}
with
\begin{equation}
  d_k \circ d_{k-1} = 0
\end{equation}
The maps $\{d_k\}$ are known as the ``differentials'' of the cochain complex.
One relevant example of a cochain complex is the vector space $\Omega^{*}(M)$ of differential forms on a manifold $M$ together with the exterior derivative.
\begin{equation}
  0 \rightarrow \Omega^{0}(M) \xrightarrow{d_{0}} \Omega^1(M) \xrightarrow{d_{1}} \Omega^2(M)\xrightarrow{d_{2}}\cdots,\;d\circ d = 0
\end{equation}
The above cochain complex is known as the de Rham complex.
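In $\mathcal{R}^3$ the de Rham differentials correspond to grad, curl, and div, so $d \circ d = 0$ becomes the classical identities $\text{curl}\,(\text{grad}\, f) = 0$ and $\text{div}\,(\text{curl}\, F) = 0$. A symbolic check (my own sketch, with sample $f$ and $F$):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
V = (x, y, z)

def grad(f):
    return [sp.diff(f, v) for v in V]

def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]

def div(F):
    return sum(sp.diff(Fi, v) for Fi, v in zip(F, V))

f = x * y**2 * sp.exp(z)                  # sample 0-form
F = [x * z, sp.sin(x * y), y + z**2]      # sample vector field (1-form)

print([sp.simplify(c) for c in curl(grad(f))])   # [0, 0, 0]
print(sp.simplify(div(curl(F))))                  # 0
```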

Thursday, May 2, 2024

Mixed formulation of Laplacian



The following deals with a mixed problem of Dirichlet kind for the Poisson equation. To get started, we cast the Poisson equation as a first-order system.

Recall, the original Poisson equation (here in one dimension) is
\begin{equation}
  -u^{''}(x) = f(x)
\end{equation}
Let's set $\sigma = -\text{grad}\; u$. The gradient of a scalar function is a vector field, written in full as
\begin{equation}
  \nabla u = (\frac{\partial u}{\partial x},\frac{\partial u}{\partial y}, \frac{\partial u}{\partial z} )
\end{equation}
Clearly $\text{grad}\; u$ is a vector field. To express the original equation $-u^{''}(x) = f(x)$ in terms of $\sigma$, we use another operator: div.
The divergence of a vector field, say $F=(F_x,F_y,F_z)$, is a scalar function that represents the net rate of outward flux per unit volume at each point in the field. It gives a measure of how much the vector field is spreading out or compressing at a given point. The divergence is calculated as follows:
\begin{equation}
  \text{div} F = \frac{\partial F_x}{\partial x}+ \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}
\end{equation}
It is clear from the above definition that if we take $\text{div}\; \text{grad}\; u$, we get
\begin{equation}
  \text{div}\; \text{grad}\; u = \frac{\partial^2 u}{\partial x^2}+ \frac{\partial^2 u}{\partial y^2}+ \frac{\partial^2 u}{\partial z^2}
\end{equation}
which in the one-dimensional case is the same as $u^{''}$.
Hence, the original equation becomes,
\begin{equation}
  -\text{div}\; \text{grad}\; u(x) = f(x)
\end{equation}
This in turn is shortened as,

\begin{equation}
  \sigma = - \text{grad}\; u, \text{div}\; \sigma = f
\end{equation}
The pair $(\sigma,u)$ can be characterized as the unique critical point of the functional
\begin{equation}
  I(\sigma,u) = \int_\Omega (\frac{1}{2} \sigma.\sigma - u \text{div}  \sigma)dx + \int_\Omega fu dx
\end{equation}
over $H(\text{div};\Omega) \times L^2(\Omega)$, where $H(\text{div};\Omega) = \{\sigma \in L^2; \text{div}\, \sigma \in L^2\}$.
Equivalently one can solve weak problem
\begin{equation}
  \int_{\Omega}\tau.\sigma dx - \int_{\Omega}u \text{div} \tau dx = 0, \tau \in H(div:\Omega)
\end{equation}
\begin{equation}
  \int_{\Omega} \text{div}\, \sigma \, v\, dx = \int_{\Omega} f v \, dx, \quad v \in L^2(\Omega)
\end{equation}
This fits into the abstract framework if we define $V=H(\text{div};\Omega) \times L^2(\Omega)$ and
\begin{equation}
  B(\sigma,u;\tau,v) = \int_{\Omega}\sigma.\tau dx - \int_{\Omega}u \text{div} \tau dx + \int_{\Omega}\text{div} \sigma v dx, F(\tau,v) = \int_{\Omega}fvdx
\end{equation}


In this case the bilinear form $B$ is not coercive, and so the choice of subspaces and the analysis is not so simple
as for the standard finite element method for Poisson’s equation. Finite element discretizations based on such saddle point variational principles are called mixed finite element methods. Thus a mixed finite element for Poisson’s equation is obtained by choosing subspaces $\Sigma_h \subset H(div;\Omega)$ and $V_h \subset L_2(\Omega)$ and seeking a critical point of $I$ over $\Sigma_h \times V_h$. The resulting Galerkin method has the form: Find $\sigma_h \in \Sigma_h,u_h \in V_h$ satisfying
\begin{equation}
      \int_{\Omega} \sigma_h \cdot \tau \, dx - \int_{\Omega} u_h \, \text{div} \tau \, dx = 0, \quad \forall \tau \in \Sigma_h,
    \int_{\Omega} \text{div} \sigma_h v \, dx = \int_{\Omega} fv \, dx, \quad \forall v \in V_h.
  \end{equation}
Since the bilinear form is not coercive, it is not automatic that the linear system is nonsingular.
If $f=0$, then $\int_{\Omega} fv \, dx=0$ for all $v$. Choosing $\tau = \sigma_h$ and $v = u_h$ and adding the two discretized variational equations, the terms $\int_{\Omega} u_h \,\text{div}\,\sigma_h\,dx$ cancel and we are left with $\int_{\Omega} |\sigma_h|^2 \, dx = 0$, so $\sigma_h = 0$. However, $u_h$ need not vanish: the first equation then only forces $\int_{\Omega} u_h \, \text{div}\, \tau \, dx = 0$ for all $\tau \in \Sigma_h$, which does not imply $u_h = 0$ unless the subspaces are chosen properly.
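To see the saddle-point system concretely, here is a minimal one-dimensional sketch (my own, assuming $f=1$ on $(0,1)$ with $u(0)=u(1)=0$): continuous piecewise-linear functions approximate $\sigma$ and piecewise constants approximate $u$, the 1-D analogue of a stable mixed pair.

```python
import numpy as np

n = 8                      # number of elements
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)

# Mass matrix M[i,j] = ∫ φ_i φ_j dx for P1 hat functions (sigma space)
M = np.zeros((n + 1, n + 1))
for e in range(n):
    M[e:e+2, e:e+2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])

# B[i,e] = ∫ over element e of φ_i' dx  (coupling with the P0 space for u)
B = np.zeros((n + 1, n))
for e in range(n):
    B[e, e] = -1.0
    B[e + 1, e] = 1.0

# Load: f = 1, so ∫ f v dx over each element is h
F = h * np.ones(n)

# Saddle-point system:  [M   -B ] [sigma]   [0]
#                       [B^T  0 ] [  u  ] = [F]
K = np.block([[M, -B], [B.T, np.zeros((n, n))]])
rhs = np.concatenate([np.zeros(n + 1), F])
sol = np.linalg.solve(K, rhs)
sigma_h, u_h = sol[:n + 1], sol[n + 1:]

# Exact solution: u = x(1-x)/2, sigma = -u' = x - 1/2
print(np.max(np.abs(sigma_h - (nodes - 0.5))))   # ≈ 0: σ_h nodally exact here
```

Note that the block matrix `K` is symmetric but indefinite, reflecting the lack of coercivity discussed above; it is nonetheless nonsingular for this pair of spaces.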



Tuesday, April 30, 2024

Galerkin method in abstract settings

 
This section deals with analyzing errors in finite element method. To determine when Galerkin method will produce a good approximation, abstraction is introduced.
Let $B:V \times V \rightarrow R$ be bounded bilinear form and let $F:V\rightarrow R$ be bounded linear form.
It is assumed that the problem to be solved can be stated as, find $u \in V$ such that
\begin{equation}
  B(u,v)=F(v), v \in V
\end{equation}
  Example
To make sense of the above abstraction, it is best to see how it arises from the weak form of a PDE, as shown below.
Consider Poisson's equation given by:
\[
- \Delta u = f \quad \text{in } \Omega, \quad u = 0 \quad \text{on } \partial \Omega
\]
where \(\Omega\) is a bounded domain, \(f\) is a given function, and \(u\) is the function to be determined.


The weak formulation involves multiplying the differential equation by a test function \( v \) from the space \( H_0^1(\Omega) \), integrating over the domain \(\Omega\), and applying integration by parts:
\[
\int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega f v \, dx
\]
Here, \( B(u, v) = \int_\Omega \nabla u \cdot \nabla v \, dx \) and \( F(v) = \int_\Omega f v \, dx \) define the bounded bilinear and linear forms respectively.

Well posed problem
The abstract problem is considered well-posed if for each $F \in V^*$ there exists a unique solution $u \in V$ and the mapping $F \mapsto u$ is bounded. Here $V^*$ denotes the dual space of $V$. Equivalently, the problem is well-posed if $L:V \rightarrow V^*$ given by $\langle Lu,v\rangle=B(u,v)$ is an isomorphism.

Example
For the Dirichlet problem of Poisson's equation, we have
\begin{align*}
  V=\mathring{H}^1(\Omega) \\
  B(u,v) = \int_{\Omega} \text{grad}\, u(x) \cdot \text{grad}\, v(x)\, dx \\
  F(v) = \int_{\Omega} f(x) v(x)\, dx
\end{align*}
A generalized Galerkin method for the abstract problem begins with a finite-dimensional normed vector space $V_h$, a bilinear form $B_h:V_h \times V_h \rightarrow \mathcal{R}$, and a linear form $F_h:V_h \rightarrow \mathcal{R}$, and defines $u_h \in V_h$ by
 \begin{equation}
   B_h(u_h,v) = F_h(v), v \in V_h
 \end{equation}
 The above equation can be written in the form $L_h u_h = F_h$, where $L_h:V_h\rightarrow V_h^*$ is given by $\langle L_h u,v\rangle=B_h(u,v)$.
If the finite-dimensional problem is non-singular, then the norm of the discrete solution operator, known as the ``stability constant'', is
 \begin{equation}
   \parallel L_h^{-1} \parallel
 \end{equation}

In this approximation of the original problem determined by $V,B,F$ by $V_h,B_h,F_h$, the intention is that $V_h$ in some sense approximates $V$ and that $B_h,F_h$ approximate $B,F$. This is the idea behind ``consistency''.
 The goal is for $u_h$ to approximate $u$; this is known as ``convergence''.

To this end, assume there is a restriction operator $\pi_h:V \rightarrow V_h$ so that $\pi_h u$ is close to $u$.
 Using the equation $L_h u_h = F_h$, we can define the ``consistency error'' as
 \begin{equation}
   L_h\pi_h u - F_h
 \end{equation}
 The error we wish to control is
 \begin{equation}
   \pi_h u - u_h
 \end{equation}
 It is easy to see the relation between the error and the consistency error: since $L_h u_h = F_h$,
 \begin{equation}
   \pi_h u - u_h = L_h^{-1}(L_h\pi_h u - F_h)
 \end{equation}
 Recall that the norm of $L_h^{-1}$ is the stability constant. Taking norms on both sides of the above equation, we see that the norm of the error is bounded by the product of the stability constant and the norm of the consistency error.
 \begin{equation}
   \parallel  \pi_h u - u_h  \parallel \leq \parallel L_h^{-1} \parallel \parallel L_h\pi_h u - F_h \parallel
 \end{equation}
Expressing this in terms of bilinear forms, the relation becomes,
\begin{equation}
  \parallel L_h\pi_hu - F_h \parallel = \sup_{0 \neq v \in V_h}\frac{B_h(\pi_hu,v) - F_h(v)}{\parallel v \parallel}
\end{equation}
The above equation, especially the RHS, needs some explanation.
The consistency error measures how close $B_h(\pi_hu,v)$ is to $F_h(v)$ for all test functions $v$ in the subspace $V_h$. Here $F_h$ represents the discrete analog of the forcing term. The supremum takes the worst-case (maximum) consistency error relative to the norm of the test function $v$, over all possible non-zero test functions in $V_h$.
The finite-dimensional problem is non-singular iff
\begin{equation}
  \gamma_h = \inf_{0\neq u \in V_h} \sup_{0 \neq v \in V_h} \frac{B_h(u,v)}{\parallel u \parallel \parallel v \parallel} > 0
\end{equation}
and the stability constant is given by $\gamma^{-1}_h$.
Notes:

The quantity $\gamma_h$ measures the smallest ratio of the bilinear form $B_h(u,v)$ to the product of the norms of $u$ and $v$ over all non-zero functions $u,v$ in the subspace $V_h$. The condition $\gamma_h > 0$ says that $B_h$ is bounded away from zero in this inf-sup sense; for a symmetric positive definite form this reduces to coercivity over the subspace $V_h$. Since $B_h$ corresponds to a matrix on $V_h$, the condition shows that this matrix is non-singular. For numerical methods, this means the discrete problem is well-posed and its solution depends continuously on the data.
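When $V_h$ carries the Euclidean norm and $B_h(u,v) = v^{T}Au$, the inf-sup constant $\gamma_h$ is exactly the smallest singular value of $A$, and the stability constant is $1/\sigma_{\min}(A)$. A small numerical sketch (my own, with a sample tridiagonal matrix):

```python
import numpy as np

# B_h(u, v) = v^T A u on V_h = R^3 with the Euclidean norm
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])   # sample stiffness-like matrix

sigma = np.linalg.svd(A, compute_uv=False)
gamma_h = sigma.min()              # inf-sup constant
stability = 1.0 / gamma_h          # = ||A^{-1}|| in the spectral norm

print(gamma_h > 0)                 # True: discrete problem is nonsingular
print(np.isclose(stability, np.linalg.norm(np.linalg.inv(A), 2)))   # True
```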


Monday, April 29, 2024

PDE- Galerkin method and example

 
The Galerkin method is a numerical technique that transforms boundary value problems, particularly partial differential equations (PDEs), into a system of linear algebraic equations by projecting the problem onto a finite-dimensional subspace.
Steps of the Galerkin Method

Problem Setup

Start with the weak form of the differential equation, integrated against a test function.

Choice of Basis Functions

Select basis functions \( \{\phi_i\}_{i=1}^n \) that satisfy the boundary conditions.

Approximation of the Solution

Assume \( u(x) \approx u_n(x) = \sum_{i=1}^n c_i \phi_i(x) \).

Galerkin Projection

Ensure the residual is orthogonal to the span of the basis functions:
\[
\int \text{Residual} \cdot \phi_j \, dx = 0 \quad \text{for all } j.
\]

Example: One-Dimensional Poisson Equation

Consider the problem
\[
-u''(x) = f(x), \quad u(0) = u(1) = 0.
\]
on the interval \([0,1]\).
Step 1: Weak Form

Multiply by a test function \( v(x) \) and integrate by parts to get:
\[
\int_0^1 u'(x) v'(x) \, dx = \int_0^1 f(x) v(x) \, dx.
\]
Step 2: Discretization

Choose linear basis functions and assume:
\[
u(x) \approx u_n(x) = \sum_{i=1}^n c_i \phi_i(x).
\]
Step 3: Galerkin Projection

Insert the approximation into the weak form using basis functions as test functions:
\[
\sum_{i=1}^n c_i \int_0^1 \phi_i'(x) \phi_j'(x) \, dx = \int_0^1 f(x) \phi_j(x) \, dx \quad \text{for all } j.
\]
Step 4: Matrix Formulation

Define the stiffness matrix \( \mathbf{A} \) and load vector \( \mathbf{b} \) as follows:
\[
A_{ij} = \int_0^1 \phi_i'(x) \phi_j'(x) \, dx, \quad b_j = \int_0^1 f(x) \phi_j(x) \, dx.
\]
This leads to the linear system:
\[
\mathbf{Ac} = \mathbf{b},
\]
where \( \mathbf{c} \) is the vector of coefficients.
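The steps above can be carried out numerically; a minimal sketch (my own, assuming $f = 1$ and uniform piecewise-linear hat functions) assembles $\mathbf{A}$ and $\mathbf{b}$ and solves for $\mathbf{c}$:

```python
import numpy as np

n = 9                              # number of interior nodes
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)     # interior nodes

# Stiffness matrix A_ij = ∫ φ_i' φ_j' dx for piecewise-linear hats
A = (np.diag(2.0 * np.ones(n)) +
     np.diag(-1.0 * np.ones(n - 1), 1) +
     np.diag(-1.0 * np.ones(n - 1), -1)) / h

# Load vector for f = 1: b_j = ∫ φ_j dx = h
b = h * np.ones(n)

c = np.linalg.solve(A, b)          # coefficients = nodal values u_n(x_i)

exact = x * (1.0 - x) / 2.0        # exact solution of -u'' = 1, u(0)=u(1)=0
print(np.max(np.abs(c - exact)))   # ≈ 0: nodally exact in 1-D
```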
Example: Very simple and concrete
Take a differential equation
\[
  \frac{d^2y}{dx^2} + x+ y = 0 , 0 \le x \le 1
\]
with boundary condition \(y(0) = y(1) = 0 \)


Step 1
Take a trial solution \(y(x) = a_0+a_1x+a_2 x^2\). Always take the number of constants one greater than the highest derivative order. Here we have a second-order differential equation, so we take \( 2+1=3\) constants \(a_0,a_1,a_2\).
Apply the boundary condition \( x=0, y=0 \). This leads to \( a_0 =0 \). Then the boundary condition \(x = 1, y = 0\) leads to \(a_1 = -a_2 \). The trial function becomes \( y(x) = a_2(x^2-x) \). This is a one-parameter solution, the parameter being \( a_2 \).

Step 2
Compute the weighting function
\[
  W(x) = \frac{\partial{y}}{\partial{a_2}} = x^2 - x
\]

Step 3
Compute the domain residual by substituting the trial function into the original differential equation.
\begin{align*}
  R_d &= \frac{d^2y}{dx^2} + x + y \\
  R_d(x) &= \frac{d^2}{dx^2}\left[a_2(x^2-x)\right] + x + a_2(x^2-x) = 2a_2 + x + a_2(x^2-x)
\end{align*}
 Step 4
Minimization of domain residual:
\[
  \int_0^1 W(x) R_d(x) dx = 0
\]
Compute the minimization. You will get \( a_2 = -\frac{5}{18} \).
Then the solution is \( y(x) = \frac{5}{18}(x - x^2) \).
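The four steps can be reproduced symbolically (my own sketch with sympy):

```python
import sympy as sp

x, a2 = sp.symbols('x a2')
y = a2 * (x**2 - x)              # one-parameter trial function, y(0) = y(1) = 0
R = sp.diff(y, x, 2) + x + y     # domain residual of y'' + x + y = 0
W = sp.diff(y, a2)               # weighting function: x**2 - x

# Galerkin condition: ∫ W R dx = 0 over [0, 1]
eq = sp.integrate(W * R, (x, 0, 1))
sol = sp.solve(eq, a2)[0]
print(sol)                        # -5/18
```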
Other formulations
For the differential equation \( -u'' = f \) on \( (-1,1) \) with \( u(\pm 1) = 0 \), we can write this as a first-order system by setting
\[
  \sigma = -u' , \quad \sigma' = f
\]
The pair \( (\sigma,u) \) can be characterized variationally as the unique critical point of the functional
\[
  I(\sigma,u) = \int_{-1}^1 (\frac{1}{2}\sigma^2 - u \sigma') dx +  \int_{-1}^1 fu dx
\]
over \( H^1(-1,1) \times L^2(-1,1) \)
To relate this to the energy functional \(J(u)\), substitute \( \sigma = -u' \) and integrate by parts using \( u(\pm 1) = 0 \):
\begin{align*}
  \int_{-1}^1 f(x)u(x)\, dx &= \int_{-1}^1 \sigma' u \, dx = -\int_{-1}^1 \sigma u' \, dx = \int_{-1}^1 \sigma^2 \, dx
\end{align*}
Hence
\begin{align*}
  J(u) &= \frac{1}{2}\int_{-1}^1 |u'(x)|^2 dx - \int_{-1}^1 f(x)u(x)dx \\
       &= \frac{1}{2}\int_{-1}^1\sigma^2\, dx - \int_{-1}^1\sigma^2\, dx = -\frac{1}{2}\int_{-1}^1\sigma^2\, dx
\end{align*}
while at the critical point, where \( \sigma' = f \) makes the terms \( \int u\sigma'\,dx \) and \( \int fu\, dx \) cancel,
\begin{align*}
  I(\sigma,u) &= \int_{-1}^1 \left(\frac{1}{2}\sigma^2 - u \sigma'\right) dx + \int_{-1}^1 fu\, dx = \frac{1}{2}\int_{-1}^1\sigma^2\, dx
\end{align*}
Thus, at the solution, \( I(\sigma,u) = -J(u) \).

Hilbert space properties for PDEs

In the context of PDEs, the following properties of Hilbert spaces are important.

Notation: Let $V$ be a Hilbert space and let $a(\cdot,\cdot):V\times V \rightarrow \mathcal{R}$ be a bilinear form.
Property 1:
For PDE solutions, we need this bilinear form to be bounded:
\begin{equation}
  |a(u,v)| \leq M ||u||\, ||v|| \text{ for some } M>0 \text{ and all } u,v \in V
\end{equation}
Property 2:
This is called \(V\)-ellipticity. The concept of $V$-ellipticity is crucial in establishing the well-posedness (existence, uniqueness, and stability of solutions) of boundary value problems formulated in a variational framework. If the bilinear form derived from a PDE is $V$-elliptic, then the solution to the variational problem (and hence to the PDE) depends continuously on the data (such as boundary conditions and external forces), so that small changes in input lead to small changes in the output.
Mathematically, it provides a lower bound for the bilinear form:
\begin{equation}
  a(v,v) \ge \alpha ||v||^2 \text{ for some constant } \alpha > 0 \text{ and all } v \in V
\end{equation}

Simple example
The simplest example of a Hilbert space is Euclidean space $\mathcal{R}^n$ with the usual dot product as the inner product. It is not too difficult to verify that $a(u,v) = u \cdot v$ is bilinear, bounded in the sense of Property $1$ (with $M=1$, by the Cauchy-Schwarz inequality), and $V$-elliptic in the sense of Property $2$ (with $\alpha = 1$, since $v \cdot v = ||v||^2$).
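A quick numerical spot-check of both properties for the dot product on $\mathcal{R}^5$ (my own sketch; $M = \alpha = 1$, random sample vectors):

```python
import numpy as np

rng = np.random.default_rng(0)

def a(u, v):
    return u @ v              # bilinear form: Euclidean inner product

bounded, elliptic = True, True
for _ in range(1000):
    u, v = rng.normal(size=5), rng.normal(size=5)
    # Property 1 (boundedness, M = 1): Cauchy-Schwarz inequality
    bounded &= abs(a(u, v)) <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12
    # Property 2 (V-ellipticity, alpha = 1): a(v, v) = ||v||^2
    elliptic &= a(v, v) >= np.linalg.norm(v) ** 2 - 1e-12

print(bool(bounded and elliptic))   # True
```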
