Sunday, July 11, 2021

Introduction to Manifolds - Notion of $C^\infty$ functions

There are several types of functions: those that behave nicely when we differentiate them and those that behave badly. A function that is continuous everywhere but differentiable nowhere is an example of the latter; see the Weierstrass function.

When we study manifolds we would like to avoid the nasty functions and restrict ourselves to the nicer ones. "Smooth" or $C^\infty$ functions meet this niceness criterion.

The question, then, is: what are $C^\infty$ functions?

Before jumping into the definition of $C^\infty$ functions, it is nice to look at another concept called "real-analytic".

"real-analytic" functions are functions that are equal to their Taylor series expansion in a neighborhood of a point $p$.

The Taylor series expansion of a function $f(x)$ at a point $p$ is defined as

$f(x) = f(p) + \sum_i \frac{\partial f}{\partial x^i}\Big|_p (x^i - p^i) + \frac{1}{2!}\sum_{i,j} \frac{\partial^2 f}{\partial x^i \partial x^j}\Big|_p (x^i - p^i)(x^j - p^j) + \cdots$

Clearly, for a function to be real-analytic, we need it to have partial derivatives of all orders, continuous at $p$. This leads to the concept of $C^\infty$ functions.

For now, we work in $\mathcal{R}^n$. We represent coordinates in this space with superscripts, $(x^1, x^2, \cdots, x^n)$, so a point $p$ is represented as $(p^1, p^2, \cdots, p^n)$. We further assume that $p \in U \subset \mathcal{R}^n$, where $U$ is an open subset of $\mathcal{R}^n$.

To qualify as a $C^\infty$ function, we require the function to have partial derivatives of all orders at a point $p \in U$, and these derivatives must be continuous at $p$. If this is true for every point $p \in U$, then the function is $C^\infty$ on $U$.

The simplest examples of $C^\infty$ functions are $\sin(x)$, $\cos(x)$, etc.

One caution: not all $C^\infty$ functions are real-analytic. The classic example is $f(x) = e^{-1/x^2}$ for $x > 0$ and $f(x) = 0$ for $x \leq 0$: it is $C^\infty$ everywhere, but its Taylor series at $0$ is identically zero, so it does not equal the function on any neighborhood of $0$.
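As a quick sanity check (a small sympy sketch of my own, not part of the original argument), one can verify symbolically that every derivative of $e^{-1/x^2}$ tends to $0$ as $x \to 0^+$, so its Taylor series at $0$ carries no information about the function:

import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1/x**2)          # the x > 0 branch of the standard counterexample

for k in range(5):
    deriv = sp.diff(f, x, k)                 # k-th derivative for x > 0
    print(k, sp.limit(deriv, x, 0, '+'))     # every one-sided limit is 0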


Saturday, August 8, 2020

Cohomology-Some Module theory

Preliminaries:
Given two modules $B, C$, we seek a module $A$ such that $B$ contains an isomorphic copy of $A$ and the resulting quotient module $B/A$ is isomorphic to $C$.
Clearly, saying that $B$ contains an isomorphic copy of $A$ is the same as saying that there is an injective homomorphism $\psi: A \rightarrow B$. This can be expressed as
\begin{equation}
  A \equiv \psi(A) \subset B
\end{equation}
To say $C$ is isomorphic to the quotient means that there is a surjective homomorphism $\phi: B \rightarrow C$ with $\ker\phi = \psi(A)$.
This gives us a pair of homomorphisms
\begin{equation}
  \label{eq:hom}
  A \xrightarrow{\psi} B \xrightarrow{\phi} C
\end{equation}
such that $\operatorname{im}\psi = \ker\phi$.

A pair of homomorphisms for which the above holds is said to be "exact" (at $B$).

Examples:

Using the direct sum of the modules $A, C$ with $B = A \oplus C$, the following exact sequence can be constructed.
\begin{equation}
  0 \rightarrow A \xrightarrow{i} A \oplus C \xrightarrow{\pi} C \rightarrow 0
\end{equation}
where $i(a) = (a,0)$ and $\pi(a,c) = c$. Notice that $(\pi \circ i)(a) = \pi(a,0) = 0$, so the composition of consecutive maps is zero, in analogy with the $\partial^2 = 0$ condition for a chain complex.
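Here is a small illustrative check of this (my own sketch, sampling a finite window of $Z$ rather than proving anything), with $A = Z$ and $C = Z/nZ$ as in the example that follows:

n = 5                        # so C = Z/5Z, represented as {0,...,4}

def i(a):                    # inclusion A -> A (+) C,  a |-> (a, 0)
    return (a, 0)

def pi(pair):                # projection A (+) C -> C, (a, c) |-> c
    a, c = pair
    return c % n

# the composition pi o i is the zero map
assert all(pi(i(a)) == 0 for a in range(-20, 21))

# on this sample, ker(pi) coincides with im(i)
window = [(a, c) for a in range(-20, 21) for c in range(n)]
assert {p for p in window if pi(p) == 0} == {i(a) for a in range(-20, 21)}
print("pi o i = 0 and ker(pi) = im(i) on the sampled window")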

When $A = Z$ (as a $Z$-module) and $C = Z/nZ$, the split sequence above becomes the first line below; the second line is a different, non-split extension of the same modules, in which the map labeled $n$ is multiplication by $n$.
\begin{eqnarray}
  0 \rightarrow Z \xrightarrow{i} Z \oplus Z/nZ \xrightarrow{\pi} Z/nZ \rightarrow 0 \\
  0 \rightarrow Z \xrightarrow{n} Z \xrightarrow{\pi} Z/nZ \rightarrow 0
\end{eqnarray}
With $A = Z$ and $C = Z/nZ$, each of these sequences is an extension of $C$ by $A$.

For an $R$-module $M$ with a set of generators $S$, we may form the following sequence.
\begin{equation}
0 \rightarrow K \xrightarrow{i} F(S) \xrightarrow{\phi} M \rightarrow 0
\end{equation}
Here $F(S)$ is the free $R$-module on $S$, $\phi$ is the unique $R$-module homomorphism that restricts to the identity on $S$, and $K = \ker\phi$.
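For instance (a standard special case, not spelled out in the original note), taking $M = Z/nZ$ generated by the single element $S = \{1\}$ gives $F(S) = Z$ and $K = \ker\phi = nZ$, which recovers the second sequence displayed earlier:
\begin{equation}
  0 \rightarrow nZ \xrightarrow{i} Z \xrightarrow{\phi} Z/nZ \rightarrow 0
\end{equation}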

Friday, July 31, 2020

K-Algebras and Bimodules

K-algebras
A $k$-algebra $A$ is a (possibly noncommutative) ring with identity that is also a $k$-vector space, such that for $\alpha \in k$ and $a, b \in A$,

\begin{equation}
  \alpha(ab) = (\alpha a)b = a(\alpha b).
\end{equation}

Note that scalars commute with ring elements.

Examples:
  • Field extensions such as $F/E$.
  • Polynomial ring $k[X,Y,Z]$.
  • The ring $M_n(k)$ of square $n \times n$ matrices (under matrix addition and multiplication) is a $k$-algebra. Here we can see that $k$ commutes with elements of $M_n(k)$, even though the ring multiplication itself is non-commutative.
  • The set $Hom_k(V,V)$ of $k$-linear maps of a $k$-vector space $V$ to itself forms a $k$-algebra under addition and composition of linear maps.
  • The quaternions ${\cal H}$: since the center of ${\cal H}$ consists of the real numbers, ${\cal H}$ is an ${\cal R}$-algebra.
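As a quick numerical illustration of the compatibility condition for the matrix example (a minimal numpy sketch of my own, not a proof):

import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 3, 3))    # two random 3x3 real matrices
alpha = 2.5                              # a scalar from k = R

lhs    = alpha * (A @ B)
middle = (alpha * A) @ B
rhs    = A @ (alpha * B)
assert np.allclose(lhs, middle) and np.allclose(middle, rhs)
print("alpha(AB) = (alpha A)B = A(alpha B) holds numerically")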

A finite dimensional $k$-algebra is one that is finite dimensional as a vector space over $k$.

Note that ${\cal C}$ is $2$-dimensional over ${\cal R}$, ${\cal H}$ is $4$-dimensional over ${\cal R}$, etc.

Bimodules

If $R,S$ are two rings, an $R-S$ bimodule is an abelian group $(M,+)$ such that

  • $M$ is a left $R$ module, and a right $S$ module.
  • for all $r \in R$, $s \in S$ and $m \in M$ \begin{equation}
    \label{eq:rs}
    (rm)s=r(ms)
    \end{equation}
An $R-R$ bimodule is known as an $R$-bimodule.

For positive integers $m, n$, consider the set of $n \times m$ matrices $M_{nm}(\mathcal{R})$. Here the ring $R$ is the ring of $n \times n$ matrices $M_{nn}(\mathcal{R})$, and the ring $S$ is the ring of $m \times m$ matrices $M_{mm}(\mathcal{R})$; this makes $M_{nm}(\mathcal{R})$ an $R-S$ bimodule.

Addition and multiplication are carried out using the usual rules of matrix addition and matrix multiplication; the heights and widths of the matrices have been chosen so that multiplication is defined.

The crucial bimodule property, that $(rx)s = r(xs)$, is the statement that multiplication of matrices is associative.
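A minimal numpy sketch of this (my own example, with $n = 2$, $m = 3$): the left action is by $n \times n$ matrices, the right action by $m \times m$ matrices, and the bimodule property is just associativity of matrix multiplication.

import numpy as np

n, m = 2, 3
rng = np.random.default_rng(1)
R = rng.standard_normal((n, n))   # element of M_nn, acting on the left
X = rng.standard_normal((n, m))   # element of the bimodule M_nm
S = rng.standard_normal((m, m))   # element of M_mm, acting on the right

assert np.allclose((R @ X) @ S, R @ (X @ S))
print("(RX)S = R(XS) holds numerically")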

A ring $R$ is an $R-R$ bimodule.

If $M$ is an $S-R$ bimodule and $N$ is an $R-T$ bimodule, then $M \otimes_R N$ is an $S-T$ bimodule.

Bimodule homomorphism:

For $M, N$ both $R-S$ bimodules, a bimodule homomorphism $f: M \rightarrow N$ is a left $R$-module homomorphism as well as a right $S$-module homomorphism.

An $R-S$ bimodule is the same as a left module over the ring $R \otimes_Z S^{op}$, where $S^{op}$ is the opposite ring of $S$. Note that in the opposite ring, multiplication is performed in the opposite order to that of the original ring.

This caused me some confusion initially. $S^{op}$ is a ring with multiplication reversed. Denote multiplication in the ring $S$ by "." and the opposite multiplication by "*". So, how does all this work to define a bimodule?
Let us look at $R \otimes_Z S^{op}$ operating on $m \in M$:
\begin{equation}
  (r \otimes s)\,m = r\,(m.s) = (r\,m).s
\end{equation}
Thus the bimodule definition is satisfied, and working with $R \otimes_Z S^{op}$ is often more convenient.

Tuesday, July 28, 2020

Associative algebras-some preliminary notes

Associative algebra:

Associative algebras are generalizations of field extensions and matrix algebras. For example, in a field extension $E/F$ of degree $n$, $E$ can be considered an $F$-algebra of dimension $n$; in particular, it is an $F$-vector space.

An associative algebra $\mathcal{A}$ over a field $F$ is a ring (with associative multiplication) that is also an $F$-vector space, with scalar multiplication and addition from the field $F$.

A $K$-algebra means an associative algebra over the field $K$.

In short, we want the $F$-action to be compatible with multiplication in $\mathcal{A}$. Say $f \in F$ and $a, b \in \mathcal{A}$; then
\begin{equation}
  (f.a)b=f.(ab)=a(f.b)
\end{equation}

We may consider $F$ as a subring under the identification $f \rightarrow f.1_A$, where $1_A$ is the multiplicative identity. Then, in the compatibility condition noted above, we can drop the dot between $F$ elements and $\mathcal{A}$ elements:
\begin{equation}
  fab = afb
\end{equation}
Taking $b = 1_A$, this says $fa = af$ for every $a \in \mathcal{A}$. Hence $F \subseteq Z(\mathcal{A})$, the center of $\mathcal{A}$.



Examples:
A standard first example of a $K$-algebra is a ring of square matrices over a field $K$, with the usual matrix multiplication.

Let $F = Q$, the field of rationals. Consider the polynomial $X^2 - 2 \in Q[X]$, whose splitting field is $E = Q(\sqrt{2})$. Then $Q(\sqrt{2}) = \{a + b\sqrt{2} \mid a, b \in Q\}$ is a vector space over $F$ with $\dim_F(E) = 2$.
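A tiny sympy check of this (my own sketch): $\sqrt{2}$ has minimal polynomial $X^2 - 2$ over $Q$, so $Q(\sqrt{2})$ is $2$-dimensional over $Q$ with basis $\{1, \sqrt{2}\}$.

import sympy as sp

X = sp.symbols('X')
p = sp.minimal_polynomial(sp.sqrt(2), X)   # -> X**2 - 2
print(p, "has degree", sp.degree(p, X))    # degree 2 = dim_Q Q(sqrt(2))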

Note that an $n$-dimensional $F$-algebra $A$ can be realized as a subalgebra of $M_n(F)$ ($n \times n$ matrices over the field $F$).
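A concrete instance of this (my own numpy sketch): $\mathcal{C}$ sits inside $M_2(\mathcal{R})$ via the left regular representation $a + bi \mapsto \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$, and the embedding respects addition and multiplication.

import numpy as np

def rep(z: complex) -> np.ndarray:
    # left regular representation of C in M_2(R)
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 1 + 2j, 3 - 1j
assert np.allclose(rep(z * w), rep(z) @ rep(w))   # multiplication is preserved
assert np.allclose(rep(z + w), rep(z) + rep(w))   # addition is preserved
print("C embeds in M_2(R) as a subalgebra")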

If $A, B$ are $F$-algebras, new algebras can be built from them by direct sums and tensor products: both $A \oplus B$ and $A \otimes_F B$ are again associative $F$-algebras.

If $A$ is an algebra of dimension $2$ over $F$, then either $A \equiv F \oplus F$, or $A$ is a quadratic extension of $F$, or $A$ contains a nonzero nilpotent element.

To prove this, first we establish commutativity of $A$ using a basis $\{1, \alpha\}$ over $F$.
To see this, simply expand $(x + y\alpha)(x' + y'\alpha)$ and compare it with $(x' + y'\alpha)(x + y\alpha)$; see the sketch below.
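Here is a quick symbolic version of that expansion (my own sympy sketch): treating $\alpha$ as a non-commuting symbol and the coefficients as commuting field elements, the two products expand to the same expression.

import sympy as sp

x, y, u, v = sp.symbols('x y u v')             # scalars in F
alpha = sp.Symbol('alpha', commutative=False)  # the basis element alpha

e1 = sp.expand((x + y*alpha) * (u + v*alpha))
e2 = sp.expand((u + v*alpha) * (x + y*alpha))
print(sp.expand(e1 - e2))   # prints 0: the products agree, so A is commutative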
 
 For $A$ to be a quadratic (field) extension, every non-zero element of $A$ must be invertible.

 Suppose instead that $x + y\alpha$ is a non-zero element that is not invertible. Then $y \neq 0$ (if $y = 0$, then $x \neq 0$ would be invertible in $F$), and after a change of basis we may assume that $\alpha$ itself is not invertible.

 $A$ can be represented as a subalgebra of $M_2(F)$, so we write $\alpha$ as
 \begin{align*}
   \alpha = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \text{ a } 2 \times 2 \text{ matrix.}
 \end{align*}
 Using the above, it is not too difficult to complete the proof that $A \equiv F \oplus F$ or $A$ contains a nilpotent element.

 The opposite algebra $A^{opp}$ is the algebra with the same underlying vector space but with multiplication in reverse order. That is, for $a, b \in A$, with $a.b$ denoting multiplication in $A$ and $\times$ denoting multiplication in $A^{opp}$, the condition is $b \times a = a.b$.
 

Sunday, July 26, 2020

Cohomology-Homotopy operator etc.,


Homotopy equivalent manifolds have isomorphic de Rham cohomology groups.

Suppose $F, G: M \rightarrow N$ are smooth homotopic maps. Suppose $\omega$ is a $k$-form on $N$ and let $h$ be a homotopy operator, mapping the space of $k$-forms on $N$ to $(k-1)$-forms on $M$, that satisfies
\begin{equation}
  d(h\omega)+h(d\omega)=G^{*}(\omega)-F^{*}(\omega)
\end{equation}
This means $h:\mathcal{A}^k(N) \rightarrow \mathcal{A}^{k-1}(M)$.

This homotopy operator is used as a stepping stone for proving that homotopy equivalent manifolds have isomorphic de Rham cohomology groups.

I shall write in detail about the motivation and how this is used later.

There is also the de Rham theorem, whose proof I shall blog about later.

Saturday, July 25, 2020

Cohomology- Homotopy

Imagine you are stuck in a strange planet without light source and it is always dark. You landed in an area with some rocks and some pleasant flat ground (otherwise, how else could you have landed?). Your job is to direct an incoming ship to a flat ground.

Since you took some algebraic topology in college, you are a bona fide mathematician and will always think like one. So, using your knowledge, you design this highly contrived, non-optimal "rock" detection system. (If you need more convincing that a mathematician's way of doing certain simple things is unique, please refer to the mathematician's way of making tea - check out the fantastic online book http://www.topologywithouttears.net.)

Your plan or device has the following steps.
1. Put a peg on the ground at a random place. Tie a rope.
2. Walk in some direction for some time unrolling your rope and, assuming that you haven't hit a rock, plant another peg where you tie the other end of the rope.
3. Pause and recognize this rope as a curve on the surface and name it $f$.
4. Walk a few feet and repeat the same steps - if you are successful in unrolling your rope, you have another curve - call it $g$.

Now if you can drag rope $f$ to rope $g$, you don't have a rock in between otherwise you have an obstruction or a rock!

If you can drag $f$ to $g$ (or vice versa), you write $f \sim g$, and this is the basic idea of homotopy.

Now, while dragging the rope from $f$ to $g$, you are working with a map $F$ which at the start of the drag (parameterized by $t = 0$) should be $f$, and at the end (say $t = 1$) should be $g$.

This notion is formalized as follows. Let $M,N$ be two manifolds and let $f,g:M\rightarrow N$ be two smooth functions. If there is a $C^\infty $ map
\begin{equation}
  F: M \times R \rightarrow N
\end{equation}
such that $F(x, 0) = f(x)$ and $F(x, 1) = g(x)$ for all $x \in M$, then we say $f$ is homotopic to $g$ and write $f \sim g$.

Now that we have such a map, we can add extra nomenclature when certain conditions occur. Let $N = \mathcal{R}^n$. A map $F: M \times \mathcal{R} \rightarrow \mathcal{R}^n$ that is linear in $t$ can be expressed as
\begin{equation}
  F(x, t) = (1-t)f(x) + t\,g(x)
\end{equation}
When $t = 0$, $F(\cdot, 0) = f$, and when $t = 1$, $F(\cdot, 1) = g$.

Notice that $F(x, t) = f(x) + (g(x) - f(x))t$, which is like $y = mx + c$, a straight line equation in $t$. Such a path is called a "straight-line homotopy".
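A small numerical sketch of the straight-line homotopy (my own example, using two hypothetical curves in $\mathcal{R}^2$):

import numpy as np

def f(x):                       # first curve in R^2
    return np.stack([x, x**2], axis=-1)

def g(x):                       # second curve in R^2
    return np.stack([x, np.sin(x)], axis=-1)

def F(x, t):                    # straight-line homotopy F(x, t) = (1-t) f(x) + t g(x)
    return (1 - t) * f(x) + t * g(x)

x = np.linspace(0.0, 1.0, 50)
assert np.allclose(F(x, 0.0), f(x))   # at t = 0 we recover f
assert np.allclose(F(x, 1.0), g(x))   # at t = 1 we recover g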

A set is convex if the straight line segment joining any two of its points lies entirely within the set. On any convex subset of $\mathcal{R}^n$, the straight-line homotopy is applicable.

Sometimes we are fortunate enough to have maps $f: M \rightarrow N$ and $g: N \rightarrow M$ such that $f \circ g$ is homotopic to the identity on $N$ and $g \circ f$ is homotopic to the identity on $M$. In such situations, $M$ is said to be "homotopy equivalent" to $N$, and $M, N$ are said to have the same homotopy type.

Notice that nothing precludes $N$ from being a single-point set. For example, if $M = \mathcal{R}^n$ and $N$ is a single point, we can smoothly scrunch $\mathcal{R}^n$ down to that point. Manifolds that can be shrunk to a point in this way are called "contractible".

Friday, July 24, 2020

Cohomology-Computations 1

Let $U, V$ be an open cover of the circle $S^1$, and let $X, Y$ be the arcs on the circle corresponding to this open cover, so that the overlap consists of two disjoint pieces, one at the top of the circle and one at the bottom.

Each arc $X,Y$ is diffeomorphic to an interval and thus to $\mathcal{R}$.

Here, instead of writing down one-forms and seeking the existence of integral solutions, we can use the Mayer-Vietoris sequence to make the computation. The Mayer-Vietoris sequence for $S^1$ is as follows:
\begin{equation}
  \label{eq:MV1}
  0 \rightarrow H^{0}(M) \xrightarrow{i^*} H^{0}(U) \oplus H^{0}(V) \xrightarrow{j^*} H^{0}(U \cap V) \xrightarrow{d^*} H^1(M) \rightarrow 0.
\end{equation}
Using the dimension formula for an exact sequence of vector spaces, $\sum_{k=0}^n (-1)^k d^k = 0$ where $d^k$ is the dimension of the $k$-th term, we can figure out that the dimension of $H^1(M)$ is $d^1 = 1$ as follows:
\begin{equation}
  1 - 2 + 2 - d^1 = 0
\end{equation}
Since $S^1$ is connected, $H^0(S^1) = \mathcal{R}$. As shown before, $H^0(U) = H^0(V) = \mathcal{R}$. Since the overlap has two disjoint pieces, $H^0(U \cap V) = \mathcal{R} \oplus \mathcal{R}$. All this results in the following sequence.
\begin{equation}
  \label{eq:MV2}
  0 \rightarrow \mathcal{R} \xrightarrow{i^*} \mathcal{R} \oplus \mathcal{R} \xrightarrow{j^*} \mathcal{R} \oplus \mathcal{R} \rightarrow 0
\end{equation}
Notice that $j^*: H^{0}(U) \oplus H^{0}(V) \rightarrow H^{0}(U \cap V)$ is given as follows. Since we are dealing with degree-$0$ cohomology of connected pieces, the classes involved are represented by constant functions, that is, by real numbers.
\begin{equation}
  j^*(m,n) = (n-m,n-m)
\end{equation}
That is, the image of $j^*$ is the diagonal in $\mathcal{R} \times \mathcal{R}$. The connecting map $d^*: H^{0}(U \cap V) \rightarrow H^{1}(M)$ is surjective with kernel equal to this diagonal, so $H^1(M)$ is the quotient of $\mathcal{R} \oplus \mathcal{R}$ by the diagonal, a one-dimensional space isomorphic to $\mathcal{R}$; any pair with $m \neq n$ maps to a nonzero element, and among many we can choose one as a generator.
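As a check of the count (my own linear-algebra sketch, using the maps as written above and assuming the restriction map acts as $i^*(x) = (x, x)$):

import numpy as np

I = np.array([[1.0],
              [1.0]])            # i*: R -> R (+) R, x |-> (x, x)
J = np.array([[-1.0, 1.0],
              [-1.0, 1.0]])      # j*: (m, n) |-> (n - m, n - m)

rank = np.linalg.matrix_rank
assert np.allclose(J @ I, 0)                 # j* o i* = 0
assert rank(I) == 1 and 2 - rank(J) == 1     # im(i*) = ker(j*), both 1-dimensional
print("dim H^1(S^1) = 2 - rank(j*) =", 2 - rank(J))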
