Thursday, July 19, 2018

R&C Analysis-Integration of Complex Functions

Let $\mu$ be a positive measure on an arbitrary measurable space $X$. Let $L^{1}(\mu)$ be the collection of all complex measurable functions $f$ on $X$ for which \begin{equation} \int_X |f| d\mu < \infty. \end{equation} This is called the space of ``Lebesgue integrable functions''. Since $f$ measurable implies $|f|$ measurable, the integral above is defined. For the next definition, recall that a real function $u$ can be split into a positive part $u^+=\max\{u,0\}$ and a negative part $u^-=-\min\{u,0\}$, so that $u = u^+ - u^-$ and $|u| = u^+ + u^-$. For example, for $u(x)=x$ on $\mathbb{R}$, $u^+$ vanishes on $(-\infty,0)$ and $u^-$ vanishes on $(0,\infty)$.
Definition. If $f=u+iv$ for real measurable functions $u,v$ on $X$ and if $f \in L^{1}(\mu)$, we define \begin{equation} \int_E f d\mu = \int_E u^+ d\mu - \int_E u^- d\mu + i \int_E v^+ d\mu - i \int_E v^- d\mu \end{equation} for every measurable set $E$. The four functions $u^+,u^-,v^+,v^-$ are measurable, so each integral on the right exists. Furthermore, each of the four integrands is bounded by $|f|$ (for instance $u^+ \leq |u| \leq |f|$), hence each of the four integrals is finite. Clearly, then, $\int_E fd\mu$ is a complex number.

Occasionally, for a real measurable $f$ with range in $[-\infty,\infty]$, it is desirable to define \begin{equation} \int_E f d\mu = \int_E f^+ d\mu - \int_E f^- d\mu \end{equation} provided at least one of the integrals on the right is finite. The left-hand side is then a number in $[-\infty,\infty]$.

Theorem 1.32. Suppose $f,g \in L^1(\mu)$ and $\alpha,\beta$ are complex numbers. Then $\alpha f+\beta g \in L^1(\mu)$ and \begin{equation} \int_X (\alpha f + \beta g) d\mu = \alpha \int_X f d\mu +\beta \int_X g d\mu \end{equation}

Proof. First we establish that $\alpha f+\beta g$ is measurable, and then that its integral of absolute value is finite, so that it belongs to $L^1(\mu)$. If $f,g$ are complex measurable functions, then $f+g$ and $fg$ are measurable. The constant function $\alpha$ is measurable, so the product $\alpha f$ is measurable; similarly $\beta g$ is measurable. The sum of measurable functions $\alpha f + \beta g$ is therefore measurable. Recall also the monotonicity of the integral: if $0 \leq f \leq g$, then $\int_E f d\mu \leq \int_E g d\mu$. By the triangle inequality, \begin{equation} |\alpha f + \beta g | \leq |\alpha||f| + |\beta||g|, \end{equation} so monotonicity gives \begin{equation} \int_X |\alpha f + \beta g |d\mu \leq \int_X |\alpha||f| d\mu + \int_X |\beta||g| d\mu =|\alpha|\int_X|f| d\mu + |\beta| \int_X |g| d\mu < \infty. \end{equation} Thus $\alpha f + \beta g \in L^1(\mu)$.

To prove $(4)$ it suffices to establish \begin{equation} \int_X (f+g)d\mu = \int_X f d\mu + \int_X g d\mu \end{equation} and \begin{equation} \int_X \alpha f d\mu = \alpha \int_X f d\mu. \end{equation} For real $f,g$, set $h=f+g$. Then \begin{equation} h^+ - h^- = f^+ - f^- + g^+ - g^-, \text{ which implies } h^++f^-+g^-=f^++g^++h^-. \end{equation} From Theorem $1.27$ we know that if $f(x)=\sum_{n=1}^\infty f_n(x)$ with each $f_n \geq 0$ measurable, then $\int_X f d\mu = \sum_{n=1}^\infty \int_X f_n d\mu$. Applying this to the (finite) sums of nonnegative functions above yields \begin{equation} \int h^++\int f^- + \int g^- = \int f^+ + \int g^+ + \int h^-. \end{equation} Since each of these integrals is finite, we can rearrange terms as we like: \begin{equation} \int h^+-\int h^- = \int(f^+- f^-) + \int (g^+- g^-), \end{equation} leading to $\int_X(f+g) d\mu = \int_X f d\mu + \int_X g d\mu$. The complex case follows by applying this to the real and imaginary parts separately.

To establish equation (8), the following was already proved earlier: \begin{equation} \int_X (\alpha f) d\mu = \alpha \int_X f d\mu \text{ when } \alpha \geq 0. \end{equation} It remains to show that equation (8) holds for $\alpha = -1$ and $\alpha=i$; a general complex $\alpha$ can be written in terms of nonnegative reals, $-1$, and $i$, so these cases together with additivity suffice.

$\alpha=-1$ case: Notice that \begin{equation} (-u)^+ = \max\{-u,0\} = u^- \text{ and, likewise, } (-u)^- = u^+, \end{equation} so $\int_X (-u) d\mu = -\int_X u d\mu$ by the definition of the integral, and the same holds for $v$. This means \begin{equation} \int_X (-1) f d\mu = \int_X (-1)(u+iv) d\mu = \int_X(-u-iv)d\mu = (-1)\int_X f d\mu. \end{equation}

$\alpha=i$ case: \begin{equation} \int (if) = \int i(u+iv) = \int (iu-v) = -\int v + i\int u = i\int (u+iv)=i\int f. \end{equation}

Combining the cases $\alpha \geq 0$, $\alpha = -1$, and $\alpha = i$, this shows \begin{equation} \int_X \alpha f d\mu = \alpha \int_X f d\mu \end{equation} for every complex $\alpha$.
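As a quick numerical sanity check of Theorem 1.32 (a minimal sketch of my own, not from the text): on a finite measure space the integral reduces to a $\mu$-weighted sum, so linearity can be verified directly. All names below are mine.

```python
import numpy as np

# Finite measure space X = {0, 1, 2, 3}: mu[k] is the mass of the point k.
mu = np.array([0.5, 1.0, 2.0, 0.25])

# Two complex measurable functions, given by their values on X.
f = np.array([1 + 2j, -3j, 0.5, 2 - 1j])
g = np.array([2 + 0j, 1 + 1j, -1, 0.5j])

def integral(h):
    """On a finite measure space the integral is a mu-weighted sum."""
    return np.sum(h * mu)

alpha, beta = 2 - 3j, 0.5 + 1j

lhs = integral(alpha * f + beta * g)
rhs = alpha * integral(f) + beta * integral(g)
assert np.isclose(lhs, rhs)  # linearity of the integral holds
```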

Wednesday, July 18, 2018

Machine Learning-Kernel Smoothing Methods

Chapter 5: Kernel Smoothing Methods
The setup is as follows:
You have input data collected into a matrix $X$. The dimensions of this matrix are $N \times p$, where $N$ is the number of rows; each row corresponds to one sample (input) from your experiment, and $p$ is the number of features.
Your outputs are collected into a vector $y$, which is typically $N \times 1$.
Typical linear regression is expressed as $y = X\beta$, where the $\beta$ are called the coefficients of the regression.
For example, you might have the function $y = \beta_0 + \beta_1 x$.
These functions are linear in $\beta$. For example, $y = \beta_0 + \beta_1 x + \beta_2 x^2$ is still a linear model: it is linear in the coefficients even though it is quadratic in $x$.
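To make the "linear in $\beta$" point concrete, here is a minimal sketch (variable names and data are my own): the quadratic model is still fit by ordinary least squares, because the design matrix absorbs the nonlinearity in $x$.

```python
import numpy as np

# y = b0 + b1*x + b2*x^2 is linear in the coefficients beta,
# so ordinary least squares on the design matrix [1, x, x^2] fits it.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, 50)

X = np.column_stack([np.ones_like(x), x, x**2])  # N x p design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # roughly [1, 2, -3]
```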
If the output is a real value, the problem is called regression. If the output is a categorical variable (e.g., Obese/NonObese), we use logistic regression and its variations.
The key term here is "localization". The idea is to fit a simple model at each query point and from there infer an overall function $f(X)$.
This localization is achieved using "kernels". The basic setup and terminology: a kernel is $K_\lambda(x_0, x)$, where $x_0$ is the query point and $x$ is an arbitrary point, and $\lambda$ controls the size of the neighborhood.
In these models $\lambda$ is a parameter.
The simplest way to understand kernels is in one dimension.
One simple approach is to select a neighborhood of size $\lambda$ and average the outputs $y_i$ of all points within distance $\lambda$ of the query point $x_0$ (a $\lambda$-ball). Another is to fit a linear function within each neighborhood. Either way you lose continuity, since the fit jumps as points enter and leave the neighborhood. This lack of continuity is resolved by the Nadaraya-Watson kernel-weighted average,
\begin{equation} \hat{f}(x_0) = \frac{\sum_{i=1}^N K_\lambda(x_0, x_i)\, y_i}{\sum_{i=1}^N K_\lambda(x_0, x_i)} \end{equation}
with, for example, the Epanechnikov kernel:
\begin{equation} K_\lambda(x_0, x) = D\left(\frac{|x - x_0|}{\lambda}\right), \qquad D(t) = \begin{cases} \frac{3}{4}(1 - t^2) & \text{if } |t| \leq 1 \\ 0 & \text{otherwise.} \end{cases} \end{equation}
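A short runnable sketch of the Nadaraya-Watson estimate with the Epanechnikov kernel (my own minimal implementation, with made-up data):

```python
import numpy as np

def epanechnikov(t):
    """D(t) = 3/4 (1 - t^2) for |t| <= 1, and 0 otherwise."""
    return np.where(np.abs(t) <= 1, 0.75 * (1 - t**2), 0.0)

def nadaraya_watson(x0, x, y, lam):
    """Kernel-weighted average of the y_i around the query point x0."""
    w = epanechnikov((x - x0) / lam)
    return np.sum(w * y) / np.sum(w)  # assumes some x_i fall within lam of x0

# Noisy samples from a smooth function.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(4 * x) + rng.normal(0, 0.3, 100)

grid = np.linspace(0, 1, 50)
fhat = np.array([nadaraya_watson(x0, x, y, lam=0.2) for x0 in grid])
```

Unlike the hard $\lambda$-ball average, the fitted curve here varies smoothly, because the weights decay continuously to zero at the edge of the neighborhood.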
These algorithms have issues at boundaries:
near a boundary, the local neighborhood of points is asymmetric, so the fitted curve can take off in a direction different from the one the overall function takes.
This leads to poor predictions at the boundary, and we need to tackle it.
As a start, we tackle this with locally weighted linear regression: a linear fit is computed in each neighborhood, but it is evaluated only at the query point $x_0$ at the boundary. Solve
\begin{equation} \min_{\alpha(x_0),\, \beta(x_0)} \sum_{i=1}^N K_\lambda(x_0, x_i)\left[y_i - \alpha(x_0) - \beta(x_0) x_i\right]^2. \end{equation}
Then your estimate at the boundary point is simply $\hat{f}(x_0) = \hat{\alpha}(x_0) + \hat{\beta}(x_0)\, x_0$.
Define $b(x)^T = (1, x)$, let $B$ be the $N \times 2$ regression matrix whose $i$-th row is $b(x_i)^T$, and let $W(x_0)$ be the $N \times N$ diagonal weight matrix with $i$-th diagonal element $K_\lambda(x_0, x_i)$. Then
\begin{equation} \hat{f}(x_0) = b(x_0)^T \left(B^T W(x_0) B\right)^{-1} B^T W(x_0)\, y = \sum_{i=1}^N l_i(x_0)\, y_i, \end{equation}
where the $l_i(x_0)$ are weights that do not involve $y$; the estimate is linear in the observations $y_i$.
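Continuing the sketch above (this reuses `epanechnikov`, `x`, `y`, and `grid` from the previous block), local linear regression solves the weighted least squares problem at each query point and evaluates the fit only at $x_0$:

```python
def local_linear(x0, x, y, lam):
    """Weighted least squares fit of (alpha, beta) at x0, evaluated at x0."""
    B = np.column_stack([np.ones_like(x), x])  # i-th row is b(x_i)^T = (1, x_i)
    w = epanechnikov((x - x0) / lam)           # diagonal of W(x0)
    BtW = B.T * w                              # B^T W(x0), without forming the N x N matrix
    theta = np.linalg.solve(BtW @ B, BtW @ y)  # (alpha(x0), beta(x0))
    return np.array([1.0, x0]) @ theta         # b(x0)^T theta

fhat_ll = np.array([local_linear(x0, x, y, lam=0.2) for x0 in grid])
```

Near the boundaries, this first-order fit corrects the bias that the plain kernel average suffers from in asymmetric neighborhoods.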

R&C Analysis-Th 1.29

Suppose \(f:X \rightarrow [0,\infty]\) is measurable, and \[\begin{equation} \varphi(E) = \int_E f d\mu \text{ } [E \in \mathscr{R}]. \end{equation}\] Then \(\varphi\) is a measure on \(\mathscr{R}\), and \[\begin{equation} \int_X g d\varphi = \int_X gf d\mu \end{equation}\] for every measurable function \(g\) on \(X\) with range in \([0,\infty]\).

In this setup, as we tour through the different sets of the \(\sigma\)-algebra \(\mathscr{R}\), the integral of a measurable function generates a measure.

To prove that \(\varphi\) is a measure, we need to show countable additivity, and that \(\varphi(E) < \infty\) for at least one \(E \in \mathscr{R}\).

Let \(E_1,E_2,\cdots\) be disjoint members of \(\mathscr{R}\) whose union is \(E\). Then \[\begin{equation} \chi_E f = \sum_{i=1}^\infty \chi_{E_i}f \end{equation}\] and \[\begin{equation} \varphi(E) = \int_X \chi_E f d\mu, \qquad \varphi(E_j) = \int_X\chi_{E_j}f d\mu. \end{equation}\] Then \[\begin{equation} \sum_{i=1}^\infty \varphi(E_i) = \sum_{i=1}^\infty \int_X \chi_{E_i} f d\mu. \end{equation}\] Using the previous theorem on summation of integrals (Theorem 1.27), \[\begin{align} \sum_{i=1}^\infty \int_X \chi_{E_i} f d\mu = \int_X\sum_{i=1}^\infty\chi_{E_i} f d\mu \\ = \int_X \chi_E f d\mu \\ = \varphi(E). \end{align}\] Thus \(\sum_{i=1}^\infty \varphi(E_i) = \varphi(E)\), establishing countable additivity. Since \(\emptyset \in \mathscr{R}\) and \(\varphi(\emptyset)=0 < \infty\), the requirement that at least one set of the \(\sigma\)-algebra have finite measure is satisfied. This shows \(\varphi\) is a measure.

Now consider equation (2). For \(g = \chi_E\) with \(E \in \mathscr{R}\), equation (2) is exactly the definition of \(\varphi\): \[\begin{equation*} \int_X \chi_E d\varphi = \varphi(E) = \int_E f d\mu = \int_X \chi_E f d\mu. \end{equation*}\] By linearity, (2) then holds for every simple measurable function \(h = \sum_{i=1}^n c_i \chi_{E_i}\): \[\begin{equation*} \int_X h d\varphi = \int_X hf d\mu. \end{equation*}\] For a general \(g\), choose (by the simple-function approximation theorem) simple measurable functions \(g_n\) with \[\begin{equation*} 0 \leq g_1(x) \leq g_2(x) \leq \cdots \leq g(x) \end{equation*}\] and \(\lim_{n \rightarrow \infty} g_n(x) = g(x)\) pointwise. For each \(n\) the following is true: \[\begin{equation} \int_X g_n d\varphi = \int_X fg_n d\mu. \end{equation}\]

By the monotone convergence theorem, the left-hand side satisfies \[\begin{equation} \lim_{n \rightarrow \infty} \int_X g_n d\varphi = \int_X g d\varphi. \end{equation}\] Since \(f\) is nonnegative, \[\begin{equation} 0 \leq g_1(x)f(x) \leq g_2(x)f(x) \leq \cdots \leq g(x)f(x), \end{equation}\] so monotone convergence also gives \(\int_X g_nf d\mu \rightarrow \int_X gf d\mu\).

Hence, the equation \[\begin{equation} \int_X g d\varphi = \int_X gf d\mu \end{equation}\] holds.
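As a concrete instance of the theorem (my own example, not from the text): take \(X = [0,\infty)\) with Lebesgue measure \(\mu\) and \(f(x) = e^{-x}\), so that \(d\varphi = e^{-x}dx\). Then for \(g(x) = x\), \[\begin{equation*} \int_X g \, d\varphi = \int_0^\infty x e^{-x} dx = 1 = \int_X gf \, d\mu. \end{equation*}\]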


Weak formulation of boundary value PDE and its meaning

Energy functional: An energy functional is a mapping from a function space (often a Sobolev space) to the real numbers, which assigns a "...