Saturday, October 20, 2018

Linear Discriminant Analysis - 1

Classification can be framed as computing the class posterior probabilities \(Pr(G|X)\) - that is, the probability that the class is \(G\) given the input \(X\). If \(f_k(x)\) is the class-conditional density of \(X\) in class \(G=k\) and \(\pi_k\) is the prior probability of class \(k\), then Bayes' theorem gives \[\begin{equation} Pr(G=k|X=x) = \frac{f_k(x)\pi_k}{\sum_{l=1}^K f_l(x)\pi_l} \end{equation}\]
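As a minimal numerical sketch of this formula (the function and variable names below, such as `class_posteriors`, are made up for illustration), the posterior is just each class density weighted by its prior and renormalised:

```python
import numpy as np

def class_posteriors(densities, priors):
    """Bayes' theorem: turn class-conditional densities f_k(x) and priors pi_k
    into posterior probabilities Pr(G=k | X=x)."""
    joint = np.asarray(densities) * np.asarray(priors)  # f_k(x) * pi_k
    return joint / joint.sum()                          # normalise over all classes

# toy example: three classes, densities f_k(x) evaluated at a single point x
f_x = [0.05, 0.20, 0.10]
pi  = [0.30, 0.50, 0.20]
print(class_posteriors(f_x, pi))  # sums to 1
```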
Depending on the model used for the class densities, different techniques emerge (a small comparison follows this list):
* Linear and quadratic discriminant analysis use Gaussian densities.
* Mixtures of Gaussians lead to non-linear decision boundaries.
* Naive Bayes models assume that each class density is a product of its marginal densities.
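For a concrete comparison, here is a small sketch fitting all three model families on made-up two-class data, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# two Gaussian classes in 2-D with different spreads (toy data)
X = np.vstack([rng.normal([0, 0], [1.0, 1.0], size=(100, 2)),
               rng.normal([2, 2], [1.5, 0.5], size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

for model in (LinearDiscriminantAnalysis(),     # shared covariance -> linear boundary
              QuadraticDiscriminantAnalysis(),  # per-class covariance -> quadratic boundary
              GaussianNB()):                    # independent (diagonal) per-class densities
    print(type(model).__name__, model.fit(X, y).score(X, y))
```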
Derivations:
Suppose that each class density is modelled as a multivariate Gaussian: \[\begin{equation} f_k(x)=\frac{1}{(2\pi)^{p/2} |\Sigma_k|^{1/2}} e^{-\frac{1}{2}(x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k)} \end{equation}\] From Bayes' theorem, \[\begin{equation} Pr(G=k|X=x) = \frac{f_k(x)\pi_k}{\sum_{m=1}^K f_m(x)\pi_m} \\ Pr(G=l|X=x) = \frac{f_l(x)\pi_l}{\sum_{m=1}^K f_m(x)\pi_m} \\ \frac{Pr(G=k|X=x)}{Pr(G=l|X=x)} = \frac{f_k(x)\pi_k}{f_l(x)\pi_l} \\ log\left(\frac{Pr(G=k|X=x)}{Pr(G=l|X=x)} \right) = log\frac{f_k(x)}{f_l(x)}+log\frac{\pi_k}{\pi_l} \end{equation}\] The ratio \(\frac{f_k(x)}{f_l(x)}\) can now be simplified. Note that the covariance matrix \(\Sigma\) is assumed to be the same for all classes, so the normalising constants cancel: \[\begin{equation} log\left(\frac{f_k(x)}{f_l(x)}\right) = -\frac{1}{2}(x-\mu_k)^T\Sigma^{-1}(x-\mu_k)+\frac{1}{2}(x-\mu_l)^T\Sigma^{-1}(x-\mu_l) \end{equation}\] Expanding the quadratic forms, \[\begin{equation} = -\frac{1}{2} [x^T\Sigma^{-1}x-x^T\Sigma^{-1}\mu_k-\mu_k^T\Sigma^{-1}x+\mu_k^T\Sigma^{-1}\mu_k - x^T\Sigma^{-1}x + x^T\Sigma^{-1}\mu_l + \mu_l^T \Sigma^{-1} x - \mu_l^T\Sigma^{-1}\mu_l ] \end{equation}\] and distributing the minus sign, \[\begin{equation} = \frac{1}{2} [-x^T\Sigma^{-1}x + x^T\Sigma^{-1}\mu_k + \mu_k^T\Sigma^{-1}x -\mu_k^T\Sigma^{-1}\mu_k+x^T\Sigma^{-1}x-x^T\Sigma^{-1}\mu_l-\mu_l^T\Sigma^{-1}x + \mu_l^T\Sigma^{-1}\mu_l ] \end{equation}\]
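As a quick numerical check of this log density ratio before continuing (the parameter values below are made up; scipy is assumed to be available), the normalising constants indeed cancel when both classes share \(\Sigma\):

```python
import numpy as np
from scipy.stats import multivariate_normal

# made-up parameters: two classes sharing the same covariance Sigma
mu_k, mu_l = np.array([1.0, 0.0]), np.array([-1.0, 0.5])
Sigma      = np.array([[2.0, 0.3], [0.3, 1.0]])
x          = np.array([0.2, -0.1])

f_k = multivariate_normal(mu_k, Sigma).pdf(x)
f_l = multivariate_normal(mu_l, Sigma).pdf(x)

# with a shared Sigma, log(f_k/f_l) reduces to the difference of quadratic forms
Sigma_inv = np.linalg.inv(Sigma)
quad = (-0.5 * (x - mu_k) @ Sigma_inv @ (x - mu_k)
        + 0.5 * (x - mu_l) @ Sigma_inv @ (x - mu_l))
print(np.isclose(np.log(f_k / f_l), quad))  # True
```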
Before simplifying further, we should note the following (a small numerical check follows this list):
  • Since \(\Sigma\) is a covariance matrix, the covariance between \(i\) and \(j\) is the same as between \(j\) and \(i\), so \(\Sigma\) is symmetric: \(\Sigma^T=\Sigma\), and consequently \((\Sigma^{-1})^T=\Sigma^{-1}\).
  • The mean vectors \(\mu_k\) for class \(k\) and \(\mu_l\) for class \(l\) are \(p \times 1\) matrices (column vectors).
  • Terms such as \(\mu_k^T \Sigma^{-1}\mu_l\) have dimensions \((1\times p)(p \times p)(p \times 1)=1\times 1\), i.e., they are scalars (real numbers).
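Here is the small numerical check referred to above, with made-up values, confirming that \(\Sigma^{-1}\) is symmetric and that \(\mu_k^T\Sigma^{-1}\mu_l\) is a scalar:

```python
import numpy as np

Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])  # made-up symmetric covariance matrix
Sigma_inv = np.linalg.inv(Sigma)
mu_k, mu_l = np.array([1.0, 0.0]), np.array([-1.0, 0.5])

print(np.allclose(Sigma_inv, Sigma_inv.T))   # True: inverse of a symmetric matrix is symmetric
print(np.shape(mu_k @ Sigma_inv @ mu_l))     # (): a 0-dimensional result, i.e. a scalar
```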
The second and sixth terms, each with a leading \(x^T\), combine to \[\begin{equation} \frac{1}{2}x^T\Sigma^{-1}(\mu_k-\mu_l) \end{equation}\] Collecting the third and seventh terms, each with a trailing \(x\), and using the fact that a scalar equals its own transpose together with the symmetry of \(\Sigma^{-1}\), \[\begin{equation} \frac{1}{2}(\mu_k^T - \mu_l^T)\Sigma^{-1}x = \left[\frac{1}{2}(\mu_k^T - \mu_l^T)\Sigma^{-1}x\right]^T = \frac{1}{2}x^T\Sigma^{-1}(\mu_k-\mu_l) \end{equation}\] The first and fifth terms cancel, so the remaining fourth and eighth terms are \[\begin{equation*} -\frac{1}{2}\mu_k^T\Sigma^{-1}\mu_k+\frac{1}{2}\mu_l^T\Sigma^{-1}\mu_l = -\frac{1}{2}(\mu_k+\mu_l)^T\Sigma^{-1}(\mu_k-\mu_l) \end{equation*}\] where the last equality holds because the cross terms \(\mu_k^T\Sigma^{-1}\mu_l\) and \(\mu_l^T\Sigma^{-1}\mu_k\) are equal scalars and cancel. Combining all of these simplifications, \[\begin{equation*} log\left(\frac{Pr(G=k|X=x)}{Pr(G=l|X=x)}\right) = log\left(\frac{\pi_k}{\pi_l}\right) \\ -\frac{1}{2}(\mu_k+\mu_l)^T\Sigma^{-1}(\mu_k-\mu_l)+x^T\Sigma^{-1}(\mu_k-\mu_l) \end{equation*}\] On the decision boundary between classes \(k\) and \(l\), the posterior probabilities are equal, \(Pr(G=k|X=x)=Pr(G=l|X=x)\), so \[\begin{equation*} log(1)=0=log(\pi_k)-log(\pi_l) - \frac{1}{2}\mu_k^T\Sigma^{-1}\mu_k+x^T\Sigma^{-1}\mu_k + \frac{1}{2}\mu_l^T\Sigma^{-1}\mu_l-x^T\Sigma^{-1}\mu_l \end{equation*}\] From here, the linear discriminant function for class \(k\) can be written as \[\begin{equation} \delta_k(x)=log(\pi_k)-\frac{1}{2}\mu_k^T\Sigma^{-1}\mu_k+x^T\Sigma^{-1}\mu_k \end{equation}\] and an observation \(x\) is assigned to the class with the largest \(\delta_k(x)\).
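To make the result concrete, below is a minimal from-scratch sketch of this discriminant (the names `fit_lda` and `discriminants` are made up; parameters are estimated with the usual plug-in class means, class proportions as priors, and a pooled covariance, matching the shared-\(\Sigma\) assumption above):

```python
import numpy as np

def fit_lda(X, y):
    """Plug-in estimates: class priors pi_k, class means mu_k, pooled covariance Sigma."""
    classes = np.unique(y)
    priors  = np.array([np.mean(y == k) for k in classes])
    means   = np.array([X[y == k].mean(axis=0) for k in classes])
    # pooled within-class covariance with the usual N - K denominator
    centered = np.vstack([X[y == k] - means[i] for i, k in enumerate(classes)])
    Sigma    = centered.T @ centered / (len(X) - len(classes))
    return classes, priors, means, np.linalg.inv(Sigma)

def discriminants(X, priors, means, Sigma_inv):
    """delta_k(x) = log(pi_k) - 1/2 mu_k^T Sigma^{-1} mu_k + x^T Sigma^{-1} mu_k."""
    const = np.log(priors) - 0.5 * np.einsum('kp,pq,kq->k', means, Sigma_inv, means)
    return X @ Sigma_inv @ means.T + const  # shape (n_samples, n_classes)

# toy data: two Gaussian classes sharing the identity covariance
rng = np.random.default_rng(1)
X = np.vstack([rng.multivariate_normal([0, 0], np.eye(2), 200),
               rng.multivariate_normal([2, 1], np.eye(2), 200)])
y = np.repeat([0, 1], 200)

classes, priors, means, Sigma_inv = fit_lda(X, y)
pred = classes[np.argmax(discriminants(X, priors, means, Sigma_inv), axis=1)]
print("training accuracy:", np.mean(pred == y))
```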
