Proving a discriminant function. Finding the classification boundary.


By Bayes' Theorem, the posterior probability $P(t|x)$ for the class $t$ is given by:

$P(t|x) = \frac{P(x|t)P(t)}{P(x|-1)P(-1)+P(x|+1)P(+1)}$

where the priors $P(t)$ satisfy $P(-1)+P(+1)=1$. If we model the distribution of the features within class $t$ by a Gaussian distribution, show that:

The function $y:\mathbb{R}^d \to \{-1,0,1\}$ defined as:

$y(x) = \operatorname{sgn}\left( \log P(x|+1) - \log P(x|-1) + \log \frac{P(+1)}{1-P(+1)} \right)$

where $\operatorname{sgn}(x) = 1$ if $x > 0$, $-1$ if $x < 0$, and $0$ otherwise, is a discriminant function.

Also, is the classification boundary determined by a quadratic form? And show that if $\Sigma_{-1}=\Sigma_{+1}=\Sigma$, then the classification boundary is given by $w^T x + w_0 = 0$.
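The claim can be checked numerically. Below is a minimal 1-D sketch (all parameter values are made up for illustration): the sign of the discriminant agrees with picking the class of larger posterior, and with equal variances the boundary reduces to a single linear threshold.

```python
import math

# Hypothetical 1-D Gaussian class-conditionals P(x|t); all numbers made up.
mu = {+1: 2.0, -1: -1.0}
sigma = {+1: 1.0, -1: 1.0}   # equal variances -> linear (affine) boundary
prior = {+1: 0.3, -1: 0.7}

def gauss_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def y(x):
    """Discriminant: sgn(log P(x|+1) - log P(x|-1) + log(P(+1)/(1-P(+1))))."""
    a = (math.log(gauss_pdf(x, mu[+1], sigma[+1]))
         - math.log(gauss_pdf(x, mu[-1], sigma[-1]))
         + math.log(prior[+1] / (1 - prior[+1])))
    return (a > 0) - (a < 0)   # sgn

def bayes_class(x):
    """Classify by the larger posterior; the denominator P(x) cancels."""
    joint = {t: gauss_pdf(x, mu[t], sigma[t]) * prior[t] for t in (+1, -1)}
    return +1 if joint[+1] > joint[-1] else -1

# The discriminant agrees with the Bayes decision at these off-boundary points:
for x in [-3.0, -1.0, 0.0, 0.8, 2.0, 4.0]:
    assert y(x) == bayes_class(x)
```

With $\sigma_{+1} = \sigma_{-1}$ the quadratic terms in $x$ cancel inside the log-ratio, which is why the boundary becomes a single affine equation.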

Yes, that is correct. Bayes' Theorem states that the posterior probability of an event t occurring given some observed data x is given by:

$P(t|x) = \frac{P(x|t)\,P(t)}{P(x)}$

where P(x|t) is the probability of observing the data x given that the event t has occurred, P(t) is the prior probability of the event t occurring, and P(x) is the probability of observing the data x for all possible events.

In the expression you provided, the numerator $P(x|t)P(t)$ is the joint probability of observing $x$ together with class $t$, and the denominator $P(x|-1)P(-1)+P(x|+1)P(+1)$ is the marginal $P(x)$: by the law of total probability, summing the joint probability over the two classes $-1$ and $+1$ gives the probability of observing $x$ at all.

Therefore, the expression you provided represents the posterior probability of the event t occurring given the observed data x, using Bayes' Theorem.
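A tiny numeric check (with made-up likelihood and prior values) that the denominator really is the marginal $P(x)$, so that the two posteriors sum to 1:

```python
# Hypothetical values of P(x|t) at one fixed x, and priors; all made up.
likelihood = {+1: 0.4, -1: 0.1}
prior = {+1: 0.25, -1: 0.75}

# Law of total probability: P(x) = P(x|-1)P(-1) + P(x|+1)P(+1)
p_x = sum(likelihood[t] * prior[t] for t in (+1, -1))

# Bayes' Theorem for each class; the shared denominator normalizes them.
posterior = {t: likelihood[t] * prior[t] / p_x for t in (+1, -1)}

assert abs(sum(posterior.values()) - 1.0) < 1e-12
```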

However, we can go further:

Take logarithms of the two posteriors:

$y_+(x) = \ln P(+1|x) = \ln[P(x|+1)P(+1)] - \ln[P(x|+1)P(+1) + P(x|-1)P(-1)]$

$y_-(x) = \ln P(-1|x) = \ln[P(x|-1)P(-1)] - \ln[P(x|+1)P(+1) + P(x|-1)P(-1)]$

Both posteriors lie in $(0,1)$, so both logs are negative; but since the logarithm is strictly increasing, comparing the logs preserves the ordering of the posteriors. Define

$y(x) = y_+(x) - y_-(x)$

The shared term $\ln[P(x|+1)P(+1) + P(x|-1)P(-1)]$ cancels in the difference, leaving

$y_+(x) - y_-(x) = \ln P(x|+1) - \ln P(x|-1) + \ln\frac{P(+1)}{1-P(+1)}$

where we used $P(-1) = 1 - P(+1)$. Taking the sign of this quantity gives exactly the discriminant in the question: it is positive when $P(+1|x) > P(-1|x)$, negative when the reverse holds, and zero on the decision boundary.
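The cancellation of the shared normalizer can be verified numerically; the numbers below are hypothetical:

```python
import math

# Made-up values of P(x|t) at one fixed x, plus priors; the point is
# algebraic: the shared term ln[P(x|+1)P(+1) + P(x|-1)P(-1)] cancels
# in the difference y+(x) - y-(x).
lik = {+1: 0.4, -1: 0.1}
prior = {+1: 0.25, -1: 0.75}

norm = math.log(lik[+1] * prior[+1] + lik[-1] * prior[-1])
y_plus = math.log(lik[+1] * prior[+1]) - norm     # ln P(+1|x)
y_minus = math.log(lik[-1] * prior[-1]) - norm    # ln P(-1|x)

# The difference equals the argument of sgn in the discriminant:
log_odds = (math.log(lik[+1]) - math.log(lik[-1])
            + math.log(prior[+1] / (1 - prior[+1])))
assert abs((y_plus - y_minus) - log_odds) < 1e-12
```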