Suppose you have a $D$-dimensional data vector $x = (x_1, \dots, x_D)$ and associated class variable $y \in \{0, 1\}$, which is Bernoulli with parameter $\alpha$. Assume the dimensions of $x$ are conditionally independent given $y$, and that the conditional likelihood of each $x_i$ is Gaussian with $\mu_{i0}$ and $\mu_{i1}$ as the means of the two classes and $\sigma_i$ as their shared standard deviation.
Use Bayes' rule to show that $p(y=1|x)$ takes the form of the logistic function.
i.e.
$ p(y=1|x) = \frac{1}{1 + \exp\left\{-\sum_{i=1}^{D} w_i x_i - b\right\}} $
where $w = (w_1, \dots, w_D)$ is the weight vector and $b$ is the bias.
I started with
$ p(y=1|x) = \frac{p(x | y = 1)p(y=1)}{p(x)}$ by Bayes' rule
$=\frac{\alpha p(x | y = 1)} {\alpha p(x | y = 1) + (1 - \alpha) p(x | y = 0)}$ by the law of total probability
And here is where I got stuck. The dimensions of $x$ are conditionally independent given $y$, so can I somehow split $p(x | y = k)$ by dimensions?
Well, since you know that, given $\{y=k\}$ (for $k=0,1$), the components $x_1, \dots, x_D$ are independent, you have
$$ P(x\mid y=k) = \prod_{j=1}^D P(x_j\mid y=k). $$
Now, simply use the fact that $P(x_j\mid y=k)$ is a Gaussian density with the mean of the corresponding class and the shared variance $\sigma_j^2$, namely,
$$ P(x_j\mid y=k) = \frac{1}{\sqrt{2\pi\sigma_j^2}}\exp\left(-\frac{1}{2\sigma_j^2}(x_j-\mu_{j,k})^2\right). $$
By the way, you may need the following simple "observation":
\begin{align} p(y=1\mid x) &=\frac{\alpha p(x \mid y = 1)} {\alpha p(x \mid y = 1) + (1 - \alpha) p(x \mid y = 0)}\\ &=\frac{1}{1+\frac{1-\alpha}{\alpha}\,\frac{p(x \mid y = 0)}{p(x \mid y = 1)}}. \end{align}
Writing the second factor in the denominator as $\exp\left(\log\frac{p(x\mid y=0)}{p(x\mid y=1)}\right)$ and expanding the Gaussian log-ratio makes the quadratic terms in $x_j$ cancel (the variances are shared), leaving an exponent that is linear in $x$, i.e. of the form $-\sum_j w_j x_j - b$.
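As a sanity check on the algebra, here is a short numeric sketch (with arbitrary illustrative values for $\alpha$, the means, and the standard deviations). Expanding the log-ratio gives $w_j = (\mu_{j,1}-\mu_{j,0})/\sigma_j^2$ and $b = \log\frac{\alpha}{1-\alpha} + \sum_j \frac{\mu_{j,0}^2-\mu_{j,1}^2}{2\sigma_j^2}$, and the logistic form should match the posterior computed directly from Bayes' rule:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
alpha = 0.3                       # p(y = 1), arbitrary test value
mu0 = rng.normal(size=D)          # class-0 means (illustrative)
mu1 = rng.normal(size=D)          # class-1 means (illustrative)
sigma = rng.uniform(0.5, 2.0, D)  # shared per-dimension std devs
x = rng.normal(size=D)            # an arbitrary test point

def gauss(x, mu, s):
    """Univariate Gaussian density, applied elementwise."""
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (np.sqrt(2 * np.pi) * s)

# Posterior directly from Bayes' rule, using conditional independence
num = alpha * np.prod(gauss(x, mu1, sigma))
den = num + (1 - alpha) * np.prod(gauss(x, mu0, sigma))
posterior = num / den

# Logistic form with the weights and bias from the log-ratio expansion
w = (mu1 - mu0) / sigma ** 2
b = np.log(alpha / (1 - alpha)) + np.sum((mu0 ** 2 - mu1 ** 2) / (2 * sigma ** 2))
logistic = 1.0 / (1.0 + np.exp(-(w @ x + b)))

print(np.isclose(posterior, logistic))  # True
```

The two numbers agree to floating-point precision, which is exactly the claim: the Gaussian naive-Bayes posterior is a logistic function of a linear score in $x$.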