I'm going to check whether $P(T(x)\geq t \mid H_0) = P(H_0\mid T(x)\geq t)$.
As a sidenote, this is a self-study post. I took my math courses a while ago and don't remember exactly how the probability calculus goes, so I kindly ask for help.
Actually, I know these two conditional probabilities are not equal in general (that's why the Bayesian branch of statistics exists), but I'd like to find this out on my own.
Suppose I had $10$ coin flips and got $7$ heads.
Then, according to NHST, I state the null hypothesis $H_0: \theta = 0.5$, where $\theta$ is the probability of obtaining heads.
The alternative hypothesis is $H_1: \theta > 0.5$ (a one-sided test, for ease of calculation). $T(x)$ is my test statistic, the proportion of heads in 10 coin flips; I got $7$ heads, so $T(x)=\frac{7}{10}$.
The p-value is $P(T(x)\geq \frac{7}{10}\mid H_0) = P(T(x)\geq \frac{7}{10}\mid \theta=0.5) \approx 0.17$.
One line of R gives this:
pbinom(6,10,0.5, lower.tail = FALSE)
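As a cross-check, the same tail probability can be summed directly; this is a sketch in Python (rather than the post's R) using only the standard library:

```python
from math import comb

# P(T(x) >= 7/10 | theta = 0.5): upper tail of Binomial(10, 0.5) from 7 heads up.
# Each term choose(10, k) * 0.5^k * 0.5^(10-k) simplifies to choose(10, k) * 0.5^10.
p_value = sum(comb(10, k) * 0.5**10 for k in range(7, 11))
print(p_value)  # 0.171875, which rounds to 0.17
```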
But I also want to check whether the reverse probability, $P(\theta=0.5\mid T(x)\geq\frac{7}{10})$, is similar or equal to $0.17$.
This time I need to use Bayes' theorem and define a prior distribution for $\theta$.
I need to evaluate $P(\theta=0.5\mid T(x)\geq\frac{7}{10})$. Previously, I tried to break the condition down into a summation, but as @Ian pointed out in his answer, conditional probability is not additive over the condition.
Doing this step-by-step I get: $P(\theta=0.5\mid T(x)\geq 0.7)=\frac{P(\theta=0.5,\ T(x)\geq 0.7)}{P(T(x)\geq 0.7)}=\frac{P(T(x)\geq 0.7\mid\theta=0.5)\cdot P(\theta=0.5)}{P(T(x)\geq 0.7)}$
Here I have to assume some prior distribution for $\theta$, so I chose the discrete uniform distribution on the set $\{0.1, 0.2, \ldots, 1\}$ (just to have some prior).
The numerator: $P(T(x)\geq 0.7,\ \theta=0.5)= P(T(x)=0.7\mid\theta=0.5)\cdot P(\theta=0.5) + P(T(x)=0.8\mid\theta=0.5)\cdot P(\theta=0.5)+P(T(x)=0.9\mid\theta=0.5)\cdot P(\theta=0.5)+P(T(x)=1.0\mid\theta=0.5)\cdot P(\theta=0.5) = {10\choose 7}\cdot 0.5^7\cdot 0.5^3\cdot \frac{1}{10}+{10\choose 8}\cdot 0.5^8\cdot 0.5^2\cdot \frac{1}{10}+ {10\choose 9}\cdot 0.5^9\cdot 0.5^1\cdot \frac{1}{10}+ {10\choose 10}\cdot 0.5^{10}\cdot 0.5^0\cdot \frac{1}{10}\approx 0.017$
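The numerator can be checked the same way; a Python sketch, where `prior` is the uniform weight $\frac{1}{10}$ placed on $\theta=0.5$:

```python
from math import comb

prior = 1 / 10  # P(theta = 0.5) under the discrete uniform prior
# P(T(x) >= 0.7, theta = 0.5) = P(T(x) >= 0.7 | theta = 0.5) * P(theta = 0.5)
numerator = prior * sum(comb(10, k) * 0.5**10 for k in range(7, 11))
print(numerator)  # 0.0171875, about 0.017
```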
For the denominator, I expand over the possible values of $T(x)$:
$P(T(x)\geq0.7)=P(T(x)=0.7)+P(T(x)=0.8)+P(T(x)=0.9)+P(T(x)=1.0)$.
Now I have to calculate each of those terms:
$P(T(x)=0.7)=\sum_{\theta\in\{0.1,0.2,\ldots,1\}} P(T(x)=0.7\mid \theta)\cdot f(\theta)= \sum_{\theta\in\{0.1,0.2,\ldots,1\}}{10\choose 7}\cdot\theta^7\cdot(1-\theta)^3\cdot\frac{1}{10} = \frac{1}{10}{10\choose 7}\sum_{\theta\in\{0.1,0.2,\ldots,1\}}\theta^7\cdot(1-\theta)^3$
It's far faster to do this in R:
theta <- seq(0.1, 1, by = 0.1)
sum(0.1 * choose(10, 7) * theta^7 * (1 - theta)^3)
The result is $P(T(x)=0.7)\approx 0.091$. Doing the same for $P(T(x)=0.8)$, $P(T(x)=0.9)$, and $P(T(x)=1.0)$, I get $0.0906$, $0.0828$, and $0.149$, respectively. These four terms then add up to $P(T(x)\geq 0.7)\approx 0.414$.
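All four marginal terms can be computed in one go; again a Python sketch under the same discrete uniform prior:

```python
from math import comb

thetas = [i / 10 for i in range(1, 11)]  # the prior's support {0.1, 0.2, ..., 1.0}
prior = 1 / 10

def marginal(k):
    # P(T(x) = k/10) = sum over theta of P(T(x) = k/10 | theta) * P(theta)
    return sum(prior * comb(10, k) * t**k * (1 - t)**(10 - k) for t in thetas)

denominator = sum(marginal(k) for k in range(7, 11))
print(round(denominator, 3))  # about 0.414
```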
Finally, the result is $P(\theta=0.5\mid T(x)\geq 0.7)= \frac{P(T(x)\geq 0.7\mid\theta=0.5)\cdot P(\theta=0.5)}{P(T(x)\geq 0.7)} = \frac{0.017}{0.414}\approx 0.041\not=0.17$.
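Putting the pieces together end-to-end (a Python sketch under the same uniform-prior assumption):

```python
from math import comb

thetas = [i / 10 for i in range(1, 11)]  # prior support {0.1, ..., 1.0}
prior = 1 / 10

def tail(theta):
    # P(T(x) >= 0.7 | theta): upper tail of Binomial(10, theta) from 7 heads up
    return sum(comb(10, k) * theta**k * (1 - theta)**(10 - k) for k in range(7, 11))

# Bayes' theorem with the law of total probability in the denominator
posterior = prior * tail(0.5) / sum(prior * tail(t) for t in thetas)
print(round(posterior, 3))
```

With unrounded intermediate values this prints roughly $0.042$; the small difference from the hand-computed $0.041$ comes from rounding the numerator and denominator before dividing. Either way, it is clearly not $0.17$.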
This shows that the 'reversed' conditional probabilities, the p-value and its Bayesian reverse, are not equal. I'm really proud of the result, but one thing still bothers me:
[Q] Is this all correct? Especially in terms of probability calculus and all conditional probabilities.