I have a couple of questions on conditional probability:
First of all, how is p(B|A,C) read aloud correctly? Is it "the probability of B, jointly with C", or "B, given A and C jointly" - or do both mean the same thing?
I am looking for numerical examples to play around with p(A,B,C). I want to show myself numerically why exact Bayesian computation only works on simple examples and becomes intractable at some point; ideally with both discrete and continuous examples.
This notation only makes sense in the context of a joint probability distribution (discrete, continuous, or some mixture) of random variables $A,B,C$. It is customary to use uppercase letters for random variables and lowercase letters for the real values they might be assigned, but this does not remove the responsibility of stating exactly what the variables mean in your application.
Given a joint distribution one can define both marginal distributions and conditional distributions. The notation you used, $p(B|A,C)$, denotes the conditional probability of an event determined by the random variable $B$ given knowledge of an event based on the random variables $A$ and $C$.
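To make this concrete, here is a small sketch (the joint table and the indices $a=1$, $c=0$ are made-up for illustration) of how a marginal and a conditional both fall out of the same discrete joint table:

```python
import numpy as np

# Hypothetical joint distribution p(A,B,C) over three binary random
# variables, stored as a 2x2x2 array indexed [a, b, c]; entries sum to 1.
rng = np.random.default_rng(0)
joint = rng.random((2, 2, 2))
joint /= joint.sum()

# Marginal p(B): sum out A and C.
p_B = joint.sum(axis=(0, 2))

# Conditional p(B | A=a, C=c) = p(A=a, B, C=c) / p(A=a, C=c),
# i.e. take a slice of the joint and renormalize it.
a, c = 1, 0
p_B_given_ac = joint[a, :, c] / joint[a, :, c].sum()

print("p(B) =", p_B)
print("p(B | A=%d, C=%d) =" % (a, c), p_B_given_ac)
```

Both results are valid probability distributions over $B$ (each sums to 1), which illustrates that conditioning is just slicing the joint and renormalizing.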
In practice the notation often abbreviates, for the Reader, the details of which events are meant. In the simplest case it expresses the probability that $B=b$ given that $A=a$ and $C=c$. So again a burden is placed on the Author to prepare the Reader's thinking so that the meaning is mathematically clear.
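As for your second question, a quick back-of-the-envelope sketch (pure Python, illustrative numbers only) shows why exact Bayes by brute-force enumeration stops scaling: a joint table over $n$ binary variables has $2^n$ entries, and summing out nuisance variables touches every one of them.

```python
# A joint table over n binary variables needs 2**n entries; storing one
# float64 per entry costs 8 * 2**n bytes. Exact marginalization must
# visit every entry, so both time and memory grow exponentially in n.
for n in (10, 20, 30, 40):
    entries = 2 ** n
    gigabytes = entries * 8 / 1e9
    print(f"n={n:2d}: {entries:>16,d} entries, {gigabytes:12.3f} GB")
```

At $n=30$ the table already needs roughly 8 GB, and at $n=40$ about 8 TB, which is why practical Bayesian inference turns to factored models (e.g. graphical models) or approximations (MCMC, variational methods) instead of the full joint.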