So according to the multinomial distribution, the probability function $\Pr(X_1 = x_1, X_2 = x_2, \dots, X_k = x_k)$ is equal to $\dfrac{n!}{x_1! x_2! \cdots x_k!} \cdot p_1^{x_1}\cdot p_2^{x_2} \cdots p_k^{x_k}$, where $x_1 + x_2 + \cdots + x_k = n$.
(See http://en.wikipedia.org/wiki/Multinomial_distribution).
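For concreteness, the probability function above can be evaluated directly; here is a small Python sketch (the function name `multinomial_pmf` is my own):

```python
from math import factorial, prod

def multinomial_pmf(xs, ps):
    # Pr(X_1=x_1, ..., X_k=x_k) = n!/(x_1! ... x_k!) * p_1^x_1 * ... * p_k^x_k,
    # where n = x_1 + ... + x_k and the p_i sum to 1.
    n = sum(xs)
    coeff = factorial(n)
    for x in xs:
        coeff //= factorial(x)  # multinomial coefficient (exact integer division)
    return coeff * prod(p ** x for p, x in zip(ps, xs))
```

For example, `multinomial_pmf([2, 1, 1], [0.5, 0.25, 0.25])` gives $12 \cdot 0.5^2 \cdot 0.25 \cdot 0.25 = 0.1875$.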
As it stands, this distribution allows one or more of the $x_j$ to equal zero. Is there a way to define the multinomial distribution for only $x_j > 0$?
If so, how?
The problem is how to account for the forbidden $0$'s. One way is to use the ordinary multinomial $(X_1,X_2,\dots,X_k)$ for sample size $n-k$, and let $$\Pr(Y_1=y_1,Y_2=y_2, \dots, Y_k=y_k)=\Pr(X_1=y_1-1, X_2=y_2-1,\dots,X_k=y_k-1),$$ that is, $Y_i=X_i+1$.
This does not look particularly interesting, since it is a simple shift of an ordinary multinomial.
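The shifted construction is easy to state in code; here is a sketch (both function names are my own):

```python
from math import factorial, prod

def multinomial_pmf(xs, ps):
    # Ordinary multinomial probability for counts xs and cell probabilities ps.
    n = sum(xs)
    coeff = factorial(n)
    for x in xs:
        coeff //= factorial(x)
    return coeff * prod(p ** x for p, x in zip(ps, xs))

def shifted_multinomial_pmf(ys, ps, n):
    # Pr(Y_1=y_1, ..., Y_k=y_k) where Y_i = X_i + 1 and
    # (X_1, ..., X_k) is an ordinary multinomial with sample size n - k.
    if any(y < 1 for y in ys) or sum(ys) != n:
        return 0.0  # the shifted distribution puts no mass here
    return multinomial_pmf([y - 1 for y in ys], ps)
```

With $k=2$, $n=3$, and $p_1=p_2=\tfrac12$, the outcome $(2,1)$ corresponds to the ordinary-multinomial outcome $(1,0)$ on $n-k=1$ trial, so its probability is $\tfrac12$.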
Added: Another way of "customizing" is to divide the probabilities $\Pr(X_1=x_1,X_2=x_2,\dots, X_k=x_k)$, where none of the $x_i$ is $0$, by a number $Q$, where $Q$ is the probability that none of the $x_i$ is $0$. Then we face the issue of calculating $Q$. This is $1$ minus the probability that at least one of the $X_i$ takes on the value $0$.
One can express the probability that none of the $X_i$ is $0$ as a complicated sum. It is not clear that there is a pleasant expression for this sum. But we give an approach using Inclusion/Exclusion that is feasible for small $k$, and that with suitable truncation might give useful approximations for larger $k$.
The probability that $X_i=0$ is $(1-p_i)^n$. Adding up over all $i$, we get our first estimate $\sum_i (1-p_i)^n$.
However, this sum counts twice every instance where $X_i=0$ and $X_j=0$ for distinct $i$ and $j$. That event has probability $(1-p_i-p_j)^n$. So we form the sum $\sum_{i<j}(1-p_i-p_j)^n$ and subtract it from the first estimate to get the second estimate.
But now we have taken away too much: every instance where $X_i=0$, $X_j=0$, and $X_l=0$ for distinct $i,j,l$ has been subtracted once too often. So we must add back $\sum_{i<j<l} (1-p_i-p_j-p_l)^n$.
Continuing in this way, with alternating signs, we find $1-Q$, and therefore $Q$. Compactly, $$Q=\sum_{S\subseteq\{1,\dots,k\}}(-1)^{|S|}\Bigl(1-\sum_{i\in S}p_i\Bigr)^{n}.$$
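The Inclusion/Exclusion computation of $Q$ can be sketched directly in Python (the function name is my own; the full sum over subsets is feasible only for small $k$, as noted above):

```python
from itertools import combinations

def prob_no_zero(n, ps):
    # Q = Pr(no X_i equals 0), by inclusion/exclusion:
    # sum over subsets S of the index set, with sign (-1)^|S|,
    # of (1 - sum of p_i for i in S)^n.
    k = len(ps)
    q = 0.0
    for m in range(k + 1):
        for subset in combinations(range(k), m):
            q += (-1) ** m * (1 - sum(ps[i] for i in subset)) ** n
    return q
```

For example, with $n=3$ and $p_1=p_2=p_3=\tfrac13$, this gives $1-3\left(\tfrac23\right)^3+3\left(\tfrac13\right)^3=\tfrac{6}{27}$, the probability that $3$ balls dropped into $3$ equally likely boxes leave no box empty. For larger $k$ one could truncate the outer loop at a small $m$ to get the approximations mentioned above.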