According to Wikipedia, the quantile function is defined by $$Q(p)=\inf \{x\in\mathbb{R}:F(x)\geq p \}.$$ But if I apply this to the equally likely data set $10, 11, 12, 13$, I get $Q(0.5)=11$. Shouldn't $Q(0.5)$ be the median $\frac{11+12}{2}=11.5$?
I thought a quantile was just the quantile function evaluated at a specific value of $p$, but this example seems to indicate that is not the case.
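To make the discrepancy concrete, here is a small Python sketch that implements the infimum definition directly on the four data points and compares it with the standard library's median (the helper names `F` and `Q` are just illustrative, not from any library):

```python
import statistics

data = [10, 11, 12, 13]  # each value with probability 1/4


def F(x):
    """Empirical CDF: P(X <= x)."""
    return sum(v <= x for v in data) / len(data)


def Q(p):
    """Quantile function Q(p) = inf {x : F(x) >= p}.
    For a finite sample the infimum is attained at a data point,
    so scanning the sorted values is enough."""
    for x in sorted(data):
        if F(x) >= p:
            return x


print(Q(0.5))                   # 11   (the infimum definition)
print(statistics.median(data))  # 11.5 (midpoint convention)
```

So both computations in the question are "right"; they just follow two different conventions.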
For discrete random variables, quantiles do not behave the way intuition from continuous random variables suggests. The cumulative distribution function is not continuous: it jumps at each data point, so there is no single value where you cross exactly from below $50\%$ to at least $50\%$.
For this data set, every $m$ in the closed interval $[11,12]$ satisfies the usual median condition $P(X\leq m)\geq \frac12$ and $P(X\geq m)\geq \frac12$, so $11$, $11.5$, $11.7$, $12$, and so on all qualify as medians. The quantile definition takes the infimum of the set $\{x:F(x)\geq 0.5\}$ to get a unique value, namely $11$, while the usual convention for the median takes the arithmetic average of the interval's endpoints, giving $11.5$. The two agree for continuous random variables, but can differ for discrete (or mixed) ones.
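You can check the median condition pointwise; this sketch (the helper `is_median` is just an illustrative name) confirms that exactly the points of $[11,12]$ qualify:

```python
data = [10, 11, 12, 13]
n = len(data)


def is_median(m):
    """Usual median condition for a random variable:
    P(X <= m) >= 1/2  and  P(X >= m) >= 1/2."""
    return (sum(v <= m for v in data) / n >= 0.5
            and sum(v >= m for v in data) / n >= 0.5)


for m in [10.9, 11, 11.5, 12, 12.1]:
    print(m, is_median(m))
# 11, 11.5 and 12 qualify; 10.9 and 12.1 do not.
# The infimum of the qualifying set is 11 (the quantile),
# the midpoint of its endpoints is 11.5 (the usual median).
```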
Why that is so I don't know; my guess is that the concept of the median was in use long before the concept of quantiles. The median is a special value that exists only at $50\%$; unlike quantiles, there is no "$70\%$ median". For it, using the average of the endpoints may have felt more natural and symmetric than taking the infimum.