The chain rule for differential entropy says
$$h(x_1, x_2, \dots, x_K) = \sum_{i=1}^K h(x_i | x_1, x_2, \dots, x_{K-1}) $$
but I am not sure how the (multivariate) conditional entropies in the summand can be simplified to something that is actually calculable:
$$ h(x_i | x_1, x_2, \dots, x_{K-1}) = ?$$
Could someone simplify the first expression into a ready-to-estimate form? As it stands, I don't know where to begin calculating.
I can only think of switching the conditional entropies to the familiar $H(Y|X) = H(X,Y) − H(X) $ instead, whose multivariate version is
$$h(x_K \mid x_1,x_2,\dots,x_{K-1}) = h(x_1, x_2, \dots, x_K) - h(x_1, x_2, \dots, x_{K-1})$$
but this is circular, since it just takes us back to the first line: how do we calculate $h(x_1, x_2, \dots, x_K)$?
First, there is an error in your equation: the RHS should be
$$\sum_{i=1}^K h(x_i | x_1, x_2, \dots, x_{i-1})$$
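For instance, with $K = 3$ the corrected chain rule reads
$$h(x_1, x_2, x_3) = h(x_1) + h(x_2 \mid x_1) + h(x_3 \mid x_1, x_2),$$
and each conditional term telescopes via
$$h(x_i \mid x_1, \dots, x_{i-1}) = h(x_1, \dots, x_i) - h(x_1, \dots, x_{i-1}),$$
so the decomposition you wrote is not circular so much as exact but uninformative without further model assumptions.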
In general, there's not much more to do. One often uses the (multivariate) chain rule to compute the joint entropy, but only because some extra assumption in the model leads to a simple calculation of the conditional entropies. By far the most common case is when the sequence corresponds to a Markov process, in which case
$$h(x_i | x_1, x_2, \dots, x_{i-1}) = h(x_i | x_{i-1})$$
Furthermore, if the Markov process is stationary, then all the conditional entropies are the same.
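As a concrete sketch (my own example, not part of your question): for a stationary Gaussian AR(1) process $x_t = a\,x_{t-1} + z_t$ with $z_t \sim \mathcal{N}(0, \sigma^2)$ and $|a| < 1$, every conditional entropy is a Gaussian entropy, so the chain rule collapses to two terms. A minimal Python check, with entropies in nats:

```python
import numpy as np

def gaussian_entropy(var):
    """Differential entropy (in nats) of a 1-D Gaussian with variance `var`."""
    return 0.5 * np.log(2 * np.pi * np.e * var)

def ar1_joint_entropy(a, sigma2, K):
    """Joint entropy h(x_1, ..., x_K) of a stationary Gaussian AR(1) process
    x_t = a * x_{t-1} + z_t, z_t ~ N(0, sigma2), |a| < 1, via the chain rule:
    h(x_1) + (K - 1) * h(x_i | x_{i-1})  (Markov + stationary)."""
    h_first = gaussian_entropy(sigma2 / (1 - a ** 2))  # stationary marginal variance
    h_cond = gaussian_entropy(sigma2)  # h(x_i | x_{i-1}) = entropy of the innovation
    return h_first + (K - 1) * h_cond
```

You can sanity-check this against the direct joint-Gaussian formula $h = \tfrac12 \log\!\big((2\pi e)^K \det \Sigma\big)$, where $\Sigma_{ij} = \frac{\sigma^2}{1-a^2} a^{|i-j|}$ is the stationary AR(1) covariance; the two agree exactly.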