I'm working on a maintenance optimization problem, and I'm learning to use MDPs for model formulation. The state space is continuous, but the value function is piecewise constant: for example, it equals A when the state x lies in [x1, x2] and equals B when x lies in [x2, x3].
So my question is: can I still use the value iteration algorithm to find the optimal policy, perhaps replacing the summation operation with integration?
Use Chebyshev collocation to replace the infinite-dimensional functional equation with a finite-dimensional projection onto the space spanned by a Chebyshev basis. Basically, you take a finite basis of orthogonal polynomials, project the value function onto it, and store only the vector of basis coefficients as your "value function". Integration then becomes simple and essentially exact via quadrature methods. There are lots of references. It all looks complicated at first, but once you start programming it, it comes together very quickly: Chebyshev polynomials can be computed recursively in a very simple way that isn't obvious from their definition, which involves trigonometric functions and can look a bit intimidating.
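To make that last point concrete, here is a minimal sketch of the three-term recurrence T_{k+1}(x) = 2x T_k(x) − T_{k−1}(x), which lets you evaluate the whole basis without ever touching the trigonometric definition T_k(x) = cos(k arccos x):

```python
import numpy as np

def chebyshev_basis(x, n):
    """Evaluate Chebyshev polynomials T_0 .. T_{n-1} at points x in [-1, 1]
    using the three-term recurrence T_{k+1}(x) = 2*x*T_k(x) - T_{k-1}(x)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    T = np.empty((n, x.size))
    T[0] = 1.0                      # T_0(x) = 1
    if n > 1:
        T[1] = x                    # T_1(x) = x
    for k in range(2, n):
        T[k] = 2.0 * x * T[k - 1] - T[k - 2]
    return T
```

For instance, at x = 0.5 the recurrence gives T_2 = −0.5 and T_3 = −1.0, matching cos(2·π/3) and cos(3·π/3) from the trigonometric definition.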
The rough intuition is that you are using a version of Gram-Schmidt orthogonalization to project the true value function onto a lower-dimensional space of functions, while still keeping the state space continuous.
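As a sketch of that projection step (the sample function here is just a hypothetical stand-in for a true value function), NumPy's Chebyshev module can fit the coefficient vector at Chebyshev nodes and evaluate the approximation anywhere in the continuous state space:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical smooth function standing in for the true value function.
f = lambda x: np.exp(-x) * np.sin(3 * x)

deg = 10
# Chebyshev nodes on [-1, 1]; interpolating at these nodes is the projection.
nodes = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))
coeffs = C.chebfit(nodes, f(nodes), deg)   # this vector IS the stored "value function"

# Evaluate the low-dimensional approximation at arbitrary continuous states.
xs = np.linspace(-1, 1, 5)
approx = C.chebval(xs, coeffs)
```

With 11 coefficients the approximation of this smooth function is already accurate to several digits, which is the payoff of the projection: a short coefficient vector replaces the whole function.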
Here's a good reference to get started: http://ice.uchicago.edu/2008_presentations/Judd/ICEProjectionMethods.pdf
You might also want to learn Tauchen's method for discretizing an AR(1) process into a Markov chain, as that's another route to turning a continuous-state problem into a discrete-state one.
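In case it helps, here is a short sketch of Tauchen's method (assuming SciPy is available for the normal CDF): it places n evenly spaced grid points over ±m stationary standard deviations and assigns each bucket the Gaussian mass of the conditional distribution, lumping the tails into the endpoints.

```python
import numpy as np
from scipy.stats import norm

def tauchen(n, rho, sigma, m=3.0):
    """Discretize the AR(1) process y' = rho*y + eps, eps ~ N(0, sigma^2),
    into an n-state Markov chain (grid, transition matrix) a la Tauchen (1986)."""
    sigma_y = sigma / np.sqrt(1.0 - rho ** 2)        # stationary std dev of y
    grid = np.linspace(-m * sigma_y, m * sigma_y, n)  # evenly spaced state grid
    h = (grid[1] - grid[0]) / (2.0 * sigma)           # half bucket width, in eps units
    P = np.empty((n, n))
    for i in range(n):
        # conditional distribution of y' given y = grid[i], standardized
        z = (grid - rho * grid[i]) / sigma
        P[i] = norm.cdf(z + h) - norm.cdf(z - h)      # interior bucket masses
        P[i, 0] = norm.cdf(z[0] + h)                  # lump left tail into first state
        P[i, -1] = 1.0 - norm.cdf(z[-1] - h)          # lump right tail into last state
    return grid, P
```

Because adjacent bucket boundaries coincide, each row telescopes to exactly 1, so the result is a proper stochastic matrix you can feed straight into discrete value iteration.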