Using image vs position and velocity states for reinforcement learning


I am developing a 2D car simulator and using DRL to find an optimal path from an initial to a target position. Since the action space is continuous, I am using DDPG, an actor-critic method. Is it a good idea to feed the whole image (showing the car, obstacles, and target position) as input to the deep reinforcement learning model, or should I instead combine the current state (position and velocity), the target position, and some measured distances to the obstacles? Currently I use the latter as input. After many trials, the agent doesn't seem to learn much; it keeps roaming around the same point.
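For reference, this is roughly how I assemble the low-dimensional observation (all names here are illustrative, not my exact code):

```python
import numpy as np

def build_state(car_pos, car_vel, target_pos, obstacle_dists):
    """Concatenate car position, velocity, target position, and
    obstacle distance readings into one flat observation vector
    for the DDPG actor and critic."""
    return np.concatenate([
        np.asarray(car_pos, dtype=np.float32),        # (x, y)
        np.asarray(car_vel, dtype=np.float32),        # (vx, vy)
        np.asarray(target_pos, dtype=np.float32),     # (tx, ty)
        np.asarray(obstacle_dists, dtype=np.float32)  # e.g. range sensor readings
    ])

# Example: 2D position/velocity/target plus three obstacle distances
state = build_state((1.0, 2.0), (0.1, 0.0), (5.0, 5.0), [3.0, 4.0, 2.5])
```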