I am running a Monte Carlo simulation to price call and put options, and I am observing a strange relationship between the number of sampling points and the standard deviation of the estimate. It makes sense that as the number of sampling points grows, the standard error should shrink (roughly in proportion to 1/sqrt(N)); however, my results do not show this.
I am observing the following trend during testing and cannot make sense of it:
For a call option, the error increases as the number of sampling points increases. For a put option, the error decreases as the number of sampling points increases.
Is it possible for the behavior described above to occur in a correct Monte Carlo simulation for option pricing, or does it necessarily indicate a bug in my implementation?
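For reference, here is a minimal sketch of what I would expect. It assumes a standard Black–Scholes GBM model with illustrative parameters (S0=100, K=100, r=0.05, sigma=0.2, T=1, all my own choices, not from my actual setup), and reports the standard error of the discounted-payoff mean for both a call and a put:

```python
import numpy as np

def mc_price(n_paths, is_call, S0=100.0, K=100.0, r=0.05,
             sigma=0.2, T=1.0, seed=0):
    """Monte Carlo price of a European option under Black-Scholes GBM.

    Returns (price estimate, standard error of the estimate).
    """
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    # Terminal stock price under risk-neutral GBM dynamics.
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.maximum(ST - K, 0.0) if is_call else np.maximum(K - ST, 0.0)
    disc = np.exp(-r * T) * payoff
    # Standard error of the sample mean: sample std / sqrt(N).
    return disc.mean(), disc.std(ddof=1) / np.sqrt(n_paths)

for n in (1_000, 10_000, 100_000):
    c, se_c = mc_price(n, True)
    p, se_p = mc_price(n, False)
    print(f"N={n:>7}: call={c:.4f} (se {se_c:.4f}), "
          f"put={p:.4f} (se {se_p:.4f})")
```

In this sketch the standard error falls by roughly a factor of sqrt(10) at each step for both the call and the put, which is the behavior I expected but am not seeing for the call in my own code.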