I have discrete data points of a function (I can generate as many points as I like), which looks something like this:

I used MATLAB's trapezoidal rule function, `trapz`, for the numerical integration.
The problem is that increasing the number of points does not reduce the integration error. As someone already answered, the problem lies in the spikes. But:
- Why does increasing the number of points not decrease the error? Using 10k points or 10 million points makes no difference.
- Isn't 10 million points enough to capture all the details of this function?
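For reference, here is the behaviour one would normally expect. This is a minimal sketch in Python/NumPy (the question's data isn't available, so `sin` stands in as a smooth integrand): the composite trapezoidal rule, the same scheme as MATLAB's `trapz`, has error that shrinks like $h^2$ on smooth functions, which is why adding points is *expected* to help:

```python
import numpy as np

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n panels (n + 1 sample points).
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# Smooth stand-in integrand: the integral of sin over [0, pi] is exactly 2.
errors = {n: abs(trapezoid(np.sin, 0.0, np.pi, n) - 2.0) for n in (100, 1000)}
print(errors)  # the error shrinks by roughly 100x when n grows 10x
```

With a sharp spike (or once rounding error dominates, as the answer below explains), this clean $O(h^2)$ behaviour breaks down.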
I have had this problem with MATLAB before too; it comes down to how MATLAB stores numbers. In theory the error should decrease, but in practice you are summing up more regions, each of which carries a relatively larger rounding error. Here's an example:
Say I want to compute the derivative numerically at a given point: $$\frac{f(x+h)-f(x)}{h}.$$ Making $h$ very small is fine if it is a simple number (like $10^{-a}$), but when $f(x+h)$ has a large number of non-zero digits, MATLAB stores it rounded, so $f(x+h)$ will not have the precision you want. The same thing happens during a long summation: there is rounding after every addition.
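This trade-off is easy to see numerically. A short sketch (in Python rather than MATLAB, but double precision behaves identically in both) differentiates `sin` at $x = 1$ with the forward difference above: shrinking $h$ first reduces the truncation error, then the rounding error in $f(x+h)-f(x)$ takes over and the result gets *worse*:

```python
import math

def forward_diff(f, x, h):
    # One-sided difference quotient (f(x+h) - f(x)) / h.
    return (f(x + h) - f(x)) / h

x0 = 1.0
exact = math.cos(x0)  # derivative of sin at x0
errs = {}
for h in (1e-1, 1e-8, 1e-12):
    errs[h] = abs(forward_diff(math.sin, x0, h) - exact)
    print(f"h = {h:g}  error = {errs[h]:.3e}")
```

The error is smallest near $h \approx 10^{-8}$ (about $\sqrt{\varepsilon}$ for doubles); at $h = 10^{-12}$ the subtraction cancels most significant digits and the error grows again.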
MATLAB by default works with about 16 significant digits, but the `vpa` function lets you go beyond that, so you could write your own numerical-integration code that uses it; the number of digits `vpa` works with can also be changed (via the DIGITS setting).
> vpa Variable precision arithmetic.
> R = vpa(S) numerically evaluates each element of the double matrix S using variable precision floating point arithmetic with D decimal digit accuracy, where D is the current setting of DIGITS. The resulting R is a SYM.
You can recall this description yourself using the command `help vpa`.
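To illustrate the idea outside MATLAB: Python's `decimal` module plays a role similar to `vpa`, letting you raise the working precision. A small sketch sums 0.1 a million times, where binary doubles accumulate visible rounding error but higher-precision decimal arithmetic does not:

```python
from decimal import Decimal, getcontext

# Double precision: repeatedly adding 0.1 accumulates rounding error,
# because 0.1 has no exact binary representation.
naive = 0.0
for _ in range(10**6):
    naive += 0.1
float_err = abs(naive - 100000.0)

# Raise the working precision (analogous to vpa/DIGITS in MATLAB).
getcontext().prec = 50  # 50 significant decimal digits
precise = sum(Decimal('0.1') for _ in range(10**6))

print(float_err)  # noticeably nonzero
print(precise)    # exactly 100000.0
```

The same effect is why a quadrature sum over millions of panels can stop improving: each addition is rounded, and past some point the accumulated rounding error outweighs the shrinking truncation error.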