In deep learning, accuracy curves are crucial for evaluating a model's performance. Typically, an accuracy curve resembles a logarithmic function, although the reasons for this are beyond the scope of this question. Large spikes in the accuracy curve can indicate issues such as an inappropriate batch size. Let's examine these curves:
Here, I have plotted a function ($\log(x)$) with Gaussian noise at different scales $\alpha$, using this code:
import numpy as np
import matplotlib.pyplot as plt

def f(x, alpha):
    # log curve with additive Gaussian noise of scale alpha
    return np.log(x) + alpha * np.random.normal(size=x.size)

def main():
    x = np.linspace(0.5, 3)
    for i in [0, 0.1, 0.3]:
        plt.plot(x, f(x, alpha=i), label=fr'$\alpha$ = {i}')
    plt.legend()
    plt.show()

if __name__ == '__main__':
    main()
My objective is to quantify the smoothness of these curves in order to infer the original $\alpha$ value. I am asking this question here rather than on Stack Overflow because it is not strictly a programming question. The only approach that suggests itself to me is the integral of the squared second derivative, but I suspect there is a better, more accurate solution.
Can anyone suggest a solution?
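For concreteness, here is a minimal numerical sketch of the second-derivative idea I mentioned. It replaces the integral with discrete second differences (the (1, -2, 1) stencil and the `estimate_alpha` helper are my own illustration, not part of the code above). For equally spaced samples, the second difference of the smooth $\log(x)$ part is nearly zero, so the mean squared second difference is dominated by the noise; since the stencil's squared coefficients sum to $1 + 4 + 1 = 6$, dividing by 6 recovers the noise variance $\alpha^2$:

```python
import numpy as np

def estimate_alpha(y):
    """Estimate the noise scale alpha from an equally spaced, noisy curve.

    Assumes i.i.d. Gaussian noise. The second difference
    d_i = y[i+1] - 2*y[i] + y[i-1] nearly cancels the smooth part,
    and for pure noise Var(d) = 6 * alpha**2.
    """
    d = np.diff(y, n=2)                    # discrete second differences
    return np.sqrt(np.mean(d**2) / 6.0)

# Quick check on the setup from the question (with a seeded generator):
rng = np.random.default_rng(0)
x = np.linspace(0.5, 3, 200)
for alpha in [0.1, 0.3]:
    y = np.log(x) + alpha * rng.normal(size=x.size)
    print(f"true alpha = {alpha}, estimated = {estimate_alpha(y):.3f}")
```

This is only a sketch: the estimate is biased upward wherever the curvature of the underlying function is not negligible relative to the noise, so for very small $\alpha$ a smoothing-spline or Gaussian-process fit may be more accurate.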
