Essentially, I am generating datasets in which I can make the sample sizes as large as I want. Any statistical test I run between the generated sample distributions is therefore somewhat meaningless, because the test statistic depends on sample size: as long as I make the samples large enough, the test can always be made to come back "significant".
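To make the problem concrete, here is a small sketch (the distributions and effect size are made up for illustration): two normal populations whose means differ by a tiny but fixed amount. A two-sample t-test's p-value collapses toward zero as I increase n, even though the underlying difference never changes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two populations whose means differ by a small, fixed amount (0.05).
# As n grows, the two-sample t-test p-value shrinks toward zero even
# though the underlying population difference is constant.
pvalues = {}
for n in (100, 10_000, 1_000_000):
    a = rng.normal(loc=0.00, scale=1.0, size=n)
    b = rng.normal(loc=0.05, scale=1.0, size=n)
    _, p = stats.ttest_ind(a, b)
    pvalues[n] = p
    print(f"n={n:>9,d}  p={p:.3g}")
```

At n = 100 the difference is usually undetectable; at n = 1,000,000 the p-value is vanishingly small. This is exactly the sample-size dependence I want to get away from.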
What I am looking to do, in essence, is compare two distributions of data directly, not just their means, and quantify some measure of statistical significance for that comparison. In my case, if such a significance test exists, you could argue that increasing the sample size would make the comparison more accurate, but the result should ultimately converge to some non-zero significance value as the sample size tends to infinity.
The two-sample Kolmogorov–Smirnov test (wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) is very close to what I am looking for, since it compares the two empirical distributions directly rather than just their means. However, its test for significance is still dependent on the sample sizes of the two distributions.
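This is easy to see numerically (again an illustrative sketch with made-up distributions): the KS statistic D converges to a fixed population value as the samples grow, but the p-value attached to it still collapses to zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# For N(0,1) vs N(0.2,1) the KS statistic D converges to a fixed
# population value (roughly 0.08), but the p-value still shrinks
# toward zero as the sample sizes grow.
results = {}
for n in (1_000, 100_000):
    a = rng.normal(loc=0.0, scale=1.0, size=n)
    b = rng.normal(loc=0.2, scale=1.0, size=n)
    d, p = stats.ks_2samp(a, b)
    results[n] = (d, p)
    print(f"n={n:>7,d}  D={d:.4f}  p={p:.3g}")
```

So the D statistic itself behaves the way I want (it stabilizes for large samples), but the significance attached to it does not.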
I am bringing this question here in hopes that someone with experience knows of a statistical significance test between two distributions that is independent of sample size. I am aware that, in general, having a large sample essentially translates to "we can now be more confident that the test statistic is significant!" But I am looking for a test that directly compares two distributions under the assumption that both samples are very large, so that the empirical distributions have converged and sample size no longer has an effect on the test statistic.