Does the null hypothesis always conform to a normal distribution?


Hopefully this is an appropriate place for this question.

Let's imagine I have a function Trader(param), which takes a matrix of market data as a parameter and then chooses whether to buy or sell. I feed it a bunch of historical data, and it spits out a series of numbers representing the value of each trade, sometimes making money and sometimes losing it. These numbers belong to a dataset called Sample.

I have a hypothesis: giving real money to Trader() has a positive expected value. I want to find out whether my dataset Sample is large enough to support this hypothesis at some confidence level (with critical value Z), within a margin of error M. My required sample size is N. I'm led to believe that this is the appropriate formula:

N >= ((Z * σ) / M)^2

Let's plug in an actual value for Z. I want to be 95% confident. In a normal distribution, 95% of the data points fall within 1.96 standard deviations of the mean, so Z = 1.96 for 95% confidence.
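To make the formula concrete, here is a minimal sketch of the sample-size calculation; the values of σ (standard deviation of trade values) and M (margin of error) below are hypothetical stand-ins:

```python
import math

def required_sample_size(z, sigma, margin):
    """Smallest integer N satisfying N >= ((z * sigma) / margin)^2."""
    return math.ceil(((z * sigma) / margin) ** 2)

# Hypothetical example: 95% confidence (z = 1.96), trade-value
# standard deviation sigma = 50, margin of error M = 5 (same units).
print(required_sample_size(1.96, 50, 5))  # -> 385
```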

But does my null hypothesis necessarily conform to a normal distribution? Suppose I calculated the value of every possible trade within my historical market data (that is, buy on every day and sell on every later day, giving a very large number of potential trades): would that set of trades be my null hypothesis? And if so, would I then want to compute how many standard deviations from the mean contain 95% of these data points?
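For reference, the "every possible trade" population I'm describing can be enumerated directly. Here's a minimal sketch in NumPy, with synthetic random-walk prices standing in for the real historical data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily closing prices; replace with real historical data.
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, size=250))

# Every possible trade: buy on day i, sell on any later day j (i < j).
i, j = np.triu_indices(len(prices), k=1)
trade_values = prices[j] - prices[i]

mean = trade_values.mean()
std = trade_values.std(ddof=1)
# Fraction of all possible trades within 1.96 standard deviations
# of the mean -- exactly 0.95 only if this population were normal.
within = np.mean(np.abs(trade_values - mean) <= 1.96 * std)
print(len(trade_values), within)
```

Note that because successive trade values share underlying price days, this population is far from i.i.d., which is part of why I'm unsure the normal-distribution assumption behind Z = 1.96 applies here.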