Is there a difference between NHST (null hypothesis significance testing) and the Fisher approach to decision theory? I cannot find any difference, since it seems to me that they both ignore the alternative hypothesis (which is not ignored, for example, by the Neyman-Pearson and Bayesian approaches). But perhaps I am simply having trouble finding a clear description of NHST in the literature.
2026-04-01 14:57:58
Difference between NHST and Fisher approach to decision theory
294 Views · Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
There is 1 answer below.
Fisher focused mainly on looking at the probability of outcomes computed under the assumption that the null hypothesis is true. Consider his famous example of a lady tasting tea. She claims tea tastes better if the milk is put into the cup before the tea is poured, than if milk is added after the tea is poured. To test whether she can really tell the difference, she is confronted with eight cups randomly arranged on a tray---four prepared each way. (She is told to pick the four cups that taste best.) Fisher would conclude she can indeed tell the difference if she picks the four milk-first cups, because there is only one chance in ${8 \choose 4} = 70$ of this outcome under the null hypothesis that she cannot distinguish orders of pouring. Fisher developed many kinds of hypothesis tests along these lines, including what we call ANOVA (analysis of variance).
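To make the arithmetic concrete, here is a short Python check of the 1-in-70 figure, along with the full null distribution of the number of correctly identified cups (which is hypergeometric). This is an illustrative sketch, not code from Fisher or the answer above:

```python
from math import comb

# Number of ways to choose 4 cups out of 8.
total = comb(8, 4)  # 70

# Null distribution of k = number of milk-first cups correctly chosen,
# when the lady has no ability and any 4-cup choice is equally likely:
# P(k) = C(4, k) * C(4, 4 - k) / C(8, 4)   (hypergeometric)
null_dist = {k: comb(4, k) * comb(4, 4 - k) / total for k in range(5)}

# Fisher's p-value for a perfect selection is the probability of the
# single most extreme outcome under the null hypothesis:
p_value = null_dist[4]
print(p_value)  # 1/70 ≈ 0.0143
```

Note that even getting 3 of 4 cups right is unremarkable under the null: it happens with probability 16/70 ≈ 0.23, which is why only the perfect selection is convincing in this design.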
A few years later, Neyman and Pearson formulated the framework of acceptance and rejection regions, type I and type II error probabilities, and power. Fisher strongly resisted this framework, and apparently the idea of power in particular. For the tea-tasting experiment, Neyman & Pearson might have wanted to know the chances that the lady could 'pass' Fisher's test if she had some moderate, but not perfect, ability to distinguish the order of pouring. (For example, maybe she can tell the difference for only 80% of the cups she tastes.)
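Power calculations require a model of partial ability, which the passage above leaves informal. As one hedged illustration (the model and the `simulate_power` name are my assumptions, not anything from Fisher or from Neyman & Pearson): suppose the lady perceives each cup's true type correctly with probability 0.8, independently, and then picks the four cups she believes are milk-first. A Monte Carlo simulation then estimates her chance of passing Fisher's test:

```python
import random

def simulate_power(p_correct=0.8, n_trials=100_000, seed=0):
    """Estimate the power of the tea test under a toy model: the
    taster perceives each cup's true type correctly with probability
    p_correct, then picks the 4 cups she believes are milk-first
    (ties broken at random). She passes only with a perfect selection."""
    rng = random.Random(seed)
    passes = 0
    for _ in range(n_trials):
        # Cups 0-3 are milk-first, cups 4-7 are milk-last.
        scores = []
        for cup in range(8):
            truth = 1 if cup < 4 else 0
            perceived = truth if rng.random() < p_correct else 1 - truth
            # Random jitter breaks ties among equally perceived cups.
            scores.append((perceived, rng.random(), cup))
        chosen = {cup for _, _, cup in sorted(scores, reverse=True)[:4]}
        if chosen == {0, 1, 2, 3}:
            passes += 1
    return passes / n_trials
```

Under this toy model the power at 80% per-cup accuracy is well below 1, which illustrates the Neyman-Pearson point: a test can control the type I error at 1/70 and still have a substantial chance of missing a real but imperfect ability.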
Apparently, it is possible for bright people to read the writings of Fisher on the one hand and Neyman & Pearson on the other hand, and come to remarkably different conclusions about what each is saying and the distinction between their philosophies of hypothesis testing. I have tried to stick to what I view as the clearest and simplest descriptions of the differences. Even so, I will not be surprised if there are lots of Comments trying to 'reinterpret', 'clarify', or 'correct' what I have said. (Perhaps some from people who have never read a word of the writings of Fisher or of Neyman & Pearson.)
What is totally obvious is that there was a long and bitter battle between Fisher and Neyman & Pearson over fundamental ideas of hypothesis testing. Fortunately, most texts attempt to explain the formulation and testing of hypotheses in a way that makes sense to students, without feeling the need to go into past controversies.
Note (probably unrelated to your specific question): There is also a Bayesian approach to hypothesis testing. It is somewhat controversial but not in a way directly related to the Fisher vs Neyman-Pearson debate. Very roughly, a Bayesian might require a higher level of proof if the lady claims she can tell the difference in order of pouring by closing her eyes and (without smelling or tasting) sensing 'vibrations in the cup's aura' that she claims differ by order of pouring. And a lower level of proof if she only claims that Darjeeling and Earl Grey teas differ noticeably in taste. Bayesians start out with a 'prior probability' that the lady has the ability she claims, and put that together with the observational data to get a 'posterior probability', which they use to make an inference.
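As a minimal sketch of the Bayesian calculation described above (the prior values and the `posterior` function are illustrative assumptions, not from any source): if a genuine ability would let the lady pass with certainty, Bayes' rule combines the prior probability of ability with the 1/70 chance of passing by luck:

```python
from math import comb

def posterior(prior, p_data_given_ability, p_data_given_null=1 / comb(8, 4)):
    """Bayes' rule: update the prior belief that the lady has the
    claimed ability, given that she identified all four cups."""
    num = prior * p_data_given_ability
    den = num + (1 - prior) * p_data_given_null
    return num / den

# Sceptical prior for the 'aura' claim vs. a generous prior for the
# milder tea-variety claim, assuming genuine ability implies a pass:
print(posterior(0.001, 1.0))  # ≈ 0.065: same data, still implausible
print(posterior(0.5, 1.0))    # ≈ 0.986: same data, now convincing
```

The same experimental outcome thus yields very different posteriors depending on the prior, which is exactly the sense in which the Bayesian demands a 'higher level of proof' for a more extraordinary claim.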