Why use Roman Numerals when discussing statistical decision errors?


I have been looking for some defensible rationale for why we use Roman numerals for Type I and Type II errors. Why do we not just call them Type 1 and Type 2? Is it tradition?


This seems to be simply tradition, stemming from the 1933 manuscript by Neyman and Pearson that originally described the Type I and Type II errors ("The testing of statistical hypotheses in relation to probabilities a priori"). In it, the two error types are enumerated in a list with Roman numerals as headings, a common typographic convention:

...[and] these errors will be of two kinds:
(I) we reject H0 [i.e., the hypothesis to be tested] when it is true,
(II) we fail to reject H0 when some alternative hypothesis HA or H1 is true.

These error types are then referred to as "errors of type I" and "errors of type II" throughout the rest of the paper, and most authors have simply continued the convention of using Roman numerals for the two error types ever since.
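Whatever the numerals, the two error types themselves are easy to see in simulation. The sketch below (assuming NumPy and SciPy are available; the sample size, effect size, and repetition count are arbitrary choices for illustration) estimates the Type I error rate by testing data generated under a true H0, and the Type II error rate by testing data generated under a true alternative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05   # significance level of each test
n = 30         # sample size per simulated experiment
reps = 5000    # number of simulated experiments

# Type I error: H0 is true (the mean really is 0), but we reject it.
rejections_under_h0 = 0
for _ in range(reps):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    if stats.ttest_1samp(sample, popmean=0.0).pvalue < alpha:
        rejections_under_h0 += 1
type_i_rate = rejections_under_h0 / reps  # should land near alpha

# Type II error: an alternative is true (mean = 0.5), but we fail to reject H0.
failures_under_h1 = 0
for _ in range(reps):
    sample = rng.normal(loc=0.5, scale=1.0, size=n)
    if stats.ttest_1samp(sample, popmean=0.0).pvalue >= alpha:
        failures_under_h1 += 1
type_ii_rate = failures_under_h1 / reps  # depends on effect size and n

print(f"Type I error rate  ~= {type_i_rate:.3f} (nominal alpha = {alpha})")
print(f"Type II error rate ~= {type_ii_rate:.3f}")
```

The Type I rate hovers near the chosen alpha by construction, while the Type II rate depends on how far the true mean sits from the hypothesized one, which is why only the former is directly controlled by the analyst.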