I was reading up about RAID, and the text said:
> Suppose that the mean time to failure of a single disk is $100000$ hours. Then the mean time to failure of some disk in an array of 100 disks will be $\frac{100000}{100} = 1000$ hours, or $41.66$ days
I don't understand this. Why should the average lifetime of a disk decrease as the number of disks increases? Let's say $X_i$ is a random variable equal to the lifetime of the $i^{\text{th}}$ disk. All the $X_i$ are independent and identically distributed. Then, by the central limit theorem:
$$\bar{X} = \dfrac{X_1 + X_2 + \ldots + X_N}{N}$$
is approximately normally distributed as $N \to \infty$, with mean $E(X_i)$. What's wrong with this way of thinking?
I think that's a misunderstanding. The passage you quote is a bit informally phrased. By "the mean time to failure of some disk", they don't mean the mean time to failure of each individual disk, but the mean time until *any one* of the $100$ disks fails. If the failures form a Poisson process (i.e. lifetimes are exponentially distributed), then having $100$ disks instead of $1$ disk increases the rate of the process by a factor of $100$, and thus reduces the mean time between two events in the process by a factor of $100$. Of course this doesn't reduce the lifetimes of the individual disks. Your $\bar{X}$ is the *average* lifetime, which indeed stays near $E(X_i)$; the quoted figure concerns $\min(X_1, \ldots, X_{100})$, a different quantity.
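To make this concrete: for exponential lifetimes with mean $\mu$, we have $P(\min_i X_i > t) = \prod_{i=1}^{N} e^{-t/\mu} = e^{-Nt/\mu}$, so the first failure is itself exponential with mean $\mu/N$. A small simulation (a sketch assuming exponential lifetimes with a 100,000-hour mean, as in the quoted passage) shows both quantities side by side:

```python
import random

random.seed(0)
MTTF = 100_000   # hours; mean lifetime of one disk (assumed exponential)
N = 100          # disks in the array
TRIALS = 20_000  # simulated arrays

first_failures = []  # min lifetime in each array -> time to first failure
avg_lifetimes = []   # average lifetime in each array -> the asker's X-bar

for _ in range(TRIALS):
    lifetimes = [random.expovariate(1 / MTTF) for _ in range(N)]
    first_failures.append(min(lifetimes))
    avg_lifetimes.append(sum(lifetimes) / N)

print(sum(first_failures) / TRIALS)  # close to 1000 hours (MTTF / N)
print(sum(avg_lifetimes) / TRIALS)   # close to 100000 hours (MTTF)
```

The average lifetime concentrates around $100000$ hours exactly as the central limit theorem predicts, while the time to the *first* failure averages about $1000$ hours; both statements are true simultaneously.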