Is there any way to calculate a confidence interval (or otherwise gauge the reliability of a sample) when you have a sample size of one and you don't know the population standard deviation?
I work at a farm. We sample our soil yearly for analysis and fertilizer recommendations. For each relatively homogeneous field, the standard procedure is to take a number of samples, mix them together, and send a total of 1 lb of soil to the lab. If the field is 5 acres, that is 1 lb of soil out of $\approx$ 10 million lbs.
The results might report, say, 3 lbs of phosphorus per acre. Fertilizer recommendations would then be something like: for < 2 lbs phosphorus per acre, add 75 lbs of rock phosphate; for 2-4 lbs per acre, add 65 lbs; and so on.
I've never been totally comfortable with this procedure, as I can't figure out how to put a confidence interval around the results. Is there a way to do so?
For a normal population and a single observation $x$, a 90% confidence interval for the mean is given by $$x \pm 4.84 \left| x \right|$$ For 95% confidence, the interval is $$x \pm 9.68 \left| x \right|$$
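To get a feel for how wide these intervals are, here is a small sketch (my own, not from the paper) applying them to the soil example above, with the single lab result $x = 3$ lbs phosphorus per acre:

```python
def one_obs_interval(x: float, c: float) -> tuple[float, float]:
    """Confidence interval x +/- c*|x| built from a single observation x."""
    half_width = c * abs(x)
    return (x - half_width, x + half_width)

x = 3.0  # lbs phosphorus per acre, the single lab result
for level, c in [(0.90, 4.84), (0.95, 9.68)]:
    lo, hi = one_obs_interval(x, c)
    print(f"{level:.0%} CI: ({lo:.2f}, {hi:.2f}) lbs/acre")
# 90% CI: (-11.52, 17.52)
# 95% CI: (-26.04, 32.04)
```

The intervals are extremely wide, which is the honest price of a sample of size one with unknown variance.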
The reference is: Wall, M. M., Boen, J., and Tweedie, R. (2001), "An Effective Confidence Interval for the Mean with Samples of Size One and Two," *The American Statistician*, 55(2), 102-105.
If you have data from previous years, you might do better trying to estimate the variance using historical data.
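A hedged sketch of that idea, assuming the year-to-year measurement variance is stable and the historical results are independent of this year's: estimate $s$ from the past values, and since $(x - \mu)/s$ then follows a $t$ distribution with $n - 1$ degrees of freedom, form a $t$-based interval around this year's single observation. The historical numbers below are made up for illustration.

```python
from statistics import stdev
from scipy.stats import t

history = [2.6, 3.4, 2.9, 3.8, 3.1]  # hypothetical past results, lbs/acre
x = 3.0                               # this year's single lab result

n = len(history)
s = stdev(history)             # sample SD with n - 1 degrees of freedom
crit = t.ppf(0.975, df=n - 1)  # two-sided 95% critical value

# x is independent of the historical sample, so (x - mu)/s ~ t_{n-1},
# giving the interval below for this year's true mean mu.
lo, hi = x - crit * s, x + crit * s
print(f"95% CI: ({lo:.2f}, {hi:.2f}) lbs/acre")
```

This comes out far tighter than the single-observation interval above, at the cost of the (strong) assumption that the variance has not changed across years.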