I had skewed normalized count data, so I $\log_2$-transformed it to get approximately normally distributed values before running the regression (the usual approach). This is the result; the estimates can be interpreted as $\log_2$ fold changes:
> cf
Estimate Std. Error t value Pr(>|t|) 2.5 % 97.5 %
(Intercept) 0.4120023 0.3752519 1.0979353 0.314327963 -0.50620600 1.3302106
x.1 0.1413944 0.2488691 0.5681476 0.590547229 -0.46756644 0.7503552
x.2 0.6022825 0.2215433 2.7185763 0.034708337 0.06018544 1.1443795
x.3 0.8748745 0.1597857 5.4752979 0.001550213 0.48389290 1.2658561
To back-transform the estimates and present them more intuitively, I can raise 2 to the power of each estimate to get the fold change.
> 2^cf[, 1]
(Intercept) x.1 x.2 x.3
1.330531 1.102971 1.518116 1.833849
But what do I do with the confidence intervals?
$2^{\text{sd}(\log_2(Y))} = \text{sd}(Y)$ is probably not the case, but I'm aiming for an approximation.
This looks reasonable:
> cbind(2^cf[, 1] - 2^cf[, 2], 2^cf[, 1] + 2^cf[, 2])
[,1] [,2]
(Intercept) 0.03346515 2.627597
x.1 -0.08530467 2.291246
x.2 0.35213621 2.684097
x.3 0.71672736 2.950970
> cf[, 5:6]
2.5 % 97.5 %
(Intercept) -0.50620600 1.3302106
x.1 -0.46756644 0.7503552
x.2 0.06018544 1.1443795
x.3 0.48389290 1.2658561
The interval for x.1 still includes zero, for instance, but I'm not sure whether this is correct.
Maybe the confidence intervals need to be recalculated by transforming the standard errors.
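Alternatively, since $2^x$ is monotone, back-transforming the interval endpoints themselves should preserve the coverage, which would give a multiplicative (asymmetric) interval around the fold change:

```r
# Back-transform the CI endpoints instead of the standard errors;
# the resulting interval is asymmetric around 2^estimate and
# brackets 1 ("no change") rather than 0:
2^cf[, 5:6]
```

Would that be the more appropriate approach here?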
Data:
Here is my toy data for R, if needed:
set.seed(42)
# three predictors on the log2 scale
df <- data.frame(x = matrix(log2(runif(10 * 3)), 10, 3))
# response: linear combination of the predictors plus noise
df$y <- as.matrix(cbind(1, df)) %*% runif(4) + rnorm(10, sd = 0.5)
fit <- lm(y ~ ., df)
# estimates, standard errors, p-values and 95 % CIs in one table
cf <- cbind(coef(summary(fit)), confint(fit))
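A quick sanity check of the sd identity mentioned above, on simulated data (my own check, independent of the toy data):

```r
# Compare sd on the original scale with the back-transformed sd of
# the log2 values; for data that are normal on the log2 scale with
# sd(log2(y)) = 1 the two quantities differ clearly:
set.seed(1)
y <- 2^rnorm(1e5, mean = 0, sd = 1)
sd(y)
2^sd(log2(y))
```

So transforming the standard error directly doesn't seem to be the way to go either.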