Suppose I have a data set of $N>100$ samples, and I know the sample mean $\bar x$ and the sample standard deviation $S$. How can I compute a confidence interval for the mean at the 95% confidence level?
My problem is that I have no information about the population distribution or its standard deviation.
Use the following statistical formula, $$\bar{x}\pm\frac{ts}{\sqrt{n}}$$ where $\bar{x}$ is the sample mean, $s$ is the sample standard deviation, $n$ is the sample size, and $t$ is the critical value from a $t$-table with $n-1$ degrees of freedom at the 95% confidence level. Since $n>100$, $t$ is close to the normal critical value $1.96$. Here is a link to a table with the 95% confidence bands highlighted: https://www2.palomar.edu/users/rmorrissette/Lectures/Stats/ttests/TTable.jpg.
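The formula above can be sketched in a few lines of Python. Since $n>100$, the $t$ critical value is close to the normal value $1.96$, so this sketch uses the standard library's `NormalDist` as a stand-in; for the exact $t$ value you could instead use `scipy.stats.t.ppf(0.975, df=n-1)`. The values of `xbar`, `s`, and `n` below are hypothetical examples, not from the question.

```python
# Sketch (illustrative values, not from the question): large-sample
# 95% confidence interval for the mean using the normal critical value,
# which approximates the t critical value well when n > 100.
from math import sqrt
from statistics import NormalDist

def mean_ci(xbar, s, n, conf=0.95):
    """Return (lower, upper) approximate confidence interval for the mean."""
    # Two-sided critical value; ~1.96 for conf = 0.95.
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    half = z * s / sqrt(n)
    return xbar - half, xbar + half

# Hypothetical data summary: xbar = 10.0, s = 2.0, n = 150.
lo, hi = mean_ci(xbar=10.0, s=2.0, n=150)
```

The interval is symmetric about $\bar x$, and its half-width shrinks like $1/\sqrt{n}$ as the sample size grows.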
As an aside, a distribution-free bound can be derived from probability theory. Chebyshev's inequality (the inside rather than outside interval version) states $$\Pr\left(|X-\mu|\leq k\sigma\right)\geq 1-\frac{1}{k^2}$$ where $\mu$ is the population mean, $\sigma$ is the population standard deviation, and $X$ is the random variable. To find $k$ for a 95% bound, solve $1-1/k^2=0.95$, i.e. $1/k^2=0.05$, to obtain $k=\sqrt{20}=2\sqrt{5}\approx 4.47$. The confidence interval is the inequality within the parentheses; rearranging, it becomes $$\mu-k\sigma\leq X\leq \mu+k\sigma$$ This is just an aside. Use the first equation (statistical) for your problem.
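A quick check of the arithmetic in the aside, and a comparison against the normal critical value, shows why the Chebyshev interval is only a coarse fallback (this snippet is illustrative, not part of the original answer):

```python
# Chebyshev 95% multiplier: solve 1 - 1/k^2 = 0.95, i.e. 1/k^2 = 0.05.
from math import sqrt

k_cheb = sqrt(1 / 0.05)  # = sqrt(20) = 2*sqrt(5), about 4.47
z_norm = 1.96            # normal 95% critical value, for comparison

# The distribution-free Chebyshev interval is more than twice as wide.
ratio = k_cheb / z_norm
```

Because Chebyshev's inequality makes no distributional assumption, its interval ($\approx \mu \pm 4.47\sigma$) is far more conservative than the $t$- or normal-based interval.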