For a sampling-distribution-of-the-sample-proportion problem (Bernoulli distribution with $\mathbb P(\mathrm{yellow\ ball}) = 0.6$ out of $10000$ balls, say), I get the discrete distribution below (left) with $\mu=0.6$ and $\sigma=0.15$. When I try to draw the equivalent normal pdf, it is scaled way above the bars. However, when I scale the discrete distribution up by the sample size ($10$), the pdf and the discrete distribution match. So far I have not found any glitch in the code that could produce this. Is this the expected output? How do we justify it? Does this mean we have an inherent limitation in applying the normal approximation? Kindly explain.
I see that, once $\sigma$ drops below $1/\sqrt{2\pi} \approx 0.4$, the normalizing constant $1/(\sigma\sqrt{2\pi})$ of the normal pdf exceeds $1$, so is the normal curve no longer a usable approximation whenever the discrete distribution has $\sigma \lesssim 0.4$? The sample size $n$ is $10$. If $np$ is at play, it only gets worse as I increase the sample size $n$, since that reduces $\sigma$.
If you still suspect it is a glitch in the code, here is the code.
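The code itself is omitted here; the following Python sketch is a hypothetical reconstruction that reproduces the same numbers, using the exact binomial pmf of $\hat p$ rather than a simulation:

```python
# Hypothetical reconstruction (the original code is not shown): compare the
# pmf of the sample proportion with the normal pdf it is overlaid on.
import math

n, p = 10, 0.6
sigma = math.sqrt(p * (1 - p) / n)  # ~0.1549, matching the sigma = 0.15 above

# Exact pmf of p-hat = k/n (binomial probabilities)
pmf = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

# Normal pdf evaluated at its peak, x = p
peak_pdf = 1 / (sigma * math.sqrt(2 * math.pi))

print(max(pmf))      # ~0.251 -- tallest probability bar
print(peak_pdf)      # ~2.575 -- the normal pdf peak is ~10x higher
print(n * max(pmf))  # ~2.508 -- scaling by n = 10 brings them together
```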


If you take a sample $\boldsymbol x = (x_1, \ldots, x_n)$ of size $n$ of independent and identically distributed observations from a Bernoulli distribution with probability parameter $p$, and compute the sample proportion $$\hat p = \frac{1}{n} \sum_{i=1}^n x_i,$$ then $\hat p$ is approximately normal with mean $\mu = p$ and variance $\sigma^2 = p(1-p)/n$. When creating a histogram of $N$ simulations of $\hat p$, you would compute the density of observed proportions; i.e., for each $k \in \{0, 1, \ldots, n\}$, you would compute the number $s_k$ of simulations for which $\hat p = k/n$, then plot a scaled histogram comprising the vertical bars $$\left\{\left(\frac{k}{n}, \frac{n s_k}{N}\right)\right\}_{k=0}^n.$$
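A minimal Python sketch of this construction (the parameter values are illustrative assumptions), showing that the bars $n s_k/N$ are densities whose total area is exactly $1$:

```python
# Build the scaled histogram of simulated sample proportions described above.
import math
import random

random.seed(0)
n, p, N = 10, 0.6, 100_000

# s[k] = number of simulations with p-hat = k/n
s = [0] * (n + 1)
for _ in range(N):
    k = sum(random.random() < p for _ in range(n))
    s[k] += 1

# Bar heights n*s_k/N are densities: each bar has width 1/n, so the areas sum to 1
heights = [n * sk / N for sk in s]
total_mass = sum(h * (1 / n) for h in heights)
print(total_mass)  # 1.0 exactly, by construction

# The tallest bar is close to the normal pdf evaluated at the mean
sigma = math.sqrt(p * (1 - p) / n)
print(max(heights), 1 / (sigma * math.sqrt(2 * math.pi)))
```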
When done in this fashion, the resulting histogram will overlay the normal distribution with the aforementioned mean and variance. Note that the height of each bar is $n s_k/N$, not $s_k/N$. This is where your error lies. The reason is that the height of a bar is not a probability but a probability *density*: each bar has width $1/n$, so the heights must be scaled by $n$ for the total area to equal $1$. This should become obvious once you realize that when $\sigma$ is sufficiently small for a normal distribution (how small, exactly?), there will be some value at which the corresponding density exceeds $1$.
Plotting in Mathematica:
I make no claims as to the efficiency of the code.
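The Mathematica listing is not reproduced here; the following is an equivalent sketch in Python/Matplotlib (plot styling and parameter values are assumptions, not the original code):

```python
# Overlay a scaled histogram of simulated sample proportions with the
# approximating normal pdf.
import math
import random

import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

random.seed(1)
n, p, N = 10, 0.6, 50_000
mu = p
sigma = math.sqrt(p * (1 - p) / n)

# Simulate N sample proportions and count s[k] = #{p-hat = k/n}
s = [0] * (n + 1)
for _ in range(N):
    k = sum(random.random() < p for _ in range(n))
    s[k] += 1

xs = [k / n for k in range(n + 1)]
heights = [n * sk / N for sk in s]  # densities, not probabilities

# Normal pdf on a fine grid for the overlay
grid = [i / 500 for i in range(501)]
pdf = [math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
       for x in grid]

plt.bar(xs, heights, width=1 / n, alpha=0.5, label="scaled histogram")
plt.plot(grid, pdf, "r-", label="normal pdf")
plt.legend()
plt.savefig("phat_hist.png")
```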