There are two hypotheses about the probability of heads for a given coin: $\theta=0.5$ (hypothesis $H_0$) and $\theta=0.6$ (hypothesis $H_1$). Let $X$ be the number of heads obtained in $n$ tosses, where $n$ is large enough so that normal approximations are appropriate. We test $H_0$ against $H_1$ by rejecting $H_0$ if $X$ is greater than some suitably chosen threshold $k_n$.
$\underline{\text{False Acceptance Probability}}$
$P(X<\gamma;H_1)$ = $\sum\limits_{i=1}^{\gamma}\binom{n}{i}(0.6)^k(1-0.6)^{n-k}$ $\approx$ $\Phi\Bigg(\dfrac{\gamma-\frac{1}{2}-\frac{3}{5}n}{\sqrt{\frac{3}{5}\frac{2}{5}n}}\Bigg)$
I am not able to understand where this $\frac{1}{2}$ comes from in the numerator of $\Phi(\cdot)$. I believe that, by the central limit theorem, since $X$ is a binomial random variable, to standardize it we subtract the mean, which is $np=\frac{3}{5}n$, and divide by the standard deviation, which is $\sqrt{np(1-p)}=\sqrt{\frac{3}{5}\frac{2}{5}n}$. But why is this $\frac{1}{2}$ subtracted from $\gamma$?
What you call "false acceptance probability" is more accurately (and conventionally) called the Type II error of the test. However, the expression you wrote uses its variables inconsistently. For instance, you use $i$ as the index of summation but then use $k$ in the summand. You also introduce $\gamma$ without defining how it relates to the critical value $k_n$ of the test. Moreover, the rejection rule $X > k_n$ is inconsistent with the strict inequality in your probability expression: rejecting when $X > k_n$ makes the acceptance event $X \le k_n$, not $X < \gamma$. I will fix all of these, although they are not related to your actual question (why the $1/2$ is there).
As stated, let $$H_0 : \theta = 0.5 \quad \text{vs.} \quad H_1 : \theta = 0.6$$ be the null and alternative hypotheses for a random variable $$X \mid \theta \sim \operatorname{Binomial}(n, \theta).$$
The test statistic is simply $X \mid H_0$, i.e., $$X \mid H_0 \sim \operatorname{Binomial}(n, \theta = 0.5),$$ and the test will reject $H_0$ in favor of $H_1$ if $X \color{red}{\ge} k_n$, where $k_n \in \{0, 1, \ldots, n\}$ is the critical value of the test. Thus the rejection region contains the outcome $X = k_n$ itself, not just $X > k_n$, which differs from what the problem states. The reason for this deviation will be made clear shortly.
Now, as stated earlier, the Type II error of this test is
$$\beta = \Pr[\text{fail to reject } H_0 \mid H_1],$$
which in terms of the rejection criterion stated above, is
$$\beta = \Pr[X \color{red}{<} k_n \mid H_1].$$
This is why the rejection criterion was written as $X \ge k_n$ earlier: that choice makes the acceptance event exactly the strict inequality $X < k_n$. We also discard the variable $\gamma$, since the rejection region is expressed in terms of $k_n$.
Then because $X \mid H_1 \sim \operatorname{Binomial}(n,\theta = 0.6)$, we have $$\Pr[X < k_n \mid H_1] = \sum_{x=\color{red}{0}}^{\color{red}{k_n - 1}} \binom{n}{x} \theta^x (1-\theta)^{n-x} = \sum_{x=0}^{k_n - 1} \binom{n}{x} (0.6)^x (0.4)^{n-x}.$$ You used a strange mixture of $i$ and $k$. You also started the lower index of summation at $1$, but the support of $X$ begins at $0$, not $1$. Finally, the use of strict inequality means the upper index stops at $k_n - 1$, not $k_n$.
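For the record, this sum is easy to evaluate exactly with a short Python function using only the standard library (the function name and signature here are my own, just for illustration):

```python
from math import comb

def type2_exact(n, k_n, theta=0.6):
    """Exact Type II error Pr[X < k_n | H_1] for X ~ Binomial(n, theta):
    the total probability mass strictly below the critical value k_n
    under the alternative hypothesis."""
    return sum(comb(n, x) * theta**x * (1 - theta)**(n - x)
               for x in range(k_n))
```

Note that `range(k_n)` runs over $x = 0, 1, \ldots, k_n - 1$, matching the strict inequality exactly.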
The corresponding normal approximation to the binomial is accomplished by letting $X$ be approximated by a normal random variable $Y$ with the same mean and variance as $X$; i.e.,
$$Y \sim \operatorname{Normal}(\mu = n\theta, \sigma^2 = n\theta(1-\theta)).$$
Now this is where we answer your question. Applying the continuity correction, we must write
$$\Pr[X < k_n \mid H_1] \approx \Pr\left[Y < k_n - \tfrac{1}{2} \mid H_1\right].$$
This is because the LHS probability excludes the entire probability mass at $X = k_n$. Without continuity correction, the statement $\Pr[Y < k_n]$ would include approximately half of the probability mass at $X = k_n$. So we must adjust the inequality accordingly. Standardizing then yields
$$\begin{align} \Pr\left[Y < k_n - \tfrac{1}{2} \mid H_1\right] &= \Pr\left[\frac{Y - n\theta}{\sqrt{n\theta(1-\theta)}} < \frac{k_n - \frac{1}{2} - (0.6)n}{\sqrt{(0.6)(0.4)n}} \right] \\ &= \Pr\left[Z < \frac{k_n - \frac{1}{2} - (0.6)n}{\sqrt{(0.6)(0.4)n}} \right] \\ &= \Phi \left(\frac{k_n - \frac{1}{2} - (0.6)n}{\sqrt{(0.6)(0.4)n}} \right). \end{align}$$ This is the expression you have on the RHS of your equation, with $k_n$ replacing $\gamma$.
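In code, the continuity-corrected approximation is just as short. Here is a sketch using only the standard library, expressing $\Phi$ through the error function (again, the names are mine):

```python
from math import erf, sqrt

def std_normal_cdf(z):
    """Phi(z), the standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def type2_normal(n, k_n, theta=0.6):
    """Continuity-corrected normal approximation to Pr[X < k_n | H_1]:
    Phi((k_n - 1/2 - n*theta) / sqrt(n*theta*(1 - theta)))."""
    mu = n * theta
    sigma = sqrt(n * theta * (1 - theta))
    return std_normal_cdf((k_n - 0.5 - mu) / sigma)
```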
Notice that the expression for the normal approximation is not derived from the binomial sum. Rather, it is derived directly from the probability statement. In other words, you don't need to have written down the sum first in order to be able to write down the normal approximation, and vice versa.
Let us now perform some calculations with these formulas. Suppose $n = 10$ and $k_n = 6$. Note that $n$ is small here, so the approximation may not perform well, but this choice keeps the computation manageable. The binomial sum, which is the exact Type II error probability, is
$$\beta = \sum_{x=0}^5 \binom{10}{x} (0.6)^x (0.4)^{10-x} = \frac{3582976}{9765625} \approx 0.366897.$$
The normal approximation gives $$\beta \approx \Phi\left(\frac{6 - \frac{1}{2} - (0.6)(10)}{\sqrt{(0.6)(0.4)(10)}}\right) = \Phi(-0.322749) = 0.373443.$$
The absolute error of the approximation is $0.00654607$, not too bad. But if you had not employed the continuity correction, your Type II error under a normal approximation would be $\Phi(0) = 0.5$, and the absolute error is $0.133103$, much worse.
To see that increasing the sample size improves the approximation quality, we will choose $n = 100$, $k_n = 55$. That is to say, we reject $H_0$ if $X \ge 55$ heads in $100$ coin tosses. Such a calculation would be performed with a computer: $$\beta = \sum_{x=0}^{54} \binom{100}{x}(0.6)^x(0.4)^{100-x} \approx 0.13109.$$ This is the chance of erroneously failing to reject $H_0$ when in fact the coin has a $\theta = 0.6$ chance of coming up heads.
The normal approximation with continuity correction gives $$\beta \approx \Phi\left(\frac{55-\frac{1}{2}-60}{\sqrt{24}}\right) = \Phi(-1.12268) = 0.130786.$$ The absolute error of the approximation is $0.000304335$, even better than the case $n = 10$.
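Both scenarios above, with and without the continuity correction, can be reproduced in a few lines (a self-contained sketch; the variable names are mine):

```python
from math import comb, erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

theta = 0.6
for n, k_n in [(10, 6), (100, 55)]:
    # Exact Type II error: Pr[X < k_n | H_1]
    exact = sum(comb(n, x) * theta**x * (1 - theta)**(n - x)
                for x in range(k_n))
    mu, sigma = n * theta, sqrt(n * theta * (1 - theta))
    with_cc = phi((k_n - 0.5 - mu) / sigma)  # continuity-corrected
    no_cc = phi((k_n - mu) / sigma)          # without the correction
    print(f"n={n:3d}: exact={exact:.6f}, "
          f"corrected={with_cc:.6f}, uncorrected={no_cc:.6f}")
```

Running this reproduces the exact and corrected values above, and shows the uncorrected approximation losing noticeably in both cases.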
As an exercise, what are the Type I errors of each of the scenarios described above? That is to say, for $n = 10$ and $k_n = 6$, what is $$\alpha = \Pr[\text{reject } H_0 \mid H_0] = \Pr[X \ge k_n \mid H_0]?$$ And what is it for $n = 100$ and $k_n = 55$? Would you use either of these tests in practice?
As a further (advanced) exercise, what is the smallest $n$ you can find such that there is a choice of $k_n$ such that $\alpha \le 0.05$ and $\beta \le 0.10$ simultaneously?
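If you attempt the advanced exercise and want to check your answer numerically, one way is an exhaustive search over $n$ and $k_n$ using the exact binomial probabilities. This is only a sketch of that search (the function and its signature are my own), exploiting the fact that for fixed $n$, $\alpha$ decreases and $\beta$ increases as $k_n$ grows:

```python
from math import comb

def smallest_n(alpha_max=0.05, beta_max=0.10):
    """Smallest n admitting a critical value k_n with
    alpha = Pr[X >= k_n | theta=0.5] <= alpha_max and
    beta  = Pr[X <  k_n | theta=0.6] <= beta_max."""
    n = 1
    while True:
        cdf0 = cdf1 = 0.0  # Pr[X <= k_n - 1] under H_0 and H_1
        for k_n in range(n + 2):
            alpha = 1.0 - cdf0  # Pr[X >= k_n | H_0]
            beta = cdf1         # Pr[X <  k_n | H_1]
            if alpha <= alpha_max and beta <= beta_max:
                return n, k_n, alpha, beta
            if k_n <= n:
                cdf0 += comb(n, k_n) * 0.5**n
                cdf1 += comb(n, k_n) * 0.6**k_n * 0.4**(n - k_n)
        n += 1
```

Since $\alpha$ is decreasing in $k_n$ and $\beta$ is increasing, the satisfying values of $k_n$ (if any) form an interval, so scanning $k_n$ upward and returning at the first hit finds one whenever one exists.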