The prior has a form such that it is $0.5$ for $\theta=0.6$ or $\theta=0.2$ and $0$ elsewhere. The likelihood function $P(D|h)$ has Bernoulli form.
Hence, the posterior is proportional to $0.5\,P(D|h)$ for $\theta=0.6$ or $\theta=0.2$, and is $0$ elsewhere.
When I calculate the MAP estimate, it comes out to be $N_1/N$, independent of the prior, which is the same value as obtained from a uniform prior. This seems strange.
If we change the prior so that it is $0.4$ for $\theta=0.6$, $0.6$ for $\theta=0.2$, and $0$ elsewhere, the MAP estimate still comes out the same.
Is it correct to say that such a discrete prior has no effect on the MAP estimate?
A MAP estimator maximizes
$$\underbrace{f(\theta|x)}_{\text{posterior}}\propto \underbrace{f(x|\theta)}_{\text{likelihood}}\underbrace{f(\theta)}_{\text{prior}}$$
with respect to $\theta$. In general, the MAP estimator will depend on both the likelihood and the prior since both depend on $\theta$. In the special case of a uniform prior, $f(\theta)$ is constant, so the MAP estimator coincides with the maximum likelihood estimator (MLE) (assuming the prior contains the MLE in its support).
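A quick numerical check of this coincidence, using hypothetical Bernoulli data and a grid over $\theta$ (the data and grid here are illustrative, not from the question):

```python
import numpy as np

# Hypothetical data: 5 Bernoulli draws with 3 successes.
x = np.array([1, 1, 0, 1, 0])
grid = np.linspace(0.01, 0.99, 99)  # candidate theta values

# Log likelihood of iid Bernoulli(theta) data.
log_lik = x.sum() * np.log(grid) + (len(x) - x.sum()) * np.log(1 - grid)
# Uniform prior: log f(theta) is the same constant at every grid point.
log_prior = np.zeros_like(grid)

mle = grid[np.argmax(log_lik)]
map_est = grid[np.argmax(log_lik + log_prior)]
# With a flat prior the two argmaxes coincide, here at x-bar = 0.6.
```

Adding a constant to the log likelihood cannot move its argmax, which is all the uniform-prior case amounts to.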
In your setup, if your data $x_1,\dots,x_n$ are iid Bernoulli($\theta$), and your prior for $\theta$ has two-point support $\{\theta_1,\theta_2\}$ with respective hyperparameter masses $p,1-p\in (0,1)$, then for $x_i\in \{0,1\},$
$$f(x|\theta)=\prod_i\theta^{x_i} (1-\theta)^{1-x_i}=\theta^{\sum_i x_i}(1-\theta)^{n-\sum_i x_i},\\ f(\theta)=p^{\frac{\theta-\theta_2}{\theta_1-\theta_2}}(1-p)^{\frac{\theta-\theta_1}{\theta_2-\theta_1}}{\bf 1}_{\theta\in \{\theta_1,\theta_2\}}.$$
The log posterior for $\theta\in\{\theta_1,\theta_2\}$ is
$$\small \log f(\theta|x)=\text{const}+\sum_i x_i\log \theta+(n-\sum_i x_i)\log (1-\theta)+\frac{\theta-\theta_2}{\theta_1-\theta_2}\log p +\frac{\theta-\theta_1}{\theta_2-\theta_1}\log (1-p)$$
So to find the MAP, you just have to check which of $\theta_1,\theta_2$ gives a higher log posterior. Equivalently, letting $\bar x:=\frac{1}{n}\sum_i x_i,$
$$\small \hat \theta_{\text{MAP}}={\arg\max}_{\theta\in\{\theta_1,\theta_2\}}\left\{ \bar x\log \theta+(1-\bar x)\log (1-\theta)+\left(\frac{1}{n}\log p\right){\bf 1}_{\theta=\theta_1}+\left(\frac{1}{n}\log (1-p)\right){\bf 1}_{\theta=\theta_2}\right\}.$$
The estimator depends on the data and on the prior hyperparameters ($\theta_1,\theta_2,p$). But since the MAP here is a discrete choice between $\theta_1$ and $\theta_2$, it is piecewise constant in $p$: for a particular data set, a small change in $p$ need not change the MAP estimate, even though a large enough change will.
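To make the dependence on $p$ concrete, here is a small sketch of the two-point comparison above (the data set, support points, and function name are illustrative):

```python
import numpy as np

def map_two_point(x, theta1, theta2, p):
    """MAP over a two-point prior: P(theta1) = p, P(theta2) = 1 - p.

    Compares the log posterior (up to the shared normalizing constant)
    at the two support points and returns the maximizer.
    """
    x = np.asarray(x)
    n, s = len(x), x.sum()

    def log_post(theta, mass):
        # log likelihood + log prior mass at this support point
        return s * np.log(theta) + (n - s) * np.log(1 - theta) + np.log(mass)

    return theta1 if log_post(theta1, p) >= log_post(theta2, 1 - p) else theta2

# Illustrative data: 4 successes in 10 trials.
x = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(map_two_point(x, 0.6, 0.2, 0.5))  # 0.6: equal prior masses
print(map_two_point(x, 0.6, 0.2, 0.3))  # 0.2: shifting mass flips the MAP
```

So the prior certainly matters: moving mass from $\theta_1$ to $\theta_2$ flips the estimate for this data set, while smaller perturbations of $p$ leave it unchanged.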