Let $0<p<1$ and let $n$ be a positive integer. Let $\mathbf G(n, p)$ be the probability space whose underlying set is the set of all graphs on the node set $\{1, \ldots, n\}$, where a graph $G$ with $m$ edges is assigned probability $p^m(1-p)^{\binom{n}{2}-m}$. This is the same as saying that each edge appears independently with probability $p$. In other words, this is the Erdős–Rényi model.
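For concreteness, here is a minimal Python sketch of sampling from $\mathbf G(n, p)$; the function name and the representation of a graph as a set of sorted vertex pairs are my own choices, not part of the question.

```python
import random

def sample_gnp(n, p, seed=0):
    """Sample a graph from G(n, p): each of the C(n, 2) possible edges
    on the node set {1, ..., n} appears independently with probability p."""
    rng = random.Random(seed)
    edges = set()
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            if rng.random() < p:
                edges.add((i, j))
    return edges

G = sample_gnp(6, 0.5)  # a random graph on {1, ..., 6}
```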
Let $k<n$ and let $\mathcal I_{k, n}$ denote the set of all injective maps from $\{1, \ldots, k\}$ to $\{1, \ldots, n\}$. We denote the cardinality of $\mathcal I_{k, n}$ by $(n)_k$. Note that $(n)_k= n(n-1)\cdots(n-k+1)$.
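As a small illustration (the helper name is mine), $(n)_k$ can be computed directly from the product formula:

```python
def falling_factorial(n, k):
    """(n)_k = n (n-1) ... (n-k+1), the number of injections
    from {1, ..., k} into {1, ..., n}."""
    result = 1
    for t in range(n, n - k, -1):
        result *= t
    return result

print(falling_factorial(5, 3))  # 5 * 4 * 3 = 60
```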
Fix a graph $F$ on the node set $\{1, \ldots, k\}$. We say that a map $\varphi\in \mathcal I_{k, n}$ is an induced embedding of $F$ in $G\in \mathbf G(n, p)$ if for all $1\leq i< j\leq k$, $\{\varphi(i), \varphi(j)\}$ is an edge of $G$ if and only if $\{i, j\}$ is an edge of $F$. In other words, $\varphi$ is an induced embedding of $F$ in $G$ if the subgraph of $G$ induced by the image of $\varphi$ is a copy of $F$, with $\varphi(i)$ playing the role of $i$.
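The definition translates directly into code; a sketch (the names are mine, and edge sets are stored as sorted pairs):

```python
from itertools import combinations

def is_induced_embedding(phi, F_edges, G_edges, k):
    """True iff the injection phi (given as the tuple (phi(1), ..., phi(k)))
    is an induced embedding of F in G."""
    F = {tuple(sorted(e)) for e in F_edges}
    G = {tuple(sorted(e)) for e in G_edges}
    return all(
        (tuple(sorted((phi[i - 1], phi[j - 1]))) in G) == ((i, j) in F)
        for i, j in combinations(range(1, k + 1), 2)
    )

# F = path 1-2-3; G = path 1-2-3-4
F_edges = [(1, 2), (2, 3)]
G_edges = [(1, 2), (2, 3), (3, 4)]
print(is_induced_embedding((1, 2, 3), F_edges, G_edges, 3))  # True
print(is_induced_embedding((1, 3, 2), F_edges, G_edges, 3))  # False
```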
Define a random variable $X:\mathbf G(n, p)\to \mathbf R$ by $$X(G)= \frac{|\{\varphi\in \mathcal I_{k, n}:\ \varphi \text{ is an induced embedding of } F \text{ in } G\}|}{(n)_k}.$$ Endowing $\mathcal I_{k, n}$ with the uniform probability measure, $X(G)$ is the probability that a uniformly random injection $\varphi\in \mathcal I_{k, n}$ is an induced embedding of $F$ in $G$.
If $F$ has $r$ edges, then it is easy to see that $E[X]=p^r(1-p)^{\binom{k}{2}-r}$. I want to show that $X$ is concentrated around its expectation.
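For small $n$ and $k$, the formula $E[X]=p^r(1-p)^{\binom{k}{2}-r}$ can be verified by brute-force enumeration of all $2^{\binom n2}$ graphs. A self-contained sketch (all names are mine; the induced-embedding test is written inline):

```python
from itertools import combinations, permutations, product

def expected_X(n, k, F_edges, p):
    """Compute E[X] exactly by summing over all graphs on {1, ..., n},
    each weighted by p^m (1-p)^(C(n,2) - m)."""
    pairs = list(combinations(range(1, n + 1), 2))
    F = {tuple(sorted(e)) for e in F_edges}
    nk = 1
    for t in range(n, n - k, -1):  # (n)_k injections
        nk *= t
    total = 0.0
    for mask in product([0, 1], repeat=len(pairs)):
        G = {pairs[i] for i in range(len(pairs)) if mask[i]}
        prob = p ** len(G) * (1 - p) ** (len(pairs) - len(G))
        count = sum(  # induced embeddings of F in this G
            all((tuple(sorted((phi[i - 1], phi[j - 1]))) in G) == ((i, j) in F)
                for i, j in combinations(range(1, k + 1), 2))
            for phi in permutations(range(1, n + 1), k)
        )
        total += prob * count / nk
    return total

# F = path 1-2-3: r = 2 edges, so E[X] should equal p^2 (1-p)^(3-2)
val = expected_X(4, 3, [(1, 2), (2, 3)], 0.5)
print(round(val, 6))  # 0.125 = 0.5^2 * 0.5
```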
I have tried the following (which you may ignore if you can see how to prove the concentration result):
I want to find the second moment of $X$, for which I need to find $E[X^2]$. For $\varphi\in \mathcal I_{k, n}$ and $G\in \mathbf G(n, p)$, let $\delta(\varphi, G)$ be $1$ if $\varphi$ is an induced embedding of $F$ in $G$ and $0$ otherwise.
Now $$E[X^2]= \sum_{G\in \mathbf G(n, p)}P[G] \frac{|\{\varphi\in \mathcal I_{k, n}:\ \varphi \text{ is an induced embedding of } F \text{ in } G\}|^2}{(n)_k^2}.$$ Thus $$E[X^2] = \sum_{G\in \mathbf G(n, p)} P[G]\frac{\left(\sum_{\varphi\in \mathcal I_{k, n}}\delta(\varphi, G)\right)^2}{(n)_k^2},$$ which, since $\delta(\varphi, G)^2 = \delta(\varphi, G)$, gives $$ E[X^2] = \sum_{G\in \mathbf G(n, p)} P[G]\frac{\sum_{\varphi\in \mathcal I_{k, n}}\delta(\varphi, G)}{(n)_k^2} + \sum_{G\in \mathbf G(n, p)} P[G]\frac{\sum_{\varphi, \psi\in \mathcal I_{k, n}:\ \varphi\neq \psi}\delta(\varphi, G)\delta(\psi, G)}{(n)_k^2} $$
which gives
$$ E[X^2] = \frac{E[X]}{(n)_k} + \sum_{G\in \mathbf G(n, p)} P[G]\frac{\sum_{\varphi, \psi\in \mathcal I_{k, n}: \varphi\neq \psi}\delta(\varphi, G)\delta(\psi, G)}{(n)_k^2} $$
I am unable to estimate the second term. Can somebody help?
The notation you are using seems too cumbersome to work with. You basically never want to have an explicit sum over random graphs $G$ with $P(G)$ in it; such a sum is always an expectation of some random variable.
First of all, it will make our lives easier if we work with the random variable $Y = (n)_k X$. This is an integer-valued random variable that counts the induced embeddings of $F$ in $G$. And if $Y \sim \mathbb E[Y]$, then $X \sim \mathbb E[X]$, because the two sides differ by the same factor of $(n)_k$.
Second, for a fixed $\varphi \in \mathcal I_{k,n}$, let $Y_\varphi$ be the random variable that is $1$ if $\varphi$ is an induced embedding of $F$ in $G$, and $0$ otherwise. (When we're not in a sum over $\varphi$, I will use $Y_\varphi$ to denote an arbitrary one of these, since they're identically distributed.) Then we have $$Y = \sum_{\varphi \in \mathcal I_{k,n}} Y_\varphi, \qquad \mathbb E[Y] = (n)_k\, \mathbb E[Y_\varphi],$$ and your formula becomes $$ \mathbb E[Y^2] = \mathbb E\left[\left(\sum_{\varphi \in \mathcal I_{k,n}} Y_\varphi\right)^2\right] = \sum_{\varphi} \mathbb E[Y_\varphi] + \sum_{\varphi \ne \psi} \mathbb E[Y_\varphi Y_\psi] = \mathbb E[Y] + \sum_{\varphi \ne \psi} \mathbb E[Y_\varphi Y_\psi]. $$ (We will not actually need to separate out the diagonal terms contributing to the $\mathbb E[Y]$, but it won't hurt anything either, and I wanted to match the expression in the question.)
Next, a key thing that often happens in applications of the second moment method is that the second sum here contains many pairs $(\varphi,\psi)$ whose images are disjoint; in that case, $Y_\varphi$ and $Y_\psi$ depend on disjoint sets of edges, hence are independent, and so $\mathbb E[Y_\varphi Y_\psi] = \mathbb E[Y_\varphi] \mathbb E[Y_\psi] = \mathbb E[Y_\varphi]^2$. Moreover, the number of such pairs is at most $(n)_k^2$, the total number of pairs $(\varphi,\psi)$. So the contribution from independent pairs is at most $(n)_k^2 \mathbb E[Y_\varphi]^2 = \mathbb E[Y]^2$, and we get $$ \mathbb E[Y^2] \le \mathbb E[Y] + \mathbb E[Y]^2 + \sum_{\varphi \sim \psi} \mathbb E[Y_\varphi Y_\psi], $$ or $$ \operatorname{Var}[Y] = \mathbb E[Y^2]- \mathbb E[Y]^2 \le \mathbb E[Y] + \sum_{\varphi \sim \psi} \mathbb E[Y_\varphi Y_\psi], $$ where by $\varphi \sim \psi$ I denote the sum over correlated pairs $(\varphi, \psi)$: those whose images are not disjoint.
It is a fact that if $\mathbb E[Y] \to \infty$ and $\frac{\operatorname{Var}[Y]}{\mathbb E[Y]^2} \to 0$ as $n \to \infty$, then $Y \sim \mathbb E[Y]$ almost always, by Chebyshev's inequality (see, for example, Corollary 4.3.3 in Alon and Spencer's The Probabilistic Method).
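Spelling out the Chebyshev step: for any fixed $\varepsilon > 0$, $$ \Pr\bigl[\,|Y - \mathbb E[Y]| \ge \varepsilon\, \mathbb E[Y]\,\bigr] \;\le\; \frac{\operatorname{Var}[Y]}{\varepsilon^2\, \mathbb E[Y]^2} \longrightarrow 0, $$ so $Y = (1 \pm \varepsilon)\,\mathbb E[Y]$ with probability tending to $1$.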
In our case, $\mathbb E[Y] = \Theta(n^k)$, because each of the $(n)_k = \Theta(n^k)$ injections has the same constant probability $p^r(1-p)^{\binom k2 - r}$ of being realized; so the first term $\mathbb E[Y]$ is definitely $o(\mathbb E[Y]^2)$. (This is why I wanted to work with $Y$ rather than $X$: so that we could distinguish $\mathbb E[Y]$ and $\mathbb E[Y]^2$ asymptotically.)
In the second term, each expectation $\mathbb E[Y_\varphi Y_\psi]$ is $O(1)$, and the key is that there are only $O(n^{2k-1})$ terms in the sum: for each of $O(n^k)$ injections $\varphi$, there are only $O(n^{k-1})$ injections $\psi$ such that $\varphi \sim \psi$. (The two images have to share at least one vertex, so there are only $O(n^{k-1})$ choices for the remaining vertices.) As a result, the second term is also $o(\mathbb E[Y]^2)$, and we obtain concentration.
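Putting the two estimates together: $$ \frac{\operatorname{Var}[Y]}{\mathbb E[Y]^2} \;\le\; \frac{\mathbb E[Y] + \sum_{\varphi\sim\psi}\mathbb E[Y_\varphi Y_\psi]}{\mathbb E[Y]^2} \;=\; \frac{\Theta(n^{k}) + O(n^{2k-1})}{\Theta(n^{2k})} \;=\; O\!\left(\frac{1}{n}\right) \longrightarrow 0. $$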
(I am assuming here that you wanted concentration as $n \to \infty$ with $k$ fixed, which remains unstated in the question. This method can also give a concentration result for specific $n$ and $k$, but in that case you have to make the use of Chebyshev's inequality explicit to get a bound.)