In Bayes' theorem, $$p(y|x) = \frac{p(x|y)p(y)}{p(x)},$$ the factor $p(x|y)$ is called the likelihood. I assume it's just the conditional probability of $x$ given $y$, right?
Maximum likelihood estimation (MLE) tries to maximize $p(x|y)$, right? Or is $p(x|y)$ a function of some parameters $\theta$, that is $p(x|y; \theta)$, and MLE tries to find the $\theta$ which maximizes $p(x|y; \theta)$?
As you say, when seen as a function of $x$, $p(x|y)$ is the conditional probability density function for observing $X=x$ given $Y=y$.
Since it is a probability density function in $x$, you have $\displaystyle\int p(x|y) \, dx = 1$.
But $p(x|y)$ can also be seen as a function of $y$, namely the likelihood function for $y$ having observed $X=x$. There is no reason to expect its integral with respect to $y$ to be $1$, so it is not a probability density function for $Y$.
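As a concrete (hypothetical) illustration, take $p(x|y) = y e^{-yx}$, i.e. an exponential density with rate $y$. Integrating over $x$ for fixed $y$ gives $1$, but integrating the same expression over $y$ for fixed $x$ gives $1/x^2$, which is generally not $1$. A quick numerical check with Riemann sums:

```python
import math

def p(x, y):
    # Exponential(rate=y) density at x: a hypothetical concrete choice of p(x|y)
    return y * math.exp(-y * x)

x_obs = 2.0
step = 0.001
grid = [i * step for i in range(1, 20000)]  # covers (0, 20]

# Integrate over x for a fixed y = 1.5: a density in x, so the result is ~1.
int_over_x = sum(p(x, 1.5) * step for x in grid)

# Integrate over y for the fixed observation x = 2: analytically 1/x^2 = 0.25,
# so the likelihood is NOT a probability density in y.
int_over_y = sum(p(x_obs, y) * step for y in grid)

print(int_over_x)  # close to 1
print(int_over_y)  # close to 0.25
```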
You could turn it into a posterior probability density function for $Y$ by combining with a prior for $y$ using Bayes' theorem.
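Continuing the exponential example above (a hypothetical choice, with an assumed $\mathrm{Exp}(1)$ prior on $y$), Bayes' theorem turns the likelihood into a genuine density in $y$ once you multiply by the prior and divide by the evidence $p(x) = \int p(x|y)\,p(y)\,dy$:

```python
import math

x_obs = 2.0
step = 0.001
ys = [i * step for i in range(1, 20000)]

def likelihood(y):
    # p(x_obs | y): Exponential(rate=y) density at x_obs
    return y * math.exp(-y * x_obs)

def prior(y):
    # hypothetical Exp(1) prior density on the rate y
    return math.exp(-y)

# Evidence p(x) = integral of likelihood * prior over y (Riemann sum)
evidence = sum(likelihood(y) * prior(y) * step for y in ys)

# Posterior density p(y|x) = likelihood * prior / evidence
posterior = [likelihood(y) * prior(y) / evidence for y in ys]

# Unlike the raw likelihood, the posterior integrates to 1 over y
total = sum(q * step for q in posterior)
print(total)  # ~1
```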
Or you could seek to find the value for $y$ which maximises $p(x|y)$, to give the maximum likelihood estimate of $Y$.
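With the same hypothetical exponential likelihood, the maximum likelihood estimate has a closed form ($\hat{y} = 1/x$, from setting the derivative of $\log\left(y e^{-yx}\right)$ to zero), so a simple grid search can be checked against it:

```python
import math

x_obs = 2.0

def likelihood(y):
    # likelihood of rate y for a single Exponential observation x_obs
    return y * math.exp(-y * x_obs)

# Grid search for the maximiser; analytically the MLE is 1/x_obs = 0.5
grid = [i / 1000 for i in range(1, 5000)]
y_hat = max(grid, key=likelihood)
print(y_hat)  # ~0.5
```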