The value of the PDF $f_X(x)$ at a particular point $x = x_1$ has no probabilistic meaning on its own (by definition $P(X = x_1) = 0$). Yet we still see $f_X(x_1)$ used as its likelihood.
My questions are:
What is the intuition behind using $f_X(x)$ as the likelihood?
Am I correct in saying that $f_X(x_1)$ is meaningful only for comparison with $f_X(x_i)$, $i \ne 1$, and not otherwise?
The maximization in maximum-likelihood estimation is w.r.t. the unknown parameters over their parameter space, i.e., you maximize $L(\theta; x_1, \ldots, x_n) = \prod_{i=1}^n f(\theta; x_i)$ over $\Theta$. This quantity indeed has no probabilistic meaning (from the frequentist point of view).
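As a concrete sketch of this maximization (using a hypothetical exponential sample and a simple NumPy grid search; the distribution and the sample are my own assumptions, not from the question), one typically minimizes the negative log-likelihood, since the log turns the product $\prod_i f(\theta; x_i)$ into a sum:

```python
import numpy as np

# Hypothetical data: an Exponential(theta) sample with true scale theta = 2.0.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1000)

# Negative log-likelihood of Exponential(scale=theta):
#   L(theta; x_1..x_n) = prod_i (1/theta) exp(-x_i / theta)
#   -log L(theta)      = n*log(theta) + sum_i x_i / theta
thetas = np.linspace(0.5, 5.0, 2001)          # candidate values over Theta
nll = len(x) * np.log(thetas) + x.sum() / thetas

theta_hat = thetas[np.argmin(nll)]            # maximizer of L over the grid

# For the exponential, the MLE has a closed form: the sample mean.
print(theta_hat, x.mean())
```

Note that the curve `nll` is compared only across candidate values of $\theta$; its absolute height carries no probabilistic meaning, which mirrors the comparison-only role of $f_X(x_1)$ asked about above.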