This is quoted from the section "Normalization of the states in $x$" of the Feynman Lectures:
We return now to the discussion of the modifications of our basic equations which are required when we are dealing with a continuum of base states. When we have a finite number of discrete states, a fundamental condition which must be satisfied by the set of base states is $$⟨i|j⟩=δ_{ij}.\tag{16.36}$$ If a particle is in one base state, the amplitude to be in another base state is $0.$ By choosing a suitable normalization, we have defined the amplitude $⟨i|i⟩$ to be $1.$ These two conditions are described by Eq. $(16.36).$ We want now to see how this relation must be modified when we use the base states $|x⟩$ of a particle on a line. If the particle is known to be in one of the base states $|x⟩$, what is the amplitude that it will be in another base state $|x'⟩$? If $x$ and $x'$ are two different locations along the line, then the amplitude $⟨x|x'⟩$ is certainly $0,$ so that is consistent with Eq. $(16.36).$ But if $x$ and $x'$ are equal, the amplitude $⟨x|x'⟩$ will not be $1,$ because of the same old normalization problem. To see how we have to patch things up, we go back to Eq. $(16.19),$ and apply this equation to the special case in which the state $|ϕ⟩$ is just the base state $|x'⟩.$ We would have then $$⟨x'|ψ⟩=\int ⟨x'|x⟩ψ(x)dx.\tag{16.37}$$ Now the amplitude $⟨x|ψ⟩$ is just what we have been calling the function $ψ(x).$ Similarly the amplitude $⟨x'|ψ⟩,$ since it refers to the same state $|ψ⟩,$ is the same function of the variable $x',$ namely $ψ(x').$ We can, therefore, rewrite Eq. 
$(16.37)$ as $$ψ(x')=\int ⟨x'|x⟩ψ(x)dx.\tag{16.38}$$ This equation must be true for any state $|ψ⟩$ and, therefore, for any arbitrary function $ψ(x).$ This requirement should completely determine the nature of the amplitude $⟨x|x'⟩$—which is, of course, just a function that depends on $x$ and $x′.$ Our problem now is to find a function $f(x,x'),$ which when multiplied into $ψ(x),$ and integrated over all $x$ gives just the quantity $ψ(x').$ It turns out that there is no mathematical function which will do this! At least nothing like what we ordinarily mean by a “function.” Suppose we pick $x'$ to be the special number $0$ and define the amplitude $⟨0|x⟩$ to be some function of $x,$ let’s say $f(x).$ Then Eq. $(16.38)$ would read as follows: $$ψ(0)=∫f(x)ψ(x)dx.\tag{16.39}$$ What kind of function $f(x)$ could possibly satisfy this equation? Since the integral must not depend on what values $ψ(x)$ takes for values of $x$ other than $0,$ $f(x)$ must clearly be $0$ for all values of $x$ except $0.$ But if $f(x)$ is $0$ everywhere, the integral will be $0,$ too, and Eq. $(16.39)$ will not be satisfied. So we have an impossible situation: we wish a function to be $0$ everywhere but at a point, and still to give a finite integral. Since we can’t find a function that does this, the easiest way out is just to say that the function $f(x)$ is defined by Eq. $(16.39).$ Namely, $f(x)$ is that function which makes $(16.39)$ correct. The function which does this was first invented by Dirac and carries his name. We write it $δ(x).$ All we are saying is that the function $δ(x)$ has the strange property that if it is substituted for $f(x)$ in the Eq. 
$(16.39),$ the integral picks out the value that $ψ(x)$ takes on when $x$ is equal $0$; and, since the integral must be independent of $ψ(x)$ for all values of $x$ other than $0,$ the function $δ(x)$ must be $0$ everywhere except at $x=0.$ Summarizing, we write $$ ⟨0|x⟩=δ(x),\tag{16.40}$$ where $δ(x)$ is defined by $$ψ(0)=∫δ(x)ψ(x)dx.\tag{16.41}$$ Notice what happens if we use the special function “1” for the function $ψ$ in Eq. $(16.41).$ Then we have the result $$1=∫δ(x)dx.\tag{16.42}$$ That is, the function $δ(x)$ has the property that it is $0$ everywhere except at $x=0$ but has a finite integral equal to unity. We must imagine that the function $δ(x)$ has such a fantastic infinity at one point that the total area comes out equal to one.
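The "fantastic infinity" can be made concrete numerically with a so-called nascent delta function: a narrow normalized Gaussian whose width shrinks toward zero. The sketch below (my own illustration, not from the lecture; the function names are invented for it) checks that such an approximation satisfies both $(16.41)$ and $(16.42)$ approximately:

```python
import math

def delta_approx(x, eps):
    """Narrow normalized Gaussian: a 'nascent' delta function.
    As eps -> 0 it vanishes everywhere except near x = 0,
    while its integral stays equal to 1."""
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def integrate(f, a, b, n=200001):
    """Simple midpoint-rule integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

eps = 0.001
psi = lambda x: math.cos(x) + x**2   # an arbitrary smooth test function; psi(0) = 1

total = integrate(lambda x: delta_approx(x, eps), -1.0, 1.0)
picked = integrate(lambda x: delta_approx(x, eps) * psi(x), -1.0, 1.0)

print(total)   # close to 1, as in Eq. (16.42)
print(picked)  # close to psi(0) = 1, as in Eq. (16.41)
```

Shrinking `eps` further makes `picked` agree with `psi(0)` ever more closely, which is the sense in which $δ(x)$ "picks out" the value of $ψ$ at the origin.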
I'm really having trouble understanding Feynman's statement: "Since the integral must not depend on what values $ψ(x)$ takes for values of $x$ other than $0,$..." Why must the integral not depend on the values $\psi(x)$ takes for $x$ other than $0$? Can anyone please explain this line?
First of all, I believe when Feynman writes $\int f(x)\psi(x) \,dx$ here, he means $\int_{-\infty}^{\infty} f(x)\psi(x) \,dx$. Omitting the bounds is a physicist's notational shortcut.
As mentioned earlier with regard to Equation $(16.38)$:
The equation
$$\psi(0)=\int f(x)\psi(x) \,dx.\tag{16.39}$$
also must be true for any arbitrary function $\psi(x).$
So let $f(x)$ be a function that satisfies $(16.39)$ for every possible function $\psi(x)$; in particular, suppose $(16.39)$ holds when we use some particular function $\psi(x) = \psi_0(x).$ And suppose $f(x)$ is not zero everywhere except at $x=0$; more precisely, suppose there are real numbers $a$ and $b$ (both positive or both negative, so that the interval between them excludes $0$) such that $f(x) > 0$ whenever $a < x < b$. Let's define $\psi_1(x)$ as follows:
$$\psi_1(x) = \begin{cases} \psi_0(x) + 100 & \text{if $a < x < b$} \\ \psi_0(x) & \text{otherwise.} \end{cases}$$
In particular, $\psi_1(0) = \psi_0(0)$ because the interval $(a,b)$ does not include zero. Then we have
\begin{align} \int f(x)\psi_1(x)\,dx &= \int_{-\infty}^a f(x)\psi_0(x)\,dx + \int_a^b \left(f(x)\psi_0(x) + 100f(x)\right)\,dx + \int_b^\infty f(x)\psi_0(x)\,dx \\ &= \left(\int_{-\infty}^a f(x)\psi_0(x)\,dx + \int_a^b f(x)\psi_0(x)\,dx + \int_b^\infty f(x)\psi_0(x)\,dx\right)\\ & \qquad \qquad \qquad + \int_a^b 100f(x)\,dx \\ &= \int_{-\infty}^{\infty} f(x)\psi_0(x)\,dx + \int_a^b 100 f(x)\,dx \\ &= \psi_0(0) + \int_a^b 100 f(x)\,dx \\ &= \psi_1(0) + \int_a^b 100 f(x)\,dx \\ &> \psi_1(0) \end{align}
which says Equation $(16.39)$ is not true for $\psi_1(x)$.
This is just a crude demonstration, not a proof, but the idea is this: if in any way we make $f(x)$ non-zero on some interval of positive or negative real numbers, we can construct a variation of $\psi_0(x)$, call the new function $\psi_1(x)$, such that $\int f(x)\psi_1(x)\,dx$ should equal $\psi_1(0)$ according to Equation $(16.39)$, yet $\int f(x)\psi_1(x)\,dx \neq \psi_1(0)$. So if we want Equation $(16.39)$ to be true for every possible choice of function $\psi(x)$, we cannot let $f(x)$ take non-zero values over any such interval; it can be non-zero only at zero.
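The construction above can be checked numerically. The sketch below (my own illustration; the particular $f$, $\psi_0$, and interval $(a,b)=(1,2)$ are arbitrary choices) shows that bumping $\psi$ on an interval where $f$ is non-zero changes the integral without changing $\psi(0)$, so $(16.39)$ cannot hold for both functions:

```python
import math

def integrate(f, a, b, n=100000):
    """Simple midpoint-rule integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# A hypothetical f(x) that is non-zero on the interval (a, b) = (1, 2),
# away from x = 0 -- exactly the situation the argument rules out.
a, b = 1.0, 2.0
f = lambda x: 1.0 if a < x < b else 0.0

psi0 = lambda x: math.sin(x)                               # arbitrary starting function
psi1 = lambda x: psi0(x) + (100.0 if a < x < b else 0.0)   # bumped by 100 on (a, b)

lhs0 = integrate(lambda x: f(x) * psi0(x), -5.0, 5.0)
lhs1 = integrate(lambda x: f(x) * psi1(x), -5.0, 5.0)

print(psi1(0) == psi0(0))   # True: the bump avoids x = 0
print(lhs1 - lhs0)          # about 100*(b - a) = 100, not 0
```

Since both integrals would have to equal the same number $\psi_1(0)=\psi_0(0)$ for $(16.39)$ to hold, yet they differ by roughly $100$, no ordinary function $f$ with this behavior can work.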
(Somewhere I'm sure there's a rigorous mathematical definition of the Dirac delta function that patches up the holes in this exposition, but I can only remember the physicists' version. Since this is for a Feynman lecture, that seems appropriate anyway.)