Suppose we have a point $\mathbf P$ and a vector $\mathbf n$ in plain ordinary 3D space. Here I am deliberately using upper-case letters for points, and lower-case letters for vectors, since they are two different things. Then the set of points $\mathbf X$ that satisfy $(\mathbf X - \mathbf P) \cdot \mathbf n = 0$ is a plane, as we all know. The quantity $\mathbf X - \mathbf P$ is the difference of two points, so it's a vector, and taking its dot product with $\mathbf n$ is legitimate. So far, so good.
But, it's very tempting to rewrite this as $\mathbf X \cdot \mathbf n = \mathbf P \cdot \mathbf n$. Then, if I let $\mathbf P \cdot \mathbf n = d$, the plane equation becomes $\mathbf X \cdot \mathbf n = d$. From a programming point of view, this is nice -- I can now represent the plane by four numbers ($\mathbf n$ and $d$) rather than six ($\mathbf n$ and $\mathbf P$). Also, if I write $\mathbf n = (a,b,c)$ and $\mathbf X = (x,y,z)$, then the equation becomes $ax + by + cz = d$, which is the plane representation that we all know from high school geometry. Nice, comfy, familiar. Good.
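To make the programming angle concrete, here is a minimal sketch of the four-number representation. The function names (`make_plane`, `on_plane`) and the use of NumPy are my own illustrative choices, not from any particular library:

```python
import numpy as np

def make_plane(P, n):
    """Build the (n, d) representation from a point P on the plane
    and a normal vector n, using d = P . n."""
    n = np.asarray(n, dtype=float)
    d = float(np.dot(P, n))
    return n, d

def on_plane(X, n, d, tol=1e-9):
    """A point X lies on the plane iff X . n == d (up to tolerance)."""
    return abs(np.dot(X, n) - d) < tol

# Example: the plane through P = (1, 2, 3) with normal n = (0, 0, 1),
# i.e. the plane z = 3.
n, d = make_plane((1, 2, 3), (0, 0, 1))
print(d)                            # 3.0
print(on_plane((5, -7, 3), n, d))   # True: any point with z = 3 qualifies
```

Note that only `n` and `d` need to be stored; the original point `P` can be discarded once `d` is computed.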
The problem is that expressions like $\mathbf X \cdot \mathbf n$ and $\mathbf P \cdot \mathbf n$ don't make sense -- everyone knows that you can't take dot products of points and vectors. Computationally, everything works fine, but pedagogically, it feels like something is wrong.
Can anyone make some sense out of this, please? I'd like to write plane equations in the form $\mathbf X \cdot \mathbf n = d$ and still be able to sleep peacefully at night.
It's fine. A point $X$ is usually identified with the vector $X-O$, where $O=(0,0,0)$ is the origin, and therefore one usually writes $X$ for both the vector and the point. If you still insist on not using the same notation for both, you can simply subtract $O$ on both sides:
$$(X-P)\cdot n = 0 \iff ((X-O)-(P-O))\cdot n=0 \iff (X-O)\cdot n=(P-O)\cdot n$$
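As a quick numerical sanity check of this equivalence (with sample values for $P$, $n$, and $X$ chosen purely for illustration):

```python
import numpy as np

O = np.zeros(3)                 # the origin
P = np.array([1.0, -2.0, 4.0])  # a point on the plane
n = np.array([3.0, 0.0, 1.0])   # the plane's normal

# Pick a point X on the plane: start at P and move along a direction v
# orthogonal to n, so that (X - P) . n = 0 by construction.
v = np.array([1.0, 5.0, -3.0])  # v . n = 3 + 0 - 3 = 0
X = P + 2.0 * v

lhs = np.dot(X - P, n)                     # point-difference form
rhs = np.dot(X - O, n) - np.dot(P - O, n)  # "subtract O on both sides" form
print(lhs, rhs)                            # both are 0.0
```

Both expressions agree, as they must: subtracting $O$ changes nothing numerically, it only makes the types line up (vector dotted with vector).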