For the following convex minimization problem:
\begin{equation} \begin{array}{rl} \textrm{minimize} & f(x)\\ \textrm{subject to} & Ax=b, \end{array} \end{equation}
where $f$ is differentiable, the optimality conditions are: $$Ax^*=b, \qquad \nabla f(x^*)+A^T\nu^*=0.$$
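For concreteness, here is a small numerical sketch (all problem data made up for illustration) with a quadratic objective $f(x)=\tfrac12 x^TPx+q^Tx$, where $\nabla f(x) = Px+q$. The two optimality conditions then stack into a single linear (KKT) system that can be solved directly:

```python
import numpy as np

# Hypothetical problem data: f(x) = 0.5 x^T P x + q^T x with P positive
# definite, subject to A x = b.  All values are made up for illustration.
rng = np.random.default_rng(0)
n, m = 4, 2
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)          # symmetric positive definite
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Stack the two optimality conditions into one linear system:
#   P x* + A^T nu* = -q     (dual feasibility: grad f(x*) + A^T nu* = 0)
#   A x*           =  b     (primal feasibility)
KKT = np.block([[P, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(KKT, rhs)
x_star, nu_star = sol[:n], sol[n:]

print(np.allclose(A @ x_star, b))                      # primal feasibility
print(np.allclose(P @ x_star + q + A.T @ nu_star, 0))  # dual feasibility
```

(The KKT matrix is nonsingular here because $P \succ 0$ and $A$ has full row rank, so both conditions are satisfied exactly by the solve.)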
In Boyd & Vandenberghe's "Convex Optimization" (p. 521), $Ax^*=b$ are called the primal feasibility equations, and $\nabla f(x^*)+A^T\nu^*=0$ are called the dual feasibility equations. The name of the former makes perfect sense to me, since $Ax=b$ is exactly the set of equations that defines the feasible set of the primal problem (within the domain).
However, I'm not so sure why $\nabla f(x^*)+A^T\nu^*=0$ are called the "dual feasibility equations"? Isn't the dual problem an unconstrained concave maximization:
$$\sup_{\nu} \left[\inf_{x\in \mathcal D} f(x)+\nu^T(Ax-b)\right]?$$
Is it because we view the attainability of the infimum within the square brackets for a given $\nu$ as the feasibility condition for the dual problem? (This just defines the domain of the dual problem, doesn't it?)
Based on our discussions, I would say this. Using an extended-real convention, the domain $\mathcal{D}$ is absorbed into $f$ itself, and the Lagrange dual is
$$\begin{array}{ll} \text{maximize} & g(\nu) \triangleq \inf_x L(x,\nu) = \inf_x f(x) + \nu^T ( Ax - b ). \end{array}$$
There is no explicit dual constraint for this problem, because the Lagrange multiplier for an equality constraint is itself unconstrained. In contrast, for an inequality constraint $Ax\leq b$, the dual problem would carry the explicit constraint $\nu \geq 0$.
However, we know that in practice there are often values of $\nu$ for which $\inf_x L(x,\nu) = -\infty$. These act as an implicit constraint on $\nu$, reflected in the domain of $g$. It is common practice to identify such implicit constraints and make them explicit. (Indeed, this is necessary if one does not wish to adopt an extended-real convention.) So the dual becomes
$$\begin{array}{ll} \text{maximize} & g(\nu) \triangleq \inf_x f(x) + \nu^T ( Ax - b ) \\ \text{subject to} & \inf_x \left[ f(x) + \nu^T ( Ax - b ) \right] > -\infty. \end{array}$$
This is true even if $f$ is not differentiable. If $f$ is differentiable on all of $\mathbb{R}^n$, then this is equivalent to
$$\begin{array}{ll} \text{maximize} & g(\nu) \triangleq \inf_x f(x) + \nu^T ( Ax - b ) \\ \text{subject to} & \exists x:~ \nabla f(x) + A^T \nu = 0. \end{array}$$
[EDIT: as the OP points out, there are cases where it is not truly equivalent; rather, the gradient condition is sufficient.] This will also hold if $f$ is differentiable in an extended-real sense. That is, if:

- the interior of $\operatorname{dom} f$ is nonempty;
- $f$ is differentiable throughout the interior of $\operatorname{dom} f$; and
- $\|\nabla f(x_k)\| \to \infty$ for every sequence $(x_k)$ in the interior of $\operatorname{dom} f$ converging to a boundary point of $\operatorname{dom} f$ (i.e., $f$ is essentially smooth).
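The implicit constraint is easiest to see for a linear objective. If $f(x) = c^Tx$, then $\nabla f(x) = c$ for every $x$, so the dual feasibility equation reduces to $c + A^T\nu = 0$, with no $x$ at all; for any other $\nu$, the Lagrangian is linear in $x$ with nonzero slope and its infimum is $-\infty$. A small sketch with made-up data:

```python
import numpy as np

# Hypothetical data for f(x) = c^T x subject to A x = b.  Then
#   L(x, nu) = c^T x + nu^T (A x - b) = (c + A^T nu)^T x - nu^T b,
# so inf_x L is finite only when c + A^T nu = 0 (dual feasibility).
rng = np.random.default_rng(1)
m, n = 2, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
nu_true = rng.standard_normal(m)
c = -A.T @ nu_true                   # chosen so a dual-feasible nu exists

def lagrangian(x, nu):
    return c @ x + nu @ (A @ x - b)

# At nu_true the slope vanishes, so inf_x L = -nu_true^T b is finite:
print(np.allclose(c + A.T @ nu_true, 0))

# At any other nu, L decreases without bound along d = -(c + A^T nu),
# so inf_x L(x, nu) = -infinity: the implicit constraint on nu bites.
nu_bad = nu_true + np.array([1.0, 0.0])
d = -(c + A.T @ nu_bad)
print([lagrangian(t * d, nu_bad) for t in (1.0, 1e3, 1e6)])
```

Evaluating along the descent direction $d$ shows the Lagrangian diverging to $-\infty$, which is exactly the implicit constraint made visible.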
Note that this specifically excludes cases where the domain of $f$ is "artificially" constrained, like the case we considered in the comments.
Anyway, subject to these assumptions, it is reasonable to call $\nabla f(x) +A^T\nu=0$ a "dual feasibility constraint". [EDIT: I still maintain that this is reasonable in practice, despite the exceptions found by the OP.] Requiring this assumption may seem a bit restrictive at first, but if artificial domain constraints are replaced with explicit equality and inequality constraints instead, the restriction is in fact rather light.