In *Introduction to Linear Optimization* (p. 142), the authors take the standard form problem:
minimize $c'x$, s.t. $Ax = b$, $x\geq 0$
They relax the equality constraints and define the dual function:
$g(p) = \min_{x\geq 0}[c'x + p'(b-Ax)] = p'b + \min_{x\geq 0}[(c' - p'A)x]$
and then they maximize it with respect to $p$. They note that if any component of $c' - p'A$ is negative, then $\min_{x\geq 0}[(c' - p'A)x]=-\infty$, so to exclude these values of $p$ they phrase the dual problem as:
maximize $p'b$, s.t. $p'A\leq c'$
My question: why is it necessary to impose these constraints? Why do we care that $\min_{x\geq 0}[(c' - p'A)x] = -\infty$ for some values of $p$? Why can't we just maximize $g(p)$, since the maximum obviously won't be attained at these $-\infty$ values of $p$?
It is not necessary to formulate them as constraints; you can just write the dual problem as $\max_p \left[ p'b + \min_{x\geq 0}(c' - p'A)x \right]$.
However, solving a nested min-max problem is difficult. The maximum cannot occur at any $p$ for which $A^T p \leq c$ fails, since $g(p) = -\infty$ there, so you can restrict your search for a maximizer to the set where $A^T p \leq c$. If you define $S = \{ p : A^T p \leq c \}$, you seem to propose solving $\max_{p \in S} p'b$. That is a correct statement of the dual problem.
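To see concretely why such $p$ can never be maximizers, here is a small numerical sketch (the data are made up for illustration): whenever some component of $c' - p'A$ is negative, pushing the corresponding coordinate of $x \geq 0$ toward $+\infty$ drives the inner minimum to $-\infty$.

```python
import numpy as np

# Made-up data where the dual constraint A^T p <= c is violated:
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
p = np.array([3.0])          # p'A = [3, 3], so c' - p'A = [-2, -1] < 0

reduced = c - A.T @ p        # the vector c' - p'A
j = int(np.argmin(reduced))  # a component with a negative coefficient
for t in [1.0, 10.0, 100.0]:
    x = np.zeros_like(c)
    x[j] = t                 # feasible ray: x >= 0
    print(reduced @ x)       # -2.0, -20.0, -200.0: unbounded below
```

Along this ray the inner objective decreases without bound, so $g(p) = -\infty$ at this $p$ and it can safely be excluded from the search.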
We still prefer the equivalent formulation $\max_{p \in \mathbb{R}^m} \{ p'b : A^Tp\leq c\}$, since writing the constraints explicitly exhibits it as a linear optimization problem, which standard LP algorithms can solve.
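As a sanity check, both problems can be solved numerically. The sketch below uses a small made-up instance and `scipy.optimize.linprog` to solve the primal in standard form and the explicit dual, confirming that the optimal values coincide, as strong duality guarantees:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Primal: min c'x  s.t.  Ax = b, x >= 0
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 2)

# Dual: max p'b  s.t.  A^T p <= c, p free; linprog minimizes, so negate
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)])

print(primal.fun, -dual.fun)  # both print 1.0 (strong duality)
```

Here the primal optimum is $x = (1, 0)$ with value $1$, and the dual optimum is $p = 1$ with $p'b = 1$.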