Say I have something like:
$$pSmoke = .656 - .069\log(cigprice) + .012\log(income) - .029\,educ + .02\,age - .00026\,age^2 - .101\,restaurn - .026\,white$$
where $pSmoke$ is the probability of smoking and $age$ is the person's age in years. This is part of a larger model but I just have this part for simplicity.
$\log(cigprice)$ is the logarithm of cigarette prices.
$\log(income)$ is the logarithm of the income of the individual.
$educ$ is the years of education.
$restaurn$ is a binary variable (1 if smoking is restricted in restaurants, 0 otherwise).
$white$ is 1 if the respondent is white (I assume this refers to ethnicity).
The question asks, at what age does another year of age reduce the probability of smoking?
It seems the correct answer is to compute the derivative with respect to age and set it equal to 0. This gets you age = 38.46
What I did was set the given equation equal to 0. This gets you about 77. Is this incorrect?
The important piece of language here is "another year of age." That is, at what age does adding one more year cause the probability of smoking to decline rather than increase? This says nothing about the absolute probability of smoking; it is about the change in probability with respect to another year, that is, the derivative of $pSmoke$ with respect to $age$. So indeed the correct approach is to differentiate and set the derivative to zero.
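Worked out, only the two age terms survive the differentiation:
$$\frac{\partial\, pSmoke}{\partial\, age} = .02 - 2(.00026)\,age = 0 \quad\Rightarrow\quad age = \frac{.02}{.00052} \approx 38.46$$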
(I would note that to fully believe your answer, you also need to show that this is a maximum of $pSmoke$, rather than a minimum or a non-extremal critical point.)
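As a quick sanity check, here is a minimal sketch (not from the original post) that locates the turning point using only the two age coefficients from the equation above, and verifies it is a maximum via the second derivative:

```python
# Coefficients on age and age^2 from the fitted equation above.
b1 = 0.02
b2 = -0.00026

# First derivative: d(pSmoke)/d(age) = b1 + 2*b2*age; set it to zero.
age_star = -b1 / (2 * b2)
print(round(age_star, 2))  # 38.46

# Second derivative: 2*b2 = -0.00052 < 0, so age_star is a maximum,
# not a minimum or an inflection point.
print(2 * b2 < 0)  # True
```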
Setting the equation itself equal to zero finds not the age at which an additional year causes the probability to decline, but the age at which the predicted probability of smoking hits zero. That is a statement about the level of the probability, not about the effect of an additional year of age on it.