So I stumbled upon this exercise in Enderton's book on computability. It is presented in the first chapter -- "effectively calculable" is defined informally, and is akin to, say, partial recursive.
I'm not sure about this one. For instance, let $\bar{H} = \{ x | \text{ machine } x\text{ does not halt on input }x\}$, and define $f$ by $f(0) = 10^{100}$ and $f(x+1) = \text{the greatest } y\in\bar{H}\text{ s.t. } y\leq f(x)$. Then $f$ is clearly well-defined and nonincreasing, yet it does not seem effectively calculable.
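The recursion defining $f$ is mechanically simple; the only problematic ingredient is deciding membership in $\bar{H}$. Here is a minimal Python sketch of that recursion, with a toy, decidable stand-in for the membership test and a small starting value in place of $10^{100}$, both purely so the sketch runs (the names `in_H_bar`, `f0` are illustrative, not from the book):

```python
def f(x, in_H_bar, f0=10**100):
    """Compute f(0) = f0, f(x+1) = greatest y with in_H_bar(y) and y <= f(x).

    The loop itself is trivially effective; the non-effective step is the
    membership test in_H_bar, which for the real H-bar (machines that do
    not halt on their own index) no algorithm can supply.
    """
    val = f0
    for _ in range(x):
        val = max(y for y in range(val + 1) if in_H_bar(y))
    return val

# Toy stand-in so the sketch actually runs: pretend H-bar were the evens.
def is_even(y):
    return y % 2 == 0
```

With the toy predicate and `f0=9`, the values are $9, 8, 8, 8, \ldots$ -- nonincreasing and eventually constant, exactly the shape of function the exercise is about.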
Similarly, the next exercise asks the same about periodic functions, and I suspect a similar seemingly non-calculable function could be constructed there.
In any case, seeing as we are in an introductory chapter, I think a hand-wavy, appealing-to-intuition explanation is expected, though I can't really think of any. What am I missing?
Tl;dr: There is a difference between a function $f$ which is provably effectively calculable - that is, for some index $e$ we can prove that $\varphi_e=f$ - and one which is merely effectively calculable - that is, $f=\varphi_e$ for some $e$, but perhaps we cannot prove which $e$ has this property.
Any eventually constant function is effectively calculable, even if we don't know what Turing machine does the job. There is no rule saying that we have to prove that $\varphi_e=f$ for some $e$, just that it has to be the case that $\varphi_e=f$ for some $e$.
Specifically, if $f$ is any eventually constant function, $f$ is determined by a finite amount of information:
The finite string $\sigma$ which is $f\upharpoonright n$, where $f(m)=f(k)$ for all $m, k\ge n$.
The eventual value $v$ of $f$: that is, $v = f(n)$.
This is a block of finitely much information, and there's a Turing machine which, on input $i$, behaves as follows:
Checks if $i\ge n$.
If yes: output $v$.
If no: output the $i$th value of $\sigma$.
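To make this concrete, here is a minimal Python sketch of such a machine: nothing more than a finite lookup table plus a default value (the names `make_machine`, `sigma`, `v` are just for illustration):

```python
def make_machine(sigma, v):
    """Build the machine determined by the finite prefix sigma
    and eventual value v: a lookup table plus a default."""
    n = len(sigma)
    def machine(i):
        if i >= n:
            return v          # past the prefix: output the eventual value
        return sigma[i]       # within the prefix: table lookup
    return machine

# One such machine: agrees with (3, 1, 4) on inputs 0, 1, 2,
# and outputs 7 from input 3 onward.
f = make_machine([3, 1, 4], 7)
```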
This is a perfectly reasonable Turing machine! Of course, for any finite string $\tau$ and any number $u$, there is a similar Turing machine with $\tau$ in place of $\sigma$ and $u$ in place of $v$, and such a machine will generally not do what we want. There may be no obvious way of distinguishing the "right" machine from the "wrong" ones, but that's fine: all that's required is for a right machine to exist.
A perhaps easier example to think about: show that every constant function is effectively calculable. Note that there are some weird constant functions, like $f(x)=0$ if the Riemann Hypothesis is true and $f(x)=1$ otherwise. Once you understand why every constant function is effectively calculable, you'll understand why every eventually constant function is, too.
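In the Riemann Hypothesis example, both candidate machines are trivially computable; the only thing we cannot (currently) do is point to which one computes $f$. A sketch (the candidate names are mine, for illustration):

```python
# f(x) = 0 if the Riemann Hypothesis is true, and f(x) = 1 otherwise.
# Both candidates below are obviously effectively calculable, and f is
# identical to one of them -- we just don't know which, and don't need to.
def candidate_0(x):
    return 0

def candidate_1(x):
    return 1
```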