The definition in my book is as follows:
Let $f$ be a function defined on an open interval containing $c$ (except possibly at $c$) and let $L$ be a real number. The statement $$\lim_{x \to c} f(x) = L$$
means that for each $\epsilon>0$ there exists a $\delta>0$ such that if $0<|x-c|<\delta$, then $|f(x)-L|<\epsilon$.
With the definition the way it is, I don't see how choosing a smaller and smaller $\epsilon$ implies a smaller and smaller $\delta$.
To me, in order to produce that implication, we would need to restrict $\epsilon$ to be small enough to force $f(x)$ to be strictly increasing/decreasing on $(L-\epsilon, L+\epsilon)$, and define increasing/decreasing without the use of derivatives. However, that is not part of the definition.
P.S. Please refrain from using too much notation for logic, I am not familiar with most of the symbols such as the upside down A and such.
First of all, there exist functions for which every $\delta$ is sufficient. For example, the constant function $$f(x)=2.$$
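To check this against the definition (here $L=2$, and $c$ can be any point), note that
$$|f(x)-2|=|2-2|=0<\varepsilon \quad \text{for every } x,$$
so any $\delta>0$ whatsoever, say $\delta=1$, works no matter how small $\varepsilon$ becomes.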
On the other hand, there exist functions that force us to shrink $\delta$ as $\varepsilon$ shrinks. For example, take the strictly increasing function $f(x)=x+1$.
The greatest $\delta$ we can take is the one for which the graph of $f$ passes through the corners of the rectangle bounded by the four lines $x=c\pm\delta$ and $y=L\pm\varepsilon$. So if $\varepsilon$ shrinks, $\delta$ has to shrink too.
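Concretely, for this $f$ the limit at $c$ is $L=c+1$, and the check reduces to a single computation:
$$|f(x)-L|=|(x+1)-(c+1)|=|x-c|,$$
so $|f(x)-L|<\varepsilon$ holds precisely when $|x-c|<\varepsilon$. The largest admissible choice is therefore $\delta=\varepsilon$, which visibly shrinks together with $\varepsilon$.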
Now, take a function that is not monotonic, for example $g(x)=2x^2+1$.
How small does $\delta$ have to be? The situation is similar to the one above, but now $\delta$ is bounded by the upper corners of the rectangle.
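To make this concrete, consider the vertex $c=0$, where $L=g(0)=1$ (this particular choice of $c$ is just for illustration):
$$|g(x)-1|=2x^2<\varepsilon \quad \text{precisely when} \quad |x|<\sqrt{\varepsilon/2},$$
so the largest admissible $\delta$ is $\sqrt{\varepsilon/2}$. Again, shrinking $\varepsilon$ forces a smaller $\delta$, even though $g$ is not monotonic on any interval around $c$. Notice that none of this required $f$ or $g$ to be increasing or decreasing; the definition only asks that the graph stay inside the rectangle.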