After seeing a video on the Collatz conjecture, I kind of played around with it a bit and found the following result (which can be shown by induction):
$$\underbrace{f \circ \ldots \circ f}_{N \text{ times}}(2^N x - 1) = 6^{\left\lceil\frac{N}{2}\right\rceil} x - 2 + (N + 1) \mathbin{\mathrm{mod}} 2.$$
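For concreteness, this identity can be checked numerically. The sketch below assumes $f$ is the standard Collatz map ($n/2$ for even $n$, $3n+1$ for odd $n$) and verifies the formula for small $N$ and $x$:

```python
def f(n):
    # Standard Collatz map: halve if even, 3n + 1 if odd
    return n // 2 if n % 2 == 0 else 3 * n + 1

def iterate(n, times):
    # Apply f the given number of times
    for _ in range(times):
        n = f(n)
    return n

# Check f^N(2^N x - 1) == 6^ceil(N/2) x - 2 + ((N + 1) mod 2)
# Note: (N + 1) // 2 equals ceil(N / 2) for integer N.
for N in range(1, 11):
    for x in range(1, 50):
        lhs = iterate(2**N * x - 1, N)
        rhs = 6 ** ((N + 1) // 2) * x - 2 + (N + 1) % 2
        assert lhs == rhs, (N, x)
print("identity holds for N <= 10, x < 50")
```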
If the Collatz conjecture were true, then $\forall x \in \mathbb{N} : \lim\limits_{N \to \infty} \underbrace{f \circ \ldots \circ f}_{N \text{ times}}(x) = 1$. However, the result above seems to imply that for any number of function applications $N$, there is an infinite set of inputs (those of the form $2^N x - 1$) whose value after $N$ applications is larger than the input itself and therefore cannot be $1$. So even as $N \to \infty$, there appear to be inputs that do not go to $1$.
Since much smarter people have worked on this, this “proof” is probably not sound, but I was wondering where exactly the problem lies.
One possible caveat is that this idea would break if it were possible to apply the function uncountably many times (which would be rather counterintuitive to me).
I would appreciate any insights as to where this line of reasoning fails.
You have shown that by increasing the number of iterations and, most crucially, also increasing the starting point, you can reach arbitrarily large numbers with the Collatz procedure.
The Collatz conjecture asks whether, somewhere out there, there is a single starting point that never comes down to $1$ as you increase the number of iterations (either by entering a cycle or by growing without bound). Your proof does not address this: for each $N$ you choose a different starting point $2^N x - 1$, rather than following one fixed starting point through all $N$ iterations.
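To illustrate the distinction: fix $x = 1$ in the family above, so the starting points are $2^N - 1$. After $N$ steps the trajectory is still large, yet each individual starting point, iterated long enough, still comes down to $1$. A sketch, assuming the standard Collatz map:

```python
def f(n):
    # Standard Collatz map
    return n // 2 if n % 2 == 0 else 3 * n + 1

def total_steps_to_one(n):
    # Number of applications of f until the trajectory first hits 1
    steps = 0
    while n != 1:
        n = f(n)
        steps += 1
    return steps

# The value after N steps grows with N, but every starting point
# 2^N - 1 in this (small, checked) range still reaches 1 eventually.
for N in range(1, 16):
    start = 2**N - 1
    print(start, total_steps_to_one(start))
```

The point is the order of quantifiers: the identity produces, for each $N$, a large value at some moving starting point, which says nothing about the long-run fate of any single fixed starting point.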