Conjecture.
For any positive real number $r$ and any given error $\epsilon > 0$, there exist positive integers $(m,n)$, large enough, such that:
$$
\left| r - \frac{2^m}{3^n} \right| < \epsilon
$$
There is numerical evidence for this conjecture. I have tried $r = \sqrt{2}$ and $\epsilon = 10^{-3}$.
Below is a little Delphi Pascal program with accompanying output.
But ... can somebody prove the conjecture?
program apart;

procedure test(r : double; eps : double);
var
  a : double;
  m, n : integer;
begin
  a := 1; m := 0; n := 0;
  while true do
  begin
    if a < r then
    begin
      m := m + 1;
      a := a * 2;
    end else begin
      n := n + 1;
      a := a / 3;
    end;
    if abs(r - a) < eps then Break;
  end;
  Writeln(r, ' = 2^', m, '/3^', n, ' =', a);
end;

begin
  test(sqrt(2), 1.E-3);
end.
Output:
1.41421356237310E+0000 = 2^243/3^153 = 1.41493657935359E+0000
UPDATE.
The answer by lhf does look like a very concise proof. But for me - as a retired physicist by education - it's a bit beyond comprehension.
Furthermore, it leaves a few issues untouched. One might ask, for example, whether there are estimates for $m$ and $n$ when $r$ and $\epsilon$ are given.
Note. The question can also be formulated as: Can any positive real be approximated as $3^m/2^n$ with $(m,n)$ large enough? Which is the same as allowing negative integers with the original formulation. In this form, it shows some resemblance to the (in)famous Collatz problem.
EDIT.
As suggested by the answers, an approach with logarithms could be more effective:
program anders;

procedure proef(r : double; eps : double);
var
  a, l2, l3, lr : double;
  m, n : integer;
begin
  l2 := ln(2); l3 := ln(3); lr := ln(r);
  a := 0; m := 0; n := 0;
  while true do
  begin
    a := m*l2 - n*l3 - lr;
    if abs(a) < eps then Break;
    if a < 0 then m := m + 1 else n := n + 1;
  end;
  Writeln(r, ' = 2^', m, '/3^', n, ' =', exp(a)*r);
end;

begin
  proef(sqrt(2), 1.E-3);
  proef(sqrt(2), 1.E-9);
end.
Output:
1.41421356237310E+0000 = 2^243/3^153 = 1.41493657935356E+0000
1.41421356237310E+0000 = 2^911485507/3^575083326 = 1.41421356125035E+0000
The first line of the output is almost identical to the result obtained previously.
The last line of the output shows that the latter approach is indeed more effective.
The error plays the same role in both approaches. Oh well, almost.
Let's take a look at the places where the 'Break's are. First program:
$$
\left| r - \frac{2^m}{3^n} \right| < \epsilon
$$
Second program:
$$
-\epsilon < m\ln(2) - n\ln(3) - \ln(r) < +\epsilon \\
e^{-\epsilon} < \frac{2^m/3^n}{r} < e^{+\epsilon} \\
1-\epsilon \lesssim \frac{2^m/3^n}{r} \lesssim 1+\epsilon \\
\left| r - \frac{2^m}{3^n} \right| \lesssim \epsilon \cdot r
$$
The last two steps hold to first order in $\epsilon$, since $e^{\pm\epsilon} \approx 1 \pm \epsilon$.
So $\epsilon$ in the first program is an absolute error,
while $\epsilon$ in the second program is (to first order) a relative error.
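The distinction can be checked numerically with the $(m,n)=(243,153)$ solution reported above; a short Python snippet (mine, not part of the original programs):

```python
from math import sqrt

r = sqrt(2)
approx = 2**243 / 3**153     # Python evaluates the exact integers, then divides

abs_err = abs(r - approx)    # break criterion of the first program
rel_err = abs_err / r        # break criterion of the second program (to first order)

print(abs_err, rel_err)      # both below 1e-3, differing by a factor of r
```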
Continuing story at:
Can the Stern-Brocot tree be employed for better convergence of $2^m/3^n$?


Yes, there are always solutions $(m, n)$ for any positive real $r$ and any $\epsilon > 0$ for $$\left| r - \frac{2^m}{3^n} \right| < \epsilon$$ And there's a much more efficient way to find those solutions than stepping through $m$ and $n$ values one by one.
We have $$r \approx 2^m/3^n$$ Taking logarithms, $$\log r \approx m\log 2 - n\log 3$$ $$\log r/\log 2\approx m - n\log 3 / \log 2$$ i.e., $$\log_2 r\approx m - n\log_2 3$$
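As a sanity check, the $(m,n)=(243,153)$ solution from the question satisfies this log form; a quick Python verification (names are mine):

```python
from math import log2, sqrt

m, n = 243, 153          # solution reported in the question
lhs = log2(sqrt(2))      # log2(r) = 0.5 exactly
rhs = m - n * log2(3)    # m - n*log2(3)

print(lhs, rhs)          # agree to better than 1e-3
```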
[Incidentally, $$1 = \frac m{\log_2r}-\frac n{\log_3r}$$ which is a line in the $(m,n)$ plane with $m$ intercept $\log_2r$ and $n$ intercept $-\log_3r$. We want to find when that line passes close to integer $(m, n)$ lattice points].
We can find rational approximations to both those base 2 logarithms to any desired precision. However, to satisfy that equation with integer $m$ and $n$, the denominators of our approximations must be commensurate.
Let $$\log_2 r = f \approx s/t$$ and $$\log_2 3 \approx p/q$$ with the fractions being in lowest terms, i.e., $\gcd(s,t)=\gcd(p,q)=1$.
Then $$\frac st = m - n \frac pq$$ $$sq = (qm - pn)t$$ Thus $t|sq$. But $s$ & $t$ are coprime, hence $t|q$.
Let $q=tk$. $$f \approx \frac st = \frac{sk}{tk}=\frac{sk}{q}=\frac dq$$ for some integer $d$.
So, for a given approximation $\frac pq$ to $\log_2 3$, the best rational approximations to $f$ with commensurate denominators are $\frac{d_0}q$ and $\frac{d_1}q$, where $d_0=\lfloor fq\rfloor$ and $d_1=\lceil fq\rceil$. That is, $$\frac{d_0}q \le f \le \frac{d_1}q$$ If $f$ is rational (eg, when $r=\sqrt 2$), then $d_0$ and $d_1$ may be equal.
So for a given $p$ & $q$ we just need to find integers $m$ & $n$ that solve our revised equation $$\frac dq = m - n \frac pq$$ $$d=qm-pn$$ for both $d_0$ and $d_1$. There are solutions for any integer $d$ because $p$ & $q$ are coprime. And those solutions can be found using the extended Euclidean algorithm.
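For example (a Python sketch of mine, assuming the convergent $p/q = 485/306$ of $\log_2 3$ and $f = \log_2\sqrt 2 = 1/2$, so $d = fq = 153$): solving $d = qm - pn$ with the extended Euclidean algorithm recovers exactly the $(m,n)=(243,153)$ that the question's first program found by brute force.

```python
def ext_gcd(a, b):
    """Extended Euclid: return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

p, q = 485, 306      # a convergent of log2(3)
d = 153              # d = f*q with f = log2(sqrt(2)) = 1/2

g, x, y = ext_gcd(q, p)      # q*x + p*y = 1  (p and q are coprime)
m0, n0 = x * d, -y * d       # particular solution of q*m - p*n = d
# General solution: m = m0 + p*t, n = n0 + q*t; shift to the smallest positive pair.
t = max(-((m0 - 1) // p), -((n0 - 1) // q))
m, n = m0 + p * t, n0 + q * t

print(m, n)          # 243 153, matching the question's output
```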
But we also need to find suitable $p$ & $q$. That can be done using the convergents of the continued fraction expansion of $\log_2 3$. The standard algorithm for computing a continued fraction is closely related to the extended Euclidean algorithm, and as the Wikipedia article on continued fractions explains (in Theorem 3), if the $i$th convergent is $\frac{h_i}{k_i}$ then $$k_ih_{i-1} - k_{i-1}h_i = (-1)^i$$ which enables us to find $m$ and $n$ without doing a separate Euclidean algorithm calculation.
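A small Python generator (my own sketch) illustrates both the convergents and that identity; the first convergents of $\log_2 3$ are $1/1,\ 2/1,\ 3/2,\ 8/5,\ 19/12,\ 65/41,\ \dots$

```python
from math import floor, log2

def convergents(x, depth):
    """Yield the continued-fraction convergents (h, k) of x."""
    hpp, kpp = 0, 1      # h_{i-2}, k_{i-2}
    hp, kp = 1, 0        # h_{i-1}, k_{i-1}
    for _ in range(depth):
        a = floor(x)
        h, k = a * hp + hpp, a * kp + kpp
        yield h, k
        hpp, kpp, hp, kp = hp, kp, h, k
        if x == a:       # rational input: expansion terminates
            break
        x = 1 / (x - a)

cs = list(convergents(log2(3), 8))
print(cs)    # [(1, 1), (2, 1), (3, 2), (8, 5), (19, 12), (65, 41), (84, 53), (485, 306)]

# Verify k_i*h_{i-1} - k_{i-1}*h_i = (-1)^i for consecutive convergents:
for i in range(1, len(cs)):
    (h_prev, k_prev), (h_cur, k_cur) = cs[i - 1], cs[i]
    assert k_cur * h_prev - k_prev * h_cur == (-1) ** i
```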
A continued fraction convergent $\frac hk$ of a number $x$ is a best rational approximation to $x$ among all fractions with denominator $\le k$. The error is $$\left|x - \frac hk\right| \le \frac 1{k^2}$$ and it can often be much better. In contrast, the error for an approximation $\frac hk$ with a "random" denominator (i.e., not a continued fraction convergent) is generally around $\frac 1{2k}$.
Unfortunately, because of the need for commensurate denominators in our approximations to the two logarithms, we don't get the full $\frac 1{k^2}$ goodness. But we do generally get better than $\frac 1{k}$.
So to find solutions with better error than a given $\epsilon$, we just need to look at the convergents to $\log_2 3$ with denominators in the neighbourhood of $\frac 1\epsilon$.
Here is some Sage / Python code that performs that task. Sage is a collection of mathematical libraries built on top of the popular Python programming language. It has arbitrary precision arithmetic, and facilities for performing symbolic algebra, but I've (mostly) avoided Sage features in this code (apart from the arbitrary precision arithmetic), to make it easier to port to other languages, if desired; I've also avoided most "Pythonisms", apart from Python's ability to return multiple values from a function.
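As an illustration of the method (not the original Sage listing; all function names are mine), here is a compact, self-contained pure-Python sketch. It works in ordinary double precision, so it is only reliable down to $\epsilon$ around $10^{-8}$:

```python
from math import ceil, floor, log, log2, sqrt

def ext_gcd(a, b):
    """Extended Euclid: return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def convergents(x, depth):
    """Yield the continued-fraction convergents (h, k) of x."""
    hpp, kpp, hp, kp = 0, 1, 1, 0
    for _ in range(depth):
        a = floor(x)
        h, k = a * hp + hpp, a * kp + kpp
        yield h, k
        hpp, kpp, hp, kp = hp, kp, h, k
        if x == a:
            break
        x = 1 / (x - a)

def approximate(r, eps, depth=40):
    """Find positive integers (m, n) with |m*ln 2 - n*ln 3 - ln r| < eps."""
    f = log2(r)                          # we want m - n*log2(3) ~ f
    for p, q in convergents(log2(3), depth):
        for d in {floor(f * q), ceil(f * q)}:   # the set handles d0 == d1
            _, x, y = ext_gcd(q, p)      # q*x + p*y = 1
            m0, n0 = x * d, -y * d       # particular solution of q*m - p*n = d
            # General solution (m0 + p*t, n0 + q*t): take the smallest positive pair.
            t = max(-((m0 - 1) // p), -((n0 - 1) // q))
            m, n = m0 + p * t, n0 + q * t
            if abs(m * log(2) - n * log(3) - log(r)) < eps:
                return m, n
    return None

sol = approximate(sqrt(2), 1e-6)
print(sol)
```

Each candidate pair is verified directly against the break criterion of the second Pascal program, $|m\ln 2 - n\ln 3 - \ln r| < \epsilon$, so the function returns as soon as a convergent with a large enough denominator is reached.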
Here's the output of that program, searching for solutions with $\epsilon=10^{-6}$:
And here is a live version that you can play with on the SageMath server. My code isn't stored on the server, it's encoded in the URL.
If you get weird behaviour with small $\epsilon$, try increasing the `bits` global variable (at the top of the file). The default setting of 53 should be ok for $\epsilon > 10^{-8}$ or so. You may also need to increase the `depth` of the continued fraction.

FWIW, $\log_2 3$ is rather important in the mathematical music theory of equally-tempered scales. The standard 12 tone scale uses the convergent $19/12$.