It is a well-documented fact that random walks in 3-dimensional Euclidean space are transient — they escape to infinity with probability 1 — and while I can't find a source for it, it seems fairly obvious that random walks in the 2-dimensional hyperbolic plane should be transient as well. Consider the following two functions (defined on x > 1):
f(x) = the probability that a random walk in 3-dimensional Euclidean space starting at a point at distance x from the origin will ever pass within one unit of the origin.
g(x) = the probability that a random walk in the 2-dimensional hyperbolic plane starting at a point at distance x from the origin will ever pass within one unit of the origin.
Clearly, both f(x) and g(x) tend to 0. My question is: what happens to the ratio f(x)/g(x) as x goes to infinity? There's an intuitive argument that it ought to tend to 0, but I have no idea whether that argument is correct, or how to prove it if it is. Additionally, what happens if f(x) is defined not in terms of 3-dimensional Euclidean space, but of some higher-dimensional Euclidean space? Do random walks in the 2-dimensional hyperbolic plane "diverge faster" than those in any Euclidean space? I suspect the answer is yes, but I am unsure how to approach it.
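For what it's worth, f(x) can at least be estimated numerically. Below is a rough Monte Carlo sketch of my own (not part of the question itself): it simulates unit-step isotropic walks in R^3 started at distance x, and counts how many come within one unit of the origin. The truncation at `max_steps` is an unavoidable assumption — the true event is over an infinite time horizon — so the estimate is a slight underestimate of f(x).

```python
import numpy as np

def hit_probability(x, trials=1000, max_steps=3000, seed=0):
    """Monte Carlo estimate of f(x): the fraction of unit-step random walks
    in R^3, started at distance x from the origin, that come within 1 unit
    of the origin before max_steps (a truncation of the infinite walk)."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((trials, 3))
    pos[:, 0] = x                        # start all walks at distance x
    hit = np.zeros(trials, dtype=bool)
    for _ in range(max_steps):
        # isotropic unit-length steps: normalized Gaussian vectors
        steps = rng.normal(size=(trials, 3))
        steps /= np.linalg.norm(steps, axis=1, keepdims=True)
        pos[~hit] += steps[~hit]         # only advance walks that haven't hit
        hit |= np.linalg.norm(pos, axis=1) <= 1.0
    return hit.mean()
```

As a sanity check, for 3-dimensional Brownian motion the probability of ever hitting the unit ball from distance x is exactly 1/x, and the unit-step walk should roughly track that, so `hit_probability(2)` should come out somewhere near 0.5.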