We know by the Riesz representation theorem that given a Hilbert space $H$ with inner product $\langle\cdot,\cdot\rangle$, for any $f \in H^*$ there is a unique $v \in H$ with $f(\cdot) = \langle v,\cdot\rangle$. This is essentially how gradient descent works: given $f\in C^1(H,\mathbb{R})$, let $v_{f,x}$ denote the vector that represents the derivative $Df(x)$ (the gradient), so that $$f(x+h) = f(x) + Df(x)h + o(\|h\|) = f(x) +\langle v_{f,x},h\rangle + o(\|h\|).$$
Thus, taking $h = -\lambda v_{f,x}$ with $\lambda > 0$ small enough, we guarantee that $f(x+h) < f(x)$ whenever $v_{f,x} \neq 0$.
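To see this descent property numerically, here is a small sketch on $H = \mathbb{R}^2$ (the quadratic $f$, the point $x$, and the step size $\lambda$ are arbitrary choices of mine for illustration):

```python
import numpy as np

# Illustrative smooth function on R^2: f(x) = 0.5 x^T A x + b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def f(x):
    return 0.5 * x @ A @ x + b @ x

def grad(x):
    # Riesz representative of Df(x) w.r.t. the Euclidean inner product.
    return A @ x + b

x = np.array([2.0, -1.0])
lam = 0.1
v = grad(x)
# The step h = -lam * v strictly decreases f for small enough lam.
print(f(x - lam * v) < f(x))  # True
```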
However, suppose we equip the space with a different inner product; presumably the gradient will change as well. How can we be sure that gradient descent still works? The iteration will trace a different path, so does that affect the convergence result? I know that in practice the optimization is usually constrained, and choosing a proper step size takes considerable computation. If we choose an inner product under which the induced metric space is not complete, would the situation be even worse?
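To make the question concrete in finite dimensions: if the new inner product is $\langle u,w\rangle_M = u^\top M w$ for a symmetric positive definite $M$, then $Df(x)h = \nabla f(x)^\top h = \langle M^{-1}\nabla f(x), h\rangle_M$, so the Riesz representative becomes $M^{-1}\nabla f(x)$ (this is just preconditioned gradient descent). A sketch comparing the two iterations ($A$, $M$, $\lambda$, and the starting point are arbitrary choices of mine):

```python
import numpy as np

# Test function f(x) = 0.5 x^T A x, minimized at the origin.
A = np.diag([10.0, 1.0])

def f(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x  # gradient w.r.t. the Euclidean inner product

# New inner product <u, w>_M = u^T M w, M symmetric positive definite.
M = np.diag([10.0, 1.0])  # here M = A, chosen to make the effect visible

def grad_M(x):
    # Riesz representative of Df(x) w.r.t. <.,.>_M is M^{-1} grad f(x).
    return np.linalg.solve(M, grad(x))

lam = 0.05
x0 = np.array([1.0, 1.0])
x_e, x_m = x0.copy(), x0.copy()
for _ in range(100):
    x_e = x_e - lam * grad(x_e)    # Euclidean gradient descent
    x_m = x_m - lam * grad_M(x_m)  # descent w.r.t. <.,.>_M

# Both iterations decrease f, but they trace different paths:
print(f(x_e) < f(x0), f(x_m) < f(x0))
print(x0 - lam * grad(x0), x0 - lam * grad_M(x0))  # first steps differ
```

With this $M$ the two methods take visibly different first steps, yet both converge on this problem; the question remains what happens in general, especially in infinite dimensions or under an incomplete metric.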