Given a quadratic form $f(x)=\frac{1}{2}x^TAx+x^Tb+a$, where $A$ is a symmetric positive definite matrix, we use gradient descent with learning rate $\alpha$ to compute the global minimum via $x_n=x_{n-1}-\alpha\nabla f(x_{n-1})$, and define $d(x_n)=\|x_n-x^*\|$, where $x^*$ is the minimizer. Let $A=PDP^T$ be the eigendecomposition of $A$.
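For reference, the error recursion behind this setup: since $\nabla f(x)=Ax+b$, the minimizer satisfies $Ax^*+b=0$, so

$$x^*=-A^{-1}b=-PD^{-1}P^Tb,$$

and each gradient step contracts the error (using $Ax^*=-b$ in the second equality):

$$x_n-x^* = x_{n-1}-\alpha(Ax_{n-1}+b)-x^* = (I-\alpha A)(x_{n-1}-x^*) = (I-\alpha A)^n(x_0-x^*).$$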
Given an appropriate learning rate $\alpha$, we get convergence of $d(x_n)$ to $0$, where
$$d(x_n)^2=(x_0^T+b^TPD^{-1}P^T)(I-\alpha PDP^T)^n(I-\alpha PDP^T)^n(x_0+PD^{-1}P^Tb).$$ Is this correct?
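A quick numeric sanity check of the closed-form expression, interpreting the displayed quantity as the squared distance $\|x_n-x^*\|^2$. This is a sketch: the matrix, vector, step size, and iteration count below are made-up values, not part of the question.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)          # symmetric positive definite
b = rng.standard_normal(3)
x_star = -np.linalg.solve(A, b)      # minimizer: A x* + b = 0
alpha = 0.1                          # assumed small enough to contract

x0 = rng.standard_normal(3)
x = x0.copy()
n = 20
for _ in range(n):
    x = x - alpha * (A @ x + b)      # gradient step: grad f(x) = A x + b

# Closed form: (x0 + P D^{-1} P^T b)^T (I - alpha A)^{2n} (x0 + P D^{-1} P^T b)
eigvals, P = np.linalg.eigh(A)       # A = P D P^T
v = x0 + P @ np.diag(1.0 / eigvals) @ P.T @ b   # equals x0 - x*
Mn = np.linalg.matrix_power(np.eye(3) - alpha * A, n)
d2_closed = v @ Mn @ Mn @ v
d2_iter = np.linalg.norm(x - x_star) ** 2

print(d2_closed, d2_iter)            # the two values agree
```

The two quantities agree up to floating-point error, which is what makes the "$d(x_n)^2$" reading of the formula consistent with $d(x_n)=\|x_n-x^*\|$.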
Now define $\rho=\exp\left(\lim_{k \to \infty}\frac{1}{2k}\ln d(x_k)\right)$. We want to show that for any choice of hyperparameter values (are there hyperparameters for gradient descent other than the learning rate?), there are constants $k_0 \geq 0$ and $C >0$ such that $$d(x_k)\leq C\rho^{2k}, \quad \forall k \geq k_0.$$
This holds more generally for strongly convex functions (not just quadratics). See the proof in Yaron Singer's lecture notes: https://people.seas.harvard.edu/~yaron/AM221-S16/lecture_notes/AM221_lecture9.pdf
(You are correct: for plain gradient descent, the learning rate is the only hyperparameter.)
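To illustrate the bound on a hypothetical example (a diagonal $A$ so the eigenvalues are visible; all numbers below are made up): for the quadratic case $d(x_k)\approx c\,r^k$ with $r=\max_i|1-\alpha\lambda_i|$, so $\rho=\sqrt{r}$ and $d(x_k)\leq C\rho^{2k}$ holds with $C$ the largest observed ratio $d(x_k)/\rho^{2k}$.

```python
import numpy as np

A = np.diag([1.0, 2.0, 5.0])         # SPD; eigenvalues are 1, 2, 5
b = np.array([1.0, -1.0, 2.0])
x_star = -np.linalg.solve(A, b)
alpha = 0.15

x = np.array([3.0, -2.0, 1.0])
dists = []
for _ in range(200):
    x = x - alpha * (A @ x + b)
    dists.append(np.linalg.norm(x - x_star))

k = len(dists)
rho_hat = np.exp(np.log(dists[-1]) / (2 * k))   # empirical rho from the definition
rho_theory = np.sqrt(max(abs(1 - alpha * lam) for lam in np.diag(A)))

# Ratios d(x_k) / rho^{2k} stay bounded, so C = max(ratios) witnesses the bound.
ratios = [d / rho_theory ** (2 * (i + 1)) for i, d in enumerate(dists)]

print(rho_hat, rho_theory, max(ratios))
```

Here `rho_hat` approaches `rho_theory` as $k$ grows, and the bounded ratios show a finite $C$ exists (with $k_0=0$ in this example).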