Let $H$ be a $\mathbb R$-Hilbert space and $A\in\mathfrak L(H)$. Consider the following optimization problem: $$\sup_{\left\|x\right\|_H=1}\langle Ax,x\rangle_H.\tag1$$ Note that $A+A^\ast$ is self-adjoint and hence there is a unique compactly supported spectral measure $E$ on $\mathcal B(\mathbb R)$ associated with $A+A^\ast$. Now, $$\langle Ax,x\rangle_H=\frac12\langle(A+A^\ast)x,x\rangle_H=\frac12\int_{\sigma(A+A^\ast)}\lambda\:\langle E({\rm d}\lambda)x,x\rangle_H\;\;\;\text{for all }x\in H.\tag2$$
In the case $H=\mathbb R^d$, as shown in this answer, the supremum in $(1)$ is attained at a unit eigenvector $z_{\text{max}}$ associated with the largest eigenvalue $\lambda_{\text{max}}$ of $A+A^\ast$, and the optimal value is $\frac12\lambda_{\text{max}}$, i.e. the logarithmic norm of $A$. This is easily seen from $(2)$.
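For concreteness, here is a minimal numerical sketch of the finite-dimensional claim (my own check, assuming NumPy; not part of the linked answer): the maximum of the quadratic form over the unit sphere equals $\lambda_{\text{max}}\bigl(\tfrac{A+A^\top}2\bigr)$, the logarithmic norm of $A$, and is attained at the corresponding unit eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))          # a generic (non-symmetric) matrix

S = (A + A.T) / 2                        # symmetric part of A
eigvals, eigvecs = np.linalg.eigh(S)     # eigenvalues in ascending order
lam_max = eigvals[-1]                    # logarithmic norm of A (2-norm case)
z_max = eigvecs[:, -1]                   # associated unit eigenvector

print(z_max @ A @ z_max, lam_max)        # the two values agree up to rounding

# no random unit vector does better
xs = rng.standard_normal((1000, d))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
print(np.einsum('ij,jk,ik->i', xs, A, xs).max() <= lam_max + 1e-12)   # True
```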
Question: Can we infer a similar result in the general case?
We should be able to argue in the same manner by noting that the numerical range $V:=\left\{\langle(A+A^\ast)x,x\rangle_H:\left\|x\right\|_H=1\right\}$ of $A+A^\ast$ is bounded and convex and that $$\sigma(A+A^\ast)\subseteq\left[\inf V,\sup V\right],\tag3$$ but I'd need some help filling in the details.
Your equation $(2)$ reduces the problem to the case where $A$ is self-adjoint (apply what follows to $A+A^\ast$ and divide by $2$), so assume $A=A^\ast$. You also see from $(2)$ that $\sup\langle Ax,x\rangle\leq\sup\sigma(A)$. For the reverse inequality, fix $\varepsilon>0$. The Spectral Theorem (or the definition of the integral, if $(2)$ is a given) gives you mutually orthogonal projections $P_1,\ldots,P_n$ with $\sum_jP_j=I$, and scalars $\lambda_1\geq\cdots\geq\lambda_n$ in $\sigma(A)$ with $$\Big\|A-\sum_j\lambda_jP_j\Big\|<\varepsilon.$$ We may also arrange that $|\lambda_1-\sup\sigma(A)|<\varepsilon$. Take a unit vector $x\in P_1H$; then $\sum_j\lambda_jP_jx=\lambda_1x$, so $$\langle Ax,x\rangle\geq\Big\langle\sum_j\lambda_jP_jx,x\Big\rangle-\Big\|A-\sum_j\lambda_jP_j\Big\|=\lambda_1-\Big\|A-\sum_j\lambda_jP_j\Big\|>\lambda_1-\varepsilon>\sup\sigma(A)-2\varepsilon.$$ As $\varepsilon>0$ is arbitrary, it follows that $\sup\langle Ax,x\rangle\geq\sup\sigma(A)$.
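For intuition, here is a small numerical sketch of this argument (my own illustration, assuming NumPy; not part of the answer above), using a self-adjoint matrix where the spectral projections are explicit: a unit vector $x$ in the range of the spectral projection for $(\sup\sigma(A)-\varepsilon,\,\sup\sigma(A)]$ already satisfies $\langle Ax,x\rangle>\sup\sigma(A)-\varepsilon$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, eps = 50, 0.1

B = rng.standard_normal((d, d))
A = (B + B.T) / 2                      # a self-adjoint "A" (the reduced problem)

eigvals, eigvecs = np.linalg.eigh(A)   # ascending eigenvalues, orthonormal eigenvectors
top = eigvals[-1]                      # sup sigma(A)

# unit vector in the range of the spectral projection for (top - eps, top];
# here it is simply an eigenvector whose eigenvalue exceeds top - eps
x = eigvecs[:, eigvals > top - eps][:, 0]

print(x @ A @ x > top - eps)           # True: the quadratic form beats sup sigma(A) - eps
```

In infinite dimensions the supremum need not be attained (e.g. the diagonal operator $e_n\mapsto(1-\tfrac1n)e_n$ on $\ell^2$ has $\sup\sigma(A)=1$ but $\langle Ax,x\rangle<1$ for every unit vector $x$), yet the projection argument above still pushes $\langle Ax,x\rangle$ arbitrarily close to $\sup\sigma(A)$.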