Let $\Gamma$ be a discrete group, and let $\Bbb{C}[\Gamma]$ be the associated complex group ring, whose elements are the formal sums $\sum_{g \in \Gamma} a_g g$ in which all but finitely many of the coefficients $a_g \in \Bbb{C}$ are zero. We can equip $\Bbb{C}[\Gamma]$ with the following natural involution:
$$\left(\sum_{g \in \Gamma} a_g g \right)^* := \sum_{g \in \Gamma} \overline{a_g} g^{-1}$$
This gives us a notion of self-adjoint elements ($f^* = f$ for $f \in \Bbb{C}[\Gamma]$) and positive elements (those of the form $\sum_i h_i^* h_i$ for finitely many $h_i \in \Bbb{C}[\Gamma]$). From here we get a partial order on the self-adjoint elements: $f \le g$ if and only if $g - f$ is positive.
I am wondering whether the following is true:
Let $f \in \Bbb{C}[\Gamma]$ be a positive element. Then $f$ is invertible in $\Bbb{C}[\Gamma]$ if and only if there exists $\epsilon > 0$ such that $f \ge \epsilon 1$.
I know this is true for $C^*$-algebras, and I suspect that the proof just amounts to $C^*$-algebraic tricks. For example, the forward direction is trivial, because if $f$ is invertible in $\Bbb{C}[\Gamma]$, then it is invertible in the full group $C^*$-algebra $C^*(\Gamma)$, which is just the completion of $\Bbb{C}[\Gamma]$ with respect to a particular norm (see this). However, the other direction is not obvious. If $f \ge \epsilon 1$ for some $\epsilon > 0$, then $f$ is invertible in $C^*(\Gamma)$. But why must its inverse live in $\Bbb{C}[\Gamma]$?
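As a numerical experiment (my own, for $\Gamma = \mathbb Z$, using the standard identification $C^*(\mathbb Z) \cong C(\mathbb T)$ that sends the generator $V$ to $e^{it}$): the element $f = 3 + V + V^{-1}$ becomes the function $3 + 2\cos t \ge 1$, so $f$ is invertible in $C(\mathbb T)$, but the Fourier coefficients of $1/(3 + 2\cos t)$ seem to decay geometrically without ever vanishing, which would mean $f^{-1}$ is not a finite linear combination of powers of $V$:

```python
# Sanity check (my own illustration, not from the post): under C*(Z) ~= C(T),
# f = 3 + V + V^{-1} maps to 3 + 2cos(t) >= 1, invertible in C(T).
# The Fourier coefficients of its inverse are (-1)^n * rho^|n| / sqrt(5)
# with rho = (3 - sqrt(5))/2 -- all nonzero, so the inverse is not a
# trigonometric polynomial, i.e. not an element of C[Z].
import cmath, math

def fourier_coeff(n, N=4096):
    """n-th Fourier coefficient of 1/(3 + 2cos t), via the trapezoid rule."""
    total = 0j
    for k in range(N):
        t = 2 * math.pi * k / N
        total += cmath.exp(-1j * n * t) / (3 + 2 * math.cos(t))
    return total / N

rho = (3 - math.sqrt(5)) / 2
for n in range(12):
    c = fourier_coeff(n)
    assert abs(abs(c) - rho**n / math.sqrt(5)) < 1e-9  # matches the closed form
    assert abs(c) > 0                                  # never vanishes
```

The geometric decay matches the closed form for the Fourier series of $1/(a + 2\cos t)$ with $a = 3$; the point is only that no coefficient is zero.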
My gut tells me this is true, but I don't see it at the moment.
Very little C$^*$-algebra theory applies to $*$-algebras. The main problem is the following: consider for instance the generator $1 \in \mathbb Z$ viewed as an element of $\mathbb C[\mathbb Z]$. What's its spectrum? To avoid confusing the elements of $\mathbb Z$ with scalars, consider instead $G=\{V^n:\ n\in\mathbb Z\}$, where $V$ is a unitary with infinite spectrum (so that $G\simeq \mathbb Z$); we need to ask ourselves when $V-\lambda I$ is invertible. This requires us to write $$ (V-\lambda I)^{-1}=\sum_{j=1}^m \alpha_j\,V^{k_j} $$ for some $k_1,\ldots,k_m\in\mathbb Z$; by "filling the gaps with $0$" we may write this as $\sum_{j=-r}^{r}\alpha_jV^j$. So we need to have \begin{align} I&=(V-\lambda I)(V-\lambda I)^{-1}=\sum_{j=-r}^r \alpha_j\,V^{j+1}-\sum_{j=-r}^r \lambda\alpha_j\,V^{j}\\[0.3cm] &=-\lambda\alpha_{-r}V^{-r}+\alpha_rV^{r+1}+\sum_{j=-r+1}^r(\alpha_{j-1}-\lambda\alpha_j)V^j. \end{align} Linear independence then forces $$\alpha_{-1}-\lambda\alpha_0=1, \ \ \ \lambda\alpha_{-r}=0,\ \ \ \alpha_r=0,\ \ \ \alpha_{j-1}-\lambda\alpha_j=0 \ \ (j\neq 0).$$ When $\lambda=0$ this works, since $V$ is invertible. If $\lambda\ne0$, the equations are impossible: we start from $\alpha_r=0$; then $\alpha_{r-1}=\lambda\alpha_r=0$, and so $\alpha_{j}=0$ for $j=0,\ldots,r$. Further, $\alpha_{-1}=1+\lambda\alpha_0=1$, and then $\alpha_{-2}=\lambda,\ \alpha_{-3}=\lambda^2,\ \ldots,\ \alpha_{-r}=\lambda^{r-1}$; hence $\lambda\alpha_{-r}=\lambda^{r}\neq0$, contradicting $\lambda\alpha_{-r}=0$. So $$ \sigma(V)=\mathbb C\setminus\{0\}. $$
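The impossibility of those equations for $\lambda \neq 0$ can also be checked mechanically. Here is a small sketch of my own (the function name is mine): it runs the recursion $\alpha_{j-1} = \lambda\alpha_j$ downward from $\alpha_r = 0$, applies the special $V^0$ equation, and then tests the leftover constraint $\lambda\alpha_{-r} = 0$.

```python
# Model the coefficient equations for (V - lam*I)^{-1} = sum_{j=-r}^r alpha_j V^j:
#   alpha_r = 0,  alpha_{j-1} = lam*alpha_j (j != 0),
#   alpha_{-1} - lam*alpha_0 = 1,  lam*alpha_{-r} = 0.
def laurent_inverse_exists(lam, r):
    """True iff the coefficient system for (V - lam*I)^{-1} is consistent."""
    alpha = {r: 0}                        # alpha_r = 0 (top-degree equation)
    for j in range(r, 0, -1):             # j = r, ..., 1
        alpha[j - 1] = lam * alpha[j]
    alpha[-1] = 1 + lam * alpha[0]        # the V^0 equation
    for j in range(-1, -r, -1):           # j = -1, ..., -r+1
        alpha[j - 1] = lam * alpha[j]
    return lam * alpha[-r] == 0           # the remaining V^{-r} constraint

assert laurent_inverse_exists(0, 5)       # lambda = 0: V^{-1} works
assert not laurent_inverse_exists(2, 5)   # lambda = 2: system inconsistent
```

For $\lambda \neq 0$ the trailing constraint evaluates to $\lambda^r \neq 0$, exactly as in the hand computation.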
In this context you can apply reuns' example: take $f=(V+V^{-1})^*(V+V^{-1})=(V+V^{-1})^2=V^2+2I+V^{-2}$. Then $f+I\geq I$. Suppose that $f+I$ were invertible: its inverse would be of the form $\sum_{j=-r}^r \alpha_j V^j$, so we would have \begin{align} I&=(V^2+3I+V^{-2})\sum_{j=-r}^r \alpha_j V^j. \end{align} By linear independence, looking at the terms of highest and lowest degree we get $\alpha_{r}=\alpha_{-r}=0$. Repeating the argument we get $\alpha_j=0$ for all $j$, so the right-hand side is $0$ rather than $I$: a contradiction that shows that $f+I$ is not invertible. Thus $f+I$ is a positive element satisfying $f+I\geq I$ that is not invertible in $\mathbb C[\mathbb Z]$, so the conjectured equivalence fails.
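The same degree-peeling can be scripted; this sketch (mine, not from the answer) forces every coefficient of $(V^2+3I+V^{-2})\sum_j \alpha_j V^j$ in degrees $m \neq 0$ to vanish and then reads off the $V^0$ coefficient, which comes out $0$ instead of the required $1$.

```python
# The coefficient of V^m in (V^2 + 3I + V^{-2}) * sum_j alpha_j V^j is
#   alpha_{m-2} + 3*alpha_m + alpha_{m+2}.
# Peeling from the highest degree down (and the lowest degree up) forces
# every alpha_j = 0, so the V^0 coefficient can never equal 1.
def constant_term(r):
    """Make all coefficients in degrees m != 0 vanish, return the V^0 one."""
    alpha = {}
    get = lambda j: alpha.get(j, 0)       # out-of-range coefficients are 0
    for m in range(r + 2, 0, -1):         # top degrees: V^m coefficient = 0
        alpha[m - 2] = -(3 * get(m) + get(m + 2))
    for m in range(-(r + 2), 0):          # bottom degrees: V^m coefficient = 0
        alpha[m + 2] = -(3 * get(m) + get(m - 2))
    return get(-2) + 3 * get(0) + get(2)  # the V^0 coefficient

# An inverse would need this to be 1, but it is 0 for every degree bound r:
assert all(constant_term(r) == 0 for r in range(1, 12))
```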