In convex analysis, the Legendre-Fenchel transform seems always to be written as
$$ f^*(x^*) = \sup_{x\in \mathbb{R}^n}\left\{\langle x, x^*\rangle - f(x)\right\}, $$
where $x$ and $x^*$ are both considered to be in the same vector space, namely $\mathbb{R}^n$, and $\langle\cdot,\cdot\rangle$ is its inner product. The two main references, Fenchel and Rockafellar, both define it this way.
Another possible way to think about it would be to say that $x$ lives in an $n$-dimensional vector space $V$ and $x^*$ lives in its dual space, $V^*$. So we could write the Legendre-Fenchel transform as
$$ f^*(x^*) = \sup_{x\in V}\left\{x^* x - f(x)\right\}, $$
since a dual vector $x^*$ is a map sending a vector $x$ to $\mathbb{R}$.
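To make the definition concrete, here is a small numerical sketch (my own illustration, not from any reference): it approximates $f^*(y) = \sup_x\{yx - f(x)\}$ by taking the supremum over a finite grid, using $f(x) = x^2/2$, which is its own conjugate.

```python
def legendre_fenchel(f, xs):
    # f*(y) = sup_x { y*x - f(x) }, approximated over the sample points xs.
    return lambda y: max(y * x - f(x) for x in xs)

xs = [i / 100 for i in range(-500, 501)]   # grid on [-5, 5]
f = lambda x: 0.5 * x * x                  # f(x) = x^2/2 is its own conjugate
f_star = legendre_fenchel(f, xs)

print(f_star(2.0))   # 2.0, matching f*(y) = y^2/2 at y = 2
```

Note that the code never forms an inner product of two vectors in the same space; it only ever multiplies a "slope" $y$ against a "point" $x$, which is exactly the pairing of a dual vector with a vector.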
My question is, does the inner product do any "real work" in convex analysis? That is, are there any important theorems that rely on $x$ and $x^*$ living in the same space, or which make use of the norm or the inner product in an important way?
Or to put it another way, if one were to avoid using the inner product and instead define everything in terms of dual spaces, are there any important theorems in convex analysis that would no longer be meaningful and/or true?
I have a feeling that it might just be a matter of historical taste, that people in convex analysis have tended to prefer working in $\mathbb{R}^n$ with an inner product rather than considering dual spaces. However, I am not sure, and I would like to know if there is something that necessitates the inner product in the definition.
I don't think you need the inner product, especially if you're working on optimization problems in Banach spaces, where no inner product is guaranteed. That said, an inner product does endow the space with a lot of useful structure. Thinking about the Legendre-Fenchel transform in terms of dual spaces, regardless of dimensionality, works much as you said: for any function $f:X\to\overline{\mathbb{R}}$, where $X$ is a real locally convex linear space, you can define its conjugate function $f^*:X^*\to\overline{\mathbb{R}}$ on the dual space $X^*$ by
$$ f^*(x^*)=\sup_{x\in X}\left\{(x^*,x)-f(x)\right\}, \quad x^*\in X^*. $$
Here the pairing $(\cdot,\cdot)$ comes from the notion of a dual system: two linear spaces $X$ and $Y$ over the same scalar field $F$ form a dual system when a fixed bilinear functional on their product is given,
$$ (\cdot, \cdot): X\times Y\to F. $$
For each $x\in X$, we define the map $f_x:Y \to F$ by
$$ f_x(y)=(x,y), \quad \forall y\in Y. $$
Notice that $f_x$ is a linear functional on $Y$, and the mapping $x\mapsto f_x$ is linear and injective, so the elements of $X$ can be identified with linear functionals on $Y$. In a similar manner, the elements of $Y$ can be identified with linear functionals on $X$. Thus each dual system of linear spaces defines a mapping from either of the two linear spaces into the space of linear functionals on the other. In particular, there is a natural duality between $X$ and $X^*$ determined by the bilinear functional $(\cdot, \cdot):X\times X^*\to F$ defined by
$$ (x,x^*)=x^*(x), \quad \forall x\in X,\; x^* \in X^*. $$
A great reference that helped me formalize my understanding of this for a personal problem is Barbu and Precupanu's Convexity and Optimization in Banach Spaces, so that might be useful if you're looking for a formal treatment of this topic.
For example, they present the derivation of the dual problem for a linear program in finite dimensions, and for a linear program where the dimension is arbitrary, among other generalizations. Hope this helps! :)
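To illustrate that only the pairing $(x, x^*) = x^*(x)$ is ever used, here is a sketch (my own, with hypothetical names) that computes a conjugate on $\mathbb{R}^2$ while representing each dual vector as a plain function from vectors to scalars, so no inner product on $X$ itself appears anywhere.

```python
# A dual vector x* is represented as a function: it maps a vector to a scalar.
# The conjugate only ever evaluates x*(x) - f(x); no inner product on X is used.

def conjugate(f, sample_points):
    # f*(x*) = sup_x { x*(x) - f(x) }, approximated over sample_points.
    return lambda x_star: max(x_star(x) - f(x) for x in sample_points)

# f(x1, x2) = (x1^2 + x2^2)/2 on a small grid in R^2.
grid = [(i / 10, j / 10) for i in range(-30, 31) for j in range(-30, 31)]
f = lambda x: 0.5 * (x[0] ** 2 + x[1] ** 2)
f_star = conjugate(f, grid)

x_star = lambda x: 1.0 * x[0] + 2.0 * x[1]   # the functional "x* = (1, 2)"
print(f_star(x_star))   # 2.5, matching (1^2 + 2^2)/2
```

Of course, in $\mathbb{R}^n$ every such functional happens to be representable as an inner product against some fixed vector, which is exactly why the two formulations coincide there.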