Given an ellipsoid around the origin with scaling parameter $e$ in the form $x^T E x \leq e$ and a polyhedron $P$ given by $A x \leq b$, how can we define an optimization problem that maximizes e such that the ellipsoid is fully contained in $P$? Note that $E$ is constant, i.e. the problem is not the same as inscribing an ellipsoid of maximal volume into a polyhedron. Moreover, $A$, $b$ are constant as well.
My basic idea would be as follows. The ellipsoid equation can be rewritten as $x^T \frac{1}{e} E x \leq 1$. Let $F = \frac{1}{e} E$. Let $U$ be the matrix whose columns are the eigenvectors of $F$, which coincide with the eigenvectors of $E$, i.e. $U$ is constant. Let $V$ be the diagonal matrix with $\sqrt{\lambda_i}$ on the diagonal, where $\lambda_i$ is the eigenvalue corresponding to the $i$-th column of $U$. Note that $v_{i,i}$ is of the form $\sqrt{\frac{const}{e}}$. We can parameterize the ellipsoid as $x = U V^{-1} z$, $\|z\| \leq 1$. Then the optimization problem over the single variable $e$ becomes:
$\max \log(\det(U V^{-1}))$ where $\|V^{-1} U^T a_i^T\| \leq b_i ~ \forall i$ ($a_i$ denotes the $i$-th row of $A$)
Do you agree with this approach? Is there a simpler way to represent the original problem?
You do not need the full convex optimization machinery normally applied to this problem. But even if you wanted to use convex optimization, your ellipsoid parameterization isn't compatible with it: that parameterization works for minimum-volume ellipsoids, not maximum-volume ones. To do volume maximization you need to work with $B=E^{-1/2}$ instead.
Fortunately, you don't even need to do that.
First of all, note that if any element of $b$ is negative, no ellipsoid of any volume exists, because even $x=0$ is not in the polyhedron. So we will assume henceforth that $b\succeq 0$. We will also assume that none of the rows of $A$ are zero, though we could handle that case with a bit of care below.
Let's look at a single inequality $a_i^Tx\leq b_i$. The largest admissible $e$ for this constraint is found by minimizing $x^TEx$ over the bounding hyperplane: \begin{array}{ll} \text{minimize} & e = x^T E x \\ \text{subject to} & a_i^T x = b_i \end{array} In other words, we want the smallest ellipsoid that touches the hyperplane bounding the halfspace. This is convex and readily solved with a Lagrangian approach. The Lagrangian and corresponding optimality conditions are $$L(x,\lambda)=x^TEx+\lambda(b_i-a_i^Tx) \quad\Longrightarrow\quad 2Ex = \lambda a_i, \quad a_i^Tx=b_i$$ In the likely event that $E$ is full rank (i.e., positive definite), then $x=\tfrac{1}{2}\lambda E^{-1}a_i$, so $\lambda$ can be determined by $$a_i^Tx = \tfrac{1}{2}\lambda a_i^TE^{-1}a_i = b_i \quad\Longrightarrow\quad \lambda = 2b_i/(a_i^TE^{-1}a_i)$$ Therefore, $$e=x^TEx=\tfrac{1}{4}\lambda^2 a_i^TE^{-1}a_i = b_i^2/(a_i^TE^{-1}a_i).$$ To handle the multiple inequalities, we simply take the smallest value of $e$ from the bunch: $$e = \min_i b_i^2/(a_i^TE^{-1}a_i).$$ As a sanity check, note that if $a_i$ and $b_i$ are scaled by the same positive value, $e$ does not change. This is exactly what we would expect, because the hyperplane doesn't move.
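Here is a short numerical sketch of the full-rank formula (the function name and the example polyhedron are my own, not from the question):

```python
import numpy as np

def max_ellipsoid_scale(E, A, b):
    """Largest e such that {x : x^T E x <= e} lies in {x : A x <= b}.

    Assumes E is symmetric positive definite and b >= 0 elementwise.
    """
    if np.any(b < 0):
        raise ValueError("origin not in polyhedron; no ellipsoid fits")
    Einv_At = np.linalg.solve(E, A.T)          # columns are E^{-1} a_i
    denom = np.einsum('ij,ji->i', A, Einv_At)  # a_i^T E^{-1} a_i for each row
    return np.min(b**2 / denom)

# Example: E = I (unit ball) inside the unit box |x_j| <= 1 in R^2.
E = np.eye(2)
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)
print(max_ellipsoid_scale(E, A, b))  # 1.0: the unit disk x^T x <= 1 fits exactly
```

Note that no iterative solver is needed: the answer is a closed-form minimum over the rows of $A$.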
In the event that $E$ is not full rank, then you have a bit more work to do. For brevity I'll offer the result below, but can offer more derivation if requested. $$e = \begin{cases} 0 & \text{if } \exists i: a_i \not\in\mathop{\textrm{Range}}(E) \\ \min_i b_i^2/(a_i^TE^\dagger a_i) & \text{otherwise} \end{cases}$$ where $E^\dagger$ denotes the pseudoinverse of $E$.