(a) Basis given by the columns of $A$
(b) Basis given by the columns of $A^{-1}$
(c) Basis given by the columns of $A^TA$
(d) There is no guarantee such a basis exists
How does one differentiate between an inner product and a dot product as it relates to the basis of the matrices involved? Is there a simpler analogous example that puts this question into perspective?

Here's the crux:
The dot product of two vectors, defined as the sum of products of corresponding entries, is definable only once a basis is given, since the "entries" of a vector only make sense relative to a basis.
However, an inner product is not necessarily basis-dependent. For example, in $\mathbb R^2$, take the conventional $\langle x,y\rangle = \|x\| \cdot \|y\| \cdot \cos \theta$, where $\theta$ is the angle between the vectors $x$ and $y$ in the usual coordinate plane. The quantities $\|x\|$, $\|y\|$, and $\theta$ are geometric: they do not depend on which basis is in use. Hence $\langle x,y\rangle$ does not change under a change of basis.
Another way to see this is that a change of basis does not change the geometric picture: $x$ and $y$ are still exactly where they were, relative to each other, after any change of basis. The inner product respects this, while the dot product does not.
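Here is a quick numerical sketch of this point (not part of the original answer; the vectors and the alternative basis $\{(1,0),(1,1)\}$ are arbitrary choices for illustration). The geometric formula $\|x\|\,\|y\|\cos\theta$ reproduces the standard-basis dot product, while the dot product of coordinate vectors changes when the basis does.

```python
import numpy as np

# Standard coordinates of two vectors in R^2.
x = np.array([3.0, 1.0])
y = np.array([1.0, 2.0])

# Compute the angle between x and y geometrically (no dot product used).
theta = np.arctan2(y[1], y[0]) - np.arctan2(x[1], x[0])
geometric = np.linalg.norm(x) * np.linalg.norm(y) * np.cos(theta)

# The geometric inner product agrees with the standard-basis dot product.
assert np.isclose(geometric, x @ y)      # both equal 5

# But the dot product of *coordinate vectors* is basis-dependent:
# rewrite x and y in the (arbitrarily chosen) basis {(1,0), (1,1)}.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])               # columns are the new basis vectors
x_new = np.linalg.solve(P, x)            # coordinates of x in the new basis
y_new = np.linalg.solve(P, y)
print(x @ y, x_new @ y_new)              # 5.0 vs 0.0 -- different numbers
```

The geometric value $\langle x,y\rangle = 5$ is fixed, but the naive sum-of-products of coordinates gives $5$ in one basis and $0$ in another.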
In a similar manner, a quantity of the form $x^TA^TAy$ also does not change under a change of basis, which means that it is an inner product. However, it is also a dot product, for a certain choice of basis.
That is, if we fix a suitable basis, then this inner product is nothing but the dot product for that basis: we compute the entries of $x$ and $y$ in that basis and take the sum of products of corresponding entries.
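The identity behind this is $x^TA^TAy = (Ax)^T(Ay)$, which a short NumPy check confirms (a sketch, not part of the original answer; the random $A$ is assumed invertible, which holds generically):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))   # assumed invertible (true generically)
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# The inner product x^T A^T A y ...
ip = x @ A.T @ A @ y
# ... is exactly the dot product of the transformed vectors Ax and Ay.
assert np.isclose(ip, (A @ x) @ (A @ y))
```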
Last but not least, the dot product can be defined for vectors in $\mathbb R^n$, for example, but I do not know of an analogue in infinite-dimensional vector spaces (of functions, say), because one would need to impose conditions for the sum to converge, etc.
On the other hand, inner products can very well be defined on infinite dimensional spaces.
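For instance, on a space of functions on $[0,1]$ one can take $\langle f,g\rangle = \int_0^1 f(t)\,g(t)\,dt$ (the $L^2$ inner product). A numerical sketch of this, not in the original answer, approximating the integral with the trapezoid rule on a grid:

```python
import numpy as np

# L^2 inner product <f, g> = integral of f*g over [0, 1],
# approximated with the trapezoid rule -- no coordinates or basis needed.
t = np.linspace(0.0, 1.0, 10_001)
f = np.sin(np.pi * t)
g = np.cos(np.pi * t)

h = t[1] - t[0]
fg = f * g
inner = h * (fg[:-1] + fg[1:]).sum() / 2.0

# sin(pi t) and cos(pi t) are orthogonal in L^2[0, 1]:
print(inner)   # approximately 0
```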
So we want a basis $B$ such that, when $x$ and $y$ are expressed in the basis $B$, their dot product equals the number $x^TA^TAy$.
Well, $(Ax)^T (Ay)$ gives an immediate clue: in which basis are the coordinates of $x$ given by $Ax$?
To see this, let us imagine the identity map from $\mathbb R^n$ to $\mathbb R^n$, where the former carries the standard basis and the latter carries some basis $b_1,\dots,b_n$, and ask: for which basis is the matrix of this map equal to $A$?
This is fairly easy to see: if $b_1,\dots,b_n$ is the basis, then we want $x = (Ax)_1b_1 + \dots + (Ax)_nb_n$. That is, $x_1e_1 + \dots + x_ne_n = (Ax)_1b_1 + \dots + (Ax)_nb_n = x_1\big(\sum_j A_{j1}b_j\big) + \dots + x_n\big(\sum_j A_{jn}b_j\big)$. So all we need is that $\sum_j A_{ji}b_j = e_i$ for each $i$. Writing $B$ for the matrix with columns $b_1,\dots,b_n$, this condition reads $BA = I$, i.e. $B = A^{-1}$. Therefore, the answer is (b): the basis given by the columns of $A^{-1}$.
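This conclusion is easy to verify numerically (a sketch not in the original answer; the random $A$ is assumed invertible): the coordinates of $x$ in the basis given by the columns of $B = A^{-1}$ solve $Bc = x$, so $c = Ax$, and the dot product of coordinates reproduces $x^TA^TAy$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))   # assumed invertible (true generically)
B = np.linalg.inv(A)              # candidate basis: columns of A^{-1}

x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Coordinates of x in the basis given by the columns of B solve B c = x,
# i.e. c = B^{-1} x = A x.
cx = np.linalg.solve(B, x)
cy = np.linalg.solve(B, y)
assert np.allclose(cx, A @ x)

# The dot product of the coordinate vectors equals x^T A^T A y.
assert np.isclose(cx @ cy, x @ A.T @ A @ y)
```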