There is an exercise in András Frank's book Connections in Combinatorial Optimization: an easy exercise shows that every affine matroid can be represented as a linear matroid, and vice versa.
I think I need to prove that, given a finite set of vectors $S$ in $\mathbb{R}^n$, there exists a map $m$ from $S$ to another set of vectors $T$ such that for all $A\subseteq S$, $A$ is affinely independent if and only if $m(A)$ is linearly independent. The inverse of $m$ would then transform the ground set of a linear matroid into that of an affine matroid. I think the intuition is to increase or decrease the dimension of the vectors.
Representing affine matroids as linear matroids is easy: $m$ should add one coordinate to every vector in $S$, i.e. $m((a_1,\dots,a_n)^T)=(1,a_1,\dots,a_n)^T$. This works because points $p_1,\dots,p_k$ are affinely independent if and only if the lifted vectors $(1,p_1),\dots,(1,p_k)$ are linearly independent.
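As a quick numerical sanity check of this lifting (not part of the proof; the helper functions and random test data below are my own, using numpy), one can compare affine independence of random subsets with linear independence of their lifted images:

```python
import numpy as np

def linearly_independent(vectors):
    # a set of vectors is linearly independent iff the matrix
    # having them as columns has full column rank
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

def affinely_independent(points):
    # p_0,...,p_k are affinely independent iff the differences
    # p_1 - p_0, ..., p_k - p_0 are linearly independent
    if len(points) <= 1:
        return True
    p0 = points[0]
    return linearly_independent([p - p0 for p in points[1:]])

def lift(v):
    # m((a_1,...,a_n)^T) = (1, a_1, ..., a_n)^T
    return np.concatenate(([1.0], v))

# compare the two notions on random subsets of random points in R^3
rng = np.random.default_rng(0)
S = [rng.integers(-3, 4, size=3).astype(float) for _ in range(6)]
for _ in range(100):
    k = int(rng.integers(1, 5))
    idx = rng.choice(len(S), size=k, replace=False)
    A = [S[i] for i in idx]
    assert affinely_independent(A) == linearly_independent([lift(v) for v in A])
print("affine independence of A matches linear independence of m(A)")
```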
However, the inverse of $m$ does not always exist: not every vector has first coordinate $1$. Is there a good way to decrease the dimension of the vectors?
Edit: I think I have the answer. Suppose the ground set of the linear matroid is $S=\{x_1,\dots,x_n\}$. Then the ground set of the affine matroid should be $\{x_1+e,\dots,x_n+e,e\}$ where $e\notin S$. Indeed, for any $A\subseteq S$, translating by $-e$ shows that $\{x+e : x\in A\}\cup\{e\}$ is affinely independent if and only if $A\cup\{0\}$ is, which holds if and only if $A$ is linearly independent.
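This construction can also be sanity-checked numerically (a sketch with my own helper functions and random data, again using numpy): for random subsets $A$, linear independence of $A$ should coincide with affine independence of $\{x+e : x\in A\}\cup\{e\}$.

```python
import numpy as np

def lin_indep(vectors):
    # full column rank <=> linear independence
    return np.linalg.matrix_rank(np.column_stack(vectors)) == len(vectors)

def aff_indep(points):
    # affine independence <=> the differences from the first
    # point are linearly independent
    if len(points) <= 1:
        return True
    return lin_indep([p - points[0] for p in points[1:]])

rng = np.random.default_rng(1)
S = [rng.integers(-3, 4, size=4).astype(float) for _ in range(5)]
e = rng.standard_normal(4)  # an arbitrary translation vector

for _ in range(200):
    k = int(rng.integers(1, 5))
    idx = rng.choice(len(S), size=k, replace=False)
    A = [S[i] for i in idx]
    # translate A by e and adjoin the point e itself
    T = [x + e for x in A] + [e]
    assert lin_indep(A) == aff_indep(T)
print("linear independence of A matches affine independence of {x+e : x in A} + {e}")
```

Note that the check goes through even if $S$ happens to contain the zero vector: then both sides are dependent, since $0+e$ and $e$ coincide.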