The definition of an isomorphism is "a structure-preserving mapping between two structures of the same type that can be reversed by an inverse mapping".
In my linear algebra class, an isomorphism is defined as "a bijective linear transformation." I understand some of the structure that is preserved between the spaces, but is there some exact, fixed set of things that are preserved by a bijective linear transformation (or an isomorphism in general)? Further, before we can say that a bijective linear transformation is an isomorphism, do we not need to prove that all of the structure is actually preserved? I ask this because we seem to just assume that a lot of things are preserved between different vector spaces, but we have never proven that these things are preserved. Below are a few things that I am a little confused about: do they need to be proven separately, or do they follow from the fact that $T$ is a bijective linear transformation?
Let $T: V \to W$ be an isomorphism. I understand that since $T$ is linear, scalar multiplication and addition are preserved. But must it be shown, for example, that for every linear transformation $A$ on $V$ there exists a unique linear transformation $B$ on $W$ such that $A(x) = T^{-1}(B(T(x)))$ for all $x \in V$? Or that if $S = (S_1, \dots, S_n)$ is a basis of $W$, then $(T^{-1}(S_1), \dots, T^{-1}(S_n))$ is a basis of $V$?
I'm not exactly sure where this preservation of structure stops, or at what point I am no longer using the fact that we have an isomorphism and am instead asserting things that have not been proven. Maybe I am not entirely sure what constitutes structure that needs to be preserved in terms of vector spaces.
Any help clearing up this idea of isomorphism is greatly appreciated.
(This is a response to a question OP asks in the comments which is too long to keep in the comments, but is still relevant as an answer to the original question.)
Model Theory and/or Universal Algebra provide a very general way of approaching the question you're asking about what 'structure' is and what it means for 'isomorphisms' to preserve 'properties' of that structure. The other answers have provided more concrete explanations of this for the specific case of vector spaces, so I'll give a more abstract answer. I will also be approaching $\mathbb{F}$-vector spaces as a single-sorted structure, in contrast to user21820's approach.
A (first-order) language is a triple $\langle \mathcal{F}, \mathcal{R}, \operatorname{ar}\rangle$ consisting of disjoint sets $\mathcal{F}$ and $\mathcal{R}$ and a function $\operatorname{ar} \colon \mathcal{F} \sqcup \mathcal{R} \to \mathbb{N}$. Our interpretation of this triple is that $\mathcal{F}$ denotes a set of function symbols, $\mathcal{R}$ denotes a set of relation symbols, and $\operatorname{ar}$ associates to each function and relation symbol an arity.
In our particular example of $\mathbb{F}$-vector spaces (where $\mathbb{F}$ is any field, e.g., $\mathbb{R}$ or $\mathbb{C}$), we can take $\mathcal{F} = \{+\} \cup \{s_\alpha \mid \alpha \in \mathbb{F}\}$, $\mathcal{R} = \emptyset$, and $\operatorname{ar}(+) = 2$ and $\operatorname{ar}(s_\alpha) = 1$ for each $\alpha \in \mathbb{F}$. $+$ has its usual interpretation as vector addition, while the function symbols $s_\alpha$ correspond to scalar multiplication by $\alpha$ (this is how we get around the 'two-sorted' nature of vector spaces after fixing a field). Optionally you can include a $0$-ary function (i.e., constant) symbol $0$ for the zero vector and/or a unary function symbol $-$ for vector negation, but these are 'definable' in the sense I'll indicate below.
Given a language $\langle \mathcal{F}, \mathcal{R}, \operatorname{ar}\rangle$, a structure over that language is a triple $\mathbf{A} = \langle A, \langle f^\mathbf{A}\rangle_{f \in \mathcal{F}}, \langle R^\mathbf{A} \rangle_{R \in \mathcal{R}}\rangle$ such that for each $f \in \mathcal{F}$, $f^\mathbf{A}$ is a function $A^{\operatorname{ar} f} \to A$ and for each $R \in \mathcal{R}$, $R^\mathbf{A}$ is a $(\operatorname{ar} R)$-ary relation over $A$. In other words, a structure over a language is a set and instantiations of all the function and relation symbols (such that the arities line up appropriately). Note that at this point we haven't introduced any 'axioms' or the like.
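To make this concrete, here is a minimal Python sketch of a structure over the vector-space language for the two-element field $\mathbb{F} = \mathrm{GF}(2)$. The names (`interp`, `s0`, `s1`) and the dictionary encoding are my own ad hoc choices, not anything standard:

```python
from itertools import product

# A tiny "structure" over the language of GF(2)-vector spaces:
# one binary function symbol '+' and one unary symbol s_a per scalar a.
# Carrier set: GF(2)^2, i.e. all pairs of bits.
A = set(product((0, 1), repeat=2))

def add(u, v):
    # vector addition = componentwise XOR over GF(2)
    return (u[0] ^ v[0], u[1] ^ v[1])

def s0(v):
    # scalar multiplication by 0
    return (0, 0)

def s1(v):
    # scalar multiplication by 1 (the identity map)
    return v

# interpretations of the function symbols, keyed by (symbol, arity)
interp = {('+', 2): add, ('s_0', 1): s0, ('s_1', 1): s1}

# sanity check: every operation maps the carrier into the carrier,
# which is all that "being a structure" demands -- no axioms yet
assert all(add(u, v) in A for u in A for v in A)
assert all(s0(v) in A and s1(v) in A for v in A)
```

Note that, exactly as in the definition, nothing here checks any vector-space axioms; a structure is just a carrier plus arity-respecting interpretations.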
If $\mathbf{A}$ and $\mathbf{B}$ are two structures over a language $\langle \mathcal{F}, \mathcal{R}, \operatorname{ar}\rangle$, an isomorphism between $\mathbf{A}$ and $\mathbf{B}$ is a bijection $\varphi \colon A \to B$ such that for every function symbol $f \in \mathcal{F}$, every relation symbol $R \in \mathcal{R}$ (say with $\operatorname{ar} f = \operatorname{ar} R = n$), and all $a_1,a_2,\ldots,a_n \in A$, $$\varphi(f^\mathbf{A}(a_1,a_2,\ldots,a_n)) = f^\mathbf{B}(\varphi(a_1),\varphi(a_2),\ldots,\varphi(a_n))$$ and $$\langle a_1,a_2,\ldots,a_n\rangle \in R^\mathbf{A} \iff \langle \varphi(a_1),\varphi(a_2),\ldots,\varphi(a_n)\rangle \in R^\mathbf{B}.$$ In other words, an isomorphism is a bijection which is compatible with the operations (or commutes with the operations) and both preserves and reflects each relation.
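For finite structures this definition can be checked by brute force. Here is a sketch in Python for a purely functional language (i.e., $\mathcal{R} = \emptyset$, so no relation clauses are checked); the function name and the `(arity, function)` encoding are my own:

```python
from itertools import product

def is_isomorphism(phi, A, B, ops_A, ops_B):
    """phi: dict mapping A into B; ops_*: dicts name -> (arity, function).
    Checks that phi is a bijection compatible with every operation."""
    if len(A) != len(B) or {phi[a] for a in A} != set(B):
        return False                      # not a bijection onto B
    for name, (n, fA) in ops_A.items():
        _, fB = ops_B[name]
        for args in product(A, repeat=n):
            # phi must commute with the operation on every input tuple
            if phi[fA(*args)] != fB(*(phi[a] for a in args)):
                return False
    return True

# Example: Z/2Z under XOR versus {1, -1} under multiplication.
A = [0, 1]
B = [1, -1]
ops_A = {'+': (2, lambda x, y: x ^ y)}
ops_B = {'+': (2, lambda x, y: x * y)}

phi = {0: 1, 1: -1}
print(is_isomorphism(phi, A, B, ops_A, ops_B))   # True
bad = {0: -1, 1: 1}   # bijective, but does not commute with the operations
print(is_isomorphism(bad, A, B, ops_A, ops_B))   # False
```

The second example illustrates that bijectivity alone is not enough: the map must also commute with every operation of the language.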
In our particular example of $\mathbb{F}$-vector spaces with the language I gave above, a structure over the language of $\mathbb{F}$-vector spaces is just a set $V$, a binary operation $+\colon V^2 \to V$, and unary operations $s_\alpha \colon V \to V$ for each $\alpha \in \mathbb{F}$. An isomorphism $\varphi$ between such structures asks $\varphi$ to be a bijection and for $\varphi(v+w) = \varphi(v)+\varphi(w)$ and $\varphi(s_\alpha(v)) = s_\alpha(\varphi(v))$ for all $v,w$ in the domain of our first structure and all $\alpha \in \mathbb{F}$. So far, this doesn't look quite like the typical definition of an isomorphism of $\mathbb{F}$-vector spaces, which would ask that $\varphi$ satisfy $\varphi(s_\alpha(v)+w) = s_\alpha(\varphi(v)) + \varphi(w)$ for all $v,w \in V$ and $\alpha \in \mathbb{F}$. We'll address that now.
We like to deal with structures that exhibit particular properties, like groups, rings, posets, etc. One way to realize this in the framework above is to define a first-order theory $\Phi$ over a language $\sigma = \langle \mathcal{F},\mathcal{R},\operatorname{ar}\rangle$ to be a set of well-formed formulas in that language. To give the shortest definition I can of what is meant by 'well-formed formula', fix a countably infinite set of variables $X$ and consider strings over the set $\Sigma = X \cup \mathcal{F} \cup \mathcal{R} \cup \{ \text{`('},\text{`)'},\text{`,'},=,\vee,\wedge,\to,\neg,\leftrightarrow,\forall,\exists\}$ with the usual interpretation of the logical symbols and assuming that everything is distinct. $\mathrm{Term}_\sigma$ is the smallest subset of $\Sigma^\ast$ (the set of strings built up over the alphabet $\Sigma$) such that:

- every variable $x \in X$ belongs to $\mathrm{Term}_\sigma$;
- if $f \in \mathcal{F}$ with $\operatorname{ar} f = n$ and $t_1,\ldots,t_n \in \mathrm{Term}_\sigma$, then $f(t_1,\ldots,t_n) \in \mathrm{Term}_\sigma$.
In other words, $\mathrm{Term}_\sigma$ is the set of expressions built up from the variables and function symbols. Then define $\mathrm{Form}_\sigma$ to be the smallest subset of $\Sigma^\ast$ such that:

- if $t_1, t_2 \in \mathrm{Term}_\sigma$, then $(t_1 = t_2) \in \mathrm{Form}_\sigma$;
- if $R \in \mathcal{R}$ with $\operatorname{ar} R = n$ and $t_1,\ldots,t_n \in \mathrm{Term}_\sigma$, then $R(t_1,\ldots,t_n) \in \mathrm{Form}_\sigma$;
- if $\varphi, \psi \in \mathrm{Form}_\sigma$, then $\neg\varphi$, $(\varphi \vee \psi)$, $(\varphi \wedge \psi)$, $(\varphi \to \psi)$, and $(\varphi \leftrightarrow \psi)$ belong to $\mathrm{Form}_\sigma$;
- if $\varphi \in \mathrm{Form}_\sigma$ and $x \in X$, then $\forall x\, \varphi$ and $\exists x\, \varphi$ belong to $\mathrm{Form}_\sigma$.
I'll call elements of $\mathrm{Form}_\sigma$ wffs (well-formed formulas). A sentence is a wff $\varphi$ such that if $x$ is a variable appearing in $\varphi$, then it appears in a subformula of $\varphi$ of the form $\exists x \psi$ or $\forall x \psi$ -- in other words, they're wffs without free variables, or equivalently wffs all of whose variables are bound. The point of sentences is that they correspond to wffs whose 'truth' in any particular structure shouldn't depend on providing an additional assignment of values to free variables.
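If it helps to see these inductive definitions operationally, here is a small Python sketch that represents terms and wffs as nested tuples and computes free variables, so that "sentence" becomes a checkable predicate. The tuple encoding is an arbitrary choice of mine, not a standard one:

```python
# Expressions as nested tuples: ('var', 'x'), ('fn', '+', t1, t2),
# ('eq', t1, t2), ('not', p), ('and', p, q), ('forall', 'x', p), ...

def free_vars(e):
    """Set of variables occurring free in a term or formula."""
    tag = e[0]
    if tag == 'var':
        return {e[1]}
    if tag in ('forall', 'exists'):
        return free_vars(e[2]) - {e[1]}   # the quantifier binds e[1]
    # every other node: union over its subexpressions
    return set().union(*(free_vars(sub) for sub in e[1:]
                         if isinstance(sub, tuple)))

def is_sentence(phi):
    # a sentence is a wff with no free variables
    return not free_vars(phi)

# forall x exists y (x + y = x)  -- every variable is bound
x, y = ('var', 'x'), ('var', 'y')
phi = ('forall', 'x', ('exists', 'y', ('eq', ('fn', '+', x, y), x)))
print(is_sentence(phi))           # True
print(is_sentence(('eq', x, y)))  # False: x and y occur free
```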
Finally we can say that a first-order theory is nothing more than a set $\Phi$ of sentences; in other words, we're picking out sentences which will be our 'axioms'.
In our particular example of $\mathbb{F}$-vector spaces, the theory we take consists of the following sentences/axioms (diverging a bit in notation by using infix notation; the axioms mentioning scalars are really axiom *schemes*, one sentence for each choice of $\alpha, \beta \in \mathbb{F}$):

- $\forall u\, \forall v\, \forall w\; ((u + v) + w = u + (v + w))$
- $\forall u\, \forall v\; (u + v = v + u)$
- $\forall v\, \forall w\; (v + s_0(w) = v)$
- $\forall v\; (s_1(v) = v)$
- $\forall v\, \forall w\; (s_\alpha(v + w) = s_\alpha(v) + s_\alpha(w))$
- $\forall v\; (s_{\alpha + \beta}(v) = s_\alpha(v) + s_\beta(v))$
- $\forall v\; (s_{\alpha\beta}(v) = s_\alpha(s_\beta(v)))$

(Note how the zero vector is definable as $s_0(w)$ for any $w$, and the negation of $v$ as $s_{-1}(v)$, which is why the symbols $0$ and $-$ were optional above.)
An $\mathbb{F}$-vector space is then a structure over the aforementioned language which satisfies the above first-order theory. Giving a precise definition of satisfaction would be too long, but an intuitive idea probably suffices. It can then be easily shown that for any map $\varphi$ between vector spaces, having $\varphi(\alpha v + w) = \alpha \varphi(v)+\varphi(w)$ is equivalent to asking for $\varphi(\alpha v) = \alpha \varphi(v)$ and $\varphi(v+w) = \varphi(v)+\varphi(w)$.
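For the record, one direction of that equivalence takes only a few lines (the converse is immediate by applying the two separate conditions in turn):

```latex
\begin{align*}
&\text{Assume } \varphi(\alpha v + w) = \alpha\varphi(v) + \varphi(w)
  \text{ for all } \alpha \in \mathbb{F} \text{ and } v, w. \\
&\text{Taking } \alpha = 1: & \varphi(v + w) &= \varphi(v) + \varphi(w). \\
&\text{Taking } \alpha = 1,\ v = w = 0: & \varphi(0) &= \varphi(0) + \varphi(0),
  \text{ so } \varphi(0) = 0. \\
&\text{Taking } w = 0: & \varphi(\alpha v) &= \alpha\varphi(v) + \varphi(0)
  = \alpha\varphi(v).
\end{align*}
```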
Last but not least, we double back to what all this means in terms of isomorphisms. The general result is the following:
Theorem. Given a language $\sigma = \langle \mathcal{F},\mathcal{R},\operatorname{ar}\rangle$ and two structures $\mathbf{A}$ and $\mathbf{B}$ over that language, if there exists an isomorphism from $\mathbf{A}$ to $\mathbf{B}$, then for every sentence $\varphi$ in the language, $\mathbf{A}$ satisfies $\varphi$ if and only if $\mathbf{B}$ satisfies $\varphi$.
So isomorphic structures agree with each other on any property/identity/etc that can be written down as a first-order sentence.
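As a toy illustration of the theorem (not a proof), one can brute-force a single sentence, $\exists z\, \forall x\, (x + z = x)$, over two isomorphic finite structures: $\mathbb{Z}/2\mathbb{Z}$ under XOR and $\{1,-1\}$ under multiplication. The helper name below is mine:

```python
# Brute-force evaluation of the sentence  exists z . forall x . x + z = x
# on two isomorphic finite structures.  The theorem predicts they agree.

def satisfies_identity_sentence(carrier, op):
    # exists z: the quantifiers become any/all over the finite carrier
    return any(all(op(x, z) == x for x in carrier) for z in carrier)

A, plusA = [0, 1], lambda x, y: x ^ y   # Z/2Z under XOR
B, plusB = [1, -1], lambda x, y: x * y  # {1, -1} under multiplication

print(satisfies_identity_sentence(A, plusA))  # True (z = 0 works)
print(satisfies_identity_sentence(B, plusB))  # True (z = 1 works)
```

The point is only that the quantifiers of a first-order sentence become finite searches here, so "agreeing on every sentence" is something you can watch happen on small examples.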
Disclaimer: The specific way I defined what a wff is isn't entirely universal; some approaches use Polish notation, some move $=$ from being a logical symbol to being a relation symbol and then add appropriate axioms characterizing it, some use a single variable symbol and include a way to generate infinitely many variables out of that (e.g., given $x$, we add another logical unary operation symbol $'$ such that $x'$, $x''$, $x'''$, etc. are all independent logical variables), etc. The specific way I wrote the axioms for $\mathbb{F}$-vector spaces isn't standard (though pretty typical), nor is the language I used (as mentioned above, a constant ($0$-ary function) symbol $0$ and/or a unary function symbol $-$ might be used). Some folks like to distinguish constant symbols from function symbols. Etc. etc.