To begin with, I am only a secondary school student (17 years old), but I am very interested in higher mathematics. However, we learn very little at my school (only single-variable calculus and basic linear algebra). In the past I have taught myself some abstract algebra and very basic topology from online resources, but I could never get deep into those subjects.
When I read about functional analysis, I encounter objects like function spaces and infinite-dimensional spaces which I can never understand. What exactly does it mean to be a function space, and how do you define a metric on one? I know it is hard and requires a lot of real analysis. Can anyone give me some easy ideas and introductions?
For me, doing functional analysis is best described as 'going beyond linear algebra'.
In linear algebra, the objects you deal with are (coordinate) vectors, i.e. objects from a vector space $V$ which you can multiply with a scalar or add together and again get a vector: For $v,w\in V$ and $\alpha \in \mathbb R$ we have $v + w \in V$ and $\alpha v \in V$.
Functional analysis answers the question 'What happens if $V$ is infinite-dimensional?'. The idea behind this is the observation that these vector space axioms hold for objects other than coordinate vectors with a finite number of rows. For example, the sum of two differentiable functions is again a differentiable function (and a number times a differentiable function is differentiable, too). The same holds true for other classes of functions, e.g. polynomials or square-summable sequences (which are really just functions from $\mathbb N$ to $\mathbb R$ or $\mathbb C$). Note that there are other examples of infinite-dimensional vector spaces which are not function spaces, and examples of function spaces which are finite-dimensional. But one of the things people wanted in the early 20th century, in order to handle quantum mechanics, was some kind of "linear algebra for functions, not row vectors".
When we allow functions instead of vectors from a finite-dimensional space, many things work just as they do in linear algebra, but many others don't. For instance:
We can still measure the length of these vectors, but suddenly it's important which norm we take (not all norms are equivalent on an infinite-dimensional vector space).
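To see this concretely, here is a small NumPy sketch (my own illustration; the example $f_n(x) = x^n$ on $[0,1]$ and the grid size are my choices). The sup norm of $f_n$ stays $1$ while its integral norm $1/(n+1)$ shrinks to $0$, so no constant $C$ can give $\|f\|_\infty \le C\|f\|_1$ for all polynomials:

```python
import numpy as np

def sup_norm(f, xs):
    """Approximate the sup norm of f on a grid xs."""
    return np.max(np.abs(f(xs)))

def l1_norm(f, xs):
    """Approximate the integral (L^1) norm of f on [0, 1]
    by a simple Riemann-sum average over the grid xs."""
    return np.abs(f(xs)).mean()

xs = np.linspace(0.0, 1.0, 10_001)

# f_n(x) = x^n on [0, 1]: the sup norm is always 1, while the
# L^1 norm is 1/(n+1) and tends to 0 as n grows.  Hence the two
# norms are NOT equivalent on the space of polynomials.
for n in (1, 10, 100):
    f = lambda x, n=n: x**n
    print(n, sup_norm(f, xs), l1_norm(f, xs))
```

On a finite-dimensional space any two norms are equivalent, so this effect is genuinely infinite-dimensional.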
We can look at linear operators $A$, but they can no longer be represented as finite matrices (in fact, in the early days of functional analysis, Heisenberg represented differential operators as matrices with infinitely many rows and columns).
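One can still see this "infinite matrix" idea by truncating: here is a sketch (my own example) of $d/dx$ acting on polynomials of degree $< N$ in the monomial basis $1, x, x^2, \dots$ — a finite corner of Heisenberg-style infinite matrix:

```python
import numpy as np

def diff_matrix(N):
    """Matrix of d/dx on polynomials of degree < N, in the
    monomial basis 1, x, ..., x^(N-1).  This is a truncation of an
    infinite matrix, since d/dx maps x^(k+1) to (k+1) x^k."""
    D = np.zeros((N, N))
    for k in range(N - 1):
        D[k, k + 1] = k + 1
    return D

# p(x) = 3 + 2x + 5x^2  ->  p'(x) = 2 + 10x
coeffs = np.array([3.0, 2.0, 5.0])
print(diff_matrix(3) @ coeffs)   # [ 2. 10.  0.]
```

The catch is that no single finite $N$ captures the whole operator, which is exactly why genuinely new theory is needed.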
We can calculate eigenvalues $\lambda$, but since the rank-nullity theorem ($\dim V = \operatorname{rank}A + \dim \operatorname{ker}A $) doesn't help if $\dim V = \infty$, we're not only interested in cases where $(A-\lambda I)$ is not injective (eigenvalues), but also cases where $(A-\lambda I)$ is not surjective (so-called continuous spectrum). Also, calculating eigenvalues gets harder since we can't calculate a characteristic polynomial.
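A standard example of purely continuous spectrum is the multiplication operator $(Af)(x) = x\,f(x)$ on $[0,1]$: it has no eigenvalues, yet $A - \lambda I$ fails to be invertible for every $\lambda \in [0,1]$. A hedged numerical hint (my own discretization, not a proof) is that its finite truncations are diagonal matrices whose eigenvalues fill $[0,1]$ ever more densely:

```python
import numpy as np

# Discretize (Af)(x) = x * f(x) on [0, 1] by sampling on an n-point
# grid: A becomes a diagonal matrix.  The infinite-dimensional operator
# has no eigenvalues (x*f(x) = lam*f(x) forces f = 0 except at one point),
# but (A - lam*I) is not invertible for any lam in [0, 1].
n = 1000
xs = np.linspace(0.0, 1.0, n)
A = np.diag(xs)
eigs = np.sort(np.linalg.eigvalsh(A))

print(np.allclose(eigs, xs))     # the eigenvalues are the grid points
print(np.max(np.diff(eigs)))     # gaps shrink like 1/n as n grows
```

In the limit the "eigenvalues" merge into the whole interval $[0,1]$ — the continuous spectrum.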
There's a lot of room in infinite-dimensional spaces. We can have Cauchy sequences which don't converge since we picked the 'wrong' norm. This is why Banach (and Hilbert) spaces are interesting.
Not all linear operators are continuous anymore. In fact, some of the most interesting operators (e.g. differential operators) are not continuous.
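The classic witness (my own numerical rendering of it) is $f_n(x) = \sin(nx)$: its sup norm is $1$, but the sup norm of its derivative is $n$, so no bound $\|Df\| \le C\|f\|$ can hold for all $f$:

```python
import numpy as np

xs = np.linspace(0.0, 2 * np.pi, 100_001)

# f_n(x) = sin(n x) has sup norm 1, but its derivative n*cos(n x)
# has sup norm n.  The ratio ||Df|| / ||f|| grows without bound,
# so differentiation D cannot be continuous in the sup norm.
for n in (1, 10, 100):
    f = np.sin(n * xs)
    df = n * np.cos(n * xs)          # exact derivative of sin(n x)
    ratio = np.max(np.abs(df)) / np.max(np.abs(f))
    print(n, ratio)
```

This is why unbounded operators, defined only on a dense subspace, are a central theme of the subject.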
All of these things require a more rigorous analytical framework than linear algebra does and this is where the analysis part in functional analysis comes from.
Addendum: I just realized that I talked a lot about the 'what' and not the 'why'.
Essentially, these questions help answer hard questions about functions, for example when solving differential equations: the eigenvalues of a differential operator $D$ are exactly the points $\lambda$ where the differential equation $(D - \lambda)f = 0$ has a nonzero solution $f$.
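As a toy illustration of that last point (my own example, with my choice of discretization): take $D = d^2/dx^2$ on $[0, \pi]$ with $f(0) = f(\pi) = 0$. Then $(D - \lambda)f = 0$ has nonzero solutions $f(x) = \sin(nx)$ exactly for $\lambda = -n^2$, and a finite-difference approximation of $D$ recovers these eigenvalues:

```python
import numpy as np

# Finite-difference sketch of D = d^2/dx^2 on [0, pi] with
# f(0) = f(pi) = 0.  The true eigenvalues are lambda_n = -n^2
# (eigenfunctions sin(n x)); the matrix eigenvalues approximate them.
N = 500                       # number of interior grid points
h = np.pi / (N + 1)           # grid spacing
main = -2.0 * np.ones(N)
off = np.ones(N - 1)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

eigs = np.sort(np.linalg.eigvalsh(D2))[::-1]  # closest to 0 first
print(eigs[:3])   # approximately [-1, -4, -9]
```

So "find the eigenvalues of $D$" really is "find the $\lambda$ for which the boundary-value problem is solvable" — linear-algebra questions answering analysis questions.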