Preparing for my exam in functional analysis, I often have to prove that certain explicitly given operators are compact. I now have a decent stock of operators whose compactness I can prove, but if someone handed me a new one, I think I would still need more time than necessary to decide whether it is compact, let alone to prove or disprove it afterwards. My question is: is there a way to see in advance whether an operator is compact, or does one just start trying to prove that it is, and perhaps run into a contradiction if it isn't? This is how I have done it so far, and it is quite tedious, to be honest. I need to be fast in the exam.
So far we have only considered integral/summation operators on $L^2([0,2\pi])$ and $\mathscr{C}([0,1])$, some operators on $l^2(\mathbb{N})$, the identity operator on arbitrary normed spaces, and general finite rank operators. Note that I know how to prove compactness in all of those cases, but I just start 'proving' without knowing whether the operator is even compact.
I imagine that if I knew, for example, what the relatively compact subsets of $L^2$ and $\mathscr{C}(K)$ look like, I could at least work in the right direction.
A secondary issue: after doing quite a bit of spectral theory lately, I still don't really understand why one would apply it to concrete operators. Does it make some calculations easier? I love the theory, but what would, for example, the spectral theorem for compact operators ('for a compact operator $K$ on an infinite-dimensional Banach space, the spectrum is a countable set whose only possible accumulation point is $0$; apart from $0$, every point of the spectrum lies in the point spectrum') actually do for me in my exam?
Thank you in advance for all advice.
Historically, integral operators are the prototypical compact operators.
Compactness came up in the late 1800s, when differential operators were studied by recasting them as integral operators. While differential operators are very discontinuous, integral operators on bounded regions are exceptionally well behaved: they typically map uniformly bounded families of functions to functions with uniformly bounded derivatives, so the image of a uniformly bounded family is often an equicontinuous family of functions. The Arzelà-Ascoli theorem then allowed mathematicians of the time to extract a convergent subsequence from such a family in order to establish the existence of solutions of various differential equations. This general principle was used without a name for some time to conclude that the PDEs in question had eigenfunctions, with finite-dimensional eigenspaces for each nonzero eigenvalue. That led to proofs of the orthogonal eigenfunction expansions associated with PDEs on bounded regions with smooth boundaries, which is where all of this started in the first place.
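(For reference, the compactness criterion being invoked here is the following standard statement. Let $K$ be a compact metric space. A family $\mathcal{F}\subseteq\mathscr{C}(K)$ is relatively compact in the sup-norm if and only if

$$\sup_{f\in\mathcal{F}}\|f\|_\infty<\infty \quad\text{and}\quad \forall\varepsilon>0\ \exists\delta>0:\ d(x,y)<\delta\implies|f(x)-f(y)|<\varepsilon\ \text{for all } f\in\mathcal{F},$$

i.e. $\mathcal{F}$ is uniformly bounded and equicontinuous. Consequently, any operator that maps bounded sets to bounded, equicontinuous sets is compact. This is exactly the kind of description of relatively compact sets the question asks for; the $L^2$ analogue is the Fréchet-Kolmogorov theorem.)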
Because of Arzelà-Ascoli, the solvability theory for these differential equations ended up looking much as it does for matrices, essentially because of Fredholm theory: the nullity and deficiency of $L-\lambda I$ would typically turn out to be the same finite numbers, resembling how matrices work.
So much of what happened looked like magic, and there was a strong desire to understand what was making everything work. All of this machinery was beautifully abstracted in the first part of the 20th century by Riesz, who defined a general compact operator. Much to Riesz's credit, modern expositions of this subject are almost identical to his original presentation. The Fredholm index and more general topological connections took longer to evolve, but eventually led to Atiyah-Singer index theory, for example, and to other connections between topological indices and solvability indices.
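As a small numerical illustration of why integral operators are the prototypical compact operators (my own sketch, not part of the historical account; the kernel $\min(x,y)$ is just a convenient example): discretize the operator and look at its singular values. For a compact operator they must decay to $0$, and the decay is visible already at modest grid sizes, whereas for the identity every singular value equals $1$.

```python
# Sketch: discretize (Tf)(x) = \int_0^1 min(x, y) f(y) dy on a uniform
# midpoint grid and compute the singular values of the resulting matrix.
# The rapid decay of the singular values is the finite-dimensional
# fingerprint of compactness.
import numpy as np

n = 200
x = (np.arange(n) + 0.5) / n            # midpoints of [0, 1] subintervals
h = 1.0 / n                             # quadrature weight
K = np.minimum.outer(x, x) * h          # matrix approximating T
s = np.linalg.svd(K, compute_uv=False)  # singular values, descending

print(s[:3])    # leading singular values, order 0.4, 0.05, ...
print(s[50])    # far down the list: already tiny
```

For this kernel the exact eigenvalues are $\big((k-\tfrac12)\pi\big)^{-2}$, so the decay like $1/k^2$ seen in `s` matches the theory; a non-compact operator such as the identity shows no such decay.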