I have always had a hard time understanding the big picture of tensors and tensor fields. I have no problem understanding why low-type tensors and tensor fields such as \begin{align} \text{\textit{scalars} and \textit{smooth functions} --- type $(0,0)$,}\\ \text{\textit{vectors} and \textit{vector fields} --- type $(1,0)$,}\\ \text{\textit{covectors} and \textit{differential forms} --- type $(0,1)$,}\\ \text{\textit{linear transformations} and \textit{vector-field morphisms} --- type $(1,1)$,}\\ \text{\textit{inner products} and \textit{Riemannian metrics} --- type $(0,2)$} \end{align} are so useful and natural. Since these objects all share a lot of algebraic structure, I understand the reason to encapsulate them within the notion of a tensor in an algebraic context. But from an analytic-geometric standpoint they are very different objects, so I see no natural reason to lump them all together and expect the resulting object (the tensor) to be of so much use in differential geometry. Yet they are, and they are everywhere!
Q: Am I missing some reason of an analytic-geometric character that motivates this generalization? If not, how does an object whose generalization seems natural only algebraically end up playing such a central role in differential geometry? Or am I just underestimating the role that the algebraic structure plays in this context?
I think that what you're missing is the very thing that historically led to distinguishing tensors of two different types: their covariant and contravariant nature.
It might be that you're thinking of the covariant and contravariant nature of tensors as an exclusively algebraic property, but in fact I think it is deeply geometric.
Let's see if I can explain what I mean. When you apply a transformation to a space, such as a magnification, the structures you have on the geometric object can react in two different ways. They can transform accordingly, or covariantly, meaning that in a certain sense they are deeply linked to the geometric object you are transforming. Otherwise they can be independent of it, and are then said to transform contravariantly, because their immunity to the transformation makes their coordinates appear to transform the opposite way.
So let's do an example to illustrate what I mean. Say you have a point $P$ in 3D space. To treat this space algebraically you choose an origin and identify the space with the vector space $\mathbb{R}^3$ with canonical basis $\{e_i\}$. Now the point $P$ corresponds to a vector $v$ with coordinates, say, $(4,4,4)$. Then you decide to transform your vector space by enlarging it by a factor of $4$. All vectors of the space are enlarged, so they transform covariantly, while $P$, which now appears to have coordinates $(1,1,1)$, has transformed contravariantly. The deep reason for this different behaviour is that the vectors lived inside the vector space that was transformed, while the point was something independent of the geometric structure that was transformed.
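This example can be checked numerically. The sketch below (a minimal illustration, not part of the original answer) scales the basis by $4$ and shows that the coordinates of the fixed point $P$ transform with the inverse factor, i.e. contravariantly:

```python
import numpy as np

# Change-of-basis matrix: each new basis vector is the old one
# enlarged by a factor of 4, i.e. e'_i = 4 e_i.
A = 4 * np.eye(3)

# Coordinates of the fixed point P in the original basis {e_i}.
P = np.array([4.0, 4.0, 4.0])

# P itself does not move, so its new coordinates P_new satisfy
# A @ P_new = P: components transform with the INVERSE matrix.
P_new = np.linalg.solve(A, P)
print(P_new)  # [1. 1. 1.]
```

The basis vectors (covariant objects) pick up the factor $4$, while the coordinates of $P$ pick up the factor $1/4$, which is exactly the $(4,4,4) \mapsto (1,1,1)$ behaviour described above.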