I understand that for a set of vectors to be linearly independent, none of the vectors in the set should be a linear combination of some other vectors in that set. But why on earth should I care about it? How does it help me?
For example, imagine a simple situation: I have a system of inequalities, which defines a set of points (vectors) satisfying all of them. Why should I care whether this set of solutions to the system is linearly dependent or independent?
Why do mathematicians like to have a basis for a vector space? Because you can decompose any vector in the space and represent it as a finite linear combination of the basis vectors.
To write any vector as a linear combination of some given vectors, they define the concept of a spanning set. But this isn't enough, because we also want the representation of a vector with respect to a given set of vectors to be unique; that is where linear independence comes in. It guarantees the uniqueness of such a representation.
Let me explain it algebraically. Imagine that we're dealing with a finite-dimensional vector space, like $\mathbb{R}^n$, and that a vector can be represented in two ways using the same spanning set $\{v_1, v_2, \dots, v_n\}$. So, we can write:
$$\vec{v} = \sum_{i=1}^n \alpha_i \vec{v_i} = \sum_{i=1}^n \beta_i \vec{v_i}$$
Therefore:
$$\sum_{i=1}^n \alpha_i \vec{v_i} - \sum_{i=1}^n \beta_i \vec{v_i} = \sum_{i=1}^n (\alpha_i - \beta_i) \vec{v_i} = \vec{0}$$
Now, what does this equation tell you if the $\vec{v_i}$'s are linearly independent? Every coefficient $\alpha_i - \beta_i$ must be zero, so $\alpha_i = \beta_i$ for all $i$: the two representations were the same all along.
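You can see this uniqueness numerically. As a sketch (the basis below is just an assumed example in $\mathbb{R}^3$), put the basis vectors as columns of a matrix $B$; linear independence makes $B$ invertible, so the coordinate vector solving $B\alpha = v$ is unique:

```python
import numpy as np

# Columns of B form a linearly independent set in R^3 (an assumed example basis).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

v = np.array([2.0, 3.0, 4.0])

# Because the columns are independent, B is invertible, and the
# coordinate vector alpha satisfying B @ alpha = v is unique.
alpha = np.linalg.solve(B, v)
print(alpha)                       # the unique coefficients: [ 3. -1.  4.]
print(np.allclose(B @ alpha, v))   # True: the coefficients reconstruct v
```

If the columns were dependent, `np.linalg.solve` would raise a `LinAlgError` because no unique solution exists.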
As a good exercise, imagine that you have a spanning set of vectors for a finite dimensional vector space which is not linearly independent. Find an example that shows you will have infinitely many different representations for the same vector!
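If you want to check your own answer afterwards, here is one possible instance (an assumed example, so try the exercise first): in $\mathbb{R}^2$, the set $\{v_1, v_2, v_3\}$ with $v_3 = v_1 + v_2$ spans the plane but is dependent, and the same vector has a representation for every value of a free parameter $t$:

```python
import numpy as np

# A dependent spanning set for R^2 (assumed example): v3 = v1 + v2.
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
v3 = np.array([1.0, 1.0])

v = np.array([2.0, 3.0])

# For every t, (2 - t)*v1 + (3 - t)*v2 + t*v3 equals v,
# so v has infinitely many representations in this set.
for t in (0.0, 1.0, -5.0, 3.14):
    w = (2 - t) * v1 + (3 - t) * v2 + t * v3
    print(t, np.allclose(w, v))  # True for every t
```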