If $p>0$, then $ \lim_{n\to\infty}\frac{1}{n^p}=0\;.$
Rudin suggests in his *Principles of Mathematical Analysis* to take $$n> \left(\frac{1}{\epsilon}\right)^{\frac{1}{p}}$$ using the Archimedean property of the real number system.
This is under the assumption that we will compute the limit of the sequence based on the fact: if $0 \leq x_n \leq s_n$ for $n \geq N$, where $N$ is some fixed number, and if $s_n \rightarrow 0$, then $x_n \rightarrow 0$.
I don't really understand this proof, but I could try it a different way:
Letting $\ x_n = \frac{1}{n^p}$ and taking $\ s_n$ to be $\ \frac{1}{n}$, then we know that $\ x_n \leq s_n $ because $\ p>0 $. But since $ s_n = \frac{1}{n}$ goes to $0$ as $n$ approaches infinity, we know from $0 \leq x_n \leq s_n$ that $ x_n = \frac{1}{n^p}$ will also go to $0$ as $n$ approaches infinity.
Is this a valid approach?
What you mentioned is Theorem 3.20 in Rudin's book (page 57).
If one takes $n>(1/\varepsilon)^{1/p}$, then it follows that $n^p>1/\varepsilon$ (because $p>0$) and thus $\displaystyle\frac{1}{n^p}<\varepsilon$. In particular, choose a positive integer $N>(1/\varepsilon)^{1/p}$, whose existence is guaranteed by the Archimedean property; then for any integer $n>N$, one has $$ n^p>N^p>1/\varepsilon, $$ or equivalently, $$ \left|\frac{1}{n^p}-0\right|<\varepsilon\;. $$ By the definition of the limit, this shows that $\displaystyle\lim_{n\to\infty}\frac{1}{n^p}=0$.
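If it helps to see the bound in action, here is a small numerical sketch of the argument: for a given $\varepsilon$ and $p$ (the values $\varepsilon=0.01$, $p=1/2$ below are just an arbitrary illustration, and `smallest_N` is my own helper name), we pick the smallest integer $N>(1/\varepsilon)^{1/p}$ and check that every term past $N$ is within $\varepsilon$ of $0$:

```python
import math

def smallest_N(eps, p):
    # The Archimedean property guarantees a positive integer
    # N > (1/eps)^(1/p); take the smallest such N.
    return math.floor((1 / eps) ** (1 / p)) + 1

# Illustrative choice of parameters (not from the proof itself):
eps, p = 0.01, 0.5
N = smallest_N(eps, p)  # here (1/0.01)^2 = 10000, so N = 10001

# Every term beyond N satisfies |1/n^p - 0| < eps, as the proof claims.
assert all(1 / n**p < eps for n in range(N + 1, N + 1000))
```

Of course this only spot-checks finitely many terms; the inequality $n^p>N^p>1/\varepsilon$ above is what covers all $n>N$ at once.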
Rudin indeed says before the theorem that the squeeze theorem will be used:
But one should not take his words too literally ("The proofs will all be ..."). If you take a look at the (one-line) proof of (e), all he says is "Take $\alpha=0$ in (d)", which is not quite using the squeeze theorem per se.