Imagine an algorithm that takes a list as input and returns that list, without duplicates, as output. (Strictly, the input is a list or multiset rather than a set, since a set cannot contain duplicates.)
The algorithm is brute force: it goes through each element of the input and compares it, one by one, to every element already in the output. If the current element has no duplicate in the output, it is placed there.
Say we have $\{5, 7, 2, 6, 2, 8, 4, 5\}$ as input (the elements could be anything), and the algorithm has already scanned through $5$, $7$ and $2$.
It moves on to $6$:
$6$ is compared to the first element of the output set, $5$. They aren't the same, so the algorithm moves on.
$6$ is then compared to $7$, and $2$.
No other $6$ is found in the output, so $6$ is added there, giving $\{5, 7, 2, 6\}$.
Next, the algorithm moves on to the second $2$, comparing it to $5$, then $7$. But when this $2$ is compared to the next element, $2$, they match, so the element being checked is a duplicate and is dropped. The algorithm then moves on to the next element...
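The walkthrough above could be sketched in Python like this (a minimal sketch; the function name is my own):

```python
def dedupe_bruteforce(items):
    """Remove duplicates by comparing each input element
    to every element already placed in the output."""
    output = []
    for x in items:
        # Scan the whole output for a duplicate of x.
        is_duplicate = False
        for y in output:
            if x == y:
                is_duplicate = True
                break
        if not is_duplicate:
            output.append(x)
    return output
```

For the example input, `dedupe_bruteforce([5, 7, 2, 6, 2, 8, 4, 5])` returns `[5, 7, 2, 6, 8, 4]`.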
The main question is: What is the algorithm's time complexity?
That is of order $n^2$: in the worst case (no duplicates at all), the $i$-th element is compared against the $i - 1$ elements already in the output, for roughly $n^2/2$ comparisons in total.
I would sort the input first. Duplicates then end up adjacent, so they can be removed in a single linear pass, and the total time is of order $n \log n$, dominated by the sort.
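A sketch of the sort-first approach, assuming the elements are comparable (the function name is my own):

```python
def dedupe_sorted(items):
    """Sort (O(n log n)), then drop adjacent duplicates in one linear pass.
    Note: this changes the order of the elements."""
    output = []
    for x in sorted(items):
        # After sorting, any duplicate of x is right next to it,
        # so comparing against the last kept element is enough.
        if not output or output[-1] != x:
            output.append(x)
    return output
```

Here `dedupe_sorted([5, 7, 2, 6, 2, 8, 4, 5])` returns `[2, 4, 5, 6, 7, 8]`; unlike the brute-force version, the original order is not preserved.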
If the elements were bounded relative to their number (say, nonnegative integers no larger than some maximum $m$ with $m = O(n)$), it could be done in order $n$ by marking seen values in an array of size $m + 1$.
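A sketch of the bounded-values idea, assuming the elements are nonnegative integers no larger than a known `max_value` (both names are my own):

```python
def dedupe_bounded(items, max_value):
    """O(n + max_value): one boolean slot per possible value,
    so each membership check is a single array lookup."""
    seen = [False] * (max_value + 1)
    output = []
    for x in items:
        if not seen[x]:
            seen[x] = True
            output.append(x)
    return output
```

For the example, `dedupe_bounded([5, 7, 2, 6, 2, 8, 4, 5], 8)` returns `[5, 7, 2, 6, 8, 4]`. The extra memory is proportional to `max_value`, which is why the bound on the elements matters.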