I'm learning basic topology, and as I understand it, a good way to build intuition for open sets is that they determine which elements are near each other. However, in a non-Hausdorff space, it is possible for one point to be "near" another without the reverse being true. For example, if $X=\{a,b\}$ and the topology is $T=\{\emptyset, \{a\},X\}$, then "getting close to $b$" implies getting close to $a$, but not the reverse. You can get so close to $a$ that you are no longer close to $b$.
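To make that asymmetry concrete, here is a small Python sketch (my own illustration, not standard notation) that checks the topology axioms on this two-point example and verifies that every open set containing $b$ also contains $a$, while the converse fails:

```python
X = frozenset({"a", "b"})
T = [frozenset(), frozenset({"a"}), X]  # the topology from the question

# Check the topology axioms on this finite example: empty set and X are
# open, and (pairwise, hence finite/arbitrary here) intersections and
# unions of open sets are open.
assert frozenset() in T and X in T
for U in T:
    for V in T:
        assert U & V in T and U | V in T

def opens_containing(p):
    """All open sets containing the point p."""
    return [U for U in T if p in U]

# Every open set containing b also contains a ...
assert all("a" in U for U in opens_containing("b"))
# ... but {a} is an open set containing a and not b.
assert not all("b" in U for U in opens_containing("a"))
print("asymmetry verified")
```

So $b$ cannot be "separated" from $a$ by any open set, while $a$ can be separated from $b$.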
I know that these imprecise English words like "close" and "neighborhood" shouldn't be regarded as too strongly related to topological definitions, but I'm still wondering if it's possible to give an intuitive conception of what open sets "mean" or what they are, in a non-Hausdorff space.
[Edit: Giving examples of applications, particularly ones accessible to a person who has a beginner's acquaintance with topology, would be appreciated, since they can illustrate the meaning of the concept.]
I don't think looking for an intuition is the way to understand this. The point of abstract topological spaces is not that they are intuitive; it is the opposite: it is to understand certain aspects of the behavior of spaces even in spaces that are so strange that your intuition does not work.
Your intuition for topological spaces comes from (or should come from) $\Bbb R^n$, which is the canonical example. The axioms of topological spaces are intended to abstract certain very general properties of open sets in $\Bbb R^n$ so that we can handle those properties in more general settings. So for example we can consider the space of all distance-preserving origin-fixing linear transformations of the plane and observe that this space has two connected components, and suddenly that pulls in a whole pile of related results that we have proved about disconnected spaces in general, without our having to have a mental picture or an intuition about it, and without having to prove them all over again for this particular space.
Here is another example. You mentioned the Sierpiński space $S = \{\top,\bot\}$, whose topology is $\{S, \{\top\}, \emptyset\}$. There is an important notion in computability theory of “recursive enumerability”: a recursively enumerable set (“RE set”) is (roughly) one whose elements can be listed by some automatic process; perhaps you can imagine why this might be important.
It transpires that if $X$ is some space of values then a subset $Y\subset X$ is RE if and only if its characteristic function $\chi_Y:X\to S$ is continuous, where the characteristic function is $$\chi_Y(x) = \begin{cases} \top\quad\text{if $x\in Y$} \\ \bot\quad\text{otherwise} \end{cases}$$
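To see why the topology of $S$ is doing real work here (taking as given that the open sets of $X$ are exactly its RE subsets, which is the setup this correspondence assumes), note that the only nontrivial open set of $S$ is $\{\top\}$, and its preimage is $Y$ itself:

$$\chi_Y \text{ is continuous} \iff \chi_Y^{-1}(\{\top\}) = Y \text{ is open in } X.$$

So the asymmetry of the Sierpiński topology, with $\{\top\}$ open but $\{\bot\}$ not, exactly mirrors the asymmetry of semidecidability: a positive answer ($x\in Y$) can be confirmed in finite time, but a negative one cannot.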
If you formulate the property of recursive enumerability in this way, you instantly get a huge amount of information about RE sets, essentially for free, all imported from the huge body of knowledge that already exists about continuous functions. For example, a finite intersection of RE sets is also RE, but an infinite intersection need not be; this is exactly analogous to the topological theorem that finite intersections of open subsets of $X$ are open, but infinite intersections need not be. The purpose here isn't to try to develop intuition about how $\{\top\}$ could be open while $\{\bot\}$ is closed. There is no intuition for that; it's just a fact. Instead, the purpose is to apply existing theory to a new set of cases.
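The "finite intersections, yes; infinite intersections, no" behavior can be sketched in Python. This is my own illustration, not a standard library: a semidecider halts with `True` when $x$ is in the set, and otherwise never answers (the step bound below exists only so the demo terminates, and `None` stands for "still running"):

```python
from itertools import count

def semidecider(enumerate_elements):
    """Return a membership test that halts with True iff x appears
    in the enumeration. In real computability there is no step bound;
    the bound here is only so this sketch terminates."""
    def contains(x, max_steps=10_000):
        for _, y in zip(range(max_steps), enumerate_elements()):
            if y == x:
                return True
        return None  # "still running" -- we never learn that x is absent
    return contains

evens = semidecider(lambda: (2 * n for n in count()))
squares = semidecider(lambda: (n * n for n in count()))

def intersection(s1, s2):
    """A finite intersection of RE sets is RE: run both semideciders
    and answer True only once both have answered True."""
    def contains(x):
        a, b = s1(x), s2(x)
        return True if (a and b) else None
    return contains

even_squares = intersection(evens, squares)
print(even_squares(16))  # True: 16 is an even perfect square
print(even_squares(9))   # None: 9 is a square but not even, so no positive answer
```

Intersecting two RE sets just means running both searches, which is still a finite process for each positive answer; but intersecting infinitely many would require infinitely many searches to all succeed, which no single finite computation can confirm — the same reason an infinite intersection of open sets can fail to be open.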
Von Neumann is famously supposed to have said that “in mathematics, you don't understand things, you just get used to them”. I'm not sure exactly what he meant, but I think this might be a good example. You can understand the topology of $\Bbb R^2$. But I think the topology of the Sorgenfrey plane is something you get used to, not something you understand. (Even though it is Hausdorff!)