Why can't we define an axiom stating that two sets are equal iff they are elements of the same sets? What are the problems with that approach?
It doesn't seem like I quantify over anything other than sets. The definition might become trivial if we assume that every set determines a set having exactly one element, namely that set itself; in that case the definition would hold trivially and every set would come out different. But let us assume we are working with the usual sets (as in ZFC). Intuitively, even if I am given all of the pairwise-distinct sets, I can still describe a set that does not have that property. You might define a new set to show that my set differs from the others, but then it seems you have changed the set: by definition, I defined my set as belonging to certain sets, but not to the sets that would make it different from all the others.
My reasoning may not be clear, but if you can point me to some reading related to this, I will be grateful.
My idea, I guess, is to deny the classical axiom of extensionality and the pairing axiom, and to replace extensionality with a kind of converse of the classical axiom. (I am hoping that the classical axiom will not follow from its converse if I deny the pairing axiom and introduce undecidability concerning the elements of sets.)
My idea would probably be formalized something like this:
∀x∀y(∀z(x∈z↔y∈z)→x=y)
Thanks to Mees de Vries.
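As an aside, the formula above already follows from the pairing axiom alone: a singleton {x} is a set that separates x from any y ≠ x, so "membership in the same sets" collapses into identity. A minimal sketch in Lean 4 below illustrates this; the names `mem`, `S`, and `S_spec` are hypothetical stand-ins for a membership relation and a singleton operation (the instance of Pairing with both components equal), not any particular library's API.

```lean
-- Sketch: assuming a singleton operation S with the defining property
-- that the members of S x are exactly x (an instance of Pairing),
-- the proposed axiom ∀x∀y(∀z(x∈z ↔ y∈z) → x=y) becomes a theorem.
-- This is why denying Pairing matters for the project described above.
example {V : Type} (mem : V → V → Prop) (S : V → V)
    (S_spec : ∀ x u, mem u (S x) ↔ u = x)
    (x y : V) (h : ∀ z, mem x z ↔ mem y z) : x = y := by
  -- x belongs to its own singleton
  have hx : mem x (S x) := (S_spec x x).mpr rfl
  -- by hypothesis, y belongs to every set x belongs to
  have hy : mem y (S x) := (h (S x)).mp hx
  -- membership in S x forces y = x
  exact ((S_spec x y).mp hy).symm
```

Note that the proof uses only Pairing, not Extensionality, which supports the hope that dropping Pairing is the relevant move.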
You will want to look at the distinction between first-order logic and second-order logic. To the extent that a set is comparable to a monadic predicate, you are warranting an identity statement about individuals in relation to predicates that have not yet been substantiated by individuals (in the sense that the cumulative hierarchy reflects the outdated constructive view of set theory). Semantics is ultimately grounded upon individuals; so the well-formedness of such predicates cannot be "prior" to the identity of the individuals which comprise them.
For symbolic logicians adamant about the benefits of first-order logic, what you are doing is unthinkable.
There is a book by Thomas Morris entitled "Understanding Identity Statements". It has a decent account of how different notions of identity have played a role in the foundations of mathematics. He calls the ontological account, which is standard, the "objectual view". When he discusses Leibniz' identity of indiscernibles, he speaks of warranted identity statements. Because warranted identity statements are generally associated with provability, it is correct to refer to this as an "epistemic view" of identity.
Another thing to look at is the Wikipedia page on "topological indistinguishability". Although it is not the standard account of set theory, Zermelo's original paper interpreted the sign of equality in relation to singletons. We think about such matters in real analysis with Cantor's nested set theorem. But, from a purely logical view, topological indistinguishability cannot determine an individual. Because set theory provides for singletons, your system "works". But, if an uncountable infinity is described by a countable language, then it is reasonable to think that you cannot always determine a singleton by definition. This is a different application of epistemology that will return you to the arguments against Leibniz' identity of indiscernibles.
I personally do not give credence to the standard account of identity. This is how I know about these resources. You can expect this to hurt your career unless you happen to find sympathetic instructors.