Say I have an object A with two attributes, a and b. There is also an object B with the same attributes; any number of such objects may exist.
All attributes should be independent, meaning that a change of A.a does not change A.b, B.a, or B.b. This might be a universally accepted default, but I need to show the difference between attributes being independent and being dependent.
Note: I'm using the dot notation familiar from many programming languages.
So attributes are independent in the above sense, but on the other hand, a and b belong to a particular object. That is evidently some sort of relation.
How do you denote such an independence relation between attributes of an object in mathematics?
How would you denote the attributes if they had a relationship where a change of A.a changes/affects A.b in some undefined way (but does not affect B)?
There is a remotely related paper from the statistical theory of association that might be useful, as well as a wiki page about predicates and other topics:
Logic and mathematical variables as objects
After a small discussion:
The set of such objects/things ($x$) could be denoted:
$$ \mathbb{S} = \{x \mid a(x) \land b(x)\} $$
And now:
$$ A \in \mathbb{S} $$
and
$$ B \in \mathbb{S} $$
Or is there some other preferred way to do this?
Further, we could write $R(a, b)$ for "attribute $a$ mutually relates to attribute $b$". Now it would just be a matter of:
- denote object $x$
- denote that there is no mutual relation, $\neg R$
Could it be done with:
$$ \neg R(a(x), b(x)) $$
and then construct the set with the more elaborate:
$$ \mathbb{S} = \{x \mid \neg R(a(x), b(x))\} $$
?
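In the dot-notation sense the question uses, the intended independence can be illustrated in Python (a minimal sketch; the class and attribute names are hypothetical):

```python
# Hypothetical class with two attributes, a and b.
class Thing:
    def __init__(self, a, b):
        self.a = a
        self.b = b

A = Thing(a=1, b=2)
B = Thing(a=1, b=2)

A.a = 10           # change one attribute of one object
assert A.b == 2    # A.b is unchanged: a and b are independent
assert B.a == 1    # B is entirely unaffected: objects are independent
assert B.b == 2
```

A dependent design would, e.g., recompute `A.b` inside a setter for `A.a`; it is exactly this contrast that the notation above is meant to capture.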
As Malice Vidrine points out, "standard" forms of first-order logic (or most other variations) do not have any notion of "changing" or "time". There are logics that do, though, and that have an inherent notion of independence.
Dealing with mutation has clearly been a concern for the formal semantics of programming languages from the start. One of the earliest and most influential logical approaches is Hoare logic, which, however, is incapable of handling higher-order functions/objects (in the OOP sense) and has virtually no modularity. Research on it, together with another line of research on substructural logics such as linear logic and the logic of bunched implications, led to separation logic and Hoare type theory. A key idea here is the separating conjunction/implication. The separating conjunction allows one to split the heap between two predicates, guaranteeing that they are independent of each other.
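A toy semantic reading of the separating conjunction can be sketched in Python, assuming heaps are modeled as finite dicts from locations to values (a simplification: real separation logic works with proof rules over partial heaps, not exhaustive search):

```python
from itertools import combinations

def points_to(loc, val):
    """Predicate: the heap consists of exactly one cell, loc -> val."""
    return lambda h: h == {loc: val}

def sep_conj(P, Q, heap):
    """P * Q holds of heap iff heap splits into two disjoint parts,
    one satisfying P and the other satisfying Q."""
    locs = list(heap)
    for r in range(len(locs) + 1):
        for part in combinations(locs, r):
            h1 = {l: heap[l] for l in part}
            h2 = {l: v for l, v in heap.items() if l not in part}
            if P(h1) and Q(h2):
                return True
    return False

heap = {1: 5, 2: 7}
# Two cells can be claimed by two *disjoint* predicates...
assert sep_conj(points_to(1, 5), points_to(2, 7), heap)
# ...but the same cell cannot be claimed twice: disjointness fails.
assert not sep_conj(points_to(1, 5), points_to(1, 5), heap)
```

The second assertion is the independence guarantee in miniature: the two predicates cannot alias the same cell.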
Another direction that readily comes to mind is temporal logic. If it is the case that only one "change" can happen per time step, though that change could indirectly affect multiple "objects", you could add constraints like $$A(o_1,f_1,x)\land A(o_2,f_2,y)\land I(o_1,f_1,o_2,f_2)\to \mathsf{N}(A(o_1,f_1,x)\lor A(o_2,f_2,y))$$ where $A(o,f,x)$ means "field $f$ of object $o$ has value $x$" and $I(o_1,f_1,o_2,f_2)$ means "field $f_1$ of object $o_1$ is independent of field $f_2$ of object $o_2$". The above statement says that if two independent fields have particular values at one time step, then at least one of them has the same value at the next time step. Of course, you need to axiomatize $A$ and $I$ appropriately. Something like dynamic logic could be used if more flexibility is required. Note that this won't help with less direct connections. If the "objects" are nodes of a linked list and $o_1$'s $\mathsf{next}$ field points at $o_2$, then mutating $o_2$'s $\mathsf{next}$ doesn't change $o_1$'s $\mathsf{next}$ field, but it does change the list $o_1$ represents. This is the issue that separation logic addresses. If you don't have this pointer-like behavior, then things are much more tractable.
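For a finite trace, that constraint can be checked mechanically. A minimal Python sketch, assuming states are dicts from (object, field) pairs to values and using a hypothetical independence relation $I$ that declares fields of distinct objects independent:

```python
def independent(o1, f1, o2, f2):
    # Hypothetical instance of I: fields of distinct objects are independent.
    return o1 != o2

def satisfies_constraint(trace):
    """For every pair of independent fields with given values at one step,
    at least one of them keeps its value at the next step."""
    for s, s_next in zip(trace, trace[1:]):
        for (o1, f1), x in s.items():
            for (o2, f2), y in s.items():
                if independent(o1, f1, o2, f2):
                    if s_next.get((o1, f1)) != x and s_next.get((o2, f2)) != y:
                        return False
    return True

s0 = {("A", "a"): 1, ("A", "b"): 2, ("B", "a"): 1}
s1 = {("A", "a"): 9, ("A", "b"): 2, ("B", "a"): 1}  # one change: allowed
s2 = {("A", "a"): 9, ("A", "b"): 5, ("B", "a"): 7}  # two independent changes
assert satisfies_constraint([s0, s1])
assert not satisfies_constraint([s0, s1, s2])
```

The failing trace changes $A.b$ and $B.a$ in the same step, violating the "at most one direct change per step" reading of the formula.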
Returning to a "standard" logic, the first thing to note is that you can encode one of the above logics into first-order logic. Usually, you'll lose a bit doing this (which is why these logics exist), but perhaps not so much that it is a problem. In particular, there will be things that are true in the original logics that can't be proven in the encoding. In many cases, this can be mitigated by adding additional axioms that would be "automatic" in the original logics. Separation predicates are an example of this for separation logic.
Now you can imagine an approach like having a sort of states and a transition relation $s \leadsto s'$ stating that "state $s$ transitions to state $s'$". From there you can consider something like $A(s,o,f,x)$ for "field $f$ of object $o$ has value $x$ in state $s$". You could then encode the earlier temporal logic formula as $$A(s,o_1,f_1,x)\land A(s,o_2,f_2,y)\land I(o_1,f_1,o_2,f_2)\land s\leadsto s'\to A(s',o_1,f_1,x)\lor A(s',o_2,f_2,y)$$ Indeed, if your states were the natural numbers and $s \leadsto s'$ was $s' = s+1$, you'd recover linear-time temporal logic. The problem is that first-order logic can't categorically axiomatize the natural numbers. For separation logic, the problem is that first-order logic can't categorically axiomatize finite maps. This is to say that first-order logic can't rule out unintended models, e.g. infinite maps.
One solution to this is to stop working syntactically/logically and start working semantically, that is, in the intended model. The trade-offs of doing this are similar to the trade-offs of programming against an interface versus programming against an implementation. You get more flexibility in what you can do and you can make more assumptions, but you lose generality of the result and you need an accurate model. You can look at the semantics of things like separation logic or temporal logic for inspiration, but it's basically the mathematical equivalent of a simple matter of programming at this point. You just start defining and proving things.
For example, working within a set theory, you might do something like say you have a set $O$ of objects, a set $F$ of fields, and a set $V$ of values. You define a set of states, $S$, as, say, the set of functions $O\times F \to V$ (or maybe partial functions if not every object has every field or fields don't always have a value). If you have a notion of "change", you can formalize it as, say, a function $c : S\to S$ and then prove that $c$ changes the value of its argument at no more than one $(o,f)$ pair, e.g. $$c(\sigma)(o,f)\neq \sigma(o,f) \land c(\sigma)(o',f')\neq \sigma(o',f') \to o=o' \land f=f'$$ Alternatively, you can ask to be provided with such a $c$ and require that it satisfy a law like this.
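This set-theoretic model translates directly into code. A small Python sketch, assuming finite $O$, $F$, $V$ and representing a state $\sigma$ as a dict from (object, field) pairs to values, with the law checked over a given finite collection of states (all names are hypothetical):

```python
def changes_at_most_one(c, states):
    """Check the law: if c changes the values at (o, f) and (o', f'),
    then (o, f) == (o', f') -- i.e. c mutates at most one entry."""
    for sigma in states:
        diffs = [(o, f) for (o, f) in sigma if c(sigma)[(o, f)] != sigma[(o, f)]]
        if len(diffs) > 1:
            return False
    return True

def set_Aa(sigma):
    """A change that writes 0 into field "a" of object "A"."""
    new = dict(sigma)
    new[("A", "a")] = 0
    return new

def set_both(sigma):
    """A change that mutates two fields at once -- violates the law."""
    new = dict(sigma)
    new[("A", "a")] = 0
    new[("A", "b")] = 0
    return new

states = [{("A", "a"): 1, ("A", "b"): 2, ("B", "a"): 3}]
assert changes_at_most_one(set_Aa, states)
assert not changes_at_most_one(set_both, states)
```

Over an infinite $S$ this check becomes a proof obligation rather than a loop, which is exactly the "defining and proving things" step described above.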