As I am new to Forcing, I would appreciate any help on whether the following is anywhere near being correct:
Given a structure M, Enderton (2001, "A Mathematical Introduction to Logic", p. 84) defines truth in the structure M with a valuation S; for an expression A(x) and the truth relation $\models$, the clause for negation is:
M $\models$ $\neg$ A[S] $\;$ iff $\;$ M $\nvDash$ A[S]
In Cohen's Forcing ("Set Theory and the Continuum Hypothesis"), a very similar truth definition is given for the forcing relation $\Vdash$ and conditions P, Q. Conditions play a role 'similar' to a valuation, but are built up in stages: each contains a finite amount of information about which possible elements are or are not in a set G. The forcing truth definition for negation, for any expression A in a structure M, is:
(M, P) $\Vdash$ $\neg$ A[P] $\;$ iff $\;$ for all Q $\supseteq$ P, $\;$ (M, Q) $\nVdash$ A[Q]
Both these definitions are of the logical form (abstracting away the relations $\models$ and $\Vdash$):

$\neg$ A $\iff$ A can't be shown to be true
This means that (a) at least one of $\neg$ A and A will be true, (b) $\neg$ A and A will not both be true, and (c) there will be no case where neither is true.
Case (c) means in particular that if the expression A[P] can't be shown to be true, perhaps due to some limitation in its expressive power, then the truth definition defaults to assuming its negation is true (as classical logic is assumed), even if that negation also cannot be derived to be true 'in reality' because of the same limited expressive power. During the recursive evaluation of these truth relations, the truth of A is evaluated first, and the truth of $\neg$ A is then assigned based on the result. So in cases where the expression A has limited expressive power, the truth evaluation defaults to assuming its negation is true.
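To make the evaluation order concrete, here is a minimal computational sketch (a toy encoding of my own, not Cohen's formalism): conditions are finite partial functions from indices to in/out verdicts, formulas are just atoms "n $\in$ G" and negations, and for this atomic fragment, quantifying over the extensions defined on the mentioned indices stands in for "all Q $\supseteq$ P":

```python
from itertools import product

# Conditions: finite partial functions n -> bool ("n is in / not in G"),
# represented as dicts.  Formulas: ('in', n) for the atom "n ∈ G",
# and ('not', phi) for negation.

def indices(phi):
    """The indices of G that phi mentions."""
    return {phi[1]} if phi[0] == 'in' else indices(phi[1])

def extensions(p, idx):
    """All conditions q ⊇ p whose domain covers idx.  For this atomic
    fragment, forcing depends only on p restricted to idx, so these
    finitely many extensions stand in for 'all Q ⊇ P'."""
    free = [n for n in idx if n not in p]
    for bits in product([False, True], repeat=len(free)):
        yield {**p, **dict(zip(free, bits))}

def forces(p, phi):
    if phi[0] == 'in':   # p ⊩ "n ∈ G"  iff  p already puts n into G
        return p.get(phi[1]) is True
    else:                # p ⊩ ¬ψ  iff  no extension of p forces ψ
        return all(not forces(q, phi[1])
                   for q in extensions(p, indices(phi)))

atom = ('in', 0)
print(forces({}, atom))                   # False: {} leaves 0 undecided
print(forces({}, ('not', atom)))          # False: some extension puts 0 in G
print(forces({0: False}, ('not', atom)))  # True: no extension can add 0
```

As the calls show, {0: False} forces the negation, while the empty condition forces neither the atom nor its negation: the recursion only settles $\neg$ A once no extension can establish A.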
In addition, presumably the similar negation clause in both the Tarski and forcing definitions ensures that a consistent truth evaluation of the set of all expressions is produced.
In Cohen Forcing the conditions P and Q are effectively finite lists of elements that are or are not in an object G, which is otherwise a 'blank' set (called generic) with no properties other than those determined by conditions such as P.
Since any P is finite, it would be impossible for it to determine whether the 'blank' object G is finite or infinite, or whether it is the same set as any other set in the model M. Even if G had some properties, P would not be able to determine them unless they were included within the definition of the forcing relation. Because of the order in which the recursion is undertaken, if A := "G is finite" then A[P] asks "Is G finite, using the information in P?" No finite amount of information would be enough to determine this 'positive statement', so the truth definition defaults to its negation, "G is not finite". 'In reality' the negation "G is not finite" also could not be determined from the finite amount of information in P, but the order of the truth recursion forces "G is not finite" to be the true expression, despite neither being determinable 'in reality'.
So forcing looks to be all about a subtle use of 'lack of expressive power' in the conditions P, together with a classical-logic truth definition that ensures that for every expression one of A / $\neg$ A is produced, even when 'in reality' in some cases neither A nor $\neg$ A could be determined to be true. So expressions could be true in Forcing but have no meaning 'in reality'.
What is this "reality" of which you speak?
There's a lot of vagueness here, but I think the right way to ask the question is whether these recursive truth definitions are somehow biased towards negation.
My jumping-off point is that there's no kill like overkill: you can't be syntactically biased if you don't have syntax. Once we understand the syntactic approach as an "implementation" of a semantic one, things get much nicer and we see fundamental symmetry replace apparent bias. In the case of $\Vdash$, we have two plausible semantics, each of which is "unbiased," and the recursive definition of forcing corresponds to one but not the other (so that we see an apparent bias if we conflate the two approaches). In the case of $\models$ we get something even better: not only does the apparent bias go away, it actually guarantees a fundamental symmetry! How cool is that?$^1$
$^1$Pretty cool.
Generic comments
Let's look at forcing first, since in my opinion it's intuitively more difficult to critique the more fundamental notions. I claim that while the definition of forcing may appear "biased towards negation," it's actually not: it's a concrete way of describing a fully unbiased semantics.
Roughly speaking, a given forcing notion $\mathbb{P}$ gives rise to a "big semantics" and a "small semantics" - the former consisting of all maximal filters through $\mathbb{P}$, and the latter of all sufficiently generic filters through $\mathbb{P}$.
Each semantics gives rise to an "all-at-once" notion of forcing (in addition to the usual recursive definition of forcing):
$p\Vdash_{all}\varphi$ iff every maximal filter through $\mathbb{P}$ extending $p$ has property $\varphi$.
$p\Vdash_{gen}\varphi$ iff every sufficiently generic maximal filter through $\mathbb{P}$ extending $p$ has property $\varphi$.
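A toy illustration of the gap between the two (all encodings my own): take Cohen-style conditions to be finite binary strings ordered by end-extension, so filters are branches, and let $\varphi$ be the property "a 1 appears somewhere in the branch." The conditions already containing a 1 form a dense set, so the empty condition $\Vdash_{gen}\varphi$; but the all-zeros branch is a maximal filter avoiding that set, so the empty condition $\nVdash_{all}\varphi$:

```python
from itertools import product

# Conditions: finite binary strings (a finite slice of 2^{<ω}).
conds = [''.join(b) for n in range(6) for b in product('01', repeat=n)]

D = [p for p in conds if '1' in p]   # conditions already witnessing φ

# D is dense (checkable here only up to the length cutoff): every
# condition has a proper extension in D, so every sufficiently generic
# branch meets D - the empty condition ⊩_gen φ.
assert all(any(q.startswith(p) and len(q) > len(p) for q in D)
           for p in conds if len(p) < 5)

# But the all-zeros branch is a maximal filter that never meets D,
# so the empty condition does NOT ⊩_all φ.
zeros = ['0' * n for n in range(6)]
assert not any(p in D for p in zeros)
print("'' forces φ generically but not over all maximal branches")
```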
Note first that each of these is completely unbiased - indeed, each is completely non-syntactic (what sort of property is $\varphi$? Anything!).
Now here's the key point: in the appropriate sense, the "recursive" notion $\Vdash_{rec}$ of forcing is exactly $\Vdash_{gen}$! This is one of the forcing theorems. So an apparently-biased definition ($\Vdash_{rec}$) is actually the same as a clearly-unbiased definition ($\Vdash_{gen}$).
So what's the other relation $\Vdash_{all}$ doing up there? Well, it's clarifying a separate confusion: when you write that expressions could be true in forcing but have no meaning 'in reality', you're observing that $\Vdash_{rec}=\Vdash_{gen}$ is not sound with respect to the "big" semantics (for which $\Vdash_{all}$ is sound). But that's not the only semantics in town: we also have the "small" semantics, which is "unbiased" as well and for which $\Vdash_{rec}=\Vdash_{gen}$ is sound. The key point of forcing is the observation that $\Vdash_{gen}$ is actually better in many situations, and in particular for building models of set theory.
Something satisfying
Now on to the more fundamental side of things: satisfaction in structures. Can we, similarly to forcing, "unbias" Tarski's truth definition? I claim that we can, and moreover when we look carefully the apparent syntactic bias is replaced by an actual guarantee of symmetry!
Let's set up our semantics. We have a collection $\mathfrak{S}$ of things called "structures," and sentences carve out subcollections. The usual approach in abstract model theory (unfortunately, I have to introduce my own terminology going forward) is this: an assertion logic consists of a class $S$ of things called "sentences" together with a relation $\leadsto$ between structures and sentences, with $\mathcal{M}\leadsto\varphi$ read as "$\mathcal{M}$ satisfies $\varphi$."
First-order logic gives an example of an assertion logic, but there are plenty of others, including interesting ones like infinitary and second-order logics and very silly ones which don't have names because nobody likes them.
In this approach, the Tarskian definition of negation amounts to a kind of symmetry in that it tells us that for every $A\subseteq\mathfrak{S}$, the following are equivalent:
There is some $\varphi\in S$ such that for each $\mathcal{M}\in\mathfrak{S}$ we have $\mathcal{M}\leadsto\varphi$ iff $\mathcal{M}\in A$.
There is some $\varphi\in S$ such that for each $\mathcal{M}\in\mathfrak{S}$ we have $\mathcal{M}\leadsto\varphi$ iff $\mathcal{M}\not\in A$.
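This equivalence can be sanity-checked on a tiny example (a toy of my own devising): take $\mathfrak{S}$ to be the four one-element structures for unary predicates P, Q, and close the truth sets of $\exists x(P(x))$ and $\exists x(Q(x))$ under intersection, union, and complement, mirroring the Tarskian clauses for $\wedge$, $\vee$, $\neg$. Every definable class then has a definable complement:

```python
from itertools import product

# 𝔖: four one-element structures, coded by which of P, Q hold.
S = frozenset(product([False, True], repeat=2))
A = frozenset(m for m in S if m[0])   # truth set of ∃x P(x)
B = frozenset(m for m in S if m[1])   # truth set of ∃x Q(x)

# Close the atoms under ∧ (∩), ∨ (∪), and Tarskian ¬ (complement).
defined = {A, B}
changed = True
while changed:
    changed = False
    for X in list(defined):
        for Y in list(defined):
            for Z in (X & Y, X | Y, S - X):
                if Z not in defined:
                    defined.add(Z)
                    changed = True

# The promised symmetry: a class is definable iff its complement is.
assert all(S - X in defined for X in defined)
print(len(defined))   # 16: the full Boolean algebra of subsets of 𝔖
```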
But this may feel contrived. I think there may be a more satisfying approach - a "negation-blind" notion of logics (which I've actually never seen before but I'm sure is well-known): a partition logic consists of a collection of unordered partitions $\mathfrak{S}=X\sqcup Y$ of the class of structures into two pieces; these are the "expressible" partitions, each thought of as a question with exactly two possible answers.
(Note that a partition logic has bivalence built in: every structure lies on exactly one side of each "question.") Negation-blindness is due to the fact that we're looking at unordered partitions: we don't distinguish $\mathfrak{S}=X\sqcup Y$ from $\mathfrak{S}=Y\sqcup X$.
Every assertion logic induces a partition logic in an obvious way. This passage is only one-way: two inequivalent assertion logics can yield the same partition logic (it's a good exercise to formalize and prove this claim). That said, it's easy to (formalize and) check that first-order logic is "maximal" with respect to the partition logic it generates, as a consequence of Tarski's definition of negation. So already we have a weak kind of symmetry here.
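The one-way nature of this passage is easy to see in miniature (toy encoding of my own): two assertion logics that assert opposite sentences are distinct as assertion logics, yet induce the same partition logic, since a partition forgets which side was asserted:

```python
S = frozenset(range(4))     # a toy class of four "structures"
phi = frozenset({0, 1})     # truth set of some sentence

# Two assertion logics: one can assert the sentence, the other only
# its complement.  They are inequivalent as assertion logics...
L1 = {phi}
L2 = {S - phi}
assert L1 != L2

# ...but they induce the same partition logic, which only remembers
# the unordered split 𝔖 = X ⊔ Y.
induced = lambda L: {frozenset({X, S - X}) for X in L}
assert induced(L1) == induced(L2)
print("same partition logic")
```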
But even more can be said. Despite the loss of information when we pass from assertions to partitions, partition logics are still content-ful: properties of logical systems can be approached from this standpoint. For example, closure under conjunction and disjunction would be expressed in this context by saying that whenever $\mathfrak{S}=A_0\sqcup A_1$ and $\mathfrak{S}=B_0\sqcup B_1$ are expressible, so is each of the partitions $\mathfrak{S}=(A_i\cap B_j)\sqcup(A_{1-i}\cup B_{1-j})$ for $i,j\in\{0,1\}$ $\quad(**)$. (But see below ...) The (downward) Lowenheim-Skolem property (for single sentences) and the compactness property can likewise be phrased purely in terms of which partitions are expressible.
Now we come to the symmetry issue. Consider first-order logic without negation. This possesses conjunctions and disjunctions, trivially. However, consider the principle $(**)$ above. That fails for this logic: take $A_0$ to correspond to $\exists x(P(x))$ and $B_1$ to correspond to $\exists x(Q(x))$. Then the partition $$\mathfrak{S}=(A_0\cap B_0)\sqcup (A_1\cup B_1)$$ is not expressible here, since neither the side $\exists x(P(x))\wedge\forall y(\neg Q(y))$ nor its complement $\forall x(\neg P(x))\vee\exists y(Q(y))$ can be expressed in a negation-free way. Instead, negation-free first-order logic has only a much weaker version of $(**)$: each expressible partition comes with a distinguished side (the truth set of some sentence), and whenever $A$ and $B$ are distinguished sides, the partitions whose distinguished sides are $A\cap B$ and $A\cup B$ are also expressible $\quad(***)$.
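This failure can be checked mechanically on the smallest possible example (my own toy encoding): with one-element structures, $\exists x(P(x))$ just says "P holds," and the negation-free closure of the two atomic truth sets under $\cap$ and $\cup$ never produces either side of the combined partition:

```python
from itertools import product

# One-element structures with unary predicates P, Q: four structures,
# coded as pairs of booleans (P holds, Q holds).
S = list(product([False, True], repeat=2))
A0 = frozenset(m for m in S if m[0])   # side of the P-partition
B1 = frozenset(m for m in S if m[1])   # side of the Q-partition

# Negation-free closure: the atoms under ∩ (∧) and ∪ (∨) only.
defined = {A0, B1}
changed = True
while changed:
    changed = False
    for X in list(defined):
        for Y in list(defined):
            for Z in (X & Y, X | Y):
                if Z not in defined:
                    defined.add(Z)
                    changed = True

# The combined partition: "P holds and Q fails" vs. everything else.
side = frozenset(m for m in S if m[0] and not m[1])
rest = frozenset(S) - side
# Neither side is negation-free definable, so the partition is lost.
assert side not in defined and rest not in defined
print(len(defined))   # 4: just A0, B1, A0∩B1, A0∪B1
```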
And $(***)$ is asymmetric at a semantic level, in that an expressible partition will in general have a "distinguished side." The Tarskian approach to negation prevents exactly this, since it says that whenever one side of a partition corresponds to a sentence, so does the other side (by virtue of passing from $\varphi$ to $\neg\varphi$).
Basically, Tarski's approach to negation tells us that we can always take a configuration occurring in our partition logic, "flip sides" of the partitions however we want, and still get the same configuration. And here we have a kind of symmetry which is really equivalent to the one coming from assertion logics, but may feel a bit more fundamental since it shows how even when we naively symmetrize everything, there are still asymmetries which can emerge, and that the Tarskian approach to negation is exactly what prevents them.