I was going through the completeness theorem for propositional logic in these notes (Lemma 2.2.12 on page 23). Before the proof, they give a crucial definition:
$$ t_{\Sigma}(a) = \left\{ \begin{array}{ll} 1 & \mbox{if } \Sigma \vdash a \\ 0 & \mbox{if } \Sigma \not \vdash a \end{array} \right. $$
which they want to extend to all propositions. I noticed that there seems to be something extremely fishy about this definition, because one can conclude:
$$ \Sigma \not \vdash a \implies \Sigma \vdash \neg a$$
without actually providing a proof, simply by using the definition, which seemed like cheating to me (plus this shouldn't ALWAYS be true). For example:
Suppose $\Sigma \not \vdash a$, so that $t_{\Sigma}(a) = 0$. Then we can conclude $t_{\Sigma}(\neg a) = 1$, because $ t_{\Sigma}(\neg a) = 1 - t_{\Sigma}(a) = 1 - 0 = 1$. But if you look at the definition of $t_{\Sigma}(a)$, that implies $\Sigma \vdash \neg a$ (because that truth function only returns 1 when things are provable).
This doesn't seem legit, because we concluded something about provability through an artificial definition rather than by actually proving it. What is going wrong? Does it mean this definition only works if we already know $\Sigma$ is consistent and thus complete?
I guess I am trying to understand what the deal is with non-provability in propositional logic, and propositional logic only. I always thought that statements in propositional logic were always decidable, because we could brute-force over all truth assignments to the atoms... though I feel I'm not connecting the dots somewhere...
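To be concrete about the brute-forcing I have in mind, here is a quick Python sketch (the tuple encoding of formulas - `("atom", name)`, `("not", p)`, `("or", p, q)` - is made up for illustration, not from the notes) that decides semantic entailment $\Sigma \models p$ for a finite $\Sigma$ by enumerating every truth assignment to the atoms involved; by soundness and completeness, that also decides $\Sigma \vdash p$:

```python
from itertools import product

# Formulas as nested tuples: ("atom", name), ("not", p), ("or", p, q).
# This encoding is made up for illustration; it is not from the notes.

def evaluate(p, assignment):
    """Truth value of formula p under an assignment of atoms to {0, 1}."""
    if p[0] == "atom":
        return assignment[p[1]]
    if p[0] == "not":
        return 1 - evaluate(p[1], assignment)
    if p[0] == "or":
        return max(evaluate(p[1], assignment), evaluate(p[2], assignment))
    raise ValueError(f"unrecognized formula: {p!r}")

def atoms_of(p):
    """The set of atom names occurring in p."""
    if p[0] == "atom":
        return {p[1]}
    return set().union(*(atoms_of(q) for q in p[1:]))

def entails(sigma, p):
    """Brute-force Sigma |= p: every assignment satisfying all of
    (finite) sigma must also satisfy p."""
    names = sorted(set().union(atoms_of(p), *(atoms_of(s) for s in sigma)))
    for values in product([0, 1], repeat=len(names)):
        t = dict(zip(names, values))
        if all(evaluate(s, t) == 1 for s in sigma) and evaluate(p, t) == 0:
            return False
    return True

a = ("atom", "a")
print(entails([a], a))                     # True
print(entails([], ("or", a, ("not", a))))  # True: excluded middle
print(entails([], a))                      # False: the empty theory is silent
```

Note that the last line already hints at my problem: the empty theory proves neither $a$ nor $\neg a$.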
Ok, I am going to copy-paste the exact definition of $t_{\Sigma}(a)$ from the book, because that's what might be confusing me:
$$ t_{\Sigma}(a) = \left\{ \begin{array}{ll} 1 & \mbox{if } \Sigma \vdash a \\ 0 & \mbox{otherwise} \end{array} \right. $$
I'm not sure exactly what is confusing me, but I suspect it is this "otherwise" clause and what happens when $\Sigma \not \vdash a$.

EDIT: Per our discussion, I think it's worthwhile to record the following key statements and the conditions under which they are true (note that everything here is valid for both propositional logic and predicate logic; as far as completeness/soundness issues are concerned, they behave identically):
$(1)$ "For all $p$, we have $\Sigma\vdash p\iff \Sigma\models p$."
$(1')$ "For all $p$, we have $\Sigma\not\vdash p\iff \Sigma\not\models p$."
Both hold for every theory $\Sigma$ whatsoever: the left-to-right direction of $(1)$ is the soundness theorem, the right-to-left direction is the completeness theorem, and $(1')$ is just the contrapositive of $(1)$.
$(2)$ "For all $p$, we have $\Sigma\not\vdash p\iff \Sigma\vdash\neg p$."
This now requires a hypothesis about the theory in question: the left-to-right direction holds iff $\Sigma$ is complete, and the right-to-left direction holds iff $\Sigma$ is consistent. Meanwhile, the completeness theorem - that is, the completeness of the logical system as opposed to the specific theory - doesn't enter into the conversation since the above doesn't mix "$\vdash$" and "$\models$."
$(2')$ "For all $p$, we have $\Sigma\not\vdash p\iff \Sigma\models\neg p$."
By the completeness theorem - which applies to every theory $\Sigma$ - this is equivalent to $(2)$; the left-to-right direction holds iff $\Sigma$ is complete and the right-to-left direction holds iff $\Sigma$ is consistent.
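As a sanity check on the left-to-right direction failing for incomplete theories, take $\Sigma=\{a\}$ over the atoms $a,b$: this $\Sigma$ is consistent but incomplete. A brute-force check (Python sketch, with the two-atom truth table baked in for this one example) shows $\Sigma\not\models b$ and $\Sigma\not\models\neg b$:

```python
from itertools import product

# Sigma = {a} over the atoms a, b: a consistent but incomplete theory.
# Brute-force the four truth assignments and keep the models of Sigma.
models = [(a, b) for a, b in product([0, 1], repeat=2) if a == 1]

# Sigma |= p iff every model of Sigma satisfies p:
sigma_entails_b = all(b == 1 for _, b in models)      # does Sigma |= b ?
sigma_entails_not_b = all(b == 0 for _, b in models)  # does Sigma |= not-b ?

print(sigma_entails_b, sigma_entails_not_b)  # False False
```

Since $\Sigma\not\models b$ and $\Sigma\not\models\neg b$, the completeness theorem gives $\Sigma\not\vdash b$ and $\Sigma\not\vdash\neg b$: so for this consistent but incomplete $\Sigma$ the left-to-right direction of $(2)$ fails, exactly as claimed.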
I think the above more-or-less demonstrates the various ways that the issues at hand - $\vdash,\not\vdash,\models,\not\models,\neg$, completeness/soundness of the logic, and completeness/consistency of a particular theory - interact.
You write: "if you look at the definition of $t_{\Sigma}(a)$, that implies $\Sigma \vdash \neg a$ (because that truth function only returns 1 when things are provable)".
The parenthetical is your mistake. The definition of $t_\Sigma$ is made only for propositional atoms: we can't conclude $$(*)\quad\mbox{For every sentence } p, \quad t_\Sigma(p)=1\iff \Sigma\vdash p$$ immediately from the definition of $t_\Sigma$; all the definition gives us directly is $$(**)\quad\mbox{For every } \textit{propositional atom}\ p, \quad t_\Sigma(p)=1\iff \Sigma\vdash p.$$ (Look back at the definition of $t_\Sigma$ carefully!)
That is: the definition by itself says nothing about the value of $t_\Sigma$ on non-atomic sentences, so $t_\Sigma(\neg a)=1$ does not automatically give $\Sigma\vdash\neg a$.
We can, however, show that the full claim $(*)$ above holds if (and in fact only if) $\Sigma$ is complete. And this is exactly the content of Lemma 2.2.12: that in case $\Sigma$ is complete we can in fact say that $t_\Sigma(p)=1\iff\Sigma\vdash p$ for all sentences $p$, not just the propositional atoms.
I think the confusion here stems from conflating two closely related objects, both of which the notes call "$t_\Sigma$" - see the note under the definition on page $15$:
1. The truth assignment for propositional atoms, $t_\Sigma^{atom}$. This is a function from $A$ to $\{0,1\}$, defined by setting $t_\Sigma^{atom}(a)=1\iff\Sigma\vdash a$. The function $t_\Sigma^{atom}$ is only defined on atoms, and so in particular it doesn't make sense to ask whether $t_\Sigma^{atom}(\neg a)=1$ or not (since $\neg a$ isn't a propositional atom).

2. The full truth assignment, $t_\Sigma^{full}$, induced by $t_\Sigma^{atom}$. We define $t_\Sigma^{full}(p)$ by induction:

   - If $p$ is an atom, then $t_\Sigma^{full}(p)=t_\Sigma^{atom}(p)$.
   - If $p$ has the form $\neg q$, then $t_\Sigma^{full}(p)=1-t_\Sigma^{full}(q)$.
   - If $p$ has the form $q\vee r$, then $t_\Sigma^{full}(p)=\max\{t_\Sigma^{full}(q), t_\Sigma^{full}(r)\}$.
(REMARK: That this is a valid definition takes proof - namely, that given $t_\Sigma^{atom}$ there is a unique function $t_\Sigma^{full}$ satisfying the properties above - but this is a proof you should have already seen. So I'm going to take it for granted at the moment that you're comfortable with this sort of inductive definition. If not, since it is a bit detailed, it might be a good idea to ask about it in a separate question.)
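The three inductive clauses translate directly into code. The following Python sketch (using an illustrative tuple encoding of formulas - `("atom", name)`, `("not", p)`, `("or", p, q)` - which is my own, not the notes' notation) computes $t_\Sigma^{full}$ by recursion from any atom assignment:

```python
# A direct transcription of the three inductive clauses above. The atom
# assignment t_atom is any dict from atom names to {0, 1}; formulas are
# nested tuples ("atom", name), ("not", p), ("or", p, q).

def t_full(p, t_atom):
    if p[0] == "atom":    # base case: defer to the atom assignment
        return t_atom[p[1]]
    if p[0] == "not":     # t(not q) = 1 - t(q)
        return 1 - t_full(p[1], t_atom)
    if p[0] == "or":      # t(q or r) = max(t(q), t(r))
        return max(t_full(p[1], t_atom), t_full(p[2], t_atom))
    raise ValueError(f"unrecognized formula: {p!r}")

t_atom = {"a": 0, "b": 1}
print(t_full(("not", ("atom", "a")), t_atom))                         # 1
print(t_full(("or", ("atom", "a"), ("not", ("atom", "b"))), t_atom))  # 0
```

The uniqueness claim in the remark corresponds to the fact that the recursion is deterministic: each formula matches exactly one clause.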
The crucial point is that $t_\Sigma^{atom}$ has a special property which $t_\Sigma^{full}$ doesn't necessarily have: namely, $t_\Sigma^{atom}$ satisfies "For every sentence $p$ in my domain, I map $p$ to $1$ iff $\Sigma\vdash p$." However, $t_\Sigma^{full}$ need not satisfy "For every sentence $p$ in my domain, I map $p$ to $1$ iff $\Sigma\vdash p$." And this shouldn't be surprising: the domain of $t_\Sigma^{full}$ is much bigger than the domain of $t_\Sigma^{atom}$.
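A minimal concrete instance, sketched in Python under the same illustrative tuple encoding: take $\Sigma=\emptyset$ and a single atom $a$. Then $t_\Sigma^{atom}(a)=0$, the induced $t_\Sigma^{full}$ gives $\neg a$ the value $1$, and yet $\Sigma\not\vdash\neg a$ (checked semantically: provability from the empty theory coincides with being a tautology, by soundness and completeness, and $\neg a$ is not a tautology):

```python
# Illustrative encoding as before: ("atom", name), ("not", p), ("or", p, q).

def t_full(p, t_atom):
    if p[0] == "atom":
        return t_atom[p[1]]
    if p[0] == "not":
        return 1 - t_full(p[1], t_atom)
    return max(t_full(p[1], t_atom), t_full(p[2], t_atom))

t_atom = {"a": 0}              # the empty Sigma proves no atom, so a maps to 0
not_a = ("not", ("atom", "a"))

value = t_full(not_a, t_atom)  # t_full(not a) = 1 - 0 = 1

# Provability from the empty theory = being a tautology (soundness +
# completeness), so check every assignment to a:
provable = all(t_full(not_a, {"a": v}) == 1 for v in (0, 1))

print(value, provable)  # 1 False
```

So $t_\Sigma^{full}(\neg a)=1$ while $\Sigma\not\vdash\neg a$: the full assignment genuinely lacks the special property of the atom assignment.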
Phrased this way, Lemma 2.2.12 should read: