For some time now, I have been able to use the Conditional Proof rule in Natural Deduction: we assume A, derive B within the scope of the assumption, and discharge the assumption to conclude that A implies B. This is, I believe, the syntactic approach to proof, purely rule-based. But what is the motivation behind this rule? In other words, why do we take the result to be A implies B and not, say, A and B, or some other truth-function?
One suggestion may be that we take the implication truth-function because we have assumed A to be true, and B turns out to be true whenever A is, which is exactly what the implication truth-function expresses. But it is not clear to me whether, in a formal proof of validity where we must derive the conclusion using pre-established rules only, we are allowed to think of the formulas in the proof as having truth values at all.
I might be making a mistake at some fundamental level. If so, where have I gone wrong? Any help clarifying this would be appreciated.



You are right that a formal proof, and any of its individual rule applications, is a purely syntactical object. Indeed, that is the very point of using a formal language with formal rules: they allow you to ‘figure things out’ by mere symbol manipulation.
Indeed, the symbols do not by themselves mean anything. For all we know, the $A$’s and $B$’s stand for fruits, and the $\to$’s and $\land$’s for operations on those fruits, like putting them together in a fruit salad. However, if we do look at these symbols as representing sentences with a truth-value, then we’ll find that the inference rules reflect what we consider logically valid inference principles. Accordingly, a formal derivation of $B$ that starts with the assumption $A$ shows that $B$ logically follows from $A$, and the Conditional Proof rule lets us record this fact inside the object language as the conditional $A \to B$. In other words: yes, a proof is ‘just’ purely syntactical, but it ‘mirrors’ meaningful inferences.
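To see the same idea in a proof assistant, here is a minimal sketch in Lean 4 (my own illustration, not from the question): Conditional Proof corresponds to $\to$-introduction, where assuming `h : A ∧ B`, deriving `A`, and then discharging the assumption yields the conditional `A ∧ B → A` as a term.

```lean
-- Conditional Proof as →-introduction:
-- `fun h => ...` assumes `h : A ∧ B` (the undischarged assumption),
-- `h.left` derives `A` within that scope (∧-elimination),
-- and the whole lambda discharges the assumption, proving `A ∧ B → A`.
example (A B : Prop) : A ∧ B → A :=
  fun h => h.left
```

Note that the checker verifies this purely syntactically, by matching the term against the rules; the claim that it ‘means’ a valid inference comes from the soundness of those rules with respect to the truth-functional semantics.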