Backpropagating with matrices (instead of raw scalars) should make the task much easier. My neural network looks like this:
Input Layer Nodes: $I_a$ (Ate Cookies?), $I_b$ (Drank Milk?)
Hidden Layer 1 Nodes: $H_c$, $H_d$, $H_e$
Hidden Layer 2 Nodes: $H_f$, $H_g$, $H_h$
Output Layer Nodes: $O_i$ (Full), $O_j$ (Unhappy)
Expected Output: Drinking Milk AND Eating Cookies = Full, Drinking Milk XOR Eating Cookies = Unhappy, Neither = Nothing
Activation Function: $\text{ActivateFunc}(N_x)=\frac{1}{1+e^{-N_x}}$
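In code, this activation (the logistic sigmoid) and its derivative, which backpropagation needs later, can be applied element-wise to a whole layer at once (the function names here are just my own placeholders):

```python
import numpy as np

def activate(x):
    """Logistic sigmoid 1 / (1 + e^-x), element-wise over a layer vector."""
    return 1.0 / (1.0 + np.exp(-x))

def activate_deriv(x):
    """Sigmoid derivative expressed via the sigmoid itself: s(x) * (1 - s(x))."""
    s = activate(x)
    return s * (1.0 - s)
```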
Actual Output: (Cookies 1, Milk 1) -> (Unhappy 1, Full 1)
Weight Notation: $ac$ represents a connection from $I_a$ to $H_c$, for example.
I know that the weight update formula (with a momentum term) is: $w(n+1)=w(n)+\eta \cdot \delta(n) \cdot y + \alpha \cdot \Delta w(n-1)$, where $\eta$ is the learning rate, $\alpha$ is the momentum coefficient, and $\Delta w(n-1)$ is the previous weight *change* (not the previous weight itself).
All I know so far about neural networks in matrix form is that each layer's activations form a vector (input, hidden, and output), and the connections between each pair of layers form a weight matrix.
How can I apply backpropagation to the neural network in matrix form?
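To clarify what I mean by matrix form, here is a minimal NumPy sketch of my current understanding for this 2-3-3-2 network. The layer sizes match the nodes above; the random seed, learning rate, and the plain gradient step (I dropped the momentum term for simplicity) are my own placeholder choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: [I_a, I_b] -> [H_c, H_d, H_e] -> [H_f, H_g, H_h] -> [O_i, O_j]
sizes = [2, 3, 3, 2]
# One weight matrix per connection layer; W[0][r, c] holds the weight from
# input node c to hidden node r, i.e. the scalars "ac", "ad", ... in one array.
W = [rng.standard_normal((n_out, n_in)) for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, eta=0.5):
    """One forward + backward pass; x and target are column vectors."""
    # Forward pass: keep every layer's activation for the backward pass.
    activations = [x]
    for Wl in W:
        activations.append(sigmoid(Wl @ activations[-1]))

    # Output delta: error times the sigmoid derivative a * (1 - a).
    a_out = activations[-1]
    delta = (a_out - target) * a_out * (1.0 - a_out)
    for l in reversed(range(len(W))):
        a_prev = activations[l]
        grad = delta @ a_prev.T           # outer product: one entry per weight
        if l > 0:
            a = activations[l]
            # Propagate delta back one layer using the (pre-update) weights.
            delta = (W[l].T @ delta) * a * (1.0 - a)
        W[l] -= eta * grad                # plain gradient step, no momentum
    return a_out

# Example from above: cookies=1, milk=1 should drive (Full, Unhappy) toward (1, 0).
x = np.array([[1.0], [1.0]])
target = np.array([[1.0], [0.0]])
for _ in range(5000):
    out = train_step(x, target)
```

Is this roughly the right structure, and if so, how do the matrix shapes generalize when batching multiple input patterns at once?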