Breakdown question about a single-node hidden layer in neural networks


I am writing a paper about neural networks, and in my research I seem to be getting conflicting answers from different articles. As part of the paper process we are required to submit facts along the way. One of my facts is below.

For a single-node hidden layer, the $z$ that is plugged into our activation function is the sum of all the elements of $X$ multiplied by their weights $w$, plus a bias $b$. For example, if $X$ has $3$ elements, $$ z = w_1 \, x_1 + w_2 \, x_2 + w_3 \, x_3 + b. $$ In more general terms, $$ z = b + \sum_{k=1}^n w_k \, x_k. $$
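To make the fact concrete, here is a minimal sketch of that computation in NumPy. The input values, weights, and bias are made-up numbers for illustration, and the sigmoid is just one common choice of activation:

```python
import numpy as np

def sigmoid(z):
    """Logistic activation: squashes z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical inputs and learned parameters for a node with n = 3 inputs.
x = np.array([1.0, 2.0, 3.0])    # input vector X
w = np.array([0.5, -0.2, 0.1])   # one weight per input element
b = 0.4                          # scalar bias for this node

# Pre-activation: z = b + sum_k w_k * x_k
z = b + np.dot(w, x)

# The activation function is applied to z, with the bias already included.
a = sigmoid(z)
```

Here `z` works out to `0.4 + 0.5 - 0.4 + 0.3 = 0.8`, and the activation receives that full value, bias and all.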

My professor commented that my understanding of this was not entirely correct, and no matter how much I look I cannot see why. Some articles I have read show the bias not being included in $z$, and therefore not being passed into the activation function, so maybe that is my problem?

Any suggestions are greatly appreciated; I really want to understand what is going on inside a neural network.