I'm creating a neural network in Excel like the one in this video: https://www.youtube.com/watch?v=3993kRqejHc
N= X1·W1 + X2·W2 + X3·W3.
Is it compulsory to add the logic function that appears in column I? What is that function strictly called in AI? Wouldn't the neural network work without it?
That function is called an activation function, and it is actually what makes neural networks interesting.
To give you an example, imagine you have a network with one input $x$ and one output $y$. If you leave out this function, the result is
$$ y = b + w x \tag{1} $$
where the parameters $b$ and $w$ are the numbers (weights) you need to find. The idea of the training phase is to find the values of $w$ and $b$ that fit a bunch of training examples of the form $\{x_i, y_i \}$: you know what the output $y_1$ is when you feed the network an input $x_1$, you also know what the output $y_2$ is when you feed it $x_2$, and so on.
The point here is that Eq. (1) is just a straight line, so no matter how many training examples you use, the best this network can ever do is fit a line. You may then ask yourself: isn't the problem of fitting a line already solved by ordinary least squares? And the answer is yes, it is; no need for neural networks at all!
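You can see the collapse directly: stack two of these "neurons" without an activation function and you still get a single straight line. A minimal sketch in Python (rather than Excel), with made-up weights for illustration:

```python
# Two linear "layers" without any activation function.
# Weights here (0.5, 2.0, -1.0, 3.0) are arbitrary illustration values.

def layer1(x):
    return 0.5 + 2.0 * x      # b1 + w1*x

def layer2(h):
    return -1.0 + 3.0 * h     # b2 + w2*h

# Composing them:
#   layer2(layer1(x)) = -1.0 + 3.0*(0.5 + 2.0*x) = 0.5 + 6.0*x
# which is again of the form b + w*x -- still just one straight line.
print(layer2(layer1(1.0)))    # 6.5
print(0.5 + 6.0 * 1.0)        # 6.5, same line
```

However many linear layers you stack, the composition is always another linear function, which is why the activation function is essential.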
The obvious follow-up question is then: how do you spice things up? And the answer is: by introducing non-linearities,
$$ y = f(b + w x) \tag{2} $$
There are several choices for $f$. A very simple one, used for binary classification, is the step function
$$ f(x) = \begin{cases} 0 & x < 0 \\ 1 & {\rm otherwise}\end{cases} $$
here is another one
$$ f_{\rm sigmoid}(x) = \frac{1}{1 + e^{-x}} $$
and yet another one
$$ f_{\rm ReLU}(x) = \begin{cases} 0 & x < 0 \\ x & {\rm otherwise}\end{cases} $$
Each one has its merits. The last one, the ReLU, has been used heavily in recent classifiers, but the best choice depends on the problem.