In neural networks, one often takes the dot product (the standard inner product on $\mathbb{R}^n$) of a layer's activations (neuron values) with the corresponding weight vector; this is sometimes called the input function, or, as I am referring to it, the input inner product(s). Different layers may have different dimensions.
Does this mean that when someone says "dot product" or "inner product" in this context, they really mean the dot products on $\mathbb{R}^n$ for $n = N_1, N_2, \ldots, N_L$, where $N_i$ is the number of neurons (and hence weights per neuron) in layer $i$? Would I be correct in saying that an inner product is a function, so in the context of neural-network input inner products there are actually many inner products, one for each layer dimension?
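To make the question concrete, here is a minimal sketch (the layer sizes and weights are made-up assumptions for illustration) showing that each layer's pre-activation is built from dot products on a differently-sized $\mathbb{R}^{N_i}$, i.e. a different inner-product function per layer:

```python
import numpy as np

# Hypothetical 2-layer network; all sizes are assumptions for illustration.
rng = np.random.default_rng(0)
N0, N1, N2 = 4, 3, 2                # input dim, hidden dim, output dim

x = rng.normal(size=N0)             # input activations, a vector in R^{N0}
W1 = rng.normal(size=(N1, N0))      # each row is a weight vector in R^{N0}
W2 = rng.normal(size=(N2, N1))      # each row is a weight vector in R^{N1}

# Each neuron's "input inner product" is the dot product on R^{N_i}:
h = np.array([np.dot(w, x) for w in W1])  # N1 dot products, each on R^{N0}
y = np.array([np.dot(w, h) for w in W2])  # N2 dot products, each on R^{N1}

# The per-neuron dot products agree with the usual matrix-vector form:
assert np.allclose(h, W1 @ x)
assert np.allclose(y, W2 @ h)
```

So the dot product used between the input and the first layer is a different function (a map $\mathbb{R}^4 \times \mathbb{R}^4 \to \mathbb{R}$) from the one used between the hidden and output layers (a map $\mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}$), even though both are written the same way.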