I am building a Python library that creates Partially Connected Neural Networks based on input and output data (X,Y). The basic gist is that the network graph is arbitrarily updated with nodes and edges in the hidden layers.
Example Data:
X = np.array([[0,0],[0,1],[1,0],[1,1]], dtype=float)
Y = np.array([[0],[1],[1],[0]], dtype=float)
Example Graph: 
I am currently using the sum product aggregation function to calculate each layer's values:
$$ \sum_{i=0}^n w_{ij}x_i $$
The synapse weights are denoted by two subscripts $ij$. Subscript $i$ represents the previous neuron and subscript $j$ represents the current neuron under consideration. The sum product for current neuron $j$ is computed as:
$$ s_j = w_{0j}x_0 + w_{1j}x_1 + \dots + w_{nj}x_n = \sum_{i=0}^n w_{ij}x_i $$
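For concreteness, the per-neuron sum product above can be computed directly in NumPy. The variable names here are illustrative only, not from the library:

```python
import numpy as np

# Incoming activations x_i and weights w_ij for a single neuron j.
# In a partially connected network only the neurons that actually feed j
# appear here, so these vectors can have any length.
x = np.array([0.0, 1.0, 1.0])    # outputs of j's parent neurons
w = np.array([0.5, -0.2, 0.8])   # weights w_ij on those incoming edges

# s_j = sum_i w_ij * x_i
s_j = np.sum(w * x)
```

Because each neuron keeps only the weights of its actual incoming edges, the two vectors always have matching lengths and no shape gymnastics are needed.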
Is sum product an ideal aggregation function for PCNNs? A plain dot product won't work directly because, with arbitrary connectivity, the tensor shapes are almost always incompatible.
** I am self-taught, so please forgive any glaring linear algebra mistakes.
My solution was to use a weight vector $w$ that is padded with zeros and reshaped so that a standard dot product applies. Library support for the dot product is much better than for alternatives like the Kronecker product! Here is how I accomplished it in my Python library:
The result of the dot product looks like:
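A minimal sketch of the zero-padding idea described above, using made-up names of my own rather than the library's actual code: weights are scattered into a full-length vector, with zeros at the positions of unconnected predecessors, so every neuron can use an ordinary dot product against the whole previous layer.

```python
import numpy as np

def padded_weights(conn_idx, w, layer_size):
    """Scatter a neuron's edge weights into a full-length weight vector.

    conn_idx   -- indices of the previous-layer neurons feeding this neuron
    w          -- weights on those edges (same length as conn_idx)
    layer_size -- number of neurons in the previous layer
    """
    full = np.zeros(layer_size)
    full[conn_idx] = w
    return full

# Previous layer has 3 neurons; this neuron connects only to neurons 0 and 2.
x = np.array([0.4, 0.9, -1.0])
w_full = padded_weights([0, 2], np.array([0.5, 0.8]), 3)

# Missing edges contribute zero, so the dot product equals the sparse
# sum product: 0.5*0.4 + 0*0.9 + 0.8*(-1.0), i.e. -0.6 up to float rounding.
s_j = np.dot(w_full, x)
```

The padding makes the math equivalent to the sum product over only the connected edges, at the cost of storing (and multiplying by) zeros for every absent connection.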