I'm trying to represent common machine learning/deep learning concepts and operations in the form of string diagrams. The many categorical quantum mechanics papers by Coecke et al. have given me a good start on representing basic linear algebra, but there are a few missing pieces when translating to machine learning.
A very common type of neural network is the convolutional neural network, in which the input is typically a matrix of pixel values (a grayscale image), or possibly a 3-tensor if the image is RGB. Taking the simplest case, you have at least one small convolutional matrix/kernel, and you compute the dot product of this kernel with an equally sized patch of the input image. By 'sliding' the kernel across the input image one pixel at a time and computing the dot product at each position, you build up a new matrix.
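To be concrete about the operation I want to diagram, here is a minimal NumPy sketch of the sliding-window version just described (the function name and stride-1, no-padding assumptions are mine):

    import numpy as np

    def conv2d_naive(image, kernel):
        """Slide the kernel over the image one pixel at a time,
        taking the dot product with each equally sized patch."""
        H, W = image.shape
        kH, kW = kernel.shape
        out = np.zeros((H - kH + 1, W - kW + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = image[i:i + kH, j:j + kW]
                out[i, j] = np.sum(patch * kernel)  # dot product of kernel and patch
        return out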
In practice, this convolution is typically implemented as one large matrix-matrix multiplication by reshaping the input image; however, this obscures the intuitive idea of a small matrix 'scanning' across the image. Is there any way I can represent a convolution operation like this in string diagram notation à la Coecke and others? I would also like to know how to represent entry-wise (Hadamard) matrix multiplication in string diagrams (ordinary matrix-matrix multiplication is easy: it's just two boxes composed sequentially along a string).
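For reference, here is a rough sketch (again with my own naming, assuming stride 1 and no padding) of the reshaping trick I have in mind, often called im2col, together with the entry-wise product I want to diagram:

    import numpy as np

    def conv2d_im2col(image, kernel):
        """Flatten every kernel-sized patch into a column, then express
        the whole convolution as a single matrix-vector product."""
        H, W = image.shape
        kH, kW = kernel.shape
        oH, oW = H - kH + 1, W - kW + 1
        cols = np.empty((kH * kW, oH * oW))
        for i in range(oH):
            for j in range(oW):
                cols[:, i * oW + j] = image[i:i + kH, j:j + kW].ravel()
        # the flattened kernel acts as a row vector on the patch matrix
        return (kernel.ravel() @ cols).reshape(oH, oW)

    # entry-wise (Hadamard) product of two same-shaped matrices
    A = np.arange(9, dtype=float).reshape(3, 3)
    B = 2 * np.ones((3, 3))
    hadamard = A * B

Both functions produce the same output matrix, which is exactly why the reshaped implementation hides the 'scanning' picture I'd like the string diagram to preserve.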