The early stages of Convolutional Neural Networks perform classical convolutions with a certain kernel size on the input image. Is it possible to express in common terms the type of filtering they generally perform (lowpass, highpass, differential, anisotropic, ...), or are they completely application-dependent/impossible to interpret?
Alternatively, can you show some sample kernels? I would be very grateful.
There are some attempts to figure out what a CNN has learned. The most interesting to me is LIME:
Simplified a lot, they do the following: perturb the input (for an image, by masking out superpixels), query the black-box model on each perturbed copy, and then fit a simple interpretable model (e.g. a sparse linear model) to the perturbed inputs and the resulting predictions. The coefficients of that local surrogate tell you which parts of the input mattered for this particular prediction.
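The steps above can be sketched in a few lines. This is a toy illustration, not the actual LIME implementation: features are masked uniformly at random and the surrogate is plain unweighted least squares (real LIME weights samples by proximity and uses sparse regression).

```python
import numpy as np

def lime_sketch(predict, x, n_samples=500, seed=0):
    """Toy LIME-style explanation: randomly mask features of x,
    query the black-box `predict`, then fit a linear surrogate
    whose coefficients rank local feature importance."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Binary masks: 1 keeps a feature, 0 zeroes it out.
    masks = rng.integers(0, 2, size=(n_samples, d))
    perturbed = masks * x  # masked copies of x
    y = np.array([predict(p) for p in perturbed])
    # Least-squares fit of the predictions on the masks (plus intercept).
    A = np.column_stack([masks, np.ones(n_samples)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:d]  # per-feature importance

# Hypothetical black box: only feature 2 matters.
black_box = lambda v: 3.0 * v[2]
weights = lime_sketch(black_box, np.array([1.0, 1.0, 1.0, 1.0]))
```

Here the surrogate correctly assigns essentially all the weight to feature 2, which is the whole point: you learn *which inputs* drove the prediction without ever looking inside the model.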
Another paper that is very famous for visualization is
Now to your other question:
I haven't heard of any other, non-application-dependent interpretation.
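That said, as a point of comparison for the "sample kernel" request: first-layer kernels in image CNNs are often reported to resemble oriented edge detectors, and a classic hand-crafted differential filter of that shape is the Sobel kernel. A minimal sketch (plain NumPy, valid-mode cross-correlation, which is what CNN layers actually compute):

```python
import numpy as np

# Classic Sobel kernel: a horizontal-gradient (differential) filter,
# similar in shape to oriented edge detectors first CNN layers often learn.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic image: dark left half, bright right half (a vertical edge).
img = np.zeros((8, 8))
img[:, 4:] = 1.0
response = conv2d(img, sobel_x)
```

The response is zero everywhere except in the columns whose 3x3 window straddles the edge, i.e. this kernel acts as a directional highpass/differential filter, which is the kind of classical interpretation the question asks about.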