I'm trying to implement this paper. It's about a method that provides explanations of how a Graph Neural Network makes its predictions.
Reading it, I got a clear understanding of the authors' idea, but I can't understand the fifth formula $$\min_M \left\{-\sum_{c=1}^C \mathbb{1}[y=c]\,\log P_\Phi(Y=y \mid G = A_S \odot \sigma(M),\, X = X_S)\right\},$$ because I have no idea what the $\mathbb{1}[y=c]$ term means and, therefore, where $y$ comes from. Is it an assignment, or does it denote the labeled sample?
Can anyone explain please?
$[y=c]$ is an Iverson bracket: it evaluates to $1$ when $y=c$ and to $0$ otherwise, so $\mathbb{1}[y=c]$ is the indicator of the event $y=c$. Here $y$ is the observed label of the node being explained, so it is fixed before the sum is taken; the indicator simply selects the single term of the sum where $c$ equals that label. The objective is the usual cross-entropy with a one-hot target, and it could equally be written without the sum as $$\min_M \left\{-\log P_\Phi(Y=y \mid G = A_S \odot \sigma(M),\, X = X_S)\right\}.$$
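A quick numeric sketch may make this concrete (the class probabilities below are made up for illustration, standing in for the masked GNN's output $P_\Phi$): the sum weighted by $\mathbb{1}[y=c]$ is just the negative log-probability the model assigns to the observed label.

```python
import numpy as np

# Toy class probabilities for one node, standing in for
# P_Phi(Y = c | G = A_S * sigma(M), X = X_S), c = 1..C.
# These numbers are invented for illustration.
probs = np.array([0.1, 0.7, 0.2])  # C = 3 classes
y = 1                              # observed label of the node (0-indexed)

# Cross-entropy written with the indicator 1[y = c], as in the paper:
indicator = np.array([1.0 if c == y else 0.0 for c in range(len(probs))])
loss_with_indicator = -np.sum(indicator * np.log(probs))

# The indicator zeroes out every term except c = y, so this equals:
loss_direct = -np.log(probs[y])

assert np.isclose(loss_with_indicator, loss_direct)
```

So the indicator is not a separate quantity to learn; it just encodes the one-hot target inside the sum.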