Let $g:[-1,1] \to \mathbb R$ be continuously differentiable almost everywhere on $(-1,1)$ and square-integrable with respect to $N(0,\varepsilon^2)$ for some $\varepsilon>0$ (take $\varepsilon=1$ if you like). For $z=(z_1,\ldots,z_n)$ uniform on the unit sphere in $\mathbb R^n$, define the coefficients $a_n(g)$, $b_n(g)$, and $c_n(g)$ by
$$ a_n(g) := \mathbb E[g(z_1)g(z_2)],\,b_n(g):= \mathbb E[z_1g(z_1)g'(z_2)],\,c_n(g):=\mathbb E[g(z_1)^2]. $$
Also define their large-$n$ limits (whenever these limits exist!) $$ a(g) := \lim_{n \to \infty} a_n(g),\, b(g) := \lim_{n \to \infty} b_n(g),\,c(g) := \lim_{n \to \infty} c_n(g). $$
Claim. If $g$ is continuously differentiable at $0$, then $$ a(g) = c(g) = g(0)^2,\, b(g) = 0. \tag{1} $$
Let us prove the above claim. To this end, let $g(x) = a_0 + a_1 x + O(x^2)$ be the first-order Taylor expansion of $g$ around $0$. Since $z$ is uniform on the unit sphere, $\mathbb E[z_1] = 0$ (so the cross term vanishes) and $\mathbb E[z_1^2] = 1/n$. One then computes $$ c_n(g):=\mathbb E[g(z_1)^2] = a_0^2 + a_1^2\mathbb E[z_1^2] + O(1/n) = a_0^2 + O(1/n) = g(0)^2 + O(1/n). $$ Thus, $c(g) := \lim_{n \to \infty}c_n(g) = g(0)^2$, as claimed. Using analogous arguments, one proves that $a(g) = g(0)^2$ and $b(g) = 0$. $\quad\quad\Box$
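As a quick numerical sanity check (not part of the proof), one can estimate $a_n(g)$, $b_n(g)$, $c_n(g)$ by Monte Carlo, sampling uniformly on the sphere by normalizing Gaussian vectors. The choice $g(x) = \cos x$ (so $g(0)^2 = 1$) and the sample sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere_samples(num, n):
    """Draw `num` points uniform on the unit sphere in R^n
    by normalizing standard Gaussian vectors."""
    x = rng.standard_normal((num, n))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def coefficients(g, gprime, n, num=20_000):
    """Monte Carlo estimates of a_n(g), b_n(g), c_n(g)."""
    z = sphere_samples(num, n)
    z1, z2 = z[:, 0], z[:, 1]
    a = np.mean(g(z1) * g(z2))
    b = np.mean(z1 * g(z1) * gprime(z2))
    c = np.mean(g(z1) ** 2)
    return a, b, c

# g(x) = cos(x): g(0)^2 = 1, so a_n, c_n -> 1 and b_n -> 0 as n grows.
a, b, c = coefficients(np.cos, lambda x: -np.sin(x), n=500)
print(a, b, c)
```

For $n = 500$ the estimates should already be close to the limits $(1, 0, 1)$, consistent with the $O(1/n)$ rate above.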
Question. Without the condition "$g \in \mathcal C^1$ at $0$", is formula (1) still valid? If not, what are weaker conditions under which (1) holds?
Here is an answer based on user8268's comment. It would be too long for a comment.
More generally, suppose that for each $n$, $Z_n:=(z_1,\ldots,z_n)$ is a random vector in $\mathbb R^n$ with $\mathbb E[z_1^2], \mathbb E[z_2^2] \le v_n \to 0$ (in the limit $n \to \infty$). Let $h:[-1,1]^2 \to \mathbb R$ be a measurable function which is continuous at $(0,0)$, and assume $|h|$ is bounded. For example, if $h$ is continuous on $[-1,1]^2$, then all of these assumptions are met.

Claim. Under these assumptions, $\mathbb E[h(z_1,z_2)] \to h(0,0)$ as $n \to \infty$.
Proof. Set $M := \sup_{z_1,z_2 \in [-1,1]}|h(z_1,z_2)| \in [0,\infty)$. Let $\mu \in \mathcal P(\mathbb R^2)$ be the joint probability distribution of $(z_1,z_2)$. With $\epsilon_n:= \sqrt{v_n}$, one computes
$$ \mathbb E[h(z_1,z_2)] = \underbrace{\int_{z_1^2+ z_2^2 < \epsilon_n} h(z_1,z_2)\,d\mu(z_1,z_2)}_{A_n} + \underbrace{\int_{z_1^2+ z_2^2 \ge \epsilon_n} h(z_1,z_2)\,d\mu(z_1,z_2)}_{B_n}. $$
We now handle each term separately.