A simple, but fundamentally important, example of an RKHS is obtained when $E$ is a finite set. Thus, let $E=\{t_1, \ldots, t_p\}$, in which case the kernel $K$ is equivalent to the matrix $\mathscr{K}=\{K\left(t_i, t_j\right)\}_{i, j=1: p}$.
The RKHS is now found to be the set of functions on $E$ of the form $$ f(\cdot)=\sum_{i=1}^p a_i K\left(\cdot, t_i\right), $$ where $\left(a_1, \ldots, a_p\right)$ is perpendicular to the null space of $\mathscr{K}$: i.e., to the set of vectors $a$ for which $\mathscr{K} a=0$. Note that $f(\cdot)$ can take on only $p$ values, which means that it has a $p$-vector representation as $\left(f\left(t_1\right), \ldots, f\left(t_p\right)\right)^T=\mathscr{K} a$ for $a=\left(a_1, \ldots, a_p\right)^T$.
For notational clarity in this instance, we will use $f(\cdot)$ to indicate its representation as a function on $E$ and $f$ to denote its vector form. With that convention, the inner product between $f_1(\cdot), f_2(\cdot) \in \mathbb{H}(K)$ is $$ \left\langle f_1(\cdot), f_2(\cdot)\right\rangle=f_1^T \mathscr{K}^{-} f_2 $$ with $\mathscr{K}^{-}$ any generalized inverse of $\mathscr{K}$: i.e., any matrix that satisfies $\mathscr{K} \mathscr{K}^{-} \mathscr{K}=\mathscr{K}$. The Moore-Penrose generalized inverse of $\mathscr{K}$ is one possible choice for $\mathscr{K}^{-}$, and, of course, we use $\mathscr{K}^{-}=\mathscr{K}^{-1}$ when $\mathscr{K}$ is invertible.
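To make the quoted definitions concrete, here is a small numerical sketch (the point set and the Gaussian kernel are my own illustrative choices, not from the text) that builds $\mathscr{K}$, takes the Moore-Penrose inverse as $\mathscr{K}^{-}$, and checks the reproducing property $\langle f(\cdot), K(\cdot, t_j)\rangle = f(t_j)$:

```python
import numpy as np

# Illustrative finite set E = {t_1, ..., t_p} and a Gaussian kernel
# (both are arbitrary choices for the sketch, not from the text).
t = np.array([0.0, 0.5, 1.0, 2.0])            # p = 4 points
K = np.exp(-(t[:, None] - t[None, :]) ** 2)   # kernel matrix {K(t_i, t_j)}

Kpinv = np.linalg.pinv(K)                     # Moore-Penrose inverse as K^-

# A member of H(K): vector form f = K a for a coefficient vector a.
a = np.array([1.0, -2.0, 0.5, 3.0])
f = K @ a

# Reproducing property: <f, K(., t_j)> = f(t_j) for every j,
# where the representer K(., t_j) has vector form K[:, j].
for j in range(len(t)):
    assert np.isclose(f @ Kpinv @ K[:, j], f[j])
```

Here $\mathscr{K}$ is in fact invertible (distinct points, Gaussian kernel), so the pseudoinverse coincides with $\mathscr{K}^{-1}$.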
Question 1: I don't understand why we need the constraint that $\left(a_1, \ldots, a_p\right)$ is perpendicular to the null space of $\mathscr{K}$. I do see that if we chose $\left(a_1, \ldots, a_p\right)$ from the null space of $\mathscr{K}$, then $f$ would be the zero function, so we want to avoid such vectors. But merely avoiding the null space is weaker than being perpendicular to it, and I don't see why the stronger orthogonality condition is required.
Question 2: I tried to derive $\left\langle f_1(\cdot), f_2(\cdot)\right\rangle=f_1^T \mathscr{K}^{-} f_2$ myself. Below is my attempt:
First, recall that for any $f_1(\cdot), f_2(\cdot) \in \mathbb{H}(K)$, we can express them in the form:
$$f_1(\cdot) = \sum_{i=1}^p a_i K(\cdot, t_i)$$ $$f_2(\cdot) = \sum_{j=1}^p b_j K(\cdot, t_j)$$
where $(a_1, \ldots, a_p)$ and $(b_1, \ldots, b_p)$ are perpendicular to the null space of the kernel matrix $\mathscr{K}$.
Now, we can represent $f_1(\cdot)$ and $f_2(\cdot)$ as $p$-dimensional vectors $f_1$ and $f_2$, respectively, by evaluating them at the points $t_1, \ldots, t_p$. That is,
$$f_1 = \begin{pmatrix} f_1(t_1) \\ \vdots \\ f_1(t_p) \end{pmatrix} \quad \text{and} \quad f_2 = \begin{pmatrix} f_2(t_1) \\ \vdots \\ f_2(t_p) \end{pmatrix}$$
Using the matrix representation of the kernel $\mathscr{K}$, we can write:
$$f_1 = \mathscr{K} \begin{pmatrix} a_1 \\ \vdots \\ a_p \end{pmatrix} \quad \text{and} \quad f_2 = \mathscr{K} \begin{pmatrix} b_1 \\ \vdots \\ b_p \end{pmatrix}$$
Now, the inner product between $f_1(\cdot)$ and $f_2(\cdot)$ in the RKHS $\mathbb{H}(K)$ can be computed using the kernel function $K$ as follows:
$$\begin{aligned} \langle f_1(\cdot), f_2(\cdot)\rangle &= \sum_{i=1}^p \sum_{j=1}^p a_i b_j K(t_i, t_j) \\ &= \begin{pmatrix} a_1 & \cdots & a_p \end{pmatrix} \mathscr{K} \begin{pmatrix} b_1 \\ \vdots \\ b_p \end{pmatrix} \\ &= a^T \mathscr{K} b \end{aligned}$$
But this leaves me with $a^T \mathscr{K} b$ rather than $f_1^T \mathscr{K}^{-} f_2$; I don't see where the generalized inverse enters at all!
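As a numerical sanity check (the rank-deficient toy kernel matrix below is my own arbitrary choice, not from the text), the book's formula $f_1^T \mathscr{K}^{-} f_2$ does agree with the expression $a^T \mathscr{K} b$ from my attempt, essentially because $\mathscr{K} \mathscr{K}^{-} \mathscr{K}=\mathscr{K}$:

```python
import numpy as np

# Toy rank-deficient PSD kernel matrix (arbitrary example of mine).
v = np.array([1.0, 2.0, 3.0])
K = np.outer(v, v)                    # rank 1, so K is singular

Kpinv = np.linalg.pinv(K)             # Moore-Penrose inverse as K^-
assert np.allclose(K @ Kpinv @ K, K)  # defining property of a generalized inverse

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)
f1, f2 = K @ a, K @ b                 # vector forms f = K a

# My derivation gives a^T K b; the book states f1^T K^- f2.
# They agree since f1^T K^- f2 = a^T (K K^- K) b = a^T K b.
print(a @ K @ b, f1 @ Kpinv @ f2)
```

So numerically the two expressions match even when $\mathscr{K}$ is not invertible, but I would still like to see the algebraic argument spelled out.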