I can't understand the solution to this example problem:
Let $K_1:\mathbb{R}^n\times\mathbb{R}^n\rightarrow\mathbb{R}$ be an arbitrary kernel function. Prove that $K_2(x,y)=a\cdot K_1^2(x,y)+b$, where $a,b$ are positive real numbers, is also a kernel function.
The presented solution first uses the fact that $K$ is a kernel function $\implies$ there is a mapping $\varphi:\mathbb{R}^n\rightarrow\mathbb{R}^N$ , $N\gg n$ such that $K(x,y)=\varphi(x)\cdot\varphi(y)$.
Then my notes from the lecture consist of a series of computations. In short:
$K_1(x,y)=\varphi_1(x)\cdot\varphi_1(y)=(x_1,x_2,\ldots,x_N)\cdot(y_1,y_2,\ldots,y_N)=x_1y_1+x_2y_2+\ldots+x_Ny_N$ (where, abusing notation, $x_i$ and $y_i$ denote the components of $\varphi_1(x)$ and $\varphi_1(y)$, not of $x$ and $y$ themselves)
$K_2(x,y)=aK_1^2(x,y)+b=a(x_1y_1+\ldots+x_Ny_N)^2+b=a(x_1^2y_1^2+\ldots+x_N^2y_N^2+2x_1y_1x_2y_2+\ldots)+b=\varphi_2(x)\cdot\varphi_2(y)$
Where $\varphi_2(x)=\left(\sqrt ax_1^2,\sqrt ax_2^2,\ldots,\sqrt a x_N^2,\sqrt{2a}x_1x_2,\sqrt{2a}x_1x_3,\ldots,\sqrt b\right)$.
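To convince myself the algebra above is right, I checked it numerically for the simplest case, taking $K_1$ to be the plain dot product on $\mathbb{R}^2$ (so $\varphi_1$ is the identity and $N=2$), with $a,b$ chosen arbitrarily:

```python
import numpy as np

# Sanity check of the derivation, assuming K1(x, y) = x . y on R^2
# (so phi_1 is the identity and N = 2); a, b are arbitrary positives.
a, b = 2.0, 3.0

def phi2(x):
    # Feature map from the notes, written out for N = 2:
    # squared coordinates, cross term with sqrt(2a), constant sqrt(b).
    return np.array([np.sqrt(a) * x[0]**2,
                     np.sqrt(a) * x[1]**2,
                     np.sqrt(2 * a) * x[0] * x[1],
                     np.sqrt(b)])

x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])

lhs = phi2(x) @ phi2(y)       # phi_2(x) . phi_2(y)
rhs = a * (x @ y)**2 + b      # a * K1(x, y)^2 + b
print(np.isclose(lhs, rhs))   # True
```

So at least the identity $\varphi_2(x)\cdot\varphi_2(y)=aK_1^2(x,y)+b$ checks out; my problem is with the annotation below.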
Now I have an annotation in my notes that I don't understand: namely, that here we somehow used the fact that a kernel function is positive definite, which means that the matrix with entries $K(x_i,x_j)$ is positive definite.
...what?? What does this even mean: $K(x_i,x_j)$?? I thought that a kernel function accepts vectors as arguments, NOT elements of these vectors, and I'm interpreting $x_i,x_j$ to be elements of vectors! How can we construct a matrix of pairwise applications of $K$ to elements of a vector, if $K$ accepts the vectors themselves and not their elements??
What does it mean that $K$ is positive definite and how to derive from this fact the shape of $\varphi_2$?
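For what it's worth, my current guess (which I'd like confirmed) is that $x_1,\ldots,x_m$ in the annotation denote $m$ whole sample vectors in $\mathbb{R}^n$, not coordinates, so $K(x_i,x_j)$ is the $(i,j)$ entry of an $m\times m$ Gram matrix. Under that reading, here is a quick numerical check that this matrix comes out symmetric with non-negative eigenvalues for $K_2$ (again taking $K_1$ to be the dot product):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 3.0

# Guess: x_1, ..., x_m are m whole sample vectors in R^n, not coordinates.
m, n = 5, 3
X = rng.normal(size=(m, n))

def K1(x, y):
    return x @ y                  # base kernel (plain dot product here)

def K2(x, y):
    return a * K1(x, y)**2 + b    # the kernel from the problem statement

# Gram matrix G[i, j] = K2(x_i, x_j) over the m samples
G = np.array([[K2(X[i], X[j]) for j in range(m)] for i in range(m)])

# Symmetric with all eigenvalues >= 0 (up to rounding error)?
eigvals = np.linalg.eigvalsh(G)
print(np.allclose(G, G.T), eigvals.min() >= -1e-10)  # True True
```

Is this what the annotation means? And if so, how does positive (semi-)definiteness of this matrix lead to the shape of $\varphi_2$?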