After reading an introduction to vector analysis, I wanted to try out some operations myself to check whether I had understood everything. I thought this $$ \Delta(\phi\psi)=\phi\Delta\psi+2\nabla\phi\nabla\psi+\psi\Delta\phi \\ \phi,\psi\text{ are scalar,} $$ would be simple enough to start with, but I was wrong.
Here is what I tried $$ \begin{align} &\Delta(\phi\psi) \\ &=\partial_{xx}(\phi\psi)~~+~~\partial_{yy}(\phi\psi)~~+~~\partial_{zz}(\phi\psi) \\ &=\partial_x(\psi\cdot\partial_x\phi+\phi\cdot\partial_x\psi)~~+~~\partial_y(\psi\cdot\partial_y\phi+\phi\cdot\partial_y\psi)~~+~~\partial_z(\psi\cdot\partial_z\phi+\phi\cdot\partial_z\psi) \\ &=\partial_x\psi\cdot\partial_x\phi+\psi\cdot\partial_{xx}\phi+\partial_x\phi\cdot\partial_x\psi+\phi\cdot\partial_{xx}\psi \\ &~~~~~+\partial_y\psi\cdot\partial_y\phi+\psi\cdot\partial_{yy}\phi+\partial_y\phi\cdot\partial_y\psi+\phi\cdot\partial_{yy}\psi \\ &~~~~~+\partial_z\psi\cdot\partial_z\phi+\psi\cdot\partial_{zz}\phi+\partial_z\phi\cdot\partial_z\psi+\phi\cdot\partial_{zz}\psi \\ &=\psi(\partial_{xx}\phi+\partial_{yy}\phi+\partial_{zz}\phi) \\ &~~~~~+2(\partial_x\psi\cdot\partial_x\phi+\partial_y\psi\cdot\partial_y\phi+\partial_z\psi\cdot\partial_z\phi) \\ &~~~~~+\phi(\partial_{xx}\psi+\partial_{yy}\psi+\partial_{zz}\psi) \\ &=\psi\Delta\phi+2\nabla\psi\cdot\nabla\phi+\phi\Delta\psi \end{align} $$
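The derivation above can be double-checked symbolically, for instance with sympy. This is just an independent verification sketch (not part of the original working); `phi` and `psi` stand for arbitrary smooth scalar fields:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
phi = sp.Function('phi')(x, y, z)
psi = sp.Function('psi')(x, y, z)

def laplacian(f):
    # Sum of second partial derivatives in x, y, z.
    return sum(sp.diff(f, v, 2) for v in (x, y, z))

def grad(f):
    # Gradient as a list of first partial derivatives.
    return [sp.diff(f, v) for v in (x, y, z)]

lhs = laplacian(phi * psi)
rhs = (psi * laplacian(phi)
       + 2 * sum(a * b for a, b in zip(grad(phi), grad(psi)))
       + phi * laplacian(psi))

print(sp.simplify(lhs - rhs))  # prints 0
```

sympy applies the product rule automatically when differentiating `phi * psi`, so the difference of the two sides cancels term by term.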
The thing is, my teacher* said that I cannot omit the dot in a scalar product, because otherwise it wouldn't be a scalar product anymore (but a dyadic product):
$$ \vec a\vec b=\pmatrix{a_1\\a_2\\a_3}\pmatrix{b_1\\b_2\\b_3}=\pmatrix{a_1b_1&a_1b_2&a_1b_3\\a_2b_1&a_2b_2&a_2b_3\\a_3b_1&a_3b_2&a_3b_3}~~\neq~~\vec a\cdot\vec b=a_1b_1+a_2b_2+a_3b_3 $$
So my question: Is $\nabla\phi\nabla\psi$ meant to be a dyadic product or a scalar product? I'd bet on the scalar product, because $\phi\Delta\psi$ and $\psi\Delta\phi$ are scalars (I think). And I think $\vec a$ and $\vec b$ actually must be a column matrix $\mathbf a\in\mathbb R^{3\times1}$ and a row matrix $\mathbf b\in\mathbb R^{1\times3}$. Are vectors and column/row matrices interchangeable (even if strictly adhering to their definitions)?
* I'm just a high school student and I simply don't know anyone who can answer this question; even my math teacher and physics teacher aren't able to.
Your calculation is correct. You've clearly shown that when the author of the identity you're checking wrote $2\nabla\phi\nabla\psi$, they really meant $2\left(\nabla\phi\right)\cdot\left(\nabla\psi\right)$.
Let's just look at the types of the objects involved, though. On the LHS of your identity you have $\Delta (\phi\psi)$. $\phi$ and $\psi$ are (presumably) scalar fields, and we know that the product of two scalar fields is also a scalar field. Applying the Laplacian operator to a scalar field yields a scalar field as well.
On the RHS you have three terms. From the statements in the paragraph above we can see immediately that $\phi(\Delta\psi)$ and $\psi(\Delta\phi)$ are scalar fields. The term we're not sure about is $2\nabla\phi\nabla\psi$. Whatever type of object it is, when we add it to a scalar field (the other terms) we should get back a scalar field (the LHS). The only reasonable thing it could be, then, is another scalar field.
But what if it weren't? Maybe $2\nabla\phi\nabla\psi$ is supposed to be the dyadic product of $\nabla \phi$ and $\nabla \psi$. In that case it'd be a rank 2 tensor field. But the sum of a rank 2 tensor field and a scalar field is undefined. And even if it were defined, it probably wouldn't be a scalar field, which is what it'd have to be for equality to hold in this identity. So the authors must have meant for it to be a scalar field.
This was just a little bit of reasoning to see what type of object $2\nabla\phi\nabla\psi$ is supposed to be. For the sake of clarity if you use this identity in the future I'd suggest you write this term as $2\nabla\phi\cdot\nabla\psi$ or even $2\left(\nabla\phi\right)\cdot\left(\nabla\psi\right)$ to avoid anyone else getting confused about what it is supposed to mean.
I should note that I've seen a couple of people -- including my old QM professor -- use $\mathbf v \mathbf w$ (where $\mathbf v, \mathbf w$ are vector quantities) to denote the dot product, but I'd discourage this myself.
As for your question about matrices: row and column matrices can be swapped in many circumstances. For instance, when just taking linear combinations of vectors, it doesn't really matter whether you denote the vectors as row or column matrices. But when it comes to matrix multiplication, row and column matrices act very differently.
In fact, what you've written in your question is incorrect. $\mathbf a\mathbf b$ is not defined if $\mathbf a, \mathbf b$ are both column matrices. Instead the product you are doing should have been written $\mathbf a \mathbf b^T$ where the $^T$ represents transposition of the matrix.
Now let's compare the dyadic and dot products of column matrices: The dyadic (or outer) product is $\mathbf a\mathbf b^T$ and the dot (or inner) product is $\mathbf a^T\mathbf b$. These two products are not the same and in fact yield two different types of matrices, as you can confirm using the rules for matrix multiplication.
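The difference in shapes is easy to see numerically, e.g. with numpy (the vector entries below are just arbitrary example values):

```python
import numpy as np

a = np.array([[1], [2], [3]])   # column matrix, shape (3, 1)
b = np.array([[4], [5], [6]])   # column matrix, shape (3, 1)

outer = a @ b.T   # dyadic product: (3,1) @ (1,3) -> a (3,3) matrix
inner = a.T @ b   # dot product:    (1,3) @ (3,1) -> a (1,1) matrix, i.e. a scalar

print(outer.shape)   # (3, 3)
print(inner.shape)   # (1, 1)
print(inner[0, 0])   # 32 = 1*4 + 2*5 + 3*6

# Note that a @ b itself raises a ValueError: the inner dimensions
# (3,1) x (3,1) don't match, which is exactly the point above.
```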