I assume the reader is familiar with the various notations for partial derivatives.
My book gives the following definition of differentiability:
If $\Delta f(x,y)$ can be expressed in the form
$$\Delta f(x,y)=f_x(a,b)\,\Delta x + f_y(a,b)\,\Delta y + \varepsilon_1\,\Delta x + \varepsilon_2\,\Delta y, \tag{1}$$
where $\varepsilon_1 \rightarrow 0$ and $\varepsilon_2 \rightarrow 0$ as $(\Delta x,\Delta y) \rightarrow (0,0)$,
then $f(x,y)$ is differentiable at $(a,b)$.
However, my calculation gives the following: provided that $f_x(a,b)$, $f_y(a,b)$, and $f_{xy}(a,b)$ exist,
$$\Delta f(x,y)=f_{xy}(a,b)\,\Delta x\,\Delta y+f_x(a,b)\,\Delta x + f_y(a,b)\,\Delta y + \varepsilon_1\,\Delta x + \varepsilon_2\,\Delta y + \varepsilon_3\,\Delta x\,\Delta y, \tag{2}$$
where $\varepsilon_1 \rightarrow 0$, $\varepsilon_2 \rightarrow 0$, and $\varepsilon_3 \rightarrow 0$ as $(\Delta x,\Delta y) \rightarrow (0,0)$.
Does this mean that if $f_x(x,y)$ is not a function of $y$, i.e. $f_{xy}(x,y)=0$, then equation $(1)$ is satisfied, and therefore $f(x,y)$ is differentiable at $(a,b)$?
In other words, if $f_x(x,y)$ is not a function of $y$, is $f(x,y)$ then differentiable at $(a,b)$?
The calculation is simple but a bit lengthy; if the reader suspects anything is wrong, I can present it.
As stated in the question, a definition of differentiability is
$$\Delta f(x,y)=f_x(a,b)\ \Delta x + f_y(a,b)\ \Delta y + \varepsilon_1\ \Delta x + \varepsilon_2\ \Delta y \tag{1}\label{eq1}$$
I'm not quite sure how you came up with your calculation of
$$\Delta f(x,y) = f_{xy}(a,b)\ \Delta x \Delta y+f_x(a,b)\ \Delta x + f_y(a,b)\ \Delta y + \varepsilon_1\ \Delta x + \varepsilon_2\ \Delta y + \varepsilon_3\ \Delta x \Delta y \tag{2}\label{eq2}$$
For one thing, why did you keep just $f_{xy}(a,b)$ and not use $f_{y\,x}(a,b)$ as well? Regardless, the two equations are not actually inconsistent in the limit as $(\Delta x,\Delta y) \rightarrow (0,0)$. This is because your two extra terms are bounded values multiplied by the second-order product $\Delta x\,\Delta y$, while all of the original terms on the right are multiples of just $\Delta x$ or $\Delta y$.
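To make this explicit (a short sketch, not part of your original calculation): group the two second-order terms of \eqref{eq2} with $\Delta x$ and fold them into a new first-order coefficient,

$$f_{xy}(a,b)\,\Delta x\,\Delta y + \varepsilon_3\,\Delta x\,\Delta y = \underbrace{\big(f_{xy}(a,b) + \varepsilon_3\big)\,\Delta y}_{=:\ \varepsilon_1'}\;\Delta x.$$

Since $f_{xy}(a,b)$ is a fixed number and $\varepsilon_3$ stays small while $\Delta y \to 0$, the new quantity $\varepsilon_1'$ also $\to 0$ as $(\Delta x,\Delta y) \rightarrow (0,0)$. Replacing $\varepsilon_1$ by $\varepsilon_1 + \varepsilon_1'$ turns \eqref{eq2} into exactly the form of \eqref{eq1}.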
A related question about this was asked here at First principles derivation of area under a curve giving rise to an unexpected term before taking limits. As the comment there by Andy Walls basically states, when $\Delta x$ and $\Delta y$ become small, their product becomes extremely small, e.g., $10^{-7} \times 10^{-7} = 10^{-14}$.
To help show why this works, consider that $\Delta x$ and $\Delta y$ are changing proportionally to each other, i.e., that $\frac{\Delta x}{\Delta y} = k$, for some non-zero constant $k$, so $\Delta x = k\varepsilon_4$ and $\Delta y = \varepsilon_4$ for some small real $\varepsilon_4$. Substitute this into the RHS of \eqref{eq2} and divide both sides by $\varepsilon_4$ to get
$$\frac{\Delta f(x,y)}{\varepsilon_4} = f_{xy}(a,b)\ k\varepsilon_4 + f_x(a,b) k + f_y(a,b) + \varepsilon_1\ k + \varepsilon_2\ + \varepsilon_3\ k\varepsilon_4 \tag{3}\label{eq3}$$
Now, taking the limit as $\varepsilon_4 \to 0$ (so that $\varepsilon_1$, $\varepsilon_2$, and $\varepsilon_3$ also go to $0$) leaves on the RHS
$$f_x(a,b) k + f_y(a,b) \tag{4}\label{eq4}$$
Note that doing the exact same calculation with \eqref{eq1} gives the same result. As you can see, when taking limits going to $0$, only the lowest-order terms survive.
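For comparison, here is that same substitution $\Delta x = k\varepsilon_4$, $\Delta y = \varepsilon_4$ applied to \eqref{eq1}, followed by division by $\varepsilon_4$:

$$\frac{\Delta f(x,y)}{\varepsilon_4} = f_x(a,b)\,k + f_y(a,b) + \varepsilon_1\,k + \varepsilon_2,$$

which also tends to $f_x(a,b)\,k + f_y(a,b)$, i.e., the same limit as in \eqref{eq4}.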
I hope this answers your question well enough. Keep in mind that I used a somewhat restricted case, with $\Delta x$ and $\Delta y$ going to $0$ in a fixed ratio, because I thought it would be more straightforward and simpler than the general case, but you may wish to work through the general case yourself to confirm your understanding of this issue.