I'm trying to prove the following: define $$F(x) =\int_{g(x)}^{h(x)} f(t)\, dt.$$ If $f$ is continuous at $x_0$, $g$ and $h$ are differentiable at $x_0$, and a neighbourhood of each of $h(x_0)$ and $g(x_0)$ is included in the domain of $f$, then
$F'(x_0)=f(h(x_0))h'(x_0)-f(g(x_0))g'(x_0)$
I tried to split the proof into the cases $g(x_0)<h(x_0)$, $h(x_0)<g(x_0)$, and $g(x_0)=h(x_0)$.
When $g(x_0)<h(x_0)$, I defined
$$M(y) = \int _a^y f(t) dt$$ $$N(y) = \int_y^a f(t) dt$$
And I have proven that if $f$ is continuous at $x_0$:
$$M'(x_0)=f(x_0)$$ $$N'(x_0)=-f(x_0)$$
Since $F(x) = M(h(x)) + N(g(x))$ whenever $g(x)<h(x)$, and this inequality holds in a neighbourhood $(x_0-\varepsilon,x_0+\varepsilon)$ of $x_0$ because $g$ and $h$ are continuous at $x_0$, it follows that $$F'(x_0)=f(h(x_0))h'(x_0)-f(g(x_0))g'(x_0)$$
The case $h(x_0)<g(x_0)$ is proved similarly, using $$F(x) =\int_{g(x)}^{h(x)} f(t) dt =-\int_{h(x)}^{g(x)} f(t) dt$$
The problem comes when $g(x_0)=h(x_0)$: there $$F(x_0)= \int_{g(x_0)}^{g(x_0)} f(t)\, dt = 0,$$ but I don't know what happens in a neighbourhood of $x_0$.
I tried to compute $$F'(x_0)=\lim_{\delta\to 0}\frac{F(x_0+\delta)-F(x_0)}{\delta}=\lim_{\delta\to 0}\frac{F(x_0+\delta)}{\delta}$$ (writing $\delta$ for the increment, since $h$ already names the upper limit),
but I didn't get anywhere, and I don't know what else to do.
Edit:
I changed the approach: with the same initial conditions, I define $$M(y) = \int _a^y f(t)\, dt$$
We also have that if $f$ is continuous at $x_0$, then:
$$M'(x_0)=f(x_0)$$
Now we have $F(x)=M(h(x))-M(g(x))$ for every $x$ near $x_0$, and this holds in all three cases, so: $$F'(x_0)=f(h(x_0))h'(x_0)-f(g(x_0))g'(x_0)$$
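As a numeric sanity check (not part of the proof), the following Python sketch tests the formula in the previously troublesome case $g(x_0)=h(x_0)$, using the hypothetical choices $f(t)=\cos t$, $g(x)=x^2$, $h(x)=x$, at $x_0=1$ where $g(x_0)=h(x_0)=1$:

```python
import math

def simpson(func, a, b, n=1000):
    # composite Simpson's rule (n must be even); a > b gives the signed integral
    step = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * func(a + i * step)
    return s * step / 3

f = math.cos
g = lambda x: x * x       # g'(x) = 2x
h = lambda x: x           # h'(x) = 1

def F(x):
    return simpson(f, g(x), h(x))

x0, d = 1.0, 1e-5
numeric = (F(x0 + d) - F(x0 - d)) / (2 * d)    # central difference for F'(x0)
formula = f(h(x0)) * 1 - f(g(x0)) * (2 * x0)   # f(h)h' - f(g)g'
print(numeric, formula)   # both ≈ -cos(1) ≈ -0.5403
```

Here $F(x_0)=0$, yet $F'(x_0)=-\cos 1\neq 0$, which is exactly why the pointwise value at $x_0$ alone told us nothing.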
I think the overuse of $\newcommand{\d}{\,\mathrm d}x_0$ in the proof is problematic. You shouldn't be looking at $f(x_0)$ at all; $F'(x_0)$ can be equal to the desired formula even if $f(x_0)$ is undefined, and conversely $f$ might be continuous at $x_0$ and yet $F(x_0)$ is undefined (and therefore $F'(x_0)$ also is undefined).
For example, let $f(x) = \sqrt{1 - x^2},$ which is defined only on $[-1,1].$ Let $g(x) = x + 1$ and $h(x) = x + 2$. Then $F'(x_0)=f(h(x_0))h'(x_0)-f(g(x_0))g'(x_0)$ when $x_0 = -\frac32,$ although $f(x_0)$ is undefined; but if $x_0 = \frac12,$ then $f$ is continuous at $x_0,$ and $g$ and $h$ both are differentiable at $x_0,$ but $f$ is undefined in the neighborhoods of $g(x_0)$ and $h(x_0),$ so $F(x_0)$ and $F'(x_0)$ are undefined.
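The first claim can be checked numerically (a sketch using a simple Simpson-rule integrator): near $x_0=-\tfrac32$ the limits $g(x)=x+1$ and $h(x)=x+2$ stay inside $(-1,1)$, so $F$ is defined there even though $f(x_0)$ is not, and the formula gives $F'(x_0)=f(\tfrac12)-f(-\tfrac12)=0$.

```python
import math

def simpson(func, a, b, n=1000):
    # composite Simpson's rule (n must be even)
    step = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * func(a + i * step)
    return s * step / 3

f = lambda t: math.sqrt(1 - t * t)      # defined only on [-1, 1]
F = lambda x: simpson(f, x + 1, x + 2)  # g(x) = x+1, h(x) = x+2

x0, d = -1.5, 1e-6
numeric = (F(x0 + d) - F(x0 - d)) / (2 * d)  # central difference for F'(x0)
formula = f(x0 + 2) * 1 - f(x0 + 1) * 1      # f(1/2) - f(-1/2) = 0
print(numeric, formula)   # both ≈ 0, although f(-1.5) itself is a domain error
```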
Rather than looking at $f$ at $x_0,$ you need to be concerned with $f$ at $g(x_0),$ at $h(x_0),$ and at values in between in order for $F(x_0)$ even to be defined, and you should be concerned about neighborhoods of $g(x_0)$ and $h(x_0)$ in order to ensure that $F'(x_0)$ is defined.
One way to do this (but not necessarily the only way) is to require that $f$ be continuous on an open interval $I$ that includes both $g(x_0)$ and $h(x_0).$ As before, let $g(x)$ and $h(x)$ be differentiable at $x_0$ and define $F$ by $$F(x) = \int_{g(x)}^{h(x)} f(t) \d t.$$
You could then proceed as follows:
Let $a \in I.$ Then $$ \int_{g(x)}^{h(x)} f(t) \d t = \int_a^{h(x)} f(t) \d t - \int_a^{g(x)} f(t) \d t. $$
Note that as long as we define this notation for definite integrals in the usual way so that $\int_a^b f(t) \d t = -\int_b^a f(t) \d t,$ the equation above is true no matter the order in which $g(x),$ $h(x),$ and $a$ appear on the number line.
Let $$ S(v) = \int_a^v f(t)\d t. $$ Then $S'(v) = f(v).$ Also,
$$ S(h(x)) = \int_a^{h(x)} f(t) \d t $$ and $$ S(g(x)) = \int_a^{g(x)} f(t) \d t.$$
It follows that $$ F(x) = S(h(x)) - S(g(x)) $$ and therefore \begin{align} F'(x_0) &= \left.\frac{\d}{\d x} S(h(x))\right|_{x=x_0} - \left.\frac{\d}{\d x} S(g(x))\right|_{x=x_0} \\ &= S'(h(x_0)) h'(x_0) - S'(g(x_0)) g'(x_0) \\ &= f(h(x_0)) h'(x_0) - f(g(x_0)) g'(x_0), \end{align}
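As a numeric illustration of the result (a sketch with arbitrary hypothetical choices of $f$, $g$, $h$): take $f(t)=\frac{1}{1+t^2}$, $g(x)=x^3$, $h(x)=\sin x$, and $x_0=2$, so that $g(x_0)>h(x_0)$ and the signed-integral convention is exercised.

```python
import math

def simpson(func, a, b, n=2000):
    # composite Simpson's rule (n must be even); a > b yields the signed integral
    step = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * func(a + i * step)
    return s * step / 3

f = lambda t: 1 / (1 + t * t)
g, dg = lambda x: x ** 3, lambda x: 3 * x ** 2       # g and g'
h, dh = lambda x: math.sin(x), lambda x: math.cos(x)  # h and h'

F = lambda x: simpson(f, g(x), h(x))   # F(x) = S(h(x)) - S(g(x))

x0, d = 2.0, 1e-5
numeric = (F(x0 + d) - F(x0 - d)) / (2 * d)      # central difference for F'(x0)
formula = f(h(x0)) * dh(x0) - f(g(x0)) * dg(x0)  # f(h)h' - f(g)g'
print(numeric, formula)
```

Both values agree to several decimal places even though the integral runs "backwards" from $8$ down to $\sin 2$.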
which was what we wanted to prove.
A note on the conditions of the theorem:
In most treatments of the Fundamental Theorem of Calculus there is a "First Fundamental Theorem" and a "Second Fundamental Theorem." We use both of them in the proof above.
The Second Fundamental Theorem is invoked in order to be able to differentiate the function $S$ at $g(x_0)$ and at $h(x_0).$ This requires $f$ to be continuous in neighborhoods of $g(x_0)$ and $h(x_0).$
Note that in the statement of the Second Fundamental Theorem (at least as shown in the link above, and in many other places) it is not necessary that $a \leq v$ in order to be able to differentiate $S$ at $v.$ If your version of the Second Fundamental Theorem requires that $a \leq v$ then the proof will be slightly more complicated; a good first step would be to prove the usual version of the theorem first, because it is generally useful. Once you have done that, you no longer have to worry about the relative order of $a,$ $g(x_0),$ and $h(x_0)$ on the number line. You can put $a$ between $g(x_0)$ and $h(x_0)$ if you like.
The First Fundamental Theorem is invoked implicitly in order to guarantee that the integrals in $$ \int_{g(x)}^{h(x)} f(t) \d t = \int_a^{h(x)} f(t) \d t - \int_a^{g(x)} f(t) \d t $$ can all be evaluated when $x = x_0.$ This requires $f$ to be continuous on the three closed intervals whose endpoints are pairs of numbers from the list $a,$ $g(x_0),$ and $h(x_0).$
In particular, for the first integral we want $f$ to be continuous on the closed interval whose endpoints are $g(x_0)$ and $h(x_0).$ Combine that with the fact that $f$ is continuous in neighborhoods of $g(x_0)$ and $h(x_0),$ and we have the fact that $f$ is continuous on some open interval including $g(x_0)$ and $h(x_0).$ If we just place $a$ somewhere in that interval, we also ensure that $f$ is continuous on the other two closed intervals we need.
Additional notes:
Since one of the necessary conditions is that $f$ be continuous on an open interval $I$ such that $g(x_0)\in I$ and $h(x_0)\in I,$ it is always possible to find a number $a\in I$ such that $a < g(x_0)$ and $a < h(x_0).$ If you must use a definition of the definite integral notation $\int_a^b f(t)\d t$ that is valid only when $a \leq b,$ you will need one of the conditions of the theorem to be $g(x) \leq h(x)$ for $x$ in some neighborhood of $x_0,$ and therefore you will be able to choose $a$ such that $a < g(x_0)\leq h(x_0)$ and all the integrals are defined according to this particularly narrow definition of the definite integral. It is easy to do this: since you know $f$ must be continuous in some neighborhood of $g(x_0),$ choose $\epsilon>0$ such that $(g(x_0) - \epsilon, g(x_0) + \epsilon)$ is a neighborhood on which $f$ is continuous, and set $a = g(x_0) - \frac\epsilon2$ (or any other number in the interval $(g(x_0) - \epsilon, g(x_0))$).
If you want to be able to handle the case where $g(x_0) > h(x_0),$ however, I think it is hard to avoid using the typical definition of the definite integral, according to which $\int_b^a f(t)\d t = -\int_a^b f(t)\d t$. That is, I think it would be hard to define the integral in such a way that this equation was not true and yet the theorem was true. But if you have accepted the usual definition, you can choose $a$ anywhere in the interval $I$ and continue to use all the integrals shown in the proof above. So I think a concern over the choice of $a$ within the interval $I$ is uncalled for if you also expect the theorem to be true in the case where $g(x_0) > h(x_0).$ That's why I made no effort to make $a < \min\{g(x_0), h(x_0)\},$ although it is easy to enforce that condition on $a$ if you want.