I have a cubic Bézier curve (4 points: $P_0$, $P_1$, $P_2$, $P_3$): $$B(t)=(1-t)^3P_0+3t(1-t)^2P_1+3t^2(1-t)P_2+t^3P_3, \quad t \in [0,1]$$
I have a circle ("half-bottom-circle") defined as: $(a, b)$ its center, $R$ its radius, and parametric form : $$\left(u+a\ ,\ -\sqrt{R^{2}-u^{2}}+b\right), u \in [-R,R]$$
I would like to define $y_2$ (the y-coordinate of $P_2$) so that when I move $y_1$, the circle and the Bézier curve are ALWAYS tangent. For them to be tangent, two conditions are necessary:
1- The slopes of the tangents (derivatives) at the point of tangency are equal : $$\frac{3\left(1-t\right)^{2}\left(y_{1}-y_{0}\right)+6\left(1-t\right)t\left(y_{2}-y_{1}\right)+3t^{2}\left(y_{3}-y_{2}\right)}{3\left(1-t\right)^{2}\left(x_{1}-x_{0}\right)+6\left(1-t\right)t\left(x_{2}-x_{1}\right)+3t^{2}\left(x_{3}-x_{2}\right)}=\frac{u}{\sqrt{R^{2}-u^{2}}}$$
2- The "y" coordinates at the point of tangency are equal : $$\left(1-t\right)^{3}y_{0}+3\left(1-t\right)^{2}ty_{1}+3\left(1-t\right)t^{2}y_{2}+t^{3}y_{3}=-\sqrt{R^{2}-u^{2}}+b$$
Now I have two unknowns, $t$ and $u$, and I am looking for $y_2$. Algebraically I think it's going to be complicated. I am willing to use Newton-Raphson, but I don't know how to move forward. A little help would be welcome. Thanks. See my graph.
Given the cubic Bézier curve with parametric equations:
$$ \begin{aligned} & b_x(u) := x_0\,(1 - u)^3 + 3\,x_1\,u\,(1 - u)^2 + 3\,x_2\,u^2\,(1 - u) + x_3\,u^3 \\ & b_y(u) := y_0\,(1 - u)^3 + 3\,y_1\,u\,(1 - u)^2 + 3\,y_2\,u^2\,(1 - u) + y_3\,u^3 \\ \end{aligned} \quad \quad \text{with} \; u \in [0,\,1] $$
and the circle of parametric equations:
$$ \begin{aligned} & c_x(v) := x_C + r\,\cos v \\ & c_y(v) := y_C + r\,\sin v \\ \end{aligned} \quad \quad \text{with} \; v \in [0,\,2\pi) $$
for the two to be tangent, it must hold that:
$$ \begin{cases} b_x(u) = c_x(v) \\ b_y(u) = c_y(v) \\ b_x'(u) = \lambda\,c_x'(v) \\ b_y'(u) = \lambda\,c_y'(v) \\ \end{cases} $$
with $0 < u < 1$, $0 < v < 2\pi$, $y_2 \in \mathbb{R}$, $\lambda \in \mathbb{R}$ (the factor $\lambda$ is needed because the two parametrizations generally traverse the curves at different speeds: only the tangent directions must match, not the tangent vectors themselves).
In light of this, define the vector function $\mathbf{f} : \mathbb{R}^4 \to \mathbb{R}^4$ given by:
$$ \mathbf{f}(\mathbf{z}) := \left(b_x(u) - c_x(v), \; b_y(u) - c_y(v), \; b_x'(u) - \lambda\,c_x'(v), \; b_y'(u) - \lambda\,c_y'(v)\right) $$
with $\mathbf{z} \equiv \left(u,\,v,\,y_2,\,\lambda\right)$, the zeros of this function:
$$ \mathbf{f}(\mathbf{z}) = \mathbf{0} $$
can be calculated using the Newton-Raphson method, according to which:
$$ \mathbf{z}_{i + 1} = \mathbf{z}_i - \mathbf{J}_{\mathbf{f}}^{-1}(\mathbf{z}_i) \cdot \mathbf{f}(\mathbf{z}_i) $$
where $\mathbf{z}_0$ is an initial vector sufficiently close to the desired zero; the method stops when:
$$ \left\lVert \mathbf{J}_{\mathbf{f}}^{-1}(\mathbf{z}_i) \cdot \mathbf{f}(\mathbf{z}_i) \right\rVert \le \epsilon\,, $$
with $\epsilon > 0$ pre-established according to your needs.
Implementing this method (with some tricks) in Wolfram Mathematica 12.2, with only $6$ iterations we obtain $y_2 = 7.66843$, which is what is desired.
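For illustration only, here is a minimal sketch of the same computation in Python via `scipy.optimize.fsolve` (a MINPACK-based Newton-type root finder), rather than the Mathematica code; every number below (control points, circle, initial guess) is a placeholder to be replaced with your own data:

```python
import numpy as np
from scipy.optimize import fsolve

# --- placeholder data: replace with your own control points and circle ---
x0, y0, x1, y1 = 0.0, 0.0, 2.0, 6.0   # P0 and P1 (y1 is the value you move)
x2, x3, y3     = 8.0, 10.0, 0.0       # x2, then P3 (y2 is the unknown)
xC, yC, r      = 5.0, 9.0, 3.0        # circle centre and radius

def bez(t, p0, p1, p2, p3):
    """One coordinate of the cubic Bezier curve."""
    return (1 - t)**3*p0 + 3*t*(1 - t)**2*p1 + 3*t**2*(1 - t)*p2 + t**3*p3

def dbez(t, p0, p1, p2, p3):
    """Its derivative with respect to t."""
    return 3*(1 - t)**2*(p1 - p0) + 6*(1 - t)*t*(p2 - p1) + 3*t**2*(p3 - p2)

def F(z):
    """The four tangency equations f(z) = 0, with z = (u, v, y2, lambda)."""
    u, v, y2, lam = z
    return [bez(u, x0, x1, x2, x3) - (xC + r*np.cos(v)),
            bez(u, y0, y1, y2, y3) - (yC + r*np.sin(v)),
            dbez(u, x0, x1, x2, x3) - lam*(-r*np.sin(v)),
            dbez(u, y0, y1, y2, y3) - lam*( r*np.cos(v))]

# initial guess: middle of the curve, bottom of the circle
u, v, y2, lam = fsolve(F, [0.5, 1.5*np.pi, 8.0, 1.0])
print(u, v, y2, lam)
```

With a reasonable starting point this typically converges in a handful of iterations, consistently with the $6$ iterations reported above.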
Finally, to verify what has been obtained, it's enough to plot the Bézier curve and the circle with the computed value of $y_2$ and check that they touch at a single point.
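Continuing the Python sketch above (same placeholder data and `bez` helper), a quick matplotlib check might be:

```python
import numpy as np
import matplotlib.pyplot as plt

t  = np.linspace(0.0, 1.0, 200)
vv = np.linspace(0.0, 2.0*np.pi, 200)

# Bezier with the computed y2, drawn against the full circle
plt.plot(bez(t, x0, x1, x2, x3), bez(t, y0, y1, y2, y3), label="Bezier")
plt.plot(xC + r*np.cos(vv), yC + r*np.sin(vv), label="circle")
plt.gca().set_aspect("equal")   # equal axes, otherwise tangency looks distorted
plt.legend()
plt.show()
```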
The resulting picture actually shows what you want. I hope it's clear enough, good luck! ^_^
Addendum 1
A fairly faithful description of the algorithm implemented in Wolfram Mathematica 12.2 is the following (a transcription in code is sketched after the steps):
define the parameters $x_0,\,y_0,\,x_1,\,y_1,\,x_2,\,x_3,\,y_3,\,x_C,\,y_C,\,r,\,\epsilon,\,\text{countmax}$;
initialize $\text{count} = 0$, $u = \dots$, $v = \dots$, $y_2 = \dots$, $\lambda = \dots$, then define $\mathbf{z} = \left(u,\,v,\,y_2,\,\lambda\right)^t$;
calculate the components: $$ \small \begin{aligned} & f_1 = x_0 + 3\left(x_1 - x_0\right)u + 3\left(x_2 - 2\,x_1 + x_0\right)u^2 + \left(x_3 - 3\,x_2 + 3\,x_1 - x_0\right)u^3 - x_C - r\,\cos v \\ & f_2 = y_0 + 3\left(y_1 - y_0\right)u + 3\left(y_2 - 2\,y_1 + y_0\right)u^2 + \left(y_3 - 3\,y_2 + 3\,y_1 - y_0\right)u^3 - y_C - r\,\sin v \\ & f_3 = 3\left(x_1 - x_0\right) + 6\left(x_2 - 2\,x_1 + x_0\right)u + 3\left(x_3 - 3\,x_2 + 3\,x_1 - x_0\right)u^2 + \lambda\,r\,\sin v \\ & f_4 = 3\left(y_1 - y_0\right) + 6\left(y_2 - 2\,y_1 + y_0\right)u + 3\left(y_3 - 3\,y_2 + 3\,y_1 - y_0\right)u^2 - \lambda\,r\,\cos v \\ \end{aligned} $$ then define $\mathbf{f} = \left(f_1,\,f_2,\,f_3,\,f_4\right)^t$;
calculate the components: $$ \begin{aligned} & J_{11} = 3\left(x_1 - x_0\right) + 6\left(x_2 - 2\,x_1 + x_0\right)u + 3\left(x_3 - 3\,x_2 + 3\,x_1 - x_0\right)u^2 \\ & J_{12} = r\,\sin v \\ & J_{13} = 0 \\ & J_{14} = 0 \\ \\ & J_{21} = 3\left(y_1 - y_0\right) + 6\left(y_2 - 2\,y_1 + y_0\right)u + 3\left(y_3 - 3\,y_2 + 3\,y_1 - y_0\right)u^2 \\ & J_{22} = -r\,\cos v \\ & J_{23} = 3\,u^2\,(1 - u) \\ & J_{24} = 0 \\ \\ & J_{31} = 6\left(x_2 - 2\,x_1 + x_0\right) + 6\left(x_3 - 3\,x_2 + 3\,x_1 - x_0\right)u \\ & J_{32} = \lambda\,r\,\cos v \\ & J_{33} = 0 \\ & J_{34} = r\,\sin v \\ \\ & J_{41} = 6\left(y_2 - 2\,y_1 + y_0\right) + 6\left(y_3 - 3\,y_2 + 3\,y_1 - y_0\right)u \\ & J_{42} = \lambda\,r\,\sin v \\ & J_{43} = 3\,u\,(2 - 3\,u) \\ & J_{44} = -r\,\cos v \\ \end{aligned} $$ then define $\mathbf{Jf} = \begin{pmatrix} J_{11} & J_{12} & J_{13} & J_{14} \\ J_{21} & J_{22} & J_{23} & J_{24} \\ J_{31} & J_{32} & J_{33} & J_{34} \\ J_{41} & J_{42} & J_{43} & J_{44} \\ \end{pmatrix}$;
calculate: $$ \begin{aligned} & \mathbf{invJf} = \text{inverse}(\mathbf{Jf})\,; \\ & \mathbf{resf} = \mathbf{invJf}.\mathbf{f} \; \text{[product row by column]}\,; \\ & \text{errf} = \sqrt{\mathbf{resf}_1^2 + \mathbf{resf}_2^2 + \mathbf{resf}_3^2 + \mathbf{resf}_4^2}\,; \end{aligned} $$
while $\text{errf} > \epsilon$ and $\text{count} < \text{countmax}$: $$ \begin{aligned} & \mathbf{z} = \mathbf{z} - \mathbf{resf}\,; \\ & \left(u,\,v,\,y_2,\,\lambda\right)^t = \mathbf{z}\,; \\ & \text{if} \; u < 0 \; \; \text{or} \; \; u > 1 \; \; \; \text{then} \; u = \frac{1}{2}\,; \\ & \text{if} \; v < \pi \; \; \text{or} \; \; v > 2\pi \; \; \text{then} \; v = \frac{3\pi}{2}\,; \\ & \text{recompute} \; \mathbf{f},\,\mathbf{Jf},\,\mathbf{invJf},\,\mathbf{resf},\,\text{errf} \; \text{as above}\,; \\ & \text{count} = \text{count} + 1\,. \\ \end{aligned} $$
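As a sketch, these steps transcribe into Python almost verbatim (again with placeholder data; `F` and `Jf` below are exactly the components $f_i$ and $J_{ij}$ just listed, and the two `if` checks are the "tricks" inside the loop):

```python
import numpy as np

# --- placeholder data: replace with your own ---
x0, y0, x1, y1 = 0.0, 0.0, 2.0, 6.0
x2, x3, y3     = 8.0, 10.0, 0.0
xC, yC, r      = 5.0, 9.0, 3.0
eps, countmax  = 1e-12, 100

def F(z):
    """Components f1..f4 as listed above."""
    u, v, y2, lam = z
    return np.array([
        x0 + 3*(x1-x0)*u + 3*(x2-2*x1+x0)*u**2 + (x3-3*x2+3*x1-x0)*u**3 - xC - r*np.cos(v),
        y0 + 3*(y1-y0)*u + 3*(y2-2*y1+y0)*u**2 + (y3-3*y2+3*y1-y0)*u**3 - yC - r*np.sin(v),
        3*(x1-x0) + 6*(x2-2*x1+x0)*u + 3*(x3-3*x2+3*x1-x0)*u**2 + lam*r*np.sin(v),
        3*(y1-y0) + 6*(y2-2*y1+y0)*u + 3*(y3-3*y2+3*y1-y0)*u**2 - lam*r*np.cos(v)])

def Jf(z):
    """Jacobian entries J11..J44 as listed above."""
    u, v, y2, lam = z
    return np.array([
        [3*(x1-x0) + 6*(x2-2*x1+x0)*u + 3*(x3-3*x2+3*x1-x0)*u**2,  r*np.sin(v),     0.0,          0.0],
        [3*(y1-y0) + 6*(y2-2*y1+y0)*u + 3*(y3-3*y2+3*y1-y0)*u**2, -r*np.cos(v),     3*u**2*(1-u), 0.0],
        [6*(x2-2*x1+x0) + 6*(x3-3*x2+3*x1-x0)*u,                   lam*r*np.cos(v), 0.0,          r*np.sin(v)],
        [6*(y2-2*y1+y0) + 6*(y3-3*y2+3*y1-y0)*u,                   lam*r*np.sin(v), 3*u*(2-3*u), -r*np.cos(v)]])

z = np.array([0.5, 1.5*np.pi, 8.0, 1.0])     # initial guess (u, v, y2, lambda)
for count in range(countmax):
    resf = np.linalg.solve(Jf(z), F(z))      # resf = invJf . f, without forming the inverse
    if np.linalg.norm(resf) <= eps:          # errf <= eps: converged
        break
    z = z - resf
    # the "tricks": pull u and v back into their intervals if Newton overshoots
    if not (0.0 < z[0] < 1.0):
        z[0] = 0.5
    if not (np.pi < z[1] < 2.0*np.pi):
        z[1] = 1.5*np.pi

u, v, y2, lam = z
print(count, "iterations:", z)
```

Note that solving the linear system at each step is numerically preferable to explicitly inverting $\mathbf{Jf}$, while producing the same residual vector.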
In particular, to make the calculations faster, it's possible to work out the residual vector $\mathbf{invJf}.\mathbf{f}$ symbolically once and for all, reducing everything to a series of elementary operations that are easy to iterate.
Although the definition of these parameters is very tedious, once this work is done there is the advantage of having full control over the algorithm, as there are no more hidden steps.
In particular, I would like to underline that the high efficiency of the Newton-Raphson method is paid for in low robustness, since its convergence is highly sensitive to the choice of the initial guess vector.
Precisely for this reason, depending on the problem under examination, it's necessary to strengthen the method with some more or less clever safeguards. In this case, I intervened inside the while loop with a couple of conditional checks that push $u$ and $v$ back into their respective intervals, but this can certainly be done more efficiently, and you will almost certainly have to adapt these safeguards in testing to suit your needs.
I hope I have explained myself sufficiently, good luck with your work! ^_^
Addendum 2
If the specific problem doesn't allow one to identify a robust strategy for choosing the initial vector that triggers the Newton-Raphson method, it's necessary to change direction.
In particular, it's possible to abandon the world of locally convergent methods and migrate to that of globally convergent methods where, when it comes to polynomials, the Aberth-Ehrlich method is undoubtedly an excellent compromise between algorithmic simplicity on the one hand and efficiency/robustness on the other.
On an algorithmic level, I would proceed like this:
through simple substitutions, the above system of equations can be simplified as follows: $$ \begin{cases} a + b\,u + c\,u^2 + d\,u^3 = \cos v \\ e + f\,u + \left(g + 3\,y_2/r\right)u^2 + \left(h - 3\,y_2/r\right)u^3 = \sin v \\ b + 2\,c\,u + 3\,d\,u^2 = -\lambda\,\sin v \\ f + 2\left(g + 3\,y_2/r\right)u + 3\left(h - 3\,y_2/r\right)u^2 = \lambda\,\cos v \\ \end{cases} $$ with $0 < u < 1$, $0 < v < 2\pi$, $y_2 \in \mathbb{R}$, $\lambda \in \mathbb{R}$;
dividing the second equation by the first gives an expression for $\sin v/\cos v$; dividing the third by the fourth gives an expression for $-\sin v/\cos v$. Equating the two, with the appropriate sign change, yields a quadratic equation in $y_2$, an unknown which can therefore be expressed as a function of $u$;
summing the squares of the first two equations, substituting the expression for $y_2$ and squaring once more to clear the remaining radical, it's possible to obtain a polynomial equation $p(u) = 0$, of degree $12$ in the most general case, which can be made monic by dividing all the coefficients by the leading one (a symbolic sketch of this elimination follows the list);
the time has therefore come to bring out the heavy artillery by applying the numerical method named above, which is conceived precisely for monic polynomial equations, in general with complex coefficients, and which calculates all twelve complex roots simultaneously by updating them with the following formula: $$ u_j' = u_j - \frac{1}{\dfrac{p'\left(u_j\right)}{p\left(u_j\right)} - \sum\limits_{k = 1, \, k \ne j}^{12} \dfrac{1}{u_j - u_k}} \;; $$
all that remains is to examine these roots one by one, selecting only those with a real part between $0$ and $1$ and an imaginary part close to zero; then, through simple checks, it's possible to determine the set of values $\left(u,\,v,\,y_2,\,\lambda\right)$ that satisfy all four equations of the system (due to the way the problem has been formulated, there may be more solutions than desired, unless further constraints are introduced on the input parameters).
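As a sketch of the elimination described in the steps above, using sympy with the reduced coefficients $a,\dots,h$ kept symbolic (in practice they come from your input data):

```python
import sympy as sp

u, y2, r = sp.symbols('u y2 r')
a, b, c, d, e, f, g, h = sp.symbols('a b c d e f g h')

C  = a + b*u + c*u**2 + d*u**3                          # = cos v
S  = e + f*u + (g + 3*y2/r)*u**2 + (h - 3*y2/r)*u**3    # = sin v
Dx = b + 2*c*u + 3*d*u**2                               # = -lambda*sin v
Dy = f + 2*(g + 3*y2/r)*u + 3*(h - 3*y2/r)*u**2         # =  lambda*cos v

# S/C = -(Dx/Dy) eliminates v and lambda at once:
# S*Dy + C*Dx = 0, which is quadratic in y2
quad = sp.expand(S*Dy + C*Dx)
y2_u = sp.solve(sp.Eq(quad, 0), y2)       # the two branches y2(u)
print(y2_u[0])

# cos^2 + sin^2 = 1 with y2 eliminated; this still contains one radical,
# and squaring it away yields the degree-12 polynomial p(u) = 0
residual = C**2 + S.subs(y2, y2_u[0])**2 - 1
```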
The advantage of this other method is that it doesn't require a particularly well-reasoned initial vector: it's sufficient to define it once and for all so that its (complex) components are asymmetric and uniformly distributed, generally taken on a circle in the complex plane.
On the other hand, it's necessary to work with complex-number arithmetic (which is the lesser evil, since it's easily turned into an algorithm), and to establish a tolerance with which to discriminate the real roots from the complex ones by comparing it against the imaginary parts (which could cause problems, since it isn't so obvious how to choose it).
Anyway, I tried to write this other algorithm in Visual Basic for Applications (VBA) in Excel and I was very satisfied: it seems very robust, although obviously it can always be improved.
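As a rough illustration (in Python rather than VBA), the core Aberth-Ehrlich update from the formula above might look like this; the coefficient vector fed to `aberth` is assumed to hold the monic coefficients of $p(u)$, highest degree first, built as described in the steps:

```python
import numpy as np

def aberth(coeffs, tol=1e-14, itmax=500):
    """All complex roots of a monic polynomial, updated simultaneously.

    coeffs: monic coefficients, highest degree first (13 entries for p(u)).
    """
    coeffs  = np.asarray(coeffs, dtype=complex)
    dcoeffs = np.polyder(coeffs)
    n = len(coeffs) - 1
    # initial guesses: asymmetric, uniformly distributed on a circle
    z = 0.5 * np.exp(2j * np.pi * (np.arange(n) + 0.25) / n)
    for _ in range(itmax):
        ratio = np.polyval(coeffs, z) / np.polyval(dcoeffs, z)   # p/p'
        w = np.empty(n, dtype=complex)
        for j in range(n):
            s = sum(1.0 / (z[j] - z[k]) for k in range(n) if k != j)
            w[j] = ratio[j] / (1.0 - ratio[j] * s)   # the update formula above
        z -= w
        if np.max(np.abs(w)) < tol:
            break
    return z

# demo on a known monic cubic with roots 1, 2, 3
print(np.sort_complex(aberth(np.poly([1.0, 2.0, 3.0]))))

# for the tangency problem: feed in the 13 monic coefficients of p(u), then
# keep only roots with 0 < Re < 1 and |Im| below the tolerance noted above
```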
I hope I haven't written nonsense; if I have, let me know in the comments. Good luck. ^_^