Geometric Pattern in Triangle Construction with Squares


I've been exploring some constructions in geometry recently and have been looking at the following algorithm:

  1. Start with an arbitrary triangle (red)
  2. Construct a square on each of the three sides of the triangle, oriented outward so that the squares do not intersect the triangle
  3. Connect the centers of each square to form a new triangle (green)
  4. Repeat ad infinitum

It appears that this process converges in shape to an equilateral triangle. I'm curious as to why that is. I started brute-forcing the calculation algebraically with arbitrary coordinates for the vertices of the original triangle, but it gets messy quickly. I'm wondering if there's a more elegant explanation.
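For readers who want to experiment first, here is a minimal numerical sketch of the construction in Python (my own illustration, not part of the question: the starting triangle and the 20-step count are arbitrary choices, and the square-center formula assumes the vertices are listed counterclockwise so the squares point outward):

```python
import cmath
import math

def step(A, B, C):
    """One iteration: replace the triangle by the centers of the squares
    erected outward on its sides (vertices as complex numbers, CCW)."""
    # Center of the square on segment PQ: midpoint of PQ, offset by half
    # the side rotated 90 degrees away from the triangle's interior.
    ctr = lambda P, Q: (P + Q) / 2 - 1j * (Q - P) / 2
    return ctr(B, C), ctr(C, A), ctr(A, B)

def angles(A, B, C):
    """Interior angles of triangle ABC, in degrees."""
    ang = lambda P, Q, R: abs(math.degrees(cmath.phase((Q - P) / (R - P))))
    return ang(A, B, C), ang(B, C, A), ang(C, A, B)

T = (0 + 0j, 4 + 0j, 1 + 1j)   # a deliberately lopsided triangle, CCW
for _ in range(20):
    T = step(*T)
print([round(a, 6) for a in angles(*T)])   # all three angles are ~60
```

The triangles grow at each step, but their angles rapidly approach 60 degrees.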

[image of the construction]


There are 4 answers below.

BEST ANSWER (score 8)

Let the side lengths of the original triangle be $BC=a,AC=b,AB=c$, and let its area be $K$. Let the center of the square from side $BC$ be $S_{a}$, etc. and let $S_{b}S_{c}=x, S_{a}S_{c}=y,S_{a}S_{b}=z$.

[figure: the triangle with the three squares and their centers $S_{a}$, $S_{b}$, $S_{c}$]

Note that $\angle S_{b}AS_{c} = \frac{\pi}{4}+\angle A + \frac{\pi}{4} = \angle A + \frac{\pi}{2}$. Thus, by the Law of Cosines:

$$\begin{align}x^{2} &= AS_{b}^{2}+AS_{c}^{2} - 2AS_{b}AS_{c}\cos(\angle S_{b}AS_{c}) \\ &= \frac{b^{2}}{2} + \frac{c^{2}}{2} - bc\cos\left(\angle A + \frac{\pi}{2}\right)\\ & = \frac{b^{2}}{2} + \frac{c^{2}}{2} + bc\sin(\angle A)\\ &= \frac{b^{2}}{2} + \frac{c^{2}}{2} + 2K\end{align}$$

We find $y^{2},z^{2}$ similarly. You might already see that the differences between the side lengths are being "smoothed out".

To formalize this, let's measure the "equilateralness" of a triangle as $\text{eq}(\triangle ABC) = (a^{2}-b^{2})^{2} + (b^{2}-c^{2})^{2} + (c^{2}-a^{2})^{2}$. You can check that this is zero iff $\triangle ABC$ is equilateral, and positive otherwise. We have:

$$\begin{align}\text{eq}(\triangle S_{a}S_{b}S_{c}) &= (x^{2}-y^{2})^{2} + (x^{2}-z^{2})^{2} + (y^{2}-z^{2})^{2}\\ &= \left(\frac{a^{2}-b^{2}}{2}\right)^{2} + \left(\frac{a^{2}-c^{2}}{2}\right)^{2} + \left(\frac{b^{2}-c^{2}}{2}\right)^{2}\\ &= \frac{1}{4}\text{eq}(\triangle ABC)\end{align}$$

Thus, the equilateralness converges geometrically to 0, as desired. Note that this could also signify the triangle shrinking to a point, but we can rule this out because the perimeter is increasing.
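The exact factor of $\frac14$ is easy to check numerically. A minimal Python sketch (the coordinates are an arbitrary choice of mine; the square centers are computed with complex numbers, assuming counterclockwise vertex order):

```python
def sq_centers(A, B, C):
    # Centers of the squares erected outward on the sides of a
    # counterclockwise triangle (vertices as complex numbers).
    ctr = lambda P, Q: (P + Q) / 2 - 1j * (Q - P) / 2
    return ctr(B, C), ctr(C, A), ctr(A, B)

def eq(A, B, C):
    # eq(ABC) computed from the squared side lengths a^2, b^2, c^2
    a2, b2, c2 = abs(B - C) ** 2, abs(C - A) ** 2, abs(A - B) ** 2
    return (a2 - b2) ** 2 + (b2 - c2) ** 2 + (c2 - a2) ** 2

T = (0 + 0j, 5 + 0j, 1 + 2j)   # arbitrary scalene triangle
print(round(eq(*sq_centers(*T)) / eq(*T), 12))   # 0.25
```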

Answer (score 6)

Geometry with complex numbers (plus some linear algebra) provides an interesting proof, which I am going to develop.

I will use the same notation for points of the plane and for their associated complex numbers in a fixed reference frame.

Notation:

$$\omega:= e^{2 i \pi/3} \ \ \text{and} \ \ \alpha = \frac12(1+i)\tag{0}$$

(please note that $1,\omega,\omega^2$ are the three cube roots of unity, and as such form an equilateral triangle)

Let $A_1,B_1,C_1$ be the vertices of the initial triangle.

Let $A_2,B_2,C_2$ be the centers of the squares built on sides $B_1C_1, C_1A_1, A_1B_1$ resp.

As triangle $B_1A_2C_1$ is a right isosceles triangle (with the right angle at $A_2$), one can write:

$$B_1-A_2=i(C_1-A_2) \ \iff \ (1-i)A_2=B_1-iC_1 \ \iff \ A_2=\alpha(B_1-iC_1)$$

or $$A_2=\alpha B_1+\bar{\alpha}C_1\tag{1}$$

where $\alpha=\frac{1}{1-i}=\frac12(1+i)$ has been introduced in (0).

For similar reasons, by circular permutation, we have

$$\begin{cases}B_2=\alpha C_1+\bar{\alpha}A_1\\C_2=\alpha A_1+\bar{\alpha}B_1\end{cases}\tag{2}$$
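Formula (1) can be sanity-checked numerically; here is a short Python sketch with arbitrarily chosen endpoints $B_1$, $C_1$:

```python
alpha = (1 + 1j) / 2
B1, C1 = 2 + 1j, -1 + 3j                    # arbitrary side endpoints
A2 = alpha * B1 + alpha.conjugate() * C1    # formula (1)

# A2 is the apex of a right isosceles triangle over B1 C1,
# i.e. it satisfies B1 - A2 = i (C1 - A2):
print(abs((B1 - A2) - 1j * (C1 - A2)) < 1e-12)   # True
print(abs(abs(A2 - B1) - abs(A2 - C1)) < 1e-12)  # True (isosceles)
```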

We can now gather (1) and (2) in a matrix-vector form, stated at once for a general step $n$ so as to describe the whole process:

$$\underbrace{\pmatrix{A_{n+1}\\B_{n+1}\\C_{n+1}}}_{V_{n+1}}=\underbrace{\pmatrix{0&\alpha&\bar{\alpha}\\ \bar{\alpha}&0&\alpha\\ \alpha&\bar{\alpha}&0}}_M\underbrace{\pmatrix{A_{n}\\B_{n}\\C_{n}}}_{V_{n}}\tag{3}$$ Matrix $M$ has two properties :

  • (a) $M$ is hermitian ($M^H=M$ where symbol $H$ means transposition + conjugation);

  • (b) $M$ is circulant.

As a consequence of (a), $M$ has real eigenvalues :

$$\begin{cases} \lambda_1&=& \ \ \ 1\\ \lambda_2 &=& - \frac12 (1+\sqrt{3}) &\approx &-1.366 \\ \lambda_3&=& \ \ \ \frac12 (-1+\sqrt{3}) &\approx & \ \ \ 0.366\end{cases}\tag{4}$$

Please note that $\lambda_2$, which we will write $\lambda_{max}$, is the dominant eigenvalue (the largest in modulus).
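These eigenvalues can be verified directly: for a circulant matrix, the Fourier vectors $(1,\omega^k,\omega^{2k})$ are eigenvectors, with eigenvalue read off the first row. A minimal Python check (standard library only):

```python
import cmath

alpha = (1 + 1j) / 2
omega = cmath.exp(2j * cmath.pi / 3)

# The matrix M of equation (3)
M = [[0, alpha, alpha.conjugate()],
     [alpha.conjugate(), 0, alpha],
     [alpha, alpha.conjugate(), 0]]

for k in range(3):
    v = [1, omega ** k, omega ** (2 * k)]   # k-th Fourier vector
    lam = alpha * omega ** k + alpha.conjugate() * omega ** (2 * k)
    Mv = [sum(M[r][c] * v[c] for c in range(3)) for r in range(3)]
    # v is an eigenvector of M with (real) eigenvalue lam
    assert all(abs(Mv[r] - lam * v[r]) < 1e-12 for r in range(3))
    print(round(lam.real, 3))   # 1.0, then -1.366, then 0.366
```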

As a consequence of (b), $M$ is diagonalized by the Discrete Fourier matrix $F$ (see here) in this way:

$$M=\underbrace{\frac{1}{\sqrt{3}}\pmatrix{1&1&1\\1&\omega&\omega^2\\1&\omega^2&\omega}}_{F}\underbrace{\pmatrix{\lambda_1&&\\ &\lambda_2&\\ &&\lambda_3}}_{D}\underbrace{\frac{1}{\sqrt{3}}\pmatrix{1&1&1\\1&\omega^2&\omega\\1&\omega& \omega^2}}_{F^H}\tag{5}$$

Remark: the columns of $F$ are, of course, unit-norm eigenvectors associated with the $\lambda_k$s in the same order.

(5) gives, for large values of $n$, the (rather well known) approximation:

$$M^n=F D^n F^H \approx F \pmatrix{1&&\\ &(\lambda_{max})^n&\\ &&0} F^H \approx (\lambda_{max})^n UU^H, \tag{6}$$

where $U$ is the second column of matrix $F$ ($U$ is a unit-norm eigenvector associated with $\lambda_{max}$), namely :

$$U=\frac{1}{\sqrt{3}}\pmatrix{1 \\ \omega \\ \omega^2}\tag{7},$$

$U^H$ being the second row of matrix $F^H$.

Therefore :

$$V_{n+1}=M^n V_1 \approx (\lambda_{max})^n \frac13 \pmatrix{1 \\ \omega \\ \omega^2}\pmatrix{1&\omega^2&\omega}\pmatrix{A_1 \\ B_1 \\ C_1}$$

$$V_{n+1}\approx (\lambda_{max})^n \frac13 (A_1+\omega^2 B_1+\omega C_1) \pmatrix{1 \\ \omega \\ \omega^2}$$

In other words, $V_{n+1}:=\pmatrix{A_{n+1}\\B_{n+1}\\C_{n+1}}$ is arbitrarily close to a (complex) multiple of $\pmatrix{1 \\ \omega \\ \omega^2}$, whose entries form an equilateral triangle.
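This conclusion can be checked numerically: iterate $M$, rescale by $\lambda_{max}$ at each step, and compare with the predicted limit $\frac13 (A_1+\omega^2 B_1+\omega C_1)\,(1, \omega, \omega^2)^T$. A Python sketch (the starting triangle and the 60-step count are arbitrary choices):

```python
import cmath

alpha = (1 + 1j) / 2
omega = cmath.exp(2j * cmath.pi / 3)
M = [[0, alpha, alpha.conjugate()],
     [alpha.conjugate(), 0, alpha],
     [alpha, alpha.conjugate(), 0]]
lam = -(1 + 3 ** 0.5) / 2                  # lambda_max

V = [0 + 0j, 5 + 0j, 1 + 2j]               # arbitrary A1, B1, C1
coef = (V[0] + omega ** 2 * V[1] + omega * V[2]) / 3
limit = [coef, coef * omega, coef * omega ** 2]   # predicted limit of V_{n+1}/lam^n

for _ in range(60):
    V = [sum(M[r][c] * V[c] for c in range(3)) for r in range(3)]
    V = [z / lam for z in V]               # rescale by lambda_max each step
print(max(abs(V[r] - limit[r]) for r in range(3)) < 1e-6)   # True
```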

[figure: the sequence of iterated triangles]

MATLAB program :

 a=(1+i)/2;b=conj(a);
 M=[0,a,b;
    b,0,a;
    a,b,0];
 A=1;B=exp(i*pi*rand);C=B*exp(i*pi*rand);
 V=[A;B;C];
 for k=1:7
    plot([V;V(1)]);hold on;axis equal;
    K=num2str(k);
    text(real(V),imag(V),{['A_',K],['B_',K],['C_',K]})
    V=M*V;
 end;
Answer (score 0)

Here's a slightly different approach starting from Jean-Marie's excellent proof, but avoiding the "it is well known that..." step. It's about 80% copied from that answer, so Jean-Marie deserves all the credit. I'm marking it 'community wiki' because it really is a community effort.

I'll use the notation of that answer up through step 3, where it's shown that

$$\underbrace{\pmatrix{A_{n+1}\\B_{n+1}\\C_{n+1}}}_{V_{n+1}}=\underbrace{\pmatrix{0&\alpha&\bar{\alpha}\\ \bar{\alpha}&0&\alpha\\ \alpha&\bar{\alpha}&0}}_M\underbrace{\pmatrix{A_{n}\\B_{n}\\C_{n}}}_{V_{n}}\tag{3}$$

From this one can immediately conclude that $$V_{n+1}=M^n V_1. \tag{4}$$

As Jean-Marie observes, $M$ is Hermitian. In particular, it has real eigenvalues:

$\lambda_{max} = - \frac12 (1+\sqrt{3}) \approx -1.366$ (the dominant one); the other eigenvalues are $1$ and $\frac12(\sqrt{3}-1) \approx 0.366$.

At this point I diverge from Jean-Marie. Calling the eigenvalues $\lambda_1, \lambda_2, \lambda_3$ in decreasing order of size, with corresponding unit eigenvectors $u_1, u_2, u_3$, we have (because $M$ is Hermitian) that $u_i \cdot u_j = 0$ for $i \ne j$. If we make a matrix $U$ with columns $u_1, u_2,$ and $u_3$, then this says that $$ U^{*} U = I, $$ the $3 \times 3$ identity, i.e., $U^{-1} = U^{*}$.

Because the $u_i$ are eigenvectors, we have $$ MU = U\Lambda $$ where $\Lambda$ is diagonal with nonzero entries $\lambda_1, \lambda_2, \lambda_3$. That means that $$ M = U \Lambda U^{-1}. $$ Hence \begin{align} M^2 &= (U \Lambda U^{-1})(U \Lambda U^{-1})\\ &= U \Lambda (U^{-1}U) \Lambda U^{-1}\\ &= U \Lambda^2 U^{-1}\\ &= U \Lambda^2 U^{*},\\ \end{align} and similarly for higher powers. If you write this out in terms of the individual eigenvectors $u_i$, it says that $$ M^n = u_1\lambda_1^n u_1^{*} + u_2 \lambda_2^n u_2^{*} + u_3 \lambda_3^n u_3^{*}. $$ Dividing through by $\lambda_1^n$, this gives us \begin{align} \frac{1}{\lambda_1^n} M^n &= u_1 u_1^{*} + u_2 \frac{\lambda_2^n}{\lambda_1^n} u_2^{*} + u_3 \frac{\lambda_3^n}{\lambda_1^n}u_3^{*}\\ &= u_1 u_1^{*} + u_2 \left(\frac{\lambda_2}{\lambda_1}\right)^n u_2^{*} + u_3 \left(\frac{\lambda_3}{\lambda_1}\right)^n u_3^{*}.\\ \end{align} Now because $|\lambda_2| < |\lambda_1|$, and similarly for $\lambda_3$, the fractions in the second and third terms tend to zero as $n$ gets large, so we have $$ \lim_{n \to \infty} \frac{1}{\lambda_1^n} M^n = u_1 u_1^{*}, $$

where

$$u_1=\frac{1}{\sqrt{3}}\pmatrix{1 \\ \omega \\ \omega^2}\tag{6}$$

as Jean-Marie notes.

Now \begin{align} \frac{1}{\lambda_1^n}V_{n+1}&=\frac{1}{\lambda_1^n}M^n V_1, \end{align} so \begin{align} \lim_{n \to \infty} \frac{1}{\lambda_1^n}V_{n+1} &=\left(\lim_{n \to \infty} \frac{1}{\lambda_1^n}M^n\right) V_1\\ &=u_1 u_1^{*} V_1. \end{align} Thus if we scale down by a factor of $\lambda_1$ after each iteration of the "erect squares and connect centers" operation, the resulting sequence of triangles has a limiting triangle. At this point, Jean-Marie's approximation steps can be replaced with equalities:

$$V_{n+1} \approx (\lambda_{max})^n \frac13 \pmatrix{1 \\ \omega \\ \omega^2}\pmatrix{1&\bar{\omega}&\bar{\omega}^2}\pmatrix{A_1 \\ B_1 \\ C_1}$$

$$ \lim_{n \to \infty} \frac{1}{\lambda_1^{n}} V_{n+1} = \frac13 (A_1+\omega^2 B_1+\omega C_1) \pmatrix{1 \\ \omega \\ \omega^2}$$

(NB: I've probably lost a complex conjugate somewhere above, and maybe shifted an $n$ for an $n+1$; my apologies for that. I lack the time to go back and double-check every word, of which there are too many.)

I've edited the MATLAB program slightly to show the convergence of the triangles once the division is included.

[figure: the rescaled triangles converging]

MATLAB program :

 a=(1+i)/2;b=conj(a);
 M=[0,a,b;
    b,0,a;
    a,b,0];
 A=1;B=exp(i*pi*rand);C=B*exp(i*pi*rand);
 V=[A;B;C];
 Lmax = -(1./2.) * (1.0+sqrt(3.0));
 for k=1:7
    plot([V;V(1)]);hold on;axis equal;
    K=num2str(k);
    text(real(V),imag(V),{['A_',K],['B_',K],['C_',K]})
    V=(1/Lmax) * M*V;
 end;

Sample output: [figure: the rescaled triangles converging to the limiting triangle]

Answer (score 0)

Below is a Mathematica script which generates a sequence of triangles approaching an equilateral one.

$$ \Delta_k = \Phi\left(\Delta_{k-1}\right) $$

Here $\Phi$ is implemented by the module gTri[tri, f, sign], so one step is tri = gTri[tri, f, sign]. The parameter sign encodes the orientation of the triangle, and f is the offset of the new vertex from the side's midpoint, as a fraction of the side length; f = 0.5 places the new vertex at the center of the square erected on that side.

gTri[tri_, f_, sign_] := Module[{u, v, w, du, dv, dw, nu, nv, nw, c1, c2, c3},
  u = tri[[1]] - tri[[2]];
  v = tri[[2]] - tri[[3]];
  w = tri[[3]] - tri[[1]];
  du = Norm[u];
  dv = Norm[v];
  dw = Norm[w];
  nu = {u[[2]], -u[[1]]}/du;
  nv = {v[[2]], -v[[1]]}/dv;
  nw = {w[[2]], -w[[1]]}/dw;
  c1 = (tri[[1]] + tri[[2]])/2;
  c2 = (tri[[2]] + tri[[3]])/2;
  c3 = (tri[[3]] + tri[[1]])/2;
  Return[{c1 - nu du f sign, c2 - nv dv f sign, c3 - nw dw f sign}]
  ]

SeedRandom[10];
tri0 = RandomReal[{-1, 1}, {3, 2}];
u = tri0[[1]] - tri0[[2]];
v = tri0[[2]] - tri0[[3]];
sign = Sign[u[[1]] v[[2]] - u[[2]] v[[1]]];
grtris = {ListLinePlot[Join[tri0, {tri0[[1]]}], PlotStyle -> Red]};
tri = tri0;
ERR = {Abs[Norm[tri[[1]] - tri[[2]]] - (Norm[tri[[1]] - tri[[2]]] + 
       Norm[tri[[2]] - tri[[3]]] + Norm[tri[[3]] - tri[[1]]])/3]/Norm[tri[[1]] - tri[[2]]]};

error = 1000;
f = 0.285;
f = 0.5;
While[error > 10^-10,
 tri = gTri[tri, f, sign];
 error = Abs[Norm[tri[[1]] - tri[[2]]] - (Norm[tri[[1]] - tri[[2]]] + 
         Norm[tri[[2]] - tri[[3]]] + Norm[tri[[3]] - tri[[1]]])/3]/Norm[tri[[1]] - tri[[2]]];
 AppendTo[grtris, ListLinePlot[Join[tri, {tri[[1]]}]]];
 AppendTo[ERR, error]
 ]


Show[grtris, AspectRatio -> 1, PlotRange -> {{-1, 1}, {-1, 1}}, Axes -> False]
ListLinePlot[ERR, PlotRange -> All]

These plots show a typical case with f = 0.5:

[figure: the sequence of iterated triangles]

[figure: relative side-length error vs. iteration]

Now changing to f = 0.287 we obtain

[figures: the iterated triangles and the error plot for f = 0.287]

which suggests the existence of a value of f that yields an equilateral triangle in a single step.

NOTE

Applying this iteration symbolically to a generic triangle given by

$$ \Delta_0=((x_1,y_1),(x_2,y_2),(x_3,y_3)) $$

we obtain

$$ \Delta_1 = \cases{ ((x_1 + x_2)/2 - f (y_1 - y_2),\ (y_1+y_2)/2-f(-x_1+x_2))\\ ((x_2 + x_3)/2 - f (y_2 - y_3),\ (y_2 + y_3)/2-f (-x_2 + x_3))\\ ((x_1 + x_3)/2 - f (-y_1 + y_3),\ (y_1 + y_3)/2-f (x_1 - x_3)) } $$

and after imposing equal lengths for the sides of $\Delta_1$ we get

$$ f = \frac{1}{2\sqrt{3}} = 0.2886751345948129... $$

and thus we obtain the equilateral property in a single step. Is this a known fact? The following plot uses SeedRandom[12] and f = 0.2886751345948129, with just one iteration: the initial triangle is in red and the iterated result in blue.

[figure: initial triangle (red) and one-step result (blue)]
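The one-shot claim is easy to reproduce. Below is a small Python re-implementation of the gTri step (a sketch; the sample triangle is an arbitrary choice of mine), confirming that with $f = \frac{1}{2\sqrt{3}}$ a single iteration already gives equal side lengths:

```python
import math

def gtri(tri, f, sign):
    # One step of the construction, mirroring the Mathematica gTri:
    # for each side, take its midpoint and move perpendicular to the
    # side by f * (side length) * sign.
    def vert(xa, ya, xb, yb):
        return ((xa + xb) / 2 - f * sign * (ya - yb),
                (ya + yb) / 2 + f * sign * (xa - xb))
    (x1, y1), (x2, y2), (x3, y3) = tri
    return [vert(x1, y1, x2, y2), vert(x2, y2, x3, y3), vert(x3, y3, x1, y1)]

tri = [(0.3, -0.2), (1.1, 0.4), (-0.5, 0.9)]    # arbitrary sample triangle
ux, uy = tri[0][0] - tri[1][0], tri[0][1] - tri[1][1]
vx, vy = tri[1][0] - tri[2][0], tri[1][1] - tri[2][1]
sign = 1 if ux * vy - uy * vx > 0 else -1       # orientation, as in the script

f = 1 / (2 * math.sqrt(3))
t = gtri(tri, f, sign)
d = lambda P, Q: math.hypot(P[0] - Q[0], P[1] - Q[1])
sides = [d(t[0], t[1]), d(t[1], t[2]), d(t[2], t[0])]
print(max(sides) - min(sides) < 1e-12)   # True: equilateral in one step
```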