Nuking the Mosquito — ridiculously complicated ways to achieve very simple results


Here is a toned down example of what I'm looking for:

Integration by solving for the unknown integral of $f(x)=x$:

$$\int x \, dx=x^2-\int x \, dx$$

$$2\int x \, dx=x^2$$

$$\int x \, dx=\frac{x^2}{2}$$

Can anyone think of any more examples?

P.S. This question was inspired by a question on MathOverflow that I found out about here. This question is meant to be more general, accepting things like solving integrals and using complex numbers to evaluate simple problems.

There are 19 answers below.

16

This is the first example that comes to my mind:

Let $f(x)=\sin^2(x)+\cos^2(x)$. Then $f(0)=1$ and

$$f'(x)=2\sin(x)\cos(x)-2\cos(x)\sin(x)=0,$$

so $f$ is a constant function; since $f(0)=1$, it follows that $\sin^2(x)+\cos^2(x)=f(x)=1$.
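A quick numerical sanity check of the derivative argument (a sketch using central differences; the sample points are arbitrary):

```python
from math import sin, cos

# central-difference check that f(x) = sin²x + cos²x has zero derivative
f = lambda t: sin(t)**2 + cos(t)**2
h = 1e-6
for t in [-2.0, 0.0, 0.5, 3.0]:       # arbitrary sample points
    assert abs((f(t + h) - f(t - h)) / (2 * h)) < 1e-8
assert f(0.0) == 1.0                  # the initial condition
```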

Explanation (based mostly on the discussion in the comments below):

With the usual unit-circle definition of $\sin$ and $\cos$, it is an obvious fact that $\sin^2(x)+\cos^2(x)=1$, since $(\cos(x),\sin(x))$ are the coordinates of a point on the circle.

The proof above uses two strong tools. One of them is derivatives; the other is a corollary of the mean value theorem:

$f'(x)=0 \implies f$ is a constant function. (Most books prove this by the mean value theorem.)

Of course it is a valid proof, but I thought of it as "killing a fly with an atomic bomb".

By the way, as @Alex zorn points out, if we just start from $\sin'=\cos$ and $\cos'=-\sin$, then it is a natural proof, which was not my intention; @Bill thought it is a completely natural proof, and I respect his opinion. (He also thought that my opinion about this proof will change when I get enough experience, but I do not think so. :))

Please note that there is no exact definition of being "ridiculously complicated"; it is a somewhat subjective topic, but that is natural since this is a soft question.

I felt the need to give this explanation to show my thinking behind it. Thanks to all who thought about this example and reacted, including the ones who downvoted. :)

4

$(1)$ By far the most complicated way of doing something I've seen came from my real analysis course. We were instructed to manipulate the power series of $\cos(x)$ and $\sin(x)$, namely $$\cos(x) = 1 - \frac{x^2}{2!} +\frac{x^4}{4!} - \cdots, \qquad \sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots,$$ to derive the addition formulas for $\cos$ and $\sin$, as opposed to this or to using rotation matrices to derive the result.
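As a cheap sanity check (not the coefficient-level series manipulation the course required), one can compare truncated power series numerically; the truncation depth and sample points below are my own choices:

```python
from math import factorial, isclose

# truncated Maclaurin series for cos and sin
def cos_series(x, terms=20):
    return sum((-1)**k * x**(2*k) / factorial(2*k) for k in range(terms))

def sin_series(x, terms=20):
    return sum((-1)**k * x**(2*k+1) / factorial(2*k+1) for k in range(terms))

a, b = 0.7, 1.3   # arbitrary sample points
# the addition formulas hold to truncation accuracy
assert isclose(cos_series(a+b), cos_series(a)*cos_series(b) - sin_series(a)*sin_series(b))
assert isclose(sin_series(a+b), sin_series(a)*cos_series(b) + cos_series(a)*sin_series(b))
```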

$(2)$ Other interesting results can be seen when proving the "big theorems" of analysis, such as the mean value, extreme value, and intermediate value theorems. In topology, although the setting is very general, these results are reached fairly quickly with the knowledge of connectedness and continuity.

Compare the analysis proof of the IVT with this topological proof: Let $f : X \to Y$ be continuous with $X$ connected, let $a,b \in X$, and let $r \in Y$ lie between $f(a)$ and $f(b)$. Define the sets $A=f(X)\cap(−\infty,r)$ and $B=f(X)\cap(r,\infty)$. These sets are clearly disjoint, and they are clearly nonempty since one contains $f(a)$ and the other contains $f(b)$. They are both open in the subspace topology on $f(X)$, being intersections of $f(X)$ with open sets. Assume there is no point $c$ such that $f(c)=r$. Then $f(X)=A \cup B$, so $A$ and $B$ constitute a separation of $f(X)$. But this contradicts the fact that the image of a connected space under a continuous mapping is connected.

0

Do you also accept algebraic propositions? If so:

Finite domains are fields (the standard proof is a one-liner: multiplication by a nonzero element is injective, hence surjective, so every nonzero element is invertible):

Let $A$ be a finite domain and let $K$ be its field of fractions. Certainly $K$ is a finite $A$-module. Consider $K/A \otimes_A K/A$; this is obviously $0$. But for finitely generated $A$-modules $M, N$ we have $\operatorname{supp}(M \otimes_A N) = \operatorname{supp}(M) \cap \operatorname{supp}(N)$; in other words, $M \otimes_A M = 0$ implies $M=0$. Hence in our situation $K/A=0$, i.e. $K=A$.
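The standard one-liner is easy to watch in action on a concrete finite domain such as $\Bbb Z/7$ (the modulus here is an arbitrary choice):

```python
# in a finite domain, multiplication by a nonzero element is injective,
# hence surjective, hence every nonzero element has an inverse
n = 7  # Z/7 is a finite domain since 7 is prime
for a in range(1, n):
    image = {(a * x) % n for x in range(n)}
    assert len(image) == n                             # injective ⟹ surjective
    assert any((a * x) % n == 1 for x in range(1, n))  # so a is invertible
```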

2

You can also say, when you do integration by parts, that you are using Stokes' theorem...

4

Here's one I experienced on this site not too long ago:

Question: Assume $K/F$ is a field extension of finite degree. Prove that the extension is algebraic.


My "atomic bomb":

To show that the extension is algebraic, select any $a \in K \setminus F$. Then consider the evaluation homomorphism $ev_a:F[x] \rightarrow K$ defined by $g(x) \mapsto g(a)$. Certainly, this map cannot be injective, because $F[x]$ is infinite-dimensional as an $F$-vector space while $K$ is finite-dimensional, so its kernel must be a nontrivial ideal. By the isomorphism theorems, we know that $F[x]/\ker(ev_a)$ is isomorphic to the image of $ev_a$, a subring of $K$.

Next, $K$ is a field, so it is definitely an integral domain, and hence so is any subring of it. We know that $F[x]/\ker(ev_a)$ is an integral domain $\iff$ $\ker(ev_a)$ is a prime ideal, and since the kernel is nonzero, this is only possible if $\ker(ev_a)$ is generated by an irreducible polynomial in $F[x]$.

We conclude that, for every $a \in K \setminus F$, there exists an irreducible polynomial in $F[x]$ with $a$ as a root. Therefore, $K$ is an algebraic extension of $F$.


A much more elegant response by the user Fretty:

To show $K/F$ is algebraic if finite we must show that every element of $K$ satisfies a polynomial over $F$.

Suppose $[K : F] = n$ and choose $\alpha\in K$. Then consider the elements $1,\alpha,\alpha^2,...,\alpha^n$.

This is a list of $n+1$ elements in an $n$-dimensional $F$-vector space, so they must be linearly dependent. Thus there exist $a_0,a_1,...,a_n\in F$, not all zero, such that $a_n \alpha^n + ... + a_2\alpha^2 + a_1\alpha + a_0 = 0$.

But then $\alpha$ is a root of the polynomial $a_nx^n + ... + a_2x^2 + a_1x + a_0$ over $F$.
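Fretty's pigeonhole argument can be watched concretely in the degree-2 extension $\Bbb Q(\sqrt2)$ (the choice of field and of $\alpha$ is mine, for illustration):

```python
from fractions import Fraction as Fr

# elements of Q(√2) stored as pairs (a, b), meaning a + b·√2
def mul(u, v):
    (a, b), (c, d) = u, v
    return (a*c + 2*b*d, a*d + b*c)

alpha = (Fr(1), Fr(1))                      # α = 1 + √2
one = (Fr(1), Fr(0))
powers = [one, alpha, mul(alpha, alpha)]    # 1, α, α²: three vectors in a 2-dim space

# they must be linearly dependent; here α² - 2α - 1 = 0
p0, p1, p2 = powers
assert (p2[0] - 2*p1[0] - p0[0], p2[1] - 2*p1[1] - p0[1]) == (0, 0)
```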

5

solving integrals and using complex numbers to evaluate simple problems

Integrals:

FoxTrot

Complex numbers:

Calvin and Hobbes

And of course you can't help but use complex arithmetic to solve the problem of "is it numberwang?"

7

Below is an excerpt of a solution integrating a helical staircase that I submitted in a vector calculus class years ago, on April Fools' Day, when I was still an undergraduate. I got full marks and have no regrets.

…therefore our integral is \begin{align*} &a\int^{2\pi}_{0}\int^{1}_{0} \sqrt{a^{2}u^{2}+b^{2}} \, dudv\\ &= 2\pi a \int_{0}^{1} \sqrt{a^{2}u^{2}+b^{2}} \, du. \qquad (1) \end{align*}

Now clearly, we should use the substitution $au = b\tan{\theta}$, but we have already demonstrated how to integrate $\sec^{3}\theta$ in Assignment 8, $\S 15.3$ Question 6. So instead, suppose we restrict ourselves to non-trigonometric substitution. Then we should instead let $\sqrt{a^{2}u^{2}+ b^{2}} = t - ua$. Then squaring both sides gives $$a^{2}u^{2}+b^{2} = t^{2} - 2uat + u^{2}a^{2}$$ and solving for u gives $$\frac{t^{2}-b^{2}}{2at} = u.$$ Differentiating both sides gives $$du = \frac{4at^{2} - 2at^{2} + 2ab^{2}}{4a^{2}t^{2}} dt.$$ Substituting this in we have $(1)$ equal to \begin{align*} &2\pi a\int_{t(0)}^{t(1)} \left(t - \frac{a(t^{2}-b^{2})}{2at}\right) \left(\frac{4at^{2} - 2at^{2} + 2ab^{2}}{4a^{2}t^{2}}\right) dt\\ &=2\pi a \int_{t(0)}^{t(1)} \left(t - \frac{t^{2}-b^{2}}{2t}\right) \left(\frac{1}{a} -\left(\frac{1}{2a}\right) +\frac{b^{2}}{2at^{2}}\right)dt\\ &= 2\pi \int_{t(0)}^{t(1)} \left(\frac{2t^{2} - t^{2}+b^{2}}{2t}\right)\left(\left(\frac{1}{2}\right) +\frac{b^{2}}{2t^{2}}\right)dt\\ &= 2\pi\int_{t(0)}^{t(1)} \left(\frac{t^{2} + b^{2}}{2t}\right) \left(\frac{t^{2}+b^{2}}{2t^{2}}\right)\, dt\\ &= 2\pi\int_{t(0)}^{t(1)} \frac{t^{4} + t^{2}b^{2} + b^{2}t^{2} + b^{4}}{4t^{3}} dt\\ &= \pi \int_{t(0)}^{t(1)} \frac{t^{4} + t^{2}b^{2} + b^{2}t^{2} + b^{4}}{2t^{3}} dt\\ &= \pi \int_{t(0)}^{t(1)} \frac{t}{2} + \frac{b^{2}}{2t} + \frac{b^{2}}{2t} + \frac{b^{4}}{2t^{3}} dt\\ &= \pi \int_{t(0)}^{t(1)} \frac{t}{2} + \frac{b^{2}}{t} + \frac{b^{4}}{2t^{3}} \, dt\\ &= \pi \left( \frac{t^{2}}{4} + b^{2}\ln |t| - \frac{b^{4}}{4t^{2}}\right) \bigg|_{t(0)}^{t(1)}. \qquad (2) \end{align*} Now when $u = 1$, we have $t = \sqrt{a^{2}+b^{2}}+a$ and when $u = 0$ we have $t = b$. 
Plugging these in, we have $(2)$ equal to \begin{align*} &\pi \left(\frac{(\sqrt{a^{2}+b^{2}}+a)^{2}}{4} + b^{2}\ln \left| \sqrt{a^{2}+b^{2}}+a\right| - \left(\frac{b^{4}}{4(\sqrt{a^{2}+b^{2}}+a)^{2}}\right) - \frac{b^{2}}{4} - b^{2}\ln\left|b\right| + \frac{b^{4}}{4b^{2}}\right)\\ &= \pi \left( \frac{(\sqrt{a^{2}+b^{2}}+a)^{2}}{4} - \left(\frac{b^{4}}{4(\sqrt{a^{2}+b^{2}}+a)^{2}}\right) + b^{2}\ln\left(\frac{\sqrt{a^{2}+b^{2}}+a}{b}\right) \right). \qquad (3) \end{align*} Now noticing the similarity of the first two terms, our intuition suggests this is easily simplified, so bringing this under a common denominator we have $(3)$ equal to \begin{align*} &\pi\left( \frac{(\sqrt{a^{2}+b^{2}} + a)^{4} - b^{4}}{4(\sqrt{a^{2}+b^{2}} + a)^{2}} + b^{2}\ln\left(\frac{\sqrt{a^{2}+b^{2}}+a}{b}\right) \right)\\ &= \pi \left(\frac{\left(2a^{2} + 2a\sqrt{a^{2}+b^{2}} + b^{2}\right)^{2} -b^{4}}{4(\sqrt{a^{2}+b^{2}} + a)^{2}} + b^{2}\ln\left(\frac{\sqrt{a^{2}+b^{2}}+a}{b}\right) \right). \end{align*} Expanding again we have \begin{align*} &\pi \left(\frac{4a^{4} + 4a^{3}\sqrt{a^{2}+b^{2}} + 2a^{2}b^{2} + 4a^{3}\sqrt{a^{2}+b^{2}}+4a^{2}(a^{2}+b^{2}) + 2ab^{2}\sqrt{a^{2}+b^{2}}}{4\left(\sqrt{a^{2}+b^{2}}+a\right)^{2}}\right. \dots\\ &\left.\dots +\frac{2a^{2}b^{2} + 2ab^{2}\sqrt{a^{2}+b^{2}} + b^{4} - b^{4}}{4\left(\sqrt{a^{2}+b^{2}}+a\right)^{2}} +b^{2}\ln\left(\frac{\sqrt{a^{2}+b^{2}}+a}{b}\right)\right)\\ &= \pi \left( \frac{8a^{4} + 8a^{2}b^{2} + 8a^{3}\sqrt{a^{2}+b^{2}}+4ab^{2}\sqrt{a^{2}+b^{2}}}{4(\sqrt{a^{2}+b^{2}}+a)^{2}} + b^{2}\ln\left(\frac{\sqrt{a^{2}+b^{2}}+a}{b}\right)\right)\\ &= \pi \left(\frac{2a^{4}+2a^{2}b^{2}+2a^{3}\sqrt{a^{2}+b^{2}}+ab^{2}\sqrt{a^{2}+b^{2}}}{(\sqrt{a^{2}+b^{2}}+a)^{2}} + b^{2}\ln \left(\frac{\sqrt{a^{2}+b^{2}}+a}{b}\right)\right). \qquad (4) \end{align*} Using our intuition we know that the only term that was canceled after expansion was $b^{4}$, so we should examine powers of $(\sqrt{a^{2}+b^{2}}+a)$ before using more complicated methods.
We know from earlier that $(\sqrt{a^{2}+b^{2}} + a)^{2} = 2a^{2} + 2a\sqrt{a^{2}+b^{2}} + b^{2}$ and by examining the numerator of the first term of $(4)$, we can see that $2a^{2}$, $2a\sqrt{a^{2}+b^{2}}$ and $b^{2}$ all share the common factor of $a\sqrt{a^{2}+b^{2}}$. Therefore, $a\sqrt{a^{2}+b^{2}}(\sqrt{a^{2}+b^{2}}+a)^{2}$ is a reasonable candidate for the correct factorization of the numerator. A quick check to confirm the cross-terms match shows that \begin{align*} a\sqrt{a^{2}+b^{2}}(\sqrt{a^{2}+b^{2}}+a)^{2} &= a\sqrt{a^{2}+b^{2}} (2a^{2} + 2a\sqrt{a^{2}+b^{2}} + b^{2})\\ &= 2a^{3}\sqrt{a^{2}+b^{2}} + 2a^{2}(a^{2}+b^{2}) + ab^{2}\sqrt{a^{2}+b^{2}}\\ &= 2a^{3}\sqrt{a^{2}+b^{2}} + 2a^{4} + 2a^{2}b^{2} + ab^{2}\sqrt{a^{2}+b^{2}}\\ &= 2a^{4} + 2a^{2}b^{2} + 2a^{3}\sqrt{a^{2}+b^{2}} +ab^{2}\sqrt{a^{2}+b^{2}}. \qquad (5) \end{align*} So, now that we have verified that the cross-terms match we can use $(5)$ and thus have $(4)$ equal to \begin{align*} \pi\left(\frac{a\sqrt{a^{2}+b^{2}}(\sqrt{a^{2}+b^{2}}+a)^{2}}{(\sqrt{a^{2}+b^{2}}+a)^{2}} + b^{2}\ln(\phi)\right) &= a\pi\sqrt{a^{2}+b^{2}} + \pi b^{2}\ln\left(\frac{\sqrt{a^{2}+b^{2}}+a}{b}\right) \end{align*} which was what was to be shown.
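The closed form at the end can be checked against the integral in $(1)$ numerically; the parameter values and the use of Simpson's rule here are my own choices:

```python
from math import sqrt, log, pi, isclose

a, b = 2.0, 3.0                     # arbitrary positive parameters

# left side: 2πa ∫₀¹ √(a²u² + b²) du via composite Simpson's rule
n = 1000                            # even number of subintervals
h = 1.0 / n
f = lambda u: sqrt(a*a*u*u + b*b)
s = f(0.0) + f(1.0) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
lhs = 2 * pi * a * (h / 3) * s

# right side: the closed form reached after the substitution marathon
r = sqrt(a*a + b*b)
rhs = a * pi * r + pi * b*b * log((r + a) / b)

assert isclose(lhs, rhs, rel_tol=1e-9)
```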

2

Here is a proof of a basic combinatorial identity, using algebraic topology.

For $n \ge 0$, the $n$-simplex $\Delta^n$ can be constructed using $\binom{n+1}{k+1}$ cells of dimension $k$, for $k=0,1,\dots,n$. (There are $n+1$ vertices present, and a $k$-cell $c$ is determined uniquely by a choice of $k+1$ of these vertices to be incident on $c$.)

Therefore the Euler characteristic $\chi(\Delta^n)$ is given by the alternating sum

$$\binom{n+1}{1} - \binom{n+1}{2} + \dots = \sum_{i=1}^{n+1} (-1)^{i+1}\binom{n+1}{i}.$$

On the other hand, $\Delta^n$ is contractible, so $\chi(\Delta^n) = 1 = \binom{n+1}{0}$. Rearranging the equation

$$\binom{n+1}{0} = \sum_{i=1}^{n+1} (-1)^{i+1}\binom{n+1}{i}$$

yields the familiar

$$\sum_{i=0}^{n+1} (-1)^i \binom{n+1}{i} = 0.$$
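The identity is easy to confirm for small $n$ (the range checked is arbitrary):

```python
from math import comb

# Σ_{i=0}^{n+1} (-1)^i C(n+1, i) = 0 for every n ≥ 0
for n in range(12):
    assert sum((-1)**i * comb(n + 1, i) for i in range(n + 2)) == 0
```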

0

I recall this from an actual math exam:

$ABCD$ is a square in a three-dimensional space.
The vectors $\overrightarrow{A}$ (the position vector of $A$), $\overrightarrow{AB}$, and $\overrightarrow{AD}$ are given. Find $\overrightarrow{C}$!

Half of the students were not able to solve it. The official solution was to solve a system of equations using the dot product, something like $\overrightarrow{AB} + \overrightarrow{BC} = \overrightarrow{AD} + \overrightarrow{DC}$; $\overrightarrow{AB} \cdot \overrightarrow{BC} = 0$; and $\overrightarrow{AD} \cdot \overrightarrow{DC} = 0$.

The topic of the exam was equation systems, so it made sense that such a solution was expected and all students, me included, tried to solve it that way. The task was worth 5 points, which means that the average student was expected to solve it in 5 minutes.

But of course, since we are talking about a square, we know that $\overrightarrow{BC} = \overrightarrow{AD}$. Thus, all it took was $\overrightarrow{C} = \overrightarrow{A} + \overrightarrow{AB} + \overrightarrow{AD}$, i.e., just add the three vectors that were given.
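With made-up coordinates (any orthogonal pair $\overrightarrow{AB}$, $\overrightarrow{AD}$ of equal length works), the shortcut is one line:

```python
# hypothetical data: AB ⟂ AD and |AB| = |AD|, so ABCD is a square
A  = (1.0, 2.0, 3.0)
AB = (2.0, 0.0, 0.0)
AD = (0.0, 2.0, 0.0)

dot = lambda u, v: sum(x*y for x, y in zip(u, v))
assert dot(AB, AD) == 0.0     # adjacent sides of a square are orthogonal

# the shortcut: C = A + AB + AD, since BC = AD
C = tuple(p + u + v for p, u, v in zip(A, AB, AD))
assert C == (3.0, 4.0, 3.0)
```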

2

Let $x,y\in\mathbb{R}^n$. Let $A$ be the $n\times 2$ matrix whose columns are $x$ and $y$. A standard argument shows that any eigenvalue of $A^TA$ is nonnegative (consider $\langle v,A^TAv\rangle = \|Av\|^2$); since $A^TA$ is symmetric, it's diagonalizable, so it follows that $\det(A^TA)\ge 0$. Thus: $$ 0 \le \det(A^TA) = \left|\begin{matrix} \|x\|^2 & \langle x,y\rangle \\ \langle x,y\rangle & \|y\|^2 \end{matrix}\right| = \|x\|^2\|y\|^2 - \langle x,y\rangle^2 $$ and we have proved the Cauchy-Schwarz inequality.
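A randomized spot-check of the determinant inequality (dimension and seed are arbitrary):

```python
import random

random.seed(0)
n = 5
x = [random.uniform(-1, 1) for _ in range(n)]
y = [random.uniform(-1, 1) for _ in range(n)]

dot = lambda u, v: sum(a*b for a, b in zip(u, v))
# det(AᵀA) = ‖x‖²‖y‖² − ⟨x,y⟩² must be nonnegative
gram_det = dot(x, x) * dot(y, y) - dot(x, y)**2
assert gram_det >= 0.0
```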

0

Here is a major number-theoretical nuking. By a result of Gronwall (1913), the Generalized Riemann Hypothesis (GRH) implies that the only imaginary quadratic number fields $\,K$ whose integers have unique factorization are $\,\Bbb Q[\sqrt {-d}],\,$ for $\,d\in \{1,2,3,7,11,19,43,67,163\}.\,$ Therefore, if $\,K$ is not in this list then it has an integer with a nonunique factorization into irreducibles.

But that can be proved much more simply in any particular case, e.g. the classic elementary proof that $\,2\cdot 3 = (1-\sqrt{-5})(1+\sqrt{-5})\,$ is a nonunique factorization into irreducibles in $\,\Bbb Z[\sqrt{-5}],\,$ which can easily be comprehended by a bright high-school student.

Similarly, other sledgehammers arise by applying general classification theorems to elementary problems, e.g. classifications of (finite) (abelian) (simple) groups. Examples of such sledgehammers can be found here and on MathOverflow by keyword searches.

0

Lemma: Let $X \sim \mathcal{N}(0,1)$ be normally distributed. Then $Y = \sigma X + \mu$ is normally distributed as $Y \sim \mathcal{N}(\mu,\sigma^2)$.

Proof:

Let $Y$ be represented by its polynomial chaos expansion: $$Y = \sum_{i=0}^\infty y_i\Phi_i(\zeta).$$ Choose $\zeta$ to have a zero-mean normal distribution, inducing the choice $\Phi_i = H_i$, where $H_i$ is the $i$th Hermite polynomial, appropriately scaled so that $\left\langle H_i H_j\right\rangle = \delta_{ij}$.

We compute the expansion coefficients using the Galerkin method by projecting each orthogonal polynomial basis function onto both sides of the expansion:

$$\left\langle Y H_j(\zeta)\right\rangle = \left\langle \sum_{i=0}^\infty y_iH_i(\zeta) H_j(\zeta)\right\rangle \\ y_j = \frac{1}{\left\langle H_j^2 (\zeta)\right\rangle} \int_{\mathbb{R}} YH_j(\zeta) w(\zeta)\ d\zeta,$$ where $w(\zeta)$ is the weighting function of the Hermite polynomials, appropriately scaled.

Since $Y$ and $\zeta$ are fully correlated, perform an inverse transform of their distribution functions to the same uniformly-distributed random variable $u$:

$$F(Y) = u = G(\zeta) \implies h(u) \equiv F^{-1}(u) = Y, l(u) \equiv G^{-1}(u) = \zeta.$$

Note that the CDF of the standard normal distribution can be written in terms of the error function: $$G(\zeta) = \frac12\left(1+\textrm{erf}\left(\frac{\zeta}{\sqrt{2}}\right)\right),$$ so we can write $$l(u) = \sqrt{2} \textrm{erf}^{-1}(2u-1).$$

Similarly, it is easy to show that $$h(u) = \sqrt{2\sigma^2}\textrm{erf}^{-1}(2u-1)+\mu.$$

Substituting all this into the integral for $y_j$, we find

$$\begin{align*} y_j & = \int_0^1 h(u)H_j(l(u))\ du \\ &= \int_0^1 \sqrt{2\sigma^2}\textrm{erf}^{-1}(2u-1)H_j(\sqrt{2} \textrm{erf}^{-1}(2u-1))\ du + \int_0^1 \mu H_j(\sqrt{2} \textrm{erf}^{-1}(2u-1))\ du \\ &= \underbrace{\int_{\mathbb{R}} \sqrt{2\sigma^2}\zeta H_j(\zeta) w(\zeta)\ d\zeta}_{\sigma\left\langle H_1 H_j\right\rangle} + \underbrace{\int_\mathbb{R} \mu H_j(\zeta)w(\zeta)\ d\zeta}_{\mu\left\langle H_0 H_j\right\rangle} \end{align*}$$

Hence, the first integral is non-zero only for $j=1$, and the second is non-zero for only $j=0$. By the appropriate choice of scaling of the Hermite polynomials, we have

$$Y = \mu + \sigma \zeta$$

and we finally note that $\zeta$ is identically distributed to $X$.
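A Monte Carlo sketch of the Galerkin projection (my own illustration; it uses unnormalized probabilists' Hermite polynomials with $\langle H_j^2\rangle = j!$ divided out explicitly, and the sample size, seed, and parameters are arbitrary). Projecting $Y=\sigma\zeta+\mu$ onto the first few basis functions should return $y_0\approx\mu$, $y_1\approx\sigma$, and $0$ afterwards:

```python
import random
from math import factorial

random.seed(42)
mu, sigma = 1.5, 0.4
N = 200_000

# probabilists' Hermite polynomials He_0 = 1, He_1 = ζ, He_2 = ζ² − 1
acc = [0.0, 0.0, 0.0]
for _ in range(N):
    z = random.gauss(0.0, 1.0)
    Y = sigma * z + mu                  # Y fully correlated with ζ
    for j, He in enumerate((1.0, z, z*z - 1.0)):
        acc[j] += Y * He

coeffs = [acc[j] / (N * factorial(j)) for j in range(3)]
assert abs(coeffs[0] - mu) < 0.02       # y0 ≈ μ
assert abs(coeffs[1] - sigma) < 0.02    # y1 ≈ σ
assert abs(coeffs[2]) < 0.02            # higher modes vanish
```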

0

If you estimate the area of a circle by circumscribing a hexagon, you get the inequality $\pi < 2\sqrt3$. An alternative proof: $$ \frac{\pi^2}{6} = \sum_{n=1}^\infty \frac1{n^2} < 1 + \sum_{n=2}^\infty \frac1{n(n-1)} = 1 + \sum_{n=2}^\infty \left(\frac1{n-1} - \frac1n\right) = 2, $$ so $\pi^2 < 12$, i.e. $\pi < 2\sqrt3$.
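Both routes, and the partial sums, can be spot-checked:

```python
from math import pi, sqrt

assert pi < 2 * sqrt(3)                  # the bound itself

# partial sums of Σ 1/n² stay under the telescoping bound 2
partial = sum(1.0 / (n * n) for n in range(1, 100_000))
assert partial < 2.0
assert pi * pi / 6 < 2.0
```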

2

I've always had the feeling that using the calculus of variations to prove that a line is the shortest path between two points is a bit overkill...

$$I(y)=\int_a^b f(y,y')\,\mathrm{d}x, \qquad f(y,y')=\sqrt{1+y'^2}$$ The Euler-Lagrange equation $$\frac{\partial f}{\partial y}=\frac{d}{dx}\frac{\partial f}{\partial y'}$$ gives $$0=\frac{d}{dx}\frac{y'}{\sqrt{1+y'^2}}$$ $$y'=C\sqrt{1+y'^2}\implies y'^2(C^2-1)=-C^2\implies y'=c$$ so that we finally get $$y=cx+d.$$
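A discrete variational sanity check (not the calculus of variations itself; the perturbation family below is my own choice): among polylines joining $(0,0)$ and $(1,1)$, the straight line is shortest, and any sinusoidal bump makes it longer.

```python
from math import sqrt, sin, pi

def length(f, n=2000):
    """Polyline length of the graph of f over [0, 1]."""
    h = 1.0 / n
    return sum(sqrt(h*h + (f((k+1)*h) - f(k*h))**2) for k in range(n))

line = lambda x: x
for eps in (0.05, 0.1, 0.2):
    bent = lambda x, e=eps: x + e * sin(pi * x)   # same endpoints as the line
    assert length(bent) > length(line)
```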

Who knew that proving such a simple statement could be so hard!

0

To prove that there are infinitely many primes, you can use the prime number theorem: if there were only finitely many, say $k$ of them, then the number of primes less than or equal to $x$ would eventually be the constant $k$, contradicting the asymptotic $x/\ln x$.

Or you can just invoke the Green-Tao theorem....

(And don't give me any stuff about circular reasoning ok? ;))

2

Once during my math studies I encountered the task of creating an infinite sequence of numbers which sums to some arbitrary value $a$. Since there were no other restrictions on the sequence, the simplest and most obvious solution was:

$$a, 0, 0, 0, ...$$

But instead I created a function which, given a real $x$ and a positive integer $n$, returns the $n$th significant digit of that real value, scaled by the appropriate power of $10$. So for $\pi$ we would get $3, 0.1, 0.04, 0.001, 0.0005$, and so on.

The function looked more or less like the following:

$$f(x,n) = \operatorname{signum}(x) \cdot \left(\left\lfloor \lvert x \rvert \cdot 10^{\lfloor -\log_{10} \lvert x \rvert \rfloor} \cdot 10^n \right\rfloor - \left\lfloor \lvert x \rvert \cdot 10^{\lfloor -\log_{10} \lvert x \rvert \rfloor} \cdot 10^{n-1} \right\rfloor \cdot 10\right) \cdot 10^{\lfloor \log_{10} \lvert x \rvert \rfloor - (n-1)}$$

$\text{where} ~ n = 1, 2, ...$

Then I declared the sequence to be:

$f(a, 1), f(a, 2), f(a, 3), ...$

Output from Maxima:

(%i1) log10(x) := log(x) / log(10);

(%o1) ${log10}\left( x\right) :=\frac{\mathrm{log}\left( x\right) }{\mathrm{log}\left( 10\right) }$

(%i2) f(x,n):=signum(x)*(floor(abs(x)*10^(floor(-log10(abs(x))))*10^n) - floor(abs(x)*10^(floor(-log10(abs(x))))*10^(n-1))*10)*10^(floor(log10(abs(x))) - (n-1));

(%o2) $\mathrm{f}\left( x,n\right) :=\mathrm{signum}\left( x\right) \,\left( \mathrm{floor}\left( \left| x\right| \,{10}^{\mathrm{floor}\left( -\mathrm{log10}\left( \left| x\right| \right) \right) }\,{10}^{n}\right) -\mathrm{floor}\left( \left| x\right| \,{10}^{\mathrm{floor}\left( -\mathrm{log10}\left( \left| x\right| \right) \right) }\,{10}^{n-1}\right) \,10\right) \,{10}^{\mathrm{floor}\left( \mathrm{log10}\left( \left| x\right| \right) \right) -\left( n-1\right) }$

(%i3) f(123.456, 2);

(%o3) $20.0$

(%i4) f(-%pi, 3);

(%o4) $-0.04$
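For comparison, here is a Python port of the same formula (my own translation of the Maxima definition above):

```python
from math import floor, log10, pi

def f(x, n):
    """n-th term (n = 1, 2, ...) of the digit sequence of x (x ≠ 0)."""
    s = 1 if x > 0 else -1
    ax = abs(x)
    m = ax * 10 ** floor(-log10(ax))          # mantissa in [0.1, 1)
    digit = floor(m * 10**n) - floor(m * 10**(n - 1)) * 10
    return s * digit * 10.0 ** (floor(log10(ax)) - (n - 1))

assert f(123.456, 2) == 20.0                  # matches (%o3)
assert abs(f(-pi, 3) + 0.04) < 1e-12          # matches (%o4)
# the terms sum back to (a truncation of) the original number
assert abs(sum(f(pi, n) for n in range(1, 11)) - pi) < 1e-8
```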

Fun fact: I actually volunteered to solve this task on the chalkboard, and when I finished writing the huge formula, the PhD who conducted the workshop casually, without a second of hesitation, asked: "Ah, you are extracting the $n$th digit from the number?"

1

Inspired by your (far too simple!) example in the question.

Let us compute $\int_0^1 x^2\,dx$. We start with the obvious change of variables: $$ t = \frac{x}{1-x} \quad\Leftrightarrow\quad x = \frac{t}{1+t} $$ from which we get $$ dx =\frac{dt}{(1+t)^2}, $$ and the integral transforms to $$ \int_0^1 x^2\,dx = \int_0^\infty \frac{t^2}{(1+t)^4}\,dt, $$ which can be attacked using residue calculus. Define $$ f(z) = \frac{z^2\log z}{(1+z)^4} $$ where $\log$ is chosen as the natural branch of the complex logarithm, and integrate over a keyhole contour consisting of a large circle $C_R$, a small circle $C_\varepsilon$ around the origin, and segments $I^+$ and $I^-$ just above and below the positive real axis.

Standard estimates on the various parts of the contour show that on $C_R$: $$ \left| \frac{z^2\log z}{(1+z)^4} \right| \le \frac{R^2(\ln R + 2\pi)}{R^4-1} $$ so $$ \left| \int_{C_R} f(z)\,dz \right| \le 2\pi R \cdot \frac{R^2(\ln R + 2\pi)}{R^4-1} $$ which tends to $0$ as $R \to \infty$. Similarly, on $C_\varepsilon$: $$ \left| \frac{z^2\log z}{(1+z)^4} \right| \le \frac{\varepsilon^2(\lvert\ln \varepsilon\rvert + 2\pi)}{(1/2)^4} $$ (if $\varepsilon < 1/2$) so $$ \left| \int_{C_\varepsilon} f(z)\,dz \right| \le 2\pi \varepsilon \cdot 16 \varepsilon^2(\lvert\ln \varepsilon\rvert + 2\pi), $$ which tends to $0$ as $\varepsilon \to 0^+$. It remains to investigate what happens on $I^+$ and $I^-$. On $I^+$ we get $$ \int_{I^+} f(z)\,dz = \int_{\varepsilon}^R \frac{x^2\ln x}{(1+x)^4}\,dx $$ and on $I^-$: $$ \int_{I^-} f(z)\,dz = \int_R^{\varepsilon} \frac{x^2(\ln x+2\pi i)}{(1+x)^4}\,dx. $$

Putting everything together, using the residue theorem and letting $R\to\infty$, $\varepsilon\to0^+$ (note that the integrals containing $\ln x$ cancel) we get $$ -2\pi i \int_0^\infty \frac{x^2}{(1+x)^4}\,dx = 2\pi i\operatorname{Res}\limits_{z=-1} \frac{z^2\log z}{(1+z)^4}. $$ Finally, $$ \operatorname{Res}\limits_{z=-1} \frac{z^2\log z}{(1+z)^4} = \frac1{3!} (z^2\log z)^{'''}\big|_{z=-1} = -\frac13 $$ (omitting tedious algebra), and we reach the amazing result $$ \int_0^1 x^2\,dx = \int_0^\infty \frac{t^2}{(1+t)^4}\,dt = -\operatorname{Res}\limits_{z=-1} \frac{z^2\log z}{(1+z)^4} = \frac13. $$
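Or one can dodge residues and contours altogether with brute-force quadrature (midpoint rule; the substitution used to compactify the half-line is my own choice):

```python
from math import isclose

# check ∫₀¹ x² dx = ∫₀^∞ t²/(1+t)⁴ dt = 1/3 numerically
n = 200_000
h = 1.0 / n

left = sum(((k + 0.5) * h) ** 2 for k in range(n)) * h

def g(s):
    # substitute t = s/(1-s) to map [0,1) onto [0,∞); dt = ds/(1-s)²
    t = s / (1.0 - s)
    return t * t / (1.0 + t) ** 4 / (1.0 - s) ** 2

right = sum(g((k + 0.5) * h) for k in range(n)) * h

assert isclose(left, 1/3, rel_tol=1e-6)
assert isclose(right, 1/3, rel_tol=1e-6)
```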

0

Computing sums of powers of roots using the argument principle.


Example:

Let $\alpha,\beta,\gamma,\delta$ be the distinct roots of $x^4+x^2+1$. Find $\alpha^6+\beta^6+\gamma^6+\delta^6$.


Simple algebraic solution:

Note that $(x^2-1)(x^4+x^2+1)=x^6-1$, so each root satisfies $x^6=1$, and the answer is just $1+1+1+1=4$.


Using the argument principle:

Let $\sigma_n=\alpha^n+\beta^n+\gamma^n+\delta^n$.

By the generalization of the argument principle,

$$\sigma_6=\frac1{2\pi i}\oint_C\frac{4z^3+2z}{z^4+z^2+1}z^6~\mathrm dz$$

where $C$ is a circle of radius $2$, which encloses all four roots (each root has modulus $1$). By long division this becomes

$$\frac{4z^3+2z}{z^4+z^2+1}z^6=4z^5-2z^3-2z+\frac{4z^3+2z}{z^4+z^2+1}$$

but the polynomial part integrates to $0$ around the closed contour, and by the argument principle the remaining term contributes the number of roots, so this simply becomes

$$\sigma_6=4$$


This can be used instead of things such as Newton's identities and Vieta's formulas, albeit very messily sometimes.
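The contour integral itself can be evaluated numerically (trapezoidal rule on the circle, which converges geometrically for periodic analytic integrands; the discretization size is arbitrary):

```python
import cmath

# σ₆ = (1/2πi) ∮_C z⁶ p'(z)/p(z) dz on the circle |z| = 2
p  = lambda z: z**4 + z**2 + 1
dp = lambda z: 4*z**3 + 2*z

N = 256
total = 0j
for k in range(N):
    z = 2 * cmath.exp(2j * cmath.pi * k / N)
    total += z**6 * dp(z) / p(z) * 1j * z    # dz = iz dθ
sigma6 = total * (2 * cmath.pi / N) / (2j * cmath.pi)

assert abs(sigma6 - 4) < 1e-9
```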

0

$n!$ is not a power of a prime for $n\geq 3$.

Boring proof: $n!$ is divisible by two distinct primes $2$ and $3$.

Cool proof: For $n\geq 5$, the only normal subgroups of $S_n$ are $\{1\},A_n$ and $S_n$. Since $A_n$ and $S_n$ are not commutative, it follows that the center $Z(S_n)$ is trivial (for $n=3,4$ one checks directly that $Z(S_n)$ is trivial). On the other hand, if $|S_n|=n!$ were a power of a prime, then $S_n$ would be a nontrivial $p$-group, and such a group always has a nontrivial center, which is a contradiction.
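Both proofs are easy to machine-check for small $n$; here is the boring divisibility fact plus a brute-force computation of $Z(S_n)$ (the ranges of $n$ are arbitrary):

```python
from itertools import permutations
from math import factorial

# boring proof: n! has the two distinct prime divisors 2 and 3 for n ≥ 3
for n in range(3, 15):
    assert factorial(n) % 2 == 0 and factorial(n) % 3 == 0

# cool proof's key fact: Z(S_n) is trivial for small n ≥ 3
def compose(p, q):                      # (p∘q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

for n in range(3, 6):
    G = list(permutations(range(n)))
    center = [g for g in G if all(compose(g, h) == compose(h, g) for h in G)]
    assert center == [tuple(range(n))]  # only the identity is central
```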