What are polynomial continued fractions?

Exploring various number representations not only unveils their fascinating nature but also deepens our understanding of these mathematical entities. Just as we write numbers in decimal or binary form, another intriguing representation is the simple continued fraction expansion, which in a sense originated all the way back around 300 BCE with Euclid’s division-with-remainder algorithm. This leads to presentations of numbers like the golden ratio \(\varphi=\frac{1+\sqrt{5}}{2}\) and \(e\) as

\( \varphi = 1+\frac{1}{1+\frac{1}{1+\frac{1}{1+\frac{1}{1+\ddots}}}} \; \; \; e = 2+\frac{1}{1+\frac{1}{2+\frac{1}{1+\frac{1}{1+\frac{1}{4+\frac{1}{1+\frac{1}{1+\cdots}}}}}}} \)
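
To make the division-with-remainder idea concrete, here is a minimal Python sketch (the helper name simple_cf and the use of floating-point arithmetic are our own, for illustration only) that recovers the first few coefficients of a simple continued fraction by repeatedly splitting off the integer part and inverting the remainder:

```python
def simple_cf(alpha, n_terms):
    """First coefficients of the simple continued fraction of alpha,
    obtained by repeatedly taking the integer part and inverting the rest."""
    terms = []
    x = alpha
    for _ in range(n_terms):
        a = int(x)             # integer part = next coefficient
        terms.append(a)
        remainder = x - a
        if remainder == 0:     # rational input: the expansion terminates
            break
        x = 1 / remainder      # floating point limits how many terms are reliable
    return terms

print(simple_cf((1 + 5 ** 0.5) / 2, 8))   # golden ratio: [1, 1, 1, 1, 1, 1, 1, 1]
print(simple_cf(2.718281828459045, 8))    # e:            [2, 1, 2, 1, 1, 4, 1, 1]
```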

This expansion serves as a fundamental tool in mathematics in general, and in number theory in particular. However, the traditional algorithm for generating continued fractions faces a challenge: computing the infinitely many numbers in the expansion would take an infinite amount of time. Fortunately, there exists a remarkable alternative known as polynomial continued fractions, in which the numbers in the expansion are given by polynomial formulas in their position. While their approximations are not as strong as those of their simple counterparts, polynomial continued fractions capture all of the infinitely many numbers in the expansion with a single elegant formula.
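
As a taste of what such a formula looks like, here is a small sketch (the evaluator pcf_value is our own illustration) that truncates a polynomial continued fraction at a finite depth and folds it from the bottom up. We use the classical expansion \( \pi = 3+\frac{1^2}{6+\frac{3^2}{6+\frac{5^2}{6+\ddots}}} \), whose numbers are given by the polynomials \(a_n=6\) and \(b_n=(2n-1)^2\):

```python
def pcf_value(a, b, depth):
    """Evaluate a(0) + b(1)/(a(1) + b(2)/(a(2) + ...)) truncated at `depth`,
    folding the fraction from the bottom up."""
    value = a(depth)
    for n in range(depth, 0, -1):
        value = a(n - 1) + b(n) / value
    return value

# pi = 3 + 1^2/(6 + 3^2/(6 + 5^2/(6 + ...))): a(0) = 3, a(n) = 6, b(n) = (2n - 1)^2
pi_approx = pcf_value(lambda n: 3 if n == 0 else 6,
                      lambda n: (2 * n - 1) ** 2,
                      depth=2000)
print(pi_approx)   # -> 3.14159..., approaching pi as the depth grows
```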

For a deeper understanding of these concepts and their connection, we provide an introduction here. Additionally, we further explore (with mathematical proofs) how one of Euler’s intriguing ideas can be extended to generate polynomial continued fractions for various well-known mathematical constants.

Irrationality testing

Simple continued fractions have a fascinating application in finding the ‘best’ rational approximations for a given number, known as good Diophantine approximations. Given any real number \(\alpha \in \mathbb{R}\) and any error bound \(\varepsilon>0\), it is always possible to find a rational number \(\frac{n}{m}\) such that \( |\alpha - \frac{n}{m}|<\varepsilon \), so the approximation error can be made as small as we want. However, such a solution might require large values of \(n\) and \(m\), making the rational number seem “complicated”; the problem of good Diophantine approximation aims to find solutions where both \(n\) and \(m\) are as small as possible (in absolute value).
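
For instance, the continued fraction convergents of \(\pi\), such as \(\frac{22}{7}\) and \(\frac{355}{113}\), achieve far better accuracy than a decimal truncation with a comparable denominator. A small illustration using Python’s standard library (the comparison with a “naive” decimal truncation is our own choice of baseline):

```python
from fractions import Fraction
import math

pi = Fraction(math.pi)                        # exact rational value of the float pi
for bound in (10, 1000):
    best = pi.limit_denominator(bound)        # best fraction with denominator <= bound
    naive = Fraction(round(math.pi * bound), bound)
    print(f"{best} (error {float(abs(pi - best)):.2e})   vs   "
          f"{naive} (error {float(abs(pi - naive)):.2e})")
# prints 22/7 and 355/113, each beating the decimal truncation of similar size
```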

While simple continued fractions provide the optimal Diophantine approximations, finding all of them is not straightforward, as the expansion itself is often unknown in advance. In contrast, polynomial continued fractions offer rational approximations that may not be the best overall but are significantly easier to calculate. Interestingly, if we can identify ‘sufficiently good’ rational approximations for a given real number, this already implies that the number is irrational. We delve deeper into this simple yet fundamental concept of irrationality testing, exploring its connections to both simple and polynomial continued fractions.
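
The underlying principle is elementary: a rational number \(\frac{a}{b}\) satisfies \(|\frac{a}{b}-\frac{n}{m}|\geq\frac{1}{bm}\) for every other rational \(\frac{n}{m}\), so if we can find approximations with \(|\alpha-\frac{n}{m}|<\frac{1}{m^{1+\delta}}\) for some fixed \(\delta>0\) and arbitrarily large \(m\), then \(\alpha\) cannot be rational. The sketch below (our own illustration, using the golden ratio and its Fibonacci convergents) estimates this exponent \(\delta\) numerically:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60                       # work with 60 significant digits
phi = (1 + Decimal(5).sqrt()) / 2            # golden ratio to high precision

# Convergents of phi are ratios of consecutive Fibonacci numbers.
p, q = 3, 2
for _ in range(20):
    err = abs(phi - Decimal(p) / Decimal(q))
    # write the error as 1/q^(1 + delta) and solve for delta
    delta = -(err * q).ln() / Decimal(q).ln()
    print(f"{p}/{q}:  delta ~ {float(delta):.3f}")
    p, q = p + q, p                          # next convergent
# delta decreases toward 1 and stays far from 0, consistent with phi being irrational
```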

Polynomial continued fractions and their Möbius presentation

One of the main tools used to study both simple and polynomial continued fractions is the Möbius transformation, which connects them to an interesting dynamical system. The transformation is defined simply as

\(\begin{pmatrix}a & b\\c & d\end{pmatrix}\left(z\right):=\frac{az+b}{cz+d}.\)

In particular,

\(\begin{pmatrix}0 & b\\1 & a\end{pmatrix}\left(z\right)=\frac{b}{a+z}\)

is a single “step” in the continued fraction expansion (with \(b=1\) in the simple continued fraction case). In other words, computing a continued fraction amounts to a dynamical system in which we apply more and more Möbius steps and try to understand what happens in the limit. We describe this powerful tool here, and some of its implications for continued fraction expansions.
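
Since composing Möbius transformations corresponds to multiplying their matrices, multiplying the single-step matrices and applying the resulting transformation to \(z=0\) reproduces the truncated continued fraction, and the right-hand column of the product gives the numerator and denominator of the resulting rational approximation. A minimal sketch (helper names are our own), reusing the \(\pi\) expansion with \(a_n=6\), \(b_n=(2n-1)^2\) from above:

```python
def mobius(m, z):
    """Apply the Mobius transformation of the 2x2 matrix m to z."""
    (a, b), (c, d) = m
    return (a * z + b) / (c * z + d)

def mat_mul(m1, m2):
    """Multiply two 2x2 matrices stored as nested tuples."""
    (a, b), (c, d) = m1
    (e, f), (g, h) = m2
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

# pi = 3 + 1^2/(6 + 3^2/(6 + ...)): accumulate the step matrices [[0, b_n], [1, a_n]]
m = ((1, 0), (0, 1))                      # start from the identity matrix
for n in range(1, 300):
    m = mat_mul(m, ((0, (2 * n - 1) ** 2), (1, 6)))

print(3 + mobius(m, 0))                   # -> 3.14159..., approaching pi
p, q = m[0][1], m[1][1]                   # right-hand column = numerator and denominator
print(3 + p / q)                          # the same rational approximation, read off the matrix
```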

Families of polynomial continued fractions

Now that we know a little bit about the origin and applications of polynomial continued fractions, we can start investigating them and discovering the patterns and ideas hidden within them. As always, we should start with a good question, and in this case we ask: when are two polynomial continued fractions related to one another?

A single constant (and its “close family”) may have many polynomial continued fraction presentations. However, is it possible to start with a single presentation and then “improve” it in some way? For example, maybe the “improved” presentation provides better rational approximations? This idea on the one hand, and many examples generated using the Ramanujan Machine on the other, led to a new and interesting structure that we call the conservative matrix field. Each such matrix field contains a family of polynomial continued fractions interconnected with one another, which under the right extra conditions can lead to irrationality proofs of certain mathematical constants. In particular, we can use this structure to motivate many of the steps in Apéry’s proof that \( \zeta(3) \) is irrational.
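
To give a concrete taste, the polynomial continued fraction that appears in classical treatments of Apéry’s theorem, with \(a_n = 34n^3+51n^2+27n+5\) and \(b_n=-n^6\), converges to \(\frac{6}{\zeta(3)}\), and its unusually fast convergence is a key ingredient in the irrationality argument. A self-contained numerical sketch (our own, using the same bottom-up evaluation as before):

```python
def apery_cf(depth):
    """Truncate the continued fraction with a(n) = 34n^3 + 51n^2 + 27n + 5
    and b(n) = -n^6, folding it from the bottom up."""
    a = lambda n: 34 * n**3 + 51 * n**2 + 27 * n + 5
    b = lambda n: -n**6
    value = a(depth)
    for n in range(depth, 0, -1):
        value = a(n - 1) + b(n) / value
    return value

print(6 / apery_cf(30))   # -> 1.2020569..., i.e. zeta(3), already at float precision
```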

We also describe one such interesting construction of conservative matrix fields, which relates to many well-known mathematical constants.