A Brief Exposé of P-adics

Gavin Bala – commended entry in the 2021 Teddy Rocks Maths Essay Competition

Our first brush with the infinite usually comes when dividing 1 by 3. If we try it on a calculator, we get:

0.3333333333

But how can that be? Surely if we triple this, we get 0.9999999999, not 1? If we turn to long division to figure this out, we get:

      0.3
  3 ) 1.0000000
      9
      10

After we do the first step, we end up where we started, with a one and then a string of zeroes. (We can always add more as needed.) So we’re going to end up with a string of threes repeating for eternity: it can’t ever stop. The calculator only stops because it can’t go on forever, but we can think about the problem and realise what would happen.
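
For the curious, here is a minimal sketch of that long division in Python (the essay itself uses none, so this is purely illustrative): the remainder is 1 again after every step, so the digit 3 repeats forever.

remainder = 1
digits = []
for _ in range(10):
    remainder *= 10                       # "bring down" another zero
    digits.append(remainder // 3)         # next digit of the quotient
    remainder %= 3                        # the remainder is 1 once more
print("0." + "".join(str(d) for d in digits))   # prints 0.3333333333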

This makes sense on further reflection. A decimal that stops is just a fraction where the denominator is some power of ten – a number that looks like one and then a finite string of zeroes. 1/3 can’t be written as such a fraction: ten isn’t a multiple of 3, and multiplying lots of tens together is never going to make it suddenly start being a multiple of 3.

But what does it really mean to say 1/3 = 0.333…? How can we add infinitely many things? And if we can let the expansions go off to infinity on the right, who says we can’t do that on the left?

The first question about infinity has been around for a long, long time – not infinitely long, though. If we take the expression 1/3 = 0.33333… that we obtained earlier, and triple it, we reach the conclusion that 1 = 0.99999… If we expand out that decimal as a sum, it says 0.9 + 0.09 + 0.009 + 0.0009…

In other words, if we are trying to reach a destination, we must first get nine-tenths of the way there. Once we have done that, we must traverse nine-tenths of the remaining distance. But once we have done that, we must traverse another nine-tenths of what remains. And if we believe Zeno of Elea, that means we must do infinitely many things to get anywhere.

This is not quite the place to consider the Planck length or any such obstacles to this line of reasoning in the real world, apart from off-handedly making the common-sense observation that we do in fact reach our destinations on a daily basis. For it seems incontrovertible that, even if we accept infinitely dividing our journey, at some point we will get as close to our destination as we want. After one step we are within a tenth; after two steps, we are within a hundredth; after three steps, within a thousandth. In fact, given any threshold we might desire, we can name a step after which we are that close to our goal. So, even if we believe we are never going to reach it in theory, we should probably believe that for all practical purposes we will arrive! So we dare to write:

0.9 + 0.09 + 0.009 + 0.0009 + … = 0.9999… = 1.

And what legitimises us to write this equality? Well: that is how we define summing infinitely many numbers! Suppose our enemy names a threshold ε and demands that we get within ε of the claimed sum. If our partial sums – what we get when we’ve summed only finitely many of the terms – are always less than ε away from it after a certain point, we win. If we can win against any ε our enemy throws at us, we have triumphed and may legitimately write the equality. In mathematics we call this convergence of a sum.
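
As an illustration of this game, here is a small Python sketch (using exact fractions, so no rounding gets in the way) that finds how few terms of 0.9 + 0.09 + 0.009 + … are needed to get within any threshold ε of 1:

from fractions import Fraction

def terms_needed(eps):
    # Add the terms 9/10, 9/100, 9/1000, ... until the partial sum is within eps of 1.
    partial, n = Fraction(0), 0
    while abs(1 - partial) >= eps:
        n += 1
        partial += Fraction(9, 10**n)
    return n

for eps in [Fraction(1, 100), Fraction(1, 10**6), Fraction(1, 10**12)]:
    print(eps, "->", terms_needed(eps), "terms")   # 3, 7, 13 terms respectively

Since the partial sums only ever increase towards 1, once we are within ε we stay within ε, which is exactly what the definition demands.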

It should be mentioned that we do not need to always go straight to our sum: we can go up and down around it. Like this:

1 – 1/3 + 1/5 – 1/7 + … = π/4 = 0.785398163…

The right-hand side happens to be one quarter of a very famous irrational number indeed: π, the ratio of a circle’s circumference to its diameter.
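
A quick numerical check (a Python sketch, not a proof) shows the partial sums landing alternately above and below π/4 while closing in on it:

import math

target = math.pi / 4
partial = 0.0
for n in range(8):
    partial += (-1)**n / (2*n + 1)        # next term: 1/(2n+1) with alternating sign
    side = "above" if partial > target else "below"
    print(f"{n+1} terms: {partial:.6f} ({side} pi/4 = {target:.6f})")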

And it should also be mentioned that just the numbers getting smaller is not sufficient. They have to get smaller quickly enough. The most famous case is the so-called harmonic series, which is:

1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + …

A famous proof by Nicole Oresme (c.1325-1382) shows that this cannot approach anything finite:

1 + 1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + …
> 1 + 1/2 + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + …
= 1 + 1/2 + 1/2 + 1/2 + …

And now it’s quite obvious that this will eventually get past anything we want, although it turns out that it takes quite a while for that to happen.
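
To see just how long “quite a while” is, here is a small Python sketch that counts how many terms of the harmonic series are needed before the sum first exceeds a given target (it takes over twelve thousand terms just to pass 10):

def terms_to_exceed(target):
    # Keep adding 1/n until the running total passes the target.
    total, n = 0.0, 0
    while total <= target:
        n += 1
        total += 1.0 / n
    return n

for target in [2, 5, 10]:
    print(target, "->", terms_to_exceed(target), "terms")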

What is now pretty obvious is why we only write the expansions going off to the right. When we write them going off to the right, at step n + 1 we are only correcting places smaller than 10⁻ⁿ, and so we must already be within 10⁻ⁿ of our destination. By picking n as large as we like, this becomes as small a threshold as we could need. If we were to write them going off to the left, then we would be adding bigger and bigger numbers. Obviously, that must mean we’re never going to zoom in on a goal!

Or is it obvious?

The sneaky thing here is the notion of distance. The notion of distance seems obvious indeed; but it is not the only possible one. If we can find a notion of distance in which the powers of ten do get smaller, then in fact we will be able to legitimately write an expression like …999.

Just what can this possibly be, for starters? Well, suppose we have found some notion of distance in which this sum converges. In that case, if we multiply it by ten, it ought to remain convergent:

…999 × 10 = (… + 900 + 90 + 9) × 10 = …+ 9000 + 900 + 90 = …9990.

But we can also see that this is just the same sum with the 9 at the end missing. In other words, if the sum is x, then 10x + 9 = x. That’s easy to solve, and we conclude that if …999 is to make any sense, it must be –1. This indeed makes some kind of sense, if the powers of ten are somehow approaching zero, since the partial sums are all one less than a power of ten.
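
We can check this claim finitely many digits at a time (again a Python sketch, not part of the essay): a number ending in n nines is exactly 10ⁿ − 1, so it differs from −1 by a multiple of 10ⁿ, i.e. it is within 10⁻ⁿ of −1 in the sense we are about to define.

for n in range(1, 7):
    nines = 10**n - 1                           # 9, 99, 999, ...
    print(nines, (nines - (-1)) % 10**n == 0)   # True: the difference is divisible by 10^n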

It now remains to find a definition of distance that legitimises this. Such a notion is called, in mathematics, the 10-adic metric.

Given any non-zero rational number, write it in lowest terms and pull out the highest power of ten dividing the numerator (or, if the tens are in the denominator, dividing that). This expresses the number as something multiplied by a power of ten, where the remaining part involves no factors of ten at all. For example:

7700 = 77 × 10²
0.91 = 91 × 10⁻²
5 = 5 × 10⁰
40/3 = 4/3 × 10¹

In general, we will write q = a × 10ᵏ. And then we will define the so-called 10-adic “absolute value” of a number to be |q|₁₀ = 10⁻ᵏ, unless q = 0, in which case it will be 0. The distance between two numbers p and q will, as we’re used to, simply be |p − q|₁₀.
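
For concreteness, here is a minimal Python sketch of this “absolute value”, restricted to non-zero integers; extending it to fractions would just subtract the count for the denominator, as in the examples above.

def abs_10(n):
    # |n| = 10^(-k), where 10^k is the highest power of ten dividing n.
    if n == 0:
        return 0.0
    k = 0
    while n % 10 == 0:
        n //= 10
        k += 1
    return 10.0 ** (-k)

print(abs_10(7700))    # 0.01  (7700 = 77 * 10^2)
print(abs_10(5))       # 1.0   (no factors of ten)
print(abs_10(123000))  # 0.001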

It’s not difficult to check that this satisfies the properties we expect of a distance: it’s zero precisely when the two numbers are the same, it doesn’t matter in which order you take the endpoints, and it stays the same if you shift both numbers by the same amount. And we can also check the triangle inequality (which basically says that to move from one point to another, it is never worth taking a detour via a third point: either that point was on the path anyway, so you save nothing, or it’s a genuine detour and you waste time):

|p − q|₁₀ + |q − r|₁₀ ≥ |p − r|₁₀

But in some sense it’s also really strange, because stringing together short moves can never take you further than the longest single move! In fact, something called the ultrametric inequality holds true here:

max(|p − q|₁₀, |q − r|₁₀) ≥ |p − r|₁₀

This works because: if 10ᵃ is the highest power of ten dividing p − q, and 10ᵇ the highest power dividing q − r, with b ≥ a, then clearly 10ᵃ divides their sum p − r. So the highest power of ten 10ᶜ dividing p − r satisfies c ≥ a. In fact, c = a unless a = b originally, in which case we may have c > a. So a picturesque consequence is that all triangles are isosceles!

What is the use of this, when it’s clear that distance doesn’t act this way in real life? Simply because this makes the 10-adic numbers encode information about remainders! If we only care about the remainders of numbers when divided by ten, that’s the same as looking at the last digit. If we care about them when divided by 100, that’s the same as looking at the last two digits, and so on. So in some sense, this is like looking at the remainders of numbers when divided by arbitrarily large powers of ten! Hence, though distance doesn’t act this way in real life, this really is helping us investigate a real property of numbers.
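
In other words, two integers are 10-adically close exactly when they agree in many of their final digits. A self-contained Python sketch of the distance makes this visible:

def dist_10(p, q):
    # 10-adic distance: 10^(-k), where 10^k is the highest power of ten dividing p - q.
    n = p - q
    if n == 0:
        return 0.0
    k = 0
    while n % 10 == 0:
        n //= 10
        k += 1
    return 10.0 ** (-k)

print(dist_10(123456, 996456))   # 0.001: the last three digits agree
print(dist_10(7, 8))             # 1.0:   not even the last digit agrees
print(dist_10(1, 10**9 + 1))     # 1e-09: the last nine digits agree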

With distance defined thus, infinite expansions going off to the left have meaning, and now it is the ones going off to the right that don’t converge! So, we now have just the opposite situation to normal numbers: we can now have as many digits as we want to the left of the decimal point, but only finitely many to the right!

Unfortunately we have hit a little snag. The 10-adic numbers are flawed: one can find non-zero 10-adic numbers p and q whose product is zero! Basically, the problem is that zero acts like an infinitely high power of ten here, but ten is composite. So if we can find numbers acting like an infinitely high power of two and an infinitely high power of five, they will have zero as their product. That’s also why we put “absolute value” in quotation marks: it’s not a real absolute value, because it’s not multiplicative. One can find two numbers such that the product of the absolute values is not the absolute value of the product.

It’s not too hard to find such numbers. Start with 5 and keep squaring, at each step keeping one more of the final digits:

5² = 25 (now square the last 2 digits)
25² = 625 (now square the last 3 digits)
625² = 390625 (now square the last 4 digits)
0625² = 390625 (and now the last 5)
90625² = 8212890625 (and now the last 6) …

Eventually, the last few digits stabilise, and we have the expansion of a zero divisor that is not itself zero:

…557423423230896109004106619977392256259918212890625
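
The same squaring trick is easy to reproduce by computer (a Python sketch; the modulus 10³⁰ is just an arbitrary cut-off): keeping only the last n digits at each stage, the trailing digits stop changing, and the resulting number multiplies with its neighbour to give zero.

x = 5
for n in range(2, 31):
    x = pow(x, 2, 10**n)            # square, then keep only the last n digits
print(x)                            # ends ...259918212890625, as above
print(pow(x, 2, 10**30) == x)       # True: x^2 and x share their last 30 digits
print(x * (x - 1) % 10**30)         # 0: x times (x - 1) ends in 30 zeroes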

This problem can be fixed by changing the base: if we work in a prime base rather than ten, this never happens. Even better, we now get an actual absolute value, in the sense that the absolute value of a product is guaranteed to be the product of the absolute values (which, in the 10-adic case above, it isn’t).

These so-called p-adic numbers are extremely useful “extra worlds” that are analogous to the real numbers. Many different absolute values can be imposed on the rational numbers, with just one of them corresponding to how we make the real numbers out of the rationals. If we apply the same idea to the others (throw in the limits of all sequences that ought to converge, i.e. infinite expansions in base p), we get the p-adic numbers for each prime p. Because of this analogy, we can actually do calculus on the p-adics!

So, in some sense, the reals are very special!
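
As a final sketch (Python again, with small integers standing in for the general construction), here is the p-adic valuation vₚ(n), the exponent of the highest power of the prime p dividing n. The corresponding absolute value is multiplicative precisely because these exponents add up when numbers are multiplied:

def val_p(n, p):
    # v_p(n): exponent of the highest power of the prime p dividing the non-zero integer n.
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

a, b, p = 12, 45, 3
print(val_p(a, p), val_p(b, p), val_p(a * b, p))      # 1 2 3
print(val_p(a * b, p) == val_p(a, p) + val_p(b, p))   # True, so |ab| = |a| × |b|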
