Abel Prize 2020

Congratulations to Gregory (Grisha) Margulis and Hillel (Harry) Furstenberg on being awarded the 2020 Abel Prize. The prize is one of the most prestigious in mathematics and is presented annually by the Norwegian Academy of Science and Letters.

The official announcement states that Margulis and Furstenberg were awarded the prize “for pioneering the use of methods from probability and dynamics in group theory, number theory and combinatorics” and their work is described by Hans Munthe-Kaas, chair of the Abel committee, as “bringing down the traditional wall between pure and applied mathematics”. So, who are they?

Gregory Margulis

Born in Moscow in 1946, Margulis gained international recognition aged only 16 when he received a silver medal at the International Mathematical Olympiad. He began his academic career at Moscow State University, working towards his PhD under the supervision of 2014 Abel Prize Laureate Yakov Sinai. At the age of 32, he was awarded the 1978 Fields Medal for his work on the ‘arithmeticity and superrigidity theorems’, but was unable to travel to Finland to receive the medal as the Soviet authorities refused to provide him with a visa.

Another major result followed in 1984 with his proof of the Oppenheim Conjecture – a problem in number theory first stated in 1929. The ideas he introduced here centred on what is known as ‘ergodic theory’ (more on this later), and have since been used by three recent Fields Medallists: Elon Lindenstrauss, Maryam Mirzakhani and Akshay Venkatesh. In 2008, Pure and Applied Mathematics Quarterly ran an article listing Margulis’s major results which ran to more than 50 pages.


Hillel Furstenberg

Due to the vast range of ideas published in his early work, Furstenberg was originally thought to be a pseudonym for a group of mathematicians; in fact he is a single mathematician with a deep technical knowledge of countless areas of mathematics. He published his first papers as an undergraduate in 1953 and 1955, with the latter giving a topological proof of Euclid’s famous theorem that there are infinitely many prime numbers.

One of his key results came in 1977 when he used methods from ergodic theory to prove a celebrated result by 2012 Abel Prize Laureate Endre Szemerédi on arithmetic progressions of integers. The insights that came from his proof have led to numerous important results, including the recent proof by Ben Green and Terence Tao that the sequence of prime numbers includes arbitrarily large arithmetic progressions.


So, what is ergodic theory?

Ergodic theory relates to probability and what we call ‘random walks’, best explained by thinking about a dog trying to find some treats buried in a garden…

If you hide some treats in your garden and let your dog try to find them, it will most likely start sniffing around in a seemingly random pattern. Yet, after a short period of time, the dog will more often than not find the treats. The search may not look systematic, but the dog is following an instinct that tells it to change direction at random at regular intervals to maximise its chance of success. You can think of it as moving one step forwards, then flipping a coin to decide whether to go left or right for one step, and repeating this indefinitely.

In maths, the dog’s behaviour is encoded in the concept of a random walk: a path consisting of a succession of random steps in some mathematical space. Numerous physical systems are modelled by random walks: the behaviour of gas molecules, stock markets, the statistical properties of neurons firing in the brain… But random walks can also be used as a tool to explore a mathematical object, in the same way that the dog explores the garden. Of course, Hillel Furstenberg and Gregory Margulis are not using random walks to find treats in a garden – they perform random walks on graphs or on groups in order to reveal the secrets of these objects.
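The forwards-then-coin-flip walk described above is easy to simulate. Here is a minimal sketch in Python (the function name and step conventions are just for illustration):

```python
import random

def random_walk(n_steps, seed=0):
    """Move one step forwards, then flip a coin to step left or
    right -- repeated n_steps times, starting from the origin."""
    rng = random.Random(seed)
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(n_steps):
        y += 1                                # one step forwards
        x += 1 if rng.random() < 0.5 else -1  # coin flip: right or left
        path.append((x, y))
    return path

path = random_walk(10)
print(path[-1])  # the dog's position after ten forwards-and-sideways moves
```

Running this a few times with different seeds shows the characteristic spread of a random walk: the forward coordinate always grows by one per step, while the sideways coordinate drifts unpredictably.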

If the trajectory of the dog is ergodic, the dog will eventually get close to the treat. In fact, if we were to draw a circle of any size around the treat (even as small as you can possibly imagine), then after some finite amount of time the dog will be sniffing inside the circle, and will therefore probably discover the treat. This is ergodic theory in a nutshell.
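We can see this "eventually enters any circle" behaviour in a toy simulation: a random walk confined to a finite grid (our garden) will, given enough time, come within any chosen radius of the treat. Everything here – the grid size, start corner, treat position – is invented for illustration:

```python
import random

def steps_until_near(treat, radius, size=20, seed=1, max_steps=10**6):
    """Walk randomly on a size x size grid (the 'garden'), starting
    from a corner; return the number of steps taken until the walker
    first comes within `radius` of the treat."""
    rng = random.Random(seed)
    x = y = 0
    for step in range(1, max_steps + 1):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), size - 1)  # walls keep the dog in the garden
        y = min(max(y + dy, 0), size - 1)
        if (x - treat[0]) ** 2 + (y - treat[1]) ** 2 <= radius ** 2:
            return step
    return None  # not reached within max_steps (vanishingly unlikely here)

print(steps_until_near(treat=(15, 15), radius=1))
```

However small you make `radius`, the walk still gets there in finite time – it just takes longer, which is exactly the ergodic picture sketched above.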

More information on the Abel Prize announcement can be found on the website of the Norwegian Academy of Sciences and Letters here or in the official citation here.

How do tissues grow?

Interview with Edouard Hannezo from the University of Cambridge for the Naked Scientists. You can listen to the full interview here.

This year marks the 100th anniversary of the landmark publication On Growth and Form by D’Arcy Wentworth Thompson, which describes the mathematical patterns seen across the natural world, including in shells, seeds and bees. Now, a new study from the University of Cambridge has used the same ideas of self-organisation to give an elegant solution to a problem that has taxed biologists for centuries: how complex branching patterns arise in tissues such as the lungs, kidneys and pancreas. The answer involves some very simple maths…

Edouard – We are a team of physicists and we’ve been working with developmental biologists in order to understand how complex organs are formed during development. What we’ve found, using real organ reconstructions and mathematical modelling, is that there are incredibly simple mathematical rules, conserved across several organs, that allow organs to self-organise in a fundamentally random manner. This means that organs don’t follow a precise blueprint; rather, each cell making up an organ behaves in a very random manner and communicates with its neighbours in a simple way in order to generate a mature organ.

Tom – So in some sense each cell is kind of doing its own thing and then somehow all of the cells together give you your organ?

Edouard – Exactly, and that’s something that has been widely studied in physics. For example, if you think about a tsunami wave, individual water molecules do not know that they’re forming a wave that’s moving cohesively. Each molecule just moves around randomly, and it’s only when you put a lot of water molecules together in a very specific way that these self-reinforced structures suddenly form. It’s exactly the same in biology: each cell behaves randomly, and it’s through very simple interactions with its neighbours that cells are able to self-organise into complex patterns.

Tom – How do we end up with these incredibly complicated structures such as the lungs and the kidneys?

Edouard – What we found is that even though the global appearance of tissues such as kidneys and mammary glands is broadly similar, if you look in detail they are actually like snowflakes – no two organs can be superimposed exactly. And that’s a signature of the fact that the underlying mechanism is fundamentally random, and that from random disorder these cells are able to self-organise into something that is robust overall, but never exactly the same.

Tom – A lung, roughly speaking, is about the same between most people. If it is as random as you’re suggesting, how do these cells know when to stop once they reach a certain size, or how to form a lung?

Edouard – That’s the key rule that we’ve proposed in this article. Cells proliferate and grow randomly in all directions, and of course they need to know when to stop – you want your organs to have a given size, not much less and not much more. The rule we’ve shown is that even though each cell explores space completely randomly and divides randomly, it is able to measure the local density: if it arrives in a place that’s a bit too crowded, it doesn’t try to keep growing – it just stops, and stops growing forever. With this density sensing, the organ is able to know which regions are already dense enough – and shouldn’t grow any more – and which regions are not dense enough and should keep growing. It’s this intrinsic self-correcting tool that allows for the self-organisation of organs, and that allows organs to develop robustly from a series of random interactions between cells.

Tom – So can we imagine for a second that we are one of these tissues growing? Can you talk me through the process that that tissue would undergo as it develops into an organ?

Edouard – You can imagine a tree that starts with a single bud on top of a single trunk. This bud is going to start growing, exploring in a random direction. Fairly frequently these buds divide and give rise to two branches, then four branches and then eight branches, exactly as in a tree. Left alone, this would never stop, and it’s only thanks to crowding-induced termination that some of the tips switch off and stop growing, while other tips at the outer edges of the organ have access to low-density regions and continue growing. There’s actually a pretty strong resemblance to a real tree, where the tips that have access to the sun continue growing whereas tips that are overshadowed stop.
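The "branch randomly, stop when crowded" rule described in the interview can be caricatured in a few lines of Python. To be clear, this is not the researchers’ actual model (which works with real 3D organ reconstructions); it’s a toy lattice sketch, and every name and parameter below is invented for illustration:

```python
import random

def grow_branching(n_rounds, branch_prob=0.2, max_neighbours=3, seed=2):
    """Toy sketch of crowding-induced termination: active tips take
    random steps on a 2D lattice and occasionally branch in two; a tip
    that lands somewhere too crowded stops growing forever."""
    rng = random.Random(seed)
    occupied = {(0, 0)}   # sites the 'tissue' has filled
    tips = [(0, 0)]       # actively growing tips
    for _ in range(n_rounds):
        new_tips = []
        for (x, y) in tips:
            n_children = 2 if rng.random() < branch_prob else 1  # branch?
            for _ in range(n_children):
                dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                nx, ny = x + dx, y + dy
                # local density: occupied sites in the 3x3 neighbourhood
                crowd = sum(1 for (ox, oy) in occupied
                            if abs(ox - nx) <= 1 and abs(oy - ny) <= 1)
                if crowd <= max_neighbours:   # low density: keep growing
                    occupied.add((nx, ny))
                    new_tips.append((nx, ny))
                # otherwise this tip terminates for good
        tips = new_tips
    return occupied

tissue = grow_branching(30)
print(len(tissue))  # number of lattice sites the toy tissue occupies
```

Even in this crude version you can see the mechanism at work: tips buried in the dense interior switch off, while tips on the low-density frontier keep branching outwards – and changing the seed gives a differently shaped but similarly sized structure, echoing the snowflake point made earlier.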

Tom – And in terms of other applications of this kind of work beyond just understanding how our tissues develop, have you given any thought to other ways this could be used?

Edouard – One thing that we’ve started to look at in the paper is the question of developmental disorders. In particular, in the kidney there are quite a few conditions in which, unfortunately, the kidneys stop growing before they are fully mature, so patients are born with kidneys which are much smaller than normal. We therefore wonder whether the fundamental randomness in organ development could explain pathological cases such as these developmental disorders.

