TRM intern and University of Oxford student Kai Laddiman speaks to St John’s College Computer Scientist Stefan Kiefer about the infamous million-dollar millennium problem: P versus NP.
You can read more about P vs NP here.
Maths, but not as you know it…
Mathematician Thomas Hales explains the Honeycomb Conjecture in the context of bees. Hales proved that the hexagonal tiling (hexagonal honeycomb) is the most efficient way to divide a surface into regions of equal area whilst minimising the total perimeter.
Produced by Tom Rocks Maths intern Joe Double, with assistance from Tom Crawford. Thanks to the Oxford University Society East Kent Branch for funding the placement and to the Isaac Newton Institute for arranging the interview.
The author H. P. Lovecraft often described his fictional alien worlds as having ‘Non-Euclidean Geometry’, but what exactly is this? And would it really break our brains?
Produced by Tom Rocks Maths intern Joe Double, with assistance from Tom Crawford. Thanks to the Oxford University Society East Kent Branch for funding the placement.
The year is 1888, and the infamous serial killer Jack the Ripper is haunting the streets of Whitechapel. As a detective in Victorian London, your mission is to track down this notorious criminal – but you have a problem. The only information that you have to go on is the map below, which shows the locations of crimes attributed to Jack. Based on this information alone, where on earth should you start looking?
The fact that Jack the Ripper was never caught suggests that the real Victorian detectives didn’t know the answer to this question any more than you do, and modern detectives are faced with the same problem when they are trying to track down serial offenders. Fortunately for us, there is a fascinating way in which we can apply maths to help us to catch these criminals – a technique known as geospatial profiling.
Geospatial profiling is the use of statistics to find patterns in the geographical locations of certain events. If we know the locations of the crimes committed by a serial offender, we can use geospatial profiling to work out their likely base location, or anchor point. This may be their home, place of work, or any other location of importance to them – meaning it’s a good place to start looking for clues!
Perhaps the simplest approach is to find the centre of minimum distance to the crime locations. That is, find the place which gives the overall shortest distance for the criminal to travel to commit their crimes. However, there are a couple of problems with this approach. Firstly, it doesn’t tend to consider criminal psychology and other important factors. For example, it might not be very sensible to assume that a criminal will commit crimes as close to home as they can! In fact, it is often the case that an offender will only commit crimes outside of a buffer zone around their base location. Secondly, this technique will provide us with a single point location, which is highly unlikely to exactly match the true anchor point. We would prefer to end up with a distribution of possible locations which we can use to identify the areas that have the highest probability of containing the anchor point, and are therefore the best places to search.
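The centre of minimum distance described above (also known as the geometric median) can be approximated numerically. Here is a minimal sketch using Weiszfeld's algorithm, with made-up crime coordinates purely for illustration – not the real Ripper data:

```python
import math

def centre_of_minimum_distance(points, iterations=100):
    """Approximate the geometric median of a set of (x, y) points
    using Weiszfeld's algorithm: the location minimising the total
    distance travelled to all of the points."""
    # Start from the centroid (the mean of the points).
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iterations):
        num_x = num_y = denom = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py)
            if d == 0:  # the estimate sits exactly on a crime location
                continue
            num_x += px / d
            num_y += py / d
            denom += 1 / d
        x, y = num_x / denom, num_y / denom
    return x, y

# Hypothetical crime locations on a simple coordinate grid.
crimes = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5)]
print(centre_of_minimum_distance(crimes))
```

As the article notes, this single point is only a starting guess – it ignores criminal psychology and gives no sense of how uncertain the estimate is.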
With this in mind, let’s call the anchor point of the criminal z. Our aim is then to find a probability distribution for z, which takes into account the locations of the crime scenes, so that we can work out where our criminal is most likely to be. In order to do this, we will need two things:

1. A prior distribution for z – our initial beliefs about where the anchor point might be, before we look at the crime locations.
2. A probability density function (PDF) describing where the criminal is likely to commit crimes, given their anchor point z.
We’ll see why we need these in a minute, but first, how do we choose our PDF? The answer is that it depends on the type of criminal, because different criminals behave in different ways. There are two main categories of offenders – resident offenders and non-resident offenders.
Resident offenders are those who commit crimes near to their anchor point, so their criminal region (the zone in which they commit crimes) and anchor region (a zone around their anchor point where they are often likely to be) largely overlap, as shown in the diagram:
If we think that we may have this type of criminal, then we can use the famous normal distribution for our density function. Because we’re working in two dimensions, it looks like a little hill, with the peak at the anchor point:
Alternatively, if we think the criminal has a buffer zone, meaning that they only commit crimes at least a certain distance from home, then we can adjust our distribution slightly to reflect this. In this case, we use something that looks like a hollowed-out hill, where the most likely region is in a ring around the centre as shown below:
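The two density shapes described above can be sketched in a few lines of code. This is a toy illustration only – the "hollowed-out hill" here is one simple choice (the normal density weighted by distance from the anchor), not the exact model used by real profiling software:

```python
import math

def resident_density(x, y, anchor, sigma=1.0):
    """2D normal density: the 'hill' that peaks at the anchor point."""
    ax, ay = anchor
    r2 = (x - ax) ** 2 + (y - ay) ** 2
    return math.exp(-r2 / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)

def buffer_zone_density(x, y, anchor, sigma=1.0):
    """A 'hollowed-out hill': the normal density weighted by distance
    from the anchor, so the most likely region is a ring around it."""
    ax, ay = anchor
    r = math.hypot(x - ax, y - ay)
    return r * resident_density(x, y, anchor, sigma)

# The resident model is highest at the anchor point itself...
print(resident_density(0, 0, (0, 0)) > resident_density(1, 0, (0, 0)))  # True
# ...while the buffer-zone model is zero at the anchor and peaks on a ring.
print(buffer_zone_density(0, 0, (0, 0)))  # 0.0
```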
The second type of offenders are non-resident offenders. They commit crimes relatively far from their anchor point, so that their criminal region and anchor region do not overlap, as shown in the diagram:
If we think that we have this type of criminal, then for our PDF we can pick something that looks a little like the normal distribution used above, but shifted away from the centre:
Now, the million-dollar question is: which model should we pick? Distinguishing between resident and non-resident offenders in advance is often difficult. Some information can be deduced from the geography of the region, but often assumptions are made based on the crime itself – for example, more complex or sophisticated crimes have a higher likelihood of being committed by non-residents.
Once we’ve decided on our type of offender, selected the prior distribution (1) and the PDF (2), how do we actually use the model to help us to find our criminal? This is where the mathematical magic happens in the form of Bayesian statistics (named after statistician and philosopher Thomas Bayes).
Bayes’ theorem tells us that if we multiply together our prior distribution and our PDF, then we’ll end up with a new probability distribution for the anchor point z, which now takes into account the locations of the crime scenes! We call this the posterior distribution, and it tells us the most likely locations for the criminal’s anchor point given the locations of the crime scenes, and therefore the best places to begin our search.
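The Bayesian update described above can be sketched on a discrete grid of candidate anchor points. This is a toy version with an assumed flat prior and a normal-style likelihood, not the model used by real profiling software:

```python
import math

def posterior_grid(crimes, prior, likelihood, size=20):
    """Bayes' theorem in discrete form: evaluate prior(z) multiplied by
    the likelihood of every crime location given z, for each candidate
    anchor point z on a grid, then normalise so the values sum to 1."""
    post = {}
    for zx in range(size):
        for zy in range(size):
            p = prior(zx, zy)
            for c in crimes:
                p *= likelihood(c, (zx, zy))
            post[(zx, zy)] = p
    total = sum(post.values())
    return {z: p / total for z, p in post.items()}

# Toy ingredients: a flat prior and a normal-style likelihood.
def flat_prior(x, y):
    return 1.0

def normal_likelihood(crime, anchor, sigma=3.0):
    d2 = (crime[0] - anchor[0]) ** 2 + (crime[1] - anchor[1]) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

crimes = [(5, 5), (7, 5), (6, 8)]
post = posterior_grid(crimes, flat_prior, normal_likelihood)
best = max(post, key=post.get)
print(best)  # the grid cell most likely to contain the anchor point
```

With a flat prior, the posterior mode lands at the grid point minimising the total squared distance to the crimes; a buffer-zone likelihood or an informative prior would shift it.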
This fascinating technique is actually used today by police detectives when trying to locate serial offenders. They implement the same steps described above using an extremely sophisticated computer algorithm called Rigel, which locates criminals with a very high success rate.
So, what about Jack?
If we apply this geospatial profiling technique to the locations of the crimes attributed to Jack the Ripper, then we can predict that it is most likely that his base location was in a road called Flower and Deane Street. This is marked on the map below, along with the five crime locations used to work it out.
Unfortunately, we’re a little too late to know whether this prediction is accurate, because Flower and Deane Street no longer exists, so any evidence is certainly long gone! However, if the detectives in Victorian London had known about geospatial profiling and the mathematics behind catching criminals, then it’s possible that the most infamous serial killer in British history might never have become quite so famous…
Francesca Lovell-Read
Is alien maths different from ours? And if it is, will they be able to understand the messages that we are sending into space? My summer intern Joe Double speaks to philosopher Professor Adrian Moore from BBC Radio 4’s ‘A History of the Infinite’ to find out…
The idea of complex numbers stems from a question that bugged mathematicians for thousands of years: what is the square root of -1? That is, which number do you multiply by itself to get -1?
Such a simple question has blossomed into a vast mathematical theory, for the simple reason that the answer isn’t real! It can’t be 1, as 1 * 1 = 1; it can’t be -1, as -1 * -1 = 1; whichever number you multiply by itself, you can’t get a negative number. Up until the 16th century, almost everyone ignored this issue; perhaps they were afraid of the implications it could bring. But then, gradually, people began to realise that there was a whole new world of mathematics waiting to be discovered if they faced up to the question.
In order to explain this apparent gap in maths, the idea of an ‘imaginary’ number was introduced. The prolific Swiss mathematician Leonhard Euler first used the letter i to represent the square root of -1, and as with most of his ideas, it stuck. Now i isn’t something that you’ll see in everyday life in relation to physical quantities, such as money. If you’re lucky enough to have money in your bank account, then you’ll see a positive number on your bank statement. If, as is the case for most students, you currently owe money to the bank (for example, if you have an overdraft), then your statement will display a negative number. However, because i is an ‘imaginary’ unit, it is neither ‘positive’ nor ‘negative’ in this sense, and so it won’t crop up in these situations.
Helpfully, you can add, subtract, multiply and divide using i in the same way as with any other numbers. By doing so, we expand the idea of imaginary numbers to the idea of complex numbers.
Take two real numbers a and b – these are the type that we’re used to dealing with.
They could be positive, negative, whole numbers, fractions, whatever.
A complex number is then formed by taking the number a + b * i. Let’s call this number z.
We say that a is the real part of z, and b is the imaginary part of z.
Any number that you can make in this way is a complex number.
For example, let a = -3 and b = 2; then -3 + 2*i, which we write as -3 + 2i, is a complex number.
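Many programming languages have complex numbers built in, so you can experiment with them directly. Python, for instance, writes the imaginary unit as j (an engineering convention) rather than i, but the arithmetic is exactly the same:

```python
# Python writes the imaginary unit as j rather than i, but the
# arithmetic is exactly the same as in the article.
z = -3 + 2j                  # the complex number -3 + 2i
print(z.real)                # -3.0  -> the real part a
print(z.imag)                # 2.0   -> the imaginary part b
print(complex(-3, 2) == z)   # True: another way to build the same number
```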
As we saw before, complex numbers don’t actually pop up in ‘real-life’ situations. So why do we care about them? The reason is that complex numbers have some very neat properties that allow them to be used in all sorts of mathematical contexts. So even though you may not see the number i in everyday life, it’s very likely that there are complex numbers involved behind the scenes wherever you look. Let’s have a quick glance at some of these properties.
The key observation is that the square of i is -1, that is, i * i = -1.
We can use this fact to multiply complex numbers together.
Let’s look at a concrete example: multiply 2 + 2i by 4 – 3i.
We use the grid method for multiplying out brackets:
    | 4           | -3i
2   | 2 * 4 = 8   | 2 * -3i = -6i
+2i | 2i * 4 = 8i | 2i * -3i = -6 * i * i = -6 * -1 = 6
Adding the results together, we get (2 + 2i)(4 – 3i) = 8 + 6 – 6i + 8i = 14 + 2i.
Therefore, multiplying two complex numbers has given us another complex number!
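We can check the grid-method calculation above with Python's built-in complex type:

```python
# Verify the worked example (2 + 2i)(4 - 3i) = 14 + 2i.
a = 2 + 2j
b = 4 - 3j
print(a * b)       # (14+2j), matching the grid-method answer

# The key fact i * i = -1, in Python's j notation:
print(1j * 1j)     # (-1+0j)
```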
This is true in general, and it turns out to be very handy. In fact, Carl Friedrich Gauss proved a very famous result – known as the Fundamental Theorem of Algebra because it’s so important – that effectively tells us that the solutions to all polynomial equations can be written as complex numbers. This is extremely useful because we know that we don’t have to go any ‘deeper’ into numbers; once you’ve got your head around complex numbers, you can proudly declare that you’ve mastered them all!
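A small instance of this: the quadratic formula always produces two roots once we allow complex arithmetic, even when the discriminant is negative. A minimal sketch using Python's standard cmath module:

```python
import cmath

def quadratic_roots(a, b, c):
    """Solve a*x**2 + b*x + c = 0. With complex arithmetic the formula
    always returns two roots, even when the discriminant is negative --
    a small instance of the Fundamental Theorem of Algebra."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + disc) / (2 * a), (-b - disc) / (2 * a))

print(quadratic_roots(1, 0, 1))    # x**2 + 1 = 0       ->  i and -i
print(quadratic_roots(1, -2, 5))   # x**2 - 2x + 5 = 0  ->  1+2i and 1-2i
```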
Because of this fundamental theorem, our little friend i pops up all over the place in physics, engineering, computer science, and of course, in all sorts of areas of maths. While it may only be imaginary, its applications can be very real, from air traffic control, to animating characters in films. It plays a really important role in much of theoretical mathematics, which in turn is used in almost every scientific discipline. And to think, all of this stemmed from an innocent-looking question about -1; what were they so scared of?!
Kai Laddiman