If you’ve been following along with my earlier posts on the complex number system, then you’ve gotten a taste of the useful applications that \( i, \) in all its convoluted glory, presents in trigonometry, algebra, and more. But while you might be on board with treating complex numbers as a numerical trick that happens to have practical applications, it may still feel like a stretch to treat them as numbers in their own right, with their own properties and laws. That’s why today we’re going to reconstruct our number system all the way from the first inklings of addition and observe first-hand how imaginary numbers play a central role in the elementary algebra we use today, thanks to the powerful property of closure. Let’s dive in!
Objects and Operations
After spending most of our lives working with numbers in some way or another, and especially after years of school spent waist-deep in how we use them, it can feel strange to step back and observe what exactly we’re using.
Well, the answer to that is that we’re working with a language. A representation, sometimes of reality and other times of some other abstract structures of logic. I want to address this before we begin, because we’re about to go on a journey through thousands of years of math, and it’s important to realize that this linguistic model is very applicable when considering how math evolves: Someone desires a way to express a certain idea through a new number or concept, then they spend the next dozen years grappling with how this new concept fits with the sprawling web of language around it. These actions can be broken down into and classified as “operations,” and they work on objects, which we can define as “things you perform operations on.”
One of these groups is the elementary algebraic operations, which are all the ideas most people think of as everyday math: Addition, subtraction, multiplication, division, and rational exponentiation. Most people don’t take any issue with these concepts on their own, but when faced with the implications that they have on the number system they lie in, it’s been a different story throughout history.
Mathematical Closure
The idea of closure is important in relationships and mathematics, and is often difficult to find in both. But even if you’ve never heard of it in the context of math before, it’s impossible not to use it. Whenever we add two numbers and plan on getting a third, that’s closure hard at work behind the scenes keeping everything running smoothly. What do I mean by this?
Well, imagine if I were to add together 2 random whole numbers and the output was a 4-dimensional vector. Or infinity. Or a matrix of infinities! The logic of that system would be completely off-the-rails. The fact that we know that any two whole numbers will add up to another whole number is what it means for the set of all whole numbers to be “closed under addition.” And it’s insanely useful. Remember, math is a logic system, and set closure ensures that all the objects in that set will follow the same rules and possess the same properties as before.
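If you like seeing ideas like this in code, closure is easy to spot-check: pick an operation, feed it pairs from a set, and see whether the results stay inside. Here’s a minimal Python sketch (the helper `closed_under` and its sampling approach are my own illustration, and checking a finite sample is evidence, not a proof):

```python
from itertools import product

def closed_under(op, sample, in_set):
    """Spot-check closure of `op` over every pair drawn from `sample`.

    Returns a failing pair as a witness, or None if no counterexample
    was found in the sample (evidence, not a proof!).
    """
    for a, b in product(sample, repeat=2):
        if not in_set(op(a, b)):
            return (a, b)
    return None

naturals = range(0, 50)
is_natural = lambda n: isinstance(n, int) and n >= 0

# Addition never escapes the naturals in our sample...
print(closed_under(lambda a, b: a + b, naturals, is_natural))  # None

# ...but subtraction does: the first witness found is 0 - 1 = -1.
print(closed_under(lambda a, b: a - b, naturals, is_natural))  # (0, 1)
```

The subtraction witness is exactly the kind of escape the next sections chase through history.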
This might sound like a ridiculously abstract concern not worth pondering: Why would we ever run into this problem? It’s obvious that adding two whole numbers won’t break all of mathematics, right? Well, yes, you’re right. Although it still has been rigorously demonstrated just to be sure (see Peano axioms for a real rabbit hole of mathematical logic), we don’t have to worry about needing to dream up a whole new number system every time we add two natural numbers together. Unfortunately, the same isn’t, or at least wasn’t, true for subtracting them.
The Original Imaginary Numbers
The idea of negative numbers is one that we use in everyday math without a second thought. Taking away things is just as reasonable as adding things, so subtraction was always accepted as a valid operation by most ancient societies. What was less accepted were the consequences that came with it. Because if we apply our idea of closure to the whole numbers, we can quickly see the cracks start to form in our seemingly complete system:
\( 1-2=-1 \)
The two inputs are clearly natural numbers, and yet the operation of subtraction takes them and spits out an entirely new object: the negative number. Going back to our idea of closure, we conclude that the natural numbers are NOT closed under subtraction, and that the whole numbers are an incomplete system of algebra.
At least, you’d think that would be the obvious conclusion. But, while some societies like 2nd century BCE China adopted negative numbers as a tool for calculating debt, at the same time in Ancient Greece its most famous mathematicians remained willfully ignorant of this prospect. In Euclid’s revered Elements, where he lays out the foundations of geometry and rigorous proof, he takes special care to define subtraction as “The less of two unequal numbers […] being continually subtracted from the greater” to avoid having to grapple with the idea of negative quantities. Even centuries later when Greek mathematician Diophantus was outlining the basic rules of algebra that we use today, he wrote off the solution to his own equation, \( 4=4x+20 \) as “absurd”— not so different from “imaginary,” right? But why were the Greeks, in all their mathematical prowess, so against the idea of negative numbers?
The answer lies in their philosophy. See, the Greeks were geometers at heart, and this extensive focus on the mathematics of what they could see led them to develop a very limited idea of what numbers were. To them, the natural numbers were a system of physical measurement, and what would it mean to measure a length of -1, or to have a sphere with volume \( -\frac {4}{3}\pi \)? By contrast, ancient China’s use of numbers as a calculation tool for things like debt meant that negative quantities were as reasonable as the idea of owing 500 dollars to the state bank, and so they accepted the idea of less than nothing far quicker than Grecian mathematicians could. Ultimately, what our numbers represent is whatever we decide their logic fits, and while no physical quantity can be less than 0, that doesn’t mean that other concepts can’t be.
The Rise and Fall of Rationals
Let’s revisit our closure game. Clearly, the whole numbers aren’t going to cut it, so we have to expand our number system to include all the negative numbers too, leading us to our new set that is closed under addition and subtraction: the set of integers. This set is considerably more powerful, but it was only by the end of a slew of innovative mathematics by Indian and Islamic scholars from the 600s to 900s CE that even its basic properties were laid out; as it turns out, creating new math takes a lot of work, and this kind of process is necessary every time we find a hole in our number system. As we know though, this endeavor certainly had its uses, and Arabic and Hindu scholars utilized their new numbers to make remarkable strides in solving quadratic and cubic equations with positive and negative roots alike.
With that being said, it sure isn’t easy to reconstruct math, so let’s just hope that this problem doesn’t crop up again and continue our journey through history. Fortunately, multiplication plays nicely with the integers, and it’s easy to see that multiplying two integers will produce another integer. But where multiplication remains contained, division wreaks havoc, and once again a simple expression drags us out of our comfortable new home for the number system:
\( 1 \div 2 = 0.5 \)
This problem is present whenever our two integers don’t share common factors, meaning the integers are not closed under division. Thankfully, the idea of subdividing numbers is natural enough that it was a common practice in Ancient Egyptian mathematics, but it’s important to remember that this ease of acceptance wasn’t because fractions made any more logical sense than negatives (in fact, we’ve seen that a world with subtraction makes them pretty illogical to ignore); they just happened to carry more physical meaning. If the whole universe were composed of discrete, unbreakable chunks separated in space by perfectly defined units, then maybe fractions would be just as hard to swallow as negatives, simply because there would be nothing to divide.
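We can watch this escape happen with Python’s exact `fractions.Fraction` type (purely a modern illustration of the idea, of course):

```python
from fractions import Fraction

# Dividing two integers with no common factor leaves the integers:
q = Fraction(1, 2)
print(q)                   # 1/2
print(q.denominator == 1)  # False -- not an integer anymore

# The rationals, by contrast, stay closed under division by nonzero values:
print(Fraction(3, 4) / Fraction(5, 7))  # 21/20 -- still a rational
```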
Centuries later during the Golden Age of Grecian mathematics, although scholars still preferred to treat fractions as ratios for their geometric concerns, the practical effect was similar enough that they formed their own branch of numbers. In fact, by the 6th century BCE the so-called rational number system was revered by Pythagoras and his followers, who viewed it as the key to understanding the universe. In their defense, the system appeared perfect: Between any two known rational numbers, they could construct an infinite amount of new rationals simply through division into smaller and smaller chunks, making the number line appear a continuous, harmonious stream. They elevated this perfection to a mystical level, forming the equivalent of a math cult (Ancient Greece was weird) that regarded this system as the key to analyzing and comprehending everything from the stars to the depths of human emotions.
So they were understandably frustrated when it failed to analyze a square.
The Rationale of Irrational Numbers
In a perfectly executed case of dramatic irony, the discoverer of this unholy flaw was a Pythagorean himself, and what’s more, he used Pythagoras’s most famous theorem to expose it! The whistleblower was Hippasus of Metapontum, a student of Pythagoras who sought the length of a unit square’s diagonal; in other words, the most basic application of the Pythagorean theorem there is.
Plugging in the numbers gave him \( \sqrt{2} \), a well-known and worshipped value amongst the Pythagoreans, and all was as expected. But despite its fame, no one had ever proven its rationality; it was simply assumed to be self-evident at the time. Try as he might, Hippasus was unable to find a ratio of two integers that resulted in the ethereal constant, and doubt started to creep into his mind. It’s debated what method Hippasus used to poke the first hole in the rational number system, so we’ll showcase Euclid’s famous proof here instead, written centuries later in his Elements.
Euclid’s Proof
The proof uses the method of contradiction: it assumes the square root of two is rational, then takes the consequences of that assumption and, through a chain of cunning reasoning, causes an obviously illogical statement to emerge, something as blatantly false as all numbers being equal, or 2 being an odd number. If each step is true, then the conclusion is that the premise must be false, and the proof is complete. But to get the ball rolling, we have to leverage some of our claim’s basic properties. A rational number is defined as a fraction of two integers, so if we apply this to the square root of 2 we get the statement below:
\( \sqrt{2}=\frac pq \), where p and q are integers with no common factors.
Eliminating the square root by squaring both sides and then multiplying both sides by \( q^2 \) we find the following:
\( 2=\frac {p^2}{q^2} \)
\( 2q^2=p^2 \).
Here’s where it gets interesting: because \( p^2 \) is 2 times the integer \( q^2 \), it must be an even number itself. An important property of perfect squares is that if \( p^2 \) is even, so is \( p \) (why? Hint: think about when the product of two numbers is even).
Why is this useful? Well, this means that we can use our definition of even numbers again to write \( p \) as 2 times some other integer \( h \), and plugging this into our expression gives us a startling result.
\( p=2h \), where h is an integer.
\( 2q^2=(2h)^2=4h^2 \)
\( q^2=2h^2 \)
This shows us that \( q^2 \) must also be even… and therefore so must \( q \) by the same logic we used earlier!
\( q=2k \), where k is an integer.
And that’s the proof. Don’t see it? The contradiction is sneaky, but if we substitute these new values for p and q back into our original expression for \( \sqrt{2} \) we find the following:
\( \sqrt{2}=\frac {2h}{2k}=\frac hk \)
Remember how we explicitly ensured that p and q had no common factors? Well, we just pulled a 2 out of both p and q, meaning we’ve found our contradiction: if we repeat this argument, we can keep finding more supposedly reduced fractions for the same number. This absurdity forces us to declare the premise false: there is no fraction of two integers equal to the square root of 2, and thus the rational numbers are not closed under rational exponentiation.
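We can’t illustrate the contradiction by exhibiting a failing \( \frac pq \) (the whole point is that none exists), but exact rational arithmetic lets us watch candidate fractions creep ever closer to \( \sqrt{2} \) without ever landing on it. This Python sketch uses the classic continued-fraction recurrence for the convergents of \( \sqrt{2} \) (a standard identity, not part of Euclid’s proof):

```python
from fractions import Fraction

# Convergents p/q of sqrt(2) follow the recurrence p, q -> p + 2q, p + q:
# 1/1, 3/2, 7/5, 17/12, ...
p, q = 1, 1
for _ in range(25):
    p, q = p + 2 * q, p + q
    r = Fraction(p, q)
    # Exact arithmetic: r squared is never exactly 2, in line with
    # Euclid's conclusion that no such fraction exists.
    assert r ** 2 != 2
print(r, float(r ** 2))  # arbitrarily close to 2, but never equal
```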
The Pythagoreans took this about as well as you might expect… after they drowned Hippasus at sea, they wrote off his proof as heresy and vowed never to speak of its unholy conclusion. The theory of irrational numbers poked holes not only in Pythagorean philosophy but also in their conception of mathematics: a complete description of the properties of rational numbers was still absent in Ancient Greece, where these numbers were still viewed as ratios rather than measures themselves, so the idea that there were measures of real, tangible geometric shapes that couldn’t be expressed in this language of ratios was devastating. Once again, the limitations of how the Greeks viewed numbers led them to denounce their own discovery as “irrational.”
Polynomials and Radicals
In the centuries following the collapse of Ancient Greece, mathematicians in India and the Middle East were hard at work tackling the many cans of worms Greek mathematicians opened and ignored in number theory. The Hindu mathematician Brahmagupta devised the first set of rules for working with negative numbers and zero in the 7th century CE, and as we mentioned before, the foundations of algebra were pioneered throughout these regions in the forms of polynomials and their corresponding solutions.
Going back to Algebra 1, a single-variable polynomial is a sum of terms built from a variable “x” using only addition, subtraction, multiplication by rational coefficients, and powers with natural number exponents: in other words, it performs a combination of these operations on some number “x” and spits out the result. Applying all our new knowledge of closure to this string of operations, what do we know about the value of the polynomial below, even with no idea what the values of the various letters are?
\( ax^4+bx^3+cx^2+dx+e\).
Well, we know that performing these operations on any rational or irrational numbers will result in another number of one of these forms (natural number exponentiation is just repeated multiplication, which we’ve already seen closure demonstrated for), so the output must share this condition too: the set of all outputs given any rational or irrational inputs is closed under the operations that built it, and we call it the set of real numbers.
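Horner’s rule makes this concrete: evaluating any polynomial only ever requires addition and multiplication, so rational inputs and coefficients can never escape the rationals. A small Python sketch with exact arithmetic (the helper `poly` and the sample coefficients are my own):

```python
from fractions import Fraction

def poly(coeffs, x):
    """Evaluate a polynomial (highest-degree coefficient first) by
    Horner's rule: only addition and multiplication ever happen."""
    result = Fraction(0)
    for c in coeffs:
        result = result * x + c
    return result

# a*x^4 + b*x^3 + c*x^2 + d*x + e with rational coefficients
coeffs = [Fraction(1), Fraction(-2), Fraction(0), Fraction(3, 4), Fraction(5)]
y = poly(coeffs, Fraction(1, 2))
print(y, type(y).__name__)  # 83/16 Fraction -- a rational in, a rational out
```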
The Inevitability of i
Hopefully, the end goal of understanding imaginary numbers is starting to become clearer. If you’ve read about how and why imaginary numbers were first created, then you’ll remember that, while the outputs of these polynomials may always be real, the inputs corresponding to a particular real output weren’t always so friendly. We’ve discussed the infamous quadratic \( y=x^2+1 \) before, and right away it should be clear that our set closure falls apart; we can locate the exact operation that pushes the real numbers to their limit by attempting to solve for its roots.
\( 0=x^2+1 \)
\( -1=x^2 \)
\( \pm\sqrt{-1}=x \).
Sure enough, this anomaly makes it clear that rational exponentiation shatters our bubble of neat numbers once again.
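In modern terms, we can watch the real numbers fail to contain these roots by stepping into Python’s complex-aware `cmath` module. A quick sketch applying the quadratic formula (the variable names are mine):

```python
import cmath

# The quadratic formula applied to x^2 + 1 = 0 (a=1, b=0, c=1):
a, b, c = 1, 0, 1
disc = cmath.sqrt(b * b - 4 * a * c)   # sqrt(-4) = 2j -- no real answer exists
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)  # (1j, -1j)

# Both roots really do satisfy the original equation:
print([r ** 2 + 1 for r in roots])  # [0j, 0j]
```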
Creating New Numbers
Mathematicians at the time rejected \( i \) vehemently because, let’s be honest, they were probably pretty sick of rewriting math for every smug new proof that floated their way, but we can look back and avoid the same pitfall of skepticism. We’ve seen that the logic behind numbers is entirely removed from their physicality, with negatives dissolving the notion of numbers being tangible objects, rationals eliminating the idea of indivisible unit lengths, and irrationals being numbers impossible to express in the familiar language of ratios. All that imaginary numbers are doing is lopping off one more link to reality, the idea that numbers have to be placed on a spectrum from least to greatest, while in return supplying the missing piece to our most widespread algebra.
Numbers and Fields
At this point a good question may come to mind: Why don’t we make new numbers all the time? What’s to stop any old person from dreaming up a new number system and winning a Fields Medal? And the answer is… nothing! If you can create a number that remains consistent with the web of mathematics that surrounds it, then congratulations, and be sure to let them know who gave you the idea! Modern algebra is home to every variety of exotic contortion of logic, some with practical applications, others infinitely far from it. But, if the history of our numbers is anything to go by, creating consistency is easier said than done.
To elaborate further, we have to ask a question that’s seemed obvious this whole time: what even is a number?
As far as mathematicians are concerned, the definition is straightforward and pretty uninteresting: a number can be any member of a collection of objects, or set, typically one satisfying the properties that define a group: associativity, closure, an identity element, and inverses. But that includes things like matrices. Or functions. Or heck, rotations of a cube. As far as a traditional understanding of numbers goes, this is way too broad a definition.
Alternatively, we can rely on a concept in abstract algebra called a “field.” Fields refer to any group of objects that have defined addition, subtraction, multiplication, and division while satisfying the following axioms:
- associativity
- commutativity
- distributivity
- identity elements
- inverses
These operations are all familiar, and the axioms eliminate things like vectors and matrices, which don’t have well-defined multiplication and division satisfying them, while including the full set of complex numbers. Unfortunately, the integers don’t qualify either: their multiplication is perfectly consistent, but they lack multiplicative inverses, which is why we had to step outside of them to the field of rational numbers, where both 3 and 1/3 are defined. So even though the integers can be part of the field of complex numbers, as a self-contained number system they don’t fly either.
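As a sanity check rather than a proof, we can spot-check a few field axioms on random complex numbers, and watch the integers fail on inverses (the helper `rand_c` is my own):

```python
from fractions import Fraction
import random

random.seed(0)

def rand_c():
    # Small integer components keep complex arithmetic exact in floats,
    # so equality comparisons below are safe.
    return complex(random.randint(-5, 5), random.randint(-5, 5))

# Spot-check some field axioms on random complex numbers:
for _ in range(100):
    a, b, c = rand_c(), rand_c(), rand_c()
    assert a + b == b + a and a * b == b * a      # commutativity
    assert (a + b) + c == a + (b + c)             # associativity of +
    assert a * (b + c) == a * b + a * c           # distributivity
print("complex numbers pass the spot-check")

# The integers fail on inverses: 1/3 is not an integer.
print(Fraction(1, 3).denominator == 1)  # False -- we had to leave the integers
```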
There are lots of other number systems that are axed by this definition too: the higher-dimensional extensions of the complex numbers called quaternions and octonions have sufficiently well-defined operations to be considered groups, but the quaternions give up commutativity of multiplication, and the octonions give up associativity on top of that, making them the looser structure of a “division algebra.” And there are p-adic numbers and transfinite numbers… the list of strange number-esque structures goes on and on. Ultimately, what is and isn’t a number isn’t what’s important; what matters is the spaces in which we work with them and why. And thanks to algebraic closure, the way we work with our most familiar numbers is complete.
For what it’s worth though, imaginary numbers aren’t just an exercise in abstract logic like many fields of math tend to be: They have tons of applications to conventional algebra and geometry even at their most basic level, and knowing their fundamental role in algebra leaves us poised to hunt for more of these elegant and powerful connections. In a couple of posts we’ll perform a similar reconstruction of Euclidean geometry, and once again see how complex numbers are instrumental in this system too. See you then!