Infinite series are up there with some of the coolest objects in math. They’re at the heart of calculus, with Riemann sums and integration, and it’s no wonder they appear everywhere in analysis because of it. But what really cements their status in physics and other applied math is power series: sums of terms with higher and higher powers that can mimic functions with uncanny, eventually perfect accuracy. In this post, we’ll be taking a look at an anomaly in the “convergence” of one of these real power series, and how its solution lies outside the box and in the complex plane.
Real Taylor Series
If you’ve taken a Calculus course, infinite series might have felt like the black sheep of the course at first: in the AP curriculum for high school, at least, they’re tacked on as a sort of after-thought after derivatives and integrals and differential equations are all neatly wrapped up. One of the goals of it all, though, was to culminate in these: by repeatedly differentiating a function at a point, we gather more and more information about how this function changes, and with more info, our approximation improves and eventually converges.
The question of how the information acquired at just this one point can represent the entire spread of a function is far from trivial, and, if you’ve been following previous posts, has a pretty satisfying answer with the complex generalization of the derivative as a linear transform, but if we take for granted that this is possible then the actual computation is simple and extremely powerful. Here’s the general form of a Taylor series for an arbitrary function:
\sum^{\infty}_{n=0}c_n(x-a)^n=c_0+c_1(x-a)+c_2(x-a)^2+\cdots
Where \( c_n=\frac{f^{(n)}(a)}{n!} \):
=f(a)+f'(a)(x-a)+\frac{f''(a)}{2!}(x-a)^2+\cdots
For a quick refresher, let’s check how these polynomials actually approximate our chosen function around the point \( x=a \). The two things we need to match with our approximation are “what f is at that point, and where it’s going,” and we solve both by making sure our polynomial’s derivatives are all equal to the actual function’s: plugging in \( x=a \) leaves only the constant term, so our series equals the function at that point. Taking the 1st derivative drops the constant \( f(a) \) and makes \( f'(a) \) the new constant term, and if we evaluate at \( a \), every other term vanishes and leaves it behind as the matching derivative.
Generalizing this process to the nth derivative and adding a factorial to cancel out the extra factors that come from repeatedly applying the power rule (e.g. \( \frac{d^4}{dx^4}c_4x^4=4!c_4 \)), we end up with our series that grows more accurate the more terms are used. It’s simple, versatile, and an approximation that approaches perfection. (From here we’ll only be grappling with the math of Taylor series: for more on their power in physics and other sciences, see this video by Physics with Elliot for a detailed breakdown.)
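To watch this convergence happen numerically, here’s a quick Python sketch of my own (an illustration, not part of the series derivation above) that builds partial sums of the Maclaurin series for sine and closes in on the true value:

```python
import math

def sin_partial_sum(x, n_terms):
    """First n_terms of the Taylor series of sin(x) about a = 0:
    sum of (-1)^k x^(2k+1) / (2k+1)!"""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

x = 2.0
for n in (1, 2, 4, 8):
    print(n, sin_partial_sum(x, n))   # -> 2.0, 0.667, 0.908, 0.909297...
print(math.sin(x))                    # 0.909297..., matched by the 8-term sum
```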
The Convergence Paradox
Well, for the most part, anyway. Although some functions like the sine function above and the exponential function have approximations that converge everywhere, others can only do so within a certain interval. And there are a couple of things about this interval that are bizarre.
For these and many other functions, the Taylor series diverges at some hard barrier, converging only within a single symmetric interval. The failure of the series on the right makes sense: if we factor the denominator into its zeroes, we can see the function is discontinuous and non-differentiable at \( x=\pm 1 \), and so we can guess that the jump in behavior can’t be predicted by the smooth derivatives. The function on the left, though, seems inexplicable: it’s continuous and differentiable everywhere, yet our series hits a wall at \( \pm 1 \) and diverges anyway. Where’s this behavior coming from?
Of course, if we take a detour into the complex plane then we can see that \( x^2+1 \) isn’t as flawless as we’d first thought: it has roots at \( \pm i \), analogous to our function on the right. Could these be the numbers responsible for our series’ demise?
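We can watch that wall numerically, too. Here’s a little Python experiment of my own with the series for \( \frac{1}{1+x^2} \), which is just a geometric series in \( -x^2 \): inside \( |x|<1 \) the partial sums settle down, while just outside they swing harder and harder even though the function itself is perfectly finite there:

```python
def partial_sum(x, n_terms):
    """First n_terms of sum (-1)^n x^(2n), the Taylor series of 1/(1+x^2) about 0."""
    return sum((-1) ** n * x ** (2 * n) for n in range(n_terms))

for x in (0.9, 1.1):
    sums = [round(partial_sum(x, n), 3) for n in (10, 20, 40)]
    print(x, sums, "target:", round(1 / (1 + x * x), 3))
# x = 0.9 -> sums creep toward the target 0.552
# x = 1.1 -> sums blow up (-2.6, -20.0, -925.9) while the target is a tame 0.452
```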
Complex Convergence
Mathematically speaking, generalizing the functions we want to represent as a series to the complex plane is pretty simple: we need it to equal our real series for all real inputs, or along the real axis, and it needs to have a decomposition in terms of the complex variable \( z \). The obvious answer here seems to be just replacing any “x” we see in our function and power series with a “z” and worrying about the consequences later: believe it or not, that actually is the best approach!
If we’re not sure whether this is the only possible series representation, we can use the fact that there exists a unique polynomial of degree \( n \) that maps \( n+1 \) points to their images: this is why we need exactly 2 points to define a line \( y=c_0+c_1x \), 3 points to define a parabola \( y=c_0+c_1x+c_2x^2 \), and so on. We implicitly use this result when we settle on a single real power series, and since complex numbers still behave the same under algebraic operations, there’s no reason it shouldn’t hold here too. That means that a complex power series (in other words, an infinite-degree polynomial) mapping all our real inputs to their real counterparts is the only one of its kind: in complex analysis, we call this the Identity Theorem.
f(z)=\sum^{\infty}_{n=0}c_n(z-a)^n, \quad c_n=\frac{f^{(n)}(a)}{n!}
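As a concrete check of that points-determine-a-polynomial fact (a toy example of my own), numpy’s polyfit finds exactly one parabola through any 3 points:

```python
import numpy as np

# 3 points pin down a unique degree-2 polynomial (n + 1 points, degree n)
xs = np.array([-1.0, 0.0, 2.0])
ys = np.array([4.0, 1.0, 7.0])
coeffs = np.polyfit(xs, ys, deg=2)    # highest power first
print(coeffs)                          # [ 2. -1.  1.]  ->  y = 2x^2 - x + 1
print(np.polyval(coeffs, xs))          # reproduces ys (up to float error)
```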
Why a Radius of Convergence?
So far in our posts we’ve touched on complex differentiation, but haven’t gone into explicit calculations: what matters here is the analogy between this series and our real one, and we’ll circle back to actually computing the coefficients in a bit. For now, our goal is to understand the radius of convergence.
The term “radius” is pretty weird in the context of real series, but when we think of the 2-dimensional set of complex inputs, having a circle with radius \( r \) of inputs where our series converges makes perfect sense.
But how do we prove that our series converges in exactly a disk? Mathematically, convergence at a point \( z \) means that for any radius \( \epsilon \) we choose around the value \( f(z) \), there’s some \( N \) such that the n-term partial sum \( P_n(z) \) satisfies \( |f(z)-P_n(z)|<\epsilon \) for all \( n>N \) (hopefully this is looking familiar from real-number limits). With real numbers, we only have to worry about distance along the x-axis, but to reach a similar effect in 2D space we use the modulus for distance. In the argument below, the key quantity is the series of term magnitudes at a point \( p \):
\sum^{\infty}_{n=0}|c_np^n|=|c_0|+|c_1p|+|c_2p^2|+\cdots
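Here’s that definition in action with Python’s built-in complex numbers (a sketch of mine, using the series for \( \frac{1}{1+z^2} \) again): pick a \( z \) inside the unit disc, and the modulus distance between the function and its partial sums drops below any \( \epsilon \) you care to name:

```python
def partial_sum(z, n_terms):
    """P_n(z): first n_terms of sum (-1)^n z^(2n) = 1/(1+z^2) inside its disc."""
    return sum((-1) ** n * z ** (2 * n) for n in range(n_terms))

z = 0.3 + 0.4j                         # |z| = 0.5, inside the unit disc
target = 1 / (1 + z * z)
for n in (5, 10, 20):
    print(n, abs(target - partial_sum(z, n)))   # modulus distance -> 0
```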
Converging on a Circle
To derive our method of finding convergence in the complex plane, let’s start by assuming our series converges at some point \( z=p \), a distance \( |p| \) from our series’s center in the plane. Since this means the sum comes out finite, its terms can’t stay at a finitely large value, but have to die away to 0 (notice the parallels to our nth-term test for real power series): in particular, we can always choose some magnitude \( \delta \) such that \( |c_np^n|<\delta \) for every \( n \).
But since every point \( z_p \) inside the disc with radius \( |p| \) lies a smaller distance from the series’s center, each term \( |c_nz_p^n| \) is its corresponding p-term damped by a geometric factor, and their sum is therefore trapped under a finite value and must converge as well. So, by knowing that some point a certain distance away converges, we know that all points closer to the series’s center must converge as well, establishing our disc!
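To make that damping explicit (this is essentially the classical Abel’s lemma), write each term as its p-counterpart times a shrinking ratio:

|c_nz_p^n|=|c_np^n|\left(\frac{|z_p|}{|p|}\right)^n<\delta r^n, \quad r=\frac{|z_p|}{|p|}<1

Summing the right-hand side gives the geometric series \( \sum_n\delta r^n=\frac{\delta}{1-r} \), a finite number, so the magnitudes of our terms are pinned under a convergent series and the sum at \( z_p \) converges (absolutely, in fact).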
Convergence, Divergence, and Everything in Between
But what if there are points outside our disc that also converge? We can use the same tactic we used here with a point \( d \) a distance \( |d| \) from the center, except this time it diverges. That’s fine, but let’s think about what would happen if any point further away from the center than d happened to converge: since we’ve already proven everything inside that point’s disc would have to converge too, that would include d, which we’ve already established is impossible. Because of this, we know that d is our outer limit: if one point diverges, all other points outside its disc must diverge.
Disc of divergence (black), unknown region (grey)
All that leaves us is some region between \( p \) and \( d \) that doesn’t necessarily converge, since it’s outside \( p \)’s disc, or diverge, since it’s inside \( d \)’s. To pin the boundary down, let’s think about what would happen if we were to test another point in our unknown region: if it converges, we can expand our disc slightly to that point. If it diverges, we can shrink our region down to it. Repeating this ad infinitum, we end up shrinking down our region until it approaches a single distance: this is our radius of convergence!
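That expand-or-shrink loop is exactly a bisection, and we can even mimic it numerically. Below is a crude Python sketch of my own (a heuristic probe, nothing like a proof), specialized to our running series \( \sum(-1)^nz^{2n} \): it asks whether the term magnitudes at a trial radius have died away by some late index, and squeezes the unknown region accordingly:

```python
import math

def seems_to_converge(r, n_probe=5000, blowup=1e12):
    """Heuristic probe: at |z| = r, has the term magnitude r^(2n) of
    sum (-1)^n z^(2n) stayed small by the n_probe-th term?"""
    if r == 0:
        return True
    # compare log(r^(2 * n_probe)) against log(blowup), avoiding overflow
    return 2 * n_probe * math.log(r) < math.log(blowup)

lo, hi = 0.0, 2.0          # a radius that converges and one that diverges
for _ in range(50):        # repeat the expand-or-shrink step
    mid = (lo + hi) / 2
    if seems_to_converge(mid):
        lo = mid           # trial point converged: grow the inner disc
    else:
        hi = mid           # trial point diverged: shrink the unknown region
print(lo, hi)              # both pinch in on ~1.0, the radius of convergence
```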
A Lingering Mystery?
Practically speaking, if we want a point of divergence, we can just turn to our original function, exactly as we turned to the vertical asymptotes and discontinuities of our real functions. Only now we have the rigor to show that divergence at these points isn’t just a coincidence: because our original function tends to infinity at these points, so must our series, and all points beyond the closest of these points, which we call singularities, must diverge as well.
As for the convergence of all points within that disc…that’s a lot harder to prove. We’d need to be able to take some circle around a cluster of points and gauge their properties just from that border. Weirdly enough, though, as we’ll see in part 2 of this power series breakdown, we’ll arrive at a result that comes out of nowhere and allows us to do just that. The answer we’ll find is that all points inside this disc end up converging, meaning that we can finally answer our question of the radius of convergence below:
For a power series of a given function \( f(z) \) centered at \( z=a \), the radius of convergence is the distance from that point to the nearest singularity \( s \), or \( R=|a-s| \)!
Restriction to Reality
With our new arsenal of complex tools, let’s jump back to our comparatively shallow home in the Cartesian plane. If we restrict our \( z \) inputs to those spanning the real number line, we arrive back at our simple Taylor series, but all our properties of modulus and convergence still hold…and so do our singularities. Just like the case of the missing roots we talked about in our very first post about the complex plane, our complex singularities are just as real as any vertical asymptote in the x-y plane. Let’s go back to our imaginary factorization of \( f(x)=\frac{1}{x^2+1} \), only this time considering its complex generalization:
\( f(z)=\frac{1}{z^2+1}=\frac{1}{(z+i)(z-i)} \): singularities at \( z=\pm i \).
When centered at \( z=0 \), this becomes \( R=|0 \pm i|=1 \).
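In code, that rule is nearly a one-liner (my sketch, using numpy to find the denominator’s roots): the radius is just the minimum distance from the center to any singularity:

```python
import numpy as np

poles = np.roots([1, 0, 1])                   # roots of z^2 + 1: +/- i
center = 0 + 0j
print(min(abs(center - s) for s in poles))    # 1.0, matching R above

# Re-centering moves the radius with it, per the formula from the last section:
print(min(abs((1 + 0j) - s) for s in poles))  # |1 - i| = sqrt(2) ~ 1.414
```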
We can see that our complex function acknowledges the existence of these singularities without any issues, but the barrier is still easy to mistake for some vertical boundary at \( x=\pm 1 \) when we look at our real graph:
Complex “Graphs”
If we want a real look at our series, we’ll have to turn to a complex graph. Like we’ve mentioned before, a true complex graph, with the full spectrum of 2D inputs and 2D outputs represented as distinct points, would require 4 dimensions. But we don’t need the full picture, just one that accounts for the full range of complex inputs and the magnitude of each output, its distance from the origin of the output plane. That only takes 3 axes, and that means we can treat the magnitude as the height of a 3D graph, with the 2D complex plane as its flat base:
Credit: WolframAlpha
Even without the full 4D picture, this view shows us exactly what we’d expect: at \( z=\pm i \) we can see two points where the graph suddenly spikes (it’s cut off at \( |f(z)|=1.5 \), but would increase up to infinity) and forms a 3D “asymptote”. Here, it’s clear the boundary is caused by the two complex singularities as expected, not some arbitrary vertical wall as we might assume from our real graph.
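If you’d like to reproduce a surface like this yourself, here’s a short matplotlib sketch (my own stand-in for the WolframAlpha plot) that clips the height at 1.5, just like the figure:

```python
import numpy as np
import matplotlib.pyplot as plt

# Height = |f(z)| for f(z) = 1/(z^2 + 1), over a patch of the complex plane
x = np.linspace(-2, 2, 200)
y = np.linspace(-2, 2, 200)
X, Y = np.meshgrid(x, y)
Z = X + 1j * Y
H = np.minimum(np.abs(1 / (Z * Z + 1)), 1.5)  # clip the spikes at height 1.5

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(X, Y, H, cmap="viridis")
ax.set_xlabel("Re(z)"); ax.set_ylabel("Im(z)"); ax.set_zlabel("|f(z)|")
plt.show()
```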
A Hidden Symmetry
That’s not all these 3D surfaces can model though. If we write our initial pair of real functions as their complex generalizations, we can see that the two share an interesting relationship:
f(z)=\frac{1}{1-z^2}
f(iz)=\frac{1}{1-(iz)^2}=\frac{1}{1+z^2}=g(z)
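Before reading anything geometric into that, a quick sanity check in Python (test points of my own choosing, steering clear of the poles at \( \pm i \)):

```python
f = lambda z: 1 / (1 - z * z)
g = lambda z: 1 / (1 + z * z)

for z in (0.3 + 0.2j, -1.5 + 0.7j, 2j):
    print(abs(f(1j * z) - g(z)))   # ~0 each time: f(iz) really is g(z)
```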
We’ve seen this before: in the complex plane, multiplication by \( i \) just represents a rotation by 90 degrees. In the xy-plane, that kind of relationship seems completely abstract, but what if we graph the two side by side with our complex graphs?
There we have it: the results are exactly what we’d expect, singularities and all! Viewed like this, the identical radii of convergence make sense since the graphs themselves are just rotated copies of one another, a result that we would have been hard-pressed to deduce from our basic real-valued Taylor Series.
Conclusion
At this point, hopefully, it’s clear that our real graphs aren’t just incomplete, but inexplicable without a complex lens. Believe it or not, we’ve only just scratched the surface of the complex Taylor series. What complex functions have series expansions? How do we find their coefficients? And if we view the familiar real-valued Taylor series as just a restriction of our broader complex series along the line \( y=0 \) in the complex plane, what other series can we generate with different restrictions? Next time, we’ll explore the last of those questions by looking at a different well-known series, and in the process stumble across a shocking result that extends well beyond what we came to look for. See you then!