Cauchy Residue Theorem

\(\DeclareMathOperator{\res}{Res}\)

The Cauchy Residue Theorem is a remarkable tool for evaluating contour integrals. Essentially, it says that, instead of computing an integral along a closed curve \(\gamma\), you can replace it with a sum of “residues” at some special points \(a_k\):

$$ \oint_\gamma f(z)~dz = 2 \pi i \sum_k \res(f, a_k) $$

But what is a residue? What are the \(a_k\)? What’s really going on here?

Residues

Since this isn’t a rigorous complex analysis text (it’s a post on some blog), we’ll gloss over some of the technicalities, such as verifying convergence, or checking that holomorphic functions are analytic. All we need is some imagination, and the following fact:

Path Independence

Let \(D\) be a region of the complex plane and \(f\) be a function holomorphic (complex-differentiable) on \(D\). If you take a curve \(\gamma\), and continuously deform it into a curve \(\gamma'\), staying inside \(D\), then

$$ \int_\gamma f(z)~dz = \int_{\gamma'} f(z)~dz $$

When one curve can be continuously deformed into another like this, we say the two curves are “homotopic”.

For example, if the blue dashed area is \(D\), the curves in the first picture are homotopic, but not the curves in the second picture. There is no way to deform one curve into the other without leaving the domain.

Homotopic curves

Non-homotopic curves

If you’re comfortable with multivariable calculus, compare this to the Fundamental Theorem of Calculus for line integrals. How does complex-differentiability encode the “curl-free” condition?

This means that if \(\gamma\) is a closed loop and \(f\) is holomorphic on the region enclosed by \(\gamma\), then \(\gamma\) is homotopic to a point, which tells us that \(\int_\gamma f~dz\) must be zero. Where things get interesting is when there are points in \(D\) at which \(f\) is not holomorphic.
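If you want to see this in action, here’s a quick numerical sketch (Python with numpy; the circle_integral helper is just a Riemann sum I cooked up for illustration, not anything canonical). The entire function \(e^z\) integrates to zero around a circle, while \(1/z\), which fails to be holomorphic at the origin, does not:

```python
import numpy as np

def circle_integral(f, center, radius, n=20000):
    """Approximate a contour integral around a circle,
    parametrized as z = center + radius * e^{it} for t in [0, 2*pi)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)
    dz_dt = 1j * radius * np.exp(1j * t)
    return np.sum(f(z) * dz_dt) * (2 * np.pi / n)  # Riemann sum in t

print(circle_integral(np.exp, 0, 1))           # ~0: e^z is holomorphic everywhere
print(circle_integral(lambda z: 1 / z, 0, 1))  # ~6.2832j = 2*pi*i: not holomorphic at 0
```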


So let’s approach the theorem.

Let \(f\) be a function holomorphic on \(D\), except at finitely many points \(a_k\), and let \(\gamma\) be a closed curve in \(D\) that avoids the points \(a_k\). Without loss of generality, we can assume all of the \(a_k\) lie within the region enclosed by \(\gamma\) (if not, we just make \(D\) smaller). We can use the path-independence of contour integrals to deform \(\gamma\), without changing the value of the integral:

A contour around several a_k

Deformed into several circles with sections between them

The corridors between the circles can be moved so they lie on top of each other; since they are then traversed in opposite directions, their contributions cancel out. This leaves us with circles \(C_k\), one for each point \(a_k\).

$$ \oint_\gamma f(z)~dz = \sum_k \oint_{C_k} f(z)~dz $$

A few circular contours

So all we need to do now is determine what the integral of \(f\) on each circle is.

Residue Definition #1

The residue of \(f\) at \(a\) is \(\displaystyle \frac{1}{2 \pi i} \oint_{C} f(z)~dz\), where \(C\) is a small circle around \(a\).

From path-independence, we know we can shrink the circle as much as we like without changing the value of the integral, which tells us this definition is well-defined (just make sure \(f\) is holomorphic everywhere else inside your circle!).
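Here’s a numerical sanity check of that radius-independence (same Riemann-sum sketch as before, with \(e^z/z\) as an example of my choosing, whose only singularity is at the origin):

```python
import numpy as np

def circle_integral(f, center, radius, n=20000):  # same helper as before
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)
    return np.sum(f(z) * 1j * radius * np.exp(1j * t)) * (2 * np.pi / n)

f = lambda z: np.exp(z) / z  # holomorphic everywhere except z = 0
for r in [0.1, 1.0, 5.0]:
    # Definition #1 says this shouldn't depend on the radius
    print(r, circle_integral(f, 0, r) / (2j * np.pi))  # ~1 every time
```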

“But wait,” you complain, “This definition is ridiculous; you set it up in such a way that the residue theorem is trivial! What gives?”

Well, there are other, equivalent definitions of residue that are much easier to compute, and those are what give the residue theorem its power. Sometimes people will use these computational definitions of residue as the primary definition, but this obscures what’s going on. When you think of what the residue means, in a spiritual sense, you should think of it as “the integral of a small loop around a point”.


A point at which \(f\) is not holomorphic is called a “singularity”, and there are a few types. The most manageable of these is the pole, where \(f(z)\) “behaves like” \(\frac{1}{(z-a)^n}\). To be more concrete, \(f\) has a pole (of order \(n\)) at \(a\) if \((z - a)^n f(z)\) is holomorphic and non-zero at \(a\). In other words, a zero of order \(n\) cancels out a pole of order \(n\).

For example, \(\frac{1}{\sin z}\) has a pole of order \(1\) at \(z = 0\), as evidenced by the fact that \(\frac{z}{\sin z}\) approaches \(1\) as \(z \to 0\). The rational function \(\frac{z-2}{z^2 + 1}\) has poles at \(\pm i\), also of order \(1\). And the function \(\frac{1}{\cos z - 1}\) has a pole of order \(2\) at zero.
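If you’d rather not check these limits by hand, a computer algebra system can do it; here’s a quick sympy sketch (each limit comes out finite and non-zero, confirming the claimed pole orders):

```python
import sympy as sp

z = sp.symbols('z')
# each limit is finite and non-zero, confirming the claimed pole orders
print(sp.limit(z / sp.sin(z), z, 0))                         # 1
print(sp.limit((z - sp.I) * (z - 2) / (z**2 + 1), z, sp.I))  # 1/2 + I
print(sp.limit(z**2 / (sp.cos(z) - 1), z, 0))                # -2
```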

There are other kinds of singularities, such as essential singularities, but nothing good comes from them, so we will henceforth only consider singularities that are poles.

If \(f\) has a pole of order \(n\) at \(a\), then \((z-a)^n f(z)\) has a Taylor series centered at \(z = a\), with non-zero constant term:

$$ (z-a)^n f(z) = b_0 + b_1 (z - a) + b_2 (z - a)^2 + b_3 (z - a)^3 + \cdots $$

Letting \(c_k = b_{k+n}\) and dividing through by \((z-a)^n\), we get a series for \(f(z)\) itself, called the Laurent series:

$$ f(z) = \frac{c_{-n}}{(z-a)^n} + \frac{c_{-n+1}}{(z - a)^{n-1}} + \cdots + \frac{c_{-1}}{z - a} + c_0 + c_1 (z - a) + \cdots $$

It’s almost a Taylor series, but we allow (finitely many) negative terms as well. This expansion will allow us to compute the residue at \(a\).
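For concreteness, here’s sympy producing such an expansion (series is a standard sympy function; the example \(1/\sin z\) is ours from earlier):

```python
import sympy as sp

z = sp.symbols('z')
# Laurent expansion of 1/sin(z) around its simple pole at z = 0
print(sp.series(1 / sp.sin(z), z, 0, 4))
# 1/z + z/6 + 7*z**3/360 + O(z**4)
```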

Let’s just take a single term, \((z - a)^k\), and we’ll recombine our results at the end, because integrals are linear. What happens when we integrate around a circle centered at \(a\) with radius \(R\)? Substitute \(z = a + R e^{it}\) to parametrize the contour:

$$ \oint (z - a)^k~dz = \int_0^{2\pi} (R e^{it})^k~d(R e^{it}) = i R^{k+1} \int_0^{2\pi} e^{(k+1) it}~dt = i R^{k+1} \left[ \frac{e^{(k+1)it}}{(k+1)i} \right]^{2\pi}_0 $$

Since \(k\) is an integer, \(e^{(k+1)2 \pi i} = 1\), and \(e^{0} = 1\), so this integral should be zero. But that can’t be right; it would mean the integral of any function around a circle is zero, and we know that’s not true.

We actually made a mistake in the last step: the antiderivative of \(e^{ct}\) is \(e^{ct}/c\) only when \(c \neq 0\). Here \(c = (k+1)i\), which is zero exactly when \(k = -1\), and in that case:

$$ \oint \frac{1}{z - a}~dz = \int_0^{2\pi} \frac{d(R e^{it})}{R e^{it}} = \int_0^{2\pi} i~dt = 2 \pi i $$
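You can watch this happen numerically (same Riemann-sum sketch as before, with \(a = 0\) and \(R = 1\)): every power of \(z\) integrates to zero around the unit circle, except the \(-1\) power.

```python
import numpy as np

def circle_integral(f, center, radius, n=20000):  # same helper as before
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)
    return np.sum(f(z) * 1j * radius * np.exp(1j * t)) * (2 * np.pi / n)

for k in range(-3, 3):
    val = circle_integral(lambda z, k=k: z**k, 0, 1.0)
    print(k, np.round(val, 6))  # only k = -1 gives ~6.283185j = 2*pi*i
```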

Therefore, when we integrate \(f(z) = \sum_{k = -n}^\infty c_k (z - a)^k\) term by term, all the terms vanish, except for the \(k = -1\) term, which pops out a \(2 \pi i \cdot c_{-1}\). This gives us another definition for the residue!

Residue Definition #2

If \(f\) has a pole at \(a\), and a Laurent series \(f(z) = \sum c_k (z - a)^k\), then the residue of \(f\) at \(a\) is \(c_{-1}\).
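As it happens, this coefficient is exactly what sympy’s built-in residue computes; we can also read it straight off the series from before (a small sketch, with \(1/\sin z\) again as our example):

```python
import sympy as sp

z = sp.symbols('z')
f = 1 / sp.sin(z)
laurent = sp.series(f, z, 0, 2).removeO()  # 1/z + z/6
print(laurent.coeff(z, -1))  # 1: the c_{-1} coefficient
print(sp.residue(f, z, 0))   # 1: sympy's built-in residue agrees
```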


If this were all we knew, it would still be a pretty good theorem. Finding power series instead of taking integrals? Not too shabby. But we can take it one step further.

Finding power series can be frustrating; how many people know the power series for \(\tan z\) off the top of their head? Besides, we don’t need the whole thing, just a specific coefficient.

Instead, we’ll assume the existence of a power series, and use some tricks to extract \(c_{-1}\).

Say we’ve got a simple pole (a pole of order \(1\)). By multiplying by \((z - a)\), we can get a Taylor series:

$$ (z - a) f(z) = c_{-1} + c_0 (z - a) + c_1 (z - a)^2 + \cdots $$

If we plug in \(z = a\), then we’ll get \(c_{-1}\). Well, technically, we can’t plug in \(z = a\) directly, because \(f(z)\) isn’t defined at \(a\). But if we take a limit, that’s okay.
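For example, this immediately recovers the residue of \(\frac{1}{\sin z}\) at zero, matching the \(1/z\) term in its Laurent expansion:

$$ \res\left(\frac{1}{\sin z}, 0\right) = \lim_{z \to 0} z \cdot \frac{1}{\sin z} = 1 $$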

How about a pole of order \(2\)? Our trick won’t work the same way; if we apply it naively, we’ll just get \(c_{-2}\), which we don’t care about at all.

$$ (z - a)^2 f(z) = c_{-2} + c_{-1} (z - a) + c_0 (z - a)^2 + c_1 (z - a)^3 \cdots $$

But if we take a derivative, the constant \(c_{-2}\) term drops out, and then we can take the limit as \(z \to a\).

$$ \frac{d}{dz} (z - a)^2 f(z) = c_{-1} + 2 c_0 (z - a) + 3 c_1 (z - a)^2 \cdots $$

For a pole of order \(3\), there’s a slight wrinkle; we end up with an extra factor of \(2\) that we have to divide out:

$$ \frac{d^2}{dz^2} (z - a)^3 f(z) = 2 c_{-1} + 6 c_0 (z - a) + 12 c_1 (z - a)^2 \cdots $$

The pattern for higher-order poles is similar:

  • multiply by \((z - a)^n\); this changes our term of interest to \(c_{-1} (z - a)^{n-1}\)
  • take \(n-1\) derivatives; the important term is now \((n-1)! c_{-1}\)
  • divide by \((n-1)!\); the important term is now \(c_{-1}\)
  • take the limit as \(z \to a\); all higher order terms vanish, and we are left with \(c_{-1}\)

We now have our last, and most computationally accessible, definition of residue:

Residue Definition #3

If \(f\) has a pole at \(a\) of order \(n\), then the residue of \(f\) at \(a\) is:

$$ \res(f, a) = \lim_{z \to a} \frac{1}{(n-1)!} \frac{d^{n-1}}{dz^{n-1}} (z - a)^n f(z) $$

This is the definition often presented as “the” definition of residue, but this hides where the residue theorem comes from, and why residues are defined the way they are.
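To make Definition #3 concrete, here’s a small sympy sketch (the name residue_def3 is mine, and it assumes you already know the pole’s order \(n\)); sympy’s built-in residue agrees:

```python
import sympy as sp

z = sp.symbols('z')

def residue_def3(f, a, n):
    """Residue at a pole of known order n, following Definition #3."""
    g = sp.diff((z - a)**n * f, z, n - 1)  # multiply, then take n-1 derivatives
    return sp.limit(g, z, a) / sp.factorial(n - 1)

f = sp.exp(z) / z**2          # pole of order 2 at z = 0
print(residue_def3(f, 0, 2))  # 1
print(sp.residue(f, z, 0))    # 1: built-in agrees
```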

Winding Number

As a final note, we can add a tiny bit more generality to the theorem.

Technically, we’ve been a little sloppy with our curve \(\gamma\). What if it goes the other way? Or loops around some points multiple times?

To fix this, we introduce \(W(\gamma, a)\), the winding number of \(\gamma\) around \(a\). It means exactly what the name suggests: it indicates how many times (and in what direction) \(\gamma\) loops around \(a\). Counter-clockwise is positive, and clockwise is negative. Two examples are pictured below:

A limaçon

A lemniscate

In the first picture, the specified points have winding numbers +1 and +2, and in the second, they have -1 and +1. The only thing this changes about our proof is that when we deform \(\gamma\) into circles, we may get multiple loops around the same point:

A limaçon

A lemniscate

But by definition, the number of loops is exactly the winding number, and if the loop runs clockwise, we pick up a negative sign. So after accounting for multiplicity and direction, we get:

$$ \oint_\gamma f(z)~dz = 2 \pi i \sum_k W(\gamma, a_k) \res(f, a_k) $$
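As a final sanity check, here’s a numerical sketch (same Riemann-sum approach as earlier) where the contour winds around the origin twice: the integral of \(1/z\) doubles, exactly as the winding number dictates.

```python
import numpy as np

# traverse the unit circle twice counter-clockwise: W(gamma, 0) = 2
n = 40000
t = np.linspace(0, 4 * np.pi, n, endpoint=False)
z = np.exp(1j * t)
dz_dt = 1j * np.exp(1j * t)
print(np.sum((1 / z) * dz_dt) * (4 * np.pi / n))
# ~12.566j = 2*pi*i * W(gamma, 0) * Res(1/z, 0) = 4*pi*i
```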