Maxwell’s Equations: Differential and Integral Forms

Classical electromagnetism begins and ends with Maxwell’s Equations. In an introductory E&M course, all the year’s content slowly builds up to the 4 equations that describe the electric and magnetic fields from the ground up. But for as hyped as these equations are, we don’t tend to linger on them for more than a week, partly because each one has been covered in isolation throughout the year, and partly because finals are looming on the horizon and this course is a mathematical minefield as it is without another set of derivations to memorize before testing.

But if we can spare the time to explore these deceptively simple formulas, we can glimpse how they express the basic physical postulates of electricity and magnetism and how they can be applied all across physics and engineering. To see this, we’re going to look at the differential and integral forms of each equation and what their distinct interpretations mean. It’s worth noting, though, that these aren’t the only ways we can express these formulas: other versions are standard depending on the specific application you’re working with. In the next post we’ll be covering those other standards and the unique math—including complex analysis—behind each of them, so stay tuned for those!

Integral Form: Describing Macro Distributions

Let’s start with the basics: in most courses and textbooks, these 4 integrals are peppered throughout the year as tools for solving various problems, so even if they weren’t explicitly named as Maxwell’s equations, they’re probably familiar to E&M students anyway. To brush everyone up, here’s a quick rundown of what we’re looking at:

  • Gauss’s Law for Electric Fields: This equation states that the degree to which an electric field passes through a closed surface (captured by the surface integral below) is determined solely by the charge that surface encloses, regardless of the shape of the surface or the positions of the enclosed charges. This allows us to solve for the electric field of symmetric charge distributions, including a sphere (this is how we can derive Coulomb’s Law!).
\oiint_S{\bold{E} \cdot d\bold{A}}= \frac{Q}{\epsilon_0}
  • Gauss’s Law for Magnetic Fields: Similarly, this equation describes the degree to which a magnetic field passes through a closed surface, but this time concludes that it’s always zero. Every field line that exits the surface must re-enter it to cancel out, which tells us that magnetic field lines must form closed loops.
\oiint_S{\bold{B} \cdot d\bold{A}}=0
  • Faraday’s Law: Shifting from closed surfaces to closed loops, this integral states that a changing magnetic flux through an area creates an electric field that circulates along its border. Qualitatively, we can see from the negative sign that the electric field opposes changes to the magnetic field. This equation is invaluable for circuit analysis in the presence of magnetic fields, since it generalizes results like Kirchhoff’s voltage law that break down when changing magnetic fields thread the circuit.
\oint_L{\bold{E} \cdot d\bold{l}}=-\frac{d\Phi_B}{dt}
  • Ampere-Maxwell Law: This last equation is analogous to the previous one, relating a changing electric flux through an area to a magnetic field circulating its border. The first term accounts for current, the movement of real charges, whilst Maxwell’s contribution, the second term, accounts for changing electric flux (the “displacement current”) even in free space where no charges are moving.
\oint_L{\bold{B} \cdot d\bold{l}}=\mu_0 I + \mu_0\epsilon_0\frac{d\Phi_E}{dt}
A cylindrical Gaussian surface (blue) around a linear charge distribution.
Credit: [mdashf]
A Gaussian surface around a magnetic dipole, each field line leaving and reentering to create 0 net flux.
Credit: [FOSCO]
Magnetic field passing through the area of solenoidal loops and the current and electric field it induces.
Credit: [Lumen Learning]
The flip side of the image above showcases a magnetic field being created within a loop of changing current.
Credit: [PhilsChatz]

The goal of the integral forms is, broadly speaking, to capture the behavior of the electric and magnetic fields for distributions of static and moving charges, and they fulfill that purpose well. There are a ton of problems that rely on these 4 equations, but if you have any background with an introductory E&M course then you’ve almost definitely seen them already (if you’re new to E&M or want a quick refresher, I’d recommend Khan Academy India’s amazing videos, like this one on Faraday’s Law and induction for practical applications). Since the integral forms are usually introduced over the span of the course for different problems, we’ll spend less time on them than we will on the other forms.

A Surface-Level Description

But what makes Maxwell’s equations the cornerstones of electromagnetism? They describe behavior at a macroscopic level, sure, but as far as describing all electromagnetic fields goes, they don’t seem to tell us much about individual charges or isolated fields without surfaces and borders thrown into the mix.

At a practical level, these laws have issues as well: laws like Ampere-Maxwell’s and Gauss’s are always applied to specific, symmetric charge/current distributions in a traditional E&M course because the actual value we want is the electric or magnetic field, not the flux. But extracting the field requires us to separate it from the integral it comes wrapped in, which is why we rely on distributions with symmetries that keep the dot products inside constant: that way, we can treat the fields as constants and factor them out of the integral over the area.

The spherical, cylindrical, and planar surfaces used in almost all Gauss’s Law problems are chosen because the electric field lines are perpendicular to them. In fact, problems with planar symmetry always stress that the sheet of charge has an area far greater than its distance from the Gaussian surface, since at larger distances the field lines begin to curve toward the edges of the plate rather than crossing the surface at a constant angle.

The edges of a parallel-plate capacitor demonstrate a breakdown of the parallel field lines.
Credit: [BCcampus]
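To see that factoring step in action, take the cylindrical case from the first figure above: for an infinite line of charge with linear density \( \lambda \) (the symbols \( \lambda \), r, and L here are introduced just for this worked example), symmetry guarantees the field is radial and uniform over a coaxial cylindrical surface of radius r and length L, so it factors right out of the integral:

\oiint_S{\bold{E} \cdot d\bold{A}}=E(2\pi rL)=\frac{\lambda L}{\epsilon_0} \implies E=\frac{\lambda}{2\pi\epsilon_0 r}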

Without these symmetries, though, this logic breaks down: the dot product is always changing, meaning our integral is a function of the fields as well as the region, and we can no longer tease them apart to solve for the fields. Maxwell’s Equations are supposed to be fundamental precisely because they describe the behavior of the electric and magnetic fields, so clearly this form isn’t gonna cut it.

Divergence and Curl

Here’s where the fun math starts: the aforementioned symmetries let first-year physics students dodge a bullet with the nitty-gritty of vector calculus, but to get any further with Maxwell’s Equations we have to introduce two concepts, and two fundamental theorems, that let us make the shift in perspective we need. This post isn’t going to focus as much on the math as others, so if you’re looking for that focus on theory you can check out this post on the algebraic structure of vector calculus: expect more coming soon!

A Physical Description

If you’ve analyzed electric currents in wires then you’ve probably seen the value in thinking of electric charges in a wire as fluid flowing through a pipe. The fact that we can treat conducting wires as streamlines where the electric field is most concentrated motivates us to apply this model to other aspects of the field. Two places where this comes up are flux (the degree to which a field passes through a given area) and circulation (the degree to which a field aligns with the path around that area). The integral forms of Maxwell’s Equations are rooted in these, with Faraday’s and Ampere-Maxwell’s equations defining circulation integrals over a path and Gauss’s laws defining flux integrals over an area.

Below are two vector fields and their corresponding functions that capture the essence of these ideas with constant flux and circulation respectively; notice that the electric field around positive charges matches the outward flux pattern, while the circular flow of the magnetic field around a current-carrying wire matches the circulation pattern.

Constant Flux in region given by \( F=\:<{x},{y}> \)
All graphs in this post created using Desmos
Constant Circulation given by \( F=\: <{-y},{x}> \)
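If you’d like to verify these claims numerically rather than visually, here’s a minimal sketch (the unit circle and sample count are arbitrary choices, not anything from the figures) that integrates each field’s circulation around, and flux through, the unit circle:

```python
import numpy as np

# Parametrize the unit circle; endpoint=False avoids double-counting t = 2*pi
t = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
dt = 2 * np.pi / len(t)
x, y = np.cos(t), np.sin(t)
tx, ty = -np.sin(t), np.cos(t)  # unit tangent along the circle
nx, ny = np.cos(t), np.sin(t)   # outward unit normal

for fx, fy, name in [(x, y, "F = <x, y>"), (-y, x, "F = <-y, x>")]:
    circulation = np.sum(fx * tx + fy * ty) * dt  # integral of F . T ds
    flux = np.sum(fx * nx + fy * ny) * dt         # integral of F . n ds
    print(f"{name}: circulation = {circulation:.3f}, flux = {flux:.3f}")

# F = <x, y>:  circulation = 0,    flux = 2*pi (pure outward flow)
# F = <-y, x>: circulation = 2*pi, flux = 0    (pure rotation)
```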

We have to be careful though: it’s easy to look at the fields above and tell which one has circulation or flux respectively, but many fields defy this simplistic visual logic: the vortex field below, for example, has zero curl away from the origin despite looking almost exactly the same as our rotating field:

An irrotational field given by \( F=\:<\frac{-y}{x^2+y^2}, \frac{x}{x^2+y^2}> \)

The number-crunching and line integrals to check these values aren’t difficult, but we can come to the same conclusion with a new conceptual model. If we think of our vector field as describing the streamlines of a fluid like water and notice the qualitative difference in the two vector fields, then this suddenly clicks. In the vortex field, the division by \( r^2 \) (the square of the radial distance from the origin) means the fluid’s speed falls off as \( 1/r \) as you go further out: the faster fluid on the inner edge of any small object pushes it one way exactly as hard as the slower fluid on its outer edge pushes it the other, so even though the object is carried along in a circle, nothing triggers it to rotate around its own axis:

On the other hand, our circulating vector field does grow in magnitude as we go further out, so the corresponding fluid flows faster on the outside of our object than on the inside and causes rotation because of the difference in torque!

Green vectors depicting a difference in force applied on both ends of a propeller causing rotation: if this propeller were put anywhere except the origin in the vector field with curl, it would rotate!
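A quick way to convince yourself of all this without grinding through the line integrals is a finite-difference check of the curl’s z-component at some point away from the origin; the test point and step size below are arbitrary choices for this sketch:

```python
import numpy as np

def curl_z(fx, fy, x, y, h=1e-5):
    """z-component of curl, dFy/dx - dFx/dy, via central differences."""
    dfy_dx = (fy(x + h, y) - fy(x - h, y)) / (2 * h)
    dfx_dy = (fx(x, y + h) - fx(x, y - h)) / (2 * h)
    return dfy_dx - dfx_dy

x0, y0 = 1.0, 2.0  # an arbitrary test point away from the origin

# The "constant circulation" field <-y, x>: curl is 2 everywhere
print(curl_z(lambda x, y: -y, lambda x, y: x, x0, y0))  # ~2.0

# The vortex field <-y, x> / (x^2 + y^2): curl vanishes away from the origin
r2 = lambda x, y: x**2 + y**2
print(curl_z(lambda x, y: -y / r2(x, y),
             lambda x, y: x / r2(x, y), x0, y0))        # ~0.0
```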

Infinitesimal Flux

These model fields would be a great way to describe our own: in Maxwell’s equations, both the electric and magnetic fields are shaped solely by static and moving charges, so an arbitrary superposition of these constant-circulation and constant-flux fields should be equivalent to a corresponding distribution of static and moving charges… this is the idea that defines Maxwell’s equations in the integral form!

The differential forms, then, aim to describe the local behavior of these fields and their relation to infinitesimal changes in flux and circulation. Luckily, we already have fields that describe a unit of constant flux and circulation, and if we assume that in the local region of a point the fields approach constant values (think of the “local linearity” required for differentiability), we can treat any distribution as some superposition of these fields: the flux per unit volume through a surface as it shrinks to a point is known as the “divergence” at that point, and the circulation per unit area around that surface’s border approaches the point’s “curl.” To keep the flow going with our fluid analogies, we’ll call points of positive and negative divergence “sources” and “sinks” respectively, and points of positive and negative curl “whirlpools.”

A useful way to understand why these behaviors are so fundamental is to think about what they tell us about our vector fields: if we think about our point as emanating vectors to all the points along an infinitesimal circular loop, the divergence measures the degree that our vector field aligns with those radial position vectors, or how parallel the two are. Meanwhile, the curl measures the degree that our vector field aligns tangentially with those radial vectors, or how perpendicular they are.

Mathematical Formulation

This is expressed mathematically via the del operator \( \nabla \), which when applied to a scalar function outputs the gradient, a vector in the direction of fastest increase; in this case, taking the dot product with this operator can be thought of as measuring alignment with the direction of fastest increase (radially outward), and the cross product as measuring alignment with the direction of least increase (tangentially around) from the point it’s taken at.

Depiction of a vector X and its breakdown into tangential and normal components.
Credit: “Visual Complex Analysis” by Tristan Needham
\text{Divergence of } \bold{X}: \:\nabla \cdot \bold{X}
\text{Curl of } \bold{X}: \:\nabla \times \bold{X}
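If you want to experiment with these operators directly, SymPy’s vector module implements both; here’s a short sketch applying them to the two model fields from earlier:

```python
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D("N")  # Cartesian coordinates with unit vectors N.i, N.j, N.k

F = N.x * N.i + N.y * N.j   # the "constant flux" field <x, y>
G = -N.y * N.i + N.x * N.j  # the "constant circulation" field <-y, x>

print(divergence(F))  # 2: a uniform source at every point
print(curl(F))        # 0: no rotation anywhere
print(divergence(G))  # 0: no sources or sinks
print(curl(G))        # 2*N.k: a uniform whirlpool at every point
```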

Divergence and Stokes’s Theorems

To see how adding up these fields works, which we do in Maxwell’s equations via surface and line integrals, let’s use a quick visualization: if we call our initial loop “C,” then make a quick slice down the middle to split it into “C1” and “C2,” the normal vectors (in red) along the shared cut must be equal and opposite, whereas the vector field along both sides of the cut has the same strength. This means that all flux along the shared border, which here measures the alignment of the same field with opposing normal vectors, must cancel, and all that remains is the flux along the initial loop C. A similar thought process can be applied to the opposing tangent vectors describing curl.

If we can perform this subdivision once, there’s nothing stopping us from doing it again. Sure enough, as long as our loop is continuous and doesn’t exhibit any abnormal behavior like crossing itself or abruptly changing direction to mess up our canceling vectors, we can subdivide any loop C into tiny pockets of fluid flowing through and around infinitesimal loops that each converge on a single source/sink or whirlpool. This allows us to express our net flux and circulation as a sum over all the points contained within our curve, or in 3D, our surface:

\text{Divergence Theorem: } \oiint_S{\bold{F} \cdot d\bold{S}}=\iiint_V{(\nabla \cdot \bold{F})\:dV}
\text{Stokes's Theorem: } \oint_L{\bold{F} \cdot d\bold{r}}=\iint_A{(\nabla \times \bold{F})\cdot \:d\bold{A}}
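Neither theorem has to be taken on faith: here’s a numerical spot-check of the Divergence Theorem over the unit sphere, using an arbitrary test field chosen just for this sketch. For F = <x², y, z>, the divergence is 2x + 2; the 2x part integrates to zero over the sphere by symmetry, leaving 2 × (4/3)π = 8π/3, which the surface flux should match:

```python
import numpy as np

# Sample the unit sphere in spherical coordinates
n_t, n_p = 400, 800
theta = np.linspace(0, np.pi, n_t)
phi = np.linspace(0, 2 * np.pi, n_p, endpoint=False)
dtheta, dphi = np.pi / (n_t - 1), 2 * np.pi / n_p
T, P = np.meshgrid(theta, phi, indexing="ij")
x, y, z = np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)

# On the unit sphere the outward normal is just (x, y, z),
# so F . n = x^2 * x + y * y + z * z for F = <x^2, y, z>
F_dot_n = x**3 + y**2 + z**2
flux = np.sum(F_dot_n * np.sin(T)) * dtheta * dphi  # sin(theta) = area element
print(flux, 8 * np.pi / 3)  # both ~8.378
```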

These two fundamental results give us exactly what we need for our 4 equations: sure, working with surface integrals is tricky, but by using these we can transform them into the everyday area (or volume) integrals we use all the time for derivations of mass from density.

That last point about density isn’t just an offhand example: like gravitational fields originating from mass distributions, electric and magnetic fields originate from charge distributions. We can apply this to treat any given net charge Q or current I as a function of its charge or current density over some region:

Q=\iiint_V\rho\:dV, \: \: I=\iint_A\bold{J} \cdot d\bold{A}

(Note: because we usually talk about current as flowing through a wire like water flowing down a river, current density usually refers to the amount of current flowing through the cross-sectional area of the wire, assumed uniform along its length rather than discretized down to every point, which is why it’s defined per unit area. We could express it per unit volume, but luckily the more common convention suits our upcoming derivation better.)

A Brief Derivation

Why bother with these substitutions? If you look at the density integrals and those offered by the Divergence and Stokes’s Theorems, you’ll notice that both transform previously inhomogeneous quantities, like surface integrals and constants, into the same type of summation over a volume or area. Because the charges and currents live within the same regions described by the surface and line integrals, both sides of each equation become the same type of integral over the same region, which sets us up to cancel them out! Here’s what our first two equations, which involve surface integrals, look like with these changes, using the Divergence Theorem on the left side and the charge density substitution on the right:

\iiint_V\bold{(\nabla \cdot E)}\: dV= \iiint_V\frac{\rho}{\epsilon_0}\:dV
\iiint_V{(\bold{\nabla \cdot B})\ dV}=0

The next two equations involve line integrals, so we use Stokes’s Theorem on the left and, for the Ampere-Maxwell Law, the current density substitution on the right:

\iint_A{(\nabla \bold{\times E)}\cdot \:d\bold{A}}=-\frac{\partial}{\partial t}\iint_A\bold{B} \cdot d\bold{A}=-\iint_A\frac{\partial \bold{B}}{\partial t} \cdot d\bold{A}
\iint_A{(\nabla \bold{\times B)}\cdot \:d\bold{A}}=\mu_0\iint_A\bold{J \cdot dA}\: +\mu_0\epsilon_0\frac{\partial}{\partial t}\iint_A\bold{E} \cdot d\bold{A}=\iint_A\mu_0\bold{J \cdot dA}\:+\iint_A\mu_0\epsilon_0\frac{\partial \bold{E}}{\partial t} \cdot d\bold{A}

A couple of things worth mentioning with these last two substitutions are the partial derivatives and flux integrals: both Faraday’s and the Ampere-Maxwell Laws involve taking the partial derivative of the magnetic or electric flux, which we’ve already established is an integral over the area the field passes through. In our equations above we first replace the shorthand \( \Phi \) symbol with the actual integral, then move the partial derivative inside the integral: this second step isn’t always valid when the integrand is discontinuous in the variable you’re differentiating with respect to, or when the integral’s bounds involve that variable, but in this case it’s mathematically sound.

If you’ve been keeping track you’ll notice that both sides of our equations have been translated into integrals of the same form over the same region. And since these equalities must hold for any region we choose, the only way for the integrals to always match is for their integrands to match, meaning that just like any other operation, we can cancel them out on both sides!

Differential Form: A Local Description

And that’s all there is to it: we were looking for a description of the electric and magnetic fields’ local behavior, and now we have a rewrite of Maxwell’s equations describing them at any given point in space using divergence and curl. Let’s take a look at the final equations post-cancellation and see what insights we can gain from this new form:

  • Gauss’s Law for Electric Fields: This one has the most satisfying interpretation to me: Gauss’s Law tells us that the charge at a point in space causes the electric field to either push out or pull in other charges, just as we’ve seen when working with individual charges and Coulomb’s Law (which, as we’ve seen, is just a subset of this law to begin with): in other words, charges are the sources and sinks of the electric field! Although we take this for granted when drawing field lines, it’s the differential form of Gauss’s Law that allows us to represent protons with lines flowing outwards (sources) and electrons with lines flowing inwards (sinks), as in the image to the right. This also allows us to use numerical integration to compute the electric field at a point for arbitrary charge distributions (sketched in code below).
\nabla \cdot \bold{E}=\frac{\rho}{\epsilon_0}
  • Gauss’s Law for Magnetic Fields: This takes what we observed about an entire distribution—that the net magnetic flux is zero—and makes the statement even stronger: if the net flux through any region is always zero, then the local flux—i.e., the divergence—at every point inside must also be zero. This can be interpreted to mean that there are no “magnetic charges,” or “monopoles,” that act as independent sources and sinks of the magnetic field, which we can see in the image to the right in the magnetic dipoles that even spinning electrons create.
\nabla \cdot \bold{B}=0
  • Faraday’s Law: This form strips away a lot of added complexity with changing areas and angles, and basically says that a magnetic field changing with time causes the electric field to form a whirlpool in the local region, represented as an infinitesimal circulation in the image to the right. This statement alone makes the connection between electric and magnetic fields much easier to see, with a temporal change in the magnetic field producing a spatial change in the electric field, but its real value lies in its interaction with the Ampere-Maxwell Law and the symmetries it reveals in the two fields. The relation of spatial and temporal change also allows us to approximate one field given the other in a localized region.
\nabla \times \bold{E}=-\frac{\partial \bold{B}}{\partial t}
  • Ampere-Maxwell Law: Like Faraday’s Law, this one serves to simplify the connection between the electric and magnetic fields changing in time and space. Unlike all the other laws, this one contains two terms, the first of which makes use of the current density and can be used to derive numerical estimates of the magnetic field for arbitrary current distributions, in a manner similar to Gauss’s law for charges. Once again, though, it produces its most startling results in conjunction with Faraday’s Law, as we’re about to see for ourselves.
\nabla \times \bold{B}=\mu_0\bold{J}+\mu_0\epsilon_0\frac{\partial \bold{E}}{\partial t}
Infinitesimal Gaussian surfaces around source and sink charges and the field lines they create.
Credit: [CircuitBread]
The closed loops of magnetic dipoles, this time zoomed into the level of individual charges with the North and South Poles
Credit: [Phys.org]
An infinitesimal surface showcasing how loop C is broken into tiny area segments where only the scalar change with time is relevant.
Credit: [EM Geosci]
This common macro distribution also showcases the curl relationship of the magnetic field with a changing electric field at a microscopic scale
Credit: [EM Geosci]
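Before moving on, here’s a rough sketch of the “numerical integration for arbitrary charge distributions” promised in the first bullet: chop the distribution into cells and sum each cell’s Coulomb contribution. The density, grid, and field point below are all made-up values for illustration:

```python
import numpy as np

EPS0 = 8.854187817e-12  # permittivity of free space (F/m)

# Discretize a cube of space containing a uniform ball of charge
n = 40
xs = np.linspace(-0.5, 0.5, n)
dV = (xs[1] - xs[0]) ** 3
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
rho = np.where(X**2 + Y**2 + Z**2 < 0.25**2, 1e-6, 0.0)  # C/m^3

# Sum each cell's Coulomb field at a point outside the distribution
point = np.array([1.0, 0.0, 0.0])
R = np.stack([point[0] - X, point[1] - Y, point[2] - Z], axis=-1)
r = np.linalg.norm(R, axis=-1)
E = np.sum((rho * dV / (4 * np.pi * EPS0 * r**3))[..., None] * R, axis=(0, 1, 2))
print(E)  # ~the field of an equivalent point charge at the origin, along x
```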

Approximation for Arbitrary Distributions

One thing that came up for all of these laws is their value for approximation: we can use the differentials in each law to estimate the value of a field at a point even when tricks like symmetry can’t be applied to yield a simple closed-form integral. Let’s break down how that process might work using Faraday’s Law:

  • Discretization: Divide the region into a grid of discrete points with arbitrary density, and the time domain into discrete time steps of arbitrary frequency (increased discretization improves accuracy as the calculation approaches the continuous distribution).
  • Temporal Integration: Start with the initial magnetic field distribution B at t = 0. For each time step, evaluate the time derivative of the magnetic field (∂B/∂t) using numerical differentiation methods such as the average slope over the step width.
  • Spatial Integration: Using the discrete magnetic field and its time derivative, we solve for the discrete electric field at each point. One common approach is the finite difference method, where the spatial derivatives in the curl equation (∇ × E) are approximated using differences between neighboring grid points.
  • Repeat the Last Two Steps: Iterate through each time step, continuously updating the magnetic field and its time derivative, then recalculating the electric field over the region (see the sketch below).
Image of the charge distribution in an alloy, divided into individual groups of atoms.
Credit: [Pilania 2014]

The basic idea behind these approximation techniques is that if we know what the building blocks are and can determine them with reasonable accuracy, we can combine them to reconstruct the integral without actually evaluating it: the differential form gives us those building blocks.
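Here’s a bare-bones, one-dimensional sketch of that loop, stepping both fields forward with Faraday’s and the Ampere-Maxwell Laws (a toy version of the finite-difference time-domain method; the grid, pulse shape, and step sizes are all illustrative choices):

```python
import numpy as np

MU0 = 4e-7 * np.pi        # permeability of free space (H/m)
EPS0 = 8.854187817e-12    # permittivity of free space (F/m)
c = 1 / np.sqrt(MU0 * EPS0)

nx, nt = 400, 300  # spatial points and time steps (the discretization step)
dx = 1e-3          # grid spacing (m)
dt = 0.5 * dx / c  # time step kept small for numerical stability

# A y-polarized E field and z-directed B field on a 1D x-grid,
# starting from a Gaussian pulse in E and zero B
Ey = np.exp(-(((np.arange(nx) - nx // 2) / 20.0) ** 2))
Bz = np.zeros(nx)

for _ in range(nt):
    # Faraday's Law in 1D: dBz/dt = -dEy/dx (the curl collapses to one term)
    Bz[:-1] -= dt * (Ey[1:] - Ey[:-1]) / dx
    # Free-space Ampere-Maxwell Law in 1D: dEy/dt = -(1/(mu0*eps0)) * dBz/dx
    Ey[1:] -= dt / (MU0 * EPS0) * (Bz[1:] - Bz[:-1]) / dx

# The initial pulse splits into two halves traveling outward at ~c
print(Ey.max())
```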

Fields in Free Space

But a discussion of the differential forms of Maxwell’s equations wouldn’t be complete without highlighting their most important result: the behavior of electric and magnetic fields in free space. With no charges or currents at play, we can equate all charge and current density terms to zero and find the following quartet of simplified equations:

\nabla \cdot \bold{E}=0
\nabla \cdot \bold{B}=0
\nabla \times \bold{E}=-\frac{\partial \bold{B}}{\partial t}
\nabla \times \bold{B}=\mu_0\epsilon_0\frac{\partial \bold{E}}{\partial t}

Even more explicitly than before, Maxwell’s equations in free space highlight the strange recursion of how electric and magnetic fields interact: a change in one causes a change in the other, and so on. Could this kind of loop sustain itself– that is, could the electric and magnetic fields self-propagate?

Electromagnetic Waves

Propagating changes in a field are described as “waves” in physics, and are defined by the identity below: it basically relates a second-order change in time to a second-order change in space… which we can already see isn’t too far off from what we have already, thanks to our differential forms!

\nabla^2\bold{F}=\frac{1}{v_w^2}\frac{\partial^2 \bold{F}}{\partial t^2}

Let’s see if we can make the final push, though: arbitrarily choosing to start with Faraday’s Law, we’re going to take the curl of both sides and use an identity from vector calculus: it looks complicated, but confirming it is just grunt-work expansion if you know how to work with vector products and partial derivatives:

\nabla \times \nabla \times \bold{E}=\nabla \:(\nabla \cdot \bold{E})-\nabla^2\bold{E}
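If you’d rather let a computer do that grunt work, here’s a sketch that verifies the identity symbolically, component by component, for a completely generic field:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
Ex, Ey, Ez = (sp.Function(n)(x, y, z) for n in ("Ex", "Ey", "Ez"))
E = (Ex, Ey, Ez)

def curl(F):
    """Curl of a 3-component field, written out component by component."""
    Fx, Fy, Fz = F
    return (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))

lhs = curl(curl(E))  # curl of curl

# grad(div E) - Laplacian(E), the right-hand side of the identity
div_E = sp.diff(Ex, x) + sp.diff(Ey, y) + sp.diff(Ez, z)
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)
rhs = tuple(sp.diff(div_E, v) - lap(comp) for v, comp in zip((x, y, z), E))

print(all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs)))  # True
```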

Here’s where our free-space requirements come into play: the two-term expansion of the LHS includes the divergence of the electric field, but without any sources or sinks present we know that divergence reduces to zero, and we’re just left with the negative of \( \nabla^2\bold{E} \), which mathematicians call “the Laplacian” and we call “the exact form we need for the wave equation!”

-\nabla^2 \bold{E} = \nabla \times \left(-\frac{\partial \bold{B}}{\partial t}\right)

Now for the right side: spatial and temporal partial derivatives commute for smooth fields, so we can switch around the curl and the derivative with respect to time so the curl acts directly on the magnetic field:

-\nabla^2 \bold{E} = -\frac{\partial}{\partial t}\:(\nabla \times \bold{B})

This is where Maxwell’s equations come in handy again: the free-space Ampere-Maxwell Law tells us the spatial change of the magnetic field is proportional to the temporal change in the electric field, so we can swap it out and simplify to get our desired result:

-\nabla^2 \bold{E} = -\frac{\partial}{\partial t}\left(\mu_0\epsilon_0\frac{\partial \bold{E}}{\partial t}\right)
\nabla^2 \bold{E}=\mu_0\epsilon_0\frac{\partial^2 \bold{E}}{\partial t^2}

The exact same process can be applied to the magnetic field, this time taking the curl of the Ampere-Maxwell Law and substituting in Faraday’s Law and Gauss’s Law for Magnetic Fields (try it!), and just like that, we’ve got two symmetric wave equations for our two fields!
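For reference, the equation you should land on mirrors the electric one exactly:

\nabla^2 \bold{B}=\mu_0\epsilon_0\frac{\partial^2 \bold{B}}{\partial t^2}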

Electromagnetic waves, drawn as perpendicular based on the curl equations that relate their fields.
Credit: [Principles of General Chemistry]

The Speed of Light

To see the full extent of what we’ve just discovered, if we match components of our equations with the general wave equation, we find that the reciprocal of our waves’ velocity squared is equal to \( \mu_0\epsilon_0 \), and solving for v yields \( v_p= \frac{1}{\sqrt{\mu_0\epsilon_0}}=3.0\times10^8\:\frac{m}{s} \), perfectly matching experimental measurements of the speed of light!
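You can check that arithmetic in a couple of lines (the constants are the standard SI values):

```python
import math

MU0 = 4e-7 * math.pi       # permeability of free space (H/m)
EPS0 = 8.854187817e-12     # permittivity of free space (F/m)
print(1 / math.sqrt(MU0 * EPS0))  # ~2.998e8 m/s: the speed of light
```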

We’ve known for a while that light behaves like a wave, and this result establishes its fundamental nature: a propagating interaction of the intertwined electric and magnetic fields, an electromagnetic wave. The importance of this result can’t be overstated: it’s the foundation of much of optics, the study of light and its interactions with matter, and these waves feature in technology from signal antennae to microwaves. All that new info from the exact same equations!

There are More?

But those aren’t the only ways Maxwell’s equations can be twisted and molded to bear fruit: with these 2 fundamental forms covered, along with a heaping helping of vector calculus, we’ll continue that route of exploration and tie in a foray into the complex plane—a familiar sight on this blog—to cover two more forms that are used all the time in engineering and physics to solve problems from circuit analysis to EM wave propagation. See you then!