We’ve been talking about reductionism in the past couple of posts, and we’ll continue the story by discussing power series in this post.

The idea behind reductionism in mathematics is to identify some elementary “objects” and to express a complicated “thing” in terms of the elementary things. The intention is either to make it easier to understand the complex thing, or to make it easier to work with it, or both.

The specific implementation of this idea in the context of power series is to express a function (which might be difficult to deal with) as a “linear combination” of powers; that is, the elementary objects are 1, *x*, *x*^{2}, *x*^{3}, and so on. I put “linear combination” within quotation marks because the term technically refers to a sum of a *finite* number of terms, whereas a power series typically has infinitely many terms.

Consider the sine function. It is a well-known function to anyone who has done high-school mathematics, but what is not always appreciated is that when we write sin *x*, we are just writing the name of a function, not a formula for it. When we write *x*^{2} + 1, we have a formula for a function, so that we can calculate the value of the function for any real value of *x* using the algorithm implicit in the formula. Without a calculator, how on earth can you calculate the sine of 1 radian? Better yet, how is the calculator programmed to calculate the sine of 1 radian?

If we could approximate the sine function using a power series, that would solve our problem. As long as the approximation gets better and better the more terms in the series are used (a situation that holds in this case; we’d have to study the concept of convergence (and then convergence of power series) to make sense of this, which we will do another time), then we only need take a sufficient number of terms in the series to get an approximation that will be correct to 12 or 13 decimal places, which is sufficient for a calculator that displays 10 decimal places. This is more or less how the calculator is programmed to calculate values of the sine function.
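To make this concrete, here is a small numerical sketch (in Python; not part of the original post) that counts how many terms of the standard Maclaurin series for sine, *x* − *x*^{3}/3! + *x*^{5}/5! − ⋯, are needed before the partial sum agrees with sin 1 to 13 decimal places:

```python
import math

def sin_partial(x, n_terms):
    """Partial sum of the Maclaurin series x - x^3/3! + x^5/5! - ..."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

# How many terms until the partial sum agrees with math.sin(1) to 1e-13?
x = 1.0
for n in range(1, 20):
    if abs(sin_partial(x, n) - math.sin(x)) < 1e-13:
        print(f"{n} terms suffice: {sin_partial(x, n):.15f} vs {math.sin(x):.15f}")
        break
```

Only a handful of terms are needed, because the factorials in the denominators grow so quickly; this is why the method is practical for a calculator.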

Let’s see how this works. Let’s presume that we can model the function *y* = sin *x* by a power series

*y* = *a*_{0} + *a*_{1}*x* + *a*_{2}*x*^{2} + *a*_{3}*x*^{3} + ⋯

where each of the coefficients *a*_{0}, *a*_{1}, and so on, represents a number that we must determine. To determine the coefficients, we need to use the properties of the sine function. This can be done in several ways; one way is to use the known values of the function and its derivatives at *x* = 0:

sin 0 = 0,  (sin)′(0) = cos 0 = 1,  (sin)″(0) = −sin 0 = 0,  (sin)‴(0) = −cos 0 = −1

Values for subsequent derivatives at *x* = 0 follow the same cyclic pattern of 0, 1, 0, –1, and so on. These properties of the sine function are enough to allow us to determine all of the coefficients. To begin, substitute 0 for *x* in the power series for the sine function. The result is that all of the terms in the series are zero, except for the first:

sin 0 = *a*_{0},  that is,  0 = *a*_{0}

We conclude that *a*_{0} = 0. Next, differentiate the power series term-by-term, and then substitute 0 for *x* to obtain:

cos *x* = *a*_{1} + 2*a*_{2}*x* + 3*a*_{3}*x*^{2} + ⋯,  so that  cos 0 = *a*_{1},  that is,  1 = *a*_{1}

We conclude that *a*_{1} = 1. Continuing in this way we can determine all of the coefficients. The result is (where the exclamation point stands for factorial)

sin *x* = *x* − *x*^{3}/3! + *x*^{5}/5! − *x*^{7}/7! + ⋯

Of course, to be confident that this method always leads to valid results, we would have to dive into the theoretical details quite a lot more deeply. (For example, we differentiated the power series term-by-term, assuming that the result would be the power series for the derivative of the function. Is this always valid?) That’s a good reason to take a university calculus course, don’t you think? Or at least read a good book on the subject; *Understanding Analysis* by Stephen Abbott might be a good place to go if you’re interested. But you can certainly play with the method for other functions, too; try the cosine function and the natural exponential function for starters.
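If you’d like to play with the coefficient recipe numerically, here is a sketch (Python; the helper names are my own, not from the post). It builds the coefficients *a*_{n} = *f*^{(n)}(0)/*n*! from the cyclic derivative values 0, 1, 0, −1 of the sine function and checks that the resulting polynomial tracks the built-in sine:

```python
import math

# Derivatives of sine at 0 cycle through 0, 1, 0, -1, 0, 1, ...
CYCLE = [0, 1, 0, -1]

def sine_coefficients(n_coeffs):
    """Coefficients a_n = f^(n)(0) / n! for the sine series."""
    return [CYCLE[n % 4] / math.factorial(n) for n in range(n_coeffs)]

def evaluate(coeffs, x):
    """Evaluate the polynomial sum of a_n * x^n."""
    return sum(a * x ** n for n, a in enumerate(coeffs))

coeffs = sine_coefficients(16)
for x in (0.5, 1.0, 2.0):
    print(x, evaluate(coeffs, x), math.sin(x))
```

Trying the same pattern for cosine only requires shifting the cycle of derivative values.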

Next consider the following differential equation with initial conditions:

*y*″ + *y* = 0,  *y*(0) = 0,  *y*′(0) = 1

Let’s suppose that we would like to solve this differential equation (which involves determining all possible functions that satisfy both the equation and the initial conditions), but don’t know how. One choice is to suppose that *y* has a power series expansion

*y* = *a*_{0} + *a*_{1}*x* + *a*_{2}*x*^{2} + *a*_{3}*x*^{3} + ⋯

Then differentiate the power series twice (assuming this is valid):

*y*″ = (2)(1)*a*_{2} + (3)(2)*a*_{3}*x* + (4)(3)*a*_{4}*x*^{2} + ⋯

Then form the sum *y*″ + *y*, collect like terms, and set the expression equal to 0. The conclusion is that each of the resulting coefficients must be zero, which allows us (with the use of the initial conditions) to solve for the coefficients of the original power series.

Try this for yourself! You’ll see that the result is

*y* = *x* − *x*^{3}/3! + *x*^{5}/5! − *x*^{7}/7! + ⋯

But hold on … this is the same as the power series for the sine function that we worked out earlier in a different way. The conclusion is that the sine function is a solution to the differential equation that satisfies the initial conditions.
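To see the coefficient bookkeeping in action, here is a sketch (Python; my own illustration, assuming the equation is *y*″ + *y* = 0 with *y*(0) = 0 and *y*′(0) = 1). Collecting like terms in *y*″ + *y* = 0 forces *a*_{n+2} = −*a*_{n}/((*n*+1)(*n*+2)); the code generates coefficients from this recurrence and compares them with the sine series coefficients:

```python
import math

def ode_coefficients(n_coeffs):
    """Coefficients for y'' + y = 0 with y(0) = 0, y'(0) = 1.
    Collecting like terms gives the recurrence a_{n+2} = -a_n / ((n+1)(n+2))."""
    a = [0.0, 1.0]  # a_0 = y(0), a_1 = y'(0)
    for n in range(n_coeffs - 2):
        a.append(-a[n] / ((n + 1) * (n + 2)))
    return a

def sine_coefficient(n):
    """n-th Maclaurin coefficient of sine: 0 for even n, (-1)^k / (2k+1)! for n = 2k+1."""
    if n % 2 == 0:
        return 0.0
    k = (n - 1) // 2
    return (-1) ** k / math.factorial(n)

for n, a in enumerate(ode_coefficients(12)):
    print(n, a, sine_coefficient(n))
```

The two columns of coefficients agree, which is the numerical counterpart of the conclusion above.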

Questions remain: Is this the only solution? If there are others, how can we determine them? And there is a more general question: Can all functions be represented in a satisfactory way by power series? But we shall have to leave these questions for another time.

We’ll continue this story by discussing Fourier series in a subsequent post.
