In the two previous posts in this series we explored a method for solving second-order linear differential equations with constant coefficients that is different from the standard textbook methods taught nowadays. I found the method in a 1941 book by the Sokolnikoffs.
The key point of the method, as we learned, is the identification of the action of the inverse of the differential operator

$$\frac{1}{D + a}$$

with the action of the integral operator

$$e^{-ax} \int e^{ax} \, ( \, \cdot \, ) \, dx,$$

so that $\frac{1}{D + a} \, f(x) = e^{-ax} \int e^{ax} f(x) \, dx$.
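One quick way to convince yourself of this identification is numerically: apply $D + a$ to the output of the integral operator and check that you recover $f$. Here is a minimal sketch of that check (the function names, the value of $a$, and the test functions are my own illustrative choices, not from the original posts):

```python
import math

# Sanity check: if g(x) = e^{-ax} * \int_0^x e^{at} f(t) dt, then applying
# (D + a) to g should return f, i.e. g'(x) + a*g(x) = f(x).

def integral_op(f, a, x, n=2000):
    # e^{-ax} * \int_0^x e^{at} f(t) dt, via the trapezoidal rule
    h = x / n
    total = 0.5 * (f(0.0) + math.exp(a * x) * f(x))
    for i in range(1, n):
        t = i * h
        total += math.exp(a * t) * f(t)
    return math.exp(-a * x) * total * h

def apply_D_plus_a(f, a, x, h=1e-5):
    g = lambda u: integral_op(f, a, u)
    deriv = (g(x + h) - g(x - h)) / (2 * h)  # central difference for D g
    return deriv + a * g(x)                   # (D + a) g, should equal f(x)

a, x = 2.0, 1.3
print(apply_D_plus_a(lambda t: t, a, x), x)         # both near 1.3
print(apply_D_plus_a(math.sin, a, x), math.sin(x))  # both near sin(1.3)
```

The arbitrary constant in the indefinite integral is fixed here by starting the integral at $0$; that only shifts $g$ by a multiple of $e^{-ax}$, which $D + a$ annihilates, so the check is unaffected.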
The previous two posts described solid, well-established mathematics. But now let’s go out on a limb.
Time for some wild speculation
When I see the form of the operator

$$\frac{1}{D + a}$$

which can also be written as

$$\frac{1}{a} \cdot \frac{1}{1 + \frac{D}{a}}$$

I can’t help but think of the formula for the sum of an infinite geometric series:

$$\frac{1}{1 - x} = 1 + x + x^2 + x^3 + \cdots$$
You might recall that the formula is valid if and only if $-1 < x < 1$. For example, if $x = 1/2$, then the formula gives

$$\frac{1}{1 - \frac{1}{2}} = 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 2,$$
which is reasonable if you think in terms of a journey on a number line that starts at 0, moves to the right by 1 unit, then moves to the right by an additional 1/2 unit, and so on. However, if you try $x = 1$, you get nonsense on the right side of the formula, because division by 0 makes no sense, and nonsense on the left side of the formula, because adding 1s indefinitely does not lead to any real number as a result:

$$\frac{1}{1 - 1} = 1 + 1 + 1 + 1 + \cdots$$
Similarly, if you try $x = -1$, you get nonsense on the left side of the formula, because the partial sums alternate between 1 and 0, and so it seems entirely unreasonable to attribute any sum to the infinite series, even though the formula gives 1/2:

$$\frac{1}{1 - (-1)} = 1 - 1 + 1 - 1 + \cdots$$
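You can watch all three behaviours with a tiny script (an illustration of my own; the values of $x$ and the number of terms are arbitrary):

```python
# Partial sums of the geometric series 1 + x + x^2 + ... for a few values of x.
def partial_sums(x, n_terms):
    sums, total = [], 0.0
    for n in range(n_terms):
        total += x ** n
        sums.append(total)
    return sums

print(partial_sums(0.5, 10))   # creeps up toward 1/(1 - 0.5) = 2
print(partial_sums(1.0, 5))    # 1, 2, 3, ... grows without bound
print(partial_sums(-1.0, 6))   # alternates 1, 0, 1, 0, ... with no limit
```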
Back to the differential operator above. What happens if, without thinking much, we just apply the formula for the sum of an infinite geometric series to the differential operator, with $-\frac{D}{a}$ playing the role of $x$? We get

$$\frac{1}{D + a} = \frac{1}{a} \cdot \frac{1}{1 + \frac{D}{a}} = \frac{1}{a}\left(1 - \frac{D}{a} + \frac{D^2}{a^2} - \frac{D^3}{a^3} + \cdots\right)$$

So here is the wild speculation: Could it be that the infinite series of operators in the previous line is “the same” as the integral operator given earlier? That is, could the following be true:

$$\frac{1}{a}\left(f(x) - \frac{f'(x)}{a} + \frac{f''(x)}{a^2} - \frac{f'''(x)}{a^3} + \cdots\right) = e^{-ax} \int e^{ax} f(x) \, dx \, ?$$
This is probably of no practical use even if it is true, but it’s fun to explore the idea, and it would make for a nice connection. So let’s test it out: all you have to do is select various functions f(x), work out both sides of the previous equation (which I’ll call Equation *), and then compare them.
For $f(x) = x$, all is well, and both sides of Equation * work out to the same expression,

$$\frac{x}{a} - \frac{1}{a^2}$$
For $f(x) = x^2$, both sides of Equation * also work out to the same expression,

$$\frac{x^2}{a} - \frac{2x}{a^2} + \frac{2}{a^3}$$
It seems that this ought to work for all powers, and therefore all polynomials; notice that for a polynomial the series on the left side of Equation * terminates, because all sufficiently high derivatives are zero. Can you prove this?
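A numerical check of the polynomial case is straightforward: for $f(x) = x^m$ the series side terminates after $m + 1$ terms, and the integral side can be computed by quadrature. The only subtlety is that the indefinite integral is defined up to a multiple of $e^{-ax}$, so I pin down the constant by starting the integral at $0$ and correcting by the series side's value there. A sketch, with parameter choices of my own:

```python
import math

# Left side of Equation * for f(x) = x^m: the series terminates because
# the (m+1)-th derivative of x^m is zero.
def series_side(m, a, x):
    total, coeff = 0.0, 1.0          # coeff accumulates m! / (m - n)!
    for n in range(m + 1):
        total += (-1.0 / a) ** n * coeff * x ** (m - n)
        coeff *= (m - n)
    return total / a

# Right side, with the integral taken from 0 to x (trapezoidal rule).
def integral_side(m, a, x, n=4000):
    h = x / n
    total = 0.5 * (0.0 ** m + math.exp(a * x) * x ** m)
    for i in range(1, n):
        t = i * h
        total += math.exp(a * t) * t ** m
    return math.exp(-a * x) * total * h

a, x = 2.0, 1.0
for m in (1, 2, 3):
    lhs = series_side(m, a, x)
    # Starting the integral at 0 shifts the answer by series_side(m, a, 0)*e^{-ax},
    # so add that multiple of e^{-ax} back before comparing.
    rhs = integral_side(m, a, x) + series_side(m, a, 0.0) * math.exp(-a * x)
    print(m, lhs, rhs)   # the two columns agree to quadrature accuracy
```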
Let’s try some different types of function. For $f(x) = \sin x$, again all is well (the rearranged series converge provided $|a| > 1$), and both sides of Equation * work out to

$$\frac{a \sin x - \cos x}{a^2 + 1}$$
For $f(x) = \cos x$, both sides of Equation * work out to

$$\frac{a \cos x + \sin x}{a^2 + 1}$$
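The trigonometric cases can also be checked numerically: derivatives of $\sin x$ cycle with period four (sin, cos, −sin, −cos), so the series side of Equation * can be summed term by term. A sketch, with my own choice of $a$ and $x$:

```python
import math

# Left side of Equation * for f(x) = sin x: the n-th derivative cycles through
# sin, cos, -sin, -cos, so term n is (-1/a)^n times the n-th entry of the cycle.
def series_side_sin(a, x, n_terms=200):
    cycle = [math.sin(x), math.cos(x), -math.sin(x), -math.cos(x)]
    return sum((-1.0 / a) ** n * cycle[n % 4] for n in range(n_terms)) / a

def closed_form(a, x):
    # e^{-ax} * \int e^{ax} sin x dx = (a sin x - cos x)/(a^2 + 1), constant dropped
    return (a * math.sin(x) - math.cos(x)) / (a * a + 1)

a, x = 2.0, 1.0
print(series_side_sin(a, x), closed_form(a, x))  # agree, since |a| > 1 here
```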
OK, how about an exponential function, such as $f(x) = e^{kx}$; then provided that $-a < k < a$ (taking $a > 0$), both sides of Equation * yield

$$\frac{e^{kx}}{a + k}$$
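For exponentials the series side really is a geometric series, since each differentiation just multiplies $e^{kx}$ by $k$. Here is a sketch (the values of $a$, $k$, and $x$ are my own illustrative choices):

```python
import math

# Left side of Equation * for f(x) = e^{kx}: the n-th derivative is k^n e^{kx},
# so the series is (e^{kx}/a) * sum_n (-k/a)^n -- geometric with ratio -k/a.
def series_side_exp(k, a, x, n_terms=400):
    return sum((-k / a) ** n for n in range(n_terms)) * math.exp(k * x) / a

def integral_side_exp(k, a, x):
    # e^{-ax} * \int e^{(a+k)x} dx = e^{kx}/(a + k), constant dropped (k != -a)
    return math.exp(k * x) / (a + k)

a, x = 2.0, 0.7
for k in (0.5, -1.0, 1.5):
    print(k, series_side_exp(k, a, x), integral_side_exp(k, a, x))  # agree: |k| < a
```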
However, for $f(x) = e^{-ax}$, the right side of Equation * works out to $x e^{-ax}$, which is the correct solution of the corresponding differential equation. The left side, though, yields the nonsensical

$$\frac{e^{-ax}}{a}\left(1 + 1 + 1 + 1 + \cdots\right)$$
Similarly, for $f(x) = e^{ax}$, the right side of Equation * works out to $\frac{e^{ax}}{2a}$, which is correct. However, the left side yields the nonsensical

$$\frac{e^{ax}}{a}\left(1 - 1 + 1 - 1 + \cdots\right)$$

(although, just as with the geometric series at $x = -1$, assigning the value 1/2 to the alternating series would reproduce the correct $\frac{e^{ax}}{2a}$).
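These failures are visible in the partial sums of the series-side factor for exponentials, which mirror the $x = 1$ and $x = -1$ cases of the geometric series. A sketch with my own parameter choices:

```python
# Partial sums of sum_n (-k/a)^n, the series-side factor for f(x) = e^{kx}.
def partial_sums(k, a, n_terms):
    sums, total = [], 0.0
    for n in range(n_terms):
        total += (-k / a) ** n
        sums.append(total)
    return sums

a = 2.0
print(partial_sums(-2.0, a, 5))  # k = -a: 1, 2, 3, ... grows without bound
print(partial_sums(2.0, a, 6))   # k = +a: alternates 1, 0, 1, 0, ...
print(partial_sums(3.0, a, 6))   # |k| > a: oscillates with growing swings
```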
And if $f(x) = e^{kx}$ with $k > a$, or with $k < -a$, then the infinite series on the left side of Equation * makes no sense (“does not converge”).
So, the operator equivalence in Equation * is sometimes true and sometimes not. Of course, we have tried only a few functions, and have made no effort to be systematic. It would be interesting to explore further and clarify the conditions under which the proposed equivalence is valid. Give it a try if you’re interested, but it does not look like an easy problem!