1. Power Series Solutions
1.1. Why is a power series sometimes referred to as “formal”?
1.2. Why is the “radius of convergence” the distance from $x_0$ to its closest complex singular point?
1.3. Is the radius of convergence useful in practice?
2. Fourier Series and Related
2.1. What is an eigenvalue problem? What is an eigenvalue?
2.2. Why are eigenvalues that are larger than zero insignificant?
2.3. In the case $\lambda = 0$, how would you ever get an eigenvalue? Your general solution does not even have a $\lambda$!
2.4. Why do we discuss three cases $\lambda > 0$, $\lambda = 0$, $\lambda < 0$?
2.5. Why is there generally still a constant in the eigenfunction? Will this always be the case?
2.6. When solving the eigenvalue problems, why do we sometimes write $\sqrt{-\lambda}$ instead of $\sqrt{\lambda}$?
2.7. Why is the factor $\frac{2}{L}$ for Fourier cosine and Fourier sine, but $\frac{1}{L}$ for Fourier?
2.8. Why is the constant term written as $\frac{a_0}{2}$ in Fourier and Fourier cosine series?
2.9. Are Fourier cosine and Fourier sine series special cases of Fourier series?
3. Separation of Variables
3.1. How do I know when it's Fourier Cosine and when it's Fourier Sine?
A power series is a combination of numbers $a_0, a_1, a_2, \dots$ and a symbol $x$ in the following particular manner:
$$a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + a_3 (x - x_0)^3 + \cdots$$
or equivalently $\sum_{n=0}^{\infty} a_n (x - x_0)^n$.
Such an infinite sum is often called “formal” because, as written, it is merely a symbolic expression: infinitely many additions can never actually be carried out. Even if we assign some value to $x$, say let $x = x_1$, the resulting infinite sum of numbers
$$a_0 + a_1 (x_1 - x_0) + a_2 (x_1 - x_0)^2 + \cdots$$
still may or may not converge. When it converges, it represents a number; when it does not, it represents nothing.
In summary, a power series $\sum_{n=0}^{\infty} a_n (x - x_0)^n$ represents a function only inside some interval $(x_0 - R, x_0 + R)$. Outside, the meaning of the sum is not clear and the series is thus purely formal. (Many researchers have been trying to give meaning to the sum for $x$ outside the convergence disk. Such research results in many “named sums”: Cesàro sum, Borel sum, etc.)
First, the distance from $x_0$ to its closest complex singular point is just a lower bound for the radius of convergence of the power series solution, meaning the radius is at least as large.
A full understanding of the “why” involves tedious calculation which can fill a couple of pages, but a quick “pseudo-understanding” may be achieved through the following.
Consider a power series $\sum_{n=0}^{\infty} a_n (x - x_0)^n$ which solves a linear differential equation whose singular points are $z_1, z_2, \dots$
First, it is crucial to think of the $x$ in a power series not as only a real number, but as a complex number. Thus for any complex number $z$, we can set $x = z$ in the formal power series and obtain an infinite sum of numbers:
$$a_0 + a_1 (z - x_0) + a_2 (z - x_0)^2 + \cdots$$
The set of $z$ for which this sum converges is a disk in the complex plane centered at $x_0$.
It turns out that this disk of convergence can be taken at least so large that no singular point is inside it. Therefore the best we can do is to “expand” this disk until its boundary “touches” the closest singular point, at which moment the radius equals the distance from $x_0$ to that singular point.
Indeed it is. For example, suppose after some great effort we have found the first 4 terms of a power series solution
$$y(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + a_3 (x - x_0)^3 + \cdots$$
and have concluded that it is not possible to write down a general formula for the generic coefficient $a_n$.
Thus we have to get some idea of $y(x)$ using the first 4 terms alone. Naïvely we expect $y(x)$ to be close to the partial sum $a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + a_3 (x - x_0)^3$. But how confident are we? Do we have any idea for which $x$ this is true and for which $x$ it is not? We have no idea.
Things change when we have the extra information of the radius of convergence. Say we found out that the radius of convergence is at least $R$. Now we can conclude: for every $x$ with $|x - x_0| < R$ the series converges, so the partial sum genuinely approximates $y(x)$ there, and the approximation improves as $x$ gets closer to $x_0$.
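As a concrete illustration (not from the notes; the function $\frac{1}{1+x^2}$ is chosen here because its complex singular points $\pm i$ are easy to see, so its radius of convergence about $x_0 = 0$ is $1$), the following sketch shows how the radius of convergence separates trustworthy partial sums from meaningless ones:

```python
# Sketch: the Taylor series of 1/(1+x^2) about x0 = 0 has complex singular
# points at +/- i, so its radius of convergence is 1. Partial sums are
# genuine approximations for |x| < 1 and are meaningless for |x| > 1.
def partial_sum(x, n_terms):
    # 1/(1+x^2) = sum_{k>=0} (-1)^k x^(2k), valid only for |x| < 1
    return sum((-1) ** k * x ** (2 * k) for k in range(n_terms))

f = lambda x: 1.0 / (1.0 + x ** 2)

err_inside = abs(partial_sum(0.5, 30) - f(0.5))    # well inside the disk
err_outside = abs(partial_sum(1.5, 30) - f(1.5))   # outside the disk
print(err_inside)    # tiny: the partial sum approximates f here
print(err_outside)   # huge: the partial sum tells us nothing here
```

Note that nothing about the real-valued formula $\frac{1}{1+x^2}$ hints at trouble near $x = \pm 1$; only the complex singular points explain it.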
An “eigenvalue problem” is a linear, homogeneous boundary value problem involving one unknown number $\lambda$. For example
$$y'' = \lambda y, \qquad y(0) = 0, \quad y(L) = 0$$
is an eigenvalue problem. Since $\lambda$ can be any number, an “eigenvalue problem” is in fact a collection of infinitely many boundary value problems.
If we assign a number to $\lambda$, the eigenvalue problem “collapses” to a usual boundary value problem. For example, if we set $\lambda = 1$, the above problem “collapses” to
$$y'' = y, \qquad y(0) = 0, \quad y(L) = 0.$$
As an “eigenvalue problem” is linear and homogeneous, $y = 0$ is always a solution, no matter what number is assigned to $\lambda$. On the other hand, there usually exist a bunch of special numbers such that, when assigned to $\lambda$, the resulting boundary value problem has (besides $y = 0$) non-zero solutions. These “special numbers” are called “eigenvalues”.
For example, consider the eigenvalue problem above, $y'' = \lambda y$ with $y(0) = y(L) = 0$: its eigenvalues are $\lambda_n = -\left(\frac{n\pi}{L}\right)^2$ for $n = 1, 2, 3, \dots$, with corresponding eigenfunctions $y_n = C \sin\frac{n\pi x}{L}$.
A word of caution here: an eigenvalue problem consists of three parts, an equation (involving $\lambda$) and two boundary conditions. A slight change to any one of the three may lead to a big change in the eigenvalues/eigenfunctions as well as the range of the index $n$!
They are not insignificant. All eigenvalues are significant, larger than zero or not. Following N. Trefethen, we can say the set of all eigenvalues is the “signature” of the differential equation. We only see non-positive eigenvalues in class because we have only solved a couple of the simplest eigenvalue problems. It's purely accidental that there is no positive eigenvalue for these problems. If we had the chance to see more sophisticated ones, there would be eigenvalues of both signs.
It should be emphasized that the question itself is not correct. For the eigenvalue problems we dealt with in class, no $\lambda > 0$ can be an eigenvalue. So it's not that “eigenvalues larger than zero are insignificant”, but that there are no eigenvalues larger than zero at all.
Recall that an eigenvalue is just a number such that, if $\lambda$ is set to this number, the resulting boundary value problem has non-zero solutions. So the whole discussion of the case $\lambda = 0$ is just checking whether $0$ is an eigenvalue or not: if we set $\lambda = 0$ in the problem, does the resulting boundary value problem have any non-zero solution? If the answer is yes, then $0$ is an eigenvalue; if the answer is no, then $0$ is not an eigenvalue.
For example, consider the eigenvalue problem
$$y'' = \lambda y, \qquad y(0) = 0, \quad y(L) = 0.$$
If we set $\lambda = 0$, the problem becomes $y'' = 0$, $y(0) = 0$, $y(L) = 0$, which gives $y = 0$ as the only solution. So $0$ is not an eigenvalue for this problem.
On the other hand, if we consider a different eigenvalue problem
$$y'' = \lambda y, \qquad y'(0) = 0, \quad y'(L) = 0,$$
setting $\lambda = 0$ gives $y'' = 0$, $y'(0) = 0$, $y'(L) = 0$, which indeed has non-zero solutions, for example $y = 1$. So $0$ is an eigenvalue for this problem.
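The sign pattern of the eigenvalues can also be checked numerically. The sketch below (an illustration, not part of the course material) discretizes $y'' = \lambda y$, $y(0) = y(L) = 0$ by finite differences; the eigenvalues of the resulting matrix approximate $\lambda_n = -\left(\frac{n\pi}{L}\right)^2$, all of them negative:

```python
# Sketch: discretize y'' = lambda*y on (0, L) with y(0) = y(L) = 0 using
# central finite differences. The eigenvalues of the second-difference
# matrix approximate lambda_n = -(n*pi/L)^2, which are all negative.
import numpy as np

L, N = 1.0, 200
h = L / N
# Matrix acting on interior grid values y_1 .. y_{N-1}; the Dirichlet
# boundary conditions are built into the missing corner entries.
A = (np.diag(-2.0 * np.ones(N - 1))
     + np.diag(np.ones(N - 2), 1)
     + np.diag(np.ones(N - 2), -1)) / h ** 2

lams = np.sort(np.linalg.eigvalsh(A))[::-1]  # closest to zero first
print(lams[:3])           # approx -(pi/L)^2, -(2*pi/L)^2, -(3*pi/L)^2
print(-(np.pi / L) ** 2)  # -9.8696...
```

No positive eigenvalue appears, matching the discussion above.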
First it should be emphasized that this only happens when the equation in our problems is $y'' = \lambda y$. If we change the equation, the cases will be different.
To understand why, we track how we find eigenvalues: the general solution of $y'' = \lambda y$ takes three different forms depending on the sign of $\lambda$. It is $C_1 e^{\sqrt{\lambda} x} + C_2 e^{-\sqrt{\lambda} x}$ when $\lambda > 0$, $C_1 + C_2 x$ when $\lambda = 0$, and $C_1 \cos(\sqrt{-\lambda}\, x) + C_2 \sin(\sqrt{-\lambda}\, x)$ when $\lambda < 0$. Each form has to be checked against the boundary conditions separately, hence the three cases.
Yes, this will always be the case. There will always be arbitrary constants in the formulas for eigenfunctions, and the number of such constants can be any positive integer: one, two, three, ...
To understand why, we take a look at the eigenvalue problems we have solved. The eigenvalues of the last one are the roots of a transcendental equation. Also notice that in the 3rd problem two arbitrary constants are involved in the formula of the eigenfunction.
All these problems are linear and homogeneous, which means that if $y_1$ and $y_2$ solve the problem, so does $C_1 y_1 + C_2 y_2$, where $C_1, C_2$ are arbitrary constants. This property is enjoyed by all eigenvalue problems. As a consequence, if there is any non-zero solution to the problem, then automatically all its constant multiples are also solutions, which is why an arbitrary constant always remains in the eigenfunction.
The reason is that we would like to write every complex number in its standard form $a + bi$ where $a, b$ are real. For example, if we have $\sqrt{-4}$, we usually write it as $2i$ instead of just $\sqrt{-4}$.
So when $\lambda > 0$, we just write $\sqrt{\lambda}$, as this is a real number; but when $\lambda < 0$, we prefer writing $\sqrt{-\lambda}\, i$ over $\sqrt{\lambda}$ because the latter is not in “standard form”.
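The same “standard form” convention is built into Python's complex arithmetic, which may make it concrete:

```python
# Sketch: the square root of a negative number is always reported in the
# standard form b*i (a purely imaginary number), never left as an
# unevaluated "square root of a negative".
import cmath
import math

print(cmath.sqrt(-4))   # 2j, i.e. the standard form 0 + 2i
print(math.sqrt(4))     # 2.0, a plain real number, nothing to rewrite
```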
Ans. The reason lies in the fact that all three are special cases of orthogonal systems. $\left\{\cos\frac{n\pi x}{L}\right\}$ and $\left\{\sin\frac{n\pi x}{L}\right\}$ are orthogonal systems with weight $1$ over the interval $(0, L)$, while $\left\{1, \cos\frac{n\pi x}{L}, \sin\frac{n\pi x}{L}\right\}$ is an orthogonal system with weight $1$ over the interval $(-L, L)$ (note the interval is different!).
If $\{\phi_n\}$ is an orthogonal system over $(a, b)$ with weight $w(x)$, then the coefficients of the expansion
$$f(x) = \sum_n c_n \phi_n(x)$$
can be found through
$$c_n = \frac{\int_a^b f(x)\, \phi_n(x)\, w(x)\, dx}{\int_a^b \phi_n(x)^2\, w(x)\, dx}.$$
Now we have, for $n \geq 1$,
$$\int_0^L \cos^2\frac{n\pi x}{L}\, dx = \int_0^L \sin^2\frac{n\pi x}{L}\, dx = \frac{L}{2}, \qquad \int_{-L}^{L} \cos^2\frac{n\pi x}{L}\, dx = \int_{-L}^{L} \sin^2\frac{n\pi x}{L}\, dx = L,$$
which explains the different factors $\frac{2}{L}$ and $\frac{1}{L}$.
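These normalization integrals are easy to verify numerically; the following sketch uses a simple midpoint rule (the values of $L$ and $n$ are arbitrary choices):

```python
# Sketch: check that the integral of cos^2(n*pi*x/L) over the half
# interval (0, L) is L/2, while over the full interval (-L, L) it is L.
# These denominators are exactly what produce the factors 2/L and 1/L.
import numpy as np

def integrate(f, a, b, m=200000):
    # midpoint rule with m subintervals
    x = a + (np.arange(m) + 0.5) * (b - a) / m
    return f(x).sum() * (b - a) / m

L, n = 2.0, 3
c2 = lambda x: np.cos(n * np.pi * x / L) ** 2

half = integrate(c2, 0.0, L)   # should be L/2 = 1.0
full = integrate(c2, -L, L)    # should be L   = 2.0
print(half, full)
```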
Consider the Fourier series, where a function $f(x)$ is expanded with respect to the orthogonal system $\left\{1, \cos\frac{n\pi x}{L}, \sin\frac{n\pi x}{L}\right\}$ (which is orthogonal over $(-L, L)$ with weight $1$). From the theory of orthogonal systems, we know that if we write
$$f(x) = c_0 + \sum_{n=1}^{\infty} \left[ a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L} \right],$$
then the coefficients are given by
$$c_0 = \frac{1}{2L} \int_{-L}^{L} f(x)\, dx, \qquad a_1 = \frac{1}{L} \int_{-L}^{L} f(x) \cos\frac{\pi x}{L}\, dx, \qquad b_1 = \frac{1}{L} \int_{-L}^{L} f(x) \sin\frac{\pi x}{L}\, dx,$$
and so on. We see that the formulas for all coefficients except $c_0$ can be written as
$$a_n = \frac{1}{L} \int_{-L}^{L} f(x) \cos\frac{n\pi x}{L}\, dx, \qquad b_n = \frac{1}{L} \int_{-L}^{L} f(x) \sin\frac{n\pi x}{L}\, dx.$$
This is not beautiful. To make things look better, instead of $c_0$ we write $\frac{a_0}{2}$, so that the new $a_0$ is the same as two times the old $c_0$, and can be computed through the same formula as the other $a_n$ with $n = 0$.
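A quick numerical check of the convention (using $f(x) = x^2$ on $(-1, 1)$, an arbitrary test function chosen for illustration):

```python
# Sketch: applying the generic formula a_n = (1/L) * integral of f*cos(n*pi*x/L)
# with n = 0 gives a_0 = integral of x^2 over (-1,1) = 2/3, which is exactly
# twice the constant term (the mean value 1/3). Hence the series starts a_0/2.
import numpy as np

L, m = 1.0, 200000
x = -L + (np.arange(m) + 0.5) * (2 * L) / m    # midpoints on (-L, L)
f = x ** 2

a0 = (1 / L) * f.sum() * (2 * L) / m           # the n = 0 case of the a_n formula
mean = f.sum() / m                             # the actual constant term c_0
print(a0, 2 * mean)                            # both are (approximately) 2/3
```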
On the theoretical side, all three (Fourier cosine, Fourier sine and Fourier) are special cases of orthogonal-system expansions arising from solving eigenvalue problems. None of them is at a higher level or “more general” than another.
On the other hand, on the practical side, one can obtain the coefficients in a Fourier cosine or Fourier sine expansion of a certain function $f$ by computing the coefficients of the Fourier expansion of another function which is related to $f$ as follows:
If the Fourier cosine expansion of $f$ over $(0, L)$ is
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi x}{L},$$
then the $a_n$'s turn out to be the same as the $a_n$'s in the Fourier expansion of the even extension of $f$ over $(-L, L)$.
If the Fourier sine expansion of $f$ over $(0, L)$ is
$$f(x) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L},$$
then the $b_n$'s turn out to be the same as the $b_n$'s in the Fourier expansion of the odd extension of $f$ over $(-L, L)$.
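The even-extension claim can be tested numerically; here $f(x) = e^x$ on $(0, 1)$ is an arbitrary test function chosen for illustration:

```python
# Sketch: the Fourier cosine coefficient of f on (0, L), computed as
# (2/L) * integral over (0, L) of f*cos(n*pi*x/L), coincides with the full
# Fourier coefficient (1/L) * integral over (-L, L) of f_even*cos(n*pi*x/L)
# of the even extension f_even.
import numpy as np

L, m, n = 1.0, 200000, 4

x_half = (np.arange(m) + 0.5) * L / m             # midpoints on (0, L)
x_full = -L + (np.arange(2 * m) + 0.5) * L / m    # midpoints on (-L, L)

f = np.exp(x_half)
f_even = np.exp(np.abs(x_full))                   # even extension of e^x

a_cos = (2 / L) * (f * np.cos(n * np.pi * x_half / L)).sum() * L / m
a_full = (1 / L) * (f_even * np.cos(n * np.pi * x_full / L)).sum() * L / m
print(a_cos, a_full)                              # the two agree
```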
Such properties can be used to analyze the convergence of the Fourier cosine and Fourier sine expansions (although such a detour becomes obsolete once one learns the full Sturm–Liouville theory).
The short answer is: you know automatically from solving the eigenvalue problem. If after solving the eigenvalue problem you get the eigenfunctions $\cos\frac{n\pi x}{L}$ (the boundary conditions are usually given in the problem in the form $u_x(0, t) = u_x(L, t) = 0$, though they can be given in other forms), then the initial value should be expanded into a Fourier cosine series; if you get $\sin\frac{n\pi x}{L}$, Fourier sine.
The above guarantees a quick reaction in exams. But a quiet mind and a fearless heart can only be reached through understanding the reason behind this mess. The fundamental reason is the following:
The eigenfunctions $y_n$ form an orthogonal system with a certain weight $w(x)$, which means $\int y_m(x)\, y_n(x)\, w(x)\, dx = 0$ whenever $m \neq n$. Consequently the expansion of any function $f$ into these eigenfunctions,
$$f(x) = \sum_n c_n y_n(x),$$
can be computed through
$$c_n = \frac{\int f(x)\, y_n(x)\, w(x)\, dx}{\int y_n(x)^2\, w(x)\, dx}.$$
When the eigenvalue problem is $y'' = \lambda y$ + boundary conditions, the weight function is always the constant function $w(x) = 1$.
Therefore, when the eigenfunctions are $\cos\frac{n\pi x}{L}$, the coefficients are given by
$$c_n = \frac{\int_0^L f(x) \cos\frac{n\pi x}{L}\, dx}{\int_0^L \cos^2\frac{n\pi x}{L}\, dx} = \frac{2}{L} \int_0^L f(x) \cos\frac{n\pi x}{L}\, dx \quad (n \geq 1),$$
which is exactly the Fourier cosine coefficient formula; similarly, eigenfunctions $\sin\frac{n\pi x}{L}$ lead to the Fourier sine coefficients.
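To see the whole mechanism in action, the sketch below expands a hypothetical initial value $f(x) = x$ on $(0, 1)$ (chosen only for illustration) into a Fourier sine series using the coefficient formula above, and checks the result against the closed form $b_n = \frac{2(-1)^{n+1}}{n\pi}$:

```python
# Sketch: compute the Fourier sine coefficients b_n = (2/L) * integral over
# (0, L) of f(x)*sin(n*pi*x/L) for f(x) = x, and check that the partial sum
# of the sine series reproduces f at an interior point.
import numpy as np

L, m = 1.0, 200000
x = (np.arange(m) + 0.5) * L / m          # midpoints on (0, L)

def b(n):
    # coefficient formula from the orthogonal-system theory, weight w = 1
    return (2 / L) * (x * np.sin(n * np.pi * x / L)).sum() * L / m

b1 = b(1)
print(b1, 2 / np.pi)                      # both approx 0.6366

# partial sum of the sine series at x = 0.5 approaches f(0.5) = 0.5
s = sum(b(n) * np.sin(n * np.pi * 0.5 / L) for n in range(1, 200))
print(s)                                  # approx 0.5
```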