\documentclass{book}
\usepackage{makeidx}
%\usepackage[hypertex]{hyperref}
%\usepackage[hyperindex,pdfmark]{hyperref}
\newcommand{\seq}[2]{\left(#1\right)_{#2=1}^{\infty}}
\hoffset=0.05\textwidth
\voffset=0.05\textheight
\textwidth=1.1\textwidth
\textheight=1.1\textheight
\bibliographystyle{amsalpha}
\usepackage{graphicx}
\newcommand{\f}[2]{\frac{#1}{#2}}
\newcommand{\fig}[2]{
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{graphs/#2}
\caption{#1}
\end{center}
\end{figure}
}
\newcommand{\figtwo}[3]{
\begin{figure}[ht]
\caption{#1}
\begin{center}
\includegraphics[width=0.45\textwidth]{graphs/#2}
\includegraphics[width=0.45\textwidth]{graphs/#3}
\end{center}
\end{figure}
}
\include{macros}
\renewcommand{\N}{{\mathbb N}}
\usepackage{fancybox}
\title{Calculus for Scientists and Engineers II:\\Math 20B at UCSD}
\author{William Stein\footnote{I am attending John Eggers lectures,
which have a {\em very} strong influence on these notes.}}
\makeindex
\newcommand{\dd}[1]{\frac{d}{dx}}
\DeclareMathOperator{\avg}{avg}
\newcommand{\forclass}[1]{\mbox{}\vspace{1ex}\par\noindent\shadowbox{\begin{minipage}{\textwidth}\small #1\end{minipage}}\par\noindent}
%\renewcommand{\forclass}{}
\newcommand{\boxit}[1]{\begin{center}
\shadowbox{\begin{minipage}{0.7\textwidth}\small #1\end{minipage}}\end{center}}
\newcommand{\sect}[1]{\section{#1}{\bf (William Stein, Math 20b, Winter 2006)}\\}
\begin{document}
\maketitle
\tableofcontents
\chapter{Preface}
In order to learn Calculus\index{Calculus} it's {\em crucial} for you to do all the
assigned problems and then some. When I was a student and started
doing well in math (instead of poorly!), the key difference was that I
started doing an insane number of problems (e.g., every single problem
in the book). Push yourself to the limit!
\section{Computers}
I think the best way to use a computer in learning Calculus is
as a sort of solutions manual, but better. Do a problem first
by hand. Then {\em verify} correctness of your solution.
This is way better than what you get by using a solutions manual!
\begin{itemize}
\item You can try similar problems (not in the homework) and also
verify your answers. This is like playing solitaire, but is much
more creative.
\item You can verify key steps of what you did by hand using the
computer. E.g., if you're confused about one of part of {\em your}
approach to computing an integral, you can compare what you get
with the computer. Solution manuals either give you only the solution
or a particular sequence of steps to get there, which might have little
to do with the brilliantly original strategy you invented.
\end{itemize}
For this course its most useful to have a program that does symbolic
integration. I recommend maxima, which is a fairly simple {\bf
completely free and open source} program written (initially) in the
1960s at MIT. Download it for free from
\begin{center}{\tt http://maxima.sourceforge.net}
\end{center}
It's not insanely powerful, but it'll instantly do (something with)
pretty much any integral in this class, and a lot more. Plus if you
know lisp you can read the source code. (You could also buy Maple or
Mathematica, or use a TI89 calculator.)
Here are some maxima examples:
\begin{verbatim}
(%i2) integrate(x^2 + 1 + 1/(x^2+1), x);
3
x
(%o2) atan(x) +  + x
3
(%i3) integrate(sqrt(5/x), x);
(%o3) 2 sqrt(5) sqrt(x)
(%i4) integrate(sin(2*x)/sin(x), x);
(%o4) 2 sin(x)
(%i5) integrate(sin(2*x)/sin(x), x, 0, %pi);
(%o5) 0
(%i6) integrate(sin(2*x)/sin(x), x, 0, %pi/2);
(%o6) 2
\end{verbatim}
\chapter{Definite and Indefinite Integrals}
\section{The Definite Integral}
\subsection{The definition of area under curve}
Let $f$ be a continuous function on interval $[a,b]$.
Divide $[a,b]$ into $n$ subintervals of length $\Delta x = (ba)/n$.
Choose (sample) points $x_i^*$ in $i$th interval, for each $i$.
The (signed) area between the graph of $f$ and the $x$ axis is approximately
\begin{align*}
A_n &\sim f(x_1^*) \Delta x + \cdots + f(x_n^*) \Delta x \\
& = \sum_{i=1}^n f(x_i^*) \Delta x.
\end{align*}
(The $\sum$ is notation to make it easier to write down and think
about the sum.)
\begin{definition}[Signed Area]
The \defn{(signed) area between the graph} of $f$ and the $x$ axis between
$a$ and $b$ is
$$
\lim_{n\to\infty} \left( \sum_{i=1}^n f(x_i^*) \Delta x \right)
$$
(Note that $\Delta x = (ba)/n$ depends on $n$.)
\end{definition}
It is a theorem that the area exists and doesn't depend
on the choice of $x_i^*$.
\subsection{Relation between velocity and area}
Suppose you're reading a car magazine and there is an article about
a new sports car that has this table in it:
\begin{center}
\begin{tabular}{llllllll}\hline
Time (seconds) & 0 & 1 & 2 & 3 & 4 & 5 & 6\\\hline
Speed (mph) & 0 & 5 & 15 & 25 & 40 & 50 & 60\\\hline
\end{tabular}
\end{center}
They claim the car drove $1/8$th of a mile after $6$ seconds,
but this just ``feels'' wrong... Hmmm...
Let's estimate the distance driven using the formula
$$
\text{ distance }= \text{ rate }\times \text{ time}.
$$
We overestimate by assuming the velocity is a constant
equal to the max on each interval:
$$
\text{estimate }=5 \cdot 1 + 15 \cdot 1 + 25 \cdot 1 + 40 \cdot 1 +
50 \cdot 1 + 60 \cdot 1 = \frac{195}{3600} \text{ miles } = 0.054...
$$
(Note: there are $3600$ seconds in an hour.)
But $1/8 \sim 0.125$, so the article is inconsistent. (Doesn't
this sort of thing just bug you? By learning calculus you'll
be able to doublecheck things like this much more easily.)
{\bf Insight!} {\em The formula for the estimate of distance traveled
above looks exactly like an approximation for the area under
the graph of the speed of the car!} In fact, if an object
has velocity $v(t)$ at time $t$, then the net change in position
from time $a$ to $b$ is
$$
\int_{a}^b v(t) dt.
$$
We'll come back to this observation frequently.
\subsection{Definition of Integral}
Let $f$ be a continuous function on the interval $[a,b]$.
The definite integral is just the signed area between the
graph of $f$ and the $x$ axis:
\begin{definition}[Definite Integral]\label{defn:int}
The \defn{definite integral} of $f(x)$ from $a$ to $b$ is
$$
\int_{a}^{b} f(x) dx = \lim_{n\to\infty}\left( \sum_{i=1}^n f(x_i^*) \Delta x \right).
$$
\end{definition}
Properties of Integration:
\begin{itemize}
\item $\int_{a}^b f(x) dx =  \int_{b}^{a} f(x) dx$
\item $\int_{a}^b c_1 f_1(x) + c_2 f_2(x) dx =
c_1 \int_{a}^b f_1(x) + c_2 \int_{a}^{b} f_2(x) dx.\qquad$ (linearity)
\item If $f(x)\geq g(x)$ on for all $x\in [a,b]$, then
$\int_{a}^b f(x) dx \geq \int_{a}^b g(x) dx$.
\end{itemize}
There are many other properties.
\subsection{The Fundamental Theorem of Calculus}
Let $f$ be a continuous function on the interval $[a,b]$.
The following theorem
is {\em incredibly} useful in mathematics, physics, biology, etc.
\begin{theorem}
If $F(x)$ is any differentiable function on $[a,b]$ such that
$F'(x) = f(x)$, then
$$
\int_{a}^{b} f(x) dx = F(b)  F(a).
$$
\end{theorem}
One reason this is amazing, is because it says that the area under the
entire curve is completely determined by the values of a (``magic'')
auxiliary function {\em at only $2$ points}. It's hard to believe. It
reduces computing (\ref{defn:int}) to finding a single function $F$,
which one can often do algebraically, in practice.Whether or not
one should use this theorem to evaluate an integral depends a lot on
the application at hand, of course. One can also use a partial limit via a
computer for certain applications (numerical integration).
\begin{example}
I've always wondered exactly what the area is
under a ``hump'' of the graph of $\sin$. Let's figure it out,
using $F(x) = \cos(x)$.
$$
\int_{0}^\pi \sin(x) dx
= \cos(\pi)  (\cos(0)) = (1)  (1) = 2.
$$
\end{example}
\vspace{2em}
But does such an $F$ always exist? The surprising answer is ``yes''.
\begin{theorem}\label{thm:fexists}
Let $F(x) = \int_{a}^{t} f(t) dt$.
Then $F'(x) = f(x)$ for all $x \in [a,b]$.
\end{theorem}
Note that a ``nice formula'' for $F$ can be hard to find or even
provably nonexistent.
The proof of Theorem~\ref{thm:fexists} is somewhat complicated but is
given in complete detail in Stewart's book, and you should definitely
read and understand it.
\begin{proof}[Sketch of Proof]
We use the definition of derivative.
\begin{align*}
F'(x) &= \lim_{h\to 0} \frac{F(x+h)  F(x)}{h} \\
&= \lim_{h\to 0} \left(\int_{a}^{x+h} f(t)dt  \int_{a}^{x} f(t)dt\right)/h\\
&= \lim_{h\to 0} \left(\int_{x}^{x+h} f(t)dt\right)/h
\end{align*}
Intuitively, for $h$ sufficiently small $f$ is essentially constant,
so $\int_{x}^{x+h} f(t)dt \sim hf(x)$ (this can be made precise using
the extreme value theorem). Thus
$$
\lim_{h\to 0} \left(\int_{x}^{x+h} f(t)dt\right)/h = f(x),
$$
which proves the theorem.
\end{proof}
\sect{Indefinite Integrals and Change}
\forclass{Homework: Do the following by Tuesday, January 17.\\
* Section 5.3: 13, 37, 55, 67\\
* Section 5.4: 2, 9, 13, 27, 33, 39, 45, 47, 51, 53\\
* Section 5.5: 11, 23, 31, 37, 41, 55, 57, 63, 65, 75, 79\\
The first quiz will be on Friday, Jan 20 and will consist
of two problems from this homework. {\bf Ace the first quiz!}
}
\subsection{Indefinite Integrals}
The notation
$\int f(x) dx = F(x)$ means that $F'(x) = f(x)$
on some (usually specified) domain of definition of $f(x)$.
\begin{definition}[Antiderivative]
We call $F(x)$ an \defn{antiderivative} of $f(x)$.
\end{definition}
\begin{proposition}
Suppose $f$ is a continuous function on an interval $(a,b)$.
Then any two antiderivatives differ by a constant.
\end{proposition}
\begin{proof}
If $F_1(x)$ and $F_2(x)$ are both antiderivatives of a function $f(x)$,
then
$$
(F_1(x)  F_2(x))' = F_1'(x)  F_2'(x) = f(x)  f(x) = 0.
$$
Thus $F_1(x)  F_2(x) = c$ from some constant $c$ (since only constant
functions have slope~$0$ everywhere). Thus $F_1(x) = F_2(x) + c$
as claimed.
\end{proof}
We thus often write
$$
\int f(x) dx = F(x) + c,
$$
where $c$ is an (unspecified fixed) constant.
Note that the proposition need not be true if $f$ is not defined
on a whole interval.
For example, $f(x) = 1/x$ is not defined at $0$. For any pair
of constants $c_1$, $c_2$, the function
$$
F(x) = \begin{cases}
\ln(x) + c_1 & x < 0,\\
\ln(x) + c_2 & x > 0,
\end{cases}
$$
satisfies $F'(x) = f(x)$ for all $x\neq 0$.
We often still just write $\int 1/x = \ln(x)+c$ anyways, meaning
that this formula is supposed to hold only on one of the intervals
on which $1/x$ is defined (e.g., on $(\infty,0)$ or $(0,\infty)$).
We pause to emphasize the notation difference between
definite and indefinite integration.
\begin{align*}
\int_{a}^{b} f(x) dx &\,\,= \text{ a specific number}\\
\int f(x) dx &\,\,= \text{ a (family of) functions}\\
\end{align*}
One of the \emph{main goals} of this course is to help you to get
really good at computing $\int f(x)dx$ for various functions $f(x)$.
It is useful to memorize a table of examples (see, e.g., page 406 of
Stewart), since often the trick to integration is to relate a given
integral to a known one. Integration is like solving a puzzle or
playing a game, and often you win by moving into a position where you
know how to defeat your opponent, e.g., relating your integral to
integrals that you already know how to do. If you know how to
do a basic collection of integrals, it will be easier for you
to see how to get to a known integral from an unknown one.
Whenever you successfully compute $F(x) = \int f(x) dx$, then you've
constructed a \emph{mathematical gadget} that allows you to very
quickly compute $\int_a^b f(x) dx$ for any $a,b$ (in the interval of
definition of $f(x)$). The gadget is $F(b)  F(a)$. This is really
powerful.
\subsection{Examples}
\begin{example}
\begin{align*}
\int x^2 + 1 + \frac{1}{x^2 + 1} dx
&= \int x^2dx + \int 1dx + \int \frac{1}{x^2+1} dx\\
&= \frac{1}{3} x^2 + x + \tan^{1}(x) + c.
\end{align*}
\end{example}
\begin{example}
$$
\int \sqrt{\frac{5}{x}} dx =
\int \sqrt{5} x^{1/2} dx = 2\sqrt{5} x^{1/2} + c.
$$
\end{example}
\begin{example}
$$
\int \frac{\sin(2x)}{\sin(x)} dx = \int \frac{2\sin(x)\cos(x)}{\sin(x)}
= \int 2\cos(x) = 2\sin(x) + c
$$
\end{example}
\subsection{Physical Intuition}
In the previous lecture we mentioned a relation between velocity,
distance, and the meaning of integration, which gave you a physical
way of thinking about integration. In this section we generalize our
previous observation.
The following is a restatement of the fundamental theorem
of calculus:
\begin{theorem}[Net Change Theorem]
The definite integral of the rate of change $F'(x)$ of some quantity
$F(x)$ is the net change in that quantity:
$$
\int_{a}^b F'(x) dx = F(b)  F(a).
$$
\end{theorem}
For example, if $p(t)$ is the population of students at UCSD
at time $t$, then $p'(t)$ is the rate of change. Lately
$p'(t)$ has been positive since $p(t)$ is growing (rapidly!).
The net change interpretation of integration is that
$$
\int_{t_1}^{t_2} p'(t) dt = p(t_2)  p(t_1) = \text{ change in number of students from time $t_1$ to $t_2$}.
$$
Another very common example you'll seen in problems involves water
flow into or out of something. If the volume of water in your bathtub
is $V(t)$ gallons at time $t$ (in seconds), then the rate at which
your tub is draining is $V'(t)$ gallons per second. If you have the
geekiest drain imaginable, it prints out the drainage rate $V'(t)$.
You can use that printout to determine how much water drained out from
time $t_1$ to $t_2$:
$$
\int_{t_1}^{t_2} V'(t) dt\,\, =
\text { water that drained out from time $t_1$ to $t_2$ }
$$
Some problems will try to confuse you with different notions of
change. A standard example is that if a car has \defn{velocity}
$v(t)$, and you drive forward, then slam it in reverse and drive
backward to where you start (say 10 seconds total elapse), then $v(t)$
is positive some of the time and negative some of the time. The
integral $\int_{0}^{10} v(t) dt$ is not the total distance registered
on your odometer, since $v(t)$ is partly positive and partly negative.
If you want to express how far you actually drove going back and
forth, compute $\int_{0}^{10} v(t) dt$. The following example
emphasizes this distinction:
\begin{example}
{\em An ancient dragon is pacing on the cliffs in Del Mar, and has
velocity $v(t)=t^22t8$. Find (1) the \defn{displacement} of the
dragon from time $t=1$ until time $t=6$ (i.e., how far the dragon is
at time $6$ from where it was at time $1$), and (2) the {\em total
distance} the dragon paced from time $t=1$ to $t=6$.}
For (1), we compute
\begin{align*}
\int_{1}^6 (t^2  2t  8) dt
= \left[ \frac{1}{3} t^3  t^2  8t \right]_{1}^6 =
 \frac{10}{3}.
\end{align*}
For (2), we compute the integral of $v(t)$:
\begin{align*}
\int_{1}^6 t^2  2t  8 dt
= \left[ \left(\frac{1}{3} t^3  t^2  8t\right) \right]_{1}^4
+ \left[ \frac{1}{3} t^3  t^2  8t \right]_{4}^6
= 18 + \frac{44}{3} = \frac{98}{3}.
\end{align*}
\end{example}
\section{Substitution and Symmetry}
\forclass{Homework reminder.\\Quiz reminder: Friday, Jan 20
({\bf Ace the first quiz!}).\\Office Hours: Tue 111.\\
Monday is a holiday!\\
Wednesday  areas between curves and {\em volumes}\\
First midterm: Wed Feb 1 at 7pm (review lecture during day!)\\
Quick 5 minute discussion of computers and Maxima.\\
Quiz format: one question on front; one on back.\\
Remarks:
\begin{enumerate}
\item The \defn{total distance traveled} is $\int_{t_1}^{t_2} v(t) dt$ since
$v(t)$ is the rate of change of $F(t)=$ distance traveled (your
speedometer displays the rate of change of your odometer).
\item
How to compute $\int_{a}^{b} f(x) dx$.
\begin{enumerate}
\item Find the zeros of $f(x)$ on $[a,b]$, and use these to break
the interval up into subintervals on which $f(x)$ is always $\geq 0$
or always $\leq 0$.
\item
On the intervals where $f(x) \geq 0$, compute the integral of $f$,
and on the intervals where $f(x)\leq 0$, compute the integral of $f$.
\item The sum of the above integrals on intervals is $\int f(x) dx$.
\end{enumerate}
\end{enumerate}
}
This section is primarly about a powerful technique for computing
definite and indefinite integrals.
\subsection{The Substitution Rule}
In first quarter calculus you learned numerous methods for computing
derivatives of functions. For example, the \defn{power rule} asserts
that
$$
(x^a)' = a \cdot x^{a1}.
$$
We can turn this into a way to compute certain integrals:
$$
\int x^a dx = \frac{1}{a+1} x^{a+1} \qquad \text{if $a\neq 1$}.
$$
Just as with the power rule, many other rules and results that you
already know yield \emph{techniques} for integration. In general
integration is potentially much trickier than differentiation,
because it is often not obvious which technique to use, or even
how to use it. {\em Integration is a more exciting than differentiation!}
Recall the \defn{chain
rule}, which asserts that
$$
\dd{x} f(g(x)) = f'(g(x)) g'(x).
$$
We turn this into a technique for integration as follows:
\begin{proposition}[Substitution Rule]
Let $u=g(x)$, we have
$$
\int f(g(x)) g'(x) dx = \int f(u) du,
$$
assuming that $g(x)$ is a function that is differentiable
and whose range is an interval on which $f$ is continuous.
\end{proposition}
\begin{proof}
Since $f$ is continuous on the range of $g$,
Theorem~\ref{thm:fexists} (the fundamental theorem of Calculus)
implies that there is a function $F$ such that $F'= f$.
Then
\begin{align*}
\int f(g(x)) g'(x) dx &=
\int F'(g(x)) g'(x) dx\\
&= \int \left(\dd{x} F(g(x))\right) dx \\
&= F(g(x)) + C \\
&= F(u) + C = \int F'(u) du
= \int f(u) du.
\end{align*}
\end{proof}
If $u=g(x)$ then $du = g'(x) dx$, and the substitution rule simply
says if you let $u=g(x)$ formally in the integral everywhere, what you
naturally would hope to be true based on the notation actually is
true. The substitution rule illustrates how the notation Leibniz
invented for Calculus is \defn{incredibly brilliant}. It is said that
Leibniz would often spend days just trying to find the right notation
for a concept. He succeeded.
As with all of Calculus, the best way to start to get your head around
a new concept is to see severally clearly worked out examples. (And
the best way to actually be able to use the new idea is to {\em do}
lots of problems yourself!) In this section we present examples that
illustrate how to apply the substituion rule to compute indefinite
integrals.
\begin{example}
$$
\int x^2(x^3 + 5)^9 dx
$$
Let $u=x^3 +5$. Then $du = 3x^2 dx$, hence $dx = du/(3x^2)$. Now substitute it all in:
$$
\int x^2 (x^3 + 5)^9 dx = \int \frac{1}{3} u^9 = \frac{1}{30} u^{10} = \frac{1}{30}(x^3 + 5)^{10}.
$$
There's no point in expanding this out: ``only simplify for a \emph{purpose}!''
\end{example}
\begin{example}
$$
\int \frac{e^x}{1+e^x} dx
$$
Substitute $u=1+e^x$. Then $du = e^x dx$, and the integral above becomes
$$
\int \frac{du}{u} = \lnu = \ln1+e^x = \ln(1+e^x).
$$
Note that the absolute values are not needed, since $1+e^x>0$ for all $x$.
\end{example}
\begin{example}
$$
\int \frac{x^2}{\sqrt{1x}} dx
$$
Keeping in mind the power rule, we make the substitution
$u = 1x$. Then $du = dx$. Noting that $x=1u$ by solving
for $x$ in $u=1x$, we see that the above integral becomes
\begin{align*}
\int  \frac{(1u)^2}{\sqrt{u}} du
&= \int \frac{1  2u + u^2}{u^{1/2}} du\\
&= \int u^{1/2}  2u^{1/2} + u^{3/2} du \\
&= \left(2u^{1/2}  \frac{4}{3} u^{3/2} + \frac{2}{5} u^{5/2}\right)\\
&= 2(1x)^{1/2} + \frac{4}{3}(1x)^{3/2}  \frac{2}{5} (1x)^{5/2}.
\end{align*}
\end{example}
\subsection{The Substitution Rule for Definite Integrals}
\begin{proposition}[Substitution Rule for Definite Integrals]
We have
$$
\int_{a}^{b} f(g(x)) g'(x) dx = \int_{g(a)}^{g(b)} f(u) du,
$$
assuming that $u=g(x)$ is a function that is differentiable
and whose range is an interval on which $f$ is continuous.
\end{proposition}
\begin{proof}
If $F' = f$, then by the chain rule,
$F(g(x))$ is an antiderivative of $f(g(x)) g'(x)$.
Thus
$$
\int_{a}^{b} f(g(x)) g'(x) dx = \Bigl[ F(g(x))\Bigr]_a^b = F(g(b))  F(g(a))
= \int_{g(a)}^{g(b)} f(u) du.
$$
\end{proof}
\begin{example}
$$\int_{0}^{\sqrt{\pi}} x \cos(x^2) dx$$
We let $u=x^2$, so $du=2xdx$ and $xdx = \frac{1}{2} du$ and the integral becomes
$$
\frac{1}{2} \cdot \int_{(0)^2}^{(\sqrt{\pi})^2} \cos(u) du
= \frac{1}{2}\cdot \left[ \sin(u) \right]_{0}^{\pi} = \frac{1}{2} \cdot (0  0) = 0.
$$
\end{example}
\subsection{Symmetry}
An \defn{odd function} is a function $f(x)$ such that $f(x) = f(x)$,
and an \defn{even function} one for which $f(x) = f(x)$.
If $f$ is an odd function, then for any $a$,
$$
\int_{a}^a f(x) dx = 0.
$$
If $f$ is an even function, then for any $a$,
$$
\int_{a}^a f(x) dx = 2 \int_{0}^a f(x) dx.
$$
Both statements are clear if we view integrals as computing
the signed area between the graph of $f(x)$ and the $x$axis.
\begin{example}
$$
\int_{1}^1 x^2 dx = 2 \int_{0}^1 x^2 dx = 2\left[\frac{1}{3} x^3\right]_{0}^1 = \frac{2}{3}.
$$
\end{example}
\chapter{Applications to Areas, Volume, and Averages}
\section{Using Integration to Determine Areas Between Curves}
\forclass{Today is 20060118.\\
Quiz reminder: Friday, Jan 20 (describe format)\\
How was your weekend?\\
Mine was greatI wrote open source math software nonstop for
days on end!}
This section is about how to compute the area of fairly general
regions in the plane. Regions are often described as the area
enclosed by the graphs of several curves. (``My land is the
plot enclosed by that river, that fence, and the highway.'')
Recall that the integral $\int_{a}^{b} f(x) dx$ has a geometric
interpretation as the signed area between the graph of $f(x)$ and the
$x$axis. We defined area by subdividing, adding up approximate areas
(use points in the intervals) as \defn{Riemann sum}, and taking the limit.
Thus we defined area as a limit of Riemann sums. The fundamental
theorem of calculus asserts that we can compute areas exactly when
we can finding antiderivatives.
Instead of considering the area between the graph of $f(x)$
and the $x$axis, we consider more generally two graphs,
$y=f(x)$, $y=g(x)$, and assume for simplicity that
$f(x)\geq g(x)$ on an interval $[a,b]$.
Again, we approximate the area {\em between} these two
curves as before using Riemann sums.
Each approximating rectangle has width $(ba)/n$ and height
$f(x)g(x)$, so
$$
\text{Area bounded by graphs} \sim \sum [f(x_i)g(x_i)]\Delta x.
$$
Note that $f(x)g(x)\geq 0$, so the area is nonnegative.
From the definition of integral we see that the exact area is
\begin{equation}\label{eqn:boundedform}
\text{Area bounded by graphs } = \int_{a}^b (f(x)  g(x)) dx.
\end{equation}
Why did we make a big deal about approximations instead of just
writing down (\ref{eqn:boundedform})? Because having a sense of how
this area comes directly from a Riemann sum is very important. But,
what is the point of the Riemann sum if all we're going to do is write
down the integral? The sum embodies the geometric manifestation of
the integral. If you have this picture in your mind, then the Riemann
sum has \defn{done its job}. If you understand this, you're more
likely to know what
integral to write down; if you don't, then you might not.
%(How to account for $g(x)$ or $f(x)$ under the $x$ axis.)
\begin{remark}
By the linearity property of integration,
our sought for area is the difference
$$
\int_{a}^b f(x) dx  \int_{a}^b g(x) dx,
$$
of two signed areas.
\end{remark}
\subsection{Examples}
\begin{example}
Find the area enclosed by $y=x+1$, $y=9x^2$, $x=1$, $x=2$.
\fig{What is the enclosed area?}{example_curve_area1}
%gnuplot.plot('plot [1:2] x+1, 9x^2', 'example_curve_area1')
$$
\text{Area} = \int_{1}^2 \Bigl[(9x^2)  (x+1)\Bigr] dx
$$
We have reduced the problem to a computation:
$$
\int_{1}^2 [(9x^2)  (x+1)] dx
= \int_{1}^2 (8xx^2)dx
= \left[ 8x  \frac{1}{2}x^2  \frac{1}{3}x^3 \right]_{1}^2
= \frac{39}{2}.
$$
%sage: maxima('(9x^2)(x+1)').integrate('x',1,2)
\end{example}
The above example illustrates the simplest case. In practice
more interesting situations often arise.
The next example illustrates finding the boundary points $a,b$
when they are not explicitly given.
\begin{example}
Find area enclosed by the two parabolas $y=12x^2$ and $y=x^26$.
\fig{What is the enclosed area?}{example_curve_area2}
%gnuplot.plot('plot [5:5] 12x^2, x^26', 'example_curve_area2')
Problem: We didn't tell you what the boundary points $a,b$
are. We have to figure that out. How? We must find
{\em exactly} where the two curves intersect, by setting
the two curves equal and finding the solution.
We have
$$x^26 = 12x^2,$$
so $0 = 2x^2 18 = 2(x^29) = 2(x3)(x+3)$, hence the intersect
points are at $a=3$ and $b=3$. We thus find the area
by computing
$$
\int_{3}^3 \left[12x^2  (x^26)\right] dx
= \int_{3}^3 (18  2x^2) dx
= 4 \int_{0}^3 (9  x^2) dx
= 4 \cdot 18 = 72.
$$
%sage: maxima('9x^2').integrate('x',0,3)
\end{example}
\begin{example}
A common way in which you might be tested to see if you {\em really}
understand what is going on, is to be asked to find the area between
two graphs $x = f(y)$ and $x=g(y)$. If the two graphs are vertical,
subtract off the rightmost curve. Or, just ``switch $x$ and $y$''
everywhere (i.e., reflect about $y=x$). The area is unchanged.
\end{example}
\begin{example}
Find the area ({\em not signed area!})
enclosed by $y=\sin(\pi x)$, $y=x^2  x$,
and $x=2$.
\fig{Find the area}{example_curve_area3}
% gnuplot.plot('plot [0.5:2.5] sin(pi*x), x^2x, 0', 'example_curve_area3')
Write $x^2  x = (x1/2)^2  1/4$, so that we can obtain
the graph of the parabola by shifting the standard graph.
The area comes in two pieces, and the upper and lower curve switch in
the middle.
Technically, what we're doing is integrating the
{\em absolute value} of the difference.
The area is
$$
\int_{0}^{1} \sin(\pi x)  (x^2x) dx

\int_{1}^{2} (x^2x)  \sin(\pi x)dx
= \frac{4}{\pi} + 1
$$
%sage: f=maxima('sin(%pi *x)  (x^2x)')
%sage: f.integrate(x,0,1)  f.integrate(x,1,2)
Something to take away from this is that in order to solve
this sort of problem, you need some facility with graphing
functions. If you aren't comfortable with this, review.
\end{example}
\section{Computing Volumes of Surfaces of Revolution}
Everybody knows that the voluem of a solid box is
$$
\text{volume} = \text{length} \times \text{width} \times \text{height}.
$$
More generally, the volume of cylinder is
$V = \pi r^2 h$ (cross sectional area times height).
Even more generally, if the base of a prism has area $A$, the
volume of the prism is $V=A h$.
But what if our solid object looks like a complicated blob? How would
we compute the volume? We'll do something that by now should seem
familiar, which is to chop the object into small pieces and take the
limit of approximations.
[[Picture of solid sliced vertically into a bunch
of vertical thin solid discs.]]
Assume that we have a function
$$
A(x) = \text{cross sectional area at $x$}.
$$
The volume of our potentially complicated blob
is approximately $\sum A(x_i) \Delta x$.
%(Joke: If slices thin enough, could chop them up, fry them,
%and have potatoe chipsthat's where we're headed.)
Thus
\begin{align*}
\text{volume of blob} &= \lim_{n\to\infty} \sum_{i=1}^n A(x_i) \Delta x\\
&= \int_{a}^b A(x) dx
\end{align*}
\begin{example}
Find the volume of the pyramid with height $H$ and
square base with sides of length $L$.
\fig{How Big is Pharaoh's Place?}{pyramid}
For convenience look at pyramid on its side, with the tip of the
pyramid at the origin. We need to figure out the cross sectional area
as a function of $x$, for $0\leq x \leq H$. The function that gives
the distance $s(x)$ from the $x$ axis to the edge is a line, with
$s(0)=0$ and $s(H) = L/2$. The equation of this line is thus $s(x) =
\frac{L}{2H} x$. Thus the cross sectional area is
$$
A(x) = (2s(x))^2 = \frac{x^2L^2}{H^2}.
$$
The volume is then
$$
\int_{0}^{H} A(x)dx =
\int_{0}^{H} \frac{x^2L^2}{H^2} dx
= \left[ \frac{x^3L^2}{3H^2}\right]_{0}^H
= \frac{H^3L^2}{3H^2} = \frac{1}{3} HL^2.
$$
\end{example}
\forclass{
Today: Quiz!\\
Next: Polar coordinates, etc.\\
Questions:?\\
Recall: Find volume by integrating cross section of area. (draw picture)
}
\begin{example}
Find the volume of the solid obtained by rotating the following
region about the $x$ axis: the region enclosed by
$y=x^2$ and $y=x^3$ between $x=0$ and $x=1$.
\fig{Find the volume of the flower pot}{example_rotate1}
% gnuplot.plot('plot [0:1] x^2,x^3, 0', 'example_rotate1')
The cross section is a ``washer'', and the area as a function
of $x$ is
$$
A(x) = \pi(r_o(x)^2  r_i(x)^2) = \pi (x^4  x^6).
$$
The volume is thus
$$
\int_{0}^1 A(x) dx = \int_{0}^1 \left(\frac{1}{5}x^5  \frac{1}{7}x^7\right) dx
= \pi \left[\frac{1}{5}x^5  \frac{1}{7}x^7\right]_{0}^{1}
= \frac{2}{35}\pi.
$$
\end{example}
\begin{example}\label{ex:spherevol}
One of the most important examples of a volume is the volume $V$
of a sphere of radius $r$. Let's find it!
We'll just compute the volume of a half and multiply by $2$.
\fig{Cross section of a half of sphere with radius 1}{example_sphere1}
The cross sectional area is
$$
A(x) = \pi r(x)^2 = \pi (\sqrt{r^2x^2})^2 = \pi (r^2x^2).
$$
Then
$$
\frac{1}{2} V = \int_{0}^r \pi (r^2x^2) dx
= \pi \left[ r^2 x  \frac{1}{3}x^3\right]_0^r
= \pi r^3  \frac{1}{3} \pi r^3 = \frac{2}{3}\pi r^3.
$$
Thus
$V = (4/3) \pi r^3$.
\end{example}
\begin{example}
Find volume of intersection of two spheres of radius $r$, where
the center of each sphere lies on the edge of the other sphere.
From the picture we see that the answer is
$$
2 \int_{r/2}^{r} A(x),
$$
where $A(x)$ is {\em exactly} as in Example~\ref{ex:spherevol}.
We have
$$
2 \int_{r/2}^{r} \pi (r^2x^2) dx = \frac{5}{12} \pi r^3.
$$
\end{example}
\section{Average Values}
\forclass{
Quiz Answers: (1) 29, (2)
$\frac{1}{2}\ln\leftx^2 + 1\right + \tan^{1}(x)$
Exam 1: Wednesday, Feb 1, 7:00pm7:50pm, here.\\
Today: \S6.5  Average Values\\
Today: \S10.3  Polar coords\\
NEXT: \S10.4 Areas in Polar coords\\
Why did we skip from \S6.5 to \S 10.3? Later we'll go back and look
at trig functions and complex exponentials; these ideas will fit
together more than you might expect. We'll go back to \S 7.1 on Feb 3.
}
In this section we use Riemann sums to extend the familiar notion of
an average, which provides yet another physical interpretation of
integration.
Recall: Suppose $y_1, \ldots, y_n$ are the amount of rain each
day in La Jolla, since you moved here. The average rainful
per day is
$$
y_{\avg} = \frac{y_1+\cdots + y_n}{n} = \frac{1}{n}\sum_{i=1}^n y_i.
$$
\begin{definition}[Average Value of Function]\label{def:avg}
Suppose $f$ is a continuous function on an interval $[a,b]$.
The \defn{average value} of $f$ on $[a,b]$ is
$$
f_{\avg} = \frac{1}{ba} \int_{a}^b f(x) dx.
$$
\end{definition}
Motivation: If we sample $f$ at $n$ points $x_i$, then
$$
f_{\avg} \sim \frac{1}{n}\sum_{i=1}^n f(x_i)
= \frac{(ba)}{n(ba)}\sum_{i=1}^n f(x_i)
= \frac{1}{(ba)}\sum_{i=1}^n f(x_i) \Delta x,
$$
since $\ds \Delta x = \frac{ba}{n}$.
This is a Riemann sum!
$$
\frac{1}{(ba)} \lim_{n\to\infty} \sum_{i=1}^n f(x_i) \Delta x
= \frac{1}{(ba)} \int_{a}^b f(x) dx.
$$
This explains why we defined $f_{\avg}$ as above.
\begin{example}\label{ex:avg1}
What is the average value of $\sin(x)$ on the interval $[0,\pi]$?
\fig{What is the average value of $\sin(x)$?\label{fig:avg1}}{example_avg1}
%gnuplot.plot('plot [0:pi] sin(x), 2/pi', 'example_avg1')
\begin{align*}
\frac{1}{\pi  0}\int_{0}^{\pi} \sin(x)dx
&= \frac{1}{\pi  0} \Bigl[\cos(x)\Bigr]_{0}^{\pi} \\
&= \frac{1}{\pi} \Bigl[(1)  (1)\Bigr]_{0}^{\pi}
= \frac{2}{\pi}
\end{align*}
\end{example}
Observation: If you multiply both sides by $(ba)$ in
Definition~\ref{def:avg}, you see that the average value times the
length of the interval is the area, i.e., the average value gives you
a rectangle with the same area as the area under your function.
In particular, in Figure~\ref{fig:avg1} the area between
the $x$axis and $\sin(x)$ is exactly the same as the
area between the horizontal line of height $2/\pi$
and the $x$axis.
\begin{example}\label{ex:avg2}
What is the average value of $\sin(2x) e^{1\cos(2x)}$ on the
interval $[\pi,\pi]$?
\fig{What is the average value?\label{fig:avg2}}{example_avg2}
%gnuplot.plot('plot [pi:pi] sin(2*x)*exp(1cos(2*x)), 0', 'example_avg2')
\begin{align*}
\frac{1}{\pi  (\pi)}\int_{\pi}^{\pi} \sin(2x) e^{1\cos(2x)} dx
= 0 \qquad (\text{since the function is odd!})
\end{align*}
\end{example}
\begin{theorem}[Mean Value Theorem]
Suppose $f$ is a continuous function on $[a,b]$. Then there
is a number $c$ in $[a,b]$ such that $f(c) =f_{\avg}$.
\end{theorem}
This says that $f$ assumes its average value. It is a used very often
in understanding why certain statements are true. Notice that in
Examples~\ref{ex:avg1} and \ref{ex:avg2} it is just the assertion
that the graphs of the function and the horizontal line interesect.
\begin{proof}
Let $F(x) = \int_{a}^x f(t) dt$. Then $F'(x) = f(x)$.
By the mean value theorem for derivatives, there is $c\in [a,b]$
such that
$f(c) = F'(c) = (F(b)  F(a))/(ba).$
But by the fundamental theorem of calculus,
$$f(c) = \frac{F(b)  F(a)}{ba} = \frac{1}{ba}\int_{a}^{b} f(x)dx = f_{\avg}.$$
\end{proof}
\chapter{Polar Coordinates and Complex Numbers}
\section{Polar Coordinates}
Rectangular coordinates allow us to describe a point
$(x,y)$ in the plane in a different way, namely
$$
(x,y) \leftrightarrow (r, \theta),
$$
where $r$ is any real number and $\theta$ is an angle.
Polar coordinates are extremely useful, especially
when thinking about complex numbers. Note, however,
that the $(r,\theta)$ representation of a point is
very nonunique.
First, $\theta$ is not determined by the point. You could add $2\pi$
to it and get the same point:
$$\left(2,\frac{\pi}{4}\right)= \left(2,\frac{9\pi}{4}\right)
= \left(2,\frac{\pi}{4}+389\cdot 2\pi\right).
= \left(2,\frac{7\pi}{4}\right)
$$
Also that $r$ can be negative introduces further nonuniqueness:
$$
\left(1, \frac{\pi}{2}\right) = \left(1, \frac{3\pi}{2}\right).
$$
Think about this as follows: facing in the direction $3\pi/2$ and
backing up 1 meter gets you to the same point as looking in the
direction $\pi/2$ and walking forward 1 meter.
We can convert back and forth between cartesian and polar
coordinates using that
\begin{align}\label{eqn:polar1}
x&=r\cos(\theta)\\
y&=r\sin(\theta),
\end{align}
and in the other direction
\begin{align}\label{eqn:polar2}
r^2&=x^2 + y^2\\
\tan(\theta)&=\frac{y}{x}
\end{align}
(Thus $r = \pm \sqrt{x^2+y^2}$ and $\theta = \tan^{1}(y/x).$)
\begin{example}
Sketch $r=\sin(\theta)$, which
is a circle sitting on top the $x$ axis.
\fig{Graph of $r=\sin(\theta)$.}{example_polar1}
%sage: gnuplot.plot('set polar; plot [0:pi] sin(t)', 'example_polar1')
We plug in points for one period of the function we are
graphingin this case $[0,2\pi]$:
\begin{center}
\begin{tabular}{ll}\hline
0 & $\sin(0) = 0$\\
$\pi/6$ & $\sin(\pi/6) = 1/2$\\
$\pi/4$& $\sin(\pi/4) =\frac{\sqrt{2}}{2}$\\
$\pi/2$ & $\sin(\pi/2) = 1$\\
$3\pi/4$ & $\sin(3\pi/4) = \frac{\sqrt{2}}{2}$\\
$\pi$ & $\sin(\pi) = 0$\\
$\pi + \pi/6$ & $\sin(\pi+\pi/6) = 1/2$\\\hline
\end{tabular}
\end{center}
Notice it is nice to allow $r$ to be negative, so we don't have to
restrict the input. BUT it is really painful to draw this
graph by hand.
To more accurately draw the graph, let's try converting the equation to
one involving polar coordinates. This is easier if we multiply both
sides by $r$:
$$
r^2 = r\sin(\theta).
$$
Note that the new equation has the extra solution
$(r=0,\theta=\text{anything})$, so
we have to be careful not to include this point.
Now convert to cartesian coordinates using (\ref{eqn:polar1})
to obtain (\ref{eqn:polar2}):
\begin{equation}
\label{eqn:polar3}
x^2 + y^2 = y.
\end{equation}
The graph of (\ref{eqn:polar3})
is the same as that of $r=\sin(\theta)$. To confirm
this we complete the square:
\begin{align*}
x^2 + y^2 &= y\\
x^2 + y^2  y &= 0\\
x^2 + (y1/2)^2 &= 1/4\\
\end{align*}
Thus the graph of (\ref{eqn:polar3})
is a circle of radius $1/2$ centered at $(0,1/2)$.
\end{example}
Actually {\em any} polar graph\index{polar graph} of the form $r=a\sin(\theta) +
b\cos(\theta)$ is a circle, as you will see in homework problem 67
by generalizing what we just did.
\vfill
\section{Areas in Polar Coordinates}
\forclass{
Exam 1 Wed Feb 1 7:00pm in Pepper Canyon 109 (not 106!! different class there!)\\
Office hours: 2:45pm4:15pm\\
Next: Complex numbers (appendix G); complex exponentials (supplement,
which is freely available online).\\
We will {\em not } do arc length.\\
People were most confused last time by plotting curves in polar
coordinates. (1) it {\em is} tedious, but easier if you do a few and
know what they look like (just plot some points and see); there's not
much to it, except plug in values and see what you get, and (2) can
sometimes convert to a curve in $(x,y)$ coordinates, which might be
easier.
GOAL for today: Integration in the context of polar coordinates.
Get much better at working with polar coordinates!
}
% what is hardest  do it first!!!?
\begin{example}
(From Stewart.) Find the area enclosed by one leaf of the
fourleaved rose $r=\cos(2\theta)$.
\figtwo{Graph of $y=\cos(2x)$ and $r=\cos(2\theta)$}%
{example_4rosecos}{example_4rose}
%sage: gnuplot.plot('set polar; plot [0:2*pi] cos(2*t)', 'example_4rose')
%sage: gnuplot.plot('plot [0:2*pi] cos(2*x)', 'example_4rosecos')
To find the area using the methods we know so far, we
would need to find a function $y=f(x)$ that gives
the height of the leaf.
Multiplying both sides of the equation $r=\cos(2\theta)$ by $r$ yields
$$
r^2 = r\cos(2\theta) = r(\cos^2 \theta  \sin^2 \theta) =
\frac{1}{r}((r\cos\theta)^2  (r\sin\theta)^2).
$$
Because $r^2 = x^2 + y^2$ and $x=r\cos(\theta)$ and $y=r\sin(\theta)$,
we have
$$
x^2 + y^2 = \frac{1}{\sqrt{x^2+y^2}} (x^2  y^2).
$$
Solving for $y$ is a crazy mess, and then integrating? It seems
impossible!
\end{example}
But it isn't... if we remember the basic idea of calculus: subdivide
and take a limit.
[[Draw a section of a curve $r=f(\theta)$ for $\theta$ in some interval
$[a,b]$, and shade in the area of the arc.]]
\begin{remark}
We will almost {\em never} talk about angles in degreeswe'll
almost always use radians.
\end{remark}
We know how to compute the area of a sector, i.e., piece of a circle
with angle $\theta$. [[draw picture]]. This is the basic polar region.
The area is
$$
A = \text{(fraction of the circle)}\cdot \text{(area of circle)}
= \left(\frac{\theta}{2\pi}\right) \cdot \pi r^2 = \frac{1}{2} r^2 \theta.
$$
We now imitate what we did before with Riemann sums. We chop
up, approximate, and take a limit.
Break the interval of angles from $a$ to $b$ into $n$ subintervals.
Choose $\theta_i^*$ in each interval.
The area of each slice is approximately
$(1/2) f(\theta_i^*)^2 \theta_i^2$.
Thus
$$
A = \text{Area of the shaded region}
\sim \sum_{i=1}^n \frac{1}{2} f(\theta_i^*)^2 \Delta(\theta).
$$
Taking the limit, we see that
$$
A = \lim_{n\to\infty} \sum_{i=1}^n \frac{1}{2} f(\theta_i^*)^2 \Delta(\theta)
= \frac{1}{2} \cdot \int_{a}^{b} f(\theta)^2 d\theta.
$$
Amazing! By understanding the definition of Riemann sum, we've
derived a formula for areas swept out by a polar graph. But does it work in
practice?
Let's revisit our clover leaf.
\subsection{Examples}
\begin{example}
Find the area enclosed by one leaf of the
fourleaved rose $r=\cos(2\theta)$.
%sage: gnuplot.plot('set polar; plot [0:2*pi] sin(2*t)', 'example_4rose')
\noindent Solution: We need the boundaries of integration. Start at
$\theta=\pi/4$ and go to $\theta=\pi/4$. As a check, note that
$\cos((\pi/4) \cdot 2) = 0 = \cos((\pi/4) \cdot 2).$ We evaluate
\begin{align*}
\frac{1}{2} \cdot \int_{\pi/4}^{\pi/4} \cos(2\theta)^2 d\theta
&= \int_{0}^{\pi/4} \cos(2\theta)^2 d\theta \qquad \text{(even function)}\\
&= \frac{1}{2} \int_{0}^{\pi/4} (1+\cos(4\theta)) d\theta \\
&= \frac{1}{2} \left[ \theta + \frac{1}{4}\cdot \sin(4\theta)\right]_{0}^{\pi/4} \\
&= \frac{\pi}{8}.
\end{align*}
We used that
\begin{equation}
\label{eqn:costwosintwo}
\cos^2(x) = (1+\cos(2x))/2
\qquad\text{and}\qquad
\sin^2(x) = (1\sin(2x))/2,
\end{equation}
which follow from
$$\cos(2x) = \cos^2(x)  \sin^2(x)
= 2\cos^2(x)  1 = 12\sin^2(x).$$
\end{example}
\begin{example}
Find area of region inside the curve $r=3\cos(\theta)$ and
outside the cardiod curve $r=1+\cos(\theta)$.
\fig{Graph of $r=3\cos(\theta)$ and $r=1+\cos(\theta)$}{example_card}
%sage: gnuplot.plot('set polar; plot [0:2*pi] 3*cos(t), 1+cos(t)', 'example_card')
\noindent Solution: This is the same as before. It's the difference of two areas.
Figure out the limits, which are where the curves intersect,
i.e., the $\theta$ such that
$$3\cos(\theta) = 1+\cos(\theta).$$
Solving, $2\cos(\theta) = 1$, so $\cos(\theta) = 1/2$, hence
$\theta = \pi/3$ and $\theta = \pi/3$.
Thus the area is
\begin{align*}
A &= \frac{1}{2} \int_{\pi/3}^{\pi/3}
(3\cos(\theta))^2  (1+\cos(\theta))^2 d\theta\\
&= \int_{0}^{\pi/3}
(3\cos(\theta))^2  (1+\cos(\theta))^2 d\theta \qquad\text{(even function)}\\
&= \int_{0}^{\pi/3}
(8 \cos^2(\theta) 2 \cos(\theta) 1) d\theta\\
&= \int_{0}^{\pi/3}
\left(8 \cdot \frac{1}{2}\left(1+\cos(2\theta)\right) 2 \cos(\theta) 1\right) d\theta\\
&= \int_{0}^{\pi/3}
3 + 4 \cos(2\theta)  2\cos(\theta) d\theta\\
&= \Bigl[3\theta + 2 \sin(2\theta)  2\sin(\theta)\Bigr]_{0}^{\pi/3}\\
&= \pi + 2\cdot \sqrt{\frac{3}{2}}  2\sqrt{\frac{3}{2}}  0  2\cdot 0  2\cdot 0\\
&= \pi
\end{align*}
\end{example}
\section{Complex Numbers}
A complex number is an expression of the form
$a+bi$, where $a$ and $b$ are real numbers,
and $i^2=1$. We add and multiply
complex numbers as follows:
\begin{align*}
(a+bi) + (c+di) &= (a+c) + (b+d)i\\
(a+bi) \cdot (c+di) &= (acbd) + (ad+bc)i
\end{align*}
The complex conjugate of a complex number is
$$
\overline{a+bi} = abi.
$$
Note that
$$
(a+bi) (\overline{a+bi}) = a^2 + b^2
$$
is a real number (has no complex part).
If $c+di \neq 0$, then
$$
\frac{a+bi}{c+di} = \frac{(a+bi)(cdi)}{c^2 + d^2}
= \frac{1}{c^2+d^2}((ac+bd) + (bcad)i).
$$
\begin{example}
$(12i)(83i) = 2  19i$
and $1/(1+i) = (1i) / 2 = 1/2  (1/2) i$.
\end{example}
Complex numbers are incredibly useful in providing better
ways to understand ideas in calculus, and more generally
in many applications (e.g., electrical engineering,
quantum mechanics, fractals, etc.). For example,
\begin{itemize}
\item Every polynomial $f(x)$ {\bf factors} as a product
of linear factors $(x\alpha)$, if we allow the
$\alpha$'s in the factorization to be complex numbers.
For example, $$f(x) = x^2+1 = (xi)(x+i).$$
This will provide an easier to use variant of
the ``partial fractions'' integration technique,
which we will see later.
\item Complex numbers are in {\bf correspondence} with
points in the plane via $(x,y) \leftrightarrow x+iy$.
Via this correspondence we obtain a way to add and
{\em multiply} points in the plane.
\item Similarly, points in {\bf polar coordinates}
correspond to complex numbers:
$$
(r,\theta) \leftrightarrow r (\cos(\theta) + i \sin(\theta)).
$$
\item Complex numbers provide a very nice way
to remember and {\bf understand trig identities}.
\end{itemize}
\subsection{Polar Form}
The \defn{polar form} of a complex number $x+iy$
is $r(\cos(\theta) + i\sin(\theta))$ where $(r,\theta)$
are any choice of polar coordinates that represent the
point $(x,y)$ in rectangular coordinates.
Recall that you can find the polar form of a point
using that
$$
r=\sqrt{x^2+y^2}\quad\text{ and }\quad\theta=\tan^{1}(y/x).
$$
NOTE: The ``existence'' of complex numbers wasn't generally accepted
until people got used to a geometric interpretation of them.
\begin{example}
Find the polar form of $1+i$.
\newline{\em Solution.}
We have $r=\sqrt{2}$, so
$$
1+i = \sqrt{2}\left( \frac{1}{\sqrt{2}} + \frac{i}{\sqrt{2}}\right)
= \sqrt{2}\left( \cos(\pi/4) + i \sin(\pi/4) \right).
$$
\end{example}
\begin{example}
Find the polar form of $\sqrt{3}  i$.
\newline{\em Solution.}
We have $r=\sqrt{3 + 1} = 2$, so
$$
\sqrt{3}  i = 2 \left( \frac{\sqrt{3}}{2} + i\frac{1}{2} \right)
= 2 \left( \cos(\pi/6) +i \sin(\pi/6)\right)
$$
[[A picture is useful here.]]
\end{example}
{\em Finding the polar form of a complex number is exactly
the same problem as finding polar coordinates of a point
in rectangular coordinates. The only hard part is figuring
out what $\theta$ is.}
If we write complex numbers in rectangular form, their
sum is easy to compute:
$$
(a+bi) + (c+di) = (a+c) + (b+d)i
$$
The beauty of polar coordinates is that if we write
two complex numbers in polar form, then
their \emph{product} is very easy to compute:
$$
r_1 (\cos(\theta_1) + i \sin(\theta_1))
\cdot r_2 (\cos(\theta_2) + i \sin(\theta_2))
=
(r_1 r_2) (\cos(\theta_1+\theta_2) + i \sin(\theta_1 + \theta_2)).
$$
The magnitudes multiply and the angles add.
The above formula is true because of the double angle identities
for $\sin$ and $\cos$ (and it is how I remember those formulas!).
\begin{align*}
(\cos(\theta_1) &+ i \sin(\theta_1))
\cdot (\cos(\theta_2) + i \sin(\theta_2)) \\
&= (\cos(\theta_1)\cos(\theta_2)  \sin(\theta_1)\sin(\theta_2))
+ i (\sin(\theta_1)\cos(\theta_2) + \cos(\theta_1)\sin(\theta_2)).
\end{align*}
For example, the power of a singular complex number in polar
form is easy to compute; just power the $r$ and multiply
the angle.
\begin{theorem}[De Moivre's]
For any integer $n$ we have
$$
(r (\cos(\theta) + i\sin(\theta)))^n
= r^n (\cos(n \theta) + i\sin(n \theta)).
$$
\end{theorem}
\begin{example}
Compute $(1+i)^{2006}$.
\newline{\em Solution.}
We have
\begin{align*}
(1+i)^{2006} &=
(\sqrt{2}\left( \cos(\pi/4) + i \sin(\pi/4) \right))^{2006}\\
&= \sqrt{2}^{2006} \left( \cos(2006 \pi/4) + i \sin(2006 \pi/4) \right))\\
&= 2^{1003} \left( \cos(3\pi/2) + i \sin(3\pi/2) \right))\\
&= 2^{1003} i
\end{align*}
To get $ \cos(2006 \pi/4) = \cos(3\pi/2)$ we use that
$2006/4 = 501.5$, so by periodicity of cosine, we have
$$
\cos(2006 \pi/4) = \cos((501.5)\pi  250(2\pi)) = \cos(1.5\pi) = \cos(3\pi/2).
$$
\end{example}
\forclass{
EXAM 1: Wednesday 7:007:50pm in Pepper Canyon 109 (!)\\
Today: Supplement 1 (get online; also homework online)\\
Wednesday: Review\\
Bulletin board, online chat, directory, etc.  see main course website.\\
Review day  I will prepare no LECTURE; instead I will answer questions.\\
Your {\em job} is to have your most urgent questions
ready to go!\\
Office hours moved: NOT Tue 111 (since nobody ever comes then and I'll
be at a conference); instead I'll be in my office
to answer questions WED 1:304pm, and after class on WED too.\\
Office: AP\&M 5111
}
\forclass{
Quick review:
Given a point $(x,y)$ in the plane, we can also view
it as $x+iy$ or in polar form as
$r(\cos(\theta) + i \sin(\theta))$. Polar form is great
since it's good for multiplication, powering, and for extracting roots:
$$r_1(\cos(\theta_1) + i \sin(\theta_1))
r_2(\cos(\theta_2) + i \sin(\theta_2))
= (r_1 r_2) (\cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)).
$$
(If you divide, you subtract the angle.) The point is that the polar
form {\em works better} with multiplication than the rectangular form.
\begin{theorem}[De Moivre's]
For any integer $n$ we have
$$
(r (\cos(\theta) + i\sin(\theta)))^n
= r^n (\cos(n \theta) + i\sin(n \theta)).
$$
\end{theorem}
}
Since we know how to raise a complex number in polar form
to the $n$th power, we can find all numbers with a given
power, hence find the $n$th roots of a complex number.
\begin{proposition}[$n$th roots]
A complex number $z=r(\cos(\theta) + i\sin(\theta))$ has
$n$ distinct $n$th roots:
$$
r^{1/n}\left(\cos\left(\frac{\theta+2\pi k}{n}\right)
+ i\sin\left(\frac{\theta+2\pi k}{n}\right) \right),
$$
for $k=0,1,\ldots, n1$. Here $r^{1/n}$ is the real
positive $n$th root of $r$.
\end{proposition}
As a doublecheck, note that by De Moivre, each
number listed in the proposition has $n$th power equal
to $z$.
An application of De Moivre is to computing
$\sin(n\theta)$ and $\cos(n\theta)$ in terms of $\sin(\theta)$ and
$\cos(\theta)$. For example,
\begin{align*}
\cos(3 \theta) + i\sin(3 \theta) &=
(\cos(\theta) + i\sin(\theta))^3\\
&=(\cos(\theta)^3  3\cos{}(\theta)\sin(\theta)^2) +
i(3\cos(\theta)^2\sin(\theta)  \sin(\theta)^3)
\end{align*}
Equate real and imaginary parts to get formulas
for $\cos(3\theta)$ and $\sin(3\theta)$.
In the next section we will
discuss going in the other direction, i.e., writing
powers of $\sin$ and $\cos$ in terms of $\sin$ and
cosine.
\begin{example}
Find the cube roots of $2$.\\
{\bf Solution.} Write $2$ in polar form as
$$
2 = 2 (\cos(0) + i \sin(0)).
$$
Then the three cube roots of $2$ are
$$
2^{1/3} (\cos(2\pi k/3) + i\sin(2\pi k/3)),
$$
for $k=0,1,2$.
I.e.,
$$
2^{1/3}, \quad 2^{1/3}(1/2 + i \sqrt{3}/2),
\quad 2^{1/3}(1/2  i \sqrt{3}/2).
$$
\end{example}
\section{Complex Exponentials and Trig Identities}
Recall that
\[\label{eqn:angleadd}
r_1(\cos(\theta_1) + i \sin(\theta_1))
r_2(\cos(\theta_2) + i \sin(\theta_2))
= (r_1 r_2) (\cos(\theta_1 + \theta_2) + i \sin(\theta_1 + \theta_2)).
\]
The angles add. You've seen something similar before:
$$
e^a e^b = a^{a+b}.
$$
This connection between exponentiation and (\ref{eqn:angleadd})
gives us an idea!
If $z=x+iy$ is a complex number, {\em define}
$$
e^z = e^x(\cos(y) + i\sin(y)).
$$
We have just written polar coordinates in another form. It's
a shorthand for the polar form of a complex number:
$$
r(\cos(\theta) + i\sin(\theta)) = re^{i\theta}.
$$
\begin{theorem}
If $z_1$, $z_2$ are two complex numbers, then
$$
e^{z_1} e^{z_2} = e^{z_1+z_2}
$$
\end{theorem}
\begin{proof}
\begin{align*}
e^{z_1} e^{z_2} &= e^{a_1}(\cos(b_1) + i\sin(b_1)) \cdot
e^{a_2}(\cos(b_2) + i\sin(b_2)) \\
&= e^{a_1+a_2}(\cos(b_1+b_2) + i\sin(b_1 + b_2))\\
&= e^{z_1+z_2}.
\end{align*}
Here we have just used (\ref{eqn:angleadd}).
\end{proof}
The following theorem is amazing, since it involves calculus.
\begin{theorem}\label{thm:ediff}
If $w$ is a complex number, then
$$
\frac{d}{dx} e^{w x} = w e^{wx},
$$
for $x$ real. In fact, this is even true for
$x$ a complex variable (but we haven't defined differentiation
for complex variables yet).
\end{theorem}
\begin{proof}
Write $w=a+bi$.
\begin{align*}
\frac{d}{dx} e^{w x} &=
\frac{d}{dx} e^{ax +bix} \\
&= \frac{d}{dx} (e^{ax} (\cos(bx) + i \sin(bx)))\\
&= \frac{d}{dx} (e^{ax} \cos(bx) + i e^{ax}\sin(bx))\\
&= \frac{d}{dx} (e^{ax} \cos(bx)) + i \frac{d}{dx} (e^{ax}\sin(bx))\\
\end{align*}
Now we use the product rule to get
\begin{align*}
\frac{d}{dx} (e^{ax} \cos(bx)) &+ i \frac{d}{dx} (e^{ax}\sin(bx))
\\
&= ae^{ax} \cos(bx)  be^{ax}\sin(bx)
+ i (a e^{ax}\sin(bx)
+ b e^{ax}\cos(bx))\\
&= e^{ax}(a \cos(bx)  b\sin(bx)
+ i (a \sin(bx)
+ b \cos(bx))\\
\end{align*}
On the other hand,
\begin{align*}
w e^{wx} &= (a+bi) e^{ax + bxi} \\
&=(a+bi) e^{ax} (\cos(bx) + i \sin(bx))\\
&=e^{ax}(a+bi)(\cos(bx) + i \sin(bx))\\
&= e^{ax}((a\cos(bx)  b\sin(bx)) + i ( a\sin(bx)) + b\cos(bx))
\end{align*}
Wow!! We did it!
\end{proof}
That Theorem~\ref{thm:ediff} is true is pretty amazing. It's
what really gets complex analysis going.
\begin{example}
Here's another fun fact: $e^{i\pi} + 1=0.$\\
{\em Solution.} By definition,
have $e^{i\pi} = \cos(\pi) + i \sin(\pi)
= 1 + i 0 = 1$.
\end{example}
\subsection{Trigonometry and Complex Exponentials}
Amazingly, trig functions can also be expressed back in terms of the
complex exponential. Then {\em everything} involving trig functions
can be transformed into something involving the exponential function.
This is very surprising.
In order to easily obtain trig identities like
$\cos(x)^2 + \sin(x)^2 = 1$, let's write $\cos(x)$
and $\sin(x)$ as complex exponentials.
From the definitions we have
$$
e^{ix} = \cos(x) + i\sin(x),
$$
so
$$
e^{ix} = \cos(x) + i\sin(x) = \cos(x)  i\sin(x).
$$
Adding these two equations and dividing by 2 yields
a formula for $\cos(x)$, and subtracting and dividing
by $2i$ gives a formula for $\sin(x)$:
\[\label{eqn:sincose}
\cos(x) = \frac{e^{ix} + e^{ix}}{2}
\qquad
\sin(x) = \frac{e^{ix}  e^{ix}}{2i}.
\]
We can now derive trig identities. For example,
\begin{align*}
\sin(2x) &= \frac{e^{i2x}  e^{i2x}}{2i} \\
&= \frac{(e^{ix}  e^{ix}) (e^{ix} + e^{ix})}{2i}\\
&= 2 \frac{e^{ix}  e^{ix}}{2i} \frac{e^{ix} + e^{ix}}{2}
= 2 \sin(x) \cos(x).
\end{align*}
I'm unimpressed, given that you can get this much
more directly using
$$
(\cos(2x) + i\sin(2x)) = (\cos(x) + i\sin(x))^2
= \cos^2(x)  \sin^2(x) + i2\cos(x)\sin(x),
$$
and equating imaginary parts.
But there are more interesting examples.
Next we verify that (\ref{eqn:sincose}) implies
that $\cos(x)^2 + \sin(x)^2 = 1$. We have
\begin{align*}
4(\cos(x)^2 + \sin(x)^2) &=
\left(e^{ix} + e^{ix}\right)^2
+ \left(\frac{e^{ix}  e^{ix}}{i}\right)^2\\
&=e^{2ix} + 2 + e^{2ix}  (e^{2ix}  2 + e^{2ix}) = 4.
\end{align*}
The equality just appears as a followyournose algebraic
calculation.
\begin{example}
Compute $\sin(x)^3$ as a sum of sines and
cosines with no powers.\\
\fig{What is $\sin(x)^3$?\label{fig:sin3}}{example_sin3}
{\em Solution.} We use (\ref{eqn:sincose}):
\begin{align*}
\sin(x)^3 &= \left( \frac{e^{ix}  e^{ix}}{2i} \right)^3 \\
&= \left( \frac{1}{2i}\right)^3
(e^{ix}  e^{ix})^3 \\
&= \left( \frac{1}{2i}\right)^3
(e^{ix}  e^{ix}) (e^{ix}  e^{ix})(e^{ix}  e^{ix})\\
&= \left( \frac{1}{2i}\right)^3
(e^{ix}  e^{ix})(e^{2ix}  2 + e^{2ix}) \\
&= \left( \frac{1}{2i}\right)^3
(e^{3ix}  2e^{ix} + e^{ix}  e^{ix} +2e^{ix}  e^{3ix})\\
&= \left( \frac{1}{2i}\right)^3
((e^{3ix}  e^{3ix})  3(e^{ix}e^{ix} ))\\
&=\left(\frac{1}{4}\right) \left[
\frac{e^{3ix}  e^{3ix}}{2i}  3\cdot \frac{e^{ix} e^{ix}}{2i}\right]\\
&= \frac{3\sin(x)  \sin(3x)}{4}.
\end{align*}
\end{example}
\chapter{Integration Techniques}
\section{Integration By Parts}
\forclass{
Quiz next Friday\\
Today: 7.1: integration by parts\\
Next: 7.2: trigonometric integrals
and supplement 2functions with complex values\\
Exams: Average 19.68 (out of 34).\\
Tetrahedron problem:
$$
\int_{0}^h \frac{1}{2}
\left(\frac{b}{h}x + b\right) \left(\frac{a}{h} x + a\right) dx
= \dots = \frac{abh}{6}.
$$
(The function that gives the base of the triangle
cross section is a linear function that is $b$ at $x=0$
and $0$ at $x=h$, which allows you to easily determine
it without thinking about geometry.)
}
\begin{center}
\begin{tabular}{ll}\hline
{\bf Differentiation} & {\bf Integration}\\\hline
Chain Rule & Substitution\\
Product Rule & Integration by Parts\\\hline
\end{tabular}
\end{center}
The product rule is that
$$
\frac{d}{dx}\left[ f(x) g(x)\right] = f(x) g'(x) + f'(x) g(x).
$$
Integrating both sides leads to a new fundamental technique
for integration:
\begin{equation}\label{eqn:intpartbs}
f(x)g(x) = \int f(x) g'(x)dx + \int g(x) f'(x) dx.
\end{equation}
Now rewrite (\ref{eqn:intpartbs}) as
$$
\int f(x) g'(x) dx = f(x)g(x)  \int g(x) f'(x) dx.
$$
Shorthand notation:
\begin{align*}
u &= f(x) &du &= f'(x)dx\\
v &= g(x) &dv &= g'(x)dx
\end{align*}
Then have
$$
\int u dv = uv  \int v du.
$$
{\bf So what! But what's the big deal?}
Integration by parts is a fundamental technique of integration. It
is also a key step in the proof of many theorems
in calculus.
\begin{example}
$\int x \cos(x) dx$.
\begin{align*}
u &= x & v &= \sin(x)\\
du &= dx & dv &= \cos(x)dx
\end{align*}
We get
$$\int x \cos(x) dx
= x\sin(x)  \int \sin(x) dx
= x\sin(x) + \cos(x) + c.
$$
``Did this do anything for us?'' Indeed, it did.
Wait a minutehow did we know to pick $u=x$
and $v=\sin(x)$? We could have picked them
other way around and still written down true statements.
Let's try that:
\begin{align*}
u &= \cos(x) & v &= \frac{1}{2}x^2\\
du &= \sin(x) dx & dv &= xdx
\end{align*}
$$\int x \cos(x) dx
= \frac{1}{2} x \cos(x) + \int \frac{1}{2}x^2 \sin(x) dx.
$$
Did this help!? NO. Integrating $x^2\sin(x)$ is harder
than integrating $x\cos(x)$. This formula is completely correct,
but is hampered by being useless in this case.
So how {\em do} you pick them?
\begin{quote} Choose the $u$ so that
when you differentiate it you get something {\em simpler};
when you pick $dv$, try to choose something whose
antiderivative is {\em simpler}.
\end{quote}
Sometimes you have to try more than once. But with
a good eraser nodoby will know that it took you two
tries.
\begin{question}
{\em If integration by parts once is good, then sometimes twice is
even better?} Yes, in some examples (see Example~\ref{ex:parts2}).
But in the above example, you just undo what you did and basically
end up where you started, or you get something even worse.
\end{question}
\end{example}
\begin{example}
Compute $\ds \int_0^{\frac{1}{2}} \sin^{1}(x) dx.$
Two points:
\begin{enumerate}
\item It's a definite integral.
\item There is only one function; would you think to do
integration by parts? But it is a product; it just
doesn't look like it at first glance.
\end{enumerate}
Your choice is made for you, since we'd be {\em back where
we started} if we put $dv = \sin^{1}(x)dx$.
\begin{align*}
u &= \sin^{1}(x) & v &=x \\
du &= \frac{1}{\sqrt{1x^2}} &dv &= dx
\end{align*}
We get
$$
\int_0^{\frac{1}{2}} \sin^{1}(x) dx
= \left[x\sin^{1}(x)\right]_0^{\frac{1}{2}}
 \int_0^{\frac{1}{2}} \frac{x}{\sqrt{1x^2}} dx.
$$
Now we use substitution with $w=1x^2$, $dw = 2xdx$, hence
$xdx = \frac{1}{2} dw$.
$$
\int_0^{\frac{1}{2}} \frac{x}{\sqrt{1x^2}} dx
= \frac{1}{2} \int w^{\frac{1}{2}} dw
= w^{\frac{1}{2}} + c = \sqrt{1x^2} + c.
$$
Hence
$$
\int_0^{\frac{1}{2}} \sin^{1}(x) dx
= \left[x\sin^{1}(x)\right]_0^{\frac{1}{2}}
+ \left[ \sqrt{1x^2}\right]^{\frac{1}{2}}_0
= \frac{\pi}{12} + \frac{\sqrt{3}}{2}  1
$$
{\em But shouldn't we change the limits because we did a substitution?}
(No, since we computed the indefinite integral and put it back;
this time we did the other option.)
{\em Is there another way to do this?} I don't know. But for
any integral, there might be several different techniques.
If you can think of any other way to guess an antiderivative,
do it; you can always differentiate as a check.
Note: Integration by parts is tailored toward doing indefinite
integrals.
\end{example}
\begin{example}
This example illustrates how to use integration by parts twice.
We compute
$$
\int x^2 e^{2x} dx
$$
\begin{align*}
u &= x^2 & v &= \frac{1}{2} e^{2x}\\
du &= 2x dx & dv &= e^{2x}dx
\end{align*}
We have
$$
\int x^2 e^{2x} dx
= \frac{1}{2}x^2 e^{2x} + \int x e^{2x} dx.
$$
Did this help? It helped, but it did {\em not} finish
the integral off. However, we can deal with the remaining
integral, again using integration by parts.
If you do it twice, you {\em what to keep going in the same
direction}. Do not switch your choice, or you'll undo
what you just did.
\begin{align*}
u &= x & v &= \frac{1}{2} e^{2x}\\
du &= dx & dv &= e^{2x}dx
\end{align*}
$$
\int x e^{2x} dx =
 \frac{1}{2} x e^{2x}
+ \frac{1}{2} \int e^{2x}dx
=
 \frac{1}{2} x e^{2x}
 \frac{1}{4} e^{2x} + c.
$$
Now putting this above, we have
$$
\int x^2 e^{2x} dx
= \frac{1}{2}x^2 e^{2x}
 \frac{1}{2} x e^{2x}
 \frac{1}{4} e^{2x} + c
= \frac{1}{4} e^{2x}(2x^2 + 2x + 1) + c.
$$
\end{example}
Do you think you might have to do integration by parts three times?
What if it were $\int x^3 e^{2x} dx$? Grrr  you'd have to do it
three times.
\begin{example}\label{ex:parts2}
{\em Compute $\ds \int e^x \cos(x) dx$.}
Which should be $u$ and which should be $v$? Taking the derivatives
of each type of function does not change the type. As a practical
matter, it doesn't matter. Which would you {\em prefer} to find the
antiderivative of? (Both choices work, as long as you keep going in
the same direction when you do the second step.)
\begin{align*}
u &= \cos(x) & v &= e^x\\
du &= \sin(x)dx & dv &= e^xdx
\end{align*}
We get
$$
\int e^x \cos(x) dx
= e^x \cos(x) + \int e^x \sin(x) dx.
$$
We have to do it again. This time we choose (going in the
{\em same direction}):
\begin{align*}
u &= \sin(x) & v &= e^x\\
du &= \cos(x) dx & dv &= e^xdx
\end{align*}
We get
$$
\int e^x \cos(x) dx
= e^x \cos(x) + e^x \sin(x)  \int e^x \cos(x) dx.
$$
Did we get anywhere? Yes! No! First impression: all this work,
and we're back where we started from! Yuck. Clearly we don't want
to integrate by parts yet again. {\bf BUT.}
Notice the {\em minus} sign in front of $\int e^x \cos(x) dx$;
You can add the integral to both sides and get
$$
2 \int e^x \cos(dx) = e^x \cos(x) + e^x \sin(x) + c.
$$
Hence
$$
\int e^x \cos(dx) = \frac{1}{2} e^x (\cos(x) + \sin(x)) + c.
$$
\end{example}
\section{Trigonometric Integrals}\label{sec:trigint}
\forclass{
Friday: Quiz 2\\
Next: Trig subst.
}
\boxit{
\begin{equation}
\label{eqn:costwosintwob}
\cos^2(x) =\frac{1+\cos(2x)}{2}
\qquad\text{and}\qquad
\sin^2(x) = \frac{1\sin(2x)}{2}.
\end{equation}
}
\begin{example}
Compute $\int \sin^3(x)dx$.\\
We use trig. identities and compute the integral directly as follows:
\begin{align*}
\int \sin^3(x)dx &= \int \sin^2(x) \sin(x) dx \\
&= \int [1\cos^2(x)]\sin(x) dx\\
&= \cos(x) + \frac{1}{3}\cos^3(x) + c \qquad\text{(substitution }u=\cos(x)\text{)}
\end{align*}
This always works for odd powers of $\sin(x)$.
\end{example}
\begin{example}
What about {\em even} powers?!
Compute $\int \sin^4(x)dx$.
We have
\begin{align*}
\sin^4(x) &= [\sin^2(x)]^2 \\
&= \left[\frac{1\cos(2x)}{2}\right]^2 \\
&= \frac{1}{4}\cdot \left[
1  2\cos(2x) + \cos^2(2x)\right]\\
&= \frac{1}{4}\left[
1  2\cos(2x) + \frac{1}{2} + \frac{1}{2} \cos(4x)
\right]
\end{align*}
Thus
\begin{align*}
\int \sin^4(x) dx &= \int \left[\frac{3}{8}  \frac{1}{2} \cos(2x)
+ \frac{1}{8} \cos(4x)\right] dx\\
&= \frac{3}{8} x  \frac{1}{4} \sin(2x)
+ \frac{1}{32} \sin(4x) + c.
\end{align*}
{\bf Key Trick:} Realize that we should write
$\sin^4(x)$ as $(\sin^2(x))^2$. The rest is straightforward.
\end{example}
\begin{example}
This example illustrates a method for computing integrals of trig
functions that doesn't require knowing any trig identities at all or
any tricks. It is very tedious though. We compute $\int
\sin^3(x)dx$ using \defn{complex exponentials}. We have
\[\label{eqn:sincose2}
\cos(x) = \frac{e^{ix} + e^{ix}}{2}
\qquad
\sin(x) = \frac{e^{ix}  e^{ix}}{2i}.
\]
hence
\begin{align*}
\int \sin^3(x)dx &= \int \left(\frac{e^{ix}  e^{ix}}{2i}\right)^3 dx \\
&= \frac{1}{8i} \int (e^{ix}  e^{ix})^3 dx\\
&= \frac{1}{8i} \int (e^{ix}  e^{ix})(e^{ix}  e^{ix})(e^{ix}  e^{ix})dx\\
&= \frac{1}{8i} \int (e^{2ix} 2 + e^{2ix})(e^{ix}  e^{ix})dx\\
&= \frac{1}{8i} \int e^{3ix}  e^{ix}  2e^{ix} + 2e^{ix}
+ e^{ix}  e^{3ix} dx\\
&= \frac{1}{8i} \int e^{3ix} e^{3ix} + 3e^{ix}  3e^{ix} dx\\
&= \frac{1}{8i} \left(\frac{e^{3ix}}{3i}  \frac{e^{3ix}}{3i}
+ \frac{3e^{ix}}{i}  \frac{3e^{ix}}{i}\right) + c\\
&= \frac{1}{4} \left( \frac{1}{3} \cos(3x)  3\cos(x)\right) + c\\
&= \frac{1}{12} \cos(3x)  \frac{3}{4}\cos(x) + c
\end{align*}
The answer looks totally different, but is in fact the same function.
\end{example}
Here are some more identities that we'll use in illustrating some tricks
below.
\boxit{
$$
\frac{d}{dx} \tan(x) = \sec^2(x)
$$
and
$$
\frac{d}{dx} \sec(x) = \sec(x)\tan(x).
$$
Also,
$$
1 + \tan^2(x) = \sec^2(x).
$$
}
\begin{example}
Compute $\int \tan^3(x) dx$.
We have
\begin{align*}
\int \tan^3(x) dx &= \int\tan(x) \tan^2(x) dx\\
&= \int \tan(x)\left[\sec^2(x)  1\right]dx\\
&= \int \tan(x)\sec^2(x)dx  \int\tan(x) dx\\
&= \frac{1}{2} \tan^2(x)  \ln\sec(x) + c
\end{align*}
Here we used the substitution $u=\tan(x)$, so $du = \sec^2(x) dx$,
so
$$
\int \tan(x)\sec^2(x)dx
= \int u du = \frac{1}{2}u^2 + c = \frac{1}{2}\tan^2(x) + c.
$$
Also, with the substitution $u=\cos(x)$ and $du=\sin(x)dx$
we get
$$
\int \tan(x)dx
= \int \frac{\sin(x)}{\cos(x)} dx
= \int \frac{1}{u} du = \lnu + c = \ln\sec(x) + c.
$$
{\bf Key trick:} Write
$\tan^3(x)$ as $\tan(x)\tan^2(x)$.
\end{example}
\begin{example}
Here's one that combines trig identities with the funnest
variant of integration by parts. {\em Compute $\int \sec^3(x) dx$.}\\
We have
$$
\int \sec^3(x)dx = \int \sec(x) \sec^2(x)dx.
$$
Let's use integration by parts.
\begin{align*}
u &= \sec(x) & v &= \tan(x)\\
du &= \sec(x)\tan(x) dx & dv &= \sec^2(x)dx
\end{align*}
The above integral becomes
\begin{align*}
\int \sec(x) \sec^2(x)dx &=
\sec(x) \tan(x)  \int \sec(x) \tan^2(x) dx\\
&=\sec(x) \tan(x)  \int \sec(x)[\sec^2(x)  1] dx\\
&= \sec(x) \tan(x)  \int \sec^3(x) + \int \sec(x) dx\\
&= \sec(x) \tan(x)  \int \sec^3(x) + \ln\sec(x) + \tan(x)\\
\end{align*}
This is familiar. Solve for $\int \sec^3(x)$. We get
$$
\int \sec^3(x) dx = \frac{1}{2}\Bigl[
\sec(x) \tan(x) + \ln\sec(x) + \tan(x) \Bigr] + c
$$
\end{example}
\subsection{Some Remarks on Using ComplexValued Functions}
Consider functions of the form
\begin{equation}\label{eqn:ri}
f(x) + i g(x),
\end{equation}
where $x$ is a real variable and $f,g$ are realvalued
functions. For example,
$$
e^{ix} = \cos(x) + i \sin(x).
$$
We observed before that
$$
\frac{d}{dx} e^{wx} = w e^{wx}
$$
hence
$$
\int e^{wx}dx = \frac{1}{w} e^{wx} + c.
$$
For example, writing it $e^{ix}$ as in (\ref{eqn:ri}),
we have
\begin{align*}
\int e^{ix} dx &= \int \cos(x) dx + i \int \sin(x) dx\\
&= \sin(x)  i \cos(x) + c\\
&= i (\cos(x) + i \sin(x)) + c \\
& = \frac{1}{i} e^{ix}.
\end{align*}
%\forclass{Correct something in the supplement as
%an example... (look up what exactly was wrong)}
\begin{example}
Let's compute $\ds \int \frac{1}{x+i} dx$. Wouldn't it be nice if
we could just write $\ln (x+i) + c$? This is useless for us though,
since we haven't even {\em defined} $\ln (x+i)$!
However, we can ``rationalize the denominator'' by writing
\begin{align*}
\int \frac{1}{x+i} dx &= \int \frac{1}{x+i}\cdot \frac{xi}{xi} dx \\
&= \int \frac{xi}{x^2+1} dx\\
&= \int \frac{x}{x^2+1} dx i \int \frac{1}{x^2+1} dx\\
&= \frac{1}{2}\lnx^2+1  i \tan^{1}(x) + c\\
\end{align*}
This informs how we would define $\ln(z)$ for $z$ complex (which
you'll do if you take a course in complex analysis).
{\bf Key trick:} Get the $i$ in the numerator.
\end{example}
The next example illustrates an alternative to the method
of Section~\ref{sec:trigint}.
\begin{example}
\begin{align*}
\int \sin(5x) \cos(3x) dx
&= \int\left(\frac{e^{i5x}  e^{i5x}}{2i}\right)\left(
\cdot \frac{e^{i5x} + e^{i5x}}{2}\right)dx\\
&= \frac{1}{4i} \int \left( e^{i8x}  e^{i8x} + e^{i2x}  e^{i2x}\right)dx + c\\
&= \frac{1}{4i} \left( \frac{e^{i8x}}{8i} + \frac{e^{i8x}}{8i}
+ \frac{e^{i2x}}{2i} + \frac{e^{i2x}}{2i}\right) + c\\
&= \frac{1}{4}\left[\frac{1}{4}\cos(8x) + \cos(2x)\right] + c
\end{align*}
This {\em is} more tedious than the method in \ref{sec:trigint}.
But it is {\em completely straightforward}. You don't need
any trig formulas or anything else. You just multiply it out,
integrate, etc., and remember that $i^2=1$.
\end{example}
\section{Trigonometric Substitutions}
\forclass{Return more midterms?\\
Rough meaning of grades:\\
\mbox{}$\qquad$ 2934 is A\\
\mbox{}$\qquad$ 2328 is B\\
\mbox{}$\qquad$ 1722 is C\\
\mbox{}$\qquad$ 1116 is D\\
Regarding the quizif you do every homework problem that was assigned,
you'll have a severe case of deja vu on the quiz! On the exam, we do
not restrict ourselves like this, but you get to have a sheet of paper.}
The first homework problem is to compute
\begin{equation}\label{eqn:invsubhw1}
\int_{\sqrt{2}}^2 \frac{1}{x^3 \sqrt{x^21}} dx.
\end{equation}
Your first idea might be to do some sort of
substitution, e.g., $u=x^21$, but $du=2xdx$
is nowhere to be seen and this simply doesn't work.
Likewise, integration by parts gets us nowhere.
However, a technique called ``inverse trig substitutions''
and a trig identity easily dispenses with the
above integral and several similar ones!
Here's the crucial table:
\begin{center}
\shadowbox{
\begin{tabular}{lll}\hline
Expression & Inverse Substitution & Relevant Trig Identity\\\hline
$\sqrt{a^2x^2}$ & $x=a\sin(\theta), \frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$ & $1\sin^2(\theta) = \cos^2(\theta)$\\\hline
$\sqrt{a^2+x^2}$ & $x=a\tan(\theta), \frac{\pi}{2} < \theta < \frac{\pi}{2}$ & $1+\tan^2(\theta) = \sec^2(\theta)$\\\hline
$\sqrt{x^2a^2}$ & $x=a\sec(\theta), 0 \leq \theta < \frac{\pi}{2}$ or
$\pi \leq \theta < \frac{3\pi}{2}$ & $\sec^2(\theta)  1 = \tan^2(\theta)$\\\hline
\end{tabular}
}
\end{center}
Inverse substitution works as follows. If we write $x=g(t)$, then
$$
\int f(x) dx = \int f(g(t)) g'(t) dt.
$$
This is {\em not} the same as substitution. You can just apply
inverse substitution to any integral directlyusually you get
something even worse, but for the integrals in this section using
a substitution can vastly improve the situation.
If $g$ is a $11$ function, then you can even use inverse substitution
for a definite integral. The limits of integration are obtained as follows.
\begin{equation}\label{eqn:invsub}
\int_{a}^{b} f(x) dx = \int_{g^{1}(a)}^{g^{1}(b)} f(g(t)) g'(t) dt.
\end{equation}
To help you understand this, note that as $t$ varies from
$g^{1}(a)$ to $g^{1}(b)$, the function $g(t)$ varies
from $a=g(g^{1}(a)$ to $b=g(g^{1}(b))$, so $f$ is being integrated
over exactly the same values. Note also that (\ref{eqn:invsub}) once
again illustrates Leibniz's brilliance in designing the notation
for calculus.
Let's give it a shot with (\ref{eqn:invsubhw1}).
From the table we use the inverse substition
$$
x = \sec(\theta).
$$
We get
\begin{align*}
\int_{\sqrt{2}}^2 \frac{1}{x^3 \sqrt{x^21}} dx &=
\int_{\frac{\pi}{4}}^{\frac{\pi}{3}} \frac{1}{\sec(\theta)}{\sqrt{\sec^2(\theta)1}} \sec(\theta)\tan(\theta) d\theta \\
&= \int_{\frac{\pi}{4}}^{\frac{\pi}{3}} \frac{1}{\sec(\theta)}{\tan(\theta)} \sec(\theta)\tan(\theta) d\theta \\
&= \int_{\frac{\pi}{4}}^{\frac{\pi}{3}} \cos^(\theta) d\theta \\
&= \frac{1}{2} \int_{\frac{\pi}{4}}^{\frac{\pi}{3}} 1 + \cos(2\theta) d\theta \\
&= \frac{1}{2} \left[ \theta + \frac{1}{2}\sin(2\theta)\right]_{\frac{\pi}{4}}^{\frac{\pi}{3}} \\
&= \frac{\pi}{24} + \frac{\sqrt{3}}{8}  \frac{1}{4}\\
\end{align*}
Wow! That was like magic. This is really an amazing technique.
Let's use it again to find the area of an ellipse.
\begin{example}
Consider an ellipse with radii $a$ and $b$, so it
goes through $(0,\pm b)$ and $(\pm a, 0)$. An equation
for the part of an ellipse in the first quadrant is
$$
y = b \sqrt{1\frac{x^2}{a^2}} = \frac{b}{a}\sqrt{a^2x^2}.
$$
Thus the area of the entire ellipse is
$$
A = 4 \int_{0}^a \frac{b}{a}\sqrt{a^2x^2} \, dx.
$$
The $4$ is because the integral computes $1/4$th of the area
of the whole ellipse.
So we need to compute
$$
\int_{0}^a \sqrt{a^2x^2}\, dx
$$
Obvious substitution with $u=a^2x^2$...? nope. Integration by parts...? nope.
Let's try inverse substitution.
The table above suggests using $x = a\sin(\theta)$, so
$dx = a\cos(\theta) d\theta$.
We get
\begin{align}
\int_{0}^{\frac{\pi}{2}} \sqrt{a^2  a^2\sin^2(\theta)} d\theta
&= a^2 \int_{0}^{\frac{\pi}{2}} \cos^2(\theta) d\theta\\
&= \frac{a^2}{2}\int_{0}^{\frac{\pi}{2}} 1+\cos(2\theta) d\theta\\
&= \frac{a^2}{2}\left[ \theta + \frac{1}{2}\sin(2\theta) \right]_0^{\frac{\pi}{2}}\\
&= \frac{a^2}{2}\cdot \frac{\pi}{2} = \frac{\pi a^2}{4}.
\end{align}
Thus the area is
$$
4 \frac{b}{a} \frac{\pi a^2}{4}
= \pi a b.
$$
{\bf Consistency Check:} If the ellipse is a circle, i.e., $a=b=r$, this is $\pi r^2$,
which is a wellknown formula for the area of a circle.
%But, how can trigonometric functions be simpler than algebraic
%functions? It looks like I'm taking something simple and making
%it more complicated (that's what your math professor always
%does, isn't it?). But in fact, it turns out that the trigonometric
%integral above was much simpler to evaluate.
\end{example}
\begin{remark}
Trigonometric substitution is useful for functions that
involve $\sqrt{a^2x^2}$, $\sqrt{x^2+a^2}$, $\sqrt{x^2a}$,
but {\em not all at once!}. See the above table for how
to do each.
\end{remark}
One other important technique is to use completing the square.
\begin{example}
Compute $\int \sqrt{5 + 4x  x^2}\, dx$.
We \defn{complete the square}:
$$
5 + 4x  x^2 = 5  (x  2)^2 + 4 = 9  (x2)^2.
$$
Thus
$$
\int \sqrt{5 + 4x  x^2} \, dx
= \int \sqrt{9  (x2)^2} \, dx.
$$
%[[Draw a right triangle with sides
%$x2$ and $\sqrt{9(x2)^2}$ and hypotenuse
%$3$, with angle $\theta$.
We do a usual substitution to get rid of the $x2$.
Let $u=x2$, so $du=dx$.
Then
$$
\int \sqrt{9  (x2)^2} \, dx
= \int \sqrt{9  y^2} \, dy.
$$
Now we have an integral that we can do; it's almost
identical to the previous example, but with $a=9$
(and this is an indefinite integral).
Let $y = 3\sin(\theta)$, so $dy = 3\cos(\theta)d\theta$.
Then
\begin{align*}
\int \sqrt{9  (x2)^2} \, dx
&= \int \sqrt{9  y^2} \, dy\\
&= \int \sqrt{3^2  3^2\sin^2(\theta)} 3\cos(\theta) d\theta\\
&= 9 \int \cos^2(\theta)\, d\theta\\
&= \frac{9}{2} \int 1 + \cos(2\theta) d\theta\\
&= \frac{9}{2} \left(\theta + \frac{1}{2}\sin(2\theta)\right) + c\\
\end{align*}
Of course, we {\em must transform} back into a function in $x$, and
that's a little tricky.
Use that
$$x  2 = y = 3\sin(\theta),$$
so that
$$
\theta = \sin^{1}\left(\frac{x2}{3}\right).
$$
\begin{align*}
\int \sqrt{9  (x2)^2} \, dx &= \cdots\\
&= \frac{9}{2} \left(\theta + \frac{1}{2}\sin(2\theta)\right) + c\\
&=\frac{9}{2} \left[\sin^{1}\left(\frac{x2}{3}\right)
+ \sin(\theta)\cos(\theta) \right] + c\\
&=\frac{9}{2} \left[\sin^{1}\left(\frac{x2}{3}\right)
+ \left(\frac{x2}{3}\right) \cdot
\left(\frac{\sqrt{9(x2)^2}}{3}\right) \right] + c.\\
\end{align*}
Here we use that $\sin(2\theta) = 2\sin(\theta)\cos(\theta)$.
Also, to compute $\cos(\sin^{1}\left(\frac{x2}{3}\right))$, we
draw a right triangle with side lengths $x2$ and $\sqrt{9(x2)^2}$,
and hypotenuse $3$.
\end{example}
\begin{example}
Compute
$$
\int \frac{1}{\sqrt{t^26t+13}} dt
$$
To compute this, we complete the square, etc.
\begin{align*}
\int \frac{1}{\sqrt{t^26t+13}} dt &=
\int \frac{1}{\sqrt{(t3)^2 + 4}} dt \\
\end{align*}
[[Draw triangle with sides $2$ and $t3$ and
hypotenuse $\sqrt{(t3)^2 + 4}$.
Then
\begin{align*}
t3 &= 2\tan(\theta)\\
\sqrt{(t3)^2 + 4} &= 2\sec(\theta) = \frac{2}{\cos(\theta)}\\
dt &= 2\sec^2(\theta) d\theta
\end{align*}
Back to the integral, we have
\begin{align*}
\int \frac{1}{\sqrt{(t3)^2 + 4}} dt &=
\int \frac{2\sec^2(\theta)}{2\sec(\theta)} d\theta\\
&= \int \sec(\theta) d\theta \\
&= \ln  \sec(\theta) + \tan(\theta) + c\\
&= \ln\left \sqrt{(t3)^2 + 4}{2} + \frac{t3}{2}\right + c.
\end{align*}
\end{example}
\section{Factoring Polynomials}
\forclass{Quizes today!}
How do you compute something like
$$
\int \frac{x^2+2}{(x1)(x+2)(x+3)} dx?
$$
So far you have no method for doing this.
The trick (which is called partial fraction decomposition),
is to write
\begin{equation}\label{eqn:polyfacinttex}
\int \frac{x^2+2}{x^{3} + 4x^{2} + x  6} dx
= \int \frac{1}{4(x1)}  \frac{2}{x+2} + \frac{11}{4(x+3)} dx
\end{equation}
The integral on the right is then
easy to do (the answer involves $\ln$'s).
But {\em how on earth} do you right the rational function
on the left hand side as a sum of the nice terms
of the right hand side? Doing this is called
``partial fraction decomposition'', and it is
a fundamental idea in mathematics. It relies on
our ability to factor polynomials and saolve linear
equations. As a first hint, notice that
$$
x^{3} + 4x^{2} + x  6 = (x  1) \cdot (x + 2) \cdot (x + 3),
$$
so the denominators in the decomposition correspond to
the factors of the denominator.
Before describing the secret behind (\ref{eqn:polyfacinttex}), we'll
discuss some background about how polynomials and rational functions
work.
\begin{theorem}[Fundamental Theorem of Algebra]
If $f(x)=a_n x^n + \cdots a_1 x + a_0$ is a polynomial,
then there are complex numbers $c, \alpha_1, \ldots \alpha_n$
such that
$$
f(x) = c(x\alpha_1)(x\alpha_2)\cdots (x\alpha_n).
$$
\end{theorem}
\begin{example}
For example,
$$
3x^{2} + 2x  1
= 3 \cdot \left(x  \frac{1}{3}\right) \cdot (x + 1).
$$
And
$$
(x^2 + 1) = (x+i)^2 \cdot (xi)^2.
$$
\end{example}
If $f(x)$ is a polynomial, the roots $\alpha$ of $f$
correspond to the factors of $f$. Thus if
$$
f(x) = c(x\alpha_1)(x\alpha_2)\cdots (x\alpha_n),
$$
then
$f(\alpha_i) =0$ for each $i$ (and nowhere else).
\begin{definition}[Multiplicity of Zero]
The \defn{multiplicity of a zero} $\alpha$ of $f(x)$ is the number
of times that $(x\alpha)$ appears as a factor of $f$.
\end{definition}
For example, if $f(x) = 7(x2)^{99}\cdot (x+17)^5\cdot (x\pi)^2$,
then $2$ is a zero with multiplicity $99$, $\pi$
is a zero with multiplicity $2$, and $1$ is a ``zero
multiplicity $0$''.
\begin{definition}[Rational Function]
A \defn{rational function} is a quotient
$$
f(x) = \frac{g(x)}{h(x)},
$$
where $g(x)$ and $h(x)$ are polynomials.
\end{definition}
For example,
\begin{equation}\label{eqn:ratfun1}
f(x) =\frac{x^{10}}{(xi)^2(x+\pi)(x3)^3}
\end{equation}
is a rational function.
\begin{definition}[Pole]
A \defn{pole} of a rational function $f(x)$
is a complex number $\alpha$ such that
$f(x)$ is unbounded as $x\to \alpha$.
\end{definition}
For example, for (\ref{eqn:ratfun1}) the poles
are at $i$, $\pi$, and $3$. They have
multiplicity $2$, $1$, and $3$, respectively.
\section{Integration of Rational Functions Using Partial Fractions}
\forclass{
Today: 7.4: Integration of rational functions and
Supp. 4: Partial fraction expansion\\
Next: 7.7: Approximate integration}
Our goal today is to compute integrals of
the form
$$
\int \frac{P(x)}{Q(x)} dx
$$
by decomposing $f=\frac{P(x)}{Q(x)}$.
This is called partial fraction expansion.
\begin{theorem}[Fundamental Theorem of Algebra over the Real Numbers]
A real polynomial of degree $n\geq 1$ can be factored as a constant
times a product of linear factors $xa$ and irreducible quadratic
factors $x^2+bx+c$.
\end{theorem}
Note that $x^2+bx+c = (x\alpha)(x\bar{\alpha})$, where
$\alpha=z+iw$, $\bar{\alpha} = ziw$ are complex conjugates.
Types of rational functions $f(x) = \frac{P(x)}{Q(x)}$. To do a
partial fraction expansion, first make sure $\deg(P(x)) < \deg(Q(x))$
using long division. Then there are four possible situation,
each of increasing generality (and difficulty):
\begin{enumerate}
\item $Q(x)$ is a product of distinct linear factors;
\item $Q(x)$ is a product of linear factors, some of which are repeated;
\item $Q(x)$ is a product of distinct irreducible quadratic factors,
along with linear factors some of which may be repeated; and,
\item $Q(x)$ is has repeated irreducible quadratic factors, along with
possibly some linear factors which may be repeated.
\end{enumerate}
The general partial fraction expansion theorem is beyond the
scope of this course. However, you might find the following
special case and its proof interesting.
\begin{theorem}
Suppose $p$, $q_1$ and $q_2$ are polynomials that are relatively
prime (have no factor in common). Then there
exists polynomials $\alpha_1$ and $\alpha_2$ such that
$$
\frac{p}{q_1 q_2} = \frac{\alpha_1}{q_1} + \frac{\alpha_2}{q_2}.
$$
\end{theorem}
\begin{proof}
Since $q_1$ and $q_2$ are relatively prime, using the Euclidean
algorithm (long division), we can find polynomials $s_1$ and $s_2$
such that
$$ 1 = s_1 q_1 + s_2 q_2 .$$
Dividing both sides by $q_1 q_2$ and multiplying by $p$ yields
$$
\frac{p}{q_1 q_2} = \frac{\alpha_1}{q_1} + \frac{\alpha_2}{q_2},
$$
which completes the proof.
\end{proof}
\begin{example}
Compute $$\int \frac{x^3  4x 10}{x^2x6} dx.$$
First do long division. Get quotient of $x+1$
and remainder of $3x4$.
This means that
$$
\frac{x^3  4x 10}{x^2x6}
= x + 1 + \frac{3x  4}{x^2x6}.
$$
Since we have distinct linear factors, we know
that we can write
$$
f(x) = \frac{3x  4}{x^2x6} = \frac{A}{x3} + \frac{B}{x+2},
$$
for real numbers $A,B$.
A clever way to find $A,B$ is to substitute
appropriate values in, as follows.
We have
$$
f(x) (x3) = \frac{3x4}{x+2} = A + B\cdot \frac{x3}{x+2}.
$$
Setting $x=3$ on both sides we have (taking a limit):
$$
A = f(3) = \frac{3\cdot 3  4}{3+2} = \frac{5}{5} = 1.
$$
Likewise, we have
$$
B = f(2) = \frac{3\cdot (2)  4}{23} = 2.
$$
Thus
\begin{align*}
\int \frac{x^3  4x 10}{x^2x6} dx &=
\int x + 1 + \frac{1}{x3} + \frac{2}{x+2} \\
&= \frac{x^2 + 2x}{2} + 2\log\leftx + 2\right + \log\leftx  3\right + c.
\end{align*}
\end{example}
\begin{example}
Compute the partial fraction expansion of
$\frac{x^2}{(x3)(x+2)^2}$.
By the partial fraction theorem, there are constants
$A,B,C$ such that
$$
\frac{x^2}{(x3)(x+2)^2}
= \frac{A}{x3} + \frac{B}{x+2} + \frac{C}{(x+2)^2}.
$$
Note that there's no possible way this could work
without the $(x+2)^2$ term, since otherwise
the common denominator would be
$(x3)(x+2)$.
We have
\begin{align*}
A &= \left[ f(x) (x3)\right]_{x=3}
= \frac{x^2}{(x+2)^2}_{x=3} = \frac{9}{25},\\
C &= \left[ f(x) (x+2)^2\right]_{x=2} = \frac{4}{5}.
\end{align*}
This method will not get us $B$!
For example,
$$
f(x) (x+2) = \frac{x^2}{(x3)(x+2)} = A\cdot\frac{x+2}{x3} + B + \frac{C}{x+2}.
$$
While true this is useless.
Instead, we use that we know $A$ and $C$,
and evaluate at another value of $x$, say $0$.
$$
f(0) = 0 = \frac{\frac{9}{25}}{3} + \frac{B}{2}
+ \frac{\frac{4}{5}}{(2)^2},
$$
so $B=\frac{16}{25}$.
Thus finally,
\begin{align*}
\int \frac{x^2}{(x3)(x+2)^2}
&= \int \frac{\frac{9}{25}}{x3} + \frac{\frac{16}{25}}{x+2}
+ \frac{\frac{4}{5}}{(x+2)^2}.\\
&= \frac{9}{25} \lnx3 + \frac{16}{25}\lnx+2
+ \frac{\frac{4}{5}}{x+2} + \text{constant}.
\end{align*}
\end{example}
\begin{example}
Let's compute $\int \frac{1}{x^3+1} dx$.
Notice that $x+1$ is a factor, since $1$ is a root.
We have
$$
x^3 + 1 = \left(x + 1\right)\left(x^2  x + 1\right).
$$
There exist constants $A,B,C$ such that
$$
\frac{1}{x^3+1} =
\frac{A}{x+1} + \frac{Bx+C}{x^2x+1}.
$$
Then
$$
A = f(x)(x+1)_{x=1} = \frac{1}{3}.
$$
You could find $B,C$ by factoring the quadratic over
the complex numbers and getting complex number
answers. Instead, we evaluate $x$ at a couple of values.
For example, at $x=0$ we get
$$
f(0) = 1 = \frac{1}{3} + \frac{C}{1},
$$
so $C=\frac{2}{3}$.
Next, use $x=1$ to get $B$.
\begin{align*}
f(1) = \frac{1}{1^3 + 1} &= \frac{\frac{1}{3}}{(1) + 1}
+ \frac{B(1) + \frac{2}{3}}{(1)^2  (1) + 1}\\
\frac{1}{2} &= \f{1}{6} + B + \f{2}{3},
\end{align*}
so
$$
B = \f{3}{6}  \f{1}{6}  \f{4}{6} = \f{1}{3}.
$$
Finally,
\begin{align*}
\int \f{1}{x^3+1} dx &= \int \f{\f{1}{3}}{x+1}
 \f{\f{1}{3} x}{x^2x1} + \f{\f{2}{3}}{x^2x1} dx\\
&= \f{1}{3}\lnx+1  \f{1}{3} \int\f{x2}{x^2x+1} dx
\end{align*}
It remains to compute
$$
\int\f{x2}{x^2x+1} dx.
$$
First, complete the square to get
$$
x^2  x + 1 = \left(x\f{1}{2}\right)^2 + \f{3}{4}.
$$
Let $u=(x\f{1}{2})$, so $du=dx$ and $x=u + \f{1}{2}$.
Then
\begin{align*}
\int\f{u  \f{3}{2}}{u^2 +\f{3}{4}} du &=
\int \f{udu}{u^2 + \f{3}{4}} 
\f{3}{2} \int\f{1}{u^2 + \left(\f{\sqrt{3}}{2}\right)^2} du\\
&= \f{1}{2}\ln\leftu^2 + \f{3}{4}\right
 \f{3}{2} \cdot \f{2}{\sqrt{3}}
\tan^{1}\left( \f{2u}{\sqrt{3}} \right) + c\\
&= \f{1}{2}\ln\leftx^2x+1\right
 \sqrt{3} \tan^{1} \left(\f{2x1}{\sqrt{3}}\right) + c
\end{align*}
Finally, we put it all together and get
\begin{align*}
\int \f{1}{x^3+1} dx &=
\f{1}{3}\lnx+1  \f{1}{3} \int\f{x2}{x^2x+1} dx\\
&= \f{1}{3}\lnx+1  \f{1}{6} \ln\leftx^2x+1\right
+ \f{\sqrt{3}}{3} \tan^{1} \left(\f{2x1}{\sqrt{3}}\right) + c
\end{align*}
\end{example}
\forclass{Discuss second quiz problem.
Problem: Compute $\int \cos^2(x) e^{3x} dx$ using
complex exponentials.
The answer is
$$
\frac{1}{6}e^{3x} + \frac{1}{13} e^{3x}\sin(2x) 
\frac{3}{26} e^{3x} \cos(2x) + c.
$$
Here's how to get it.
\begin{align*}
\int \cos^2(x) e^{3x} dx &=
\int \frac{e^{2ix} + 2 + e^{2ix}}{4} e^{3x} dx \\
&= \frac{1}{4} \left[ \frac{e^{(2i3)x}}{2i3}  \frac{2}{3} e^{3x} +
\frac{e^{(2i3)x}}{2i3}\right] + c\\
&= \frac{1}{6} e^{3x} + \f{e^{3x}}{4} \left[
\frac{e^{2ix}}{2i3}  \frac{e^{2ix}}{2i+3}\right] + c
\end{align*}
Simplify the inside part requires some imagination:
\begin{align*}
\frac{e^{2ix}}{2i3}  \frac{e^{2ix}}{2i+3} &=
\frac{1}{13} (2i e^{2ix}  3e^{2ix} + 2i e^{2ix}  3 e^{2ix})\\
&= \frac{1}{13}\left( 4\sin(2x)  6\cos(2x) \right)
\end{align*}
}
\section{Approximating Integrals}
\forclass{
Today: 7.7  approximating integrals\\
Friday: Third QUIZ and 7.8  improper integrals
}
Problem: Compute
$$
\int_0^1 e^{\sqrt{x}} dx.
$$
Hmmm... Any ideas?
Today we will revisit Riemann sums in the context
of finding numerical approximations to integrals,
which we might not be able to compute exactly.
Recall that if $y=f(x)$ then
$$
\int_a^b f(x) dx = \lim_{n\to\infty} \sum_{i=1}^n f(x_i^*) \Delta x.
$$
The fundamental theorem of calculus says that {\em if we
can find an antiderivative} of $f(x)$, then we can
compute $ \int_a^b f(x) dx$ exactly. But antiderivatives
can be either (1) hard to find, and sometimes worse (2)
impossible to find. However, we can always
approximate $\int_{a}^b f(x) dx$ (possibly very badly).
For example, we could use Riemann sums to approximate $\int_{a}^b f(x) dx$,
say using \defn{left endpoints}. This gives the approximation:
$$
L_n = \sum_{i=0}^{n1} f(x_i) \Delta x; \qquad x_0, \ldots, x_{n1} \text{ left endpoints}
$$
Using \defn{rightpoints} gives
$$
R_n = \sum_{i=1}^{n} f(x_i) \Delta x; \qquad x_1, \ldots, x_{n}
\text{ right endpoints}
$$
Using \defn{midpoints} gives
$$
M_n = \sum_{i=1}^{n} f(\overline{x}_i) \Delta x; \qquad \overline{x}_1,
\ldots, \overline{x}_{n}
\text{ midpoints},
$$
where $\overline{x}_i = (x_{i1} + x_i)/2$. The midpoint is
typically (but not always) much better than the left or right endpoint
approximations.
Yet another possibility is the \defn{trapezoid approximation},
which is
$$
T_n = \frac{1}{2}(L_n + R_n);
$$
this is just the average of the left and right approximations.
\begin{question}
But wouldn't the trapezoid and midpoint approximations be the
same?certainly not (see example below); interestingly, very often
the midpoint approximation is better.
\end{question}
\defn{Simpson's approximation}
$$
S_{2n} = \frac{1}{3}T_n + \frac{2}{3} M_n
$$
gives the area under bestfit parabolas that approximate our
function on each interval. The proof of this would be interesting
but takes too much time for this course.
Many functions have no elementary antiderivatives:
$$
\sqrt{1+x^3}, e^{x^2}, \frac{1}{\log(x)}, \frac{\sin(x)}{x}, \ldots.
$$
NOTE  they {\em\bf do} have antiderivatives; the problem is just that
there is no simple formula for them. Why are there no elementary
antiderivatives?
Some of these functions are extremly important. For example, the
integrals $\int_{\infty}^x e^{u^2/2} du$ are extremely important in
probability, even though there is no simple formula for the
antiderivative.
If you are doing scientific research you might spend months
tediously computing values of some function $f(x)$, for which no
formula is known.
\begin{example}
Compute $\int_{0}^1 e^{\sqrt{x}} dx$.
\begin{enumerate}
\item Trapezoid with $n=4$
\item Midpoint with $n=4$
\item Simpson's with with $2n=8$
\end{enumerate}
\fig{Graph of $e^{\sqrt{x}}$}{example_nint}
%sage: gnuplot.plot('plot [0:1] exp(sqrt(x))', 'example_nint')
The following is a table of the values
of $f(x)$ at $k/8$ for $k=0,\ldots, 8$.
\begin{center}
\begin{tabular}{ll}\hline
$k/8$ & $f(k/8)$\\\hline
$0$ & $V_{0} = 1.000000$ \\\hline
$\frac{1}{8}$ & $V_{1} = 0.702189$ \\\hline
$\frac{1}{4}$ & $V_{2} = 0.606531$\\\hline
$\frac{3}{8}$ & $V_{3} = 0.542063$ \\\hline
$\frac{1}{2}$ & $V_{4} = 0.493069$ \\\hline
$\frac{5}{8}$ & $V_{5} = 0.453586$ \\\hline
$\frac{3}{4}$ & $V_{6} = 0.420620$ \\\hline
$\frac{7}{8}$ & $V_{7} = 0.392423$ \\\hline
$1$ & $V_{8} = 0.367879$ \\\hline
\end{tabular}
\end{center}
$$L_4 = (V_0 + V_2 + V_4 + V_6) \cdot \frac{1}{4} = 0.630055$$
$$R_4 = (V_2 + V_4 + V_6 + V_8) \cdot \frac{1}{4} = 0.472025$$
$$M_4 = (V_1 + V_3 + V_5 + V_7) \cdot \frac{1}{4} = 0.522565$$
$$T_4 = \frac{1}{2}(L_4 + R_4) = 0.551040.$$
$$S_8 = \frac{1}{3} T_4 + \frac{2}{3} M_4 = 0.532057$$
Maxima gives $0.5284822353142306$ and
Mathematica gives $0.528482$.
Note that Simpsons's is the best; it better be, since we
worked the hardest to get it!
\begin{center}
\begin{tabular}{ll}\hline
Method & Error\\\hline
$L_4  I$ & 0.101573\\\hline
$R_4  I$ & 0.056458\\\hline
$M_4  I$ & 0.005917\\\hline
$T_{4}  I$ & 0.022558\\\hline
$S_{8}  I$ & 0.003575\\\hline
\end{tabular}
\end{center}
\end{example}
\section{Improper Integrals}
\forclass{
Exam 2 Wed Mar 1: 7pm7:50pm in ??\\
Today: 7.8 Improper Integrals\\
Monday  president's day holiday (and almost my bday)\\
Next  11.1 sequences}
\begin{example}
Make sense of $\int_{0}^{\infty} e^{x} dx$.
%gnuplot.plot('plot [0:10] exp(x)', 'example_ex_improper')
The integrals
$$
\int_0^t e^{x} dx
$$
make sense for each real number $t$. So consider
$$
\lim_{t\to\infty} \int_0^t e^{x} dx
= \lim_{t\to\infty} [e^{x}]_0^t = 1.
$$
Geometrically the area under the whole curve is the limit
of the areas for finite values of $t$.
\fig{Graph of $e^{x}$}{example_ex_improper}
\end{example}
\begin{example}
Consider $\int_0^1\frac{1}{\sqrt{1x^2}} dx$ (see Figure~\ref{example_ex_improper_blowup}).
\fig{Graph of $\frac{1}{\sqrt{1x^2}}$\label{example_ex_improper_blowup}}{example_ex_improper_blowup}
%gnuplot.plot('plot [0:1] 1/sqrt(1x^2)', 'example_ex_improper_blowup')
Problem: The denominator
of the integrand tends to $0$ as $x$ approaches the upper
endpoint.
Define
\begin{align*}
\int_0^1\frac{1}{\sqrt{1x^2}} dx
&= \lim_{t\to 1^} \int_0^t\frac{1}{\sqrt{1x^2}} dx\\
&= \lim_{t\to 1^} \Bigl( \sin^{1}(t)  \sin^{1}(0) \Bigr) = \sin^{1}(1) = \f{\pi}{2}\\
\end{align*}
Here $t\to 1^$ means the limit as $t$ tends to $1$ {\em from
the left}.
\end{example}
\begin{example}\label{ex:multipleimproper}
There can be multiple points at which the integral is improper.
For example, consider
$$
\int_{\infty}^{\infty} \f{1}{1+x^2} dx.
$$
A crucial point is that we take the limit for the
left and right endpoints independently. We use
the point $0$ (for convenience only!) to break
the integral in half.
\begin{align*}
\int_{\infty}^{\infty} \f{1}{1+x^2} dx &=
\int_{\infty}^{0} \f{1}{1+x^2} dx +
\int_{0}^{\infty} \f{1}{1+x^2} dx \\
&= \lim_{s\to \infty} \int_{s}^{0} \f{1}{1+x^2} dx +
\lim_{t\to\infty} \int_{0}^{t} \f{1}{1+x^2} dx \\
&= \lim_{s\to\infty} (\tan^{1}(0)  \tan^{1}(s))
+ \lim_{t\to\infty} (\tan^{1}(t)  \tan^{1}(0))\\
&= \lim_{s\to\infty} (\tan^{1}(s))
+ \lim_{t\to\infty} (\tan^{1}(t))\\
&= \f{\pi}{2} + \f{\pi}{2} = \pi.
\end{align*}
The graph of $\tan^{1}(x)$ is in Figure~\ref{fig:example_atan}.
\fig{Graph of $\tan^{1}(x)$\label{fig:example_atan}}{example_atan}
%gnuplot.plot('plot [20:20] atan(x)', 'example_atan.eps')
\end{example}
\begin{example}\label{ex:noant}
Brian Conrad's paper on impossibility theorems for elementary
integration begins: ``The Central Limit Theorem in
probability theory assigns a special significance to
the cumulative area function
$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{\infty}^x e^{u^2}{u} du$$
under the Gaussian bell curve
$$
y=\frac{1}{\sqrt{2\pi}} \cdot e^{u^2/2}.
$$
It is known that $\Phi(\infty) = 1$.''
What does this last statement {\em mean}? It means that
$$
\lim_{t\to\infty} \frac{1}{\sqrt{2\pi}} \int_{t}^0 e^{u^2}{u} du
+ \lim_{x\to\infty} \frac{1}{\sqrt{2\pi}} \int_{0}^x e^{u^2}{u} du
= 1.
$$
\end{example}
\begin{example}
Consider $\int_{\infty}^{\infty} x dx$.
Notice that
$$
\int_{\infty}^{\infty} x dx
= \lim_{s\to\infty} \int_{s}^{0} x dx
+ \lim_{t\to \infty} \int_{0}^{t} x dx.
$$
This diverges since each factor diverges independtly.
But notice that
$$
\lim_{t\to\infty} \int_{t}^t x dx = 0.
$$
This is {\em not} what $\int_{\infty}^{\infty} x dx$ means (in this
course  in a later course it could be interpreted this way)!
This illustrates the importance of treating each bad point
separately (since Example~\ref{ex:multipleimproper}) doesn't.
\end{example}
\begin{example}
Consider $\int_{1}^1 \frac{1}{\sqrt[3]{x}} dx$.
We have
\begin{align*}
\int_{1}^1 \frac{1}{\sqrt[3]{x}} dx
&= \lim_{s \to 0^} \int_{1}^s x^{\frac{1}{3}} dx
\,\, + \,\,\lim_{t\to 0^+} \int_t^1 x^{\frac{1}{3}} dx\\
&= \lim_{s \to 0^} \left(\frac{3}{2} s^{\f{2}{3}}  \f{3}{2}\right)
+ \lim_{t \to 0^+} \left(\f{3}{2}  \frac{3}{2} t^{\f{2}{3}} \right)
= 0.
\end{align*}
This illustrates how to be careful and break the function up into two
pieces when there is a discontinuity.
\end{example}
\forclass{
{\bf\Large NOTES for 20060222}\\
Midterm 2: Wednesday, March 1, 2006, at 7pm in Pepper Canyon 109\\
Today: 7.8: Comparison of Improper integrals\\
11.1: Sequences\\
Next 11.2 Series\\
}
\begin{example}
Compute $\int_{1}^3 \f{1}{x2} dx$.
A few weeks ago you might have done this:
$$\int_{1}^3 \f{1}{x2} dx
= [\lnx2]_{1}^{3}
= \ln(3)  \ln(1) \qquad{\text{(totally wrong!)}}
$$
This is not valid because the function we are
integrating has a pole at $x=2$ (see Figure~\ref{fig:example_invx2}).
The integral is improper, and is only defined
if both the following limits exists:
$$
\lim_{t\to 2^} \int_{1}^t \f{1}{x2} dx
\qquad
\text{and}
\qquad
\lim_{t\to 2^+} \int_{t}^3 \f{1}{x2} dx.
$$
However, the limits diverge, e.g.,
$$
\lim_{t\to 2^+} \int_{t}^3 \f{1}{x2} dx
= \lim_{t\to 2^+} (\ln1  \lnt2)
=  \lim_{t\to 2^+} \lnt2 = \infty.
$$
Thus $\int_{1}^3 \f{1}{x2} dx$ is divergent.
\fig{Graph of $\f{1}{x2}$\label{fig:example_invx2}}{example_invx2}
%gnuplot.plot('plot [1:3] 1/(x2)', 'example_invx2.eps')
\end{example}
\subsection{Convergence, Divergence, and Comparison}
In this section we discuss using comparison to determine if an
improper integrals converges or diverges.
Recall that if $f$ and $g$ are continuous functions on an
interval $[a,b]$ and $g(x) \leq f(x)$, then
$$
\int_{a}^b g(x) dx \leq \int_a^{b} f(x) dx.
$$
This observation can be {\em incredibly useful}
in determining whether or not an improper integral
converges.
Not only does this
technique help in determing whether integrals converge, but it also
gives you some information about their values, which is often much
easier to obtain than computing the exact integral.
\begin{theorem}[Comparison Theorem (special case)]\label{thm:comparison}
Let $f$ and $g$ be continuous functions with $0 \leq g(x) \leq f(x)$
for $x\geq a$.
\begin{enumerate}
\item If $\int_{a}^{\infty} f(x) dx$ converges, then
$\int_{a}^{\infty} g(x) dx$ converges.
\item If $\int_{a}^{\infty} g(x) dx$ diverges then
$\int_{a}^{\infty} f(x) dx$ diverges.
\end{enumerate}
\end{theorem}
\begin{proof}
Since $g(x)\geq 0$ for all $x$, the function
$$
G(t) = \int_{a}^{t} g(x) dx
$$
is a nondecreasing function.
If $\int_{a}^{\infty} f(x) dx$ converges to some value $B$, then
for any $t\geq a$ we have
$$
G(t) = \int_{a}^{t} g(x) dx \leq \int_{a}^{t} f(x) dx \leq B.
$$
Thus in this case $G(t)$ is a nondecreasing function bounded
above, hence the limit $\lim_{t\to\infty} G(t)$ exists.
This proves the first statement.
Likewise, the function
$$
F(t) = \int_{a}^{t} f(x) dx
$$
is also a nondecreasing function.
If $\int_{a}^{\infty} g(x) dx$ diverges then
the function
$G(t)$ defined above is still nondecreasing and
$\lim_{t\to\infty} G(t)$ does not exist, so $G(t)$
is not bounded. Since $g(x) \leq f(x)$ we
have $G(t)\leq F(t)$ for all $\geq a$, hence
$F(t)$ is also unbounded, which proves the second statement.
\end{proof}
The theorem is very intuitive if you think about areas under a graph.
``If the bigger integral converges then so does the smaller one, and
if the smaller one diverges so does the bigger ones.''
\begin{example}
Does $\int_0^{\infty} \f{\cos^2(x)}{1+x^2}dx$ converge? Answer: YES.
\fig{Graph of $\f{\cos(x)^2}{1+x^2}$ and $\f{1}{1+x^2}$\label{fig:example_compare1}}{example_compare1}
%gnuplot.plot('plot [0:10] cos(x)^2/(1+x^2), 1/(1+x^2)', 'example_compare1.eps')
Since $0\leq \cos^2(x) \leq 1$, we really do have
$$
0 \leq \f{\cos^2(x)}{1+x^2} \leq \f{1}{1+x^2},
$$
as illustrated in Figure~\ref{fig:example_compare1}.
Thus
$$
\int_0^{\infty} \f{1}{1+x^2} dx = \lim_{t\to\infty} \tan^{1}(t) = \f{\pi}{2},
$$
so $\int_0^{\infty} \f{\cos^2(x)}{1+x^2}dx$ converges.
But why did we use $\f{1}{1+x^2}$? It's a {\em guess} that turned out
to work. You could have used something else, e.g., $\f{c}{x^2}$ for
some constant $c$. This is an illustration of how in mathematics
sometimes you have to use your imagination or guess and see what
happens. Don't get anxiousinstead, relax, take a deep breath and
explore.
For example, alternatively we could have done the following:
$$
\int_1^{\infty} \f{\cos^2(x)}{1+x^2}dx
\leq \int_1^{\infty} \f{1}{x^2} dx = 1,
$$
and this works just as well, since
$\int_0^{1} \f{\cos^2(x)}{1+x^2}dx$ converges (as $\f{\cos^2(x)}{1+x^2}$
is continuous).
\end{example}
\begin{example}
Consider $\int_{0}^{\infty} \f{1}{x+e^{2x}} dx$. Does it converge
or diverge?
For large values of $x$, the term $e^{2x}$ very quickly goes to $0$,
so we expect this to diverge, since $\int_1^{\infty} \f{1}{x} dx$
diverges.
%Note that we can change the lower limit to $1$ since
%$\int_{0}^{1} \f{1}{x+e^{2x}} dx$ converges, being the
%integral of a continuous function on a closed interval.
%So instead we will consider
%$\int_{1}^{\infty} \f{1}{x+e^{2x}} dx$ for the rest of this problem.
For $x\geq 0$, we have $e^{2x} \leq 1$, so
for all $x$ we have
$$
\f{1}{x+e^{2x}} \geq \f{1}{x+1}\qquad\text{(verify by cross multiplying)}.
$$
But
$$
\int_{1}^{\infty} \f{1}{x+1} dx
= \lim_{t\to\infty} [\ln(x+1)]_1^{t}
= \infty
$$
Thus $\int_{0}^{\infty} \f{1}{x+e^{2x}} dx$ must also diverge.
\end{example}
Note that there is a natural analogue of Theorem~\ref{thm:comparison}
for integrals of functions that ``blow up'' at a point, but
we will not state it formally.
\begin{example}
Consider
$$\int_{0}^1 \f{e^{x}}{\sqrt{x}} dx
= \lim_{t\to 0^+} \int_{t}^1 \f{e^{x}}{\sqrt{x}} dx.$$
We have
$$
\f{e^{x}}{\sqrt{x}} \leq \f{1}{\sqrt{x}}.
$$
(Coming up with this comparison might take some work, imagination,
and trial and error.)
We have
$$
\int_{0}^1 \f{e^{x}}{\sqrt{x}} dx
\leq \int_0^1 \f{1}{\sqrt{x}} dx
= \lim_{t\to 0^+} 2\sqrt{1}  2\sqrt{t} = 2.
$$
thus $\int_{0}^1 \f{e^{x}}{\sqrt{x}} dx$ converges, even though
we haven't figured out its value. We just know that it is $\leq 2$.
(In fact, it is $1.493648265\ldots$.)
What if we found a function that is bigger than $
\f{e^{x}}{\sqrt{x}}$ and its integral diverges?? So what! This does
nothing for you. Bzzzt. Try again.
\end{example}
\begin{example}
Consider the integral
$$
\int_{0}^1 \f{e^{x}}{x} dx.
$$
This is an improper integral since
$f(x) = \f{e^{x}}{x}$ has a pole at $x=0$.
Does it converge? NO.\\
On the interal $[0,1]$ we have $e^{x} \geq e^{1}$.
Thus
\begin{align*}
\lim_{t\to 0^+} \int_{t}^1 \f{e^{x}}{x} dx
&\geq
\lim_{t\to 0^+}
\int_{t}^1 \f{e^{1}}{x} dx\\
&=
e^{1} \cdot \lim_{t\to 0^+}
\int_{t}^1 \f{1}{x} dx\\
&= e^{1} \cdot \lim_{t\to 0^+}
\ln(1)  \ln(t) = +\infty\\
\end{align*}
Thus $\int_{0}^1 \f{e^{x}}{x} dx$ diverges.
\end{example}
\chapter{Sequences and Series}
\forclass{
Exam 2: Wednesday at 7pm in PCYN 109\\
Today: Sequence and Series (\S11.1\S11.2)\\
Next: \S11.3 Integral Test, \S11.4 Comparison Test\\
}
Our main goal in this chapter is to gain a working knowledge of power
series and Taylor series of function with just enough discussion of
the details of convergence to get by.
\section{Sequences}
What is $$\lim_{n\to\infty} \f{1}{2^n}?$$
You may have encountered sequences long ago in earlier courses and
they seemed very difficult. You know much more mathematics now, so
they will probably seem easier. On the other hand,
we're going to go very quickly.
\forclass{
{\em We will completely skip several topics from Chapter 11.
I will try to make what we skip clear. Note that the homework
has been modified to reflect the omitted topics.}
}
A sequence is an ordered list of numbers. These numbers may be real,
complex, etc., etc., but in this book we will focus entirely on
sequences of real numbers. For example,
$$
\frac{1}{2},
\frac{1}{4},
\frac{1}{8},
\frac{1}{16},
\frac{1}{32},
\frac{1}{64},
\frac{1}{128}, \ldots, \f{1}{2^n},\ldots
$$
Since the sequence is ordered, we can view it as a function
with domain the natural numbers $=1,2,3,\ldots$.
\begin{definition}[Sequence]
A \defn{sequence} $\{a_n\}$ is a function
$a:\N \to \R$ that takes a natural number
$n$ to $a_n = a(n)$. The number $a_n$ is the \defn{$n$th term}.
\end{definition}
For example,
$$a(n) = a_n = \f{1}{2^n},$$
which we write as $\{\f{1}{2^n}\}$.
Here's another example:
$$
\seq{b_n}{n} = \seq{\f{n}{n+1}}{n}
= \f{1}{2}, \f{2}{3}, \f{3}{4}, \ldots
$$
\begin{example}
The Fibonacci sequence $\seq{F_n}{n}$ is defined recursively as follows:
$$
F_1 = 1,\,\, F_2 = 1,\,\, F_n = F_{n2} + F_{n1} \quad\text{for $n \geq 3$}.
$$
\end{example}
Let's return to the sequence $\seq{\f{1}{2^n}}{n}$.
We write $\lim_{n\to\infty} \f{1}{2^n} = 0$, since the terms get arbitrarily
small.
\begin{definition}[Limit of sequence]
If $\seq{a_n}{n}$ is a sequence then that \defn{sequence converges} to $L$,
written $\lim_{n\to\infty} a_n = L$, if $a_n$ gets arbitrarily close to $L$ as $n$ get sufficiently large.
{\em\sc Secret rigorous definition:} For every $\varepsilon>0$ there
exists $B$ such that for $n\geq B$ we have $a_n  L<\varepsilon$.
\end{definition}
This is exactly like what we did in the previous course when we considered
limits of functions. If $f(x)$ is a function, the meaning
of $\lim_{x\to\infty} f(x) = L$ is essentially the same. In fact, we have
the following fact.
\begin{proposition}\label{prop:limserfun}
If $f$ is a function with $\lim_{x\to\infty} f(x) = L$ and
$\seq{a_n}{n}$ is the sequence given by $a_n = f(n)$, then
$\lim_{n\to\infty} a_n = L$.
\end{proposition}
As a corollary, note that this implies that all the facts about limits
that you know from functions also apply to sequences!
\begin{example}
$$
\lim_{n\to\infty} \f{n}{n+1} = \lim_{x\to\infty} \f{x}{x+1} = 1
$$
\end{example}
\begin{example}
The converse of Proposition~\ref{prop:limserfun} is false {\em in general},
i.e., knowing the limit of the sequence converges doesn't imply
that the limit of the function converges.
We have $\lim_{n\to\infty} \cos(2\pi n) = 1$, but
$\lim_{x\to\infty} \cos(2\pi x)$ diverges.
The converse is OK if the limit involving the function converges.
\end{example}
\begin{example}
Compute $\ds \lim_{n\to\infty} \f{n^3 + n + 5}{17n^3  2006n + 15}$.
{\em Answer: $\f{1}{17}$.}
\end{example}
\section{Series}\label{sec:series}
What is $$\f{1}{2} + \f{1}{4} + \f{1}{8} + \f{1}{16} + \f{1}{32} + \ldots?$$
What is $$\f{1}{3} + \f{1}{9} + \f{1}{27} + \f{1}{81} + \f{1}{243} + \ldots?$$
What is $$\f{1}{1} + \f{1}{4} + \f{1}{9} + \f{1}{16} + \f{1}{25} + \ldots?$$
Consider the following sequence of partial sums:
$$
a_N = \sum_{n=1}^N \f{1}{2^n} = \f{1}{2} + \f{1}{4} + \dots + \f{1}{2^N}.
$$
Can we compute
$$
\sum_{n=1}^{\infty} \f{1}{2^n}?
$$
These partial sums look as follows:
$$
a_1 = \f{1}{2},\qquad
a_2 = \f{3}{4},\qquad
a_{10} = \f{1023}{1024},\qquad
a_{20} = \f{1048575}{1048576}
$$
It looks very likely that $\ds \sum_{n=1}^{\infty} \f{1}{2^n} = 1$, if it makes
any sense. But does it?
In a moment we will {\em define}
$$
\sum_{n=1}^{\infty} \f{1}{2^n} = \lim_{N\to\infty} \sum_{n=1}^N \f{1}{2^n}
= \lim_{N\to\infty} a_N.
$$
A little later we will show that $a_{N} = \f{2^N  1}{2^N}$, hence indeed
$\sum_{n=1}^{\infty} \f{1}{2^n} = 1$.
\begin{definition}[Sum of series]
If $\seq{a_n}{n}$ is a sequence, then the \defn{sum of the series}
is
$$
\sum_{n=1}^{\infty} a_n = \lim_{N\to\infty} \sum_{n=1}^N a_n = \lim_{N\to\infty} s_N
$$
provided the limit exists. Otherwise we say that
$
\sum_{n=1}^{\infty} a_n
$
\defn{diverges}.
\end{definition}
\begin{example}[Geometric series]\label{ex:geoser}
Consider the \defn{geometric series}
$\sum_{n=1}^{\infty} a r^{n1}$ for $a\neq 0$.
Then
$$
s_N = \sum_{n=1}^N a r^{n1} = \f{a(1r^N)}{1r}.
$$
To see this, multiply both sides by $1r$ and notice
that all the terms in the middle cancel out.
For what values of $r$ does
$\lim_{N\to\infty} \f{a(1r^N)}{1r}$ converge?
If $r<1$, then $\lim_{N\to\infty} r^N = 0$ and
$$\lim_{N\to\infty} \f{a(1r^N)}{1r} = \f{a}{1r}.$$
If $r> 1$, then $\lim_{N\to\infty} r^N$ diverges,
so $\sum_{n=1}^{\infty} a r^{n1}$ diverges.
If $r=\pm 1$, it's clear since $a\neq 0$ that the
series also diverges (since the partial sums are
$s_N = \pm Na$).
For example, if $a=1$ and $r=\f{1}{2}$, we get
$$
\sum_{n=1}^{\infty} a r^{n1} = \f{1}{1\f{1}{2}},
$$
as claimed earlier.
\end{example}
\section{The Integral and Comparison Tests}
\forclass{
Midterm Exam 2: Wednesday March 1 at 7pm in PCYNH 109 (up to {\em last} lecture)\\
Today: \S 7.37.4: Integral and comparison tests\\
Next: \S 7.6: Absolute convergence; ratio and root tests\\
Quiz 4 (last quiz): Friday March 10.\\
Final exam: Wednesday, March 22, 710pm in PCYNH 109.
}
What is $\sum_{n=1}^{\infty} \f{1}{n^2}$? What is $\sum_{n=1}^{\infty} \f{1}{n}$?
Recall that Section~\ref{sec:series} began by asking for the sum of
several series. We found the first two sums (which were geometric
series) by finding an exact formula for the sum $s_N$ of the first $N$
terms. The third series was
\begin{equation}\label{eqn:zeta2}
A = \sum_{n=1}^{\infty} \f{1}{n^2} = \f{1}{1} + \f{1}{4} + \f{1}{9} + \f{1}{16} + \f{1}{25} + \ldots.
\end{equation}
It is difficult to find a nice formula for the sum of the first
$n$ terms of this series (i.e., I don't know how to do it).
\begin{remark}
Since I'm a number theorist, I can't help but make some further
remarks about sums of the form (\ref{eqn:zeta2}).
In general, for any $s>1$ one can consider the sum
$$
\zeta(s) = \sum_{n=1}^{\infty} \f{1}{n^s}.
$$
The number $A$ that we are interested in above is thus $\zeta(2)$.
The function $\zeta(s)$ is called the \defn{Riemann zeta function}.
There is a natural (but complicated) way of extending $\zeta(s)$ to a
(differentiable) function on all complex numbers with a pole at $s=1$.
The \defn{Riemann Hypothesis} asserts that if $s$ is a complex number
and $\zeta(s)=0$ then either $s$ is an even negative integer or $s =
\f{1}{2} + bi$ for some real number $b$. This is probably {\em the}
most famous unsolved problems in mathematics (e.g., it's one of the
Clay Math Institute million dollar prize problems). Another famous
open problem is to show that $\zeta(3)$ is not a root of any
polynomial with integer coefficients (it is a theorem of
Ape\'ery that $zeta(3)$ is not a fraction).
The function $\zeta(s)$ is incredibly important in mathematics
because it governs the properties of prime numbers. The \defn{Euler
product} representation of $\zeta(s)$ gives a hint as to why this
is the case:
$$
\zeta(s) = \sum_{n=1}^{\infty} \f{1}{n^s} = \prod_{\text{primes $p$}}
\left( \f{1}{1  {p^{s}}} \right).
$$
To see that this product equality holds when $s$ is real
with $\Re(s)>1$, use Example~\ref{ex:geoser} with $r=p^{s}$ and $a=1$ from the
previous lecture. We have
$$
\f{1}{1p^{s}}
= 1 + p^{s} + p^{2s} + \cdots.
$$
Thus
\begin{align*}
\prod_{\text{primes $p$}}
\left( \f{1}{1  {p^{s}}} \right)
&= \prod_{\text{primes $p$}}
\left(1 + \f{1}{p^{s}} + \f{1}{p^{2s}} + \cdots\right)\\
&=
\left(1 + \f{1}{2^{s}} + \f{1}{2^{2s}} + \cdots\right)
\cdot \left(1 + \f{1}{3^{s}} + \f{1}{3^{2s}} + \cdots\right)
\cdots\\
&= \left(1 + \f{1}{2^s} + \f{1}{3^s} + \f{1}{4^s} + \cdots\right)\\
&= \sum_{n=1}^{\infty} \f{1}{n^s},
\end{align*}
where the last line uses the distributive law and that integers
factor uniquely as a product of primes.
Finally, Figure~\ref{fig:zetareal} is a graph $\zeta(x)$ as a
function of a real variable $x$, and Figure~\ref{fig:zetaabs} is a graph
of $\zeta(s)$ for complex $s$.
\fig{Riemann Zeta Function: $f(x)=\sum_{n=1}^{\infty} \f{1}{n^x}$\label{fig:zetareal}}{real_zeta}
\fig{Absolute Value of Riemann Zeta Function\label{fig:zetaabs}}{abs_zeta}
\end{remark}
This section is how to leverage what you've learned so far in this
book to say something about sums that are hard (or even ``impossibly
difficult'') to evaluate exactly. For example, notice (by considering
a graph of a step function) that if $f(x) = 1/x^2$, then for positive
integer $t$ we have
$$
\sum_{n=1}^t \f{1}{n^2} \leq \f{1}{1^2} + \int_{1}^{t} \f{1}{x^2} dx.
$$
Thus
\begin{align*}
\sum_{n=1}^\infty \f{1}{n^2}
&\leq \f{1}{1^2} + \int_{1}^{\infty} \f{1}{x^2} dx\\
&= 1 + \lim_{t\to\infty} \int_1^t \f{1}{x^2}dx \\
&= 1 + \lim_{t\to\infty} \left[\f{1}{x}\right]_1^t \\
&= 1 + \lim_{t\to\infty} \left[\f{1}{t} + \f{1}{1} \right] = 2
\end{align*}
We conclude that $\sum_{n=1}^{\infty}$ converges, since the sequence
of partial sums is getting bigger and bigger and is always $\leq 2$.
And of course we also know something about $\sum_{n=1}^\infty \f{1}{n^2} $
even though we do not know the exact value: $\sum_{n=1}^\infty \f{1}{n^2} \leq 2$.
Using a computer we find that
\begin{center}
\begin{tabular}{cl}\hline
$t$ & $\sum_{n=1}^t \f{1}{n^2}$\\\hline
$1$ & $1$\\\hline
$2$ & $\frac{5}{4}=1.25$\\\hline
$5$ & $\frac{5269}{3600} = 1.4636\overline{1}$\\\hline
$10$ & $\frac{1968329}{1270080} = 1.54976773117$\\\hline
$100$ & $1.63498390018$\\\hline
$1000$ & $1.64393456668$\\\hline
$10000$ & $1.64483407185$\\\hline
$100000$ & $1.6449240669$\\\hline
\end{tabular}
\end{center}
The table is consistent with the fact that
$\sum_{n=1}^{\infty} \f{1}{n^2}$ converges to
a number $\leq 2$. In fact Euler was the first
to compute $\sum_{n=1}^{\infty}$ exactly; he
found that the exact value is
$$
\f{\pi^2}{6} = 1.644934066848226436472415166646025189218949901206798437735557\ldots
$$
There are many proofs of this fact, but they don't
belong in this book; you can find them on the internet,
and are likely to see one if you take more math classes.
We next consider the \defn{harmonic series}
\begin{equation}\label{eqn:harmonic}
\sum_{n=1}^{\infty} \f{1}{n}.
\end{equation}
Does it converge? Again by inspecting a graph and viewing
an infinite sum as the area under a step function, we have
\begin{align*}
\sum_{n=1}^{\infty} \f{1}{n}
&\geq \int_{1}^{\infty} \f{1}{x} dx\\
&=\lim_{t\to\infty} \left[\ln(x)\right]_1^{t} \\
&= \lim_{t\to\infty} \ln(t)  0 = +\infty.
\end{align*}
Thus the infinite sum (\ref{eqn:harmonic}) must also diverge.
We formalize the above two examples as a general test for
convergence or divergence of an infinite sum.
\begin{theorem}[Integral Test and Bound]\label{thm:inttest}
Suppose $f(x)$ is a continuous, positive, decreasing function
on $[1,\infty)$ and let $a_n = f(n)$ for integers $n\geq 1$.
Then the series $\sum_{n=1}^{\infty} a_n$ converges if and only
if the integral $\int_{1}^{\infty} f(x) dx$ converges.
More generally, for any positive integer $k$,
\begin{equation}\label{eqn:inttest}
\int_{k}^{\infty} f(x) dx \,\,\,\,\leq\,\,\,\,
\sum_{n=k}^{\infty} a_n \,\,\,\,\leq\,\,\,\,
a_k + \int_{k}^{\infty} f(x) dx.
\end{equation}
\end{theorem}
The proposition means that you can determine convergence of an
infinite series by determining convergence of a corresponding
integral. Thus you can apply the powerful tools you know
already for integrals to understanding infinite sums.
Also, you can use integration along with computation of
the first few terms of a series to approximate a series
very precisely.
\begin{remark}
Sometimes the first few terms of a series are ``funny''
or the series doesn't even start at $n=1$, e.g.,
$$\sum_{n=4}^{\infty}\f{1}{(n3)^3}.$$
In this case use (\ref{eqn:inttest}) with any specific $k>1$.
\end{remark}
\begin{proposition}[Comparison Test]
Suppose $\sum a_n$ and $\sum b_n$ are two series with positive
terms. If $\sum b_n$ converges and $a_n\leq b_n$ for all $n$. then
$\sum a_n$ converges. Likewise, if $\sum b_n$ diverges and $a_n\geq
b_n$ for all $n$. then $\sum a_n$ must also diverge.
\end{proposition}
\begin{example}
{\em Does $\sum_{n=1}^{\infty} \f{1}{\sqrt{n}}$ converge?}
No. We have
$$
\sum_{n=1}^{\infty} \f{1}{\sqrt{n}}
\geq \int_{1}^{\infty} \f{1}{\sqrt{x}}dx\\
= \lim_{t\to\infty} (2\sqrt{t}  2\sqrt{1}) = +\infty
$$
\end{example}
\begin{example}
{\em Does $\sum_{n=1}^{\infty} \f{1}{n^2+1}$ converge?}
Let's apply the comparison test: we have $\f{1}{n^2+1} < \f{1}{n^2}$ for
every $n$, so
$$
\sum_{n=1}^{\infty} \f{1}{n^2+1} < \sum_{n=1}^{\infty} \f{1}{n^2}.
$$
Alternatively, we can use the integral test, which also gives
as a bonus an upper and lower bound on the sum.
Let $f(x) = 1/(1+x^2)$. We have
\begin{align*}
\int_{1}^{\infty} \f{1}{1+x^2} dx
&= \lim_{t\to\infty} \int_{1}^t \f{1}{1+x^2} dx\\
&= \lim_{t\to\infty} \tan^{1}(t)  \f{\pi}{4} = \f{\pi}{2}  \f{\pi}{4}
= \f{\pi}{4}\\
\end{align*}
Thus the sum converges. Moreover, taking $k=1$ in Theorem~\ref{thm:inttest}
we have
$$
\f{\pi}{4} \leq \sum_{n=1}^{\infty} \f{1}{n^2+1} \leq \f{1}{2} + \f{\pi}{4}.
$$
the actual sum is $1.07\ldots$, which is much different than
$\sum \f{1}{n^2} = 1.64\ldots$.
\end{example}
We could prove the following proposition using methods similar to those
illustrated in the examples above. Note that this is nicely illustrated
in Figure~\ref{fig:zetareal}.
\begin{proposition}
The series $\sum_{n=1}^{\infty} \f{1}{n^p}$ is convergent if $p>1$
and divergent if $p\leq 1$.
\end{proposition}
\subsection{Estimating the Sum of a Series}
Suppose $\sum a_n$ is a convergent sequence of positive integers. Let
$$
R_m = \sum_{n=1}^{\infty} a_n  \sum_{n=1}^{m} a_n = \sum_{n=m+1}^{\infty} a_m
$$
which is the error if you approximate $\sum a_n$ using the first $n$
terms.
From Theorem~\ref{thm:inttest} we get the following.
\begin{proposition}[Remainder Bound]
Suppose $f$ is a continuous, positive, decreasing function on
$[m,\infty)$ and $\sum a_n$ is convergent. Then
$$
\int_{m+1}^{\infty} f(x) dx \leq R_m
\leq \int_{m}^{\infty} f(x) dx.
$$
\end{proposition}
\begin{proof}
In Theorem~\ref{thm:inttest} set $k=m+1$.
That gives
$$
\int_{m+1}^{\infty} f(x) dx \,\,\,\,\leq\,\,\,\,
\sum_{n=m+1}^{\infty} a_n \,\,\,\,\leq\,\,\,\,
a_{m+1} + \int_{m+1}^{\infty} f(x) dx.
$$
But
$$
a_{m+1} + \int_{m+1}^{\infty} f(x) dx
\leq
\int_{m}^{\infty} f(x) dx
$$
since $f$ is decreasing and $f(m+1)=a_{m+1}$.
\end{proof}
\begin{example}
Estimate $\zeta(3) = \sum_{n=1}^{\infty} \f{1}{n^3}$
using the first $10$ terms of the series.
We have
$$
\sum_{n=1}^{10} = \frac{19164113947}{16003008000} = 1.197531985674193\ldots
$$
The proposition above with $m=10$ tells us that
$$
0.00413223140495867\ldots
=
\int_{11}^{\infty} \f{1}{x^3}dx \leq \zeta(3)  \sum_{n=1}^{10}
\leq \int_{10}^{\infty} \f{1}{x^3}dx = \f{1}{2\cdot 10^2} = \f{1}{200} = 0.005.
$$
In fact,
$$
\zeta(3) = 1.202056903159594285399738161511449990\ldots
$$
and we hvae
$$
\zeta(3)  \sum_{n=1}^{10} = 0.0045249174854010\ldots,
$$
so the integral error bound was really good in this case.
\end{example}
\begin{example}
Determine if $\sum_{n=1}^{\infty} \f{2006}{117n^2 + 41n + 3}$
convergers or diverges. Answer: It converges, since
$$\f{2006}{117n^2 + 41n + 3}
\leq \f{2006}{117n^2} = \f{2006}{117} \cdot \f{1}{n^2},
$$
and $\sum \f{1}{n^2}$ converges.
\end{example}
\section{Tests for Convergence}
\forclass{
Final exam: Wednesday, March 22, 710pm in PCYNH 109.\\
Quiz 4: Next Friday\\
Today: 11.6: Ratio and Root tests\\
Next: 11.8 Power Series\\
11.9 Functions defined by power series
}
\subsection{The Comparison Test}
\begin{theorem}[The Comparison Test]\label{thm:compare}
Suppose $\sum a_n$ and $\sum b_n$ are series with all $a_n$
and $b_n$ positive and $a_n \leq b_n$ for each $n$.
\begin{enumerate}
\item If $\sum b_n$ converges, then so does $\sum a_n$.
\item If $\sum a_n$ diverges, then so does $\sum b_n$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof Sketch]
The condition of the theorem implies that for
any $k$,
$$
\sum_{n=1}^{k} a_n \leq \sum_{n=1}^k b_n,
$$
from which each claim follows.
\end{proof}
\begin{example}
Consider the series $\sum_{n=1}^{\infty} \f{7}{3n^2 + 2n}$.
For each $n$ we have
$$
\f{7}{3n^2 + 2n} \leq \f{7}{3} \cdot \f{1}{n^2}.
$$
Since $\sum_{n=1}^{\oo} \f{1}{n^2}$ converges, Theorem~\ref{thm:compare}
implies that $\sum_{n=1}^{\infty} \f{7}{3n^2 + 2n}$ also converges.
\end{example}
\begin{example}
Consider the series $\sum_{n=1}^{\infty} \f{\ln(n)}{n}$.
It diverges since for each $n\geq 3$ we have
$$
\f{\ln(n)}{n} \geq \f{1}{n},
$$
and $\sum_{n=3}^{\infty} \f{1}{n}$
diverges.
\end{example}
\subsection{Absolute and Conditional Convergence}
\begin{definition}[Converges Absolutely]
We say that $\sum_{n=1}^{\infty} a_n$ \defn{converges absolutely}
if $\sum_{n=1}^{\oo} a_n$ converges.
\end{definition}
For example,
$$
\sum_{n=1}^{\infty} (1)^n \f{1}{n}
$$
converges, but does {\em not} converge absolutely (it converges
``conditionally'', though we will not explain why in this book).
\subsection{The Ratio Test}
Recall that $\sum_{n=1}^{\infty} a_n$ is a geometric series
if and only if $a_n = a r^{n1}$ for some fixed~$a$ and~$r$.
Here we call~$r$ the \defn{common ratio}. Notice
that the ratio of any two successive terms is $r$:
$$
\f{a_{n+1}}{a_n} = \f{a r^{n}}{a r^{n1}} = r.
$$
Moreover, we have
$
\sum_{n=1}^{\infty} a r^{n1} $
converges (to $\f{a}{1r}$) if and only if $r<1$ (and, of course
it diverges if $r\geq 1$).
\begin{example}
For example, $\sum_{n=1}^{\infty} 3\left(\f{2}{3}\right)^{n1}$
converges to $\f{3}{1\f{2}{3}}=9$. However,
$\sum_{n=1}^{\infty} 3\left(\f{3}{2}\right)^{n1}$ diverges.
\end{example}
\begin{theorem}[Ratio Test]\label{thm:ratiotest}
Consider a sum $\sum_{n=1}^{\infty} a_n$. Then
\begin{enumerate}
\item If $\lim_{n\to\infty} \left \f{a_{n+1}}{a_n}\right = L < 1$
then $\sum_{n=1}^{\oo} a_n$ is absolutely convergent.
\item If $\lim_{n\to\infty} \left \f{a_{n+1}}{a_n}\right = L > 1$
then $\sum_{n=1}^{\oo} a_n$ diverges.
\item If $\lim_{n\to\infty} \left \f{a_{n+1}}{a_n}\right = L = 1$
then we may conclude nothing from this!
\end{enumerate}
\end{theorem}
\begin{proof}
We will only prove 1.
Assume that we have $\lim_{n\to\infty} \left
\f{a_{n+1}}{a_n}\right = L < 1$. Let $r=\f{L+1}{2}$, and notice
that $L 0$).
Since $\lim_{n\to\infty} \left \f{a_{n+1}}{a_n}\right = L$, there is
an~$N$ such that for all $n>N$ we have
$$
\left\f{a_{n+1}}{a_n}\right < r,
\quad \text{ so }\quad a_{n+1} < a_n \cdot r .
$$
Then we have
$$
\sum_{n=N+1}^{\infty} a_n < a_{N+1} \cdot \sum_{n=0}^{\oo} r^n.
$$
Here the common ratio for the second one is $r<1$, hence
thus the righthand series converges, so the lefthand
series converges.
\end{proof}
\begin{example}
Consider $\ds \sum_{n=1}^{\oo} \f{(10)^n}{n!}$.
The ratio of successive terms is
$$
\left \f{\ds \f{(10)^{n+1}}{(n+1)!}}{\ds \f{(10)^n}{n!}} \right 
= \f{10^{n+1}}{(n+1)n!} \cdot \f{n!}{10^n} = \f{10}{n+1} \to 0 < 1.
$$
Thus this series converges {\em absolutely}.
Note, the minus sign is missing above since in the ratio test
we take the limit of the absolute values.
\end{example}
\begin{example}\label{ex:27n}
Consider $\ds \sum_{n=1}^{\infty} \f{n^n}{3^{1+3n}}$.
We have
$$
\left\ds\f{\ds \f{(n+1)^{n+1}}{3\cdot(27)^{n+1}}}
{ \ds\f{\ds n^n}{\ds 3^{1+3n}} }\right
= \f{(n+1)(n+1)^n}{27 \cdot 27^n} \cdot \f{27^n}{n^n}
= \f{n+1}{27} \cdot \left(\f{n+1}{n}\right)^n
\to +\infty
$$
Thus our series diverges. (Note here that we use that
$\left(\f{n+1}{n}\right)^n\to e$.)
\end{example}
\begin{example}
Let's apply the ratio test to $\sum_{n=1}^{\oo} \f{1}{n}$.
We have
$$
\lim_{n\to\infty} \left\f{\ds \f{1}{n+1}}{\ds \f{1}{n}}\right
= \f{1}{n+1} \cdot \f{n}{1} = \f{n}{n+1} \to 1.
$$
This tells us nothing.
If this happens... do something else! E.g., in this case, use the
integral test.
\end{example}
\subsection{The Root Test}
Since $e$ and $\ln$ are inverses, we have
$x = e^{\ln(x)}$. This implies the very useful fact
that
$$
x^a = e^{\ln(x^a)} = e^{a\ln(x)}.
$$
As a sample application, notice that for any nonzero $c$,
$$
\lim_{n\to\infty} c^{\f{1}{n}} =
\lim_{n\to\infty} e^{ \f{1}{n} \log(c)} = e^0 = 1.
$$
Similarly,
$$
\lim_{n\to\infty} n^{\f{1}{n}} =
\lim_{n\to\infty} e^{\f{1}{n} \log(n)} = e^0 = 1,
$$
where we've used that $\lim_{n\to\infty} \f{\log(n)}{n} = 0$,
which we could prove using L'Hopital's rule.
\begin{theorem}[Root Test]
Consider the sum $\sum_{n=1}^{\oo} a_n$.
\begin{enumerate}
\item If $\lim_{n\to\infty} a_n^{\f{1}{n}} = L < 1$, then
$\sum_{n=1}^{\infty} a_n$ convergest absolutely.
\item If $\lim_{n\to\infty} a_n^{\f{1}{n}} = L > 1$, then
$\sum_{n=1}^{\infty} a_n$ diverges.
\item If $L=1$, then we may conclude nothing from this!
\end{enumerate}
\end{theorem}
\begin{proof}
We apply the comparison test (Theorem~\ref{thm:compare}).
First suppose $\lim_{n\to\infty} a_n^{\f{1}{n}} = L < 1$.
Then there is a $N$ such that for $n\geq N$ we have
$a_n^{\f{1}{n}} < k < 1$. Thus for such $n$ we
have $a_n < k^n < 1$. The geometric series
$\sum_{i=N}^{\infty} k^i$ converges, so $\sum_{i=N}^{\oo} a_n$
also does, by Theorem~\ref{thm:compare}.
If $a_n^{\f{1}{n}} > 1$ for $n\geq N$, then we see
that $\sum_{i=N}^{\oo} a_n$ diverges by comparing with
$\sum_{i=N}^{\infty} 1$.
\end{proof}
%The proof is similar is spirt to that of
%Theorem~\ref{thm:ratiotest} and will be omitted.
\begin{example}
Let's apply the root test to
$$
\sum_{n=1}^{\infty} a r^{n1} = \f{a}{r} \sum_{n=1}^{\infty} r^n.
$$
We have
$$
\lim_{n\to\infty} r^n^{\f{1}{n}} = r.
$$
Thus the root test tells us exactly what we already know about
convergence of the geometry series (except when $r=1$).
\end{example}
\begin{example}
The sum $\sum_{n=1}^{\infty} \left(\f{n^2+1}{2n^2+1}\right)^n$
is a candidate for the root test.
We have
$$
\lim_{n\to \infty} \left\left(\f{n^2+1}{2n^2+1}\right)^n\right^{\f{1}{n}}
= \lim_{n\to \infty} \f{n^2+1}{2n^2+1}
= \lim_{n\to \infty} \f{1+\f{1}{n^2}}{2+\f{1}{n^2}} = \f{1}{2}.
$$
Thus the series converges.
\end{example}
\begin{example}
The sum $\sum_{n=1}^{\infty} \left(\f{2n^2+1}{n^2+1}\right)^n$
is a candidate for the root test.
We have
$$
\lim_{n\to \infty} \left\left(\f{2n^2+1}{n^2+1}\right)^n\right^{\f{1}{n}}
= \lim_{n\to \infty} \f{2n^2+1}{n^2+1}
= \lim_{n\to \infty} \f{2+\f{1}{n^2}}{1+\f{1}{n^2}} = 2,
$$
hence the series diverges!
\end{example}
\begin{example}
Consider $\sum_{n=1}^{\infty} \f{1}{n}$. We have
$$
\lim_{n\to\oo} \left \f{1}{n} \right^{\f{1}{n}} = 1,
$$
so we conclude {\em nothing!}
\end{example}
\begin{example}
Consider $\sum_{n=1}^{\infty} \f{n^n}{3\cdot (27^n)}$.
To apply the root test, we compute
$$
\lim_{n\to\infty} \left \f{n^n}{3\cdot (27^n)} \right^{\f{1}{n}}
= \lim_{n\to\infty} \left(\f{1}{3}\right)^{\f{1}{n}} \cdot \f{n}{27}
= +\infty.
$$
Again, the limit diverges, as in Example~\ref{ex:27n}.
\end{example}
\newpage
\section{Power Series}
\forclass{
Final exam: Wednesday, March 22, 710pm in PCYNH 109. Bring ID!\\
Quiz 4: This Friday\\
Today: 11.8 Power Series, 11.9 Functions defined by power series\\
Next: 11.10 Taylor and Maclaurin series
}
Recall that a \defn{polynomial} is a function of the form
$$
f(x) = c_0 + c_1 x + c_2 x^2 + \cdots + c_k x^k.
$$
\begin{center}
\shadowbox{\Large Polynomials are easy!!!}
\end{center}
They are easy to integrate, differentiate, etc.:
\begin{align*}
\f{d}{dx} \left(\sum_{n=0}^k c_n x^n\right) &= \sum_{n=1}^k n c_n x^{n1}\\
\int \sum_{n=0}^k c_n x^n dx &= C + \sum_{n=0}^k c_n \f{x^{n+1}}{n+1}.
\end{align*}
\begin{definition}[Power Series]
A \defn{power series} is a series
of the form
$$
f(x) = \sum_{n=0}^{\infty} c_n x^n = c_0 + c_1 x + c_2 x^2 + \cdots,
$$
where $x$ is a variable and the $c_n$ are coefficients.
\end{definition}
A power series is a function of $x$ for those $x$ for
which it converges.
\begin{example}
Consider
$$
f(x) = \sum_{n=0}^{\infty} x^n = 1 + x + x^2 + \cdots.
$$
When $x < 1$, i.e., $1 < x < 1$, we have
$$
f(x) = \f{1}{1x}.
$$
\end{example}
But what good could this possibly be? Why is writing the simple
function $\f{1}{1x}$ as the complicated series $\sum_{n=0}^{\infty}
x^n$ of any value?
\begin{enumerate}
\item Power series are {\em relatively easy to work with.}
They are ``almost'' polynomials. E.g.,
$$
\f{d}{dx} \sum_{n=0}^{\oo} x^n = \sum_{n=1}^{\oo} nx^{n1} =
1 + 2x + 3x^2 + \cdots = \sum_{m=0}^{\oo} (m+1)x^m,
$$
where in the last step we ``reindexed'' the series.
Power series are only ``almost'' polynomials, since they
don't stop; they can go on forever. More precisely,
a power series is a limit of polynomials. But in many
cases we can treat them like a polynomial.
On the other hand, notice that
$$
\f{d}{dx}\left( \f{1}{1x} \right)= \f{1}{(1x)^2} = \sum_{m=0}^{\infty} (m+1)x^m.
$$
\item For many functions, a power series is the {\em best
explicit representation available}.
\begin{example}
Consider $J_0(x)$, the Bessel function of order $0$. It
arises as a solution to the differential equation
$x^2 y'' + x y' + x^2 y = 0$, and has the following power
series expansion:
\begin{align*}
J_0(x) &= \sum_{n=1}^{\oo} \f{(1)^n x^{2n}}{2^{2n}(n!)^2}\\
&=1  \frac{1}{4}x^{2} + \frac{1}{64}x^{4}  \frac{1}{2304}x^{6} + \frac{1}{147456}x^{8}
 \frac{1}{14745600}x^{10} + \cdots.
\end{align*}
This series is nice since it converges for all $x$ (one can
prove this using the ratio test).
It is also one of the most explicit forms of $J_0(x)$.
%sage: j = sum([(1)^n * x^(2*n)/(2^(2*n)*factorial(n)^2) for n in range(20)]) + O(x^21)
%sage: x^2 * j.derivative().derivative() + x * j.derivative() + x^2 * j
% 0
\end{example}
\end{enumerate}
\subsection{Shift the Origin}
It is often useful to shift the origin of a power series, i.e., consider
a power series expanded about a different point.
\begin{definition}
The series
$$
\sum_{n=0}^{\oo} c_n (xa)^n = c_0 + c_1(xa) + c_2(xa)^2 + \cdots
$$
is called a \defn{power series centered at} $x=a$, or ``a power series about $x=a$''.
\end{definition}
\begin{example}
Consider
\begin{align*}
\sum_{n=0}^{\oo} (x3)^n &= 1 + (x3) + (x3)^2 + \cdots\\
&= \f{1}{1  (x3)}\qquad\qquad\text{equality valid when $x3<1$}\\
&= \f{1}{4x}
\end{align*}
Here conceptually we are treating $3$ like we treated $0$ before.
Power series can be written in different ways, which have different
advantages and disadvantages. For example,
\begin{align*}
\f{1}{4x} &= \f{1}{4} \cdot \f{1}{1x/4}\\
& = \f{1}{4} \cdot \sum_{n=0}^{\oo} \left( \f{x}{4}\right)^n \qquad\text{converges for all $x < 4$}.
\end{align*}
Notice that the second series converges for $x<4$, whereas the first converges
only for $x3<1$, which isn't nearly as good.
\end{example}
\subsection{Convergence of Power Series}
\begin{theorem}
Given a power series $\sum_{n=0}^{\oo} c_n(xa)^n$,
there are {\em exactly} three possibilities:
\begin{enumerate}
\item The series conveges only when $x=a$.
\item The series conveges for all $x$.
\item There is an $R>0$ (called the ``radius of convergence'')
such that $\sum_{n=0}^{\oo} c_n(xa)^n$ converges for
$xaR$.
\end{enumerate}
\end{theorem}
\begin{example}
For the power series $\sum_{n=0}^{\infty} x^n$, the radius $R$
of convergence is $1$.
\end{example}
\begin{definition}[Radius of Convergence]
As mentioned in the theorem, $R$ is called the \defn{radius of convergence}.
\end{definition}
If the series converges only at $x=a$, we say $R=0$, and if the series
converges everywhere we say that $R=\oo$.
The \defn{interval of convergence} is the set of $x$ for which the
series converges. It will be one of the following:
$$
(aR, a+R), \qquad [aR, a+R), \qquad (aR, a+R], \qquad [aR, a+R]
$$
The point being that the statement of the theorem only asserts
something about convergence of the series on the open interval
$(aR, a+R)$. What happens at the endpoints of the interval is
not specified by the theorem; you can only figure it out by
looking explicitly at a given series.
\begin{theorem}
If $\sum_{n=0}^{\infty} c_n (xa)^n$ has radius of convergence $R>0$,
then $f(x) = \sum_{n=0}^{\infty} c_n (xa)^n$ is differentiable
on $(aR, a+R)$, and
\begin{enumerate}
\item $\ds f'(x) = \sum_{n=1}^{\infty} n \cdot c_n (xa)^{n1}$
\item $\ds \int f(x) dx = C + \sum_{n=0}^{\infty} \f{c_n}{n+1}(xa)^{n+1}$,
\end{enumerate}
and both the derivative and integral have the same radius of convergence as $f$.
\end{theorem}
\begin{example}
Find a power series representation for $f(x) = \tan^{1}(x)$.
Notice that
$$
f'(x) = \f{1}{1+x^2} = \f{1}{1 (x^2)}
= \sum_{n=0}^{\oo} (1)^n x^{2n},
$$
which has radius of convergence $R=1$, since the above series
is valid when $x^2<1$, i.e., $x<1$.
Next integrating, we find that
$$
f(x) = c + \sum_{n=0}^{\oo} (1)^n \f{x^{2n+1}}{2n+1},
$$
for some constant $c$.
To find the constant, compute $c = f(0) = \tan^{1}(0) = 0$.
We conclude that
$$
\tan^{1}(x) = \sum_{n=0}^{\oo} (1)^n \f{x^{2n+1}}{2n+1}.
$$
\end{example}
\begin{example}
We will see later that the function $f(x) = e^{x^2}$ has power series
$$
e^{x^2} = 1  x^{2} + \frac{1}{2}x^{4}  \frac{1}{6}x^{6} + \cdots.
$$
Hence
$$
\int e^{x^2} dx = c + x  \frac{1}{3}x^{3} + \frac{1}{10}x^{5}  \frac{1}{42}x^{7} + \cdots.
$$
This despite the fact that the antiderivative of $e^{x^2}$ is not an
elementary function (see Example~\ref{ex:noant}).
\end{example}
\section{Taylor Series}
\forclass{
Final exam: Wednesday, March 22, 710pm in PCYNH 109. Bring ID!\\
Last Quiz 4: This Friday\\
Next: 11.10 Taylor and Maclaurin series\\
Next: 11.12 Applications of Taylor Polynomials\\
Midterm Letters:\\
A, 3238\\
B, 2631\\
C, 2025\\
D, 1419\\
Mean: 23.4, Standard Deviation: 7.8, High: 38, Low: 6.
}
\begin{example}\label{ex:findpoly}
Suppose we have a degree$3$ (cubic) polynomial $p$ and
we know that
$p(0) = 4$, $p'(0)=3$, $p''(0)=4$, and $p'''(0)=6$.
Can we determine $p$? Answer: Yes!
We have
\begin{align*}
p(x) &= a + bx + cx^2 + dx^3\\
p'(x) &= b + 2cx + 3dx^2\\
p''(x) &= 2c + 6dx\\
p'''(x) &= 6d
\end{align*}
From what we mentioned above, we have:
\begin{align*}
a &= p(0) = 4\\
b &= p'(0) = 3\\
c &= \f{p''(0)}{2} = 2\\
d &= \f{p'''(0)}{6} = 1
\end{align*}
Thus
$$
p(x) = 4 + 3x + 2x^2 + x^3.
$$
\end{example}
Amazingly, we can use the idea of Example~\ref{ex:findpoly} to compute power
series expansions of functions. E.g., we will show below that
$$
e^x = \sum_{n=0}^{\infty} \f{x^n}{n!}.
$$
\begin{center}
\shadowbox{\Large Convergent series are determined by the values
of their derivatives.}
\end{center}
Consider a general power series
$$
f(x) = \sum_{n=0}^{\oo} c_n (xa)^n = c_0 + c_1 (xa) + c_2 (xa)^2 + \cdots
$$
We have
\begin{align*}
c_0 &= f(a)\\
c_1 &= f'(a) \\
c_2 &= \f{f''(a)}{2}\\
\cdots \\
c_n &= \f{f^{(n)}(a)}{n!},
\end{align*}
where for the last equality we use that
$$
f^{(n)}(x) = n! c_n + (xa)(\cdots + \cdots)
$$
\begin{remark}
The definition of $0!$ is $1$ (it's the empty product).
The empty sum is $0$ and the empty product is $1$.
\end{remark}
\begin{theorem}[Taylor Series\index{Taylor Series}]
If~$f(x)$ is a function that equals a power series centered
about~$a$, then that power series expansion is
\begin{align*}
f(x) &= \sum_{n=0}^{\oo} \f{f^{(n)}(a)}{n!} (xa)^n \\
&= f(a) + f'(a)(xa) + \f{f''(a)}{2} (xa)^2 + \cdots
\end{align*}
\end{theorem}
\begin{remark}
WARNING: There are functions that have all derivatives defined, but do
not equal their Taylor expansion. E.g., $f(x) = e^{1/x^2}$ for
$x\neq 0$ and $f(0)=0$. It's Taylor expansion is the $0$ series (which
converges everywhere), but it is not the $0$ function.
\end{remark}
\begin{definition}[Maclaurin Series]
A \defn{Maclaurin series} is just a Taylor series with $a=0$.
I will not use the term ``Maclaurin series'' ever again (it's common
in textbooks).
\end{definition}
\begin{example}
Find the Taylor series for $f(x)=e^x$ about $a=0$.
We have $f^{(n)}(x) = e^x$. Thus $f^{(n)}(0) = 1$
for all $n$. Hence
$$
e^x = \sum_{n=0}^{\oo} \f{1}{n!} x^n = 1 + x + \f{x^2}{2} + \f{x^3}{6} + \cdots
$$
What is the radius of convergence?
Use the ratio test:
\begin{align*}
\lim_{n\to\infty} \left \f{ \f{1}{(n+1)!} x^{n+1}}{ \f{1}{n!} x^n} \right
&= \lim_{n\to\infty} \f{n!}{(n+1)!}x \\
&= \lim_{n\to\infty} \f{x}{n+1} = 0, \qquad\text{for any fixed $x$}.
\end{align*}
Thus the radius of convergence is $\oo$.
\end{example}
\begin{example}\label{ex:taysin}
Find the Taylor series of $f(x)=\sin(x)$ about
$x=\f{\pi}{2}$.\footnote{Evidently this expansion was first found in
India by Madhava of Sangamagrama (13501425).} We have
$$
f(x) = \sum_{n=0}^{\oo} \f{f^{(n)}\left(\f{\pi}{2}\right)}{n!}
\left(x  \f{\pi}{2}\right)^n.
$$
To do this we have to {\em puzzle out a pattern}:
\begin{align*}
f(x) &= \sin(x)\\
f'(x) &= \cos(x)\\
f''(x) &= \sin(x)\\
f'''(x) &= \cos(x)\\
f^{(4)}(x) &= \sin(x)
\end{align*}
First notice how the signs behave.
For $n=2m$ even,
$$
f^{(n)}(x) = f^{(2m)}(x) = (1)^{n/2} \sin(x)
$$
and for $n=2m+1$ odd,
$$
f^{(n)}(x) = f^{(2m+1)}(x) = (1)^{m} \cos(x) = (1)^{(n1)/2} \cos(x)
$$
For $n=2m$ even we have
$$
f^{(n)}(\pi/2) = f^{(2m)}\left(\f{\pi}{2}\right) = (1)^m.
$$
and for $n=2m+1$ odd we have
$$
f^{(n)}(\pi/2) = f^{(2m+1)}\left(\f{\pi}{2}\right) = (1)^m\cos(\pi/2) = 0.
$$
Finally,
\begin{align*}
\sin(x) &= \sum_{n=0}^{\infty} \f{f^{(n)}(\pi/2)}{n!}(x\pi/2)^n\\
&= \sum_{m=0}^{\infty} \f{(1)^{m}}{(2m)!}
\left(x  \f{\pi}{2}\right)^{2m}.\\
\end{align*}
Next we use the ratio test to compute the radius of convergence.
We have
\begin{align*}
\lim_{m\to\infty}
\f{\ds \left \f{(1)^{m+1}}{(2(m+1))!}
\left(x  \f{\pi}{2}\right)^{2(m+1)} \right}
{\ds \left
\f{(1)^{m}}{(2m)!}
\left(x  \f{\pi}{2}\right)^{2m}
\right}
&=
\lim_{m\to\infty}
\f{(2m)!}{(2m+2)!} \left(x\f{\pi}{2}\right)^2\\
&= \lim_{m\to\infty}
\f{\left(x\f{\pi}{2}\right)^2}{(2m+2)(2m+1)}
\end{align*}
which converges for each $x$. Hence $R=\infty$.
\end{example}
\begin{example}
Find the Taylor series for $\cos(x)$ about $a=0$.
We have $\cos(x) = \sin\left(x+\f{\pi}{2}\right)$.
Thus from Example~\ref{ex:taysin} (with infinite radius
of convergence) and that the Taylor
expansion is unique, we have
\begin{align*}
\cos(x) &= \sin\left(x+\f{\pi}{2}\right)\\
&= \sum_{n=0}^{\infty} \f{(1)^{n}}{(2n)!}
\left(x + \f{\pi}{2}  \f{\pi}{2}\right)^{2n}\\
&= \sum_{n=0}^{\infty} \f{(1)^{n}}{(2n)!}
x^{2n}
\end{align*}
\end{example}
\newpage
\section{Applications of Taylor Series}
\forclass{
Final exam: Wednesday, March 22, 710pm in PCYNH 109. Bring ID!\\
Last Quiz 4: Today (last one)\\
Today: 11.12 Applications of Taylor Polynomials\\
Next; Differential Equations
}
This section is about an example in the theory of relativity. Let $m$
be the (relativistic) mass of an object and $m_0$ be the mass at rest
(rest mass) of the object. Let $v$ be the velocity of the object relative to the
observer, and let $c$ be the speed of light. These three quantities
are related as follows:
$$
m = \f{m_0}{\ds\sqrt{1\f{v^2}{c^2}}} \qquad\text{(relativistic) mass}
$$
The total energy of the object is $mc^2$:
\begin{center}
\shadowbox{\LARGE $
E = mc^2.
$}
\end{center}
In relativity we define the kinetic energy to be
\begin{equation}\label{eqn:kinetic}
K = mc^2  m_0 c^2.
\end{equation}
{\em What?} Isn't the kinetic energy $\f{1}{2} m_0 v^2$?
Notice that
$$
mc^2  m_0 c^2 = \f{m_0c^2}{\sqrt{1\f{v^2}{c^2}}}  m_0 c^2
= m_0 c^2 \left[ \left(1  \f{v^2}{c^2}\right)^{\f{1}{2}}  1\right].
$$
Let
$$
f\left(x\right) =
\left(1  x\right)^{\f{1}{2}}  1
$$
Let's compute the Taylor series of $f$. We have
\begin{align*}
f(x) &= (1  x)^{\f{1}{2}}  1\\
f'(x) &= \f{1}{2}(1  x)^{\f{3}{2}}\\
f''(x) &= \f{1}{2}\cdot \f{3}{2} (1  x)^{\f{5}{2}}\\
f^{(n)}(x) &= \f{1\cdot 3 \cdot 5 \cdots (2n1)}{2^n} (1x)^{\f{2n+1}{2}}.
\end{align*}
Thus
$$
f^{(n)}(0) = \f{1\cdot 3 \cdot 5 \cdots (2n1)}{2^n}.
$$
Hence
\begin{align*}
f(x) &= \sum_{n=1}^{\infty} \f{f^{(n)}(0)}{n!} x^n\\
& = \sum_{n=1}^{\infty} \f{1\cdot 3 \cdot 5 \cdots (2n1)}{2^n \cdot n!} x^n\\
&= \frac{1}{2}x + \frac{3}{8}x^2 + \frac{5}{16}x^3 + \frac{35}{128}x^4 +\cdots
\end{align*}
We now use this to analyze the kinetic energy (\ref{eqn:kinetic}):
\begin{align*}
mc^2  m_0 c^2 &= m_0 c^2 \cdot f\left(\f{v^2}{c^2}\right)\\
&= m_0 c^2 \cdot \left(\frac{1}{2} \cdot \f{v^2}{c^2} + \f{3}{8}\cdot \f{v^2}{c^2} + \cdots\right) \\
&= \f{1}{2} m_0 v^2 + m_0 c^2 \cdot \left(\f{3}{8} \f{v^2}{c^2} + \cdots\right) \\
\end{align*}
And we can ignore the higher order terms if $\f{v^2}{c^2}$ is small.
But how small is ``small'' enough, given that $\f{v^2}{c^2}$ appears
in an infinite sum?
\subsection{Estimation of Taylor Series}
Suppose
$$
f(x) = \sum_{n=0}^{\oo} \f{f^{(n)}(a)}{n!}(xa)^n.
$$
Write
$$
R_N(x) := f(x)  \sum_{n=0}^{N} \f{f^{(n)}(a)}{n!} (xa)^n
$$
We call
$$
T_N(x) = \sum_{n=0}^{N} \f{f^{(n)}(a)}{n!} (xa)^n
$$
the $N$th degree \defn{Taylor polynomial}.
Notice that
$$
\lim_{N\to\infty} T_N(x) = f(x)
$$
if and only if
$$
\lim_{N\to\infty} R_N(x) = 0.
$$
We would like to estimate $f(x)$ with $T_N(x)$. We need
an estimate for $R_N(x)$.
\begin{theorem}[Taylor's theorem]\label{thm:taylor}
If $f^{(N+1)}(x) \leq M$ for $xa\leq d$, then
$$
R_N(x) \leq \f{M}{(N+1)!}  xa^{N+1}
\qquad\text{for $xa\leq d$.}
$$
\end{theorem}
For example, if $N=0$, this says that
$$
R(x) = f(x)f(a) \leq M xa,
$$
i.e.,
$$
\left\f{f(x)  f(a)}{xa}\right \leq M,
$$
which should look familiar from a previous class (Mean Value Theorem).
\vspace{3ex}
\noindent{\bf\large Applications:}
\begin{enumerate}
\item One can use Theorem~\ref{thm:taylor} to
prove that functions converge to their Taylor series.
\item
Returning to the relativity example above, we apply Taylor's theorem
with $N=1$ and $a=0$. With $x=v^2/c^2$ and $M$ any number such
that $f''(x)\leq M$, we have
$$
R_1(x) \leq \f{M}{2}x^2.
$$
For example, if we assume that $v\leq 100m/s$
we use
$$
f''(x) \leq \f{3}{2}(1100^2/c^2)^{5/2} = M.
$$
Using $c=3\times 10^8 m/s$, we get
$$
R_1(x) \leq 4.17 \cdot 10^{10} \cdot m_0.
$$
Thus for $v\leq 100m/s \sim 225 \text{mph}$, then the error in throwing away
relativistic factors is $10^{10} m_0$. This is like 200 feet out of
the distance to the sun (93 million miles).
So relativistic and Newtonian kinetic energies are almost the same
for reasonable speeds.
\end{enumerate}
%%%%%%%%%%
%\bibliography{biblio}
\chapter{Some Differential Equations}
\forclass{Final exam: Wed March 22 710pm in Pepper canyon 109.\\
Today: Section 9.5\\
Friday: Review (with special guest John Eggers).\\
Extra Office Hours: Monday 112pm
}
%\begin{example}
%Find orthogonal trajectories to the
%family of curves $y=ke^{x}$.
%$y'=y$. Orthogonal trajectories thus
%satisfy $y'=1/y$, since slopes must be negative reciprocal of
%each other. Solve $u'=1/u$. Get $u=\sqrt{2x+c}$.
%\end{example}
Introduction  not written.
\section{Separable Equations}
A \defn{separable differential equation} is a first order
differential equation that can be written in the form
$$
\f{dy}{dx} = \f{f(x)}{h(y)}.
$$
These can be solved by integration, by noting that
$$
h(y) dy = f(x) dx,
$$
hence
$$
\int h(y) dy = \int f(x) dx.
$$
This latter equation defines $y$ implicitly as a function of $x$, and
in some cases it is possible to explicitly solve for $y$ as a function
of $x$.
\section{Logistic Equation}
The logistics equation is a differential equation that models
population growth. Often in practice a differential equation models
some physical situtation, and you should ``read it'' as doing so.
Exponential growth:
$$
\f{1}{P} \f{dP}{dt} = k.
$$
This says that the ``relative (percentage) growth rate'' is constant.
As we saw before, the solutions are
$$
P_{(t)} = P_0 \cdot e^{kt}.
$$
Note that this model only works for a little while. In everyday life
the growth couldn't actually continue at this rate indefinitely. This
exponential growth model ignores limitations on resources, disease,
etc. Perhaps there is a better model?
Over time we expect the growth rate should level off, i.e., decrease
to $0$.
What about
\begin{equation}\label{eqn:logistic}
\f{1}{P} \f{dP}{dt} = k \left(1  \f{P}{K}\right),
\end{equation}
where $K$ is some large constant called the \defn{carrying capacity},
which is much bigger than $P=P(t)$ at time $0$. The carrying capacity
is the maximum population that the environment can support.
Note that if $P>K$, then $dP/dt<0$ so the population declines.
The differential equation (\ref{eqn:logistic}) is called the logistic
model (or logistic differential equation). There are, of course,
other models one could use, e.g., the Gompertz equation.
First question: are there any \defn{equilibrium solutions} to
(\ref{eqn:logistic}), i.e., solutions with $dP/dt = 0$, i.e., constant
solutions? In order that $dP/dt = 0$ then $0 = k \left(1 
\f{P}{K}\right)$, so the two equilibrium solutions are $P(t)=0$ and
$P(t)=K$.
The logistic differential equation (\ref{eqn:logistic}) is separable,
so you can separate the variables with one variable on one side of
the equality and one on the other. This means we can easily solve
the equation by integrating. We rewrite the equation as
$$
\f{dP}{dt} = \f{k}{K} P (P  K).
$$
Now separate:
$$
\f{KdP}{P(PK)} = k \cdot dt,
$$
and integrate both sides
$$
\int \f{KdP}{P(PK)} = \int k \cdot dt = kt + C.
$$
On the left side we get
$$
\int \f{KdP}{P(PK)} = \int \left( \f{1}{PK}  \f{1}{P} \right) dP
= \lnPK  \lnP + *
$$
Thus
$$
\lnKP  \lnP = kt + c,
$$
so
$$
\ln(KP)/P = kt + c.
$$
Now exponentiate both sides:
$$
(KP)/P = e^{kt + c} = A e^{kt}, \qquad \text{where $A=e^{c}$}.
$$
Thus
$$K = P(1+Ae^{kt}),$$
so
$$
P(t) = \f{K}{1+A e^{kt}}.
$$
Note that $A=0$ also makes sense and gives an equilibrium solution.
In general we have $\lim_{t\to\infty} P(t) = K$.
In any particular case we can determine $A$ as a function of $P_0 = P(0)$
by using that
$$
P(0) = \f{K}{1+A}\qquad\text{so}\qquad A = \f{K}{P_0}  1 = \f{KP_0}{P_0}.
$$
\printindex
\end{document}