Problem 1: We have interpreted $\lim_{x\to a}f(x)=L$ as "$f(x)$ gets arbitrarily close to $L$ as $x$ gets close to $a$." Here we want to see how close $x$ needs to be to $a$ in order to make $f(x)$ close to $L$, or to discover that, no matter how close $x$ is to $a$, $f(x)$ does not approach the alleged value (or any single value).
For $f(x)=x^2$, we know that $\lim_{x\to 2}~x^2=4$. By graphing the function over an appropriate range of values, estimate for which $x>0$ we have $|f(x)-4|<1$. Is your solution an interval containing $x=2$?
Estimate for which $x>0$ we have $|f(x)-4|<1$ by solving the inequality in Sage.
As in parts a. and b. pick 3 (very) small values $\epsilon>0$ and determine which $x>0$ make $|f(x)-4|<\epsilon$. Are your solutions intervals containing $x=2$?
For $g(x)=\sin\left(\frac{1}{x}\right)$, determine whether or not $\lim_{x\to 0}~g(x)$ can exist.
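The two behaviors can be contrasted numerically. Below is a plain-Python sketch (an assumption on my part; the worksheet itself uses Sage): for $f(x)=x^2$ the inequality $4-\epsilon<x^2<4+\epsilon$ solves directly to an interval around $x=2$, while for $g(x)=\sin(1/x)$ we can pick two sequences of $x$-values tending to $0^+$ on which $g$ takes the values $1$ and $-1$ respectively.

```python
import math

def delta_interval(eps):
    """For f(x) = x^2 and x > 0, the set where |f(x) - 4| < eps is an interval:
    solving 4 - eps < x^2 < 4 + eps gives the endpoints directly."""
    return (math.sqrt(4 - eps), math.sqrt(4 + eps))

# The solution set contains x = 2 but is not centered at 2.
for eps in (1, 0.1, 0.01):
    lo, hi = delta_interval(eps)
    print(eps, lo, hi, lo < 2 < hi)

# Along x_k = 1/((2k + 0.5)*pi) -> 0+ we get g(x_k) = sin(1/x_k) = 1, while
# along x_k = 1/((2k + 1.5)*pi) -> 0+ we get g(x_k) = -1, so g(x) cannot
# approach a single value as x -> 0.
near_one  = [math.sin((2*k + 0.5) * math.pi) for k in range(1, 6)]
near_neg1 = [math.sin((2*k + 1.5) * math.pi) for k in range(1, 6)]
print(near_one, near_neg1)
```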
Problem 2: Using the Intermediate Value Theorem, we have seen how to test whether or not a continuous function $f(x)$ has a root, but it does not give us (directly) a method to find it. A procedure called Newton's method is a tool that can help us find a root. It is given by defining a sequence (also in the book, Section 4.8, pp. 269-272) $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)},$$ with the successive terms (ideally) getting closer and closer to the zero. The prerequisite for Newton's method is that we can make a ``good guess'' at the zero. In the following, let $h(x)=e^{\sqrt{x}}$ and $g(x)=x^4$.
Find an interval for which the Intermediate Value Theorem shows that the two functions must intersect.
Use the interval in part (a) to make a guess at a value for the desired $x$.
Apply Newton's method to get approximations for the desired value for $x$.
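The steps above can be sketched in plain Python (an assumption; the worksheet itself uses Sage). Finding an intersection of $h$ and $g$ means finding a root of $f(x)=e^{\sqrt{x}}-x^4$; the sign change of $f$ on $(1,2)$ supplies the IVT interval and the starting guess.

```python
import math

def newton(f, fprime, x0, steps=20):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) starting from x0."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# An intersection of h(x) = e^sqrt(x) and g(x) = x^4 is a root of f = h - g.
f  = lambda x: math.exp(math.sqrt(x)) - x**4
fp = lambda x: math.exp(math.sqrt(x)) / (2 * math.sqrt(x)) - 4 * x**3

# f(1) = e - 1 > 0 and f(2) = e^sqrt(2) - 16 < 0, so by the IVT there is a
# root in (1, 2); x0 = 1.5 is a reasonable first guess.
root = newton(f, fp, 1.5)
print(root, f(root))
```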
Problem 1: The series $\sum_{n=1}^{\infty}\frac{1}{n}$ is called the harmonic series.
Determine, by trial and error, the smallest $N$ such that $\sum_{n=1}^{N}\frac{1}{n}~>~2$
Determine, by trial and error, the smallest $N$ such that $\sum_{n=1}^{N}\frac{1}{n}~>~6$
Determine, by trial and error, the smallest $N$ such that $\sum_{n=1}^{N}\frac{1}{n}~>~10$
Do you think this series converges or diverges?
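The trial-and-error searches in parts (a)-(c) can be automated; here is a plain-Python sketch (the worksheet itself uses Sage, but only basic arithmetic is needed).

```python
def first_n_exceeding(target):
    """Smallest N with H_N = 1 + 1/2 + ... + 1/N > target."""
    total, n = 0.0, 0
    while total <= target:
        n += 1
        total += 1.0 / n
    return n

# The required N grows very quickly as the target grows, which is numerical
# evidence about how slowly the partial sums increase.
results = {t: first_n_exceeding(t) for t in (2, 6, 10)}
print(results)
```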
Problem 2: For $|x|<1$, suppose we know $$\frac{1}{1-x}=\sum_{k=0}^{\infty}x^k.$$ Determine the Taylor series for $F(x)$ from the Taylor series for $\frac{1}{1-x}$.
$F(x)=(x+1)\ln(1+x)-x$.
$F(x)=\int\tan^{-1}(x)dx$.
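In both parts the idea is substitution into the geometric series followed by term-by-term integration inside the radius of convergence. For example, replacing $x$ by $-x$ and integrating from $0$ to $x$ gives $$\frac{1}{1+x}=\sum_{k=0}^{\infty}(-1)^k x^k \quad\Longrightarrow\quad \ln(1+x)=\sum_{k=0}^{\infty}\frac{(-1)^k x^{k+1}}{k+1},\qquad |x|<1,$$ and substituting $x\mapsto -x^2$ gives $\frac{1}{1+x^2}=\sum_{k=0}^{\infty}(-1)^k x^{2k}$, whose term-by-term antiderivative is the series for $\tan^{-1}(x)$.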
Problem 3: Let $F(x)=\sum_{k=0}^{\infty}\frac{x^{2k}}{2^k\cdot k!}$.
Show that $F(x)$ has infinite radius of convergence.
Show that $y=F(x)$ is a solution of $y''=xy'+y$ with initial conditions $y(0)=1$ and $y'(0)=0$.
Plot the partial sums $S_N$ for $N=1,3,5,7$ on the same graph and describe any patterns or interesting attributes you see.
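Parts (a) and (b) reduce to a coefficient identity. Writing $F(x)=\sum_k a_k x^{2k}$ with $a_k=\frac{1}{2^k k!}$, comparing the coefficient of $x^{2k}$ on both sides of $y''=xy'+y$ asks exactly that $(2k+2)(2k+1)a_{k+1}=(2k+1)a_k$, i.e. $a_{k+1}=a_k/(2(k+1))$; the same ratio shows the terms of the series shrink faster than any geometric series for every fixed $x$. A plain-Python check with exact rationals (an assumption; the worksheet itself uses Sage):

```python
from fractions import Fraction

# Coefficients a_k = 1/(2^k k!) of F(x) = sum a_k x^{2k}, as exact fractions.
N = 20
a = [Fraction(1)]
for k in range(N):
    a.append(a[k] / (2 * (k + 1)))

# y'' = x y' + y termwise: (2k+2)(2k+1) a_{k+1} == (2k+1) a_k for all k.
for k in range(N):
    assert (2*k + 2) * (2*k + 1) * a[k + 1] == (2*k + 1) * a[k]

# a_0 = 1 gives y(0) = 1, and the series has only even powers, so y'(0) = 0.
# Ratio of consecutive terms: x^2 / (2(k+1)) -> 0 for every fixed x, so the
# radius of convergence is infinite.
print("recurrence verified:", a[0], a[1], a[2])
```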
Problem 3: Evaluate $\int_c x\,dx~+~y\,dy~+~xy\,dz$ using Stokes' theorem, where $c$ is the circle in which the sphere $x^2+y^2+z^2=4$ meets the cylinder $x^2+y^2=1$ above the $xy$-plane.
{{{id=213|
x, y, z, t = var('x,y,z,t')

# Visualize the sphere, the cylinder, and the xy-plane.
p1 = implicit_plot3d(x^2+y^2+z^2==4, (x,-2,2), (y,-2,2), (z,0,2), opacity=0.2, color="red", mesh=True)
p2 = implicit_plot3d(x^2+y^2==1, (x,-2,2), (y,-2,2), (z,0,sqrt(3)), opacity=0.5, color="blue", mesh=True)
p3 = plot3d(0, (x,-2,2), (y,-2,2), opacity=0.3, color="orange", mesh=True)
show(p1 + p2 + p3, aspect_ratio=1)

# Parametrize c: on the cylinder x^2+y^2=1 the sphere forces z = sqrt(3).
r_t = vector(SR, [cos(t), sin(t), sqrt(3)])
show(r_t)
dr_t = diff(r_t, t)
show(dr_t)

# F = (x, y, xy), evaluated along the curve.
F = vector(SR, [x, y, x*y])
show(F)
F_r_t = vector(SR, [cos(t), sin(t), cos(t)*sin(t)])
show(F_r_t)

show(integral(F_r_t * dr_t, t, 0, 2*pi))
///
}}}
Problem 1: In the tasks below, example commands are given with the equation $y'(x)=xy(x)$, initial condition $y(0)=1$ in mind. These are meant to be a guide, and you will have to modify these to get the results desired for the differential equation given.
Consider $y'(x)=x-y$.
Draw the slope field for $-1\leq x\leq 3$ and $-1\leq y\leq 3$. Describe any trends you see in the slope field. For example, as we let $x\to\infty$, does there appear to be some tendency toward a particular line or curve? Is there any symmetry?
E.g. plot_vector_field((1,x*y),(x,-1,1),(y,-1,1))
Use Euler's method to approximate 8 points (or so) of a solution, for $x=0$, $0.25$, $\ldots$, $1.75$. Plot your approximations.
E.g. ApproxPts=eulers_method(x*y,0,1,1/5,1,algorithm="none") (with algorithm="none", Sage returns the list of points instead of printing a table)
Determine (using Sage, not by hand) the exact solution with initial condition $y(0)=0$.
E.g. y=function('y')(x); desolve(diff(y,x)==x*y, y, ics=[0,1])
On the same graph, plot the slope field, the exact solution, and the approximations you obtained from Euler's method. You should focus the picture around the points obtained from Euler's method, and you may have to adjust the bounds in your slope field to make the picture nice.
E.g. show(list_plot(ApproxPts)+plot_vector_field((1,x*y),(x,-1,1),(y,-1,1)))
Repeat part (b) with 80 points instead of 8, and then plot again as in part (d). Describe the change you see.
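Sage's eulers_method does the bookkeeping for you, but the iteration itself is short enough to write out. A plain-Python sketch for this problem's equation $y'=x-y$ with $y(0)=0$ (the exact solution used for comparison, $y=x-1+e^{-x}$, follows from the integrating-factor method):

```python
import math

def euler(f, x0, y0, h, n):
    """Euler's method: take n steps of size h for y' = f(x, y); returns (x, y) points."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
        pts.append((x, y))
    return pts

f = lambda x, y: x - y                 # the ODE y' = x - y
approx = euler(f, 0.0, 0.0, 0.25, 7)   # 8 points: x = 0, 0.25, ..., 1.75

# Exact solution with y(0) = 0 is y = x - 1 + e^{-x}; compare at the grid points.
for x, y in approx:
    print(x, y, x - 1 + math.exp(-x))
```

Shrinking the step size (as in the 80-point rerun) drives the Euler points toward the exact solution curve, with error roughly proportional to $h$.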
Problem 2: Solve Bessel's equation of order zero using the power series method.
$$x^2 y'' + xy' + x^2 y~=~0,~~y(0)=1,~~y'(0)=0$$
{{{id=208|
///
}}}
Problem 1: Suppose a group of robots has been programmed to traverse the maze shown below, where at each junction a robot randomly chooses which path to take (including staying where it is). We assume that each junction and path can accommodate as many robots as needed, and moving from one junction to any other takes the same amount of time for each robot.
Construct a matrix (called the transition matrix) that describes the probabilities of where the robots will go during one time step.
If the initial arrangement of the group of robots is that every junction has 15 robots, what (probabilistically) happens after 1 time step? 2 time steps?
Is there a long term pattern? Is there an arrangement that the group tends towards? If so, can you prove such a trend?
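The maze itself is not reproduced in this text, so the following plain-Python sketch uses a hypothetical stand-in: three junctions in a row, $1$ -- $2$ -- $3$, where at each step a robot chooses uniformly among staying put and moving to an adjacent junction. The same matrix-iteration pattern applies to whatever maze the worksheet shows.

```python
# Hypothetical maze: junctions 0 -- 1 -- 2 in a row; each robot picks
# uniformly among its options (stay, or move to a neighbor).
options = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
n = 3

# Transition matrix: P[i][j] = probability a robot at junction i ends at j.
P = [[0.0] * n for _ in range(n)]
for i, opts in options.items():
    for j in opts:
        P[i][j] = 1.0 / len(opts)

def step(v, P):
    """One time step: push the robot counts in v through the transition matrix."""
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

v = [15.0, 15.0, 15.0]    # 15 robots at every junction initially
for k in range(2):
    v = step(v, P)
print(v)                  # expected counts after 2 time steps

# Long-term behaviour: iterating many more steps, the arrangement settles
# toward a stationary distribution.
for _ in range(200):
    v = step(v, P)
print(v)
```

For this stand-in maze the counts settle toward an arrangement proportional to $(2,3,2)$, the number of options at each junction, which suggests what kind of "trend" to look for (and prove) in the actual maze.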
Problem 2: (From Lay's Linear Algebra: 4.5 #33)
According to Theorem 11, a linearly independent set $\{v_1,\ldots,v_k\}$ in $\mathbb{R}^n$ can be expanded to a basis for $\mathbb{R}^n$. One way to do this is to create (a matrix) $A=[v_1~\cdots~v_k~~e_1~\cdots~e_n]$ with $e_1,\ldots,e_n$ the columns of the identity matrix; the pivot columns of $A$ form a basis for $\mathbb{R}^n$.
Use the method described to extend the following vectors to a basis for $\mathbb{R}^5$:
$v_1=(-9,\,-7,\,8,\,-5,\,7)^t$, $v_2=(9,\,4,\,1,\,6,\,-7)^t$, $v_3=(6,\,7,\,-8,\,5,\,-7)^t$.
Explain why the method works in general. Why are the original vectors $v_1,\ldots,v_k$ included in the basis found for $\text{Col}(A)$? Why is $\text{Col}(A)=\mathbb{R}^n$?
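The pivot-column computation can be done with exact arithmetic in a few lines. A plain-Python sketch (the worksheet itself uses Sage, whose matrices have a built-in pivots() method) that row-reduces $A=[v_1~v_2~v_3~e_1~\cdots~e_5]$:

```python
from fractions import Fraction

v1 = [-9, -7, 8, -5, 7]
v2 = [9, 4, 1, 6, -7]
v3 = [6, 7, -8, 5, -7]

# A = [v1 v2 v3 e1 ... e5], stored as a 5 x 8 matrix of exact rationals.
cols = [v1, v2, v3] + [[1 if i == j else 0 for i in range(5)] for j in range(5)]
A = [[Fraction(cols[j][i]) for j in range(8)] for i in range(5)]

def pivot_columns(M):
    """Gauss-Jordan elimination (in place); returns the pivot column indices."""
    pivots, r = [], 0
    for c in range(len(M[0])):
        if r == len(M):
            break
        for i in range(r, len(M)):
            if M[i][c] != 0:
                M[r], M[i] = M[i], M[r]
                piv = M[r][c]
                M[r] = [x / piv for x in M[r]]
                for k in range(len(M)):
                    if k != r and M[k][c] != 0:
                        M[k] = [a - M[k][c] * b for a, b in zip(M[k], M[r])]
                pivots.append(c)
                r += 1
                break
    return pivots

pivots = pivot_columns(A)
# Columns 0, 1, 2 (that is, v1, v2, v3) are pivots because they are linearly
# independent; the remaining pivots come from the appended e_j columns.
print(pivots)
```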
Problem 3: (From Lay's Linear Algebra: 4.3 #37)
Show that $\{t,~\sin(t),~\cos(2t),~\sin(t)\cos(t)\}$ is a linearly independent set of functions defined on $\mathbb{R}$. Start by assuming that $$c_1\cdot t+c_2\cdot \sin(t)+c_3\cdot \cos(2t)+c_4\cdot \sin(t)\cos(t)~=~0.$$ This equation must hold for all real values of $t$, so choose several specific values of $t$ until you get a system of enough equations to determine that all the $c_j$ must be zero.
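The "choose several values of $t$" step amounts to showing that the matrix of function values at four sample points is invertible: then the only solution of the resulting $4\times 4$ system is $c_1=c_2=c_3=c_4=0$. A plain-Python sketch using the sample points $t=1,2,3,4$ (my choice; any values giving a nonzero determinant work):

```python
import math

# Rows: sample points t; columns: the four functions evaluated there.
funcs = [lambda t: t,
         lambda t: math.sin(t),
         lambda t: math.cos(2 * t),
         lambda t: math.sin(t) * math.cos(t)]
ts = [1.0, 2.0, 3.0, 4.0]
M = [[f(t) for f in funcs] for t in ts]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0.0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

d = det(M)
print(d)   # nonzero, so the system forces every c_j to be zero
```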
{{{id=207|
///
}}}
Problem 1: The goal is to find evidence for or against the statement ``$p(x)\in\mathbb{Z}[x]$ is irreducible if and only if $\tilde{p}(x)\in\left(\mathbb{Z}/p\mathbb{Z}\right)[x]$ is irreducible,'' where $\tilde{p}(x)$ denotes the polynomial $p(x)$ with each coefficient reduced $\mod{p}$.
Let $f(x)=x^4+x^2+1$.
Let $g(x)=3x^6+2x^5+4x^4+x^2+4x+4$.
Let $h(x)=x^4+1$.
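Irreducibility over $\mathbb{Z}/p\mathbb{Z}$ for small degrees and small primes can be checked by brute force: a degree-$n$ polynomial is reducible exactly when it has a monic factor of degree between $1$ and $n/2$. A plain-Python sketch (the worksheet itself uses Sage, which has is_irreducible() built in); the coefficient-list representation below, constant term first, is my own convention:

```python
from itertools import product

# Polynomials are coefficient lists from the constant term up,
# e.g. x^4 + x^2 + 1 -> [1, 0, 1, 0, 1].

def poly_mod(f, p):
    """Reduce coefficients mod p and strip leading zeros."""
    g = [c % p for c in f]
    while len(g) > 1 and g[-1] == 0:
        g.pop()
    return g

def poly_rem(f, d, p):
    """Remainder of f divided by d over Z/pZ (p prime)."""
    r = poly_mod(f, p)
    inv = pow(d[-1] % p, p - 2, p)        # inverse of d's leading coefficient
    while len(r) >= len(d) and any(r):
        coef = r[-1] * inv % p
        shift = len(r) - len(d)
        for i, c in enumerate(d):
            r[shift + i] = (r[shift + i] - coef * c) % p
        while len(r) > 1 and r[-1] == 0:
            r.pop()
    return r

def is_irreducible(f, p):
    """Brute force: f is reducible over Z/pZ iff some monic polynomial of
    degree 1..deg(f)//2 divides it."""
    f = poly_mod(f, p)
    n = len(f) - 1
    if n <= 1:
        return n == 1
    for k in range(1, n // 2 + 1):
        for tail in product(range(p), repeat=k):
            d = list(tail) + [1]          # monic candidate divisor of degree k
            if all(c == 0 for c in poly_rem(f, d, p)):
                return False
    return True

f = [1, 0, 1, 0, 1]   # x^4 + x^2 + 1, which factors over Z as (x^2+x+1)(x^2-x+1)
h = [1, 0, 0, 0, 1]   # x^4 + 1, irreducible over Z yet reducible mod every prime
for p in (2, 3, 5):
    print(p, is_irreducible(f, p), is_irreducible(h, p))
```

The polynomial $h(x)=x^4+1$ is the classical counterexample to the "only if" direction, which is what makes part (c) interesting; $g(x)$ can be fed through the same functions.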
Problem 2: Suppose $p(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0\in\mathbb{Z}[x]$ is a polynomial with integer coefficients. Eisenstein's Criterion states that if there is a prime $q$ such that $q\nmid a_n$, $q\mid a_t$ for all $0\leq t\leq n-1$, and $q^2\nmid a_0$, then $p(x)$ is irreducible in $\mathbb{Q}[x]$.
Determine the expansion of $f(x)=(x+1)^{p}-1$ for a few primes $p<20$. Is this polynomial irreducible?
Can you find a pattern among the coefficients of $f(x)$?
Is $\Phi_p(x)=\frac{x^p-1}{x-1}=x^{p-1}+x^{p-2}+\cdots+x+1$ irreducible as a polynomial in $\mathbb{Z}[x]$?
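The coefficient pattern in parts (a)-(c) can be checked directly: $f(x)=(x+1)^p-1$ has coefficients $\binom{p}{k}$ and constant term $0$, so $f$ is not irreducible (it factors as $x\cdot g(x)$), but $g(x)=((x+1)^p-1)/x=\Phi_p(x+1)$ satisfies Eisenstein's Criterion at the prime $p$; this is the classical route to the irreducibility of $\Phi_p$. A plain-Python sketch of the check (the worksheet itself uses Sage):

```python
from math import comb

checked = []
for p in (2, 3, 5, 7, 11, 13, 17, 19):
    # g(x) = ((x+1)^p - 1)/x = Phi_p(x+1) has coefficients C(p, k+1),
    # listed here from the constant term (C(p,1) = p) up to the leading 1.
    g = [comb(p, k + 1) for k in range(p)]
    # Eisenstein at the prime p: p does not divide the leading coefficient,
    # p divides every other coefficient, and p^2 does not divide the constant.
    eisenstein = (g[-1] % p != 0
                  and all(c % p == 0 for c in g[:-1])
                  and g[0] % (p * p) != 0)
    checked.append((p, eisenstein))
    print(p, g, eisenstein)
```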