Example Sage lab problems

Calculus

Calc I

Problem 1: We have interpreted $\lim_{x\to a}f(x)=L$ as "$f(x)$ gets arbitrarily close to $L$ as $x$ gets close to $a$." Here we want to see how close $x$ needs to be to $a$ in order to make $f(x)$ close to $L$ or discover that no matter how close $x$ is to $a$, $f(x)$ does not approach the alleged value or any single value.

  1. For $f(x)=x^2$, we know that $\lim_{x\to 2}~x^2=4$. Estimate the values of $x>0$ for which $|f(x)-4|<1$ by graphing the function over an appropriate range. Is your solution an interval containing $x=2$?

  2. Estimate the values of $x>0$ for which $|f(x)-4|<1$ by solving the inequality in Sage.

  3. As in parts 1 and 2, pick 3 (very) small values $\epsilon>0$ and determine which $x>0$ make $|f(x)-4|<\epsilon$. Are your solutions intervals containing $x=2$?

  4. For $g(x)=\sin\left(\frac{1}{x}\right)$, determine whether or not $\lim_{x\to 0}~g(x)$ can exist.

{{{id=201|
plot(x^2,(x,1.75,2.15))
///
}}}

{{{id=219|
solve(x^2-4<1,x)
///
}}}
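For part 4, a quick numerical experiment suggests why the limit cannot exist. A plain-Python sketch (runnable outside Sage; the two particular sequences are our own choice):

```python
import math

# Two sequences that both approach 0, but along which g(x) = sin(1/x)
# takes different limiting values:
seq_a = [1 / (n * math.pi) for n in range(1, 6)]                    # sin(1/x) = sin(n*pi) = 0
seq_b = [1 / (math.pi / 2 + 2 * n * math.pi) for n in range(1, 6)]  # sin(1/x) = 1

vals_a = [math.sin(1 / x) for x in seq_a]
vals_b = [math.sin(1 / x) for x in seq_b]
```

Since $g$ takes the values $0$ and $1$ on inputs arbitrarily close to $0$, no single limit $L$ can work.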

Problem 2: The Intermediate Value Theorem lets us test whether or not a continuous function $f(x)$ has a root, but it does not (directly) give us a method for finding one. Newton's method is a tool that can help us find a root. It is given by the sequence (see Section 4.8, pp. 269-272 of the book) $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)},$$ whose successive terms (ideally) get closer and closer to the zero. The prerequisite for Newton's method is that we can make a ``good guess'' for the zero. In the following, let $h(x)=e^{\sqrt{x}}$ and $g(x)=x^4$.

  1. Find an interval for which the Intermediate Value Theorem shows that the two functions must intersect.

  2. Use the interval in part (a) to make a guess at a value for the desired $x$.

  3. Apply Newton's method to get approximations for the desired value for $x$.

{{{id=220|
f(x)=exp(sqrt(x))-x^4
///
}}}

{{{id=210|
N(x)=x-f(x)/(diff(f,x))
show(N)
///
}}}

{{{id=222|
a=N(N(N(1.75)))
show([a,f(a)])
///
}}}
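The nested calls above can also be written as an explicit loop. A plain-Python sketch (the derivative is worked out by hand; 1.75 is the same initial guess as above):

```python
import math

def f(x):
    return math.exp(math.sqrt(x)) - x**4

def fprime(x):
    # d/dx [e^sqrt(x) - x^4] = e^sqrt(x)/(2*sqrt(x)) - 4*x^3
    return math.exp(math.sqrt(x)) / (2 * math.sqrt(x)) - 4 * x**3

x = 1.75                      # initial guess from the IVT interval
for _ in range(6):
    x = x - f(x) / fprime(x)  # Newton step x_{n+1} = x_n - f(x_n)/f'(x_n)
# x is now very close to a zero of f, i.e. a point where e^sqrt(x) = x^4
```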

Calc II

Problem 1: The series $\sum_{n=1}^{\infty}\frac{1}{n}$ is called the harmonic series.

  1. Determine, by trial and error, the smallest $N$ such that $\sum_{n=1}^{N}\frac{1}{n}~>~2$

  2. Determine, by trial and error, the smallest $N$ such that $\sum_{n=1}^{N}\frac{1}{n}~>~6$

  3. Determine, by trial and error, the smallest $N$ such that $\sum_{n=1}^{N}\frac{1}{n}~>~10$

  4. Do you think this series converges or diverges?

{{{id=202|
RR(sum(1/x,x,1,10^5))
///
}}}

{{{id=223|
///
}}}
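The trial-and-error search in parts 1-3 can be automated; a plain-Python sketch (the helper name `smallest_N` is ours):

```python
def smallest_N(target):
    """Smallest N with 1 + 1/2 + ... + 1/N > target."""
    total, n = 0.0, 0
    while total <= target:
        n += 1
        total += 1.0 / n
    return n
```

The partial sums grow roughly like $\ln N$, so the required $N$ climbs quickly even though each threshold is only $4$ larger than the last.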

Problem 2: For $|x|<1$, suppose we know $$\frac{1}{1-x}=\sum_{k=0}^{\infty}x^k.$$ Determine the Taylor series (about $x=0$) for each $F(x)$ below from the Taylor series for $\frac{1}{1-x}$.

  1. $F(x)=(x+1)\ln(1+x)-x$.

  2. $F(x)=\int\tan^{-1}(x)dx$.

{{{id=211|
F(x)=(x+1)*log(1+x)-x
F.taylor(x,0,6)
///
}}}

{{{id=227|
g(x)=1/(1+x)
s=g.taylor(x,0,5)
show(s)
///
}}}

{{{id=229|
I=integral(s,x)
expand((1+x)*I-x)
///
}}}

{{{id=231|
///
}}}

Problem 3: Let $F(x)=\sum_{k=0}^{\infty}\frac{x^{2k}}{2^k\cdot k!}$.

  1. Show that $F(x)$ has infinite radius of convergence.

  2. Show that $y=F(x)$ is a solution of $y''=xy'+y$ with initial conditions $y(0)=1$ and $y'(0)=0$.

  3. Plot the partial sums $S_N$ for $N=1,3,5,7$ on the same graph and describe any patterns or interesting attributes you see.

{{{id=212| /// }}}
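A useful check for parts 1 and 2 (not stated in the problem, but it follows by substituting $x^2/2$ into the exponential series): since $\frac{x^{2k}}{2^k\,k!}=\frac{(x^2/2)^k}{k!}$, the series sums to $e^{x^2/2}$, which indeed satisfies $y''=xy'+y$, $y(0)=1$, $y'(0)=0$. A plain-Python comparison of a partial sum against this closed form:

```python
import math

def S(N, x):
    """Partial sum S_N(x) = sum_{k=0}^{N} x^(2k) / (2^k * k!)."""
    return sum(x**(2 * k) / (2**k * math.factorial(k)) for k in range(N + 1))

x0 = 1.7
approx = S(30, x0)            # 30 terms is far past convergence at x0
exact = math.exp(x0**2 / 2)   # closed form of the full series
```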

Multivariable Calc

Definition: The gradient of a function $f(x,y)$ is an ordered pair $\nabla f(x,y):=\left(\frac{\partial f}{\partial x}(x,y),\frac{\partial f}{\partial y}(x,y)\right)$.

Theorem: If $P=(a,b)$ is a point on the curve $g(x,y)=0$ for which $z=f(x,y)$ is maximum or minimum, then $\nabla f(P)=\lambda\nabla g(P)$, where $\lambda$ is some constant.

Main idea in the theorem: For a maximum or minimum of $z=f(x,y)$ one can show that $\nabla f(P)$ is perpendicular to the contour line of $g(x,y)$ which contains the point $P$. Also, one shows that $\nabla g(P)$ is perpendicular to the contour line of $g(x,y)$ which contains the point $P$. Any two vectors in the $xy$-plane perpendicular to the same line must be parallel.

Some useful commands:

{{{id=1|
x,y=var('x,y')

def contour_lines(G,x0,x1,y0,y1):
    # Takes as input a function G of x and y, starting x-value x0, ending
    # x-value x1, starting y-value y0, and ending y-value y1.
    return contour_plot(G,(x,x0,x1),(y,y0,y1),contours=[0],fill=False)

def gradplot(F,a,b,z):
    # Takes in a gradient F, x-input a, y-input b, and lastly a color choice
    # where 1 is red and 2 is green.
    if z==1:
        return parametric_plot([a+x*F[0](a,b),b+x*F[1](a,b)],(x,0,1),color="red")
    elif z==2:
        return parametric_plot([a+x*F[0](a,b),b+x*F[1](a,b)],(x,0,1),color="green")
    else:
        print('Error: Last input must be 1 or 2')
///
}}}

Instructions: For each of the following problems, define the relevant functions, determine their gradients, and solve the given optimization problem (using the above theorem). After determining the optimal point $P=(a,b)$, plot the gradients at $P$ and the contour line which contains $P$ in the same $xy$-plane, and then plot the optimized function in 3-dimensional space.

Problem 1: Find the point $(a,b)$ on the curve $y=e^x$ for which $ab$ is as small as possible.

{{{id=215|
x,y=var('x,y')
F(x,y)=x*y
G(x,y)=exp(x)-y
show([F,G])
///
}}}

{{{id=3|
Fgrad=F.gradient(); Ggrad=G.gradient()
///
}}}

{{{id=214|
solve(y/x==-exp(x),y)
///
}}}

{{{id=217|
P=point([-1,exp(-1)])
cplot=contour_lines(G,-2,0,exp(-1)-1,exp(-1)+1)
Fgradplot=gradplot(Fgrad,-1,exp(-1),1)
Ggradplot=gradplot(Ggrad,-1,exp(-1),2)
show(P+Fgradplot+Ggradplot+cplot)
///
}}}

Problem 2: With $x$ units of labor and $y$ units of capital, a company can produce $P(x,y)=30x^{0.35}y^{0.65}$ of goods. Find the maximum number of goods the company can produce on a budget of \$10,000, where labor costs \$80 per unit and capital costs \$120 per unit.

{{{id=4|
///
}}}
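For Problem 2, the theorem reduces to a short computation. A plain-Python sketch of the answer it produces (for a Cobb-Douglas function with exponents summing to 1, the Lagrange equations reduce to spending the fraction $0.35$ of the budget on labor and $0.65$ on capital):

```python
budget, cl, ck = 10000, 80, 120   # budget, cost per unit of labor, of capital
a, b = 0.35, 0.65                 # Cobb-Douglas exponents

# grad P = lambda * grad(cl*x + ck*y - budget) forces cl*x/a = ck*y/b,
# i.e. spend fraction a of the budget on labor and b on capital:
x = a * budget / cl               # 43.75 units of labor
y = b * budget / ck               # about 54.17 units of capital
P = 30 * x**a * y**b              # maximum production
```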

Problem 3: Evaluate $\int_c x\,dx~+~y\,dy~+~xy\,dz$ using Stokes' theorem, where $c$ is the circle in which the sphere $x^2+y^2+z^2=4$ intersects the cylinder $x^2+y^2=1$ above the $xy$-plane.

{{{id=213|
x,y,z,t=var('x,y,z,t')
p1 = implicit_plot3d(x^2+y^2+z^2==4, (x,-2,2), (y,-2,2), (z,0,2), opacity=0.2, color="red", mesh=True)
p2 = implicit_plot3d(x^2+y^2==1, (x,-2,2), (y,-2,2), (z,0,sqrt(3)), opacity=0.5, color="blue", mesh=True)
p3 = plot3d(0, (x,-2,2), (y,-2,2), opacity=0.3, color="orange", mesh=True)
show(p1+p2+p3, aspect_ratio=1)

r_t=vector(SR,[cos(t),sin(t),sqrt(3)]); show(r_t)
dr_t=diff(r_t,t); show(dr_t)
F=vector(SR,[x,y,x*y]); show(F)
F_r_t=vector(SR,[cos(t),sin(t),cos(t)*sin(t)]); show(F_r_t)
show(integral(F_r_t*dr_t,t,0,2*pi))
///
}}}
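A sanity check on the parametrization above (plain Python): with $r(t)=(\cos t,\sin t,\sqrt3)$ the integrand $x\,dx+y\,dy+xy\,dz$ becomes $\cos t(-\sin t)+\sin t(\cos t)+0=0$, so the line integral should be exactly $0$. A midpoint-rule approximation agrees:

```python
import math

n = 1000
h = 2 * math.pi / n
total = 0.0
for i in range(n):
    t = (i + 0.5) * h                            # midpoint of each subinterval
    x, y = math.cos(t), math.sin(t)
    dx, dy, dz = -math.sin(t), math.cos(t), 0.0  # r'(t); z is constant on c
    total += (x * dx + y * dy + x * y * dz) * h
```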

Differential Equations

Problem 1: In the tasks below, example commands are given with the equation $y'(x)=xy(x)$, initial condition $y(0)=1$ in mind. These are meant to be a guide, and you will have to modify these to get the results desired for the differential equation given.

Consider $y'(x)=x-y$.

  1. Draw the slope field for $-1\leq x\leq 3$ and $-1\leq y\leq 3$. Describe any trends you see in the slope field. For example, as $x\to\infty$, does there appear to be a tendency toward a particular line or curve? Is there any symmetry?
    E.g. plot_vector_field((1,x*y),(x,-1,1),(y,-1,1))

  2. Use Euler's method to approximate 8 points (or so) of a solution, for $x=0$, $0.25$, $\ldots$, $1.75$. Plot your approximations.
    E.g. ApproxPts=eulers_method(x*y,0,1,1/5,1,algorithm="none")

  3. Determine (using Sage, not by hand) the exact solution with initial condition $y(0)=0$.
    E.g. y=function('y')(x); desolve(diff(y,x)-x*y, y, ics=[0,1])

  4. On the same graph, plot the slope field, the exact solution, and the approximations you obtained from Euler's method. You should focus the picture around the points obtained from Euler's method, and you may have to adjust the bounds in your slope field to make the picture nice.
    E.g. show(list_plot(ApproxPts)+plot_vector_field((1,x*y),(x,-1,1),(y,-1,1)))

  5. Repeat part (b) with 80 points instead of 8, and then plot again as in part (d). Describe the change you see.

{{{id=203| /// }}}
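For parts 2-5, Euler's method is also easy to code directly; a plain-Python sketch for $y'=x-y$, $y(0)=0$ (the exact solution $y=x-1+e^{-x}$ is used only to measure the error):

```python
import math

def euler(f, x0, y0, h, steps):
    """Euler's method: list of (x, y) points starting from (x0, y0)."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)      # y_{n+1} = y_n + h * f(x_n, y_n)
        x += h
        pts.append((x, y))
    return pts

approx = euler(lambda x, y: x - y, 0.0, 0.0, 0.25, 7)
errors = [abs(y - (x - 1 + math.exp(-x))) for x, y in approx]
```

With step $h=0.25$ the worst error over $[0,1.75]$ is a few hundredths; part 5's smaller step shrinks it roughly in proportion to $h$.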

Problem 2: Solve Bessel's equation of order zero using the power series method.

$$x^2 y'' + xy' + x^2 y~=~0,~~y(0)=1,~~y'(0)=0$$

{{{id=208|
///
}}}
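Substituting $y=\sum a_nx^n$ into the equation gives $n^2a_n+a_{n-2}=0$, so $a_n=-a_{n-2}/n^2$, with $a_0=1$ and $a_1=0$ from the initial conditions. A plain-Python check that this recurrence reproduces the known series for the Bessel function $J_0$, whose coefficients are $a_{2k}=(-1)^k/(4^k(k!)^2)$:

```python
import math

N = 12
a = [0.0] * (N + 1)
a[0] = 1.0                    # y(0) = 1; a[1] stays 0 since y'(0) = 0
for n in range(2, N + 1):
    a[n] = -a[n - 2] / n**2   # recurrence n^2 * a_n + a_{n-2} = 0

# closed form for comparison: a_{2k} = (-1)^k / (4^k * (k!)^2)
expected = [(-1)**k / (4**k * math.factorial(k)**2) for k in range(7)]
```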

Linear Algebra

Problem 1: Suppose a group of robots has been programmed to traverse the maze shown below, where at each junction a robot randomly chooses which path to take (including staying where it is). We assume that each junction and path can accommodate as many robots as needed, and moving from one junction to any other takes the same amount of time for each robot.

[Robot maze diagram]

  1. Construct a matrix (called the transition matrix) that describes the probabilities of where the robots will go during one time step.

  2. If the initial arrangement of the group of robots is that every junction has 15 robots, what (probabilistically) happens after 1 time step? 2 time steps?

  3. Is there a long-term pattern? Is there an arrangement that the group tends toward? If so, can you prove it?

{{{id=205| /// }}}
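The maze diagram isn't reproduced here, so the sketch below uses a hypothetical three-junction maze A - B - C (A and C each connect only to B); the transition matrix, the iteration, and the steady state illustrate the general pattern the problem asks about. Plain Python:

```python
# Column-stochastic transition matrix for the hypothetical maze A - B - C:
# at each step a robot picks uniformly among its junction and its neighbors.
T = [
    [1/2, 1/3, 0],    # ends at A
    [1/2, 1/3, 1/2],  # ends at B
    [0,   1/3, 1/2],  # ends at C
]

def step(T, v):
    """One time step: apply the transition matrix to the robot counts."""
    return [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T))]

v = [15.0, 15.0, 15.0]        # 15 robots at every junction
for _ in range(50):
    v = step(T, v)
# v approaches the steady state, proportional to (degree + 1) at each
# junction: 45 * (2/7, 3/7, 2/7) for this maze.
```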

Problem 2: (From Lay's Linear Algebra: 4.5 #33)

According to Theorem 11, a linearly independent set $\{v_1,\ldots,v_k\}$ in $\mathbb{R}^n$ can be expanded to a basis for $\mathbb{R}^n$. One way to do this is to form the matrix $A=[v_1~\cdots~v_k~~e_1~\cdots~e_n]$ with $e_1,\ldots,e_n$ the columns of the identity matrix; the pivot columns of $A$ form a basis for $\mathbb{R}^n$.

  1. Use the method described to extend the following vectors to a basis for $\mathbb{R}^5$:
    $v_1=[-9 ~ -7 ~~ 8 ~ -5 ~~ 7]^t$, $v_2=[9 ~~ 4 ~~ 1 ~~ 6 ~ -7]^t$, $v_3=[6 ~~ 7 ~ -8 ~~ 5 ~ -7]^t$.

  2. Explain why the method works in general. Why are the original vectors $v_1,\ldots,v_k$ included in the basis found for $\text{Col}(A)$? Why is $\text{Col}(A)=\mathbb{R}^n$?

{{{id=206| /// }}}
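A plain-Python sketch of part 1, using exact rational arithmetic to row-reduce $A=[v_1~v_2~v_3~e_1~\cdots~e_5]$ and read off the pivot columns (the helper `pivot_columns` is ours):

```python
from fractions import Fraction

def pivot_columns(rows):
    """Pivot column indices of a matrix, via forward elimination."""
    M = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(M[0])):
        for i in range(r, len(M)):        # find a usable pivot in column c
            if M[i][c] != 0:
                M[r], M[i] = M[i], M[r]
                for k in range(r + 1, len(M)):
                    f = M[k][c] / M[r][c]
                    M[k] = [a - f * b for a, b in zip(M[k], M[r])]
                pivots.append(c)
                r += 1
                break
    return pivots

v1 = [-9, -7, 8, -5, 7]
v2 = [9, 4, 1, 6, -7]
v3 = [6, 7, -8, 5, -7]
# columns 0-2 are the v's, columns 3-7 are e_1, ..., e_5
A = [[v1[i], v2[i], v3[i]] + [1 if j == i else 0 for j in range(5)]
     for i in range(5)]
cols = pivot_columns(A)
```

Here `cols` comes out as `[0, 1, 2, 4, 5]`: the three given vectors together with $e_2$ and $e_3$ form a basis. ($e_1$ is skipped because $v_1+v_3=-3e_1$, so $e_1$ already lies in the span.)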

Problem 3: (From Lay's Linear Algebra: 4.3 #37)

Show that $\{t,~\sin(t),~\cos(2t),~\sin(t)\cos(t)\}$ is a linearly independent set of functions defined on $\mathbb{R}$. Start by assuming that $$c_1\cdot t+c_2\cdot \sin(t)+c_3\cdot \cos(2t)+c_4\cdot \sin(t)\cos(t)~=~0$$ This equation must hold for all real values of $t$, so choose several specific values of $t$ until you get a system of enough equations to determine that all the $c_j$ must be zero.

{{{id=207| /// }}}
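Choosing the four values $t=1,2,3,4$ gives a $4\times4$ linear system in $c_1,\ldots,c_4$; if its determinant is nonzero, the only solution is $c_1=\cdots=c_4=0$. A plain-Python check (the helper `det4` is ours; it does elimination without row swaps, which happens to be safe for this particular matrix):

```python
import math

def det4(M):
    """Determinant of a 4x4 matrix by Gaussian elimination (no pivoting)."""
    M = [row[:] for row in M]
    d = 1.0
    for c in range(4):
        d *= M[c][c]
        for r in range(c + 1, 4):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return d

# Row for each sample t: the values of t, sin(t), cos(2t), sin(t)cos(t)
M = [[t, math.sin(t), math.cos(2 * t), math.sin(t) * math.cos(t)]
     for t in (1.0, 2.0, 3.0, 4.0)]
d = det4(M)   # well away from zero, so the functions are independent
```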

Algebraic Structures

Problem 1: The goal is to find evidence for or against the statement ``$p(x)\in\mathbb{Z}[x]$ is irreducible if and only if $\tilde{p}(x)\in\left(\mathbb{Z}/p\mathbb{Z}\right)[x]$ is irreducible,'' where $\tilde{p}(x)$ denotes the polynomial $p(x)$ with each coefficient reduced $\mod{p}$. Test the statement on each polynomial below for several primes $p$.

  1. Let $f(x)=x^4+x^2+1$.

  2. Let $g(x)=3x^6+2x^5+4x^4+x^2+4x+4$.

  3. Let $h(x)=x^4+1$.

{{{id=204| /// }}}
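One piece of evidence for part 1, checkable without any factoring machinery: $x^4+x^2+1=(x^2+x+1)(x^2-x+1)$, so $f$ is already reducible in $\mathbb{Z}[x]$. A plain-Python verification of the identity (coefficient lists run from the constant term up):

```python
def polymul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x^2 + x + 1) * (x^2 - x + 1) should give x^4 + x^2 + 1
f = polymul([1, 1, 1], [1, -1, 1])
```

For $h(x)=x^4+1$, it is a classical fact (worth testing in Sage) that $h$ is irreducible in $\mathbb{Z}[x]$ yet reducible mod every prime, which is strong evidence against one direction of the statement.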

Problem 2: Suppose $p(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0\in\mathbb{Z}[x]$ is a polynomial with integer coefficients. Eisenstein's Criterion states that if there is a prime $p$ such that $p\nmid a_n$, $p\mid a_t$ for all $0\leq t\leq n-1$, and $p^2\nmid a_0$, then $p(x)$ is irreducible in $\mathbb{Z}[x]$.

  1. Determine the expansion of $f(x)=(x+1)^{p}-1$ for a few primes $p<20$. Is this polynomial irreducible?

  2. Can you find a pattern among the coefficients of $f(x)$?

  3. Is $\Phi_p(x)=\frac{x^p-1}{x-1}=x^{p-1}+x^{p-2}+\cdots+x+1$ irreducible as a polynomial in $\mathbb{Z}[x]$?

{{{id=209| /// }}}
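A plain-Python check of the pattern behind part 2: the expansion of $(x+1)^p-1$ has coefficients $\binom{p}{k}$, and for prime $p$ every middle coefficient $\binom{p}{k}$ with $0<k<p$ is divisible by $p$. This is exactly the hypothesis needed to apply Eisenstein's Criterion to $\Phi_p(x+1)=\left((x+1)^p-1\right)/x$ in part 3:

```python
from math import comb

def middle_coeffs_divisible(p):
    """Does p divide C(p, k) for every 0 < k < p?"""
    return all(comb(p, k) % p == 0 for k in range(1, p))

primes = [2, 3, 5, 7, 11, 13, 17, 19]
results = {p: middle_coeffs_divisible(p) for p in primes}
# For contrast, the pattern fails for composite exponents,
# e.g. C(4, 2) = 6 is not divisible by 4.
```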