{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "We want to solve the linear system $$Ax=b$$ and we split the matrix into first unknown matrices $M$ and $N$\n", "as follows \n", "$$A=M-N$$\n", "where we assume that $M$ is invertible. We can then write our linear system as follows\n", "\\begin{align}\n", "Ax&=b\\\\\n", "\\Leftrightarrow&\\quad =b\\\\\n", "\\Leftrightarrow&\\quad Mx-Nx=b\\\\\n", "\\Leftrightarrow&\\quad Mx=b+Nx\\\\\n", "\\Leftrightarrow&\\quad x=M^{-1}b+M^{-1}Nx\\\\\n", "\\Leftrightarrow&\\quad x=M^{-1}b+M^{-1}(M-A)x\\\\\n", "\\Leftrightarrow&\\quad x=M^{-1}b+(I-M^{-1}A)x\\\\\n", "\\Leftrightarrow&\\quad x=x+M^{-1}(b-Ax)\\\\\n", "\\end{align}\n", "so far it seems nothing is gained but let's turn this into an iteration\n", "$$\n", "x^{(k+1)}=x^{(k)}+M^{-1}(b-Ax^{(k)}).\n", "$$\n", "This method does work quite well under certain circumstances. We now want to create an actual example and for that choose \n", "$$\n", "M=D\n", "$$\n", "with $D$ the diagonal of $A$. This is the so-called Jacobi iteration. Please implement a function that given a right hand side and a matrix implements this method. You can use the norm of the residual as a measure of convergence, i.e.,\n", "$$\n", "r^{(k)}=b-Ax^{(k)}.\n", "$$\n", "You can start with the zero vector for $x^{(0)}$.\n" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import scipy\n", "from scipy import linalg as la\n", "from scipy import optimize\n", "import scipy.sparse as spsp" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "# function should return solution x and \n", "# number of iteration, such it can be called as\n", "# (x,it)=jacobi(A,b,maxiter,tol,verbose)\n", "# if verbose==0 the function should not print out messages\n", "def jacobi(A,b,maxiter,tol,verbose):\n", " print(\"TODO\")\n", " xold=0 # dummy - to remove\n", " k=0 # dummy - to remove\n", "\n", " return (xold,k)\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now check out if our iteration converges, at first with random matrx and random right hand side" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "n=50\n", "A=np.random.rand(n,n)\n", "b = np.random.rand(n,1)" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TODO\n", "0\n", "0\n" ] } ], "source": [ "(x,it)=jacobi(A,b,10,1e-5,verbose=1)\n", "print(x)\n", "print(it)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It seems not to converge, lets try another matrix, for example we make A diagonal dominant by replacing \n", "$A_{ii}$ with the sum of row $i$.\n", "\n" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TODO\n" ] } ], "source": [ "A2=A.copy()\n", "n1=np.ones((n,1))\n", "rs=A2@n1\n", "#print(n1)\n", "#print(rs)\n", "\n", "for i in range (n):\n", " A2[i,i]=rs[i]\n", " \n", "(x,it)=jacobi(A2,b,1000,1e-5,verbose=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Jacobi iteration computes the vector \n", "$ x^{(k+1)} $ from values of vector $ x^{(k)} $. 
\n", "Now the idea comes up to use for the computation of the component $i$ of $x^{(k+1)}$ the already computed components $x^{(k+1)}_1 \\cdots x^{(k+1)}_{i-1}$ from the new vector $x^{(k+1)}$ instead the components of the old vector $x^{(k)}$.\n", "\n", "This leads to the Gauss-Seidel iteration,\n", "instead of choosing\n", "$$ M=D $$ with $D$ the diagonal of $A$ in the Jacobi iteration now choose\n", "$$ M=(D+L) $$ where $L$ is the lower triangle of $A$.\n", "\n", "Please implement again a function that given a right hand side and a matrix implements this method. \n", "You can use the norm of the residual as a measure of convergence, i.e.,\n", "$$\n", "r^{(k)}=b-Ax^{(k)}.\n", "$$\n", "You can start with the zero vector for $x^{(0)}$.\n" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "# function should return solution x and \n", "# number of iteration, such it can be called as\n", "# (x,it)=gauss_seidel(A,b,maxiter,tol,verbose)\n", "# if verbose==0 the function should not print out messages\n", "def gauss_seidel(A,b,maxiter,tol,verbose):\n", " print(\"TODO\")\n", " xold=0 # dummy - to remove\n", " k=0 # dummy - to remove\n", "\n", " return (xold,k)\n" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TODO\n", "0\n", "0\n" ] } ], "source": [ "(x,it)=gauss_seidel(A2,b,1000,1e-5,verbose=1)\n", "print(x)\n", "print(it)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we want to examine the convergence of Jacobi and Gauss-Seidel iteration for different system sizes.\n", "Create a plot of iteration numbers for system sizes $n=5,10,15,20,25,30,40,50$. Use a matrix $A$ and vector $b$\n", "of your choice but ensure that this matrix will converge at all." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "# TODO\n", "\n", "\n", "# get the plot lib \n", "import matplotlib.pyplot as plt\n", "\n", "\n", "# TODO\n", "\n", "#plt.show\n", " \n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.9" } }, "nbformat": 4, "nbformat_minor": 4 }