18.335J/6.7310J: Introduction to Numerical Methods

This is the repository of course materials for the 18.335J/6.7310J course at MIT, taught by Dr. Shi Chen in Fall 2025.

Syllabus

Lectures: Monday/Wednesday 9:30am–11am in room 4-159

Office Hours: TBD in room 2-239B.

Contact: [email protected]

Topics: Advanced introduction to numerical linear algebra and related numerical methods. Topics include direct and iterative methods for linear systems, eigenvalue decompositions and QR/SVD factorizations, stability and accuracy of numerical algorithms, and the IEEE floating-point standard. Other topics may include memory hierarchies, nonlinear optimization, numerical integration, and FFTs. Problem sets will involve use of Julia, a Matlab-like environment (little or no prior experience required; you will learn as you go).

Launch a Julia environment in the cloud: Binder

Prerequisites: Understanding of linear algebra (18.06, 18.700, or equivalents). 18.335 is a graduate-level subject, however, so much more mathematical maturity, ability to deal with abstractions and proofs, and general exposure to mathematics is assumed than for 18.06!

Textbook: The primary textbook for the course is Numerical Linear Algebra by Trefethen and Bau.

Other Reading: Previous terms can be found in branches of the 18335 git repository. The course notes from 18.335 in much earlier terms can be found on OpenCourseWare. For a classical (and rigorous) treatment of rounding error analysis, see Higham's Accuracy and Stability of Numerical Algorithms. For a comprehensive introduction to classical methods in numerical linear algebra, see Golub and Van Loan's Matrix Computations. For a review of iterative methods, the online books Templates for the Solution of Linear Systems (Barrett et al.) and Templates for the Solution of Algebraic Eigenvalue Problems (Bai et al.) are useful surveys; see also Saad's Iterative Methods for Sparse Linear Systems. For a practical introduction to optimization, see Nocedal and Wright's Numerical Optimization. For a contemporary perspective, see Solomon's Numerical Algorithms.

Grading: 60% problem sets (three psets, released and due every other week), 40% final project (see details below).

Collaboration policy: Talk to anyone you want to and read anything you want to, with three exceptions: First, you may not refer to homework solutions from the previous terms in which I taught 18.335. Second, make a solid effort to solve a problem on your own before discussing it with classmates or googling. Third, no matter whom you talk to or what you read, write up the solution on your own, without having their answer in front of you.

Final Projects: The final project will explore a numerical topic of your choice that is related to the course material but has not been covered in depth during lectures or problem sets. The project consists of two components:

  • Proposal (due by 11:59 PM, Sunday, November 9): A one-page summary outlining your chosen topic and its relevance.
  • Final submission: You may choose one of the following formats:
    • Technical report (5–15 pages) reviewing an interesting numerical algorithm not covered in the course, due by the last day of class (11:59 PM, Wednesday, December 10).
    • Technical presentation (25-minute in-class lecture). Limited spots are available, and you may collaborate with one other classmate. You must also submit your code.

See this page for more details.

Assignments

  • Pset 1 will be available on September 10, and will be due on October 5 at 11:59pm.

  • Pset 2 will be available on October 1, and will be due on November 2 at 11:59pm.

  • Pset 3 will be available on October 29, and will be due on November 23 at 11:59pm.

Lecture Summaries and Handouts

Lecture 1 (September 3)

  • Scope of Numerical Methods and the key concerns: A broad introduction to the vast field of numerical methods. Unlike pure mathematics, numerical analysis must account for performance (efficiency) and accuracy (precision and stability).
  • The Importance of Numerical Linear Algebra (NLA): Why is NLA fundamental? Through examples, we demonstrate how NLA naturally arises in solving a wide range of problems in continuous mathematics.

Further Reading: L.N. Trefethen, The Definition of Numerical Analysis.

Lecture 2 (September 8)

  • Floating point arithmetic, exact rounding, and the "fundamental axiom"
  • Stability of summation algorithms
  • Forward and backward stability

Further Reading: L.N. Trefethen, Lectures 13 and 14. Also, see the notebook about floating point.
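
Below is a minimal Julia sketch (function names are mine) contrasting naive recursive summation with Kahan's compensated summation, one of the summation algorithms whose stability we analyze. Note that Julia's built-in sum already uses pairwise summation, which is why it is not used as the "naive" baseline here.

```julia
# Naive vs. Kahan (compensated) summation, accumulating in Float32 and
# comparing against a Float64 reference sum.
function naive_sum(x)
    s = zero(eltype(x))
    for xi in x
        s += xi                  # each += commits a rounding error ~ |s| * eps
    end
    return s
end

function kahan_sum(x)
    s = zero(eltype(x))
    c = zero(eltype(x))          # running compensation for lost low-order bits
    for xi in x
        y = xi - c
        t = s + y                # big + small: low-order bits of y are lost...
        c = (t - s) - y          # ...and recovered here
        s = t
    end
    return s
end

x = rand(Float32, 10^7)
exact = sum(Float64.(x))         # reference in higher precision
println("naive relative error: ", abs(naive_sum(x) - exact) / exact)
println("Kahan relative error: ", abs(kahan_sum(x) - exact) / exact)
```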

Lecture 3 (September 10)

  • Recap on forward and backward stability, and condition numbers
  • Accuracy ⇐ backward stable algorithm + well-conditioned problem
  • Vector and matrix norms

Further Reading: L. N. Trefethen, Lectures 12 and 15. Also, see the notes about stability of sum and equivalence of norms.
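
As a quick numerical check of the norm-equivalence constants from the notes, here is a small Julia snippet (the bounds below are the standard ones on ℝⁿ):

```julia
using LinearAlgebra

# Equivalence of vector norms on R^n:
#   ‖x‖∞ ≤ ‖x‖₂ ≤ √n ‖x‖∞   and   ‖x‖₂ ≤ ‖x‖₁ ≤ √n ‖x‖₂
n = 1000
x = randn(n)
@assert norm(x, Inf) <= norm(x, 2) <= sqrt(n) * norm(x, Inf)
@assert norm(x, 2) <= norm(x, 1) <= sqrt(n) * norm(x, 2)
```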

Lecture 4 (September 15)

  • Solving Ax = b
  • Condition number of A
  • Orthogonal/Unitary matrices
  • The singular value decomposition (SVD)

Further Reading: L. N. Trefethen, Lectures 4 and 5.
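
A small Julia illustration of the lecture's punchline that the 2-norm condition number of A is the ratio of its extreme singular values (the test matrix is a random example):

```julia
using LinearAlgebra

A = randn(50, 50)
F = svd(A)                                  # A = U * Diagonal(S) * Vt
@assert F.U * Diagonal(F.S) * F.Vt ≈ A
κ = F.S[1] / F.S[end]                       # σ_max / σ_min
@assert κ ≈ cond(A, 2)
println("condition number: ", κ)
```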

Lecture 5 (September 17)

  • Recap on SVD and its applications
  • Least-squares solutions to overdetermined linear systems
  • Normal equations and pseudoinverse
  • QR factorization

Further Reading: L. N. Trefethen, Lectures 11, 18 and 19. Also, see the notes about least-squares.
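
Here is a hedged Julia sketch comparing three routes to the same least-squares solution; for a well-conditioned A they agree, but the normal equations square the condition number, which is why QR is the default:

```julia
using LinearAlgebra

m, n = 200, 10
A = randn(m, n)                   # well-conditioned with high probability
b = randn(m)

x_qr     = A \ b                  # Julia's backslash uses a QR factorization here
x_normal = (A' * A) \ (A' * b)    # normal equations: squares the condition number
x_pinv   = pinv(A) * b            # SVD-based pseudoinverse

@assert x_qr ≈ x_normal ≈ x_pinv
# Optimality: the residual is orthogonal to the column space of A.
println(norm(A' * (b - A * x_qr)))
```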

Lecture 6 (September 22)

  • Gram-Schmidt orthogonalization and its stability
  • Householder transform, Householder QR algorithm and its stability

Further Reading: L. N. Trefethen, Lectures 7, 8 and 10. Also, see the notes about Gram-Schmidt.
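
A small Julia experiment (the Hilbert-like test matrix is my choice) showing the loss of orthogonality of classical Gram-Schmidt versus the modified variant:

```julia
using LinearAlgebra

# Classical vs. modified Gram-Schmidt. MGS orthogonalizes against the
# *partially orthogonalized* vector one projection at a time, which
# dramatically improves the orthogonality of the computed Q.
function gram_schmidt(A; modified::Bool)
    m, n = size(A)
    Q = zeros(m, n)
    R = zeros(n, n)
    for j in 1:n
        v = A[:, j]
        for i in 1:j-1
            R[i, j] = modified ? dot(Q[:, i], v) : dot(Q[:, i], A[:, j])
            v -= R[i, j] * Q[:, i]
        end
        R[j, j] = norm(v)
        Q[:, j] = v / R[j, j]
    end
    return Q, R
end

# Hilbert-like, badly conditioned matrix (just an example):
A = [1.0 / (i + j - 1) for i in 1:50, j in 1:10]
Qc, _ = gram_schmidt(A; modified=false)
Qm, _ = gram_schmidt(A; modified=true)
println("CGS loss of orthogonality: ", opnorm(Qc' * Qc - I))
println("MGS loss of orthogonality: ", opnorm(Qm' * Qm - I))
```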

Lecture 7 (September 24)

  • Solving Ax=b via QR factorization: flops and stability
  • Solving Ax=b via LU factorization: Gaussian elimination, flops and stability

Further Reading: L. N. Trefethen, Lectures 20, 21, and 22.
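
A quick Julia comparison of the two routes to Ax = b discussed here: LU with partial pivoting costs about (2/3)n³ flops versus (4/3)n³ for Householder QR, and both give small residuals in practice:

```julia
using LinearAlgebra

n = 500
A = randn(n, n)
b = randn(n)
x_lu = lu(A) \ b     # Gaussian elimination with partial pivoting, ~(2/3)n³ flops
x_qr = qr(A) \ b     # Householder QR, ~(4/3)n³ flops
println("relative difference:  ", norm(x_lu - x_qr) / norm(x_qr))
println("LU relative residual: ", norm(b - A * x_lu) / norm(b))
```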

Lecture 8 (September 29)

  • Pivoting in LU factorization
  • LU factorization for symmetric positive definite matrix: Cholesky factorization

Further Reading: L. N. Trefethen, Lectures 21 and 22. Can you bring GE's instability to life in this example?
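
Since the linked example did not survive this copy, here is my reconstruction of the classic worst case for partial pivoting, plus the Cholesky factorization from the lecture:

```julia
using LinearAlgebra

# Classic worst case for partial pivoting: 1 on the diagonal, -1 below it,
# 1 in the last column. No rows are swapped, and the entries in the last
# column of U double at every elimination step: growth factor 2^(n-1).
n = 60
A = Matrix(1.0I, n, n) .- tril(ones(n, n), -1)
A[:, end] .= 1.0
F = lu(A)
println("growth factor: ", maximum(abs, F.U) / maximum(abs, A))   # ≈ 2^59

# For symmetric positive definite matrices, no pivoting is needed at all:
S = Symmetric(A' * A + n * I)      # an SPD matrix (just an example)
C = cholesky(S)                    # S = C.U' * C.U, ~(1/3)n³ flops
@assert C.U' * C.U ≈ S
```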

Lecture 9 (October 1)

  • Iterative methods for linear systems
  • Convergence of vector sequence
  • Matrix splittings and examples
  • Convergence of stationary iterative methods

Further Reading: Y. Saad, Iterative Methods for Sparse Linear Systems, Chapter 4.
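
The convergence criterion ρ(G) < 1 for the iteration matrix G = I - M⁻¹A is easy to check numerically; a minimal Julia sketch with the Jacobi splitting on a diagonally dominant matrix (my example):

```julia
using LinearAlgebra

# A stationary iteration comes from a splitting A = M - N (M easy to invert):
#   x ← x + M \ (b - A x),   error ← G * error,   G = I - M \ A,
# and converges iff the spectral radius ρ(G) < 1.
n = 100
A = Matrix(SymTridiagonal(3.0 * ones(n), -ones(n - 1)))  # diagonally dominant
b = randn(n)
M = Diagonal(A)                                # Jacobi splitting: M = diag(A)
G = I - M \ A                                  # iteration matrix
println("spectral radius: ", maximum(abs, eigvals(G)))   # ≈ 2/3 < 1

x = zeros(n)
for _ in 1:200
    x .+= M \ (b - A * x)                      # broadcasted in-place update
end
println("relative residual: ", norm(b - A * x) / norm(b))
```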

Lecture 10 (October 6)

  • Jacobi iteration
  • Gauss-Seidel iteration
  • Successive over-relaxation (SOR) and optimal relaxation parameter

Further Reading: Y. Saad, Iterative Methods for Sparse Linear Systems, Chapter 4.
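
A Julia comparison of the three splittings on the 1D Poisson matrix, for which the optimal SOR parameter is known in closed form (the iteration-count harness is mine):

```julia
using LinearAlgebra

# Count iterations of the stationary iteration x ← x + M \ (b - A x) for
# the Jacobi, Gauss-Seidel, and SOR splittings A = M - N.
function splitting_iterations(A, M, b; tol=1e-8, maxiter=10^5)
    x = zeros(length(b))
    for k in 1:maxiter
        x += M \ (b - A * x)
        norm(b - A * x) <= tol * norm(b) && return k
    end
    return maxiter
end

n = 50
A = Matrix(SymTridiagonal(2.0 * ones(n), -ones(n - 1)))   # 1D Poisson matrix
b = randn(n)
D = Diagonal(A); L = LowerTriangular(A)
ω = 2 / (1 + sin(π / (n + 1)))          # optimal SOR parameter for this matrix
println("Jacobi:       ", splitting_iterations(A, D, b))
println("Gauss-Seidel: ", splitting_iterations(A, L, b))
println("SOR(ω*):      ", splitting_iterations(A, D / ω + tril(A, -1), b))
```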

Lecture 11 (October 8)

  • Review basics of eigenvalue problems
  • Schur factorization
  • Stability/perturbation theory of eigenvalue problems

Further Reading: L. N. Trefethen, Lectures 24 and 25.
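
The Schur factorization underlies everything in this part of the course, and Julia exposes it directly:

```julia
using LinearAlgebra

A = randn(6, 6)
F = schur(A)                 # A = Z * T * Z' with Z unitary, T (quasi) triangular
@assert F.Z * F.T * F.Z' ≈ A
# For real A, T is quasi-triangular: 1x1 blocks are real eigenvalues,
# 2x2 blocks hold complex-conjugate pairs.
println(F.values)            # the eigenvalues of A
```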

Lecture 12 (October 15)

  • Power methods
  • Rayleigh quotient methods
  • Simultaneous power iteration

Further Reading: L. N. Trefethen, Lectures 27 and 28.
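
A minimal Julia sketch of power iteration with a Rayleigh-quotient eigenvalue estimate (iteration count and test matrix are my choices; convergence is linear with ratio |λ₂/λ₁|):

```julia
using LinearAlgebra

# Power iteration converges to the dominant eigenvector; the Rayleigh
# quotient x' A x (for ‖x‖ = 1) then estimates the dominant eigenvalue.
function power_method(A; iters=500)
    x = normalize(randn(size(A, 1)))
    for _ in 1:iters
        x = normalize(A * x)
    end
    return dot(x, A * x), x      # Rayleigh quotient and eigenvector estimate
end

A = Symmetric(randn(100, 100))
λ, x = power_method(A)
println("residual ‖Ax - λx‖: ", norm(A * x - λ * x))
```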

Lecture 13 (October 20)

  • QR algorithm
  • Upper Hessenberg factorization
  • QR algorithm with shift

Further Reading: L. N. Trefethen, Lectures 26 and 29.
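
A toy Julia version of the shifted QR algorithm with deflation (Rayleigh-quotient shift, no Hessenberg reduction, so each step here costs O(n³) rather than the O(n²) of a real implementation):

```julia
using LinearAlgebra

# Shifted QR iteration: factor T - μI = QR, set T ← RQ + μI (a similarity
# transform). When the last subdiagonal entry is negligible, deflate.
function qr_eigvals(A; tol=1e-12, maxiter=1000)
    T = copy(A)
    vals = Float64[]
    while size(T, 1) > 1
        n = size(T, 1)
        for _ in 1:maxiter
            μ = T[n, n]                       # Rayleigh-quotient shift
            Q, R = qr(T - μ * I)
            T = R * Q + μ * I
            abs(T[n, n-1]) < tol * norm(T) && break
        end
        push!(vals, T[n, n])                  # converged eigenvalue
        T = T[1:n-1, 1:n-1]                   # deflate and continue
    end
    push!(vals, T[1, 1])
    return vals
end

A = Symmetric(randn(8, 8))
println(maximum(abs, sort(qr_eigvals(Matrix(A))) - sort(eigvals(A))))
```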

Lecture 14 (October 22)

  • Iterative methods for sparse matrices, and Krylov subspaces
  • Galerkin condition and Rayleigh-Ritz projection for eigenvalue problems
  • Arnoldi's iteration for finding orthonormal basis in Krylov subspaces

Further Reading: L. N. Trefethen, Lectures 32 and 33.
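
A direct Julia transcription of the Arnoldi iteration (modified Gram-Schmidt version; breakdown handling omitted), verifying the Arnoldi relation A Qₖ = Qₖ₊₁ H̄ₖ:

```julia
using LinearAlgebra

# k steps of Arnoldi: an orthonormal Q whose columns span the Krylov
# subspace span{b, Ab, ..., A^(k-1) b}, and a (k+1) x k upper Hessenberg H
# with A * Q[:, 1:k] = Q * H.
function arnoldi(A, b, k)
    n = length(b)
    Q = zeros(n, k + 1)
    H = zeros(k + 1, k)
    Q[:, 1] = b / norm(b)
    for j in 1:k
        v = A * Q[:, j]
        for i in 1:j                     # modified Gram-Schmidt
            H[i, j] = dot(Q[:, i], v)
            v -= H[i, j] * Q[:, i]
        end
        H[j+1, j] = norm(v)
        Q[:, j+1] = v / H[j+1, j]
    end
    return Q, H
end

A = randn(200, 200); b = randn(200); k = 20
Q, H = arnoldi(A, b, k)
println(norm(A * Q[:, 1:k] - Q * H))     # ≈ 0: the Arnoldi relation
println(opnorm(Q' * Q - I))              # ≈ 0: orthonormality
```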

Lecture 15 (October 27)

  • Arnoldi's method for Hermitian matrices, and Lanczos algorithm for sparse eigenvalue problems
  • Convergence of Lanczos algorithm and Arnoldi's method

Further Reading: L. N. Trefethen, Lectures 34 and 36. You can read about the implicitly restarted Arnoldi iteration and the bells and whistles that made it a standard iterative eigensolver for non-Hermitian sparse matrices.
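
For Hermitian A the Hessenberg matrix becomes tridiagonal and Arnoldi collapses to the Lanczos three-term recurrence; a Julia sketch (no reorthogonalization, so orthogonality of Q degrades in finite precision, as discussed in lecture):

```julia
using LinearAlgebra

# Lanczos: Arnoldi specialized to Hermitian A, so only the previous two
# basis vectors are needed. Returns the tridiagonal T and the basis Q.
function lanczos(A, b, k)
    n = length(b)
    α = zeros(k); β = zeros(k - 1)
    Q = zeros(n, k)
    q = b / norm(b); q_old = zeros(n)
    for j in 1:k
        Q[:, j] = q
        w = A * q
        α[j] = dot(q, w)
        w -= α[j] * q
        j > 1 && (w -= β[j-1] * q_old)
        j == k && break
        β[j] = norm(w)
        q_old, q = q, w / β[j]
    end
    return SymTridiagonal(α, β), Q
end

A = Symmetric(randn(300, 300))
T, Q = lanczos(A, randn(300), 30)
# Ritz values (eigenvalues of T) approximate A's extreme eigenvalues first:
println("largest Ritz value: ", maximum(eigvals(T)))
println("largest eigenvalue: ", maximum(eigvals(A)))
```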

Lecture 16 (October 29)

  • Krylov subspace methods for linear systems
  • Projection methods
  • Arnoldi's method for linear systems

Further Reading: Y. Saad, Iterative Methods for Sparse Linear Systems, Sections 5.1 and 6.4.
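
Combining Arnoldi with the Galerkin condition r ⟂ 𝒦ₖ gives the full orthogonalization method (FOM); a sketch that reuses the arnoldi function from the Lecture 14 sketch above (the test matrix is my construction, shifted to keep the spectrum away from the origin):

```julia
using LinearAlgebra

# FOM: the Galerkin condition reduces Ax = b (with x ∈ K_k) to the small
# k x k Hessenberg system H y = ‖b‖ e₁. Assumes arnoldi(A, b, k) from above.
function fom(A, b, k)
    Q, H = arnoldi(A, b, k)
    y = H[1:k, :] \ (norm(b) * [1; zeros(k - 1)])
    return Q[:, 1:k] * y
end

n = 200
A = randn(n, n) / sqrt(n) + 2I      # spectrum ≈ unit disk centered at 2
b = randn(n)
x = fom(A, b, 40)
println("relative residual: ", norm(b - A * x) / norm(b))
```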

Lecture 17 (November 3)

  • GMRES as a projection problem and a least-squares problem
  • Convergence of GMRES

Further Reading: L. N. Trefethen, Lecture 35.
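
A compact Julia sketch of GMRES as a small least-squares problem (no restarts or Givens-rotation updates; again reuses arnoldi from the Lecture 14 sketch):

```julia
using LinearAlgebra

# GMRES: minimize ‖b - A x‖ over x ∈ K_k, which by the Arnoldi relation is
# the (k+1) x k least-squares problem  min_y ‖ ‖b‖ e₁ - H̄ y ‖.
function gmres_sketch(A, b, k)
    Q, H = arnoldi(A, b, k)
    y = H \ [norm(b); zeros(k)]     # small dense least-squares solve
    return Q[:, 1:k] * y
end

n = 200
A = randn(n, n) / sqrt(n) + 2I      # spectrum ≈ unit disk centered at 2
b = randn(n)
for k in (5, 10, 20, 40)            # residual shrinks roughly like (1/2)^k here
    x = gmres_sketch(A, b, k)
    println("k = $k: relative residual = ", norm(b - A * x) / norm(b))
end
```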

Lecture 18 (November 5)

  • Steepest descent
  • Conjugate Gradient (CG)
  • Convergence of CG

Further Reading: L. N. Trefethen, Lecture 38, and Introduction to CG.
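
The textbook CG recurrence in Julia (the SPD test matrix is my construction); the A-norm error bound with the √κ factor is what the convergence analysis gives:

```julia
using LinearAlgebra

# Conjugate gradient for symmetric positive definite A. One matrix-vector
# product per iteration; the A-norm error decays like 2((√κ-1)/(√κ+1))^k.
function cg(A, b; tol=1e-10, maxiter=length(b))
    x = zeros(length(b))
    r = copy(b); p = copy(r)
    ρ = dot(r, r)
    for _ in 1:maxiter
        Ap = A * p
        α = ρ / dot(p, Ap)
        x += α * p
        r -= α * Ap
        ρ_new = dot(r, r)
        sqrt(ρ_new) <= tol * norm(b) && break
        p = r + (ρ_new / ρ) * p
        ρ = ρ_new
    end
    return x
end

n = 400
A = Symmetric(randn(n, n) / sqrt(n) + 3I)   # SPD with high probability
b = randn(n)
x = cg(A, b)
println("relative residual: ", norm(b - A * x) / norm(b))
```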

Lecture 19 (November 12)

  • Preconditioning and CG
  • Nonlinear CG

Further Reading: J. Nocedal and S. J. Wright, Chapter 5. Also, see the notes on different options for solving Ax=b.
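
A sketch of preconditioned CG with the simplest (Jacobi, i.e. diagonal) preconditioner; on a badly scaled SPD matrix even this cheap M cuts the iteration count dramatically (the construction and counts below are illustrative choices of mine, not from the lecture):

```julia
using LinearAlgebra

# Preconditioned CG: the CG recurrences with z = M \ r in place of r.
function pcg(A, b, M; tol=1e-10, maxiter=10_000)
    x = zeros(length(b)); r = copy(b)
    z = M \ r; p = copy(z); ρ = dot(r, z)
    for k in 1:maxiter
        Ap = A * p
        α = ρ / dot(p, Ap)
        x += α * p
        r -= α * Ap
        norm(r) <= tol * norm(b) && return x, k
        z = M \ r
        ρ_new = dot(r, z)
        p = z + (ρ_new / ρ) * p
        ρ = ρ_new
    end
    return x, maxiter
end

# A badly scaled SPD matrix, where Jacobi preconditioning helps a lot:
n = 300
D = Diagonal(10.0 .^ range(1, 6, length=n))     # widely varying scales
A = Symmetric(Matrix(D) + randn(n, n) / sqrt(n))
b = randn(n)
_, k_plain = pcg(A, b, I)                       # M = I recovers plain CG
_, k_jac   = pcg(A, b, Diagonal(A))             # Jacobi preconditioner
println("CG iterations: $k_plain,  Jacobi-PCG iterations: $k_jac")
```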

Lecture 20 (November 17)

  • Newton method and quasi-Newton methods

Further Reading: J. Nocedal and S. J. Wright, Chapter 6.
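
A minimal Julia illustration of the Newton step H Δ = -∇f on a toy separable objective (my example); quasi-Newton methods such as BFGS replace H by a secant approximation built from gradient differences:

```julia
using LinearAlgebra

# Newton's method for min f(x), with f(x) = Σ (x_i^4/4 + x_i^2/2 - x_i).
f(x)    = sum(x .^ 4 ./ 4 .+ x .^ 2 ./ 2 .- x)
∇f(x)   = x .^ 3 .+ x .- 1
Hess(x) = Diagonal(3 .* x .^ 2 .+ 1)      # Hessian (diagonal for this f)

x = zeros(5)
for k in 1:6
    x .-= Hess(x) \ ∇f(x)                 # Newton step: solve H Δ = ∇f
    println("iter $k: ‖∇f(x)‖ = ", norm(∇f(x)))   # quadratic convergence
end
```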

Lecture 21 (November 19)

  • Sampling as optimization over the space of probability measures
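
One concrete instance of this viewpoint is the unadjusted Langevin algorithm, which can be read as a time-discretized Wasserstein gradient flow of the KL divergence to the target; a hedged Julia sketch for a 1D standard normal target (my example, not code from the lecture):

```julia
using Statistics

# ULA: x ← x - η ∇V(x) + √(2η) ξ with ξ ~ N(0,1), targeting density ∝ exp(-V).
function ula(∇V, x0, η, nsteps)
    x = x0
    samples = Float64[]
    for _ in 1:nsteps
        x += -η * ∇V(x) + sqrt(2η) * randn()
        push!(samples, x)
    end
    return samples
end

∇V(x) = x                           # V(x) = x²/2: standard normal target
s = ula(∇V, 0.0, 0.01, 10^5)
println("mean ≈ ", mean(s), ", variance ≈ ", var(s))   # ≈ 0 and ≈ 1 (O(η) bias)
```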

Lecture 22 (November 24)

  • Trace estimation
  • Optimal low-rank approximation and the SVD
  • Randomized SVD

Further Reading: See the now classic review paper by Halko, Martinsson, and Tropp for an introduction to randomized range-finders and approximate matrix decompositions.
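
The basic Halko-Martinsson-Tropp recipe is only a few lines of Julia (the oversampling parameter and test-matrix construction below are my choices):

```julia
using LinearAlgebra

# Randomized SVD sketch: sample the range of A with a Gaussian test matrix,
# orthonormalize, then take a small dense SVD of the projected problem.
function rsvd(A, k; p=10)
    m, n = size(A)
    Ω = randn(n, k + p)              # Gaussian test matrix, p = oversampling
    Q = Matrix(qr(A * Ω).Q)          # orthonormal basis for the sampled range
    B = Q' * A                       # small (k+p) x n projected problem
    F = svd(B)
    return Q * F.U[:, 1:k], F.S[1:k], F.V[:, 1:k]
end

# A matrix with rapidly decaying singular values (example construction):
m, n = 500, 300
U = Matrix(qr(randn(m, m)).Q); V = Matrix(qr(randn(n, n)).Q)
A = U[:, 1:n] * Diagonal(exp.(-(1:n) ./ 10)) * V'
Uk, Sk, Vk = rsvd(A, 20)
println("rank-20 error: ", opnorm(A - Uk * Diagonal(Sk) * Vk'))
println("σ₂₁ (optimal): ", exp(-21 / 10))
```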

Lecture 23 (November 26)

  • Randomized range-finders
  • Accuracy of randomized range-finders
  • Oversampling and failure probability

Further Reading: See for example this repo for a Julia implementation of randomized low-rank approximation.
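
A small experiment on the effect of oversampling p on the range-finder error ‖(I - QQ*)A‖, using the same synthetic matrix as in the sketch above; the expected error approaches σ₍ₖ₊₁₎ as p grows, and the failure probability decays rapidly:

```julia
using LinearAlgebra

m, n, k = 500, 300, 20
U = Matrix(qr(randn(m, m)).Q); V = Matrix(qr(randn(n, n)).Q)
A = U[:, 1:n] * Diagonal(exp.(-(1:n) ./ 10)) * V'
for p in (0, 2, 5, 10)
    Q = Matrix(qr(A * randn(n, k + p)).Q)           # randomized range-finder
    println("p = $p: ‖(I - QQ')A‖ = ", opnorm(A - Q * (Q' * A)))
end
```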

Lecture 24 (December 1)

  • Why (and when) do we count flops?
  • Memory hierarchy and the "ideal cache"
  • Cache-aware matrix-matrix multiplication
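
A Julia sketch of a blocked (tiled) matrix multiply in the spirit of this lecture; the point is not to beat BLAS but to see that tiling increases reuse of data already in cache (the block size b is a tuning knob, 64 is my arbitrary default):

```julia
using LinearAlgebra

# Blocked matrix multiplication: accumulate C in b x b tiles so the working
# set (three tiles) fits in cache; each loaded tile is then reused ~b times,
# reducing memory traffic relative to the naive triple loop.
function blocked_matmul!(C, A, B; b=64)
    n = size(A, 1)
    fill!(C, 0)
    for jj in 1:b:n, kk in 1:b:n, ii in 1:b:n
        for j in jj:min(jj + b - 1, n), k in kk:min(kk + b - 1, n)
            @inbounds for i in ii:min(ii + b - 1, n)   # column-major inner loop
                C[i, j] += A[i, k] * B[k, j]
            end
        end
    end
    return C
end

n = 512
A, B, C = randn(n, n), randn(n, n), zeros(n, n)
blocked_matmul!(C, A, B)
println(norm(C - A * B))     # ≈ 0; BLAS gemm does this tiling and much more
```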

Lecture 25 (December 3)

Student Presentation I

Lecture 26 (December 8)

Student Presentation II

Lecture 27 (December 10)

Student Presentation III
