Numerical linear algebra

From Simple English Wikipedia, the free encyclopedia

In the field of numerical analysis, numerical linear algebra is the area that studies methods for solving problems of linear algebra by numerical computation.[1][2][3] The following problems are considered in this area:

  1. Numerically solving a system of linear equations.[4]
  2. Numerically solving an eigenvalue problem for a given matrix.[5]
  3. Computing approximate values of a matrix-valued function.[6]
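The first two problems can be illustrated with a small NumPy sketch. The matrix and vector below are illustrative values, not taken from any source:

```python
import numpy as np

# A small symmetric system A x = b, solved numerically (problem 1),
# followed by the eigenvalues of A (problem 2).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)        # numerical solution of A x = b
eigvals = np.linalg.eigvalsh(A)  # eigenvalues of the symmetric matrix A

print(x)        # approximately [1/11, 7/11]
print(eigvals)  # (7 - sqrt(5))/2 and (7 + sqrt(5))/2
```

Real applications involve much larger matrices, often sparse, for which the iterative methods discussed below are used instead of direct solvers.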

Numerical errors can occur in any kind of numerical computation, including numerical linear algebra. Errors in numerical linear algebra are studied in another area called "validated numerics".[7]
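A small sketch of how such errors appear: in the nearly singular system below (the numbers are illustrative), the computed solution is contaminated by rounding error even though the residual looks tiny:

```python
import numpy as np

# A mildly ill-conditioned system whose exact solution is x = [1, 1].
# Floating-point arithmetic returns only an approximation of it.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10]])
b = np.array([2.0, 2.0 + 1e-10])

x = np.linalg.solve(A, b)
residual = b - A @ x

print(x)         # close to [1, 1] but not exact
print(residual)  # very small, even though x carries rounding error
```

A small residual does not by itself guarantee a small error in x; making that guarantee rigorous is exactly what validated numerics is about.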

Latest Studies

Methods for numerical linear algebra have been created by numerical analysts over many generations.[1][2][3] Some of them have since been abandoned because of their computation speed or accuracy.[1][2][3] Currently, the following methods are widely investigated:

  • QZ method[8]
  • dqds method (differential quotient difference with shift)[9]
  • oqds method (orthogonal quotient difference with shift)[10][11][12]
  • MRRR method (multiple relatively robust representations)[13]
  • MRTR method[14]
  • Sakurai-Sugiura method[15]
  • CIRR method (Rayleigh-Ritz type method with contour integral)[16]
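The QZ method, for example, targets the generalized eigenvalue problem A v = λ B v. A minimal sketch of that problem (with illustrative matrices): QZ works on the pair (A, B) directly and stays robust even when B is nearly singular, but for this well-conditioned toy pair the same eigenvalues can be obtained from inv(B) A:

```python
import numpy as np

# The generalized eigenproblem A v = lambda B v that the QZ method solves.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# For well-conditioned B, this reduces to a standard eigenproblem.
lam = np.linalg.eigvals(np.linalg.inv(B) @ A)
print(np.sort(lam))  # 1.0 and 2.5
```

Forming inv(B) explicitly loses accuracy when B is ill-conditioned, which is why the QZ method avoids it.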

Krylov Subspace Methods

In the field of numerical linear algebra, numerical methods based on the theory of Krylov subspaces are known as Krylov subspace methods. They are considered one of the most successful developments in numerical linear algebra.[17][18] Examples include:

  • MINRES (minimal residual) method[19]
  • CR (conjugate residual) method[20]
  • QMR type methods
    • QMR (quasi minimal residual) method[21][22]
    • QMR-SYM method[23][24]
    • TFQMR (transpose free quasi minimal residual) method[25]
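The CR method from the list above is short enough to sketch in full. This is a textbook formulation for a symmetric matrix, written here from the standard recurrences (the test matrix is illustrative):

```python
import numpy as np

def conjugate_residual(A, b, tol=1e-10, max_iter=1000):
    """Minimal conjugate residual (CR) sketch for a symmetric matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(max_iter):
        alpha = rAr / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr   # keeps the A-residuals orthogonal
        rAr = rAr_new
        p = r + beta * p
        Ap = Ar + beta * Ap    # update A @ p without an extra product
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, 1.0])
x = conjugate_residual(A, b)
print(np.linalg.norm(b - A @ x))  # small residual
```

Each iteration needs only one matrix-vector product, which is the key property that makes Krylov methods attractive for large sparse matrices.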

Conjugate Gradient Methods

The conjugate gradient (CG) method is one of the best methods for solving linear equations. It was originally limited to specific linear systems (symmetric positive definite ones).[26] To overcome this limitation, many kinds of CG variants have been created:

  • CGS (conjugate gradient squared method)[27]
  • PCG (preconditioned conjugate gradient method)
  • SCG (scaled conjugate gradient)[28]
  • ICCG (incomplete Cholesky conjugate gradient method)
  • COCG (conjugate orthogonal conjugate gradient method)[29]
  • GPBiCG[30]
  • Stabilized methods
    • BiCGSTAB (biconjugate gradient stabilized method)[31]
    • BiCGSTAB2[32]
    • QMRCGSTAB[33]
    • GBi-CGSTAB[34]
  • Block versions[35][36][37][38][39][40] (dividing a given matrix into block matrices is a frequently used technique in numerical linear algebra[1][2][3])
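For reference, the original Hestenes-Stiefel CG iteration that all of these variants build on can be sketched as follows (the test system is illustrative):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook CG sketch for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # new A-conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(x)  # approximately [1/11, 7/11]
```

The variants listed above change this skeleton in different ways: preconditioning transforms the system before iterating, the stabilized and QMR-type methods handle nonsymmetric matrices, and block versions solve for several right-hand sides at once.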

Validated Numerics for Numerical Linear Algebra

While the high-accuracy and high-speed methods above have been created, some experts have studied how to evaluate numerical errors in numerical linear algebra.[7] The following are their results:

  • Validating numerical solutions of a given system of linear equations[41][42]
    • Validated numerics for ill-conditioned problems[43][44][45] (Ill-conditioned problems are problems which are hard to compute accurately[1][2][3])
    • Pre-conditioning[46][47][48] (Pre-conditioning is a procedure to allow the given system of linear equations to be easily solved[1][2][3])
  • Validating numerically obtained eigenvalues[49][50][51]
    • Validating numerical solutions of inverse eigenvalue problems[52][53] (In an inverse eigenvalue problem, an unknown matrix is reconstructed from given eigenvalues)
  • Rigorously computing determinants[54]
  • Validating numerical solutions of matrix equations[55][56][57][58][59][60][61]
  • Computing matrix functions rigorously[66][67][68] (Approximate computation has been studied by N. J. Higham and others[62][63][64][65])
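The structure of a typical verification result for linear systems can be sketched as follows. With R an approximate inverse of A, if ||I - RA|| < 1 then the true solution x satisfies ||x - x̂|| ≤ ||R(b - Ax̂)|| / (1 - ||I - RA||). A genuine validated method evaluates these norms with directed rounding or interval arithmetic; plain floating point is used here only to show the shape of the bound, with illustrative data:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x_hat = np.linalg.solve(A, b)   # approximate solution to be verified
R = np.linalg.inv(A)            # approximate inverse of A

# Contraction check: if G < 1, A is provably nonsingular (up to the
# rounding in computing G itself) and the error bound below is valid.
G = np.linalg.norm(np.eye(2) - R @ A, np.inf)
if G < 1:
    bound = np.linalg.norm(R @ (b - A @ x_hat), np.inf) / (1 - G)
    print(bound)  # a tiny rigorous-style bound on ||x - x_hat||
```

This is the idea behind fast verification methods such as those of Oishi and Rump;[42] the real algorithms control every rounding error in the evaluation of G and of the bound.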

Software

Today, there are many software tools for numerical linear algebra. One of the most famous is MATLAB (matrix laboratory),[69][70][71] made by MathWorks.

References

  1. Demmel, J. W. (1997). Applied numerical linear algebra. SIAM.
  2. Ciarlet, P. G., Miara, B., & Thomas, J. M. (1989). Introduction to numerical linear algebra and optimization. Cambridge University Press.
  3. Trefethen, Lloyd; Bau III, David (1997). Numerical Linear Algebra (1st ed.). Philadelphia: SIAM.
  4. Saad, Yousef (2003). Iterative methods for sparse linear systems (2nd ed.). SIAM.
  5. David S. Watkins (2008), The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods, SIAM.
  6. Higham, N. J. (2008). Functions of matrices: theory and computation. SIAM.
  7. Rump, S. M. (2010). Verification methods: Rigorous results using floating-point arithmetic. Acta Numerica, 19, 287-449.
  8. Moler, C. B., & Stewart, G. W. (1973). An algorithm for generalized matrix eigenvalue problems. SIAM Journal on Numerical Analysis, 10(2), 241-256.
  9. Fernando, K. V., & Parlett, B. N. (1994). Accurate singular values and differential qd algorithms. Numerische Mathematik, 67(2), 191-229.
  10. U. von Matt, The orthogonal QD algorithm, SIAM J. Sci. Comput., 18 (1997), 1163–1186.
  11. Araki, S., Kimura, K., Yamamoto, Y., & Nakamura, Y. (2015). Implementation details of an extended oqds algorithm for singular values. JSIAM Letters, 7, 9-12.
  12. Fast Computation Method of Column Space by using the DQDS Method and the OQDS Method In Proceedings of 2018 International Conference on Parallel and Distributed Processing Techniques and Applications, 333-339, 2018/07, Sho Araki, Hiroki Tanaka, Masami Takata, Kinji Kimura, Yoshimasa Nakamura.
  13. Dhillon, I. S., Parlett, B. N., & Vömel, C. (2006). The design and implementation of the MRRR algorithm. ACM Transactions on Mathematical Software (TOMS), 32(4), 533-560.
  14. Abe, K., Zhang, S. L., & Mitsui, T. (1997). MRTR method: An iterative method based on the three-term recurrence formula of CG-type for nonsymmetric matrix. JSIAM, 7, 37-50.
  15. Sakurai, T., & Sugiura, H. (2003). A projection method for generalized eigenvalue problems using numerical integration. Journal of Computational and Applied Mathematics, 159(1), 119-128.
  16. Sakurai, T., & Tadano, H. (2007). CIRR: a Rayleigh-Ritz type method with contour integral for generalized eigenvalue problems. Hokkaido mathematical journal, 36(4), 745-757.
  17. David S. Watkins (2008), The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods, SIAM.
  18. Liesen, J., & Strakos, Z. (2012). Krylov subspace methods: principles and analysis. OUP Oxford.
  19. Christopher C. Paige and Michael A. Saunders, Solution of sparse indefinite systems of linear equations, SIAM Journal on Numerical Analysis 1975; 12(4):617–629.
  20. Eduard Stiefel, Commentarii Mathematici Helvetici 1952; 29(1):157–179.
  21. Freund, R. and Nachtigal, N. "QMR: A Quasi-Minimal Residual Method for Non-Hermitian Linear Systems." Numer. Math. 60, 315-339, 1991.
  22. Freund, R. and Nachtigal, N. "An Implementation of the QMR Method Based on Coupled Two-Term Recurrences." SIAM J. Sci. Statist. Comput. 15, 313-337, 1994.
  23. R. Freund: Conjugate Gradient-Type Methods for Linear Systems with Complex Symmetric Coefficient Matrices, SIAM Journal on Scientific and Statistical Computing, Vol. 13, No. 1, pp. 425–448 (1992).
  24. X.-M. Gu, T.-Z. Huang, L. Li, H.-B. Li, T. Sogabe and M. Clemens: Quasi-Minimal Residual Variants of the COCG and COCR Methods for Complex Symmetric Linear Systems in Electromagnetic Simulations, IEEE Transactions on Microwave Theory and Techniques, Vol. 62, No. 12, pp. 2859–2867 (2014).
  25. Roland W. Freund, A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems, SIAM Journal on Scientific Computing 1993; 14(2):470–482.
  26. Hestenes, M. R., & Stiefel, E. (1952). Methods of conjugate gradients for solving linear systems. Washington, DC: NBS.
  27. Black, Noel and Moore, Shirley. "Conjugate Gradient Squared Method." From MathWorld--A Wolfram Web Resource, created by Eric W. Weisstein. http://mathworld.wolfram.com/ConjugateGradientSquaredMethod.html
  28. M. F. Moller. A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks, 6:525--533, 1993.
  29. H. A. van der Vorst and J. B. M. Melissen: A Petrov-Galerkin type method for solving Ax = b, where A is symmetric complex, IEEE Trans. Mag., Vol. 26, No. 2, pp. 706–708 (1990).
  30. S. L. Zhang "GPBiCG: Generalized Product-type Methods Based on Bi-CG for Solving Nonsymmetric Linear Systems", SIAM J. Sci. Stat. Comput. , vol.18, no.2, pp.537-551, March 1997.
  31. Van der Vorst, H. A. (1992). Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM Journal on scientific and Statistical Computing, 13(2), 631-644.
  32. M. H. Gutknecht, Variants of BiCGSTAB for Matrices with Complex Spectrum, SIAM J. Sci. Statist. Comput., 14 (1993), 1020-1033.
  33. Tony F. Chan, Efstratios Gallopoulos, Valeria Simoncini, Tedd Szeto and Charles H. Tong, A quasi-minimal residual variant of the Bi-CGSTAB algorithm for nonsymmetric systems, SIAM Journal on Scientific Computing 1994; 15(2):338–347.
  34. Tanio, M., & Sugihara, M. (2010). GBi-CGSTAB (s, L): IDR (s) with higher-order stabilization polynomials. Journal of Computational and Applied Mathematics, 235(3), 765-784.
  35. D. P. O’Leary: The block conjugate gradient algorithm and related methods, Linear Algebra Appl., 29 (1980), 293–322.
  36. A. A. Nikishin, A. Yu. Yeremin: Variable block CG algorithms for solving large sparse symmetric positive definite linear systems on parallel computers, I: general iterative methods, SIAM J. Matrix Anal. Appl., 16 (1995), 1135–1153.
  37. A. El Guennouni, K. Jbilou, H. Sadok: A block version of BiCGSTAB for linear systems with multiple right-hand sides, Electron. Trans. Numer. Anal, 16 (2003), 129–142.
  38. H. Tadano, T. Sakurai, Y. Kuramashi: Block BiCGGR: a new block Krylov subspace method for computing high accuracy solutions: JSIAM Lett., 1 (2009), 44–47.
  39. Tadano, H. (2019). Development of the Block BiCGGR2 method for linear systems with multiple right-hand sides. Japan Journal of Industrial and Applied Mathematics, 1-15.
  40. Tadano, H., & Kuramoto, R. (2019). Accuracy improvement of the Block BiCGSTAB method for linear systems with multiple right-hand sides by group-wise updating technique. Journal of Advanced Simulation in Science and Engineering, 6(1), 100-117.
  41. Yamamoto, T. (1984). Error bounds for approximate solutions of systems of equations. Japan Journal of Applied Mathematics, 1(1), 157.
  42. Oishi, S., & Rump, S. M. (2002). Fast verification of solutions of matrix equations. Numerische Mathematik, 90(4), 755-773.
  43. Kobayashi, Y., Ogita, T., & Ozaki, K. (2017). Acceleration of a preconditioning method for ill-conditioned dense linear systems by use of a BLAS-based method. Reliable Computing, 25, 15-23.
  44. Kobayashi, Y., & Ogita, T. (2016). Accurate and efficient algorithm for solving ill-conditioned linear systems by preconditioning methods. Nonlinear Theory and Its Applications, IEICE, 7(3), 374-385.
  45. A fast and efficient algorithm for solving ill-conditioned linear systems (JSIAM Letters Vol.7 (2015) pp.1-4) Yuka Kobayashi, Takeshi Ogita.
  46. Kobayashi, Y., Ogita, T., & Ozaki, K. (2017). Acceleration of a preconditioning method for ill-conditioned dense linear systems by use of a BLAS-based method. Reliable Computing, 25, 15-23.
  47. Kobayashi, Y., & Ogita, T. (2016). Accurate and efficient algorithm for solving ill-conditioned linear systems by preconditioning methods. Nonlinear Theory and Its Applications, IEICE, 7(3), 374-385.
  48. A fast and efficient algorithm for solving ill-conditioned linear systems (JSIAM Letters Vol.7 (2015) pp.1-4) Yuka Kobayashi, Takeshi Ogita.
  49. Yamamoto, T. (1980). Error bounds for computed eigenvalues and eigenvectors. Numerische Mathematik, 34(2), 189-199.
  50. Yamamoto, T. (1982). Error bounds for computed eigenvalues and eigenvectors. II. Numerische Mathematik, 40(2), 201-206.
  51. Mayer, G. (1994). Result verification for eigenvectors and eigenvalues. Topics in Validated Computations, Elsevier, Amsterdam, 209-276.
  52. Alefeld, G., Gienger, A., & Mayer, G. (1994). Numerical validation for an inverse matrix eigenvalue problem. Computing, 53(3-4), 311-322.
  53. Miyajima, S. (2017). Verified Solutions of Inverse Symmetric Eigenvalue Problems. Reliable Computing, 24(1), 31-44.
  54. Ogita, T. (2008). Verified Numerical Computation of Matrix Determinant. SCAN’2008 El Paso, Texas September 29–October 3, 2008, 86.
  55. Shinya Miyajima, Verified computation for the Hermitian positive definite solution of the conjugate discrete-time algebraic Riccati equation, Journal of Computational and Applied Mathematics, Volume 350, Pages 80-86, April 2019.
  56. Shinya Miyajima, Fast verified computation for the minimal nonnegative solution of the nonsymmetric algebraic Riccati equation, Computational and Applied Mathematics, Volume 37, Issue 4, Pages 4599-4610, September 2018.
  57. Shinya Miyajima, Fast verified computation for the solution of the T-congruence Sylvester equation, Japan Journal of Industrial and Applied Mathematics, Volume 35, Issue 2, Pages 541-551, July 2018.
  58. Shinya Miyajima, Fast verified computation for the solvent of the quadratic matrix equation, The Electronic Journal of Linear Algebra, Volume 34, Pages 137-151, March 2018
  59. Shinya Miyajima, Fast verified computation for solutions of algebraic Riccati equations arising in transport theory, Numerical Linear Algebra with Applications, Volume 24, Issue 5, Pages 1-12, October 2017.
  60. Shinya Miyajima, Fast verified computation for stabilizing solutions of discrete-time algebraic Riccati equations, Journal of Computational and Applied Mathematics, Volume 319, Pages 352-364, August 2017.
  61. Shinya Miyajima, Fast verified computation for solutions of continuous-time algebraic Riccati equations, Japan Journal of Industrial and Applied Mathematics, Volume 32, Issue 2, Pages 529-544, July 2015.
  62. Bini, D. A., Higham, N. J., & Meini, B. (2005). Algorithms for the matrix pth root. Numerical Algorithms, 39(4), 349-378.
  63. Deadman, E., Higham, N. J., & Ralha, R. (2012, June). Blocked Schur algorithms for computing the matrix square root. In International Workshop on Applied Parallel Computing (pp. 171-182). Springer, Berlin, Heidelberg.
  64. Hargreaves, G. I., & Higham, N. J. (2005). Efficient algorithms for the matrix cosine and sine. Numerical Algorithms, 40(4), 383-400.
  65. Davies, P. I., & Higham, N. J. (2003). A Schur-Parlett algorithm for computing matrix functions. SIAM Journal on Matrix Analysis and Applications, 25(2), 464-485.
  66. Miyajima, S. (2019). Verified computation of the matrix exponential. Advances in Computational Mathematics, 45(1), 137-152.
  67. Miyajima, S. (2019). Verified computation for the matrix principal logarithm. Linear Algebra and its Applications, 569, 38-61.
  68. Miyajima, S. (2018). Fast verified computation for the matrix principal pth root. Journal of Computational and Applied Mathematics, 330, 276-288.
  69. Gilat, Amos (2004). MATLAB: An Introduction with Applications 2nd Edition. John Wiley & Sons.
  70. Quarteroni, Alfio; Saleri, Fausto (2006). Scientific Computing with MATLAB and Octave. Springer.
  71. Gander, W., & Hrebicek, J. (Eds.). (2011). Solving problems in scientific computing using Maple and Matlab®. Springer Science & Business Media.

Further Reading

  • Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: The Johns Hopkins University Press.
  • Matrix Iterative Analysis, Varga, Richard S., Springer, 2000.
  • Higham, N. J. (2002). Accuracy and stability of numerical algorithms. Society for Industrial and Applied Mathematics.
  • Liesen, J., & Strakos, Z. (2012). Krylov subspace methods: principles and analysis. OUP Oxford.