Numerical analysis is the study of algorithms for the problems of continuous mathematics (as distinguished from discrete mathematics). Some of the problems it deals with arise directly from the study of calculus; other areas of interest are real variable or complex variable questions, numerical linear algebra over the real or complex fields, the solution of differential equations, and other related problems arising in the physical sciences and engineering.

General introduction

Many problems in continuous mathematics do not possess a closed-form solution. Examples are finding the integral of exp(−x²) (see error function) and solving a general polynomial equation of degree five or higher (see Abel-Ruffini theorem). In these situations, one has two options left: either one tries to find an approximate solution using asymptotic analysis or one seeks a numerical solution. The latter choice describes the field of numerical analysis.

Direct and iterative methods

Some problems can be solved exactly by an algorithm. These algorithms are called direct methods. Examples are Gaussian elimination for solving systems of linear equations and the simplex method in linear programming.

However, no direct methods exist for most problems. In such cases it is sometimes possible to use an iterative method. Such a method starts from a guess and finds successive approximations that hopefully converge to the solution. Even when a direct method does exist, an iterative method may be preferable because it is more efficient or more stable.

Discretization

Continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called discretization. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its values at a finite number of points in its domain, even though the domain is a continuum.

The generation and propagation of errors

The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of the problem. Round-off errors arise because it is impossible to represent all real numbers exactly on a finite-state machine (which is what all practical digital computers are). Truncation errors are committed when an iterative method is terminated and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem.
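
As a small illustration of round-off, most decimal fractions have no exact binary floating-point representation, so even trivial arithmetic carries tiny errors. A minimal Python sketch (the exact digits shown assume IEEE 754 double precision):

```python
# 0.1 has no exact binary floating-point representation, so even
# trivial arithmetic carries a small round-off error.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2 - 0.3)    # about 5.55e-17 on IEEE 754 doubles
print(f"{0.1:.20f}")      # 0.10000000000000000555...
```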

Once an error is generated, it will generally propagate through the calculation. This leads to the notion of numerical stability: an algorithm is numerically stable if an error, once it is generated, does not grow too much during the calculation. This is only possible if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. Indeed, if a problem is ill-conditioned, then any error in the data will grow a lot.

However, an algorithm that solves a well-conditioned problem may or may not be numerically stable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. A related art is to find stable algorithms for solving ill-posed problems, which generally requires finding a well-posed problem whose solution is close to that of the ill-posed problem and solving this well-posed problem instead.

Applications

The algorithms of numerical analysis are routinely applied to solve many problems in science and engineering. Examples are the design of structures like bridges and airplanes (see computational physics and computational fluid dynamics), weather forecasting, climate models, the analysis and design of molecules (computational chemistry), and finding oil reservoirs. In fact, almost all supercomputers are continually running numerical analysis algorithms.

Computational efficiency of an algorithm is thus an important consideration. A heuristic approximation may be preferred to a less efficient method with a solid theoretical foundation.

Generally, numerical analysis uses empirical results of computation runs to probe new methods and analyze problems, though it also employs mathematical axioms, theorems and proofs.

Areas of study

The field of numerical analysis is divided into different disciplines according to the problem that is to be solved.

Computing values of functions

One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating point arithmetic.
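
A minimal sketch of the Horner scheme in Python (the function name and the example polynomial are illustrative, not from the article):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x with the Horner scheme.

    coeffs lists the coefficients from the highest-degree term down,
    so [2, -6, 2, -1] stands for 2x^3 - 6x^2 + 2x - 1.
    """
    result = 0.0
    for c in coeffs:
        result = result * x + c  # one multiplication and one addition per coefficient
    return result

print(horner([2, -6, 2, -1], 3.0))  # 5.0
```

For a polynomial of degree n this needs only n multiplications, whereas evaluating each power from scratch takes roughly n²/2.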

Interpolation, extrapolation and regression

Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points? A very simple method is to use linear interpolation, which assumes that the unknown function is linear between every pair of successive points. This can be generalized to polynomial interpolation, which is sometimes more accurate but suffers from Runge's phenomenon. Other interpolation methods use localized functions like splines or wavelets.
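
A sketch of linear interpolation in Python, assuming the sample points are sorted by increasing x (names are illustrative):

```python
def linear_interpolate(xs, ys, x):
    """Piecewise-linear interpolation through the points (xs[i], ys[i])."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)      # position within the segment, in [0, 1]
            return (1 - t) * y0 + t * y1  # straight line between the two neighbours
    raise ValueError("x lies outside the given points (extrapolation)")

print(linear_interpolate([0.0, 1.0, 2.0], [1.0, 3.0, 2.0], 1.5))  # 2.5
```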

Extrapolation is very similar to interpolation, except that now we want to find the value of the unknown function at a point which is outside the given points.

Regression is also similar, but it takes into account that the data are imprecise. Given some points, and measurements of the value of some function at these points (with an error), we want to determine the unknown function. The least-squares method is one popular way to achieve this.
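
As an illustration, NumPy's polyfit computes a least-squares polynomial fit; the data below are made up to mimic noisy measurements of y = 2x + 1:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])   # noisy samples of y = 2x + 1

slope, intercept = np.polyfit(x, y, 1)    # least-squares fit of a degree-1 polynomial
print(slope, intercept)                   # close to 2 and 1
```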

Solving equations and systems of equations

Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not.

Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e. methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss-Seidel method, successive over-relaxation and the conjugate gradient method are usually preferred for large systems.
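
A sketch of the Jacobi method in Python/NumPy; it converges, for example, when the matrix is strictly diagonally dominant (the function name and tolerances are illustrative):

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration for Ax = b, starting from the zero vector."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b)
    D = np.diag(A)           # diagonal entries of A
    R = A - np.diagflat(D)   # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D   # update each component from the previous iterate
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = [[4.0, 1.0], [2.0, 5.0]]   # strictly diagonally dominant
b = [1.0, 2.0]
print(jacobi(A, b))            # agrees with np.linalg.solve(A, b): [1/6, 1/3]
```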

Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations.
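
A minimal Newton's-method sketch in Python, assuming the caller supplies the derivative (names are illustrative):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeatedly follow the tangent line of f toward a root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)   # Newton update: x_new = x - f(x)/f'(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Find sqrt(2) as the positive root of x^2 - 2, starting from x0 = 1.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # 1.41421356...
```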

Solving eigenvalue or singular value problems

Analysis based on the eigenvalue decomposition or singular value decomposition (EVD or SVD) has become a convenient tool for solving mathematical problems. Iterative methods for computing the EVD or SVD of a given matrix have also been developed. Research in this area still focuses on numerical stability and reduced complexity for large systems, namely matrices of dimension nearly 100,000 or more.
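
As one simple example of such an iterative method, power iteration approximates the dominant eigenpair of a matrix; this is only a sketch, not a method one would use unmodified on very large systems:

```python
import numpy as np

def power_iteration(A, num_iter=1000):
    """Approximate the eigenvalue of A with the largest magnitude."""
    A = np.asarray(A, dtype=float)
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(num_iter):
        w = A @ v
        v = w / np.linalg.norm(w)   # renormalize to avoid overflow
    return v @ A @ v, v             # Rayleigh quotient and eigenvector estimate

value, vector = power_iteration([[2.0, 1.0], [1.0, 3.0]])
print(value)   # about 3.618, i.e. (5 + sqrt(5)) / 2
```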

Optimization

Main article: Optimization (mathematics)

Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints.

The field of optimization is further split into several subfields, depending on the form of the objective function and the constraints. For instance, linear programming deals with the case in which both the objective function and the constraints are linear. A famous method in linear programming is the simplex method.

The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.
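
As a taste of unconstrained optimization, here is a one-dimensional gradient-descent sketch; gradient descent is just one illustrative method (not one named above), and the step size and tolerance are arbitrary choices:

```python
def gradient_descent(grad, x0, step=0.1, tol=1e-10, max_iter=10000):
    """Minimize a differentiable function by stepping against its gradient."""
    x = x0
    for _ in range(max_iter):
        x_new = x - step * grad(x)   # move downhill
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); the minimum is at x = 3.
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))  # about 3.0
```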

Evaluating integrals

Main article: Numerical integration

Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton-Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods, or, in modestly large dimensions, the method of sparse grids.
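
A sketch of the composite Simpson's rule in Python (illustrative names; the number of subintervals must be even):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)   # weights 1, 4, 2, 4, ..., 4, 1
    return total * h / 3

print(composite_simpson(math.sin, 0.0, math.pi, 10))  # about 2.00011 (exact value: 2)
```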

Differential equations

Main articles: Numerical ordinary differential equations, Numerical partial differential equations.

Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations.
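
For ordinary differential equations the forward Euler method is the simplest scheme; a sketch (the test problem and step count are illustrative):

```python
def euler(f, t0, y0, t_end, n_steps):
    """Forward Euler for y' = f(t, y): advance along the tangent at each step."""
    h = (t_end - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)   # first-order step; the error shrinks linearly with h
        t += h
    return y

# y' = y with y(0) = 1 has the exact solution e^t; compare at t = 1.
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000))  # about 2.7169 vs e = 2.71828...
```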

Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation.
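
As a small one-dimensional illustration of this reduction, the boundary value problem −u″(x) = 1 on (0, 1) with u(0) = u(1) = 0 becomes a tridiagonal linear system under central finite differences (the grid size and test problem are illustrative; the exact solution is u(x) = x(1 − x)/2):

```python
import numpy as np

n = 9                      # number of interior grid points
h = 1.0 / (n + 1)          # grid spacing
# Tridiagonal stencil for -u'': the (2, -1, -1) pattern divided by h^2.
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = np.ones(n)             # right-hand side f(x) = 1
u = np.linalg.solve(A, b)  # the algebraic system the problem was reduced to

x = np.linspace(h, 1.0 - h, n)
print(np.max(np.abs(u - x * (1.0 - x) / 2)))  # near machine precision: the
                                              # stencil is exact for quadratics
```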

History

The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method.

To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.

The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.

Software

Main article: List of numerical analysis software

Nowadays, most algorithms are implemented and run on a computer. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free alternative is the GNU Scientific Library. A different approach is taken by the Numerical Recipes library, which emphasizes understanding classic algorithms, as seen by non-specialists. (Some consider this a strength; others deplore the blunders and bad advice.)

Other popular languages for numerical computations are MATLAB, IDL, and Python. These are interpreted languages, so they typically run more slowly than compiled code, but they allow faster development and prototyping; if necessary, performance-critical parts can be converted to Fortran or C for enhanced speed.

Many computer algebra systems, such as Mathematica and Maple (free software systems include SAGE, Maxima, Axiom, calc and Yacas), can also be used for numerical computations. However, their strength typically lies in symbolic computations.

See also

  • Scientific computing
  • List of numerical analysis topics
  • Gram-Schmidt process
  • Important publications in numerical analysis
  • Halting problem
  • Numerical integration
  • Numerical differentiation

External links

  • Wikibooks: Numerical analysis



This page uses Creative Commons licensed content from Wikipedia.