Parker–Sochacki method
In mathematics, the Parker–Sochacki method is an algorithm for solving systems of ordinary differential equations (ODEs), developed by G. Edgar Parker and James Sochacki, of the James Madison University Mathematics Department. The method produces Maclaurin series solutions to systems of differential equations, with the coefficients in either algebraic or numerical form.
Summary
The Parker–Sochacki method rests on two simple observations:
- If a set of ODEs has a particular form, then the Picard method can be used to find their solution in the form of a power series.
- If the ODEs do not have the required form, it is nearly always possible to find an expanded set of equations that do have the required form, such that a subset of the solution is a solution of the original ODEs.
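As a concrete instance of the second observation, the non-polynomial ODE y' = sin(y) can be recast in polynomial form by adjoining the auxiliary variables u = sin(y) and v = cos(y), which by the chain rule gives the polynomial system y' = u, u' = vu, v' = -u*u. A minimal Python sketch of the resulting coefficient recurrence (the function name and structure are illustrative, not taken from the method's literature):

```python
# Recast y' = sin(y) as a polynomial system by adjoining
# u = sin(y), v = cos(y):
#   y' = u
#   u' = v*u     (chain rule: u' = cos(y) * y')
#   v' = -u*u    (chain rule: v' = -sin(y) * y')
import math

def maclaurin_coeffs(y0, n_terms):
    """Maclaurin coefficients of y, u, v for the polynomial system above."""
    y = [y0]
    u = [math.sin(y0)]
    v = [math.cos(y0)]
    for n in range(n_terms - 1):
        # Cauchy products give the nth coefficient of the products v*u and u*u.
        vu = sum(v[k] * u[n - k] for k in range(n + 1))
        uu = sum(u[k] * u[n - k] for k in range(n + 1))
        # Picard / power-series recurrence: dividing by n+1 integrates t**n.
        y.append(u[n] / (n + 1))
        u.append(vu / (n + 1))
        v.append(-uu / (n + 1))
    return y, u, v
```

Each new coefficient costs one Cauchy product per polynomial product in the system, which is why only additions, multiplications, and divisions by small integers appear.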
Several coefficients of the power series are calculated in turn, a time step is chosen, the series is evaluated at that time, and the process repeats.
The end result is a high-order piecewise solution to the original ODE problem. The desired order of the solution is an adjustable variable in the program that can change between steps. The order of the solution is limited only by the floating point representation on the machine running the program. In some cases the attainable precision can be extended by using arbitrary-precision floating point numbers, or, for special cases, by finding a solution with only integer or rational coefficients.
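The coefficient-then-step loop just described can be sketched for the scalar ODE y' = y**2, whose exact solution y(t) = y0/(1 - y0*t) makes the result easy to check (a minimal sketch; the fixed step size and order are illustrative choices, not prescribed by the method):

```python
def psm_step(y0, order):
    """One Parker-Sochacki step for y' = y**2: Maclaurin coefficients at the step start."""
    c = [y0]
    for n in range(order):
        # nth Cauchy-product coefficient of y*y, then integrate (divide by n+1).
        c.append(sum(c[k] * c[n - k] for k in range(n + 1)) / (n + 1))
    return c

def horner(coeffs, t):
    """Evaluate the Maclaurin polynomial at t by Horner's rule."""
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * t + a
    return acc

def integrate(y0, t_end, h=0.05, order=15):
    """March from 0 to t_end: fresh coefficients each step, evaluate, repeat."""
    y, t = y0, 0.0
    while t < t_end - 1e-15:
        step = min(h, t_end - t)
        y = horner(psm_step(y, order), step)
        t += step
    return y
```

For example, `integrate(1.0, 0.5)` should closely approximate the exact value 1/(1 - 0.5) = 2.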
Advantages
The method requires only addition, subtraction, and multiplication, making it very convenient for high-speed computation. (The only divisions are inverses of small integers, which can be precomputed.)

Use of a high order—calculating many coefficients of the power series—is convenient. (Typically a higher order permits a longer time step without loss of accuracy, which improves efficiency.)

The order and step size can be easily changed from one step to the next.

It is possible to calculate a guaranteed error bound on the solution.

Arbitrary-precision floating point libraries allow this method to compute arbitrarily accurate solutions.
With the Parker–Sochacki method, information between integration steps is developed at high order. As the Parker–Sochacki method integrates, the program can be designed to save the power series coefficients that provide a smooth solution between points in time. The coefficients can be saved and used so that polynomial evaluation provides the high order solution between steps. With most other classical integration methods, one would have to resort to interpolation to get information between integration steps, leading to an increase of error.
There is an a priori error bound for a single step with the Parker–Sochacki method.[1] This allows a Parker–Sochacki program to calculate the step size that guarantees that the error is below any non-zero given tolerance. Using this calculated step size with an error tolerance of less than half of the machine epsilon yields a symplectic integration.
Disadvantages
Most methods for numerically solving ODEs require only the evaluation of derivatives for chosen values of the variables, so systems like MATLAB include implementations of several methods all sharing the same calling sequence. Users can try different methods by simply changing the name of the function called. The Parker–Sochacki method requires more work to put the equations into the proper form, and cannot use the same calling sequence.
References
1. P. G. Warne; D. P. Warne; J. S. Sochacki; G. E. Parker; D. C. Carothers (2006). "Explicit a-priori error bounds and adaptive error control for approximation of nonlinear initial value differential systems" (PDF). Computers & Mathematics with Applications. 52 (12): 1695–1710. doi:10.1016/j.camwa.2005.12.004. Retrieved August 27, 2017.
External links
- Polynomial ODEs – Examples, Solutions, Properties (PDF), retrieved August 27, 2017. A thorough explanation of the paradigm and application of the Parker–Sochacki method.
- Joseph W. Rudmin (1998), "Application of the Parker–Sochacki Method to Celestial Mechanics", Journal of Computational Neuroscience, 27: 115–133, arXiv:1007.1677, doi:10.1007/s10827-008-0131-5. A demonstration of the theory and usage of the Parker–Sochacki method, including a solution for the classical Newtonian N-body problem with mutual gravitational attraction.
- The Modified Picard Method, retrieved November 11, 2013. A collection of papers and some Matlab code.