1 Introduction
The real line is a tiny sliver in the whole of the complex plane. Wandering into the complex plane has often been employed to solve real problems. Two direct examples are the solution of complicated real-valued integrals via contour integrals [brown2009complex] and the use of intuition gained from poles in the complex plane in control-theory applications [nise2020control]. The complex plane provides wonderful insights into real problems, allowing mathematicians to impact practical applications. One of the motivations for this paper was a simple numerical application highlighted in a 2018 SIAM News article [high18diff] where Nick Higham describes how taking a numerical complex derivative neatly gets rid of roundoff error. However, most methods for numerically solving a differential equation from one time point to another construct a path utilizing only the real line. We show that we can use the complex plane to our advantage when it comes to numerical time integration by taking time steps in the complex plane.
This paper is not the first to talk about complex time steps. Early work on time stepping in the complex plane focused on systematically avoiding singularities during numerical integration of singular differential equations [corliss1980integrating] and on techniques to achieve superconvergence for Runge-Kutta methods [orendt2009geometry]. The use of complex time steps most similar to our work is in [chambers2003symplectic], where Chambers explores operator splitting in symplectic integrators using complex substeps. Taking real substeps often results in negative substeps and substeps larger than the original step, causing the symplectic integrator to go back and forth along the direction of integration. By using complex substeps, Chambers showed how to avoid negative and unstable substeps, significantly reducing the truncation error. Chambers’s work inspired others to consider complex time steps as part of operator splitting methods for various parabolic and nonlinear evolution problems [castella2009splitting, auzinger2017practical, casas2021compositions, blanes2010splitting, hansen2009high]. Concurrent with our work, a very recent paper [casas2021high] in this area uses complex coefficients for high order composition methods. Another recent paper develops time integration schemes using nodes in the complex plane based on Cauchy’s integral formula, using the complex nodes to facilitate a parallel BDF formulation [buvoli2019constructing]. Rather than focusing on a specific application or numerical scheme as has been done previously, we ask fundamental questions about the utility of numerical time-stepping in the complex plane, including: ‘What paths lead to the least error?’ and ‘Are there general strategies and trade-offs for complex time-stepping?’.
Here we show that stepping into the complex plane provides an extra dimension which can be exploited to improve accuracy, design expanded stability regions and even break through the order barrier [Butcher:2007, butcher2009fifth]. We compare against existing Runge-Kutta methods and demonstrate improvement for particular proof-of-concept cases. Finally, we analyze increased operation counts due to complex operations, potential storage benefits, extensions to stiff solvers, and strong stability preserving properties. The code to reproduce the numerical experiments in this paper is available on GitHub (https://github.com/Dirivian/complex_time_integrators).
2 Taking complex time steps with the Euler method
The idea of a complex time step can be unusual and unintuitive. To make it accessible, we first illustrate simple paths with complex time steps and demonstrate that applying the forward Euler method along the complex paths improves accuracy compared to forward Euler with real steps. The paths we choose in the complex plane must intersect the real line at the time points where we desire a solution. Our primary question is ‘what is the optimal n-step path in the complex plane over which to integrate between two real time points?’
To illustrate the simplest possible case, suppose we want to solve the equation $y' = \lambda y$ with the initial condition $y(0) = y_0$ at time $t = 0$. To find the solution $y(h)$, we can either take two real time steps of length $h/2$ or take a complex time step of length $(1+i)h/2$ followed by a complex time step of length $(1-i)h/2$, returning back to the real line. These two paths are shown in Fig. 1 (top). We numerically integrate the differential equations along these paths as shown in Fig. 1 (bottom). By eye, the complex time path clearly does much better than the real time path in this linear case. The improvement with a complex path can hold for nonlinear ODEs, second order ODEs, nonautonomous ODEs (Fig. 2) and nonlinear PDEs, such as the viscous Burgers equation (Fig. 3). In Section 3, we describe the generalizations needed to solve nonlinear equations, but first we provide theoretical insight into the simple linear case.
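The two-step construction is easy to check numerically. The sketch below is a minimal Python illustration (with an assumed test case $\lambda = -1$, $y_0 = 1$, $h = 0.1$, not taken from the paper's repository) comparing two real Euler half-steps against the complex pair $(1 \pm i)h/2$:

```python
import numpy as np

lam, y0, h = -1.0, 1.0, 0.1           # assumed test case: y' = lam*y, y(0) = y0
exact = y0 * np.exp(lam * h)

# two real forward Euler steps of size h/2
y_real = y0
for dt in (h / 2, h / 2):
    y_real = y_real + dt * lam * y_real

# two complex forward Euler steps, (1+i)h/2 then (1-i)h/2, returning to the real line
y_cplx = complex(y0)
for dt in ((1 + 1j) * h / 2, (1 - 1j) * h / 2):
    y_cplx = y_cplx + dt * lam * y_cplx

err_real = abs(y_real - exact)        # ~2.3e-3: second order local error
err_cplx = abs(y_cplx - exact)        # ~1.6e-4: third order local error
```

The complex path produces a noticeably smaller error for the same two function evaluations, matching the figure described above.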
Why does time stepping on the complex path improve the accuracy of the numerical solution so significantly? To answer that, we analyze the linear problem $y' = \lambda y$ with initial condition $y(0) = y_0$. Consider time stepping from the initial condition at time $t = 0$ to time $t = h$ using 2 real time steps of size $h/2$. The discrete values of $y$ are given by the following at $t = h/2$ and $t = h$:
$$y(h/2) = \Big(1 + \frac{\lambda h}{2}\Big) y_0, \qquad y(h) = \Big(1 + \frac{\lambda h}{2}\Big)^2 y_0 = \Big(1 + \lambda h + \frac{\lambda^2 h^2}{4}\Big) y_0. \tag{1}$$
Now consider time-stepping from $y_0$ at time $t = 0$ to time $t = h$ using the two complex time steps $(1+i)h/2$ and $(1-i)h/2$:
$$y(h) = \Big(1 + \frac{(1-i)\lambda h}{2}\Big)\Big(1 + \frac{(1+i)\lambda h}{2}\Big) y_0 = \Big(1 + \lambda h + \frac{\lambda^2 h^2}{2}\Big) y_0. \tag{2}$$
The exact solution to the problem is given by the Taylor expansion
$$y(h) = e^{\lambda h} y_0 = \Big(1 + \lambda h + \frac{\lambda^2 h^2}{2} + \frac{\lambda^3 h^3}{6} + \dots\Big) y_0. \tag{3}$$
The secret behind the higher accuracy of the complex 2-step method is that the polynomial obtained through the numerical approximation matches the Taylor series of the exact solution through the second order term, making the forward Euler method gain second order accuracy. This is one of two unique 2-step paths in the complex plane which has this property. The other unique path uses the other permutation of the complex time steps, $(1-i)h/2$ followed by $(1+i)h/2$. Both permutations of these steps lead to the same polynomial with coefficients that match the Taylor expansion to 2nd order.
By stepping into the complex plane, we gain an extra degree of freedom in the coefficients that allows us to design time steps with improved accuracy. This extra dimension can be used to construct higher order paths for linear and nonlinear differential equations (Section 3), design improved stability (Section 6) and even break through the order barrier for Runge-Kutta methods (Section 5). The paths can be combined with other methods, such as the implicit midpoint method (a 2nd order Runge-Kutta method), as demonstrated in Section 8. In the next section, we demonstrate how to extend the linear example to construct paths with higher order accuracy.
3 Constructing a higher order time integrator
Is it possible then to construct a 3-step path on which forward Euler would be 3rd order accurate? Would it be nth order accurate on an n-step complex path? To build intuition for the simplest higher order construction, we continue analyzing the linear problem in Subsection 3.1, and then prove that achieving higher order requires complex time-stepping. In Subsection 3.2, we extend the construction of complex time-steppers to the general nonlinear problem.
3.1 Higher order integrators for linear differential equations
In this subsection, we show how to construct a third-order time integrator for the ODE $y' = \lambda y$ by taking 3 forward Euler time steps of variable sizes in the complex plane that return to the real line at every desired time point.
If we take the three complex steps $h_1$, $h_2$ and $h_3$ to time-step from $y_0$ at $t = 0$ to $t = h$, the approximate numerical solution at time $h$ is given by
$$y(h) = (1 + \lambda h_3)(1 + \lambda h_2)(1 + \lambda h_1)\, y_0, \tag{4}$$
where the $h_i$’s are complex variables. Collecting terms results in
$$y(h) = \big(1 + \lambda (h_1 + h_2 + h_3) + \lambda^2 (h_1 h_2 + h_1 h_3 + h_2 h_3) + \lambda^3 h_1 h_2 h_3\big)\, y_0. \tag{5}$$
The exact solution is given by
$$y(h) = e^{\lambda h} y_0 = \Big(1 + \lambda h + \frac{\lambda^2 h^2}{2} + \frac{\lambda^3 h^3}{6} + \dots\Big) y_0. \tag{6}$$
For the numerical approximation to be third-order accurate, the coefficients of $\lambda^k$ for $k = 1, 2, 3$ in equations (5) and (6) need to match. This results in the following order conditions:
$$h_1 + h_2 + h_3 = h, \qquad h_1 h_2 + h_1 h_3 + h_2 h_3 = \frac{h^2}{2}, \qquad h_1 h_2 h_3 = \frac{h^3}{6}. \tag{7}$$
One of the solutions to the above nonlinear system is given by the three roots of the cubic
$$z^3 - h z^2 + \frac{h^2}{2} z - \frac{h^3}{6} = 0, \tag{8}$$
whose coefficients are exactly the right hand sides of the order conditions (7).
However, as the nonlinear system (7) is symmetric in $h_1$, $h_2$ and $h_3$, all 6 permutations of the solution result in unique 3-step 3rd order complex time stepping paths. In Fig 4, all six 3-step 3rd order paths are shown along with the two 2-step 2nd order paths and the single step forward Euler path.
For a general n-step nth order path, the order conditions would be
$$e_k(h_1, \dots, h_n) = \frac{h^k}{k!}, \qquad k = 1, \dots, n, \tag{9}$$
where $e_k$ is the $k$th elementary symmetric polynomial of the step sizes.
For higher orders, the solutions take on a complicated form and solving the nonlinear system becomes a tough symbolic task. However, we only need to find one solution for each high-order approximation, and such solutions can be found numerically and stored in a library for general use.
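Since the order conditions (9) fix the elementary symmetric polynomials of the steps, the steps of an n-step path can be computed numerically as the roots of a single degree-n polynomial. The sketch below is our own minimal implementation of this idea (not code from the paper's repository):

```python
import numpy as np
from math import factorial

def complex_euler_steps(n, h=1.0):
    """n-step path: roots of z^n - e1*z^(n-1) + e2*z^(n-2) - ... with e_k = h^k/k!."""
    coeffs = [(-1) ** k * h ** k / factorial(k) for k in range(n + 1)]
    return np.roots(coeffs)

# forward Euler along the 4-step path matches exp(lam*h) through 4th order
n, h, lam = 4, 0.1, -1.0
y = 1.0 + 0j
for dt in complex_euler_steps(n, h):
    y = y + dt * lam * y
err = abs(y - np.exp(lam * h))        # only the O(h^5) Taylor tail remains
```

By construction the product $\prod_i (1 + \lambda h_i)$ reproduces the truncated Taylor series of $e^{\lambda h}$, so only the $O(h^{n+1})$ tail survives.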
These results raise the theoretical question: is there any way to take steps of varying size on the real line and achieve a higher-order time integrator, or is it only possible to do so by going into the complex plane?
Theorem 1
It is impossible to construct a time integrator with order greater than or equal to 2 using realvalued Euler time steps of variable sizes.
Proof
We prove the above theorem by showing that it is impossible to construct a secondorder time integrator using forward Euler steps of variable size on the real line.
Suppose we use time steps of variable size given by $h_1, h_2, \dots, h_n$. The approximate solution to $y' = \lambda y$ given by our time integrator would be
$$y(h) = \prod_{i=1}^{n} (1 + \lambda h_i)\, y_0,$$
where we assume the first order condition is satisfied,
$$\sum_{i=1}^{n} h_i = h.$$
Gathering together different order terms,
$$y(h) = \Big(1 + \lambda h + \lambda^2 \sum_{i<j} h_i h_j + \dots\Big) y_0,$$
where we have substituted in our first order condition. For the time stepper to be second order, we need the coefficient on $\lambda^2$ to be $h^2/2$:
$$\sum_{i<j} h_i h_j = \frac{h^2}{2}.$$
Factoring out $h_n$ results in
$$\sum_{i<j\le n-1} h_i h_j + h_n \sum_{i=1}^{n-1} h_i = \frac{h^2}{2}.$$
Solving the first order condition for $\sum_{i=1}^{n-1} h_i = h - h_n$ and incorporating produces
$$\sum_{i<j\le n-1} h_i h_j + h_n (h - h_n) = \frac{h^2}{2},$$
which, using $\sum_{i<j\le n-1} h_i h_j = \frac{1}{2}\big[(h - h_n)^2 - \sum_{i=1}^{n-1} h_i^2\big]$, rearranges to
$$h_n^2 + \sum_{i=1}^{n-1} h_i^2 = 0.$$
Since all the $h_i$’s are real, we can express them as
$$h_i = c_i h,$$
where all the $c_i$’s are real. Then, we get
$$c_n^2 + \sum_{i=1}^{n-1} c_i^2 = 0.$$
Rearranging in terms of powers of $c_n$, one has a quadratic equation for $c_n$ in the form $a c_n^2 + b c_n + c = 0$ with $a = 1$, $b = 0$ and $c = \sum_{i=1}^{n-1} c_i^2$. For $c_n$ to have only real solutions, we need a nonnegative discriminant $b^2 - 4ac$. In our case, the discriminant is
$$-4 \sum_{i=1}^{n-1} c_i^2 < 0,$$
since nonzero real steps cannot all vanish. Since the discriminant is always negative, $h_n$ cannot be a purely real time step and must be complex to satisfy our second order condition. Therefore, not only can we achieve higher order integrators through complex time stepping, but we must step into the complex plane to achieve higher order.
3.2 Higher order integrators for general nonlinear nonautonomous differential equations
In the previous subsection, we showed how to construct a 3-step 3rd order complex path on which to time integrate using forward Euler steps. We can experimentally demonstrate that the order of accuracy is obeyed by every linear ordinary differential equation and partial differential equation that we have tested it on. However, the derivation assumed a linear problem and does not guarantee the same order of accuracy for nonlinear problems. Indeed, when we tested the complex time-stepping schemes on nonlinear and nonautonomous equations, the 3-step paths achieved only second order accuracy. In this subsection, we generalize the procedure described so far to achieve 3rd order accuracy for nonlinear, nonautonomous equations.
A general nonlinear ODE is given by
$$y'(t) = f(t, y). \tag{10}$$
Let us assume the function $f$ is sufficiently analytic near the region of integration, i.e., it does not have any singularities close to the path of integration. At time $t_0 + h$, we have
$$y(t_0 + h) = y_0 + h\, y'(t_0) + \frac{h^2}{2} y''(t_0) + \frac{h^3}{6} y'''(t_0) + O(h^4), \tag{11}$$
where
$$y' = f, \qquad y'' = f_t + f f_y, \qquad y''' = f_{tt} + 2 f f_{ty} + f^2 f_{yy} + f_y (f_t + f f_y), \tag{12}$$
with all functions evaluated at $(t_0, y_0)$.
Expanding equation (11), we get
$$y(t_0 + h) = y_0 + h f + \frac{h^2}{2}(f_t + f f_y) + \frac{h^3}{6}\big(f_{tt} + 2 f f_{ty} + f^2 f_{yy} + f_t f_y + f f_y^2\big) + O(h^4). \tag{13}$$
Now, consider a complex time stepper that takes 3 complex time steps starting from $t_0$ to go to $t_0 + h$. These steps are given by:
$$y_1 = y_0 + h_1 f(t_0, y_0), \tag{14}$$
$$y_2 = y_1 + h_2 f(t_0 + h_1, y_1), \tag{15}$$
$$y_3 = y_2 + h_3 f(t_0 + h_1 + h_2, y_2), \tag{16}$$
$$h_1 + h_2 + h_3 = h. \tag{17}$$
One can expand $y_3$ around $(t_0, y_0)$ using the multivariate Taylor series. Doing so,
$$y_3 = y_0 + (h_1 + h_2 + h_3) f + (h_1 h_2 + h_1 h_3 + h_2 h_3)(f_t + f f_y) + \frac{1}{2}\big[h_2 h_1^2 + h_3 (h_1 + h_2)^2\big]\big(f_{tt} + 2 f f_{ty} + f^2 f_{yy}\big) + h_1 h_2 h_3\, (f_t + f f_y) f_y + O(h^4). \tag{18}$$
For this three step complex integrator to have 3rd order accuracy, the coefficients of the partial derivatives in Eq (18) should match those of Eq (13). Equating coefficients, we get the following system of nonlinear equations:
$$h_1 + h_2 + h_3 = h, \qquad h_1 h_2 + h_1 h_3 + h_2 h_3 = \frac{h^2}{2}, \qquad h_2 h_1^2 + h_3 (h_1 + h_2)^2 = \frac{h^3}{3}, \qquad h_1 h_2 h_3 = \frac{h^3}{6}. \tag{19}$$
Unfortunately, this system is overconstrained and does not have a solution, implying that we may need more than 3 steps for 3rd order accuracy. However, for systems with only real solutions, we can take further advantage of the degree of freedom in the imaginary part of the step and find many 3-step paths with 3rd order accuracy. If we allow the truncation error at order $h^3$ to be purely imaginary, we can effectively remove the error at the end of the 3 step sequence by taking only the real part of the numerical solution at the end of the complex path. Let $u$ be the true solution and $y_3$ be the approximate numerical solution. Then, with our setup, we would have
$$u(t_0 + h) = y_3 + i c h^3 + O(h^4), \tag{20}$$
where $c$ is real. Taking the real part on both sides, we get
$$\operatorname{Re}\big(u(t_0 + h)\big) = \operatorname{Re}(y_3) + O(h^4). \tag{21}$$
If the true solution to our differential equation and its initial condition are known to be real-valued at real time points, then we have $\operatorname{Re}(u) = u$, and taking the real part of the solution at the end of the complex path results in a 3rd order accurate numerical solution. This formulation, allowing the error to be purely imaginary at $O(h^3)$, has fewer constraints on the coefficients, which must now satisfy the following relaxed conditions:
$$h_1 + h_2 + h_3 = h, \qquad h_1 h_2 + h_1 h_3 + h_2 h_3 = \frac{h^2}{2}, \qquad \operatorname{Re}\big[h_2 h_1^2 + h_3 (h_1 + h_2)^2\big] = \frac{h^3}{3}, \qquad \operatorname{Re}\big[h_1 h_2 h_3\big] = \frac{h^3}{6}. \tag{22}$$
There are many paths satisfying these equations. Of the 3rd order paths in Fig 4 that were found to satisfy the linear order conditions, the top and bottom (cyan and brown) paths also satisfy the 3rd order nonlinear order conditions. We ran convergence tests for various nonlinear problems listed in Section 3.3, and demonstrated 3rd order accuracy with those paths (Fig 5).
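That certain permutations of the linear 3rd order steps also satisfy the relaxed nonlinear conditions can be verified directly. In the sketch below (our own check, not the paper's code), ordering the steps so that the real-valued step is taken in the middle satisfies all four conditions in Eq (22):

```python
import numpy as np

h = 1.0
roots = np.roots([1.0, -h, h ** 2 / 2, -h ** 3 / 6])   # linear 3rd order steps, Eq (8)
real_step = roots[np.argmin(np.abs(roots.imag))]
pair = sorted(roots[np.abs(roots.imag) > 1e-10], key=lambda z: z.imag)
h1, h2, h3 = pair[0], real_step, pair[1]               # real step taken in the middle

e1 = h1 + h2 + h3                                      # first condition in Eq (22)
e2 = h1 * h2 + h1 * h3 + h2 * h3                       # second condition in Eq (22)
bushy = h2 * h1 ** 2 + h3 * (h1 + h2) ** 2             # third condition (real part)
tall = h1 * h2 * h3                                    # fourth condition (real part)
```

Here the first two conditions hold exactly as complex equations, while the last two hold only in their real parts, which is all the real-part trick requires.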
In this subsection, we have derived paths in the complex plane that satisfy the order conditions for scalar nonautonomous differential equations. A natural follow-up question is whether these paths give the same order for vector-valued nonautonomous differential equations, expanding our result to systems of differential equations and partial differential equations. Butcher showed in [butcher2009fifth] that the order conditions for scalar nonautonomous differential equations are the same as those for vector-valued differential equations up to order 4. As the paths derived here are of order 3 or less, they hold for systems of differential equations and partial differential equations, as seen in the next subsection. To construct a fifth order or higher complex path, we would need to extract the necessary order conditions using Butcher’s rooted tree approach [butcher2009fifth].

3.3 Numerical Results
We have tested our complex timestepping schemes on a variety of differential equations to show that they satisfy the order conditions. The equations are listed in Table 1 along with a unique label corresponding to their legend on the convergence plot in Fig.5.
1. Linear ODEs



2. Nonlinear ODEs


3. Linear PDE: The advection equation (‘wave’)
This is a linear PDE given by
$$u_t + c u_x = 0.$$

4. Nonlinear PDE: Viscous Burgers equation (‘burgers’)
This is a nonlinear PDE given by
$$u_t + u u_x = \nu u_{xx}.$$
The error for all ODEs and PDEs apart from the van der Pol oscillator is calculated by comparing the numerical solution to the exact solution. For the van der Pol oscillator, the numerical solution using complex paths is compared to a reference numerical solution obtained using the ‘RK44’ 4th order method from NodePy [ketcheson2020nodepy] with a small reference step size.
3.4 Higher order integrators tailored to specific problems
For specific problems, it is possible to derive complex time steppers that are uniquely accurate with fewer steps. For example, we can derive a 2-step complex Euler method with 3rd order accuracy for the following nonlinear problem,
(25) 
The true solution at time $t_0 + h$ can be approximated to third order accuracy using Eq (13) to obtain
(26) 
Note that one does not need the true solution to obtain Eq (26), as this expansion can be obtained from the right hand side of the differential equation.
Now, we apply the following two-step complex method to integrate from time $t_0$ to time $t_0 + h$ using forward Euler steps.
(27) 
where $h_1$ and $h_2$ are complex time steps. Writing the final value in terms of the initial condition, we get
(28) 
This 2-step complex Euler method would have 3rd order accuracy if the coefficients in Eqs. (26) and (28) match. However, there are no values that satisfy these conditions. So, we assume the case in which the true solution to (25) is real at real time points (valid if the initial condition is real). For this case, we can let the error be purely imaginary beyond order 1 and use the trick of taking the real part of the numerically calculated values every time we complete a complex path and return to the real line. This relaxed constraint produces the following nonlinear system of equations that can be solved,
(29) 
One of the many solutions to this system is
(30) 
Applying this new 2-step time stepper to the nonlinear problem, we find that it indeed has 3rd order accuracy (Fig 6). Similar 2-step paths are derived for two other specific problems, and they also show the expected 3rd order accuracy in Fig. 6.
In the process of constructing this integrator, we needed to plug the right hand side of the differential equation into Eq (13) to derive the expansion. This may be an intensive symbolic task for more complicated problems. However, the right complex steps could be ‘learned’ for specific and difficult problems through approaches similar to [guo2021personalized], where a machine learning approach is used to obtain Runge-Kutta methods tailored to particular ODE families.
In this section, we have shown the process of constructing higher order paths for explicit Runge-Kutta type integrators. Later, in Section 8, we extend it to implicit methods. In the next section, we discuss the benefits and costs compared to equivalent classical Runge-Kutta methods.
4 Operation counts and low storage benefits
Now, we discuss a potential drawback of the complex time-stepping schemes, the increased computational cost of performing complex operations, and a potential benefit, the low storage cost compared to Runge-Kutta methods. In Sec. 4.1, we demonstrate that low storage cost is the main benefit for complex-valued systems.
Consider the forward Euler method applied to the differential equation $y' = f(y)$, yielding the step $y_{n+1} = y_n + h f(y_n)$. If the time-stepping is done along the real line, the computational cost is one real multiplication, one real addition, and the cost of one real function evaluation. If the time-stepping is done along a path in the complex plane, the computational cost is one complex multiplication, one complex addition and the cost of one complex function evaluation. A complex multiplication costs 4M + 2A or 3M + 5A (Gauss’s trick), where M and A are the costs of real multiplication and addition. The storage also needs to be doubled to store a complex-valued vector compared to the real-valued vector.
Consider taking a complex time step $\tau = \tau_r + i \tau_i$ with the ordinary differential equation $y' = \lambda y$, where $\lambda$ is real and we write $y = u + iv$. We can express the step as the following real system,
$$\begin{pmatrix} u \\ v \end{pmatrix}_{n+1} = \begin{pmatrix} u \\ v \end{pmatrix}_{n} + \lambda \begin{pmatrix} \tau_r & -\tau_i \\ \tau_i & \tau_r \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix}_{n}, \tag{31}$$
where $\tau$ represents the complex time step and the subscripts $r$ and $i$ represent the real and imaginary parts. This results in a 4M + 4A cost of operations as opposed to M + A for purely real time steps. Based on Eq (31), the complex time stepping process shows promise for parallelization, which could reduce computational time by up to half.
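As a sanity check on this real reformulation, a complex Euler step computed with complex arithmetic should agree exactly with the equivalent real 2×2 system. A minimal sketch, with assumed values of $\lambda$ and $\tau$:

```python
import numpy as np

lam = -2.0                      # assumed real coefficient in y' = lam*y
tau = (1 + 1j) * 0.05           # assumed complex time step
y = 1.0 + 0.0j
y_next = y + tau * lam * y      # complex arithmetic: one complex multiply and add

# equivalent real 2x2 system acting on (Re y, Im y)
M = np.array([[tau.real, -tau.imag],
              [tau.imag,  tau.real]])
u = np.array([y.real, y.imag])
u_next = u + lam * (M @ u)      # 4 real multiplies and 4 real adds
```

The two rows of the matrix-vector product are independent, which is the source of the parallelization opportunity mentioned above.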
Similarly, if we integrate a system of N equations $y' = Ay$ with complex time steps, it becomes a system of 2N real equations. In that scenario, the dominant cost is the matrix-vector multiplication ($Ay$), which takes $N^2$ multiplications with real time steps and $4N^2$ with complex time steps. For a k-step complex Euler method, there are k matrix-vector multiplications and the operation count is $4kN^2$, compared to $kN^2$ for a k-stage Runge-Kutta method.
For nonlinear systems, the dominant cost comes from function evaluations. Functions can be designed to minimize the computational cost, but the real equivalent of any complex-valued function can always be calculated using fewer operations. So, the complex time stepper will always be more expensive than an explicit Runge-Kutta method if the number of function evaluations is the same. In the next section, we explore how complex time integrators can require fewer function evaluations than the classical Runge-Kutta methods of the same order by breaking the Runge-Kutta order barrier [butcher2009fifth]. In that case, the complex integrators may end up being the less expensive option.
Another potential advantage is that the storage requirement for complex stepping is a single vector of size 2N, while general Runge-Kutta schemes store the intermediate stages. For example, a 5th order complex time-stepping scheme needs to store a vector of size 2N at all times, while a classically implemented 5th order Runge-Kutta method requires memory for all of its stage vectors. However, there are low-storage implementations of Runge-Kutta schemes that use fewer registers [carpenter1994fourth, KENNEDY2000177, ketcheson2008highly]. Storage is not much of an issue for typical ODEs and simple systems, but it becomes quite relevant in various practical settings like computational fluid dynamics [KENNEDY2000177], where partial differential equations like the Navier-Stokes equations are solved in 3 dimensions at high resolution.
4.1 The linear Schrödinger equation: A clear computational advantage for the 3-step complex Euler method
In this subsection, we demonstrate an example case where our complex methods have a clear benefit over existing methods. We compare the 3-step complex Euler method to a standard RK3 method (Ralston’s method) for a specific problem. We choose to examine the linear Schrödinger equation because it is a partial differential equation with complex values, so the cost of complex operations is the same in both methods. We show that the methods are of the same order, but because the 3-step complex Euler method has lower storage requirements, it is advantageous.
The linear Schrödinger equation is given by
$$u_t = i u_{xx}. \tag{32}$$
We solve the equation from to on the domain with periodic boundary conditions. The initial condition is given by and the Fourier spectral method is used for the spatial differencing. The exact solution is given by .
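A compact version of this experiment can be sketched as follows. This is our own minimal setup, assuming the form $u_t = i u_{xx}$, a single-mode initial condition $u_0 = e^{ix}$, and the 3-step complex Euler path; the paper's actual grid, interval and initial condition may differ:

```python
import numpy as np

N, L = 64, 2 * np.pi
x = np.arange(N) * L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)             # integer wavenumbers
u0 = np.exp(1j * x)                                    # assumed single-mode initial condition

T, nsteps = 0.1, 100
h = T / nsteps
path = np.roots([1.0, -h, h ** 2 / 2, -h ** 3 / 6])    # 3-step 3rd order complex path

uhat = np.fft.fft(u0)
lam = -1j * k ** 2                                     # u_t = i*u_xx in Fourier space
for _ in range(nsteps):
    for dt in path:                                    # three forward Euler substeps
        uhat = uhat + dt * lam * uhat

u_num = np.fft.ifft(uhat)
u_exact = np.exp(1j * (x - T))                         # the k=1 mode evolves as e^{i(x-t)}
err = np.max(np.abs(u_num - u_exact))
```

Note that only `uhat` is carried between steps, which is the storage advantage over a Runge-Kutta 3 implementation that holds all three stages.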
Figs 7 and 8 show that the two methods have near identical error and require an equal number of function evaluations when integrating the linear Schrödinger equation. The 3-step complex method has one advantage over the Runge-Kutta 3 method: since calculating the next step only requires the numerical solution at the current step, the required storage is the size of the initial condition vector. The Runge-Kutta 3 method requires three times the storage to hold the 3 stages during each step in its basic implementation.
5 Breaking the order barrier for explicit Runge-Kutta methods with complex coefficients
One can construct an s-stage explicit Runge-Kutta method with order $p = s$ up to order 4. But for order 5 and beyond, the number of stages needs to be more than the order desired [butcher2009fifth]. This is known as the order barrier for Runge-Kutta methods [Butcher:2007].
Specifically, for a 5th order Runge-Kutta method, 6 stages and hence 6 function evaluations are required. This is because for 5th order accuracy there are 17 order conditions, and 5 stages only provide 15 free parameters to satisfy them.
Can the order barrier be broken with complex time stepping? If we take $n$ complex Euler steps and use the trick of taking the real part and keeping the leading order error purely imaginary, we have $2n - 1$ free parameters ($n$ complex variables that need to return to the real line). These free parameters are insufficient even for 4th order accuracy, because there are 8 order conditions and only 7 free parameters for $n = 4$.
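The parameter counting above can be tabulated directly, using the cumulative rooted-tree counts of 1, 2, 4, 8 and 17 order conditions for orders 1 through 5:

```python
# cumulative number of Runge-Kutta order conditions up to order p (rooted trees)
conditions = {1: 1, 2: 2, 3: 4, 4: 8, 5: 17}

# n complex Euler steps that return to the real line give 2n - 1 real free parameters
feasible = {n: (2 * n - 1) >= conditions[n] for n in range(1, 6)}
# feasible[3] is True (5 parameters vs 4 conditions); feasible[4] is False (7 vs 8)
```

The count shows why plain complex Euler paths stall at 3rd order, motivating the complex Runge-Kutta compositions discussed next.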
Complex stepping can be applied to numerical methods apart from forward Euler as well. What if we allowed some coefficients in a 5-stage Runge-Kutta step to be complex variables and constrained the error to be purely imaginary? That easily increases the number of free parameters from 15 to 17 or higher, enough to solve the 17 order conditions!
Another approach to break the order barrier is to take two complex Runge-Kutta steps: one Runge-Kutta 2 step and then one Runge-Kutta 3 step. If we require that the two complex Runge-Kutta steps come back to the real line, the total number of free parameters is 10 (3 free parameters for Runge-Kutta 2, 6 free parameters for Runge-Kutta 3, and the two time step sizes with the constraint that they add up to $h$ (purely real)). If all 10 of those free parameters are complex valued and the leading order error is purely imaginary, we get 20 free parameters, which are sufficient to solve the 17 order conditions with 5th order accuracy. Using this method, we can achieve 5th order accuracy with only 5 function evaluations, breaking the order barrier. As these are complex function evaluations, the cost is higher than the real equivalent, but there are problems and architectures for which this is likely not an issue, as discussed in Sec. 4.
As a proof of concept, we find a 5th order accurate complex Runge-Kutta 2/Runge-Kutta 3 method for the special case of scalar autonomous ODEs. We choose this particular case because it is more tractable than the 17 order conditions for the general case, which require a more complicated derivation involving a rooted tree approach [butcher2009fifth]. Even in the scalar autonomous case, there are 12 order conditions to satisfy (which cannot be done with only the 10 free parameters of the purely real version of the Runge-Kutta 2/Runge-Kutta 3 method). Here, we show that the complex Runge-Kutta 2/Runge-Kutta 3 method returns a 5th order accurate numerical solution for scalar autonomous ODEs with 5 function evaluations, something that cannot be done with its real equivalent.
The following equations describe our two step complex method for $y' = f(y)$. First a Runge-Kutta 2 step of size $h_a$ goes from $y_0$ at $t_0$ to $y_1$, and then a Runge-Kutta 3 step of size $h_b$ goes from $y_1$ to $y_2$. The steps and stages are
$$k_1 = f(y_0), \tag{33}$$
$$k_2 = f(y_0 + a_{21} h_a k_1), \tag{34}$$
$$y_1 = y_0 + h_a (b_1 k_1 + b_2 k_2), \tag{35}$$
$$k_3 = f(y_1), \tag{36}$$
$$k_4 = f(y_1 + \hat{a}_{21} h_b k_3), \tag{37}$$
$$k_5 = f\big(y_1 + h_b (\hat{a}_{31} k_3 + \hat{a}_{32} k_4)\big), \tag{38}$$
$$y_2 = y_1 + h_b (\hat{b}_1 k_3 + \hat{b}_2 k_4 + \hat{b}_3 k_5), \tag{39}$$
$$u(t_0 + h) \approx \operatorname{Re}(y_2). \tag{40}$$
Ensuring the method returns to the real line after an effective time step of $h$ introduces the constraint $h_a + h_b = h$.
With these 9 complex variables, we solve the 12 order conditions and the 1 constraint guaranteeing that the time stepper returns to the real line. The resulting scheme provides 5th order accuracy for scalar autonomous ODEs with only 5 function evaluations (Fig. 9).
6 Expanded regions of stability with complex timesteps
In the previous sections, we designed paths in the complex plane to achieve higher order. In this section, we leverage complex paths in an alternative way and expand the time stepper’s region of absolute stability. First, we discuss the linear stability regions of the complex integrators developed in Sec. 3.1. Next, we show how these regions can be expanded by designing the complex steps to optimize a desired stability polynomial. Expanded stability regions have been studied in a similar manner previously in the context of Runge-Kutta methods, where the order of accuracy of the Runge-Kutta method is sacrificed to obtain a larger stability region [riha1972optimal, lawson1966order, abdulle2000roots, bogatyrev2004effective, ketcheson2013optimal, parsani2013optimized, kubatko2014optimal]. In most previous works, a new Runge-Kutta integrator (a different integration method) was derived that had a stability function/polynomial corresponding to the desired stability region. Here, we keep the same integration method and traverse a path in the complex plane to obtain the desired stability region (a different integration domain). We illustrate this approach using the forward Euler method. We also show that complex stepping may enable even larger regions designed for specific problems.
Consider the linear equation $y' = \lambda y$ studied in Sec. 3.1. Most integrators, including those discussed here, convert the linear equation into the difference equation $y_{n+1} = R(z) y_n$ with $z = \lambda h$, where $R(z)$ is the stability function of the integrator [leveque2007finite]. The region where $|R(z)| \le 1$ is the region of absolute stability. For a complex n-step nth order method, the stability function is given by the first $n+1$ terms in the Taylor series of $e^z$. For the 3 step 3rd order method, the stability function is
$$R(z) = 1 + z + \frac{z^2}{2} + \frac{z^3}{6}. \tag{41}$$
The regions of absolute stability for the 1 step 1st order, 2 step 2nd order and 3 step 3rd order methods (Fig 10) are the same as those for the n-stage Runge-Kutta methods because they have the same stability functions [leveque2007finite]. Note that $h$ in all these methods refers to the step size separating the beginning and end points of the complex path, and hence is a real time step. Since the 1-step, 2-step and 3-step methods require a different number of steps, and hence a different number of function evaluations, to take a net step of size $h$, a standard metric for comparison among methods is the effective step size $h/n$, where $n$ is the number of steps in the complex integrator case or the number of stages in the case of Runge-Kutta methods [ketcheson2013optimal].
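The real-axis extent of these regions can be checked numerically. For the 3-step 3rd order stability function, the stable interval on the negative real axis should end near $z \approx -2.51$, the classical 3-stage Runge-Kutta value (a minimal sketch):

```python
import numpy as np

def R(z):
    """Stability function of the 3-step 3rd order complex Euler path."""
    return 1 + z + z ** 2 / 2 + z ** 3 / 6

z = np.linspace(-4.0, 0.0, 4001)       # sample the negative real axis
stable = np.abs(R(z)) <= 1.0
left_end = z[stable].min()             # ~ -2.51, same as 3-stage RK3
```

This agrees with the statement that the complex n-step paths inherit the n-stage Runge-Kutta stability regions.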
We would like to expand the regions in Fig 10 by optimizing the stability function. If we relax our requirement for a 3-step integrator from 3rd order accuracy to only 2nd order accuracy, its stability function would be given by
$$R(z) = 1 + z + \frac{z^2}{2} + C z^3. \tag{42}$$
The coefficient $C$ in the above expression is determined by the coefficient in front of $\lambda^3$ in Eq (5): $C = h_1 h_2 h_3 / h^3$. We can choose the complex steps $h_1$, $h_2$ and $h_3$ such that they satisfy the first and second order conditions and make $C$ equal to our desired value. The optimal $C$ value is the one that would allow us to take the largest stable time step. For problems with purely real and imaginary eigenvalues, the optimal $C$ values and the associated optimal stability functions/polynomials have been long studied [riha1972optimal, bogatyrev2004effective, abdulle2000roots]. In the past decade, Ketcheson and coauthors [parsani2013optimized, kubatko2014optimal, ketcheson2013optimal] have explored optimal stability functions in the context of $y' = Ay$, where $A$ is a matrix with quite arbitrary spectra. In [ketcheson2013optimal], Ketcheson and Ahmadia describe an optimization algorithm to obtain the optimal stability polynomial that allows the maximal time step depending on the spectrum of $A$. This algorithm (RKOpt [ketcheson2020rk]) is publicly available through NodePy [ketcheson2020nodepy]. By choosing the $C$ value that results in the largest permissible time step along the negative real eigenvalue line, we achieve a 3-step 2nd order method (green) with expanded stability along the negative real axis compared to the 3-step 3rd order method (red, Fig 11). Requiring only 1st order accuracy for the 3-step method can nearly triple the size of the stability region along the negative real axis (gold) and, surprisingly, requires only real time steps (Fig 11).
To our knowledge, our work is the first to use complex time steps to realize designed stability polynomials, enabling us to expand the stability region. Next, we demonstrate a potential path to even larger stability regions via complex stepping.
The use of complex time steps in an integrator allows us to expand the set of optimal stability polynomials beyond those with real coefficients. Consider the challenge of finding a 3-step 2nd order integrator with an optimal stability region that encompasses the largest possible time step along a particular complex eigenvalue ray (red dots, Fig 12). The red dots further away from the origin correspond to larger $h$ values along the eigenvalue. We want to find the optimal stability polynomial/function that permits the largest $h$ for a 3 stage/step 2nd order method in this scenario.
The optimal stability polynomial returned by RKOpt [ketcheson2020rk] (magenta, Fig. 12) is
(43) 
Through trial and error, we find an even better stability polynomial (brown, Fig. 12), given by
(44) 
The complex coefficient allows a larger stable time step size for a problem with this eigenvalue. A tradeoff is that the optimal region is no longer symmetric about the real axis. So such regions would be primarily useful for complex problems, where complex eigenvalues can appear outside conjugate pairs. Alternatively, better stability region designs may be possible using more sophisticated optimization methodology.
This section has been an exploration of paths in the complex plane on which an integrator has improved linear stability properties. In the next section, we consider nonlinear stability properties, particularly strong stability preserving properties. Later, in Section 8, we discuss the derivation of implicit complex methods with properties like L-stability.
7 Remarks on Strong Stability Preserving (SSP) properties
Time integrators with SSP properties are desirable because they provide guarantees of stability for nonlinear problems and nonsmooth solutions. In this section, we highlight the need for new analysis for deriving practical SSP conditions for complex time steppers, and we show numerically that complex time stepping can result in larger SSP time steps.
Consider the following differential equation system,
$$u' = f(u).$$
If the forward Euler method is used to integrate the above system, the system is said to satisfy an SSP property if
$$\|u_n + \Delta t f(u_n)\| \le \|u_n\| \qquad \text{for } 0 \le \Delta t \le \Delta t_{FE},$$
where $\Delta t_{FE}$ is the maximal SSP time step.
Much effort has focused on obtaining optimal SSP Runge-Kutta methods in various contexts [gottlieb2001strong, ketcheson2008highly]. The general argument used to show the SSP property is as follows: if forward Euler is SSP for $\Delta t \le \Delta t_{FE}$, these optimal SSP Runge-Kutta methods can be expressed as a convex combination of forward Euler steps and are guaranteed to be SSP for $\Delta t \le \mathcal{C}\, \Delta t_{FE}$. The constant $\mathcal{C}$ is known as the SSP coefficient.
Following this line of argument, consider our n-step general complex Euler time stepper expressed as
$$y^{(k)} = y^{(k-1)} + \tau_k f(y^{(k-1)}), \qquad k = 1, \dots, n, \qquad y^{(0)} = u_n, \qquad u_{n+1} = \operatorname{Re}\big(y^{(n)}\big).$$
Here, $u_n$ and $u_{n+1}$ represent the solution at time points on the real line, the $y^{(k)}$ represent the solution at the complex time points in between, and the $\tau_k$ are the complex time steps. Unlike the Runge-Kutta methods in their Shu-Osher form, finding an SSP coefficient using convex combinations of forward Euler is less straightforward because of the complex nature of $\tau_k$ and $y^{(k)}$.
A strict condition for an SSP complex time integrator can be derived if we require every substep to be contractive,
$$\|y^{(k)}\| \le \|y^{(k-1)}\|, \qquad k = 1, \dots, n.$$
This results in a very restrictive time step.
Notably, the lower bound on the largest possible SSP time step can be as small as zero if the real part of a substep vanishes. This is too strict a condition to obtain a practical SSP time step. We can obtain a larger time step with a looser SSP condition if we require that only the solutions at the real time points at the end of the path in the complex plane satisfy the SSP property $\|u_{n+1}\| \le \|u_n\|$, applied to the composition of the two complex substeps $(1+i)\Delta t/2$ and $(1-i)\Delta t/2$ taken at each real time point for the two step second order complex time integrator.
We can also take the $(1-i)\Delta t/2$ step first and then the $(1+i)\Delta t/2$ step before returning to the real line. Taking different paths with the same number of steps and the same order results in different SSP conditions. However, we have not yet been able to obtain a clean expression for an upper bound on the SSP time step using this approach.
Using numerical experiments, we find that the complex time-stepping methods can allow for a larger SSP stable time step. We demonstrate this for the simple case,
(45) 
We consider three 2nd order timestepping methods.

The midpoint method:
$$u^* = u_n + \frac{\Delta t}{2} f(u_n), \qquad u_{n+1} = u_n + \Delta t f(u^*). \tag{46}$$
The optimal SSP Runge-Kutta 2 method: This is the second order RK method with the optimal SSP coefficient of 1 [gottlieb2001strong].
$$u^* = u_n + \Delta t f(u_n), \qquad u_{n+1} = \frac{1}{2} u_n + \frac{1}{2}\big(u^* + \Delta t f(u^*)\big). \tag{47}$$
The complex 2-step method: We assume our initial condition is real, so we take the real part of the solution at the end of the two steps.
$$y = u_n + \frac{(1+i)\Delta t}{2} f(u_n), \qquad u_{n+1} = \operatorname{Re}\Big[y + \frac{(1-i)\Delta t}{2} f(y)\Big]. \tag{48}$$
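The complex 2-step method is straightforward to implement. The sketch below (our own check, using an assumed test problem $u' = -u^2$, $u(0) = 1$, with exact solution $1/(1+t)$) applies it and estimates the convergence order from two step sizes:

```python
import numpy as np

f = lambda u: -u ** 2                   # assumed test problem u' = -u^2, u(0) = 1
exact_T = 0.5                           # exact solution 1/(1+t) evaluated at T = 1

def complex2_step(u, dt):
    # Euler along (1+i)dt/2, then (1-i)dt/2, then take the real part
    y = u + (1 + 1j) * dt / 2 * f(u)
    y = y + (1 - 1j) * dt / 2 * f(y)
    return y.real

def solve(dt, T=1.0):
    u = 1.0
    for _ in range(round(T / dt)):
        u = complex2_step(u, dt)
    return u

e_coarse = abs(solve(0.02) - exact_T)
e_fine = abs(solve(0.01) - exact_T)
order = np.log2(e_coarse / e_fine)      # ~2 for a second order method
```

Halving the step size cuts the error by roughly a factor of four, confirming second order accuracy of the real-part composition.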
We numerically compute the largest time step for each method such that the SSP condition ($\|u_{n+1}\| \le \|u_n\|$) is satisfied. Different methods permit the largest time step in different regimes. However, in most regimes the complex 2-step method enables comparable maximal time steps, and in some it appears to enable larger maximum time steps (Fig. 13). While we have not yet been able to prove SSP properties for complex methods, these numerical experiments show they may have desirable properties.
8 Dealing with stiff systems: Implicit methods
When dealing with stiff systems, expanding the stability region of the integrator is rarely sufficient, and implicit methods with properties like L-stability are required. In this section, we derive a higher order complex path for an implicit method: the implicit midpoint method. First we study linear ODEs (Sec. 8.1) and then we extend to general, nonlinear, nonautonomous differential equations (Sec. 8.2).
8.1 Linear differential equations
Consider the linear equation