Question

I'm solving a differential equation of the form $\ddot x = f( \dot x, x)$ on a microchip within a limited (real-world) time frame, so I want to use an adaptive step size to get as good a result as possible. However, I only have a limited number of iterations, $N$, to solve the system. If I use the "regular" adaptive time step methods of estimating the local error and adapting the step size, there's no guarantee that I will be able to solve the system in $N$ iterations.

Does anyone know where I can find algorithms for this? All I can find are the "regular" adaptive time step algorithms where $N$ isn't limited.

The current solution I have is to assume that the error is proportional to the magnitude of $\ddot x$, project a line from $x(t_0)$ with constant velocity $\dot x = c$ to reach $x(t_n)$, and then use the differential equation to get the distribution of the magnitude of $\ddot x$ along that line, so I know where the step size needs to be small and can adjust it accordingly.

This projected line is obviously not the real path, but the idea is that it gives me a good approximation of how the error will change in the future, so that I know whether I can "afford" a small step size at each given iteration. This method assumes that you have the primitive function of $f(c,x(t))$ with respect to $t$ when $c$ is constant and $x(t)$ is linear, and that $f^{-1}$ exists. The method could probably be extended to project the path with constant acceleration instead of constant velocity, but the idea is the same.
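To make the projection idea concrete, here is one way to write it out (this is my reading of the description above, not a formulation from the question itself). With $\dot x \equiv c$ the projected path is a straight line,

$x(t) \approx x(t_0) + c\,(t - t_0), \qquad \ddot x(t) \approx f\big(c,\; x(t_0) + c\,(t - t_0)\big),$

and the step boundaries $t_0 < t_1 < \dots < t_N = t_n$ are chosen so that each step carries an equal share of the projected magnitude of $\ddot x$:

$\int_{t_k}^{t_{k+1}} \big| f\big(c,\; x(t_0) + c\,(\tau - t_0)\big) \big| \, d\tau \;=\; \frac{1}{N} \int_{t_0}^{t_n} \big| f\big(c,\; x(t_0) + c\,(\tau - t_0)\big) \big| \, d\tau.$

Solving each of these equations for $t_{k+1}$ is where the primitive function of $f(c, x(t))$ and its invertibility come in.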

The problem is that I can't find any information on the subject and I highly doubt that my naive initial idea is the best one out there.

Was it helpful?

Solution

I've come to the conclusion that it's not worth doing for my application, so I thought I would post my progress here for anyone else who might be interested in this same problem.

Here is my more "proper" solution, which does not put any constraints on the differential equation but instead uses an error estimate for the adaptive steps, more like a traditional approach. For anyone dealing with the same problem or just interested in a solution, here it goes.

For the Euler method we can assume that the error is $\varepsilon = c h^2$, and that $c \approx |\ddot x_t - \ddot x_{t-1}|$. In order to limit the number of iterations we take, we have to make some assumptions about the future. The assumption I made is that we will have some average error when we take an equidistant step, and that this error will be constant. So let's define the "future average error" as $\hat \varepsilon = \hat c {\hat h}^2$. Since we assume that the future points will be equidistant, we know that $\hat h = l/n$, where $l$ is the remaining length (of time) and $n$ is the remaining number of iterations. We don't know $\hat c$, but after some testing I found that the current average $c$ worked rather well, and an exponentially smoothed $c$ worked even better. Making projections on how $c$ will change based on previous data might yield even better results, but I'm trying to do this with few operations, so I don't have time to make any advanced predictions.

Now we have all we need. Given the assumption that the best accuracy is reached when the error is constant, we can set the two expressions equal and get $c h^2 = \hat c (l/n)^2$, which gives us a step size of:

$h = \sqrt{ \frac{\hat c}{c}} \frac{l}{n}$
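As a quick illustration of what this rule does (the ratios below are just example values plugged into the formula, not measurements):

$c = \hat c \;\Rightarrow\; h = \frac{l}{n}, \qquad c = 4\hat c \;\Rightarrow\; h = \tfrac{1}{2}\frac{l}{n}, \qquad c = \tfrac{\hat c}{4} \;\Rightarrow\; h = 2\frac{l}{n},$

so where the acceleration changes faster than the running average the step shrinks below the equidistant step $l/n$, and where it changes slower the step grows.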

This step size alone performs rather well, but we do not have any upper bound on the step size, and we will get division by zero if $c$ is zero (i.e. if the acceleration is constant between two points). So we simply introduce a lower bound on $c$, defined by an upper bound on $h$; setting $h = h_{max}$ in the step-size formula above and solving for $c$ gives:

$c_{min} = \hat c ( \frac{l}{n h_{max}} )^2$

After some testing I've found that a good value for $h_{max}$ is $\frac{l}{\max(n-1,\,3)}$.

So to recap, for those who just want to test it, the algorithm is as follows:

h_max = length/max(n-1, 3)                                # upper bound on the step size
c = max(abs(a - a_previous), c_hat*(length/(n*h_max))^2)  # error coefficient, bounded below by c_min
c_hat = alpha*c + (1 - alpha)*c_hat                       # exponential smoothing with the current c

h = sqrt(c_hat/c)*length/n                                # step size for this iteration

where a and a_previous are $\ddot x_t$ and $\ddot x_{t-1}$ for the differential equation. If you implement this, it's important to remember that length and n are the remaining length and the remaining number of iterations, not the totals, so they need to be updated every iteration. It's also important to note that c_hat is updated with the current c value; I found this to yield much better results.
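For illustration only, here is a minimal C sketch of how the recipe above could be wired into an explicit Euler integrator for $\ddot x = f(\dot x, x)$. The function name budget_euler, the initial value of c_hat, and the clamp that makes the final step land exactly on the end time are my own assumptions; the answer itself only specifies the step-size rule.

#include <math.h>

typedef double (*accel_fn)(double v, double x);   /* f(x', x) */

/* Integrate x'' = f(x', x) from t = 0 to t = t_total using exactly n_total
   explicit Euler steps, with the budget-aware step-size rule described above. */
double budget_euler(accel_fn f, double t_total, double x0, double v0,
                    int n_total, double alpha)
{
    double x = x0, v = v0;
    double remaining = t_total;          /* remaining length (of time)          */
    double c_hat = 1e-12;                /* smoothed "future" error coefficient */
    double a_prev = 0.0;
    int have_prev = 0;

    for (int n = n_total; n > 0; --n) {
        double h_max = remaining / fmax((double)(n - 1), 3.0);
        double a     = f(v, x);
        double c_min = c_hat * pow(remaining / (n * h_max), 2.0);
        double diff  = have_prev ? fabs(a - a_prev) : 0.0;
        double c     = fmax(diff, c_min);            /* bounded below by c_min  */

        c_hat = alpha * c + (1.0 - alpha) * c_hat;   /* update with current c   */

        double h = sqrt(c_hat / c) * remaining / n;
        if (n == 1 || h > remaining) h = remaining;  /* land exactly on the end */

        x += h * v;                                  /* explicit Euler step     */
        v += h * a;
        remaining -= h;
        a_prev = a;
        have_prev = 1;
    }
    return x;
}

Called as budget_euler(f, t_total, x0, v0, N, alpha), this performs exactly N evaluations of f, which is the whole point of the iteration budget; alpha is the smoothing factor from the recap above.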

On to why I don't think it's worth it. During my testing I got at most a 40% decrease in mean squared error compared to the "true" solution (evaluated using many, many more points). The error at the final point, however, was often worse than with simple equidistant evaluation, even for 'simple' differential equations like $y' = a - y$. Only differential equations that varied a lot (like those with sinusoidal solutions) gave an improvement in the accuracy of the final point, and since I'm mostly interested in the accuracy of the final point, I think it's worth spending those computational resources on doing more iterations rather than on adapting the step size. I can probably cram in twice the number of evaluations instead of doing adaptive step sizes, which in my testing was always better than the adaptive strategy.

For anyone who wishes to test this further: the main reason why this method doesn't perform as well as one might expect is how $\hat c$ is evaluated. Since it's based on previous values (and heavily skewed toward the current value), the algorithm holds off on small step sizes, which is often detrimental to performance. Trying to predict how $\hat c$ will actually behave in the future would probably yield much better results, and that would be a good place to start exploring further.

TL;DR: If you have this problem, just spend the computing resources on more iterations (or higher-order methods) instead of on advanced methods to adapt the step size.

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange