This has nothing to do with R per se, and everything to do with how floating-point values are represented by your computer.
Recurrence relations are like differential equations, and the problem is stated in two parts - the relation and the initial conditions. Change the initial conditions, and you have a different solution.
Note that with initial conditions F[1] <- 1; F[2] <- 3, the solution is 3^(x-1) (stated without proof, but it is easy to verify): an increasing exponential function.
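A quick check of that claim (using the name G rather than F, so as not to clobber the question's sequence; the recurrence f(x) = (10/3)f(x-1) - f(x-2) is the one from the question):

```r
# Iterate the recurrence from G[1] = 1, G[2] = 3 and compare to 3^(x-1).
G <- numeric(20)
G[1] <- 1
G[2] <- 3
for (x in 3:20) G[x] <- (10 / 3) * G[x - 1] - G[x - 2]
all.equal(G, 3^(0:19))
## [1] TRUE
```

Rounding errors here only nudge the already-dominant growing mode, so the agreement holds to well within all.equal's default tolerance.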
Next, note the ratio between consecutive elements (it is mildly instructive to also look at intermediate values of H here):
H <- tail(F, -1) / head(F, -1)
c(head(H, 1), tail(H, 1))
## [1] 0.3333333 3.0000000
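Those intermediate values make the transition visible. The question's sequence is not reproduced here, but assuming it was built from F[1] <- 1; F[2] <- 1/3 with the same recurrence (the choice of 60 terms is arbitrary), a sketch:

```r
# Assumed reconstruction of the question's sequence: initial conditions
# F[1] = 1, F[2] = 1/3, and 60 terms (an arbitrary choice).
F <- numeric(60)
F[1] <- 1
F[2] <- 1/3
for (x in 3:60) F[x] <- (10 / 3) * F[x - 1] - F[x - 2]
H <- tail(F, -1) / head(F, -1)
# Early ratios sit near 1/3 (the decaying solution); late ratios sit at 3,
# because the rounding error's growing 3^x component eventually dominates.
round(H[c(1, 10, 20, 30, 59)], 4)
```

The exact index at which the ratio swings over depends on where the rounding error first enters, but the two plateaus at 1/3 and 3 are robust.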
You are transitioning between the solutions f(x) = 3^(1-x) and f(x) = 3^(x-k) (for some constant k - it is not 1 here, but it is pointless to compute it exactly).
The reason is that when you subtract F[x-2], the arithmetic is not exact: you do not subtract quite enough at each stage, so it is as if you had a slightly higher initial condition for the exact solution at that stage.
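One way to convince yourself that the inexact arithmetic is the whole story is to rescale so that every operation is exact. Writing S[x] = 3^(x-1) * F[x], the recurrence f(x) = (10/3)f(x-1) - f(x-2) becomes S[x] = 10*S[x-1] - 9*S[x-2], and with the assumed initial conditions F[1] = 1, F[2] = 1/3 you get S[1] = S[2] = 1. Small integers are exact in double precision, so S stays at 1 indefinitely - i.e. the decreasing solution never drifts:

```r
# Scaled recurrence S[x] = 10 S[x-1] - 9 S[x-2] with S[1] = S[2] = 1:
# every operation here is exact integer arithmetic in doubles.
S <- numeric(60)
S[1] <- 1
S[2] <- 1
for (x in 3:60) S[x] <- 10 * S[x - 1] - 9 * S[x - 2]
range(S)   # all exactly 1, so F[x] = S[x] / 3^(x-1) = 3^(1-x) exactly
## [1] 1 1
```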
It is valid to take the first N points of F as given and then use the recurrence relation to continue from there; each choice of starting point yields a solution, so this gives a sequence of functions. That is what happens when the sequence is computed numerically - every step effectively starts from a fresh set of initial conditions.
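This can be made concrete: restart the recurrence from two consecutive computed values, and the remainder of the computed sequence is reproduced bit for bit - each stage's values really do act as the initial conditions for everything that follows. A sketch, again assuming the question's setup was F[1] <- 1; F[2] <- 1/3:

```r
# Assumed setup from the question: F[1] = 1, F[2] = 1/3, 60 terms.
F <- numeric(60)
F[1] <- 1
F[2] <- 1/3
for (x in 3:60) F[x] <- (10 / 3) * F[x - 1] - F[x - 2]
# Restart the recurrence using the computed values at 30 and 31 as
# initial conditions; the tails agree exactly, not just approximately.
G <- numeric(60)
G[30] <- F[30]
G[31] <- F[31]
for (x in 32:60) G[x] <- (10 / 3) * G[x - 1] - G[x - 2]
identical(F[32:60], G[32:60])
## [1] TRUE
```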
You're actually computing the solution of f(x) = (10/3)f(x-1) - f(x-2) + e(f(x-2)), where e(x) > 0 for all x (it represents the bits that fall off the end in the subtraction).