The optimizer probably estimated that A.TIMESTAMP > ... would reduce the number of hits by so much that running nested loops over a small number of rows would be cheaper than performing large joins.
The exact cause, and whether there is an easy way to correct the problem, are hard to determine from the scarce information provided.
You should not be surprised that the execution plan changes drastically when you add an index (or a condition on an indexed column).
I'm a bit surprised that it chose to change the plan for a > comparison, though. Is the cut-off a fixed value (i.e. known to the optimizer), and is it close to the highest value in the table as recorded in the table statistics?
There is a caveat with timestamps: the highest-value statistic can go stale very quickly. Say your statistics are 24 hours old and you are looking for rows from the last 24 hours. The optimizer will consult the stats and predict that the query will return 0 hits, so it will start by probing the index.
In reality, you have inserted lots of new records in the last 24 hours. A whole day's worth of new records that the statistics know nothing about...
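The staleness mechanic can be sketched with a toy timeline (using SQLite only as a stand-in; the table name, schema, and timestamps below are all hypothetical, and SQLite's statistics are much cruder than an enterprise optimizer's, but the point is the same: statistics are a snapshot, and rows inserted after the snapshot are invisible to it):

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical table of timestamped events.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, ts TEXT)")
conn.execute("CREATE INDEX idx_events_ts ON events (ts)")

# "Yesterday": load historical rows, then gather statistics.
base = datetime(2024, 1, 1)
conn.executemany(
    "INSERT INTO events (id, ts) VALUES (?, ?)",
    [(i, (base - timedelta(hours=i)).isoformat()) for i in range(100)],
)
conn.execute("ANALYZE")  # snapshot taken: no row later than `base` exists yet

# "Today": a whole day's worth of new rows arrives after the snapshot.
conn.executemany(
    "INSERT INTO events (id, ts) VALUES (?, ?)",
    [(1000 + i, (base + timedelta(hours=1 + i)).isoformat()) for i in range(24)],
)

# A cut-off at the old maximum: from the stale snapshot's point of view
# nothing can match, yet the query actually returns a full day of rows.
cutoff = base.isoformat()
hits = conn.execute(
    "SELECT COUNT(*) FROM events WHERE ts > ?", (cutoff,)
).fetchone()[0]
print(hits)  # 24
```

An optimizer that trusts the snapshot plans for ~0 hits and happily picks an index-driven nested-loop plan; the real row count makes that plan far more expensive than estimated.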
One way to set the optimizer straight is to supply the cut-off date as a bind parameter (and pre-compile the query, if applicable), so the optimizer isn't fooled into predicting 0 hits.
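A minimal sketch of that advice, again using Python's sqlite3 with a made-up table: the cut-off is passed through a `?` placeholder rather than spliced into the SQL text, so the statement can be prepared once for a generic value instead of being planned around one specific (and, per the stale statistics, "impossible") literal:

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical schema mirroring the question's A.TIMESTAMP column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, ts TEXT)")
conn.execute("CREATE INDEX idx_events_ts ON events (ts)")

now = datetime(2024, 1, 2, 12, 0, 0)
conn.executemany(
    "INSERT INTO events (id, ts) VALUES (?, ?)",
    [(i, (now - timedelta(hours=i)).isoformat()) for i in range(48)],
)

# Bind the cut-off instead of embedding it as a literal: the prepared
# statement is planned for "some value", not for this exact timestamp.
cutoff = (now - timedelta(hours=24)).isoformat()
recent = conn.execute(
    "SELECT id FROM events WHERE ts > ? ORDER BY ts", (cutoff,)
).fetchall()
print(len(recent))  # 24 rows from the last 24 hours
```

The trade-off is the usual one with bind parameters: the plan is no longer tailored to the specific value, which is exactly what you want here, since the tailored plan was built on misleading statistics.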