This reminds me vaguely of a back-propagation strategy as is often found in neural network training. I'll sketch two strategies, the first of which is going to be flawed:
- Compute the cost of your candidate path P, which we will call c(P).
- Compute the cost of the shortest path S, which we will call c(S).
- Reduce every edge weight w(p) ∈ P by (c(P) - c(S) + epsilon) / |P|, where epsilon is some vanishingly small positive constant by which you would like your path's cost to end up below c(S), and |P| is the number of edges in P. The total reduction is then c(P) - c(S) + epsilon, which drops c(P) to exactly c(S) - epsilon.
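To make that concrete, here is a rough Python sketch of the flawed version. The dict-of-dicts graph representation, the little dijkstra helper, and the function names are all my own scaffolding for illustration, not anything from the question:

```python
import heapq

def dijkstra(graph, source, target):
    """Return (cost, path) for a shortest path from source to target.
    graph is a dict mapping node -> {neighbor: edge weight}."""
    queue = [(0.0, source, [source])]
    done = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in done:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    raise ValueError("no path from {} to {}".format(source, target))

def naive_reduction(graph, candidate, epsilon=1e-9):
    """Flawed strategy: spread the required reduction evenly over
    every edge of the candidate path (a list of nodes)."""
    edges = list(zip(candidate, candidate[1:]))
    cost_p = sum(graph[u][v] for u, v in edges)
    cost_s, _ = dijkstra(graph, candidate[0], candidate[-1])
    delta = (cost_p - cost_s + epsilon) / len(edges)
    for u, v in edges:
        # Caveat: this can push weights negative, which plain
        # Dijkstra does not handle on subsequent runs.
        graph[u][v] -= delta
```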
Of course, the problem with this is that wherever path S (or some other path) shares edges with P, you reduce its cost at the same time, so the target you were aiming for moves out from under you! This suggests to me that this is going to require an iterative approach, whereby you work forwards along the path and reduce each weight relative to a shortest path cost that you recompute at each step. This is hugely more expensive, but thankfully shortest path algorithms tend to have nice dynamic programming solutions!
So the modified algorithm looks something like this (assume i = 0 to start):
- Compute the cost of the first i + 1 edges of your candidate path P, which we will call c(p_0...p_i).
- Compute the cost of the shortest path S, which we will call c(S), and the cost of its first i + 1 edges, which we will denote by c(s_0...s_i).
- Reduce edge weight w(p_i) by c(p_0...p_i) - c(s_0...s_i) + epsilon, where epsilon is some vanishingly small positive constant by which you would like your path's cost to end up below c(S).
- Repeat from step 1, increasing i by 1, until you have run out of edges in P. A sketch of this loop in code follows below.
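Here is how that loop might look in Python, again purely as a sketch on top of the same made-up dijkstra helper and graph representation as above:

```python
def iterative_reduction(graph, candidate, epsilon=1e-9):
    """Walk the candidate path edge by edge, recomputing the shortest
    path against the current weights before every update."""
    source, target = candidate[0], candidate[-1]
    prefix_p = 0.0  # c(p_0...p_i), maintained as a running sum
    for i, (u, v) in enumerate(zip(candidate, candidate[1:])):
        prefix_p += graph[u][v]
        # Recompute S against the *current* weights, then take the
        # cost of its first i + 1 edges.
        _, shortest = dijkstra(graph, source, target)
        prefix_s = sum(graph[a][b]
                       for a, b in list(zip(shortest, shortest[1:]))[:i + 1])
        delta = prefix_p - prefix_s + epsilon
        graph[u][v] -= delta  # note: delta may be negative, raising the weight
        prefix_p -= delta     # the prefix cost of P just changed too
```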
When P = S to begin with and epsilon is 0, this should leave the original weights untouched. Otherwise you will reduce by no more than epsilon * |P| beyond the ideal update.
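As a quick sanity check of that property, using the hypothetical helpers sketched above:

```python
graph = {"a": {"b": 1.0, "c": 4.0}, "b": {"c": 1.0}, "c": {}}
cost, shortest = dijkstra(graph, "a", "c")         # (2.0, ['a', 'b', 'c'])
iterative_reduction(graph, shortest, epsilon=0.0)  # P = S, epsilon = 0
assert graph["a"]["b"] == 1.0 and graph["b"]["c"] == 1.0  # untouched
```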
Optimizing this algorithm will require that you figure out how to compute c(s_0...s_{i+1}) from c(s_0...s_i) in an efficient manner, but that is left as an exercise to the reader ;-)
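One small hint, though: as long as S is unchanged from one iteration to the next, extending the prefix cost is just a running sum, something like this (illustrative only):

```python
def extend_prefix(graph, path, prefix_cost, i):
    """c(s_0...s_{i+1}) from c(s_0...s_i): add the weight of the next
    edge, which connects the (i+1)-th node of the path to the (i+2)-th."""
    return prefix_cost + graph[path[i + 1]][path[i + 2]]
```

The interesting part of the exercise is coping with the fact that S itself can change after every weight update.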