A* variants

These are some quick notes listing papers related to A* itself and variants. In particular, here I’m interested in papers that update \(h\) during the A* search, as a background for pruning.

Specifically, our version of pruning increases \(h\) during a single A* search; in fact, the heuristic becomes inadmissible after pruning.

Changing \(h\)

The original A* paper has a proof of optimality. Later papers consider this also with heuristics that change their value over time.

  • The original A* paper (Hart, Nilsson, and Raphael 1968) does not consider a changing heuristic.

    • A later addendum (Hart, Nilsson, and Raphael 1972) removes the need for the heuristic to be consistent in the optimality proof, but does not otherwise change things.
  • Gelperin (1977) considers that \(h\) may depend on the A* search state. Notation: \(\hat h(n, m)\) is the value of \(h\) at \(n\) just before closing (expanding) \(m\), and \(\hat h(n, \overline m)\) is the value of \(h\) at \(n\) just after closing \(m\); the corresponding search states are \(\Sigma(m)\) and \(\Sigma(\overline m)\). The second argument may be omitted:

    When it is neither necessary nor helpful to use this new notation, we will use the older notation with search-state dependence understood.

  • Dechter and Pearl (1983) comment on and extend the proof of Gelperin (1977), but are not specific about \(h\) depending on the state.

  • Somewhat unrelated, a nice paper going over some tricky details regarding A* is Holte (2010).

  • Multiple-path pruning is the technique from Poole and Mackworth (2017) that removes paths going through already-expanded nodes to which an optimal path has been found; see the sketch just below this list.
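
As a point of reference for the pruning discussion above, here is a minimal sketch of multiple-path pruning inside a standard A* loop. The function name `astar_mpp`, the `neighbors(node)` interface returning `(neighbor, weight)` pairs, and the heuristic argument `h` are illustrative assumptions, not Poole and Mackworth's pseudocode; with a consistent \(h\), a node is closed with an optimal \(g\)-value, so any later queue entry for it can be discarded.

```python
import heapq
from itertools import count

def astar_mpp(start, goal, neighbors, h):
    """A* with multiple-path pruning: once a node has been expanded,
    an optimal path to it is known (assuming a consistent h), so any
    further path reaching it is discarded."""
    tie = count()  # tie-breaker so the heap never compares nodes or parents
    queue = [(h(start), next(tie), 0, start, None)]  # (f, tie, g, node, parent)
    closed = {}  # node -> parent on the optimal path found to it
    while queue:
        f, _, g, node, par = heapq.heappop(queue)
        if node in closed:
            continue  # multiple-path pruning: node was already expanded
        closed[node] = par
        if node == goal:
            return g, closed
        for nbr, w in neighbors(node):
            if nbr in closed:
                continue  # also skip edges leading back into expanded nodes
            heapq.heappush(queue, (g + w + h(nbr), next(tie), g + w, nbr, node))
    return None, closed
```

For a dict-of-dicts graph `G`, `neighbors` could simply be `lambda v: G[v].items()`.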

Variants

There are some variants of A* for repeated searches that incrementally update \(h\) between iterations. Note that \(h\) is assumed to be admissible, and usually also consistent. Incremental A* refers to the entire class of A* variants that reuse information between runs.

The Wikipedia page on incremental heuristic search has more information.

D* (Dynamic A*) (Stentz 1997, 1995)
Setting: a robot is navigating and discovers new obstacles along the way, which leads to increasing (or possibly decreasing) edge weights. States on the OPEN list are tagged as RAISE or LOWER. \(h\) is assumed to be the distance to the goal in the graph explored so far.
Adaptive A* (Koenig and Likhachev 2005)
Setting: repeatedly find shortest paths to a fixed set of goal states, but from varying start states. Input: a consistent heuristic \(h\). (I’m not sure where/why consistency is needed.)

Intuitively, it uses the fact that after a search from \(s\), any state close to \(s\) must have a goal distance that is not much smaller than the goal distance of \(s\) itself.

The main insight of Adaptive A* is the following. After running one iteration from \(s\) to \(t\), we know \(h^*(s)\), the distance from \(s\) to \(t\), and for each expanded state \(u\) we know \(g(u)\), the length of the shortest path from \(s\) to \(u\) found so far (so \(g(u) \geq g^*(u)\)). Write \(g^*\) for the true distance from \(s\), \(h^*\) for the true distance to \(t\), and \(f^*\) for their sum. Since the cheapest path from \(s\) to \(t\) through \(u\) has cost \(f^*(u)\), the triangle inequality gives \(h^*(s) \leq f^*(u)\). We get

\begin{equation} h^*(s) \leq f^*(u) = g^*(u) + h^*(u) \leq g(u) + h^*(u). \end{equation}

Rewriting gives \(h^*(u) \geq h^*(s) - g(u)\), which we can use as a new value for \(h\) that is possibly better than the user-provided value.

Edge weights may increase over time.
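
A minimal sketch of this update in Python (the dictionary-based interface below is an assumption for illustration, not the paper's API): after one search, every expanded state \(u\) has its heuristic raised to \(\max(h(u), h^*(s) - g(u))\), where \(h^*(s)\) is the cost of the solution just found.

```python
def adaptive_astar_update(h, g, expanded, solution_cost):
    """Adaptive A*-style heuristic update after one search from s to t.

    h: dict state -> current (admissible) heuristic value
    g: dict state -> cost of the best path from s found in this search
    expanded: set of states closed in this search
    solution_cost: h*(s), the cost of the solution just found

    For every expanded u we have h*(u) >= h*(s) - g(u), so raising h(u)
    to this value keeps the heuristic admissible for later searches
    towards the same goal set.
    """
    for u in expanded:
        h[u] = max(h[u], solution_cost - g[u])
    return h


# Toy usage: states "a" and "b" were expanded with g-values 0 and 2,
# and the search found a solution of cost 7.
h = adaptive_astar_update({"a": 3, "b": 4}, {"a": 0, "b": 2}, {"a", "b"}, 7)
assert h == {"a": 7, "b": 5}
```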

Real-Time Adaptive A* (RTAA*) (Koenig and Likhachev 2006)
Same setting as Adaptive A*, but now the model is a robot searching a grid, with a limit on the lookahead and on the movements it may make.

They show that the heuristic remains consistent even after edge weights increase.
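
As I understand the update rule (stated here as my reading of the paper, not a quote): when the lookahead bound is reached, let \(\bar s\) be the state A* was about to expand next; then every expanded state \(u\) gets

\begin{equation} h(u) \leftarrow f(\bar s) - g(u) = g(\bar s) + h(\bar s) - g(u), \end{equation}

which has the same form as the Adaptive A* update, with the solution cost replaced by the \(f\)-value of the best node still on OPEN.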

Generalized Adaptive A* (GAA*) (Sun, Koenig, and Yeoh 2008)
Can additionally handle decreases in edge weights and changes of the goal state. Input: a consistent heuristic \(H(s, t)\) defined for any pair of states, which additionally satisfies the more general triangle inequality.
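
Spelled out (my reading of this requirement, stated as an assumption): for all states \(s\), \(s'\), \(s''\),

\begin{equation} H(s, s'') \leq H(s, s') + H(s', s''). \end{equation}
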
D* Lite (Koenig and Likhachev 2002)
Again models a robot moving around.

Conclusion

While there are many methods that (implicitly) modify \(h\), their setting usually involves changing surroundings or repeated searches, or is constrained to a single moving robot. All of these differ from our case, where we increase \(h\) during a single search for a single shortest path. Further, most variants maintain more complicated state to handle the updates than a single A* search does.

References

Dechter, Rina, and Judea Pearl. 1983. “The Optimality of A* Revisited.” In AAAI, 83:95–99.
Gelperin, David. 1977. “On the Optimality of A*.” Artificial Intelligence 8 (1): 69–76. https://doi.org/10.1016/0004-3702(77)90005-4.
Hart, Peter, Nils Nilsson, and Bertram Raphael. 1968. “A Formal Basis for the Heuristic Determination of Minimum Cost Paths.” IEEE Transactions on Systems Science and Cybernetics 4 (2): 100–107. https://doi.org/10.1109/tssc.1968.300136.
———. 1972. “Correction to ‘A Formal Basis for the Heuristic Determination of Minimum Cost Paths’.” SIGART Newsletter 37: 28–29.
Holte, Robert C. 2010. “Common Misconceptions Concerning Heuristic Search.” In Proceedings of the Third Annual Symposium on Combinatorial Search, SOCS 2010, Stone Mountain, Atlanta, Georgia, USA, July 8-10, 2010, edited by Ariel Felner and Nathan R. Sturtevant. AAAI Press. http://aaai.org/ocs/index.php/SOCS/SOCS10/paper/view/2073.
Koenig, Sven, and Maxim Likhachev. 2002. “D* Lite.” In Eighteenth National Conference on Artificial Intelligence, 476–83. Edmonton, Alberta, Canada: American Association for Artificial Intelligence.
———. 2005. “Adaptive A*.” Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems - AAMAS ’05. https://doi.org/10.1145/1082473.1082748.
———. 2006. “Real-Time Adaptive A*.” Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems - AAMAS ’06. https://doi.org/10.1145/1160633.1160682.
Poole, David L., and Alan K. Mackworth. 2017. Artificial Intelligence: Foundations of Computational Agents. 2nd ed. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108164085.
Stentz, Anthony. 1995. “The Focussed D* Algorithm for Real-Time Replanning.” In Proceedings of the 14th International Joint Conference on Artificial Intelligence - Volume 2, 1652–59. IJCAI ’95. Montreal, Quebec, Canada: Morgan Kaufmann Publishers Inc.
———. 1997. “Optimal and Efficient Path Planning for Partially Known Environments.” Intelligent Unmanned Ground Vehicles, 203–20. https://doi.org/10.1007/978-1-4615-6325-9_11.
Sun, Xiaoxun, Sven Koenig, and William Yeoh. 2008. “Generalized Adaptive A*.” In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1, 469–76. AAMAS ’08. Estoril, Portugal: International Foundation for Autonomous Agents and Multiagent Systems.