Question

I just started trying to use the minimax/negamax algorithm, and I came up with an idea that sounds good to me. But since nobody seems to use it, the logic may be flawed.

Why don't we do this:

Create a tree with depth = x, figure out which move to make, and wait for our opponent. After the opponent moves, we can take the subtree of moves we already evaluated and continue building it deeper, reusing the old nodes. We could take the already-evaluated values of those nodes and weigh them together with the new values from the deeper nodes.

Although the new values might not be as exact as with the usual method, we could search much deeper and profit from that.

I apologize for my badly written and unstructured question, but I hope you get the idea.

Solution

I think what you're missing here is how minimax works. Minimax enumerates all of the possibilities down to a specified depth D, assigns a score to the nodes (game states) at depth D, and then, moving back up the tree, returns the MAX or MIN of the children at each level, depending on whether the maximizing or the minimizing player is to move there.
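As a concrete illustration, here is a minimal minimax sketch over a toy game tree stored as nested lists, where leaves hold static evaluation scores (the explicit-tree representation is illustrative only; a real engine generates moves on the fly):

```python
def minimax(node, maximizing):
    # Leaves hold static evaluation scores; internal nodes are lists of
    # child subtrees. The players alternate between taking the MAX and
    # the MIN of the children as we move back up the tree.
    if isinstance(node, int):
        return node
    if maximizing:
        return max(minimax(child, False) for child in node)
    return min(minimax(child, True) for child in node)

# The maximizing player picks the branch whose worst-case (minimizing)
# reply is best: max(min(3, 5), min(2, 9)) = max(3, 2) = 3.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # → 3
```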

Your proposal of doing it top-down would mean assigning scores to nodes at shallower depths and keeping them, which results in a poorer evaluation.

OTHER TIPS

The idea is used, but in a different way. Instead of keeping the whole search tree around, which would be prohibitively expensive in memory, evaluation scores are stored in a transposition table and reused. This saves time during iterative deepening, since many positions will already have cached scores from previous searches. So reusing old search results can speed up the intermediate searches and improve move ordering, but the leaf nodes will still need to be evaluated at whatever terminal search depth the engine is using.
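A sketch of that caching idea, using negamax on the subtraction game Nim (take 1–3 stones; whoever takes the last stone wins) as a stand-in for a real engine. The dictionary `tt` plays the role of a transposition table keyed by (position, remaining depth), so positions reached again in a later, deeper iterative-deepening pass are not re-searched (the game, the key scheme, and the horizon score of 0 are all simplifying assumptions):

```python
tt = {}  # transposition table: (stones, depth) -> score

def negamax(stones, depth):
    # Terminal: the player to move has no stones left, so the previous
    # player took the last stone and won; this is a loss (-1).
    if stones == 0:
        return -1
    # Search horizon: return a neutral heuristic score.
    if depth == 0:
        return 0
    # Reuse a cached score from any earlier search of this position.
    if (stones, depth) in tt:
        return tt[(stones, depth)]
    score = max(-negamax(stones - take, depth - 1)
                for take in (1, 2, 3) if take <= stones)
    tt[(stones, depth)] = score
    return score

# Iterative deepening: each deeper pass reuses entries cached by the
# shallower passes instead of re-searching those subtrees.
for d in (2, 4, 8):
    result = negamax(10, d)
```

Note that shallow searches return 0 here (the win is beyond the horizon), while the deeper pass finds the forced win — exactly the point of the answer: cached scores speed things up, but the decisive evaluation still happens at the terminal search depth.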

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow