Question

I'm writing an Othello engine using minimax with alpha-beta pruning. It's working OK, but I've found the following problem:

When the algorithm finds that a position is lost, it returns -INFINITY as expected, but in this case I'm not able to track the 'best' move... the position is already lost, but it should still return a valid move (preferably one that survives longer, as good chess engines do).

Here is the code:

private float minimax(OthelloBoard board, OthelloMove best, float alpha, float beta, int depth)
{             
    OthelloMove garbage = new OthelloMove();             
    int currentPlayer = board.getCurrentPlayer();

    if (board.checkEnd())
    {                        
        int bd = board.countDiscs(OthelloBoard.BLACK);
        int wd = board.countDiscs(OthelloBoard.WHITE);

        if ((bd > wd) && currentPlayer == OthelloBoard.BLACK)                
            return INFINITY;
        else if ((bd < wd) && currentPlayer == OthelloBoard.BLACK)                           
            return -INFINITY;            
        else if ((bd > wd) && currentPlayer == OthelloBoard.WHITE)                            
            return -INFINITY;            
        else if ((bd < wd) && currentPlayer == OthelloBoard.WHITE)                            
            return INFINITY;            
        else                             
            return 0.0f;            
    }
    //search until the end? (true during end game phase)
    if (!solveTillEnd )
    {
        if (depth == maxDepth)
            return OthelloHeuristics.eval(currentPlayer, board);
    }

    ArrayList<OthelloMove> moves = board.getAllMoves(currentPlayer);             

    for (OthelloMove mv : moves)
    {                        
        board.makeMove(mv);            
        float score = - minimax(board, garbage, -beta,  -alpha, depth + 1);           
        board.undoMove(mv);             

        if(score > alpha)
        {  
            //Set Best move here
            alpha = score;                
            best.setFlipSquares(mv.getFlipSquares());
            best.setIdx(mv.getIdx());        
            best.setPlayer(mv.getPlayer());                              
        }

        if (alpha >= beta)
            break;                

    }                
    return alpha;
}

I call it using:

AI ai = new AI(board, maxDepth, solveTillEnd);

//create empty (invalid) move to hold best move
OthelloMove bestMove = new OthelloMove();
ai.bestFound = bestMove;
ai.minimax(board, bestMove, -INFINITY, INFINITY, 0);

//dispatch a Thread
 new Thread(ai).start();
//wait for thread to finish

OthelloMove best = ai.bestFound();

When a lost position is searched (imagine it's lost 10 moves later, for example), the best variable above is still equal to the empty, invalid move passed in as an argument... why??

Thanks for any help!


Solution

Your problem is that you're using -INFINITY and +INFINITY as win/loss scores. Because the search starts with alpha = -INFINITY, a child score of exactly -INFINITY can never satisfy score > alpha, so best is never updated in a lost position. Use win/loss scores that are higher/lower than any other positional evaluation score, but not equal to your infinity values; this guarantees that a move will be chosen even in positions that are hopelessly lost.
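
As an illustration, here is a minimal sketch of that idea for the terminal branch, assuming INFINITY is simply a large float constant. WIN_SCORE is a hypothetical constant chosen to be larger than anything OthelloHeuristics.eval() can return, but smaller than INFINITY; adding the disc margin also nudges the search toward bigger wins and smaller losses:

private static final float INFINITY  = 1000000.0f;
private static final float WIN_SCORE = 100000.0f; // > any heuristic score, < INFINITY

// could replace the body of the board.checkEnd() branch
private float terminalScore(OthelloBoard board, int currentPlayer)
{
    int bd = board.countDiscs(OthelloBoard.BLACK);
    int wd = board.countDiscs(OthelloBoard.WHITE);
    int margin = (currentPlayer == OthelloBoard.BLACK) ? bd - wd : wd - bd;

    if (margin > 0)
        return WIN_SCORE + margin;   // won: bigger margins score higher
    if (margin < 0)
        return -WIN_SCORE + margin;  // lost: narrow losses score higher than heavy ones
    return 0.0f;                     // draw
}

Because -WIN_SCORE + margin is strictly greater than -INFINITY, the score > alpha test at the root can still succeed, so best gets filled in even when the position is lost.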

OTHER TIPS

It's been a long time since I implemented minimax, so I might be wrong, but it seems to me that your code does not update the best variable when it encounters a winning or losing move (this happens in the board.checkEnd() branch at the top of your method).

Also, if you want your algorithm to try to win by as much as possible, or to lose by as little as possible when it can't win, I suggest you update your eval function. In a win situation it should return a large value (larger than for any non-win situation), and the larger the margin of victory, the larger the value. In a lose situation it should return a large negative value (lower than for any non-lose situation), and the larger the margin of defeat, the lower the value.

It seems to me (without trying it out) that if you update your eval function that way and skip the if (board.checkEnd()) check altogether, your algorithm should work fine (unless there are other problems with it). Good luck!
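
A rough sketch of that suggestion follows. The positionalScore() helper is a hypothetical stand-in for your existing midgame heuristic, and the WIN_BONUS constant and its size are likewise assumptions:

public static float eval(int player, OthelloBoard board)
{
    if (board.checkEnd())
    {
        int opponent = (player == OthelloBoard.BLACK) ? OthelloBoard.WHITE : OthelloBoard.BLACK;
        int margin = board.countDiscs(player) - board.countDiscs(opponent);
        float WIN_BONUS = 100000.0f; // larger than any positional score below

        if (margin > 0)
            return WIN_BONUS + margin;   // win: scale with margin of victory
        if (margin < 0)
            return -WIN_BONUS + margin;  // loss: scale with margin of defeat
        return 0.0f;                     // draw
    }
    return positionalScore(player, board); // existing midgame heuristic (hypothetical name)
}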

If you can detect that a position is truly won or lost, that implies you are solving the endgame. In that case, your evaluation function should return the final score of the game (e.g. 64 for a total victory, 31 for a narrow loss), since this can be calculated exactly, unlike the estimates you rely on in the midgame.
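
Here is a sketch of such an exact endgame score. The example above counts the winner's discs; the version below returns the signed disc difference instead, which is an assumption on my part, but it fits the negamax sign flip (float score = -minimax(...)) already used in your search:

// exact terminal value for the solve-till-end phase, from currentPlayer's point of view
private float exactEndgameScore(OthelloBoard board, int currentPlayer)
{
    int bd = board.countDiscs(OthelloBoard.BLACK);
    int wd = board.countDiscs(OthelloBoard.WHITE);
    return (currentPlayer == OthelloBoard.BLACK) ? (bd - wd) : (wd - bd);
}

With this, a position lost by 2 discs scores -2 and one lost by 20 scores -20, so the search naturally prefers the move that loses by the least.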

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow