Question

I'm writing an Othello engine using minimax with alpha-beta pruning. It's working well, but I ran into the following problem:

When the algorithm finds that a position is lost, it returns -INFINITY as expected, but in that case I'm not able to keep track of the "best" move... The position is already lost, but it should still return a valid move (preferably a move that survives longer, since good chess engines do that). Here is the code:

private float minimax(OthelloBoard board, OthelloMove best, float alpha, float beta, int depth)
{             
    OthelloMove garbage = new OthelloMove();             
    int currentPlayer = board.getCurrentPlayer();

    if (board.checkEnd())
    {                        
        int bd = board.countDiscs(OthelloBoard.BLACK);
        int wd = board.countDiscs(OthelloBoard.WHITE);

        if ((bd > wd) && currentPlayer == OthelloBoard.BLACK)                
            return INFINITY;
        else if ((bd < wd) && currentPlayer == OthelloBoard.BLACK)                           
            return -INFINITY;            
        else if ((bd > wd) && currentPlayer == OthelloBoard.WHITE)                            
            return -INFINITY;            
        else if ((bd < wd) && currentPlayer == OthelloBoard.WHITE)                            
            return INFINITY;            
        else                             
            return 0.0f;            
    }
    //search until the end? (true during end game phase)
    if (!solveTillEnd )
    {
        if (depth == maxDepth)
            return OthelloHeuristics.eval(currentPlayer, board);
    }

    ArrayList<OthelloMove> moves = board.getAllMoves(currentPlayer);             

    for (OthelloMove mv : moves)
    {                        
        board.makeMove(mv);            
        float score = - minimax(board, garbage, -beta,  -alpha, depth + 1);           
        board.undoMove(mv);             

        if(score > alpha)
        {  
            //Set Best move here
            alpha = score;                
            best.setFlipSquares(mv.getFlipSquares());
            best.setIdx(mv.getIdx());        
            best.setPlayer(mv.getPlayer());                              
        }

        if (alpha >= beta)
            break;                

    }                
    return alpha;
}

I call it using:

AI ai = new AI(board, maxDepth, solveTillEnd);

//create empty (invalid) move to hold best move
OthelloMove bestMove = new OthelloMove();
ai.bestFound = bestMove;
ai.minimax(board, bestMove, -INFINITY, INFINITY, 0);

//dispatch a Thread
new Thread(ai).start();
//wait for thread to finish

OthelloMove best = ai.bestFound();

When a lost position is searched (imagine it is lost 10 moves ahead, for example), the best variable above ends up equal to the empty invalid move passed as an argument... why??

Thanks for any help!


Solution

Your problem is that you're using -INFINITY and +INFINITY as win/loss scores. When every move in a lost position comes back as exactly -INFINITY, and alpha is already -INFINITY, the test score > alpha never succeeds, so best is never written. You should have scores for win/loss that are higher/lower than any other positional evaluation score, but not equal to your infinity values. This will guarantee that a move will be chosen even in positions that are hopelessly lost.
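
A minimal sketch of that idea, placed inside the AI class (the constant names and the depth adjustment are my additions, not part of the asker's code):

// Hypothetical constants -- names are illustrative, not taken from the question.
static final float INFINITY  = Float.MAX_VALUE;  // used only as the alpha-beta window bounds
static final float WIN_SCORE = 1000000.0f;       // larger than any heuristic value,
                                                 // but strictly smaller than INFINITY

// Terminal score from the side to move's point of view.
// Because the result stays inside (-INFINITY, +INFINITY), "score > alpha"
// can still become true in the parent node, so best gets written.
// Subtracting depth prefers quick wins and slow losses ("survive longer").
static float terminalScore(int discDiff, int depth) {
    if (discDiff > 0) return  WIN_SCORE - depth;  // win: the sooner, the better
    if (discDiff < 0) return -WIN_SCORE + depth;  // loss: the later, the better
    return 0.0f;                                  // draw
}

With scores like these, a lost position still propagates a finite value up the tree, so a real move is recorded, and the depth term makes the engine pick the line that delays the loss the longest.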

Other tips

It's been a long time since I implemented minimax, so I might be wrong, but it seems to me that when your code encounters a winning or losing position it does not update the best variable (this happens in the if (board.checkEnd()) block at the top of your method).

Also, if you want your algorithm to try to win by as much as possible, or to lose by as little as possible if it can't win, I suggest you update your eval function. In a win situation it should return a large value (larger than for any non-win situation), and the more you win by, the larger the value. In a loss situation it should return a large negative value (smaller than for any non-loss situation), and the more you lose by, the smaller the value.

It seems to me (without trying it out) that if you update your eval function that way and skip the if (board.checkEnd()) check altogether, your algorithm should work fine (unless there are other problems with it). Good luck!
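
A rough sketch of that kind of eval, assuming there is a mid-game heuristic to fall back on (BASE and normalHeuristic are placeholder names, not the asker's actual OthelloHeuristics API):

// Sketch of an eval that folds the game-over case into the normal score range.
static final float BASE = 10000.0f;  // assumed to exceed any mid-game heuristic value

static float eval(int player, OthelloBoard board) {
    int opponent = (player == OthelloBoard.BLACK) ? OthelloBoard.WHITE : OthelloBoard.BLACK;
    int diff = board.countDiscs(player) - board.countDiscs(opponent);

    if (board.checkEnd()) {
        if (diff > 0) return  BASE + diff;  // win: bigger margin => bigger score
        if (diff < 0) return -BASE + diff;  // loss: bigger margin => more negative
        return 0.0f;                        // draw
    }
    return normalHeuristic(player, board);  // placeholder for the existing mid-game eval
}

The important property is that every game-over value stays strictly inside the (-INFINITY, +INFINITY) window, so the score > alpha update in minimax can still record a best move.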

If you can detect that a position is truly won or lost, then that implies you are solving the endgame. In this case, your evaluation function should be returning the final score of the game (e.g. 64 for a total victory, 31 for a narrow loss), since this can be calculated accurately, unlike the estimates that you will evaluate in the midgame.
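
A minimal sketch of that, reusing only the methods visible in the question (the method name endgameScore is mine):

// Exact end-of-game score from the current player's point of view,
// using only methods already shown in the question (countDiscs, BLACK, WHITE).
static float endgameScore(OthelloBoard board, int currentPlayer) {
    int bd = board.countDiscs(OthelloBoard.BLACK);
    int wd = board.countDiscs(OthelloBoard.WHITE);
    // Disc differential in the range -64..+64, well inside the alpha-beta window,
    // so even in lost positions the move that loses by the least is still picked.
    return (currentPlayer == OthelloBoard.BLACK) ? (bd - wd) : (wd - bd);
}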
