Question

I'm writing an Othello engine using Minimax with alpha-beta pruning. It works well, but I've run into the following problem:

When the algorithm finds that a position is lost, it returns -INFINITY as expected, but in that case I'm unable to track the "best" move... The position is already lost, but it still has to return a valid move anyway (preferably a move that survives longer, as good chess engines do).

Here is the code:

private float minimax(OthelloBoard board, OthelloMove best, float alpha, float beta, int depth)
{             
    OthelloMove garbage = new OthelloMove();             
    int currentPlayer = board.getCurrentPlayer();

    if (board.checkEnd())
    {                        
        int bd = board.countDiscs(OthelloBoard.BLACK);
        int wd = board.countDiscs(OthelloBoard.WHITE);

        if ((bd > wd) && currentPlayer == OthelloBoard.BLACK)                
            return INFINITY;
        else if ((bd < wd) && currentPlayer == OthelloBoard.BLACK)                           
            return -INFINITY;            
        else if ((bd > wd) && currentPlayer == OthelloBoard.WHITE)                            
            return -INFINITY;            
        else if ((bd < wd) && currentPlayer == OthelloBoard.WHITE)                            
            return INFINITY;            
        else                             
            return 0.0f;            
    }
    //search until the end? (true during end game phase)
    if (!solveTillEnd )
    {
        if (depth == maxDepth)
            return OthelloHeuristics.eval(currentPlayer, board);
    }

    ArrayList<OthelloMove> moves = board.getAllMoves(currentPlayer);             

    for (OthelloMove mv : moves)
    {                        
        board.makeMove(mv);            
        float score = -minimax(board, garbage, -beta, -alpha, depth + 1);
        board.undoMove(mv);             

        if(score > alpha)
        {  
            //Set Best move here
            alpha = score;                
            best.setFlipSquares(mv.getFlipSquares());
            best.setIdx(mv.getIdx());        
            best.setPlayer(mv.getPlayer());                              
        }

        if (alpha >= beta)
            break;                

    }                
    return alpha;
}

I call it like this:

AI ai = new AI(board, maxDepth, solveTillEnd);

//create empty (invalid) move to hold best move
OthelloMove bestMove = new OthelloMove();
ai.bestFound = bestMove;
ai.minimax(board, bestMove, -INFINITY, INFINITY, 0);

//dispatch a Thread
new Thread(ai).start();
//wait for thread to finish

OthelloMove best = ai.bestFound;

When a lost position is searched (imagine it is lost 10 moves down the line, for example), the best variable above ends up equal to the empty invalid move passed in as an argument... Why??

Thanks for any help!


Solution

Your problem is that you're using -INFINITY and +INFINITY as win/loss scores. You should use win/loss scores that are higher/lower than any other positional evaluation score, but not equal to your infinity values. As it stands, every move in a lost position scores exactly -INFINITY, which can never satisfy score > alpha when alpha itself starts at -INFINITY, so best is never written. Bounded win/loss scores guarantee that a move will be chosen even in positions that are hopelessly lost.
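
For instance (a minimal sketch reusing the question's OthelloBoard API; terminalScore, WIN_SCORE and the concrete values are illustrative, not names from the original code):

private static final float INFINITY  = 1.0e9f;
private static final float WIN_SCORE = 1.0e6f;   // dominates any heuristic value

// Replacement for the terminal branch of minimax(), scored from
// currentPlayer's point of view (negamax convention).
private float terminalScore(OthelloBoard board, int currentPlayer)
{
    int bd = board.countDiscs(OthelloBoard.BLACK);
    int wd = board.countDiscs(OthelloBoard.WHITE);

    if (bd == wd)
        return 0.0f;

    boolean blackWins = (bd > wd);
    boolean iAmBlack  = (currentPlayer == OthelloBoard.BLACK);
    return (blackWins == iAmBlack) ? WIN_SCORE : -WIN_SCORE;
}

Because -WIN_SCORE is strictly greater than the initial alpha of -INFINITY, score > alpha can still fire at the root, so best gets filled in even when every line loses.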

Other tips

It's been a long time since I implemented minimax, so I might be wrong, but it seems to me that when your code encounters a winning or losing position it does not update the best variable (this happens in the board.checkEnd() branch at the top of your method).

Also, if you want your algorithm to try to win by as much as possible, or to lose by as little as possible when it can't win, I suggest you update your eval function. In a won position it should return a large value (larger than for any non-winning position), and the bigger the win, the larger the value. In a lost position it should return a large negative value (lower than for any non-losing position), and the bigger the loss, the lower the value.

It seems to me (without trying it out) that if you update your eval function that way and skip the if (board.checkEnd()) check altogether, your algorithm should work fine (unless there are other problems with it). Good luck!
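
A sketch of that idea, reusing the question's OthelloBoard API (WIN_BASE and positionalScore are illustrative stand-ins, not names from the original code):

private static final float WIN_BASE = 1.0e6f;   // dominates any midgame score

// eval() that folds win/loss detection in, scaled by the disc margin,
// so bigger wins score higher and narrow losses score less badly.
static float eval(int player, OthelloBoard board)
{
    int bd = board.countDiscs(OthelloBoard.BLACK);
    int wd = board.countDiscs(OthelloBoard.WHITE);
    int margin = (player == OthelloBoard.BLACK) ? (bd - wd) : (wd - bd);

    if (board.checkEnd())
    {
        if (margin > 0) return  WIN_BASE + margin;   // bigger win   -> higher score
        if (margin < 0) return -WIN_BASE + margin;   // smaller loss -> higher score
        return 0.0f;                                 // draw
    }

    // Midgame: fall back to the existing positional terms.
    // positionalScore() stands in for whatever the current heuristic computes.
    return positionalScore(player, board);
}

With terminal positions folded into eval this way, no move ever scores an unbeatable -INFINITY, so the score > alpha test in the search loop can still pick out the least-bad move.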

If you can detect that a position is truly won or lost, then that implies you are solving the endgame. In this case, your evaluation function should be returning the final score of the game (e.g. 64 for a total victory, 31 for a narrow loss), since this can be calculated accurately, unlike the estimates that you will evaluate in the midgame.
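
In the question's negamax framing, that could be a terminal return like the sketch below; using the disc differential rather than the raw count is an adaptation that keeps the negamax sign convention (a parent's score is the negation of its child's):

// Exact terminal scoring for the solveTillEnd phase, from the side to
// move's perspective: +64 is a total wipeout, -2 a narrow loss that is
// still preferable to -20.
if (board.checkEnd())
{
    int bd = board.countDiscs(OthelloBoard.BLACK);
    int wd = board.countDiscs(OthelloBoard.WHITE);
    return (currentPlayer == OthelloBoard.BLACK) ? (float) (bd - wd)
                                                 : (float) (wd - bd);
}

Every such value lies strictly between -INFINITY and +INFINITY, so the root loop always finds some move satisfying score > alpha.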

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow