Genetic algorithms use random sampling to create generations of candidate solutions. For many kinds of problems, genetic algorithms can get "stuck" on local optima, and if other local optima (or the global optimum) are too "far" away, operations like crossover and mutation may not provide enough variation to get "unstuck". If you are consistently getting different solutions using the same parameters, then you've come across a valuable piece of information: either (a) your parameters are making your populations too homogeneous (lacking variation), so each run settles into whatever local optimum it happens to reach first, or (b) your problem isn't well suited for genetic algorithms.
Try increasing your mutation rate to extremes and running the algorithm for many more generations; then, instead of looking at the final population (whose make-up will be fickle with a high mutation rate), look at the total lifetime "lived" by the leading candidates, i.e., how many generations each candidate spent at the top.
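To make that concrete, here is a minimal sketch of the idea, using a toy "count the 1-bits" problem (all names and parameters below are illustrative, not from your setup): a deliberately high mutation rate, and a tally of how many generations each genome spent as the population leader.

```python
import random

# Toy problem: maximize the number of 1-bits in a 20-bit genome.
# These names and constants are hypothetical, for illustration only.
GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 200
MUTATION_RATE = 0.2  # deliberately extreme, to help escape local optima

def fitness(genome):
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUTATION_RATE.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    # Single-point crossover.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
age = {}  # generations each distinct genome spent as the leader

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    leader = tuple(population[0])
    age[leader] = age.get(leader, 0) + 1
    survivors = population[:POP_SIZE // 2]  # "kill" the least-fit half
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

# Report the candidate that led for the most generations,
# rather than trusting the fickle final population.
best = max(age, key=age.get)
print(list(best), fitness(list(best)))
```

The point of the `age` tally is that with an extreme mutation rate the final population churns constantly, but a genuinely good candidate keeps reappearing at the top, so its accumulated "lifetime" is a more stable signal than any single generation's snapshot.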
However, your question is a bit puzzling to begin with. You're asking, "Which one is the best answer?" Well, you must have defined a fitness criterion for "killing" the least-fit candidates from generation to generation, no? Simply compute the fitness of each answer to see which is considered more "fit". If both (or all) answers are equally fit, then maybe there isn't a single solution to your problem!
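In code, "simply compute the fitness of each answer" is just a matter of running your selection criterion over the final candidates. A hedged sketch, assuming a toy bit-counting fitness and two hypothetical final answers:

```python
# Reuse the same fitness criterion you already use for selection.
# This toy criterion and these answers are hypothetical examples.
def fitness(genome):
    return sum(genome)  # count of 1-bits

answer_a = [1, 1, 1, 0, 1]
answer_b = [1, 1, 0, 1, 1]

if fitness(answer_a) == fitness(answer_b):
    # Equally fit answers suggest the problem has multiple optima.
    print("equally fit -- the problem may admit multiple optima")
else:
    best = max([answer_a, answer_b], key=fitness)
    print("fitter answer:", best)
# -> equally fit -- the problem may admit multiple optima
```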