Question

I am new to machine learning and Encog overall, but I would have expected Encog to at least give consistent results from the examples to help me learn about Encog more easily. For me, Encog gives different directional results each time it is run.

Can anyone help me better understand why this is? Below is some modified sample code that's being used.

Direction correct:10/25

Direction correct:8/25

Direction correct:6/25

...

    public class MainPredict {
        public static void main(String[] args) {

            Co.println("--> Main Predict");

            final MarketLoader marketLoader = new YahooFinanceLoader();
            final MarketMLDataSet marketDataSet = new MarketMLDataSet(marketLoader, Config.INPUT_WINDOW, Config.PREDICT_WINDOW);
            final MarketDataDescription marketDataDescription = new MarketDataDescription(Config.TICKER, MarketDataType.adjusted_close, true, true);
            marketDataSet.addDescription(marketDataDescription);

            Calendar end = new GregorianCalendar();   // today
            Calendar begin = (Calendar) end.clone();
            begin.add(Calendar.DATE, -60);            // shift both endpoints back 60 days
            end.add(Calendar.DATE, -60);
            begin.add(Calendar.YEAR, -2);             // begin two years before end

            marketDataSet.load(begin.getTime(), end.getTime());
            marketDataSet.generate();

            BasicNetwork basicNetwork = EncogUtility.simpleFeedForward(marketDataSet.getInputSize(), Config.HIDDEN1_COUNT, Config.HIDDEN2_COUNT, marketDataSet.getIdealSize(), true);

            ResilientPropagation resilientPropagation = new ResilientPropagation(basicNetwork, marketDataSet);
            resilientPropagation.setRPROPType(RPROPType.iRPROPp);

    //      EncogUtility.trainToError(resilientPropagation, 0.00008);
            EncogUtility.trainConsole(basicNetwork, marketDataSet, 3);

            System.out.println("Final Error: " + basicNetwork.calculateError(marketDataSet));

            MarketEvaluate.evaluate(basicNetwork);

            Encog.getInstance().shutdown();
        }
    }

Solution

It's pretty common for neural network weights to be initialized to random values, which pretty much trashes determinacy right up front. So to have repeatable results, you'd need to save a particular instance of your network whose random initial weights you liked, and then load that into other runs as a starting point.
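The effect can be illustrated with plain Java, independent of Encog: a weight vector filled from an unseeded `Random` differs on every run, while fixing the seed reproduces the same vector each time. (The `initWeights` helper below is just an illustration of the idea, not Encog API.)

```java
import java.util.Arrays;
import java.util.Random;

public class SeedDemo {
    // Hypothetical stand-in for a network's weight initialization:
    // fills a weight vector with uniform random values in [-1, 1).
    static double[] initWeights(int n, long seed) {
        Random rng = new Random(seed);
        double[] w = new double[n];
        for (int i = 0; i < n; i++) {
            w[i] = rng.nextDouble() * 2 - 1;
        }
        return w;
    }

    public static void main(String[] args) {
        double[] a = initWeights(5, 42L);
        double[] b = initWeights(5, 42L); // same seed -> identical weights
        double[] c = initWeights(5, 7L);  // different seed -> different weights
        System.out.println(Arrays.equals(a, b)); // true
        System.out.println(Arrays.equals(a, c)); // false
    }
}
```

Two networks starting from different weight vectors will follow different training trajectories, which is why the directional accuracy varies between runs even on identical data.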

In this case, basicNetwork would be the one to save (perhaps with createPersistor() to serialize to XML), then reload each time you later wanted to reset it, rather than constructing a fresh one from scratch.
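If you are on Encog 3.x (where the older `createPersistor()` approach was superseded), the save/reload cycle can be sketched with `EncogDirectoryPersistence`; this assumes the Encog jar is on the classpath and `basicNetwork` is the network from the question's code:

```java
import java.io.File;
import org.encog.neural.networks.BasicNetwork;
import org.encog.persist.EncogDirectoryPersistence;

// Save the freshly constructed network once, while its random
// starting weights are the ones you want to keep.
File file = new File("basicNetwork.eg");
EncogDirectoryPersistence.saveObject(file, basicNetwork);

// On later runs, load the saved network instead of constructing a
// new one, so every experiment starts from identical weights.
BasicNetwork restored = (BasicNetwork) EncogDirectoryPersistence.loadObject(file);
```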

Another test you could try is to use basicNetwork.clone(), then run your experiment on both networks and see how the results compare.


Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow