Question

Following this tutorial, I have a question about the evaluation part in:

# test the model
n_batches = int(mnist.test.num_examples/batch_size)
total_correct_preds = 0
for i in range(n_batches):
    X_batch, Y_batch = mnist.test.next_batch(batch_size)
    _, loss_batch, logits_batch = sess.run([optimizer, loss, logits], feed_dict={X: X_batch, Y:Y_batch}) 
    preds = tf.nn.softmax(logits_batch)
    correct_preds = tf.equal(tf.argmax(preds, 1), tf.argmax(Y_batch, 1))
    accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32)) # need numpy.count_nonzero(boolarr) :(
    total_correct_preds += sess.run(accuracy)   

print('Accuracy {0}'.format(total_correct_preds/mnist.test.num_examples))

Note that this is done on the test set, so the goal is purely to obtain the accuracy of a previously trained model. However, isn't running the line:

_, loss_batch, logits_batch = sess.run([optimizer, loss, logits], feed_dict={X: X_batch, Y:Y_batch}) 

equivalent to re-optimizing the model on the test data (and labels)? Shouldn't we avoid running the optimizer here and just compute the predictions?
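
For example, one way to check this (assuming the model's weight variable is named w, a hypothetical name that may differ in the tutorial) would be to snapshot the weights around the call:

import numpy as np

w_before = sess.run(w)  # weights before the "evaluation" call
_, loss_batch, logits_batch = sess.run([optimizer, loss, logits],
                                       feed_dict={X: X_batch, Y: Y_batch})
w_after = sess.run(w)   # weights after
print(np.any(w_before != w_after))  # True: the test batch changed the weights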


Solution

I think you are correct. Fetching optimizer in sess.run executes the training op, so each call updates the model's weights using the test batch. For evaluation you only need the forward pass; the line should be

loss_batch, logits_batch = sess.run([loss, logits], feed_dict={X: X_batch, Y:Y_batch}) 
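
For completeness, here is a sketch of how the whole test loop could look with that change applied, assuming the X and Y placeholders, logits, sess, and batch_size from the tutorial. It also builds the softmax/accuracy ops once, outside the loop, since the original version adds new nodes to the graph on every iteration:

# build evaluation ops once, outside the loop
preds = tf.nn.softmax(logits)
correct_preds = tf.equal(tf.argmax(preds, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32))

n_batches = int(mnist.test.num_examples/batch_size)
total_correct_preds = 0
for i in range(n_batches):
    X_batch, Y_batch = mnist.test.next_batch(batch_size)
    # fetch only the accuracy: no optimizer, so no weight updates
    total_correct_preds += sess.run(accuracy, feed_dict={X: X_batch, Y: Y_batch})

print('Accuracy {0}'.format(total_correct_preds/mnist.test.num_examples))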
Licensed under: CC-BY-SA with attribution