Question

I am trying to learn TensorFlow, and I could not understand how it uses the batch in this example:

cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.initialize_all_variables())
for i in range(20000):
  batch = mnist.train.next_batch(50)
  if i%100 == 0:
    train_accuracy = accuracy.eval(feed_dict={
        x:batch[0], y_: batch[1], keep_prob: 1.0})
    print("step %d, training accuracy %g"%(i, train_accuracy))
  train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g"%accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))

My question is: why does it get a batch of 50 training examples but then only use the first one for training? Maybe I did not understand the code correctly.


Solution

If I understood you correctly, you are asking about this line of code:

train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

Here you are not selecting the first example; you are only specifying which part of the batch holds the features and which part holds the labels. batch is a tuple of two arrays: batch[0] contains all 50 images (the features) and batch[1] contains the 50 corresponding one-hot labels. The feed_dict therefore feeds the entire batch of 50 examples into the placeholders x and y_.
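
As a minimal sketch of this (assuming the standard TF 1.x tutorial setup, where mnist comes from input_data.read_data_sets), printing the shapes shows that each element of the tuple contains all 50 examples, not just one:

from tensorflow.examples.tutorials.mnist import input_data

# Download/load MNIST with one-hot encoded labels, as in the tutorial.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

batch = mnist.train.next_batch(50)   # returns a tuple: (images, labels)
print(batch[0].shape)                # (50, 784) -> all 50 flattened images
print(batch[1].shape)                # (50, 10)  -> all 50 one-hot labels

So indexing into the tuple distinguishes features from labels; it does not pick out a single training example.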

Licensed under: CC-BY-SA with attribution