Keras Neural Network training is stuck (gets stuck around epoch 6) [closed]
Asked: 31-10-2019
Question
I have a training dataset with 8000 rows, and I am trying to train a Keras neural network on it for 100 epochs. However, the training process gets stuck around epoch 6 every time, as shown below. I'm not sure whether it's because of my computer (a MacBook Pro with 8 GB RAM) or because of some inappropriately set parameters in my model. Thanks!
import keras
from keras.models import Sequential
from keras.layers import Dense
classifier = Sequential()
classifier.add(Dense(activation="relu", input_dim=11, units=6, kernel_initializer="uniform"))
classifier.add(Dense(activation="relu", units=6, kernel_initializer="uniform"))
classifier.add(Dense(activation="sigmoid", units=1, kernel_initializer="uniform"))
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
classifier.fit(X_train, y_train, batch_size=10, epochs=100)  # 'nb_epoch' was renamed to 'epochs' in Keras 2
Epoch 1/100
8000/8000 [==============================] - 3s 348us/step - loss: 0.3890 - acc: 0.8367
Epoch 2/100
8000/8000 [==============================] - 3s 329us/step - loss: 0.3894 - acc: 0.8376
Epoch 3/100
8000/8000 [==============================] - 3s 314us/step - loss: 0.3887 - acc: 0.8370
Epoch 4/100
8000/8000 [==============================] - 3s 346us/step - loss: 0.3895 - acc: 0.8379
Epoch 5/100
8000/8000 [==============================] - 3s 320us/step - loss: 0.3887 - acc: 0.8370
Epoch 6/100
7650/8000 [===========================>..] - ETA: 0s - loss: 0.3881 - acc: 0.8363
//STUCK HERE
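Since the dataset and hardware aren't shown, one way to rule out the model configuration itself is to reproduce the architecture outside Keras. The sketch below is a pure-NumPy stand-in for the same 11→6→6→1 relu/relu/sigmoid network with small uniform weight initialization; the synthetic data, plain SGD optimizer (instead of Adam), learning rate, and epoch count are all assumptions for illustration. If this loop also slows to a crawl at a similar point, the stall is compute-bound; if it runs smoothly, the hang is more likely in the environment (e.g. the notebook's progress-bar rendering) than in the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for X_train / y_train (the real data is not shown):
# 8000 rows, 11 features, binary target.
X = rng.normal(size=(8000, 11))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

def uniform_init(n_in, n_out, scale=0.05):
    # Keras' "uniform" initializer draws from U(-0.05, 0.05) by default.
    return rng.uniform(-scale, scale, size=(n_in, n_out))

W1, b1 = uniform_init(11, 6), np.zeros(6)
W2, b2 = uniform_init(6, 6), np.zeros(6)
W3, b3 = uniform_init(6, 1), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, batch_size = 0.1, 10   # assumed values; batch_size matches the question
losses = []
for epoch in range(5):      # a few epochs are enough to see movement
    perm = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        # forward pass: relu -> relu -> sigmoid, as in the Keras model
        h1 = np.maximum(0, xb @ W1 + b1)
        h2 = np.maximum(0, h1 @ W2 + b2)
        p = sigmoid(h2 @ W3 + b3)
        # backward pass for mean binary cross-entropy
        dz3 = (p - yb) / len(xb)
        dW3, db3 = h2.T @ dz3, dz3.sum(0)
        dz2 = (dz3 @ W3.T) * (h2 > 0)
        dW2, db2 = h1.T @ dz2, dz2.sum(0)
        dz1 = (dz2 @ W2.T) * (h1 > 0)
        dW1, db1 = xb.T @ dz1, dz1.sum(0)
        # plain SGD update (Adam is replaced for simplicity)
        for P, G in ((W1, dW1), (b1, db1), (W2, dW2),
                     (b2, db2), (W3, dW3), (b3, db3)):
            P -= lr * G
    # full-dataset loss at the end of each epoch
    p_all = sigmoid(np.maximum(0, np.maximum(0, X @ W1 + b1) @ W2 + b2) @ W3 + b3)
    loss = -np.mean(y * np.log(p_all + 1e-9) + (1 - y) * np.log(1 - p_all + 1e-9))
    losses.append(loss)
    print(f"epoch {epoch + 1}: loss={loss:.4f}")
```

Each epoch here does the same number of weight updates (800 batches of 10) as the Keras run, so per-epoch wall time should stay roughly constant; a sudden slowdown would point at the machine rather than the model.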
I do get the following message in my terminal window, though (even though I'm using Jupyter Notebook):
2017-12-24 20:45:30.464660: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
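That cpu_feature_guard line is an informational notice from the prebuilt TensorFlow binary, not an error, and it is unrelated to the stall: it only says the wheel wasn't compiled with SSE4.1 support even though the CPU has it. If the noise is distracting, it can be silenced by setting `TF_CPP_MIN_LOG_LEVEL` before TensorFlow is first imported:

```python
import os

# TF_CPP_MIN_LOG_LEVEL filters TensorFlow's C++ log output:
# "0" = all messages, "1" = hide INFO, "2" = hide INFO and WARNING.
# It must be set *before* `import tensorflow` to take effect.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

# import tensorflow as tf   # would now start without the SSE4.1 notice
```

The only way to actually use those CPU instructions (and get the promised speedup) is to build TensorFlow from source with the matching compiler flags; suppressing the message just hides it.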
No accepted solution.
Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange