Dropout Decreases Test and Train Accuracy in a One-Layer LSTM in PyTorch
Question
I have a one-layer LSTM in PyTorch on MNIST data. I know that for a single layer the dropout option of nn.LSTM has no effect, so I added a dropout at the start of the second stage, which is a fully connected layer. However, without dropout I get 97.75% accuracy on the test data, and with a dropout of 0.5 I get 95.36%. Am I doing something wrong, or what is the reason for this phenomenon? I do switch the model to eval mode during testing, but then I only reach 96.44% accuracy, which is still less than without dropout. Thanks a lot.
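For reference, here is a minimal snippet (sizes are arbitrary, just for illustration) showing what I mean about the built-in dropout: nn.LSTM applies its dropout argument only between stacked layers, so with num_layers=1 it is a no-op and PyTorch warns about it.

import torch.nn as nn

# nn.LSTM's dropout is applied between stacked layers only;
# with num_layers=1 it has no effect (PyTorch emits a UserWarning)
lstm_one = nn.LSTM(input_size=28, hidden_size=128, num_layers=1,
                   batch_first=True, dropout=0.5)   # dropout is a no-op here

# with num_layers >= 2 the dropout is actually applied between layers
lstm_two = nn.LSTM(input_size=28, hidden_size=128, num_layers=2,
                   batch_first=True, dropout=0.5)

My model is below.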
import torch
import torch.nn as nn
from torch.autograd import Variable  # legacy API used by this (older) PyTorch code

# RNN Model (Many-to-One)
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Sequential(
            nn.Dropout(0.1),
            nn.Linear(hidden_size * 2, num_classes),  # *2 because the LSTM is bidirectional
            nn.Softmax(dim=1)
        )

    def init_hidden(self, x):
        # 2 * num_layers initial hidden/cell states because the LSTM is bidirectional
        return (Variable(torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size)).cuda(),
                Variable(torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size)).cuda())

    def forward(self, x):
        # Set initial hidden and cell states
        hidden = self.init_hidden(x)
        # Forward propagate the LSTM
        out, _ = self.lstm(x, hidden)
        # Decode the hidden state of the last time step
        out = self.fc(out[:, -1, :])
        return out
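For completeness, a sketch of how I train and evaluate (the data loaders, optimizer, and hyperparameters here are placeholders, not my exact setup):

import torch
import torch.nn as nn

model = RNN(input_size=28, hidden_size=128, num_layers=1, num_classes=10).cuda()
# note: nn.CrossEntropyLoss expects raw logits, so the final Softmax in self.fc
# is usually dropped when training with this criterion
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

model.train()                        # training mode: dropout is active
for images, labels in train_loader:  # placeholder DataLoader over MNIST
    images = images.view(-1, 28, 28).cuda()  # treat each 28x28 image as a 28-step sequence
    labels = labels.cuda()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

model.eval()                         # eval mode: dropout is disabled
correct, total = 0, 0
with torch.no_grad():
    for images, labels in test_loader:       # placeholder DataLoader
        images = images.view(-1, 28, 28).cuda()
        predicted = model(images).argmax(dim=1)
        total += labels.size(0)
        correct += (predicted == labels.cuda()).sum().item()
print('Test accuracy: %.2f%%' % (100.0 * correct / total))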
No accepted answer