Question

I am learning PyTorch and CNNs, but I am confused about how the number of inputs to the first FC layer after a Conv2d layer is calculated.

My network architecture is shown below; here is my reasoning, using the calculation as explained here.

The input images will have shape (1 x 28 x 28).

The first Conv layer has stride 1, padding 0, a depth of 6, and a (4 x 4) kernel. The output will thus be (6 x 24 x 24), because the new spatial size is (28 - 4 + 2*0)/1 = 24.

Then we pool this with a (2 x 2) kernel and stride 2, so we get an output of (6 x 11 x 11), because the new spatial size is (24 - 2)/2 = 11.

The same applies to the second Conv and pool layers, but this time with a (3 x 3) kernel in the Conv layer, which leaves me with (16 x 3 x 3) feature maps.
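
Written out step by step (the second conv/pool step follows the same formula), my calculation is:

    conv1 (4 x 4):  (28 - 4 + 2*0)/1 = 24  ->  6 x 24 x 24
    pool1 (2 x 2):  (24 - 2)/2       = 11  ->  6 x 11 x 11
    conv2 (3 x 3):  (11 - 3 + 2*0)/1 = 8   ->  16 x 8 x 8
    pool2 (2 x 2):  (8 - 2)/2        = 3   ->  16 x 3 x 3
    flatten:        16 * 3 * 3       = 144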

My assumption would then be that the first linear layer should have 144 inputs (16 * 3 * 3), but when I calculate the inputs programmatically, I get 400. What did I miss?

import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 4)    # 1 input channel, 6 output channels, 4 x 4 kernel
        self.conv2 = nn.Conv2d(6, 16, 3)   # 6 input channels, 16 output channels, 3 x 3 kernel
        self.fc1 = nn.Linear(400, 120)     # in_features = 400, not the 144 I expected
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, len(classes))  # classes is defined elsewhere in my script

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))  # conv1 -> ReLU -> 2 x 2 max pool
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)       # conv2 -> ReLU -> 2 x 2 max pool
        x = x.view(-1, self.num_flat_features(x))        # flatten everything except the batch dimension
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]      # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features      # 400, not 144
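
To double-check, this is roughly how I inspected the intermediate shapes with a dummy input (a quick sketch using only the two conv layers, since the fully connected layers don't affect the shapes before the flatten; imports are the same as above, and the variable names are just for illustration):

conv1 = nn.Conv2d(1, 6, 4)
conv2 = nn.Conv2d(6, 16, 3)

x = torch.randn(1, 1, 28, 28)               # dummy batch: 1 image, 1 channel, 28 x 28
x = F.max_pool2d(F.relu(conv1(x)), (2, 2))
print(x.shape)                              # shape after conv1 + pool
x = F.max_pool2d(F.relu(conv2(x)), 2)
print(x.shape)                              # shape after conv2 + pool
print(x.view(1, -1).size(1))                # 400, the number fc1 actually needs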

Related, but less important: is there any reasoning people use to choose a good kernel size, number of layers, and number of pooling layers, or does everyone just follow what the SOTA papers do?

