Question

On page 3 of the EfficientNet paper, there is an equation $$\mathcal{N} = \bigodot_{i=1...s} \mathcal{F}_{i}^{L_i} \big(X_{\langle H_i, W_i, C_i \rangle}\big)$$ where $\mathcal{N}$ is the conv net and each $\mathcal{F}_i^{L_i}$ is the $i$th-stage layer operator that has length $L_i$.

What I don't understand is: what is this $\odot$ in the equation? Do the authors mean the Hadamard product, or function composition? They previously wrote $\mathcal{N} = \mathcal{F}_k \odot ... \odot \mathcal{F}_1 (X_1)$, where $k$ is the depth of the net, so I thought $\odot$ is just function composition. But EfficientNet has skip connections, and the Keras implementation uses layers.merge.Multiply(), so it could also mean that the input $X$ is multiplied elementwise with the transformed $\mathcal{F}(X)$, in which case $\odot$ would be the Hadamard product.

Does anyone know the answer? Thanks.


Solution

It probably represents repeated function composition.

Key reasons:

  1. The authors' wording before this equation is: "a list of composed layers".
  2. A Hadamard product could not be applied to layers of different sizes, whereas function composition of course can.
  3. Yes, EfficientNet has skip connections, but I think the authors reconcile this with function composition via the distinction between "layers" and "stages". They say that "$F_i^{L_i}$ denotes layer $F_i$ is repeated $L_i$ times in stage $i$", and that "all layers in each stage share the same architecture". So I think the skip connections live *inside* each layer operator $F_i$, and the composition notation is their way of compactly expressing the (often) repeated architecture of those skip-connected layers. It is admittedly confusing.
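To make the reading above concrete, here is a minimal sketch (not the actual EfficientNet code; `block_with_skip` is a hypothetical stand-in for an MBConv block) showing how $\mathcal{F}_i^{L_i}$ as repeated composition can still contain a skip connection, because the residual addition happens inside each layer operator:

```python
import numpy as np

def conv_stage(x, layer_fn, num_repeats):
    """Repeated composition F^L(x) = F(F(...F(x))), applied num_repeats times."""
    for _ in range(num_repeats):
        x = layer_fn(x)
    return x

def block_with_skip(x):
    """Toy layer operator F_i with an internal skip connection.
    np.tanh stands in for the conv + activation; real blocks are MBConv."""
    transformed = np.tanh(x)   # placeholder for the learned transform
    return x + transformed     # the skip connection lives *inside* F_i

x = np.ones((4, 4))
y = conv_stage(x, block_with_skip, num_repeats=3)  # computes F^3(x)
```

Because every repeat of the block preserves the input shape, composing it $L_i$ times is well-defined, whereas a Hadamard product across stages with different $\langle H_i, W_i, C_i \rangle$ would not be.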