Question

My model's structure is

                                  Output
                                    ^
                                    |
                             ----------------
                             | Dense Network |
                             ----------------
                                   /\
                                   ||
                                   ||
                                   ||
   |--------------------|          ||          | ----------------------|  
   | RNN on features    | ========>||<======== |  Dense Network on non |
   | changing with time |        [concat]      |  time series data     |
   |--------------------|                      |-----------------------|
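The two-branch structure in the diagram can be sketched as a bare-bones forward pass in NumPy. All shapes and layer sizes below are assumptions for illustration (the post does not give them); a real model would use a framework such as Keras or PyTorch and trained weights rather than random ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 8 time steps of 4 features, plus 5 static features.
T, F_SEQ, F_STATIC, HIDDEN = 8, 4, 5, 16

x_seq = rng.normal(size=(T, F_SEQ))      # features changing with time
x_static = rng.normal(size=(F_STATIC,))  # non time-series features

# --- RNN branch: simple (Elman) recurrence over the time steps ---
W_xh = rng.normal(size=(F_SEQ, HIDDEN)) * 0.1
W_hh = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1
h = np.zeros(HIDDEN)
for t in range(T):
    h = np.tanh(x_seq[t] @ W_xh + h @ W_hh)  # final h summarizes the sequence

# --- Dense branch on the static (non time-series) features ---
W_s = rng.normal(size=(F_STATIC, HIDDEN)) * 0.1
s = np.tanh(x_static @ W_s)

# --- Concatenate both branches, then the top dense network ---
z = np.concatenate([h, s])                   # shape (2 * HIDDEN,)
W_out = rng.normal(size=(2 * HIDDEN, 1)) * 0.1
y = 1 / (1 + np.exp(-(z @ W_out)))           # sigmoid output

print(z.shape, y.shape)
```

The key point the diagram makes is that the sequence is first collapsed to a fixed-size vector (`h`) so it can be concatenated with the static-feature vector before the final dense layers.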

These are the training and validation set metric outputs of my model. Why are the values fluctuating so much for the validation set? Any ideas?

[Image: 4 graphs (loss, accuracy, precision, recall)]

Update:

As suggested in the comments, I have tried increasing the validation set size. The size ratio is now 49.6% / 50.4%.
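For reference, a roughly 49.6/50.4 split like the one described can be obtained with simple index slicing (a sketch on a hypothetical 1000-sample dataset; a shuffled or stratified split via `sklearn.model_selection.train_test_split` would be the more common choice):

```python
import numpy as np

# Hypothetical dataset of 1000 samples with one feature each.
X = np.arange(1000).reshape(1000, 1)

split = int(len(X) * 0.496)      # ~49.6% train / 50.4% validation
X_train, X_val = X[:split], X[split:]

print(len(X_train), len(X_val))  # 496 504
```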

I have also made the model much simpler by using fewer layers. The new graphs look like this: [Image: 4 graphs (loss, accuracy, precision, recall) on the simpler model]

Is this acceptable as 'okay-fluctuating'?


The solution

Thanks for updating the post. This level of fluctuation in the validation set is much less dramatic than before and looks similar to the regular fluctuation I have seen in my own experience. Kudos that you have also managed to prevent the model from overfitting.

Licensed under: CC-BY-SA with attribution