Question

As far as I know, there is practically no limit on the number of dimensions of the input feature vector for an LSTM, and it can apparently learn sequences of data.

My question is: does an LSTM, by nature, also take the previous output values, in addition to the feature vector, as extra feature elements when producing the next output value? That is, if we have:

[t]   -> [x1, ..., xn]  [y_t]
[t+1] -> [x1, ..., xn]  [y_(t+1)]

is it necessary to manipulate the data like this:

[t]   -> [x1, ..., xn, y_(t-1)]  [y_t]
[t+1] -> [x1, ..., xn, y_t]      [y_(t+1)]

or is the LSTM already handling this for us?
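For reference, the manipulation described above (appending the previous target value y_(t-1) as an extra feature at each timestep) can be sketched as follows. This is a minimal illustration assuming NumPy arrays; the function name and fill value for the first timestep are illustrative choices, not part of any library API:

```python
import numpy as np

def add_lagged_target(X, y, fill_value=0.0):
    """Append y_(t-1) as an extra feature column at each timestep t.

    X: (T, n) feature matrix, one row per timestep.
    y: (T,) target series.
    The first timestep has no previous target, so fill_value is used there.
    """
    y_prev = np.concatenate(([fill_value], y[:-1]))   # shift targets by one step
    return np.hstack([X, y_prev[:, None]])            # (T, n + 1)

X = np.arange(8, dtype=float).reshape(4, 2)   # 4 timesteps, 2 features each
y = np.array([10.0, 20.0, 30.0, 40.0])
X_aug = add_lagged_target(X, y)
# Each row of X_aug is now [x1, x2, y_(t-1)], matching the second layout above.
```

Whether this augmentation is needed, or whether the LSTM's recurrent hidden state already carries the equivalent information, is exactly what the question asks.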

No correct solution

Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange