9/13/2023

[Original post]

I was wondering if there is a straightforward way in Keras (or would I have to write my own layer?) to combine a convolutional network that extracts features with an LSTM (or GRU, MUT1, etc.) network, similar to Figure 1 of this paper. Specifically, I want the input i_t to the convolutional network at a given timestep t to consist of n frames (in the case of Figure 1, n = 1), so i_t would be of dimension (num_rows, num_cols, n); the features of i_t are extracted and fed into an LSTM network, which produces a prediction y_t and a hidden state h_t, then the next input i_ is processed, and so on. I'm aware of #129; however, I believe the original poster there wanted the convolutional layer not to accept new inputs across timesteps (something like Figure 3, pg. 4 of this paper), which is not what I want.

Here is the model I tried:

model.add(TimeDistributedConvolution2D(64, 3, 3, border_mode='same'))
model.add(TimeDistributedMaxPooling2D(pool_size=(2, 2)))
model.add(TimeDistributedConvolution2D(128, 3, 3, border_mode='same'))
model.add(TimeDistributedConvolution2D(128, 3, 3))
model.add(TimeDistributedConvolution2D(256, 3, 3, border_mode='same'))
model.add(TimeDistributedConvolution2D(256, 3, 3))
model.add(LSTM(512, return_sequences=True))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

It fails with:

Exception: Invalid input shape - Layer expects input ndim=5, was provided with input shape (None, 30, 256, 256)

Am I using the package wrong? Or is there something I need to implement somewhere? Thanks for taking the time to read this.

[Reply from the author of the TimeDistributed* layers]

I have pushed an updated version of my code that works with the newest version of Keras (this includes modifying theano_backend.py to include [...]). For any issues/bugs, feel free to let me know on the issues page on my repo, rather than on this thread.

On a related note, could these layers be considered for inclusion in Keras? It's somewhat impractical to keep them up to date in a separate repo, as Keras is constantly changing. I know [...] had plans for a general time distributed layer, though I think it's been on the backlog for a while. However, if making a general time distributed layer is too much work or is taking too much time, and if TimeDistributedConvolution2D, TimeDistributedPooling2D, and TimeDistributedFlatten seem like something that could be useful to Keras users (especially those training CNN-RNN nets), then they (or a subset thereof) may be worth considering for inclusion (in fact, TimeDistributedDense and TimeDistributedMerge are already specific time distributed layers). It could be best to put all the time distributed layers in one place. Or, even better, for Flatten, Convolution2D, and Pooling2D we could have a flag (say, td) such that if td is set to True, the layers do the appropriate operations to be TimeDistributed. But I'll leave the API decisions to others.

On Tuesday, December 22, 2015, afsanehghasemi wrote: This is an error I have got in theano_backend.py, in def permute: raise Exception('ndim too large: ' + str(ndim))
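The ndim mismatch in the exception above comes from the input tensor missing an explicit channel axis: the time-distributed convolution layers want a 5D input, while (None, 30, 256, 256) is only 4D. A minimal sketch of the fix, assuming 30 single-channel (grayscale) 256x256 frames per sample and Theano dimension ordering; the batch size of 2 and the variable names are illustrative, not from the thread:

```python
import numpy as np

# Illustrative stand-in for the poster's data: a batch of 2 clips,
# each 30 grayscale frames of 256x256 pixels -- ndim=4, which is
# exactly the shape the exception complains about.
frames = np.zeros((2, 30, 256, 256))

# Insert an explicit channel axis so the tensor becomes 5D:
# (batch, timesteps, channels, rows, cols) under Theano dim ordering.
# (With channels-last ordering the new axis would go at the end instead.)
frames_5d = np.expand_dims(frames, axis=2)

print(frames_5d.shape)  # (2, 30, 1, 256, 256) -- ndim=5, as the layer expects
```

The same reshape would be applied once to the training data (or reflected in the model's declared input shape) before it reaches the first TimeDistributedConvolution2D layer.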
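The "general time distributed layer" discussed in the reply boils down to one trick: fold the time axis into the batch axis, apply the wrapped per-sample operation, and unfold the result. A minimal NumPy sketch of that mechanism (the function name time_distributed and the toy mean-pooling op are illustrative, not the actual Keras implementation):

```python
import numpy as np

def time_distributed(op, x):
    # x has shape (batch, timesteps, ...).  Merge batch and time so the
    # wrapped op sees an ordinary batch of samples...
    b, t = x.shape[:2]
    flat = x.reshape((b * t,) + x.shape[2:])
    out = op(flat)
    # ...then split batch and time back apart on the op's output.
    return out.reshape((b, t) + out.shape[1:])

# Toy per-frame "feature extractor": global average over each 8x8 frame.
clips = np.ones((4, 30, 8, 8))                      # (batch, time, rows, cols)
features = time_distributed(lambda f: f.mean(axis=(1, 2)), clips)
print(features.shape)  # (4, 30): one scalar feature per frame
```

This is why a single flag (the td idea above) could cover Flatten, Convolution2D, and Pooling2D at once: the wrapper never needs to know what the inner operation does, only that it maps a batch of samples to a batch of outputs.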