Problem

I am trying to implement the image-captioning demo from the Keras documentation. From the documentation I could understand the training part.

max_caption_len = 16
vocab_size = 10000

# first, let's define an image model that
# will encode pictures into 128-dimensional vectors.
# it should be initialized with pre-trained weights.
image_model = Sequential()  # VGG-16 CNN layers go here (definition elided in the docs)
image_model.load_weights('weight_file.h5')

# next, let's define a RNN model that encodes sequences of words
# into sequences of 128-dimensional word vectors.
language_model = Sequential()
language_model.add(Embedding(vocab_size, 256, input_length=max_caption_len))
language_model.add(GRU(output_dim=128, return_sequences=True))
language_model.add(TimeDistributedDense(128))

# let's repeat the image vector to turn it into a sequence.
image_model.add(RepeatVector(max_caption_len))

# the output of both models will be tensors of shape (samples, max_caption_len, 128).
# let's concatenate these 2 vector sequences.
model = Sequential()
model.add(Merge([image_model, language_model], mode='concat', concat_axis=-1))
# let's encode this vector sequence into a single vector
model.add(GRU(256, return_sequences=False))
# which will be used to compute a probability
# distribution over what the next word in the caption should be!
model.add(Dense(vocab_size))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

model.fit([images, partial_captions], next_words, batch_size=16, nb_epoch=100)
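For reference, the three arrays passed to fit would be shaped roughly as follows. This is a sketch with dummy data; the zero-padding convention and the Theano-style channel-first image shape are assumptions, not something the docs spell out.

```python
import numpy as np

num_samples = 32
max_caption_len = 16
vocab_size = 10000

# one RGB image per (image, partial caption) training pair
images = np.zeros((num_samples, 3, 224, 224))

# partial captions as integer word indices, zero-padded to max_caption_len
partial_captions = np.zeros((num_samples, max_caption_len), dtype="int32")

# the target: the single next word, one-hot encoded over the vocabulary
next_words = np.zeros((num_samples, vocab_size))
next_words[np.arange(num_samples), 42] = 1.0  # e.g. every target is word #42

print(images.shape, partial_captions.shape, next_words.shape)
# -> (32, 3, 224, 224) (32, 16) (32, 10000)
```

Note that each full caption of length k expands into k training pairs, one per prefix.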

But now I am confused about how to generate a caption for a test image. The input here is an [image, partial_caption] pair; for a test image, how do I supply the partial caption?


Solution

This example trains on an image and a partial caption to predict the next word of the caption.

Input: [🐱, "<BEGIN> The cat sat on the"]
Output: "mat"

Notice the model doesn't predict the entire caption, only the next word. To construct a new caption, you have to predict repeatedly, once per word, feeding each predicted word back into the input.

Input: [🐱, "<BEGIN>"] # predict "The"
Input: [🐱, "<BEGIN> The"] # predict "cat"
Input: [🐱, "<BEGIN> The cat"] # predict "sat"
...
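The loop above can be sketched as a greedy decoder. The snippet below is a minimal self-contained sketch in plain Python/NumPy: `index_to_word`, `word_to_index`, and the stub `predict` are hypothetical stand-ins for your real vocabulary and for `model.predict([image, partial_caption])`.

```python
import numpy as np

max_caption_len = 16
# tiny hypothetical vocabulary; a real one would have ~10000 entries
index_to_word = {0: "<BEGIN>", 1: "The", 2: "cat", 3: "sat",
                 4: "on", 5: "the", 6: "mat", 7: "<END>"}
word_to_index = {w: i for i, w in index_to_word.items()}

def predict(image, partial_indices):
    """Stand-in for model.predict([image, partial_caption]).
    A trained model would return a probability distribution over the
    vocabulary; this stub just puts all mass on the next index."""
    probs = np.zeros(len(index_to_word))
    probs[(partial_indices[-1] + 1) % len(index_to_word)] = 1.0
    return probs

def generate_caption(image):
    caption = [word_to_index["<BEGIN>"]]          # start with the begin token
    while len(caption) < max_caption_len:
        probs = predict(image, caption)
        next_idx = int(np.argmax(probs))          # greedy: take the most likely word
        caption.append(next_idx)
        if index_to_word[next_idx] == "<END>":    # stop once the model emits <END>
            break
    return " ".join(index_to_word[i] for i in caption[1:-1])

print(generate_caption(image=None))  # -> "The cat sat on the mat"
```

With the real model you would replace `predict` with a call to `model.predict` on the image plus the zero-padded index sequence of the partial caption. Sampling from `probs` (or beam search) instead of `argmax` gives more varied captions.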

To predict the entire sequence in one shot, I believe you need to use TimeDistributedDense as the output layer (with return_sequences=True on the final GRU), so the model emits one word distribution per timestep.

Input: [🐱, "<BEGIN> The cat sat on the mat"]
Output: "The cat sat on the mat <END>"
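In that variant the target changes from a single one-hot word to one one-hot word per timestep (the input caption shifted left by one). A sketch of the two target shapes, assuming the same zero-padding convention as above:

```python
import numpy as np

num_samples, max_caption_len, vocab_size = 32, 16, 10000

# next-word prediction: one target word per sample
next_words = np.zeros((num_samples, vocab_size))

# full-sequence prediction (TimeDistributedDense + return_sequences=True):
# one target word per timestep
next_words_seq = np.zeros((num_samples, max_caption_len, vocab_size))

print(next_words.shape, next_words_seq.shape)
# -> (32, 10000) (32, 16, 10000)
```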

See this issue: https://github.com/fchollet/keras/issues/1029

License: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange