I'm working through the image-captioning example from the Keras documentation. From the docs I can follow the training part:

max_caption_len = 16
vocab_size = 10000

# first, let's define an image model that
# will encode pictures into 128-dimensional vectors.
# it should be initialized with pre-trained weights.
image_model = ...  # VGG-16 CNN definition
image_model.load_weights('weight_file.h5')

# next, let's define an RNN model that encodes sequences of words
# into sequences of 128-dimensional word vectors.
language_model = Sequential()
language_model.add(Embedding(vocab_size, 256, input_length=max_caption_len))
language_model.add(GRU(output_dim=128, return_sequences=True))
language_model.add(TimeDistributedDense(128))

# let's repeat the image vector to turn it into a sequence.
image_model.add(RepeatVector(max_caption_len))

# the output of both models will be tensors of shape (samples, max_caption_len, 128).
# let's concatenate these 2 vector sequences.
model = Sequential()
model.add(Merge([image_model, language_model], mode='concat', concat_axis=-1))
# let's encode this vector sequence into a single vector
model.add(GRU(256, return_sequences=False))
# which will be used to compute a probability
# distribution over what the next word in the caption should be!
model.add(Dense(vocab_size))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

model.fit([images, partial_captions], next_words, batch_size=16, nb_epoch=100)

But now I'm confused about how to generate a caption for a test image. The training input is an [image, partial_caption] pair, so at test time what do I feed in as the partial caption?

Any help would be appreciated.

Solution

This example trains on images and partial captions to predict the next word of the caption.

Input: [🐱, "<BEGIN> The cat sat on the"]
Output: "mat"
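Concretely, every full caption in the training set gets expanded into several such pairs, one per word. A minimal sketch in plain Python (the make_pairs helper and the <BEGIN>/<END> tokens are illustrative, not part of the Keras API):

```python
def make_pairs(caption_tokens):
    """Expand one caption into (partial_caption, next_word) training pairs."""
    tokens = ['<BEGIN>'] + caption_tokens + ['<END>']
    pairs = []
    for i in range(1, len(tokens)):
        # everything seen so far is the input; the following token is the target
        pairs.append((tokens[:i], tokens[i]))
    return pairs

pairs = make_pairs(['The', 'cat', 'sat', 'on', 'the', 'mat'])
# first pair:  (['<BEGIN>'], 'The')
# last pair:   (['<BEGIN>', 'The', 'cat', 'sat', 'on', 'the', 'mat'], '<END>')
```

Each partial caption would then be index-encoded and padded to max_caption_len before being fed to the model.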

Note that the model does not predict the entire caption in one shot. To build a new caption, you have to run one prediction per word:

Input: [🐱, "<BEGIN>"] # predict "The"
Input: [🐱, "<BEGIN> The"] # predict "cat"
Input: [🐱, "<BEGIN> The cat"] # predict "sat"
...
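Putting that loop into code, here is a sketch of greedy decoding. predict_next is a hypothetical stand-in for vectorizing the partial caption, calling model.predict([image, partial_caption]), and taking the argmax word; the toy predictor below just replays a fixed sentence so the loop can be demonstrated:

```python
def generate_caption(image, predict_next, max_len=16):
    """Greedily decode a caption one word at a time."""
    caption = ['<BEGIN>']
    for _ in range(max_len):
        word = predict_next(image, caption)
        if word == '<END>':
            break
        caption.append(word)
    return caption[1:]  # drop the <BEGIN> token

# toy predictor that walks through a fixed sentence (stands in for the model)
fixed = ['The', 'cat', 'sat', 'on', 'the', 'mat', '<END>']
toy = lambda image, partial: fixed[len(partial) - 1]
print(generate_caption(None, toy))  # ['The', 'cat', 'sat', 'on', 'the', 'mat']
```

In practice you could also keep the top-k words at each step (beam search) instead of the argmax, which usually yields better captions.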

To predict the entire sequence at once, I believe you would need to use TimeDistributedDense for the output layer:

Input: [🐱, "<BEGIN> The cat sat on the mat"]
Output: "The cat sat on the mat <END>"
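Under that setup, the tail of the model would change along these lines (an untested sketch using the old Keras 0.x layer names from the snippet above): the decoder GRU keeps the sequence dimension and the softmax is applied at every timestep.

```python
# sketch: emit a softmax over the vocabulary at each timestep,
# so the model predicts the whole shifted caption at once
model.add(GRU(256, return_sequences=True))
model.add(TimeDistributedDense(vocab_size))
model.add(Activation('softmax'))
```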

See this issue: https://github.com/fchollet/keras/issues/1029

Licensed under: CC-BY-SA with attribution