Note: this article is translated from stackoverflow.com.
keras python

python - Output tensors to a Model must be the output of a Keras `Layer` (thus holding past layer metadata)

Posted on 2020-04-05 23:49:01

I'm trying to do a simple CNN-LSTM classification using TimeDistributed, but I get the following error: Output tensors to a Model must be the output of a Keras `Layer` (thus holding past layer metadata). Found:

My samples are grayscale images with 366 channels and a 5x5 size, and each sample has its own unique label.

model_input = Input(shape=(366,5,5))

model = TimeDistributed(Conv2D(64, (3, 3), activation='relu', padding='same',data_format='channels_first')(model_input))
model = TimeDistributed(MaxPooling2D((2, 2),padding='same',data_format='channels_first'))

model = TimeDistributed(Conv2D(128, (3,3), activation='relu',padding='same',data_format='channels_first'))
model = TimeDistributed(MaxPooling2D((2, 2), strides=(2, 2),padding='same',data_format='channels_first'))


model = Flatten()

model = LSTM(256, return_sequences=False, dropout=0.5)
model =  Dense(128, activation='relu')


model = Dense(6, activation='softmax')

cnnlstm = Model(model_input, model)
cnnlstm.compile(optimizer='adamax',
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])
cnnlstm.summary()


Asked by MBS · Viewed 86 times
Matias Valdenegro 2020-02-01 01:24

You have to pass the tensors between the layers, as that is how the functional API works: every layer uses the Layer(params...)(input) notation:

from tensorflow.keras.layers import (Input, TimeDistributed, Conv2D,
                                     MaxPooling2D, Flatten, LSTM, Dense)
from tensorflow.keras.models import Model

model_input = Input(shape=(366,5,5))

model = TimeDistributed(Conv2D(64, (3, 3), activation='relu', padding='same',data_format='channels_first'))(model_input)
model = TimeDistributed(MaxPooling2D((2, 2),padding='same',data_format='channels_first'))(model)

model = TimeDistributed(Conv2D(128, (3,3), activation='relu',padding='same',data_format='channels_first'))(model)
model = TimeDistributed(MaxPooling2D((2, 2), strides=(2, 2),padding='same',data_format='channels_first'))(model)


model = TimeDistributed(Flatten())(model)

model = LSTM(256, return_sequences=False, dropout=0.5)(model)
model = Dense(128, activation='relu')(model)


model = Dense(6, activation='softmax')(model)

cnnlstm = Model(model_input, model)

Note that I also corrected the first TimeDistributed layer, since the tensor was being passed in the wrong place: the input was applied to the inner Conv2D instead of to the TimeDistributed wrapper.
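The distinction the answer relies on is between *constructing* a layer (`Layer(params...)`) and *calling* it on a tensor (`(input)`). As a loose plain-Python sketch of that callable pattern (a toy class for illustration, not Keras itself):

```python
class ToyLayer:
    """Toy stand-in for a Keras layer: configured at construction,
    applied to an input only when called."""
    def __init__(self, scale):
        self.scale = scale  # configuration, analogous to filters/units

    def __call__(self, x):
        # The "forward pass": only here is an output produced
        return [self.scale * v for v in x]

inp = [1, 2, 3]
layer = ToyLayer(10)   # construction alone produces no output
out = layer(inp)       # calling it on the input does
print(out)             # [10, 20, 30]
```

In the broken code, `TimeDistributed(Conv2D(...)(model_input))` calls the Conv2D on the input first and then wraps the resulting *tensor* in TimeDistributed; Keras instead expects the layer object to be wrapped first and the wrapper then called on the tensor, i.e. `TimeDistributed(Conv2D(...))(model_input)`.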