I'm trying to build a simple CNN-LSTM classifier using TimeDistributed, but I get the following error: Output tensors to a Model must be the output of a Keras Layer (thus holding past layer metadata). Found: <...>
My samples are grayscale images with 366 channels and size 5x5, and each sample has its own unique label.
model_input = Input(shape=(366,5,5))
model = TimeDistributed(Conv2D(64, (3, 3), activation='relu', padding='same',data_format='channels_first')(model_input))
model = TimeDistributed(MaxPooling2D((2, 2),padding='same',data_format='channels_first'))
model = TimeDistributed(Conv2D(128, (3,3), activation='relu',padding='same',data_format='channels_first'))
model = TimeDistributed(MaxPooling2D((2, 2), strides=(2, 2),padding='same',data_format='channels_first'))
model = Flatten()
model = LSTM(256, return_sequences=False, dropout=0.5)
model = Dense(128, activation='relu')
model = Dense(6, activation='softmax')
cnnlstm = Model(model_input, model)
cnnlstm.compile(optimizer='adamax',
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])
cnnlstm.summary()
You have to pass the tensors between the layers, because that is how the functional API works for all layers, using the Layer(params...)(input) notation:
model_input = Input(shape=(366,5,5))
model = TimeDistributed(Conv2D(64, (3, 3), activation='relu', padding='same',data_format='channels_first'))(model_input)
model = TimeDistributed(MaxPooling2D((2, 2),padding='same',data_format='channels_first'))(model)
model = TimeDistributed(Conv2D(128, (3,3), activation='relu',padding='same',data_format='channels_first'))(model)
model = TimeDistributed(MaxPooling2D((2, 2), strides=(2, 2),padding='same',data_format='channels_first'))(model)
model = TimeDistributed(Flatten())(model)
model = LSTM(256, return_sequences=False, dropout=0.5)(model)
model = Dense(128, activation='relu')(model)
model = Dense(6, activation='softmax')(model)
cnnlstm = Model(model_input, model)
Note that I also corrected the first TimeDistributed layer, since the tensor was in the wrong place: it was passed inside the wrapper instead of to the wrapped layer's output.
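The Layer(params...)(input) pattern can be seen without any framework at all: a Keras layer is an object whose constructor takes configuration and whose call takes a tensor and returns a new tensor. This is a minimal, framework-free sketch (the `Dense` class and dict-based "tensors" here are illustrative stand-ins, not Keras code):

```python
# Framework-free sketch of the functional-API call pattern
# Layer(params...)(input): the first call configures the layer,
# the second call applies it to a tensor and returns a new tensor.
class Dense:
    def __init__(self, units):
        self.units = units  # configuration only, no data yet

    def __call__(self, x):
        # A real layer would compute outputs; here we just track shape.
        return {"shape": (x["shape"][0], self.units)}

x = {"shape": (None, 32)}   # stand-in for Input(shape=(32,))
y = Dense(128)(x)           # configure the layer, then apply it to x
print(y["shape"])           # (None, 128)
```

Writing `Dense(128)` alone only creates a configured layer object; it is the second set of parentheses that consumes the previous tensor and produces the next one, which is why every layer in the corrected code ends with `(model)`.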
Thanks Matias. But when I run the code above I get an error: Dimension must be 3 but is 4 for 'time_distributed_304/transpose' with input shapes: [?,5,5], [4]. My input shape of 366 channels and 5x5 seems correct to me. Not sure what's wrong.
Thanks a lot Matias, you saved me hours. It works now. The problem was my input shape: I hadn't specified the number of channels per time step (1 in my case). So model_input = Input(shape=(366,1,5,5)) fixed it. Now my accuracy doesn't change during training, which I hope to fix next.
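The shape mismatch in the comments above can be checked without building the model at all. TimeDistributed slices along the first axis of each sample, so with shape (366, 5, 5) every "time step" is a 2-D (5, 5) array, while Conv2D with data_format='channels_first' needs a 3-D (channels, height, width) input per step. A quick NumPy check (not from the original post) of the shapes involved:

```python
import numpy as np

# One sample as originally described: 366 time steps, each a 5x5
# grayscale image, with no explicit channel axis.
sample = np.zeros((366, 5, 5))

# TimeDistributed slices along the first axis, so each step is 2-D,
# which is too few dimensions for a channels_first Conv2D input.
print(sample[0].shape)   # (5, 5)

# Adding an explicit channel axis of size 1 per time step matches
# the fix in the comment: Input(shape=(366, 1, 5, 5)).
sample = np.expand_dims(sample, axis=1)
print(sample.shape)      # (366, 1, 5, 5)
print(sample[0].shape)   # (1, 5, 5) -- (channels, height, width)
```

The same reshape has to be applied to the training data itself, not just the Input layer, so that each batch arrives as (batch, 366, 1, 5, 5).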