python 3.x - How to create a custom keras loss function with opencv?
keras opencv python-3.x computer-vision


Posted on 2020-03-27 11:29:17

I am developing a machine learning model with Keras, and I noticed that the available loss functions do not give the best results on my test set.

I am using a Unet architecture, where I input a (16,16,3) image and the network also outputs a (16,16,3) picture (autoencoder). I noticed that one way to improve the model might be a loss function that compares, pixel by pixel, the gradients (Laplacian) of the network output against those of the ground truth. However, I have not found any tutorial that handles this kind of application, because it would require applying the OpenCV Laplacian function to every output image of the network.

The loss function would be something like this:

import cv2
import numpy as np
from keras import backend as K

def laplacian_loss(y_true, y_pred):
  # y_true already is the calculated gradients, only the y_pred side needs computing
  # calculate the gradients for each predicted image
  y_pred_lap = []
  for img in y_pred:
    laplacian = cv2.Laplacian(np.float64(img), cv2.CV_64F)
    y_pred_lap.append(laplacian)

  y_pred_lap = np.array(y_pred_lap)

  # mean squared error, following the keras losses documentation
  return K.mean(K.square(y_pred_lap - y_true), axis=-1)

Has anyone done something like that for loss calculation?
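For reference, cv2 and NumPy operate on concrete arrays, while a Keras loss receives symbolic tensors, so a runnable version has to use backend ops. A minimal sketch of an equivalent, differentiable loss, assuming a TensorFlow-backed Keras setup and the same per-channel Laplacian, might look like this:

import numpy as np
from keras import backend as K

channels = 3
lap = np.array([[1,  1, 1],
                [1, -8, 1],
                [1,  1, 1]], dtype=np.float32)
# depthwise kernel of shape (3, 3, channels, 1): one laplacian per channel
lap_kernel = K.constant(np.repeat(lap[:, :, None, None], channels, axis=2))

def laplacian_loss(y_true, y_pred):
    # y_true already holds the precomputed gradients, as described above
    y_pred_lap = K.depthwise_conv2d(y_pred, lap_kernel, padding='same')
    return K.mean(K.square(y_pred_lap - y_true), axis=-1)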


Asked by: Lucas Kirsten
Views: 142
Answered by Lucas Kirsten on 2019-07-03 22:48

I managed to reach an easy solution. The key point is that the gradient calculation is actually a 2D filter; for more information, follow the link about the Laplacian kernel. That means the output of my network has to be filtered by the Laplacian kernel. To do so, I created an extra convolutional layer with fixed weights set exactly to the Laplacian kernel. The network then has two outputs (one being the desired image and the other being the gradient image), so it is also necessary to define a loss for each of them.

To make it clearer, here is an example. At the end of the network you will have something like this:

channels = 3  # number of channels of the network output
# net_output is the image produced by the network; its layer is named 'enhanced'
# so the loss dictionary below can refer to it by name
lap = Conv2D(channels, (3,3), padding='same', name='laplacian')(net_output)
model = Model(inputs=[net_input], outputs=[net_output, lap])
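For completeness, net_input and net_output are assumed to come from the actual Unet; a minimal stand-in (a hypothetical sketch, not the author's architecture), whose final layer is named 'enhanced' so the loss dictionary below can address it, could look like this:

from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model

net_input = Input(shape=(16, 16, 3))
x = Conv2D(32, (3, 3), padding='same', activation='relu')(net_input)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(32, (3, 3), padding='same', activation='relu')(x)
x = UpSampling2D((2, 2))(x)
# naming the final layer 'enhanced' is what the loss dictionary relies on
net_output = Conv2D(3, (3, 3), padding='same', activation='sigmoid', name='enhanced')(x)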

Define how you want to compute the loss for each output:

# losses for the enhanced image output and the laplacian output
losses = {
    "enhanced": "mse",
    "laplacian": "mse"
}
lossWeights = {"enhanced": 1.0, "laplacian": 0.6}

Compile the model:

model.compile(optimizer=Adam(), loss=losses, loss_weights=lossWeights)

Define the Laplacian kernel, apply its values to the weights of the convolutional layer above, and set trainable to False (so they will not be updated):

# 3x3 laplacian kernel
lap_2d = np.asarray([[1,  1, 1],
                     [1, -8, 1],
                     [1,  1, 1]], dtype=np.float32)

# Keras Conv2D weights have shape (3, 3, in_channels, out_channels);
# put the laplacian on the "diagonal" so each output channel is the
# laplacian of the corresponding input channel
l = np.zeros((3, 3, channels, channels), dtype=np.float32)
for c in range(channels):
    l[:, :, c, c] = lap_2d

bias = np.zeros(channels, dtype=np.float32)
model.get_layer('laplacian').set_weights([l, bias])
model.get_layer('laplacian').trainable = False
# note: if the model has already been compiled, compile it again after
# changing `trainable` so the freeze actually takes effect

When training, remember that you need both ground truth values:

model.fit(x=X, y={"enhanced": y_out, "laplacian": y_lap})
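Here y_lap is assumed to be the Laplacian of the target images, precomputed once before training. A sketch with a hypothetical helper (note that cv2's default border handling differs slightly from the zero-padded Conv2D layer at image edges) could be:

import cv2
import numpy as np

def make_laplacian_targets(images):
    # cv2.Laplacian filters each channel independently, so one call per image is enough
    return np.stack([cv2.Laplacian(img.astype(np.float64), cv2.CV_64F)
                     for img in images]).astype(np.float32)

y_lap = make_laplacian_targets(y_out)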

Note: do not use BatchNormalization layers! If you do, the weights of the laplacian layer will get updated!
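As an optional sanity check (not from the original answer), you can confirm after training that the frozen kernel really was left untouched:

w_after = model.get_layer('laplacian').get_weights()[0]
assert np.allclose(w_after, l), "the laplacian kernel was updated during training"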