python - How to prevent memory use growth when updating weights and biases in a PyTorch model

Posted on 2020-11-25 07:54:22

I am trying to build a VGG16 model with PyTorch in order to export it to ONNX. I want to force my own weights and biases into the model, but during the process my computer quickly runs out of memory.

Here is what I am doing (this is only a test; in the real version I read the weights and biases from a set of files). This example simply forces all values to 0.5:

# Create empty VGG16 model (random weights)
from torchvision import models
from torchsummary import summary

vgg16 = models.vgg16()
# the structure is: vgg16.__dict__
summary(vgg16, (3, 224, 224))

# convolutional layers
for layer in vgg16.features:
    print()
    print(layer)
    if (hasattr(layer,'weight')):
        dim = layer.weight.shape
        print(dim)
        print(str(dim[0]*(dim[1]*dim[2]*dim[3]+1))+' params')

        # Replace the weights and biases
        for i in range (dim[0]):
            layer.bias[i] = 0.5
            for j in range (dim[1]):
                for k in range (dim[2]):
                    for l in range (dim[3]):
                        layer.weight[i][j][k][l] = 0.5

# Dense layers
for layer in vgg16.classifier:
    print()
    print(layer)
    if (hasattr(layer,'weight')):
        dim = layer.weight.shape
        print(str(dim)+' --> '+str(dim[0]*(dim[1]+1))+' params')
        for i in range(dim[0]):
            layer.bias[i] = 0.5
            for j in range(dim[1]):
                layer.weight[i][j] = 0.5

When I watch my computer's memory usage, it grows linearly and saturates the 16 GB of RAM while the first dense layer is being processed. Then Python crashes...

Is there a better way to do this, keeping in mind that I want to export the model to ONNX afterwards? Thanks for your help.

Questioner: Fabrice Auzanneau

Answered by Poe Dator, 2020-11-25 18:00:42

The memory growth is caused by autograd tracking a gradient for every individual weight and bias change. Try setting the `.requires_grad` attribute to `False` before the updates and restoring it afterwards. Example:

for layer in vgg16.features:
    print()
    print(layer)
    if (hasattr(layer,'weight')):
        
        # suppress .requires_grad
        layer.bias.requires_grad = False
        layer.weight.requires_grad = False
        
        dim = layer.weight.shape
        print(dim)
        print(str(dim[0]*(dim[1]*dim[2]*dim[3]+1))+' params')

        # Replace the weights and biases
        for i in range (dim[0]):
            layer.bias[i] = 0.5
            for j in range (dim[1]):
                for k in range (dim[2]):
                    for l in range (dim[3]):
                        layer.weight[i][j][k][l] = 0.5
        
        # restore .requires_grad
        layer.bias.requires_grad = True
        layer.weight.requires_grad = True