This article is reproduced from stackoverflow.com.
Tags: keras, python, tensorflow, activation-function

Restrict the sum of outputs in a neural network regression (Keras)

Published on 2020-03-31 23:01:37

I'm predicting 7 targets, which are ratios of one value, so for each sample the sum of all predicted values should be 1. Apart from using softmax at the output (which seems obviously incorrect), I just can't figure out other ways to restrict the sum of all predicted outputs to be 1.
Thanks for any suggestions.

from keras.models import Model
from keras.layers import Input, Dense, Dropout, PReLU
from keras.callbacks import EarlyStopping
from keras.optimizers import Adam

input_x = Input(shape=(input_size,))
output = Dense(512, activation=PReLU())(input_x)
output = Dropout(0.5)(output)
output = Dense(512, activation=PReLU())(output)
output = Dropout(0.5)(output)
output = Dense(16, activation=PReLU())(output)
output = Dropout(0.3)(output)
outputs = Dense(output_size, activation='softmax')(output)
#outputs = [Dense(1, activation=PReLU())(output) for i in range(output_size)] #multioutput nn

nn = Model(inputs=input_x, outputs=outputs)
es = EarlyStopping(monitor='val_loss',min_delta=0,patience=10,verbose=1, mode='auto')
opt=Adam(lr=0.001, decay=1-0.995)
nn.compile(loss='mean_absolute_error', optimizer=opt)
history = nn.fit(X, Y, validation_data = (X_t, Y_t), epochs=100, verbose=1, callbacks=[es])

Example of targets:

[image: table of example target ratios]

These are all ratios of one feature, so each row sums to 1.
For example, the feature 'Total' = 100 points, A = 25 points, B = 25 points, and all the others are 10 points each, so my 7 target ratios will be 0.25/0.25/0.1/0.1/0.1/0.1/0.1.

I need to train on and predict such ratios, so that in the future, knowing 'Total', we can restore the points from the predicted ratios.
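
For concreteness, here is that conversion as a small NumPy sketch (the numbers are just the illustrative ones above):

import numpy as np

points = np.array([25, 25, 10, 10, 10, 10, 10], dtype=float)
total = points.sum()        # 'Total' = 100
ratios = points / total     # targets: 0.25, 0.25, 0.1, 0.1, 0.1, 0.1, 0.1

# later, knowing 'Total' and the predicted ratios, the points can be restored
restored = ratios * total   # -> 25, 25, 10, 10, 10, 10, 10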

Questioner: Oleksii
Viewed: 98

Answer by Tomasz Gandor, 2020-02-06 06:47

I think I understand your motivation, and also why "softmax won't cut it".

This is because softmax doesn't scale linearly, so:

>>> from scipy.special import softmax
>>> softmax([1, 2, 3, 4])
array([0.0320586 , 0.08714432, 0.23688282, 0.64391426])
>>> softmax([1, 2, 3, 4]) * 10
array([0.32058603, 0.87144319, 2.36882818, 6.4391426 ])

Which looks nothing like the original array.
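
Plain division by the sum, on the other hand, preserves the proportions at any scale (a quick check with NumPy):

>>> import numpy as np
>>> x = np.array([1., 2., 3., 4.])
>>> x / x.sum()
array([0.1, 0.2, 0.3, 0.4])
>>> (10 * x) / (10 * x).sum()
array([0.1, 0.2, 0.3, 0.4])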

Don't dismiss softmax too easily though - it can handle special situations like negative values, zeros, or a zero sum of the pre-activation signal... But if you want the final regression to be normalized to one, and you expect the results to be non-negative, you can simply divide the output by its sum:

# additional imports needed for the normalization layer
from keras.layers import Lambda
from keras import backend as K

input_x = Input(shape=(input_size,))
output = Dense(512, activation=PReLU())(input_x)
output = Dropout(0.5)(output)
output = Dense(512, activation=PReLU())(output)
output = Dropout(0.5)(output)
output = Dense(16, activation=PReLU())(output)
output = Dropout(0.3)(output)
outputs = Dense(output_size, activation='relu')(output)
# normalize each sample's outputs so they sum to 1 (sum over the last axis, not over the batch)
outputs = Lambda(lambda x: x / K.sum(x, axis=-1, keepdims=True))(outputs)

nn = Model(inputs=input_x, outputs=outputs)

The Dense layer of course needs a different activation than 'softmax' (relu or even linear is OK).
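
As a quick sanity check (a minimal sketch, assuming nn has been built as above and input_size is defined), every predicted row should now sum to 1:

import numpy as np

dummy = np.random.rand(5, input_size).astype('float32')  # 5 random samples
preds = nn.predict(dummy)
print(preds.sum(axis=1))  # each entry should be ~1.0, up to floating-point error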