
How to minimize the error with respect to a given dataset

Published on 2020-11-28 19:31:39

Let's assume a function

f(x,y) = z 

Now I want to choose x so that the output of f matches real data, while y decreases from 1 towards zero in equidistant steps. Inside f, the output is calculated by solving a set of differential equations.

How can I select x so that the error with respect to the real outputs is as small as possible? Assume I know a set of z values, namely

f(x,1) = z_1
f(x,0.9) = z_2
f(x,0.8) = z_3 

Now I want to find the x for which the error with respect to the real data z_1, z_2, z_3 is minimal. How can one do this?

Questioner: CB95
Daan R 2020-11-29 06:15:22

A common method of optimization is least-squares fitting, in which you try to find params such that the sum of squares, sum_i (f(params, xdata_i) - ydata_i)^2, is minimized for given xdata and ydata. In your case, params would be x, the xdata_i would be 1, 0.9 and 0.8, and the ydata_i would be z_1, z_2 and z_3.
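
For illustration, here is a minimal sketch of that objective in Python. The model f below is only a placeholder (in your case it would wrap the solution of your differential equations), and the z values are made up:

    import numpy as np

    # Placeholder model f(x, y); in the question this would be the result of
    # integrating the differential equations for the given x and y.
    def f(x, y):
        return x * y**2

    y_data = np.array([1.0, 0.9, 0.8])    # the y values at which z was measured
    z_data = np.array([2.0, 1.65, 1.30])  # z_1, z_2, z_3 (made-up numbers)

    def sum_of_squares(x):
        # sum over i of (f(x, y_i) - z_i)^2
        residuals = f(x, y_data) - z_data
        return np.sum(residuals**2)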

You should look at the scipy.optimize package; it is made for finding the parameters of a function. I think this page gives quite a good example of how to use it.
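
As a rough sketch, reusing the f, y_data and z_data from above, scipy.optimize.least_squares is one routine that can minimize the sum of squared residuals over x, starting from an initial guess (curve_fit or minimize would also work):

    from scipy.optimize import least_squares

    def residuals(x):
        # least_squares minimizes the sum of squares of this residual vector
        return f(x, y_data) - z_data

    result = least_squares(residuals, x0=1.0)  # x0 is an initial guess for x
    print(result.x)     # fitted x (as a length-1 array)
    print(result.cost)  # 0.5 * sum of squared residuals at the optimum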