If the user function is a negative log-likelihood function, it must again be correctly normalized, but the reasons and the ensuing problems in this case are quite different from the chi-square case. The likelihood function takes the form (see [5], p. 155):
\[ F(a) = -\sum_{i=1}^{n} \ln f(x_i, a) \]
where each \( x_i \) represents in general a vector of observations, the \( a \) are the free parameters of the fit, and the function \( f \) represents the hypothesis to be fitted. This function \( f \) must be normalized:
\[ \int f(x, a)\, dx = \text{constant, independent of } a , \]
that is, the integral of \( f \) over all observation space \( x \) must be independent of the fit parameters \( a \).
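A minimal sketch of such an FCN, using a properly normalized Gaussian as the hypothesis \( f \); the data values and the names `f` and `fcn` are illustrative, not part of Minuit itself:

```python
import math

# Hypothetical one-dimensional observations.
data = [1.2, 0.8, 1.5, 0.9, 1.1]

def f(x, mu, sigma):
    # Normalized Gaussian density: its integral over x is 1
    # for any (mu, sigma), i.e. independent of the fit parameters.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def fcn(mu, sigma):
    # Negative log-likelihood: F(a) = -sum_i ln f(x_i, a)
    return -sum(math.log(f(x, mu, sigma)) for x in data)
```

A minimizer would then vary `mu` and `sigma` to make `fcn` as small as possible; because `f` is normalized, the minimum lies at finite parameter values.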
The consequence of not normalizing f properly is usually that the fit simply will not converge, with some parameters running away to infinity. Strangely enough, the value of the normalization constant itself does not affect the fitted parameter values or their errors: the logarithm turns a multiplicative constant into an additive one, which merely shifts the whole log-likelihood curve without changing the location or shape of its minimum. In fact, the actual value of the likelihood at the minimum is quite meaningless (unlike the chi-square value) and even depends on the units in which the observation space x is expressed. The meaningful quantity is the difference in log-likelihood between two points in parameter space, which is dimensionless.
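The invariance under a constant (parameter-independent) scale factor can be checked directly. The following sketch, with illustrative data and a Gaussian of unit width, shows that multiplying f by an arbitrary positive constant shifts the whole log-likelihood curve but leaves differences between parameter points unchanged:

```python
import math

data = [0.2, -0.4, 0.1, 0.5, -0.1]

def nll(mu, scale):
    # Gaussian with sigma = 1, multiplied by an arbitrary constant 'scale'.
    # ln(scale * g) = ln(scale) + ln(g), so 'scale' only adds a constant
    # (-n * ln(scale)) to the whole curve.
    return -sum(math.log(scale * math.exp(-0.5 * (x - mu) ** 2)
                         / math.sqrt(2.0 * math.pi)) for x in data)

# Differences between two parameter points are independent of the scale:
d1 = nll(0.5, 1.0) - nll(0.0, 1.0)
d2 = nll(0.5, 7.0) - nll(0.0, 7.0)
```

Here `d1` and `d2` agree to rounding error, even though the two curves sit at very different absolute levels.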
For likelihood fits, the value UP=0.5 corresponds to one-standard-deviation errors. Or, alternatively, F may be defined as
\[ F(a) = -2 \sum_{i=1}^{n} \ln f(x_i, a) , \]
in which case differences in F have the same meaning as for chi-square and UP=1.0 is appropriate. The two different ways of introducing the factor of 2 are quite equivalent in Minuit, and although most people seem to use UP=0.5, it is perhaps more logical to put the factor of 2 directly into FCN.
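The equivalence of the two conventions can be seen in a toy one-parameter Gaussian fit (sigma fixed to 1; data illustrative). Additive constants in ln f are dropped, since only differences in F matter; for this model the one-standard-deviation point is at mu_hat + 1/sqrt(n), where F rises by exactly UP above its minimum in either convention:

```python
import math

data = [0.3, -0.2, 0.1, 0.4, 0.0]
n = len(data)
mu_hat = sum(data) / n  # minimizes both definitions of F

def f_half(mu):
    # F = -sum_i ln f(x_i, mu), mu-independent constants dropped.
    # Use with UP = 0.5.
    return sum(0.5 * (x - mu) ** 2 for x in data)

def f_one(mu):
    # F = -2 sum_i ln f(x_i, mu): the factor of 2 put directly into FCN.
    # Use with UP = 1.0.
    return 2.0 * f_half(mu)

# Rise of F at one standard deviation from the minimum:
sigma_mu = 1.0 / math.sqrt(n)
rise_half = f_half(mu_hat + sigma_mu) - f_half(mu_hat)  # equals 0.5
rise_one = f_one(mu_hat + sigma_mu) - f_one(mu_hat)     # equals 1.0
```

Either way the reported parameter error is the same; what matters is that UP matches the factor used in FCN.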