In the last decade several authors have advocated the use of interval probabilities as an
alternative to Bayesian models in reliability problems.
The basic idea of this approach is to start from lower and upper bounds on functions of the
random variables describing the failure probabilities or rates of the components of a system, and
then to derive from these bounds for the failure probability of the system as a whole. The
advantage of such bounds is that they involve no classical or Bayesian confidence probabilities:
one is 100% certain that the true failure probability lies within the derived bounds.
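For illustration, consider a minimal sketch (an assumed example, not one taken from the source):
a series system of two independent components whose failure probabilities p_1, p_2 are known
only up to intervals. Since the system failure probability is monotone increasing in each p_i,
the component bounds propagate directly:

\[
P_s = 1 - (1 - p_1)(1 - p_2),
\qquad
p_i \in [\underline{p}_i, \overline{p}_i]
\;\Longrightarrow\;
P_s \in \bigl[\, 1 - (1 - \underline{p}_1)(1 - \underline{p}_2),\;
               1 - (1 - \overline{p}_1)(1 - \overline{p}_2) \,\bigr].
\]

For instance, p_1, p_2 \in [0.01, 0.05] yields P_s \in [0.0199, 0.0975], and this enclosure is
asserted with certainty rather than at a confidence level.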
The basic problem in reliability, finding the failure probability, can be seen as a process of
collecting information: one starts from total ignorance and, by gathering more and more
information, arrives at ever more specific estimates of the probability.
Using the mathematical definitions of entropy and information, it is shown here that the method of
interval probabilities requires an infinite amount of information, a prerequisite that cannot be
fulfilled in any halfway realistic problem.
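The core of the argument can be sketched as follows (an illustrative reconstruction using the
odds form of Bayes' theorem, assumed here rather than taken from the source). Let A denote the
statement that the failure probability lies in the asserted interval. Starting from total
ignorance, i.e. a non-degenerate prior 0 < P(A) < 1, the evidence E supplied by observation acts
through the likelihood ratio:

\[
\frac{P(A \mid E)}{1 - P(A \mid E)}
= \frac{P(E \mid A)}{P(E \mid \bar{A})} \cdot \frac{P(A)}{1 - P(A)},
\qquad
P(A \mid E) \to 1
\;\Longrightarrow\;
\log_2 \frac{P(E \mid A)}{P(E \mid \bar{A})} \to \infty .
\]

To be 100% certain that the failure probability lies within given bounds, the log-likelihood
ratio, i.e. the information carried by the evidence measured in bits, must diverge; no finite
body of data can supply it.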