Informationsgewinnung und -verarbeitung in nichtlinearen dynamischen Systemen (German Ph.D. Thesis)
by Christoph Arndt, Shaker-Verlag Aachen, ISBN 3-8265-2304-0
Description in English: Production and Processing of Information in Nonlinear Dynamic Systems
In order to describe information in dynamical systems, it is necessary to examine the mathematical formulations of information and their results in practical applications. This, however, requires an understanding of the information measures and their relations. The thesis therefore begins with an introduction to the information measures of Hartley, Shannon, Kullback and Kolmogorov, shows the connections between them, and derives their results for Gaussian distribution densities. All these measures are based on different postulates and produce scalar measures of information, even though the observed system is multi-dimensional. These measures are therefore amalgamations of the information contained in the dynamical system.
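To illustrate how such scalar measures compress a density into a single number, the following sketch (illustrative only, not taken from the thesis; the function names are my own) evaluates the closed-form Shannon differential entropy and Kullback-Leibler divergence for one-dimensional Gaussian densities:

```python
import math

def gaussian_entropy(sigma2):
    """Shannon differential entropy of N(mu, sigma2) in nats: 0.5*ln(2*pi*e*sigma2)."""
    return 0.5 * math.log(2.0 * math.pi * math.e * sigma2)

def gaussian_kl(mu0, s0, mu1, s1):
    """Kullback-Leibler divergence KL( N(mu0,s0) || N(mu1,s1) ), s0/s1 variances."""
    return 0.5 * (math.log(s1 / s0) + (s0 + (mu0 - mu1) ** 2) / s1 - 1.0)

# Entropy grows with the variance; the KL divergence vanishes only for
# identical densities, so it measures the information lost by a mismatch.
print(gaussian_entropy(1.0))             # ≈ 1.4189 nats
print(gaussian_kl(0.0, 1.0, 0.0, 1.0))   # 0.0
```

Both quantities are single scalars regardless of how many state variables the underlying system has, which is exactly the "amalgamation" property described above.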
The next part focuses on the partitioning of the information among the different variables of the system, which can only be described by matrices. Such multi-dimensional information measures are given by the variance bounds: the well-known Cramér-Rao bound and various other bounds due to Barankin, Bhattacharyya and Kiefer-Wolfowitz. All these bounds contain the Cramér-Rao bound (the inverse of Fisher's information) as the lowest variance bound. Furthermore, Fisher's information can be obtained from the scalar information measures, so there are direct relations between the scalar amalgamations of information and the information described by matrices. The last part of the second chapter examines the Cramér-Rao bound for biased and unbiased estimators, and it is found that, because of the bias, the variance of a biased estimator can be smaller than the variance of an unbiased estimator.
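A minimal numerical sketch of the Cramér-Rao bound (my own example, not from the thesis): for n independent samples from N(mu, sigma2), Fisher's information for mu is n/sigma2, so no unbiased estimator can have variance below sigma2/n. The sample mean attains this bound, which a small Monte Carlo run confirms:

```python
import random
import statistics

def crb_gaussian_mean(sigma2, n):
    """Cramér-Rao bound for the mean of N(mu, sigma2) from n i.i.d. samples:
    Fisher's information is n/sigma2, so the bound is its inverse, sigma2/n."""
    return sigma2 / n

random.seed(0)
sigma2, n, trials = 4.0, 25, 2000
# The sample mean is unbiased and attains the bound for Gaussian data.
estimates = [statistics.fmean(random.gauss(0.0, sigma2 ** 0.5) for _ in range(n))
             for _ in range(trials)]
var_hat = statistics.pvariance(estimates)
print(crb_gaussian_mean(sigma2, n))  # 0.16
print(var_hat)                       # empirical variance, close to 0.16
```

For a vector-valued parameter the bound becomes a matrix inequality against the inverse Fisher information matrix, which is the matrix-valued view of information discussed above.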
A first application of the measures of information is the comparison of the concepts of data integration and data fusion. Both concepts use the same number of components in the measurement vector, but in data integration each component is evaluated separately, while data fusion uses the mutual information between the components of the measurement vector to increase the measurement information and thereby improve the state estimate. These advantages of data fusion over data integration can be shown by means of the information measures.
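The benefit of exploiting the statistical relations between measurement components can be sketched with a simple two-sensor example (my own illustration; the thesis argues via information measures, while this sketch uses estimator variances). Evaluating each measurement separately and averaging ignores their relative accuracy, whereas a fused, inverse-variance-weighted combination yields a lower estimation variance:

```python
import random
import statistics

def integrate(z1, z2):
    """Data-integration-style combination (illustrative): equal weights,
    each measurement treated on its own, accuracies ignored."""
    return 0.5 * (z1 + z2)

def fuse(z1, z2, s1, s2):
    """Data-fusion-style combination (illustrative): inverse-variance weights,
    the minimum-variance combination for independent Gaussian noise."""
    w1, w2 = 1.0 / s1, 1.0 / s2
    return (w1 * z1 + w2 * z2) / (w1 + w2)

random.seed(1)
x, s1, s2 = 1.0, 0.5, 2.0          # true state and the two noise variances
err_int, err_fus = [], []
for _ in range(5000):
    z1 = x + random.gauss(0.0, s1 ** 0.5)
    z2 = x + random.gauss(0.0, s2 ** 0.5)
    err_int.append(integrate(z1, z2) - x)
    err_fus.append(fuse(z1, z2, s1, s2) - x)
print(statistics.pvariance(err_int))  # ≈ (s1 + s2) / 4      = 0.625
print(statistics.pvariance(err_fus))  # ≈ s1*s2 / (s1 + s2)  = 0.4
```

The fused estimate is strictly better whenever the components differ in accuracy or are statistically dependent, which is the advantage the information measures make quantitative.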
The second application of the information measures deals with the nonlinear filtering problem. An exact solution is only obtained when the complete conditional distribution density of the desired state space variables is computed. This, however, is not possible in practical applications, where the distribution densities are approximated by their second-order moments, i.e. by Gaussian distribution densities. These approximations are optimized by minimizing the Kullback-Leibler distance. The unavoidable approximations, together with the linearization of nonlinear system and/or observation functions, lead to a loss of information which can be compensated, at least partially, by pseudoredundant measurements.
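A standard fact underlying this kind of approximation: among all Gaussian densities q, the one minimizing the Kullback-Leibler distance KL(p || q) to a given density p is the moment-matched Gaussian, i.e. the one sharing the mean and variance of p. A minimal sketch (my own, not the thesis's computation) for a bimodal Gaussian mixture p:

```python
# Two-component Gaussian mixture p(x): weights, means, variances.
w  = [0.5, 0.5]
mu = [-2.0, 2.0]
s  = [1.0, 1.0]

# The Gaussian q minimizing KL(p || q) matches the first two moments of p:
# mean_p = sum_i w_i mu_i,  var_p = sum_i w_i (s_i + mu_i^2) - mean_p^2.
mean_p = sum(wi * mi for wi, mi in zip(w, mu))
var_p = sum(wi * (si + mi ** 2) for wi, mi, si in zip(w, mu, s)) - mean_p ** 2
print(mean_p, var_p)  # 0.0 5.0
```

The single broad Gaussian N(0, 5) necessarily discards the bimodal structure of p; this discarded structure is exactly the kind of information loss that the pseudoredundant measurements introduced next are meant to partially recover.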
The new concept of pseudoredundant measurements is then introduced and described. Pseudoredundant measurements extend the original measurement vector with additional components built from the components of that vector, and therefore provide additional information for the estimator. They are theoretically redundant measurements, as they do not provide any new information as long as one deals with the complete distribution densities; the measurement information is merely shifted into another form by the pseudoredundant measurements. Nevertheless, in practical applications the approximations mentioned above lead to a loss of information in the estimation procedure, so the transformed information of the pseudoredundant measurements increases the measurement information and thus reduces the variance of the state space estimates. To demonstrate this idea, the I/Q demodulation of bandpass signals is used, where the original measurements are the real and imaginary parts of the obtained complex phasor. These measurements (contaminated with Gaussian noise) are extended by a phase and an amplitude computed from the noisy real and imaginary parts of the phasor. The improvement is then verified by Monte Carlo simulations in which different combinations of the four measurements are observed and their information is computed and simulated.
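The construction of the extended measurement vector in this I/Q example can be sketched as follows (a minimal illustration assuming one noisy phasor sample; the full Monte Carlo study and the estimator itself are in the thesis and not reproduced here):

```python
import math
import random

def extend_measurements(i_meas, q_meas):
    """Build the pseudoredundant measurement vector: the noisy in-phase and
    quadrature components are extended by an amplitude and a phase computed
    from those same noisy components -- no new physical measurement is taken."""
    amplitude = math.hypot(i_meas, q_meas)
    phase = math.atan2(q_meas, i_meas)
    return [i_meas, q_meas, amplitude, phase]

random.seed(2)
a_true, phi_true, noise_std = 1.0, 0.3, 0.05   # assumed phasor and noise level
i_true = a_true * math.cos(phi_true)
q_true = a_true * math.sin(phi_true)
# One noisy I/Q sample, extended to the four measurements used in the study.
z = extend_measurements(i_true + random.gauss(0.0, noise_std),
                        q_true + random.gauss(0.0, noise_std))
print(z)  # [I, Q, amplitude, phase]
```

With the complete densities, the last two components carry no new information; only under the Gaussian/linearization approximations of a practical estimator does feeding them in alongside I and Q reduce the estimation variance.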
For information please contact: