THE STANDARD DEVIATION METHOD - DATA ANALYSIS BY CLASSICAL MEANS AND BY NEURAL NETWORKS
The Standard Deviation Method determines particle sizes, such as air-bubble sizes in a fermentation bioreactor. The transmission coefficient of an ultrasound beam through a gassy liquid is measured repeatedly. Because the bubbles move into random positions between measurements, the measurements show a scatter whose standard deviation depends on the bubble size. The precise relationship between the measured standard deviation, the transmission and the particle size has been obtained from a set of computer-simulated data: in the computer model, the size and transmission are specified in advance, and the standard deviation is calculated. To evaluate experimental data, the inverse relationship is needed, from the measured parameters (transmission coefficient and standard deviation) to the unknown parameter (particle size). Two very different approaches to constructing this mapping are examined and compared:
- a classical series-expansion fitting procedure;
- training of a neural network.
The particle size corresponding to real-time experimental data is then determined by:
- an iterative procedure applied to the best-fit series expansion;
- direct input to the neural network.
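The classical route described in the abstract (fit a series expansion to forward-model data, then invert it iteratively for each measurement) can be sketched as follows. This is a minimal illustration, not the report's implementation: the forward relation `forward_std`, the expansion orders, the parameter ranges and the bisection inverter are all invented assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_std(d, T):
    # HYPOTHETICAL forward model: scatter grows with bubble size d and
    # varies with transmission T (monotone in d, so inversion is well posed).
    # The report derives its relation from computer-simulated ultrasound data.
    return 0.05 * d * (1.2 - T) + 0.02 * d

# Stand-in for the computer-simulated data set: size and transmission are
# specified in advance, and the standard deviation is calculated.
d = rng.uniform(0.5, 3.0, 500)   # particle size (arbitrary units)
T = rng.uniform(0.2, 0.9, 500)   # transmission coefficient
s = forward_std(d, T)            # calculated standard deviation

# Best-fit series expansion  s ~ sum_ij c_ij * d**i * T**j,  i, j = 0..2
# (orders chosen arbitrarily for the sketch).
A = np.stack([d**i * T**j for i in range(3) for j in range(3)], axis=1)
c, *_ = np.linalg.lstsq(A, s, rcond=None)

def fitted_std(dq, Tq):
    # Evaluate the fitted expansion at one (size, transmission) point.
    basis = np.array([dq**i * Tq**j for i in range(3) for j in range(3)])
    return float(basis @ c)

def invert_size(Tq, sq, lo=0.5, hi=3.0, iters=60):
    # Iterative inversion of the fitted expansion: bisection on the size,
    # valid because fitted_std is monotone increasing in d on this range.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fitted_std(mid, Tq) < sq:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The neural-network route of the report avoids the iterative step: a network trained on the same simulated data with (transmission, standard deviation) as inputs and size as output returns the particle size directly from each real-time measurement pair.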
Bibliographic Reference: REPORT: LRP 384/89 (1989) 19 PP. AVAILABLE FROM CONFEDERATION SUISSE, CENTRE DE RECHERCHES EN PHYSIQUE DES PLASMAS, ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE, 21 AVENUE DES BAINS, 1007 LAUSANNE (CH)
Record Number: 1989128036200 / Last updated on: 1990-11-09
Available languages: en