
Camera Observation and Modelling of 4D Tracer Dispersion in the Atmosphere

Periodic Reporting for period 3 - COMTESSA (Camera Observation and Modelling of 4D Tracer Dispersion in the Atmosphere)

Reporting period: 2018-11-01 to 2020-04-30

Turbulence is one of the long-standing big challenges in the atmospheric sciences. Kinetic energy produced at the largest atmospheric scales cascades down to the molecular scale where it dissipates, as described by L. F. Richardson’s (1922) poem: “Big whirls have little whirls that feed on their velocity, and little whirls have lesser whirls and so on to viscosity – in the molecular sense.” A related aspect of turbulence is its effect on tracer dispersion. Turbulence controls the dilution of pollution emitted into the atmospheric boundary layer (ABL). It determines how quickly tracers released at the surface are transported away – and this can limit the exchange process itself, with profound influence on fluxes of, e.g. water vapour or carbon dioxide (CO2).

A substance (a “passive scalar”) injected into a turbulent flow exhibits complex dynamical behaviour. Its distribution is chaotic, and the probability density function (PDF) of the scalar concentration field exhibits large fluctuations that can depart substantially from Gaussian behaviour. While the mean of the PDF is usually accessible to measurements, little is known about its higher moments (variance, skewness, kurtosis). Yet the higher moments are crucial whenever the relationship between the concentration fluctuations and their consequences is non-linear. For instance, toxicity, flammability and odour detection depend on exceedances of concentration thresholds. Non-linear chemical reactions are influenced by tracer fluctuations if the reaction and turbulence time scales are similar; for example, ozone formation in a pollution plume depends on how the plume mixes with the ambient air.
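The role of the higher moments can be illustrated with a short, purely synthetic sketch. A lognormal concentration PDF is an assumed idealization here, not a COMTESSA result; the point is that a threshold-exceedance probability depends on the tails of the PDF, which the mean alone cannot provide.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical concentration samples: a lognormal PDF is a common
# idealization for scalar fluctuations, used here purely for illustration.
c = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

m = c.mean()
var = c.var()
skew = ((c - m) ** 3).mean() / var ** 1.5
kurt = ((c - m) ** 4).mean() / var ** 2

# Threshold exceedance (relevant for toxicity, flammability, odour)
# depends on the tails of the PDF, not only on the mean.
p_exceed = (c > 3.0 * m).mean()

print(f"mean={m:.2f}  variance={var:.2f}  skewness={skew:.1f}  kurtosis={kurt:.0f}")
print(f"P(c > 3*mean) = {p_exceed:.3f}")
```

For this strongly skewed PDF, roughly 5% of samples exceed three times the mean, a figure that a Gaussian assumption with the same mean would misestimate.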

In field experiments, the concentration PDF has mainly been measured close to the ground; few data sets exist for higher altitudes. Based on these experiments, different functional forms of the concentration PDF have been proposed on purely empirical or partially theoretical grounds. They lack consistency, which is not surprising given the small number of campaigns that could investigate concentration fluctuations compared to the much larger number of experiments from which concentration means could be derived. Even for the means, understanding is poor for the stably stratified ABL (with downward-directed surface heat flux), where turbulence is weak and intermittent and air pollutants can sometimes accumulate to dangerous levels. Despite progress for the continuously turbulent stable boundary layer, the theory of turbulence and wave structure under intermittent conditions is not well developed. Even the definition of the (typically shallow) height of the stable ABL is problematic, and no single definition is universally accepted.

Dispersion modelling is limited by a lack of theoretical understanding as well as of experimental data. In past experiments, artificial tracers were released into the atmosphere and the resulting atmospheric concentrations were measured, mainly at discrete sampling locations. However, most of these experiments share two major shortcomings: 1) the data collected were typically sufficient to derive the mean of the concentration PDF but insufficient to resolve its higher moments; resolving them requires large data sets of concentration measurements at high resolution in both time and space. 2) There is a lack of experiments under highly stable conditions. In COMTESSA, we are executing a set of ground-breaking atmospheric tracer dispersion experiments to collect unprecedented four-dimensional (4D) tracer concentration data. These experiments are combined with state-of-the-art data analysis and modelling of turbulent dispersion, resulting in the development of new model parameterizations.

The experiments observe sulphur dioxide (SO2) puffs and plumes (released artificially as well as from existing strong SO2 sources) with about nine simultaneously measuring cameras equipped with ultraviolet (UV) and infrared (IR) filters. This allows tomographic imaging of the plume at high resolution in time and 3D space. A state-of-the-art optical flow code will later be used to determine the velocity vector field of the plume, and the high spatial and temporal resolution will permit small-scale (meter-sized) eddies to be resolved. This measurement method is in itself completely novel.
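The project's optical flow code itself is not described here, but the underlying idea can be sketched with a minimal gradient-based (Lucas-Kanade-type) estimate of a single translation between two synthetic frames; the actual analysis extends such methods to dense, three-dimensional wind vector fields.

```python
import numpy as np

def global_flow(im1, im2):
    """Least-squares estimate of a single translation between two frames,
    the simplest instance of a gradient-based (Lucas-Kanade-type) flow."""
    gy, gx = np.gradient(im1)               # spatial gradients (rows, cols)
    it = im2 - im1                          # temporal difference
    A = np.stack([gy.ravel(), gx.ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -it.ravel(), rcond=None)
    return d                                # (dy, dx) in pixels

# Synthetic smooth "puff": a Gaussian blob shifted by one pixel downwards.
y, x = np.mgrid[0:64, 0:64]
blob = np.exp(-((y - 32.0) ** 2 + (x - 32.0) ** 2) / (2 * 6.0 ** 2))
shifted = np.roll(blob, 1, axis=0)

dy, dx = global_flow(blob, shifted)
print(f"estimated shift: dy={dy:.2f}, dx={dx:.2f}")  # close to (1, 0)
```

Real plume images contain many independently moving eddies, so the practical codes solve such systems locally, per pixel neighbourhood, rather than once globally.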

The particular objectives of COMTESSA are to
1. build improved UV and IR cameras.
2. further develop the retrieval of SO2 from UV and IR camera observations.
3. develop a tomographic algorithm to reconstruct the 3D SO2 field based on simultaneous observations with a large number (ca. 9) of UV and IR cameras.
4. extend optical flow analysis of camera observations to three dimensions to retrieve wind vector fields.
5. test retrieval sensitivities, the optical flow analysis and the tomography algorithm in “virtual campaigns” by simulating artificial camera pictures, using a dispersion model and a 3D radiative transfer model.
6. conduct measurement campaigns studying suitable SO2 sources of opportunity (smelters, power plants) and perform controlled releases of SO2 under selected conditions.
7. measure the relative dispersion and meandering of instantaneous puffs under many different stability conditions to test the Richardson-Obukhov law under non-homogeneous conditions and estimate the value of the Richardson-Obukhov constant, a fundamental model parameter.
8. investigate which part of the meandering of a plume can be attributed to turbulence and which part is due to mesoscale disturbances, particularly in the stable boundary layer.
9. integrate the information obtained about the dispersion in the ABL in improved parameterizations for Lagrangian models of turbulent dispersion.
10. provide a data set of high order concentration moments to validate advanced numerical methods for simulating turbulent plume dispersion such as Large Eddy Simulation, Lagrangian particle and PDF methods and volumetric particle methods for concentration fluctuations under many stability conditions.
11. examine statistics of the measured and modelled velocity vector field and scalar concentration field and the relationship between the two.
12. investigate the two-point concentration structure function to validate recent theoretical advancements.
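As a purely illustrative sketch of objective 7: in the inertial subrange the Richardson-Obukhov law predicts sigma_rel^2(t) = C_r * eps * t^3, so the constant C_r can in principle be estimated by a linear fit of observed relative dispersion against eps * t^3. All numbers below are assumed for the illustration, not campaign measurements.

```python
import numpy as np

# Assumed, illustrative values (not campaign results): in the inertial
# subrange, sigma_rel^2(t) = C_r * eps * t^3 (Richardson-Obukhov law).
eps = 1e-3          # mean turbulent dissipation rate, m^2 s^-3 (assumed)
C_true = 0.5        # constant used to generate the synthetic data

rng = np.random.default_rng(1)
t = np.linspace(5.0, 60.0, 50)                                   # time, s
sigma2 = C_true * eps * t ** 3 * rng.lognormal(0.0, 0.05, t.size)

# Linear least-squares fit through the origin against x = eps * t^3
x = eps * t ** 3
C_est = np.sum(x * sigma2) / np.sum(x * x)
print(f"estimated Richardson-Obukhov constant: {C_est:.2f}")
```

In practice, the fit must be restricted to the time window in which the inertial-subrange scaling actually holds, which is exactly what the limited observation times of the 2017 campaign prevented.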

In the first half of the COMTESSA project, we developed six ultraviolet (UV) and three thermal infrared (TIR) cameras, set up a Large Eddy Simulation (LES) model, used a radiative transfer model to produce virtual camera pictures, and performed first tests with a tomography algorithm using artificial (virtual) campaign data. Most importantly, during summer 2017 we carried out the first measurement campaign; another one is currently being prepared and will take place in July 2018. In the following, we describe the work carried out in the five activity fields defined in the project proposal:

1) Improvement of cameras

The original plan was to purchase the UV and TIR SO2 cameras from Nicarnica Aviation, a daughter company of NILU and the sole vendor of such instruments. Unfortunately, F. Prata, a leading expert on UV/TIR camera development, and C. Bernardo, the engineer responsible for building the SO2 cameras, both left the company before we could purchase the cameras. It became clear that after their departure the company was not able to build the UV and IR cameras according to our specifications. We therefore decided to build the cameras ourselves, directly at NILU. Fortunately, we could sub-contract some of the hardware and software development to C. Bernardo's new company. As a result, the camera development was a close collaboration between NILU staff and C. Bernardo.

We built six UV SO2 camera systems. We chose a double-camera setup with two highly sensitive UV cameras (PCO.ultraviolet) with high-transmission band-pass filters centered at ~310 nm and ~330 nm, placed behind 25 or 12 mm UV lenses. The frame rate of the cameras ranges from 7.3 Hz (full resolution) to 27 Hz (v4 binning). A co-aligned spectrometer (AvaSpec-ULS2048x64, Avantes) is used in combination with large SO2-containing glass cells to calibrate the measurements. A mechanical shutter is built in for automated dark measurements. Auxiliary instruments built into the camera comprise a 10 MP visible camera; an accurate dual-axis digital inclinometer (absolute accuracy 0.02°, ±30° range), which can be set at two mounting positions; and a GPS receiver to determine the exact camera position and pose. We also built three TIR SO2 camera systems, each integrating three co-located IR cameras (Xenics Gobi-384-GigE) equipped with three filters (centered at 8.62 µm, 10.00 µm and 10.87 µm) mounted behind a 40 mm f/1 lens. Images can be recorded with a maximum frame rate of 84 Hz; co-adding of images (2, 4, ...) is done to improve the signal-to-noise ratio. For calibration of the IR cameras, a rotating black-body shutter is moved in front of the three camera lenses when the system temperature changes. Like the UV cameras, the TIR SO2 cameras contain three peripheral instruments: a visible camera, an inclinometer and a GPS receiver. For both the UV and the TIR cameras, system temperature and humidity are recorded continuously. A separate computer box for each camera contains a high-performance fanless embedded computer with a 1 TB solid-state disk, an AC/DC power supply and polarity protection, so that the system can be operated with 220 V mains as well as with 12 V battery power.
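Co-adding works because averaging N frames with independent noise reduces the noise standard deviation by roughly sqrt(N). A minimal synthetic sketch (all numbers illustrative, not instrument specifications):

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 100.0                      # hypothetical constant scene radiance
n_frames, shape = 4, (240, 320)

# Independent read-out noise per frame; co-adding averages it down.
frames = signal + rng.normal(0.0, 5.0, size=(n_frames, *shape))
coadded = frames.mean(axis=0)

print(f"single-frame noise : {frames[0].std():.2f}")
print(f"co-added ({n_frames}) noise: {coadded.std():.2f}")   # ~5 / sqrt(4) = 2.5
```

The price is temporal resolution: co-adding 4 frames at 84 Hz yields an effective 21 Hz, a trade-off chosen per experiment.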

Both camera systems are connected to a computer running software to control camera operation and record data. Due to the change of plans from buying to building the cameras, the hardware construction took longer than planned; it was finished in about month 19 of the project, just before the first campaign. The software for controlling the cameras was still rudimentary at that time but has been improved considerably since then. Now the different instruments are time-synchronized to UTC using the integrated GPS and controlled by the same computer programme. The GPS location is stored as binary raw data to enable post-processing that yields cm-accuracy relative positions of the cameras. Further work is needed on a black-body calibration tank for the TIR instrument. We intend to implement a few additional data acquisition and shutter control options and to make the software more user-friendly.

2) Plume tomography and virtual campaigns

Tomography was introduced by Radon in 1917. Classical medical tomography, based on Radon-transform methods, is however not applicable to the current setting with sparse measurements (only a few cameras are available). We instead focus on iterative solvers, both sequential (Kaczmarz, a.k.a. ART, and variants) and simultaneous (Landweber, Cimmino, SART, SIRT). We also used CGLS, but only for comparison in an idealized setting. For our tomography work, we use custom versions of the AIR Tools and ASTRA software packages. The uploaded figure shows a sample reconstruction of a virtual tracer plume simulated with LES (but at degraded resolution), using 50 iterations of the Kaczmarz method and 20 cameras symmetrically distributed around the vertical axis. Note that this number of cameras is larger than what COMTESSA can actually use, but small compared to other tomography applications.

We performed most simulations in a dedicated instance of the Amazon Web Services cloud (EC2). The instance provides 2x2500 GPGPU cores, yielding substantial acceleration with respect to the standard CPU-compiled algorithms. The quality of the reconstructions depends on the number of iterations: for noisy data, the iterates first approach the true solution but then diverge as noise is amplified, a behaviour known as semi-convergence. Stopping rules therefore terminate the iterations near the semi-convergence point.
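The sequential solvers mentioned above can be sketched on a toy system: the Kaczmarz (ART) iteration cyclically projects the current estimate onto the hyperplane of each measurement equation. The example below is an idealized, noise-free stand-in, not the AIR Tools/ASTRA implementation; with noisy data one would halt the sweeps early, near semi-convergence.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50):
    """Kaczmarz (ART): cyclically project the iterate onto the hyperplane
    of each measurement equation a_i . x = b_i.  With noisy data, a
    stopping rule (e.g. the discrepancy principle) would end the sweeps
    near semi-convergence instead of running a fixed number."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(m):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

# Tiny noise-free stand-in for the real problem: each row of A plays the
# role of one line-of-sight integral through the unknown field.
rng = np.random.default_rng(3)
x_true = rng.random(20)
A = rng.standard_normal((40, 20))
b = A @ x_true

x_rec = kaczmarz(A, b, n_sweeps=200)
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.1e}")
```

In the real sparse-view setting, the system is far larger and ill-posed, which is why solver choice, iteration count and camera geometry all matter for reconstruction quality.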

In contrast to medical tomography, where many images are obtained by rotating around an object, remote sensing with individual cameras leaves larger gaps between views. The camera locations can therefore have a substantial influence on the reconstruction. Using simulation experiments, optimal geometries can be estimated for a given set of restrictions. Unlike in medical tomography, the optimal geometry can be non-symmetric even when starting from a radially symmetric camera distribution. Results from the synthetic tomography experiments are used as guidance for placing cameras in the field.

A publication on COMTESSA synthetic tomography experiments is currently in preparation. The next step will be to use actual campaign data for tomography.

3) Measurement campaigns

The first COMTESSA campaign was carried out during the first three weeks of July 2017. At that time, six UV cameras were ready for measurements, while the three IR cameras were still under development and were not used. The UV cameras, too, were not yet fully operational: while the cameras themselves were functional, auxiliary data from some other instruments (e.g. GPS data) could not be stored because the control software was not yet available, and the instruments were therefore not yet time-synchronized. This has since been improved and should be ready for the next campaign. For the 2017 campaign, however, less accurate manual position measurements and time synchronization had to be used.

The release of SO2 gas required a location where we could set up a mast, safely store SO2 flasks and instruments outside, and handle the gas; the space therefore had to be closed off from the public. For setting up the cameras, we needed a reasonably flat area with a clear view of the release mast. We found a military testing ground northeast of the small city of Rena, Norway, in a remote mountain area, which seemed ideal for our purposes. We received permission to use this site from the Norwegian military (and permission from the local authorities to release the gas) but were restricted to three weeks in July when the site was not used by the military. Unfortunately, the weather was particularly bad, with very strong winds and rain during more than half of the period. Worse, there was not a single totally cloud-free period during the entire campaign. Since SO2 retrievals from the camera data work well only when no clouds disturb them, and the tomography requires observations in all directions, there was no “golden” day during the entire campaign. On 2-3 days, data from 3-4 simultaneously working cameras could be used for quantitative analysis, but this strongly limited our capacity to do tomography. Nevertheless, the campaign at least allowed us to test various aspects of our experimental methods, and some scientific analyses of the data were possible.

We set up three 10 m high towers equipped with eddy covariance systems to measure turbulent fluxes of sensible and latent heat, and momentum. To one of these masts, a pipe was attached through which SO2 gas was blown up to the top of the mast with a blower from an SO2 bottle placed on the ground. Bottle valves were opened and closed manually. We performed two types of experiments: 1) puffs, for which the valve was opened for only one second and then closed again; 2) plumes, for which the valve was opened for longer periods. In principle, more information on turbulent dispersion can be gained from the puff releases, but the SO2 amount released per puff was only about 1 g, limiting detectability of the gas to small regions close to the release tower. During the upcoming campaign, we will use a higher mast (extendable up to 60 m) and will release about 25 g of SO2 per puff. This will be facilitated by using up to three SO2 bottles simultaneously, warming the bottles to obtain higher pressure, and adding a container in which 25 g of SO2 can be stored temporarily and then released instantaneously to generate a puff. These upgrades are ready and will be tested in July 2018. They will allow plume observations much further downwind than in 2017.

In July 2017, we used different set-ups of the SO2 cameras around the release mast. We initially monitored the SO2 release from a close distance to confirm that there were no potentially hazardous leakages in the pipe system. The person opening the gas bottles wore a gas mask, without which they could have been in danger had SO2 leaked close to the ground; no leakages were found, though. For testing, we placed all cameras together to make directly comparable measurements of SO2. This allowed us to verify that all cameras worked properly and delivered comparable data.

During the experiment with the most suitable weather conditions, we placed the cameras in a semi-circle of ca. 160 m radius around a point 18 m downwind of the release mast. The cameras all pointed towards the same volume of air (roughly 40 m x 40 m x 20 m) and observed SO2 puffs travelling through this volume. As mentioned earlier, some cameras were always affected by clouds in the background, as there was no completely cloud-free day, and this limits the quantitative retrieval of SO2 and the tomographic reconstruction of the concentration PDF. However, we developed a simplified but robust tomographic method to retrieve the first moments of the concentration PDF (see next section).

Photographs taken during the measurement campaign and example videos of dispersing SO2 puffs and plumes can be found on the project website. A scientific paper describing the campaign and an analysis of a small subset of the data is nearly finished and will be submitted by the end of June 2018: Dinger et al. (2018), Characterising vertical turbulent dispersion by observing artificially released SO2 puffs with UV cameras, in preparation for Atmos. Meas. Tech.

4) Analysis of the campaign data

Both the camera and the eddy covariance data sets obtained during the July 2017 campaign have been screened and quality controlled. We plan to describe the data set in a data publication and make it publicly available. However, the screened data set is several dozen terabytes in size; to reduce this to a more manageable volume, the periods of potentially greatest interest to others need to be selected. This work was interrupted by the preparation for the 2018 campaign but will be continued in autumn 2018.

For determining turbulent dispersion parameters, we developed a simplified tomography method that works with only 3-4 cameras and is robust to small position or pointing errors. Using simple triangulation, we first determined the 3D position of the centre of mass of each puff at each moment in time (every 250 milliseconds). Knowing the approximate distance of the puff to every camera at each time, we could then scale the image pixel size to real-world coordinates. This, in turn, allowed us to determine the total mass of the puff (which should be constant in time and serves as a constraint on the results) by spatial integration over a camera frame. Furthermore, the spread of the puff mass around its centre of mass, i.e. the relative tracer dispersion, could also be determined. Knowing the centre-of-mass trajectories for an ensemble of puffs, we could also determine the meandering of the individual puffs around their mean trajectory. Combining meandering and relative dispersion yields the absolute dispersion of the puffs. The vertical turbulent dispersion is then used to estimate the effective source size, the source time scale and the Lagrangian integral time. In principle, the Richardson-Obukhov constant of relative dispersion in the inertial subrange could also be obtained, but the observation time was not sufficiently long compared to the source time scale to observe this dispersion range.
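The combination of meandering and relative dispersion into absolute dispersion is an instance of the law of total variance and can be checked on a synthetic puff ensemble (all parameters below are illustrative, not campaign values):

```python
import numpy as np

rng = np.random.default_rng(7)
n_puffs, n_particles, n_steps = 50, 200, 100

# Synthetic ensemble: each puff's centre of mass performs a random walk
# (meandering) while its particles spread about the centre (relative
# dispersion). One spatial coordinate is enough for the illustration.
centres = np.cumsum(rng.normal(0, 0.3, (n_puffs, n_steps)), axis=1)
spread = np.cumsum(rng.normal(0, 0.1, (n_puffs, n_particles, n_steps)), axis=2)
z = centres[:, None, :] + spread                  # particle positions

com = z.mean(axis=1)                              # centre of mass per puff
sigma2_meander = com.var(axis=0)                  # meandering of the centres
sigma2_rel = ((z - com[:, None, :]) ** 2).mean(axis=(0, 1))  # relative dispersion
sigma2_abs = z.reshape(-1, n_steps).var(axis=0)   # absolute dispersion

# The decomposition sigma_abs^2 = sigma_meander^2 + sigma_rel^2 holds
# exactly for this construction (up to floating point).
err = np.abs(sigma2_abs - (sigma2_meander + sigma2_rel)).max()
print(f"max decomposition error: {err:.1e}")
```

In the campaign analysis the same decomposition is applied per time lag after the release, with the camera-derived puff positions playing the role of the synthetic trajectories here.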

The method and results are presented in the already mentioned paper by Dinger et al. (2018), which is nearly ready for submission.

For future campaigns, we will have highly accurate (2-3 cm) GPS position data for every camera, and we will also measure the camera pose (i.e. viewing direction and tilt) using known GPS reference points in the field. We will position the cameras much further away from the source (several hundred meters), and we have exchanged the camera lenses, doubling their field of view. We will also release much more SO2. Thus, we should be able to measure the dispersing SO2 puffs much further downwind than in 2017. We also hope for weaker winds, which would extend the observation times even more than the observation distance.

The most important requirement for future campaigns is to encounter clear-sky conditions. Since the weather in Rena was so bad in July 2017, we tried to organize a campaign in Austria in July 2018. We found an ideal military testing ground there and obtained all permissions for the release from the Austrian authorities, including passing a detailed environmental impact assessment in a nature protection area, but after long negotiations we did not obtain the required contract with the Austrian military. Therefore, the July 2018 campaign will again be held in Rena, and we can only hope for better weather than in 2017.

5) Dispersion modelling

The atmospheric boundary layer is characterized by the presence of turbulence. Turbulent motions are composed of a multitude of eddies of different sizes, ranging from more than a kilometer down to about a millimeter. The most advanced computational method currently feasible for resolving turbulence and dispersion in the atmospheric boundary layer is Large Eddy Simulation (LES). This method explicitly resolves the large eddies and parameterizes the small ones, i.e. those smaller than the grid used to solve the equations (see uploaded figure). Unfortunately, its results depend significantly on the grid resolution.

Although LES has been applied many times to atmospheric turbulence simulations, a comprehensive investigation of how plume turbulent dispersion results depend on the LES grid and the chosen sub-grid parameterization is still lacking.

During the first COMTESSA campaign, we used UV cameras to monitor the dispersion of a tracer in the real atmosphere. This was the first time this had been done. We were limited by the bad weather but could nevertheless demonstrate that turbulence parameters (e.g. characterizing meandering, relative and absolute dispersion) can be retrieved from such data.

The experiment will be repeated in July 2018 on a larger scale and using also IR cameras. Given better weather conditions than in 2017, we should be able to obtain statistically robust dispersion parameters that can be used to test theory and large eddy simulations. Eventually, we hope to obtain data during an even larger range of meteorological conditions (e.g. different stability conditions) in future campaigns.
Absolute dispersion as a function of time derived from puff observations.
Horizontal projection of the centre-of-mass trajectories of six tracer puffs.
Example of one camera image showing two puffs dispersing from the release tower.
Example tomographic reconstruction of a tracer plume simulated with a LES model.
One eddy covariance system mounted on a mast in Rena in July 2017.
Relative dispersion as a function of time derived from puff observations.
Meandering as a function of time derived from puff observations.
Example showing how tracer puffs are detected in an UV image.
Sketch of eddies in the ABL and the grid used by LES to resolve the filtered equations.
Example of a tracer plume simulated with LES.
Set-up of a puff experiment at Rena in July 2017, showing camera locations and cloudiness (grey).
Picture showing one UV camera with computer and battery, with the release tower in the background.
PDF of concentration distribution calculated by the LES model for different grid resolutions.