
Deciphering super-Earths using Artificial Intelligence

Periodic Reporting for period 2 - ExoAI (Deciphering super-Earths using Artificial Intelligence)

Reporting period: 2019-07-01 to 2020-12-31

Understanding the origins and prevalence of life in the universe is one of the oldest and most fundamental pursuits of society. The study of exoplanets - i.e. planets orbiting other stars - offers one of the most promising avenues for answering fundamental questions of planet formation, astrobiology and extra-terrestrial habitability. In the last two and a half decades, we have undergone what is best described as a second Copernican revolution. The discovery of extrasolar planets has fundamentally transformed our understanding of planets, solar systems, their formation histories and our place in the grander scheme of the Milky Way. From the avalanche of recent discoveries, we have learned something rather unexpected. Despite our strong observational bias towards detecting large gas giants (i.e. Saturn- and Jupiter-type planets), current statistics paint a very different picture. Contrary to expectations, super-Earths (planets roughly between 1 and 10 Earth masses) are in fact the most abundant planets. In other words, our galaxy is one of Earths and slightly larger planets, not giants. This discovery poses rather fundamental questions: “What exactly are super-Earths?”, “How do they form?”, “Are they habitable and what is the weather like?” - to which the current response is a somewhat sobering: “We don’t really know”.
In order to answer these questions, we require a step change in the data analysis of current exoplanet observations. In short, the more sensitive we are to the faintest signals, the more information we can glean from the light of these foreign worlds. Recent developments in machine learning and artificial intelligence (AI) have revolutionised many areas of industry and science. The ExoAI project focuses on the development of state-of-the-art AI solutions to existing issues in the data analysis of exoplanetary observations, as well as in the theoretical modelling of exoplanet atmospheres. Through state-of-the-art machine learning, we can better understand the behaviour of instruments such as the Hubble Space Telescope (HST) and disentangle the faint planetary signatures from the systematic noise of the instrument. Similarly, on the theoretical side, we have developed AI algorithms that speed up traditionally computationally intensive simulations of exoplanet atmospheres. So-called ‘inverse models’ translate the observations into a description of the physical characteristics of the exoplanet. By making these models significantly faster, we are now able to incorporate more realistic physics and chemistry than was previously possible. These AI-powered solutions will help us to study these exotic worlds using a consistent methodology and to gain insight into the population of super-Earths in far greater detail than previously possible.
The ExoAI team has made significant headway in a number of areas during the first half of this project. Perhaps the most outstanding achievement is the first detection of an atmosphere around a super-Earth (Tsiaras et al. 2019, Nature Astronomy). K2-18b is a potentially rocky planet and lies in its star’s so-called ‘habitable zone’ - the region around a star that is just right for liquid water to exist on the planet’s surface. The detection of the first water-rich atmosphere around a temperate exoplanet constitutes a very significant step forward in our quest to characterise and understand potentially habitable planets beyond our own solar system.

Whilst current observations with the Hubble Space Telescope cannot confirm molecules other than water in the atmosphere of K2-18b, we expect them to be present, and we will study the chemistry of this exciting world with the next generation of space- and ground-based telescopes. Along with K2-18b, we have published a series of studies on hot-Neptunes and hot-Jupiters (e.g. Changeat et al. 2020, Edwards et al. 2020), including the first catalogue of uniformly analysed exoplanets (Tsiaras et al. 2018).
To improve our understanding of the data and of the exoplanets themselves, multiple aspects of the project have to come together in unison, in particular the data analysis of ultra-faint signals and the theoretical modelling of the atmospheric spectra observed by the HST. On the data analysis side, we have pioneered the use of deep long short-term memory (LSTM) neural networks that learn the intrinsic behaviour of instruments over time and, by doing so, are able to disentangle the faint planetary signals from a heap of instrumental noise (Morvan et al. 2020). Similarly, in Yip et al. (2019) we pioneered the first unsupervised learning approach to the detection of very young hot-Jupiters in directly imaged Hubble data. This technique uses generative adversarial neural networks that learn to imitate realistic observations, which lets us train specialised detection networks more easily. Deep-learning approaches can often outperform traditional data analysis techniques by incorporating many more auxiliary variables in an organic framework than is otherwise possible. In other words, our artificial intelligence algorithms are significantly better at finding the needle in the haystack of instrumental noise.
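The de-trending problem these networks solve can be illustrated in miniature. In the toy sketch below, a simple linear fit stands in for the learned instrument model (the LSTM networks of Morvan et al. 2020 learn far richer, time-dependent systematics); all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy lightcurve: a box-shaped transit dip sitting on a linear
# instrumental drift, plus photon noise (all values hypothetical).
time = np.linspace(0.0, 1.0, 200)
transit = np.where((time > 0.4) & (time < 0.6), 0.99, 1.0)  # 1% deep transit
trend = 1.0 + 0.05 * time                                   # instrument drift
flux = transit * trend + rng.normal(0.0, 1e-4, time.size)

# De-trend: fit the systematics on the out-of-transit points only,
# then divide the full lightcurve by the fitted trend.
oot = (time < 0.4) | (time > 0.6)
coeffs = np.polyfit(time[oot], flux[oot], deg=1)
detrended = flux / np.polyval(coeffs, time)

# The planetary signal (the 1% dip) is now recoverable.
depth = 1.0 - detrended[(time > 0.45) & (time < 0.55)].mean()
print(f"recovered transit depth: {depth:.4f}")
```

In real HST data the systematics are neither linear nor known in advance, which is precisely why a network that learns them from the data itself is so powerful.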

Once the data are analysed and the faint planetary signal dug out of the instrument noise, we need to interpret the data. This requires sophisticated modelling of the atmospheres of the exoplanets. As previously mentioned, we call this translation from observational signature to atmospheric parameters (such as temperature, water abundance, etc.) an ‘inverse retrieval’. During the last two years, we have re-built one of the leading retrieval algorithms, TauREx 2, from scratch to turn it into a fully open-source analysis framework for planetary science. The new TauREx 3 framework (Al-Refaie et al. 2020) is significantly faster (by up to a factor of 200) and is the first of its kind to allow the seamless integration of any chemistry, cloud-formation or radiative-transfer algorithm available in the exoplanet community. This is a game changer, as it will allow us, for the first time, to benchmark all model assumptions across the field of exoplanets and provide a uniform framework for the analysis of exoplanets and solar system planets (Cann et al. 2020). Only by analysing observations in a uniform manner are we able to draw conclusions about the population characteristics of exoplanets.
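The inverse-retrieval idea can be sketched as follows. A hypothetical forward model maps two atmospheric parameters to a toy "spectrum", and a brute-force grid search inverts a noisy observation; real retrievals such as TauREx use sophisticated radiative-transfer models and nested sampling or MCMC rather than a grid, and every function and number below is illustrative, not the TauREx API.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model: maps atmospheric parameters
# (temperature in K, log10 water abundance) to a toy transmission
# "spectrum" over ten wavelength channels. Purely illustrative.
wavelengths = np.linspace(1.0, 2.0, 10)  # microns

def forward_model(temp, log_h2o):
    continuum = 0.01 * (1.0 + 1e-4 * (temp - 1000.0))
    water_band = 10 ** log_h2o * np.exp(-((wavelengths - 1.4) / 0.1) ** 2)
    return continuum + water_band

# Simulate an "observation" with noise, then invert it by maximum
# likelihood over a parameter grid.
true_spectrum = forward_model(1200.0, -4.0)
observed = true_spectrum + rng.normal(0.0, 2e-5, wavelengths.size)

temps = np.linspace(800.0, 1600.0, 81)
log_h2os = np.linspace(-6.0, -2.0, 81)
chi2 = np.array([[np.sum((observed - forward_model(t, x)) ** 2)
                  for x in log_h2os] for t in temps])
i, j = np.unravel_index(chi2.argmin(), chi2.shape)
print(f"retrieved T = {temps[i]:.0f} K, log H2O = {log_h2os[j]:.2f}")
```

Even this toy version shows why speed matters: the cost grows with the number of parameters and the expense of each forward-model call, which is what motivates the 200-fold speed-up in TauREx 3.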

Similar to the data analysis, deep learning holds significant promise for improving the retrieval process. Once trained, neural networks are very fast at analysing data. Currently, traditional retrieval approaches are limited in complexity by their computational demands and long run-times. By learning the behaviour of physical and chemical models, we can incorporate significantly more complex models that are computationally too slow for traditional inverse retrieval algorithms to run. ExoGAN (Zingales & Waldmann 2018) was the first deep-learning retrieval framework in the field, demonstrating the potential of deep-learning approaches in this area. Since then, a small but steadily growing field of AI-driven retrieval algorithms has emerged.
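The emulation idea behind such speed-ups can be shown in miniature: pay for the expensive model once on a coarse parameter grid, then answer queries with a fast surrogate. Here plain linear interpolation stands in for the trained neural network, and the "slow" model is a stand-in function, not any real atmospheric code.

```python
import numpy as np

def slow_forward_model(temp):
    # Stand-in for a computationally expensive simulation:
    # a smooth function of temperature (values hypothetical).
    return np.sin(temp / 300.0) + temp / 2000.0

# "Training" phase, paid for once: evaluate the slow model on a grid.
grid_temps = np.linspace(500.0, 2500.0, 21)
grid_values = slow_forward_model(grid_temps)

def emulator(temp):
    # Fast surrogate: linear interpolation on the precomputed grid.
    # A neural-network emulator plays this role for high-dimensional,
    # non-smooth models where interpolation breaks down.
    return np.interp(temp, grid_temps, grid_values)

query = 1234.0
exact = slow_forward_model(query)
approx = emulator(query)
print(f"exact {exact:.4f} vs emulated {approx:.4f}")
```

The trade-off is the same one a neural-network emulator faces: accuracy depends on how densely the training grid samples the parameter space.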

The algorithms developed by the ExoAI group to date find wide application across exoplanet science, planetary science and beyond. Some notable examples are the application of TauREx 3 to the search for methane on Mars (Cann et al. 2020) and the discovery of large-scale ammonia clouds around dark-storm regions on Saturn (Waldmann & Griffith 2019, Nature Astronomy). Other spin-off projects include exoplanet discovery algorithms using deep learning on NASA TESS mission data (Morvan et al. in prep.), unsupervised mapping of surface compositions on Mars (Hippserson et al. in prep.) and the identification of Illegal, Unreported and Unregulated (IUU) fishing activity using space-based synthetic aperture radar images (Nikolau et al. in prep.).
The second half of the ExoAI project will see three main developments: 1) the integration of data analysis algorithms with atmospheric models, 2) the publication of our explainable AI framework in atmospheric retrieval modelling and the exploration of causal models, 3) the uniform analysis of the James Webb Space Telescope Early Release Science exoplanet data set.

To date, data analysis (i.e. noise de-trending) and data interpretation (i.e. atmospheric retrievals) are two separate sub-fields of exoplanet science. This is a significant issue because, particularly in the very low signal-to-noise regime, the instrument model assumed to de-trend the data can impact the final interpretation of the data. This inter-dependence arises from accidental over- or under-correction of the raw observed signal. In order to fully understand the impact on the final interpretation of the exoplanet, we need to propagate these errors organically through our analysis pipeline. This requires a full merging of data analysis, instrumental and atmospheric models into a single unified model. By building such a holistic framework, we can map and understand the full chain of uncertainties and biases that has remained hidden to date. To this end, we have made progress in merging exoplanet lightcurve modelling and atmospheric retrievals as a first step (Yip et al. 2020). The plug-in system of the re-designed TauREx 3 framework (Al-Refaie et al. in prep.) now provides the programmatic flexibility to build this unified framework in the coming year.
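The kind of programmatic flexibility a plug-in system provides can be sketched with a simple registry pattern: model components register themselves under a category and can be swapped without touching the surrounding pipeline. All class, function and registry names below are hypothetical illustrations of the pattern, not the actual TauREx 3 API.

```python
# Minimal plug-in registry: components self-register by category
# (chemistry, clouds, radiative transfer, ...) and the pipeline
# looks them up by name at run time.
PLUGINS = {}

def register(kind):
    def wrap(cls):
        PLUGINS.setdefault(kind, {})[cls.__name__] = cls
        return cls
    return wrap

@register("chemistry")
class EquilibriumChemistry:
    def abundances(self, temp):
        # Toy temperature-dependent water abundance (hypothetical).
        return {"H2O": 1e-4 if temp < 1500 else 1e-5}

@register("chemistry")
class ConstantChemistry:
    def abundances(self, temp):
        # Toy constant-abundance alternative (hypothetical).
        return {"H2O": 1e-3}

def run_retrieval(chemistry_name, temp):
    # The pipeline never hard-codes a chemistry scheme; it only
    # asks the registry for one by name.
    chem = PLUGINS["chemistry"][chemistry_name]()
    return chem.abundances(temp)

print(run_retrieval("EquilibriumChemistry", 1200.0))
```

Swapping `"EquilibriumChemistry"` for `"ConstantChemistry"` changes the model assumption without any change to the pipeline, which is the property that makes uniform benchmarking of assumptions possible.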

In the second main development aim, we will focus more specifically on the explainability of deep-learning approaches in atmospheric modelling. Since the ExoGAN paper was published, several groups have presented deep-learning approaches of their own. The main criticism of deep learning is its black-box nature and the difficulty of explaining neural networks’ decision making. ExoGAN and similar papers are not excluded from this criticism, and the interpretability of the networks’ decisions is currently lacking. In two upcoming publications we will address these drawbacks directly. We will draw on recent advances in explainable AI to define (architecture-independent) methods for interpreting the results and reliability of a neural-network atmospheric retrieval. We hope that this work will provide a firm basis for future developments of deep-learning approaches in the field.

The James Webb Space Telescope will be launched in the spring of 2021, with the first scientific return (the Early Release Science Programme 1366) expected in the autumn of the same year. Using the analysis tools and uniform modelling frameworks developed here, we will be in a prime position to analyse these exciting new observations in unparalleled detail. It remains to be seen what exciting new science we will find, but the unprecedented wavelength coverage, spectral resolution and precision will constitute a game changer in the way we understand planets outside our own solar system.
Exoplanet K2-18b and its host star (artist impression). Credit: ESA/Hubble, M. Kornmesser