# MPE CDT Student Cohort 2014

### Lily Greig

**Based at:** University of Reading

Leads are fractures in sea ice. Although they make up only 5-10% of the sea ice cover, they contribute significantly to the polar heat balance, because gradients in sea ice concentration produce lateral gradients in surface forcing and hence density gradients in the mixed layer. Through baroclinic instability, these fronts can energise submesoscale eddies. Submesoscale eddies have relatively fast time scales (hours to days) and live in a parameter regime with finite Rossby and Richardson numbers. Once energised, they drive large horizontal exchange between ice-free and ice-covered ocean, and previous work has shown that such dynamics can have an order-one impact on sea ice melt. Grid scales in the current generation of climate models are larger than submesoscale eddies and sea ice leads, so these models ignore the effects of such sub-grid-scale processes on the net polar heat balance. This project aims to explore those effects. It will start by building a mathematical model to develop understanding of the time and space scales of the density fronts formed under leads. Next it will explore the conditions under which the density fronts may become unstable and spawn a submesoscale eddy field. Finally, the project will assess how submesoscale dynamics modulate air-sea exchanges and whether these processes should be included in climate models.
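As a back-of-envelope illustration of the regime described above (with assumed illustrative values, not project-specific ones), the Rossby number Ro = U/(fL) is order one at submesoscale front widths but small at the mesoscale:

```python
import math

def rossby_number(U, L, lat_deg):
    """Ro = U / (f L), with f the Coriolis parameter at latitude lat_deg."""
    f = 2.0 * 7.2921e-5 * math.sin(math.radians(lat_deg))
    return U / (f * L)

# Assumed illustrative values: frontal jet U ~ 0.1 m/s at 75 N.
Ro_submeso = rossby_number(0.1, 1e3, 75.0)   # ~1 km front under a lead
Ro_meso = rossby_number(0.1, 1e5, 75.0)      # ~100 km mesoscale front
print(f"submesoscale Ro = {Ro_submeso:.2f}, mesoscale Ro = {Ro_meso:.4f}")
```

An order-one Rossby number is precisely the regime where neither rotation-dominated balanced dynamics nor pure three-dimensional turbulence applies, which is why submesoscale fronts under leads require dedicated study.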

### Calvin Nesbitt

**Based at:** University of Reading

### Chiara Cecilia Maiocchi

**Based at:** University of Reading

### Niccolò Zagli

**Based at:** Imperial College London

### Ollie Street

**Based at:** Imperial College London

The issue of ocean plastics has recently been much discussed by academics, policy makers, and environmental campaigners. The mathematical models used to describe the advection of plastics have largely ignored key factors such as sub-grid-scale dynamics and the mass of the debris. This raises the interesting question of how inertial particles move in a fluid governed by an SPDE. Using recent developments in stochastic fluid equations [Holm 2015] as a springboard, we will explore how the introduction of transport noise affects the properties (such as well-posedness and smoothness) of a fluid model. In particular, can this type of noise restore uniqueness to a model? Furthermore, we will input the velocity field of the fluid into an equation which returns the velocity of the debris [Maxey & Riley, 1983], exploring the validity of doing this and whether it accurately models reality. Such a model would have applications in predicting the motion of ocean debris (such as icebergs, plastics, or aircraft wreckage) and, considering the model as an inverse problem, in calibrating ocean models from drifter buoy data by understanding how the movement of the buoys differs from that of the fluid.
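As a rough sketch of the inertial-particle ingredient, the snippet below integrates a drag-only simplification of the Maxey-Riley equation (neglecting added mass, Basset history and Faxén corrections) in an idealised steady cellular flow; the flow field and relaxation time are illustrative assumptions, not part of the project:

```python
import math

def u(x, y):
    """Steady 2-D cellular flow (a stand-in for an ocean velocity field)."""
    return math.sin(x) * math.cos(y), -math.cos(x) * math.sin(y)

def integrate(tau, x0=0.5, y0=0.5, dt=0.01, T=5.0):
    """Drag-only Maxey-Riley: dv/dt = (u(x) - v)/tau; tau = 0 is an ideal tracer."""
    x, y = x0, y0
    vx, vy = u(x, y)          # start in equilibrium with the flow
    for _ in range(int(T / dt)):
        ux, uy = u(x, y)
        if tau == 0.0:        # ideal tracer: v = u exactly
            vx, vy = ux, uy
        else:                 # inertial particle: velocity relaxes to u over time tau
            vx += dt * (ux - vx) / tau
            vy += dt * (uy - vy) / tau
        x += dt * vx
        y += dt * vy
    return x, y

xt, yt = integrate(0.0)   # tracer
xp, yp = integrate(0.5)   # inertial particle, relaxation time tau = 0.5
drift = math.hypot(xp - xt, yp - yt)
print(f"tracer-particle separation after T=5: {drift:.3f}")
```

The separation between tracer and inertial particle grows with the relaxation time: this difference between buoy motion and fluid motion is exactly what the inverse-problem calibration described above must account for.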

### Swinda Falkena

**Based at:** University of Reading

Predictions of future climate change are usually represented as best estimates with an uncertainty range. However, at the regional scale, changes in atmospheric circulation play a large role and several outcomes may be possible. Under such conditions, an aggregate approach does not provide informative statements about risk. Storylines provide a way to represent the uncertainty in climate change itself, but need to be embedded within a probabilistic framework to account for the uncertainty in the particular realization of climate that will occur.

In this PhD project we use Bayesian causal networks to combine the storyline approach with probability. We focus on atmospheric circulation regimes in the Euro-Atlantic sector, since these have a large influence on the weather over Europe, and study their link with regional changes in extreme events. To inform the derivation of the causal network, expert knowledge will be used, which can be (partially) based on dynamical relationships derived from complex simulation models. The network will incorporate memory effects present in these dynamical relationships, which can give rise to persistent circulation anomalies. This will lead to a stronger physical foundation of the derived causal networks.

### Ryo Kurashina

**Based at:** Imperial College London

### Oliver Phillips

**Based at:** University of Reading

### James Woodfield

**Based at:** University of Reading

Supervisors:

Hilary Weller (Reading Meteorology)

Colin Cotter (Mathematics, Imperial)

Christian Kühnlein (ECMWF)

Transport, or advection, is arguably the most important part of an atmospheric prediction model. Everything in the atmosphere is transported by the wind - temperature, pollutants, moisture, clouds and even the wind itself (non-linear advection). Operational weather and climate centres, such as the Met Office and ECMWF, are developing new atmospheric dynamical cores to run on modern computer architectures, and they need accurate, efficient and conservative advection schemes that are stable for the long time steps suitable for their new models. Their current transport schemes are accurate and stable for long time steps but are not conservative. This project will develop implicit methods to achieve long stable time steps on arbitrary grids of the sphere for linear and non-linear problems. We will start by creating a model for Rayleigh-Bénard convection, and we will develop a Newton solver to achieve long, stable time steps.
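A minimal sketch of the idea, assuming a first-order upwind discretisation rather than the schemes actually under development: a backward-Euler (implicit) step is both conservative and stable at Courant numbers far beyond the explicit limit of 1.

```python
def implicit_upwind_step(phi, c, iters=200):
    """One backward-Euler upwind step for d(phi)/dt + u d(phi)/dx = 0, u > 0.

    Solves (1 + c) phi_new[i] - c phi_new[i-1] = phi[i] (periodic grid), with
    Courant number c = u*dt/dx, by Gauss-Seidel iteration. The implicit matrix
    is diagonally dominant, so the iteration converges for any c: the scheme
    is stable for arbitrarily long time steps. Summing the update equation
    over i shows sum(phi) is exactly conserved.
    """
    n = len(phi)
    new = list(phi)
    for _ in range(iters):
        for i in range(n):
            new[i] = (phi[i] + c * new[i - 1]) / (1.0 + c)  # new[-1] wraps around
    return new

# Advect a top-hat profile at Courant number 5.
n = 40
phi = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]
for _ in range(10):
    phi = implicit_upwind_step(phi, c=5.0)
print(f"min={min(phi):.3f} max={max(phi):.3f} sum={sum(phi):.3f}")
```

The scheme is also monotone (no new extrema), at the price of strong numerical diffusion; higher-order accuracy without losing these properties is the hard part the project addresses.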

### Sam Harrison

**Based at:** University of Reading

### Tom Gregory

**Based at:** Imperial College London

### Manu Sidhu

**Based at:** University of Reading

### Robin Evers

**Based at:** Imperial College London

### Philipp Breul

**Based at:** Imperial College London

### Lois Baker

**Based at:** Imperial College London

It is an emerging picture that deep ocean turbulence exerts a control over the climate system by regulating the oceanic uptake and redistribution of heat, carbon, nutrients and other tracers. Observations of such turbulence, and our ability to model it numerically, have however been limited, if not non-existent, until very recently. In recent years, a few major international field programmes have shed light on deep ocean turbulence through state-of-the-art observations of turbulence generated by deep ocean waves that can be as small as a few metres tall or as tall as a few skyscrapers.

Our ability to mimic such turbulence in high-resolution numerical models is recent, and helps put the isolated, yet expensive, observational data in the context of large-scale climate dynamics. The challenge ahead is to understand the physics of such turbulence so that it can be represented properly in climate models, whose coarse resolution makes them incapable of resolving such waves. This project aims to significantly enhance our understanding of deep ocean turbulence and its representation in climate models. It will build on theoretical study of turbulence transition through stability analysis, numerical verification, and comparison with recent observational data.
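As a toy version of the stability-analysis starting point, the sketch below evaluates the gradient Richardson number for an assumed tanh shear layer; by the Miles-Howard criterion, Ri < 1/4 somewhere in the profile is a necessary condition for shear instability (all parameter values are illustrative, not observational):

```python
import math

def richardson_profile(N2=1e-5, dU=1.0, h=50.0, H=200.0, nz=81):
    """Gradient Richardson number Ri(z) = N^2 / (dU/dz)^2 for the assumed
    shear layer U(z) = (dU/2) tanh(z/h) on z in [-H, H], with constant
    stratification N^2."""
    out = []
    for k in range(nz):
        z = -H + 2.0 * H * k / (nz - 1)
        shear = (dU / (2.0 * h)) / math.cosh(z / h) ** 2   # dU/dz
        out.append((z, N2 / shear ** 2))
    return out

profile = richardson_profile()
unstable = [z for z, ri in profile if ri < 0.25]
print(f"Ri < 1/4 (instability possible) at {len(unstable)} of {len(profile)} levels,"
      f" around z = 0 where the shear is strongest")
```

In the project, idealised diagnostics of this kind would be replaced by proper stability analysis of observed wave-driven flows, but the logic - locate where shear overcomes stratification - is the same.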

Supervisors:

Lead Supervisor: Dr Ali Mashayek (Imperial College London)

Co-supervisors: Dr John Taylor (University of Cambridge), Professor Martin Siegert (Imperial College London)

### Edward Calver

**Based at:** University of Reading

### Cathie Wells

**Based at:** University of Reading

Air travel is the subject of much current controversy. Statistics for fuel use and CO2 emissions make uncomfortable reading for both airlines and environmental groups. Today's flight routes avoid areas of strong headwinds and make use of available tailwinds, flown at a set airspeed chosen for optimal low fuel burn. During the MRes phase of the project, however, it was shown that these trajectories do not always minimise fuel burn.

Airlines are keen to be able to provide a timetable that is unaffected by a particularly strong wind field. Delays are costly and early arrival can often result in extra fuel burn due to holding patterns. This PhD project will find optimal routes to minimise fuel burn for set departure and arrival times. Varying both airspeed and altitude, whilst considering the expected background wind field and the change in aircraft mass due to fuel burn, will provide a realistic model for the cruise phase of transatlantic flights.

Using Optimal Control theory, the dynamical system of routing equations derived in each situation can be solved numerically. The fuel burn statistics from the model can then be compared with recent actual flight data and recommendations made to the airline industry.
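A toy version of the optimisation, with a hypothetical cubic fuel-rate model (a·V³) and a two-segment route standing in for the full optimal control problem: the arrival time is fixed, and we search over the airspeed on the headwind leg (the time constraint then determines the tailwind-leg airspeed):

```python
def fuel_for_split(v1, d1, w1, d2, w2, t_total, a=1e-6):
    """Total fuel for a two-segment route with winds w1, w2 (tailwind positive)
    and a fixed total flight time, under a hypothetical fuel-rate model a*V^3.
    Choosing airspeed v1 on segment 1 fixes v2 through the time constraint."""
    t1 = d1 / (v1 + w1)
    t2 = t_total - t1
    if t2 <= 0.0:
        return None
    v2 = d2 / t2 - w2
    if v2 <= 0.0:
        return None
    return a * (v1 ** 3 * t1 + v2 ** 3 * t2)

# 2000 km into a 20 m/s headwind, then 2000 km with a 20 m/s tailwind;
# arrival fixed at 16000 s of cruise in total (all values illustrative).
d1 = d2 = 2.0e6
w1, w2 = -20.0, 20.0
t_total = 16000.0
fuel_opt, v1_opt = min(
    (fuel_for_split(v, d1, w1, d2, w2, t_total), v)
    for v in range(200, 351)
    if fuel_for_split(v, d1, w1, d2, w2, t_total) is not None
)
print(f"optimal airspeed on the headwind leg: {v1_opt} m/s")
```

Even this crude model reproduces the classic result that it pays to fly faster into headwinds than with tailwinds; the project replaces the grid search with optimal control, and the toy fuel model with a realistic one including varying aircraft mass and altitude.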

Lead supervisor: Paul Williams (Reading)

Co-supervisors: Dante Kalise (Imperial) and Nancy Nichols (Reading) Industrial co-supervisor: Ian Poll (Poll AeroSciences Ltd)

### Elena Saggioro

**Based at:** University of Reading

Supervisors: Professor Ted Shepherd (Lead Supervisor, Department of Meteorology, University of Reading), Professor Sebastian Reich (Department of Mathematics and Statistics, University of Reading), Dr Jeff Knight (Met Office)

Project summary: Although there is confidence in thermodynamic aspects of global climate change for a given global warming, crucially there is still large uncertainty in the dynamical response at the regional scale. This is due to the role of atmospheric circulation, projected changes in which are poorly constrained by Global Climate models (GCMs) which give widely divergent responses, reflecting underlying model errors.

In order to identify the physical range of plausible responses, it is firstly necessary to identify models' errors in short-timescale behaviour, for instance by comparing outputs with observed seasonal variability. Secondly, the connection between such errors and the spread in future projections needs to be understood and used to rule out unphysical projections. Within climate science this method is referred to as 'emergent constraints', its validity being rooted in the principles behind the fluctuation-dissipation theorem (FDT) in statistical physics. Whilst promising, the application of emergent constraints in climate science has often failed, arguably due to unsuitable practical estimates of both the short-term errors and their connection with long-term responses.

In this PhD we aim to tackle the issue of constraining the circulation response to climate change by adopting time-series Bayesian Causal Networks (BCNs). This is a mathematical framework suitable for addressing questions related to causality, and its practical implementation results in a tool for robust statistical inference. An N-variate time-evolving process can be associated with a time-series BCN by representing relations of pairwise conditional dependence in the process as lag-specific, time-oriented links in the graph. The definition translates into a practical procedure for inferring causal links from data, once a test for conditional independence is chosen.

In the PhD we will use BCNs to estimate model errors on seasonal time scales, by comparing causal mechanisms detected from reanalysis data with the same mechanisms extracted from model outputs. Then, we will connect these short-timescale model errors to the long-term projections. The idea here is to complement the FDT-based reasoning with the insights into the data provided by the BCN representation.
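The conditional-independence building block can be sketched as follows, using partial correlation as the (assumed) test on a synthetic causal chain X → Z → Y, where X and Y are correlated but conditionally independent given Z:

```python
import math, random

def corr(a, b):
    """Pearson correlation of two equal-length samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def partial_corr(x, y, z):
    """Partial correlation rho(X, Y | Z) from the pairwise correlations."""
    rxy, rxz, ryz = corr(x, y), corr(x, z), corr(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Synthetic chain X -> Z -> Y: X and Y are correlated, but independent given Z.
random.seed(1)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]
z = [0.8 * xi + random.gauss(0, 0.5) for xi in x]
y = [0.8 * zi + random.gauss(0, 0.5) for zi in z]
print(f"corr(X,Y) = {corr(x, y):.2f}, partial corr(X,Y|Z) = {partial_corr(x, y, z):.2f}")
```

The vanishing partial correlation is what lets a BCN draw links X → Z and Z → Y while correctly omitting a direct X → Y link; lagged variants of the same test give the time-oriented links described above.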

### Niraj Agarwal

**Based at:** Imperial College London

Principal Supervisor: Prof. Pavel Berloff (Department of Mathematics, Imperial College London)

Co-advisor: Peter Dueben, (ECMWF)

Summary: The oceanic turbulent circulation exhibits multiscale motions on very different space and time scales interacting with each other; e.g., jets, vortices, waves, and large-scale variability. In particular, mesoscale oceanic eddies populate nearly all parts of the ocean and need to be resolved in order to represent their effects on the general ocean and atmosphere circulations. However, capturing the effects of these small-scale flows is highly challenging and requires non-trivial approaches, especially when it comes to representing them in non-eddy-resolving ocean circulation models. Therefore, the main goal of my project is to develop data-driven eddy parameterizations for use in both eddy-permitting and non-eddy-resolving ocean models. Dynamical models of reduced complexity will be developed to emulate the spatio-temporal variability of mesoscale eddies as well as their feedbacks across a large range of scales. These can serve as a low-cost oceanic component for climate models; the final aim of this project is therefore to use existing observational data to feed eddy parameterizations in comprehensive ocean circulation and climate models, such as those used in global weather forecasts or in Coupled Model Intercomparison Project (CMIP) exercises like CMIP7.

We will employ a variety of common and novel techniques from statistical data analysis and numerical linear algebra to extract the key properties and characteristics of the space-time-correlated eddy field. The key steps in this framework are: (a) find the relevant data-adaptive basis functions, i.e. the decomposition of time-evolving datasets into their leading spatio-temporal modes using, for example, variance-based methods such as Principal Component Analysis (PCA); and (b) once the subspace spanned by these basis functions is obtained, derive evolution equations that emulate the spatio-temporal correlations of the system using methods such as nonlinear autoregression, artificial neural networks, Linear Inverse Modelling (LIM), etc.
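Steps (a) and (b) can be sketched on a toy two-dimensional dataset (a stand-in for a gridded eddy field; the synthetic data and the AR(1) emulator are illustrative assumptions):

```python
import math, random

random.seed(0)

# Synthetic "eddy field": an AR(1) signal along the direction (1, 1)/sqrt(2)
# plus weak isotropic noise.
n, phi_true = 4000, 0.8
e = (1 / math.sqrt(2), 1 / math.sqrt(2))
a, data = 0.0, []
for _ in range(n):
    a = phi_true * a + random.gauss(0, 1)
    data.append((a * e[0] + random.gauss(0, 0.2), a * e[1] + random.gauss(0, 0.2)))

# (a) leading spatio-temporal mode: power iteration on the 2x2 covariance matrix
c00 = sum(u * u for u, v in data) / n
c11 = sum(v * v for u, v in data) / n
c01 = sum(u * v for u, v in data) / n
vx, vy = 1.0, 0.0
for _ in range(100):
    wx, wy = c00 * vx + c01 * vy, c01 * vx + c11 * vy
    norm = math.hypot(wx, wy)
    vx, vy = wx / norm, wy / norm

# (b) emulate the principal component with a fitted AR(1) model
pc = [u * vx + v * vy for u, v in data]
phi_hat = sum(p * q for p, q in zip(pc[1:], pc[:-1])) / sum(q * q for q in pc[:-1])
print(f"leading EOF: ({vx:.2f}, {vy:.2f}), fitted AR(1) coefficient: {phi_hat:.2f}")
```

The recovered mode and autoregressive coefficient match the generating process; the project applies the same pipeline, with richer emulators, to high-dimensional ocean model output.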

The proposed new science will help develop a state-of-the-art data-adaptive modelling framework for evaluation and application of Machine Learning and rigorous mathematical theory for dynamical and empirical reduction within the hierarchy of existing oceanic models.

### Mariana Clare

**Based at:** Imperial College London

Supervisors: Matthew Piggott (Lead supervisor, Department of Earth Science & Engineering, Imperial College London) and Colin Cotter (Department of Mathematics, Imperial College London). Industry supervisor: Dr Catherine Villaret (East Point Geo Consulting).

Summary: An estimated 250 million people live in regions that are less than 5 metres above sea level. Hence with sea level rise and an increase in both the frequency and severity of storms as a result of climate change, the coastal zone is becoming an ever more critical location for the application of advanced mathematical techniques. Models are currently used to assist in the design of coastal zone engineering projects including flood defences and marine renewable energy arrays. There are many challenges surrounding the development and application of appropriate coupled numerical models because they include both hydrodynamic and sedimentary processes and need to resolve spatial scales ranging from sub-metre to 100s of kilometres.

My project aims to develop and use advanced numerical modelling and statistical tools to improve the understanding of hazards and the quantification and minimisation of erosion and flood risk. Throughout this project, I will consider the hazards in the context of idealised as well as real world scenarios.

The main model I will use in my project is XBeach, which uses simple numerical techniques to compute dune erosion, scour around buildings and overwash. XBeach is also currently used, to a limited degree, with Monte Carlo techniques to generate a large number of storm events with different wave climate parameters. Uncertain atmospheric forcing is very important in erosion/scour processes and flood risk, which are intimately linked in many situations and cannot be considered in isolation. In my project I will explore how the new technique of Multi-level Monte Carlo simulations can be combined with XBeach to quantify erosion/flood risk. I am not only interested in the effects of extreme events, but also the cumulative effect of minor storm events for which Monte Carlo techniques are particularly appropriate. I will also explore how an adaptive mesh approach can be coupled with the statistical approach to assess the risk to coastal areas.
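A minimal sketch of the Multilevel Monte Carlo idea on a toy problem (geometric Brownian motion in place of an XBeach simulation, an illustrative substitution): fine and coarse solves are coupled through the same random inputs, so the level-correction terms have small variance and need few samples:

```python
import math, random

random.seed(42)

def gbm_pair(level, S0=1.0, r=0.05, sigma=0.2, T=1.0):
    """Fine and coarse Euler paths of geometric Brownian motion driven by the
    SAME Brownian increments - the coupling that keeps MLMC correction
    variances small. At level 0 there is no coarse path."""
    nf = 2 ** level
    dt = T / nf
    sf = sc = S0
    dw_prev = 0.0
    for i in range(nf):
        dw = random.gauss(0.0, math.sqrt(dt))
        sf += sf * (r * dt + sigma * dw)
        if level > 0:
            if i % 2 == 0:
                dw_prev = dw
            else:  # one coarse step consumes two fine increments
                sc += sc * (r * 2.0 * dt + sigma * (dw_prev + dw))
    return sf, (sc if level > 0 else None)

# Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
samples_per_level = [40000, 10000, 5000, 2500]
estimate = 0.0
for level, n in enumerate(samples_per_level):
    total = 0.0
    for _ in range(n):
        fine, coarse = gbm_pair(level)
        total += fine - (coarse if coarse is not None else 0.0)
    estimate += total / n
print(f"MLMC estimate of E[S_T]: {estimate:.4f}  (exact {math.exp(0.05):.4f})")
```

Most samples are spent on the cheap coarse level, which is what makes the approach attractive when each fine-level run is an expensive morphodynamic simulation.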


### George Chappelle

**Based at:** Imperial College London

Supervisors: Martin Rasmussen (Imperial College London, Department of Mathematics), Jochen Broeker (University of Reading, Department of Mathematics and Statistics), Pavel Berloff (Imperial College London, Department of Mathematics)

Summary: The concept of a tipping point (or critical transition) describes a phenomenon whereby the behaviour of a physical system changes drastically, and often irreversibly, in response to a small change in its external environment. Relevant examples in climate science are the possible collapse of the Atlantic Meridional Overturning Circulation (AMOC) due to increasing freshwater input, or the sudden release of carbon in peatlands due to an external temperature increase. The aim of this project is to develop the mathematical framework for tipping points and thereby contribute to a deeper understanding of them.

A number of generic mechanisms have been identified which can cause a system to tip. One such mechanism is rate-induced tipping, where the transition is caused by a parameter changing too quickly, rather than by it moving past some critical value. Traditional mathematical bifurcation theory fails to address this phenomenon. The goal of this project is to use and develop the theory of non-autonomous and random dynamical systems to understand rate-induced tipping in the presence of noise. A question of particular practical importance is whether it is possible to develop meaningful early-warning indicators for rate-induced tipping from observational data. We will investigate this question from a theoretical viewpoint and apply it to more realistic models.
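A standard prototype of rate-induced tipping (the usual normal-form example from the literature, not one of this project's models) can be simulated in a few lines:

```python
def tips(r, lam_max=6.0, dt=1e-3):
    """Integrate x' = (x + lam)^2 - 1 while lam ramps from 0 to lam_max at rate r.

    The stable state x = -lam - 1 moves as lam drifts. In shifted coordinates
    y = x + lam one gets y' = y^2 - 1 + r: for r < 1 the system tracks the
    moving equilibrium, while for r > 1 no equilibrium exists during the ramp
    and the solution escapes - tipping caused purely by the RATE of change,
    with no bifurcation ever crossed.
    """
    x, t = -1.0, 0.0
    T = lam_max / r + 2.0          # ramp duration plus settling time
    while t < T:
        lam = min(r * t, lam_max)
        x += dt * ((x + lam) ** 2 - 1.0)
        t += dt
        if x + lam > 10.0:          # far past the unstable branch: tipped
            return True
    return False

print(f"slow ramp (r=0.5): tipped = {tips(0.5)}")
print(f"fast ramp (r=1.5): tipped = {tips(1.5)}")
```

The same parameter excursion produces tipping or safe tracking depending only on how fast it is traversed, which is why classical bifurcation diagrams cannot detect this mechanism.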

### Stuart Patching

**Based at:** Imperial College London

Supervisors: Xue-Mei Li (Department of Mathematics, Imperial College London, Lead supervisor), Darryl Holm ( Department of Mathematics, Imperial College London), Dan Crisan (Department of Mathematics, Imperial College London)

Summary: The Gulf Stream can be thought of as a giant meandering ribbon-like river in the ocean which originates in the Caribbean basin and carries warm water across the Atlantic to the west coast of Europe, keeping the European climate relatively mild. In spite of its significance to weather and climate, the Gulf Stream has remained poorly understood by oceanographers and fluid dynamicists for the past seventy years. This is largely due to the fact that the large-scale flow is significantly affected by multi-scale fluctuations known as mesoscale eddies. It is hypothesised that the mesoscale eddies produce a backscatter effect which is largely responsible for maintaining the eastward jet extensions of the Gulf Stream and other western boundary currents.

The difficulty in modelling such currents lies in the high computational cost associated with running oceanic simulations with sufficient resolution to include the eddy effects. Therefore approaches to this problem have been proposed which involve introducing some form of parameterisation into the numerical model, such that the small scale eddy effects are taken into account in coarse grid simulations.

There are three main approaches we may consider in including this parameterisation: the first is stochastic advection, the second is deterministic roughening and the third is data-driven emulation.

These approaches have all been explored for relatively simple quasi-geostrophic ocean models, but we shall attempt to apply them to more comprehensive primitive-equation models, which have greater practical applications in oceanography. In particular, we shall apply our parameterisations to the MITgcm and FESOM2 models, run them on a low-resolution grid, and compare the results with high-resolution simulations.

### Louis Sharrock

**Based at:** Imperial College London

Supervisors: Nikolas Kantas (Department of Mathematics, Imperial College London, Lead supervisor), Professor Alistair Forbes (NPL)

Summary: This project aims to develop new methodology for performing statistical inference in environmental modelling applications. These applications require the use of a large number of sensors that collect data frequently and are distributed over a large region in space. This motivates the use of a space time varying stochastic dynamical model, defined in continuous time via a (linear or non-linear) stochastic partial differential equation, to model quantities such as air quality, pollution level, and temperature. We are naturally interested in fitting this model to real data and, in addition, on improving on the statistical inference using a carefully chosen frequency for collecting observations, an optimal sensor placement, and an automatic calibration of sensor biases. From a statistical perspective, these problems can be formulated using a Bayesian framework that combines posterior inference with optimal design.

Performing Bayesian inference or optimal design for the chosen statistical model may be intractable, in which case the use of simulation based numerical methods will be necessary. We aim to consider computational methods that are principled but intensive, and given the additional challenges relating to the high dimensionality of the data and the model, must pay close attention to the statistical model at hand when designing algorithms to be used in practice. In particular, popular methods such as (Recursive) Maximum Likelihood, Markov Chain Monte Carlo, and Sequential Monte Carlo, will need to be carefully adapted and extended for this purpose.
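As a sketch of one of the methods named above, here is a bootstrap Sequential Monte Carlo (particle) filter for a toy one-dimensional linear-Gaussian state-space model (the model and noise levels are illustrative assumptions, far simpler than an SPDE-driven sensor network):

```python
import math, random

random.seed(7)

def particle_filter(ys, n_particles=500, a=0.9, q=0.3, r_obs=0.5):
    """Bootstrap particle filter for x_t = a x_{t-1} + N(0, q^2),
    y_t = x_t + N(0, r_obs^2): predict, weight by the observation
    likelihood, then resample."""
    parts = [random.gauss(0, 1) for _ in range(n_particles)]
    means = []
    for y in ys:
        parts = [a * p + random.gauss(0, q) for p in parts]            # predict
        w = [math.exp(-0.5 * ((y - p) / r_obs) ** 2) for p in parts]   # weight
        s = sum(w)
        w = [wi / s for wi in w]
        means.append(sum(wi * p for wi, p in zip(w, parts)))
        parts = random.choices(parts, weights=w, k=n_particles)        # resample
    return means

# Simulate a hidden state and noisy observations, then filter.
truth, x = [], 0.0
for _ in range(100):
    x = 0.9 * x + random.gauss(0, 0.3)
    truth.append(x)
obs = [xt + random.gauss(0, 0.5) for xt in truth]
est = particle_filter(obs)
mse_filter = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)
mse_obs = sum((o - t) ** 2 for o, t in zip(obs, truth)) / len(truth)
print(f"filter MSE: {mse_filter:.3f}, raw observation MSE: {mse_obs:.3f}")
```

The filter estimate is substantially closer to the hidden state than the raw observations; the project's challenge is making such schemes work when the state is a high-dimensional spatial field and the sensors are biased and irregularly sampled.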

### Adriaan Hilbers

**Based at:** Imperial College London

Supervisors: Prof Axel Gandy (Statistics Section, Department of Mathematics, Imperial College London), Dr David Brayshaw (Department of Meteorology, University of Reading)

In the face of climate change, considerable efforts are being undertaken to reduce carbon emissions. One of the most promising pathways to sustainability is decarbonising electricity generation and electrifying other sources of emissions such as transport and heating. This requires a near-total decarbonisation of power systems in the next few decades.

Making strategic decisions regarding future power system design (e.g. which power plants to build) is challenging for a number of reasons. The first is complexity: electricity grids can be immensely complicated, making the effect of e.g. an additional power plant difficult to estimate. The second is the considerable uncertainty about future technologies, fuel prices and grid improvements. Finally, especially as more weather-dependent renewables are added, there is climate-based uncertainty: we simply don't know what the future weather will be, or how well times of high demand will line up with times of high renewable output.

This project aims to both understand the effect of climate-based uncertainty on power system planning problems and develop methodologies for robust decision-making under these unknowns. This will be done in the language of statistics, using techniques such as uncertainty quantification, data reduction and decision-making under uncertainty. Furthermore, this investigation will employ power system models, computer programs simulating the operation of an electricity grid.

### Georgios Sialounas

**Based at:** University of Reading

Supervisors: Tristan Pryer (University of Reading, Department of Mathematics and Statistics, Lead supervisor)

Summary: Hierarchical modelling is a common feature in many application areas. Indeed, most large scale geophysical simulations are built upon the basis of modelling phenomena with systems of PDEs. Depending on the application and the scale of the features needing to be simulated various levels of approximation are conducted, based on some underlying physical reasoning, resulting in a hierarchy of PDE models. At the top level of this hierarchy sits a PDE system that contains all information currently known about the process. For example, climate models contain a huge amount of information, including atmospheric composition, hydrology, impacts of ice sheets, human influence, vegetation, oceanographic aspects, solar inputs and so on. These extremely complicated mathematical models are far too complex to construct any analytical solution method for the resultant system, so, practically, reductions are made, with information being ignored so that the system has a lower complexity. Naturally, this reduction gives rise to hierarchies of models. I study how to make use of these hierarchies from the numerical perspective.

### Alexander Alecio

**Based at:** Imperial College London

When modelling complicated physical systems such as the ocean/atmosphere system with relatively simple mathematical models based on (ordinary/partial, deterministic/stochastic) differential equations, we expect some discrepancy between the mathematical model and the actual physical system. It is by now well understood that model error plays an important role in the fidelity of the mathematical model and in its predictive capabilities. Model uncertainty, together with additional sources of randomness due, e.g., to incomplete knowledge of the current state of the system, sensitive dependence on initial conditions, parameterization of the small scales etc., should be taken into account when making predictions about the system under investigation.

In addition, many climatological models exhibit 'tipping points' - critical transitions where the output of the model changes disproportionately compared to the change in a parameter. [LHK+08] documents several, the most pertinent to British weather being the Stommel-Cessi box model for Atlantic thermohaline circulation, which suggests the collapse of the Atlantic Meridional Overturning Circulation, upon small changes in freshwater input.

Weather forecasting bodies overcome these inherent difficulties with ensemble techniques (or probabilistic forecasting), running multiple simulations to account for the range of possible scenarios. A forecast should then skilfully indicate the confidence the forecaster can have in their prediction, by accurately representing uncertainty [AMP13]. Clearly, model uncertainty can have a dramatic effect on the predictive capabilities of our mathematical model when we are close to a noise-induced transition, a tipping point or a phase transition. This poses an important mathematical question: how can we systematically quantify the propagation of uncertainty through the model, from model parameters and initial conditions to model output, even in cases of 'tipping'?
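The difficulty near a transition can be illustrated with a toy double-well system (an illustrative stand-in, not one of the cited models): ensemble members started at the unstable state split between the two attractors, so only the forecast distribution, never a single trajectory, carries the predictive information:

```python
import math, random

random.seed(3)

def ensemble_forecast(n_members=200, x0=0.0, sigma=0.3, dt=0.01, T=10.0):
    """Propagate an ensemble through the double-well system
    dx = (x - x^3) dt + sigma dW, started at the unstable state x = 0.
    Small stochastic perturbations decide which attractor (x = +1 or
    x = -1) each member reaches."""
    finals = []
    for _ in range(n_members):
        x = x0
        for _ in range(int(T / dt)):
            x += (x - x ** 3) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        finals.append(x)
    return finals

finals = ensemble_forecast()
n_plus = sum(1 for x in finals if x > 0.5)
n_minus = sum(1 for x in finals if x < -0.5)
print(f"{n_plus} members near x=+1, {n_minus} near x=-1 (of {len(finals)})")
```

An infinitesimal uncertainty in the initial condition here produces an order-one uncertainty in the outcome, which is exactly why uncertainty quantification near tipping points needs dedicated mathematical treatment.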

[LHK+08] Timothy M. Lenton, Hermann Held, Elmar Kriegler, Jim W. Hall, Wolfgang Lucht, Stefan Rahmstorf, and Hans Joachim Schellnhuber. Tipping elements in the Earth's climate system. Proceedings of the National Academy of Sciences, 105(6):1786-1793, 2008.

[AMP13] H. M. Arnold, I. M. Moroz, and T. N. Palmer. Stochastic parametrizations and model uncertainty in the Lorenz '96 system. Philosophical Transactions of the Royal Society of London Series A, 371:20110479, April 2013.

Supervisor: G.A. Pavliotis (Imperial College London); V. Lucarini (U. Reading)

### Rhys Leighton Thompson

**Based at:** University of Reading

Supervisors: Clare Watt, Department of Meteorology, Reading (Main), Paul Williams, Department of Meteorology, Reading (Co-Supervisor)

Abstract: Space Weather is the name given to the natural variability of the plasma and magnetic field conditions in near-Earth Space. 21st Century technology is increasingly reliant on space-based assets and infrastructure that are vulnerable to extreme space weather events. Due to the sparse nature of in-situ measurements, and the relative infancy of numerical space plasma physics models, we lack the ability to predict the timing and severity of space weather disruptions to either mitigate their effects, or adequately plan for their consequences.

In this project, we focus on important improvements to the numerical modelling of the Earth’s Outer Radiation Belt; a highly-variable region of energetic electrons in near-Earth space. In the Outer Van Allen Radiation Belt, electrons are trapped by the Earth's magnetic field and can be accelerated to a significant fraction of the speed of light. At such high energies, they pose significant hazards to spacecraft hardware. Most importantly for mankind’s reliance on space-based systems, the Outer Radiation Belt encompasses orbital paths that are of great use to society (e.g. geosynchronous orbit, and Medium Earth Orbits that contain global positioning and navigation systems).

The student will construct idealised numerical models of simple 1D diffusion problems with Dirichlet or Neumann boundary conditions and investigate their behaviour when appropriate stochastic parameterisations of diffusion coefficients are chosen. Initial and boundary values will be chosen to mimic realistic values in near-Earth space, and the solutions from the stochastic model will be compared with solutions from a traditional deterministic model. Given the novel nature of stochastic parameterisations in the field of space plasma physics modelling, the results from the MRes project will provide an important demonstration of the differences between stochastic and deterministic modelling and offer ideas of how to shape space weather models moving forward.
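A minimal sketch of the comparison described above, for a generic 1D diffusion problem with Dirichlet boundaries (grid sizes, coefficients and the lognormal perturbation are illustrative assumptions, not radiation-belt values):

```python
import math, random

random.seed(11)

def diffuse(n_steps=2000, nx=21, D0=1.0, sigma=0.0):
    """Explicit (FTCS) solution of df/dt = d/dx(D df/dx) on [0, 1] with
    Dirichlet boundaries f(0)=0, f(1)=1. For sigma > 0 the diffusion
    coefficient is a clipped lognormal random variable redrawn each step:
    a crude stand-in for a stochastic parameterisation of the diffusion
    coefficient."""
    dx = 1.0 / (nx - 1)
    dt = 0.1 * dx * dx / (3.0 * D0)   # stable even at the largest clipped D
    f = [0.0] * nx
    f[-1] = 1.0
    for _ in range(n_steps):
        D = D0
        if sigma > 0.0:
            D = min(3.0 * D0, max(D0 / 3.0, D0 * math.exp(random.gauss(0.0, sigma))))
        r = D * dt / (dx * dx)
        f = ([0.0]
             + [f[i] + r * (f[i + 1] - 2.0 * f[i] + f[i - 1]) for i in range(1, nx - 1)]
             + [1.0])
    return f

det = diffuse()             # deterministic diffusion coefficient
sto = diffuse(sigma=0.5)    # stochastic diffusion coefficient
print(f"midpoint value: deterministic {det[10]:.3f}, stochastic {sto[10]:.3f}")
```

Because the lognormal perturbation has mean greater than one, the stochastic runs diffuse systematically faster than the deterministic run with the median coefficient: a simple example of how a stochastic parameterisation changes not just the spread but the mean state.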

### Manuel Santos

**Based at:** University of Reading

Supervisors: Valerio Lucarini (lead, U. Reading), Jochen Broecker (U. Reading), Tobias Kuna (U. Reading)

Climate is a complex, forced, non-equilibrium dissipative system that can be understood as a high-dimensional dynamical system. Moreover, climate is subject to different kinds of forcing that create fluctuations in the governing dynamics. In our project we shall delve into so-called transfer operator methods in dynamical systems. The transfer operator is a mathematical device that describes the evolution of distributions in phase-space. As such, it captures the information related to the statistics of our system and allows one to construct a response theory based on it. In my project, we will be concerned with the validity of these methods in a geophysical context. We will study the properties of these operators in coarse-grained phase-space and how they capture information about the (perturbed) dynamics.

By working in phase-space one can construct matrix approximations of the transfer operator. In particular, we will study the validity of response formulas based on these approximations to investigate their applicability. What is the suitable mathematical framework for these formulas to be valid? How well do they capture the effects of perturbations? Further, since real-world systems are high-dimensional, we will assess the problem of dimensionality reduction. When the dynamics are projected onto the variables of interest, some of the properties of the transfer operator are lost. What are the mechanisms that provoke the loss of these properties? Answers to these questions will give evidence for the applicability of transfer operator methods in the study of climate, with an emphasis on its structural statistical properties.

### Leonardo Ripoli

**Based at:** University of Reading

Supervisor: Valerio Lucarini (Department of Mathematics and Statistics, University of Reading)

Co-advisor: Paul Williams (Department of Meteorology, University of Reading), Niklas Boers (Grantham Institute - Climate Change and the Environment, Imperial College London)

Description: The construction of parameterisations for multi-scale systems is a key research area for GFD, because the dynamics of the atmosphere and the ocean cover a wide range of temporal and spatial scales of motion (Berner et al. 2017). Additionally, the variability of geophysical fluids is characterised by a spectral continuum, so that it is not possible to define unambiguously a spectral gap separating slow from fast motions. As a result, the usual mathematical methods based on homogenisation techniques cannot be readily applied to perform the operation of coarse graining. As shown in recent literature (Chekroun et al. 2015, Wouters and Lucarini 2012, 2013, Demaeyer and Vannitsem 2017, Vissio and Lucarini 2017), the lack of time-scale separation unavoidably leads to the presence of non-Markovian terms when constructing the effective equations for the slower modes of variability - which are those we want to explicitly represent - able to surrogate the effect of the faster scales - which are, instead, those we want to parameterise.

Two methods have been proposed to deal effectively and rigorously with this problem:

1) The direct derivation of effective evolution equations for the variables of interest, obtained through a perturbative expansion of the Mori-Zwanzig operator (Wouters & Lucarini 2012, 2013);

2) The reconstruction of the effective evolution equations for the variables of interest though an optimization procedure due to Kondrashov et al. (2015) and Chekroun et al. (2017).

Both methods (which we refer to as top-down and bottom-up, respectively) lead to parameterisations including a deterministic, a stochastic, and a non-Markovian (memory effects) component. The two methods are conceptually analogous, but have never been compared on a specific case study of interest. The project proposed here builds upon the earlier results of Vissio and Lucarini (2017) and deals with constructing and comparing the two parameterisations for the two-level Lorenz '96 system, which provides a classic benchmark for testing new theories in GFD. The goal will be to understand the merits and limits of both parameterisations and to appreciate their differences in terms of precision, adaptivity, and flexibility.
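For reference, the two-level Lorenz '96 benchmark mentioned above couples K slow variables X to K×J fast variables Y. The sketch below integrates it with RK4; the parameter values (F, h, c, b) are common choices from the literature, not necessarily those the project will use.

```python
import numpy as np

K, J = 8, 32                     # number of slow variables, fast per slow
F, h, c, b = 20.0, 1.0, 10.0, 10.0   # forcing, coupling, time/space-scale ratios

def l96_two_level(state):
    """Tendencies of the two-level Lorenz '96 model (Lorenz 1996)."""
    X, Y = state[:K], state[K:].reshape(K, J)
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F
          - (h * c / b) * Y.sum(axis=1))          # fast scales feed back on X
    Yf = Y.ravel()                                # fast variables, cyclic globally
    dY = (-c * b * np.roll(Yf, -1) * (np.roll(Yf, -2) - np.roll(Yf, 1))
          - c * Yf + (h * c / b) * np.repeat(X, J))
    return np.concatenate([dX, dY])

def rk4_step(f, y, dt):
    k1 = f(y); k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(1)
y = 0.1 * rng.standard_normal(K + K * J)          # small random initial state
for _ in range(2000):                             # integrate to t = 2
    y = rk4_step(l96_two_level, y, 0.001)
```

A parameterisation in the sense discussed above would replace the explicit Y dynamics with deterministic, stochastic, and memory terms acting on X alone, and be judged against runs of this full system.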

### Ben Ashby

**Based at:** University of Reading

Supervisors:

Tristan Pryer – University of Reading (Lead Supervisor)

Alex Lukyanov – University of Reading

Cassiano Bortolozo – Brazil, Centro Nacional de Monitoramento e Alertas de Desastres Naturais (CEMADEN)

Summary of the project: Landslides are extreme events that occur when the topsoil on a hill becomes weakened. The result of this can be devastating, both through loss of life and also economic damage. In 2011 a series of floods and mudslides took place in the state of Rio de Janeiro, Brazil. This catastrophe caused over 900 people to lose their lives. This was the driving force behind the creation of the National centre for natural disaster monitoring and alerts (CEMADEN).

For my MRes project, I applied a simple adaptive scheme to numerically solve a simplified PDE model of flow in a porous medium. Data was collected by CEMADEN in an area considered to be at risk from landslides and incorporated into the model to test its sensitivity to the huge variation in soil parameters that determine the flow. Mesh adaptivity was informed by rigorous error estimates involving only the problem data and the numerical solution. Deriving such estimates is known as a posteriori error analysis. The resulting mesh was found to capture the influences of the multiscale data on the solution quite well, but with some undesirable numerical artefacts.

The model used, however, was a heavy simplification. Thus, one of the first steps of my PhD research will be to investigate strategies for the numerical solution of more realistic PDE models with finite element methods. The PDE is degenerate and nonlinear, meaning that even obtaining a numerical solution is much more difficult, and standard techniques for a posteriori analysis cannot be readily applied. If error bounds can be derived, the model will then be tested with mesh adaptivity on data collected during our visit to CEMADEN in Brazil in August 2018. The aim is to create a model to efficiently simulate conditions in the soil so that the team at CEMADEN can use this to inform their work, in which they are responsible for issuing warnings if they believe a landslide is imminent.
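To illustrate the general idea of a posteriori error estimation driving mesh adaptivity — for a linear model problem, not the degenerate nonlinear PDE discussed above — here is a sketch for -u'' = f on (0,1) with P1 finite elements. The residual indicator, source term, and marking fraction are all illustrative choices.

```python
import numpy as np

def solve_p1(nodes, f):
    """Linear (P1) FEM for -u'' = f on (0,1) with u(0) = u(1) = 0."""
    n = len(nodes)
    h = np.diff(nodes)
    A = np.zeros((n, n)); rhs = np.zeros(n)
    for k in range(n - 1):                       # assemble element by element
        A[k, k] += 1 / h[k]; A[k + 1, k + 1] += 1 / h[k]
        A[k, k + 1] -= 1 / h[k]; A[k + 1, k] -= 1 / h[k]
        fm = f(0.5 * (nodes[k] + nodes[k + 1]))  # midpoint quadrature
        rhs[k] += 0.5 * h[k] * fm; rhs[k + 1] += 0.5 * h[k] * fm
    A[0] = A[-1] = 0.0; A[0, 0] = A[-1, -1] = 1.0
    rhs[0] = rhs[-1] = 0.0                       # Dirichlet boundary conditions
    return np.linalg.solve(A, rhs)

def estimate(nodes, f):
    """Residual indicator eta_K ~ h_K * ||f||_{L2(K)} (u_h'' = 0 for P1)."""
    h = np.diff(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    return h * np.sqrt(h) * np.abs(f(mids))

f = lambda x: 100.0 * np.exp(-100.0 * (x - 0.5) ** 2)   # localised source
nodes = np.linspace(0.0, 1.0, 11)
for _ in range(5):                               # refine the worst ~30% of elements
    eta = estimate(nodes, f)
    mark = eta >= np.quantile(eta, 0.7)
    new = 0.5 * (nodes[:-1] + nodes[1:])[mark]   # bisect marked elements
    nodes = np.sort(np.concatenate([nodes, new]))
u = solve_p1(nodes, f)
```

After the refinement loop, the mesh is concentrated where the source (standing in for heterogeneous soil data) is strong, which is the qualitative behaviour the MRes project observed.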

As the research progresses, we hope to work more closely with CEMADEN to both use data that they collect and try to adapt our work towards their specific needs in landslide prediction, with the end goal being to provide an accurate and efficient model, informed by the needs of the users.

### Ieva Dauzickaite

**Based at:** University of Reading

Supervisors: Peter Jan van Leeuwen (lead supervisor, Department of Meteorology, University of Reading), Jennifer Scott (Department of Mathematics and Statistics, University of Reading), Amos Lawless (Department of Mathematics and Statistics, University of Reading).

Summary of the MRes project: Geophysical systems can be characterised as high-dimensional, nonlinear, with complex feedbacks among a multitude of scales. Understanding the working of these systems, and predicting their future behaviour is a huge challenge. Solid progress has been made through analytical analysis, but computer simulations are an essential ingredient for research and predictions. Unfortunately, these simulations tend to drift quickly and strongly from reality.

Incorporating observational information in these models via data assimilation would allow us to study the true evolution of the system in unprecedented detail, and provide accurate forecasts. Data-assimilation is used routinely for numerical weather forecasting.

The main workhorse is 4DVar, a variational method that tries to find a best trajectory over a certain assimilation window, typically of 6 to 12 hours. The main bottlenecks of this method are the difficulty to make the computations parallel, the inability to make the assimilation windows longer because of the chaotic nature of the atmosphere, and the difficulty in obtaining proper uncertainty estimates.

A solution to the first two problems is to allow for model errors in the data-assimilation framework. This will allow for parallelisation and reduce the strong dependence on initial conditions, making the problem less nonlinear. The last problem can be addressed via an ensemble of 4DVars. A natural way is to treat each 4DVar as a draw from a proposal density in a particle filter.

We propose to investigate efficient solution methods for this minimisation problem, also exploring the fact that similar problems have to be solved in parallel for the different particles in the particle filter. If successful this would not only be a significant step forward in particle filtering, but also lay a solid foundation for the present ensemble methodology used by ECMWF and the Met Office, potentially leading to large improvements in weather forecasting.
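As a toy illustration of the weak-constraint formulation in which model error enters the cost function, here is a scalar sketch. The linear model, error variances, synthetic observations, and use of a generic optimiser are all illustrative assumptions, not the operational setup.

```python
import numpy as np
from scipy.optimize import minimize

alpha, N = 0.95, 20                   # toy linear model: x_{k+1} = alpha * x_k
sb, so, sq = 1.0, 0.3, 0.1            # background, observation, model-error stds
rng = np.random.default_rng(2)
truth = np.array([2.0 * alpha**k for k in range(N + 1)])
obs = truth + so * rng.standard_normal(N + 1)   # noisy obs at every step
xb = 1.5                              # background guess of the initial state

def cost(x):
    """Weak-constraint 4DVar: the control variable is the whole trajectory."""
    Jb = 0.5 * (x[0] - xb) ** 2 / sb**2                    # background term
    Jo = 0.5 * np.sum((obs - x) ** 2) / so**2              # observation term
    Jq = 0.5 * np.sum((x[1:] - alpha * x[:-1]) ** 2) / sq**2  # model-error term
    return Jb + Jo + Jq

res = minimize(cost, np.full(N + 1, xb), method="L-BFGS-B")
```

Because the model appears only as a soft constraint, each time slice of the cost can in principle be evaluated independently — the parallelism mentioned above; an ensemble of such minimisations with perturbed `obs` and `xb` would supply the particle-filter proposal draws.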

This PhD project will provide strong mathematical foundations for this new class of minimisation problems in high-dimensional systems with the aim to make them robust for practical applications. The work will consist of both mathematical explorations, such as convergence proofs and developing the methods further, and numerical experiments on medium to high-dimensional systems, with the aim to eventually reach out to operational data-assimilation practice. For the data-assimilation experiments we will make use of the Parallel Data Assimilation Framework (PDAF) software, developed at AWI and now being incorporated into NCEO plans, which allows different data-assimilation algorithms to be tested on a range of models.

### Sebastiano Roncoroni

**Based at:** University of Reading

Supervisors: Dr. David Ferreira (Lead supervisor) and Dr. Maarten Ambaum (University of Reading, Meteorology), Dr. Valerio Lucarini (University of Reading, Mathematics)

Summary: The Southern Ocean is remote in location, but plays an important role in the global climate system: for example, it absorbs up to 75% of heat and up to 45% of carbon produced by human activity. As observations show that winds blowing over the Southern Ocean (which drive its circulation) have strengthened and shifted poleward in the past few decades, it is natural to ask whether it will continue absorbing heat and CO2 at the same rate. Furthermore, an increase of sea ice cover in the Southern Ocean has been observed during the same period of time, in stark contrast with the decreasing trend observed in the Northern Hemisphere, and this effect is attributed to wind stress modification too. Coupled ocean-atmosphere global circulation models, however, predict that this tendency will invert in the future, but the typical time-scale of the process is still a matter of debate. For these reasons, understanding and constraining the intensity and the time scales of the response of the Southern Ocean is a crucial topic in research.

A wide range of studies have investigated the equilibrium response of the Southern Ocean to wind changes, revealing that its sensitivity is significantly damped by interactions between mesoscale eddies (i.e. turbulent motion) and mean flow. However, a few recent works have also shown that the response actually comprises a superposition of multiple timescales, ranging from one month to more than a decade. Therefore, to capture past and future decadal trends it is essential to consider the transient adjustment of the Southern Ocean, and not just its equilibrium response.

The aim of my project is to discuss the physical processes and time scales involved in the transient response rigorously. From a physical perspective, the eddy-mean flow interaction may be described as a nonlinear oscillatory dynamical system which has already been successfully employed to study storm track variability. All existing models of the Southern Ocean response are linear, but it is vital to explore the nonlinear regime too. This will be used to complement and guide the interpretation of numerical step-change experiments conducted with a high resolution global circulation model. Finally, response theory and nonequilibrium statistical mechanics are powerful tools to investigate the response of a complex climate system to modifications of a forcing parameter, and I plan to extend this approach to the dynamics of the Southern Ocean.

### Marco Cucchi

**Based at:** University of Reading

MPE CDT Aligned student

Supervisors: Valerio Lucarini (lead supervisor) and Tobias Kuna

Project Abstract: In this project I’m going to investigate extreme events in simplified atmospheric models of the mid-latitudes using the point of view of Extreme Value Theory (EVT; Coles 2001). The idea here is to extend the work of Felici et al. (2007a, 2007b), where it was first shown that EVT can be used to look at extremes generated by an atmospheric model, going beyond the diagnostic analysis and taking advantage of the theoretical framework presented in Lucarini et al. (2016). I’m going to investigate the properties of extremes of observables where different levels of spatial and temporal coarse graining are performed, so as to understand the effect of averaging on our estimates of extremes. Additionally, statistics of extremes of coarse-grained fields will be compared with what is obtained by running models with coarser resolution. Finally, I will investigate the response of the extremes to both time-independent and time-dependent perturbations affecting the dynamics, using response theory and pullback attractors. Throughout this work both deterministic and stochastic perturbations will be investigated, and results will be used for model error assessment and analysis of multiscale effects.
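The block-maxima side of EVT can be sketched in a few lines: fit a GEV distribution to per-block maxima and read off return levels. The synthetic Gumbel data below merely stand in for model output, and the block size, sample count, and return period are illustrative numbers.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
# 100 "years" of 365 daily values of some observable (synthetic, Gumbel-tailed).
series = rng.gumbel(loc=0.0, scale=1.0, size=(100, 365))
block_maxima = series.max(axis=1)               # one maximum per block ("year")
shape, loc, scale = genextreme.fit(block_maxima)  # GEV fit (scipy's c = -xi)
# 100-year return level: quantile exceeded on average once per 100 blocks.
rl_100 = genextreme.ppf(1.0 - 1.0 / 100.0, shape, loc=loc, scale=scale)
```

Repeating such fits on fields coarse-grained to different resolutions, and comparing the fitted GEV parameters, is the simplest concrete version of the averaging question raised above.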

As a practical application, this work will lead to the definition of functions describing societal and economic impact of extreme climatic events, along with financial and insurance tools able to manage time-dependent risk assessment.

### Jennifer Israelsson

**Based at:** University of Reading

Supervisors: Prof. Emily Black (Lead supervisor, Department of Meteorology, University of Reading), Dr. Claudia Neves (Department of Mathematics and Statistics, University of Reading)

Farmers in Africa are highly vulnerable to variability in the weather. Robust and timely information on risk can enable farmers to take action to improve yield. Ultimately, access to effective early warning improves global food security. Such information also forms the basis of financial instruments, such as drought insurance. Monitoring weather conditions is, however, difficult in Africa because of the heterogeneity of the climate and the sparsity of the ground-observing network. Remotely sensed data (for example satellite-based rainfall estimates) are an alternative to ground observations – but only if the algorithms have skill across the whole rainfall distribution, and if the rainfall estimates are integrated into effective decision support frameworks. Current satellite-based rainfall estimates work well for rainfall occurrence and for low- and medium-intensity rainfall, but have low skill in estimating heavy rainfall.

Rainfall is often assumed to follow a gamma distribution, which fits the low- and mid-intensity rainfall well but underestimates the probability of heavy rainfall. To model the tails more accurately, we apply extreme value statistics, using both the “block maxima” and “peaks-over-threshold” methods. With these methods, only the largest values in the data are used, which makes them suitable for modelling changes in the most adverse events due to, for example, climate change.
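A sketch of the peaks-over-threshold approach: fit a generalised Pareto distribution (GPD) to the excesses over a high threshold. The gamma-distributed synthetic “rainfall”, the 95th-percentile threshold, and the return-period arithmetic below are all illustrative assumptions, not the project's datasets.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
rain = rng.gamma(shape=0.5, scale=10.0, size=20000)   # synthetic wet-day totals, mm
u = np.quantile(rain, 0.95)                           # high threshold
excesses = rain[rain > u] - u
xi, _, sigma = genpareto.fit(excesses, floc=0.0)      # GPD shape and scale
# Return level for the event exceeded on average once per 1000 wet days:
p_u = excesses.size / rain.size                       # threshold exceedance rate
rl = u + genpareto.ppf(1.0 - (1.0 / 1000.0) / p_u, xi, loc=0.0, scale=sigma)
```

The fitted shape parameter `xi` is the quantity of interest for tail behaviour: near zero for exponential-type tails (as with gamma rainfall), positive for the heavy tails that the gamma assumption misses.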

In this project we will assess the effect of climate change on the likelihood of extreme rainfall/temperature events in Africa, and subsequently of adverse agricultural outcomes. We will do so by modelling the probability distributions of gauge observations TAMSAT V3 and reanalysis data, with a focus on return periods for extreme rainfall and assessment of uncertainties in return periods for extreme rainfall.This analysis will also be extended to modelled datasets for the historical period, including the new ultra-high resolution (~4km horizontal resolution) CP4Africa and other high resolution data as well as CMIP5. In addition to evaluating the behaviour and representation of heavy rainfall in these dataset, bivariate extreme analysis of temperature and precipitation will be conducted to evaluate the effect of a warming climate on precipitation. The results will be applied to TAMSAT calibration algorithms and to improve climatologies for the TAMSAT-ALERT risk assessments.

### So Takao

**Based at:** Imperial College London

MPE CDT Aligned student

Supervisor: Darryl Holm

Research Interests: Point Vortex Dynamics, Turbulence, Geophysical Fluid Dynamics, Stochastic Analysis, Symmetry and Reduction in Hamiltonian and Lagrangian Systems, Differential Geometry

Research Project: My current research ideas lie at the intersection of 2D point vortex dynamics and geometric mechanics. Firstly, my idea is to explore a stochastic theory of the motion of point vortices based on the recent work of Holm (2015) on deriving stochastic fluid models using techniques from geometric mechanics, in order to help understand the phenomenon of vortex crystal relaxation in 2D turbulence of inviscid fluids. Vortex crystal formation has been observed repeatedly in experiments on magnetized electron columns, which are governed by the same equations as ideal fluids, and in numerical simulations of point vortices, but the formation process is not completely understood. Modelling the weak background vortices as noise may give us insight into this formation process. Secondly, I am thinking about controlling the motion of point vortices on a curved surface (sphere, paraboloid, etc.) by rigid body motion. This can be seen as a generalisation of, for instance, the motion of point vortices on a rotating sphere.
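A minimal sketch of point vortex dynamics with a crude, spatially uniform transport noise — a strong simplification of the structured noise of Holm (2015); the vortex configuration, noise form, and step sizes here are illustrative choices only.

```python
import numpy as np

def vortex_velocity(z, gamma):
    """Biot-Savart velocity at each vortex (positions as complex numbers)."""
    dz = z[:, None] - z[None, :]
    np.fill_diagonal(dz, 1.0)                 # placeholder, masked just below
    w = gamma[None, :] / dz
    np.fill_diagonal(w, 0.0)                  # remove self-interaction
    return np.conj(-1j / (2.0 * np.pi) * w.sum(axis=1))

def simulate(z0, gamma, dt=1e-3, steps=2000, eps=0.0, seed=0):
    """Euler-Maruyama; eps sets a spatially uniform transport noise amplitude."""
    rng = np.random.default_rng(seed)
    z = z0.astype(complex)
    for _ in range(steps):
        dW = rng.standard_normal() + 1j * rng.standard_normal()
        z = z + vortex_velocity(z, gamma) * dt + eps * np.sqrt(dt) * dW
    return z

gamma = np.array([1.0, 1.0])
z0 = np.array([1.0 + 0.0j, -1.0 + 0.0j])      # co-rotating pair, separation 2
z_det = simulate(z0, gamma)                    # eps = 0: deterministic rotation
z_sto = simulate(z0, gamma, eps=0.1)           # common noise translates the pair
```

Because the noise here is spatially uniform it translates the whole configuration without distorting it; the structured, space-dependent noise of the stochastic fluid framework is precisely what would make the relative vortex motion — and hence crystal relaxation — stochastic.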

### Erwin Luesink

**Based at:** Imperial College London

Supervisors: Darryl Holm (Lead Supervisor, Department of Mathematics, Imperial College London), Dan Crisan (Department of Mathematics, Imperial College London), Colin Cotter (Department of Mathematics, Imperial College London)

Summary: Weather and ocean prediction requires solving the equations of fluid dynamics. However, our incomplete understanding of turbulence and other subgrid-scale effects, the chaotic nature of these equations, and the changing climate are several factors that make solving these equations incredibly difficult. By introducing stochastic transport noise [Holm 2015] into the equations of geophysical fluid dynamics we will try to improve weather forecasting, but more importantly also provide a proper estimate of the uncertainty in the forecasts.
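For a single passively advected scalar in one dimension, transport noise simply randomises the advective displacement, which can be sketched exactly with a spectral phase shift. The tracer profile, noise amplitude, and ensemble size below are illustrative choices, not part of the project.

```python
import numpy as np

nx = 128
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(nx, d=2.0 * np.pi / nx)  # integer wavenumbers
q0 = np.exp(np.cos(x))                                    # smooth periodic tracer

def transport(q, shift):
    """Exact periodic advection by `shift`, done as a spectral phase shift."""
    return np.real(np.fft.ifft(np.fft.fft(q) * np.exp(-1j * k * shift)))

u_mean, xi, T = 1.0, 0.5, 1.0        # mean drift, noise amplitude, final time
rng = np.random.default_rng(5)
# Each realisation advects the tracer by the drift plus a Brownian displacement.
ensemble = np.array([transport(q0, u_mean * T + xi * np.sqrt(T)
                               * rng.standard_normal())
                     for _ in range(200)])
```

Each realisation is a rigid shift of the initial tracer — transport noise moves mass around without creating or destroying it — while the ensemble mean is smoothed, which is the mechanism behind the uncertainty quantification mentioned above.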

### Mary O’Donnell

**Based at:** Imperial College London

MPE CDT Aligned student

Research project: Vortices are near-ubiquitous geophysical and astrophysical phenomena. The study of vortices which occur in Earth’s oceans is crucial to our understanding of oceanic currents and climate, in part because the majority of kinetic energy of the ocean is contained within mesoscale vortices, but our fluid dynamical understanding of them is constrained both experimentally and practically. This research project aims to model turbulence in quasigeostrophy and describe, kinetically and statistically, the vortex population which naturally arises. An understanding of vortices modelled in this way should provide insight into the population of vortices which arises due to similar flow regimes in the ocean.

### Maha Hussein Kaouri

**Based at:** University of Reading

MPE CDT Aligned student

Research project: My research will focus on variational data assimilation schemes where we aim to approximately minimize a function of the residuals of a nonlinear least-squares problem by using newly developed, advanced numerical optimization methods. As the function usually depends on millions of variables, solving such problems can be time consuming and computationally expensive. A possible application of the method would be to estimate the initial conditions for a weather forecast. Weather forecasting has a short time window (the forecast will no longer be useful after the weather event occurs), so it is important to choose a method which gives the best solution in the given time. This is why the analysis of the efficiency of new techniques is of interest. In summary, the aim of my PhD research is to apply the latest mathematical advances in optimization in order to improve the forecasts made by environmental models whilst keeping computational cost and calculation time to a minimum.
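A minimal sketch of one classical method for such nonlinear least-squares problems, Gauss-Newton, applied to a toy exponential-fitting problem; the model, data, and starting point are illustrative, and operational variational schemes are of course vastly larger and use tailored solvers.

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=50):
    """Minimise 0.5 * ||r(x)||^2 by Gauss-Newton iteration."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        # Solve the linearised least-squares problem J(x) step = -r(x).
        step = np.linalg.lstsq(J(x), -r(x), rcond=None)[0]
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy "observation operator": fit y = a * exp(b * t) to noisy data.
t = np.linspace(0.0, 1.0, 30)
rng = np.random.default_rng(6)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.standard_normal(t.size)
r = lambda p: p[0] * np.exp(p[1] * t) - y            # residual vector
J = lambda p: np.column_stack([np.exp(p[1] * t),     # Jacobian of r
                               p[0] * t * np.exp(p[1] * t)])
p = gauss_newton(r, J, np.array([1.0, 0.0]))
```

In variational data assimilation the residuals mix background and observation misfits and the Jacobian involves the tangent-linear model; the efficiency questions in the project concern exactly how such linearised subproblems are solved at scale.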

### Philip Maybank

**Based at:** University of Reading

MPE CDT Aligned student

Research project: In Neuroscience, mean-field models are nonlinear dynamical systems that are used to describe the evolution of mean neural population activity, within a given brain region such as the cortex. Mean-field models typically contain 10-100 unknown parameters, and receive high-dimensional noisy input from other brain regions. The goal of my PhD is to develop statistical methodology for inferring mechanistic parameters in this type of differential equation model.

### Karina McCusker

**Based at:** University of Reading

MPE CDT Aligned student

Research project: Fast, approximate methods for electromagnetic wave scattering by complex ice crystals and snowflakes. The goal of my PhD is to develop a method to approximate the scattering properties of ice particles in clouds. This could be used to improve scattering models that are available, and therefore allow more precise retrievals of ice cloud properties. These retrievals could be used to evaluate model-simulated clouds and identify problems that exist in the model, thus enabling improvements to be made to the parameterization of ice processes.

### James Shaw

**Based at:** University of Reading

MPE CDT Aligned student

Research project: Next-generation atmospheric models are designed to be more flexible than previous models, so that the choice of mesh and choices of numerical schemes can be deferred or changed during operation (Ford et al., 2013; Theurich et al., 2015). My PhD project seeks to make numerical weather and climate predictions more accurate by developing new meshes and numerical schemes that are suitable for next-generation models. In particular, the project addresses the modelling of orographic flows on arbitrary meshes, focusing on three aspects: first, how orography is best represented by a mesh; second, how to accurately advect quantities over orography and, third, how to avoid unphysical solutions in the vertical balance between pressure and temperature.

### William Mcintyre

**Based at:** University of Reading

MPE CDT Aligned student

Research project: Atmospheric convection occurs on length scales far smaller than the grid scales of numerical weather prediction and climate models. However, as the resolution of modern models continues to increase, local convective effects become ever more significant and, thus, there is a demand for new convection schemes which can produce accurate results at these new scales. One such candidate is conditional averaging, an approach in which grid boxes are split into convectively stable and unstable regions, with separate differential equations solved for each. The scheme incorporates mass transport by convection and memory, features often ignored in current models. There is thus a possibility to better represent convection using this new approach.
