# MPE CDT Student Cohort 2017

### Elena Saggioro

**Based at:** University of Reading

**Research project:** Causal approaches to climate variability and change.

Supervisors: Professor Ted Shepherd (Lead Supervisor, Department of Meteorology, University of Reading), Professor Sebastian Reich (Department of Mathematics and Statistics, University of Reading), Dr Jeff Knight (Met Office)

Project summary: Although there is confidence in the thermodynamic aspects of global climate change for a given amount of global warming, there is crucially still large uncertainty in the dynamical response at the regional scale. This is due to the role of atmospheric circulation, projected changes in which are poorly constrained: Global Climate Models (GCMs) give widely divergent responses, reflecting underlying model errors.

In order to identify the physically plausible range of responses, it is first necessary to identify models' errors in short-timescale behaviour, for instance by comparing outputs with observed seasonal variability. Secondly, the connection between such errors and the spread in future projections needs to be understood and used to rule out unphysical projections. Within climate science this method is referred to as ‘emergent constraints’, its validity being rooted in the principles behind the fluctuation-dissipation theorem (FDT) in statistical physics. Whilst promising, the application of emergent constraints in climate science has often failed, arguably due to unsuitable practical estimation of both the short-term errors and their connection with the long-term response.

In this PhD we aim to tackle the issue of constraining the circulation response to climate change by adopting time-series Bayesian Causal Networks (BCNs). This is a mathematical framework suited to addressing questions of causality, and its practical implementation yields a tool for robust statistical inference. An N-variate time-evolving process can be associated with a time-series BCN by representing relations of pairwise conditional dependence in the process as lag-specific, time-oriented links in the graph. The definition translates into a practical procedure for inferring causal links from data, once a test for conditional independence is chosen.

In the PhD we will use BCNs to estimate model errors on seasonal timescales, by comparing causal mechanisms detected in reanalysis data with those extracted from model outputs. Then, we will connect the short-timescale model errors to the long-term projections. The idea here is to complement the FDT-based reasoning with the insights into the data provided by the BCN representation.
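
As a toy illustration of the inference step, the sketch below tests a single lag-specific link in a synthetic bivariate process, using partial correlation as the conditional-independence test. The process, its coefficients and the helper name are purely hypothetical, not the project's data or method implementation:

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y given the conditioning matrix z
    (one column per conditioning variable), via OLS residuals."""
    z1 = np.column_stack([np.ones(len(x)), z])
    rx = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]
    ry = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Toy 2-variable process in which X drives Y at lag 1.
rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + 0.3 * y[t - 1] + rng.standard_normal()

# Test the lagged link X_{t-1} -> Y_t, conditioning on Y_{t-1};
# a clearly nonzero value supports keeping the directed link in the graph.
rho = partial_corr(x[:-1], y[1:], y[:-1].reshape(-1, 1))
```

In a full BCN algorithm this test would be repeated over all variable pairs, lags and conditioning sets, with a significance threshold deciding which links survive.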

### Niraj Agarwal

**Based at:** Imperial College London

**Research project:** Data-driven reduced order modelling of multiscale ocean variability

Principal Supervisor: Prof. Pavel Berloff (Department of Mathematics, Imperial College London)

Co-advisor: Peter Dueben (ECMWF)

Summary: The oceanic turbulent circulation exhibits multiscale motions on very different space and time scales interacting with each other; e.g., jets, vortices, waves, and large-scale variability. In particular, mesoscale oceanic eddies populate nearly all parts of the ocean and need to be resolved in order to represent their effects on the general ocean and atmosphere circulations. However, capturing the effects of these small-scale flows is highly challenging and requires non-trivial approaches and skills, especially when it comes to representing them in non-eddy-resolving ocean circulation models. Therefore, the main goal of my project is to develop data-driven eddy parameterizations for use in both eddy-permitting and non-eddy-resolving ocean models. Dynamical models of reduced complexity will be developed to emulate the spatio-temporal variability of mesoscale eddies as well as their feedbacks across a large range of scales. These can serve as a low-cost oceanic component for climate models; the final aim of this project is therefore to use existing observational data to feed eddy parameterizations in comprehensive ocean circulation and climate models, such as those used in global weather forecasts or in the Coupled Model Intercomparison Project (CMIP), e.g. CMIP7.

We will employ a variety of both established and novel techniques from statistical data analysis and numerical linear algebra to extract the key properties and characteristics of the space-time-correlated eddy field. The key steps in this framework are: (a) find the relevant data-adaptive basis functions, i.e. decompose the time-evolving datasets into their leading spatio-temporal modes using, for example, variance-based methods such as Principal Component Analysis (PCA); and (b) once the subspace spanned by these basis functions is obtained, derive evolution equations that emulate the spatio-temporal correlations of the system using methods such as nonlinear autoregression, artificial neural networks, Linear Inverse Modelling (LIM), etc.
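
Steps (a) and (b) can be sketched on a synthetic space-time field; the field itself, the choice of two retained modes, and the one-step linear emulator below are illustrative assumptions only, not the project's data or model:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic space-time field: n_t snapshots of an n_x-point profile,
# built from two oscillating spatial patterns plus noise.
n_t, n_x = 500, 64
t = np.arange(n_t)
x = np.linspace(0.0, 2.0 * np.pi, n_x)
field = (np.sin(0.1 * t)[:, None] * np.sin(x)[None, :]
         + 0.5 * np.cos(0.07 * t)[:, None] * np.sin(2 * x)[None, :]
         + 0.1 * rng.standard_normal((n_t, n_x)))

# (a) Leading spatio-temporal modes via SVD of the anomaly matrix (PCA).
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
k = 2
pcs = u[:, :k] * s[:k]          # principal-component time series
modes = vt[:k]                  # spatial patterns (EOFs)

# (b) A one-step linear emulator of the PC dynamics, in the spirit of LIM:
#     pcs[t+1] ~ pcs[t] @ A, with A fitted by least squares.
A, *_ = np.linalg.lstsq(pcs[:-1], pcs[1:], rcond=None)
```

A nonlinear autoregression or neural network would replace the least-squares map `A` while keeping the same reduced subspace.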

The proposed new science will help develop a state-of-the-art data-adaptive modelling framework for evaluation and application of Machine Learning and rigorous mathematical theory for dynamical and empirical reduction within the hierarchy of existing oceanic models.

### Mariana Clare

**Based at:** Imperial College London

**Research project:** Advanced numerical techniques to assess erosion/flood risk in the coastal zone

Supervisors: Matthew Piggott (Lead supervisor, Department of Earth Science & Engineering, Imperial College London) and Colin Cotter (Department of Mathematics, Imperial College London). Industry supervisor: Dr Catherine Villaret (East Point Geo Consulting).

Summary: An estimated 250 million people live in regions that are less than 5 metres above sea level. Hence with sea level rise and an increase in both the frequency and severity of storms as a result of climate change, the coastal zone is becoming an ever more critical location for the application of advanced mathematical techniques. Models are currently used to assist in the design of coastal zone engineering projects including flood defences and marine renewable energy arrays. There are many challenges surrounding the development and application of appropriate coupled numerical models because they include both hydrodynamic and sedimentary processes and need to resolve spatial scales ranging from sub-metre to 100s of kilometres.

My project aims to develop and use advanced numerical modelling and statistical tools to improve the understanding of hazards and the quantification and minimisation of erosion and flood risk. Throughout this project, I will consider the hazards in the context of idealised as well as real world scenarios.

The main model I will use in my project is XBeach, which uses simple numerical techniques to compute dune erosion, scour around buildings and overwash. XBeach is also currently used, to a limited degree, with Monte Carlo techniques to generate a large number of storm events with different wave climate parameters. Uncertain atmospheric forcing is very important in erosion/scour processes and flood risk, which are intimately linked in many situations and cannot be considered in isolation. In my project I will explore how the new technique of Multi-level Monte Carlo simulations can be combined with XBeach to quantify erosion/flood risk. I am not only interested in the effects of extreme events, but also the cumulative effect of minor storm events for which Monte Carlo techniques are particularly appropriate. I will also explore how an adaptive mesh approach can be coupled with the statistical approach to assess the risk to coastal areas.
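
The multilevel idea can be illustrated on a toy problem. The sketch below applies the standard MLMC telescoping sum to a geometric Brownian motion with Euler time stepping, not to XBeach; all parameter values are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def level_estimator(level, n_paths, T=1.0, mu=0.05, sigma=0.2, s0=1.0):
    """Estimate E[P_l - P_{l-1}] on one MLMC level, with the fine and
    coarse Euler discretisations driven by the same Brownian increments."""
    nf = 2 ** level
    dt_f = T / nf
    dw = rng.normal(0.0, np.sqrt(dt_f), size=(n_paths, nf))
    sf = np.full(n_paths, s0)
    for i in range(nf):
        sf = sf + mu * sf * dt_f + sigma * sf * dw[:, i]
    if level == 0:
        return sf.mean()
    dt_c = 2.0 * dt_f
    dw_c = dw[:, 0::2] + dw[:, 1::2]   # coarse increments: pairwise sums
    sc = np.full(n_paths, s0)
    for i in range(nf // 2):
        sc = sc + mu * sc * dt_c + sigma * sc * dw_c[:, i]
    return (sf - sc).mean()

# Telescoping sum over levels, with fewer samples on the finer levels,
# which is where MLMC gains its cost advantage over plain Monte Carlo.
n_levels = 4
estimate = sum(level_estimator(lev, n_paths=20000 >> lev)
               for lev in range(n_levels + 1))
```

The same structure carries over when each "path" is a full XBeach run at a given grid resolution, with coupled forcing between coarse and fine levels.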

### George Chappelle

**Based at:** Imperial College London

**Research project:** Rate-induced tipping in non-autonomous random dynamical systems

Supervisors: Martin Rasmussen (Imperial College London, Department of Mathematics), Jochen Broeker (University of Reading, Department of Mathematics and Statistics), Pavel Berloff (Imperial College London, Department of Mathematics)

Summary: The concept of a tipping point (or critical transition) describes a phenomenon in which the behaviour of a physical system changes drastically, and often irreversibly, in response to a small change in its external environment. Relevant examples in climate science are the possible collapse of the Atlantic Meridional Overturning Circulation (AMOC) due to increasing freshwater input, or the sudden release of carbon in peatlands due to an external temperature increase. The aim of this project is to develop the mathematical framework for tipping points and therefore contribute to a deeper understanding of them.

A number of generic mechanisms have been identified which can cause a system to tip. One such mechanism is rate-induced tipping, where the transition is caused by a parameter changing too quickly, rather than by it moving past some critical value. Traditional mathematical bifurcation theory fails to address this phenomenon. The goal of this project is to use and develop the theory of non-autonomous and random dynamical systems to understand rate-induced tipping in the presence of noise. A question of particular practical importance is whether it is possible to develop meaningful early-warning indicators for rate-induced tipping from observational data. We will investigate this question from a theoretical viewpoint and apply the results to more realistic models.
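
A standard minimal example of rate-induced tipping from the literature (not one of the project's models) is the scalar ODE x' = (x + λ)² − 1 with a linearly ramped parameter λ = rt: the solution tracks the moving stable state for slow ramps but escapes to infinity once the rate exceeds a critical value, even though no bifurcation occurs at any fixed λ:

```python
import numpy as np

def simulate(rate, t_end=10.0, dt=1e-3, x0=-1.0):
    """Euler integration of x' = (x + lam)^2 - 1 with a linear ramp
    lam = rate * t. Returns the final state (capped to detect blow-up)."""
    x, lam, t = x0, 0.0, 0.0
    while t < t_end:
        x = x + dt * ((x + lam) ** 2 - 1.0)
        lam += rate * dt
        t += dt
        if x > 10.0:        # solution has escaped: rate-induced tipping
            return x
    return x

slow = simulate(rate=0.5)   # below the critical rate: tracks the moving state
fast = simulate(rate=1.5)   # above the critical rate: tips and blows up
```

In the co-moving variable y = x + λ the equation becomes y' = y² − 1 + r, so the quasi-stable state exists only for r < 1, which is exactly the critical rate this sketch exhibits.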

### Stuart Patching

**Based at:** Imperial College London

**Research project:** Analysis of Stochastic Slow-Fast Systems

Supervisors: Xue-Mei Li (Department of Mathematics, Imperial College London, Lead supervisor), Darryl Holm (Department of Mathematics, Imperial College London), Dan Crisan (Department of Mathematics, Imperial College London)

Summary: The Gulf Stream can be thought of as a giant meandering ribbon-like river in the ocean which originates in the Caribbean basin and carries warm water across the Atlantic to the west coast of Europe, keeping the European climate relatively mild. In spite of its significance to weather and climate, the Gulf Stream has remained poorly understood by oceanographers and fluid dynamicists for the past seventy years. This is largely due to the fact that the large-scale flow is significantly affected by multi-scale fluctuations known as mesoscale eddies. It is hypothesised that the mesoscale eddies produce a backscatter effect which is largely responsible for maintaining the eastward jet extensions of the Gulf Stream and other western boundary currents.

The difficulty in modelling such currents lies in the high computational cost associated with running oceanic simulations with sufficient resolution to include the eddy effects. Therefore approaches to this problem have been proposed which involve introducing some form of parameterisation into the numerical model, such that the small scale eddy effects are taken into account in coarse grid simulations.

There are three main approaches we may consider in including this parameterisation: the first is stochastic advection, the second is deterministic roughening and the third is data-driven emulation.

These approaches have all been explored for relatively simple quasi-geostrophic ocean models, but we shall attempt to apply them to more comprehensive primitive-equation models, which have greater practical applications in oceanography. In particular, we shall be using the MITgcm and FESOM2 models, to which we shall apply our parameterisations, running them on a low-resolution grid and comparing the results with high-resolution simulations.

### Louis Sharrock

**Based at:** Imperial College London

**Research project:** Large Scale Inference With Applications to Environmental Monitoring

Supervisors: Nikolas Kantas (Department of Mathematics, Imperial College London, Lead supervisor), Professor Alistair Forbes (NPL)

Summary: This project aims to develop new methodology for performing statistical inference in environmental modelling applications. These applications require the use of a large number of sensors that collect data frequently and are distributed over a large region in space. This motivates the use of a space-time-varying stochastic dynamical model, defined in continuous time via a (linear or non-linear) stochastic partial differential equation, to model quantities such as air quality, pollution level, and temperature. We are naturally interested in fitting this model to real data and, in addition, in improving the statistical inference using a carefully chosen frequency for collecting observations, optimal sensor placement, and automatic calibration of sensor biases. From a statistical perspective, these problems can be formulated using a Bayesian framework that combines posterior inference with optimal design.

Performing Bayesian inference or optimal design for the chosen statistical model may be intractable, in which case the use of simulation based numerical methods will be necessary. We aim to consider computational methods that are principled but intensive, and given the additional challenges relating to the high dimensionality of the data and the model, must pay close attention to the statistical model at hand when designing algorithms to be used in practice. In particular, popular methods such as (Recursive) Maximum Likelihood, Markov Chain Monte Carlo, and Sequential Monte Carlo, will need to be carefully adapted and extended for this purpose.

### Adriaan Hilbers

**Based at:** Imperial College London

**Research project:** Understanding climate-based uncertainty in power system design

Supervisors: Prof Axel Gandy (Statistics Section, Department of Mathematics, Imperial College London), Dr David Brayshaw (Department of Meteorology, University of Reading)

In the face of climate change, considerable efforts are being undertaken to reduce carbon emissions. One of the most promising pathways to sustainability is decarbonising electricity generation and electrifying other sources of emissions such as transport and heating. This requires a near-total decarbonisation of power systems in the next few decades.

Making strategic decisions regarding future power system design (e.g. which power plants to build) is challenging for a number of reasons. The first is complexity: electricity grids can be immensely complicated, making the effect of e.g. an additional power plant difficult to estimate. The second is the considerable uncertainty about future technologies, fuel prices and grid improvements. Finally, especially as more weather-dependent renewables are added, there is climate-based uncertainty: we simply don’t know what the future weather will be, or how well times of high demand will line up with times of high renewable output.

This project aims to both understand the effect of climate-based uncertainty on power system planning problems and develop methodologies for robust decision-making under these unknowns. This will be done in the language of statistics, using techniques such as uncertainty quantification, data reduction and decision-making under uncertainty. Furthermore, this investigation will employ power system models, computer programs simulating the operation of an electricity grid.

### Georgios Sialounas

**Based at:** University of Reading

**Research project:** Hierarchical Model Adaptivity

Supervisors: Tristan Pryer (University of Reading, Department of Mathematics and Statistics, Lead supervisor)

Summary: Hierarchical modelling is a common feature in many application areas. Indeed, most large-scale geophysical simulations are built upon the basis of modelling phenomena with systems of PDEs. Depending on the application and the scale of the features to be simulated, various levels of approximation are made, based on some underlying physical reasoning, resulting in a hierarchy of PDE models. At the top level of this hierarchy sits a PDE system that contains all information currently known about the process. For example, climate models contain a huge amount of information, including atmospheric composition, hydrology, impacts of ice sheets, human influence, vegetation, oceanographic aspects, solar inputs and so on. These extremely complicated mathematical models are far too complex to admit any analytical solution method, so, in practice, reductions are made, with information being ignored so that the system has lower complexity. Naturally, this reduction gives rise to hierarchies of models. I study how to make use of these hierarchies from the numerical perspective.

### Alexander Alecio

**Based at:** Imperial College London

**Research project:** Uncertainty quantification, linear response theory and predictability for nonequilibrium systems near phase transitions

When modelling complicated physical systems such as the ocean/atmosphere system with relatively simple mathematical models based on (ordinary/partial, deterministic/stochastic) differential equations, we expect some discrepancy between the mathematical model and the actual physical system. It is by now well understood that model error plays an important role in the fidelity of the mathematical model and in its predictive capabilities. Model uncertainty, together with additional sources of randomness due, e.g., to incomplete knowledge of the current state of the system, sensitive dependence on initial conditions, parameterization of the small scales, etc., should be taken into account when making predictions about the system under investigation.

In addition, many climatological models exhibit 'tipping points': critical transitions where the output of the model changes disproportionately compared to the change in a parameter. [LHK+08] documents several; the most pertinent to British weather is the Stommel-Cessi box model for Atlantic thermohaline circulation, which suggests a collapse of the Atlantic Meridional Overturning Circulation upon small changes in freshwater input.

Weather forecasting bodies overcome these inherent difficulties with ensemble techniques (or probabilistic forecasting), running multiple simulations to account for the range of possible scenarios. A forecast should then skilfully indicate the confidence the forecaster can have in their prediction, by accurately representing uncertainty [AMP13]. Clearly, model uncertainty can have a dramatic effect on the predictive capabilities of our mathematical model when we are close to a noise-induced transition, a tipping point or a phase transition. This poses an important mathematical question: how can we systematically quantify the propagation of uncertainty through the model, from model parameters and initial conditions to model output, even in cases of 'tipping'?

[LHK+08] Timothy M. Lenton, Hermann Held, Elmar Kriegler, Jim W. Hall, Wolfgang Lucht, Stefan Rahmstorf, and Hans Joachim Schellnhuber. Tipping elements in the Earth's climate system. Proceedings of the National Academy of Sciences, 105(6):1786–1793, 2008.

[AMP13] H. M. Arnold, I. M. Moroz, and T. N. Palmer. Stochastic parametrizations and model uncertainty in the Lorenz '96 system. Philosophical Transactions of the Royal Society of London Series A, 371:20110479, April 2013.

Supervisors: G.A. Pavliotis (Imperial College London); V. Lucarini (University of Reading)

### Rhys Leighton Thompson

**Based at:** University of Reading

**Research project:** Diffusion models of Earth’s Outer Radiation Belt using Stochastic Parameterisations

Supervisors: Clare Watt, Department of Meteorology, Reading (Main), Paul Williams, Department of Meteorology, Reading (Co-Supervisor)

Abstract: Space Weather is the name given to the natural variability of the plasma and magnetic field conditions in near-Earth Space. 21st Century technology is increasingly reliant on space-based assets and infrastructure that are vulnerable to extreme space weather events. Due to the sparse nature of in-situ measurements, and the relative infancy of numerical space plasma physics models, we lack the ability to predict the timing and severity of space weather disruptions to either mitigate their effects, or adequately plan for their consequences.

In this project, we focus on important improvements to the numerical modelling of the Earth’s Outer Radiation Belt; a highly-variable region of energetic electrons in near-Earth space. In the Outer Van Allen Radiation Belt, electrons are trapped by the Earth's magnetic field and can be accelerated to a significant fraction of the speed of light. At such high energies, they pose significant hazards to spacecraft hardware. Most importantly for mankind’s reliance on space-based systems, the Outer Radiation Belt encompasses orbital paths that are of great use to society (e.g. geosynchronous orbit, and Medium Earth Orbits that contain global positioning and navigation systems).

The student will construct idealised numerical models of simple 1D diffusion problems with Dirichlet or Neumann boundary conditions and investigate their behaviour when appropriate stochastic parameterisations of diffusion coefficients are chosen. Initial and boundary values will be chosen to mimic realistic values in near-Earth space, and the solutions from the stochastic model will be compared with solutions from a traditional deterministic model. Given the novel nature of stochastic parameterisations in the field of space plasma physics modelling, the results from the MRes project will provide an important demonstration of the differences between stochastic and deterministic modelling and offer ideas of how to shape space weather models moving forward.
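
A minimal sketch of the kind of experiment described above, assuming a simple explicit finite-difference scheme, homogeneous Dirichlet boundaries, and a lognormally perturbed diffusion coefficient; all of these are illustrative choices, not the project's actual model or parameterisation:

```python
import numpy as np

def diffuse_1d(d_coeff, f0, dt, n_steps):
    """Explicit finite-difference solve of f_t = (D f_x)_x on [0, 1]
    with Dirichlet boundaries f(0) = f(1) = 0; d_coeff is per-interface."""
    f = f0.copy()
    dx = 1.0 / (len(f) - 1)
    for _ in range(n_steps):
        flux = d_coeff * np.diff(f) / dx      # flux at cell interfaces
        f[1:-1] += dt * np.diff(flux) / dx
        f[0] = f[-1] = 0.0
    return f

n = 51
x = np.linspace(0.0, 1.0, n)
f0 = np.sin(np.pi * x)          # initial profile, zero at both ends
dt, n_steps = 5e-5, 4000

# Deterministic run vs. a small ensemble in which the diffusion
# coefficient is perturbed stochastically (lognormal multiplier).
rng = np.random.default_rng(3)
det = diffuse_1d(np.full(n - 1, 1.0), f0, dt, n_steps)
ens = [diffuse_1d(np.exp(rng.normal(0.0, 0.3)) * np.ones(n - 1), f0, dt, n_steps)
       for _ in range(20)]
```

Comparing `det` with the spread of `ens` is the simplest version of the stochastic-vs-deterministic comparison described above; a more faithful version would let the coefficient vary in space and time as well.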

### Manuel Santos

**Based at:** University of Reading

**Research project:** Transfer operators and response in climate dynamics

Supervisors: Valerio Lucarini (lead, U. Reading), Jochen Broecker (U. Reading), Tobias Kuna (U. Reading)

Climate is a complex, forced, non-equilibrium dissipative system that can be understood as a high-dimensional dynamical system. Moreover, climate is subject to different kinds of forcing that create fluctuations in the governing dynamics. In this project we shall delve into so-called transfer operator methods in dynamical systems. The transfer operator is a mathematical device that describes the evolution of distributions in phase space. As such, it captures the information related to the statistics of the system and allows us to construct a response theory based on it. In my project, we will be concerned with the validity of these methods in a geophysical context. We will study the properties of these operators in coarse-grained phase space and how they capture information about the (perturbed) dynamics.

By working in phase space, one can construct matrix approximations of the transfer operator. In particular, we will study the validity of response formulas based on these approximations to investigate their applicability. What is the suitable mathematical framework for these formulas to be valid? How well do they capture the effects of perturbations? Further, since real-world systems are high-dimensional, we will assess the problem of dimensionality reduction. When the dynamics are projected onto the variables of interest, some of the properties of the transfer operator are lost. What are the mechanisms that provoke the loss of these properties? An answer to these questions will give evidence of the applicability of transfer operator methods in the study of climate, putting an emphasis on its structural statistical properties.

### Leonardo Ripoli

**Based at:** University of Reading

**Research project:** Constructing Parameterisations for GFD systems – a comparative approach

Supervisor: Valerio Lucarini (Department of Mathematics and Statistics, University of Reading)

Co-advisor: Paul Williams (Department of Meteorology, University of Reading), Niklas Boers (Grantham Institute - Climate Change and the Environment, Imperial College London)

Description: The construction of parameterisations for multi-scale systems is a key research area in GFD, because the dynamics of the atmosphere and of the ocean cover a wide range of temporal and spatial scales of motion (Berner et al. 2017). Additionally, the variability of geophysical fluids is characterized by a spectral continuum, so that it is not possible to define unambiguously a spectral gap separating slow from fast motions. As a result, the usual mathematical methods based on homogenization techniques cannot be readily applied to perform the operation of coarse-graining. As shown in recent literature (Chekroun et al. 2015, Wouters and Lucarini 2012, 2013, Demaeyer and Vannitsem 2017, Vissio and Lucarini 2017), the lack of time-scale separation leads unavoidably to the presence of non-Markovian terms when constructing the effective equations for the slower modes of variability - which are those we want to explicitly represent - able to surrogate the effect of the faster scales - which are, instead, those we want to parameterise.

Two methods have been proposed to deal effectively and rigorously with this problem:

1) The direct derivation of effective evolution equations for the variables of interest, obtained through a perturbative expansion of the Mori-Zwanzig operator (Wouters & Lucarini 2012, 2013);

2) The reconstruction of the effective evolution equations for the variables of interest though an optimization procedure due to Kondrashov et al. (2015) and Chekroun et al. (2017).

Both methods (which we refer to as top-down and bottom-up, respectively) lead to parameterisations including a deterministic, a stochastic, and a non-Markovian (memory-effects) component. The two methods are conceptually analogous, but have never been compared on a specific case study of interest. The MSc project proposed here builds upon the earlier results of Vissio and Lucarini (2017) and deals with constructing and comparing the two parameterisations for the two-level Lorenz ’96 system, which provides a classic benchmark for testing new theories in GFD. The goal will be to understand the merits and limits of both parameterisations and to appreciate their differences in terms of precision, adaptivity, and flexibility.
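
For reference, a minimal integration of the two-level Lorenz '96 benchmark itself can be sketched as follows, using conventional parameter values; the specific choices of K, J, F, h, c and b below are illustrative, not those fixed by the project:

```python
import numpy as np

K, J = 8, 4            # number of slow variables X, and fast Y per X
F, h, c, b = 10.0, 1.0, 10.0, 10.0   # forcing, coupling, timescale, amplitude

def tendencies(X, Y):
    """Two-level Lorenz '96 tendencies: advected, damped, forced slow
    variables X_k, each coupled to a cyclic chain of fast variables Y."""
    dX = (np.roll(X, -1) - np.roll(X, 2)) * np.roll(X, 1) - X + F \
        - (h * c / b) * Y.reshape(K, J).sum(axis=1)
    dY = c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2)) \
        - c * Y + (h * c / b) * np.repeat(X, J)
    return dX, dY

rng = np.random.default_rng(4)
X = F * (0.5 + 0.1 * rng.standard_normal(K))
Y = 0.1 * rng.standard_normal(K * J)
dt, n_steps = 0.005, 1000
for _ in range(n_steps):          # classical RK4 time stepping
    k1x, k1y = tendencies(X, Y)
    k2x, k2y = tendencies(X + 0.5 * dt * k1x, Y + 0.5 * dt * k1y)
    k3x, k3y = tendencies(X + 0.5 * dt * k2x, Y + 0.5 * dt * k2y)
    k4x, k4y = tendencies(X + dt * k3x, Y + dt * k3y)
    X = X + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
    Y = Y + dt / 6.0 * (k1y + 2 * k2y + 2 * k3y + k4y)
```

A parameterisation experiment would replace the explicit Y equations with a closure for the coupling term in `dX` and compare the resulting slow statistics against this coupled reference.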

### Ben Ashby

**Based at:** University of Reading

**Research project:** Adaptive Finite Element Methods for Landslide Prediction

Supervisors:

Tristan Pryer – University of Reading (Lead Supervisor)

Alex Lukyanov – University of Reading

Cassiano Bortolozo – Centro Nacional de Monitoramento e Alertas de Desastres Naturais (CEMADEN), Brazil

Summary of the project: Landslides are extreme events that occur when the topsoil on a hill becomes weakened. The result of this can be devastating, both through loss of life and also economic damage. In 2011 a series of floods and mudslides took place in the state of Rio de Janeiro, Brazil. This catastrophe caused over 900 people to lose their lives. This was the driving force behind the creation of the National centre for natural disaster monitoring and alerts (CEMADEN).

For my MRes project, I applied a simple adaptive scheme to numerically solve a simplified PDE model of flow in a porous medium. Data was collected by CEMADEN in an area considered to be at risk from landslides and incorporated into the model to test its sensitivity to the huge variation in soil parameters that determine the flow. Mesh adaptivity was informed by rigorous error estimates involving only the problem data and the numerical solution. Deriving such estimates is known as a posteriori error analysis. The resulting mesh was found to capture the influences of the multiscale data on the solution quite well, but with some undesirable numerical artefacts.

The model used, however, was a heavy simplification. Thus, one of the first steps of my PhD research will be to investigate strategies for the numerical solution of more realistic PDE models with finite element methods. The PDE is degenerate and nonlinear, meaning that even obtaining a numerical solution is much more difficult, and standard techniques for a posteriori analysis cannot be readily applied. If error bounds can be derived, the model will then be tested with mesh adaptivity on data collected during our visit to CEMADEN in Brazil in August 2018. The aim is to create a model to efficiently simulate conditions in the soil so that the team at CEMADEN can use this to inform their work, in which they are responsible for issuing warnings if they believe a landslide is imminent.
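
The adaptive loop described above (estimate, mark, refine) can be sketched on a far simpler stand-in problem: a 1D Poisson equation with a sharply peaked load, a residual-type a posteriori indicator, and bisection of the worst elements. Everything below is an illustrative toy, much simpler than the degenerate nonlinear porous-medium model of the project:

```python
import numpy as np

def solve_poisson_fem(nodes, f):
    """Linear FEM for -u'' = f on [0, 1] with u(0) = u(1) = 0."""
    n = len(nodes)
    h = np.diff(nodes)
    K = np.zeros((n, n))
    F = np.zeros(n)
    for e in range(n - 1):
        K[e:e + 2, e:e + 2] += (1.0 / h[e]) * np.array([[1.0, -1.0],
                                                        [-1.0, 1.0]])
        mid = 0.5 * (nodes[e] + nodes[e + 1])
        F[e:e + 2] += f(mid) * h[e] / 2.0     # midpoint quadrature
    K[0, :] = K[-1, :] = 0.0                  # Dirichlet conditions
    K[0, 0] = K[-1, -1] = 1.0
    F[0] = F[-1] = 0.0
    return np.linalg.solve(K, F)

def refine(nodes, f, frac=0.3):
    """Residual-type indicator eta_e ~ h_e * ||f||_{L2(e)} (u_h'' = 0
    elementwise for P1 elements), then bisect the worst fraction."""
    h = np.diff(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    eta = h * np.abs(f(mids)) * np.sqrt(h)
    worst = np.argsort(eta)[-max(1, int(frac * len(h))):]
    return np.sort(np.concatenate([nodes, mids[worst]]))

f = lambda x: 1.0 / (0.01 + (x - 0.5) ** 2)   # sharply peaked load
nodes = np.linspace(0.0, 1.0, 11)
for _ in range(4):                             # estimate-mark-refine loop
    nodes = refine(nodes, f)
u = solve_poisson_fem(nodes, f)
```

The refined mesh concentrates nodes where the load (and hence the estimated error) is largest, which is the behaviour the project seeks to justify rigorously for much harder PDEs.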

As the research progresses, we hope to work more closely with CEMADEN to both use data that they collect and try to adapt our work towards their specific needs in landslide prediction, with the end goal being to provide an accurate and efficient model, informed by the needs of the users.

### Ieva Dauzickaite

**Based at:** University of Reading

**Research project:** Efficient weak-constraint data assimilation for geophysical systems

Supervisors: Peter Jan van Leeuwen (lead supervisor, Department of Meteorology, University of Reading), Jennifer Scott (Department of Mathematics and Statistics, University of Reading), Amos Lawless (Department of Mathematics and Statistics, University of Reading).

Summary of the MRes project: Geophysical systems can be characterised as high-dimensional, nonlinear, with complex feedbacks among a multitude of scales. Understanding the working of these systems, and predicting their future behaviour is a huge challenge. Solid progress has been made through analytical analysis, but computer simulations are an essential ingredient for research and predictions. Unfortunately, these simulations tend to drift quickly and strongly from reality.

Incorporating observational information in these models via data assimilation would allow us to study the true evolution of the system in unprecedented detail, and provide accurate forecasts. Data-assimilation is used routinely for numerical weather forecasting.

The main workhorse is 4DVar, a variational method that tries to find a best trajectory over a certain assimilation window, typically of 6 to 12 hours. The main bottlenecks of this method are the difficulty to make the computations parallel, the inability to make the assimilation windows longer because of the chaotic nature of the atmosphere, and the difficulty in obtaining proper uncertainty estimates.

A solution to the first two problems is to allow for model errors in the data-assimilation framework. This allows for parallelisation and reduces the strong dependence on initial conditions, making the problem less nonlinear. The last problem can be addressed via an ensemble of 4DVars. A natural way is to treat each 4DVar as a draw from a proposal density in a particle filter.

We propose to investigate efficient solution methods for this minimisation problem, also exploiting the fact that similar problems must be solved in parallel for the different particles in the particle filter. If successful, this would not only be a significant step forward in particle filtering, but would also lay a solid foundation for the ensemble methodology currently used by ECMWF and the Met Office, potentially leading to large improvements in weather forecasting.

This PhD project will provide strong mathematical foundations for this new class of minimisation problems in high-dimensional systems, with the aim of making them robust for practical applications. The work will consist of both mathematical exploration, such as convergence proofs and further development of the methods, and numerical experiments on medium- to high-dimensional systems, with the aim of eventually reaching operational data-assimilation practice. For the data-assimilation experiments we will use the Parallel Data Assimilation Framework (PDAF) software, developed at AWI and now being incorporated into NCEO plans, which allows different data-assimilation algorithms to be tested on a range of models.

### Sebastiano Roncoroni

**Based at:** University of Reading

**Research project:** Non-linear transient adjustment of the Southern Ocean to wind changes

Supervisors: Dr. David Ferreira (Lead supervisor) and Dr. Maarten Ambaum (University of Reading, Meteorology), Dr. Valerio Lucarini (University of Reading, Mathematics)

Summary: The Southern Ocean is remote in location, but plays an important role in the global climate system: for example, it absorbs up to 75% of the heat and up to 45% of the carbon produced by human activity. As observations show that the winds blowing over the Southern Ocean (which drive its circulation) have strengthened and shifted poleward over the past few decades, it is natural to ask whether it will continue absorbing heat and CO2 at the same rate. Furthermore, an increase in sea-ice cover in the Southern Ocean has been observed over the same period, in stark contrast with the decreasing trend in the Northern Hemisphere, and this effect too is attributed to wind-stress modification. Coupled ocean-atmosphere global circulation models predict that this tendency will reverse in the future, but the typical time scale of the process is still a matter of debate. For these reasons, understanding and constraining the intensity and time scales of the response of the Southern Ocean is a crucial research topic.

A wide range of studies have investigated the equilibrium response of the Southern Ocean to wind changes, revealing that its sensitivity is significantly damped by interactions between mesoscale eddies (i.e. turbulent motion) and mean flow. However, a few recent works have also shown that the response actually comprises a superposition of multiple timescales, ranging from one month to more than a decade. Therefore, to capture past and future decadal trends it is essential to consider the transient adjustment of the Southern Ocean, and not just its equilibrium response.

The aim of my project is to rigorously characterise the physical processes and time scales involved in the transient response. From a physical perspective, the eddy-mean flow interaction may be described as a nonlinear oscillatory dynamical system, an approach that has already been employed successfully to study storm-track variability. All existing models of the Southern Ocean response are linear, but it is vital to explore the nonlinear regime too. This will be used to complement and guide the interpretation of numerical step-change experiments conducted with a high-resolution global circulation model. Finally, response theory and non-equilibrium statistical mechanics are powerful tools for investigating the response of a complex climate system to modifications of a forcing parameter, and I plan to extend this approach to the dynamics of the Southern Ocean.
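As a rough illustration of the kind of nonlinear oscillatory dynamics invoked above, the sketch below integrates a generic predator-prey-type oscillator of the sort used in studies of eddy-mean flow interaction; the equations and parameters are illustrative placeholders, not the project's actual model:

```python
import numpy as np

def step(x, y, dt=0.01, a=1.0, b=1.0, c=1.0, d=1.0):
    """One forward-Euler step of a Lotka-Volterra-type oscillator.

    Here x can be read as a mean-flow quantity and y as eddy activity;
    the parameters are illustrative, not fitted to the Southern Ocean.
    """
    dx = a * x - b * x * y
    dy = c * x * y - d * y
    return x + dt * dx, y + dt * dy

def simulate(n_steps=5000, x0=1.5, y0=1.0):
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        x, y = step(xs[-1], ys[-1])
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

xs, ys = simulate()
# The trajectory orbits the fixed point (d/c, a/b) = (1, 1) rather than
# settling down: a nonlinear, self-sustained oscillation.
```

The point of such a toy system is that the response to a step change in forcing is not a single exponential relaxation but an oscillatory adjustment with its own intrinsic time scale.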

### Marco Cucchi

**Based at:** University of Reading

**Research project:** Sensitivity of Extremes in Simplified Models of the Mid-latitude Atmospheric Circulation

MPE CDT Aligned student

Supervisors: Valerio Lucarini (lead supervisor) and Tobias Kuna

Project Abstract: In this project I’m going to investigate extreme events in simplified atmospheric models of the mid-latitudes from the point of view of Extreme Value Theory (EVT; Coles 2001). The idea is to extend the work of Felici et al. (2007a, 2007b), where it was first shown that EVT can be used to study extremes generated by an atmospheric model, going beyond diagnostic analysis and taking advantage of the theoretical framework presented in Lucarini et al. (2016). I’m going to investigate the properties of extremes of observables under different levels of spatial and temporal coarse graining, so as to understand the effect of averaging on our estimates of extremes. Additionally, statistics of extremes of coarse-grained fields will be compared with those obtained by running models at coarser resolution. Finally, I will investigate the response of the extremes to both time-independent and time-dependent perturbations of the dynamics, using response theory and pullback attractors. Throughout this work both deterministic and stochastic perturbations will be considered, and the results will be used for model-error assessment and analysis of multiscale effects.
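To illustrate the coarse-graining idea, the following minimal sketch (synthetic data, not model output; GEV fitting via `scipy.stats.genextreme`) computes block maxima of a series after averaging over windows of increasing length, showing how temporal averaging damps the extremes:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic "daily" observable: 100 years x 365 days of gamma-distributed
# values (an illustrative stand-in for model output)
daily = rng.gamma(shape=2.0, scale=1.0, size=(100, 365))

def annual_maxima(series, window=1):
    """Average over non-overlapping windows of `window` days, then take
    the block (annual) maximum of the coarse-grained series."""
    n_years, n_days = series.shape
    n = (n_days // window) * window
    coarse = series[:, :n].reshape(n_years, -1, window).mean(axis=2)
    return coarse.max(axis=1)

# GEV fits to the annual maxima at each level of temporal coarse graining:
# averaging shifts the maxima down and narrows their distribution.
fits = {w: genextreme.fit(annual_maxima(daily, window=w)) for w in (1, 5, 30)}
```

Since a window mean can never exceed the largest value inside the window, the coarse-grained maxima are bounded above by the daily ones, and the fitted GEV location parameter decreases with window length.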

As a practical application, this work will lead to the definition of functions describing societal and economic impact of extreme climatic events, along with financial and insurance tools able to manage time-dependent risk assessment.

### Jennifer Israelsson

**Based at:** University of Reading

**Research project:** Developing novel methods for early warning of high impact weather in Africa

Supervisors: Prof. Emily Black (Lead supervisor, Department of Meteorology, University of Reading), Dr. Claudia Neves (Department of Mathematics and Statistics, University of Reading)

Farmers in Africa are highly vulnerable to variability in the weather. Robust and timely information on risk can enable farmers to take action to improve yield. Ultimately, access to effective early warning improves global food security. Such information also forms the basis of financial instruments, such as drought insurance. Monitoring weather conditions is, however, difficult in Africa because of the heterogeneity of the climate and the sparsity of the ground-observing network. Remotely sensed data (for example, satellite-based rainfall estimates) are an alternative to ground observations – but only if the algorithms have skill across the whole rainfall distribution, and if the rainfall estimates are integrated into effective decision-support frameworks. Current satellite-based rainfall estimates work well for rainfall occurrence and for low- and medium-intensity rainfall, but have little skill for heavy rainfall.

Rainfall is often assumed to follow a gamma distribution, which fits the low- and mid-intensity rainfall well but underestimates the probability of heavy rainfall. To model the tails more accurately, we apply extreme value statistics, using both the block maxima and peaks-over-threshold methods. With these methods, only the largest values in the data are used, which makes them suitable for modelling changes in the most adverse events due to, for example, climate change.
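A minimal sketch of the point made above, using synthetic heavy-tailed data in place of real rainfall: a gamma fit to the full distribution underestimates the far tail, while a peaks-over-threshold (Generalised Pareto) fit captures it.

```python
import numpy as np
from scipy.stats import gamma, genpareto

rng = np.random.default_rng(42)
# Synthetic heavy-tailed wet-day "rainfall" (Pareto-type tail);
# purely illustrative, not TAMSAT or gauge data.
rain = rng.pareto(3.0, size=20000) * 5.0

# Gamma fit to the whole distribution (the common default assumption)
a, loc, scale = gamma.fit(rain, floc=0)

# Peaks-over-threshold: Generalised Pareto fit to exceedances of a
# high threshold (here the 95th percentile)
u = np.quantile(rain, 0.95)
excess = rain[rain > u] - u
c, gloc, gscale = genpareto.fit(excess, floc=0)

# Compare tail probabilities P(X > x) at a far-out level
x = np.quantile(rain, 0.999)
p_emp = (rain > x).mean()
p_gamma = gamma.sf(x, a, loc=loc, scale=scale)
p_gpd = 0.05 * genpareto.sf(x - u, c, loc=gloc, scale=gscale)
# Typically p_gamma falls far below p_emp, while p_gpd stays close to it.
```

The factor 0.05 converts the conditional exceedance probability from the Generalised Pareto fit back to an unconditional one, since the threshold was set at the 95th percentile.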

In this project we will assess the effect of climate change on the likelihood of extreme rainfall and temperature events in Africa, and subsequently on adverse agricultural outcomes. We will do so by modelling the probability distributions of gauge observations, TAMSAT V3 and reanalysis data, with a focus on return periods for extreme rainfall and on the uncertainties in those return periods. This analysis will also be extended to modelled datasets for the historical period, including the new ultra-high-resolution (~4 km horizontal resolution) CP4Africa simulations and other high-resolution data, as well as CMIP5. In addition to evaluating the behaviour and representation of heavy rainfall in these datasets, bivariate extreme-value analysis of temperature and precipitation will be conducted to evaluate the effect of a warming climate on precipitation. The results will be applied to TAMSAT calibration algorithms and to improving climatologies for the TAMSAT-ALERT risk assessments.
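Return periods of the kind mentioned above can be estimated by fitting a GEV to annual maxima and inverting the fitted distribution; the sketch below uses synthetic maxima as a stand-in for station records:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# Synthetic annual rainfall maxima (mm), a placeholder for station data;
# note scipy's shape parameter c is the negative of the usual GEV xi.
maxima = genextreme.rvs(c=-0.1, loc=50.0, scale=10.0, size=80,
                        random_state=rng)

# Fit a GEV to the annual maxima, then invert it for return levels:
# the T-year return level is exceeded with probability 1/T in any year.
shape, loc, scale = genextreme.fit(maxima)
levels = {T: genextreme.isf(1.0 / T, shape, loc=loc, scale=scale)
          for T in (10, 50, 100)}
```

With only 80 "years" of data the 100-year level is an extrapolation beyond the sample, which is exactly why quantifying the uncertainty of such return-level estimates matters.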