# MPE CDT Student Cohort 2016

### So Takao

**Based at:** Imperial College London

MPE CDT Aligned student

Supervisor: Darryl Holm

Research Interests: Point Vortex Dynamics, Turbulence, Geophysical Fluid Dynamics, Stochastic Analysis, Symmetry and Reduction in Hamiltonian and Lagrangian Systems, Differential Geometry

Research Project: My current research ideas lie at the intersection of 2D point vortex dynamics and geometric mechanics. Firstly, my idea is to explore a stochastic theory of the motion of point vortices, based on the recent work of Holm (2015) on deriving stochastic fluid models using techniques from geometric mechanics, in order to help understand the phenomenon of vortex crystal relaxation in the 2D turbulence of inviscid fluids. Vortex crystal formation has been observed repeatedly in experiments on magnetized electron columns, which are governed by the same equations as ideal fluids, and in numerical simulations of point vortices, but the formation process is not completely understood. Modelling the weak background vortices as noise may give us insight into this formation process. Secondly, I am thinking about controlling the motion of point vortices on a curved surface (sphere, paraboloid, etc.) by rigid body motion. This can be seen as a generalisation of, for instance, the motion of point vortices on a rotating sphere.
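For concreteness, the deterministic equations of motion underlying this work (each vortex is advected by the flow induced by all the others) can be integrated in a few lines. This is a minimal sketch, using forward Euler in the plane, with no stochastic terms and illustrative circulations:

```python
import math

def vortex_velocities(pos, gamma):
    """Velocity of each point vortex induced by all the others (planar Biot-Savart)."""
    n = len(pos)
    vel = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi, yi = pos[i]
        for j in range(n):
            if i == j:
                continue
            dx, dy = xi - pos[j][0], yi - pos[j][1]
            r2 = dx * dx + dy * dy
            vel[i][0] += -gamma[j] * dy / (2.0 * math.pi * r2)
            vel[i][1] += gamma[j] * dx / (2.0 * math.pi * r2)
    return vel

def step(pos, gamma, dt):
    """One forward-Euler step (adequate for a short illustrative run)."""
    vel = vortex_velocities(pos, gamma)
    return [[p[0] + dt * v[0], p[1] + dt * v[1]] for p, v in zip(pos, vel)]

# Two equal vortices co-rotate about their midpoint.
pos = [[-0.5, 0.0], [0.5, 0.0]]
gamma = [1.0, 1.0]
for _ in range(1000):
    pos = step(pos, gamma, 1e-3)
```

Two equal vortices co-rotate at (nearly) constant separation; the separation and the centre of vorticity are among the conserved quantities that a structure-preserving stochastic extension would aim to respect.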

### Erwin Luesink

**Based at:** Imperial College London

Research Project: Ocean and weather prediction is done with dynamical cores that require a huge amount of computational power. One reason for this computational cost is that the models describe many small-scale effects (e.g. turbulence) that determine the spatial and temporal scales on which the model can be resolved. Additionally, fluid models have been shown to be chaotic in nature, meaning that they are sensitive to initial conditions; small errors can therefore lead to completely different outcomes. By replacing the small-scale effects with a clever type of noise (see Holm 2015), the fluid model is made stochastic in a way that preserves most conservation laws, leading to a model that is easier to resolve than its deterministic counterpart, with higher accuracy. A particularly well-studied example is the Lorenz 1963 (L63) model, which is a Fourier-mode projection of a Rayleigh-Bénard convection problem. By introducing the noise as dictated by the method of Holm 2015, a stochastic version of the L63 model is derived. This type of noise turns out to behave differently from the most common noise found in the literature, both theoretically and numerically. This gives insight into how models for ocean and weather prediction can be made stochastic while resembling the physics of the deterministic version as closely as possible.
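As an illustration of the numerical side, the L63 equations can be integrated as a stochastic differential equation with a simple Euler-Maruyama step. The noise term below is a generic multiplicative placeholder chosen for illustration; the actual noise prescribed by Holm (2015) has a specific geometric structure:

```python
import math
import random

def lorenz63_em(x, y, z, dt, n_steps, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                noise=0.1, seed=0):
    """Euler-Maruyama integration of the Lorenz-63 system with an
    illustrative multiplicative noise term on each component (one shared
    Wiener increment per step).  The structure-preserving noise of
    Holm (2015) is more specific; this only shows the mechanics of an
    SDE step.  noise=0 recovers deterministic forward Euler."""
    rng = random.Random(seed)
    sqrt_dt = math.sqrt(dt)
    traj = [(x, y, z)]
    for _ in range(n_steps):
        dW = rng.gauss(0.0, sqrt_dt)          # Wiener increment
        x, y, z = (x + sigma * (y - x) * dt + noise * x * dW,
                   y + (x * (rho - z) - y) * dt + noise * y * dW,
                   z + (x * y - beta * z) * dt + noise * z * dW)
        traj.append((x, y, z))
    return traj

traj = lorenz63_em(1.0, 1.0, 1.0, dt=1e-3, n_steps=5000)
```

Comparing ensembles generated with different noise structures (additive, multiplicative, transport-type) is exactly the kind of numerical experiment the project describes.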

### Mary O’Donnell

**Based at:** Imperial College London

**Research project:** Vortices in quasigeostrophic turbulence

MPE CDT Aligned student

Research project: Vortices are near-ubiquitous geophysical and astrophysical phenomena. The study of vortices which occur in Earth’s oceans is crucial to our understanding of oceanic currents and climate, in part because the majority of kinetic energy of the ocean is contained within mesoscale vortices, but our fluid dynamical understanding of them is constrained both experimentally and practically. This research project aims to model turbulence in quasigeostrophy and describe, kinetically and statistically, the vortex population which naturally arises. An understanding of vortices modelled in this way should provide insight into the population of vortices which arises due to similar flow regimes in the ocean.

### Maha Hussein Kaouri

**Based at:** University of Reading

MPE CDT Aligned student

Research project: My research will focus on variational data assimilation schemes, where we aim to approximately minimize a function of the residuals of a nonlinear least-squares problem by using newly developed, advanced numerical optimization methods. As the function usually depends on millions of variables, solving such problems can be time-consuming and computationally expensive. A possible application of the method would be to estimate the initial conditions for a weather forecast. Weather forecasting has a short time window (the forecast will no longer be useful after the weather event occurs), so it is important to choose a method which gives the best solution in the time available. This is why the analysis of the efficiency of new techniques is of interest. In summary, the aim of my PhD research is to apply the latest mathematical advances in optimization in order to improve the forecasts made by environmental models whilst keeping computational cost and calculation time to a minimum.
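The class of methods involved can be illustrated with the simplest of them, Gauss-Newton applied to a toy nonlinear least-squares problem (a hypothetical two-parameter curve fit, vastly smaller than any data-assimilation system):

```python
import math

def gauss_newton(ts, ds, a, b, n_iter=20):
    """Gauss-Newton for the toy nonlinear least-squares problem of
    fitting m(t) = a * exp(b * t) to data ds at times ts.  Each step
    solves the 2x2 normal equations (J^T J) p = -J^T r and updates
    the parameters by the step p."""
    for _ in range(n_iter):
        # residuals and Jacobian of the residual vector
        r = [a * math.exp(b * t) - d for t, d in zip(ts, ds)]
        J = [(math.exp(b * t), a * t * math.exp(b * t)) for t in ts]
        # assemble J^T J and J^T r
        g11 = sum(j[0] * j[0] for j in J)
        g12 = sum(j[0] * j[1] for j in J)
        g22 = sum(j[1] * j[1] for j in J)
        b1 = sum(j[0] * ri for j, ri in zip(J, r))
        b2 = sum(j[1] * ri for j, ri in zip(J, r))
        # solve the 2x2 system by explicit inversion
        det = g11 * g22 - g12 * g12
        pa = (-g22 * b1 + g12 * b2) / det
        pb = (g12 * b1 - g11 * b2) / det
        a, b = a + pa, b + pb
    return a, b

# synthetic data generated from a = 2, b = 0.5; start from a perturbed guess
ts = [0.1 * i for i in range(10)]
ds = [2.0 * math.exp(0.5 * t) for t in ts]
a_hat, b_hat = gauss_newton(ts, ds, 1.5, 0.3)
```

In operational variational assimilation the same idea appears as incremental 4D-Var: a sequence of linearised quadratic subproblems, each solved only approximately because the full Jacobian is far too large to form.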

### Philip Maybank

**Based at:** University of Reading

MPE CDT Aligned student

Research project: In Neuroscience, mean-field models are nonlinear dynamical systems that are used to describe the evolution of mean neural population activity, within a given brain region such as the cortex. Mean-field models typically contain 10-100 unknown parameters, and receive high-dimensional noisy input from other brain regions. The goal of my PhD is to develop statistical methodology for inferring mechanistic parameters in this type of differential equation model.

### Karina McCusker

**Based at:** University of Reading

MPE CDT Aligned student

Research project: Fast, approximate methods for electromagnetic wave scattering by complex ice crystals and snowflakes. The goal of my PhD is to develop a method to approximate the scattering properties of ice particles in clouds. This could be used to improve scattering models that are available, and therefore allow more precise retrievals of ice cloud properties. These retrievals could be used to evaluate model-simulated clouds and identify problems that exist in the model, thus enabling improvements to be made to the parameterization of ice processes.

### James Shaw

**Based at:** University of Reading

MPE CDT Aligned student

Research project: Next-generation atmospheric models are designed to be more flexible than previous models, so that the choice of mesh and choices of numerical schemes can be deferred or changed during operation (Ford et al., 2013; Theurich et al., 2015). My PhD project seeks to make numerical weather and climate predictions more accurate by developing new meshes and numerical schemes that are suitable for next-generation models. In particular, the project addresses the modelling of orographic flows on arbitrary meshes, focusing on three aspects: first, how orography is best represented by a mesh; second, how to accurately advect quantities over orography and, third, how to avoid unphysical solutions in the vertical balance between pressure and temperature.
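To give a flavour of the numerical schemes involved, here is the simplest transport scheme of all: first-order upwind advection on a periodic 1-D grid. This is a toy without orography (the project's schemes are far more sophisticated), but it already shows the conservation and monotonicity properties one checks for:

```python
def upwind_advect(q, u, dx, dt, n_steps):
    """First-order upwind advection of tracer q on a periodic 1-D grid
    with constant velocity u > 0.  Mass-conservative and monotone, but
    diffusive: sharp features are smoothed out."""
    c = u * dt / dx                     # Courant number, must satisfy 0 <= c <= 1
    assert 0.0 <= c <= 1.0
    q = list(q)
    n = len(q)
    for _ in range(n_steps):
        # q[i-1] wraps around at i = 0, giving the periodic boundary
        q = [q[i] - c * (q[i] - q[i - 1]) for i in range(n)]
    return q

# advect a square pulse once around the periodic domain
n = 100
q0 = [1.0 if 40 <= i < 60 else 0.0 for i in range(n)]
q1 = upwind_advect(q0, u=1.0, dx=1.0, dt=0.5, n_steps=200)
```

After one full circuit the pulse returns to its starting position with total mass unchanged, but visibly diffused; higher-order schemes trade some of this diffusion for the risk of spurious oscillations, a central tension in advection over steep orography.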

### William Mcintyre

**Based at:** University of Reading

**Research project:** Mathematical Modelling of Atmospheric Convection for better Regional Climate Change Prediction

MPE CDT Aligned student

Research project: Atmospheric convection occurs on length scales far smaller than the grid scales of numerical weather prediction and climate models. However, as the resolution of modern models continues to increase, local convective effects become ever more significant and, thus, there is a demand for new convection schemes which can produce accurate results at these new scales. One such candidate is conditional averaging, an approach in which grid boxes are split into convectively stable and unstable regions and separate differential equations are solved for each. The scheme incorporates mass transport by convection and memory, features often ignored in current models. There is thus the possibility of better representing convection using this new approach.

### Maximilian Engel

**Based at:** Imperial College London

**Research project:** Bifurcations in random dynamical systems

MPE CDT Aligned student

Research project: my research is divided into two closely related parts. In the first part, I consider two-dimensional models of stochastically driven limit cycles, which are used to describe oceanic weather fluctuations, and study the effect of the interaction between the noise excitation and a phase amplitude coupling, also called shear. I can show that for a certain class of such models there is a transition from noise-induced synchronisation to noise-induced chaos depending on the level of shear.

The synchronisation means convergence to one trajectory for all initial conditions, given a certain noise realisation. The chaos is measured by a positive stability exponent (the Lyapunov exponent), positive entropy and certain properties of the invariant random measure. In models without shear, we can show that noise destroys the bifurcation from equilibria to limit cycles, i.e. Hopf bifurcations, with respect to the attracting objects, but that the bifurcation still manifests itself in terms of a loss of hyperbolicity.

The second part consists of comparing such bifurcation phenomena for unbounded noise with scenarios for bounded noise. The first approach is to look at killed processes, i.e. to study only trajectories that survive on a bounded domain for potentially unbounded noise. The second approach is to model the bounded noise as a function of a fast chaotic variable perturbing the slow variable, where the noise converges to Brownian motion in the scaling limit. I can use recent important results concerning stochastic limits of fast-slow systems.

### Hemant Khatri

**Affiliations:** Department of Mathematics

**Based at:** Imperial College London

**Research project:** Alternating Jets in the Oceans

MPE CDT Aligned student

Supervisor: Pavel Berloff (Imperial College London, Department of Mathematics)

Research project: Alternating jets have been well observed in Earth's oceans and in planetary atmospheres such as Jupiter's. These are elongated structures in the zonal velocity contours, which can be a few hundred to a few thousand kilometres long and can persist for a few weeks to months. Many mechanisms have been proposed to explain the phenomenon; however, not all characteristics of these jets are explained, and the reality is rather more complex. The aim of my project is to understand the phenomenon using more complex mathematical models, which incorporate real-world parameters. In particular, I am studying what effect bottom topography has on the stability of the jets.

### Ivis Kerama

**Based at:** University of Reading

**Research project:** Improved Approximate Bayesian Computation (ABC) for inference of the likely effects of climate change on animal populations

Supervisors: Richard Everitt (Lead Supervisor, Department of Mathematics and Statistics, University of Reading), Richard Sibly (School of Biological Sciences, University of Reading) and Robert Thorpe (Centre for Environment, Fisheries & Aquaculture Science (Cefas))

Summary: individual-based models (IBMs) are used to simulate the actions of individual animals as they interact with one another and the landscape in which they live. IBMs are also known as Agent Based Models, and have a history dating back to the 1940s. The first models were types of cellular automata, which dynamically model the states (e.g. black or white) of a grid of cells through the application of rules that act locally (e.g. the state of a cell at a particular time depends on the state of its neighbours at the previous time). Conway’s “Game of Life” is the most widely known example of such a system. Now, the term IBM is used for any model in which a global system is modelled through local interactions; these systems play a big role in “Complexity Science”. The study of such systems often focuses on whether useful global-scale emergent behaviour arises through modelling on the local-scale. However, this project is less focussed on the characteristics of the forward model, and more on solving the inverse problem of parameter estimation.
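The cellular-automaton ancestry mentioned above is easy to make concrete; Conway's Game of Life, the canonical example of global behaviour emerging from local rules, fits in a dozen lines:

```python
from collections import Counter

def life_step(live):
    """One step of Conway's Game of Life on an unbounded grid.
    `live` is a set of (x, y) coordinates of live cells.  A cell is
    alive next step if it has exactly 3 live neighbours, or 2 live
    neighbours and is currently alive."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# the 'blinker' oscillates between a horizontal and a vertical bar
blinker = {(0, 0), (1, 0), (2, 0)}
after_one = life_step(blinker)    # vertical bar
after_two = life_step(after_one)  # back to the original horizontal bar
```

An ecological IBM replaces these binary cells with agents carrying state (age, energy, location) and replaces the fixed rules with stochastic behavioural ones, but the local-to-global structure is the same.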

When used in spatially-explicit landscapes, IBMs can show how populations change in response to climate change or management actions. For instance, IBMs are being used to design strategies of conservation and of the exploitation of fisheries given the likely effects of climate change on fish species. Stochastic computer simulation models are often the only practical way of answering such questions relating to climate change and ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate: there is an urgent need for improved methodology for performing these tasks, since existing methods are too slow and not always accurate. This project aims to improve the best existing method: Approximate Bayesian Computation (ABC).

ABC is currently being used at Reading for statistical inference in a diverse range of applications in ecology, evolution and more widely. Examples include models of mackerel in the North East Atlantic, elephants in Amboseli National Park in Kenya, and local butterfly populations. These projects are investigating the likely impact of climate change. Other projects are studying cockles in the Burry Inlet; the evolution of pathogens; social network analysis; and statistical physics. In most of these cases the challenges of parameter estimation and model comparison are both of importance, but implementation can prove computationally expensive. This project aims to improve ABC methods and to collaborate with model builders to help them in fitting models to data. Initial focus will be on IBMs developed for fisheries management by Cefas, part of the UK government, where predicting the likely effects of climate change is of central importance.

ABC works by comparing model outputs with data and is particularly useful for statistical inference where the model is only available in the form of a computer simulator, such as an IBM. Ideal Bayesian methods produce a multivariate posterior distribution over the parameters. This posterior distribution specifies the degree of support for different parameter vectors given the model, data and prior knowledge about the values the parameters are likely to take. Identifying the exact posterior is not always feasible, leading to the development of approximate Bayesian methods, such as ABC. ABC outputs a sample of parameter vectors, which constitutes an approximation to the posterior distribution. This sample is found through intelligently searching the space of possible parameters, finding parameters that yield simulation outputs similar to real data.
Originally developed within population genetics, ABC is now widely used, with recent applications to, for example, range expansions, emerging infectious diseases, and forest dynamics. Despite its successes, there are situations in which most currently available ABC methodologies are not computationally feasible: in cases when either the parameter space is high dimensional, or when the simulator is computationally expensive. In previous work on IBMs we have also observed limitations of ABC techniques on models that do not provide an ideal fit to data. The overarching aim of this project is to improve ABC methods to make them sufficiently fast and accurate that they can be widely used to evaluate the likely effects of extreme weather and climate change.
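The basic rejection form of ABC described above can be sketched directly. The toy simulator below (inferring the mean of a Gaussian from a sample-mean summary statistic) stands in for an IBM, which would be far more expensive to run, which is precisely why more efficient ABC variants are needed:

```python
import random

def abc_rejection(simulate, summary, observed, prior_sample,
                  n_samples, epsilon, rng):
    """Rejection ABC: draw parameters from the prior, run the simulator,
    and keep parameters whose simulated summary statistic lies within
    epsilon of the observed summary."""
    accepted = []
    s_obs = summary(observed)
    while len(accepted) < n_samples:
        theta = prior_sample(rng)
        data = simulate(theta, rng)
        if abs(summary(data) - s_obs) < epsilon:
            accepted.append(theta)
    return accepted

# Toy example: infer the mean of a Gaussian with known sd = 1.
rng = random.Random(42)
observed = [rng.gauss(3.0, 1.0) for _ in range(50)]
posterior = abc_rejection(
    simulate=lambda theta, r: [r.gauss(theta, 1.0) for _ in range(50)],
    summary=lambda xs: sum(xs) / len(xs),
    observed=observed,
    prior_sample=lambda r: r.uniform(0.0, 10.0),
    n_samples=200, epsilon=0.3, rng=rng)
post_mean = sum(posterior) / len(posterior)
```

The accepted sample concentrates around the true mean. The inefficiency is visible even here: most prior draws are wasted, and each costs a full simulator run, which motivates the sequential and surrogate-based ABC schemes this project studies.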

### Golo Wimmer

**Based at:** Imperial College London

**Research project:** Compatible numerics for numerical weather prediction

Supervisors: Colin Cotter (Lead Supervisor, Department of Mathematics, Imperial College London), Tommaso Benacchio (Met Office), Werner Bauer (Department of Mathematics, Imperial College London)

Summary: compatible finite element methods have recently been proposed as a flexible discretisation for the dynamical core of weather and climate models. This flexibility allows the use of more general types of grids that avoid the parallel-computing scalability bottlenecks associated with the latitude-longitude grid. They are currently being developed for a new dynamical core at the Met Office, within the Gung Ho project. In this PhD project we investigate conservation properties of these discretisations, and approximate systems of equations that filter out sound waves, for example.

### Ben Snowball

**Based at:** Imperial College London

**Research project:** A Generalised Computational Mathematics Framework for Lagrangian Particle Tracking Oceanography

Supervisors: David Ham (Lead Supervisor, Department of Mathematics, Imperial College London), Erik van Sebille, (Department of Physics and Grantham Institute, Imperial College London), Michael Lange (Department of Earth Science and Engineering, Imperial College London)

Summary of MRes project: Lagrangian particle tracking is a key analytic tool in oceanography. This project will apply the techniques of symbolic numerical mathematics to express Lagrangian particle problems in a high level mathematical syntax. The project will employ the Firedrake automatic numerical PDE system to represent the Eulerian background fields in parallel and will couple this to the recently developed Parcels Lagrangian ocean particle tracking code to create the fully capable system. Mathematically, this requires understanding the graph theory of parallel meshes, the functional analysis of Eulerian PDE methods and the linear analysis of numerical ODE solvers. This understanding must be used to create and compose symbolic code objects with the correct mathematical properties and operations.
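The core operation of a Lagrangian particle tracker is the numerical integration of particle positions through a velocity field. A minimal sketch against an analytic field (real tools such as Parcels interpolate gridded Eulerian ocean data instead):

```python
import math

def rk4_track(x, y, velocity, dt, n_steps):
    """Track a Lagrangian particle through a steady 2-D velocity field
    with classical fourth-order Runge-Kutta.  `velocity(x, y)` returns
    the local (u, v) pair."""
    path = [(x, y)]
    for _ in range(n_steps):
        k1 = velocity(x, y)
        k2 = velocity(x + dt / 2 * k1[0], y + dt / 2 * k1[1])
        k3 = velocity(x + dt / 2 * k2[0], y + dt / 2 * k2[1])
        k4 = velocity(x + dt * k3[0], y + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        path.append((x, y))
    return path

# solid-body rotation: particles move on circles around the origin
rotation = lambda x, y: (-y, x)
path = rk4_track(1.0, 0.0, rotation, dt=0.01, n_steps=int(2 * math.pi / 0.01))
```

For solid-body rotation the particle should return to its starting point after one period; checking invariants like this against analytic fields is how the ODE-solver component of such a system is validated before it is coupled to real ocean data.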

### Christodoulos Savva

**Based at:** Imperial College London

**Research project:** Computational complexity of climate predictions through non-linear dynamics

Supervisors: Davoud Cheraghi (Lead Supervisor, Department of Mathematics, Imperial College London) and Gabriel Rooney (Met Office)

Summary: The main objective of this project is to carry out a systematic study of the computational complexity of long-term predictions for Lorenz systems. We shall quantify the complexity of the algorithms needed to obtain a given precision. This will have a number of applications to the subject of numerical algorithms for climate predictions modelled by the Lorenz flow. Notably, we expect to explain the effect of round-off errors in such systems, and also the hardware requirements to analyse ensemble data to achieve a given level of resolution.

### Maria Jacob

**Based at:** University of Reading

**Research project:** Modelling Scedasis in Electricity Load Profiles

Supervisors: Danica Vukadinovic Greetham (Lead Supervisor, Department of Mathematics and Statistics, University of Reading), Claudia Neves (Department of Mathematics and Statistics, University of Reading)

Industrial Partner: Dr. Maciej Fila (Scottish and Southern Electricity Networks, Reading)

Summary: electric load forecasts are extremely important for both industry and society as they are hugely informative in various decision-making processes. For example, in the utility industry, electric load forecasts are used to make decisions about energy trading, pricing and generation. As electricity distribution from renewable sources increases and heat and transport move toward electrification, individual forecasts become more important, as they can inform, at both the national and community level, whether and by how much the infrastructure should be upgraded. Most studies in electric load forecasting in the past century have focused on point load forecasting. However, in the most recent decade, more researchers have delved into providing probabilistic load forecasts (PLF) as business needs and electricity demand and generation evolve. Hong and Fan (2016) provided an in-depth literature review outlining various multiple linear regression, machine learning and graph-theory-based forecasts, and various ways to validate a forecast, such as the p-norm error. While most of these forecasts perform reasonably well in terms of averages, they tend to smooth out much of the peakiness of the actual electric load. To the best of our knowledge, no studies quantifying the extremal behaviour of the underlying probability distribution of the load have been conducted. For most electricity providers it is advantageous and indeed valuable to be able to forecast and prepare for spikes in demand, particularly when they occur simultaneously in multiple households.

Engaging in this kind of study, which enables electricity distributors to generate better electricity load forecasts, will enhance capacity building for innovation in the management of the electricity grid, making it possible not only to implement timely infrastructure development but also to minimise resource consumption and thus reduce greenhouse gas emissions.
Reciprocally, in energy trading, load determines the instantaneous unit price of electricity. Thus, being able to forecast peaks accurately, both in time and in magnitude, allows traders to maximise profits in the energy market.

This project will take a two-pronged approach. One of the two prongs will look to improving individual electric load forecasts in general by adding more features such as renewable energy integration, temperature, demographics, clustering. The other will adapt existing methodology in extreme value analysis to the data and application at hand. The project will bring the two together by using concepts of heteroscedasticity, error analysis and forecast validation. It will also include parametric estimators which describe the dependency between meteorological weather telltale signs such as rainfall and temperature and thus describe extreme behaviour of electricity loads under various conditions.

### Tsz Yan Leung

**Based at:** University of Reading

**Research project:** The Role of Mesoscale Dynamics in Predictability

Supervisors: Ted Shepherd (Lead Supervisor, Department of Meteorology, University of Reading), Sebastian Reich (Potsdam University and Department of Mathematics and Statistics at the University of Reading) and Martin Leutbecher (ECMWF)

Summary: it is a well-accepted fact in dynamic meteorology that the chaotic nature of atmospheric dynamics imposes an inherent finite limit of predictability (the celebrated ‘butterfly effect’). The quantification of forecast uncertainty is thus an important question. A mathematical theory behind this fact was first treated rigorously by Lorenz (1969), who used a perfect model to estimate the growth of initial errors.
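The finite limit of predictability is easy to demonstrate numerically with a twin experiment on the Lorenz-63 system (used here purely as a low-dimensional stand-in for the atmosphere, not as the project's model): two states differing by 1e-8 diverge exponentially until the error saturates at the size of the attractor.

```python
import math

def l63_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of the deterministic Lorenz-63 system."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(add(state, k1, dt / 2))
    k3 = f(add(state, k2, dt / 2))
    k4 = f(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def error_growth(n_steps=2000, dt=0.01, eps=1e-8):
    """Twin experiment: evolve two states differing by eps and record
    the log of their separation at every step."""
    a = (1.0, 1.0, 1.0)
    for _ in range(1000):        # spin up onto the attractor first
        a = l63_step(a, dt)
    b = (a[0] + eps, a[1], a[2])
    log_sep = []
    for _ in range(n_steps):
        a, b = l63_step(a, dt), l63_step(b, dt)
        sep = math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
        log_sep.append(math.log(sep))
    return log_sep

log_sep = error_growth()
# the slope of log_sep in the linear-growth phase estimates the leading
# Lyapunov exponent (roughly 0.9 per unit time for the standard parameters)
```

The project's question is how this picture changes when model error (e.g. from convection parameterisation) is added alongside initial error, and at much higher dimension.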

However, atmospheric models cannot be perfect. Phenomena beyond the truncation scale have to be parameterised, leading to unavoidable model errors in the forecasts. An important class of such small-scale phenomena is convection. Increasing the horizontal resolution to allow explicit convection may therefore change the role of model error in the uncertainty of forecasts. The ultimate goal of the project is to study error growth properties of such a convection-permitting version of the ECMWF’s Integrated Forecasting System and compare them with the lower-resolution, convection-parameterised runs. Implications for ensemble forecasting will also be studied.

As an intermediate step, I have already been involved in a numerical investigation of the dependence of error-growth behaviour on initial and model errors, in the context of the idealised α-turbulence model, as part of my MRes work. This helps develop understanding and provides insights into diagnostic quantities of error growth in the full model. The PhD project shall then expand on this by studying the relevant mathematical aspects (Lyapunov exponents, closure theory and model attractors) and extend the conclusions to the ECMWF model.

The project is expected to help provide guidance on the importance of representing initial and model errors of current numerical weather prediction (NWP) ensembles at scales in the dissipation range (i.e. less than ~100 km), and on how important it is to correctly model the background energy spectrum in the atmosphere in order to realistically simulate error growth in NWP ensembles.

### Ioannis Katharopoulos

**Based at:** University of Reading

**Research project:** Glacial-Interglacial cycles and stochastic resonance

Supervisors: David Ferreira (Lead Supervisor, Department of Meteorology, University of Reading) and Tobias Kuna (Department of Mathematics and Statistics, University of Reading).

Summary: for the last 3 million years, Earth's climate has been oscillating between interglacial states (like today's climate) and glacial states (when ice sheets covered North America and Scandinavia). Various observations establish a statistical link between the Glacial-Interglacial Cycles (GIC) and the Milankovitch cycles, the slow oscillations of Earth's orbital parameters - eccentricity (100 kyr), obliquity (41 kyr) and precession (23 kyr) - that perturb the incoming solar radiation on Earth. However, we do not have a well-established theory for this link, revealing a critical gap in our understanding of the climate system and raising questions about our ability to predict its future evolution. Paillard (2015) recently reviewed the existing theories for the GIC found in the literature, revealing the lack of consensus and even the absence of a leading hypothesis.

In the 80s, Nicolis, Benzi and collaborators proposed a novel hypothesis: Earth’s climate is in stochastic resonance (SR) with the Milankovitch cycles (e.g. Benzi, 2010). This requires that Earth’s climate is endowed with multiple stable states. Under this condition, matching between the characteristic timescale of unforced transitions (Kramers rate, controlled by the noise) and the external forcing timescale (Milankovitch) could generate regular transitions between the two states, even for a very weak forcing (i.e. too weak to force transitions deterministically). Though very appealing, the application of this idea ran into two obstacles:

1) complex ocean-atmosphere General Circulation Models (GCMs) (and so, one concludes, the real Earth) did not exhibit multiple stable states, and 2) the stable states of classical low-order models were not representative of the observed GIC. Ferreira et al. (2011), Rose et al. (2013) and Ferreira et al. (2017) (in revision) made significant progress in this direction: firstly, they found multiple equilibria in a coupled GCM; secondly, they showed that the climate shift between states is comparable to that of the GIC; and finally, they developed a 1D Energy Balance Model (EBM) that mimics the behaviour of the coupled GCM. There are a few applications of stochastic resonance to climate problems; however, most of these are concerned with Dansgaard-Oeschger events during glacial times, rather than the GIC themselves. For GIC applications, one has to go back to the early work of Nicolis and Nicolis (1981) and Benzi (1982). However, they used 0D models, which were physically unrealistic or employed ad-hoc formulations (i.e. these models did not exhibit stable states similar to the GIC, nor did they include feedbacks of the climate system).
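The stochastic-resonance mechanism itself can be shown in a few lines with the standard double-well toy model dX = (X - X^3 + A cos(ωt)) dt + σ dW (illustrative parameters, not a calibrated climate model): the periodic forcing is sub-threshold, so transitions between the two "climate states" occur only when the noise is strong enough.

```python
import math
import random

def double_well_sr(sigma, amp=0.2, omega=2 * math.pi / 100.0,
                   dt=0.01, n_steps=200_000, seed=1):
    """Euler-Maruyama integration of dX = (X - X^3 + A cos(omega*t)) dt
    + sigma dW, the classic stochastic-resonance toy: a double-well
    potential with wells at x = +/-1 and a weak periodic forcing
    (A = 0.2 is below the deterministic transition threshold of about
    0.385).  Returns the number of well-to-well transitions."""
    rng = random.Random(seed)
    x, t = 1.0, 0.0
    crossings, prev_sign = 0, 1
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_steps):
        x += (x - x ** 3 + amp * math.cos(omega * t)) * dt \
             + sigma * sqrt_dt * rng.gauss(0.0, 1.0)
        t += dt
        if x * prev_sign < 0:          # the particle changed wells
            crossings += 1
            prev_sign = -prev_sign
    return crossings

quiet = double_well_sr(sigma=0.05)   # too little noise: trapped in one well
noisy = double_well_sr(sigma=0.5)    # enough noise: repeated transitions
```

Resonance proper occurs when the Kramers escape time matches half the forcing period, so that the transitions lock onto the forcing; sweeping sigma and timing the crossings against the forcing phase reproduces the classic SR curve.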

The goal of this PhD is to revisit a stochastic resonance theory for the GIC in the light of the new results mentioned above. This will be the first attempt to apply SR to a 2-layer 1D EBM which exhibits multiple stable states similar to the GIC. The latter point is crucial, as it addresses the caveats of earlier studies and will allow us to make connections with the observations and engage with the paleoclimate community, raising the potential impact of this work.

### Laura Mansfield

**Based at:** University of Reading

**Research project:** Model reduction using emulation for understanding and predicting climate responses to different regional emission forcings

Supervisors: John Methven (Lead Supervisor, Department of Meteorology, University of Reading), Hilary Weller (Department of Meteorology, University of Reading) and Tristan Pryer (Department of Mathematics and Statistics, University of Reading)

Prof. Sir Brian Hoskins FRS (Department of Meteorology, University of Reading), Secondary: Dr. Apostolos Voulgarakis (Department of Physics, Imperial College London), Dr. Richard Everitt (Department of Maths, University of Reading)

Summary: the overall global response of the atmosphere to different forcings has been researched substantially, but the global and regional responses to region-specific emissions have not been studied in as much depth. Long-lived pollutants (e.g. greenhouse gases (GHGs) such as CO2) become evenly spread over the atmosphere and cause homogeneous forcing worldwide. In contrast, short-lived pollutants, such as aerosols (e.g. sulphate, black carbon), and hence their associated forcing, are distributed inhomogeneously. Short-lived pollutants are important for decadal climate prediction as they have a strong impact on climate on such time-scales, while sustained changes in their emissions can have implications for future peak temperatures under GHG mitigation scenarios, and for how those scale across different regions of interest. The focus of the majority of previous studies has been on the mid-latitudes, where most of the developed countries lie (e.g. Europe, the U.S., China and East Asia). However, anthropogenic emissions from the tropics are also likely to change substantially in the near future, due to rapidly developing countries, e.g. in Africa, so more research in this area is needed to predict the future global and regional climate response to those changing forcings. Additionally, emissions from fire activity in the tropics can simultaneously impact large-scale climate in ways that are largely unexplored.

The primary aim of the project is to reduce a complex, computationally demanding climate model down to a statistical model which describes regional patterns in the climate response to regional forcings. Secondly, investigations of the physical mechanisms that link regional forcings with the global and regional responses will be pursued, such as the interactions of forcings with internal large-scale climate oscillations.

### Giulia Carigi

**Based at:** University of Reading

**Research project:** Ergodic properties and response theory for models in geophysical fluid dynamics of intermediate complexity

Supervisors: Jochen Broecker (Lead Supervisor, Department of Mathematics and Statistics, University of Reading), Martin Rasmussen (Department of Mathematics, Imperial College London), Tobias Kuna (Department of Mathematics and Statistics, University of Reading), Valerio Lucarini (Department of Mathematics and Statistics, University of Reading)

Summary: in the climate sciences, there is enormous interest in understanding the long-term average behaviour of the climate system. In the context of climate models, this behaviour is expressed intrinsically through concepts such as invariant measures, attractors, Lyapunov vectors and Lyapunov exponents, or more generally pullback attractors in the presence of time-dependent forcing (deterministic or stochastic). Further, the response of these objects to changes in external parameters is important, as this provides a mathematical framework in which climate change triggered by a change in external parameters or forcings can be investigated. Therefore, by studying the ergodic properties of and response theory for models in geophysical fluid dynamics, we expect not only a better understanding of the dynamical effects of climate change in general, but also of whether statistical properties derived under current climate conditions, such as downscaling approaches or forecast quality assessments, will remain valid under future climates. As opposed to conceptual (low-dimensional) climate models, these questions will be addressed in the context of more realistic atmospheric and ocean models such as the two-dimensional Navier-Stokes equations, classical two-layer quasigeostrophic (QG) models, or three-layer QG models where the third layer represents an ocean.

The main goal of this project is to investigate the ergodic properties of medium-complexity QG models, in particular attractors, Lyapunov exponents and pullback attractors (in the nonautonomous case). Further, the response of such models to changes in parameters will be considered (with and without stochastic forcing). In particular, we aim to establish results regarding the response to time-dependent perturbations, thereby putting important techniques in the climate sciences on a rigorous footing. These questions will be addressed in the context of the 2D Navier-Stokes equations, but also two-layer and three-layer QG models. As part of this, we expect that some of the following results available for 2D Navier-Stokes can be established for these more general models (potentially in modified form):

1. Finite dimensional attractors under constant forcing

2. Local Lyapunov exponents under constant forcing (accompanied by numerical experiments)

3. Pullback attractors under time dependent deterministic and random forcing

4. Ergodic invariant measures under random forcing

Invariant measures for nonautonomous dynamical systems have not so far been analysed in any depth. The project will contribute to the theory of measures on pullback attractors (with the response to time-dependent forcing as part of this). Depending on time and progress, the project will also look into bifurcations of such models, since bifurcations essentially imply the breakdown of linear response theory.
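
As a concrete illustration of item 2 above, the largest Lyapunov exponent of a chaotic flow can be estimated numerically by the standard two-trajectory renormalisation (Benettin) method. The sketch below uses the classical Lorenz 1963 system as a convenient low-order stand-in; the system, integrator and parameter values are illustrative choices, not part of the project itself:

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classical Lorenz 1963 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_lyapunov(n_steps=50000, dt=0.01, d0=1e-8):
    """Benettin method: track a reference and a perturbed trajectory,
    accumulate the log of the separation growth, and renormalise the
    perturbation back to size d0 at every step."""
    a = np.array([1.0, 1.0, 1.0])
    for _ in range(1000):          # discard the transient
        a = rk4_step(lorenz63, a, dt)
    b = a + np.array([d0, 0.0, 0.0])
    log_sum = 0.0
    for _ in range(n_steps):
        a = rk4_step(lorenz63, a, dt)
        b = rk4_step(lorenz63, b, dt)
        d = np.linalg.norm(b - a)
        log_sum += np.log(d / d0)
        b = a + (b - a) * (d0 / d)  # renormalise the separation
    return log_sum / (n_steps * dt)
```

For the standard parameters, the estimate converges to a positive value (around 0.9), the signature of sensitive dependence on initial conditions; the same machinery extends, mode by mode, to the local Lyapunov exponents of the QG models considered here.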

### Kevin Synnott

**Based at:** University of Reading

**Research project:** Inferring Gaussian Markov random field models for estimating the surface ocean dissolved CO2 distribution

Supervisors: Richard Everitt (Lead Supervisor, Department of Mathematics and Statistics, University of Reading), Heather Graven (Department of Physics, Imperial College London)

Summary of the MRes project: In the study of weather and climate data, accounting accurately for its spatial (and often spatio-temporal) variation is important to guarantee the accuracy of inferences made from the data. An area well-suited for further development and application of new statistical mapping approaches is the estimation of the global distribution of dissolved carbon dioxide (CO2) in the surface ocean using the available sparse observations. The ocean presently absorbs approximately 25% of the emissions of CO2 from human activities, which are the primary cause of climate change. One of the main techniques for estimating ocean CO2 uptake makes use of ship-based observations of dissolved CO2 in the surface ocean from research cruises and commercial “ships-of-opportunity”. My MRes project proposes to model spatial dependence through the use of Gaussian Markov random fields, applied to ocean CO2 data. This is a mathematically rich subject area that involves ideas from statistical inference, stochastic PDEs and numerical linear algebra. Continuously indexed Gaussian fields (GFs) are a cornerstone of spatial statistics, but suffer from the “big n problem”: the high computational cost (O(n³)) of factorising an n × n covariance matrix when n is large. Some GFs may be seen to be solutions of linear stochastic partial differential equations, and it has recently been shown (Lindgren, Rue, and Lindström 2011) that approximate stochastic weak solutions to these SPDEs are given by discretely indexed Gaussian Markov random fields (GMRFs). GMRFs are special cases of multivariate Gaussian distributions with the property that many of the entries in their precision matrices are zero. This makes computations with GMRFs far more feasible than with GFs, using ideas from numerical linear algebra to significantly simplify the matrix manipulations.
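
The computational saving from sparsity can be seen in a minimal sketch (the model and parameter values are illustrative, not those of the project): the precision matrix of a first-order GMRF on a 1D grid, of the kind arising from the SPDE discretisation of Lindgren et al., is tridiagonal, so only O(n) of its n² entries are non-zero, and sampling reduces to a triangular solve. A dense NumPy Cholesky is used here for clarity; in practice a sparse Cholesky routine would exploit the zero pattern directly:

```python
import numpy as np

def gmrf_precision_1d(n, kappa=1.0):
    """Tridiagonal precision matrix of a first-order GMRF on a 1D grid,
    a discretisation of (kappa^2 - Laplacian) in the spirit of the
    SPDE approach of Lindgren et al. (illustrative parameters)."""
    Q = np.zeros((n, n))
    i = np.arange(n)
    Q[i, i] = kappa**2 + 2.0        # diagonal
    Q[i[:-1], i[:-1] + 1] = -1.0    # super-diagonal
    Q[i[:-1] + 1, i[:-1]] = -1.0    # sub-diagonal
    return Q

rng = np.random.default_rng(0)
n = 200
Q = gmrf_precision_1d(n)
# Only ~3n of the n^2 precision entries are non-zero:
sparsity = np.count_nonzero(Q) / Q.size
# Sample x ~ N(0, Q^{-1}) from the precision: with Q = L L^T and
# z ~ N(0, I), solving L^T x = z gives Cov(x) = Q^{-1}.
L = np.linalg.cholesky(Q)
z = rng.standard_normal(n)
x = np.linalg.solve(L.T, z)
```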

### Marco Gorelli

**Based at:** University of Reading

**Research project:** Modelling the surface of cloud and snow via Kardar-Parisi-Zhang equation

Supervisors: Horatio Boedihardjo (Lead Supervisor, Department of Mathematics and Statistics, University of Reading), Jochen Broecker (Department of Mathematics and Statistics, University of Reading), Chris Westbrook (Department of Meteorology, University of Reading), Dan Crisan (Department of Mathematics, Imperial College London), Tobias Kuna (Department of Mathematics and Statistics, University of Reading)

Summary: the mathematical description of meteorological fields such as snow cover and clouds remains a serious challenge due to the highly irregular structure of these fields. In this project, we will investigate the modelling of clouds and other irregular meteorological fields via the Kardar-Parisi-Zhang equation (the KPZ equation for short). In physics, the KPZ equation is a stochastic partial differential equation (SPDE) that has been widely used to describe the universal large-scale behaviour of highly irregular surface growth. It has been suggested that the KPZ equation is a good model for cloud perimeters and snow surfaces, and it is therefore natural to conjecture that it may also apply to other “growing surfaces” in atmospheric physics, such as the height of the convective boundary layer. There is preliminary evidence that this is true. A key goal of this PhD project is to evaluate these suggestions, and the effectiveness of KPZ as a model for such variables, using recent mathematical developments in the understanding of the KPZ equation.
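
For reference, the KPZ equation for a height field h(x, t) reads (in its standard form; the coefficients are generic model parameters, not values taken from this project):

```latex
\frac{\partial h}{\partial t}
  = \nu \,\nabla^{2} h
  + \frac{\lambda}{2}\,\lvert \nabla h \rvert^{2}
  + \eta(x,t)
```

where the first term describes surface relaxation (smoothing), the nonlinear second term describes lateral growth along the local surface normal, and η is a space-time white noise driving the roughening.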

### Birgit Sützl

**Based at:** Imperial College London

**Research project:** Quantifying heterogeneity and microclimates in urban developments

Supervisors: Maarten van Reeuwijk (Lead Supervisor, Department of Civil and Environmental Engineering, Imperial College London), Colin Cotter ( Department of Mathematics, Imperial College London) and Gabriel Rooney (Met Office)

Summary: The majority of the world’s population now lives in cities, and with a continuing trend of urbanisation, cities are under increasing pressure from resource scarcity, air pollution and hazards from a changing climate. In order to develop effective strategies for these problems, it is important to understand the complex interplay of the heterogeneous urban environments of streets, buildings and open space with each other and the various atmospheric weather conditions. This project will seek to gain fundamental insight into the modelling of urban heterogeneity and its effects on the urban microclimate. Based on the DALES-URBAN Large-eddy-simulation model, which can resolve the air flow between individual buildings to the scale of meters, the aim is to develop a reduced order model for the heterogeneous urban surface. This model can then be used to study optimal design strategies for streets, open spaces and the use of urban vegetation, to contribute to sustainable and resilient urban developments.

### Imogen Dell

**Based at:** Imperial College London

**Research project:** Two-way Interactions of Troposphere and Stratosphere via Radiation, Reflection and Breakdown of Rossby Waves

Supervisors: Xuesong Wu (Lead Supervisor, Department of Mathematics, Imperial College London) and John Methven (Department of Meteorology, University of Reading)

Summary: weather and climate are fundamentally underpinned by complex processes taking place in the troposphere and stratosphere. The troposphere is extremely active, and there a variety of atmospheric waves are generated through different mechanisms, including topographic forcing, heat sources and shear instabilities. These waves propagate upward to influence the stratosphere: primarily, the momenta carried by the waves are deposited there, changing the zonal mean flow and causing interannual variability. However, rather than being passive, the stratosphere also affects tropospheric dynamics; that is, there is a two-way coupling between the stratosphere and troposphere. It is now recognized that understanding and accounting for this coupling is crucial for improving weather and climate predictions. Among the numerous mechanisms proposed, an interesting and important one is the mutual interaction through Rossby waves: the troposphere radiates Rossby waves, which are reflected by the stratosphere back to the troposphere, simultaneously influencing the radiation itself. Strong evidence for this has been provided by careful analysis of observational data.

The aim of the present project is (a) to investigate first the key fundamental aspects of troposphere-stratosphere interactions, namely the generation, reflection and breakdown of Rossby waves as well as their back action on the troposphere, (b) to integrate these constituent fundamental processes in a unified framework, thereby constructing a self-consistent, physics-based model for troposphere-stratosphere coupling, and (c) to diagnose how processes such as Rossby wave reflection and its interference with upward radiation influence the predictability of the flows in the stratosphere and troposphere. Troposphere-stratosphere coupling involves fluid physics on different scales, and it will be tackled with the mathematical tools of matched asymptotic expansions and multiple-scale techniques.

### Joseph Wallwork

**Based at:** Imperial College London

**Research project:** Anisotropic mesh adaptive methods in ocean modelling

Supervisors: Matthew Piggott (Lead Supervisor, Department of Earth Science & Engineering, Imperial College London), David Ham (Department of Mathematics, Imperial College London) and Hilary Weller (Department of Meteorology, University of Reading)

Summary: mesh adaptivity centres around the manipulation of the computational meshes used in solving differential equations, in such a way that refinement is made where the error in the solution is estimated to be relatively large and coarsening is made where it is small. As such, it is possible to optimise the quality of the solution obtained, within bounds on mesh size enforced by computational constraints, thereby obtaining both an accurate and efficiently-computed solution to the problem at hand. Efficiently achieving an accurate solution is an important factor in various fields related to the Mathematics of Planet Earth, including tracking the dispersal of oil spills, storm surge modelling and the simulation of ring formations such as those of the Agulhas current in the South Atlantic.

Mesh adaptation is traditionally achieved by either mesh refinement (h-adaptivity) or mesh relocation (r-adaptivity), and the two have not yet been successfully coupled in such a way that they do not work in conflict with one another. This PhD project will seek to combine the two approaches to form a hybrid (hr) approach, allowing for both refinement and relocation to be implemented together. Through exploring the avenues opened by mesh adaptivity, there is plenty of scope for the advancement of computational models in the Planet Earth related application areas listed, amongst many more.

### Peter Shatwell

**Based at:** Imperial College London

**Research project:** A process study of heat uptake by the global ocean

Supervisors: Arnaud Czaja (Lead Supervisor, Department of Physics, Imperial College London), David Ferreira (Department of Meteorology, University of Reading) and Greg Pavliotis (Department of Mathematics, Imperial College London)

Summary: a compilation of hydrographic surveys over the 20th century and the recent development of the ARGO float program have revealed significant variability of the oceanic heat content in all basins. Some of these changes must be related to the heat-content increase in response to the accumulation of CO2 and other long-lived greenhouse gases in the atmosphere, but some must also reflect fluctuations intrinsic to the coupled ocean-atmosphere-cryosphere system. Due to limited observations and the costly computing requirements of studying deep-ocean heat uptake, our understanding of these fluctuations is poor.

The goal of this PhD project is to step back and conduct a process study of heat content change, or heat uptake, by different elements of the global ocean circulation: wind-driven gyres (the global ocean), the buoyancy-forced overturning circulation in a narrow basin (North Atlantic), and the wind- and buoyancy-forced circulation in a channel geometry (Southern Ocean). The aim is to develop simple (linear) models of heat uptake, i.e. models in which knowledge of the pre-industrial oceanic state allows for a prediction of heat uptake in response to anthropogenic forcing. This prediction will then be applied and compared to observations.
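
As an indication of what such a linear heat-uptake model can look like, the sketch below uses a standard two-layer energy-balance form, a common idealisation in the climate literature; the parameter values are illustrative and are not taken from this project. A shallow mixed layer is forced radiatively and exchanges heat with a deep layer, and the flux into the deep layer plays the role of ocean heat uptake:

```python
import numpy as np

def two_layer_uptake(F, dt=0.1, C=8.0, Cd=100.0, lam=1.3, gamma=0.7):
    """Linear two-layer energy-balance model (illustrative parameters):
        C  dT/dt  = F - lam*T - gamma*(T - Td)   (mixed layer)
        Cd dTd/dt = gamma*(T - Td)               (deep ocean)
    with heat capacities C, Cd (W yr m^-2 K^-1), climate feedback lam
    and exchange coefficient gamma (W m^-2 K^-1). Forward Euler in time."""
    n = len(F)
    T, Td = np.zeros(n), np.zeros(n)
    for k in range(n - 1):
        T[k + 1] = T[k] + (dt / C) * (F[k] - lam * T[k] - gamma * (T[k] - Td[k]))
        Td[k + 1] = Td[k] + (dt / Cd) * gamma * (T[k] - Td[k])
    uptake = gamma * (T - Td)   # heat flux into the deep layer (W m^-2)
    return T, Td, uptake

t = np.arange(0.0, 200.0, 0.1)            # years
F = np.full_like(t, 3.7)                  # abrupt CO2-doubling-like forcing
T, Td, uptake = two_layer_uptake(F)
```

Because the model is linear, its response to any forcing history is a superposition of such step responses: the mixed layer equilibrates within years while the deep layer lags by a century or more, which is precisely the separation of timescales the process study aims to quantify basin by basin.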