Title: Trends in UK Peak Flow Data: When did they start?
Speaker: Adam Griffin (Centre for Ecology and Hydrology)
There is currently a great deal of interest in the potential effects of climate and environmental change on the magnitude and frequency of extreme floods. For example, repeated extreme floods experienced in north-west England over the last few years have caused concern about whether such events are evidence of changes in flood-producing mechanisms. Classical approaches to flood frequency analysis rely on the assumption of stationarity, i.e. that there is no trend in the peak flow data. However, the occurrence of the most extreme events (larger than any other on record) can have a marked effect on return period estimates, which in turn introduces uncertainty when considering the design lifetime of flood risk management measures. This paper describes the application of non-stationary flood frequency analysis to annual maxima time series in the UK, and how characteristics of the most extreme events change over time.

Following on from work on low and high flows by Harrigan et al. (2017), trends in extreme flood distributions in the United Kingdom have been investigated. The UKBN2 is a collection of near-natural UK catchments with long records (more than 40 years), which makes it possible to investigate possible hydrological and meteorological trends without having to account for the influence of river management and land use change. Fitting time-dependent location, scale and shape parameters of the Generalised Logistic distribution enabled us to see how the characteristics of the flood frequency curve change over time. Additionally, fitting time-independent distributions over moving and increasing time windows allows us to identify when particular events have a marked influence on parameter estimates and hence on flood frequency curves. Spatial patterns in these parameter changes across the UK are evaluated together with changes through time. Examples of changes in the 30-, 50- and 100-year floods are presented for stations in the Benchmark Network.
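As a rough illustration of the approach described above (a minimal sketch with synthetic data, not the authors' code), the Python snippet below fits a Generalised Logistic distribution whose location parameter varies linearly with time to an annual-maximum series by maximum likelihood, then compares the estimated 100-year flood at the start and end of the record. The linear-trend form, the function names and the parameter values are all illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def glo_neg_loglik(params, amax, t):
    # Negative log-likelihood of the Generalised Logistic distribution
    # with a time-dependent location xi(t) = xi0 + xi1 * t (shape k != 0).
    xi0, xi1, alpha, k = params
    if alpha <= 0 or k == 0:
        return np.inf
    xi = xi0 + xi1 * t
    arg = 1.0 - k * (amax - xi) / alpha
    if np.any(arg <= 0):                      # outside the distribution's support
        return np.inf
    y = -np.log(arg) / k
    logpdf = -np.log(alpha) - (1.0 - k) * y - 2.0 * np.logaddexp(0.0, -y)
    return -np.sum(logpdf)

def return_level(T, xi, alpha, k):
    # T-year return level: invert the GLO CDF at F = 1 - 1/T.
    return xi + alpha / k * (1.0 - (T - 1.0) ** (-k))

# Synthetic 50-year annual-maximum series with a weak upward trend.
rng = np.random.default_rng(0)
t = np.arange(50.0)
u = rng.uniform(size=50)
logistic = np.log(u / (1.0 - u))              # standard logistic variates
k_true, alpha_true = -0.1, 20.0
xi_true = 100.0 + 0.5 * t                     # location drifts upwards
amax = xi_true + alpha_true / k_true * (1.0 - np.exp(-k_true * logistic))

fit = minimize(glo_neg_loglik, x0=[100.0, 0.0, 20.0, -0.1],
               args=(amax, t), method="Nelder-Mead")
xi0, xi1, alpha, k = fit.x
print("100-year flood, start vs end of record:",
      return_level(100.0, xi0, alpha, k),
      return_level(100.0, xi0 + xi1 * t[-1], alpha, k))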
February 6, 2019, 3:00 pm - 4:00 pm, Slingo LT, JJT Building
Title: Mathematics and Weather
“Mathematics is at the heart of all serious weather and climate science. Primitive approaches to forecasting depended on relating an observation at a location to future weather at the same location. With the invention of the telegraph, spatial pressure patterns became the basis of prediction with sophisticated rules for moving and developing the patterns. Modern computer-based Numerical Weather Prediction requires mathematics in its manipulation and solving of the equations of physics. Beyond prediction, the application and communication of weather knowledge requires mathematics to describe people’s behaviour. As we move into the era of big data, new mathematical tools will provide the keys to extract information.”
“Professor Brian Golding is a Fellow in Weather Impacts at the Met Office, visiting professor at Bristol University and co-chair of the World Meteorological Organisation’s 10-year High Impact Weather project. Brian’s research has spanned data assimilation, NWP model development, flood and ocean wave prediction, interactive forecasting graphics, and several application areas. Following his retirement as Deputy Director of Weather Science in 2012, he was awarded the OBE. His current work is focused on design of next generation warnings for weather-related hazards, applicable worldwide, and spanning crowd-sourcing of hazard observations through impact prediction to the psychology of response.”
February 6, 2019, 4:00 pm - 5:00 pm, Slingo LT, JJT Building
Weather Forecasting – the oldest Big Data challenge
Weather forecasting employs some of the largest and most complex mathematical modelling systems in the world, running on the largest supercomputers available, and it has done since before most people had even heard of computers in the 1950s. Huge numbers of weather observations are assimilated every day from around the world to set up the models to represent the current state of the atmosphere, and then the models calculate how the weather might evolve – “might” because the atmosphere is a classic chaotic system, sensitive to small errors in the initial state, so in practice we run ensembles of many forecasts to assess the uncertainty in the outcome. Today's supercomputers give us the capacity to generate vast volumes of forecast data, over 4,000 GB per day. Meteorologists have been handling Big Data since long before that term was coined, but how to process and exploit this vast and growing stream of data, most of which goes out of date within hours of being created, remains a formidable challenge. Today the Met Office is running a major project to build a new system that blends several continuous streams of data into a single, continuously updated, best-estimate probabilistic weather forecast.
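To illustrate why ensembles are needed, here is a toy Python sketch (in no way a Met Office system) using the classic chaotic Lorenz-63 equations: a small ensemble of slightly perturbed initial states diverges over the forecast, and the ensemble spread quantifies the forecast uncertainty.

import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def forecast(s, dt=0.01, steps=1500):
    # Simple fourth-order Runge-Kutta time stepping.
    for _ in range(steps):
        k1 = lorenz63(s)
        k2 = lorenz63(s + 0.5 * dt * k1)
        k3 = lorenz63(s + 0.5 * dt * k2)
        k4 = lorenz63(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return s

rng = np.random.default_rng(42)
analysis = np.array([1.0, 1.0, 1.05])                     # best-estimate initial state
members = analysis + 1e-3 * rng.standard_normal((20, 3))  # tiny perturbations

ens = np.array([forecast(m) for m in members])
print("ensemble mean:  ", ens.mean(axis=0))
print("ensemble spread:", ens.std(axis=0))                # forecast uncertainty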
Big Data today means much more than just large volumes: it is also about how we make data available for all to use, and how we can link forecast data with other data to support decision making in government, business and many other fields. Big Data is also associated with Machine Learning methods, which can be used to enhance the quality of the weather forecast itself, for example by correcting for biases and errors in the forecast models, or by relating forecast data to other datasets to aid prediction of weather-related variables such as energy production or swimwear sales. One of the great challenges for meteorologists is the prediction of extreme events – our reputation depends on getting them right, but they are by nature rare, which makes it particularly difficult to train machine learning systems. We are just beginning to explore how machine learning might help us to address these fundamental problems.
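As a toy illustration of the bias-correction idea mentioned above (a minimal sketch on synthetic data; operational post-processing uses far richer predictors and models), one can learn a simple linear correction by regressing past observations on past forecasts and then apply it to new forecasts:

import numpy as np

rng = np.random.default_rng(1)
truth = 15 + 5 * np.sin(np.linspace(0, 20, 500))          # synthetic "observed" temperatures
forecast = 0.9 * truth + 2.0 + rng.normal(0, 1.0, 500)    # biased, noisy model output

# Fit corrected = a * forecast + b by least squares on a training period.
train, test = slice(0, 400), slice(400, 500)
A = np.vstack([forecast[train], np.ones(400)]).T
a, b = np.linalg.lstsq(A, truth[train], rcond=None)[0]

corrected = a * forecast[test] + b
rmse = lambda err: np.sqrt(np.mean(err ** 2))
print("raw forecast RMSE:   ", rmse(forecast[test] - truth[test]))
print("bias-corrected RMSE: ", rmse(corrected - truth[test]))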
Ken Mylne is Head of Verification, Impacts and Post-Processing in the Weather Science section of the Met Office. He has worked in the Met Office for 34 years in a range of roles, including 6 years as an operational forecaster writing forecasts for aviation and the well-known Shipping Forecast. He led the development of ensemble forecasting as a technique for understanding the uncertainty in weather forecasts to the point where it is now core to the operational forecast systems. In his current role he leads the science strategy for verifying and exploiting the forecast outputs and ensuring that probabilistic forecasts are used effectively for decision-making.
Imperial - talk by Aretha Teckentrup (University of Edinburgh)
February 13, 2019, 11:00 am - 12:00 pm, ICL, ICSM building, EPSRC CDT hub, room 402
Title: Surrogate models in Bayesian inverse problems
We are interested in the inverse problem of estimating unknown parameters in a mathematical model from observed data. We follow the Bayesian approach, in which the solution to the inverse problem is the probability distribution of the unknown parameters conditioned on the observed data, the so-called posterior distribution. We are particularly interested in the case where the mathematical model is non-linear and expensive to simulate, for example given by a partial differential equation. We consider the use of surrogate models to approximate the Bayesian posterior distribution. We present a general framework for the analysis of the error introduced in the posterior distribution.
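A minimal sketch of the idea, assuming a one-dimensional parameter, a toy forward model and a polynomial surrogate standing in for a more sophisticated emulator such as a Gaussian process (all names and choices here are illustrative, not taken from the talk):

import numpy as np

def expensive_model(theta):
    # Stand-in for a costly forward model, e.g. a PDE solve.
    return np.sin(3 * theta) + theta ** 2

rng = np.random.default_rng(0)
theta_true, noise_sd = 0.5, 0.1
data = expensive_model(theta_true) + rng.normal(0, noise_sd)

# Cheap surrogate built from a handful of forward-model evaluations.
design = np.linspace(-1, 1, 8)
surrogate = np.polynomial.Polynomial.fit(design, expensive_model(design), deg=5)

def log_post(theta):
    # Unnormalised log-posterior: Gaussian likelihood evaluated through
    # the surrogate, standard normal prior on theta.
    return -(data - surrogate(theta)) ** 2 / (2 * noise_sd ** 2) - 0.5 * theta ** 2

# Random-walk Metropolis targeting the surrogate posterior.
theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + 0.3 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
print("surrogate posterior mean:", np.mean(chain[5000:]), " true value:", theta_true)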
Title: Time structures in model error within data assimilation
The explicit consideration of model error in data assimilation is increasing. While this improves realism (models do have deficiencies), it also increases the complexity of the problem. Two common situations are often explored: model errors that are independent at every time step (easy to study in theory) and fixed model errors (easy to implement in practice). We present the solution for an (ensemble) Kalman smoother in the presence of auto-correlated model error with a general (non-zero and non-infinite) memory. Moreover, we study the consequences of performing the data assimilation with a wrongly guessed memory that differs from the true memory of the system.
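One common device for handling auto-correlated model error, sketched below on a scalar toy problem in Python, is to give the error an AR(1) evolution with memory parameter phi (phi = 0 recovers independent errors per time step, phi = 1 a fixed error) and to augment the filter state with it. This illustrates the setting of the talk, not the smoother solution derived in it; all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
ne, nsteps = 100, 200                      # ensemble size, assimilation cycles
M, phi, q_sd, obs_sd = 0.97, 0.8, 0.05, 0.3

# Truth run with AR(1) model error, observed with Gaussian noise.
x_true, q_true, obs = 1.0, 0.0, []
for _ in range(nsteps):
    q_true = phi * q_true + q_sd * rng.standard_normal()
    x_true = M * x_true + q_true
    obs.append(x_true + obs_sd * rng.standard_normal())

# Augmented ensemble: each member carries the state x and its error term q.
x = rng.normal(1.0, 0.5, ne)
q = np.zeros(ne)
for k in range(nsteps):
    # Forecast step: propagate the state and its AR(1) error jointly.
    q = phi * q + q_sd * rng.standard_normal(ne)
    x = M * x + q
    # Stochastic EnKF analysis: x is observed directly; q is updated via
    # its sampled cross-covariance with x.
    C = np.cov(x, q)                       # 2x2 covariance of (x, q)
    gain_x = C[0, 0] / (C[0, 0] + obs_sd ** 2)
    gain_q = C[1, 0] / (C[0, 0] + obs_sd ** 2)
    innov = obs[k] + obs_sd * rng.standard_normal(ne) - x
    x, q = x + gain_x * innov, q + gain_q * innov

print("final truth vs analysis ensemble mean:", x_true, x.mean())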
We also provide some insight into the situations when model error with different time-scales may arise. We provide a simple analysis for a linear two-scale problem with a fast and a slow component. We show how the interactions can lead to three elements in the slow scale: direct, memory and noise (simple and complex). We discuss how these elements are addressed in sequential DA and provide some ideas of how to improve this treatment.
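One illustrative way to see where the three elements come from (an assumed linear slow-fast form, not necessarily the speaker's exact analysis) is to couple a slow variable x to a fast Ornstein-Uhlenbeck variable y and eliminate y:

\begin{align}
  \dot{x} &= a\,x + b\,y, &
  \varepsilon\,\dot{y} &= c\,x - y + \sqrt{\varepsilon}\,\sigma\,\dot{W}.
\end{align}
Solving the fast equation by variation of constants and substituting into the slow equation gives
\begin{align}
  \dot{x}(t) = \underbrace{a\,x(t)}_{\text{direct}}
  + \underbrace{\frac{b\,c}{\varepsilon}\int_0^t e^{-(t-s)/\varepsilon}\,x(s)\,\mathrm{d}s}_{\text{memory}}
  + \underbrace{b\,\eta(t)}_{\text{noise}},
\end{align}
where $\eta(t)$ collects the decaying initial condition $e^{-t/\varepsilon}\,y(0)$ and the integrated stochastic forcing, i.e. exactly the direct, memory and noise elements in the slow scale described above.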