- Forecast post-processing
- Forecast verification
- Calibration of computer models
- Uncertainty quantification in complex systems
- Environmental hazards
- Air quality modeling
- Spatial epidemiology
Computer models of atmospheric physics used for weather and climate prediction are only imperfect representations of the real world.
We develop statistical modelling techniques to correct systematic forecast biases arising from structural model errors, thereby improving the skill and reliability of weather and climate forecasts.
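As a minimal sketch of the idea (not the group's actual methodology), systematic forecast bias can be corrected by regressing past observations on past forecasts and applying the fitted relationship to new forecasts, a simple form of model output statistics. All data and coefficients below are synthetic.

```python
import numpy as np

# Synthetic example: "raw" forecasts with a systematic offset and scale error.
rng = np.random.default_rng(0)
truth = rng.normal(15.0, 5.0, size=500)            # observed temperatures
raw = 2.0 + 0.8 * truth + rng.normal(0, 1.0, 500)  # biased model forecasts

# Fit observations on forecasts by least squares: obs ~ a + b * raw.
b, a = np.polyfit(raw, truth, 1)
corrected = a + b * raw                            # post-processed forecasts

rmse_raw = np.sqrt(np.mean((raw - truth) ** 2))
rmse_corrected = np.sqrt(np.mean((corrected - truth) ** 2))
```

In practice the regression would be fitted per lead time and location, and extended to full predictive distributions rather than point corrections.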
Forecast verification assesses the performance of forecasts by comparing past forecasts with observations. Evaluating forecasts can be difficult because (a) there are many different types of forecast (point, probability, interval, and ensemble forecasts), and (b) there are many ways in which a forecast can be wrong.
A careful and informative assessment of forecast quality can help to guide decision making, to monitor changes in performance, and to improve forecasts across a wide range of fields, such as economics, finance, health, hydrology, and meteorology.
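One standard tool for evaluating probability forecasts of a binary event is the Brier score, a proper scoring rule (lower is better). The toy numbers below are illustrative only.

```python
import numpy as np

def brier_score(p, y):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.mean((p - y) ** 2)

# A sharp, well-calibrated forecaster vs. an uninformative 50% forecast.
outcomes = np.array([1, 0, 1, 1, 0, 0, 1, 0])
sharp = np.array([0.9, 0.1, 0.8, 0.9, 0.2, 0.1, 0.7, 0.2])
climatology = np.full(8, 0.5)
```

Because the score is proper, a forecaster minimises their expected score by reporting their true belief, which is what makes such scores suitable for comparing and ranking forecasters.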
In order to have confidence in computer model output for complex systems such as the climate system, the model inputs must be tuned (‘calibrated’) so that the model outputs match historical observations. However, running complex simulation models is computationally expensive and the space of possible input parameters is large, which complicates direct calibration.
Our group develops uncertainty quantification methodology for probabilistic calibration and history matching, to identify input parameter settings for which the computer model produces realistic output.
Selected publications: Salter et al. (2018)
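The core of history matching can be sketched in a few lines: rule out input values whose model output is implausibly far from an observation, relative to the combined uncertainties. Here a toy function stands in for the expensive simulator, and all variances and names are illustrative assumptions.

```python
import numpy as np

# Toy "simulator": f(x) = x**2. Observation z = 4 with combined uncertainty
# from observation error and model discrepancy (both illustrative values).
def implausibility(x, z=4.0, obs_var=0.1, model_disc_var=0.1):
    f = x ** 2                       # cheap stand-in for a costly model run
    return np.abs(f - z) / np.sqrt(obs_var + model_disc_var)

candidates = np.linspace(0.0, 4.0, 401)
I = implausibility(candidates)
not_ruled_out = candidates[I < 3.0]  # conventional 3-sigma implausibility cutoff
```

Inputs surviving the cutoff form the "not ruled out yet" space, which is then refined in further waves, usually with an emulator standing in for the simulator.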
We develop statistical methodologies to quantify uncertainties when phenomena in the real-world are described by complex computer models in fields such as hydrology, manufacturing, energy, finance and geosciences. Propagating uncertainties from the unknown model inputs to the model outputs requires thousands of evaluations of the model at different input parameters, which is impossible if a single model run takes days or even weeks on a supercomputer.
Statistical surrogate models (‘emulators’), which can be run cheaply in place of the computationally expensive complex model, allow us to obtain a complete picture of the uncertainties related to the model's fidelity and robustness, and to extract more reliable predictions from the computer model.
Selected publications: Mohammadi et al. (2018)
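A minimal sketch of a Gaussian process emulator, assuming a squared-exponential kernel and a toy one-dimensional function standing in for an expensive simulator; kernel settings and the number of design points are illustrative choices.

```python
import numpy as np

def kernel(a, b, length=0.5, var=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

simulator = lambda x: np.sin(3 * x)        # stand-in for a costly model run
X = np.linspace(0, 2, 8)                   # 8 design points (training runs)
y = simulator(X)

K = kernel(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
Xs = np.linspace(0, 2, 100)                # prediction grid
Ks = kernel(Xs, X)
alpha = np.linalg.solve(K, y)
mean = Ks @ alpha                          # emulator posterior mean
cov = kernel(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
sd = np.sqrt(np.clip(np.diag(cov), 0, None))  # predictive standard deviation
```

The emulator interpolates the training runs exactly and returns a predictive variance that grows away from them, so uncertainty in the surrogate itself can be propagated into downstream analyses.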
We have a world-renowned track record in quantifying risk due to the hydro-meteorological hazards responsible for large insurance losses, and in visualising and communicating the associated uncertainty.
We are particularly interested in hazards associated with intense extratropical cyclones, hurricanes, extreme rainfall, and floods.
Using techniques from dynamical systems theory, extreme value theory, climate science, and spatial statistics, we aim to improve fundamental understanding and prediction of severe weather and its impact on society.
Selected publications: Youngman et al. (2017)
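A standard extreme value analysis of this kind fits a generalised extreme value (GEV) distribution to block maxima and reads off return levels. The sketch below uses synthetic annual maxima; the parameter values are illustrative, not from any real hazard dataset.

```python
import numpy as np
from scipy import stats

# Synthetic annual maxima drawn from a GEV distribution
# (scipy's shape parameter c = -xi, so c=-0.1 is a heavy-ish upper tail).
rng = np.random.default_rng(1)
annual_max = stats.genextreme.rvs(c=-0.1, loc=30.0, scale=5.0,
                                  size=200, random_state=rng)

# Maximum-likelihood fit, then the 100-year return level:
# the level exceeded in any given year with probability 1/100.
shape, loc, scale = stats.genextreme.fit(annual_max)
return_level_100 = stats.genextreme.ppf(1 - 1 / 100, shape, loc, scale)
```

In applications the fit would be checked with diagnostic plots, and parameters often vary with spatial or climatic covariates rather than being constant.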
Air pollution is a leading global disease risk factor, and tracking progress requires accurate, spatially resolved exposure estimates.
We develop statistical methodology to integrate large amounts of data from numerous sources (satellites, station measurements, indirect predictors) to estimate air pollution at high spatial and temporal resolutions.
Selected publications: Shaddick et al. (2017)
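A much-simplified sketch of the data-fusion idea: calibrate wall-to-wall but biased satellite-derived estimates against sparse, accurate ground monitors, then predict everywhere. The data are synthetic; a real analysis would model space-time structure, e.g. within a Bayesian hierarchical model.

```python
import numpy as np

rng = np.random.default_rng(2)
true_pm25 = rng.gamma(shape=4.0, scale=5.0, size=1000)        # full grid
satellite = 5.0 + 0.6 * true_pm25 + rng.normal(0, 2.0, 1000)  # biased, everywhere

monitors = rng.choice(1000, size=50, replace=False)           # sparse stations
b, a = np.polyfit(satellite[monitors], true_pm25[monitors], 1)
fused = a + b * satellite                                     # calibrated field

rmse_sat = np.sqrt(np.mean((satellite - true_pm25) ** 2))
rmse_fused = np.sqrt(np.mean((fused - true_pm25) ** 2))
```

The fused field inherits the satellite product's full coverage while borrowing the monitors' accuracy, which is the essential trade that these integration methods formalise.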
We develop disease risk prediction models to assist in public health decision making, using spatial, temporal and space-time statistical modelling including Bayesian hierarchical modelling and MCMC techniques.
One of the special interests in the group is the incorporation of dynamical climate models into statistical models of disease risk, to predict climate-related health hazards such as heat-wave mortality or the effect of climate on the spread of vector-borne diseases.
Selected publications: McKinley et al. (2018)
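As a minimal illustration of the MCMC machinery (not the group's actual models), the sketch below runs a random-walk Metropolis sampler for a shared log relative risk in a Poisson disease-count model with a normal prior. The counts, prior, and tuning constants are all synthetic.

```python
import numpy as np

# Model: cases_i ~ Poisson(expected_i * exp(theta)),  theta ~ N(0, 1).
rng = np.random.default_rng(3)
expected = np.array([10.0, 12.0, 8.0, 15.0, 9.0])   # expected counts per area
cases = np.array([14, 17, 11, 21, 12])              # observed counts per area

def log_post(theta):
    rate = expected * np.exp(theta)
    # Poisson log-likelihood (constant dropped) plus N(0, 1) log-prior.
    return np.sum(cases * np.log(rate) - rate) - 0.5 * theta ** 2

theta, samples = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.2)               # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                                 # accept; else keep current
    samples.append(theta)

posterior_mean_rr = np.exp(np.mean(samples[1000:]))  # relative risk, burn-in removed
```

Real disease-mapping models add area-level random effects with spatial structure and climate covariates, but the accept/reject core of the sampler is the same.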