Durham University Statistics and Probability


Welcome to the Stats4Grads website! Here you will find all the information about the seminar series.

Student seminars are usually held on Wednesday afternoons, 13.00-14.00, in CM105 (first floor of Mathematical Sciences). Tea, coffee and biscuits are provided by the Department of Mathematical Sciences.

Feel free to invite a friend or collaborator from another institution to give a talk if they're in town. We are particularly keen on students coming from the Social Sciences and Applied Social Sciences.

Organiser: Samuel Jackson. For information contact samuel.jackson@durham.ac.uk.

Scope of this postgraduate seminar series

This project is a "get together" of postgraduate students in statistics and other postgraduates who work on quantitative problems in their research.

The statisticians' role is to help researchers in other sciences appreciate the importance of statistics, introduce them to new methods and techniques, and offer practical help.

The other scientists' role is to show the statisticians how statistics is used in practice, the breadth of its applications, and interesting results, which may spur on and direct future research in statistics.

In other words it will be biscuits and statistics!

Seminar Series Timetable 2015/2016

Wednesday 1st June 2016:

Study of Joint Type-II Censoring in Heterogeneous Populations

Speaker: Lida Fallah, Department of Mathematics, Statistics and Applied Mathematics, National University of Ireland, Galway


Time-to-event, or survival, data are common in the biological and medical sciences, with typical examples being time to death and time to recurrence of a tumour. In practice, survival data are typically subject to censoring, with incomplete observation of some failure times due to drop-out, intermittent follow-up and finite study duration. Here, we consider the analysis of time-to-event data from two populations undergoing life-testing under a joint Type-II censoring scheme for heterogeneous situations. We consider a mixture model formulation with maximum likelihood estimation via the EM algorithm, and conduct a simulation study of the effect of the censoring scheme on parameter estimation and study duration.
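As a toy illustration of the EM idea for mixture models mentioned above (not the joint-censoring methodology of the talk), the following Python sketch fits a two-component exponential mixture to fully observed lifetimes; the data, rates and mixing weight are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative data: two exponential sub-populations (rates 1.0 and 0.2).
x = np.concatenate([rng.exponential(1.0, 150), rng.exponential(5.0, 50)])

# EM for a two-component exponential mixture: p*Exp(l1) + (1-p)*Exp(l2).
p, l1, l2 = 0.5, 1.0, 0.1
for _ in range(200):
    # E-step: posterior probability each point came from component 1.
    d1 = p * l1 * np.exp(-l1 * x)
    d2 = (1 - p) * l2 * np.exp(-l2 * x)
    w = d1 / (d1 + d2)
    # M-step: weighted maximum-likelihood updates.
    p = w.mean()
    l1 = w.sum() / (w * x).sum()
    l2 = (1 - w).sum() / ((1 - w) * x).sum()
```

With censored observations, the E-step would additionally average over the unobserved part of each censored lifetime, which is where schemes such as joint Type-II censoring enter.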

Wednesday 11th May 2016:

Efficient algorithms for checking consistency of probability bounds.

Speaker: Nawapon Nakharutai, Department of Mathematical Sciences, Durham University


In situations where we have little data or limited expert opinion, instead of stating probabilities, which can lead to erroneous conclusions, we can specify probability bounds. Lower previsions (Walley, 1991) provide a good way to do this, by bounding expectations. In this study, we explore more efficient algorithms for checking an important basic consistency principle for lower previsions, called "avoiding sure loss". The problem of checking avoiding sure loss can be written as a fully degenerate linear program. This linear program can be solved by standard methods such as the simplex method or the affine scaling method. We propose a new way of reducing the size of this linear program that introduces a minimal number of artificial variables. Since the simplex method can be extremely inefficient for fully degenerate linear programs, we explore whether there is a benefit in using other methods, such as the affine scaling method. We propose a simple way to obtain an initial interior solution for our problem, which is required for starting the affine scaling method. We also identify a condition under which the algorithm can detect inconsistency much earlier than standard stopping criteria from the literature. In the future, we plan to investigate which method is best suited for checking whether a lower prevision avoids sure loss. We hope that this work will encourage people to use more efficient algorithms for checking avoiding sure loss, instead of standard methods such as the simplex method, which can be very inefficient for this specific problem.
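For readers unfamiliar with the setup: one standard formulation (following Walley) is that lower previsions for finitely many gambles on a finite possibility space avoid sure loss exactly when some probability mass function gives every gamble an expectation at least as large as its lower prevision. A minimal Python sketch of that feasibility check, using an off-the-shelf solver rather than the specialised methods the talk proposes:

```python
import numpy as np
from scipy.optimize import linprog

def avoids_sure_loss(F, lower):
    """Check whether the lower previsions `lower` for the gambles in the
    rows of F (columns index the possibility space) avoid sure loss,
    i.e. whether some pmf p satisfies F @ p >= lower, sum(p) = 1, p >= 0."""
    n_outcomes = F.shape[1]
    res = linprog(
        c=np.zeros(n_outcomes),            # pure feasibility problem
        A_ub=-F, b_ub=-np.asarray(lower),  # encodes F @ p >= lower
        A_eq=np.ones((1, n_outcomes)), b_eq=[1.0],
        bounds=[(0, None)] * n_outcomes,
    )
    return res.status == 0

# Two gambles on a coin flip {H, T}.
F = np.array([[1.0, 0.0],   # pays 1 on heads
              [0.0, 1.0]])  # pays 1 on tails
print(avoids_sure_loss(F, [0.4, 0.4]))  # True: p = (0.5, 0.5) works
print(avoids_sure_loss(F, [0.6, 0.6]))  # False: bounds sum above 1
```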

Wednesday 27th April 2016:

Bayesian emulation and its application to analysing chemical interactions in biological plant models.

Speaker: Samuel Jackson, Department of Mathematical Sciences, Durham University


Many processes in our world are represented in the form of complex simulator models. These models frequently take large amounts of time to run. Emulators are statistical approximations of these simulators that make predictions, along with corresponding uncertainty estimates, of what the simulator would produce. The main advantage of these emulators is the speed at which they run, which, in general, is many orders of magnitude faster than the simulators which they aim to approximate. Emulation can be used in any area of science that represents real-world systems in the form of complex models. I will provide an accessible introduction to the ideas of Bayesian emulation and history matching. I will then explain my application of Bayesian history matching by emulation in the context of biological plant models, and in particular a model of the chemical interaction network in the roots of the plant Arabidopsis. I will explain some of the practical difficulties of emulating such a complex biological model before showing some of the results I have thus far achieved. I will finally discuss briefly the idea of using these emulation techniques in the future design of actual biological experiments.
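As a rough flavour of the emulation idea (a toy Gaussian-process interpolator in Python, not the methodology of the talk), the sketch below treats `sin` as a stand-in "expensive simulator"; the squared-exponential kernel and its length-scale are arbitrary illustrative choices:

```python
import numpy as np

def sq_exp(a, b, length=0.5):
    """Squared-exponential covariance between 1-d input vectors a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def emulate(x_train, y_train, x_new, nugget=1e-8):
    """Gaussian-process emulator: posterior mean and variance at x_new
    given (slow) simulator runs (x_train, y_train)."""
    K = sq_exp(x_train, x_train) + nugget * np.eye(len(x_train))
    k = sq_exp(x_new, x_train)
    mean = k @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum('ij,ij->i', k, np.linalg.solve(K, k.T).T)
    return mean, var

# Pretend the "expensive simulator" is sin; emulate it from just 6 runs.
x = np.linspace(0, np.pi, 6)
m, v = emulate(x, np.sin(x), np.array([1.0]))
```

The emulator returns both a prediction and an uncertainty, which is what makes it usable inside history matching: inputs whose emulated output is implausibly far from observations can be ruled out without further simulator runs.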

Wednesday 2nd March 2016:

Rendered invisible by official statistics: Polish workers and informal care and welfare networks in NE England.

Speaker: Lucy Szablewska, Department of Geography, Durham University


Lucy Szablewska is carrying out qualitative research into the lived experiences of intergenerational kinship care and informal welfare networks from the perspective of transnational Polish workers and their households in NE England and Poland. One of the research aims is to shed light on broader issues - such as population ageing - which are rendered largely invisible in the current debate over welfare and citizenship in the European Union. However, quantitative researchers may consider this sort of research irrelevant because of its small sample size. Lucy will explain why she thinks her research is valuable, ask how statisticians would approach the topic and measure 'informal care', and discuss the challenges of and possibilities for collaboration between quantitative and qualitative researchers in this particular field.

Wednesday 3rd February 2016:

How do different contexts influence creativity and innovation in children?

Speaker: Zarja Mursic, Department of Anthropology, Durham University


Children are very creative but perform poorly when it comes to innovating useful tools. I study children in the context of a science museum to see whether different contexts influence their capability to innovate. I will present my first study, which tackles the question of whether instructions squash creativity. In my research I use an exhibit that is already in the museum, as well as specially designed puzzle boxes and various tasks that test innovation in children. I code children's behaviours and compare them across different conditions and contexts. At the end I may also present some plans for future studies that are currently being designed or are in the pilot phase. All involve similar questions relating to creativity and innovation in children.

Wednesday 20th January 2016:

Controls on the geometry of foreshore platforms: a statistical study of the North Yorkshire coast

Speaker: Zuzanna Swirad, Department of Geography, Durham University


Foreshore platforms are semi-horizontal rock surfaces backed by coastal cliffs. Numerous studies have focused on identifying relationships between platform geometry (width, elevation and gradient) and wave intensity, rock strength and structure. However, those approximations are based on simplified models of the relationships between geomorphology, geology and wave action, and therefore lack sufficient spatial resolution and coverage to enable predictive analyses of the likely response of a coast to predicted changes in marine conditions (sea level and wave intensity). Here, I present a systematic study of a 4 km stretch of coastline at Staithes, North Yorkshire, based on a high-resolution point cloud (ca. 100 points/m^2) and ortho-photographs (pixel size ca. 0.03 m) obtained with airborne LiDAR. I represent the coast as a series of densely spaced (25 m) and resampled (0.2 m) cross-sections normal to the coastline and link their morphometric characteristics to the spatial variability in rock properties and marine action. Statistical analysis enables the identification of key controls on platform geometry and assessment of the relative roles of geological and marine factors in shaping rocky coasts.

Wednesday 16th December 2015:

Nonparametric Predictive Inference (NPI) with Copula for Bivariate Diagnostics Test Results

Speaker: Muhammad Noryanti, Department of Mathematical Sciences, Durham University


The Receiver Operating Characteristic (ROC) curve is a common statistical tool for measuring the accuracy of a diagnostic test that yields ordinal or continuous results. It is increasingly clear that in medical settings a single test result (biomarker) will not be sufficient to serve as a screening device for early detection of many diseases, and may be very costly. Many researchers believe that a combination of test results will potentially lead to more sensitive screening rules for detecting diseases. In this study we present a new linear combination of two test results that takes the dependence structure into account, combining Nonparametric Predictive Inference (NPI) for the marginals with copulas to model the dependence. Our method uses a discretized version of the copula, which fits perfectly with the NPI method for the marginals and leads to relatively straightforward computations because there is no need to estimate the marginals and the copula simultaneously. We investigate and discuss the performance of this method by presenting results from simulation studies. The method is further illustrated via application to real data sets from the literature. We also briefly outline related challenges and opportunities for future research.

Wednesday 2nd December 2015:

Did Globalisation Exist Before the 16th Century?

Speaker: Ran Zhang, Department of Archaeology, Durham University


It has been suggested that globalisation gradually took shape after the 16th century, as European travellers and merchants entered the Indian Ocean. However, archaeological evidence suggests that an earlier globalisation process had already been established through trade, conflict and communication across Eurasia by the 12th and 13th centuries. To give insight into this issue, quantitative methods play an important role in understanding these historical changes. This talk introduces a preliminary attempt to apply quantitative methods to archaeological topics and shares some of the problems currently being faced.

Wednesday 18th November 2015:

Decision Making and Planning Under Uncertainty

Speaker: Anthony Lawson, Department of Mathematical Sciences, Durham University


At the time investment decisions are made there may be uncertainty in many aspects which affect a decision and its outcome. Further, working with expensive simulators can make it difficult to assess how uncertainty in inputs affects outputs. An example will be presented which considers transmission expansion planning (building new power lines) under uncertainty, and how statistical emulators are a useful tool for approximating simulators and adequately accounting for uncertainties when making a decision.

Wednesday 4th November 2015:

Chocolate, X-Rays and Bayes Linear Methods

Speaker: Benjamin Lopez, Department of Mathematical Sciences, Durham University


Bayes Linear Methods offer a generalisation of the Bayesian approach in which the often unrealistic requirement for a full prior probabilistic specification is relaxed. In this talk I will discuss the foundations of Bayes linear statistical inference, taking expectation rather than probability as the primitive quantity. The talk will be illustrated with 'glamorous' examples from the X-ray industry, in particular the development of an on-line algorithm for detecting plastic contamination in a popular brand of chocolate.
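The central update of Bayes linear inference can be stated compactly: given data D, the adjusted expectation and variance of a quantity B are E_D(B) = E(B) + Cov(B,D) Var(D)^-1 (D - E(D)) and Var_D(B) = Var(B) - Cov(B,D) Var(D)^-1 Cov(D,B), requiring only means, variances and covariances rather than full distributions. A minimal numerical sketch with entirely made-up moments:

```python
import numpy as np

def bayes_linear_adjust(eb, vb, cbd, ed, vd, d):
    """Bayes linear adjusted expectation and variance of B given D = d:
       E_D(B)   = E(B) + Cov(B,D) Var(D)^-1 (d - E(D))
       Var_D(B) = Var(B) - Cov(B,D) Var(D)^-1 Cov(D,B)"""
    gain = cbd @ np.linalg.inv(vd)
    return eb + gain @ (d - ed), vb - gain @ cbd.T

# One quantity of interest B, two noisy observations D (invented moments).
eb, vb = np.array([0.0]), np.array([[4.0]])
ed, vd = np.array([0.0, 0.0]), np.array([[5.0, 4.0], [4.0, 5.0]])
cbd = np.array([[4.0, 4.0]])  # Cov(B, D)
adj_e, adj_v = bayes_linear_adjust(eb, vb, cbd, ed, vd, np.array([1.0, 1.2]))
```

Observing data above its prior expectation pulls the adjusted expectation of B upwards, and the adjusted variance is never larger than the prior variance.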

Previous talks 2014/2015

Wednesday 5th May 2015:

Random walking on mysterious light beams across night sky: An interesting half strip model with applications

Speaker: Chak Hei Lo, Durham University


Many random processes arising in applications exhibit a range of possible behaviours depending upon the values of certain key factors. Investigating critical behaviour for such systems leads to interesting and challenging mathematics. Much progress has been made over the years, using various techniques. The most subtle case is when the system is near a critical point. This presentation will give a brief introduction to a near-critical random walk model, demonstrating various applications in economics, physics and logistics.

Wednesday 22nd April 2015:

The Role of Self-discipline in Predicting Achievement for 10th Graders

Speaker: Rui Zhao, Durham University


This study investigated how the sub-dimensions of self-discipline (behavioural control, thinking control, and emotional control) predict 10th graders' achievement. A total of 608 10th graders were recruited for this study. Self-discipline was measured by the Middle School Students' Self-control Ability Questionnaire. Students' previous academic achievement was assessed by the Senior High School Entrance Examination (SHSEE, known as "Zhongkao"), and the composite score of a monthly school exam served as the measure of later achievement. Results show a certain mediating effect of behavioural, thinking, and emotional control in predicting academic achievement. These sub-dimensions add a small but incremental amount of variance in explaining later academic achievement.

Wednesday 11th March 2015:

Modelling the competition: from fighting lizards to journal citations

Speaker: Dr. Helen Ogden, University of Warwick


I will talk about models for tournament data. Some applications of these models are obvious -- for example, to rank sports players based on the outcomes of matches played between them. But competition models can also be used in some slightly surprising areas. I will discuss examples from animal behaviour (modelling fights between lizards) and bibliometrics (ranking journals, based on the citations between them). Competition models also provide some interesting statistical challenges, and I will briefly discuss my own work on improving the approximations which are used for inference in some of these models.

Wednesday 25th February 2015:

Fisher information under Gaussian quadrature models

Speaker: Hermes Marques Da Silva Junior, University of Durham


We develop explicit expressions for computing the Fisher information matrix for the estimation of random effects models through Gaussian quadrature. Illustrative examples using real and simulated data are provided.

Wednesday 11th February 2015:

Gender Discrimination in Academia: Afghan Context

Speaker: Y. Afzali, University of Durham


In this seminar I am going to discuss the findings of a survey of perceptions of gender discrimination in academia in Afghanistan. The aim is to explore how female and male academics perceive the level of overt discrimination in various aspects of academic life in an Afghan context. SPSS (Statistical Package for Social Sciences) is used to analyse the quantitative data. Bivariate crosstabulation and chi-square tests of statistical significance are used to explore possible differences between male and female academics with respect to their perceptions of gender (in)equality in their workplace. Multivariate crosstabulation and binary logistic regression are also used to explore how respondents' perceptions of discrimination are shaped by the interaction of gender with their other characteristics, both personal and professional.
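As a small illustration of the kind of bivariate crosstabulation and chi-square test mentioned above (with entirely invented counts, and using Python rather than SPSS):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 crosstab: perceived discrimination (yes/no) by gender.
table = [[30, 20],   # female: yes, no
         [18, 32]]   # male:   yes, no

# Chi-square test of independence between gender and perception.
chi2, p, dof, expected = chi2_contingency(table)
```

Here `expected` holds the counts implied by independence of the two variables; a small p-value indicates the observed counts depart from that independence assumption.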

Wednesday 28th January 2015:

Patterns and Processes Revealed in High-Frequency Environmental Data

Speaker: A. Elayouty, University of Glasgow


Advances in sensor technology enable monitoring programmes to record and store measurements at a high temporal resolution, enhancing the capacity to detect and understand short duration changes that would not have been apparent in the past with monthly, fortnightly or even daily sampling. Although these high-frequency data are advantageous, there are challenges in their processing and analysis such as the large volumes of data, their complex behaviour over the different timescales and the strong correlation structure that persists over a large number of lags. The aim of this talk is to present the complexities of modelling high-frequency data which arise from environmental applications. Surface waters are considered as key sources of atmospheric CO2, thus comprehensive understanding of the CO2 dynamics in surface waters is valuable. We consider a 15-minute resolution sensor-generated time series of the over-saturation of CO2, EpCO2, in a small order river system of the River Dee. Advanced statistical approaches used to analyse and model the data, which include visualization tools for exploratory analysis, wavelets, generalized additive models and functional data analysis, will be presented. These methods reveal the complex dynamics of EpCO2 over different timescales, the multivariate relationships of EpCO2 with hydrology and the temporal auto-correlation structures, which are time and scale dependent.

Wednesday 21st January 2015:

Hamiltonian Monte Carlo and its variants

Speaker: D. Tang, Durham University


Hamiltonian Monte Carlo (HMC), also known as Hybrid Monte Carlo, is one of the Markov chain Monte Carlo (MCMC) sampling methods, which offer different strategies for generating a sequence of correlated samples converging to a desired distribution. In many situations, especially in Bayesian statistics, target distributions have complicated forms, highly correlated parameters and high dimensionality. Traditional MCMC methods, such as random-walk Metropolis-Hastings and Gibbs sampling, may explore the state space slowly and suffer low acceptance rates, caused by both their random-walk behaviour and the complex nature of target distributions. HMC tries to avoid these problems by taking several steps guided by gradient information about the target distribution. This allows HMC to make distant proposals and converge more quickly than traditional random-walk methods. Although the demonstrated ability of HMC to suppress random-walk behaviour suggests that it should be a highly successful tool for Bayesian inference, its performance depends on its algorithm parameters. Three HMC variants that provide automatic tuning will be discussed in the talk.
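A bare-bones Python sketch of a single HMC step with leapfrog integration, applied to a standard Gaussian target (the step size and trajectory length here are arbitrary, untuned choices; automatically tuning them is exactly what the variants in the talk address):

```python
import numpy as np

def hmc_step(x, log_p_grad, log_p, eps=0.1, n_leapfrog=20, rng=None):
    """One Hamiltonian Monte Carlo step with leapfrog integration."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.shape)          # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * log_p_grad(x_new)    # half step for momentum
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new                  # full step for position
        p_new += eps * log_p_grad(x_new)      # full step for momentum
    x_new += eps * p_new
    p_new += 0.5 * eps * log_p_grad(x_new)
    # Metropolis accept/reject on the Hamiltonian (energy) difference.
    h_old = -log_p(x) + 0.5 * p @ p
    h_new = -log_p(x_new) + 0.5 * p_new @ p_new
    return x_new if rng.uniform() < np.exp(h_old - h_new) else x

# Sample a standard 2-d Gaussian: log p(x) = -x.x/2, gradient = -x.
rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(2000):
    x = hmc_step(x, lambda z: -z, lambda z: -0.5 * z @ z, rng=rng)
    samples.append(x)
```

Because each proposal follows the gradient for many leapfrog steps, consecutive samples can land far apart while the energy-based accept/reject step keeps the chain exact.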

Wednesday 10th December 2014:


Speaker: E. Waldmann, University of Liverpool


Under many circumstances the application of classical (generalised) linear regression is not enough to describe the relationship between a set of covariates and a dependent variable. In particular, the key assumption of a closed-form distribution is frequently violated. One approach to overcoming these problems is quantile regression, developed by Roger Koenker in the 1970s. Even though quantile regression is widely used by now, there is no standard approach for modelling the impact of covariates on two or more dependent variables simultaneously. Our developments are motivated by the analysis of data from the field of biodiversity, where we want to use covariates such as temperature, topographic diversity (the maximal elevational range within one region), habitat diversity (the abundance of different ecosystems in one region) and the number of rainy days to explain both the number of animal species and the number of plant species in one region.

Wednesday 26th November 2014:

Robust Crop Rotation Modelling

Speaker: L. Paton, Mathematical Sciences


Farmers often follow set patterns of crop choices in order to maximise profits and preserve nutrients in the soil. However, these crop choices depend on a variety of factors, including the climate and the economy. Modelling and predicting these crop rotations is an important task for analysing the effect that changes in climate or the economy may have on agricultural output. One major difficulty in crop rotation modelling is the shortage of observations of some crop types. A robust Bayesian approach allows us to handle these rare crop types by giving intervals of predictions which more accurately represent our knowledge. I will talk about this approach.

Wednesday 12th November 2014:

Statistical shape analysis

Speaker: T. Tsiftsi, Mathematical Sciences


The recognition of objects in images is an important problem in many branches of science. Statistics can help to solve this problem in many ways so statistical shape analysis is an integral part of object recognition. In this talk I will explain what shapes are, why they are important, how they can be used and how statistical shape analysis can help. I will try to explain why Bayesian shape analysis is so important and how supervised and unsupervised learning can help us tackle the problems. I will also give examples on how all the above can be used in geological applications.

Previous talks 2013/2014

Wednesday 19th March 2014:

Techniques of ancient-DNA analysis

Speaker: Liisa Loog, Department of Archaeology, Durham University


The field of ancient DNA has grown tremendously in recent years. Although modern genetic data have been used for some time to make inferences about the past, ancient DNA is an invaluable new tool for archaeological research as it provides direct information about the genetic diversity of past populations. This new temporal dimension in the data also requires new analytical approaches, different from the classical ones commonly used to analyse modern genetic variation. In this seminar I am going to talk about some existing approaches for accommodating time-stamped genetic variation data (computer-simulation-based approaches as well as recently created summary statistics), alongside a new method that we developed for exploring the migratory activity of past populations.

Extra Stats4Grads activity!

On Wednesday 12th March, our very own Frank Coolen and Louis Aslett from Oxford will hold a tutorial on "Bayesian inference for reliability of systems and networks using the survival signature." We have decided to include this in our schedule as an extra Stats4Grads activity. The meeting will consist of a short presentation, followed by discussion and the tutorial, and is part of the Asset Management Work Group in Durham.

For more information please visit the Asset Management webpage.

The meeting will be held in CM105 (first floor in Mathematical Sciences) at 13.00. Please feel free to come, and let anyone who may be interested know about it.

Wednesday 5th March 2014:

Applied Nonparametric Circular Methods

Speaker: Dr Maria Oliveira, Department of Mathematics, Statistics and Probability Group


The goal of this talk is to introduce nonparametric methods for density and regression estimation for circular data, analyzing their performance through simulation studies and illustrating their use by real data applications. In addition, the R library NPCirc, which implements the proposed methods, will be presented.
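To give a flavour of circular density estimation (a plain von Mises kernel estimator sketched in Python, not the NPCirc implementation, and with invented wind-direction-style data):

```python
import numpy as np

def circ_kde(theta, data, kappa=8.0):
    """Nonparametric circular density estimate at angles `theta`
    using a von Mises kernel with concentration `kappa`."""
    from scipy.special import i0  # modified Bessel function, order 0
    k = np.exp(kappa * np.cos(theta[:, None] - data[None, :]))
    return k.mean(axis=1) / (2 * np.pi * i0(kappa))

# Toy angular data clustered around pi/2.
rng = np.random.default_rng(3)
data = rng.vonmises(np.pi / 2, 4.0, 200)
grid = np.linspace(0, 2 * np.pi, 400, endpoint=False)
dens = circ_kde(grid, data)
```

Unlike an ordinary kernel density estimate, the von Mises kernel respects the periodicity of the data: angles near 0 and near 2*pi are treated as neighbours.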

Wednesday 19th February 2014:

Decision making under uncertainty

Speaker: Dr Nathan Huntley, Department of Mathematics, Statistics and Probability Group


Whether we are policy-makers planning flood defences, or just customers trying to choose their preferred sandwich, we all have to make decisions with uncertain consequences. The theory of expected utility provides a popular and convenient method to deal with decision problems, but is it the right approach? In this talk I'll illustrate the theory, some criticisms and alleged paradoxes, and some possible alternatives.
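The expected-utility rule itself is short: choose the action that maximises the probability-weighted average of utilities over the uncertain states. A toy flood-defence example in Python, with entirely invented probabilities and utilities:

```python
# Expected utility: pick the action maximising sum_s P(s) * u(action, s).
probs = {'flood': 0.1, 'no_flood': 0.9}
utility = {
    ('build_defence', 'flood'): -10, ('build_defence', 'no_flood'): -10,
    ('do_nothing', 'flood'): -200, ('do_nothing', 'no_flood'): 0,
}

def expected_utility(action):
    return sum(p * utility[(action, s)] for s, p in probs.items())

best = max(['build_defence', 'do_nothing'], key=expected_utility)
```

The criticisms mentioned in the talk bite precisely on inputs like `probs`: when the flood probability itself is not known to a single number, different alternatives to expected utility can recommend different actions.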

Wednesday 5th February 2014:

Turning Lines into Numbers and Other Stories from an Archaeologist

Speaker: Michelle de Gruchy, Department of Archaeology


A persistent challenge in archaeology is that our data often does not look like the data presented in statistics courses or textbooks. Half the battle is figuring out how to turn our data into meaningful values so that it is possible to compute the means, standard deviations, and so on required in statistical tests. In my research, this has meant inventing a quantitative method for studying archaeologically preserved routes by building populations from single samples and turning computer- or hand-drawn lines into numbers, in order to learn why people walked one way and not another thousands of years ago. This talk tells the story of how this quantitative method was invented and what it is starting to tell us about travel during the Early Third Millennium B.C.

Wednesday 22nd January 2014:

Uncertainty Analysis of Future Power Systems

Speaker: A. Lawson, Mathematical Sciences


At the time investment decisions are made, there is a lot of uncertainty about the future of Britain's power system. Full simulators are too expensive to explore this uncertainty directly. Statistical emulators are therefore used to approximate the simulators and support good investment decisions.
