Abstracts for keynote and invited speakers.

A/Prof Res Altwegg, University of Cape Town

Mapping biodiversity in remote areas

When the South African government considered developing the vast arid interior of the country for shale gas extraction in 2016, a group of scientists teamed up to collect much-needed distribution data for hundreds of species, with a view to guiding later impact assessments. This project raised a number of questions that typically need to be addressed when designing biodiversity surveys in remote areas, and during the subsequent data analysis. How do we select sampling sites, and how many of them do we need? How do we design a sampling protocol that yields robust estimates of occupancy probability while remaining practical in the field? How do we adapt the protocol to each taxonomic group (including plants, insects, scorpions, mammals, reptiles, birds and others)? How do we make the best use of sparse data? We used a probabilistic design that sampled the important environmental gradients efficiently and also took into account the cost of reaching each site. We found that each taxonomic group needed a slightly different data-collection protocol, but in general a combination of spatially repeated sampling and recording the time to first detection of each species proved workable. We analysed the data using hierarchical multi-species occupancy models that incorporated the time-to-detection information. The time-to-detection information improved occupancy estimates, although some replication was still necessary to obtain reliable estimates. For most taxa, we had additional distribution information from opportunistically collected presence-only records. We included this information in our analyses using recent data-fusion techniques and found that it substantially improved our occupancy estimates for most species. Mapping biodiversity in remote areas is challenging, but recent developments in data-collection techniques, sampling protocols and data-analysis tools offer exciting new possibilities.
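
As a concrete illustration of the time-to-detection idea, here is a minimal single-species sketch (my own, under simplifying assumptions, not the project's hierarchical multi-species models): sites are occupied with probability psi, and at occupied sites the time to first detection is exponential with rate lambda, censored when the survey ends.

```python
# Minimal time-to-detection occupancy sketch: occupancy probability
# psi, exponential time to first detection with rate lambda, and
# right-censoring at the survey length t_max. Illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_lik(params, times, detected, t_max):
    """times: time to first detection; detected: boolean array."""
    psi = expit(params[0])       # occupancy probability, logit scale
    lam = np.exp(params[1])      # detection rate, log scale
    ll = np.where(
        detected,
        np.log(psi) + np.log(lam) - lam * times,           # detected at t
        np.log(psi * np.exp(-lam * t_max) + (1.0 - psi)),  # never detected
    )
    return -ll.sum()

# Simulate one dataset and fit by maximum likelihood.
rng = np.random.default_rng(1)
n, psi_true, lam_true, t_max = 200, 0.6, 0.5, 3.0
occupied = rng.random(n) < psi_true
t_first = rng.exponential(1 / lam_true, n)
detected = occupied & (t_first < t_max)
times = np.where(detected, t_first, t_max)

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(times, detected, t_max))
print("psi-hat:", expit(fit.x[0]), "lambda-hat:", np.exp(fit.x[1]))
```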

Professor Rachel Fewster, University of Auckland

How to count the unseen when we can't count the seen: new methods for dealing with uncertain identity in capture-recapture studies

Capture-recapture is one of the most popular methods for estimating population size and trends. However, physically capturing and tagging animals can be a dangerous and stressful experience for both the animals and their human investigators - or if it transpires that the animals actually enjoy it, biased inference may result. Consequently, researchers increasingly favour non-invasive sampling using natural tags that allow animals to be identified by features such as coat markings, dropped DNA samples, acoustic profiles, or spatial locations. These innovations greatly broaden the scope of capture-recapture estimation and the number of capture samples achievable. However, they are imperfect measures of identity, effectively sacrificing sample quality for quantity and accessibility. As a result, capture-recapture samples no longer generate capture histories in which the matching of repeated samples to a single identity is certain. Instead, they generate data that are informative - but not definitive - about animal identity.

I will describe two ways of dealing with capture-recapture data in the face of uncertain identity. The key characteristic of these methods is that they make no attempt to explicitly match samples to the same animal. This means that computation remains fast even for very large sample sizes: analysing data with millions of samples takes much the same time as analysing hundreds.

The first approach is a new analysis framework termed Cluster Capture-Recapture (CCR). For CCR analyses, we assume that repeated samples from the same animal will be similar, but not necessarily identical, to each other. We treat the sample data as a clustered point process, and derive the necessary properties of the process to estimate abundance and other parameters. I will describe a preliminary application of CCR for acoustic monitoring.
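
To make the data structure concrete, the following toy simulation (my own illustration, not the CCR software) generates the kind of clustered point pattern CCR treats as data: unobserved animal "parents" scatter unlabelled sample points around themselves, and the analyst keeps only the sample locations.

```python
# Toy clustered point process: each animal is an unobserved parent
# point; its samples (e.g. detected calls) are scattered offspring
# points. The data retain locations only, with no identities.
import numpy as np

rng = np.random.default_rng(7)
side = 10.0    # square survey region, side x side
D_true = 2.0   # animals per unit area
mu = 4.0       # mean samples per animal
sigma = 0.1    # spatial scatter of samples around the animal

n_animals = rng.poisson(D_true * side**2)
centres = rng.uniform(0, side, (n_animals, 2))
counts = rng.poisson(mu, n_animals)            # samples per animal
samples = np.vstack([
    c + rng.normal(0, sigma, (k, 2)) for c, k in zip(centres, counts)
])
# The analyst observes only `samples`. If the mean samples-per-animal
# mu were somehow known, a naive moment estimate of density would be:
D_naive = len(samples) / (mu * side**2)
print(f"{len(samples)} samples; naive D-hat = {D_naive:.2f} (true {D_true})")
# CCR instead estimates D, mu and sigma jointly from the clustering
# pattern itself, using properties of the point process.
```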

The second approach concerns a broad class of models known as latent multinomial models. These include two-source capture-recapture models - where animals are captured by two different protocols such as photo-ID and DNA samples that cannot be matched to each other - and related multi-list models in the context of medical and social statistics, as well as more general models where data are summarized before reporting. Previous approaches to model-fitting largely use simulation-based methods such as data-augmentation via MCMC. I will show how a likelihood based on the saddlepoint approximation method can deliver remarkably fast and accurate inference for these models.
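
The saddlepoint idea can be seen in a univariate toy example (my illustration, not the latent-multinomial machinery itself): the probability mass function is reconstructed from the cumulant generating function (CGF) alone, which remains tractable for latent multinomial models even when the likelihood itself is not. The binomial case lets us check the approximation against the exact answer.

```python
# Saddlepoint approximation to a pmf from its CGF K(s) = log E[exp(sX)]:
# solve K'(s_hat) = x, then f(x) ~ exp(K(s_hat) - s_hat*x) / sqrt(2*pi*K''(s_hat)).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import binom

n, p = 50, 0.3

def K(s):
    return n * np.log(1 - p + p * np.exp(s))

def K1(s):  # K'(s)
    e = p * np.exp(s)
    return n * e / (1 - p + e)

def K2(s):  # K''(s)
    e = p * np.exp(s)
    return n * e * (1 - p) / (1 - p + e) ** 2

def saddlepoint_pmf(x):
    s_hat = brentq(lambda s: K1(s) - x, -20, 20)  # solve K'(s) = x
    return np.exp(K(s_hat) - s_hat * x) / np.sqrt(2 * np.pi * K2(s_hat))

for x in (5, 15, 25):
    print(x, saddlepoint_pmf(x), binom.pmf(x, n, p))  # approx vs exact
```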

Dr Nick Golding, University of Melbourne

Fitting demographic models to species distribution data

Correlative species distribution models are good at describing species' current distributions and inferring their environmental drivers. However, they are pretty bad at predicting what will happen if we change something, like fragmenting habitat, introducing other species, or implementing a control or conservation action. Replacing the statistically convenient (but ecologically meaningless) internal structure of these models with demographic models should enable us to make much better predictions of how distributions will change.

I will present recent work developing demographic species distribution models that extend matrix population models to explicitly consider how vital rates vary through space (like spatial integral projection models) but are fitted to commonly available species distribution data (like dynamic range models). Combining these approaches enables us to fit ecologically realistic species distribution models without the need for detailed demographic data. We can include density dependence, dispersal, biotic interactions and prior knowledge of species' population dynamics and ecology.
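
A minimal sketch of the demographic core of such a model, with entirely hypothetical vital-rate functions of my own: let the entries of a stage-structured projection matrix depend on an environmental covariate, and predict presence where the resulting population growth rate exceeds one.

```python
# Stage-structured matrix model with environment-dependent vital rates.
# The vital-rate functions below are invented for illustration.
import numpy as np

def vital_rates(env):
    """Hypothetical environmental responses for a 2-stage life cycle."""
    s_juv = 1 / (1 + np.exp(-(env - 0.2) * 3))  # juvenile survival
    s_ad = 0.8 / (1 + np.exp(-env * 2))         # adult survival
    fec = np.maximum(0.0, 2.0 * env)            # per-capita fecundity
    return s_juv, s_ad, fec

def growth_rate(env):
    """Dominant eigenvalue of the projection matrix at one site."""
    s_juv, s_ad, fec = vital_rates(env)
    A = np.array([[0.0,   fec],    # juveniles produced by adults
                  [s_juv, s_ad]])  # maturation and adult survival
    return np.max(np.abs(np.linalg.eigvals(A)))

# Predicted range: sites where the deterministic growth rate exceeds 1.
for e in np.linspace(-1, 1, 11):
    lam = growth_rate(e)
    print(f"env = {e:+.1f}  lambda = {lam:.2f}  {'present' if lam > 1 else 'absent'}")
```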

Asking for more information from the same data means we have to deal with a number of potential issues, including poorly-identified parameters and more computationally intensive statistical inference. I'll argue that non-identifiability is actually a good thing (in this context) and show how the computational issues can be resolved using the Bayesian inference package greta, and some new extensions for modelling dynamical systems.

Graham McBride, NIWA, Hamilton

All this debate about p-values misses the point: test sensible hypotheses, or simply make an assessment

In 2016 a group of senior international epidemiologists and statisticians published commentary on 25 contentious issues relating to the use of p-values. These included inferences such as "Statistical significance indicates an important relation has been detected", or "… p-value greater than 0.05 means that no effect was observed". A premier science journal (Nature) has this year published five commentaries on shortcomings and defences of p-values. Yet most applied-science authors and editors seem not to be aware of the issues; vast numbers of p-values are reported as the norm in science journals.

I argue that’s because it is seldom explained that these p-value difficulties arise from testing hypotheses that are a priori false, so we learn very little from the results. If it really is necessary to test, then do so for hypotheses that may be true, i.e., use interval or one-sided tests. Or simply make an assessment. For example, we have developed a trend-assessment procedure aligned with a requirement of the National Policy Statement for Freshwater Management (Objective A2) that overall water quality in a ‘Freshwater Management Unit’ be improved or maintained. It abandons testing altogether and instead considers two questions directly related to these requirements: (i) what is the direction of change? (ii) if that can be inferred, is the trend environmentally significant? The key postulate, in place of a hypothesis, is that there is always a trend of some magnitude (though you may not have enough data to decide on its direction).
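
As a sketch of how question (i) might be answered without a point-null test (one possible implementation under my own assumptions, not the NIWA procedure itself), one can estimate a Sen slope and report the bootstrap probability that the trend is downward:

```python
# Trend-direction assessment: Theil-Sen slope plus a residual
# bootstrap to estimate P(trend is downward). Simulated data.
import numpy as np

def sen_slope(t, y):
    """Theil-Sen estimator: median of all pairwise slopes."""
    i, j = np.triu_indices(len(y), k=1)
    return np.median((y[j] - y[i]) / (t[j] - t[i]))

rng = np.random.default_rng(42)
t = np.arange(20.0)                              # 20 annual observations
y = 5.0 - 0.03 * t + rng.normal(0, 0.3, t.size)  # simulated concentrations

b_hat = sen_slope(t, y)
resid = y - b_hat * t
boot = np.array([
    sen_slope(t, b_hat * t + rng.choice(resid, resid.size, replace=True))
    for _ in range(2000)
])
print(f"Sen slope: {b_hat:+.3f} per year")
print(f"P(trend is downward): {(boot < 0).mean():.2f}")
```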

This approach has recently found favour with water quality monitoring and management agencies. An example is given of a country-wide assessment that, for the first question, reports more meaningful information on trend direction than the traditional testing approach. The case of environmental significance is rather more complex (and is ignored altogether in point-hypothesis tests). Some ideas are proposed for how that may be addressed.

Professor Shirley Pledger, Victoria University of Wellington

The symbiosis between ecologists and statisticians

Useful scientific advances are often made in interdisciplinary research. One classical example is so-called "Applied Mathematics", which blends physics and mathematics to provide formulae for topics as disparate as bridge design and planetary motion. Another far-reaching example is the development of applied statistics by R. A. Fisher and others for the design and analysis of agricultural trials, leading to more efficient provision of food.

Applied statisticians are fortunate in being able to work with researchers in many different disciplines, learning their jargon, finding what they need to know, linking it with existing statistical models and/or developing new study designs or models to approximate the reality of the situation.

The interplay of ideas between ecologist and statistician will be illustrated by examples from capture-recapture and ecological community analysis. In these examples, chains of ideas over time have refined the models, made them more useful for description and prediction, given insights for generating more studies, and provided methods which are also useful in other disciplines. Topics touched on will include heterogeneity, over-dispersion, spatial distributions, pattern detection, under-dispersion and "Big Data". (There will be quite a lot about under-dispersion, which may even culminate in a theorem, but very little indeed on big data.)

It is to be hoped that the relationship between ecologist and statistician is always symbiotic, mutualistic in a positive way, and not parasitic.

Dr Danielle Shanahan, Director Centre for People and Nature, Zealandia

Conservation in cities: does it make sense?

Conservation is extremely challenging in urban landscapes, yet cities continue to invest significant resources into managing and promoting biodiversity in these highly altered environments. Does this make sense when global resources for conservation are scarce? In this talk I will discuss key motivations for carrying out conservation in cities, drawing on policy and planning statements from across the Oceania region. I will highlight the transformation of birdlife in Wellington, perhaps the only city in the world where bird biodiversity is on the rise rather than in decline. Finally, I will talk about what becoming nature-rich means for people, focusing on key research outcomes that have begun to quantify the health and wellbeing benefits that people receive from nature experiences.

Dr James Thorson, Alaska Fisheries Science Center, National Marine Fisheries Service, NOAA, Seattle, WA, USA

Improved realism in model-based community ecology: identifying nonlocal associations in multivariate spatio-temporal models

Community ecology is central to understanding global change in the Anthropocene, including (to choose a few examples) the impact of invasive earthworms on Boreal carbon cycling, trends in fish migration in response to melting sea ice, and changing fire regimes in high-latitude forests. These examples all include complex linkages between multiple physical and biological variables operating at both local and regional scales. I begin by claiming that community ecology can address these problems by treating each variable as a function defined across continuous space and discrete time, where expected change is represented by a multivariate functional. I summarize efforts to approximate this problem using localized dynamics, where previous research has addressed four “big-N” problems posed by spatial correlations, correlated process errors across species, advective-diffusive movement, and asymmetric species interactions. I then discuss new avenues for representing nonlocal effects, e.g. behavioral adaptation to changing landscape conditions. To do so, I first show that multivariate spatio-temporal models can generalize the “empirical orthogonal function” (EOF) analysis commonly used in atmospheric science and oceanography, and can extract dominant “features” from spatially distributed physical measurements. I then explore variable-coefficient models as a way to estimate changes in biological processes resulting from dominant physical features. Finally, I introduce “empirical orthogonal regression,” where EOF analysis is conducted simultaneously with a time-series biological model, to estimate the rotation of physical features that has maximal explanatory power for a regional biological process. I end by discussing why this class of nonseparable spatio-temporal models is likely to be useful for accounting for behavioral adaptation to changing physical conditions.
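
For reference, plain EOF analysis reduces to a singular value decomposition of the centred space-by-time data matrix; the following compact sketch (standard practice, on illustrative data, not Dr Thorson's code) extracts the spatial modes, their time series, and the variance each explains.

```python
# EOF analysis via SVD: spatial modes are the left singular vectors
# of the centred space-by-time matrix; the associated time series
# are the right singular vectors scaled by the singular values.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_years = 100, 30
X = rng.normal(0, 1, (n_sites, n_years))    # e.g. temperature anomalies

Xc = X - X.mean(axis=1, keepdims=True)      # centre each site's series
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

eof_maps = U                                # spatial patterns (one per mode)
pcs = s[:, None] * Vt                       # mode time series
var_explained = s**2 / np.sum(s**2)
print("Variance explained by first 3 EOFs:", var_explained[:3].round(3))
```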

Simon Upton, NZ Parliamentary Commissioner for the Environment

Environmental Reporting in New Zealand

The Parliamentary Commissioner for the Environment has a statutory role under the Environmental Reporting Act 2015 to comment on NZ’s environmental reporting system. With the completion of the first full cycle of five domain reports and a synthesis report under the Act, the Commissioner has decided to review its efficacy. The review is due for completion in late 2019.

The scope of the review covers the Environmental Reporting Act itself, the structure and implications of the current reporting framework, the wider environmental ‘data’ system and the roles of different central and local government agencies in that system. The review is guided by three key questions:
  1. What is the purpose of environmental reporting and how do the reports currently being undertaken contribute to that purpose?
  2. What sort of information is needed to support environmental reporting, what underlying research is needed to inform the collection of information and what analytical framework is required to present the results of data collection?
  3. What contribution do environmental reports make to improving environmental outcomes and well-being?

Environmental reporting in NZ has come a long way in twenty years, and is still evolving today. In his presentation, Simon Upton will present the key conclusions of his review, and will discuss his recommendations for improvement to New Zealand’s environmental reporting system.
