
Timetable (FOSW01)

The nature of questions arising in court that can be addressed via probability and statistical methods

Tuesday 30th August 2016 to Friday 2nd September 2016

Tuesday 30th August 2016
09:00 to 09:40 Registration
09:40 to 09:50 Welcome from Christie Marr (INI Deputy Director) INI 1
09:50 to 10:30 Jane Hutton (University of Warwick)
Estimates of life expectancy for compensation after injury

When a compensation case arises from an injury, which might be caused by medical error or an industrial or traffic accident, the financial settlement will often depend on the life expectancy. Compensation might be for expected reduction in life time, or for the cost of additional care during the rest of the injured person's life.  Estimates based on particular injuries or individual factors might be requested.

Estimates of effects of injury and life style on mortality use a variety of data sources, with no common statistics. Many lawyers assume that a larger data set is always better than a smaller data set. Statisticians should address the questions 'What is the quality of the data used?' and 'What are the biases?'. Assessments of the intended population, the accuracy of individual items, the completeness of follow-up and the precise inclusion and exclusion criteria have to be made and explained. An article on mortality after spinal cord injury used a database of 49,214 people, initially 50,661 people. Five restrictions, three of which were discussed, left 31,531 (62%) eligible people. The impact of excluding people with missing data on major covariates was not reported. I suggest that the detailed check-lists provided by the EQUATOR Network are an important resource for evaluation.

For some claims, the effects of smoking, alcohol consumption, illegal substance use and anorexia or obesity have to be considered as well as the main motivation of the claim. Effect sizes might be given as hazard ratios, standardised mortality rates, from univariate or multivariate models. Approaches to estimating life expectancy which allow for these personal factors include using reported relative risks, hazard ratios and excess death rates to modify the death rates from national or regional life tables. I will discuss the challenges I have faced, both in estimation and in communicating results in court, and the solutions I have adopted.
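The life-table adjustment described above can be sketched numerically. The snippet below is a minimal illustration only, not Professor Hutton's method: the annual death probabilities are made-up numbers, and a single reported hazard ratio is applied under a proportional-hazards assumption before recomputing a curtate life expectancy.

```python
def adjusted_life_expectancy(qx, hazard_ratio):
    """Curtate life expectancy after scaling the annual hazard.

    qx: annual death probabilities from a (hypothetical) life table.
    Under proportional hazards, annual survival (1 - q) is raised
    to the power of the hazard ratio.
    """
    survival = 1.0
    expectancy = 0.0
    for q in qx:
        survival *= (1.0 - q) ** hazard_ratio
        expectancy += survival
    return expectancy

# Illustrative (made-up) death probabilities for five years of age:
qx = [0.010, 0.011, 0.012, 0.014, 0.016]
baseline = adjusted_life_expectancy(qx, 1.0)
impaired = adjusted_life_expectancy(qx, 2.5)  # hazard ratio 2.5 from injury
```

The same mechanism extends to excess death rates (added to, rather than multiplying, the baseline hazard), which is one of the alternatives the talk compares.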


10:30 to 11:00 Angela Gallop (Axiom International Ltd)
The Changing Face of Forensic Science
Against the background of 40 years in operational forensic science, Professor Gallop considers how forensic science has changed and the impact this has had on the application of statistical approaches and availability of data to underpin them. She uses two complex cases to illustrate some important points, and concludes by considering some immediate issues which need to be tackled to avoid a new tranche of forensic science related miscarriages of justice in the future.
11:00 to 11:30 Morning Coffee
11:30 to 12:00 Michael Finkelstein (Columbia University)
The problem of false positives, some lessons from the bullet lead story, and the new U.S. Department of Justice guidance for expert testimony
False positive probability is a key element in a statistical evaluation of forensic evidence.  The Committee on Scientific Assessment of Bullet Lead Elemental Composition Comparison of the U.S. National Research Council struggled with the problem of its estimation and its conclusion is instructive.  It is an interesting fact that the recently issued guidance for expert testimony proposed by the U.S. Department of Justice points in a similar direction.  
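The role of the false-positive probability can be made concrete with an illustrative calculation (numbers assumed, not taken from the bullet lead report): for a declared "match", the likelihood ratio is at most the sensitivity divided by the false-positive probability, so even a small false-positive probability caps the evidential value.

```python
def likelihood_ratio(sensitivity, false_positive_prob):
    """LR for a reported 'match':
    P(match | same source) / P(match | different source)."""
    return sensitivity / false_positive_prob

# Even with near-perfect sensitivity, the false-positive probability
# caps the evidential value of a declared match:
lr = likelihood_ratio(0.99, 0.01)   # about 99
```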
12:00 to 12:30 Allan Jamieson (Scotland)
Casework problems with the Likelihood Ratio
This presentation will illustrate the difficulties that arise when using the LR as evidence in court with an emphasis on DNA profiles. 
Is the LR still fit for purpose?  Was it ever?
12:30 to 13:30 Lunch @ Wolfson Court
13:30 to 14:00 David Bentley (Doughty Street Chambers)
Probability and statistics – a criminal lawyer’s perspective.
This talk will be informed by my experience as a criminal trial lawyer, and will look at some of the issues that arise when probability and statistical methods are used in the trial process. I will focus particularly on the rapidly developing area of DNA analysis, and of the challenges that are presented by complex mixed profiles and of the statistics generated. Given the power that juries tend to attach to such evidence, I will suggest that there is a heightened need for clarity and transparency, as well as objective validation of new types of modelling.

14:00 to 14:30 Cheryl Thomas (University College London)
14:30 to 15:00 Bernard Robertson (LexisNexis)
The nature of questions arising in court that can be addressed via probability and logic

At the admissibility stage, judges are faced with questions such as: can extrinsic evidence of the truth of a confession (or eye-witness identification) be taken into account when considering whether the confession (or identification) is "reliable"?

The structure of these problems will be addressed using Bayesian logic, along with the more difficult question over which the High Court of Australia recently divided in IMM v R [2016] HCA 14: when evidence has to reach a threshold of "heightened probative value" to be admitted, should the judge consider the credibility and reliability of the witness and the evidence, or only the LR for the content of the evidence, assumed to be true, as evidence of the final probandum?

15:00 to 15:40 Joseph Gastwirth (George Washington University)
Statistical Measures and Methods Used to Analyze the Representativeness of Jury Pools

In the Castaneda v. Partida (1977) case the U.S. Supreme Court accepted statistical hypothesis testing for the analysis of data on the demographic mix of the individuals called for jury service over a period of time. Two years later, in Duren v. Missouri (1979), the Court noted that in order for defendants to receive a fair trial, the system used to summon individuals for jury service should produce jury pools with a demographic mix similar to that of the jury-eligible members of the community. This talk will review the commonly used measures and methods and illustrate their use. A novel measure called disparity of the risk, which was adopted by the Supreme Court of Michigan, will be described and shown to be extremely stringent. It will be seen that in a jurisdiction where minorities form eight percent of the jury-eligible population, jury pools with a minority representation of less than four percent will be deemed representative by this measure. If time permits, an alternative measure of the effect of minority under-representation on the chances of a defendant obtaining a "fair" jury will be recommended.
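The commonly used measures can be sketched as follows. The 8% and 4% figures are the illustrative ones from the abstract; the pool size of 1,000 is an assumption added for this example, and the function is a sketch of standard measures, not of the "disparity of the risk" measure itself.

```python
from math import sqrt

def disparity_measures(p_community, p_pool, pool_size):
    """Three common representativeness measures (illustrative sketch)."""
    absolute = p_community - p_pool              # absolute disparity
    comparative = absolute / p_community         # comparative disparity
    # Binomial z-statistic of the kind accepted in Castaneda v. Partida
    se = sqrt(p_community * (1 - p_community) / pool_size)
    z = (p_pool - p_community) / se
    return absolute, comparative, z

# Minorities form 8% of the eligible population but 4% of the pool:
absolute, comparative, z = disparity_measures(0.08, 0.04, 1000)
# A 4-point absolute disparity looks small, a 50% comparative
# disparity looks large, and z is strongly significant, which is
# why the choice of measure matters.
```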

15:40 to 16:00 Afternoon Tea
16:00 to 17:00 Discussion Panel INI 1
17:00 to 18:00 Welcome Wine Reception
Wednesday 31st August 2016
09:30 to 10:20 William Thompson (University of California, Irvine)
Lay Understanding (and Misunderstanding) of Quantitative Statements about the Weight of Forensic Evidence



Co-author: Rebecca Grady (University of California, Irvine)
The explanations that forensic scientists offer for their findings in reports and testimony should meet two important requirements: first, they should be scientifically correct—warranted by the underlying findings; second, they should be understandable to the lay audiences, such as lawyers and jurors, who will rely upon the reports and testimony. This presentation will describe a series of studies exploring lay reactions to quantitative statements about the weight of forensic evidence. Key issues examined include the way in which various formats for describing the weight of forensic evidence affect: (1) people’s sensitivity to important variations in the weight of the forensic evidence; (2) people’s susceptibility to fallacious misinterpretation of forensic evidence; and (3) the logical coherence of judgments made on the basis of forensic evidence. Implications of this research for forensic practice and legal policy will be discussed. 


10:20 to 11:00 Kristy Martire (University of New South Wales)
Exploring mock-juror evidence interpretation and belief updating within a probability framework

The examination and evaluation of juror interpretations of evidence presented at trial are well suited to consideration within a probability framework. In particular, Bayes' Theorem provides a useful method for setting 'normative' expectations regarding the evaluation of evidence weight and comparing observed belief change against them; it is also valuable for refining experimental designs in line with the theorem. In this presentation I will review two lines of research applying Bayes' Theorem to the belief updating of lay decision-makers: the first exploring the alignment between expert intentions and lay interpretations of forensic science expert evaluative opinions expressed using numerical and linguistic likelihood ratios; the second examining juror sensitivity to evidence relevance in the assessment of expert testimony. Some benefits and limitations of applying a probability framework to these issues will be discussed.
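The 'normative' benchmark used in such studies is the odds form of Bayes' Theorem, which a short sketch makes explicit (the numbers are illustrative, not from the studies described):

```python
def update_odds(prior_odds, lr):
    """Odds form of Bayes' Theorem: posterior odds = prior odds * LR."""
    return prior_odds * lr

def odds_to_prob(odds):
    return odds / (1 + odds)

# A decision-maker starting at odds of 1:100 for a proposition
# hears evidence reported with a likelihood ratio of 1,000:
posterior_odds = update_odds(1 / 100, 1000)    # 10:1 in favour
posterior_prob = odds_to_prob(posterior_odds)  # about 0.91
```

Comparing a mock-juror's actual belief change against this prescribed update is what gives the experiments their normative yardstick.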

11:00 to 11:30 Morning Coffee
11:30 to 12:00 Tim Clayton (LGC Limited)
12:00 to 12:30 Ruth Morgan (University College London)
Forensic trace evidence – what are the questions we need to answer?
Trace evidence has been under significant scrutiny, and in the last two years the resources allocated in the UK to exploiting the intelligence and evidence that trace materials can offer crime reconstruction have been significantly reduced. However, the value of trace evidence is significant, and a growing body of research seeks to ensure that appropriate empirical evidence bases exist for applying the classification of trace materials in forensic reconstruction contexts. This research is focussed in two critical areas: 1. enhancing our understanding of the dynamics of trace evidence within different environments; and 2. understanding the role of cognition in the interpretation of such evidence, so that together the weight and significance of specific forms of trace evidence can be established in a robust, transparent, and reproducible manner. The importance of asking the most appropriate questions, so that trace evidence can offer investigators and the criminal justice system the most robust inferences as to the significance of a trace material, cannot be overstated. In order to identify the characteristics of these questions, four aspects need to be considered: 1. the importance of situating evidence within a holistic forensic science process (from crime scene to court); 2. the importance of taking an exclusionary approach in the comparison and analysis of trace evidence to infer provenance; 3. the importance of an empirical underpinning for assessing and expressing the weight of evidence under uncertainty; and 4. the interaction of different lines of evidence within a forensic reconstruction. This presentation will outline some of the questions it is important for trace evidence to answer, with specific reference to environmental evidence.
12:30 to 13:30 Lunch @ Wolfson Court
13:30 to 14:00 Patricia Wiltshire (University of Aberdeen)
Forensic Ecology: How do we get answers to questions? How do we present them to the court?
14:00 to 14:30 Marieke Dubelaar (Universiteit Leiden)
Law, statistics and psychology, do they match?
Fact finding and evidence in criminal procedure have increasingly gained attention from other, non-legal disciplines, such as statistics, legal psychology and legal epistemology. Those disciplines provide useful insights into how to arrive at a decision on the facts and into possible pitfalls. However, those insights do not always find their way into legal practice: judicial concepts and doctrine do not seem to match them. In this presentation these problems are addressed from a juridical (continental) point of view. Where do the (systemic) weaknesses lie in the judicial system and in doctrine when it comes to the use of evidence, and what are the potential hindrances to the use of probabilistic reasoning and narratives in criminal procedure?
14:30 to 15:00 Lonneke Stevens (Vrije Universiteit Amsterdam)
Struggling judges: do they need a probability help desk? A daily legal practice point of view
Would all judicial decisions benefit from using probability and statistics? Judges do not seem to think so. In daily practice most cases are just not very complex. However, case-law demonstrates that judges recurrently struggle with questions and concepts of fact-finding even in the not too complex cases. What are these questions, which mistakes are made and how could reasoning with probabilities help?      
15:00 to 15:30 Afternoon Tea
15:30 to 17:00 Discussion Panel INI 1
Thursday 1st September 2016
10:00 to 10:30 Frans Alkemade
Bayes & the Blame Game: How to ease the mutually felt frustration between law professionals and scientists.

Not enough meaningful communication is taking place between the scientific community and the judiciary. Usually scientists start the dialogue by pointing out to judges and prosecutors that they utterly lack the skills needed to find the truth in criminal cases. Remarkably, not all judges and prosecutors are happy to accept this message right away. To make things worse, they are told that they can only cure their ignorance by swallowing some very distasteful medication, brewed from such dark ingredients as numbers and probability theorems. Naturally they tend to show a certain resistance. This leads some experts to the conclusion that the judiciary is unwilling and/or unable to listen to reason.

But that’s unfair. As a teacher to the judiciary and expert witness in criminal cases I’ve noticed that the majority of judges and prosecutors are intelligent and conscientious people who are willing to put a lot of effort into understanding what math and science have to offer. Unfortunately, after they learn the basics of probabilistic reasoning, it is very hard for them to get any further help when they try to apply this new, scientific way of reasoning to actual criminal cases in their daily practice. In my opinion judges and prosecutors won’t make the transition from confirmative thinking to Bayesian methods unless they see convincing examples of complete Bayesian analyses of complicated criminal cases, including the construction of meaningful hypotheses, the choice of sensible priors, actual estimates of all likelihoods involved, and a final calculation of the posterior, including realistic uncertainties. But that is something most experts shy away from. Rather they, correctly, explain to the court why it is so hard to report anything about priors or to give an opinion on, for example, the dependency of findings. Judges usually end up with just a few isolated LRs in their hands and a lot of questions on their minds.

Now it’s the judiciary’s turn to get frustrated: Scientists keep telling that law professionals are stupid and ignorant, but apparently science doesn’t have a solution either.

How can we solve this? In my talk I will present some thoughts, based on my discussions with forensic experts, and on my contacts with a small but growing group of Dutch judges, prosecutors and police who are cautiously beginning to accept Bayesian reasoning. I will discuss some ideas on how to deal with the considerable risks involved with presenting Bayesian analyses to the courts, and present some examples from my own Bayesian reporting. Finally I’ll allow myself to dream up some over-optimistic visions of the future.  

10:30 to 11:00 Paul Roberts (University of Nottingham)
All Talk and No Conversation? Methodological Preconditions of An Interdisciplinary Forensic Science
Ten years ago I wrote a paper titled ‘Can we Talk?’, drawing attention to the importance of promoting more effective interdisciplinary communication between lawyers and scientists in the area of forensic science. The argument was addressed to practitioners as well as to scholars concerned with the administration of criminal justice. Since that time, there have been repeated efforts and numerous enterprising projects to bring scientists (of various kinds) and criminal justice scholars and professionals together to facilitate interdisciplinary communication about forensic science, several of which I have participated in myself. These occasions are always enlightening and instructive, but often also frustrating. Talking to, or at, or over is not the same as talking with. A cacophony is not a conversation. Nor is talking to yourself.
Part of the problem, to be sure, is that not everybody is sold on the idea of interdisciplinary collaboration in forensic science. There are income streams, professional self-identities and disciplinary turf to defend. But growing ranks of practitioners and scholars appreciate the value, and even the necessity, of interdisciplinary cooperation in forensic science theory and practice. Exploring ‘the nature of questions arising in court that can be addressed via probability and statistical methods’ is evidently intended to contribute to an interdisciplinary ‘forensic science’, in the broad sense in which I understand that designation of the field. For those well motivated to contribute to interdisciplinary communication, the barriers to successful collaboration are primarily cognitive and methodological.
  Interdisciplinarity, in forensic science or anything else, is hard to do well, much harder than one might initially imagine. Interdisciplinary communication is not merely a matter of sharing information, but rather of crossing between different professional and practical life-worlds constituted by their own peculiar set of objectives, values, methods, technology, discourses, institutions and cultures. A didactic model of communication is not well-suited to interdisciplinary collaboration, nor is a simplistic model of scientific research according to which the exposure and correction of errors by superior logic or data must – sooner or later – force consensus. A genuinely interdisciplinary forensic science must be a collaborative co-construction, generating new forms of knowledge, practical techniques and policy interventions (including law reform). To advance this project requires real conversation between knowledgeable, well-motivated and reflective experts across a range of pertinent disciplines, not just more talk. As a contribution to translating talk into conversation, this paper identifies some methodological preconditions for a genuinely interdisciplinary forensic science, with illustrations drawn from recent cases in which English courts have found themselves grappling with probability and/or statistics.  

11:00 to 11:30 Morning Coffee
11:30 to 12:00 Bruce Weir (University of Washington)
How should we interpret Y-chromosome evidence?



Co-author: Taryn Hall (University of Washington)
Although the interpretation of DNA evidence has been discussed extensively, there are still areas where there remains debate on the best methods. One area is for profiles on the Y chromosome, where the lack of recombination suggests the locus-specific profiles are not independent. Although an examination of published data demonstrates that many of the loci do have independent profiles, there are sufficient dependencies that there seems little need to continue adding loci to increase discrimination: profiles matching at 30 loci are unlikely not to match at the 31st locus for example.

Purely statistical approaches break down in practice because most evidential profiles are not represented in profile databases. Observed profile frequencies offer little guide to the evidential strength of a Y-chromosome match. We have used both published and simulated data to evaluate various genetic models that serve as a basis for estimating match probabilities.
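The failure of naive counting is easy to illustrate. In the sketch below the add-one adjustment is only a placeholder device to avoid reporting zero; it is not one of the genetic models the authors evaluate, and the database size is an assumption for the example.

```python
def naive_match_probability(count, n):
    """Observed relative frequency of the evidential haplotype
    in a database of n profiles."""
    return count / n

def add_one_estimate(count, n):
    """(count + 1) / (n + 1): a simple device to avoid reporting
    zero for an unseen haplotype. NOT one of the genetic models
    evaluated in this work; it only marks the problem."""
    return (count + 1) / (n + 1)

# A profile never observed in a database of 5,000 males:
naive = naive_match_probability(0, 5000)   # 0.0, clearly too strong
adjusted = add_one_estimate(0, 5000)       # 1/5001
```

Because most evidential haplotypes have count zero, any purely count-based estimator is dominated by this kind of correction, which is why genetic models of haplotype sharing are needed.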

Y-chromosome lineages often cross geographic or ethnic population boundaries. Genetic models allow predictions of the probability that two men, one of whom is unknown, will share Y-chromosome profiles when they are members of the same or different populations. 


12:00 to 12:30 Sheila Bird (Medical Research Council; University of Strathclyde)
Statistical issues arising in the conduct of fatal accident inquiries



Review of fatal accident inquiries into 97 deaths in Scottish prison custody identified: failure properly to secure evidence; failure by procurators fiscal to allow the full extent of post-mortem examination; failure by sheriffs properly to report actually-prescribed medication dosages; failure to lead admissible empirical evidence, for example in reports by HM Chief Inspector of Prisons for Scotland, with reliance instead on subjectivity; long delays in providing written determinations; inconsistency in the determinations reached on similar cases; failure to post all written determinations on the Scottish courts' website; and failure to consider the epidemiology of prisoner deaths in deciding whether a written determination should be made.

For Scotland, most of these failures have been taken into account, and remedied, by recommendations in Lord Cullen's report.


12:30 to 13:30 Lunch @ Wolfson Court
13:30 to 14:00 Alex Biedermann (Université de Lausanne)
Recent Pan-European advances in harmonising evaluative reporting in forensic science: scope, principles and pending challenges
Co-authors: Christophe Champod (University of Lausanne), Sheila Willis (Forensic Science Ireland)

For decades, the question of how to assess and report the value of forensic results has preoccupied academics and practitioners in both forensic science and the law, across Europe and beyond. In essence, this topic gravitates around the issue of what constitutes a logical framework of reasoning, and how it can be operationalized in the applied context of legal trials. Often, statistics and probabilistic reasoning are promoted as 'the' framework, yet the overarching topic is larger and is concerned with reasonable reasoning in the face of uncertainty. Unfortunately, restricted views of the former have limited viable contributions by the latter. To help overcome these barriers, forensic science and legal practitioners across Europe have partnered, over the past few years, in developing a mutual understanding of general principles of forensic interpretation in the form of a guideline, delivered as the result of a project in the ENFSI (European Network of Forensic Science Institutes) Monopoly Programme scheme 'Strengthening the Evaluation of Forensic Results across Europe' (financially supported by the European Commission). Built upon elements of previously published standards (e.g., by the Association of Forensic Science Providers), the ENFSI Guideline for Evaluative Reporting in Forensic Science also includes an assessment template for forensic expert reports and a roadmap for implementation. This makes it one of the most cross-disciplinary, institutionally supported acknowledgments of current understandings of logical inference in the courtroom, and of scholarly research in this area. This talk will focus on presenting the scope and major principles of the ENFSI Guideline, and will discuss challenges associated with its wider and more systematic implementation. It will be argued that the guideline's matured principles make it an inevitable component of future work that seeks to promote and facilitate the smoother operation of logical judicial processes.

14:00 to 14:30 Ulrich Simmross (Bundeskriminalamt (BKA), Forensic Science Institute)
Towards a better communication between theory and imperfect realities of professional practice - On barriers among stakeholders and possible ways out

It still seems that the academic 'logical approach' does not play much of a role in the communication of the results of forensic science examinations in Germany, save for a small number of suitable cases involving DNA evidence. From a German forensic practitioner's perspective, the presentation will focus on stakeholders, their incentives, and the question of whether influential advocates might also contribute to barriers. In addition, a few proposals aiming at moderate implementation will be introduced. It is thought-provoking that, despite almost 20 years of publications and promotion in the forensic science community, Bayes has had minimal impact in the law. This may be because, apart from an appealing and coherent theory, many questions around the production and presentation of scientific evidence in the legal systems still persist. Presumably one can expect further studies on efficient communication (with mock jurors) and new high-profile case reviews. It is also almost foreseeable that various theoretical issues will be addressed, refined, and challenged again in academic publications. Nonetheless, it seems inevitable to go into the depths of the imperfect realities of professional practice, where ordinary obstacles and sometimes irrational barriers persist. It is from this reality that lessons can be learnt, and small pragmatic steps appear concrete and achievable. Those educational experiences might encourage stakeholders in countries less developed with regard to Bayes and the law to improve communication between forensic experts and judicial personnel. Countries such as the Netherlands and Sweden already represent advanced developments; it would also be helpful to gain knowledge of the effectiveness and completeness of implementation there.

14:30 to 15:00 TBC INI 1
15:00 to 15:30 Afternoon Tea
15:30 to 17:00 Discussion Panel INI 1
19:30 to 22:00 Formal Dinner at Trinity College
Friday 2nd September 2016
09:30 to 10:00 Alicia Carriquiry (Iowa State University)
Forensic databases: size, completeness, usefulness

Co-authors:  Anjali Mazumder, Stephen Fienberg
Databases play an increasingly important role in the forensic sciences, both as a means to develop and validate technologies and, in casework, to find potential matches to a crime scene sample. Many of the databases used for research by the forensic community are lacking in different ways. We use the elemental composition of glass as an example to highlight how data that are widely used by forensic scientists are not designed to permit answering the questions of interest. In casework, many of the databases used by law enforcement are privately owned and inaccessible; as a rule they lack relevance, are not representative and, in general, are assembled haphazardly from data arising in casework or other convenience samples.

10:00 to 10:30 Richard Lempert (University of Michigan)
Courts and Statistics: Varieties of Statistical Challenges

This talk will look at some of the different ways in which statistical analyses can figure in trials or on appeal and will discuss ways in which these differences give rise to different challenges that courts and statistical experts will have to meet.  For example, when statistics are used to challenge the make-up of juries the issues posed and the models used will be different than those that are posed when statistics are used to establish claims of discriminatory hiring, and the issues posed when courts draw on statistics to help resolve either of these matters will differ from issues that arise when statistics are used to convey the probative value of DNA evidence.

10:30 to 11:00 Anne Ruth Mackor (University of Groningen)
Novel Facts

According to scenario-based approaches to the assessment of evidence, courts should compare different scenarios. More particularly, they should investigate whether, and to what extent, different scenarios are capable of explaining the available evidence. Scenario-based approaches emphasize the importance of evidence that discriminates between scenarios.

In my presentation I hypothesize that the criterion of discriminating facts might be too weak in some cases and too strong in others. I investigate whether and how the criterion of novel facts can play a role next to the criterion of discriminating facts. My question is whether, and if so how, Bayesian analysis, more specifically Bayesian networks, can help to clarify the relevance of novel facts and make application of the criteria of discriminating and novel facts feasible for courts.

11:00 to 11:30 Morning Coffee
11:30 to 12:00 Norman Fenton (Queen Mary, University of London)
The challenges of Bayes in the Law

This talk reviews the potential and actual use of Bayes in the law and explains the main reasons for its lack of impact on legal practice. These include misconceptions by the legal community about Bayes’ theorem, over-reliance on the use of the likelihood ratio and the lack of adoption of modern computational methods. I will explain why I believe that Bayesian Networks, which automatically produce the necessary Bayesian calculations, provide an opportunity to address most concerns about using Bayes in the law.
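The calculation a Bayesian network automates can be sketched for the smallest possible case, a single hypothesis node with one item of evidence; real BN software performs the same enumeration across many linked variables. All numbers below are illustrative assumptions.

```python
def posterior_by_enumeration(prior, p_e_given_h, p_e_given_not_h):
    """Posterior for the smallest possible network, H -> E,
    computed by enumerating the two joint probabilities.
    BN software automates exactly this across many variables."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Illustrative numbers: prior 1%, evidence certain under H,
# one-in-a-million false positive under not-H:
p = posterior_by_enumeration(0.01, 1.0, 1e-6)   # about 0.9999
```

The point of BN tools is that once the structure and conditional tables are specified, this combination of prior and likelihoods is carried out automatically, rather than leaving the court with an isolated LR.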


12:00 to 12:30 David Caruso (University of Adelaide)
Capacity and Comprehension of Mathematical Reasoning by Counsel and Courts

This presentation begins by questioning the extent to which courts can comprehend, so as to meaningfully address, evidential and trial questions framed in statistical/probabilistic terms. Scientific evidence is increasingly used in trials, meaning there is a growing need for courts and judicial officers to understand the expression of the scientific method. This presentation will explore the extent to which the courts currently comprehend mathematical rationalisation of evidence, comparing Australia and the United Kingdom. The presentation will examine the nature of legal education as the cornerstone and building block of litigation capacities (at least for lawyers), both pre- and post-admission. There is limited scope within common law legal education for preparing future litigators for mathematical approaches to evidence. Apart from “experience”, there is limited continuing training for lawyers and judges post admission. Current reforms to legal education within the tertiary sector and legal profession will be analysed in the context of their potential to promote interdisciplinary capacities with respect to litigious proof. The presentation will discuss the current and long-sustained fallibilities of litigation relying on subjective, experience-based decision-making. Courts in Australia have often criticised the Bayesian method and have shown hesitation to move away from a legal system based on “human” experience and “human” reasoning. Mathematical approaches to proof will be considered from the perspective of their effect on common law notions of the fair trial and, primarily, on the participants involved in the delivery of a fair trial. Modernisation of litigation services (through electronic and database resourcing) equally requires consideration of frameworks for statistical/mathematical evaluation of evidence and, necessarily, of who is best placed to develop and implement such systems.
Unless legal education is altered, modern litigation may decreasingly rely on the traditional skill sets of jurists, jurors and lawyers.   This presentation examines the merits of combining a statistical analysis of separate pieces of evidence into an ultimate probability of guilt, as against retention of tested methods for dispute resolution.

12:30 to 13:30 Lunch @ Wolfson Court
14:00 to 14:30 Colin Aitken (University of Edinburgh)
Relevant populations and data

The likelihood ratio has many advocates amongst forensic statisticians as the best way to evaluate evidence. Its use often requires training data from populations whose relevance to the issue at trial is determined by propositions put forward by the prosecution and defence in a criminal trial. The choice of these populations and the choice of the sampling of data from them are two reasons for the courts to query an approach to evidence evaluation based on the likelihood ratio. Consideration of these choices will be discussed in the context of recent work on the evaluation of evidence of the quantities of cocaine on banknotes in drug-related crimes.

14:30 to 15:00 Karen Kafadar (University of Virginia)
Statistical Issues and Reliability of Eyewitness Identification

Among the 330 wrongful convictions identified by the Innocence Project that were later overturned by DNA evidence resurrected from the crime scene, 238 (72%) involved eyewitness testimony. Courtroom identifications from an eyewitness can be extremely powerful evidence in a trial. Yet memory is not a perfect video recording of events, and one's recollection of the events surrounding an incident is even less reliable. The U.S. National Academy of Sciences issued a report evaluating the scientific research on memory and eyewitness identification. The Committee, composed of researchers (statisticians, psychologists, sociologists) and judicial system personnel (judges, attorneys), reviewed published research on the factors that influence the accuracy and consistency of eyewitnesses' identifications, conducted via laboratory and field studies. I will describe the research on memory and recollection, shortcomings in the statistical methods used in evaluating laboratory studies, and Committee recommendations for better statistical evaluation, standardization of procedures, and informing judicial personnel of factors that can negatively impact the accuracy of eyewitness testimony in the courtroom.

15:00 to 15:30 Afternoon Tea
15:30 to 16:30 David Spiegelhalter (University of Cambridge)
Communicating likelihood ratios
One method for communicating likelihood ratios is to use words to express ranges of values. I shall look at the development of these recommendations, and make comparisons with other areas in which similar proposals have been made for communicating probabilities or risks using words, such as climate change and drug safety, and how these have been interpreted by audiences.
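A verbal scale of the kind discussed maps ranges of likelihood-ratio values to standard phrases. The thresholds and wording below follow the general pattern of published scales (e.g. the Association of Forensic Science Providers' 2009 standard) but are shown purely as an illustration, not as any recommended scale.

```python
# Illustrative thresholds and phrases; not a recommended scale.
SCALE = [
    (10, "weak support"),
    (100, "moderate support"),
    (1000, "moderately strong support"),
    (10000, "strong support"),
]

def verbal_expression(lr):
    """Map a likelihood ratio to an illustrative verbal phrase."""
    if lr <= 1:
        return "no support for the proposition"
    for upper_bound, phrase in SCALE:
        if lr < upper_bound:
            return phrase
    return "very strong support"
```

One hazard of such scales, central to how audiences interpret them, is that a single phrase covers a wide range of numerical values: an LR of 11 and an LR of 99 receive the same words.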