# Seminars (FOS)



FOS 24th August 2016
09:30 to 10:00
Jane Hutton Property and competence of witnesses
FOS 24th August 2016
10:00 to 10:30
Richard Gill The fundamental problem of forensic statistics
FOS 24th August 2016
10:30 to 11:00
Nadine Smit Vulnerabilities revealed in successful appeal cases in the UK
FOS 24th August 2016
14:00 to 14:30
Patricia Wiltshire Challenges in forensic trace evidence
FOS 24th August 2016
14:30 to 15:00
Norman Fenton Use of Bayesian networks in legal reasoning
As preparation for the forthcoming FOS workshops I will provide an overview of the challenges and opportunities in using Bayesian networks in legal arguments and, in particular, how they address fundamental limitations of the likelihood ratio based approach that is assumed to be 'best practice' with respect to Bayesian reasoning in the law.

FOS 24th August 2016
15:30 to 16:00
FOS 24th August 2016
16:00 to 16:30
Joseph Gastwirth Some (hopefully) interesting uses of statistical reasoning in legal cases
FOSW01 30th August 2016
09:50 to 10:30
Jane Hutton Estimates of life expectancy for compensation after injury
When a compensation case arises from an injury, which might be caused by medical error or an industrial or traffic accident, the financial settlement will often depend on life expectancy. Compensation might be for the expected reduction in lifetime, or for the cost of additional care during the rest of the injured person's life. Estimates based on particular injuries or individual factors might be requested.

Estimates of the effects of injury and lifestyle on mortality use a variety of data sources, with no common statistics. Many lawyers assume that a larger data set is always better than a smaller data set. Statisticians should address the questions 'What is the quality of the data used?' and 'What are the biases?'. Assessments of the intended population, the accuracy of individual items, the completeness of follow-up and the precise inclusion and exclusion criteria have to be made and explained. An article on mortality after spinal cord injury used a database of 49,214 people, from an initial 50,661. Five restrictions, three of which were discussed, left 31,531 (62%) eligible people. The impact of excluding people with missing data on major covariates was not reported. I suggest that the detailed checklists provided by the EQUATOR Network are an important resource for evaluation (http://www.equator-network.org/).

For some claims, the effects of smoking, alcohol consumption, illegal substance use and anorexia or obesity have to be considered as well as the main motivation of the claim. Effect sizes might be given as hazard ratios or standardised mortality rates, from univariate or multivariate models. Approaches to estimating life expectancy which allow for these personal factors include using reported relative risks, hazard ratios and excess death rates to modify the death rates from national or regional life tables. I will discuss the challenges I have faced, both in estimation and in communicating results in court, and the solutions I have adopted.
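One of the approaches mentioned above, modifying life-table death rates by a reported hazard ratio, can be sketched numerically. The rates and hazard ratio below are invented for illustration and are not taken from any case or data set:

```python
# A sketch of hazard-ratio adjustment of life-table rates; all numbers
# below are invented for illustration, not taken from any real life table.

def adjust_rates(annual_death_rates, hazard_ratio):
    """Apply a proportional-hazards adjustment to annual death rates q_x.

    On the survival scale: q' = 1 - (1 - q) ** hazard_ratio.
    """
    return [1 - (1 - q) ** hazard_ratio for q in annual_death_rates]

def curtate_life_expectancy(annual_death_rates):
    """Expected whole years lived: the sum of cumulative survival probabilities."""
    expectancy, survival = 0.0, 1.0
    for q in annual_death_rates:
        survival *= 1 - q
        expectancy += survival
    return expectancy

# Toy death rates rising with age (ages 60-99, say), plus a hazard ratio
# of 2.0 as might be reported for an injury-related excess risk.
baseline = [0.005 * 1.09 ** i for i in range(40)]
e_base = curtate_life_expectancy(baseline)
e_adj = curtate_life_expectancy(adjust_rates(baseline, hazard_ratio=2.0))
print(f"remaining life expectancy: {e_base:.1f} vs {e_adj:.1f} years")
```

The adjustment acts on the survival scale (1 - q raised to the power of the hazard ratio), which keeps the adjusted rates valid probabilities even for large hazard ratios.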

FOSW01 30th August 2016
10:30 to 11:00
Angela Gallop The Changing Face of Forensic Science
Against the background of 40 years in operational forensic science, Professor Gallop considers how forensic science has changed and the impact this has had on the application of statistical approaches and availability of data to underpin them. She uses two complex cases to illustrate some important points, and concludes by considering some immediate issues which need to be tackled to avoid a new tranche of forensic science related miscarriages of justice in the future.
FOSW01 30th August 2016
11:30 to 12:00
Michael Finkelstein The problem of false positives, some lessons from the bullet lead story, and the new U.S. Department of Justice guidance for expert testimony
False positive probability is a key element in a statistical evaluation of forensic evidence.  The Committee on Scientific Assessment of Bullet Lead Elemental Composition Comparison of the U.S. National Research Council struggled with the problem of its estimation and its conclusion is instructive.  It is an interesting fact that the recently issued guidance for expert testimony proposed by the U.S. Department of Justice points in a similar direction.
FOSW01 30th August 2016
12:00 to 12:30
Allan Jamieson Casework problems with the Likelihood Ratio
This presentation will illustrate the difficulties that arise when using the LR as evidence in court with an emphasis on DNA profiles.
Is the LR still fit for purpose?  Was it ever?
FOSW01 30th August 2016
13:30 to 14:00
David Bentley Probability and statistics – a criminal lawyer’s perspective.
This talk will be informed by my experience as a criminal trial lawyer, and will look at some of the issues that arise when probability and statistical methods are used in the trial process. I will focus particularly on the rapidly developing area of DNA analysis, and on the challenges presented by complex mixed profiles and the statistics they generate. Given the power that juries tend to attach to such evidence, I will suggest that there is a heightened need for clarity and transparency, as well as objective validation of new types of modelling.

FOSW01 30th August 2016
14:00 to 14:30
Cheryl Thomas tba
FOSW01 30th August 2016
14:30 to 15:00
Bernard Robertson The nature of questions arising in court that can be addressed via probability and logic
At the admissibility stage, judges are faced with questions such as: can extrinsic evidence of the truth of a confession (or eye-witness identification) be taken into account when considering whether a confession (or identification) is "reliable"?

The structure of these problems will be addressed using Bayesian logic, along with the more difficult question over which the High Court of Australia recently divided in IMM v R [2016] HCA 14: namely, whether, when evidence has to reach a threshold of "heightened probative value" to be admitted, the judge should consider the credibility and reliability of the witness and the evidence, or should consider only the LR for the content of the evidence as evidence of the final probandum, assuming it to be true.
FOSW01 30th August 2016
15:00 to 15:40
Joseph Gastwirth Statistical Measures and Methods Used to Analyze the Representativeness of Jury Pools
In the Castaneda v. Partida (1977) case the U.S. Supreme Court accepted statistical hypothesis testing for the analysis of data on the demographic mix of the individuals called for jury service over a period of time. Two years later, in Duren v. Missouri (1979), the Court noted that in order for defendants to receive a fair trial, the system used to summon individuals for jury service should produce jury pools with a demographic mix similar to that of the jury-eligible members of the community. This talk will review the commonly used measures and methods and illustrate their use. A novel measure called 'disparity of the risk', which was adopted by the Supreme Court of Michigan, will be described and shown to be extremely stringent. It will be seen that in a jurisdiction where minorities form eight percent of the jury-eligible population, jury pools with a minority representation less than four percent will be deemed representative by this measure. If time permits, an alternative measure of the effect of minority under-representation on the chances of a defendant obtaining a “fair” jury will be recommended.
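Separately from the 'disparity of the risk' measure described in the talk, the hypothesis-testing approach accepted in Castaneda compared observed minority counts with their binomial expectation in units of standard deviations (the Court's "two or three standard deviations" language). A minimal sketch of that comparison, with an invented pool size:

```python
import math

# Sketch of the binomial "standard deviations" comparison from Castaneda;
# the pool size and counts below are invented for illustration.

def jury_pool_z(pool_size, minority_in_pool, eligible_fraction):
    """How many standard deviations the observed minority count falls
    below (negative) or above (positive) its binomial expectation."""
    expected = pool_size * eligible_fraction
    sd = math.sqrt(pool_size * eligible_fraction * (1 - eligible_fraction))
    return (minority_in_pool - expected) / sd

# A scenario like the one in the abstract: 8% jury-eligible minority
# population, but only 4% minority representation in a pool of 1,000.
z = jury_pool_z(pool_size=1000, minority_in_pool=40, eligible_fraction=0.08)
print(f"z = {z:.2f}")  # z = -4.66, well beyond two or three standard deviations
```

A pool that a stringent measure would deem "representative" can thus still sit several standard deviations below its expectation under the older test.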
FOSW01 31st August 2016
09:30 to 10:20
William Thompson Lay Understanding (and Misunderstanding) of Quantitative Statements about the Weight of Forensic Evidence
Co-author: Rebecca Grady (University of California, Irvine)

The explanations that forensic scientists offer for their findings in reports and testimony should meet two important requirements: first, they should be scientifically correct—warranted by the underlying findings; second, they should be understandable to the lay audiences, such as lawyers and jurors, who will rely upon the reports and testimony. This presentation will describe a series of studies exploring lay reactions to quantitative statements about the weight of forensic evidence. Key issues examined include the way in which various formats for describing the weight of forensic evidence affect: (1) people’s sensitivity to important variations in the weight of the forensic evidence; (2) people’s susceptibility to fallacious misinterpretation of forensic evidence; and (3) the logical coherence of judgments made on the basis of forensic evidence. Implications of this research for forensic practice and legal policy will be discussed.
FOSW01 31st August 2016
10:20 to 11:00
Kristy Martire Exploring mock-juror evidence interpretation and belief updating within a probability framework
The examination and evaluation of juror interpretations of evidence presented at trial is well suited to consideration within a probability framework. In particular, Bayes' Theorem provides a useful method for setting, and comparing against, 'normative' expectations regarding the evaluation of evidence weight. It is also valuable for refining experimental designs in line with the theorem. In this presentation I will review two lines of research applying Bayes' Theorem to the belief updating of lay decision-makers: the first exploring the alignment between expert intentions and lay interpretations of forensic science expert evaluative opinions expressed using numerical and linguistic likelihood ratios; the second examining juror sensitivity to evidence relevance in the assessment of expert testimony. Some benefits and limitations of the application of a probability framework to these issues will be discussed.
FOSW01 31st August 2016
11:30 to 12:00
Tim Clayton tba
FOSW01 31st August 2016
12:00 to 12:30
Ruth Morgan Forensic trace evidence – what are the questions we need to answer?
Trace evidence has been under significant scrutiny, and in the last two years the resources allocated in the UK to exploiting the intelligence and evidence that trace evidence can offer crime reconstruction have been significantly reduced. However, the value of trace evidence is significant, and there is a growing body of research being undertaken to ensure that the classification of trace materials in forensic reconstruction contexts rests on appropriate empirical evidence bases. This research is focussed in two critical areas: 1. enhancing our understanding of the dynamics of trace evidence within different environments; and 2. understanding the role of cognition in the interpretation of such evidence, so that together the weight and significance of specific forms of trace evidence can be established in a robust, transparent, and reproducible manner. The importance of asking the most appropriate questions, so that trace evidence can offer investigators and the criminal justice system the most robust inferences as to the significance of a trace material, cannot be overstated. In order to identify the characteristics of these questions, four aspects need to be considered: 1. the importance of situating evidence within a holistic forensic science process (from crime scene to court); 2. the importance of taking an exclusionary approach in the comparison and analysis of trace evidence to infer provenance; 3. the importance of an empirical underpinning for assessing and expressing the weight of evidence under uncertainty; 4. the interaction of different lines of evidence within a forensic reconstruction. This presentation will outline some of the questions it is important for trace evidence to answer, with specific reference to environmental evidence.
FOSW01 31st August 2016
13:30 to 14:00
Patricia Wiltshire Forensic Ecology: How do we get answers to questions? How do we present them to the court?
FOSW01 31st August 2016
14:00 to 14:30
Marieke Dubelaar Law, statistics and psychology, do they match?
Fact finding and evidence in criminal procedure have increasingly gained attention from other, non-legal disciplines, such as statistics, legal psychology and legal epistemology. These disciplines provide useful insights into how to arrive at a decision on the facts, and into possible pitfalls. However, those insights do not always find their way into legal practice: judicial concepts and doctrine do not seem to match them. In this presentation these problems are addressed from a juridical (continental) point of view. Where do the (systemic) weaknesses lie in the judicial system and in doctrine when it comes to the use of evidence, and what are the potential hindrances to the use of probabilistic reasoning and narratives in criminal procedure?
FOSW01 31st August 2016
14:30 to 15:00
Lonneke Stevens Struggling judges: do they need a probability help desk? A daily legal practice point of view
Would all judicial decisions benefit from using probability and statistics? Judges do not seem to think so. In daily practice most cases are just not very complex. However, case-law demonstrates that judges recurrently struggle with questions and concepts of fact-finding even in the not too complex cases. What are these questions, which mistakes are made and how could reasoning with probabilities help?
FOSW01 1st September 2016
10:00 to 10:30
Frans Alkemade Bayes & the Blame Game: How to ease the mutually felt frustration between law professionals and scientists.
Not enough meaningful communication is taking place between the scientific community and the judiciary. Usually scientists start the dialogue by pointing out to judges and prosecutors that they utterly lack the skills needed to find the truth in criminal cases. Remarkably, not all judges and prosecutors are happy to accept this message right away. To make things worse, they are told that they can only cure their ignorance by swallowing some very distasteful medication, brewed from such dark ingredients as numbers and probability theorems. Naturally they tend to show a certain resistance. This leads some experts to the conclusion that the judiciary is unwilling and/or unable to listen to reason.

But that’s unfair. As a teacher to the judiciary and expert witness in criminal cases I’ve noticed that the majority of judges and prosecutors are intelligent and conscientious people who are willing to put a lot of effort into understanding what math and science have to offer. Unfortunately, after they learn the basics of probabilistic reasoning, it is very hard for them to get any further help when they try to apply this new, scientific way of reasoning to actual criminal cases in their daily practice. In my opinion judges and prosecutors won’t make the transition from confirmative thinking to Bayesian methods, unless they see convincing examples of complete Bayesian analyses of complicated criminal cases, including the construction of meaningful hypotheses, the choice of sensible priors, actual estimates of all likelihoods involved, and a final calculation of the posterior, including realistic uncertainties. But that’s something most experts shy away from. Rather they - correctly - explain to the court why it’s so hard to report anything about priors or to give an opinion on e.g. the dependency of findings. Judges usually end up with just a few isolated LRs in their hands and a lot of questions on their minds.

Now it’s the judiciary’s turn to get frustrated: scientists keep saying that law professionals are stupid and ignorant, but apparently science doesn’t have a solution either.

How can we solve this? In my talk I will present some thoughts, based on my discussions with forensic experts, and on my contacts with a small but growing group of Dutch judges, prosecutors and police who are cautiously beginning to accept Bayesian reasoning. I will discuss some ideas on how to deal with the considerable risks involved with presenting Bayesian analyses to the courts, and present some examples from my own Bayesian reporting. Finally I’ll allow myself to dream up some over-optimistic visions of the future.
FOSW01 1st September 2016
10:30 to 11:00
Paul Roberts All Talk and No Conversation? Methodological Preconditions of An Interdisciplinary Forensic Science
Ten years ago I wrote a paper titled ‘Can we Talk?’, drawing attention to the importance of promoting more effective interdisciplinary communication between lawyers and scientists in the area of forensic science. The argument was addressed to practitioners as well as to scholars concerned with the administration of criminal justice. Since that time, there have been repeated efforts and numerous enterprising projects to bring scientists (of various kinds) and criminal justice scholars and professionals together to facilitate interdisciplinary communication about forensic science, several of which I have participated in myself. These occasions are always enlightening and instructive, but often also frustrating. Talking to or at or over is not the same as talking with. A cacophony is not a conversation. Nor is talking to yourself.
Part of the problem, to be sure, is that not everybody is sold on the idea of interdisciplinary collaboration in forensic science. There are income streams, professional self-identities and disciplinary turf to defend. But growing ranks of practitioners and scholars appreciate the value, and even the necessity, of interdisciplinary cooperation in forensic science theory and practice. Exploring ‘the nature of questions arising in court that can be addressed via probability and statistical methods’ is evidently intended to contribute to an interdisciplinary ‘forensic science’, in the broad sense in which I understand that designation of the field. For those well-motivated to contribute to interdisciplinary communication, the barriers to successful collaboration are primarily cognitive and methodological.
Interdisciplinarity, in forensic science or anything else, is hard to do well, much harder than one might initially imagine. Interdisciplinary communication is not merely a matter of sharing information, but rather of crossing between different professional and practical life-worlds constituted by their own peculiar set of objectives, values, methods, technology, discourses, institutions and cultures. A didactic model of communication is not well-suited to interdisciplinary collaboration, nor is a simplistic model of scientific research according to which the exposure and correction of errors by superior logic or data must – sooner or later – force consensus. A genuinely interdisciplinary forensic science must be a collaborative co-construction, generating new forms of knowledge, practical techniques and policy interventions (including law reform). To advance this project requires real conversation between knowledgeable, well-motivated and reflective experts across a range of pertinent disciplines, not just more talk. As a contribution to translating talk into conversation, this paper identifies some methodological preconditions for a genuinely interdisciplinary forensic science, with illustrations drawn from recent cases in which English courts have found themselves grappling with probability and/or statistics.

FOSW01 1st September 2016
11:30 to 12:00
Bruce Weir How should we interpret Y-chromosome evidence?
Co-author: Taryn Hall (University of Washington)

Although the interpretation of DNA evidence has been discussed extensively, there are still areas where debate remains about the best methods. One such area is profiles on the Y chromosome, where the lack of recombination suggests the locus-specific profiles are not independent. Although an examination of published data demonstrates that many of the loci do have independent profiles, there are sufficient dependencies that there seems little need to continue adding loci to increase discrimination: profiles matching at 30 loci are unlikely not to match at the 31st locus, for example.

Purely statistical approaches break down in practice because most evidential profiles are not represented in profile databases. Observed profile frequencies offer little guide to the evidential strength of a Y-chromosome match. We have used both published and simulated data to evaluate various genetic models that serve as a basis for estimating match probabilities.

Y-chromosome lineages often cross geographic or ethnic population boundaries. Genetic models allow predictions of the probability that two men, one of whom is unknown, will share Y-chromosome profiles when they are members of the same or different populations.
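As a simple illustration of the kind of genetic model alluded to: under a Balding-Nichols-style coancestry correction for a haploid lineage marker, the probability that an unknown man shares a Y haplotype of population frequency p is theta + (1 - theta)p, where theta is the coancestry coefficient. This is a generic textbook formula, not necessarily the authors' model; the database size and theta value below are invented:

```python
# Generic coancestry-adjusted match probability for a haploid lineage
# marker (Balding-Nichols style). The theta value and database size are
# invented; this is a textbook sketch, not the speakers' actual model.

def y_match_probability(p, theta):
    """P(an unknown man shares the haplotype) given frequency p and coancestry theta."""
    return theta + (1 - theta) * p

# A haplotype unseen in a database of n profiles: a crude add-one
# frequency estimate is 1 / (n + 1) (illustrative only).
n = 5000
p_est = 1 / (n + 1)
print(y_match_probability(p_est, theta=0.03))
```

Note that when theta dominates p, as it does for rare haplotypes, the match probability is driven by the assumed population structure rather than by the observed database count, which is the point made above about frequencies offering little guide.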
FOSW01 1st September 2016
12:00 to 12:30
Sheila Bird Statistical issues arising in the conduct of fatal accident inquiries
Review of fatal accident inquiries into 97 deaths in Scottish prison custody identified: failure properly to secure evidence; failure by procurators to allow the full extent of post-mortem examination; failure by sheriffs properly to report actually-prescribed medication dosages; failure to lead admissible empirical evidence, for example in reports by HM Chief Inspector of Prisons for Scotland, with reliance instead on subjectivity; long delays in providing written determinations; inconsistency in the determinations reached on similar cases; failure to post all written determinations on the Scottish courts' website; and failure to consider the epidemiology of prisoner deaths in deciding whether a written determination should be made.

For Scotland, most of these failures have been taken into account, and remedied, by the recommendations in the report of Lord Cullen.
FOSW01 1st September 2016
13:30 to 14:00
Alex Biedermann Recent Pan-European advances in harmonising evaluative reporting in forensic science: scope, principles and pending challenges
Co-authors: Christophe Champod (University of Lausanne), Sheila Willis (Forensic Science Ireland)

For decades, the question of how to assess and report the value of forensic results has preoccupied academics and practitioners in both forensic science and the law, across Europe and beyond. In essence, this topic gravitates around the issue of what constitutes a logical framework of reasoning, and how it can be operationalised in the applied context of legal trials. Often, statistics and probabilistic reasoning are promoted as *the* framework, yet the overarching topic is larger and is concerned with reasonable reasoning in the face of uncertainty. Unfortunately, restricted views of the former have limited viable contributions by the latter. To help overcome these barriers, forensic science and legal practitioners across Europe have partnered, over the past few years, in developing a mutual understanding of general principles of forensic interpretation in the form of a guideline, delivered as the result of a project in the ENFSI (European Network of Forensic Science Institutes) Monopoly Programme scheme 'Strengthening the Evaluation of Forensic Results across Europe' (financially supported by the European Commission). Built upon elements of previously published standards (e.g., by the Association of Forensic Science Providers), the *ENFSI Guideline for Evaluative Reporting in Forensic Science* also includes an assessment template for forensic expert reports and a roadmap for implementation. This makes it one of the most cross-disciplinary, institutionally supported acknowledgments of current understandings of logical inference in the courtroom, and of scholarly research in this area. This talk will focus on presenting the scope and major principles of the ENFSI Guideline, and discuss challenges associated with its wider and more systematic implementation. It will be argued that the guideline's matured principles make it an inevitable component of future work that seeks to promote and facilitate the smoother operation of logical judicial processes.

FOSW01 1st September 2016
14:00 to 14:30
Ulrich Simmross Towards a better communication between theory and imperfect realities of professional practice - On barriers among stakeholders and possible ways out
It still seems that the academic ‘logical approach’ does not play much of a role in the communication of the results of forensic science examinations in Germany, save for a small number of suitable cases involving DNA evidence. From a German forensic practitioner’s perspective, the presentation will focus on stakeholders, their incentives, and the question of whether influential advocates might also contribute to barriers. In addition, a few proposals aiming at moderate implementation will be introduced. It is thought-provoking that, despite many publications and almost 20 years of promotion in the forensic science community, Bayes has had minimal impact in the law. This may be because, apart from an appealing and coherent theory, many questions around the production and presentation of scientific evidence in the legal systems still persist. Presumably one can expect further studies on efficient communication (with mock jurors) and new high-profile case reviews. It is also almost foreseeable that various theoretical issues will be addressed and refined, only to be challenged again in academic publications. Nonetheless, it seems inevitable to go into the depths of the imperfect realities of professional practice, where ordinary obstacles and sometimes irrational barriers persist. It is from this reality that lessons can be learnt; small steps towards pragmatic solutions appear concrete and achievable. Those educational experiences might encourage stakeholders in countries less developed with regard to Bayes and the law to improve communication between forensic experts and judicial personnel. Countries such as the Netherlands and Sweden already stand for advanced developments. It would also be helpful to gain knowledge of the effectiveness and exhaustiveness of implementation.
FOSW01 1st September 2016
14:30 to 15:00
TBC
FOSW01 2nd September 2016
09:30 to 10:00
Alicia Carriquiry Forensic databases: size, completeness, usefulness
Co-authors:  Anjali Mazumder, Stephen Fienberg
Databases play an increasingly important role in the forensic sciences, both as a means to develop and validate technologies and, in casework, as a way to find potential matches to a crime-scene sample. Many of the databases used for research by the forensic community are lacking in different ways. We use the elemental composition of glass as an example to highlight how data that are widely used by forensic scientists are not designed to permit answering the questions of interest. In casework, many of the databases used by law enforcement are privately owned and inaccessible; as a rule, they lack relevance, are not representative, and in general are assembled haphazardly from data arising in casework or other convenience samples.
FOSW01 2nd September 2016
10:00 to 10:30
Richard Lempert Courts and Statistics: Varieties of Statistical Challenges
This talk will look at some of the different ways in which statistical analyses can figure in trials or on appeal and will discuss ways in which these differences give rise to different challenges that courts and statistical experts will have to meet.  For example, when statistics are used to challenge the make-up of juries the issues posed and the models used will be different than those that are posed when statistics are used to establish claims of discriminatory hiring, and the issues posed when courts draw on statistics to help resolve either of these matters will differ from issues that arise when statistics are used to convey the probative value of DNA evidence.
FOSW01 2nd September 2016
10:30 to 11:00
Anne Ruth Mackor Novel Facts
According to scenario-based approaches to the assessment of evidence, courts should compare different scenarios. In particular, they should investigate whether and to what extent different scenarios are capable of explaining the available evidence. Scenario-based approaches emphasize the importance of evidence that discriminates between scenarios.

In my presentation I hypothesize that the criterion of discriminating facts might be too weak in some cases and too strong in others. I investigate whether and how the criterion of novel facts can play a role next to the criterion of discriminating facts. My question is whether, and if so how, Bayesian analysis, more specifically Bayesian networks, can help to clarify the relevance of novel facts and to make the application of the criteria of discriminating and of novel facts feasible for courts.
FOSW01 2nd September 2016
11:30 to 12:00
Norman Fenton The challenges of Bayes in the Law
This talk reviews the potential and actual use of Bayes in the law and explains the main reasons for its lack of impact on legal practice. These include misconceptions by the legal community about Bayes’ theorem, over-reliance on the use of the likelihood ratio and the lack of adoption of modern computational methods. I will explain why I believe that Bayesian Networks, which automatically produce the necessary Bayesian calculations, provide an opportunity to address most concerns about using Bayes in the law.

FOSW01 2nd September 2016
12:00 to 12:30
David Caruso Capacity and Comprehension of Mathematical Reasoning by Counsel and Courts
This presentation begins by questioning the extent to which courts can comprehend, so as to meaningfully address, evidential and trial questions approached through statistical/probabilistic methods. Scientific evidence is increasingly used in trials, meaning there is a growing need for courts and judicial officers to understand the expression of the scientific method. The presentation will explore the extent to which the courts currently comprehend mathematical rationalisation of evidence, comparing Australia and the United Kingdom.

The presentation will examine the nature of legal education as the cornerstone and building block of litigation capacities (at least for lawyers), both pre- and post-admission. There is limited scope within common law legal education for preparing future litigators for mathematical approaches to evidence, and, beyond “experience”, limited continuing training for lawyers and judges post-admission. Current reforms to legal education within the tertiary sector and legal profession will be analysed in the context of their potential to promote interdisciplinary capacities with respect to litigious proof.

The presentation will then discuss the long-sustained fallibilities of litigation that relies on subjective, experience-based decision-making. Courts in Australia have often criticised the Bayesian method and have shown hesitation to move away from a legal system based on “human” experience and “human” reasoning. Mathematical approaches to proof will be considered from the perspective of their effect on common law notions of the fair trial and, primarily, on the participants involved in the delivery of a fair trial. Modernisation of litigation services (through electronic and database resourcing) equally requires consideration of frameworks for statistical/mathematical evaluation of evidence and, necessarily, of who is best placed to develop and implement such systems. Unless legal education is altered, modern litigation may rely decreasingly on the traditional skill sets of jurists, jurors and lawyers. Finally, this presentation examines the merits of combining a statistical analysis of separate pieces of evidence into an ultimate probability of guilt, as against the retention of tested methods for dispute resolution.
FOSW01 2nd September 2016
14:00 to 14:30
Colin Aitken Relevant populations and data
The likelihood ratio has many advocates amongst forensic statisticians as the best way to evaluate evidence. Its use often requires training data from populations whose relevance to the issue at trial is determined by propositions put forward by the prosecution and defence in a criminal trial. The choice of these populations and the choice of the sampling of data from them are two reasons for the courts to query an approach to evidence evaluation based on the likelihood ratio. Consideration of these choices will be discussed in the context of recent work on the evaluation of evidence of the quantities of cocaine on banknotes in drug-related crimes.
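The evidential role of the likelihood ratio can be stated in one line of Bayes' theorem in odds form: posterior odds = likelihood ratio × prior odds. A minimal sketch with invented numbers (not drawn from the banknote casework):

```python
# Bayes' theorem in odds form: posterior odds = LR x prior odds.
# All numbers are invented and unrelated to any real case.

def update_odds(prior_odds, likelihood_ratio):
    """One multiplication updates the odds on the prosecution proposition."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    return odds / (1 + odds)

prior_odds = 1 / 999        # prior probability of 0.001
lr = 10_000                 # an LR strongly supporting the prosecution proposition
posterior = odds_to_probability(update_odds(prior_odds, lr))
print(f"posterior probability = {posterior:.3f}")  # 0.909
```

The posterior is as sensitive to the prior odds as to the LR itself, which is one reason the choice of relevant population behind both quantities attracts scrutiny in court.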
FOSW01 2nd September 2016
14:30 to 15:00
Karen Kafadar Statistical Issues and Reliability of Eyewitness Identification
Among the 330 wrongful convictions identified by the Innocence Project that were later overturned by DNA evidence resurrected from the crime scene, 238 (72%) involved eyewitness testimony. Courtroom identifications from an eyewitness can be extremely powerful evidence in a trial. Yet memory is not a perfect video recording of events, and one's recollection of the events surrounding an incident is even less reliable. The U.S. National Academy of Sciences issued a report evaluating the scientific research on memory and eyewitness identification. The Committee, composed of researchers (statisticians, psychologists, sociologists) and judicial system personnel (judges, attorneys), reviewed published research on the factors that influence the accuracy and consistency of eyewitnesses' identifications, conducted via laboratory and field studies. I will describe the research on memory and recollection, shortcomings in the statistical methods used in evaluating laboratory studies, and the Committee's recommendations for better statistical evaluation, standardization of procedures, and informing judicial personnel of factors that can negatively impact the accuracy of eyewitness testimony in the courtroom.
FOSW01 2nd September 2016
15:30 to 16:30
David Spiegelhalter Communicating likelihood ratios
One method for communicating likelihood ratios is to use words to express ranges of values. I shall look at the development of these recommendations, and make comparisons with other areas in which similar proposals have been made for communicating probabilities or risks using words, such as climate change and drug safety, and how these have been interpreted by audiences.
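By way of illustration only — the thresholds and phrases below are a hypothetical scale constructed for this sketch, not any body's official recommendation — a mapping from likelihood ratio ranges to words might look like this:

```python
# Illustrative only: these cut-offs and phrases are invented, though
# broadly similar in shape to published verbal scales.
VERBAL_SCALE = [
    (1, "no support"),
    (10, "weak support"),
    (100, "moderate support"),
    (1000, "moderately strong support"),
    (10000, "strong support"),
]

def verbal_label(lr):
    """Map a likelihood ratio to a verbal expression of support."""
    label = "very strong support"
    for upper, phrase in VERBAL_SCALE:
        if lr <= upper:
            label = phrase
            break
    return label

for lr in (0.8, 5, 250, 2_000_000):
    print(lr, "->", verbal_label(lr))
```

How audiences actually interpret such words — the subject of the talk — is a separate empirical question from how the cut-offs are chosen.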
FOS 21st September 2016
09:30 to 12:00
Geoffrey Stewart Morrison An introduction to Forensic Voice Comparison
FOSW02 26th September 2016
10:15 to 11:00
Philip Dawid An Introduction to Bayesian Networks
I will outline some basic theory of Bayesian Networks, with forensic applications.  Topics will include qualitative and quantitative representation,  object-oriented networks, and (time permitting) causal diagrams.
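As a minimal illustration of the kind of computation such networks perform (a sketch, not drawn from the talk itself; all probability values are invented), the simplest forensic network has two nodes, H → E, and can be evaluated by direct enumeration:

```python
# A minimal two-node Bayesian network, H -> E, evaluated by enumeration.
# All numbers are invented for illustration.
p_h = 0.01               # prior: the suspect is the source
p_e_given_h = 0.99       # P(match | source)
p_e_given_not_h = 0.001  # P(match | not source), a random-match probability

# Joint probabilities for the observed evidence E = "match"
joint_h = p_h * p_e_given_h
joint_not_h = (1 - p_h) * p_e_given_not_h

posterior_h = joint_h / (joint_h + joint_not_h)
print(f"P(source | match) = {posterior_h:.3f}")
```

Larger networks replace this hand enumeration with general-purpose propagation algorithms, but the underlying calculation is the same.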
FOSW02 26th September 2016
11:30 to 12:15
Henry Prakken On how expert witnesses can give useful Bayesian analyses of complex criminal cases
In this talk I will discuss how expert witnesses in criminal trials can give useful Bayesian analyses of complex criminal cases. I will discuss several questions that have to be answered to make Bayesian networks useful in this context and what kinds of expertise are required to answer these questions. The discussions will be partly based on my recent experiences as an expert witness in a murder trial and a serial arson case.
FOSW02 26th September 2016
13:30 to 14:15
Julia Mortera Paternity testing and other inference about relationships from DNA mixtures
DNA is now routinely used in criminal and civil investigations. DNA samples are of varying quality and therefore present challenging problems for their interpretation. We present a statistical model for the quantitative peak information obtained from an electropherogram (EPG) of a forensic DNA sample and illustrate its potential use for the analysis of civil and criminal cases. In contrast to most previously used methods, we directly model the peak height information and incorporate important artefacts associated with the production of the EPG. The model has a number of unknown parameters that can be estimated in the presence of multiple unknown contributors; the computations exploit a Bayesian network representation of the model.

We illustrate real casework examples from a criminal case and a disputed paternity case, where in both cases part of the evidence was from a DNA mixture.   We present methods for inference about the relationships between contributors to a DNA mixture of unknown genotype and other individuals of known genotype: a basic example would be testing whether a contributor to a mixture is the father of a child of known genotype (or indeed the similar question with the roles of parent and child reversed). Following commonly accepted practice, the evidence for such a relationship is presented as the likelihood ratio for the specified relationship versus the alternative that there is no such relationship, so the father is taken to be a random member of the population. Our methods are based on the statistical model for DNA mixtures, in which a Bayesian network is used as a computational device for efficiently computing likelihoods; the present work builds on that approach, but makes more explicit use of the BN in the modelling.

Based on joint work with Peter Green.
FOSW02 26th September 2016
14:15 to 15:00
Paolo Garbolino Bayes Nets for the evaluation of testimony
In his book The Evidential Foundations of Probabilistic Reasoning, David Schum gave an analysis of testimony that can be formalized by a Bayes net whose nodes represent the hypothesis of interest, the basic attributes of the credibility of human witnesses (observational accuracy, objectivity and veracity), and the ancillary evidence bearing upon those attributes. It will be shown how, given some plausible hypotheses, the net can provide a new answer to a classical riddle in the literature on evidential probabilistic reasoning, the so-called “blue bus problem”.
FOSW02 26th September 2016
15:30 to 16:15
Giulio D'Agostini Basic probabilistic issues in the Sciences and in Forensics (hopefully) clarified by a Toy Experiment modelled by a Bayesian Network
FOSW02 26th September 2016
16:15 to 17:00
Peter Vergeer A Bayesian network for glass evidence evaluation at activity level: a novel approach to model the background distribution
In burglary cases, the comparison of glass particles found on a suspect's clothing with a broken reference glass pane is of importance. Suspects are often known as multiple offenders and may have a large collection of glass on their clothing. Therefore, in order to evaluate the strength of evidence, current likelihood ratio formulas contain parameters such as the number of groups of glass found on a piece of clothing and the size of the matching group [1]. In order to obtain probabilities for these parameters, glass particles found on the clothing of suspects have been counted and grouped, see e.g. [2]. In general, the number of glass particles found on a suspect is limited in these studies.

A database of glass from suspects in the Netherlands shows quite different results: up to a few hundred glass particles are often encountered, and only samples may be analyzed. Evaluating the evidential strength of a sample of particles against a background model based on casework samples therefore requires a different background model. We propose to model the background distribution of the sample by a ‘Chinese restaurant process’ [3].

[1] Forensic Interpretation of Glass Evidence, CRC Press. (2000). https://www.crcpress.com/Forensic-Interpretation-of-Glass-Evidence/Curran-Hicks-Champod-Buckleton/97... (accessed April 19, 2016).

[2] J.A. Lambert, M.J. Satterthwaite, P.H. Harrison, A survey of glass fragments recovered from clothing of persons suspected of involvement in crime, Sci. Justice. 35 (1995) 273–281. doi:10.1016/S1355-0306(95)72681-8.
[3] D.J. Aldous, Exchangeability and related topics, in: P.L. Hennequin (Ed.), Éc. DÉté Probab. St.-Flour XIII — 1983, Springer Berlin Heidelberg, 1985: pp. 1–198. http://link.springer.com/chapter/10.1007/BFb0099421 (accessed September 2, 2016).
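As a rough illustration of the proposed background model (a sketch only, not the authors' implementation), a Chinese restaurant process partitions items — here, glass particles into groups — by letting each new item join an existing group with probability proportional to its size, or found a new group with probability governed by a concentration parameter:

```python
import random

def chinese_restaurant_process(n, alpha, seed=0):
    """Sample a partition of n items (e.g. glass particles into source
    groups) from a Chinese restaurant process with concentration alpha."""
    rng = random.Random(seed)
    group_sizes = []
    for i in range(n):
        # Item i joins existing group k with probability size_k / (i + alpha),
        # or founds a new group with probability alpha / (i + alpha).
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, size in enumerate(group_sizes):
            acc += size
            if r < acc:
                group_sizes[k] += 1
                break
        else:
            group_sizes.append(1)
    return group_sizes

groups = chinese_restaurant_process(200, alpha=3.0)
print("number of groups:", len(groups), "largest group:", max(groups))
```

The "rich get richer" behaviour — a few large groups plus a tail of small ones — is what makes the process a plausible prior for clothing carrying hundreds of particles from a handful of sources.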
FOSW02 27th September 2016
09:30 to 10:15
Charles Berger On activity level propositions addressing the actor or activity
Bas Kokshoorn, Bart Blankers, Jacob de Zoete, Charles Berger

More often than not, the source of DNA traces found at a crime scene is not disputed, but the activity or timing of events that resulted in their transfer is. As a consequence, practitioners are increasingly asked to assign a value to DNA evidence given propositions about activities provided by prosecution and defense counsel. Given that the dispute concerns the nature of the activity that took place or the identity of the actor that carried out the activity, several factors will determine how to formulate the propositions. Determining factors are (1) whether defense claims the crime never took place, (2) whether defense claims someone other than the accused (either an unknown individual or a known person) performed the criminal activity, and (3) whether it is claimed and disputed that the suspect performed an alternative, legitimate activity or has a relation to the victim, the object, or the scene of crime that implies a legitimate interaction. Addressing such propositions using Bayesian networks, we demonstrate the effects of the various proposition sets on the evaluation of the evidence.
FOSW02 27th September 2016
10:15 to 11:00
Norman Fenton Bayesian networks: challenges and opportunities in the law
FOSW02 27th September 2016
11:30 to 12:15
Anjali Mazumder Using Chain Event Graphs to Address Asymmetric Evidence in Legal Reasoning
Co-author: James Q. Smith (University of Warwick)

Bayesian networks (BNs), a class of probabilistic graphical models, have been useful in providing a graphical representation of a problem, calculating marginal and conditional probabilities of interest, and making inferences particularly addressing propositions about the source or an evidential-sample. To address propositions relating to activities, there is a need to account for different plausible explanations of a suspect/perpetrator’s actions and events as it relates to the evidence. We propose the use of another class of graphical models, chain event graphs (CEGs), exploiting event tree structures to depict the unfolding events as postulated by each side (defence and prosecution) and differing explanations/scenarios. Different explanations/scenarios can introduce different sets of relevant information affecting the dependence relationship between variables and symmetry of the structure. With the use of case examples involving transfer and persistence and different evidence types (but in which DNA provides a sub-source level of attribution), we further show how CEGs can assist in the careful pairing and development of propositions and analysis of the evidence by addressing uncertainty and the asymmetric unfolding of the events to better assist the courts.
FOSW02 27th September 2016
13:30 to 14:15
Barbaros Yet A Framework to Present Bayesian Networks to Domain Experts and Potential Users
Knowledge and assumptions behind most Bayesian network (BN) models are often not clear to anyone other than their developers. This limits their use as decision support tools in clinical and legal domains, where the outcomes of decisions can be critical. We propose a framework for representing knowledge supporting or conflicting with a BN, as well as knowledge associated with factors that are relevant but excluded from the BN. The aim of this framework is to enable domain experts and potential users to browse, review, criticise and modify a BN model without having deep technical knowledge of BNs.

Co-authors: Zane Perkins (Queen Mary University of London), Nigel Tai (The Royal London Hospital), William Marsh (Queen Mary University of London)

FOSW02 27th September 2016
14:15 to 15:00
Hana Chockler Causality and Responsibility in Formal Verification and Beyond
In this talk, I will (briefly) introduce the theory of actual causality as defined by Halpern and Pearl. This theory turns out to be extremely useful in various areas of computer science (and also, perhaps surprisingly, psychology) due to a good match between the results it produces and our intuition. I will outline the evolution of the definitions of actual causality and intuitive reasons for the many parameters in the definition using examples from formal verification. I will also introduce the definitions of responsibility and blame, which quantify the definition of causality.

We will look in more detail at the applications of causality to formal verification, namely, explanation of counter-examples, refinement of coverage metrics, and symbolic trajectory evaluation. It is interesting to note that explanation of counter-examples using the definition of actual causality is implemented in an industrial tool and produces results that are usually consistent with the users’ intuition, hence it is a popular and widely used feature of the tool.

Finally, I will briefly discuss recent applications of causality to legal reasoning and to understanding of political phenomena, and will conclude with outlining promising future directions.

The talk is based on many papers written by many people, and is not limited to my own research. The talk is reasonably self-contained.
FOSW02 27th September 2016
15:30 to 16:15
Richard Overill Using Bayesian Networks to Quantify Digital Forensic Evidence and Hypotheses
In what appears to be an increasingly litigious age, courts, legal officials and law enforcement officers in a number of adversarial legal jurisdictions are starting to look for quantitative indications of (i) the probative value (or weight) of individual items of digital evidence connected with a case; and (ii) the relative plausibility of competing hypotheses (or narratives) purporting to explain how the recovered items of digital evidence (traces) were created.

In this presentation, we review the contributions that Bayesian networks can make to the understanding, analysis and evaluation of crimes whose primary items of evidence are digital in nature, and show how, as a consequence, they may fulfill both of the above desiderata.
FOSW02 27th September 2016
16:15 to 17:00
Eyal Zamir The Anti-Inference Bias and Circumstantial Evidence
My presentation will be based on two studies: Seeing is Believing: The Anti-Inference Bias, co-authored with Ilana Ritov and Doron Teichman (see here), and New Evidence about Circumstantial Evidence, co-authored with Elisha Harlev and Ilana Ritov (see here).

Judicial fact-finders are commonly instructed to determine the reliability and weight of any evidence, be it direct or circumstantial, without prejudice to the latter. Nonetheless, studies have shown that people are reluctant to impose liability based on circumstantial evidence alone, even when this evidence is more reliable than direct evidence. Proposed explanations for this reluctance have focused on factors such as the statistical nature of some circumstantial evidence, the tendency of fact-finders to assign low subjective probabilities to circumstantial evidence, and the fact that direct evidence can rule out with greater ease any competing factual theory regarding liability.

In the first article, we demonstrated experimentally that even when these factors are controlled for, the disinclination to impose liability based on non-direct evidence remains. For instance, people are much more willing to convict a driver of a speeding violation on the basis of a speed camera than on the basis of two cameras documenting the exact time a car passes by them — from which the driver’s speed in the pertinent section of the road is inferred. While these findings do not necessarily refute the previous theories, they indicate that they are incomplete. The new findings point to the existence of a deep-seated bias against basing liability on inferences — an anti-inference bias.

The second article describes seven new experiments that explore the scope and resilience of the anti-inference bias. It shows that this bias is significantly reduced when legal decision-makers confer benefits, rather than impose liability. We thus point to a new legal implication of the psychological phenomenon of loss aversion. In contrast, we find no support for the hypothesis that the reluctance to impose legal liability on the basis of circumstantial evidence correlates with the severity of the legal sanctions. Finally, the article demonstrates the robustness of the anti-inference bias and its resilience to simple debiasing techniques.

Taken together, the studies show that the anti-inference bias reflects primarily normative intuitions, rather than merely epistemological ones, and that it reflects conscious intuitions, rather than wholly unconscious ones. The articles discuss the policy implications of the new findings for procedural and substantive legal norms, including the limited potential (and questionable desirability) of debiasing techniques, the role of legal presumptions, and the advantages of redefining offenses in a way that obviates the need for inferences.

FOSW02 28th September 2016
09:30 to 10:15
Bart Verheij Three Normative Frameworks for Evidential Reasoning and their Connections: Arguments, Scenarios and Probabilities
Artificial intelligence research on reasoning with criminal evidence in terms of arguments, hypothetical scenarios, and probabilities has shown that Bayesian networks can be used for modeling arguments and structured hypotheses. Well-known issues with Bayesian networks were also encountered: more numbers are needed than are available, and there is a risk of misinterpreting the graph underlying the Bayesian network, for instance as a causal model. The formalism presented models arguments and scenarios in terms of models that have a probabilistic interpretation, but do not represent a full distribution over the variables.
FOSW02 28th September 2016
10:15 to 11:00
Floris Bex From Natural Language to Bayesian Networks (and back again)
Decision makers and analysts often use informal, linguistic concepts when they talk about a case: they tell the story that explains the evidence, or argue against a particular interpretation of the evidence. On the other hand, mathematicians and logicians present formal frameworks to precisely capture and support reasoning about evidence.

In this talk, I will show how different Artificial Intelligence techniques can be used to close the gap between these two extremes - messy, informal natural language and specific, well-defined formalisms such as Bayesian Networks.

FOSW02 28th September 2016
11:30 to 12:15
Martin Neil Modelling competing Legal Arguments using Bayesian Model Comparison and Averaging
Bayesian models of legal arguments generally aim to produce a single integrated model, combining each of the legal arguments under consideration. This combined approach implicitly assumes that variables and their relationships can be represented without any contradiction or misalignment, and in a way that makes sense with respect to the competing argument narratives. In contrast to this integrated approach, non-Bayesian approaches to legal argumentation have tended to be narrative based and have focused on comparisons between competing stories and explanations. This paper describes a novel approach to compare and ‘average’ Bayesian models of legal arguments that have been built independently and with no attempt to make them consistent in terms of variables, causal assumptions or parameterization. The approach is consistent with subjectivist Bayesian philosophy. Practically, competing models of legal arguments are assessed by the extent to which the credibility of the sources of evidence is confirmed or disconfirmed in court. Those models that are more heavily disconfirmed are assigned lower weights, as model confidence measures, in the Bayesian model comparison and averaging approach adopted. In this way a plurality of arguments is allowed, yet a single judgement based on all arguments remains possible and rational.

Authors: Prof. Martin Neil (Queen Mary, University of London), Prof. Norman Fenton (Queen Mary, University of London), Prof David Lagnado (UCL), and Prof Richard Gill
(Leiden University)
FOSW02 28th September 2016
13:30 to 14:15
Jacob de Zoete Generating Bayesian networks in Forensic Science: an example from crime linkage
Co-author: Marjan Sjerps (Netherlands Forensic Institute)

The likelihood ratio framework for evaluating evidence is becoming more common in forensic practice. As a result, interest in Bayesian networks as a tool for analysing cases and performing computations has increased. However, constructing a Bayesian network from scratch for every situation one encounters is too costly. Therefore, several researchers have proposed Bayesian networks that correspond to frequently occurring problems [1,2]. These ‘building blocks’ allow the user to concentrate only on the conditional probabilities that fit their particular situation. This results in a more efficient workflow: the effort of constructing the Bayesian network is taken away, and it is no longer necessary for the user to be experienced in constructing Bayesian networks. However, when the problem does not follow the exact assumptions of the building block, the Bayesian network can only serve as a starting point when constructing a model that does.

In some situations it is clear how one should model a certain problem, regardless of the case-specific details. For example, a Bayesian network for a source-level hypothesis pair where the evidence consists of a DNA profile has the same structure for any number of loci; each locus can be added as a node together with its corresponding drop-out/drop-in probabilities. For these types of problems, one can take away the effort of constructing the network, which facilitates the practical application of Bayesian networks in forensic casework.

We will show an example of ‘generating Bayesian networks’ for a problem from crime linkage. In [3] a structure for modeling crime linkage with Bayesian networks is introduced. This structure is implemented in R, which allows the user to insert the parameters corresponding to their situation (e.g. the number of crimes or the number of different types of evidence). Subsequently, this network can be used to obtain posterior probabilities or likelihood ratios. We will show how this is useful in casework.
FOSW02 28th September 2016
14:15 to 15:00
Marjan Sjerps DNA myth busting with Bayesian networks

co-authors J de Koeijer, J de Zoete and B Kokshoorn

The Netherlands Forensic Institute is currently exploring the use of Bayesian networks in their forensic casework. We identified a number of different ways that networks can be used, e.g., as a probability calculator, as an exploratory tool for complex problems and as a reasoning tool. In this presentation we focus on the latter and discuss a recent case in which Bayesian networks were used in this way. The case concerned a series of six different robberies in which several DNA matches with a suspect were found on “movable objects” in each case. We were asked to assess the evidential value of the combined DNA evidence. Bayesian networks proved to be very valuable to assist in our reasoning and in busting a few important DNA myths that are common in the legal domain. We chose not to report the networks themselves but only the reasoning, and explain why.
FOSW02 28th September 2016
15:30 to 16:15
Maria Cuellar Shaken Baby Syndrome on Trial: Causal Problems and Sources of Bias
Over 1,100 individuals are in prison today on charges related to the diagnosis of Shaken Baby Syndrome (SBS). In recent years this diagnosis has come under scrutiny, and more than 20 convictions made on the basis of SBS have been overturned. The overturned convictions have fueled a controversy about alleged cases of SBS. In this talk, I will review the arguments made by the prosecution and defense in cases related to SBS and point out two problems: much of the evidence used has contextual bias, and the expert witnesses and attorneys ask the wrong causal questions. To resolve the problem of asking the wrong causal questions, I suggest that a Causes of Effects framework be used in formulating the causal questions and answers given by attorneys and expert witnesses. To resolve the problem of bias, I suggest that only the task-relevant information be provided to the individual who determines the diagnosis. I also suggest that in order for this to be possible, there must be a change in the definition of SBS so it does not include the manner in which the injuries were caused. I close with recommendations to researchers in statistics and the law about how to use scientific results in court.
FOSW02 28th September 2016
16:15 to 17:00
Bart Verheij, Henry Prakken Report on a pre-workshop on analysing a murder case
Floris Bex, Anne Ruth Mackor, Henry Prakken, Christian Dahlman, Martin Neil, Bart Verheij

We will discuss the results of a pre-workshop held at the INI on September 23-24. During this workshop a Dutch murder case was analyzed from several theoretical perspectives, including Bayesian, argumentative and narrative approaches.
FOSW02 29th September 2016
09:30 to 10:15
Amanda Luby A Graphical Model Approach to Eyewitness Identification Data
Although eyewitness identification is generally regarded by cognitive psychologists and other experts as relatively inaccurate, testimony from eyewitnesses continues to be prolific in the court system today. There is great interest among psychologists and the criminal justice system in reforming eyewitness identification procedures to make the outcomes as accurate as possible. There has been a recent push to adopt Receiver Operating Characteristic (ROC) curve methodology to analyze lineup procedures, but it has not been universally accepted in the field. This work addresses some of the shortcomings of the ROC approach and proposes an analytical approach based on log-linear models as an alternative method for evaluating lineup procedures. Unlike approaches that emphasize correct and incorrect identifications and rejections, our log-linear model approach can distinguish among all possible outcomes and allows for a more complete understanding of the variables at work during a lineup task. Due to the high-dimensional nature of the resulting model, representing the results through a dependence graph leads to a deeper understanding of the conditional dependencies and causal relationships between the variables involved. We believe that graphical models have been under-utilized in the field, and demonstrate their utility not only for broader statistical insights but as an intuitive way to communicate complex relationships between variables to practitioners. We find that log-linear models can incorporate more information than previous approaches, and provide the flexibility needed for data of this nature.
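The fitting machinery behind log-linear models can be sketched with iterative proportional fitting (IPF); the 2×2 table below, including its row and column labels and counts, is invented for illustration and is far simpler than the lineup data discussed in the talk:

```python
# Iterative proportional fitting (IPF) for the independence log-linear
# model on a 2x2 contingency table; all counts are invented.

# Rows: lineup outcome (identification, rejection);
# columns: culprit present / culprit absent.
observed = [[43, 12],
            [17, 28]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Under independence the fitted counts have a closed form (row * col / n),
# but IPF generalises directly to higher-dimensional log-linear models.
fitted = [[n / 4.0] * 2 for _ in range(2)]
for _ in range(50):
    # Rescale rows to match the observed row margins...
    for i in range(2):
        s = sum(fitted[i])
        fitted[i] = [x * row_totals[i] / s for x in fitted[i]]
    # ...then rescale columns to match the observed column margins.
    for j in range(2):
        s = fitted[0][j] + fitted[1][j]
        for i in range(2):
            fitted[i][j] *= col_totals[j] / s

print([[round(x, 2) for x in row] for row in fitted])
```

Comparing fitted to observed counts is what lets such models test, outcome by outcome, which dependencies the lineup data actually support.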
FOSW02 29th September 2016
10:15 to 11:00
Ulrike Hahn Bayesian Argumentation in the Real World
The talk provides a brief introduction to Bayesian Argumentation, including work on fallacies of argumentation (both inside and outside the law) and then focusses on the scope for Bayesian Argumentation and the normative considerations it provides in actual real world contexts, such as legal decisions.
FOSW02 29th September 2016
11:30 to 12:15
David Lagnado Causal networks in evidential reasoning
How do people reason in the face of complex and contradictory evidence? Focusing on investigative and legal contexts, we present an idiom-based approach to evidential reasoning, in which people combine and reuse causal schemas to capture large bodies of interrelated evidence and hypotheses. We examine both the normative and descriptive status of this framework, illustrating with real legal cases and empirical studies. We also argue that it is qualitative causal reasoning, rather than fully Bayesian computation, that lies at the heart of human evidential reasoning.
FOSW02 29th September 2016
13:30 to 14:15
Christian Dahlman Prior Probability and the Presumption of Innocence
My talk will address a problem of fundamental importance for the Bayesian approach to evidence assessment in criminal cases. How shall a court, operating under the presumption of innocence, determine the prior probability that the defendant is guilty, before the evidence has been presented? I will examine some ways to approach this problem, and review different solutions.

The considerations that determine the prior probability can be epistemic or normative. If they are purely epistemic, the fact that the defendant has been selected for prosecution must be considered as evidence for guilt, and this violates the presumption of innocence (Dawid 1993, 12). The prior probability must therefore be determined completely or partly on normative grounds. It has been suggested by Dennis Lindley and others that the prior probability shall be determined as 1/N, where N is the number of people who could have committed the act that the defendant is accused of (Lindley 1977, 218; Dawid 1993, 11; Bender & Nack 1995, 236), but there are several objections to this solution. As Leonard Jaffee has pointed out, the prior probability will not be equal in all criminal trials, as N will vary from case to case (Jaffee 1988, 978). This is problematic since the doctrine of fair trial requires that defendants are treated equally. Furthermore, the court will not have sufficient knowledge about all possible scenarios to determine N with the robustness required by the standard of proof (Dahlman, Wahlberg & Sarwar 2015, 19).

My suggested solution is that the prior probability should be determined entirely on normative grounds, by assigning a standardized number to N, for example N = 100. If the number of people who could have committed the crime is always presumed to be 100, the probability that the defendant is guilty before the evidence has been presented will be 1% in all trials. According to this solution, the prior probability is an institutional fact (Searle 1995, 104).
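Numerically, the standardized-N proposal can be sketched as follows (the likelihood ratio used for the update is an invented figure, included only to show the mechanics of Bayes' rule in odds form):

```python
# A standardized prior with N = 100, as in the proposal above, updated by
# a likelihood ratio via Bayes' rule in odds form. The LR is invented.
N = 100
prior_prob = 1 / N                     # 1% before any evidence, in all trials
prior_odds = prior_prob / (1 - prior_prob)

likelihood_ratio = 1000                # strength of the combined evidence
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"prior = {prior_prob:.2%}, posterior = {posterior_prob:.2%}")
```

Because the prior is fixed by convention, the posterior in every trial is driven entirely by the evidence, which is the intended institutional effect.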
FOSW02 29th September 2016
14:15 to 15:00
Mehera San Roque Admissibility and evaluation of identification evidence: 'experts', bystanders and shapeshifters
FOSW02 29th September 2016
15:30 to 16:15
Simon De Smet tba
FOSW02 29th September 2016
16:15 to 17:00
William Thompson Using Bayesian Networks to Analyze What Experts Need to Know (and When they Know Too Much)
What is the proper evidentiary basis for an expert opinion? When do procedures designed to reduce bias (e.g., blinding, sequential unmasking) hide too much from the expert; when do they hide too little? When will an expert’s opinion be enhanced, and when might it be degraded or biased, by the expert’s consideration of contextual information? Questions like this are important in a variety of domains in which decision makers rely on expert analysis or opinion. This presentation will discuss the use of conditional probabilities and Bayesian networks to analyze these questions, providing examples from forensic science, security analysis, and clinical medicine. It will include discussion of the recommendations of the U.S. National Commission on Forensic Science on determining the “task-relevance” of information needed for forensic science assessments.
FOS 6th October 2016
14:00 to 16:00
Frans Alkemade Bayes and the Blame Game: What I learned from presenting Bayesian reasoning in criminal cases
FOS 13th October 2016
11:00 to 12:30
Therese Graversen Statistical interpretation of forensic DNA mixtures
FOS 3rd November 2016
11:00 to 12:00
Sandy Zabell German Mathematicians and Cryptology during WWII
The exploits of British mathematicians such as Alan Turing during the 1939-1945 cryptologic war are well-known.  But what about the other side?  Germany was after all (at least before the Nazis) the pre-eminent country for mathematics.

The answer turns out to be surprising:  a large number of German mathematicians who had distinguished careers after the war (including three future Presidents of the DMV, the German Mathematical Society) worked in signals intelligence during WWII;  and the Germans had many successes in this area.

Why then did they have such a meltdown on the defensive side of communications security, most of their methods of encryption being broken by the Allies?  Here too, surprisingly, there is a very simple answer.
FOSW03 7th November 2016
14:00 to 15:00
Gerd Gigerenzer Risk Literacy: How to Make Sense of Statistical Evidence
We are taught reading and writing, but rarely statistical thinking. Law schools and medical schools have not yet made sufficient efforts to teach their students how to understand and communicate statistical evidence. The result is collective risk illiteracy: many judges, lawyers, doctors, journalists and politicians do not understand statistical evidence and unknowingly draw wrong conclusions. Focusing on legal and medical evidence, I will discuss common errors in evaluating evidence and efficient tools that help professionals to overcome them. Risk literacy, together with efficient techniques for communicating statistical information, is a necessary precondition for meeting the challenges of modern technological society.
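One communication technique associated with this line of work is translating conditional probabilities into natural frequencies. A minimal sketch, with all numbers hypothetical, of why a positive test result can be far less conclusive than it sounds:

```python
# Hypothetical numbers for a screening-style test, presented as natural
# frequencies. Out of 1000 people, 10 have the condition (1% prevalence);
# the test detects 9 of those 10 (90% sensitivity) and falsely flags
# about 89 of the 990 without it (9% false positive rate).
n = 1000
prevalence = 0.01
sensitivity = 0.90
false_positive_rate = 0.09

with_condition = prevalence * n                               # 10 people
true_positives = sensitivity * with_condition                 # 9 people
false_positives = false_positive_rate * (n - with_condition)  # ~89 people

# Probability that a person with a positive result has the condition:
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 3))  # about 0.09 -- far lower than many professionals guess
```

Counting people instead of manipulating conditional probabilities makes the base-rate effect visible at a glance.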
FOSW03 7th November 2016
15:30 to 16:15
Geoffrey Stewart Morrison What should a forensic scientist's likelihood ratio be?
How should a forensic scientist arrive at a value for the strength of evidence statement that they present to the court? A number of different answers have been proposed.

One proposal is to assign probabilities based on experience and subjective judgement. This appears to be advocated in the Association of Forensic Science Providers (AFSP) 2009 standards, and the 2015 European Network of Forensic Science Institutes (ENFSI) guideline on evaluative reporting. But the warrant for such subjective judgements has been questioned. The 1993 US Supreme Court Daubert ruling and the 2016 report by the President’s Council of Advisors on Science and Technology (PCAST) argue strongly that subjective judgment is not enough, that empirical validation is needed.

If a forensic likelihood ratio is to be based on subjective judgement, it has been proposed that the judgement be empirically calibrated.

The PCAST report proposes a procedure which results in a dichotomous likelihood ratio. The practitioner applies a threshold and declares “match” or “non-match”. If a “match” is declared, the empirically derived correct acceptance rate and false acceptance rate are also provided (dividing the former by the latter would produce a likelihood ratio). Mutatis mutandis if a “non-match” is declared. This has been criticised for discarding information and thus resulting in poor performance.
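The division described above can be sketched directly (both rates are hypothetical):

```python
# Under the dichotomous procedure, a declared "match" is accompanied by an
# empirically derived correct acceptance rate and false acceptance rate;
# their ratio is a likelihood ratio.
correct_acceptance_rate = 0.95   # P(declare "match" | same source)
false_acceptance_rate = 0.01     # P(declare "match" | different source)

lr_match = correct_acceptance_rate / false_acceptance_rate
print(round(lr_match, 1))        # 95.0

# Mutatis mutandis for a declared "non-match":
lr_non_match = (1 - correct_acceptance_rate) / (1 - false_acceptance_rate)
print(round(lr_non_match, 4))    # about 0.05
```

Whatever the underlying score was, every declared "match" receives the same value, which is the information loss the criticism points at.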

The AFSP standards and ENFSI guideline propose the use of ordinal scales – each level on the scale covers a pre-specified range of likelihood ratio values, and has an associated verbal expression. These have been criticised on a number of grounds, including for having arbitrary ranges, for suffering from cliff-edge effects, and for verbal expressions being vague – they will be interpreted differently by different individuals, and differently by the same individual in different contexts.

It has also been proposed that numeric likelihood ratios be calculated on the basis of relevant data, quantitative measurements, and statistical models, and that the numeric likelihood ratio output of the statistical model be directly reported as the strength of evidence statement. Such an approach is transparent and replicable, and, relative to procedures based primarily on subjective judgement, it is easier to empirically calibrate and validate under conditions reflecting those of the case under investigation, and it is more resistant to cognitive bias.

Score-based procedures first calculate a score which quantifies the degree of similarity (or difference) between pairs of objects, then apply a subsequent model which converts scores to likelihood ratios (the second stage can be considered an empirical calibration stage). Scores which only take account of similarity (or difference), however, do not account for typicality with respect to the relevant population for the case, and this cannot be corrected at the score-to-likelihood-ratio conversion stage. If a score-based procedure is used, the scores should take account of both similarity and typicality.
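A minimal sketch of the two-stage score-based procedure described above, with idealised normal score distributions standing in for calibration data (all parameters hypothetical):

```python
from statistics import NormalDist

# Stage 1 produces a similarity score; stage 2 converts the score to an LR
# using the score distributions under the same-source and different-source
# propositions, here idealised as normal densities.
same_source = NormalDist(mu=8.0, sigma=1.5)   # scores for same-source pairs
diff_source = NormalDist(mu=3.0, sigma=2.0)   # scores for different-source pairs

def score_to_lr(score):
    return same_source.pdf(score) / diff_source.pdf(score)

# A high similarity score supports the same-source proposition, but, as the
# text notes, a purely similarity-based score ignores typicality: the same
# LR is reported whether or not the shared features are common in the
# relevant population.
print(round(score_to_lr(7.0), 2))  # about 7.9
```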

Numeric likelihood ratios can be calculated in a frequentist manner or a subjectivist Bayesian manner. Philosophically the former is an estimate of a true but unknown value, and the latter is a state of belief, a personal probability. A frequentist will assess the precision of their estimate, whereas a subjectivist Bayesian will have attempted to account for all sources of uncertainty in the assignment of the value of their likelihood ratio (a Bayes factor). The merits of the two approaches are hotly debated (including currently in a virtual special issue in Science & Justice), but if presented with a frequentist point estimate plus degree of precision the trier of fact may decide to use a likelihood ratio closer to 1 than the point estimate (the deviation depending on the degree of precision), and (depending on the prior used) the value of the Bayes factor will be closer to 1 than a frequentist point estimate of a likelihood ratio. Can these be considered to have the same practical result? Which would be preferred by the courts? Can Bayesian procedures with empirical or reference priors be adopted without having to buy in to the subjectivist philosophy? What should a forensic scientist’s likelihood ratio be?

FOSW03 7th November 2016
16:15 to 17:00
Daniel Ramos Measuring Performance of Likelihood-Ratio-Based Evidence Evaluation Methods
The value of the evidence in forensic science is increasingly expressed by a Likelihood Ratio (LR), following a Bayesian framework. The LR aims at providing the information given by the evidence to the decision process in a trial. Although theoretical aspects of statistical models are essential to compute the LR, in real forensic situations there exist many other factors (including e.g. data sparsity, data variability and dataset shift) that might degrade the performance of the LR. This means that the computed LR values might be misleading, ultimately causing a loss in the accuracy of the decisions made by the fact finder. Therefore, it is essential to measure the performance of LR methods in forensic situations, with the further objective of validating LR methods for their use in casework. In this talk, we will present several popular performance measures for LR values. We will provide examples where these measures are used to compare different methods in the context of trace evidence. Finally, we will present a recently proposed guideline for the validation of LR methods in forensic science, which relies upon the use of performance measures of LR methods.
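One popular measure in this literature is the log-likelihood-ratio cost (Cllr), computed from LRs obtained on validation data with known ground truth. A minimal sketch (the LR values are hypothetical):

```python
import math

# Cllr penalises misleading LR values: same-source LRs should be large,
# different-source LRs small. A system that always reports LR = 1
# ("no information") scores Cllr = 1; lower values are better.
def cllr(lrs_same_source, lrs_diff_source):
    penalty_ss = sum(math.log2(1 + 1 / lr) for lr in lrs_same_source)
    penalty_ds = sum(math.log2(1 + lr) for lr in lrs_diff_source)
    return 0.5 * (penalty_ss / len(lrs_same_source)
                  + penalty_ds / len(lrs_diff_source))

print(cllr([1.0], [1.0]))  # 1.0: the uninformative baseline
# Hypothetical validation LRs pointing the right way score well below 1:
print(round(cllr([100.0, 20.0, 8.0], [0.05, 0.2, 0.01]), 2))  # about 0.1
```

Because the penalty grows with how strongly an LR points the wrong way, Cllr captures calibration as well as discrimination.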
FOSW03 8th November 2016
09:30 to 10:15
Gabriel Vivó-Truyols Interpreting (chemical) forensic evidence in a Bayesian framework: a multidisciplinary task
Co-authors: Marjan Sjerps (University of Amsterdam / Dutch Forensic Institute), Martin Lopatka (University of Amsterdam / Dutch Forensic institute), Michael Woldegebriel (University of Amsterdam), Andrei Barcaru (University of Amsterdam)

The interpretation and evaluation of chemical forensic evidence is a challenging task of a multidisciplinary nature. Interaction between different disciplines (Bayesian statisticians, analytical chemists, signal processors, instrument experts, etc.) is necessary. In this talk I will illustrate different cases of such interaction to evaluate pieces of evidence in a forensic context. The first case is the evaluation of fire debris using two-dimensional gas chromatography. Such a technique analyses fire debris to look for traces of (pyrolysed) hydrocarbons. However, the classification of such hydrocarbons is a difficult task, demanding experts in (analytical) chemistry. Even more difficult is interpreting such evidence in a Bayesian framework. The second case is the application of Bayesian inference in the toxicology laboratory. In this case, a set of targeted compounds is analysed via LC-MS. Instruments normally pre-process the data in a deterministic manner, providing the so-called peak table. We propose an alternative that uses the raw data as evidence, instead of using such a peak table. The third case is the exploration of differences between different analyses, in order to find illegal additives in a complex matrix. In this case, the Jensen-Shannon divergence has been applied in a Bayesian framework to highlight such differences.
FOSW03 8th November 2016
10:15 to 11:00
William Thompson Elicitation of Priors in Bayesian Modeling of DNA Evidence
Bayesian networks have been helpful for analyzing the probative value of complex forms of forensic DNA evidence, but some of these network models require experts to estimate the prior probability of specific events. This talk discusses procedures that might be used for elicitation of priors with an eye toward minimizing bias and error. As an illustration it uses a model proposed by Biedermann, Taroni & Thompson (2011) to deal with situations in which the "inclusion" of the suspect as a possible contributor to a mixed DNA sample depends on the value of an unknown variable. (Biedermann, A., Taroni, F. & Thompson, W.C. Using graphical probability analysis (Bayes nets) to evaluate a conditional DNA inclusion. Law, Probability and Risk, 10: 89-121, 2011).
FOSW03 8th November 2016
11:30 to 12:15
John Aston Inverting Entomological Growth Curves Using Functional Data Analysis
Co-authors: Davide Pigoli (University of Cambridge), Anjali Mazumder (Carnegie Mellon University), Frederic Ferraty (Toulouse Jean Jaures University), Martin Hall (Natural History Museum)

It is not unusual in cases where a body is discovered that it is necessary to determine a time of death or more formally a post mortem interval (PMI). Forensic entomology can be used to estimate this PMI by examining evidence obtained from the body from insect larvae growth. Growth curves however are temperature dependent, and usually direct temperature measurements from the body location are unavailable for the time periods of interest. In this work, we investigate models for PMI estimation, including temperature prediction, based on functional data analysis. We will evaluate the possibilities of using different models, particularly based on ideas from function registration, to try to obtain inferences concerning PMI and indeed whether multiple species data can be incorporated into the model. This can allow even more accurate estimation of PMI.
FOSW03 8th November 2016
12:15 to 13:00
Sue Pope Modelling the best evidence
This talk will consider the benefits for the courts of maximising the amount of information gained for a result before statistical modelling is carried out, using complex DNA results as an example. Some mixtures of DNA obtained from crime and questioned samples are either too complex or too partial to give meaningful likelihood ratios even with the wider range of calculation software now available. One current option of providing the expert’s qualitative opinion in place of a formal likelihood ratio, while sanctioned by the courts, has been controversial. The time and effort spent improving the amount and quality of DNA being analysed before attempting to carry out specialist DNA mixture calculations will be repaid by achieving a more discriminating likelihood ratio. The relevance of the samples to the evidential issues will also be discussed.
FOSW03 8th November 2016
15:30 to 16:15
Norah Rudin Complex DNA profile interpretation: stories from across the pond
A story of samples and statistics: The history of a forensic sample, the history and current state of forensic DNA interpretation and statistics in the U.S.

With the continued increase in the sensitivity of DNA testing systems comes a commensurate increase in the complexity of the profiles generated. Numerous sophisticated statistical tools intended to provide an appropriate weight of evidence for these challenging samples have emerged over the last several years.  While it seems clear that only a likelihood ratio-based probabilistic genotyping approach is appropriate to address the ambiguity inherent in these complex samples, the relative merits of the different approaches are still being investigated.
The first part of this talk will address the generation of DNA samples from a forensic science perspective. Long before the statistical weight of evidence is considered, numerous decision points determine what samples are collected, what samples are tested, how they are tested and what questions are asked of them. It is critical to understand the samples' history and the milieu in which their journey takes place on their way to becoming profiles that require interpretation and statistical assessment. We will then summarize the history of approaches typically used by working analysts in the US, and discuss the current state of the practice. In 2005 and 2013, NIST distributed sets of mixtures to working laboratories and collected their interpretations and statistical weights. In both surveys they found a wide range of variation, both within and between laboratories, in calculating the weight of evidence for the same sample. Most disturbing was the continued use of simplistic tools, such as the CPI/CPE (RMNE), long considered inadequate for specific types of profiles. A number of publications and reports over the last 15 years have commented on the interpretation and statistical weighting of forensic DNA profiles. These include the ISFG commission papers of 2006 and 2012, the NAS 2009 report, the 2010 SWGDAM STR interpretation guidelines, and the 2015 SWGDAM probabilistic genotyping software validation guidelines. Several high-profile criticisms of laboratory protocols (e.g. Washington D.C. and the TX laboratory system) have emerged and fueled debate. Most recently, PCAST published a report commenting on the state of forensic science disciplines in the US, including DNA. An updated draft of the SWGDAM STR interpretation guidelines is currently posted for comment. We will discuss these various publications and commentaries as time permits.
FOSW03 8th November 2016
16:15 to 17:00
Keith Inman Complex DNA profile interpretation: stories from across the pond
A comparison of complex profiles analyzed with different software tools

The second part of this talk will document the creation and evaluation of ground-truth samples of increasing complexity. Variables include: the number of contributors, the amount of allele sharing, variability in template amount, and varying mixture ratios of contributors in mixed samples. These samples will be used to evaluate the efficacy of four open-source implementations of likelihood ratio approaches to estimating the weight of evidence: Lab Retriever (an implementation of likeLTD v2), LRMix Studio, European Forensic Mixture, and likeLTD v6. This work was initiated during the summer of 2016 in conjunction with the special semester at the Newton Institute devoted to Probability and Statistics in Forensic Science, and continues at the time of submission of this abstract. We look forward to presenting results “hot off the press” at this meeting.
FOSW03 9th November 2016
09:30 to 10:15
Roberto Puch-Solis Evaluation of forensic DNA profiles while accounting for one and two repeat less and two repeat more stutters
Co-author: Dr Therese Graversen (University of Copenhagen)

Current forensic DNA profile technology is very sensitive and can produce profiles from a minute amount of DNA, e.g. from one cell. A profile from a stain recovered from a crime scene is represented through an electropherogram (epg), which consists of peaks located in positions corresponding to alleles. Peak heights are related to the originating amount of DNA: the more DNA the sample contains, the taller the peaks are.

An epg also tends to contain artefactual peaks of different kinds. Some of these artefacts originate during PCR amplification and are usually called ‘stutters’. The most predominant of these stutters appears one STR repeat less than the corresponding allele and is about 10% of the height of the allelic peak, although this percentage varies from locus to locus. Given the sensitivity of the DNA systems, other stutters also tend to appear in the epg: one located two STR repeats less and another one STR repeat more than the allelic peak. They tend to be much smaller than their corresponding one-repeat-less stutters.
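The stutter pattern described above can be sketched as a toy expected-peak model. Only the roughly-10% figure comes from the text; the other rates are hypothetical:

```python
# Hypothetical stutter rates, apart from the roughly-10% one-repeat-less
# figure quoted in the abstract.
ONE_REPEAT_LESS_RATE = 0.10   # the predominant stutter
TWO_REPEAT_LESS_RATE = 0.01   # much smaller
ONE_REPEAT_MORE_RATE = 0.01   # much smaller

def expected_epg_peaks(allele, height):
    """Expected peak heights, keyed by allele designation, for a single
    contributor's allelic peak and its associated stutter peaks."""
    return {
        allele: height,
        allele - 1: ONE_REPEAT_LESS_RATE * height,
        allele - 2: TWO_REPEAT_LESS_RATE * height,
        allele + 1: ONE_REPEAT_MORE_RATE * height,
    }

peaks = expected_epg_peaks(allele=16, height=1200.0)
print(peaks[15])  # expected one-repeat-less stutter, about 120
```

In a mixture, a minor contributor's true allelic peaks can sit at roughly this stutter height, which is exactly the ambiguity the talk addresses.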

Many stain profiles from samples taken from a scene of a crime originate from more than one person, each contributing a different amount of DNA. The peaks of minor contributors can be about the same height as the stutters of a major contributor. A stutter could also combine with an allelic peak or with other stutters, making an evaluation more complicated. Caseworkers are also scrutinised in court on their stutter designations.

Graversen & Lauritzen (2015) introduced an efficient method for calculating likelihood ratios using Bayesian networks. In this talk, this method is extended to consider two-repeat-less and one-repeat-more stutters, and the complexities of the extension are discussed.

Reference

Graversen T. & Lauritzen S. (2015). Computational aspects of DNA mixture analysis: exact inference using auxiliary variables in a Bayesian network. Statistics & Computing 25, pp. 527-541.
FOSW03 9th November 2016
10:15 to 11:00
Therese Graversen An exact, efficient, and flexible representation of statistical models for DNA profiles
Many different models have been proposed for a statistical interpretation of mixed DNA profiles. Regardless of the model, a computational bottleneck lies in the necessity to handle the large set of possible combinations of DNA profiles for the contributors to the DNA sample.

I will explain how models can be specified in a very general setup that makes it simple to compute both the likelihood and many other quantities that are crucial to a thorough statistical analysis. Notably all computations in this setup are exact, whilst still efficient.

I have used this setup to implement the statistical software package DNAmixtures.

FOSW03 9th November 2016
11:30 to 12:00
Jacob de Zoete Cell type determination and association with the DNA donor
Co-authors: Wessel Oosterman (University of Amsterdam), Bas Kokshoorn (Netherlands Forensic Institute), Marjan Sjerps (Korteweg-de Vries Institute for Mathematics, University of Amsterdam)

In forensic casework, evidence regarding the type of cell material contained in a stain can be crucial in determining what happened. For example, a DNA match in a sexual offense can become substantially more incriminating when there is evidence supporting that semen cells are present.

Besides the question of which cell types are present in a sample, the question of who donated what (association) is also very relevant. This question is surprisingly difficult, even for stains with a single donor. The evidential value of a DNA profile needs to be combined with knowledge regarding the specificity and sensitivity of cell type tests. This, together with prior probabilities for the different donor-cell type combinations, determines the most likely combination.

We present a Bayesian network that can assist in associating donors and cell types. A literature overview on the sensitivity and specificity of three cell type tests (PSA test for seminal fluid, RSID saliva and RSID semen) is helpful in assigning conditional probabilities. The Bayesian network is linked with a software package for interpreting mixed DNA profiles. This allows for a sensitivity analysis that shows to what extent the conclusion depends on the quantity of available research data. This can aid in making decisions regarding further research.

It is shown that the common assumption that an individual (e.g. the victim) is one of the donors in a mixed DNA profile can have unwanted consequences for the association between donors and cell types.
FOSW03 9th November 2016
12:00 to 12:30
Maarten Kruijver Modeling subpopulations in a forensic DNA database using a latent variable approach
Several problems in forensic genetics require a representative model of a forensic DNA database. Obtaining an accurate representation of the offender database can be difficult, since databases typically contain groups of persons with unregistered ethnic origins in unknown proportions. We propose to estimate the allele frequencies of the subpopulations comprising the offender database and their proportions from the database itself using a latent variable approach. We present a model for which parameters can be estimated using the expectation maximization (EM) algorithm. This approach does not rely on relatively small and possibly unrepresentative population surveys, but is driven by the actual genetic composition of the database only. We fit the model to a snapshot of the Dutch offender database (2014), which contains close to 180,000 profiles, and find that three subpopulations suffice to describe a large fraction of the heterogeneity in the database. We demonstrate the utility and reliability of the approach by using the model to predict the number of false leads obtained in database searches. We assess how well the model predicts the number of false leads obtained in mock searches in the Dutch offender database, both for the case of familial searching for first degree relatives of a donor and searching for contributors to three person mixtures. We also study the degree of partial matching between all pairs of profiles in the Dutch database and compare this to what is predicted using the latent variable approach.
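A minimal sketch of the latent-variable idea, reduced to one biallelic locus and two subpopulations. This is not the authors' model; every detail (Hardy-Weinberg genotypes, the simulated mixture, the EM schedule) is a simplification for illustration:

```python
import math
import random

# One biallelic locus; genotype g is the number of copies (0, 1, 2) of
# allele A. Subpopulation j has allele frequency p[j] and Hardy-Weinberg
# genotype probabilities; the subpopulation label is latent, so EM
# estimates the mixture weights w and the frequencies p from genotypes.
def genotype_prob(g, p):
    return math.comb(2, g) * p ** g * (1 - p) ** (2 - g)

def em(genotypes, k=2, iters=200, seed=1):
    rng = random.Random(seed)
    w = [1.0 / k] * k
    p = [rng.uniform(0.2, 0.8) for _ in range(k)]
    for _ in range(iters):
        # E-step: responsibility of each subpopulation for each profile.
        resp = []
        for g in genotypes:
            terms = [w[j] * genotype_prob(g, p[j]) for j in range(k)]
            total = sum(terms)
            resp.append([t / total for t in terms])
        # M-step: re-estimate weights and allele frequencies.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(genotypes)
            p[j] = sum(r[j] * g for r, g in zip(resp, genotypes)) / (2 * nj)
    return w, p

# Simulated "database": 70% of profiles from a subpopulation with allele
# frequency 0.1, 30% from one with allele frequency 0.7.
rng = random.Random(42)
genotypes = []
for _ in range(2000):
    freq = 0.1 if rng.random() < 0.7 else 0.7
    genotypes.append(sum(rng.random() < freq for _ in range(2)))

w, p = em(genotypes)
print(sorted(round(x, 2) for x in p))  # close to [0.1, 0.7]
```

The real model works over full multi-locus profiles, but the mechanics (responsibilities in the E-step, frequency and weight updates in the M-step) are the same.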
FOSW03 9th November 2016
12:30 to 13:00
Torben Tvedebrink Inference platform for Ancestry Informative Markers
Co-authors: Poul Svante Eriksen (Aalborg University), Helle Smidt Mogensen (University of Copenhagen), Niels Morling (University of Copenhagen)

In this talk I will present a platform for making inference about Ancestry Informative Markers (AIMs), which are a panel of SNP markers used in forensic genetics to infer the population origin of a given DNA profile.

Several research groups have proposed such AIM panels, each with a specific objective in mind. Some were designed to discriminate between closely related ethnic groups, whereas others focus on larger distances (more remotely located populations). This talk is not about selecting markers or populations for testing. The focus will be on how to provide forensic geneticists with a tool that can be used to infer the most likely population(s) of a given DNA profile.

Using R (www.r-project.org) and Shiny (a web application framework for R, from RStudio), I have developed a platform that provides the numerical and visual output necessary for the geneticist to analyse and report the genetic evidence.

In the talk I will discuss the evidential weight in this situation and uncertainties in population frequencies. As the database of populations is not exhaustive, there is no guarantee that there exists a ‘relevant’ population in the database, where ‘relevant’ means a population sufficiently close to the ‘true’ population. We derive a database score specific to each DNA profile and use this score to assess the relevance of the database relative to the DNA profile.
FOSW03 9th November 2016
17:00 to 18:00
Bernard Silverman Pre-Dinner Lecture: Forensic Science from the point of view of a Scientific Adviser to Government
I will give some examples of ways in which mathematical and statistical issues relevant to Forensic Science have arisen during my seven years as Chief Scientific Adviser to the Home Office, and also reflect more widely on the role of a scientist appointed to this role in Government. Some of the aspects of my 2011 report into research into Forensic Science remain relevant today and indeed pose challenges for all the participants in this conference. I hope that my talk will also be an opportunity to thank the Newton Institute for playing its part, alongside other national and international research organisations, in raising the profile of this important discipline.
FOSW03 10th November 2016
09:30 to 10:15
Mikkel Andersen Y Chromosomal STR Markers: Assessing Evidential Value
Y chromosomal short tandem repeats (Y-STRs) are widely used in forensic genetics. The current application is mainly to detect non-matches, and subsequently release wrongly accused suspects. For matches the situation is different. For now, most analysts will just say that the haplotypes matched, but they will not assess the evidential value of this match. This is understandable given that a consensus on estimating the evidential value has not yet been reached. However, work on getting there is in progress. In this talk, the aim is to review some of the current methods for assessing the evidential value of a Y-STR match. This includes a proposal for a new way to compare methods estimating match probabilities and a discussion of correcting for population substructure through the so-called θ (theta) method.
FOSW03 10th November 2016
10:15 to 11:00
Amke Caliebe Estimating trace-suspect match probabilities in forensics for singleton Y-STR haplotypes using coalescent theory
Estimation of match probabilities for singleton haplotypes of lineage markers, i.e. for haplotypes observed only once in a reference database augmented by a suspect profile, is an important problem in forensic genetics. We compared the performance of four estimators of singleton match probabilities for Y-STRs, namely the count estimator, both with and without Brenner’s so-called kappa correction, the surveying estimator, and a previously proposed, but rarely used, coalescent-based approach implemented in the BATWING software. Extensive simulation with BATWING of the underlying population history, haplotype evolution and subsequent database sampling revealed that the coalescent-based approach and Brenner’s estimator are characterized by lower bias and lower mean squared error than the other two estimators. Moreover, in contrast to the two count estimators, both the surveying and the coalescent-based approach exhibited a good correlation between the estimated and true match probabilities. However, although its overall performance is thus better than that of any other recognized method, the coalescent-based estimator is still very computation-intensive. Its application in forensic practice therefore will have to be limited to small reference databases, or to isolated cases of particular interest, until more powerful algorithms for coalescent simulation have become available.
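The count estimator and Brenner's kappa correction mentioned above can be sketched roughly as follows. The database figures are hypothetical, and conventions (e.g. whether and how the database is augmented by the suspect's profile) vary between authors:

```python
# A Y-STR database of n haplotypes, many of them singletons, as is typical.
def count_estimator(count, n):
    """Naive count estimator: relative frequency of the haplotype."""
    return count / n

def kappa_corrected_singleton(n, n_singletons):
    """Brenner's kappa correction for a singleton: approximately
    (1 - kappa) / n, with kappa the fraction of singletons in the
    database. The more of the database is singletons, the rarer a
    newly observed singleton is judged to be."""
    kappa = n_singletons / n
    return (1 - kappa) / n

n, n_singletons = 1000, 600
print(count_estimator(1, n))                        # naive: 1/1000
print(kappa_corrected_singleton(n, n_singletons))   # (1 - 0.6)/1000, smaller
```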

FOSW03 10th November 2016
11:30 to 12:15
Peter Gill Challenges of reporting complex DNA mixtures in the court-room
The introduction of complex software into routine casework is not without challenges. A number of models are available to interpret complex mixtures. Some of these are commercial, whereas others are open source. For a comprehensive validation protocol, see Haned et al., Science and Justice (2016) 104-108. In practice, methods are divided into quantitative or qualitative models and there is no preferred method. A number of cases have been reported in the UK using different software. For example, in R v. Fazal, the prosecution and defence used different software to analyse the same case. Different likelihood ratios were obtained and both were reported to the court – a way forward to prevent confusion in court is presented. This paper also highlights the necessity of ‘equality of arms’ when software is used, illustrated by several other cases. Examples of problematic proposition formulations that may provide misleading results are described.
FOSW03 10th November 2016
12:15 to 13:00
Klaas Slooten The likelihood ratio as a random variable, with applications to DNA mixtures and kinship analysis
In forensic genetics, as in other areas of forensics, the data to be statistically evaluated are the result of a chance process, and hence one may conceive that they had been different, resulting in another likelihood ratio. Thus, one thinks of the obtained likelihood ratio in the case at hand as the outcome of a random variable. In this talk we will discuss the way to formalize this intuitive notion, and show general properties that the resulting distributions of the LR must have. We illustrate and apply these general results both to the evaluation of DNA mixtures and to kinship analysis, two standard applications of forensic DNA profiles. For mixtures, we discuss how model validation can be aided by investigation of the obtained likelihood ratios. For kinship analysis we observe that for any pairwise kinship comparison, the expected likelihood ratio does not depend on the allele frequencies of the loci that are used other than through the total number of alleles. We compare the behavior of the LR as a function of the allele frequencies with that of the weight of evidence, Log(LR), and argue that the WoE is better behaved. This talk is largely based on a series of three papers in International Journal of Legal Medicine co-authored with Thore Egeland.

Exclusion probabilities and likelihood ratios with applications to kinship problems, Int. J. Legal Med. 128, 2014, 415---425,
Exclusion probabilities and  likelihood ratios with applications to mixtures, Int. J. Legal Med. 130, 2016, 39---57,
The likelihood ratio as a random variable for linked markers in kinship analysis, Int. J. Legal Med. 130, 2016, 1445---1456
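One general property in this line of work is that, under the defence hypothesis, the expected likelihood ratio is exactly 1, whatever the underlying distributions. A tiny discrete check with arbitrary (hypothetical) distributions, using exact fractions so the identity holds exactly:

```python
from fractions import Fraction

# Three possible evidence outcomes with arbitrary probabilities under the
# prosecution hypothesis Hp and the defence hypothesis Hd.
p_hp = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]   # P(evidence | Hp)
p_hd = [Fraction(1, 4), Fraction(1, 4), Fraction(1, 2)]   # P(evidence | Hd)

lr = [hp / hd for hp, hd in zip(p_hp, p_hd)]

# E[LR | Hd] = sum over outcomes of LR(x) * P(x | Hd)
#            = sum over outcomes of P(x | Hp) = 1.
expected_lr_under_hd = sum(l * q for l, q in zip(lr, p_hd))
print(expected_lr_under_hd)  # 1
```

The same cancellation is why, in pairwise kinship comparisons, the expected LR can be insensitive to the allele frequencies themselves.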
FOSW03 10th November 2016
15:30 to 16:15
Michael Sigman Assessing Evidentiary Value in Fire Debris Analysis
Co-author: Mary R. Williams (National Center for Forensic Science, University of Central Florida)

This presentation will examine the calculation of a likelihood ratio to assess the evidentiary value of fire debris analysis results. Models based on support vector machine (SVM), linear and quadratic discriminant analysis (LDA and QDA) and k-nearest neighbors (kNN) methods were examined for binary classification of fire debris samples as positive or negative for ignitable liquid residue (ILR). Computational mixing of data from ignitable liquid and substrate pyrolysis databases was used to generate training and cross-validation samples. A second validation was performed on fire debris data from large-scale research burns, for which the ground truth (positive or negative for ILR) was assigned by an analyst with access to the gas chromatography-mass spectrometry data for the ignitable liquid used in the burn. The probabilities of class membership were calculated using an uninformative prior, and a likelihood ratio was calculated from the resulting class membership probabilities. The SVM method demonstrated high discrimination, a low error rate and good calibration for the cross-validation data; however, the performance decreased significantly for the fire debris validation data, as indicated by a significant decrease in the area under the receiver operating characteristic (ROC) curve. The QDA and kNN methods showed performance trends similar to those of SVM. The LDA method gave poorer discrimination, higher error rates and slightly poorer calibration for the cross-validation data; however, the performance did not deteriorate for the fire debris validation data.
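With an uninformative (equal) prior over the two classes, the step from a posterior class membership probability to a likelihood ratio can be sketched as follows (the probability value is hypothetical):

```python
# With equal prior class probabilities the prior odds are 1, so the LR
# equals the posterior odds implied by the model's class membership
# probability P: LR = P / (1 - P).
def lr_from_posterior(p_positive, prior_positive=0.5):
    prior_odds = prior_positive / (1 - prior_positive)
    posterior_odds = p_positive / (1 - p_positive)
    return posterior_odds / prior_odds

print(round(lr_from_posterior(0.9), 2))  # 9.0
```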
FOSW03 10th November 2016
16:15 to 17:00
James Curran Understanding Intra-day and Inter-day Variation in LIBS
Co-authors: Anjali Gupta (University of Auckland), Chris Triggs (University of Auckland), Sally Coulson (ESR)

LIBS (laser-induced breakdown spectroscopy) is a low-cost, highly portable instrument that can be used in forensic applications to determine elemental composition. It differs from more traditional instruments such as ICP-MS and µ-XRF in that the output is a spectrum rather than the concentrations of elements. LIBS has great appeal in forensic science but has yet to enter the mainstream. One of the reasons for this is a perceived lack of reproducibility in the measurements over successive days or weeks. In this talk I will describe a simple experiment we designed to investigate this phenomenon, and the consequences of our findings. The analysis involves both classical methodology and a simple Bayesian approach.
FOSW03 11th November 2016
09:45 to 10:30
Giulia Cereda A Bayesian nonparametric approach for the rare type problem
Co-author: Richard Gill (Leiden University)

The evaluation of a match between the DNA profile of a stain found at a crime scene and that of a previously identified suspect involves the unknown parameter p = (p1, p2, ...), the ordered vector of proportions of the different DNA profiles in the population of potential donors, together with the names of the different DNA types.

We propose a Bayesian nonparametric method which considers p as the realization of a random variable P distributed according to the two-parameter Poisson-Dirichlet distribution, and discards information about DNA types.

The ultimate goal of this model is to evaluate DNA matches in the rare type case, that is, the situation in which the suspect's profile, matching the crime stain profile, is not one of those in the database of reference. This situation is so problematic that it has been called "the fundamental problem of forensic mathematics" by Charles Brenner.
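For intuition, the two-parameter Poisson-Dirichlet (Pitman-Yor) model has a simple predictive rule: after observing n profiles containing k distinct types, the probability that the next profile is a previously unseen type is (theta + alpha*k) / (theta + n). The sketch below is my illustration of that rule, not the authors' implementation:

```python
# Predictive probability of a new, previously unseen type under the
# two-parameter Poisson-Dirichlet process with parameters alpha and theta.
def prob_new_type(n: int, k: int, alpha: float, theta: float) -> float:
    """n: profiles observed so far; k: distinct types among them;
    0 <= alpha < 1, theta > -alpha."""
    return (theta + alpha * k) / (theta + n)

# A database of 100 profiles showing 30 distinct types, with alpha=0.5,
# theta=1.0, assigns probability 16/101 to the next profile being new:
p = prob_new_type(100, 30, alpha=0.5, theta=1.0)
```

This is what makes the model attractive for the rare type case: it gives a principled, nonzero probability to types never seen in the reference database.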
FOSW03 11th November 2016
11:00 to 11:30
Marjan Sjerps Evaluating a combination of matching class features: a general 'blind' procedure and a tire marks case
Co-authors: Ivo Alberink (NFI), Reinoud Stoel (NFI)

Tire marks are an important type of forensic evidence as they are frequently encountered at crime scenes. When the tires of a suspect's car are compared to marks found at a crime scene, the evidence can be very strong if so-called 'acquired features' are observed to correspond. When only 'class characteristics', such as parts of the profile, are observed to correspond, it is obvious that many other tires exist that correspond equally well. Consequently, this evidence is usually considered very weak, or it may simply be ignored. Like Benedict et al. (2014), we argue that such evidence can still be strong and should be taken into account. We explain a method for assessing the evidential strength of a set of matching class characteristics by presenting a case example from the Netherlands in which tire marks were obtained. Only parts of two different tire profiles were visible, in combination with measurements of the axle width. Suitable databases already existed and were accessible to forensic experts. We show how such data can be used to quantify the strength of this evidence and how it can be reported. We also show how the risk of contextual bias may be minimized in cases like this. In this particular case, quite strong evidence was obtained, which was accepted and used by the Dutch court. We describe a general procedure for quantifying the evidential value of an expert's opinion of a 'match'. This procedure can be applied directly to other types of pattern evidence such as shoeprints, fingerprints, or images. Furthermore, it is 'blind' in the sense that context information management (CIM) is applied to minimize bias.
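If the matching class characteristics could be treated as independent, a naive likelihood ratio would be the reciprocal of the product of their population frequencies, each estimated from a suitable database. The sketch below illustrates this simplified view only, with hypothetical frequencies; the actual case analysis may have modelled dependence between characteristics:

```python
from math import prod

# Naive LR for a set of matching class characteristics, assuming the
# characteristics are independent in the relevant tire population.
def naive_lr(frequencies) -> float:
    """frequencies: population frequency of each matching characteristic."""
    return 1.0 / prod(frequencies)

# Hypothetical example: two profile elements seen on 5% and 10% of tires,
# and an axle-width class seen on 20% of cars, jointly give LR = 1000:
lr = naive_lr([0.05, 0.10, 0.20])
```

Even individually common characteristics can combine into strong evidence, which is the abstract's central point.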
FOSW03 11th November 2016
11:30 to 12:00
Karen Kafadar Quantifying Information Content in Pattern Evidence
The first step in the ACE-V process for comparing fingerprints is the "Analysis" phase, where the latent print under investigation is subjectively assessed for its "suitability" (e.g., clarity and relevance of features and minutiae). Several proposals have been offered for objectively characterizing the "quality" of a latent print. The goal of such an objective assessment is to relate the "quality metric" (which may be a vector of quality scores) to the accuracy of the call (correct ID or correct exclusion), so that latent print examiners (LPEs) can decide immediately whether to proceed with the other steps of ACE-V. We describe some of these proposals that attempt to quantify the "information content" of a latent print or of its individual features ("minutiae"), and describe initial efforts to assess their association with accuracy, first using NIST's public SD27a latent fingerprint database, which contains prints judged by "experts" as "good," "bad," or "ugly." One proposed metric, based on gradients to determine the clarity of the minutiae, correlates well with this general classification and thus can serve as an objective, rather than subjective, measure of information content.
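A gradient-based clarity metric of the kind mentioned can be illustrated as the mean gradient magnitude over an image patch: sharp ridge structure produces large gradients, while blur or smudging flattens them. The details below are my assumption for illustration, not the specific proposal described in the talk:

```python
import numpy as np

# Toy clarity score for a grayscale patch: mean magnitude of the intensity
# gradient. High where edges/ridges are crisp, low where the patch is flat
# or blurred.
def clarity_score(patch: np.ndarray) -> float:
    gy, gx = np.gradient(patch.astype(float))  # per-axis finite differences
    return float(np.mean(np.hypot(gx, gy)))

# A featureless patch scores 0; any structured patch scores higher:
flat = clarity_score(np.zeros((8, 8)))
ramp = clarity_score(np.arange(64, dtype=float).reshape(8, 8))
```

A real quality metric would be computed locally around each minutia and combined into the "vector of quality scores" the abstract mentions.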
FOSW03 11th November 2016
12:00 to 12:30
David Balding Inference for complex DNA profiles
I will outline some interesting aspects of the analysis of complex DNA profiles that have arisen from my recent work in court cases and in my research.  The latter has been largely performed in association with former PhD student Chris Steele at UCL and Cellmark UK.  We developed a new statistical model and software for the evaluation of complex DNA profiles [1], and some new approaches to validation.  We also investigated using a dropin model to account for multiple very-low-level contributors, the statistical efficiency of a split-and-replicate profiling strategy versus a one-shot profile, and a simple linkage adjustment.  If time and opportunity permit, I will use my last-speaker slot to comment on relevant issues raised during the conference.
1. Steele C, Greenhalgh M, Balding D (2016). Evaluation of low-template DNA profiles using peak heights. Statistical Applications in Genetics and Molecular Biology, 15(5), pp. 431-445. doi:10.1515/sagmb-2016-0038

FOS 14th November 2016
16:00 to 17:00
Gerd Gigerenzer Rothschild Lecture: Helping Doctors and Patients Make Sense of Health Statistics
Efficient health care requires informed doctors and patients. Yet studies consistently show that most physicians have great difficulties understanding health statistics. This widespread innumeracy makes informed decision-making difficult, a problem amplified by conflicts of interest and defensive medicine. Here I report on the efforts of the Berlin Harding Center to help professionals and laypeople understand evidence, and to stamp out the use of misleading statistics in pamphlets, journals, and the press. Raising taxes or rationing care is often seen as the only viable alternative to exploding health care costs. There is, however, a third option: by promoting health numeracy, better care can be had for less money.
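A standard calculation in this spirit (the numbers below are hypothetical, chosen only for illustration): even a fairly accurate screening test yields mostly false positives when the condition is rare, which is exactly the kind of result that translating probabilities into natural frequencies makes transparent:

```python
# Positive predictive value of a screening test via Bayes' rule:
# P(disease | positive) among all who test positive.
def ppv(prevalence: float, sensitivity: float, false_positive_rate: float) -> float:
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Hypothetical screen: 1% prevalence, 90% sensitivity, 9% false-positive
# rate. In natural frequencies: of 1000 people, ~9 true positives and
# ~89 false positives, so only about 9% of positives have the disease.
p = ppv(0.01, 0.90, 0.09)
```

Stating the same facts as "9 out of roughly 98 positives" rather than as conditional probabilities is the presentational move the lecture advocates.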
FOS 6th December 2016
11:00 to 12:00
Jacob de Zoete Bayesian networks for the evaluation of evidence when attributing paintings to painters
Questions of provenance and attribution have long motivated art-historical research. Current authentication studies combine traditional humanities-based methods (for example, stylistic analysis and archival research) with scientific investigation using instrumental analysis techniques such as X-ray based methods, GC-MS, spectral imaging and metal-isotope research. Keeping an overview of the information delivered by different specialists, and establishing its relative weight, is a growing challenge.

To help clarify complex situations where the relative weight of evidence needs to be established, the Bayesian framework for the interpretation of evidence shows great promise. Introducing this mathematical system for calculating the probability of hypotheses based on various pieces of evidence will strengthen the scientific basis for (art-)historical and scientific studies of art. Bayesian networks can accommodate a large variation in data and can quantify the value of each piece of evidence. Their flexibility allows us to incorporate new evidence and quantify its influence.

In this presentation I will present the first results of a pilot study on the opportunities and challenges of implementing Bayesian networks to structure evidence and arguments in painting-attribution questions. This research is based on the painting Sunset at Montmajour, which was attributed to Vincent van Gogh in 2013.
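The core calculation such a network supports can be shown in a deliberately simplified form: sequential Bayesian updating of an attribution hypothesis with independent pieces of evidence, each summarized by a likelihood ratio. This sketch is my own illustration, not the pilot study's model, which would capture dependencies between evidence via the network structure:

```python
# Combine independent pieces of evidence for an attribution hypothesis by
# multiplying their likelihood ratios into the prior odds.
def update_odds(prior_odds: float, likelihood_ratios) -> float:
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds: float) -> float:
    return odds / (1.0 + odds)

# Hypothetical: prior odds 1:10 against attribution; one piece of evidence
# with LR 5 (e.g. a pigment analysis) and one with LR 2 (e.g. provenance)
# bring the posterior odds to 1:1, i.e. probability 0.5:
posterior = odds_to_prob(update_odds(0.1, [5.0, 2.0]))
```

A Bayesian network generalizes this by relaxing the independence assumption and letting each specialist's finding enter as a node with its own conditional probabilities.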

FOS 9th December 2016
11:00 to 12:00
Jane Hutton How should we assess whether a medical device is safe? What are the possible comparators?