Event Summary – Perspectives on Transcription in Criminal Justice (symposium)
The Research Hub for Language in Forensic Evidence recently took part in the Perspectives on Transcription in Criminal Justice symposium, which was held online on Thursday March 11. The event was run by the SILC team (Centre for Spoken Interaction in Legal Contexts) within the Aston Institute for Forensic Linguistics.
Around 130 people logged in to hear the three scheduled talks and to take part in the panel session that followed. The presenters and their talk titles are listed below, and highlights from the presentations are the main focus of this post.
The presentations were recorded and a link is provided at the end of this post.
Dr Martha Komter, Netherlands Institute for the Study of Crime and Law Enforcement (NSCR)
Perspectives on Transcription in Criminal Justice: Talk, text, context
Dr. Komter spoke about “Entextualisation”, and how discourse is extracted from its original context. In a forensic situation, the process often follows this sequence of events:
- Talk to text
- Text to evidence
- Evidence to case file
- Court
Each step may involve changes in meaning; this is seen as a source of contamination.
Dr. Komter also took the audience through examples of exactly how transcripts differ from the original statements. Broadly, she showed that a large proportion of police questions, and follow-up questions, are not written down in the transcript. She noted that while representations of talk may be “eloquent” in their transcribed form, they certainly do not reflect exactly what was said. She has also found under-reporting of antagonism by the police, and more monologue (rather than dialogue) reported, compared with the actual speech events. On the other hand, she has also shown that transcriptions tend to overstate a suspect’s acceptance and co-operation, and to (over)emphasise the suspect’s authorship.
Some final points Dr. Komter left the audience with, for further discussion, were:
- Even when a transcript is “in the suspect’s own words”, interactional context is removed, and this needs to be acknowledged.
- Language ideologies play a huge role in the formation of a transcript.
Dr. Komter has written a book about the life cycle of a suspect’s statement, with more discussion of these topics. Details are:
Komter, M. (2019) The Suspect’s Statement: Talk and Text in the Criminal Process. Cambridge: CUP.
Dr Kate Haworth, Dr Felicity Deamer and Dr Emma Richardson (SILC team, AIFL)
Members of the SILC team each gave a presentation on their work within the Aston Institute for Forensic Linguistics.
Haworth: ‘For The Record’: applying linguistics to improve evidential consistency in English police interview records
Dr. Haworth was up first, introducing the SILC project ‘For The Record’. In the UK, the record of interview is used as evidence, so accuracy is crucial. The project aims to determine how to deal with the inevitable changes between the spoken and written forms, and to establish what role linguists can take alongside legal practitioners. She described the project as having three strands:
- qualitative, focusing on data and evidential integrity (led by Emma Richardson)
- psycholinguistic experiments (led by Felicity Deamer)
- focus groups, questionnaires (led by Kate Haworth)
Dr. Haworth reported that intended outcomes of this project are transcription guidance and standardization, as well as input into training. The SILC team wants to ensure that a flexible exploratory approach is used and that the project continues to take practitioner input into account.
Deamer: Exploring variability in interpretations of police investigative interviews
Dr. Felicity Deamer was up next, talking about the experimental approach in the study. She reported on an experiment with 60 participants, who were all presented with the same police interview, taken from publicly available YouTube footage of a suspect interview in a murder trial. Participants were asked a series of questions to determine how they interpreted traits of the interviewee (and thus how the interviewee was ultimately viewed).
The participants were split into two groups, with 30 watching the actual interview, and 30 reading a transcript (which included stress, “emotion”, overlapping speech and pauses). Factors such as interviewee credibility, plausibility, sincerity and emotion were then rated.
Dr. Deamer found that participants who read the transcript (rather than heard the audio) were significantly more likely to:
- perceive the interviewee as anxious and unrelaxed;
- interpret the interviewee’s behaviour as being agitated, aggressive, defensive, and nervous;
- determine that the interviewee is un-calm and uncooperative; and ultimately
- deem the interviewee’s version of events to be untrue.
She made the excellent point that “It is perhaps intuitive that vitally important information is lost in the transformation of spoken interaction into written format. It is somewhat less intuitive that this lost information might have a negative impact on evidential value”.
Dr. Deamer ended by saying that through this experimental approach, SILC hopes to move from the “Whether to the Why” – stating that “if we know the mechanisms by which evidential value is negatively impacted, then we can take steps to mitigate that impact”.
Richardson: Factors influencing evidential consistency in police investigative interview records
In her talk, Dr. Emma Richardson covered five factors that influence evidential consistency in police interview records:
- ownership: the rights participants have over the data (their talk);
- agency: noting that accounts are mediated;
- accuracy: what was said, and what was actually recorded;
- usability: of audio, text and video (though video contains more information, written records are far easier to use);
- resource efficiency: there is a trade-off between what is ideal, practical, and required in context.
Dr. Richardson concluded by noting that transcripts are necessary, but it must be acknowledged that they are produced subjectively, and will inevitably lose detail. Details that are included should be adequate and serve the intended purpose.
Professor Helen Fraser (Research Hub for Language in Forensic Evidence, Unimelb)
Transcription of indistinct forensic audio – and a framework for understanding factors affecting the creation and evaluation of transcripts
Professor Fraser started off by agreeing with the previous speakers regarding the complex ‘entextualisation’ involved in creating a transcript, and the problems that can arise if these complexities are not fully understood in creating transcripts for legal contexts.
She then introduced a different legal context (and what the Hub focuses on!): transcription of indistinct covert recordings used as evidence in criminal trials. This is an even more complex type of transcription situation than those discussed by the other speakers – and all the issues they raised apply in even more problematic ways to forensic transcription.
You may have already read about these issues on our Hub site. If you would like to explore further, Helen’s website has many examples (you can try them for yourself!).
Next, Helen moved to the main topic of her talk. If we are to solve some of the problems raised during the Symposium, we need a good framework for understanding the complex nature of transcription, and deciding what kind of transcript is suitable for different transcription situations.
To help with this, Helen outlined five key factors that affect transcript reliability, offering a framework for best practice:
- The medium: Is the transcript of live speech or of a recording? If it is a recording, what is its quality?
- The speech: What language is the material in? Which dialect? What is the formality – is it conversational? How much overlapping speech is there? What is its duration and continuity?
- The listener / perceiver: Professor Fraser noted that it is easy to forget about this factor in situations of shared literacy. What is the listener’s knowledge of the language / dialect / register? What are their knowledge, assumptions and expectations of context?
- The transcriber: How skilled are they? Are they accredited, and do they have linguistic knowledge? What is the transcriber’s understanding of the end-user of the transcript, and what is its overall purpose? Is it an aide-mémoire, an official record, or material for linguistic research? How independent is the transcriber?
- The evaluator: Who evaluates the transcript, reviewing and checking it? Is it the speaker? A third party? We need to know how skilled the evaluator is and whether they are independent. What method will be used?
Professor Fraser compared some examples of how the framework would differ when comparing transcripts of court proceedings with transcripts of indistinct covert recordings.
She concluded by talking about the fact that the Research Hub for Language in Forensic Evidence is working towards a solution regarding the treatment of language in forensic evidence (watch this space!).
Panel Discussion
Finally, the symposium ended with a panel discussion hosted by Dr Debbie Loakes. This brought out themes discussed throughout the talks, touching on issues such as information that should and should not be included in transcripts, and automatic transcription.
Watch the symposium online
You can listen to the panel discussion and watch this event online here (scroll down to “Symposia” and find “Perspectives on Transcription in Criminal Justice”).
Next steps
All of the speakers from the session plus the panel chair have teamed up as editors for a Frontiers research topic called “Capturing Talk: The Institutional Practices Surrounding the Transcription of Spoken Language”. Click here for more information.