Moving on with Transcripts

Laptop and notepad on the laps of students in a lecture

Over the years researchers have shown that it is possible to have live, interactive, highlighted transcripts without the character or line restrictions needed for captions. This is only possible with technology, yet with many more students using laptops, tablets and phones during lectures, it is surprising how few lecture capture systems offer this option.

It has been shown that physically writing notes by hand can aid retention, and using laptops in lectures gives access to all the other distractions such as social media and email! However, having a transcript that provides interaction allows key points to be selected, and annotation improves retention for students who find it hard to take notes, whether by hand or using technology (Wald, 2018).

Systems that offer transcript annotation linked to the presentation slides, integrated with the ability to make personal notes alongside the synchronised text, are hard to find. It can also be difficult to correct words as you hear or see them where there are subject-specific complexities.

As described in our last blog, the corrections needed tend to be measured by different forms of accuracy level, whether the number of incorrect words, omissions or substitutions. Further work on the NLive transcript has also shown that where English is not a first language, those manually making corrections may falter when contractions and the conditional tense are used, and if the speaker is not a fluent English speaker, corrections can take up to five times longer (according to a recent discussion held by the Disabled Students’ Commission on 6th December).

Difficulties with subject-related words have been addressed by CaptionEd with related glossaries, as is the case with many specialist course captioning offerings where companies have been employed to provide accurate output. Other tools, such as Otter.ai and Microsoft Teams, automatically offer named-speaker options, which is also helpful.

Professor Mike Wald has produced a series of interesting figures as a sample of what can happen when students only see an uncorrected transcript rather than actually listening to the lecture. This matters because not all students can hear the lecture, or even attend in person or virtually, and a lecture transcript is often used long after the event. The group of students he was working with six years ago found that:

  • Word Error Rate (WER) counts all errors (deletions, substitutions and insertions, in the classical way used by speech scientists): WER was 22% for a 2,715-word transcript.
  • Concept Error Rate counts errors of meaning: this was 15% assuming previous knowledge of content (i.e. ignoring errors that would be obvious if the student knew the topic) but 30% assuming no previous knowledge of content.
  • Guessed error rate counts errors AFTER the student has tried to correct the transcript by ‘guessing’ whether words have errors: there was little change in Word Error Rate, as words guessed correctly were balanced by words guessed incorrectly (i.e. correct words that the student thought were wrong and changed).
  • Perceived error rate asks the student to estimate the percentage of errors: student readers’ perception of Word Error Rate varied from 30–50% overall and 11–70% for important/key words; readers thought there were more errors than there really were, and so found the transcript difficult and frustrating.
  • Key Errors (i.e. errors that change meaning/understanding) were 16% of the total errors, and therefore would require only 5 corrections per minute to improve the Concept Error Rate from 15% to 0% (the speaking rate was 142 wpm, giving approximately 31 errors per minute), but it is important to note that this only improves the scientifically calculated Word Error Rate from 22% to 18%.
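For readers curious how the 22% figure above is actually calculated, here is a minimal sketch of Word Error Rate in Python: the deletions, substitutions and insertions are found by a word-level Levenshtein alignment and divided by the length of the reference transcript. The example sentences are illustrative only, not taken from the study.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words (classic Levenshtein table).
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1   # substitution?
            dp[i][j] = min(dp[i - 1][j] + 1,              # deletion
                           dp[i][j - 1] + 1,              # insertion
                           dp[i - 1][j - 1] + cost)       # match/substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") in a six-word reference: WER = 1/6
print(wer("the cat sat on the mat", "the cat sat on a mat"))
```

The study’s arithmetic follows the same logic: at 142 wpm and 22% WER there are roughly 31 errored words per minute, 16% of which (about 5) are key errors, so fixing only those drops the scientifically calculated WER from 22% to around 18%.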

This is such an important challenge for many universities and colleges at the moment, so to follow on from this blog you may be interested to catch up with the transcript of the Disabled Students’ Commission Roundtable debate held on 6th December. One of the summary comments highlighted the importance of getting the technology right as well as providing manual support, but overriding it all was the importance of listening to the student voice.

Finally, if you ever wonder why speech recognition for automated captioning and transcription still fails to work for us all, have a look at a presentation by Speechmatics about AI bias, inclusion and diversity in speech recognition. It is an interesting talk about using word error rates, AI and many hours of audio with different phonetic structures to build language models that are more representative of the voices heard across society.

Guidance for captioning rich media from Advance HE (26/02/2021)

Accessibility Maze Game

If you want to learn about digital accessibility in a fun way, try the Accessibility Maze Game developed by The Chang School, Ryerson University in Ontario, Canada. It takes a bit of working out and you may not get through all the levels, but have a go!

When you have managed to get through the levels there is a useful “What you can do to Remove Barriers on the Web” downloadable PDF ebook telling you all about the issues you will have explored during the Accessibility Maze Game. These all relate to the W3C Web Content Accessibility Guidelines but are presented in ten steps.

The ebook is available in an accessible format and is provided under a Creative Commons licence (CC BY-SA 4.0).

SCULPT for Accessibility

SCULPT process thanks to Digital Worcester – Download the PDF infographic

Helen Wilson has very kindly shared her link to SCULPT for Accessibility. Usually we receive strategies that relate to students’ work, but in this case it is a set of resources that aims “to build awareness for the six basics to remember when creating accessible documents aimed at the wider workforce in a local authority or teachers creating learning resources.”

With everything going online due to COVID-19, this seemed the moment to headline the need to make sure all our work is based on the principles of accessibility, usability and inclusion. JISC has provided a new set of guidelines relating to the public sector body regulations and providing online learning materials. AbilityNet is also offering useful links with more advice for those in Further and Higher Education.

Windows 10 support for Visual Impairment

YouTube online access

If you are supporting students, or want to learn more about the way Microsoft Windows 10 provides built-in assistive technologies to support visual impairment, Craig Mill and CALL Scotland have a blog on the subject, and Craig has made a YouTube playlist. All the videos have captions, and the transcripts are readily available.

The videos are short, bite-sized guides covering the following topics:

  • Part 1: Customising the desktop using some simple adjustments in Windows 10.
  • Part 2: Magnifying information in apps – some useful hints and tips on zooming in and out of browsers and other apps.
  • Part 3: Customising Mouse Tools and Pointer – how to make changes to the Mouse Pointer using Windows ‘legacy’ tools.
  • Part 4: Using keyboard shortcut keys to increase the font size in Microsoft Word – improving speed and workflow.
  • Part 5 (a): Using Immersive Reading tools in Microsoft Word to customise the font / text and listen to it spoken aloud.
  • Part 5 (b): Using Learning Tools in Microsoft Edge Browser to customise font/text, layout and hear it read aloud.
  • Part 6: Introduction to Microsoft Ease of Access Tools Display Settings – how to ‘Make text size bigger’, ‘Make everything bigger’ and how to adjust the mouse pointer size and colour.
  • Part 7: Using Windows Magnifier – how to use Windows Magnifier in combination with other Ease of Access Display Settings such as ‘Make everything bigger’ etc.
  • Part 8: Colour filters – maximising computer accessibility for learners who experience colour blindness.
  • Part 9: High Contrast Filter – how to customise the colours of elements such as menu bars, backgrounds, buttons etc, in Windows.
  • Part 10 (a): Microsoft Narrator – an introduction to using screen reading with Windows Narrator.
  • Part 10 (b): Using Windows Narrator to navigate the desktop and Microsoft Word.

iPhone, iPad or iOS 12 Shortcuts app to create custom shortcuts

The free Apple iOS Shortcuts app can be used to make many tasks one step easier. You can download the Shortcuts app from the App Store.

Several websites have commented on how useful it is, and AbilityNet provided an early review of the additional ways Siri can be used with this app. It makes it possible to combine several steps into one automated step, using the Shortcuts widget or by just asking Siri.

There is a gallery of shortcuts; one of them is a universal clipboard that allows you to dictate content using Siri and paste it into any other app, or send a message or email, with just one command.

GadgetHacks has more hints and tips about this app.

YouTube video on making Siri shortcuts by Max Dalton (Published on 18 Sep 2018)

Seeing AI for recognising things and reading out what it has found!

According to Stuart Ball, this free Seeing AI iPhone and iPad app has multiple benefits for people who are blind or have visual impairments. Developed by Microsoft, it takes what AccessWorld calls a ‘Swiss Army knife’ approach to telling you about the world around you. It searches out light sources, identifies colours and money, and describes them using text to speech. It will recognise that a person is approaching and offer a description. Barcodes can be read, and optical character recognition is used for documents. Clear handwriting can be deciphered and scenes described.

A college student in the USA called Veronica has provided a very helpful Seeing AI review from a blind student’s point of view.

Microsoft have produced a YouTube video about the Seeing AI app.

Thank you so much Stuart for providing this strategy.

Stuart Ball is an assessor at Cardiff Metropolitan University.