Moving on with Transcripts

Laptop and notepad on the laps of students in a lecture

Over the years, researchers have shown that it is possible to have live, interactive, highlighted transcripts without the character or line restrictions that captions require. This is only possible when using technology, but with so many more students using tablets and mobile phones during lectures, it is surprising how few lecture capture systems offer this option.

It has been shown that physically writing notes by hand can aid retention, and using laptops and similar devices in lectures brings access to all the other distractions, such as social media and email! However, having an interactive transcript available allows key points to be selected, and annotation improves retention for those students who find it hard to take notes, whether by hand or using technology (Wald, 2018).

Systems that also offer transcript annotation linked to the presentation slides, integrated with the ability to make personal notes alongside the synchronised text, are hard to find. Ways to correct words as you hear or see them can also be difficult to provide where the subject matter is complex.

As described in our last blog, the corrections needed tend to be measured by different forms of accuracy, whether counting incorrect words, omissions or substitutions. Further work on the NLive transcript has also shown that, where English is not a first language, those making manual corrections may falter when contractions and the conditional tense are used, and if the speaker is not a fluent English speaker, corrections can take up to five times longer (according to a recent discussion held by the Disabled Students’ Commission on 6th December).

Difficulties with subject-related words have been addressed by CaptionEd with subject glossaries, as is the case with many specialist course captioning offerings where companies have been employed to provide accurate output. Other services, such as Otter.ai and Microsoft Teams, automatically offer named speaker options, which is also helpful.

Professor Mike Wald has produced a series of interesting figures as a sample of what can happen when students only see an uncorrected transcript rather than actually listening to the lecture. This is important, as not all students can hear the lecture or even attend in person or virtually, and the transcript of a lecture is often used long after the event. The group of students he was working with six years ago found that:

  • Word Error Rate (WER) counts all errors (deletions, substitutions and insertions, in the classical scientific way used by speech scientists): WER was 22% for a 2715-word transcript.
  • Concept Error Rate counts errors of meaning: this was 15% assuming previous knowledge of the content (i.e. ignoring errors that would be obvious if the student knew the topic) but 30% assuming no previous knowledge.
  • Guessed Error Rate counts errors after the student has tried to correct the transcript by ‘guessing’ whether words are errors or not: there was little change in Word Error Rate, as words guessed correctly were balanced by words guessed incorrectly (i.e. correct words that the student thought were wrong and changed).
  • Perceived Error Rate asks the student to estimate the percentage of errors: student readers’ perception of Word Error Rate varied from 30%–50% overall and 11%–70% for important/key words. Readers thought there were more errors than there really were, and so found the transcript difficult and frustrating.
  • Key errors (i.e. errors that change meaning/understanding) were 16% of the total errors, so only about 5 corrections per minute would be needed to improve the Concept Error Rate from 15% to 0% (the speaking rate was 142 wpm and there were approximately 31 errors per minute). It is important to note, though, that this only improves the scientifically calculated Word Error Rate from 22% to about 18%, as the quick sketch below shows.
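To see how these figures fit together, here is a minimal back-of-the-envelope sketch in Python. It is our own illustration, not code or data from the study itself: the error counts are simply back-calculated from the percentages quoted above.

```python
# Rough check of the figures above.
# Classical WER = (substitutions + deletions + insertions) / words spoken.

words = 2715                           # transcript length
total_errors = round(words * 0.22)     # 22% WER -> roughly 597 errors
minutes = words / 142                  # 142 wpm speaking rate -> ~19 minutes

errors_per_minute = total_errors / minutes         # ~31, as quoted
key_errors = round(total_errors * 0.16)            # ~96 meaning-changing errors
key_corrections_per_minute = key_errors / minutes  # ~5 per minute

# Fixing only the key errors takes the Concept Error Rate from 15% to 0%,
# but the raw WER only falls from 22% to about 18%.
wer_after_key_fixes = (total_errors - key_errors) / words

print(f"~{errors_per_minute:.0f} errors/min, "
      f"~{key_corrections_per_minute:.0f} key corrections/min, "
      f"WER after key fixes: {wer_after_key_fixes:.0%}")
```

The point the arithmetic makes is that a handful of targeted corrections per minute can remove every meaning-changing error while barely moving the headline WER, which is why WER on its own can be a misleading measure of how useful a transcript is.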

This is such an important challenge for many universities and colleges at the moment, so to follow on from this blog you may be interested to catch up with the transcript of the Disabled Students’ Commission roundtable debate held on 6th December. One of the summary comments highlighted the importance of getting the technology right as well as providing manual support, but overriding all of this was the importance of listening to the student voice.

Finally, if you ever wonder why speech recognition for automated captioning and transcription still fails to work for us all, have a look at a presentation by Speechmatics about AI bias, inclusion and diversity in speech recognition. It is an interesting talk about using word error rates, AI and many hours of audio with different phonetic structures to build language models that are more representative of the voices heard across society.

Guidance for captioning rich media from Advance HE (26/02/2021)

PDF reader in Microsoft Edge and Immersive Reader goes mobile.

We don’t usually have a collection of strategies, but in this case Alistair McNaught has posted an interesting comment on LinkedIn saying that he now uses Edge to read PDFs. As the quote below shows, the browser offers a better reading experience, not just the usual table of contents, page view and text to speech.

Microsoft Edge comes with a built-in PDF reader that lets you open your local pdf files, online pdf files, or pdf files embedded in web pages. You can annotate these files with ink and highlighting. This PDF reader gives users a single application to meet web page and PDF document needs. The Microsoft Edge PDF reader is a secure and reliable application that works across the Windows and macOS desktop platforms.

More Microsoft Edge features

Microsoft have also updated their Immersive Reader so that it now works on iOS and Android. The following text has been taken from a post that might be useful: ‘What’s New in Microsoft Teams for Education | July 2021’.

  • Immersive Reader on iOS and Android. Immersive Reader, which uses proven customization techniques to support reading across ages and abilities, is now available for Teams iOS and Android apps. You can now hear posts and chat messages read aloud using Immersive Reader on the Teams mobile apps.
  • Access files offline on Android. The Teams mobile app on Android now allows you to access files even when you are offline or in bad network conditions. Simply select the files you need access to, and Teams will keep a downloaded version to use in your mobile app. You can find all your files that are available offline in the files section of the app. (This is already available on iOS.)
  • Teams on Android tablets. Now you can access Teams from a dedicated app on Android tablets.
  • Inline message translation in channels for iOS and Android. Inline message translation in channels lets you translate channel posts and replies into your preferred language. To translate a message, press and hold the channel post or reply and then select “Translate”. The post or reply will be translated to your UI language by default. If you want to change the translation language, go to Settings > General > Translation.

Thank you Alistair for this update on some new strategies.

Android Accessibility: Introducing Action Blocks for rapid access.

Google Action Blocks is designed for those with cognitive impairments, but it is actually useful for anyone who wants one-tap access to important features on their Android phone.

Action Blocks is a new Android app that allows you to create customisable home screen buttons. This means you can create widgets with direct access to a particular phone number, a video, the diary schedule for the day, documents and so on. Google accessibility software engineer Ajit Narayanan and accessibility product manager Patrick Clary share more in the YouTube video below.

Download the app from Google Play

Android Accessibility: Introducing Action Blocks

The Verge provides more information: “After you install the Action Blocks app, you set one up by choosing from a list of predefined actions or by typing in your own. It works via Google Assistant, so anything you can ask for with your voice can be typed in. After you test that it works, you can save it as a button on the home screen.

Importantly, you’ll have the option to put your own custom image on the button. Again, the purpose of the features isn’t to let productivity junkies make workflows; it’s to help people with cognitive disabilities achieve tasks on their phones. So setting a big photo of a family member to make a video call is an essential feature.”

Accessibility Maze Game

Maze game screen grab

If you want to learn about digital accessibility in a fun way, try the Accessibility Maze Game developed by The Chang School at Ryerson University in Ontario, Canada. It takes a bit of working out and you may not get through all the levels, but have a go!

When you have managed to get through the levels, there is a useful downloadable PDF ebook, “What you can do to Remove Barriers on the Web”, telling you all about the issues you will have explored during the Accessibility Maze Game. These are all related to the W3C Web Content Accessibility Guidelines, but presented in ten steps.

The ebook is available in an accessible format and has been provided under a Creative Commons licence (CC BY-SA 4.0).

SCULPT for Accessibility

SCULPT process thanks to Digital Worcester – Download the PDF infographic

Helen Wilson has very kindly shared her link to SCULPT for Accessibility. Usually we receive strategies that relate to students’ work, but in this case it is a set of resources that aims “to build awareness for the six basics to remember when creating accessible documents aimed at the wider workforce in a local authority or teachers creating learning resources.”

It seemed that at this time, whilst everything was going online due to COVID-19, this was the moment to headline the need to make sure all our work is based on the principles of accessibility, usability and inclusion. Jisc has provided a new set of guidelines relating to the public sector body accessibility regulations and providing online learning materials. AbilityNet is also offering useful links with more advice for those in Further and Higher Education.

Windows 10 support for Visual Impairment

YouTube online access

If you are supporting students, or want to learn more about the way Microsoft Windows 10 provides built-in assistive technologies to support visual impairment, Craig Mill and CALL Scotland have a blog on the subject, and Craig has made a YouTube playlist. All the videos have captions, and the transcripts are readily available.

The videos are short, bite-sized guides and cover the following topics:

  • Part 1: Customising the desktop using some simple adjustments in Windows 10.
  • Part 2: Magnifying information in apps – some useful hints and tips on zooming in and out of browsers and other apps.
  • Part 3: Customising Mouse Tools and Pointer – how to make changes to the Mouse Pointer using Windows ‘legacy’ tools.
  • Part 4: Using keyboard shortcut keys to increase the font size in Microsoft Word – improving speed and workflow.
  • Part 5 (a): Using Immersive Reading tools in Microsoft Word to customise the font / text and listen to it spoken aloud.
  • Part 5 (b): Using Learning Tools in Microsoft Edge Browser to customise font/text, layout and hear it read aloud.
  • Part 6: Introduction to Microsoft Ease of Access Tools Display Settings – how to ‘Make text size bigger’, ‘Make everything bigger’ and how to adjust the mouse pointer size and colour.
  • Part 7: Using Windows Magnifier – how to use Windows Magnifier in combination with other Ease of Access Display Settings such as ‘Make everything bigger’ etc.
  • Part 8: Colour filters – maximising computer accessibility for learners who experience colour blindness.
  • Part 9: High Contrast Filter – how to customise the colours of elements such as menu bars, backgrounds, buttons etc, in Windows.
  • Part 10 (a): Microsoft Narrator – an introduction to using screen reading with Windows Narrator.
  • Part 10 (b): Using Windows Narrator to navigate the desktop and Microsoft Word.