Immersive Reader working within Virtual Learning Environments

There are many ways to use Immersive Reader, and LexDis already has strategies for using this read-aloud and text-support app on mobile, and as a set of immersive reading tools with OneNote on Microsoft 365.

However, Ros Walker recently sent an email to the JISC Assistive Technology list about some updates that have occurred. One important point was her note about the app working with virtual learning environments such as Blackboard Ally's alternative formats: it is now possible in Moodle to create an ‘Immersive Reader’ option as an alternative format for most files added to a Moodle course.

uploaded file with link to Immersive Reader icon
Image thanks to Ros Walker – uploaded file with link to Ally

The student’s view of the Moodle course will allow them to select the A (Ally logo) at the end of the title of the file they want, as well as being presented with all the accessibility options. The University of Plymouth has provided guidance illustrating how this happens from the staff and student perspective, as well as accessibility checks.

Introduction to Ally and Immersive Reader for Moodle

Immersive Reader in Word highlighting part of speech, colour background changes and text style.

Ros has also been kind enough to link to her video about Immersive Reader in Word and how she has worked with PDFs to make the outcome a really useful strategy for students looking for different ways to read documents.

“If you haven’t seen the Immersive reader before, it is available in most Microsoft software and opens readings in a new window that is very clean and you can read the text aloud. (The Immersive Reader)”

Thanks to Ros Walker, University of St. Andrews

Dyslexia Awareness Month/Week/Day – Reading and Notetaking Strategies

Across the world, dyslexia awareness is being championed in various ways during October: there is an awareness week in the UK and a month in the USA. Looking around at all the expertise available for students, it seemed a good moment to link to ideas around notetaking, as this is a subject that we feel is very important for college and university students.

Studies on learning have shown that actively engaging with a topic by listening and then summarising what has been said helps understanding and provides a way of remembering content in the future.

Dominik Lukes has been studying the wider aspects of notetaking, including reading and writing, in his collection of web pages, explained in a recent JISC presentation about the Oxford University Reading and Writing Innovation Lab, where you can also download the transcript. Below is the SlideShare version of the video slides, which allows you to pause and study the various technologies that might help in the process of reading and notetaking.

Dominik has also developed a series of web pages, including Academic Productivity: Tools and Strategies, and a 2022 Dyslexia Awareness Week challenge that included really useful tips:

  1. About dyslexia and its immediate impacts on fluent and accurate reading and writing
  2. Structured, undistorted text is dyslexia friendly for everyone
  3. Listening to text reduces the processing overload
  4. Dictating instead of writing can reduce the spelling overhead
  5. Some other things that make reading easier for everyone

But what happens if there is just too much information, it all becomes rather overwhelming, and anxiety creeps in, as John Hicks describes in his blog “What Can It Feel Like To Take Notes When Dyslexic?”. John also adds some useful strategies.

bulb, two pencils and a rubber on a piece of paper.

However, on the whole, notetaking applications do not automate the process of synthesising the key points. Machine learning and natural language processing have made automatic summarisation easier in recent years, and some tools provide key words, but there is still the need for human intervention!

What is considered a key word by an algorithm may not necessarily be what an individual wants to focus on! Also, what is missed may be rather important! Therefore, evaluating the different summarisation systems would seem to be vital in order for us to improve the systems. We are discovering that this is not easy!

Nevertheless, our aim is to try to make this an automatic process in a way that can show that automatic summarisations with key wording have the potential to be consistent, fluent and relevant. But for now more reading and note taking about the subject is needed!

References

Lukes, Dominik (2021) Towards a digital reading lab: discovering and learning new affordances. Journal of Academic Development and Education Special Edition (Becoming Well Read). ISSN 2051-3593

Salton, G. et al., 1997. Automatic Text Structuring and Summarization. Information Processing & Management, 33(2), pp.193-207.

Fabbri, A.R., Kryściński, W., McCann, B., Xiong, C., Socher, R. and Radev, D., 2021. SummEval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9, pp.391-409.

W3C WCAG 2.2 is on its way with interesting additions!

globe with social media icons.

For those interested in web content accessibility, the WCAG 2.2 guidelines will have some newly added Success Criteria (SCs) that build on WCAG 2.1 when they are published in September 2022. The additions aim to tackle some of the latest barriers found in various web services. It is not easy with multifactor authentication changes, dynamic data-driven pages and the different types of web and mobile apps.

Ensuring the maintenance of conformance levels and checking for potential accessibility issues has become harder. If you are someone who spends time evaluating web content and has become reliant on automated checkers, it is worth being aware of their limitations! Watch out for automated results that give you a 90-100% score! Can they reach the parts of multifactor authentication that we have to use? Are they including visual focus checks to track when keyboard access is hidden? These are just two of the many issues that can catch us out.

Web Content Accessibility Guidelines (WCAG) 2.2 W3C Editor’s Draft (25 August 2022) – the full document

The following Success Criteria are new in WCAG 2.2:

Of all the new Success Criteria, it is felt that visual focus is not always understood, but its links with keyboard accessibility have always been important. WebAIM talk about focus indicators and provide several tips on the subject, before linking out to an ARIA authoring practices document written by the W3C Web Accessibility Initiative (WAI) team that may help developers. WAI also offer advice about mobile accessibility checks.

help keyboard button

Finally, the Bureau of Internet Accessibility have provided a blog on 5 Quick Ways to Check Your Site Against New WCAG 2.2 Standards.

TPGi also offer a guide to the “new SCs in WCAG 2.2, describing their requirements in plain language, and discussing how to meet them.”

Using Google Docs and Sheets with JAWS, NVDA or most other screen readers

Google letter G

This strategy is not new, but may be useful if you are using a screen reader: there are tricks that may be missed if you are not aware of the changes needed when using Google Docs or Sheets, because they work within a browser such as Chrome, Edge or Firefox.

The blog about ‘Google Docs and Sheets with a Screen Reader’ comes from the Perkins School for the Blind in the USA, and Mark Babaita added an easy tip that might also help those testing the accessibility of the content within a doc or sheet:

If you hear JAWS move to a heading on the page and read that heading, you know that the virtual cursor is still active. Use Insert + Z to toggle the virtual cursor on and off.


July 11th, 2018

The authors have added useful links to the Freedom Scientific free webinars and printed resources covering a variety of topics, including using Google Drive, Docs and Sheets with JAWS and MAGic.

Exploring the embedding of accessible image descriptions into image metadata

Time to move on to images as part of the accessible package we can offer students when working online! If you are a graphic designer or photographer using tools to embed accessibility tags, please check that what I am saying makes sense!

Many in the world of digital accessibility know the work of the W3C WCAG on image accessibility, and are used to adding alternative text and long descriptions to inform users about the contents of an image, diagram, photograph etc., in particular screen reader users. These tags are added by those who upload the image to a web page, document or other publication. Matt Deeprose (University of Southampton) recently posted some videos on the subject: “What is alternative text? How do I write it for images, charts, and graphs?” These videos are really helpful if you are a content provider.

But what about enabling the designer or photographer to add the ‘alt text’ and ‘long desc’ to their image as they save it? This may not suit all situations, but it has the potential to ensure accessibility ‘metadata’ (data about the image, in this case) is always in place when sharing takes place. The data can be adapted later if necessary, and those uploading images can still add tags if the original metadata cannot be read by certain screen readers or applications.

My journey into improving image accessibility all started when I wanted to add some metadata to pictographic symbols and was exploring how to work with accessible Scalable Vector Graphics (.svg files) in CorelDRAW: they are much more responsive to size changes, Deque had highlighted their accessibility advantages, and most browsers now support the .svg file format.

graphics designer working at a desk on drawings – image metadata with URL, name, size etc., but no alt tag or description.
https://jimpl.com/ offering online image metadata information – no list item or property telling us that there was no alt text or description, just no location!

I needed to learn why I was failing to find a way of embedding the additional accessibility metadata. Chris OShea from PPA Training introduced me to a set of editable properties that are available, and then shared the link to Adobe Bridge to enable their applications to carry the accessibility tags. The secret is to find a format that is part of a recognised standard, for example the Extensible Metadata Platform (XMP) standard, so it can also be machine-read by various digital asset management tools.

In 2021, the International Press Telecommunications Council (IPTC) Photo Metadata Standard “included the two essential properties: Alt Text (Accessibility) and Extended Description (Accessibility).” The IPTC blog announcing this news said that

“IPTC’s new accessibility properties will make it easier for platforms and software to comply with WCAG requirements and deliver images that are inclusive for everyone. Embedding accessible image descriptions into the photo metadata will make it possible for alt text and extended descriptions to travel wherever the image goes on the web or in books or other documents provided as EPUBs.”

IPTC October 27th 2021

However, this is not going to happen overnight because, as Chris and I discovered when testing the procedures, not all software companies allow the accessibility metadata to be added to their graphics packages in a way that can be read by a screen reader. Richard Orme, CEO of the DAISY Consortium, kindly got in touch about his paper on “Making use of IPTC alt text accessibility metadata”, where I learnt that, at the moment, the ExifTool by Phil Harvey is the stepping stone that we need!

AVPreserve Exiftool Tutorial Part 1 on YouTube (22 Nov 2013, 4.45 mins) https://youtu.be/CWcMrAfhlKI

ExifTool is not a new tool; it has been used by those setting up photographic repositories for many years, and neither is the discussion about using it to add accessibility metadata new, as the NCAM Potential Use of Image Description Metadata for Accessibility paper (2011) illustrates. A 2021 set of ExifTool instructions by Chris Blackden describes how metadata can be seen by everyone, and removed or added. There is a very helpful video, but as yet he does not describe the addition of alt text and long descriptions.
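The viewing and removing that Chris Blackden describes can be sketched in a couple of ExifTool commands. This is a minimal sketch: it assumes exiftool is installed and on the PATH, and ‘photo.jpg’ is a hypothetical file name.

```shell
# A minimal sketch, assuming exiftool is installed; 'photo.jpg' is a hypothetical file.
# -a also lists duplicate tags, and -G1 prefixes each tag with its group (EXIF, XMP, IPTC...).
EXIFTOOL_LIST="exiftool -a -G1 photo.jpg"
# -all= removes every writable tag; by default ExifTool keeps a backup named photo.jpg_original.
EXIFTOOL_STRIP="exiftool -all= photo.jpg"

# Only run the commands when exiftool and the file actually exist:
if command -v exiftool >/dev/null 2>&1 && [ -f photo.jpg ]; then
  $EXIFTOOL_LIST
  $EXIFTOOL_STRIP
fi
```

Stripping metadata in this way removes any embedded accessibility descriptions too, which is worth remembering if images pass through a “clean-up” step before publication.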

So, for those wishing to try ExifTool with a set of command lines, Phil Harvey has examples on his ExifTool pages, and Richard Orme has offered examples for adding the accessibility metadata:

ExifTool command line utility

Rename the executable to exiftool for command line use

To set metadata use:

exiftool filename -AltTextAccessibility="Your alt text here."

exiftool filename -extDescrAccessibility="Your extended description text here."

To read the metadata use:

exiftool filename -AltTextAccessibility

exiftool filename -extDescrAccessibility
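The same two properties can also be set across a whole batch of images in a small script. This is a sketch, assuming exiftool is installed; the folder name, file pattern and descriptions are hypothetical.

```shell
#!/bin/sh
# Sketch: add the two IPTC accessibility properties to every JPEG in a folder.
# Assumes exiftool is installed; './images' and the descriptions are hypothetical.
add_accessibility_tags() {
  # $1 = image file, $2 = alt text, $3 = extended description
  exiftool "$1" \
    "-AltTextAccessibility=$2" \
    "-extDescrAccessibility=$3"
}

if command -v exiftool >/dev/null 2>&1; then
  for f in ./images/*.jpg; do
    [ -f "$f" ] || continue
    add_accessibility_tags "$f" "Alt text for $f" "Extended description for $f."
  done
fi
```

ExifTool resolves these short tag names to the XMP-iptcExt group, and keeps a backup of each original file unless -overwrite_original is added.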

Hopefully, soon all graphic design software packages will include the additional properties for accessibility metadata and digital asset management tools will support the IPTC standard, so that users of assistive technology such as screen readers and text to speech apps will be able to find the accessibility tags when available!

It all seems much more complicated than I first thought, whilst artificial intelligence and machine learning have moved the goalposts into new realms of digital image recognition. However, just allowing an image to be saved with embedded accessibility information did not seem such a knotty problem when I started on the journey!

A tangle of wires on a telegraph pole

Multifactor Authentication types across 50 universities

When considering the different types of Multifactor Authentication (MFA), it is clear that many could be a challenge for students with a wide range of disabilities. However, when you add the use of assistive technologies and customisation, or potential personalisation, the barriers begin to come down; that is, as long as the actual website or app hosting the required verification of a sign-up or log-in is accessible.

With these caveats in place, it seemed that as long as students were provided with three or more choices it would be possible to navigate MFA. That thought led to a mini survey of around a third of the universities in the UK to see what was on offer.

graph of MFA choices in 50 universities
Vertical axis has the MFA options and the horizontal axis is the number of universities offering that type of option

Several universities offer a password as their main login method, with additional security for certain more sensitive areas. 42 out of 50 universities offer apps, but only two appear to provide two options for the type of app, such as Microsoft and Authy on a desktop, which can be very helpful for assistive technology users who do not have smartphones or find their desktop AT easier to use. 8 universities offer hardware tokens and 6 offer at least 5 options, but 9 had no alternatives that could be easily found, and 14 universities made searching for support difficult by not having easy-to-reach information pages.

The Microsoft Authenticator app, a text message to a mobile phone, or a call to either a landline or mobile, were the most common verification methods after a login email and password had been generated.

So in summary…

  • many students have limited options if they do not want to, or cannot, use the Microsoft Authenticator app, or do not have a smartphone.
  • there are rarely more than two options if using an app is not possible, and one of them includes the use of a landline, which may not always be possible in a college or university setting.
  • it often took more than ‘three clicks’ or selection choices to reach any supporting materials, and these rarely mentioned the use of assistive technologies. However, there was usually a contact form or email address available.