The Problem with Automated A11y Tools

Automated tools are great, but in some cases they can be a hindrance. Here we look at why, and at what can be done about it.

Here at Code Computerlove, we take accessibility very seriously. We are building an automated suite of tools into our pipelines to run checks as we code, catching accessibility issues before they can make it into our production codebase.

The problem is that automated tools don't catch everything, and in certain scenarios they cannot tell whether something is an issue at all.

We had a scenario where a third-party report marked certain criteria as failing, but our automated tests flagged them only as 'warnings'. The specific rule concerned the text of a set of buttons displayed next to each other. They all read 'Book now', but each sat in a container whose heading gave more detail on what was being booked. When tabbing through the page with a keyboard, however, the buttons are announced one after another, with no context as to what they actually do or where they go.

In our automated report this is a warning, but a visually impaired user tabbing through those buttons would have no idea what they were actually booking. The report we received was right to cite this as a fail, but only when you consider how a real person interacts with the page. This is the crux of the issue, and it is exactly where automated tools fall down.
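One common fix is to put the missing context into each button's accessible name, either with visually hidden text or by referencing the container's heading. A sketch (the card heading, ids, and the 'visually-hidden' utility class are illustrative, not taken from our codebase):

```html
<article>
  <h3 id="trip-paris">Weekend in Paris</h3>

  <!-- Option 1: extend the label with visually hidden text.
       Sighted users still see just "Book now"; screen readers
       announce "Book now, Weekend in Paris". -->
  <button>
    Book now<span class="visually-hidden">, Weekend in Paris</span>
  </button>

  <!-- Option 2: build the accessible name from the button text
       plus the card heading via aria-labelledby. -->
  <button id="trip-paris-book" aria-labelledby="trip-paris-book trip-paris">
    Book now
  </button>
</article>
```

Either way, tabbing through the page now announces which trip each 'Book now' button belongs to, which is the distinction the automated tool could not make on its own.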

Warnings that are not warnings

On the flip side of this, we were seeing hundreds of warnings where the tool was unable to test the contrast ratio of some links because they sit within an absolutely positioned element. This genuinely needs a manual test, but listing it as a warning (rather than, say, a notice) is unhelpful: a client, or anybody not well versed in accessibility, will flag it as an issue when it is really just a prompt to do a quick manual check of that element.

The most prominent issue with automated accessibility testing is that it is only ever likely to catch up to around 70% of issues. There is no alternative but to run manual tests alongside the automated ones to get a full picture of the accessibility issues affecting a site. This is where reports like the one we received at Code can feel confusing, because running the site through a tool like Pa11y produces inconsistent results by comparison.
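Pa11y also exposes its results programmatically: each issue in the `issues` array it returns has a `type` ('error', 'warning', or 'notice') and an HTML_CodeSniffer code, and the 'cannot determine contrast on an absolutely positioned element' warnings carry codes ending in '.Abs'. That makes it possible to triage those manual-check warnings out of the ordinary warning count. A minimal sketch (the sample issues and the '.Abs'-suffix triage rule are our own illustration, not a Pa11y feature):

```javascript
// Triage Pa11y issues: real errors fail the build, '.Abs' contrast
// warnings are reclassified as "manual check" items, and everything
// else stays a warning. The issue shape matches what
// `pa11y(url, { includeWarnings: true })` returns in `issues`.
function triage(issues) {
  const errors = issues.filter(i => i.type === 'error');
  const manualChecks = issues.filter(
    i => i.type === 'warning' && i.code.endsWith('.Abs')
  );
  const warnings = issues.filter(
    i => i.type === 'warning' && !i.code.endsWith('.Abs')
  );
  return { errors, warnings, manualChecks };
}

// Sample issues shaped like Pa11y output (the selectors are made up):
const sample = [
  { type: 'error',   code: 'WCAG2AA.Principle1.Guideline1_1.1_1_1.H37',     selector: 'img' },
  { type: 'warning', code: 'WCAG2AA.Principle1.Guideline1_4.1_4_3.G18.Abs', selector: 'a'   },
  { type: 'warning', code: 'WCAG2AA.Principle1.Guideline1_4.1_4_3.G18',     selector: 'p'   },
];

const { errors, warnings, manualChecks } = triage(sample);
console.log(errors.length, warnings.length, manualChecks.length); // 1 1 1
```

Surfacing the manual-check bucket separately keeps the report honest: the items still get looked at by a human, but they no longer inflate the warning count that a client sees.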

It's important to realise that these issues are born of actual people using and testing the site and reporting the problems they have. If somebody can't use or understand your website, particularly when the task is central to the website's goal (e.g. buying or booking something), then it should fail accessibility, because it isn't accessible to everyone trying to use it.

Author

Alex is a Principal Front End Engineer currently working at Choreograph, a WPP company. He has over 15 years of experience in the web development industry.