Accessibility Guidelines 3.0
Last updated: 19th November 2021
What is WCAG 3.0 / Silver?
How does it differ from WCAG 2.X?
The WCAG 2.0 guidelines were published in 2008 and are the primary guidelines used throughout the world to assess the accessibility of a website.
Trouble is, 2008 was a long time ago, light years in technology terms. The guidelines can be clarified, but not changed, as they are an agreed-upon standard. So they have become very outdated and, in my opinion, quite hard to follow.
While the 2.0 guidelines couldn't be amended, it was becoming clear that they had some significant gaps: accessibility issues on modern websites were slipping through under WCAG 2.0.
So, in 2018, the W3C published WCAG 2.1. This is essentially the same as WCAG 2.0, but with additional criteria to cover the gaps.
There's also a WCAG version 2.2 (currently in working draft status), which adds yet more criteria.
Versions 2.1 and 2.2 only solved a tiny portion of the issues. The WCAG guidelines are still confusing, and the way the levels work is not ideal.
They needed a complete overhaul, and in late 2016 a taskforce was put together to create a brand new version, WCAG 3.0.
At the moment, the entire project is codenamed Silver.
Why Silver? Well, WCAG stands for "Web Content Accessibility Guidelines". As the scope of the guidelines has grown beyond just web content, we drop the WC and are left with AG. Ag is the chemical symbol for Silver.
When is WCAG 3.0 going to be ready?
While the team's original project plan was targeting late 2022, the current documentation is targeting completion in "a few more years", with 2024 referenced in one project plan.
Though the W3C does reassure us that, "WCAG 2 will not be deprecated, for at least several years after WCAG 3 is finalized".
Will WCAG 3.0 have Level A, AA and AAA?
No. The team are working on a different model for measuring conformance. There will still be a number of criteria to meet though.
At the moment, the level A, AA or AAA model is pretty good for making statements about whether a site meets legal guidelines. What it doesn't do is provide a decent assessment of how accessible a site is on a sliding scale.
In WCAG 2.0, a site could meet all of the Level AA criteria and 24 of the 25 Level A criteria, yet not meet any of the levels at all. The site is forced to declare that it does not meet even the minimum standard of Level A, when in reality it would likely be very accessible for the majority of people.
More nuance is needed to grade the 98% of sites that do not currently meet WCAG Level AA, as they will vary hugely in their accessibility provision.
WCAG 3.0 scoring, rating and levels:
Bronze, Silver and Gold
There are three different levels of compliance in AG 3.0, like the level A, AA and AAA of WCAG 2. In AG 3.0, they will be called Bronze, Silver and Gold.
Interestingly, WCAG 2.1 AA compliance will only equate to Bronze under WCAG 3.0. Presumably this means that sites currently meeting Level A wouldn't comply with Bronze, and would no longer be considered accessible.
Scoring and Rating
WCAG 3.0 uses a rating / scoring system. This is the current model for how it works:
- Elements of the site are tested. For example, all of the images on the site are checked to see if they have a text alternative.
- This gives a percentage of images that pass the test: the score.
- The guidelines map this to a rating, between 0 (Poor) and 4 (Excellent). To gain a rating of 4 for text alternatives, it's suggested that a minimum of 95% of images pass.
- All of the 0-4 ratings are then averaged.
There's a bit more complexity though. Each of the tests carried out are assigned to a "functional category", loosely related to the people that they are trying to make the content accessible to. For example, one of them is "Cognitive - Language & Literacy".
For a site to be considered Bronze under WCAG 3.0, it must have an average rating of at least 3.5 overall, and an average of at least 3.5 in every functional category.
This ensures that the site is broadly accessible to everyone, and a subset of people are not excluded.
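To make the model above concrete, here's a minimal sketch in Python. The 95%-for-a-rating-of-4 figure and the 3.5 Bronze threshold come from the draft; the other rating boundaries, the function names, and the exact averaging details are my own assumptions for illustration, not part of the specification.

```python
# Hypothetical sketch of the draft WCAG 3.0 scoring model.
# Only the ">= 95% -> rating 4" boundary and the 3.5 Bronze minimum
# are taken from the draft; the other boundaries are invented.

def rating_from_pass_rate(pass_rate: float) -> int:
    """Map a test's pass percentage (0-100) to a 0 (Poor) - 4 (Excellent) rating."""
    if pass_rate >= 95:
        return 4
    if pass_rate >= 80:
        return 3  # illustrative boundary
    if pass_rate >= 60:
        return 2  # illustrative boundary
    if pass_rate >= 40:
        return 1  # illustrative boundary
    return 0

def meets_bronze(scores_by_category: dict[str, list[float]]) -> bool:
    """scores_by_category maps a functional category (e.g.
    'Cognitive - Language & Literacy') to the pass percentages of
    the tests assigned to it."""
    category_averages = {
        category: sum(rating_from_pass_rate(p) for p in pass_rates) / len(pass_rates)
        for category, pass_rates in scores_by_category.items()
    }
    all_ratings = [rating_from_pass_rate(p)
                   for pass_rates in scores_by_category.values()
                   for p in pass_rates]
    overall = sum(all_ratings) / len(all_ratings)
    # Bronze requires at least 3.5 overall AND in every functional category.
    return overall >= 3.5 and all(avg >= 3.5 for avg in category_averages.values())
```

The key point the per-category check captures: one badly failing category (say, everything under "Hearing and Auditory") sinks the whole conformance claim, even if the overall average would otherwise scrape past 3.5.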
Interestingly, Silver and Gold levels don't just require higher averages. They require some level of "holistic testing", which is defined as, "assistive technology testing, user-centered design methods, and both user and expert usability testing".
It sounds like a user group would potentially have to be involved for these levels, rather than just an accessibility assessment by a specialist.
Details here are pretty vague at the moment, with a comment simply stating, "Use of holistic tests to meet this level will be further explored in future drafts".
Like Level AAA in the current system, it's looking like Silver and Gold levels will be mainly undertaken by specialist websites, not the majority.
Accessibility testing of huge websites can be very time consuming. WCAG 3.0 will likely have some guidance around how much of a site needs to be tested.
There are some notes around ensuring that essential functions and high traffic areas are tested, but only manually testing a percentage of the remaining pages.
For example, a site with 100 to 1000 pages would have to have automated tests run over all pages, but manual checking of only the core functions and then 10% of the remaining non-essential pages.
This guidance is still being debated, as it needs a clear definition of what constitutes a core/essential feature.
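The sampling arithmetic from the example above is easy to sketch. The 10% figure is from the draft's 100-to-1000-page example; the function name, the rounding-up behaviour, and the idea of adding essential pages to the sample are my own assumptions.

```python
import math

def manual_sample_size(total_pages: int, essential_pages: int,
                       sample_fraction: float = 0.10) -> int:
    """Pages needing manual checks under the draft sampling guidance:
    every essential/core page, plus a fraction of the remaining
    non-essential pages (10% in the 100-1000 page example).
    Rounding the fraction up is an assumption."""
    remaining = total_pages - essential_pages
    return essential_pages + math.ceil(remaining * sample_fraction)

# e.g. a 600-page site with 20 essential pages:
# 20 + ceil(580 * 0.10) = 20 + 58 = 78 pages checked manually
```

Automated tests would still run over all 600 pages; only the manual effort is sampled down.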
Processes and Views
WCAG 3.0 defines processes and views, with a process being, "A sequence of steps that need to be completed in order to accomplish an activity / task from end-to-end." and a view largely corresponding to a web page.
It's recommended that conformance is assessed for each process in turn, and the test methods reflect this. It's actually a pretty useful way of looking at a website and approaching the assessment.
Conformance Over Time
The working group are considering whether accessibility ratings should degrade in some way over time.
Is a rating obtained 5 years ago still as valid today?
Probably not, but this has to be balanced with an understanding that accessibility testing can be quite time consuming. It would be prohibitive to force organisations to re-test every single year.
Rather than ratings decreasing over time, it's been agreed that the date of assessment should be stated alongside the rating, along with a version number of the software where relevant.
This provides some additional insight into how accurate the conformance claim may be at the current time.
How the guidelines are structured
WCAG 3.0 has a completely different structure. We no longer have Success Criteria, but Outcomes, Methods and Tests.
It's pretty complicated, so I've summarised in a diagram:
I'll run through each of these components in more detail.
Guidelines
Largely the same concept as in WCAG 2.X.
For example, in WCAG 2.X you have the guideline, "Guideline 1.2 Time-based Media" and then within that sit the Success Criteria, such as "Success Criterion 1.2.1 Audio-only and Video-only (Prerecorded)".
Guidelines are essentially a grouping of similar areas to test.
Functional Categories
A functional category is a new concept for WCAG 3.0.
They are groups of the functional needs of users. There are 14 in the current draft, and they include, "Hearing and Auditory" and "Attention".
The idea is that the rating system uses the functional categories to ensure no specific set of users will be overlooked.
Outcomes
An outcome is most comparable to a success criterion in WCAG 2.X.
However, according to the latest draft, there are some key differences planned:
Outcomes will be:
- "More user-need oriented instead of technology oriented;
- More granular, so there will be more of them; and
- More flexible to allow more tests than the true/false statements of WCAG 2.X."
An outcome relates to a collection of functional categories, to show the people that the outcome is trying to assist.
Outcomes are rated between 0 and 4, and the rules for how this rating is determined are specific to each outcome.
The current draft doesn't seem to have numbered the outcomes - I hope they do, because it makes them much easier to reference!
Each outcome may have one or more critical errors. These are issues that are considered so critical that the entire outcome must be rated a zero, regardless of other accessibility efforts.
An example related to video captioning is, "Any video without captioning that is needed to complete a process. For example, an education site with a video that a student will be tested on".
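The critical-error rule is simple but worth stating precisely: even one critical error zeroes the outcome's rating. A two-line sketch (function name is hypothetical):

```python
def outcome_rating(base_rating: int, critical_errors: int) -> int:
    """A single critical error (e.g. an uncaptioned video that a user
    needs to complete a process) forces the whole outcome to a rating
    of zero, regardless of how well its other tests scored."""
    return 0 if critical_errors > 0 else base_rating
```

So a site could pass 99% of its text-alternative tests and still score 0 for that outcome if one critical error is present.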
Methods
The outcome has one or more methods, which are sort of areas or concepts to test. They aren't specific enough to actually be tests though.
For example, within the "Text alternatives available" outcome, there is a method for decorative images, and another for images of text.
The methods are where the majority of the explanations lie: there is an introduction, examples to read through, and links to other references for more information.
Tests
The methods have one or more tests, which are the sections that actually go into technical detail on specific things to test and how they are scored.
Tests are given scores, which are used to determine the overall rating of the outcome.
Drafts of the Guidelines
The WCAG 3.0 team have started writing the first drafts of a few of the guidelines.
There are two versions floating around, the latest published working draft, which is the content officially available for review, and the editor's draft, which is a bit more up to date.
They are definitely a work in progress; I'm not sure any of the outcomes has every single test fully written up, but it's a clear indication of the intent and the structure.
I do find the structure too fractured and complex currently. Hopefully this is something the team will work on.
If you'd like to provide feedback to the team, add an issue to the Silver GitHub page.
Watch this space
I am going to be adding more content here as more about Silver is known.
Eventually, as we reach the first official draft of all the guidelines, I will put together a full course on how to meet them.