Artificial Intelligence and Accessibility: Turning Hope into Lasting Impact

Artificial intelligence (AI) now holds a central place in the technological imagination—somewhere between fascination and caution. For the accessibility community, and more broadly for anyone using digital tools with sensory, motor, or cognitive constraints, AI represents both a promise and a challenge. A promise to bring information closer and make it clearer and more actionable. A challenge to ensure the technology remains inclusive, reliable, and verifiable. Through the combined perspectives of Maxime Varin, David Demers, and Jean‑Baptiste Ferlet—accessibility professionals, two of whom are blind—this article offers a concrete framework to understand what AI changes, what it doesn’t, and what tech teams must do now to ensure accessibility advances rather than retreats.

AI as a bridge: making the inaccessible readable

For Maxime Varin, an A11y specialist and blind professional, AI is “a new toy” that’s already useful: text editing, captioning, summaries, and structured reports. Above all, it has a key virtue: producing text answers when the original source is inaccessible (for example, overly visual interfaces, poor markup, or complex PDFs). In practice, AI can act as a bridge: it digests information and returns content that a screen reader can work with, sparing users from exhausting journeys through poorly designed pages.

This bridge role extends to tasks notoriously cumbersome with a screen reader. Comparing two Word documents to isolate differences? AI lets you ask “show me only what changed,” “list new words,” or “highlight modified sections.” The gains in productivity and clarity are substantial. And the impact goes beyond blindness: people with motor, cognitive, or hearing limitations benefit from tools that summarize, rephrase, and structure—functions that reduce cognitive load and improve access to what matters.
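As an illustration, here is a minimal sketch of that “show me only what changed” workflow in TypeScript. The `askModel` helper and the prompt wording are assumptions for this example rather than any particular product’s API; any model client that accepts a text prompt would fit.

```typescript
// Minimal sketch: ask a language model to report only what changed between
// two document versions, as plain linear text a screen reader handles well.
// `askModel` is a hypothetical wrapper around whichever LLM client you use.
declare function askModel(prompt: string): Promise<string>;

export async function summarizeChanges(
  previousText: string,
  currentText: string
): Promise<string> {
  const prompt = [
    "Compare the two document versions below.",
    "List only what changed: new words, removed sentences, modified sections.",
    "Answer as a plain-text numbered list, with no tables or visual layout.",
    "--- PREVIOUS VERSION ---",
    previousText,
    "--- CURRENT VERSION ---",
    currentText,
  ].join("\n");

  return askModel(prompt);
}
```

The important detail is the output constraint: requesting a plain, linear list is what makes the answer comfortable to read with a screen reader.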

The accessibility overlay: a necessary ambition

David Demers, director of CNIB’s Access Labs and a blind professional, describes his experience: AI capable of describing landscapes or photos, interpreting posters, and extracting useful information from screens or non‑compliant sites. In other words, an “accessibility overlay” that conveys meaning regardless of flawed form.

This ambition comes with a requirement: AI must remain inclusive and accessible. The recent history of the Web is instructive. We went from an era where content—mostly text—was relatively accessible, to rich, dynamic, visual interfaces that too often left people behind. Repeating that drift with AI—letting it grow in complexity without guardrails—would be a major mistake. Accessibility cannot be a late add‑on: it’s a non‑negotiable quality criterion, embedded from design and measured continuously.

The black box and the principle of verifiability

Jean‑Baptiste Ferlet, an accessibility specialist and developer, highlights a technical reality every tech reader should keep in mind: AI outputs are open to interpretation, sometimes “hallucinated,” and therefore demand human verification. AI operates like a black box: you provide an input and receive an output, with little transparency about the steps in between. Powerful? Yes. Infallible? No.

Jean‑Baptiste ran a telling experiment: he asked AI for an “accessible carousel.” The first result was non‑compliant. It required adjustments, re‑testing, and validation. Hence the analogy of a “trainee who needs supervision”: AI is fast and prolific, but human expertise remains indispensable for setting criteria, controlling quality, and taking responsibility.

For a tech‑enthusiast audience, the lesson is clear:

  • Models generate plausible outputs, not guarantees of compliance.
  • Automated tests are useful, but manual testing and evaluation by users (including blind users) remain essential.
  • The cost of verification is an investment, not a burden: it secures AI’s real value.
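As a concrete illustration of the first two points, here is a hedged sketch of an automated audit built on the open-source axe-core engine. It flags machine-detectable issues such as missing labels, insufficient contrast, or invalid ARIA, but a clean report is only a starting point; manual testing with a screen reader remains necessary.

```typescript
import axe from "axe-core";

// Automated WCAG 2.x A/AA audit of the current document, for example inside
// a browser-based test or a jsdom test runner. A clean run does not prove
// compliance; it only rules out the issues a machine can detect.
async function auditPage(): Promise<void> {
  const results = await axe.run(document, {
    runOnly: { type: "tag", values: ["wcag2a", "wcag2aa"] },
  });

  for (const violation of results.violations) {
    console.log(
      `${violation.id} (${violation.impact ?? "impact unknown"}): ${violation.help}`
    );
    console.log(`  ${violation.helpUrl}`);
  }
}
```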

Production accelerator or problem amplifier? The tipping point

Used without mastery of fundamentals, AI can amplify existing issues: produce non‑compliant content faster, spread inaccessible patterns at scale, and multiply interfaces that “seem” right but fail on keyboard navigation, screen reader support, contrast, and heading/landmark logic.

Conversely, with trained teams and clear guardrails, AI becomes an accelerator:

  • Generating image alt text, followed by targeted human validation (see the sketch after this list).
  • Summarizing large documents with extraction of essential metadata (titles, sections, glossaries).
  • Comparing versions for content reviews and QA.
  • Rapid prototyping of accessible UI elements, then hardening through real‑world testing.
  • Clearer documentation tailored to assistive technology needs.
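For the first item in this list, the guardrail is easy to express in code: every AI-generated description starts life as a draft and is only published once a human reviewer approves it. The `describeImage` call below is a hypothetical model wrapper; the sketch shows the shape of the workflow, not a specific API.

```typescript
// Hypothetical model call returning a candidate description for an image.
declare function describeImage(imageUrl: string): Promise<string>;

interface AltTextDraft {
  imageUrl: string;
  candidateAlt: string;
  status: "pending_review" | "approved" | "rejected";
  reviewer?: string;
}

// The model proposes; it never publishes. Drafts always start pending review.
export async function draftAltText(imageUrl: string): Promise<AltTextDraft> {
  const candidateAlt = await describeImage(imageUrl);
  return { imageUrl, candidateAlt, status: "pending_review" };
}

// Only an explicit human decision moves a draft to "approved".
export function approveAltText(draft: AltTextDraft, reviewer: string): AltTextDraft {
  return { ...draft, status: "approved", reviewer };
}
```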

The tipping point isn’t technological; it’s organizational. What determines whether AI improves or degrades accessibility is team culture and discipline.

Action framework for tech teams: 12 concrete steps

Here’s a pragmatic plan to make AI a lever for accessibility—not a risk:

  1. Prompts with embedded accessibility criteria
    • Systematically include requirements: relevant ARIA roles, keyboard navigation, focus management, text alternatives, structured headings, AA/AAA contrast, readable error messages.
    • Require code examples and validation checklists in outputs (a prompt sketch follows this framework).
  2. Dual validation: automated + human
    • Combine automated audits with manual testing by specialists and blind users or screen reader users.
    • Define clear acceptance criteria: “no keyboard traps,” “consistent announcements in NVDA/JAWS/VoiceOver,” “logical reading order.”
  3. Accessibility point person in every squad
    • Designate an accessibility point person in each product team, trained and mandated to validate deliverables.
    • Allocate dedicated time budgets to prevent accessibility from becoming “optional.”
  4. Hardened design systems and components
    • Maintain a library of accessible components (visible focus, explicit labels, coherent states).
    • Add unit and functional tests specific to accessibility (tab order, required aria-* attributes); see the test sketch after this framework.
  5. Documentation oriented toward assistive tech
    • Document expected behaviors for screen readers, keyboard shortcuts, ARIA announcements, and transitions (modals, carousels, menus).
    • Describe “accessibility intent”: why a given role is chosen; what feedback is announced and when.
  6. Traceability of prompts and decisions
    • Version prompts, AI outputs, human corrections, and the rationale for adjustments.
    • Use this archive as learning material to improve quality and consistency.
  7. Impact metrics
    • Track indicators: number of A11y issues per release, correction time, success rates in screen reader testing, user satisfaction (accessibility‑focused NPS).
    • Put these metrics on par with performance and security.
  8. Inclusion of affected users
    • Involve blind users and other profiles in usability testing.
    • Pay for testing time and integrate feedback into a prioritized backlog.
  9. AI risk governance
    • Map risk scenarios (hallucinations, alteration of critical text, misleading descriptions).
    • Establish usage thresholds: when is human review mandatory? When should AI not decide alone?
  10. Responsible “AI overlay”
    • Deploy agents that transform inaccessible content into readable formats (text, structured audio), but enforce guardrails: human corrections, clear disclaimers, and signaling uncertainties.
    • Prevent AI from masking defects: it can help users get around barriers, but must also push for source corrections.
  11. Continuous training
    • Train developers, designers, and writers: WCAG, accessible patterns, cognitive ergonomics, model limitations.
    • Share common “anti‑patterns”: non‑navigable carousels, focus‑trapping modals, tables without headers, unlabeled icons.
  12. Culture of care
    • Treat accessibility as a core quality attribute, on par with performance and security.
    • Celebrate A11y fixes, make the work visible, and value contributions.
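As referenced in step 1, here is one possible prompt skeleton with accessibility criteria baked in. The wording is an assumption to be adapted to your own stack and design system; the point is that the requirements and the expected checklist travel with every generation request.

```typescript
// A reusable prompt prefix so accessibility requirements are never optional.
// The wording is illustrative; adapt the criteria to your own design system.
export const ACCESSIBILITY_PROMPT_PREFIX = `
You are generating UI code that must meet WCAG 2.1 AA.
Hard requirements:
- Full keyboard operability, visible focus, no keyboard traps.
- Correct roles and states (prefer native HTML elements over ARIA).
- Text alternatives for all non-text content.
- Logical heading structure and landmarks.
- Color contrast of at least 4.5:1 for normal text.
- Clear, programmatically associated error messages.
Deliverables:
1. The code example.
2. A validation checklist mapping each requirement to how the code meets it.
`;

export function buildPrompt(task: string): string {
  return `${ACCESSIBILITY_PROMPT_PREFIX}\nTask: ${task}`;
}
```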
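And for step 4, a minimal sketch of an accessibility-specific unit test using the jest-axe matcher (one common option among several). It assumes a Jest environment with jsdom and framework-agnostic markup; a real suite would render the actual component from the design system.

```typescript
import { axe, toHaveNoViolations } from "jest-axe";

expect.extend(toHaveNoViolations);

test("icon button exposes an accessible name and passes axe", async () => {
  // Render the markup under test; plain HTML keeps the sketch framework-agnostic.
  document.body.innerHTML = `
    <button type="button" aria-label="Close dialog">
      <svg aria-hidden="true" focusable="false" width="16" height="16"></svg>
    </button>
  `;

  const button = document.querySelector("button")!;
  expect(button.getAttribute("aria-label")).toBe("Close dialog");

  // Automated axe audit of the rendered fragment.
  expect(await axe(document.body)).toHaveNoViolations();
});
```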

Further resources