Artificial intelligence (AI) now holds a central place in the technological imagination, somewhere between fascination and caution. For the accessibility community, and more broadly for anyone using digital tools under sensory, motor, or cognitive constraints, AI represents both a promise and a challenge: a promise to make information more available, clearer, and more actionable; a challenge to ensure the technology remains inclusive, reliable, and verifiable. Drawing on the combined perspectives of Maxime Varin, David Demers, and Jean‑Baptiste Ferlet, three accessibility professionals, two of whom are blind, this article offers a concrete framework for understanding what AI changes, what it doesn't, and what tech teams must do now so that accessibility advances rather than retreats.
For Maxime Varin, an A11y specialist and blind professional, AI is “a new toy” that’s already useful: text editing, captioning, summaries, and structured reports. Above all, it has a key virtue: producing text answers when the original source is inaccessible (for example, overly visual interfaces, poor markup, or complex PDFs). In practice, AI can act as a bridge: it digests information and returns content that a screen reader can work with, sparing users from exhausting journeys through poorly designed pages.
This bridge role extends to tasks notoriously cumbersome with a screen reader. Comparing two Word documents to isolate differences? AI lets you ask “show me only what changed,” “list new words,” or “highlight modified sections.” The gains in productivity and clarity are substantial. And the impact goes beyond blindness: people with motor, cognitive, or hearing limitations benefit from tools that summarize, rephrase, and structure—functions that reduce cognitive load and improve access to what matters.
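To make the idea concrete, here is a minimal sketch, not taken from the interviews, of the kind of request Maxime describes: handing two already-extracted text versions of a document to a language model and asking for only the differences, returned as plain text a screen reader can traverse. It assumes the OpenAI chat-completions REST endpoint and Node 18+ for built-in `fetch`; the model name and prompt wording are illustrative.

```typescript
// Sketch: ask a language model to report only the differences between two
// document versions, so a screen reader user gets a short text answer
// instead of a visual side-by-side diff. Assumes the documents have already
// been extracted to plain text (e.g. from .docx).
async function describeChanges(before: string, after: string): Promise<string> {
  const prompt = [
    "Compare the two document versions below.",
    "List ONLY what changed: added, removed, or reworded passages.",
    "Answer as a short, plainly structured text list.",
    "--- VERSION A ---", before,
    "--- VERSION B ---", after,
  ].join("\n");

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumption: any capable text model works here
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content; // the model's text-only answer
}
```

The design choice is the point: the answer comes back as short, structured text, the format assistive technologies handle best, rather than as a visual comparison the user would have to scan element by element.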
David Demers, director of CNIB’s Access Labs and a blind professional, describes his experience: AI that can describe landscapes or photos, interpret posters, and extract useful information from screens or non‑compliant sites. In other words, an “accessibility overlay” that conveys meaning even when the form is flawed.
This ambition comes with a requirement: AI must remain inclusive and accessible. The recent history of the Web is instructive. We went from an era where content—mostly text—was relatively accessible, to rich, dynamic, visual interfaces that too often left people behind. Repeating that drift with AI—letting it grow in complexity without guardrails—would be a major mistake. Accessibility cannot be a late add‑on: it’s a non‑negotiable quality criterion, embedded from design and measured continuously.
Jean‑Baptiste Ferlet, an accessibility specialist and developer, highlights a technical reality every tech reader should keep in mind: AI outputs are open to interpretation, sometimes “hallucinated,” and therefore demand human verification. AI operates like a black box: you provide an input and receive an output, with little transparency about the steps in between. Powerful? Yes. Infallible? No.
Jean‑Baptiste ran a telling experiment: he asked AI for an “accessible carousel.” The first result was non‑compliant. It required adjustments, re‑testing, and validation. Hence the analogy of a “trainee who needs supervision”: AI is fast and prolific, but human expertise remains indispensable for setting criteria, controlling quality, and owning responsibility.
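To show what that supervision looks like in practice, here is a minimal sketch, following the WAI-ARIA Authoring Practices carousel pattern, of the kinds of fixes a human reviewer typically has to add to generated carousel markup. The class names, label text, and `visually-hidden` CSS class are assumptions for the example, not the markup from Jean‑Baptiste's test.

```typescript
// Sketch: corrections a reviewer commonly adds to an AI-generated carousel,
// per the WAI-ARIA Authoring Practices. Assumes a .carousel container
// holding .slide children; names are illustrative.
function makeCarouselAccessible(root: HTMLElement): void {
  const slides = Array.from(root.querySelectorAll<HTMLElement>(".slide"));
  let current = 0;

  // 1. Identify the widget to assistive technology.
  root.setAttribute("role", "group");
  root.setAttribute("aria-roledescription", "carousel");
  root.setAttribute("aria-label", "Featured content");

  // 2. Label each slide with its position so context is never purely visual.
  slides.forEach((slide, i) => {
    slide.setAttribute("role", "group");
    slide.setAttribute("aria-roledescription", "slide");
    slide.setAttribute("aria-label", `${i + 1} of ${slides.length}`);
  });

  // 3. Announce slide changes politely instead of relying on sight.
  const live = document.createElement("div");
  live.setAttribute("aria-live", "polite");
  live.className = "visually-hidden"; // assumed off-screen-only CSS class
  root.appendChild(live);

  function show(index: number): void {
    current = (index + slides.length) % slides.length;
    slides.forEach((s, i) => (s.hidden = i !== current));
    live.textContent = `Slide ${current + 1} of ${slides.length}`;
  }

  // 4. Keyboard support: arrow keys move between slides.
  root.tabIndex = 0;
  root.addEventListener("keydown", (e: KeyboardEvent) => {
    if (e.key === "ArrowRight") show(current + 1);
    if (e.key === "ArrowLeft") show(current - 1);
  });

  show(0);
}
```

A production version would also need a visible pause control if slides auto-rotate, one of the requirements generated code most often omits.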
For a tech‑enthusiast audience, the lesson is clear:
Used without mastery of fundamentals, AI can amplify existing issues: producing non‑compliant content faster, spreading inaccessible patterns at scale, and multiplying interfaces that “seem” right but fail on keyboard navigation, screen reader support, contrast, and heading/landmark logic.
Conversely, with trained teams and clear guardrails, AI becomes an accelerator.
The tipping point isn’t technological; it’s organizational. What determines whether AI improves or degrades accessibility is team culture and discipline.
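One concrete guardrail any team can automate is checking whether AI-suggested color choices actually meet WCAG contrast requirements, one of the failure points named above. Here is a minimal sketch using the WCAG 2.x relative-luminance and contrast-ratio formulas; it assumes simple #rrggbb hex values and checks against the AA threshold of 4.5:1 for normal text.

```typescript
// Sketch: automated WCAG 2.x contrast check for a foreground/background
// pair, e.g. wired into CI to vet AI-generated styles before they ship.
function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // Linearize each sRGB channel per the WCAG formula.
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)]
    .sort((a, b) => b - a); // lighter luminance first
  return (hi + 0.05) / (lo + 0.05);
}

// Example: a light-gray-on-white pairing that "seems" fine visually.
const ratio = contrastRatio("#999999", "#ffffff");
console.log(ratio.toFixed(2), ratio >= 4.5 ? "passes AA" : "fails AA");
// Prints roughly "2.85 fails AA": exactly the kind of defect described above.
```

Wired into a CI pipeline or a design-token linter, a check like this catches the pairings that merely “seem” right before they ever reach users.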
Here’s a pragmatic plan to make AI a lever for accessibility—not a risk: