To be fair, TUIs are strictly worse accessibility-wise than GUIs.
There's no standard to communicate TUI semantics to assistive technology, and whatever few standards actually exist (like using the cursor to navigate menus instead of some custom highlight) aren't followed.
With GUIs, those standards exist, and are at least somewhat implemented by all major (non-Rust) UI frameworks.
Text isn't more legible without structure, and without communicating that structure (which is what the various accessibility toolkits do), you don't get this supposed benefit.
Text has inherent structure that GUIs don't. The ceiling for GUIs is higher (thanks to standards and supporting frameworks), but the floor for TUIs is higher.
OK, what is the inherent structure of these two columns, and how is a screen reader supposed to divine that structure without the framework telling it that there are two headers with the following text? And imagine the layout is space-separated, as in CLI utils.
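Say the listing looks something like this (a made-up stand-in, space-separated the way CLI utilities typically format their output):

    NAME    STATUS
    foo     on
    bar     off

To a screen reader this is just a flat grid of characters: nothing marks NAME and STATUS as headers, and nothing ties "on" to the STATUS column rather than to the word next to it.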
Even in the worst case, text can be read aloud and give some indication of what the screen contains. This is absolutely not true for a GUI, which could easily be just an opaque rendered canvas. The fact remains: TUIs are inherently legible in ways that GUIs are not guaranteed to be.
Both false: you'll have NO indication if you read letters from different words out of order!
You won't be able to tell whether 'o' is a value or a continuation of a column name even in the primitive example above, and for anything even remotely complicated it gets far worse.
> This is absolutely not true for a GUI which could easily just be an opaque rendered canvas.
Are you not aware of OCR? Besides, GUIs have dedicated accessibility tools, which almost no TUIs have, so your opaque canvas isn't universal.
> The fact remains:
That's a myth, not a fact, and you fail to establish "the fact" even in the most basic example.
There are ways to do it in most GUI applications. On Windows, pressing Alt will sometimes reveal the key combinations that activate certain parts of the UI (keyboard accelerators). It's not obvious anymore because people don't focus on accessibility. Sadly, ensuring a good keyboard workflow isn't common practice, because it's assumed users will have a mouse. Or people keep reinventing the TUI every time they think they want a terminal-friendly utility.
TUIs still need to comply with Section 508, so that “massive pain” is there either way.
What’s actually hard with screen readers isn’t getting text (that’s been easy on most GUI systems for decades) but communicating things in the right order, removing the need to see spatial relationships or color to understand what’s going on.
TUIs make that harder for everything beyond mid-20th-century-style prompt/response interfaces, because you don’t want to have to reread the entire screen every time a character changes (and some changes, like a clock updating, might need to be ignored). You want to present updates in a logical order, and you also need to come up with text alternatives to ASCII art. For example, if I made a tool that showed server response times graphically, a screen reader user might not want to hear an update every second; and if the most interesting thing were something like a histogram, I’d need to think about how to communicate the distribution, which is better than rereading a chart every second only to say that the previous one has shifted left by one unit and gained a single new data point.
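To make that concrete, here’s a minimal sketch of the “summarize, don’t reread” idea (hypothetical names, not any real screen-reader API): describe the distribution in words and stay quiet when nothing meaningful changed.

    import statistics

    # Hypothetical helper: turn latency samples into a one-line spoken
    # summary instead of rereading the whole chart on every update.
    def describe_latencies(samples_ms, last_median=None, threshold_ms=5):
        median = statistics.median(samples_ms)
        p95 = statistics.quantiles(samples_ms, n=20)[-1]  # ~95th percentile
        # Suppress churn: say nothing unless the median moved noticeably,
        # the way you'd ignore a clock ticking.
        if last_median is not None and abs(median - last_median) < threshold_ms:
            return None, median
        return f"median {median:.0f} ms, 95th percentile {p95:.0f} ms", median

The point isn’t the exact numbers; it’s that the update stream becomes a short narration rather than a diff of a character grid.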
Those are non-trivial problems in any case, but they’re all harder with a TUI because you’re starting with less convention and without the libraries GUI developers have for communicating all that context to a screen reader.
There's no protocol for telling a screen reader to say something different from what's actually displayed on the screen. The best you can do is keep a whitelist of screen-reader process names and change how your TUI behaves when one of them is detected, but that's brittle and doesn't work over SSH. You'd also have to think about container escaping and interfacing with the host system when you're running in WSL, since the screen reader is almost certainly on the host side.
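For illustration, the whitelist hack amounts to something like this (a sketch assuming psutil; the process names are the usual ones for NVDA, JAWS, Narrator, and Orca):

    import psutil

    # Brittle by design: only sees processes on this machine, so it
    # detects nothing across an SSH or WSL boundary.
    SCREEN_READERS = {"nvda.exe", "jfw.exe", "narrator.exe", "orca"}

    def screen_reader_running():
        for proc in psutil.process_iter(["name"]):
            name = (proc.info["name"] or "").lower()
            if name in SCREEN_READERS:
                return True
        return False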