Calling all accessibility technology experts! What follows is a line of thinking about using characteristics of the FEDORA digital object repository to enable access to content through non-graphical interfaces. Thanks to Linda Newman from the University of Cincinnati and others on the Friday morning DRC Developers conference call for triggering this line of thinking.
In a recent post defining universal disseminators for every object in our repository (if the last dozen words didn’t make sense, please read the linked article and come back), I hinted at having an auditory derivative of each object, at least at the preview level. During today’s conference call, Linda asked if such a disseminator could be used to offer different access points for non-GUI users. Well, why not? Let’s look back at the “presentation” part of the disseminator label:
A presentation can be one of:
- “preview” – a small/short version of the datastream returned in the datastream’s original format
- “screen” – a roughly GUI-screen-sized version of the datastream returned in the datastream’s original format
- “thumb” – a small, static image derivative of the datastream
- “audio” – an auditory derivative of the datastream
- “description” – a Dublin Core description of the item marked up in an HTML table
- “record” – HTML markup of thumb plus description (suitable, for instance, as a representation of the object in a browse list)
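To make the “contract” flavor of this list concrete, here is a minimal sketch of how a consuming application might construct dissemination URLs for each presentation. It follows the general shape of Fedora’s API-A-Lite dissemination syntax (`/get/{pid}/{sdef}/{method}`), but the repository base URL, the PID, and the `sdef:universal` service definition name are placeholders I am assuming for illustration, not names from the actual repository.

```python
# Sketch: building dissemination URLs for the six universal presentations.
# Base URL, PID, and the "sdef:universal" service definition name are
# hypothetical placeholders.

PRESENTATIONS = ["preview", "screen", "thumb", "audio", "description", "record"]

def dissemination_url(base, pid, presentation):
    """Return a Fedora API-A-Lite style dissemination URL for one presentation."""
    if presentation not in PRESENTATIONS:
        raise ValueError(f"unknown presentation: {presentation}")
    return f"{base}/get/{pid}/sdef:universal/{presentation}"

# A calling application could ask any object for any presentation:
urls = {p: dissemination_url("http://repo.example.edu/fedora", "demo:123", p)
        for p in PRESENTATIONS}
```

The point of the contract is exactly this uniformity: the caller does not need to know anything about the object’s content model to request a preview, a thumbnail, or an audio derivative.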
Specifically, we talked about the audio presentation for non-audio objects (digital objects where audio is not the fundamental focus of the object).
- There could be a descriptive audio track, similar in concept to video description, that would be returned to the calling application. Think, for instance, of the recorded commentary on the handheld audio tour devices in art museums.
- In the absence of other audio description, the disseminator could run a text-to-speech algorithm against the title, creator, and description fields and return that to the calling application.
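The fallback behavior described in these two bullets could be sketched as follows. This is only an illustration of the decision logic, assuming a simple object structure: the `AUDIO_DESC` datastream ID, the `text_to_speech()` stub, and the dictionary layout are all hypothetical, not part of any real Fedora API.

```python
# Sketch of the audio-presentation fallback: prefer a curated descriptive
# audio track; otherwise synthesize speech from Dublin Core metadata.
# Datastream IDs, the object layout, and text_to_speech() are hypothetical.

def text_to_speech(text):
    # Stand-in for a real TTS engine; a real implementation would
    # return encoded audio bytes.
    return f"<synthesized speech for: {text}>"

def audio_presentation(obj):
    """Return (mime_type, payload) for the 'audio' presentation of an object."""
    # 1. A descriptive audio track, if one was deposited with the object.
    if "AUDIO_DESC" in obj["datastreams"]:
        return ("audio/mpeg", obj["datastreams"]["AUDIO_DESC"])
    # 2. Otherwise, run text-to-speech over title, creator, and description.
    dc = obj["dc"]
    script = ". ".join(dc[f] for f in ("title", "creator", "description") if dc.get(f))
    return ("audio/mpeg", text_to_speech(script))
```

Whether this logic lives in the disseminator (the repository’s job) or in the calling application is exactly the open question raised below.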
This brings to mind another transformation we could apply to give a preview of an object. (No, I haven’t moved into the odor or tactile senses — yet.) Would it be useful to have a disseminator return a text summary and/or metadata aggregation of an object on demand?
And, of course, the critical question: is it the job of a repository to build in access methods like these for applications designed with accessibility in mind? Or should it be up to the application to create the necessary derivatives? I took a brief look at the Section 508 requirements but couldn’t find any real guidance about serving up accessible forms of content. (There is a great deal of information about how to create accessible web pages, but very little I could find about making the assets embedded in those pages accessible.) I only know enough about this area to have it on my radar…and certainly not enough to formulate any answers.
The original goal of defining these universal disseminators was to assure a base level of functionality for every object in the system — a contract, if you will, between the repository and any consuming application that it could ask an object to return itself or a derivative of itself in all of these forms. What do you think? Should that contract be consciously extended to include universal accessibility?