Thinking about Our Fedora Disseminators


This article was imported from this blog's previous content management system (WordPress), and may have errors in formatting and functionality. If you find these errors are a significant barrier to understanding the article, please let me know.

Another reason to consider the FEDORA digital object repository system, if having the ability to put all of your content in one place and reducing the complexity of digital preservation aren't enough, is the capability to create and define behaviors that the content can perform. In the FEDORA world, these behaviors are called disseminators.

By way of example -- say you have a digital object that is an image that you want to display to users. Furthermore, you want to create a smaller "thumbnail" version at the time of the user's request for search results lists and so forth. (Let's set aside for a moment that the system could create thumbnail derivatives in batch and simply deliver those to the user. Someday I'll propose that dynamic derivatives from a JPEG2000 master are a better way to go, but not now. Stick with me -- it'll be worth it.) From a system architecture point of view, the resize operation can happen in at least two places: as a function of the content repository or as a function of the interface application. In this simple example, one might argue that the best place is in the interface application.

Now let's say that your content repository has not only still images but moving image files. Uh, oh. That means the application now has to be smart enough to know whether a particular search results hit is a still image or a moving image datastream. And the application is going to have to know how to transform a large image to a thumbnail and know how to extract a key frame from a sequence of moving images. And if you have more than one interface to the object repository (say, one that presents a digital library interface and one that integrates objects into your learning environment) then you're going to have to replicate that still image and moving image capability in more than one application.

So instead, what if we put the "smarts" of the object into the repository and create some well-defined expectations for what every object in the repository has to do? The "smarts" in the repository are the Disseminators, and the well-defined expectations are the Content Model. The "dumb" application (relatively speaking) gets a list of record identifiers that are the results of a search and asks the repository to give it a thumbnail image for each one. The first record is a still image, so the repository resizes the image and delivers the result to the application. The second record is a moving image file, so the repository extracts one frame, resizes it, and delivers it back to the application. The third record is that of a book -- want to guess what happens? Perhaps the repository returns a thumbnail-sized image of the book jacket? Or maybe an image rendering of the title page? Okay, so the fourth record is of a dataset -- how do we get a thumbnail of a dataset? Maybe a reduced-size image of a visualization of the data? What if the fifth was of a website? Have you seen "thumbnail" sizes of websites, such as through Alexa or similar services?

This is the key point: for each record, the application simply asked the repository to deliver a thumbnail of the object. And the repository, regardless of media type, delivered one.
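That division of labor can be sketched in a few lines of Python. Everything here is invented for illustration -- these are not Fedora's actual classes or APIs -- but it captures the idea that each content model knows how to produce its own thumbnail, and the application just asks:

```python
# Hypothetical sketch of repository-side dissemination dispatch.
# Class and method names are invented; this is not Fedora's real API.

class StillImage:
    def get_thumb(self):
        return "resized image"          # resize the master image

class MovingImage:
    def get_thumb(self):
        return "resized key frame"      # extract one frame, then resize it

class Book:
    def get_thumb(self):
        return "book jacket image"      # or an image of the title page

def thumbnail(obj):
    # The application never inspects the media type; it simply asks the
    # object (via the repository) for its "thumb" dissemination.
    return obj.get_thumb()

search_results = [StillImage(), MovingImage(), Book()]
thumbs = [thumbnail(record) for record in search_results]
```

The application code stays identical no matter how many media types the repository grows to support; only the repository side gains new content models.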

Okay, enough background. Also keep in mind that OhioLINK's Fedora repository vision doesn't expect to have one front end; rather we anticipate getting to the repository data from a number of genre-, topic-, or technology-specific interfaces. In doing so, I think a lot of the intelligence about how to handle media types needs to go into the disseminators. So I'm thinking about how an object can present itself in generic ways to a wide variety of interfaces.

So in my current line of thinking, the name of a disseminator in the repository has three parts:

  • action
  • presentation
  • optional sizing parameters

An action can be one of:

  • "get" - raw stream of bits from the datastream
  • "view" - an HTML-wrapped version of the stream of bits, plus activities that can be applied to the datastream; intended for access by a GUI or for transformation via XSLT

I tried to combine the GUI and XSLT actions into "view" on the theory
that the HTML wrapper would have sufficient CSS "id" and "class" values
to make it possible to style it with CSS or transform it with XSLT.
This may not be a practical theory once we get to implementation.

A presentation can be one of:

  • "preview" - a small/short version of the datastream returned in the datastream's original format
  • "screen" - a roughly GUI-screen-sized version of the datastream returned in the datastream's original format
  • "thumb" - a small, static image derivative of the datastream
  • "audio" - an auditory derivative of the datastream
  • "description" - a Dublin Core description of the item marked up in an HTML table
  • "record" - HTML markup of Thumb plus Description (suitable, for instance, as a representation of the object in a browse list)

The final piece of the name is "Sized", which can be used to pass parameters that override the dimensions of the "preview" and "thumb" presentations.
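To make the naming scheme concrete, here is a small Python sketch of a name composer -- the helper function and its validation are my own invention, and the real scheme is whatever we settle on, but the vocabulary follows the lists above:

```python
# Hypothetical composer for disseminator names:
# action + presentation + optional "Sized" suffix.

ACTIONS = {"get", "view"}
PRESENTATIONS = {"Preview", "Screen", "Thumb", "Audio", "Description", "Record"}

def disseminator_name(action, presentation, sized=False):
    """Build a disseminator method name from its three parts."""
    if action not in ACTIONS:
        raise ValueError("unknown action: " + action)
    if presentation not in PRESENTATIONS:
        raise ValueError("unknown presentation: " + presentation)
    name = action + presentation
    if sized:
        name += "Sized"   # variant that accepts dimension parameters
    return name
```

So `disseminator_name("get", "Thumb")` gives "getThumb", and `disseminator_name("get", "Preview", sized=True)` gives "getPreviewSized".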

So these would get put together like this (with examples based on still images):

  • "getPreview" - return an x-by-y derivative of the datastream
  • "getThumb" - in the case of still images, same as "getPreview"
  • "viewThumb" - the same derivative as "getThumb" wrapped in an HTML div such as:

    (where [PID] is the Fedora PID and [DS] is the datastream label)
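The original markup example did not survive the import, but the shape would be something along these lines -- a hypothetical sketch, with the id/class vocabulary still to be decided:

```html
<!-- Hypothetical "viewThumb" wrapper; id and class names are
     illustrative only, not a settled convention -->
<div id="[PID]-[DS]-thumb" class="thumb">
  <img src="..." alt="Thumbnail of [PID]/[DS]" />
</div>
```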

For non-static images, it gets a little more interesting because:

  • "getPreview" of a video would return a short video segment defined as the 'preview' of the larger video, whereas "getThumb" of that same video datastream would return just a single frame from the video.
  • "getPreview" of a journal article could return a block of text that is the abstract of the article, while "getThumb" of that same journal article could return an image rendering of the first page of the article.
  • "getScreen" of a journal article could return an HTML fragment of the article itself, while "getAudio" might return a prerecorded or computer-synthesized rendition of the article.
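For what it's worth, in Fedora's classic access API ("API-A-Lite") these disseminations would be invoked through URLs along these lines -- sketched from memory, with host, port, and behavior-definition PIDs as placeholders:

```
http://repository.example.edu:8080/fedora/get/[PID]/[bDefPID]/getThumb
http://repository.example.edu:8080/fedora/get/[PID]/[bDefPID]/getPreviewSized?width=200&height=150
```

The point is that the method name, not the media type, is what the calling application has to know.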

That's the basic plan, open for comments before we get too far with the coding part of the project. Thoughts?
