Threads this week without commentary. (It has been a long week in which only one of four flights actually happened without a delay, cancellation, or redirection.) The big announcements are one from the Library of Congress about re-envisioning the way bibliographic information travels, one about the Douglas County (Colorado) Library’s experiment with taking ownership of ebooks and applying its own digital rights management, and a study of the spam ecosystem.
It has been years since I’ve done meaningful work with JPEG2000, but I still try to keep tabs on what is happening in that community. In that vein, Rob Buckley — formerly of Xerox Research and now on his own with a consulting business — pointed me to an announcement about a .
Earlier this year, Microsoft announced that it was giving $3 million in “funding, software, technological expertise, training and support services” to the Library of Congress to build on-site and online exhibits of LC historical collections. Others have commented on this. From a Jester’s point of view, I’ve got problems with this on two fronts: Microsoft using LC in a cheap marketing ploy and LC’s use of a new technology that impedes access for no good technical reason.
Roy Tennant is advocating the phrase “Descriptive Enrichment” over “Bibliographic Control” in response to a draft report from the Library of Congress Working Group on the Future of Bibliographic Control, and I’m stepping up to say — I’m right there with you, Roy!1 Your analysis reminds me of statements made by David Weinberger in the Google Tech Talk about his book Everything is Miscellaneous. David offers new definitions for words that we use regularly: “metadata” is what we know and “data” is what we want to find out. In the talk, he gave an example (29 minutes and 25 seconds into the playback; this link will take you right there) of using something you know — like a quote from a book — to find something you don’t know — like the author — by putting the quote into a search engine. The “metadata” (the quote) was used to find the “data” (the author) that was being sought.
Xerox and the Library of Congress announced a joint effort last week to study the use of JPEG 2000. This is welcome news! The project is “designed to help develop guidelines and best practices for digital content,” a result that will be most welcome for those of us who want to do the right thing but lack the time and/or technical expertise to pin down exactly what the right thing is. I think it is safe to say that inertia has carried us this far with our collective TIFF-based practice, and even the most conservative preservationist would probably acknowledge that the state of the art has moved far enough in the past quarter century that there might be a better way.
Activity continues on the National Digital Information Infrastructure and Preservation Program (NDIIPP). There were two stories in Washington, DC newspapers in recent weeks. The more interesting of the two came from the May 16th Washington Post in a column by Jim Barksdale and Francine Berman called “Saving Our Digital Heritage.” Barksdale — of Netscape fame and now a member of the NDIIPP advisory council — and Berman make a brief but impassioned plea for restoring the NDIIPP funding that was rescinded earlier this year. (The other article, “Saving the digital record” in the Washington Times (25-Apr-2007; article no longer available online), oddly praises the program but makes no mention of the funding rescission.) And I heard today from an “Unnamed Washington Source” that the leadership at the Library of Congress will seek to have some, if not all, of the funding restored as part of a future continuing resolution. (Hopefully one that won’t get vetoed.)