Threads this week without commentary. (It has been a long week that included only one flight of four that actually happened without a delay, cancellation, or redirection.) Big announcements are one from the Library of Congress to re-envision the way bibliographic information travels, one from Douglas County (Colorado) Library’s experiment with taking ownership of ebooks and applying its own digital rights management, and a study on the ecosystem of spam.
Thanks to everyone for participating in the first Code4Lib Virtual Lightning Talks on Friday. In particular, my gratitude goes out to Ed Corrado, Luciano Ramalho, Michael Appleby, and Jay Luker for being the first presenters to try this scheme for connecting library technologists. My apologies also to those who couldn’t connect, in particular to Elias Tzoc Caniz, who had signed up but found himself locked out by the simultaneous-user limit in the presentation system. Recordings of the presentation audio and screen capture video are now up in the Internet Archive.
| Presenter | Title |
| --- | --- |
| Edward M. Corrado | CodaBox: Using E-Prints for a small scale personal repository |
My employer recently became a member of NISO and I was made the primary representative. This is my first formal interaction with the standards organization hierarchy (NISO → ANSI → ISO), and one of the side effects is that I’m being asked to advise NISO on how its vote should be cast on relevant ISO ballots. Much of it has been pretty routine so far, but today one jumped out at me — the systematic review for the standard ISO 2709:2008, otherwise blandly known as Information and documentation — Format for information exchange. You might know it as the underlying structure of MARC. (Though, to describe it accurately, MARC is a subset or profile of ISO 2709.) And the voting options are: Confirm (as is), Revise/Amend, Withdraw (the standard), or Abstain (from the vote).
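For readers who have only ever seen MARC through a cataloging client, it may help to see what ISO 2709 actually specifies: a 24-character leader, a directory of fixed-width entries, and the field data itself. The sketch below is a minimal, hand-rolled illustration of that layout (not a production MARC parser — real records also carry indicators and subfield delimiters inside the field data, which this ignores); the sample record is fabricated for the example.

```python
# ISO 2709 record layout: 24-byte leader, then a directory of 12-byte
# entries (3-char tag, 4-char field length, 5-char starting position),
# then the field data. Fields end with \x1e; the record ends with \x1d.
FIELD_TERMINATOR = "\x1e"
RECORD_TERMINATOR = "\x1d"

def parse_iso2709(record: str) -> dict:
    """Return a {tag: field data} map for each directory entry."""
    leader = record[:24]
    base_address = int(leader[12:17])        # where the field data begins
    directory = record[24:base_address - 1]  # drop the trailing \x1e
    fields = {}
    for i in range(0, len(directory), 12):
        entry = directory[i:i + 12]
        tag = entry[0:3]
        length = int(entry[3:7])             # includes the field terminator
        start = int(entry[7:12])
        data = record[base_address + start:base_address + start + length]
        fields[tag] = data.rstrip(FIELD_TERMINATOR)
    return fields

# A hand-built one-field record: length 49, base address 37,
# a single 245 field containing "Test title".
sample = (
    "00049nam a2200037 a 4500"               # leader
    + "245001100000" + FIELD_TERMINATOR      # directory: tag/length/start
    + "Test title" + FIELD_TERMINATOR
    + RECORD_TERMINATOR
)
print(parse_iso2709(sample))   # {'245': 'Test title'}
```

Everything MARC adds — the meaning of tag 245, indicators, subfield codes — is a profile layered on top of this bare container, which is why a vote on ISO 2709 itself is narrower than it first appears.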
Eric Morgan posted a message to the Next Generation Catalog for Libraries mailing list this morning that points to an announcement by the University of Florida library that it is now applying a Creative Commons Public Domain Dedication statement to the MARC records it creates. The announcement says:
Beginning March 2011, the University of Florida Smathers Libraries implemented a policy to include a Creative Commons license in all of its original cataloging records. The records are considered public domain with unrestricted downstream use for any purpose.
It wasn’t too long ago that the music industry was in an uproar over stories of how easy it was to copy digital audio files with high fidelity. It was predicted that we would see the same thing in other media forms, and this week’s DLTJ Thursday Threads has two stories on the topic of book publishing. First is news of another inexpensive and simple (and now to be commercially produced) book digitizing system. Although the process of “ripping” a book from its physical medium might take longer than an audio track, these kinds of devices are emerging that will make it simple to do. What happens with the digital copy after that? The second Thursday Threads pointer is to an interview with the founder of a book publishing industry consultancy about the state of book piracy, how it is measured, and why digital rights management software is a poor way to stop it. The last entry this week is a short excerpt from a summary of a study conducted by OCLC last year on the usage of MARC tags in cataloging records.
In preparation for the last webinar of the three-part series “Using RDA: Moving into the Metadata Future”, I’m rereading Karen Coyle‘s “Library Data in a Modern Context” — the first chapter of Understanding the Semantic Web: Bibliographic Data and Metadata. Right at the start she has a clear and useful definition of this thing we call “metadata.”
This week I sat in on the first of the three “Using RDA: Moving into the Metadata Future” webinars being hosted by ALA. This one was presented by Karen Coyle with the title New Models of Metadata, where she talked about library-specific efforts such as RDA and FRBR as well as the linked data effort in the wider world of information. There was a great deal of concern expressed in the chat window by participants about the future of cataloging, of catalogers, and of MARC. The latter brought up memories of Roy Tennant‘s “MARC Must Die” declaration. My takeaway, though, isn’t that MARC is dead so much as that MARC is a dead end.
This is definitely becoming a habit…welcome to the fourth edition of DLTJ‘s Thursday Threads. If you find these interesting and useful, you might want to add the Thursday Threads RSS Feed to your feed reader or subscribe to e-mail delivery using the form to the left. If you would like a more raw and immediate version of these types of stories, watch my FriendFeed stream (or subscribe to its feed in your feed reader). Comments, as always, are welcome.
This year the ALCTS Forum at ALA Midwinter brought together three perspectives on massaging bibliographic data of various sorts in ways that use MARC, but where MARC is not the end goal. What do you get when you swirl MARC, ONIX, and various other formats of metadata in a big pot? Three projects: ONIX Enrichment at OCLC, the Open Library Project, and Google Book Search metadata.
At ALA Midwinter, ALCTS sponsored a panel discussion about sharing library-created data inside and outside the library community, with a particular focus on cataloging data. I was honored to be asked to speak on the topic from the perspective of a consortial office. This is the second and final post in a series that represents an approximation of what I said on the panel.
The first part examined the nature of surrogate records that we create as a means to get users to content. The post looked at where we get records, how humans and machines can create them, and the rights associated with component data that makes up the records.