ResourceSync — a joint effort of NISO and the Open Archives Initiative (OAI) team, with work funded by the Sloan Foundation — has published a draft specification that I urge members of the library technology community to look at. Building on the OAI-PMH strategies for synchronizing metadata, this project uses modern web architecture technologies to enable the synchronization of the objects themselves, not just their metadata. From the abstract of the draft specification:
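To make the idea concrete, here is a minimal sketch of what consuming a ResourceSync resource list might look like. The draft builds on the Sitemap XML format, where each entry points at a resource to be synchronized; the document below and its URLs are made-up examples, not taken from the specification.

```python
import xml.etree.ElementTree as ET

# A hypothetical resource list in the Sitemap-based format the draft
# describes: each <url> entry names a resource and when it last changed.
RESOURCE_LIST = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.org/objects/record1.pdf</loc>
    <lastmod>2013-01-02T17:00:00Z</lastmod>
  </url>
  <url>
    <loc>http://example.org/objects/record2.pdf</loc>
    <lastmod>2013-01-02T18:00:00Z</lastmod>
  </url>
</urlset>"""

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def list_resources(xml_text):
    """Return (location, last-modified) pairs from a resource list."""
    root = ET.fromstring(xml_text)
    return [(url.findtext(SITEMAP_NS + "loc"),
             url.findtext(SITEMAP_NS + "lastmod"))
            for url in root.findall(SITEMAP_NS + "url")]

for loc, lastmod in list_resources(RESOURCE_LIST):
    print(loc, lastmod)
```

A synchronizing client would compare each `lastmod` value against its local copy and re-fetch only the resources that have changed — the object-level analogue of an OAI-PMH incremental harvest.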
It wasn’t too long ago that the music industry was in an uproar about stories of how easy it was to copy digital audio files with high fidelity. It was predicted that we would see the same thing in other media forms, and this week’s DLTJ Thursday Threads has two stories on the topic of book publishing. First is news of another inexpensive and simple (and now to be commercially produced) book digitizing system. Although the process of “ripping” a book from its physical medium might take longer than an audio track, these kinds of devices are emerging that will make it simple to do. What happens with the digital copy after that? The second Thursday Threads pointer is to an interview with the founder of a book publishing industry consultancy about the state of book piracy, how it is measured, and why digital rights management software is a poor way to stop it. The last entry this week is a brief summary of a study conducted by OCLC last year on the usage of MARC tags in cataloging records.
In preparation for the last webinar of the three-part series “Using RDA: Moving into the Metadata Future”, I’m rereading Karen Coyle‘s “Library Data in a Modern Context” — the first chapter of Understanding the Semantic Web: Bibliographic Data and Metadata. Right at the start she has a clear and useful definition of this thing we call “metadata.”
This week I sat in on the first of the three “Using RDA: Moving into the Metadata Future” webinars being hosted by ALA. This one was presented by Karen Coyle with the title “New Models of Metadata”, where she talked about library-specific efforts such as RDA and FRBR as well as the linked data effort in the wider world of information. There was a great deal of concern expressed in the chat window by participants about the future of cataloging, of catalogers, and of MARC. The latter brought up memories of Roy Tennant‘s “MARC Must Die” declaration. My takeaway, though, isn’t that MARC is dead so much as that MARC is a dead end.
This is definitely becoming a habit…welcome to the fourth edition of DLTJ‘s Thursday Threads. If you find these interesting and useful, you might want to add the Thursday Threads RSS Feed to your feed reader or subscribe to e-mail delivery using the form to the left. If you would like a more raw and immediate version of these types of stories, watch my FriendFeed stream (or subscribe to its feed in your feed reader). Comments, as always, are welcome.
Did you know that Amazon offers a facility to make corrections to its catalog? Somewhere in the past few months someone mentioned this to me and I tried it out. (Unfortunately, it has been long enough now that I’ve forgotten who told me; if you are the one, please fess up in this post’s comments section.) And it works! Is this a model for crowdsourced corrections to library data?
Jerome McDonough of the Graduate School of Library & Information Science at the University of Illinois at Urbana-Champaign presented a paper this summer at the Balisage conference with the title Structural Metadata and the Social Limitation of Interoperability: A Sociotechnical View of XML and Digital Library Standards Development.1 The title is very hard to penetrate, but the contents of the paper lay bare a theory for why we don’t have large, swirling pools of shared digital objects that cross institutional silo boundaries.
Earlier this week I received an e-mail from the director of the ISSN International Center announcing a session at the ALA Annual Conference in Anaheim to talk about the “linking ISSN”. Abbreviated ISSN-L, this is a new addition to the revised ISSN standard (ISO 3297, published last August) that allows for the collocation of separate ISSNs under a single ISSN-L. The ISSN standard now explicitly states that an ISSN is a unique identifier for a specific serial in a defined medium; in other words, separate ISSNs should be assigned to each different medium version of a serial. The ISSN-L table brings these separate ISSNs together.
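The collocation idea can be sketched with a simple lookup table: each medium-specific ISSN maps to the single linking ISSN that groups all versions of the serial. This is only an illustration of the concept — every ISSN below is a made-up example, not a real assignment.

```python
# Hypothetical ISSN-L table: medium-specific ISSN -> linking ISSN.
# All values are invented for illustration.
ISSN_L_TABLE = {
    "1234-5678": "1234-5678",  # print version
    "2345-6789": "1234-5678",  # online version
    "3456-7890": "1234-5678",  # CD-ROM version
}

def linking_issn(issn):
    """Return the ISSN-L that collocates this medium-specific ISSN."""
    return ISSN_L_TABLE.get(issn)

def collocated_issns(issn_l):
    """Return every medium-specific ISSN grouped under an ISSN-L."""
    return sorted(m for m, l in ISSN_L_TABLE.items() if l == issn_l)

print(linking_issn("2345-6789"))      # the shared linking ISSN
print(collocated_issns("1234-5678"))  # all medium versions of the serial
```

A citation or knowledge-base lookup keyed on the ISSN-L would then retrieve holdings for the print, online, and other medium versions in one query.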
This morning I got an invitation to join ResearcherID, a new author profile service from Thomson Scientific. The service sounds nice enough — who doesn’t want to take steps to avoid confusion between authors? — and if you have access to other Thomson products (like ISI Web of Knowledge or Web of Science) it may be even nicer. I’m all for the establishment of unique identifiers so we can start to do some interesting things with co-citation analysis and mining the web of connections in journal articles, but I’m not signing up. At least not yet.
In reading a background paper for the American Social History Online portal, I was reacquainted with a paper by Muriel Foulonneau, Thomas Habing and Tim Cole from UIUC called “Automated Capture of Thumbnails and Thumbshots for Use by Metadata Aggregation Services.”1 This is the abstract: