The W3C Library Linked Data (LLD) Incubator Group invites librarians, publishers, linked data researchers, and other interested parties to review and comment on drafts of reports to be published later this year. The LLD group has been chartered from May 2010 through August 2011 to prepare a series of reports on the existing and potential use of Linked Data technology for publishing library data. The group is currently preparing:
This week’s list of threads starts with a pointer to a statement by the International Coalition of Library Consortia on the growing tension between publishers and libraries over the appropriate rights and permissions for scholarly material. In that same vein, Joe Lucia writes about his vision for libraries and the cultural commons on the Digital Public Library of America mailing list. On the geekier side is a third link to an article describing the experience of content producers creating HTML5-enabled web apps. And finally, on the far geeky side, is a view of what happens when a whole lot of new wireless devices — smartphones, tablets, and the like — show up on a wifi network.
Wandering into public or semi-public wireless networks makes me nervous because I know how easily my network traffic can be watched, and because I’m a geek with control issues I’m even more nervous when using devices whose insides I can’t get to (like phones and tablets). One way to tamp down my concerns is to use a Virtual Private Network (VPN) to tunnel the device’s network connection through the public wireless network to a trusted endpoint, but most of those options require a subscription to a VPN service or a VPN installed in a corporate network. I thought about using one of the open source VPN implementations with an Amazon EC2 instance, but judging from the comments on the Amazon Web Services support forums, that isn’t possible with the EC2 network configuration. (Besides, installing one of the open source VPN software implementations looks far from turnkey.) Just before I lost hope, though, I saw a reference to using the open source DD-WRT consumer router firmware to do this. After plugging away at it for an hour or so, I made it work with my home router, an AT&T U-verse internet connection, and iOS devices. It wasn’t easy, so I’m documenting the steps here in case I need to set this up again.
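As a rough sketch of what the router side of this involves, the commands below enable DD-WRT’s built-in PPTP server (which iOS devices at the time could connect to natively) from the router’s telnet/ssh console. The nvram variable names are an assumption based on common DD-WRT builds and may differ by version; the same settings live under Services → VPN → PPTP Server in the web GUI, and the username, password, and address ranges are hypothetical placeholders.

```shell
# Enable DD-WRT's built-in PPTP server from the router's shell.
# NOTE: variable names are assumptions from common DD-WRT builds;
# the equivalent settings are under Services -> VPN in the web GUI.
nvram set pptpd_enable=1                    # turn the PPTP server on
nvram set pptpd_forcemppe=1                 # require MPPE encryption
nvram set pptpd_lip=192.168.1.1             # server-side address (the router)
nvram set pptpd_rip=192.168.1.200-210       # address pool handed to VPN clients
nvram set pptpd_auth="user * password *"    # CHAP user/password (hypothetical)
nvram commit                                # persist settings across reboots
reboot
```

On the device side, this corresponds to adding a PPTP VPN configuration in iOS Settings pointing at the home connection’s public address.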
This week we got the long-awaited report from the group testing RDA to see if its use would be approved for the major U.S. national libraries. And the answer? An unsatisfying, if predictable, maybe-but-not-yet. This week also brought new examples of the tensions between authors, publishers, and libraries. The first example is an author’s story of attempting to navigate an author’s rights agreement and coming up against an insurmountable barrier. The second example looks into the future of teaching and learning in a world where fair use has been dramatically scaled back from the status quo, and it is a frightening one.
The main Open Repositories conference concluded this morning with a keynote by Clifford Lynch, and the separate user group meetings began. I tried to transcribe Cliff’s great address as best I could from my notes; hopefully I’m not misrepresenting what he said in any significant way. He had some thought-provoking comments about the positioning of repositories in institutions and the policy questions that come from that.
Open Repositories 2011 Report: Day 3 – Clifford Lynch Keynote on Open Questions for Repositories, Description of DSpace 1.8 Release Plans, and Overview of DSpace Curation Services.
Today was the second day of the Open Repositories conference, and the big highlight of the day for me was the panel discussion on using Fedora as a storage and service layer for DSpace. This seems like such a natural fit, but with two pieces of complex software the devil is in the details. Below that summary are brief paragraphs about some of the 24×7 lightning talks.
Two threads this week: the first is an announcement from the major search engines of an agreed-upon way to discover machine-processable information in web pages. The search engines want this so they can do a better job understanding the information in web pages, but it stomps on the linked data work that has been a hot topic in libraries recently. The second is a red-letter day in the history of the internet, as major services tried out a new way for machines to connect. The test was successful, and its success means a big hurdle has been crossed as the internet grows up.
Today was the first main conference day of the Open Repositories conference in Austin, Texas. There are 300 developers here from 20 countries and 30 states. I have lots of notes from the sessions, and I’ve tried to make sense of some of them below before I lose track of the entire context.
The meeting opened with a keynote by Jim Jagielski, president of the Apache Software Foundation. He gave a presentation on what it means to be an open source project, with a focus on how Apache creates a community of developers and users around its projects.
This week I am attending the Open Repositories conference in Austin, Texas, and yesterday was the second preconference day (and the first day I was in Austin). Coming in as I did, I only had time to attend two preconference sessions: one on the integration — or maybe “invasion” — of the Spring Framework into DSpace, and one introducing the DuraCloud service and code.
[Update on 10-Jun-2011: The answer to the question of the title is “not really” — see the update at the bottom of this post and the comments for more information.]
Many sites are generated from structured data, which is often stored in databases. When this data is formatted into HTML, it becomes very difficult to recover the original structured data. Many applications, especially search engines, can benefit greatly from direct access to this structured data. On-page markup enables search engines to understand the information on web pages and provide richer search results in order to make it easier for users to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure.
The problem is, I think, that the markup they describe on their site generates invalid HTML. Did they really do this?
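For reference, the kind of on-page markup in question looks like this microdata snippet, modeled on the schema.org examples (the title and author values are made up). The `itemscope`, `itemtype`, and `itemprop` attributes are defined by HTML5’s microdata specification, so a page declaring an older doctype would indeed fail validation with this markup in it:

```html
<!-- Microdata in the style of the schema.org examples; the item values
     are hypothetical. These attributes validate only under HTML5. -->
<div itemscope itemtype="http://schema.org/Book">
  <span itemprop="name">An Example Title</span>
  by <span itemprop="author">A. Hypothetical Author</span>
</div>
```

A search engine parsing this would extract a Book item with `name` and `author` properties, which is the structured-data recovery described above.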