The W3C Library Linked Data (LLD) Incubator Group invites librarians, publishers, linked data researchers, and other interested parties to review and comment on drafts of reports to be published later this year. The LLD group has been chartered from May 2010 through August 2011 to prepare a series of reports on the existing and potential use of Linked Data technology for publishing library data. The group is currently preparing:
Wandering into public or semi-public wireless networks makes me nervous because I know how easily my network traffic can be watched, and because I’m a geek with control issues I’m even more nervous when using devices that I can’t get to the insides of (like phones and tablets). One way to tamp down my concerns is to use a Virtual Private Network (VPN) to tunnel the device’s network connection through the public wireless network to a trusted end-point, but most of those options require a subscription to a VPN service or a VPN installed in a corporate network. I thought about using one of the open source VPN implementations with an Amazon EC2 instance, but judging from the comments on the Amazon Web Services support forums, it isn’t possible with the EC2 network configuration. (Besides, installing one of the open source VPN software implementations looks far from turnkey.) Just before I lost hope, though, I saw a reference to using the open source DD-WRT consumer router firmware to do this. After plugging away at it for an hour or so, I made it work with my home router, an AT&T U-verse internet connection, and iOS devices. It wasn’t easy, so I’m documenting the steps here in case I need to set this up again.
[Update on 10-Jun-2011: The answer to the question of the title is "not really" -- see the update at the bottom of this post and the comments for more information.]
Many sites are generated from structured data, which is often stored in databases. When this data is formatted into HTML, it becomes very difficult to recover the original structured data. Many applications, especially search engines, can benefit greatly from direct access to this structured data. On-page markup enables search engines to understand the information on web pages and provide richer search results in order to make it easier for users to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure.
The problem is, I think, that the markup they describe on their site generates invalid HTML. Did they really do this?
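For reference, here is a minimal, hypothetical sketch of the kind of on-page markup being discussed: a bit of PHP emitting an HTML fragment with schema.org-style microdata attributes. The type and property names are real schema.org terms, but the book example is made up, and this is not the markup from their site.

```php
<?php
// Hypothetical example: emit an HTML fragment with schema.org-style microdata
// (itemscope/itemtype/itemprop) so the underlying structured data -- here a
// book title and author -- stays machine-readable in the rendered page.
$title  = 'Linked Data for Libraries';   // made-up values for illustration
$author = 'Jane Example';

echo '<div itemscope itemtype="http://schema.org/Book">' . "\n";
echo '  <span itemprop="name">' . htmlspecialchars($title) . '</span>' . "\n";
echo '  by <span itemprop="author">' . htmlspecialchars($author) . '</span>' . "\n";
echo '</div>' . "\n";
```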
One of the great things about the Shibboleth inter-institution single sign-on software package is the ability for the Identity Provider to limit how much a Service Provider knows about a user’s request for service. (Not familiar with those capitalized terms? Read on for definitions.) But with this capability comes great flexibility, and with that flexibility can come a lot of management overhead. So I was intrigued to see the announcement of an online webinar from the InCommon Shibboleth Federation with the title “The Challenges of User Consent” covering the issues of managing who gets access to what information about users.
Thanks to everyone for participating in the first Code4Lib Virtual Lightning Talks on Friday. In particular, my gratitude goes out to Ed Corrado, Luciano Ramalho, Michael Appleby, and Jay Luker for being the first presenters to try this scheme for connecting library technologists. My apologies also to those who couldn’t connect, in particular to Elias Tzoc Caniz, who had signed up but found himself locked out by the simultaneous-user limit in the presentation system. Recordings of the presentation audio and screen capture video are now up in the Internet Archive.
| Edward M. Corrado | CodaBox: Using E-Prints for a small scale personal repository |
My employer recently became a member of NISO and I was made the primary representative. This is my first formal interaction with the standards organization hierarchy (NISO → ANSI → ISO), and as one of the side effects I’m being asked to provide advice to NISO on how its vote should be cast on relevant ISO ballots. Much of it has been pretty routine so far, but today one jumped out at me — the systematic review for the standard ISO 2709:2008, otherwise blandly known as Information and documentation — Format for information exchange. You might know it as the underlying structure of MARC. (Though, to describe it accurately, MARC is a subset or profile of ISO 2709.) And the voting options are: Confirm (as is), Revise/Amend, Withdraw (the standard), or Abstain (from the vote).
About two years ago I wrote a blog post wondering if we could outsource the preservation of digital bits. What prompted that blog post was an announcement from Iron Mountain of a Cloud-Based File Archiving service. Since then a number of other services have sprung up that are more attuned to the needs of cultural heritage communities (DuraCloud and Chronopolis come to mind), but I have wondered if the commercial sector had a way to do this cheaply and efficiently. The answer to that question is “maybe not”: Iron Mountain has told Gartner Group (PDF archive) that it is shutting down its cloud file archiving service and its Archive Service Platform.
Last week in DLTJ Thursday Threads I posted an entry about running out of IP addresses. Since I posted that, I’ve run across a couple of other stories and websites that bring a little more context to the consequences of last week’s distribution of the last blocks of IP addresses from the world-wide pool of available addresses. The short version: channel any panic you might be feeling into making sure your systems are ready to communicate using both the existing network standard (IPv4) and the new network standard (IPv6).
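If you want a quick sense of where one of your systems stands, a rough first check is whether its hostname publishes both A (IPv4) and AAAA (IPv6) records in DNS. Here is a minimal PHP sketch of that check; the hostname is just a placeholder.

```php
<?php
// Rough readiness check: does this hostname publish both IPv4 (A) and
// IPv6 (AAAA) records?  The hostname below is a placeholder.
$host = 'www.example.org';

$ipv4 = dns_get_record($host, DNS_A)    ?: array();
$ipv6 = dns_get_record($host, DNS_AAAA) ?: array();

printf("%s: %d A record(s), %d AAAA record(s)\n", $host, count($ipv4), count($ipv6));

if (count($ipv6) === 0) {
    echo "No AAAA records found -- this host isn't advertising IPv6 yet.\n";
}
```

(Publishing an AAAA record is only the first step, of course; the services behind that name also have to listen on IPv6.)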
Late last year I was asked to put together a 20-minute presentation for my employer (LYRASIS) on what I saw as upcoming technology milestones that could impact member libraries. It was a good piece, so I thought I’d share what I learned with others as well. The discussion was in two parts — general web technologies/expectations and mobile applications/web.
In a futile effort to fight link rot on DLTJ, I installed the Broken Link Checker plugin by “White Shadow”. I like the way it scans the entire content of this blog — posts, pages, comments, etc. — looking for pages linked from here that don’t respond with an HTTP 200 “Ok” status code. The dashboard of problem links has a nice interface for updating or deleting these links, including the ability to add a CSS style to deleted links to note that they were formerly there. One of the things I wished it did, though, was to add a message to posts/pages that noted a link was changed or deleted. You know — just to document that something changed since the page was first published. Tonight I hacked into the code to add this function. And with apologies to the original author of this beautifully structured object-oriented PHP code, it is a gruesome hack.
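For anyone who wants a similar effect without patching the plugin, here is a rough sketch of a gentler approach: hook WordPress’s standard the_content filter (from a theme’s functions.php or a small plugin) and append a notice stored in post meta. The function name and the '_link_edit_note' meta key are made up for this example; this is not how the plugin, or my hack, actually does it.

```php
<?php
// Hypothetical alternative: append a "links were edited" notice to a post by
// hooking the_content, instead of modifying the plugin's own code.
// The '_link_edit_note' meta key and the function name are invented for this sketch.
function dltj_append_link_edit_note( $content ) {
    $note = get_post_meta( get_the_ID(), '_link_edit_note', true );
    if ( ! empty( $note ) ) {
        $content .= '<p class="link-edit-note"><em>' . esc_html( $note ) . '</em></p>';
    }
    return $content;
}
add_filter( 'the_content', 'dltj_append_link_edit_note' );
```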