Most e-mail messages I send are digitally signed using a process called “Pretty Good Privacy”, or PGP. In e-mail applications that don’t understand PGP, this digital signature will show up either as an attachment called “PGP.sig” or as a part of the message starting with “BEGIN PGP SIGNATURE” at the bottom of the e-mail. This signature — gibberish to the human eye — is used by PGP-aware programs to verify that the message actually came from me. If you are using PGP, I could also send you a message that only you could read (that is, an “encrypted” message). This page gives some background on PGP and why I consider it important.
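For example, a clear-signed message embedded in the body of an e-mail looks roughly like this (the signature block itself is abbreviated here, and the exact header lines vary by program and version):

```
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

The text of the message goes here, readable by anyone.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG

iQEcBAEBAgAGBQJ...
-----END PGP SIGNATURE-----
```

A PGP-aware mail program checks the signature block against the message text and my public key; if anything in the text was altered in transit, the verification fails.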
It is the start of a new year [1], and it seems like a good time to update my public encryption key. My previous one — created in 2004 — is both a little weaker, cryptographically speaking, than newly created ones (1024-bit versus 2048-bit) and also an uncomfortable mixing of my professional and personal lives. For my previous key, I attached all of my professional and personal user IDs (e.g. e-mail addresses) to the same key. This time I decided to split my work-related user IDs from my other ones. My reasoning for the split is that I might be compelled by my employer to turn over my private key to decrypt messages and files sent in the course of my work. If my personal user IDs are also attached to that private key, my employer (and whoever else got hold of that key) would be able to decrypt my personal messages and files as well. That is not necessarily a good thing. So my solution was to create two keys and cross-sign them. I’ve outlined the process below.
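A minimal sketch of that process with GnuPG’s command-line tool (the key IDs WORKKEY and HOMEKEY are placeholders for whatever IDs gpg reports, and the key-generation steps prompt interactively for names, addresses, and passphrases):

```
# Generate two new keys, one for the work identity and one for the
# personal identity (answer the interactive prompts for each).
gpg --gen-key
gpg --gen-key

# Cross-sign them: sign the personal key with the work key...
gpg --default-key WORKKEY --sign-key HOMEKEY

# ...and sign the work key with the personal key.
gpg --default-key HOMEKEY --sign-key WORKKEY

# Publish the updated keys so others can see the cross-signatures
# (any public keyserver will do; this one is just an example).
gpg --keyserver pool.sks-keyservers.net --send-keys WORKKEY HOMEKEY
```

The cross-signatures let anyone who trusts one of the keys extend some confidence to the other, while the private key material stays separate — turning over one private key does not expose messages encrypted to the other.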
These keys are part of a computer standard and software algorithm called “Pretty Good Privacy”, or PGP. If you are interested in more background on PGP, see a companion post on why I digitally sign my e-mail.
[1] Some have even said it is the start of a new decade, but of course that isn’t true. We won’t start a new decade until 2011, just like we didn’t actually start a new millennium until 2001.
OCLC announced on Monday the availability of a new
There is a link on the OLE Project site to the final report as submitted to the Mellon Foundation. This version of the report has minor corrections in the text and now includes information about the group of libraries that have committed to the build phase of the project. Those libraries are:
- Indiana University (lead)
- Florida Consortium (University of Florida, Florida International University, Florida State University, New College of Florida, Rollins College, University of Central Florida, University of Miami, University of South Florida, and the Florida Center for Library Automation)
- Lehigh University
An interesting thing happened at my place of work (OhioLINK) today. We recently added links in our central catalog pointing to manifestations in Google Books. We set it up, though, to link to Google Books only when the full text is available there. We tweeted about it to let our community know that this option was now available. The tweet included a link to a particular record that showed (at the time) an example of this change: Mark Twain’s Life on the Mississippi.
NISO voting members are currently considering two new work items: a statement of best practices for the physical delivery of library resources and a formalization of the NLM journal article DTD, a de facto standard. The Physical Delivery and Standardized Markup for Journal Articles proposal documents are openly available for download.
Over the weekend, the folks at Duke University coordinating the development of the OLE Project Design Final Report released a draft for public comment. Weighing in at 100 pages (don’t let that put you off — there are lots of pictures), it represents the best thinking of a couple dozen individuals listening to hundreds of professionals working in libraries. Participants were challenged to consider not only their existing environments and workflows, but also how things could be put together differently. And “differently” — in this context — means thinking about tighter integration with information systems and processes at the host institution.
It has been a wild few weeks in search engines — or search-engine-like services. We’ve seen the introduction of no fewer than three high-profile tools … Wolfram|Alpha, Microsoft Bing, and … each with its own strengths and each needing its own techniques — or, at least, its own distinct frame of reference — in order to maximize its usefulness. This post describes these three services, what they’re generally good for, and how to use them. We’ll also do a couple of sample searches to show how each is useful in its own way.
Via a post and an interview on the O’Reilly Radar blog, Google announced limited support for parsing RDFa statements and microformat properties in web page HTML coding and using those statements to enhance the relevance of search results as so-called “rich snippets”. In looking at the example review markup outlined in the O’Reilly post, though, I was struck by some unusual and unexpected markup. Specifically, that the namespace was this
http://rdf.data-vocabulary.org/ thing that I had never seen before, and the “rating” property didn’t have any corresponding range that would make that numeric value useful in a computational sense.
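For illustration, a review marked up with that vocabulary looks roughly like this (the item, reviewer, and rating values here are made up, and the exact attribute names should be checked against Google’s documentation):

```html
<div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Review">
  <span property="v:itemreviewed">Blender Pro 9000</span>
  Rating: <span property="v:rating">4.5</span>
  Reviewed by <span property="v:reviewer">Jane Example</span>
</div>
```

Nothing in the markup itself says that v:rating is a number, much less a number on a 1-to-5 scale — that is the missing “range” noted above. A consumer of this data has to know, out of band, how to interpret the value.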