Library Journal reports on the results of discussions among SOLINET members. The summary is available, as are the full discussion materials. The report is two pages of executive summary, four pages of compact overviews of the major areas of focus, and four pages of recommendations for the SOLINET organization.
Earlier this week, Aaron Swartz of the Internet Archive announced the demonstration website of the Open Library project, a new kind of book catalog that combines traditional publisher and library bibliographic data with the user-contributed editing paradigm of Wikipedia. Okay, I’ll pause for a moment while you parse that last sentence. Think you got it? Read — and watch — further.
Why settle for mere digital copies of books (a la the Google Book Search project and the Open Content Alliance) when you can have an edition printed, bound and sent to you in the mail? That’s the twist behind a recent partnership announced by Amazon.com, Kirtas Technologies, Emory University, University of Maine, Toronto Public Library, and the Public Library of Cincinnati and Hamilton County.
A recent posting in the Chronicle of Higher Education “Wired Campus” section describes the new iTunes U portal, “a spot on the site that will collect college lectures, commencement speeches, tours, sports highlights, and promotional material, all available at no cost.” (If you have iTunes on your desktop/laptop, you can use this link to visit iTunes U in the iTunes Store.) Now, according to the Apple press release, “content from iTunes can be loaded onto an iPod® with just one click and experienced on-the-go, anytime, making learning from a lecture just as simple as enjoying music.”
A post by Bill Harris at “Dubious Quality” with the title “Information” got caught up in my Technorati filter for disruptive change in libraries. Geoff Engelstein, a colleague of Bill’s, mentioned this in an e-mail:
We were a generation of information explorers. They [Geoff's thirteen- and eleven-year-olds] are a generation of editors.
The context is Bill’s reflection on the trials, and occasional feelings of success, of conducting research: “you’d have to pull out a rack in the card catalog according to the alphabetized subject and flip through the cards. If you got lucky, the title of a book or a brief description would point you in the right direction. Then you had to actually find the book, skim through it, and hope that you’d find some information.” Bill even includes a link to a bibliographic instruction page showing how an actual card catalog works.
Recent posts by Richard Wallis and Paul Miller, both of Talis (a 40-year-old company in the U.K. specializing in information and metadata management), question a perceived division between the technical staff of library automation vendors and the technical staff of open source projects. I wasn’t at Code4Lib this year (I’m going to try to get there next year), but from the context of the blog postings and comments it seems the Talis developers were showing some really cool stuff, and participants expressed concern that they don’t want to see Code4Lib turned into a vendor forum.
Brewster Kahle, Director of the Internet Archive, was interviewed this week in a Chronicle of Higher Education podcast on the Open Content Alliance’s mass digitization effort. Among the many interesting points in the interview: one of the biggest challenges to such a mass digitization effort is getting people to believe that digitizing massive numbers of books and making them available is actually possible. The Open Content Alliance has put together a suite of technology that brings the cost of a color scan with OCR down to 10 cents per page, or about $30 per book. He then goes on to perform this calculation: the library system in the U.S. is a $12 billion industry. One million books digitized a year is $30M, or “a little less than .3 percent of one year’s budget of the United States library system would build a 1 million book library that would be available to anyone for free.” He also covers copyright concerns, including the more liberal copyright laws in countries such as China.
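Kahle’s back-of-the-envelope math checks out, and it is easy to verify. A minimal sketch in Python, assuming an average book length of roughly 300 pages (a figure implied by, not stated in, his 10-cents-per-page and $30-per-book numbers):

```python
# Check Brewster Kahle's digitization cost estimate from the interview.
# Assumption: an average book runs ~300 pages, implied by the
# 10-cents-per-page and $30-per-book figures he quotes.

cost_per_page = 0.10           # color scan with OCR, in dollars
pages_per_book = 300           # assumed average length
cost_per_book = cost_per_page * pages_per_book   # about $30

books_per_year = 1_000_000
annual_cost = cost_per_book * books_per_year     # about $30 million

us_library_budget = 12e9       # the "$12 billion industry" figure
share = annual_cost / us_library_budget

print(f"${cost_per_book:.0f} per book")
print(f"${annual_cost / 1e6:.0f}M per year")
print(f"{share:.2%} of one year's U.S. library budget")
```

The share works out to 0.25 percent, consistent with Kahle’s “a little less than .3 percent” phrasing.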
On Tuesday, the Poynter Institute (a school for journalists, future journalists, and teachers of journalists) released results of their eyetracking study, an examination of reader behavior in print and online media. An article on their website goes into more detail about the initial data, but what caught my eye as of interest to the library community is the headline (“The Myth of Short Attention Spans”) and this conclusion: “The reading-deep phenomenon [thoroughly reading a selected story] is even stronger online than in print.”
I really like Christensen’s Theory of Disruptive Innovation (as he proposed in his book The Innovator’s Dilemma). It succinctly describes the challenges, if not the fate, of academic libraries as we navigate through changing expectations and fast-moving, turbulent technologies. In fact, in explaining my point of view on where libraries need to go, I often find myself drawing the core graph of Christensen’s theory on napkins, whiteboards, hands — whatever I can find. Inevitably, with the enthusiasm for the topic and quick-moving hands, the lines don’t always match where they ought to, and that makes the concepts all the more difficult to explain.
Ron Murray (no relation) from the Library of Congress sent me this announcement about a joint NASA/Google partnership, which starts:
NASA Ames Research Center and Google have signed a Space Act Agreement that formally establishes a relationship to work together on a variety of challenging technical problems ranging from large-scale data management and massively distributed computing, to human-computer interfaces.
As the first in a series of joint collaborations, Google and Ames will focus on making the most useful of NASA’s information available on the Internet. Real-time weather visualization and forecasting, high-resolution 3-D maps of the moon and Mars, real-time tracking of the International Space Station and the space shuttle will be explored in the future.