As part of the Mellon Foundation grant funding the start-up of LYRASIS Technology Services, LTS is to produce a series of tools that enable libraries to decide whether open source is right for their environments. The grant says:
Identify useful tools that can support decision-making and create free, web-based versions for library self-use. Tools will enable libraries to look at products (open source or not) from the library requirement perspective as well as product functionality. Readiness assessment tools will assist libraries in evaluating local conditions to assess what resources exist or are needed to acquire, adopt, and support open source products. Selection tools will provide a structure for looking at such factors as usability, scalability, documentation, upgrade frequency, customization, maintenance requirements, community adoption levels, system support needs, and security in addition to product features. Existing models for assessing business requirements and readiness for other software applications will be used as a starting point for developing readiness assessment and selection tools for open source library products. The tools will be developed by staff and consultants, and tested/vetted with members and/or experts.
I’ve put a page up on the Code4Lib wiki describing the kinds of tools that will initially fall into this area. After review by the Advisory Panel and comments from the community, statements of work will be drafted for consultants to create these tools, and the work will be let out for contract. The completed tools will be turned into web documents in the form of whitepapers, checklists, spreadsheets, etc., and published along with the open source software registry now under development. To encourage consultants to share their knowledge, we are considering allowing consultants to identify themselves in the text of the document (e.g., “Prepared for LYRASIS with funding from the 2011-2012 Mellon Foundation Open Source Support Grant by name of consultant.”).
With this background in mind, answers to these questions would be helpful:
- Based on your experience and/or knowledge of open source software adoption, are there other tools or techniques that would be useful to document and make available?
- Do you have suggestions for consultants to approach to complete the work of creating these tools?
Update on Software Registry
My earlier post with the entity-relationship diagram generated a lot of good comments. Thanks to everyone for responding with observations about the design itself or with general questions about what we’re up to. Keep ’em coming!
Based on that feedback, I’ve updated the diagram to include entities for a Characteristic and a Characteristic_Value. The idea is that a Characteristic is like a label for a row in a comparison table, and each Characteristic is associated with a particular Package Type. A Characteristic_Value records how a Package does or does not implement that Characteristic.
This might be easier to explain with a diagram. In a mockup of the package comparison page, there is a list of Characteristics in the left-most column of the table, followed across the page by Characteristic_Values for DSpace and Fedora. (The characteristics and values, like much else in the mockups, are made-up data.) In this way we can have arbitrary Characteristics for each package type and allow packages to be compared in a table like this. The values are plain strings, so no scoring or comparison is done; that is left as an exercise for the user, depending on their own individual needs.
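To make the relationship concrete, here is a minimal sketch of the Characteristic model in Python. This is an illustration only, not the registry’s actual schema: the class names, fields, and the sample characteristics and values are all made up for the example (just as the mockup data is).

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Characteristic:
    # A row label in the comparison table, tied to one Package Type.
    name: str
    package_type: str

@dataclass
class Package:
    name: str
    # Free-text answers keyed by characteristic name. Values are plain
    # strings, so no scoring or automatic comparison is implied.
    values: dict[str, str] = field(default_factory=dict)

def comparison_table(characteristics, packages):
    """Build rows like the mockup: a label, then one value per package."""
    return [
        [c.name] + [p.values.get(c.name, "") for p in packages]
        for c in characteristics
    ]

# Made-up example data, echoing the DSpace/Fedora comparison mockup.
chars = [
    Characteristic("Storage model", "Repository"),
    Characteristic("Primary language", "Repository"),
]
dspace = Package("DSpace", {"Storage model": "value A", "Primary language": "Java"})
fedora = Package("Fedora", {"Storage model": "value B", "Primary language": "Java"})

table = comparison_table(chars, [dspace, fedora])
```

Because a Characteristic belongs to a Package Type rather than to a Package, any two packages of the same type automatically share the same set of rows, which is what makes an arbitrary side-by-side table possible.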
Speaking of mockups, this page and eight others can be found at http://dltj.org/temporary/registry-mockups/. Hopefully you can start to see the correlation between the E-R diagram and how the system will work. (This post was updated on 13-Jun-2014.)