Posted on October 17th, 2008
It strikes me that training for and documentation about our new Aleph ILS are aimed at three types of staff: system administrators, system librarians and staff (expert) users. Basically, system administrators are supposed to take care of the “technical stuff” (installing, upgrading, monitoring, backups, general system configuration, etc.), while staff users deal with the “real stuff”: cataloging, acquisition, circulation, etc. System librarians appear to be a kind of hybrid species, both technicians and librarians: information specialists with UNIX and vi experience.
At the Library of the University of Amsterdam we do not have these three staff types, we only have what we call system administrators and staff users. We as system administrators do both system administrator and system librarian tasks as defined in the Aleph documentation. Only hardware, operating system, network, server monitoring and system backups are taken care of by the University’s central ICT department.
There is no such job title as “system librarian” here; in fact I would not even know how to translate the term into Dutch. However, we do have terms for three different types of tasks: technical system administration, application administration and functional administration, which may be equivalent to the staff types mentioned above, although the terms are used in different ways and the boundaries between them are unclear. In the Netherlands we even have system administrators, application administrators and functional administrators, but these are all general terms, not limited to the library world.
Anyway, the need for three types of library system administration tasks and staff is typically related to the legacy systems of Library 1.0.
Library 0.0 (the catalog card era) had only one type: the expert staff user, also known as “librarian”.
Library 2.0 (also known as “next generation” library systems) will probably also need only one type of staff user in the libraries themselves, and I guess we will call these library staff users “system librarians”. These future system librarians will have knowledge of and experience in library and information issues, and will take care of configuring the integrated library information systems at their disposal through sophisticated, intuitive and user-friendly web admin interfaces.
The systems themselves will be hosted and monitored on remote servers, according to the SaaS model (Software as a Service), either by system vendors or by user consortia or in cooperation between both. Technical system administration will no longer be necessary at the local libraries.
Cataloging, tagging, indexing etc. will not be necessary at the local library level either, because metadata will be provided by publishers, or dynamically generated by harvesting and indexing systems, and enriched by our end users/clients via tagging. These metadata stores will also be hosted and administered on remote servers, either by publishers or again by cooperative user organisations.
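A likely transport for that harvesting is OAI-PMH, a protocol already in wide use today. As a minimal sketch: the code below parses a ListRecords response and extracts Dublin Core titles. The sample XML is hand-made for illustration and simplified compared to a real oai_dc record; only the namespaces are the standard OAI/DC ones.

```python
# Parse a (hand-made, simplified) OAI-PMH ListRecords response and pull out
# the Dublin Core titles. Namespaces are the standard OAI / DC ones.
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"

sample_response = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <dc xmlns="http://purl.org/dc/elements/1.1/">
          <title>An Example Record</title>
          <creator>Example, A.</creator>
        </dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def harvest_titles(xml_text):
    """Return all Dublin Core <title> values found in the response."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(DC + "title")]

print(harvest_titles(sample_response))  # ['An Example Record']
```

In a real harvester the XML would of course come over HTTP from the remote metadata store, with resumption tokens for paging through large sets.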
Of course this will have a lot of consequences for the current organisation and staffing of our libraries, but there will be plenty of time to adapt.
System librarians of the world: unite!
Posted on October 12th, 2008
In my post “LING – Library integration next generation” I mentioned Marshall Breeding’s TICER presentation “Library Automation Challenges for the next generation”.
Besides “Moving toward new generation of library automation”, one of his other two topics was “A Mandate for Openness”, about Open Source, Open Systems and Open Content.
Marshall Breeding distinguishes five types of Open Systems, three of which in my view are the most important:
- Closed Systems: black boxes, only accessible via the user interfaces provided by the developer, no programmable access to the system
- Open Source Model: all aspects of the system available to inspection and modification
- Open API model: the core application is closed and accessible only via the user interfaces provided by the developer, but third-party developers can create code against the published APIs or database tables
(The other two types are intermediate or combined: “standard RDBM systems”, where third-party developers can access the database schema, which in my view exposes only part of the system’s data; and “Open Source/Open API”.)
The “Open API model” in particular is an interesting development for the many libraries that work with commercial library systems. I have had some experience with two initiatives in this field: OCLC’s “WorldCat Grid” and Ex Libris’ “Open Platform”. One big and important difference between the two: the WorldCat Grid is about access to a specific database already available to the public at large, while Ex Libris’ Open Platform is about access to a number of commercial systems.
Interestingly, both initiatives consist of two parts: a set of open APIs and an open developers’ platform. Together these make a kind of marriage between commercial systems and an open source community possible. But how does this work in real life? How open is access to the APIs and the platform really?
Some of OCLC’s WorldCat Grid Services are freely accessible, others are accessible for OCLC members only.
Membership of the WorldCat Grid Developers’ Network is available to “IT professionals from: OCLC member institutions, content providers, other software vendors and publishers, as well as bloggers and others in the library field who see value in a collaborative network related to the development of new functionality for the WorldCat Grid.”
“Software code, snippets and API’s developed within the network will be openly available for members, and the world-at-large, to use and re-use.”
With Ex Libris’ Open Platform, access to the Developers’ Platform is open to Ex Libris customers only.
Access to the existing API mechanisms (the “X-Server” for Aleph, MetaLib and SFX, and Web Services for Primo) is also available to Ex Libris customers only. What will happen with newly developed APIs (conforming to new API standards like the DLF ILS Discovery Interface protocol) for new products is still unclear.
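In practice the X-Server interface amounts to HTTP GET requests that return XML. A minimal sketch of constructing such a request: the host name below is fictitious, and the `op`/`base`/`request` parameter names are quoted from memory of the X-Services documentation, so verify them against your own server before relying on them.

```python
# Build the URL for a hypothetical Aleph X-Server "find" request. The host is
# fictitious and the parameter names should be checked against the X-Services
# documentation for your own installation.
from urllib.parse import urlencode

def build_find_url(host, base, query):
    """Construct a simple X-Server find request URL."""
    params = {"op": "find", "base": base, "request": query}
    return "http://%s/X?%s" % (host, urlencode(params))

url = build_find_url("aleph.example.org", "USM01", "wrd=open library systems")
print(url)
```

The point of the open API model is exactly this: anything that can issue an HTTP request and parse XML can build on the system, without touching the closed core.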
In my view it does make sense to restrict the availability of open APIs to members or customers when they give access to licensed metadata or resources. But open APIs that access public data should be free to all.
It does NOT make sense to restrict access to tools developed on top of the open APIs to members or customers only.
Granting access to data should be the privilege of the owners of the data; granting access to tools that work with those data should be the privilege of the developers/owners of the tools.
In this respect the OCLC platform is more open than Ex Libris’, but it is still not completely open.
Of course, all this depends highly on the companies’ motives for supporting openness: is it commitment to openness, or fear of losing customers?
Posted on October 8th, 2008
The project for the implementation of Aleph as the new ILS for the Library of the University of Amsterdam started last week (October 2) with the official kick-off meeting. The Ex Libris project plan was presented to the members of the project team, bottlenecks were identified, and a lot of adjustments were made to the planning so that more tasks can be carried out simultaneously, and thus earlier.
The first steps are installing Aleph 18 and giving on-site training to everyone involved, using the newly installed local Aleph 18 system.
But of course, before anything can start, we need the hardware! The central ICT department of the University of Amsterdam (not part of the library) is responsible for configuring and providing the Aleph production server according to the official Ex Libris “Requirements for ALEPH 500 Installation”. And as always there is confusion about what the provider actually means, and as always there are conflicts between the provider’s requirements and the ICT department’s security policy.
As head of the Library Systems Department of the library and as coordinator of the project’s System Administration team, I have been acting this week as an intermediary between our software and hardware providers, passing information about volumes, partitions, database and scratch files, root access, IP addresses and swap areas.
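To give an idea of what a small part of this intermediary work looks like in practice: pre-installation checks such as “does the target partition have enough free space?” can easily be scripted. The mount point and threshold below are invented examples, not figures from the Ex Libris requirements document.

```python
# Check whether the filesystem holding `path` has at least `required_gb`
# gigabytes free. Mount point and threshold are invented examples only.
import shutil

def has_enough_space(path, required_gb):
    """True if the partition holding `path` has required_gb gigabytes free."""
    usage = shutil.disk_usage(path)
    return usage.free >= required_gb * 1024 ** 3

# In a real check you would pass the Aleph application partition, e.g. "/exlibris".
print(has_enough_space("/", 1))
```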
This makes you realise again that all these new web 2.0 systems and techniques are, in the end, completely dependent on the availability of correctly configured and constantly monitored machines, cables and electricity, and not least on all the technicians who know about hardware and networks.
Posted on October 5th, 2008
At the end of August I attended the “Technological Developments: Threats and Opportunities for Libraries” module of TICER’s Digital Libraries à la Carte 2008 at the University of Tilburg, The Netherlands.
One of the speakers was Marshall Breeding. His presentation “Library Automation Challenges for the next generation” consisted of three topics, one of which was “Moving toward new generation of library automation”.
He discussed “rethinking the ILS”. The old I(ntegrated) L(ibrary) S(ystem) was about integrating acquisition, serials, cataloguing, circulation, the OPAC and reporting for print material. Now we are moving towards a completely electronic information universe, so new means of integration (and also dis-integration!) are necessary.
Developments until now have been targeted at the front ends: new integrated web 2.0 user interfaces that can also be used in a “dis-integrated” way (by means of APIs that allow embedding portions of the user interface in other environments), such as Primo, Encore, WorldCat Local, AquaBrowser, VuFind, the eXtensible Catalog, etc.
The keyword here is “decoupling” the front end from the back end. But with these products that is not really the case: there is always a harvesting, indexing and enrichment component integrated into them that moves at least part of the content, and also the processing, to the front-end environment.
A new direction here is what Marshall Breeding calls “Comprehensive Resource Management”: the integration of all types of administration (acquisition, cataloging, OPAC, metasearching, linking, etc.) of all types of library resources (print and electronic, text and objects).
One and a half years ago (February 2007) I wrote an article about this, “My Ideal Knowledge Base”, in “SMUG 4 EU – Newsletter for MetaLib and SFX users” Issue 4 (page 14), targeted at the Ex Libris tools Aleph, MetaLib, SFX and DigiTool. I ended this vision of an ideal situation with: “Is this ideal image only a dream, or will it come true some day?”
According to Marshall Breeding it will take 2-3 years more to see library automation systems that follow this approach and 5-7 years for wider adoption. He also said that traditional ILS vendors were working on this, but that no public announcements had been made yet.
Exactly two weeks later, at IGeLU 2008 in Madrid, Ex Libris announced and presented their plans for URM (Unified Resource Management) and URD2 (Unified Resource Discovery and Delivery, meaning Primo). Eventually all of their existing products will be integrated into this new next-generation environment. The first release will focus on ERM (Electronic Resource Management).
Short term plans for existing tools are focused on preparing them for the new URM/URD2 environment. For instance SFX 4.0 will have a re-designed database ready for integration with URM 1.0.
MetaLib will see its final official version with minor release 4.3 in spring 2009. After that, a “next generation metasearch tool” will be developed, with a completely re-designed back end and metasearch engine, and Primo as the front end. Existing customers will be able to upgrade to this next-generation metasearch without paying a license fee for Primo (remote search option only).
Interesting times ahead…
Posted on October 3rd, 2008
I have had this domain name for a long time, before I started working with digital library systems, even before I knew about them. It was January 2000, at the peak of web 1.0.
My main motive was that I wanted an email address that I would not have to change every so often because of disappearing free email providers (my first email address was something at crosswinds.net). But I also wanted to create some kind of bridge, a virtual meeting place for the different fields I was interested in: art, history, IT, etc.
There were no blogs, no blogging software, none of the modern web 2.0 tools; I had to do everything with HTML and CSS.
A funny thing is that Pam’s Paper Pills blog compares old “commonplace books” with “modern blogs”.
My first real project that attracted some attention was my “Short guide to free email”.
A couple of years later I found myself kind of “in between careers”, moving away from IT and system development into what I then expected to be arts and humanities. I actually found myself somewhere in the middle in the end (where I still am right now).
I started adding more “literary” and “historical” texts to my website. But I never really got it going.
Until web 2.0 came along. First I moved everything to a WordPress environment, but I still did not have real content. I played around with a couple of different approaches; finally I decided to start a blog on digital libraries. One of many, but it would automatically be part of the current “virtual community” of the blogosphere and the web at large.
It took some time to think of topics not really covered by other well-known bloggers. Matters were complicated by the fact that I also have another site, which I had started using for a kind of “personal” blogging (http://lukask.blogspot.com).
But I think that for the next couple of years I may have a lot to blog about. I will be heavily involved in the implementation of Aleph at the Library of the University of Amsterdam, I have just been elected a member of the Steering Committee of IGeLU (the International Group of Ex Libris Users), we intend to get more involved in the new Ex Libris developers’ platform, and of course there is Ex Libris’ new URM/URD2 strategy to follow.
So, I hope this will be the first of many Library 2.0 blog posts.