Library2.0 and beyond
  • ReTweet @Reply – Twitter communities

    Posted on April 27th, 2009 Lukas Koster 1 comment


    In my post “Tweeting Libraries” among other things I described my Personal Twitter experience as opposed to Institutional Twitter use. Since then I have discovered some new developments in my own Twitter behaviour and some trends in Twitter at large: individual versus social.

    There have been some discussions on the web about the pros and cons, benefits and dangers of social networking tools like Twitter, focusing on “noise” (uninteresting trivial announcements) versus “signal” (meaningful content), but also on the risk of web 2.0 amounting to digital feudalism and being a possible vehicle for fascism (as argued by Andrew Keen).

    My kids say: “Twitter is for old people who think they’re cool”. According to them it’s nothing more than: “Just woke up; SEND”, “Having breakfast; SEND”, “Drinking coffee; SEND”, “Writing tweet; SEND”. For them Twitter is only about broadcasting trivialities, narcissistic exhibitionism, “noise”.
    For their own web communications they use chat (MSN/Messenger), SMS (mobile phone text messages), communities (Hyves, the Dutch counterpart of MySpace) and email. Basically I think young kids communicate online only within their groups of friends, with people they know.

    Just to get an overview: a tweet, or Twitter message, can basically be of three different types:

    • just plain messages, announcements
    • replies: reactions to tweets from others, characterised by the “@<twittername>” string
    • retweets: forwarding tweets from others, characterised by the letters “RT” (a rough classification sketch follows this list)
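
    Just to make the distinction concrete, here is a minimal sketch of how such a classification could be done in Python. It is my own simplification of the early Twitter conventions, not anything taken from Twitter itself:

        import re

        def classify_tweet(text):
            """Roughly classify a tweet as a retweet, a reply or a plain message.

            A simplification: it only looks at how the tweet starts, which is how
            the early "RT @name" and "@name" conventions worked.
            """
            if re.match(r"RT\s+@\w+", text):   # forwarded tweet, e.g. "RT @elag: ..."
                return "retweet"
            if re.match(r"@\w+", text):        # reaction addressed to another user
                return "reply"
            return "message"                   # plain announcement

        # A few made-up examples
        for t in ["Just woke up",
                  "@lukask Agreed, see you there",
                  "RT @elag: Programme for #elag09 is online"]:
            print(classify_tweet(t), "-", t)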

    Although a lot of people use Twitter in the “exhibitionist” way, I don’t do that myself at all. If I look at my Twitter behaviour of the past weeks, I almost only see “retweets” and “replies”.

    Both “replies” and “retweets” obviously were not features of the original Twitter concept; they came into being because Twitter users needed conversation.
    A reply is becoming more and more a replacement for short emails or mobile phone text messages, at least for me. These Twitter replies are not “monologues” but “dialogues”. If you don’t want everybody to read these, you can use a “Direct message” or “DM”.
    Retweets are used to forward interesting messages to the people who are following you, your “community” so to speak. No monologue, no dialogue, but sharing information with specific groups.
    The “@<twittername>” mechanism is also used to refer to another Twitter user in a tweet. In official Twitter terminology “replies” have been replaced by “mentions”.

    Retweets and replies are the building blocks of Twitter communities. My primary community consists of people and organisations related to libraries. I actually know only a small number of these people in person; most of them I have never met. The advantage of Twitter here is obvious: I get to know more people who are active in my professional area, I stay informed and up to date, and I can discuss topics. This is all about “signal”. If issues are too big for Twitter (more than 140 characters) we can use our blogs.
    But it’s not only retweets and replies that make Twitter communities work. Trivialities (“noise”) are equally important. They make you get to know people and in this way help create relationships built on trust.

    I experienced another compelling example of a very positive social use of Twitter last week, when a number of very interesting Library 2.0 conferences took place, none of which I could attend in person because of our ILS project.

    All of these conferences were covered on Twitter by attendees using the hashtags #elag09, #csnr09 and #ugul09. This phenomenon makes it possible for non-participants to follow all events and discussions at these conferences and even join in the discussions. Twitter at its best!
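
    As a minimal sketch of what “following a conference by hashtag” boils down to, here is a hedged Python illustration; the timeline and tweets are made up, only the hashtags come from the conferences mentioned above:

        def tweets_with_hashtags(tweets, hashtags):
            """Return only the tweets that mention at least one of the given hashtags."""
            wanted = {tag.lower() for tag in hashtags}
            return [t for t in tweets
                    if wanted & {word.lower() for word in t.split() if word.startswith("#")}]

        # A made-up timeline
        timeline = [
            "Great talk on linked data this morning #elag09",
            "Having breakfast",
            "Slides of the OPAC session are online #ugul09",
        ]
        print(tweets_with_hashtags(timeline, ["#elag09", "#csnr09", "#ugul09"]))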

    Twitter is just a tool, a means to communicate in many different ways. It can be used for good and for bad, and of course what is “good” and what is “bad” is up to the individual to decide.


  • Replacing our ILS, business as usual

    Posted on April 24th, 2009 Lukas Koster 2 comments
    [Image: catalog, © Peter Morville]

    As you may have noticed from some of my tweets, the Library of the University of Amsterdam, my place of work, is in the process of replacing its ILS (Integrated Library System). All in all this project, or rather these two projects (one selecting a new ILS, the other one implementing it), will have taken 18 months or more from the decision to go ahead until STP (Switch to Production), planned for August 15 this year. My colleague Bert Zeeman blogged about this (in Dutch) recently.

    One thing that has become absolutely clear to me is that replacing an ILS is not just about replacing one information system with another. It is about replacing an entire organisational structure of work processes, with a huge impact on all people involved. And in our case it affects two organisations: not only the Library of the University of Amsterdam but also the Media Library of the Hogeschool van Amsterdam. We have been managing library systems for both organisations in a mini consortial structure for a couple of years now. So the Media Library is facing a second ILS replacement within two years.

    While the decision was made for pressing technical reasons, and with an eye on preparing for future library 2.0 developments, it turned out to have substantial consequences for the organisation.
    This is the first time that I am participating in such a radical library system project. I have done a couple of projects implementing and upgrading metasearch and OpenURL link resolver tools in the last six years, but these are nothing compared to the current project. With those “add-on” tools, which started as a means of extending the library’s primary stream of information, only a relatively limited number of people were involved. But with an ILS you are talking about the core business of a library (still!) and about the day-to-day working life of everybody involved in acquisitions, cataloguing and circulation, as well as system administrators and system librarians.

    To make it even more complicated, the University Library is also switching from the old system’s proprietary bibliographic format to MARC21, because that is what the new system uses. Personally I think that the old system’s format is better (just like our German colleagues think about their move from MAB to MARC), but of course the advantages of using an internationally accepted standard outweigh this, as always. Maybe food for another blog post later…
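
    Purely as an illustration of what such a format conversion involves, here is a minimal sketch in Python. The proprietary field names and the chosen MARC21 tags are invented examples, not the actual mapping used in our project:

        # Hypothetical mapping from a proprietary record (represented here as a plain
        # dict) to MARC21-style tag/subfield structures. Source field names and the
        # selected MARC tags are illustrative only.
        FIELD_MAP = {
            "hoofdtitel": ("245", "a"),   # title proper
            "auteur":     ("100", "a"),   # main entry, personal name
            "jaar":       ("260", "c"),   # date of publication
        }

        def to_marc_fields(old_record):
            """Convert a proprietary record (dict) into a list of MARC-like fields."""
            fields = []
            for old_name, value in old_record.items():
                if old_name in FIELD_MAP:
                    tag, subfield = FIELD_MAP[old_name]
                    fields.append({"tag": tag, "subfields": {subfield: value}})
            return fields

        print(to_marc_fields({"hoofdtitel": "Lof der Zotheid",
                              "auteur": "Erasmus, Desiderius",
                              "jaar": "1511"}))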

    Last but not least, the Library is simultaneously doing a project for the implementation of RFID for self-check machines. The initial idea was to implement RFID in the old system and then just migrate everything to the new one. However, for various reasons it was recently decided to postpone RFID implementation until shortly after our ILS STP. Some initial tests have shown that this will probably work.

    And while all this is going on, all normal work needs to be taken care of too: “business as usual”.

    Now, looking at workflows: the way our individual departments have organised their workflows is partly dictated by the way the old system is designed. The new system obviously dictates workflows too, but in other areas. Although this new system is very flexible and highly configurable, there are still some local requirements that cannot be met by the new system.
    Of course this is NOT the way it should be! Systems should enable us to do what we want and how we want it! Hopefully new developments like Ex Libris’ URM and the very recently announced new OCLC WorldCat web-based ILS will serve users better.

    Talking about “very flexible and highly configurable”: although a very big advantage, this also makes it much more complicated and time consuming to implement the new system. Fortunately there are a lot of other libraries in The Netherlands and around the world using the new system that are willing to help us in every possible way. And this is highly appreciated!

    Other issues that make this project complicated:

    • unexpected issues, bottlenecks: these keep on coming
    • migration of data from old system: conversion of old to new format
    • implementing links with external systems like the student and staff databases, the financial system and the national union catalogue

    I think we will make STP on the planned date, but I also think we need to postpone a number of issues until after that. There will still be a lot of work to be done for my department after the project has finished.

    To end on a positive note: the new OPAC will be much nicer and more flexible than the old one. And in the end that is what we are doing this for: our patrons.


  • UMR – Unified Metadata Resources

    Posted on April 12th, 2009 Lukas Koster 7 comments

    One single web page as the single identifier of every book, author or subject


    I like the concept of “the web as common publication platform for libraries” and “every book its own url”, as described by Owen Stephens in two blog posts:
    “It’s time to change library systems”

    I’d suggest what we really need to think about is a common ‘publication’ platform – a way of all of our systems outputting records in a way that can then be easily accessed by a variety of search products – whether our own local ones, remote union ones, or even ones run by individual users. I’d go further and argue that platform already exists – it is the web!

    and “The Future is Analogue”

    If every book in your catalogue had it’s own URL – essentially it’s own address on your web, you would have, in a single step, enabled anyone in the world to add metadata to the book – without making any changes to the record in your catalogue.

    This concept of identifying objects by URL (Uniform Resource Locator), or maybe better URI (Uniform Resource Identifier), is central to the Semantic Web, which uses RDF (Resource Description Framework) as a metadata model.
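
    A minimal sketch of what “every book, author and subject its own URI” looks like as RDF triples, using the rdflib Python library; the example URIs are invented placeholders:

        from rdflib import Graph, URIRef, Literal
        from rdflib.namespace import DCTERMS

        g = Graph()

        # Invented example URIs: one for the work, one for the author, one for the subject.
        work    = URIRef("http://example.org/work/praise-of-folly")
        author  = URIRef("http://example.org/person/erasmus")
        subject = URIRef("http://example.org/subject/satire")

        g.add((work, DCTERMS.title,   Literal("Praise of Folly")))
        g.add((work, DCTERMS.creator, author))
        g.add((work, DCTERMS.subject, subject))

        # rdflib 6+ returns a string here; older versions return bytes.
        print(g.serialize(format="turtle"))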

    As a matter of fact, at ELAG 2008 I saw Jeroen Hoppenbrouwers (“Rethinking Subject Access”) explaining his idea of doing the same for Subject Headings using the Semantic Web concept of triples. Every subject its own URL or web page. He said: “It is very easy. You can start doing this right away”.

    [Image: ELAG 2008 slide, © Jeroen Hoppenbrouwers]

    To make the picture complete we only need the third essential component: every author his or her or its own URL!

    This ideal situation would have to conform to the Open Access guidelines of course. One single web page serving as the single identifier of every book, author or subject, available for everyone to link their own holdings, subscriptions, local keywords and circulation data to.

    In real life we see a number of current initiatives on the web by commercial organisations and non-commercial groups, mainly in the area of “books” (or rather “publications”) and “authors”. “Subjects” apparently is a less appealing area to start something like this, because stand-alone “subjects” without anything to link them to are nothing at all, whereas you always have “publications” and “authors”, even without “subjects”. The only project I know of is MACS (Multilingual Access to Subjects), which is hosted on Jeroen Hoppenbrouwers’ domain.

    For publications we have OCLC’s WorldCat, LibraryThing and Open Library, to name just a few. And of course these global initiatives have had their regional and local counterparts for many years already (Union Catalogues, Consortia models). But this is again a typical example of multiple parallel data stores of the same type of entities. The idea apparently is that you want to store everything in one single database aiming to be complete, instead of the ideal situation of single individual URIs floating around anywhere on the web.
    Ex Libris’ new Unified Resource Management development (URM, and yes: the title of this blog post is an ironic allusion to that acronym) does promote sharing of metadata, but it does so within yet another separate system into which metadata from other systems can be copied.

    The same goes for authors. We have WorldCat Identities, VIAF, local authority schemes like DAI, etc. Again, we see parallel silos instead of free floating entities.

    Of course, the ideal picture sketched above is much too simple. We have to be sure which version of a publication, which author and which translation of a subject, for instance, we are dealing with. For publications this means that we need to implement FRBR (in short: an original publication/work and all of its manifestations), for authors we need author name thesauri, and for subjects we need multilingual access.

    I have tried to illustrate this in this simplified and incomplete diagram:

    © Lukas Koster

    In this model libraries can use their local URI-objects representing holdings and copies for their acquisitions and circulation management, while the bibliographic metadata stay out there in the global, open area. Libraries (and individuals of course) can also attach local keywords to the global metadata, which in turn can become available globally (“social tagging”).
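
    To make the model a bit more concrete, here is a hedged sketch, again with rdflib; all URIs are invented and the predicates come from a made-up placeholder vocabulary, not an established one:

        from rdflib import Graph, URIRef, Literal, Namespace

        EX = Namespace("http://example.org/vocab/")   # placeholder vocabulary, not a real standard

        g = Graph()

        # A globally shared URI for a manifestation (invented example)
        manifestation = URIRef("http://example.org/manifestation/lof-der-zotheid-1969-nl")

        # A local URI owned by one library (also invented)
        holding = URIRef("http://library.example.edu/holding/12345")

        g.add((holding, EX.exemplarOf, manifestation))              # this copy belongs to that manifestation
        g.add((holding, EX.shelfmark, Literal("UBM 123 A 4")))      # local circulation/holdings data
        g.add((manifestation, EX.localTag, Literal("humanisme")))   # local keyword attached to the global record

        print(g.serialize(format="turtle"))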

    It is obvious that the current initiatives have dealt with these issues with various levels of success. Some examples to illustrate this:

    • Work: Desiderius Erasmus, Encomium Moriae (Greek), Laus Stultitiae (Latin), Lof der Zotheid (Dutch), Praise of Folly (English)
    • Author: David Mitchell

    Authors
    Good:

    Medium:

    Bad:

    Publications
    Good:

    Bad:

    These findings seem to indicate that some level of coordination (which the commercial initiatives apparently have implemented better than the non-commercial ones) is necessary in order to achieve the goal of “one URI for each object”.

    Who wants to start?
