Posted on July 8th, 2010
Meeting new user expectations at ELAG 2010
In the near future libraries and librarians will be very different from what they are now. That’s the overall impression I took away from the ELAG 2010 conference in Helsinki, June 8-11, 2010. ELAG stands for “European Library Automation Group”, which is an indication of its age (34 years): “automation” was then what is now “ICT”. The meetings are characterised by a combination of plenary presentations and parallel workshops.
This year’s theme was “Meeting new users’ expectations”, where “users” refers to “end users”, “customers” or “patrons”, as library customers are also called. When you hear the phrase “end user expectations” in relation to library technology, you think first of all of front-end functionality (user interfaces and services) and the changing experiences there. A number of presentations and workshops were indeed focused on user experience and user studies.
Keywords: discovery, guidance, knowing/engaging users, relevance ranking, context.
But a considerable number of sessions, maybe even the majority, were dedicated to backend technology and systems development.
Keywords: webservices, API, REST, JSON, XML, Xpath, SOLR, data wells, aggregation, identifiers, FRBR, linked data, RDF.
It is becoming ever more obvious that improving libraries’ digital user experience cannot be accomplished without proper data infrastructures and information systems and services. This is directly related to the shift of existing library traditions to the new web experience, which was the leading topic of the presentation given by Rosemie Callewaert and myself: “Discovering the library collections”. We are experiencing a move from closed local physical collections to open networked digital information.
First of all, library collections will be digital. If you don’t believe that, look at the music industry: the recording of stories started some 5,000 years ago, while the first music recordings date only from the 19th century.
Next, collections will be networked, interlinked and virtual. Data, metadata, and digital objects will be fetched from all kinds of databases on the web, not only traditional bibliographic metadata from library catalogues, and mixed into new result sets, using mashup or linked data techniques.
In this open digital environment, existing and new library systems and discovery tools simply cannot incorporate all possible data services available now and in the future. That is why libraries (or maybe we should start saying ‘information brokers’) MUST have ‘developer skills’ in one form or another. This can range from building your own data wells and discovery tools on one end to using existing online service builders for enriching third party frontends on the other, and everything in between, with different levels of skills required.
Another inevitable development in this open information environment is “cooperation” in all kinds of areas with all kinds of partners in all kinds of forms. Cooperation in development, procurement, hosting and sharing of software (systems, services) and aggregation of data, with libraries, museums, archives, educational institutions, commercial partners, etc.
Last but not least there is the question of the value of the physical library building in the digital age. A number of people stress the importance of libraries as places where students like to come to study. But being a learning center in my view is not part of the core business of a library, which is providing access to information. In pre-digital times it was obviously a natural and necessary thing to study information at the location of the physical collection. But this direct physical link between access to and processing of information does not exist anymore in an open digital information environment.
Back to the ELAG 2010 theme “Meeting new users’ expectations”. In the last slide of our presentation we asked the question “Can LIBRARIES meet new user expectations?” Because we did not have time to discuss it then and there, I will answer it here: “No, not libraries as they are now!”.
New users don’t expect libraries, they expect information services. Libraries were once the best way of providing access to information. Instead of taking the defensive position of trying to secure their survival as an organisation (as is the natural aspiration of organisations), libraries should focus on finding new ways of achieving their original mission. This may even lead to the disappearance of libraries, or rather the replacement of the library organisation by other organisational structures. This may of course vary between types of libraries (public, academic, special, etc.).
We may need to redefine the concept of library from “the location of a physical collection” to “a set of information services administered by a group of specialists”.
To summarise: the new digital and networked nature of collections of information leads to a focus on new information services, supported by library staff with information and technology skills, in new organisational structures and in cooperation with other organisations.
Posted on June 19th, 2009
Linked Data and bibliographic metadata models
Some time after I wrote “UMR – Unified Metadata Resources“, I came across Chris Keene’s post “Linked data & RDF : draft notes for comment“, “just a list of links and notes” about Linked Data, RDF and the Semantic Web, put together to start collecting information about “a topic that will greatly impact on the Library / Information management world“.
While reading this post and working my way through the links on that page, I started realising that Linked Data is exactly what I tried to describe as One single web page as the single identifier of every book, author or subject. I did mention Semantic Web, URI’s and RDF, but the term “Linked Data” as a separate protocol had escaped me.
The concept of Linked Data was described by Tim Berners-Lee, the inventor of the World Wide Web. Whereas the World Wide Web links documents (pages, files, images), which are basically resources about things (“Information Resources” in Semantic Web terms), Linked Data (or the Semantic Web) links raw data and real life things (“Non-Information Resources”).
There are several definitions of Linked Data on the web, but here is my attempt to give a simple definition of it (loosely based on the definition in Structured Dynamics’ Linked Data FAQ):

Linked Data is a methodology for providing relationships between things (data, concepts and documents) anywhere on the web, using URIs for identifying, RDF for describing and HTTP for publishing these things and relationships, in a way that they can be interpreted and used by humans and software.
I will try to illustrate the different aspects using some examples from the library world. The article is rather long, because of the nature of the subject, then again the individual sections are a bit short. But I do supply a lot of links for further reading.
Data is relationships
The important thing is that “data is relationships”, as Tim Berners-Lee says in his recent presentation for TED.
Before going into relationships between things, I have to point out the important distinction between abstract concepts and real life things, which are “manifestations” of the concepts. In Object modeling these are called “classes” (abstract concepts, types of things) and “objects” (real life things, or “instances” of “classes“).
- the class book can have the instances/objects “Cloud Atlas“, “Moby Dick“, etc.
- the class person can have the instances/objects “David Mitchell“, “Herman Melville“, etc.
In the Semantic Web/RDF model the concept of triples is used to describe a relationship between two things: subject – predicate – object, meaning: a thing has a relation to another thing, in the broadest sense:
- a book (subject) is written by (predicate) a person (object)
You can also reverse this relationship:
- a person (subject) is the author of (predicate) a book (object)
The person in question is only an author because of his or her relationship to the book. The same person can also be a mother of three children, an employee of a library, and a speaker at a conference.
Moreover, and this is important: there can be more than one relationship between the same two classes or types of things. A book (subject) can also be about (predicate) a person (object). In this case the person is a “subject” of the book, that can be described by a “keyword”, “subject heading”, or whatever term is used. A special case would be a book, written by someone about himself (an autobiography).
The problem with most legacy systems, and library catalogues as an example of these, is that a record for let’s say a book contains one or more fields for the author (or at best a link to an entry in an authority file or thesaurus), and separately one or more fields for subjects. This way it is not possible to see books written by an author and books about the same author in one view, without using all kinds of workarounds, link resolvers or mash-ups.
Using two different relationships that link to the same thing would provide for an actual view or representation of the real world situation.
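The triple model above can be sketched in a few lines of plain Python. This is purely illustrative (a tiny in-memory set of tuples, with invented titles and predicate names), not a real RDF library, but it shows how two different relationships to the same person appear together in one view:

```python
# A minimal in-memory "triple store": each triple is (subject, predicate, object).
# All names here are invented for illustration.
triples = {
    ("Cloud Atlas", "writtenBy", "David Mitchell"),
    ("Moby Dick", "writtenBy", "Herman Melville"),
    ("Some Biography", "about", "Herman Melville"),
}

def books_related_to(person):
    """Everything linking a book to this person, whatever the relationship."""
    return {(s, p) for (s, p, o) in triples if o == person}

# Both the book written BY Melville and the book ABOUT him show up at once:
print(books_related_to("Herman Melville"))
```

In a traditional catalogue record, the author field and the subject field would never meet; here they are just two predicates pointing at the same object.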
Another important option of Linked Data/RDF: a certain thing can have as a property a link to a concept (or “class”), describing the nature of the thing: “object Cloud Atlas” has type “book“; “object David Mitchell” has type “person“; “object Cloud Atlas” is written by “object David Mitchell“.
And of course, the property/relationship/predicate can also link to a concept describing the nature of the link.
Anywhere on the web
So far so good. But you may argue that this relationship theory is not very new. Absolutely right, but up until now this data-relationship concept has mainly been used with a view to the inside, focused on the area of the specific information system in question, because of the nature and the limitations of the available technology and infrastructure.
The “triple” model is of course exactly the same as the long-standing methodology of Entity Relationship Diagrams (ERD), with which relationships between entities (= “classes”) are described. An ERD is typically used to generate a database that contains data in a specific information system. But ERDs could just as well be used to describe Linked Data on the web.
Information systems, such as library catalogs, have been, and still are, for the greatest part closed containers of data, or “silos” without connections between them, as Tim Berners-Lee also mentions in his TED presentation.
Lots of these silo systems are accessible with web interfaces, but this does not mean that items in these closed systems with dedicated web front ends can be linked to items in other databases or web pages. Of course these systems can have API‘s that allow system developers to create scripts to get related information from other systems and incorporate that external information in the search results of the calling system. This is what is being done in web 2.0 with so-called mash-ups.
But in this situation you need developers who know how to make scripts using specific scripting languages for all the different proprietary API’s that are being supported for all the individual systems.
If Linked Data were a global standard and all open and closed systems and websites supported RDF, then all these links would be available automatically to RDF-enabled browser and client software, using SPARQL, the RDF Query Language.
- Linked Data/RDF can be regarded as a universal API.
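The core idea behind a SPARQL query is matching triple patterns with variables. Real SPARQL needs an RDF engine, but the principle can be sketched naively in Python (data and variable names invented for illustration; this is a sketch of the idea, not a SPARQL implementation):

```python
# Naive triple pattern matching, SPARQL-style: terms starting with "?"
# are variables, everything else must match exactly.
triples = [
    ("Cloud Atlas", "type", "book"),
    ("Cloud Atlas", "writtenBy", "David Mitchell"),
    ("David Mitchell", "type", "person"),
]

def match(pattern, triples):
    """Return one dict of variable bindings per matching triple."""
    results = []
    for triple in triples:
        bindings = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                bindings[pat] = val     # bind the variable to this value
            elif pat != val:
                break                   # constant term does not match
        else:
            results.append(bindings)
    return results

# Roughly "SELECT ?s WHERE { ?s type book }" in miniature:
print(match(("?s", "type", "book"), triples))
```

Because every RDF source speaks the same triple model, one such query mechanism works against all of them, which is why Linked Data can act as a universal API where today each silo needs its own proprietary scripting.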
The good thing about Linked Data is that it is possible to use Linked Data mechanisms to link to legacy data in silo databases. You just need to provide an RDF wrapper for the legacy system, as has been done with the Library of Congress Subject Headings.
Some examples of available tools for exposing legacy data as RDF:
- Triplify – a web applications plugin that converts relational database structures into RDF triples
- D2R Server – a tool for publishing relational databases on the Semantic Web
- wp-RDFa – a WordPress plugin that adds some RDF information about Author and Title to WordPress blog posts
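What such a wrapper does, in essence, is map rows of a relational table onto triples. A minimal sketch of that idea, using an in-memory SQLite table (the table layout, base URI and property names are all invented for illustration; a real wrapper like Triplify or D2R also emits proper URIs, datatypes and configuration-driven mappings):

```python
import sqlite3

# An invented legacy catalogue table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE books (id INTEGER, title TEXT, author TEXT)")
db.execute("INSERT INTO books VALUES (1, 'Cloud Atlas', 'David Mitchell')")

BASE = "http://example.org/catalogue/"  # assumed base URI for the wrapper

def rows_to_triples(db):
    """Expose each table row as a set of triples about one resource."""
    triples = []
    for book_id, title, author in db.execute("SELECT id, title, author FROM books"):
        subject = f"{BASE}book/{book_id}"       # one URI per row
        triples.append((subject, "dc:title", title))
        triples.append((subject, "dc:creator", author))
    return triples

for t in rows_to_triples(db):
    print(t)
```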
Of course, RDF that is generated like this will very probably only expose objects to link TO, not links to RDF objects external to the system.
Also, Linked Data can be used within legacy systems, for mixing legacy and RDF data, open and closed access data, etc. In this case we have RDF triples that have a subject URI from one data source and an object URI from another data source. In a situation with interlinked systems it would for instance be possible to see that the author of a specific book (data from a library catalog) is also speaking at a specific conference (data from a conference website). Objects linked together on the web using RDF triples are also known as an “RDF graph”. With RDF-aware client software it is possible to navigate through all the links to retrieve additional information about an object.
URI’s (“Uniform Resource Identifiers”) are necessary for uniquely identifying and linking to resources on the web. A URI is basically a string that identifies a thing or resource on the web. All “Information Resources”, or WWW pages, documents, etc. have a URI, which is commonly known as a URL (Uniform Resource Locator).
With Linked Data we are looking at identifying “Non-information Resources” or “real world objects” (people, concepts, things, even imaginary things), not web pages that contain information about these real world objects. But it is a little more complicated than that. In order to honour the requirement that a thing and its relations can be interpreted and used by humans and software, we need at least 3 different representations of one resource (see: How to publish Linked Data on the web):
- Resource identifier URI (identifies the real world object, the concept, as such)
- RDF document URI (a document readable for semantic web applications, containing the real world object’s RDF data and relationships with other objects)
- HTML document URI (a document readable for humans, with information about the real world object)
For instance, there could be a Resource Identifier URI for a book called “Cloud Atlas“. The web resource at that URI can redirect an RDF enabled browser to the RDF document URI, which contains RDF data describing the book and its properties and relationships. A normal HTML web browser would be redirected to the HTML document URI, for instance a web page about the book at the publisher’s website.
There are several methods of redirecting browsers and application to the required representation of the resource. See Cool URIs for the Semantic Web for technical details.
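One of the methods described in “Cool URIs for the Semantic Web” is the 303 redirect: a request for the resource identifier URI is answered with a redirect to either the RDF document or the HTML document, depending on the client’s Accept header. The decision logic can be sketched as follows (the URIs are invented for illustration; a real server would do fuller content negotiation):

```python
# Simplified content negotiation for a resource identifier URI:
# semantic web clients ask for RDF, ordinary browsers ask for HTML.
def negotiate(accept_header):
    """Return (HTTP status, redirect target) for a request to the resource URI."""
    if "application/rdf+xml" in accept_header:
        return 303, "http://example.org/doc/cloud-atlas.rdf"   # RDF document URI
    return 303, "http://example.org/page/cloud-atlas.html"     # HTML document URI

print(negotiate("application/rdf+xml"))  # an RDF-aware client
print(negotiate("text/html"))            # a normal web browser
```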
There are also RDF enabled browsers that transform RDF into web pages readable by humans, like the FireFox addon “Tabulator“, or the web based Disco and Marbles browsers, both hosted at the Free University Berlin.
RDF, vocabularies, ontologies
RDF, or Resource Description Framework, is, like the name suggests, just a framework. It uses XML (or N3, a simpler non-XML notation) to describe resources by means of relationships. RDF can be implemented in vocabularies or ontologies, which are sets of RDF classes describing objects and relationships for a given field.
Basically, anybody can create an RDF vocabulary by publishing an RDF document defining the classes and properties of the vocabulary, at a URI on the web. The vocabulary can then be used in a resource by referring to the namespace (the URI) and the classes in that RDF document.
A nice and useful feature of RDF is that more than one vocabulary can be mixed and used in one resource.
Also, a vocabulary itself can reference other vocabularies and thereby inherit well established classes and properties from other RDF documents.
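The mechanics of mixing vocabularies come down to namespaces: a prefixed property name resolves to a full URI in the vocabulary where it is defined. A simplified sketch (the resource data is invented; the two namespace URIs are the real Dublin Core and FOAF namespaces):

```python
# Prefix table: short names mapped to the vocabulary namespace URIs.
PREFIXES = {
    "dc":   "http://purl.org/dc/elements/1.1/",   # Dublin Core elements
    "foaf": "http://xmlns.com/foaf/0.1/",          # Friend of a Friend
}

def expand(qname):
    """Turn a prefixed name like 'dc:title' into its full URI."""
    prefix, local = qname.split(":", 1)
    return PREFIXES[prefix] + local

# One resource described with properties from two different vocabularies:
resource = [
    ("dc:title", "Cloud Atlas"),
    ("foaf:maker", "David Mitchell"),
]

for prop, value in resource:
    print(expand(prop), value)
```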
Another very useful feature of RDF is that objects can be linked to similar object resources describing the same real world thing. This way confusion about which object we are talking about, can be avoided.
A couple of existing and well used RDF vocabularies/ontologies:
- RDF – the base RDF vocabulary
- RDFS (for RDF Schema)
- DC (for Dublin Core)
- FOAF (for FOAF- Friend of a Friend) – online identities and social networks
- SKOS (for SKOS – Simple Knowledge Organisation System) – thesauri, classification schemes, subject heading systems and taxonomies
- OWL (for OWL – Web Ontology Language)
(By the way, the links in the first column (to the RDF files themselves) may act as an illustration of the redirection mechanism described before. Some of them may link to either the RDF file with the vocabulary definition itself, or to a page about the vocabulary, depending on the type of browser you use: rdf-aware or not.)
A special case is:
- RDFa – a sort of microformat without a vocabulary of its own, which relies on other vocabularies for turning XHTML page attributes into RDF
As an example, some fragments of an RDF file describing the book “Cloud Atlas” (as generated by the RDF Book Mashup):
<?xml version="1.0" encoding="UTF-8" ?>
<dc:publisher>Random House Trade Paperbacks</dc:publisher>
<dc:title>Cloud Atlas: A Novel</dc:title>
<rdfs:label>Cloud Atlas: A Novel</rdfs:label>
<rdfs:label>RDF document about the book: Cloud Atlas: A Novel</rdfs:label>
<rdfs:label>Review number 1 about: Cloud Atlas: A Novel</rdfs:label>
<rdfs:label>RDF Book Mashup</rdfs:label>
A partial view on this RDF file with the Marbles browser:
It seems obvious that Linked Data can be very useful in providing a generic infrastructure for linking data, metadata and objects, available in numerous types of data stores, in the online library world. With such a networked online data structure, it would be fairly easy to create all kinds of discovery interfaces for bibliographic data and objects. Moreover, it would also be possible to link to non-bibliographic data that might interest the users of these interfaces.
A brief and incomplete list of some library related Linked Data projects, some of which already mentioned above:
- RDF BookMashup – Integration of Web 2.0 data sources like Amazon, Google or Yahoo into the Semantic Web.
- Library of Congress Authorities – Exposing LoC Authorities and Vocabularies to the web using URIs
- DBpedia – Exposing structured data from Wikipedia to the web
- LIBRIS – Linked Data interface to Swedish LIBRIS Union catalog
- Scriblio+Wordpress+Triplify – “A social, semantic OPAC Union Catalogue”
And what about MARC, AACR2 and RDA? Is there a role for them in the Linked Data environment? RDA is supposed to be the successor of AACR2 as a content standard that can be used with MARC, but also with other encoding standards like MODS or Dublin Core.
The RDA Entity Relationship Diagram, which incorporates FRBR as well, can of course easily be implemented as an RDF vocabulary, which could be used to create a universal Linked Data library network. It really does not matter what kind of internal data format the connected systems use.
Posted on April 27th, 2009
In my post “Tweeting Libraries” among other things I described my Personal Twitter experience as opposed to Institutional Twitter use. Since then I have discovered some new developments in my own Twitter behaviour and some trends in Twitter at large: individual versus social.
There have been some discussions on the web about the pros and cons and the benefits and dangers of social networking tools like Twitter, focusing on “noise” (uninteresting trivial announcements) versus “signal” (meaningful content), but also on the risk of web 2.0 being about digital feudalism, and being a possible vehicle for fascism (as argued by Andrew Keen).
My kids say: “Twitter is for old people who think they’re cool“. According to them it’s nothing more than: “Just woke up; SEND”, “Having breakfast; SEND”, “Drinking coffee; SEND”, “Writing tweet; SEND”. For them Twitter is only about broadcasting trivialities, narcissistic exhibitionism, “noise”.
For their own web communications they use chat (MSN/Messenger), SMS (mobile phone text messages), communities (Hyves, the Dutch counterpart of MySpace) and email. Basically I think young kids communicate online only within their groups of friends, with people they know.
Just to get an overview: a tweet, or Twitter message, can basically be of three different types:
- just plain messages, announcements
- replies: reactions to tweets from others, characterised by the “@<twittername>” string
- retweets: forwarding tweets from others, characterised by the letters “RT“
Although a lot of people use Twitter in the “exhibitionist” way, I don’t do that myself at all. If I look at my Twitter behaviour of the past weeks, I almost only see “retweets” and “replies”.
Neither “replies” nor “retweets” were features of the original Twitter concept; they came into being because Twitter users needed conversation.
A reply is becoming more and more a replacement for short emails or mobile phone text messages, at least for me. These Twitter replies are not “monologues”, but “dialogues”. If you don’t want everybody to read these, you can use a “Direct message” or “DM“.
Retweets are used to forward interesting messages to the people who are following you, your “community” so to speak. No monologue, no dialogue, but sharing information with specific groups.
The “@<twittername>” mechanism is also used to refer to another Twitter user in a tweet. In official Twitter terminology “replies” have been replaced by “mentions“.
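Since these types are purely textual conventions, telling them apart is straightforward. A simplified sketch (it ignores edge cases like mid-tweet mentions or the later built-in retweet feature):

```python
# Classify a tweet by the textual conventions described above:
# "RT " marks a retweet, a leading "@" marks a reply, anything else is plain.
def tweet_type(text):
    if text.startswith("RT "):
        return "retweet"
    if text.startswith("@"):
        return "reply"
    return "plain"

print(tweet_type("RT @digicmb: great post"))      # retweet
print(tweet_type("@wowter thanks for the link!")) # reply
print(tweet_type("Just woke up"))                 # plain
```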
Retweets and replies are the building blocks of Twitter communities. My primary community consists of people and organisations related to libraries. Only a small number of these people I actually know in person; most of them I have never met. The advantage of Twitter here is obvious: I get to know more people who are active in my professional area, I stay informed and up to date, and I can discuss topics. This is all about “signal”. If issues are too big for Twitter (more than 140 characters) we can use our blogs.
But it’s not only retweets and replies that make Twitter communities work. Trivialities (“noise”) are equally important. They make you get to know people and in this way help create relationships built on trust.
I experienced another compelling example of a very positive social use of Twitter last week, when there were a number of very interesting Library 2.0 conferences, none of which I could attend in person because of our ILS project:
- ELAG 2009 (European Library Automation Group) in Bratislava, April 22-24
- Dutch Conference Social Networks 2009 in Rotterdam, April 22
- Ugame Ulearn 2009 in Delft, April 23
All of these conferences were covered on Twitter by attendees using the hashtags #elag09, #csnr09 and #ugul09. This phenomenon makes it possible for non-participants to follow all events and discussions at these conferences and even join in the discussions. Twitter at its best!
Twitter is just a tool, a means to communicate in many different ways. It can be used for good and for bad, and of course what is “good” and what is “bad” is up to the individual to decide.
Posted on April 12th, 2009
One single web page as the single identifier of every book, author or subject
I like the concept of “the web as common publication platform for libraries“, and “every book its own url“, as described by Owen Stephens in two blog posts:
“It’s time to change library systems”
I’d suggest what we really need to think about is a common ‘publication’ platform – a way of all of our systems outputting records in a way that can then be easily accessed by a variety of search products – whether our own local ones, remote union ones, or even ones run by individual users. I’d go further and argue that platform already exists – it is the web!
and “The Future is Analogue”
If every book in your catalogue had it’s own URL – essentially it’s own address on your web, you would have, in a single step, enabled anyone in the world to add metadata to the book – without making any changes to the record in your catalogue.
This concept of identifying objects by URL (Uniform Resource Locator), or perhaps better URI (Uniform Resource Identifier), is central to the Semantic Web, which uses RDF (Resource Description Framework) as a metadata model.
As a matter of fact, at ELAG 2008 I saw Jeroen Hoppenbrouwers (“Rethinking Subject Access”) explain his idea of doing the same for subject headings, using the Semantic Web concept of triples. Every subject its own URL or web page. He said: “It is very easy. You can start doing this right away“.
To make the picture complete we only need the third essential component: every author his or her or its own URL!
This ideal situation would have to conform to the Open Access guidelines of course. One single web page serving as the single identifier of every book, author or subject, available for everyone to link their own holdings, subscriptions, local keywords and circulation data to.
In real life we see a number of current initiatives on the web by commercial organisations and non commercial groups, mainly in the area of “books” (or rather “publications”) and “authors”. “Subjects” apparently is a less appealing area to start something like this, because obviously stand-alone “subjects” without anything to link them to are nothing at all, whereas you always have “publications” and “authors”, even without “subjects”. The only project I know of is MACS (Multilingual Access to Subjects), which is hosted on Jeroen Hoppenbrouwers’ domain.
For publications we have OCLC’s WorldCat, Librarything, Open Library, to name just a few. And of course these global initiatives have had their regional and local counterparts for many years already (Union Catalogues, Consortia models). But this is again a typical example of multiple parallel data stores of the same type of entities. The idea apparently is that you want to store everything in one single database aiming to be complete, instead of the ideal situation of single individual URI’s floating around anywhere on the web.
Ex Libris’ new Unified Resource Management development (URM – and yes, the title of this blog post is an ironic allusion to that acronym) promotes sharing of metadata, but it does this within another separate system into which metadata from other systems can be copied.
Of course, the ideal picture sketched above is much too simple. We have to be sure which version of a publication, which author and which translation of a subject for instance we are dealing with. For publications this means that we need to implement FRBR (in short: an original publication/work and all of its manifestations), for authors we need author names thesauri, for subjects multilingual access.
I have tried to illustrate this in this simplified and incomplete diagram:
In this model libraries can use their local URI-objects representing holdings and copies for their acquisitions and circulation management, while the bibliographic metadata stay out there in the global, open area. Libraries (and individuals of course) can also attach local keywords to the global metadata, which in turn can become available globally (“social tagging”).
It is obvious that the current initiatives have dealt with these issues with various levels of success. Some examples to illustrate this:
- Work: Desiderius Erasmus – Encomium Moriae (Greek), Laus Stultitiae (Latin), Lof der Zotheid (Dutch), Praise of Folly (English)
- Author: David Mitchell
- Erasmus in WorldCat Identities (one ID, many forms)
- David Mitchell in WorldCat Identities (one id per author)
- David Mitchell in VIAF (one id per author)
- Erasmus in OpenLibrary (one id, one incomplete form)
- Erasmus in VIAF (one id, although from The Netherlands, preferred forms are Swedish, French and German)
- Erasmus in Librarything (no identifier, numerous forms and occurrences)
- David Mitchell in Librarything (one form, “David Mitchell is composed of at least 12 distinct authors“, no way to distinguish)
- David Mitchell in OpenLibrary (one id for multiple authors)
- Erasmus “Praise of folly” in Librarything (numerous entries for all different title variations)
- Erasmus “Praise of folly” in OpenLibrary (numerous entries for all different title variations)
These findings seem to indicate that some level of coordination (which the commercial initiatives apparently have implemented better than the non-commercial ones) is necessary in order to achieve the goal of “one URI for each object”.
Who wants to start?
Posted on March 30th, 2009
Should libraries use Twitter? Some web 2.0 librarians think so; other people say it’s just a childish hype. Alice de Jong of the Peace Palace Library in The Hague recently wrote an article in the Dutch magazine Informatieprofessional (in Dutch), saying libraries should use Twitter as a means of quick and direct communication with their patrons. The Peace Palace Library uses Twitter as an automatic newsfeed.
An interesting question is: how can an in essence exhibitionist individual social networking tool be used in an institutional way?
What is Twitter anyway?
Wikipedia says: “Twitter is a social networking and micro-blogging service that enables its users to send and read other users’ updates known as tweets. Tweets are text-based posts of up to 140 characters in length.” Basically a Twitter user broadcasts short messages to the web. Everybody can read these through that user’s personal Twitter page, or via an RSS feed on that page. Twitter users can subscribe to other Twitter users’ tweets by “following” them. In that case all followed tweets appear in their own Twitter stream. Twitter users can also reply to other tweets; this way it becomes a social networking environment. Tweets and replies are public, but there is also the option of “Direct messages”, which are private.
Twitter can be used via the Twitter website, or through applications on mobile phones (like Twitterfon), on PCs (like TweetDeck), or through widgets in other websites, like TwitterGadget in iGoogle.
Exhibitionism: that’s what Twitter originally is of course. Twitter asks “What are you doing?”. You simply tell the whole world (or world wide web at least) what you’re up to. A symptom of the egocentricity of this decade.
But somehow egocentric exhibitionism turned into professional cooperation and friendly conversation.
I first heard of Twitter at ELAG2008, less than a year ago. Besides the tag to be used for blog posts about the conference, there was also an announcement for a Twitter hashtag to be used. (And this at a conference where social tagging was promoted against controlled vocabularies!). There were a number of library bloggers there, who were also on Twitter: digicmb, Wowter, PatrickD.
Most people I follow or who are following me are library people. Most of these also blog. So there is a kind of library 2.0 community on Twitter, like there are all kinds of communities there. Some of my Twitter friends I know personally; I have met them and talked to them. Others I have only met on Twitter, but we do have agreeable talks, both professional and social. Remarkably, one of these Twitter friends I have never met “in the flesh” is a colleague at the University of Amsterdam, but she works in the Medical Library, a long way from where I work.
My subjects on Twitter:
- football (soccer)
- what I am watching on TV
- music
- what happens to me
- metadata issues
- day to day work issues
- my IGeLU stuff
- interesting blog posts
- my new blog posts (like this one!)
- interesting websites
- library 2.0 news
Twitter is also the “largest virtual expert helpdesk”, as digicmb recently experienced.
My personal Twitter experience is like having chats and discussions with colleagues at work, or with friends in a bar, but with a much larger group; or attending some library systems conference, with professional discussions, and also with social events, but then a continuous, intermittent one, and without travelling.
Now, how can an organisation, and in particular a library, use a tool, or rather a community, like Twitter for its own benefit?
Twitter has been around for three years, but it is growing incredibly fast. In The Netherlands politicians use Twitter, like our Foreign Secretary. Well-known people in all areas are on Twitter; for example, British writer Ben Okri started publishing his new poem “I Sing a New Freedom” on Twitter, one line a day. Newspapers write about it, popular TV shows talk about it.
So clearly, there is an ever growing audience. Libraries, as other organisations, should contact their audience where their audience is, so Twitter is another channel for communication.
Organisations can use Twitter as an alternative to news items on websites, RSS feeds, blogs, etc. But is there an advantage in using Twitter instead of other web2.0 channels? I am not sure. Just like surfing to websites and subscribing to RSS feeds, people have to actively start “following” an institutional Twitter account. Organisations and libraries need to actively promote their Twitter channel for it to be a success. But they also need to actively maintain their Twitter channel, just like all other web2.0 activities, otherwise it will just fade away, as Meredith Farkas notices in her blog post “It’s not all about the tech – why 2.0 tech fails“.
One advantage of using Twitter in libraries is that it is moving beyond the hype stage: it will become one of the main channels of communication on the web.
Another advantage might be Twitter’s interactive possibilities. So far, though, institutional use of Twitter is mostly one-way traffic, a broadcast to whoever wants to “follow”, as opposed to personal use of Twitter. See for instance the Library of Congress and the Peace Palace Library.
But as Alice de Jong points out, a number of libraries are choosing the “personal approach”: the tweeting librarian really communicates with patrons in order to promote closer contact between libraries and patrons.
This approach might also be a replacement for current library chat services.
Personal institutional Twitter accounts could also be used as a means of representing the library as actual recognisable people, as has been promoted recently on a number of occasions. Patrons then will know library staff as experts in certain fields, instead of facing an anonymous organisation.
Conclusion: yes, libraries should use Twitter, as long as they can get a reasonable “following”, and have an official policy and staff dedicated to maintaining it.
Posted on February 15th, 2009 No comments
Henk Ellerman of Groningen University Library writes about the “Collection in the digital age”, reacting to Mary Frances Casserly’s article “Developing a Concept of Collection for the Digital Age“. I haven’t read this 2002 article yet, but Henk Ellerman goes into the problem of finding a metaphor describing collections that for a large part consist of resources available on the internet.
“…the collection (the one deemed relevant for… well whatever) is a subset that needs to be picked from the total set of available online resources.”
“I find it quite remarkable that the new collection is seen as the result of a process of picking elements, a process similar to finding shells on a beach.”
“What if we expand the notion of a collection in such a way that the sea becomes part of it?”
“The main issue with any sensible collection is quality control. We don’t want ugly things in our collections.”
“Then a collection is not a simple store of documents anymore, but a rather complex system of interrelated documents, controlled by a selected group of people.”
“Librarians ‘just’ need to make the system searchable.”
I have a couple of thoughts about collections myself that I would like to add to these.
Originally, a collection is the total number of physical objects of a specific type that are in the possession of a person, or an organisation. Merriam-Webster says: “an accumulation of objects gathered for study, comparison, or exhibition or as a hobby“. People can collect Barbie dolls or miniature cars as a hobby, or rare books or monkey skulls for scientific reasons.
(By the way, individual collectors of rare books are often depicted in movies as rich, old, eccentric people with a small but very valuable collection of very old books about topics such as satanism, who end up being killed in a horrible way and having their collections destroyed by fire, as I saw some time ago in Polanski’s “The Ninth Gate”.)
When organisations have collections, it is almost always for study or exhibition, but also for practical reasons. We are talking mainly about museums and libraries. In the case of libraries there is a rough distinction between public libraries and libraries belonging to scientific and/or educational institutions. Let’s focus on educational libraries, or “university libraries”, to make the picture a bit simpler.
University libraries have collected written and/or printed texts (books, journals, also containing images, maps, diagrams, etc.) in order to provide their staff and students with the material needed to teach and study. A library’s collection then comprises all objects in the possession of the library. In the digital age, electronic journals and databases have been added to these collections, but in most cases this concerns only resources the library owns or pays for access to. The collection then becomes the totality of objects (physical or digital) that the library owns or is granted access to by means of a contract. Freely available resources are explicitly not counted here.
Now, here we have to make an important distinction between a library’s total collection (“the collection”, meaning “all items the library owns or has access to”), and a collection on a specific topic or for a specific subject (“subject collections”, meaning “all items that have been selected by professionals to be part of the material that is necessary for studying a specific topic”), for instance “the University of Amsterdam library’s Chess collection”. In the past, people would have to go to a specific library to consult a specific collection on a specific subject.
“The collection” is merely the sum of all the library’s “subject collections”, nothing more.
Before we go to the collection in the digital age, an interesting intermediate question is: what is the position of interlibrary loan in the concept of collection? Are books from other libraries that are available to a specific library’s patrons to be considered as part of that second library’s collection? In the strict sense of the collection concept (“all items the library owns”), the answer is “no”. But if we expand the notion of collection to mean: “everything a library has access to”, then the answer clearly would have to be “yes”.
Now, in the digital age, the limitation that a collection’s objects should be available physically in a specific location, disappears. This means that anything can be part of “the collection” of a specific library, also objects or texts that have not been judged as scientific before, like blog posts. This is the “sea” that Henk Ellerman is talking about. A subject collection is also not limited by physical borders anymore. Subject collections can contain material, physical and digital, from anywhere. In this case, there is no reason that a subject collection should be a specific library’s subject collection, obviously. Key is “quality control”, or as Henk Ellerman puts it: “We don’t want ugly things in our collections“. Subject collections should be universal, global, virtual collections of physical and digital objects, “controlled by a selected group of people“.
Now, the most important question: who decides who will be part of these selected groups of people? The answer to this question is still to be found. I guess we will see several types of “expert groups” emerge: coalitions between university libraries nationally or globally, but also between not-for-profit and commercial organisations, and of course also between individuals cooperating informally, as in the blogosphere or Wikipedia.
The collections that will be controlled by these coalitions will not have fixed boundaries, but will have a more “professional” core with several “less professional” spheres around it, or intersecting with other collections.
It is time we start building.
Posted on December 16th, 2008 1 comment
Last month the Dutch Advisory Committee on Library Innovation published its report “Innovation with Effect“. The report was commissioned by the Dutch Minister of Education, Culture and Science; the charge was to draw up a plan for library innovation for the period 2009-2012, including a number of required conditions. Priorities that had to be addressed were: provision of digital services, collection policy, marketing, and HRM.
The recommendations of the committee are classified in three main areas or “programmatic lines” under a more or less central direction and/or coordination:
- Digital infrastructure (such as: one common information architecture, connection to nationwide and global information infrastructure, one national identity management system)
- Innovation of digital services and products
- Policy innovation
Interesting report, but that is not what I want to point out here. What is very exciting: in the list of consulted sources, amidst official reports and publications, appears the social information professionals network Bibliotheek 2.0, the Dutch equivalent of http://library20.ning.com. This aroused much enthusiasm among the members of the Dutch library blogosphere.
The Committee’s chairperson Josje Calff, deputy director of Leiden University Library, had started a discussion on the topic “One public library catalogue?” in this community, to which I am proud to say I also made a small contribution. The results of this discussion have been used by the committee in formulating their recommendations.
In striking contrast to this success for web 2.0 social networking, there was a lot of outrage in the same Dutch library blogosphere last week about the ban of The Netherlands’ most popular social network Hyves and YouTube from one of the country’s institutes for professional and adult education, reported on by one of its employees (in Dutch). Because of all the protests, the school’s management is currently reconsidering its position and a new decision will be made at the beginning of 2009. YouTube will probably continue to be permitted, because it is heavily used as a source of information in lessons.
Posted on November 10th, 2008 No comments
Last week my colleague Bert Zeeman published a poll “Open stack, get rid of it!” (in Dutch) with 3 options:
1. Yes, of course, should have been done long ago
2. Help, no, open stacks are the backbone of the scientific library
3. Nonsense, like always the truth lies in the middle.
I voted for option 3, which is a bit spineless at first sight, I admit, but in my defense I can say that I ended my explanatory comment with a somewhat more outspoken choice. Briefly, what I said was this:
I am a big lover of book stacks, old libraries and bookstores. A confession: in my first year as a student I visited the University Library only once. As soon as I found out that the only way to obtain a book was to find one by looking through the card catalog and waiting for someone behind the terrifying desk to hand it over, I left the building, never to return during my years as a student. From then on I borrowed my books from the public library. (I have now been working for some years in the very same building that I left behind in shock.)
But on the other hand, current developments are that our customers do their searching and finding off site more and more.
So to be honest, I guess I believe option 1 is more realistic.
This description of my state of mind is a good illustration of the current ambiguous library open stack situation.
For library customers who like to come to the library, look around, hold books in their hands and browse through them, obviously open stacks are definitely not a thing of the past. They can be a source for unexpected discoveries and instant satisfaction. This applies to the majority of the customers of public libraries, I guess.
For customers of scientific libraries, in my opinion the situation is in most cases quite different. Students usually know what they are looking for. So do researchers and teachers: no need for browsing, just locate the book, get it and check it out, order online, or download the full text article. Customers like this can get along with both open and closed stacks, and with on-site and off-site searching.
In the near future federated search systems, union catalogs, repositories and virtual collections combined with web 2.0 features like book covers and author profiles, together with the ever growing pools of digitised books, will be the new digital open stacks. They will take over the function of browsing, discovering and sampling books, journals and other objects. Eventually the typical public library customer will also prefer these open stacks 2.0.
Library buildings will more and more fulfill the role of meeting place, exposition hall, etc.
Open stacks will undergo the fate of vinyl records, paper telephone directories and steam engines. Only for real lovers of the printed book will there be dedicated open stack rooms and book museums like the Library of Congress. But this is still a couple of years away.
Posted on November 2nd, 2008 2 comments
In his post “Twitter me this” Owen Stephens writes about differences in use and audience of Social Networking Sites. (Apparently at Imperial College London they had a similar kind of Web2.0 Learning programme as we had at the Library of the University of Amsterdam.)
Owen distinguishes audiences on several, intermixed levels (my interpretation): “young” (e.g. MySpace) vs. “old(er)” (e.g. Facebook); “business/networking” (e.g. LinkedIn) vs. “family and friends” (also Facebook); “professional” (e.g. Ning).
And Owen mentions the risk involved here:
“I do find that Facebook raises the issue of how I mix my professional and personal life – whereas on LinkedIn everyone is one there as a ‘professional contact’ (even those people who are also friends), in Facebook I have some professional contacts, and some personal contacts. Although it hasn’t happened yet, there is a clearly a risk that in the future there could be a conflict between how I want to present myself professionally, and how I do personally – I’m not sure I’d want my boss (not singling out my current boss) to be my ‘Friend’ on Facebook.”
I recognise these differences and risks as well. In The Netherlands the most popular social networking site is Hyves, which can be compared to MySpace (according to my interpretation of Owen’s classification), but without the music angle. I have an account there, with only 13 “friends”, but my kids have 100 or more.
On LinkedIn however, I have 80 connections (a term used to stress that these contacts are to be regarded as serious business relations), of which 99% I have met face-to-face at least once, by the way. Owen says about LinkedIn:
“I’ve got a LinkedIn account but I don’t tend to use it for ‘social networking’, and more really as a ‘contacts’ list – while some people clearly use LinkedIn to ‘work’ their business contacts, I can’t say that I’ve ever been terribly good at this.”
I guess I am using LinkedIn the same way as Owen does. Last week I had a discussion with a colleague/friend (!) about the use of these business networking sites like LinkedIn. We concluded that a number of people obviously use LinkedIn to show off: “Look, I have more than 300 connections on my list; mine is bigger than yours“. I must confess that I have thoughts like that myself sometimes: “I hope that this colleague has noticed that I know that famous person“….
Now these “serious” business networks are starting to offer more social features. LinkedIn has groups, forums and “LinkedIn Applications”: integrating web 2.0 stuff like Amazon reading list, Slideshare, WordPress. In fact, this very blog post will show up on my LinkedIn Profile.
I guess there is a lot of competition, for instance with Plaxo. Besides “connections”, which can be marked “business”, “friends” or “family”, Plaxo offers the options of “hooking up feeds” from web 2.0 services that you use, like flickr, delicious, twitter, blogs, youtube, lastfm, etc. I find this a very useful feature, because it gives me an integrated overview of all my web2.0 streams, much like SecondBrain does, which has a slightly different “connection” implementation, more like Twitter, with “followers”.
Plaxo lets you also synchronise connections with LinkedIn, but this is a “Premium service”, meaning it costs money.
Now, to come back to Owen’s risk assessment: in my Plaxo profile I show my professional blog (this one, that you are reading right now) to “Everyone”, but my twitter, personal blog, flickr, delicious, picasa and lastfm streams only to “Friends” and “Family”, because I think I should not draw unnecessary attention to my twitter “trivia” (as Owen calls it), holiday snapshots and non-professional bookmarks. These streams are publicly available of course, but I do not want to actually push them in the faces of my “serious” connections.
You might argue that this kind of behaviour is not “social“, but rather “antisocial“: certain groups of contacts are excluded from information that privileged groups do have access to. And this term could also be applied to the “showing off” behaviour that I mentioned above.
The funny thing is that the “killer application” that won me over to Plaxo, and that I use the most, is not social at all: it’s something that I have been looking for since playing around with web 1.0 “Personal Information Managers”: the option of integrating and synchronising the Plaxo Calendar with my Outlook work calendar and my private Google calendar. For me this is a huge advantage over having to consult several calendars when planning an appointment.
But I do not share my Plaxo Calendar at all. Would you call this antisocial behaviour too?
Posted on March 15th, 2008 No comments
British Library’s extraordinary technology for viewing great landmarks of world culture online.