Posted on June 16th, 2015
Or: Anything goes. What are we thinking? An impression of ELAG 2015
This year’s ELAG conference in Stockholm was one of many questions. Not only the usual questions following each presentation (always elicited in the form of yet another question: “Any questions?”). But also philosophical ones (Why? What?). And practical ones (What time? Where? How? How much?). And there were some answers too, fortunately. This is my rather personal impression of the event. For a detailed report on all sessions see Christina Harlow’s conference notes.
The theme of the ELAG 2015 conference was: “DATA”. This immediately leads to the first question: “What is data?”. Or rather: “What do we mean by data?”. And of course: “Who is ‘we’?”.
In current information-professional and library thinking, ‘we’ typically distinguish data created and used for describing stuff (usually referred to as ‘metadata’), data originating from institutions, processes and events (known as ‘usage data’ or ‘big data’), and a special case of the latter: data resulting from scholarly research (indeed: ‘research data’). All three types were discussed at ELAG.
It is safe to say, however, that the majority of the presentations, bootcamps and workshops focused on the ‘descriptive data’ type. I try to avoid the term ‘metadata’, because it is confusing and superfluous. Just use ‘data’, meaning ‘artificial elements of information about stuff’. To be perfectly clear: contrary to what many people argue, ‘metadata’ is NOT ‘data about data’. It is information about virtual entities, physical objects, information contained in these objects (or ‘content’), events, concepts, people, etc. We could only rightfully speak of ‘data about data’ in the special case of data describing (research) datasets. For this case ‘we’ have invented the job of ‘data librarian’, which is a completely nonsensical term, because this job is concerned with the storage, discoverability and obtainability of only one single object or entity type: research datasets. Maybe we should start using the job title ‘dataset librarian’ for this activity. But that seems a bit odd, right? On the other hand, should we replace the term ‘metadata librarian’ with ‘data librarian’? Also a bit odd. Data is, at this moment in time, what libraries and other information and knowledge institutions use to make their content findable and usable for the public. Let’s leave it at that.
This brings us to the two fundamental questions of our library ecosystem: “What are we describing?” and the mother of all data questions: “Why are we describing?”, which were at the core of what in my eyes was this year’s key presentation (not keynote!) by Niklas Lindström of the Swedish Royal Library/LIBRIS. I needed some time to digest the core assertions of Niklas’ philosophical talk, but I am convinced that ‘we’ should all be aware of the essential ‘truths’ of his exposition.
First of all: “Why are we describing?”. The objective of having a library in the first place is to provide access in any way possible to the objects in our collections, which in turn may provide access to information and knowledge. So in general we should be describing in order to enable our intended audience to obtain what they need in terms of the collection. Or should that be in terms of knowledge? In real life ‘we’ are describing for a number of reasons: because we follow our profession, because we have always done this, because we are instructed to do so, because we need guidance in our workflows, because the library is indispensable, because of financial and political reasons. In any case we should be clear about what our purposes are, because the purpose influences what we’re describing and how we do that.
Secondly: “What are we describing?”. Physical objects? Semi-tangible objects, like digital publications? Only outputs of processes, or also the processes themselves? Entities? Concepts? Representations? Relationships? Abstractions? Events? Again, we should be clear about this.
Thirdly (a Monty Python Spanish Inquisition moment ;-): “How are we describing?”. We use models, standards, formats, syntax, vocabularies in order to make maps (simplified representations of real world things) for reconciling differences between perceptions, bridging gaps between abstractions and guiding people to obtain the stuff they need. In doing so, Niklas says, we must adhere to Postel’s law, or the Robustness Principle, which states: “Be liberal in what you accept; be conservative in what you send”.
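To make Postel’s law concrete in a data mapping context, here is a minimal sketch of my own (in Python; the date notations and fallback behaviour are my invention, not anything Niklas showed): accept dates liberally in several notations, emit them conservatively in a single form.

```python
from datetime import datetime

# Hypothetical illustration of the Robustness Principle in data mapping:
# be liberal in the date notations we accept, conservative in what we send
# (always ISO 8601). The accepted formats are an arbitrary example set.
ACCEPTED_FORMATS = ["%Y-%m-%d", "%d-%m-%Y", "%d %B %Y"]

def normalise_date(raw: str) -> str:
    for fmt in ACCEPTED_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    # Rather than rejecting the record outright, keep the original value.
    return raw.strip()

print(normalise_date("16-06-2015"))    # -> 2015-06-16
print(normalise_date("16 June 2015"))  # -> 2015-06-16
```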
Back to the technology, and the day to day implementation of all this. ‘We’ use data to describe entities and relationships of whatever nature. We use systems to collect and store the data in domain-, service- and time-dependent record formats in system-dependent datastores. And we create flows and transformations of data between these systems in order to fulfill our goals. For all this there are multiple standards.
Basically, my own presentation “Datamazed – Analysing library dataflows, data manipulations and data redundancies” targeted this fragmented data environment, describing the Library of the University of Amsterdam’s Dataflow Inventory project, which is leading to a Dataflow Repository that effectively functions as a map of mappings. “System of Systems (SoS)” was also the topic of the workshop I participated in, “What is metadata management in net-centric systems?” by Jing Wang.
Making sense of entities and relationships was the focus of a number of talks, especially the one by Thom Hickey about extending work, expression and author entities by way of data mining in WorldCat and VIAF, and the presentation by Jane Stevenson on the Jisc/Archives Hub project “Exploring British Design”, which entailed shifting focus from documents to people and organizations as connected entities. Some interesting observations about the latter project: the project team started with identifying what the target audience actually wanted and how they would go about getting the desired information (“Why are we describing?”) in order to get to the entity based model (“What are we describing?”). This means that any entity identified in the system can become a focus, or starting point, for pathways. A problem that became apparent is that the usual format standards for collection descriptions didn’t allow for events to be described.
Here we arrive at the critique of standards that was formulated by Rurik Greenall in his talk about the Oslo Public Library ILS migration project, where they are migrating from a traditional ILS to RDF based cataloguing. Starting point here is: know what you need in order to support your actual users, not some idealised standard user, and work with a number of simple use cases (“Why are we describing?”). Use standards appropriate for your users and use cases. Don’t be rigid, and adapt. Use enough RDF to support use cases. Use just a part of the open source ILS Koha to support specific use cases that it can do well (users and holdings). Users and holdings are a closed world, which can be dealt with using a part of an existing system. Bibliographic information is an open world which can be taken care of with RDF. The data model again corresponds to the use cases that are identified. It grows organically as needed. Standards are only needed for communicating with the outside world, but we must not let the standards infect our data model (here we see Postel’s Law again).
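As an illustration of “use enough RDF to support use cases”, here is a hedged sketch using the Python rdflib library; the URIs, the choice of schema.org as vocabulary and the example work are all mine, not the Oslo project’s actual model.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Hypothetical "just enough RDF" for one use case: find works by an author.
# All URIs below are made up for illustration.
SCHEMA = Namespace("http://schema.org/")
g = Graph()

work = URIRef("http://example.org/work/sult")
author = URIRef("http://example.org/person/hamsun")

g.add((work, RDF.type, SCHEMA.CreativeWork))
g.add((work, SCHEMA.name, Literal("Sult", lang="no")))
g.add((work, SCHEMA.creator, author))
g.add((author, RDF.type, SCHEMA.Person))
g.add((author, SCHEMA.name, Literal("Knut Hamsun")))

print(g.serialize(format="turtle"))
```

The point of the approach is visible even at this tiny scale: nothing in the data model exists because a standard demands it; every triple answers a use case.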
A striking parallel can be seen in the Stockholm University Library project for integration of the Open Source ILS Koha, the Swedish LIBRIS Union Catalogue and a locally developed logistics and ILL system. Again, only one part of Koha is used for specific functions, mainly because commercial ILS vendors do not sell individual modules. Integrated library systems, which seemed a good idea in the 1980s, just cannot cope with fragmented open world data environments.
Dedicated systems, like ILSes, either commercial or open source, tend to force certain standards upon you. These standards not only apply to data storage (record formats etc.) but also to system structure (integrated systems, data silos), etc. This became quite clear in the presentation about the CERN Open Data Portal, where the standard digital library system Invenio imposed the MARC bibliographic format for describing research datasets for high energy physics, which turned out to be difficult if not impossible. Currently they are moving towards using JSON (yet another data standard) because the system apparently supports that too.
With Open Source systems it is easier to adapt the standards to your needs than with proprietary commercial vendor systems. An example of this was given by the University of Groningen Library project in which the Open Source publication repository software EPrints was tweaked to enable storage and description of research datasets focused on archaeological finds, which require very specific information.
As the two ILS migration projects already demonstrated, deviating from standards of any kind can be implemented quite easily. This is obviously not what always happens: the locally developed Swedish Royal Library system for the legal deposit of electronic documents supports the available suitable metadata standards, like OAI, METS, MODS and PREMIS.
For the OER World Map project, presented by Felix Ostrowski, we can safely say that no standards were followed whatsoever, except for the use of the discovery data format schema.org for storing data, which is basically also an adaptation of a standard. Furthermore, the original objective of the project was organically modified by using the data hub for all kinds of end user services and visualisations beyond the original world map of the locations of Open Educational Resources.
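For illustration, here is a hedged sketch of what a schema.org description of a single OER provider might look like; the type and properties are my guesses for the sake of the example, not the project’s actual data model.

```python
import json

# Hypothetical schema.org description of one Open Educational Resource
# provider, roughly in the spirit of a data hub behind a world map.
# Identifier, coordinates and property choices are invented.
oer_service = {
    "@context": "http://schema.org/",
    "@type": "Service",
    "@id": "urn:uuid:00000000-0000-0000-0000-000000000001",  # made up
    "name": "Open Courseware Portal",
    "description": "A hypothetical provider of open educational resources.",
    "location": {
        "@type": "Place",
        "geo": {"@type": "GeoCoordinates", "latitude": 52.52, "longitude": 13.40},
    },
}
print(json.dumps(oer_service, indent=2))
```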
It should be clear that every adaptation of standards generates the need for additional mappings and transformations besides the ones already needed in a fragmented systems and data infrastructure for moving around data to various places for different services. Mapping and transformation of data can be done in two ways: manually, in the case of explicit, known items, and by mining, in the case of implicit, unknown items.
Manual mapping and transformation is of course done by dedicated software. The manual part consists of people selecting specific source data elements to be transformed into target data elements. This procedure is known as ETL (Extract Transform Load), and implies the copying of data between systems and datastores, which always entails some form of data redundancy. Tuesday afternoon was dedicated to this practice with three presentations: Catmandu and Linked Data Fragments by Ruben Verborgh and Patrick Hochstenbach; COMSODE by Jindrich Mynarz; d:swarm by Thomas Gängler. Of these three the first one focused more on efficiently exposing data as Linked Open Data by using the Linked Data Fragments protocol. An important aspect of tools like this is that we can move our accumulated knowledge and investment in data transformation away from proprietary formats and systems to system and vendor independent platforms and formats.
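For readers unfamiliar with the term, here is a minimal Python sketch of the ETL pattern itself (not of Catmandu, COMSODE or d:swarm specifically); the MARC-like field tags are real cataloguing conventions, but the records and targets are invented.

```python
# Minimal sketch of Extract-Transform-Load: pull records from a source,
# map selected source elements to target elements, load the result elsewhere.

def extract(source):
    yield from source  # in reality: read MARC, OAI-PMH, a database...

def transform(record):
    # A mapping a data specialist selected: source element -> target element.
    # 245$a is the MARC title field, 260$c the publication date.
    return {
        "title": record.get("245a", "").rstrip(" /"),
        "year": record.get("260c", "").strip(". "),
    }

def load(records, target):
    target.extend(records)  # in reality: write to a search index or datastore

source = [{"245a": "Datamazed /", "260c": "2015."}]
target = []
load((transform(r) for r in extract(source)), target)
print(target)  # [{'title': 'Datamazed', 'year': '2015'}]
```

Note that the copy in `load` is exactly where the data redundancy mentioned above creeps in: every ETL run leaves another derived copy of the data behind.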
Data mining and text mining are used in the case of non-explicit data about entities and relationships, where bodies of data and text are analyzed using dedicated algorithms in order to find implicit entities and relationships and make them explicit. This was already mentioned in relation to Thom Hickey’s WorldCat Entity Extension project. It is also used in the InFoLiS2 project, where data and text mining is used to find relationships between research projects and scholarly publications.
Another example was provided by Rob Koopman and Shenghui Wang of OCLC Research, who analyzed keywords, authors and journal titles in the ArticleFirst database to generate proximity visualizations for greater serendipity.
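A hedged sketch of the general idea behind such proximity analysis (my own toy example, not OCLC’s actual method): represent each term by its co-occurrence counts with other terms, then measure closeness with cosine similarity.

```python
import math

# Toy proximity analysis: terms that co-occur with similar terms end up
# close together. The count vectors below are invented.
vectors = {
    "catalogue": [12, 3, 0, 7],
    "metadata": [10, 4, 1, 6],
    "quark": [0, 1, 9, 0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

print(round(cosine(vectors["catalogue"], vectors["metadata"]), 3))  # close
print(round(cosine(vectors["catalogue"], vectors["quark"]), 3))     # distant
```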
As long as ‘we’ don’t or can’t describe these types of relationships explicitly, we have to use techniques like mining to extract meaningful entities and relationships and generate data. Even if we do create and maintain explicit descriptions, we will remain a closed world if we don’t connect our data to the rest of the world. Connections can be made by targeted mapping, as in the case of the Finnish FINTO Library Ontology service for the public sector, or by adopting an open world strategy making use of semantic web and linked open data instruments.
Furthermore, ‘we’ as libraries should continuously ask ourselves the questions “Why and what are we describing?”, but also “Why are we here?”. Should we stick to managing descriptive data, or should we venture out into making sense of big data and research data, and provide new information services to our audience?
Finally, I thank the local organizers, the presenters and all other participants for making ELAG2015 a smooth, sociable and stimulating experience.
Posted on September 5th, 2014
IFLA 2014 Annual World Library and Information Congress Lyon – Libraries, Citizens, Societies: Confluence for Knowledge
After attending the IFLA 2014 Library Linked Data Satellite Meeting in Paris I travelled to Lyon for the first three days (August 17-19) of the IFLA 2014 Annual World Library and Information Congress. This year’s theme “Libraries, Citizens, Societies: Confluence for Knowledge” was named after the confluence or convergence of the rivers Rhône and Saône where the city of Lyon was built.
This was the first time I attended an IFLA annual meeting and it was very much unlike any conference I have ever attended. Most of those are small and focused. The IFLA annual meeting is very big (though not as big as ALA) and covers a lot of domains and interests. The main conference lasts a week, including all kinds of committee meetings, and has more than 4000 participants, a lot of parallel tracks and very specialized Special Interest Group sessions. Separate Satellite Meetings are organized before the actual conference in different locations; this year there were more than twenty of them. These Satellite Meetings actually resemble the smaller and more focused conferences that I am used to.
A conference like this requires a lot of preparation and organization. Many people are involved, but I especially want to mention the hundreds of volunteers who were present not only in the conference centre but also at the airport, the railway stations, on the road to the location of the cultural evening, etc. They were all very friendly and helpful.
Another feature of such a large global conference is that presentations are held in a number of official languages, not only English. A team of translators is available for simultaneous translations. I attended a couple of talks in French, without translation headset, but I managed to understand most of what was presented, mainly because the presenters provided their slides in English.
It is clear that you have to prepare for the IFLA annual meeting and select in advance a number of sessions and tracks that you want to attend. With a large multi-track conference like this it is not always possible to attend all interesting sessions. In the light of a new data infrastructure project I recently started at the Library of the University of Amsterdam I decided to focus on tracks and sessions related to aspects of data in libraries in the broadest sense: “Cloud services for libraries – safety, security and flexibility” on Sunday afternoon, the all day track “Universal Bibliographic Control in the Digital Age: Golden Opportunity or Paradise Lost?” on Monday and “Research in the big data era: legal, social and technical approaches to large text and data sets” on Tuesday morning.
Cloud Services for Libraries
It is clear that the term “cloud” is a very ambiguous term and consequently a rather unclear concept. Which is good, because clouds are elusive objects anyway.
In the Cloud Services for Libraries session there were five talks in total. Kee Siang Lee of the National Library Board of Singapore (NLB) described the cloud based NLB IT infrastructure consisting of three parts: a private, a public and a hybrid cloud. The private (restricted access) cloud is used for virtualization, an extensive service layer for discovery, content, personalization, and “Analytics as a service”, which is used for pushing and recommending related content from different sources and of various formats to end users. This “contextual discovery” is based on text analytics technologies across multiple sources, using a Hadoop cluster on virtual servers. The public cloud is used for the Web Archive Singapore project, which is aimed at archiving a large number of Singapore websites. The hybrid cloud is used for what is called the Enquiry Management System (EMS), where “sensitive data is processed in-house while the non-sensitive data resides in the cloud”. It seems that in Singapore “cloud” is just another word for a group of real or virtual servers.
In the talk given by Beate Rusch of the German Library Network Service Centre for Berlin and Brandenburg KOBV the term “cloud” meant: the shared management of data on servers located somewhere in Germany. KOBV is one of the German regional Library Networks involved in the CIB project targeted at developing a unified national library data infrastructure. This infrastructure may consist of a number of individual clouds. Beate Rusch described three possible outcomes: one cloud serving as a master for the others, a data roundabout linking the other clouds, and a cross cloud dataspace where there is an overlapping shared environment between the individual clouds. An interesting aspect of the CIB project is that cooperation with two large commercial library system vendors, OCLC and Ex Libris, is part of the official agreement. This is of interest for other countries that have vested interests in these two companies, like The Netherlands.
Universal Bibliographic Control in the Digital Age
The Universal Bibliographic Control (UBC) session was an all day track with twelve very diverse presentations. Ted Fons of OCLC gave a good talk explaining the importance of the transition from the description of records to the modeling of entities. My personal impression lately is that OCLC all in all has been doing a good job with linked data PR, explaining the importance and the inevitability of the semantic web for libraries to a librarian audience without using technical jargon like URI, ontology, dereferencing and the like. Richard Wallis of OCLC, who was at the IFLA 2014 Linked Data Satellite Meeting and in Lyon, is spreading the word all over the globe.
Of the rest of the talks the most interesting ones were given in the afternoon. Anila Angjeli of the National Library of France (BnF) and Andrew MacEwan of the British Library explained the importance, similarities and differences of ISNI and VIAF, both authority files with identifiers used for people (both real and virtual). Gildas Illien (also one of the organizers of the Linked Data Satellite Meeting in Paris) and Françoise Bourdon, both BnF, described the future of Universal Bibliographic Control in the web of data, which is a development closely related to the topic of the talks by Ted Fons, Anila Angjeli and Andrew MacEwan.
The ONKI project, presented by the National Library of Finland, is a very good example of how bibliographic control can be moved into the digital age. The project entails the transfer of the general national library thesaurus YSA to the new YSO ontology, from libraries to the whole public sector and from closed to open data. The new ontology is based on concepts (identified by URIs) instead of monolingual text strings, with multilingual labels and machine readable relationships. Moreover the management and development of the ontology is now a distributed process. On top of the ontology the new public online Finto service has been made available.
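The underlying pattern can be sketched in SKOS with the Python rdflib library; the URIs, identifiers and labels below are illustrative only (example.org), not actual YSO data.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Hedged sketch of the YSO/Finto pattern: a concept identified by a URI,
# with multilingual labels and machine readable relationships. All
# identifiers are invented for illustration.
EX = Namespace("http://example.org/concept/")
g = Graph()

concept = EX["p12345"]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("kirjastot", lang="fi")))
g.add((concept, SKOS.prefLabel, Literal("bibliotek", lang="sv")))
g.add((concept, SKOS.prefLabel, Literal("libraries", lang="en")))
g.add((concept, SKOS.broader, EX["p54321"]))  # also made up

print(g.serialize(format="turtle"))
```

The gain over a monolingual text string is clear: the concept URI is language independent, and any number of labels or relationships can be attached to it later without touching existing records.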
The final talk of the day, “The local in the global: universal bibliographic control from the bottom up” by Gordon Dunsire, applied the “Think globally, act locally” aphorism to Universal Bibliographic Control in the semantic web era. Universal top down control should make way for local bottom up control. There are so many old and new formats for describing information that we are facing a new biblical confusion of tongues: RDA, FRBR, MARC, BIBO, BIBFRAME, DC, ISBD, etc. What is needed is a number of translators between local and global data structures, on a logical level: Schema Translator, Term Translator, Statement Maker, Statement Breaker, Record Maker, Record Breaker. These black boxes are a challenge to developers. Indeed, mapping and matching of data of various types, formats and origins are vital in the new web of information age.
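A hypothetical sketch of two of these black boxes, the Statement Breaker and the Record Maker, reduced to their bare logic (the field names and identifiers are invented):

```python
# Statement Breaker: decompose a flat record into individual statements
# (subject, predicate, object). Record Maker: reassemble the statements
# about one subject back into a record.

def break_record(subject, record):
    """Record -> statements."""
    return [(subject, predicate, value) for predicate, value in record.items()]

def make_record(subject, statements):
    """Statements -> record."""
    return {p: v for s, p, v in statements if s == subject}

record = {"title": "Paradise Lost", "creator": "John Milton"}
statements = break_record("book:1", record)
print(statements)                         # [('book:1', 'title', ...), ...]
print(make_record("book:1", statements))  # round-trips to the original record
```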
Research in the big data era
The Research in the big data era session had five presentations on essentially two different topics: data and text mining (four talks) and research data management (one talk). Peter Leonard of Yale University Library started the session with a very interesting presentation of how advanced text mining techniques can be used for digital humanities research. Using the digitized archive of Vogue magazine he demonstrated how the long term analysis of the statistical distribution of related terms, like “pants”, “skirts”, “frocks”, or “women”, “girls”, can help visualise social trends and identify research questions. There are a number of free tools available for this, like Google Books N-gram Search and Bookworm. To make this type of analysis possible, researchers need full access to all data and text. However, rights issues come into play here, as Christoph Bruch of the Helmholtz Association, Germany, explained. What is needed is “intelligent openness” as defined by the Royal Society: data must be accessible, assessable, intelligible and usable. Unfortunately European copyright law stands in the way of the idea of fair use; many European researchers are forced to perform their data analysis projects outside Europe, in the USA. The plea for openness was also supported by LIBER’s Susan Reilly: data and text mining should be regarded as just another form of reading, one that doesn’t need additional licenses.
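To give an idea of the kind of analysis Peter Leonard demonstrated, here is a toy Python sketch of relative term frequency per year; the miniature corpus is invented and the real tools of course work on millions of pages.

```python
from collections import Counter

# Toy n-gram style analysis: relative frequency of a term per year.
# The corpus below is invented and trivially small.
corpus = {
    1950: "skirts and frocks and frocks".split(),
    1970: "pants and pants and skirts".split(),
}

def relative_frequency(term, year):
    counts = Counter(corpus[year])
    return counts[term] / len(corpus[year])

for year in sorted(corpus):
    print(year, round(relative_frequency("pants", year), 2))
# 1950 0.0 / 1970 0.4 -- a (made-up) trend a researcher might investigate
```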
A very impressive and sympathetic library project that deserves everybody’s support was not an official programme item, but a bunch of crates, seats, tables and cushions spread across the central conference venue square. The whole set of furniture and equipment, which comes on two industrial pallets, constitutes a self-supporting mobile library/information centre to be deployed in emergency areas, refugee camps etc. It is called IdeasBox, provided by Libraries without Borders. It contains mobile internet, servers, power supplies, ereaders, laptops, board games, books, etc., based on the circumstances, culture and needs of the target users and regions. The first IdeasBoxes are now used in Burundi in camps for refugees from Congo. Others will soon go to Lebanon for Syrian refugees. If librarians can make a difference, it’s here. You can support Libraries without Borders and IdeasBox in all kinds of ways: http://www.ideas-box.org/en/support-us.html.
The questions about data management in libraries that I brought with me to the conference were only partly addressed, and actual practical answers and solutions were very rare. The management and mapping of heterogeneous and redundant types of data from all types of sources across all domains that libraries cover, in a flexible, efficient and system independent way apparently is not a mainstream topic yet. For things like that you have to attend Satellite Meetings. Legal issues, privacy, copyright, text and data mining, cloud based data sharing and management on the other hand are topics that were discussed. It turns out that attending an IFLA meeting is a good way to find out what is discussed, and more importantly what is NOT discussed, among librarians, library managers and vendors.
The quality and content of the talks vary a lot. As always the value of informal contacts and meetings cannot be overrated. All in all, looking back I can say that my first IFLA has been a positive experience, not in the least because of the positive spirit and enthusiasm of all organizers, volunteers and delegates.
(Special thanks to Beate Rusch for sharing IFLA experiences)
Posted on August 26th, 2014
On August 14 the IFLA 2014 Satellite Meeting ‘Linked Data in Libraries: Let’s make it happen!’ took place at the National Library of France in Paris. Rurik Greenall (who also wrote a very readable conference report) and I had the opportunity to present our paper ‘An unbroken chain: approaches to implementing Linked Open Data in libraries; comparing local, open-source, collaborative and commercial systems’. In this paper we do not go into reasons for libraries to implement linked open data, nor into detailed technical implementation options. Instead we focus on the strategies that libraries can adopt for the three objectives of linked open data: original cataloguing/creation of linked data, exposing legacy data as linked open data, and consuming external linked open data. Possible approaches are: local development, using Free and Open Source Software, participating in consortia or service centres, and relying on commercial vendors, or any combination of these. Our main conclusions and recommendations are: identify your business case, if you’re not big enough be part of some community, and take lifecycle planning seriously.
The other morning presentations provided some interesting examples of a number of approaches we described in our talk. Valentine Charles presented the work in the area of aggregating library and heritage data from a large number of heterogeneous sources in different languages by two European institutions that de facto function as large consortia or service centres for exposing and enriching data: Europeana and The European Library. Both platforms not only expose their aggregated content in web pages for human consumption but also as linked open data, besides other so called machine readable formats. Moreover they enrich their aggregated content by consuming data from their own network of providers and from external sources, for instance multilingual “value vocabularies” like thesauri, authority lists and classifications. The idea is to use concepts/URIs together with display labels in multiple languages. For Europeana these sources currently are GeoNames, DBPedia and GEMET. Work is being done on including the Getty Art and Architecture Thesaurus (AAT), which was recently published as Linked Open Data. Besides using VIAF for person authorities, The European Library has started adding multilingual subject headings by integrating the Common European Research Classification Scheme, part of the CERIF format. The use of MACS (Multilingual Access to Subjects) as Linked Open Data is being investigated. This topic was also discussed during the informal networking breaks. Questions that were asked: is MACS valuable for libraries, who should be responsible for MACS, and how can administering MACS in a Linked Open Data environment best be organized? Personally I believe that a multilingual concept based subject authority file for libraries, archives, museums and related institutions is long overdue and will be extremely valuable, not only in Linked Open Data environments.
The importance of multilingual issues and the advantages that Linked Open Data can offer in this area were also demonstrated in the presentation about the Linked Open Authority Data project at the National Diet Library of Japan. The Web NDL Authorities are strongly connected to VIAF and LCSH among others.
The presentation of the Linked Open Data environment of the National Library of France BnF (http://data.bnf.fr) highlighted a very interesting collaboration between a large library with considerable resources in expertise, people and funding on the one hand, and the non-library commercial IT company Logilab on the other. The result of this project is a very sophisticated local environment consisting of the aggregated data sources of the National Library and a dedicated application based on the free software tool CubicWeb. An interesting situation arose when the company Logilab itself asked if the developed applications could be released as open source by the National Library. The BnF representative Gildas Illien (also one of the organizers of the meeting, together with Emmanuelle Bermes) replied with considerations about planning, support and scalability, which is completely understandable from the perspective of lifecycle planning.
With all these success stories about exposing and publishing Linked Open Data, the question always remains whether the data is actually used by others. It is impossible to incorporate this in project planning and results evaluation. Regarding the BnF data this question was answered in the presentation about Linked Open Data in the book industry: the Electre and Antidot project uses linked open data from, among other sources, data.bnf.fr.
The afternoon presentations were focused on creating, maintaining and using various data models, controlled vocabularies and knowledge organisation systems (KOS) as Linked Open Data: the Europeana Data Model (EDM), UNIMARC, MODS. An interesting perspective was presented by Gordon Dunsire on versioning vocabularies in a linked data world. Vocabularies change over time, so the assignment of a URI of a certain vocabulary concept should always contain version information (like timestamps and/or version numbers) in order to be able to identify the intended meaning at the time of assigning.
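A hypothetical sketch of what such version-aware assignment could look like; the URI pattern and field names are invented, not from Dunsire’s talk.

```python
from datetime import date

# Hypothetical version-aware concept assignment: record which version of
# the vocabulary the URI was taken from, and when, so the intended meaning
# at assignment time stays identifiable after the vocabulary changes.
def assign_concept(concept_id, vocab_version):
    return {
        "concept": f"http://example.org/vocab/{vocab_version}/{concept_id}",
        "vocabularyVersion": vocab_version,
        "assigned": date.today().isoformat(),
    }

print(assign_concept("p12345", "2.3"))
```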
The meeting was concluded with a panel with representatives of three commercial companies involved in library systems and Linked Open Data developments: Ex Libris, OCLC and the afore-mentioned Logilab. The fact that this panel with commercial companies on library linked data took place was significant and important in itself, regardless of the statements that were made about the value and importance of Linked Open Data in library systems. After years of dedicated temporarily funded proof of concept projects this may be an indication that Linked Open Data in libraries is slowly becoming mainstream.
Posted on June 19th, 2014
Lingering gold at ELAG 2014
Libraries tend to see themselves as intermediaries between information and the public, between creators and consumers of information. Looking back at the ELAG 2014 conference at the University of Bath however, I can’t get the image out of my head of libraries standing in the way between information and consumers. We’ve been talking about “inside out libraries”, “libraries everywhere”, “rethinking the library” and similar soundbites for some years now, but it looks like it’s been only talk and nothing more. A number of speakers at ELAG 2014 reported that researchers, students and other potential library visitors wanted the library to get out of their way and give them direct access to all data, files and objects. A couple of quotes:
- “We hide great objects behind search forms” (Peter Mayr, “EuropeanaBot”)
- “Give us everything” (Ben O’Steen, “The Mechanical Curator”).
[Lingering gold: data, objects]
In a cynical way this observation tightly fits this year’s conference theme “Lingering Gold”, which refers to the valuable information and objects hidden and locked away somewhere in physical and virtual local stores, waiting to be dug up and put to use. In her keynote talk, Stella Wisdom, digital curator at the British Library, gave an extensive overview of the digital content available there, and the tools and services employed to present it to the public. However, besides options for success, there are all kinds of pitfalls in attempting to bring local content to the world. In our performance “The Lord of the Strings”, Karen Coyle, Rurik Greenall, Martin Malmsten, Anders Söderbäck and I tried to illustrate that in an allegorical way, resulting in a ROADMAP containing guidelines for bringing local gold to the world.
In recent years it has become quite clear that data, dispersed and locked away in countless systems and silos, once liberated and connected can be a very valuable source of new information. This was very pertinently demonstrated by Stina Johansson in her presentation of visualization of research and related networks at Chalmers University using available data from a number of their information systems. Similar network visualizations are available in the VIVO open source linked data based research information tool, which was the topic of a preconference bootcamp which I helped organize (many thanks especially to Violeta Ilik, Gabriel Birke and Ted Lawless who did most of the work).
[Systems, APIs, technology trap]
The point made here also implies that information systems actually function as roadblocks to full data access instead of as finding aids. I came to realize this some time ago, and my perception was definitely confirmed during ELAG 2014. In his lightning talk Rurik Greenall emphasized the fact that what we do in libraries and other institutions is actually technology driven. Systems define the way we work and what we publish. This should be the other way around. Even APIs, intended to provide access to data in systems without having to use end user system functions, are actually sub-systems, giving non-transparent views of the data. When Steve Meyer in his talk “Building useful and usable web services” said “data is the API” he was right in theory, yet in practice the reverse is not necessarily true. Also, APIs are meant to be used by developers in new systems. Non-tech end users have no use for them, as is illustrated by one of the main general reactions from researchers to the British Library Labs surveys, as reported by Ben O’Steen: “API? What’s that? I don’t care. Just give me the files.”.
[Commercial vs open source]
This technology critique essentially applies to both commercial/proprietary and open source systems alike. However, it could be that open source environments are more favorable to open and findable data than proprietary ones. Felix Ostrowski talked about the reasons for and outcomes of the Regal project, moving the electronic objects repository of the State Library of Rheinland-Pfalz from an environment based on commercial software to one based on open source tools and linked data concepts. One of the side effects of this move was that complaints were received from researchers about their output being publicly available on the web. This shows that the new approach worked, that the old approach was effectively hiding information and that certain stakeholders are completely satisfied with that.
On the side: one of the open source components of the new Regal environment is Fedora, used only for digital objects, not any metadata, which is exactly what is currently happening in the new repository project at the Library of the University of Amsterdam. A legitimate question asked by Felix: why use Fedora and not just the file system in this case?
All these observations also imply that, if libraries really want to disseminate and share their lingering gold with the world, alternative ways of exposing content are needed, instead of or besides the existing ones. Fortunately some libraries and individuals have been working on providing better direct access and even unguided and unsolicited publication of data and objects that might be available but not really findable with traditional library search tools. The above mentioned EuropeanaBot (and other twitter bots) and the British Library Labs’ Mechanical Curator are a case in point. Every hour EuropeanaBot sends a tweet about a random digital object, enriching it with extra information from Wikipedia and other sources.
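The bot pattern itself is simple; here is a hedged Python sketch of it (with invented objects and a stand-in enrichment lookup, not the real Europeana and Wikipedia APIs):

```python
import random

# Sketch of the EuropeanaBot pattern: pick a random digital object, enrich
# it with context from another source, publish a short message. The object
# list and context lookup are stand-ins for real API calls.
objects = [
    {"title": "Map of Stockholm, 1733", "url": "http://example.org/obj/1"},
    {"title": "Herbal, 16th century", "url": "http://example.org/obj/2"},
]
context = {"Map of Stockholm, 1733": "Stockholm is the capital of Sweden."}

def compose_message():
    obj = random.choice(objects)
    extra = context.get(obj["title"], "")
    return f'Found for you: {obj["title"]} {obj["url"]} {extra}'.strip()

print(compose_message())  # the real bot posts something like this hourly
```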
In the case of the British Library Labs Ben O’Steen described an experiment with free access to large amounts of data that by chance led to the observation that randomly excavated images from that vast amount of content drew people’s attention. As all content was in the public domain anyway, they asked themselves “what’s the harm in making it a bit more accessible?”. So the Mechanical Curator was born, with channels on tumblr, twitter and flickr.
Another alternative way to expose and share library content, a game, was presented by Ciaran Talbot and Kay Munro: LibraryGame. In brief, students are encouraged to use and visit the library and share library content with others by awarding them points and badges as members of an online community. The only two things students didn’t like about the name LibraryGame were “library” and “game”, so the name was changed to “BookedIn”.
No matter if you like bots and games or not, the important message here is that it is worthwhile exploring alternative ways by which people can find the content that libraries consider so valuable.
In the end, it’s people that libraries work for. At Utrecht University Library they realised that they needed simpler ways to make it possible for people to use their content, not only APIs. Marina Muilwijk described how they are experimenting with the Lean Startup method. In a continuous cycle of building, measuring and learning, simple applications are released to end users in order to test if they use them and how they react to them.
“Focus on the user” was also the theme of the workshop given by Ken Chad around the Jobs-to-be-done methodology.
Interestingly, “How people find” instead of: “How people search” was one of the perspectives of the Jisc “Spotlight on the Digital” project, presented by Owen Stephens in his lightning talk.
[Collections and findability]
Another perspective of that Jisc project was how to make collections discoverable. It turns out that collections as such are represented on the web quite well, whereas items in these collections aren’t.
Valentine Charles of The European Library demonstrated the benefits of collection level metadata for the discoverability of hidden content, using the CENDARI project as example.
What’s a library technology conference without linked data? Implicitly and explicitly the instrument of connecting data from different sources relates quite well to most of the topics presented around the theme of lingering gold, with or without the application of the official linked data rules. I have already mentioned most cases, I will only go into a couple of specific sessions here.
Niklas Lindström and Lina Westerling presented the developments with the new linked data based cataloguing system for the Swedish LIBRIS union catalogue. This approach is not simply a matter of exposing and consuming linked data, but in essence the reconstruction of existing workflows using a completely new architecture.
The data management and integration platform d:swarm, a joint open source project of SLUB State and University Library Dresden and the commercial company AvantgardeLabs was presented in a lightning talk by Jan Polowinski. This tool aims at harvesting and normalising data from various existing systems and datastores into an intermediate platform that in turn can be used for all kinds of existing and new front end systems and services. The concept looks very useful for library environments with a multitude of legacy systems. Some time ago I visited the d:swarm team in Dresden together with a group of developers from the KOBV library consortium in Berlin, two of whom (Julia Goltz and Viktoria Schubert) presented their own new K2 portal solution for the data integration challenge in a lightning talk.
Linked data is all about unique identifiers on the web. ORCiD, the recently popular global identifier for researchers and the topic of one of last year’s ELAG workshops, was explained by Tom Demeranville. As it happened, right after the conference it became clear that ORCiD had implemented the Turtle linked data format.
The problem of matching string based personal names from various data sources without matching identifiers was tackled in the workshop “Linking Data with sameAs” which I attended. Jane and Adrian Stevenson of the ArchivesHub UK showed us hands-on how to use tools like LOD-Refine and Silk for reconciling string value data fields and producing “sameAs” relationships/triples to be used in your local triple store. They have had substantial experience with this challenge in their Linking Lives project. I found the workshop very useful. One of the take-aways was that matching string data is hard work.
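In the spirit of the workshop, here is a toy Python sketch of fuzzy name matching that emits owl:sameAs triples; the URIs and names are invented, and real reconciliation with tools like LOD-Refine and Silk uses far more evidence than a single string similarity score.

```python
from difflib import SequenceMatcher

# Toy reconciliation: normalise "Surname, Forename" vs "Forename Surname",
# compare by string similarity, emit owl:sameAs for pairs above a threshold.
source_a = {"http://example.org/a/1": "Stevenson, Jane"}
source_b = {"http://example.org/b/9": "Jane Stevenson"}

def normalise(name):
    parts = [p.strip() for p in name.split(",")]
    return " ".join(reversed(parts)).lower() if len(parts) == 2 else name.lower()

for uri_a, name_a in source_a.items():
    for uri_b, name_b in source_b.items():
        score = SequenceMatcher(None, normalise(name_a), normalise(name_b)).ratio()
        if score > 0.9:
            print(f"<{uri_a}> owl:sameAs <{uri_b}> .  # score {score:.2f}")
```

Even this toy shows why it is hard work: the threshold, the normalisation rules and the handling of near-misses all need human judgment per dataset.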
Hard work also goes on in the caves and basements of the library world, as was demonstrated by Toke Eskildsen in his war stories of the Danish State Library with scanning companies, and by Eva Dahlbäck and Theodor Tolstoy in their account of using smartphones and RFID technology in fetching books from the stacks.
Once again I have to say that a number of unofficial sessions, at breakfast, dinner, in pubs and hotel bars, were much more informative than the official presentations. These open discussions in small groups, fostering free exchange of ideas without fear of embarrassment, while being triggered by the talks in the official programme, can simply not take place within a tight conference schedule. Nevertheless, ELAG is a conference small and informal enough to attract people inclined to these extracurricular activities. I thank everybody who engaged in this. You know who you are. Or check Rurik Greenall’s conference report, which is a very structured yet personal account of the event.
Lots of thanks to the dedicated and very helpful local organisation team of the Library of the University of Bath, who have done a wonderful job doing something completely new to them: organising an international conference.
Posted on December 1st, 2013
Struggling towards usable linked data services at SWIB13
Paraphrasing some of the challenges proposed by keynote speaker Dorothea Salo, the unofficial theme of the SWIB13 conference in Hamburg might be described as “No more ontologies, we want out of the box linked data tools!”. This sounds like we are dealing with some serious confrontations in the linked open data world. Judging by Martin Malmsten’s LIBRIS battle cry “Linked data or die!” you might even think there’s an actual war going on.
Looking at the whole range of this year’s SWIB pre-conference workshops, plenary presentations and lightning talks, you may conclude that “linked data is a technology that is maturing” as Rurik Greenall rightly states in his conference report. “But it has quite a way to go before we can say this stuff is ready to roll out in libraries” as he continues. I completely agree with this. Personally I got the impression that we are in a paradoxical situation where on the one hand people speak of “we” and “community”, and on the other hand they take fundamentalist positions, unconditionally defending their own beliefs and slandering and ridiculing other options. In my view there are multiple, sometimes overlapping, sometimes irreconcilable “we’s” and “communities”. Sticking to your own point of view without willingness to reason with the other party really does not bring “us” further.
This all sounds a bit grim, but I again agree with Rurik Greenall when he says that he “enjoyed this conference immensely because of the people involved”. And of course on the whole the individual workshops and presentations were of a high quality.
Before proceeding to the positive aspects of the conference, let me first elaborate a bit on the opposing positions I observed during the conference, which I think we should try to overcome.
Developers disagree on a multitude of issues:
- Formats: developers hate MARC. Everybody seems to hate RDF/XML; JSON-LD seems to be the thing for RDF, but some say only Turtle should be used, or just JSON.
- Tools and languages: Perl users hate Java, Java users hate PHP, and there’s Python and Ruby bashing.
- Ontologies: create your own or reuse existing ones, upper ontologies yes or no, or no ontologies at all but usable tools instead.
- Operating systems: Windows/UNIX/Linux/Apple… it’s either/or.
- Open source vs. commercial software: need I say more?
- Beer: Belgians hate German beer, or any foreign beer for that matter.
(Not to mention PDF.)
OK, I hope I made myself clear. The point is that I have no problem at all with having diverse opinions, but I dislike it when people are convinced that their own opinion is the only right one and refuse to have a conversation with those who think otherwise, or even respect their choices in silence. The developer “community” definitely has quite a way to go.
Apart from these internal developer disagreements I noticed, there is the more fundamental gap between developers and users of linked open data. By “users” I do not mean “end users” in this case, but the intermediary deployers of systems. Let’s call them “libraries”.
Linked Data developers talk about tools and programming languages, metadata formats, open source, ontologies, technology stacks. Librarians want to offer useful services to their end users, right now. They may not always agree on what kind of services and what kind of end users, and they may have an opinion on metadata formats in systems, but their outlook is slightly different from the developers’ horizon. It’s all about expectations and expectation management. That is basically the point of Dorothea Salo’s keynote. Of course theoretical, scientific and technical papers and projects are needed to take linked data further, but libraries need linked data tools, focused on providing new services to their end users/customers in the world of the web, that can easily be implemented and maintained.
In this respect OCLC’s efforts to add linked data features to WorldCat are praiseworthy. OCLC’s Technology Evangelist Richard Wallis presented his view on the benefits of linked open data for libraries, using Google’s Knowledge Graph as an example. His talk was mainly aimed at a librarian audience. At SWIB, where the majority of attendees are developers or technology staff, this seemed somewhat misplaced. By chance I had been present at Richard’s talk at the Dutch National Information Professional annual meeting two weeks earlier, where he delivered almost the same presentation for a large room full of librarians. There and then it was completely on target. For the SWIB audience this all may have been old news, except for the heads-up about OCLC’s work on FRBR “Works” BIBFRAME type linked data, which will result in published URIs for Works in WorldCat.
An important point here is that OCLC is a company with many library customers worldwide, so developments like this benefit all of these libraries. The same applies to customers of one of the other big library system vendors, Ex Libris. They have been working on developing linked data features for their so called “next generation” tools for some time now, in close cooperation with the international user groups’ Linked Open Data Special Interest Working Group, as I explained in the lightning talk I gave. Also, open source library systems like Koha are working on adding linked open data features to their tools. It’s with tools like these, which reach a large number of libraries, that linked open data for libraries can spread relatively quickly.
In contrast to this linked data broadcasting, the majority of the SWIB presentations showed local proprietary development or research projects, though mostly of high quality. In the case of systems or tools that were built, all the code and ontologies are available on GitHub, making them open source. However commendable, open source on GitHub doesn’t mean that these potentially ground breaking systems and ontologies can and will be adopted as de facto standards in the wider library community. Most libraries, both public and academic, are dependent on commercial system and content providers and can’t afford large scale local system development. This also applies, up to a point, to libraries that deploy large open source tools like Koha, I presume.
It would be great if some of these open source projects could evolve into commonly used standard tools, like Koha, Fedora and Drupal, to name just a few. Vivo is another example of an open source project rapidly moving towards an accepted standard. It is a framework for connecting and publishing research information of different nature and origin, based on linked data concepts. At SWIB there was a pre-conference “VivoCamp”, organised by Lambert Heller, Valeria Pesce and myself. Research information is an area rapidly gaining importance in the academic world. The Library of the University of Amsterdam, where I work, is in the process of starting a Vivo pilot, in which I am involved. (Yes, the Library of the University of Amsterdam uses both commercial providers like OCLC and Ex Libris, and many open source tools). The VivoCamp was a good opportunity to have a practical introduction to and discussion about the framework, not in the least through the presence of John Fereira of Cornell University, one of the driving forces behind Vivo. All 26 attendees expressed their interest in a follow-up.
Vivo, although it may be imperfect, represents the type of infrastructure that may be needed for large scale adoption of linked open data in libraries. PUB, the repository based linked data research information project at Bielefeld University presented by Vitali Peil, is aimed at exactly the same domain as Vivo, but it again is a locally developed system, using another smaller scale open source framework (LibreCat/Catmandu of Bielefeld, Ghent and Lund universities) and a number of different ontologies, of which Vivo is just one. My guess is that, although PUB/LibreCat might be superior, Vivo will become the de facto standard in linked data based research information systems.
Instead of focusing on systems, maybe the library linked data world would be better served by a common user-friendly metadata+services infrastructure. Of course, the web and the semantic web are supposed to be that infrastructure, but in reality we all move around and process metadata all the time, from one system and database to another, in order to be able to offer new legacy and linked data services. At SWIB there was mention of a number of tools for ETL, which is developer jargon for Extract, Transform, Load. By the way, jargon is a very good way to widen the gap between developers and libraries.
There were pre-conference workshops for the ETL tools Catmandu and Metafacture, and in a lightning talk SLUB Dresden, in collaboration with Avantgarde Labs, presented a new project focused on using ETL for a separate multi-purpose data management platform, serving as a unified layer between external data sources and services. This looks like a very interesting concept, similar to the ideas of a data services hub I described in an earlier post “(Discover AND deliver) OR else”. The ResourceSync project, presented by Simeon Warner, is trying to address the same issue by a different method: distributed synchronisation of web resources.
One can say that the BIBFRAME project is also focused on data infrastructure, albeit at the moment limited to the internal library cataloguing workflow, aimed at replacing MARC. An overview of the current state of the project was presented by Lars Svensson of the German National Library.
The same can be said for the National Library of Sweden’s new LIBRIS linked data based cataloguing system, presented by Martin Malmsten (Decentralisation, Distribution, Disintegration – towards Linked Data as a First Class Citizen in Libraryland). The big difference is that they’re actually doing what BIBFRAME is trying to plan. The war cry “Linked data or die!” refers to the fact that it is better to start from scratch with a domain and format independent data infrastructure, like linked data, than to try and build linking around existing rigid formats like MARC. Martin Malmsten rightly stated that we should keep formats outside our systems, as is also the core statement of the MARC-MUST-DIE movement. Proprietary formats can be dynamically imported and exported at will, as was demonstrated by the “MARC” button in the LIBRIS user interface. New library linked data developments will have to coexist with the existing wider library metadata and systems environment for some time.
Like all other local projects, the LIBRIS source code and ontology descriptions are available on GitHub. In this case the sheer scope of the National Library of Sweden and of the project makes it a bit more plausible that this may actually be reused on a larger scale. At least the library cataloguing ontology in JSON-LD there is worth having a look at.
To return to our starting point, the LIBRIS project acknowledges the fact that we need actual tools besides the ontologies. As Martin Malmsten quoted: “Trying to sell the idea of linked data without interfaces is like trying to sell a fax without the invention of paper”.
The central question in all this: what is the role of libraries in linked data? Developers or implementers, individually or in a community? There is obviously not one answer. Maybe we will know more at SWIB14. Paraphrasing Fabian Steeg and Pascal Christoph of hbz, and Dorothea Salo, next year’s theme might be “Out of the box data knitting for great justice”.
Posted on June 10th, 2013
The inside-out library at ELAG 2013
This year marked my fifth ELAG conference since 2008 (I skipped 2009), which is not much if you take into account that ELAG2013 was the 37th one. I really enjoyed the 2013 conference, not in the least because of the wonderful people of the local organising committee at the Ghent University Library, who made ELAG2013 a very pleasant event. This year’s theme was “the inside-out library”, a concept coined by Lorcan Dempsey, which in brief emphasises the need for libraries to shift focus 180 degrees.
In my personal overall conference experience major emphasis was on research support in libraries. This was partly due to my attendance of the pre-conference Joint OpenAIRE/LIBER Workshop ‘Dealing with Data – what’s the role for the library?’ on May 28. It was good to have sessions focusing on different perspectives: data management, data publication, the researchers’ needs, library support and training. I was honoured to be invited to participate in the closing round table panel discussion together with the two library directors Wilma van Wezenbeek (TU Delft Library) and Wolfram Horstmann (Bodleian Library), under the excellent supervision of Kevin Ashley (DCC). An important central concept in the workshop was the research life cycle, which consists of many different tasks of a very diverse nature. Academic and research libraries should focus on those tasks for which they are or can easily become qualified.
Looking from another angle we can distinguish two main perspectives in integrating research: the research ecosystem itself, which can be seen as the main topic of the OpenAIRE/LIBER workshop, and the research content, the actual focus of researchers and research projects. I will try to address both perspectives here.
On the first day of the actual conference Herbert Van de Sompel gave the keynote speech with the title “A clean slate”. Rurik Greenall aptly describes the scope and meaning of Herbert’s argument. Herbert has been involved in a number of important and relevant projects in the domain of scholarly communication. My impression this time was: now he’s bringing it all together around the fairly new concept of the “research object”, integrating a number of projects and protocols, like ORE, Memento, OpenAnnotation, Provenance, ResourceSync. It’s all about connections between all components related to research on the web in all dimensions.
This linking of input, output, procedures and actors of research projects in various temporal and contextual dimensions in a machine readable way is extremely important in order to be able to process all relevant information by means of computer systems and present it to the human consumer. In this respect I think it is essential that data citations in scholarly articles should not only be made available in the article text, but also as machine readable metadata that can be indexed by external aggregators.
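What such machine readable citation metadata could look like, sketched here as schema.org JSON-LD; the identifiers and property choices are invented for illustration, not a prescribed standard.

```python
import json

# Hypothetical data citation exposed as machine readable metadata alongside
# the article text, so aggregators can index the article-dataset link.
citation = {
    "@context": "http://schema.org/",
    "@type": "ScholarlyArticle",
    "@id": "http://dx.doi.org/10.9999/example-article",  # made-up DOI
    "citation": {
        "@type": "Dataset",
        "@id": "http://dx.doi.org/10.9999/example-dataset",  # made-up DOI
        "name": "Survey responses, 2012",
    },
}
print(json.dumps(citation, indent=2))
```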
Moreover, it would be even better if it was possible to provide links to research projects that would serve as central hubs for linking to all associated entities, not only datasets. This is the role that the research object can fulfill. During the OpenAIRE/LIBER workshop I tried to address this issue a number of times, because I am a bit surprised that both researchers and publishers appear to be satisfied with having text only clickable dataset citations. That is even the case the other way around with links to articles in dataset repositories like Dryad. I think there is a role here for information professionals and metadata experts in libraries. This is exactly the point that Peter van Boheemen made in his talk about producing better metadata for research output. Similarly Jing Wang stressed the importance of investigating the role of metadata specialists and data librarians for interoperability and authority control in her presentation on the open source linked data based research discovery tool Vivo.
Again, there are two perspectives here. Even if we have machine-readable metadata on research projects and datasets, most systems are not adequately equipped with functionality to process or present this information. It is not easy to extend complex systems with new functionality: planned update cycles, including extensive testing, are necessary in order to adhere to the system’s design and architecture and to avoid breaking things. This applies equally to commercial, open source and home-grown systems. Joachim Neubert’s presentation on the use of the open source CMS Drupal for linked data enhanced publishing of special collections illustrated this: some very specialist custom extensions to the essentially quite flexible system were needed to make it a success. (On a different note, it was nice to see that Joachim used a simple triple diagram from my first library linked data blog post to illustrate the use of different types of predicates between similar subjects and objects.)
A similar point can be made about systems and identifiers for people (authors, researchers, etc.). I participated in the workshop on ISNI, ORCID and VIAF: Examining the fundamentals and application of contributor identifiers, led by Anila Angjeli and Thom Hickey, one of six ELAG workshops this year. Thom and Anila presented a very complete and detailed overview of the similarities and differences between these three identifier schemes. One of the discussion topics was the difference between the adoption of these schemes by the community on the one hand, and their application as machine-readable metadata in library systems on the other.
This is where “resilience” comes into play, a concept introduced by Beate Rusch in her talk on the changing roles of the German regional library consortia and service centres in a world of cloud computing and SaaS. Rurik Greenall captures the essence of her talk when he says that “… homogenous, generic solutions will not work in practice because they are at odds with how things are done …” and that “messy, imperfect systems… are smart and long lived”. Since Beate’s presentation the term “resilience” has popped up in a number of discussions with colleagues, during and after the conference, mainly in the sense that most systems, communities and infrastructures are NOT resilient. Resilience is a concept mainly used in psychology and physics, meaning the ability of someone or something to return to its original state after being subjected to a severe disturbance. Beate’s point is that we can adapt better to changing circumstances and needs in the world around us if we are less perfect and rigid than we usually are. In this sense I think resilience can also mean that a structure permanently changes instead of returning to its original state.
In the library world, resilience can be applied to librarians, libraries, library infrastructure and library systems alike. In my view “resilience” might apply to the alternative architecture I described in a recent blog post, where I argue that we should stop thinking in systems and start thinking in data. In order to be resilient we need an open, connected infrastructure that is of the web (not just on the web). The SCAPE infrastructure for processing large datasets for long term preservation, presented by Sven Schlarb, might fit this description.
A number of presentations focused on infrastructure and architecture. The new version of the Swedish union catalogue LIBRIS could be described as a resilient system. Martin Malmsten, Markus Sköld and Niklas Lindström showed their new linked open data based integrated library framework, which was built from the ground up, from “a clean slate” so to speak. I can only echo Rurik’s verdict: “With this, Libris really are showing the world how things are done”. This stands in contrast to the Library of Congress BibFrame development, which started out very promisingly but now seems to be evolving into an inward-looking, rigid New Marc. This was illustrated by Martin Malmsten when he revealed to us that Marc is undead, and by Becky Yoose, who wrote a very pertinent parable telling the tale of the resurrection of Marc.
Rurik Greenall described the direction taken at his own institution, NTNU Library: getting rid of old legacy library and webpage formats and moving towards being part of the web, providing information for the web, being data driven. It’s a slow, uphill struggle, but better than the alternative. A clean slate again!
Dave Pattern presented a different approach: connecting data from a number of existing systems and databases by means of APIs, and combining these into a new and well-received reading list service at the University of Huddersfield.
Back to research. In our presentation, or rather performance, Jane Stevenson and I tried to present the conflicting perspectives of collection managers and researchers in a theatrical way, showing parallel developments in the music industry. Afterwards we analysed the different perspectives, argued that researchers need connected information of all types and from all sources, and concluded that information professionals should learn to take the researcher’s perspective in order to avoid becoming irrelevant in that area.
The relationship between libraries and researchers was also the subject of the talk “Partners in research. Outside the library, inside the infrastructure“, by Sally Chambers and Saskia Scheltjens. Here the focus was on providing comprehensive infrastructures for research support, especially in the digital humanities. The central question: large top-down institutionalised structures, or bottom-up connected networks? The bottom line: the researchers’ needs have to be met in the best possible way.
A very interesting example of an actual digital humanities research and teaching project, run as a collaboration between researchers and the library, is the Annotated Books Online project presented by Utrecht University staff. The collection of rare books is made available online in order to crowdsource the interpretation of the handwritten annotations present in these books.
Besides research support there were presentations on other “inside out library” topics: publishing, teaching, data analysis and GLAM.
Anders Söderbäck presented the Stockholm University Press, a new publishing house for open access digital and print-on-demand books. I was pleasantly surprised that Anders included two quotes from my aforementioned blog post in his talk: “...in the near future we will see the end of the academic library as we know it” and “According to some people university libraries are very suitable and qualified to become scholarly publishers … I am not sure that this is actually the case. Publishing as it currently exists requires a number of specific skills that have nothing to do with librarian expertise“. But of course Anders’ most important achievement was winning the Library Automation Bingo by including all the required terms in one slide in a coherent and meaningful way.
Merrilee Proffitt presented an overview of MOOCs and libraries, and Sarah Brown described how learning materials at the Open University in the UK are successfully connected and integrated in the linked data based STELLAR project. Looking at these developments, the question arises whether there are already efforts to arrive at a Teaching Object model, similar to the Research Object.
Andrew Nagy described the importance of analysing huge amounts of usage data in order to improve the usability and end user front end of the Summon discovery tool. Dan Chudnov presented the Social Media Manager prototype, used for collecting data from Twitter for use in social science research.
Valentine Charles described the activities carried out by Europeana to contribute large amounts of digitised library heritage resources to Wikimedia Commons by means of the GLAMwiki toolset, in order to improve the visibility of these resources the open access way. The GLAMwiki toolset currently presents a number of challenges for the interoperability and integration of metadata standards between the library and Wikimedia worlds. Another plea for resilience.
Then there were the workshops. The combination of these parallel hands-on, engaging group activities and the plenary sessions makes ELAG a unique experience. Although I could obviously participate in only one, I have heard good reports on all the others. I would like to give a special mention to Ade and Jane Stevenson’s “Very Gentle Linked Data” workshop, where they managed to teach even non-tech people not only the basic principles of linked data, but also how to create their own triple store and query it with SPARQL.
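For those who missed it, the gist of that exercise can be captured in a few lines. Below is a minimal sketch (not the workshop’s actual material) using Python and the rdflib library: build a tiny in-memory triple store and query it with SPARQL. The people and relations in the data are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF

# Build a tiny in-memory triple store with a few invented triples.
g = Graph()
ex = Namespace("http://example.org/")
g.add((ex.alice, FOAF.name, Literal("Alice")))
g.add((ex.bob, FOAF.name, Literal("Bob")))
g.add((ex.alice, FOAF.knows, ex.bob))

# Ask, in SPARQL, for the names of everyone Alice knows.
query = """
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE {
        <http://example.org/alice> foaf:knows ?person .
        ?person foaf:name ?name .
    }
"""

for row in g.query(query):
    print(row.name)  # prints: Bob
```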
Summarising: looking at the ELAG2013 presentations, are we ready for the inside-out library? Sometimes we can start with a clean slate, but that is not always possible. Resilience seems to be a requirement if we want to cope with the dramatic changes we are facing. But you can’t simply decide to be resilient: either something is resilient or it isn’t. A clean slate might then be the only option. In any case it seems obvious that connections are key. The information profession needs to invest in new connections on every level, creating new forms of knowledge, in order to stay relevant.
Posted on October 10th, 2012 28 comments
Or: Think “different” or paint yourself in a corner
EMTACL12 – Emerging Technologies in Academic Libraries 2012
I attended the EMTACL12 conference in Trondheim, October 1-3 2012, organised by the library of NTNU, the Norwegian University of Science and Technology, both as a member of the international programme committee and as a speaker. EMTACL stands for “emerging technologies in academic libraries”. Looking back, my impression was that the conference was not so much about emerging technologies as about emerging tasks using existing technologies. One of the keynote speakers, Rudolf Mumenthaler, expressed similar thoughts in his blog post “No new technologies in libraries”, but some of the other participants disagreed, saying that “being emerging” has more to do with the context of a technology than with the technology itself (see the comments on that blog post). Some technologies may be established, but still be emerging in certain domains. There is something to be said for that. Anyway, whatever you call it, we all mean the same thing.
EMTACL12 was the second EMTACL conference; the first one was organised in 2010. One of the 2010 presentations that caused a great stir amongst librarians on Twitter was “I’ve got Google, why do I need you? A student’s expectations of academic libraries” by Ida Aalen. Let’s look at this year’s conference with that perspective in mind: is there a future for academic libraries in supporting students and researchers beyond just giving access to publications?
The word “change” best describes the overall impression I got from the EMTACL12 presentations. And “data”. Both concepts involve “support and services for research and education”. Technologies that were mentioned: linked data, APIs, mobile computing, visualisation, infrastructure, communication.
The EMTACL12 programme consisted of 8 plenary keynote presentations by invited speakers, and a number of presentations in two parallel tracks. Let me report on the things that struck me most.
The title of the opening keynote by Herbert Van de Sompel, “Paint-yourself-in-the-corner Infrastructure”, aptly describes the current situation of academic libraries. “Painting yourself into a corner” means something like putting yourself in a situation with no visible solution or alternative. Herbert Van de Sompel talked about the changing nature of the scholarly record: from “fixity” and “boundary” to dynamic and interdependent on the web. Online publications and related information, like research project information, references and data, change over time, so it becomes increasingly difficult to recreate a scholarly record. These are the challenges that academic libraries need to address. Van de Sompel mentioned a couple of new tools and protocols that can help: Memento, DURI (Durable URI) and SiteStory. See also the excellent report on this session by Jane Stevenson on the Archives Hub blog.
‘Think “different”’ is what Karen Coyle told us, using the famous Steve Jobs quote. And yes, the quotes around “different” are there for a reason: it’s not the grammatically correct “think differently”, because that would be too easy. What is meant is: you have to have the term “different” in your mind all the time. Karen Coyle confronted us with a number of ingrained obsolete practices in libraries, like the ineradicable need for alphabetical ordering, which only makes sense in physical catalogue card systems. “Alphabetical order is not generally meaningful and an accident of language”, she said. The same goes for page numbers and ebooks: “…it is literally impossible to get everyone ‘on the same page’”. Before printing we already had a perfect reference system for texts, independent of physical appearance: paragraph or verse numbers (as in the Bible).
Libraries put things on shelves, forcing the user to see individual items and ignoring the connections between them. “Library classification is a knowledge-prevention system, not a knowledge-organisation system”. The focus is still too much on physical items: “The FRBR user tasks drive me insane, as they end with obtain”. According to Karen Coyle, libraries are two-dimensional linear things. We need to add a third (links), fourth (time) and fifth dimension (the users).
Is linked data the answer? Not as such: “ISBD in RDF is like putting a turbo engine on a dinosaur”. The world is not waiting for libraries’ bibliographic data as Linked Open Data; the web is already awash with bibliographic data. But we do have holdings information, and that is unique and adds value. We should try to get that information into the rich snippets in Google search results.
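As a rough illustration of what that could look like, here is a hedged sketch using schema.org’s Book, Offer and Library types. The title, ISBN and library are invented, and the actual requirements for rich snippets are of course determined by the search engines themselves.

```python
import json

# A hedged sketch of holdings information as schema.org markup:
# a book offered (i.e. held and lendable) by a library.
# All names and identifiers below are hypothetical.
holding = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "An Example Title",
    "isbn": "9780000000000",
    "offers": {
        "@type": "Offer",
        # "InStock" is used here to say: a copy is on the shelf.
        "availability": "https://schema.org/InStock",
        "offeredBy": {
            "@type": "Library",
            "name": "Example University Library",
            "url": "https://library.example.edu/",
        },
    },
}

# Embedded in the catalogue's item page, a search engine crawler
# could pick this up for a rich snippet.
print(json.dumps(holding, indent=2))
```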
Karen’s message, which I wholeheartedly support, was: “The mission of the library is not to gather physical things into an inventory, but to organize human knowledge that has been very inconveniently packaged.”
Rurik Greenall’s keynote “Defining/Defying reality: the struggle towards relevance in bibliographic data” also focused on the imminent irrelevance of libraries, from another perspective. “Outsourcing library business is better called ‘outscarcing’. Libraries are losing skills.” “You can tell a lot about an organization from the way it treats its data.” “We see metadata as good and data as bad. The terms are the same.” “Ideas change, so should your data.” Buying shelf-ready data means being static. “Data should age like wine, not like fish.” In this changing environment bibliographic data needs to be enhanced. There is a role for experts, for the library. Final quote: “The semantic web doesn’t exist anymore, it’s been absorbed by the web”.
Rudolf Mumenthaler spoke about “Innovation management in and for libraries”. During and after his talk the big question was: can innovation be driven by management, or does it need to grow by itself, by giving staff the freedom to play, the Google way? It appears that there may be cultural differences here. The main thing is: innovation has to be facilitated one way or another. See the comments on his blog post.
Astrophysicist Eirik Newth entertained the audience with his slideless “Forecast for the academic library of 2025: Cloudy with a chance of user participation and content lock-in”.
Jens Vigen, Head Librarian at CERN, delivered a very entertaining and compelling argument for open access with his talk “Connecting people and information: how open access supports research in High Energy Physics. Since 50 years!”. The CERN convention of 1953 already effectively contains an open access manifesto. CERN supports SCOAP3, the Sponsoring Consortium for Open Access Publishing in Particle Physics, and uses subscription funds for open access. “You librarians today spend money on subscriptions, tomorrow you will spend it on open access.”
A couple of very interesting remarks by Jens Vigen that are of direct interest to online library discovery layers:
“A researcher would never go to an institutional repository, they find their colleagues in subject repositories.”
“A successful digital library: one size does not fit all.”
The talk “OCLC WorldShare and Linked Data” by OCLC’s new “Technology Evangelist” Richard Wallis was actually not about WorldShare and linked data, but consisted of two parts: a WorldShare commercial, and a presentation on WorldCat and linked data, mainly the embedding of additional schema.org markup in WorldCat search results. Richard Wallis also mentioned the WorldCat Linked Data Facebook app, which almost nobody seemed to know. Maybe Facebook isn’t the right platform for things like this after all?
In his closing keynote “What Next for Libraries? Making Sense of the Future“ Brian Kelly, UKOLN, University of Bath, in the UK, made it clear that it is very hard to foresee the future, with Star Trek, monorails and paper planes as evidence.
Obviously I could only attend half of the parallel track sessions. Moreover, I chaired two sessions of two presentations each, in the “Semantic Web” and “Supporting Research” tracks, and I gave one presentation myself.
In “The winner takes it all? – APIs and Linked Data battle it out”, Jane and Adrian Stevenson (yes, they’re married, and work together) of the MIMAS National Data Centre at the University of Manchester in the UK performed an actual battle, defending the generic linked data approach versus the more dedicated API approach to making data available for reuse and mashups. Two interesting projects served as examples: the World War 1 Discovery Project (Adrian for APIs) and Linking Lives (Jane for Linked Data). Conclusion: too close to call.
Norwegian Black Metal was the intriguing topic of Kim Tallerås’ talk “Using Linked data to harmonize heterogeneous metadata – Modeling the birth of Norwegian black metal”. He and three others combined complicated metadata from two heterogeneous data sources about early Norwegian black metal bands, performances and recordings using linked data ontologies and graph matching techniques. We saw some very interesting slides containing MARC records and some typical Black Metal band and song names.
Afterwards we had the opportunity to experience the real thing in the Black Metal Room in the Norwegian Rock and Pop Museum Rockheim during our conference excursion.
“Mubil: a digital laboratory” is a project (NTNU Trondheim and PERCRO, Pisa, Italy) aimed at augmenting and enriching rare old books in a digital 3D architecture, ready for all kinds of platforms and devices. The results are touch ebooks, with options for retrieving extra textual information and virtual 3D objects. A very interesting presentation by Alexandra Angeletaki, Marcello Carrozzino and Chiara Evangelista.
In her talk “Libraries, research infrastructures and the digital humanities: are we ready for the challenge?”, Sally Chambers (DARIAH Göttingen) gave us a very thorough and complete overview of what “Digital Humanities” means, and of all the organisations and infrastructures currently available to libraries that are charged with supporting digital humanities research.
The History Engine project was the subject of the presentation “Driving history forward: The History Engine as a vehicle for engaging undergraduate research” by Paulina Rousseau, Whitney Kemble and Christine Berkowitz (University of Toronto Scarborough), as a real example of how libraries can support undergraduate students in their efforts to master research.
Sharon Favaro, Digital Services Librarian at Seton Hall University in South Orange, USA, showed us the landscape of disconnected tools used in the different stages of research projects: catalogues, databases, writing tools, drawing tools, reference managers, task managers, email; on the web, on internet file sharing tools, on the desktop, on flash drives. The topic of her talk “Designing tools for the 21st century workflow of research and how it changes what libraries must do” was: how can research libraries support scholars throughout the entire lifecycle of the research process? The goal is to identify areas where library tools could be better integrated to support the use of library resources throughout the research lifecycle. It was a pity that there was no real view yet on the best way to solve this problem: create a new library based infrastructure platform, use existing linking features, or other options. This will hopefully be the objective of a follow-up project at Seton Hall University Library.
“Publication profiles – presenting research in a new way“: Urban Andersson and Stina Johansson presented the Chalmers University (Gothenburg, Sweden) Publication Profiles platform, in which all kinds of information related to Chalmers University researchers and publications are linked together. The main objective is to increase the visibility of Chalmers University research. A good example of how university libraries should take care of their own research and publications domain. A very interesting visualisation feature was shown: Chalmers Geography, geographical relations between researchers and projects on Google Maps. A question I should have asked (but didn’t): how does this project relate to the VIVO project?
In my own presentation “Primo at the University of Amsterdam – Technology vs Real Life” I tried to show the discrepancies between the in theory unlimited possibilities of the technology used in library discovery layers and the limitations of the actual implementation of these tools, focusing on content, indexing and user interface configuration. One of my conclusions had already been expressed earlier by Jens Vigen: “A successful digital library: one size does not fit all.”
Let’s not forget Rune Martin Andersen’s talk on the Bartebuss (Moustache Bus) project, a Trondheim public transport open data app. Yet more proof that public transport apps are the killer apps of open data.
Last but not least: the food (delicious and lots of it), the photos, Patrick Hochstenbach’s doodles and the music: the excursion to Rockheim Museum, the conference dinner entertainment by Skrømt, and the afterparty at Ramp bar, resulting in an interesting playlist afterwards.
Posted on April 27th, 2009 1 comment
In my post “Tweeting Libraries” I described, among other things, my personal Twitter experience as opposed to institutional Twitter use. Since then I have noticed some new developments in my own Twitter behaviour and some trends in Twitter at large: individual versus social.
There have been some discussions on the web about the pros and cons, benefits and dangers of social networking tools like Twitter, focusing on “noise” (uninteresting trivial announcements) versus “signal” (meaningful content), but also on the risk of web 2.0 being a form of digital feudalism, and a possible vehicle for fascism (as argued by Andrew Keen).
My kids say: “Twitter is for old people who think they’re cool“. According to them it’s nothing more than: “Just woke up; SEND”, “Having breakfast; SEND”, “Drinking coffee; SEND”, “Writing tweet; SEND”. For them Twitter is only about broadcasting trivialities, narcissistic exhibitionism, “noise”.
For their own web communications they use chat (MSN/Messenger), SMS (mobile phone text messages), communities (Hyves, the Dutch counterpart of MySpace) and email. Basically I think young kids communicate online only within their groups of friends, with people they know.
Just to get an overview: a tweet, or Twitter message, can basically be of three different types:
- just plain messages, announcements
- replies: reactions to tweets from others, characterised by the “@<twittername>” string
- retweets: forwarding tweets from others, characterised by the letters “RT”
Although a lot of people use Twitter in the “exhibitionist” way, I don’t do that myself at all. If I look at my Twitter behaviour over the past weeks, I see almost only “retweets” and “replies”.
Both “replies” and “retweets” were obviously not features of the original Twitter concept; they came into being because Twitter users needed conversation.
For me at least, a reply is more and more becoming a replacement for short emails or mobile phone text messages. These Twitter replies are not “monologues” but “dialogues”. If you don’t want everybody to read them, you can use a “Direct message” or “DM“.
Retweets are used to forward interesting messages to the people who are following you, your “community” so to speak. No monologue, no dialogue, but sharing information with specific groups.
The “@<twittername>” mechanism is also used to refer to another Twitter user in a tweet. In official Twitter terminology “replies” have been replaced by “mentions”.
Retweets and replies are the building blocks of Twitter communities. My primary community consists of people and organisations related to libraries. I actually know only a small number of these people in person; most of them I have never met. The advantage of Twitter here is obvious: I get to know more people who are active in my professional area, I stay informed and up to date, and I can discuss topics. This is all about “signal”. If issues are too big for Twitter (more than 140 characters) we can use our blogs.
But it’s not only retweets and replies that make Twitter communities work. Trivialities (“noise”) are equally important. They make you get to know people and in this way help create relationships built on trust.
I experienced another compelling example of a very positive social use of Twitter last week, when there were a number of very interesting Library 2.0 conferences, none of which I could attend in person because of our ILS project:
- ELAG 2009 (European Library Automation Group) in Bratislava, April 22-24
- Dutch Conference Social Networks 2009 in Rotterdam, April 22
- Ugame Ulearn 2009 in Delft, April 23
All of these conferences were covered on Twitter by attendees using the hashtags #elag09, #csnr09 and #ugul09. This phenomenon makes it possible for non-participants to follow all events and discussions at these conferences and even join in. Twitter at its best!
Twitter is just a tool, a means to communicate in many different ways. It can be used for good and for bad, and of course what is “good” and what is “bad” is up to the individual to decide.