Library2.0 and beyond
  • Linked data or die!

    Posted on December 1st, 2013 Lukas Koster No comments

    Struggling towards usable linked data services at SWIB13


    Paraphrasing some of the challenges proposed by keynote speaker Dorothea Salo, the unofficial theme of the SWIB13 conference in Hamburg might be described as “No more ontologies, we want out of the box linked data tools!”. This sounds like we are dealing with some serious confrontations in the linked open data world. Judging by Martin Malmsten’s LIBRIS battle cry “Linked data or die!” you might even think there’s an actual war going on.

Looking at the whole range of this year’s SWIB pre-conference workshops, plenary presentations and lightning talks, you may conclude that “linked data is a technology that is maturing”, as Rurik Greenall rightly states in his conference report. “But it has quite a way to go before we can say this stuff is ready to roll out in libraries”, he continues. I completely agree with this. Personally I got the impression that we are in a paradoxical situation where on the one hand people speak of “we” and “community”, and on the other hand they take fundamentalist positions, unconditionally defending their own beliefs while disparaging and ridiculing the alternatives. In my view there are multiple, sometimes overlapping, sometimes irreconcilable “we’s” and “communities”. Sticking to your own point of view without any willingness to reason with the other party really does not get “us” any further.

    This all sounds a bit grim, but I again agree with Rurik Greenall when he says that he “enjoyed this conference immensely because of the people involved”. And of course on the whole the individual workshops and presentations were of a high quality.

    Before proceeding to the positive aspects of the conference, let me first elaborate a bit on the opposing positions I observed during the conference, which I think we should try to overcome.

    Developers disagree on a multitude of issues:
    Formats
Developers hate MARC. Everybody seems to hate RDF/XML; JSON-LD seems to be the thing for RDF, but some say only Turtle should be used, or just JSON.
    Tools and languages
Perl users hate Java, Java users hate PHP, and there is Python and Ruby bashing.
    Ontologies
Create your own or reuse existing ones; upper ontologies, yes or no; or no ontologies at all, just usable tools.
    Operating systems
    Windows/UNIX/Linux/Apple… it’s either/or.
    Open source vs. commercial software
    Need I say more?
    Beer
    Belgians hate German beer, or any foreign beer for that matter.
    (Not to mention PDF).

OK, I hope I made myself clear. The point is that I have no problem at all with diverse opinions, but I dislike it when people are convinced that their own opinion is the only right one and refuse to have a conversation with those who think otherwise, or even to respect their choices in silence. The developer “community” definitely has quite a way to go.

Apart from these internal developer disagreements, I noticed a more fundamental gap between developers and users of linked open data. By “users” I do not mean “end users” in this case, but the intermediary deployers of systems. Let’s call them “libraries”.
Linked data developers talk about tools and programming languages, metadata formats, open source, ontologies, technology stacks. Librarians want to offer useful services to their end users, right now. They may not always agree on what kind of services and what kind of end users, and they may have an opinion on metadata formats in systems, but their outlook is slightly different from the developers’ horizon. It’s all about expectations and expectation management. That was basically the point of Dorothea Salo’s keynote. Of course theoretical, scientific and technical papers and projects are needed to take linked data further, but libraries need linked data tools, focused on providing new services to their end users/customers in the world of the web, that can easily be implemented and maintained.
In this respect OCLC’s efforts to add linked data features to WorldCat are praiseworthy. OCLC’s Technology Evangelist Richard Wallis presented his view on the benefits of linked open data for libraries, using Google’s Knowledge Graph as an example. His talk was mainly aimed at a librarian audience. At SWIB, where the majority of attendees are developers or technology staff, this seemed somewhat misplaced. By chance I had been present at Richard’s talk at the Dutch National Information Professional annual meeting two weeks earlier, where he delivered almost the same presentation for a large room full of librarians. There and then it was completely on target. For the SWIB audience this all may have been old news, except for the heads-up about OCLC’s work on FRBR “Works” BIBFRAME type linked data, which will result in published URIs for Works in WorldCat.
An important point here is that OCLC is a company with many library customers worldwide, so developments like this benefit all of these libraries. The same applies to customers of one of the other big library system vendors, Ex Libris. They have been working on linked data features for their so-called “next generation” tools for some time now, in close cooperation with the international user groups’ Linked Open Data Special Interest Working Group, as I explained in the lightning talk I gave. Open source library systems like Koha are also working on adding linked open data features to their tools. It’s with tools like these, which reach a large number of libraries, that linked open data for libraries can spread relatively quickly.

In contrast to this linked data broadcasting, the majority of the SWIB presentations showed local proprietary development or research projects, though mostly of high quality. Where systems or tools were built, the code and ontologies are available on GitHub, making them open source. However commendable that is, open source on GitHub doesn’t mean that these potentially ground-breaking systems and ontologies can and will be adopted as de facto standards in the wider library community. Most libraries, both public and academic, are dependent on commercial system and content providers and can’t afford large scale local system development. This also applies, up to a point, to libraries that deploy large open source tools like Koha, I presume.
It would be great if some of these open source projects could evolve into commonly used standard tools, like Koha, Fedora and Drupal, just to name a few. Vivo is another example of an open source project rapidly moving towards an accepted standard. It is a framework for connecting and publishing research information of diverse nature and origin, based on linked data concepts. At SWIB there was a pre-conference “VivoCamp”, organised by Lambert Heller, Valeria Pesce and myself. Research information is an area rapidly gaining importance in the academic world. The Library of the University of Amsterdam, where I work, is in the process of starting a Vivo pilot, in which I am involved. (Yes, the Library of the University of Amsterdam uses both commercial providers like OCLC and Ex Libris, and many open source tools.) The VivoCamp was a good opportunity for a practical introduction to and discussion of the framework, not least thanks to the presence of John Fereira of Cornell University, one of the driving forces behind Vivo. All 26 attendees expressed their interest in a follow-up.
Vivo, although it may be imperfect, represents the type of infrastructure that may be needed for large scale adoption of linked open data in libraries. PUB, the repository based linked data research information project at Bielefeld University presented by Vitali Peil, is aimed at exactly the same domain as Vivo, but it is again a locally developed system, using another smaller scale open source framework (LibreCat/Catmandu of Bielefeld, Ghent and Lund universities) and a number of different ontologies, of which Vivo is just one. My guess is that, although PUB/LibreCat might be superior, Vivo will become the de facto standard in linked data based research information systems.

Instead of focusing on systems, maybe the library linked data world would be better served by a common user-friendly metadata+services infrastructure. Of course, the web and the semantic web are supposed to be that infrastructure, but in reality we all move around and process metadata all the time, from one system and database to another, in order to be able to offer new services, both legacy and linked data based. At SWIB a number of tools for ETL were mentioned, which is developer jargon for Extract, Transform, Load. By the way, jargon is a very good way to widen the gap between developers and libraries.
There were pre-conference workshops for the ETL tools Catmandu and Metafacture, and in a lightning talk SLUB Dresden, in collaboration with Avantgarde Labs, presented a new project focused on using ETL for a separate multi-purpose data management platform, serving as a unified layer between external data sources and services. This looks like a very interesting concept, similar to the ideas of a data services hub I described in an earlier post “(Discover AND deliver) OR else”. The ResourceSync project, presented by Simeon Warner, is trying to address the same issue by a different method: distributed synchronisation of web resources.
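To make the jargon a bit more concrete, here is a minimal sketch of such an ETL step in Python, using the rdflib library. The record fields, vocabulary choices and URI pattern are just assumptions for illustration; this is not how Catmandu or Metafacture actually work internally.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

DCT = Namespace("http://purl.org/dc/terms/")
BIBO = Namespace("http://purl.org/ontology/bibo/")

# Extract: a flat record, e.g. parsed from a MARC dump or an OAI-PMH feed
record = {"id": "123", "title": "Linked Data for Libraries", "creator": "Doe, J."}

# Transform: map the record onto RDF triples
g = Graph()
work = URIRef("http://example.org/work/" + record["id"])  # hypothetical URI pattern
g.add((work, RDF.type, BIBO.Document))
g.add((work, DCT.title, Literal(record["title"])))
g.add((work, DCT.creator, Literal(record["creator"])))

# Load: here simply serialised to Turtle; a real pipeline would write
# the triples to a triple store or search index instead
print(g.serialize(format="turtle"))
```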

    One can say that the BIBFRAME project is also focused on data infrastructure, albeit at the moment limited to the internal library cataloguing workflow, aimed at replacing MARC. An overview of the current state of the project was presented by Lars Svensson of the German National Library.
    The same can be said for the National Library of Sweden’s new LIBRIS linked data based cataloguing system, presented by Martin Malmsten (Decentralisation, Distribution, Disintegration – towards Linked Data as a First Class Citizen in Libraryland). The big difference is that they’re actually doing what BIBFRAME is trying to plan. The war cry “Linked data or die!” refers to the fact that it is better to start from scratch with a domain and format independent data infrastructure, like linked data, than to try and build linking around existing rigid formats like MARC. Martin Malmsten rightly stated that we should keep formats outside our systems, as is also the core statement of the MARC-MUST-DIE movement. Proprietary formats can be dynamically imported and exported at will, as was demonstrated by the “MARC” button in the LIBRIS user interface. New library linked data developments will have to coexist with the existing wider library metadata and systems environment for some time.
Like all other local projects, the LIBRIS source code and ontology descriptions are available on GitHub. In this case the sheer scope of the National Library of Sweden and of the project makes it a bit more plausible that this may actually be reused on a larger scale. At least the library cataloguing ontology in JSON-LD there is worth having a look at.
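For those who have never seen bibliographic JSON-LD, this is roughly what a description of a work looks like. A made-up schematic sketch using Dublin Core terms, not an excerpt from the actual LIBRIS ontology:

```json
{
  "@context": {
    "dct": "http://purl.org/dc/terms/"
  },
  "@id": "http://example.org/work/xyz",
  "@type": "dct:BibliographicResource",
  "dct:title": "An example title",
  "dct:creator": { "@id": "http://example.org/person/abc" }
}
```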
    To return to our starting point, the LIBRIS project acknowledges the fact that we need actual tools besides the ontologies. As Martin Malmsten quoted: “Trying to sell the idea of linked data without interfaces is like trying to sell a fax without the invention of paper”.


The central question in all this: what is the role of libraries in linked data? Developers or implementers, individually or in a community? There is obviously not one answer. Maybe we will know more at SWIB14. Paraphrasing Fabian Steeg and Pascal Christoph of hbz and Dorothea Salo, next year’s theme might be “Out of the box data knitting for great justice”.


  • Resilience, connections and a clean slate

    Posted on June 10th, 2013 Lukas Koster 17 comments

    The inside-out library at ELAG 2013
This year marked my fifth ELAG conference since 2008 (I skipped 2009), which is not much if you take into account that ELAG2013 was the 37th one. I really enjoyed the 2013 conference, not in the least because of the wonderful people of the local organising committee at the Ghent University Library, who made ELAG2013 a very pleasant event. This year’s theme was “the inside-out library”, a concept coined by Lorcan Dempsey, which in brief emphasises the need for libraries to shift focus 180 degrees.


    Sylvia Van Peteghem opening speech

    Before you read any further I strongly suggest you read Rurik Greenall’s post on ELAG 2013 first. He covered most of the programme in his usual thorough and analytical way.

In my personal overall conference experience the major emphasis was on research support in libraries. This was partly due to my attendance of the pre-conference Joint OpenAIRE/LIBER Workshop ‘Dealing with Data – what’s the role for the library?’ on May 28. It was good to have sessions focusing on different perspectives: data management, data publication, the researchers’ needs, library support and training. I was honoured to be invited to participate in the closing round table panel discussion together with two library directors, Wilma van Wezenbeek (TU Delft Library) and Wolfram Horstmann (Bodleian Library), under the excellent supervision of Kevin Ashley (DCC). An important central concept in the workshop was the research life cycle, which consists of many different tasks of a very diverse nature. Academic and research libraries should focus on those tasks for which they are or can easily become qualified.

    Looking from another angle we can distinguish two main perspectives in integrating research: the research ecosystem itself, which can be seen as the main topic of the OpenAIRE/LIBER workshop, and the research content, the actual focus of researchers and research projects. I will try to address both perspectives here.

    On the first day of the actual conference Herbert Van de Sompel gave the keynote speech with the title “A clean slate”. Rurik Greenall aptly describes the scope and meaning of Herbert’s argument. Herbert has been involved in a number of important and relevant projects in the domain of scholarly communication. My impression this time was: now he’s bringing it all together around the fairly new concept of the “research object”, integrating a number of projects and protocols, like ORE, Memento, OpenAnnotation, Provenance, ResourceSync. It’s all about connections between all components related to research on the web in all dimensions.

    This linking of input, output, procedures and actors of research projects in various temporal and contextual dimensions in a machine readable way is extremely important in order to be able to process all relevant information by means of computer systems and present it to the human consumer. In this respect I think it is essential that data citations in scholarly articles should not only be made available in the article text, but also as machine readable metadata that can be indexed by external aggregators.
Moreover, it would be even better if it were possible to provide links to research projects that would serve as central hubs for linking to all associated entities, not only datasets. This is the role that the research object can fulfill. During the OpenAIRE/LIBER workshop I tried to address this issue a number of times, because I am a bit surprised that both researchers and publishers appear to be satisfied with having text-only clickable dataset citations. The same is true the other way around, with links to articles in dataset repositories like Dryad. I think there is a role here for information professionals and metadata experts in libraries. This is exactly the point that Peter van Boheemen made in his talk about producing better metadata for research output. Similarly Jing Wang stressed the importance of investigating the role of metadata specialists and data librarians for interoperability and authority control in her presentation on the open source linked data based research discovery tool Vivo.
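For illustration: a data citation could be embedded in an article’s landing page as machine readable markup along these lines, so that aggregators would not have to scrape it from the text. This is just a sketch using schema.org vocabulary; the identifiers are made up:

```json
{
  "@context": "http://schema.org",
  "@type": "ScholarlyArticle",
  "@id": "http://example.org/article/42",
  "name": "An example article",
  "citation": {
    "@type": "Dataset",
    "@id": "http://dx.doi.org/10.5061/dryad.xxxxx",
    "name": "Supporting dataset"
  }
}
```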

    Again there are two perspectives here. Even if we have machine readable metadata on research projects and datasets, most systems are not adequately equipped with functionality to process or present this information. It is not so easy to update complex systems with new functionality. Planned update cycles, including extensive testing, are necessary in order to adhere to the system’s design and architecture and to avoid breaking things. This equally applies to commercial, open source and home grown systems. Joachim Neubert’s presentation of the use of the open source CMS Drupal for linked data enhanced publishing for special collections illustrated this. Some very specialist custom extensions to the essentially quite flexible system were needed to make this a success. (On a different note, it was nice to see that Joachim used a simple triple diagram from my first library linked data blog post to illustrate the use of different types of predicates between similar subjects and objects.)
Anyway, a similar point can be made about systems and identifiers for people (authors, researchers, etc.). I participated in the workshop on ISNI, ORCID and VIAF: Examining the fundamentals and application of contributor identifiers, led by Anila Angjeli and Thom Hickey, one of six ELAG workshops this year. Thom and Anila presented a very complete and detailed overview of the similarities and differences of these three identifier schemes. One of the discussion topics was the difference between the adoption of these schemes by the community on the one hand, and their application as machine readable metadata in library systems on the other.
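In linked data terms, aligning the three schemes for one and the same person can be expressed in a few triples. A minimal sketch in Turtle, with placeholder identifiers:

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# One researcher, three identifier schemes (placeholder identifiers)
<http://viaf.org/viaf/000000000>
    owl:sameAs <http://isni.org/isni/0000000000000000> ,
               <http://orcid.org/0000-0000-0000-0000> .
```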

Here “resilience” comes into play, a concept introduced by Beate Rusch in her talk on the changing roles of the German regional library consortia and service centres in the world of cloud computing and SaaS. Rurik Greenall captures the essence of her talk when he says “… homogenous, generic solutions will not work in practice because they are at odds with how things are done …” and that “messy, imperfect systems… are smart and long lived”. Since Beate’s presentation the term “resilience” has popped up in a number of discussions with colleagues, during and after the conference, mainly in the sense that most systems, communities and infrastructures are NOT resilient. Resilience is a concept mainly used in psychology and physics, meaning the ability of someone or something to return to its original state after being subjected to a severe disturbance. Beate’s idea with resilience is that we can adapt better to changing circumstances and needs in the world around us if we are less perfect and rigid than we usually are. In this sense I think resilience can also mean that a structure could permanently change instead of returning to its original state.
    In the library world resilience can be applied to librarians, libraries, library infrastructure and library systems alike. In my view “resilience” might apply to the alternative architecture I have described in a recent blog post, where I argue that we should stop thinking systems and start thinking data. In order to be resilient we need an open, connected infrastructure, that is of the web (not on the web). The SCAPE infrastructure for processing large datasets for long term preservation, presented by Sven Schlarb, might fit this description.

A number of presentations focused on infrastructure and architecture. The new version of the Swedish union catalogue LIBRIS could be described as a resilient system. Martin Malmsten, Markus Sköld and Niklas Lindström showed their new linked open data based integrated library framework which was built from the ground up, from “a clean slate” so to speak. I can only echo Rurik’s verdict “With this, Libris really are showing the world how things are done”. This is in contrast to the Library of Congress BibFrame development, which started out very promising but now seems to be evolving into an inward looking, rigid New Marc. This was illustrated by Martin Malmsten when he revealed to us that Marc is undead, and by Becky Yoose, who wrote a very pertinent parable telling the tale of the resurrection of Marc.
    Rurik Greenall described the direction taken at his own institution NTNU Library: getting rid of old legacy library and webpage formats and moving towards being part of the web, providing information for the web, being data driven. It’s a slow and uphill struggle, but better than the alternative. A clean slate again!
    Dave Pattern presented a different approach in connecting data from a number of existing systems and databases by means of APIs, and combining these into a new and well received reading list service at the University of Huddersfield.

    Back to research. In our presentation, or rather performance, Jane Stevenson and I tried to present the conflicting perspectives of collection managers and researchers in a theatrical way, showing parallel developments in the music industry. Afterwards we tried to analyse the different perspectives, argued that researchers need connected information of all types and from all sources and concluded that information professionals should try and learn to take the researcher’s perspective in order to avoid becoming irrelevant in that area.
The relationship between libraries and researchers was also the subject of the talk “Partners in research. Outside the library, inside the infrastructure“, by Sally Chambers and Saskia Scheltjens. Here the focus was on providing comprehensive infrastructures for research support, especially in the digital humanities. The central question: large top-down institutionalised structures, or bottom-up connected networks? The bottom line is: the researcher’s needs have to be met in the best possible way.
    A very interesting example of an actual digital humanities research and teaching project in collaboration between researchers and the library is the Annotated Books Online project that was presented by Utrecht University staff. The collection of rare books is made available online in order to crowdsource the interpretation of handwritten annotations present in these books.

    Besides research support there were presentations on other “inside out library” topics: publishing, teaching, data analysis and GLAM.
Anders Söderbäck presented the Stockholm University Press, a new publishing house for open access digital and print on demand books. I was pleasantly surprised that Anders included two quotes from my aforementioned blog post in his talk: “...in the near future we will see the end of the academic library as we know it” and “According to some people university libraries are very suitable and qualified to become scholarly publishers … I am not sure that this is actually the case. Publishing as it currently exists requires a number of specific skills that have nothing to do with librarian expertise“. But of course Anders’ most important achievement was winning the Library Automation Bingo by including all required terms in one slide in a coherent and meaningful way.



Merrilee Proffitt presented an overview of MOOCs and libraries, and Sarah Brown described the way that learning materials at the Open University in the UK are successfully connected and integrated in the linked data based STELLAR project. Looking at these developments, the question arises whether there are already efforts to come to a Teaching Object model, similar to the Research Object.
Andrew Nagy described the importance of analysing huge amounts of usage data in order to improve the usability and end user front end of the Summon discovery tool. Dan Chudnov presented the Social Media Manager prototype, used for collecting data from Twitter for use in social science research.
Valentine Charles described the activities carried out by Europeana to contribute large amounts of digitised library heritage resources to Wikimedia Commons by means of the GLAMwiki toolset, in order to improve the visibility of these resources the Open Access way. The GLAMwiki toolset currently appears to pose a number of challenges for the interoperability and integration of metadata standards between the library and Wikimedia worlds. Another plea for resilience.

    Then there were the workshops. The combination of these parallel hands-on and engaging group activities and the plenary sessions makes ELAG a unique experience. Although I only participated in one, obviously, I have heard good reports from all other workshops. I would like to give a special mention to Ade and Jane Stevenson’s “Very Gentle Linked Data” workshop, where they managed to teach even non-tech people not only the basic principles of linked data, but also how to create their own triple store and query it with SPARQL.
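To give an idea of how gentle this can be: a complete miniature triple store, built and queried in a few lines of Python with the rdflib library. The data is made up, purely for illustration:

```python
from rdflib import Graph

# Build a tiny in-memory triple store from a few Turtle statements
g = Graph()
g.parse(data="""
    @prefix dct: <http://purl.org/dc/terms/> .
    <http://example.org/book/1> dct:title "Very Gentle Linked Data" .
    <http://example.org/book/2> dct:title "SPARQL for Beginners" .
""", format="turtle")

# Query it with SPARQL
query = """
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?book ?title WHERE { ?book dct:title ?title }
"""
for book, title in g.query(query):
    print(book, title)
```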

Summarising: looking at the ELAG2013 presentations, are we ready for the inside out library? Sometimes we can start with a clean slate, but that is not always possible. Resilience seems to be a requirement if we want to cope with the dramatic changes we are facing. But you can’t simply decide to be resilient: either something is resilient or it isn’t. A clean slate might be the only option. In any case it seems obvious that connections are key. The information profession needs to invest in new connections on every level, creating new forms of knowledge, in order to stay relevant.


  • Change or be irrelevant

    Posted on October 10th, 2012 Lukas Koster 28 comments

    Or: Think “different” or paint yourself in a corner

    EMTACL12 – Emerging Technologies in Academic Libraries 2012

    I attended the EMTACL12 conference in Trondheim October 1-3, 2012, organised by the Library of NTNU Norwegian University of Science and Technology, both as a member of the international programme committee and as a speaker. EMTACL stands for “emerging technologies in academic libraries”. Looking back, my impression was that the conference was not so much about emerging technologies, as about emerging tasks using existing technologies. One of the keynote speakers, Rudolf Mumenthaler, expressed similar thoughts in his blog post “No new technologies in libraries”, but some of the other participants disagreed, saying that “being emerging” has more to do with the context of technology than with the technology itself (see the comments on that blog post). Some technologies can be established, but may still be emerging in certain domains. There is something to say for that. Anyway, whatever you say, we all mean the same thing.

    EMTACL12 was the second EMTACL conference. The first one was organised in 2010. One of the presentations that caused a great stir amongst librarians on twitter in the 2010 edition was the one entitled “I’ve got Google, why do I need you? A student’s expectations of academic libraries” by Ida Aalen. Let’s look at this year’s conference with that perspective in mind: is there a future for academic libraries in supporting students and researchers other than just giving access to publications?

The word “change” best describes the overall impression I got from all EMTACL12 presentations. And “data”. Both concepts involve “support and services for research and education”. Technologies that were mentioned: linked data, APIs, mobile computing, visualisation, infrastructure, communication.

    The EMTACL12 programme consisted of 8 plenary keynote presentations by invited speakers, and a number of presentations in two parallel tracks. Let me report on the things that struck me most.

    Keynotes


    The title of the opening keynote presentation by Herbert Van de Sompel, “Paint-yourself-in-the-corner Infrastructure” aptly describes the current situation of academic libraries. “Paint yourself in a corner” means something like: “To put yourself in a situation with no visible solution or alternative”. Herbert Van de Sompel talked about the changing nature of the scholarly record: from “fixity” and “boundary” to dynamic and interdependent on the web. Online publications and related information, like research project information, references and data, change over time, so it becomes increasingly difficult to recreate a scholarly record. These are the challenges that academic libraries need to address. Van de Sompel mentioned a couple of new tools and protocols that can help: Memento, DURI (Durable URI), SiteStory. See also the excellent report of this session by Jane Stevenson on the Archives Hub blog.
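To give a rough idea of the Memento approach: a client asks a TimeGate for the version of a resource as it existed at a given moment, simply by adding an Accept-Datetime header to an ordinary HTTP request. A minimal sketch in Python, with placeholder URIs:

```python
import requests

# Ask a (hypothetical) Memento TimeGate for a past version of a resource
resp = requests.get(
    "http://example.org/timegate/http://example.org/some/page",
    headers={"Accept-Datetime": "Tue, 01 Oct 2012 12:00:00 GMT"},
)

# The archived copy reports the datetime it actually represents
print(resp.headers.get("Memento-Datetime"))
```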


    5 dimensions

“Think ‘different’” is what Karen Coyle told us, using the famous Steve Jobs quote. And yes, the quotes around “different” are there for a reason: it’s not the grammatically correct “think differently”, because that’s too easy. What is meant here is: you have to have the term “different” in your mind all the time. Karen Coyle confronted us with a number of ingrained obsolete practices in libraries. Like the ineradicable need for alphabetic ordering, which only makes sense in physical catalogue card systems. “Alphabetical order is not generally meaningful and an accident of language”, she said. Same with page numbers and ebooks: “…it is literally impossible to get everyone ‘on the same page’”. Before printing we already had a perfect reference system for texts, independent of physical appearance: paragraph or verse numbers (like in the Bible).
Libraries put things on shelves, forcing the user to see individual items, and ignoring the connections between them. “Library classification is a knowledge-prevention system, not a knowledge-organisation system”. The focus is still too much on physical items: “The FRBR user tasks drive me insane, as they end with obtain”. According to Karen Coyle, libraries are two-dimensional linear things. We need to add a third (links), fourth (time) and fifth dimension (the users).

    © Patrick Hochstenbach

    Is linked data the answer? Not as such: “ISBD in RDF is like putting a turbo engine on a dinosaur”. The world is not waiting for libraries’ bibliographic data as Linked Open Data. The web is awash with bibliographic data. But we have holdings information, and that is unique and adds value. We should try and get that information into Google search results rich snippets.
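To illustrate, holdings information could be exposed to search engines as schema.org markup along these lines. This is only a sketch of the general idea, not the actual markup WorldCat uses; the names are made up:

```json
{
  "@context": "http://schema.org",
  "@type": "Book",
  "name": "An example title",
  "offers": {
    "@type": "Offer",
    "availability": "http://schema.org/InStock",
    "offeredBy": {
      "@type": "Library",
      "name": "Example University Library"
    }
  }
}
```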

Karen’s message, which I wholeheartedly support, was: “The mission of the library is not to gather physical things into an inventory, but to organize human knowledge that has been very inconveniently packaged.”

Rurik Greenall’s keynote “Defining/Defying reality: the struggle towards relevance in bibliographic data” also focused on the imminent irrelevancy of libraries, from another perspective. “Outsourcing library business is better called ‘outscarcing’. Libraries are losing skills.” “You can tell a lot about an organization from the way it treats its data.” “We see metadata as good and data as bad. The terms are the same.” “Ideas change, so should your data.” Buying shelf-ready data means being static. “Data should age like wine, not like fish.” In this changing environment bibliographic data needs to be enhanced. There is a role for experts, for the library. Final quote: “The semantic web doesn’t exist anymore, it’s been absorbed by the web”.

    Rurik’s world


Rudolf Mumenthaler spoke about “Innovation management in and for libraries”. During and after his talk the big question was: can innovation be promoted by management, or does it need to grow by itself in freedom, by allowing staff to play the Google way? It appears that there may be cultural differences. The main thing is: innovation has to be facilitated in one way or another. See the comments on his blog post.

    Astrophysicist Eirik Newth entertained the audience with his slideless “Forecast for the academic library of 2025: Cloudy with a chance of user participation and content lock-in”.

Jens Vigen, Head Librarian at CERN, delivered a very entertaining and compelling argument for open access with his talk “Connecting people and information: how open access supports research in High Energy Physics. Since 50 years!” The CERN convention of 1953 already effectively contains an Open Access Manifesto. CERN supports SCOAP3, the Sponsoring Consortium for Open Access Publishing in Particle Physics. CERN uses subscription funds for open access. “You librarians today spend money on subscriptions, tomorrow you will spend it on open access.”
A couple of very interesting remarks by Jens Vigen are of direct interest to online library discovery layers:
“A researcher would never go to an institutional repository, they find their colleagues in subject repositories.”
“A successful digital library: one size does not fit all.”

OCLC’s new “Technology Evangelist” Richard Wallis’ talk “OCLC WorldShare and Linked Data” was actually not about WorldShare and linked data together, but consisted of two parts: a WorldShare commercial, and a presentation of WorldCat and linked data, mainly the embedding of additional schema.org markup in WorldCat search results. Richard Wallis also mentioned the WorldCat Linked Data Facebook app, which almost nobody seemed to know. Maybe Facebook isn’t the right platform for things like this after all?

In his closing keynote “What Next for Libraries? Making Sense of the Future” Brian Kelly, UKOLN, University of Bath, in the UK, made it clear that it is very hard to foresee the future, with Star Trek, monorails and paper planes as evidence.

    Parallel tracks

    Obviously I could only attend half of the parallel tracks sessions. Moreover, I chaired two sessions of two presentations each, in the “Semantic Web” and “Supporting Research” tracks, and I gave one presentation myself.

In “The winner takes it all? – APIs and Linked Data battle it out” Jane and Adrian Stevenson (yes, they’re married, and work together) of the MIMAS National Data Centre at the University of Manchester in the UK performed an actual battle, defending the use of the generic linked data protocol versus the more dedicated API approach in making data available for reuse and mashups. Two interesting projects served as examples: the World War 1 Discovery Project (Adrian for APIs) and Linking Lives (Jane for Linked Data). Conclusion: too close to call.

    Black Metal MARC

    Norwegian Black Metal was the intriguing topic of Kim Tallerås’ talk “Using Linked data to harmonize heterogeneous metadata – Modeling the birth of Norwegian black metal”. He and three others combined complicated metadata from two heterogeneous data sources about early Norwegian black metal bands, performances and recordings using linked data ontologies and graph matching techniques. We saw some very interesting slides containing MARC records and some typical Black Metal band and song names.
    Afterwards we had the opportunity to experience the real thing in the Black Metal Room in the Norwegian Rock and Pop Museum Rockheim during our conference excursion.

    Black Metal Room at Rockheim

    Mubil: a digital laboratory” is a project (NTNU Trondheim, PERCRO, Pisa, Italy) aimed at augmenting and enriching rare old books in a digital 3D architecture, ready for all kinds of platforms and devices. Results are touch ebooks, with options for retrieving extra textual information and virtual 3D objects. A very interesting presentation by Alexandra Angeletaki, Marcello Carrozzino and Chiara Evangelista.

    In her talk “Libraries, research infrastructures and the digital humanities: are we ready for the challenge?”, Sally Chambers (DARIAH Göttingen) gave us a very thorough and complete overview of what “Digital Humanities” means and of all organisations and infrastructures currently available to libraries that are charged with supporting digital humanities research.

    The History Engine project was the subject of the presentation “Driving history forward: The History Engine as a vehicle for engaging undergraduate research” by Paulina Rousseau, Whitney Kemble and Christine Berkowitz (University of Toronto Scarborough), as a real example of how libraries can support undergraduate students in their efforts to master research.

Sharon Favaro, Digital Services Librarian at Seton Hall University in South Orange, USA, showed us the landscape of disconnected tools used in the different stages of research projects: catalogues, databases, writing tools, drawing tools, reference managers, task managers, email; on the web, on internet file sharing tools, on desktops, on flash drives. The topic of her talk “Designing tools for the 21st century workflow of research and how it changes what libraries must do” was: how can research libraries support scholars within the entire lifecycle of the research process? The goal is to identify areas where library tools could be better integrated to support library resource use throughout the lifecycle of research. It was a pity that there was no real view yet on the best way to solve this problem: create a new library based infrastructure platform, use existing linking features, or other options. This will hopefully be the objective of a follow-up project at Seton Hall University Library.

“Publication profiles – presenting research in a new way“: Urban Andersson and Stina Johansson presented the Chalmers University (Gothenburg, Sweden) Publication Profiles Platform, in which all kinds of information related to Chalmers University researchers and publications are linked together. The main objective is to increase the visibility of Chalmers University research. A good example of how university libraries should take care of their own research and publications domain. A very interesting visualisation feature was shown: Chalmers Geography, or geographical relations between researchers and projects on Google Maps. A question I should have asked (but didn’t) is: how does this project relate to the VIVO project?

In my own presentation “Primo at the University of Amsterdam – Technology vs Real Life” I tried to show the discrepancies between the in theory unlimited possibilities of the technology used in library discovery layers and the limitations in the actual implementation of these tools, focusing on content, indexing and user interface configuration. One of my conclusions had already been expressed earlier by Jens Vigen: “A successful digital library: one size does not fit all.”

Of the other parallel tracks sessions I heard good reports about Andrew Whitworth’s “The triadic model: A holistic view of how digital and information literacy must support each other” and Shun Nagaya and Keizo Itabashi’s “Covo.js: A JavaScript Library to Utilize Subject Headings and Thesauri on the Web”. This doesn’t mean that the other talks were bad; I just didn’t manage to talk to people about them. One presentation worth mentioning is Krista Godfrey’s “The QR Question: QR Codes in Academic Libraries”, because it featured QR cows and my own photo of the University of Amsterdam Library’s QR cards.

Let’s not forget Rune Martin Andersen’s talk on the Bartebuss (Moustache Bus) Trondheim public transport open data app project. This is yet more proof that public transport apps are the killer apps of open data.

    Trondheim moustache men

    Last but not least: the food (delicious and lots of it), the photos, Patrick Hochstenbach’s doodles and the music: the excursion to Rockheim Museum, the conference dinner entertainment by Skrømt, and the afterparty at Ramp bar, resulting in an interesting playlist afterwards.


  • ReTweet @Reply – Twitter communities

    Posted on April 27th, 2009 Lukas Koster 1 comment


    In my post “Tweeting Libraries” among other things I described my Personal Twitter experience as opposed to Institutional Twitter use. Since then I have discovered some new developments in my own Twitter behaviour and some trends in Twitter at large: individual versus social.

There have been some discussions on the web about the pros and cons and the benefits and dangers of social networking tools like Twitter, focusing on “noise” (uninteresting trivial announcements) versus “signal” (meaningful content), but also on the risk of web 2.0 being about digital feudalism, and being a possible vehicle for fascism (as argued by Andrew Keen).

My kids say: “Twitter is for old people who think they’re cool“. According to them it’s nothing more than: “Just woke up; SEND”, “Having breakfast; SEND”, “Drinking coffee; SEND”, “Writing tweet; SEND”. For them Twitter is only about broadcasting trivialities, narcissistic exhibitionism, “noise”.
    For their own web communications they use chat (MSN/Messenger), SMS (mobile phone text messages), communities (Hyves, the Dutch counterpart of MySpace) and email. Basically I think young kids communicate online only within their groups of friends, with people they know.

Just to get an overview: a tweet, or Twitter message, can basically be of three different types (a small sketch for telling them apart follows below):

    • just plain messages, announcements
    • replies: reactions to tweets from others, characterised by the “@<twittername>” string
    • retweets: forwarding tweets from others, characterised by the letters “RT
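These conventions are simple enough that a few lines of code can tell the three types apart. A minimal sketch in Python, based purely on the surface conventions described above:

```python
def tweet_type(text):
    """Classify a tweet by the surface conventions described above."""
    if text.startswith("RT "):   # forwarded message, marked by the letters "RT"
        return "retweet"
    if text.startswith("@"):     # reaction addressed at another user
        return "reply"
    return "plain"               # ordinary announcement

for t in ["Just woke up", "@lukask Good point!", "RT @elag: New programme online"]:
    print(tweet_type(t), "-", t)
```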

    Although a lot of people use Twitter in the “exhibitionist” way, I don’t do that myself at all. If I look at my Twitter behaviour of the past weeks, I almost only see “retweets” and “replies”.

Both “replies” and “retweets” obviously were not features of the original Twitter concept; they came into being because Twitter users needed conversation.
    A reply is becoming more and more a replacement for short emails or mobile phone text messages, at least for me. These Twitter replies are not “monologues”, but “dialogues”. If you don’t want everybody to read these, you can use a “Direct message” or “DM“.
    Retweets are used to forward interesting messages to the people who are following you, your “community” so to speak. No monologue, no dialogue, but sharing information with specific groups.
    The “@<twittername>” mechanism is also used to refer to another Twitter user in a tweet. In official Twitter terminology “replies” have been replaced by “mentions“.

Retweets and replies are the building blocks of Twitter communities. My primary community consists of people and organisations related to libraries. I actually know only a small number of these people in person. Most of them I have never met. The advantage of Twitter here is obvious: I get to know more people who are active in my professional area, I stay informed and up to date, and I can discuss topics. This is all about “signal”. If issues are too big for Twitter (more than 140 characters) we can use our blogs.
    But it’s not only retweets and replies that make Twitter communities work. Trivialities (“noise”) are equally important. They make you get to know people and in this way help create relationships built on trust.

Another compelling example of a very positive social use of Twitter is one I experienced last week, when there were a number of very interesting Library 2.0 conferences, none of which I could attend in person because of our ILS project:

All of these conferences were covered on Twitter by attendees using the hashtags #elag09, #csnr09 and #ugul09. This phenomenon makes it possible for non-participants to follow all events and discussions at these conferences and even join in the discussions. Twitter at its best!

    Twitter is just a tool, a means to communicate in many different ways. It can be used for good and for bad, and of course what is “good” and what is “bad” is up to the individual to decide.
