Library2.0 and beyond
  • Linked data or die!

    Posted on December 1st, 2013 Lukas Koster No comments

    Struggling towards usable linked data services at SWIB13


    Paraphrasing some of the challenges proposed by keynote speaker Dorothea Salo, the unofficial theme of the SWIB13 conference in Hamburg might be described as “No more ontologies, we want out of the box linked data tools!”. This sounds like we are dealing with some serious confrontations in the linked open data world. Judging by Martin Malmsten’s LIBRIS battle cry “Linked data or die!” you might even think there’s an actual war going on.

    Looking at the whole range of this year’s SWIB pre-conference workshops, plenary presentations and lightning talks, you may conclude that “linked data is a technology that is maturing”, as Rurik Greenall rightly states in his conference report. “But it has quite a way to go before we can say this stuff is ready to roll out in libraries”, he continues. I completely agree with this. Personally I got the impression that we are in a paradoxical situation where on the one hand people speak of “we” and “community”, and on the other hand they take fundamentalist positions, unconditionally defending their own beliefs and slandering and ridiculing other options. In my view there are multiple, sometimes overlapping, sometimes irreconcilable “we’s” and “communities”. Sticking to your own point of view without being willing to reason with the other party really does not get “us” any further.

    This all sounds a bit grim, but I again agree with Rurik Greenall when he says that he “enjoyed this conference immensely because of the people involved”. And of course on the whole the individual workshops and presentations were of a high quality.

    Before proceeding to the positive aspects of the conference, let me first elaborate a bit on the opposing positions I observed during the conference, which I think we should try to overcome.

    Developers disagree on a multitude of issues:
    Formats
    Developers hate MARC. Everybody seems to hate RDF/XML, JSON-LD seems to be the thing for RDF, but some say only Turtle should be used, or just JSON.
    Tools and languages
    Perl users hate Java, Java users hate PHP, there’s Python and Ruby bashing.
    Ontologies
    Create your own or reuse existing ones; upper ontologies, yes or no; or no ontologies at all, just usable tools.
    Operating systems
    Windows/UNIX/Linux/Apple… it’s either/or.
    Open source vs. commercial software
    Need I say more?
    Beer
    Belgians hate German beer, or any foreign beer for that matter.
    (Not to mention PDF).

    OK, I hope I made myself clear. The point is that I have no problem at all with having diverse opinions, but I dislike it when people are convinced that their own opinion is the only right one and refuse to have a conversation with those who think otherwise, or even to quietly respect their choices. The developer “community” definitely has quite a way to go.

    Apart from these internal disagreements among developers, I noticed a more fundamental gap between developers and users of linked open data. By “users” I do not mean “end users” in this case, but the intermediary deployers of systems. Let’s call them “libraries”.
    Linked Data developers talk about tools and programming languages, metadata formats, open source, ontologies, technology stacks. Librarians want to offer useful services to their end users, right now. They may not always agree on what kind of services and what kind of end users, and they may have an opinion on metadata formats in systems, but their outlook is slightly different from the developers’ horizon. It’s all about expectations and expectation management. That was basically the point of Dorothea Salo’s keynote. Of course theoretical, scientific and technical papers and projects are needed to take linked data further, but libraries need linked data tools, focused on providing new services to their end users/customers in the world of the web, that can easily be implemented and maintained.
    In this respect OCLC’s efforts to add linked data features to WorldCat are praiseworthy. OCLC’s Technology Evangelist Richard Wallis presented his view on the benefits of linked open data for libraries, using Google’s Knowledge Graph as an example. His talk was mainly aimed at a librarian audience. At SWIB, where the majority of attendees are developers or technology staff, this seemed somewhat misplaced. By chance I had been present at Richard’s talk at the Dutch National Information Professional annual meeting two weeks earlier, where he delivered almost the same presentation for a large room full of librarians. There and then it was completely on target. For the SWIB audience this may all have been old news, except for the heads-up about OCLC’s work on FRBR “Works” BIBFRAME type linked data, which will result in published URIs for Works in WorldCat.
    An important point here is that OCLC is a company with many library customers worldwide, so developments like this benefit all of these libraries. The same applies to customers of one of the other big library system vendors, Ex Libris. They have been working on developing linked data features for their so-called “next generation” tools for some time now, in close cooperation with the international user groups’ Linked Open Data Special Interest Working Group, as I explained in the lightning talk I gave. Open source library systems like Koha are also working on adding linked open data features to their tools. It’s with tools like these, that reach a large number of libraries, that linked open data for libraries can spread relatively quickly.

    In contrast to this linked data broadcasting, the majority of the SWIB presentations showed local proprietary development or research projects, most of them of high quality nonetheless. Where systems or tools were built, all the code and ontologies are available on GitHub, making them open source. Commendable as that is, however, open source on GitHub does not mean that these potentially ground-breaking systems and ontologies can and will be adopted as de facto standards in the wider library community. Most libraries, both public and academic, are dependent on commercial system and content providers and can’t afford large scale local system development. This also applies, up to a point, to libraries that deploy large open source tools like Koha, I presume.
    It would be great if some of these open source projects could evolve into commonly used standard tools, like Koha, Fedora and Drupal, just to name a few. Vivo is another example of an open source project rapidly moving towards an accepted standard. It is a framework for connecting and publishing research information of different nature and origin, based on linked data concepts. At SWIB there was a pre-conference “VivoCamp”, organised by Lambert Heller, Valeria Pesce and myself. Research information is an area rapidly gaining importance in the academic world. The Library of the University of Amsterdam, where I work, is in the process of starting a Vivo pilot, in which I am involved. (Yes, the Library of the University of Amsterdam uses both commercial providers like OCLC and Ex Libris, and many open source tools). The VivoCamp was a good opportunity to have a practical introduction to and discussion of the framework, not least thanks to the presence of John Fereira of Cornell University, one of the driving forces behind Vivo. All 26 attendees expressed their interest in a follow-up.
    Vivo, although it may be imperfect, represents the type of infrastructure that may be needed for large scale adoption of linked open data in libraries. PUB, the repository based linked data research information project at Bielefeld University presented by Vitali Peil, is aimed at exactly the same domain as Vivo, but it again is a locally developed system, using another smaller scale open source framework (LibreCat/Catmandu of Bielefeld, Ghent and Lund universities) and a number of different ontologies, of which Vivo is just one. My guess is that, although PUB/LibreCat might be superior, Vivo will become the de facto standard in linked data based research information systems.

    Instead of focusing on systems, maybe the library linked data world would be better served by a common user-friendly metadata+services infrastructure. Of course, the web and the semantic web are supposed to be that infrastructure, but in reality we all move around and process metadata all the time, from one system and database to another, in order to be able to offer new legacy and linked data services. At SWIB there was mention of a number of tools for ETL, which is developer jargon for Extract, Transform, Load. By the way, jargon is a very good way to widen the gap between developers and libraries.
    There were pre-conference workshops for the ETL tools Catmandu and Metafacture, and in a lightning talk SLUB Dresden, in collaboration with Avantgarde Labs, presented a new project focused on using ETL for a separate multi-purpose data management platform, serving as a unified layer between external data sources and services. This looks like a very interesting concept, similar to the ideas of a data services hub I described in an earlier post “(Discover AND deliver) OR else”. The ResourceSync project, presented by Simeon Warner, is trying to address the same issue with a different method: distributed synchronisation of web resources.
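    To make the ETL idea a bit more concrete for non-developers, here is a minimal sketch in Python. It is not Catmandu, Metafacture or the SLUB platform; the record structure, field names and URIs are invented purely for illustration.

```python
# A minimal, illustrative ETL pipeline: extract records from a source,
# transform them into simple triple-like statements, load them into a target.
# The example record and all field names are made up for this sketch.

def extract(source):
    """Yield raw records from a source system (here: an in-memory list)."""
    for record in source:
        yield record

def transform(record):
    """Map a raw record onto (subject, predicate, object) statements."""
    subject = f"http://example.org/publication/{record['id']}"
    yield (subject, "http://purl.org/dc/terms/title", record["title"])
    for author in record.get("authors", []):
        yield (subject, "http://purl.org/dc/terms/creator", author)

def load(statements, store):
    """Append statements to a target store (here: a plain Python list)."""
    store.extend(statements)

source = [{"id": "123", "title": "The Da Vinci Code", "authors": ["Dan Brown"]}]
store = []
for record in extract(source):
    load(transform(record), store)

print(store)
```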

    One can say that the BIBFRAME project is also focused on data infrastructure, albeit at the moment limited to the internal library cataloguing workflow, aimed at replacing MARC. An overview of the current state of the project was presented by Lars Svensson of the German National Library.
    The same can be said for the National Library of Sweden’s new LIBRIS linked data based cataloguing system, presented by Martin Malmsten (Decentralisation, Distribution, Disintegration – towards Linked Data as a First Class Citizen in Libraryland). The big difference is that they’re actually doing what BIBFRAME is trying to plan. The war cry “Linked data or die!” refers to the fact that it is better to start from scratch with a domain and format independent data infrastructure, like linked data, than to try and build linking around existing rigid formats like MARC. Martin Malmsten rightly stated that we should keep formats outside our systems, as is also the core statement of the MARC-MUST-DIE movement. Proprietary formats can be dynamically imported and exported at will, as was demonstrated by the “MARC” button in the LIBRIS user interface. New library linked data developments will have to coexist with the existing wider library metadata and systems environment for some time.
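    A toy illustration of the “formats outside our systems” idea (this is emphatically not LIBRIS’ actual implementation): the store holds format-neutral data, and a MARC-like view is only generated at export time, much like the “MARC” button mentioned above.

```python
# Sketch only: an internal, format-neutral record rendered as MARC-like text
# lines on demand. The internal representation and the mapping are invented.

record = {
    "uri": "http://example.org/work/1",
    "creator": "Brown, Dan",
    "title": "The Da Vinci Code",
}

def export_marc_like(rec):
    """Render an internal record as MARC-like field lines at export time."""
    return [
        f"100 1# $a {rec['creator']}",
        f"245 10 $a {rec['title']}",
    ]

print("\n".join(export_marc_like(record)))
```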
    Like all other local projects, the LIBRIS source code and ontology descriptions are available on GitHub. In this case the sheer scope of the National Library of Sweden and of the project makes it a bit more plausible that this may actually be reused on a larger scale. At least the library cataloguing ontology in JSON-LD there is worth a look.
    To return to our starting point, the LIBRIS project acknowledges the fact that we need actual tools besides the ontologies. As Martin Malmsten quoted: “Trying to sell the idea of linked data without interfaces is like trying to sell a fax without the invention of paper”.


    The central question in all this: what is the role of libraries in linked data? Developers or implementers, individually or in a community? There is obviously not one answer. Maybe we will know more at SWIB14. Paraphrasing Fabian Steeg and Pascal Christoph of hbz and Dorothea Salo, next year’s theme might be “Out of the box data knitting for great justice”.


  • The poor person’s linked open data workbench

    Posted on November 11th, 2013 Lukas Koster 4 comments

    Using discovery tools for presenting integrated information

    There has been a lot of discussion in recent years about library discovery tools. Basically, a library discovery tool provides a centrally maintained shared scholarly material metadata index, a system for searching, and an option for adding a local metadata index. Academic libraries use it to provide a unified access platform to subscribed and open access databases and ejournals as well as their own local print and digital holdings.


    © vlashton

    I would like to put forward that, despite their shortcomings, library discovery tools can also be used for finding and presenting other scholarly information in the broadest sense. Libraries should look beyond the narrow focus on limitations and turn imperfection into benefits.

    The two main points of discussion regarding discovery tools are the coverage of the central shared index and relevance ranking. For a number of reasons of a practical, technical and competitive nature, none of the commercial central indexes cover all the content that academic libraries may subscribe to. Relevance ranking of search results depends on so many factors that it is a science in itself to satisfy each and every end user with their own specific background and context. Discovery tool vendors spend a lot of energy in improving coverage and relevance ranking.

    These two problems are the reason that not many academic libraries have been able to achieve the one-stop unified scholarly information portals for their staff and students that discovery tool providers promised them. In most cases the institutional discovery portal is just one of the solutions for finding scholarly publications that are offered by the library. A number of libraries are reconsidering their attitude towards discovery tools, or have even decided to renounce these tools altogether and focus on delivery instead, leaving discovery to external parties like Google Scholar.


    © derletzteschrei

    I fully support the idea that libraries should reconsider their attitude towards discovery tools, but I would like to stress that they should do so with a much broader perspective than just the traditional library responsibility of providing access to scholarly publications. Libraries must not throw the baby out with the bathwater. They should realise that a discovery tool can be used as a platform for presenting connected scholarly information, for instance publications with related research project information and research datasets, based on linked open data principles. You could call this the “poor person’s linked open data platform”, because the library has already paid the license fee for the discovery platform, and it does not have to spend a lot of extra money on additional linked open data tools and facilities.

    Of course this presupposes a number of things: the content to be connected should have identifiers, preferably in the form of URIs, and should be openly available for reuse, preferably via RDF. The discovery tools should be able to process URIs and RDF and present the resolved content in their user interfaces. We all know that this is not the case yet. Long term strategies are needed.
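    For the non-developers: “processing URIs and RDF” on the consuming side boils down to something like the following sketch in Python, which dereferences a URI with content negotiation and parses the returned RDF. The VIAF identifier is only an example, and whether a given provider actually answers with RDF in this way is an assumption.

```python
# Sketch: dereference a URI, asking for RDF via content negotiation, and parse
# the result. Assumes the provider serves RDF/XML for this Accept header.

import requests
from rdflib import Graph

uri = "http://viaf.org/viaf/102403515"  # example identifier (an author in VIAF)
response = requests.get(uri, headers={"Accept": "application/rdf+xml"})

graph = Graph()
graph.parse(data=response.text, format="xml")

# Show a few of the statements we just consumed about the resolved resource.
for subject, predicate, obj in list(graph)[:10]:
    print(subject, predicate, obj)
```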

    Content providers must be convinced of the added value of adding identifiers and URIs to their metadata and providing RDF entry points. In the case of publishers of scholarly publications this means identifiers/URIs for the publications themselves, but also for authors, contributors, organisations, related research projects and datasets. A number of international associations and initiatives are already active in lobbying for these developments: OpenAIRE, Research Data Alliance, DataCite, the W3C Research Object for Scholarly Communication Community Group, etc. Universities themselves can contribute by adding URIs and RDF to their own institutional repositories and research information systems. Some universities are implementing special tools for providing integrated views on research information based on linked data, such as VIVO.
    There are also many other interesting data sources that can be used to integrate information in discovery tools, for instance in the government and cultural heritage domains. Many institutions in these areas already provide linked open data entry points. And then there is Wikipedia with its linked open data counterpart DBpedia.

    On the other side of the scale, discovery tool providers must be convinced of the added value of providing procedures for resolving URIs and processing RDF in order to integrate information from internal and external data sources into new knowledge. I don’t know of any plans for implementing linked open data features in any of the main commercial or open source discovery tools, except for Ex Libris’ Primo. OCLC provides a linked data section for each WorldCat search result, but that is mainly focused on publishing their own bibliographic metadata in linked data format, using links to external subject and author authority files. This is a positive development, but it’s not consumption and reuse of external information in order to create new integrated knowledge beyond the bibliographic domain.

    With the joint IGeLU/ELUNA Linked Open Data Special Interest Working Group the independent Ex Libris user groups have been communicating with Ex Libris strategy and technology management on the best ways to implement much needed linked open data features in their products. The Primo discovery tool (with the Primo Central shared metadata index) is one of the main platforms in focus. Ex Libris is very keen on getting actual use cases and scenarios in order to identify priorities going forward. We have been providing these for some time now through publications, presentations at user group conferences, monthly calls and face to face meetings. Ex Libris is also exploring best practices for the technical infrastructure to be used and is planning pilots with selected customers.

    While this may take some time to mature, in the meantime libraries that have access to their discovery tool’s back office and user interface HTML files can start experimenting with and implementing add-ons for integration of the tool’s metadata index with external information. This should be possible with open source discovery tools like VuFind and with local or hosted installations of commercial products that offer back office access. The only commercial product that offers that option, as far as I know, is Primo. Creating local linked open data add-ons can be done by applying a combination of manipulation of local index metadata fields, JavaScript/jQuery in the front end HTML and the use of any open APIs available for the tool (a minimal sketch of such a lookup follows below).
    The Austrian national library service OBVSG for instance has integrated Wikipedia/DBpedia information about authors in their Primo results.
    The Saxon State and University Library Dresden (SLUB) has implemented a multilingual semantic search tool for subjects based on DBpedia in their Primo installation.
    At the University of Amsterdam I have been experimenting myself with linking publications from our Institutional Repository (UvA DARE) in Primo with related research project information. This has for now resulted in adding extra external links to that information in the Dutch National Research portal NARCIS, because NARCIS doesn’t provide RDF yet. We are communicating with DANS, the NARCIS provider, about extending their linked open data features for this purpose.
    Of course all these local implementations can serve as use cases for discovery tool providers.
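    To give a rough idea of the kind of lookup such an add-on performs, here is a sketch in Python rather than the front-end JavaScript/jQuery these Primo implementations actually use. The DBpedia JSON endpoint and the response structure assumed here are based on how DBpedia generally publishes its data; treat the details as illustrative.

```python
# Sketch: fetch the abstract of an author's DBpedia resource, the kind of
# enrichment the OBVSG example adds to its search results. Endpoint layout
# and property names are assumptions for illustration.

import requests

def dbpedia_author_abstract(author_slug, lang="en"):
    """Return the abstract of an author's DBpedia resource, if available."""
    resource = f"http://dbpedia.org/resource/{author_slug}"
    data = requests.get(f"http://dbpedia.org/data/{author_slug}.json").json()
    abstracts = data.get(resource, {}).get("http://dbpedia.org/ontology/abstract", [])
    for entry in abstracts:
        if entry.get("lang") == lang:
            return entry.get("value")
    return None

print(dbpedia_author_abstract("Dan_Brown"))
```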

    I have only talked about the options of using discovery tools as a platform for consuming, reusing and presenting external linked open data, but I can imagine that a discovery tool can also be used as a platform for publishing linked open data. It shouldn’t be too hard to add extra RDF options besides the existing HTML and internal record format output formats. That way libraries could have a full linked open data consumption and publishing workbench at their disposal at minimal cost. Library discovery tools would from then on be known as information discovery tools.
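    As a sketch of what such “extra RDF options” could look like: the same internal record serialised as Turtle or RDF/XML next to the existing HTML view. The property choices (Dublin Core) and URIs are my own illustration, not any discovery tool’s actual output.

```python
# Sketch: publish one record in more than one RDF serialisation using rdflib.

from rdflib import Graph, Literal, Namespace, URIRef

DCTERMS = Namespace("http://purl.org/dc/terms/")

graph = Graph()
publication = URIRef("http://example.org/publication/123")
graph.add((publication, DCTERMS.title, Literal("The Da Vinci Code")))
graph.add((publication, DCTERMS.creator, URIRef("http://viaf.org/viaf/102403515")))

print(graph.serialize(format="turtle"))  # Turtle output
print(graph.serialize(format="xml"))     # RDF/XML output
```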


  • A day between the stacks

    Posted on August 9th, 2013 Lukas Koster 2 comments

    Connecting real books, metadata and people

    I spent one day in the stacks of the off-site central storage facility of the Library of the University of Amsterdam as one of the volunteers helping the library perform a huge stock control operation which will take years. The goal of this project is to get a complete overview of the discrepancy between what’s on the shelves and what is in the catalogue. We’re talking about 65 kilometres of shelves, approximately 2.5 million items, in this central storage facility alone.


    To be honest, I volunteered mainly for personal reasons. I wanted to see what is going on behind the scenes with all these physical objects that my department is providing logistics, discovery and circulation systems for. Is it really true that cataloguing the height of a book (MARC 300$c) is only used for determining which shelf to put it on?

    The practical details: I received a book cart with a pencil, a marker, a stack of orange sheets of paper with the text “Book absent on account of stock control” and a printed list of 1000 items from the catalogue that should be located in one stack. I stood on my feet between 9 AM and 4 PM in a space of around 2-3 metres in one aisle between two stacks, with one hour in total of coffee and lunch breaks, in a huge building in the late 20th century suburbs of Amsterdam without mobile phone and internet coverage. I must say, I don’t envy the people working there. I’m happy to sit behind my desk and pc in my own office in the city centre.

    Most of my books were indeed in a specific size range, approximately 18-22 cm, with a couple of shorter ones. I found approximately 25-30 books on the shelves that were not on my list, and therefore not in the catalogue. I put these on my cart, replacing them with one of the orange sheets on which I pencilled the shelfmark of the book. There were approximately 5-10 books on my list missing from the shelves, which I marked on the list. One book had a shelfmark on the spine that was identical to that of the book next to it. Inside was a different code, which seemed to be the correct one (it was on the list, 10 places down). I put 10 books on the cart whose titles on the list did not seem to match the titles on the books, but this is tricky, as I will explain.

    Title metadata
    The title printed on the list was the “main title”, or MARC 245$a. It is very interesting to see how many differences there are between the ways that main and subtitles have been catalogued by different people through the ages. For instance, I had two editions on my list (1976 and 1980) of a German textbook on psychiatry, with almost identical titles and subtitles (title descriptions taken from the catalogue):

    - Psychiatrie, Psychosomatik, Psychotherapie : Einführung in die Praktika nach der neuen Approbationsordnung für Ärzte mit 193 Prüfungsfragen : Vorbereitungstexte zur klinischen und Sozialpsychiatrie, psychiatrischen Epidemiologie, Psychosomatik,Psychotherapie und Gruppenarbeit

    - Psychiatrie : Psychosomatik, Psychotherapie ; Einf. in d. Praktika nach d. neuen Approbationsordnung für Ärzte mit Schlüssel zum Gegenstandskatalog u.e. Sammlung von Fragen u. Antworten für systemat. lernende Leser ; Vorbereitungstexte zur klin. u. Sozialpsychiatrie, psychiatr. Epidemiologie, Psychosomatik, Psychotherapie u. Gruppenarbeit

    The first book, from 1976 (which actually has ‘196’ instead of ‘193’ on the cover), is on the list and in the catalogue with the main title (MARC 245$a) “Psychiatrie, Psychosomatik, Psychotherapie :”.
    The second book, from 1980, is on the list with the main title “Psychiatrie :”.
    Evidently it is not clear without a doubt what is to be catalogued as main title and subtitle just by looking at the cover and/or title page.

    I have seen a lot of these cases in my batch of 1000 books in which it is questionable what constitutes the main title and the subtitle. Sometimes the main title consists of only the initial part, sometimes it consists of what looks like main and subtitle taken together. At first I put all parts of a serial on my cart because in my view the printed titles were incorrect. They only contained the title of the specific part of the serial, whereas in my non-librarian view the title should consist of the serial title + part title. On the other hand I also found serials for which only the serial title was entered as main title (5 items “Gesammelte Werke”, which means “Collected Works” in German). No consistency at all.
    What became clear to me is that in a lot of cases it is impossible to identify a book by the catalogued main title alone.

    Another example of problematic interpretation I came across: a Spanish book, with main title “Teatro de Jacinto Benavente” on my list, and on the cover the author name “Jacinto Benavente” and title “Teatro: Rosas de Otoño – Al Natural – Los Intereses Creados”. On the title page: “Teatro de Jacinto Benavente”.


    In the catalogue there are two other books with plays by the same author, just titled “Teatro”. All three have as author “Jacinto Benavente”. All three are books containing a number of theatre plays by the author Jacinto Benavente. There were a lot of similar books, in a number of languages, with ‘Theatre’ recorded as the main title.

    A lot of older books on my shelves (pre 20th century mainly, but also more recent ones) have different titles and subtitles on their spine, front and title page. Different variations depending on the available print space I guess. It’s hard to determine what the actual title and subtitles are. The title page is obviously the main source, but even then it looks difficult to me. Now I understand cataloguers a little better.

    Works on the shelves
    So much for the metadata. What about the actual works? There were all kinds of different types mixed with each other, mostly in batches apparently from the same collection. In my 1000 items there were theatre related books, both theoretical works and texts of plays, Russian and Bulgarian books of a communist/marxist/leninist nature, Arab language books of which I could not check the titles, some Swedish books, a large number of 19th century German language tourist guides for Italian regions and cities, medical, psychological and physics textbooks, old art history works, and a whole bunch of social science textbooks from the eighties of which we have at least half at our home (my wife and I both studied at the University of Amsterdam during that period ). I can honestly say that most of the textbooks in my section of the stacks are out of date and will never be used for teaching again. The rest was at least 100 years old. Most of these old books should be considered as cultural heritage and part of the Special Collections. I am not entirely sure that a university library should keep most of these works in the stacks.

    Apart from this neutral economic perspective, there were also a number of very interesting discoveries from a book lover’s viewpoint, of which I will describe a few.

    A small book about Japanese colour prints containing one very nice Hokusai print of Mount Fuji.

     

    A handwritten, and therefore unique, item with the title (in Dutch) a “Bibliography of works about Michel Angelo, compiled by Mr. Jelle Hingst”, containing handwritten catalogue type cards with one entry each.

     

    A case with one shelfmark containing two items: a printed description and a what looks like a facsimile of an old illustrated manuscript.

     

    An Italian book with illustrations of ornaments from the cathedral of Siena, tied together with two cords.

     

    And my greatest discovery: an English catalogue of an exhibition at the Royal Academy from 1908: “Exhibition of works by the old masters and by deceased masters of the British school including a collection of water colours” (yes, this is one big main title).

    But the book itself is not the discovery. It’s what is hidden inside the book. A handwritten folded sheet of paper, with letterhead “Hewell Grange. Bromsgrove.” (which is a 19th century country house, seat of the Earls of Plymouth, now a prison), dated Nov. 23. 192. Yes, there seems to be a digit missing there. Or is it “/92”? Which would not be logical in an exhibition catalogue from 1908. It definitely looks like a fountain pen was used. It also has some kind of diagonal stamp in the upper left corner “TELEGRAPH OFFICE FINSTALL”. Finstall is a village 3 km from Hewell Grange.

    The paper also has a pencil sketch of a group of people, probably a copy of a painting. At first I thought it was a letter, but looking more closely it seems to be a personal impression and description of a painting. There is similar handwriting on the pages of the book itself.
    I left the handwritten note where I found it. It’s still there. You can request the book for consultation and see for yourself.

     

    Conclusions
    End users, patrons, customers or whatever you want to call them, can’t find books that the library owns if they are not catalogued. They can find bibliographic descriptions of the books elsewhere, but not the information needed to get a copy at their own institution. This confirms the assertion that holdings information is very important, especially in a library linked open data environment.

    The majority of books in an academic library are never requested, consulted or borrowed. Most outdated textbooks can be removed without any problem.

    There are a lot of cultural heritage treasures hidden in the stacks that should be made accessible to the general public and humanities researchers in a more convenient way.

    In the absence of open stacks and full text search for printed books and journals it is crucial that the content of books, and articles too, is described in a concise, yet complete way. Not only should formal cataloguing rules and classification schemes be used, but definitely also expert summaries and end user generated tags.

    Even with cataloguing rules it can be very hard for cataloguers to decide what the actual titles, subtitles and authors of a book are. The best sources for correct title metadata are obviously the authors, editors and publishers themselves.

    Book storage staff can’t find requested books with incorrect shelfmarks on the spine.

    Storing, locating, fetching and transporting books does not require librarian skills.

    All in all, a very informative experience. 


  • (Discover AND deliver) OR else

    Posted on January 7th, 2013 Lukas Koster 98 comments

    The future of the academic library as a data services hub

    © KaCey97007

    Is there a future for libraries, or more specifically: is there a future for academic libraries? This has been the topic of lots of articles, blog posts, books and conferences. See for instance Aaron Tay’s recent post about his favourite “future of libraries” articles. But the question needs to be addressed over and over again, because libraries, and particularly academic libraries, continue to persevere in their belief that they will stay relevant in the future. I’m not so sure.

    I will focus here on academic libraries. I work for one, the Library of the University of Amsterdam. Academic libraries in my view are completely different from public libraries in audience, content, funding and mission. As far as I’m concerned, they only have the name in common. For a vision on the future of public libraries, see Ed Summers’ excellent post “The inside out library”. As for research and special libraries, some of what I am about to say will apply to these libraries as well.

    So, is there a future for academic libraries? Personally I think in the near future we will see the end of the academic library as we know it. Let’s start by looking at what are perceived to be the core functions of libraries: discovery and delivery, of books and articles.
    For a complete overview of the current library ecosystem you should read Lorcan Dempsey’s excellent article “Thirteen Ways of Looking at Libraries, Discovery, and the Catalog: Scale, Workflow, Attention”.

     

    Discovery

    “Discovery happens elsewhere”. Lorcan Dempsey said this already in 2007. What this means is that the audience the library aims at primarily searches for and finds information via platforms other than the library’s website and search interfaces. Several studies (for instance OCLC’s “Perceptions of libraries, 2010“) show that the most popular platforms are general search engines like Google and Wikipedia, but also specific databases. And of course, if you’re looking for instant information, you don’t go to the library catalogue, because it only points you to items that you have to read in order to ascertain whether or not they contain the information you need.

    © bibliovox

    And if you are indeed looking for publications (books, articles, etc.) you could of course search your library’s catalogue and discovery interface. But you can find exactly the same and probably even more results elsewhere: in other libraries’ search interfaces, or aggregators that collect bibliographic metadata from all over the world. Moreover, academic libraries are doing their best to get their local holdings metadata in WorldCat and their journal holdings in Google Scholar. As I said in my EMTACL12 talk: you can’t find all you need with one local discovery tool.
    Also, the traditional way of discovery through browsing the shelves is disappearing rapidly. The physical copies at the University of Amsterdam Library for instance are all stored in a storage facility in a suburb. Apart from some reference works and latest journal issues there is nothing to find in the library buildings. There is no need for a university library building for discovery purposes anymore.

    Utrecht University Library has taken the logical next step: they decided not to acquire a new discovery tool, to discontinue their local homegrown article search index and to focus on delivery. See the article “Thinking the unthinkable: a library without a catalogue”.

     

    Delivery

    So, if discovery is something that academic libraries should not invest in anymore, is delivery really the only core responsibility left? Let’s have a closer look.
    Delivery in the traditional academic library sense means: giving the customer access to the publications he or she selected, both in print and digital form. In the case of subscription based e-journal articles, delivery consists of taking a subscription and leading the customer to the appropriate provider website to obtain the online article. Taking subscriptions is an administrative and financial activity. For historical reasons the university library has been taking care of this task. Because they handled the print subscriptions, they also started taking care of the digital versions. But actually it’s not the library that holds the subscription, it’s the university. And it really does not require librarian skills to handle subscriptions. This could very well be taken care of by the central university administration. For free and open access journals you don’t even need that.
    The selection and procurement of journal packages from a large number of publishers and content providers is a different issue. Specific expertise is required for this. I will come to that later.
    The task of leading the customer to the appropriate online copy is only a technical procedure, involving setting up link resolvers. Again, no librarian skills needed. This task could be done by some central university agency, maybe even using an external global linking registry.


    As for the delivery of physical print copies, this is obviously nothing more than a logistics workflow, no different from the delivery of furniture, tools, food, or any other physical goods. The item is ordered, it is fetched from the shelf, sometimes by huge industrial robot installations, put in a van or cart, transported to the desired location and put in the customer’s locker or something similar. Again: no librarian skills whatsoever. Physical delivery only needs a separate internal or external logistics unit.

     

    What else?

    So, if discovery and delivery will cease to be core activities of the central university library organisation, what else is there?

    Selection
    Selection of print and digital material was already mentioned. It is evident that the selection of printed and digital books and journal subscriptions needs to be governed by expert knowledge and decisions in order to provide staff and students with the best possible material, because there is a lot of money involved. Typically this task is carried out by subject specialists (also called subject librarians), not by generalists. These ‘faculty liaisons’ usually have had an education in the disciplines they are responsible for, and they work closely together with their customers (academic staff and students). Many universities have semiautonomous discipline oriented sublibraries. The recent development of Patron Driven Acquisition (PDA) also fits into this construction.
    The actual comparison, selection and procurement of journal packages from a large number of publishers and content providers requires a certain generic expertise which is not discipline dependent. This is a task that could well continue to be the responsibility of some central organisational unit, which may or may not be called the university library.

    Cataloguing
    And what about cataloguing, a definite librarian skill? If discovery happens elsewhere, and libraries don’t need to maintain their own local catalogues, then it seems obvious that libraries don’t need to catalogue anything anymore. In fact, in the current situation most libraries don’t do that much cataloguing themselves anyway. All the main bibliographical metadata for books (title, author, date, etc.) are already provided by publishers, by external central library service centres, or by other libraries in a shared cataloguing environment. And libraries have never catalogued journal articles anyway, only journals and issues. Article metadata are provided by the publishers or aggregators. Libraries pay for these services.
    It is usual for libraries to add their own subject headings and classification terms to the already existing ones. But as Karen Coyle said at EMTACL12: “Library classification is a knowledge prevention system“, because it offers only one specific object oriented view on the information world. So maybe libraries should stop doing this, which would be in line with the “discovery happens elsewhere” argument anyway.
    What remains of cataloguing is adding local holdings, items and subscription information. This is very useful information for library customers, but again this doesn’t seem to require very detailed librarian skills. As a matter of fact most of these metadata are already provided in the selection and acquisition process by acquisition staff and vendors.
    The recent Library of Congress BIBFRAME initiative developments in theory make it possible to replace all local cataloguing efforts by linking local holdings information to global metadata.
    There is still one area that may require the full local cataloguing range: the university’s own scientific output, as long as it is not published in journals or as books. The fulltext material is made available through institutional repositories, which obviously requires metadata to make the publications findable. However, the majority of the institutional publications are made available through other channels as well, as mentioned, so the need for local cataloguing in these cases is absent.

    Reading rooms
    More and more students are coming to the library buildings every day; that’s what you hear all the time. Large amounts of money are spent on creating new study centres and meeting places in existing library buildings, even on new buildings. But that’s exactly the point: students don’t come to the library for discovery anymore, because the building no longer provides that. They come for places to study, use networked PCs or the university wifi, meet with fellow students, pick up their print items on loan, or view not-for-loan material. The physical locations are nothing more or less than study centres. There’s absolutely nothing wrong with that, they are very important, but they do not have to be associated with the university library, and can be provided by the university, at any location.

    Reference desk

    © Ohio University Libraries

    The reference desk, or its online counterpart, is a weird phenomenon. It seems to emphasise the fact that if you want instant information, books are of no use. On the other hand, it suggests that you should come to the library if you need specific information right now. In my view, although the reference desk partly embodies the actual original objective of a library, namely giving access to information, this could function very well outside the library context.
    The reference desk service is also somewhat ambiguous. In some cases subject specialist expertise is needed, other cases require a more general knowledge of how to search and find information.

    Usage statistics
    Statistics of the use of library holdings, both print and electronic, are an important source of information for making decisions on acquisitions and subscriptions. These statistics are provided by local and remote delivery systems and vendors. Usage statistics can also be used for other purposes, like identifying certain trends in scholarly processes, mapping of information sources to specific user groups, etc. Administering and providing statistics once again is not a librarian task, but can be done by internal or external service providers.

    Special collections
    Special Collections are a Special Case. Most university libraries have a Special Collections division, for historical reasons. But of course a Special Collections division is really a museum and archive division with specific skills, expertise and procedures. Most of the time they are autonomous units within the university anyway.

     

    New services?

    Now, if the traditional library tasks of selection, cataloguing, discovery and delivery will increasingly be carried out by non-librarian staff and units inside and outside the university, is there still a valid reason for maintaining an autonomous central university library organisation? Should academic libraries shift focus? There are a number of possible new services and responsibilities for the library that are being discussed or already being implemented.

    © Joshua Kaufman


    Content curation
    Content curation can be seen as the task of bringing together information on a specific subject, of all kinds, from different sources on the web to be consumed by people in an easy way. This is something that can be done and is already done by all kinds of organisations and people. Libraries, academic, public and other types, can and should play a bigger role in this area. This involves looking at other units and sources of information than just the traditional library ones: books and journals. This new service type evidently is closely related to the traditional reference desk service.
    Obviously this can best be taken care of by subject specialists. To do this, they need tools and infrastructure. These tools and infrastructure are of a generic nature and can be provided by technical specialists inside or outside the libraries or universities.
    The techniques involved are often referred to as “mashups” or “linked data”, depending on the background of the people involved.

    Linked data
    Linked data deserves its own section here, because it has been an ever widening movement for a number of years now. It finally reached the library world in the last couple of years, with developments like the W3C Library Linked Data Incubator Group, the Library of Congress BIBFRAME initiative and the IFLA Semantic Web Special Interest Group. Linked data is a special type of data source mashup infrastructure. It requires the use of URIs for all separately usable data entities, and triples (subject-predicate-object) as the format for the actual linking, mostly using the RDF structure.
    There are two sides to linked data: the publishing of data in RDF and, conversely, the consumption of that data elsewhere. A special case is a linked data based infrastructure, combining both publication and consumption in a specific way, as is the objective of the above mentioned BIBFRAME project.
    Again, we need both subject specialists and generic technology experts to make this work in libraries, both academic and public ones.
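    A minimal illustration of that subject-predicate-object structure, with URIs for the things being linked. The publication URI is an invented example; the author URI is a VIAF identifier of the kind mentioned elsewhere on this blog.

```python
# Sketch: one triple linking a publication (subject) to an author (object)
# via a Dublin Core property (predicate).

from rdflib import Graph, Namespace, URIRef

DCTERMS = Namespace("http://purl.org/dc/terms/")

g = Graph()
g.add((
    URIRef("http://example.org/publication/123"),  # subject
    DCTERMS.creator,                                # predicate
    URIRef("http://viaf.org/viaf/102403515"),       # object: an author
))

# The consumption side: anyone who reads these triples can follow the links.
for s, p, o in g:
    print(s, p, o)
```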

    Research support
    University libraries are more and more expected to increase the level of support for researchers. It’s not only about providing access to scholarly publications anymore, but also about maintaining research information systems, virtual research environments, and long term preservation, availability and reusability of research data sets.
    Again, here we see the need for discipline specific support because the needs of researchers for communication, collaboration and data vary greatly per discipline. And again, for the technical and organisational infrastructure we need internal or external generic technology experts and services. Apart from metadata expertise there are no traditional librarian skills required.

    Publishing
    The Final Frontier: the library turning 180 degrees and switching from consumption to production of publications. According to some people university libraries are very suitable and qualified to become scholarly publishers (see for instance Björn Brembs‘ “Libraries Are Better Than Corporate Publishers Because…”). I am not sure that this is actually the case. Publishing as it currently exists requires a number of specific skills that have nothing to do with librarian expertise. A number of universities already have dedicated university press publishing agencies. But of course the publishing process can and probably will change. There is the open access movement, there is the rebellion against large scientific publishers, and last but not least, there is the slow rise of nanopublications, which could revolutionise the form that scholarly publishing will take. In the future publishing can originate at the source, making use of all kinds of new technologies of linking different types of data into new forms of non-static publications. Universities or university libraries could play a role here. Again we see here the need for both subject specialists and generic technology.

     

    Special and general

    So what is the overall picture? Of the current academic library tasks, only a few may still be around in the university in the future: selection, acquisition, cataloguing (if any), reference desk, usage statistics, and only a small part actually requires traditional librarian skills. Together with the new service areas of content curation, linked data, research support and publishing, this is rather an odd collection of very different fields of expertise. There does not seem to be a nice matching set of tasks for one central university division, let alone a library.

    But what all these areas have in common is that they depend on linking and coordination of data from different sources.

    And another interesting conclusion is that virtually all of these areas have two distinct components:

    • Discipline or subject specific expertise
    • Generic technical and organisational data infrastructure

    I see a new duality in the realm of information management in universities. Selection, content curation, reference desk, linking data, cataloguing and research support will all be the domain of subject specialists directly connected to departments responsible for teaching and research in specific disciplines. These discipline related services will depend on generic technological and organisational infrastructures, available inside and outside the university, maintained by generic technical specialists.
    These generic infrastructures could function completely separately, or they could somehow be interlinked and coordinated by some central university organisational unit. This would make sense, because there is a lot of overlap in information between these areas. Some kind of central data coordination unit would make it possible to provide a lot more useful data services than can be imagined now. Also, usage statistics, acquisition and the potential new publishing framework, yes even the special collections, could benefit from a central data services unit.

    © HawkinsThiel


    Such a unit would be different from the existing university ICT department. The latter mainly provides generic hardware, network, storage and security, and is focused on the internal infrastructure, trying to keep out as much external traffic as possible.
    The new unit would be targeted at providing data services, possibly built on top of the internal technical infrastructure, but mainly using existing external ones. And it is obvious that there is added value in cooperation with similar bodies outside the university.

    “Data services” then stands for providing storage, use, reuse, creation and linking of internal and external metadata and datasets by means of system administration, tools selection and implementation, and explicitly also programming when needed.
    Such a unit would up to a point resemble current library service providers like the German regional library consortia and service centres such as hbz, KOBV or GBV, or high level organisations like the Dutch National Library Catalogue project.

    Paraphrasing the conclusion of my own SWIB12 talk: it is time to stop thinking publications and start thinking data. This way the academic library could transform itself into a new central data services hub.

    (Subject expertise AND data infrastructure) OR else!


  • Local library data in the new global framework

    Posted on January 5th, 2012 Lukas Koster 33 comments

    2011 has in a sense been the year of library linked data. Not that libraries of all kinds are now publishing and consuming linked data in great numbers. No. But we have witnessed the publication of the final report of the W3C Library Linked Data Incubator Group, the Library of Congress announcement of the new Bibliographic Framework for the Digital Age based on Linked Data and RDF, the release by a number of large libraries and library consortia of their bibliographic metadata, many publications, sessions and presentations on the subject.

    All these events focus mainly on publishing library bibliographic metadata as linked open data. Personally I am not convinced that this is the most interesting type of data that libraries can provide. Bibliographic metadata as such describe publications, in the broadest sense, providing information about title, authors, subjects, editions, dates, URLs, but also physical attributes like dimensions, number of pages, formats, etc. This type of information, in FRBR terms: Work, Expression and Manifestation metadata, is typically shared among a large number of libraries, publishers, booksellers, etc. ‘Shared’ in this case means ‘multiplied and redundantly stored in many different local systems‘. It doesn’t really make sense if all libraries in the world publish identical metadata side by side, does it?

    In essence only really unique data is worth publishing. You link to the rest.

    Currently, library data that is really unique and interesting is administrative information about holdings and circulation. After having found metadata about a potentially relevant publication it is very useful for someone to know how and where to get access to it, if it’s not freely available online. Do you need to go to a specific library location to get the physical item, or to have access to the online article? Do you have to be affiliated to a specific institution to be entitled to borrow or access it?

    Usage data about publications, both print and digital, can be very useful in establishing relevance and impact. This way information seekers can be supported in finding the best possible publications for their specific circumstances. There are some interesting projects dealing with circulation data already, such as the research project by Magnus Pfeffer and Kai Eckert as presented at the SWIB 11 conference, and the JISC funded Library Impact Data project at the University of Huddersfield. The Ex Libris bX service presents article recommendations based on SFX usage log analysis.

    The consequence of this assertion is that if libraries want to publish linked open data, they should focus on holdings and circulation data, and for the rest link to available bibliographic metadata as much as possible. It is to be expected that the Library of Congress’ New Bibliographic Framework will take care of that part one way or another.

    In order to achieve this libraries should join forces with each other and with publishers and aggregators to put their efforts into establishing shared global bibliographic metadata pools accessible through linked open data. We can think of already existing data sources like WorldCat, OpenLibrary, Summon, Primo Central and the like. We can only hope that commercial bibliographic metadata aggregators like OCLC, SerialsSolutions and Ex Libris will come to realise that it’s in everybody’s interest to contribute to the realisation of the new Bibliographic Framework. The recent disagreement between OCLC and the Swedish National Library seems to indicate that this may take some time. For a detailed analysis of this see the blog post ‘Can linked library data disrupt OCLC? Part one’.

     

    An interesting initiative in this respect is LibraryCloud, an open, multi-library data service that aggregates and delivers library metadata. And there is the HBZ LOBID project, which is targeted at ‘the conversion of existing bibliographic data and associated data to Linked Open Data‘.

    So what would the new bibliographic framework look like? If we take the FRBR model as a starting point, the new framework could look something like this. See also my slideshow “Linked Open Data for libraries”, slides 39-42.

    The basic metadata about a publication or a unit of content, on the FRBR Work level, would be an entry in a global datastore identified by a URI (Uniform Resource Identifier). This datastore could for instance be WorldCat, or OpenLibrary, or even a publisher’s datastore. It doesn’t really matter. We don’t even have to assume it’s only one central datastore that contains all Work entries.

    The thing identified by the URI would have a text string field associated with it containing the original title, let’s say “The Da Vinci Code” as an example of a book. But also articles can and should be identified this way. The basic information we need to know about the Work would be attached to it using URIs to other things in the linked data web. Two things linked in this way form a ‘triple’ (subject-predicate-object). ‘Author’ could for instance be a link to OCLC’s VIAF (http://viaf.org/viaf/102403515 = Dan Brown), which would then constitute a triple. If there are more authors, you simply add a URI for every person or institution. Subjects could be links to DBpedia/Wikipedia, Freebase, the Library of Congress Authority files, etc. There could be some more basic information, maybe a year, or a URI to a source describing the background of the work.

    At the Expression level, a Dutch translation would have its own URI, stored in the same or another datastore. I could imagine that the publisher who commissioned the translation would maintain a datastore with this information. Attached to the Expression there would be the URI of the original Work, a URI pointing to the language, a URI identifying the translator and a text string containing the Dutch title, among others.

    Every individual edition of the work could have its own Manifestation level URI, with a link to the Expression (in this case the Dutch translation), a publisher URI, a year, etc. For articles published according to the long standing tradition of peer reviewed journals, there would also be information about the journal. On this level there should also be URIs to the actual content when dealing with digital objects like articles, ebooks, etc., no matter whether access is free or restricted.

    So far we have everything we need to know about publications “in the cloud”, or better: in a number of datastores available on a number of servers connected to the world wide web. This is more or less the situation described by OCLC’s Lorcan Dempsey in his recent post ‘Linking not typing … knowledge organization at the network level’. The only thing we need now is software to present all linked information to the user.

    No libraries in sight yet. For accessing freely available digital content on the web you actually don’t need a library, unless you need professional assistance finding the correct and relevant information. Here we have identified a possible role of librarians in this new networked information model.

    Now we have reached the interesting part: how to link local library data to this global shared model? We immediately discover that the original FRBR model is inadequate in this networked environment, because it implies a specific local library situation. Individual copies of a work (the Items) are directly linked to the Manifestation, because FRBR refers to the old local catalogue which describes only the works/publications one library actually owns.

    In the global shared library linked data network we need an extra explicit level to link physical Items owned by the library or online subscriptions of the library to the appropriate shared network level. I suggest using a "Holding" level. A Holding would have its own URI and contain URIs of the Manifestation and of the Library. A specific Holding in this way would indicate that a specific library has one or more copies (Items) of a specific edition of a work (Manifestation), or offers access to an online digital article by way of a subscription.

     

    If a Holding refers to physical copies (print books or journal issues for instance) then we also need the Item level. An Item would have its own URI and the URI of the Holding. For each Item, extra information can be provided, for instance 'availability', 'location', etc. Local circulation administration data can be registered for all Holdings and Items. For online digital content we don't need Items, only subscription information directly attached to the Holding.
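    Sketched in the same illustrative vocabulary, a Holding and an Item might then look like this. Again a hedged example: the holdingOf, heldBy, itemOf, availability and location predicates and all example.org URIs are invented for the purpose of this post.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/")

    g = Graph()
    manifestation = EX["manifestation/the-da-vinci-code-nl-edition1"]  # shared global level
    library = EX["library/some-library"]
    holding = EX["holding/some-library/the-da-vinci-code-nl-edition1"] # local Holding
    item = EX["item/some-library/0001"]                                # one physical copy

    g.add((holding, EX.holdingOf, manifestation))   # which edition the library holds
    g.add((holding, EX.heldBy, library))            # which library holds it
    g.add((item, EX.itemOf, holding))               # a copy belonging to that Holding
    g.add((item, EX.availability, Literal("on loan")))
    g.add((item, EX.location, Literal("stacks, floor 2")))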

    Local Holding and Item information can reside on local servers within the library’s domain or just as well on some external server ‘in the cloud’.

    It’s on the level of the Holding that usage statistics per library can be collected and aggregated, both for physical items and for digital material.

    Now, this networked linked library data model still allows libraries to present a local traditional catalogue type interface, showing only information about the library’s own print and digital holdings. What’s needed is software to do this using the local Holdings as entry level.

    But the nice thing about the model is that there will also be a lot of other options. It will also be possible to start at the other end and search all bibliographic metadata available in the shared global network, and then find the most appropriate library to get access to a specific publication, much like WorldCat does, but on an even larger scale.

    Another nice thing about using triples, URIs and linked data is that it allows for adding all kinds of other, non-traditional bibliographic links to the old inward looking library world, making it into a flexible and open model, ready for future developments. It will for instance be possible for people to discover links to publications and library holdings from any other location on the web, for instance a Wikipedia page or a museum website. And the other way around, from an item in local library holdings to, let's say, a recorded theatre performance on YouTube.

    When this new data and metadata framework is in place, two important issues will remain to be solved:

    • Getting new software, systems and tools for both back end administrative functions and front end information finding needs. For this we need efforts from traditional library systems vendors but also from developers in libraries.
    • Establishing future roles for libraries, librarians and information professionals in the new framework. This may turn out to be the most important issue.

  • FRBR outside the box

    Posted on September 2nd, 2011 Lukas Koster 10 comments
    Shifting focus from information carriers back to information
    This blog post is based on a presentation I did at Datasalon 6 in Brussels, January 21, 2011.

    © TheArtGuy

    Library catalogues have traditionally been used to describe and register books and journals and other physical objects that together constitute the holdings of a library. In an integrated library system (ILS), the public catalogue is combined with acquisition and circulation modules to administer the purchases of book copies and journal subscriptions on one side, and the loans to customers on the other side. The “I” for “Integrated” in ILS stands for an internal integration of traditional library workflows. Integration from a back end view, not from a customer perspective.

    Because of the very nature of such a catalogue, namely the description of physical objects and the administration of processing them, there are no explicit relations between the different editions and translations of the same book, nor are there descriptions of individual journal articles. If you do a search on a specific person's name, you may end up with a large number of result records, written by that person or someone with a similar name, or about that person, even with identical titles, without knowing if there is a relationship between them, and what that relationship might be. What's certain is that you will not find journal articles written by or about that person. The same applies to a search on title. There is no way of telling if there is any relation between identical titles. A library catalogue user would have to look at specific metadata in the records (like MARC 76X-78X – Linking Entries, 534 – Original Version Note or 580 – Linking Entry Complexity Note), if available, to reach their own conclusions.

    Most libraries nowadays also purchase electronic versions of books and journals (ebooks and ejournals) and have free or paid subscriptions to online databases. Sometimes these digital items (ebooks, ejournals and databases) are also entered into the traditional library catalogues, but they are sometimes also made available through other library systems, like federated search tools, integrated discovery tools, A-Z lists, etc. All kinds of combinations occur.

    In traditional library catalogues digital items are treated exactly the same as their physical counterparts. They are all isolated individual items without relations. As Karen Coyle put it in November 2010 at the SWIB10 conference: "The main goal of cataloguing today is to keep things apart".
    Basically, integrated library systems and traditional catalogues are nothing more than inventory and logistics systems for physical objects, mainly focused on internal workflows. Unfortunately in newer end user interfaces like federated search and integrated discovery tools the user experience in this respect has in general been similar to that of traditional public catalogues.

    At some point in time during the rise of electronic online catalogues, apparently the lack of relations between different versions of the same original work became a problem. I’m not sure if it was library customers or librarians who started feeling the need to see these implicit connections made explicit. The fact is that IFLA (International Federation of Library Associations) started developing FRBR in 1998.

    FRBR (Functional Requirements for Bibliographic Records) is an attempt to provide a model for describing the relations between physical publications, editions, copies and their common denominator, the Work.

    FRBR Model © Library of Congress/Barbara Tillett

    FRBR Group 1 describes publications in terms of the entities Work, Expression, Manifestation and Item (WEMI).
    FRAD (Functional Requirements for Authority Data – ‘authors’) and FRSAD (Functional Requirements for Subject Authority Data – ‘subjects’) have been developed later on as alternatives for the FRBR Group 2 and 3 entities.

    Anne Frank's Diary

    As an example let's have a look at The Diary of Anne Frank. The original handwritten diary may be regarded as the Work. There are numerous adaptations and translations (Expressions) of the original unfinished and unedited Work. Each of these Expressions can be published in the form of one or more prints, editions, etc. These are the Manifestations, especially if they have different ISBNs. Finally a library can have one or more physical copies of a Manifestation, the Items.

    Some might even say the actual physical diary is the only existing Item embodying one specific (the first) Expression of the Work (Anne’s thoughts) and/or the only Manifestation of that Expression.

    Of course, this model, if implemented, would be an enormous improvement on the old public catalogue situation. It makes it possible for library customers to have an automatic overview of all editions, translations, adaptations of one specific original work through the mechanism of Expressions and Manifestations. RDA (Resource Description and Access) does exactly this.
    However there are some significant drawbacks, because the FRBR model is an old model, based on the traditional way of library cataloguing of physical items (books, journals, and cd’s, dvd’s), etc. (Karen Coyle at SWIB10).

    • In the first place the FRBR model only shows the Works and related Manifestations and Expressions of physical copies (Items) that the library in question owns. Editions not in the possession of the library are ignored. This would be a bit different in a union catalogue of course, but then the model still only describes the holdings of the participating libraries.
    • Secondly, the focus on physical copies is also the reason that the original FRBR model does not have a place for journal titles as such, only for journal issues. So there will be as many entries for one journal as the library has issues of it.
    • Thirdly, it’s a hierarchical model, which incorporates only relations from the Work top down. There is no room for relations like: ‘similar works’, ‘other material on the same subject’, ‘influenced by’, etc.
    • In the fourth place, FRBR still does not look at content. It is document centric, instead of information centric. It does however have the option for describing parts of a Work, if they are considered separate entities/works, like journal articles or volumes of a trilogy.
    • Finally, the FRBR Item entity is only interesting in a storage and logistics environment for physical copies, such as the Circulation function in libraries, or the Sales function in bookstores. It has no relation to content whatsoever.

    FRBR definitely is a positive and necessary development, but it is just not good enough. Basically it still focuses on information carriers instead of information (it’s a set of rules for managing Bibliographic Records, not for describing Information). It is an introverted view of the world. This was OK as long as it was dictated by the prevailing technological, economical and social conditions.
    In a new networked digital information world libraries should shift their focus back to their original objective: being gateways to information as such. This entails replacing an introverted hierarchical model with an extroverted networked one, and moving away from describing static information aggregates in favour of units of content as primary objects.

    The linked data concept provides the framework of such a networked model. In this model anything can be related to anything, with explicit declarations of the nature of the relationship. In the example of the Diary of Anne Frank one could identify relations with movies and theater plays that are based on the diary, with people connected to the diary or with the background of World War 2, antisemitism, Amsterdam, etc.

    Unlinked data

    In traditional library catalogues defining relations with movies or theater plays is not possible from the description of the book. They could however be entered as a textual reference in the description of a movie, if for instance a DVD of that movie is catalogued. Relations to people, World War 2, antisemitism and Amsterdam would be described as textual or coded references to a short concept description, which in turn could provide lists of other catalogue items indexed with these subjects.
    In a networked linked data model these links could connect to information entities in their own right outside the local catalogue, containing descriptions and other material about the subject, and providing links to other related information entities.

    FRBR would still be a valuable part of such a universal networked model, as a subset for a specific purpose. In the context of physical information carriers it is a useful model, although with some missing features, as described above. It could be used in isolation, as originally designed, but if it’s an open model, it would also provide the missing links and options to describe and find related information.

    Also, the FRBR model is essential as a minimal condition for enabling links from library catalogue items to other entity types through the Work common denominator.

    In a completely digital information environment, the model could be simplified by getting rid of the Item entity. Nobody needs to keep track of available copies of online digital information, unless publishers want to enforce the old business models they have been using in order to keep making a profit. Ebooks for instance are essentially Expressions or Manifestations, depending on their nature, as I stated in my post ’Is an e-book a book?’.

    The FRBR model can be used and is used also in other subject areas, like music, theater performances, etc. The Work – Expression – Manifestation – Item hierarchy is applicable to a number of creative professions.

    The networked model provides the option of describing all traditional library objects, but also other and new ones and even objects that currently don’t exist, because it is an open and adaptable model.
    In the traditional library models it is for instance impossible, or at least very hard, to describe a story that continues through all volumes of a trilogy as a central thread, apart from and related to the descriptions of the three separate physical books and their own stories. In the Millennium trilogy by Stieg Larsson, Lisbeth Salander’s life story is the central thread, but it can’t be described as a separate “Work” in MARC/FRBR/RDA because it is not the main subject of one physical content carrier (unless we are dealing with an edition in one physical multi part volume). The three volumes will be described with the subjects ‘Missing girl mystery‘, ‘Sex trafficking‘ and ‘Illegal secret service unit‘ respectively.

    In an open networked information model on the contrary it would be entirely possible to describe such a ‘roaming story’.

    Millennium trilogy and FRBR

    New forms of information objects could appear in the form of new types of aggregates, other than books or journal articles, for instance consisting of text, images, statistics and video, optionally of a flexible nature (dynamic instead of static information objects).

    Existing library systems (ILSs and Integrated Discovery tools alike), using bibliographic metadata formats and frameworks like MARC, FRBR and RDA, can't easily deal with new developments without some sort of workaround. Obviously this means that if libraries want to continue playing a role in the information gateway world, they need completely different systems and technology. Library system vendors should take note of this.

    Finally, instead of only describing information objects, libraries could take up a new role in creating new objects, in the form of subject based virtual information aggregates, like for instance the Anne Frank Timeline or Qwiki. This would put libraries back in the center of the information access business.

    See also
    http://dynamicorange.com/2009/11/11/bringing-frbr-down-to-earth/
    http://www.slideshare.net/SarahBartlett/what-place-for-libraries-in-a-linked-data-world
    http://kcoyle.blogspot.com/2011/08/models-of-bibliographic-data.html


  • Missing links

    Posted on March 28th, 2011 Lukas Koster 1 comment

    The challenges of generating linked data from legacy databases

    © extranoise

     

    Some time ago I wrote a blog post about the linked data proof of concept project I am involved in, connecting bibliographic metadata from the OPAC of the Library of the University of Amsterdam with the theatre performances database maintained by the Theatre Institute of The Netherlands.
    I ended that post with a list of next steps to take:

    • select/adapt/create a vocabulary for the Production/Performance subject area
    • select/adapt/create vocabularies for Persons (FOAF?) and Subjects (SKOS?)
    • add internal relationships with the other entities (Play, Production, etc.) in the JSON structure (implement RDF in JSON)
    • Add RDF/XML as output option, besides JSON
    • add external relationships (to other linked data sources like DBPedia, etc.)
    • extend the number of possible URI formats (for Play, Production, etc.)
    • add content negotiation to serve both human and machine readable redirects
    • extend the options on the OPAC side
    • publish UBA bibliographic data as linked open data (probably an entirely new project)

    So, what have we achieved so far? I can be brief about all the ‘real’ linked data stuff (RDF, vocabularies, external links, content negotiation): we are not there yet. This will be dealt with in the next phase.
    Instead, we have focused on getting the simple JSON implementation right, both on the data publishing side and on the data using side. We have added more URIs and internal relationships, and we are using these in the OPAC user interface.
    But we have also encountered a number of crucial problems that are in my view inherent to the type of legacy data models used in libraries and cultural heritage institutions.

    Theatre Production data in the Library Catalogue

     

    Progress

    First let me describe the improvements we have added so far.

    The URI for a 'person', <baseurl>/person/<personname>, now also returns a link to all the 'titles' that person is connected to (not only with the 'author' role, but for all roles, like director, performer, etc.): <baseurl>/gettitles/<personname>. This link will return a set of URIs of the form <baseurl>/title/<personname>/<title>. The /<personname>/<title> bit is at the moment the only way that a more or less unique identifier can be constructed from the OPAC metadata for the 'play' in the TIN database. There are a number of really important problems related to this that I will discuss below.

    The URI:

    <baseurl>/person/Beckett, Samuel

    returns among others:

    /title/Beckett, Samuel/Waiting for Godot
    /title/Beckett, Samuel/En attendant Godot
    /title/Beckett, Samuel/Endgame
    etc.

    The URI for a 'play', <baseurl>/title/<personname>/<title>, now returns a set of 'production' URIs of the form <baseurl>/production/<personname>/<title>/<openingdate>/<idnr>.
    The ‘production’ URI returns information about ‘theatre company’, ‘venue‘ and all persons connected to that production, including their URIs, and when available also a link to an image of a poster, and a video.

    The URI

    <baseurl>/title/Beckett, Samuel/Waiting for Godot

    returns:

    /production/Beckett, Samuel/Waiting for Godot/1988-07-28/5777
    /production/Beckett, Samuel/Waiting for Godot/1988-11-22/6750
    /production/Beckett, Samuel/Waiting for Godot/1992-04-16/10728
    /production/Beckett, Samuel/Waiting for Godot/1981-02-18/43032

    The last ‘production’ URI returns:

    {
        "name": "Beckett, Samuel",
        "title": "Waiting For Godot",
        "opening": "1981-02-18",
        "people": [
            { "description": "Beckett, Samuel (auteur: toneelspel van)", "uri": "/person/Beckett, Samuel" },
            { "description": "Hartnett, John (regie)", "uri": "/person/Hartnett, John" },
            { "description": "Muller, Frans (decor: ontwerp)", "uri": "/person/Muller, Frans" },
            { "description": "Newell, Kym (licht: ontwerp)", "uri": "/person/Newell, Kym" },
            { "description": "Zaal, Kees (geluid)", "uri": "/person/Zaal, Kees" },
            { "description": "Tolstoj, Alexander (uitvoerende: Lucky)", "uri": "/person/Tolstoj, Alexander" },
            { "description": "Weeks, David (uitvoerende: Estragon)", "uri": "/person/Weeks, David" },
            { "description": "Coburn, Grant (uitvoerende: Vladimir)", "uri": "/person/Coburn, Grant" },
            { "description": "Evans, Rhys (uitvoerende: Pozzo)", "uri": "/person/Evans, Rhys" },
            { "description": "Geiringer, Karl (uitvoerende: A Boy)", "uri": "/person/Geiringer, Karl" },
            { "description": "Guidi, Peter (uitvoering muziek)", "uri": "/person/Guidi, Peter" },
            { "description": "Kimmorley, Roxanne (uitvoering muziek)", "uri": "/person/Kimmorley, Roxanne" },
            { "description": "Vries, Hessel de (uitvoering muziek)", "uri": "/person/Vries, Hessel de" },
            { "description": "Phillips, Margot (uitvoering muziek)", "uri": "/person/Phillips, Margot" }
        ]
    }

     

    Challenges/problems

    Now, the problems (or challenges) that we are facing here are essential to the core concept of linked data:

    • we don’t have actual matching unique identifiers (URIs)
    • we don’t have explicit internal relations with a common entity in both sources
    • part of the data consists of literal strings in a specific language

    These three problems are interrelated, they are linked problems, so to speak.

     

    Missing identifiers

    To start with the identifiers. Of course we have internal system identifiers in our local Aleph catalogue database. Because we contribute to the Dutch Union Catalogue (originally a PICA system, now OCLC), our bibliographic records also have national Dutch PICA identifiers. And because the Dutch Union Catalogue records are copied to WorldCat, these records in WorldCat also have OCLC numbers.
    Also the Theatre Institute has internal system identifiers in their Adlib database. But at the moment we do not have a match between these separate internal identifier schemes. The Theatre Production database records are not in WorldCat because they’re not bibliographic records.
    We are more or less forced to use the string values of the title and author fields to construct a usable URI, on both sides. Clearly this is the basis of lots of errors, because of the great number of possible variations in author and title descriptions.
    But even if the Theatre Institute’s records were in the Union Catalogue or WorldCat as well, then we still would not have an automatic match without some kind of broker mechanism ascertaining that the library catalogue record describes the same thing as the theatre production database record. The same applies to the author, which of course should be a relation of the type “written by” between the play and a person record instead of string values. Both systems do have internal author or person authority files, but there is no direct matching. For authors this could theoretically be achieved by linking to an online person authority file like VIAF. But in the current situation this is not available.
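    A tiny illustration of how fragile these string-based identifiers are. The construct_uri helper is hypothetical; it simply mimics the <baseurl>/title/<personname>/<title> pattern used in the project:

    def construct_uri(person, title):
        # Mimics the string-based URI pattern described above
        return "/title/{}/{}".format(person, title)

    opac_uri = construct_uri("Beckett, Samuel",
                             "Waiting for Godot : a tragicomedy in two acts")
    tin_uri = construct_uri("Beckett, Samuel", "Waiting For Godot")

    # Both URIs are meant to identify the same play, but they will never match:
    print(opac_uri == tin_uri)   # False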

     

    Missing relations

    This brings me to the second problem. The fact that we are using the string values of titles instead of unique identifiers means that we connect plays and productions with a specific title variety or language. In our current implementation this means that we are not able to link to all versions of one specific play.
    For instance, from our OPAC the following URIs are constructed (two in English, one in French, one in Dutch):

    /title/Beckett, Samuel/Waiting for Godot
    /title/Beckett, Samuel/Waiting for Godot : a tragicomedy in two acts
    /title/Beckett, Samuel/En attendant Godot : pièce en deux actes
    /title/Beckett, Samuel/Wachten op Godot

    In the Theatre Production database (two in English, four in Dutch, one in French, one in German):

    /title/Beckett, Samuel/Waiting for Godot
    /title/Beckett, Samuel/Waiting For Godot
    /title/Beckett, Samuel/Wachten op Godot
    /title/Beckett, Samuel/Wachtend op Godot
    /title/Beckett, Samuel/Wachten op Godot (De favorieten)
    /title/Beckett, Samuel/Wachten op Godot (eerste bedrijf)
    /title/Beckett, Samuel/En attendant Godot
    /title/Beckett, Samuel/Warten auf Godot

    Only the first and fourth URI from the OPAC will find corresponding titles in the Theatre Production database. The second and third one, using a subtitle within the main title, don’t even have equivalents. And only two of the eight entries from the Theatre Production database have a match in the catalogue.
    In a library catalogue environment we are used to this problem, because catalogues are used for describing physical objects in the form of editions and copies. Unfortunately, the Theatre Production database also just contains records describing productions of a specific 'edition' or translation of a play, with only the opening performance information attached.

    This is where I need to talk about FRBR. Basically in a library catalogue environment this means that we should describe the relations between the ‘work’ (original text), the ‘expression’ (the version or translation), the ‘manifestation’ (edition, format, etc.) and the ‘items’ (the physical copies). Via the relations with higher level expression and work, the physical copy could be linked to the unifying work level, and then ideally through some universally valid unique identifier to, in our case, the theatre plays.
    Although FRBR is a publication centered schema used only in libraries, the same concepts can be applied to theatre performances: the original work (which is the same as the work in a bibliographical sense) has expressions (adaptations, translations, etc.), which have manifestations (productions), and in the end the individual items (actual performances on a specific date, time and location).

    Linking library and theatre in theory through FRBR

    If both the library catalogue and the theatre production database were FRBRised, we could in theory link on the Work level and cover all individual versions. But we would still need a matching mechanism on that Work level of course.

    In reality however we can only try to link on the Manifestation level in an imperfect way.

    Linking library and theatre in reality

    At the moment, in our project, on the catalogue side we extract the title and author from the generated OPAC HTML. It could be an option to get available linking information from the internal MARC records (like the 240, 246, 765, 767, 775 tags), but that is not easy for a number of reasons. Something similar could be done in the theatre production database, making implicit links explicit. But all this makes the effort to get something sensible out there much bigger.

     

    Literal strings

    The third problem, the literal strings in Dutch both in the library catalogue and in the theatre production database, prevents the effective use of the data in multilingual environments, equally in the traditional native interfaces and as linked data. Obviously for English speaking end users the Dutch terms mean nothing. And in a linked data environment the Dutch strings can’t easily be linked to other data, in Dutch, English, or any language, without unique identifiers.

     

    Implicit to explicit

    People calling on institutions to publish their data as linked open data tend to say it's easy once you know how to do it. And of course it must be done. But if the published datasets have a flat internal structure designed to fulfill a specific local business objective, then they just don't provide sufficient added value for third party use. In order to make your published open data useful for others, you have to make implicit relations explicit. And this requires something more than just making the data available in RDF 'as is'; it requires a lot of processing.


  • Explicit and implicit metadata

    Posted on August 20th, 2009 Lukas Koster 12 comments

    Tagged © funkandjazz

    On August 17, after I tested a search in our new Aleph OPAC and mentioned my surprise on Twitter, the following discussion unfolded between me (lukask), Ed Summers of the Library of Congress and Till Kinstler of GBV (German Union Library Network):

    • lukask: Just found out we only have one item about RDF in our catalogue: http://tinyurl.com/lz75c4
    • edsu: @lukask broaden that search :-) http://is.gd/2l6vB
    • lukask: @edsu Ha! Thanks! But I’m sure that RDF will be mentioned in these 29 titles! A case for social tagging!
    • edsu: @lukask or better cataloging :-)
    • edsu: @lukask i guess they both amount to the same thing eh?
    • lukask: @edsu That’s an interesting position…”social tagging=better cataloging”. I will ask my cataloguing co-workers about this specific example
    • edsu: @lukask make sure to wear body-armor
    • lukask: @edsu Yes I know! I will bring it up at tomorrow’s party for the celebration of our ALEPH STP (after some drinks…)
    • tillk: @edsu @lukask or fulltext search… :-) SCNR…
    • edsu: @tillk yeah, totally — with projects like @googlebooks and @hathitrust we may look back on the age of cataloging with different eyes …
    • lukask: @tillk @edsu Fulltext search yes, or “implicit automatic metadata generation”?

    What happened here was:

    • A problem with findability of specific bibliographic items was observed: although it is highly unlikely that books about the Semantic Web will not cover RDF-Resource Description Framework, none of the 29 titles found with “Semantic Web” could be found with the search term “Resource Description Framework“; on the other side, the only item found with “Resource Description Framework” was NOT found with “Semantic Web“. I must add that the “Semantic web” search was an “All words” search. Only 20 of the results were indexed with the Dutch subject heading “Semantisch web” (which term is never used in real life as far as I know; the English term is an international concept). Some results were off topic, they just happened to have “semantic” and “web” somewhere in their metadata. A better search would have been a phrase search (adjacent) with “semantic web” in actual quotes, which gives 26 items. But of these, a small number were not indexed with subject heading “Semantisch web“. Another note: searching with “RDF” gets you all kinds of results. Read more on the issue of searching and relevance in my post Relevance in context.
    • Four possible solutions were suggested:
    1. social tagging
    2. better cataloging
    3. fulltext searching
    4. automatic metadata generation

    Social tagging
    Clearly, the 26 items found with the search “Semantic web” are not indexed by the “Resource description framework” or “RDF” subject heading. There is not even a subject heading for “Resource description framework” or “RDF“. In my personal view, from my personal context, this is an omission. Mind you, this is not only an issue in the catalogue of the Library of the University of Amsterdam, it is quite common. I tried it in the British Library Integrated Catalogue with similar results. Try it in your own OPAC!
    I presume that our professional cataloging colleagues can’t know everything about all subjects. That is completely understandable. I would not know how to catalog a book about a medical subject myself either! But this is exactly the point. If you allow end users to add their own tags to your bibliographic records, you enhance the findability of these records for specific groups of end users.
    I am not saying that cataloguing and indexing by library specialists using controlled vocabularies should be replaced by social tagging! No, not at all. I am just saying that both types of tagging/indexing are complementary. Sure, some of the tags added by end users may not follow cataloging standards, but who cares? Very often the end users adding tags of their own will be professional experts in their field. In any case, items with social tags will be found more often because specific end user groups can find them searching with their own terms.

    Better cataloging
    I suppose Ed Summers was trying to say the same thing as I just did above, when he commented “or better cataloging, I guess they both amount to the same thing eh?“, which I summarised as “social tagging=better cataloging“, but he can correct me if I’m wrong.
    Anyway, I hope I made it clear that I would not say “social tagging=better cataloging“, but rather “controlled vocabularies+social tagging=better cataloging“.
    Or alternatively, could we improve cataloging by professional library catalogers? I must admit I do not know enough about library training and practice to say something about that. I am not a trained librarian. Don’t hesitate to comment!

    Fulltext searching
    Is fulltext searching the miracle cure for findability problems, as Till Kinstler seems to suggest? Maybe.
    Suppose all our print material was completely digitised and available for fulltext search, I have no doubt that all 26 items mentioned above (the results of the “semantic web” all words search) would be found with the “resource description framework” or “rdf” search as well. But because fulltext search is by its very nature an “all words” search, the “rdf” fulltext search would also give a lot of “noise”, or items not having any relation to “semantic web” at all (author’s initials “R.D.F”, other acronyms “R.D.F.”, just see RDF in the BL catalogue). Again, see my post Relevance in context for an explanation of searching without context.
    Also, there will be books or articles about a subject that will not contain the actual subject term at all. With fulltext search these items will not be found.
    Moreover, fulltext searching actually limits the findable items to text, excluding other types, like images, maps, video, audio etc.
    This brings me to the last option:

    Automatic metadata generation
    Of course this is mostly still wishful thinking. But there are a number of interesting implementations already.
    What I mean when I say “(implicit) automatic metadata generation” is: metadata that is NOT created deliberately by humans, but either generated and assigned as static metadata, or generated on the fly, by software, applying intelligent analysis to objects, of all types (text, images, audio, video, etc.).
    In the case of our “rdf” example, such a tool would analyse a text and assign “rdf” as a subject heading based on the content and context of this text, even if the term “rdf” does not appear in the text at all. It would also discard texts containing the string “rdf” that refer to something completely different. Of course for this to succeed there should be some kind of contextual environment with links to other records or even other systems to be able to determine if certain terminology is linked to frequently used terms not mentioned in the text itself (here the Linked Data developments could play a major role).
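    As a very rough sketch of the idea, assuming a hypothetical table of related terms (which a real tool would derive from contextual linked data rather than hard-code):

    RELATED_TERMS = {
        "RDF": {"semantic web", "triples", "sparql", "linked data", "ontology"},
    }

    def suggest_subjects(text, threshold=2):
        """Assign a subject heading when enough related terms occur in the text."""
        lowered = text.lower()
        subjects = []
        for heading, clues in RELATED_TERMS.items():
            hits = sum(1 for clue in clues if clue in lowered)
            if hits >= threshold:          # enough contextual evidence
                subjects.append(heading)
        return subjects

    print(suggest_subjects("An introduction to the Semantic Web and SPARQL."))
    # ['RDF'] - although the string "rdf" never appears in the text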
    The same principle should also apply to non-textual objects, so that images, audio, video etc. about the same subject can be found in one go. Google has some interesting implementations in this field already: image search by colour and content type: see for example the search for "rdf" in Google Images with colour "red" and content type "clip art".
    But of course a lot still needs to be done.


  • Relevance in context

    Posted on August 11th, 2009 Lukas Koster 5 comments

    Search! © Jeffrey Beall

    If you do a search in a bibliographic database, you should find what you need, not just what you are looking for, or what the database “thinks” you are looking for. If you find what you are looking for, then you will not be surprised and you will not discover anything new. And that’s not what you want, is it? But if you find things you did not look for but also do not need, you’re not just surprised, you are confused! And that’s not what you want either.

    You want the results that are the most relevant for your search, with your specific objectives, at that specific point in time, for your specific circumstances, and you want them immediately.

    So, how should search systems behave to make you find what you need? There are two conditions that need to be met:

    • The search terms must be interpreted correctly
    • The most relevant search results must be presented

    The Problem
    First of all, let’s take a look at current practice.

    Search systems cannot cope with ambiguous search terms. My favorite example and test search term is "voc". This can stand for a number of things in various disciplines: V.O.C. (Dutch: "Verenigde Oostindische Compagnie" or "Dutch East India Company") in historical databases; "vocals" in musical databases; "volatile organic compounds" in physics databases. So if you do a search for "voc" in a standard library catalogue, you get all kinds of results. Even more so if you use a metasearch or federated search tool for searching several databases simultaneously.


    Search for "voc" in British Library Integrated Catalogue

    You are confused. You would like the system to “understand” which one of these concepts you are referring to instead of just using the literal string. You would like the system to take into account your context.

    In most databases search results can be sorted or filtered by a number of fields, most commonly by year, title, author, and also by more specific fields in dedicated databases.  But unless you are interested in a specific year, author or title, this will not do. Recently many systems have implemented “faceted” and “clustered” browsing of results, enabling “drilling down” on specific terms or subjects. This basically comes down to setting the context after the fact.

    But after the system has interpreted your search terms, the  results should also be ordered in a specific way, the ones you need most should be on top. This is where “relevance ranking” of search results comes in. Most catalogues and databases use a system specific default relevance ranking algorithm. Search results are assigned a rank, based on a number of criteria, that can differ between databases, depending on the nature of the database.
    Some databases just present the most recent results on top. For medical and physical sciences this may be right, but for history and literature databases this may just be wrong.
    Sometimes the search terms are taken into account: the number of times the given search terms are present in the result fields is important, but also the specific fields in which search terms appear. The appearance of search terms in “Title” and “Subject” may rank higher than in “Abstract” or “Publisher”. Moreover, the search indexes used can have a major influence on rank: if you search for “Subject” = “flu”, then results with “flu” as subject will be ranked higher than results with “flu” in the title only.
    To come back to my example, with ambiguous search terms like “voc” this type of relevance ranking will definitely not be enough, because results from the three different conceptual areas will be completely mixed up.

    Faceted/clustered search results in Amsterdam University Digital Library

    Clustered search results in Amsterdam University Library MetaLib portal

    When searching with a metasearch or federated search tool things get even more complicated. Each of the remote databases that you search in has its own default way of ranking. Usually the metasearch tool fetches the first 30 or so results from each remote database (one set sorted by date, the other by internal rank, the next by title), merges these into one list and then applies its own local ranking mechanism to this subset only. Confusion! And I did not even mention searching databases with metadata in multiple languages. Moreover, databases containing only metadata will produce different results and relevance than databases with full text articles. There is absolutely no way of telling if you actually have the most relevant results for your situation.

    Again, with relevance ranking search systems do not take into account the context either. You could say it is an introverted, internally focused way of ranking, the confusing results of which are multiplied in case of metasearching.

    Most metasearch tools give users the option of searching in sets of pre-selected databases, based on subject or type. This way you can limit your search to those databases that are known to have data about that specific subject. You more or less set the context in advance. But this mechanism only eliminates results from databases that probably do not have data on your subject at all, so they would not have shown up in the results anyway. Moreover, the same issues that were discussed above apply to this limited set of databases.

    The metasearch tool that I know best (MetaLib) offers the option of setting a relative rank per database, so results from databases with a higher rank will have a higher relevance in merged result sets. But this is a system wide option, set by system administration, so it is not taking into account any context at all. It would be better if you could make the relative database rank dependent on the set or subject the search is done from (for instance: if a history database is searched in the context of a “History” set, the results get a higher rank than in a search from a “Music” set).

    The best solution for this “internal” relevance problem regarding distributed databases is a central database of harvested indexes. In this case all harvested metadata is normalised and ranked in a uniform way, and users do not have to select databases in advance. But these systems still do not take into account “external” relevance: there is no context!

    A very interesting and intelligent solution for the problem of pre-selecting databases is provided in PurpleSearch, the integrated front end to MetaLib (among other things), developed by the Library of the University of Groningen. The system records which databases actually produce results for specific search terms. As soon as the user enters search terms in the single search box, the system knows which databases will have results, and the search is automatically carried out in these databases, without asking the user to select the databases or subject area he or she wants to search in. Simultaneously a background search in all other databases is performed in order to check additional new results, and the information about results in databases is updated.
    Of course, all other usual options are available as well, like pre-selecting databases (setting context in advance) and faceted results drilling down (setting context after the fact). But again, no external contextual settings.


    Search "voc" in PurpleSearch

    • Conclusion: the only way to find what you need, is to make search systems take into account the context in which the search is done, both for searching and for relevance ranking.

    Solutions
    Now, let’s have a look at a couple of conditions that would make contextual searches possible.

    Personal context: a system should “know” about your personal interests, field of study, job situation, age, etc. so it can “decide” which databases to search in and which results are the most relevant for you. Some systems, like university systems, have access to information about their users. Once you log in, the system potentially knows which subjects you are studying or teaching and could use this information for setting the context for searching and ranking.
    But what if you are a student in both Law AND Social Sciences? Which subject area should the system choose? Or: if you are a History teacher, and you have a personal interest in Ecology, which the system does not know about, what then? Somehow you still need to set context yourself.

    Some systems also offer the opportunity of setting personal preferences, like: area of interest, specific databases, type of material (only digital or print), only recent material, etc. Again: you must be able to deviate from these preferences, depending on your situation, which means setting context manually.

    Different search systems will have different user profiles (user data and preferences). It would be nice if search systems could take advantage of universal external personal profiles (like Google Profiles for instance) using some kind of universal API.

    Situational context: a system should also “know” about the situation you are in, both in a functional sense and in a physical sense.

    Functional context means: which role are you playing? Are you in your law student role or in your social sciences student role? Are you in your professional role or in your private role? But also: to which resources do you have access?
    An interesting idea: if you work Monday to Friday during office hours, study in the evenings and spend time on your personal interests on the weekends, it would be nice if you could link times of day and days of the week to your different roles, so search systems could use the correct context for your searches depending on time and date: “if it’s Tuesday evening then use study profile and search in ‘History’; if it’s Sunday, use private profile and search in ‘Ecology’“.
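    Just to make the idea tangible, a toy sketch of such a rule set (the profiles and the rules are of course hypothetical personal preferences, not an existing feature of any search system):

    from datetime import datetime

    def search_profile(now):
        """Pick a search context from the day of the week and the hour."""
        if now.weekday() == 6:                        # Sunday
            return "private profile: Ecology"
        if now.weekday() == 1 and now.hour >= 18:     # Tuesday evening
            return "study profile: History"
        if now.weekday() < 5 and 9 <= now.hour < 17:  # weekday office hours
            return "work profile"
        return "default profile"

    print(search_profile(datetime(2009, 8, 11, 20, 0)))  # a Tuesday evening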

    It's the Great Pumpkin Charlie Brown

    This temporal context was also referred to by Till Kinstler in a (German) blog post about the new “Suchkiste” search system prototype of the German Union Library Network (GBV): ‘the search for “Charlie Brown” in October should result in “It’s the Great Pumpkin, Charlie Brown” at number 1, and in December in “A Charlie Brown Christmas“‘.

    Physical context means: where are you? It would be nice if a library catalogue search system would take into account your actual location, so it could show you the records of the copies of the FRBR-ized results available in the library locations nearest to you (this idea came up in a recent Twitter discussion between @librarianbe and @gbierens). This is what WorldCat does when you supply it with your location manually. In WorldCat this is a static preference. But it would be nice if it would respond to your actual location, for instance by using the GPS coordinates of your mobile phone. Alternatively, search systems could derive your location from the IP address you are sending your search from.
    This information could also be used to determine if records for digital or physical copies should be ranked the most relevant in this case. If you are inside the library building and you have a preference for physical books and journals, then records for available print copies should be on top of the results list. If you are at home, then records for digital copies that you have access to should come first.

    Contextual searching and ranking should always be a combination of all possible conditions, personal, situational and internal system ones.

    Of course it goes without saying that it would be great if metasearch tools were able to convey the search context to the remote databases and get contextual results back, using some kind of universal search context API!

    Last but not least, each search system should show the context of the search, and explain how it got to the results in the presented order. Something like: based on your personal preferences, the time of day and day of the week, and your location, the search was done in these databases, with this subject area, and the physical copies of the nearest location are shown on top.
    This context area on the results screen could then be used as a kind of inverted faceted search, drilling “up” to a broader level or “sideways” to another context.


  • Linked Data for Libraries

    Posted on June 19th, 2009 Lukas Koster 8 comments
    Linked Data and bibliographic metadata models


    © PhOtOnQuAnTiQuE

    Some time after I wrote “UMR – Unified Metadata Resources“, I came across Chris Keene’s post “Linked data & RDF : draft notes for comment“, “just a list of links and notes” about Linked Data, RDF and the Semantic Web, put together to start collecting information about “a topic that will greatly impact on the Library / Information management world“.

    While reading this post and working my way through the links on that page, I started realising that Linked Data is exactly what I tried to describe as One single web page as the single identifier of every book, author or subject. I did mention Semantic Web, URIs and RDF, but the term "Linked Data" as a separate protocol had escaped me.

    The concept of Linked Data was described by Tim Berners Lee, the inventor of the World Wide Web. Whereas the World Wide Web links documents (pages, files, images), which are basically resources about things, (“Information Resources” in Semantic Web terms), Linked Data (or the Semantic Web) links raw data and real life things (“Non-Information Resources”).

    There are several definitions of Linked Data on the web, but here is my attempt to give a simple definition of it (loosely based on the definition in Structured Dynamics’ Linked Data FAQ):

    Linked Data is a methodology for providing relationships between things (data, concepts and documents) anywhere on the web, using URIs for identifying, RDF for describing and HTTP for publishing these things and relationships, in a way that they can be interpreted and used by humans and software.

    I will try to illustrate the different aspects using some examples from the library world. The article is rather long, because of the nature of the subject, then again the individual sections are a bit short. But I do supply a lot of links for further reading.

    Data is relationships
    The important thing is that “data is relationships“, as Tim Berners Lee says in his recent presentation for TED.
    Before going into relationships between things, I have to point out the important distinction between abstract concepts and real life things, which are “manifestations” of the concepts. In Object modeling these are called “classes” (abstract concepts, types of things) and “objects” (real life things, or “instances” of “classes“).

    Examples:

    • the class book can have the instances/objects “Cloud Atlas“, “Moby Dick“, etc.
    • the class person can have the instances/objects “David Mitchell“, “Herman Melville“, etc.

    In the Semantic Web/RDF model the concept of triples is used to describe a relationship between two things: subject – predicate – object, meaning: a thing has a relation to another thing, in the broadest sense:

    • a book (subject) is written by (predicate) a person (object)

    You can also reverse this relationship:

    • a person (subject) is the author of (predicate) a book (object)

    Triple

    The person in question is only an author because of his or her relationship to the book. The same person can also be a mother of three children, an employee of a library, and a speaker at a conference.
    Moreover, and this is important: there can be more than one relationship between the same two classes or types of things. A book (subject) can also be about (predicate) a person (object). In this case the person is a “subject” of the book, that can be described by a “keyword”, “subject heading”, or whatever term is used. A special case would be a book, written by someone about himself (an autobiography).

    The problem with most legacy systems, and library catalogues as an example of these, is that a record for let’s say a book contains one or more fields for the author (or at best a link to an entry in an authority file or thesaurus), and separately one or more fields for subjects. This way it is not possible to see books written by an author and books about the same author in one view, without using all kinds of workarounds, link resolvers or mash-ups.
    Using two different relationships that link to the same thing would provide for an actual view or representation of the real world situation.
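    A small sketch of this point, using the real Dublin Core properties dcterms:creator and dcterms:subject and an imaginary autobiography; the example.org URIs and the person are, as before, purely illustrative:

    from rdflib import Graph, Namespace
    from rdflib.namespace import DCTERMS

    EX = Namespace("http://example.org/")
    book = EX["book/my-life"]          # a hypothetical autobiography
    person = EX["person/jane-doe"]     # its author, who is also its subject

    g = Graph()
    g.add((book, DCTERMS.creator, person))   # written by this person
    g.add((book, DCTERMS.subject, person))   # and also about this person

    # With both relationships explicit, "books written by X" and
    # "books about X" become two different, answerable questions.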

    Another important option of Linked Data/RDF: a certain thing can have as a property a link to a concept (or "class"), describing the nature of the thing: "object Cloud Atlas" has type "book"; "object David Mitchell" has type "person"; "object Cloud Atlas" is written by "object David Mitchell".

    And of course, the property/relationship/predicate can also link to a concept describing the nature of the link.

    Anywhere on the web


    ERD

    So far so good. But you may argue that this relationship theory is not very new. Absolutely right, but up until now this data-relationship concept has mainly been used with a view to the inside, focused on the area of the specific information system in question, because of the nature and the limitations of the available technology and infrastructure.

    The “triple” model is of course exactly the same as the long standing methodology of Entity Relationship Diagrams (ERD), with which relationships between entities (=”classes“) are described. An ERD is typically used to generate a database that contains data in a specific information system. But ERD’s could just as well be used to describe Linked Data on the web.

    Information systems, such as library catalogs, have been, and still are, for the greatest part closed containers of data, or “silos” without connections between them, as Tim Berners Lee also mentions in his TED presentation.

    Lots of these silo systems are accessible with web interfaces, but this does not mean that items in these closed systems with dedicated web front ends can be linked to items in other databases or web pages. Of course these systems can have APIs that allow system developers to create scripts to get related information from other systems and incorporate that external information in the search results of the calling system. This is what is being done in web 2.0 with so-called mash-ups.
    But in this situation you need developers who know how to make scripts using specific scripting languages for all the different proprietary APIs that are being supported for all the individual systems.
    If Linked Data were a global standard and all open and closed systems and websites supported RDF, then all these links would be available automatically to RDF enabled browser and client software, using SPARQL, the RDF Query Language.

    • Linked Data/RDF can be regarded as a universal API.
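    To illustrate that remark: once data is exposed as RDF, one query language can replace all the proprietary APIs. A hedged example with rdflib; the dataset URL is imaginary, while dcterms:creator and the VIAF URI are real:

    from rdflib import Graph

    g = Graph()
    g.parse("http://example.org/catalogue.ttl", format="turtle")  # imaginary dataset

    results = g.query("""
        PREFIX dcterms: <http://purl.org/dc/terms/>
        SELECT ?book WHERE {
            ?book dcterms:creator <http://viaf.org/viaf/102403515> .
        }
    """)
    for row in results:
        print(row.book)   # every book in this dataset written by that person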

    The good thing about Linked Data is, that it is possible to use Linked Data mechanisms to link to legacy data in silo databases. You just need to provide an RDF wrapper for the legacy system, like has been done with the Library of Congress Subject Headings.

    Some examples of available tools for exposing legacy data as RDF:

    • Triplify – a web applications plugin that converts relational database structures into RDF triples
    • D2R Server – a tool for publishing relational databases on the Semantic Web
    • wp-RDFa – a WordPress plugin that adds some RDF information about Author and Title to WordPress blog posts

    Of course, RDF that is generated like this will very probably only expose objects to link TO, not links to RDF objects external to the system.

    Also, Linked Data can be used within legacy systems, for mixing legacy and RDF data, open and closed access data, etc. In this case we have RDF triples that have a subject URI from one data source and an object URI from another data source. In a situation with interlinked systems it would for instance be possible to see that the author of a specific book (data from a library catalog) is also speaking at a specific conference (data from a conference website). Objects linked together on the web using RDF triples are also known as an “RDF graph”. With RDF-aware client software it is possible to navigate through all the links to retrieve additional information about an object.


    Linked Data

    URIs
    URIs (“Uniform Resource Identifiers”) are necessary for uniquely identifying and linking to resources on the web. A URI is basically a string that identifies a thing or resource on the web. All “Information Resources” (WWW pages, documents, etc.) have a URI, commonly known as a URL (Uniform Resource Locator).

    With Linked Data we are looking at identifying “Non-information Resources” or “real world objects” (people, concepts, things, even imaginary things), not the web pages that contain information about these real world objects. But it is a little more complicated than that. In order to honour the requirement that a thing and its relations can be interpreted and used by both humans and software, we need at least three different URIs for one resource (see: How to publish Linked Data on the web):

    • Resource identifier URI (identifies the real world object, the concept, as such)
    • RDF document URI (a document readable for semantic web applications, containing the real world object’s RDF data and relationships with other objects)
    • HTML document URI (a document readable for humans, with information about the real world object)
    Redirection

    For instance, there could be a Resource Identifier URI for a book called “Cloud Atlas“. The web resource at that URI can redirect an RDF-enabled browser to the RDF document URI, which contains RDF data describing the book and its properties and relationships. A normal HTML web browser would be redirected to the HTML document URI, for instance a web page about the book on the publisher’s website.

    There are several methods of redirecting browsers and applications to the required representation of the resource. See Cool URIs for the Semantic Web for the technical details.
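
    Just to illustrate the principle, here is a minimal sketch of such a redirect in Python with the Flask micro-framework: one resource identifier URI that sends semantic web clients a 303 redirect to the RDF document and ordinary browsers to the HTML page. All paths are invented for illustration; the real recipes are in the Cool URIs document mentioned above.

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.route("/resource/cloud-atlas")                          # the resource identifier URI
    def cloud_atlas_resource():
        accept = request.headers.get("Accept", "")
        if "application/rdf+xml" in accept or "text/turtle" in accept:
            return redirect("/data/cloud-atlas.rdf", code=303)   # RDF document URI, for semantic web clients
        return redirect("/page/cloud-atlas", code=303)           # HTML document URI, for human readers

    # /data/cloud-atlas.rdf would serve the RDF description of the book,
    # /page/cloud-atlas the human-readable web page about it.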

    There are also RDF-enabled browsers that transform RDF into web pages readable by humans, like the Firefox add-on “Tabulator“, or the web based Disco and Marbles browsers, both hosted at the Free University Berlin.

    RDF, vocabularies, ontologies
    RDF, or Resource Description Framework, is, as the name suggests, just a framework. It uses XML (or a simpler non-XML notation such as N3) to describe resources by means of relationships. RDF can be implemented in vocabularies or ontologies, which are sets of RDF classes and properties describing objects and relationships for a given field.
    Basically, anybody can create an RDF vocabulary by publishing, at a URI on the web, an RDF document defining the classes and properties of the vocabulary. The vocabulary can then be used in a resource by referring to the namespace (the URI) and the classes in that RDF document.

    A nice and useful feature of RDF is that more than one vocabulary can be mixed and used in one resource.
    Also, a vocabulary itself can reference other vocabularies and thereby inherit well established classes and properties from other RDF documents.
    Another very useful feature of RDF is that objects can be linked to similar object resources describing the same real world thing. This way, confusion about which object we are talking about can be avoided.
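
    The sketch below shows both features at once, again in Python with rdflib: a resource described with properties from two different vocabularies (Dublin Core and FOAF, which are real and widely used), plus an owl:sameAs link stating that it describes the same real-world thing as a resource in another dataset. The example.org URIs and the DBpedia URI are assumptions made for illustration.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DC, FOAF, OWL

    EX = Namespace("http://example.org/")                             # hypothetical local namespace

    g = Graph()
    book = EX["book/cloud-atlas"]
    g.add((book, DC.title, Literal("Cloud Atlas: A Novel")))          # Dublin Core property
    g.add((book, FOAF.maker, EX["person/david-mitchell"]))            # FOAF property, mixed in freely
    # state that our local resource and another dataset's resource describe the same thing
    g.add((book, OWL.sameAs, URIRef("http://dbpedia.org/resource/Cloud_Atlas_(novel)")))

    print(g.serialize(format="turtle"))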

    A couple of existing and widely used RDF vocabularies/ontologies:

    (By the way, the links in the first column (to the RDF files themselves) may act as an illustration of the redirection mechanism described before. Some of them may link either to the RDF file with the vocabulary definition itself, or to a page about the vocabulary, depending on the type of browser you use: RDF-aware or not.)

    A special case is:

    • RDFa – a sort of microformat without a vocabulary of its own, which relies on other vocabularies for turning XHTML page attributes into RDF

    Example
    A shortened example for “Cloud Atlas” by David Mitchell from the RDF BookMashup at the Free University Berlin, which uses a number of different vocabularies:

    <?xml version="1.0" encoding="UTF-8" ?>
    <rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:rev="http://purl.org/stuff/rev#"
    xmlns:skos="http://www.w3.org/2004/02/skos/core#">
    <!-- the rdfs, dc, foaf and rev declarations above use the commonly known
         namespace URIs for these prefixes; the scom declaration is omitted
         in this shortened example -->

    <rdf:Description rdf:about="http://www4.wiwiss.fu-berlin.de/bookmashup/books/0375507256">
    <rev:hasReview rdf:resource="http://www4.wiwiss.fu-berlin.de/bookmashup/reviews/0375507256_EditorialReview1"/>
    <dc:creator rdf:resource="http://www4.wiwiss.fu-berlin.de/bookmashup/persons/David+Mitchell"/>
    <dc:format>Paperback</dc:format>
    <dc:identifier rdf:resource="urn:ISBN:0375507256"/>
    <dc:publisher>Random House Trade Paperbacks</dc:publisher>
    <dc:title>Cloud Atlas: A Novel</dc:title>
    </rdf:Description>

    <scom:Book rdf:about="http://www4.wiwiss.fu-berlin.de/bookmashup/books/0375507256">
    <rdfs:label>Cloud Atlas: A Novel</rdfs:label>
    <skos:subject rdf:resource="http://www4.wiwiss.fu-berlin.de/bookmashup/subject/Fantasy+fiction"/>
    <skos:subject rdf:resource="http://www4.wiwiss.fu-berlin.de/bookmashup/subject/Fate+and+fatalism"/>

    <foaf:depiction rdf:resource="http://ecx.images-amazon.com/images/I/51MIVHgJP%2BL.jpg"/>
    <foaf:thumbnail rdf:resource="http://ecx.images-amazon.com/images/I/51MIVHgJP%2BL._SL75_.jpg"/>
    </scom:Book>

    <rdf:Description rdf:about="http://www4.wiwiss.fu-berlin.de/bookmashup/doc/books/0375507256">
    <dc:license rdf:resource="http://www.amazon.com/AWS-License-home-page-Money/b/ref=sc_fe_c_0_12738641_12/102-8791790-9885755?ie=UTF8&amp;node=3440661&amp;no=12738641&amp;me=A36L942TSJ2AJA"/>
    <dc:license rdf:resource="http://www.google.com/terms_of_service.html"/>
    </rdf:Description>

    <foaf:Document rdf:about="http://www4.wiwiss.fu-berlin.de/bookmashup/doc/books/0375507256">
    <rdfs:label>RDF document about the book: Cloud Atlas: A Novel</rdfs:label>
    <foaf:maker rdf:resource="http://www4.wiwiss.fu-berlin.de/is-group/resource/projects/Project10"/>
    <foaf:primaryTopic rdf:resource="http://www4.wiwiss.fu-berlin.de/bookmashup/books/0375507256"/>
    </foaf:Document>

    <rdf:Description rdf:about="http://www4.wiwiss.fu-berlin.de/bookmashup/persons/David+Mitchell">
    <rdfs:label>David Mitchell</rdfs:label>
    </rdf:Description>

    <rdf:Description rdf:about="http://www4.wiwiss.fu-berlin.de/bookmashup/reviews/0375507256_EditorialReview1">
    <rdfs:label>Review number 1 about: Cloud Atlas: A Novel</rdfs:label>
    </rdf:Description>

    <rdf:Description rdf:about="http://www4.wiwiss.fu-berlin.de/is-group/resource/projects/Project10">
    <rdfs:label>RDF Book Mashup</rdfs:label>
    </rdf:Description>

    </rdf:RDF>

    A partial view of this RDF file in the Marbles browser:

    RDF browser view

    See also the same example in the Disco RDF browser.

    Library implementations
    It seems obvious that Linked Data can be very useful in providing a generic infrastructure for linking the data, metadata and objects held in numerous types of data stores in the online library world. With such a networked online data structure, it would be fairly easy to create all kinds of discovery interfaces for bibliographic data and objects. Moreover, it would also be possible to link to non-bibliographic data that might interest the users of these interfaces.

    A brief and incomplete list of some library-related Linked Data projects, some of which have already been mentioned above:

    And what about MARC, AACR2 and RDA? Is there a role for them in the Linked Data environment? RDA is supposed to be the successor to AACR2 as a content standard that can be used with MARC, but also with other encoding standards like MODS or Dublin Core.
    The RDA Entity Relationship Diagram, which incorporates FRBR as well, can of course easily be implemented as an RDF vocabulary that could be used to create a universal Linked Data library network. It really does not matter what kind of internal data format the connected systems use.
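
    As a purely illustrative sketch of that idea, the fragment below types two resources as FRBR-style entities and links them with a relationship from the entity-relationship model. The namespace and the term names are invented stand-ins; the actual RDA element sets and FRBR vocabularies define their own URIs.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    FRBR = Namespace("http://example.org/frbr/")   # hypothetical stand-in for a real RDA/FRBR vocabulary
    EX = Namespace("http://example.org/")

    g = Graph()
    g.add((EX["work/cloud-atlas"], RDF.type, FRBR.Work))
    g.add((EX["expression/cloud-atlas-en"], RDF.type, FRBR.Expression))
    g.add((EX["expression/cloud-atlas-en"], FRBR.realizationOf, EX["work/cloud-atlas"]))

    print(g.serialize(format="turtle"))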
