Library2.0 and beyond
  • Change or be irrelevant

    Posted on October 10th, 2012 Lukas Koster 28 comments

    Or: Think “different” or paint yourself in a corner

    EMTACL12 – Emerging Technologies in Academic Libraries 2012

    I attended the EMTACL12 conference in Trondheim, October 1-3, 2012, organised by the Library of NTNU, the Norwegian University of Science and Technology, both as a member of the international programme committee and as a speaker. EMTACL stands for “emerging technologies in academic libraries”. Looking back, my impression was that the conference was not so much about emerging technologies as about emerging tasks using existing technologies. One of the keynote speakers, Rudolf Mumenthaler, expressed similar thoughts in his blog post “No new technologies in libraries”, but some of the other participants disagreed, saying that “being emerging” has more to do with the context of a technology than with the technology itself (see the comments on that blog post). Some technologies may be well established, yet still be emerging in certain domains. There is something to be said for that. Anyway, whatever term you prefer, we all mean the same thing.

    EMTACL12 was the second EMTACL conference; the first one was organised in 2010. One of the presentations at the 2010 edition that caused a great stir amongst librarians on Twitter was the one entitled “I’ve got Google, why do I need you? A student’s expectations of academic libraries” by Ida Aalen. Let’s look at this year’s conference with that perspective in mind: is there a future for academic libraries in supporting students and researchers, other than just giving access to publications?

    The word “change” best describes the overall impression I got from all EMTACL12 presentations. And “data”. Both concepts involve “support and services for research and education”. Technologies that were mentioned: linked data, APIs, mobile computing, visualisation, infrastructure, communication.

    The EMTACL12 programme consisted of eight plenary keynote presentations by invited speakers and a number of presentations in two parallel tracks. Let me report on the things that struck me most.

    Keynotes

     

    The title of the opening keynote presentation by Herbert Van de Sompel, “Paint-yourself-in-the-corner Infrastructure”, aptly describes the current situation of academic libraries. To “paint yourself into a corner” means something like: to put yourself in a situation with no visible solution or alternative. Herbert Van de Sompel talked about the changing nature of the scholarly record: from “fixity” and “boundary” to dynamic and interdependent on the web. Online publications and related information, like research project information, references and data, change over time, so it becomes increasingly difficult to recreate a scholarly record. These are the challenges that academic libraries need to address. Van de Sompel mentioned a couple of new tools and protocols that can help: Memento, DURI (Durable URI), SiteStory. See also the excellent report of this session by Jane Stevenson on the Archives Hub blog.

     

    5 dimensions

    ‘Think “different”’ is what Karen Coyle told us, using the famous Steve Jobs quote. And yes, the quotes around “different” are there for a reason: it’s not the grammatically correct “think differently”, because that would be too easy. What is meant here is: you have to have the term “different” in your mind all the time. Karen Coyle confronted us with a number of ingrained obsolete practices in libraries, like the ineradicable need for alphabetical ordering, which only makes sense in physical card catalogue systems. “Alphabetical order is not generally meaningful and an accident of language”, she said. The same goes for page numbers and ebooks: “…it is literally impossible to get everyone ‘on the same page’”. Before printing we already had a perfect reference system for texts, independent of physical appearance: paragraph or verse numbers (like in the Bible).
    Libraries put things on shelves, forcing the user to see individual items and ignoring the connections between them. “Library classification is a knowledge-prevention system, not a knowledge-organisation system”. The focus is still too much on physical items: “The FRBR user tasks drive me insane, as they end with obtain”. According to Karen Coyle, libraries are two-dimensional linear things. We need to add a third (links), a fourth (time) and a fifth dimension (the users).

    © Patrick Hochstenbach

    Is linked data the answer? Not as such: “ISBD in RDF is like putting a turbo engine on a dinosaur”. The world is not waiting for libraries’ bibliographic data as Linked Open Data; the web is awash with bibliographic data. But we have holdings information, and that is unique and adds value. We should try to get that information into Google search results as rich snippets.
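    For illustration, this is roughly what that could look like: a sketch of schema.org microdata for a single catalogue record with holdings information, of the kind rich snippets can pick up. Title, author, ISBN and library are all made up.

    ```html
    <!-- Hypothetical record: a Book with an Offer marking the holding -->
    <div itemscope itemtype="http://schema.org/Book">
      <span itemprop="name">An Example Title</span> by
      <span itemprop="author">A. N. Author</span>
      (ISBN <span itemprop="isbn">0000000000</span>)
      <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
        <link itemprop="availability" href="http://schema.org/InStock">
        Available at
        <span itemprop="offeredBy" itemscope itemtype="http://schema.org/Library">
          <span itemprop="name">Example University Library</span>
        </span>
      </div>
    </div>
    ```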

    Karen’s message, which I wholeheartedly support, was: “The mission of the library is not to gather physical things into an inventory, but to organize human knowledge that has been very inconveniently packaged.”

    Rurik Greenall’s keynote “Defining/Defying reality: the struggle towards relevance in bibliographic data” also focused on the imminent irrelevance of libraries, from another perspective. “Outsourcing library business is better called ‘outscarcing’. Libraries are losing skills.” “You can tell a lot about an organization from the way it treats its data.” “We see metadata as good and data as bad. The terms are the same.” “Ideas change, so should your data.” Buying shelf-ready data means being static. “Data should age like wine, not like fish.” In this changing environment bibliographic data needs to be enhanced. There is a role for experts, for the library. Final quote: “The semantic web doesn’t exist anymore, it’s been absorbed by the web”.

    Rurik’s world

     

    Rudolf Mumenthaler spoke about “Innovation management in and for libraries”. During and after his talk the big question was: can innovation be promoted by management, or does it need to grow by itself, by giving staff the freedom to play, the Google way? It appears that there may be cultural differences. The main thing is: innovation has to be facilitated in one way or another. See the comments on his blog post.

    Astrophysicist Eirik Newth entertained the audience with his slideless “Forecast for the academic library of 2025: Cloudy with a chance of user participation and content lock-in”.

    Jens Vigen, Head Librarian at CERN, delivered a very entertaining and compelling argument for open access with his talk “Connecting people and information: how open access supports research in High Energy Physics. Since 50 years!” The CERN convention of 1953 already effectively contains an open access manifesto. CERN supports SCOAP3, the Sponsoring Consortium for Open Access Publishing in Particle Physics, and uses subscription funds for open access. “You librarians today spend money on subscriptions, tomorrow you will spend it on open access.”
    A couple of very interesting remarks by Jens Vigen are of direct interest to online library discovery layers:
    “A researcher would never go to an institutional repository, they find their colleagues in subject repositories.”
    “A successful digital library: one size does not fit all.”

    OCLC’s new “Technology Evangelist” Richard Wallis’ talk “OCLC WorldShare and Linked Data” was actually not about WorldShare and linked data combined, but consisted of two parts: a WorldShare commercial, and a presentation of WorldCat and linked data, mainly the embedding of additional schema.org markup in WorldCat search results. Richard Wallis also mentioned the WorldCat Linked Data Facebook app, which almost nobody seemed to know. Maybe Facebook isn’t the right platform for things like this after all?

    In his closing keynote “What Next for Libraries? Making Sense of the Future”, Brian Kelly (UKOLN, University of Bath, UK) made it clear that it is very hard to foresee the future, citing Star Trek, monorails and paper planes as evidence.

    Parallel tracks

    Obviously I could only attend half of the parallel track sessions. Moreover, I chaired two sessions of two presentations each, in the “Semantic Web” and “Supporting Research” tracks, and I gave one presentation myself.

    In “The winner takes it all? – APIs and Linked Data battle it out”, Jane and Adrian Stevenson (yes, they’re married, and work together) of the MIMAS National Data Centre at the University of Manchester in the UK performed an actual battle, defending the generic linked data approach versus the more dedicated API approach in making data available for reuse and mashups. Two interesting projects served as examples: the World War 1 Discovery Project (Adrian for APIs) and Linking Lives (Jane for Linked Data). Conclusion: too close to call.
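    To make the contrast concrete, here is a hedged sketch in JavaScript of the same question asked both ways; the endpoints, the example creator and the render callback are all hypothetical.

    ```javascript
    // Shared helper: both approaches are just HTTP GETs in the end.
    function get(url, callback) {
      var xhr = new XMLHttpRequest();
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) callback(xhr.responseText);
      };
      xhr.open("GET", url, true);
      xhr.send();
    }

    function render(response) { console.log(response); } // placeholder

    // 1. The API approach: a purpose-built call returning ready-to-use JSON.
    get("http://example.org/api/records?creator=Wilfred+Owen&format=json", render);

    // 2. The linked data approach: a generic query language (SPARQL) over RDF.
    var sparql =
      "PREFIX dc: <http://purl.org/dc/elements/1.1/> " +
      "SELECT ?record WHERE { ?record dc:creator \"Wilfred Owen\" }";
    get("http://example.org/sparql?query=" + encodeURIComponent(sparql), render);
    ```

    The difference is who designs the interface: the API publisher decides in advance which questions can be asked, while a linked data endpoint leaves the question to the consumer.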

    Black Metal MARC

    Norwegian Black Metal was the intriguing topic of Kim Tallerås’ talk “Using Linked data to harmonize heterogeneous metadata – Modeling the birth of Norwegian black metal”. He and three others combined complicated metadata from two heterogeneous data sources about early Norwegian black metal bands, performances and recordings using linked data ontologies and graph matching techniques. We saw some very interesting slides containing MARC records and some typical Black Metal band and song names.
    Afterwards we had the opportunity to experience the real thing in the Black Metal Room in the Norwegian Rock and Pop Museum Rockheim during our conference excursion.

    Black Metal Room at Rockheim

    “Mubil: a digital laboratory” is a project (NTNU Trondheim and PERCRO, Pisa, Italy) aimed at augmenting and enriching rare old books in a digital 3D architecture, ready for all kinds of platforms and devices. The results are touch ebooks, with options for retrieving extra textual information and virtual 3D objects. A very interesting presentation by Alexandra Angeletaki, Marcello Carrozzino and Chiara Evangelista.

    In her talk “Libraries, research infrastructures and the digital humanities: are we ready for the challenge?”, Sally Chambers (DARIAH Göttingen) gave us a very thorough and complete overview of what “Digital Humanities” means and of all organisations and infrastructures currently available to libraries that are charged with supporting digital humanities research.

    The History Engine project was the subject of the presentation “Driving history forward: The History Engine as a vehicle for engaging undergraduate research” by Paulina Rousseau, Whitney Kemble and Christine Berkowitz (University of Toronto Scarborough), as a real example of how libraries can support undergraduate students in their efforts to master research.

    Sharon Favaro, Digital Services Librarian at Seton Hall University in South Orange, USA, showed us the landscape of disconnected tools used in the different stages of research projects: catalogues, databases, writing tools, drawing tools, reference managers, task managers, email; on the web, on internet file sharing tools, on desktop, on flash drives. The topic of her talk “Designing tools for the 21st century workflow of research and how it changes what libraries must do” was: how can research libraries support scholars within the entire lifecycle of the research process? The goal being to identify areas where library tools could be better integrated to support library resource use throughout the lifecycle of research. It was a pity that there was no real view yet on the best way to solve this problem: create a new library based infrastructure platform, use existing linking features, or other options. This will hopefully be the objective of a follow-up project at Seton Hall University Library.

    “Publication profiles – presenting research in a new way”: Urban Andersson and Stina Johansson presented the Chalmers University (Gothenburg, Sweden) Publication Profiles Platform, in which all kinds of information related to Chalmers University researchers and publications are linked together. The main objective is to increase the visibility of Chalmers University research. A good example of how university libraries should take care of their own research and publications domain. A very interesting visualisation feature was shown: Chalmers Geography, or geographical relations between researchers and projects on Google Maps. A question I should have asked (but didn’t) is: how does this project relate to the VIVO project?

    In my own presentation “Primo at the University of Amsterdam – Technology vs Real Life” I tried to show the discrepancies between the theoretically unlimited possibilities of the technology used in library discovery layers and the limitations in the actual implementations of these tools, focusing on content, indexing and user interface configuration. One of my conclusions was already expressed earlier by Jens Vigen: “A successful digital library: one size does not fit all.”

    Of the other parallel track sessions I heard good reports about Andrew Whitworth’s “The triadic model: A holistic view of how digital and information literacy must support each other” and Shun Nagaya and Keizo Itabashi’s “Covo.js: A JavaScript Library to Utilize Subject Headings and Thesauri on the Web”. This doesn’t mean that the other talks were bad; I just didn’t manage to talk to people about them. One presentation worth mentioning is Krista Godfrey’s “The QR Question: QR Codes in Academic Libraries”, because it featured QR cows and my own photo of the University of Amsterdam Library’s QR cards.

    Let’s not forget Rune Martin Andersen’s talk about Bartebuss (Moustache Bus), the Trondheim public transport open data app project. This is yet more proof that public transport apps are the killer apps of open data.

    Trondheim moustache men

    Last but not least: the food (delicious and lots of it), the photos, Patrick Hochstenbach’s doodles and the music: the excursion to Rockheim Museum, the conference dinner entertainment by Skrømt, and the afterparty at Ramp bar, resulting in an interesting playlist afterwards.


  • Mobile app or mobile web?

    Posted on February 21st, 2010 Lukas Koster 11 comments

    Technology, users and business models

    This is the second post in a series of three

    [1. Mainframe to mobile – 2. Mobile app or mobile web? – 3. Mobile library services]

    © turkeychik

    Mobile access to information on the internet is the latest step in the development of information systems technology, as described in the previous post in this series. The two main features that distinguish mobile devices from other devices are:

    • Access to the web literally any time, anywhere
    • Location awareness using GPS or the mobile network

    Let’s focus on web access first. There are two main ways in which information providers can offer access to their data: through a mobile web browser or through apps.

    The easiest way to provide mobile access is: do nothing. Users of mobile internet devices can simply visit all existing websites with their mobile browser. However, in doing so they will experience a number of problems: performance is slow, pages are too large, navigation is difficult, certain parts of websites don’t work. These problems are caused by the very physical characteristics of mobile technology that make mobile internet access possible: the small size of devices and displays, the wireless network, the limited features of dedicated mobile operating systems and browsers.
    Fortunately, technological development is an interactive, reciprocal, cyclic process. Technology continuously needs to find solutions to problems that were caused by new uses of existing technology.

    Dumbed Down

    Many organisations have solved this problem by creating separate “dumbed down” mobile versions of their websites, containing mainly text-only pages and textual links to their most important services and information. In the case of libraries, for instance, “locations and addresses”, “opening hours”, etc. See this list of examples (with thanks to Aaron Tay). Another example is LibraryThing Mobile, which also has a catalog search option. In these cases you have to manually point your browser to the dedicated mobile URL, unless the webserver is configured to automatically recognise mobile browsers and redirect them to the mobile site.
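    Detection can be as simple as testing the browser’s user agent string. A minimal client-side sketch (in practice the server more commonly inspects the User-Agent header before sending any page); the m.example.org address is hypothetical:

    ```javascript
    // Redirect recognised mobile browsers to the mobile version of the site
    if (/iPhone|iPod|Android|BlackBerry|Opera Mini/i.test(navigator.userAgent)) {
      window.location.replace("http://m.example.org" + window.location.pathname);
    }
    ```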

    Of course this is not the optimal solution, for two reasons:

    • On the front end: as an information provider you are completely ignoring all graphical, dynamic, interactive and web 2.0 functionality on the end user side. This means effectively going back to the early days of the world wide web of static text pages.
    • On the back end: you are duplicating system and content administration. In most cases it will come down to manually creating and editing HTML pages, because most website content management systems do not offer manual or automatic editing of pages for mobile access. Some systems offer automatic recognition of mobile browsers and display content in the appropriate format, like the WordPress plugin “WordPress Mobile Edition”, which automatically shows a list of posts if a mobile browser is detected. This is what happens on this blog.

    SCCL App

    Because of this situation we are witnessing a re-enactment of the client-server alternative to static HTML that I described previously: mobile apps! “Apps” is short for “applications”; apparently everything needs to be short in the mobile online web 2.0 age. Apps are installed on mobile devices, they run locally making use of the hardware, operating system and user-friendly interface of the device, and they only connect to the internet for retrieving data from a database system in the cloud (on a remote server).
    An obvious disadvantage of this solution is that you have to multiply development and maintenance efforts in order to support all mobile platforms that your customers are using, or just support the most used platform (iPhone) and ignore the rest of your end users. Alternatively you can support one mobile platform with an app, and the rest with a mobile web site. Organisations have the choice of developing apps themselves from scratch, or using one of the commercial parties that offer library apps, such as Boopsie, Blackboard or the recently announced LibraryThing Anywhere, which is meant to offer both mobile web and apps for iPhone, Blackberry and Android.


    An alternative solution to the client-server and “dumbed down” models would be to use the new HTML5 and CSS3 options to create websites that can easily be handled by all PC and mobile web browsers alike. HTML5 has geolocation options, so browsers are made location aware this way too. The iWebKit framework is a free and easy package to create web apps compatible with all mobile platforms. See this demo on PC, iPhone, Android, etc.
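    As a hedged sketch, this is what asking a location-aware browser for a position looks like with the HTML5 geolocation API; what you then do with the coordinates (showing the nearest library branch, say) is up to the application.

    ```javascript
    if (navigator.geolocation) {
      navigator.geolocation.getCurrentPosition(
        function (position) {
          // e.g. look up the nearest branch with these coordinates
          console.log("You are at " + position.coords.latitude + ", " +
                      position.coords.longitude);
        },
        function (error) {
          console.log("No location available: " + error.message);
        }
      );
    }
    ```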
    Some say that HTML5/CSS3 will make apps disappear, but I suspect performance may still be a problem, due to slow connections. But it’s not only a technology issue. It’s also a matter of business models, as Owen Stephens and Till Kinstler pointed out.

    Apps can be distributed for free by organisations that want to draw traffic to their own data, ignoring the open web. This method fits their classic business model, as Till remarked, mentioning the newspaper business as an example.
    But there is also another side to this: apps can be created by anybody, making use of APIs to online systems and databases, and shared with others for free or for a small fee, as is the case with the iPhone App Store, the Android Market, the Nokia Ovi Store, or the newly announced Wholesale Applications Community (WAC). This model will never be possible with web based apps (like HTML5), because nobody has access to a system’s web server other than the system administrators. It is also much too complicated for developers and consumers of apps to host web apps on a server that mobile device users can connect to.
    And there is more: independent developers are more likely to look beyond the boundaries of the classic model of giving access to your own data only. Third party apps have the opportunity to connect data from a number of data sources in the cloud in order to satisfy mobile user needs better. To take the newspaper business example, I mentioned this in my post “Mobile reading”: general news apps vs dedicated newspaper apps. The rise of the open linked data movement will only boost the development and use of the mobile client-server model.

    In my view there will be a hybrid situation: HTML5/CSS3 based web apps and local mobile apps will coexist, depending on developer, audience, and objectives.

    What services library mobile apps should offer, including location awareness and linking data, is the topic of another post.


  • Mainframe to mobile

    Posted on February 16th, 2010 Lukas Koster 11 comments

    The connection between information technology and library information systems

    This is the first post in a series of three

    [1. Mainframe to mobile – 2. Mobile app or mobile web? – 3. Mobile library services]

    The functions, services and audience of library information systems, as is the case with all information systems, have always been dependent on and determined by the existing level of information technology. Mobile devices are the latest step in this development.

    © sainz

    In the beginning there was a computer, a mainframe. The only way to communicate with it was to feed it punchcards with holes that represented characters.

    © Mirandala

    If you made a typo (puncho?), you were not informed until a day later when you collected the printout, and you could start again. System and data files could be stored externally on large tape reels or small tape cassettes, identical to music tapes. Tapes were also used for sharing and copying data between systems by means of physical transportation.

    © ajmexico

    Suddenly there was a human-operable terminal, consisting of a monitor and keyboard, connected to the central computer. Now you could type in your code and save it as a file on the remote server (no local processing or storage at all). If you were lucky you had a full-screen editor; if not, there was the line editor. No graphics. Output and errors were shown on screen almost immediately, depending on the capacity of the CPU (central processing unit) and the number of other batch jobs in the queue. The computer was a multi-user time-sharing device, a bit like the “cloud”, but every computer was a little cloud of its own.
    There was no email. There were no end users other than systems administrators, programmers and some staff. Communication with customers was carried out by sending them printouts on paper by snail mail.

    I guess this was the first time that some libraries, probably mainly in academic and scientific institutions, started creating digital catalogs, for staff use only of course.

    © n.kahlua72

    © RaeA

    Then came the PC (Personal Computer). Monitor and keyboard were now connected to the computer (or system unit) on your desk. You had the thing entirely to yourself! Input and output consisted of lines of text only, in one colour (green or white on black), and still no graphics. Files could be stored on floppy disks, 5¼-inch magnetic things that you could twist and bend, but if you did that you lost your data. There was no internal storage. File sharing was accomplished by moving the floppy from one PC to another and/or copying files from one floppy to another (on the same floppy drive).

    © suburbanslice

    Later we got smaller disks, 3½-inch, in protective cases. The PC was mainly used for early word processing (WordStar, WordPerfect) and games. Finally there was a hard disk (as opposed to “floppy” disk) inside the PC system unit, which held the operating system (mainly MS-DOS), and on which you could store your files, which became larger. Time for stand-alone database applications (dBase).

    Client server GUI

    Then there was Windows, a mouse, and graphics. And of course the Internet! You could connect your PC to the Internet with a modem that occupied your telephone line and made phone calls impossible during your online session. At first there was Gopher, a kind of text-based web.
    Then came the World Wide Web (web 0.0), consisting of static web pages with links to other static web pages that you could read on your PC. Not suitable for interactive systems. Libraries could publish addresses and opening hours.
    But fortunately we got client-server architecture, combining the best of both worlds. Powerful servers were good at processing, storing and sharing data. PCs were good at presenting and collecting data in a “user friendly” graphical user interface (GUI), making use of local programming and scripting languages. So you had to install an application on the local PC, which then connected to the remote server database engine. The only bad thing was that the application was tied to the specific PC, with local Windows configuration settings. And it was not possible to move the thing around.

    Now we had multi-user digital catalogs with a shared central database and remote access points with the client application installed, available to staff and customers.

    Luckily, dynamic creation of HTML pages came along, so the client part of client-server applications could move to the web as well. With web applications we could use the same applications anywhere, on any computer linked to the world wide web. You only needed a browser to display the server-side pages on the local PC.

    Now everybody could browse through the library catalog any time, anywhere (where there was a computer with an internet connection and a web browser). The library OPAC (Online Public Access Catalog) was born.

    Web OPAC

    The only disadvantage was that every page change had to be generated by the server again, so performance was not optimal.
    But that changed with browser-based scripting technology like JavaScript, AJAX, Flash, etc. Bits of the application are sent to the local browser on the PC at runtime, to be executed there. So actually this is client-server “on the fly”, without the need to install a specific application locally.
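    A minimal sketch of that pattern: the page stays where it is, while a bit of JavaScript fetches just the data it needs from the server and updates part of the page. The /opac/search URL and the results element are hypothetical.

    ```javascript
    function search(term) {
      var xhr = new XMLHttpRequest();
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          // update part of the page without a full reload
          document.getElementById("results").innerHTML = xhr.responseText;
        }
      };
      xhr.open("GET", "/opac/search?q=" + encodeURIComponent(term), true);
      xhr.send();
    }
    ```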

    © nxtiak

    In the meantime the portable PC appeared: system unit, monitor and keyboard all in one. At first you needed some physical power to move the thing around, but later we got laptops, notebooks and netbooks, getting smaller, lighter and more powerful all the time. And wifi of course: no need to plug the device into the physical network anymore. And USB sticks.

    Access to OPAC and online databases became available anytime, anywhere (where you carried your computer).

    The latest development of course is the rise of mobile phones with wireless web access, or rather mobile web devices which can also be used for making phone calls. Mobile devices are small and light enough to carry with you in your pocket all the time. It’s a tiny PC.

    Finally you can access library related information literally any time, anywhere, even in your bedroom and bathroom.

    Mobile library app

    It’s getting boring, but yes, there is a drawback. Web applications are not really suited to mobile browsers: pages are too large, browser technology is not fully compatible, connections are too slow.

    Available options are:

    • creating a special “dumbed down” version of a website, for use on mobile devices only: smaller, text-based pages with links
    • creating a new HTML5/CSS3 website, targeted at mobile devices and “traditional” PCs alike
    • creating “apps”, to be installed on mobile devices, which connect to a database system in the cloud; basically this is the old client-server model all over again

    A comparison of mobile apps and mobile web architecture is the topic of another post.
