Library2.0 and beyond
  • Mainframe to mobile

    Posted on February 16th, 2010 Lukas Koster 11 comments

    The connection between information technology and library information systems

    This is the first post in a series of three

    [1. Mainframe to mobile - 2. Mobile app or mobile web? - 3. Mobile library services]

    The functions, services and audience of library information systems, as is the case with all information systems, have always been dependent on and determined by the existing level of information technology. Mobile devices are the latest step in this development.

[Image © sainz]

    In the beginning there was a computer, a mainframe. The only way to communicate with it was to feed it punchcards with holes that represented characters.

[Image © Mirandala]

    If you made a typo (puncho?), you were not informed until a day later when you collected the printout, and you could start again. System and data files could be stored externally on large tape reels or small tape cassettes, identical to music tapes. Tapes were also used for sharing and copying data between systems by means of physical transportation.

[Image © ajmexico]

Suddenly there was a human-operable terminal, consisting of a monitor and keyboard, connected to the central computer. Now you could type in your code and save it as a file on the remote server (no local processing or storage at all). If you were lucky you had a full-screen editor; if not, there was the line editor. No graphics. Output and errors were shown on screen almost immediately, depending on the capacity of the CPU (central processing unit) and the number of other batch jobs in the queue. The computer was a multi-user time-sharing device, a bit like the “cloud”, but every computer was a little cloud of its own.
    There was no email. There were no end users other than systems administrators, programmers and some staff. Communication with customers was carried out by sending them printouts on paper by snail mail.

    I guess this was the first time that some libraries, probably mainly in academic and scientific institutions, started creating digital catalogs, for staff use only of course.

[Images © n.kahlua72 and © RaeA]

Then came the PC (Personal Computer). Monitor and keyboard were now connected to the system unit on your desk. You had the thing entirely to yourself! Input and output consisted of lines of text only, in one colour (green or white on black), and still no graphics. Files could be stored on floppy disks, 5¼-inch magnetic things that you could twist and bend, although if you did you lost your data. There was no internal storage. File sharing was accomplished by moving the floppy from one PC to another and/or copying files from one floppy to another (using the same floppy drive).

[Image © suburbanslice]

Later we got smaller disks, 3½-inch, in protective cases. The PC was mainly used for early word processing (WordStar, WordPerfect) and games. Finally there was a hard disk (as opposed to the “floppy” disk) inside the PC system unit, which held the operating system (mainly MS-DOS) and on which you could store your ever larger files. Time for stand-alone database applications (dBase).

Client-server GUI

Then there was Windows, a mouse, and graphics. And of course the Internet! You could connect your PC to the Internet with a modem that occupied your telephone line and made phone calls impossible during your online session. At first there was Gopher, a kind of text-based web.
Then came the World Wide Web (web 0.0), consisting of static web pages with links to other static web pages that you could read on your PC. Not suitable for interactive systems. Libraries could publish addresses and opening hours.
But fortunately we got client-server architecture, combining the best of both worlds. Powerful servers were good at processing, storing and sharing data. PCs were good at presenting and collecting data in a “user friendly” graphical user interface (GUI), making use of local programming and scripting languages. So you had to install an application on the local PC, which then connected to the remote server’s database engine. The only bad thing was that the application was tied to a specific PC, with its local Windows configuration settings, and could not be moved around.

    Now we had multi-user digital catalogs with a shared central database and remote access points with the client application installed, available to staff and customers.

Luckily dynamic creation of HTML pages came along, so we were able to move the client part of client-server applications to the web as well. With web applications we could use the same application anywhere, on any computer linked to the world wide web. You only needed a browser to display the server-side pages on the local PC.
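As a minimal sketch (in modern JavaScript, long after the fact, and with an invented two-record catalog), this is all that dynamic page creation amounts to: the server assembles HTML from data at request time.

```javascript
// Minimal sketch of server-side HTML generation with Node.js.
// The catalog records are invented for illustration.
const http = require('http');

const catalog = [
  { title: 'Moby-Dick', author: 'Herman Melville' },
  { title: 'Ulysses', author: 'James Joyce' },
];

http.createServer((req, res) => {
  // The page is built from data on every request, on the server.
  const rows = catalog
    .map((rec) => `<li>${rec.title} by ${rec.author}</li>`)
    .join('');
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(`<html><body><h1>Catalog</h1><ul>${rows}</ul></body></html>`);
}).listen(8080);
```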

    Now everybody could browse through the library catalog any time, anywhere (where there was a computer with an internet connection and a web browser). The library OPAC (Online Public Access Catalog) was born.

    Web OPAC

The only disadvantage was that every page change had to be generated by the server again, so performance was not optimal.
But that changed with browser-based scripting technologies like JavaScript, AJAX and Flash. Bits of the application are sent to the local browser at runtime, to be executed there. So this is actually client-server “on the fly”, without the need to install a specific application locally.
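A rough sketch of what that looks like in the browser: instead of reloading the whole page, a script fetches just the data and re-renders one element. The /opac/search URL and the results element are hypothetical names.

```javascript
// Classic AJAX: update part of the page without a full reload.
// '/opac/search' and 'results' are hypothetical names.
function search(term) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', '/opac/search?q=' + encodeURIComponent(term));
  xhr.onload = function () {
    // Only the results area changes; the rest of the page stays put.
    document.getElementById('results').innerHTML = xhr.responseText;
  };
  xhr.send();
}
```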

[Image © nxtiak]

In the meantime the portable PC appeared: system unit, monitor and keyboard all in one. At first you needed some physical power to move the thing around, but later we got laptops, notebooks and netbooks, getting smaller, lighter and more powerful all the time. And wifi of course: no need to plug the device into the physical network anymore. And USB sticks.

    Access to OPAC and online databases became available anytime, anywhere (where you carried your computer).

The latest development, of course, is the rise of mobile phones with wireless web access, or rather mobile web devices that can also be used for making phone calls. Mobile devices are small and light enough to carry with you in your pocket all the time. Each one is, in effect, a tiny PC.

    Finally you can access library related information literally any time, anywhere, even in your bedroom and bathroom.

    Mobile library app

It’s getting boring, but yes, there is a drawback. Web applications are not really suited to mobile browsers: pages are too large, browser technology is not fully compatible, and connections are too slow.

    Available options are:

• creating a special “dumbed down” version of a website for use on mobile devices only: smaller, text-based pages with links (see the sketch below)
• creating a new HTML5/CSS3 website, targeted at mobile devices and “traditional” PCs alike
• creating “apps” to be installed on mobile devices, connecting to a database system in the cloud; basically this is the old client-server model all over again
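As a rough illustration of the first option: a site can sniff the user agent and redirect mobile visitors to a stripped-down version. The m.example.org host is made up, and user-agent sniffing is notoriously brittle; this is a sketch, not a recommendation.

```javascript
// Crude user-agent sniffing to send mobile visitors to a separate,
// lightweight mobile-only site. 'm.example.org' is a hypothetical host.
if (/Android|iPhone|iPod|BlackBerry|Opera Mini/i.test(navigator.userAgent)) {
  window.location.replace('http://m.example.org' + window.location.pathname);
}
```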

    A comparison of mobile apps and mobile web architecture is the topic of another post.


  • Mobile reading

    Posted on January 22nd, 2010 Lukas Koster 6 comments

    New models, new formats

[Image © Lukas Koster]

Recently I have been experimenting a bit with reading newspapers on my mobile phone (a G1 Android device), or maybe I should say “reading news on my mobile”. I looked at two Dutch newspapers that adopt two completely different approaches.

“NRC Handelsblad” publishes its daily print newspaper as a daily “e-paper” in PDF, Mobi and ePub formats, to be downloaded every day to the platform of your choice. In order to read the e-paper you need a physical device plus software (mobile phone, PC, e-reader, etc.) that can handle one of the available formats. On my G1 I use the Aldiko e-reader app for Android with the ePub format. The e-paper is treated as an e-book file, with touch-screen operation for browsing tables of contents, paging through chapters or articles, zooming, etc. Access to the e-paper files is on a subscription basis.

“Het Parool”, on the other hand, offers a free app, to be downloaded from the Android Market, that serves as a front end to all recent articles available from their news server on the web. There is no need for a daily download of a file in a specific format that has to be supported by the physical platform of your choice. There is also an iPhone app. The app and access to the news articles are free of charge.

[Image © Lukas Koster]

Besides the difference in access (free vs paid), the most important contrast between these two mobile newspapers is the form in which the printed news is transferred to the digital and mobile environment. “NRC Handelsblad” takes the physical form the newspaper has had since its origin in the 17th century, dictated by physical, logistical and economic conditions, and transfers it one-to-one to the digital world: the e-paper is still one big monolithic bundle of articles that can’t be retrieved individually, completely ignoring the fact that the centuries-old limitations no longer apply. It is basically exactly the same as most manifestations of e-books.

“Het Parool” does completely the opposite. It treats individual news articles as units of content in their own right, “stories” as I call them in my post “Is an e-book a book?”. And this is how it should be in the digital mobile world. It is similar to the way e-journals already offer direct access to individual articles.
Readers should be able to compile their own selection of “stories” to read in a specific, virtual, on-the-fly bundle, using the front end of their choice.
However, the “Parool” app functions as a predefined filter: it presents the reader with the most recent articles (24 hours at most) from its own source of news. Of course this is fine as long as readers choose to use the “Parool” app, but they may also want to read news stories from different sources. This could be achieved with a different mobile, PC or web application that gathers content from a variety of sources, as sketched below.
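A sketch of such an aggregator, assuming each source exposes an RSS feed; the feed URLs are invented, and cross-origin restrictions are ignored for brevity.

```javascript
// Gather stories from several news feeds and merge them by date.
// The feed URLs are hypothetical; a real browser app would need a
// proxy to get around cross-origin restrictions.
const feeds = [
  'http://www.example.org/parool.rss',
  'http://www.example.org/nrc.rss',
];

async function aggregate() {
  const items = [];
  for (const url of feeds) {
    const xml = await fetch(url).then((r) => r.text());
    const doc = new DOMParser().parseFromString(xml, 'application/xml');
    for (const item of doc.querySelectorAll('item')) {
      items.push({
        title: item.querySelector('title')?.textContent,
        date: new Date(item.querySelector('pubDate')?.textContent || 0),
        source: url,
      });
    }
  }
  // One virtual 'newspaper': newest story first, regardless of source.
  return items.sort((a, b) => b.date - a.date);
}
```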

Another drawback of the “Parool” implementation is that it does not offer a “save” option. There is no way to read old articles, other than to go to the official newspaper website, either through mobile browsing or by using a PC web browser. The “NRC Handelsblad” implementation, on the other hand, does offer this option, because it is based on a download model to begin with.
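Such a save option is easy to sketch on the client side, for instance with the browser’s localStorage; the article object shape here is invented.

```javascript
// Keep chosen articles on the device so they outlive the app's
// 24-hour window. The article object shape is hypothetical.
function saveArticle(article) {
  const saved = JSON.parse(localStorage.getItem('savedArticles') || '[]');
  saved.push({ id: article.id, title: article.title, body: article.body });
  localStorage.setItem('savedArticles', JSON.stringify(saved));
}

function listSavedArticles() {
  return JSON.parse(localStorage.getItem('savedArticles') || '[]');
}
```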

This brings me to the matter of mobile web browsing. Reading and navigating a web page designed for the PC screen on a mobile device is annoying to say the least, not to mention the time it takes to load complete web pages into the mobile browser. Common practice is to create a simplified version of full-fledged web pages for mobile use only. Of course this means doubling the website maintenance effort.
An alternative could be the adoption of HTML5 and CSS3, as was stated at a Top Tech Trends Panel session at ALA Midwinter 2010, where a university library official said: “2010 is the year that the app dies”, because “developers can leverage a single well-designed service to serve both browser-based and mobile users”. But this view completely misses the point: “Apps are not about technology, they are about a business model”, as Owen Stephens pointed out. This business model implies the separation of content and presentation in a much broader sense than just a database back end behind a website front end. That was an innovative concept until a couple of years ago.

As I briefly described above, we need units of content that are accessible to all kinds of platforms and applications through universal APIs. This model applies not only to reading texts, but also to finding them. Libraries especially should be aware of that.
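In code, that separation comes down to every front end consuming the same content API. A minimal sketch, assuming a hypothetical JSON endpoint that serves one story per URL:

```javascript
// Any front end (mobile app, website, e-reader) can render the same
// story from the same content API. The endpoint is hypothetical.
async function renderStory(id, element) {
  const story = await fetch(`http://api.example.org/stories/${id}`)
    .then((r) => r.json());
  element.innerHTML = `<h2>${story.title}</h2><p>${story.body}</p>`;
}
```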

Although the ALA Top Tech Trends Panel stated that libraries’ focus should be on content rather than hardware, they did not touch upon the changing concept of what books are in the e-book era, as again Owen Stephens pointed out. New models and formats will have all kinds of consequences for the way we handle information. For instance: pages. A PDF file, which is a one-to-one translation of the print unit to a digital unit, as I explained, still has fixed pages and page numbers. An ePub file, however, has a flexible format that allows “pages” to be automatically adapted to the size of the device’s screen (thanks to @rsnijders and @Wowter for discussing this). There are no fixed pages or page numbers anymore. HTML pages containing full articles don’t have page numbers either, by the way. This will change the way we refer to texts online, without page numbers, which is one of the subjects of the Telstar project, again with Owen Stephens involved (watch that guy).

The flexible page is another reason to have a critical look at MARC. There is no use anymore for fields like 300 $a, “Extent” (number of physical pages, etc.), or 773 $g, “Related parts” (“Vol. 2, no. 2 (Feb. 1976), p. 195-230”).

    The inevitable conclusion of all this is that all innovative developments on the end user interface presentation front end need to be supported by corresponding developments on the content back end, and vice versa.


  • Library Systems and the world of hardware

    Posted on October 8th, 2008 Lukas Koster No comments

The project to implement Aleph as the new ILS for the Library of the University of Amsterdam started last week (October 2) with the official kick-off meeting. The Ex Libris project plan was presented to the members of the project team, bottlenecks were identified, and many adjustments were made to the planning in order to carry out more tasks simultaneously and thus finish earlier.

The first steps are the installation of Aleph 18 and on-site training for everyone involved, using the new locally installed Aleph 18 system.

    "Systems dependent on hardware and people"

But of course, before anything can start, we need the hardware! The central ICT department of the University of Amsterdam (not part of the library) is responsible for configuring and providing the Aleph production server according to the official Ex Libris “Requirements for ALEPH 500 Installation”. And as always there is confusion about what the provider actually means, and as always there are conflicts between the provider’s requirements and the ICT department’s security policy.

    As head of the Library Systems Department of the library and as coordinator of the project’s System Administration team, I have been acting this week as an intermediary between our software and hardware providers, passing information about volumes, partitions, database and scratch files, root access, IP addresses and swap areas.

This makes you realise again that all these new web 2.0 systems and techniques are in the end completely dependent on the availability of correctly configured and constantly monitored machines, cables and electricity, and not least on all the technicians who know all about hardware and networks.
