Permalink: https://purl.org/cpl/983
The connection between information technology and library information systems
This is the first post in a series of three
[1. Mainframe to mobile – 2. Mobile app or mobile web? – 3. Mobile library services]
The functions, services and audience of library information systems, like those of all information systems, have always been dependent on, and determined by, the state of information technology at the time. Mobile devices are the latest step in this development.
In the beginning there was a computer, a mainframe. The only way to communicate with it was to feed it punchcards with holes that represented characters.
If you made a typo (puncho?), you were not informed until a day later, when you collected the printout, and you could start again. System and data files could be stored externally on large tape reels or small tape cassettes, similar to audio cassettes. Tapes were also used for sharing and copying data between systems by means of physical transportation.
Suddenly there was a human-operable terminal, consisting of a monitor and keyboard, connected to the central computer. Now you could type in your code and save it as a file on the remote server (no local processing or storage at all). If you were lucky you had a full-screen editor; if not, there was the line editor. No graphics. Output and errors were shown on screen almost immediately, depending on the capacity of the CPU (central processing unit) and the number of other batch jobs in the queue. The computer was a multi-user time-sharing device, a bit like the “cloud”, but every computer was a little cloud of its own.
There was no email. There were no end users other than systems administrators, programmers and some staff. Communication with customers was carried out by sending them printouts on paper by snail mail.
I guess this was the first time that some libraries, probably mainly in academic and scientific institutions, started creating digital catalogs, for staff use only, of course.
Then came the PC (Personal Computer). Monitor and keyboard were now connected to the computer (or system unit) on your desk. You had the thing entirely to yourself! Input and output consisted of lines of text only, in one colour (green or white on black), and still no graphics. Files could be stored on floppy disks, 5¼-inch magnetic things that you could twist and bend, but if you did that you lost your data. There was no internal storage. File sharing was accomplished by moving the floppy from one PC to another and/or copying files from one floppy to another (on the same floppy drive).
Later we got smaller disks, 3½-inch, in protective cases. The PC was mainly used for early word processing (WordStar, WordPerfect) and games. Finally there was a hard disk (as opposed to “floppy” disk) inside the PC system unit, which held the operating system (mainly MS-DOS), and on which you could store your files, which became larger. Time for stand-alone database applications (dBase).
Then there was Windows, a mouse, and graphics. And of course the Internet! You could connect your PC to the Internet with a modem that occupied your telephone line and made phone calls impossible during your online session. At first there was Gopher, a kind of text-based web.
Then came the World Wide Web (web 0.0), consisting of static web pages with links to other static web pages that you could read on your PC. Not suitable for interactive systems. Libraries could publish addresses and opening hours.
But fortunately we got client-server architecture, combining the best of both worlds. Powerful servers were good at processing, storing and sharing data. PCs were good at presenting and collecting data in a “user friendly” graphical user interface (GUI), making use of local programming and scripting languages. So you had to install an application on the local PC, which then connected to the remote server database engine. The only bad thing was that the application was tied to that specific PC, with its local Windows configuration settings, and could not be moved around.
Now we had multi-user digital catalogs with a shared central database and remote access points with the client application installed, available to staff and customers.
Luckily, dynamic creation of HTML pages came along, so we were able to move the client part of client-server applications to the web as well. With web applications we were able to use the same applications anywhere, on any computer linked to the World Wide Web. You only needed a browser to display the server-side pages on the local PC.
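To make this concrete, here is a minimal sketch of server-side page generation, written in modern TypeScript on Node’s built-in http module for readability, although the technique itself dates from the CGI/Perl era. The catalog data and port number are invented for illustration; the point is that the server assembles the complete HTML for every request and the browser only displays it.

```typescript
// Minimal sketch of dynamic server-side HTML generation.
// The catalog data and port are hypothetical illustrations.
import * as http from "http";

const catalog = [
  { title: "Moby-Dick", author: "Herman Melville" },
  { title: "Ulysses", author: "James Joyce" },
];

const server = http.createServer((req, res) => {
  // The page is built on the server for every request;
  // the browser receives and renders the finished HTML.
  const rows = catalog
    .map((b) => `<li>${b.title} (${b.author})</li>`)
    .join("");
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(`<html><body><h1>Catalog</h1><ul>${rows}</ul></body></html>`);
});

server.listen(8080);
```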
Now everybody could browse through the library catalog any time, anywhere (where there was a computer with an internet connection and a web browser). The library OPAC (Online Public Access Catalog) was born.
The only disadvantage was that every page change had to be regenerated by the server, so performance was not optimal.
But that changed with browser-based scripting technology like JavaScript, AJAX, Flash, etc. Bits of the application are sent to the local browser on the PC at runtime, to be executed there. So actually this is client-server “on the fly”, without the need to install a specific application locally.
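In browser terms the pattern looks roughly like this. The sketch uses today’s fetch API, the modern successor of the XMLHttpRequest object behind the original AJAX; the /api/search endpoint and the “results” element are assumptions for illustration.

```typescript
// Sketch of "client-server on the fly": only the data is fetched,
// the page itself is not reloaded. The /api/search endpoint and
// the "results" element are hypothetical.
async function searchCatalog(query: string): Promise<void> {
  const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
  const results: { title: string }[] = await response.json();

  // Update a fragment of the already-loaded page in place.
  const list = document.getElementById("results");
  if (list) {
    list.innerHTML = results.map((r) => `<li>${r.title}</li>`).join("");
  }
}
```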
In the meantime the portable PC had appeared: system unit, monitor and keyboard all in one. At first you needed some physical power to move the thing around, but later we got laptops, notebooks and netbooks, getting smaller, lighter and more powerful all the time. And wifi of course, so there was no need to plug the device into the physical network anymore. And USB sticks.
Access to OPAC and online databases became available anytime, anywhere (where you carried your computer).
The latest development of course is the rise of mobile phones with wireless web access, or rather mobile web devices which can also be used for making phone calls. Mobile devices are small and light enough to carry with you in your pocket all the time. It’s a tiny PC.
Finally you can access library related information literally any time, anywhere, even in your bedroom and bathroom.
It’s getting boring, but yes, there is a drawback. Web applications are not really suited to mobile browsers: pages are too large, browser technology is not fully compatible, connections are too slow.
Available options are:
- creating a special “dumbed down” version of a website for use on mobile devices only: smaller text-based pages with links (see the sketch after this list)
- creating a new HTML5/CSS3 website, targeted at mobile devices and “traditional” PC’s alike
- creating “apps” that are installed on mobile devices and connect to a database system in the cloud; basically this is the old client-server model all over again.
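As an illustration of the first option, a minimal sketch of the usual user-agent detection, again in TypeScript on Node. The /m/ path and the browser patterns are assumptions, and real pattern lists are much longer.

```typescript
// Sketch of option 1: redirect mobile browsers to a separate,
// lightweight version of the site. The /m/ path and the user-agent
// patterns are assumptions; production lists are far more extensive.
import * as http from "http";

const MOBILE_PATTERN = /Mobile|Android|iPhone|BlackBerry/i;

const server = http.createServer((req, res) => {
  const userAgent = req.headers["user-agent"] ?? "";
  if (MOBILE_PATTERN.test(userAgent) && !req.url?.startsWith("/m/")) {
    // Send mobile devices to the lightweight version, ideally
    // with a link back to the full site as an override.
    res.writeHead(302, { Location: "/m/" });
    res.end();
    return;
  }
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<html><body>Full site</body></html>");
});

server.listen(8080);
```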
A comparison of mobile apps and mobile web architecture is the topic of another post.
11 thoughts on “Mainframe to mobile”
Thanks for the comprehensive overview! I’d add the invention of stable URLs, which most library system vendors failed to catch up with for years.
Hmmm, I feel you look at all the technology changes you describe with a static “distribution” model in mind: Disseminate data/content from a central “production/storage point” to distributed consumers. But with the hyperlinked Web(!) there are alternative distribution models…
Content providers (newspapers, publishers, music industry…) that base their business on that classic distribution model still struggle hard with the Web. And libraries do, too (I think, at least part of the “library 2.0” hype is related to that).
As you write, with those mobile apps you have that pre-Web client-server model again: one central content repository you access through a one-way distribution channel, sometimes offering a limited, controlled back channel, sometimes extended by controlled linking to other resources.
I think that’s why classic content providers (like newspapers) love apps, isn’t it? It fits their classic business model. No need to change. I totally agree with @ostephens, who wrote “Apps are not about technology, they are about a business model” in this tweet: http://twitter.com/ostephens/status/7982515485. It’s like AOL and the Web, different business models. Who is still there?
How will libraries deal with that?
I am curious whether you will look at these aspects of apps in one of the following parts of this article series.
Till, thanks. And yes: connecting mobile apps and distributed data (linked data/semantic web) will be very prominent in the (planned) third post about Mobile Services. It’s good to get confirmation for that approach!
I already mentioned this model in a previous post https://commonplace.net/2010/01/mobile-reading/
Lukas
During the green-screen era, there was indeed email; that goes back to the 1970s. Also, our local library had the old green-screen computer terminals for client catalog access. It was a bit unusual, allowing the general public to touch a computing device, but it worked pretty well. The system worked well enough that it lasted well into the PC era, long after such terminals had become rare anywhere else.
I believe that there should be a different version for mobile phones. At least have users go to the mobile version initially, while giving them the option to override it and go to the full web version. From a web developer’s perspective it is hard to give the user a great experience with such a huge variance in screen size, hardware capability, plugin support, etc.
Supporting HTML5 alone is going to do nothing to help that.
Mobile apps are good, but for small local service/business websites you are not going to have an app. It is just not cost-effective, plus you have to worry about marketing your app as well as your site. You’d be spreading yourself too thin.