“The biggest game changer in education will never be a technology - it’s an educator who’s willing to be innovative.”
Friday, December 05, 2008
RSS MICRO - DEDICATED RSS FEED SEARCH ENGINE
Excerpted from...nmlis
Friday, October 17, 2008
Technological convergence: A viable option for libraries
Technological convergence is the tendency for different technological systems to evolve towards performing similar tasks.
Convergence can refer to previously separate technologies such as voice (and telephony features), data (and productivity applications) and video that now share resources and interact with each other, synergistically creating new efficiencies.
Convergence in this instance is defined as the interlinking of computing and other information technologies, media content and communication networks that have arisen as the result of the evolution and popularisation of the Internet as well as the activities, products and services that have emerged in the digital media space.
Many experts view this as simply being the tip of the iceberg, as all facets of institutional activity and social life such as business, government, art, journalism, health and education are increasingly being carried out in these digital media spaces across a growing network of ICT devices.
Also included in this topic is the basis of computer networks, wherein many different operating systems are able to communicate via different protocols. This could be a prelude to artificial intelligence networks on the Internet, eventually leading to a powerful superintelligence via a technological singularity.
Technological convergence can also refer to the phenomenon of a group of technologies developed for one use being utilized in many different contexts. This often happens to military technology, as well as to most types of machine tools and now silicon chips.
Messaging convergence
Combinational services are growing in prominence, chief among these being those services which integrate SMS with voice, such as voice SMS (voice instead of text – service providers include Bubble Motion and Kirusa) and SpinVox (voice to text). In addition, several operators have launched services that combine SMS with mobile instant messaging (MIM) and presence.
Text-to-landline services are also popular: subscribers can send text messages to any landline phone and are charged standard text-message rates. The service has done especially well in America, where it is difficult to differentiate fixed-line from mobile phone numbers.
Inbound SMS has also been converging to enable reception of different formats (SMS, voice, MMS, etc.). UK companies, including consumer goods companies and media giants, should soon be able to let consumers contact them via voice, SMS, MMS, IVR or video using just one five-digit number or long number. In April 2008, O2 UK launched voice-enabled shortcodes, adding voice functionality to the five-digit codes already used for SMS. Mobile messaging provider Tyntec also provides a similar service based on long numbers, converging text messages and voice calls under one number.
This type of convergence is particularly helpful for media companies, broadcasters, enterprises, call centres and help desks that need to develop a consistent contact strategy with the consumer. Because SMS is very popular today across all demographics, it has become relevant to include text messaging as a contact option for consumers. To avoid having multiple numbers (one for voice calls, another for SMS), a simple way is to merge the reception of both formats under one number. This means that a consumer can text or call +44 7624 805555 and be sure that, regardless of the format, the message will be received.
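The "one number, many formats" idea described above boils down to a dispatch step on the operator's side. The sketch below illustrates the principle only; the handler names and behaviour are invented for the example, not taken from any operator's actual platform.

```python
# A minimal sketch (hypothetical handler names) of messaging convergence:
# one inbound number accepts several formats and routes each to a handler,
# so the consumer never has to pick the right channel.

def handle_sms(payload):
    return f"SMS received: {payload}"

def handle_voice(payload):
    return f"Voice message queued for transcription: {payload}"

def handle_mms(payload):
    return f"MMS stored: {payload}"

HANDLERS = {"sms": handle_sms, "voice": handle_voice, "mms": handle_mms}

def receive(number, fmt, payload):
    """Accept any supported format on a single number (e.g. +44 7624 805555)."""
    handler = HANDLERS.get(fmt)
    if handler is None:
        raise ValueError(f"Unsupported format: {fmt}")
    return handler(payload)
```

The consumer-facing contact point stays a single number; only the routing table grows as new formats are added.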
OpenID- a shared identity service
Library Mashups: Web 2.0 tool for integrating contents and services of libraries
Bulu Maharana
Lecturer, P. G. Department of Library & Information Science, Sambalpur University (Orissa)
N. K. Sahu
Lecturer, P. G. Department of Library & Information Science, North Orissa University (Orissa)
Ms. Arundhati Deb
M. Phil Scholar, P. G. Department of Library & Information Science, Sambalpur University (Orissa)
Siba Bhue
Assistant Librarian, IMIS,BBSR
Abstract
Mashups are one of the many new phenomena of the Web 2.0 environment. They are largely experimental, but some are very useful, well designed and very popular. Google Maps is the most popular component of mashups; Amazon, Yahoo! Maps and the photo-sharing site Flickr are also sources for many of these sites. Libraries have been adapting well to emerging technologies, integrating content and services in order to provide innovative services to users. This paper defines the mashup and discusses its various aspects with specific reference to libraries.
Keywords: Web 2.0, Mashups, Google Maps, API, RSS, Screen Scraping
1. Introduction
Over the past several years, as the Web 2.0 movement has gathered critical mass, technological mashups have generated much of the attention, receiving lots of publicity and lots of programming effort. New mashups are created every day, ranging from the popular (1,412 Google Maps mashups) to the provocative (Yahoo Wheel of Food). Aaron Boodman, the 27-year-old Google Web developer, remarks: "The Web was originally designed to be mashed up. The technology is finally growing up and making it possible." [1]
2. Definition
The evolution of mashup technology is the next stage of Web 2.0. Mashup is a term originally used in pop music by artists and disc jockeys when two songs were remixed and played at the same time. Web experts borrowed the term to describe the merging of two or more software tools; the resulting new tool provides an enriched web experience for users.
Wikipedia defines a mashup as "a web application that combines data from more than one source into a single integrated tool" [2]. Many popular examples of Mashups make use of the Google Map service to provide a location display of data taken from another source.
Hong Chun (2006)[3] defines a mashup as "a web page or application that combines data from two or more external online sources. The external sources are typically other websites and their data may be obtained by the mashup developers in various ways, including but not limited to APIs, XML feeds*, and screen scraping§."
3. Technical Concept
As illustrated in a video clip on "What Is A Mashup?" [4], from a programmer's perspective a mashup is based on making use of APIs (Application Programming Interfaces). A key characteristic of Web 2.0 is the notion of "the network as the platform". APIs provided by Web-based services (services provided by companies such as Google and Yahoo) can similarly be used by programmers to build new services based on popular functions the companies may provide. Content used in mashups is typically sourced from a third party via a public interface or API (web services). Other methods of sourcing content for mashups include Web feeds (e.g. RSS or Atom) and screen scraping. Mashups should be differentiated from the simple embedding of data from another site to form compound documents. A site that allows a user to embed a YouTube video, for instance, is not a mashup site. As outlined above, the site should itself access third-party data using an API and process that data in some way to increase its value to the site's users. Many people are experimenting with mashups using the Amazon, eBay, Flickr, Google, Microsoft, Yahoo and YouTube APIs, which has led to the creation of mashup editors.
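The "process that data to increase its value" point can be shown in a few lines. In the toy sketch below, two simulated third-party lookups stand in for real API calls (the data, field names and function names are invented for illustration); the mashup's contribution is the join that neither source offers alone.

```python
# A toy illustration of the mashup idea: data from two (simulated) third-party
# APIs is joined to produce something neither source provides on its own.

def books_api():
    """Stands in for a bookseller API returning basic book records."""
    return [{"isbn": "0596007973", "title": "Head First Java"}]

def holdings_api(isbn):
    """Stands in for a library-catalog API returning who holds a given ISBN."""
    catalog = {"0596007973": ["Sambalpur University Library"]}
    return catalog.get(isbn, [])

def mashup():
    """Enrich bookseller records with library holdings: the added-value step."""
    results = []
    for book in books_api():
        results.append({**book, "held_at": holdings_api(book["isbn"])})
    return results
```

In a real mashup the two functions would be HTTP calls to public APIs or feed fetches; the joining-and-enriching step in `mashup()` is what distinguishes it from mere embedding.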
Fig-1: A Google Maps Mashup Showing Location and Data About UK Universities
4. Mashup Architecture
The general architecture of Mashup web applications is always composed of three levels or tiers:
i. The content provider: the source of the data. Data is made available via an API and different Web protocols such as RSS, REST and Web Services.
ii. The mashup site: the web application that provides the new service, using data sources it does not own.
iii. The client web browser: the user interface of the mashup. In a web application, content can be mashed by the client web browser using a client-side web language such as JavaScript.
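The three tiers above can be compressed into one small sketch. All names and data here are illustrative; in a real deployment tier i would be a remote RSS/REST endpoint and tier iii would run as JavaScript in the browser.

```python
# The three mashup tiers in miniature: provider, mashup site, client.

def content_provider():
    """Tier i: the data source, as it might arrive from an RSS/REST endpoint."""
    return [{"title": "New arrivals", "link": "http://example.org/new"}]

def mashup_site(feeds):
    """Tier ii: combines data it does not own into a new service."""
    return [f'{item["title"]} -> {item["link"]}' for item in feeds]

def client_render(lines):
    """Tier iii: the user-facing presentation (JavaScript's job in practice)."""
    return "\n".join(lines)

page = client_render(mashup_site(content_provider()))
```

Separating the tiers this way is what lets a mashup swap in a different content provider without touching the presentation layer.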
The mashup ecosystem is constituted of open data, an open set of services, small pieces loosely joined, and you.
Fig-2: Mashup Ecosystem
Fig: Mashup Architecture [Source: Dion Hinchcliffe’s Blog]
http://web2.socialcomputingmagazine.com/
5. Essential Features of Mashups
John Herren (2006)[5] has highlighted three basic characteristics of a mashup:
Combination: uses multiple data sources; joins across the dimensions of subject, time and place.
Visualization: stresses the visual presentation of data sources.
Aggregation: groups data, creates information from data, uncovers hidden aspects of data.
However, the essential features of Mashup can be summarised as follows:
A mashup is a website or application that uses content from more than one source to create a completely new service.
It uses a public interface, RSS feed, or API.
Its original use was in music: combining tracks from different sets and artists.
With a simple API, anyone can create one.
It combines content from two or more sites.
The current emphasis is on presentation: visual maps, simple two-dimensional maps.
Content structure and data raise issues of compatibility; every mashup is a unique job.
It is self-service: a variety of mashups can be embedded.
6. Mashup for Libraries
Mashups in a general sense have been going on in libraries for many years. In fact, the library world has been a leader in blending its programmes and services with the latest trends and technology developments. Different ideas and services are combined to reach new audiences or energize existing ones. In the process, new experiences are created and traditional services are revitalized.
In this perspective, Web mashups for today's libraries carry on a tradition of innovation started in the 1800s, when libraries moved away from the closed organizations that they were into the vibrant cultural and academic centres that they are today. Susan Gibbons, Vice Provost, River Campus Libraries, University of Rochester, says, "Mashups are critical to reach users, who now have to exit their preferred Web environments to come to the library and use its services". She further adds, "We have to accept that our library websites are not going to be destinations of choice for our students/patrons. Rather we have to be packaging and serving up parts of our websites in ways that they can be integrated into the users' preferred virtual destinations, whether that be Google, Facebook or Second Life." [6]
7. Important Library Mashups
Library Thing: (URL: http://www.librarything.com/)
LibraryThing is a site for book lovers that helps you create a library-quality catalog of your books. You can catalog all of them or just what you're reading now. And because everyone catalogs online, they also catalog together: LibraryThing connects people based on the books they share. Adding books to your catalog is easy; just enter some words from the title, the author or an ISBN. You don't have to type everything in. LibraryThing gets all the right data from Amazon.com and over 690 libraries around the world, including the Library of Congress.
BookPrice (URL: http://www.bookprice.net/)
bookprice.net offers a quick way to compare the prices of any in-print book across (so far) 8 online bookstores. You can view the results with or without the shipping costs of a single book. The site collects prices in real time from the different online bookstores and links to the original bookstore where you can then buy the item. It is a service to make it easier to find low-priced books, though there is no guarantee that this is the best price; there might be money to save if you check the shipping pages to find the lowest shipping cost.
TOCRoSS (Table of Content by Really Simple Syndication) (URL: http://www.jisc.ac.uk/)
TOCRoSS demonstrated that it is possible to automate the inclusion of TOCs from a publisher into a library OPAC. TOC data for 160 Emerald journals (3,000 articles) was pushed using RSS into the Talis Prism OPAC at the University of Derby library. Searches on keywords retrieved journal articles, and users were able to link through to and view the full-text articles. Librarians and end users tested the service, and feedback was on the whole positive about the inclusion of article records in the library OPAC.
Book Finder 4 You (URL: http://www.bookfinder4u.com/)
BookFinder4U is a free service that searches 130 bookstores, 80,000 booksellers and 90 million new and used books worldwide to find the lowest book price in a click. At BookFinder4U, the goal is simple: to provide a book search and price-comparison service that is comprehensive, objective and easy to use.
Journal Junkie (URL: http://journaljunkie.com/)
JournalJunkie.com is a free podcast syndication service for medical practitioners with an insatiable interest in the latest medical news, providing abstract summaries from the highest impact medical journals as downloadable audio. During Beta testing alone, and without any marketing, JournalJunkie.com has already attracted around 30,000 hits per month from over 2000 unique visitors. Subscription to JournalJunkie.com is free, and subscribers can:
listen to abstracts immediately
download them to their iPod/MP3 player for later
set up automatic downloads from their favourite journals to their computer or iPod/MP3 player
receive a regular email reminder whenever new audio content from their chosen journals becomes available
LIBWORM (URL: http://www.libworm.com/)
LibWorm is intended to be a search engine, a professional development tool, and a current awareness tool for people who work in libraries or care about libraries. LibWorm collects updates from about 1400 RSS feeds (and growing). The contents of these feeds are then available for searching, and search results can themselves be output as an RSS feed that the user can subscribe to either in his/her favourite aggregator or in LibWorm's built-in aggregator.
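LibWorm's pattern (aggregate feeds, search them, emit the results as a new feed) is itself a small mashup. The sketch below shows the idea with the standard library only; the sample feed and function names are invented, and a real aggregator would fetch and cache its 1,400 feeds over HTTP.

```python
# Sketch of the LibWorm pattern: search aggregated RSS items and re-emit
# the matches as a new RSS 2.0 feed.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><title>Koha 3.0 released</title></item>
  <item><title>Gardening tips</title></item>
</channel></rss>"""

def search_feed(xml_text, keyword):
    """Return titles of items whose title contains the keyword."""
    root = ET.fromstring(xml_text)
    return [i.findtext("title") for i in root.iter("item")
            if keyword.lower() in (i.findtext("title") or "").lower()]

def to_rss(titles):
    """Re-emit search results as a minimal RSS 2.0 document."""
    items = "".join(f"<item><title>{t}</title></item>" for t in titles)
    return f'<rss version="2.0"><channel>{items}</channel></rss>'
```

Because the output is itself RSS, the user can subscribe to a search, which is exactly what makes the service a current-awareness tool.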
BookJetty (URL: http://www.bookjetty.com/)
BookJetty is a social utility that connects to library sites and checks books' availability in those libraries. At the start, BookJetty linked up only with the Singapore National Library Board; now it connects with more than 300 libraries worldwide from 11 different countries, including the US, UK, Canada, Australia, Singapore, Taiwan and Hong Kong.
xISBN (URL: http://www.worldcat.org/affiliate/webservices/xisbn/app.jsp)
The xISBN Web service supplies ISBNs and other information associated with an individual intellectual work that is represented in WorldCat. If we submit an ISBN to this service, it returns a list of related ISBNs and selected metadata. Ideal for Web-enabled search applications—such as library catalogs and online booksellers—and based on associations made in the WorldCat database, xISBN enables an end user to link to information about other versions of a source work.
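Consuming a service like xISBN amounts to sending an ISBN and parsing the list of related ISBNs out of the response. The XML below is a simplified stand-in for the service's actual response format, which I have not reproduced exactly; in practice the document would be fetched over HTTP with the ISBN embedded in the request URL.

```python
# Sketch of parsing an xISBN-style response into a list of related ISBNs.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """<rsp stat="ok">
  <isbn>0596007973</isbn>
  <isbn>8173664242</isbn>
</rsp>"""

def related_isbns(xml_text):
    """Extract the list of related ISBNs from a response document."""
    root = ET.fromstring(xml_text)
    return [e.text for e in root.findall("isbn")]
```

A catalog or bookseller page would then render each returned ISBN as a link to that edition, which is the "other versions of a work" linking the service enables.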
8. Conclusion
Modern approaches to thinking about provision of library data and services online create opportunities for numerous applications beyond the traditionally defined library management system. By adhering to standards from the wider Web community, by considering the library system as an interlocking set of functional components rather than a monolithic black box, and by taking a bold new approach to defining the ways in which information from and about libraries are 'owned' and exposed to others, we make it straightforward for information from the library to find its way to other places online. Rather than being locked inside the library system, data can add value to the experience of users wherever they are, whether it is Google, Amazon, the institutional portal, or one of the social networking sites such as MySpace or Facebook. By unlocking data and the services that make use of it, the possibilities are literally endless, and it is here that efforts such as those around the construction of a library 'Platform' become important.
References
Robert, D. 2005. Mix., Match and Mutate. Business Week. [Online] Available at http://www.businessweek.com/magazine/content/05_30/b3944108_mz063.htm
Mashup (web application hybrid). Wikipedia. [Online]
CHUN, Hong. 2006. Web 2.0-Mashups. ISNM 2006.Modules of Virtual Organizations. Oliver Bohl. [Online] Available at http://www.oliverbohl.de/DOCS/Mashup.pdf
What is A Mashup? [Online] Available at http://news.zdnet.com/2422-13569_22-152729.html
HERREN, John. 2006. Introduction to Mashup development. [Online] Available at http://www.slideshare.net/jhherren/mashup-university-4-intro-to-mashups
STOREY, Tom. 2008. Mixing it up: Libraries mash up content, services, and ideas. Next Space. 9, p. 7
CHO, Allan. 2007. An introduction to Mashup for health librarians. JCHLA/JABSC. v.28, p. 19-22 [Online] Available at pubs.nrc-cnrc.gc.ca/jchla/jchla28/c07-007.pdf
Bulu Maharana is working as a Lecturer in Library and Information Science. He has 7 years of teaching experience and 3 years of professional experience, and has published 35 papers in national and international journals, seminar proceedings and book chapters. He can be contacted at bulumaharana@gmail.com.
* XML Feed is a form of paid inclusion where a search engine is "fed" information about pages via XML, rather than gathering that information through crawling actual pages. [http://www.anvilmediainc.com/search-engine-marketing-glossary.html]
§ Screen scraping is a technique in which a computer program extracts data from the display output of another program. The program doing the scraping is called a screen scraper. The key element that distinguishes screen scraping from regular parsing is that the output being scraped was intended for final display to a human user, rather than as input to another program, and is therefore usually neither documented nor structured for convenient parsing. [http://en.wikipedia.org/wiki/Screen_scraping]
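The screen-scraping footnote can be made concrete with the standard library alone. The HTML snippet and class attribute below are invented; the point is that the program digs structured data out of markup that was written for human display, not for machine consumption.

```python
# Screen scraping as defined above: extracting data from display-oriented HTML.
from html.parser import HTMLParser

PAGE = '<html><body><span class="price">Rs. 250</span></body></html>'

class PriceScraper(HTMLParser):
    """Collects the text inside <span class="price"> elements."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_price = False

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data)

scraper = PriceScraper()
scraper.feed(PAGE)
```

This fragility is the footnote's point: if the site changes its layout or class names, the scraper silently breaks, which is why APIs and feeds are preferred when available.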
Saturday, August 16, 2008
BENTHAM OPEN: More than 200 open access journals
for detail visit- http://www.bentham.org/index.htm
Koha 3.0.0 RC1 Release
Release Notes for Koha 3.0.0 RC1
Koha 3 is the next-generation release of the award-winning Koha open-source integrated library system.
You can obtain Koha 3.0 RC1 from the following URL: http://download.koha.org/koha-3.00.00-stableRC1.tar.gz
These Release Notes cover What's New in Koha 3, information about the new Revision control system (Git), and Version-release process, pointers to Download, Installation, and Upgrade documentation, a brief introduction to the new Templates, a call to Translation and Documentation writers, and finally, Known Issues with this version.
To know what's new in Koha 3, please visit http://www.koha.org/about-koha/news/nr1214238926.html
Thursday, July 17, 2008
Science 2.0 -- Is Open Access Science the Future?
Proponents say these “open access” practices make scientific progress more collaborative and therefore more productive.
Critics say scientists who put preliminary findings online risk having others copy or exploit the work to gain credit or even patents.
Despite pros and cons, Science 2.0 sites are beginning to proliferate; one notable example is the OpenWetWare project started by biological engineers at the Massachusetts Institute of Technology.
The first generation of World Wide Web capabilities rapidly transformed retailing and information search. More recent attributes such as blogging, tagging and social networking, dubbed Web 2.0, have just as quickly expanded people’s ability not just to consume online information but to publish it, edit it and collaborate about it—forcing such old-line institutions as journalism, marketing and even politicking to adopt whole new ways of thinking and operating.
Science could be next. A small but growing number of researchers (and not just the younger ones) have begun to carry out their work via the wide-open tools of Web 2.0. And although their efforts are still too scattered to be called a movement—yet—their experiences to date suggest that this kind of Web-based “Science 2.0” is not only more collegial than traditional science but considerably more productive.
“Science happens not just because of people doing experiments but because they’re discussing those experiments,” explains Christopher Surridge, managing editor of the Web-based journal Public Library of Science On-Line Edition (www.plosone.org). Critiquing, suggesting, sharing ideas and data—this communication is the heart of science, the most powerful tool ever invented for correcting errors, building on colleagues’ work and fashioning new knowledge. Although the classic peer-reviewed paper is important, says Surridge, who publishes a lot of them, “they’re effectively just snapshots of what the authors have done and thought at this moment in time. They are not collaborative beyond that, except for rudimentary mechanisms such as citations and letters to the editor.”
Web 2.0 technologies open up a much richer dialogue, says Bill Hooker, a postdoctoral cancer researcher at the Shriners Hospital for Children in Portland, Ore., and author of a three-part survey on open-science efforts that appeared at 3 Quarks Daily (www.3quarksdaily.com), where a group of bloggers write about science and culture. “To me, opening up my lab notebook means giving people a window into what I’m doing every day,” Hooker says. “That’s an immense leap forward in clarity. In a paper, I can see what you’ve done. But I don’t know how many things you tried that didn’t work. It’s those little details that become clear with an open [online] notebook but are obscured by every other communication mechanism we have. It makes science more efficient.” That jump in efficiency, in turn, could greatly benefit society, in everything from faster drug development to greater national competitiveness.
http://www.sciam.com/article.cfm?id=science-2-point-0
http://openwetware.org/wiki/Science_2.0
Saturday, July 05, 2008
LOCKSS: Preserve today's web-published materials for tomorrow's readers.
Libraries inform and educate citizens and provide critical support to democratic societies by acting as memory organizations. The memory of a library is its collections; therefore in order for a library to be a memory organization it must build collections. LOCKSS helps libraries stay relevant by building collections even as an increasing portion of today’s content is born digitally and published on the web.
LOCKSS replicates the traditional model of libraries keeping physical copies of books, journals, etc. in their collections, making it possible for libraries to house copies of digital materials long-term. Hundreds of publishers and libraries around the world have joined the LOCKSS community and are working together to ensure that libraries continue their important social memory role.
The ACM award-winning LOCKSS technology is open source, peer-to-peer, decentralized digital preservation infrastructure. LOCKSS preserves all formats and genres of web-published content. The intellectual content, which includes the historical context (the look and feel), is preserved. LOCKSS is OAIS-compliant; the software migrates content forward in time; and the bits and bytes are continually audited and repaired. Content preserved by libraries through LOCKSS becomes a part of their collection, and they have perpetual access to 100% of the titles preserved in their LOCKSS Box.
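The continual audit-and-repair idea can be sketched in a few lines. The real LOCKSS protocol is a peer-to-peer polling scheme between independent boxes; the toy version below only shows the principle that replicas compare content hashes and a damaged copy is repaired from the majority.

```python
# Toy version of audit-and-repair: hash every replica, find the majority
# hash, and overwrite any replica that disagrees with a good copy.
import hashlib
from collections import Counter

def digest(content):
    """Fingerprint a replica's bytes."""
    return hashlib.sha256(content).hexdigest()

def audit_and_repair(replicas):
    """Return the replica set with minority (damaged) copies repaired."""
    majority_hash, _ = Counter(digest(c) for c in replicas).most_common(1)[0]
    good = next(c for c in replicas if digest(c) == majority_hash)
    return [c if digest(c) == majority_hash else good for c in replicas]
```

Keeping many independent copies is what makes the majority vote meaningful: a single bit flip in one box is outvoted by the intact copies elsewhere.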
YouTube video: "Why Libraries Should Join LOCKSS"
http://www.youtube.com/watch?v=POJf38RzihA (part 1) and http://www.youtube.com/watch?v=AKr1Adc8tnA (part 2).
http://www.youtube.com/watch?v=0wdcnXrQkaI
(http://www.youtube.com/watch?v=TOE_Jw23cVg)
Wednesday, June 25, 2008
When Wikipedia Won't Cut It: 25 Online Sources for Reliable, Researched Facts
Citizendium: This wiki focuses on credibility, using both the general public and credentialed experts. It works just like Wikipedia, but better.
AmericanFactFinder: This database from the US Census Bureau is a great source for information on housing, economics, geography and population.
The Linguist List: The Linguist List is home to a peer-reviewed database of language and language-family information.
Intute: Created by a network of UK universities and partners, this database is full of evaluations from subject specialists.
Classic Encyclopedia: This online encyclopedia is based on the 1911 11th edition of the Encyclopaedia Britannica. Although quite old, it offers an in-depth look at more than 40,000 items, and it's widely considered one of the best encyclopedias ever written.
Virtual Reference Shelf: This Library of Congress site offers a number of high quality selected web resources.
MedBioWorld: Get professional medical and biotechnology information from this resource for journals, reference tools, databases, and more.
Library Spot: Check out this site for libraries online, a reading room, reference desk, and more.
FactCheck.org: FactCheck.org researches politics and delivers the truth on candidates and more.
iTools: Use iTools' research tools to find facts and theories on just about any subject.
Browse Topics: Maintained by professional librarians, this site links to Federal websites that offer facts.
WWW Virtual Library: Created by Tim Berners-Lee, who also created HTML and the Web, this library uses experts to compile high quality information.
Open Site: Open Site uses volunteer editors to offer a fair, impartial Internet encyclopedia.
CredoReference: CredoReference aggregates content from some of the best publishers in reference, offering more than 3 million reference entries.
Internet Public Library: In the Internet Public Library, you'll find references for nearly every subject out there.
Infoplease: Infoplease offers an entire suite of reference materials, including an atlas, dictionary, encyclopedia, and almanacs.
STAT-USA/Internet: This service of the US Department of Commerce offers information on business, economics, trade, and more.
Mathematica: Mathematica, the Wolfram Library Archive, offers research and information on math, science, and more.
Refdesk: Refdesk calls itself the single best resource for facts, and it delivers. Visit this online reference desk to find facts in their tools, facts-at-a-glance, or facts search desk.
AskOxford: This reference tool from Oxford University Press offers facts and tips on the English language and more.
The Old Farmer's Almanac: Whether you're searching for weather, food, gardening, or beyond, you'll find what you need in this online almanac.
eXtension: The information you'll find on eXtension is objective, research-based, and credible.
FindLaw: This listing of legal resources makes it easy to find cases, codes, references, and much more.
CIA Factbook: The CIA Factbook offers information on world countries and more.
Martindale's: The Reference Desk: Find reference material for nearly everything, from medicine to weather.
25 Awesome Beta Research Tools from Libraries Around the World
Check out this list for academically-minded beta search tools sponsored by universities around the world.
Vera Multi-Search: MIT: This new tool is still in the works, but once it's officially approved, students and researchers can use Vera Multi-Search as a way to find material in several different databases with one single search.
Project Blacklight: University of Virginia: UVA's Blacklight tool gives students the advanced ability to narrow down their searches and "more easily filter content" in order to increase their chances of finding exact matches and helpful research materials. Developed by Erik Hatcher and UVA library staff, Blacklight features several filters, subject searches and multimedia tools to enhance the research process.
LibX: This browser extension lets users search library catalogs, e-journal lists, databases and other library websites from their toolbars. Users can also easily highlight key words, save information and access personal library accounts. LibX is an open source project, allowing universities to continually develop and rework their own versions of the plugin.
Quick Start: Brigham Young University: This program, used at BYU's Harold B. Lee Library, lets students tailor their search to books, articles, or a combination of the two. Powered by the GoogleScholar Beta, Quick Start points researchers in the right direction from the very beginning.
Encore: Michigan State: Michigan State's new beta project, Encore, connects researchers to books, journals, articles and other materials. Try out the search, and then send in your feedback.
HBLL Firefox Extension: Brigham Young University: BYU's Harold B. Lee Library also sponsors this unique Firefox extension. The beta search tool lets researchers search within the HBLL for articles, books, personal account information and other materials with just one search box hidden inside your Firefox toolbar. Other quick links include information about library hours, floor maps, study room reservations and interlibrary loans.
New Books Beta: University of Otago: This New Zealand library keeps new books in a separate search engine for one week before sending them into general circulation. Users can search by subject or library to narrow down their search even further.
Library Search: University of Minnesota: The college's new Web interface is simply called Library Search, a program that is divided into two sections: Books and More: Twin Cities (MNCAT Catalog) and Articles. The MNCAT Catalog searches materials in libraries and databases in the Twin Cities. Researchers can find individual titles and journal entries in the Articles search.
LCSH Tag Cloud: Flinders University: Australia's Flinders University is currently testing out this search tool, which displays categories in a cloud-like format, similar to ones used on social media sites and blogs.
Windows Live Academic Search: Schools like University College Dublin are trying out this beta search tool, which supports books, dissertations, conferences and more.
MIT Tech TV: This beta program also comes with a collection of video tutorials that gives tips on using the MIT library services.
JHOVE: This tool, developed by JSTOR and Harvard University, "provides functions to perform format-specific identification, validation, and characterization of digital objects."
Google Scholar: Google Scholar and Google Advanced Scholar Search are popular beta tools that allow researchers to search academic journals, books, articles and other materials.
Live Search Books: Windows Live Book Search has partnered with the University of California and the University of Toronto to improve academic and book searches within its beta program.
Non-Academic Libraries and Tools
For access to even more new developments and cutting-edge research tools, review this list of betas, sponsored by Firefox, the Library of Congress, the British Library and more.
Zotero: This Firefox add-on is perfect for students and professionals who need to keep track of a heavy load of research sites. The add-on stores PDFs, files, images, links and records in any language; automatically saves citations; offers a note-taking autosave feature and more. The best part? It all fits neatly in your Firefox browser without getting in the way. The tool is currently used at the Harold B. Lee Library at Brigham Young but is sponsored by the Institute of Museum and Library Services and The Andrew W. Mellon Foundation.
New Search (BETA) -- Library of Congress: This simple tool lets researchers search just the Library of Congress website, U.S. historical collections, LofC online catalog, prints and photographs online catalog, the THOMAS Legislative Information System, or all 5 at once. It's the first time the library has given its users a chance to search all areas of the site by typing in keywords only once.
WorldCat: WorldCat connects libraries all over the world with information on the Internet. Many university libraries like the University of Washington, Trinity College, Wheaton College, the University of Minnesota and the University of Arizona all use WorldCat to enhance student, faculty and personal research abilities. Features like custom-designed search lists, shareable search results and browser plugins have made this beta a success so far.
Web Curator Tool Project: A project sponsored by the National Library of New Zealand and the British Library, this beta tool is designed to help researchers find relevant information on the Internet.
JISC Academic Database Assessment Tool: This tool, sponsored by Scopus, the International Bibliography of the Social Sciences, Thomson Scientific and ProQuest, is designed to help libraries identify quality future subscriptions to various databases. Users are encouraged to compare journal title lists, database platforms and eBook platforms to find the best fit for their library.
Fez: This open source project enables libraries working with Fedora "to produce and maintain a highly flexible web interface" for organizing their online archives and documents. Organizations currently involved in the project include Tufts University, University of Queensland, MediaShelf, Digital Peer Publishing and the University of Prince Edward Island.
Sustainability of Digital Formats: Planning for Library of Congress Collections: This project aims to redesign and evaluate a new system of describing content with appropriate digital formats, making it easier for users to search through catalogs and databases.
Google Book Search Library Project: Google's popular Book Search is now working with libraries to incorporate their card catalogs into Google's beta tool. Users will be able to find copyrighted books as well as books that are out of print.
THOMAS: The Library of Congress is developing another search tool, called THOMAS, for researchers seeking legislative materials like the Congressional Record, U.S. treaties and more. Users can search the entire database with a single search box and choose to search by sponsor or topic.
LibWorm: This beta helps you "search the biblioblogosphere and beyond." When you want to start your search on the Internet but only want to find library-related material, this tool can help: it pulls information from over 1,500 RSS feeds in categories like academic libraries, government libraries, law libraries, librarianship podcasts, medical libraries and more.
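An aggregator in the LibWorm vein can be sketched in a few lines: parse each feed's RSS, then filter items by keyword. The snippet below is only an illustration of that core step; the feed content and URLs are invented, and a real aggregator would download its 1,500-odd feeds over HTTP rather than read an inline string.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 snippet standing in for one harvested feed
# (titles and links here are invented for illustration).
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Academic Libraries Weblog</title>
  <item><title>OPAC redesign notes</title>
  <link>http://example.org/opac</link></item>
  <item><title>Gardening tips</title>
  <link>http://example.org/garden</link></item>
</channel></rss>"""

def search_feed(feed_xml, keyword):
    """Return (title, link) pairs whose title contains the keyword."""
    root = ET.fromstring(feed_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        if keyword.lower() in title.lower():
            hits.append((title, item.findtext("link", "")))
    return hits

print(search_feed(SAMPLE_FEED, "opac"))
# → [('OPAC redesign notes', 'http://example.org/opac')]
```

Running the same filter across many feeds and merging the hit lists gives the basic "search the biblioblogosphere" behaviour.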
PhiloBiblon: This highly-specialized beta search engine is in development at the Berkeley Digital Library SunSITE. Early texts produced in the Iberian Peninsula are available on the Internet through this search engine, helping researchers find rare materials.
Be a high-tech librarian
This changed library scenario will demand considerable reskilling and upskilling from conventional librarians if they are to meet the new requirements.
The country has over 400 large (employing nearly 10,000 people) and thousands of small and medium libraries. The entire library (content storage and management) segment awaits a total makeover as the way information is handled, stored and retrieved today is going to be completely changed.
According to H S Siddamalliah, president of Karnataka State Librarians' Association, the definition of libraries has changed with technology having started to facilitate easy access to "tailor-made or micro-information." This changes the overall profile of librarians, he says.
From mere book keepers and journal managers they are now transforming into publishers, editors, digitalisers, converters, compilers, categorisers, aggregators, collectors, collators, indexers and consolidators of content.
"Online content is growing rapidly along with multimedia learning materials. As classroom-centric teaching practices are becoming library-centric, our librarians need to be tech-savvy. Librarians should be familiar with content management software products and solutions."
While there is a lot of awareness among librarians attached to research agencies, defence departments or private library outfits, says Siddamalliah, librarians working in the government segment - universities and other educational outfits - remain largely unaware of the technological revolution sweeping their field.
T B Rajasekhar, Associate Chairman, National Centre for Science Information (NCSI), IISc, Bangalore, says that digitalisation of libraries is all about a system that manages and preserves documents intelligently and makes them easily accessible.
"These open online archives should be able to talk to each other," he says elaborating on the magic of technology.
Waikato University of New Zealand has developed Greenstone, an open source tool that helps in managing, searching and retrieving specific content from digital libraries. IISc will start using the software from April, and it is also planning workshops to educate the librarian fraternity, he informs.
"The challenge in front of librarians is that they should be able to use technology to enhance content management, online publishing and content refreshing," Rajasekhar says.
Says IRN Gouda, head, Information Centre of National Aerospace Laboratories (NAL), "These are the digital counterparts of traditional libraries. The content will include a whole lot of printed stuff, images, audio, video, music, movies, art objects, etc."
According to him, digital libraries are all about Knowledge Management (KM), which is currently being treated as a discipline by itself. Gouda says traditional librarians have been all along managing the print versions of content, and, quoting a recent study, he says in India the ratio between offline and online content will reach 50:50 from the current 75:25 by the year 2005.
"Digitalisation will also boost other areas like telecom, networking, systems integration. In a sea of information in the future, librarians will not be those who provide the water, but those who navigate the ship," Gouda claims.
Even conventional librarians are slowly seeing the advantages of digitalisation. Putta Basavaiah, deputy librarian at IISc says, "Digital libraries offer many advantages. Information is accessible wherever you are unlike the conventional libraries which were constrained by location and space. Digital libraries provide seamless content sharing (multi-user) facility. Librarians need to take up this new challenge by upgrading their knowledge base and reskilling themselves in selecting, collating, editing and managing the matter online."
Courses in digital library sciences
• National Informatics Centre runs a crash course
• Funded by UGC, NCSI offers a one-year training programme for library science graduates on technology and e-content management.
• Bureau of Indian Standards (BIS) has formulated a standard, the Indian Script Code for Information Interchange (ISCII), that helps digitalisation.
• Indira Gandhi National Centre for Arts, under the ministry of IT, has a cell called Technology Department for Indian Languages (TDIL) to develop tools for digitisation of various Indian language materials.
• Asian Digital Libraries, a body that researches problems related to digitalising Asian cultural heritage and languages.
Thursday, June 12, 2008
ABCD-Automation of Libraries and Documentation Centers
1ST NUGGET
ABCD has been a long-held aspiration of the ISIS community since the first MS-DOS version came out more than 20 years ago. Finally, this aspiration is about to come true.
ABCD is an integrated library management package comprising the main basic library functions:
Definition of any number of new databases (similar to Winisis), which includes: FDT, PFT, FST, and worksheets directly on the Web, or copying from existing ones either from the Web or from Winisis on a local hard disk,
cataloguing of books and serials, independently of the format: MARC, LILACS, AGRIS, etc.
end-user searching (OPAC),
loans circulation,
acquisitions,
statistics,
library services like SDI, barcode printing, quality control, etc.
The software will be compatible with CDS/ISIS database technology for the bibliographic databases, i.e. reading ISIS-databases and making use of ISIS Formatting Language (or something functionally similar) for producing output (PFT) and indexing (FST) of records;
The software will run on both Windows and Linux platforms;
The software will allow the use of MARC-21 cataloging formats and other current standards or protocols (Dublin Core, METS, Z39.50...);
The software will be published as Free and Open Source Software (FOSS) with the accompanying tools for the developer community;
The software will be intrinsically multi-lingual, with English, Spanish, Portuguese and French interfaces being available by the end of 2008;
The software will be fully documented for system-managers in at least one language, preferably English, by the end of 2008;
UNICODE-compatibility will at least be envisaged and prepared, if not in the actual version then for an upcoming future version, which is part of the parallel ISIS-NBP project;
A testing version will be ready by September 2008 to be presented in the 3rd World Isis Meeting;
This version and manuals will be used as training materials for an international Training Workshop on the software in March 2009.
2ND NUGGET OF ABCD
ABCD is aligned with the CISIS/1660 version 5.2 platform, and will eventually be made compatible with later versions. This means that the inverted file entries are 60 characters long, and will increase in length in the ISIS-NBP based version.
ABCD is compatible with programming languages accepted by the GNU licences, i.e. PHP, Java, Javascript, Python, etc. The current version of ABCD is written in PHP v.5.
The system is totally language independent. The product will be made available in Spanish, English, French and Portuguese by the end of 2008, and can be translated into other languages in the same way CDS/ISIS applications always have been.
Cataloguing module
The main feature of the cataloguing module is that it accepts different database structures in a transparent way. Each database has its own configuration files which ABCD interprets in order to apply the necessary procedures to manage each information structure. This follows the same philosophy as CDS/ISIS. Each database has its own FDT (Field Definition Table), etc.
The first version of ABCD has the following functions available:
User control through a database defining username, login, password, user level (Administrator, Operator, Librarian, End user, etc.), where specific information sources and access rights are established.
Database creation in three modalities:
Creating new databases in the traditional, four-step CDS/ISIS way: defining the FDT, a worksheet, a display format (PFT) and an indexing format (FST). ABCD generates the necessary environment files and creates the database in the Web server.
Copying an existing database from the Web server, and making changes afterwards.
Creating a database in the Web server based on a Winisis database available on the local machine. The ABCD system performs all the necessary conversions for the change of operative environment.
Management of multiple structures, providing templates for the separate entry of subfields. The structures can be based on MARC21, UNIMARC, LILACS, CEPAL, AGRIS, or any other ad-hoc structure that the user prefers. ABCD gives the user an extraordinarily wide capacity to define the FDT. It is possible to use repeatable subfields, to give names to subfields and to associate special help messages, pick-lists, etc. to each of them.
Dynamic building of data entry worksheets based on specifications given in the FDT. This table contains not only the specification of the ISIS fields proper, but also the characteristics which the fields will deploy in the worksheets (textbox, select, checkbox options, html area, text area), as well as the facilities to present controlled vocabularies related to the fields.
Capture of controlled vocabulary through authority files, obtained from the same or external databases. This feature is managed through inverted files.
Capture of controlled vocabulary through tables defined in TXT files.
Identification of fields requiring association to external resources (images, pdf, xls, etc) in order to upload these to the server.
Management of different kinds of records in each database, presenting the adequate worksheet for each type.
Management of multiple worksheets, dynamically defined by the user.
Access to records through the MFN, advanced search or alphabetical listing of a field defined as the main entry.
Creation, editing and deletion of records.
Presentation of search results using various display formats.
Printing module allowing different print formats and sorting criteria (depending on the number of records). Facilities for sending the results to wordprocessors or spreadsheets.
A module for the generation of lists and indexes.
Generation of scripts for real-time quality control during data entry.
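ABCD itself is written in PHP and reads genuine CDS/ISIS configuration files; the toy Python sketch below only illustrates the core idea described above, namely a field definition table (FDT) driving the dynamic generation of a data-entry worksheet. The field tags, names and widget types here are invented, not the actual ABCD or ISIS file formats.

```python
# Illustrative only: a toy "FDT" whose entries describe both the ISIS
# field and the worksheet control that should edit it.
FDT = [
    {"tag": 24, "name": "Title",    "widget": "textbox",  "repeatable": False},
    {"tag": 70, "name": "Author",   "widget": "textbox",  "repeatable": True},
    {"tag": 69, "name": "Keywords", "widget": "textarea", "repeatable": True},
]

def build_worksheet(fdt):
    """Render a minimal HTML worksheet from the field definition table."""
    rows = []
    for field in fdt:
        rep = " (repeatable)" if field["repeatable"] else ""
        if field["widget"] == "textarea":
            control = f'<textarea name="v{field["tag"]}"></textarea>'
        else:
            control = f'<input name="v{field["tag"]}" type="text">'
        rows.append(f'<label>{field["name"]}{rep}: {control}</label>')
    return "<form>" + "".join(rows) + "</form>"

ws = build_worksheet(FDT)
print("v24" in ws, "textarea" in ws)  # → True True
```

Because the worksheet is generated from the table rather than hard-coded, any database structure (MARC21, LILACS, an ad-hoc format) gets an editing form for free once its FDT is defined, which is the philosophy the module inherits from CDS/ISIS.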
Tuesday, June 03, 2008
What librarians can do for open access
Launch an open-access, OAI-compliant institutional eprint archive, for both texts and data.
The main reason for universities to have institutional repositories is to enhance the visibility, retrievability, and impact of the research output of the university. It will raise the profile of the work, the faculty, and the institution itself.
A more specific reason is that a growing number of journals allow authors to deposit their postprints in institutional but not disciplinary repositories. Even though this is an almost arbitrary distinction, institutions without repositories will leave some of their faculty stranded with no way to provide OA to their work.
"OAI-compliant" means that the archive complies with the metadata harvesting protocol of the Open Archives Initiative (OAI). This makes the archive interoperable with other compliant archives so that the many separate archives behave like one grand, virtual archive for purposes such as searching. This means that users can search across OAI-compliant archives without visiting the separate archives and running separate searches. Hence, it makes your content more visible, even if users don't know that your archive exists or what it contains.
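Under the hood, OAI-PMH is a simple HTTP-plus-XML protocol: a harvester issues requests such as `ListRecords` against a repository's base URL and parses the Dublin Core metadata in the response. The sketch below shows both halves with the standard library only; the repository URL is hypothetical and the response is an abridged, canned fragment rather than a full OAI-PMH envelope.

```python
import urllib.parse
import xml.etree.ElementTree as ET

def list_records_url(base_url, metadata_prefix="oai_dc"):
    """Build an OAI-PMH ListRecords request URL for a repository."""
    query = urllib.parse.urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix})
    return f"{base_url}?{query}"

# Dublin Core namespace used by the oai_dc metadata format.
DC = "{http://purl.org/dc/elements/1.1/}"

def titles_from_response(xml_text):
    """Pull Dublin Core titles out of a harvesting response."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(DC + "title")]

# Abridged stand-in for a ListRecords response (illustration only).
RESPONSE = """<OAI-PMH xmlns:dc="http://purl.org/dc/elements/1.1/">
  <record><metadata><dc:title>Preprint on metadata harvesting</dc:title>
  </metadata></record>
</OAI-PMH>"""

print(list_records_url("http://archive.example.edu/oai"))
print(titles_from_response(RESPONSE))
# → ['Preprint on metadata harvesting']
```

Because every compliant archive answers the same requests in the same format, one harvester can sweep them all, which is what makes the separate archives behave like one grand virtual archive.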
There are almost a dozen open-source packages for creating and maintaining OAI-compliant archives. The four most important are Eprints (from Southampton University), DSpace (from MIT), CDSWare (from CERN), and FEDORA (from Cornell and U. of Virginia).
When building the case for an archive among colleagues and administrators, see The Case for Institutional Repositories: A SPARC Position Paper, by Raym Crow.
When deciding which software to use, see the BOAI Guide to Institutional Repository Software.
When implementing the archive, see the SPARC Institutional Repository Checklist & Resource Guide.
Configure your archive to facilitate crawling by Google and other search engines.
If your institution wants an archive but would prefer to outsource the work, then consider the Open Repository service from BioMed Central or the DigitalCommons@ service from ProQuest and Bepress.
Help faculty deposit their research articles in the institutional archive.
Many faculty are more than willing, just too busy. Some suffer from tech phobias. Some might need education about the benefits.
For example, some university libraries have dedicated FTEs who visit faculty, office by office, to help them deposit copies of their articles in the institutional repository. (This is not difficult and could be done by student workers.) The St. Andrews University Library asks faculty to send in their articles as email attachments and library staff will then deposit them in the institutional repository.
Consider publishing an open-access journal.
Philosophers' Imprint, from the University of Michigan, is a peer-reviewed OA journal whose motto is, "Edited by philosophers. Published by librarians. Free to readers of the Web." Because the editors and publishers (faculty and librarians) are already on the university payroll, Philosophers' Imprint is a university-subsidized OA journal that does not need to charge upfront processing fees.
The library of the University of Arizona at Tucson publishes the OA peer-reviewed Journal of Insect Science. For detail and perspective on its experience, see (1) Henry Hagedorn et al., Publishing by the Academic Library, a January 2004 conference presentation, and (2) Eulalia Roel, Electronic journal publication: A new library contribution to scholarly communication, College & Research Libraries News, January 2004.
The Boston College Libraries publish OA journals edited by BC faculty. See their press release from December 16, 2004.
The OA Journal of Digital Information is now published by the Texas A&M University Libraries.
See the BOAI Guide to Business Planning for Launching a New Open Access Journal.
See SPARC's list of journal management software.
See the list of what journals can do, below.
Consider rejecting the big deal, or cancelling journals that cannot justify their high prices, and issue a public statement explaining why.
See my list of other universities that have already done so. If their examples give you courage and ideas, realize that you can do the same for others.
Give presentations to the faculty senate, or the library committee, or to separate departments, educating faculty and administrators about the scholarly communication crisis and showing how open access is part of any comprehensive solution. You will need faculty and administrative support for these decisions, but other universities have succeeded in getting it.
Help OA journals launched at the university become known to other libraries, indexing services, potential funders, potential authors, and potential readers.
See Getting your journal indexed from SPARC.
Include OA journals in the library catalog.
The Directory of Open Access Journals offers its journal metadata free for downloading. For tips on how to use these records, see the 2003 discussion thread on the ERIL list (readable only by list subscribers) or Joan Conger's summary of the thread (readable by everyone).
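Mechanically, loading OA journals into a catalog amounts to mapping each downloaded metadata record onto a catalog entry. The sketch below illustrates that mapping with invented sample rows in a DOAJ-like CSV shape (the column names and URLs are assumptions for illustration, not DOAJ's actual export format):

```python
import csv
import io

# Invented sample rows standing in for downloaded journal metadata.
DOAJ_CSV = """title,issn,url
Journal of Insect Science,1536-2442,http://example.org/jis
Philosophers' Imprint,1533-628X,http://example.org/pi
"""

def catalog_records(csv_text):
    """Turn journal metadata rows into minimal catalog-entry dicts."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [{"title": r["title"], "issn": r["issn"],
             "access": "open", "url": r["url"]} for r in reader]

for rec in catalog_records(DOAJ_CSV):
    print(rec["title"], rec["issn"], rec["access"])
```

A real batch load would emit MARC records for the ILS rather than dicts, but the title/ISSN/URL mapping is the same.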
Take other steps to ensure that students and faculty doing research at your institution know about OA sources, not just traditional print and toll-access sources.
Offer to assure the long-term preservation of some specific body of OA content.
OA journals suffer from the perception that they cannot assure long-term preservation. Libraries can come to their rescue and negate this perception. For example, in September 2003 the National Library of the Netherlands agreed to do this for all BioMed Central journals. This is a major library offering to preserve a major collection, but smaller libraries can do the same for smaller collections.
Undertake digitization, access, and preservation projects not only for faculty, but for local groups, e.g. non-profits, community organizations, museums, galleries, libraries. Show the benefits of OA to the non-academic community surrounding the university, especially the non-profit community.
Negotiate with vendors of priced electronic content (journals and databases) for full access by walk-in patrons.
A September 2003 article in Scientific American suggests that only a minority of libraries already do this.
Annotate OA articles and books with their metadata.
OA content is much more useful when it is properly annotated with metadata. University librarians could start by helping their own faculty annotate their own OA works. But if they have time (or university funding) left over, then they could help the cause by annotating other OA content as a public service.
Inform faculty in biomedicine at your institution about the NIH public-access policy.
SPARC has put together a good page on the benefits for researchers in complying with the NIH policy and suggestions on how to do so in the most effective way, and another page for librarians on ways to help faculty understand the policy and realize its benefits.
Help design impact measurements (such as a citation correlator) that take advantage of the many new kinds of usage data available for OA sources.
The OA world needs this, and it seems that only librarians can deliver it. We need measures other than the standard impact factor. We need measures that are article-based (as opposed to journal- or institution-based), that can be automated, that don't oversimplify, and that take full advantage of the plethora of data available for OA resources but unavailable for traditional print resources.
Librarians can also help pressure existing indices and impact measures to cover OA sources.
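As a toy illustration of what "article-based and automatable" could mean, the function below combines citation counts with download counts into a single score. The formula and weights are invented placeholders, not an established metric; the point is only that per-article usage data makes such computations mechanical.

```python
def article_impact(citations, downloads, w_cite=1.0, w_dl=0.01):
    """Toy article-level score: weighted citations plus weighted downloads.
    The weights are arbitrary placeholders, not an established metric."""
    return w_cite * citations + w_dl * downloads

# Hypothetical per-article data: (citations, downloads).
articles = {"A": (12, 3400), "B": (5, 9100)}
scores = {k: article_impact(c, d) for k, (c, d) in articles.items()}
print(sorted(scores, key=scores.get, reverse=True))  # → ['B', 'A']
```

Note how article B, lightly cited but heavily downloaded, outranks A under this scheme: exactly the kind of signal a journal-level impact factor cannot express.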
Join SPARC, a consortium of academic libraries actively promoting OA.
Join the Alliance for Taxpayer Access, a coalition of U.S.-based non-profit organizations working for OA to publicly-funded research. See the existing members of the ATA. If you can persuade your university as a whole to join the ATA, then do that as well.