
Winning the Competition for Attention on the Web

June 21, 2008

By Marshall Breeding

It’s no big revelation to assert that the web has grown to be a more pervasive aspect of our lives. A recent study conducted by OCLC (“Sharing, Privacy and Trust in Our Networked World,” 2007) shows that most of the major activities related to the internet have increased by substantial proportions: 19% more individuals have used search engines over the last 2 years, 30% more read or post to blogs, 5% more access online bookstores, and use of email increased 14%. In this context, it’s alarming that the study indicates that the number of individuals who report having used a library website has decreased by 10%.

I observe the web as becoming increasingly interwoven into society. Some organizations operate almost entirely through the web; others use the web to enhance their services and to help bring clientele to their physical establishments. Brick-and-mortar and online services seem to be increasingly interdependent. The observation that use of library websites has decreased concerns me deeply: libraries need to pay close attention to improving the way they manage their online presence.

While this statistic represents a broad pattern, I think that those involved in developing library websites can consider a number of measures to ensure that their own sites run counter to this trend and see increased use over time.

Tune for Discoverability

Take it as an uncomfortable reality that only the smallest minority of your library’s users will think to begin their research with your library’s website. OCLC’s 2005 report, “Perceptions of Libraries and Information Resources,” reported that only 1% to 2% of respondents initiate their online research with library resources directly. Given the large portion that begin with internet search engines such as Google, a key strategy for library relevance involves finding ways to channel users from search engine results to our library sites. It’s important to follow the techniques of search engine optimization (SEO) to ensure the best possible exposure in the search engines and the highest rate of delivery of users from search engine results to your website.

The key asset that will increase the discoverability of your website involves your library’s content, especially any unique collections. It’s important to provide as much descriptive data about your unique collections as possible, in a way that can be harvested by the search engines, ideally for each individual record. If this isn’t possible, then ensure that any collection-level descriptions have been loaded with keywords that will improve their rankings in the search engines.

Each page on your site should be designed from two major perspectives. Many web authors or designers consider only how the page will be viewed by end users as they navigate through a site, optimizing it for visual appeal, usability, and navigation. It’s just as important, however, to optimize the page to maximize its findability in the search engines. From this perspective, it’s important to think about maximizing the number of access points embedded in the page. The page should include text ranging from broad context down to granular details: the name and location of your library, the name of the collection, and then details of names, dates, subjects, and unique identifiers. For the human user, you might take the broader information as obvious or implicit by virtue of the logos and navigational elements of your website. To boost search engine performance, these elements must be explicit and unambiguous.

The positioning of the text on the page should also be given some attention. Titles, headers, and text near the beginning of the page tend to receive more attention by the search engines. The words that you put in these privileged areas of the page should be the ones most appropriate for search engine queries. Google, for example, automatically creates a brief description of the page called a “snippet” that it presents when your page turns up in a results list. It’s important to design the page to produce a snippet that accurately represents what the page is about and that will increase the likelihood that a user will click through to your site from the search engine results.

Ensure that your site delivers simple, clear pages. Ideally, the pages themselves will contain only the text involved in the content of the page, with headings and sections specified in standard tags and most of the coding related to the appearance delivered through a separate style sheet. That doesn’t mean that the pages can’t look great; just control the look through the style sheet and avoid weaving spaghetti code through the body of the page. Search engines can also have a hard time with some text and features delivered through JavaScript. Many sites are moving toward this separation of content from form through CSS as a more mature and sophisticated approach to website management. Keep in mind that it’s also a great way to boost the performance of pages in search engines.
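As a sketch of this separation, the body of the page can carry only meaningful, indexable text while all presentation lives in an external style sheet. The collection, library name, and file path below are hypothetical examples, not a real site:

```html
<!-- Hypothetical example: content-only markup; presentation lives in site.css -->
<html>
<head>
  <title>Civil War Photograph Collection - Example Public Library, Springfield</title>
  <link rel="stylesheet" type="text/css" href="/styles/site.css">
</head>
<body>
  <h1>Civil War Photograph Collection</h1>
  <h2>Example Public Library, Springfield</h2>
  <p>Digitized photographs documenting the period 1861-1865, including
     portraits, battlefield scenes, and named photographers.</p>
  <!-- No font tags, layout tables, or inline styles: the descriptive
       keywords stay near the top of the source, where crawlers weight them. -->
</body>
</html>
```

Note how the broad-to-granular access points (library name, collection name, dates) appear as plain text in headings near the top of the source, exactly where search engines look first.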

One technique that I find helpful in assessing the findability of a given page is to view the source of the page and examine its text. If you see that the words that encapsulate the meaning of the page are hopelessly buried in the coding that controls the look of the page, then the page isn’t going to perform as well with the search engines.

Help Out the Search Engines Through XML Sitemaps

Another key technique in improving site performance in the search engines involves making sure that all the pages can be efficiently harvested by web crawlers that visit a page to harvest and index its contents. I recommend creating an XML file that represents the contents of your site according to the Sitemap protocol. This protocol involves creating an XML file that lists each of the links that represents a page on your site and when that page’s contents were last modified. Having a simple, definitive description of the site allows the search engine to index its contents more efficiently and reliably. The protocol also allows you to distinguish the relative priority of the pages in your site, which is important given that Google and the other search engines may not index every page.
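A minimal sitemap file following the protocol might look like this (the domain and paths are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example-library.org/</loc>
    <lastmod>2008-05-01</lastmod>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.example-library.org/collections/civilwar/item001.html</loc>
    <lastmod>2008-04-15</lastmod>
    <priority>0.8</priority>
  </url>
</urlset>
```

Only `<loc>` is required for each entry; `<lastmod>` and `<priority>` are optional hints, and priority values are relative to the other pages on your own site, not to the web at large.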

The Sitemap protocol was originally created by Google, but it has recently been adopted by the other major search engines.

Most of the search engines have a page that site managers can access to submit sitemaps and to track the indexing and search performance of your site:

* Google

www.google.com/webmasters/tools

* Microsoft Live

http://webmaster.live.com

* Yahoo!

https://siteexplorer.search.yahoo.com

Each of these utilities requires you to establish an account and provides tools to submit and monitor your sitemaps. All three require some mechanism to verify that you actually own the site, usually by placing a special file or token on your site. After you initially submit your sitemaps, you should sign in to your accounts on the various search engines regularly to be sure that no errors appear in your sitemap files and to monitor the number of pages being indexed.

You can also provide a line in your robots.txt file to specify the location of your main sitemap file. The robots file for my Library Technology Guides site, for example, includes the following line: Sitemap: http://www.librarytechnology.org/sitemap.xml.
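A complete robots.txt that both admits crawlers and advertises the sitemap can be as short as this (the domain shown is a hypothetical example):

```
User-agent: *
Disallow:

Sitemap: http://www.example-library.org/sitemap.xml
```

The empty Disallow line permits crawling of the whole site; the Sitemap directive lets any search engine discover your sitemap without a manual submission.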

Exploit Metatags

In addition to the text in the body of your pages, you can also add “metatags” to the headers of your pages to assist the search engines in understanding their contents. The importance of metatags has waxed and waned over the history of the web. In the web’s early days, the use of metatags was critical since the search engines relied on them heavily. The misuse of metatags led to them being virtually ignored by search engines. You can imagine how some might describe the contents of the page for the benefit of the search engines while the actual content of the page might be something quite different. In today’s environment, the search engines appear to take the contents of the metatags into consideration when indexing and ranking a page, provided that the metatags are consistent with the content in the body of the page.
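In practice this means writing a description metatag whose wording matches the visible body text. A sketch, with hypothetical content:

```html
<head>
  <title>Civil War Photograph Collection - Example Public Library</title>
  <!-- Keep the description consistent with the page body; search engines
       may use it as the snippet shown in their results lists. -->
  <meta name="description"
        content="Digitized Civil War photographs, 1861-1865, from the
                 Example Public Library's special collections.">
  <meta name="keywords" content="Civil War, photographs, Springfield">
</head>
```

The description earns its weight only when it restates what the page actually says; keyword metatags that diverge from the body are the sort of misuse that caused search engines to discount metatags in the first place.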

The most important tag in the header of the page is <title>. This tag characterizes the page and is heavily weighted by the search engines; it usually determines how the link will be labeled in search engine results. So don’t assign the same title to all the pages on your site. Try to give each page a unique title based on the unique content of the page.

Seek Referrals

It’s important to position your library’s website to be conspicuous to its potential users. Make sure that other destinations frequented by your users include prominent links to your site. And go beyond simple links whenever possible. A search box into the library catalog, featured content mined from the library’s collections, or any other service or content component that you can integrate into other resources can help increase exposure to the library’s online presence. Exploit all possible opportunities to plant links or content in any destination frequented by your users, with strong branding from your library as part of a strategy to increase online exposure.

You might want to review and expand the way that your library is represented in the web presence of the library’s higher-level organization. For academic libraries, this might include the website of the college or university, courseware systems, or other portals used as part of the daily online life of faculty and students. Public libraries might focus on their municipal or county websites and portals that provide government services.

It’s important to measure the effectiveness of the referrals in driving traffic to the library’s own resources. I spend quite a bit of time studying patterns of incoming traffic for the web resources that I manage. Web server logs include the URL of the referrer (the site from which the user clicked the link to get to your site). Knowing the performance of these incoming referrals gives important clues as to whether the links in the library’s surrounding environment are sufficiently prominent.
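One quick way to summarize those referrers is a short script over the server’s access log. This is a sketch that assumes Apache’s combined log format, where the referrer is the second-to-last quoted field; the log lines shown are fabricated samples:

```python
import re
from collections import Counter

# In combined log format, the line ends with "referrer" "user-agent".
LOG_LINE = re.compile(r'"(?P<referrer>[^"]*)" "[^"]*"$')

def count_referrers(lines):
    """Tally referring sites, collapsing each referrer URL to its host name."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if not m:
            continue
        ref = m.group("referrer")
        if ref in ("", "-"):
            continue  # direct visits carry no referrer
        host = ref.split("/")[2] if "//" in ref else ref
        counts[host] += 1
    return counts

# Fabricated sample log lines for illustration:
sample = [
    '1.2.3.4 - - [21/Jun/2008:10:00:00 -0500] "GET / HTTP/1.1" 200 512 '
    '"http://www.google.com/search?q=civil+war+photos" "Mozilla/5.0"',
    '1.2.3.5 - - [21/Jun/2008:10:01:00 -0500] "GET / HTTP/1.1" 200 512 '
    '"-" "Mozilla/5.0"',
]
print(count_referrers(sample).most_common(1))  # → [('www.google.com', 1)]
```

Run against a full log, the tally makes it easy to see whether the campus portal, the municipal site, or the search engines are actually delivering visitors.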
This referral information also reveals how frequently visitors arrive at the library’s sites through search engines and includes the queries that resulted in the visit.

Leverage Social Bookmarking

It’s becoming more common for sites to include icons that allow visitors to easily post pages to social bookmarking and networking sites such as del.icio.us, Digg, and Facebook. Facilitating the submission of your content in this way can help increase use. Providing an icon that submits your page to these sites takes a little bit of behind-the-scenes technical work, but it can be worth the effort in that it ensures that a valid, persistent URL is provided and makes a larger audience aware of your content. This approach works best for pages that represent individual content items and not necessarily the top-level page for your library.

Embrace RSS

RSS can be a powerful tool for engaging users with library content. Providing an RSS feed for new items added to each of the library’s collections, blogs and events, search query results, or other content selections provides a great service to library users who prefer to consume content in this way. You should also work toward constructing RSS feeds that help to drive users back into your website. It’s important to include enough information in each RSS item to entice the reader but also to include back links that bring users into the library’s own resource when they want more information.

I offer two RSS feeds on Library Technology Guides: one for industry news and another for blog entries. It has been very interesting to watch the statistics related to the feeds steadily increase and to see a significant number of accesses into the website that appear to be a follow-up to content delivered through the RSS feeds.
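An RSS 2.0 item built on this principle carries a short teaser plus a link back into the site. The feed below is a hypothetical sketch, not the actual Library Technology Guides feed:

```xml
<rss version="2.0">
  <channel>
    <title>Example Library: New in the Civil War Collection</title>
    <link>http://www.example-library.org/collections/civilwar/</link>
    <description>Recently digitized items</description>
    <item>
      <title>Portrait of an unidentified Union soldier, ca. 1863</title>
      <!-- The link draws the reader back into the library's own site -->
      <link>http://www.example-library.org/collections/civilwar/item001.html</link>
      <description>Quarter-plate tintype, recently digitized. The full record
        and a high-resolution scan are available on the collection site.</description>
    </item>
  </channel>
</rss>
```

The description is deliberately partial: enough to entice, while the full record lives only on the library’s site.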
Offer Content That Your Users Want

More than anything else, the key to increasing interest in your library’s web presence involves offering compelling and interesting content. Design your site to enhance, promote, and deliver access to the library’s collections and services. While it’s probably necessary to also include information about policies, rules, and regulations, these are not what attract patrons to your website.

I’ve mentioned some of the tactics that I think can be part of a library’s strategy to deal with this urgent need to increase interest in and activity on its online presence. Libraries face ever-more formidable competition for the attention of our users in a crowded field of information providers on the web. The possibility that the number of visits to library websites has gone down in the last couple of years reminds me how important it is for us to be very mindful of the way we position and promote our online presence.

Marshall Breeding is the director for innovative technologies and research for the Vanderbilt University Libraries, the executive director of the Vanderbilt Television News Archive, and the founder of Library Technology Guides (www.librarytechnology.org). His email address is marshall.breeding@vanderbilt.edu.

Copyright Information Today, Inc. Jun 2008

(c) 2008 Computers in Libraries. Provided by ProQuest Information and Learning. All rights Reserved.