
User-Centered Policy Evaluations of Section 508 of the Rehabilitation Act: Evaluating E-Government Web Sites for Accessibility for Persons With Disabilities

July 31, 2008

By Paul T. Jaeger

The author examines user-centered evaluations of e-government Web sites for compliance with a policy related to persons with disabilities: the requirements of Section 508 of the Rehabilitation Act. Although Section 508 requires that federal e-government sites offer equal access to all users, research indicates that inaccessibility is still prevalent. User-centered evaluation approaches offer a vital way to discover areas of inaccessibility on Web sites related to the requirements of Section 508. Following an overview of accessibility, Section 508, and e-government sites, the goals of evaluation and various approaches to evaluating e-government sites are analyzed. The author then focuses on methods and issues in user-centered evaluations of e-government that measure accessibility, and considerations for universal design and future studies. Sample instruments and example data from a 2006 study of e-government accessibility are included to illustrate methods and issues. Evaluating e-government Web sites can ultimately prove quite informative in ascertaining compliance with laws intended to promote accessibility.

Keywords: policy; telecommunications; Section 508; law/legal issues; disability

Section 508 and E-Government

Accessibility is the equal access to information and communication technologies (ICTs) for individuals with disabilities, and it is of utmost importance to persons with disabilities in the networked society. Accessibility allows individuals with disabilities, regardless of the types of disabilities they have, to use ICTs, such as Web sites, in a manner that is equal to the use enjoyed by others. In the United States, 54 million people have disabilities, and the number of persons with disabilities worldwide is more than 550 million. That number will continue to grow as the baby boom generation ages (Jaeger & Bowman, 2005). For ICTs to be accessible, they should (a) provide equal or equivalent access to all users and (b) work compatibly with assistive technologies, such as narrators, scanners, enlargement, voice-activated technologies, and many other devices that persons with disabilities may use. The U.S. government has created numerous laws related to the accessibility of ICTs (Jaeger, 2004a, 2004b). For Web sites, the most prominent is Section 508 of the Rehabilitation Act, which includes specific requirements for federal government Web sites to be accessible.

Prior to reaching government and other Web sites, however, users with disabilities often face many barriers to accessing the Internet itself. Persons with disabilities can be limited in their access to and use of the Internet by a wide range of factors, from a lack of ability to afford the necessary ICTs, including computers; to accessibility problems with Internet service providers; to Web browsers that are not compatible with vital assistive technologies (Jaeger & Bowman, 2005). Persons with disabilities are much less likely to regularly use the Internet than many other populations as a result of these barriers to initial access (Jaeger & Bowman, 2005). Although numerous issues remain in the study of the accessibility of many aspects of the Internet, this discussion focuses on the accessibility of government Web sites.

Electronic government (e-government) is the provision of government information and services in the networked environment, notably through the provision of government Web sites. E-government is intended to make government more available to citizens, businesses, and other government agencies. However, federal e-government Web sites, in spite of the requirements of Section 508, are often inaccessible to persons with disabilities (Jaeger, 2004b, 2006a). Some persons with disabilities have come to distrust e-government as a source of information or services as a result of these accessibility issues (Cullen & Hernon, 2006). To improve compliance with the Section 508 requirements and make e-government Web sites accessible to all users, disability studies research must work to find detailed, comprehensive methods and approaches for testing e-government Web sites for accessibility. A key aspect of studying any policy is assessing how well that policy is being implemented. The use of evaluation studies holds promise as such an approach to assess the extent, effectiveness, and success of the implementation of accessibility requirements on e-government Web sites.

The Evaluation of Government Policies in the Networked Environment

Evaluation studies of policies focus on their effects on individuals, organizations, and society. Such evaluations determine if, and under what conditions, policies and their components are effectively meeting programmatic, ethical, social, economic, and intellectual goals (Chen, 1990; Sanders, 2001). Evaluation “may confirm uncertain prior findings, provide new understandings about how programs work, or fundamentally question assumptions about particular interventions” (Ginsburg & Rhett, 2003, p. 490). In the public sector, evaluation is intended to produce social betterment by improving conditions for users of government policies, programs, or institutions (Henry, 2003; Henry & Mark, 2003). Ultimately, evaluation studies can offer prescriptions for improving policies or provide new perspectives on policies.

The Government Performance and Results Act of 1993, the Education Sciences Reform Act of 2002, and other laws require government agencies to conduct evaluations in many circumstances. With the increasing frequency of the evaluation of government policies, evaluation studies have gained considerable significance in the policy-making process. “Evaluation has, now more than ever before, become an integral part of how policies, decisions, reforms, programs, and projects are undertaken to try to achieve credibility and trust” (Segerholm, 2003, p. 353). The evaluation of public policy is particularly complex, because it must account for legal and operational dimensions of the related laws and for any changes in the policy environment (Grob, 2003; Lafond, Toomey, Rothstein, Manning, & Wagenaar, 2000; Mabry, 2002; Wagenaar, Harwood, Silianoff, & Toomey, 2005).

All public policies have implications for democracy and society; as such, evaluations of these policies do as well (Hansberger, 2001). Evaluations should examine policies in terms of how they affect democratic society and how they contribute to or enhance democratic values (Hansberger, 2001). Ideally, evaluations can identify gaps in access to social institutions, ensure distribution, and alleviate underprivilege in the networked environment (Stake, 2004). Such evaluations help foster democracy by involving people often left out of the policy-making process (MacNeil, 2002; Stake, 2004).

As the use of ICTs has blossomed over the past two decades, the evaluation of the networked environment has grown increasingly important. Evaluation can play both a formative role, helping continually refine and update policies, and a summative role, helping ascertain whether policy goals and objectives are being met (K. M. Thompson, McClure, & Jaeger, 2003).

The Evaluation of E-Government Policies

There have been numerous suggestions of specific methodologies that could be used to evaluate e-government Web sites for different purposes or from different perspectives. An early suggestion for evaluating e-government Web sites was in terms of compliance with laws related to security, privacy, and the freedom of information (Eschenfelder, Beachboard, McClure, & Wyman, 1997; Smith, Fraser, & McClure, 2000). Eschenfelder et al. (1997) provided an extensive list of more than 60 evaluation criteria in two major categories: information-content criteria and ease-of-use criteria. This methodology was originally proposed for U.S. government Web sites but has since been extended to other national e-governments, such as that of New Zealand (Smith, 2001).

Huang and Chao (2001) asserted that e-government Web sites should be evaluated on the basis of usability principles, specifically that Web sites should employ a user-centered design that allows users of e-government Web sites to effectively reach the information they seek. Holliday (2002) created a set of evaluation criteria for the level of the usefulness of e-government Web sites, including factors such as the amount of information about the government, contact information, feedback options, search capabilities, and related links. The Value Measuring Methodology encourages the evaluation of e-government Web sites on the basis of cost/benefit, social, and political factors (Mechling & Booz Allen Hamilton, 2002). Gupta and Jana (2003) suggested evaluating e-government sites in terms of the tangible and intangible economic benefits the sites produce.

Some of these methods for evaluating e-government Web sites have been created with specific populations in mind. Ritchie and Blanck (2003) proposed that e-government Web sites should be evaluated in relation to the users of human services that are provided on the site, such as everyday life information, referral services, peer counseling, and advocacy. Fenton (2004), in examining e-government Web sites of adoption agencies, also focused on the evaluation factors that affect users of human services provided through e-government.

Overview of User-Centered Evaluations of E-Government Web Sites

Although the evaluation of e-government Web sites can be approached from many directions, a particularly robust approach is user-centered evaluation, which focuses on the implementation of a Web site from the perspective of a user (Bertot & Jaeger, 2006; Bertot, Snead, Jaeger, & McClure, 2006). This type of user-centered evaluation can include three interrelated components:

* functionality evaluation, examining how well a Web site and its implementation fulfill the functions they are intended to perform;

* usability evaluation, examining how well users are able to use and interact with the implementation of a Web site; and

* accessibility evaluation, examining how inclusive a Web site and its implementation are for all users, including persons with disabilities (Bertot & Jaeger, 2006; Bertot et al., 2006).

User-centered testing focuses on the needs of users of Web sites and includes users directly in the testing process. This section details each of these evaluation approaches in the networked environment.

Functionality Evaluation

The goal of functionality testing is to determine whether a Web site works in the manner intended and provides the results it is meant to deliver. Quite literally, this method determines whether a Web site and its elements objectively function according to their goals (Wallace, 2001). Functionality testing is often used to make comparisons between separate, comparable Web sites with similar goals (Bertot, McClure, Thompson, Jaeger, & Langa, 2003). In functionality testing, issues that must be considered include the types of functions that are tested, the perspectives on the functions tested, the needs of potential users of those functions, the goals of the program, and the scale of the functions (Gibbons, Peters, & Bryan, 2003). Perhaps most significantly, functionality evaluation of a Web site can help determine whether its functions meet policy goals. This approach can be particularly important if used to evaluate the Web site during the implementation process, while it can still be modified if necessary.

The most significant drawback to functionality testing is likely the general lack of involvement of typical users of a program. Regardless of the design of a functionality evaluation, researchers will probably not be able to anticipate all of the experiences and potential difficulties of diverse users, particularly unskilled ones (Gibbons et al., 2003). Functionality testing may have no way of getting at the impressions of users, such as their levels of satisfaction. These sorts of issues are better addressed by usability evaluation.

Usability Evaluation

Usability evaluation examines how users react to and interact with Web sites. Usability may be regarded as “the extent to which the information technology affords (or is deemed capable of affording) an effective and satisfying interaction to the intended users, performing the intended tasks within the intended environment at an acceptable cost” (Sweeney, Maguire, & Shackel, 1993, p. 690). Usability evaluation can elicit user input at many points in design and implementation and can iteratively use a range of techniques (K. M. Thompson et al., 2003). Although user participation in usability testing has been criticized as being too labor intensive (Norman & Panizzi, 2006), it provides detailed information from the perspective of users.

Usability metrics employed in evaluation tend to focus on two specific aspects of the experiences of users: their perceptions and their interactions with the system (Hert, 2001). The first type of metric allows a user to express personal impressions of a resource, such as satisfaction, utility, value, helpfulness, benefits, frustration, and self-efficacy (Dalrymple & Zweizig, 1992; Hert, 2001). The second type of metric provides a portrait of a user’s interaction with a resource by monitoring the number of errors, the time necessary to complete specified tasks, and similar measures of the efficiency and effectiveness of the resource when being used (Hert, 2001). Through the combination of these two types of data, usability testing can catch many issues that designers may have missed. Using protocols that encourage a user to articulate immediate impressions about the Web site while actively using it, usability analysis can offer insight into the perspectives of the user about the resource at issue that might not otherwise be available in the course of the evaluation (Hert, 2001).
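
As a minimal sketch of how the second type of metric might be tabulated, the following Python example aggregates time on task, error counts, and task completion rates from a handful of hypothetical usability sessions. The field names, task, and data are illustrative assumptions, not instruments or results from any study cited here.

from statistics import mean
from typing import NamedTuple, Optional

# One participant's attempt at one task during a usability session.
# All names and values below are hypothetical.
class TaskAttempt(NamedTuple):
    participant: str
    task: str
    seconds_to_complete: Optional[float]  # None if the task was abandoned
    errors: int

attempts = [
    TaskAttempt("P1", "locate the benefits application form", 95.0, 2),
    TaskAttempt("P2", "locate the benefits application form", 210.0, 5),
    TaskAttempt("P3", "locate the benefits application form", None, 7),  # gave up
]

completed = [a for a in attempts if a.seconds_to_complete is not None]

# Interaction metrics of the kind described above: efficiency and effectiveness.
completion_rate = len(completed) / len(attempts)
mean_time = mean(a.seconds_to_complete for a in completed)
mean_errors = mean(a.errors for a in attempts)

print(f"Task completion rate: {completion_rate:.0%}")
print(f"Mean time on task (completed attempts only): {mean_time:.0f} seconds")
print(f"Mean errors per attempt: {mean_errors:.1f}")

Perception metrics such as satisfaction, frustration, or self-efficacy would come from questionnaires or think-aloud protocols rather than from interaction records like these.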

Accessibility Evaluation

Accessibility testing is the assessment of a Web site on the basis of whether it provides equal access to all users, particularly in terms of the Web site accessibility requirements of Section 508 of the Rehabilitation Act. A number of studies have investigated the accessibility of e-government Web sites (Ellison, 2004; Jackson-Sanborn, Odess-Harnish, & Warren, 2002; Marincu & McMullin, 2004; Michael, 2004; Stowers, 2002; West, 2003; World Markets Research Centre, 2001). Studies of the accessibility of Web sites have also focused on retail, airline, tourism, employment, academic, library, distance learning, and popular Web sites, among others (Coonin, 2002; Gutierrez, Loucopoulos, & Reinsch, 2005; Jackson-Sanborn et al., 2002; Milliman, 2003; Schmetzke, 2003; Shi, 2006; Stewart, Narendra, & Schmetzke, 2005; T. Thompson, Burgstahler, & Comden, 2003; Witt & McDermott, 2004). Many of these studies have relied primarily on automated testing software (e.g., Bobby, WebXact) and do not involve users in the evaluation.
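
To illustrate the kind of rule-based checking that automated tools perform, and why it cannot substitute for expert or user involvement, the following is a minimal sketch in Python (not a reproduction of Bobby or WebXact) that uses only the standard library to flag two machine-detectable problems: images lacking ALT text and frames lacking titles. The sample markup is hypothetical.

from html.parser import HTMLParser

class SimpleAccessibilityChecker(HTMLParser):
    """Flags a few mechanically verifiable problems in an HTML document."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # A missing alt attribute leaves screen readers nothing to announce;
        # an empty alt="" is permitted for purely decorative images, so only
        # the absent attribute is flagged here.
        if tag == "img" and "alt" not in attrs:
            self.issues.append(f"<img src={attrs.get('src', '?')!r}> lacks an alt attribute")
        # Frames are expected to carry titles so users can tell them apart.
        if tag in ("frame", "iframe") and "title" not in attrs:
            self.issues.append(f"<{tag}> lacks a title attribute")

# Hypothetical page fragment used only for illustration.
sample_html = """
<html><body>
  <img src="agency-seal.gif">
  <iframe src="forms.html"></iframe>
  <img src="logo.gif" alt="Department of Example logo">
</body></html>
"""

checker = SimpleAccessibilityChecker()
checker.feed(sample_html)
for issue in checker.issues:
    print(issue)

Checks like these catch only the mechanically verifiable requirements; they say nothing about whether ALT text is meaningful, whether reading order makes sense, or whether a page actually works with assistive technologies, which is precisely the gap that expert and user testing fill.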

User-centered studies of accessibility in the networked environment are less common, though a few studies have taken a more comprehensive approach to accessibility evaluation. In two studies examining the accessibility of educational Web sites in the United Kingdom, the authors argued for using a combination of automated testing, expert testing, and user testing (King, Ma, Zaphiris, Petrie, & Hamilton, 2004; Sloan, Gregor, Booth, & Gibson, 2002). A study of educational Web sites in Iowa also used a more complex method to evaluate accessibility by using automated testing and expert testing, though not user testing (Klein et al., 2003).

Of the three types of user-centered evaluation, accessibility may be the least widely used. This is likely due in no small part to a lack of awareness of issues related to persons with disabilities among designers, developers, and evaluators of ICTs (Jaeger, 2006b). Accessibility, nevertheless, is a growing concern in the networked environment, because an inaccessible Web site literally excludes persons with disabilities.

Multimethod Evaluations of the Accessibility of E-Government Web Sites

In this section, I examine multimethod, user-centered evaluations of e-government Web sites for accessibility through the lens of a specific study, a dissertation study conducted in 2005 and 2006 (Jaeger, 2006c). This study used five evaluation methods to assess e-government Web sites in terms of the implementation of Section 508 of the Rehabilitation Act. Although the primary data from this study are reported in detail elsewhere (Jaeger, 2006a), the central finding was that e-government Web sites frequently do not comply with all of the requirements of Section 508 of the Rehabilitation Act, rendering most e-government Web sites inaccessible to some or all persons with disabilities (Jaeger, 2006c). The multiple methods of evaluation revealed that the policy in question was being neglected or unsuccessfully implemented by many federal government agencies. Instead of focusing on the findings, this discussion examines the effectiveness of the different methods of evaluation in determining levels of accessibility on the Web sites. Example instruments from the study and selected data, however, are used to illustrate the points in the discussion and to demonstrate methods that can be used to conduct user-centered accessibility testing of Web sites.

In this study, selected federal e-government Web sites were evaluated using policy analysis, expert testing, user testing, and automated testing, along with a survey administered to federal webmasters that assessed their views on accessibility (Jaeger, 2006c). Each method of evaluation was intended to play a specific role in the study, complementing one another and increasing the amount of information available. In the evaluation of e-government sites, the use of a multimethod approach to evaluation is optimal (K. M. Thompson et al., 2003). Combining policy analysis with other evaluation methods can bring more attention to an issue than single evaluation methods could alone (Gordon & Heinrich, 2004).

The different methods incorporated in the study have both benefits and limitations as part of an evaluation. Exploring the value that each component method contributed to the study allows for consideration of which methods are best suited to evaluating the implementation of the specific requirements of Section 508 on e-government Web sites in future studies. Below, the three user-centered methods (expert testing, user testing, and webmaster questionnaires) used in this study (Jaeger, 2006c) are examined in terms of factors such as effectiveness, efficiency, feasibility, and impact on the overall study. The other two methods of data collection are not discussed: the policy analysis provided background and context, while automated testing proved to be of little value compared with the other data collection methods and, even if it had been useful, could not be considered a user-centered method.

Expert Testing

Expert testing is evaluation conducted by persons knowledgeable about the design and development of Web sites, and it is a method by which a broad range of issues can be identified. To engage in expert testing, one or more people with the skills to conduct it must be available, and they need to have an established system or rubric from which to work (Lazar, 2006). When these conditions are met, expert testing is a very valuable method for evaluating the accessibility of e-government Web sites. The key obstacle in conducting expert testing for accessibility is that it requires persons who understand and can identify the barriers to accessibility in design and the impacts of these barriers on users with diverse disabilities, while also understanding the legal requirements for accessibility and how they should be properly implemented. In this study, expert testing proved very useful in revealing broad issues that would affect users with many different types of disabilities (Jaeger, 2006c). Table 1 presents selected expert testing questions from the study that were used to evaluate the accessibility of e-government sites.

These kinds of questions are representative of the types of questions that are necessary for achieving a broad understanding of the accessibility of a site in terms of the specific Section 508 requirements. The testing of the sites was conducted to ensure as wide an analysis of the sites as possible. Sites were tested through multiple browsers to see if there were significant differences in levels of accessibility and tested for compatibility with a range of technologies related to different types of disabilities, including narrators and screen readers, screen enlargement software, magnifiers, alternate color schemes, and alternate navigation devices, among others. By using questions such as those in Table 1 and by testing using a range of assistive technologies, expert testing identified the following major accessibility barriers among the e-government sites evaluated:

* compatibility problems with screen enlargement;

* compatibility problems with screen readers;

* compatibility problems with alternate color schemes;

* the use of Flash animations and moving images to convey content;

* cluttered layout and organization;

* audio content without a text equivalent;

* graphics lacking ALT tags (which provide text replacements for the graphics);

* dropdown and mouse-over menus that are difficult to use; and

* problems with the consistency and clarity of context, orientation, and navigation.

These major problems were themes across many of the sites tested (Jaeger, 2006c). The expert testing also found many smaller accessibility barriers on most of the sites evaluated.
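
As a concrete illustration of two of the barriers listed above (graphics lacking ALT tags and audio content without a text equivalent), the sketch below contrasts hypothetical markup exhibiting those problems with a repaired version. The fragments are shown as plain Python strings purely for illustration and are not drawn from the sites evaluated in the study.

# Hypothetical markup exhibiting two of the barriers identified above.
INACCESSIBLE_FRAGMENT = """
<img src="org-chart.gif">
<a href="briefing.mp3">Listen to the agency briefing</a>
"""

# A repaired version: the image gains a descriptive alt attribute, and the
# audio content gains a text equivalent in the form of a linked transcript.
REPAIRED_FRAGMENT = """
<img src="org-chart.gif"
     alt="Organization chart: the Office of the Secretary oversees four divisions">
<a href="briefing.mp3">Listen to the agency briefing (MP3)</a>
<a href="briefing-transcript.html">Read the transcript of the agency briefing</a>
"""

print("Before repair:", INACCESSIBLE_FRAGMENT)
print("After repair:", REPAIRED_FRAGMENT)

Comparable repairs exist for several of the other barriers in the list, but problems such as cluttered layout, confusing navigation, and assistive-technology incompatibilities cannot be reduced to a one-line markup fix, which is part of why they surfaced through expert rather than automated review.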

Expert testing is particularly important because it is very unlikely that the evaluation of an e-government site could be conducted so thoroughly as to include user tests representing people with all the different types and levels of disabilities that must be accounted for in designing for accessibility. Such representativeness in the user tests would require a great number of user tests involving people with an array of disabilities (visual impairments, hearing impairments, mobility impairments, learning disabilities, cognitive disabilities, and others) and with a range of severity of impairment within each type of disability. It is unlikely that all those people could be found for testing, even if there were time and financial resources to accommodate that many user tests.

Expert testing, fortunately, allows for fewer user tests by identifying the potential issues for people with different disabilities without needing representative users from each group. Although the expert testing did not reach the same depth or granularity in identifying problems for any particular disability as a user with that disability would, expert testing did identify the major accessibility issues on each site (Jaeger, 2006c). The user testing enriched the findings of the expert testing: The expert testing provided breadth, while the user testing provided depth. As such, expert testing reduced the burden of user testing.

User Testing

User testing, the testing of a Web site by users under the guidance of a researcher, provides a great depth of information from the perspective of each user who is tested. Conducting user testing creates a detailed portrait of the accessibility of a Web site from the perspective of people who have the same disability as the person being tested. Although a poorly designed element of a site will sometimes cause accessibility problems for people with different kinds of disabilities, that will not always be the case. People with different levels of severity of the same kind of disability will often experience different accessibility issues on the same site. People involved in user testing, then, can best provide information related to the experiences of people like themselves.

Because user testing provides unparalleled richness of detail for those disabilities represented in the user population, it is an extremely important method for evaluating accessibility. However, it is also labor intensive and time-consuming. The first and most pressing difficulty with user testing is finding users. Users with the types of disabilities that are being tested not only have to be located, but these users must be interested in participating, have time to participate, and have the requisite computer skills. Also, the right assistive technologies must be on hand so that the users can operate the computer as they normally would. Once users are identified and recruited, the actual user testing procedures can be lengthy, especially if certain users require accommodations or breaks to be able to participate in the testing.

Table 2 includes sample questions that can be used during the course of user testing to elicit the thoughts of users with disabilities about the site being evaluated. Background information on the impacts of each participant’s disability on the use of the Internet in general is also essential information to gather in understanding the barriers to accessibility on the sites that are being tested.

In the case of this study, the user tests took, on average, between 1 and 2 hours to complete (Jaeger, 2006c). Analyzing the results of user testing requires effort, because data must be extracted and synthesized from the large number of comments that each user makes during the course of the testing. However, the users identified more detailed accessibility barriers than the expert testing did. A sample of the accessibility problems identified by users is summarized in Table 3 (Jaeger, 2006c).

The effort required to conduct user testing is extremely worthwhile. Because the depth of detail provided by user testing, as demonstrated in Table 3, is not available through any other methodology, the impact of conducting user testing on an evaluation is sizable. However, given the intense time requirements and other constraints, it is unlikely that a large number of user tests could feasibly be conducted in an accessibility evaluation. The combination of user testing and expert testing is particularly valuable, because the latter provides breadth and the former provides depth.

A particularly valuable avenue for future research would be comparing face-to-face user testing and remote user testing (i.e., testing conducted via telephone and Internet communication) in accessibility studies. The remote user testing data in this study were as meaningful as the face-to-face data, with the verbal script for the face-to-face testing converted to an interactive script conveyed electronically. In fact, the remote testing data were usually more reflective and thoughtful, while the face-to-face data were generally more spontaneous. Remote user testing seems particularly vital to accessibility testing, because it allows persons with very significant disabilities, who might have difficulty reaching a lab setting, to participate; it allows researchers to involve participants with disabilities who are not locally available for testing; and it allows participants with very specific technology needs to work at computers they are comfortable with and that have all of the assistive technologies they need.

Webmaster Questionnaire

In this study, a questionnaire was sent to webmasters to gauge their perceptions of the accessibility of the Web sites being studied. The webmaster questionnaire provided very valuable data without too much difficulty. Compared with expert testing and user testing, a questionnaire is not time-consuming or labor intensive, even taking into account the time needed to find contact information and send follow-up e-mails and reminders. Because it was distributed free of charge via e-mail, it was cost effective.

Table 4 includes sample questions from this study that can be asked of webmasters and Web site developers to assess the considerations of disability and accessibility in the development of their sites. In the context of a multimethod study, the webmaster questionnaire proved very insightful (Jaeger, 2006c). The responses to the questionnaire exposed many issues that might otherwise have been missed in the study. Most significantly, the responses to the questionnaire revealed the agencies’ perceptions of the accessibility of their Web sites, which often did not match the findings of the user testing and the expert testing (Jaeger, 2006c).

The questionnaire also revealed a great deal about problems of communication between the providers and the users of e-government Web sites. The questionnaire made evident some very sizable gaps in the ability to reach the developers of many e-government sites, because some sites lacked contact information, had invalid e-mail addresses posted, only sent form responses, or had policies of not responding to e-mail contacts. This finding is a problem both in terms of accessibility and in terms of the larger issues of the overall responsiveness and transparency of e-government. Without the webmaster questionnaire, these key points would not have been identified in the study (Jaeger, 2006c). Ultimately, the webmaster questionnaire was a unique enhancement to the multimethod evaluation process, providing information that could not be provided by any of the other methods of evaluation. Within the context of this study, the questionnaire proved highly valuable in illuminating how accessibility is considered in the development and implementation of e-government Web sites, as well as the contextual issues surrounding their accessibility.

User-Centered Evaluations, Universal Access, and the Future of E-Government Policies

On the basis of the findings of this study, it appears that the combination of policy analysis, expert testing, user testing, and webmaster questionnaires may provide an effective user-centered approach to the evaluation of the accessibility of e-government Web sites in terms of the requirements of Section 508 of the Rehabilitation Act (Jaeger, 2006c). Policy analysis provided the social, legal, and political context, as well as the framing issues related to the design and development of accessible technologies. Expert testing provided the broad perspective of issues related to the spectrum of different disabilities that face barriers in the online environment. User testing provided deep, detailed information directly from the viewpoints of users with particular disabilities. The webmaster questionnaire provided a contextual perspective and insights into agency thinking and policy that enhance the information generated by the other methods. Automated testing, however, did not seem to add sufficient benefits to justify its inclusion in multimethod evaluations of accessibility.

This combination of methods helps address the limitations of the methods individually. Policy analysis, though vital to understanding the context of accessibility, does not involve the perspectives of users. Expert testing, although it can touch on all issues of accessibility in a broad sense, does not have the same depth of perspective as user testing. The findings provided by user testing are limited in scope to the disabilities present among the users involved, so expert testing can ensure breadth. Webmaster questionnaires provide insight into how those creating and managing the sites consider users with disabilities. These methods all require time and effort, and each raises certain issues of feasibility. However, the combined use of these methods seems well suited to evaluating the levels of accessibility of e-government Web sites.

Future user-centered studies of the accessibility of e-government Web sites, while drawing on the findings of this research, need to explore other potential combinations of methods for evaluating e-government Web sites, including methods not explored in this study. Other methods that might be used include case studies, interviews, and focus groups with users with disabilities or with government Web developers. Future studies could test these methods and evaluate how they work individually and in conjunction with the methods used in this study.

The concept of universal design may also hold promise in considering how research can increase the accessibility of e-government Web sites. Designing government Web sites to be accessible to all users from the outset, rather than trying to make them accessible later, would be significant in increasing the accessibility of e-government. However, the design, development, and implementation of ICTs predominantly fail to account for issues of accessibility for persons with disabilities (Jaeger, 2006b). “An understanding of disability is still not regarded as something that should be considered from the outset and made integral to the shaping of existing and new technologies” (Goggin & Newell, 2000, p. 130).

ICTs should be designed from the outset to be accessible for all and to be “flexible enough to work with the various assistive technology devices that a person with a disability might use and to provide relevant content in an accessible modality” (Lazar, Beere, Greenidge, & Nagappa, 2003, p. 331). Many designers, however, substitute usability principles for accessibility principles or simply ignore accessibility completely (Goggin & Newell, 2000; Keates & Clarkson, 2003; Lazar, Dudley-Sponaugle, & Greenidge, 2004; Powlik & Karshmer, 2002; Stephanidis & Savidis, 2001). Furthermore, it is very difficult to gauge how accessible an ICT will be until actual users work with the technology (Culnan, 1983), and the use of input from persons with disabilities in the design process of Web sites and other ICTs is extremely rare (Jaeger, 2006b). The failure to create initially accessible ICTs is particularly problematic, because access issues involving technology are inherently more complex than other access issues (Culnan, 1983, 1984, 1985). Without universal design, universal access cannot be achieved, because universal access inherently means designing, developing, and implementing ICTs to meet the needs of all users (Buhler, 2001; Jacko & Vintense, 2001).

Access as a mode of equality for persons with disabilities is a concept that often does not receive adequate consideration. A key reason is that society “still generally perceives all disability as a purely internal state” (Goering, 2002, p. 375), so the impacts of social structures, even the design of ICTs, on persons with disabilities are frequently ignored. In spite of laws such as Section 508 of the Rehabilitation Act, Web sites clearly are not being designed to be universally accessible from the beginning.

A vast gap remains between the rhetoric of public inclusion that mandates everything from universal design to inclusive classrooms and the battles that still have to be fought on a daily basis to ensure their availability – battles which not everyone can or will fight. (Rapp & Ginsburg, 2001, p. 541)

User-centered evaluation of existing e-government Web sites can do much to improve accessibility, but true equal access to e-government will likely only occur if e-government Web sites are designed to be accessible from the start and tested for accessibility by persons with disabilities during development rather than after implementation.

As e-government becomes more central to the lives of citizens, the need to develop user-centered evaluation strategies for e-government accessibility takes on greater importance. Much work remains in creating user-centered evaluation strategies to assess the accessibility of e-government. In this article, I have attempted to summarize the issues and approaches taken toward user-centered evaluation of the accessibility of e-government, intending to demonstrate the application of the issues, examine the interactions of multiple methods of evaluation, and raise questions for future research in the evaluation of e-government. In the networked environment, the importance of e-government will continue to expand, making the development of user-centered evaluation strategies for the accessibility of e-government Web sites a significant area for future research, particularly because persons with disabilities still face so many barriers to equal access.

Table 1

Sample Questions for Expert Testing

1. Provides an audio/video/textual equivalent for every element related to content and services?

2. Alternative formats of elements of multimedia presentations synchronize to the appropriate parts of the presentation?

3. All information conveyed through color also conveyed without color?

4. Content clear and organized so as to be readable to any user?

5. Provides context and orientation information at all times?

6. Provides clear navigation mechanisms?

7. Identifies row and column headers on tables?

8. Does not rely on moving pictures or Flash to convey content?

9. Works comprehensively with assistive technologies?

10. All electronic forms allow users with assistive technologies to access the information, field elements, and functionality required for completion and submission of the forms, including directions and cues?

11. Text-only equivalent page available for every page that cannot otherwise be made completely compliant with all other requirements?

12. Ensures user control of time-sensitive content changes?

13. Users not timed out of applications?

14. Ensures direct accessibility of embedded user interfaces?

Table 2

Sample Questions for User Testing

1. What assistive technologies are you currently using (if any)?

2. Are you able to navigate the site without difficulty? If not, what accessibility problems did you face in navigating?

3. Are you able to read the text on the site without difficulty? If not, what accessibility problems did you face in reading?

4. Are you able to use the search function on the site without difficulty? If not, what accessibility problems did you face in searching?

5. Are you able to use particular applications (e.g., download forms, view audio or video, fill out forms) on the site without difficulty? If not, what accessibility problems did you face in using these applications?

6. Do you feel that the site as a whole is working well with the assistive technology you are using? Please specify.

7. Do you notice problems that might affect people with other types of disabilities? Please specify.

Table 3

Selected Findings of User Testing

* Elements do not enlarge

* Compatibility problems with screen readers

* Compatibility problems with alternate color schemes

* Compatibility problems with screen enlargements

* Uses Flash animation and moving images to convey content

* Font size too small

* Lack of ALT tags

* Spacing between lines not large enough

* Links and accompanying descriptors too small

* Header text too small

* Problems with printer-friendly version of site

* Problems with navigation elements

* Navigation elements confusing and extremely hard to use

* Uses graphics and color to convey content

* Some buttons not working

* Search function problems

* Tables too closely spaced

* Small font on deeper pages

* Tabs too small

* Color scheme hard to read

* Insufficient spacing between lines and individual words

* Inconsistent layout

* Inconsistent navigation

* Navigation elements too small

* Poor use of available space

* Mouse-over menus difficult to use

* Too much scrolling required

* Lack of text equivalents for audio content

* Pages cluttered, busy, and poorly organized

* Insufficient navigation elements

Table 4

Sample Questions for Webmasters and Web Site Developers

1. Do you feel that the accessibility of your Web site for persons with disabilities is a priority within your agency?

2. When working to make your Web site accessible for persons with disabilities, where do you turn for resources and guidelines?

3. Do you perform accessibility testing on your Web site to test how well it can be used by persons with disabilities? If so, at what point in the Web site development process is this testing done?

4. What factors (e.g., staff time, staff skills, funding, agency mission) influence the priority accorded to the accessibility of your Web site for persons with disabilities?

5. Have you received any feedback from users of your site regarding its accessibility? If so, were the comments generally positive or negative?

6. If you feel that the accessibility of your Web site could be improved, what resources would you find beneficial in working to improve it?

References

Bertot, J. C., & Jaeger, P. T. (2006). Editorial: User-centered e- government: Challenges and benefits for government Websites. Government Information Quarterly, 23, 163-168.

Bertot, J. C., McClure, C. R., Thompson, K. M., Jaeger, P. T., & Langa, L. A. (2003). Florida electronic library: Pilot project functionality assessment for the Florida Division of Library Services. Tallahassee, FL: Information Use Management and Policy Institute.

Bertot, J. C., Snead, J. T., Jaeger, P. T., & McClure, C. R. (2006). Functionality, usability and accessibility: Iterative user-centered assessment strategies for digital libraries. Performance Measurement and Metrics, 7(1), 17-28.

Buhler, C. (2001). Empowered participation of users with disabilities in universal design. Universal Access in the Information Society, 1, 85-90.

Chen, H. (1990). Theory-driven evaluations. Newbury Park, CA: Sage.

Coonin, B. (2002). Establishing accessibility for e-journals: A suggested approach. Library Hi Tech, 20(2), 207-213.

Cullen, R., & Hernon, P. (2006). More citizen perspectives on e-government. In P. Hernon, R. Cullen, & H. C. Relyea (Eds.), Comparative perspectives on e-government: Serving today and building for tomorrow (pp. 209-242). Lanham, MD: Scarecrow.

Culnan, M. J. (1983). Environmental scanning: The effects of task complexity and source accessibility on information gathering behavior. Decision Sciences, 14(2), 194-206.

Culnan, M. J. (1984). The dimensions of accessibility to online information: Implications for implementing office information systems. ACM Transactions on Office Information Systems, 2(2), 141-150.

Culnan, M. J. (1985). The dimensions of perceived accessibility to information: Implications for the delivery of information systems and services. Journal of the American Society for Information Science, 36(5), 302-308.

Dalrymple, P. W., & Zweizig, D. L. (1992). Users’ experiences of information retrieval systems: An exploration of the relationship between search experience and affective measures. Library & Information Science Research, 14, 167-181.

Education Sciences Reform Act, Pub. L. No. 107-279, 116 Stat. 1940 (2002).

Ellison, J. (2004). Assessing the accessibility of fifty United States government Web pages: Using Bobby to check on Uncle Sam. First Monday, 9(7). Available at http://www.firstmonday.org/issues/issue9_7/ellison/index.html

Eschenfelder, K. R., Beachboard, J. C., McClure, C. R., & Wyman, S. K. (1997). Assessing US federal government Websites. Government Information Quarterly, 14, 173-189.

Fenton, R. (2004). United Kingdom adoption agency Web sites. First Monday, 9(2). Available at http://www.firstmonday.org/issues/issue9_2/fenton/index.html

Gibbons, S., Peters, T. A., & Bryan, R. (2003). E-book functionality: What libraries and their patrons want and expect from electronic books. Chicago: LITA.

Ginsburg, A., & Rhett, N. (2003). Building a better body of evidence: New opportunities to strengthen evaluation utilization. American Journal of Evaluation, 24, 489-498.

Goering, S. (2002). Beyond the medical model? Disability, formal justice, and the exception for the “profoundly impaired.” Kennedy Institute of Ethics Journal, 12(4), 373-388.

Goggin, G., & Newell, C. (2000). An end to disabling policies? Toward enlightened universal service. Information Society, 16, 127-133.

Gordon, R., & Heinrich, C. J. (2004). Modeling trajectories in social program outcomes for performance accountability. American Journal of Evaluation, 25, 161-189.

Government Performance and Results Act, Pub. L. No. 103-62, 107 Stat. 287 (1993).

Grob, G. F. (2003). A truly useful bat is one found in the hands of a slugger. American Journal of Evaluation, 24, 499-505.

Gutierrez, C. F., Loucopoulos, C., & Reinsch, R. W. (2005). Disability-accessibility of airlines’ Web sites for U.S. reservations online. Journal of Air Transport Management, 11, 239-247.

Gupta, M. P., & Jana, D. (2003). E-government evaluation: A framework and case study. Government Information Quarterly, 20, 365-387.

Hansberger, A. (2001). Policy and program evaluation, civil society, and democracy. American Journal of Evaluation, 22, 211-228.

Henry, G. T. (2003). Influential evaluations. American Journal of Evaluation, 24, 515-524.

Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluations’ influence on attitudes and action. American Journal of Evaluation, 24, 293-314.

Hert, C. A. (2001). User-centered evaluation and its connection to design. In C. R. McClure & J. C. Bertot (Eds.), Evaluating networked information services: Techniques, policy, and issues (pp. 155-174). Medford, NJ: Information Today.

Holliday, I. (2002). Building e-government in East and Southeast Asia: Regional rhetoric and national (in)action. Public Administration and Development, 22, 323-335.

Huang, C. J., & Chao, M.-H. (2001). Managing WWW in public administration: Uses and misuses. Government Information Quarterly, 18, 357-373.

Jacko, J. A., & Vintense, H. S. (2001). A review and reappraisal of information technologies within a conceptual framework for individuals with disabilities. Universal Access in the Information Society, 1, 56-76.

Jackson-Sanborn, E., Odess-Harnish, K., & Warren, N. (2002). Web site accessibility: A study of six genres. Library Hi Tech, 20(3), 308-317.

Jaeger, P. T. (2004a). Beyond section 508: The spectrum of legal requirements for accessible e-government Websites in the United States. Journal of Government Information, 30(4), 518-533.

Jaeger, P. T. (2004b). The social impact of an accessible e-democracy: The importance of disability rights laws in the development of the federal E-government. Journal of Disability Policy Studies, 15(1), 19-26.

Jaeger, P. T. (2006a). Assessing Section 508 compliance on federal e-government Websites: A multi-method, user-centered evaluation of the accessibility of e-government. Government Information Quarterly, 23, 169-190.

Jaeger, P. T. (2006b). Telecommunications policy and individuals with disabilities: Issues of accessibility and social inclusion in the policy and research agenda. Telecommunications Policy, 30(2), 112-124.

Jaeger, P. T. (2006c). Multi-method evaluation of U.S. federal electronic government Websites in terms of accessibility for persons with disabilities (Doctoral dissertation, Florida State University). Dissertation Abstracts International, 67(04).

Jaeger, P. T., & Bowman, C. A. (2005). Understanding disability: Inclusion, access, diversity, & civil rights. Westport, CT: Praeger.

Keates, S., & Clarkson, P. J. (2003). Countering design exclusion: Bridging the gap between usability and accessibility. Universal Access in the Information Society, 2, 215-225.

King, N., Ma, T.H.-Y., Zaphiris, P., Petrie, H., & Hamilton, F. (2004). An incremental usability and accessibility evaluation framework for digital libraries. In P. Brophy, S. Fisher, & J. Craven (Eds.), Libraries without walls 5: The distributed delivery of library and information services (pp. 123-131). London: Facet.

Klein, D., Myhill, W., Hansen, L., Asby, G., Michaelson, S., & Blanck, P. (2003). Electronic doors to education: Study of high school Website accessibility in Iowa. Behavioral Sciences and the Law, 21, 27-49.

Lafond, C. L., Toomey, T. L., Rothstein, C., Manning, W., & Wagenaar, A. C. (2000). Policy evaluation research: Measuring the independent variables. Evaluation Review, 24(1), 92-101.

Lazar, J. (2006). Web usability: A user-centered design approach. Boston: Pearson.

Lazar, J., Beere, P., Greenidge, K., & Nagappa, Y. (2003). Web accessibility in the mid-Atlantic United States: A study of 50 homepages. Universal Access in the Information Society, 2, 331-341.

Lazar, J., Dudley-Sponaugle, A., & Greenidge, K.-D. (2004). Improving Web accessibility: A study of webmaster perceptions. Computers in Human Behavior, 20, 269-288.

Mabry, L. (2002). Postmodern evaluation-or not? American Journal of Evaluation, 23, 141-157.

MacNeil, C. (2002). Evaluator as steward of citizen deliberation. American Journal of Evaluation, 23, 45-54.

Marincu, C., & McMullin, B. (2004). A comparative analysis of Web accessibility and technical standards conformance in four EU states. First Monday, 9(7). Available at http://www.firstmonday.org/issues/issue9_7/marincu/index.html

Mechling, J., & Booz Allen Hamilton. (2002). Building a methodology for measuring the value of e-services. Washington, DC: Booz Allen Hamilton.

Michael, S. (2004, April 19). Making government accessible-online. Federal Computer Week, pp. 24-30.

Milliman, R. E. (2002). Website accessibility and the private sector: Disability stakeholders cannot tolerate 2% access! Information Technology and Disabilities, 8(2). Available at http://www.rit.edu/~easi.itd.htm

Norman, K. L., & Panizzi, E. (2006). Levels of automation and user participation in user testing. Interacting With Computers, 18, 246-264.

Powlik, J. J., & Karshmer, A. I. (2002). When accessibility meets usability. Universal Access in the Information Society, 1, 217-222.

Rapp, R., & Ginsburg, F. (2001). Enabling disability: Rewriting kinship, reimagining citizenship. Public Culture, 13(3), 533-556.

Rehabilitation Act of 1973, 29 U.S.C. § 701 et seq. (1973).

Ritchie, H., & Blanck, P. (2003). The promise of the Internet for disability: A study of online services and Website accessibility at centers for independent living. Behavioral Sciences and the Law, 21, 5-26.

Sanders, J. R. (2001). A vision for evaluation. American Journal of Evaluation, 22, 363-366.

Schmetzke, A. (2003). Web accessibility at university libraries and library schools: 2002 follow-up study. In M. Hricko (Ed.), Design and implementation of Web-enabled teaching tools (pp. 145-189). Hershey, PA: Idea Group.

Segerholm, C. (2003). Researching evaluation in national (state) politics and administration: A critical approach. American Journal of Evaluation, 24, 353-372.

Shi, Y. (2006). The accessibility of Queensland visitor information centres’ Websites. Tourism Management, 27, 829-841.

Sloan, D., Gregor, P., Booth, P., & Gibson, L. (2002). Auditing accessibility of UK higher education Websites. Interacting With Computers, 14, 313-325.

Smith, A. (2001). Applying evaluation criteria to New Zealand government websites. International Journal of Information Management, 21, 137-149.

Smith, B., Fraser, B. T., & McClure, C. R. (2000). Federal information policy and access to Web-based information. Journal of Academic Librarianship, 26, 274-281.

Stake, R. E. (2004). How far dare an evaluator go toward saving the world? American Journal of Evaluation, 25, 103-107.

Stephanidis, C., & Savidis, A. (2001). Universal access in the information society: Methods, tools, and interactive technologies. Universal Access in the Information Society, 1, 40-55.

Stewart, R., Narendra, V., & Schmetzke, A. (2005). Accessibility and usability of online library databases. Library Hi Tech, 23(2), 265-286.

Stowers, G.N.L. (2002). The state of federal Websites: The pursuit of excellence. Available at http://www.endowment.pwcglobal.com/pdfs/StowersReport0802.pdf

Sweeney, M., Maguire, M., & Shackel, B. (1993). Evaluating user-machine interaction: A framework. International Journal of Man-Machine Studies, 38, 689-711.

Thompson, K. M., McClure, C. R., & Jaeger, P. T. (2003). Evaluating federal Websites: Improving e-government for the people. In J. F. George (Ed.), Computers in society: Privacy, ethics, and the Internet (pp. 400-412). Upper Saddle River, NJ: Prentice Hall.

Thompson, T., Burgstahler, S., & Comden, D. (2003). Research on Web accessibility in higher education. Journal of Information Technology and Disabilities, 9(2). Available at http://www.rit.edu/~easi/itd/itdv09n2/thompson.htm

Wagenaar, A. C., Harwood, E. M., Silianoff, C., & Toomey, T. L. (2005). Measuring public policy: The case of beer keg registration laws. Evaluation and Program Planning, 28, 359-367.

Wallace, D. P. (2001). The nature of evaluation. In D. P. Wallace & C. V. Fleet (Eds.), Library evaluation: A casebook and can-do guide (pp. 209-220). Westport, CT: Libraries Unlimited.

West, D. M. (2003). Achieving e-government for all: Highlights from a national survey. Available at http://www.benton.org/publibrary/egov/access2003.doc

Witt, N., & McDermott, A. (2004). Web site accessibility: What logo will we use today? British Journal of Educational Technology, 35, 45-56.

World Markets Research Centre. (2001). Global e-government survey. Providence, RI: Author.

Paul T. Jaeger

University of Maryland

Paul T. Jaeger, PhD, JD, is an assistant professor in the College of Information Studies and director of the Center for Information Policy and Electronic Government at the University of Maryland. His research focuses on the ways in which law and public policy shape access to information in public forums, such as the Internet, libraries, education, and e-government. Issues of disability and accessibility are central to much of his research.

Copyright PRO-ED Journals Jun 2008
