Program Theory: A Framework for Theory-Driven Programming and Evaluation

The recent publication of Therapeutic Recreation practice models has strengthened the conceptual basis for TR practice. The critiques of these models argued for greater specificity of the model tenets and greater clarification of their applicability to professional practice. Therefore, an important next step is to advance knowledge of how these models can be used to develop theory-based programming. Program theory is presented as a conceptual framework and method for developing theory-based practice. Using the Self-Determination and Enjoyment Enhancement (SDEE) model (Dattilo, Kleiber, & Williams, 1998), the process of constructing a program theory is reviewed, a conceptual example is provided, and the implications for program evaluation are discussed.

KEY WORDS: Program Theory, Therapeutic Recreation Practice Models, Program Theory Evaluation

Practitioners within the field of therapeutic recreation (TR) have long been expected to demonstrate the efficacy of their programs. Increasingly, practitioners in a wide range of human service and community-based programs (e.g., after-school programming, health promotion and prevention, social services, youth development) are also expected to provide empirical evidence that their programs work (Easterling, 2000; Shalock & Bonham, 2003). Evidence of program efficacy is usually achieved through impact evaluation studies that examine whether or not desired outcomes occur. While these impact studies often provide empirical evidence of outcomes, they frequently do not adequately account for how the outcomes occurred (see Hamilton, 1980).

A more compelling evaluation would account for the processes by which the program outcomes were achieved (Davidson, 2000; Donaldson, 2001). However, this type of theory-driven evaluation needs to extend from theory-driven program planning. As recent calls for greater use of theory-driven programming and evaluation in the field of recreation illustrate, the use of theory is not yet a common practice (Baldwin, 2000; Caldwell, 2000; Mobily, 1999; Payne, 2002).

There are likely numerous reasons why theory is not more readily applied in real life program contexts. The scope of many social science theories may make it difficult for practitioners to determine how tenets of a theory affect programming decisions (Finney & Moos, 1992). Whereas researchers are generally concerned with advancing theory through empirical tests of that theory, practitioners working in real life situations may design programs based not on theory, but on best practices or conventions that have evolved over time through experience (Sussman & Sussman, 2001). While there may be a general consensus on the merit of these practices, they rarely have been tested through systematic research.

In some cases practitioners may choose, as their ultimate goal, efficacy on several outcomes and base programs not on a single theory but on a number of theories (West & Aiken, 1997). Practitioners may also prefer to use program components or activities that affect more than a single outcome, whereas researchers may be interested in a specific theory and distinct program components that affect a specific program outcome (West & Aiken). Thus, despite the increased calls to employ theory-driven programming and evaluation, doing so in real life program contexts means adequately overcoming these challenges.

Recently published TR practice models have provided one way to better integrate issues of theory and practice. In a review of practice models presented in the Therapeutic Recreation Journal special series, Mobily (1999) concluded that one of the contributions of these models was a framework for examining such practice issues as dosage, population specificity, and explaining how programs work. Mobily focused on the fact that multiple models drawn from established theories should support more rigorous examination of TR practice. However, an additional theme found in the published critiques of these models was that they lacked specificity. As Freysinger (1999) noted, the usefulness of the models lies in the guidance they provide for deriving research questions, that is, testable theoretical tenets about how programs work. If the models do not provide such direction, their contribution to linking theory and practice is limited.

Thus, there is the need to better define how TR models can influence theory-driven practice. Better theory-based programming can then support analysis of how programs produce desired outcomes.

Program theory is a way of linking practice and theory, and the purpose of this article is to discuss program theory as a basis for theory-driven programming and evaluation. Elements of a program theory and the relationship between program theory and TR practice models are reviewed. Using the Self-Determination and Enjoyment Enhancement (SDEE) model (Dattilo, Kleiber, & Williams, 1998), a conceptual example of a program theory is provided and the contributions and limits of using principles underlying program theory for program development and evaluation are discussed.

Two Levels of Theory

Adequately applying theory to program design and addressing the question of how programs work is fostered by thinking about two distinct levels of theory (West & Aiken, 1997). If one begins with a clear and specific program outcome (also referred to as a factor or condition), then TR programming is informed in part by an understanding of theory related to the etiology of that outcome (Johnson & Pandina, 2001). However, a theory concerning this outcome does not generally address how interventions or prevention programs can affect that outcome (West & Aiken). A second layer of theory is the program theory, which specifies precise operational methods (i.e., program components) that affect the desired proximal program outcomes (also called mediators).

Conventionally, the term program theory often incorporates the psychosocial theory that links proximal and distal program outcomes. Thus, the program theory explains how program components affect proximal outcomes and how these proximal outcomes affect distal outcomes.

Program Theory

Rogers (2000) described a program theory as an explicit representation of the “mechanisms by which program activities are understood to contribute to the intended outcomes” (p. 209). Program theory is a framework that guides practice and is “a specification of what must be done to achieve the desired goals, what other important impacts may also be anticipated, and how these goals and impacts could be generated” (Chen & Rossi, 1992, p. 43). It establishes “links between what programs assume their activities are accomplishing and what actually happens at each small step along the way” (Weiss, 2000, p. 35).

To the extent that TR practice models specify the mechanisms through which learning and personal changes are theorized to occur in TR, they meet the definition of a program theory (Rogers, 2000; Rogers, Hacsi, Petrosino, & Huebner, 2000). However, because TR practice models were designed to encompass a wide array of practice settings, they, like many other social science theories, tend to be relatively broad in scope, vague in regard to program outcomes, and often difficult to interpret. That is, TR practice models do not completely function as program theory because they fail to adequately articulate distinct program components and their links to outcomes. A program theory for an actual program, designed for a particular context, requires a narrow frame of reference (Reynolds, 1998). The two layers of theory previously described are often not clearly articulated in descriptions of a TR practice model.

In a program theory, the processes that link program components with proximal and distal outcomes are explicitly defined. For example, Bickman (1996) described how program components were linked to proximal, intermediate, and ultimate (i.e., distal) outcomes during the intake, assessment, and treatment phases of mental health services. Client intake criteria (program component) were established to increase access to mental health services (proximal outcome). If this component and outcome were successful, then two intermediate outcomes were hypothesized: increased number of clients served and increased client satisfaction with intake procedures. It was then hypothesized that better intake procedures influenced greater client participation in treatment planning during the assessment phase of the program. The timing of assessment (another program component) was hypothesized to affect the proximal outcome of better treatment planning. Intake and assessment outcomes were then linked to treatment services (program components) representing a larger continuum of treatment services. The program components and outcomes associated with the intake, assessment, and treatment phases were hypothesized to affect ultimate or distal outcomes of improved mental health and quicker recovery.

It is important to note that program theory is more than a flow chart of program processes because, as Rogers (2000) stated, flow charts do not necessarily explain “how program activities are understood to lead to intended outcomes” (p. 227) and they do not “convey what it is about the program activities that seems to help bring about the goal” (p. 227). Again, theory is employed at two levels: the conceptualization of mediating processes that link program components to proximal outcomes, and the psychosocial theory that explains how proximal outcomes mediate distal outcomes (West & Aiken, 1997).

Elements of a Program Theory: Modification and Refinement of a TR Practice Model

Whether working with an existing social science theory or a TR practice model, the application of the theory to guide program design and evaluation requires a refined and explicit representation of the function of program components and resulting mediational program processes. Refining and specifying a TR practice model into a testable program theory involves making explicit the theoretical tenets derived from the model, program context, program components and activities, and desired outcomes. Researchers use various labels for these program characteristics, but they are generally summarized as: (a) the problem area or behavior to be addressed by the program, (b) the target population, (c) context conditions, (d) program content, or skills to be acquired that will be sufficient to produce an effect (i.e., program components), and (e) key responses and outcomes of the program (see Reynolds, 1998).

To illustrate, a program theory can be developed from the Self-Determination and Enjoyment Enhancement (SDEE) practice model (Dattilo et al., 1998). This model originates in established psychosocial theories of optimal experience (Csikszentmihalyi, 1990) and self-determination (Ryan & Deci, 2000), which are predominant in the fields of TR and Leisure Studies. Furthermore, as noted by Mobily (1999), the SDEE model “shows remarkable fidelity to its theoretical origins . . . The result of the close correspondence between theory and practice is that Dattilo et al. are able to provide clear methods for producing optimal experiences and favorable changes in the TR environment” (p. 185).

Overview of the SDEE Model

The SDEE model (Dattilo et al., 1998) is a “self-reinforcing” model whereby enjoyment in leisure is influenced by activity engagement associated with self-determination, intrinsic motivation, perceptions of manageable challenge, and investment of attention. Enjoyment of activity is hypothesized to affect functional improvement. Dattilo et al. have also incorporated factors associated with activity settings, such as choice, goal-setting, and feedback, that affect intrinsic motivation, self-determination, challenge, investment of attention, and enjoyment. However, despite its importance, the process of internalization, which also explains how self-determination is enhanced (Ryan & Deci, 2000), was only briefly mentioned.

For each aspect of the model, Dattilo et al. (1998) proposed how it would be linked to practice. They proposed that: (a) self-determination is associated with making decisions and developing self-awareness, (b) intrinsic motivation is associated with positive feedback and focus on internal standards, (c) manageable challenge is associated with skill assessment and activity adaptations, and (d) investment of attention is associated with reducing distractions and maladaptive attributions. Their presentation of the model in TR programming format (assessment, planning, implementation, and evaluation) provided a very generalized view of real life program contexts. Figure 1 summarizes their discussion of the SDEE model elements within each programming phase.

Consistent with the development of a program theory, Figure 1 was designed to carve the SDEE model into successive small steps. This successive small-step format is essential for evaluating the theoretical and program logic and helps identify strengths, assumptions, potential confounds, and unclear aspects of the model. In particular, it is important that a program theory illustrate program components (what the client or practitioner does at each step of a program) and how those components result in proximal outcomes. That is, one should be able to read from left to right in Figure 1 and find a logical flow that links the desired outcomes evaluated during the assessment phase with program activities that lead to those outcomes. While this can be done in some instances, such as activity adaptations, it is difficult to do with the other program components listed.

FIGURE 1. SUMMARY OF SDEE MODEL (DATTILO ET AL., 1998) IN PROGRAM THEORY FORMAT.

As acknowledged by Dattilo et al. (1998), a great number of theoretical concepts or constructs are introduced in the SDEE model, and it is difficult to ascertain clear paths among them. It is also difficult to see how elements of the SDEE model should be differentiated for a program that could be rigorously evaluated. For example, the role of the therapeutic recreation specialist (TRS) is not completely clear. Making choices and reducing distractions are associated with the assessment phase of the SDEE model. Presumably the TRS has an important role in framing and organizing these choices, but it is unclear from this model what parameters and techniques are used.

Not only is it difficult to match the model elements to specific program components (TRS actions, activity structure, etc.), but very similar concepts are introduced throughout the sequential process presented by Dattilo et al. (1998). It is very difficult to distinguish, for example, attribution processes (the client’s internal versus external explanations of their actions) from feedback (the TRS’s feedback or the client’s self-talk). The referents for these elements in Figure 1 are unclear. For example, avoiding disruptive feedback is a desired participant skill. However, the program components associated with this skill listed in the implementation column of Figure 1 only describe TRS actions, not how clients develop this skill. Does giving strategic feedback during activity engagements foster internal attributions as suggested, or is there an additional instructional element whereby a TRS also teaches clients how to process feedback in situations they will encounter post-program?

A hypothetical program theory example addresses these issues. As illustrated in Figure 2, program components and links to outcomes are identified. The case example also integrates realistic program parameters and measurable mediating variables. The format of the program theory illustrates a linear program progression, reflective of an actual program setting.

A Program Theory Derived from the SDEE Model

In this case example TR is a treatment modality in a short-term rehabilitation context, and it was assumed that clients were facing the onset of, or a major change in, a disabling condition and that optimizing their physical strength and endurance was a common goal for all treatment modalities (i.e., occupational therapy, physical therapy, medicine). The imposition of this distal outcome is an overriding element that further defined the TR context and the application of the SDEE model components.

It is assumed that at the initial assessment most clients will not have the knowledge, attitudes, or skill sets to successfully engage in self-determined and enjoyable physical activity (i.e., optimal recreation engagement). Guided activity involvement is characterized by high levels of intrinsic motivation (or internally driven extrinsic motivation; see Ryan & Deci, 2000), awareness of environmental modifications, challenge management, and self-attributions; these characteristics serve as the mediators.

The program theory is modeled as successive mediating steps associated with program experiences and content. This example assumes that all clients are directed to the TR program and are assessed on a one-to-one basis by the TRS. The criterion for participation has been specified as being a client in the hospital as the result of an onset, or change in, a disabling condition. The hypothesis guiding the TR program is that all clients will benefit from TR sessions that enhance the participant’s engagement (characterized by self-determination and enjoyment) in physical activity. The distal outcome of improved physical functioning is an antecedent factor important to overall recovery from the disabling condition and is a shared goal the entire treatment team advances.

Assessment and guided activity planning are used to appraise the client’s current level of self-determination and enjoyment in physical activity and, possibly, to inform the TRS’s structuring of four guided activity sessions. That is, the overall intensity of the first guided activity session can be adapted or modified based on the client’s perceived competence and ability. The program consists of modifications of one of three types of physically active recreation engagements (stationary cycling, aquatics, modified aerobics). Choice of activity is delimited based in part on the ultimate functional goal but also to reflect that most TR settings cannot support meaningful guided engagement in a limitless variety of activities. Other limitations associated with real life program context have also been incorporated. For example, the number of sessions and amount of time a TRS may spend with a client is limited.

Assessment results and guided activity planning reflect program components used to foster personal agency and involvement in the treatment plan. These program elements are a form of autonomy support consistent with the SDEE model. Self-determination and enjoyment of a physical activity are enhanced through four guided activity sessions. The role of activity is emphasized in the program theory, and guided recreational activity engagement is clearly identified as an essential program component, something not clearly identified in the SDEE model.

Guided activity session 1 is a preliminary session designed to establish the client’s baseline performance in physical activity. Each of the successive sessions has been designed to target the client’s: (a) level of enjoyment (through activity modifications), (b) activity challenge management (through cognitive and behavioral strategies), and (c) self-talk and attributions (through cognitive and behavioral strategies). Proximal outcomes associated with each activity session are also identified. The TRS also infuses informative feedback during activity engagement throughout sessions 2 to 4.

The program theory case example also addresses each of Reynolds’ (1998) previously described elements of a program theory (problem area, target population, program context conditions, etc.). These elements, along with theory concerning optimal end states of functioning, are the basis for theory-driven program planning in this case.

The program design also lends itself to evaluation. Pre-post measurement strategies could be carried out in experimental, quasi-experimental, or single-subject research designs. Additionally, the program theory supports conceptual analysis of the impact of variations of program components; more is said about this point in the following sections of this article.
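To make the evaluation idea concrete, the following is a minimal sketch of how one proximal outcome from the case example (clients' enjoyment of the guided physical activity) might be compared before and after the guided activity sessions. The ratings, scale, and variable names are hypothetical illustrations introduced here, not measures or data from the article.

```python
# A minimal, hypothetical sketch of a pre-post comparison for one
# proximal outcome (clients' enjoyment of the guided physical activity).
# The ratings, scale, and variable names are illustrative assumptions.
import numpy as np
from scipy import stats

# Hypothetical 1-7 enjoyment ratings before and after the four guided
# activity sessions, one pre/post pair per client.
pre = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3])
post = np.array([5, 5, 4, 6, 4, 6, 5, 4, 5, 4])

# Paired t-test: did enjoyment change from assessment to the final session?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"Mean change = {(post - pre).mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

An experimental, quasi-experimental, or single-subject design would elaborate on this simple paired comparison, but the underlying logic of measuring the hypothesized proximal outcome before and after the program components are delivered remains the same.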

Theoretical Rationale for the SDEE Program Theory

Dattilo et al. (1998) acknowledged that the theoretical components of the SDEE model were complex and, in some places, ambiguous. They further acknowledged that this theoretical complexity and ambiguity could make it difficult to interpret and apply the model. The following discussion clarifies the theoretical and practical aspects of the clinical case example provided in the previous section.

In building the program theory case example, characteristics of the population and the reasons for being directed to a TR program led to assumptions about clients’ capacity for self-management of physical activity as a meaningful and enjoyable recreation activity. These assumptions helped clarify the important program aspects integral to a program theory (Reynolds, 1998). As previously discussed, the problem area, target population, and key outcomes largely influence many aspects of this TR context.

Key program components of choice, feedback, internal versus external perceptions of causality (attributions), and orientation to challenge are in the SDEE model, but their application in a TR context requires clarification. For example, choice could mean choice of: (a) participation in a TR program (i.e., issues of selection into a program, whether required, prescribed, or voluntary), (b) unlimited choice of type of activity engagement (“what would you like to do?”), or (c) choice within a limited range of activity engagements, set by a TRS. Thus, choice as a condition may enter a program experience in many ways. Understanding what aspects of choice are most important to optimal engagement in physical activity is important to the application of theory and the SDEE model to the TR setting in the case scenario.

In the case example, choice, in terms of type of activity, is restricted to one of three activities. Making decisions about environmental modifications that would enhance enjoyment (e.g., music, participating with others) is another aspect of choice. Limiting choice is consistent with the needs of the clients (i.e., maximizing physical functioning) and helped focus the purpose of assessment. The way choice is defined and used as a program component in this case example is a theoretically derived working hypothesis, and it can be contrasted with alternative interpretations. For example, an alternative program format would be to have all clients participate in the same activity (i.e., no activity choice) or to provide clients with a free choice of activity engagements. Self-determination theory and the SDEE model suggest that both choice conditions will be more effective than the no-choice condition but do not specify which choice condition is better. Therefore, this is a compelling question worthy of inquiry.

It could be argued that the treatment of choice in the program theory case example violates or misrepresents essential elements of TR practice and differs from the description provided by Dattilo et al. (1998). For example, it could be argued that assessment and the selection of activities should be open-ended, completely individualized, and that the TR experience would be optimal when the client has “complete” control (i.e., choice) to subjectively define what activity has the greatest potential for enjoyment. This individualized and subjective definition is problematic because it is so broad that it is difficult to identify aspects of functioning important for assessment. Furthermore, there is no assurance that the activity the client chooses will have physical benefits.

Respect for clients’ subjective views regarding their preferred types of activity is likely to be important. However, to adopt a conceptualization that the assessment, program components, and outcomes are open ended and subjectively defined casts TR in rather broad and vague terms. At the very least it means that specific program outcomes cannot be established until a rather broad assessment occurs with the client. In reality, the program context (including institutional priorities) dictates some of the outcomes that need to be advanced.

The clinical context described in the case example affected the conceptualization of choice and the application of intrinsic motivation and self-determination theory, which underlie the SDEE model. One interpretation of intrinsic motivation within leisure research is that recreational activity is inherently intrinsically motivating. However, in the TR context described in the case example, physical activity likely has an instrumental character, and clients may not feel competent or intrinsically motivated. To apply self-determination theory in this context required careful consideration of the complex relationship between intrinsic and extrinsic motivation not discussed in the SDEE model. As Ryan and Deci (2000) noted,

. . . people will be intrinsically motivated only for activities that hold intrinsic interest for them, activities that have the appeal of novelty, challenge, or aesthetic value. For activities that do not hold such appeal, the principles of CET [cognitive evaluation theory] do not apply, because the activities will not be experienced as intrinsically motivated to begin with. To understand the motivation for those activities, we need to look more deeply into the nature and dynamics of extrinsic motivation. (p. 71)

Extrinsic motivation and the process of internalization are more relevant to the TR context described in the clinical case example, and the treatment of choice and other program components were applied and interpreted from this extrinsic motivation framework.

Dattilo et al. (1998) largely assumed that a TR session would involve an activity of interest. If such interest is thwarted due to progressive illness or some other change in a client’s capacity, then there may indeed be other concerns and an activity of interest may not be possible. Programs may need to focus on the expression of interest, re-engagement in recreational activity for which the individual has an intrinsic interest, and/or developing recreational skills for activities that are instrumentally valuable. It could be the case that the activities of intrinsic interest to the client have instrumental value, however, this cannot be assured.

It is true that a TR client may not be interested in the activities used in the case example (cycling, aquatics, or modified aerobics) and may want to advance other activity goals such as a diversion from the medical setting (see for example, Hutchinson, 2000) rather than physical fitness in their TR session. However, when enhanced physical functioning is established as an outcome, theory and activity are employed with that end in mind.

Fortunately, many of the kinds of things that foster or deter intrinsic motivation are also important to fostering internal forms of extrinsic motivation. For example, choice is a program condition that affects the expression of intrinsic motivation and the internalization of values associated with two internal forms of extrinsic motivation (Ryan & Deci, 2000). In this situation, a program may be structured to foster internalization, helping the client learn to value and enjoy a physically demanding recreational activity that he or she must do to achieve the desired physical functioning outcome. Thus, the goal of the TR session in this context is to foster self-determination in non-intrinsically motivated behaviors.

In summary, there are two essential points to this theoretical overview. The first point is that context matters and largely defines essential program aspects that must be carefully specified in the program theory. In an attempt to stress a common underlying process that can be applied to a number of populations and service formats, program planning is often presented generically, as in the SDEE model, without reference to a specific program context. The limitation of that approach is that the effect of these contextual elements may be overlooked. Second, development of the program theory called for a revision of the SDEE model based on an understanding of its underlying theories and the program context.

Self-determination theory comprises two sub-theories: one addresses conditions of initial intrinsic interest, and the other addresses conditions of instrumental participation (Deci, 1992; Ryan & Deci, 2000). This distinction is important because of the program context and because the process of internalization may have greater implications for TRS behavior than intrinsically motivating activities. As Ryan and Deci noted, “. . . Because extrinsically motivated behaviors are not typically interesting, the primary reason people initially perform such actions is because the behaviors are prompted, modeled, or valued by significant others . . . ” (p. 73). Therefore, this modeling and the character of the client-TRS relationship are important aspects of the guided activity session and facilitation of the process of internalization.

The application of the SDEE model in this context was also tied to activity adaptation, challenge management, self-talk, and attributions as leisure skills that naturally build upon each other. Informative feedback, designed to enhance competence and performance, was provided by the TRS and relates to these skills. Feedback was infused throughout guided activity sessions 2-4. Other aspects of the SDEE model were not included in the program theory. For example, investment of attention was de-emphasized in favor of the construct of attributions.

The program theory represented in Figure 2 is a step-by-step, time ordered progression of a program, with the sequential steps, program components, and their proximal and distal outcomes identified. Representing a program in this manner serves to clarify the underlying beliefs about how the program works. Thus, the benefit of a well-specified program theory is that it provides a detailed explanation of how a program works. These hypothesized links between program components and outcomes can then provide a guiding framework for systematic program evaluation.
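Because a program theory is essentially a structured set of hypothesized links, it can help to write those links down in an explicit, checkable form. The following is a minimal sketch of the case example expressed as a simple data structure; the step labels paraphrase the case description above, and the structure itself is an illustration rather than a reproduction of Figure 2.

```python
# A sketch of the case-example program theory expressed as a simple data
# structure. The step labels paraphrase the case description in the text;
# the structure is an illustration, not a reproduction of Figure 2.
from dataclasses import dataclass

@dataclass
class ProgramStep:
    name: str                     # point in the program's time-ordered progression
    components: list[str]         # what the TRS/client does (program components)
    proximal_outcomes: list[str]  # hypothesized immediate effects (mediators)

program_theory = {
    "distal_outcome": "improved physical functioning",
    "steps": [
        ProgramStep(
            "Assessment and guided activity planning",
            ["appraise self-determination and enjoyment in physical activity",
             "plan and adapt the first guided session"],
            ["personal agency", "involvement in the treatment plan"],
        ),
        ProgramStep(
            "Guided activity session 1",
            ["establish baseline performance in the chosen activity"],
            ["baseline indicators of engagement"],
        ),
        ProgramStep(
            "Guided activity sessions 2-4",
            ["activity modifications", "challenge-management strategies",
             "self-talk and attribution strategies", "informative feedback"],
            ["enjoyment", "challenge management", "internal attributions"],
        ),
    ],
}

# Reading the steps top to bottom mirrors reading Figure 2 left to right:
# every component is paired with a named proximal outcome that is, in turn,
# hypothesized to mediate the distal outcome.
for step in program_theory["steps"]:
    print(f"{step.name}: {step.components} -> {step.proximal_outcomes}")
```

Listing the program in this way forces each component to be paired with a named proximal outcome, which is precisely the discipline a program theory demands.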

Program Theory Evaluation

Program theory evaluation (PTE) is an evaluation that is at least partly guided by a program theory (Rogers, 2000; Rogers et al., 2000). What distinguishes PTE from a typical outcome evaluation is that PTE is based on an explicit program theory, which has clear, testable tenets about how the program produces desired effects. For this reason it can both demonstrate that programs have effects and substantiate how programs work (Rogers et al.). It is conceptually similar to other theory-driven evaluation models that require a specific “causal” model, such as logic models (Weiss, 1972) and the generic input-process-outcome model used by the United Way (see Rogers et al.).

Benefits of Program Theory Evaluation

One of the benefits of constructing a program theory and using it as the basis for evaluation is that it can help practitioners identify and assess key hypotheses or assumptions guiding practice. For example, if the TR specialist wants to test the hypothesis that actual activity engagement is important to enhancing levels of self-determination and enjoyment in leisure, then a program can be designed in which one group receives an experiential component and another receives individual counseling without guided activity participation. Likewise, aspects of the therapist’s interpersonal involvement (e.g., teaching style) could be examined. By manipulating different theoretically meaningful components of a program (and keeping everything else the same), a TRS can evaluate whether those components affect proximal and distal program outcomes. Demonstrating a predictive relationship between hypothesized variations in the program components and outcomes provides empirical evidence for a particular path or mechanism (Hamilton, 1980; Trochim & Cook, 1992).
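As a concrete illustration of this kind of component test, the sketch below compares a guided-activity condition with a counseling-only condition and then asks whether the proximal outcome (enjoyment) carries the effect through to the distal outcome (functioning). The data are simulated and the variable names are hypothetical; the regression-based mediation check shown here is only one of several possible analytic strategies and is not prescribed by the article.

```python
# A hypothetical sketch of a component test and a simple regression-based
# mediation check: does guided activity engagement (vs. counseling only)
# raise enjoyment, and does enjoyment in turn predict functional gain?
# Simulated data and variable names are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 80
guided = rng.integers(0, 2, n)                           # 1 = guided activity, 0 = counseling only
enjoyment = 3 + 1.2 * guided + rng.normal(0, 1, n)       # proximal outcome (mediator)
functioning = 2 + 0.8 * enjoyment + rng.normal(0, 1, n)  # distal outcome

df = pd.DataFrame({"guided": guided, "enjoyment": enjoyment,
                   "functioning": functioning})

# Path a: program component -> proximal outcome
path_a = smf.ols("enjoyment ~ guided", data=df).fit()
# Paths b and c': proximal -> distal outcome, controlling for the component
path_b = smf.ols("functioning ~ enjoyment + guided", data=df).fit()

indirect = path_a.params["guided"] * path_b.params["enjoyment"]
print(f"Estimated indirect (mediated) effect: {indirect:.2f}")
```

A meaningful indirect effect, obtained alongside an experimental manipulation of the component, is the kind of empirical evidence for a particular path or mechanism described above.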

An explicit program theory should also help evaluators discriminate whether lack of expected outcomes relates to problems of implementation or problems of theory (Rogers, 2000; Reynolds, 1998). Quite often process evaluations are restricted to examining whether the program was implemented in a manner consistent with its design. This is particularly an issue when a specific program is replicated at numerous sites. If the program was properly implemented then lack of outcomes may be attributed to problems with the theory, that is, lack of empirical support for the specified mechanisms (Rogers).

Other Issues to Consider During Evaluation

It should be noted that the evaluation of a program using a guiding program theory does not necessarily cover all of the theoretically important aspects of programming, nor all of the things practitioners may want to evaluate. Finney and Moos (1992) encouraged evaluators of programs to carefully assess assumptions regarding research design and other related aspects of a program before making conclusions about program mechanisms and the efficacy of a program. In particular, they suggested that the impact of at least three processes be carefully considered, even though not all of these processes would be examined in a single evaluation study. These theoretically important processes are: (a) the client’s selection into treatment (e.g., intake criteria, characteristics of the target population); (b) the etiology of desired outcomes (origins and processes underlying the problems to be treated); and (c) matching clients with treatment (Finney & Moos). Identifying these three processes, or the assumptions related to them, alongside the program theory helps clarify the contribution and limits of a single evaluation study and various evaluative designs.

FIGURE 2. PROGRAM THEORY CASE EXAMPLE.

Client’s selection into treatment. Issues related to selection into treatment reflect the need to identify real world conditions that may influence an individual entering “treatment” (i.e., therapeutic programs) and the type of treatment they select. It involves assessing factors that precede entry into a program and understanding how decisions are made about who should, and who actually does, attend a program, as these factors may affect client participation.

Factors related to how clients come to be in a program (voluntary, required, or prescribed) may restrain program effects. For example, an outreach TR program for families with children with disabilities may be designed to build family cohesion, but the types of families that sign up for the program may be unknown. The program may attract relatively cohesive families looking for an opportunity to spend more time together rather than families in conflict. In this case failure to show improvement in family cohesion may be related to the type of families that chose to participate in the program.

Understanding who chooses a program is more than an issue of target marketing and cannot be completely resolved by research design. According to Finney and Moos (1992), the predominance of experimental and quasi-experimental designs in evaluation research has resulted in a tendency to overlook the impact of selection issues. As they argued:

Selection processes occurring before the point of random assignment (Why did these people present for treatment?) and after it (Why did some people participate more intensively in the treatment program?) are generally not considered because they have no direct impact on the internal validity of global treatment effect estimates. Treatment in the real world is a different and more complicated process than that modeled in experimental designs, however. (p. 18)

Finney and Moos’ point is that even if random assignment of clients into treatment and control groups could be achieved in an evaluation study of a real program context, the study design does not aid understanding of the life conditions or factors that lead an individual to become a client. Yet, knowledge of these factors may indeed affect program participation and judgments of program efficacy. In some cases, systematically assessing the characteristics of who attends should be a focus of research.

Etiology of the “problem.” Processes related to understanding the etiology of the problem refer to appreciating and using theory and the latest knowledge of the characteristics of the disabling problem in program development and evaluation. Specifically, it requires identifying the antecedents, correlates, or factors that affect the manifestation of the problem, condition, or disorder. It means understanding the pathways and factors that contribute to clients’ recovery, adjustment, or non-recovery, maladjustment, and relapse. For example, in a rehabilitation context where people who have experienced a traumatic injury (e.g., spinal cord injury, stroke, traumatic brain injury) or are dealing with a chronic illness (e.g., diabetes, multiple sclerosis), the TRS uses knowledge of the etiology of the disabling condition and optimal recreation engagement to inform the design of the program.

This point was made previously in the discussion of using psychosocial theory to explain how proximal program outcomes affect distal outcomes. However, it is important to recognize that there may be no single etiological model that explains adaptive, functional, or dysfunctional factors associated with various health problems and disabling conditions. In fact, the list of possible factors is often long and there may be gaps in knowledge in these etiological frameworks (Johnson & Pandina, 2001). Nonetheless, the aim of a prevention or intervention program is to affect factors in these etiological theories.

Depending on the program context characteristics, all aspects of the etiology of a problem may not be directly integrated into programming. Nonetheless, this knowledge may be very relevant to making claims about program efficacy. For example, a program for youth with alcohol problems may target the adoption of a hobby as a means for constructive use of free time and a reduction in alcohol consumption. The program for these youth would focus on merging treatment (how hobbies are adopted) with theories that suggest factors related to juvenile alcohol problems (an excessive amount of free time spent hanging out). However, the role of peers may also need to be accounted for, even if it is not part of this particular program. Thus, elements that are not included in the program may still be included in the evaluation study.

Evaluation data collected after the program may find that the extent to which the youth desired to be with peers affected the program’s efficacy. Those youth who spent more time with peers (post-program) may have engaged less in the targeted hobby and had more relapses of alcohol consumption than those who spent less time with peers. While a common programmatic response may be to integrate peer relations skills into the program, doing so may be unrealistic under the current agency and program structure, or including a program component on friendship may require dropping some other program component.

Thus, knowledge of the etiology not only affects programming decisions, it affects the evaluation of the program. Known factors beyond the scope of the program may moderate its success. The TR program cannot and likely should not try to affect all factors, but the evaluation can account for an additional limited set of factors when measuring program outcomes, which affords a more accurate understanding of the efficacy of the program.

Matching clients with treatment. Client matching addresses within-program modifications, based on known characteristics of subgroups, that enhance program effects. For example, activity modifications (e.g., to leadership style, equipment, environment, procedures) based on client characteristics may be made within a treatment session to influence optimal activity engagement. Again, while these modifications reflect common practices associated with TR, such as adapting recreational equipment, what is important is to represent them in a program theory and treat them as testable hypotheses in an evaluation design. Program adjustments reflect client-by-treatment interactions (moderators). That is, client characteristics influence how successfully a particular program component works. These client characteristics could be measured and assessed in an evaluation design. One implication for data analysis is that the client population for a program is subdivided by the factor believed to affect how the program works, as sketched below. If differences are found, then modifications in programming can be made and tested.
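The following is a minimal sketch of what such a client-by-treatment test might look like analytically: a regression with an interaction term between a program component and a client characteristic. The variables and data are hypothetical illustrations of the idea, not measures drawn from the article.

```python
# A hypothetical sketch of a client-by-treatment (moderator) test: a
# regression with an interaction between a program component and a client
# characteristic. Variables and data are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
adapted = rng.integers(0, 2, n)         # 1 = adapted activity format, 0 = standard
low_confidence = rng.integers(0, 2, n)  # hypothetical client characteristic
# For illustration, assume adaptation helps mainly the low-confidence subgroup.
engagement = (3 + 0.2 * adapted
              + 0.9 * adapted * low_confidence
              + rng.normal(0, 1, n))

df = pd.DataFrame({"adapted": adapted, "low_confidence": low_confidence,
                   "engagement": engagement})

# The interaction term tests whether the component's effect differs by subgroup.
model = smf.ols("engagement ~ adapted * low_confidence", data=df).fit()
print(model.params[["adapted", "adapted:low_confidence"]])
```

A meaningful interaction coefficient would suggest that the component works differently for the identified subgroup, which is exactly the kind of testable matching hypothesis described above.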

Who enters a program and why, knowledge about the disabling condition, and ways to best match clients with program activity components are not new TR practice issues. However, these steps may often be taught as if all the important variables can be adequately addressed in a single program. Or, activity adaptations may be made based on past practice or assumptions that were never systematically assessed. The Finney and Moos (1992) discussion of these processes from an evaluative framework helps limit grand generalizations about program efficacy and encourages more theory-based, discrete and limited assessments of program components, processes, and outcomes.

Conclusion

As previously stated, using an established TR practice model to drive program design and evaluation within a particular practice setting (e.g., rehabilitation, long-term care, juvenile detention) requires some refinement and specification of a TR model and its application in a specific program context. Development of a program theory that connects program components with proximal and distal outcomes serves this purpose. However, developing a program theory may be challenging, as existing programs are often multifaceted, and developing a useful and measurable representation of program processes requires carving up the interconnected flow of life (Britt, 1997). When working with actual ongoing programs, development of a program theory may need to occur over time, through several steps in an iterative process (see Rogers, 2000). While it would be difficult to describe that process fully here, the goal of the case example was to illustrate the level of refinement and specification needed for theory-based programming and evaluation.

Program theory links theory and practice. Conceptually, a program theory creates a link between abstract theory, studies “manufactured” exclusively for the research purpose of testing theory, and actual theory-driven programs, which can serve as “natural” studies of theory (Reynolds, 1998). Since a program theory hypothesizes how programs work, well-designed evaluations of the program may serve as empirical tests of theory in real life program contexts (Johnson & Pandina, 2001; Lerner et al., 1994).

There have been numerous calls for the use of theory in TR practice and suggestions that TR practice models serve this role. However, TR practice models have been largely decontextualized and found lacking in regard to specification of program components and what it is about program components that leads to intended outcomes. Program theory can serve to clarify both theory and context and therefore holds great potential as a framework for future inquiry.

In the area of practice, program planners can first concentrate on testing whether the program components they use actually produce the intended effects (see Fetterman & Bowman, 2002; Simon, Bosworth, & Unger, 2001). In cases where program components are not working as expected, practice can be refined. Second, practitioners can examine whether social science theories support a link between the proximal outcomes associated with their program and the distal outcomes they desire to affect. Where theory and empirical evidence support such a link, practitioners can develop a comprehensive program theory and evaluation strategy that can be used to produce strong evidence for the efficacy of their programs.

In the area of research, the program theory framework can be used to refine and apply TR models to create interventions that include assessment of key theoretical hypotheses that can be rigorously tested. Theoretically driven examinations of TR practice in real life settings can produce findings that lead to evidence-based practice and support theory driven programming. As the previous discussion of the program component of choice illustrated, an aspect of this future work would be examining predominant TR practices that have not been thoroughly specified. A benefit of this type of research would be that it would bring research and practice closer together, further advancing evidence-based TR practice.

References

Baldwin, C. K. (2000). Theory, program, and outcomes: Assessing the challenges of evaluating at-risk youth recreation programs. Journal of Park and Recreation Administration, 18(1), 19-33.

Bickman, L. (1996). The application of program theory to the evaluation of a managed mental health care system. Evaluation and Program Planning, 19, 111-119.

Britt, D. W. (1997). A conceptual introduction to modeling: Qualitative and quantitative perspectives. Mahwah, NJ: Lawrence Erlbaum.

Caldwell, L. (2000). Beyond fun and games?: Challenges to adopting a prevention and youth development approach to youth recreation. Journal of Park and Recreation Administration, 18(3), 1-18.

Chen, H., & Rossi, P. H. (1992). Using theory to improve program and policy evaluations. New York: Greenwood Press.

Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: Harper & Row.

Dattilo, J., Kleiber, D., & Williams, R. (1998). Self-determination and enjoyment enhancement: A psychologically-based service delivery model for therapeutic recreation. Therapeutic Recreation Journal, 32, 258-271.

Davidson, E. J. (2000). Ascertaining causality in theory-based evaluation. In P. J. Rogers, T. A. Hacsi, A. Petrosino, & T. A. Huebner (Eds.), Program theory in evaluation: Challenges and opportunities (pp. 17-26). San Francisco, CA: Jossey-Bass.

Deci, E. L. (1992). The relation of interest to the motivation of behavior: A self-determination theory perspective. In K. A. Renninger, S. Hidi, & A. Krapp (Eds.), The role of interest in learning and development (pp. 43-69). Hillsdale, NJ: Lawrence Erlbaum.

Donaldson, S. I. (2001). Mediator and moderator analysis in program development. In S. Sussman (Ed.), Handbook of program development for health behavior research and practice (pp. 470-496). Thousand Oaks, CA: Sage.

Easterling, D. (2000). Using outcome evaluation to guide grant making: Theory, reality, and possibilities. Nonprofit and Voluntary Sector Quarterly, 29, 482-486.

Fetterman, D., & Bowman, C. (2002). Experiential education and empowerment evaluation: Mars rover educational program case example. Journal of Experiential Education, 25, 286-295.

Finney, J. W., & Moos, R. H. (1992). Four types of theory that can guide treatment evaluations. In H. Chen & P. H. Rossi, (Eds.), Using theory to improve program and policy evaluations (pp. 49-69). New York: Greenwood Press.

Freysinger, V. J. (1999). A critique of the optimizing lifelong health through therapeutic recreation (OLH-TR) model. Therapeutic Recreation Journal, 33, 109-115.

Hamilton, S. F. (1980). Experiential learning programs for youth. American Journal of Education, 88, 179-215.

Hutchinson, S. L. (2000). Discourse and the construction of meaning in the context of therapeutic recreation. Unpublished doctoral dissertation, University of Georgia, Athens, GA.

Johnson, V., & Pandina, R. J. (2001). Choosing assessment studies to clarify theory-based program ideas. In S. Sussman (Ed.), Handbook of program development for health behavior research and practice (pp. 321-344). Thousand Oaks, CA: Sage.

Lerner, R. M., Miller, J. R., Knott, J. H., Corey, K. E., Bynum, T. S., Hoopfer, L. C., McKinney, M. H., Abrams, L. A., Hula, R. C., & Patterson, A. T. (1994). Integrating scholarship and outreach in human development research, policy, and service: A developmental contextual perspective. In D. L. Featherman, R. M. Lerner, & M. Perlmutter (Eds.), Life-span development and behavior (pp. 249-273). Hillsdale, NJ: Lawrence Erlbaum.

Mobily, K. (1999). New horizons in models of practice in therapeutic recreation. Therapeutic Recreation Journal, 33, 174-192.

Payne, L. L. (2002). Progress and challenge in repositioning leisure as a core component of health. Journal of Recreation and Park Administration, 20(4), 1-11.

Reynolds, A. J. (1998). Confirmatory program evaluation: A method for strengthening causal inference. American Journal of Evaluation, 19, 203-221.

Rogers, P. J. (2000). Program theory: Not whether programs work but how they work. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models (pp. 209-232). Boston: Kluwer Academic.

Rogers, P. J., Hacsi, T. A., Petrosino, A., & Huebner, T. A. (Eds.). (2000). Program theory in evaluation: Challenges and opportunities. San Francisco, CA: Jossey-Bass.

Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55, 68-78.

Shalock, R. L., & Bonham, G. S. (2003). Measuring outcomes and managing for results. Evaluation and Program Planning, 26, 229-235.

Simon, T. R., Bosworth, K., & Unger, J. B. (2001). Component studies. In S. Sussman (Ed.), Handbook of program development for health behavior research and practice (pp. 321-344). Thousand Oaks, CA: Sage.

Sussman, S., & Sussman, A. N. (2001). Praxis in health behavior program development. In S. Sussman (Ed.), Handbook of program development for health behavior research and practice (pp. 79-97). Thousand Oaks, CA: Sage.

Trochim, W. M. K., & Cook, J. (1992). Pattern matching in theory- driven evaluation: A field example from psychiatric rehabilitation. In H. Chen & P. H. Rossi, (Eds.), Using theory to improve program and policy evaluations (pp. 49-69). New York: Greenwood Press.

Weiss, C. H. (1972). Evaluation research: Methods of assessing program effectiveness. Englewood Cliffs, NJ: Prentice Hall.

Weiss, C. H. (2000). Which links in which theories shall we evaluate? In P. J. Rogers, T. A. Hacsi, A. Petrosino, & T. A. Huebner (Eds.), Program theory in evaluation: Challenges and opportunities (pp. 35-45). San Francisco, CA: Jossey-Bass.

West, S. G., & Aiken, L. S. (1997). Toward understanding individual effects in multicomponent prevention programs: Design and analysis strategies. In K. J. Bryant, M. Windle, & S. G. West (Eds.), The science of prevention: Methodological advances from alcohol and substance abuse research (pp. 167-209). Washington, DC: American Psychological Association.

Baldwin is an Assistant Professor with the Human Services Program at Aurora University. Hutchinson is an Assistant Professor with the School of Hotel, Restaurant and Recreation Management at Pennsylvania State University. Magnuson is an Assistant Professor with the Division of Leisure, Youth, and Human Services at the University of Northern Iowa.

Direct correspondence to Baldwin: Human Services Program, Aurora University, 347 South Gladstone Avenue, Aurora, IL 60506-4892. Phone: 630-844-4227. fax: 630-949-5532. email: [email protected]
