The Challenge of Measuring Epistemic Beliefs: An Analysis of Three Self-Report Instruments

By DeBacker, Teresa K.; Crowson, H. Michael; Beesley, Andrea D.; Thoma, Stephen J.; Hestevold, Nita L.

ABSTRACT. Epistemic beliefs are notoriously difficult to measure with self-report instruments. In this study, the authors used large samples to assess the factor structure and internal consistency of 3 self-report measures of domain-general epistemic beliefs to draw conclusions about the trustworthiness of findings reported in the literature. College students completed the Epistemological Questionnaire (EQ; M. Schommer, 1990; N = 935); the Epistemic Beliefs Inventory (EBI; G. Schraw, L. D. Bendixen, & M. E. Dunkle, 2002; N = 795); and the Epistemological Beliefs Survey (EBS; P. Wood & C. Kardash, 2002; N = 795). Exploratory factor analyses, confirmatory factor analyses, and internal consistency estimates indicated psychometric problems with each of the 3 instruments. The authors discuss challenges in conceptualizing and measuring personal epistemology.

Keywords: beliefs, epistemological beliefs, measurement, motivation, personal epistemology

FOR SOME TIME NOW, educational researchers have been interested in the role of epistemic beliefs in learning and academic achievement. Epistemic beliefs refer to beliefs about knowledge (including its structure and certainty) and knowing (including sources and justification of knowledge; Buehl & Alexander, 2001; Duell & Schommer-Aikins, 2001; Hofer, 2000; Hofer & Pintrich, 1997). In particular, these can include beliefs about “the definition of knowledge, how knowledge is constructed, how knowledge is evaluated, where knowledge resides, and how knowing occurs” (Hofer, 2001, p. 355).

Researchers have differed in how they conceptualize epistemic beliefs. In much of early theorizing, researchers conceived of epistemic beliefs as broad and general (Baxter Magolda, 1992; Belenky, Clinchy, Goldberger, & Tarule, 1986; Kitchener & King, 1981, 1990; Kuhn, 1991; Perry, 1970). Thus, they were thought to influence treatment of knowledge across contexts or domains in a fairly uniform fashion, although researchers working within these frameworks conducted studies largely in academic settings and in regard to academic knowledge. These theorists all described developmental changes in epistemic beliefs as stage-like, although there was a great deal of variability in how many stages the various theorists described (e.g., as few as four [Baxter Magolda] or five [Belenky et al.] to as many as nine [Perry]) and how they characterized the stages (e.g., as intellectual and ethical development [Perry], as epistemological reflection [Baxter Magolda], as reflective judgment [Kitchener & King, 1981], or as argumentative reasoning [Kuhn]). Theorists working in this tradition used interviews and laboratory tasks to reveal the nature of epistemic beliefs and their development.

Other theorists have conceived of epistemic beliefs as a set of related beliefs about knowledge and knowing that are more narrowly defined. Each of these beliefs has its own developmental trajectory, and developmental change may vary across the range of individual epistemic beliefs (Schommer, 1990; Schraw et al., 2002; Wood & Kardash, 2002). In addition, some researchers suggest that epistemic beliefs may be domain- or discipline-specific rather than general (Buehl, Alexander, & Murphy, 2002; Hofer, 2000; Jehng, Johnson, & Anderson, 1993; Paulsen & Wells, 1998; Schommer & Walker, 1995). Theorists working from this multidimensional conception of epistemic beliefs have developed paper-and pencil self-report measures that tap a variety of proposed epistemic beliefs.

Because of the convenience and efficiency of self-report measures of epistemic beliefs, they are widely used and form the basis for much of the current research on the role of epistemic beliefs in learning. Evidence is accumulating that epistemic beliefs are related to learners’ achievement motivation (Braten & Olaussen, 2005; Braten & Stromso, 2004; Buehl & Alexander, 2005; DeBacker & Crowson, 2006; Muis, 2004; Ravindran, Greene, & DeBacker, 2005), cognitive engagement and strategy use (Braten & Olaussen; DeBacker & Crowson; Kardash & Howell, 2000; Ravindran et al.; Ryan, 1984; Schommer, Crouse, & Rhodes, 1992; Tsai, 1998), text comprehension (Schommer, 1990; Schommer et al., 1992; Schommer-Aikins & Easter, 2006), and achievement (Buehl & Alexander; Muis; Schommer, 1993; Schommer, Calvert, Gariglietti, & Bajaj, 1997; Schommer et al., 1992; Schommer-Aikins & Easter).

Although these findings suggest that the study of epistemic beliefs may prove fruitful in advancing understanding of learning and instruction, progress has been undermined by concerns about the available measurement tools (Clarebout, Elen, Luyten, & Bamps, 2001; Hofer & Pintrich, 1997; Duell & Schommer-Aikins, 2001). One concern is theoretical and relates to the categories of beliefs included in these multidimensional measures of epistemic beliefs. Although some of the proposed beliefs are clearly epistemic (beliefs about the structure and certainty of knowledge), others are not epistemic themselves but are related to epistemic beliefs (beliefs about the speed or ease of learning or the fixed nature of ability). Another concern is empirical and relates to instability of the factor structures underlying the self-report measures and the low internal consistency coefficients typically reported for the subscales in the various instruments.

Recent reviews have provided general overviews of the available measures of epistemic beliefs, including self-report measures (Buehl & Alexander, 2001; Duell & Schommer-Aikins, 2001). These reviews catalog the variety of measures available to researchers in a way that highlights their theoretical and procedural distinctions (Duell & Schommer-Aikins) and provides a critical discussion of issues relevant to the study of personal epistemology (Buehl & Alexander). In both instances, the researchers included psychometric issues in the reviews, but they were not the primary focus. However, careful reading of research on epistemic beliefs reveals a number of troubling indications that the research does not rest on a strong psychometric foundation. In the present study, measurement issues were the primary focus of inquiry. Our purpose was to assess the psychometric properties of three commonly used measures of epistemic beliefs, using larger samples and more rigorous analyses than those that have appeared in the literature, as a general gauge of the trustworthiness of knowledge about personal epistemology and researchers’ ability to make progress in this line of investigation.

The majority of researchers in the field have used the Epistemological Questionnaire (EQ; Schommer, 1990), although they have used other measures as well, including the Epistemic Beliefs Inventory (EBI; Schraw et al., 2002), and the Epistemological Beliefs Survey (EBS; Wood & Kardash, 2002). We describe each of these measures and associated findings in the following sections.

Epistemological Questionnaire

The measure of epistemic beliefs most commonly encountered in the literature is Schommer’s (1990) EQ (see Appendix A). With the introduction of this instrument, Schommer brought both conceptual and methodological changes to the study of epistemological understanding. Breaking with the developmental-structural tradition (King & Kitchener, 1994; Kuhn, 1991; Perry, 1970), Schommer conceptualized personal epistemology as a belief system composed of several “more or less independent dimensions” of beliefs about knowledge and knowing (Schommer, 1990, p. 498). This system included beliefs about the structure of knowledge (simple vs. complex), the certainty of knowledge (certain vs. tentative), and the source of knowledge (omniscient authorities vs. personal construction) as well as beliefs about the nature of ability (fixed vs. malleable) and learning (e.g., learning happens quickly or not at all; Schommer, 1990, 1994). Schommer (1998) further proposed that, over time, individuals move from naive beliefs to more sophisticated beliefs in these areas.

Schommer (1990) created the 63-item EQ by developing 2 or more subsets of items to capture each of the five proposed dimensions of beliefs, for a total of 12 subsets. Scoring of the instrument often involves conducting a second-order factor analysis, whereby the 12 subsets are treated as items by using principal axis factoring. Item subsets are then combined in the manner indicated by the factor analysis to produce belief scores. In Schommer’s 1990 study, in which she introduced the EQ, factor analysis indicated the presence of four orthogonal factors: Simple Knowledge, Certain Knowledge, Innate Ability, and Quick Learning. Researchers would probably accept two of these factors (Simple Knowledge, Certain Knowledge) as constituting epistemic beliefs, whereas the other two (Innate Ability, Quick Learning) are better considered beliefs about learning (Hofer & Pintrich, 1997). The anticipated factor capturing beliefs about the source of knowledge did not emerge.
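To make this scoring procedure concrete, the subset-then-factor-analyze approach can be sketched in code. Everything here is a hypothetical stand-in: the response data are synthetic, the partition of 63 items into 12 subsets is arbitrary (the actual composition is the one Schommer, 1990, defined), and a generic iterated principal-axis-factoring routine replaces whatever estimation details individual studies used.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in data: 935 respondents x 63 Likert items (1-5).
responses = rng.integers(1, 6, size=(935, 63)).astype(float)

# Hypothetical partition of the 63 items into 12 subsets.
subsets = np.array_split(np.arange(63), 12)

# Step 1: average the items within each subset to get 12 subset scores.
subset_scores = np.column_stack(
    [responses[:, idx].mean(axis=1) for idx in subsets]
)

def principal_axis_loadings(scores, n_factors, n_iter=50):
    """One common principal-axis-factoring recipe: iterate communality
    estimates on the diagonal of the reduced correlation matrix."""
    r = np.corrcoef(scores, rowvar=False)
    # Initial communalities: squared multiple correlations.
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(r))
    for _ in range(n_iter):
        reduced = r.copy()
        np.fill_diagonal(reduced, h2)
        vals, vecs = np.linalg.eigh(reduced)
        top = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, top] * np.sqrt(np.clip(vals[top], 0, None))
        h2 = (loadings ** 2).sum(axis=1)
    return loadings  # shape: (12 subsets, n_factors)

# Step 2: treat the 12 subset scores as "items" and factor analyze them.
loadings = principal_axis_loadings(subset_scores, n_factors=4)

# Step 3: belief scores combine the subsets loading on each factor
# (here, an unweighted mean of subsets with loadings >= |.35|).
belief_scores = [
    subset_scores[:, np.abs(loadings[:, j]) >= 0.35].mean(axis=1)
    for j in range(loadings.shape[1])
    if (np.abs(loadings[:, j]) >= 0.35).any()
]
```

Because Step 2 is rerun in every new sample, which subsets land on which factor, and therefore what each belief score means, can differ from study to study; this is the sample-specific scoring problem discussed below.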

Since its introduction, Schommer and her colleagues (e.g., Schommer, 1993; Schommer et al., 1992; Schommer & Dunnell, 1994) and other researchers (e.g., Clarebout et al., 2001; Kardash & Howell, 2000; Paulsen & Wells, 1998) have used the EQ in numerous studies. However, the sample-specific procedure for scoring makes it somewhat difficult to compare findings across studies. Because scoring of the instrument is typically based on a factor analysis of subset scores in each new sample, individual studies may in essence be using different instruments. Across the various samples,1 unique combinations of subsets emerge, often creating novel factors. Although Schommer (1998) and Schommer and Dunnell (1994) factor analyzed the subscale scores to successfully reproduce the four factors that emerged in Schommer’s (1990) original study, in other cases this did not occur. For example, Schommer (1993), studying high school students, factor analyzed subset scores to produce four factors that she gave the original four factor names but that were composed of somewhat different groupings of subsets. Likewise, Schommer et al. (1992) factor analyzed the subset scores and found that a four-factor solution fit the data, but again the resulting four factors did not completely replicate the Schommer (1990) findings. The researchers identified factors measuring Simple Knowledge, Certain Knowledge, and Quick Learning, although the item subsets that composed these factors were not identical to those in previous studies. The fourth factor, Externally Controlled Learning, was unique to this sample.

Kardash and Howell (2000), using a 42-item version of Schommer’s (1990) instrument, factor analyzed subset scores to extract four factors from the data. Kardash and Howell identified the factors as Nature of Learning, Speed of Learning, Certain Knowledge, and Avoid Integration. Schommer-Aikins, Duell, and Barker (2002) reported four factors named Stability of Knowledge, Structure of Knowledge, Control of Learning, and Speed of Learning. The researchers did not provide details regarding how they arrived at the four factors. Last, Clarebout et al. (2001), using a Dutch translation of the EQ, factor analyzed subset scores in two different samples of college students. Although both samples yielded four-factor solutions, the solutions resembled neither each other nor the factors that Schommer (1990) identified.

In some studies, a three-factor solution better represented the data. Schommer et al. (1997) analyzed subset scores to produce factors named Malleability of Learning Ability, Structure of Knowledge, and Speed of Learning. Schommer-Aikins, Mau, Brookhart, and Hutter (2000), using confirmatory factor analysis (CFA) on a 30-item version of the questionnaire that Schommer (1993) developed for middle school students, found that the four-factor solution fit the data poorly. A three-factor solution yielded stronger fit indexes. The three factors, which included only 11 items, were Speed of Learning, Ability to Learn, and Stability of Knowledge. Note that in each of these samples, two of the three resulting factors addressed learning beliefs rather than epistemic beliefs.

Across these studies, factors concerning speed of learning (similar to Schommer’s [1990] original factor Learning Happens Quickly or Not At All) and structure of knowledge (similar to Schommer’s [1990] original factor Simple vs. Complex Structure) appeared with the greatest regularity. Other factors, including many related to learning (Externally Controlled Learning, Nature of Learning, and Avoid Integration) were unique to one or two studies.

We suspect that, aside from the fact that some investigators did not use the full 63-item version of the EQ, at least two circumstances contribute to the inconsistency of factors that emerge across studies. These have to do with the internal consistency of the factors identified through factor analysis and of the item subsets on which the factors are based.

Sometimes internal consistency statistics have not been reported for the beliefs scales that emerge in individual studies (see Kardash & Howell, 2000; Paulsen & Wells, 1998; Schommer, 1990, 1993, 1998; Schommer et al., 1992; Schommer & Dunnell, 1994, 1997; Schommer & Walker, 1995, 1997). When they have been reported, they have tended to be low. For example, Schommer et al. (1997) reported reliabilities ranging from .63 to .85, and Schommer (1993) reported a range of .51 to .78. Schommer-Aikins et al. (2002) reported reliabilities ranging from .58 to .73 for their domain-specific versions of the EQ. In their review of measures of epistemic beliefs, Duell and Schommer-Aikins (2001) stated that reliability coefficients for the EQ ranged from .55 to .70 for middle school students, from .51 to .78 for high school students, and from .63 to .85 for college students. Poor internal consistency of scales is indicative of large proportions of measurement error and is related to difficulty in replicating findings across samples. This may contribute to the inconsistency seen across studies using the EQ.

It is also possible that the subsets of items used to produce the scores for factor analysis suffer from low internal consistency. Evidence of this comes from Neber and Schommer-Aikins (2002), who analyzed subset scores directly (rather than using them as the basis of exploratory factor analysis [EFA]) and reported internal consistency coefficients ranging from .40 to .52 for the six subsets in their study. This raises questions about the appropriateness of including item subsets with high levels of unreliability as indicator variables in empirically based scoring procedures.
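The internal consistency statistic at issue throughout this literature is Cronbach's alpha, which can be computed directly from an item-response matrix. A minimal sketch, using synthetic data rather than actual EQ responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# With uncorrelated (pure-noise) items, alpha is near zero -- the kind
# of situation that very low subset reliabilities point toward.
rng = np.random.default_rng(0)
noise_items = rng.integers(1, 6, size=(500, 6)).astype(float)
low_alpha = cronbach_alpha(noise_items)
```

Alpha rises toward 1.0 as items become more highly intercorrelated, which is why coefficients in the .40-.52 range signal that the items within a subset share little common variance.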

Taking a different approach, Qian and Alvermann (1995) factor analyzed items (not subset scores) from a pool of 53 items2 drawn from the high school version of the Schommer instrument (Schommer & Dunnell, 1994). Three factors emerged, which were reminiscent of Schommer’s (1990) original factors and which had greater internal consistencies than have sometimes been reported. The factors were Learning Is Quick (alpha = .79), Knowledge Is Simple and Certain (alpha = .68), and Ability Is Innate (alpha = .62). This finding suggested the possible utility of analyzing items rather than subset scores. However, when Hofer (2000) used Qian and Alvermann’s 32-item adaptation of Schommer’s (1990) scale, the result was much different. Like Qian and Alvermann, Hofer factor analyzed items rather than subscale scores. Unlike Qian and Alvermann’s, Hofer’s data failed to yield factors that resembled those that Schommer (1990) identified. Hofer reported that “the overall four-factor solution that emerged from an item-based factor analysis had no single factor that replicated those factors reported by Schommer and others when a factor analysis [was] conducted using subscales” (2000, p. 392).

In sum, several potential sources of difficulty are associated with using the EQ in the prescribed manner. Conceptually, the reliance on a sample-specific scoring procedure seems inconsistent with the goal of replicating results across samples. Empirically, research reports indicate inconsistency of factors across samples and persistently low internal consistency of scales. On a related matter, many researchers have used samples that have been fairly small relative to what is desirable for conducting factor analyses (see Russell, 2002), leaving open the question of whether and how sample size has contributed to consistency of findings. Regarding factor analysis of EQ items, rather than subscale scores, the picture is still unclear.

In the wake of findings suggesting potential problems with the EQ, researchers have created other measures of epistemic beliefs. In the present study, we have included two. The developers of the EBI (Schraw et al., 2002) retained the conceptual structure that Schommer proposed, and they created new items to try to better capture that structure. The developers of the EBS (Wood & Kardash, 2002) retained Schommer’s (1990) items and tried to find a cleaner and more stable structure among them.

Epistemic Beliefs Inventory

Bendixen, Schraw, and Dunkle (1998) and Schraw et al. (2002) have noted that one of the main problems researchers wishing to study epistemic beliefs have encountered has been the lack of valid and reliable self-report instruments. In response, Schraw et al. developed the EBI (see Appendix B), which was composed of new items created to better capture the five dimensions of epistemic beliefs that Schommer (1990) described. One of their objectives was to “construct an instrument in which all of the items fit unambiguously into one of five categories that corresponded to [Schommer’s] five hypothesized epistemic dimensions” (Schraw et al., p. 263). In particular, these researchers hoped to preserve the Source of Knowledge factor (called Omniscient Authority in the EBI), which Schommer hypothesized but was not empirically confirmed. The EBI contains five subscales: Simple Knowledge (seven items), Certain Knowledge (eight items), Omniscient Authority (five items), Quick Learning (five items) and Fixed Ability (seven items).

The EBI has been used in a number of studies. Bendixen et al. (1998), analyzing a 32-item version of the EBI, reported five clean factors that measured the anticipated categories of beliefs. Internal consistency coefficients for these factors ranged from .67 to .87. Schraw et al. (2002) factor analyzed a 28-item version of the EBI, reported reliability estimates ranging from .58 to .68, and reported the same five factors.

However, Nussbaum and Bendixen (2002, 2003) were unable to reproduce the five-factor structure of the EBI. In Nussbaum and Bendixen’s 2002 study, factor analysis produced two factors. The Complexity factor contained items intended to measure simple knowledge, quick learning, and innate ability. The Uncertainty factor contained items intended to measure certain knowledge and omniscient authority. Internal consistency estimates were not reported for these factors. In Nussbaum and Bendixen’s 2003 study, factor analysis produced three factors: Simple Knowledge (alpha = .69), Certain Knowledge (alpha = .69), and Innate Ability (alpha = .77). In several studies, researchers scored the EBI as recommended and did not subject it to factor analysis. Nonetheless, these studies provide information on the internal consistency of the proposed subscales. Ravindran et al. (2005) reported reliabilities for the five subscales ranging from .54 to .78. Hardre, Crowson, Ly, and Xie (2007) included the EBI in a study that compared internal consistency estimates of various instruments across three types of administration (paper and pencil, computer based, or Web based). For the five EBI scales across the three conditions, they reported Cronbach’s alpha coefficients ranging from .50 to .76 for Sample 1 and from .42 to .79 for Sample 2.

In sum, empirical support for the five a priori dimensions of epistemic beliefs has been mixed in published factor analyses. Internal consistency coefficients for the proposed subscales are higher than those seen with the EQ, but still lower than is desirable for some subscales. Moreover, we note that the sample sizes in studies using the EBI have been generally modest. Schraw et al. (2002) surveyed 160 participants, Ravindran et al. (2005) surveyed 101 participants, and Hardre et al.’s (2007) samples included 67 and 160 respondents. Nussbaum and Bendixen (2002, 2003) included 101 and 238 participants, respectively. Again, this raises the question of whether and how sample size may have affected findings.

Epistemological Beliefs Survey

In their discussion of measurement issues related to epistemic beliefs, Wood and Kardash (2002) reported that they had repeatedly been unable to satisfactorily reproduce the expected factor structure of Schommer’s (1990) EQ when conducting factor analyses at the item level and noted that the same was true of a related measure of epistemic beliefs that Jehng et al. (1993) developed. Jehng et al. designed their instrument to capture the factors that Schommer described by using some, but not all, of Schommer’s (1990) original items plus new items that they created. To find a factor structure that fit Schommer’s items better than did the five-factor structure that she originally proposed, Wood and Kardash combined items from Schommer’s (1990) and Jehng et al.’s instruments to create an 80-item survey of epistemic beliefs.3

After subjecting these items to a test of internal consistency and several different exploratory factor analyses, Wood and Kardash (2002) retained 38 items that they argued represented five independent dimensions of epistemic beliefs (see Appendix C). The resulting EBS included five subscales: Speed of Knowledge Acquisition (8 items), Structure of Knowledge (11 items), Knowledge Construction and Modification (11 items), Characteristics of Successful Students (5 items), and Attainability of Objective Truth (3 items). Although the new scales Speed of Knowledge Acquisition and Structure of Knowledge seem to correspond closely to Schommer’s (1990) original dimensions, the other three dimensions seem fairly novel.

There is little information on the EBS in the literature, so its psychometric properties remain largely unknown. Wood and Kardash (2002) reported the internal consistency of the five subscales as .74 for Speed of Knowledge Acquisition, .72 for Structure of Knowledge, .66 for Knowledge Construction and Modification, .58 for Characteristics of Successful Students, and .54 for Attainability of Objective Truth. Sinatra and Kardash (2004) used two subscales of the EBS in their study, reporting alphas of .59 for the Speed of Knowledge Acquisition scale and .54 for the Knowledge Construction and Modification scale. Schommer-Aikins and Easter (2006) reported alphas for the five EBS scales ranging from .54 to .74.

Similarities Among the Three Instruments

The EQ, EBI, and EBS are similar in several ways. Each uses a Likert scale format in which respondents indicate their degree of agreement with each item on the instruments. More important, each conceptualizes epistemic beliefs as domain general. That is, the context tapped by individual items is assumed to be relatively unimportant, because beliefs about knowledge are thought to apply uniformly across knowledge domains (e.g., social knowledge vs. academic knowledge, or knowledge about math vs. knowledge about history vs. knowledge about psychology). Therefore, items included in the measures may tap a particular context (most often school knowledge and learning) or may refer to beliefs about knowledge in general. Each of the instruments includes items that vary in context and scope from the specific (e.g., “Most words have one clear meaning”; “When I study I look for specific facts”) to the broad (e.g., “The only thing that is certain is uncertainty itself”; “The best ideas are often the most simple”). Although recent thinking calls into question the wisdom of assuming that epistemic beliefs are domain general (Buehl & Alexander, 2001), such a conceptualization was common at the time these measures were created.

Each instrument conceptualizes epistemic development as consisting of changes in a set of beliefs about the nature of knowledge and knowing. Each instrument specifies five dimensions of epistemic beliefs; however, the specific dimensions constituting the various instruments, and the items constituting those dimensions, are similar but not identical.

The Present Study

The purpose of the current investigation was to examine the factor structure of the EQ, the EBI, and the EBS by using CFA.4 CFA provides a more stringent test of the hypothesized model implied by these measures of epistemic beliefs than does EFA (the technique most commonly reported in the literature; Byrne, 2005; Kline, 2005). Our use of multiple samples sheds further light on the stability of the factor structures by allowing for replication on independent samples. In Studies 1 and 2, we report our analyses of the EBI and the EBS. We present the EQ, which required additional analyses because of its scoring procedure, in Study 3.

Study 1

Method

Participants

We drew on two different samples of participants to assess the psychometric properties of the EBI. In each case, participants completed the EBI as part of a larger study. In Sample 1, we aggregated data across several student samples that took the EBI (Bendixen et al., 1998) during the period from fall 2002 to spring 2004 while enrolled in an introductory or developmental psychology course. College students (N = 417) from a midsized Southwestern university with a mean age of approximately 22 years composed this composite sample. Participants were largely female (94%) and White (67%).

Undergraduate students (N = 378) at a midsized Southeastern university enrolled in educational psychology classes in 2004 composed Sample 2. Their mean age was approximately 20 years. Participants were largely female (78%) and White (80%).

Procedure

We recruited students at their universities through educational psychology or human development courses. Volunteers received an informed consent form and packet of surveys, including the EBI, to be completed at home and returned at the next class meeting. Volunteers received course credit for their participation. We conducted CFA by using LISREL 8.52 (Joreskog & Sorbom, 2002) on two separate samples to assess the extent to which the prescribed five- factor model fit the data.

Results

Confirmatory Factor Analysis

For each sample, we loaded items from the EBI onto the five latent factors hypothesized to account for the variance in the items: belief in simple knowledge, belief in certain knowledge, belief in quick learning, belief that ability is fixed, and belief that knowledge is derived from omniscient authorities. The five latent factors in our CFA were allowed to covary.

We assessed the overall fit of the model by using several fit indexes reported in the LISREL output. The Goodness of Fit Index (GFI) reflects the proportion of variance in the variance-covariance matrix explained by the model, whereas the Adjusted Goodness of Fit Index (AGFI) reflects the same information while taking model complexity into account (Kline, 2005). Other things being equal, AGFI values are lower for more complex models than for more parsimonious models. The Comparative Fit Index (CFI) is an incremental fit index that reports the degree of improvement in fit of the research model over a null model. Of the common fit indexes, the CFI is most robust to sample size differences (Tanguma, 2001). Last, the root mean square error of approximation (RMSEA) is a badness-of-fit index that indicates the degree of discrepancy between the model-implied and sample correlation matrixes and that includes a correction for model complexity (Kline). According to Schumacker and Lomax (2004) and Kline, GFI and AGFI values near or greater than .95 and CFI values greater than .95 are indicative of optimal model fit. RMSEA values less than .06 also indicate good fit (Hu & Bentler, 1999; Tabachnick & Fidell, 2001).
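Two of these indexes, the RMSEA and the CFI, can be computed directly from the model and null-model chi-square statistics using their standard formulas. A sketch with purely illustrative numbers (not values from the present analyses):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a model chi-square,
    its degrees of freedom, and the sample size."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_null, df_null):
    """Comparative Fit Index: improvement of the research model over the
    null (independence) model, via noncentrality estimates."""
    d_model = max(chi2 - df, 0.0)
    d_null = max(chi2_null - df_null, d_model)
    return 1.0 - d_model / d_null

# Illustrative values: a model chi-square of 1200 on 454 df, N = 417,
# against a null-model chi-square of 4500 on 496 df.
print(round(rmsea(1200, 454, 417), 3))       # -> 0.063
print(round(cfi(1200, 454, 4500, 496), 3))   # -> 0.814
```

A model whose chi-square equals its degrees of freedom yields RMSEA = 0 and CFI = 1.0, which is why values near the .06 and .95 cutoffs cited above mark the boundary of acceptable fit.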

For Samples 1 and 2, the RMSEA values were .069 and .060, respectively, indicating marginally good model fit. However, in both samples, the CFI, GFI, and AGFI values all fell well below optimal levels at .79, .83, and .80, respectively, for Sample 1 and .83, .85, and .83, respectively, for Sample 2. These values provide evidence that the theoretical model did not fit the data well.

The R² values (see Table 1) for the items in each sample suggested that the five hypothesized factors explained a fair amount of variance in the items. However, across Samples 1 and 2, 11 of the 32 items had standardized factor loadings less than |.35|, suggesting that they are not strong indicators of the hypothesized latent factors. This poses a particular problem for the scale of belief in simple knowledge.
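Screening items against a loading cutoff of this kind is a simple threshold test on the standardized loadings. The values below are hypothetical, standing in for the actual loadings reported in Table 1:

```python
import numpy as np

# Hypothetical standardized factor loadings for a seven-item subscale.
loadings = np.array([0.62, 0.48, 0.31, -0.22, 0.55, 0.34, 0.41])

# Items whose absolute loading falls below the |.35| criterion are
# flagged as weak indicators of their latent factor.
threshold = 0.35
weak = np.flatnonzero(np.abs(loadings) < threshold)
print(weak.tolist())  # -> [2, 3, 5]
```

The ancillary analyses reported below apply exactly this kind of screen: the 11 items flagged as weak were dropped and the CFA was refit.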

Correlations Among Latent Constructs

Table 2 provides the correlations among the latent epistemic belief factors in our CFA model. Most correlations were of moderate magnitude. We found higher correlations among constructs capturing beliefs about knowledge (e.g., Knowledge is simple; Knowledge is certain; and Knowledge is gained from omniscient authorities) and among constructs capturing beliefs about learning (Learning is quick; Ability is fixed). This finding is not unexpected because of the conceptual similarity in the clusters of constructs.

Subscale Reliabilities

To further examine the measurement properties of the EBI, we obtained internal consistency estimates for its five subscales for the two samples. Across the samples, subscale reliabilities were lower than was desirable (see Table 3).

Ancillary Analyses

To determine the extent to which the 11 weaker items identified in the aforementioned CFAs were hurting overall model fit, we excluded them and ran an additional CFA on each sample.5 Fit statistics improved, with CFI, GFI, and AGFI values at .89, .91, and .88, respectively, for Sample 1; and .91, .91, and .89, respectively, for Sample 2. RMSEA values were .060 in Sample 1 and .053 in Sample 2. These findings indicate that dropping the items increased the fit of the model to the data in our samples.

Discussion

Our analyses uncovered a variety of problems with the EBI. CFA suggested that the five dimensions proposed to constitute the EBI were not a good fit to the data in either sample. Moreover, the magnitude of correlations among some of the latent variables calls into question the assumption that the dimensions of epistemic beliefs are orthogonal. Tests of internal consistency produced coefficients that were uniformly below .70, suggesting that there is a fair amount of measurement error associated with the EBI subscale scores. We note that the Cronbach’s alpha coefficients found in our large samples are smaller than those in the literature. Removal of 11 items improved the fit of the model to the data.

Study 2

Method

Participants

We used two different samples of participants to assess the psychometric properties of the EBS. In each case, participants completed the EBS as part of a larger study. In Sample 1, we aggregated data across several student samples that took the EBS (Wood & Kardash, 2002) in spring 2005. College students (N = 380) from two neighboring midsized Southwestern universities composed this composite sample. The participants’ mean age was approximately 24 years. Participants were largely female (75%) and White (71%). In Sample 2, we again aggregated data across several student samples that took the EBS (Wood & Kardash, 2002) in fall 2005. College students (N = 415) from two neighboring midsized Southwestern universities composed this composite sample. Their mean age was approximately 25 years. Participants were largely female (73%) and White (72%).


We recruited students at their universities through educational psychology courses. Volunteers received an informed consent form and packet of surveys, including the EBS, to be completed at home and returned at the next class meeting. Volunteers received course credit for their participation. We again conducted two separate CFAs to assess the extent to which the prescribed five-factor model fit the data from two different samples.


Confirmatory Factor Analysis

For each sample separately, we loaded items from the EBS onto the five latent factors hypothesized to account for the variance in the items: Speed of Knowledge Acquisition, Structure of Knowledge, Knowledge Construction and Modification, Characteristics of Successful Students, and Attainability of Objective Truth. Again, the five latent factors in our CFA were allowed to covary. In our assessment of the EBS, we used the same fit statistics used to assess the fit of our hypothesized model with the EBI.
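For readers unfamiliar with the fit statistics referenced throughout, the two index families used here can be sketched from their defining formulas. The following is a minimal illustration, not the study's analysis; the chi-square, degrees-of-freedom, and sample-size values are hypothetical, chosen only to show the arithmetic.

```python
import math

def rmsea(chi_square, df, n):
    """Root mean square error of approximation:
    RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    Values near .05 or below are conventionally read as good fit."""
    return math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))

def cfi(chi_m, df_m, chi_b, df_b):
    """Comparative fit index: improvement of the target model (chi_m, df_m)
    over the baseline (independence) model (chi_b, df_b).
    CFI = 1 - max(chi_m - df_m, 0) / max(chi_m - df_m, chi_b - df_b, 0)."""
    d_m = max(chi_m - df_m, 0.0)
    d_b = max(chi_b - df_b, 0.0)
    denom = max(d_m, d_b)
    return 1.0 if denom == 0 else 1.0 - d_m / denom

# Hypothetical values (not from this study): target model chi2 = 250 on
# df = 100, baseline chi2 = 2000 on df = 120, n = 400 respondents.
print(round(rmsea(250, 100, 400), 3), round(cfi(250, 100, 2000, 120), 2))
# prints 0.061 0.92
```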

Results for the EBS indicated somewhat better fit than for the EBI. For Samples 1 and 2, the RMSEA values were .050 and .052, respectively, values indicative of fairly good model fit. However, the CFI, GFI, and AGFI values fell below optimal levels in both Sample 1 (.90, .85, and .83, respectively) and Sample 2 (.88, .85, and .83, respectively).

Inspection of the R² values (see Table 4) by sample suggested that the five hypothesized factors explained a fair amount of variance in the items. The majority of items on the EBS appear to be fairly good indicators of the factors that Wood and Kardash (2002) described. However, there were nine items with standardized factor loadings less than |.35| in one or both samples, suggesting they were not strong indicators of the hypothesized latent factors.

Correlations Among Latent Constructs

Correlations among the latent epistemic belief factors in our CFA model are shown in Table 5. Constructs in the EBS demonstrated a higher degree of interrelatedness than those on the EBI, with many correlations being moderate to strong in magnitude.

Subscale Reliabilities

Internal consistency estimates for the five subscales of the EBS are shown in Table 6. Across the samples, subscale reliabilities were better than those associated with the EBI but still lower than is desirable.

Ancillary Analyses

To determine how the nine weaker items influenced the CFAs, we excluded them and ran an additional CFA on each sample. Fit statistics improved slightly, with CFI, GFI, and AGFI values at .92, .88, and .86, respectively, for Sample 1; and .89, .87, and .85, respectively, for Sample 2. RMSEA values actually increased slightly to .052 in Sample 1 and .059 in Sample 2.


Our analyses of the EBS revealed psychometric problems. CFA suggested that the five dimensions proposed to constitute the EBS fit the data only marginally well in both samples, although fit indexes were better for the EBS than those reported for the EBI. Internal consistency coefficients were also somewhat stronger than those seen with the EBI, but many still fell below .70, and all fell below .80.




We aggregated data across several undergraduate student samples who took the EQ (Schommer, 1990) between summer 2000 and spring 2002 while enrolled in introductory or developmental psychology classes in conjunction with various larger studies. College students (N = 935) from a midsized Southeastern university composed this composite sample. Participants’ ages ranged from 18 to 45 years, with a mean age of approximately 20 years. Approximately 84% of the sample was 18-21 years of age. Participants were largely female (75%) and White (68%).


We recruited students through educational psychology or human development courses. Volunteers received an informed consent form and packet of surveys, including the EQ, to be completed at home and returned at the next class meeting. Volunteers received course credit for their participation.

We examined the psychometric properties of the EQ from three different angles. Having split the composite sample into random halves (creating Sample 1 and Sample 2), we used Sample 1 to conduct an EFA on the subscale scores. This allowed us to assess the factor structure of the EQ when analyzed in the intended manner. We then conducted a second EFA on individual items rather than on item subsets. This allowed us to assess whether a reasonable factor structure could be found in the data when side-stepping the controversial practice of analyzing item subsets. Having identified the more efficacious of the two factor structures emerging from EFA, we used the second sample to conduct a CFA to assess how well the model fit a new data set.


Exploratory Factor Analysis

Using Sample 1, we subjected the 12 item-subset scores composing the EQ to principal axis factoring with Varimax rotation, retaining only those factors with eigenvalues greater than 1.0 (as Schommer [1990, 1993] recommended). Using this criterion, three factors emerged in the data, accounting for a combined 26.74% of the variance.
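The eigenvalue-greater-than-1.0 retention rule applied above can be sketched in a few lines: take the eigenvalues of the variables' correlation matrix and count those exceeding 1.0. The sketch below uses simulated scores (illustrative only, not the study's data); because the simulated "subsets" are driven by a single common factor, the rule retains one factor.

```python
import numpy as np

def n_factors_kaiser(data):
    """Count factors to retain under the eigenvalue > 1.0 (Kaiser) rule.

    data: (n_respondents, n_variables) array of scores. Eigenvalues are
    taken from the correlation matrix of the variables."""
    corr = np.corrcoef(data, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)
    return int(np.sum(eigenvalues > 1.0))

# Simulated scores (hypothetical): 12 "item subsets" for 450 respondents,
# all driven by one common factor plus a little noise.
rng = np.random.default_rng(0)
common = rng.standard_normal((450, 1))
subsets = common + 0.05 * rng.standard_normal((450, 12))
print(n_factors_kaiser(subsets))  # prints 1
```

Note that full principal axis factoring also iterates on communality estimates and applies a rotation (Varimax, here); the sketch covers only the retention criterion.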

We retained subsets with loadings at or above |.35| on a factor (see Table 7). Six item subsets met the criterion for inclusion in Factor 1. Three of these subsets (Learn the first time, Success is unrelated to hard work, and Cannot learn how to learn) have consistently loaded together in previous research (see Schommer, 1993, 1998). These subsets are often interpreted as suggesting a belief in innate ability, although they might also be indicative of a belief in quick learning. Concentrated effort is a waste of time and Learning is quick have demonstrated inconsistencies in their loadings with the aforementioned subsets in the literature but nevertheless also reflect the innate ability or quick learning theme. Do not criticize authority has not typically loaded with the other item subsets and has little, if any, conceptual similarity to them. We note that in past research, subsets pertaining to authority have shown the greatest inconsistency in loadings across studies. We named the first factor Quick and Fixed Learning on the basis of the pattern of loadings.

Four item subsets met the criteria for inclusion on the second factor: Seek single answers, Avoid ambiguity, Avoid integration, and Ability to learn is innate. Of these four subsets, Ability to learn is innate has not typically loaded with the remaining three in the literature and is the only subset not related to a belief in simple knowledge. Because the other subsets had higher loadings than the Ability to learn is innate subset, we named the second factor Belief in Simple Knowledge.

With respect to the third factor, only two item subsets met our criteria for inclusion: Success is unrelated to hard work and Knowledge is certain. We note that the success subset also loaded heavily and positively onto the first factor. Because of its conceptual coherence with the first factor, it made sense to treat it primarily as an indicator of Quick and Fixed Learning. The third factor, then, was (theoretically) definable only in terms of the item subset Knowledge is certain. Because of the overall pattern of loadings on this factor, we hesitated to interpret it.

As planned, we also conducted an EFA on the individual items composing the EQ. We subjected the 63 items of the EQ to principal axis factoring with Varimax rotation, retaining those factors with eigenvalues greater than 1.0. Using this criterion, we extracted 22 factors, with those accounting for the most variance being largely uninterpretable. Comparison of the two EFA results indicated that the factor structure resulting from analysis of the subset scores, despite its weakness, held greater meaning than the factor structure resulting from an item-level analysis. Therefore, that is the factor structure we attempted to confirm in Sample 2.

Confirmatory Factor Analysis

Based on our exploratory findings, it was clear that only six item subsets loaded as they typically have in previous research and in a meaningful pattern: Avoid ambiguity, Avoid integration, Seek single answers, Cannot learn how to learn, Learn the first time, and Success is unrelated to hard work. The pattern of loadings for the remaining subsets (Learning is quick, Knowledge is certain, Do not criticize authority, Depend on authority, Ability to learn is innate, and Concentrated effort is a waste of time) did not represent a clean conceptual scheme and provided additional evidence of their instability as indicators of latent factors associated with the EQ. Rather than include these unstable indicators in our confirmatory model, we chose to confirm the presence of only two factors, using only the six subsets that have demonstrated the most stable pattern of loadings in EFAs in the literature. We performed this analysis by using the second half of our composite sample.

We loaded the item subsets Seek single answers, Avoid ambiguity, and Avoid integration onto one latent construct, Belief in simple knowledge. We loaded Learn the first time, Cannot learn how to learn, and Success is unrelated to hard work onto the second latent construct. Because the three item subsets that compose the second construct have generally been thought to measure beliefs about ability, we called the factor Ability to Learn Is Fixed. The two latent factors were allowed to covary in the CFA model.

In our assessment of the EQ, we used the same fit statistics used to assess the fit of our hypothesized models for the EBI and EBS. For the model, the RMSEA was .045, whereas values for CFI, GFI, and AGFI were .97, .99, and .97, respectively. All of these indexes indicated that the model fit the data well. In addition, the R² values (see Table 8) associated with all six subsets suggested that they were functioning well as indicators of their respective latent constructs. For these subsets, R² values ranged from .15 to .48.

Subset Reliabilities

One possible reason for the tendency of item subsets to group differently across studies may be unreliability in the subsets themselves. Indeed, if first-order factors are not measured well by their respective indicators (e.g., items), then including those factors in a second-order factor analysis becomes dubious: the correlations among the first-order factors may be unstable because of measurement error.

To assess the possibility that measurement unreliability at the item subset level may be a source of instability in the EQ, we examined the internal consistencies (using Cronbach’s alpha) of each subset in our sample (see Table 6). Across the board, we found that the internal consistencies for the EQ item subsets were poor. Although we recognize that having a low number of items in a subset may attenuate internal consistency estimates somewhat, it is important to note that even among subsets with more items, the estimates were low. For example, the 11-item Seek single answers subset exhibited one of the lowest internal consistencies, falling at .22, and the 8-item Avoid integration subset had a reliability estimate of .25.
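The internal consistency coefficient used throughout this article, Cronbach's alpha, follows directly from the item variances and the variance of the total score. The following is a minimal sketch on simulated data (the score matrices are illustrative, not the study's data): near-duplicate items drive alpha toward 1, whereas independent items drive it toward 0, which is the pattern behind the low subset reliabilities reported above.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated example: 8 items that are near-duplicates of one underlying
# score should be highly internally consistent.
rng = np.random.default_rng(1)
base = rng.standard_normal((200, 1))
consistent = base + 0.01 * rng.standard_normal((200, 8))
print(round(cronbach_alpha(consistent), 2))  # prints 1.0 (within rounding)
```

Note also the point made in the text: alpha depends on the number of items, so short subsets are penalized, but that alone cannot explain estimates as low as .22 on an 11-item subset.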


Our results indicated a variety of problems with the EQ. First, EFA of neither the item subsets nor the individual items produced a factor solution that resembled Schommer’s (1990). The CFA fit statistics were good, but this analysis was based on only six item subsets representing only two dimensions of beliefs. Earlier, we proposed that one potential reason for the instability of some of the EQ subsets might be unreliability in the measurement of the subsets themselves. Given the low reliabilities we observed, it is clear that the item subsets generally lacked internal consistency, supporting our assertion that measurement error may destabilize the correlations among them.


From a theoretical perspective, our investigations failed to support the view of epistemic beliefs as a domain-general and multidimensional collection of related beliefs about knowledge and knowing. This is seen most clearly in the consistent failure of factor analyses (exploratory and confirmatory) to support the hypothesized factor structures of the instruments investigated herein and is underscored by the low internal consistency estimates seen for subscales in the target instruments. Lack of support is further demonstrated in the extant research literature, which has failed to yield a consistent picture of the number or nature of dimensions that constitute epistemic beliefs.

Regarding use of the target instruments, caution must be advised. The EQ presented the most serious psychometric problems because of its sample-specific scoring procedure and our EFA results. The CFA model we ultimately tested showed good fit to the data; however, it included only two scales: Belief in simple knowledge and Ability to learn is fixed. Use of just these two scales to capture epistemic beliefs cannot be recommended because of concerns about undersampling the construct; there are certainly more facets of epistemic beliefs.

The EBI and EBS each fared better than the EQ, although the fit indexes from our CFAs and the estimates of internal consistency were lower than desirable. Looking for sources of relative stability, we found that Belief that ability is fixed had the most desirable psychometric properties of the EBI scales and that Speed of knowledge acquisition had the most desirable psychometric properties of the EBS scales.

Epistemic Nature of Beliefs

There have been a number of recent developments regarding how epistemic beliefs are conceptualized. For instance, there is growing consensus that some of the beliefs originally included in measures of epistemic beliefs are not, themselves, epistemic in nature (Bendixen & Rule, 2004; Hofer, 2000; Hofer & Pintrich, 1997). Hofer (2000) and Pintrich (2002) have suggested that epistemic beliefs include beliefs about knowledge (the simplicity and certainty of knowledge) and beliefs about knowing (source and justification of knowledge) but not beliefs about learning or the nature of ability. Schommer-Aikins (2004) recently made a similar distinction, separating beliefs about knowing (e.g., fixed ability, quick learning) from beliefs about knowledge (e.g., knowledge is simple and certain). We note that three of the four subscales we identified as sources of relative stability are, in fact, beliefs about learning (a la Hofer) or knowing (a la Schommer-Aikins): Belief that ability is fixed from the EBI, Speed of knowledge acquisition from the EBS, and Ability to learn is fixed from the EQ.

Of the remaining EBI subscales, three are clearly epistemic in nature (Belief in simple knowledge, Belief in certain knowledge, and Belief that knowledge is derived from omniscient authorities) but cannot be recommended because internal consistency estimates were low for the three scales, ranging from .47 to .63. The Structure of knowledge and Knowledge construction and modification subscales of the EBS also address epistemic beliefs, and had internal consistency estimates that were strong relative to other epistemic beliefs scales assessed herein but that still only ranged from .65 to .76. These findings underscore the challenge of conceptualizing and operationalizing the more abstract and possibly tacit elements of epistemic beliefs in contrast to more concrete beliefs about learning.

Levels of Specificity

Consensus is growing in the research community that epistemic beliefs are both domain specific and domain general in nature. Buehl and Alexander (2001) proposed a nested model featuring three levels of beliefs: domain-specific epistemic beliefs (e.g., beliefs about mathematical knowledge or psychological knowledge), which are embedded within more general beliefs about academic knowledge, which are embedded within general epistemic beliefs at a broad level. Each of the three instruments examined here included a mix of items representing each of these levels of specificity, which may contribute to a lack of stability.

It is intuitively appealing to believe that measures of epistemic beliefs that are more domain- or context-specific would yield higher internal consistency. However, several researchers have tried to develop domain-specific measures of epistemic beliefs, and these attempts have not fared well either. Hofer (2000) designed an instrument to assess four dimensions of epistemic beliefs (Knowledge is certain, Knowledge is simple, Source of knowledge, Justification of knowledge) relative to the domains of science and psychology. Cronbach’s alphas for the four scales on the two versions of the instrument ranged from .51 to .81, with five of the eight alphas being less than .70.

Buehl et al. (2002) measured two dimensions of epistemic beliefs (Need for effort,6 Integration of information and problem solving) situated in the domains of history and mathematics. Cronbach’s alphas for these four scales ranged from .61 to .75 in their Study 1 and from .58 to .72 in their Study 2. Likewise, Buehl and Alexander (2005) derived factors capturing three epistemic beliefs (Isolation of knowledge, Certainty of knowledge, Authority as source of knowledge) in two academic domains (history, mathematics) and reported alphas ranging from .64 to .77 on the resulting six subscales, with three of the six alphas being less than or equal to .70. Last, Mansell, DeBacker, and Crowson (2005) introduced a measure of beliefs about school learning (Buehl & Alexander’s [2001] middle level of specificity) that included two scales that address beliefs about academic knowledge. The internal consistency estimates were alpha = .77 for Knowledge is constructed and alpha = .63 for Knowledge is a commodity. It appears, therefore, that the challenges of measuring epistemic beliefs are not due solely to lack of domain specificity.

Undue Empirical Influence

In reviewing how measures of epistemic beliefs have been developed and used, we found empirical approaches more in evidence than theoretical ones. Although she did not provide a fully explicated theoretical grounding, Schommer (1990) did articulate in her pioneering work a multidimensional model of epistemic beliefs that she then sought to operationalize through development of the EQ. The dimensions of epistemic beliefs proposed in the theory were not, however, reliably captured by the items or item subsets constituting the EQ. As a result, routine use of the EQ involved factor analyses performed on specific samples, which led to findings regarding belief factors that were grounded empirically rather than theoretically.

In other cases, instrument development efforts have been essentially empirical from the start. For example, the EBS emerged through a series of analyses performed on a large pool of items adopted from other researchers. To the extent that identified dimensions of epistemic beliefs lack a firm theoretical grounding, they will be more strongly influenced by the particular characteristics of the development sample. Careful theoretical grounding will help ensure that any successful instruments that emerge in the future yield explanatory, and not merely descriptive, information about epistemic beliefs.


A limitation of the present study is the preponderance of White female participants in our samples. This is an artifact of the geographic locations in which we conducted the study. There is little evidence in the epistemic-beliefs literature regarding gender differences, and the evidence that does exist is mixed. Hofer (2000) reported gender differences, with males more likely than females to view knowledge as simple and certain and to view authorities as the source of knowledge. However, Buehl et al. (2002) failed to find gender differences in beliefs about integration of knowledge and the need for effort when learning. In both cases, the researchers investigated gender differences by comparing the belief scores of the two groups. We are not aware of any studies that have investigated gender differences in the factor structure of epistemic beliefs.


Theoretical arguments suggest that epistemic beliefs are related to classroom learning and achievement. Although research to date has generally supported this assertion, studies linking epistemic beliefs to motivation and academic achievement that use the EQ, EBI, or EBS should be interpreted cautiously. Because these measurement instruments contain large amounts of error variation and offer dubious operationalizations of the constructs they purportedly measure, researchers should seriously reconsider the state of knowledge in the area of epistemic beliefs and their relationships with learning processes and outcomes. As work in this area continues, researchers invested in exploring epistemic beliefs from a multiple-beliefs perspective need to define clearly the dimensions of beliefs that are explicitly epistemic in nature and to refrain from making decisions about dimensionality that rest on empirical rather than theoretical foundations. Theoretical grounding will not ensure greater psychometric success in measuring epistemic beliefs with self-report instruments, but its absence will surely obscure understanding of the role of epistemic beliefs in the classroom.


1. In some studies, researchers did not perform factor analyses on the study sample. Rather, they calculated belief scores by using factor coefficients from previous samples and z scores from their current sample, as in Paulsen and Wells (1998), Schommer and Walker (1995, 1997), and Schommer-Aikins and Hutter (2002).

2. The researchers removed 10 items on the EQ related to omniscient authority because they failed to emerge as a separate factor in previous studies.

3. The original 80-item survey was composed of 29 items that were unique to Schommer’s (1990) instrument, 22 items that were unique to Jehng et al.’s (1993) instrument, and 29 items that appeared on both instruments.

4. For the EQ only, we also conducted EFA because it is an element in the scoring procedure that researchers typically use.

5. We provide these results for descriptive purposes only. We note that these modifications necessarily capitalized on the unique characteristics of our samples, producing fit statistics that may not generalize to other samples.

6. Some researchers may question whether this is an epistemic belief or a belief about learning.


Baxter Magolda, M. B. (1992). Knowing and reasoning in college: Gender-related patterns in students’ intellectual development. San Francisco: Jossey-Bass.

Belenky, M. F., Clinchy, B. M., Goldberger, N. R., & Tarule, J. M. (1986). Women’s ways of knowing: The development of the self, voice, and mind. New York: Basic Books.

Bendixen, L. D., & Rule, D. C. (2004). An integrative approach to personal epistemology: A guiding model. Educational Psychologist, 39(1), 69-80.

Bendixen, L. D., Schraw, G., & Dunkle, M. E. (1998). Epistemic beliefs and moral reasoning. The Journal of Psychology, 132, 187-200.

Braten, I., & Olaussen, B. S. (2005). Profiling individual differences in student motivation: A longitudinal cluster-analytic study in differing academic contexts. Contemporary Educational Psychology, 30, 359-396.

Braten, I., & Stromso, H. I. (2004). Epistemological beliefs and implicit theories of intelligence as predictors of achievement goals. Contemporary Educational Psychology, 29, 371-388.

Buehl, M. M., & Alexander, P. A. (2001). Beliefs about academic knowledge. Educational Psychology Review, 13, 385-418.

Buehl, M. M., & Alexander, P. A. (2005). Motivation and performance differences in students’ domain-specific epistemological belief profiles. American Educational Research Journal, 42, 697-726.

Buehl, M. M., Alexander, P. A., & Murphy, P. K. (2002). Beliefs about schooled knowledge: Domain specific or domain general? Contemporary Educational Psychology, 27, 415-449.

Byrne, B. M. (2005). Factor analytic models: Viewing the structure of an assessment instrument from three perspectives. Journal of Personality Assessment, 85, 17-32.

Clarebout, G., Elen, J., Luyten, L., & Bamps, H. (2001). Assessing epistemological beliefs: Schommer’s questionnaire revisited. Educational Research and Evaluation, 7(1), 53-77.

DeBacker, T. K., & Crowson, H. M. (2006). Influences on cognitive engagement and achievement: Personal epistemology and achievement motives. British Journal of Educational Psychology, 76, 535-551.

Duell, O. K., & Schommer-Aikins, M. (2001). Measures of people’s beliefs about knowledge and learning. Educational Psychology Review, 13, 419-449.

Hardre, P. L., Crowson, H. M., Ly, C., & Xie, K. (2007). Testing differential effects of computerbased, Web-based, and paper-based administration of questionnaire research instruments. British Journal of Educational Technology, 38(1), 5-22.

Hofer, B. K. (2000). Dimensionality and disciplinary differences in personal epistemology. Contemporary Educational Psychology, 25, 378-405.

Hofer, B. K. (2001). Personal epistemology research: Implications for learning and teaching. Educational Psychology Review, 13, 353-383.

Hofer, B. K., & Pintrich, P. R. (1997). The development of epistemological theories: Beliefs about knowledge and knowing and their relation to learning. Review of Educational Research, 67(1), 88-140.

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55.

Jehng, J. J., Johnson, S. D., & Anderson, R. C. (1993). Schooling and students’ epistemological beliefs about learning. Contemporary Educational Psychology, 18, 23-35.

Joreskog, K., & Sorbom, D. (2002). LISREL (Version 8.52) [Computer software]. Lincolnwood, IL: Scientific Software International.

Kardash, C. M., & Howell, K. L. (2000). Effects of epistemological beliefs and topic-specific beliefs on undergraduates’ cognitive and strategic processing of dual-positional text. Journal of Educational Psychology, 92, 524-535.

King, P. M., & Kitchener, K. S. (1994). Developing reflective judgment: Understanding and promoting intellectual growth and critical thinking in adolescents and adults. San Francisco: Jossey-Bass.

Kitchener, K. S., & King, P. M. (1981). Reflective judgment: Concepts of justification and their relationship to age and education. Journal of Applied Developmental Psychology, 2, 89-116.

Kitchener, K. S., & King, P. M. (1990). The reflective judgment model: Ten years of research. In M. L. Commons, C. Armon, L. Kohlberg, F. A. Richards, & T. A. Grotzer (Eds.), Adult development: Vol. 2. Models and methods in the study of adolescent and adult thought (pp. 63-78). New York: Praeger.

Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New York: Guilford Press.

Kuhn, D. (1991). The skills of argument. Cambridge, England: Cambridge University Press.

Mansell, R., DeBacker, T. K., & Crowson, H. M. (2005, November). Further validation of the BASLQ, a measure of epistemology grounded in educational context. Poster presented at the first meeting of the Southwest Consortium for Innovations in Psychology in Education, Las Vegas, NV.

Muis, K. R. (2004). Personal epistemology and mathematics: A critical review and synthesis of research. Review of Educational Research, 74, 317-377.

Neber, H., & Schommer-Aikins, M. (2002). Self-regulated science learning with highly gifted students: The role of cognitive, motivational, epistemological, and environmental variables. High Ability Studies, 13(1), 59-74.

Nussbaum, E. M., & Bendixen, L. D. (2002, April). The effect of personality, ability, and epistemological beliefs on students’ argumentation behavior. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Nussbaum, E. M., & Bendixen, L. D. (2003). Approaching and avoiding arguments: The role of epistemological beliefs, need for cognition, and extraverted personality traits. Contemporary Educational Psychology, 28, 573-595.

Paulsen, M. B., & Wells, C. T. (1998). Domain differences in the epistemological beliefs of college students. Research in Higher Education, 39, 365-384.

Perry, W. G. (1970). Forms of intellectual and ethical development in the college years: A scheme. New York: Holt, Rinehart & Winston.

Pintrich, P. R. (2002). Future challenges and directions for theory and research on personal epistemology. In B. K. Hofer & P. R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 103-118). Mahwah, NJ: Erlbaum.

Qian, G., & Alvermann, D. (1995). Role of epistemological beliefs and learned helplessness in secondary school students’ learning science concepts from text. Journal of Educational Psychology, 87, 282-292.

Ravindran, B., Greene, B. A., & DeBacker, T. K. (2005). The role of achievement goals and epistemological beliefs in the prediction of pre-service teachers’ cognitive engagement and learning. Journal of Educational Research, 98, 222-233.

Russell, D. W. (2002). In search of underlying dimensions: The use (and abuse) of factor analysis. Personality and Social Psychology Bulletin, 28, 1629-1646.

Ryan, M. P. (1984). Monitoring text comprehension: Individual differences in epistemological standards. Journal of Educational Psychology, 76, 248-254.

Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension. Journal of Educational Psychology, 82, 498-504.

Schommer, M. (1993). Epistemological development and academic performance among secondary students. Journal of Educational Psychology, 85, 406-411.

Schommer, M. (1994). An emerging conceptualization of epistemological beliefs and their role in learning. In R. Garner & P. Alexander (Eds.), Beliefs about text and about text instruction (pp. 25-40). Hillsdale, NJ: Erlbaum.

Schommer, M. (1998). The influence of age and education on epistemological beliefs. British Journal of Educational Psychology, 68, 551-562.

Schommer, M., Calvert, C., Gariglietti, G., & Bajaj, A. (1997). The development of epistemological beliefs among secondary students: A longitudinal study. Journal of Educational Psychology, 89, 37-40.

Schommer, M., Crouse, A., & Rhodes, N. (1992). Epistemological beliefs and mathematical text comprehension: Believing it is simple does not make it so. Journal of Educational Psychology, 84, 435-443.

Schommer, M., & Dunnell, P. A. (1994). A comparison of epistemological beliefs between gifted and non-gifted high school students. Roeper Review, 16, 207-210.

Schommer, M., & Dunnell, P. A. (1997). Epistemological beliefs of gifted high school students. Roeper Review, 19, 153-156.

Schommer, M., & Walker, K. (1995). Are epistemological beliefs similar across domains? Journal of Educational Psychology, 87, 424- 432.

Schommer, M., & Walker, K. (1997). Epistemological beliefs and valuing school: Considerations for college admissions and retention. Research in Higher Education, 38, 173-186.

Schommer-Aikins, M. (2004). Explaining the epistemological belief system: Introducing the embedded systemic model and coordinated research approach. Educational Psychologist, 39(1), 19-29.

Schommer-Aikins, M., Duell, O. K., & Barker, S. (2002). Epistemological beliefs across domains using Biglan’s classification of academic disciplines. Research in Higher Education, 44, 347-366.

Schommer-Aikins, M., & Easter, M. (2006). Ways of knowing and epistemological beliefs: Combined effect on academic performance. Educational Psychology, 26, 411-423.

Schommer-Aikins, M., & Hutter, R. (2002). Epistemological beliefs and thinking about everyday controversial issues. The Journal of Psychology: Interdisciplinary and Applied, 136, 5-20.

Schommer-Aikins, M., Mau, W., Brookhart, S., & Hutter, R. (2000). Understanding middle students’ beliefs about knowledge and learning using a multidimensional paradigm. Journal of Educational Research, 94, 120-127.

Schraw, G., Bendixen, L. D., & Dunkle, M. E. (2002). Development and validation of the Epistemic Belief Inventory. In B. K. Hofer & P. R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 103-118). Mahwah, NJ: Erlbaum.

Schumacker, R. E., & Lomax, R. G. (2004). A beginner’s guide to structural equation modeling (2nd ed.). Mahwah, NJ: Erlbaum.

Sinatra, G. M., & Kardash, C. K. (2004). Teacher candidates’ epistemological beliefs, dispositions, and views on teaching as persuasion. Contemporary Educational Psychology, 29, 483-498.

Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Boston: Allyn & Bacon.

Tanguma, J. (2001). Effects of sample size on the distribution of select