Instruments

Developers: Yesim Capa Aydin, Jale Cakiroglu, & Hilal Sarikaya

Contact information:
Yesim Capa Aydin, Ph.D.
Middle East Technical University
Faculty of Education
06531 Ankara TURKEY
capa@metu.edu.tr

________________________________________________________________

The Teachers’ Sense of Efficacy Scale

If you want a copy of this scale, including the long and short forms and scoring directions, click here

Directions for Scoring the Teachers’ Sense of Efficacy Scale

Developers: Megan Tschannen-Moran, College of William and Mary

Anita Woolfolk Hoy, the Ohio State University.

Construct Validity

For information on the construct validity of the Teachers’ Sense of Efficacy Scale, see:

Tschannen-Moran, M., & Woolfolk Hoy, A. (2001). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17, 783-805.

Factor Analysis

It is important to conduct a factor analysis to determine how your participants respond to the questions. We have consistently found three moderately correlated factors: Efficacy in Student Engagement, Efficacy in Instructional Practices, and Efficacy in Classroom Management, but at times the makeup of the scales varies slightly. With preservice teachers, we recommend that the full 24-item scale (or the 12-item short form) be used, because the factor structure is often less distinct for these respondents.

Subscale Scores

To determine the Efficacy in Student Engagement, Efficacy in Instructional Practices, and Efficacy in Classroom Management subscale scores, we compute unweighted means of the items that load on each factor.
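As a minimal sketch of this computation in Python (the helper is our own illustration, not part of the scale materials; the item-to-subscale groupings follow the commonly circulated long-form scoring key and should be checked against your own factor analysis):

```python
# Illustrative subscale scoring: unweighted means of the items that load
# on each factor. Item groupings follow the commonly circulated TSES
# long-form scoring key -- verify them against your own factor analysis.
SUBSCALES = {
    "Engagement":  [1, 2, 4, 6, 9, 12, 14, 22],
    "Instruction": [7, 10, 11, 17, 18, 20, 23, 24],
    "Management":  [3, 5, 8, 13, 15, 16, 19, 21],
}

def subscale_scores(responses):
    """responses: dict mapping item number (1-24) to a 1-9 rating."""
    return {
        name: sum(responses[i] for i in items) / len(items)
        for name, items in SUBSCALES.items()
    }
```

A respondent who rated every item 5, for example, would receive 5.0 on each of the three subscales.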

Reliabilities

In Tschannen-Moran, M., & Woolfolk Hoy, A. (2001), Teacher efficacy: Capturing an elusive construct, Teaching and Teacher Education, 17, 783-805, the following reliabilities were found:

              Long Form              Short Form
              Mean   SD   alpha     Mean   SD   alpha
TSES (OSTES)   7.1   .94   .94       7.1   .98   .90
Engagement     7.3   1.1   .87       7.2   1.2   .81
Instruction    7.3   1.1   .91       7.3   1.2   .86
Management     6.7   1.1   .90       6.7   1.2   .86

1 Because this instrument was developed at the Ohio State University, it is sometimes referred to as the Ohio State Teacher Efficacy Scale (OSTES). We prefer the name Teachers’ Sense of Efficacy Scale (TSES).

Teacher Efficacy Scale (Gibson & Dembo: Long Form)

If you want a copy of this scale, click here

Directions for Scoring the Teacher Efficacy Scale: Long Form

1. Construct validity

For information on the construct validity of the 22-item efficacy scale, see Woolfolk, A. E., & Hoy, W. K. (1990). Prospective teachers’ sense of efficacy and beliefs about control. Journal of Educational Psychology, 82, 81-91.

2. Factor Analysis

When using the 22-item version of the Teacher Efficacy Scale, it is important to conduct a factor analysis to determine how your subjects respond to the questions. We have consistently found two independent factors: Teaching Efficacy (TE) and Personal Efficacy (PE), but at times the makeup of the scales varies slightly. For example, we often find that items 15 and 21 of the 22-item version do not load on either factor and must be dropped.

3. Reverse scoring:

Given the 1 = “strongly agree” to 6 = “strongly disagree” format, if you want a high score on each scale to indicate a strong sense of efficacy, then you must reverse the scoring for the Personal Efficacy items. Thus a “strongly agree” response to the statement “When I really try, I can get through to most difficult students” must be reversed so that the respondent receives a score of 6 rather than 1.

The reverse-scored items on the 22-item version are: 1, 5, 6, 7, 8, 11, 12, 14, 15*, 16, 18, 19, 22

*Note that item 15 is the only reversed item from the Teaching Efficacy scale rather than the Personal Efficacy scale.
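Mechanically, reversing an item on the 1-6 format amounts to subtracting the raw score from 7. A sketch (the function is a hypothetical helper, not part of the scale materials):

```python
# Reverse scoring on a 1-6 agree/disagree format: a reversed item's score
# becomes 7 minus the raw response, so "strongly agree" (1) is rescored as 6.
REVERSED_ITEMS_22 = {1, 5, 6, 7, 8, 11, 12, 14, 15, 16, 18, 19, 22}

def rescore(item, raw, reversed_items=REVERSED_ITEMS_22, scale_max=6):
    """Return the scored value for one item; non-reversed items pass through."""
    return (scale_max + 1 - raw) if item in reversed_items else raw
```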

4. TE and PE Scores:

To determine the TE and PE scores, we compute unweighted means of the items that load .35 or higher on each respective factor. We do not recommend combining the TE and PE scores to compute a total score because the TE and PE scales represent independent factors.
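The rule can be illustrated as follows (the data layout and names below are our own; the loadings would come from your factor analysis of the rescored responses):

```python
# Illustrative TE/PE scoring: unweighted means over items whose loading on
# a factor is .35 or higher; TE and PE are kept separate, never summed.
def factor_means(scores, loadings, threshold=0.35):
    """scores: {item: rescored value}; loadings: {item: {"TE": x, "PE": y}}."""
    means = {}
    for factor in ("TE", "PE"):
        items = [i for i, load in loadings.items() if load[factor] >= threshold]
        means[factor] = sum(scores[i] for i in items) / len(items)
    return means
```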

Teacher Efficacy Scale (Hoy & Woolfolk: Short Form)

If you want a copy of this scale, click here

Directions for Scoring the Teacher Efficacy Scale: Short Form

1. Construct validity

For information on the construct validity of the 10-item efficacy scale, see Hoy, W. K., & Woolfolk, A. E. (1990). Organizational socialization of student teachers. American Educational Research Journal, 27, 279-300.

2. Factor Analysis

It is important to conduct a factor analysis to determine how your subjects respond to the questions. We have consistently found two independent factors: Teaching Efficacy (TE) and Personal Efficacy (PE), but at times the makeup of the scales varies slightly.

3. Reverse scoring:

Given the 1 = “strongly agree” to 6 = “strongly disagree” format, if you want a high score on each scale to indicate a strong sense of efficacy, then you must reverse the scoring for the Personal Efficacy items. Thus a “strongly agree” response to the statement “When I really try, I can get through to most difficult students” must be reversed so that the respondent receives a score of 6 rather than 1.

The reverse-scored items on the 10-item version are: 3, 6, 7, 8, 9

4. TE and PE Scores:

To determine the TE and PE scores, we compute unweighted means of the items that load .35 or higher on each respective factor. We do not recommend combining the TE and PE scores to compute a total score because the TE and PE scales represent independent factors.

The Teaching Confidence Scale

If you want a copy of the Teaching Confidence scale, click here

Directions for Scoring the Teaching Confidence Scale

This scale was developed to provide a program-specific measure of efficacy. In an attempt to identify an appropriate level of specificity for assessing efficacy in our preservice teacher preparation program, we surveyed all the instructors who worked with the prospective teacher cohorts, asking what students should be able to do after completing the coursework. After removing redundancies, the result was a list of 32 teaching skills such as manage classrooms, evaluate student work, use cooperative learning approaches, teach basic concepts of fractions, and build learning in science on children’s intuitive understandings.

We then designed a questionnaire, named the Teaching Confidence Scale (initially called the OSU Teaching Confidence Scale because it focused on skills in our program), that asked students to rate on a 6-point scale how confident they were in their ability to accomplish each skill; the higher the score, the more confident. We then calculated a total average score for each respondent. In our first study, based on the average score for the entire 32-item scale, the alpha coefficient of reliability was in the .95 range.
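Alpha for such a scale can be computed from an items-by-respondents score matrix with the standard Cronbach formula; the helper below is an illustrative sketch (our own code, not part of the scale materials):

```python
from statistics import pvariance

def cronbach_alpha(data):
    """Cronbach's alpha. data: one list of k item scores per respondent."""
    k = len(data[0])
    item_columns = list(zip(*data))            # transpose to per-item columns
    item_vars = sum(pvariance(col) for col in item_columns)
    total_var = pvariance([sum(row) for row in data])
    return k / (k - 1) * (1 - item_vars / total_var)
```

When every item moves in lockstep across respondents, the formula returns 1.0; uncorrelated items drive it toward 0.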

In order to create a measure appropriate for your program, you would have to determine what students should be able to do after completing your requirements and then build a scale based on these expectations.

1. Construct validity

For information on the construct validity of the Teaching Confidence Scale, see Woolfolk Hoy, A. (2000, April). Changes in teacher efficacy during the early years of teaching. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

2. Factor Analysis

As described in Woolfolk Hoy, A. (2000, April), Changes in teacher efficacy during the early years of teaching, we performed a principal-axis factor analysis, using Kaiser’s criterion of eigenvalues greater than 1 (Kaiser, 1974) in combination with Cattell’s scree test (Cattell, 1965) to determine the number of factors (Kim & Mueller, 1978). Three factors emerged and accounted for 70% of the variance. Some items loaded on two or all three factors, so these items were dropped and the remaining items were reanalyzed, extracting three factors with varimax rotation. The three factors seem to represent confidence to teach math and science, confidence to use instructional innovations, and confidence to manage classrooms. It is important to conduct a factor analysis to determine how your subjects respond to your questions.
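Kaiser's retention rule itself is straightforward to check: count the eigenvalues of the item correlation matrix that exceed 1. The sketch below (our own illustration using NumPy; it covers only this retention rule, not the full principal-axis extraction or varimax rotation) shows the idea:

```python
import numpy as np

def kaiser_factor_count(data):
    """Number of factors by Kaiser's criterion: eigenvalues > 1 of the
    item correlation matrix. data: respondents x items, array-like."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)
    return int((eigenvalues > 1).sum())
```

In practice this count should be weighed against a scree-plot inspection, as the text describes, rather than applied mechanically.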

Other Efficacy Scales

1. Responsibility for Student Achievement

For a copy of the Responsibility for Student Achievement scale, click here.

Shortly after the first Rand study was published, Guskey developed a 30-item instrument measuring Responsibility for Student Achievement (Guskey, 1981). For each item, participants were asked to distribute 100 percentage points between two alternatives, one stating that the event was caused by the teacher and the other stating that the event occurred because of factors outside the teacher’s immediate control. Consistent with explanations from attributional theory (Weiner, 1979, 1992, 1994), four types of causes were offered for success or failure: specific teaching abilities, the effort put into teaching, the task difficulty, and luck. Scores on the Responsibility for Student Achievement (RSA) yielded a measure of how much the teacher assumed responsibility for student outcomes in general, as well as two subscale scores indicating responsibility for student success (R+) and for student failure (R-). The 100-point scale proved cumbersome and in subsequent uses the scale was reduced to 10 points for the teacher to divide between the alternative explanations.
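As an illustration of this scoring scheme (the function and data layout below are hypothetical, not Guskey's materials), R+ and R- can be computed as the average weight given to the teacher-caused alternative across the success and failure items, respectively:

```python
# Hypothetical RSA-style scoring sketch: each item records how many of the
# available points (100 originally, 10 in later versions) went to the
# teacher-caused alternative; R+ averages success items, R- failure items.
def rsa_scores(items):
    """items: list of (kind, teacher_weight), kind in {"success", "failure"}."""
    success = [w for kind, w in items if kind == "success"]
    failure = [w for kind, w in items if kind == "failure"]
    r_plus = sum(success) / len(success)
    r_minus = sum(failure) / len(failure)
    return {"R+": r_plus, "R-": r_minus, "overall": (r_plus + r_minus) / 2}
```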

When Guskey (1982, 1988) compared scores from the RSA with teacher efficacy (TE) as measured by the sum of the two Rand items, he found significant positive correlations between teacher efficacy and responsibility for both student success (R+) and student failure (R-). He reported strong intercorrelations, ranging from .72 to .81, between overall responsibility and responsibility for student success and student failure, while the subscales for student success and student failure were only weakly related (.20) or not related at all (Guskey, 1981, 1988). Guskey asserted that positive and negative performance outcomes represent separate dimensions, not opposite ends of a single continuum, and that these dimensions operate independently in their influence on perceptions of efficacy (Guskey, 1987). In general, teachers assumed greater responsibility for positive results than for negative results; that is, they were more confident in their ability to influence positive outcomes than to prevent negative ones. Greater efficacy was related to a high level of confidence in teaching abilities on a measure of teaching self-concept (Guskey, 1984). In an extensive review of the research on teacher efficacy, no published studies were found in which other researchers had adopted this measure.

Responsibility for Student Achievement (Guskey, 1981)
Format: Participants are asked to give a weight or percent to each of the two choices.

Scoring: A global measure of responsibility, with two subscales: responsibility for student success (R+) & responsibility for student failure (R-)

Example Items

If a student does well in your class, would it probably be

a. because that student had the natural ability to do well, or

b. because of the encouragement you offered?

When your students seem to have difficulty learning something, is it usually

a. because you are not willing to really work at it, or

b. because you weren’t able to make it interesting for them?

2. Teacher Locus of Control

For a copy of the Teacher Locus of Control scale, click here.

At the same time as Guskey developed the RSA, Rose and Medway (1981) proposed a 28-item measure called the Teacher Locus of Control (TLC) in which teachers were asked to assign responsibility for student successes or failures by choosing between two competing explanations for the situations described. Half the items on the TLC describe situations of student success while the other half describe student failure. For each success situation, one explanation attributes the positive outcome internally to the teacher (I+) while the other assigns responsibility outside the teacher, usually to the students. Similarly, for each failure situation, one explanation gives an internal teacher attribution (I-) while the other blames external factors.

Scores on the TLC have been weakly but significantly related to the individual Rand items (GTE and PTE) as well as to the sum of the two Rand items (TE), with correlations generally ranging from .11 to .41 (Coladarci, 1992; Parkay, Greenwood, Olejnik, & Proller, 1988). Rose and Medway (1981) found that the TLC was a better predictor of teacher behaviors than Rotter’s Internal-External (I-E) Scale, probably because it was more specific to a teaching context. For example, the TLC predicted teachers’ willingness to implement new instructional techniques, whereas Rotter’s I-E Scale did not. To further examine the TLC and the two Rand items, Greenwood, Olejnik, and Parkay (1990) dichotomized teachers’ scores on the two Rand questions and cross-partitioned them into four efficacy patterns. They found that teachers with high efficacy on both measures (I can, teachers can) had more internally oriented scores on the TLC for both student success and student failure than teachers who scored low on both (I can’t, teachers can’t). This measure never received wide acceptance and has all but disappeared from view in the past decade.

Teacher Locus of Control (Rose & Medway, 1981)
Format: 28 items with a forced-choice format.

Scoring: Half of the items describe situations of student success (I+) and half describe student failure (I-).

Example Items

Suppose you are teaching a student a particular concept in arithmetic or math and the student has trouble learning it. Would this happen

a. because the student wasn’t able to understand it, or

b. because you couldn’t explain it very well?

If the students in your class perform better than they usually do on a test, would this happen

a. because the students studied a lot for the test, or

b. because you did a good job of teaching the subject area?

3. The Webb Scales

For a copy of the Webb Efficacy scales, click here.

At about the same time as the RSA and the TLC were being developed, a third group of researchers sought to expand the Rand efficacy questions to increase their reliability. The Webb Scale (Ashton et al., 1982) was an attempt to extend the measure of teacher efficacy while maintaining a narrow conceptualization of the construct. To reduce the problem of social desirability bias, Webb and his colleagues used a forced-choice format with items matched for social desirability. They found that teachers who scored higher on the Webb Efficacy Scale evidenced fewer negative interactions (less negative affect) in their teaching style (Ashton et al., 1982). This measure, however, never met with wide acceptance, and we found no published work beyond the original study in which the scale was used.

Webb Efficacy Scale (Ashton et al., 1982)
Format: 7 items, forced choice. Participants must determine whether they agree more strongly with the first or the second statement.

Example Items

A. A teacher should not be expected to reach every child; some students are not going to make academic progress.

B. Every child is reachable. It is a teacher’s obligation to see to it that every child makes academic progress.

A. My skills are best suited for dealing with students who have low motivation and who have a history of misbehavior in school.

B. My skills are best suited for dealing with students who are academically motivated and generally well behaved.

4. The Ashton Vignettes

For a copy of the Ashton Vignettes, click here.

In order to address the assumption that teacher efficacy is context specific, Ashton and her colleagues (1984) developed a series of vignettes describing situations a teacher might encounter, asking the teacher to judge how effective he or she would be in handling each situation. The researchers tested two frames of reference for these judgments. The first asked teachers to judge how they would perform in the described situation on a scale from “extremely ineffective” to “extremely effective.” The second version asked teachers to make a comparison to other teachers, from “much less effective than most teachers” to “much more effective than most teachers.” The norm-referenced vignettes, in which teachers compared themselves to other teachers, were significantly correlated with the Rand items, but the self-referenced vignettes, rating effectiveness or ineffectiveness, were not (Ashton, Buhr, & Crocker, 1984; Ashton & Webb, 1986). Teachers also were asked to indicate the level of stress in each situation, but, with correlations between efficacy and stress ranging from -.05 to -.82 and averaging -.39, it was concluded that stress could not be used as a proxy for efficacy. This measure has not received wide acceptance; only one study besides the original was found in which it was used.

Ashton Vignettes (Ashton et al., 1982)
Format: 50 items describing problem situations concerning various dimensions of teaching, including motivation, discipline, academic instruction, planning, evaluation, and work with parents. Self-referenced: “extremely ineffective” to “extremely effective.” Norm-referenced: “much less effective than most teachers” to “much more effective than most teachers.”

Example Items

Your school district has adopted a self-paced instructional program for remedial students in your area. How effective would you be in keeping a group of remedial students on task and engaged in meaningful learning while using these materials?

A small group of students is constantly whispering, passing notes and ignoring class activities. Their academic performance on tests and homework is adequate and sometimes even good. Their classroom performance, however, is irritating and disruptive. How effective would you be in eliminating their disruptive behavior?

5. Science Teaching Efficacy Belief Instrument

For a copy of the Science Teaching Efficacy Belief Instrument, click here.

Science educators have conducted extensive research on the effects of efficacy on science teaching and learning. Riggs and Enochs (1990) developed an instrument, based on the Gibson and Dembo approach, to measure efficacy for teaching science: the Science Teaching Efficacy Belief Instrument (STEBI). Consistent with Gibson and Dembo, they found two separate factors, one they called personal science teaching efficacy (PSTE) and a second factor they labeled science teaching outcome expectancy (STOE). The two factors are uncorrelated. Exploring an even greater level of specificity, Rubeck and Enochs (1991) distinguished chemistry teaching efficacy from science teaching efficacy. They found that among middle-school science teachers, personal science teaching efficacy (PTE for teaching science) was correlated with preference to teach science, and that chemistry teaching self-efficacy (PTE for teaching chemistry) was related to preference to teach chemistry. Chemistry teaching self-efficacy was related to science teaching self-efficacy, and science teaching self-efficacy was significantly higher than chemistry teaching self-efficacy. Science teaching self-efficacy was related to the teacher’s experience taking science courses with laboratory components and to experience teaching science, while chemistry self-efficacy was related to chemistry course work involving lab experiences and to chemistry teaching experience. This instrument has been used in several studies (see Enochs, Posnanski, & Hagedorn, 1999).

Science Teaching Efficacy Belief Instrument (Riggs & Enochs, 1990)
Format: 25 items on a 5-point Likert scale from “strongly agree” to “strongly disagree.”

Example Items

I understand science concepts well enough to be effective in teaching elementary science.

Effectiveness in science teaching has little influence on the achievement of students with low motivation.

6. Bandura’s Teacher Efficacy Scale

For a copy of Bandura’s Teacher Efficacy Scale, click here.

In the midst of the confusion about how best to measure teacher efficacy, an unpublished measure used by Bandura in his work on teacher efficacy has begun quietly circulating. Bandura (1997) pointed out that teachers’ sense of efficacy is not necessarily uniform across the many different types of tasks teachers are asked to perform, nor across different subject matter. In response, he constructed a 30-item instrument with seven subscales: efficacy to influence decision making, efficacy to influence school resources, instructional efficacy, disciplinary efficacy, efficacy to enlist parental involvement, efficacy to enlist community involvement, and efficacy to create a positive school climate. Each item is measured on a 9-point scale anchored with the notations “nothing,” “very little,” “some influence,” “quite a bit,” and “a great deal.” This measure attempts to provide a multi-faceted picture of teachers’ efficacy beliefs without becoming too narrow or specific. Unfortunately, reliability and validity information about the measure has not been available.

Bandura’s Teacher Efficacy Scale (unpublished)
Format: 30 items on a 9-point scale anchored at “nothing,” “very little,” “some influence,” “quite a bit,” and “a great deal.”

7 subscales: Influence on decision making, influence on school resources, instructional efficacy, disciplinary efficacy, enlisting parental involvement, enlisting community involvement, and creating a positive school climate.

Example Items

How much can you influence the decisions that are made in your school?

How much can you do to overcome the influence of adverse community conditions on student learning?

How much can you do to get children to follow classroom rules?

How much can you assist parents in helping their children do well in school?

How much can you do to get local colleges and universities involved in working with your school?

How much can you do to make students enjoy coming to school?

How much can you do to get students to believe they can do well in schoolwork?

References

Allinder, R.M. (1994). The relationship between efficacy and the instructional practices of special education teachers and consultants. Teacher Education and Special Education, 17, 86-95.

Anderson, R., Greene, M., & Loewen, P. (1988). Relationships among teachers’ and students’ thinking skills, sense of efficacy, and student achievement. Alberta Journal of Educational Research, 34 (2), 148-165.

Armor, D., Conroy-Oseguera, P., Cox, M., King, N., McDonnell, L., Pascal, A., Pauly, E., & Zellman, G. (1976). Analysis of the school preferred reading programs in selected Los Angeles minority schools (Report No. R-2007-LAUSD). Santa Monica, CA: Rand Corporation. (ERIC Document Reproduction Service No. 130 243).

Ashton, P. T., Olejnik, S., Crocker, L. & McAuliffe, M. (1982, April). Measurement problems in the study of teachers’ sense of efficacy. Paper presented at the annual meeting of the American Educational Research Association, New York.

Ashton, P., Buhr, D., & Crocker, L. (1984). Teachers’ sense of efficacy: A self- or norm-referenced construct? Florida Journal of Educational Research, 26 (1), 29-41.

Ashton, P.T. (1985). Motivation and teachers’ sense of efficacy. In C. Ames and R. Ames (Eds.) Research on Motivation in Education Vol. 2: The Classroom Milieu . (pp. 141-174) Orlando, FL: Academic Press.

Ashton, P. T., & Webb, R. B. (1986). Making a difference: Teachers’ sense of efficacy and student achievement. New York: Longman.

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning. Educational Psychologist, 28(2), 117-148.

Bandura, A. (1996). Self-efficacy in changing societies. New York: Cambridge University Press.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman and Company.

Berman, P., McLaughlin, M., Bass, G., Pauly, E., & Zellman, G. (1977). Federal programs supporting educational change: Vol. VII. Factors affecting implementation and continuation (Report No. R-1589/7-HEW). Santa Monica, CA: The Rand Corporation. (ERIC Document Reproduction Service No. 140 432).

Brookover, W., Schweitzer, J., Schneider, C., Beady, C., Flood, P., & Wisenbaker, J. (1978). Elementary school social climate and student achievement. American Educational Research Journal, 15, 301-318.

Brookover, W., Beady, C., Flood, P., Schweitzer, J., & Wisenbaker, J. (1979). School social systems and student achievement: Schools can make a difference. New York: Bergin.

Burley, W. W., Hall, B. W., Villeme, M. G., & Brockmeier, L. L. (1991, April). A path analysis of the mediating role of efficacy in first-year teachers’ experiences, reactions, and plans. Paper presented at the annual meeting of the American Educational Research Association, Chicago.

Coladarci, T. (1992). Teachers’ sense of efficacy and commitment to teaching. Journal of Experimental Education, 60, 323-337.

Coladarci, T., & Breton, W. (1997). Teacher efficacy, supervision, and the special education resource-room teacher. Journal of Educational Research, 90, 230-239.

Coladarci, T., & Fink, D. R. (1995, April). Correlations among measures of teacher efficacy: Are they measuring the same thing? Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

Emmer, E. (1990, April). A scale for measuring teacher efficacy in classroom management and discipline. Paper presented at the annual meeting of the American Educational Research Association, Boston, MA; (Revised, June, 1990).

Emmer, E., & Hickman, J. (1990, April). Teacher decision making as a function of efficacy, attribution, and reasoned action. Paper presented at the annual meeting of the American Educational Research Association, Boston, MA.

Enochs, L. G., Posnanski, T., & Hagedorn, E. (1999, March). Science teaching self-efficacy beliefs: Measurement, recent research, and directions for future research. Paper presented at the National Association of Research in Science Education, Boston, MA.

Evans, E.D., & Tribble, M. (1986). Perceived teaching problems, self-efficacy and commitment to teaching among preservice teachers. Journal of Educational Research, 80 (2), 81-85.

Forsyth, P. B. & Hoy, W. K. (1978). Isolation and alienation in educational organizations. Educational Administration Quarterly, 14, 80-96.

Gibson, S., & Dembo, M. (1984). Teacher efficacy: A construct validation. Journal of Educational Psychology, 76(4), 569-582.

Gist, M. E., & Mitchell, T. R. (1992). Self-efficacy: A theoretical analysis of its determinants and malleability. Academy of Management Review, 17(2), 183-211.

Glickman, C., & Tamashiro, R. (1982). A comparison of first-year, fifth-year, and former teachers on efficacy, ego development, and problem solving. Psychology in Schools, 19, 558-562.

Greenwood, G. E., Olejnik, S. F., & Parkay, F. W. (1990). Relationships between four teacher efficacy belief patterns and selected teacher characteristics. Journal of Research and Development in Education, 23(2), 102-106.

Guskey, T. R. (1981). Measurement of responsibility teachers assume for academic successes and failures in the classroom. Journal of Teacher Education, 32, 44-51.

Guskey, T. R. (1982). Differences in teachers’ perceptions of personal control of positive versus negative student learning outcomes. Contemporary Educational Psychology, 7, 70-80.

Guskey, T. R. (1984). The influence of change in instructional effectiveness upon the affective characteristics of teachers. American Educational Research Journal, 21, 245-259.

Guskey, T. R. (1987). Context variables that affect measures of teacher efficacy. Journal of Educational Research, 81(1), 41-47.

Guskey, T. R. (1988). Teacher efficacy, self-concept, and attitudes toward the implementation of instructional innovation. Teaching and Teacher Education, 4(1), 63-69.

Guskey, T. (1989). Attitude and perceptual change in teachers. International Journal of Educational Research, 13, 439-453.

Guskey, T., & Passaro, P. (1994). Teacher efficacy: A study of construct dimensions. American Educational Research Journal, 31, 627-643.

Hall, B., Burley, W., Villeme, M., & Brockmeier, L. (1992). An attempt to explicate teacher efficacy beliefs among first year teachers. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

Henson, R. K., Bennett, D. T., Sienty, S. F., & Chambers, S. M. (2000, April). The relationship between means-end task analysis and context specific and global efficacy in emergency certification teachers: Exploring a new model of efficacy. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Hoy, W. K., & Woolfolk, A. E. (1990). Socialization of student teachers. American Educational Research Journal, 27, 279-300.

Hoy, W. K., & Woolfolk, A. E. (1993). Teachers’ sense of efficacy and the organizational health of schools. The Elementary School Journal, 93, 356-372.

Lee, V., Dedick, R., & Smith, J. (1991). The effect of the social organization of schools on teachers’ efficacy and satisfaction. Sociology of Education, 64, 190-208.

Lucas, K., Ginns, I., Tulip, D., & Watters, J. (1993). Science teacher efficacy, locus of control and self-concept of Australian preservice elementary school teachers. Paper presented at the annual meeting of the National Association for Research in Science Teaching, Atlanta.

Meijer, C., & Foster, S. (1988). The effect of teacher self-efficacy on referral chance. Journal of Special Education, 22, 378-385.

Midgley, C., Feldlaufer, H., & Eccles, J. (1989). Change in teacher efficacy and student self- and task-related beliefs in mathematics during the transition to junior high school. Journal of Educational Psychology, 81, 247-258.

Moore, W., & Esselman, M. (1992). Teacher efficacy, power, school climate and achievement: A desegregating district’s experience. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

Newman, F.M., Rutter, R.A. & Smith, M.S. (1989). Organizational factors that affect school sense of efficacy, community and expectations. Sociology of Education, 62, 221-238.

Pajares, F. (1992). Teachers’ beliefs and educational research: Cleaning up a messy construct. Review of Educational Research, 62, 307-332.

Pajares, F. (1996). Self-efficacy beliefs in academic settings. Review of Educational Research, 66, 533-578.

Parkay, F. W., Greenwood, G., Olejnik, S. & Proller, N. (1988). A study of the relationship among teacher efficacy, locus of control, and stress. Journal of Research and Development in Education, 21(4), 13-22.

Podell, D., & Soodak, L. (1993). Teacher efficacy and bias in special education referrals. Journal of Educational Research, 86, 247-253.

Raudenbush, S., Rowen, B., & Cheong, Y. (1992). Contextual effects on the self-perceived efficacy of high school teachers. Sociology of Education, 65, 150-167.

Riggs, I. (1995). The characteristics of high and low efficacy elementary teachers. Paper presented at the annual meeting of the National Association of Research in Science Teaching, San Francisco, CA.

Riggs, I., Diaz, E., Riggs, M., et al. (1994). Impacting elementary teachers’ beliefs and performance through teacher enhancement for science instruction in diverse settings. Paper presented at the annual meeting of the National Association of Research in Science Teaching, Anaheim, CA.

Riggs, I., & Enochs, L. (1990). Toward the development of an elementary teacher’s science teaching efficacy belief instrument. Science Education, 74, 625-638.

Rose, J. S., & Medway, F. J. (1981). Measurement of teachers’ beliefs in their control over student outcome. Journal of Educational Research, 74, 185-190.

Ross, J. A. (1992). Teacher efficacy and the effect of coaching on student achievement. Canadian Journal of Education, 17(1), 51-65.

Ross, J. A., Cousins, J. B., & Gadalla, T. (1996). Within-teacher predictors of teacher efficacy. Teaching and Teacher Education, 12, 385-400.

Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs, 80, 1-28.

Saklofske, D., Michaluk, B., & Randhawa, B. (1988). Teachers’ efficacy and teaching behaviors. Psychological Report, 63, 407-414.

Skinner, E. A. (1996). A guide to constructs of control. Journal of Personality and Social Psychology, 71, 549-570.

Soodak, L., & Podell, D. (1993). Teacher efficacy and student problem as factors in special education referral. Journal of Special Education, 27, 66-81.

Soodak, L., & Podell, D. (1996). Teaching efficacy: Toward the understanding of a multi-faceted construct. Teaching and Teacher Education, 12, 401-412.

Stein, M. K., & Wang, M.C. (1988). Teacher development and school improvement: The process of teacher change. Teaching and Teacher Education, 4, 171-187.

Trentham, L., Silvern, S., & Brogdon, R. (1985). Teacher efficacy and teacher competency ratings. Psychology in Schools, 22, 343-352.

Tschannen-Moran, M., Woolfolk Hoy, A., & Hoy, W. K. (1998). Teacher efficacy: Its meaning and measure. Review of Educational Research, 68 (2), 202-248.

Weiner, B. (1979). A theory of motivation for some classroom experiences. Journal of Educational Psychology, 71, 3-25.

Weiner, B. (1992). Human motivation: Metaphors, theories, and research. Newbury Park, CA: Sage.

Weiner, B. (1994). Integrating social and personal theories of achievement striving. Review of Educational Research, 64, 557-573.

Willower, D.J., Eidell, T.L., & Hoy, W. K. (1967). The school and pupil control ideology. Penn State Studies Monograph No. 24. University Park, PA: Pennsylvania State University.

Woolfolk, A. E., & Hoy, W. K. (1990). Prospective teachers’ sense of efficacy and beliefs about control. Journal of Educational Psychology, 82, 81-91.

Woolfolk, A. E., Rosoff, B., & Hoy, W. K. (1990). Teachers’ sense of efficacy and their beliefs about managing students. Teaching and Teacher Education, 6, 137-148.