Papers published analyzing the More Kids in the Woods evaluation effort

Michaela T. Zint, Beth A. Covitt & Patrick F. Dowd (2011): Insights From an Evaluability Assessment of the U.S. Forest Service More Kids in the Woods Initiative, The Journal of Environmental Education, 42:4, 255-271

To link to this article: http://dx.doi.org/10.1080/00958964.2010.538091

 

The results suggest that the MKIW initiative is making valuable contributions to the field of environmental education.

First, the initiative is addressing the need identified by Louv (2005) for children, and particularly poor, urban, and/or minority children, to have opportunities to participate in active outdoor, nature-based experiences. This is valuable because poor, urban, and/or minority children have traditionally been underserved by environmental education (Zint, 2012). Moreover, in contrast to grant programs that do not require partnerships and tend to fund new programs (e.g., Monroe et al., 2005), the MKIW initiative is likely to result in more extensive and longer-lasting benefits because of the partnerships it has fostered and the support it has provided for projects that have proven themselves sustainable.

Based on the evaluability assessment (a diagnostic pre-evaluation activity intended to (1) determine a program’s evaluability, i.e., its readiness for evaluation, and (2) obtain insight into what type of evaluation will be useful to decision makers), it is now also possible to make more informed decisions about which evaluations of the initiative are likely to be useful for the FS.

Second, with regard to evaluating the initiative’s overall outcomes, it seems appropriate to evaluate changes in youths’ environmental knowledge and attitudes. All of the projects sought to achieve these outcomes, and research suggests that the type of programs implemented by the initiative’s projects can contribute to these particular outcomes (Bogner, 1998; Dillon et al., 2006; Gillett, 1991; Leeming, Dwyer, Porter, & Cobern, 1993).

In contrast, it is less clear that future outcome evaluations of the initiative should focus on assessing desired changes in youths’ stewardship behaviors and youths’ health behaviors or health outcomes. Research suggests that to achieve these outcomes, programs need to be based on behavior change theories (Heimlich & Ardoin, 2008) and longer in duration (Chawla, 2009).

The majority of MKIW projects were relatively short, and when project leaders were asked to explain why they thought their programs might achieve these outcomes, none offered an explanation based on behavior change theory. At the same time, however, a few projects were longer in duration and used interventions, such as involving youth in conservation activities, that have been linked to behavior change (Zint, 2012). It may therefore be appropriate to conduct behavioral outcome evaluations of the few projects that have these particular characteristics.

Third, an evaluation focused on the initiative’s partnerships could provide details about how to develop and maintain such partnerships and about the various ways partnerships help projects achieve desired outcomes. Results from such an evaluation could inform the initiative’s funding decisions and may also benefit other grant programs that seek to foster partnerships to enhance their environmental education programs.

 

This study makes three main contributions to the field of environmental education.

 

First, by documenting the process and benefits of the MKIW initiative’s evaluability assessment, we hope that environmental educators will consider implementing this pre-evaluation approach instead of supporting outcome, or similarly intensive, evaluations when it is not clear whether programs are ready for evaluation or what should be evaluated.

We recognize that evaluability assessments may be perceived as yet another layer of evaluation, but they ascertain what information is needed by, or of interest to, program stakeholders. Because evaluability assessments can clarify program objectives and design as well as provide preliminary formative information about program implementation and plausible outcomes, they can also be used to inform program improvements in a more timely and cost-effective manner than intensive evaluations.

Second, this study adds to the limited information about environmental education grant programs. More specifically, the study provides preliminary evidence to suggest that the MKIW initiative contributes to addressing the environmental education need of providing youth, and particularly underserved poor, urban, and/or minority youth, with opportunities to participate in active outdoor, nature-based experiences (Louv, 2005). Consistent with the only other published study of an environmental education grant program (Monroe et al., 2005), the evaluability assessment also suggests that grant programs can make significant contributions to environmental education through the partnerships they foster.

Third, the study provides data on the evaluation practices, competencies, and interests of FS employees who lead environmental education programs and thus also contributes to the limited literature on these topics.

 

The second article"

Michaela T. Zint, Patrick F. Dowd & Beth A. Covitt (2011): Enhancing environmental educators' evaluation competencies: insights from an examination of the effectiveness of the My Environmental Education Evaluation Resource Assistant (MEERA) website, Environmental Education Research, 17:4, 471-497

 

To link to this article: http://dx.doi.org/10.1080/13504622.2011.565117

 

Abstract

"To conduct evaluations that can benefit individual programs as well as the field as a whole, environmental educators must have the necessary evaluation competencies. This exploratory study was conducted to determine to what extent a self-directed learning resource entitled My Environmental Education Evaluation Resource Assistant (MEERA) can enhance environmental educators’ evaluation competencies. The multiple case studies relied on data from eight environmental educators with limited evaluation experience who used MEERA to evaluate one of their programs. Results suggest that MEERA can (1) increase environmental educators’ perceived evaluation competencies, (2) help environmental educators produce quality evaluation outputs, and (3) foster their use of evaluation results.

Perceived benefits of using MEERA included obtaining evidence of program success, insights into how to improve programs, and alternative ways of thinking about programs. Perceived challenges included varying difficulties with evaluation tasks such as prioritizing evaluation questions and designing data collection instruments and, in line with this, desiring personal expert assistance for context-specific advice and reassurance. This research contributes to expanding understanding of how to enhance environmental educators’ evaluation competencies and practices."

 

At a time when programs face increasing budget constraints, it is reasonable to expect that many educators will turn to self-directed learning resources to support their evaluation needs.

Results also suggest that although MEERA appears able to support the evaluation efforts of environmental educators, it cannot replace the personalized assistance or reassurance that experts can provide.