U.S. Forest Service
Caring for the land and serving people

United States Department of Agriculture

    Author(s): Dana J. Morin; Charles B. Yackulic; Jay E. Diffendorfer; Damon B. Lesmeister; Clayton K. Nielsen; Janice Reid; Eric M. Schauber
    Date: 2020
    Source: Ecosphere. 11(1): 421-.
    Publication Series: Scientific Journal (JRNL)
    Station: Pacific Northwest Research Station
    PDF: Download Publication  (1.0 MB)


    Ecologists routinely fit complex models with multiple parameters of interest, where hundreds or more competing models are plausible. To limit the number of fitted models, ecologists often define a model selection strategy composed of a series of stages in which certain features of a model are compared while other features are held constant. Defining these multi-stage strategies requires making a series of decisions that may affect inferences but have not been critically evaluated.

    We begin by identifying key features of strategies, introducing descriptive terms where none existed in the literature. Strategies differ in how they define and order model-building stages. Sequential-by-sub-model strategies focus on one sub-model (parameter) at a time, with modeling of subsequent sub-models dependent on the sub-model structures selected in previous stages. Secondary candidate set strategies model sub-models independently and combine the top set of models from each sub-model for selection in a final stage. Build-up approaches define stages across sub-models and increase in complexity at each stage. Strategies also differ in how the top set of models is selected in each stage and in whether they use null or more complex structures for non-target sub-models.

    We tested the performance of different model selection strategies using four data sets and three model types. For each data set, we determined the "true" distribution of AIC weights by fitting all plausible models. We then calculated the number of models that would have been fitted, and the portion of "true" AIC weight recovered, under each model selection strategy.

    Sequential-by-sub-model strategies often performed poorly. Based on our results, we recommend using a build-up or secondary candidate set strategy, both of which were more reliable, and carrying all models within 5–10 AIC units of the top model forward to subsequent stages. The structure of non-target sub-models was less important. Multi-stage approaches cannot compensate for a lack of critical thought in selecting covariates and building models to represent competing a priori hypotheses. However, even when competing hypotheses for different sub-models are limited, thousands or more models may be possible, so strategies to explore candidate model space reliably and efficiently will be necessary.
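The two quantitative steps in the abstract — converting AIC scores to Akaike weights and carrying forward every model within 5–10 AIC units of the top model — can be sketched in a few lines. This is a minimal illustration, not the authors' code; the model names and AIC scores below are hypothetical.

```python
import math

def aic_weights(aics):
    """Convert a list of AIC scores to Akaike weights:
    w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j),
    where delta_i = AIC_i - min(AIC)."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

def carry_forward(models, threshold=10.0):
    """Keep every model within `threshold` AIC units of the top model,
    per the paper's 5-10 AIC unit recommendation for multi-stage
    selection. `models` maps model name -> AIC score."""
    best = min(models.values())
    return {name: aic for name, aic in models.items()
            if aic - best <= threshold}

# Hypothetical stage-one candidate set for one sub-model (occupancy, psi)
stage1 = {
    "psi(.)":           210.4,
    "psi(elev)":        204.1,
    "psi(elev+forest)": 203.2,
    "psi(forest)":      215.9,
}
kept = carry_forward(stage1, threshold=10.0)
# psi(forest) is dropped: 215.9 - 203.2 = 12.7 > 10 AIC units
```

Models in `kept` would then be crossed with the candidate structures for the next sub-model, rather than fixing a single "winning" structure as in a sequential-by-sub-model strategy.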

    Publication Notes

    • Visit PNW's Publication Request Page to request a hard copy of this publication.
    • We recommend that you also print this page and attach it to the printout of the article, to retain the full citation information.
    • This article was written and prepared by U.S. Government employees on official time, and is therefore in the public domain.


    Morin, Dana J.; Yackulic, Charles B.; Diffendorfer, Jay E.; Lesmeister, Damon B.; Nielsen, Clayton K.; Reid, Janice; Schauber, Eric M. 2020. Is your ad hoc model selection strategy affecting your multimodel inference?. Ecosphere. 11(1): 421-.




    Keywords: AIC, information criterion, model selection, multimodel inference, occupancy models, parameter estimation, population models.

