Faustmann's formula gives the land value, or the forest value of land with trees, under deterministic assumptions regarding future stand growth and prices, over an infinite horizon. Markov decision process (MDP) models generalize Faustmann's approach by recognizing that future stand states and prices are known only as probability distributions. The objective function is then the expected discounted value of returns over an infinite horizon. It gives the land or forest value in a stochastic environment. In MDP models, the laws of motion between stand-price states are Markov chains. Faustmann's formula is the special case in which the probability of movement from one state to another is equal to unity. MDP models apply whether the stand state is bare land or any state with trees, be it even- or uneven-aged. Decisions change the transition probabilities between stand states through silvicultural interventions. Decisions that maximize land or forest value depend only on the current stand-price state, independently of how it was reached. Furthermore, to each stand-price state corresponds a single best decision. The solution of the MDP gives simultaneously the best decision for each state and the forest value (land plus trees), given the stand state and following the best policy. Numerical solutions use either successive approximation or linear programming. Examples with deterministic and stochastic cases show in particular the convergence of the MDP model to Faustmann's formula when the future is assumed known with certainty. In this deterministic environment, Faustmann's rule is independent of the distribution of stands in the forest.
Buongiorno, Joseph. 2001. Generalization of Faustmann's Formula for Stochastic Forest Growth and Prices with Markov Decision Process Models. For. Sci. 47(4):466-474.
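The successive-approximation solution method mentioned in the abstract can be sketched with a toy stand-price MDP. The states, decisions, transition probabilities, rewards, and discount factor below are invented for illustration and are not taken from the paper; the code only demonstrates how iterating the Bellman operator yields both the forest value of each state and the single best decision per state.

```python
import numpy as np

# Hypothetical example: three stand-price states (0 = young/bare,
# 1 = mature, 2 = mature with high prices) and two decisions
# (0 = wait, 1 = harvest). All numbers are illustrative assumptions.

# P[a][s, s'] = probability of moving from state s to s' under decision a.
P = {
    0: np.array([[0.7, 0.3, 0.0],    # waiting lets the stand grow
                 [0.0, 0.6, 0.4],
                 [0.0, 0.0, 1.0]]),
    1: np.array([[1.0, 0.0, 0.0],    # harvesting returns the stand to state 0
                 [1.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0]]),
}
# R[a][s] = immediate return of decision a in state s.
R = {0: np.array([0.0, 0.0, 0.0]),
     1: np.array([-10.0, 50.0, 120.0])}

beta = 0.95          # per-period discount factor (assumed)
V = np.zeros(3)      # current approximation of each state's forest value

# Successive approximation: apply the Bellman operator until convergence,
#   V(s) = max_a [ R(a, s) + beta * sum_{s'} P(a)[s, s'] V(s') ]
for _ in range(1000):
    Q = np.array([R[a] + beta * P[a] @ V for a in (0, 1)])  # shape (2, 3)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

policy = Q.argmax(axis=0)   # the single best decision in each state
print("Forest values (land plus trees):", np.round(V, 2))
print("Best policy (0=wait, 1=harvest):", policy)
```

With the transition probabilities replaced by 0/1 indicators (a deterministic chain), the same iteration reproduces a Faustmann-type solution, which is the convergence result the abstract describes.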