2018-10-24
The result presented in this post came out of a discussion with Thomas Moerland at CWI. It is related to (Khan, Devroye, and Neininger 2005) and to earlier work by (Pearl 1980). I found these references later; it was a lot of fun to work this out myself and to discuss the results with Aurélien Garivier, Emilie Kaufmann and Bruno Scherrer.
Consider a two-player game tree, say a full binary tree of depth 4 as shown in the following figure.
Now pick some distribution \(\pi\) of leaf values, assign values to the leaves by drawing them i.i.d. from \(\pi\), and compute the value of the tree using backward induction. What is the distribution of this value, as a function of the depth, the branching degree and \(\pi\)?
Say \(X_i\) are independent, and we know the CDF \(\operatorname*{\mathbb P}(X_i \le x)\) for each \(i = 1, \ldots, K\). Then the CDF of the maximum is given by \begin{equation}\label{eq:max.any} \operatorname*{\mathbb P}\left(\max_i X_i \le x\right) ~=~ \prod_{i=1}^K \operatorname*{\mathbb P}(X_i \le x) . \end{equation} In particular, if \(X_i\) are i.i.d. with common CDF \(\operatorname*{\mathbb P}(X \le x)\) then \begin{equation}\label{eq:max.iid} \operatorname*{\mathbb P}\left(\max_i X_i \le x\right) ~=~ \operatorname*{\mathbb P}(X \le x)^K . \end{equation} Analogously, for the minimum of independent \(X_i\) we find \begin{equation}\label{eq:min.any} \operatorname*{\mathbb P}\left(\min_i X_i \le x\right) ~=~ 1-\prod_{i=1}^K \left(1- \operatorname*{\mathbb P}(X_i \le x)\right) , \end{equation} and for i.i.d. \(X_i\) we find \begin{equation}\label{eq:min.iid} \operatorname*{\mathbb P}\left(\min_i X_i \le x\right) ~=~ 1- \left(1- \operatorname*{\mathbb P}(X \le x)\right)^K . \end{equation}
The figure below illustrates the computation \(\eqref{eq:max.any}\) on some example distributions.
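For concreteness, here is a minimal numerical sketch of \(\eqref{eq:max.any}\) and \(\eqref{eq:min.any}\) in Python with numpy (my own illustration, not part of the post; the grid and the two example CDFs are arbitrary choices):

```python
import numpy as np


def cdf_max(cdfs):
    """CDF of max_i X_i for independent X_i, given their CDFs on a shared grid."""
    return np.prod(np.asarray(cdfs), axis=0)


def cdf_min(cdfs):
    """CDF of min_i X_i for independent X_i, given their CDFs on a shared grid."""
    return 1.0 - np.prod(1.0 - np.asarray(cdfs), axis=0)


x = np.linspace(0.0, 1.0, 501)   # evaluation grid on [0, 1]
cdf_uniform = x.copy()           # CDF of Uniform[0, 1]
cdf_square = x ** 2              # CDF of the max of two independent uniforms

print(cdf_max([cdf_uniform, cdf_square])[250])  # P(max <= 0.5) = 0.5 * 0.25 = 0.125
print(cdf_min([cdf_uniform, cdf_square])[250])  # P(min <= 0.5) = 1 - 0.5 * 0.75 = 0.625
```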
To compute the CDF of the value of any game tree, we can evaluate the CDF of \(\pi\) on a discrete grid (we may even have a different \(\pi_\ell\) for each leaf \(\ell\)), and then apply the formulas \(\eqref{eq:max.any}\) and \(\eqref{eq:min.any}\) recursively in bottom-up fashion until we obtain the CDF of the value of the game tree. The amount of work scales with the size of the tree (which is proportional to the number of leaves).
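Here is a sketch of this bottom-up computation (again my own Python illustration, not code from the post). It assumes the tree is given as nested lists whose leaves are CDF arrays on a shared grid, so a different \(\pi_\ell\) per leaf is allowed, and that levels alternate between max and min starting from a max root:

```python
import numpy as np


def tree_value_cdf(node, maximize=True):
    """CDF (on the shared grid) of the backward-induction value of `node`."""
    if isinstance(node, np.ndarray):      # leaf: its value CDF is given directly
        return node
    child_cdfs = np.array([tree_value_cdf(child, not maximize) for child in node])
    if maximize:
        return np.prod(child_cdfs, axis=0)                 # eq. (max.any)
    return 1.0 - np.prod(1.0 - child_cdfs, axis=0)         # eq. (min.any)


# Example: a max root over two min nodes, each with two Uniform[0, 1] leaves.
grid = np.linspace(0.0, 1.0, 501)
leaf = grid.copy()                        # CDF of Uniform[0, 1] on the grid
root_cdf = tree_value_cdf([[leaf, leaf], [leaf, leaf]])
print(root_cdf[250])                      # P(root <= 0.5) = (1 - (1 - 0.5)^2)^2 = 0.5625
```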
For regular game trees with a common leaf distribution \(\pi\), we may use symmetry to observe that all nodes at the same depth have the same value distribution. Hence we can alternately apply \(\eqref{eq:max.iid}\) and \(\eqref{eq:min.iid}\), and the amount of work now scales only with the tree depth.
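A sketch of this symmetric shortcut (my own Python illustration; the branching factor, depth and uniform leaf CDF below are arbitrary choices): only the two pointwise maps are applied, once per internal level.

```python
import numpy as np


def regular_tree_cdf(leaf_cdf, branching, depth, root_is_max=True):
    """CDF of the root value of a regular tree whose leaves all have CDF `leaf_cdf`."""
    v = np.asarray(leaf_cdf, dtype=float)
    # Levels alternate between max and min; level 0 is the root, level `depth` the leaves.
    # Apply the pointwise maps bottom-up, one per internal level: i = depth - 1, ..., 0.
    for i in reversed(range(depth)):
        if root_is_max == (i % 2 == 0):
            v = v ** branching                   # max level, eq. (max.iid)
        else:
            v = 1.0 - (1.0 - v) ** branching     # min level, eq. (min.iid)
    return v


grid = np.linspace(0.0, 1.0, 501)
root_cdf = regular_tree_cdf(grid.copy(), branching=2, depth=4)   # Uniform[0, 1] leaves
print(root_cdf[250])        # P(root value <= 0.5) for the depth-4 binary tree
```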
Can we argue that the value concentrates as the depth increases? Is there some natural quantity (entropy?) that always increases? Can we predict where the value concentrates? Yes! From the formulas above we see something curious: the CDFs of the minimum and the maximum at \(x\) depend only on the CDFs of the ingredients at \(x\). More precisely, we can think of a max round as mapping the CDF values pointwise to \[v ~\mapsto~ v^K,\] and a min round as mapping the CDF values pointwise to \[v ~\mapsto~ 1-(1-v)^K .\] A combined max-after-min round pair hence maps \begin{equation}\label{eq:fp} v ~\mapsto~ \left(1-(1-v)^K\right)^K . \end{equation} This mapping has three fixed points: \(v=0\), \(v=1\) and some intermediate crossover point.
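As a quick worked example (a direct calculation, just to make the crossover point concrete): for \(K=2\) the interior fixed point satisfies \(v = \left(1-(1-v)^2\right)^2 = \left(2v - v^2\right)^2\); dividing out the root \(v=0\) leaves \[ v^3 - 4v^2 + 4v - 1 ~=~ (v-1)\left(v^2 - 3v + 1\right) ~=~ 0 , \] whose unique root in \((0,1)\) is \(v = \tfrac{3-\sqrt 5}{2} \approx 0.382\).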
Let \(v^* \in (0,1)\) be the interior fixed point, i.e. the solution of \(v = \left(1-(1-v)^K\right)^K\) in \((0,1)\). For \(v \in (0,1)\) we have \(\left(1-(1-v)^K\right)^K < v\) iff \(v < v^*\), so each round pair pushes CDF values away from \(v^*\): only the values \(0\) and \(1\) attract. Hence the CDF of the root value converges pointwise to the step function \(\mathbf 1 \left\{\text{CDF}_\pi(x) \ge v^*\right\}\), i.e. the value concentrates at the \(v^*\)-quantile \(\text{CDF}_\pi^{-1}(v^*)\) of \(\pi\).
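A numerical sanity check of this argument (a sketch under my own choices of \(K\), grid and leaf distribution; the bisection bracket assumes the crossover lies strictly inside it):

```python
import numpy as np


def max_after_min(v, K):
    """One max-after-min round pair, eq. (fp), applied pointwise to CDF values."""
    return (1.0 - (1.0 - v) ** K) ** K


def interior_fixed_point(K, lo=1e-6, hi=1.0 - 1e-6, iters=100):
    """Bisection for v*, using that f(v) < v below v* and f(v) > v above it."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if max_after_min(mid, K) < mid else (lo, mid)
    return 0.5 * (lo + hi)


K = 2
v_star = interior_fixed_point(K)
print(v_star)               # ~0.382 for K = 2, matching (3 - sqrt(5)) / 2

grid = np.linspace(0.0, 1.0, 501)
cdf = grid.copy()           # CDF of Uniform[0, 1] leaves
for _ in range(20):         # 20 round pairs, i.e. a tree of depth 40
    cdf = max_after_min(cdf, K)
# The root-value CDF is now essentially a step at the v*-quantile of pi,
# which for Uniform[0, 1] leaves is simply x = v*.
print(grid[np.searchsorted(cdf, 0.5)])   # approximate median of the root value, ~v*
```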
The Bayesian model has the maximal imaginable independence, which makes it very attractive for computational reasons. The fact that the root value concentrates with the depth is in some sense bad news, for it renders the model questionable for games “in the wild”: trees sampled from this model will have roots of about the same value, which is not what we expect of games that are interesting in practice. I am curious about more realistic models that are still computationally tractable, i.e. for which we can sample trees from the posterior distribution.
Khan, Tämur Ali, Luc Devroye, and Ralph Neininger. 2005. “A Limit Law for the Root Value of Minimax Trees.” Electron. Comm. Probab. 10: 273–81. http://emis.ams.org/journals/EJP-ECP/article/download/1168/1168-3903-1-PB.pdf.
Pearl, Judea. 1980. “Asymptotic Properties of Minimax Trees and Game-Searching Procedures.” Artif. Intell. 14 (2): 113–38. https://doi.org/10.1016/0004-3702(80)90037-5.