A Cooperative Hat Game

\(\newcommand{\W}{\square} \newcommand{\B}{\blacksquare}\)

Hat puzzles are super popular among mathematicians. Most of them have cute and clever solutions. Here’s one that, at the time of writing, is still an open problem.

Alice and Bob sit facing each other, each with an infinite tower of hats on their heads. Each hat is independently black or white with equal probability. Alice can see all of Bob’s hats, but not her own, and vice versa. On the count of three, both players must name a natural number, which is used to index into their own hat tower. If the two indicated hats are the same color, the players win; otherwise they lose. (Also, they’re not allowed to talk, cough, wink, or otherwise communicate.)

As an example, say Alice’s hats are \(\W\W\B\W\W\B\cdots\) and Bob’s hats are \(\B\W\B\W\W\B\cdots\). If Alice says 3 and Bob says 1, then they win, since Alice’s third hat and Bob’s first hat are both black. If they both say 1, their first hats do not match, so they lose.

What’s the best possible strategy, and how often does it win? No one knows! I have some conjectures here, and some (probably unoriginal) strategies that do pretty well.

Simplest Strategy

The simplest strategy is for both players to ignore any information they have and just pick the first hat. Unsurprisingly, this doesn’t go very well. The outcomes \(\W/\W\), \(\W/\B\), \(\B/\W\), and \(\B/\B\) are all equally likely, so the chance of winning is \(1/2\).

It’s not at all obvious that you can do any better than this. Since there’s no communication, neither player can learn anything about their own hats, and so both players are equally likely to pick a white hat or a black hat. How can you squeeze out any additional advantage?

First-White Strategy

Here’s a strategy that does better. Both players look for the first white hat on their partner’s head, and guess the corresponding number. For example, if Bob is wearing \(\B\B\W\B\W\W\cdots\), Alice would say “3”. If he’s wearing \(\W\W\W\B\W\B\cdots\), Alice would say “1”. Call Alice’s guess \(a\) and Bob’s guess \(b\). What’s the probability of success?

  • Case \(a = b\): they’re both pointing at white hats, so they win.
  • Case \(a < b\): Bob’s guess means that every one of Alice’s hats before \(b\) was black, including the one at \(a\). Alice stopped looking at Bob’s hats at \(a\), so Bob’s \(b\)th hat could be either color. They win exactly when it’s also black, so they win with probability \(1/2\).
  • Case \(a > b\): Symmetric to the previous case.

So if \(p\) is the probability that \(a = b\), then the chance of success is \(p + (1 - p) / 2 = 1/2 + p/2\). Even before we know \(p\), we can already tell that we’re going to do better than \(1/2\)!

To find \(p\), we sum up the probability that both players say “1”, the probability that they both say “2”, that they both say “3”, and so on. Note that the chance that Alice says “\(k\)” is the chance that Bob’s \(k\)th hat is white, and that none of the previous ones were. Likewise for Bob. Summing up the resulting geometric series, we get

$$ p = \sum_{k = 1}^\infty \left[ \left(\frac{1}{2} \right) \left(\frac{1}{2} \right)^{k-1} \right]^2 = \sum_{k = 1}^\infty \frac{1}{4^k} = \frac{1/4}{1 - 1/4} = \frac{1}{3} $$

So by following this strategy, Alice and Bob can win with probability \(2/3\). Much better!
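If you want a numerical sanity check, a quick Monte Carlo does the trick, something like this (truncating the infinite towers at 64 hats, which is plenty, since a 64-hat all-black prefix is astronomically unlikely):

```python
import random

def first_white(partner_hats):
    # Guess the (1-based) index of the first white hat on the partner's head.
    return next((i for i, hat in enumerate(partner_hats, start=1) if hat == 'W'),
                len(partner_hats))

def simulate(trials=200_000, depth=64):
    wins = 0
    for _ in range(trials):
        alice = [random.choice('WB') for _ in range(depth)]
        bob = [random.choice('WB') for _ in range(depth)]
        a = first_white(bob)    # Alice's guess depends only on Bob's hats
        b = first_white(alice)  # and vice versa
        wins += alice[a - 1] == bob[b - 1]
    return wins / trials

print(simulate())   # hovers around 0.667
```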

Finite Strategies

Here’s another approach: what if we focus only on the first \(N\) hats, reducing it to a finite problem?

If \(N = 1\), obviously there’s nothing interesting we can do, so let’s look at \(N = 2\). If Alice sees only black hats on Bob’s head, then she knows that strategizing is hopeless – Bob will pick a black hat for sure, and she’ll pick a black hat with probability only 50%. Same thing goes if she sees only white, and same thing from Bob’s point of view. So the only interesting cases are when both players have non-monochromatic hat stacks.

There are four possible situations: \(\W\B / \W\B\), \(\W\B / \B\W\), \(\B\W / \W\B\), and \(\B\W / \B\W\). We could brute-force all possible strategies (there are only four possible strategies for each player once we ignore what they do against monochromatic stacks, and half of those are constant). But let’s think this one through. Let’s say, arbitrarily, that Alice guesses “1” if she sees \(\W\B\), and “2” if she sees \(\B\W\). If Bob sees \(\W\B\) on Alice’s head, what should he do?

  • If he has \(\W\B\), then Alice will pick “1”, selecting her white hat. Bob should select his white hat by saying “1”.
  • If he has \(\B\W\), then Alice will pick “2”, selecting her black hat. Bob should select his black hat by saying “1”.

In both situations, saying “1” guarantees a win. Similarly, if he sees \(\B\W\) on Alice’s head, he wins by saying “2”. So in the “neither player is monochrome” situation, they can win 100% of the time! For the monochrome cases, no strategy helps, and so that’s just 50%. There are 4 non-monochrome cases and 12 monochrome ones, so that gives a win rate of \((4 + 12 \cdot \tfrac{1}{2})/16 = 10/16 = 62.5\%\).
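The brute force is also tiny, if you’d rather just check it by computer (encoding each player’s strategy as a map from the partner’s stack to a choice):

```python
from itertools import product

N = 2
stacks = list(product('WB', repeat=N))                    # all 2^N possible hat stacks
strategies = list(product(range(N), repeat=len(stacks)))  # partner's stack -> 0-based choice

def score(alice_strat, bob_strat):
    # Count wins over all (Alice stack, Bob stack) pairs, each equally likely.
    wins = 0
    for i, alice_hats in enumerate(stacks):
        for j, bob_hats in enumerate(stacks):
            a = alice_strat[j]   # Alice's choice depends on Bob's hats
            b = bob_strat[i]     # Bob's choice depends on Alice's hats
            wins += alice_hats[a] == bob_hats[b]
    return wins

print(max(score(f, g) for f in strategies for g in strategies), "/ 16")   # 10 / 16
```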

How about \(N = 3\)? We could just ignore the third hat, giving us a win rate of at least \(10/16\), but we can do better. Consider the following (asymmetric) strategy:

  • If a player sees a monochromatic stack, they pick an arbitrary hat. Doesn’t matter.
  • If a player sees only one white hat, they pick the index corresponding to that hat.
  • If Alice sees one black hat, she picks the hat after that one (with wraparound, so \(\B\W\W \to 2\), \(\W\B\W \to 3\), \(\W\W\B \to 1\)).
  • If Bob sees one black hat, he picks the hat before that one (again, with wraparound).

How does this strategy do? Note that the strategy is unchanged by cyclic shifting of the hats, which reduces the amount of casework we have to do.

If either player has a monochromatic stack, then they win only 50% of the time, as usual.

If they both have a one-white-hat stack, then they have a guaranteed win.

$$ \begin{matrix} \\ \textrm{Alice's hats} \\ \\ \textrm{Bob's hats} \end{matrix} \qquad \begin{matrix} \downarrow\hphantom{\B\B} \\ \W\B\B \\ \downarrow\hphantom{\B\B} \\ \W\B\B \end{matrix} \qquad \begin{matrix} \downarrow \\ \W\B\B \\ \downarrow\hphantom{\B\B} \\ \B\W\B \end{matrix} \qquad \begin{matrix} \hphantom{\B\B}\downarrow \\ \W\B\B \\ \downarrow\hphantom{\B\B} \\ \B\B\W \end{matrix} $$

If they both have one-black-hat stacks, then they also have a guaranteed win, though it’s less obvious why.

$$ \begin{matrix} \\ \textrm{Alice's hats} \\ \\ \textrm{Bob's hats} \end{matrix} \qquad \begin{matrix} \downarrow \\ \B\W\W \\ \hphantom{\B\B}\downarrow \\ \B\W\W \end{matrix} \qquad \begin{matrix} \hphantom{\B\B}\downarrow \\ \B\W\W \\ \hphantom{\B\B}\downarrow \\ \W\B\W \end{matrix} \qquad \begin{matrix} \downarrow\hphantom{\B\B} \\ \B\W\W \\ \hphantom{\B\B}\downarrow \\ \W\W\B \end{matrix} $$

The only remaining case is when one player has a one-white stack, and the other has a one-black stack. We can’t win every matchup here, but we can get a solid \(4/6\). (Note: what happens if you change the one-black strategy to “pick the black hat”?).

$$ \begin{matrix} \\ \textrm{Alice's hats} \\ \\ \textrm{Bob's hats} \end{matrix} \qquad \begin{matrix} \downarrow \\ \W\B\B \\ \downarrow \hphantom{\B\B} \\ \B\W\W \end{matrix} \qquad \begin{matrix} \hphantom{\B\B} \downarrow \\ \W\B\B \\ \downarrow \hphantom{\B\B} \\ \W\B\W \end{matrix} \qquad \begin{matrix} \downarrow \hphantom{\B\B} \\ \W\B\B \\ \downarrow \hphantom{\B\B} \\ \W\W\B \end{matrix} $$
$$ \begin{matrix} \\ \textrm{Alice's hats} \\ \\ \textrm{Bob's hats} \end{matrix} \qquad \begin{matrix} \downarrow \hphantom{\B\B} \\ \B\W\W \\ \hphantom{\B\B} \downarrow \\ \W\B\B \\ \end{matrix} \qquad \begin{matrix} \downarrow \\ \B\W\W \\ \hphantom{\B\B} \downarrow \\ \B\W\B \\ \end{matrix} \qquad \begin{matrix} \hphantom{\B\B} \downarrow \\ \B\W\W \\ \hphantom{\B\B} \downarrow \\ \B\B\W \\ \end{matrix} $$

This totals up to a winning probability of \(44/64 = 68.75\%\) (counting wins: \(28\) monochromatic matchups at \(\tfrac{1}{2}\), plus \(9 + 9\) guaranteed wins, plus \(18\) mixed matchups at \(\tfrac{4}{6}\), for \(14 + 18 + 12 = 44\)). Better than \(N = 2\), but also better than our “first-white” strategy!
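A quick exhaustive check of those rules confirms the count (with the “arbitrary” choice for monochromatic stacks hard-coded to 1):

```python
from itertools import product

def alice_choice(bob_hats):
    if len(set(bob_hats)) == 1:                 # monochromatic: the choice doesn't matter
        return 1
    if bob_hats.count('W') == 1:                # exactly one white hat: point at it
        return bob_hats.index('W') + 1
    return (bob_hats.index('B') + 1) % 3 + 1    # one black hat: the hat after it, wrapping

def bob_choice(alice_hats):
    if len(set(alice_hats)) == 1:
        return 1
    if alice_hats.count('W') == 1:
        return alice_hats.index('W') + 1
    return (alice_hats.index('B') - 1) % 3 + 1  # one black hat: the hat before it, wrapping

stacks = list(product('WB', repeat=3))
wins = sum(a[alice_choice(b) - 1] == b[bob_choice(a) - 1] for a in stacks for b in stacks)
print(wins, "/ 64")   # 44 / 64
```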

The casework becomes worse and worse for \(N \ge 4\), so we’ll stop here for now.

Stronger Together

We’ve seen two kinds of strategies so far: first-white, and finite strategies. These can be combined, in a pretty simple way, into a strategy better than either of them alone!

With an \(N\)-hat strategy, the augmented strategy goes as follows:

  • Each player looks at the first \(N\) hats on their partner’s head.
  • If they’re not monochromatic, then apply the finite strategy as usual.
  • Otherwise, skip those \(N\) hats, and look at hats \(N+1\) to \(2N\).
  • If those are non-monochromatic, apply the finite strategy, but increase all your answers by \(N\).
  • Otherwise, look at the next block of \(N\) hats, and repeat.

The finite strategies perform worst when facing a monochromatic block of hats. By using the “scan upwards and focus on the first non-monochromatic block” trick, we can sometimes salvage situations where the finite strategy would have to accept the 50-50 guess.
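To make this concrete, here’s a Monte Carlo sketch of the \(2\)-hat strategy augmented this way (again truncating the towers at 64 hats):

```python
import random

# The 2-hat strategy from the previous section, as "partner's block" -> 1-based choice.
# Monochromatic blocks are skipped by the scanning loop, so they never reach the table.
TWO_HAT = {('W', 'B'): 1, ('B', 'W'): 2}

def augmented_choice(partner_hats, N=2):
    # Scan the partner's hats in blocks of N and apply the finite strategy to the
    # first non-monochromatic block, offsetting the answer by the block's start.
    for start in range(0, len(partner_hats), N):
        block = tuple(partner_hats[start:start + N])
        if len(set(block)) > 1:
            return start + TWO_HAT[block]
    return 1   # every block monochromatic: vanishingly unlikely at this depth

def simulate(trials=200_000, depth=64):
    wins = 0
    for _ in range(trials):
        alice = [random.choice('WB') for _ in range(depth)]
        bob = [random.choice('WB') for _ in range(depth)]
        a, b = augmented_choice(bob), augmented_choice(alice)
        wins += alice[a - 1] == bob[b - 1]
    return wins / trials

print(simulate())   # hovers around 0.667, up from the plain 2-hat strategy's 0.625
```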

Say that the \(N\)-hat strategy has win rate \(q\). We’d first like to find \(q^\ast\), the conditional win rate for scenarios where neither player has a monochromatic stack. Let \(W\) be the event “we win”, and \(E\) be the event “neither player has a monochromatic stack”. The number of situations where Alice has a non-monochromatic stack is \(2^N - 2\), and same for Bob. So the probability of \(E\) is \((2^N - 2)^2/4^N\). Thus,

$$ \begin{align*} Pr(W) &= Pr(W | E) Pr(E) + Pr(W | \lnot E) Pr(\lnot E) \\ q &= q^\ast \frac{(2^N - 2)^2}{4^N} + \frac{1}{2} \left( 1 - \frac{(2^N - 2)^2}{4^N} \right) \\ q &= q^\ast \frac{(2^N - 2)^2}{4^N} + \frac{2^{N+1} - 2}{4^N} \\ q \frac{4^N}{(2^N - 2)^2} &= q^\ast + \frac{2^{N+1} - 2}{(2^N - 2)^2} \\ \frac{4^N q - 2^{N+1} + 2}{(2^N - 2)^2} &= q^\ast \\ \end{align*} $$

Next, we want to find \(r\), the probability that both players will select the same block of \(N\) hats. The chance an individual block is monochromatic is \(2/2^N\), and so the chance that Alice (or Bob) picks the \(k\)th block is “probability the \(k\)th block is non-monochromatic” times “probability the first \(k-1\) were monochromatic”. This is quite similar to the setup we had for the original first-white strategy.

$$ \begin{align*} r &= \sum_{k=1}^\infty \left( \frac{2^N - 2}{2^N} \cdot \left( \frac{2}{2^N} \right)^{k-1} \right)^2 \\ &= \frac{(2^N - 2)^2}{4^N} \sum_{k=1}^\infty \left( \frac{4}{4^N} \right)^{k-1} \\ &= \frac{(2^N - 2)^2}{4^N} \frac{1}{1 - 4/4^N} \\ &= \frac{(2^N - 2)^2}{4^N - 4} \\ &= \frac{2^N - 2}{2^N + 2} \end{align*} $$

So now we can find \(q'\), the win rate of the augmented strategy. If they pick the same block, then they win with probability \(q^\ast\) (remember that these blocks are necessarily non-monochromatic). If they don’t, then someone is picking into a monochromatic block, and so we’re fated to get only \(1/2\) success.

$$ \begin{align*} q' &= r q^\ast + (1 - r) \frac{1}{2} \\ &= \frac{2^N - 2}{2^N + 2} \frac{4^N q - 2^{N+1} + 2}{(2^N - 2)^2} + \frac{4}{2^N + 2} \frac{1}{2} \\ &= \frac{4^N q - 2^{N+1} + 2}{4^N - 4} + \frac{2(2^N - 2)}{4^N - 4} \\ &= \frac{4^N q - 2}{4^N - 4} \end{align*} $$

Since \(q \ge 1/2\), we have \(q' \ge q\), and when the first inequality is strict, so is the second. So, perhaps unsurprisingly, augmenting a finite strategy makes it work better. How much better? Let’s take our \(N = 3\) strategy:

$$ \frac{4^3 (44/64) - 2}{4^3 - 4} = \frac{42}{60} = \frac{7}{10} $$

We’ve nudged our 68.75% chance of winning to a 70% chance. That’s small, but it’s not nothing. Unfortunately, this seems to be as far as we can go – the resulting strategy is conjectured to be optimal. No one has found anything better, nor ruled out that something better exists.
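For reference, the augmentation formula fits in a two-line helper:

```python
from fractions import Fraction

def augmented(q, N):
    # q' = (4^N q - 2) / (4^N - 4), the win rate after the block-scanning trick
    return (4 ** N * q - 2) / (4 ** N - 4)

print(augmented(Fraction(10, 16), 2))   # 2/3
print(augmented(Fraction(44, 64), 3))   # 7/10
```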

Observations

Now that we’ve seen some strategies, we can look for some patterns.

In the simplest strategy, we’re equally likely to get any pair of hats. With the “first-white” strategy, what are the odds of each outcome? The only way to get \(\W/\W\) is for both players to guess the same index, which happens with probability \(1/3\). The other \(2/3\) of the time, half the time Alice guesses the higher number, and half the time Bob does. In the former case, Bob’s hat is guaranteed black and Alice’s hat is random; in the latter case, it’s the other way around. So that adds up to \(\B/\B\) with probability \(1/3\), \(\W/\B\) with probability \(1/6\), and \(\B/\W\) with \(1/6\).

Similarly, if you work through the strategies given in the “Finite Strategies” section, the probabilities of the \(\W/\W\) and \(\B/\B\) outcomes are equal, as are those of the \(\B/\W\) and \(\W/\B\) outcomes. This is no coincidence.

Since Alice is equally likely to pick a white or black hat (remember, she never learns anything about her own hat stack), \(Pr(\W/\W) + Pr(\W/\B)\) has to equal \(Pr(\B/\B) + Pr(\B/\W)\). Similarly, Bob has to be equally likely to pick white or black, meaning \(Pr(\W/\W) + Pr(\B/\W)\) equals \(Pr(\W/\B) + Pr(\B/\B)\). Subtracting one equation from the other gives \(Pr(\B/\W) = Pr(\W/\B)\), and some quick algebra gives \(Pr(\W/\W) = Pr(\B/\B)\) as well.

This tells us something interesting – changing the win condition to “both players pick white hats” doesn’t change the nature of the game at all. Maximizing the probability of a matching pair is the same as maximizing the probability of a white pair (and in fact, this is how the problem is usually presented).


Another thing we can look at is the relationship between the finite and infinite game. Let \(p_\infty\) denote the best possible winning probability for the infinite game, and \(p_N\) for the game with just \(N\) hats. How are these related to each other?

Since an \(N\)-hat strategy works just as well in an \((N+1)\)-hat game (by just ignoring the last hat), we know that \(p_{N+1}\) is at least \(p_N\). Similarly, \(p_\infty \ge p_N\) for all \(N\). This gives us a chain of inequalities:

$$ p_1 \le p_2 \le p_3 \le \cdots \le p_\infty $$

Also, from augmenting a strategy, we know that \(p_\infty \ge \frac{4^N p_N - 2}{4^N - 4}\) for all \(N\).

Upper Bounds, Infinite

Well, we know that \(p_\infty\) is at least \(0.7\); can we put an upper bound on it too?

Let’s say Alice and Bob have already decided on a strategy, one that has win rate \(p\). Now, imagine that, right before the game starts, we duplicate the game: in one copy, things proceed as normal, and in the other copy, all of Alice’s hats are swapped with their opposites. Every black hat becomes a white hat, and vice versa. We’ll refer to these two players as “Alice” and “nega-Alice”. Let \(X\) be the random variable “how many of the two games are won” (so it is either \(0\), \(1\), or \(2\)).

Then clearly the expected value of \(X\) is just \(p + p\) – each game has probability \(p\) of being won, and expected value is linear. But we can also bound it in an interesting way. Let \(S\) be the event “Bob picks the same color hat in both games”. In that situation, only one of the two games is winnable: both Alices see the same hats on Bob, so they say the same number, but their hats at that index are opposite colors, and so exactly one of the two games is won. If we let \(q\) denote the probability of \(S\) under the chosen strategy:

$$ E[X] = Pr(S) E[X|S] + Pr(\lnot S) E[X | \lnot S] \le q + (1 - q) 2 = 2 - q $$

Rearranging, we get that \(p \le 1 - q/2\). What do we know about \(q\)? If Bob picks the same index in both games, then he’s guaranteed to pick the same color hat too, and if he doesn’t, then the two chosen hats are uncorrelated, and there’s a 50-50 chance they’re the same color. So this means \(q \ge 1/2\), and thus \(p \le 3/4\).

So we know that the optimal \(p_\infty\) is between \(0.7\) and \(0.75\). This is the best I’ve been able to prove, but apparently, there is a proof that \(p_\infty < \frac{81}{112} \approx 0.723\), as mentioned in this paper. Doesn’t seem to be published though, unfortunately.

Upper Bounds, Finite

Let’s, for the moment, assume that \(p_\infty\) is indeed \(7/10\), and try to put some upper bounds on \(p_N\).

In this section, it’ll be easier to work with “number of winning outcomes” than “probability of winning”, so for a strategy on \(N\) hats, we’ll call the number of winning outcomes the “score” of a strategy, which is equal to \(4^N\) times the win rate. The optimal score for an \(N\)-hat strategy we’ll denote \(s_N\), which is of course equal to \(4^N p_N\).

We’ll start with the inequality we learned about from augmenting finite strategies: \(p_\infty \ge \frac{4^N p_N - 2}{4^N - 4}\). Rearranging it, we get that \(s_N = 4^N p_N \le \frac{7}{10} (4^N - 4) + 2\). Let \(B_N\) be the floor of the RHS, so that \(s_N \le B_N\). Later, we’ll show that these bounds are sharp, and so \(s_N\) actually equals \(B_N\), but for now it’s easier to call them different names.

Computing some values of \(B_N\), we can see a pattern forming:

$$ \begin{array}{c|cccccccccc} N & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline B_N & 2 & 10 & 44 & 178 & 716 & 2866 & 11468 & 45874 & 183500 & 734002 \end{array} $$
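These values come straight from the floor formula:

```python
B = [(7 * (4 ** N - 4) + 20) // 10 for N in range(1, 11)]   # floor(7/10 * (4^N - 4) + 2)
print(B)   # [2, 10, 44, 178, 716, 2866, 11468, 45874, 183500, 734002]
```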

They seem to follow an almost-geometric recurrence relation:

  • \(B_1 = 2\)
  • for even \(N\), \(B_N = 4 B_{N-1} + 2\)
  • for odd \(N\), \(B_N = 4 B_{N-1} + 4\)

Proof: Let \(e_N\) be the amount removed by flooring, i.e., \(\left( \frac{7}{10} (4^N - 4) + 2 \right) - B_N\). We’d like to find \(e_N\), since it will make our lives easier.

For odd \(N\), this is easy: \(4^N - 4\) is divisible by \(10\), so the flooring is unnecessary, which makes \(e_N = 0\).

For even \(N\), \(4^N - 4\) is \(2\) mod \(10\), and so \(\frac{7}{10} (4^N - 4)\) is of the form “integer \(+ \frac{7 \cdot 2}{10}\)”. This makes \(e_N = 2/5\).

Now, we can find the difference between \(B_N\) and \(4 B_{N-1}\):

$$ \begin{align*} B_N - 4 B_{N-1} &= \left( \frac{7}{10} (4^N - 4) + 2 - e_N \right) - 4 \left( \frac{7}{10} (4^{N-1} - 4) + 2 - e_{N-1} \right) \\ &= \left( \frac{7}{10} (4^N - 4) + 2 - e_N \right) + \left( \frac{7}{10} (16 - 4^N) - 8 + 4 e_{N-1} \right) \\ &= \frac{7}{10} (16 - 4) - 6 + 4 e_{N-1} - e_N \\ &= \frac{12}{5} + 4 e_{N-1} - e_N \end{align*} $$

For odd \(N\), this is \(\frac{12}{5} + \frac{8}{5} - 0 = 4\). For even \(N\), this is \(\frac{12}{5} + 0 - \frac{2}{5} = 2\). Check.

Now, we don’t know for sure that these \(B_N\) are upper bounds on our score. That proof relied on \(p_\infty\) actually being \(7/10\). But when I took a computer and searched for good strategies, I found lots of strategies that achieve \(B_N\), and none that surpass it. That’s pretty suggestive that this conjecture is right.

But computer-generated strategies don’t give good intuition, and my program starts to struggle at about \(N = 11\). Can we come up with a way to construct strategies that hit \(B_N\)?

Finite Strategies, Part II

We’ll start with the following \(3\)-hat strategy, and build it up into \(4\)-hat and \(5\)-hat strategies. (I’ve picked a symmetric one, for ease of presentation). It has score \(44\):

$$ \begin{array}{c|cccccccc} \textrm{Hats} & \B\B\B & \W\B\B & \B\W\B & \W\W\B & \B\B\W & \W\B\W & \B\W\W & \W\W\W \\ \hline \textrm{Choice} & 1 & 2 & 1 & 1 & 3 & 2 & 3 & 1 \end{array} $$

It’s easy to extend to a \(4\)-hat strategy, by just ignoring the last hat and applying the original strategy. But obviously this doesn’t improve the probability of winning, and it just increases the score to \(4 \cdot 44 = 176\), which is a little less than \(B_4 = 178\). Somehow we need to squeeze out an additional two points.

The key observation is that when we designed the \(3\)-hat strategy, it didn’t matter what our decision was when seeing \(\B\B\B\) or \(\W\W\W\). When you see your partner with a monochromatic stack of hats, you know that your choice doesn’t matter. But when we extended this to a \(4\)-hat strategy, those decisions were copied over to \(\B\B\B\W\) and \(\W\W\W\B\), where now they might matter! (They still won’t matter for \(\B\B\B\B\) and \(\W\W\W\W\) of course.)

Let’s just focus on the case where both Alice and Bob have one of these “almost monochromatic” stacks. Right now, they’ll both say “1”, and will only win when their stacks are identical. If they change their strategy so that \(\B\B\B\W \to 4\), then they’ll win all four possible matchups.

$$ \begin{matrix} \\ \textrm{Alice's hats} \\ \\ \textrm{Bob's hats} \end{matrix} \qquad \begin{matrix} \hphantom{\B\B\B} \downarrow \\ \B\B\B\W \\ \hphantom{\B\B\B} \downarrow \\ \B\B\B\W \\ \end{matrix} \qquad \begin{matrix} \downarrow \hphantom{\B\B\B} \\ \B\B\B\W \\ \hphantom{\B\B\B} \downarrow \\ \W\W\W\B \\ \end{matrix} \qquad \begin{matrix} \hphantom{\B\B\B} \downarrow \\ \W\W\W\B \\ \downarrow \hphantom{\B\B\B} \\ \B\B\B\W \\ \end{matrix} \qquad \begin{matrix} \downarrow \hphantom{\B\B\B} \\ \W\W\W\B \\ \downarrow \hphantom{\B\B\B} \\ \W\W\W\B \\ \end{matrix} $$

That could be our extra two points we need. We just need to confirm that this tweak didn’t have a negative effect elsewhere.

If the matchup doesn’t involve \(\B\B\B\W\), then obviously the result is unaffected. So all we have to look at are matchups of the form “\(\B\B\B\W\) vs. anything other than \(\B\B\B\W\) and \(\W\W\W\B\)”. Before our tweak, we won exactly half of these matchups. Afterwards, the first player will answer “1”, “2”, or “3”, and the second player will answer “4”. The first player is guaranteed to pick a black hat, and the second player picks a black hat in exactly half of these matchups (of the \(14\) remaining stacks, exactly \(7\) have a black fourth hat, since the two excluded stacks end in opposite colors). So we still win exactly half of them, and we have a score of \(178\), as desired!
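If you’d rather let a computer do the bookkeeping, a short script scores the table strategy and its tweaked \(4\)-hat extension:

```python
from itertools import product

# The symmetric 3-hat strategy from the table above: partner's stack -> 1-based choice.
THREE_HAT = {
    ('B','B','B'): 1, ('W','B','B'): 2, ('B','W','B'): 1, ('W','W','B'): 1,
    ('B','B','W'): 3, ('W','B','W'): 2, ('B','W','W'): 3, ('W','W','W'): 1,
}

def four_hat(partner):
    # Extend to 4 hats by ignoring the last hat, except for the BBBW -> 4 tweak.
    if partner == ('B', 'B', 'B', 'W'):
        return 4
    return THREE_HAT[partner[:3]]

def score(choice, n):
    # Both players use the same (symmetric) strategy `choice`.
    stacks = list(product('WB', repeat=n))
    return sum(a[choice(b) - 1] == b[choice(a) - 1] for a in stacks for b in stacks)

print(score(lambda h: THREE_HAT[h], 3))   # 44
print(score(four_hat, 4))                 # 178
```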


How about \(N = 5\)? We could try the same approach – extend and tweak the \(\B\B\B\B\W\) state – but that would only get us to \(4 \cdot 178 + 2 = 714\), which is still two points away from our target of \(B_5 = 716\).

The key is to think about why tweaking \(\B\B\B\W\) was a strict improvement on the old strategy: it didn’t affect the outcome against most other hat configurations. You can reframe our \(4\)-hat strategy as something like our augmented “first-white” strategy:

  • Split the \(4\) hats you see into a block of \(3\) and a block of \(1\).
  • If the first block is not monochromatic, apply the \(3\)-hat strategy.
  • If it is, apply the following strategy:
$$ \begin{array}{c|cccc} \textrm{Hats} & (\B\B\B)\B & (\W\W\W)\B & (\B\B\B)\W & (\W\W\W)\W \\ \hline \textrm{Choice} & 1 & 1 & 4 & 1 \end{array} $$

That table should look familiar; it’s essentially our \(2\)-hat strategy from earlier on, but using the monochromatic block as a single hat!

This provides an interesting way to build strategies. If we have an \(N\)-hat strategy \(S\), and an \(M\)-hat strategy \(T\), then we can combine them into an \((N+M-1)\)-hat strategy that has a potentially better score.

Let \(p\) be the win rate of \(S\), and \(q\) the win rate of \(T\). Then we can find the win rate of this new strategy.

  • If both players have a non-monochromatic first block, then the conditional win rate here is \(p^\ast\), which we know how to compute.
  • If both players have a monochromatic first block, then the conditional win rate is just \(q\).
  • If only one player has a monochromatic first block, then I claim they can only win half the time.
    • Say Alice has the monochromatic first block, and Bob doesn’t. Then Alice will only ever answer a number between \(1\) and \(N\).
    • Imagine flipping all of Bob’s hats; since Alice will still pick into her first block, it won’t change the color of the hat she picks. But it does flip the color of Bob’s choice.
    • This pairs every win with a loss, and vice versa, so they must be equal in number.

This means the total win rate of this strategy is:

$$ \begin{align*} p_{new} &= p^\ast \frac{(2^N - 2)^2}{4^N} + \frac{1}{2} \frac{2 \cdot 2 \cdot (2^N - 2)}{4^N} + q \frac{2^2}{4^N} \\ &= \frac{4^N p - 2^{N+1} + 2}{4^N} + \frac{2^{N+1} - 4}{4^N} + \frac{4q}{4^N} \\ &= \frac{4^N p + 4q - 2}{4^N} \\ &= p + \frac{4q - 2}{4^N} \end{align*} $$

Interestingly enough, this doesn’t depend on the particular strategy chosen, only its win rate. Converting this into a score-based equation, where \(s\) is the score of \(S\), and \(t\) the score of \(T\), we get:

$$ s_{new} = 4^{M-1} s + (t - 4^M / 2) $$

That last term can be interpreted as “score above halfway”. I don’t know if that’s meaningful, but it’s crisp.

Let’s try to make a good \(5\)-hat strategy with this. We know that combining a \(4\) and \(2\) hat strategy doesn’t work (we get a score of \(4 \cdot 178 + (10 - 8) = 714\)). How about \(3\) and \(3\)? We’d get \(16 \cdot 44 + (44 - 32) = 716\). That works!

For completeness’s sake, let’s check out \((2, 4)\). The score would be \(64 \cdot 10 + (178 - 128) = 690\). Not great, which kind of makes sense. Front-loading the \(2\)-hat strategy, which is worse than the \(4\)-hat strategy, is a bad idea.
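A quick check of those three combinations:

```python
def combine(s, N, t, M):
    # Score of the (N+M-1)-hat strategy built from an N-hat strategy of score s
    # in front and an M-hat strategy of score t behind it.
    return 4 ** (M - 1) * s + (t - 4 ** M // 2)

print(combine(178, 4, 10, 2))   # 714
print(combine(44, 3, 44, 3))    # 716
print(combine(10, 2, 178, 4))   # 690
```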


Using this idea, we can construct strategies with scores of \(B_N\) for all \(N\) (the sketch after this list checks the arithmetic).

  • For \(N = 1, 2, 3\) we have explicit examples.
  • For even \(N\), extend an \((N-1)\)-hat strategy by the optimal \(2\)-hat strategy. This has a score of \(4 B_{N-1} + (10 - 8) = B_N\).
  • For odd \(N\), extend an \((N-2)\)-hat strategy by the optimal \(3\)-hat strategy. This has a score of \(16 B_{N-2} + (44 - 32) = 16 B_{N-2} + 12\). This is \(4 B_{N-1} + 4 = B_N\).
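A few lines confirm that this recursion lands exactly on \(B_N\):

```python
def B(N):
    # conjectured optimal score: floor(7/10 * (4^N - 4) + 2)
    return (7 * (4 ** N - 4) + 20) // 10

def combine(s, N, t, M):
    return 4 ** (M - 1) * s + (t - 4 ** M // 2)

scores = {1: 2, 2: 10, 3: 44}                              # the explicit small strategies
for N in range(4, 13):
    if N % 2 == 0:
        scores[N] = combine(scores[N - 1], N - 1, 10, 2)   # tack on the 2-hat strategy
    else:
        scores[N] = combine(scores[N - 2], N - 2, 44, 3)   # tack on the 3-hat strategy
    assert scores[N] == B(N)
print("construction matches B_N for N up to 12")
```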

Final Thoughts

Okay, we’ve defined a series \(B_N\), and shown we can construct strategies for \(N\)-hat games with a score of \(B_N\).

If \(p_\infty = 7/10\), then we know that \(B_N\) is an upper bound on our possible scores, which makes the strategies described above optimal. And conversely, if these finite strategies are optimal, then we can prove \(p_\infty = 7/10\).

I don’t quite have a proof figured out, because there’s some measurability criterion I’m missing, but the gist of it is: it should be the case that an infinite strategy can be approximated arbitrarily well by an \(N\)-hat strategy, as long as we allow \(N\) to be large. If \(p_\infty\) were larger than \(7/10\), we’d be able to find a finite strategy with success rate higher than \(7/10\). But \(B_N / 4^N\) is always less than \(7/10\):

$$ B_N = \left \lfloor \frac{7}{10} (4^N - 4) + 2 \right \rfloor = \left \lfloor \frac{7}{10} 4^N - \frac{8}{10} \right \rfloor < \frac{7}{10} 4^N $$

So proving one of these two claims is sufficient for proving the other. Unfortunately, I can’t prove either one of them.

I’ve proven via computer search that the finite strategies described above are optimal up to \(N = 8\), which is reassuring, but certainly not a proof of the conjecture.


Interestingly enough, it doesn’t seem to matter if we restrict ourselves to symmetric strategies. We seem to find equally successful strategies even with that limitation.


One possible way to prove an upper bound is to show some kind of relation between a given \(N\)-hat strategy, and an \((N-1)\)-hat strategy derived from it. The difficult part here is that unless you remove a hat from both players at once, you end up in a situation where players have different numbers of hats, which I really don’t want to think about.

But I do want to throw a computer at it!

I wrote up a “relaxation” algorithm that starts with a random strategy for Alice, computes Bob’s best response to it, then Alice’s best response to that, and so on, until we hit a fixed point. Repeating this over and over again, for each combination of hat counts, gave the following table of scores (rows and columns are the two players’ hat counts; a sketch of the loop follows below):

$$ \begin{array}{c|cccccccccc} m \backslash n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline 1 & 2 & 4 & 8 & 16 & 32 & 64 & 128 & 256 & 512 & 1024 \\ 2 & 4 & 10 & 20 & 40 & 80 & 160 & 320 & 640 & 1280 & 2560 \\ 3 & 8 & 20 & 44 & 88 & 176 & 352 & 704 & 1408 & 2816 & 5632 \\ 4 & 16 & 40 & 88 & 178 & 356 & 712 & 1424 & 2848 & 5696 & 11392 \\ 5 & 32 & 80 & 176 & 356 & 716 & 1432 & 2864 & 5728 & 11456 & 22912 \\ 6 & 64 & 160 & 352 & 712 & 1432 & 2866 & 5732 & 11464 & 22928 & 45856 \\ 7 & 128 & 320 & 704 & 1424 & 2864 & 5732 & 11468 & 22936 & 45872 & 91744 \\ 8 & 256 & 640 & 1408 & 2848 & 5728 & 11464 & 22936 & 45874 & 91748 & 183496 \\ 9 & 512 & 1280 & 2816 & 5696 & 11456 & 22928 & 45872 & 91748 & 183500 & 367000 \\ 10 & 1024 & 2560 & 5632 & 11392 & 22912 & 45856 & 91744 & 183496 & 367000 & 734002 \end{array} $$
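For reference, here’s a minimal sketch of such a relaxation loop (not the exact program used for the table, and only feasible for small \(m\) and \(n\)):

```python
import random
from itertools import product

def relax(m, n, restarts=20):
    """Best-response iteration for the game where Alice wears m hats and Bob wears n."""
    a_stacks = list(product((0, 1), repeat=m))   # Alice's possible hat stacks
    b_stacks = list(product((0, 1), repeat=n))   # Bob's possible hat stacks

    def score(f, g):
        # f: Bob's stack -> index into Alice's hats; g: Alice's stack -> index into Bob's.
        return sum(A[f[B]] == B[g[A]] for A in a_stacks for B in b_stacks)

    def best_response(partner, my_stacks, partner_stacks, my_len):
        # For each partner stack we might see, pick the index into our own (unknown)
        # stack that matches the partner's chosen hat as often as possible.
        return {
            seen: max(range(my_len),
                      key=lambda i: sum(mine[i] == seen[partner[mine]] for mine in my_stacks))
            for seen in partner_stacks
        }

    best = 0
    for _ in range(restarts):
        f = {B: random.randrange(m) for B in b_stacks}   # random starting strategy for Alice
        prev = -1
        while True:
            g = best_response(f, b_stacks, a_stacks, n)  # Bob's best response to Alice
            f = best_response(g, a_stacks, b_stacks, m)  # Alice's best response to Bob
            current = score(f, g)
            if current == prev:                          # score stopped improving
                break
            prev = current
        best = max(best, current)
    return best

print(relax(3, 3))   # typically finds 44, matching the diagonal entry above
```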

It seems to follow a… pattern? Not a nice pattern, but a pattern. Say you have \(a_{m,n}\), where \(m > n\). Then:

  • To step “away from the diagonal”, i.e., to \(a_{m+1,n}\), you just double the score.
  • To step “toward the diagonal”, i.e., to \(a_{m, n+1}\), you double and add \(2^k\), where \(k\) is \(m - 1 - 2 \lfloor (n-1) / 2 \rfloor\).
    • In other words, \(k\) goes \(m-1, m-1, m-3, m-3, m-5, m-5, \ldots\), until it ends in either \(2, 2\) or \(3, 3, 1\), at which point we arrive at the diagonal itself.

No idea if that’s helpful, or can be cleaned up into anything nice.