
Tuesday, May 18, 2021

Connect-$n$ Game on a \(k \times k\) Board

Definition: two players alternately place stones on a \(k \times k\) board; the first to place $n$ stones in a consecutive line (horizontal, vertical, or diagonal) wins. Real-life examples are tic-tac-toe ($n=3$, $k=3$) and Gomoku ($n=5$, $k=15$, or $k=19$ on a full Go board). My question is: for which $n$ and $k$ does the game have a must-win strategy? A more modest question: does there exist a general algorithm to approach the optimal strategy?

Part 1. $n$, $k$, and the existence of a must-win strategy

For the question to make sense, we always assume \(k \geq n\).
Trivially, for $n=1, 2$ and any $k\geq n$, a must-win strategy exists.

Some counting:
There are \(2(k+1-n)k+2(k+1-n)^{2}\) possible winning lines on the board: \(k(k+1-n)\) horizontal, \(k(k+1-n)\) vertical, and \((k+1-n)^{2}\) along each of the two diagonal directions.
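As a sanity check on this count, here is a small brute-force sketch (my own Python, not part of any code from the post) that enumerates every length-$n$ segment in the four directions and compares the total to the formula:

```python
# Brute-force count of winning lines on a k x k board, checked against
# 2(k+1-n)k + 2(k+1-n)^2 for a range of small k and n.

def count_winning_lines(k: int, n: int) -> int:
    count = 0
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # right, down, down-right, down-left
    for r in range(k):
        for c in range(k):
            for dr, dc in directions:
                end_r, end_c = r + (n - 1) * dr, c + (n - 1) * dc
                if 0 <= end_r < k and 0 <= end_c < k:
                    count += 1
    return count

for k in range(3, 10):
    for n in range(1, k + 1):
        assert count_winning_lines(k, n) == 2 * (k + 1 - n) * k + 2 * (k + 1 - n) ** 2
```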

My intuition is that which $n$ and $k$ admit a must-win strategy is tied to the dimension of the board, since winning lines can only run parallel to the axes (or diagonally, which can get really complicated in higher dimensions).


Part 2. A general algorithm to approach the optimal strategy

I'm writing a program without a clear picture of the end result, but I'll try to achieve something :P
First, I want to be able to simulate a connect-$n$ game for future use; a minimal sketch is below.
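Here is a rough sketch of what I mean, assuming a plain Python class (the name `ConnectN` and the 0/1/2 cell encoding are just illustrative choices); win detection is deferred to the algorithms discussed below.

```python
# Minimal connect-n simulator: tracks the board and alternates players.

class ConnectN:
    def __init__(self, k: int, n: int):
        assert k >= n
        self.k, self.n = k, n
        self.board = [[0] * k for _ in range(k)]  # 0 = empty, 1 / 2 = the two players
        self.player = 1

    def place(self, row: int, col: int) -> None:
        if self.board[row][col] != 0:
            raise ValueError("cell already occupied")
        self.board[row][col] = self.player
        self.player = 3 - self.player  # switch between players 1 and 2

game = ConnectN(k=15, n=5)
game.place(7, 7)  # player 1
game.place(7, 8)  # player 2
```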
 

Question 2.1 (not really important but interesting): What's the most efficient algorithm to check for winning lines on the board?


Without loss of generality, I'll only consider the time needed to check horizontally on the board; in reality, an algorithm needs to check the other three directions as well.

Algorithm 1 (basic): For every row ($k$ rows), check each possible starting position ($k+1-n$ of them). When a starting position is nonempty, check the following $n-1$ positions, nearest first.
This should be an $O(k^{2})$ algorithm: there are roughly $k^{2}$ starting positions, and the extra work per position depends linearly on how dense the stones are (i.e., how far into the game we are), denoted $d$ here, and on $n$. In the notation I'll use below, $O(k^{2}, dn)$, where the second argument records this density-dependent factor.
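A straightforward Python sketch of Algorithm 1, restricted to the horizontal direction and reusing the board convention from the simulator sketch above:

```python
def check_rows_basic(board, k: int, n: int) -> int:
    """Return the winning player (1 or 2) if a horizontal line of n exists, else 0."""
    for row in board:                       # k rows
        for start in range(k + 1 - n):      # k + 1 - n starting positions per row
            player = row[start]
            if player == 0:
                continue
            # check the following n - 1 positions, nearest first
            if all(row[start + i] == player for i in range(1, n)):
                return player
    return 0
```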

Algorithm 2: Same as Algorithm 1, except that when a starting position is nonempty, check from the cell furthest from the starting position first, and skip ahead to the next possible start whenever a winning line cannot form.
Like Algorithm 1, this is an $O(k^{2})$ algorithm, still linear in $d$ and $n$, but the skipping should reduce the work by a constant factor (which should be calculable if stones are randomly placed). That is, $O(k^{2}, Cdn)$ with $0<C<1$.
I also suppose this algorithm works better in real games; when stones are randomly placed, a better variant is to probe the position $n/2$ ahead first, then $n/2+n/4$, and so on, which reduces the time complexity to $O(k^{2}, d\log n)$.
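Here is one way to read Algorithm 2 in Python (my interpretation of the skipping rule): the far-end check lets us jump past any empty cell we find, since no winning window can contain it, while a mismatch against the opponent's stone is handled conservatively (advance by one) so the check stays correct.

```python
def check_rows_far_first(board, k: int, n: int) -> int:
    for row in board:
        start = 0
        while start <= k - n:
            player = row[start]
            if player == 0:
                start += 1
                continue
            mismatch = -1
            for j in range(start + n - 1, start, -1):  # scan from the far end inward
                if row[j] != player:
                    mismatch = j
                    break
            if mismatch == -1:
                return player                # the whole window matches: a win
            if row[mismatch] == 0:
                start = mismatch + 1         # no winning window can contain an empty cell
            else:
                start += 1                   # opponent stone: advance conservatively
    return 0
```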

Algorithm 3: For every row, probe only every $n$-th position. When a probed position is nonempty, check alternately outward around it, treating it as the midpoint of a potential line. The time complexity of this algorithm is $O(\frac{k^{2}}{n}, d\log n)$.
I think this is the best I can do. I might add some empirical data when I have time. I should also note that in real play we might expand from a point closer to the probed position than the midpoint, because the lengths of consecutive runs are likely skewed toward the shorter end.
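A Python sketch of Algorithm 3 for the horizontal direction. Instead of strictly alternating sides, it simply extends the run to the left and then to the right of the probed cell, which gives the same answer; any horizontal line of length $n$ must cover one of the probed columns $n-1$, $2n-1$, $3n-1$, and so on.

```python
def check_rows_sampled(board, k: int, n: int) -> int:
    for row in board:
        for anchor in range(n - 1, k, n):   # roughly k / n probes per row
            player = row[anchor]
            if player == 0:
                continue
            run = 1
            j = anchor - 1                  # extend the run to the left
            while j >= 0 and row[j] == player:
                run += 1
                j -= 1
            j = anchor + 1                  # extend the run to the right
            while j < k and row[j] == player:
                run += 1
                j += 1
            if run >= n:
                return player
    return 0
```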
