Google Interview Question
Software Engineers | Country: India
Interview Type: In-Person
// Domino game: two players alternately place 1x2 dominoes on a grid;
// the player who cannot move loses. solve() searches for a winning move.
#include <iostream>
#include <vector>
#include <cassert>

using namespace std;

// A domino covers two adjacent cells, laid either horizontally (H) or vertically (V).
enum Layout { H, V };
struct Move { int cell; Layout layout; };

// Sentinel returned when the player to move has no winning move.
Move LoseMove { -1, H };
// Board of h columns by v rows; state[i] is true once cell i is covered.
class Board
{
public:
    Board(int h, int v, bool _myTurn) : numCells(h * v), hSize(h), myTurn(_myTurn)
    {
        state = new bool[numCells];
        for (int i = 0; i < numCells; ++i) state[i] = false;
    }

    virtual ~Board()
    {
        delete[] state;
    }
    // Build the position reached from `other` by playing move m (the turn flips).
    Board(const Board& other, const Move& m) : numCells(other.numCells), hSize(other.hSize), myTurn(!other.myTurn)
    {
        state = new bool[numCells];
        for (int i = 0; i < numCells; ++i)
        {
            state[i] = other.state[i];
        }
        set(m.cell);
        switch (m.layout)
        {
            case H: set(m.cell + 1); break;      // second half to the right
            case V: set(m.cell + hSize); break;  // second half one row below
        }
    }
    // Every free cell whose right or lower neighbour is also free yields a move.
    vector<Move> generatePossibleMoves() const
    {
        vector<Move> ret;
        for (int i = 0; i < numCells; ++i)
        {
            if (state[i]) continue;
            bool rightmost = i % hSize == hSize - 1;
            bool bottom = i / hSize == (numCells / hSize) - 1;
            if (!rightmost && !state[i + 1]) ret.push_back({i, H});
            if (!bottom && !state[i + hSize]) ret.push_back({i, V});
        }
        return ret;
    }
    // Mark a cell as covered; it must currently be free.
    void set(int cell)
    {
        assert(!state[cell]);
        state[cell] = true;
    }

private:
    int numCells;
    int hSize;
    bool myTurn;
    bool* state;
};
// Exhaustive game-tree search. A player who cannot move loses, so a move is
// winning exactly when the opponent has no winning reply in the resulting
// position. Returns a winning move, or LoseMove if none exists.
Move solve(const Board& current)
{
    vector<Move> nextMoves = current.generatePossibleMoves();
    for (Move move : nextMoves)
    {
        Board newBoard(current, move);
        Move bestEnemyPlan = solve(newBoard);
        if (bestEnemyPlan.cell == -1 /* opponent has no winning reply */)
        {
            return move;
        }
    }
    return LoseMove;
}
int main()
{
    // 3x2 board with cells 2 and 3 pre-occupied; print a winning first move.
    Board b(3, 2, true);
    b.set(2); b.set(3);
    Move m = solve(b);
    cout << m.cell << "," << m.layout << endl;
}
If the board has a cell symmetry in it, then the player who can first exploit it will win. For example, on an even*even board the second player can always win by simply imitating every move made by the first player, mirroring it through the center of the board (a 180-degree rotation; a reflection across a diagonal does not work, because a domino touching the diagonal would overlap its own mirror image). This way the domino positions pair up, and they are consumed at identical rates across the two point-symmetric halves of the board, so the first player runs out of moves first.
On the other hand, if the board is even*odd, then the first player has a winning strategy: occupy the two center cells. Once that is done, all remaining cells are symmetric through the center, and the first player merely has to repeat the opponent's move in mirrored form, again ensuring that the free positions are consumed at identical rates across the two halves.
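To make the mirroring concrete, here is a minimal sketch of computing the symmetric reply, reusing the Move/Layout types from the code above; mirrorMove is a hypothetical helper, not part of the original solution. A 180-degree rotation maps cell c to numCells - 1 - c and preserves a domino's orientation.

// Hypothetical helper: the reply that mirrors move m through the board's
// center (180-degree rotation) on an hSize-wide board with numCells cells.
Move mirrorMove(const Move& m, int hSize, int numCells)
{
    // A horizontal domino anchored at c covers {c, c+1}; its rotated image
    // covers {numCells-2-c, numCells-1-c}, i.e. is anchored at numCells-2-c.
    if (m.layout == H) return { numCells - 2 - m.cell, H };
    // A vertical domino anchored at c covers {c, c+hSize}; its rotated image
    // is anchored at numCells - 1 - hSize - m.cell.
    return { numCells - 1 - hSize - m.cell, V };
}

On an even*even board no cell is its own image under this rotation, so the mirrored reply is always available, which is exactly why the pairing argument works.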
If the board is odd * odd, then in some cases one player still has a winning strategy if they manage to 'win' or 'lose' the extra column or row in a way that the remaining even-sided rectangle can be won by the previous two strategies.
If all else fails, standard game-tree search (minimax / alpha-beta pruning / transposition tables / iterative deepening / MTD(f)) or newer Monte Carlo tree search could be used, although these algorithms might not fit into the interview time window. This suggests that perhaps there is a simplification that allows an optimal strategy for at least one player without having to do tree search.
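As a rough illustration of the transposition-table idea, the exhaustive solve() above could be memoized on the board state. Board::key() here is a hypothetical serialization method that would have to be added (e.g. returning the occupancy array as a string of '0'/'1' characters):

#include <unordered_map>
#include <string>

std::unordered_map<std::string, Move> cache; // position -> winning move, or LoseMove

Move solveMemo(const Board& current)
{
    std::string k = current.key();         // hypothetical: serialize the cells
    auto it = cache.find(k);
    if (it != cache.end()) return it->second;

    Move result = LoseMove;
    for (Move move : current.generatePossibleMoves())
    {
        Board next(current, move);
        if (solveMemo(next).cell == -1)    // opponent has no winning reply
        {
            result = move;
            break;
        }
    }
    cache[k] = result;
    return result;
}

Since many move orders reach the same position, this turns the search over positions from exponential in the move sequence into one visit per distinct position.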
I think more clarification is needed on the question. What did the interviewer mean by "improve the AI"? Were they expecting an AI solution or an algorithmic solution?
For an algorithmic solution, Zsombor's solution works. But implementing any of the tree-based approaches would take more than the interview time, unless you have memorized the solution.
For an AI solution, we can use a reinforcement learning approach. Since the environment (i.e. the board) is fully observable, we can model the game as a Markov decision process (MDP) for the AI to improve. For that, we have to define states, actions, transition probabilities, and the reward function. Then we can use the Bellman equation to estimate the optimal policy for a given board.
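As a sketch of what the Bellman backup could look like: in this game moves are deterministic, so the transition probabilities collapse to a single successor per action (and for the two-player setting one would really use a minimax-style backup rather than this plain single-agent one). All the callback names below (actionsOf, stepTo, rewardOf) are illustrative placeholders supplied by the caller, not part of the original code.

#include <vector>
#include <functional>
#include <cmath>
#include <algorithm>
#include <limits>

// Tabular value iteration: repeat the Bellman backup
//   V(s) = max_a [ r(s,a) + gamma * V(step(s,a)) ]
// over all states until values change by less than eps.
void valueIteration(std::vector<double>& V, double gamma, double eps,
                    const std::function<std::vector<int>(int)>& actionsOf,
                    const std::function<int(int, int)>& stepTo,
                    const std::function<double(int, int)>& rewardOf)
{
    double delta;
    do
    {
        delta = 0.0;
        for (int s = 0; s < (int)V.size(); ++s)
        {
            std::vector<int> acts = actionsOf(s);
            double best = 0.0; // a state with no legal move is terminal, value 0
            if (!acts.empty())
            {
                best = -std::numeric_limits<double>::infinity();
                for (int a : acts)
                    best = std::max(best, rewardOf(s, a) + gamma * V[stepTo(s, a)]);
            }
            delta = std::max(delta, std::fabs(best - V[s]));
            V[s] = best;
        }
    } while (delta > eps);
}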
A simple AI could compare the number of valid positions before and after a move and prefer moves that leave an odd or even number of moves, depending on which parity would allow the AI to have the final turn. This would require first writing a method that lists the valid moves for a given board state (see the sketch below). I'd talk about how you could train an AI, but obviously that wouldn't be implemented in an interview.
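A minimal sketch of that heuristic, built on the Board class from the answer above; note it is only a heuristic, since placed dominoes block other placements, so the count of available placements is not the number of moves that will actually get played.

// Prefer a move after which the number of remaining placements is even, so
// the alternating turns tend to leave the opponent without a move first.
Move parityMove(const Board& current)
{
    vector<Move> moves = current.generatePossibleMoves();
    for (const Move& m : moves)
    {
        Board next(current, m);
        if (next.generatePossibleMoves().size() % 2 == 0) return m;
    }
    return moves.empty() ? LoseMove : moves[0]; // no good parity: any legal move
}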
- Anonymous April 16, 2020