# Tic-Tac-Toe and AI: Who is the Winner? (Part 3)

Having determined with TensorFlow in the previous blog post whether a board has a winner, let us tackle a very similar question: Who is the winner?

Again, this is a binary decision: either X (“0”) or O (“1”) may win a board. A game can also end in a tie, in which case no one is a winner; for the sake of simplicity, we then still output “0”. Whether a board really has a winner at all is a question we have already answered with a high-accuracy neural network in the first place.
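To make the encoding explicit, here is what such a mapping could look like as a tiny helper. This is purely illustrative: the actual labels are already part of the prepared records from the earlier posts.

``````
# Hypothetical helper illustrating the label encoding:
# an O win maps to 1, an X win or a tie maps to 0.
def winner_label(winner):
    return 1 if winner == 'O' else 0

print(winner_label('X'))  # 0
print(winner_label('O'))  # 1
print(winner_label('-'))  # 0 (a tie: no winner, so we still say "0")
``````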

Using the same imports and setup as before, we again prepare our data. This time, we use the winner information as our labels:

``````
winnerAllDataFrame = pd.DataFrame(list(zip([x.vector[0] for x in validtttRecordsList],
                                           [x.vector[1] for x in validtttRecordsList],
                                           [x.vector[2] for x in validtttRecordsList],
                                           [x.vector[3] for x in validtttRecordsList],
                                           [x.vector[4] for x in validtttRecordsList],
                                           [x.vector[5] for x in validtttRecordsList],
                                           [x.vector[6] for x in validtttRecordsList],
                                           [x.vector[7] for x in validtttRecordsList],
                                           [x.vector[8] for x in validtttRecordsList],
                                           [x.winner for x in validtttRecordsList])),
                                  columns=['pos1', 'pos2', 'pos3', 'pos4', 'pos5', 'pos6', 'pos7', 'pos8', 'pos9', 'winner'])

print(winnerAllDataFrame.tail())

# 80/20 train/test split; the fixed random_state makes it reproducible
winner_train_dataset = winnerAllDataFrame.sample(frac=0.8, random_state=42)
winner_test_dataset = winnerAllDataFrame.drop(winner_train_dataset.index)

print(winnerAllDataFrame.shape, winner_train_dataset.shape, winner_test_dataset.shape)
print(winner_train_dataset.describe().transpose())

# split features from labels
winner_train_features = winner_train_dataset.copy()
winner_test_features = winner_test_dataset.copy()

winner_train_labels = winner_train_features.pop('winner')
winner_test_labels = winner_test_features.pop('winner')
``````
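For context, each entry of validtttRecordsList is assumed to look roughly like the sketch below; the actual class is defined in the earlier posts of this series.

``````
from dataclasses import dataclass
from typing import List

# Sketch of the record structure used above (assumed, see the earlier posts):
@dataclass
class TttRecord:
    vector: List[int]  # the nine board cells, encoded as before
    winner: int        # 1 if O wins, 0 for an X win or a tie
``````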

We set up another normalizer and adapt it to the training features:

``````
winner_normalizer = preprocessing.Normalization()
winner_normalizer.adapt(np.array(winner_train_features))  # learn per-feature mean and variance
print(winner_normalizer.mean.numpy())
``````
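Under the hood, the adapted layer standardizes each feature to zero mean and unit variance. A quick sanity check against plain NumPy (a sketch; it assumes numpy is imported as np, as in the previous posts):

``````
sample = np.array(winner_train_features[:1], dtype='float32')
# The Normalization layer computes (x - mean) / sqrt(variance) per feature.
manual = (sample - winner_normalizer.mean.numpy()) / np.sqrt(winner_normalizer.variance.numpy())
print(winner_normalizer(sample).numpy())
print(manual)  # should match the layer output (up to a numerical epsilon)
``````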

We can define another model and train it:

``````
winner_model = keras.models.Sequential([
    winner_normalizer,
    layers.Dense(units=64, activation='relu'),   #1
    layers.Dense(units=64, activation='relu'),   #2
    layers.Dense(units=128, activation='relu'),  #3
    layers.Dense(units=1)
])
print(winner_model.summary())
winner_model.compile(loss=keras.losses.BinaryCrossentropy(from_logits=True),
                     metrics=["accuracy"])
winner_history = winner_model.fit(winner_train_features, winner_train_labels,
                                  batch_size=512,
                                  epochs=50,
                                  shuffle=True,
                                  callbacks=[
                                      tf.keras.callbacks.EarlyStopping(monitor='accuracy', mode="max", restore_best_weights=True, patience=5, verbose=1),
                                      Accuracy1Stopping(),
                                      tf.keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.2, patience=2, min_lr=0.002),
                                      tensorboard_callback
                                  ],
                                  verbose=1)
``````
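Since the last Dense layer has no activation and we compile with from_logits=True, the model outputs raw logits. To read them as probabilities that O wins, push them through a sigmoid, for example like this (a small sketch):

``````
logits = winner_model.predict(winner_test_features[:5])
probs = tf.sigmoid(logits).numpy()     # logits -> probabilities for "O wins"
predicted = (probs > 0.5).astype(int)  # threshold at 0.5 for the binary label
print(probs.round(3), predicted.ravel())
``````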

Note that we use the same configuration for our Sequential model as before. The training already terminates after three (!) epochs, reaching an accuracy of 1.0:

``````
Model: "sequential_6"
_________________________________________________________________
Layer (type)                Output Shape              Param #
=================================================================
normalization_2 (Normaliza  (None, 9)                 19
tion)

dense_19 (Dense)            (None, 64)                640

dense_20 (Dense)            (None, 64)                4160

dense_21 (Dense)            (None, 128)               8320

dense_22 (Dense)            (None, 1)                 129

=================================================================
Total params: 13268 (51.83 KB)
Trainable params: 13249 (51.75 KB)
Non-trainable params: 19 (80.00 Byte)
_________________________________________________________________
None
Epoch 1/50
567/567 [==============================] - 5s 7ms/step - loss: 0.1759 - accuracy: 0.9413 - lr: 0.0500
Epoch 2/50
567/567 [==============================] - 3s 5ms/step - loss: 0.0048 - accuracy: 0.9989 - lr: 0.0500
Epoch 3/50
567/567 [==============================] - 3s 6ms/step - loss: 4.6310e-04 - accuracy: 1.0000
``````
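The run stops early thanks to the Accuracy1Stopping callback defined in the previous post. In case you skipped that part, a minimal version could look like this (a sketch, not necessarily the exact original):

``````
class Accuracy1Stopping(tf.keras.callbacks.Callback):
    """Stop training as soon as the training accuracy reaches 1.0."""
    def on_epoch_end(self, epoch, logs=None):
        if logs is not None and logs.get('accuracy', 0.0) >= 1.0:
            self.model.stop_training = True
``````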

A brief evaluation shows that the accuracy carries over to our test data:

``````
evaluationResult = winner_model.evaluate(winner_test_features, winner_test_labels, batch_size=256, verbose=1)
print(evaluationResult)
``````
``````
284/284 [==============================] - 3s 10ms/step - loss: 2.9924e-04 - accuracy: 1.0000
[0.0002992423251271248, 1.0]
``````
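Using the same sigmoid conversion as above, a quick spot check on hand-crafted boards confirms this. Note that the cell encoding below (0 = empty, 1 = X, 2 = O) is an assumption carried over from the earlier posts; adjust it if your encoding differs:

``````
boards = np.array([
    [1, 1, 1, 2, 2, 0, 0, 0, 0],  # X wins the top row -> expect 0
    [2, 2, 2, 1, 1, 0, 1, 0, 0],  # O wins the top row -> expect 1
    [1, 2, 1, 1, 2, 2, 2, 1, 1],  # a tie -> expect 0
], dtype='float32')
probs = tf.sigmoid(winner_model.predict(boards)).numpy()
print((probs > 0.5).astype(int).ravel())  # hopefully [0, 1, 0]
``````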

This means that our model is oversized for the task. In fact, a model with a single hidden Dense layer of 40 units (a mere 441 trainable parameters, compared to 13,249 before) still does the trick after 11 epochs:

``````
winner_model = keras.models.Sequential([
    winner_normalizer,
    layers.Dense(units=40, activation='relu'),  #1
    layers.Dense(units=1)
])
# ...
``````
``````
Epoch 1/50
567/567 [==============================] - 7s 9ms/step - loss: 0.2422 - accuracy: 0.9175 - lr: 0.0500
Epoch 2/50
567/567 [==============================] - 3s 5ms/step - loss: 0.1134 - accuracy: 0.9610 - lr: 0.0500
Epoch 3/50
567/567 [==============================] - 3s 5ms/step - loss: 0.0890 - accuracy: 0.9680 - lr: 0.0500
Epoch 4/50
567/567 [==============================] - 3s 5ms/step - loss: 0.0718 - accuracy: 0.9730 - lr: 0.0500
Epoch 5/50
567/567 [==============================] - 3s 5ms/step - loss: 0.0605 - accuracy: 0.9766 - lr: 0.0500
Epoch 6/50
567/567 [==============================] - 3s 5ms/step - loss: 0.0412 - accuracy: 0.9847 - lr: 0.0500
Epoch 7/50
567/567 [==============================] - 3s 5ms/step - loss: 0.0287 - accuracy: 0.9901 - lr: 0.0500
Epoch 8/50
567/567 [==============================] - 3s 5ms/step - loss: 0.0181 - accuracy: 0.9947 - lr: 0.0500
Epoch 9/50
567/567 [==============================] - 3s 5ms/step - loss: 0.0063 - accuracy: 0.9997 - lr: 0.0500
Epoch 10/50
567/567 [==============================] - 3s 5ms/step - loss: 0.0113 - accuracy: 0.9970 - lr: 0.0500
Epoch 11/50
567/567 [==============================] - 3s 5ms/step - loss: 0.0027 - accuracy: 1.0000 - lr: 0.0500
``````