TensorBoard – How to Visualize Data Using TensorBoard
TensorBoard is a web-based tool embedded in TensorFlow. It provides a suite of methods we can use to gain insight into TensorFlow sessions and graphs, allowing the user to inspect, visualize, and deeply understand them. It gives straightforward access to many functionalities, as follows:
- It allows us to explore the details of TensorFlow model graphs, letting the user zoom in to specific blocks and subsections.
- It can generate plots of typical quantities of interest, such as loss and accuracy, that we can monitor during training.
- It gives us access to histogram visualizations that show tensors changing over time.
- It provides trends of layer weights and biases over epochs.
- It stores runtime metadata for a run, such as total memory usage.
- It visualizes embeddings.
TensorBoard reads TensorFlow log files that contain summary information about the training process at hand. These files are generated by the appropriate callbacks, which are passed to the TensorFlow training jobs.
The following screenshot shows some typical visualizations that are provided by TensorBoard. The first one is the SCALARS tab, which shows scalar quantities associated with the training stage. In this example, accuracy and binary cross entropy are shown:
The second view provides a block diagram visualization of the computational graph, where all the layers are reported together with their relations, as shown in the following screenshot:
The DISTRIBUTIONS tab provides an overview of how the model parameters are distributed across epochs, as shown in the following figure:
Finally, the HISTOGRAMS tab provides similar information to the DISTRIBUTIONS tab, but unfolds it in 3D, as shown in the following screenshot:
In this section and, in particular, in the following exercise, TensorBoard will be leveraged to easily visualize metrics in terms of trends, tensor graphs, distributions, and histograms.
In order to focus only on TensorBoard, we will use the very same classification exercise we performed in the previous section, and only the large model will be trained. All we need to do is import TensorBoard, activate it, and define the log file directory.
A TensorBoard callback is then created and passed to the fit method of the model, which generates all the TensorBoard files inside the log directory. Once training is complete, the log directory path is passed to TensorBoard as an argument, opening a web-based visualization where the user can gain deep insights into the model and its training-related aspects.
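In code, this workflow boils down to just a few lines. The following is a minimal sketch of the pattern (the log directory name is only a common convention, and model stands for any already compiled tf.keras model):
import tensorflow as tf
log_dir = "logs/fit"
# Create the callback that writes TensorBoard log files
tensorboard_callback = tf.keras.callbacks.TensorBoard\
                       (log_dir=log_dir, histogram_freq=1)
# Pass it to fit alongside the training data:
# model.fit(train_data, epochs=5, callbacks=[tensorboard_callback])
# Then, inside a notebook, point TensorBoard at the log directory:
# %tensorboard --logdir logs/fit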
Exercise 3.07: Creating a Deep Neural Network to Classify Events Generated by the ATLAS Experiment in the Quest for the Higgs Boson Using TensorBoard for Visualization
In this exercise, we will build, train, and measure the performance of a deep neural network with the same goal as Exercise 3.06, Creating a Deep Neural Network to Classify Events Generated by the ATLAS Experiment in the Quest for the Higgs Boson, but this time we will leverage TensorBoard to gain additional insights into training.
The following steps need to be implemented in order to complete this exercise:
- Import all the required modules:
from __future__ import absolute_import, division, \
                       print_function, unicode_literals
from IPython import display
from matplotlib import pyplot as plt
from scipy.ndimage.filters import gaussian_filter1d
import pandas as pd
import numpy as np
import datetime
import tensorflow as tf
# Clear any logs from previous runs
!rm -rf ./logs/
# Load the TensorBoard notebook extension
%load_ext tensorboard
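Optionally, we can confirm that TensorFlow 2 is in use before moving on; this quick check is not part of the original exercise:
print("TensorFlow version:", tf.__version__)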
- Download the custom smaller subset of the original dataset:
higgs_path = tf.keras.utils.get_file('HIGGSSmall.csv.gz', \
             'https://github.com/PacktWorkshops/'\
             'The-Reinforcement-Learning-Workshop/blob/master/'\
             'Chapter03/Dataset/HIGGSSmall.csv.gz?raw=true')
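get_file downloads the archive once, caches it locally (by default, under ~/.keras/datasets), and returns the local path. An optional check:
print(higgs_path)  # the local path of the cached file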
- Read the CSV dataset into the TensorFlow dataset class and repack it so that it has tuples (features, labels):
N_TEST = int(1e3)
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(N_TRAIN)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
N_FEATURES = 28
ds = tf.data.experimental.CsvDataset\
     (higgs_path, [float(),]*(N_FEATURES+1), \
      compression_type="GZIP")

def pack_row(*row):
    # The first CSV column is the label; the remaining
    # N_FEATURES columns are stacked into a feature tensor
    label = row[0]
    features = tf.stack(row[1:], 1)
    return features, label

packed_ds = ds.batch(N_TRAIN).map(pack_row).unbatch()
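Optionally, we can peek at one repacked element to verify that each record is now a (features, label) tuple with 28 features; this check is not required by the exercise:
for features, label in packed_ds.take(1):
    print(features.numpy().shape)  # expected: (28,)
    print(label.numpy())           # expected: 0.0 or 1.0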
- Create training, validation, and test sets and assign them the BATCH_SIZE parameter:
# Take the first N_VALIDATION rows for validation, the next N_TEST
# for testing, and the following N_TRAIN for training; caching
# avoids re-parsing the CSV file at every epoch
validate_ds = packed_ds.take(N_VALIDATION).cache()
test_ds = packed_ds.skip(N_VALIDATION).take(N_TEST).cache()
train_ds = packed_ds.skip(N_VALIDATION+N_TEST)\
                    .take(N_TRAIN).cache()
test_ds = test_ds.batch(BATCH_SIZE)
validate_ds = validate_ds.batch(BATCH_SIZE)
# Shuffle and repeat the training set so that it can be consumed
# for an arbitrary number of epochs
train_ds = train_ds.shuffle(BUFFER_SIZE)\
                   .repeat().batch(BATCH_SIZE)
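A quick optional check confirms that each training batch now has the expected shape:
for features, labels in train_ds.take(1):
    print(features.shape, labels.shape)  # expected: (500, 28) (500,)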
- Now, let's start creating the model and training it. First, create a decaying learning rate schedule:
lr_schedule = tf.keras.optimizers.schedules\
              .InverseTimeDecay(0.001, \
                                decay_steps=STEPS_PER_EPOCH*1000,\
                                decay_rate=1, staircase=False)
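To get a feel for what this schedule does, we can optionally plot the learning rate against the epoch number; this side check is not part of the exercise itself:
step = np.linspace(0, STEPS_PER_EPOCH*1000)
lr = lr_schedule(step)
plt.figure(figsize=(8, 6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0, max(plt.ylim())])
plt.xlabel('Epoch')
plt.ylabel('Learning rate')
plt.show()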
- Define a function that will compile a model with the Adam optimizer and use binary cross entropy as the loss function. Then, fit it on the training data, applying early stopping based on the validation dataset and adding a TensorBoard callback:
log_dir = "logs/fit/" + datetime.datetime.now()\
.strftime("%Y%m%d-%H%M%S")
def compile_and_fit(model, name, max_epochs=3000):
optimizer = tf.keras.optimizers.Adam(lr_schedule)
model.compile(optimizer=optimizer,\
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\
metrics=[tf.keras.losses.BinaryCrossentropy\
(from_logits=True, name='binary_crossentropy'),\
'accuracy'])
model.summary()
tensorboard_callback = tf.keras.callbacks.TensorBoard\
(log_dir=log_dir,\
histogram_freq=1,\
profile_batch=0)
history = model.fit\
(train_ds,\
steps_per_epoch = STEPS_PER_EPOCH,\
epochs=max_epochs,\
validation_data=validate_ds,\
callbacks=[tf.keras.callbacks.EarlyStopping\
(monitor='val_binary_crossentropy',\
patience=200),\
tensorboard_callback], verbose=2)
return history
- Create the same large model as before, with regularization in the form of L2 weight penalties and dropout, and then compile it and fit it on the dataset:
regularization_model = tf.keras.Sequential([\
    tf.keras.layers.Dense(512,\
                          kernel_regularizer=tf.keras.regularizers\
                                             .l2(0.0001),\
                          activation='elu', \
                          input_shape=(N_FEATURES,)),\
    tf.keras.layers.Dropout(0.5),\
    tf.keras.layers.Dense(512,\
                          kernel_regularizer=tf.keras.regularizers\
                                             .l2(0.0001),\
                          activation='elu'),\
    tf.keras.layers.Dropout(0.5),\
    tf.keras.layers.Dense(512,\
                          kernel_regularizer=tf.keras.regularizers\
                                             .l2(0.0001),\
                          activation='elu'),\
    tf.keras.layers.Dropout(0.5),\
    tf.keras.layers.Dense(512,\
                          kernel_regularizer=tf.keras.regularizers\
                                             .l2(0.0001),\
                          activation='elu'),\
    tf.keras.layers.Dropout(0.5),\
    tf.keras.layers.Dense(1)])

compile_and_fit(regularization_model,\
                "regularizers/regularization", max_epochs=9000)
The last output line will be as follows:
Epoch 1112/9000
20/20 - 1s - loss: 0.5887 - binary_crossentropy: 0.5515
- accuracy: 0.6949 - val_loss: 0.5831
- val_binary_crossentropy: 0.5459 - val_accuracy: 0.6960
- Check the model's performance on the test set:
test_accuracy = tf.keras.metrics.Accuracy()
for (features, labels) in test_ds:
    # The model outputs logits, so we apply a sigmoid and
    # threshold the resulting probabilities at 0.5
    logits = regularization_model(features)
    probabilities = tf.keras.activations.sigmoid(logits)
    predictions = 1*(probabilities.numpy() > 0.5)
    test_accuracy(predictions, labels)
print("Test set accuracy: {:.3%}".format(test_accuracy.result()))
The output will be as follows:
Test set accuracy: 69.300%
Note
The accuracy may show slightly different values due to random sampling with a variable random seed.
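If more repeatable runs are desired, the random seeds can be fixed at the top of the notebook, before the model is built; note that exact reproducibility is still not guaranteed across different hardware or library versions:
tf.random.set_seed(42)
np.random.seed(42)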
- Visualize the variables with TensorBoard:
%tensorboard --logdir logs/fit
This command starts the web-based visualization tool. Four main windows are represented in the following figure, displaying information about loss and accuracy, model graphs, histograms, and distributions in a clockwise order, starting from the top left:
The advantages of using TensorBoard are quite evident: all the training information is collected in a single place, allowing the user to navigate through it easily. The top-left panel, the SCALARS tab, lets the user monitor loss and accuracy, making it easier to check the same charts we inspected previously.
In the top right, the model graph is shown, so it is possible to visualize how input data flows into the computational graph by going through each block.
The two views at the bottom show the same information in two different representations: the distributions of all the model parameters (network weights and biases) across training epochs. On the left, the DISTRIBUTIONS tab displays the parameters in 2D, while the HISTOGRAMS tab unfolds them in 3D. Both allow the user to monitor how the trainable parameters vary during training.
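Note that the %tensorboard magic is notebook-specific; when working from a terminal, the same UI can be started with the equivalent shell command and opened in a browser at the address it prints (by default, http://localhost:6006):
tensorboard --logdir logs/fit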
Note
To access the source code for this specific section, please refer to https://packt.live/2AWGjFv.
You can also run this example online at https://packt.live/2YrWl2d.
In this section, we focused on how to use TensorBoard to visualize training-related model parameters. We saw how, starting from an already familiar problem, it is straightforward to add the TensorBoard web-based visualization tool and navigate through all of its plugins directly inside a Python notebook.
Now, let's complete an activity to put all our knowledge to the test.
Activity 3.01: Classifying Fashion Clothes Using a TensorFlow Dataset and TensorFlow 2
Suppose you need to code an image processing algorithm for a company that owns a clothes warehouse. They want to autonomously classify clothes based on a camera output, thereby allowing them to group clothes together with no human intervention.
In this activity, we will create a deep fully connected neural network capable of doing such a task, meaning that it will accurately classify images of clothes by assigning them to the class they belong to.
The following steps will help you to complete this activity:
- Import all the required modules, such as numpy, matplotlib.pyplot, tensorflow, and tensorflow_datasets, and print out their main module versions.
- Import the Fashion MNIST dataset using TensorFlow datasets and split it into train and test sets (a minimal loading sketch is shown after this list).
- Explore the dataset to get familiar with the input features, that is, shapes, labels, and classes.
- Visualize some instances of the training set.
- Perform data normalization, then build the classification model.
- Train the deep neural network.
- Test the model's accuracy. You should obtain an accuracy in excess of 88%.
- Perform inference and check the predictions against the ground truth.
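As a hint for the second step, a minimal sketch of loading the dataset with tensorflow_datasets could look as follows; the variable names are illustrative only, and the full solution follows its own conventions:
import tensorflow_datasets as tfds
# as_supervised=True yields (image, label) tuples, while
# with_info=True also returns the dataset metadata
(train_ds, test_ds), ds_info = tfds.load('fashion_mnist',\
                                         split=['train', 'test'],\
                                         as_supervised=True,\
                                         with_info=True)
print(ds_info.features['label'].names)  # the 10 clothing classes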
By the end of this activity, the trained model should be able to classify all the fashion items (clothes, shoes, bags, and so on) with an accuracy greater than 88%, thus producing a result similar to the one shown in the following image:
Note
The solution to this activity can be found on page 696.