Mastering Machine Learning with TensorFlow: Unveiling Insights through Practical Projects in Image Classification and Regression

Sid Kedha
10 min read · Aug 28, 2023


Abstract:

Machine learning is no ordinary tool; it's a phenomenon that keeps pushing past the boundaries of human perception. Its applications continue to expand, continuously astounding the community with new developments and possibilities. Make no mistake: artificial intelligence isn't in the future, it is the future. Guiding its path of progress is our responsibility, ensuring that its evolution remains within the realm of human understanding and oversight. This approach is vital, preventing any unintended consequences of its rapid development (a subtle nod to Terminator, for those familiar with the films). This article serves as a guide to mastering machine learning concepts by delving into hands-on projects. We'll navigate through two distinct projects: one tackling regression with the task of Celsius-to-Fahrenheit temperature conversion, and another focused on image classification using the Fashion MNIST dataset.

Part 1: Celsius to Fahrenheit Temperature Conversion

So let's start off with a brief explanation of the task at hand. Given a dataset of temperatures in Celsius and their Fahrenheit equivalents, the objective is to train a machine learning model to figure out the formula by which the conversion occurs. In simpler terms, we train a simple neural network to perform temperature conversion. A task like this is considered regression because the model outputs a single continuous value. The end goal is a model that can convert any value given in Celsius to Fahrenheit. How would we do this, you might ask? The process consists of five crucial steps:

1. Understanding the Task

2. Data Preparation

3. Building the Neural Network

4. Training the Model

5. Inference and Interpretation

As we have just gone over understanding the task, let's move on to data preparation. For a regression task, the input and output data are prepared as paired numeric values: we normalize the Celsius temperatures and their corresponding Fahrenheit values, creating a dataset that feeds into our neural network. Simple, right? Now here is where it gets a little more complicated. As we delve into the construction of our neural network, we aim for a design that combines the simplicity and power that we need.
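To make this concrete, here is a minimal sketch of what such a dataset might look like. The specific values are illustrative sample pairs rather than the exact ones used in the original code; each Fahrenheit entry is simply the converted value of the Celsius entry at the same position, and normalization could then be applied on top of these arrays before training.

import numpy as np

# Paired training examples: every Celsius value maps to its Fahrenheit equivalent
celsius    = np.array([-40, -10,  0,  8, 15, 22,  38], dtype=float)
fahrenheit = np.array([-40,  14, 32, 46, 59, 72, 100], dtype=float)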

At the heart of our neural network lies a dense layer. What is that, you may ask? Well, a dense layer consists of interconnected nodes, also referred to as neurons.

Here we construct our Dense layer with units set equal to 1. This parameter specifies the number of neurons (units) in the dense layer; in this case, the layer will have a single neuron. The input_shape=[1] argument defines the shape of the input data that will be fed into the layer: a single value, the Celsius temperature.
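In code, a minimal sketch of that construction might look like this, following the units=1 and input_shape=[1] parameters described above:

import tensorflow as tf

# A single dense layer: one neuron that takes one value (the Celsius temperature) as input
layer = tf.keras.layers.Dense(units=1, input_shape=[1])
model = tf.keras.Sequential([layer])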

This layer is not merely a mathematical construct; it is a set of neurons meticulously calibrated to transform the input (the Celsius temperature) into a meaningful prediction of the output (the Fahrenheit temperature). This process provides a glimpse into the transformative powers of a neural network, including but not limited to its ability to learn and deduce relationships from data. The dense layer is equipped with weights and biases, concealed variables that hold the key to the network's understanding. The weights determine the strength of connections between neurons, while biases introduce a touch of flexibility, allowing for nuanced adjustments. As the network undergoes training, these variables evolve, striving to minimize the discrepancy between predicted and actual Fahrenheit values.

The quest to bridge the gap between the actual and predicted value is assisted by our trusted companion: the loss function. Here we use mean squared error, which acts as an evaluator of our neural network's performance by quantifying the disparity between our predictions and the actual Fahrenheit values. For a second, let's imagine each prediction as a brushstroke on a canvas of data. The mean squared error acts as the critic, measuring the distance between the ideal masterpiece and the strokes that compose it. Our objective is to close that distance a little further with every pass, until the discrepancy between the strokes and the ideal masterpiece is as small as possible. This delicate dance of error minimization is what refines our network's comprehension and, in turn, its predictions. Together, this neural symphony, led by a single dense layer and guided by the melody of mean squared error, forms the crux of our regression model.
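Concretely, mean squared error is simply the average of the squared differences between the predicted and actual values. A small illustrative sketch, with made-up numbers:

import numpy as np

def mean_squared_error(actual, predicted):
    # Average of the squared differences between targets and predictions
    return np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2)

# Two predictions that are each off by 2 degrees -> MSE of 4.0
print(mean_squared_error([32.0, 212.0], [30.0, 210.0]))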

After the construction of our neural network, we move on to training the model, which is relatively simple given that our dataset contains only a handful of values. Training is the process in which our neural network learns to mirror the relationship between Celsius and Fahrenheit temperatures. Central to this journey are the concealed variables: the weights of our neural network. These weights, initially set to arbitrary values, hold the potential to uncover the hidden patterns within the temperature conversion data. However, the path to enlightenment demands adaptation, requiring these weights to converge and evolve towards values that bring our predictions closer to the actual values. In this process, the role of the Adam optimizer is critical. Adam is an acronym for Adaptive Moment Estimation, a technique that equips our neural network with the ability to traverse the intricate landscape of weight values. Visualize the weights as hikers and the Adam optimizer as the compass guiding them through valleys and peaks towards the summit of optimal values.

Straight from the source code, this is the training section. The fit method trains the model on the training data after the model has been initialized and compiled.
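A sketch of what that call might look like, assuming the model has already been compiled (shown just below) and that celsius and fahrenheit are the arrays from the data-preparation step; the epoch count of 500 is an assumed value rather than one taken from the original code:

# Train the single-neuron model on the paired temperature data
history = model.fit(celsius, fahrenheit, epochs=500, verbose=False)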

In summary, Adam optimization dynamically adjusts the step size (learning rate) for each parameter based on the history of gradients, allowing it to navigate the landscape of the loss function more effectively and converge towards the optimal solution. The guiding mechanism of the Adam optimizer is gradient descent, a fundamental concept in the realm of machine learning. Picture the loss function as a hilly landscape — valleys represent points of low error while peaks symbolize high discrepancies. Gradient descent is our navigator, guiding the weights towards lower points, where the errors are minimized. As the training unfolds, the neural network iteratively adjusts its weights, treading the path towards convergence — the state where the loss function’s value is minimized. The collaboration between weights, the Adam optimizer, and the technique of gradient descent orchestrates a symphony that culminates in a model capable of deciphering the intricacies of temperature conversion.

Here the Adam optimizer is configured with a learning rate of 0.9. Alongside the optimizer, the first argument specifies our loss function, which the model aims to minimize.
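A sketch of that compile step, continuing from the earlier snippets, with the mean squared error loss discussed above and the 0.9 learning rate just mentioned, followed by the final inference step of converting 100 degrees Celsius (which ideally comes out near 212):

# Loss function first, then the Adam optimizer with its learning rate
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(0.9))

# Inference and interpretation: the true conversion of 100 °C is 212 °F
print(model.predict(np.array([[100.0]])))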

Part 2: Fashion MNIST Dataset

Now let's venture into the exciting realm of image classification using the Fashion MNIST dataset. This dataset contains a collection of grayscale images depicting various clothing items, each belonging to one of ten categories. Our goal here is to harness the power of machine learning to accurately classify these images into their respective categories. Following the same five steps outlined for our last project, understanding the task at hand remains crucial to the machine learning process. Image classification requires training a model to recognize and classify images into their respective categories based on their features. In our case, the Fashion MNIST dataset provides us with images of clothing items like shirts, shoes, dresses, and more. The goal of this project is to use machine learning techniques to build a classifier that can correctly identify the type of clothing depicted in each image.

Moving on to data preparation: since the Fashion MNIST dataset is provided through TensorFlow, we'll use Python's TensorFlow library to load and preprocess it. The images need to be transformed into a format that can be fed into a neural network, and TensorFlow simplifies this process by providing tools for resizing, normalization, and data splitting.
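One common way to do this, consistent with the train_dataset and num_train_examples variable names that appear in the training code later on, is the tensorflow_datasets package; a sketch:

import tensorflow as tf
import tensorflow_datasets as tfds

# Load Fashion MNIST as (image, label) pairs, along with dataset metadata
dataset, metadata = tfds.load('fashion_mnist', as_supervised=True, with_info=True)
train_dataset, test_dataset = dataset['train'], dataset['test']

num_train_examples = metadata.splits['train'].num_examples
num_test_examples = metadata.splits['test'].num_examples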

Using the normalize function, we transform each image into a format that can be fed into the neural network.
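A typical implementation of such a function is sketched below; the standard approach scales each pixel value from the 0-255 range down to 0-1, which is assumed here since the original function body is not reproduced in the text:

def normalize(images, labels):
    # Cast pixel values to float and scale them from [0, 255] to [0, 1]
    images = tf.cast(images, tf.float32)
    images /= 255
    return images, labels

# Apply the normalization to every image in both splits
train_dataset = train_dataset.map(normalize)
test_dataset = test_dataset.map(normalize)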

Next we get into the process and architecture of a somewhat more involved neural network. We will dive straight into building a simple sequential model using the Keras API, a high-level deep learning framework integrated within TensorFlow. This model, carefully designed with three layers, forms the backbone of our image classification project, ushering us into a world of pattern recognition and predictive prowess.

Constructing the Sequential Neural Network Model:

Our journey begins with the meticulous construction of the sequential neural network model. A sequential model is a linear stack of layers, each seamlessly connected to the next, forming a precise path for data to flow. With Python’s TensorFlow and Keras libraries at our disposal, this process becomes an elegant endeavor.

1. Flattening Layer: The initial layer is a flattening layer, designed to transform the 28x28 pixel images into a 1D array of 784 values. This transformation enables the subsequent layers to comprehend the underlying patterns and features within the images.
tf.keras.layers.Flatten(input_shape=(28, 28, 1)),

2. Dense Hidden Layer: As we move forward, we encounter a dense hidden layer comprising 128 neurons, each equipped with the rectified linear unit (ReLU) activation function. Each neuron in a dense layer receives output from every neuron of the preceding layer, and the layer as a whole performs a matrix-vector multiplication. The ReLU activation function introduces non-linearity, allowing the model to capture complex interactions within the data. This layer is where the true magic of pattern recognition occurs, as it uncovers the hidden features that distinguish one clothing item from another.

tf.keras.layers.Dense(128, activation=tf.nn.relu),

3. Dense Output Layer: The culmination of our model is the dense output layer, a pivotal player in the classification endeavor. With 10 neurons, one for each class in our dataset, this layer serves as a decision-maker, assigning probabilities to each class. The softmax activation function, akin to a conductor, orchestrates these probabilities into a harmonious symphony, allowing us to identify the most likely category for each clothing item.

tf.keras.layers.Dense(10, activation=tf.nn.softmax)
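Assembled into a single definition, the three layers described above form the complete model; this is a direct combination of the layer lines shown in the list:

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),    # 28x28 image -> 784-value vector
    tf.keras.layers.Dense(128, activation=tf.nn.relu),   # hidden layer with ReLU activation
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)  # one probability per clothing class
])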

Compiling the model provides it with the tools TensorFlow gives us to optimize the model's parameters and enhance its predictive ability. The Adam optimizer intricately tweaks the model's internal parameters to minimize the sparse categorical cross-entropy loss. Adam's adaptive learning rate allows the model to optimize its parameters with ease and converge efficiently to a solution. The sparse categorical cross-entropy function plays the role of our "loss function", measuring the discrepancy between the predicted and actual labels so that the weights and internal variables can be adjusted correctly. Finally, we add the accuracy metric, which keeps track of our model's performance by tallying the proportion of correct predictions in our sample.

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

After constructing the neural network, we need to train the model. The training process is quite simple: after loading our training samples (60,000) and test samples (10,000), the model is trained on the training dataset for 5 epochs. The fit function takes care of iterating through the dataset in batches and updating the model's parameters to minimize the loss function. This involves adjusting the weights and biases in the model's layers through optimization techniques such as gradient descent.

model.fit(train_dataset, epochs=5, steps_per_epoch=math.ceil(num_train_examples/BATCH_SIZE))
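Note that this call assumes BATCH_SIZE and num_train_examples were defined earlier when the dataset was batched. A minimal sketch of that setup, continuing from the loading code above; the batch size of 32 is an assumed value, not one taken from the original code:

import math

BATCH_SIZE = 32  # assumed value; the original batch size is not shown here
train_dataset = train_dataset.cache().repeat().shuffle(num_train_examples).batch(BATCH_SIZE)
test_dataset = test_dataset.cache().batch(BATCH_SIZE)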

Post training, we evaluate the model's performance on the test dataset, measuring accuracy and calculating the test loss using the evaluate function. The script then proceeds to make predictions on a batch of test images and visualize the results using Matplotlib. It plots the predicted class label, the true class label, and a bar graph showing the prediction probabilities for each class.
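A sketch of that evaluation and prediction step, assuming the batched test_dataset and the BATCH_SIZE constant from the training setup above:

# Evaluate accuracy and loss on the held-out test set
test_loss, test_accuracy = model.evaluate(
    test_dataset, steps=math.ceil(num_test_examples / BATCH_SIZE))
print('Test accuracy:', test_accuracy)

# Generate class-probability predictions for one batch of test images
for test_images, test_labels in test_dataset.take(1):
    test_images = test_images.numpy()
    test_labels = test_labels.numpy()
    predictions = model.predict(test_images)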

import numpy as np
import matplotlib.pyplot as plt

def plot_image(i, predictions_array, true_labels, images):
    # Select the prediction vector, true label, and image for the i-th example
    predictions_array, true_label, img = predictions_array[i], true_labels[i], images[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])

    plt.imshow(img[..., 0], cmap=plt.cm.binary)

    # Label in blue when the prediction matches the true class, red otherwise
    predicted_label = np.argmax(predictions_array)
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'

    # class_names is the list of the ten category labels defined elsewhere in the script
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100 * np.max(predictions_array),
                                         class_names[true_label]),
               color=color)

def plot_value_array(i, predictions_array, true_label):
    predictions_array, true_label = predictions_array[i], true_label[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    # Bar chart of the ten class probabilities for this example
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)

    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')
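These helpers can then be called for any test example; for instance, to visualize the first image in the predicted batch along with its probability bar chart, using the predictions, test_labels, and test_images produced above:

# Show the first test image with its predicted label and probability distribution
i = 0
plt.figure(figsize=(6, 3))
plt.subplot(1, 2, 1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1, 2, 2)
plot_value_array(i, predictions, test_labels)
plt.show()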

After running the script let’s take a look at the model’s output.

With roughly 93 percent confidence, the model is sure the provided image is a coat.

This is further supported by the output returned in the terminal: an array containing, for each of the ten categories in the dataset, the model's confidence that the image belongs to that category.

Here is an example of such an array:

[[1.0623254e-04, 3.4596424e-06, 1.2640079e-02, 6.8163035e-06, 9.2804480e-01, 6.5611316e-09, 5.9176281e-02, 1.3649837e-09, 2.2307891e-05, 6.7272636e-09]]

Each position in this array corresponds to the class at the same position in the label list: ['T-shirt/Top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'].

In this case, the machine is stating with about 93 percent confidence (9.2804480e-01 ≈ 0.93, or 93%) that the given image is a coat. Since 'Coat' is the fifth entry (index 4) in the label list and the fifth value in the output array, 0.928, is by far the largest, we can confirm that the model works and has correctly identified a coat.
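The same check can be done programmatically by taking the index of the largest probability and looking it up in the label list; a small sketch using the exact numbers above:

import numpy as np

class_names = ['T-shirt/Top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

probabilities = np.array([1.0623254e-04, 3.4596424e-06, 1.2640079e-02, 6.8163035e-06,
                          9.2804480e-01, 6.5611316e-09, 5.9176281e-02, 1.3649837e-09,
                          2.2307891e-05, 6.7272636e-09])

best = np.argmax(probabilities)  # index 4, which corresponds to 'Coat'
print(class_names[best], "{:.0%}".format(probabilities[best]))  # Coat 93%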

Conclusion:

In this article, we have thoroughly gone over two distinct projects that provide a glimpse into the power and impact of effective machine learning. Through a temperature-conversion regression task and a classification project built on the Fashion MNIST dataset, a comprehensive understanding of the fundamental concepts in machine learning should be achievable. We have witnessed how machine learning algorithms can turn mere data into accurate predictions and bridge the gap between understanding and application. Peering into the future, we can see unlimited possibilities and endless uses for the power of machine learning. Whether we're decoding Celsius temperatures or classifying fashion items, the underlying principle remains the same: machine learning is a fusion of art and science that empowers us to understand, predict, and transform the world around us. So, as we venture forward, may our algorithms be efficient, our predictions accurate, and our knowledge ever expanding. Hopefully this article has provided a comprehensive introduction to fundamental machine learning concepts and a solid foundation for using TensorFlow in future machine learning projects.
