OpenMV-Image-Classification/.ipynb_checkpoints/openmv clasification training-checkpoint.ipynb


2024-09-14 23:47:15 +00:00
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "6Y8E0lw5eYWm"
},
"source": [
"# Post-training integer quantization"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BTC1rDAuei_1"
},
"source": [
"## Overview\n",
"\n",
"Integer quantization is an optimization strategy that converts 32-bit floating-point numbers (such as weights and activation outputs) to the nearest 8-bit fixed-point numbers. This results in a smaller model and increased inferencing speed, which is valuable for low-power devices such as the [OpenMV](https://openmv.io) camera"
]
},
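{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch of the conversion this notebook builds toward (the exact settings used later may differ, and `model` / `representative_dataset` are placeholders), full-integer quantization with `tf.lite.TFLiteConverter` looks like:\n",
"\n",
"```python\n",
"converter = tf.lite.TFLiteConverter.from_keras_model(model)\n",
"converter.optimizations = [tf.lite.Optimize.DEFAULT]\n",
"converter.representative_dataset = representative_dataset\n",
"# Force int8 ops and integer input/output tensors (requires TF >= 2.3)\n",
"converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]\n",
"converter.inference_input_type = tf.uint8\n",
"converter.inference_output_type = tf.uint8\n",
"tflite_model = converter.convert()\n",
"```"
]
},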
{
"cell_type": "markdown",
"metadata": {
"id": "dDqqUIZjZjac"
},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "I0nR5AMEWq0H"
},
"source": [
"In order to quantize both the input and output tensors, we need to use APIs added in TensorFlow 2.3:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "WsN6s5L1ieNl"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-09-14 19:26:49.528256: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-09-14 19:26:49.544309: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-09-14 19:26:49.549222: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-09-14 19:26:49.561671: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
"2024-09-14 19:26:50.195875: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n"
]
}
],
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"\n",
"\n",
"from tensorflow import keras\n",
"from tensorflow.keras import layers\n",
"from tensorflow.keras.models import Sequential"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2XsEP17Zelz9"
},
"source": [
"## Generate a TensorFlow Model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5NMaNZQCkW9X"
},
"source": [
"We'll build a simple model to classify a few playing cards."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "eMsw_6HujaqM"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found 97 files belonging to 5 classes.\n",
"Using 78 files for training.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\n",
"I0000 00:00:1726356412.152193 168090 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\n",
"I0000 00:00:1726356412.201034 168090 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\n",
"I0000 00:00:1726356412.201280 168090 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\n",
"I0000 00:00:1726356412.202348 168090 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\n",
"I0000 00:00:1726356412.202582 168090 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\n",
"I0000 00:00:1726356412.202769 168090 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\n",
"I0000 00:00:1726356412.292549 168090 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\n",
"I0000 00:00:1726356412.292752 168090 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\n",
"I0000 00:00:1726356412.292919 168090 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355\n",
"2024-09-14 19:26:52.293158: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 5600 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3050, pci bus id: 0000:2d:00.0, compute capability: 8.6\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found 97 files belonging to 5 classes.\n",
"Using 19 files for validation.\n",
"number of classes: 5\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-09-14 19:26:53.320173: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAxkAAAMsCAYAAAA4VG/hAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8hTgPZAAAACXBIWXMAAA9hAAAPYQGoP6dpAAEAAElEQVR4nOz9W8gty5YXDv5GRGTOOb9v7dvZ53iqylNoUwVa2lKNJYiIIPrgiwoi+q+XtlBoH6QEfWhpaFB86BYf1AYfLPBBUYTGO/UiNoIP3h4EQRSv+K+qv1XlqVP7si7f982ZmREx+mGMERGZM+d3WXvtc/ZalWPvueY358yMjIyMGGP8xi2ImRkbbbTRRhtttNFGG2200UZviNz3ugMbbbTRRhtttNFGG2200btFG8jYaKONNtpoo4022mijjd4obSBjo4022mijjTbaaKONNnqjtIGMjTbaaKONNtpoo4022uiN0gYyNtpoo4022mijjTbaaKM3ShvI2GijjTbaaKONNtpoo43eKG0gY6ONNtpoo4022mijjTZ6o7SBjI022mijjTbaaKONNtrojdIGMjbaaKONNtpoo4022mijN0obyNhoo4022mijjTbaaKON3ihtIGOjjTbaaKONNtpoo402eqO0gYyNNtpoo4022mijjTba6I3SBjI22mijjTbaaKONNtpoozdKG8h4C+nnfu7n8Cf+xJ/Ar/t1vw6HwwEff/wx/tAf+kP42Z/92bNjnz9/jj/9p/80fu2v/bXY7Xb41re+hT/yR/4IPvnkk3LMMAz4c3/uz+GHf/iHsdvt8IM/+IP4M3/mz2AYhu/iXW200UbfTdr4yEYbbfRFaeMjG91H4XvdgY2eTv/23/5b/Ot//a/x4z/+4/jWt76Fn/3Zn8Vf+2t/Db/zd/5O/Kf/9J9wdXUFALi5ucHv+B2/A//5P/9n/LE/9sfwm3/zb8Ynn3yCn/7pn8bP//zP4+tf/zpyzvj9v//341/+y3+JP/7H/zh+5Ed+BP/hP/wH/JW/8lfw3/7bf8M//sf/+Ht7sxtttNGXQhsf2Wijjb4obXxko3uJN3rr6O7u7uy7f/Nv/g0D4L/1t/5W+e7P/tk/ywD4H/7Df3h2fM6ZmZn/9t/+2+yc43/xL/7F7Pef+qmfYgD8r/7Vv3rDvd9oo42+CrTxkY022uiL0sZHNrqPtnCpt5AOh0P5e5omfPrpp/jhH/5hfPjhh/h3/+7fld/+wT/4B/jRH/1R/IE/8AfO2iAiAMDf+3t/Dz/yIz+CX//rfz0++eST8vpdv+t3AQD++T//51/y3Wy00UbfC9r4yEYbbfRFaeMjG91HW7jUW0jH4xF/4S/8BfyNv/E38Au/8Atg5vLbixcvyt//43/8D/zBP/gH723rv//3/47//J//M77xjW+s/v6d73znzXR6o402+krRxkc22mijL0obH9noPtpAxltIf/JP/kn8jb/xN/Cn/tSfwm/7bb8NH3zwAYgIP/7jP46c85PayjnjN/2m34S//Jf/8urvP/iDP/gmurzRRht9xWjjIxtttNEXpY2PbHQfbSDjLaS///f/Pn7iJ34Cf+kv/aXy3el0wvPnz2fH/dAP/RD+43/8j/e29UM/9EP49//+3+N3/+7fXVyWG2200btPGx/ZaKONvihtfGSj+2jLyXgLyXs/c0kCwF/9q38VKaXZd3/wD/5B/Pt//+/xj/7RPzprw87/w3/4D+MXfuEX8Nf/+l8/O+Z4POL29vYN9nyjjTb6qtDGRzbaaKMvShsf2eg+Il7Ojo2+8vQTP/ET+Dt/5+/gJ3/yJ/EbfsNvwL/5N/8G/+yf/TMcj0f83t/7e/E3/+bfBCAl437rb/2t+K//9b/ij/2xP4Yf+7Efw2effYaf/umfxk/91E/hR3/0R5Fzxu/7fb8P/+Sf/BP8b//b/4bf/tt/O1JK+C//5b/g7/7dv4t/+k//KX7Lb/kt39sb3mijjd
44bXxko402+qK08ZGN7qXvUVWrjb4Aff755/xH/+gf5a9//ev87Nkz/j2/5/fwf/kv/4V/za/5NfwTP/ETs2M//fRT/smf/En+1b/6V3Pf9/ytb32Lf+InfoI/+eSTcsw4jvwX/+Jf5N/4G38j73Y7/uijj/jHfuzH+M//+T/PL168+C7f3UYbbfTdoI2PbLTRRl+UNj6y0X20eTI22mijjTbaaKONNtpoozdKW07GRhtttNFGG2200UYbbfRGaQMZG2200UYbbbTRRhtttNEbpQ1kbLTRRhtttNFGG2200UZvlDaQsdFGG2200UYbbbTRRhu9UdpAxkYbbbTRRhtttNFGG230RmkDGRtttNFGG2200UYbbbTRG6UNZGy00UYbbbTRRhtttNFGb5TCYw/8c//f/x+YGTkzOBNyJsgGGwRmB0C/ywADYCIE70GO4L1gmcQJOScMwwk5JUzTBMqEEAnOedmePme9jmw1n1JCShnH4xGZM/p+D+ccvPcgIpBzgG71kXNGzhnMCZkTch6QeQJyAiEjZICYkY8ncEqgcYID4eA9rg4HfOPjr8ER4AggBzgHOOdAjkCeQOQQQHAADmOGYwDIyAAmx0gEDMRIDoj6ORLAYIABgrwQE5AYPE5AyuAxApnBMem9Z7AjpECIDpg8cJMn3OWIT+9e4dVwxKc3L3CKE66fPYN3DtPxBE4ZOE0IIDxzHXa+w3v9HjsfcHA93t/v8f5+j855eB1v55zcI9HF12PInsdjiJlh27Ms3wEZcwDSHjOyjkt7bkoJOefynlMq3zMzCARq2vLeS9sgyMxlUHnYgM7ael2gfLb3nHP9TO3vi7+ZgcworSy3olmOExGYrBWWnuh9rm1j8//8f//Zs+/eFvqLf/Ofg8FInJGZMeWEDCCBAafzTecjEwAi2H/BBzhyCN4DDGR99nGckHJG1GfPmZFyQk4JyEmeBWcADMdZ5tQgfIFTBHIGpwQHQnDCt/oQEIJD511dt80zZsiXRATnnTxTqkcxczN7uDxf0uOc3Bo8kU4lRmbGOEaklDFNwsummDHGhHGKyOSQyeEb3/dNvPfBB/jB/9OvxfsffIBnH30AHwKyk2snZDgCgsulv9K1xZ0wwCAwA8wknxmVzzMjJQagvMARiOpKSVOUdTeM4JSAcQJSQj4ewTEinY6YTie8/OyX5XdOYBb+zHWmA2TrCUB2qB/W14HxK/u9pTV+Mj+gHChPhrPy56x8QeZHjFFkUY5gzshJPsc4YRpHvHzxAq9ub/CdT38ZyTmkENBfHdDv9zgOE6aY0O8O6LodftX3fwu7wxXI9QB5UOgBcmAVv8yEzISYlXVkeRDMWeZoeZ+PAzf3kHV+t2NW/kaWNhbj8nf/P//39TH6itMf+b/+3+Ccx36/RwhBdQLRIYSM+0Pkg8mDRqZVmefKmgRkrcrv9Tfn5nKQdfGIrNErEvR4lPPkJf1x2YG4XteBdP2rrJMmwQw4IvjFcfWO5nOcRciBwZimWOaEjQLrPEoxIUXhLZwzcsq22AEA3otkBBEYjAhGIsboEiIYA2eMOWJIEac4YYgTbtOAMUcc84SIjEgZGYwJjATGyKrPIOnYAh4Ez8BhIhxGwt4F7Mij9wHBOfjIoMzIwwBOCXkYAc5AznVcnPAim9tjiog5YooRmTO86+Ccx+HqCiF06Pd7eB/QdXv4ENDvDwjdDv3+Cr/qm9/E9//At7A/XGO3P6DbHeBDQIwZOTNO4yj6WJrgOGGXRyAn5OmEGCPGcUScJkzDiGmaEKcJMYrsCSbHMrcTB1TmjM6rokvIOs05lbkWQlAdjQAwYorNGj7XoSpPqO+mI53zjkr36W738dT/x//rYV3k0SBjGAa9kAoqBRZwHnKzpIMoi4GYkeIEZuCkwiVz1oGIAABva5sARlJAkXTxmtCTgfIeoAycTrcgArquh3MOIcgtEICsCnrOSZluggy0DnJKIFU4HAG73Q6d93hvt8eu7+BcK9
50gCEMqtwiWEFFlsnCABNjApBNdKqsItVhRWBnFfEEUnkg6i4hZUZOCTHKuMA7TJxxGiacOOGYJ9wpyDjGATFFBHL
"text/plain": [
"<Figure size 1000x1000 with 9 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\r",
"\u001b[1m1875/1875\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.9820 - loss: 0.0615 - val_accuracy: 0.9797 - val_loss: 0.0668\n"
]
},
{
"data": {
"text/plain": [
"<keras.src.callbacks.history.History at 0x7f0f06a4f5b0>"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"batch_size = 58\n",
"img_height = 120\n",
"img_width = 120\n",
"\n",
"\n",
"train_ds = tf.keras.utils.image_dataset_from_directory(\n",
" \"images/\",\n",
" #color_mode='grayscale',\n",
" validation_split=0.2,\n",
" subset=\"training\",\n",
" seed=123,\n",
" image_size=(img_height, img_width),\n",
" batch_size=batch_size)\n",
"\n",
"val_ds = tf.keras.utils.image_dataset_from_directory(\n",
" \"images/\",\n",
" #color_mode='grayscale',\n",
" validation_split=0.2,\n",
" subset=\"validation\",\n",
" seed=123,\n",
" image_size=(img_height, img_width),\n",
" batch_size=batch_size)\n",
"\n",
"\n",
"\n",
"# print info on the classes in the dataset\n",
"class_names = train_ds.class_names\n",
"num_classes = len(class_names)\n",
"print(\"number of classes:\", num_classes)\n",
"\n",
"plt.figure(figsize=(10, 10))\n",
"for images, labels in train_ds.take(3):\n",
" for i in range(9):\n",
" ax = plt.subplot(3, 3, i + 1)\n",
" plt.imshow(images[i].numpy().astype(\"uint8\"))\n",
" plt.title(class_names[labels[i]])\n",
" plt.axis(\"off\")\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-09-14 19:26:56.822115: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence\n"
]
}
],
"source": [
"image_list = [] # Initialize an empty list to store the images\n",
"label_list = [] # Initialize an empty list to store the labels\n",
"\n",
"for images, labels in val_ds: # Take the first batch\n",
"\n",
" for i in range(len(images)):\n",
" image_list.append(images[i].numpy()) # Convert to NumPy and store in the list\n",
" label_list.append(labels[i].numpy()) # Convert to NumPy and store in the list\n",
"\n",
"# Convert the list of NumPy arrays into a single NumPy array\n",
"\n",
"image_array = np.array(image_list)\n",
"test_labels = np.array(label_list)\n",
"\n",
"# Now apply astype and normalization\n",
"test_images = image_array.astype(np.float32) / 255.0\n",
"\n",
"\n",
"image_list = [] # Initialize an empty list to store the images\n",
"label_list = [] # Initialize an empty list to store the labels\n",
"\n",
"for images, labels in train_ds: # Take the first batch\n",
" for i in range(len(images)):\n",
" image_list.append(images[i].numpy()) # Convert to NumPy and store in the list\n",
" label_list.append(labels[i].numpy()) # Convert to NumPy and store in the list\n",
"\n",
"# Convert the list of NumPy arrays into a single NumPy array\n",
"\n",
"image_array = np.array(image_list)\n",
"train_labels = np.array(label_list)\n",
"\n",
"# Now apply astype and normalization\n",
"train_images = image_array.astype(np.float32) / 255.0"
]
},
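{
"cell_type": "markdown",
"metadata": {},
"source": [
"With `train_images` available as a NumPy array, a representative dataset generator for the quantizer's calibration step can be sketched as follows (a minimal example; the sample count of 100 is an arbitrary choice):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: yield individual training samples so the converter can calibrate\n",
"# activation ranges during full-integer quantization.\n",
"def representative_dataset():\n",
"    for image in train_images[:100]:\n",
"        # The converter expects a batch dimension and float32 data.\n",
"        yield [np.expand_dims(image, axis=0).astype(np.float32)]"
]
},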
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.2882355 0.99411756\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-09-14 19:26:59.222836: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence\n"
]
}
],
"source": [
"# train_images and train_labels were already built in the previous cell,\n",
"# so here we just inspect the pixel value range after normalization.\n",
"\n",
"first_image = train_images[0]\n",
"print(np.min(first_image), np.max(first_image))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this step we apply some data augmentation so that the model is less likely to overfit."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/brickman/miniconda3/envs/openmv_train/lib/python3.10/site-packages/keras/src/layers/preprocessing/tf_data_layer.py:19: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.\n",
" super().__init__(**kwargs)\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAxkAAAMWCAYAAACdtUsqAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8hTgPZAAAACXBIWXMAAA9hAAAPYQGoP6dpAAEAAElEQVR4nOz9zZLsuLIuiH0OgIyIXFV7n9u3pYFkJjOZ3kEzzfQUPWjrt9BYI5lJryHTu+g19GOt233P3vWzVmaQBOAauDv+SEZGrl3n3L3qJqpiMZJBgiDgcPfP3eEgZmZ8ls/yWT7LZ/ksn+WzfJbP8lk+yx9U3H/pBnyWz/JZPstn+Syf5bN8ls/yWf5c5RNkfJbP8lk+y2f5LJ/ls3yWz/JZ/tDyCTI+y2f5LJ/ls3yWz/JZPstn+Sx/aPkEGZ/ls3yWz/JZPstn+Syf5bN8lj+0fIKMz/JZPstn+Syf5bN8ls/yWT7LH1o+QcZn+Syf5bN8ls/yWT7LZ/ksn+UPLZ8g47N8ls/yWT7LZ/ksn+WzfJbP8oeWT5DxWT7LZ/ksn+WzfJbP8lk+y2f5Q8snyPgsn+WzfJbP8lk+y2f5LJ/ls/yhJTx74f/9/3kHMxATADCYAecI3hMoR1COyPdXpOUNy+9/x3b/hu31d6R1Rd4WICc4MAgAQAARAAIzwIAe5T8pDIBBYIAA15wvdZRj+7ErqdQi7a2fnFN3tKuICEQE73352zZEt9+I7Ll9cc6V66zIM+rbdIUZWdvTFiKCc65/Hlm92m/UVsPdvf0jxvfOyNag4ZlSe9+zUrW2MWcwZz0mGa/mHWLO3d8ZDHAG5wwgAzlJL1h/66DrI2Sk7bRSCsO1rQEhgZABpK6d4/hDqKZQCgA47TZP9jeDiOUIljbac2igLfJydL5cYx1UnuE84AIQLqAwwV1u8GHG7aefME0zri8v8MFjulzgvIcPE8h5UAhgcvJxHuwcyAfAORAJHRQaThkA47//3//Lbgx/lPJ//b/9P6S3k/R3O7+s2LmjuTF+xmtHmh/L0Twdy6P72/nZztNyL9CwpuP2nbX36Fnd0c4bX8wJ4Iys8yaxzJuktIQww4UJfr7CTxPCfJHPNGk/EHKKyClhW+7IccPbt98RtxX3r78jxQ3x7RXECS5HODA8NbOt68I6J4Yz3Twc32J3G1ceYfxnLGf0cXRNOz513AKcc/A+gFyQv4nKfCv8lgjOV5lARHB6JK2rfQYRAa6XE8JbCfKTO2x3oaNyryvddMjTpXcAALnpC6MSR6Tv0/c4D539P/wP/8fT/vtnLv+X/9P/ufu759Y61ge84nC88JiO7PqjYv2Zcpb5l3ue9h6dPvv8o3tGuh5/a68vc5W01SePa/Wm5on1SHYkUOEC++vK8zvd5aBO7v/u5W7DP3bttetdmS8mC5jtZeV3AKCufwZ+bVNiqJ9c5QGkHWi8vbun0WFqhfp81z8vZ+FtudBJvbFndbV/7FS9p9XfWh2R6nuDwEc02+i3Rc9t+E7lQ65c247fSFcA8N//d/+H/XOG8jTIyHDCyNImD2EG2CO7AKIAHzzowvDOYeYENwUwZ5D3iMjgREDcIENgip0DkSqYBGQdRS6SWkAHgZGl6woTsU4+/hyXfWf1dRHtJ+yZUpJVqTYGUYSQe845dMZWrM7dxDTaVSLuiO2BwvRcY1jrqBOkTCQ7EgFMADllWgzSCVMmgjVRWqiqu0ysQ8ZGMmNZb2ICmI06CEURkYv176zuN6ONY+bI3V29UkRkTzCAyd1dpT5yKvAVZBTHH1f8obXQdIWbLvDXL3DTBfPLF4RpxstPPyGEgGmeRTHRib1ZmwxzORU2LHPL2Gx5K24B+A9euAqA/UzgIjDGOdDOxXYu27Xt8azknA+ZZteCAai0Solzbm8EaMtgBBif1dY5tvM9ZaNCWgGb5DIMHzMDmUjB6gR4Dzdf4cOMcJWjny+gEAAfwM7BEcGHCZ4BP8
3IKYFcQNxWEDziumBhIKcVec3IOSHlJPcRwYEGV3g7l444cctjdO7SwNoO5uNZORvrM8BWlRIq87rSoc04rnMa2sAzMISB/oZ3JvvvjFZwRoMMLsYpruzdvigQ44G3MgBylX8ZWze+UQxqH1Bq/9kL6atQ+WccBf39QDl/tpzNS9YxMIMbjGc8ce8/Uh6Bp7PSyuQHNR8fW32pohWM/VyVXaH9ZlAOwPLwzE4nO3uPtq5+PhcwUCq2c20f7dtONFCMAQNH7WUDT2/vocLH5MXOdUei3IEgAQ6khtjHhZxTGnMY+eKB+vawGE3mnAVgsPZjA4jeq+bIWH1WngYZps7JPK7WJminCS4gwDlVzrwiIteg3qYOrVVQMYNZX5AB0MlkKEwW/bH8vvtycNFhxVp1PyHOji3jGBWGQ4DwjxYlnkLH9qzS+ufLe0rYYV3lQYYGmrln9SlgOKqvnOWjs7T/6agWI7UBavJ43e5c/cWAxv4RvXJUqzpjrgfXFSXGgZyHcx7Oe1FKva+WT/3Y9LGxHeV/Heuj/vhxS7EwmyJ3MFcezaF2Dj4CGuPzxvsftc2+jx6G07nzoM5HQGasr/27v4+H7w29kirqRACE38J7kA8gP4FCgPOzfHcBRB5M6sVQJVjwdICDQ5gvIHJI1w3OO+S0Im2ELW3ynsmMB0KjGSqXQWJ4anhEfbF3iFgNDN2cPmcpD/v06LpeITOQ6ArAoGOm86i5HSiqdNe31/624xG9d/RqyiqZAotGgaGuH6uCy60Yht4+GGAakMEN7fyg5ZScbEqcDOdR3x99f7YUr5L1a2OZ+7fo4TN9xL6fe1xaeu3PN7UfH1u5R9TMlXqk8jNVIdu1r20XVf1h1NfsWdz+aCC7rWsAD/ZRBZ+VofSeCzpoz2AYsHrsGuofe6aa9LyuiQex79S8E/XjV0DHodJCtXo240cZTejpjs8c8tLSt7UUMDzoHcZ3Sy81atr3zpenQUZTezOZxh9tMJ0oVUaAx5rd7k5+8PcfWR6DgP0EPiqtAnLqfbBrv6uVj8u/u5h454GjCr4j8j+kfEwZ+EMeV5jm+HYHCJfE1ejKx0tIlAFtAxh6R5nk1PyNg6rPzv2ApVPCTuCeKUnluncU9UflSHF/Zm4f/T0CG+qE0nPlDACdeW/6UkP6rIcYQHaETA7ZBYACKFzh/AR/ucFPM8L1Bc4HCcPTNucqCuXjJ5ADLjcPviTM8wVxWxCCx3p/BcBIcUVeAOaMrKZ28wOIAdBVsd2g6CqsTOWVp1bS762BNrtG28WZJ2D0Do1jZJ4n8TbrvCQPR+5dWrBS5qx6GXO2rmyNLD05FOCh1+ZHMmL4LbdyVjtlVJYKyCh1NKzK5a7NABqL6Z+EmTwoRwBzLI88FO+dHw19Bvb+Lfp2tIyfeXS/s/aD70pE1P898uvu2Uafh8Di2XaMchbDufZdm48p89YvQEHdO4BB+7q4Pd/2465JlUHtWtkAI2qua28g5wQsOAnpd9AQKr3wofFXwZNTj0Z7aQeUxvvGd0L15tuRtC3choCOFXGVyWfhvWflaZBBkLAW1o6SXiKNdWcQZ3CK4LghbQvSKjG+OUZwTuCcQbsJKIw5l05GYaz2QrYmg0br2EELaznr9rNOGplHL6xyzmVwRzel1bOPz66K6KFKSsdW0SOGSAPh76f7P1gOGENvFQMK3FXlwaw3mRkZIhSrWrH/lOp3jJirdsHYYfX2+aIayWcPSVHuGPv7/enQ3m9fGWqnhawBUStxex2LBw4AkCNyXBHXuygTziHnBO8d8jQBmMW7oczYU417FEt0ZXhmkSi8VE/+mbwarUWs9v4QqnJQnvIsDL+PFr/2+FRbHygqXdv0X1H23q9/rPPIU9rWLL2S7SkVZJADw8maIDfBTxc4PyFMFwmHsvU/zovHo7GsdWvgCCCL150mMDGmyxUgIKUNaQ3YQMLn06ZgQ5VusK530hE0BsLt+3D53YCGvTd3XK1eURVniy
3f8yoTlmMM/OjB6GK4XRNmcaQ1HJQjutx5cc/IUa97Zu3NkRAXPsEgqnKohEvpc0k7jB2BMxXewdZObq7/gYHGI4X
"text/plain": [
"<Figure size 1000x1000 with 9 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Augment training data\n",
"\n",
"data_augmentation = keras.Sequential(\n",
" [\n",
" layers.RandomFlip(\"horizontal\", input_shape=(img_height, img_width, 3)),\n",
" layers.RandomRotation(0.4),\n",
" layers.RandomZoom(0.1),\n",
" ]\n",
")\n",
"\n",
"\n",
"# Visualize Change\n",
"plt.figure(figsize=(10, 10))\n",
"for images, _ in train_ds.take(1):\n",
" for i in range(9):\n",
" augmented_images = data_augmentation(images)\n",
" ax = plt.subplot(3, 3, i + 1)\n",
" plt.imshow(augmented_images[0].numpy().astype(\"uint8\"))\n",
" plt.axis(\"off\")"
]
},
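{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that Keras preprocessing layers such as `RandomFlip` and `RandomRotation` are only active during training; at inference time they pass the input through unchanged. A quick way to see this, reusing the `data_augmentation` pipeline defined above:\n",
"\n",
"```python\n",
"aug = data_augmentation(images, training=True)            # randomly transformed\n",
"passthrough = data_augmentation(images, training=False)   # unchanged\n",
"```"
]
},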
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's set up our model architecture and decide whether to use transfer learning or build the model from scratch. Transfer learning adds some overhead for the edge device, but it works better with limited data, making the model faster to build and train."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Choose whether to use transfer learning by setting the `transfer_learning` variable to `True` or `False`."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/tmp/ipykernel_168090/819969002.py:5: UserWarning: `input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.\n",
" base_model = tf.keras.applications.MobileNetV2(input_shape=(img_height, img_width, 3),\n"
]
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">Model: \"sequential_1\"</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1mModel: \"sequential_1\"\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
"┃<span style=\"font-weight: bold\"> Layer (type) </span>┃<span style=\"font-weight: bold\"> Output Shape </span>┃<span style=\"font-weight: bold\"> Param # </span>┃\n",
"┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
"│ sequential (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">Sequential</span>) │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">120</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">120</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">3</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ mobilenetv2_1.00_224 │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">4</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">4</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">1280</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">2,257,984</span> │\n",
"│ (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">Functional</span>) │ │ │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ global_average_pooling2d │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">1280</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │\n",
"│ (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">GlobalAveragePooling2D</span>) │ │ │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dense (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">Dense</span>) │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">128</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">163,968</span> │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dropout (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">Dropout</span>) │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">128</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ outputs (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">Dense</span>) │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">5</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">645</span> │\n",
"└─────────────────────────────────┴────────────────────────┴───────────────┘\n",
"</pre>\n"
],
"text/plain": [
"┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
"┃\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m Param #\u001b[0m\u001b[1m \u001b[0m┃\n",
"┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
"│ sequential (\u001b[38;5;33mSequential\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m120\u001b[0m, \u001b[38;5;34m120\u001b[0m, \u001b[38;5;34m3\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ mobilenetv2_1.00_224 │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m4\u001b[0m, \u001b[38;5;34m4\u001b[0m, \u001b[38;5;34m1280\u001b[0m) │ \u001b[38;5;34m2,257,984\u001b[0m │\n",
"│ (\u001b[38;5;33mFunctional\u001b[0m) │ │ │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ global_average_pooling2d │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m1280\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"│ (\u001b[38;5;33mGlobalAveragePooling2D\u001b[0m) │ │ │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dense (\u001b[38;5;33mDense\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m163,968\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dropout (\u001b[38;5;33mDropout\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ outputs (\u001b[38;5;33mDense\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m5\u001b[0m) │ \u001b[38;5;34m645\u001b[0m │\n",
"└─────────────────────────────────┴────────────────────────┴───────────────┘\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Total params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">2,422,597</span> (9.24 MB)\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m Total params: \u001b[0m\u001b[38;5;34m2,422,597\u001b[0m (9.24 MB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Trainable params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">164,613</span> (643.02 KB)\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m164,613\u001b[0m (643.02 KB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Non-trainable params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">2,257,984</span> (8.61 MB)\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m2,257,984\u001b[0m (8.61 MB)\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"\n",
"transfer_learning = True\n",
"\n",
"if transfer_learning:\n",
" # Load the pre-trained MobileNetV2 model (excluding the top classification layer)\n",
" base_model = tf.keras.applications.MobileNetV2(input_shape=(img_height, img_width, 3),\n",
" include_top=False, # Do not include the final classification layer\n",
" weights='imagenet') # Use weights pre-trained on ImageNet\n",
" \n",
" base_model.trainable = False # Freeze the base model so its weights won't be updated during training\n",
" \n",
" # Create the model\n",
" model = Sequential([\n",
" data_augmentation,\n",
" base_model, # Add the pre-trained MobileNetV2\n",
" layers.GlobalAveragePooling2D(), # Use global average pooling instead of flattening\n",
" layers.Dense(128, activation='relu'), # Add a fully connected layer\n",
" layers.Dropout(0.2), # Dropout to reduce overfitting\n",
" layers.Dense(num_classes, name=\"outputs\", activation='softmax') # Final classification layer, one output per class\n",
" ])\n",
"\n",
"else:\n",
" model = Sequential([\n",
" layers.InputLayer(input_shape=(img_height, img_width, 3), batch_size=1), # Proper InputLayer with batch_size=1\n",
" data_augmentation,\n",
" layers.Conv2D(32, 3, padding='same', activation='relu'),\n",
" layers.MaxPooling2D(),\n",
" layers.Conv2D(64, 3, padding='same', activation='relu'),\n",
" layers.MaxPooling2D(),\n",
" layers.Conv2D(128, 3, padding='same', activation='relu'),\n",
" layers.MaxPooling2D(),\n",
" layers.Dropout(0.2),\n",
" layers.Flatten(),\n",
" layers.Dense(24, activation='relu'),\n",
" layers.Dense(48, activation='relu'),\n",
" layers.Dense(num_classes, name=\"outputs\", activation='softmax')\n",
" ])\n",
"\n",
"\n",
"# compile the model\n",
"model.compile(optimizer='adam',\n",
" loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), # the model outputs softmax probabilities, not logits\n",
" metrics=['accuracy'])\n",
"\n",
"\n",
"# give a nice summary of the model architecture\n",
"model.summary()"
]
},
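{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the frozen-base model plateaus, a common next step is fine-tuning: unfreeze the top of `base_model` and retrain with a much lower learning rate so the pre-trained weights are only nudged. A minimal sketch (the number of layers to unfreeze is an illustrative choice, not a tuned value):\n",
"\n",
"```python\n",
"base_model.trainable = True\n",
"for layer in base_model.layers[:-20]:  # keep the early layers frozen\n",
"    layer.trainable = False\n",
"\n",
"model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),\n",
"              loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n",
"              metrics=['accuracy'])\n",
"```"
]
},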
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Training Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have set up our model's architecture, let's train it."
]
},
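{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because the validation metrics fluctuate from epoch to epoch, it can help to stop training automatically once the validation loss stops improving. A sketch using a standard Keras callback (the `patience` value is an illustrative choice):\n",
"\n",
"```python\n",
"early_stop = tf.keras.callbacks.EarlyStopping(\n",
"    monitor='val_loss', patience=10, restore_best_weights=True)\n",
"\n",
"history = model.fit(train_ds, validation_data=val_ds,\n",
"                    epochs=100, callbacks=[early_stop])\n",
"```"
]
},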
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 1/100\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/brickman/miniconda3/envs/openmv_train/lib/python3.10/site-packages/keras/src/backend/tensorflow/nn.py:635: UserWarning: \"`sparse_categorical_crossentropy` received `from_logits=True`, but the `output` argument was produced by a Softmax activation and thus does not represent logits. Was this intended?\n",
" output, from_logits = _get_logits(\n",
"2024-09-14 19:27:10.386696: I external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:531] Loaded cuDNN version 8907\n",
"W0000 00:00:1726356430.467248 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.485512 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.486658 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.495360 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.497357 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.499308 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.500620 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.502785 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.504102 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.521907 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.630649 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.632200 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.634857 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.636585 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.658563 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.660279 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.662139 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.665014 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.667685 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.670758 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.673827 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.676793 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.679584 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.681518 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.683228 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.685087 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.688047 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.690277 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.693367 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356430.695685 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 49ms/step - accuracy: 0.3849 - loss: 1.9038"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"W0000 00:00:1726356431.021783 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.022984 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.024083 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.025588 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.026889 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.028113 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.029282 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.030394 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.031506 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.032620 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.033733 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.035066 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.036302 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.037617 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.038987 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.040387 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.041732 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.043304 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.044593 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.046044 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.047401 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.048729 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.050096 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.051522 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.053092 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.054510 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.056059 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.057472 168193 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 420ms/step - accuracy: 0.3976 - loss: 1.8413 - val_accuracy: 0.5263 - val_loss: 1.2608\n",
"Epoch 2/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 0.7404 - loss: 0.6882 - val_accuracy: 0.6316 - val_loss: 0.9502\n",
"Epoch 3/100\n",
"\u001b[1m1/3\u001b[0m \u001b[32m━━━━━━\u001b[0m\u001b[37m━━━━━━━━━━━━━━\u001b[0m \u001b[1m0s\u001b[0m 32ms/step - accuracy: 0.8438 - loss: 0.3935"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"W0000 00:00:1726356431.761672 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.762804 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.763902 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.765542 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.766898 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.768073 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.769267 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.770395 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.771519 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.772649 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.773785 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.775234 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.776535 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.777962 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.779436 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.780947 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.782382 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.784152 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.785550 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.787161 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.788645 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.790093 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.791588 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.793164 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.794950 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.796513 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.798257 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n",
"W0000 00:00:1726356431.799823 168190 gpu_timer.cc:114] Skipping the delay kernel, measurement accuracy will be reduced\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 0.8464 - loss: 0.4116 - val_accuracy: 0.7368 - val_loss: 0.7621\n",
"Epoch 4/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 0.9317 - loss: 0.2455 - val_accuracy: 0.6842 - val_loss: 0.6344\n",
"Epoch 5/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 0.8759 - loss: 0.3613 - val_accuracy: 0.6842 - val_loss: 0.6439\n",
"Epoch 6/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 33ms/step - accuracy: 0.9651 - loss: 0.1515 - val_accuracy: 0.6842 - val_loss: 0.6854\n",
"Epoch 7/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 0.9214 - loss: 0.1634 - val_accuracy: 0.6842 - val_loss: 0.5383\n",
"Epoch 8/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9445 - loss: 0.2176 - val_accuracy: 0.7895 - val_loss: 0.4033\n",
"Epoch 9/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 33ms/step - accuracy: 0.9612 - loss: 0.1978 - val_accuracy: 0.7895 - val_loss: 0.3957\n",
"Epoch 10/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 37ms/step - accuracy: 0.9897 - loss: 0.0959 - val_accuracy: 0.7368 - val_loss: 0.4482\n",
"Epoch 11/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 0.9573 - loss: 0.1609 - val_accuracy: 0.7368 - val_loss: 0.4644\n",
"Epoch 12/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 37ms/step - accuracy: 0.9897 - loss: 0.0862 - val_accuracy: 0.7895 - val_loss: 0.3869\n",
"Epoch 13/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9833 - loss: 0.0579 - val_accuracy: 0.8421 - val_loss: 0.3883\n",
"Epoch 14/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9819 - loss: 0.0622 - val_accuracy: 0.8421 - val_loss: 0.4512\n",
"Epoch 15/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 0.9897 - loss: 0.0609 - val_accuracy: 0.8421 - val_loss: 0.4793\n",
"Epoch 16/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 34ms/step - accuracy: 0.9612 - loss: 0.0720 - val_accuracy: 0.8421 - val_loss: 0.4097\n",
"Epoch 17/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9833 - loss: 0.0637 - val_accuracy: 0.7895 - val_loss: 0.3420\n",
"Epoch 18/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 0.9612 - loss: 0.1100 - val_accuracy: 0.8421 - val_loss: 0.3505\n",
"Epoch 19/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9612 - loss: 0.0986 - val_accuracy: 0.8421 - val_loss: 0.3895\n",
"Epoch 20/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 37ms/step - accuracy: 0.9833 - loss: 0.0691 - val_accuracy: 0.8421 - val_loss: 0.4282\n",
"Epoch 21/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9651 - loss: 0.1071 - val_accuracy: 0.8421 - val_loss: 0.3757\n",
"Epoch 22/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9897 - loss: 0.0423 - val_accuracy: 0.7895 - val_loss: 0.2878\n",
"Epoch 23/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 0.9794 - loss: 0.0693 - val_accuracy: 0.7895 - val_loss: 0.2836\n",
"Epoch 24/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 1.0000 - loss: 0.0236 - val_accuracy: 0.7895 - val_loss: 0.3023\n",
"Epoch 25/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 38ms/step - accuracy: 0.9342 - loss: 0.1475 - val_accuracy: 0.8947 - val_loss: 0.2678\n",
"Epoch 26/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 37ms/step - accuracy: 0.9897 - loss: 0.0524 - val_accuracy: 0.8947 - val_loss: 0.3377\n",
"Epoch 27/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 33ms/step - accuracy: 0.9651 - loss: 0.1025 - val_accuracy: 0.8947 - val_loss: 0.2798\n",
"Epoch 28/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 37ms/step - accuracy: 0.9819 - loss: 0.0805 - val_accuracy: 0.8421 - val_loss: 0.2067\n",
"Epoch 29/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 0.9897 - loss: 0.0397 - val_accuracy: 0.8421 - val_loss: 0.3344\n",
"Epoch 30/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 0.9612 - loss: 0.0903 - val_accuracy: 0.7895 - val_loss: 0.3593\n",
"Epoch 31/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 1.0000 - loss: 0.0145 - val_accuracy: 0.8947 - val_loss: 0.3727\n",
"Epoch 32/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9897 - loss: 0.0540 - val_accuracy: 0.8947 - val_loss: 0.3414\n",
"Epoch 33/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9716 - loss: 0.0738 - val_accuracy: 0.8421 - val_loss: 0.3185\n",
"Epoch 34/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 33ms/step - accuracy: 0.9534 - loss: 0.0842 - val_accuracy: 0.8421 - val_loss: 0.3502\n",
"Epoch 35/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9936 - loss: 0.0339 - val_accuracy: 0.8421 - val_loss: 0.3274\n",
"Epoch 36/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 1.0000 - loss: 0.0155 - val_accuracy: 0.8421 - val_loss: 0.3008\n",
"Epoch 37/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 32ms/step - accuracy: 1.0000 - loss: 0.0287 - val_accuracy: 0.8421 - val_loss: 0.2620\n",
"Epoch 38/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9691 - loss: 0.0764 - val_accuracy: 0.8947 - val_loss: 0.2829\n",
"Epoch 39/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 1.0000 - loss: 0.0433 - val_accuracy: 0.8947 - val_loss: 0.3697\n",
"Epoch 40/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9897 - loss: 0.0567 - val_accuracy: 0.8947 - val_loss: 0.3053\n",
"Epoch 41/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 0.9730 - loss: 0.0667 - val_accuracy: 0.8947 - val_loss: 0.2928\n",
"Epoch 42/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9936 - loss: 0.0212 - val_accuracy: 0.7895 - val_loss: 0.4014\n",
"Epoch 43/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 32ms/step - accuracy: 0.9573 - loss: 0.0836 - val_accuracy: 0.7895 - val_loss: 0.5016\n",
"Epoch 44/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 0.9353 - loss: 0.1166 - val_accuracy: 0.8947 - val_loss: 0.3694\n",
"Epoch 45/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 37ms/step - accuracy: 1.0000 - loss: 0.0143 - val_accuracy: 0.8947 - val_loss: 0.4014\n",
"Epoch 46/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9936 - loss: 0.0260 - val_accuracy: 0.8947 - val_loss: 0.4448\n",
"Epoch 47/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9637 - loss: 0.0920 - val_accuracy: 0.8947 - val_loss: 0.4216\n",
"Epoch 48/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 32ms/step - accuracy: 0.9872 - loss: 0.0301 - val_accuracy: 0.8947 - val_loss: 0.4062\n",
"Epoch 49/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9897 - loss: 0.0180 - val_accuracy: 0.8947 - val_loss: 0.3476\n",
"Epoch 50/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 1.0000 - loss: 0.0244 - val_accuracy: 0.8947 - val_loss: 0.3082\n",
"Epoch 51/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 0.9819 - loss: 0.0436 - val_accuracy: 0.8947 - val_loss: 0.3227\n",
"Epoch 52/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 1.0000 - loss: 0.0109 - val_accuracy: 0.8947 - val_loss: 0.3390\n",
"Epoch 53/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 1.0000 - loss: 0.0081 - val_accuracy: 0.8947 - val_loss: 0.3357\n",
"Epoch 54/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 32ms/step - accuracy: 1.0000 - loss: 0.0131 - val_accuracy: 0.8947 - val_loss: 0.3251\n",
"Epoch 55/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 1.0000 - loss: 0.0128 - val_accuracy: 0.8947 - val_loss: 0.3439\n",
"Epoch 56/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 1.0000 - loss: 0.0110 - val_accuracy: 0.8947 - val_loss: 0.3746\n",
"Epoch 57/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9819 - loss: 0.0406 - val_accuracy: 0.8947 - val_loss: 0.4276\n",
"Epoch 58/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9651 - loss: 0.0715 - val_accuracy: 0.8947 - val_loss: 0.2093\n",
"Epoch 59/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 28ms/step - accuracy: 1.0000 - loss: 0.0119 - val_accuracy: 0.9474 - val_loss: 0.1279\n",
"Epoch 60/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 29ms/step - accuracy: 0.9637 - loss: 0.0817 - val_accuracy: 0.8947 - val_loss: 0.2241\n",
"Epoch 61/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 29ms/step - accuracy: 0.9509 - loss: 0.1228 - val_accuracy: 0.9474 - val_loss: 0.0986\n",
"Epoch 62/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9819 - loss: 0.0417 - val_accuracy: 0.8947 - val_loss: 0.3678\n",
"Epoch 63/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 32ms/step - accuracy: 1.0000 - loss: 0.0122 - val_accuracy: 0.8421 - val_loss: 0.7185\n",
"Epoch 64/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 33ms/step - accuracy: 0.9936 - loss: 0.0259 - val_accuracy: 0.7895 - val_loss: 0.7323\n",
"Epoch 65/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 34ms/step - accuracy: 0.9353 - loss: 0.1039 - val_accuracy: 0.8947 - val_loss: 0.2433\n",
"Epoch 66/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 29ms/step - accuracy: 0.9716 - loss: 0.0363 - val_accuracy: 1.0000 - val_loss: 0.0542\n",
"Epoch 67/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9819 - loss: 0.0499 - val_accuracy: 1.0000 - val_loss: 0.0603\n",
"Epoch 68/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 29ms/step - accuracy: 0.9872 - loss: 0.0598 - val_accuracy: 1.0000 - val_loss: 0.0431\n",
"Epoch 69/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9637 - loss: 0.0670 - val_accuracy: 0.8947 - val_loss: 0.3316\n",
"Epoch 70/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9833 - loss: 0.0394 - val_accuracy: 0.7368 - val_loss: 0.7130\n",
"Epoch 71/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 0.9612 - loss: 0.0893 - val_accuracy: 0.7895 - val_loss: 0.4985\n",
"Epoch 72/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 0.9612 - loss: 0.0572 - val_accuracy: 0.8947 - val_loss: 0.2079\n",
"Epoch 73/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 32ms/step - accuracy: 0.9936 - loss: 0.0202 - val_accuracy: 0.8947 - val_loss: 0.1120\n",
"Epoch 74/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 0.9819 - loss: 0.0316 - val_accuracy: 0.8947 - val_loss: 0.1634\n",
"Epoch 75/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 1.0000 - loss: 0.0128 - val_accuracy: 0.9474 - val_loss: 0.1917\n",
"Epoch 76/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 1.0000 - loss: 0.0289 - val_accuracy: 0.9474 - val_loss: 0.1438\n",
"Epoch 77/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 34ms/step - accuracy: 1.0000 - loss: 0.0136 - val_accuracy: 0.9474 - val_loss: 0.1096\n",
"Epoch 78/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 0.9819 - loss: 0.0513 - val_accuracy: 1.0000 - val_loss: 0.0786\n",
"Epoch 79/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 0.9819 - loss: 0.0270 - val_accuracy: 0.9474 - val_loss: 0.0851\n",
"Epoch 80/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 29ms/step - accuracy: 0.9819 - loss: 0.0416 - val_accuracy: 1.0000 - val_loss: 0.0736\n",
"Epoch 81/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 33ms/step - accuracy: 1.0000 - loss: 0.0119 - val_accuracy: 1.0000 - val_loss: 0.0757\n",
"Epoch 82/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 29ms/step - accuracy: 0.9897 - loss: 0.0172 - val_accuracy: 0.9474 - val_loss: 0.0997\n",
"Epoch 83/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 36ms/step - accuracy: 1.0000 - loss: 0.0073 - val_accuracy: 0.8947 - val_loss: 0.1681\n",
"Epoch 84/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9897 - loss: 0.0184 - val_accuracy: 0.8947 - val_loss: 0.2357\n",
"Epoch 85/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 1.0000 - loss: 0.0187 - val_accuracy: 0.8947 - val_loss: 0.2395\n",
"Epoch 86/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 1.0000 - loss: 0.0045 - val_accuracy: 0.8947 - val_loss: 0.2212\n",
"Epoch 87/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 32ms/step - accuracy: 0.9897 - loss: 0.0637 - val_accuracy: 0.8947 - val_loss: 0.1375\n",
"Epoch 88/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 34ms/step - accuracy: 1.0000 - loss: 0.0052 - val_accuracy: 0.9474 - val_loss: 0.0707\n",
"Epoch 89/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 32ms/step - accuracy: 1.0000 - loss: 0.0045 - val_accuracy: 1.0000 - val_loss: 0.0522\n",
"Epoch 90/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 1.0000 - loss: 0.0074 - val_accuracy: 1.0000 - val_loss: 0.0559\n",
"Epoch 91/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9819 - loss: 0.0221 - val_accuracy: 0.9474 - val_loss: 0.0867\n",
"Epoch 92/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 0.9819 - loss: 0.0324 - val_accuracy: 0.9474 - val_loss: 0.1309\n",
"Epoch 93/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 34ms/step - accuracy: 1.0000 - loss: 0.0075 - val_accuracy: 0.8947 - val_loss: 0.1963\n",
"Epoch 94/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 1.0000 - loss: 0.0144 - val_accuracy: 0.8947 - val_loss: 0.2724\n",
"Epoch 95/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 35ms/step - accuracy: 1.0000 - loss: 0.0075 - val_accuracy: 0.8947 - val_loss: 0.3144\n",
"Epoch 96/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 33ms/step - accuracy: 1.0000 - loss: 0.0081 - val_accuracy: 0.8947 - val_loss: 0.3045\n",
"Epoch 97/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 1.0000 - loss: 0.0094 - val_accuracy: 0.8947 - val_loss: 0.2763\n",
"Epoch 98/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 34ms/step - accuracy: 1.0000 - loss: 0.0055 - val_accuracy: 0.8947 - val_loss: 0.2239\n",
"Epoch 99/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 30ms/step - accuracy: 1.0000 - loss: 0.0071 - val_accuracy: 0.8947 - val_loss: 0.1649\n",
"Epoch 100/100\n",
"\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 31ms/step - accuracy: 1.0000 - loss: 0.0057 - val_accuracy: 0.8947 - val_loss: 0.1196\n"
]
}
],
"source": [
"epochs = 100\n",
"history = model.fit(\n",
" train_images,\n",
" train_labels,\n",
" epochs=epochs,\n",
" validation_data=(test_images, test_labels)\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Visualize training results\n",
"Create plots of the loss and accuracy on the training and validation sets:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAABL4AAAKqCAYAAAA0dZe7AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8hTgPZAAAACXBIWXMAAA9hAAAPYQGoP6dpAAEAAElEQVR4nOzdeXwU9f0/8Nce2U12c4dAEgj3fSOoVatgi0VQqqhYjypQj2pFa9FqrS2i7Vd+rVqp9tDWVlqtByjihSIieODFDXLJfSQhIeTYZJNs9pjfH7MzO3vP3pvs6/l48IBsZmdnN5Ow88r7/f5oBEEQQERERERERERE1M1oU30AREREREREREREicDgi4iIiIiIiIiIuiUGX0RERERERERE1C0x+CIiIiIiIiIiom6JwRcREREREREREXVLDL6IiIiIiIiIiKhbYvBFRERERERERETdEoMvIiIiIiIiIiLqlhh8ERERERERERFRt8Tgi8jH3Llz0b9//6juu2jRImg0mvgeUJo5cuQINBoNli5dmvTH1mg0WLRokfzx0qVLodFocOTIkbD37d+/P+bOnRvX44nlXCEiIqKuj+8bQ+P7Rg++byRKHQZf1GVoNBpVf9avX5/qQ814d911FzQaDQ4cOBB0mwcffBAajQY7duxI4pFFrrq6GosWLcK2bdtSfSgB7dmzBxqNBtnZ2Whqakr14RAREaUFvm/sOvi+MbGk8PHxxx9P9aEQpYw+1QdApNYLL7zg9fF///tfrFmzxu/2ESNGxPQ4//znP+FyuaK6729+8xv86le/iunxu4Prr78eTz/9NF566SUsXLgw4DYvv/wyxowZg7Fjx0b9ODfccAOuueYaGI3GqPcRTnV1NR5++GH0798f48eP9/pcLOdKvLz44osoKytDY2MjXnvtNdx8880pPR4iIqJ0wPeNXQffNxJRojH4oi7jxz/+sdfHX375JdasWeN3u6+2tjaYTCbVj5OVlRXV8QGAXq+HXs9vq7PPPhuDBw/Gyy+/HPANzBdffIHDhw/j//2//xfT4+h0Ouh0upj2EYtYzpV4EAQBL730Eq677jocPnwY//vf/9I2+LJarTCbzak+DCIiyhB839h18H0jESUaWx2pW5kyZQpGjx6NzZs344ILLoDJZMKvf/1rAMCbb76JSy65BBUVFTAajRg0aBB+97vfwel0eu3Dt/9eWR78j3/8A4MGDYLRaMSZZ56JjRs3et030KwGjUaD+fPnY+XKlRg9ejSMRiNGjRqF999/3+/4169fj0mTJiE7OxuDBg3Cs88+q3r+w6efforZs2ejb9++MBqNqKysxC9+8Qu0t7f7Pb/c3FxUVVXh8ssvR25uLkpLS3Hvvff6vRZNTU2YO3cuCgoKUFhYiDlz5qhup7v++uuxd+9ebNmyxe9zL730EjQaDa699lp0dnZi4cKFmDhxIgoKCmA2m3H++edj3bp1YR8j0KwGQRDw+9//Hn369IHJZMKFF16IXbt2+d23oaEB9957L8aMGYPc3Fzk5+dj+vTp2L59u7zN+vXrceaZZwIA5s2bJ7dFSHMqAs1qsFqtuOeee1BZWQmj0Yhhw4bh8ccfhyAIXttFcl4Es2HDBhw5cgTXXHMNrrnmGnzyySc4ceKE33Yulwt//vOfMWbMGGRnZ6O0tBQXX3wxNm3a5LXdiy++iLPOOgsmkwlFRUW44IIL8MEHH3gds3JWhsR3Dob0dfn444/xs5/9DD179kSfPn0AAEePHsXPfvYzDBs2DDk5OSgpKcHs2bMDzttoamrCL37xC/Tv3x9GoxF9+vTBjTfeiPr6erS2tsJsNuPnP/+53/1OnDgBnU6HxYsXq3wliYgoE/F9I983ZtL7xnDq6upw0003oVevXsjOzsa4cePwn//8x2+7V155BRMnTkReXh7y8/MxZswY/PnPf5Y/b7fb8fDDD2PIkCHIzs5GSUkJvvvd72LNmjVxO1aiSP
FXDNTtnD59GtOnT8c111yDH//4x+jVqxcA8T+73NxcLFiwALm5ufjoo4+wcOFCWCwWPPbYY2H3+9JLL6GlpQU//elPodFo8Mc//hFXXHEFDh06FPY3OJ999hlWrFiBn/3sZ8jLy8NTTz2FK6+8EseOHUNJSQkAYOvWrbj44otRXl6Ohx9+GE6nE4888ghKS0tVPe/ly5ejra0Nt99+O0pKSvD111/j6aefxokTJ7B8+XKvbZ1OJ6ZNm4azzz4bjz/+OD788EM88cQTGDRoEG6//XYA4huByy67DJ999hluu+02jBgxAm+88QbmzJmj6niuv/56PPzww3jppZdwxhlneD32smXLcP7556Nv376or6/Hc889h2uvvRa33HILWlpa8K9//QvTpk3D119/7VcmHs7ChQvx+9//HjNmzMCMGTOwZcsW/OAHP0BnZ6fXdocOHcLKlSsxe/ZsDBgwALW1tXj22WcxefJk7N69GxUVFRgxYgQeeeQRLFy4ELfeeivOP/98AMC5554b8LEFQcAPf/hDrFu3DjfddBPGjx+P1atX45e//CWqqqrw5JNPem2v5rwI5X//+x8GDRqEM888E6NHj4bJZMLLL7+MX/7yl17b3XTTTVi6dCmmT5+Om2++GQ6HA59++im+/PJLTJo0CQDw8MMPY9GiRTj33HPxyCOPwGAw4KuvvsJHH32EH/zgB6pff6Wf/exnKC0txcKFC2G1WgEAGzduxOeff45rrrkGffr0wZEjR/D3v/8dU6ZMwe7du+Xfsre2tuL888/Hnj178JOf/ARnnHEG6uvr8dZbb+HEiRMYP348Zs2ahVdffRV/+tOfvH6D+/LLL0MQBFx//fVRHTcREWUOvm/k+8ZMed8YSnt7O6ZMmYIDBw5g/vz5GDBgAJYvX465c+eiqalJ/kXjmjVrcO211+L73/8+/vCHPwAQ581u2LBB3mbRokVYvHgxbr75Zpx11lmwWCzYtGkTtmzZgosuuiim4ySKmkDURd1xxx2C7yk8efJkAYDwzDPP+G3f1tbmd9tPf/pTwWQyCR0dHfJtc+bMEfr16yd/fPjwYQGAUFJSIjQ0NMi3v/nmmwIA4e2335Zve+ihh/yOCYBgMBiEAwcOyLdt375dACA8/fTT8m0zZ84UTCaTUFVVJd+2f/9+Qa/X++0zkEDPb/HixYJGoxGOHj3q9fwACI888ojXthMmTBAmTpwof7xy5UoBgPDHP/5Rvs3hcAjnn3++AEB4/vnnwx7TmWeeKfTp00dwOp3ybe+//74AQHj22WflfdpsNq/7NTY2Cr169RJ+8pOfeN0OQHjooYfkj59//nkBgHD48GFBEAShrq5OMBgMwiWXXCK4XC55u1//+tcCAGHOnDnybR0dHV7HJQji19poNHq9Nhs3bgz6fH3PFek1+/3vf++13VVXXSVoNBqvc0DteRFMZ2enUFJSIjz44IPybdddd50wbtw4r+0++ugjAYBw1113+e1Deo32798vaLVaYdasWX6vifJ19H39Jf369fN6baWvy3e/+13B4XB4bRvoPP3iiy8EAMJ///tf+baFCxcKAIQVK1YEPe7Vq1cLAIT33nvP6/Njx44VJk+e7Hc/IiLKXHzfGP758X2jqLu9b5TOycceeyzoNkuWLBEACC+++KJ8W2dnp3DOOecIubm5gsViEQRBEH7+858L+fn5fu/vlMaNGydccsklIY+JKNnY6kjdjtFoxLx58/xuz8nJkf/d0tKC+vp6nH/++Whra8PevXvD7vdHP/oRioqK5I+l3+IcOnQo7H2nTp2KQYMGyR+PHTsW+fn58n2dTic+/PBDXH755aioqJC3Gzx4MKZPnx52/4D387Naraivr8e5554LQRCwdetWv+1vu+02r4/PP/98r+eyatUq6PV6+Td5gDgb4c4771R1PIA4X+PEiRP45JNP5NteeuklGAwGzJ49W96nwWAAILbkNTQ0wOFwYNKkSQHL3UP58MMP0d
nZiTvvvNOrzP/uu+/229ZoNEKrFX8EOp1OnD59Grm5uRg2bFjEjytZtWoVdDod7rrrLq/b77nnHgiCgPfee8/r9nD
"text/plain": [
"<Figure size 1500x800 with 2 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"acc = history.history['accuracy']\n",
"val_acc = history.history['val_accuracy']\n",
"\n",
"loss = history.history['loss']\n",
"val_loss = history.history['val_loss']\n",
"\n",
"epochs_range = range(epochs)\n",
"\n",
"plt.figure(figsize=(15, 8))\n",
"plt.subplot(1, 2, 1)\n",
"plt.plot(epochs_range, acc, label='Training Accuracy')\n",
"plt.plot(epochs_range, val_acc, label='Validation Accuracy')\n",
"plt.legend(loc='lower right')\n",
"plt.title('Training and Validation Accuracy')\n",
"\n",
"plt.subplot(1, 2, 2)\n",
"plt.plot(epochs_range, loss, label='Training Loss')\n",
"plt.plot(epochs_range, val_loss, label='Validation Loss')\n",
"plt.legend(loc='upper right')\n",
"plt.title('Training and Validation Loss')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KuTEoGFYd8aM"
},
"source": [
"## Convert to a TensorFlow Lite model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FQgTqbvPvxGJ"
},
"source": [
"### Convert using integer-only quantization"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rTe8avZJHMDO"
},
"source": [
"To quantize the variable data (such as model input/output and intermediates between layers), you need to provide a [`RepresentativeDataset`](https://www.tensorflow.org/api_docs/python/tf/lite/RepresentativeDataset). This is a generator function that provides a set of input data that's large enough to represent typical values. It allows the converter to estimate a dynamic range for all the variable data. (The dataset does not need to be unique compared to the training or evaluation dataset.)\n",
"To support multiple inputs, each representative data point is a list and elements in the list are fed to the model according to their indices.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mwR9keYAwArA"
},
"source": [
"To quantize the input and output tensors, and make the converter throw an error if it encounters an operation it cannot quantize, convert the model again with some additional parameters:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "kzjEjcDs3BHa"
},
"outputs": [],
"source": [
"def representative_data_gen():\n",
" for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):\n",
" yield [input_value]\n",
"\n",
"converter = tf.lite.TFLiteConverter.from_keras_model(model)\n",
"\n",
"converter._experimental_disable_per_channel_quantization_for_dense_layers = True\n",
"\n",
"converter.optimizations = [tf.lite.Optimize.DEFAULT]\n",
"converter.representative_dataset = representative_data_gen\n",
"converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]\n",
"converter.target_spec.supported_types = [tf.int8]\n",
"converter.inference_input_type = tf.uint8\n",
"converter.inference_output_type = tf.uint8\n",
"tflite_model_quant = converter.convert()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wYd6NxD03yjB"
},
"source": [
"The internal quantization remains the same as above, but you can see the input and output tensors are now integer format:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PaNkOS-twz4k"
},
"outputs": [],
"source": [
"interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)\n",
"input_type = interpreter.get_input_details()[0]['dtype']\n",
"print('input: ', input_type)\n",
"output_type = interpreter.get_output_details()[0]['dtype']\n",
"print('output: ', output_type)"
]
},
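 {
  "cell_type": "markdown",
  "metadata": {},
  "source": [
   "You can also inspect the input tensor's quantization parameters. The `run_tflite_model` helper defined later uses this (scale, zero point) pair to rescale float images into the `uint8` range. (This cell assumes the `interpreter` created above.)"
  ]
 },
 {
  "cell_type": "code",
  "execution_count": null,
  "metadata": {},
  "outputs": [],
  "source": [
   "# Read the (scale, zero_point) pair used to map float values to uint8\n",
   "input_scale, input_zero_point = interpreter.get_input_details()[0]['quantization']\n",
   "print('input scale: ', input_scale)\n",
   "print('input zero point: ', input_zero_point)"
  ]
 },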
{
"cell_type": "markdown",
"metadata": {
"id": "TO17AP84wzBb"
},
"source": [
"Now you have an integer quantized model that uses integer data for the model's input and output tensors, so it's compatible with integer-only hardware such as the [Edge TPU](https://coral.ai)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sse224YJ4KMm"
},
"source": [
"### Save the models as files"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4_9nZ4nv4b9P"
},
"source": [
"You'll need a `.tflite` file to deploy your model on other devices. So let's save the converted model to a file and then load it when we run inferences below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "BEY59dC14uRv"
},
"outputs": [],
"source": [
"import pathlib\n",
"\n",
"tflite_models_dir = pathlib.Path(\"models/\")\n",
"tflite_models_dir.mkdir(exist_ok=True, parents=True)\n",
"\n",
"# Save the quantized model:\n",
"tflite_model_quant_file = tflite_models_dir/\"model_quant.tflite\"\n",
"tflite_model_quant_file.write_bytes(tflite_model_quant)"
]
},
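 {
  "cell_type": "markdown",
  "metadata": {},
  "source": [
   "As a quick sanity check before deployment, we can print the on-disk size of the quantized model, since flash space is limited on low-power devices. (This cell assumes the `tflite_model_quant_file` path from the cell above.)"
  ]
 },
 {
  "cell_type": "code",
  "execution_count": null,
  "metadata": {},
  "outputs": [],
  "source": [
   "import os\n",
   "\n",
   "# Report the saved model size in kilobytes\n",
   "size_kb = os.path.getsize(tflite_model_quant_file) / 1024\n",
   "print('Quantized model size: %.1f KB' % size_kb)"
  ]
 },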
{
"cell_type": "markdown",
"metadata": {
"id": "9t9yaTeF9fyM"
},
"source": [
"## Run the TensorFlow Lite model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "L8lQHMp_asCq"
},
"source": [
"Now we'll run inferences using the TensorFlow Lite [`Interpreter`](https://www.tensorflow.org/api_docs/python/tf/lite/Interpreter) to confirm our model's accuracy.\n",
"\n",
"First, we need a function that runs inference with the model and images, and then return the predictions:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "X092SbeWfd1A"
},
"outputs": [],
"source": [
"# Helper function to run inference on a TFLite model\n",
"def run_tflite_model(tflite_file, test_image_indices):\n",
" global test_images\n",
"\n",
" # Initialize the interpreter\n",
" interpreter = tf.lite.Interpreter(model_path=str(tflite_file))\n",
" interpreter.allocate_tensors()\n",
"\n",
" input_details = interpreter.get_input_details()[0]\n",
" output_details = interpreter.get_output_details()[0]\n",
"\n",
"\n",
" predictions = np.zeros((len(test_image_indices),), dtype=int)\n",
" for i, test_image_index in enumerate(test_image_indices):\n",
" test_image = test_images[test_image_index]\n",
"\n",
" # Check if the input type is quantized, then rescale input data to uint8\n",
" if input_details['dtype'] == np.uint8:\n",
" input_scale, input_zero_point = input_details[\"quantization\"]\n",
" test_image = test_image / input_scale + input_zero_point\n",
"\n",
" test_image = np.expand_dims(test_image, axis=0).astype(input_details[\"dtype\"])\n",
" \n",
" interpreter.set_tensor(input_details[\"index\"], test_image)\n",
" interpreter.invoke()\n",
" output = interpreter.get_tensor(output_details[\"index\"])[0]\n",
"\n",
"\n",
" predictions[i] = output.argmax()\n",
"\n",
" return predictions\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2opUt_JTdyEu"
},
"source": [
"### Testing the model on one image\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QpPpFPaz7eEM"
},
"source": [
"Now we'll test the performance of the model.\n",
"\n",
"Let's create another function to print our predictions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zR2cHRUcUZ6e"
},
"outputs": [],
"source": [
"import matplotlib.pylab as plt\n",
"\n",
"# Change this to test a different image\n",
"test_image_index = 18\n",
"\n",
"## Helper function to test the models on one image\n",
"def test_model(tflite_file, test_image_index):\n",
" global test_labels\n",
"\n",
" predictions = run_tflite_model(tflite_file, [test_image_index])\n",
"\n",
"\n",
" plt.imshow(test_images[test_image_index])\n",
" template = \" Model \\n True: {true}, Predicted: {predict}\"\n",
" _ = plt.title(template.format(true= str(class_names[test_labels[test_image_index]]), predict=str(class_names[predictions[0]])))\n",
" plt.grid(False)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "o3N6-UGl1dfE"
},
"source": [
"And test the model on an image:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "rc1i9umMcp0t"
},
"outputs": [],
"source": [
"test_model(tflite_model_quant_file, test_image_index)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LwN7uIdCd8Gw"
},
"source": [
"### Evaluate the model on all images"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RFKOD4DG8XmU"
},
"source": [
"Now let's run the model using all the test images we loaded at the beginning:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "05aeAuWjvjPx"
},
"outputs": [],
"source": [
"# Helper function to evaluate a TFLite model on all images\n",
"def evaluate_model(tflite_file, model_type):\n",
" global test_images\n",
" global test_labels\n",
"\n",
" test_image_indices = range(test_images.shape[0])\n",
" predictions = run_tflite_model(tflite_file, test_image_indices)\n",
"\n",
" accuracy = (np.sum(test_labels== predictions) * 100) / len(test_images)\n",
"\n",
" print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (\n",
" model_type, accuracy, len(test_images)))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Km3cY9ry8ZlG"
},
"source": [
"Evaluate the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-9cnwiPp6EGm"
},
"outputs": [],
"source": [
"evaluate_model(tflite_model_quant_file, model_type=\"Quantized\")"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "post_training_integer_quant.ipynb",
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 4
}