This repository contains a Jupyter notebook that demonstrates how to perform post-training integer quantization on machine learning models. This technique is particularly useful for reducing model size and improving inference speed, especially on low-power devices like the [OpenMV](https://openmv.io) camera.
## Features
- **Post-training Integer Quantization**: Optimize a trained model by converting its 32-bit floating-point weights and activations to 8-bit integers (see the sketch after this list).
- **Low-Power Device Compatibility**: The notebook is tailored for devices with limited computational resources, such as the OpenMV camera.
- **Efficient Model Deployment**: The techniques demonstrated produce smaller model files and faster inference.
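
The exact workflow lives in the notebook; as a rough sketch, full integer quantization with the TensorFlow Lite converter (which produces the kind of `.tflite` model that microcontroller-class targets such as the OpenMV camera run) typically looks like the following. The model path, input shape, and random representative dataset below are placeholders — substitute your own trained model and real calibration samples.

```python
import numpy as np
import tensorflow as tf

# Load a trained Keras model (placeholder path; adjust to your model).
model = tf.keras.models.load_model("my_model.h5")

# A representative dataset is required for full integer quantization:
# the converter uses it to calibrate activation ranges.
def representative_dataset():
    for _ in range(100):
        # Replace with real samples drawn from your training data.
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full integer quantization: weights and activations become int8,
# and the model's inputs/outputs are int8 as well.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `model_int8.tflite` can then be copied to the device's storage and loaded by the on-device inference runtime.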