This is part of a series on ML for generalists; you can find the start here.
ReLU, or Rectified Linear Unit, keeps the positive values from a layer's output and replaces the negative values with zeros. So if one of our convolutional layers had -0.5 somewhere in its output, the ReLU step would turn that into 0. If it output 2.5, that stays 2.5.
It's a simple step but fundamental to making our model actually work. The diagrams on the ReLU Wikipedia page are so intimidating that I waited until now to link it, but the principle really is that simple.
Why does it matter? We need non-linearity for our model to learn: without it, stacking layers is pointless, because a chain of purely linear operations collapses into a single linear operation.
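A tiny sketch of that step in PyTorch, just to illustrate the numbers above (not part of our model code yet):

```python
import torch
import torch.nn as nn

relu = nn.ReLU()

# Negative values become zero, positive values pass through unchanged
x = torch.tensor([-0.5, 0.0, 2.5])
print(relu(x))  # tensor([0.0000, 0.0000, 2.5000])
```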
Fully connected layers are the oldest part of our network. They're made up of perceptrons, the simplest possible neural network unit, invented in 1957 by Frank Rosenblatt.
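As a rough sketch (my illustration of the textbook definition, not code from this series), a single perceptron is just a weighted sum of its inputs plus a bias, pushed through a hard threshold:

```python
import torch

def perceptron(inputs: torch.Tensor, weights: torch.Tensor, bias: float) -> int:
    # Weighted sum of the inputs plus a bias, then a hard 0/1 threshold
    return int(torch.dot(inputs, weights) + bias > 0)

# A fully connected layer is many of these units side by side, each with its
# own weights; in PyTorch the weighted sums for a whole layer come from
# torch.nn.Linear(in_features, out_features).
print(perceptron(torch.tensor([1.0, 0.5]), torch.tensor([0.8, -0.2]), bias=-0.3))  # 1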
The core ideas behind Convolutional Neural Networks (CNNs) go back to the late 1980s.
Yann LeCun published the foundational paper in 1989. This is the same Yann LeCun who was Meta's chief AI scientist from 2013 to 2025, leaving because he believes LLMs are a dead end on the road to "superintelligent" models.
We'll put everything in train.py for now, to keep things simple.
PyTorch expects our data to live inside a Dataset, a list-like container where it can access individual training samples. We'll wrap our answersheet.json and the images it points to in a custom OrientationDataset class.
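Here's a minimal sketch of what that class could look like. The JSON keys ("filename", "rotation") and the grayscale transform are my assumptions about the answersheet and images; adjust them to match your actual files:

```python
import json
from pathlib import Path

import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class OrientationDataset(Dataset):
    """Wraps answersheet.json and the images it points to."""

    def __init__(self, answersheet_path: str):
        self.root = Path(answersheet_path).parent
        with open(answersheet_path) as f:
            self.entries = json.load(f)
        # Convert PIL images to single-channel float tensors in [0, 1]
        self.to_tensor = transforms.Compose([
            transforms.Grayscale(),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.entries)

    def __getitem__(self, idx):
        entry = self.entries[idx]
        image = Image.open(self.root / entry["filename"])
        rotation = torch.tensor(entry["rotation"], dtype=torch.float32)
        return self.to_tensor(image), rotation
```

PyTorch only really cares that `__len__` and `__getitem__` exist; how we load and transform each sample is up to us.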
We need images to train our model, so let's create some synthetic data. We'll need images rotated at varying degrees and we'll also need our ground truth, the correct answer: how many degrees we rotated the image by.
Without the ground truth, we can't tell our model how wrong its predictions are, so we can't train it to improve its answers.
You can let your eyes glaze over for this code (a rough sketch follows the list below); it's not important. It just gets us a dataset we can work with:
- Creates an output directory
- Generates the requested number of images:
  - each image is a 480x480 square
  - contains random lines of text which may be in different sizes
  - is rotated by a random number of degrees
- Writes an answersheet.json with the ground truth (correct answer) rotation for each image
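If you do want to peek, here's a rough sketch of such a generator using Pillow. The word list, font handling, filenames, and JSON keys are my assumptions, not necessarily what the real script does:

```python
import json
import random
from pathlib import Path

from PIL import Image, ImageDraw, ImageFont

OUTPUT_DIR = Path("data")
IMAGE_SIZE = 480
WORDS = ["lorem", "ipsum", "dolor", "sit", "amet", "consectetur", "adipiscing"]


def make_dataset(count: int) -> None:
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    answersheet = []
    for i in range(count):
        # White 480x480 square with random lines of black text in varying sizes
        image = Image.new("L", (IMAGE_SIZE, IMAGE_SIZE), color=255)
        draw = ImageDraw.Draw(image)
        y = 10
        while y < IMAGE_SIZE - 40:
            size = random.randint(14, 32)
            # load_default(size=...) needs Pillow >= 10.1; swap in a truetype font otherwise
            font = ImageFont.load_default(size=size)
            line = " ".join(random.choices(WORDS, k=random.randint(3, 8)))
            draw.text((10, y), line, font=font, fill=0)
            y += size + random.randint(4, 12)

        # Rotate by a random angle and record it as the ground truth
        rotation = random.uniform(0, 360)
        rotated = image.rotate(rotation, fillcolor=255)
        filename = f"img_{i:04d}.png"
        rotated.save(OUTPUT_DIR / filename)
        answersheet.append({"filename": filename, "rotation": rotation})

    with open(OUTPUT_DIR / "answersheet.json", "w") as f:
        json.dump(answersheet, f, indent=2)


if __name__ == "__main__":
    make_dataset(1000)
```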
I'm a fan of uv from Astral for managing Python projects. It takes care of package management, Python versions and virtual environments. Use whatever you prefer, but the commands below assume uv.
I'm a generalist software engineer. I think I'm a generalist in most aspects of my life. I've never wanted to be pigeonholed as a front-end or a back-end dev. Same for languages: I'm not a Python person or a Kotlin kid. I can't solve complete problems without knowing a little about a lot.
I always considered machine learning an exception. A domain too specialised to understand, with lots of maths and academic knowledge required. The closest I came was translating Jupyter Notebooks into applications that could run in production.
That's changed. Even when the next AI winter comes (and it'll be a cold one), people will still expect that little sprinkle of ML magic in their applications.
I've written the coding standards at half a dozen companies by now. I generally don't care where you put the curly braces or whether if/elses are cuddled; I'm happy to let an opinionated formatting tool decide that for me.
For every rule, I should be able to explain why it's valid. If my explanation is weak, so is my rule.