Practical Deep Learning

The course.

Lesson 1

All models need numbers as their inputs.

We should learn deep learning the way we learn a new sport: first the rules and the fun of the game, and only later the details, the techniques, the special training regimes, etc.

Deep learning neural networks: they find features automatically.

Image-based algorithms can also be used to classify non-image data: sounds (converted to spectrogram images via FFT), mouse movements, …
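
A minimal sketch of the sound-to-image idea, using scipy's FFT-based spectrogram on a synthetic tone (the libraries and the filename are my choice, not from the course):

import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

sr = 16_000                                 # sample rate in Hz
t = np.linspace(0, 1, sr, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t)         # stand-in audio: a 440 Hz tone

f, times, Sxx = spectrogram(audio, fs=sr)   # short-time FFT
plt.pcolormesh(times, f, 10 * np.log10(Sxx + 1e-10))  # power in dB
plt.axis('off')
plt.savefig('spectrogram.png')              # now an ordinary image for an image classifier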

Less code in deep learning is better.

Working with smaller images (thumbnails up to 400 px) makes training faster. Opening a large image may take much more time than actually processing it in the network.
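
fastai has a utility for pre-shrinking a whole folder of images; a sketch with hypothetical folder names:

from fastai.vision.all import *

# Write thumbnails capped at 400 px to a separate folder, so training reads small files
resize_images('photos', max_size=400, dest='photos_400')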

The AI community has now converged on a small number of model architectures that work very well across many types of tasks.

When predicting a value in a range, it's sometimes useful to train on a slightly wider range (e.g. 0.5–5.5 when predicting 1–5 stars).
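
The reason: fastai squashes such predictions with a sigmoid, which approaches its endpoints only asymptotically, so a slightly wider y_range lets the model actually reach 1 or 5. A minimal sketch (assumes a ratings dls built with CollabDataLoaders):

from fastai.collab import collab_learner

learn = collab_learner(dls, y_range=(0.5, 5.5))  # targets are 1–5 stars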

Homework

Run the notebooks. Maybe try predicting more than one category per picture?

Book, 1st chapter


Paperspace notebooks

Cat detector with the default settings (resize to 224). fine_tune(1) first trains the new head for one epoch with the pretrained body frozen (first table of results), then unfreezes and fine-tunes the whole model for one more epoch (second table):

from fastai.vision.all import *

# Download and extract the Oxford-IIIT Pet dataset
path = untar_data(URLs.PETS)/'images'

# In this dataset, cat images have filenames starting with an uppercase letter
def is_cat(x):
    return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path,
    get_image_files(path),
    valid_pct=0.2,          # hold out 20% of the images for validation
    seed=42,                # fixed seed, so the split is reproducible
    label_func=is_cat,
    item_tfms=Resize(224)   # resize every image to 224x224
)

# Pretrained ResNet-34, fine-tuned on the pet images
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
epoch 	train_loss 	valid_loss 	error_rate 	time
0 	    0.171127 	0.011786 	0.002706 	00:25
epoch 	train_loss 	valid_loss 	error_rate 	time
0 	    0.063262 	0.024726 	0.006089 	00:28
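
Once trained, the model can classify a single image; a quick sketch (the filename is hypothetical):

img = PILImage.create('some_cat.jpg')
is_cat_pred, _, probs = learn.predict(img)
print(f"Is a cat: {is_cat_pred}; probability: {probs[1]:.4f}")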

When resize was set to 32 px, the results were:

epoch 	train_loss 	valid_loss 	error_rate 	time
0 	    1.047951 	0.762592 	0.320704 	00:12
epoch 	train_loss 	valid_loss 	error_rate 	time
0 	    0.725621 	0.567899 	0.266576 	00:12

So only about twice as fast (despite a 49× reduction in pixels), but with much worse results.

When resize was set to 512, the results were:

epoch 	train_loss 	valid_loss 	error_rate 	time
0 	    0.184101 	0.028688 	0.008119 	01:25
epoch 	train_loss 	valid_loss 	error_rate 	time
0 	    0.038081 	0.031769 	0.008796 	01:52

Overfitting is the single most important and challenging issue in training, for all machine learning practitioners and all algorithms.


Models using architectures with more layers take longer to train, and are more prone to overfitting (i.e. you can’t train them for as many epochs before the accuracy on the validation set starts getting worse).

If you’re creating a cat detector, for instance, you generally want at least 30 cats in your validation set.

A key property of the validation and test sets is that they must be representative of the new data you will see in the future.

(Finished 2023-11-07.)

Lesson 2, deployment, 2023-12-05

Before cleaning a dataset, train a model on the dirty dataset and then use that model to clean the data (the model's highest-loss items point to likely labeling mistakes). fastai has ImageClassifierCleaner for that.
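
A sketch of that workflow from the book (assumes a trained learn, and the dataset root in path):

from fastai.vision.widgets import ImageClassifierCleaner
import shutil

cleaner = ImageClassifierCleaner(learn)  # widget listing the highest-loss images
cleaner                                  # display it; mark items to delete or relabel

# After marking items in the widget:
for idx in cleaner.delete():
    cleaner.fns[idx].unlink()            # remove files marked for deletion
for idx, cat in cleaner.change():
    shutil.move(str(cleaner.fns[idx]), path/cat)  # move relabeled files to the right folder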

GPUs cannot swap memory to disk.

Lesson 3

Fastai library notes

After creating the DataLoaders, show_batch shows a sample of the (transformed) data.

learner.show_results() shows the model's predictions on a sample of the data, alongside the targets.
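
A quick usage sketch (assumes an existing dls and a trained learn):

dls.show_batch(max_n=9)       # a grid of (transformed) training samples
learn.show_results(max_n=9)   # predictions displayed next to the targets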

When fine_tune is not applicable (there is no pretrained model, e.g. in the case of tabular data), fit_one_cycle is the most commonly used method for training fastai models from scratch.
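
A sketch of from-scratch tabular training, using the ADULT_SAMPLE dataset as in the book (the column choices are illustrative):

from fastai.tabular.all import *

path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')

dls = TabularDataLoaders.from_df(
    df, path=path,
    y_names='salary',
    cat_names=['workclass', 'education', 'marital-status', 'occupation'],
    cont_names=['age', 'fnlwgt', 'education-num'],
    procs=[Categorify, FillMissing, Normalize],
)

learn = tabular_learner(dls, metrics=accuracy)
learn.fit_one_cycle(3)  # no pretrained weights, so fit_one_cycle instead of fine_tune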

Datasets

Jupyter notebooks tips

!pip install -Uqq fastai ensures the newest version of the library (in cloud environments).

RISE: a library to turn cells into slides and fragments (presentation)

Presentation about notebooks 101.

In command mode, pressing 0 twice will restart the kernel

Ideas for ML

published: 2023-10-28
last modified: 2023-12-05

https://vit.baisa.cz/notes/learn/practical-deep-learning/