The FastAI course seems to be a GANtastic journey (to borrow a pun from Jakub Langr and Vladimir Bok). After seeing the LAMB optimizer (You et al. 2019), I was completely astonished by the connection between my two favorite subjects, Mathematics and Programming: it showed how we can translate ten lines of a mathematical formula into roughly ten lines of code.

*Math -> Code, Code -> Math*
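To show what that math-to-code translation looks like, here is a minimal NumPy sketch of a single LAMB update step. It follows the formulas in You et al. 2019, but the function name, variable names, and hyperparameter defaults are my own simplifications, not fastai's implementation:

```python
import numpy as np

def lamb_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-6, wd=0.01):
    """One simplified LAMB update for a weight tensor w with gradient g.

    m, v are the running first/second moments, t is the step count
    (1-indexed). A sketch of You et al. 2019, not production code.
    """
    m = beta1 * m + (1 - beta1) * g            # first moment (momentum)
    v = beta2 * v + (1 - beta2) * g * g        # second moment
    m_hat = m / (1 - beta1 ** t)               # bias corrections
    v_hat = v / (1 - beta2 ** t)
    update = m_hat / (np.sqrt(v_hat) + eps) + wd * w   # Adam step + decay
    # Layer-wise trust ratio: rescale the step by ||w|| / ||update||
    w_norm, u_norm = np.linalg.norm(w), np.linalg.norm(update)
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    return w - lr * trust * update, m, v
```

Each line of the paper's update rule maps almost one-to-one onto a line of code, which is exactly the connection that struck me.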

It was love at first sight, similar to the one I had with Visual Programming Languages in 2013, when I realized that I could drive my design with mathematical formulas, in a parametric and procedural way, through algorithms. Now, thanks to deep learning, I can go the other way and define a mathematical formula based on my design.

*Math -> Design, Design -> Math*

This passion is driven by the idea of cross-pollination: applying transfer learning between different disciplines and industries. One discipline that fascinates me is Geometric Deep Learning (GDL). Working in **non-Euclidean** space has always been challenging for me, and non-Euclidean representations often provide a useful inductive bias. You can find GDL applied through Graph Neural Networks **(GNNs)**, Graph Convolutional Networks (GCNs), and many other architectures; it allows us to take advantage of **data with inherent relationships, connections, and shared properties.**
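To make the idea concrete, here is a minimal NumPy sketch of a single graph-convolution layer in the style of Kipf and Welling's GCN; the symmetric normalization is the standard formulation, while the function and variable names are mine, not from any specific library:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer (Kipf & Welling style).

    A: (n, n) adjacency matrix, H: (n, d_in) node features,
    W: (d_in, d_out) learnable weights.
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0)        # aggregate neighbors + ReLU

# Example: a 3-node chain graph with 2-d features and a 2 -> 4 layer
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
print(gcn_layer(A, H, W).shape)                 # (3, 4)
```

Each node's new features are a weighted average of its neighbors' features, which is how the graph's relationships and connections enter the model.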

I arrived in Silicon Valley almost 3 years ago with a grant for Artificial Intelligence and Building Information Modeling, but unfortunately I came to realize that the market, companies, and technologies were not yet ready for this leap in Architecture, Engineering, and Construction (AEC). Companies were complaining about their lack of **big data**, without focusing on the quality of their data or on a clever approach to the problem. Fortunately, I can quote someone who strongly believes that you **do NOT need a lot of data** to do deep learning. That person is Jeremy Howard, and he has proved it several times: he has defeated giant companies such as Google and IBM in deep learning competitions using smarter techniques such as **Transfer Learning** and **Data Augmentation**. These techniques are fundamental for laying down the foundation for cross-pollination.

Jeremy Howard, fastai:

“Although many have claimed that you need Google-size data sets to do deep learning, this is false. The power of transfer learning (combined with techniques like data augmentation) makes it possible for people to apply pre-trained models to much smaller datasets.”

Throughout this blog, I will use a transfer learning approach as much as possible. In fastai terms:

- `fine_tune`: Transfer Learning
- `fit_one_cycle`: Training from Scratch

“Transfer Learning: Using a pre-trained model for a task different to what it was originally trained for.”

Jeremy Howard, Deep Learning for Coders without a Ph.D.
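To make the contrast concrete, here is a sketch of the two calls side by side, assuming the `dls` DataLoaders built in the snippet further below:

```python
from fastai.vision.all import *

# Transfer learning: start from ImageNet weights (pretrained=True is the
# default), train the new head first, then unfreeze and train everything.
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2)

# Training from scratch: random initial weights, one-cycle learning-rate
# schedule over all layers from the first epoch.
learn = cnn_learner(dls, resnet34, pretrained=False, metrics=error_rate)
learn.fit_one_cycle(2)
```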

`Total trainable params: 21,813,056`

These are the parameters of the resnet34 architecture after we replaced the head with a new layer. In the `dls` (DataLoaders) we augmented the dataset with `aug_transforms`, and then we applied fine-tuning in the learning step. I combined Chapter 1 and Chapter 2 of Jeremy's book in this code.
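Once the learner in the code block below has been built and fine-tuned, you can verify this number yourself; a quick sketch using fastai's `summary` and the underlying PyTorch parameters (after `fine_tune` the whole model is unfrozen, so every parameter counts as trainable):

```python
# fastai's own per-layer report ends with the "Total trainable params" line
learn.summary()

# The same number, computed directly from the PyTorch parameters
n_trainable = sum(p.numel() for p in learn.model.parameters() if p.requires_grad)
print(f'{n_trainable:,} trainable parameters')

# The new head that replaced resnet34's original classifier
print(learn.model[1])
```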

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'        # Oxford-IIIT Pet images
def is_cat(x): return x[0].isupper()         # cat filenames are capitalized

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=RandomResizedCrop(224, min_scale=0.5),
    batch_tfms=aug_transforms())

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2)
```

I can't wait to show you the project I am working on right now.

**REFERENCES**

http://ai.stanford.edu/blog/topologylayer/

https://dawn.cs.stanford.edu/2019/10/10/noneuclidean/

https://ruder.io/transfer-learning/

**DATASET**

Stanford Pointclouds