First Class 1.0 – Transfer Learning

The FastAI course looks like a GANtastic journey (to borrow the pun from Jakub Langr and Vladimir Bok). After seeing the LAMB optimizer (You et al. 2019), I was astonished by the connection between my two favorite subjects, mathematics and programming: it shows how roughly 10 lines of a mathematical formula can be translated into roughly 10 lines of code.

Jeremy Howard, Fastai v2

Math -> Code
Code -> Math

It was love at first sight, similar to the one I felt for Visual Programming Languages in 2013, when I realized I could drive my design with mathematical formulas in a parametric, procedural way through algorithms. Now, thanks to deep learning, I can go in the other direction and define a mathematical formula based on my design.

Thanks to Om. egvo / Lunchbox for their Moebius nodes as well.

Math -> Design
Design -> Math

This passion is driven by the idea of cross-pollination: applying transfer learning between different disciplines and industries. One discipline that fascinates me is Geometric Deep Learning (GDL). Working in non-Euclidean space has always been challenging for me, and representations of non-Euclidean geometry often provide a useful inductive bias. You can find GDL applied through Graph Neural Networks (GNNs), Graph Convolutional Networks (GCNs), and many other architectures; it allows us to take advantage of data with inherent relationships, connections, and shared properties.

I arrived here in Silicon Valley almost 3 years ago with a grant for Artificial Intelligence and Building Information Modeling, but unfortunately I had to realize that the market, companies, and technologies were not yet ready for this leap in Architecture, Engineering and Construction. Companies were complaining about their lack of big data, without focusing on the quality of their data or on a clever approach to the problem. Finally, I can quote someone who strongly believes that you do NOT need a lot of data to do deep learning: Jeremy Howard. He has proved it several times, defeating giant companies such as Google and IBM in deep learning competitions by using smarter techniques such as transfer learning and data augmentation. These are fundamental for laying down the foundation for cross-pollination.

“Although many have claimed that you need Google-size data sets to do deep learning, this is false. The power of transfer learning (combined with techniques like data augmentation) makes it possible for people to apply pre-trained models to much smaller datasets.”

Jeremy Howard, Fastai

Throughout this blog, I will use a transfer learning approach as much as possible.
fine_tune : Transfer Learning
fit_one_cycle : Training from Scratch
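To make the contrast concrete, here is a minimal sketch (assuming the fastai vision API and a dls DataLoaders object like the one built later in this post), not a prescribed recipe:

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2)               # transfer learning: start from pretrained weights

learn_scratch = cnn_learner(dls, resnet34, pretrained=False, metrics=error_rate)
learn_scratch.fit_one_cycle(2)   # training from scratch: randomly initialized weights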

Transfer Learning: Using a pre-trained model for a task different to what it was originally trained for.

Jeremy Howard, Deep Learning for Coders without a Ph.D.
Total trainable params: 21,813,056 

These are the parameters of the resnet34 architecture after we replaced the head with a new layer. In the dls (DataLoaders) we augmented the dataset with aug_transforms, and then applied fine-tuning during the learning step. I combined Chapter 1 and Chapter 2 of Jeremy's book in this code.

from fastai.vision.all import *

# Pets dataset from fastbook Ch. 1: cat filenames start with an uppercase letter
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=RandomResizedCrop(224, min_scale=0.5),
    batch_tfms=aug_transforms())
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2)
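If you want to double-check the trainable-parameter figure quoted above, one possible check (a sketch, not from the original post) is to sum the parameters that require gradients; note that the count depends on whether the body is currently frozen, and learn.summary() prints the same figure with a layer-by-layer breakdown.

# Count the parameters currently marked as trainable (illustrative check)
n_trainable = sum(p.numel() for p in learn.model.parameters() if p.requires_grad)
print(f"Total trainable params: {n_trainable:,}")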

I can't wait to show you the project I am working on right now.

REFERENCES
http://ai.stanford.edu/blog/topologylayer/
https://dawn.cs.stanford.edu/2019/10/10/noneuclidean/
https://ruder.io/transfer-learning/

DATASET
Stanford Pointclouds

Ethics, AI and Computational Design

It has been an honor to be nominated for Forbes 30 Under 30 Europe, even if I did not achieve the final recognition due to my visa status here in Silicon Valley. I would like to thank the people who nominated me.

However, I was able to obtain the Deep Learning Scholarship provided by Fast.Ai to access this amazing course. This allows me to go a step further and make my dream come true. Before starting, I would like to pause and reflect on some important concepts to keep in mind throughout the course.

“How to think about ethical implications of your work, to help ensure that you’re making the world a better place, and that your work isn’t misused for harm”

“Removing barriers: deep learning has, until now, been a very exclusive game. We’re breaking it open, and ensuring that everyone can play”

Jeremy Howard in his Fastbook

It is extremely hard to find someone who shares your values and stresses their importance in one of the most important deep learning courses in the world (something I did not find in Andrew Ng's deeplearning.ai course). Thanks, Jeremy and team.

I was so happy to be invited to the Ethics and AI talk in Rome, ReinAIssance, but unfortunately I could not leave the USA at that time. Nevertheless, I was able to share my point of view with some participants and provide a small contribution.
I deeply care about these values, and I hope they will be embedded in my future work. Rachel Thomas from the Center for Applied Data Ethics (CADE) is also a strong advocate of these principles, and I strongly recommend following her.
Exactly for this reason, the application that I am developing aims to establish Unity in the design process.

“Data scientists need to be part of a cross-disciplinary team. And researchers need to work closely with the kinds of people who will end up using their research. Better still is if the domain experts themselves have learned enough to be able to train and debug some models themselves”

Rachel Thomas, Data Ethics

The goal is to be a tool that empowers creativity, as Genevieve Bell recommended at the Intel AI Summit in 2018.

 “We have entered the age of automation overconfident yet underprepared. If we fail to make ethical and inclusive artificial intelligence, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality”.

Joy Buolamwini, MIT

REFERENCES

https://ai.google/principles/
https://ethical.institute/
https://romecall.org/
https://www.zdnet.com/article/ibms-rometty-lays-out-ai-considerations-ethical-principles/
https://www.ideo.com/blog/ai-needs-an-ethical-compass-this-tool-can-help

The Nature of Order by Christopher Alexander
The Timeless Way of Building by Christopher Alexander
The Architecture of Happiness by Alain de Botton
Cortex (Twitter) by Luca Belli
Algorithms of Oppression by Safiya Umoja Noble
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O'Neil

Pytorch3d – 3D Deep Learning in Architecture

Finally, the Artificial Intelligence field is becoming interesting even for those passionate about the 3D environment, even if progress there is a painstakingly long process. I wanted to test Pytorch3D on a building I had the honor to work on in the past.

Central and Wolfe floor plan. Image credit: HOK

The main reason I decided to explore Pytorch3D is the following set of features, which we are also looking to implement in fastai:

  • Heterogeneous Batching: supports batching of 3D inputs of different sizes, such as meshes (a small sketch follows this list).
  • Fast 3D Operators (Mesh IO): supports optimized implementations of several common functions for 3D data.
  • Differentiable Rendering: a modular differentiable rendering API with parallel implementations in PyTorch, C++ and CUDA, fundamental for 2D-to-3D translation.
    (Differentiable programming projects such as OpenDR, Neural Mesh Renderer, Soft Rasterizer, and redner have showcased how to build differentiable renderers that can be cleanly integrated with deep learning.)
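As a small illustration of the heterogeneous batching mentioned above, here is a hedged sketch (the tensors are random placeholders, not real building geometry): two meshes with different vertex and face counts can live in the same Meshes batch.

import torch
from pytorch3d.structures import Meshes

# Two random meshes of different sizes packed into a single batch (placeholder data)
verts_a, faces_a = torch.rand(42, 3), torch.randint(0, 42, (80, 3))
verts_b, faces_b = torch.rand(162, 3), torch.randint(0, 162, (320, 3))
batch = Meshes(verts=[verts_a, verts_b], faces=[faces_a, faces_b])
print(batch.num_verts_per_mesh())   # tensor([ 42, 162])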
A simplified version of Central and Wolfe

from pytorch3d.utils import ico_sphere
from pytorch3d.io import load_obj
from pytorch3d.structures import Meshes
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.loss import chamfer_distance

# Use an ico_sphere mesh and load a mesh from an .obj e.g. model.obj
sphere_mesh = ico_sphere(level=3)
verts, faces, _ = load_obj("model.obj")
test_mesh = Meshes(verts=[verts], faces=[faces.verts_idx])

# In Colab, you can also upload your own model to test against
from google.colab import files
uploaded = files.upload()


# Differentiably sample 5k points from the surface of each mesh and then compute the loss.
sample_sphere = sample_points_from_meshes(sphere_mesh, 5000)
sample_test = sample_points_from_meshes(test_mesh, 5000)
loss_chamfer, _ = chamfer_distance(sample_sphere, sample_test)

This was done with a .obj file, but for learning we could also implement it with an .ifc file, whose different classes would let us label the objects automatically.
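As a hedged sketch of that idea (not the project's actual pipeline), the ifcopenshell library can read an .ifc file and expose each element's IFC class, which could serve directly as a label; "model.ifc" is a hypothetical file name.

import ifcopenshell

ifc = ifcopenshell.open("model.ifc")        # hypothetical IFC model
labels = [(el.GlobalId, el.is_a())          # e.g. (GlobalId, "IfcWall")
          for el in ifc.by_type("IfcProduct")]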

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

def plot_pointcloud(mesh, title=""):
    # Sample points uniformly from the surface of the mesh.
    points = sample_points_from_meshes(mesh, 5000)
    x, y, z = points.clone().detach().cpu().squeeze().unbind(1)
    fig = plt.figure(figsize=(5, 5))
    ax = Axes3D(fig)
    ax.scatter3D(x, z, -y)
    ax.set_xlabel('x')
    ax.set_ylabel('z')
    ax.set_zlabel('y')
    ax.set_title(title)
    # Set the position of the camera
    ax.view_init(300, 60)
    plt.show()
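For example (assuming the meshes defined earlier in this post):

plot_pointcloud(sphere_mesh, title="ico_sphere")
plot_pointcloud(test_mesh, title="Central and Wolfe (simplified)")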

4 main factors are taken into consideration; a sketch combining them into one loss follows the imports below:

  • Chamfer Distance: the distance between the predicted (deformed) and target mesh, defined as the chamfer distance between the point clouds obtained by differentiably sampling points from their surfaces.
  • Edge Length: minimizes the length of the edges in the predicted mesh.
  • Normal Consistency: enforces consistency across the normals of neighboring faces.
  • Laplacian: the Laplacian regularizer, which smooths the predicted mesh.

from pytorch3d.loss import (
    chamfer_distance,
    mesh_edge_loss,
    mesh_normal_consistency,
    mesh_laplacian_smoothing,
)
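Here is a hedged sketch of how these four terms could be combined into a single training loss; deformed_mesh stands for the predicted mesh being optimized (a hypothetical name), and the weights are illustrative rather than prescribed values.

loss_chamfer, _ = chamfer_distance(sample_sphere, sample_test)
loss = (loss_chamfer
        + 1.0  * mesh_edge_loss(deformed_mesh)                                # short edges
        + 0.01 * mesh_normal_consistency(deformed_mesh)                       # consistent normals
        + 0.1  * mesh_laplacian_smoothing(deformed_mesh, method="uniform"))   # Laplacian regularizer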

Point set learning with PointNet has been used (rather than ShapeNet), taking into consideration a Hausdorff-continuous symmetric function.
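The core of that idea is a symmetric aggregation (a max-pool over per-point features), which makes the network invariant to the ordering of the input points. Below is a minimal PyTorch sketch of the pattern, not the full PointNet architecture:

import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    # Shared per-point MLP followed by a symmetric max-pool, so the output
    # does not depend on the order of the input points.
    def __init__(self, num_classes=10):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 256), nn.ReLU())
        self.head = nn.Linear(256, num_classes)

    def forward(self, pts):                     # pts: (batch, num_points, 3)
        feats = self.mlp(pts)                   # per-point features
        global_feat = feats.max(dim=1).values   # symmetric aggregation
        return self.head(global_feat)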

And these are the results from my test with the Central and Wolfe Campus.


Thanks to Gernot Riegler, Ali Osman Ulusoy, Andreas Geiger and Hao Su

Help #COVID19: share your computer power.

Health First – Pro-Active Approach

During this period of an extremely contagious pandemic, we are experiencing how sad it is to stay in isolation and not connect with each other in physical space as we normally would.
Conferences such as F8, GDC, GTC, and many others have been canceled. A deep sense of sadness and detachment from our social life is affecting most of us, and alongside COVID-19 I would like to share an unwieldy personal experience of my own.

"Memento mori".
I found a way to check whether there is something inside my head, and to further explore my own neural network.

EEG – Deep Learning
!! Important to remember: never freeze batch-normalization layers, and never turn off the updating of their moving-average statistics.
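In plain PyTorch terms, a minimal sketch of that advice (freeze everything except the batch-norm layers and keep their running statistics updating) could look like the following; fastai's own freezing logic handles this for you, so this is only an illustration:

import torch.nn as nn

def freeze_except_bn(model):
    # Freeze all parameters except batch norm, and keep BN running stats updating.
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            for p in module.parameters(recurse=False):
                p.requires_grad = True
            module.train()                      # keep updating running mean/var
        else:
            for p in module.parameters(recurse=False):
                p.requires_grad = False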

It is nice to see that Facebook AI released fastMRI on the 12th, and on the 17th I received my results, my own validation set. Unfortunately, it took almost 2 months to receive the results, and it was a nerve-racking situation. I hope that with this work people won't have to go through the suffering we did.

I used DICOM/DatCard to view the MRI. Thanks to FastAI and Jeremy Howard, I was able to analyze my scans in a more accurate way. "At medical start-up Enlitic, Jeremy Howard led a team that used just 1,000 examples of lung CT scans with cancer to build an algorithm that was more accurate at diagnosing lung cancer than a panel of 4 expert radiologists."

When I saw these images for the first time, I said "Eureka": finally I have proof that there is something inside my head, something people call the brain, whose complexity we are still trying to understand.

The fastMRI initiative aims to make scans up to 10 times faster than they are today, thereby improving the patient experience and making MRI scans less expensive and more accessible. Jeremy taught us how to classify 37 breeds of pets, and I applied transfer learning with this fine-grained classification approach to brain diseases.

conda install pyarrow   

pip install pydicom kornia opencv-python scikit-image

fnames = get_image_files(path_img)

dls = ImageDataLoaders.from_name_re(
    path, fnames, pat=r'(.+)_\d+.jpg$', item_tfms=Resize(460), bs=bs,
    batch_tfms=[*aug_transforms(size=224, min_scale=0.75),
                Normalize.from_stats(*imagenet_stats)])

Trained with ResNet34: we used a CNN backbone and a fully connected head with a single hidden layer as the classifier. This network has already been pretrained on over a million ImageNet images, which is why we can apply transfer learning successfully.

learn = cnn_learner(dls, resnet34, metrics=error_rate).to_fp16()

learn.model
learn.fit_one_cycle(4)
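A possible next step, following the usual fastai pets-notebook pattern (a sketch, not necessarily what I ran), is to unfreeze the backbone and continue training with discriminative learning rates:

learn.unfreeze()
learn.fit_one_cycle(4, lr_max=slice(1e-6, 1e-4))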

Results regarding my brain will be published soon (if they are positive) on my code blog.

Pandemic: "From Ebola to COVID-19"

We tried to raise awareness back in 2015, and so did Bill Gates in 2019.
I strongly suggest having a look at this game as a simulation to understand possible outcomes and consequences:

Plague Inc

Aesculapius

This project is called Aesculapius. It was designed for pandemics in general, but it used Ebola as an example. The design solution focused on the importance of raising awareness about the pandemic and emergency in order to act in a proactive way. Accurate information and news need to circulate immediately, in real time, in order to eradicate the disease.

At the moment, for coronavirus, a lot of data has been gathered.

Factors that few people took into consideration:

  • Source of data: private hospitals do not provide or release their data the way public institutions in countries such as Italy do.
  • Healthcare system structure: people in Italy have free public healthcare, and as soon as they feel sick they go there directly.
  • People in the USA get a flu vaccine every year.
  • Culture: Italy is immersed in piazzas, social spaces that embrace the culture and spirit of Italians; this facilitated the spread of the disease.

We should always place ethical values first and operate in a PRO-ACTIVE way, with dedication and love. Take care of yourself at this time: #STAYHOME #LOVEYOURFAMILY #LEARN, and improve your community and society.

Deep Learning meets Computational Design

25th of February 2020 | San Francisco

Today I was working on the new website for the Computational Design Institute, and I couldn't stop thinking about the upcoming info session for one of the most interesting and important Deep Learning courses in the world:

Deep Learning Part I with Jeremy Howard (FastAI v.2)

Jeremy’s new book available during the course.

The course will start on the 17th of March.
It will be a great opportunity to work fully on the project that started at the AEC Tech Hackathon in New York organized by Thornton Tomasetti Core Studio.

The open-source project started in the AEC Tech Hackathon

The main values that Jeremy pointed out regarding his course are:

  • the passion of the students who come from all over the world to work in Teams
  • the impact of the projects created during the course.

It seems it will be an intense and exciting 3-month hackathon.
As at the AEC Tech Hackathon, the real value I see is not the technology itself.
In New York, during those 24 hours, we accomplished something remarkable: I am not referring to the inspirational video in Twitter's post, but to the fact that two teams, instead of competing against each other, decided to collaborate and win together.


“Technology is not automatically a force for inclusion”


Melinda Gates
Here is the whole team, from left to right:

Alberto Tono, me
Lexi Fritz, Marketing Domain Expert at Tetra Tech
Sounok Sarkar, Design Technology Specialist at HOK
Valentin Noves, Director of Technology and Innovation at ENGWorks
Dan Siroky, Design Technology Specialist at HOK
Jeffrey Moser, Computational Design Specialist at Grimshaw

Pablo Derendinger, BIM Project Manager on behalf of ENGworks at Walt Disney Imagineering
Constantina Tsiara, Computational Designer at Workshop APD

Rachel Hartley, Community Manager at Autodesk 
Marios Tsiliakos, Design Computation Specialist at Foster + Partners
Byron Mardas, Associate Environmental Designer at Foster + Partners

and the project is on this website: Try it out!!

Right after the Hackathon, our competitors released their solution as open source, which got less traction than ours:

Exactly for this reason, I am eager to meet other like-minded people during this course and build something that can help our community. I am eager to see whether we can tame another giant with our passion, as other teams have done during this course in the past. And I already learned a lot just from the info session.

“Python will become less important in 3 years from now”

Jeremy Howard