PyTorch3D – 3D Deep Learning in Architecture

Finally, the Artificial Intelligence field is becoming interesting even for those of us passionate about the 3D environment, even if progress has been painstakingly slow. I wanted to test PyTorch3D on a building I had the honor of working on in the past.

Central and Wolfe. Credit: HOK

The main reason I decided to explore PyTorch3D is the following set of features, which we are also looking to implement in fastai:

  • Heterogeneous Batching: supports batching of 3D inputs of different sizes, such as meshes (see the sketch after this list).
  • Fast 3D Operators (Mesh IO): Supports optimized implementations of several common functions for 3D data.
  • Differentiable Rendering: a modular differentiable rendering API with parallel implementations in PyTorch, C++ and CUDA, fundamental for 2D-to-3D translation.
    (Differentiable renderers such as OpenDR, Neural Mesh Renderer, Soft Rasterizer, and redner have showcased how differentiable rendering can be cleanly integrated with deep learning.)
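
As a minimal sketch of heterogeneous batching (the two toy meshes below are made-up assumptions for illustration), a single Meshes object can hold meshes with different vertex and face counts:

import torch
from pytorch3d.structures import Meshes

# Two meshes of different sizes batched together
verts_a = torch.rand(4, 3)
faces_a = torch.tensor([[0, 1, 2], [0, 2, 3], [0, 3, 1], [1, 3, 2]])
verts_b = torch.rand(6, 3)
faces_b = torch.tensor([[0, 1, 2], [2, 3, 4], [4, 5, 0]])

batch = Meshes(verts=[verts_a, verts_b], faces=[faces_a, faces_b])
print(batch.verts_padded().shape)  # (2, 6, 3): padded to the largest mesh
print(batch.verts_packed().shape)  # (10, 3): all vertices concatenated
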
A simplified version of Central and Wolfe

from pytorch3d.utils import ico_sphere
from pytorch3d.io import load_obj
from pytorch3d.structures import Meshes
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.loss import chamfer_distance

# Use an ico_sphere mesh and load a mesh from an .obj e.g. model.obj
sphere_mesh = ico_sphere(level=3)
verts, faces, _ = load_obj("model.obj")
test_mesh = Meshes(verts=[verts], faces=[faces.verts_idx])

# In Colab, you can also upload your own model to test against
from google.colab import files
uploaded = files.upload()
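
Assuming the uploaded file is a .obj, its filename can be fed straight to load_obj (a small sketch building on the snippet above):

fname = next(iter(uploaded))  # name of the first uploaded file
verts, faces, _ = load_obj(fname)
test_mesh = Meshes(verts=[verts], faces=[faces.verts_idx])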


# Differentiably sample 5k points from the surface of each mesh and then compute the loss.
sample_sphere = sample_points_from_meshes(sphere_mesh, 5000)
sample_test = sample_points_from_meshes(test_mesh, 5000)
loss_chamfer, _ = chamfer_distance(sample_sphere, sample_test)

This was done with a .obj file, but for training we could have used an .ifc file, whose different classes would let us label the objects automatically.
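
As a rough sketch of that idea with the ifcopenshell library (the file name "model.ifc" is an assumption), each IFC entity class can serve as an automatic label:

import ifcopenshell

ifc = ifcopenshell.open("model.ifc")  # assumed file name
# Group elements by their IFC class (IfcWall, IfcSlab, ...) to use as labels
labels = {}
for element in ifc.by_type("IfcBuildingElement"):
    labels.setdefault(element.is_a(), []).append(element.GlobalId)
for cls, ids in labels.items():
    print(cls, len(ids))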

import matplotlib.pyplot as plt

def plot_pointcloud(mesh, title=""):
    # Sample points uniformly from the surface of the mesh.
    points = sample_points_from_meshes(mesh, 5000)
    x, y, z = points.clone().detach().cpu().squeeze().unbind(1)
    fig = plt.figure(figsize=(5, 5))
    ax = fig.add_subplot(projection="3d")
    ax.scatter3D(x, z, -y)
    ax.set_xlabel("x")
    ax.set_ylabel("z")
    ax.set_zlabel("y")
    ax.set_title(title)
    # Set the position of the camera
    ax.view_init(300, 60)
    plt.show()
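
For example, the loaded building mesh can then be plotted with (the title string is mine):

plot_pointcloud(test_mesh, "Central and Wolfe")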

Four main loss terms are taken into consideration:

  • Chamfer Distance: the distance between the predicted (deformed) and target mesh, defined as the chamfer distance between the point clouds obtained by differentiably sampling points from their surfaces.
  • Edge Length: which minimizes the length of the edges in the predicted mesh
  • Normal Consistency: which enforces consistency across the normals of neighboring faces.
  • Laplacian: a Laplacian smoothing term that regularizes the predicted mesh.

from pytorch3d.loss import (
    chamfer_distance,
    mesh_edge_loss,
    mesh_normal_consistency,
    mesh_laplacian_smoothing,
)
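
Putting the four terms together, here is a sketch of a combined objective; the weights are illustrative assumptions, and the regularizers are applied to the sphere mesh standing in for the predicted (deformed) mesh:

w_chamfer, w_edge, w_normal, w_laplacian = 1.0, 1.0, 0.01, 0.1  # assumed weights

loss_chamfer, _ = chamfer_distance(sample_sphere, sample_test)
loss_edge = mesh_edge_loss(sphere_mesh)
loss_normal = mesh_normal_consistency(sphere_mesh)
loss_laplacian = mesh_laplacian_smoothing(sphere_mesh, method="uniform")

loss = (w_chamfer * loss_chamfer + w_edge * loss_edge
        + w_normal * loss_normal + w_laplacian * loss_laplacian)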

Point set learning with PointNet has been used instead of ShapeNet, taking into consideration that PointNet approximates a Hausdorff-continuous symmetric set function over the input points.
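
As a minimal sketch of the PointNet idea (not the published architecture; the layer sizes are assumptions), a shared per-point MLP followed by a symmetric max-pool makes the network invariant to the ordering of the input points:

import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Shared MLP applied independently to every point
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 1024),
        )
        self.head = nn.Linear(1024, num_classes)

    def forward(self, points):  # points: (B, N, 3)
        feats = self.mlp(points)                # per-point features (B, N, 1024)
        global_feat = feats.max(dim=1).values   # symmetric pooling over points
        return self.head(global_feat)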

And these are the results from my test with the Central and Wolfe Campus.


Thanks to Gernot Riegler, Ali Osman Ulusoy, Andreas Geiger, and Hao Su.

Help #COVID19: share your computer power.
