Transforms

class torch_points3d.core.data_transform.PointCloudFusion[source]

This transform is responsible for fusing a list of data objects into a single point cloud

  • If a list of data is provided -> Create one Batch object with all data

  • If a list of lists of data is provided -> Create a list of fused point clouds
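A minimal usage sketch, assuming the transform is called like a standard torch_geometric transform with the list of Data objects passed directly to the call:

import torch
from torch_geometric.data import Data
from torch_points3d.core.data_transform import PointCloudFusion

# Two small clouds to fuse into a single Batch object.
clouds = [Data(pos=torch.rand(100, 3)), Data(pos=torch.rand(50, 3))]
fusion = PointCloudFusion()
fused = fusion(clouds)  # one Batch holding the 150 points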

class torch_points3d.core.data_transform.GridSphereSampling(radius, grid_size=None, delattr_kd_tree=True, center=True)[source]

Fits the point cloud to a grid and, for each point in this grid, creates a sphere with a radius r

Parameters
  • radius (float) – Radius of the sphere to be sampled.

  • grid_size (float, optional) – Grid_size to be used with GridSampling3D to select spheres center. If None, radius will be used

  • delattr_kd_tree (bool, optional) – If True, KDTREE_KEY should be deleted as an attribute if it exists

  • center (bool, optional) – If True, a centre transform is applied on each sphere.

class torch_points3d.core.data_transform.RandomSphere(radius, strategy='random', class_weight_method='sqrt', center=True)[source]

Select points within a sphere of a given radius. The centre is chosen randomly within the point cloud.

Parameters
  • radius (float) – Radius of the sphere to be sampled.

  • strategy (str) – Choose between random and freq_class_based. The freq_class_based strategy favors points belonging to low-frequency classes; this can be used to balance unbalanced datasets.

  • center (bool) – If True, the sphere will be moved to the origin.
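A usage sketch (the call convention is assumed to follow standard torch_geometric transforms):

import torch
from torch_geometric.data import Data
from torch_points3d.core.data_transform import RandomSphere

# 10m-wide cloud with dummy labels; keep the points within 2m of a random centre.
data = Data(pos=torch.rand(10000, 3) * 10, y=torch.randint(0, 5, (10000,)))
sphere = RandomSphere(radius=2, strategy="random", center=True)
data = sphere(data)  # points inside the sphere, re-centred at the origin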

class torch_points3d.core.data_transform.GridSampling3D(size, quantize_coords=False, mode='mean', verbose=False)[source]

Clusters points into voxels with size size.

Parameters
  • size (float) – Size of a voxel (in each dimension).

  • quantize_coords (bool, optional) – If True, the points are converted into their associated sparse coordinates within the grid and the result is stored in a new coords attribute.

  • mode (string, optional) – The mode can be either last or mean. If mode is mean, all the points and their features within a cell are averaged. If mode is last, one random point per cell is selected with its associated features.
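A short sketch of how the voxelisation might be used (the call convention and the attribute names other than the documented coords follow standard torch_geometric conventions and are assumptions here):

import torch
from torch_geometric.data import Data
from torch_points3d.core.data_transform import GridSampling3D

# 1m-wide cloud with 4 features per point, clustered into 5cm voxels.
data = Data(pos=torch.rand(1000, 3), x=torch.rand(1000, 4))
sampler = GridSampling3D(size=0.05, quantize_coords=True, mode="mean")
sampled = sampler(data)
print(sampled.pos.shape)     # fewer points than the input cloud
print(sampled.coords.shape)  # integer voxel coordinates (quantize_coords=True)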

class torch_points3d.core.data_transform.RandomSymmetry(axis=[False, False, False])[source]

Apply a random symmetry transformation on the data

Parameters

axis (Tuple[bool,bool,bool], optional) – axis along which the symmetry is applied

class torch_points3d.core.data_transform.RandomNoise(sigma=0.01, clip=0.05)[source]

Simple isotropic additive Gaussian noise (jitter)

Parameters
  • sigma – Variance of the noise

  • clip – Maximum amplitude of the noise

class torch_points3d.core.data_transform.RandomScaleAnisotropic(scales=None, anisotropic=True)[source]

Scales node positions by a randomly sampled factor s1, s2, s3 within a given interval, e.g., resulting in the transformation matrix

\[\begin{split}\left[ \begin{array}{ccc} s1 & 0 & 0 \\ 0 & s2 & 0 \\ 0 & 0 & s3 \\ \end{array} \right]\end{split}\]

for three-dimensional positions.

Parameters

scales – scaling factor interval, e.g. (a, b); each scale is randomly sampled from the range a <= s <= b.
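For illustration, the scaling step amounts to the following stand-alone sketch of the sampling described above (not the library code):

import torch

# Sample one factor per axis from [a, b] and scale positions element-wise,
# which is equivalent to multiplying by diag(s1, s2, s3).
a, b = 0.9, 1.1
pos = torch.rand(100, 3)
scale = a + (b - a) * torch.rand(3)  # (s1, s2, s3)
scaled_pos = pos * scale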

class torch_points3d.core.data_transform.MultiScaleTransform(strategies)[source]

Pre-computes a sequence of downsampling / neighborhood searches on the CPU. This currently only works on PARTIAL_DENSE formats

Parameters

strategies (Dict[str, object]) – Dictionary that contains the samplers and neighbour_finder

class torch_points3d.core.data_transform.ModelInference(checkpoint_dir, model_name, weight_name, feat_name, num_classes=None, mock_dataset=True)[source]

Base class transform for performing a point cloud inference using a pre-trained model. Subclass and implement the __call__ method with your own forward. See PointNetForward for an example implementation.

Parameters
  • checkpoint_dir (str) – Path to a checkpoint directory

  • model_name (str) – Model name, the file checkpoint_dir/model_name.pt must exist

class torch_points3d.core.data_transform.PointNetForward(checkpoint_dir, model_name, weight_name, feat_name, num_classes, mock_dataset=True)[source]

Transform for running a PointNet inference on a Data object. It assumes that the model has been trained for segmentation.

Parameters
  • checkpoint_dir (str) – Path to a checkpoint directory

  • model_name (str) – Model name, the file checkpoint_dir/model_name.pt must exist

  • weight_name (str) – Type of weights to load (best for IoU, best for loss, etc.)

  • feat_name (str) – Name of the key in Data that will hold the output of the forward

  • num_classes (int) – Number of classes that the model was trained on

class torch_points3d.core.data_transform.AddFeatsByKeys(list_add_to_x: List[bool], feat_names: List[str], input_nc_feats: List[Optional[int]] = None, stricts: List[bool] = None, delete_feats: List[bool] = None)[source]

This transform takes a list of attribute names and, if allowed, adds them to x

Example

Assume data.x is empty before calling AddFeatsByKeys.

transform: AddFeatsByKeys
    params:
        list_add_to_x: [False, True, True]
        feat_names: ["normal", "rgb", "elevation"]
        input_nc_feats: [3, 3, 1]

After calling AddFeatsByKeys, data.x contains rgb and elevation, and its shape[-1] == 4 (rgb: 3 + elevation: 1). If input_nc_feats had been [4, 4, 1], an exception would be raised because the rgb dimension is only 3.
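The same example written directly in Python (a sketch; the Data attributes are chosen to match the yaml above):

import torch
from torch_geometric.data import Data
from torch_points3d.core.data_transform import AddFeatsByKeys

data = Data(
    pos=torch.rand(100, 3),
    normal=torch.rand(100, 3),
    rgb=torch.rand(100, 3),
    elevation=torch.rand(100, 1),
)
transform = AddFeatsByKeys(
    list_add_to_x=[False, True, True],
    feat_names=["normal", "rgb", "elevation"],
    input_nc_feats=[3, 3, 1],
)
data = transform(data)
print(data.x.shape[-1])  # 4 == rgb (3) + elevation (1)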

list_add_to_x: List[bool]

For each boolean within list_add_to_x, controls whether the associated feature is going to be concatenated to x

feat_names: List[str]

The list of features within data to be added to x

input_nc_feats: List[int], optional

If provided, checks that the last dimension (shape[-1]) of the associated feature found using feat_names matches this value. It makes sure the feature dimension didn't change

stricts: List[bool], optional

Recommended to be set to a list of True. If True, an Exception is raised if the feature isn't found or its dimension doesn't match.

delete_feats: List[bool], optional

Whether to delete the feature from the data object. The list length must match the number of features added.

class torch_points3d.core.data_transform.AddFeatByKey(add_to_x, feat_name, input_nc_feat=None, strict=True)[source]

This transform is responsible for getting the attribute stored under feat_name and adding it to x if add_to_x is True

add_to_x: bool

Controls whether the feature is going to be added/concatenated to x

feat_name: str

The feature to be found within data to be added/concatenated to x

input_nc_feat: int, optional

If provided, checks whether the feature's last dimension matches the provided value.

strict: bool, optional

Recommended to be set to True. If False, it won't break if the feature isn't found or its dimension doesn't match. (default: True)

class torch_points3d.core.data_transform.RemoveAttributes(attr_names=[], strict=False)[source]

This transform removes unnecessary attributes from data for optimization purposes

Parameters
  • attr_names (list) – Remove from data every attribute whose name is in attr_names

  • strict (bool=False) – If True, an exception is raised when a provided attr_name isn't within the data keys.

class torch_points3d.core.data_transform.ShuffleData[source]

This transform shuffles the feature, pos and label tensors within data

class torch_points3d.core.data_transform.ShiftVoxels(apply_shift=True)[source]

Trick to make sparse convolutions invariant to even and odd coordinates. https://github.com/chrischoy/SpatioTemporalSegmentation/blob/master/lib/train.py#L78

Parameters

apply_shift (bool:) – Whether to apply the shift on indices

class torch_points3d.core.data_transform.ChromaticTranslation(trans_range_ratio=0.1)[source]

Adds a random color translation to the point cloud; data must contain an rgb attribute with values between 0 and 1

Parameters

trans_range_ratio – ratio of translation, i.e. translation = 2 * ratio * rand(-0.5, 0.5) (default: 1e-1)
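An illustrative sketch of the formula above (an assumed re-implementation, not the library code; clamping back to [0, 1] is an assumption here):

import torch

ratio = 0.1
rgb = torch.rand(100, 3)                            # colors in [0, 1]
translation = 2 * ratio * (torch.rand(1, 3) - 0.5)  # one shift per cloud
rgb = torch.clamp(rgb + translation, 0.0, 1.0)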

class torch_points3d.core.data_transform.ChromaticAutoContrast(randomize_blend_factor=True, blend_factor=0.5)[source]

Rescale colors between 0 and 1 to enhance contrast

Parameters
  • randomize_blend_factor – Blend factor is random

  • blend_factor – Ratio of the original color that is kept

class torch_points3d.core.data_transform.ChromaticJitter(std=0.01)[source]

Jitter on the rgb attribute of data

Parameters

std – standard deviation of the Jitter

class torch_points3d.core.data_transform.Jitter(mu=0, sigma=0.01, p=0.95)[source]

Adds a small Gaussian noise to the features.

Parameters
  • mu (float) – Mean of the Gaussian noise.

  • sigma (float) – Standard deviation of the Gaussian noise.

  • p (float) – Probability of applying the noise.

class torch_points3d.core.data_transform.RandomDropout(dropout_ratio: float = 0.2, dropout_application_ratio: float = 0.5)[source]

Randomly drop points from the input data

Parameters
  • dropout_ratio (float, optional) – Ratio that gets dropped

  • dropout_application_ratio (float, optional) – Probability that the dropout is applied

class torch_points3d.core.data_transform.DropFeature(drop_proba=0.2, feature_name='rgb')[source]

Sets the given feature to 0 with a given probability

Parameters
  • drop_proba – Probability that the feature gets dropped

  • feature_name – Name of the feature to drop

class torch_points3d.core.data_transform.NormalizeFeature(feature_name, standardize=False)[source]

Normalizes a feature. By default, the feature is scaled to [0, 1]. Should only be applied at the dataset level.

Parameters

standardize (bool) – Will use standardization rather than scaling.

class torch_points3d.core.data_transform.PCACompute[source]

Computes the Principal Component Analysis of a point cloud \(x_1,\dots, x_n\). It computes the eigenvalues and the eigenvectors of the matrix \(C\), the covariance matrix of the point cloud:

\[ \begin{align}\begin{aligned}x_{centered} &= \frac{1}{n} \sum_{i=1}^n x_i\\C &= \frac{1}{n} \sum_{i=1}^n (x_i - x_{centered})(x_i - x_{centered})^T\end{aligned}\end{align} \]

The eigenvalues and eigenvectors are stored in data, in the eigenvalues and eigenvectors attributes. data.eigenvalues is a tensor \((\lambda_1, \lambda_2, \lambda_3)\) such that \(\lambda_1 \leq \lambda_2 \leq \lambda_3\).

data.eigenvectors is a 3 x 3 matrix whose columns are the eigenvectors associated with these eigenvalues. Therefore, the first column of data.eigenvectors estimates the normal at the center of the point cloud.
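A stand-alone sketch of the computation described above (not the library code): centre the cloud, build the covariance matrix and take its eigendecomposition, with eigenvalues in ascending order.

import torch

pos = torch.rand(1000, 3)
centered = pos - pos.mean(dim=0, keepdim=True)
cov = centered.t() @ centered / pos.shape[0]
eigenvalues, eigenvectors = torch.linalg.eigh(cov)  # ascending eigenvalues
normal_estimate = eigenvectors[:, 0]                # smallest-eigenvalue direction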

class torch_points3d.core.data_transform.ClampBatchSize(num_points=100000)[source]

Drops samples from a batch if the batch gets too large

Parameters

num_points (int, optional) – Maximum number of points per batch, by default 100000

class torch_points3d.core.data_transform.LotteryTransform(transform_options)[source]

Transform that randomly draws one transform among the several transforms indicated in transform_options.

Parameters

transform_options (Omegaconf list) – List which contains the transforms to draw from

class torch_points3d.core.data_transform.RandomParamTransform(transform_name, transform_params)[source]

Creates a transform with random parameters

Example (in the yaml config)

transform: RandomParamTransform
    params:
        transform_name: GridSampling3D
        transform_params:
            size:
                min: 0.1
                max: 0.3
                type: "float"
            mode:
                value: "last"

We can also draw random numbers for two parameters, either integer or float:

transform: RandomParamTransform
    params:
        transform_name: RandomSphereDropout
        transform_params:
            radius:
                min: 1
                max: 2
                type: "float"
            num_sphere:
                min: 1
                max: 5
                type: "int"
Parameters
  • transform_name (string) – The name of the transform

  • transform_params (Omegaconf Dict) – Contains a parameter name as key and, as value, either min, max and type entries (to specify the range and the type of that parameter) or a value entry to fix it (see the examples above)

class torch_points3d.core.data_transform.Select(indices=None)[source]

Selects given points from a data object

Parameters

indices (torch.Tensor) – Indices of the points to keep. Can also be a boolean mask

torch_points3d.core.data_transform.NormalizeRGB(normalize=True)[source]

Normalize rgb between 0 and 1

Parameters

normalize (bool) – Whether to normalize the rgb attributes

torch_points3d.core.data_transform.ElasticDistortion(apply_distorsion: bool = True, granularity: List = [0.2, 0.8], magnitude=[0.4, 1.6])[source]

Applies elastic distortion on sparse coordinate space. First projects the positions onto a voxel grid and then applies the distortion to the voxel grid.

Parameters
  • granularity (List[float]) – Granularity of the noise in meters

  • magnitude (List[float]) – Noise multiplier in meters

Returns

data – The same data object with a distorted grid

Return type

Data
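A usage sketch with the documented defaults (the call convention is assumed to follow standard torch_geometric transforms):

import torch
from torch_geometric.data import Data
from torch_points3d.core.data_transform import ElasticDistortion

data = Data(pos=torch.rand(5000, 3) * 4)
distort = ElasticDistortion(apply_distorsion=True,
                            granularity=[0.2, 0.8], magnitude=[0.4, 1.6])
data = distort(data)  # positions displaced by smooth noise at two granularities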

torch_points3d.core.data_transform.Random3AxisRotation(apply_rotation: bool = True, rot_x: float = None, rot_y: float = None, rot_z: float = None)[source]

Rotates the point cloud by random angles around the x, y and z axes

The angles should be given in degrees.

Parameters
  • apply_rotation (bool:) – Whether to apply the rotation

  • rot_x (float) – Rotation angle in degrees on x axis

  • rot_y (float) – Rotation angle in degrees on y axis

  • rot_z (float) – Rotation angle in degrees on z axis
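A usage sketch (angles in degrees, call convention assumed as for the other transforms):

import torch
from torch_geometric.data import Data
from torch_points3d.core.data_transform import Random3AxisRotation

data = Data(pos=torch.rand(100, 3))
rotate = Random3AxisRotation(apply_rotation=True, rot_x=5, rot_y=5, rot_z=180)
data = rotate(data)  # small tilt around x and y, free rotation around z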

torch_points3d.core.data_transform.RandomCoordsFlip(ignored_axis, is_temporal=False, p=0.95)[source]
torch_points3d.core.data_transform.ScalePos(scale=None)[source]
torch_points3d.core.data_transform.RandomWalkDropout(dropout_ratio: float = 0.05, num_iter: int = 5000, radius: float = 0.5, max_num: int = -1, skip_keys: List = [])[source]

Randomly drops points from the input data using a random walk

Parameters
  • dropout_ratio (float, optional) – Ratio that gets dropped

  • num_iter (int, optional) – number of iterations

  • radius (float, optional) – radius of the neighborhood search to create the graph

  • max_num (int, optional) – max number of neighbors

  • skip_keys (List, optional) – keys for which the mask is not applied

torch_points3d.core.data_transform.RandomSphereDropout(num_sphere: int = 10, radius: float = 5, grid_size_center: float = 0.01)[source]

Drops points inside random spheres of fixed radius. This function takes n random balls of fixed radius r and drops the points inside these balls.

Parameters
  • num_sphere (int, optional) – number of random spheres

  • radius (float, optional) – radius of the spheres

torch_points3d.core.data_transform.SphereCrop(radius: float = 50)[source]

Crops the point cloud to a sphere. This function takes a ball of radius radius centered on a random point; points outside the ball are rejected.

Parameters

radius (float, optional) – radius of the sphere

torch_points3d.core.data_transform.CubeCrop(c: float = 1, rot_x: float = 180, rot_y: float = 180, rot_z: float = 180, grid_size_center: float = 0.01)[source]

Crops the point cloud to a cube. This function takes a cube of size c centered on a random point; points outside the cube are rejected.

Parameters
  • c (float, optional) – half size of the cube

  • rot_x (float, optional) – rotation of the cube around the x axis

  • rot_y (float, optional) – rotation of the cube around the y axis

  • rot_z (float, optional) – rotation of the cube around the z axis

torch_points3d.core.data_transform.compute_planarity(eigenvalues)[source]

Computes the planarity with respect to the eigenvalues of the covariance matrix of the point cloud. Let \(\lambda_1, \lambda_2, \lambda_3\) be the eigenvalues such that:

\[\lambda_1 \leq \lambda_2 \leq \lambda_3\]

then planarity is defined as:

\[planarity = \frac{\lambda_2 - \lambda_1}{\lambda_3}\]
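A worked example of the formula on hand-picked eigenvalues:

# lambda_1 <= lambda_2 <= lambda_3
l1, l2, l3 = 0.01, 0.40, 0.50
planarity = (l2 - l1) / l3  # (0.40 - 0.01) / 0.50 = 0.78 -> strongly planar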