API Reference

This page documents the public functions exported by the distance_transforms Python package.

Core Functions

transform

import distance_transforms as dts

result = dts.transform(arr)

Computes the squared Euclidean distance transform of a binary NumPy array on the CPU.

Parameters

  • arr: A NumPy array containing binary values (0s and 1s)
    • Can be 1D, 2D, or 3D
    • Must be a NumPy array type

Returns

  • A NumPy array containing the squared Euclidean distance from each 0 pixel to the nearest 1 pixel
  • The output has the same shape and dtype as the input array

Examples

import numpy as np
import distance_transforms as dts

# Basic usage with NumPy
arr = np.random.choice([0, 1], size=(10, 10)).astype(np.float32)
result = dts.transform(arr)
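For intuition about what the returned values mean, the output on small inputs can be checked against a brute-force reference. The sketch below is not the package's algorithm (the computation happens in Julia); it simply computes, for a 1D array, the squared distance from every element to the nearest 1:

```python
import numpy as np

def squared_edt_1d_reference(arr):
    """Brute force: squared distance from each element to the nearest 1."""
    ones = np.flatnonzero(arr)           # indices of foreground (1) pixels
    if ones.size == 0:
        return np.full(arr.shape, np.inf)
    idx = np.arange(arr.size)
    # For every position, take the minimum squared distance over all 1s.
    return np.min((idx[:, None] - ones[None, :]) ** 2, axis=1).astype(arr.dtype)

ref = squared_edt_1d_reference(np.array([0, 1, 0, 0, 1], dtype=np.float32))
# ref is [1, 0, 1, 1, 0]: each 1 maps to 0, each 0 to its squared gap
```

Note that the values are squared distances; take np.sqrt(result) if true Euclidean distances are needed.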

transform_cuda

import distance_transforms as dts
import torch

result = dts.transform_cuda(tensor)

Computes the squared Euclidean distance transform of a binary PyTorch tensor on a CUDA device.

Parameters

  • tensor: A PyTorch tensor containing binary values (0s and 1s)
    • Must be a CUDA tensor (on GPU)
    • Can be 1D, 2D, or 3D

Returns

  • A PyTorch tensor containing the squared Euclidean distance from each 0 pixel to the nearest 1 pixel
  • The output has the same shape, dtype, and device as the input tensor

Examples

import torch
import distance_transforms as dts

# Create a binary tensor on the GPU
tensor = torch.rand((100, 100), device='cuda')
tensor = (tensor > 0.5).float()  # threshold to 0s and 1s

# Apply transform on GPU
result = dts.transform_cuda(tensor)

Implementation Details

CPU Implementation

When running on CPU with transform(), distance_transforms wraps the Julia implementation from DistanceTransforms.jl:

  • The NumPy array is converted to a Julia array
  • The binary indicator function is applied to prepare the data
  • The distance transform is computed in Julia
  • The result is converted back to a NumPy array with the same dtype as the input
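The "binary indicator" step can be pictured as seeding the transform. The pure-NumPy sketch below is only an illustration of the idea; the actual indicator lives in DistanceTransforms.jl (boolean_indicator), and its internal representation may differ:

```python
import numpy as np

def boolean_indicator_sketch(arr):
    # Foreground (1) pixels seed the transform with distance 0;
    # background (0) pixels start "infinitely far" until a nearer 1 is found.
    return np.where(arr == 1, 0.0, np.inf)

seeded = boolean_indicator_sketch(np.array([0, 1, 0]))
# seeded is [inf, 0, inf]
```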

GPU Implementation

When using transform_cuda() with CUDA tensors:

  • The PyTorch tensor is shared with Julia using DLPack without copying
  • The computation is performed using Julia’s GPU optimizations
  • The result is shared back to PyTorch using DLPack
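The zero-copy exchange can be demonstrated inside NumPy itself (np.from_dlpack, available in NumPy 1.22+), standing in here for the PyTorch-to-Julia handoff: because no data is copied, a change on the exporting side is immediately visible on the importing side.

```python
import numpy as np

a = np.arange(6, dtype=np.float32).reshape(2, 3)
b = np.from_dlpack(a)   # imports a's buffer via the DLPack protocol, no copy

a[0, 0] = 42.0          # mutate the exporting array
# the imported view sees the change: b[0, 0] == 42.0
```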

Performance Considerations

  • GPU acceleration works best for large arrays (typically 128×128 or larger)
  • The first call to either function may be slower due to Julia’s JIT compilation
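Because the first call pays Julia's JIT compilation cost, benchmarks should include a warm-up pass. A small helper along these lines works (timed and its defaults are illustrative, not part of the package):

```python
import time

def timed(fn, *args, warmup=1, repeats=5):
    """Average wall-clock time of fn(*args), excluding warm-up calls."""
    for _ in range(warmup):
        fn(*args)               # trigger JIT compilation and caching
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats

# e.g. timed(dts.transform, arr) once distance_transforms is imported
```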