Foundation Study · cs.GR · cs.LG · Feb 2025
Signed Distance Fields as a Foundational 3-D Representation: Analytic SDFs, Comparison Against Point Clouds and Meshes, and a Brief Exploration of GAN-Based SDF Generation
Aaditya Jain
Signed Distance Functions · Foundations · Thesis-Line Foundation Study
Submitted: February 2025 · Subject: cs.GR · cs.LG · Keywords: signed distance field, SDF, implicit surface, CSG operators, GAN-SDF, foundational study, thesis-line foundation
Abstract
We document the foundational SDF study session that informs every subsequent thesis-line topic touching implicit surfaces. The study covers: (i) the three competing 3-D representations on the table at thesis-line entry — point clouds, meshes, and SDFs — with a structured comparison on the dimensions that matter for ML use (manifold structure, differentiability, variable-size batchability, surface-extraction quality); (ii) analytic SDFs for sphere and cuboid primitives, including the CSG operators (union = min, intersection = max, difference = max(a, −b)) that compose primitive SDFs into compound shapes; (iii) a brief exploration of GAN-based SDF generation that was abandoned in favour of the diffusion-based path documented in the parallel Red-Square DDPM work [1] and the downstream MambaFlow3D-class generators [2]. The contribution is the structured representation comparison and the abandonment-of-GAN-SDF decision, both of which propagate forward to load-bearing thesis-line decisions. The decision to commit to continuous distance / depth / normal features over binary occupancy (Hexplane AE, Six-Plane Mesh) traces back to the SDF gradient-norm property identified here.
1. Introduction

By February 2025 the thesis line was choosing a 3-D representation. The three obvious candidates were point clouds (raw scan output), meshes (Houdini-native), and SDFs (implicit, smooth-gradient). The Topic-8 study session was the literacy-and-compare exercise before committing.

The output of the study, recorded in this paper, is the structured comparison (§2), the analytic SDF building blocks (§3), and the GAN-SDF dead end (§4). The thesis-line consequences propagate forward — every downstream topic touching implicit surfaces (Hexplane AE [3], Hierarchical Triplane [4], Six-Plane Mesh [5], UODF [6]) inherits decisions made here.

2. Three Representations Compared
Table 1 — Point cloud, mesh, SDF comparison.
| Representation | Pros | Cons |
| --- | --- | --- |
| Point cloud (.ply / .npy) | Direct from LiDAR / photogrammetry; PointNet++ input | No topology; sparse; hard to texture |
| Mesh (triangles) | Explicit topology; rendering / texturing; Houdini-native | Variable vertex count breaks batched NN training |
| SDF | Smooth gradient (‖∇SDF‖ ≈ 1); differentiable; marching-cubes extractable | Closed surfaces only (use UDF / UODF for open surfaces) |

The SDF gradient-norm property is the load-bearing insight. A valid SDF satisfies the eikonal equation ‖∇SDF‖ = 1 almost everywhere. This is a hard geometric constraint that distinguishes valid SDFs from arbitrary scalar fields. Any neural network trained to produce SDFs is implicitly constrained to satisfy (approximately) this equation. The constraint is what makes SDF-based marching-cubes extraction stable.

3. The Eikonal Constraint

A signed-distance field is not an arbitrary scalar field — it satisfies the eikonal equation:

‖∇f(x)‖ = 1 for almost every x ∈ ℝ³

This is a hard geometric constraint. Geometrically: at every point in space, the gradient of the SDF has unit norm and points in the direction of fastest increase of distance to the surface. Equivalently: the SDF's level sets are parallel surfaces spaced at unit distance from one another. The constraint is what makes SDFs special among scalar fields — a random scalar field does not satisfy it.

The eikonal constraint has two consequences for ML use. First, marching-cubes surface extraction on an SDF produces a clean isosurface because the level sets are well-spaced. Second, an SDF can be queried for "distance to surface" at any point in space, not just at the surface itself — useful for collision detection, ray marching, and signed-distance-based losses.
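The gradient-norm property is easy to check numerically. A minimal Python sketch (not part of the original study; the sphere SDF and the finite-difference step size are illustrative choices): the central-difference gradient of the analytic sphere SDF has norm ≈ 1 at any point away from the centre, the field's only non-smooth point.

```python
import math

def sd_sphere(p, r=1.0):
    """Analytic SDF of a sphere of radius r centred at the origin."""
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - r

def grad_norm(f, p, eps=1e-5):
    """Central-difference estimate of ||grad f|| at point p."""
    g = []
    for i in range(3):
        hi = list(p); hi[i] += eps
        lo = list(p); lo[i] -= eps
        g.append((f(hi) - f(lo)) / (2 * eps))
    return math.sqrt(sum(c * c for c in g))

# Eikonal property: unit gradient norm at an arbitrary off-centre point.
n = grad_norm(sd_sphere, [0.3, -1.2, 0.7])
```

The same check applied to an arbitrary scalar field (say, the sphere SDF scaled by 0.5) returns a norm far from 1, which is exactly what separates valid SDFs from generic implicit functions.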

The constraint can be enforced as a soft prior during neural-network training via the eikonal-loss term:

L_eik = E_x[ ( ‖∇f̂(x)‖ − 1 )² ]

added to the reconstruction loss. The combination produces a network that learns to predict SDFs rather than arbitrary scalar fields.
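The loss term can be sketched in plain Python (an illustrative Monte-Carlo estimate, not the study's training code; central differences stand in for autograd). A true SDF scores ≈ 0, while a rescaled field that violates the eikonal equation scores the expected constant residual.

```python
import math, random

def eikonal_loss(f, n_samples=1000, seed=0):
    """Monte-Carlo estimate of E_x[(||grad f(x)|| - 1)^2] over a box,
    with central differences standing in for autograd."""
    rng = random.Random(seed)
    eps, total = 1e-4, 0.0
    for _ in range(n_samples):
        p = [rng.uniform(-2.0, 2.0) for _ in range(3)]
        g2 = 0.0
        for i in range(3):
            hi = list(p); hi[i] += eps
            lo = list(p); lo[i] -= eps
            g2 += ((f(hi) - f(lo)) / (2 * eps)) ** 2
        total += (math.sqrt(g2) - 1.0) ** 2
    return total / n_samples

sdf = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0  # true sphere SDF
half = lambda p: 0.5 * sdf(p)                           # not an SDF: grad norm 0.5
```

The rescaled field `half` has gradient norm 0.5 everywhere, so its loss sits at (0.5 − 1)² = 0.25, while the true SDF's loss vanishes up to finite-difference error.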

4. Analytic SDFs

Two primitive shapes, with the closed-form SDF expressions worked out from the geometric definition.

```glsl
// Sphere SDF (centred at origin, radius r)
float sdSphere(vec3 p, float r) {
    return length(p) - r;
}

// Cuboid SDF (axis-aligned, half-extents b)
float sdBox(vec3 p, vec3 b) {
    vec3 q = abs(p) - b;
    return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
}

// CSG operators
float opUnion(float a, float b)     { return min(a, b); }
float opIntersect(float a, float b) { return max(a, b); }
float opSubtract(float a, float b)  { return max(a, -b); }
```

The CSG operators on SDFs are closed-form: min(a, b) gives the union, max(a, b) the intersection, and max(a, −b) the difference. The zero level set is exact in each case, though away from the surface the combined field is only a bound on true distance rather than a true distance field. The compositionality is a property that neither point clouds nor meshes have — combining two meshes requires explicit boolean-operation geometry; combining two SDFs requires a single min / max.
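For quick experimentation outside a shader, the same expressions port directly to Python (a minimal sketch mirroring the GLSL above; function names are our own):

```python
import math

def sd_sphere(p, r):
    """Sphere SDF: distance to origin minus radius."""
    return math.sqrt(sum(c * c for c in p)) - r

def sd_box(p, b):
    """Axis-aligned cuboid SDF with half-extents b."""
    q = [abs(c) - e for c, e in zip(p, b)]
    outside = math.sqrt(sum(max(c, 0.0) ** 2 for c in q))
    inside = min(max(q[0], max(q[1], q[2])), 0.0)
    return outside + inside

def op_union(a, b):     return min(a, b)
def op_intersect(a, b): return max(a, b)
def op_subtract(a, b):  return max(a, -b)

# A unit sphere with a half-unit box subtracted: the query point lies
# inside the sphere but outside the box, so the difference is negative.
p = [0.9, 0.0, 0.0]
d = op_subtract(sd_sphere(p, 1.0), sd_box(p, [0.5, 0.5, 0.5]))
```

At `p = [0.9, 0, 0]` the sphere SDF is −0.1 and the box SDF is +0.4, so the difference evaluates to max(−0.1, −0.4) = −0.1: the point survives the subtraction.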

5. CSG Composition

CSG (Constructive Solid Geometry) builds complex shapes by combining primitives via boolean operations. SDFs are unique among 3-D representations in that the CSG operators are closed-form expressions whose zero level sets are exact:

Table 2 — CSG operators on SDFs.

| Operation | SDF formula | Geometric meaning |
| --- | --- | --- |
| Union A ∪ B | min(f_A, f_B) | Surface of the union of the two solids |
| Intersection A ∩ B | max(f_A, f_B) | Surface of the intersection |
| Difference A − B | max(f_A, −f_B) | Surface of A minus B |
| Smooth union (k-blend) | smin(f_A, f_B; k) = min(f_A, f_B) − h² / (4k), h = max(k − \|f_A − f_B\|, 0) | Union with rounded corner at the join |

The compositionality is what makes SDFs the right substrate for procedural-CAD use cases (see CAPRI-Net [9]). Combining two meshes requires explicit boolean-operation geometry (CGAL's polygon CSG library); combining two SDFs requires a single min or max. SDFs are also differentiable through the CSG operators — min and max have well-defined gradients (subgradients where the two surfaces meet, well-defined everywhere else), so gradient-based optimisation works on CSG trees of SDFs.
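The smooth union can be sketched in a few lines of Python (the quadratic polynomial smin variant, one common formulation; assumed here for illustration rather than taken from the original study):

```python
def smin(a, b, k):
    """Quadratic smooth minimum: equals min(a, b) when |a - b| >= k,
    and blends with a rounded joint when the two fields are within k."""
    h = max(k - abs(a - b), 0.0)
    return min(a, b) - h * h / (4.0 * k)

# Far apart -> reduces to the exact min; nearby -> dips slightly below
# min, which is what rounds the corner where the two surfaces meet.
far = smin(0.0, 5.0, 0.5)   # == 0.0 (no blending)
near = smin(0.1, 0.2, 0.5)  # < 0.1 (blended)
```

Larger `k` widens the blend region; `k → 0` recovers the hard union.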

6. GAN-SDF — Brief and Abandoned

A short exploration of GAN-based SDF generation was attempted. The setup: a DeepSDF-class [7] encoder produces a latent vector; a generator network produces an SDF field conditioned on the latent; a discriminator distinguishes generated SDFs from training-set SDFs.

The exploration ran into the standard GAN-instability traps: mode collapse, oscillating training loss, and gradient-norm violation in the generated SDFs (the discriminator does not directly enforce the eikonal constraint, so the generator drifts off the SDF manifold). Conclusion at the time: the diffusion alternative pursued from Topic 6 onward [1] avoids these traps because diffusion's denoising objective is more compatible with the SDF gradient-norm property than the GAN's adversarial objective. Carried forward as a decision rule: for any geometric-constraint-bearing representation, use diffusion (or flow matching) rather than GAN.

7. Thesis-Line Propagation

Three downstream thesis-line decisions trace back to the SDF study.

Table 3 — Downstream propagation.

| Topic | Decision | Source in SDF study |
| --- | --- | --- |
| Hexplane AE pivot to continuous features [3] | Drop binary occupancy; use continuous depth + normals | Continuous-gradient property (§2) |
| Six-Plane Mesh extraction [5] | Per-plane depth-map input; marching-squares per plane | SDF as the implicit-surface stepping stone |
| UODF cross-reference [6] | Axis-aligned representations preferred | SDF's rotational symmetry analysis |
8. Conclusion

SDFs are the right foundational 3-D representation for the thesis-line ML work. The smooth-gradient property, the exact CSG operators, and the differentiability all align with neural-network training requirements. GAN-SDF was explored and abandoned in favour of diffusion / flow matching as the generator family.

References
[1] Jain, A. "A Minimal DDPM From First Principles (Red-Square)." Thesis research, Feb 2025. /whitepaper/diffusion-red-square
[2] Jain, A. "MambaFlow3D." Thesis research, Nov 2025. /whitepaper/mambaflow3d
[3] Jain, A. "When VAEs Meet Binary Geometry (Hexplane AE)." Thesis research, Dec 2025. /whitepaper/hexplane-ae
[4] Jain, A. "Hierarchical Part-Based Triplane Reconstruction." Thesis research, Feb 2026. /whitepaper/hierarchical-triplane
[5] Jain, A. "Six-Plane Orthographic Mesh Reconstruction." Thesis research, Feb 2026. /whitepaper/six-plane-mesh
[6] Jain, A. "UODF Paper Analysis." Thesis research, Jan 2026. /whitepaper/uodf
[7] Park, J. J. et al. "DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation." CVPR, 2019.
[8] Sitzmann, V. et al. "Implicit Neural Representations with Periodic Activation Functions (SIREN)." NeurIPS, 2020.
[9] Yu, F. et al. "CAPRI-Net: Learning Compact CAD Shapes with Adaptive Primitive Assembly." CVPR, 2022.