Wednesday, January 29

All That Matters

A Bill Wurtz-like approach to explaining 3D Gaussian Splatting

6 Comments

  • This might be good for people who want to be “wowed” or people who know the industry, but it’s pretty arcane to me. Multivariate? Alpha? “It looks like this:” *insert waveform*. Uh… ok.

  • All of the videos about this keep talking about its potential use for gaming or rendering, but as far as I can understand, this is fundamentally only for static scenes (or as a high-quality background with traditional CG overlaid on top). The whole compression scheme relies on capturing a single “moment”: lighting is baked into the representation, and objects don’t exist as distinct geometry (like the lighting, they are baked in as a whole). Can someone explain how this could work with real-time lighting or moving objects within the scene, if that’s even possible? (See the appearance-model sketch after the comments for why relighting is hard.)

  • Besides being very annoying and not very educational, this video is also wrong. Gaussian splatting is not new; see for instance [this paper from 2001](https://www.cs.umd.edu/~zwicker/publications/EWASplatting-TVCG02.pdf). This is referenced in [the paper](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/3d_gaussian_splatting_low.pdf) this video tries to explain:

    > Point-based methods efficiently render disconnected and unstructured geometry samples (i.e., point clouds) [Gross and Pfister 2011]. In its simplest form, point sample rendering [Grossman and Dally 1998] rasterizes an unstructured set of points with a fixed size, for which it may exploit natively supported point types of graphics APIs [Sainz and Pajarola 2004] or parallel software rasterization on the GPU [Laine and Karras 2011; Schütz et al. 2022]. While true to the underlying data, point sample rendering suffers from holes, causes aliasing, and is strictly discontinuous. Seminal work on high-quality point-based rendering addresses these issues by “splatting” point primitives with an extent larger than a pixel, e.g., circular or elliptic discs, ellipsoids, or surfels [Botsch et al. 2005; Pfister et al. 2000; Ren et al. 2002; Zwicker et al. 2001b]

    His comparison with photogrammetry is also strange; photogrammetry has nothing to do with path tracing, ray tracing, or diffusion. It’s used to produce a mesh model that can later be rendered, just as this is used to produce a 3D representation of a scene that is then rendered in real time. (The toy point-vs-splat sketch after the comments illustrates what “splatting” in the quote above actually means.)
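
To make the second commenter’s point concrete, here is a minimal sketch (an illustration with an assumed, simplified layout, not the paper’s actual code) of the per-splat parameters in 3D Gaussian Splatting. The key point: appearance is stored per splat as spherical-harmonic (SH) color coefficients fitted to the captured photos. There is no light source or material model to edit, so moving a light or an object would mean re-optimizing the splats themselves.

```python
# A minimal sketch, assuming a simplified per-splat layout; the real
# renderer uses higher SH degrees and a CUDA rasterizer.
from dataclasses import dataclass
import numpy as np

@dataclass
class Splat:
    mean: np.ndarray       # (3,) center position in world space
    scale: np.ndarray      # (3,) axis scales of the anisotropic 3D Gaussian
    rotation: np.ndarray   # (4,) quaternion orienting the Gaussian
    opacity: float         # blending weight used during compositing
    sh_coeffs: np.ndarray  # (K, 3) SH coefficients encoding view-dependent RGB

def splat_color(splat: Splat, view_dir: np.ndarray) -> np.ndarray:
    """Evaluate a splat's RGB for a viewing direction (degree-1 SH only).

    The color is a pure lookup over view direction: whatever lighting was
    present at capture time is baked into sh_coeffs, which is why the
    scene cannot be relit without re-optimizing.
    """
    x, y, z = view_dir / np.linalg.norm(view_dir)
    basis = np.array([0.2820948, -0.4886025 * y, 0.4886025 * z, -0.4886025 * x])
    return np.clip(basis @ splat.sh_coeffs[:4], 0.0, 1.0)
```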
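
The “splatting” in the quoted passage is also easy to show in miniature. Below is a toy 2D comparison (my simplification; real EWA splatting projects anisotropic 3D Gaussians and handles visibility): drawing each point as a single pixel leaves holes, while spreading each point over a Gaussian footprint larger than a pixel fills the gaps and antialiases.

```python
# A toy sketch contrasting point-sample rendering with Gaussian splatting;
# isotropic footprints and no depth ordering, purely for illustration.
import numpy as np

def render_points(points_px, colors, h, w):
    """Point-sample rendering: one pixel per point (holes, aliasing)."""
    img = np.zeros((h, w, 3))
    for (x, y), c in zip(points_px, colors):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:
            img[yi, xi] = c
    return img

def render_splats(points_px, colors, h, w, sigma=1.5):
    """Splatting: each point contributes a Gaussian footprint, blended."""
    ys, xs = np.mgrid[0:h, 0:w]
    img = np.zeros((h, w, 3))
    weight = np.zeros((h, w))
    for (x, y), c in zip(points_px, colors):
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        img += g[..., None] * np.asarray(c, dtype=float)
        weight += g
    # Normalize overlapping footprints so blended colors stay in range.
    return img / np.maximum(weight[..., None], 1e-8)
```

On a sparse point set, `render_points` gives a dotted image and `render_splats` a continuous one; that gap-filling is the whole idea behind rendering “point primitives with an extent larger than a pixel.”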
