July pfp
July
@july
A great explanation I heard on 3DGS vs NeRF: With NeRF you are estimating every point in space with a neural network, and then you bring a camera in to do raycasting (i.e. tracing rays from the camera through the scene, sampling along each ray, and accumulating volumetric density etc.). With 3DGS you are rasterizing the visual representation of all the 3D primitives in the scene onto a 2D image plane in front of the camera - no neural network is needed for the full representation. The explicit representation (the Gaussian blobs, which are the explicit primitives used) can come from a point cloud, or from a NeRF-like model that generates them. (A rough sketch of both pipelines follows below.)
2 replies
1 recast
30 reactions
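
To make that contrast concrete, here is a rough, self-contained Python/NumPy sketch of the two pipelines: a toy function standing in for the NeRF MLP, queried at samples along one camera ray and alpha-composited, versus explicit Gaussian blobs already projected to 2D and blended per pixel. Everything here (the function names, the toy radiance field, and using isotropic 2D Gaussians instead of full projected covariances) is a simplified assumption for illustration, not the actual NeRF or 3DGS code.

    # Toy sketch: NeRF-style volume rendering vs. 3DGS-style splatting.
    # Names and shapes are illustrative assumptions, not real implementations.
    import numpy as np

    # --- NeRF-style: query an implicit function at samples along a camera ray,
    #     then alpha-composite the sampled densities and colors.
    def toy_radiance_field(points):
        """Stand-in for the NeRF MLP: maps 3D points -> (density, RGB)."""
        density = np.exp(-np.linalg.norm(points - np.array([0.0, 0.0, 2.0]), axis=-1))
        color = np.clip(points * 0.25 + 0.5, 0.0, 1.0)
        return density, color

    def render_ray_nerf(origin, direction, n_samples=64, near=0.5, far=4.0):
        t = np.linspace(near, far, n_samples)               # sample depths along the ray
        points = origin + t[:, None] * direction            # 3D sample positions
        density, color = toy_radiance_field(points)         # query the "network" at every sample
        delta = np.full(n_samples, (far - near) / n_samples)  # uniform sample spacing
        alpha = 1.0 - np.exp(-density * delta)               # per-segment opacity
        trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance so far
        weights = trans * alpha
        return (weights[:, None] * color).sum(axis=0)        # composited pixel color

    # --- 3DGS-style: explicit primitives (Gaussian blobs) already projected onto
    #     the image plane are blended front-to-back; no per-sample network query.
    def render_pixel_gaussians(pixel_xy, means2d, radii, colors, opacities, depths):
        order = np.argsort(depths)                           # blend nearest blobs first
        out, transmittance = np.zeros(3), 1.0
        for i in order:
            d2 = np.sum((pixel_xy - means2d[i]) ** 2)
            alpha = opacities[i] * np.exp(-d2 / (2.0 * radii[i] ** 2))  # isotropic 2D falloff
            out += transmittance * alpha * colors[i]
            transmittance *= 1.0 - alpha
        return out

    if __name__ == "__main__":
        # One NeRF-style pixel: march a single ray through the implicit field.
        print("NeRF pixel:", render_ray_nerf(np.zeros(3), np.array([0.0, 0.0, 1.0])))

        # One 3DGS-style pixel: splat two already-projected Gaussians onto it.
        print("3DGS pixel:", render_pixel_gaussians(
            pixel_xy=np.array([16.0, 16.0]),
            means2d=np.array([[15.0, 16.0], [20.0, 12.0]]),
            radii=np.array([3.0, 5.0]),
            colors=np.array([[1.0, 0.2, 0.2], [0.2, 0.2, 1.0]]),
            opacities=np.array([0.8, 0.6]),
            depths=np.array([1.0, 2.0]),
        ))

The point of the sketch: the NeRF path has to evaluate the (stand-in) network at many sample points per ray, while the Gaussian path only loops over explicit primitives that were produced ahead of time, e.g. from a point cloud.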

BatinePeoplan pfp
BatinePeoplan
@batinepeoplan
3DGS simplifies the representation with explicit primitives, while NeRF relies on a neural network to represent the whole scene and renders it via raycasting
0 reply
0 recast
0 reaction