Group Research Study

Visual Technology Research • Spring 2026

Week 01
Fast Start - Topic & Initial Research
Jan 5 - Jan 11, 2026
Completed

We assembled the team this week and immediately split into research territories. No slow ramp-up, no endless planning meetings. Everyone came prepared with clear technical areas they wanted to investigate.

The goal: understand what's technically possible before we commit to any specific project direction. Research first, decide later.

My research spans four interconnected areas that we think might work together:

  • Gaussian Splatting in Unreal Engine - Can we relight GS captures in real-time? What's the performance cost? Can we manipulate splat data, or is it locked in black boxes?
  • USD (Universal Scene Description) - How does it handle non-standard data formats like GS? Can it orchestrate multi-tool workflows?
  • GenAI Integration - Where does generative AI fit in spatial computing pipelines?
  • TouchDesigner Connectivity - Can TD bridge gaps between these systems?

These aren't separate research tracks - they're pieces of the same puzzle. If GS can't export to USD, can TD act as a bridge? If we can't manipulate splats in Unreal, can GenAI tools help us process the data differently?

Gaussian Splatting has exploded over the past year as a capture technology. You can scan real-world environments with unprecedented speed and visual fidelity. But there's a fundamental workflow problem:

GS is treated as an end product, not source material. We can't iterate.

What We Need:
Capture → Manipulate Data → Create Interactive Experiences

If we can make GS behave like editable 3D data rather than baked renders, it opens up applications in architectural visualization, virtual production, interactive installations, anywhere photogrammetry meets real-time creative control.

Research Questions

  1. Can Gaussian Splats be relit effectively in Unreal?
  2. What's the performance cost of real-time relighting?
  3. Can we access/manipulate individual splat data?
  4. Does USD export work with GS data?
  5. How do post-processing effects interact with splat rendering?

Quick answers before diving into details:

  • Real-time relighting works
  • Performance cost is acceptable
  • No particle-level access in commercial plugins
  • USD export is broken (only outputs bounding boxes)
  • Post-processing integrates correctly

Got trial access to https://volinga.ai/main Friday afternoon. Installation was smooth - followed their plugin docs at https://docs.volinga.ai/ue-plugin, took maybe 20 minutes from download to first render in UE 5.6.1.

Used open-source PLY files from Sketchfab for testing - specifically chose indoor scenes with complex lighting to stress-test the relighting capabilities.

Test Hardware: RTX 4070 SUPER / Ryzen 9 3900X 12-core / 64GB RAM

Ran three specific tests to isolate performance costs:

Test A: Baseline (Relighting Disabled)
Frame rate: ~70 fps
Test B: Relighting Enabled
Frame rate: ~60 fps
Test C: Relighting + 1 Additional Point Light
Frame rate: ~60 fps

Key Finding: The 10fps drop when enabling relighting is acceptable for real-time work. More importantly, adding extra lights doesn't scale the performance cost linearly - the system appears to batch light calculations efficiently. This suggests we could have multiple dynamic lights in a scene without destroying frame rate.
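Frame-rate deltas can be misleading because fps is nonlinear in frame time; converting the numbers above to milliseconds gives a clearer picture of the relighting budget. A minimal sketch (plain Python, using the measured values from Tests A and B):

```python
# Express the relighting cost in milliseconds per frame rather than fps:
# a fixed fps drop costs more ms at low framerates than at high ones.
def frame_ms(fps: float) -> float:
    """Convert frames-per-second to milliseconds-per-frame."""
    return 1000.0 / fps

baseline_ms = frame_ms(70)         # ~14.29 ms without relighting
relit_ms = frame_ms(60)            # ~16.67 ms with relighting
cost_ms = relit_ms - baseline_ms   # ~2.38 ms spent on relighting per frame
```

By this measure, relighting costs roughly 2.4 ms per frame, which leaves comfortable headroom inside a 33 ms (30 fps) or even 16.7 ms (60 fps) budget.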

Wanted to verify that GS rendering integrates properly with Unreal's standard pipeline, not bypassing post-process stages.

Test: Bloom Effect

  • Result: Bloom applies correctly to splat render
  • Color bleeding and light halos behave as expected
  • No artifacting or weird edge cases

This confirms Volinga isn't taking shortcuts. The splats render through Unreal's proper pipeline, which means any post-process effects we need later (color grading, DOF, motion blur, etc.) should work correctly.

This is where things got interesting. Tried to export the GS scene to USD using Unreal's native USD exporter, expecting to get point cloud data or some geometric representation.

Result: Empty bounding box mesh. That's it.

  • No geometry data
  • No point cloud representation
  • No splat attributes

Started researching Luma AI's Unreal plugin at Luma AI Documentation

Why Luma AI Might Be Different:

Their architectural approach is fundamentally different from Volinga's: they render GS as Niagara particle systems rather than through proprietary rasterization. This could theoretically give us:

  • Access to Niagara's particle manipulation systems
  • Blueprint-accessible parameters
  • Potential for custom shaders
  • Better USD integration (Niagara systems can export)

Current Intel:

  • Supports UE 5.1-5.3
  • Community reports suggest limited functionality
  • Found a user test on YouTube showing only exposure control, not full relighting

Next update: Luma AI results + TouchDesigner findings + how this is shaping our project direction. Documentation by end of week.

Week 02
Technical Exploration & Prototyping
Jan 12 - Jan 18, 2026
Completed

New Approach: LiDAR Point Cloud Plugin Documentation

It turns out PLY can store vertices as well as triangles, much like the OBJ format.

We can modify the header to preserve only the vertex data, reformat it as XYZ, and then use Unreal's LiDAR Point Cloud plugin to read the result.
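The header-stripping step can be sketched in a few lines of Python. This is a minimal version assuming a simple ASCII PLY whose first three vertex properties are x, y, z; binary PLY files and color channels would need additional handling:

```python
# Convert an ASCII .ply to .xyz by keeping only the vertex positions.
# Parses the PLY header for the vertex count, then copies exactly that
# many data lines, dropping faces and any other trailing elements.
def ply_to_xyz(ply_path: str, xyz_path: str) -> None:
    with open(ply_path) as f:
        lines = f.read().splitlines()

    vertex_count = 0
    header_end = 0
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            vertex_count = int(line.split()[-1])
        elif line.strip() == "end_header":
            header_end = i + 1
            break

    with open(xyz_path, "w") as out:
        for line in lines[header_end:header_end + vertex_count]:
            x, y, z = line.split()[:3]  # first three properties: position
            out.write(f"{x} {y} {z}\n")
```

The same loop could carry the RGB columns through as well, since the LiDAR Point Cloud plugin can ingest colored XYZ data.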

Python `.ply to .xyz` tool results:

  • Above 110 fps
  • Color data works
  • Relighting works
Luma.ai

  • Volinga plugin test results with performance benchmarks
  • LiDAR Point Cloud workflow Python script (PLY header converter), also documented on GitHub
  • Luma.ai Plugin install
Week 03
Pipeline Integration & Workflow Testing
Jan 19 - Jan 25, 2026
Completed

TO DO: (listed in priority)

  • USD Integration with Yiqi
  • Instancing Blueprint Tool in Unreal
  • .ply to Niagara FX, also documented on GitHub
  • Look into GSOPs
  • GenAI integration opportunity assessment

Troubleshoot

The `Single.usdc` file references an external USD sublayer that is missing:

To make the USD file work correctly, the following structure is needed:

├── Single.usdc
└── obj/
    └── Fern_v1/
        └── Out_Growth.usd
                

For the USD Stage to resolve the reference, the following is required:

- Place `obj` folder in the same directory as `Single.usdc`
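A quick pre-flight check for this layout can be scripted before opening the stage, so a missing sublayer is caught before Unreal silently imports an incomplete scene. This is a sketch using only the standard library, with the paths taken from the directory listing above:

```python
# Verify the external sublayer referenced by Single.usdc exists where
# the USD Stage expects it, relative to the root layer's directory.
from pathlib import Path

def check_usd_layout(root_usd: str) -> bool:
    """Return True if the root layer and its expected sublayer both exist."""
    root = Path(root_usd)
    sublayer = root.parent / "obj" / "Fern_v1" / "Out_Growth.usd"
    return root.exists() and sublayer.exists()
```

A more general version could parse the root layer's actual `subLayers` list with the `pxr.Sdf` API from `usd-core`, but for a single known reference the path check above is enough.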

Render Test

USD GeoCache in Sequencer

Pay Attention:

1. Edits in the default level sequence are lost when Unreal Engine restarts; create a new level sequence for the .usdc and set up the geometry cache before rendering.

2. Materials also reset when Unreal restarts; we need to find an approach to override the USD Stage via Blueprint.

3. Some frames have strange artifacts.

Video: Instance Tool based on Spline

Research:

Performance Notes

  • Geometry Cache (Alembic/USD animation cache) does NOT support native GPU instancing in Unreal Engine.

  • Each Geometry Cache instance:

        - Requires separate draw calls

        - Streams vertex data from disk per instance

        - Performance degrades quickly with multiple instances

  • UsdGeomPointInstancer is NOT fully supported in Unreal Engine's native USD importer.

References:

        - Epic Forums: "Alembic regenerates the mesh every frame"

        - SideFX Forums: "Alembic archives have a lot of overhead when using instancing"

Workarounds:

  • PCG workflow

  • VAT workflow (VAT DOES support GPU instancing)

                - Animation is baked into textures (Position, Normal, etc.)

                - Uses standard Static Mesh with custom material

Conclusion

        For mass instancing of animated geometry: VAT is better.

GSOPs: https://github.com/cgnomads/GSOPs

GSOPs is a free Houdini plugin for Gaussian Splatting editing developed by David Rhodes & Ruben Diaz.

Current Test Status

  • Relight example file working

  • Animated character support confirmed

  • HDRI lighting input supported

Highlight of GSOPs: Coarse Meshing

Notes: .ply files work fine when using the provided example files, but files from outside sources crash frequently; we should investigate and try more files.
Update (1/29): Fixed by optimizing splats before converting to VDB

Bake position and color information into textures, then apply to Niagara FX; still a work in progress.
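The baking idea can be sketched as packing normalized point positions into an RGB float image that a Niagara emitter samples per particle. This is a sketch of the concept assuming NumPy, not the exact tool used in the project; color would be baked into a second texture the same way:

```python
# Bake point positions into an RGB float "texture" (a NumPy array) for
# Niagara to sample. Positions are normalized into the point cloud's
# bounding box so XYZ fits the 0-1 range of a color texture; the shader
# side would rescale by the stored bounds to recover world positions.
import numpy as np

def bake_positions(points: np.ndarray, width: int = 256) -> np.ndarray:
    """Pack an (N, 3) position array into a (H, width, 3) float texture."""
    lo = points.min(axis=0)
    span = np.maximum(points.max(axis=0) - lo, 1e-8)  # avoid divide-by-zero
    normalized = (points - lo) / span                 # map XYZ into [0, 1]
    height = int(np.ceil(len(points) / width))
    tex = np.zeros((height, width, 3), dtype=np.float32)
    tex.reshape(-1, 3)[: len(points)] = normalized    # row-major packing
    return tex
```

Each particle then reads its texel by index (particle ID → row/column), which is the same layout VAT workflows use for vertex animation.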

  • Integrate 3DGS into USD pipeline. Here is the tech breakdown for the AI Summit
  • Test 3DGS in TouchDesigner
  • Refine result gallery for AI Summit

Result Gallery @ AI Summit | NVIDIA x SCAD




TouchDesigner Test

POP import doesn't support 3DGS yet; here is a solution from the community: https://derivative.ca/community-post/asset/gaussian-splatting/69107

GaussianSplat
├── File Parameter (point to .PLY)
├── Camera Controls
└── renderTOP
                ├── Alpha Threshold ( for performance )
                └── Bitonic Sort
 

Current Limitations:
  • Lighting is baked (limited relighting capabilities)
  • Not flexible for camera settings

Interactive Reference:


IDEA: From Reality to Stylized 3DGS
1. Photorealistic image to 3DGS
2. Stylized image to 3DGS
3. Custom HLSL/Python tool
4. Interactive with mouse hover (X, Y) - ultimately with a different medium

Visual Reference


Test
Animation driven by noise
Next Steps:
1. Add mouse interaction
2. Add one more .ply into the scene

  • Landscape exploration in Unreal
  • Look Development
  • Document Perforce setup and tech pipeline
  • AI-assisted Previs video with team

  • Look development for shot4
  • Material exploration
Workflow - From Marble.ai to Unreal




Material Development for VAT tree:
1. Changed the default lit shader to subsurface
2. Added hue shift control



Look development for shot3 and shot4.

  • Look Development for Shot 3 & Shot 4
  • Substrate Material Exploration
  • R&D for subsurface scattering approaches in Unreal Engine

Documenting everything. Writing up technical findings, successful workflows, dead ends we hit. Making it clear and shareable.

  • Technical documentation and workflow guides
  • ComfyUI test
  • Layout shot4

Creating a polished demo that shows what's possible. Visual presentation of research and results. Live demonstration and Q&A.

  • Interactive technical demo showcasing key findings
  • Visual presentation slides
  • Live demonstration preparation
  • Q&A session with mentors and peers