Things arrived in the mail today! With luck, I'll have the server for the bot set up this weekend :D

a hitch in my plan: opening a complicated model on a windows 11 vm running on half a CPU core on a potato takes a hot minute to compile a few hundred shaders 🙃

another hitch in my plan is my RSI is flaring up tonight :|

this is the problem model in the first image. each color is a different shader that tangerine had to generate to render those voxels, and the average complexity of the generated shaders is also high.

the second image is the generated part of one of these shaders. it's basically the object structure w/ all the params pulled out.

it's basically halfway to being an interpreter: i'd just need to also pull the call structure out into a parameter buffer, and replace the function with an interpreter loop. it would be slower than rendering with all of the compiled shaders, but it only needs to produce one frame for the bot
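for flavor, here's a minimal sketch of what that kind of interpreter loop could look like (hypothetical opcodes and buffer layout, not tangerine's actual code): the CSG tree gets flattened into an opcode stream plus a parameter buffer, and evaluation is a little stack machine:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical instruction set; tangerine's real encoding differs.
enum Op { OP_SPHERE, OP_UNION, OP_DIFF };

struct Vec3 { float X, Y, Z; };

// Evaluate the flattened CSG program at point P.  OP_SPHERE consumes
// (center x, y, z, radius) from the parameter buffer and pushes a
// distance; the set operators pop two distances and push the result.
float Interpret(const std::vector<int>& Program,
                const std::vector<float>& Params,
                Vec3 P)
{
    std::vector<float> Stack;
    size_t Cursor = 0;
    for (int Opcode : Program)
    {
        if (Opcode == OP_SPHERE)
        {
            float X = P.X - Params[Cursor++];
            float Y = P.Y - Params[Cursor++];
            float Z = P.Z - Params[Cursor++];
            float Radius = Params[Cursor++];
            Stack.push_back(std::sqrt(X * X + Y * Y + Z * Z) - Radius);
        }
        else
        {
            float B = Stack.back(); Stack.pop_back();
            float A = Stack.back(); Stack.pop_back();
            // min for union, max(a, -b) for subtraction.
            Stack.push_back(Opcode == OP_UNION ? std::min(A, B)
                                               : std::max(A, -B));
        }
    }
    return Stack.back();
}
```

the compiled shaders inline exactly this math per model; the interpreter trades that inlining for zero compile time.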

incidentally this happens to also be my plan for how to deal with shader compiler hitches in general, i've just been procrastinating on it because opengl doesn't believe in async shader compiling

here's some very entertaining reading about the same problem in a completely different project dolphin-emu.org/blog/2017/07/3

here's the wheel loading in normally on my main machine. the shaders are cached by the driver, so the pop in is a lot faster than it would be on a cold start.

and here's the same model running with the new interpreter. once the octree stuff is done processing, the model renders with no pop in. the time to first image is much lower, but each frame takes much longer.

anyways, here's the code for just the interpreter if you are curious. it is quite short. github.com/Aeva/tangerine/blob

when i get around to adding occlusion culling this should become quite fast, as a lot of the frame time is burned rendering voxels you can't see. visibility feedback could also be used to prioritize the compile queue. this also might mean a wysiwyg editor could be possible, since the time to render is instant once the octree is solved. lots of exciting stuff.

@aeva whoa, this is cool! Do you have different sdfs at different levels of the octree?

@jonbro yes :D the root of the tree contains the entire model, which would be too slow to render compiled or otherwise. the octree splits to eliminate dead space, and as it does so each node removes the parts of the CSG tree that can't affect it, resulting in a simpler SDF
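a hand-wavy sketch of one possible per-node culling test (hypothetical types, not tangerine's actual code): for a union, a child can be dropped from a cell's program if its sibling is guaranteed to win the min() everywhere in the cell:

```cpp
#include <cmath>

struct Vec3 { float X, Y, Z; };

// Hypothetical SDF node interface; tangerine's real types differ.
struct SDFNode
{
    virtual float Eval(Vec3 P) const = 0;
    virtual ~SDFNode() {}
};

struct Sphere : SDFNode
{
    Vec3 Center;
    float Radius;
    Sphere(Vec3 InCenter, float InRadius) : Center(InCenter), Radius(InRadius) {}
    float Eval(Vec3 P) const override
    {
        float X = P.X - Center.X, Y = P.Y - Center.Y, Z = P.Z - Center.Z;
        return std::sqrt(X * X + Y * Y + Z * Z) - Radius;
    }
};

// A 1-Lipschitz SDF can change by at most CellRadius across a cell, so
// Eval at the center gives interval bounds over the whole cell.  If
// Shape's best-case distance still exceeds Sibling's worst case,
// min(Shape, Sibling) always picks Sibling inside the cell, and Shape
// can be culled from this cell's program.
bool CanCullFromUnion(const SDFNode& Shape, const SDFNode& Sibling,
                      Vec3 CellCenter, float CellRadius)
{
    float ShapeBest = Shape.Eval(CellCenter) - CellRadius;
    float SiblingWorst = Sibling.Eval(CellCenter) + CellRadius;
    return ShapeBest > SiblingWorst;
}
```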

@aeva oh neat! storing aabbs for the sdf ops is clever.

@aeva I guess I can't think of an alternate way to approach it :D

this is really cool to see an end to end implementation of this.

@jonbro thank you :D also i wrote a blog post a while back about the general technique zone.dog/braindump/sdf_cluster

@aeva awesome! I'm not sure I'm ready to revive my toy voxel sdf thingy, but these notes are gonna be my starting point if i do.

I gave up at the culling SDF ops stage, so I could never really have complex models :(

@jonbro so far this approach is working quite well for me. the main problem is the distance fields aren't exact after any set operators, so it can't cull as aggressively on the CPU as i would like it to. it also definitely needs clustered occlusion culling. I think this strat has promise though.
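a tiny numeric illustration of that inexactness (toy example, not from tangerine): intersect two unit spheres that touch at a single point. max() of the two fields is still a valid lower bound on the true distance to the surface, but off-axis it underestimates, which is exactly what weakens conservative interval culling:

```cpp
#include <algorithm>
#include <cmath>

// Distance to a sphere of radius R centered at (CX, CY, CZ).
float SphereDist(float X, float Y, float Z,
                 float CX, float CY, float CZ, float R)
{
    float DX = X - CX, DY = Y - CY, DZ = Z - CZ;
    return std::sqrt(DX * DX + DY * DY + DZ * DZ) - R;
}

// Unit spheres at the origin and at (2, 0, 0) touch only at (1, 0, 0),
// so the exact distance to their intersection is just |P - (1, 0, 0)|.
// The usual max() intersection operator only yields a lower bound.
float IntersectBound(float X, float Y, float Z)
{
    return std::max(SphereDist(X, Y, Z, 0, 0, 0, 1),
                    SphereDist(X, Y, Z, 2, 0, 0, 1));
}

float ExactDist(float X, float Y, float Z)
{
    float DX = X - 1;
    return std::sqrt(DX * DX + Y * Y + Z * Z);
}
```

(min() for a union stays exact outside the shapes, but max() for intersection and subtraction only bounds the distance, so CPU-side culling has to leave more slack.)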
