I'm excited to release ENFUGUE into beta with v0.2 - full SDXL support, mixed 1.5/XL Diffusion/Inpainting/Refining Pipelines, and full MacOS support with a portable installation. Free and open-source.
benjamin @benjamin@lemmy.dbzer0.com · Posts 1 · Comments 8 · Joined 2 yr. ago
YOU GOT IT WORKING?
You are the first person to stick through to the end and do it. Seriously. Thank you so much for confirming that it works on some machine besides mine and monster servers in the cloud.
The configuration is obviously a pain point, but running TensorRT on Windows at all puts us on the cutting edge. I'm hoping Nvidia makes it easier soon, or at least relaxes the license so I'm not running afoul of it by redistributing the required DLLs (for comparison, Nvidia publishes TensorRT binary libraries for Linux directly on pip, no extra license required).
It's also a pain that 11.7 is the best CUDA version for Stable Diffusion with TensorRT. I couldn't get 11.8, 12.0, or 12.1 to work at all with TensorRT on Windows (they work fine on their own). On Linux they would work, but at best they matched the speed of regular GPU inference, and at worst they were slower, which defeats the point entirely.
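If anyone's curious what that ends up looking like in practice, here's a minimal sketch of the kind of version gate this forces you to write. The helper name and the supported-version set are just my own illustration based on the behavior described above, not any official Nvidia compatibility matrix:

```python
# Hypothetical gate reflecting the behavior described above: only CUDA 11.7
# reliably gave a TensorRT speedup; newer toolkits either failed (Windows)
# or were no faster than plain GPU inference (Linux).
SUPPORTED_TRT_CUDA = {"11.7"}

def can_use_tensorrt(cuda_version: str) -> bool:
    """Return True only for CUDA versions observed to work with TensorRT here."""
    major_minor = ".".join(cuda_version.split(".")[:2])
    return major_minor in SUPPORTED_TRT_CUDA

print(can_use_tensorrt("11.7.1"))  # True  -> build/use TensorRT engines
print(can_use_tensorrt("12.1"))    # False -> fall back to regular GPU inference
```

Nothing clever, but it's the sort of check you end up baking in so users on 12.x silently get the regular pipeline instead of a broken TensorRT build.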