Old VoltaML
This section is for the old VoltaML, which is no longer maintained. It is kept here for anyone who wants to try TensorRT.
Docker setup (if required)
Set up Docker on Ubuntu using these instructions.
Set up Docker on Windows using these instructions.
Launch voltaML container
Download the docker-compose.yml file from this repo.
⚠️ Linux: open the file in a text editor and change the path of the output folder, which is configured for Windows by default.
```yaml
output:
  driver: local
  driver_opts:
    type: none
    device: C:\voltaml\output # this line
    o: bind
```
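For example, on Linux you might point the bind mount at a directory in your home folder. The path below is purely illustrative; use any existing folder on your machine:

```yaml
output:
  driver: local
  driver_opts:
    type: none
    device: /home/<user>/voltaml/output # changed for Linux
    o: bind
```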
Then, open a terminal in that folder and run the following command:
Linux

```shell
sudo docker-compose up
```

Windows

```shell
docker-compose up
```
How to use webUI
Once the container launches, a Flask app starts and prints a URL; copy and paste it into your browser to open the webUI on your local host.
There are two backends for running Stable Diffusion: PyTorch and TensorRT (NVIDIA's fastest inference runtime).
To run PyTorch inference, select a model; it will be downloaded into the container (which takes a few minutes) and the inference results will be displayed. Downloaded models will be shown as below.
To run TensorRT inference, go to the Accelerate tab, pick a model from our model hub, and click the Accelerate button.
Once acceleration is done, the model will show up in your TensorRT drop-down menu.
Switch your backend to TensorRT, select the model and enjoy the fastest outputs 🚀🚀
Benchmark
The benchmarks below were run generating a single 512x512 image (batch size 1) with 50 iterations.
| Model | T4 (it/s) | A10 (it/s) | A100 (it/s) | 4090 (it/s) | 3090 (it/s) | 2080Ti (it/s) |
|---|---|---|---|---|---|---|
| PyTorch | 4.3 | 8.8 | 15.1 | 19 | 11 | 8 |
| Flash attention xformers | 5.5 | 15.6 | 27.5 | 28 | 15.7 | N/A |
| AITemplate | Not supported | 26.7 | 55 | 60 | N/A | Not supported |
| VoltaML (TRT-Flash) | 11.4 | 29.2 | 62.8 | 85 | 44.7 | 26.2 |
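To translate the throughput figures above into wall-clock time per image, divide the 50 iterations by the it/s number. A minimal sketch, using the A100 column from the table as an example:

```python
# Convert benchmark throughput (it/s) into seconds per image for the
# 50-iteration, 512x512, batch-size-1 setup used in the table above.
ITERATIONS = 50

def seconds_per_image(it_per_s: float) -> float:
    """Time to generate one image at the given iteration throughput."""
    return ITERATIONS / it_per_s

# A100 figures from the table: PyTorch 15.1 it/s vs VoltaML 62.8 it/s.
print(f"PyTorch: {seconds_per_image(15.1):.2f} s/image")
print(f"VoltaML: {seconds_per_image(62.8):.2f} s/image")
print(f"Speedup: {62.8 / 15.1:.1f}x")
```

The same arithmetic applies to any cell in the table: higher it/s means proportionally less time per image.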
⚠️ ‼️ Warnings/Caveats
This is v0.1 of the product. Things might break. A lot of improvements are on the way, so please bear with us.
- This will only work for NVIDIA GPUs with compute capability > 7.5.
- Cards with less than 12GB of VRAM will have issues with acceleration, due to the high memory required for the conversions. We're working on resolving these in our next release.
- While a model is accelerating, no other functionality will work, since the GPU will be fully occupied.