API
This is a guide to running this project in the PyTorch-only configuration for API development.
Requirements
- Operating system: Windows or Linux
- Graphics card: NVIDIA GPU with CUDA support
- Driver version: 515+
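To verify that your GPU is detected and that the driver meets this requirement, you can check the version reported by nvidia-smi (the exact output layout varies between driver releases):
nvidia-smi
The driver version is printed in the header of the output and should be 515 or higher.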
If you are running on Linux, you will need to install CUDA by following the instructions here, or, if you are on Ubuntu, via the Software & Updates manager.
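As a minimal sketch, assuming the packaged version is acceptable for your setup, CUDA can also be installed on Ubuntu from the distribution repositories (follow the official instructions instead if you need a specific CUDA release):
sudo apt update
sudo apt install nvidia-cuda-toolkit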
Running locally
1. Clone the repository
git clone https://github.com/VoltaML/voltaML-fast-stable-diffusion.git --branch experimental
2. Move into the project directory
cd voltaML-fast-stable-diffusion
3. Set up environment variables
Optional variables:
- HUGGINGFACE_TOKEN
- DISCORD_BOT_TOKEN
- FASTAPI_ANALYTICS_KEY
- LOG_LEVEL
Refer to the .env file for the supported values, along with links and guides on how to obtain them.
Windows
Please read this guide to learn how to set up environment variables on Windows.
Variables stored this way are persistent and will still be available after restarting your computer.
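As an alternative sketch, you can set a persistent user-level variable from a terminal with the built-in setx command (the value below is only a placeholder):
setx HUGGINGFACE_TOKEN "your_token_here"
Note that setx only affects newly opened terminals, so restart your terminal before running the project.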
Linux
export VARIABLE_NAME=VARIABLE_VALUE
Persistence
You can also make a variable persistent by adding the same export line to your ~/.bashrc file (or ~/.zshrc if you are using ZSH), as sketched below.
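For example, to persist one of the optional variables (the token value below is only a placeholder):
echo 'export HUGGINGFACE_TOKEN=your_token_here' >> ~/.bashrc
source ~/.bashrc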
4. Create a virtual environment to keep dependencies isolated
WARNING
If you are using Linux, you might need to install the python3-virtualenv package: sudo apt install python3-virtualenv
For Windows users, run this command: pip install virtualenv
WARNING
If you are running Linux, you might need to use python3 instead of python.
python -m virtualenv venv
5. Activate the virtual environment
Windows
.\venv\Scripts\activate.ps1
or
.\venv\Scripts\activate.bat
Linux
source venv/bin/activate
6. Run the main.py file (it will install dependencies automatically)
WARNING
If you are running Linux, you might need to use python3 instead of python.
python main.py
7. Access the API documentation to see if everything is working
You should now see that the WebUI is running on http://localhost:5003/.
There is also interactive documentation for the API, available at http://localhost:5003/api/docs.
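As a quick sanity check from the command line (assuming curl is installed), you can confirm that the documentation endpoint responds with HTTP 200:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5003/api/docs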