InvokeAI: A Stable Diffusion Toolkit

This is a fork of CompVis/stable-diffusion, the open-source text-to-image generator. It provides a streamlined process with various new features and options to aid the image generation process. It runs on Windows, Mac and Linux machines, and on GPU cards with as little as 4 GB of VRAM.

Note: This fork is rapidly evolving. Please use the Issues tab to report bugs and make feature requests. Be sure to use the provided templates; they will help us diagnose issues faster.

This repository was formerly known as lstein/stable-diffusion.

Table of Contents

  1. Installation
  2. Major Features
  3. Changelog
  4. Troubleshooting
  5. Contributing
  6. Support

Installation

This fork is supported across multiple platforms. You can find individual installation instructions below.

Hardware Requirements

System

You will need one of the following:

  • An NVIDIA-based graphics card with 4 GB or more of VRAM.
  • An Apple computer with an M1 chip.

Memory

  • At least 12 GB of main memory (RAM).

Disk

  • At least 6 GB of free disk space for the machine learning model, Python, and all its dependencies.

Note

If you have an NVIDIA 10xx-series card (e.g. the 1080 Ti), please run the dream script in full-precision mode as shown below.

Similarly, specify full-precision mode on Apple M1 hardware.

To run in full-precision mode, start dream.py with the --full_precision flag:

(ldm) ~/stable-diffusion$ python scripts/dream.py --full_precision
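Once started, the script reads prompts interactively. A minimal sketch of such a session (the prompt text and option values below are illustrative, not taken from this document):

```shell
# Launch the interactive dream script in full-precision mode,
# from the repository root with the project's conda environment active
python scripts/dream.py --full_precision

# The script then accepts prompts at its interactive prompt, e.g.:
#   dream> a watercolor painting of a lighthouse -n4 -s50
# where -n sets the number of images and -s the sampling steps
# (option names per this fork's CLI; values here are hypothetical)
```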

Features

Major Features

Other Features

Latest Changes

  • v1.14 (11 September 2022)

    • Memory optimizations for small-RAM cards. 512x512 now possible on 4 GB GPUs.
    • Full support for Apple hardware with M1 or M2 chips.
    • Added "seamless mode" for circular tiling of images. Generates beautiful effects. (prixt)
    • Inpainting support.
    • Improved web server GUI.
    • Lots of code and documentation cleanups.
  • v1.13 (3 September 2022)

    • Support for image variations (see VARIATIONS) (Kevin Gibbons and many contributors and reviewers)
    • Supports a Google Colab notebook for a standalone server running on Google hardware (Arturo Mendivil)
    • WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling (Kevin Gibbons)
    • WebUI supports incremental display of in-progress images during generation (Kevin Gibbons)
    • A new configuration file scheme that allows new models (including upcoming stable-diffusion-v1.5) to be added without altering the code. (David Wager)
    • Can specify --grid on the dream.py command line as the default.
    • Miscellaneous internal bug and stability fixes.
    • Works on M1 Apple hardware.
    • Multiple bug fixes.

For older changelogs, please visit CHANGELOGS.

Troubleshooting

Please check out our Q&A to get solutions for common installation problems and other issues.

Contributing

Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how to contribute to GitHub projects, here is a Getting Started Guide.

A full set of contribution guidelines, along with templates, are in progress, but for now the most important thing is to make your pull request against the "development" branch, and not against "main". This will help keep public breakage to a minimum and will allow you to propose more radical changes.

Contributors

This fork is a combined effort of various people from across the world. Check out the list of all these amazing people. We thank them for their time, hard work and effort.

Support

For support, please use this repository's GitHub Issues tracking service. Feel free to send me an email if you use and like the script.

Original portions of the software are Copyright (c) 2020 Lincoln D. Stein (https://github.com/lstein)

Further Reading

Please see the original README for more information on this software and underlying algorithm, located in the file README-CompViz.md.