Stable Diffusion that can generate high-definition videos.


stable-diffusion-videos

Try it yourself in Colab: Open In Colab

Example - morphing between "blueberry spaghetti" and "strawberry spaghetti"

https://user-images.githubusercontent.com/32437151/188721341-6f28abf9-699b-46b0-a72e-fa2a624ba0bb.mp4

How it Works

The Notebook/App

The in-browser Colab demo allows you to generate videos by interpolating the latent space of Stable Diffusion.

You can either dream up different versions of the same prompt, or morph between different text prompts (with seeds set for each for reproducibility).
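Each frame of the video comes from a point interpolated between the endpoints' latent noise vectors (and text embeddings). A minimal sketch of the two interpolation modes involved, linear (lerp) and spherical (slerp), using NumPy arrays as stand-ins for the latents:

```python
import numpy as np

def lerp(t, v0, v1):
    """Linear interpolation: straight line between v0 and v1."""
    return (1 - t) * v0 + t * v1

def slerp(t, v0, v1, eps=1e-7):
    """Spherical interpolation: follows the arc between v0 and v1,
    which better preserves the magnitude of Gaussian latent vectors."""
    v0_u = v0 / np.linalg.norm(v0)
    v1_u = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0_u, v1_u), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Vectors nearly parallel: the arc degenerates, fall back to lerp.
        return lerp(t, v0, v1)
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```

This is an illustrative sketch of the general technique, not the library's exact internals; note the `use_lerp_for_text` option in `walk` below, which toggles between these two modes for the text embeddings.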

The app is built with Gradio, which allows you to interact with the model in a web app. Here's how I suggest you use it:

  1. Use the "Images" tab to generate images you like.

    • Find two images you want to morph between
    • These images should use the same settings (guidance scale, scheduler, height, width)
    • Keep track of the seeds/settings you used so you can reproduce them
  2. Generate videos using the "Videos" tab

    • Using the images you found from the step above, provide the prompts/seeds you recorded
    • Set num_walk_steps: a small number like 3 or 5 is fine for testing, but for good results you'll want something larger (60-200 steps)
    • You can set the output_dir to the directory you wish to save to
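Recording seeds works because a seed deterministically fixes the initial latent noise, so the same prompt and settings reproduce the same image. A quick sketch of that idea (the latent shape and NumPy RNG here are illustrative stand-ins; the library uses a PyTorch generator):

```python
import numpy as np

def initial_latents(seed, shape=(4, 64, 64)):
    """Deterministic Gaussian latents: the same seed always yields the
    same starting noise, hence the same image for a given prompt."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = initial_latents(42)
b = initial_latents(42)   # identical to a
c = initial_latents(1337) # different noise
```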

Python Package

Setup

Install the package

pip install stable_diffusion_videos

Authenticate with Hugging Face

huggingface-cli login

Programmatic Usage

from stable_diffusion_videos import walk

walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 1337],
    output_dir='dreams',     # Where images/videos will be saved
    name='animals_test',     # Subdirectory of output_dir where images/videos will be saved
    guidance_scale=8.5,      # Higher adheres to prompt more, lower lets model take the wheel
    num_steps=5,             # Change to 60-200 for better results...3-5 for testing
    num_inference_steps=50, 
    scheduler='klms',        # One of: "klms", "default", "ddim"
    disable_tqdm=False,      # Set to True to disable tqdm progress bar
    make_video=True,         # If false, just save images
    use_lerp_for_text=True,  # Use lerp for text embeddings instead of slerp
    do_loop=False,           # Change to True if you want last prompt to loop back to first prompt
)

Run the App Locally

from stable_diffusion_videos import interface

interface.launch()

Credits

This work builds on a script shared by @karpathy. That script was modified into this gist, which was then updated and expanded into this repo.

Contributing

You can file any issues/feature requests here.

Enjoy 🤗

Extras

Upsample with Real-ESRGAN

You can also 4x upsample your images with Real-ESRGAN!

First, you'll need to install it...

pip install realesrgan

Then, you'll be able to use upsample=True in the walk function, like this:

from stable_diffusion_videos import walk

walk(['a cat', 'a dog'], [234, 345], upsample=True)

The above may cause you to run out of VRAM. If so, you can run the upsampling step separately.

To upsample an individual image:

from stable_diffusion_videos import PipelineRealESRGAN

pipe = PipelineRealESRGAN.from_pretrained('nateraw/real-esrgan')
enhanced_image = pipe('your_file.jpg')

Or, to do a whole folder:

from stable_diffusion_videos import PipelineRealESRGAN

pipe = PipelineRealESRGAN.from_pretrained('nateraw/real-esrgan')
pipe.upsample_imagefolder('path/to/images/', 'path/to/output_dir')