
Stable Diffusion web UI

A browser interface for Stable Diffusion, based on the Gradio library.

Features

Detailed feature showcase with images:

  • Original txt2img and img2img modes
  • One click install and run script (but you still must install python and git)
  • Outpainting
  • Inpainting
  • Color Sketch
  • Prompt Matrix
  • Stable Diffusion Upscale
  • Attention, specify parts of text that the model should pay more attention to
    • a man in a ((tuxedo)) - will pay more attention to tuxedo
    • a man in a (tuxedo:1.21) - alternative syntax
    • select text and press Ctrl+Up or Ctrl+Down (or Command+Up or Command+Down if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
  • Loopback, run img2img processing multiple times
  • X/Y/Z plot, a way to draw a 3-dimensional plot of images with different parameters
  • Textual Inversion
    • have as many embeddings as you want and use any names you like for them
    • use multiple embeddings with different numbers of vectors per token
    • works with half precision floating point numbers
    • train embeddings on 8GB (also reports of 6GB working)
  • Extras tab with:
    • GFPGAN, neural network that fixes faces
    • CodeFormer, face restoration tool as an alternative to GFPGAN
    • RealESRGAN, neural network upscaler
    • ESRGAN, neural network upscaler with a lot of third party models
    • SwinIR and Swin2SR (see here), neural network upscalers
    • LDSR, Latent diffusion super resolution upscaling
  • Resizing aspect ratio options
  • Sampling method selection
    • Adjust sampler eta values (noise multiplier)
    • More advanced noise setting options
  • Interrupt processing at any time
  • 4GB video card support (also reports of 2GB working)
  • Correct seeds for batches
  • Live prompt token length validation
  • Generation parameters
    • parameters you used to generate images are saved with that image
    • in PNG chunks for PNG, in EXIF for JPEG
    • can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
    • can be disabled in settings
    • drag and drop an image/text-parameters to promptbox
  • Read Generation Parameters Button, loads parameters in promptbox to UI
  • Settings page
  • Running arbitrary python code from UI (must run with --allow-code to enable)
  • Mouseover hints for most UI elements
  • Possible to change defaults/min/max/step values for UI elements via text config
  • Tiling support, a checkbox to create images that can be tiled like textures
  • Progress bar and live image generation preview
    • Can use a separate neural network to produce previews with almost no VRAM or compute requirement
  • Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
  • Styles, a way to save parts of a prompt and easily apply them via dropdown later
  • Variations, a way to generate the same image but with tiny differences
  • Seed resizing, a way to generate the same image but at a slightly different resolution
  • CLIP interrogator, a button that tries to guess prompt from an image
  • Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
  • Batch Processing, process a group of files using img2img
  • Img2img Alternative, reverse Euler method of cross attention control
  • Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
  • Reloading checkpoints on the fly
  • Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
  • Custom scripts with many extensions from community
  • Composable-Diffusion, a way to use multiple prompts at once
    • separate prompts using uppercase AND
    • also supports weights for prompts: a cat :1.2 AND a dog AND a penguin :2.2
  • No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
  • DeepDanbooru integration, creates danbooru style tags for anime prompts
  • xformers, major speed increase for select cards (add --xformers to commandline args)
  • via extension: History tab: view and delete images conveniently within the UI
  • Generate forever option
  • Training tab
    • hypernetworks and embeddings options
    • Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
  • Clip skip
  • Hypernetworks
  • Loras (same as Hypernetworks but prettier)
  • A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
  • Can select to load a different VAE from settings screen
  • Estimated completion time in progress bar
  • API
  • Support for dedicated inpainting model by RunwayML
  • via extension: Aesthetic Gradients, a way to generate images with a specific aesthetic by using clip images embeds (implementation of https://github.com/vicgalle/stable-diffusion-aesthetic-gradients)
  • Stable Diffusion 2.0 support - see wiki for instructions
  • Alt-Diffusion support - see wiki for instructions
  • Now without any bad letters!
  • Load checkpoints in safetensors format
  • Eased resolution restriction: generated image's dimensions must be multiples of 8 rather than 64
  • Now with a license!
  • Reorder elements in the UI from settings screen
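Several of the features above are plain-text prompt conventions. As an illustration of the Composable-Diffusion AND syntax, a hypothetical parser (not the web UI's actual implementation; split_composable is an invented name for this sketch) might separate the sub-prompts and read the optional :weight suffix like this:

```python
import re

def split_composable(prompt):
    """Split a prompt on the uppercase keyword AND; each part may end in an
    optional :weight suffix, which defaults to 1.0 when absent."""
    parts = []
    for chunk in re.split(r"\bAND\b", prompt):
        chunk = chunk.strip()
        m = re.search(r":\s*([0-9]*\.?[0-9]+)\s*$", chunk)
        if m:
            parts.append((chunk[: m.start()].strip(), float(m.group(1))))
        else:
            parts.append((chunk, 1.0))
    return parts

print(split_composable("a cat :1.2 AND a dog AND a penguin :2.2"))
# → [('a cat', 1.2), ('a dog', 1.0), ('a penguin', 2.2)]
```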
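The generation parameters mentioned above travel with the image itself: for PNG output they sit in a tEXt chunk keyed "parameters", which is conceptually what the PNG info tab reads back. A minimal standard-library sketch (the sample prompt text and the helper names are made up for illustration):

```python
import struct
import zlib

def png_chunk(ctype, data):
    """Serialize one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def read_parameters(png_bytes):
    """Walk the chunks after the 8-byte signature and return the text of the
    tEXt chunk keyed "parameters", or None if there isn't one."""
    pos = 8
    while pos + 8 <= len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = data.partition(b"\x00")
            if key == b"parameters":
                return value.decode("latin-1")
        pos += 12 + length  # length + type + data + CRC
    return None

# Build a tiny 1x1 grayscale PNG carrying a "parameters" chunk, then read it.
png = (b"\x89PNG\r\n\x1a\n"
       + png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + png_chunk(b"tEXt", b"parameters\x00a man in a (tuxedo:1.21)\nSteps: 20, Seed: 42")
       + png_chunk(b"IDAT", zlib.compress(b"\x00\x00"))
       + png_chunk(b"IEND", b""))

params = read_parameters(png)
print(params.splitlines()[0])  # → a man in a (tuxedo:1.21)
```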
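The API listed above is served when the web UI is started with the --api commandline flag. A standard-library sketch of a txt2img request follows; the default local URL and the exact payload fields shown are assumptions about a typical local install, and build_txt2img_payload is an invented helper name:

```python
import json
import urllib.request

def build_txt2img_payload(prompt, negative_prompt="", steps=20,
                          width=512, height=512, seed=-1):
    """Assemble a minimal request body for the /sdapi/v1/txt2img endpoint."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": steps,
        "width": width,
        "height": height,
        "seed": seed,  # -1 asks the server to pick a random seed
    }

def txt2img(payload, base_url="http://127.0.0.1:7860"):
    """POST the payload; the JSON response carries base64-encoded images."""
    req = urllib.request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_txt2img_payload("a man in a tuxedo", steps=30)
print(sorted(payload))
```

With a local instance running, result = txt2img(payload) would return a dict whose "images" list holds the generated images as base64-encoded strings.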

Installation and Running

Make sure the required dependencies are met and follow the instructions available for both NVidia (recommended) and AMD GPUs.

Alternatively, use online services (like Google Colab):

Installation on Windows 10/11 with NVidia GPUs using release package

  1. Download sd.webui.zip from v1.0.0-pre and extract its contents.
  2. Run update.bat.
  3. Run run.bat.

For more details, see Install-and-Run-on-NVidia-GPUs.

Automatic Installation on Windows

  1. Install Python 3.10.6 (newer versions of Python do not support torch), checking "Add Python to PATH".
  2. Install git.
  3. Download the stable-diffusion-webui repository, for example by running git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git.
  4. Run webui-user.bat from Windows Explorer as normal, non-administrator, user.

Automatic Installation on Linux

  1. Install the dependencies:

    # Debian-based:
    sudo apt install wget git python3 python3-venv
    # Red Hat-based:
    sudo dnf install wget git python3
    # Arch-based:
    sudo pacman -S wget git python3
    
  2. Navigate to the directory you would like the webui to be installed in and execute the following command:

    bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
    
  3. Run webui.sh.

  4. Check webui-user.sh for options.

Installation on Apple Silicon

Find the instructions here.

Contributing

Here's how to add code to this repo: Contributing

Documentation

The documentation was moved from this README over to the project's wiki.

Credits

Licenses for borrowed code can be found in Settings -> Licenses screen, and also in html/licenses.html file.