InvokeAI - InvokeAI: A Stable Diffusion Toolkit

Created at: 2022-08-17 09:04:27
Language: Jupyter Notebook
License: Apache-2.0

Invoke AI - Generative AI for Professional Creatives

Professional Creative Tools for Stable Diffusion, Custom-Trained Models, and more.

To learn more about Invoke AI, get started instantly, or implement our Business solutions, visit invoke.ai

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading web interface, an interactive command-line interface, and also serves as the foundation for multiple commercial products.

Quick links: [How to Install] [Discord Server] [Documentation and Tutorials] [Code and Downloads] [Bug Reports] [Discussion, Ideas & Q&A]

Canvas Preview

📝 Table of Contents

Getting Started

  1. 🏁 Quickstart
  2. 🖥️ Hardware Requirements

More About Invoke

  1. 🌟 Features
  2. 📣 Latest Changes
  3. 🛠️ Troubleshooting

Supporting the Project

  1. 🤝 Contributing
  2. 👥 Contributors
  3. 💕 Support

Quickstart

For full installation and upgrade instructions, please see: InvokeAI Installation Overview

If upgrading from version 2.3, please first read Migrating a v2.3 InvokeAI root directory below.

Automatic Installer (suggested for first-time users)

  1. Go to the bottom of the Latest Release Page

  2. Download the .zip file for your OS (Windows/macOS/Linux).

  3. Unzip the file.

  4. Windows: double-click the install.bat script. macOS: open a Terminal window, drag the file install.sh from Finder into the Terminal, and press Return. Linux: run install.sh.

  5. You'll be asked to confirm the location of the folder in which to install InvokeAI and its image-generation model files. Pick a location with at least 15 GB of free disk space, or more if you plan on installing lots of models.

  6. Wait while the installer does its thing. After the software is installed, the installer will launch a script that lets you configure InvokeAI and select a set of starter image-generation models.

  7. Find the folder that InvokeAI was installed into (it is not the same as the unpacked zip file directory!). The default location of this folder, if you didn't change it in step 5, is ~/invokeai on Linux/Mac systems and C:\Users\YourName\invokeai on Windows. This directory will contain the launcher scripts invoke.sh and invoke.bat.

  8. On Windows systems, double-click the invoke.bat file. On macOS, open a Terminal window, drag invoke.sh from the folder into the Terminal, and press Return. On Linux, run invoke.sh.

  9. Press 2 to open the "browser-based UI", press Enter/Return, wait a minute or two for Stable Diffusion to load, then open your browser and go to http://localhost:9090 (a quick connectivity check is sketched after this list).

  10. Type banana sushi in the box on the top left and click Invoke.
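
If the browser page doesn't come up, a quick way to check whether the web server is actually listening (assuming the default address http://localhost:9090 from step 9) is:

    curl -I http://localhost:9090

Any HTTP status line in the response means the server is up; no response usually means Stable Diffusion is still loading or the launch failed.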

Command-Line Installation (for developers and users familiar with Terminals)

You must have Python 3.9 or 3.10 installed on your machine. Earlier or later versions are not supported. Node.js also needs to be installed, along with yarn (which can be installed with the command below if needed):

npm install -g yarn
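
Before continuing, it can help to confirm the prerequisites are actually on your PATH and at supported versions (a quick sanity check; it assumes python, node, and yarn resolve to the installations you intend to use):

    python --version   # should report 3.9.x or 3.10.x
    node --version
    yarn --version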

  1. Open a command-line window on your machine. PowerShell is recommended for Windows.

  2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:

    mkdir invokeai
    
  3. Create a virtual environment named .venv inside this directory and activate it:

    cd invokeai
    python -m venv .venv --prompt InvokeAI
    
  4. Activate the virtual environment (do this every time you run InvokeAI):

    For Linux/Mac users:

    source .venv/bin/activate

    For Windows users:

    .venv\Scripts\activate
  5. Install the InvokeAI modules and their dependencies. Choose the command suited for your platform and GPU (a quick GPU check is sketched after this list).

    For Windows/Linux with an NVIDIA GPU:

    pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
    

    For Linux with an AMD GPU:

    pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2

    For non-GPU systems:

    pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
    

    For Macintoshes, either Intel or M1/M2:

    pip install InvokeAI --use-pep517
  6. Configure InvokeAI and install a starting set of image-generation models (you only need to do this once):

    invokeai-configure
    
  7. Launch the web server (do this every time you run InvokeAI):

    invokeai-web
    
  8. Build the Node.js assets:

cd invokeai/frontend/web/
yarn vite build
  9. Point your browser to http://localhost:9090 to bring up the web interface.
  10. Type banana sushi in the box on the top left and click Invoke.
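
If image generation fails with a device or CUDA error, a quick way to confirm that the PyTorch build installed in step 5 can actually see your GPU is the one-liner below (it works for both CUDA and ROCm builds; on CPU-only and Mac installs it is expected to print False):

    python -c "import torch; print(torch.cuda.is_available())"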

Be sure to activate the virtual environment each time before re-launching InvokeAI, using source .venv/bin/activate or .venv\Scripts\activate.

Detailed Installation Instructions

This fork is supported across Linux, Windows and Macintosh. Linux users can use either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm driver). For full installation and upgrade instructions, please see: InvokeAI Installation Overview

Migrating a v2.3 InvokeAI root directory

The InvokeAI root directory is where the InvokeAI startup file, installed models, and generated images are stored. It is ordinarily named invokeai and located in your home directory. The contents and layout of this directory have changed between versions 2.3 and 3.0 and cannot be used directly.

We currently recommend that you use the installer to create a new root directory named differently from the 2.3 one, e.g. invokeai-3, and then use a migration script to copy your 2.3 models into the new location. However, if you choose, you can upgrade this directory in place. This section gives both recipes.

Creating a new root directory and migrating old models

This is the safer recipe because it leaves your old root directory in place to fall back on.

  1. Follow the instructions above to create and install InvokeAI in a directory that has a different name from the 2.3 invokeai directory. In this example, we will use "invokeai-3".

  2. When you are prompted to select models to install, select a minimal set of models, such as stable-diffusion-v1.5 only.

  3. After installation is complete, launch invoke.sh (Linux/Mac) or invoke.bat (Windows) and select option 8 "Open the developers console". This will take you to the command line.

  4. Issue the command invokeai-migrate3 --from /path/to/v2.3-root --to /path/to/invokeai-3-root, providing the correct --from and --to paths for your v2.3 and v3.0 root directories respectively.
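
For example, assuming the default 2.3 root at ~/invokeai and a new 3.0 root named invokeai-3 in your home directory (illustrative paths only; substitute your own), the command would look something like:

    invokeai-migrate3 --from ~/invokeai --to ~/invokeai-3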

This will copy and convert your old models from 2.3 format to 3.0 format and create a new models directory in the 3.0 directory. The old models directory (which contains the models selected at install time) will be renamed models.orig and can be deleted once you have confirmed that the migration was successful.

If you wish, you can pass the 2.3 root directory to both --from and --to in order to update in place. Warning: this directory will no longer be usable with InvokeAI 2.3.

Migrating in place

For the adventurous, you may do an in-place upgrade from 2.3 to 3.0 without touching the command line. This recipe does not work on Windows platforms due to a bug in the Windows version of the 2.3 upgrade script. See the next section for a Windows recipe.

For Mac and Linux Users:
  1. Launch the InvokeAI launcher script in your current v2.3 root directory.

  2. Select option [9] "Update InvokeAI" to bring up the updater dialog.

  3. Select option [1] to upgrade to the latest release.

  4. Once the upgrade is finished you will be returned to the launcher menu. Select option [7] "Re-run the configure script to fix a broken install or to complete a major upgrade".

This will run the configure script against the v2.3 directory and update it to the 3.0 format. The following files will be replaced:

  • The invokeai.init file, replaced by invokeai.yaml
  • The models directory
  • The configs/models.yaml model index

The original versions of these files will be saved with the suffix ".orig" appended to the end. Once you have confirmed that the upgrade worked, you can safely remove these files. Alternatively you can restore a working v2.3 directory by removing the new files and restoring the ".orig" files' original names.
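
As a rough sketch of that rollback on Linux/Mac, based on the file list above (the .v3 names are just placeholders for setting the new files aside, and you should double-check which ".orig" files actually exist in your root directory before moving anything):

    cd ~/invokeai                                    # your v2.3 root directory
    mv invokeai.yaml invokeai.yaml.v3                # set aside the files created by the 3.0 configure step
    mv models models.v3
    mv configs/models.yaml configs/models.yaml.v3
    mv invokeai.init.orig invokeai.init              # restore the saved 2.3 originals
    mv models.orig models
    mv configs/models.yaml.orig configs/models.yaml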

For Windows Users:

Windows users can upgrade with the following steps:

  1. Enter the 2.3 root directory you wish to upgrade
  2. Launch invoke.sh or invoke.bat
  3. Select the "Developer's console" option [8]
  4. Type the following commands
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0" --use-pep517 --upgrade
invokeai-configure --root .

(Replace v3.0.0 with the current release number if this document is out of date.)

The first command will install and upgrade new software to run InvokeAI. The second will prepare the 2.3 directory for use with 3.0. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.

Migration Caveats

The migration script will migrate your invokeai settings and models, including textual inversion models, LoRAs and merges that you may have installed previously. However it does not migrate the generated images stored in your 2.3-format outputs directory. You will need to manually import selected images into the 3.0 gallery via drag-and-drop.

Hardware Requirements

InvokeAI is supported across Linux, Windows and macOS. Linux users can use either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm driver).

System

You will need one of the following:

  • An NVIDIA-based graphics card with 4 GB or more of VRAM. 6-8 GB of VRAM is highly recommended for rendering with the Stable Diffusion XL models.
  • An Apple computer with an M1 chip.
  • An AMD-based graphics card with 4 GB or more of VRAM (Linux only), 6-8 GB for XL rendering.

We do not recommend the GTX 1650 or 1660 series video cards. They are unable to run in half-precision mode and do not have sufficient VRAM to render 512x512 images.

Memory - At least 12 GB Main Memory RAM.

Disk - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.

Features

Feature documentation can be reviewed by navigating to the InvokeAI Documentation page

Web Server & UI

InvokeAI offers a locally hosted Web Server & React Frontend, with an industry leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.

Unified Canvas

The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.

Node Architecture & Editor (Beta)

Invoke AI's backend is built on a graph-based execution architecture. This allows for customizable generation pipelines to be developed by professional users looking to create specific workflows to support their production use-cases, and will be extended in the future with additional capabilities.

Board & Gallery Management

Invoke AI provides an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any Image-based UI element in the application, and rich metadata within the Image allows for easy recall of key prompts or settings used in your workflow.
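
As a rough illustration of that embedded metadata (not an official InvokeAI API; it assumes Pillow is installed, that output.png is an image generated by InvokeAI, and the exact metadata key names vary between versions), you can dump an image's PNG text chunks from the command line:

    python -c "from PIL import Image; print(Image.open('output.png').info)"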

Other features

  • Support for both ckpt and diffusers models
  • SD 2.0, 2.1, XL support
  • Upscaling Tools
  • Embedding Manager & Support
  • Model Manager & Support
  • Node-Based Architecture
  • Node-Based Plug-&-Play UI (Beta)

Latest Changes

For our latest changes, view our Release Notes and the CHANGELOG.

Troubleshooting

Please check out our Q&A to get solutions for common installation problems and other issues.

Contributing

Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code cleanup, testing, or code reviews, is very much encouraged to do so.

To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.

If you'd like to help with translation, please see our translation guide.

If you are unfamiliar with how to contribute to GitHub projects, here is a Getting Started Guide. A full set of contribution guidelines, along with templates, is in progress. You can make your pull request against the "main" branch.

We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our community.

Welcome to InvokeAI!

Contributors

This fork is a combined effort of various people from across the world. Check out the list of all these amazing people. We thank them for their time, hard work and effort.

Support

For support, please use this repository's GitHub Issues tracking service, or join the Discord.

Original portions of the software are Copyright (c) 2023 by respective contributors.