InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
Quick links: [How to Install] [Discord Server] [Documentation and Tutorials] [Code and Downloads] [Bug Reports] [Discussion, Ideas & Q&A]
Getting Started
More About Invoke
Supporting the Project
For full installation and upgrade instructions, please see: InvokeAI Installation Overview
If upgrading from version 2.3, please first read Migrating a v2.3 root directory to 3.0.
1. Go to the bottom of the latest release page.
2. Download the .zip file for your OS (Windows/macOS/Linux).
3. Unzip the file.
4. Windows: double-click the `install.bat` script. macOS: open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press Return. Linux: run `install.sh`.
5. You will be asked to confirm the location of the folder in which to install InvokeAI and its image generation model files. Pick a location with at least 15 GB of free disk space, and more if you plan on installing lots of models.
6. Wait while the installer does its thing. After the software is installed, the installer will launch a script that lets you configure InvokeAI and select a set of starting image generation models.
7. Locate the folder that InvokeAI was installed into (it is not the same as the directory the zip file was unpacked into!). The default location of this folder (if you didn't change it in step 5) is `~/invokeai` on Linux/Mac systems, and `C:\Users\YourName\invokeai` on Windows. This directory will contain launcher scripts named `invoke.sh` and `invoke.bat`.
8. On Windows systems, double-click the `invoke.bat` file. On macOS, open a Terminal window, drag `invoke.sh` from the folder into the Terminal, and press Return. On Linux, run `invoke.sh`.
9. Press 2 to open the "browser-based UI", press Enter/Return, wait a minute or two for Stable Diffusion to load, then open your browser and go to http://localhost:9090.
10. Type `banana sushi` in the box on the top left and click `Invoke`.
You must have Python 3.9 or 3.10 installed on your machine. Earlier or later versions are not supported. Node.js also needs to be installed, along with yarn (which can be installed with the command `npm install -g yarn` if needed).
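As a quick sanity check before installing, the required Python version can be tested from the shell. The `supported_python` helper below is purely illustrative and not part of InvokeAI:

```shell
# Illustrative helper: report whether a "major.minor" Python version
# is one InvokeAI supports (3.9 or 3.10).
supported_python() {
  case "$1" in
    3.9|3.10) echo supported ;;
    *)        echo unsupported ;;
  esac
}

# Check the Python found on your PATH:
supported_python "$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')"
```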
Open a command-line window on your machine. PowerShell is recommended for Windows.
Create a directory to install InvokeAI into. You'll need at least 15 GB of free disk space:
mkdir invokeai
Create a virtual environment named `.venv` inside this directory and activate it:
cd invokeai
python -m venv .venv --prompt InvokeAI
Activate the virtual environment (do this every time you run InvokeAI):
For Linux/Mac users:
source .venv/bin/activate
For Windows users:
.venv\Scripts\activate
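The directory, venv, and activation steps above can be sketched end to end as follows (Linux/Mac syntax; the `--without-pip` flag is only there to keep this demo lightweight, so omit it for a real install so that `pip` is available inside the venv):

```shell
# Sketch of the venv setup steps above.
mkdir -p invokeai
cd invokeai
# Drop --without-pip for a real install.
python3 -m venv .venv --prompt InvokeAI --without-pip
. .venv/bin/activate                        # on Windows: .venv\Scripts\activate
python -c 'import sys; print(sys.prefix)'   # prints the path of the active .venv
```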
Install the InvokeAI module and its dependencies. Choose the command suited to your platform and GPU.
For Windows/Linux with an NVIDIA GPU:
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
For Linux with an AMD GPU:
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
For non-GPU systems:
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
For Macintoshes, either Intel or M1/M2:
pip install InvokeAI --use-pep517
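The four install variants above differ only in the extra index URL passed to pip. The hypothetical `torch_index_url` helper below (not part of InvokeAI) just makes that mapping explicit, using the URLs from the commands above:

```shell
# Hypothetical helper mapping a hardware target to the pip extra index URL
# used in the install commands above. An empty result means plain PyPI (macOS).
torch_index_url() {
  case "$1" in
    cuda) echo "https://download.pytorch.org/whl/cu117" ;;
    rocm) echo "https://download.pytorch.org/whl/rocm5.4.2" ;;
    cpu)  echo "https://download.pytorch.org/whl/cpu" ;;
    mac)  echo "" ;;
  esac
}

# Example: print (not run) the NVIDIA install command.
echo pip install '"InvokeAI[xformers]"' --use-pep517 --extra-index-url "$(torch_index_url cuda)"
```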
Configure InvokeAI and install a starting set of image generation models (you only need to do this once):
invokeai-configure
Launch the web server (do this every time you run InvokeAI):
invokeai-web
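Since the server takes a minute or two to come up, it can be convenient to poll it before opening the browser. The `wait_for_server` function below is a hypothetical sketch, not part of InvokeAI:

```shell
# Hypothetical readiness check: poll the InvokeAI web server until it answers.
# Prints "ready" on success; prints "timeout" and fails otherwise.
wait_for_server() {
  url=${1:-http://localhost:9090}
  tries=${2:-30}
  command -v curl >/dev/null || { echo "curl not installed" >&2; return 1; }
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf "$url" >/dev/null 2>&1; then
      echo ready
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo timeout
  return 1
}
```

For example, `wait_for_server && echo "open http://localhost:9090"` after launching `invokeai-web` in another terminal.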
Build the Node.js assets:
cd invokeai/frontend/web/
yarn vite build
- Point your browser to http://localhost:9090 to bring up the web interface.
- Type `banana sushi` in the box on the top left and click `Invoke`.
Be sure to activate the virtual environment each time before re-launching InvokeAI, using `source .venv/bin/activate` or `.venv\Scripts\activate`.
Detailed Installation Instructions

This fork is supported across Linux, Windows and Macintosh. Linux users can use either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm driver). For full installation and upgrade instructions, please see: InvokeAI Installation Overview
Migrating a v2.3 InvokeAI root directory

The InvokeAI root directory is where the InvokeAI startup file, installed models, and generated images are stored. It is ordinarily named `invokeai` and located in your home directory. The contents and layout of this directory have changed between versions 2.3 and 3.0 and cannot be used directly.
We currently recommend that you use the installer to create a new root directory named differently from the 2.3 one, e.g. `invokeai-3`, and then use a migration script to copy your 2.3 models into the new location. However, if you choose, you can upgrade this directory in place. This section gives both recipes.
Creating a new root directory and migrating old models

This is the safer recipe because it leaves your old root directory in place to fall back on.

1. Follow the instructions above to create and install InvokeAI in a directory that has a different name from the 2.3 invokeai directory. In this example, we will use "invokeai-3".
2. When you are prompted to select models to install, select a minimal set of models, such as stable-diffusion-v1.5 only.
3. After installation is complete, launch `invokeai.sh` (Linux/Mac) or `invokeai.bat` and select option 8, "Open the developers console". This will take you to the command line.
4. Issue the command `invokeai-migrate3 --from /path/to/v2.3-root --to /path/to/invokeai-3-root`. Provide the correct `--from` and `--to` paths for your v2.3 and v3.0 root directories respectively.

This will copy and convert your old models from 2.3 format to 3.0 format and create a new `models` directory in the 3.0 directory. The old models directory (which contains the models selected at install time) will be renamed `models.orig` and can be deleted once you have confirmed that the migration was successful.
If you wish, you can pass the 2.3 root directory to both `--from` and `--to` in order to update in place. Warning: this directory will no longer be usable with InvokeAI 2.3.
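Given that warning, a cautious approach is to verify both root paths before migrating. The `run_migration` wrapper below is a hypothetical sketch, not part of InvokeAI; it only prints the command it would run:

```shell
# Hypothetical pre-flight wrapper for invokeai-migrate3: verify the source
# root exists before printing the migration command that would be run.
run_migration() {
  from=$1; to=$2
  if [ ! -d "$from" ]; then
    echo "error: v2.3 root not found: $from" >&2
    return 1
  fi
  echo invokeai-migrate3 --from "$from" --to "$to"
}
```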
Migrating in place

For the adventurous, you may do an in-place upgrade from 2.3 to 3.0 without touching the command line. This recipe does not work on Windows platforms due to a bug in the Windows version of the 2.3 upgrade script; see the next section for a Windows recipe.
For Mac and Linux Users:

1. Launch the InvokeAI launcher script in your current v2.3 root directory.
2. Select option [9] "Update InvokeAI" to bring up the updater dialog.
3. Select option [1] to upgrade to the latest release.
4. Once the upgrade is finished you will be returned to the launcher menu. Select option [7] "Re-run the configure script to fix a broken install or to complete a major upgrade".

This will run the configure script against the v2.3 directory and update it to the 3.0 format. The following files will be replaced:
The original versions of these files will be saved with the suffix ".orig" appended to the end. Once you have confirmed that the upgrade worked, you can safely remove these files. Alternatively you can restore a working v2.3 directory by removing the new files and restoring the ".orig" files' original names.
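The rollback described above amounts to renaming each `.orig` backup over the file that replaced it. That restore step might be sketched as follows (a hypothetical helper, not part of InvokeAI; use with care on a real root directory):

```shell
# Hypothetical rollback helper: for every FILE.orig in the given directory,
# move it back over FILE, restoring the pre-upgrade version.
restore_orig() {
  for f in "$1"/*.orig; do
    [ -e "$f" ] || continue        # no .orig files: nothing to do
    mv -- "$f" "${f%.orig}"
  done
}
```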
Windows users can upgrade from the developer's console (launched via `invoke.sh` or `invoke.bat`) by issuing the following two commands:

pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0" --use-pep517 --upgrade
invokeai-configure --root .

(Replace `v3.0.0` with the current release number if this document is out of date.)
The first command will install and upgrade new software to run InvokeAI. The second will prepare the 2.3 directory for use with 3.0. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
The migration script will migrate your invokeai settings and models, including textual inversion models, LoRAs and merges that you may have installed previously. However it does not migrate the generated images stored in your 2.3-format outputs directory. You will need to manually import selected images into the 3.0 gallery via drag-and-drop.
InvokeAI is supported across Linux, Windows and macOS. Linux users can use either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm driver).
You will need one of the following:
We do not recommend the GTX 1650 or 1660 series video cards. They are unable to run in half-precision mode and do not have sufficient VRAM to render 512x512 images.
Memory - At least 12 GB Main Memory RAM.
Disk - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
Feature documentation can be reviewed by navigating to the InvokeAI Documentation page.
InvokeAI offers a locally hosted Web Server & React Frontend, with an industry leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
Invoke AI's backend is built on a graph-based execution architecture. This allows for customizable generation pipelines to be developed by professional users looking to create specific workflows to support their production use-cases, and will be extended in the future with additional capabilities.
Invoke AI provides an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any Image-based UI element in the application, and rich metadata within the Image allows for easy recall of key prompts or settings used in your workflow.
For our latest changes, view our Release Notes and the CHANGELOG.
Please check out our Q&A to get solutions for common installation problems and other issues.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code cleanup, testing, or code reviews, is very much encouraged to do so.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
If you'd like to help with translation, please see our translation guide.
If you are unfamiliar with how to contribute to GitHub projects, here is a Getting Started Guide. A full set of contribution guidelines, along with templates, are in progress. You can make your pull request against the "main" branch.
We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our community.
Welcome to InvokeAI!
This fork is a combined effort of various people from across the world. Check out the list of all these amazing people. We thank them for their time, hard work and effort.
For support, please use this repository's GitHub Issues tracking service, or join the Discord.
Original portions of the software are Copyright (c) 2023 by respective contributors.