Please provide feedback so we can improve this project. Thank you very much!
Form: https://forms.gle/FeWV9RLEedfdkmFN6
Author: @xtekky & Maintainer: @hlohaus
By using this repository or any code related to it, you agree to the legal notice. The author is not responsible for any copies, forks, re-uploads made by other users, or anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this repository uses.
pip install -U g4f
docker pull hlohaus789/g4f
Based on the survey, here is a list of upcoming improvements:
Openai()
docker pull hlohaus789/g4f
docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" hlohaus789/g4f:latest
pip install -U g4f
Or install from source. Clone the GitHub repository:
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
python3 -m venv venv
On Windows:
.\venv\Scripts\activate
On macOS and Linux:
source venv/bin/activate
Install the minimum requirements from requirements.txt:
pip install -r requirements.txt
Create a test.py file in the root folder and start using the repo:
import g4f
...
If you have Docker installed, you can easily set up and run the project without manually installing dependencies.
First, make sure you have both Docker and Docker Compose installed.
Clone the GitHub repository:
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
docker pull selenium/node-chrome
docker-compose build
docker-compose up
Your server will now be running at http://localhost:1337. You can interact with the API or run your tests as you normally would.
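For example, once the container is up, a quick request can confirm the API responds. This is a minimal sketch using only the Python standard library; it assumes the container exposes the OpenAI-compatible /v1/chat/completions route shown in the API section further below:

import json
import urllib.request

# Assumes the container above is running and serving the OpenAI-compatible
# interference API on port 1337; the endpoint path mirrors the client
# examples later in this README.
url = "http://localhost:1337/v1/chat/completions"
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}
request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    body = json.loads(response.read())
    print(body["choices"][0]["message"]["content"])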
To stop the Docker containers, simply run:
docker-compose down
[!NOTE] When using Docker, any changes you make to your local files will be reflected in the Docker container thanks to the volume mapping in the docker-compose.yml file. If you add or remove dependencies, however, you will need to rebuild the Docker image using docker-compose build.
To use the web interface, run the following code in Python:
from g4f.gui import run_gui
run_gui()
Use the g4f Python package as follows:
import g4f
g4f.debug.logging = True # Enable debug logging
g4f.debug.version_check = False # Disable automatic version checking
print(g4f.Provider.Bing.params) # Print supported args for Bing
# Using an automatic provider for the given model
## Streamed completion
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for message in response:
    print(message, flush=True, end='')

## Normal response
response = g4f.ChatCompletion.create(
    model=g4f.models.gpt_4,
    messages=[{"role": "user", "content": "Hello"}],
)  # Alternative model setting

print(response)
import g4f
allowed_models = [
    'code-davinci-002',
    'text-ada-001',
    'text-babbage-001',
    'text-curie-001',
    'text-davinci-002',
    'text-davinci-003'
]
response = g4f.Completion.create(
    model='text-davinci-003',
    prompt='say this is a test'
)
print(response)
import g4f
# Print all available providers
print([
    provider.__name__
    for provider in g4f.Provider.__providers__
    if provider.working
])
# Execute with a specific provider
response = g4f.ChatCompletion.create(
model="gpt-3.5-turbo",
provider=g4f.Provider.Aichat,
messages=[{"role": "user", "content": "Hello"}],
stream=True,
)
for message in response:
print(message)
Some providers use a browser to bypass bot protection. They use the selenium webdriver to control the browser. The browser settings and the login data are saved in a custom directory. If headless mode is enabled, the browser windows are loaded invisibly. For performance reasons, it is recommended to reuse the browser instances and close them yourself at the end:
import g4f
from undetected_chromedriver import Chrome, ChromeOptions
from g4f.Provider import (
    Bard,
    Poe,
    AItianhuSpace,
    MyShell,
    PerplexityAi,
)

options = ChromeOptions()
options.add_argument("--incognito")
webdriver = Chrome(options=options, headless=True)
for idx in range(10):
    response = g4f.ChatCompletion.create(
        model=g4f.models.default,
        provider=g4f.Provider.MyShell,
        messages=[{"role": "user", "content": "Suggest me a name."}],
        webdriver=webdriver
    )
    print(f"{idx}:", response)
webdriver.quit()
To improve speed and overall performance, run providers asynchronously. The total execution time is then determined by the duration of the slowest provider's execution.
import g4f
import asyncio
_providers = [
    g4f.Provider.Aichat,
    g4f.Provider.ChatBase,
    g4f.Provider.Bing,
    g4f.Provider.GptGo,
    g4f.Provider.You,
    g4f.Provider.Yqcloud,
]

async def run_provider(provider: g4f.Provider.BaseProvider):
    try:
        response = await g4f.ChatCompletion.create_async(
            model=g4f.models.default,
            messages=[{"role": "user", "content": "Hello"}],
            provider=provider,
        )
        print(f"{provider.__name__}:", response)
    except Exception as e:
        print(f"{provider.__name__}:", e)

async def run_all():
    calls = [
        run_provider(provider) for provider in _providers
    ]
    await asyncio.gather(*calls)
asyncio.run(run_all())
All providers support specifying a proxy and increasing the timeout in the create functions.
import g4f
response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    proxy="http://host:port",
    # or socks5://user:pass@host:port
    timeout=120,  # in secs
)
print("Result:", response)
You can also set a proxy globally via an environment variable:
export G4F_PROXY="http://host:port"
Run the interference API directly from the g4f package in Python:
from g4f.api import run_api
run_api()
If you want to use the embedding function, you need to get a Hugging Face token. You can get one at Hugging Face Tokens. Make sure your role is set to write. If you have your token, just use it instead of the OpenAI api-key (a hedged embeddings sketch follows the client example below).
Run the server:
g4f api
or
python -m g4f.api.run
from openai import OpenAI
client = OpenAI(
    # Set your Hugging Face token as the API key if you use embeddings
    api_key="YOUR_HUGGING_FACE_TOKEN",
    # Set the API base URL if needed, e.g., for a local development environment
    base_url="http://localhost:1337/v1"
)

def main():
    chat_completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "write a poem about a tree"}],
        stream=True,
    )
    if isinstance(chat_completion, dict):
        # Not streaming
        print(chat_completion.choices[0].message.content)
    else:
        # Streaming
        for token in chat_completion:
            content = token.choices[0].delta.content
            if content is not None:
                print(content, end="", flush=True)

if __name__ == "__main__":
    main()
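Building on the embeddings note above, here is a minimal sketch of how the embeddings endpoint might be called through the same OpenAI client, assuming your Hugging Face token is set as the API key. The model name used here is only a placeholder for illustration; substitute whichever embedding model your interference API actually serves:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_HUGGING_FACE_TOKEN",  # Hugging Face token with the write role
    base_url="http://localhost:1337/v1",
)

# "text-embedding-ada-002" is an assumed model name for this sketch.
embedding = client.embeddings.create(
    model="text-embedding-ada-002",
    input="The quick brown fox jumps over the lazy dog",
)
print(len(embedding.data[0].embedding))  # Length of the returned vector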
Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
---|---|---|---|---|---|---|
bing.com | g4f.Provider.Bing | ❌ | ✔️ | ✔️ | | ❌ |
chat.geekgpt.org | g4f.Provider.GeekGpt | ✔️ | ✔️ | ✔️ | | ❌ |
gptchatly.com | g4f.Provider.GptChatly | ✔️ | ✔️ | ❌ | | ❌ |
liaobots.site | g4f.Provider.Liaobots | ✔️ | ✔️ | ✔️ | | ❌ |
www.phind.com | g4f.Provider.Phind | ❌ | ✔️ | ✔️ | | ❌ |
raycast.com | g4f.Provider.Raycast | ✔️ | ✔️ | ✔️ | | ✔️ |
Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
---|---|---|---|---|---|---|
www.aitianhu.com | g4f.Provider.AItianhu | ✔️ | ❌ | ✔️ | | ❌ |
chat3.aiyunos.top | g4f.Provider.AItianhuSpace | ✔️ | ❌ | ✔️ | | ❌ |
e.aiask.me | g4f.Provider.AiAsk | ✔️ | ❌ | ✔️ | | ❌ |
chat-gpt.org | g4f.Provider.Aichat | ✔️ | ❌ | ❌ | | ❌ |
www.chatbase.co | g4f.Provider.ChatBase | ✔️ | ❌ | ✔️ | | ❌ |
chatforai.store | g4f.Provider.ChatForAi | ✔️ | ❌ | ✔️ | | ❌ |
chatgpt.ai | g4f.Provider.ChatgptAi | ✔️ | ❌ | ✔️ | | ❌ |
chatgptx.de | g4f.Provider.ChatgptX | ✔️ | ❌ | ✔️ | | ❌ |
chat-shared2.zhile.io | g4f.Provider.FakeGpt | ✔️ | ❌ | ✔️ | | ❌ |
freegpts1.aifree.site | g4f.Provider.FreeGpt | ✔️ | ❌ | ✔️ | | ❌ |
gptalk.net | g4f.Provider.GPTalk | ✔️ | ❌ | ✔️ | | ❌ |
ai18.gptforlove.com | g4f.Provider.GptForLove | ✔️ | ❌ | ✔️ | | ❌ |
gptgo.ai | g4f.Provider.GptGo | ✔️ | ❌ | ✔️ | | ❌ |
hashnode.com | g4f.Provider.Hashnode | ✔️ | ❌ | ✔️ | | ❌ |
app.myshell.ai | g4f.Provider.MyShell | ✔️ | ❌ | ✔️ | | ❌ |
noowai.com | g4f.Provider.NoowAi | ✔️ | ❌ | ✔️ | | ❌ |
chat.openai.com | g4f.Provider.OpenaiChat | ✔️ | ❌ | ✔️ | | ✔️ |
theb.ai | g4f.Provider.Theb | ✔️ | ❌ | ✔️ | | ✔️ |
sdk.vercel.ai | g4f.Provider.Vercel | ✔️ | ❌ | ✔️ | | ❌ |
you.com | g4f.Provider.You | ✔️ | ❌ | ✔️ | | ❌ |
chat9.yqcloud.top | g4f.Provider.Yqcloud | ✔️ | ❌ | ✔️ | | ❌ |
chat.acytoo.com | g4f.Provider.Acytoo | ✔️ | ❌ | ✔️ | | ❌ |
aibn.cc | g4f.Provider.Aibn | ✔️ | ❌ | ✔️ | | ❌ |
ai.ls | g4f.Provider.Ails | ✔️ | ❌ | ✔️ | | ❌ |
chatgpt4online.org | g4f.Provider.Chatgpt4Online | ✔️ | ❌ | ✔️ | | ❌ |
chat.chatgptdemo.net | g4f.Provider.ChatgptDemo | ✔️ | ❌ | ✔️ | | ❌ |
chatgptduo.com | g4f.Provider.ChatgptDuo | ✔️ | ❌ | ❌ | | ❌ |
chatgptfree.ai | g4f.Provider.ChatgptFree | ✔️ | ❌ | ❌ | | ❌ |
chatgptlogin.ai | g4f.Provider.ChatgptLogin | ✔️ | ❌ | ✔️ | | ❌ |
cromicle.top | g4f.Provider.Cromicle | ✔️ | ❌ | ✔️ | | ❌ |
gptgod.site | g4f.Provider.GptGod | ✔️ | ❌ | ✔️ | | ❌ |
opchatgpts.net | g4f.Provider.Opchatgpts | ✔️ | ❌ | ✔️ | | ❌ |
chat.ylokh.xyz | g4f.Provider.Ylokh | ✔️ | ❌ | ✔️ | | ❌ |
Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
---|---|---|---|---|---|---|
bard.google.com | g4f.Provider.Bard | ❌ | ❌ | ❌ | | ✔️ |
deepinfra.com | g4f.Provider.DeepInfra | ❌ | ❌ | ✔️ | | ❌ |
huggingface.co | g4f.Provider.HuggingChat | ❌ | ❌ | ✔️ | | ✔️ |
www.llama2.ai | g4f.Provider.Llama2 | ❌ | ❌ | ✔️ | | ❌ |
open-assistant.io | g4f.Provider.OpenAssistant | ❌ | ❌ | ✔️ | | ✔️ |
Model | Base Provider | Provider | Website |
---|---|---|---|
palm | Google | g4f.Provider.Bard | bard.google.com |
h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 | Hugging Face | g4f.Provider.H2o | www.h2o.ai |
h2ogpt-gm-oasst1-en-2048-falcon-40b-v1 | Hugging Face | g4f.Provider.H2o | www.h2o.ai |
h2ogpt-gm-oasst1-en-2048-open-llama-13b | Hugging Face | g4f.Provider.H2o | www.h2o.ai |
claude-instant-v1 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
claude-v1 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
claude-v2 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
command-light-nightly | Cohere | g4f.Provider.Vercel | sdk.vercel.ai |
command-nightly | Cohere | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-neox-20b | Hugging Face | g4f.Provider.Vercel | sdk.vercel.ai |
oasst-sft-1-pythia-12b | Hugging Face | g4f.Provider.Vercel | sdk.vercel.ai |
oasst-sft-4-pythia-12b-epoch-3.5 | Hugging Face | g4f.Provider.Vercel | sdk.vercel.ai |
santacoder | Hugging Face | g4f.Provider.Vercel | sdk.vercel.ai |
bloom | Hugging Face | g4f.Provider.Vercel | sdk.vercel.ai |
flan-t5-xxl | Hugging Face | g4f.Provider.Vercel | sdk.vercel.ai |
code-davinci-002 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-3.5-turbo-16k | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-3.5-turbo-16k-0613 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-4-0613 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-ada-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-babbage-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-curie-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-davinci-002 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-davinci-003 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
llama13b-v2-chat | Replicate | g4f.Provider.Vercel | sdk.vercel.ai |
llama7b-v2-chat | Replicate | g4f.Provider.Vercel | sdk.vercel.ai |
🎁 Projects | ⭐ Stars | 📚 Forks | 🛎 Issues | 📬 Pull requests |
---|---|---|---|---|
gpt4free | | | | |
gpt4free-ts | | | | |
Free AI API's & Potential Providers List | | | | |
ChatGPT-Clone | | | | |
ChatGpt Discord Bot | | | | |
Nyx-Bot (Discord) | | | | |
LangChain gpt4free | | | | |
ChatGpt Telegram Bot | | | | |
ChatGpt Line Bot | | | | |
Action Translate Readme | | | | |
Langchain Document GPT | | | | |
Call the create_provider.py script in your terminal:
python etc/tool/create_provider.py
When prompted, enter a name for the new provider, copy and paste a cURL command from your browser's developer tools, and let the AI create the provider for you.
from __future__ import annotations
from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider
class HogeService(AsyncGeneratorProvider):
    url = "https://chat-gpt.com"
    working = True
    supports_gpt_35_turbo = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        yield ""
In the class attributes you can adjust the provider's settings: for example, if the website supports streaming, set supports_stream to True. Then write the code that requests the provider inside create_async_generator and yield the response, even if it is a one-time response. A hedged sketch of what such an implementation might look like follows below.
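The sketch below is only an illustration, not a real provider: the /api/chat endpoint, the JSON payload, and the plain-text streaming response are assumptions about a hypothetical site and would need to be adapted to whatever API the target website actually exposes. It uses aiohttp for the asynchronous HTTP request:

from __future__ import annotations

from aiohttp import ClientSession

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider


class HogeService(AsyncGeneratorProvider):
    url = "https://chat-gpt.com"
    working = True
    supports_gpt_35_turbo = True
    supports_stream = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        # Hypothetical endpoint and payload; adapt them to the real website.
        payload = {"model": model, "messages": messages, "stream": True}
        async with ClientSession() as session:
            async with session.post(f"{cls.url}/api/chat", json=payload, proxy=proxy) as response:
                response.raise_for_status()
                # Yield the answer piece by piece as it streams in.
                async for chunk in response.content.iter_any():
                    if chunk:
                        yield chunk.decode(errors="ignore")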
Add the provider name in g4f/provider/__init__.py:
from .HogeService import HogeService
__all__ = [
    HogeService,
]
Finally, test the provider by calling it:
import g4f

response = g4f.ChatCompletion.create(model='gpt-3.5-turbo', provider=g4f.Provider.PROVIDERNAME,
                                     messages=[{"role": "user", "content": "test"}], stream=g4f.Provider.PROVIDERNAME.supports_stream)
for message in response:
    print(message, flush=True, end='')
A list of contributors is available here.
The Vercel.py file contains code from vercel-llm-api by @ading2210, which is licensed under the GNU GPL v3.
Top 1 contributor: @hlohaus
This program is licensed under the GNU GPL v3.
xtekky/gpt4free: Copyright (C) 2023 xtekky This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
This project is licensed under GNU_GPL_v3.0.
(🔼 Back to top)