A command-line productivity tool powered by OpenAI's ChatGPT (GPT-3.5). As developers, we can leverage ChatGPT's capabilities to generate shell commands, code snippets, comments, documentation, and more. Forget cheat sheets and notes: with this tool you can get accurate answers right in your terminal, and you will probably find yourself cutting down on daily Google searches, saving valuable time and effort.
pip install shell-gpt
You will need an OpenAI API key; you can generate one here. If the $OPENAI_API_KEY environment variable is set, it will be used; otherwise you will be prompted for the key, which will then be stored in ~/.config/shell_gpt/.sgptrc.
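For example, one way to provide the key is to export the environment variable before running sgpt (the key value below is a placeholder):

```shell
# Export the key for the current shell session (placeholder value shown).
export OPENAI_API_KEY="your_api_key"
# Child processes such as sgpt will now pick it up:
printenv OPENAI_API_KEY
# -> your_api_key
```

Adding the export line to your shell profile makes it persist across sessions.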
sgpt has multiple use cases, including simple queries, shell queries, and code queries.
We can use it as a normal search engine and ask about anything:
sgpt "nginx default config file location"
# -> The default configuration file for Nginx is located at /etc/nginx/nginx.conf.
sgpt "docker show all local images"
# -> You can view all locally available Docker images by running: `docker images`
sgpt "mass of sun"
# -> = 1.99 × 10^30 kg
It can also convert various units and measurements without you having to search for conversion formulas or use separate conversion websites. You can convert units of time, distance, weight, temperature, and more:
sgpt "1 hour and 30 minutes to seconds"
# -> 5,400 seconds
sgpt "1 kilometer to miles"
# -> 1 kilometer is equal to 0.62137 miles.
Have you ever found yourself forgetting a common shell command, such as chmod, and needing to look up the syntax online? With the --shell option, you can quickly find and execute the command you need right in the terminal:
sgpt --shell "make all files in current directory read only"
# -> chmod 444 *
Since we receive a valid shell command, we could execute it using eval:
eval $(sgpt --shell "make all files in current directory read only")
But that is not very convenient. Instead, we can use the --execute parameter (or the -se shortcut for --shell --execute):
sgpt --shell --execute "make all files in current directory read only"
# -> chmod 444 *
# -> Execute shell command? [y/N]: y
# ...
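As an aside, the confirmation prompt is the main advantage over eval: eval executes whatever string it is given immediately, with no chance to review it first. A minimal demonstration, using a fixed string as a stand-in for a command returned by sgpt:

```shell
# Hypothetical stand-in for a command string returned by sgpt:
cmd='echo all files made read-only'
# eval executes the string immediately, with no confirmation prompt:
eval "$cmd"
# -> all files made read-only
```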
Shell GPT is aware of the operating system and the $SHELL you are using, so sgpt will suggest shell commands for your specific system. For example, if you ask it to update your system, it will return a command based on your OS. Here is an example using macOS:
sgpt -se "update my system"
# -> sudo softwareupdate -i -a
The same prompt, when used on Ubuntu, generates a different suggestion:
sgpt -se "update my system"
# -> sudo apt update && sudo apt upgrade -y
Let's try it with some Docker containers:
sgpt -se "start nginx using docker, forward 443 and 80 port, mount current folder with index.html"
# -> docker run -d -p 443:443 -p 80:80 -v $(pwd):/usr/share/nginx/html nginx
# -> Execute shell command? [y/N]: y
# ...
Furthermore, we can provide some parameter names in the prompt, for example, passing the output file name to ffmpeg:
sgpt -se "slow down video twice using ffmpeg, input video name \"input.mp4\" output video name \"output.mp4\""
# -> ffmpeg -i input.mp4 -filter:v "setpts=2.0*PTS" output.mp4
# -> Execute shell command? [y/N]: y
# ...
We can apply additional shell magic in the prompt, in this case passing file names to ffmpeg:
ls
# -> 1.mp4 2.mp4 3.mp4
sgpt -se "using ffmpeg combine multiple videos into one without audio. Video file names: $(ls -m)"
# -> ffmpeg -i 1.mp4 -i 2.mp4 -i 3.mp4 -filter_complex "[0:v] [1:v] [2:v] concat=n=3:v=1 [v]" -map "[v]" out.mp4
# -> Execute shell command? [y/N]: y
# ...
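To see why this works, `ls -m` joins file names with commas, so the substituted prompt already contains the concrete file list before sgpt ever sees it. A small demonstration with throwaway files (the directory and file names are arbitrary):

```shell
# Create a scratch directory with hypothetical video file names.
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch 1.mp4 2.mp4 3.mp4
# This is the text that $(ls -m) injects into the sgpt prompt:
echo "Video file names: $(ls -m)"
# -> Video file names: 1.mp4, 2.mp4, 3.mp4
```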
Since ChatGPT can also summarize and analyze input text, we can ask it to generate a commit message:
sgpt "Generate git commit message, my changes: $(git diff)"
# -> Commit message: Implement Model enum and get_edited_prompt() func, add temperature, top_p and editor args for OpenAI request.
Or ask it to find errors in the logs and provide more details:
sgpt "check these logs, find errors, and explain what the error is about: $(docker logs -n 20 container_name)"
# ...
With the --code parameter, we can get only code as output, for example:
sgpt --code "Solve classic fizz buzz problem using Python"
for i in range(1, 101):
if i % 3 == 0 and i % 5 == 0:
print("FizzBuzz")
elif i % 3 == 0:
print("Fizz")
elif i % 5 == 0:
print("Buzz")
else:
print(i)
Since this is valid Python code, we can redirect the output to a file:
sgpt --code "solve classic fizz buzz problem using Python" > fizz_buzz.py
python fizz_buzz.py
# 1
# 2
# Fizz
# 4
# Buzz
# Fizz
# ...
To start a chat session, use the --chat option followed by a unique session name and a prompt:
sgpt --chat number "please remember my favorite number: 4"
# -> I will remember that your favorite number is 4.
sgpt --chat number "what would be my favorite number + 4?"
# -> Your favorite number is 4, so if we add 4 to it, the result would be 8.
You can also use chat sessions to iteratively improve ChatGPT's suggestions by providing additional clues.
sgpt --chat python_request --code "make an example request to localhost using Python"
import requests
response = requests.get('http://localhost')
print(response.text)
Then ask ChatGPT to add caching to our request:
sgpt --chat python_request --code "add caching"
import requests
from cachecontrol import CacheControl
sess = requests.session()
cached_sess = CacheControl(sess)
response = cached_sess.get('http://localhost')
print(response.text)
We can use --chat together with the --code or --shell options, so you can keep refining the results:
sgpt --chat sh --shell "What are the files in this directory?"
# -> ls
sgpt --chat sh "Sort them by name"
# -> ls | sort
sgpt --chat sh "Concatenate them using FFMPEG"
# -> ffmpeg -i "concat:$(ls | sort | tr '\n' '|')" -codec copy output.mp4
sgpt --chat sh "Convert the resulting file into an MP3"
# -> ffmpeg -i output.mp4 -vn -acodec libmp3lame -ac 2 -ab 160k -ar 48000 final_output.mp3
To list all current chat sessions, use the --list-chat option:
sgpt --list-chat
# .../shell_gpt/chat_cache/number
# .../shell_gpt/chat_cache/python_request
To show all messages related to a specific chat session, use the --show-chat option followed by the session name:
sgpt --show-chat number
# user: please remember my favorite number: 4
# assistant: I will remember that your favorite number is 4.
# user: what would be my favorite number + 4?
# assistant: Your favorite number is 4, so if we add 4 to it, the result would be 8.
Control caching using the --cache (default) and --no-cache options. This caching applies to all sgpt requests to the OpenAI API:
sgpt "what are the colors of a rainbow"
# -> The colors of a rainbow are red, orange, yellow, green, blue, indigo, and violet.
The next time, the exact same query will fetch the result from the local cache instantly. Note, however, that the following counts as a new request, because we did not provide --temperature in the previous request (the same applies to --top-probability):
sgpt "what are the colors of a rainbow" --temperature 0.5
These are just a few examples of what we can do with ChatGPT models; I am sure you will find it useful for your specific use cases.
You can set up some parameters in the runtime configuration file ~/.config/shell_gpt/.sgptrc:
# API key, also it is possible to define OPENAI_API_KEY env.
OPENAI_API_KEY=your_api_key
# OpenAI host, useful if you would like to use proxy.
OPENAI_API_HOST=https://api.openai.com
# Max amount of cached messages per chat session.
CHAT_CACHE_LENGTH=100
# Chat cache folder.
CHAT_CACHE_PATH=/tmp/shell_gpt/chat_cache
# Request cache length (amount).
CACHE_LENGTH=100
# Request cache folder.
CACHE_PATH=/tmp/shell_gpt/cache
# Request timeout in seconds.
REQUEST_TIMEOUT=60
Full list of arguments:
╭─ Arguments ───────────────────────────────────────────────────────────────────────────────────────────────╮
│ prompt [PROMPT] The prompt to generate completions for. │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ─────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --temperature FLOAT RANGE [0.0<=x<=1.0] Randomness of generated output. [default: 1.0] │
│ --top-probability FLOAT RANGE [0.1<=x<=1.0] Limits highest probable tokens (words). [default: 1.0] │
│ --chat TEXT Follow conversation with id (chat mode). [default: None] │
│ --show-chat TEXT Show all messages from provided chat id. [default: None] │
│ --list-chat List all existing chat ids. [default: no-list-chat] │
│ --shell Provide shell command as output. │
│ --execute Will execute --shell command. │
│ --code Provide code as output. [default: no-code] │
│ --editor Open $EDITOR to provide a prompt. [default: no-editor] │
│ --cache Cache completion results. [default: cache] │
│ --animation Typewriter animation. [default: animation] │
│ --spinner Show loading spinner during API request. [default: spinner] │
│ --help Show this message and exit. │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Docker
Run the container using the OPENAI_API_KEY environment variable, and a docker volume to store cache:
docker run --rm \
--env OPENAI_API_KEY="your OPENAI API key" \
--volume gpt-cache:/tmp/shell_gpt \
ghcr.io/ther1d/shell_gpt --chat rainbow "what are the colors of a rainbow"
Example of a conversation, using an alias and the OPENAI_API_KEY environment variable:
alias sgpt="docker run --rm --env OPENAI_API_KEY --volume gpt-cache:/tmp/shell_gpt ghcr.io/ther1d/shell_gpt"
export OPENAI_API_KEY="your OPENAI API key"
sgpt --chat rainbow "what are the colors of a rainbow"
sgpt --chat rainbow "inverse the list of your last answer"
sgpt --chat rainbow "translate your last answer in french"
You can also use the provided Dockerfile to build your own image:
docker build -t sgpt .