An experimental open-source attempt to make GPT-4 fully autonomous. - Significant-Gravitas/Auto-GPT

Auto-GPT: An Autonomous GPT-4 Experiment
========================================

🔴 🔴 🔴 Urgent: USE `stable`, not `master` 🔴 🔴 🔴

We've improved our workflow. `master` will often be in a broken state. Download the latest `stable` release here: https://github.com/Torantulino/Auto-GPT/releases/latest. This information supersedes any information below and takes precedence.
Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, chains together LLM "thoughts" to autonomously achieve whatever goal you set. As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI.

Demo (30/03/2023):

💖 Help Fund Auto-GPT's Development 💖
If you can spare a coffee, you can help cover the API costs of developing Auto-GPT and help push the boundaries of fully autonomous AI! A full day of development can easily cost as much as $20 in API calls, which for a free project is quite limiting. Your support is greatly appreciated.
Development of this free, open-source project is made possible by all the contributors and sponsors. If you'd like to sponsor this project and have your avatar or company logo appear below, click here.

Individual Sponsors


Table of Contents

🚀 Features

  • 🌐 Internet access for searches and information gathering
  • 💾 Long-term and short-term memory management
  • 🧠 GPT-4 instances for text generation
  • 🔗 Access to popular websites and platforms
  • 🗃️ File storage and summarization with GPT-3.5

📋 Requirements

Required (the original list was lost in conversion; these two items are implied by the Installation steps below):
  • Python 3.8 or later
  • An OpenAI API key (see GPT3.5 ONLY Mode below if you don't have GPT-4 API access)
Optional:
  • PINECONE API key (if you want Pinecone-backed memory)
  • ElevenLabs key (if you want the AI to speak)

💾 Installation

To install Auto-GPT, follow these steps:
  1. Make sure you have all the requirements above; if not, install/get them.
The following commands should be executed in a CMD, Bash, or PowerShell window. To do this, go to a folder on your computer, click in the folder path at the top, type CMD, and press Enter.
  2. Clone the repository. For this step you need Git installed, but you can just download the zip file instead by clicking the button at the top of this page ☝️
  3. Navigate to the project directory. (Type this into your CMD window; you're aiming to navigate the CMD window to the repository you just downloaded.)
  4. Install the required dependencies. (Again, type this into your CMD window.)
  5. Rename .env.template to .env and fill in your OPENAI_API_KEY. If you plan to use Speech Mode, fill in your ELEVEN_LABS_API_KEY as well.
  • Obtain your ElevenLabs API key from https://elevenlabs.io. You can view your xi-api-key using the "Profile" tab on the website.
  • If you want to use GPT on an Azure instance, set USE_AZURE to True and fill in the Azure-specific settings in your .env file.
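The command blocks for steps 2–4 were stripped in conversion. They presumably resembled the following sketch (repository URL inferred from the release link above; a plain pip install is assumed):

```shell
# Step 2: clone the repository
git clone https://github.com/Torantulino/Auto-GPT.git
# Step 3: navigate to the project directory
cd Auto-GPT
# Step 4: install the required dependencies
pip install -r requirements.txt
```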

🔧 Usage

  1. Run the autogpt Python module in your terminal. (Type this into your CMD window.)
  2. After each action, enter 'y' to authorise the command, 'y -N' to run N continuous commands, 'n' to exit the program, or enter additional feedback for the AI.
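The stripped command for step 1 is presumably the standard module invocation named in the text:

```shell
python -m autogpt
```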

Logs

You will find activity and error logs in the folder ./output/logs.
To output debug logs:
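The debug invocation was stripped; assuming the flag is spelled `--debug` (the "View Memory Usage" section mentions a debug flag), it would be:

```shell
python -m autogpt --debug
```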

Docker

You can also build this into a Docker image and run it:
You can pass extra arguments, for instance, running with --gpt3only and --continuous mode:
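The stripped Docker commands likely looked similar to this sketch (the image tag `autogpt` is an assumption; the flags are the ones named above):

```shell
docker build -t autogpt .
docker run -it autogpt
# with extra arguments:
docker run -it autogpt --gpt3only --continuous
```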

Command Line Arguments

Here are some common arguments you can use when running Auto-GPT:
Replace anything in angle brackets (<>) with a value you want to specify.
  • python scripts/main.py --help to see a list of all available command line arguments.
  • python scripts/main.py --ai-settings <filename> to run Auto-GPT with a different AI Settings file.
  • python scripts/main.py --use-memory <memory-backend> to specify one of the memory backends: local, redis, pinecone, or no_memory.
NOTE: There are shorthands for some of these flags, for example -m for --use-memory. Use python scripts/main.py --help for more information.

🗣️ Speech Mode

Use this to enable TTS for Auto-GPT:
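The command block here was stripped; the flag name `--speak` is an assumption:

```shell
python -m autogpt --speak
```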
🔍 Google API Keys Configuration
This section is optional; use the official Google API if you are having issues with error 429 when running a Google search. To use the google_official_search command, you need to set up your Google API keys in your environment variables.
  1. Go to the Google Cloud Console.
  2. If you don't already have an account, create one and log in.
  3. Create a new project by clicking on the "Select a Project" dropdown at the top of the page and clicking "New Project". Give it a name and click "Create".
  4. Go to the APIs & Services Dashboard and click "Enable APIs and Services". Search for "Custom Search API", click on it, then click "Enable".
  5. Go to the Credentials page and click "Create Credentials". Choose "API Key".
  6. Copy the API key and set it as an environment variable named GOOGLE_API_KEY on your machine. See setting up environment variables below.
  7. Enable the Custom Search API on your project. (It might take a few minutes to propagate.)
  8. Go to the Custom Search Engine page and click "Add".
  9. Set up your search engine by following the prompts. You can choose to search the entire web or specific sites.
  10. Once you've created your search engine, click on "Control Panel" and then "Basics". Copy the "Search engine ID" and set it as an environment variable named CUSTOM_SEARCH_ENGINE_ID on your machine. See setting up environment variables below.
Remember that your free daily custom search quota allows only up to 100 searches. To increase this limit, you need to assign a billing account to the project to benefit from up to 10,000 daily searches.

Setting up environment variables

For Windows users:
For macOS and Linux users:
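The environment-variable commands were stripped; a sketch, with variable names taken from the steps above:

```shell
# Windows (CMD, persists across sessions):
#   setx GOOGLE_API_KEY "your-google-api-key"
#   setx CUSTOM_SEARCH_ENGINE_ID "your-search-engine-id"

# macOS and Linux (add to ~/.bashrc or ~/.zshrc to persist):
export GOOGLE_API_KEY="your-google-api-key"
export CUSTOM_SEARCH_ENGINE_ID="your-search-engine-id"
```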

Redis Setup

Install Docker Desktop.
Run:
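The stripped command presumably launches the Redis Stack image from the Docker Hub page linked below (container name and port mapping are assumptions):

```shell
docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
```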
See https://hub.docker.com/r/redis/redis-stack-server for setting a password and additional configuration.
Set the following environment variables:
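The variable list was stripped; this sketch uses MEMORY_BACKEND (named elsewhere in this document), while the host, port, and password variable names are assumptions:

```shell
# in .env
MEMORY_BACKEND=redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
```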
Note that this setup is not intended to face the internet and is not secure; do not expose Redis to the internet without a password, or ideally not at all.
You can optionally set WIPE_REDIS_ON_START=False to persist memory stored in Redis.
You can specify the memory index for Redis using the following:
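The setting itself was stripped; the variable name MEMORY_INDEX and the example value are assumptions:

```shell
# in .env — MEMORY_INDEX is an assumed variable name
MEMORY_INDEX=auto-gpt
```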
🌲 Pinecone API Key Setup
Pinecone enables the storage of vast amounts of vector-based memory, allowing only relevant memories to be loaded for the agent at any given time.
  1. Go to Pinecone and make an account if you don't already have one.
  2. Choose the Starter plan to avoid being charged.
  3. Find your API key and region under the default project in the left sidebar.

Setting up environment variables

In the .env file set:
  • PINECONE_API_KEY
  • PINECONE_ENV (something like: us-east4-gcp)
  • MEMORY_BACKEND=pinecone
Alternatively, you can set them from the command line (advanced):
For Windows users:
For macOS and Linux users:
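The stripped commands presumably mirrored the Google API key setup, using the variable names listed above:

```shell
# Windows (CMD):
#   setx PINECONE_API_KEY "your-pinecone-api-key"
#   setx PINECONE_ENV "your-pinecone-region"

# macOS and Linux:
export PINECONE_API_KEY="your-pinecone-api-key"
export PINECONE_ENV="your-pinecone-region"
```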
Setting Your Cache Type
By default, Auto-GPT uses LocalCache instead of Redis or Pinecone.
To switch to either, change the MEMORY_BACKEND env variable to the value that you want:
  • local (default) uses a local JSON cache file
  • pinecone uses the Pinecone.io account you configured in your ENV settings
  • redis will use the Redis cache that you configured
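For example, to switch to Pinecone:

```shell
# in .env — pick one of the values listed above
MEMORY_BACKEND=pinecone
```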

View Memory Usage

  1. View memory usage by using the --debug flag :)

🧠 Memory pre-seeding

This script, located at scripts/data_ingestion.py, allows you to ingest files into memory and pre-seed it before running Auto-GPT.
Memory pre-seeding is a technique that involves ingesting relevant documents or data into the AI's memory so that it can use this information to generate more informed and accurate responses.
To pre-seed the memory, the content of each document is split into chunks of a specified maximum length with a specified overlap between chunks, and then each chunk is added to the memory backend set in the .env file. When the AI is prompted to recall information, it can then access those pre-seeded memories to generate more informed and accurate responses.
This technique is particularly useful when working with large amounts of data or when there is specific information that the AI needs to be able to access quickly. By pre-seeding the memory, the AI can retrieve and use this information more efficiently, saving time and API calls and improving the accuracy of its responses.
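The chunking step described above can be sketched as follows. This is an illustration of the technique, not Auto-GPT's actual implementation; the function name and signature are hypothetical:

```python
def chunk_text(text: str, max_length: int = 4000, overlap: int = 200) -> list[str]:
    """Split text into chunks of at most max_length characters,
    with `overlap` characters shared between consecutive chunks."""
    if overlap >= max_length:
        raise ValueError("overlap must be smaller than max_length")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_length])
        if start + max_length >= len(text):
            break  # last chunk reached the end of the document
        start += max_length - overlap
    return chunks

# A 10,000-character document yields three overlapping chunks:
doc = "".join(str(i % 10) for i in range(10_000))
chunks = chunk_text(doc)
# chunk lengths are 4000, 4000, 2400; consecutive chunks share 200 characters
```

Each chunk would then be written to the configured memory backend; the overlap ensures that a sentence falling on a chunk boundary is still recoverable in full from at least one chunk.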
You could, for example, download the documentation of an API, a GitHub repository, etc. and ingest it into memory before running Auto-GPT.
⚠️ If you use Redis as your memory, make sure to run Auto-GPT with WIPE_REDIS_ON_START set to False in your .env file.
⚠️ For other memory backends, we currently forcefully wipe the memory when starting Auto-GPT. To ingest data with those memory backends, you can call the data_ingestion.py script anytime during an Auto-GPT run.
Memories will be available to the AI immediately as they are ingested, even if ingested while Auto-GPT is running.
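The example command referred to below was stripped in conversion. Reconstructed from the description, it presumably resembled the following (the flag names are assumptions; only --file is named in the text):

```shell
python scripts/data_ingestion.py --dir seed_data --init --overlap 200 --max_length 4000
```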
In the example above, the script initializes the memory and ingests all files within the seed_data directory into memory, with an overlap of 200 between chunks and a maximum chunk length of 4000. Note that you can also use the --file argument to ingest a single file into memory, and that the script will only ingest files within the auto_gpt_workspace directory.
You can adjust the max_length and overlap parameters to fine-tune the way documents are presented to the AI when it "recalls" that memory:
  • Adjusting the overlap value allows the AI to access more contextual information from each chunk when recalling information, but will result in more chunks being created, therefore increasing memory backend usage and OpenAI API requests.
  • Reducing the max_length value will create more chunks, which can save prompt tokens by allowing for more message history in the context, but will also increase the number of chunks.
  • Increasing the max_length value will provide the AI with more contextual information from each chunk, reducing the number of chunks created and saving on OpenAI API requests. However, this may also use more prompt tokens and decrease the overall context available to the AI.
💀 Continuous Mode ⚠️
Run the AI without user authorisation, 100% automated. Continuous mode is not recommended. It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorise. Use at your own risk.
  1. Run the autogpt Python module in your terminal:
  2. To exit the program, press Ctrl + C.
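The stripped command for step 1 presumably uses the --continuous flag named in the Docker section:

```shell
python -m autogpt --continuous
```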

GPT3.5 ONLY Mode

If you don't have access to the GPT-4 API, this mode will allow you to use Auto-GPT!
It is recommended to use a virtual machine for tasks that require high security measures to prevent any potential harm to the main computer's system and data.
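The stripped command presumably uses the --gpt3only flag named in the Docker section:

```shell
python -m autogpt --gpt3only
```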

🖼 Image Generation

By default, Auto-GPT uses DALL-E for image generation. To use Stable Diffusion, a HuggingFace API token is required.
Once you have a token, set these variables in your .env:
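The variable list was stripped; these variable names and the provider value are assumptions:

```shell
# in .env — variable names are assumptions
IMAGE_PROVIDER=sd
HUGGINGFACE_API_TOKEN="your-huggingface-api-token"
```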

⚠️ Limitations

This experiment aims to showcase the potential of GPT-4 but comes with some limitations:
  1. Not a polished application or product, just an experiment.
  2. May not perform well in complex, real-world business scenarios. In fact, if it actually does, please share your results!
  3. Quite expensive to run, so set and monitor your API key limits with OpenAI!

🛡 Disclaimer

This project, Auto-GPT, is an experimental application and is provided "as-is" without any warranty, express or implied. By using this software, you agree to assume all risks associated with its use, including but not limited to data loss, system failure, or any other issues that may arise.
The developers and contributors of this project do not accept any responsibility or liability for any losses, damages, or other consequences that may occur as a result of using this software. You are solely responsible for any decisions and actions taken based on the information provided by Auto-GPT.
Please note that the use of the GPT-4 language model can be expensive due to its token usage. By utilizing this project, you acknowledge that you are responsible for monitoring and managing your own token usage and the associated costs. It is highly recommended to check your OpenAI API usage regularly and set up any necessary limits or alerts to prevent unexpected charges.
As an autonomous experiment, Auto-GPT may generate content or take actions that are not in line with real-world business practices or legal requirements. It is your responsibility to ensure that any actions or decisions made based on the output of this software comply with all applicable laws, regulations, and ethical standards. The developers and contributors of this project shall not be held responsible for any consequences arising from the use of this software.
By using Auto-GPT, you agree to indemnify, defend, and hold harmless the developers, contributors, and any affiliated parties from and against any and all claims, damages, losses, liabilities, costs, and expenses (including reasonable attorneys' fees) arising from your use of this software or your violation of these terms.
🐦 Connect with Us on Twitter
Stay up-to-date with the latest news, updates, and insights about Auto-GPT by following our Twitter accounts. Engage with the developer and the AI's own account for interesting discussions, project updates, and more.
  • Developer: Follow [@siggravitas](https://twitter.com/siggravitas) for insights into the development process, project updates, and related topics from the creator of Entrepreneur-GPT.
  • Entrepreneur-GPT: Join the conversation with the AI itself by following [@En_GPT](https://twitter.com/En_GPT). Share your experiences, discuss the AI's outputs, and engage with the growing community of users.
We look forward to connecting with you and hearing your thoughts, ideas, and experiences with Auto-GPT. Join us on Twitter and let's explore the future of AI together!

Run tests

To run tests, run the following command:
To run tests and see coverage, run the following command:
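The stripped commands presumably used the standard unittest and coverage.py invocations (the `coverage` package and a tests/ directory are assumed):

```shell
python -m unittest discover tests

# with coverage:
coverage run -m unittest discover tests
coverage report
```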

Run linter

This project uses flake8 for linting. We currently use the following rules: E303, W293, W291, W292, E305, E231, E302. See the flake8 rules for more information.
To run the linter, run the following command:
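The stripped command presumably ran flake8 over the project's source directories (the paths are assumptions based on the scripts/ layout referenced above):

```shell
flake8 scripts/ tests/
```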