TestGPT

Your AI testing companion that writes tests on your behalf, so you can build and ship faster without sacrificing tests.


By default, TestGPT uses the OpenAI gpt-3.5-turbo-16k model, but you can use gpt-4 or any other model you prefer.

Installation

VSCode Extension

An extension is available on the VSCode Marketplace.

Alternatively, install it directly by entering this command in VSCode Quick Open (Command+P / Ctrl+P):

   ext install fayeznazzal.testgpt

CLI Tool

To install the CLI tool, follow these steps:

  1. Install TestGPT by running one of these commands:

    # Install globally
    npm install -g testgpt@latest
    
    # OR install locally in your project
    npm install testgpt@latest
  2. Get your OpenAI API key by requesting access to the OpenAI API.

    Then export it based on your OS:

    • macOS or Linux: Add the following line to .zshrc or .bashrc in your home directory:

      export OPENAI_API_KEY="Your OpenAI API Key."

      Then reload your shell configuration:

      source ~/.zshrc   # or ~/.bashrc, whichever file you edited
    • Windows: Go to System -> Settings -> Advanced -> Environment Variables, click New under System Variables, and create a new entry with the key OPENAI_API_KEY and your OpenAI API Key as the value (or set it from a terminal, as shown below).
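
    For the terminal route, the built-in setx command persists an environment variable for future sessions (a minimal sketch; open a new terminal afterwards so the change takes effect):

      setx OPENAI_API_KEY "Your OpenAI API Key."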

Usage

Universal / Plug and Play

Here's a simple form of a test generation command:

testgpt -i ./component.tsx -m gpt-4
# Creates: ./component.test.tsx

With more options comes more power! You can specify target technologies, custom instructions, and a custom GPT model, along with other options. Here is a breakdown:

--inputFile | -i | [ Required ]
Path for the input file to be tested (e.g. `./Button.tsx`).
--outputFile | -o | [ Default: {inputFile}.test.{extension} ]
Path for the output file where the generated tests will be written (e.g. `./Button.spec.tsx`). If not provided, the output file will be the same as the input file, but with `.test` added before the extension.
--apiKey | -k | [ Default: OPENAI_API_KEY Env ]
Your OpenAI API key. If not provided, it is read from the `OPENAI_API_KEY` environment variable. Currently ignored when using an API other than OpenAI.
--model | -m | [ Default: gpt-3.5-turbo-16k ]
The GPT model to use for generating tests. Currently ignored when using an API other than OpenAI.
--stream | -s
Stream the response using OpenAI's streaming feature. Currently ignored when using an API other than OpenAI.
--systemMessage | -y
System message to be used for generating tests.
--promptTemplate | -p
Prompt template to be used for generating tests. You can substitute the following variables in the template:
  • fileName: The name of the file being tested.
  • content: The content of the file being tested.
  • techs: The technologies to be used.
  • instructions: General instructions for generating tests.

To substitute a variable, use the following syntax: {variableName}

Here is an example:

Please provide unit tests for the file {fileName} using {techs}
{instructions}

Please begin your response with ``` and end it with ``` directly.

Here is the file content:
```{content}```
--techs | -t | [ Default: Auto Detected ]
The technologies to write the tests with (e.g. jest, react-testing-library). If not provided, they are detected automatically.
--examples | -e |
Example snippets to guide the AI test generation process.
--moduleEndpoint | -e |
An API endpoint for a custom model to send the request to. Only use this if you have a custom model deployed and you want to use it instead of OpenAI.
--instructions | -n
General instructions for generating tests.
--config | -c
Path to config file.

Here is an example command that uses several of the options described above:

testgpt -i ./Button.tsx -o ./Button.spec.tsx -m gpt-4 --techs "jest, testing-library" --apiKey "Your OpenAI API Key"
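
A few more variations built only from the flags listed above (the file paths and instruction text here are illustrative):

# Stream the generated tests as they are produced
testgpt -i ./utils.ts -s

# Pass extra guidance to the model
testgpt -i ./Button.tsx -n "Cover edge cases and error handling"

# Use an explicit config file (see the next section)
testgpt -i ./Button.tsx -c ./testgpt.config.yaml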

Locally / Config-based

For extra flexibility, adding a testgpt.config.yaml file at your project's root lets you run shorter commands, which is quicker and friendlier for repetitive use.

An example of a testgpt.config.yaml file:

.tsx:
  techs:
    - jest
    - react-testing-library
  instructions: |-
    Wrap test groups in 'describe' blocks
  examples:
    - fileName: file1.tsx
      code: <code for file1.tsx>
      tests: <tests for file1.tsx>
    - fileName: file2.tsx
      code: <code for file2.tsx>
      tests: <tests for file2.tsx>

More and longer examples improve test quality; this is most practical with high-context-length models like gpt-3.5-turbo-16k or gpt-4-32k.
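
With a config like the one above at the project root, a run can be as short as the following (a sketch; options you leave off the command line are assumed to come from the matching extension entry in the config, as described above):

testgpt -i ./Button.tsx
# Creates: ./Button.test.tsx, using the techs, instructions, and examples from testgpt.config.yaml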

License

This software is licensed under the MIT License, which permits you to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the software, subject to the following conditions:

  • The above copyright notice and this permission notice shall be included in all copies or substantial portions of the software.
  • The software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement.
  • In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the software.

Please feel free to use this software in any way you see fit, and contributions are always welcome :)
