Gaunt Sloth Assistant - v1.4.0

    Gaunt Sloth Assistant is a command-line AI assistant for CI/CD workflows, code reviews, and DIY projects. It supports PR and diff reviews with requirements context, code and diff Q&A, interactive chat and coding sessions, and controlled automation through predefined tools and JSON or JavaScript configuration.


    Based on LangChain.js

    Documentation | Official Site | NPM | GitHub

    Gaunt Sloth's promise is that it is small, extensible, and cross-platform, and that it can itself be a dependency in your project.

    GSloth was initially built as a code review tool, fetching PR contents and Jira contents before feeding them to the LLM, but we have since found many use cases we did not anticipate; for example, we can make it a dependency of an MCP project, letting us quickly spin it up to simulate or test use cases.

    The promise of Gaunt Sloth:

    • Minimum dependencies. Ideally, we aim to only have CommanderJS and some packages from LangChainJS and LangGraphJS.
    • Extensibility. Feel free to write some JS to create your own Tool or Provider, or to connect to the MCP server of your choice.
    • No vendor lock-in. Just BYO API keys.
    • Easy installation via NPM.
    • All prompts are editable via markdown files.

    Unlike autonomous coding agents or hosted review services, GSloth is a configuration-driven CLI tool that you wire into your own workflows and pipelines. You choose the model, the provider, the prompts, and the tools — GSloth orchestrates them.

    Controlled automation

    • Define custom shell tools (deployments, migrations, test runs) in JSON config with parameter validation
    • Connect to MCP servers, including remote servers with OAuth
    • Communicate with external AI agents via the A2A protocol

    Model experimentation

    • Swap models and providers through config — no code changes needed
    • Works with Anthropic, Google (Vertex AI, AI Studio), OpenAI, Groq, DeepSeek, xAI, OpenRouter, local models (LM Studio, Ollama), and any LangChain-compatible provider
    • Customizable middleware pipeline (prompt caching, summarization, or your own)
    • All system prompts are editable markdown files
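
    Swapping a model is just a config edit. A minimal sketch of a .gsloth.config.json switching to Anthropic (the model name below is illustrative, and the exact "type" string should be verified against the output of gsloth init anthropic or CONFIGURATION.md):

```json
{
  "llm": {
    "type": "anthropic",
    "model": "claude-sonnet-4-5"
  }
}
```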

    Code reviews and PR workflows

    • Review PRs with requirement context pulled from GitHub issues or Jira (gsloth pr 42 12)
    • Review local diffs before committing (git --no-pager diff | gsloth review)
    • Run automated reviews in CI/CD — post results as PR comments via GitHub Actions

    Q&A, chat, and coding sessions

    • Ask questions about specific files (gsloth ask "explain this" -f utils.js)
    • Interactive chat and coding sessions with filesystem access

    Output handling

    • Saves all responses to timestamped files (override with -w/--write-output-to-file)
    • Materializes binary model outputs (e.g. generated images) as local files

    Supported AI providers:

    • OpenRouter
    • Groq
    • DeepSeek
    • Google AI Studio and Google Vertex AI
    • Anthropic
    • OpenAI (and other providers using the OpenAI format, such as Inception)
    • Local AI: LM Studio, Ollama, llama.cpp (via OpenAI compatibility)
    • Ollama with JS config (some models; see https://github.com/Galvanized-Pukeko/gaunt-sloth-assistant/discussions/107)
    • xAI

    Any other provider supported by LangChain.js should also work with a JS config.

    The gth and gsloth commands are interchangeable; both gsloth pr 42 and gth pr 42 do the same thing.

    For detailed information about all commands, see docs/COMMANDS.md.

    These apply to every command:

    • --config <path> – load a specific config file without moving directories
    • -i, --identity-profile <name> – switch to another profile under .gsloth/.gsloth-settings/<name>/
    • -w, --write-output-to-file <value> – control response files (true by default, use -wn/-w0 for false, or pass a filename)
    • --verbose – enable verbose LangChain/LangGraph logs (useful when debugging prompts)

    Commands:

    • init - Initialize Gaunt Sloth in your project (auto-detects API keys when called without arguments)
    • get - Inspect the effective prompt or provider-backed input used by another command
    • pr - Review pull requests with optional requirement integration (GitHub issues or Jira). ⚠️ Requires the GitHub CLI to be installed.
    • review - Review any diff or content from various sources
    • ask - Ask questions about code or programming topics
    • chat - Start an interactive chat session
    • code - Write code interactively with full project context

    Initialize project:

    gsloth init              # Auto-detect API keys and select provider
    gsloth init anthropic # Or specify provider directly

    Review PR with requirements:

    gsloth pr 42 23  # Review PR #42 with GitHub issue #23
    

    Inspect command inputs:

    gsloth get pr prompt
    gsloth get pr content 42
    gsloth get review requirements PROJ-123

    Review local changes:

    git --no-pager diff | gsloth review
    

    Review changes between a specific tag and the HEAD:

    git --no-pager diff v0.8.3..HEAD | gth review
    

    Review the diff between a previous release and HEAD using a specific requirements provider (GitHub issue 38) instead of the one configured by default:

    git --no-pager diff v0.8.10 HEAD | npx gth review --requirements-provider github -r 38
    

    Ask questions:

    gsloth ask "What does this function do?" -f utils.js
    

    Write release notes:

    git --no-pager diff v0.8.3..HEAD | gth ask "inspect existing release notes in release-notes/v0_8_2.md; inspect provided diff and write release notes to v0_8_4.md"
    

    To write this to filesystem, you'd need to add filesystem access to the ask command in .gsloth.config.json.

    {
      "llm": {"type": "vertexai", "model": "gemini-2.5-pro"},
      "commands": {"ask": {"filesystem": "all"}}
    }
    

    You can improve this significantly by modifying the project guidelines in .gsloth.guidelines.md, or by keeping instructions in a file and feeding them in with -f.

    Interactive sessions:

    gsloth chat  # Start chat session
    gsloth code # Start coding session

    Running gsloth with no subcommand also drops you into chat.

    Tested with Node 24 LTS.

    npm install gaunt-sloth-assistant -g
    

    Gaunt Sloth currently only functions from the directory which has a configuration file (.gsloth.config.js, .gsloth.config.json, or .gsloth.config.mjs) and .gsloth.guidelines.md. Configuration files can be located in the project root or in the .gsloth/.gsloth-settings/ directory.

    You can also specify a path to a configuration file directly using the -c or --config global flag, for example gth -c /path/to/your/config.json ask "who are you?". Note, however, that project guidelines are read from the current directory if they exist; if none are found, the simple prompt from the install directory is used.

    Configuration can be created with gsloth init [vendor] command. Currently, openrouter, anthropic, groq, deepseek, openai, google-genai, vertexai and xai can be configured with gsloth init [vendor]. For OpenAI-compatible providers like Inception, use gsloth init openai and modify the configuration.

    More detailed information on configuration can be found in CONFIGURATION.md.

    Gaunt Sloth also supports .aiignore for excluding files from filesystem tools, with overrides via config.
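
    For example, a minimal .aiignore might look like this (assuming gitignore-style patterns, the usual convention for such ignore files):

```
node_modules/
dist/
*.pem
.env
```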

    Gaunt Sloth supports defining custom shell commands that the AI can execute. These custom tools:

    • Work across all commands (pr, review, code, ask, chat)
    • Can be configured globally or per-command
    • Support parameters with security validation
    • Are useful for deployments, migrations, automation, and more

    Example configuration:

    {
      "llm": {"type": "vertexai", "model": "gemini-2.5-pro"},
      "customTools": {
        "deploy": {
          "command": "npm run deploy",
          "description": "Deploy the application"
        },
        "run_migration": {
          "command": "npm run migrate -- ${name}",
          "description": "Run a database migration",
          "parameters": {
            "name": {"description": "Migration name"}
          }
        }
      }
    }

    See Custom Tools Configuration for complete documentation.

    cd ./your-project
    gsloth init google-genai

    Make sure you either define GOOGLE_API_KEY environment variable or edit your configuration file and set up your key. It is recommended to obtain an API key from the official Google AI Studio website rather than from a reseller.

    cd ./your-project
    gsloth init vertexai
    gcloud auth login
    gcloud auth application-default login

    As of 19 Nov 2025, Gemini 3 on Vertex AI works with global and us-central1 locations when using the default aiplatform.googleapis.com endpoint. However, regional endpoints (e.g., us-central-aiplatform.googleapis.com) currently return 404 for Gemini 3. Example config:

    {
      "llm": {
        "type": "vertexai",
        "model": "gemini-3-pro-preview",
        "location": "global"
      }
    }
    cd ./your-project
    gsloth init openrouter

    Make sure you either define OPEN_ROUTER_API_KEY environment variable or edit your configuration file and set up your key.

    cd ./your-project
    gsloth init anthropic

    Make sure you either define ANTHROPIC_API_KEY environment variable or edit your configuration file and set up your key.

    cd ./your-project
    gsloth init groq

    Make sure you either define GROQ_API_KEY environment variable or edit your configuration file and set up your key.

    cd ./your-project
    gsloth init deepseek

    Make sure you either define DEEPSEEK_API_KEY environment variable or edit your configuration file and set up your key. It is recommended to obtain an API key from the official DeepSeek website rather than from a reseller.

    cd ./your-project
    gsloth init openai

    Make sure you either define OPENAI_API_KEY environment variable or edit your configuration file and set up your key.

    LM Studio provides a local OpenAI-compatible server for running models on your machine:

    cd ./your-project
    gsloth init openai

    Then edit your configuration file to point to LM Studio (default: http://127.0.0.1:1234/v1). Use any string for the API key (e.g., "none"); LM Studio doesn't validate it.
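
    A sketch of what such a configuration might look like (the apiKey and configuration.baseURL key names mirror LangChain's ChatOpenAI options and are assumptions here; see CONFIGURATION.md for the authoritative shape):

```json
{
  "llm": {
    "type": "openai",
    "model": "qwen3",
    "apiKey": "none",
    "configuration": {
      "baseURL": "http://127.0.0.1:1234/v1"
    }
  }
}
```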

    Important: The model must support tool calling. Tested models include gpt-oss, granite, nemotron, seed, and qwen3.

    See CONFIGURATION.md for detailed setup.

    For providers using OpenAI-compatible APIs:

    cd ./your-project
    gsloth init openai

    Then edit your configuration to add custom base URL and API key. See CONFIGURATION.md for examples.

    cd ./your-project
    gsloth init xai

    Make sure you either define XAI_API_KEY environment variable or edit your configuration file and set up your key.

    Any other AI provider supported by LangChain.js can be configured with a JS config. For example, Ollama can be set up with a JS config (some models; see https://github.com/Galvanized-Pukeko/gaunt-sloth-assistant/discussions/107)

    JavaScript configs enable advanced customization including custom middleware and tools that aren't available in JSON configs. See the JavaScript config example for a complete demonstration of creating custom logging middleware and custom tools.

    An example GitHub workflow integration can be found in .github/workflows/review.yml; this example workflow performs an AI review on any push to a Pull Request, resulting in a comment left by the GitHub Actions bot.
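
    A minimal sketch of what such a workflow step could look like (the diff range, secret name, and comment step are illustrative assumptions; the repository's .github/workflows/review.yml is the authoritative example):

```yaml
# Hypothetical PR-review step; adapt to your pipeline
- name: AI review of the PR diff
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
    GH_TOKEN: ${{ github.token }}
  run: |
    npm install -g gaunt-sloth-assistant
    # Review the PR diff; -w writes the response to a named file
    git --no-pager diff origin/${{ github.base_ref }}...HEAD | gth review -w review.md
    # Post the review as a PR comment via GitHub CLI
    gh pr comment ${{ github.event.pull_request.number }} --body-file review.md
```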

    Gaunt Sloth supports connecting to MCP servers, including those requiring OAuth authentication.

    This has been tested with the Atlassian Jira MCP server. See the MCP configuration section for detailed setup instructions, or the Jira MCP example for a working configuration.

    If you experience issues with MCP auth, try finding the .gsloth dir in your home directory and deleting the JSON file matching the server you are trying to connect to; for example, for the Atlassian MCP the file would be ~/.gsloth/.gsloth-auth/mcp.atlassian.com_v1_sse.json

    Gaunt Sloth supports the A2A protocol for connecting to external AI agents. See CONFIGURATION.md for setup instructions.

    Uninstall global NPM package:

    npm uninstall -g gaunt-sloth-assistant
    

    Remove global config (if any):

    rm -r ~/.gsloth
    

    Remove configs from project (if necessary):

    rm -r ./.gsloth*
    

    Contributions are welcome through GitHub Issues and pull requests. For contributor workflow, local setup, testing expectations, and PR guidance, see CONTRIBUTING.md. Project participation is also covered by the Code of Conduct.

    License is MIT. See LICENSE