Populate .gsloth.guidelines.md with your project details and quality requirements.
A proper preamble is paramount for good inference.
Check .gsloth.guidelines.md for an example.
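For instance, a hypothetical minimal .gsloth.guidelines.md (the project details and rules below are purely illustrative) might look like:

You are a senior TypeScript developer working on a CLI tool.
Follow these quality requirements:
- Prefer small, pure functions and strict typing.
- Require unit tests for any new behavior.
- Flag usages of deprecated APIs during review.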
Your project should have the following files in order for gsloth to function:
- .gsloth.config.js (JavaScript module)
- .gsloth.config.json (JSON file)
- .gsloth.config.mjs (JavaScript module with explicit module extension)
- .gsloth.guidelines.md

Gaunt Sloth currently only functions from the directory which has one of the configuration files and .gsloth.guidelines.md. Configuration files can be located in the project root or in the .gsloth/.gsloth-settings/ directory. You can also specify a path to a configuration file directly using the -c or --config global flag, for example gth -c /path/to/your/config.json ask "who are you?"
For a tidier project structure, you can create a .gsloth directory in your project root. When this directory exists, gsloth will:
- Write output files to the .gsloth directory instead of the project root
- Look for configuration files in the .gsloth/.gsloth-settings/ subdirectory

Example directory structure when using the .gsloth directory:
.gsloth/.gsloth-settings/.gsloth.config.json
.gsloth/.gsloth-settings/.gsloth.guidelines.md
.gsloth/.gsloth-settings/.gsloth.review.md
.gsloth/gth_2025-05-18_09-34-38_ASK.md
.gsloth/gth_2025-05-18_22-09-00_PR-22.md
If the .gsloth directory doesn't exist, gsloth will continue writing all files to the project root directory as it did previously.
Note: When initializing a project with an existing .gsloth directory, the configuration files will be created in the .gsloth/.gsloth-settings directory automatically. There is no automated migration for existing configurations - if you create a .gsloth directory after initialization, you'll need to manually move your configuration files into the .gsloth/.gsloth-settings directory.
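A minimal migration sketch, assuming a JSON configuration currently sitting in the project root:

mkdir -p .gsloth/.gsloth-settings
mv .gsloth.config.json .gsloth.guidelines.md .gsloth/.gsloth-settings/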
Sometimes two different teams have different perspectives on a project. For example, developers may want to review the code for code quality, while DevOps may want to be notified when certain configuration files or Docker images change. Their Gaunt Sloth configurations may be so different that it is better to keep them completely separate.
Identity profiles may be used to define different Gaunt Sloth identities for different purposes.
Identity profiles can only be activated in directory-based configuration.
When gth -i devops pr PR_NO is invoked, the configuration is pulled from the .gsloth/.gsloth-settings/devops/ directory,
which may contain a full set of config files:
.gsloth.backstory.md
.gsloth.config.json
.gsloth.guidelines.md
.gsloth.review.md
When no identity profile is specified in the command, for example gth pr PR_NO,
the configuration is pulled from the .gsloth/.gsloth-settings/ directory.
-i or --identity-profile overrides the entire configuration directory, which means the profile directory should contain a configuration file and prompt files. If some prompt files are missing, they will be fetched from the installation directory.
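For example, a project with a devops identity profile might be laid out as follows (a sketch; the devops directory may contain any of the files listed above):

.gsloth/.gsloth-settings/.gsloth.config.json
.gsloth/.gsloth-settings/.gsloth.guidelines.md
.gsloth/.gsloth-settings/devops/.gsloth.config.json
.gsloth/.gsloth-settings/devops/.gsloth.guidelines.md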
By default, Gaunt Sloth writes each response to gth_<timestamp>_<COMMAND>.md under .gsloth/ (or the project root).
Set writeOutputToFile in your config to:
- true (default) for standard filenames
- false to skip writing files
- a bare filename (e.g. "review.md"): files are placed in .gsloth/ when it exists, otherwise the project root
- a path (e.g. "./review.md" or "reviews/last.md"): always relative to the project root

Examples:
- "review.md" → .gsloth/review.md (when .gsloth exists) or review.md (otherwise)
- "./review.md" → review.md (always project root)
- "reviews/last.md" → reviews/last.md (always relative to project root)

Override the setting per run with -w/--write-output-to-file true|false|<filename>. Shortcuts -wn or -w0 map to false.
Refer to the documentation site for the Configuration Interface.
Refer to the documentation site for the Default Config Values.
It is always worth checking the source code in config.ts for more insight.
Configuration can be created with the gsloth init [vendor] command.
Currently, anthropic, groq, deepseek, openai, google-genai, vertexai, openrouter and xai can be configured with gsloth init [vendor].
For providers using OpenAI format (like Inception), use gsloth init openai and then modify the configuration.
cd ./your-project
gsloth init google-genai
cd ./your-project
gsloth init vertexai
gcloud auth login
gcloud auth application-default login
cd ./your-project
gsloth init anthropic
Make sure you either define the ANTHROPIC_API_KEY environment variable or edit your configuration file and set up your key.
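For example, on Linux or macOS you could export the key in your shell before running gsloth (the value is a placeholder); the same pattern applies to the other vendors' variables such as GROQ_API_KEY or OPENAI_API_KEY:

export ANTHROPIC_API_KEY="your-api-key-here"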
cd ./your-project
gsloth init groq
Make sure you either define the GROQ_API_KEY environment variable or edit your configuration file and set up your key.
cd ./your-project
gsloth init deepseek
Make sure you either define the DEEPSEEK_API_KEY environment variable or edit your configuration file and set up your key.
(note this is meant to be an API key from deepseek.com, rather than from a distributor like TogetherAI)
cd ./your-project
gsloth init openai
Make sure you either define the OPENAI_API_KEY environment variable or edit your configuration file and set up your key.
cd ./your-project
gsloth init openrouter
Make sure you either define the OPEN_ROUTER_API_KEY environment variable or edit your configuration file and set up your key.
LM Studio provides a local OpenAI-compatible server for running models on your machine.
cd ./your-project
gsloth init openai
Then edit your configuration file to point to your LM Studio server:
{
"llm": {
"type": "openai",
"model": "openai/gpt-oss-20b",
"apiKey": "none",
"configuration": {
"baseURL": "http://127.0.0.1:1234/v1"
}
}
}
Configuration notes:
type to "openai"apiKey can be any random string (e.g., "none") - LM Studio doesn't validate itbaseURL is http://127.0.0.1:1234/v1, but adjust the port if you've configured LM Studio differentlymodel should match the model identifier shown in LM StudioFor a complete example, see examples/lmstudio/.gsloth.config.json.
For providers that use OpenAI-compatible APIs:
cd ./your-project
gsloth init openai
Then edit your configuration file to add the custom base URL and API key. For example, for Inception:
{
"llm": {
"type": "openai",
"model": "mercury-coder",
"apiKeyEnvironmentVariable": "INCEPTION_API_KEY",
"configuration": {
"baseURL": "https://api.inceptionlabs.ai/v1"
}
}
}
cd ./your-project
gsloth init xai
Make sure you either define the XAI_API_KEY environment variable or edit your configuration file and set up your key.
JSON configuration is simpler but less flexible than JavaScript configuration. It should directly contain the configuration object.
Example of .gsloth.config.json for Anthropic
{
"llm": {
"type": "anthropic",
"apiKey": "your-api-key-here",
"model": "claude-sonnet-4-5"
}
}
You can use the ANTHROPIC_API_KEY environment variable instead of specifying apiKey in the config.
Example of .gsloth.config.json for Groq
{
"llm": {
"type": "groq",
"model": "deepseek-r1-distill-llama-70b",
"apiKey": "your-api-key-here"
}
}
You can use the GROQ_API_KEY environment variable instead of specifying apiKey in the config.
Example of .gsloth.config.json for DeepSeek
{
"llm": {
"type": "deepseek",
"model": "deepseek-reasoner",
"apiKey": "your-api-key-here"
}
}
You can use the DEEPSEEK_API_KEY environment variable instead of specifying apiKey in the config.
Example of .gsloth.config.json for OpenAI
{
"llm": {
"type": "openai",
"model": "gpt-4o",
"apiKey": "your-api-key-here"
}
}
You can use the OPENAI_API_KEY environment variable instead of specifying apiKey in the config.
Example of .gsloth.config.json for LM Studio (OpenAI-compatible)
{
"llm": {
"type": "openai",
"model": "openai/gpt-oss-20b",
"apiKey": "none",
"configuration": {
"baseURL": "http://127.0.0.1:1234/v1"
}
}
}
LM Studio runs locally and doesn't require a real API key. Use any string for apiKey.
Note: The model must support tool calling. Tested models include gpt-oss, granite, nemotron, seed, and qwen3.
Example of .gsloth.config.json for Inception (OpenAI-compatible)
{
"llm": {
"type": "openai",
"model": "mercury-coder",
"apiKeyEnvironmentVariable": "INCEPTION_API_KEY",
"configuration": {
"baseURL": "https://api.inceptionlabs.ai/v1"
}
}
}
You can use the INCEPTION_API_KEY environment variable as specified in apiKeyEnvironmentVariable.
Example of .gsloth.config.json for Google GenAI
{
"llm": {
"type": "google-genai",
"model": "gemini-2.5-pro",
"apiKey": "your-api-key-here"
}
}
You can use the GOOGLE_API_KEY environment variable instead of specifying apiKey in the config.
Example of .gsloth.config.json for VertexAI
{
"llm": {
"type": "vertexai",
"model": "gemini-2.5-pro"
}
}
VertexAI typically uses gcloud authentication; no apiKey is needed in the config.
Example of .gsloth.config.json for Open Router
{
"llm": {
"type": "openrouter",
"model": "moonshotai/kimi-k2"
}
}
Make sure you either define the OPEN_ROUTER_API_KEY environment variable or edit your configuration file and set up your key.
When changing a model, make sure you're using a model which supports tools.
Example of .gsloth.config.json for xAI
{
"llm": {
"type": "xai",
"model": "grok-4-0709",
"apiKey": "your-api-key-here"
}
}
You can use the XAI_API_KEY environment variable instead of specifying apiKey in the config.
(.gsloth.config.js or .gsloth.config.mjs)
JavaScript configuration provides more flexibility than JSON configuration, allowing you to use dynamic imports and include custom tools.
For a complete working example demonstrating custom middleware and custom tools, see the examples directory in the repository.
The example demonstrates:
- Middleware hooks (beforeAgent, beforeModel, afterModel, afterAgent)
- Custom tools built with the tool() API

Example with Custom Tools
// .gsloth.config.mjs
import { tool } from '@langchain/core/tools';
import { z } from 'zod';
const parrotTool = tool((s) => {
  // Print the string and echo it back so the model receives a tool result
  console.log(s);
  return s;
}, {
  name: 'parrot_tool',
  description: 'This tool will simply print the string',
  schema: z.string(),
});

export async function configure() {
  const vertexAi = await import('@langchain/google-vertexai');
  return {
    llm: new vertexAi.ChatVertexAI({
      model: 'gemini-2.5-pro',
    }),
    tools: [
      parrotTool
    ]
  };
}
Example of .gsloth.config.mjs for Anthropic
export async function configure() {
const anthropic = await import('@langchain/anthropic');
return {
llm: new anthropic.ChatAnthropic({
apiKey: process.env.ANTHROPIC_API_KEY, // Default value, but you can provide the key in many different ways, even as a literal
model: "claude-sonnet-4-5"
})
};
}
Example of .gsloth.config.mjs for Groq
export async function configure() {
const groq = await import('@langchain/groq');
return {
llm: new groq.ChatGroq({
model: "deepseek-r1-distill-llama-70b", // Check other models available
apiKey: process.env.GROQ_API_KEY, // Default value, but you can provide the key in many different ways, even as a literal
})
};
}
Example of .gsloth.config.mjs for DeepSeek
export async function configure() {
const deepseek = await import('@langchain/deepseek');
return {
llm: new deepseek.ChatDeepSeek({
model: 'deepseek-reasoner',
apiKey: process.env.DEEPSEEK_API_KEY, // Default value, but you can provide the key in many different ways, even as a literal
})
};
}
Example of .gsloth.config.mjs for OpenAI
export async function configure() {
const openai = await import('@langchain/openai');
return {
llm: new openai.ChatOpenAI({
model: 'gpt-4o',
apiKey: process.env.OPENAI_API_KEY, // Default value, but you can provide the key in many different ways, even as a literal
})
};
}
Example of .gsloth.config.mjs for LM Studio (OpenAI-compatible)
export async function configure() {
const openai = await import('@langchain/openai');
return {
llm: new openai.ChatOpenAI({
model: 'openai/gpt-oss-20b',
apiKey: 'none', // LM Studio doesn't validate API keys
configuration: {
baseURL: 'http://127.0.0.1:1234/v1',
},
})
};
}
Note: The model must support tool calling. Tested models include gpt-oss, granite, nemotron, seed, and qwen3.
Example of .gsloth.config.mjs for Inception (OpenAI-compatible)
export async function configure() {
const openai = await import('@langchain/openai');
return {
llm: new openai.ChatOpenAI({
model: 'mercury-coder',
apiKey: process.env.INCEPTION_API_KEY, // Default value, but you can provide the key in many different ways, even as a literal
configuration: {
baseURL: 'https://api.inceptionlabs.ai/v1',
},
})
};
}
Example of .gsloth.config.mjs for Google GenAI
export async function configure() {
const googleGenai = await import('@langchain/google-genai');
return {
llm: new googleGenai.ChatGoogleGenerativeAI({
model: 'gemini-2.5-pro',
apiKey: process.env.GOOGLE_API_KEY, // Default value, but you can provide the key in many different ways, even as a literal
})
};
}
Example of .gsloth.config.mjs for VertexAI
VertexAI usually needs gcloud auth application-default login
(or both gcloud auth login and gcloud auth application-default login) and does not need any separate API keys.
export async function configure() {
const vertexAi = await import('@langchain/google-vertexai');
return {
llm: new vertexAi.ChatVertexAI({
model: "gemini-2.5-pro", // Consider checking for latest recommended model versions
// API Key from AI Studio should also work
//// Other parameters might be relevant depending on Vertex AI API updates.
//// The project is not in the interface, but it is in documentation and it seems to work.
// project: 'your-cool-google-cloud-project',
})
}
}
Example of .gsloth.config.mjs for xAI
export async function configure() {
const xai = await import('@langchain/xai');
return {
llm: new xai.ChatXAI({
model: 'grok-4-0709',
apiKey: process.env.XAI_API_KEY, // Default value, but you can provide the key in many different ways, even as a literal
})
};
}
The configure function should return a configuration object whose llm property is an instance of a LangChain chat model, as in the examples above. See the LangChain documentation for more details.
An example GitHub Workflows integration can be found in .github/workflows/review.yml. This example workflow performs an AI review on any push to a pull request, resulting in a comment left by the GitHub Actions bot.
Gaunt Sloth Assistant supports the Model Context Protocol (MCP), which provides enhanced context management. You can connect to various MCP servers, including those requiring OAuth authentication.
Gaunt Sloth now supports OAuth authentication for MCP servers. This has been tested with the Atlassian Jira MCP server.
To connect to the Atlassian Jira MCP server using OAuth, add the following to your .gsloth.config.json:
{
"llm": {
"type": "vertexai",
"model": "gemini-2.5-pro",
"temperature": 0
},
"mcpServers": {
"jira": {
"url": "https://mcp.atlassian.com/v1/sse",
"authProvider": "OAuth",
"transport": "sse"
}
}
}
For a complete working example, see examples/jira-mcp.
OAuth Authentication Flow and Token Storage:
Tokens obtained during the OAuth flow are stored in ~/.gsloth/.gsloth-auth/.

To configure a local MCP server, add the mcpServers section to your configuration file. For example, a configuration for the reference sequential thinking MCP server follows:
{
"llm": {
"type": "vertexai",
"model": "gemini-2.5-pro"
},
"mcpServers": {
"sequential-thinking": {
"transport": "stdio",
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-sequential-thinking"
]
}
}
}
This configuration launches the sequential thinking MCP server using npx, providing the LLM with a structured step-by-step reasoning tool. The server uses stdio for communication with the LLM.
Gaunt Sloth supports GitHub issues as a requirements provider using the GitHub CLI. This integration is simple to use and requires minimal setup.
Prerequisites: the GitHub CLI (gh) must be installed and authenticated.
Usage:
The command syntax is gsloth pr <prId> [githubIssueId]. For example:
gsloth pr 42 23
This will review PR #42 and include GitHub issue #23 as requirements.
To explicitly specify the GitHub issue provider:
gsloth pr 42 23 -p github
Configuration:
To set GitHub as your default requirements provider, add this to your configuration file:
{
"llm": {"type": "vertexai", "model": "gemini-2.5-pro"},
"commands": {
"pr": {
"requirementsProvider": "github"
}
}
}
Gaunt Sloth supports three methods to integrate with JIRA:
- the Atlassian MCP server with OAuth,
- the Jira Cloud REST API v3 with a scoped Personal Access Token,
- the legacy Jira REST API v2 with an unscoped API token.

MCP can be used in chat and code commands.
Gaunt Sloth has an OAuth client for MCP and is confirmed to work with the public Jira MCP server.
{
"llm": {
"type": "vertexai",
"model": "gemini-2.5-pro",
"temperature": 0
},
"mcpServers": {
"jira": {
"url": "https://mcp.atlassian.com/v1/sse",
"authProvider": "OAuth",
"transport": "sse"
}
}
}
The Jira API is used with the pr and review commands.
This method uses the Atlassian REST API v3 with a Personal Access Token (PAT). It requires your Atlassian Cloud ID.
Prerequisites:
Cloud ID: You can find your Cloud ID by visiting https://yourcompany.atlassian.net/_edge/tenant_info while authenticated.
Personal Access Token (PAT): Create a PAT with the appropriate permissions from Atlassian Account Settings -> Security -> Create and manage API tokens -> [Create API token with scopes].
- read:jira-work (classic)
- read:issue-meta:jira, read:issue-security-level:jira, read:issue.vote:jira, read:issue.changelog:jira, read:avatar:jira, read:issue:jira, read:status:jira, read:user:jira, read:field-configuration:jira (granular)

Refer to the JIRA API documentation for more details: https://developer.atlassian.com/cloud/jira/platform/rest/v3/api-group-issues/#api-rest-api-3-issue-issueidorkey-get
Environment Variables Support:
For better security, you can set the JIRA username, token, and cloud ID using environment variables instead of placing them in the configuration file:
- JIRA_USERNAME: Your JIRA username (e.g., user@yourcompany.com).
- JIRA_API_PAT_TOKEN: Your JIRA Personal Access Token with scopes.
- JIRA_CLOUD_ID: Your Atlassian Cloud ID.

If these environment variables are set, they will take precedence over the values in the configuration file.
JSON:
{
"llm": {"type": "vertexai", "model": "gemini-2.5-pro"},
"requirementsProvider": "jira",
"requirementsProviderConfig": {
"jira": {
"username": "username@yourcompany.com",
"token": "YOUR_JIRA_PAT_TOKEN",
"cloudId": "YOUR_ATLASSIAN_CLOUD_ID"
}
}
}
Optionally, displayUrl can be defined to provide a clickable link in the output:
{
"llm": {"type": "vertexai", "model": "gemini-2.5-pro"},
"requirementsProvider": "jira",
"requirementsProviderConfig": {
"jira": {
"displayUrl": "https://yourcompany.atlassian.net/browse/"
}
}
}
JavaScript:
export async function configure() {
const vertexAi = await import('@langchain/google-vertexai');
return {
llm: new vertexAi.ChatVertexAI({
model: "gemini-2.5-pro"
}),
requirementsProvider: 'jira',
requirementsProviderConfig: {
'jira': {
username: 'username@yourcompany.com', // Your Jira username/email
token: 'YOUR_JIRA_PAT_TOKEN', // Your Personal Access Token
cloudId: 'YOUR_ATLASSIAN_CLOUD_ID' // Your Atlassian Cloud ID
}
}
}
}
When you pass a Jira issue ID to gsloth pr and use the modern Jira provider (requirementsProvider: "jira"),
you can ask Gaunt Sloth to log review time back to that issue automatically by setting
commands.pr.logWorkForReviewInSeconds. The value is recorded as worklog seconds after each PR review.
{
"commands": {
"pr": {
"requirementsProvider": "jira",
"logWorkForReviewInSeconds": 600
}
}
}
This automation only runs when a requirementsId is supplied on the command line and the provider resolves to jira.
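For example, assuming PR number 42 and a hypothetical Jira issue key PROJ-123, the following invocation would review the PR and log 600 seconds of work to the issue:

gsloth pr 42 PROJ-123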
The Jira API is used with the pr and review commands.
This uses the unscoped API token (aka legacy API token) method with REST API v2.
A legacy token can be acquired from Atlassian Account Settings -> Security -> Create and manage API tokens -> [Create API token without scopes].
Example configuration setting up JIRA integration using a legacy API token for both review and pr commands.
Make sure you use your actual company domain in baseUrl and your personal legacy token.
Environment Variables Support:
For better security, you can set the JIRA username and token using environment variables instead of placing them in the configuration file:
- JIRA_USERNAME: Your JIRA username (e.g., user@yourcompany.com).
- JIRA_LEGACY_API_TOKEN: Your JIRA legacy API token.

If these environment variables are set, they will take precedence over the values in the configuration file.
JSON:
{
"llm": {"type": "vertexai", "model": "gemini-2.5-pro"},
"requirementsProvider": "jira-legacy",
"requirementsProviderConfig": {
"jira-legacy": {
"username": "username@yourcompany.com",
"token": "YOUR_JIRA_LEGACY_TOKEN",
"baseUrl": "https://yourcompany.atlassian.net/rest/api/2/issue/"
}
}
}
JavaScript:
export async function configure() {
const vertexAi = await import('@langchain/google-vertexai');
return {
llm: new vertexAi.ChatVertexAI({
model: "gemini-2.5-pro"
}),
requirementsProvider: 'jira-legacy',
requirementsProviderConfig: {
'jira-legacy': {
username: 'username@yourcompany.com', // Your Jira username/email
token: 'YOUR_JIRA_LEGACY_TOKEN', // Replace with your real Jira API token
baseUrl: 'https://yourcompany.atlassian.net/rest/api/2/issue/' // Your Jira instance base URL
}
}
}
}
The code command can be configured with development tools via commands.code.devTools. These tools allow the AI to run build, tests, lint, and single tests using the specified commands.
The tools are defined in src/tools/GthDevToolkit.ts and include run_build, run_tests, run_lint, and run_single_test.
These tools execute the configured shell commands and capture their output.
Example configuration including dev tools (from .gsloth.config.json):
{
"llm": {
"type": "xai",
"model": "grok-4-0709"
},
"commands": {
"code": {
"filesystem": "all",
"devTools": {
"run_build": "npm build",
"run_tests": "npm test",
"run_lint": "npm run lint-n-fix",
"run_single_test": "npm test"
}
}
}
}
Note: For run_single_test, the command can include a placeholder like ${testPath} for the test file path.
Security validations are in place to prevent path traversal or injection.
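For instance, assuming a test runner that accepts a file path after --, a hypothetical run_single_test entry using the placeholder could look like:

"run_single_test": "npm test -- ${testPath}"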
Gaunt Sloth supports middleware to intercept and control agent execution at critical points. Middleware provides hooks for cost optimization, conversation management, and custom logic.
There are two predefined middleware types available:
Reduces API costs by caching prompts (Anthropic models only):
{
"llm": {
"type": "anthropic",
"model": "claude-sonnet-4-5"
},
"middleware": [
"anthropic-prompt-caching"
]
}
With custom TTL configuration:
{
"middleware": [
{
"name": "anthropic-prompt-caching",
"ttl": "5m"
}
]
}
TTL options: "5m" (5 minutes) or "1h" (1 hour)
Automatically condenses conversation history when approaching token limits:
{
"middleware": [
"summarization"
]
}
With custom configuration:
{
"middleware": [
{
"name": "summarization",
"maxTokensBeforeSummary": 8000,
"messagesToKeep": 5
}
]
}
Configuration options:
- maxTokensBeforeSummary: Maximum tokens before triggering summarization (default: 10000)
- messagesToKeep: Number of recent messages to keep after summarization
- summaryPrompt: Custom prompt template for summarization
- model: Custom model for summarization (defaults to the main LLM)

You can combine multiple middleware:
{
"llm": {
"type": "anthropic",
"model": "claude-sonnet-4-5"
},
"middleware": [
"anthropic-prompt-caching",
{
"name": "summarization",
"maxTokensBeforeSummary": 12000
}
]
}
Custom middleware objects are only available in JavaScript configurations. Always wrap them with LangChain's createMiddleware to include the required MIDDLEWARE_BRAND marker—plain objects/functions will be rejected by the registry.
// .gsloth.config.mjs
import { createMiddleware } from 'langchain';
const requestLogger = createMiddleware({
name: 'request-logger',
beforeModel: (state) => {
// Custom logic before model execution
console.log('Processing request...');
return state;
},
afterModel: (state) => {
// Custom logic after model execution
console.log('Model completed');
return state;
},
});
export async function configure() {
const anthropic = await import('@langchain/anthropic');
return {
llm: new anthropic.ChatAnthropic({
model: "claude-sonnet-4-5"
}),
middleware: [
"summarization",
requestLogger
]
};
}
The review and pr commands provide automated review scoring with configurable pass/fail thresholds. Rating is enabled by default: the AI concludes every review with a numerical rating (0-10) and a comment explaining the rating.
Out of the box, without any configuration, rating is enabled, reviews need a score of at least 6 out of 10 to pass, and a failing review exits with error code 1.
You can customize rating behavior for review and pr commands under commands.review.rating or commands.pr.rating:
- enabled (boolean, default: true): Enable or disable review rating
- passThreshold (number 0-10, default: 6): Minimum score required to pass the review
- minRating (number, default: 0): Lower bound for the rating scale
- maxRating (number, default: 10): Upper bound for the rating scale
- errorOnReviewFail (boolean, default: true): Exit with error code 1 when the review fails (scores below the threshold)

Default configuration (no config needed):
Rating works out of the box with no configuration required! The defaults provide sensible CI/CD integration.
Disable rating:
{
"commands": {
"review": {
"rating": {
"enabled": false
}
}
}
}
Custom threshold:
{
"commands": {
"review": {
"rating": {
"passThreshold": 8
}
}
}
}
Different thresholds for review and PR:
{
"llm": {
"type": "anthropic",
"model": "claude-sonnet-4-5"
},
"commands": {
"review": {
"rating": {
"enabled": true,
"passThreshold": 6,
"errorOnReviewFail": true
}
},
"pr": {
"rating": {
"enabled": true,
"passThreshold": 7,
"errorOnReviewFail": true
}
}
}
}
Rating without failing the build:
{
"commands": {
"review": {
"rating": {
"enabled": true,
"passThreshold": 6,
"errorOnReviewFail": false
}
}
}
}
When rating is enabled, the review will conclude with a clearly formatted rating section:
============================================================
REVIEW RATING
============================================================
PASS 8/10 (threshold: 6)
Comment: Code quality is good with minor improvements needed.
Well-structured and follows best practices.
============================================================
For failing reviews:
============================================================
REVIEW RATING
============================================================
FAIL 4/10 (threshold: 6)
Comment: Significant issues found requiring refactoring
before this code can be merged.
============================================================
When errorOnReviewFail is set to true (default), failed reviews will exit with code 1, which will fail CI/CD pipeline steps. This is useful for enforcing code quality standards in automated workflows.
Example usage in GitHub Actions:
- name: Run code review
run: gsloth review -f changed-files.diff
# This step will fail if rating is below threshold
Note: A2A support is an experimental feature and may change in future releases.
Gaunt Sloth supports the A2A protocol for connecting to external AI agents. This allows delegating tasks to specialized agents.
Add a2aAgents to your configuration file:
{
"llm": {
"type": "YOUR_PROVIDER",
"model": "MODEL_OF_YOUR_CHOICE"
},
"a2aAgents": {
"myAgent": {
"agentId": "my-agent-id",
"agentUrl": "http://localhost:8080/a2a"
}
}
}
Each agent becomes available as a tool named a2a_agent_<agentId> in chat and code commands.
See examples/a2a for a working example.
Some AI providers offer integrated server-side tools, such as web search.
.gsloth.config.json for OpenAI Web Search
{
"llm": {
"type": "openai",
"model": "gpt-4o"
},
"tools": [
{ "type": "web_search_preview" }
]
}
.gsloth.config.json for Anthropic Web Search
{
"llm": {
"type": "anthropic",
"model": "claude-sonnet-4-5"
},
"tools": [
{
"type": "web_search_20250305",
"name": "web_search",
"max_uses": 10
}
]
}