Populate `.gsloth.guidelines.md` with your project details and quality requirements. A proper preamble is paramount for good inference; check `.gsloth.guidelines.md` for an example.
Your project should have the following files in order for gsloth to function:

- one of `.gsloth.config.js` (JavaScript module), `.gsloth.config.json` (JSON file), or `.gsloth.config.mjs` (JavaScript module with explicit module extension)
- `.gsloth.guidelines.md`
Gaunt Sloth currently only functions from a directory which contains one of the configuration files and `.gsloth.guidelines.md`. Configuration files can be located in the project root or in the `.gsloth/.gsloth-settings/` directory.

You can also specify a path to a configuration file directly using the `-c` or `--config` global flag, for example:

```shell
gth -c /path/to/your/config.json ask "who are you?"
```
For a tidier project structure, you can create a `.gsloth` directory in your project root. When this directory exists, gsloth will:

- write its output files to the `.gsloth` directory instead of the project root
- read its configuration from the `.gsloth/.gsloth-settings/` subdirectory

Example directory structure when using the `.gsloth` directory:

```
.gsloth/.gsloth-settings/.gsloth-config.json
.gsloth/.gsloth-settings/.gsloth.guidelines.md
.gsloth/.gsloth-settings/.gsloth.review.md
.gsloth/gth_2025-05-18_09-34-38_ASK.md
.gsloth/gth_2025-05-18_22-09-00_PR-22.md
```
If the `.gsloth` directory doesn't exist, gsloth will continue writing all files to the project root directory as it did previously.

Note: When initializing a project with an existing `.gsloth` directory, the configuration files will be created in the `.gsloth/.gsloth-settings` directory automatically. There is no automated migration for existing configurations: if you create a `.gsloth` directory after initialization, you'll need to manually move your configuration files into the `.gsloth/.gsloth-settings` directory, as sketched below.
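A minimal sketch of such a manual move, assuming a JSON configuration and the default file names shown in the directory structure above:

```shell
mkdir -p .gsloth/.gsloth-settings
mv .gsloth.config.json .gsloth.guidelines.md .gsloth.review.md .gsloth/.gsloth-settings/
```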
Refer to the documentation site for the Configuration Interface.
Refer to the documentation site for the Default Config Values.
It is always worth checking the source code in `config.ts` for more insight.
Configuration can be created with the `gsloth init [vendor]` command. Currently, anthropic, groq, deepseek, openai, openrouter, google-genai, vertexai, and xai can be configured with `gsloth init [vendor]`. For providers using the OpenAI API format (like Inception), use `gsloth init openai` and then modify the configuration.
```shell
cd ./your-project
gsloth init google-genai
```
```shell
cd ./your-project
gsloth init vertexai
gcloud auth login
gcloud auth application-default login
```
```shell
cd ./your-project
gsloth init anthropic
```

Make sure you either define the `ANTHROPIC_API_KEY` environment variable or edit your configuration file and set up your key.
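For example, in a POSIX shell (the key value is a placeholder):

```shell
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
gsloth ask "who are you?"
```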
```shell
cd ./your-project
gsloth init groq
```

Make sure you either define the `GROQ_API_KEY` environment variable or edit your configuration file and set up your key.
```shell
cd ./your-project
gsloth init deepseek
```

Make sure you either define the `DEEPSEEK_API_KEY` environment variable or edit your configuration file and set up your key. (Note: this is meant to be an API key from deepseek.com, rather than from a distributor like TogetherAI.)
```shell
cd ./your-project
gsloth init openai
```

Make sure you either define the `OPENAI_API_KEY` environment variable or edit your configuration file and set up your key.
```shell
cd ./your-project
gsloth init openrouter
```

Make sure you either define the `OPEN_ROUTER_API_KEY` environment variable or edit your configuration file and set up your key.
For providers that use OpenAI-compatible APIs:

```shell
cd ./your-project
gsloth init openai
```

Then edit your configuration file to add the custom base URL and API key. For example, for Inception:
```json
{
  "llm": {
    "type": "openai",
    "model": "mercury-coder",
    "apiKeyEnvironmentVariable": "INCEPTION_API_KEY",
    "configuration": {
      "baseURL": "https://api.inceptionlabs.ai/v1"
    }
  }
}
```
```shell
cd ./your-project
gsloth init xai
```

Make sure you either define the `XAI_API_KEY` environment variable or edit your configuration file and set up your key.
JSON configuration is simpler but less flexible than JavaScript configuration. It should directly contain the configuration object.
Example of .gsloth.config.json for Anthropic
```json
{
  "llm": {
    "type": "anthropic",
    "apiKey": "your-api-key-here",
    "model": "claude-3-5-sonnet-20241022"
  }
}
```
You can use the `ANTHROPIC_API_KEY` environment variable instead of specifying `apiKey` in the config.
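For instance, this minimal sketch relies solely on the environment variable and omits `apiKey` from the file (an assumption based on the statement above; the vertexai and openrouter examples below likewise omit `apiKey`):

```json
{
  "llm": {
    "type": "anthropic",
    "model": "claude-3-5-sonnet-20241022"
  }
}
```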
Example of .gsloth.config.json for Groq
```json
{
  "llm": {
    "type": "groq",
    "model": "deepseek-r1-distill-llama-70b",
    "apiKey": "your-api-key-here"
  }
}
```
You can use the `GROQ_API_KEY` environment variable instead of specifying `apiKey` in the config.
Example of .gsloth.config.json for DeepSeek
```json
{
  "llm": {
    "type": "deepseek",
    "model": "deepseek-reasoner",
    "apiKey": "your-api-key-here"
  }
}
```
You can use the `DEEPSEEK_API_KEY` environment variable instead of specifying `apiKey` in the config.
Example of .gsloth.config.json for OpenAI
```json
{
  "llm": {
    "type": "openai",
    "model": "gpt-4o",
    "apiKey": "your-api-key-here"
  }
}
```
You can use the `OPENAI_API_KEY` environment variable instead of specifying `apiKey` in the config.
Example of .gsloth.config.json for Inception (OpenAI-compatible)
```json
{
  "llm": {
    "type": "openai",
    "model": "mercury-coder",
    "apiKeyEnvironmentVariable": "INCEPTION_API_KEY",
    "configuration": {
      "baseURL": "https://api.inceptionlabs.ai/v1"
    }
  }
}
```
You can use the `INCEPTION_API_KEY` environment variable, as specified in `apiKeyEnvironmentVariable`.
Example of .gsloth.config.json for Google GenAI
```json
{
  "llm": {
    "type": "google-genai",
    "model": "gemini-2.5-pro",
    "apiKey": "your-api-key-here"
  }
}
```
You can use the `GOOGLE_API_KEY` environment variable instead of specifying `apiKey` in the config.
Example of .gsloth.config.json for VertexAI
```json
{
  "llm": {
    "type": "vertexai",
    "model": "gemini-2.5-pro"
  }
}
```
VertexAI typically uses gcloud authentication; no `apiKey` is needed in the config.
Example of .gsloth.config.json for Open Router
```json
{
  "llm": {
    "type": "openrouter",
    "model": "moonshotai/kimi-k2"
  }
}
```
Make sure you either define the `OPEN_ROUTER_API_KEY` environment variable or edit your configuration file and set up your key. When changing the model, make sure you're using a model which supports tools.
Example of .gsloth.config.json for xAI
```json
{
  "llm": {
    "type": "xai",
    "model": "grok-4-0709",
    "apiKey": "your-api-key-here"
  }
}
```
You can use the `XAI_API_KEY` environment variable instead of specifying `apiKey` in the config.
JavaScript Configuration (.gsloth.config.js or .gsloth.config.mjs)
JavaScript configuration provides more flexibility than JSON configuration, allowing you to use dynamic imports and include custom tools.
Example with Custom Tools
```javascript
// .gsloth.config.mjs
import { tool } from '@langchain/core/tools';
import { z } from 'zod';

// A trivial example tool which prints the provided string
const parrotTool = tool((s) => {
  console.log(s);
  return s; // return the string so the model receives the tool output
}, {
  name: 'parrot_tool',
  description: 'This tool will simply print the string',
  schema: z.string(),
});

export async function configure() {
  const vertexAi = await import('@langchain/google-vertexai');
  return {
    llm: new vertexAi.ChatVertexAI({
      model: 'gemini-2.5-pro',
    }),
    tools: [parrotTool],
  };
}
```
Example of .gsloth.config.mjs for Anthropic
```javascript
export async function configure() {
  const anthropic = await import('@langchain/anthropic');
  return {
    llm: new anthropic.ChatAnthropic({
      apiKey: process.env.ANTHROPIC_API_KEY, // Default value, but you can provide the key in many different ways, even as literal
      model: "claude-3-5-sonnet-20241022"
    })
  };
}
```
Example of .gsloth.config.mjs for Groq
```javascript
export async function configure() {
  const groq = await import('@langchain/groq');
  return {
    llm: new groq.ChatGroq({
      model: "deepseek-r1-distill-llama-70b", // Check other models available
      apiKey: process.env.GROQ_API_KEY, // Default value, but you can provide the key in many different ways, even as literal
    })
  };
}
```
Example of .gsloth.config.mjs for DeepSeek
```javascript
export async function configure() {
  const deepseek = await import('@langchain/deepseek');
  return {
    llm: new deepseek.ChatDeepSeek({
      model: 'deepseek-reasoner',
      apiKey: process.env.DEEPSEEK_API_KEY, // Default value, but you can provide the key in many different ways, even as literal
    })
  };
}
```
Example of .gsloth.config.mjs for OpenAI
```javascript
export async function configure() {
  const openai = await import('@langchain/openai');
  return {
    llm: new openai.ChatOpenAI({
      model: 'gpt-4o',
      apiKey: process.env.OPENAI_API_KEY, // Default value, but you can provide the key in many different ways, even as literal
    })
  };
}
```
Example of .gsloth.config.mjs for Inception (OpenAI-compatible)
```javascript
export async function configure() {
  const openai = await import('@langchain/openai');
  return {
    llm: new openai.ChatOpenAI({
      model: 'mercury-coder',
      apiKey: process.env.INCEPTION_API_KEY, // Default value, but you can provide the key in many different ways, even as literal
      configuration: {
        baseURL: 'https://api.inceptionlabs.ai/v1',
      },
    })
  };
}
```
Example of .gsloth.config.mjs for Google GenAI
```javascript
export async function configure() {
  const googleGenai = await import('@langchain/google-genai');
  return {
    llm: new googleGenai.ChatGoogleGenerativeAI({
      model: 'gemini-2.5-pro',
      apiKey: process.env.GOOGLE_API_KEY, // Default value, but you can provide the key in many different ways, even as literal
    })
  };
}
```
Example of .gsloth.config.mjs for VertexAI
VertexAI usually needs gcloud auth application-default login
(or both gcloud auth login
and gcloud auth application-default login
) and does not need any separate API keys.
```javascript
export async function configure() {
  const vertexAi = await import('@langchain/google-vertexai');
  return {
    llm: new vertexAi.ChatVertexAI({
      model: "gemini-2.5-pro", // Consider checking for latest recommended model versions
      // API Key from AI Studio should also work
      // Other parameters might be relevant depending on Vertex AI API updates.
      // The project is not in the interface, but it is in documentation and it seems to work.
      // project: 'your-cool-google-cloud-project',
    })
  };
}
```
Example of .gsloth.config.mjs for xAI
```javascript
export async function configure() {
  const xai = await import('@langchain/xai');
  return {
    llm: new xai.ChatXAI({
      model: 'grok-4-0709',
      apiKey: process.env.XAI_API_KEY, // Default value, but you can provide the key in many different ways, even as literal
    })
  };
}
```
The `configure` function should return a configuration object whose `llm` property is an instance of a LangChain chat model. See the LangChain documentation for more details.
An example GitHub workflow integration can be found in `.github/workflows/review.yml`; this example workflow performs an AI review on any push to a pull request, resulting in a comment left by the GitHub Actions bot.
Gaunt Sloth Assistant supports the Model Context Protocol (MCP), which provides enhanced context management. You can connect to various MCP servers, including those requiring OAuth authentication.
Gaunt Sloth now supports OAuth authentication for MCP servers. This has been tested with the Atlassian Jira MCP server.
To connect to the Atlassian Jira MCP server using OAuth, add the following to your `.gsloth.config.json`:
```json
{
  "llm": {
    "type": "vertexai",
    "model": "gemini-2.5-pro",
    "temperature": 0
  },
  "mcpServers": {
    "jira": {
      "url": "https://mcp.atlassian.com/v1/sse",
      "authProvider": "OAuth",
      "transport": "sse"
    }
  }
}
```
Token Storage: tokens obtained during the OAuth authentication flow are stored in `~/.gsloth/.gsloth-auth/`.
To configure a local MCP server, add the `mcpServers` section to your configuration file. For example, the configuration for the reference sequential-thinking MCP server follows:
```json
{
  "llm": {
    "type": "vertexai",
    "model": "gemini-2.5-pro"
  },
  "mcpServers": {
    "sequential-thinking": {
      "transport": "stdio",
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-sequential-thinking"
      ]
    }
  }
}
```
This configuration launches the sequential-thinking MCP server using npx, providing the LLM with the sequential thinking tool. The server uses stdio for communication with the LLM.
Gaunt Sloth supports GitHub issues as a requirements provider using the GitHub CLI. This integration is simple to use and requires minimal setup.

Prerequisites: the GitHub CLI (`gh`) must be installed and authenticated.

Usage: the command syntax is `gsloth pr <prId> [githubIssueId]`. For example:

```shell
gsloth pr 42 23
```

This will review PR #42 and include GitHub issue #23 as requirements.
To explicitly specify the GitHub issue provider:

```shell
gsloth pr 42 23 -p github
```
Configuration:
To set GitHub as your default requirements provider, add this to your configuration file:
```json
{
  "llm": {"type": "vertexai", "model": "gemini-2.5-pro"},
  "commands": {
    "pr": {
      "requirementsProvider": "github"
    }
  }
}
```
Gaunt Sloth supports three methods to integrate with JIRA: the Atlassian MCP server with OAuth, the Jira REST API v3 with a scoped Personal Access Token, and the legacy Jira REST API v2 with an unscoped token.

MCP can be used in the `chat` and `code` commands. Gaunt Sloth has an OAuth client for MCP and is confirmed to work with the public Jira MCP server.
```json
{
  "llm": {
    "type": "vertexai",
    "model": "gemini-2.5-pro",
    "temperature": 0
  },
  "mcpServers": {
    "jira": {
      "url": "https://mcp.atlassian.com/v1/sse",
      "authProvider": "OAuth",
      "transport": "sse"
    }
  }
}
```
The Jira API is used with the `pr` and `review` commands. This method uses the Atlassian REST API v3 with a Personal Access Token (PAT). It requires your Atlassian Cloud ID.
Prerequisites:

Cloud ID: You can find your Cloud ID by visiting https://yourcompany.atlassian.net/_edge/tenant_info while authenticated (see the curl sketch after the scope list below).

Personal Access Token (PAT): Create a PAT with the appropriate permissions from Atlassian Account Settings -> Security -> Create and manage API tokens -> [Create API token with scopes]. The token needs the following scopes:
- `read:jira-work` (classic)
- granular: `read:issue-meta:jira`, `read:issue-security-level:jira`, `read:issue.vote:jira`, `read:issue.changelog:jira`, `read:avatar:jira`, `read:issue:jira`, `read:status:jira`, `read:user:jira`, `read:field-configuration:jira`
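As referenced above, a command-line sketch for finding the Cloud ID (hypothetical; depending on your instance, the endpoint may require an authenticated session):

```shell
# Fetch the tenant info JSON, which contains your cloudId
curl -s https://yourcompany.atlassian.net/_edge/tenant_info
```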
Refer to the JIRA API documentation for more details: https://developer.atlassian.com/cloud/jira/platform/rest/v3/api-group-issues/#api-rest-api-3-issue-issueidorkey-get
Environment Variables Support:

For better security, you can set the JIRA username, token, and cloud ID using environment variables instead of placing them in the configuration file:

- `JIRA_USERNAME`: Your JIRA username (e.g., user@yourcompany.com).
- `JIRA_API_PAT_TOKEN`: Your JIRA Personal Access Token with scopes.
- `JIRA_CLOUD_ID`: Your Atlassian Cloud ID.

If these environment variables are set, they will take precedence over the values in the configuration file.
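For example, in a POSIX shell (all values are placeholders):

```shell
export JIRA_USERNAME="user@yourcompany.com"
export JIRA_API_PAT_TOKEN="your-pat-token"
export JIRA_CLOUD_ID="your-cloud-id"
```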
JSON:
```json
{
  "llm": {"type": "vertexai", "model": "gemini-2.5-pro"},
  "requirementsProvider": "jira",
  "requirementsProviderConfig": {
    "jira": {
      "username": "username@yourcompany.com",
      "token": "YOUR_JIRA_PAT_TOKEN",
      "cloudId": "YOUR_ATLASSIAN_CLOUD_ID"
    }
  }
}
```
Optionally, `displayUrl` can be defined to have a clickable link in the output:
```json
{
  "llm": {"type": "vertexai", "model": "gemini-2.5-pro"},
  "requirementsProvider": "jira",
  "requirementsProviderConfig": {
    "jira": {
      "displayUrl": "https://yourcompany.atlassian.net/browse/"
    }
  }
}
```
JavaScript:
```javascript
export async function configure() {
  const vertexAi = await import('@langchain/google-vertexai');
  return {
    llm: new vertexAi.ChatVertexAI({
      model: "gemini-2.5-pro"
    }),
    requirementsProvider: 'jira',
    requirementsProviderConfig: {
      'jira': {
        username: 'username@yourcompany.com', // Your Jira username/email
        token: 'YOUR_JIRA_PAT_TOKEN', // Your Personal Access Token
        cloudId: 'YOUR_ATLASSIAN_CLOUD_ID' // Your Atlassian Cloud ID
      }
    }
  };
}
```
The Jira API is used with the `pr` and `review` commands. This method uses the unscoped API token (aka legacy API token) with REST API v2. A legacy token can be acquired from Atlassian Account Settings -> Security -> Create and manage API tokens -> [Create API token without scopes].
An example configuration setting up JIRA integration using a legacy API token for both the `review` and `pr` commands follows. Make sure you use your actual company domain in `baseUrl` and your personal legacy `token`.
Environment Variables Support:

For better security, you can set the JIRA username and token using environment variables instead of placing them in the configuration file:

- `JIRA_USERNAME`: Your JIRA username (e.g., user@yourcompany.com).
- `JIRA_LEGACY_API_TOKEN`: Your JIRA legacy API token.

If these environment variables are set, they will take precedence over the values in the configuration file.
JSON:
```json
{
  "llm": {"type": "vertexai", "model": "gemini-2.5-pro"},
  "requirementsProvider": "jira-legacy",
  "requirementsProviderConfig": {
    "jira-legacy": {
      "username": "username@yourcompany.com",
      "token": "YOUR_JIRA_LEGACY_TOKEN",
      "baseUrl": "https://yourcompany.atlassian.net/rest/api/2/issue/"
    }
  }
}
```
JavaScript:
```javascript
export async function configure() {
  const vertexAi = await import('@langchain/google-vertexai');
  return {
    llm: new vertexAi.ChatVertexAI({
      model: "gemini-2.5-pro"
    }),
    requirementsProvider: 'jira-legacy',
    requirementsProviderConfig: {
      'jira-legacy': {
        username: 'username@yourcompany.com', // Your Jira username/email
        token: 'YOUR_JIRA_LEGACY_TOKEN', // Replace with your real Jira API token
        baseUrl: 'https://yourcompany.atlassian.net/rest/api/2/issue/' // Your Jira instance base URL
      }
    }
  };
}
```
The `code` command can be configured with development tools via `commands.code.devTools`. These tools allow the AI to run the build, the tests, the linter, and a single test using the specified commands.

The tools are defined in `src/tools/GthDevToolkit.ts` and include:

- `run_build`
- `run_tests`
- `run_lint`
- `run_single_test`

These tools execute the configured shell commands and capture their output.
Example configuration including dev tools (from .gsloth.config.json):
```json
{
  "llm": {
    "type": "xai",
    "model": "grok-4-0709"
  },
  "commands": {
    "code": {
      "filesystem": "all",
      "devTools": {
        "run_build": "npm run build",
        "run_tests": "npm test",
        "run_lint": "npm run lint-n-fix",
        "run_single_test": "npm test"
      }
    }
  }
}
```
Note: For `run_single_test`, the command can include a placeholder like `${testPath}` for the test file path. Security validations are in place to prevent path traversal or injection.
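A hypothetical `run_single_test` entry using that placeholder might look like this (a sketch; the exact substitution behavior is defined in `GthDevToolkit.ts`):

```json
{
  "commands": {
    "code": {
      "devTools": {
        "run_single_test": "npm test -- ${testPath}"
      }
    }
  }
}
```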
Some AI providers offer integrated server-side tools, such as web search.
.gsloth.config.json for OpenAI Web Search
```json
{
  "llm": {
    "type": "openai",
    "model": "gpt-4o"
  },
  "tools": [
    { "type": "web_search_preview" }
  ]
}
```
.gsloth.config.json for Anthropic Web Search
```json
{
  "llm": {
    "type": "anthropic",
    "model": "claude-sonnet-4-20250514"
  },
  "tools": [
    {
      "type": "web_search_20250305",
      "name": "web_search",
      "max_uses": 10
    }
  ]
}
```