Gaunt Sloth Assistant

    Implements

    Index

    Constructors

    Methods

    • Invoke the LLM with messages and a runnable config. For streaming, use the #stream method; streaming is preferred when the model API supports it. Note that when tools are involved, this method may still make multiple LLM calls internally via the LangChain dependency.

      Parameters

      • messages: BaseMessage[]
      • runConfig: RunnableConfig

      Returns Promise<string>
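
      A minimal sketch of how a caller might use invoke. The `BaseMessage`, `RunnableConfig`, and `LlmUtils` shapes below are simplified stand-ins for the LangChain types (in real code they come from `@langchain/core/messages` and `@langchain/core/runnables`), and the implementation is a stub that echoes the last message rather than calling a real model:

      ```typescript
      // Simplified stand-ins for the LangChain types (assumption: not the real imports).
      interface BaseMessage { role: string; content: string }
      interface RunnableConfig { configurable?: Record<string, unknown> }

      interface LlmUtils {
        invoke(messages: BaseMessage[], runConfig: RunnableConfig): Promise<string>;
      }

      // Stub implementation: echoes the last message instead of calling an LLM.
      const utils: LlmUtils = {
        async invoke(messages, _runConfig) {
          return `echo: ${messages[messages.length - 1].content}`;
        },
      };

      async function main() {
        const reply = await utils.invoke(
          [{ role: "user", content: "hello" }],
          { configurable: { thread_id: "demo" } },
        );
        console.log(reply); // → "echo: hello"
      }
      main();
      ```
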

    • Stream AI messages from the LLM given a user message and a runnable config. When streaming is not appropriate, use invoke.

      Parameters

      • messages: BaseMessage[]
      • runConfig: RunnableConfig

      Returns Promise<IterableReadableStream<string>>
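
      A sketch of consuming the stream method. The stub below yields fixed chunks in place of a real model's token stream, and `AsyncIterable<string>` stands in for LangChain's `IterableReadableStream<string>` (which is itself async-iterable); the type and method shapes are simplified assumptions:

      ```typescript
      // Simplified stand-ins for the LangChain types (assumption: not the real imports).
      interface BaseMessage { role: string; content: string }
      interface RunnableConfig { configurable?: Record<string, unknown> }

      interface LlmUtils {
        stream(messages: BaseMessage[], runConfig: RunnableConfig): Promise<AsyncIterable<string>>;
      }

      // Stub implementation: yields fixed chunks instead of a real token stream.
      const utils: LlmUtils = {
        async stream(_messages, _runConfig) {
          async function* chunks() {
            yield "Hello";
            yield ", ";
            yield "world";
          }
          return chunks();
        },
      };

      // Consume the stream chunk by chunk, accumulating the full response.
      async function collect(): Promise<string> {
        const stream = await utils.stream([{ role: "user", content: "greet me" }], {});
        let text = "";
        for await (const chunk of stream) {
          text += chunk; // append each chunk as it arrives
        }
        return text;
      }

      collect().then((t) => console.log(t)); // → "Hello, world"
      ```
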