Invokes the LLM with a message and a runnable config.
For streaming, use the #stream method instead; streaming is preferred when the model API supports it.
Note that when tools are involved, this method will still make multiple LLM calls internally
through the LangChain dependency.
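For illustration, a minimal sketch of the invoke-versus-stream distinction using LangChain's Python runnable interface; the model class (ChatOpenAI), model name, and config values are assumptions and not part of this library's API.

    from langchain_openai import ChatOpenAI  # assumed model provider for the sketch
    from langchain_core.runnables import RunnableConfig

    llm = ChatOpenAI(model="gpt-4o-mini")          # hypothetical model choice
    config = RunnableConfig(tags=["example"])      # runnable config passed alongside the message

    # Single blocking call: one response object returned when generation finishes.
    response = llm.invoke("Summarize LangChain in one sentence.", config=config)
    print(response.content)

    # Streaming alternative: chunks arrive incrementally as the model generates them.
    for chunk in llm.stream("Summarize LangChain in one sentence.", config=config):
        print(chunk.content, end="", flush=True)

When tools are bound to the model, a single logical invocation can translate into several underlying LLM calls (model call, tool execution, follow-up model call), which is why streaming is only preferred where the model API supports it.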