set_llm_cache

langchain.globals.set_llm_cache(value: BaseCache | None) → None

Set a new LLM cache, overwriting the previous value, if any.

Parameters:

value (BaseCache | None) – the cache to install globally; pass None to disable LLM caching.

Return type:

None
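
A minimal sketch of typical usage, assuming the InMemoryCache implementation shipped in langchain_core.caches:

```python
from langchain.globals import set_llm_cache
from langchain_core.caches import InMemoryCache

# Install a process-wide, in-memory cache; repeated identical
# LLM calls are then served from the cache instead of the provider API.
set_llm_cache(InMemoryCache())

# Passing None removes the cache and disables caching again.
set_llm_cache(None)
```

Any BaseCache subclass works here; the in-memory cache is the simplest choice and is lost when the process exits.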

Examples using set_llm_cache

  • Astra DB
  • Cassandra
  • DSPy
  • How to cache LLM responses
  • How to cache chat model responses
  • Model caches
  • Momento
  • MongoDB Atlas
  • Redis
