acompletion_with_retry

async langchain_mistralai.chat_models.acompletion_with_retry(llm: ChatMistralAI, run_manager: AsyncCallbackManagerForLLMRun | None = None, **kwargs: Any) → Any

Use tenacity to retry the async completion call.

Parameters:
  • llm (ChatMistralAI) – the chat model instance whose async completion call is retried; the retry policy is taken from this instance.

  • run_manager (AsyncCallbackManagerForLLMRun | None) – optional async callback manager for the run.

  • kwargs (Any) – keyword arguments forwarded to the underlying completion call.

Return type:

Any
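
A minimal usage sketch, not taken from the library docs: it assumes that **kwargs is forwarded as the raw Mistral chat-completions payload (so it carries the model and messages fields the API expects), that ChatMistralAI picks up MISTRAL_API_KEY from the environment, and that the number of attempts comes from the llm instance's max_retries setting.

import asyncio

from langchain_mistralai.chat_models import ChatMistralAI, acompletion_with_retry


async def main() -> None:
    # ChatMistralAI reads MISTRAL_API_KEY from the environment by default;
    # max_retries is assumed to drive the tenacity retry policy.
    llm = ChatMistralAI(model="mistral-small-latest", max_retries=3)

    # Assumption: **kwargs becomes the chat-completions request payload,
    # so it needs the same fields the Mistral API expects (model, messages).
    response = await acompletion_with_retry(
        llm,
        model="mistral-small-latest",
        messages=[{"role": "user", "content": "Say hello in one word."}],
    )
    print(response)


if __name__ == "__main__":
    asyncio.run(main())

In normal use this helper is called for you by the chat model's async generation path; calling it directly like this is mainly useful when you need the raw provider response while keeping the model's retry behavior.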
