generate_example

langchain.chains.example_generator.generate_example(examples: List[dict], llm: BaseLanguageModel, prompt_template: PromptTemplate) → str

Return another example given a list of examples for a prompt.

Parameters:
  • examples (List[dict]) – existing examples; each dict's keys should match the prompt template's input variables
  • llm (BaseLanguageModel) – language model used to generate the new example
  • prompt_template (PromptTemplate) – prompt used to format each individual example

Return type:
  str
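
Example usage (a minimal sketch; it assumes the langchain-openai package is installed and OPENAI_API_KEY is set, though any BaseLanguageModel can be passed as llm):

# Assumed setup: langchain-openai installed and OPENAI_API_KEY exported.
from langchain.chains.example_generator import generate_example
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI  # assumption: any BaseLanguageModel works here

# Prompt used to format each existing example; its input variables
# must match the keys of the example dicts below.
example_prompt = PromptTemplate.from_template(
    "Question: {question}\nAnswer: {answer}"
)

examples = [
    {"question": "What is 2 + 2?", "answer": "4"},
    {"question": "What is the capital of France?", "answer": "Paris"},
]

llm = OpenAI(temperature=0.7)

# Returns a single string containing one newly generated example
# in the same format as the ones supplied.
new_example = generate_example(examples, llm, example_prompt)
print(new_example)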
