LangChain

LangChain is a framework for building applications with language models. It provides chains, agents, retrievers, and integrations across providers. Respan gives you full observability over every chain run, agent step, retriever call, and LLM generation, plus gateway routing through the OpenAI-compatible Respan endpoint.

Create an account at platform.respan.ai and grab an API key. To use the gateway, also add credits or a provider key.

Run `npx @respan/cli setup` to set up with your coding agent.

Setup

1. Install packages

```shell
pip install respan-ai openinference-instrumentation-langchain langchain langchain-openai
```
2. Set environment variables

```shell
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
```

`OPENAI_API_KEY` is used for LLM requests. `RESPAN_API_KEY` is used to export traces to Respan.

3. Initialize and run

```python
from langchain_openai import ChatOpenAI
from respan import Respan
from openinference.instrumentation.langchain import LangChainInstrumentor

respan = Respan(instrumentations=[LangChainInstrumentor()])

llm = ChatOpenAI(model="gpt-4.1-nano")

response = llm.invoke("Say hello in three languages.")
print(response.content)
respan.flush()
```
4. View your trace

Open the Traces page to see your LangChain workflow with chain runs, LLM calls, retriever spans, and tool calls.

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `str \| None` | `None` | Falls back to the `RESPAN_API_KEY` env var. |
| `base_url` | `str \| None` | `None` | Falls back to the `RESPAN_BASE_URL` env var. |
| `instrumentations` | `list` | `[]` | Plugin instrumentations to activate (e.g. `LangChainInstrumentor()`). |
| `customer_identifier` | `str \| None` | `None` | Default customer identifier for all spans. |
| `metadata` | `dict \| None` | `None` | Default metadata attached to all spans. |
| `environment` | `str \| None` | `None` | Environment tag (e.g. `"production"`). |

Attributes

In Respan()

Set defaults at initialization; they apply to all spans.

```python
from respan import Respan
from openinference.instrumentation.langchain import LangChainInstrumentor

respan = Respan(
    instrumentations=[LangChainInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "langchain-api", "version": "1.0.0"},
)
```

With propagate_attributes

Override per-request using a context scope.

```python
from langchain_openai import ChatOpenAI
from respan import Respan, propagate_attributes
from openinference.instrumentation.langchain import LangChainInstrumentor

respan = Respan(instrumentations=[LangChainInstrumentor()])
llm = ChatOpenAI(model="gpt-4.1-nano")

def handle_request(user_id: str, question: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="conv_abc_123",
        metadata={"plan": "pro"},
    ):
        response = llm.invoke(question)
        print(response.content)
```

| Attribute | Type | Description |
| --- | --- | --- |
| `customer_identifier` | `str` | Identifies the end user in Respan analytics. |
| `thread_identifier` | `str` | Groups related messages into a conversation. |
| `metadata` | `dict` | Custom key-value pairs. Merged with default metadata. |
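The "merged with default metadata" behavior can be pictured with a small context-manager sketch built on `contextvars`. This is an illustrative stand-in, not Respan's implementation; `propagate_metadata` and `current_metadata` are invented names:

```python
import contextvars
from contextlib import contextmanager

_default_metadata = {"service": "langchain-api"}
_scoped_metadata = contextvars.ContextVar("scoped_metadata", default={})

@contextmanager
def propagate_metadata(**metadata):
    # Layer request-scoped metadata on top of whatever is already in scope.
    token = _scoped_metadata.set({**_scoped_metadata.get(), **metadata})
    try:
        yield
    finally:
        _scoped_metadata.reset(token)

def current_metadata() -> dict:
    # Defaults first, then context-scoped overrides on top.
    return {**_default_metadata, **_scoped_metadata.get()}

with propagate_metadata(plan="pro"):
    print(current_metadata())  # {'service': 'langchain-api', 'plan': 'pro'}
print(current_metadata())      # {'service': 'langchain-api'}
```

Because the scope is a context variable, the override applies only inside the `with` block and is safe to use across concurrent requests.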

Decorators (optional)

Decorators are not required. All LangChain chains, agents, retrievers, and LLM calls are auto-traced by the instrumentor. Use `@workflow` and `@task` (Python) or `withWorkflow` and `withTask` (TypeScript) to add structure when you want to group related runs into a named workflow with nested tasks.

```python
from langchain_openai import ChatOpenAI
from respan import Respan, workflow, task
from openinference.instrumentation.langchain import LangChainInstrumentor

respan = Respan(instrumentations=[LangChainInstrumentor()])
llm = ChatOpenAI(model="gpt-4.1-nano")

@task(name="generate_outline")
def outline(topic: str) -> str:
    return llm.invoke(f"Create a brief outline about: {topic}").content

@workflow(name="content_pipeline")
def pipeline(topic: str):
    plan = outline(topic)
    response = llm.invoke(f"Write content from this outline: {plan}")
    print(response.content)

pipeline("Benefits of API gateways")
respan.flush()
```

Examples

Chains

Chains are auto-traced as a single workflow with nested LLM and tool spans.

```python
from langchain.chains import ConversationChain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4.1-nano")
chain = ConversationChain(llm=llm)

response = chain.run("Tell me about artificial intelligence")
print(response)
```

Streaming

Streaming responses are auto-traced like regular calls.

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4.1-nano", streaming=True)

for chunk in llm.stream("Write a haiku about Python."):
    print(chunk.content, end="", flush=True)
```