LangGraph

LangGraph is a Python framework for building stateful, multi-step agent workflows as graphs. Nodes represent operations (LLM calls, tools, routing), and edges define the flow between them. Respan gives you full observability over every graph run, node, and LLM call — and gateway routing through the OpenAI-compatible Respan endpoint.

Create an account at platform.respan.ai and grab an API key. To use the gateway, also add credits or a provider API key.

Run `npx @respan/cli setup` to set up the integration with your coding agent.

Setup

1. Install packages

```bash
pip install respan-ai openinference-instrumentation-langchain langgraph langchain-openai
```

2. Set environment variables

```bash
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
```

`OPENAI_API_KEY` is used for LLM requests. `RESPAN_API_KEY` is used to export traces to Respan.
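
If you would rather set these from Python (for example in a notebook) instead of the shell, a small sketch of the equivalent setup:

```python
import os

# Same variables as above; set them before creating the LLM or the Respan client.
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
os.environ["RESPAN_API_KEY"] = "YOUR_RESPAN_API_KEY"
```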

3. Initialize and run

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI
from respan import Respan
from openinference.instrumentation.langchain import LangChainInstrumentor

# Initialize Respan with the LangChain instrumentor; it auto-traces LangGraph nodes and LLM calls.
respan = Respan(instrumentations=[LangChainInstrumentor()])

llm = ChatOpenAI(model="gpt-4.1-nano")

class State(TypedDict):
    topic: str
    joke: str
    review: str

def generate_joke(state: State) -> dict:
    response = llm.invoke(f"Tell me a joke about {state['topic']}")
    return {"joke": response.content}

def review_joke(state: State) -> dict:
    response = llm.invoke(f"Rate this joke: {state['joke']}")
    return {"review": response.content}

# Build the graph: START -> generate -> review -> END
graph = StateGraph(State)
graph.add_node("generate", generate_joke)
graph.add_node("review", review_joke)
graph.add_edge(START, "generate")
graph.add_edge("generate", "review")
graph.add_edge("review", END)

app = graph.compile()
result = app.invoke({"topic": "AI tracing"})
print(result)
respan.flush()  # export any buffered spans before the process exits
```

4. View your trace

Open the Traces page to see the graph execution with node spans, LLM calls, and state transitions.
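
The intro also mentions gateway routing through the OpenAI-compatible Respan endpoint. A minimal sketch of pointing ChatOpenAI at that gateway might look like the following; the gateway URL shown is a placeholder assumption rather than a documented value, as is the use of your Respan API key for authentication, so confirm both in your Respan dashboard.

```python
import os
from langchain_openai import ChatOpenAI

# Placeholder gateway URL -- replace with the endpoint shown in your Respan dashboard.
RESPAN_GATEWAY_URL = "https://api.respan.ai/v1"

llm = ChatOpenAI(
    model="gpt-4.1-nano",
    base_url=RESPAN_GATEWAY_URL,
    api_key=os.environ["RESPAN_API_KEY"],  # assumption: the gateway accepts your Respan key
)
```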

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | str \| None | None | Falls back to the RESPAN_API_KEY env var. |
| base_url | str \| None | None | Falls back to the RESPAN_BASE_URL env var. |
| instrumentations | list | [] | Plugin instrumentations to activate (e.g. LangChainInstrumentor()). |
| customer_identifier | str \| None | None | Default customer identifier for all spans. |
| metadata | dict \| None | None | Default metadata attached to all spans. |
| environment | str \| None | None | Environment tag (e.g. "production"). |
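
All of these are optional keyword arguments to Respan(). A sketch combining a few of them (the values are illustrative only):

```python
from respan import Respan
from openinference.instrumentation.langchain import LangChainInstrumentor

respan = Respan(
    api_key="YOUR_RESPAN_API_KEY",   # omit to fall back to the RESPAN_API_KEY env var
    instrumentations=[LangChainInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "graph-api"},
    environment="production",
)
```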

Attributes

In Respan()

Set defaults at initialization — these apply to all spans.

```python
from respan import Respan
from openinference.instrumentation.langchain import LangChainInstrumentor

respan = Respan(
    instrumentations=[LangChainInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "graph-api", "version": "1.0.0"},
)
```

With propagate_attributes

Override per-request using a context scope.

```python
from respan import Respan, propagate_attributes
from openinference.instrumentation.langchain import LangChainInstrumentor

respan = Respan(instrumentations=[LangChainInstrumentor()])

def handle_request(user_id: str, topic: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="conv_abc_123",
        metadata={"plan": "pro"},
    ):
        # `app` is the compiled graph from the setup example above.
        result = app.invoke({"topic": topic})
        print(result)
```

| Attribute | Type | Description |
| --- | --- | --- |
| customer_identifier | str | Identifies the end user in Respan analytics. |
| thread_identifier | str | Groups related messages into a conversation. |
| metadata | dict | Custom key-value pairs. Merged with default metadata. |

Decorators (optional)

Decorators are not required. All LangGraph node executions and LLM calls are auto-traced by the instrumentor. Use @workflow and @task to add structure when you want to group related graph runs into a named workflow with nested tasks.

```python
from respan import Respan, workflow, task
from openinference.instrumentation.langchain import LangChainInstrumentor

respan = Respan(instrumentations=[LangChainInstrumentor()])

# `app` is the compiled graph from the setup example above.
@task(name="run_joke_graph")
def run_joke_graph(topic: str):
    return app.invoke({"topic": topic})

@workflow(name="joke_pipeline")
def pipeline(topic: str):
    print(run_joke_graph(topic))

pipeline("AI tracing")
respan.flush()
```