Prerequisites

  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://respan.ai/docs/mcp"
    }
  }
}

What is Qdrant?

Qdrant is an open-source vector similarity search engine with filtering support, designed for production-ready AI applications.

Setup

1. Install packages

pip install respan-tracing qdrant-client openai
2. Set environment variables

export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
3. Initialize Respan and query Qdrant

Respan auto-instruments Qdrant — search, upsert, and delete operations are captured as spans.
from respan_tracing import RespanTelemetry
from respan_tracing.decorators import workflow, task
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct, VectorParams, Distance
from openai import OpenAI

# Initialize — auto-instruments Qdrant
telemetry = RespanTelemetry()
client = OpenAI()
qdrant = QdrantClient(":memory:")  # Use url="http://localhost:6333" for production

# Create collection
qdrant.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)

docs = [
    "Respan provides observability for LLM applications.",
    "Traces capture the full lifecycle of an LLM request.",
]


def get_embeddings(texts: list[str]) -> list[list[float]]:
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in response.data]


# Seed documents
vectors = get_embeddings(docs)
qdrant.upsert(
    collection_name="docs",
    points=[
        PointStruct(id=i, vector=vectors[i], payload={"text": docs[i]})
        for i in range(len(docs))
    ],
)


@task(name="search_docs")
def search_docs(query: str):
    query_vector = get_embeddings([query])[0]
    results = qdrant.search(
        collection_name="docs", query_vector=query_vector, limit=3
    )
    return [hit.payload["text"] for hit in results]


@workflow(name="rag_pipeline")
def rag_pipeline(query: str):
    context = search_docs(query)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Context: {chr(10).join(context)}"},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content


result = rag_pipeline("How does tracing work?")
print(result)
4. View your trace

Open the Traces page to see Qdrant operations as spans in your trace tree.

Configuration

Qdrant is auto-instrumented via Instruments.QDRANT (Python) or qdrant (JavaScript). No additional configuration is needed. See the Python Tracing SDK reference or JavaScript Tracing SDK reference for configuration options.
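If you want to restrict auto-instrumentation to specific libraries, the SDK exposes the Instruments enum mentioned above. The constructor parameter and import path in this sketch are assumptions; confirm both in the Python Tracing SDK reference:

```python
from respan_tracing import RespanTelemetry
from respan_tracing.instruments import Instruments  # import path is an assumption

# Hypothetical: enable only Qdrant and OpenAI auto-instrumentation
# (parameter name `instruments` is an assumption, not confirmed by this page).
telemetry = RespanTelemetry(instruments={Instruments.QDRANT, Instruments.OPENAI})
```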