Prerequisites

  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://respan.ai/docs/mcp"
    }
  }
}

What is LanceDB?

LanceDB is a serverless vector database built on the Lance columnar format. It runs embedded in your application with no server to manage.
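To make the embedded query model concrete, here is a dependency-free sketch of what a vector search does: brute-force cosine-similarity ranking over in-memory rows. This is illustrative only, not LanceDB's implementation — LanceDB runs the same kind of query against data persisted in the Lance format, with indexing on top. The row shape (`text` plus `vector`) mirrors the table created in the setup below.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query: list[float], rows: list[dict], k: int = 3) -> list[dict]:
    # Rank every stored row by similarity to the query vector, keep the best k.
    return sorted(
        rows,
        key=lambda r: cosine_similarity(query, r["vector"]),
        reverse=True,
    )[:k]

# Toy 2-dimensional "embeddings" for illustration.
rows = [
    {"text": "observability", "vector": [1.0, 0.0]},
    {"text": "tracing", "vector": [0.8, 0.6]},
    {"text": "unrelated", "vector": [0.0, 1.0]},
]
print([r["text"] for r in top_k([1.0, 0.1], rows, k=2)])
# → ['observability', 'tracing']
```

A real embedding model produces vectors with hundreds or thousands of dimensions, and an index replaces the linear scan, but the query semantics — nearest rows by vector similarity — are the same.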

Setup

1. Install packages

pip install respan-tracing lancedb openai
2. Set environment variables

export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
3. Initialize Respan and query LanceDB

Respan auto-instruments LanceDB — search and table operations are captured as spans.
from respan_tracing import RespanTelemetry
from respan_tracing.decorators import workflow, task
import lancedb
from openai import OpenAI

# Initialize — auto-instruments LanceDB
telemetry = RespanTelemetry()
client = OpenAI()
db = lancedb.connect("~/.lancedb")

docs = [
    "Respan provides observability for LLM applications.",
    "Traces capture the full lifecycle of an LLM request.",
]


def get_embeddings(texts: list[str]) -> list[list[float]]:
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in response.data]


# Create table with embeddings
vectors = get_embeddings(docs)
table = db.create_table(
    "docs",
    data=[
        {"text": text, "vector": vector}
        for text, vector in zip(docs, vectors)
    ],
    mode="overwrite",
)


@task(name="search_docs")
def search_docs(query: str):
    query_vector = get_embeddings([query])[0]
    results = table.search(query_vector).limit(3).to_list()
    return [row["text"] for row in results]


@workflow(name="rag_pipeline")
def rag_pipeline(query: str):
    context = search_docs(query)
    context_block = "\n".join(context)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Context: {context_block}"},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content


result = rag_pipeline("How does tracing work?")
print(result)
4. View your trace

Open the Traces page to see LanceDB operations as spans in your trace tree.

Configuration

LanceDB is auto-instrumented via Instruments.LANCEDB. No additional configuration is needed.
LanceDB tracing is currently available in the Python SDK only.
See the Python Tracing SDK reference for configuration options.
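If you want to restrict tracing to specific integrations, OpenTelemetry-based SDKs commonly accept an allow-list of instruments at initialization. A sketch of that pattern is below; the `instruments` parameter and the `Instruments` import path are assumptions based on this page's mention of `Instruments.LANCEDB`, not confirmed API — check the Python Tracing SDK reference for the actual signature.

```python
from respan_tracing import RespanTelemetry
from respan_tracing.instruments import Instruments  # assumed import path

# Hypothetical: only instrument LanceDB and OpenAI calls,
# leaving other auto-instrumentation disabled.
telemetry = RespanTelemetry(
    instruments={Instruments.LANCEDB, Instruments.OPENAI},
)
```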