Prerequisites

  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://respan.ai/docs/mcp"
    }
  }
}

What is Weaviate?

Weaviate is an open-source vector database that supports semantic search, hybrid search, and generative search with built-in vectorization modules.
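To build intuition for how a vector database ranks results in semantic search, here is a toy cosine-similarity sketch in plain Python. It is illustrative only — Weaviate uses optimized approximate-nearest-neighbor indexes (e.g. HNSW), and the short vectors below are made-up stand-ins for real embeddings:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" — a real vectorizer (e.g. text2vec-openai) produces
# high-dimensional vectors from text.
docs = {
    "tracing": [0.9, 0.1, 0.2],
    "billing": [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.25]

# Rank documents by similarity to the query vector — the same idea
# behind a near_text search, minus the index structures.
ranked = sorted(docs, key=lambda d: cosine_similarity(docs[d], query), reverse=True)
print(ranked[0])
```

Semantic search ranks by vector similarity like this; hybrid search additionally blends in a keyword (BM25) score.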

Setup

1. Install packages

pip install respan-tracing weaviate-client openai
2. Set environment variables

export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
3. Initialize Respan and query Weaviate

Respan auto-instruments Weaviate — search, batch import, and CRUD operations are captured as spans.
from respan_tracing import RespanTelemetry
from respan_tracing.decorators import workflow, task
import weaviate
from weaviate.classes.config import Property, DataType, Configure
from openai import OpenAI

# Initialize — auto-instruments Weaviate
telemetry = RespanTelemetry()
client = OpenAI()
wv = weaviate.connect_to_local()  # or weaviate.connect_to_weaviate_cloud()

# Create the collection with an OpenAI vectorizer (skip if it already exists)
if not wv.collections.exists("Document"):
    wv.collections.create(
        name="Document",
        properties=[Property(name="text", data_type=DataType.TEXT)],
        vectorizer_config=Configure.Vectorizer.text2vec_openai(),
    )

collection = wv.collections.get("Document")

# Add documents
collection.data.insert_many([
    {"text": "Respan provides observability for LLM applications."},
    {"text": "Traces capture the full lifecycle of an LLM request."},
])


@task(name="search_docs")
def search_docs(query: str):
    results = collection.query.near_text(query=query, limit=3)
    return [obj.properties["text"] for obj in results.objects]


@workflow(name="rag_pipeline")
def rag_pipeline(query: str):
    context = "\n".join(search_docs(query))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Context: {context}"},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content


result = rag_pipeline("How does tracing work?")
print(result)

wv.close()
4. View your trace

Open the Traces page to see Weaviate operations as spans in your trace tree.

Configuration

Weaviate is auto-instrumented via Instruments.WEAVIATE. No additional configuration is needed.
Weaviate tracing is currently available in the Python SDK only; see the Python Tracing SDK reference for configuration options.
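If you want to trace only specific integrations, SDKs of this kind commonly accept a set of instruments at initialization. The sketch below is an assumption — the import path and `instruments` parameter name are not confirmed by this page, so check the Python Tracing SDK reference for the exact signature:

```python
from respan_tracing import RespanTelemetry
# NOTE: the import path and `instruments` keyword below are assumptions —
# consult the Python Tracing SDK reference for the actual API.
from respan_tracing.instruments import Instruments

telemetry = RespanTelemetry(
    instruments={Instruments.WEAVIATE, Instruments.OPENAI},  # trace only these
)
```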