Quickstart
This quickstart helps you integrate your LLM application with Langfuse. It logs a single LLM call to get you started.
Create new project in Langfuse
- Create a Langfuse account
- Create a new project
- Create new API credentials in the project settings
Log your first LLM call to Langfuse
pip install langfuse
.env
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_HOST="https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_HOST="https://us.cloud.langfuse.com" # 🇺🇸 US region
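If you prefer not to use a .env file, the same variables can be set in code before the client is created; the Langfuse client reads its credentials from the environment. A minimal sketch with placeholder values:

```python
import os

# Placeholder values; replace with the real keys from your project settings.
# setdefault leaves any values already present in the environment untouched.
os.environ.setdefault("LANGFUSE_SECRET_KEY", "sk-lf-...")
os.environ.setdefault("LANGFUSE_PUBLIC_KEY", "pk-lf-...")
os.environ.setdefault("LANGFUSE_HOST", "https://cloud.langfuse.com")
```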
Example usage. Most of the parameters are optional and depend on the use case; for more information, see the Python docs.
server.py
from langfuse import Langfuse
# Create Langfuse client
langfuse = Langfuse()
# Create generation in Langfuse
generation = langfuse.generation(
name="summary-generation",
model="gpt-3.5-turbo",
model_parameters={"maxTokens": "1000", "temperature": "0.9"},
input=[{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Please generate a summary of the following documents \nThe engineering department defined the following OKR goals...\nThe marketing department defined the following OKR goals..."}],
metadata={"interface": "whatsapp"}
)
# Execute model, mocked here
# chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}])
chat_completion = {"completion": "The Q3 OKRs contain goals for multiple teams..."}
# Update the generation and set end_time
generation.end(output=chat_completion)
# The SDK executes network requests in the background.
# To ensure that all requests are sent before the process exits, call flush()
# Not necessary in long-running production code
langfuse.flush()
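The flush() call matters in short-lived scripts because events are sent asynchronously. As a rough sketch of the underlying pattern (not Langfuse's actual implementation), a background worker drains a queue and flush() blocks until the queue is empty:

```python
import queue
import threading

events: "queue.Queue[dict]" = queue.Queue()
sent = []

def worker() -> None:
    # Drain the queue in the background, simulating network sends
    while True:
        event = events.get()
        sent.append(event)  # in a real SDK this would be an HTTP request
        events.task_done()

threading.Thread(target=worker, daemon=True).start()

def log_event(payload: dict) -> None:
    events.put(payload)  # returns immediately; sending happens in the background

def flush() -> None:
    events.join()  # block until every queued event has been processed

log_event({"name": "summary-generation"})
flush()  # without this, a short script could exit before the event is sent
```

In long-running production code the process stays alive long enough for the background worker to drain the queue on its own, which is why an explicit flush() is only needed before exit.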
Done! Now visit the Langfuse interface to look at the trace you just created.
All Langfuse platform features
This was a very brief introduction to get started with Langfuse. Explore all Langfuse platform features in detail.
Develop
Monitor
Test