Prerequisites
- A Snowglobe account
- API key for an LLM API endpoint (e.g. OpenAI, Anthropic, etc.)
Step 1: Connect Your AI Chatbot
Connect your chatbot by providing your LLM endpoint, system prompt, and API key. This example uses a simple LLM with a system prompt; for more complex chatbots, see our chatbot connection guide.
Required Parameters
Chatbot Description
Brief description of what your chatbot does and who it’s for.
Chatbot API Endpoint
Your LLM API endpoint.
System Prompt
The system prompt that guides your chatbot’s behavior.
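The chatbot you connect is, at its simplest, an LLM endpoint plus a system prompt. As a hedged sketch of what that pairing looks like in practice (the request shape below follows the common OpenAI-style chat format; the model name and tutoring prompt are illustrative, not Snowglobe defaults):

```python
# Sketch of an OpenAI-style chat request for a math-tutor chatbot.
# Model name and prompt text are illustrative assumptions.

SYSTEM_PROMPT = (
    "You are a patient math tutor. Guide students toward answers "
    "with hints and questions; never give the final answer outright."
)

def build_chat_request(user_message: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble the JSON body for a chat-completion-style endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

request_body = build_chat_request("I can't solve 2x + 3 = 11. What's x?")
```

The endpoint, system prompt, and API key you supply in the form below are exactly the pieces needed to send a request like this on your behalf.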
Step-by-Step Walkthrough
- Click “Connect Chatbot”
- Enter your chatbot name and fill in the parameters listed above
- Click “Connect Chatbot”
Success Indicators
You’ll see your chatbot appear in your dashboard with a green “Connected” status.
Step 2: Configure Your Simulation
Set up a simulation to test how your chatbot handles struggling math students.
Required Parameters
Simulation Intent
Describe the user behavior or edge cases you want to test.
Step-by-Step Walkthrough
- Click “New Simulation” from your chatbot page
- Name your simulation (e.g., “Math Struggling Students”)
- Enter your simulation intent (copy from above)
- Set number of personas (start with 10) and number of conversations (start with 50)
- Click “Start Simulation”
Success Indicators
You’ll be redirected to your simulation page, where you’ll see personas being generated and queued for your approval.
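Taken together, the walkthrough above amounts to a small configuration. Here is a hedged sketch of what it captures, as a plain Python dict (the field names and intent text are illustrative, not the Snowglobe SDK or API):

```python
# Illustrative simulation config mirroring the walkthrough above.
# Field names are hypothetical -- enter these values in the Snowglobe UI.
simulation = {
    "name": "Math Struggling Students",
    "intent": (
        "Students who are stuck on algebra and arithmetic problems, "
        "ranging from mildly confused to frustrated, asking for help."
    ),
    "num_personas": 10,       # start small; each persona is a distinct student
    "num_conversations": 50,  # conversations are spread across personas
}

# Rough check: with 10 personas and 50 conversations, each persona
# is exercised about 5 times on average.
conversations_per_persona = (
    simulation["num_conversations"] / simulation["num_personas"]
)
```

Starting with 10 personas and 50 conversations keeps each persona exercised several times while the run stays quick to review.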
Step 3: Monitor Your Simulation
Watch as Snowglobe creates diverse student personas and runs conversations with your chatbot.
What’s happening: Snowglobe generates realistic student personas (struggling with different math topics, various grade levels, different learning styles) and simulates conversations to see how well your chatbot guides them without giving direct answers.
Your role:
- Approve personas as they’re generated (this ensures they match your testing goals)
- Review conversations in real-time as they develop
- Add tags to flag interesting interactions or issues
Success Indicators
You’ll see completed conversations appearing in your simulation dashboard, each tagged with performance metrics.
Step 4: Analyze Results
Review your simulation results to identify strengths and improvement areas. Two main views:
- Heatmap Overview: quickly spot which conversations and topics have the most issues
- Per-Metric Breakdown: see which personas perform best or worst on each metric and identify challenging topics
Success Indicators
You can clearly identify specific areas where your chatbot needs improvement (e.g., “Chatbot gives direct answers to algebra word problems” or “Struggles with frustrated students”).
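Under the hood, a per-metric breakdown boils down to grouping tagged conversations by metric and counting failures. A sketch of that aggregation in plain Python (the records, persona names, and metric names are made up for illustration; Snowglobe computes this for you in the dashboard):

```python
from collections import defaultdict

# Illustrative conversation records: (persona, metric, passed).
results = [
    ("frustrated-student", "no_direct_answers", False),
    ("frustrated-student", "stays_on_topic", True),
    ("curious-beginner", "no_direct_answers", True),
    ("curious-beginner", "stays_on_topic", True),
]

def failure_rates(records):
    """Fraction of conversations that failed, per metric."""
    totals, fails = defaultdict(int), defaultdict(int)
    for _persona, metric, passed in records:
        totals[metric] += 1
        if not passed:
            fails[metric] += 1
    return {metric: fails[metric] / totals[metric] for metric in totals}

rates = failure_rates(results)
```

A metric with a high failure rate concentrated in one persona group (here, direct answers slipping out with frustrated students) points to exactly the kind of finding quoted above.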
Next Steps
Immediate actions:
- Adjust your system prompt based on identified issues
- Run another simulation to test improvements
- Try different simulation intents (e.g., “Advanced students asking complex questions” or “Students trying to get direct answers”)
SDK Quickstart
Automate Snowglobe workflows programmatically.
Simulation Intent Guide
Learn how to design more targeted tests.
Metrics Guide
Learn how to measure what matters most for your chatbot.
Chatbot Connection Guide
Learn how to connect your chatbot to Snowglobe.
Troubleshooting
- Chatbot gives direct answers: strengthen your system prompt’s restrictions
- Conversations feel unnatural: Adjust simulation intent, or provide some grounding examples as a starting point
- Missing edge cases: Try more specific simulation intents
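For the first issue, "strengthening restrictions" usually means making the prohibition explicit and covering the workarounds students try. A hedged before/after sketch (both prompts are illustrative, not recommended defaults):

```python
# Illustrative only: a weak vs. strengthened tutoring system prompt.
weak_prompt = "You are a math tutor. Help students with their homework."

strong_prompt = (
    "You are a math tutor. Guide students using hints, leading questions, "
    "and worked examples of similar problems. Never state the final answer "
    "to the student's own problem, even if they insist, rephrase the "
    "question, or say they only want to check their work."
)
```

The strengthened version names the forbidden behavior directly and anticipates common evasions, which is what simulation intents like "Students trying to get direct answers" are designed to stress-test.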