Get your AI chatbot stress-tested in under 5 minutes. We’ll walk through testing a tutoring chatbot that helps students with homework using the Socratic method, but you can use Snowglobe with any conversational chatbot.

Prerequisites

  • A Snowglobe account. Sign up here.
  • An API key for an LLM endpoint (e.g., OpenAI or Anthropic)

Step 1: Connect Your AI Chatbot

Connect your chatbot by providing your LLM endpoint, system prompt, and API key.
This example uses a simple LLM with a system prompt. For more complex chatbots, check out our chatbot connection guide.
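
Before you connect, it’s worth confirming that your endpoint, API key, and system prompt work together. Below is a minimal sanity check, assuming an OpenAI-compatible chat completions endpoint; the model name and prompt wording are illustrative, so substitute your own.

```python
# Minimal sanity check for an OpenAI-compatible chat endpoint.
# Model name and prompts are illustrative; substitute your own values.
import os
import requests

API_ENDPOINT = "https://api.openai.com/v1/chat/completions"
SYSTEM_PROMPT = (
    "You are a patient math tutor. Use the Socratic method: never give the "
    "final answer directly; respond with guiding questions and small hints."
)

response = requests.post(
    API_ENDPOINT,
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",  # illustrative model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Solve 3x + 5 = 20 for me. Just give me x."},
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

If the reply nudges the student with a question instead of answering “x = 5,” your endpoint and prompt are behaving as intended.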

Required Parameters

Chatbot Description
A brief description of what your chatbot does and who it’s for (e.g., “A Socratic math tutor that guides students through homework problems without giving away answers”).
Chatbot API Endpoint
The LLM API endpoint your chatbot calls (e.g., https://api.openai.com/v1/chat/completions).
System Prompt
The system prompt that guides your chatbot’s behavior (for this walkthrough, the Socratic tutoring prompt from the sketch above).

Step by Step Walkthrough

  1. Click “Connect Chatbot”
  2. Enter your chatbot name and fill in the parameters described above
  3. Click “Connect Chatbot”

Success Indicators

You’ll see your chatbot appear in your dashboard with a green “Connected” status.

Step 2: Configure Your Simulation

Set up a simulation to test how your chatbot handles struggling math students.

Required Parameters

Simulation Intent
Describe the user behavior or edge cases you want to test.
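
Snowglobe uses this intent to generate personas, so make it concrete about who the users are and how they behave. For this walkthrough, a hypothetical intent might read: “Simulate middle- and high-school students who are struggling with math homework. Some get frustrated quickly, and some push the tutor to just give them the answer.”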

Step by Step Walkthrough

  1. Click “New Simulation” from your chatbot page
  2. Name your simulation (e.g., “Math Struggling Students”)
  3. Enter your simulation intent (you can copy the example above)
  4. Set number of personas (start with 10) and number of conversations (start with 50)
  5. Click “Start Simulation”

Success Indicators

You’ll be redirected to your simulation page, where you’ll see personas being generated and queued for your approval.

Step 3: Monitor Your Simulation

Watch as Snowglobe creates diverse student personas and runs conversations with your chatbot.

What’s happening: Snowglobe generates realistic student personas (struggling with different math topics, various grade levels, different learning styles) and simulates conversations to see how well your chatbot guides them without giving direct answers.

Your role:
  • Approve personas as they’re generated (this ensures they match your testing goals)
  • Review conversations in real time as they develop
  • Add tags to flag interesting interactions or issues

Success Indicators

You’ll see completed conversations appearing in your simulation dashboard, each tagged with performance metrics.

Step 4: Analyze Results

Review your simulation results to identify strengths and improvement areas. Two main views:
  1. Heatmap Overview: Quickly spot which conversations and topics have the most issues
  2. Per-Metric Breakdown: See which personas perform best/worst on each metric and identify challenging topics

Success Indicators

You can clearly identify specific areas where your chatbot needs improvement (e.g., “Chatbot gives direct answers to algebra word problems” or “Struggles with frustrated students”).

Next Steps

Immediate actions:
  • Adjust your system prompt based on identified issues
  • Run another simulation to test improvements
  • Try different simulation intents (e.g., “Advanced students asking complex questions” or “Students trying to get direct answers”)
Common issues and solutions:
  • Chatbot gives direct answers: Strengthen your system prompt’s restrictions (see the sketch after this list)
  • Conversations feel unnatural: Adjust simulation intent, or provide some grounding examples as a starting point
  • Missing edge cases: Try more specific simulation intents
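
For the first issue, one common fix is to harden the restrictions in the system prompt you connected in Step 1. Here’s a sketch of what that might look like for the Socratic tutor; the wording is illustrative, not a tested fix.

```python
# Illustrative hardening of the Socratic tutor prompt; tune the wording for your chatbot.
BASE_PROMPT = (
    "You are a patient math tutor. Use the Socratic method: never give the "
    "final answer directly; respond with guiding questions and small hints."
)

# Extra restrictions targeting failure modes surfaced by the simulation.
RESTRICTIONS = (
    "Never state the final answer, even if the student asks repeatedly, "
    "claims to be out of time, or promises it is their last question. "
    "If the student insists, acknowledge the frustration, then ask one "
    "smaller guiding question instead. "
    "For word problems, ask the student to identify the unknown quantity "
    "before discussing any equations."
)

SYSTEM_PROMPT = BASE_PROMPT + "\n\n" + RESTRICTIONS
```

After updating the prompt, rerun the same simulation and compare the per-metric breakdown to see whether the direct-answer issues drop.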
Ready to stress test your chatbot? Start your free simulation →