Real-time LLM Observability Dashboard
Monitor, analyze, and optimize your LLM usage across multiple providers in real-time
Created and Developed by Prajeesh Chavan
Full-Stack Developer & AI Enthusiast
📖 View Full Credits & Project Journey
🚀 Quick Start • ✨ Features • 🔧 Installation • 📚 Documentation • 🤝 Contributing
Real-time monitoring of all your LLM requests with comprehensive analytics
Detailed logging of all LLM API calls with filtering and search
Test and compare prompts across different providers and models
🎯 Zero-Code Integration | 📊 Real-Time Analytics | 🔌 Multi-Provider Support
- 🚨 Intelligent Alert System - Proactive monitoring with enhanced real-time notifications and an interactive bell icon
- 📈 Advanced Performance Alerts - Latency, error rate, and retry monitoring with detailed analysis and visual indicators
- 💰 Smart Cost Alerts - Budget threshold notifications with comprehensive spending insights and trends
- 🔍 Detailed Alert Analysis - In-depth alert analysis with actionable recommendations and impact assessment
- 📊 Enhanced Trend Monitoring - Continuous tracking of key performance metrics with advanced visualizations
- 🎛️ Notification Management - Granular notification controls with test notification capabilities
- 🔬 Redesigned Testing Interface - Completely redesigned model testing with template categories and a step-by-step wizard
- 🎯 Template Categories - Organized templates for Quick Start, Development, Creative, and Analysis use cases
- 🔄 Advanced Multi-Model Comparison - Enhanced side-by-side comparison with detailed performance metrics
- 💡 Smart Template Library - Pre-built prompts with categorization, icons, and estimated completion times
- 📊 Real-time Cost Estimation - Accurate cost preview with token counting before running tests
- 🧪 Enhanced Batch Testing - Test multiple configurations with progress tracking and results analysis
- ✨ Beautiful Loading Experience - Animated loading screen with gradient backgrounds and smooth transitions
- 🎨 Enhanced Responsive Design - Fully optimized layouts for desktop, tablet, and mobile devices
- ⌨️ Keyboard Shortcuts - Built-in shortcuts for power users (Cmd/Ctrl+K, R, Escape)
- 📡 Live Feed Mode - Real-time activity monitoring with a toggleable live feed and activity counters
- 🎭 Smooth Animations - CSS animations and transitions for an enhanced user experience
- 🖱️ Interactive Components - Enhanced tables, modals, and interactive elements with improved UX
- ⚡ Quick Examples - Pre-built prompts for different use cases
- 💰 Cost Estimation - Preview costs before running expensive tests
- 📊 Performance Benchmarking - Compare latency, quality, and costs
- 📝 Real-time Request Logging - Monitor all LLM API calls with detailed metrics
- 🔁 Prompt Replay & Comparison - Re-run prompts across different providers/models
- 💰 Cost Tracking & Analysis - Track spending across providers with detailed breakdowns (see the sketch after this list)
- ⚡ Performance Monitoring - Latency tracking, retry analysis, and error monitoring
- 🚨 Error Tracking - Comprehensive error analysis and alerting
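To make the cost features concrete, here is a rough sketch of how a per-request cost estimate can be derived from token counts. The pricing table and function below are illustrative, not OpenLLM Monitor's actual implementation:

```javascript
// Hypothetical per-request cost estimation (illustrative per-1K-token prices).
const PRICING = {
  "gpt-3.5-turbo": { prompt: 0.0005, completion: 0.0015 },
  "gpt-4": { prompt: 0.03, completion: 0.06 },
};

function estimateCost(model, promptTokens, completionTokens) {
  const price = PRICING[model];
  if (!price) return 0; // unknown or local model (e.g., Ollama) treated as free
  return (
    (promptTokens / 1000) * price.prompt +
    (completionTokens / 1000) * price.completion
  );
}

// 1,200 prompt tokens + 300 completion tokens on gpt-4
console.log(estimateCost("gpt-4", 1200, 300)); // 0.054
```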
OpenLLM Monitor has received major UI/UX improvements and feature enhancements!
🎉 What's New:
- ✨ Beautiful Loading Experience with animated backgrounds
- 🚨 Smart Alerts System with interactive bell notifications
- 📱 Enhanced Mobile Experience with responsive design
- ⌨️ Keyboard Shortcuts for power users
- 📡 Live Feed Mode with real-time activity monitoring
- 🧪 Redesigned Model Testing with template categories
- 🎯 Interactive Components with smooth animations
📖 See the Complete Enhancement Guide: Enhanced Features Documentation
Provider | Status | Models Supported |
---|---|---|
OpenAI | ✅ | GPT-3.5, GPT-4, GPT-4o, DALL-E |
Ollama | ✅ | Llama 2, Mistral, CodeLlama, custom models |
OpenRouter | ✅ | 100+ models via unified API |
Mistral AI | ✅ | Mistral 7B, Mixtral 8x7B |
Anthropic | 🚧 | Claude 3, Claude 2 |
- 🔌 Zero-code Proxy Servers - Monitor without changing your code
- 📦 SDK Wrappers - Drop-in replacements for popular libraries
- 💻 CLI Monitoring - Track command-line LLM usage
- 🔗 Custom Middleware - Integrate with your existing applications (see the sketch below)
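As a sketch of the custom-middleware option, a minimal Express middleware could time each request and ship a record to the monitor. The `POST /api/logs` endpoint and field names here are assumptions, so check the API reference before relying on them:

```javascript
// Minimal logging middleware sketch (endpoint and field names assumed).
const express = require("express");
const app = express();

app.use((req, res, next) => {
  const startedAt = Date.now();
  res.on("finish", () => {
    // Fire-and-forget so logging can never break the request path.
    // fetch is global in Node 18+, matching the stated requirement.
    fetch("http://localhost:3001/api/logs", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        path: req.path,
        status: res.statusCode,
        latencyMs: Date.now() - startedAt,
      }),
    }).catch(() => {});
  });
  next();
});
```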
- 🔴 Real-time WebSocket Updates - Live dashboard with instant updates
- 📈 Comprehensive Analytics - Usage patterns, trends, and insights
- 📤 Export Capabilities - CSV, JSON export for logs and analytics
- 🌍 Multi-Environment Support - Dev, staging, and production environments
- 🎨 Customizable Views - Personalized dashboards and filtering
Want to showcase your system immediately? Generate comprehensive seed data:
# Windows PowerShell (Recommended)
cd "scripts"
.\generate-seed-data.ps1
# Or use Node.js directly
cd scripts
npm install
node seed-data.js
✨ What you get:
- 📊 1,000+ realistic LLM requests across 30 days
- 🔌 Multi-provider coverage (OpenAI, Ollama, Mistral, OpenRouter)
- 🎪 7 diverse use cases (coding, analysis, support, creative, etc.)
- 📈 Analytics-ready data for impressive demos
- 💰 Cost tracking with real pricing models
- ⚡ Performance metrics and error patterns
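For a sense of what one generated record looks like, here is an illustrative shape; the field names are hypothetical, not the exact schema:

```javascript
// Hypothetical shape of a single seeded log entry (not the exact schema).
const exampleLog = {
  provider: "openai",
  model: "gpt-3.5-turbo",
  prompt: "Summarize this support ticket...",
  status: "success",
  latencyMs: 850,
  tokenUsage: { prompt: 412, completion: 96 },
  cost: 0.0008,
  createdAt: "2025-01-15T12:34:56Z",
};
```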
📖 Complete Seed Data Guide | ⚙️ Advanced Configuration
Get up and running in less than 2 minutes:
# Clone the repository
git clone https://github.com/prajeesh-chavan/openllm-monitor.git
cd openllm-monitor
# Start with Docker (includes everything)
docker-compose up -d
# Or use our setup script
./docker-setup.sh # Linux/Mac
.\docker-setup.ps1 # Windows PowerShell
🎉 Access your dashboard: http://localhost:3000
Click to expand manual installation steps
- Node.js 18+ and npm
- MongoDB (local or cloud)
- Git
git clone https://github.com/prajeesh-chavan/openllm-monitor.git
cd openllm-monitor
# Backend setup
cd backend
npm install
cp ../.env.example .env
# Frontend setup
cd ../frontend
npm install
Edit `backend/.env`:
MONGODB_URI=mongodb://localhost:27017/openllm-monitor
PORT=3001
OPENAI_API_KEY=your-openai-key-here
OLLAMA_BASE_URL=http://localhost:11434
# Terminal 1: MongoDB
mongod
# Terminal 2: Backend
cd backend && npm run dev
# Terminal 3: Frontend
cd frontend && npm run dev
🎉 Open: http://localhost:5173
Component | Minimum | Recommended |
---|---|---|
Node.js | 18.x | 20.x LTS |
Memory | 4GB RAM | 8GB RAM |
Storage | 10GB | 20GB SSD |
MongoDB | 4.4+ | 6.0+ |
Method | Best For | Command | Notes |
---|---|---|---|
🐳 Docker | Fastest & easiest | `docker-compose up -d` | ✅ Everything included |
💻 Manual Install | Full control | `npm install` | ✅ Customizable |
☁️ Cloud Deploy | Production scale | `docker build -t openllm-monitor .` | ✅ Scalable |
Windows:
# PowerShell (Recommended)
.\docker-setup.ps1
# Command Prompt
docker-setup.bat
Linux/macOS:
# Make executable and run
chmod +x docker-setup.sh
./docker-setup.sh
Validation:
# Check if everything is configured correctly
.\docker\docker-validate.ps1 # Windows
./docker/docker-validate.sh # Linux/Mac
graph TB
A[Client Applications] --> B[OpenLLM Monitor Proxy]
B --> C{LLM Provider}
C --> D[OpenAI]
C --> E[Ollama]
C --> F[OpenRouter]
C --> G[Mistral AI]
B --> H[Backend API]
H --> I[MongoDB]
H --> J[WebSocket Server]
J --> K[React Dashboard]
H --> L[Analytics Engine]
H --> M[Cost Calculator]
H --> N[Token Counter]
openllm-monitor/
├── 🎯 backend/              # Node.js + Express + MongoDB
│   ├── controllers/         # API request handlers
│   ├── models/              # Database schemas & models
│   ├── routes/              # API route definitions
│   ├── middlewares/         # Custom middleware (LLM logger)
│   ├── services/            # LLM provider integrations
│   ├── utils/               # Helper functions & utilities
│   └── config/              # Configuration management
│
├── 🎨 frontend/             # React + Vite + Tailwind
│   ├── src/components/      # Reusable UI components
│   ├── src/pages/           # Page-level components
│   ├── src/services/        # API communication layer
│   ├── src/hooks/           # Custom React hooks
│   ├── src/store/           # State management (Zustand)
│   └── public/              # Static assets
│
├── 🐳 docker/               # Docker configuration
├── 📚 docs/                 # Documentation & guides
├── 🧪 scripts/              # Setup & utility scripts
└── 📖 README.md             # You are here!
# Start the proxy server
npm run proxy
// Your existing code works unchanged!
// All OpenAI calls are automatically logged
const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello!" }],
});
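Under the hood, "zero-code" means your SDK talks to the proxy instead of the provider. With the official OpenAI Node SDK that would look roughly like this; the proxy URL and path are assumptions, so check your proxy configuration:

```javascript
// Route OpenAI traffic through the monitor's proxy (URL and path assumed).
const OpenAI = require("openai");

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "http://localhost:3001/v1", // proxy endpoint instead of api.openai.com
});
```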
// Add to your existing application
const { LLMLogger } = require("openllm-monitor");
const logger = new LLMLogger({
apiUrl: "http://localhost:3001",
});
// Wrap your LLM calls
const response = await logger.track(async () => {
return await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Explain quantum computing" }],
});
});
// Get comprehensive analytics
const response = await fetch("/api/analytics", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    dateRange: "last-7-days",
    providers: ["openai", "ollama"],
    groupBy: "model",
  }),
});
const analytics = await response.json();
console.log(analytics.data);
// {
// totalRequests: 1247,
// totalCost: 23.45,
// averageLatency: 850,
// topModels: [...]
// }
// Compare the same prompt across providers
const response = await fetch("/api/replay/compare", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    prompt: "Write a haiku about coding",
    configurations: [
      { provider: "openai", model: "gpt-3.5-turbo" },
      { provider: "ollama", model: "llama2:7b" },
      { provider: "openrouter", model: "anthropic/claude-2" },
    ],
  }),
});
const comparison = await response.json();
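Each configuration comes back as one entry in the payload; the result fields below are illustrative rather than the documented schema:

```javascript
// Iterate the comparison results (field names illustrative).
for (const run of comparison.results ?? []) {
  console.log(`${run.provider}/${run.model}: ${run.latencyMs} ms, $${run.cost}`);
}
```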
Create `backend/.env` from the template:
cp .env.example .env
Essential Configuration:
# ๐๏ธ Database
MONGODB_URI=mongodb://localhost:27017/openllm-monitor
# ๐ Server
PORT=3001
NODE_ENV=development
FRONTEND_URL=http://localhost:5173
# ๐ค LLM Provider API Keys
OPENAI_API_KEY=sk-your-openai-key-here
OPENROUTER_API_KEY=sk-your-openrouter-key-here
MISTRAL_API_KEY=your-mistral-key-here
# ๐ฆ Ollama (Local)
OLLAMA_BASE_URL=http://localhost:11434
# ๐ Security
JWT_SECRET=your-super-secret-jwt-key
RATE_LIMIT_MAX_REQUESTS=100
🤖 OpenAI Setup
- Visit OpenAI Platform
- Create a new API key
- Add to `.env`: `OPENAI_API_KEY=sk-...`
- Set usage limits if needed
🦙 Ollama Setup
- Install Ollama
- Start Ollama: `ollama serve`
- Pull a model: `ollama pull llama2`
- Configure in `.env`: `OLLAMA_BASE_URL=http://localhost:11434`
🌐 OpenRouter Setup
- Sign up at OpenRouter
- Get an API key from the Keys page
- Add to `.env`: `OPENROUTER_API_KEY=sk-or-...`
- Browse available models in the dashboard
🤖 Mistral AI Setup
- Create an account at Mistral Console
- Generate an API key
- Add to `.env`: `MISTRAL_API_KEY=...`
- Choose from available models
# Start MongoDB with Docker
docker-compose up -d mongodb
# Access MongoDB Admin UI
open http://localhost:8081 # admin/admin
- Create a free account at MongoDB Atlas
- Create a cluster and get the connection string
- Add to `.env`: `MONGODB_URI=mongodb+srv://...`
Windows
# Download and install MongoDB Community Server
# https://www.mongodb.com/try/download/community
# Or with Chocolatey
choco install mongodb
# Start MongoDB service
net start MongoDB
macOS
# Install with Homebrew
brew tap mongodb/brew
brew install mongodb-community
# Start MongoDB
brew services start mongodb/brew/mongodb-community
Linux (Ubuntu)
# Install MongoDB
wget -qO - https://www.mongodb.org/static/pgp/server-6.0.asc | sudo apt-key add -
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list
sudo apt-get update && sudo apt-get install -y mongodb-org
# Start MongoDB
sudo systemctl start mongod
sudo systemctl enable mongod
# Use our setup scripts
./scripts/setup-mongodb.sh # Linux/Mac
.\scripts\setup-mongodb.ps1 # Windows PowerShell
.\scripts\setup-mongodb.bat # Windows CMD
# Production build and deploy
docker-compose -f docker/docker-compose.prod.yml up -d
# With custom environment
docker-compose -f docker/docker-compose.prod.yml --env-file .env.production up -d
🚀 Deploy to AWS
# Build and push to ECR
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com
docker build -t openllm-monitor .
docker tag openllm-monitor:latest 123456789012.dkr.ecr.us-west-2.amazonaws.com/openllm-monitor:latest
docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/openllm-monitor:latest
# Deploy with ECS or EKS
🌊 Deploy to DigitalOcean
# Use DigitalOcean App Platform
doctl apps create --spec .do/app.yaml
# Or deploy to Droplet
docker-compose -f docker/docker-compose.prod.yml up -d
⚡ Deploy to Vercel/Netlify
# Frontend only (with separate backend)
cd frontend
npm run build
# Deploy frontend to Vercel
vercel --prod
# Deploy backend separately to Railway/Render
Production Environment Variables:
NODE_ENV=production
MONGODB_URI=mongodb+srv://production-cluster/openllm-monitor
JWT_SECRET=super-secure-production-secret
CORS_ORIGIN=https://your-domain.com
RATE_LIMIT_MAX_REQUESTS=1000
LOG_LEVEL=info
Security Checklist:
- Change default passwords
- Use HTTPS in production
- Set up proper CORS
- Configure rate limiting
- Enable MongoDB authentication
- Use environment-specific secrets
- Set up monitoring and alerts
Endpoint | Method | Description |
---|---|---|
`/api/health` | GET | Service health check |
`/api/info` | GET | API version & information |
`/api/status` | GET | System status & metrics |
Endpoint | Method | Description |
---|---|---|
`/api/logs` | GET | Retrieve logs with filtering |
`/api/logs/:id` | GET | Get specific log details |
`/api/logs/stats` | GET | Dashboard statistics |
`/api/logs/export` | POST | Export logs (CSV/JSON) |
`/api/analytics` | POST | Advanced analytics queries |
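For example, a filtered log query might look like this; the query parameter names and response shape are assumptions, so consult the full API reference:

```javascript
// Fetch recent OpenAI errors (query parameters and response shape assumed).
const res = await fetch(
  "http://localhost:3001/api/logs?provider=openai&status=error&limit=50"
);
const { data } = await res.json();
console.log(`Fetched ${data.length} logs`);
```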
Endpoint | Method | Description |
---|---|---|
`/api/replay` | POST | Replay a prompt |
`/api/replay/compare` | POST | Compare across providers |
`/api/replay/estimate` | POST | Get cost estimates |
`/api/replay/models` | GET | Available models list |
Endpoint | Method | Description |
---|---|---|
`/api/providers` | GET | List provider configs |
`/api/providers/:id` | PUT | Update provider settings |
`/api/providers/:id/test` | POST | Test provider connection |
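A connection test could then be triggered like so; the provider id and response shape are assumptions:

```javascript
// Test a configured provider's connectivity (id and response shape assumed).
const res = await fetch("http://localhost:3001/api/providers/ollama/test", {
  method: "POST",
});
console.log(await res.json()); // e.g., { ok: true, latencyMs: 42 }
```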
// Real-time events (sketch assumes a Socket.IO client; adjust the URL as needed)
const { io } = require("socket.io-client");
const socket = io("http://localhost:3001");

socket.on("new-log", (log) => {
  console.log("New request:", log);
});

socket.on("stats-update", (stats) => {
  console.log("Updated stats:", stats);
});

socket.on("error-alert", (error) => {
  console.log("Error detected:", error);
});
# Backend tests
cd backend
npm test # Run all tests
npm run test:watch # Watch mode
npm run test:coverage # With coverage
# Frontend tests
cd frontend
npm test # Run all tests
npm run test:ui # UI test runner
npm run test:coverage # With coverage
- Unit Tests - Individual component testing
- Integration Tests - API endpoint testing
- E2E Tests - Full user journey testing
- Performance Tests - Load and stress testing
Component | Coverage | Status |
---|---|---|
Backend API | 85% | ✅ Good |
Frontend Components | 78% | ✅ Good |
Integration Tests | 92% | ✅ Excellent |
E2E Tests | 65% | |
# 1. Fork & Clone
git clone https://github.com/prajeesh-chavan/openllm-monitor.git
cd openllm-monitor
# 2. Create Feature Branch
git checkout -b feature/amazing-feature
# 3. Start Development
npm run dev:all # Start all services
# 4. Make Changes & Test
npm test
npm run lint
# 5. Commit & Push
git commit -m "feat: add amazing feature"
git push origin feature/amazing-feature
# 6. Create Pull Request
Component | Status | Progress |
---|---|---|
✅ Backend API | Complete | 100% |
✅ Database Models | Complete | 100% |
✅ Provider Services | Complete | 95% |
✅ WebSocket Server | Complete | 100% |
✅ Frontend Dashboard | Complete | 90% |
✅ Analytics Engine | Complete | 85% |
🚧 Mobile App | In Progress | 30% |
📋 API v2 | Planned | 0% |
We welcome contributions in these areas:
- 🐛 Bug Fixes - Help us squash bugs
- ✨ New Features - Implement requested features
- 📚 Documentation - Improve guides and docs
- 🧪 Testing - Add more test coverage
- 🎨 UI/UX - Enhance user experience
- 🚀 Performance - Optimize speed and efficiency
We love contributions! Here's how you can help make OpenLLM Monitor even better:
- 🍴 Fork the repository
- 🌟 Star the project (if you like it!)
- 🔧 Create your feature branch (`git checkout -b feature/AmazingFeature`)
- 💾 Commit your changes (`git commit -m 'feat: Add some AmazingFeature'`)
- 📤 Push to the branch (`git push origin feature/AmazingFeature`)
- 🎯 Open a Pull Request
🐛 Bug Reports - Found a bug? | ✨ Feature Requests - Have an idea? | 📚 Documentation - Improve our docs | 🧪 Testing - Add test coverage
We use Conventional Commits:
feat: add new dashboard widget
fix: resolve login issue
docs: update API documentation
test: add unit tests for analytics
refactor: optimize database queries
chore: update dependencies
Thanks to all the amazing people who have contributed to this project!
📖 Documentation - Read our comprehensive guides | 💬 Discussions - Join the community | 🐛 Issues - Report bugs or request features | 📧 Email - Direct support
Common Issues:
🔴 MongoDB Connection Failed
# Check if MongoDB is running
docker ps | grep mongo
# Restart MongoDB
docker-compose restart mongodb
# Check logs
docker-compose logs mongodb
🔴 Port Already in Use
# Find what's using the port
netstat -tulpn | grep :3001
# Kill the process
kill -9 <PID>
# Or change port in .env
PORT=3002
🔴 Ollama Not Connecting
# Check Ollama status
ollama ps
# Restart Ollama
ollama serve
# Check logs
tail -f ~/.ollama/logs/server.log
- 📘 User Guide - Complete usage guide with enhanced UI features
- 🎨 Enhanced Features Guide - Latest UI/UX improvements and new features
- 🚀 Quick Start - Get running in 5 minutes
- 📋 Features Overview - Comprehensive feature documentation
- 🔧 API Docs - Full API reference
- 💻 Development Guide - Development status and technical details
- 🐳 Docker Guide - Docker setup and deployment
- 🧪 Test Models Guide - Enhanced model testing interface
- 🚨 Smart Alerts - Intelligent monitoring and notifications
- 🛠️ Troubleshooting - Common issues and solutions
- 📝 Changelog - All recent improvements and changes
This project is licensed under the MIT License - see the LICENSE file for details.
MIT License
Copyright (c) 2024-2025 Prajeesh Chavan
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
About Me: I'm a passionate full-stack developer with expertise in modern web technologies and AI/ML systems. I built OpenLLM Monitor to solve the real-world challenge of monitoring and optimizing LLM usage across different providers. This project represents my commitment to creating tools that help developers work more efficiently with AI technologies.
Skills & Technologies:
- 🚀 Full-Stack Development (React, Node.js, MongoDB)
- 🤖 AI/ML Integration & LLM Applications
- 🐳 DevOps & Cloud Deployment (Docker, AWS)
- 📊 Data Analytics & Visualization
- 🔧 System Architecture & API Design
Connect with me if you have questions about the project, want to collaborate, or discuss opportunities!
This project represents months of dedicated development and continuous improvement. Here's what makes it special:
- 🔬 Research-Driven: Extensive research into LLM monitoring needs and best practices
- 🏗️ Built from Scratch: Every component carefully designed and implemented
- 🎯 Problem-Solving: Addresses real-world challenges faced by LLM developers
- 📈 Continuous Evolution: Regular updates and feature enhancements
- 🌐 Community-First: Open source with detailed documentation and support
If you find this project valuable, please:
- ⭐ Star the repository to show your support
- 🤝 Connect with me on LinkedIn or Twitter
- 💬 Share your feedback or suggestions
- 🎯 Consider hiring me for your next project!
- ✅ v1.0 - Core monitoring and analytics
- 🚧 v1.1 - Advanced filtering and exports
- 📋 v1.2 - Mobile app and notifications
- 🔮 v2.0 - AI-powered insights and recommendations
Built with ❤️ by Prajeesh Chavan for the LLM developer community
This project is the result of extensive research, development, and testing to provide the best LLM monitoring experience. If this project helped you, please consider giving it a ⭐ star on GitHub and connecting with me!
Creator: Prajeesh Chavan • License: MIT • Year: 2024-2025