Now enhanced! GemmaPilot v0.1.0 brings GitHub Copilot-level features, including file analysis, command execution, and a polished chat UI, all running locally!
GemmaPilot is a powerful AI coding assistant for VS Code that provides intelligent code suggestions, explanations, and assistance using local language models via Ollama. Unlike cloud-based solutions, GemmaPilot keeps your code private and secure on your machine.
- Context-Aware Chat: Intelligent conversations with full workspace awareness
- File Analysis: Deep code analysis and detailed explanations
- Code Completion: Smart autocomplete suggestions with context
- File Attachment: Attach and analyze specific files in chat
- Workspace Integration: Access and analyze your entire project structure
- Command Execution: Run terminal commands with AI assistance (user approval required)
- Modern WebView UI: Clean, responsive chat interface with toolbar
- Markdown & Code Rendering: Properly formatted responses with syntax highlighting
- Context Controls: Toggle workspace, selection, and file context
- Professional Design: GitHub-inspired styling with dark theme support
- Local Processing: Uses your own Ollama instance; no data leaves your machine
- Command Approval: User confirmation required for all command executions
- Safe Filtering: Dangerous commands automatically blocked
- Zero Data Sharing: Everything stays on your computer
- Install Ollama: Download from ollama.ai
- Pull a model:
  ```bash
  ollama pull codellama:7b
  ```
- Install dependencies:
  ```bash
  pip install fastapi uvicorn ollama pydantic
  ```
- Start the backend:
  ```bash
  cd backend && python server.py
  ```
- Install the extension: Load `gemmapilot-0.1.0.vsix` in VS Code
- Open chat: `Ctrl+Shift+P` → "GemmaPilot: Open Chat"
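Before opening the chat panel, it can help to confirm the backend is actually listening. This is a minimal sketch, assuming the default `localhost:8000` address and the `/health` endpoint used by the `curl` example later in this README:

```python
# Quick sanity check that the GemmaPilot backend is reachable.
# Assumes the default host/port (localhost:8000) and a /health endpoint.
import urllib.request
import urllib.error

BACKEND = "http://localhost:8000"

def backend_is_up(base_url: str = BACKEND, timeout: float = 2.0) -> bool:
    """Return True if the FastAPI backend answers on /health."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("backend up:", backend_is_up())
```

If this prints `backend up: False`, start the backend (step above) before loading the extension.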
- Ask Questions: "Explain this function" or "How can I optimize this code?"
- Analyze Files: Attach files and ask "What does this code do?"
- Context Help: Select code and ask "Refactor this function"
- Get Commands: "Run the tests" or "Install dependencies" (with approval)
```
┌──────────────────┐      HTTP/REST      ┌───────────────────┐
│ VS Code          │ ◄─────────────────► │  FastAPI Backend  │
│ Extension        │                     │                   │
│                  │                     │  ┌─────────────┐  │
│ ┌─────────────┐  │                     │  │   Ollama    │  │
│ │ WebView UI  │  │                     │  │ (Local LLM) │  │
│ │   (Chat)    │  │                     │  └─────────────┘  │
│ └─────────────┘  │                     └───────────────────┘
└──────────────────┘
```
- Frontend: TypeScript VS Code extension with WebView UI
- Backend: FastAPI server with Ollama integration
- AI Model: Local Ollama instance (codellama, gemma3, etc.)
- Communication: REST API for all interactions
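The REST contract between the extension and the backend can be sketched conceptually. The field names (`prompt`, `context`) follow the `curl` example later in this README; the handler below is a stub standing in for the real `server.py` logic, with the Ollama call replaced by a placeholder function:

```python
# Conceptual sketch of the extension <-> backend /chat round trip.
# Field names ("prompt", "context") match the curl example in this README;
# the generate() stub stands in for the real Ollama call.
import json

def build_chat_request(prompt: str, context: str = "") -> str:
    """Serialize a /chat request body as the WebView side would send it."""
    return json.dumps({"prompt": prompt, "context": context})

def handle_chat(raw_body: str, generate=lambda p: f"(model reply to: {p})") -> str:
    """What the /chat handler does conceptually: parse the request, merge
    context into the prompt, ask the local model (stubbed), return JSON."""
    req = json.loads(raw_body)
    full_prompt = f"{req.get('context', '')}\n\n{req['prompt']}".strip()
    return json.dumps({"response": generate(full_prompt)})

# Round trip: what travels over the HTTP/REST arrow in the diagram above.
reply = json.loads(handle_chat(build_chat_request("Hello", context="test")))
```

The real backend adds model selection, streaming, and error handling on top of this shape.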
| Feature | Description | Status |
|---|---|---|
| Enhanced Chat | Context-aware conversations with workspace integration | ✅ Ready |
| File Attachment | Attach and analyze specific files | ✅ Ready |
| Workspace Context | Full project structure awareness | ✅ Ready |
| Selection Context | Analyze selected code snippets | ✅ Ready |
| Command Execution | Run AI-suggested terminal commands | ✅ Ready |
| File Analysis | Deep code analysis and explanations | ✅ Ready |
| Code Completion | Smart autocomplete suggestions | ✅ Ready |
| Beautiful UI | Modern WebView interface | ✅ Ready |
- Hardware: MacBook with Apple Silicon (M1/M2/M3) recommended, 16GB+ RAM
- Software:
- macOS Ventura+ or Windows 10+ or Linux
- Visual Studio Code 1.80.0+
- Python 3.8+
- Node.js 16+ (for development)
```
gemmapilot/
├── README.md                 # This file
├── USAGE_GUIDE.md            # Comprehensive usage guide
├── ENHANCEMENT_COMPLETE.md   # Enhancement summary
├── .gitignore                # Professional gitignore
├── setup.sh                  # Automated setup script
├── test_features.py          # Feature testing script
├── backend/                  # FastAPI backend
│   └── server.py             # Enhanced server with all features
└── extension/                # VS Code extension
    ├── src/                  # TypeScript source
    │   ├── extension.ts      # Main extension logic
    │   ├── types.ts          # Type definitions
    │   ├── config.ts         # Configuration
    │   └── statusBar.ts      # Status bar integration
    ├── package.json          # Extension manifest
    └── gemmapilot-0.1.0.vsix # Ready-to-install extension
```
- Primary: Python, JavaScript/TypeScript, Go, Rust
- Secondary: Java, C/C++, PHP, Ruby, Swift
- Markup: HTML, CSS, Markdown, JSON, YAML
- Databases: SQL, MongoDB queries
- DevOps: Docker, Kubernetes, Shell scripts
```bash
# Check if backend is running
curl -X GET http://localhost:8000/health

# Test chat functionality
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"prompt":"Hello", "context":"test"}'
```
- Reload VS Code: `Ctrl+Shift+P` → "Developer: Reload Window"
- Check Extension: Look for the GemmaPilot icon in the Activity Bar
- Reinstall: Uninstall and reinstall the VSIX file
- Debug: Help → Toggle Developer Tools for console logs
- Use smaller models for faster responses:
  ```bash
  ollama pull codellama:7b
  ```
- Close other memory-intensive applications
- Monitor Ollama with `ollama ps`
Run comprehensive feature tests:

```bash
python test_features.py
```

Expected output:

```
GemmaPilot Backend Feature Tests
✅ Chat endpoint working
✅ Code completion working
✅ File analysis working
✅ Workspace file listing working
✅ Command execution working
All tests passed!
```
- Local Processing: All AI inference happens on your machine
- Command Filtering: Dangerous commands (`rm -rf`, `format`, etc.) are blocked
- User Approval: All command executions require explicit user consent
- No Telemetry: No usage data is sent anywhere
- Sandboxed: Commands run in the specified workspace directory only
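The command-filtering idea can be sketched as a small deny-list check that runs before any AI-suggested command reaches the terminal. The actual backend's list and matching rules may differ; the patterns below are illustrative assumptions:

```python
# Illustrative deny-list filter for AI-suggested shell commands.
# The patterns are assumptions; the real backend's rules may differ.
import re

BLOCKED_PATTERNS = [
    r"\brm\s+-rf\b",       # recursive force delete
    r"\bmkfs(\.\w+)?\b",   # formatting a filesystem
    r"\bdd\s+if=",         # raw disk writes
    r">\s*/dev/sd",        # redirecting output onto block devices
]

def is_command_allowed(command: str) -> bool:
    """Return False if the command matches any blocked pattern."""
    return not any(re.search(p, command) for p in BLOCKED_PATTERNS)
```

Even with such a filter, every command still goes through the explicit user-approval step described above; the deny list is a backstop, not a substitute for consent.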
| Feature | GitHub Copilot | GemmaPilot | Advantage |
|---|---|---|---|
| Code Completion | ✅ | ✅ | Equal |
| Chat Interface | ✅ | ✅ | Equal |
| File Analysis | ✅ | ✅ | Equal |
| Context Awareness | ✅ | ✅ | Equal |
| Command Execution | ❌ | ✅ | GemmaPilot |
| File Attachment | ❌ | ✅ | GemmaPilot |
| Local Processing | ❌ | ✅ | GemmaPilot |
| Custom Models | ❌ | ✅ | GemmaPilot |
| Open Source | ❌ | ✅ | GemmaPilot |
| Free | ❌ | ✅ | GemmaPilot |
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes and test thoroughly
- Commit: `git commit -m 'Add amazing feature'`
- Push: `git push origin feature/amazing-feature`
- Submit a pull request
MIT License - see LICENSE file for details
- Ollama for excellent local LLM serving
- VS Code for the powerful extension API
- FastAPI for the robust backend framework
- GitHub Copilot for inspiration and reference
- Usage Guide: See `USAGE_GUIDE.md` for comprehensive documentation
- API Documentation: Visit `http://localhost:8000/docs` when the backend is running
- Issues: Report bugs and feature requests on GitHub
- Discussions: Join our community discussions
Experience the future of AI-assisted coding: locally, privately, and powerfully!

Built with ❤️ for developers who value privacy, control, and cutting-edge AI assistance.