A local-first AI coding assistant for VS Code, designed for privacy and offline functionality. Powered by Ollama and FastAPI, GemmaPilot offers context-aware chat, code completion, file analysis, and command execution, all while keeping your code on your machine.

🚀 GemmaPilot - Advanced AI Coding Assistant

Now Enhanced! GemmaPilot v0.1.0 brings GitHub Copilot-level features with advanced capabilities like file analysis, command execution, and beautiful chat UI - all running locally!

GemmaPilot is a powerful AI coding assistant for VS Code that provides intelligent code suggestions, explanations, and assistance using local language models via Ollama. Unlike cloud-based solutions, GemmaPilot keeps your code private and secure on your machine.

✨ New Features in v0.1.0

🎯 Core AI Capabilities

  • 💬 Context-Aware Chat: Intelligent conversations with full workspace awareness
  • 📄 File Analysis: Deep code analysis and detailed explanations
  • ⚡ Code Completion: Smart autocomplete suggestions with context
  • 📎 File Attachment: Attach and analyze specific files in chat
  • 🌐 Workspace Integration: Access and analyze your entire project structure
  • ⚙️ Command Execution: Run terminal commands with AI assistance (user approval required)

🎨 Beautiful Interface

  • Modern WebView UI: Clean, responsive chat interface with toolbar
  • πŸ“ Markdown & Code Rendering: Properly formatted responses with syntax highlighting
  • πŸŽ›οΈ Context Controls: Toggle workspace, selection, and file context
  • πŸ”§ Professional Design: GitHub-inspired styling with dark theme support

🔒 Privacy & Security

  • 🏠 Local Processing: Uses your own Ollama instance - no data leaves your machine
  • ✅ Command Approval: User confirmation required for all command executions
  • 🛡️ Safe Filtering: Dangerous commands automatically blocked
  • 🔐 Zero Data Sharing: Everything stays on your computer

🚀 Quick Start

Installation

  1. Install Ollama: Download from ollama.ai
  2. Pull a Model: ollama pull codellama:7b
  3. Install Dependencies: pip install fastapi uvicorn ollama pydantic
  4. Start Backend: cd backend && python server.py
  5. Install Extension: Load gemmapilot-0.1.0.vsix in VS Code
  6. Open Chat: Ctrl+Shift+P → "GemmaPilot: Open Chat"
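After step 4 you can confirm the backend is reachable before loading the extension. A minimal stdlib-only sketch, assuming the default http://localhost:8000 address and the /health endpoint shown in Troubleshooting below:

```python
# Quick post-install sanity check: confirm the FastAPI backend answers
# on the default address before loading the VS Code extension.
import urllib.request

BACKEND = "http://localhost:8000"  # default from the Quick Start; adjust if changed


def backend_url(path: str) -> str:
    """Join the backend base URL with an endpoint path."""
    return BACKEND.rstrip("/") + "/" + path.lstrip("/")


def check_health() -> bool:
    """Return True if GET /health responds with HTTP 200."""
    try:
        with urllib.request.urlopen(backend_url("/health"), timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused / timeout: backend is not up yet.
        return False


if __name__ == "__main__":
    print("backend up:", check_health())
```

If this prints `backend up: False`, re-run step 4 (`cd backend && python server.py`) and check that port 8000 is free.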

Usage Examples

  • 💡 Ask Questions: "Explain this function" or "How can I optimize this code?"
  • 📎 Analyze Files: Attach files and ask "What does this code do?"
  • 🔍 Context Help: Select code and ask "Refactor this function"
  • ⚡ Get Commands: "Run the tests" or "Install dependencies" (with approval)

📊 Architecture

┌─────────────────┐    HTTP/REST     ┌──────────────────┐
│   VS Code       │ ◄──────────────► │ FastAPI Backend  │
│   Extension     │                  │                  │
│                 │                  │ ┌──────────────┐ │
│ ┌─────────────┐ │                  │ │   Ollama     │ │
│ │ WebView UI  │ │                  │ │  (Local LLM) │ │
│ │ (Chat)      │ │                  │ └──────────────┘ │
│ └─────────────┘ │                  └──────────────────┘
└─────────────────┘
  • Frontend: TypeScript VS Code extension with WebView UI
  • Backend: FastAPI server with Ollama integration
  • AI Model: Local Ollama instance (codellama, gemma3, etc.)
  • Communication: REST API for all interactions
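The REST round trip can be sketched in a few lines of Python. This is an illustrative client, not the extension's actual code: the `{"prompt", "context"}` payload shape comes from the curl example in Troubleshooting below, while the timeout and response handling are assumptions.

```python
# Illustrative sketch of the extension-to-backend round trip: the WebView UI
# sends a JSON payload to the FastAPI server, which forwards it to Ollama.
import json
import urllib.request


def build_chat_request(prompt: str, context: str = "") -> bytes:
    """Serialize a chat payload shaped like the curl example in Troubleshooting."""
    return json.dumps({"prompt": prompt, "context": context}).encode("utf-8")


def send_chat(prompt: str, context: str = "") -> str:
    """POST the payload to the /chat endpoint and return the raw response body."""
    req = urllib.request.Request(
        "http://localhost:8000/chat",
        data=build_chat_request(prompt, context),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8")


if __name__ == "__main__":
    # Print the payload without hitting the network (backend may not be running).
    print(build_chat_request("Hello", context="test").decode("utf-8"))
```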

πŸ› οΈ Available Features

| Feature | Description | Status |
|---------|-------------|--------|
| 💬 Enhanced Chat | Context-aware conversations with workspace integration | ✅ Ready |
| 📎 File Attachment | Attach and analyze specific files | ✅ Ready |
| 🌐 Workspace Context | Full project structure awareness | ✅ Ready |
| 🎯 Selection Context | Analyze selected code snippets | ✅ Ready |
| ⚙️ Command Execution | Run AI-suggested terminal commands | ✅ Ready |
| 📄 File Analysis | Deep code analysis and explanations | ✅ Ready |
| ⚡ Code Completion | Smart autocomplete suggestions | ✅ Ready |
| 🎨 Beautiful UI | Modern WebView interface | ✅ Ready |

🎯 Prerequisites

  • Hardware: MacBook with Apple Silicon (M1/M2/M3) recommended, 16GB+ RAM
  • Software:
    • macOS Ventura+ or Windows 10+ or Linux
    • Visual Studio Code 1.80.0+
    • Python 3.8+
    • Node.js 16+ (for development)

πŸ“ Project Structure

gemmapilot/
├── README.md                   # This file
├── USAGE_GUIDE.md              # Comprehensive usage guide
├── ENHANCEMENT_COMPLETE.md     # Enhancement summary
├── .gitignore                  # Professional gitignore
├── setup.sh                    # Automated setup script
├── test_features.py            # Feature testing script
├── backend/                    # FastAPI backend
│   └── server.py               # Enhanced server with all features
└── extension/                  # VS Code extension
    ├── src/                    # TypeScript source
    │   ├── extension.ts        # Main extension logic
    │   ├── types.ts            # Type definitions
    │   ├── config.ts           # Configuration
    │   └── statusBar.ts        # Status bar integration
    ├── package.json            # Extension manifest
    └── gemmapilot-0.1.0.vsix   # Ready-to-install extension

🔧 Supported Languages

  • Primary: Python, JavaScript/TypeScript, Go, Rust
  • Secondary: Java, C/C++, PHP, Ruby, Swift
  • Markup: HTML, CSS, Markdown, JSON, YAML
  • Databases: SQL, MongoDB queries
  • DevOps: Docker, Kubernetes, Shell scripts

πŸ› οΈ Troubleshooting

Backend Issues

# Check if backend is running
curl -X GET http://localhost:8000/health

# Test chat functionality  
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"prompt":"Hello", "context":"test"}'

Extension Issues

  1. Reload VS Code: Ctrl+Shift+P → "Developer: Reload Window"
  2. Check Extension: Look for GemmaPilot icon in Activity Bar
  3. Reinstall: Uninstall and reinstall the VSIX file
  4. Debug: Help β†’ Toggle Developer Tools for console logs

Performance Tips

  • Use smaller models for faster responses: ollama pull codellama:7b
  • Close other memory-intensive applications
  • Monitor Ollama with ollama ps

🧪 Testing

Run comprehensive feature tests:

# Test all backend features
python test_features.py

# Expected output:
# 🚀 GemmaPilot Backend Feature Tests
# ✓ Chat endpoint working
# ✓ Code completion working
# ✓ File analysis working
# ✓ Workspace file listing working
# ✓ Command execution working
# 🎉 All tests passed!

🔒 Security Features

  • Local Processing: All AI inference happens on your machine
  • Command Filtering: Dangerous commands (rm -rf, format, etc.) blocked
  • User Approval: All command executions require explicit user consent
  • No Telemetry: No usage data sent anywhere
  • Sandboxed: Commands run in specified workspace directory only
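The command-filtering idea can be illustrated with a simple blocklist check. This is a hedged sketch, not the actual server implementation: the pattern list and the `is_command_allowed` name are assumptions, chosen to match the examples named above (`rm -rf`, format).

```python
# Illustrative command filter (assumption, not the real server.py code):
# reject commands matching destructive patterns before anything reaches a shell.
import re

BLOCKED_PATTERNS = [
    r"\brm\s+-rf\b",   # recursive force delete
    r"\bmkfs\b",       # formatting a filesystem
    r"\bdd\s+if=",     # raw disk writes
    r">\s*/dev/sd",    # overwriting block devices
]


def is_command_allowed(command: str) -> bool:
    """Return False if the command matches any blocked pattern."""
    return not any(re.search(pattern, command) for pattern in BLOCKED_PATTERNS)


# Ordinary commands pass; destructive ones are rejected.
assert is_command_allowed("pytest -q")
assert not is_command_allowed("rm -rf /")
```

A real filter would also need to handle obfuscation (quoting, aliases, `sh -c`), which is why user approval remains the primary safeguard.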

🆚 GemmaPilot vs GitHub Copilot

| Feature | GitHub Copilot | GemmaPilot | Advantage |
|---------|----------------|------------|-----------|
| Code Completion | ✅ | ✅ | Equal |
| Chat Interface | ✅ | ✅ | Equal |
| File Analysis | ✅ | ✅ | Equal |
| Context Awareness | ✅ | ✅ | Equal |
| Command Execution | ❌ | ✅ | 🏆 GemmaPilot |
| File Attachment | ❌ | ✅ | 🏆 GemmaPilot |
| Local Processing | ❌ | ✅ | 🏆 GemmaPilot |
| Custom Models | ❌ | ✅ | 🏆 GemmaPilot |
| Open Source | ❌ | ✅ | 🏆 GemmaPilot |
| Free | ❌ | ✅ | 🏆 GemmaPilot |

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes and test thoroughly
  4. Commit: git commit -m 'Add amazing feature'
  5. Push: git push origin feature/amazing-feature
  6. Submit a pull request

📄 License

MIT License - see LICENSE file for details

πŸ™ Acknowledgments

  • Ollama for excellent local LLM serving
  • VS Code for the powerful extension API
  • FastAPI for the robust backend framework
  • GitHub Copilot for inspiration and reference

🆘 Support & Documentation

  • Usage Guide: See USAGE_GUIDE.md for comprehensive documentation
  • API Documentation: Visit http://localhost:8000/docs when backend is running
  • Issues: Report bugs and feature requests on GitHub
  • Discussions: Join our community discussions

Experience the future of AI-assisted coding - locally, privately, and powerfully! 🚀

Built with ❀️ for developers who value privacy, control, and cutting-edge AI assistance.
