
A sophisticated 3D drone simulation built with Next.js, Three.js, and React Three Fiber, featuring AI-powered autonomous flight capabilities using deep reinforcement learning.
- Realistic 3D Drone Physics: Full 6-DOF movement with realistic tilt, thrust, and inertia
- Advanced Flight Controls: Manual control with keyboard inputs for takeoff, landing, movement, and camera control
- Dynamic Camera System: Movable gimbal camera with tilt and rotation controls
- Comprehensive Environment: Buildings, skyscrapers, dense forests, and training obstacles
- Deep Reinforcement Learning: Neural network-based autonomous flight using Q-learning
- Imitation Learning: Record and learn from human demonstrations
- Advanced Reward System: Shaped rewards for collision avoidance, mission completion, and flight efficiency
- Real-time Training: Live training with exploration/exploitation balance
- Optimized LiDAR System: 16 spherical rays providing 360° 3D obstacle detection
- Real-time Visualization: Visible ray casting with intersection markers
- Performance Optimized: Reduced from 52 to 16 rays for better simulation performance
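One common way to spread a fixed number of rays near-uniformly over a sphere is a Fibonacci lattice; the helper below is an illustrative sketch of how the 16 LiDAR directions could be generated (the project may distribute its rays differently).

```typescript
// Sketch: near-uniform spherical distribution of LiDAR ray directions
// using a Fibonacci lattice (hypothetical helper, not the project's code).
function lidarDirections(count = 16): [number, number, number][] {
  const dirs: [number, number, number][] = [];
  const golden = Math.PI * (3 - Math.sqrt(5)); // golden angle in radians
  for (let i = 0; i < count; i++) {
    const y = 1 - (2 * i + 1) / count; // evenly spaced heights in (-1, 1)
    const r = Math.sqrt(1 - y * y);    // circle radius at that height
    const theta = golden * i;          // rotate each ring by the golden angle
    dirs.push([r * Math.cos(theta), y, r * Math.sin(theta)]);
  }
  return dirs; // unit vectors, ready to feed a raycaster
}
```

Each returned vector is unit length, so it can be used directly as a ray direction from the drone's position.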
- Dynamic Mission Generation: Randomized start and target positions
- Landing Challenges: Precision landing requirements within target zones
- Progress Tracking: Real-time distance and completion monitoring
- Configurable Difficulty: Beginner, intermediate, and advanced training modes
- Dynamic Obstacles: Moving platforms, pendulums, and wind zones
- Comprehensive Scenarios: Gates, tunnels, narrow passages, and maze walls
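A randomized mission boils down to picking a start point inside the world and a target 15-60m away. The sketch below assumes the figures given later in this README (a 200m × 200m world centred at the origin); the interface and helper names are illustrative, not the project's API.

```typescript
// Hypothetical sketch of randomized mission generation, using the
// README's stated bounds: 200m × 200m world, 15-60m mission distance.
interface Mission {
  start: [number, number, number];
  target: [number, number, number];
}

function generateMission(): Mission {
  const half = 100; // world half-extent (assumption: centred at origin)
  const start: [number, number, number] = [
    (Math.random() * 2 - 1) * half, 0, (Math.random() * 2 - 1) * half,
  ];
  // Pick a target 15-60m away at a random bearing, clamped to the world.
  const dist = 15 + Math.random() * 45;
  const angle = Math.random() * 2 * Math.PI;
  const clamp = (v: number) => Math.max(-half, Math.min(half, v));
  const target: [number, number, number] = [
    clamp(start[0] + dist * Math.cos(angle)), 0,
    clamp(start[2] + dist * Math.sin(angle)),
  ];
  return { start, target };
}
```

Clamping keeps the target inside the world, which can shorten a mission below 15m near the edges; a real implementation might instead resample.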
- Frontend: Next.js 15, React 19, TypeScript
- 3D Graphics: Three.js, React Three Fiber
- Styling: Tailwind CSS 4
- Icons: Lucide React
- AI/ML: Custom neural network implementation with reinforcement learning
- Package Manager: pnpm

- Node.js 18+
- pnpm (recommended) or npm/yarn
- Clone the repository: `git clone <repository-url>`, then `cd autonomous-drone`
- Install dependencies: `pnpm install`
- Start the development server: `pnpm dev`
- Open http://localhost:3000 in your browser
- Movement: Arrow keys (↑↓←→)
- Altitude: Shift + ↑↓
- Rotation: Shift + ←→
- Takeoff: T
- Landing: L
- Hover: H
- Camera: I/K (tilt), J/O (rotate)
- Toggle AI Mode: Switch between manual and autonomous control
- Training: Enable/disable real-time learning
- Recording: Record demonstrations for imitation learning
- Save/Load: Export and import trained models
- Input Size: 40 features
  - Position, velocity, rotation (9)
  - Drone status (4)
  - LiDAR readings (16)
  - LiDAR indicators (3)
  - Flight status (2)
  - Mission info (6)
- Architecture: 256→128→64 hidden layers
- Output: 9 possible actions (movement + hover)
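The architecture above describes a 40→256→128→64→9 multilayer perceptron mapping the state vector to one Q-value per action. The forward pass can be sketched as follows (ReLU hidden activations are an assumption; the project may use a different nonlinearity):

```typescript
// Minimal sketch of the 40→256→128→64→9 Q-network forward pass.
type Layer = { w: number[][]; b: number[] }; // w[out][in], b[out]

function forward(layers: Layer[], input: number[]): number[] {
  let x = input;
  layers.forEach((layer, idx) => {
    // Affine transform: out[o] = b[o] + sum_i w[o][i] * x[i]
    const out = layer.b.map((bias, o) =>
      bias + layer.w[o].reduce((s, wij, i) => s + wij * x[i], 0),
    );
    // ReLU on hidden layers; the final layer emits raw Q-values.
    x = idx < layers.length - 1 ? out.map((v) => Math.max(0, v)) : out;
  });
  return x; // 9 Q-values, one per discrete action
}
```

Action selection then reduces to taking the argmax over the 9 outputs (or a random action while exploring).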
- Reinforcement Learning: Q-learning with experience replay
- Imitation Learning: Learn from human demonstrations
- Reward Shaping: Complex reward system for optimal behavior
- Auto-respawn: Continuous training with automatic episode management
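A shaped reward of the kind described above typically combines progress toward the target, a small per-step cost, a collision penalty, and a completion bonus. The weights and terms below are illustrative assumptions, not the project's actual values:

```typescript
// Illustrative sketch of a shaped reward function (weights are assumptions).
interface StepInfo {
  distanceToTarget: number; // metres, after this step
  prevDistance: number;     // metres, before this step
  collided: boolean;
  landedOnTarget: boolean;
}

function shapedReward(info: StepInfo): number {
  let r = 0;
  r += (info.prevDistance - info.distanceToTarget) * 0.5; // reward progress
  r -= 0.01;                                              // per-step cost (flight efficiency)
  if (info.collided) r -= 10;                             // collision penalty
  if (info.landedOnTarget) r += 100;                      // mission completion bonus
  return r;
}
```

Dense progress terms like this keep the gradient of learning signal alive between the sparse collision and landing events.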
- Reduced Ray Count: Optimized from 52 to 16 spherical rays
- 68% Performance Improvement: Significant reduction in computational overhead
- Maintained Coverage: Full 3D spatial awareness with spherical distribution
- Compact Architecture: Reduced input size from 77 to 40 features
- Efficient Training: Smaller network for faster convergence
- Real-time Inference: Optimized for live decision making
- Basic Navigation: Fly from start to target position
- Precision Landing: Land within 3-meter target zones
- Obstacle Avoidance: Navigate through complex environments
- Altitude Challenges: Maintain optimal flight altitudes
- Speed Optimization: Complete missions efficiently
- Episode Length: 3000 steps maximum
- Learning Rate: 0.0005
- Exploration: Epsilon-greedy with decay
- Replay Buffer: 100,000 experiences
- Batch Size: 32 samples
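The parameters above imply an epsilon-greedy agent sampling minibatches from a bounded replay buffer. The sketch below uses the README's capacity (100,000) and batch size (32); the class and sampling-with-replacement strategy are simplifying assumptions for illustration:

```typescript
// Sketch of epsilon-greedy selection and a bounded replay buffer,
// using the configuration listed above (details are assumptions).
interface Experience {
  state: number[];
  action: number;
  reward: number;
  next: number[];
  done: boolean;
}

class ReplayBuffer {
  private buf: Experience[] = [];
  constructor(private capacity = 100_000) {}

  push(e: Experience): void {
    if (this.buf.length >= this.capacity) this.buf.shift(); // evict oldest
    this.buf.push(e);
  }

  // Uniform sampling with replacement, for simplicity.
  sample(batch = 32): Experience[] {
    return Array.from({ length: Math.min(batch, this.buf.length) },
      () => this.buf[Math.floor(Math.random() * this.buf.length)]);
  }

  get size(): number { return this.buf.length; }
}

function selectAction(qValues: number[], epsilon: number): number {
  if (Math.random() < epsilon) {
    return Math.floor(Math.random() * qValues.length); // explore
  }
  return qValues.indexOf(Math.max(...qValues)); // exploit: greedy action
}
```

With epsilon decaying over episodes, the agent shifts from exploration toward exploiting its learned Q-values.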
- World Size: 200m × 200m
- LiDAR Range: 25m maximum
- Altitude Limits: 0-100m
- Mission Distance: 15-60m
- Real-time Metrics: Episode rewards, collision counts, success rates
- Training Progress: Loss curves, exploration rates, performance trends
- Flight Data: Position tracking, LiDAR readings, action history
- Model Management: Save/load trained networks and demonstrations
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Amanuel Garomsa
- Computer Science Graduate
- Currently working at Icoglabs, SingularityNet
- Email: [email protected]
- Personal Research Project
MIT License © 2025 Amanuel Garomsa
This project is licensed under the MIT License - see the LICENSE file for details.
This is a personal research project exploring autonomous drone flight using deep reinforcement learning and imitation learning techniques.