David Nguyen's Personal AI Assistant - Lumina is a full-stack web application that lets users ask questions about David Nguyen, as well as any other topic, and receive instant, personalized responses powered by state-of-the-art AI and Retrieval-Augmented Generation (RAG). Users can log in to save their conversation history or continue as guests. The app is built with modern technologies and provides a sleek, responsive, animated user interface.
## Table of Contents

- Live App
- Features
- Architecture
- Setup & Installation
- Deployment
- Usage
- User Interface
- API Endpoints
- Project Structure
- Dockerization
- OpenAPI Specification
- Contributing
- License
## Live App

The app is deployed on Vercel at: https://lumina-david.vercel.app/. Feel free to check it out!

The backend (with Swagger docs) is also deployed on Vercel at: https://ai-assistant-chatbot-server.vercel.app/.

Alternatively, a backup deployment is available on Netlify at: https://lumina-ai-chatbot.netlify.app/.
## Features

- AI Chatbot: Ask questions about David Nguyen and general topics; receive responses from an AI.
- User Authentication: Sign up, log in, and log out using JWT authentication.
- Conversation History: Save, retrieve, rename, and search past conversations (authenticated users only).
- Updated & Vast Knowledge Base: Uses RAG (Retrieval-Augmented Generation) and LangChain to enhance AI responses.
- Dynamic Responses: AI-generated responses with `markdown` formatting for rich text.
- Interactive Chat: Real-time chat interface with smooth animations and transitions.
- Reset Password: Verify email and reset a user's password.
- Responsive UI: Built with React and Material-UI (MUI) with a fully responsive, modern, and animated interface.
- Landing Page: A dynamic landing page with animations, feature cards, and call-to-action buttons.
- Guest Mode: Users may interact with the AI assistant as a guest, though conversations will not be saved.
- Dark/Light Mode: Users can toggle between dark and light themes, with the preference stored in local storage.
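The JWT authentication mentioned above works by having the server sign a payload with a secret and verify that signature on later requests. Below is a minimal sketch of the signing/verification idea using only Node's built-in `crypto` module; the real server presumably uses a dedicated library such as `jsonwebtoken`, and the helper names here (`signToken`, `verifyToken`) are illustrative, not the app's actual API:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Simplified HS256-style token: base64url(payload) + "." + base64url(HMAC-SHA256(payload, secret)).
// Illustrative only -- use a vetted library like jsonwebtoken in production.
const b64url = (buf: Buffer): string => buf.toString("base64url");

function signToken(payload: object, secret: string): string {
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac("sha256", secret).update(body).digest());
  return `${body}.${sig}`;
}

function verifyToken(token: string, secret: string): object | null {
  const [body, sig] = token.split(".");
  if (!body || !sig) return null;
  // Recompute the signature and compare in constant time.
  const expected = b64url(createHmac("sha256", secret).update(body).digest());
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}
```

A token produced by `signToken({ userId: "123" }, secret)` round-trips through `verifyToken`, while any tampering with the payload or the wrong secret makes verification return `null`.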
## Architecture

The project is divided into two main parts:

- Backend: An Express server written in TypeScript. It provides endpoints for:
  - User authentication (signup, login).
  - Conversation management (create, load, update, and search conversations).
  - AI chat integration (calls to external generative AI APIs).
  - Additional endpoints for email verification and password reset.
  - MongoDB is used for data storage, with Mongoose for object modeling.

- Frontend: A React application built with TypeScript and Material-UI (MUI). It includes:
  - A modern, animated user interface for chatting with the AI.
  - A landing page showcasing the app's features.
  - Pages for login, signup, and password reset.
  - A collapsible sidebar for conversation history.
  - Theme toggling (dark/light mode) and responsive design.

- AI/ML: Uses RAG (Retrieval-Augmented Generation) and LangChain to enhance the AI's responses by retrieving relevant information from a knowledge base or external sources. This involves:
  - Retrieval: Fetch relevant documents or data from a knowledge base or external sources.
  - Augmentation: Combine the retrieved information with the user's query to produce a more informed prompt.
  - Generation: Use a generative model to create a response based on the augmented input.
  - Feedback Loop: Continuously improve the system based on user interactions and feedback.
  - LangChain: Manages the entire pipeline, from retrieval to generation, ensuring seamless integration of RAG into the chatbot's workflow.
  - Pinecone: Provides vector similarity search to efficiently retrieve relevant documents for the RAG model.
```
┌─────────────────────────────┐
│       User Interaction      │
│ (Chat, Signup, Login, etc.) │
└──────────────┬──────────────┘
               │
               ▼
┌─────────────────────────────┐
│   Frontend (React + MUI)    │
│ - Responsive UI, Animations │
│ - Theme toggling, Sidebar   │
│ - API calls to backend      │
└──────────────┬──────────────┘
               │ (REST API Calls)
               ▼
┌─────────────────────────────┐
│   Backend (Express + TS)    │
│ - Auth (JWT, Signup/Login)  │
│ - Chat & Convo Endpoints    │
│ - API orchestration         │
└──────────────┬──────────────┘
               │
         ┌─────┴───────────────┬─────────────────────────┐
         ▼                     ▼                         ▼
┌─────────────────┐  ┌───────────────────┐  ┌─────────────────────────┐
│     MongoDB     │  │  Pinecone Vector  │  │    Additional Data:     │
│ - User Data     │  │     Database      │  │  Analytics, Logs, etc.  │
│ - Convo History │  │ - Upserted Docs / │  └────────────┬────────────┘
└────────┬────────┘  │   Knowledge Base  │               ▼
         │           └─────────┬─────────┘    ┌─────────────────────┐
         │                     │              │     Analytics &     │
         │                     │              │ Monitoring Services │
         └──────────┬──────────┘              └─────────────────────┘
                    │ (uses stored convo & docs)
                    ▼
     ┌─────────────────────────────┐
     │    AI/ML Component (RAG)    │
     │ - Retrieval (Pinecone &     │
     │   MongoDB conversation data)│
     │ - Augmentation (LangChain)  │
     │ - Generation (OpenAI API)   │
     │ - Feedback loop             │
     └──────────────┬──────────────┘
                    │
                    ▼
     ┌─────────────────────────────┐
     │     Response Processing     │
     │ - Compile AI answer         │
     │ - Uses NLP & ML models      │
     │ - Generate response with    │
     │   LLM & Gemini AI           │
     │ - Update conversation data  │
     │   (MongoDB via Backend)     │
     └──────────────┬──────────────┘
                    │ (returns API response;
                    │  AI/ML skipped for login/signup)
                    ▼
     ┌─────────────────────────────┐
     │      Frontend Display       │
     │ - Show chat response        │
     │ - Update UI (convo history) │
     │ - Sign user in/out, etc.    │
     └─────────────────────────────┘
```
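The RAG flow in the AI/ML component above (retrieve, augment, generate) can be sketched end-to-end in a few lines. This is an illustrative, self-contained sketch: the real app presumably uses Pinecone for retrieval and a Gemini model for generation, so the toy character-frequency "embedding", the in-memory store, and the stubbed `generate` below are stand-ins, not the production code:

```typescript
type Doc = { id: string; text: string; vector: number[] };

// Toy embedding: letter-frequency vector. A real system would call an embedding model.
function embed(text: string): number[] {
  const v: number[] = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i]++;
  }
  return v;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// 1. Retrieval: rank stored documents by similarity to the query (Pinecone's role in the app).
function retrieve(docs: Doc[], query: string, k = 2): Doc[] {
  const q = embed(query);
  return [...docs].sort((x, y) => cosine(q, y.vector) - cosine(q, x.vector)).slice(0, k);
}

// 2. Augmentation: fold the retrieved context into the prompt (LangChain's role in the app).
function augment(query: string, docs: Doc[]): string {
  return `Context:\n${docs.map((d) => `- ${d.text}`).join("\n")}\n\nQuestion: ${query}`;
}

// 3. Generation: stub for the LLM call (Gemini/OpenAI in the app).
function generate(prompt: string): string {
  return `[AI answer based on ${prompt.length} chars of context + question]`;
}

const store: Doc[] = [
  "David Nguyen is a software engineer.",
  "Lumina is David's personal AI assistant.",
].map((text, i) => ({ id: String(i), text, vector: embed(text) }));

const answer = generate(augment("Who is David Nguyen?", retrieve(store, "Who is David Nguyen?")));
```

The feedback-loop step would sit after generation, feeding user ratings back into which documents get upserted or re-ranked.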
## Setup & Installation

### Backend Setup

1. Clone the repository:

   ```bash
   git clone https://github.com/hoangsonww/AI-Assistant-Chatbot.git
   cd AI-Assistant-Chatbot/server
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Environment Variables:

   Create a `.env` file in the `server` folder with the following (adjust values as needed):

   ```env
   PORT=5000
   MONGODB_URI=mongodb://localhost:27017/ai-assistant
   JWT_SECRET=your_jwt_secret_here
   GOOGLE_AI_API_KEY=your_google_ai_api_key_here
   AI_INSTRUCTIONS=Your system instructions for the AI assistant
   PINECONE_API_KEY=your_pinecone_api_key_here
   PINECONE_INDEX_NAME=your_pinecone_index_name_here
   ```

4. Run the server in development mode:

   ```bash
   npm run dev
   ```

   This uses nodemon with `ts-node` to watch for file changes.
### Frontend Setup

1. Navigate to the client folder:

   ```bash
   cd ../client
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Run the frontend development server:

   ```bash
   npm start
   ```

   The app will run on http://localhost:3000 (or any other port you've specified in the `.env` file's `PORT` key).
### Knowledge Base Setup

1. Install the necessary Node.js packages:

   ```bash
   npm install
   ```

2. Store the knowledge data in the Pinecone vector database:

   ```bash
   npm run store
   ```

   or

   ```bash
   ts-node server/src/scripts/storeKnowledge.ts
   ```

3. Be sure to run this command before starting the backend server so that the knowledge data is available in the Pinecone vector database.
## Deployment

- Backend: Deploy the backend to your preferred Node.js hosting service (Heroku, AWS, etc.). Make sure to set your environment variables on the hosting platform.
- Frontend: Deploy the React frontend using services like Vercel, Netlify, or GitHub Pages. Update the API endpoint URLs in the frontend accordingly.
## Usage

- Landing Page: The landing page provides an overview of the app's features and two main actions: Create Account (for new users) and Continue as Guest.
- Authentication: Users can sign up, log in, and reset their password. Authenticated users can save and manage their conversation history.
- Chatting: The main chat area allows users to interact with the AI assistant. The sidebar displays saved conversations (for logged-in users) and allows renaming and searching.
- Theme: Toggle between dark and light mode via the navbar. The chosen theme is saved in local storage and persists across sessions.
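The theme persistence described above boils down to a two-function pattern: read the saved preference on load, and write it back on every toggle. A minimal sketch (illustrative only; the real component code lives under `client/src`, and in the browser the store would be `window.localStorage` rather than the in-memory `Map` used here so the sketch runs anywhere):

```typescript
type Theme = "light" | "dark";

// Stand-in for window.localStorage so this sketch runs outside a browser.
const store = new Map<string, string>();

// Read the saved theme, defaulting to light when nothing is stored.
function loadTheme(): Theme {
  return store.get("theme") === "dark" ? "dark" : "light";
}

// Flip the theme and persist the choice so it survives page reloads.
function toggleTheme(current: Theme): Theme {
  const next: Theme = current === "dark" ? "light" : "dark";
  store.set("theme", next);
  return next;
}
```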
## API Endpoints

- POST `/api/auth/signup`: Create a new user.
- POST `/api/auth/login`: Authenticate a user and return a JWT.
- GET `/api/auth/verify-email?email=[email protected]`: Check if an email exists.
- POST `/api/auth/reset-password`: Reset a user's password.
- POST `/api/conversations`: Create a new conversation.
- GET `/api/conversations`: Get all conversations for a user.
- GET `/api/conversations/:id`: Retrieve a conversation by ID.
- PUT `/api/conversations/:id`: Rename a conversation.
- GET `/api/conversations/search/:query`: Search for conversations by title or message content.
- DELETE `/api/conversations/:id`: Delete a conversation.
- POST `/api/chat`: Process a chat query and return an AI-generated response.
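As a quick illustration, the auth and chat endpoints can be exercised with `curl`. The JSON field names below are assumptions inferred from the endpoint names; the authoritative request/response schemas are in `openapi.yaml`:

```shell
# Base URL of the backend (the local dev port from the .env example; swap in the deployed URL as needed).
BASE_URL="${BASE_URL:-http://localhost:5000}"

# Sign up a new user.
signup() {
  curl -s -X POST "$BASE_URL/api/auth/signup" \
    -H "Content-Type: application/json" \
    -d '{"email":"[email protected]","password":"changeme"}'
}

# Log in; the response should contain a JWT for authenticated requests.
login() {
  curl -s -X POST "$BASE_URL/api/auth/login" \
    -H "Content-Type: application/json" \
    -d '{"email":"[email protected]","password":"changeme"}'
}

# Send a chat query, passing the JWT (first argument) so the conversation is saved.
chat() {
  curl -s -X POST "$BASE_URL/api/chat" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $1" \
    -d '{"message":"Who is David Nguyen?"}'
}
```

Call `signup`, then `login`, extract the token from the login response, and pass it to `chat "<token>"`.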
## Project Structure

```
AI-Assistant-Chatbot/
├── docker-compose.yml
├── openapi.yaml
├── README.md
├── LICENSE
├── Jenkinsfile
├── package.json
├── tsconfig.json
├── .env
├── shell/                     # Shell scripts for app setups
├── client/                    # Frontend React application
│   ├── package.json
│   ├── tsconfig.json
│   ├── docker-compose.yml
│   ├── Dockerfile
│   └── src/
│       ├── App.tsx
│       ├── index.tsx
│       ├── theme.ts
│       ├── dev/
│       │   ├── palette.tsx
│       │   ├── previews.tsx
│       │   ├── index.ts
│       │   └── useInitial.ts
│       ├── services/
│       │   └── api.ts
│       ├── types/
│       │   ├── conversation.d.ts
│       │   └── user.d.ts
│       ├── components/
│       │   ├── Navbar.tsx
│       │   ├── Sidebar.tsx
│       │   └── ChatArea.tsx
│       └── pages/
│           ├── LandingPage.tsx
│           ├── Home.tsx
│           ├── Login.tsx
│           ├── Signup.tsx
│           ├── NotFoundPage.tsx
│           └── ForgotPassword.tsx
└── server/                    # Backend Express application
    ├── package.json
    ├── tsconfig.json
    ├── Dockerfile
    ├── docker-compose.yml
    └── src/
        ├── server.ts
        ├── models/
        │   ├── Conversation.ts
        │   └── User.ts
        ├── routes/
        │   ├── auth.ts
        │   ├── conversations.ts
        │   └── chat.ts
        ├── services/
        │   └── authService.ts
        ├── utils/
        │   └── ephemeralConversations.ts
        └── middleware/
            └── auth.ts
```
## Dockerization

To run the application using Docker, simply run `docker-compose up` in the root directory of the project. This will start both the backend and frontend services as defined in the `docker-compose.yml` file.

Why Dockerize?
- Consistency: Ensures the application runs the same way in different environments.
- Isolation: Keeps dependencies and configurations contained.
- Scalability: Makes it easier to scale services independently.
- Simplified Deployment: Streamlines the deployment process.
- Easier Collaboration: Provides a consistent environment for all developers.
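For orientation, a minimal two-service compose file for this kind of setup might look like the sketch below. This is illustrative only: the actual configuration lives in the repo's `docker-compose.yml`, and the ports and build paths here are assumptions based on the project structure:

```yaml
version: "3.8"
services:
  server:
    build: ./server        # assumes server/Dockerfile
    ports:
      - "5000:5000"        # PORT from the .env example
    env_file:
      - .env               # MONGODB_URI, JWT_SECRET, PINECONE_API_KEY, etc.
  client:
    build: ./client        # assumes client/Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - server
```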
## OpenAPI Specification

There is an OpenAPI specification file (`openapi.yaml`) in the root directory that describes the API endpoints, request/response formats, and authentication methods. It can be used to generate client SDKs or documentation.

To view the API documentation, import the `openapi.yaml` file into a tool like Swagger UI or Postman, or simply visit the `/docs` endpoint of the deployed backend.
## Contributing

- Fork the repository.
- Create your feature branch: `git checkout -b feature/your-feature-name`
- Commit your changes: `git commit -m 'Add some feature'`
- Push to the branch: `git push origin feature/your-feature-name`
- Open a Pull Request.
## License

This project is licensed under the MIT License.

Thank you for checking out the AI Assistant Project! If you have any questions or feedback, feel free to reach out. Happy coding!