Add auto-inject functionality to pre-load MCP servers #11

Open: wants to merge 6 commits into base: main
8 changes: 8 additions & 0 deletions .changeset/README.md
@@ -0,0 +1,8 @@
# Changesets

Hello and welcome! This folder has been automatically generated by `@changesets/cli`, a build tool that works
with multi-package repos, or single-package repos to help you version and publish your code. You can
find the full documentation for it [in our repository](https://github.com/changesets/changesets)

We have a quick list of common questions to get you started engaging with this project in
[our documentation](https://github.com/changesets/changesets/blob/main/docs/common-questions.md)
11 changes: 11 additions & 0 deletions .changeset/config.json
@@ -0,0 +1,11 @@
{
"$schema": "https://unpkg.com/@changesets/[email protected]/schema.json",
"changelog": "@changesets/cli/changelog",
"commit": false,
"fixed": [],
"linked": [],
"access": "restricted",
"baseBranch": "main",
"updateInternalDependencies": "patch",
"ignore": []
}
7 changes: 7 additions & 0 deletions .changeset/thick-dodos-sneeze.md
@@ -0,0 +1,7 @@
---
'mcp-chat': patch
---

Add auto-inject functionality to pre-load MCP servers

- Resolves #9
4 changes: 4 additions & 0 deletions .prettierignore
@@ -0,0 +1,4 @@
node_modules
build
dist
.next
5 changes: 5 additions & 0 deletions .prettierrc
@@ -0,0 +1,5 @@
{
"singleQuote": true,
"semi": true,
"trailingComma": "all"
}
50 changes: 49 additions & 1 deletion README.md
@@ -38,12 +38,14 @@ This application supports connecting to Model Context Protocol (MCP) servers to
#### SSE Configuration

If you select SSE transport:

1. Enter the server URL (e.g., `https://mcp.example.com/token/sse`)
2. Click "Add Server"

#### stdio Configuration

If you select stdio transport:

1. Enter the command to execute (e.g., `npx`)
2. Enter the command arguments (e.g., `-y @modelcontextprotocol/server-google-maps`)
- You can enter space-separated arguments or paste a JSON array
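Since the arguments field accepts either space-separated text or a pasted JSON array, the normalization could be sketched like this (an illustrative helper, not code from this PR):

```typescript
// Hypothetical sketch: normalize the args input, which may be either a
// space-separated string or a pasted JSON array of strings.
function parseArgs(input: string): string[] {
  const trimmed = input.trim();
  if (trimmed.startsWith('[')) {
    try {
      const parsed = JSON.parse(trimmed);
      if (Array.isArray(parsed) && parsed.every((a) => typeof a === 'string')) {
        return parsed;
      }
    } catch {
      // Not valid JSON; fall through to whitespace splitting.
    }
  }
  return trimmed.split(/\s+/).filter(Boolean);
}
```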
@@ -59,6 +61,52 @@ You can use any MCP-compatible server with this application. Here are some examp
- [Zapier MCP](https://zapier.com/mcp) - Provides access to Zapier tools
- Any MCP server using stdio transport with npx and python3

## MCP Server Auto-Inject Functionality

This project supports automatic injection of MCP (Model Context Protocol) servers at runtime using a root-level `mcp.json` file. This allows you to pre-configure which MCP servers are available and which should be enabled by default when the app starts.

### How It Works

- Place an `mcp.json` file in your project root.
- Define all desired MCP servers under the `mcpServers` object.
- Each server configuration can include:
- `type`: The type of server (e.g., `stdio`, `sse`, `http`).
- `command`: The command to launch the MCP server (e.g., `npx`, `python3`).
- `args`: An array of arguments to pass to the command. For example, `['-y', '@modelcontextprotocol/server-github']` will run `npx -y @modelcontextprotocol/server-github`.
- `env`: An object of environment variables to set when launching the server. For example, `{ "GITHUB_PERSONAL_ACCESS_TOKEN": "YOUR_TOKEN" }` will set the token in the server's environment.
- `autoEnable`: If `true`, the server will be enabled automatically at app launch.

**Best Practices:**

- Use `args` to keep your command line flexible and easy to update without changing the command itself.
- Store sensitive information like API keys in `env` and reference environment variables as needed.
- You can add as many custom environment variables as your MCP server supports.

The app will read this file at startup and inject all listed servers into the UI. Servers with `autoEnable: true` will be selected for immediate use.
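That startup step could be sketched as follows (the `mcp.json` file name comes from this PR; the loader functions and the `ServerConfig` type are illustrative assumptions, not the PR's actual implementation):

```typescript
import { readFileSync } from 'node:fs';
import path from 'node:path';

// Illustrative types for the documented mcp.json shape.
interface ServerConfig {
  type: 'stdio' | 'sse' | 'http';
  command?: string;
  args?: string[];
  env?: Record<string, string>;
  autoEnable?: boolean;
}

// Parse the raw file contents and report which servers should be
// enabled immediately.
function parseMcpConfig(raw: string): {
  servers: Record<string, ServerConfig>;
  autoEnabled: string[];
} {
  const servers: Record<string, ServerConfig> =
    JSON.parse(raw).mcpServers ?? {};
  const autoEnabled = Object.keys(servers).filter(
    (name) => servers[name].autoEnable === true,
  );
  return { servers, autoEnabled };
}

// Hypothetical startup step: read mcp.json from the project root.
function loadMcpConfig(root = process.cwd()) {
  return parseMcpConfig(readFileSync(path.join(root, 'mcp.json'), 'utf8'));
}
```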

### Sample mcp.json

```json
{
  "mcpServers": {
    "github": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "YOUR_GITHUB_PERSONAL_ACCESS_TOKEN"
      },
      "autoEnable": true
    }
  }
}
```

- You may add multiple servers under `mcpServers`.
- All fields are customizable per server.
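A lightweight runtime check of this file shape could look like the following (an illustrative sketch; the PR itself may validate the file differently, or not at all):

```typescript
// Illustrative validator for the documented mcp.json shape.
// Returns a list of problems; an empty list means the config looks valid.
function validateMcpFile(value: unknown): string[] {
  const problems: string[] = [];
  if (typeof value !== 'object' || value === null) {
    return ['config must be a JSON object'];
  }
  const servers = (value as any).mcpServers;
  if (typeof servers !== 'object' || servers === null) {
    return ['missing "mcpServers" object'];
  }
  for (const [name, cfg] of Object.entries<any>(servers)) {
    if (!['stdio', 'sse', 'http'].includes(cfg?.type)) {
      problems.push(`${name}: "type" must be stdio, sse, or http`);
    }
    if (cfg?.type === 'stdio' && typeof cfg?.command !== 'string') {
      problems.push(`${name}: stdio servers need a "command" string`);
    }
    if (cfg?.args !== undefined && !Array.isArray(cfg.args)) {
      problems.push(`${name}: "args" must be an array`);
    }
    if (cfg?.autoEnable !== undefined && typeof cfg.autoEnable !== 'boolean') {
      problems.push(`${name}: "autoEnable" must be a boolean`);
    }
  }
  return problems;
}
```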

---

## License

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
92 changes: 47 additions & 45 deletions ai/providers.ts
@@ -1,13 +1,13 @@
import { createOpenAI } from "@ai-sdk/openai";
import { createGroq } from "@ai-sdk/groq";
import { createAnthropic } from "@ai-sdk/anthropic";
import { createXai } from "@ai-sdk/xai";
import { createOpenAI } from '@ai-sdk/openai';
import { createGroq } from '@ai-sdk/groq';
import { createAnthropic } from '@ai-sdk/anthropic';
import { createXai } from '@ai-sdk/xai';

import {
customProvider,
wrapLanguageModel,
extractReasoningMiddleware
} from "ai";
import {
customProvider,
wrapLanguageModel,
extractReasoningMiddleware,
} from 'ai';

export interface ModelInfo {
provider: string;
@@ -27,12 +27,12 @@ const getApiKey = (key: string): string | undefined => {
if (process.env[key]) {
return process.env[key] || undefined;
}

// Fall back to localStorage if available
if (typeof window !== 'undefined') {
return window.localStorage.getItem(key) || undefined;
}

return undefined;
};

@@ -54,45 +54,47 @@ const xaiClient = createXai({
});

const languageModels = {
"gpt-4.1-mini": openaiClient("gpt-4.1-mini"),
"claude-3-7-sonnet": anthropicClient('claude-3-7-sonnet-20250219'),
"qwen-qwq": wrapLanguageModel(
{
model: groqClient("qwen-qwq-32b"),
middleware
}
),
"grok-3-mini": xaiClient("grok-3-mini-latest"),
'gpt-4.1-mini': openaiClient('gpt-4.1-mini'),
'claude-3-7-sonnet': anthropicClient('claude-3-7-sonnet-20250219'),
'qwen-qwq': wrapLanguageModel({
model: groqClient('qwen-qwq-32b'),
middleware,
}),
'grok-3-mini': xaiClient('grok-3-mini-latest'),
};

export const modelDetails: Record<keyof typeof languageModels, ModelInfo> = {
"gpt-4.1-mini": {
provider: "OpenAI",
name: "GPT-4.1 Mini",
description: "Compact version of OpenAI's GPT-4.1 with good balance of capabilities, including vision.",
apiVersion: "gpt-4.1-mini",
capabilities: ["Balance", "Creative", "Vision"]
'gpt-4.1-mini': {
provider: 'OpenAI',
name: 'GPT-4.1 Mini',
description:
"Compact version of OpenAI's GPT-4.1 with good balance of capabilities, including vision.",
apiVersion: 'gpt-4.1-mini',
capabilities: ['Balance', 'Creative', 'Vision'],
},
"claude-3-7-sonnet": {
provider: "Anthropic",
name: "Claude 3.7 Sonnet",
description: "Latest version of Anthropic's Claude 3.7 Sonnet with strong reasoning and coding capabilities.",
apiVersion: "claude-3-7-sonnet-20250219",
capabilities: ["Reasoning", "Efficient", "Agentic"]
'claude-3-7-sonnet': {
provider: 'Anthropic',
name: 'Claude 3.7 Sonnet',
description:
"Latest version of Anthropic's Claude 3.7 Sonnet with strong reasoning and coding capabilities.",
apiVersion: 'claude-3-7-sonnet-20250219',
capabilities: ['Reasoning', 'Efficient', 'Agentic'],
},
"qwen-qwq": {
provider: "Groq",
name: "Qwen QWQ",
description: "Latest version of Alibaba's Qwen QWQ with strong reasoning and coding capabilities.",
apiVersion: "qwen-qwq",
capabilities: ["Reasoning", "Efficient", "Agentic"]
'qwen-qwq': {
provider: 'Groq',
name: 'Qwen QWQ',
description:
"Latest version of Alibaba's Qwen QWQ with strong reasoning and coding capabilities.",
apiVersion: 'qwen-qwq',
capabilities: ['Reasoning', 'Efficient', 'Agentic'],
},
"grok-3-mini": {
provider: "XAI",
name: "Grok 3 Mini",
description: "Latest version of XAI's Grok 3 Mini with strong reasoning and coding capabilities.",
apiVersion: "grok-3-mini-latest",
capabilities: ["Reasoning", "Efficient", "Agentic"]
'grok-3-mini': {
provider: 'XAI',
name: 'Grok 3 Mini',
description:
"Latest version of XAI's Grok 3 Mini with strong reasoning and coding capabilities.",
apiVersion: 'grok-3-mini-latest',
capabilities: ['Reasoning', 'Efficient', 'Agentic'],
},
};

@@ -114,4 +116,4 @@ export type modelID = keyof typeof languageModels;

export const MODELS = Object.keys(languageModels);

export const defaultModel: modelID = "qwen-qwq";
export const defaultModel: modelID = 'qwen-qwq';
43 changes: 23 additions & 20 deletions app/actions.ts
@@ -1,49 +1,52 @@
"use server";
'use server';

import { openai } from "@ai-sdk/openai";
import { generateObject } from "ai";
import { z } from "zod";
import { openai } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

// Helper to extract text content from a message regardless of format
function getMessageText(message: any): string {
// Check if the message has parts (new format)
if (message.parts && Array.isArray(message.parts)) {
const textParts = message.parts.filter((p: any) => p.type === 'text' && p.text);
const textParts = message.parts.filter(
(p: any) => p.type === 'text' && p.text,
);
if (textParts.length > 0) {
return textParts.map((p: any) => p.text).join('\n');
}
}

// Fallback to content (old format)
if (typeof message.content === 'string') {
return message.content;
}

// If content is an array (potentially of parts), try to extract text
if (Array.isArray(message.content)) {
const textItems = message.content.filter((item: any) =>
typeof item === 'string' || (item.type === 'text' && item.text)
const textItems = message.content.filter(
(item: any) =>
typeof item === 'string' || (item.type === 'text' && item.text),
);

if (textItems.length > 0) {
return textItems.map((item: any) =>
typeof item === 'string' ? item : item.text
).join('\n');
return textItems
.map((item: any) => (typeof item === 'string' ? item : item.text))
.join('\n');
}
}

return '';
}
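Exercised on both message shapes, the helper behaves like this (a self-contained sketch; the function body is condensed from the new-format diff above, and the sample messages are illustrative):

```typescript
// Condensed copy of getMessageText for demonstration.
function getMessageText(message: any): string {
  // New format: a parts array; keep only text parts.
  if (message.parts && Array.isArray(message.parts)) {
    const textParts = message.parts.filter(
      (p: any) => p.type === 'text' && p.text,
    );
    if (textParts.length > 0) {
      return textParts.map((p: any) => p.text).join('\n');
    }
  }
  // Old format: plain string content.
  if (typeof message.content === 'string') {
    return message.content;
  }
  // Content as an array of strings or text items.
  if (Array.isArray(message.content)) {
    const textItems = message.content.filter(
      (item: any) =>
        typeof item === 'string' || (item.type === 'text' && item.text),
    );
    if (textItems.length > 0) {
      return textItems
        .map((item: any) => (typeof item === 'string' ? item : item.text))
        .join('\n');
    }
  }
  return '';
}

// New format: text extracted from the parts array, non-text parts ignored.
const fromParts = getMessageText({
  parts: [{ type: 'text', text: 'hello' }, { type: 'tool-call' }],
});
// Old format: string content passed through unchanged.
const fromString = getMessageText({ content: 'hi there' });
```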

export async function generateTitle(messages: any[]) {
// Convert messages to a format that OpenAI can understand
const normalizedMessages = messages.map(msg => ({
const normalizedMessages = messages.map((msg) => ({
role: msg.role,
content: getMessageText(msg)
content: getMessageText(msg),
}));

const { object } = await generateObject({
model: openai("gpt-4.1"),
model: openai('gpt-4.1'),
schema: z.object({
title: z.string().min(1).max(100),
}),
@@ -56,8 +59,8 @@ export async function generateTitle(messages: any[]) {
messages: [
...normalizedMessages,
{
role: "user",
content: "Generate a title for the conversation.",
role: 'user',
content: 'Generate a title for the conversation.',
},
],
});