feat: Flexible brand rules management #460

Open · wants to merge 500 commits into base: next

Conversation

@joedanz (Contributor) commented May 9, 2025

Flexible Brand Rules Management

Summary

This PR introduces a more flexible and robust approach to managing brand-specific rules in Task Master AI projects. Users can now specify which brands’ rules to include at project initialization, and can add or remove brand rules at any time post-init via dedicated commands. All documentation has been updated to reflect these improvements.


✨ Features & Changes

  • Brand-Specific Initialization

    • The init command now supports a --rules <brands> or -r <brands> flag.
    • Example:
      task-master init --rules cursor,roo
      task-master init -r windsurf
    • Only the specified brands’ rules and configuration are included in the new project. By default, only Cursor rules are included.
  • Add/Remove Brand Rules After Init

    • New commands:
      • task-master rules add <brands> — Adds one or more brand rule sets and their MCP config to your project.
      • task-master rules remove <brands> — Removes one or more brand rule sets and their MCP config.
    • Supports multiple brands at once:
      task-master rules add windsurf,roo
      task-master rules remove windsurf
    • These commands manage the creation/removal of brand rules directories (e.g., .roo/rules) and the associated MCP configuration; see the walkthrough after this list.
  • Documentation & Tests

    • All relevant documentation (README.md, .mdc files) has been updated to clearly explain the new flags and commands, with usage examples.
    • Integration and unit tests now verify that brand rules are only included when explicitly requested, and that add/remove commands behave as expected.
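
To make the commands above concrete, here is a minimal end-to-end walkthrough. Only the `.roo/rules` path is taken from this PR; the comments about what gets created are illustrative rather than captured output.

```bash
# Start a project with Cursor and Windsurf rules only
task-master init --rules cursor,windsurf

# Later, bring in Roo rules (creates a brand rules directory such as .roo/rules
# plus the associated MCP configuration)
task-master rules add roo

# Drop the Windsurf rules again (removes its rules directory and MCP config)
task-master rules remove windsurf
```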

🛠️ Technical/Codebase Notes

  • Refactored rule transformation helpers and CLI logic for clarity and maintainability.
  • Updated all function names and imports to use generic, brand-agnostic terminology.
  • Ensured no Roo files or directories are created by default; they are only included if requested via -r roo or added later.
  • Improved error handling and logging for rule file operations.

🧪 How to Test

  1. Initialize a new project:

    • task-master init (should only include Cursor rules)
    • task-master init -r roo (should include Roo rules)
    • task-master init -r cursor,windsurf (should include both)
  2. Add/Remove rules after init:

    • task-master rules add roo
    • task-master rules remove cursor
  3. Verify:

    • Brand rules directories and MCP config files are created/removed as expected.
    • No Roo files are present unless explicitly requested.

📝 Other

Crunchyman-ralph and others added 30 commits April 19, 2025 23:49
…yaltoledano#272)

* feat: Add --append flag to parsePRD command - Fixes eyaltoledano#207

* chore: format

* chore: implement tests to core logic and commands

* feat: implement MCP for append flag of parse_prd tool

* fix: append not considering existing tasks

* chore: fix tests

---------

Co-authored-by: Kresna Sucandra <[email protected]>
…ors

Introduced config-manager.js and new utilities (resolveEnvVariable, findProjectRoot). Removed the old global CONFIG object from utils.js. Updated .taskmasterconfig, mcp.json, and .env.example. Added generateComplexityAnalysisPrompt to ui.js. Removed the unused updateSubtaskById from task-manager.js.

Resolved SyntaxError and ReferenceError issues across commands.js, ui.js, task-manager.js, and ai-services.js by replacing CONFIG references with config-manager getters (getDebugFlag, getProjectName, getDefaultSubtasks, isApiKeySet). Refactored the 'models' command to use getConfig/writeConfig and simplified version checking.

This stabilizes the codebase after the initial Task 61 refactoring, fixing CLI errors and enabling subsequent work on Subtasks 61.34 and 61.35.
This commit focuses on standardizing configuration and API key access patterns across key modules as part of subtask 61.34.

Key changes include:

- Refactored `ai-services.js` to remove global AI clients and use `resolveEnvVariable` for API key checks. Client instantiation now relies on `getAnthropicClient`/`getPerplexityClient` accepting a session object.

- Refactored `task-manager.js` (`analyzeTaskComplexity` function) to use the unified `generateTextService` from `ai-services-unified.js`, removing direct AI client calls.

- Replaced direct `process.env` access for model parameters and other configurations (`PERPLEXITY_MODEL`, `CONFIG.*`) in `task-manager.js` with calls to the appropriate getters from `config-manager.js` (e.g., `getResearchModelId(session)`, `getMainMaxTokens(session)`).

- Ensured `utils.js` (`resolveEnvVariable`) correctly handles potentially undefined session objects.

- Updated function signatures where necessary to propagate the `session` object for correct context-aware configuration/key retrieval.

This moves towards the goal of using `ai-client-factory.js` and `ai-services-unified.js` as the standard pattern for AI interactions and centralizing configuration management through `config-manager.js`.
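
As a rough illustration of the session handling mentioned above, here is a minimal sketch of what a `resolveEnvVariable` helper could look like; the exact precedence and the `session.env` shape are assumptions, not this commit's verbatim code.

```js
// utils.js (sketch): resolve a variable from the MCP session first, then process.env
export function resolveEnvVariable(key, session = null) {
	// session can be undefined when called from the CLI, so guard before reading it
	if (session?.env?.[key]) {
		return session.env[key];
	}
	return process.env[key] ?? null;
}
```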
This commit centralizes configuration and environment variable access across various modules by consistently utilizing getters from scripts/modules/config-manager.js. This replaces direct access to process.env and the global CONFIG object, leading to improved consistency, maintainability, testability, and better handling of session-specific configurations within the MCP context.

Key changes include:

- Centralized Getters: Replaced numerous instances of process.env.* and CONFIG.* with corresponding getter functions (e.g., getLogLevel, getMainModelId, getResearchMaxTokens, getMainTemperature, isApiKeySet, getDebugFlag, getDefaultSubtasks).

- Session Awareness: Ensured that the session object is passed to config getters where necessary, particularly within AI service calls (ai-services.js, add-task.js) and error handling (ai-services.js), allowing for session-specific environment overrides.

- API Key Checks: Standardized API key availability checks using isApiKeySet() instead of directly checking process.env.* (e.g., for Perplexity in commands.js and ai-services.js).

- Client Instantiation Cleanup: Removed now-redundant/obsolete local client instantiation functions (getAnthropicClient, getPerplexityClient) from ai-services.js and the global Anthropic client initialization from dependency-manager.js. Client creation should now rely on the config manager and factory patterns.

- Consistent Debug Flag Usage: Standardized calls to getDebugFlag() in commands.js, removing potentially unnecessary null arguments.

- Accurate Progress Calculation: Updated AI stream progress reporting (ai-services.js, add-task.js) to use getMainMaxTokens(session) for more accurate calculations.

- Minor Cleanup: Removed an unused import from scripts/modules/commands.js.

Specific module updates:

- :

  - Uses getLogLevel() instead of process.env.LOG_LEVEL.

- `ai-services.js`:

  - Replaced direct env/config access for model IDs, tokens, temperature, API keys, and default subtasks with appropriate getters.

  - Passed session to handleClaudeError.

  - Removed local getPerplexityClient and getAnthropicClient functions.

  - Updated progress calculations to use getMainMaxTokens(session).

- `commands.js`:

  - Uses isApiKeySet('perplexity') for API key checks.

  - Uses getDebugFlag() consistently for debug checks.

  - Removed an unused import.

- `dependency-manager.js`:

  - Removed global Anthropic client initialization.

- :

  - Uses config getters (getResearch..., getMain...) for Perplexity and Claude API call parameters, preserving customEnv override logic.

This refactoring also resolves a potential SyntaxError: Identifier 'getPerplexityClient' has already been declared by removing the duplicated/obsolete function definition previously present in ai-services.js.
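
For illustration only, one possible shape for the standardized `isApiKeySet()` check described above, assuming it builds on `resolveEnvVariable`; the provider-to-variable mapping shown is an assumption.

```js
// config-manager.js (sketch): standardized API key availability check
import { resolveEnvVariable } from './utils.js';

const PROVIDER_ENV_KEYS = {
	anthropic: 'ANTHROPIC_API_KEY',
	perplexity: 'PERPLEXITY_API_KEY'
};

export function isApiKeySet(providerName, session = null) {
	const envVarName = PROVIDER_ENV_KEYS[providerName];
	if (!envVarName) return false;

	const value = resolveEnvVariable(envVarName, session);
	// Treat missing keys and empty strings as "not set"
	return Boolean(value && value.trim().length > 0);
}
```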
The interactive model setup triggered by `task-master models --setup` was previously attempting to call non-existent setter functions (`setMainModel`, etc.) in `config-manager.js`, leading to errors and preventing configuration updates.

This commit refactors the `--setup` logic within the `models` command handler in `scripts/modules/commands.js`. It now correctly:

- Loads the current configuration using `getConfig()`.

- Updates the appropriate sections of the loaded configuration object based on user selections from `inquirer`.

- Saves the modified configuration using the existing `writeConfig()` function from `config-manager.js`.

- Handles disabling the fallback model correctly.
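
A condensed sketch of the corrected `--setup` flow; the configuration shape (`config.models.*`) and the prompt choices are assumptions for illustration, not the actual inquirer prompts.

```js
// commands.js (sketch): interactive model setup via getConfig()/writeConfig()
import inquirer from 'inquirer';
import { getConfig, writeConfig } from './config-manager.js';

async function runModelsSetup(projectRoot) {
	const config = getConfig(projectRoot);

	const answers = await inquirer.prompt([
		{
			type: 'list',
			name: 'mainModel',
			message: 'Select the main model:',
			choices: ['claude-3-7-sonnet-20250219', 'claude-3-5-sonnet-20241022']
		},
		{ type: 'confirm', name: 'disableFallback', message: 'Disable the fallback model?', default: false }
	]);

	// Update only the relevant sections of the loaded configuration object
	config.models.main.modelId = answers.mainModel;
	if (answers.disableFallback) {
		config.models.fallback = null;
	}

	// Persist with the existing writer instead of calling non-existent setters
	writeConfig(config, projectRoot);
}
```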
…ides

BREAKING CHANGE: Taskmaster now requires a `.taskmasterconfig` file for model/parameter settings. Environment variables (except API keys) are no longer used for overrides.

- Throws an error if `.taskmasterconfig` is missing, guiding the user to run `task-master models --setup`.

- Removed env var checks from config getters in `config-manager.js`.

- Updated `env.example` to remove obsolete variables.

- Refined the missing config file error message in `commands.js`.
- Unified Service: Introduced 'scripts/modules/ai-services-unified.js' to centralize AI interactions using provider modules ('src/ai-providers/') and the Vercel AI SDK.

- Provider Modules: Implemented 'anthropic.js' and 'perplexity.js' wrappers for Vercel SDK.

- 'updateSubtaskById' Fix: Refactored the AI call within 'updateSubtaskById' to use 'generateTextService' from the unified layer, resolving runtime errors related to parameter passing and streaming. This serves as the pattern for refactoring other AI calls in 'scripts/modules/task-manager/'.

- Task Status: Marked Subtask 61.19 as 'done'.

- Rules: Added new 'ai-services.mdc' rule.

This centralizes AI logic, replacing previous direct SDK calls and custom implementations. API keys are resolved via 'resolveEnvVariable' within the service layer. The refactoring of 'updateSubtaskById' establishes the standard approach for migrating other AI-dependent functions in the task manager module to use the unified service.

Relates to Task 61.
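
For context, a rough sketch of what one of the provider wrappers over the Vercel AI SDK might look like; the export name and option handling in 'src/ai-providers/anthropic.js' are assumptions, and only the SDK calls themselves are standard.

```js
// src/ai-providers/anthropic.js (sketch): thin wrapper over the Vercel AI SDK
import { createAnthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

export async function generateAnthropicText({ apiKey, modelId, systemPrompt, prompt, maxTokens, temperature }) {
	const anthropic = createAnthropic({ apiKey });

	const { text } = await generateText({
		model: anthropic(modelId),
		system: systemPrompt,
		prompt,
		maxTokens,
		temperature
	});

	return text;
}
```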
…onfigurationError object thrown if .taskmasterconfig is not present. I'll need to implement MCP tools for the models command so models can be managed from MCP and the config file can be created from there.
* Direct Integration of Roo Code Support

## Overview

This PR adds native Roo Code support directly within the Task Master package, in contrast to PR eyaltoledano#279 which proposed using a separate repository and patch script approach. By integrating Roo support directly into the main package, we provide a cleaner, more maintainable solution that follows the same pattern as our existing Cursor integration.

## Key Changes

1. **Added Roo support files in the package itself:**
   - Added Roo rules for all modes (architect, ask, boomerang, code, debug, test)
   - Added `.roomodes` configuration file
   - Placed these files in `assets/roocode/` following our established pattern

2. **Enhanced init.js to handle Roo setup** (see the sketch after this list):
   - Modified to create all necessary Roo directories
   - Copies Roo rule files to the appropriate locations
   - Sets up proper mode configurations

3. **Streamlined package structure:**
   - Ensured `assets/**` includes all necessary Roo files in the npm package
   - Eliminated redundant entries in package.json
   - Updated prepare-package.js to verify all required files

4. **Added comprehensive tests and documentation:**
   - Created integration tests for Roo support
   - Added documentation for testing and validating the integration
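
As referenced in item 2, a simplified sketch of the Roo setup step in init.js; the helper name, the per-mode directory layout, and the asset paths are assumptions based on the description above.

```js
// init.js (sketch): copy packaged Roo assets into a newly initialized project
import fs from 'fs';
import path from 'path';

function setupRooFiles(assetsDir, targetDir) {
	const modes = ['architect', 'ask', 'boomerang', 'code', 'debug', 'test'];

	// Create a rules directory for each Roo mode under .roo/
	for (const mode of modes) {
		fs.mkdirSync(path.join(targetDir, '.roo', `rules-${mode}`), { recursive: true });
	}

	// Copy the packaged rule files and the .roomodes configuration
	fs.cpSync(path.join(assetsDir, 'roocode', '.roo'), path.join(targetDir, '.roo'), { recursive: true });
	fs.copyFileSync(path.join(assetsDir, 'roocode', '.roomodes'), path.join(targetDir, '.roomodes'));
}
```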

## Implementation Philosophy

Unlike the approach in PR eyaltoledano#279, which suggested:
- A separate repository for Roo integration
- A patch script to fetch external files
- External maintenance of Roo rules

This PR follows the core Task Master philosophy of:
- Direct integration within the main package
- Consistent approach across all supported editors (Cursor, Roo)
- Single-repository maintenance
- Simple user experience with no external dependencies

## Testing

The integration can be tested with:
```bash
npm test -- -t "Roo"
```

## Impact

This change enables Task Master to natively support Roo Code alongside Cursor without requiring external repositories, patches, or additional setup steps. Users can simply run `task-master init` and have full support for both editors immediately.

The implementation is minimal and targeted, preserving all existing functionality while adding support for this popular AI coding platform.

* Update roo-files-inclusion.test.js

* Update README.md

* Address PR feedback: move docs to contributor-docs, fix package.json references, regenerate package-lock.json

@Crunchyman-ralph Thank you for the feedback! I've made the requested changes:

1. ✅ Moved testing-roo-integration.md to the contributor-docs folder
2. ✅ Removed manual package.json changes and used changeset instead
3. ✅ Fixed package references and regenerated package-lock.json
4. ✅ All tests are now passing

Regarding architectural concerns:

- **Rule duplication**: I agree this is an opportunity for improvement. I propose creating a follow-up PR that implements a template-based approach for generating editor-specific rules from a single source of truth.

- **Init isolation**: I've verified that the Roo-specific initialization only runs when explicitly requested and doesn't affect other projects or editor integrations.

- **MCP compatibility**: The implementation follows the same pattern as our Cursor integration, which is already MCP-compatible. I've tested this by [describe your testing approach here].

Let me know if you'd like any additional changes!

* feat: Add procedural generation of Roo rules from Cursor rules

* fixed prettier CI issue

* chore: update gitignore to exclude test files

* removing the old way to source the cursor derived roo rules

* resolving remaining conflicts

* resolving conflict 2

* Update package-lock.json

* fixing prettier

---------

Co-authored-by: neno-is-ooo <[email protected]>
…tion docs

This commit implements several related improvements to the models command and configuration system:

- Added MCP support for the models command:
  - Created new direct function implementation in models.js
  - Registered modelsDirect in task-master-core.js for proper export
  - Added models tool registration in tools/index.js
  - Ensured project name replacement when copying .taskmasterconfig in init.js

- Improved .taskmasterconfig copying during project initialization:
  - Added copyTemplateFile() call in createProjectStructure()
  - Ensured project name is properly replaced in the config

- Restructured tool registration in logical workflow groups:
  - Organized registration into 6 functional categories
  - Improved command ordering to follow typical workflow
  - Added clear group comments for maintainability

- Enhanced documentation in cursor rules:
  - Updated dev_workflow.mdc with clearer config management instructions
  - Added comprehensive models command reference to taskmaster.mdc
  - Clarified CLI vs MCP usage patterns and options
  - Added warning against manual .taskmasterconfig editing
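
As a generic illustration of the tool registration described above, a FastMCP-style sketch; the helper name, the parameter schema, and the `modelsDirect` call signature are assumptions, not the PR's exact wiring.

```js
// mcp-server/src/tools/models.js (sketch): registering a models tool with the MCP server
import { z } from 'zod';
import { modelsDirect } from '../core/task-master-core.js';

export function registerModelsTool(server) {
	server.addTool({
		name: 'models',
		description: 'View or update the Taskmaster model configuration',
		parameters: z.object({
			projectRoot: z.string().describe('Absolute path to the project root'),
			setMain: z.string().optional().describe('Model ID to set as the main model')
		}),
		execute: async (args) => {
			// Delegate to the direct function exported from task-master-core.js
			const result = await modelsDirect(args);
			return JSON.stringify(result);
		}
	});
}
```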
Resolves persistent 404 'Not Found' errors when calling Anthropic models via the Vercel AI SDK. The primary issue was likely related to incorrect or missing API headers.

- Refactors Anthropic provider (src/ai-providers/anthropic.js) to use the standard 'anthropic-version' header instead of potentially outdated/incorrect beta headers when creating the client instance.

- Updates the default fallback model ID in .taskmasterconfig to 'claude-3-5-sonnet-20241022'.

- Fixes the interactive model setup (task-master models --setup) in scripts/modules/commands.js to correctly filter and default the main model selection.

- Improves the cost display in the 'task-master models' command output to explicitly show 'Free' for models with zero cost.

- Updates description for the 'id' parameter in the 'set_task_status' MCP tool definition for clarity.

- Updates list of models and costs
- Fixed MCP server initialization warnings by refactoring config-manager.js to handle missing project roots silently during startup

- Added project root tracking (loadedConfigRoot) to improve config caching and prevent unnecessary reloads

- Modified _loadAndValidateConfig to return defaults without warnings when no explicitRoot provided

- Improved getConfig to only update cache when loading config with a specific project root

- Ensured warning messages still appear when explicitly specified roots have missing/invalid configs

- Prevented console output during MCP startup that was causing JSON parsing errors

- Verified parse_prd and other MCP tools still work correctly with the new config loading approach.

- Replaces the test Perplexity API key in mcp.json and rolls it; the old key is now invalid.
Refactored the `analyzeTaskComplexity` feature and related components (CLI command, MCP tool, direct function) to integrate with the unified AI service layer (`ai-services-unified.js`).

Initially, `generateObjectService` was used to leverage structured output generation. However, this approach encountered persistent errors:
- Perplexity provider returned internal server errors.
- Anthropic provider failed with schema type and model errors.

Due to the unreliability of `generateObjectService` for this specific use case, the core AI interaction within `analyzeTaskComplexity` was reverted to use `generateTextService`. Basic manual JSON parsing and cleanup logic for the text response were reintroduced.

Key changes include:
- Removed direct AI client initialization (Anthropic, Perplexity).
- Removed direct fetching of AI model configuration parameters.
- Removed manual AI retry/fallback/streaming logic.
- Replaced direct AI calls with a call to `generateTextService`.
- Updated the direct function wrapper to pass session context correctly.
- Updated the MCP tool for correct path resolution and argument passing.
- Updated the CLI command for correct path resolution.
- Preserved core functionality: task loading/filtering, report generation, CLI summary display.

Both the CLI command and the MCP tool have been verified to work correctly with this revised approach.
Refactored the `expandTask` feature (`scripts/modules/task-manager/expand-task.js`) and related components (`commands.js`, `mcp-server/src/tools/expand-task.js`, `mcp-server/src/core/direct-functions/expand-task.js`) to integrate with the unified AI service layer (`ai-services-unified.js`) and configuration management (`config-manager.js`).

The refactor involved:

- Removing direct AI client calls and configuration fetching from `expand-task.js`.

- Attempting to use `generateObjectService` for structured subtask generation. This failed due to provider-specific errors (Perplexity internal errors, Anthropic schema translation issues).

- Reverting the core AI interaction to use `generateTextService`, asking the LLM to format its response as JSON containing a "subtasks" array.

- Re-implementing manual JSON parsing and Zod validation (`parseSubtasksFromText`) to handle the text response reliably.

- Updating prompt generation functions (`generateMainSystemPrompt`, `generateMainUserPrompt`, `generateResearchUserPrompt`) to request the correct JSON object structure within the text response.

- Ensuring the `expandTaskDirect` function handles pre-checks (force flag, task status) and correctly passes the `session` context and logger wrapper to the core `expandTask` function.

- Correcting duplicate imports in `commands.js`.

- Validating the refactored feature works correctly via both CLI (`task-master expand --id <id>`) and MCP (`expand_task` tool) for main and research roles.

This aligns the task expansion feature with the new architecture while using the more robust text generation approach due to current limitations with structured output services. Closes subtask 61.37.
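
To illustrate the manual parsing approach this commit settles on, a simplified sketch of a `parseSubtasksFromText`-style helper; the Zod schema fields and the cleanup steps are assumptions rather than the actual implementation.

```js
// expand-task.js (sketch): extract and validate a JSON "subtasks" array from an LLM text response
import { z } from 'zod';

const subtaskSchema = z.object({
	id: z.number(),
	title: z.string(),
	description: z.string(),
	dependencies: z.array(z.number()).optional()
});

const responseSchema = z.object({ subtasks: z.array(subtaskSchema) });

export function parseSubtasksFromText(text) {
	// The model may wrap the JSON in prose or code fences, so locate the outermost object
	const start = text.indexOf('{');
	const end = text.lastIndexOf('}');
	if (start === -1 || end === -1 || end < start) {
		throw new Error('No JSON object found in AI response');
	}

	const parsed = JSON.parse(text.slice(start, end + 1));
	// Zod validation turns malformed AI output into a clear error instead of silent corruption
	return responseSchema.parse(parsed).subtasks;
}
```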
…ort integration

Refactors the `expandTask` and `expandAllTasks` features to complete subtask 61.38 and enhance functionality based on subtask 61.37's refactor.

Key Changes:

- **Additive Expansion (`expandTask`, `expandAllTasks`):**

    - Modified `expandTask` default behavior to append newly generated subtasks to any existing ones.

    - Added a `force` flag (passed down from CLI/MCP via `--force` option/parameter) to `expandTask` and `expandAllTasks`. When `force` is true, existing subtasks are cleared before generating new ones.

    - Updated relevant CLI command (`expand`), MCP tool (`expand_task`, `expand_all`), and direct function wrappers (`expandTaskDirect`, `expandAllTasksDirect`) to handle and pass the `force` flag.

- **Complexity Report Integration (`expandTask`):**

    - `expandTask` now reads `scripts/task-complexity-report.json`.

    - If an analysis entry exists for the target task:

        - `recommendedSubtasks` is used to determine the number of subtasks to generate (unless `--num` is explicitly provided).

        - `expansionPrompt` is used as the primary prompt content for the AI.

        - `reasoning` is appended to any additional context provided.

    - If no report entry exists or the report is missing, it falls back to default subtask count (from config) and standard prompt generation.

- **`expandAllTasks` Orchestration:**

    - Refactored `expandAllTasks` to primarily iterate through eligible tasks (pending/in-progress, considering `force` flag and existing subtasks) and call the updated `expandTask` function for each.

    - Removed redundant logic (like complexity reading or explicit subtask clearing) now handled within `expandTask`.

    - Ensures correct context (`session`, `mcpLog`) and flags (`useResearch`, `force`) are passed down.

- **Configuration & Cleanup:**

    - Updated `.cursor/mcp.json` with new Perplexity/Anthropic API keys (old ones invalidated).

    - Completed refactoring of `expandTask` started in 61.37, confirming usage of `generateTextService` and appropriate prompts.

- **Task Management:**

    - Marked subtask 61.37 as complete.

    - Updated `.changeset/cuddly-zebras-matter.md` to reflect user-facing changes.

These changes finalize the refactoring of the task expansion features, making them more robust, configurable via complexity analysis, and aligned with the unified AI service architecture.
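
A condensed sketch of the complexity-report lookup described above; how entries are keyed and the report's top-level field name are assumptions, while `recommendedSubtasks`, `expansionPrompt`, and `reasoning` come from the commit text.

```js
// expand-task.js (sketch): let task-complexity-report.json drive expansion, with fallbacks
import fs from 'fs';
import path from 'path';

function getExpansionParams(taskId, numFromCli, defaultSubtasks, extraContext, projectRoot) {
	const reportPath = path.join(projectRoot, 'scripts', 'task-complexity-report.json');

	let entry = null;
	if (fs.existsSync(reportPath)) {
		const report = JSON.parse(fs.readFileSync(reportPath, 'utf8'));
		entry = (report.complexityAnalysis || []).find((a) => a.taskId === taskId) ?? null;
	}

	if (!entry) {
		// No report or no matching entry: default subtask count and standard prompt generation
		return { numSubtasks: numFromCli ?? defaultSubtasks, prompt: null, context: extraContext };
	}

	return {
		// An explicit --num still wins over the report's recommendation
		numSubtasks: numFromCli ?? entry.recommendedSubtasks,
		prompt: entry.expansionPrompt,
		context: entry.reasoning ? `${extraContext}\n\n${entry.reasoning}` : extraContext
	};
}
```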
…e obsolete helpers

Completes the refactoring of the AI-interacting task management functions by aligning `update-tasks.js` with the unified service architecture and removing now-unused helper files.

Key Changes:

- **`update-tasks.js` Refactoring:**

    - Replaced direct AI client calls and AI-specific config fetching with a call to `generateTextService` from `ai-services-unified.js`.

    - Preserved the original system and user prompts requesting a JSON array output.

    - Implemented manual JSON parsing (`parseUpdatedTasksFromText`) with Zod validation to handle the text response reliably.

    - Updated the core function signature to accept the standard `context` object (`{ session, mcpLog }`).

    - Corrected logger implementation to handle both MCP (`mcpLog`) and CLI (`consoleLog`) contexts appropriately.

- **Related Component Updates:**

    - Refactored `mcp-server/src/core/direct-functions/update-tasks.js` to use the standard direct function pattern (logger wrapper, silent mode, call core function with context).

    - Verified `mcp-server/src/tools/update.js` correctly passes arguments and context.

    - Verified `scripts/modules/commands.js` (update command) correctly calls the refactored core function.

- **Obsolete File Cleanup:**

    - Removed the now-unused `scripts/modules/task-manager/get-subtasks-from-ai.js` file and its export, as its functionality was integrated into `expand-task.js`.

    - Removed the now-unused `scripts/modules/task-manager/generate-subtask-prompt.js` file and its export for the same reason.

- **Task Management:**

    - Marked subtasks 61.38, 61.39, and 61.41 as complete.

This commit finalizes the alignment of `updateTasks`, `updateTaskById`, `expandTask`, `expandAllTasks`, `analyzeTaskComplexity`, `addTask`, and `parsePRD` with the unified AI service and configuration management patterns.
@joedanz requested a review from Crunchyman-ralph May 27, 2025 04:35