
Claude Development Guide for Zen-Marketing MCP Server

This file contains essential commands and workflows for developing the Zen-Marketing MCP Server, a marketing-focused fork of Zen MCP Server.

Project Context

What is Zen-Marketing? A Claude Desktop MCP server providing AI-powered marketing tools focused on:

  • Content variation generation for A/B testing
  • Cross-platform content adaptation
  • Writing style enforcement
  • SEO optimization for WordPress
  • Guest content editing with voice preservation
  • Technical fact verification
  • Internal linking strategy
  • Multi-channel campaign planning

Target User: Solo marketing professionals managing technical B2B content, particularly in industries like HVAC, SaaS, and technical education.

Key Difference from Zen Code: This is for marketing/content work, not software development. Tools generate content variations, enforce writing styles, and optimize for platforms like LinkedIn, newsletters, and WordPress - not code review or debugging.

Quick Reference Commands

Initial Setup

# Navigate to project directory
cd ~/mcp/zen-marketing

# Copy core files from zen-mcp-server (if starting fresh)
# We'll do this in the new session

# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies (once requirements.txt is created)
pip install -r requirements.txt

# Create .env file
cp .env.example .env
# Edit .env with your API keys

Development Workflow

# Activate environment
source .venv/bin/activate

# Run code quality checks (once implemented)
./code_quality_checks.sh

# Run server locally for testing
python server.py

# View logs
tail -f logs/mcp_server.log

# Run tests
python -m pytest tests/ -v

Claude Desktop Configuration

Add to ~/.claude.json:

{
  "mcpServers": {
    "zen-marketing": {
      "command": "/home/ben/mcp/zen-marketing/.venv/bin/python",
      "args": ["/home/ben/mcp/zen-marketing/server.py"],
      "env": {
        "OPENROUTER_API_KEY": "your-openrouter-key",
        "GEMINI_API_KEY": "your-gemini-key",
        "DEFAULT_MODEL": "gemini-2.5-pro",
        "FAST_MODEL": "gemini-flash",
        "CREATIVE_MODEL": "minimax-m2",
        "ENABLE_WEB_SEARCH": "true",
        "DISABLED_TOOLS": "",
        "LOG_LEVEL": "INFO"
      }
    }
  }
}

After modifying config: Restart Claude Desktop for changes to take effect.

Tool Development Guidelines

Tool Categories

Simple Tools (single-shot, fast response):

  • Inherit from SimpleTool base class
  • Focus on speed and iteration
  • Examples: contentvariant, platformadapt, subjectlines, factcheck
  • Use fast models (gemini-flash) when possible

Workflow Tools (multi-step processes):

  • Inherit from WorkflowTool base class
  • Systematic step-by-step workflows
  • Track progress, confidence, findings
  • Examples: styleguide, seooptimize, guestedit, linkstrategy

Temperature Guidelines for Marketing Tools

  • High (0.7-0.8): Content variation, creative adaptation
  • Medium (0.5-0.6): Balanced tasks, campaign planning
  • Low (0.3-0.4): Analytical work, SEO optimization
  • Very Low (0.2): Fact-checking, technical verification
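These bands can live in config.py as named constants that tools reference. Only TEMPERATURE_BALANCED appears elsewhere in this guide; the other names below are assumptions for illustration.

```python
# config.py (sketch): temperature constants matching the guidelines above.
# All names except TEMPERATURE_BALANCED are illustrative assumptions.
TEMPERATURE_CREATIVE = 0.75    # High: content variation, creative adaptation
TEMPERATURE_BALANCED = 0.55    # Medium: balanced tasks, campaign planning
TEMPERATURE_ANALYTICAL = 0.35  # Low: analytical work, SEO optimization
TEMPERATURE_FACTUAL = 0.2      # Very low: fact-checking, verification
```

Keeping these in one place means a tool's get_default_temperature() stays a one-line lookup.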

Model Selection Strategy

Gemini 2.5 Pro (gemini-2.5-pro):

  • Analytical and strategic work
  • SEO optimization
  • Guest editing
  • Internal linking analysis
  • Voice analysis
  • Campaign planning
  • Fact-checking

Gemini Flash (gemini-flash):

  • Fast bulk generation
  • Subject line creation
  • Quick variations
  • Cost-effective iterations

Minimax M2 (minimax-m2):

  • Creative content generation
  • Platform adaptation
  • Content repurposing
  • Marketing copy variations

System Prompt Best Practices

Marketing tool prompts should:

  1. Specify output format clearly (JSON, markdown, numbered list)
  2. Include platform constraints (character limits, formatting rules)
  3. Emphasize preservation (voice, expertise, technical accuracy)
  4. Request rationale (why certain variations work, what to test)
  5. Avoid code terminology (use "content" not "implementation")

Example prompt structure:

CONTENTVARIANT_PROMPT = """
You are a marketing content strategist specializing in A/B testing and variation generation.

TASK: Generate multiple variations of marketing content for testing different approaches.

OUTPUT FORMAT:
Return variations as numbered list, each with:
1. The variation text
2. The testing angle (what makes it different)
3. Predicted audience response

CONSTRAINTS:
- Maintain core message across variations
- Respect platform character limits if specified
- Preserve brand voice characteristics
- Generate genuinely different approaches, not just word swaps

VARIATION TYPES:
- Hook variations: Different opening angles
- Length variations: Short, medium, long
- Tone variations: Professional, conversational, urgent
- Structure variations: Question, statement, story
- CTA variations: Different calls-to-action
"""

Implementation Phases

Phase 1: Foundation ✓ (You Are Here)

  • Create project directory
  • Write implementation plan (PLAN.md)
  • Create development guide (CLAUDE.md)
  • Copy core architecture from zen-mcp-server
  • Configure minimax provider
  • Remove code-specific tools
  • Test basic chat functionality

Phase 2: Simple Tools (Priority: High)

Implementation order based on real-world usage:

  1. contentvariant - Most frequently used (subject lines, social posts)
  2. subjectlines - Specific workflow mentioned in project memories
  3. platformadapt - Multi-channel content distribution
  4. factcheck - Technical accuracy verification

Phase 3: Workflow Tools (Priority: Medium)

  1. styleguide - Writing rule enforcement (no em-dashes, etc.)
  2. seooptimize - WordPress SEO optimization
  3. guestedit - Guest content editing workflow
  4. linkstrategy - Internal linking and cross-platform integration

Phase 4: Advanced Features (Priority: Lower)

  1. voiceanalysis - Voice extraction and consistency checking
  2. campaignmap - Multi-touch campaign planning

Tool Implementation Checklist

For each new tool:

Code Files:

  • Create tool file in tools/ (e.g., tools/contentvariant.py)
  • Create system prompt in systemprompts/ (e.g., systemprompts/contentvariant_prompt.py)
  • Create test file in tests/ (e.g., tests/test_contentvariant.py)
  • Register tool in server.py
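The registration step can be as small as a name-keyed dict that also honors DISABLED_TOOLS. The real zen-mcp-server registry API may differ; the class below is a stand-in for the actual tool, used only to show the pattern.

```python
# server.py (sketch): minimal tool registration honoring DISABLED_TOOLS.
# The real zen-mcp-server registry API may differ.
import os

TOOLS: dict[str, object] = {}

def register_tool(tool) -> None:
    """Register a tool unless its name is listed in DISABLED_TOOLS."""
    disabled = {t.strip() for t in os.environ.get("DISABLED_TOOLS", "").split(",") if t.strip()}
    if tool.get_name() not in disabled:
        TOOLS[tool.get_name()] = tool

class ContentVariantTool:
    """Stand-in for the real class in tools/contentvariant.py."""
    def get_name(self) -> str:
        return "contentvariant"

register_tool(ContentVariantTool())
```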

Tool Class Requirements:

  • Inherit from SimpleTool or WorkflowTool
  • Implement get_name() - tool name
  • Implement get_description() - what it does
  • Implement get_system_prompt() - behavior instructions
  • Implement get_default_temperature() - creativity level
  • Implement get_model_category() - FAST_RESPONSE or DEEP_THINKING
  • Implement get_request_model() - Pydantic request schema
  • Implement get_input_schema() - MCP tool schema
  • Implement request/response formatting hooks

Testing:

  • Unit tests for request validation
  • Unit tests for response formatting
  • Integration test with real model (optional)
  • Add to quality checks script

Documentation:

  • Add tool to README.md
  • Create examples in docs/tools/
  • Update PLAN.md progress

Common Development Tasks

Adding a New Simple Tool

# tools/mynewtool.py
from typing import Optional
from pydantic import Field
from tools.shared.base_models import ToolRequest
from .simple.base import SimpleTool
from systemprompts import MYNEWTOOL_PROMPT
from config import TEMPERATURE_BALANCED

class MyNewToolRequest(ToolRequest):
    """Request model for MyNewTool"""
    prompt: str = Field(..., description="What you want to accomplish")
    files: Optional[list[str]] = Field(default_factory=list)

class MyNewTool(SimpleTool):
    def get_name(self) -> str:
        return "mynewtool"

    def get_description(self) -> str:
        return "Brief description of what this tool does"

    def get_system_prompt(self) -> str:
        return MYNEWTOOL_PROMPT

    def get_default_temperature(self) -> float:
        return TEMPERATURE_BALANCED

    def get_model_category(self) -> "ToolModelCategory":
        from tools.models import ToolModelCategory
        return ToolModelCategory.FAST_RESPONSE

    def get_request_model(self):
        return MyNewToolRequest

Adding a New Workflow Tool

# tools/mynewworkflow.py
from typing import Optional
from pydantic import Field
from tools.shared.base_models import WorkflowRequest
from .workflow.base import WorkflowTool
from systemprompts import MYNEWWORKFLOW_PROMPT

class MyNewWorkflowRequest(WorkflowRequest):
    """Request model for workflow tool"""
    step: str = Field(description="Current step content")
    step_number: int = Field(ge=1)
    total_steps: int = Field(ge=1)
    next_step_required: bool
    findings: str = Field(description="What was discovered")
    # Add workflow-specific fields

class MyNewWorkflow(WorkflowTool):
    """Implementation mirrors a simple tool, plus workflow-specific
    logic for step tracking, findings, and confidence."""

Testing a Tool Manually

# Start server with debug logging
LOG_LEVEL=DEBUG python server.py

# In another terminal, watch logs
tail -f logs/mcp_server.log | grep -E "(TOOL_CALL|ERROR|MyNewTool)"

# In Claude Desktop, test:
# "Use zen-marketing to generate 10 subject lines about HVAC maintenance"

Project Structure

zen-marketing/
├── server.py                 # Main MCP server entry point
├── config.py                 # Configuration constants
├── PLAN.md                   # Implementation plan
├── CLAUDE.md                 # Development guide (this file)
├── README.md                 # User-facing documentation
├── requirements.txt          # Python dependencies
├── .env.example              # Environment variable template
├── .env                      # Local config (gitignored)
├── run-server.sh             # Setup and run script
├── code_quality_checks.sh    # Linting and testing
│
├── tools/                    # Tool implementations
│   ├── __init__.py
│   ├── contentvariant.py     # Bulk variation generator
│   ├── platformadapt.py      # Cross-platform adapter
│   ├── subjectlines.py       # Email subject line generator
│   ├── styleguide.py         # Writing style enforcer
│   ├── seooptimize.py        # SEO optimizer
│   ├── guestedit.py          # Guest content editor
│   ├── linkstrategy.py       # Internal linking strategist
│   ├── factcheck.py          # Technical fact checker
│   ├── voiceanalysis.py      # Voice extractor/validator
│   ├── campaignmap.py        # Campaign planner
│   ├── chat.py               # General chat (from zen)
│   ├── thinkdeep.py          # Deep thinking (from zen)
│   ├── planner.py            # Planning (from zen)
│   ├── models.py             # Shared models
│   ├── simple/               # Simple tool base classes
│   │   └── base.py
│   ├── workflow/             # Workflow tool base classes
│   │   └── base.py
│   └── shared/               # Shared utilities
│       └── base_models.py
│
├── providers/                # AI provider implementations
│   ├── __init__.py
│   ├── base.py               # Base provider interface
│   ├── gemini.py             # Google Gemini
│   ├── minimax.py            # Minimax (NEW)
│   ├── openrouter.py         # OpenRouter fallback
│   ├── registry.py           # Provider registry
│   └── shared/
│
├── systemprompts/            # System prompts for tools
│   ├── __init__.py
│   ├── contentvariant_prompt.py
│   ├── platformadapt_prompt.py
│   ├── subjectlines_prompt.py
│   ├── styleguide_prompt.py
│   ├── seooptimize_prompt.py
│   ├── guestedit_prompt.py
│   ├── linkstrategy_prompt.py
│   ├── factcheck_prompt.py
│   ├── voiceanalysis_prompt.py
│   ├── campaignmap_prompt.py
│   ├── chat_prompt.py        # From zen
│   ├── thinkdeep_prompt.py   # From zen
│   └── planner_prompt.py     # From zen
│
├── utils/                    # Utility functions
│   ├── conversation_memory.py # Conversation continuity
│   ├── file_utils.py         # File handling
│   └── web_search.py         # Web search integration
│
├── tests/                    # Test suite
│   ├── __init__.py
│   ├── test_contentvariant.py
│   ├── test_platformadapt.py
│   ├── test_subjectlines.py
│   └── ...
│
├── logs/                     # Log files (gitignored)
│   ├── mcp_server.log
│   └── mcp_activity.log
│
└── docs/                     # Documentation
    ├── getting-started.md
    ├── tools/
    │   ├── contentvariant.md
    │   ├── platformadapt.md
    │   └── ...
    └── examples/
        └── marketing-workflows.md

Key Concepts from Zen Architecture

Conversation Continuity

Every tool supports continuation_id to maintain context across interactions:

# First call
result1 = await tool.execute({
    "prompt": "Analyze this brand voice",
    "files": ["brand_samples/post1.txt", "brand_samples/post2.txt"]
})
# Returns: continuation_id: "abc123"

# Follow-up call (remembers previous context)
result2 = await tool.execute({
    "prompt": "Now check if this new draft matches the voice",
    "files": ["new_draft.txt"],
    "continuation_id": "abc123"  # Preserves context
})

File Handling

Tools automatically:

  • Expand directories to individual files
  • Deduplicate file lists
  • Handle absolute paths
  • Process images (screenshots, brand assets)
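The text-file side of this pipeline (expand directories, absolutize, deduplicate) can be sketched as below. The real utils/file_utils.py implementation may differ; this only illustrates the behavior tools rely on.

```python
# Sketch of the file-preparation steps above: expand directories to files,
# normalize to absolute paths, and deduplicate while preserving order.
from pathlib import Path

def prepare_files(paths: list[str]) -> list[str]:
    seen: set[str] = set()
    result: list[str] = []
    for p in paths:
        path = Path(p).expanduser().resolve()
        # Directories expand to their contained files; plain files pass through.
        candidates = sorted(f for f in path.rglob("*") if f.is_file()) if path.is_dir() else [path]
        for f in candidates:
            s = str(f)
            if s not in seen:
                seen.add(s)
                result.append(s)
    return result
```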

Web Search Integration

Tools can request Claude to perform web searches:

# In system prompt:
"If you need current information about [topic], request a web search from Claude."

# Claude will then use WebSearch tool and provide results

Multi-Model Orchestration

Tools specify model category, server selects best available:

  • FAST_RESPONSE → gemini-flash or equivalent
  • DEEP_THINKING → gemini-2.5-pro or equivalent
  • User can override with model parameter

Debugging Common Issues

Tool Not Appearing in Claude Desktop

  1. Check server.py registers the tool
  2. Verify tool is not in DISABLED_TOOLS env var
  3. Restart Claude Desktop after config changes
  4. Check logs: tail -f logs/mcp_server.log

Model Selection Issues

  1. Verify API keys in .env
  2. Check provider registration in providers/registry.py
  3. Test with explicit model name: "model": "gemini-2.5-pro"
  4. Check logs for provider errors

Response Formatting Issues

  1. Validate system prompt specifies output format
  2. Check response doesn't exceed token limits
  3. Test with simpler input first
  4. Review logs for truncation warnings

Conversation Continuity Not Working

  1. Verify continuation_id is being passed correctly
  2. Check conversation hasn't expired (default 6 hours)
  3. Validate conversation memory storage
  4. Review logs: grep "continuation_id" logs/mcp_server.log

Code Quality Standards

Before committing:

# Run all quality checks
./code_quality_checks.sh

# Manual checks:
ruff check . --fix          # Linting
black .                      # Formatting
isort .                      # Import sorting
pytest tests/ -v             # Run tests

Marketing-Specific Considerations

Character Limits by Platform

Tools should be aware of:

  • Twitter/X: 280 characters
  • Bluesky: 300 characters
  • LinkedIn: 3,000 characters (1,300 optimal)
  • Instagram: 2,200 characters
  • Facebook: no hard limit (500 characters optimal)
  • Email subject: 60 characters optimal
  • Email preview: 90-100 characters
  • Meta description: 156 characters
  • Page title: 60 characters

Writing Style Rules from Project Memories

  • No em-dashes (use periods or semicolons)
  • No "This isn't X, it's Y" constructions
  • Direct affirmative statements over negations
  • Semantic variety in paragraph openings
  • Concrete metrics over abstract claims
  • Technical accuracy preserved
  • Author voice maintained
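The first two rules are mechanical enough to lint before sending content to a model; the judgment-based rules (voice, semantic variety) still need model review. This checker is an illustrative sketch, not an existing module.

```python
# Lightweight lint for the two mechanical style rules above.
# Judgment-based rules still require model or human review.
import re

def style_violations(text: str) -> list[str]:
    issues: list[str] = []
    if "\u2014" in text:  # em-dash character
        issues.append("em-dash found; use a period or semicolon")
    if re.search(r"\bisn't\s+\w+[^.]*,\s*it's\b", text, re.IGNORECASE):
        issues.append('"This isn\'t X, it\'s Y" construction found')
    return issues
```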

Testing Angles for Variations

  • Technical curiosity
  • Contrarian/provocative
  • Knowledge gap emphasis
  • Urgency/timeliness
  • Insider knowledge positioning
  • Problem-solution framing
  • Before-after transformation
  • Social proof/credibility
  • FOMO (fear of missing out)
  • Educational value

Next Session Goals

When you start the new session in ~/mcp/zen-marketing/:

  1. Copy Core Files from Zen

    • Copy base architecture preserving git history
    • Remove code-specific tools
    • Update imports and references
  2. Configure Minimax Provider

    • Add minimax support to providers/
    • Register in provider registry
    • Test basic model calls
  3. Implement First Simple Tool

    • Start with contentvariant (highest priority)
    • Create tool, system prompt, and tests
    • Test end-to-end with Claude Desktop
  4. Validate Architecture

    • Ensure conversation continuity works
    • Verify file handling
    • Test web search integration

Questions to Consider

Before implementing each tool:

  1. What real-world workflow does this solve? (Reference project memories)
  2. What's the minimum viable version?
  3. What can go wrong? (Character limits, API errors, invalid input)
  4. How will users test variations? (Output format)
  5. Does it need web search? (Current info, fact-checking)
  6. What's the right temperature? (Creative vs analytical)
  7. Simple or workflow tool? (Single-shot vs multi-step)

Resources

  • Zen MCP Server Repo: Source for architecture and patterns
  • MCP Protocol Docs: https://modelcontextprotocol.io
  • Claude Desktop Config: ~/.claude.json
  • Project Memories: See PLAN.md for user workflow examples
  • Platform Best Practices: Research current 2025 guidelines

Ready to build? Start the new session with:

cd ~/mcp/zen-marketing
# Then ask Claude to begin Phase 1 implementation