Ben 122797d325 Update CLAUDE.md with improved /init format
- Restructured to follow /init specifications for Claude Code
- Added clear command reference for setup and development
- Documented architecture patterns (tool system, providers, conversation continuity)
- Explained schema generation and file processing systems
- Removed planning/roadmap content (belongs in PLAN.md)
- Added practical debugging tips and implementation patterns
- Focused on non-obvious architecture insights
2025-11-07 12:17:26 -04:00


CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Project Overview

Zen-Marketing is an MCP server for Claude Desktop providing AI-powered marketing tools. It's a fork of Zen MCP Server, specialized for marketing workflows rather than software development.

Key distinction: This server generates content variations, enforces writing styles, and optimizes for platforms (LinkedIn, newsletters, WordPress) - not code review or debugging.

Target user: Solo marketing professionals managing technical B2B content (HVAC, SaaS, technical education).

Essential Commands

Setup and Running

# Initial setup
./run-server.sh

# Manual setup
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
# Edit .env with API keys

# Run server
python server.py

# Run with debug logging
LOG_LEVEL=DEBUG python server.py

Testing and Development

# Watch logs during development
tail -f logs/mcp_server.log

# Filter logs for specific tool
tail -f logs/mcp_server.log | grep -E "(TOOL_CALL|ERROR|contentvariant)"

# Test with Claude Desktop after changes
# Restart Claude Desktop to reload MCP server configuration

Claude Desktop Configuration

Configuration file: ~/.claude.json or ~/Library/Application Support/Claude/claude_desktop_config.json

{
  "mcpServers": {
    "zen-marketing": {
      "command": "/Users/ben/dev/mcp/zen-marketing/.venv/bin/python",
      "args": ["/Users/ben/dev/mcp/zen-marketing/server.py"],
      "env": {
        "GEMINI_API_KEY": "your-key",
        "DEFAULT_MODEL": "google/gemini-2.5-pro-latest",
        "FAST_MODEL": "google/gemini-2.5-flash-preview-09-2025",
        "CREATIVE_MODEL": "minimax/minimax-m2",
        "ENABLE_WEB_SEARCH": "true",
        "LOG_LEVEL": "INFO"
      }
    }
  }
}

Critical: After modifying configuration, restart Claude Desktop completely for changes to take effect.

Architecture Overview

Tool System Design

The codebase uses a two-tier tool architecture inherited from Zen MCP Server:

  1. Simple Tools (tools/simple/base.py): Single-shot request/response tools for fast iteration

    • Example: contentvariant - generates 5-25 variations in one call
    • Use ToolModelCategory.FAST_RESPONSE for quick operations
    • Inherit from SimpleTool base class
  2. Workflow Tools (tools/workflow/base.py): Multi-step systematic processes

    • Track step_number, total_steps, next_step_required
    • Maintain findings and confidence across steps
    • Example: styleguide - detect → flag → rewrite → validate
    • Use ToolModelCategory.DEEP_THINKING for complex analysis
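The step-tracking fields above can be sketched as a small record. This is a hypothetical illustration only (`WorkflowStep` and `advance` are made-up names; the real base classes live in tools/workflow/base.py):

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    # Mirrors the tracked fields described above.
    step_number: int
    total_steps: int
    next_step_required: bool
    findings: list[str] = field(default_factory=list)
    confidence: str = "low"

def advance(step: WorkflowStep, new_findings: list[str]) -> WorkflowStep:
    """Carry accumulated findings forward into the next step."""
    next_number = step.step_number + 1
    return WorkflowStep(
        step_number=next_number,
        total_steps=step.total_steps,
        next_step_required=next_number < step.total_steps,
        findings=step.findings + new_findings,
        confidence=step.confidence,
    )
```

The key property is that findings accumulate across steps rather than being recomputed, which is what lets a styleguide run carry its "detect" results into the "rewrite" phase.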

Provider System

Model providers are managed through a registry pattern (providers/registry.py):

  • Priority order: GOOGLE (Gemini) → OPENAI → XAI → DIAL → CUSTOM → OPENROUTER
  • Lazy initialization: Providers are only instantiated when first needed
  • Model categories: Tools request FAST_RESPONSE or DEEP_THINKING, registry selects best available model
  • Fallback chain: If primary provider fails, falls back to next in priority order

Key providers:

  • gemini.py - Google Gemini API (analytical work)
  • openai_compatible.py - OpenAI and compatible APIs
  • openrouter.py - Fallback for cloud models
  • custom.py - Self-hosted models

Conversation Continuity

Every tool supports continuation_id for stateful conversations:

  1. First call returns a continuation_id in response
  2. Subsequent calls include this ID to preserve context
  3. Stored in-memory with 6-hour expiration (configurable)
  4. Managed by utils/conversation_memory.py

This allows follow-up interactions like:

  • "Now check if this new draft matches the voice" (after voice analysis)
  • "Generate 10 more variations with different angles" (after initial generation)
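The round trip looks roughly like this (illustrative payloads only; field names other than `continuation_id` are hypothetical):

```python
# First call: no continuation_id yet.
first_call = {
    "tool": "contentvariant",
    "arguments": {"content": "Draft post about heat pump sizing"},
}

# The response carries a continuation_id...
first_response = {"content": "...variations...", "continuation_id": "abc123"}

# ...which the follow-up call echoes back to preserve context.
follow_up = {
    "tool": "contentvariant",
    "arguments": {
        "content": "Generate 10 more variations with different angles",
        "continuation_id": first_response["continuation_id"],
    },
}
```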

File Processing

Tools automatically handle file inputs (utils/file_utils.py):

  • Directory expansion (recursively processes all files in directory)
  • Deduplication (removes duplicate file paths)
  • Image support (screenshots, brand assets via utils/image_utils.py)
  • Path resolution (converts relative to absolute paths)

Files are included in model context, so tools can reference brand guidelines, content samples, etc.
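The expansion, deduplication, and path-resolution steps above can be sketched as one function (a hypothetical `expand_paths`; the real implementation is in utils/file_utils.py):

```python
from pathlib import Path

def expand_paths(paths: list[str]) -> list[str]:
    """Recursively expand directories, resolve relative paths, drop duplicates."""
    seen: dict[str, None] = {}  # dict preserves first-seen order
    for raw in paths:
        p = Path(raw).resolve()
        candidates = sorted(f for f in p.rglob("*") if f.is_file()) if p.is_dir() else [p]
        for f in candidates:
            seen.setdefault(str(f), None)
    return list(seen)
```

Passing both a directory and a file inside it yields each file exactly once, always as an absolute path.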

Schema Generation

Tools use a builder pattern for MCP schemas (tools/shared/schema_builders.py):

  • SchemaBuilder - Generates MCP tool schemas from Pydantic models
  • WorkflowSchemaBuilder - Specialized for workflow tools with step tracking
  • Automatic field type conversion (Pydantic → JSON Schema)
  • Shared field definitions (files, images, continuation_id, model, temperature)
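At its core the field-type conversion is a mapping from Python types to JSON Schema types. A minimal sketch of the idea (hand-rolled for illustration; `SchemaBuilder` derives this from the Pydantic models themselves):

```python
# Python type -> JSON Schema "type" keyword.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def build_schema(fields: dict[str, type], required: list[str]) -> dict:
    """Build a minimal JSON Schema object from named field types."""
    return {
        "type": "object",
        "properties": {name: {"type": PY_TO_JSON[t]} for name, t in fields.items()},
        "required": required,
    }
```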

Tool Implementation Pattern

Creating a Simple Tool

# tools/mymarketingtool.py
from pydantic import Field
from tools.shared.base_models import ToolRequest
from tools.simple.base import SimpleTool
from tools.models import ToolModelCategory
from systemprompts import MYMARKETINGTOOL_PROMPT
from config import TEMPERATURE_CREATIVE

class MyMarketingToolRequest(ToolRequest):
    content: str = Field(..., description="Content to process")
    platform: str = Field(default="linkedin", description="Target platform")

class MyMarketingTool(SimpleTool):
    def get_name(self) -> str:
        return "mymarketingtool"

    def get_description(self) -> str:
        return "Brief description shown in Claude Desktop"

    def get_system_prompt(self) -> str:
        return MYMARKETINGTOOL_PROMPT

    def get_default_temperature(self) -> float:
        return TEMPERATURE_CREATIVE

    def get_model_category(self) -> ToolModelCategory:
        return ToolModelCategory.FAST_RESPONSE

    def get_request_model(self):
        return MyMarketingToolRequest

Registering a Tool

In server.py, add to the _initialize_tools() method:

from tools.mymarketingtool import MyMarketingTool

def _initialize_tools(self) -> list[BaseTool]:
    tools = [
        ChatTool(),
        ContentVariantTool(),
        MyMarketingTool(),  # Add here
        # ... other tools
    ]
    return tools

System Prompt Structure

System prompts live in systemprompts/:

# systemprompts/mymarketingtool_prompt.py
MYMARKETINGTOOL_PROMPT = """
You are a marketing content specialist.

TASK: [Clear description of what this tool does]

OUTPUT FORMAT:
[Specify exact format - JSON, markdown, numbered list, etc.]

CONSTRAINTS:
- Character limits for platform
- Preserve brand voice
- Technical accuracy required

PROCESS:
1. Step one
2. Step two
3. Final output
"""

Import in systemprompts/__init__.py:

from .mymarketingtool_prompt import MYMARKETINGTOOL_PROMPT

Temperature Configurations

Defined in config.py for different content types:

  • TEMPERATURE_PRECISION (0.2) - Fact-checking, technical verification
  • TEMPERATURE_ANALYTICAL (0.3) - Style enforcement, SEO optimization
  • TEMPERATURE_BALANCED (0.5) - Strategic planning, guest editing
  • TEMPERATURE_CREATIVE (0.7) - Platform adaptation
  • TEMPERATURE_HIGHLY_CREATIVE (0.8) - Content variation, subject lines

Choose based on whether tool needs creativity (variations) or precision (fact-checking).
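In config.py these are plain module-level constants, e.g.:

```python
# Temperature defaults by content type (values from the list above).
TEMPERATURE_PRECISION = 0.2        # fact-checking, technical verification
TEMPERATURE_ANALYTICAL = 0.3       # style enforcement, SEO optimization
TEMPERATURE_BALANCED = 0.5         # strategic planning, guest editing
TEMPERATURE_CREATIVE = 0.7         # platform adaptation
TEMPERATURE_HIGHLY_CREATIVE = 0.8  # content variation, subject lines
```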

Platform Character Limits

Defined in config.py as PLATFORM_LIMITS:

PLATFORM_LIMITS = {
    "twitter": 280,
    "bluesky": 300,
    "linkedin": 3000,
    "linkedin_optimal": 1300,
    "instagram": 2200,
    "facebook": 500,
    "email_subject": 60,
    "email_preview": 100,
    "meta_description": 156,
    "page_title": 60,
}

Tools reference these when generating platform-specific content.
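A typical use is a pre-flight length check. The helper below is a hypothetical sketch (`fits_platform` is not a real function in the codebase; the dict values match PLATFORM_LIMITS above):

```python
PLATFORM_LIMITS = {
    "twitter": 280,
    "linkedin": 3000,
    "linkedin_optimal": 1300,
    "email_subject": 60,
}

def fits_platform(text: str, platform: str) -> bool:
    """True if the text is within the platform's character limit (or unknown platform)."""
    limit = PLATFORM_LIMITS.get(platform)
    return limit is None or len(text) <= limit
```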

Model Selection Strategy

Tools specify category, not specific model:

  • FAST_RESPONSE → Uses FAST_MODEL from config (default: gemini-flash)
  • DEEP_THINKING → Uses DEFAULT_MODEL from config (default: gemini-2.5-pro)

Users can override:

  1. Via model parameter in tool request
  2. Via environment variables (DEFAULT_MODEL, FAST_MODEL, CREATIVE_MODEL)

Default models:

  • Analytical work: google/gemini-2.5-pro-latest
  • Fast generation: google/gemini-2.5-flash-preview-09-2025
  • Creative content: minimax/minimax-m2

Debugging Tips

Tool Not Appearing

  1. Check tool is registered in server.py
  2. Verify not in DISABLED_TOOLS env var
  3. Check logs: tail -f logs/mcp_server.log
  4. Restart Claude Desktop after config changes

Model Errors

  1. Verify API key in .env file
  2. Check provider supports requested model
  3. Look for provider errors in logs
  4. Test with explicit model name override

Response Issues

  1. Check system prompt specifies output format clearly
  2. Verify response doesn't exceed token limits
  3. Review logs for truncation warnings
  4. Test with simpler input first

Conversation Context Lost

  1. Verify continuation_id passed correctly
  2. Check conversation hasn't expired (6 hours default)
  3. Look for memory errors in logs: grep "continuation_id" logs/mcp_server.log

Project Structure

Key directories:

  • server.py - MCP server implementation, tool registration
  • config.py - Configuration constants, temperature defaults, platform limits
  • tools/ - Tool implementations (simple/ and workflow/ subdirs)
  • providers/ - AI model provider implementations
  • systemprompts/ - System prompts for each tool
  • utils/ - Shared utilities (file handling, conversation memory, image processing)
  • logs/ - Server logs (gitignored)

Marketing-Specific Context

Writing Style Rules

From project memories, tools should enforce:

  • No em-dashes (use periods or semicolons)
  • No "This isn't X, it's Y" constructions
  • Direct affirmative statements over negations
  • Semantic variety in paragraph openings
  • Concrete metrics over abstract claims
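The mechanical rules above lend themselves to regex checks. A hypothetical linter sketch (the actual enforcement happens in the styleguide workflow tool; these patterns are illustrative, not exhaustive):

```python
import re

RULES = {
    "em-dash": re.compile("\u2014"),  # literal em-dash character
    "not-x-but-y": re.compile(r"\bisn't\s+\w+[^.]*,\s*it's\b", re.IGNORECASE),
}

def style_violations(text: str) -> list[str]:
    """Return the names of style rules the text violates."""
    return [name for name, pattern in RULES.items() if pattern.search(text)]
```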

Testing Angles for Variations

Common psychological angles for A/B testing:

  • Technical curiosity
  • Contrarian/provocative
  • Knowledge gap emphasis
  • Urgency/timeliness
  • Insider knowledge
  • Problem-solution framing
  • Before-after transformation
  • Social proof/credibility
  • FOMO (fear of missing out)
  • Educational value

Platform Best Practices

  • LinkedIn: 1300 chars optimal (3000 max), professional tone
  • Twitter/Bluesky: 280 chars, conversational, high engagement hooks
  • Email subject: 60 chars, action-oriented, clear value prop
  • Instagram: 2200 chars, visual storytelling, emojis appropriate
  • Blog/WordPress: SEO-optimized titles (<60 chars), meta descriptions (<156 chars)

Key Differences from Zen MCP Server

  1. Removed tools: debug, codereview, refactor, testgen, secaudit, docgen, tracer, precommit
  2. Added tools: contentvariant, platformadapt, subjectlines, styleguide, seooptimize, guestedit, linkstrategy, factcheck
  3. Kept tools: chat, thinkdeep, planner (useful for marketing strategy)
  4. New focus: Content variation, platform adaptation, voice preservation
  5. Model preference: Minimax for creative generation, Gemini for analytical work

Current Implementation Status

Completed:

  • Core architecture from Zen MCP Server
  • Provider system (Gemini, OpenAI, OpenRouter)
  • Tool base classes (SimpleTool, WorkflowTool)
  • Conversation continuity system
  • File processing utilities
  • Basic tools: chat, contentvariant, listmodels, version

In Progress:

  • Additional simple tools (platformadapt, subjectlines, factcheck)
  • Workflow tools (styleguide, seooptimize, guestedit, linkstrategy)
  • Minimax provider configuration
  • Advanced features (voiceanalysis, campaignmap)

See PLAN.md for detailed implementation roadmap.

Git Workflow

Commit signature: Ben Reed <ben@tealmaker.com> (not Claude Code)

Commit frequency: After a reasonable batch of changes (not after every small change)

Resources