# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Zen-Marketing is an MCP server for Claude Desktop providing AI-powered marketing tools. It's a fork of Zen MCP Server, specialized for marketing workflows rather than software development.
Key distinction: This server generates content variations, enforces writing styles, and optimizes for platforms (LinkedIn, newsletters, WordPress) - not code review or debugging.
Target user: Solo marketing professionals managing technical B2B content (HVAC, SaaS, technical education).
## Essential Commands

### Setup and Running

```bash
# Initial setup
./run-server.sh

# Manual setup
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
# Edit .env with API keys

# Run server
python server.py

# Run with debug logging
LOG_LEVEL=DEBUG python server.py
```
### Testing and Development

```bash
# Watch logs during development
tail -f logs/mcp_server.log

# Filter logs for a specific tool
tail -f logs/mcp_server.log | grep -E "(TOOL_CALL|ERROR|contentvariant)"

# Test with Claude Desktop after changes
# Restart Claude Desktop to reload the MCP server configuration
```
### Claude Desktop Configuration

Configuration file: `~/.claude.json` or `~/Library/Application Support/Claude/claude_desktop_config.json`

```json
{
  "mcpServers": {
    "zen-marketing": {
      "command": "/Users/ben/dev/mcp/zen-marketing/.venv/bin/python",
      "args": ["/Users/ben/dev/mcp/zen-marketing/server.py"],
      "env": {
        "GEMINI_API_KEY": "your-key",
        "DEFAULT_MODEL": "google/gemini-2.5-pro-latest",
        "FAST_MODEL": "google/gemini-2.5-flash-preview-09-2025",
        "CREATIVE_MODEL": "minimax/minimax-m2",
        "ENABLE_WEB_SEARCH": "true",
        "LOG_LEVEL": "INFO"
      }
    }
  }
}
```
Critical: After modifying configuration, restart Claude Desktop completely for changes to take effect.
## Architecture Overview

### Tool System Design

The codebase uses a two-tier tool architecture inherited from Zen MCP Server:

- **Simple Tools** (`tools/simple/base.py`): Single-shot request/response tools for fast iteration
  - Example: `contentvariant` generates 5-25 variations in one call
  - Use `ToolModelCategory.FAST_RESPONSE` for quick operations
  - Inherit from the `SimpleTool` base class
- **Workflow Tools** (`tools/workflow/base.py`): Multi-step systematic processes
  - Track `step_number`, `total_steps`, `next_step_required`
  - Maintain `findings` and `confidence` across steps
  - Example: `styleguide` - detect → flag → rewrite → validate
  - Use `ToolModelCategory.DEEP_THINKING` for complex analysis
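As a rough illustration of the step-tracking fields a workflow request carries (the `WorkflowStepRequest` name is hypothetical; the real base model lives in `tools/workflow/base.py` and may define more fields and validators):

```python
# Illustrative sketch only: shows the step-tracking fields described above.
from pydantic import BaseModel, Field


class WorkflowStepRequest(BaseModel):
    step_number: int = Field(..., description="Current step in the workflow (1-based)")
    total_steps: int = Field(..., description="Planned number of steps")
    next_step_required: bool = Field(..., description="Whether another step should follow")
    findings: str = Field(default="", description="Findings accumulated across steps")
    confidence: str = Field(default="low", description="Confidence in the findings so far")
```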
### Provider System

Model providers are managed through a registry pattern (`providers/registry.py`):

- Priority order: GOOGLE (Gemini) → OPENAI → XAI → DIAL → CUSTOM → OPENROUTER
- Lazy initialization: Providers are only instantiated when first needed
- Model categories: Tools request `FAST_RESPONSE` or `DEEP_THINKING`; the registry selects the best available model
- Fallback chain: If the primary provider fails, the registry falls back to the next in priority order

Key providers:

- `gemini.py` - Google Gemini API (analytical work)
- `openai_compatible.py` - OpenAI and compatible APIs
- `openrouter.py` - Fallback for cloud models
- `custom.py` - Self-hosted models
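As a rough mental model of that selection logic (function and variable names here are illustrative, not the registry's actual API):

```python
# Simplified sketch of category-based model selection with priority fallback.
# The real logic lives in providers/registry.py; names here are illustrative.
PROVIDER_PRIORITY = ["google", "openai", "xai", "dial", "custom", "openrouter"]


def select_model(category: str, available: dict) -> str:
    """Return the first model matching the category, walking providers in priority order.

    `available` maps provider name -> {category: model name} for providers
    whose API keys are configured.
    """
    for provider in PROVIDER_PRIORITY:
        models = available.get(provider, {})
        if category in models:
            return models[category]
    raise RuntimeError(f"No provider available for category {category!r}")


# Example: FAST_RESPONSE resolves to the configured Gemini flash model.
print(select_model("FAST_RESPONSE", {"google": {"FAST_RESPONSE": "google/gemini-2.5-flash-preview-09-2025"}}))
```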
### Conversation Continuity

Every tool supports a `continuation_id` for stateful conversations:

- First call returns a `continuation_id` in the response
- Subsequent calls include this ID to preserve context
- Stored in memory with a 6-hour expiration (configurable)
- Managed by `utils/conversation_memory.py`
This allows follow-up interactions like:
- "Now check if this new draft matches the voice" (after voice analysis)
- "Generate 10 more variations with different angles" (after initial generation)
### File Processing

Tools automatically handle file inputs (`utils/file_utils.py`):

- Directory expansion (recursively processes all files in a directory)
- Deduplication (removes duplicate file paths)
- Image support (screenshots, brand assets via `utils/image_utils.py`)
- Path resolution (converts relative to absolute paths)
Files are included in model context, so tools can reference brand guidelines, content samples, etc.
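A simplified sketch of the expansion and deduplication steps (names and edge-case handling in `utils/file_utils.py` will differ):

```python
# Illustrative only: recursive directory expansion, absolute-path resolution,
# and deduplication, as described above. Not the actual utils/file_utils.py code.
from pathlib import Path


def expand_paths(paths: list[str]) -> list[str]:
    seen: set[str] = set()
    result: list[str] = []
    for raw in paths:
        path = Path(raw).resolve()
        files = sorted(p for p in path.rglob("*") if p.is_file()) if path.is_dir() else [path]
        for file in files:
            key = str(file)
            if key not in seen:
                seen.add(key)
                result.append(key)
    return result
```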
### Schema Generation

Tools use a builder pattern for MCP schemas (`tools/shared/schema_builders.py`):

- `SchemaBuilder` - Generates MCP tool schemas from Pydantic models
- `WorkflowSchemaBuilder` - Specialized for workflow tools with step tracking
- Automatic field type conversion (Pydantic → JSON Schema)
- Shared field definitions (`files`, `images`, `continuation_id`, `model`, `temperature`)
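Under the hood this is the standard Pydantic-to-JSON-Schema conversion; assuming Pydantic v2, the raw output looks like the following, and conceptually `SchemaBuilder` produces schemas like this plus the shared fields (the `ExampleRequest` model is purely illustrative):

```python
# Pydantic v2's built-in JSON Schema export, shown to illustrate the
# Pydantic -> JSON Schema field conversion. ExampleRequest is illustrative only.
from pydantic import BaseModel, Field


class ExampleRequest(BaseModel):
    content: str = Field(..., description="Content to process")
    platform: str = Field(default="linkedin", description="Target platform")


print(ExampleRequest.model_json_schema())
# {'properties': {'content': {...}, 'platform': {...}}, 'required': ['content'], ...}
```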
## Tool Implementation Pattern

### Creating a Simple Tool

```python
# tools/mymarketingtool.py
from pydantic import Field
from tools.shared.base_models import ToolRequest
from tools.simple.base import SimpleTool
from tools.models import ToolModelCategory
from systemprompts import MYMARKETINGTOOL_PROMPT
from config import TEMPERATURE_CREATIVE


class MyMarketingToolRequest(ToolRequest):
    content: str = Field(..., description="Content to process")
    platform: str = Field(default="linkedin", description="Target platform")


class MyMarketingTool(SimpleTool):
    def get_name(self) -> str:
        return "mymarketingtool"

    def get_description(self) -> str:
        return "Brief description shown in Claude Desktop"

    def get_system_prompt(self) -> str:
        return MYMARKETINGTOOL_PROMPT

    def get_default_temperature(self) -> float:
        return TEMPERATURE_CREATIVE

    def get_model_category(self) -> ToolModelCategory:
        return ToolModelCategory.FAST_RESPONSE

    def get_request_model(self):
        return MyMarketingToolRequest
```
### Registering a Tool

In `server.py`, add the tool to the `_initialize_tools()` method:

```python
from tools.mymarketingtool import MyMarketingTool

def _initialize_tools(self) -> list[BaseTool]:
    tools = [
        ChatTool(),
        ContentVariantTool(),
        MyMarketingTool(),  # Add here
        # ... other tools
    ]
    return tools
```
### System Prompt Structure

System prompts live in `systemprompts/`:

```python
# systemprompts/mymarketingtool_prompt.py
MYMARKETINGTOOL_PROMPT = """
You are a marketing content specialist.
TASK: [Clear description of what this tool does]
OUTPUT FORMAT:
[Specify exact format - JSON, markdown, numbered list, etc.]
CONSTRAINTS:
- Character limits for platform
- Preserve brand voice
- Technical accuracy required
PROCESS:
1. Step one
2. Step two
3. Final output
"""
```
Import it in `systemprompts/__init__.py`:

```python
from .mymarketingtool_prompt import MYMARKETINGTOOL_PROMPT
```
## Temperature Configurations

Defined in `config.py` for different content types:

- `TEMPERATURE_PRECISION` (0.2) - Fact-checking, technical verification
- `TEMPERATURE_ANALYTICAL` (0.3) - Style enforcement, SEO optimization
- `TEMPERATURE_BALANCED` (0.5) - Strategic planning, guest editing
- `TEMPERATURE_CREATIVE` (0.7) - Platform adaptation
- `TEMPERATURE_HIGHLY_CREATIVE` (0.8) - Content variation, subject lines
Choose based on whether tool needs creativity (variations) or precision (fact-checking).
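As a sketch, these map to plain module-level constants in `config.py` (values taken from the list above; check `config.py` for the authoritative definitions):

```python
# Temperature constants as described above; comments note typical use cases.
TEMPERATURE_PRECISION = 0.2        # fact-checking, technical verification
TEMPERATURE_ANALYTICAL = 0.3       # style enforcement, SEO optimization
TEMPERATURE_BALANCED = 0.5         # strategic planning, guest editing
TEMPERATURE_CREATIVE = 0.7         # platform adaptation
TEMPERATURE_HIGHLY_CREATIVE = 0.8  # content variation, subject lines
```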
## Platform Character Limits

Defined in `config.py` as `PLATFORM_LIMITS`:

```python
PLATFORM_LIMITS = {
    "twitter": 280,
    "bluesky": 300,
    "linkedin": 3000,
    "linkedin_optimal": 1300,
    "instagram": 2200,
    "facebook": 500,
    "email_subject": 60,
    "email_preview": 100,
    "meta_description": 156,
    "page_title": 60,
}
```
Tools reference these when generating platform-specific content.
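For example, a tool might validate generated copy against the limit before returning it; the helper below is hypothetical and only illustrates the lookup:

```python
# Hypothetical helper; PLATFORM_LIMITS is the real dict from config.py,
# but within_limit() is illustrative only.
from config import PLATFORM_LIMITS


def within_limit(text: str, platform: str) -> bool:
    """Return True if the text fits the platform's character limit (or has no limit)."""
    limit = PLATFORM_LIMITS.get(platform)
    return limit is None or len(text) <= limit
```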
## Model Selection Strategy

Tools specify a category, not a specific model:

- `FAST_RESPONSE` → Uses `FAST_MODEL` from config (default: gemini-flash)
- `DEEP_THINKING` → Uses `DEFAULT_MODEL` from config (default: gemini-2.5-pro)
Users can override:

- Via the `model` parameter in the tool request
- Via environment variables (`DEFAULT_MODEL`, `FAST_MODEL`, `CREATIVE_MODEL`)
Default models:

- Analytical work: `google/gemini-2.5-pro-latest`
- Fast generation: `google/gemini-2.5-flash-preview-09-2025`
- Creative content: `minimax/minimax-m2`
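A rough sketch of how category resolution ties back to these environment variables (the mapping below is illustrative; the actual resolution happens in the provider registry):

```python
# Illustrative mapping from tool category to the environment variable that
# overrides it, with the documented defaults as fallbacks.
import os

CATEGORY_ENV = {
    "FAST_RESPONSE": ("FAST_MODEL", "google/gemini-2.5-flash-preview-09-2025"),
    "DEEP_THINKING": ("DEFAULT_MODEL", "google/gemini-2.5-pro-latest"),
}


def model_for(category: str) -> str:
    env_var, default = CATEGORY_ENV[category]
    return os.environ.get(env_var, default)
```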
## Debugging Tips

### Tool Not Appearing

- Check the tool is registered in `server.py`
- Verify it is not in the `DISABLED_TOOLS` env var
- Check the logs: `tail -f logs/mcp_server.log`
- Restart Claude Desktop after config changes
### Model Errors

- Verify the API key in the `.env` file
- Check the provider supports the requested model
- Look for provider errors in the logs
- Test with an explicit model name override
### Response Issues
- Check system prompt specifies output format clearly
- Verify response doesn't exceed token limits
- Review logs for truncation warnings
- Test with simpler input first
### Conversation Context Lost

- Verify `continuation_id` is passed correctly
- Check the conversation hasn't expired (6 hours default)
- Look for memory errors in the logs: `grep "continuation_id" logs/mcp_server.log`
## Project Structure

Key files and directories:

- `server.py` - MCP server implementation, tool registration
- `config.py` - Configuration constants, temperature defaults, platform limits
- `tools/` - Tool implementations (`simple/` and `workflow/` subdirs)
- `providers/` - AI model provider implementations
- `systemprompts/` - System prompts for each tool
- `utils/` - Shared utilities (file handling, conversation memory, image processing)
- `logs/` - Server logs (gitignored)
## Marketing-Specific Context

### Writing Style Rules
From project memories, tools should enforce:
- No em-dashes (use periods or semicolons)
- No "This isn't X, it's Y" constructions
- Direct affirmative statements over negations
- Semantic variety in paragraph openings
- Concrete metrics over abstract claims
### Testing Angles for Variations
Common psychological angles for A/B testing:
- Technical curiosity
- Contrarian/provocative
- Knowledge gap emphasis
- Urgency/timeliness
- Insider knowledge
- Problem-solution framing
- Before-after transformation
- Social proof/credibility
- FOMO (fear of missing out)
- Educational value
### Platform Best Practices
- LinkedIn: 1300 chars optimal (3000 max), professional tone
- Twitter/Bluesky: 280 chars, conversational, high engagement hooks
- Email subject: 60 chars, action-oriented, clear value prop
- Instagram: 2200 chars, visual storytelling, emojis appropriate
- Blog/WordPress: SEO-optimized titles (<60 chars), meta descriptions (<156 chars)
## Key Differences from Zen MCP Server
- Removed tools: debug, codereview, refactor, testgen, secaudit, docgen, tracer, precommit
- Added tools: contentvariant, platformadapt, subjectlines, styleguide, seooptimize, guestedit, linkstrategy, factcheck
- Kept tools: chat, thinkdeep, planner (useful for marketing strategy)
- New focus: Content variation, platform adaptation, voice preservation
- Model preference: Minimax for creative generation, Gemini for analytical work
## Current Implementation Status
Completed:
- Core architecture from Zen MCP Server
- Provider system (Gemini, OpenAI, OpenRouter)
- Tool base classes (SimpleTool, WorkflowTool)
- Conversation continuity system
- File processing utilities
- Basic tools: chat, contentvariant, listmodels, version
In Progress:
- Additional simple tools (platformadapt, subjectlines, factcheck)
- Workflow tools (styleguide, seooptimize, guestedit, linkstrategy)
- Minimax provider configuration
- Advanced features (voiceanalysis, campaignmap)
See PLAN.md for detailed implementation roadmap.
## Git Workflow
Commit signature: Ben Reed <ben@tealmaker.com> (not Claude Code)

Commit frequency: After a reasonable batch of updates (not after every small change)
## Resources
- MCP Protocol: https://modelcontextprotocol.com
- Zen MCP Server (parent project): https://github.com/BeehiveInnovations/zen-mcp-server
- Claude Desktop download: https://claude.ai/download
- Project planning: See PLAN.md for tool designs and implementation phases
- User documentation: See README.md for end-user features