
Building a Local MCP Server for Bucketeer Documentation

Naoki Kuroda · 5 min read

As feature flag management becomes increasingly crucial for modern software development, having quick access to comprehensive documentation can make the difference between smooth deployments and troubleshooting headaches. Today, I'm excited to share how I built a locally running Model Context Protocol (MCP) server that brings Bucketeer's entire documentation directly into AI assistants like Claude and Cursor.

Why Build a Documentation MCP Server?

Working with feature flag platforms like Bucketeer often involves frequent documentation lookups. Whether you're implementing SDK integrations, configuring targeting rules, or troubleshooting evaluation logic, developers constantly need to reference official documentation. The traditional workflow involves:

  1. Switching between your IDE and browser
  2. Searching through documentation sites
  3. Finding relevant sections across multiple pages
  4. Copying code examples and configuration snippets

This context-switching disrupts the development flow and slows down productivity. By building an MCP server that integrates Bucketeer's documentation directly into AI assistants, developers can get instant, contextual answers without leaving their development environment.

Technical Implementation Highlights

Architecture Overview

The MCP server consists of a few key components working together: a documentation indexer that fetches content from the Bucketeer docs repository on GitHub, a local cache with SHA-based change detection, a weighted keyword search engine tuned for feature flag terminology, and the MCP tool layer that exposes search and retrieval to AI assistants.
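
To make this concrete, here is a minimal sketch of how these pieces can be wired together using the official MCP TypeScript SDK. The searchDocs and getDocument helpers and the ./docs-index.js module are illustrative placeholders for the index-backed logic described below, not the actual implementation:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Placeholder helpers standing in for the index-backed search and cache lookup
import { searchDocs, getDocument } from "./docs-index.js";

const server = new McpServer({ name: "bucketeer-docs", version: "1.0.0" });

// Weighted keyword search over the locally cached documentation
server.tool(
  "search_docs",
  { query: z.string(), limit: z.number().optional() },
  async ({ query, limit }) => ({
    content: [{ type: "text" as const, text: JSON.stringify(await searchDocs(query, limit ?? 5)) }],
  })
);

// Retrieve a single document from the local cache by its repository path
server.tool(
  "get_document",
  { path: z.string() },
  async ({ path }) => ({
    content: [{ type: "text" as const, text: await getDocument(path) }],
  })
);

// Communicate with the AI assistant over stdio
await server.connect(new StdioServerTransport());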

Implementation Challenges and Solutions

Challenge 1: GitHub API Rate Limits

When fetching files from the Bucketeer documentation repository, GitHub's API rate limits can interrupt the indexing process.

Solution: Implemented smart batching with delays between requests and exponential backoff for failed requests:

const batchSize = 3; // Use smaller batch sizes to respect API limits
for (let i = 0; i < filesToProcess.length; i += batchSize) {
  const batch = filesToProcess.slice(i, i + batchSize);

  // Process batch with Promise.allSettled for error resilience
  // (processFile stands in for the per-file fetch-and-index step)
  await Promise.allSettled(batch.map((file) => processFile(file)));

  // Add delay to avoid rate limits
  if (i + batchSize < filesToProcess.length) {
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
}
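
The exponential backoff mentioned in the solution isn't shown in the loop above. One way to layer it on is a small retry wrapper, sketched here with a hypothetical fetchFileContent standing in for the actual GitHub API call:

// Retry an API call with exponential backoff between attempts
async function withBackoff<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxRetries) throw error;
      // Wait 1s, 2s, 4s, ... before the next attempt
      const delayMs = 1000 * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage inside the batch loop (fetchFileContent is a placeholder name):
// const content = await withBackoff(() => fetchFileContent(file.path));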

Challenge 2: Search Relevance for Feature Flag Content

Creating a search system that understands Bucketeer's domain-specific terminology and provides relevant results was crucial for user experience.

Solution: Implemented a multi-layered search algorithm that prioritizes Bucketeer-specific terms and uses weighted scoring:


// Prioritize Bucketeer-specific terminology
const technicalTerms = new Set([
  'feature', 'flag', 'bucket', 'targeting', 'segment', 'variation', 'rollout',
  'experiment', 'api', 'sdk', 'environment', 'evaluation', 'event', 'goal'
]);

// Multi-layered scoring system
searchTerms.forEach(term => {
  const titleCount = (titleLower.match(new RegExp(term, 'g')) || []).length;
  const contentCount = (contentLower.match(new RegExp(term, 'g')) || []).length;

  score += titleCount * 3 + contentCount * 1; // Weight titles higher
});

The search algorithm uses a sophisticated scoring system:

  • Exact keyword matches in titles (3x weight): Prioritizes documents where search terms appear in titles, as these are typically most relevant
  • Keyword matches in content (1x weight): Includes matches found within document content for comprehensive coverage
  • Full-text search for broader coverage: Falls back to full-text search when keyword matching doesn't yield sufficient results, ensuring users always get helpful answers (a rough sketch follows)
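
The full-text fallback isn't shown in the scoring snippet above. As a rough illustration of the idea only (the document shape and function name here are assumptions, not the actual code):

// Hypothetical fallback: if weighted keyword scoring returns too few hits,
// scan the cached documents for the raw query string instead.
function fullTextFallback(
  docs: { path: string; title: string; content: string }[],
  query: string,
  limit: number
) {
  const needle = query.toLowerCase();
  return docs
    .filter((doc) => doc.content.toLowerCase().includes(needle))
    .slice(0, limit)
    .map((doc) => ({ path: doc.path, title: doc.title }));
}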

Challenge 3: Efficient Documentation Synchronization

The local documentation cache needs to stay in sync with the remote GitHub repository while minimizing unnecessary network requests and processing overhead.

Solution: Implemented SHA-based change detection with an intelligent caching strategy:

// SHA-based change detection
private filterFilesToUpdate(files: GithubApiContent[], cache: CacheData): GithubApiContent[] {
  return files.filter((file) => {
    const cachedSha = cache.urls[file.path];
    return !cachedSha || cachedSha !== file.sha; // Only process if SHA changed
  });
}

// Content hash verification for local files
const currentHash = generateHash(contentStr);
if (await fileExists(filePath)) {
  const existingContent = await readFile(filePath);
  const existingHash = generateHash(existingContent);
  modified = currentHash !== existingHash;
}

// Recursive directory traversal with error resilience
for (const item of contents) {
  if (item.type === 'dir') {
    try {
      const subFiles = await this.listRepositoryFiles(item.path);
      files = files.concat(subFiles);
    } catch (error) {
      console.error(`Error fetching subdirectory ${item.path}:`, error.message);
    }
  }
}
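
The hash-verification snippet above relies on a few small helpers (generateHash, fileExists, readFile). A minimal sketch of how they might look using Node's built-in modules, though the actual implementation may differ:

import { createHash } from "node:crypto";
import { access, readFile as fsReadFile } from "node:fs/promises";

// Content hash used to detect whether a cached file has changed
function generateHash(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

// True if the cached file already exists on disk
async function fileExists(filePath: string): Promise<boolean> {
  try {
    await access(filePath);
    return true;
  } catch {
    return false;
  }
}

// Read a cached documentation file as UTF-8 text
async function readFile(filePath: string): Promise<string> {
  return fsReadFile(filePath, "utf-8");
}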

Available Tools and Usage

The MCP server exposes three specialized tools:

1. Documentation Search

// Example usage in AI assistant
{
  "name": "search_docs",
  "arguments": {
    "query": "SDK integration authentication",
    "limit": 5
  }
}

2. Specific Document Retrieval

// Example usage in AI assistant
{
  "name": "get_document",
  "arguments": {
    "path": "getting-started/create-feature-flag"
  }
}
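
Under the hood, get_document can simply resolve the requested path against the locally cached documentation files. A rough sketch, where the cache directory and the .md extension are assumptions about the layout rather than confirmed details:

import { join } from "node:path";
import { readFile } from "node:fs/promises";

// Assumed location of the locally cached documentation files
const DOCS_CACHE_DIR = "./docs-cache";

// Resolve a repository-style path like "getting-started/create-feature-flag"
// to its cached markdown file and return the contents.
async function getDocument(docPath: string): Promise<string> {
  const filePath = join(DOCS_CACHE_DIR, `${docPath}.md`);
  return readFile(filePath, "utf-8");
}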

Real-world Usage Example

Here's how developers can interact with Bucketeer documentation through their AI assistant:

Developer: "How do I enable local evaluation in the Go SDK?"

AI Assistant: Let me search the Bucketeer documentation for information about local evaluation in the Go SDK.

[Called MCP tool search_docs]

Based on the Bucketeer documentation, here's how to enable local evaluation in the Go SDK:

ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
client, err := bucketeer.NewSDK(
    ctx,
    bucketeer.WithAPIKey("YOUR_API_KEY"),
    bucketeer.WithHost("YOUR_API_ENDPOINT"),
    bucketeer.WithTag("YOUR_FEATURE_TAG"),
    bucketeer.WithEnableLocalEvaluation(true),
    bucketeer.WithCachePollingInterval(10*time.Minute),
)
if err != nil {
    log.Fatalf("Failed to initialize the new client: %v", err)
}

Getting Started

Setting up the Bucketeer Docs MCP Server is straightforward:

  1. Build the index:

npx @bucketeer/docs-local-mcp-server build-index

  2. Configure your AI assistant:

{
  "mcpServers": {
    "bucketeer-docs": {
      "type": "stdio",
      "command": "npx",
      "args": ["@bucketeer/docs-local-mcp-server"]
    }
  }
}

  3. Start using: The server provides instant access to Bucketeer's complete documentation through your AI assistant.

Future Enhancements

Looking ahead, potential improvements include:

  • Advanced Search Filters: Document type filtering (API reference, tutorials, guides), date-based filtering, and tag-based categorization for more precise search results
  • Enhanced Caching: More sophisticated caching strategies for better performance

Conclusion

This MCP server represents our commitment to improving the developer experience for Bucketeer users. By bringing Bucketeer's comprehensive documentation directly into AI assistants, we want to eliminate friction and make feature flag management more accessible than ever.

The server is open-source and ready for the Bucketeer community to use. Whether you're implementing your first feature flag with Bucketeer or scaling complex experimentation workflows, having instant access to our documentation makes the entire development process smoother and more efficient.

📁 GitHub Repository: https://github.com/bucketeer-io/bucketeer-docs-local-mcp-server

Try it out and let us know your feedback!