mmcps · anonymized llm gateway

Introduction

MMCPS (Make MCP Safe) is an open-source privacy-preserving proxy that sits between your employees and any external LLM. It automatically anonymizes sensitive information before it leaves your network, validates the prompt, forwards it to the external LLM, and restores the original values before returning the response — so the LLM never sees the real data.
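As a rough sketch of that round trip, the core idea is to swap each sensitive value for a placeholder, keep the mapping locally, and substitute the real values back into the reply. The helper names below are hypothetical, and a single email regex stands in for the Presidio-based detection MMCPS actually uses:

```python
import re

# Simplified stand-in for MMCPS's Presidio-based detector:
# a single regex that finds email addresses only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text):
    """Replace each sensitive value with a placeholder; keep the mapping locally."""
    mapping = {}
    def repl(match):
        token = f"<EMAIL_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return EMAIL.sub(repl, text), mapping

def restore(text, mapping):
    """Put the original values back into the LLM's reply."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

prompt = "Summarize the complaint from jane.doe@example.com"
safe_prompt, mapping = anonymize(prompt)
# safe_prompt is all the external LLM ever sees:
# "Summarize the complaint from <EMAIL_0>"

# Stub for the external LLM call; its reply references only the placeholder.
reply = f"Draft a response to {list(mapping)[0]} apologizing for the delay."
final = restore(reply, mapping)  # the user sees the real address again
```

The real pipeline adds guard-rule validation before forwarding and a response check before restoring, but the placeholder round trip above is the mechanism that keeps the original data on your network.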

Background

LLMs from major AI providers have become a natural part of how people work — writing, summarizing, debugging, drafting. The problem is that using them means sending your data to someone else's servers.

This happens more than most organizations realize:

  • According to IBM's 2025 Cost of a Data Breach Report, 97% of AI-related security incidents involved AI systems that lacked proper access controls, and most affected organizations had no governance policies in place to manage AI usage.
  • In 2025, a contractor working with the New South Wales Reconstruction Authority uploaded a spreadsheet containing personal information from around 3,000 flood victims — including names, contact details, and health information — to ChatGPT while reviewing disaster recovery applications. The incident triggered a government investigation and new AI usage policies, as reported by Breach Secure Now.
  • According to Reco's AI and Cloud Security Breaches 2025 report, shadow AI-related incidents cost organizations an average of $670,000 more than traditional breaches, affect roughly one in five organizations, and take an average of 247 days to detect.

Why We Built MMCPS

Organizations have responded in two ways, and neither works well. Banning LLM tools pushes employees toward personal devices, eliminating any organizational visibility. Allowing unrestricted access exposes sensitive data to third-party servers, creating compliance risks under GDPR, HIPAA, and internal confidentiality policies.

Tools like Microsoft Presidio exist to help — they detect and anonymize PII in text before it is sent anywhere. But Presidio is a developer SDK. It requires integration work, offers no way to validate that the LLM's response does not reconstruct the original data, and provides no interface that a non-technical employee can actually use.

Why MMCPS

  • Anonymizes sensitive input using Microsoft Presidio before anything leaves your network
  • Validates the prompt with guard rules before forwarding to the external LLM
  • Checks the LLM response to ensure no sensitive content was reconstructed
  • Restores original values and returns a coherent reply to the user
  • Provides a web interface that any employee can use directly, no technical setup required
  • Fully open source and self-hostable, keeping the entire pipeline on your own machine
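The response check in the list above boils down to verifying that none of the values anonymized out of the prompt reappear in the LLM's reply. A minimal sketch, assuming a hypothetical check_response helper and a placeholder-to-original mapping (the real guard rules are richer than a substring scan):

```python
def check_response(reply: str, mapping: dict) -> bool:
    """Return True only if the LLM reply contains none of the original
    sensitive values that were replaced by placeholders in the prompt.
    `mapping` maps placeholder tokens to the original values."""
    lowered = reply.lower()
    return not any(original.lower() in lowered for original in mapping.values())

mapping = {"<PERSON_0>": "Jane Doe", "<EMAIL_0>": "jane.doe@example.com"}
check_response("Here is a summary for <PERSON_0>.", mapping)  # safe to restore
check_response("Jane Doe's claim was approved.", mapping)     # leak: block it
```

A reply that fails this check is rejected rather than restored, so reconstructed sensitive content never reaches the user unreviewed.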

Get Started

There are two ways to try MMCPS:

  • Try the hosted playground — no setup required. Head straight to Anonymized Chat to anonymize and send text prompts, or Anonymized Image & File to process images. Your data passes through our servers during the anonymization step.
  • Self-host on your own machine — if keeping data entirely on-premises is a requirement, MMCPS can be run locally. See the Getting Started page for setup instructions.