Why We’re Not Using MCP (Yet) — and Why That’s a Good Thing
During New York Tech Week, our CEO Arina Curtis was on the ground talking to founders, builders, and a fair number of curious onlookers. And one question kept coming up:
“So… what’s your take on MCP? Using it? Planning to use it?”
For those not neck-deep in the AI protocol weeds, MCP (Model Context Protocol) is Anthropic’s open standard for connecting LLM agents to external tools and data—think smarter APIs for smarter agents. It’s the hot new acronym. Agents + MCP is 2025’s peanut butter and jelly.
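Under the hood, MCP is a JSON-RPC protocol: a client discovers a server’s tools, then invokes one by name. Here’s a rough sketch of what a tool call looks like on the wire—the `run_query` tool and its arguments are invented for illustration, not part of the spec:

```python
import json

# An MCP tool invocation is a JSON-RPC 2.0 request using the
# "tools/call" method. The "run_query" tool and its arguments
# are hypothetical, purely to show the shape of the message.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_query",
        "arguments": {"sql": "SELECT COUNT(*) FROM events"},
    },
}

# Serialize for transport (stdio or HTTP, depending on the server).
payload = json.dumps(request)
```

The appeal is that any MCP-aware agent can call any MCP server’s tools without bespoke glue code for each integration.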
So, are we jumping on the bandwagon?
As CTO, my first instinct was to look at it through the lens of our existing systems and priorities. Yes, we could support MCP. We’ve even tossed around a few interesting ideas internally. But our north star hasn’t changed: building the best analytics tools on the market. Shiny protocol wrappers are fun, but speed, clarity, and insight still win.
And to be honest, I defaulted to my usual conservative approach:
“Let’s trial it internally, dogfood it, and see if it earns its place.”
Old habits die hard—I’m ex-Microsoft, where the RAID bug-tracking system taught us one thing: eat your own dogfood before handing it to customers.
But as we dug deeper, the truth became clear.
The reason we’ve often built our own internal tooling isn’t because we like reinventing the wheel. It’s because we’re solving a very specific kind of problem.
Take our Compute Engine, for example. It was born out of frustration—waiting 10 minutes for Sisense dashboards to load while burning $20k/month on BigQuery and paying for extra caching. (True story.) I had no patience, so we built something faster. Way faster.
We didn’t just duct-tape faster dashboards together—we engineered a solution from the ground up. And now? I dream of one day going back, ripping out the old stack, and benchmarking the load times we could hit today. Spoiler: it’d be fast.
And MCP? We’ve got something similar. But instead of a sprawl of standalone microtools, we use a structured, dynamic system that adapts to the task, keeps context windows lean, and batches LLM requests efficiently. That means lower inference costs and faster results.
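To give a rough sense of why that matters: one tool call per LLM round trip burns tokens on repeated context, while related questions can share a single trimmed context in one request. This is a toy sketch of the idea, not our actual engine—the character budget stands in for a real token budget, and the prompt format is made up:

```python
# Toy sketch: trim context to a budget, then batch several
# questions into one LLM request instead of N round trips.
# Illustrative only; not DataGPT's production system.

MAX_CONTEXT_CHARS = 2000  # stand-in for a real token budget


def trim_context(snippets: list[str], budget: int = MAX_CONTEXT_CHARS) -> str:
    """Keep the most recent snippets that fit within the budget."""
    kept: list[str] = []
    used = 0
    for snippet in reversed(snippets):  # newest first
        if used + len(snippet) > budget:
            break
        kept.append(snippet)
        used += len(snippet)
    return "\n".join(reversed(kept))  # restore original order


def batch_questions(context: str, questions: list[str]) -> str:
    """Fold several questions into one prompt instead of one call each."""
    numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(questions))
    return f"Context:\n{context}\n\nAnswer each question:\n{numbered}"


# A long stale note gets dropped; the recent finding survives the trim.
prompt = batch_questions(
    trim_context(["old meeting note " * 500, "revenue dipped 4% in March"]),
    ["What changed?", "Which segment drove it?"],
)
```

The design choice is simple: every token you don’t resend and every round trip you don’t make is inference cost you don’t pay—and latency your users don’t feel.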
So while MCP is exciting—and we’re watching closely—it’s not (yet) a match for what we’ve built.
Our custom system isn’t just faster. It’s cheaper. It’s smarter. And those savings and performance gains? They go straight to our customers.
TL;DR:
- MCP is cool.
- We built our own thing because we had to—and it’s really good.
- We’ll keep an eye on MCP. But for now? We’re sticking with what works.
If you’re serious about learning more from your data—and want tools engineered for speed, insight, and efficiency—let’s talk.
— The DataGPT Team
Darren Pegg is CTO at DataGPT, a place to ask questions.
Book a demo to explore how DataGPT can enhance your business operations.